• Title/Summary/Keyword: Korean stock market (한국주식시장)

Search Results: 672

A Study on the K-REITs of Characteristic Analysis by Investment Type (K-REITs(부동산투자회사)의 투자 유형별 특성 분석)

  • Kim, Sang-Jin;Lee, Myenog-Hun
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.11 / pp.66-79 / 2016
  • Approvals of K-REITs have been increasing recently, and a discussion has emerged over how these vehicles should raise funds for business activity, meet their expected rates of return, and best manage the funds they invest. Corporations also need to recognize that their capital structure should reflect the current economic environment and their decision-making processes. This research analyzed the characteristics of K-REITs by investment type and the factors that influence their debt ratios. The data cover the business condition, investment, and financing of K-REITs from 2002 to 2015, excluding the global financial crisis (GFC) period of 2007~2009. The results show high ownership ratios for largest shareholders that are corporations, pension funds, mutual funds, banks, securities firms, and insurers, and the combined stakes of the largest shareholder and major stockholders have recently been increasing. Institutional investors thus play a growing, leading role in K-REITs investment and in the development of K-REITs. An analysis of their co-investment behavior shows that they received higher interest rates than other financial institutions and that investment attraction and compensation moved in parallel. The multiple regression analysis of the debt ratio yielded the following results: the debt ratio is negatively (-) related to profitability, which is consistent with the pecking order theory and the trade-off theory; investment opportunities (growth potential) also show a negative (-) relation, while asset scale shows a positive (+) relation. The results can be interpreted as follows. K-REITs are concentrated in private placement REITs rather than public offering REITs, and when they raise outside capital they rely on loans secured by tangible assets (mostly real estate) rather than on financing through the stock market. Moreover, after the GFC, outside capital was actively utilized in the K-REITs business, and the debt ratio was determined by the ownership ratio and characteristics of the largest shareholder and by the investment products.
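
The abstract does not spell out the regression specification. A generic panel form consistent with the variables it names (profitability, investment opportunities/growth, asset scale, and the largest shareholder's stake) would be, as an illustrative sketch rather than the paper's own notation,

$$\mathrm{DebtRatio}_{it}=\beta_0+\beta_1\,\mathrm{Profitability}_{it}+\beta_2\,\mathrm{Growth}_{it}+\beta_3\,\ln(\mathrm{Assets}_{it})+\beta_4\,\mathrm{LargestShareholder}_{it}+\varepsilon_{it},$$

with the reported signs $\beta_1<0$, $\beta_2<0$, and $\beta_3>0$.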

Risk Aversion in Forward Foreign Currency Markets (선도환시장(先渡換市場)에서의 위험회피도(危險回避度)에 관한 연구(硏究))

  • Jang, Ik-Hwan
    • The Korean Journal of Financial Management / v.8 no.1 / pp.179-197 / 1991
  • The most widely used approach to pricing forward exchange contracts is arbitrage pricing, based on the fact that a forward contract is a derivative asset. Under this approach, the relationship between the forward and spot exchange rates is expressed by covered interest rate parity (CIRP), that is, the forward rate equals the spot rate plus the interest rate differential between the two countries. In particular, because both the spot rate and the interest rates are known to the decision maker at the present time, the forward rate is determined under certainty, independently of forecasts of the future or of investors' risk aversion. Many empirical studies find that CIRP holds reasonably well in practice once transaction costs are taken into account (Frenkel and Levich, 1975, 1977). An alternative is the speculative efficiency approach (hereafter SEA), which prices the forward rate by focusing solely on its role as a predictor of the future spot rate. The simplest form of this hypothesis, namely that the forward rate equals the expected future spot rate, is rejected in most empirical studies. The SEA therefore posits, and tests, the new hypothesis that the forward rate contains not only the expectation of the future spot rate but also a risk premium. Because this hypothesis does not start from a theoretical model, and in particular because neither the expectation nor the risk premium is observable, empirical analysis faces many difficulties. To avoid these difficulties, many studies use interest rate parity to draw inferences about, or to explain the behavior of, the risk premium contained in the forward rate. Setting up an empirical model on the basis of interest rate parity, however, involves several fundamental problems. First, as pointed out above, assuming interest rate parity means that the forward rate is determined independently of the forecasts and risk premium that are the main concern of the SEA; a model built on that assumption therefore cannot provide any implications about the efficiency of the forward exchange market or about equilibrium price determination. In other words, it amounts to re-testing empirically a market efficiency that has already been assumed. Besides this conceptual problem, there is also an estimation problem. Because most studies combine interest rate parity with an equilibrium pricing model for the spot asset, the resulting empirical model, unlike the underlying spot price model, takes a form that admits arbitrary manipulation, and parameter estimates based on it carry an unnecessary bias. This study re-examines the empirical work of Mark (1985), in which this bias problem appears clearly and concretely, and uses the data to confirm that the fundamental cause of the bias in the estimates of risk aversion is the inappropriate use of interest rate parity. The empirical results are presented in <Table 1> of the paper and can be summarized briefly as follows. (A) Empirical model: this study directly uses the Lucas (1978) model, a representative multi-period asset pricing model, $$1={\beta}\,E_t\!\left[\frac{U'(C_{t+1})\,P_t\,s_{t+1}}{U'(C_t)\,P_{t+1}\,s_t}\right]\qquad(2)$$ where $U'(C_t)$ and $P_t$ denote the marginal utility of consumption and the price of the consumption good at time t, $s_t$ and $f_t$ denote the spot and forward prices of foreign exchange, and $E_t$ and ${\beta}$ denote the conditional expectation operator and the time discount factor. Mark combines equation (2) with interest rate parity and uses the following model: $$0=E_t\!\left[\frac{U'(C_{t+1})\,P_t\,(s_{t+1}-f_t)}{U'(C_t)\,P_{t+1}\,s_t}\right]\qquad(4)$$ (B) Empirical results. Estimates of the risk aversion coefficient ${\gamma}$: in Mark's study the estimates of ${\gamma}$ vary very widely, from 0 to 50.38; in particular, when nondurable consumption and the forward premium are used, the estimate of ${\gamma}$ is an abnormally high 17.51. In this study, by contrast, the estimate is 1.3, similar to the results of other studies that use stock market data. Precision of the ${\gamma}$ estimates: in Mark's study the standard errors range from 15.65 to 42.43, whereas in this study they are between 0.3 and 0.5, a comparatively very precise result. Model fit: the goodness-of-fit test of model (4) differs greatly depending on the instrumental variables used; when only current consumption and the forward premium are used, without lagged variables, model (4) is rejected at the 2.8% or 2.3% significance level, whereas model (2) is not rejected at the 5% level. As discussed above, these empirical results clearly show that transforming the equilibrium asset pricing model by means of interest rate parity introduces an unnecessary bias.
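
The abstract reports estimates of a risk aversion coefficient ${\gamma}$ without stating the utility function. In Mark (1985) and related work this coefficient comes from a constant relative risk aversion specification; assuming, as a reader's note rather than something stated in the abstract, $U(C)=C^{1-\gamma}/(1-\gamma)$, the moment conditions (2) and (4) take the estimable forms

$$1={\beta}\,E_t\!\left[\left(\frac{C_{t+1}}{C_t}\right)^{-\gamma}\frac{P_t}{P_{t+1}}\,\frac{s_{t+1}}{s_t}\right],\qquad 0=E_t\!\left[\left(\frac{C_{t+1}}{C_t}\right)^{-\gamma}\frac{P_t}{P_{t+1}}\,\frac{s_{t+1}-f_t}{s_t}\right],$$

which are then presumably estimated, as in Mark (1985), by the generalized method of moments with instruments such as current consumption growth and the forward premium; the significance levels quoted for models (2) and (4) correspond to the associated over-identification (goodness-of-fit) tests.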

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.135-149 / 2020
  • Artificial intelligence is changing the world, and financial markets are no exception. Robo-advisors are being actively developed, compensating for the weaknesses of traditional asset allocation methods and taking over the parts that are difficult for those methods. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model that focuses on the volatility of assets. Because it avoids investment risk structurally, it is stable for managing large funds and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only handles billions of examples in limited memory environments but also learns very quickly compared to traditional boosting methods, so it is frequently used in many fields of data analysis and has many advantages. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation step. Because an optimized asset allocation model estimates investment proportions from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model; in doing so it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019, composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. Using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, we produced a total of 154 rebalancing back-testing results and analyzed portfolio performance in terms of cumulative rate of return, obtaining a large sample thanks to the long test period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and reduction of estimation errors: the total cumulative return is 45.748%, about 5 percentage points higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment thus show an improvement in portfolio performance achieved by reducing the estimation errors of the optimized asset allocation model. Many financial and asset allocation models are limited in practical investment because of the most fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a recent algorithm. Various studies have examined parametric estimation methods for reducing estimation errors in portfolio optimization; we likewise suggest a new method that reduces the estimation errors of an optimized asset allocation model using machine learning. This study is therefore meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
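
The abstract describes predicting asset risk with XGBoost and feeding the prediction into the covariance used for risk parity weighting. The following is a minimal sketch of that idea, not the authors' implementation; the feature choices, window lengths, and the pairing of predicted volatilities with historical correlations are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): forecast each asset's next-period
# volatility with XGBoost, rebuild the covariance matrix from the forecasts,
# and derive risk parity (equal-risk-contribution) weights from it.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor
from scipy.optimize import minimize

def make_features(returns: pd.Series, window: int = 20) -> pd.DataFrame:
    """Rolling-volatility features for a single asset (illustrative choices)."""
    vol = returns.rolling(window).std()
    return pd.DataFrame({
        "vol_lag1": vol.shift(1),
        "vol_lag5": vol.shift(5),
        "ret_lag1": returns.shift(1),
        "abs_ret_lag1": returns.abs().shift(1),
        "target": vol,                      # realized volatility to be predicted
    }).dropna()

def predict_vols(returns: pd.DataFrame) -> pd.Series:
    """Fit one XGBoost regressor per asset and forecast its next-period volatility."""
    preds = {}
    for col in returns.columns:
        feats = make_features(returns[col])
        X, y = feats.drop(columns="target"), feats["target"]
        model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
        model.fit(X.iloc[:-1], y.iloc[:-1])                 # train on history
        preds[col] = float(model.predict(X.iloc[[-1]])[0])  # forecast the latest row
    return pd.Series(preds)

def risk_parity_weights(cov: np.ndarray) -> np.ndarray:
    """Equal-risk-contribution weights found by numerical optimization."""
    n = cov.shape[0]
    def objective(w):
        rc = w * (cov @ w)                  # each asset's risk contribution
        return np.sum((rc - rc.mean()) ** 2)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
    res = minimize(objective, np.full(n, 1 / n), bounds=[(0, 1)] * n, constraints=cons)
    return res.x

def xgb_risk_parity(returns: pd.DataFrame) -> pd.Series:
    """Predicted volatilities + historical correlations -> covariance -> weights."""
    sigma_hat = predict_vols(returns).values
    cov_hat = np.outer(sigma_hat, sigma_hat) * returns.corr().values
    return pd.Series(risk_parity_weights(cov_hat), index=returns.columns)
```

Rolling the whole procedure forward over 1,000-day training and 20-day investment windows, as in the paper, would then produce the sequence of rebalanced portfolios that the back-test evaluates.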

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic value analysis or technical indicator analysis. Pattern analysis, however, is difficult and has been computerized far less than users need. In recent years there have been many studies of stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such models are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but this can be fragile in practice because whether the patterns found are suitable for trading is a separate question. Those studies find a point that matches a meaningful pattern and then measure performance after n days, assuming a purchase at that point; because this approach computes hypothetical revenues, it can diverge considerably from reality. Whereas the existing approach tries to discover patterns with price-prediction power, this study proposes to define the patterns first and to trade when a pattern with a high probability of success appears. The M&W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Although some of these patterns have been reported to have price predictability, there have been no reports of their performance in the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of improving pattern recognition accuracy. In this study, the 16 upward-reversal patterns and 16 downward-reversal patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate in each group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs; the setting is realistic because performance is measured assuming that both the buy and the sell were executed. We tested three ways of calculating turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then computes the vertices. In the second, the high-low line zig-zag method, a high that meets the n-day high line is taken as a peak and a low that meets the n-day low line is taken as a valley. In the third, the swing wave method, a central high that is higher than the n highs on its left and right is taken as a peak, and a central low that is lower than the n lows on its left and right is taken as a valley. The swing wave method was superior to the other methods in our tests, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Because the number of possible cases in this simulation was far too large to find high-success patterns exhaustively, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the walk-forward analysis (WFA) method, which tests the training section and the application section separately, so we were able to respond appropriately to market changes. We optimize at the portfolio level because optimizing the variables for each individual stock risks over-optimization; we therefore selected 20 constituent stocks to increase the effect of diversification while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful and the high-volatility portfolio was the second best, which shows that some price volatility is needed for patterns to take shape, but that the highest volatility is not necessarily the best.
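
A minimal sketch of the swing wave turning-point rule described in the abstract (a bar is a peak if its high exceeds the highs of the n bars on each side, and a valley if its low is below the lows of the n bars on each side); the column names and the choice of n are assumptions, and this is not the authors' trading system.

```python
import pandas as pd

def swing_turning_points(ohlc: pd.DataFrame, n: int = 5) -> pd.DataFrame:
    """Flag peaks and valleys: a bar whose high (low) is the strict extreme of the
    surrounding 2n+1 bars is treated as a peak (valley) turning point."""
    highs, lows = ohlc["high"], ohlc["low"]
    points = []
    for i in range(n, len(ohlc) - n):
        win_h = highs.iloc[i - n:i + n + 1]
        win_l = lows.iloc[i - n:i + n + 1]
        if highs.iloc[i] == win_h.max() and (win_h == win_h.max()).sum() == 1:
            points.append((ohlc.index[i], "peak", highs.iloc[i]))
        elif lows.iloc[i] == win_l.min() and (win_l == win_l.min()).sum() == 1:
            points.append((ohlc.index[i], "valley", lows.iloc[i]))
    return pd.DataFrame(points, columns=["date", "type", "price"])

# Five consecutive turning points (e.g. valley-peak-valley-peak-valley) can then be
# matched against the 32 M&W patterns, reclassified into the ten groups used in the
# paper, and a trade taken only when a completed high-success pattern appears.
```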

Underpricing of Initial Offerings and the Efficiency of Investments (신주(新株)의 저가상장현상(低價上場現象)과 투자(投資)의 효율성(效率成)에 대한 연구(硏究))

  • Nam, Il-chong
    • KDI Journal of Economic Policy / v.12 no.2 / pp.95-120 / 1990
  • The underpricing of new shares of a firm that are offered to the public for the first time (initial offerings) is well known and has long puzzled financial economists, since it seems at odds with the optimal behavior of the owners of issuing firms. Past attempts by financial economists to explain this phenomenon have not been successful, in the sense that the explanations given are either inconsistent with equilibrium theory or implausible; approaches by such authors as Welch or Allen and Faulhaber are no exception. In this paper, we develop a signalling model of capital investment to explain the underpricing phenomenon and also analyze the efficiency of investment. The model focuses on the information asymmetry between the owners of issuing firms and general investors. We consider a firm that has been owned and operated by a single owner and that has a profitable project but no capital to develop it. The profit from the project depends on the capital invested in the project as well as on a profitability parameter. The model also assumes that the financial market is represented by a single investor who maximizes expected wealth. The owner has information about the value of the firm that is superior to investors', in the sense that the owner knows the true value of the parameter while investors have only a probability distribution over the parameter. The owner offers the representative investor a fraction of the ownership of the firm in return for a certain amount of investment in the firm. This offer condition is equivalent to the usual offer condition consisting of the number of shares to sell and the unit price of a share. Thus, the model is a signalling game. Using Kreps' criterion as the solution concept, we obtained an essentially unique separating equilibrium offer condition. Analysis of this separating equilibrium shows that the owner of a firm with high profitability chooses an offer condition that raises an amount of capital short of the amount that maximizes the potential profit from the project. It also reveals that the fraction of the ownership of the firm that the representative investor receives from the owner of the highly profitable firm in return for its investment has a value that exceeds the investment. In other words, the initial offering in the model is underpriced when the profitability of the firm is high. The source of underpricing and underinvestment is the signalling activity of the owner of the highly profitable firm, who attempts to convince investors that his firm has a highly profitable project by choosing an offer condition that cannot be imitated by the owner of a firm with low profitability. Thus, we obtain two main results. First, underpricing is the result of signalling by the owner of a firm with high profitability when there is information asymmetry between the owner of the issuing firm and investors. Second, such information asymmetry also leads to underinvestment in a highly profitable project. These results clearly show that underpricing entails underinvestment and that information asymmetry carries a social cost as well as a private cost. The above results are quite general in the sense that they rest on a neoclassical profit function and the full rationality of economic agents. We believe that the results of this paper can be used as a basis for further research on the capital investment process.
For instance, one can view the results of this paper as a subgame equilibrium in a larger game in which a firm chooses among diverse ways to raise capital. In addition, the method used in this paper can be used in analyzing a wide range of problems arising from information asymmetry that the Korean financial market faces.
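
The abstract describes the separating equilibrium only verbally. A generic restatement of such an offer game, in notation that is an illustrative assumption rather than the paper's own, is: an owner of type $\theta\in\{\theta_L,\theta_H\}$ offers the investor a fraction $\alpha$ of the firm in exchange for capital $I$, where the firm is worth $V(I,\theta)$ after investment. The investor accepts only if the stake is worth at least the capital supplied, and separation requires that the low type not want to mimic the high type's offer:

$$\alpha_H\,V(I_H,\theta_H)\;\ge\;I_H,\qquad (1-\alpha_H)\,V(I_H,\theta_L)\;\le\;(1-\alpha_L)\,V(I_L,\theta_L).$$

In the equilibrium the abstract describes, the high type's offer satisfies the first condition strictly ($\alpha_H V(I_H,\theta_H)>I_H$, i.e. underpricing) and sets $I_H$ below the profit-maximizing level (underinvestment), precisely so that the second, no-mimicry condition holds.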

The Effect of Data Size on the k-NN Predictability: Application to Samsung Electronics Stock Market Prediction (데이터 크기에 따른 k-NN의 예측력 연구: 삼성전자주가를 사례로)

  • Chun, Se-Hak
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.239-251 / 2019
  • Statistical methods such as moving averages, Kalman filtering, exponential smoothing, regression analysis, and ARIMA (autoregressive integrated moving average) have been used for stock market prediction, but these statistical methods have not produced superior performance. In recent years, machine learning techniques have been widely used in stock market prediction, including artificial neural networks, SVM, and genetic algorithms. In particular, a case-based reasoning method known as k-nearest neighbor is also widely used for stock price prediction. Case-based reasoning retrieves several similar past cases when a new problem occurs and combines the class labels of the similar cases to produce a classification for the new problem. However, case-based reasoning has some problems. First, it tends to search for a fixed number of neighbors in the observation space and always selects the same number of neighbors rather than the neighbors most similar to the target case, so it may take more cases into account even when fewer applicable cases exist for the subject. Second, it may select neighbors that are far from the target case. Thus, case-based reasoning does not guarantee an optimal pseudo-neighborhood for varying target cases, and predictability can be degraded by deviation from the desired similar neighbors. This paper examines how the size of the learning data affects stock price predictability with k-nearest neighbor and compares the predictability of k-nearest neighbor with that of the random walk model according to the size of the learning data and the number of neighbors. Samsung Electronics stock prices were predicted using two learning datasets. For the prediction of the next day's closing price, we used four variables: the opening value, daily high, daily low, and daily close. In the first experiment, data from January 1, 2000 to December 31, 2017 were used for learning; in the second experiment, data from January 1, 2015 to December 31, 2017 were used. The test data cover January 1, 2018 to August 31, 2018 for both experiments. We compared the performance of k-NN with the random walk model using the two learning datasets. The mean absolute percentage error (MAPE) was 1.3497 for the random walk model and 1.3570 for k-NN in the first experiment, when the learning data were small. However, the MAPE was 1.3497 for the random walk model and 1.2928 for k-NN in the second experiment, when the learning data were large. These results show that prediction power is higher when more learning data are used than when less learning data are used. This paper also shows that k-NN generally produces better predictive power than the random walk model for larger learning datasets and does not for relatively small learning datasets. Future studies need to consider macroeconomic variables related to stock price forecasting in addition to the opening, low, high, and closing prices. To produce better results, it is also recommended that k-nearest neighbor find its neighbors with a second-step filtering method that considers fundamental economic variables, as well as using a sufficient amount of learning data.
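
A minimal sketch of the k-NN versus random walk comparison described in the abstract, assuming an OHLC DataFrame; the column names, the value of k, and the helper function are illustrative, not the paper's code.

```python
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsRegressor

def knn_vs_random_walk(prices: pd.DataFrame, train_end: str, k: int = 5) -> dict:
    """prices: DataFrame indexed by date with columns open, high, low, close."""
    X = prices[["open", "high", "low", "close"]].iloc[:-1].values
    y = prices["close"].shift(-1).dropna().values          # next day's close
    dates = prices.index[:-1]

    train = dates <= pd.Timestamp(train_end)
    test = ~train
    knn = KNeighborsRegressor(n_neighbors=k).fit(X[train], y[train])

    knn_pred = knn.predict(X[test])
    rw_pred = prices["close"].iloc[:-1].values[test]       # random walk: tomorrow = today

    mape = lambda actual, pred: float(np.mean(np.abs((actual - pred) / actual)) * 100)
    return {"kNN MAPE": mape(y[test], knn_pred), "Random walk MAPE": mape(y[test], rw_pred)}

# Usage idea: load Samsung Electronics OHLC data spanning 2000-2018 into `prices`
# and call knn_vs_random_walk(prices, train_end="2017-12-31"); restricting the
# training rows to 2015-2017 corresponds to the other learning-data setting.
```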

The prediction of the stock price movement after IPO using machine learning and text analysis based on TF-IDF (증권신고서의 TF-IDF 텍스트 분석과 기계학습을 이용한 공모주의 상장 이후 주가 등락 예측)

  • Yang, Suyeon;Lee, Chaerok;Won, Jonggwan;Hong, Taeho
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.237-262 / 2022
  • There has been growing interest in IPOs (Initial Public Offerings) because of the profitable returns that IPO stocks can offer investors. However, IPOs can also be speculative investments involving substantial risk, because the shares tend to be volatile and the supply of IPO shares is often highly limited. It is therefore crucially important that IPO investors be well informed about the issuing firms and the market before deciding whether to invest. Unlike institutional investors, individual investors are at a disadvantage, since they have few opportunities to obtain information on IPOs. In this regard, the purpose of this study is to provide individual investors with information they may consider when making an IPO investment decision. This study presents a model that uses machine learning and text analysis to predict whether an IPO stock price will move up or down after the first 5 trading days. Our sample includes 691 Korean IPOs from June 2009 to December 2020. The input variables for the prediction are three tone variables created from IPO prospectuses and quantitative variables that are firm-specific, issue-specific, or market-specific. The three prospectus tone variables indicate the percentages of positive, neutral, and negative sentences in a prospectus, respectively. For the tone analysis, we considered only the sentences in the Risk Factors section of each prospectus. All sentences were classified as 'positive', 'neutral', or 'negative' via text analysis using TF-IDF (Term Frequency - Inverse Document Frequency). The tone of each sentence was measured with machine learning rather than a lexicon-based approach because of the lack of sentiment dictionaries suitable for Korean text analysis in the context of finance. For this reason, the training set was created by randomly selecting 10% of the sentences from each prospectus and labeling them manually. Then, based on the training set, a Support Vector Machine model was used to predict the tone of the sentences in the test set, and the machine learning model calculated the percentages of positive, neutral, and negative sentences in each prospectus. To predict the price movement of an IPO stock, four machine learning techniques were applied: Logistic Regression, Random Forest, Support Vector Machine, and Artificial Neural Network. According to the results, models that use the quantitative variables based on technical analysis together with the prospectus tone variables show higher accuracy than models that use only quantitative variables. More specifically, prediction accuracy improved by 1.45 percentage points in the Random Forest model, 4.34 percentage points in the Artificial Neural Network model, and 5.07 percentage points in the Support Vector Machine model. Among these techniques, the Artificial Neural Network model using both the quantitative variables and the prospectus tone variables achieved the highest prediction accuracy, 61.59%. The results indicate that the tone of a prospectus is a significant factor in predicting the price movement of an IPO stock. In addition, the McNemar test was used to verify whether the difference between the models was statistically significant: comparing the model using only quantitative variables with the model using both the quantitative variables and the prospectus tone variables confirmed that predictive performance improved significantly at the 1% significance level.
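
A minimal sketch of the two-stage pipeline described above: TF-IDF features with a linear SVM label the prospectus sentences, and the resulting tone ratios become inputs to the price-movement classifiers. The function names and the scikit-learn components are assumptions for illustration, not the authors' code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def fit_tone_classifier(labeled_sentences, labels):
    """labeled_sentences: the manually labeled ~10% sample of prospectus sentences;
    labels take values 'positive', 'neutral', 'negative'."""
    clf = make_pipeline(TfidfVectorizer(), LinearSVC())
    clf.fit(labeled_sentences, labels)
    return clf

def prospectus_tone_ratios(clf, risk_factor_sentences):
    """Share of positive / neutral / negative sentences in one prospectus's
    Risk Factors section, as predicted by the fitted classifier."""
    preds = clf.predict(risk_factor_sentences)
    return {tone: float((preds == tone).mean()) for tone in ("positive", "neutral", "negative")}

# The three ratios are then concatenated with the firm-, issue- and market-specific
# quantitative variables and fed to Logistic Regression, Random Forest, SVM, or an
# Artificial Neural Network to classify whether the stock is up or down after the
# first five trading days.
```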

Exploring Domestic ESG Research Trends: Focusing on Domestic Research on ESG from 2012 to 2021 (국내 ESG 연구동향 탐색: 2012~2021년 진행된 국내 학술연구 중심으로)

  • Park, Jae Hyun;Han, Hyang Won;Kim, Na Ra
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.17 no.1 / pp.191-211 / 2022
  • As the value of highly sustainable companies increases, ESG (Environmental, Social, and Governance) has emerged as the biggest topic of discussion for companies around the world. Since domestic research on ESG is also growing in line with this global trend, it is necessary to examine ESG research trends. Accordingly, ESG academic papers published over the past 10 years were collected by year, and frequency analysis of the key themes and paper titles was conducted using text mining techniques. This paper analyzed the number of publications by year and the cumulative number of studies through bibliometric analysis. The findings suggest that the number of ESG papers is increasing each year and that academic interest in ESG-related issues continues to grow. Next, the frequency analysis of the keywords and titles of the research papers extracted the words "ESG", "company", "society", "responsibility", "management", "investment", and "sustainability", identifying the research fields and keywords that have been relevant to ESG over the past 10 years. Comparing the major ESG issues presented in recent overseas studies with the common factors among the key ESG keywords presented in this study confirms that recent studies focus on the environment more than earlier studies did. Third, the data used by domestic ESG studies were found to consist mainly of the KEJI index, the KRX index, and the KCGS ESG evaluation index. After identifying the main research subjects of the ESG papers, we found that only 8 out of 152 domestic ESG studies focused on SMEs. This study confirms the ESG research trend and the growth of the literature, and it categorizes the research topics and keywords to provide basic data from which future researchers can select more diverse research topics. Based on the arguments of previous ESG studies on SMEs and the results of this study, there is a lack of research on guidelines for ESG practice and their application to SMEs, and more ESG research on SMEs will need to be conducted in the future.
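
A minimal sketch of the keyword-frequency step described above, assuming a simple list-of-dicts input structure; the field names are illustrative, not the authors' code.

```python
from collections import Counter

def keyword_frequencies(papers):
    """papers: list of dicts such as {'year': 2021, 'keywords': ['ESG', 'company', ...]}."""
    overall, by_year = Counter(), {}
    for paper in papers:
        by_year.setdefault(paper["year"], Counter()).update(paper["keywords"])
        overall.update(paper["keywords"])
    return overall.most_common(10), by_year

# The overall ranking is the kind of output from which the study's most frequent
# terms ("ESG", "company", "society", "responsibility", "management", "investment",
# "sustainability") were identified, and the per-year counters support the
# year-by-year trend analysis.
```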

Work & Life Balance and Conflict among Employees : Work-life Balance Effect that Reflects Work Characteristics (일·생활 균형과 구성원간 갈등관계 : 직장 내 업무 특성을 반영한 WLB 효과 중심으로)

  • Lee, Yang-pyo;Choi, Chang-bum
    • Journal of Venture Innovation / v.7 no.1 / pp.183-200 / 2024
  • Recently, with the MZ generation's entry into the workforce and the increased social participation of women, conflicts are occurring between groups that value work-life balance (WLB) and existing groups that emphasize collaboration, owing to differences in work orientation. Public institutions and companies that operate work-life balance support systems show differences in job commitment depending on the nature of the work and on how actively the support system is used. It is therefore necessary to verify the effectiveness of the WLB support systems that companies actually operate and to present universally valid standards. The purpose of this study is, first, to verify the effectiveness of work-life balance support systems and to find a practical consensus amid changes in policies and in perceptions of the working environment, and second, to analyze the joint influence of the level of work-life balance and of work characteristics on job commitment in order to establish standards for WLB operation that reflect work characteristics. A 2X2 matrix model was used to analyze the impact of work-life balance and work characteristics on job commitment, and four hypotheses were established: first, analysis of the job commitment level of conflict-type group members; second, of leading-type group members; third, of agreeable-type group members; and fourth, of cooperative-type group members. An online survey was conducted among employees working in public institutions and large corporations for a total of 9 days, from October 23 to 31, 2023; 163 people responded, and the analysis was based on a valid sample of 152 after excluding 11 insincere or incomplete responses. In the hypothesis tests, first, the conflict-type group showed the lowest level of job commitment, at 1.43; second, the leading-type group showed the highest level, at 4.54; third, the agreeable-type group showed a somewhat lower level, at 2.58; and fourth, the cooperative-type group showed a somewhat higher level, at 3.80. The academic implication of the study is that it classifies employees into types according to their level of work-life balance and the nature of their work. The practical implication is that it analyzes the effectiveness of the WLB support systems operated by public institutions and large corporations by grouping them.

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.123-138 / 2017
  • Since the stock market is driven by traders' expectations, studies have been conducted to predict stock price movements by analyzing various sources of text data. To predict stock price movements, research has examined not only the relationship between text data and stock price fluctuations but also trading strategies based on news articles and social media responses. Studies that predict stock price movements have applied classification algorithms to a term-document matrix constructed in the same way as in other text mining approaches. Because a document contains many words, it is better to select the words that contribute more when building the term-document matrix: based on word frequency, words with too little frequency or importance are removed, and words are also selected according to how much they contribute to correctly classifying a document. The basic idea of constructing a term-document matrix is to collect all the documents to be analyzed and to select and use the words that influence the classification. In this study, we instead analyze the documents for each individual stock and select the words that are irrelevant to all categories as neutral words. We extract the words around each selected neutral word and use them to generate the term-document matrix. The neutral-word approach starts from the idea that a stock's movement is little related to the presence of the neutral words themselves, while the words surrounding a neutral word are more likely to affect stock price movements; the resulting term-document matrix is then fed to an algorithm that classifies stock price fluctuations. We first removed stop words and selected neutral words for each stock, and we excluded from the selected words those that also appear in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization, used three months of news data as training data, and applied the remaining one month of news articles to the model to predict the stock price movement of the next day. We used SVM, boosting, and random forest to build the models and predict the movements of stock prices. The stock market was open for a total of 80 days during the four months (2016/02/01 ~ 2016/05/31); the first 60 days were used as the training set and the remaining 20 days as the test set. The proposed word-based algorithm showed better classification performance than a word selection method based on sparsity. In summary, this study predicted stock price movements by collecting and analyzing news articles on the top 10 stocks by market capitalization, estimated the fluctuations with a term-document-matrix-based classification model, and compared the performance of the existing sparsity-based word extraction method with the suggested method of removing words from the term-document matrix. The suggested method differs from the usual word extraction method in that it determines the words to extract using not only the news articles for the corresponding stock but also the news for other stocks: it removes not only the words that appear for both rises and falls but also the words that appear commonly in the news for other stocks. When prediction accuracy was compared, the suggested method showed higher accuracy. The limitations of this study are that stock price prediction was set up as a classification of rise and fall and that the experiment covered only the top ten stocks, which do not represent the entire stock market. In addition, it is difficult to show investment performance because stock price fluctuation and profit rate may differ. Therefore, research using more stocks and predicting returns through trading simulation is needed.
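
A minimal sketch of the neutral-word idea described above, under the assumption that "category-neutral" can be operationalized as words whose relative frequency is nearly the same in rising and falling documents; the thresholds, window size, and helper names are illustrative, not the authors' code.

```python
from collections import Counter

def neutral_words(tokenized_docs, labels, top_n=50):
    """Pick words whose relative frequency is nearly the same in 'up' and 'down'
    documents, i.e. words that carry little information about the category."""
    up = Counter(w for doc, y in zip(tokenized_docs, labels) if y == "up" for w in doc)
    down = Counter(w for doc, y in zip(tokenized_docs, labels) if y == "down" for w in doc)
    total_up, total_down = sum(up.values()), sum(down.values())
    shared = set(up) & set(down)
    imbalance = {w: abs(up[w] / total_up - down[w] / total_down) for w in shared}
    return sorted(imbalance, key=imbalance.get)[:top_n]

def context_documents(tokenized_docs, neutrals, window=3):
    """Replace each document by the words found within +/- `window` positions of
    any neutral word; these context words form the term-document matrix."""
    neutrals = set(neutrals)
    reduced = []
    for doc in tokenized_docs:
        kept = []
        for i, w in enumerate(doc):
            if w in neutrals:
                kept.extend(doc[max(0, i - window):i] + doc[i + 1:i + 1 + window])
        reduced.append(" ".join(kept))
    return reduced

# The reduced documents can then be vectorized (e.g. with CountVectorizer) into a
# term-document matrix and passed to SVM, boosting, or random forest classifiers
# to predict the next day's up/down movement, as in the study.
```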