• Title/Summary/Keyword: Variance estimation


A comparison of imputation methods using nonlinear models (비선형 모델을 이용한 결측 대체 방법 비교)

  • Kim, Hyein;Song, Juwon
    • The Korean Journal of Applied Statistics
    • /
    • v.32 no.4
    • /
    • pp.543-559
    • /
    • 2019
  • Data often include missing values for various reasons. If the missing data mechanism is not MCAR, analysis based on fully observed cases may cause estimation bias and decrease the precision of estimates, since partially observed cases are excluded. Missing values cause especially serious problems when data include many variables. Many imputation techniques have been suggested to overcome this difficulty. However, imputation methods using parametric models may not fit real data well when model assumptions are not satisfied. In this study, we review imputation methods using nonlinear models such as kernel, resampling, and spline methods, which are robust to model assumptions. In addition, we suggest utilizing imputation classes to improve imputation accuracy, and adding random errors to correctly estimate the variance of the estimates in nonlinear imputation models. The performance of imputation methods using nonlinear models is compared under various simulated data settings. Simulation results indicate that the performance of imputation methods differs as data settings change. However, imputation based on kernel regression or the penalized spline performs better in most situations. Utilizing imputation classes or adding random errors improves the performance of imputation methods using nonlinear models.
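A minimal sketch of the kernel-regression imputation with added random errors described above, assuming a Gaussian kernel, a single covariate, and residual resampling as the noise mechanism (details the abstract does not fix):

```python
import numpy as np

def kernel_impute(x, y, bandwidth=0.05, add_noise=True, rng=None):
    """Impute missing y values (np.nan) by Nadaraya-Watson kernel regression on x.

    Optionally adds a residual drawn at random from the observed residuals,
    so that variances estimated from the completed data are not understated.
    """
    rng = np.random.default_rng(rng)
    y = np.asarray(y, dtype=float).copy()
    obs = ~np.isnan(y)
    xo, yo = np.asarray(x)[obs], y[obs]

    def nw(x0):
        # Gaussian kernel weights centered at x0
        w = np.exp(-0.5 * ((x0 - xo) / bandwidth) ** 2)
        return np.sum(w * yo) / np.sum(w)

    fitted = np.array([nw(v) for v in xo])
    residuals = yo - fitted
    for i in np.where(~obs)[0]:
        y[i] = nw(x[i])
        if add_noise:  # resample an observed residual as the random error
            y[i] += rng.choice(residuals)
    return y
```

The bandwidth and noise mechanism are illustrative; the paper compares several nonlinear smoothers (kernel, resampling, spline) under richer settings.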

Estimation of Volatility among the Stock Markets in ASIA using MRS-GARCH model (MRS-GARCH를 이용한 아시아 주식시장 간의 변동성 추정)

  • Lee, Kyung-Hee;Kim, Kyung-Soo
    • Management & Information Systems Review
    • /
    • v.38 no.1
    • /
    • pp.181-199
    • /
    • 2019
  • The purpose of this study is to examine whether the volatility of the 1997~1998 Asian crisis still affects the monthly stock returns of Korea, Japan, Singapore, Hong Kong, and China from 1980 to 2018, and whether volatility has already fallen to pre-crisis levels. To capture possible structural changes in the unconditional variance due to the Asian financial crisis, we use the MRS-GARCH model, a regime-switching model. The main results of this study were as follows: First, every market except Japan remained weak in the high-volatility regime brought on by the 1997~1998 Asian financial crisis through March 2018, and the Asian stock markets had not yet fully calmed down except during the 2007~2008 global financial crisis. Second, conditional volatility decreased significantly and persistently after the Asian financial crisis. Thus, it can be judged that the Asian stock markets did not fully recover (stabilize) from the Asian crisis, which involved capital liberalization, high inflation, a worsening current account deficit, low overseas interest rates, and rapid credit growth in 1997 and 1998, but that they largely settled down outside the 2007~2008 global financial crisis. Considering the similarity between the Asian stock markets and the similar correlation of the regime switching, it may be worthwhile to analyze them with the MRS-GARCH model.
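The building block of the MRS-GARCH model is the GARCH(1,1) conditional-variance recursion; the regime-switching model runs one such recursion per volatility regime and mixes them with regime probabilities. A single-regime sketch (parameter values and initialization are illustrative, not from the paper):

```python
import numpy as np

def garch_variance(returns, omega, alpha, beta):
    """GARCH(1,1) conditional variance recursion:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1],
    initialized at the unconditional variance omega / (1 - alpha - beta).
    """
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```

In the MRS-GARCH case, (omega, alpha, beta) differ between the high- and low-volatility regimes and a Markov chain governs the switch.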

Estimation of drift force by real ship using multiple regression analysis (다중회귀분석에 의한 실선의 표류력 추정)

  • AHN, Jang-Young;KIM, Kwang-il;KIM, Min-Son;LEE, Chang-Heon
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.57 no.3
    • /
    • pp.236-245
    • /
    • 2021
  • In this study, a drifting test using an experimental vessel (2,966 tons) in the northern waters of Jeju was carried out for the first time in order to obtain fundamental data on drift. During the test, the average leeway speed and direction by GPS position were 0.362 m/s and 155.54°, respectively, and the leeway rate for wind speed was 8.80%. Analysis of linear regression models for the leeway speed and direction of the experimental vessel indicated that the explanatory variables (wind and current) influenced the response variables (leeway speed and direction) more through their speeds than through their directions. The multiple regression analysis further predicted the direction term to be negative, and it showed that the predicted leeway speed and direction of the experimental vessel were influenced more by current than by wind, while the leeway speed term obtained through the variance and covariance was positive. For the leeway direction of the experimental vessel, the same result as for the leeway speed appeared, except for the possible presence of multi-collinearity; accordingly, the explanatory variables were less descriptive in the predicted values of the leeway direction. As a result, the predictions of leeway speed and direction can be expressed by the following equations: Ŷ1= 0.4031-0.0032X1+0.0631X2-0.0010X3+0.4110X4, Ŷ2= 0.4031-0.6662X1+27.1955X2-0.6787X3-420.4833X4. However, many drift tests using actual vessels and various drifting objects will provide more reasonable estimations, which can aid search and rescue operations as well as the recovery of fishing gear.
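The two reported regression equations can be encoded directly; note that the definitions and units of X1..X4 (the wind and current speed/direction terms) follow the original study and are not restated in the abstract:

```python
def predict_leeway(x1, x2, x3, x4):
    """Predicted leeway speed (Y1) and direction (Y2) from the reported
    multiple regression equations. X1..X4 are the study's explanatory
    variables (wind/current terms); their exact definitions and units
    are those of the original paper.
    """
    y1 = 0.4031 - 0.0032 * x1 + 0.0631 * x2 - 0.0010 * x3 + 0.4110 * x4
    y2 = 0.4031 - 0.6662 * x1 + 27.1955 * x2 - 0.6787 * x3 - 420.4833 * x4
    return y1, y2
```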

Study on Estimation of Unmanned Enforcement Equipment Installation Criteria and Proper Installation Number (무인교통단속장비 설치 판단 기준 및 설치대수 산정 연구)

  • So, Hyung-Jun;Kim, Yong-Man;Kim, Nam-Seon;Hwang, Jae-Seong;Lee, Choul-Ki
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.19 no.6
    • /
    • pp.49-60
    • /
    • 2020
  • The number of traffic enforcement equipment units installed to prevent traffic accidents increases every year due to continuous installation by the National Police Agency and local governments. However, equipment is installed based on qualitative judgment rather than engineering analysis. The purpose of this study was to guide future installations by presenting installation criteria that consider accident severity for each road type and by calculating the appropriate number of installations. An ARI indicator representing the severity of traffic accidents was developed, road types were classified through analysis of variance and cluster analysis, and accident information by road type was analyzed to derive the ARI of clusters with high traffic accident severity. The ARI values required to determine equipment installation for each road type were presented, and 5,244 additional installation points were identified.
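The analysis-of-variance step above tests whether accident severity differs between road-type groups; a minimal one-way ANOVA F statistic (the ARI indicator itself and the clustering are not specified in the abstract, so plain numeric groups stand in here):

```python
import numpy as np

def anova_f(groups):
    """One-way ANOVA F statistic: mean square between groups over mean
    square within groups. Large F suggests group means (e.g. accident
    severity per road type) genuinely differ.
    """
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)  # between
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)       # within
    return (ssb / (k - 1)) / (ssw / (n - k))
```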

TLS (Total Least-Squares) within Gauss-Helmert Model: 3D Planar Fitting and Helmert Transformation of Geodetic Reference Frames (가우스-헬머트 모델 전최소제곱: 평면방정식과 측지좌표계 변환)

  • Bae, Tae-Suk;Hong, Chang-Ki;Lim, Soo-Hyeon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.40 no.4
    • /
    • pp.315-324
    • /
    • 2022
  • The conventional LESS (LEast-Squares Solution) is calculated under the assumption that there are no errors in the independent variables. However, the coordinates of a point, whether from traditional ground surveying such as slant distances and horizontal and/or vertical angles, or from GNSS (Global Navigation Satellite System) positioning, cannot be determined independently (and the components are correlated with each other). Therefore, the TLS (Total Least-Squares) adjustment should be applied to all applications involving coordinates. Many approaches have been suggested to solve this problem, resulting in equivalent solutions apart from some restrictions. In this study, we calculated the normal vector of the 3D plane determined by the trace of the VLBI targets based on TLS within the GHM (Gauss-Helmert Model). Another numerical test was conducted for the estimation of the Helmert transformation parameters. Since the errors in the horizontal components are very small compared to the radius of the circle, the final estimates are almost identical. However, the estimated variance components are significantly reduced and show different characteristics depending on the target location. The Helmert transformation parameters are estimated more precisely than in the conventional LESS case. Furthermore, the residuals can be predicted on both reference frames with much smaller magnitude (in an absolute sense).
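For the simplest TLS special case (equal, uncorrelated errors in all coordinates), the plane normal is obtained directly from an SVD; the Gauss-Helmert model in the paper generalizes this to arbitrary covariance of the observations:

```python
import numpy as np

def tls_plane_normal(points):
    """Total least-squares fit of a plane to 3D points: the unit normal is
    the right singular vector associated with the smallest singular value
    of the centered coordinate matrix (orthogonal-distance regression).
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)      # remove the centroid
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]                        # direction of least variance
    return normal / np.linalg.norm(normal)
```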

Intelligent Optimal Route Planning Based on Context Awareness (상황인식 기반 지능형 최적 경로계획)

  • Lee, Hyun-Jung;Chang, Yong-Sik
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.117-137
    • /
    • 2009
  • Recently, intelligent traffic information systems have enabled people to forecast traffic conditions before hitting the road. These convenient systems operate on the basis of data reflecting current road and traffic conditions as well as distance-based data between locations. Thanks to the rapid development of ubiquitous computing, tremendous context data have become readily available, making vehicle route planning easier than ever. Previous research on the optimization of vehicle route planning merely focused on finding the optimal distance between locations. Contexts reflecting road and traffic conditions were not seriously treated as a way to resolve optimal routing problems based on distance-based route planning, because this kind of information does not have a significant impact on traffic routing until a complex traffic situation arises. Further, it was not easy to take the traffic contexts fully into account for resolving optimal routing problems, because predicting dynamic traffic situations was regarded as a daunting task. However, with the rapid increase in traffic complexity, the importance of developing contexts reflecting data related to moving costs has emerged. Hence, this research proposes a framework designed to resolve an optimal route planning problem by taking full account of additional moving costs such as road traffic cost and weather cost, among others. Recent technological developments, particularly in the ubiquitous computing environment, have facilitated the collection of such data. This framework is based on the contexts of time, traffic, and environment, and addresses the following issues. First, we clarify and classify the diverse contexts that affect a vehicle's velocity and estimate the optimal moving cost based on dynamic programming that accounts for the context cost according to the variance of contexts.
Second, the velocity reduction rate is applied to find the optimal route (shortest path) using the context data on the current traffic condition. The velocity reduction rate refers to the attainable degree of velocity given the road and traffic contexts of moving vehicles, derived from statistical or experimental data. Knowledge generated in this paper can be referenced by organizations that deal with road and traffic data. Third, in experimentation, we evaluate the effectiveness of the proposed context-based optimal route (shortest path) between locations by comparing it to the previously used distance-based shortest path. A vehicle's optimal route might change due to varying velocity caused by unexpected but potential dynamic situations depending on the road condition. This study includes such context variables as 'road congestion', 'work', 'accident', and 'weather', which can alter the traffic condition. These contexts can affect a moving vehicle's velocity on the road. Since these context variables, except for 'weather', are related to road conditions, relevant data were provided by the Korea Expressway Corporation. The 'weather'-related data were obtained from the Korea Meteorological Administration. The aware contexts are classified as contexts causing a reduction of vehicles' velocity, which determines the velocity reduction rate. To find the optimal route (shortest path), we introduced the velocity reduction rate into the context for calculating a vehicle's velocity, reflecting composite contexts when one event synchronizes with another. We then proposed a context-based optimal route (shortest path) algorithm based on dynamic programming. The algorithm is composed of three steps. In the first, initialization step, departure and destination locations are given, and the path step is initialized to 0.
In the second step, moving costs between locations on the path, taking composite contexts into account, are estimated using the velocity reduction rate by context as path steps increase. In the third step, the optimal route (shortest path) is retrieved through back-tracking. In the proposed research model, we designed a framework to account for context awareness, moving cost estimation (taking both composite and single contexts into account), and an optimal route (shortest path) algorithm based on dynamic programming. Through illustrative experimentation using the Wilcoxon signed rank test, we showed that context-based route planning is much more effective than distance-based route planning. In addition, we found that the optimal solution (shortest paths) from distance-based route planning might not be optimal in real situations, because road conditions are very dynamic and unpredictable while affecting most vehicles' moving costs. Although more information is needed for a more accurate estimation of moving vehicles' costs, this study remains viable for applications that reduce moving costs through effective route planning. For instance, it could be applied to deliverers' decision making to enhance their decision satisfaction when they meet unpredictable dynamic situations while moving on the road. Overall, we conclude that taking the contexts into account as part of costs is a meaningful and sensible approach to resolving the optimal route problem.
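The core idea, edge costs equal to distance divided by a context-reduced effective speed, can be sketched as follows. The paper formulates this with dynamic programming; Dijkstra's algorithm is used here as an equivalent shortest-path solver, and the graph, speeds, and reduction rates are illustrative:

```python
import heapq

def context_shortest_path(edges, start, goal, base_speed=1.0):
    """Minimum-travel-time route where each edge's cost is its distance
    divided by an effective speed reduced by a context-dependent rate
    (congestion, roadwork, accident, weather, ...). `edges` maps
    node -> list of (neighbor, distance, reduction_rate), 0 <= rate < 1.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, length, rate in edges.get(u, []):
            cost = length / (base_speed * (1.0 - rate))  # context-adjusted time
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], goal
    while node != start:   # back-tracking, as in the paper's third step
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]
```

A congested short edge (high reduction rate) can thus lose to a longer but free-flowing detour, which is exactly the effect distance-based planning misses.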

The Economic Growth of Korea Since 1990 : Contributing Factors from Demand and Supply Sides (1990년대 이후 한국경제의 성장: 수요 및 공급 측 요인의 문제)

  • Hur, Seok-Kyun
    • KDI Journal of Economic Policy
    • /
    • v.31 no.1
    • /
    • pp.169-206
    • /
    • 2009
  • This study stems from the question, "How should we understand the pattern of the Korean economy after the 1990s?" Among the various applicable analytic methods, this study chooses a Structural Vector Autoregression (SVAR) with long-run restrictions, identifies the diverse impacts that gave rise to the current status of the Korean economy, and differentiates the relative contributions of those impacts. To that end, SVAR is applied to four economic models: Blanchard and Quah (1989)'s 2-variable model, its 3-variable extensions, and two other New Keynesian type linear models modified from Stock and Watson (2002). The latter two models are devised to reflect the recent transitions in the determination of the foreign exchange rate (from a fixed-rate regime to a flexible-rate one) as well as in the monetary policy rule (from aggregate targeting to inflation targeting). When organizing the estimated results in the form of impulse responses and forecast error variance decompositions, two common denominators are found. First, changes in the rate of economic growth are mainly attributable to the impact on productivity, and this trend has grown strong since the 2000s, which indicates that Korea's economic growth since the 2000s has been closely associated with its potential growth rate. Second, the magnitude and persistence of impulse responses tend to have subsided since the 2000s. Given Korea's high dependence on trade, it is possible that low interest rates, low inflation, steady growth, and the economic emergence of China as a world player have helped secure capital and demand for exports and imports, which therefore might have reduced the impact of each sector on the overall economic status. Despite the diverse mixture of models and impacts used for the analysis, these two common findings are observed throughout the results.
Therefore, it can be concluded that the decreased rate of economic growth of Korea since 2000 is on the same track as the decrease in Korea's potential growth rate. The contents of this paper are organized as follows: the second section reviews the recent trend of Korea's economic development and related Korean articles, which helps to clearly define the scope and analytic methodology of this study. The third section presents the analysis model used in this study, the Structural VAR mentioned above; the variables used, the estimation equations, and the identification conditions of the impacts are explained. The fourth section reports the estimation results derived from the model, and the fifth section concludes.
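The long-run (Blanchard-Quah) identification used above can be sketched for a bivariate VAR(1): the structural impact matrix is chosen so that the cumulative long-run impact matrix is lower triangular, i.e. the second (demand-type) shock has no long-run effect on the first variable. The VAR coefficients here are illustrative, not estimates from the paper:

```python
import numpy as np

def blanchard_quah_impact(A1, sigma_u):
    """Long-run identification for a VAR(1) y_t = A1 y_{t-1} + u_t with
    Cov(u) = sigma_u. The long-run impact C = (I - A1)^{-1} B0 is forced
    lower triangular via a Cholesky factorization of the long-run
    covariance; B0 then satisfies B0 @ B0.T = sigma_u.
    """
    A1 = np.asarray(A1, dtype=float)
    sigma_u = np.asarray(sigma_u, dtype=float)
    n = A1.shape[0]
    inv = np.linalg.inv(np.eye(n) - A1)
    # Cholesky factor of the long-run covariance = lower-triangular C
    C = np.linalg.cholesky(inv @ sigma_u @ inv.T)
    B0 = (np.eye(n) - A1) @ C
    return B0, C
```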


Parameter Estimation of Water Balance Analysis Method and Recharge Calculation Using Groundwater Levels (지하수위를 이용한 물수지분석법의 매개변수추정과 함양량산정)

  • An, Jung-Gi;Choi, Mu-Woong
    • Journal of Korea Water Resources Association
    • /
    • v.39 no.4 s.165
    • /
    • pp.299-311
    • /
    • 2006
  • This paper outlines a methodology for estimating the parameters of the water balance analysis method for calculating recharge, using groundwater level rises in a monitoring well when values of the specific yield of the aquifer are not available. The methodology is applied to two monitoring wells in the case study area in the northern part of Jeju Island. A water balance of the soil layer of the plant rooting zone is computed on a daily basis in the following manner. Direct runoff is estimated using the SCS method. Potential evapotranspiration, calculated with the Penman-Monteith equation, is multiplied by crop coefficients ($K_c$) and a water stress coefficient to compute actual evapotranspiration (AET). Daily runoff and AET are subtracted from the rainfall plus the soil water storage of the previous day. Soil water remaining above the soil water retention capacity (SWRC) is assumed to be recharge. Parameters such as the SCS curve number, SWRC, and $K_c$ are estimated from a linear relationship between water level rise and recharge for rainfall events. The upper threshold value of specific yield ($n_m$) at the monitoring well location is derived from the relationship between rainfall and the resulting water level rise. The specific yield ($n_c$) and the coefficient of determination ($R^2$) are calculated from a linear relationship between observed water level rise and calculated recharge for the different simulations. The set of parameter values with the maximum $R^2$ is selected among parameter sets whose calculated specific yield ($n_c$) is less than the upper threshold value ($n_m$). Results for the two monitoring wells show that 81% of the variance of the observed water level rises is explained by the recharge calculated with the estimated parameters. This shows that groundwater level data are useful in estimating the parameters of the water balance analysis method for calculating recharge.
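The daily bucket computation described above can be sketched as follows, assuming the standard SCS curve-number runoff formula and omitting the water-stress coefficient for brevity (rainfall in mm; CN and SWRC values are illustrative):

```python
def scs_runoff(p_mm, cn):
    """SCS curve-number direct runoff (mm) for daily rainfall p_mm."""
    s = 25400.0 / cn - 254.0       # potential maximum retention (mm)
    ia = 0.2 * s                   # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

def daily_recharge(rain, pet, kc, cn, swrc, storage0=0.0):
    """Daily soil-water bucket in the manner of the paper: subtract SCS
    direct runoff and actual ET (Kc * PET) from rainfall plus the
    previous day's storage; water above the retention capacity SWRC
    becomes recharge.
    """
    storage = storage0
    recharge = []
    for p, e in zip(rain, pet):
        runoff = scs_runoff(p, cn)
        storage = max(storage + p - runoff - kc * e, 0.0)
        r = max(storage - swrc, 0.0)   # excess over SWRC drains as recharge
        storage -= r
        recharge.append(r)
    return recharge
```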

Estimation of the Convective Boundary Layer Height Using a UHF Radar (UHF 레이더를 이용한 대류 경계층 고도의 추정)

  • 허복행;김경익
    • Korean Journal of Remote Sensing
    • /
    • v.17 no.1
    • /
    • pp.1-14
    • /
    • 2001
  • The enhancement of the refractive index structure parameter $C_n^2$ often occurs where the vertical gradients of virtual potential temperature ${\theta}_v$ and mixing ratio q have their maximum values. The $C_n^2$ can therefore be a very useful parameter for estimating the convective boundary layer (CBL) height. The behavior of $C_n^2$ peaks, often used to locate the height of the mixed layer, was investigated in the present study. In addition, a new method to determine the CBL height objectively using both the $C_n^2$ and vertical air velocity variance ${\sigma}_w$ data of a UHF radar was suggested. The present analysis showed that the $C_n^2$ peaks in the backscatter intensity profiles often occurred not only at the top of the CBL but also at the top of a residual layer or at a cloud layer. The $C_n^2$ peaks corresponding to the CBL heights were slightly lower than the CBL heights derived from rawinsonde sounding data when vertical mixing owing to weak solar heating was not significant and the height of strong vertical ${\theta}_v$ gradients was not consistent with that of strong vertical q gradients. However, the $C_n^2$ peaks corresponding to the CBL heights were in good agreement with the rawinsonde-estimated CBL heights when vertical mixing owing to solar heating was significant and the vertical gradients of both ${\theta}_v$ and q in the entrainment zone were very strong. The maximum backscatter intensity method, which takes the height of the $C_n^2$ peak as the CBL height, correctly estimated the CBL height when the $C_n^2$ profile had a single peak, but it erroneously estimated the CBL height when there was a residual layer or a cloud layer above the top of the CBL. The new method distinguished the peak at the CBL height from peaks due to a cloud layer or a residual layer using both the $C_n^2$ and ${\sigma}_w$ data, and correctly estimated the CBL height. 
In estimating the diurnal variation of the CBL height, the new method outperformed the maximum backscatter intensity method even when the vertical profile of backscatter intensity had two peaks, one from the CBL height and one from a residual layer or a cloud layer.
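One way to realize the idea of combining $C_n^2$ peaks with ${\sigma}_w$ is sketched below; this is a simplified heuristic, not the paper's exact algorithm, and the threshold value, peak definition, and mixing criterion (mean ${\sigma}_w$ below the peak) are all assumptions for illustration:

```python
import numpy as np

def cbl_height(heights, cn2, sigma_w, sigma_w_min=0.4):
    """Pick the CBL top as the lowest local maximum of Cn2 beneath which
    vertical-velocity variance indicates active mixing (sigma_w above a
    threshold). Peaks with weak sigma_w below them are treated as residual
    or cloud layers and skipped; returns None if no peak qualifies.
    """
    cn2 = np.asarray(cn2, dtype=float)
    sigma_w = np.asarray(sigma_w, dtype=float)
    peaks = [i for i in range(1, len(cn2) - 1)
             if cn2[i] > cn2[i - 1] and cn2[i] > cn2[i + 1]]
    for i in peaks:                       # lowest candidate peak first
        if sigma_w[:i + 1].mean() >= sigma_w_min:
            return heights[i]
    return None
```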

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and financial markets are no exception. Robo-advisors are being actively developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, offers stability in the management of large funds, and has been widely used in the financial field. The XGBoost model is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited-memory environments but is also very fast to train compared to traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model and the XGBoost machine learning model. This model uses XGBoost to predict the risk of assets and applies the predicted risk to the process of covariance estimation. Because an optimized asset allocation model estimates the proportion of investments based on historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model.
In this study, we used Korean stock market price data for a total of 17 years, from 2003 to 2019, for the empirical test of the suggested model. The data sets are composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and reduction of estimation errors. The total cumulative return is 45.748%, about 5 percentage points higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment showed improvement of portfolio performance by reducing the estimation errors of the optimized asset allocation model. Many financial models and asset allocation models are limited in practical investment because of the fundamental question of whether the past characteristics of assets will continue into the future in changing financial markets. However, this study not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a recent algorithm. There are various studies on parametric estimation methods to reduce estimation errors in portfolio optimization; we suggest a new method to reduce estimation errors in the optimized asset allocation model using machine learning.
So this study is meaningful in that it proposes an advanced artificial intelligence asset allocation model for the fast-developing financial markets.
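A minimal sketch of the naive risk-parity allocation step: with uncorrelated assets, weighting each asset by the inverse of its volatility equalizes the risk contributions. In the paper's variant, the volatilities fed into this step would come from XGBoost forecasts for the next investment period rather than from historical estimates alone:

```python
import numpy as np

def inverse_vol_weights(vols):
    """Naive risk-parity weights: weight each asset by the inverse of its
    (predicted) volatility, normalized to sum to 1. For uncorrelated
    assets this equalizes the risk contributions w_i * (Sigma @ w)_i.
    """
    inv = 1.0 / np.asarray(vols, dtype=float)
    return inv / inv.sum()
```

Full risk parity with a non-diagonal covariance requires an equal-risk-contribution optimization; the inverse-volatility rule is its closed-form special case and is often used as the starting point.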