• Title/Summary/Keyword: nonlinear prediction


Quantitative Comparison of Univariate Kriging Algorithms for Radon Concentration Mapping (라돈 농도 분포도 작성을 위한 단변량 크리깅 기법의 정량적 비교)

  • KWAK, Geun-Ho;KIM, Yong-Jae;CHANG, Byung-Uck;PARK, No-Wook
    • Journal of the Korean Association of Geographic Information Studies / v.20 no.1 / pp.71-84 / 2017
  • Radon, a radioactive gas that enters the indoor environment from soil, rocks, and groundwater, poses a serious risk to human health. Indoor radon concentrations are measured to investigate the risk of radon exposure, and reliable radon concentration mapping is then performed for further analysis. In this study, we compared the predictive performance of various univariate kriging algorithms, including ordinary kriging and three nonlinear transform-based kriging algorithms (log-normal, multi-Gaussian, and indicator kriging), for mapping radon concentrations with an asymmetric distribution. To compare predictive performance, we carried out jackknife-based validation and analyzed the errors according to differences in data intervals and sampling densities. In a case study in South Korea, the nonlinear transform-based kriging algorithms overall showed better predictive performance than ordinary kriging. Among them, log-normal kriging performed best, followed by multi-Gaussian kriging. Ordinary kriging was best at predicting high values within the spatial pattern. These results are expected to be useful in selecting kriging algorithms for the spatial prediction of data with asymmetric distributions.
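The log-normal variant that performed best here is, in essence, ordinary kriging applied to log-transformed values. A minimal numpy sketch, assuming an exponential variogram and a naive exponential back-transform (the unbiased back-transform would also use the kriging variance, and the paper's fitted variogram surely differs):

```python
import numpy as np

def variogram(h, sill=1.0, rng=10.0):
    """Exponential semivariogram (assumed model; illustrative parameters)."""
    return sill * (1.0 - np.exp(-h / rng))

def ordinary_krige(coords, values, x0):
    """Ordinary kriging prediction at x0 from scattered observations."""
    n = len(coords)
    A = np.ones((n + 1, n + 1))
    A[-1, -1] = 0.0
    for i in range(n):
        for j in range(n):
            A[i, j] = variogram(np.linalg.norm(coords[i] - coords[j]))
    b = np.ones(n + 1)
    b[:n] = [variogram(np.linalg.norm(c - x0)) for c in coords]
    w = np.linalg.solve(A, b)[:n]   # Lagrange row forces weights to sum to 1
    return float(w @ values)

def lognormal_krige(coords, values, x0):
    """Krige in log space, then back-transform (naive, bias-uncorrected)."""
    return float(np.exp(ordinary_krige(coords, np.log(values), x0)))
```

With no nugget, kriging interpolates exactly: predicting at an observed location returns the observed value, which is a quick sanity check for the solver.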

Coupling Detection in Sea Ice of Bering Sea and Chukchi Sea: Information Entropy Approach (베링해 해빙 상태와 척치해 해빙 변화 간의 연관성 분석: 정보 엔트로피 접근)

  • Oh, Mingi;Kim, Hyun-cheol
    • Korean Journal of Remote Sensing / v.34 no.6_2 / pp.1229-1238 / 2018
  • We examined whether the state of sea ice in the Bering Sea acts as a prelude to variation in that of the Chukchi Sea, using a satellite-based Arctic sea-ice concentration time series. The dataset consists of monthly sea-ice concentration values over 36 years (1982-2017). A time series analysis based on transfer entropy is performed to describe how sea-ice data in the Chukchi Sea are affected by those in the Bering Sea and to explain the relationship. Transfer entropy is a measure that identifies nonlinear coupling between two random variables or signals and estimates causality by varying the time delay. We verified that this measure detects nonlinear coupling on simulated signals. With the sea-ice concentration datasets, the transfer entropy measure, which is suitable for nonlinear systems, shows that sea ice in the Chukchi Sea is influenced by that in the Bering Sea 3, 5, and 6 months earlier. In particular, when the sea-ice concentration of the Bering Sea reaches a local minimum, the sea-ice concentration around the Chukchi Sea tends to decline 5 months later with about a 70% chance. This finding is interpreted as a process in which the inflow of Pacific water through the Bering Strait reduces sea ice in the Chukchi Sea after lowering the sea-ice concentration in the Bering Sea. This information-theoretic approach will be used to further investigate the timing and time scale of the patterns of interest, and the coupling inherent in the sea-ice concentrations of the two remote areas will be verified by studying ocean-atmosphere patterns and events in the period.
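For discretized series, transfer entropy can be estimated directly from joint counts. A minimal pure-Python sketch (binary symbols, lag-1 histories; the paper's estimator, binning, and delay handling may differ):

```python
import math
from collections import Counter

def transfer_entropy(x, y, lag=1):
    """TE_{X->Y} in bits: how much x[t-lag] tells about y[t]
    beyond what y[t-lag] already tells."""
    trips = list(zip(y[lag:], y[:-lag], x[:-lag]))   # (y_now, y_past, x_past)
    c3 = Counter(trips)
    c_px = Counter((yp, xp) for _, yp, xp in trips)  # (y_past, x_past)
    c_ny = Counter((yn, yp) for yn, yp, _ in trips)  # (y_now, y_past)
    c_p = Counter(yp for _, yp, _ in trips)          # y_past
    n = len(trips)
    te = 0.0
    for (yn, yp, xp), cnt in c3.items():
        # ratio of conditional distributions, expressed via counts
        te += (cnt / n) * math.log2(cnt * c_p[yp] / (c_px[(yp, xp)] * c_ny[(yn, yp)]))
    return te
```

On a toy pair where y is a one-step-delayed copy of a random binary x, TE_{X->Y} comes out near 1 bit while TE_{Y->X} stays near 0, recovering the direction of coupling the way the abstract describes.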

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Beyond stakeholders such as managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol' enterprises, went bankrupt. Even after that, analyses of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapses in a single moment. The key variables driving corporate defaults vary over time: Deakin's (1972) study, following the analyses of Beaver (1967, 1968) and Altman (1968), shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found, using the models of Zmijewski (1984) and Ohlson (1980), that the importance of predictive variables shifts. However, past studies use static models, and most do not consider changes that occur over the course of time. Therefore, to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively.
To construct a bankruptcy model that is consistent over time, we first train a deep learning time series model on the pre-crisis data (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data covering the financial crisis period (2007~2008). As a result, we obtain a model that shows patterns similar to the training results and excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000~2008), applying the optimal parameters found during validation. Finally, each corporate default prediction model, trained on nine years of data, is evaluated and compared on the test data (2009), demonstrating the usefulness of the deep learning time series approach to corporate default prediction. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model is useful for robust corporate default prediction across all three variable bundles. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data pose the challenges of nonlinear variables, multicollinearity among variables, and lack of data.
The logit model handles the nonlinearity, the Lasso regression model resolves the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis toward automated AI analysis and, eventually, intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and offers better predictive power. Through the Fourth Industrial Revolution, the current government and governments overseas are working to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study of deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
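The 7/2/1-year chronological split the study describes (train on 2000-2006, validate through the 2007-2008 crisis, test on 2009) can be sketched as below; the record and field names are illustrative, not from the paper:

```python
def chronological_split(records, train_end=2006, val_end=2008):
    """Split yearly records by date rather than at random, so the crisis
    years stay intact in the validation window and no future data leaks
    into training."""
    train = [r for r in records if r["year"] <= train_end]
    val = [r for r in records if train_end < r["year"] <= val_end]
    test = [r for r in records if r["year"] > val_end]
    return train, val, test
```

Unlike a shuffled split, this keeps the time-dependent bias the authors emphasize visible to the validation step rather than averaging it away.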

The Improvement of Output Voltage of UPS Using a Parallel Control Method (병렬 제어기법을 이용한 UPS 출력 전압의 개선)

  • 成 炳 模;姜 弼 淳;朴 晟 濬;金 喆 禹
    • The Transactions of the Korean Institute of Power Electronics / v.7 no.2 / pp.158-164 / 2002
  • This paper presents a parallel control method combining a conventional controller and a repetitive controller to improve the output voltage waveform of an uninterruptible power supply. Although a first-order prediction control method performs well on its own, it is not sufficient to reduce the steady-state errors generated by nonlinear loads such as rectifier loads and phase-controlled loads, so we also employ a repetitive control method, which can eliminate steady-state errors in the output voltage distortion caused by cyclic loads. The presented control scheme is verified through simulation and experiment. Experimental results, obtained on a 3 kVA, 60 Hz single-phase PWM inverter equipped with an LC output filter, are shown.
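The core of a repetitive controller is applying the error measured one reference period ago: u[k] = u[k-N] + k_r·e[k-N]. A toy pure-Python sketch on a memoryless plant with a periodic disturbance (the paper's inverter dynamics, gains, and filtering are of course different):

```python
import math

N = 50                     # samples per 60 Hz reference period (illustrative)
cycles = 5
kr = 1.0                   # repetitive learning gain (illustrative)
r = [math.sin(2 * math.pi * k / N) for k in range(N * cycles)]        # reference
d = [0.5 * math.sin(4 * math.pi * k / N) for k in range(N * cycles)]  # periodic disturbance
u = [0.0] * (N * cycles)
e = [0.0] * (N * cycles)

for k in range(N * cycles):
    if k >= N:
        u[k] = u[k - N] + kr * e[k - N]   # learn from the same point one period ago
    y = u[k] + d[k]                       # toy memoryless plant
    e[k] = r[k] - y                       # tracking error
```

Because the disturbance repeats with the reference period, the controller memorizes and cancels it, driving the steady-state error toward zero; a real inverter implementation normally adds a low-pass Q-filter for robustness.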

Adaptive Multi-view Video Interpolation Method Based on Inter-view Nonlinear Moving Blocks Estimation (시점 간 비선형 움직임 블록 예측에 기초한 적응적 다시점 비디오 보상 보간 기법)

  • Kim, Jin-Soo
    • The Journal of the Korea Contents Association / v.14 no.4 / pp.9-18 / 2014
  • Recently, much research has focused on multi-view video applications and services such as wireless video surveillance networks, wireless video sensor networks, and wireless mobile video. In multi-view video signal processing, exploiting the strong correlation between images acquired by different cameras plays a great role in developing core techniques of multi-view video coding. This paper proposes an adaptive multi-view video interpolation technique applicable to multi-view distributed video coding without requiring any cooperation among the cameras. The proposed algorithm estimates the nonlinear moving blocks, employs disparity-compensated view prediction, and then fills in the unreliable blocks. Computer simulations show that the proposed method outperforms conventional methods.
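The disparity-compensated prediction step can be illustrated with a brute-force SAD block search along the epipolar (horizontal) line; this numpy sketch is a generic illustration of block matching, not the paper's algorithm:

```python
import numpy as np

def best_disparity(left, right, y, x, block=4, max_d=6):
    """Find the horizontal shift d minimizing the sum of absolute
    differences (SAD) between a block in the left view and its
    candidate match in the right view (right[y, x-d] ~ left[y, x])."""
    ref = left[y:y + block, x:x + block].astype(float)
    best_d, best_sad = 0, float("inf")
    for d in range(max_d + 1):
        if x - d < 0:                       # candidate would leave the image
            break
        cand = right[y:y + block, x - d:x - d + block].astype(float)
        sad = float(np.abs(ref - cand).sum())
        if sad < best_sad:
            best_d, best_sad = d, sad
    return best_d
```

Blocks whose best SAD remains high would be flagged as unreliable and filled in afterwards, as the abstract describes.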

Design of Low Noise Engine Cooling Fan for Automobile using DACE Model (전산실험모형을 이용한 자동차 엔진 냉각홴의 저소음 설계)

  • Sim, Hyoun-Jin;Park, Sang-Gul;Joe, Yong-Goo;Oh, Jae-Eung
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.19 no.5 / pp.509-515 / 2009
  • This paper proposes an optimal design scheme that reduces the noise of an engine cooling fan by combining Kriging with two meta-heuristic techniques. An engineering model has been developed for predicting the noise spectrum of the engine cooling fan: by its generation mechanisms, the fan noise comprises discrete frequency peaks at the BPF and its harmonics, together with broadband noise. The objective of this paper is to find the optimal design for noise reduction of the engine cooling fan. We first compare the measured and calculated noise spectra of the fan to validate the noise prediction program. An orthogonal array is applied as the design of experiments because it is well suited to Kriging. With the simulated data, we estimate the correlation parameter of the Kriging model by solving the nonlinear problem with a genetic algorithm, and we find an optimal level for noise reduction of the cooling fan by optimizing the Kriging estimates with simulated annealing. This optimal design scheme gives noticeable results, and an optimal low-noise design for the cooling fan is thus proposed.
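The final search step, minimizing a surrogate of the fan noise over discrete factor levels with simulated annealing, follows a standard pattern. A generic sketch with a stand-in cost function (the real cost would be the Kriging predictor of fan noise):

```python
import math
import random

def simulated_annealing(cost, levels, n_factors, iters=4000,
                        t0=1.0, cooling=0.999, seed=1):
    """Minimize `cost` over discrete factor-level vectors, accepting
    uphill moves with probability exp(-dE/T) under geometric cooling."""
    rnd = random.Random(seed)
    x = [rnd.randrange(levels) for _ in range(n_factors)]
    fx = cost(x)
    best, fbest = x[:], fx
    t = t0
    for _ in range(iters):
        cand = x[:]
        cand[rnd.randrange(n_factors)] = rnd.randrange(levels)  # perturb one factor
        fc = cost(cand)
        if fc < fx or rnd.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x[:], fx
        t *= cooling
    return best, fbest
```

On a stand-in quadratic cost, the search reliably reaches the global optimum of the small level grid, which is the behavior one wants before trusting it on the Kriging surrogate.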

Design of Low Noise Engine Cooling Fan for Automobile using DACE Model (전산실험모형을 이용한 자동차 엔진 냉각팬의 저소음 설계)

  • Sim, Hyoun-Jin;Lee, Hae-Jin;Lee, You-Yub;Oh, Jae-Eung
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2007.11a / pp.1307-1312 / 2007
  • This paper proposes an optimal design scheme that reduces the noise of an engine cooling fan by combining Kriging with two meta-heuristic techniques. An engineering model has been developed for predicting the noise spectrum of the engine cooling fan: by its generation mechanisms, the fan noise comprises discrete frequency peaks at the BPF and its harmonics, together with broadband noise. The objective of this paper is to find the optimal design for noise reduction of the engine cooling fan. We first compare the measured and calculated noise spectra of the fan to validate the noise prediction program. An orthogonal array is applied as the design of experiments because it is well suited to Kriging. With the simulated data, we estimate the correlation parameter of the Kriging model by solving the nonlinear problem with a genetic algorithm, and we find an optimal level for noise reduction of the cooling fan by optimizing the Kriging estimates with simulated annealing. This optimal design scheme gives noticeable results, and an optimal low-noise design for the cooling fan is thus proposed.


Evaluation of short-term water demand forecasting using ensemble model (앙상블 모형을 이용한 단기 용수사용량 예측의 적용성 평가)

  • So, Byung-Jin;Kwon, Hyun-Han;Gu, Ja-Young;Na, Bong-Kil;Kim, Byung-Seop
    • Journal of Korean Society of Water and Wastewater / v.28 no.4 / pp.377-389 / 2014
  • The Smart Water Grid (SWG) concept has emerged globally over the last decade and has gained significant recognition in South Korea. In particular, growing interest in water demand forecasting has led to various studies on energy saving and improvement of water supply reliability. This study aims to develop a nonlinear ensemble model for hourly water demand forecasting that allows us to estimate uncertainties across different model classes. The concept was demonstrated through application to data observed at water plant (A) in South Korea. Various statistics (e.g., the efficiency coefficient, the correlation coefficient, the root mean square error, and the maximum error rate) were evaluated to assess model efficiency. The ensemble-based model with a cross-validated prediction procedure showed better predictability for water demand forecasting at different temporal resolutions. In particular, the performance of the ensemble model on hourly water demand data showed promising results compared with individual prediction schemes.
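One common way to form such an ensemble is to weight each member forecast inversely to its validation error; the sketch below is a generic illustration of this idea, not the paper's exact combination rule:

```python
def ensemble_forecast(member_preds, val_rmse):
    """Combine member forecasts with weights proportional to 1/RMSE,
    so members that did better on held-out (cross-validation) data
    dominate the combined hourly forecast."""
    w = [1.0 / e for e in val_rmse]
    total = sum(w)
    w = [wi / total for wi in w]
    horizon = len(member_preds[0])
    return [sum(wi * p[t] for wi, p in zip(w, member_preds))
            for t in range(horizon)]
```

With two equally skilled members whose errors fall on opposite sides of the truth, the combination cancels much of the individual error, which is the usual motivation for ensembling over a single scheme.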

Development of Performance Analysis S/W for Wind Turbine Generator System (풍력발전시스템 성능 해석 S/W 개발에 관한 연구)

  • Mun, Jung-Heu;No, Tae-Soo;Kim, Ji-Yon;Kim, Sung-Ju
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.36 no.2 / pp.202-209 / 2008
  • Application of a wind turbine generator system (WTGS) requires research on performance prediction, pitch control, and optimal operation methods. Recently, a new type of WTGS has been developed and is under testing. Its notable feature is that it consists of two rotor systems positioned horizontally at upwind and downwind locations and a generator installed vertically inside the tower. In this paper, a nonlinear simulation software package developed for performance prediction of the dual-rotor WTGS and for testing various control algorithms is introduced. The software is hybrid in the sense that FORTRAN is used extensively for computation while Matlab/Simulink provides a user-friendly, GUI-like environment.

Comparative Study of Dimension Reduction Methods for Highly Imbalanced Overlapping Churn Data

  • Lee, Sujee;Koo, Bonhyo;Jung, Kyu-Hwan
    • Industrial Engineering and Management Systems / v.13 no.4 / pp.454-462 / 2014
  • Retention of customers likely to churn is one of the most important issues in customer relationship management, so companies try to predict churning customers from their large-scale, high-dimensional data. This study focuses on handling such large data sets by reducing their dimensionality. Using six dimension reduction methods, namely principal component analysis (PCA), factor analysis (FA), locally linear embedding (LLE), local tangent space alignment (LTSA), locality preserving projections (LPP), and a deep auto-encoder, our experiments apply each method to the training data, build a classification model on the mapped data, and then measure performance by hit rate to compare the methods. In the results, PCA performs well despite its simplicity, and the deep auto-encoder gives the best overall performance. These results can be explained by the characteristics of the churn prediction data, which are highly correlated and overlapping across classes. We also propose a simple out-of-sample extension method for the nonlinear dimension reduction methods LLE and LTSA, utilizing the characteristics of the data.
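The abstract does not spell out the proposed out-of-sample extension, so the sketch below shows one common simple scheme under that caveat: mapping a new point through the embeddings of its nearest training neighbours with inverse-distance weights.

```python
import numpy as np

def knn_out_of_sample(X_train, Y_embed, x_new, k=3):
    """Embed an unseen point as an inverse-distance-weighted average of
    the low-dimensional coordinates of its k nearest training points.
    X_train: (n, D) inputs; Y_embed: (n, d) embedding from e.g. LLE/LTSA."""
    dist = np.linalg.norm(X_train - x_new, axis=1)
    idx = np.argsort(dist)[:k]
    w = 1.0 / (dist[idx] + 1e-12)   # small epsilon guards exact hits
    w /= w.sum()
    return (w[:, None] * Y_embed[idx]).sum(axis=0)
```

A point that coincides with a training sample maps (numerically) onto that sample's embedding, which is the minimal sanity check any extension method should pass.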