• Title/Summary/Keyword: Change Order Forecasting Model

Research on Location Selection Method Development for Storing Service Parts using Data Analytics (데이터 분석 기법을 활용한 서비스 부품의 저장 위치 선정 방안 수립 연구)

  • Son, Jin-Ho; Shin, KwangSup
    • The Journal of Bigdata / v.2 no.2 / pp.33-46 / 2017
  • Compared with finished products, service parts have attributes that make systematic management difficult, such as wide variety, demand uncertainty, and strong requirements for quick response. Order picking, in particular, is recognized as the most important work in a parts warehouse, since the inbound cycle of a service part is long while its outbound cycle is relatively short. Improving work efficiency in the warehouse is limited, however, because the cycle, frequency, and quantity of outbound requests depend on the inherent features of each part. In this research, the types of parts are classified using varied, detailed data, and a method is presented that minimizes the total travel distance of order picking and storage assignment for both inbound and outbound movements by developing a demand prediction model. Based on this study, we expect that both work efficiency and space utilization will improve without any change in the warehouse's inbound and outbound quantities.
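
A minimal sketch of the slotting idea behind this abstract, assuming a simple frequency-based heuristic (the paper's actual classification and demand-prediction models are not reproduced here); all part, slot, and distance values are hypothetical:

```python
# Frequency-based storage assignment: parts picked most often go to the
# slots closest to the depot, greedily reducing total picking travel.

# (part_id, forecast outbound picks per month) -- hypothetical demand forecasts
parts = [("P1", 120), ("P2", 15), ("P3", 300), ("P4", 60)]

# (slot_id, travel distance from depot in meters) -- hypothetical layout
slots = [("S1", 5.0), ("S2", 12.0), ("S3", 20.0), ("S4", 33.0)]

# Sort parts by descending pick frequency and slots by ascending distance,
# then pair them off: the classic "ABC" slotting heuristic.
assignment = list(zip(sorted(parts, key=lambda p: -p[1]),
                      sorted(slots, key=lambda s: s[1])))

total = sum(freq * dist for (_, freq), (_, dist) in assignment)
for (pid, freq), (sid, dist) in assignment:
    print(f"{pid} (freq {freq}) -> {sid} ({dist} m)")
print(f"expected monthly travel: {total:.1f} m")
```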

Impact of GPS-RO Data Assimilation in 3DVAR System on the Typhoon Event (태풍 수치모의에서 GPS-RO 인공위성을 사용한 관측 자료동화 효과)

  • Park, Soon-Young; Yoo, Jung-Woo; Kang, Nam-Young; Lee, Soon-Hwan
    • Journal of Environmental Science International / v.26 no.5 / pp.573-584 / 2017
  • To simulate a typhoon precisely, satellite observations were assimilated using the three-dimensional variational (3DVAR) data assimilation system of the WRF (Weather Research and Forecasting) model. The observations used in 3DVAR were GPS Radio Occultation (GPS-RO) data carried on Low-Earth Orbit (LEO) satellites. The Earth's refractivity is determined by temperature, pressure, and water vapor, and GPS-RO data are obtained from this refractivity when the satellite passes the limb position with respect to its orbit. In this paper, two typhoon cases were simulated to examine the characteristics of the data assimilation: one occurred in the Western Pacific from 16 to 25 October 2015, and the other affected the Korean Peninsula from 22 to 29 August 2012. In the simulation results, the typhoon tracks of the background (BGR) and assimilation (3DV) runs differed significantly where the track changed rapidly. The surface wind speed showed large differences at long forecast times because the GPS-RO data contain much information at upper levels, which takes time to influence the surface wind. Along with the modified typhoon track, the differences in the horizontal distribution of accumulated rain rate were remarkable, ranging from -600 to 500 mm. Over 7 days, we compared daily assimilation (3DV) with assimilation at the initial time only (3DV_7); because 3DV_7 reproduced the typhoon track and its meteorological variables accurately, the differences between the two experiments were found to be insignificant. Using observed rain-rate data at 79 surface observatories, a statistical analysis was carried out to evaluate the quantitative improvement. Although all experiments underestimated the rain amount because of the low model resolution (27 km), the reduced Mean Bias and Root-Mean-Square Error were found to be 2.92 mm and 4.53 mm, respectively.
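
A short sketch of the verification statistics this abstract reports, Mean Bias and Root-Mean-Square Error between simulated and observed rain rates; the arrays are placeholder values, not the 79-station data set:

```python
import numpy as np

obs = np.array([12.0, 30.5, 8.2, 55.0, 21.3])   # observed rain (mm)
sim = np.array([10.1, 24.8, 7.5, 47.2, 18.9])   # simulated rain (mm)

mean_bias = np.mean(sim - obs)                   # negative => underestimation
rmse = np.sqrt(np.mean((sim - obs) ** 2))
print(f"Mean Bias: {mean_bias:.2f} mm, RMSE: {rmse:.2f} mm")
```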

The Meaning and Usefulness of Simulation Method for Business Process Reengineering -Focused on the Korean Supreme Court BPR Project (1994-2003)-

  • Hong, Sung-wan; Roh, Tae-hoon; Kang, Sung-min; Lee, Jung-woo; Kang, Ga-na
    • Proceedings of the Korea Society for Simulation Conference / 2001.10a / pp.170-202 / 2001
  • Simulation is used to reduce the risk involved in new projects and organizational decision-making, and to save cost and time by forecasting different situations. The objectives of this research are to demonstrate the need for simulation through a real-life example and to encourage the use of the simulation method in future consulting projects by continuously making the necessary improvements. This research analyzed the effectiveness of simulation based on its use in 1994 and 1997 for the BPR project on the certification issuance process at the Supreme Court. To evaluate the proposed simulation model, we examined the gap between the simulation results and the operational data collected by visiting sites where AROS (Automated Registry Office System, an automation system developed by LG-EDS Systems) is being used, and we identified the causes of that gap. According to the analysis, (1) the gap arose because the concentration of certification issuance requests eased after computerization, (2) a gap existed in the operational process because the situational factors of each registry office were not considered in the simulation model, and (3) the gap also stemmed from the difficulty of formulating a mathematical model that predicts the complex and diverse behavior of individuals requesting certification issuance. To narrow these gaps, we proposed improvements to the certification issuance process: upgrading the software of the certification issuance vending machines so that people can use the service more conveniently, hiring more part-time workers when there is an overload of issuance requests, and improving the quality of the vending machines. In this research, we examined an efficient way to allocate resources based on the simulations conducted in 1994 and 1997; by reflecting the changes since the 1994 simulation and allocating clerks and machines based on the predicted results, we maximized the efficiency of the certification issuance process. In conclusion, this research examined the future usability of the simulation method based on the analysis results and identified the key issues to consider when using simulation in future consulting projects.
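
A toy sketch of the resource-allocation question this study simulates, assuming a simple queue of issuance requests served by a pool of clerks/machines; arrival and service rates are made up, not calibrated to the AROS data:

```python
import heapq, random

random.seed(1)

def simulate(num_servers, arrival_rate=1.5, service_rate=0.4, n_customers=5000):
    """Return mean wait (minutes) for an M/M/c-style issuance counter."""
    free_at = [0.0] * num_servers              # time each server becomes free
    heapq.heapify(free_at)
    t, total_wait = 0.0, 0.0
    for _ in range(n_customers):
        t += random.expovariate(arrival_rate)          # next request arrives
        start = max(t, heapq.heappop(free_at))         # earliest free server
        total_wait += start - t
        heapq.heappush(free_at, start + random.expovariate(service_rate))
    return total_wait / n_customers

for c in (4, 5, 6):
    print(f"{c} clerks -> mean wait {simulate(c):.1f} min")
```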

Optimization of Support Vector Machines for Financial Forecasting (재무예측을 위한 Support Vector Machine의 최적화)

  • Kim, Kyoung-Jae; Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.241-254 / 2011
  • Financial time-series forecasting is one of the most important problems in finance because it is essential for the risk management of financial institutions. Researchers have therefore tried to forecast financial time series using various data mining techniques such as regression, artificial neural networks, decision trees, and k-nearest neighbor. Recently, support vector machines (SVMs) have become popular in this research area because they do not require huge training data sets and have a low risk of overfitting. However, a user must determine several design factors heuristically in order to use an SVM, chief among them the selection of the kernel function and its parameters and proper feature subset selection. Beyond these factors, proper selection of an instance subset may also improve the forecasting performance of an SVM by eliminating irrelevant and distorting training instances. Nonetheless, few studies have applied instance selection to SVMs, especially in the domain of stock market prediction. Instance selection tries to choose proper instance subsets from the original training data; it may be considered a method of knowledge refinement that maintains the instance base. This study proposes a novel instance selection algorithm for SVMs. The proposed technique uses a genetic algorithm (GA) to optimize the instance selection process and the kernel parameters simultaneously; we call this model ISVM. Experiments on stock market data were carried out with ISVM, in which the GA searches for optimal or near-optimal kernel parameter values and relevant instances. The GA chromosome therefore contains two sets of codes: one for the kernel parameters and one for instance selection. For the controlling parameters of the GA search, the population size is set to 50 organisms, the crossover rate to 0.7, and the mutation rate to 0.1; as the stopping condition, 50 generations are permitted. The application data consist of technical indicators and the direction of change in the daily Korea stock price index (KOSPI), with a total of 2,218 trading days, separated into training, test, and hold-out subsets of 1,056, 581, and 581 observations, respectively. This study compares ISVM to several benchmark models: logistic regression (Logit), backpropagation neural networks (ANN), nearest neighbor (1-NN), a conventional SVM (SVM), and an SVM with kernel parameters optimized by the genetic algorithm (PSVM). The experimental results show that ISVM outperforms 1-NN by 15.32%, ANN by 6.89%, Logit and SVM by 5.34%, and PSVM by 4.82% on the hold-out data, and ISVM uses only 556 of the 1,056 original training instances to produce this result. In addition, the two-sample test for proportions is used to examine whether ISVM significantly outperforms the other models: ISVM outperforms ANN and 1-NN at the 1% statistical significance level, and performs better than Logit, SVM, and PSVM at the 5% level.
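
A compact sketch of the ISVM idea described above: a GA whose chromosome encodes both RBF-kernel parameters (C, gamma) and a binary instance-selection mask, using the GA settings quoted in the abstract (population 50, crossover 0.7, mutation 0.1, 50 generations). The data here are synthetic stand-ins for the KOSPI technical-indicator set, and the operators are one plausible choice, not the paper's exact design:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(200, 10)), rng.integers(0, 2, 200)
X_va, y_va = rng.normal(size=(80, 10)), rng.integers(0, 2, 80)
POP, GEN, CX, MUT = 50, 50, 0.7, 0.1
n = len(X_tr)

def fitness(ch):
    mask, logC, logG = ch
    if mask.sum() < 10:                       # guard against tiny subsets
        return 0.0
    svm = SVC(C=10.0 ** logC, gamma=10.0 ** logG)
    svm.fit(X_tr[mask], y_tr[mask])
    return svm.score(X_va, y_va)              # validation hit ratio

# chromosome = [instance mask, log10(C), log10(gamma)]
pop = [[rng.random(n) < 0.5, rng.uniform(-1, 3), rng.uniform(-4, 0)]
       for _ in range(POP)]
for g in range(GEN):
    pop.sort(key=fitness, reverse=True)
    nxt = pop[:2]                             # elitism: keep the two best
    while len(nxt) < POP:
        a, b = [pop[rng.integers(POP // 2)] for _ in range(2)]  # fitter half
        if rng.random() < CX:                 # uniform crossover on the mask
            pick = rng.random(n) < 0.5
            child = [np.where(pick, a[0], b[0]), (a[1] + b[1]) / 2,
                     (a[2] + b[2]) / 2]
        else:
            child = [a[0].copy(), a[1], a[2]]
        if rng.random() < MUT:                # mutate mask bits and params
            child[0] = child[0] ^ (rng.random(n) < 0.05)
            child[1] += rng.normal(0, 0.2)
            child[2] += rng.normal(0, 0.2)
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
print(f"selected {int(best[0].sum())}/{n} instances, "
      f"val accuracy {fitness(best):.3f}")
```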

A Comparative Study on General Circulation Model and Regional Climate Model for Impact Assessment of Climate Changes (기후변화의 영향평가를 위한 대순환모형과 지역기후모형의 비교 연구)

  • Lee, Dong-Kun; Kim, Jae-Uk; Jung, Hui-Cheul
    • Journal of Environmental Impact Assessment / v.15 no.4 / pp.249-258 / 2006
  • Impacts of global warming have been identified in many areas, including natural ecosystems, and many studies based on climate models forecasting future climate have been conducted worldwide. Because of its coarse global coverage, the GCM, the most frequently used type of climate model, is difficult to apply to Korea, which has narrow and complicated terrain. It is therefore necessary to perform impact assessments of climate change with a climate model that fully reflects the characteristics of the Korean climate. Accordingly, this study compared and analyzed a GCM and an RCM to determine a suitable climate model for Korea. The spatial scope was Korea, and the study period was the 10 years from 1981 to 1990. The current climate was estimated from GHCN observation data, and the future climate was forecast using 4 GCMs furnished by the IPCC under the SRES A2 scenario as well as the RCM provided by the NIES of Japan. Pearson correlation analysis was conducted to compare the observation data with the GCM and RCM output. The average annual temperature of Korea between 1981 and 1990 was found to be around 12.03°C, with an average daily rainfall of 2.72 mm. Under the GCMs, the average annual temperature ranged from 10.22 to 16.86°C, with average daily rainfall between 2.13 and 3.35 mm; the RCM gave an average annual temperature of 12.56°C and an average daily rainfall of 5.01 mm. In the comparison with the observations, the RCM was found to reflect the characteristics of Korea's climate well for both temperature and rainfall. This study is important mainly as a preliminary study that chose an appropriate climate model for Korea for examining impacts of climate change such as global warming, and its results show that future climate produced under conditions similar to actual ones may be applied in various areas.
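
A minimal sketch of the model-evaluation step in this abstract: Pearson correlation (plus bias) between station observations and climate-model output. The values are illustrative monthly mean temperatures, not the GHCN series used in the paper:

```python
import numpy as np
from scipy.stats import pearsonr

obs = np.array([-2.1, 0.5, 5.8, 12.0, 17.4, 21.6, 24.9, 25.3, 20.6, 14.1, 6.9, 0.2])
gcm = np.array([ 1.0, 2.8, 7.5, 13.9, 18.8, 23.0, 26.8, 27.1, 22.5, 15.8, 8.8, 2.4])
rcm = np.array([-1.5, 0.9, 6.1, 12.5, 17.9, 22.0, 25.4, 25.6, 21.0, 14.4, 7.2, 0.6])

for name, model in (("GCM", gcm), ("RCM", rcm)):
    r, p = pearsonr(obs, model)               # correlation with observations
    print(f"{name}: r = {r:.3f}, bias = {np.mean(model - obs):+.2f} °C")
```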

Determination of Unit Hydrograph for the Hydrological Modelling of Long-term Run-off in the Major River Systems in Korea (장기유출의 수문적 모형개발을 위한 주요 수계별 단위도 유도)

  • 엄병현; 박근수
    • Magazine of the Korean Society of Agricultural Engineers / v.26 no.4 / pp.52-65 / 1984
  • In general, precise estimation of the hourly or daily distribution of long-term runoff is very important in the design of irrigation sources; however, there has been no satisfactory method for forecasting stationary long-term runoff in Korea. To address this problem, this study introduces the unit hydrograph method, frequently used in short-term runoff analysis, into long-term runoff analysis, with the Sumgin River catchment selected as the model basin. In estimating effective rainfall, the conventional method neglects the soil moisture condition of the catchment; in this study, the initial discharge (qb) occurring just before the rising phase of the hydrograph was selected as an index of basin soil moisture and introduced as a third variable in the analysis of the relationship between cumulative rainfall and cumulative rainfall loss, yielding a new method for separating effective rainfall. Next, to normalize the significant potential errors in hydrological data, especially over a vast catchment area, Snyder's correlation method was applied. The key tool in this study is multiple correlation, or multiple regression analysis, which is based on the method of least squares and solved as a system of linear equations. To verify how the characteristics of the unit hydrograph change with various hydrological characteristics (for example, precipitation, tree cover, and soil condition), seasonal unit hydrograph models were built for the dry season (autumn, winter), the semi-dry season (spring), and the rainy season (summer). The results are summarized as follows. 1. During the test period of 1966-1971, effective rainfall was estimated for a total of 114 runoff hydrographs; the relative error of these estimates against observed values was 6%, much smaller than the 12% error of the conventional method. 2. During the test period, the daily distribution of long-term runoff discharge was estimated with the unit hydrograph model; the relative error of the standard unit hydrograph model was 12%, while the seasonal unit hydrograph models gave 14% during the dry season, 10% during the semi-dry season, and 7% during the rainy season, much smaller than the 37% of the conventional method. Summing up these results, the qb-index method of this study estimates effective rainfall more precisely than previously developed methods. Because no other method has been developed for estimating the daily distribution of long-term runoff discharge, the unit hydrograph estimates could only be compared with the Kajiyama method, which estimates monthly runoff discharge; nevertheless, the method of this study turns out to have high accuracy. Notably, there is no need to use separate seasonal unit hydrograph models except in the semi-dry season; the author hopes to analyze the latter case in future studies.
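
A sketch of the least-squares step named in this abstract: direct runoff is the convolution of effective rainfall with the unit hydrograph, so the unit hydrograph ordinates are the least-squares solution of the resulting linear system. Rainfall and runoff values here are illustrative, not the Sumgin River data:

```python
import numpy as np

P = np.array([10.0, 20.0, 5.0])                 # effective rainfall pulses (mm)
U_true = np.array([0.1, 0.4, 0.3, 0.15, 0.05])  # demo unit hydrograph (per mm)

# Build the convolution (Toeplitz) matrix: Q[t] = sum_k P[k] * U[t-k]
m, n = len(P), len(U_true)
A = np.zeros((m + n - 1, n))
for k, p in enumerate(P):
    for j in range(n):
        A[k + j, j] = p
Q = A @ U_true                                  # synthetic direct-runoff series

U_est, *_ = np.linalg.lstsq(A, Q, rcond=None)   # least-squares UH ordinates
print(np.round(U_est, 3))                       # recovers the UH ordinates
```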

Dynamic forecasts of bankruptcy with Recurrent Neural Network model (RNN(Recurrent Neural Network)을 이용한 기업부도예측모형에서 회계정보의 동적 변화 연구)

  • Kwon, Hyukkun; Lee, Dongkyu; Shin, Minsoo
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.139-153 / 2017
  • Corporate bankruptcy can cause great losses not only to stakeholders but also to many related sectors of society. Through successive economic crises, bankruptcies have increased and bankruptcy prediction models have become more and more important; corporate bankruptcy has therefore been regarded as one of the major topics of research in business management, and many studies in industry are also in progress. Previous studies attempted various statistical methodologies, such as Multivariate Discriminant Analysis (MDA) and the Generalized Linear Model (GLM), to improve bankruptcy prediction accuracy and to resolve the overfitting problem. More recently, researchers have used machine learning methodologies such as the Support Vector Machine (SVM) and Artificial Neural Network (ANN), and fuzzy theory and genetic algorithms have also been applied; with these changes, many bankruptcy models have been developed and performance has improved. In general, a company's financial and accounting information changes over time, as does the market situation, so it is difficult to predict bankruptcy using information from only a single point in time. Even though traditional research has this problem of ignoring the time effect, dynamic models have not been studied much; ignoring the time effect biases the results, so a static model may not be suitable for predicting bankruptcy, and a dynamic model may improve prediction. In this paper, we propose the RNN (Recurrent Neural Network), a deep learning methodology that learns from time-series data and is known to perform well. For estimating the bankruptcy prediction model and comparing forecasting performance, we selected non-financial firms listed on the KOSPI, KOSDAQ, and KONEX markets from 2010 to 2016. To avoid the mistake of predicting bankruptcy with financial information that already reflects the deterioration of a company's financial condition, the financial information was collected with a lag of two years, and the default period was defined as January to December of the year. We defined bankruptcy as delisting due to sluggish earnings, confirmed through KIND, a corporate stock information website. We then selected variables from previous papers: the first set consists of Z-score variables, which have become traditional in bankruptcy prediction, and the second is a dynamic variable set. For the first variable set we selected 240 normal companies and 226 bankrupt companies; for the second, 229 normal companies and 226 bankrupt companies. We created a model that reflects dynamic changes in time-series financial data, and by comparing the suggested model with existing bankruptcy prediction models, we found that it could help improve the accuracy of bankruptcy predictions. We used financial data from KIS Value (a financial database) and selected Multivariate Discriminant Analysis (MDA), the Generalized Linear Model known as logistic regression (GLM), the Support Vector Machine (SVM), and the Artificial Neural Network (ANN) as benchmarks. The experiment showed that the RNN's performance was better than the comparative models: its accuracy was high for both sets of variables, its Area Under the Curve (AUC) value was also high, and in the hit-ratio table the RNN's ratio of correctly predicting that a troubled company would go bankrupt was higher than that of the other models. A limitation of this paper is that an overfitting problem occurs during RNN training, but we expect it can be solved by selecting more training data and appropriate variables. From these results, we expect this research to contribute to the development of bankruptcy prediction by proposing a new dynamic model.
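
The abstract does not give the exact architecture, so here is a minimal sketch of the kind of model it describes: a recurrent network reading a few years of annual financial ratios per firm and outputting a bankruptcy probability. All shapes and data are synthetic placeholders, not the paper's Z-score or dynamic variable sets:

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(466, 3, 8)).astype("float32")  # firms x years x ratios
y = rng.integers(0, 2, 466).astype("float32")       # 1 = bankrupt

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(16, input_shape=(3, 8)),   # reads the time axis
    tf.keras.layers.Dense(1, activation="sigmoid"),      # bankruptcy probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy, AUC]
```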

A Regional Source-Receptor Analysis for Air Pollutants in Seoul Metropolitan Area (수도권지역에서의 권역간 대기오염물질 상호영향 연구)

  • Lee, Yong-Mi; Hong, Sung-Chul; Yoo, Chul; Kim, Jeong-Soo; Hong, Ji-Hyung; Park, Il-Su
    • Journal of Environmental Science International / v.19 no.5 / pp.591-605 / 2010
  • This study simulated major criteria air pollutants and estimated regional source-receptor relationships in the Seoul metropolitan area using an air quality prediction model (TAPM; The Air Pollution Model). The source-receptor relationship was estimated as the contribution of each region to other regions and to itself, after dividing the Seoul metropolitan area into five regions. Following administrative boundaries, region I was Seoul and region II was Incheon, while Gyeonggi was divided into three regions by direction: southern (region III), northern (region IV), and eastern (region V). Gridded emissions (1 km × 1 km) from the Clean Air Policy Support System (CAPSS) of the National Institute of Environmental Research (NIER) were prepared for the TAPM simulation. The operational weather prediction system RDAPS (Regional Data Assimilation and Prediction System), operated by the Korea Meteorological Administration (KMA), was used for regional weather forecasting with 30 km grid resolution. The modeling period was 5 continuous days without precipitation in each season. The results showed that region I was the most polluted area, 3~4 times more polluted than the other regions for NO2, SO2, and PM10. More than 50 percent of the NO2, SO2, and PM10 in regions I, II, and III came from their own sources, whereas regions IV and V were mostly affected by the sources of regions I, II, and III. When the emissions of all regions were assumed to be reduced by 10 and 20 percent separately, the air pollution of each region decreased linearly, and the contributions under the reduction scenarios were similar to those of the base case. When input emissions were reduced at different rates (region I by 40 percent, regions II and III by 20 percent, regions IV and V by 10 percent), the air pollution of regions I and III decreased remarkably, and the self-contributions of regions I, II, and III were also reduced, although regions I, II, and III then affected regions IV and V relatively more. In short, graded emission reduction can be more effective for controlling air pollution in an area with imbalanced emissions.
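
A sketch of the source-receptor bookkeeping this abstract describes: a matrix whose entry (i, j) is the contribution of region j's emissions to pollution in region i, so that under a (near-)linear model, scaling a source's emissions scales its column. The numbers are illustrative, not TAPM output:

```python
import numpy as np

regions = ["I", "II", "III", "IV", "V"]
# rows: receptor regions, cols: source regions (share of PM10, in %)
S = np.array([
    [62,  8, 12,  9,  9],
    [15, 55, 14,  8,  8],
    [18, 10, 54,  9,  9],
    [30, 15, 20, 25, 10],
    [28, 12, 22, 10, 28],
], dtype=float)

# Graded reduction scenario from the abstract: I -40%, II/III -20%, IV/V -10%
cut = np.array([0.6, 0.8, 0.8, 0.9, 0.9])
after = S * cut                           # each source column scaled
for r, row in zip(regions, after):
    print(f"region {r}: {row.sum():.1f}% of base-case pollution level")
```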

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae; Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Beyond stakeholders such as the managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large 'chaebol' corporations went bankrupt. Even afterwards, analysis of past corporate defaults focused on specific variables, and when the government restructured immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the Lehman Brothers case of the global financial crisis, where everything collapses in a single moment. The key variables driving corporate defaults vary over time: comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of the predictive variables in Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time; to construct consistent prediction models, it is necessary to compensate for time-dependent bias with a time-series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009, divided into training, validation, and test data of 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent over time, we first train a deep learning time-series model on the data before the financial crisis (2000~2006). Parameter tuning of the existing models and of the deep learning time-series algorithm is conducted with validation data that include the financial crisis period (2007~2008); the resulting model shows patterns similar to the training results and excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000~2008) with the optimal parameters from validation. Finally, each corporate default prediction model is evaluated and compared on the test data (2009) using the models trained over the nine years, proving the usefulness of the corporate default prediction model based on the deep learning time-series algorithm. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time-series model is useful for robust corporate default prediction across the three bundles of variables. The definition of bankruptcy used is the same as that of Lee (2015), and the independent variables include financial information such as the financial ratios used in previous studies; multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The influence of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time-series algorithms are compared. Corporate data suffer from nonlinear variables, multicollinearity, and lack of data; the logit model handles nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time-series algorithm, with a variable-data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis and, eventually, toward intertwined AI applications. Although the study of corporate default prediction models using time-series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and more effective in predictive power. Through the Fourth Industrial Revolution, the current government and other governments overseas are working to integrate such systems into everyday life, yet deep learning time-series research for the financial industry remains insufficient. As an initial study on deep learning time-series analysis of corporate defaults, this work is intended to serve as comparative material for non-specialists beginning to combine financial data with deep learning time-series algorithms.
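
A sketch of the Lasso-based variable-selection step this abstract adds to the classical methods: an L1-penalized logistic regression drives uninformative financial-ratio coefficients to zero, and the surviving variables form an input bundle for the downstream model. Data are synthetic placeholders, not the 2000-2009 corporate data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                    # 20 candidate financial ratios
beta = np.zeros(20)
beta[[0, 3, 7]] = [1.5, -2.0, 1.0]                # only 3 ratios truly matter
y = (X @ beta + rng.normal(scale=0.5, size=500) > 0).astype(int)

# L1 penalty shrinks irrelevant coefficients exactly to zero
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])
print("selected variable indices:", selected)
```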

Establishment of Inundation Probability DB for Forecasting the Farmland Inundation Risk Using Weather Forecast Data (기상예보 기반 농촌유역 침수 위험도 예보를 위한 침수 확률 DB 구축)

  • Kim, Si-Nae; Jun, Sang-Min; Lee, Hyun-Ji; Hwang, Soon-Ho; Choi, Soon-Kun; Kang, Moon-Seong
    • Journal of The Korean Society of Agricultural Engineers / v.62 no.4 / pp.33-43 / 2020
  • To reduce the damage from farmland inundation caused by recent climate change, it is necessary to predict the risk of farmland inundation accurately. Since the digital forecasts of the Korea Meteorological Administration are provided on a six-hour basis, inundation modeling should consider the multiple possible time distributions of the forecasted rainfall. Because building multiple inputs and running inundation models take a lot of time, the forecast time must be shortened by building a database (DB) of farmland inundation probability. The objective of this study is therefore to establish a DB of farmland inundation probability for forecasted rainfalls. Historical digital-forecast data were collected and used for time division, and inundation modeling was performed 100 times for each rainfall event. Time disaggregation of the forecasted rainfall was performed by applying the Multiplicative Random Cascade (MRC) model, which exploits the consistency of fractal characteristics, to the six-hour rainfall data. To analyze farmland inundation, the river level was simulated using the Hydrologic Engineering Center - River Analysis System (HEC-RAS), and the water level of the farmland was calculated with a simulation technique based on the water balance equation. The inundation probability was calculated as the number of inundation occurrences out of the total number of simulations, and the results were stored in the farmland inundation probability DB. The results of this study can be used to predict the risk of farmland inundation quickly and to prepare measures to reduce inundation damage.
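
A sketch of the two steps in this abstract that lend themselves to code: (1) a multiplicative random cascade that disaggregates a six-hour forecast total by repeated random splitting, and (2) the inundation probability as the fraction of simulations that flood. The splitting weights and the flood check are illustrative stand-ins for the calibrated MRC model and the HEC-RAS/water-balance simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mrc_disaggregate(total_mm, levels=3):
    """Split a rainfall total in half `levels` times with random cascade weights."""
    series = np.array([total_mm])
    for _ in range(levels):
        w = rng.uniform(0.2, 0.8, size=len(series))    # cascade weights
        series = np.column_stack([series * w, series * (1 - w)]).ravel()
    return series                                      # 2**levels sub-intervals

def inundated(rain_series):
    # Placeholder for the HEC-RAS + water-balance run: flood if any
    # sub-interval exceeds a hypothetical drainage capacity.
    return rain_series.max() > 22.0

n_sim = 100                                            # 100 runs per event, as in the abstract
floods = sum(inundated(mrc_disaggregate(120.0)) for _ in range(n_sim))
print(f"inundation probability: {floods / n_sim:.2f}")
```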