• Title/Summary/Keyword: modeling parameters


Modeling of Estimating Soil Moisture, Evapotranspiration and Yield of Chinese Cabbages from Meteorological Data at Different Growth Stages (기상자료(氣象資料)에 의(依)한 배추 생육시기별(生育時期別) 토양수분(土壤水分), 증발산량(蒸發散量) 및 수량(收量)의 추정모형(推定模型))

  • Im, Jeong-Nam;Yoo, Soon-Ho
    • Korean Journal of Soil Science and Fertilizer / v.21 no.4 / pp.386-408 / 1988
  • A study was conducted from 1981 to 1986 in Suweon, Korea, to develop a model for estimating the evapotranspiration and yield of Chinese cabbages from meteorological factors. Lysimeters with the water table maintained at 50 cm depth were used to measure the potential evapotranspiration and the maximum evapotranspiration in situ. The actual evapotranspiration and the yield were measured in field plots irrigated under soil moisture regimes of -0.2, -0.5, and -1.0 bars, respectively. The soil water content throughout the profile was monitored with a neutron moisture depth gauge, and the soil water potentials were measured using gypsum blocks and tensiometers. The fresh weight of Chinese cabbages at harvest was taken as the yield. The data collected in situ were analyzed to obtain the model parameters. The results are summarized as follows: 1. The 5-year mean potential evapotranspiration (PET) increased gradually from 2.38 mm/day in early April to 3.98 mm/day in mid-June, and thereafter decreased to 1.06 mm/day in mid-November. PET estimated by the Penman, Radiation, or Blaney-Criddle methods overestimated the measured PET, while the pan-evaporation method underestimated it. The correlation between the estimated and the measured PET, however, was highly significant except for July and August with the Blaney-Criddle method, which implied that the coefficients should be adjusted to Korean conditions. 2. The meteorological factors showing high correlation with the measured PET were temperature, vapour pressure deficit, sunshine hours, solar radiation, and pan evaporation. Several multiple regression equations using meteorological factors were formulated to estimate PET. The equation with pan evaporation (Eo) was the simplest yet highly accurate: $$PET=0.712+0.705Eo$$ 3.
The crop coefficient of Chinese cabbages (Kc), the ratio of the maximum evapotranspiration (ETm) to PET, ranged from 0.5 to 0.7 at the early growth stage and from 0.9 to 1.2 at the mid and late growth stages. The regression equations with respect to the growth progress degree (G), ranging from 0.0 on the transplanting day to 1.0 on the harvesting day, were: $$Kc=0.598+0.959G-0.501G^2$$ for spring cabbages $$Kc=0.402+1.887G-1.432G^2$$ for autumn cabbages 4. The soil factor (Kf), the ratio of the actual evapotranspiration to the maximum evapotranspiration, was 1.0 when the available soil water fraction (f) was higher than a threshold value (fp) and decreased linearly with decreasing f below fp. The relationships were: Kf=1.0 for $$f{\geq}fp$$ and $$Kf=a+bf$$ for f<fp 5. The soil evaporation (Es) from irrigation or rainfall (I) was bounded by its maximum (Esm): Es=I for $$I{\leq}Esm$$ and Es=Esm for I>Esm 6. The model for estimating the actual evapotranspiration (ETa) was based on a water balance neglecting capillary rise: $$ETa=PET{\cdot}Kc{\cdot}Kf+Es$$ 7. The model for estimating relative yield (Y/Ym) was selected among regression equations on the measured ETa as: $$Y/Ym=a+b{\ln}(ETa)$$ The coefficients a and b were 0.07 and 0.73 for spring Chinese cabbages and 0.37 and 0.66 for autumn Chinese cabbages, respectively. 8. The estimated ETa and Y/Ym were compared with the measured values to verify the model established above. The estimated ETa deviated within 0.29 mm/day for spring Chinese cabbages and 0.19 mm/day for autumn Chinese cabbages. The average deviations of the estimated relative yield were 0.14 and 0.09, respectively. 9. The deviations between the values estimated by the model and the actual values obtained from three cropping field experiments conducted after model calibration were within a reasonable confidence range. Therefore, the model was validated for practical use.
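The estimation chain in the abstract (pan evaporation → PET → Kc → Kf → ETa → relative yield) can be sketched in code. This is a minimal illustration using only the regression coefficients quoted above; the function names are hypothetical, and the Kf intercept and slope (a, b), which the paper fits per growth stage, are left as caller-supplied parameters:

```python
import math

def pet_from_pan(eo):
    """PET (mm/day) from pan evaporation Eo via the paper's regression
    PET = 0.712 + 0.705*Eo."""
    return 0.712 + 0.705 * eo

def crop_coefficient(g, season="spring"):
    """Kc = ETm/PET as a quadratic in the growth progress degree G
    (0.0 at transplanting, 1.0 at harvest)."""
    if season == "spring":
        return 0.598 + 0.959 * g - 0.501 * g ** 2
    return 0.402 + 1.887 * g - 1.432 * g ** 2

def soil_factor(f, fp, a, b):
    """Kf = ETa/ETm: 1.0 when the available soil water fraction f is at or
    above the threshold fp, linear (a + b*f) below it. The fitted a, b
    values are stage-specific and not listed in the abstract."""
    return 1.0 if f >= fp else a + b * f

def actual_et(pet, kc, kf, es=0.0):
    """Water-balance estimate neglecting capillary rise:
    ETa = PET * Kc * Kf + Es (Es: soil evaporation term)."""
    return pet * kc * kf + es

def relative_yield(eta, season="spring"):
    """Y/Ym = a + b*ln(ETa), with (a, b) = (0.07, 0.73) for spring and
    (0.37, 0.66) for autumn Chinese cabbages."""
    a, b = (0.07, 0.73) if season == "spring" else (0.37, 0.66)
    return a + b * math.log(eta)
```

Chaining these for one day would look like `relative_yield(actual_et(pet_from_pan(eo), crop_coefficient(g), soil_factor(f, fp, a, b)))`.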


Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Beyond the stakeholders of a bankrupt company, including its managers, employees, creditors, and investors, corporate defaults ripple through the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing a variety of corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even afterwards, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid a sudden total collapse like the 'Lehman Brothers case' of the global financial crisis. The key variables that drive corporate defaults vary over time: comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study shows that the major factors affecting corporate failure have changed. Grice (2001) likewise examined the importance of predictive variables through Zmijewski's (1984) and Ohlson's (1980) models. However, past studies used static models, and most do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for this time-dependent bias with a time series algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009, divided into training, validation, and test sets of 7, 2, and 1 years, respectively.
To construct a bankruptcy model that is consistent over time, we first train a deep learning time series model on the pre-crisis data (2000~2006). Parameter tuning of both the existing models and the deep learning time series algorithm is conducted on validation data covering the financial crisis period (2007~2008). The result is a model that shows a pattern similar to the training results and excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000~2008), applying the optimal parameters found during validation. Finally, the models trained over those nine years are evaluated and compared on the test data (2009), demonstrating the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model built on the three resulting variable bundles is useful for robust corporate default prediction. The definition of bankruptcy is the same as in Lee (2015). Independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms is compared. Corporate data pose three difficulties: nonlinear variables, multi-collinearity among variables, and a lack of data.
The logit model addresses the nonlinearity of variables, the Lasso regression model resolves the multi-collinearity problem, and the deep learning time series algorithm, with its variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis toward automated AI analysis and, eventually, intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and delivers better predictive power. Under the banner of the Fourth Industrial Revolution, the Korean government and governments overseas are working to integrate such systems into the everyday life of their nations, yet deep learning time series research for the financial industry remains insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and we hope it serves as comparative reference material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
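The chronological split this abstract describes (7 training years, 2 crisis-period validation years, 1 test year, then a refit on the pooled 2000~2008 data) can be sketched as below. The function names and record layout are illustrative assumptions, not the authors' code:

```python
def split_by_year(records, train_end=2006, valid_end=2008):
    """Chronological split mirroring the paper's design: train on years up to
    2006, validate on the crisis period 2007~2008, test on 2009 and later."""
    train = [r for r in records if r["year"] <= train_end]
    valid = [r for r in records if train_end < r["year"] <= valid_end]
    test = [r for r in records if r["year"] > valid_end]
    return train, valid, test

def refit_pool(train, valid):
    """After hyper-parameter tuning, the model is refit on the combined
    train + validation years (2000~2008) before evaluation on the test year."""
    return train + valid
```

Splitting by calendar year rather than at random keeps later observations out of the training set, which is the point of the time-dependent bias correction the abstract argues for.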

Animal Infectious Diseases Prevention through Big Data and Deep Learning (빅데이터와 딥러닝을 활용한 동물 감염병 확산 차단)

  • Kim, Sung Hyun;Choi, Joon Ki;Kim, Jae Seok;Jang, Ah Reum;Lee, Jae Ho;Cha, Kyung Jin;Lee, Sang Won
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.137-154 / 2018
  • Animal infectious diseases such as avian influenza and foot and mouth disease occur almost every year and cause huge economic and social damage to the country. To prevent this, the quarantine authorities have made various human and material efforts, but the diseases have continued to occur. Avian influenza was first identified in 1878 and has risen to a national issue due to its high lethality. Foot and mouth disease is considered the most critical animal infectious disease internationally. In countries free of the disease, it is regarded as an economic or political disease, because it restricts international trade by complicating imports of processed and unprocessed livestock, and because quarantine is costly. In a society where the whole nation shares a connected living zone, the spread of infectious disease cannot be fully prevented; it is therefore necessary to detect an outbreak and act before the disease spreads. For both human and animal infectious diseases, an epidemiological investigation of confirmed cases is carried out as soon as a diagnosis is made, and measures to prevent further spread are taken according to its results. The foundation of an epidemiological investigation is determining where a subject has been and whom it has contacted. From a data perspective, this can be defined as predicting the cause of an outbreak, the outbreak location, and future infections by collecting and analyzing geographic data and relational data. Recently, attempts have been made to develop infectious disease prediction models using big data and deep learning technology, but model-building studies and case reports remain scarce.
KT and the Ministry of Science and ICT have been carrying out big data projects since 2014, as part of national R&D projects, to analyze and predict the routes of livestock-related vehicles. To prevent animal infectious diseases, the researchers first developed a prediction model based on regression analysis of vehicle movement data. After that, a more accurate prediction model was constructed using machine learning algorithms such as logistic regression, Lasso, support vector machines, and random forests. In particular, the 2017 prediction model added the risk of diffusion to facilities, and its performance was improved by exploring the modeling hyper-parameters in various ways. The confusion matrix and ROC curve show that the model constructed in 2017 is superior to the earlier machine learning model. The 2017 model differs from the 2016 model in that the later model uses visit information on facilities such as feed factories and slaughterhouses, and poultry data expanded from chickens and ducks to include geese and quail. In addition, in 2017 an explanation of the results was added to support the authorities' decision making and to establish a basis for persuading stakeholders. This study reports an animal infectious disease prevention system built on big data covering hazardous vehicle movements, farms, and the environment. Its significance lies in describing the evolution of a prediction model using big data deployed in the field; the model is expected to become more complete if virus types are taken into consideration. This will contribute to data utilization and analysis model development in related fields, and we expect the system constructed in this study to provide more effective, preventive disease control.
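The model comparison the abstract mentions (confusion matrix and ROC curve for the 2016 vs. 2017 models) can be sketched in plain Python. The function names are hypothetical, and the AUC is computed via the rank-based (Mann-Whitney) formulation rather than by tracing the full curve:

```python
def confusion_matrix(y_true, y_pred):
    """2x2 counts (TP, FP, FN, TN) for a binary outbreak label, the basis
    of the comparison between the 2016 and 2017 models."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def roc_auc(y_true, scores):
    """Area under the ROC curve as the probability that a randomly chosen
    positive case is scored above a randomly chosen negative case
    (ties count half)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A higher AUC on held-out farm/vehicle data is one way the later model's superiority could be quantified, alongside the confusion-matrix counts.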