• Title/Summary/Keyword: Prediction Analysis


Life Prediction of Hydraulic Concrete Based on Grey Residual Markov Model

  • Gong, Li;Gong, Xuelei;Liang, Ying;Zhang, Bingzong;Yang, Yiqun
    • Journal of Information Processing Systems
    • /
    • v.18 no.4
    • /
    • pp.457-469
    • /
    • 2022
  • Hydraulic concrete structures in northwest China are often subject to the combined effects of low-temperature frost damage, drying-wetting cycles, and salt erosion, so the prediction of concrete deterioration is of major importance. A prediction model for the relative dynamic elastic modulus (RDEM) of four kinds of modified concrete under the special environment of northwest China was established using Grey residual Markov theory. Based on the available test data, corrected values of the dynamic elastic modulus were obtained from the Grey GM(1,1) model and the residual GM(1,1) model, combined with Markov sign correction, and the dynamic elastic modulus of the concrete was predicted. The computational analysis showed that the maximum relative error of the corrected dynamic elastic modulus was significantly reduced, from 1.599% to 0.270% for the BS2 group. The error analysis showed that the model was better suited to the concrete mixed with fly ash and mineral powder, whose calculation error was significantly lower than that of the other groups. The analysis of each group's data showed that the model could effectively predict the loss of dynamic elastic modulus as the concrete deteriorated, as well as the number of cycles at which the concrete reached the damaged state.
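
The Grey GM(1,1) step mentioned in the abstract can be sketched in plain Python. This is a minimal illustration of the standard GM(1,1) procedure (accumulated generating operation, least-squares fit of the development coefficient, time-response function), not the paper's actual implementation; the residual GM(1,1) and Markov sign-correction stages are omitted.

```python
import math

def gm11(x0, horizon=1):
    """Fit a Grey GM(1,1) model to series x0 and forecast `horizon` extra steps."""
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1 = [sum(x0[:i + 1]) for i in range(n)]
    # background values: adjacent means of the accumulated series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    # least-squares estimate of a (development coefficient) and b (grey input)
    # in the grey differential equation x0[k] = -a*z[k] + b
    m = n - 1
    sz, szz = sum(z), sum(v * v for v in z)
    sy, szy = sum(y), sum(zv * yv for zv, yv in zip(z, y))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # time-response function for the accumulated series
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    # 1-IAGO: difference back to the original series, plus forecasts
    return [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + horizon)]
```

On a near-exponential decay series (as RDEM loss curves typically are), the fitted and forecast values track the data closely.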

Development of Big Data-based Cardiovascular Disease Prediction Analysis Algorithm

  • Kyung-A KIM;Dong-Hun HAN;Myung-Ae CHUNG
    • Korean Journal of Artificial Intelligence
    • /
    • v.11 no.3
    • /
    • pp.29-34
    • /
    • 2023
  • Recently, with the rapid development of artificial intelligence technology, many studies have been conducted to predict the risk of heart disease in order to lower the worldwide mortality rate of cardiovascular disease. This study aims to develop "Lifestyle Improvement Contents (Digital Therapy)" for cardiovascular disease care, delivered to patients as exercise and dietary guidance in the form of a software app or web service on digital devices such as mobile phones and PCs, to help with management or treatment. To that end, we compared and analyzed cardiovascular disease prediction models built with the machine learning algorithms LR, LDA, SVM, and XGBoost. The XGBoost model showed the best predictive performance: overall accuracy was around 80.0%, F1 score was 0.77~0.79, and ROC-AUC was 0.80~0.84. Therefore, the algorithms used in this study can serve as reference models for verifying the validity and accuracy of cardiovascular disease prediction. By entering accurate biometric data collected in future clinical trials, adding lifestyle management elements (exercise, eating habits, etc.), and verifying the effect and efficacy on cardiovascular bio-signals and disease risk, a cardiovascular disease prediction and analysis algorithm can be developed, ultimately suggesting that it is possible to develop lifestyle improvement contents (digital therapy).
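
The metrics reported above (accuracy, F1 score, ROC-AUC) can be computed from model outputs without any ML library; a minimal stdlib sketch, independent of the paper's actual pipeline:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for the positive class (label 1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def roc_auc(y_true, scores):
    """Probability that a random positive outscores a random negative (ties count half)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The rank-statistic form of ROC-AUC used here is equivalent to integrating the ROC curve, and is convenient when only scores and labels are at hand.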

A NODE PREDICTION ALGORITHM WITH THE MAPPER METHOD BASED ON DBSCAN AND GIOTTO-TDA

  • DONGJIN LEE;JAE-HUN JUNG
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.27 no.4
    • /
    • pp.324-341
    • /
    • 2023
  • Topological data analysis (TDA) is a recently developed data analysis technique that investigates the overall shape of a given dataset. The mapper algorithm is a TDA method that considers the connectivity of the given data and converts the data into a mapper graph. Compared to persistent homology, another popular TDA tool that mainly focuses on the homological structure of the given data, the mapper algorithm is more of a visualization method that represents the given data as a graph in a lower dimension. As it visualizes the overall data connectivity, it can also be used as a prediction method that places new input points on the mapper graph. Existing mapper packages such as Giotto-TDA, Gudhi and Kepler Mapper provide the descriptive mapper algorithm; that is, their final output is mainly the mapper graph. In this paper, we develop a simple predictive algorithm: it identifies the nodes within the established mapper graph associated with a new emerging data point. By checking the features of the detected nodes, such as whether they are anomalous, we can determine the character of the new input point. As an example, we use fraud credit card transaction data to show how the developed algorithm can be used as a node prediction method.
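
A minimal sketch of the node-identification idea, under the simplifying assumption that each mapper node stores its filter interval and its member points (the paper's actual algorithm, built on Giotto-TDA and DBSCAN, is more involved; the node representation below is hypothetical):

```python
def dist(a, b):
    """Euclidean distance between two points given as coordinate tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_nodes(new_point, filter_fn, nodes, radius):
    """Return indices of mapper-graph nodes a new data point falls into.

    nodes: list of dicts with 'interval' (lo, hi) over the filter value and
    'members' (points clustered into that node). A point joins a node when
    its filter value lies in the node's interval and it sits within `radius`
    of some member, mimicking DBSCAN's epsilon-neighbourhood reachability.
    """
    fv = filter_fn(new_point)
    hits = []
    for i, node in enumerate(nodes):
        lo, hi = node['interval']
        if lo <= fv <= hi and any(dist(new_point, m) <= radius for m in node['members']):
            hits.append(i)
    return hits
```

Once the node (or nodes) is identified, its stored features, e.g. the fraction of fraudulent transactions among its members, characterize the new point.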

The Bankruptcy Prediction Analysis : Focused on Post IMF KSE-listed Companies (기업도산 예측력 분석방법에 대한 연구 : IMF후 국내 상장회사를 중심으로)

  • Jeong Yu-Seok;Lee Hyun-Soo;Chae Young-Il;Hong Bong-Hwa
    • Journal of Internet Computing and Services
    • /
    • v.7 no.1
    • /
    • pp.75-89
    • /
    • 2006
  • This paper analyzes the bankruptcy prediction power of three models: Multivariate Discriminant Analysis (MDA), logit analysis, and neural networks. To differentiate it from previous research, the study targeted companies that went bankrupt after the 1997 foreign exchange crisis, and all participating companies were randomly selected from KSE-listed manufacturing companies to improve the prediction accuracy and validity of the models. To ensure meaningful bankruptcy prediction, the training data and testing data were not drawn from the same period. The result is that, when the models fitted on the training data were executed on the testing data, the prediction accuracy of the neural network was superior to that of the logit analysis and MDA models.


Comparative Study of NIR-based Prediction Methods for Biomass Weight Loss Profiles

  • Cho, Hyun-Woo;Liu, J. Jay
    • Clean Technology
    • /
    • v.18 no.1
    • /
    • pp.31-37
    • /
    • 2012
  • Biomass has become a major feedstock for bioenergy and other bio-based products because of its renewability and environmental benefits. Various studies have been conducted on the prediction of crucial characteristics of biomass, including the active use of spectroscopy data. Near-infrared (NIR) spectroscopy has been widely used because of its attractive features: it is non-destructive and cost-effective, producing fast and reliable analysis results. This work developed a multivariate statistical scheme for predicting weight loss profiles from NIR spectra measured for six lignocellulosic biomass types. Wavelet analysis was used as a compression tool to suppress irrelevant noise and to select features or wavelengths that better explain the NIR data. The developed scheme was demonstrated on real NIR data sets, in which different prediction models were evaluated in terms of prediction performance. The benefits of proper pretreatment of NIR spectra are also shown. In our case, compressing the high-dimensional NIR spectra with wavelets and then building a PLS model yielded more reliable prediction results without handling the full set of noisy data. This work showed that the developed scheme can be readily applied to rapid analysis of biomass.
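
The wavelet-compression step can be illustrated with a single-level Haar transform in plain Python. This is a stand-in sketch: the abstract does not state which wavelet family the authors used, and a real pipeline would keep selected detail coefficients rather than discarding them all.

```python
def haar_dwt(signal):
    """One level of the Haar wavelet transform: pairwise averages (approximation)
    and pairwise differences (detail), orthonormally scaled by 1/sqrt(2)."""
    s2 = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s2 for i in range(0, len(signal), 2)]
    return approx, detail

def compress(signal, levels=2):
    """Keep only the approximation coefficients for `levels` passes, halving
    the dimension each time (length must be divisible by 2**levels)."""
    a = list(signal)
    for _ in range(levels):
        a, _ = haar_dwt(a)
    return a
```

Applied to a 1000+-channel NIR spectrum, two or three levels of this kind of compression shrink the input to a PLS model by 4-8x while retaining the broad spectral shape.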

A new finite element procedure for fatigue life prediction of AL6061 plates under multiaxial loadings

  • Tarar, Wasim;Herman Shen, M.H.;George, Tommy;Cross, Charles
    • Structural Engineering and Mechanics
    • /
    • v.35 no.5
    • /
    • pp.571-592
    • /
    • 2010
  • An energy-based fatigue life prediction framework was previously developed by the authors for predicting axial, bending and shear fatigue life at various stress ratios. The framework for predicting fatigue life via energy analysis was based on a new constitutive law, which states the following: the amount of energy required to fracture a material is constant. In the first part of this study, the energy expressions that make up the constitutive law are equated in the form of the total strain energy and the distortion energy dissipated in a fatigue cycle. The resulting equation is further evaluated to obtain the equivalent stress per cycle using energy-based methodologies. The equivalent stress expressions are developed for both biaxial and multiaxial fatigue loads and are used to predict the number of cycles to failure based on the previously developed prediction criterion. The equivalent stress expressions developed in this study are further used in a new finite element procedure to predict the fatigue life of two- and three-dimensional structures. In the second part of this study, a new quadrilateral fatigue finite element is developed by integrating the constitutive law into the minimum potential energy formulation. This new QUAD-4 element is capable of simulating biaxial fatigue problems. The final output of the finite element analysis, both with the equivalent stress approach and with the new QUAD-4 fatigue element, is the number of cycles to failure for each element, sorted in ascending or descending order. The new finite element framework can therefore provide the number of cycles to failure at each location in gas turbine engine structural components. In order to obtain experimental data for comparison, an Al6061-T6 plate was tested using a previously developed vibration-based testing framework. The finite element analysis was performed for Al6061-T6 aluminum and the results were compared with the experimental results.
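
The constitutive law stated above implies that once the energy dissipated per cycle is known for each element, cycles to failure follow by division, and the per-element life map the framework outputs can be sketched as below. This is a toy illustration (names hypothetical), not the authors' finite element code.

```python
def fatigue_life_map(fracture_energy, element_energies):
    """Cycles to failure per element, sorted ascending (most critical first).

    Under the energy-based law (energy to fracture is a material constant),
    N = fracture_energy / (energy dissipated per fatigue cycle).
    element_energies: {element_id: energy dissipated per cycle}.
    Returns a list of (element_id, N_cycles) pairs.
    """
    lives = [(eid, fracture_energy / dw) for eid, dw in element_energies.items()]
    return sorted(lives, key=lambda t: t[1])
```

The sorted output mirrors the abstract's description: a ranking of locations by cycles to failure, with the shortest-lived element first.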

Comparison of Chlorophyll-a Prediction and Analysis of Influential Factors in Yeongsan River Using Machine Learning and Deep Learning (머신러닝과 딥러닝을 이용한 영산강의 Chlorophyll-a 예측 성능 비교 및 변화 요인 분석)

  • Sun-Hee, Shim;Yu-Heun, Kim;Hye Won, Lee;Min, Kim;Jung Hyun, Choi
    • Journal of Korean Society on Water Environment
    • /
    • v.38 no.6
    • /
    • pp.292-305
    • /
    • 2022
  • The Yeongsan River, one of the four largest rivers in South Korea, has been facing difficulties with water quality management due to algal blooms. The algal bloom problem has worsened, especially after the construction of two weirs in the mainstream of the Yeongsan River. Therefore, prediction and factor analysis of Chlorophyll-a (Chl-a) concentration are needed for effective water quality management. In this study, Chl-a prediction models were developed and their performance evaluated using machine and deep learning methods: Deep Neural Network (DNN), Random Forest (RF), and eXtreme Gradient Boosting (XGBoost). Moreover, the correlation analysis and the feature importance results were compared to identify the major factors affecting Chl-a concentration. All models showed high prediction performance, with an R2 value of 0.9 or higher. In particular, XGBoost showed the highest prediction accuracy of 0.95 on the test data. The feature importance results suggested that ammonia (NH3-N) and phosphate (PO4-P) were major factors common to the three models for managing Chl-a concentration. These results confirmed that the three machine learning methods, DNN, RF, and XGBoost, are powerful methods for predicting water quality parameters, and that comparing feature importance with correlation analysis provides a more accurate assessment of the major factors.
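
The R2 criterion used above to rank the models is straightforward to compute directly; a minimal stdlib sketch (the study itself would have used library implementations):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot, where SS_tot is the
    variance of the observations about their mean and SS_res is the residual
    sum of squares of the predictions."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot
```

R2 = 1 means a perfect fit; values can go negative when a model predicts worse than the mean of the observations.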

Energy analysis-based core drilling method for the prediction of rock uniaxial compressive strength

  • Qi, Wang;Shuo, Xu;Ke, Gao Hong;Peng, Zhang;Bei, Jiang;Hong, Liu Bo
    • Geomechanics and Engineering
    • /
    • v.23 no.1
    • /
    • pp.61-69
    • /
    • 2020
  • The uniaxial compressive strength (UCS) of rock is a basic parameter in underground engineering design. The disadvantages of the commonly employed laboratory testing method are untimely testing, difficulty in core testing of broken rock mass, and a long and complicated onsite testing process. Therefore, the development of a fast and simple in situ rock UCS testing method for field use is urgent. In this study, a multi-function digital rock drilling and testing system, and a digital core bit dedicated to the system, are independently developed and employed in digital drilling tests on rock specimens with different strengths. An energy analysis is performed during rock cutting to estimate the energy consumed by the drill bit to remove a unit volume of rock. Two quantitative relationship models between the energy analysis-based core drilling parameters (ECD) and rock UCS (ECD-UCS models) are established in this manuscript by regression analysis and by support vector machine (SVM), and their predictive abilities are comparatively analysed. The results show that the mean values of the relative difference between the predicted rock UCS values and the UCS values measured by the laboratory uniaxial compression test in the prediction set are 3.76 MPa and 4.30 MPa, respectively, with standard deviations of 2.08 MPa and 4.14 MPa. The regression analysis-based ECD-UCS model has the more stable predictive ability. An energy analysis-based core drilling method for the prediction of UCS is thus proposed, enabling quick and convenient in situ testing of rock UCS.
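
The regression-analysis-based ECD-UCS model can be sketched as an ordinary least-squares fit of UCS against the drilling-energy parameter. The linear form below is an assumption for illustration only; the abstract does not give the paper's exact model equation.

```python
def fit_linear(x, y):
    """Ordinary least squares for y = slope * x + intercept, e.g. UCS (MPa)
    against specific drilling energy ECD. Returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def predict(model, ecd):
    slope, intercept = model
    return slope * ecd + intercept

def mean_abs_error(model, x, y):
    """Mean absolute difference between predicted and measured values,
    analogous to the relative-difference statistic reported in the abstract."""
    return sum(abs(predict(model, v) - t) for v, t in zip(x, y)) / len(x)
```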

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults have a ripple effect on the local and national economy, in addition to affecting stakeholders such as the managers, employees, creditors, and investors of the bankrupt companies. Before the Asian financial crisis, the Korean government only analyzed SMEs and tried to improve the forecasting power of a single default prediction model, rather than developing various corporate default models. As a result, even the large corporations known as 'chaebol' went bankrupt. Even after that, the analysis of corporate defaults remained focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid a sudden total collapse like the 'Lehman Brothers case' of the global financial crisis. The key variables used in corporate default prediction vary over time: comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of the predictive variables in Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for this time-dependent bias by means of a time series algorithm reflecting dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data, from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively.
In order to construct a bankruptcy model that is consistent through time, we first train a deep learning time series model using data from before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data covering the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the learning data and excellent prediction power. After that, each bankruptcy prediction model is retrained on the combined learning and validation data (2000~2008), applying the optimal parameters found in validation. Finally, the corporate default prediction models trained over these nine years are evaluated and compared on the test data (2009), and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis, logit model), it is shown that the deep learning time series model based on the three resulting bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). The independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms is compared. Corporate data come with limitations: nonlinear variables, multi-collinearity among variables, and a lack of data.
While the logit model handles nonlinearity, the Lasso regression model addresses the multi-collinearity problem, and the deep learning time series algorithm, with its variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis, and ultimately towards intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling, and it is also more effective in prediction power. Through the Fourth Industrial Revolution, the current government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults; it is therefore hoped that it will serve as comparative reference material for non-specialists beginning to combine financial data with deep learning time series algorithms.
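
The chronological train/validation/test split described in the abstract (7, 2, and 1 years, with no shuffling, so that evaluation respects the arrow of time) can be sketched as follows; the record structure and field name are hypothetical.

```python
def chronological_split(records, train_end, valid_end):
    """Split yearly records into train/validation/test by calendar year,
    e.g. train_end=2006, valid_end=2008 gives 2000-2006 for training,
    2007-2008 (the crisis period) for validation, and 2009 for testing."""
    train = [r for r in records if r['year'] <= train_end]
    valid = [r for r in records if train_end < r['year'] <= valid_end]
    test = [r for r in records if r['year'] > valid_end]
    return train, valid, test
```

Keeping the crisis years inside the validation set, as the study does, forces the tuned model to cope with a regime change before it is scored on post-crisis data.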

Statistical Estimate and Prediction Values with Reference to Chronological Change of Body Height and Weight in Korean Youth (한국인 청소년 신장과 체중의 시대적 변천에 따른 통계학적 추정치에 관한 연구)

  • 강동석;성웅현;윤태영;최중명;박순영
    • Korean Journal of Health Education and Promotion
    • /
    • v.13 no.2
    • /
    • pp.130-166
    • /
    • 1996
  • Using data reported by other researchers from 1962 to 1994 (33 years), this study compared body height and body weight by age and sex, obtained estimated values of body height and body weight by age and sex for the same period, and computed predicted values of body height and body weight for ages 6 to 14 from 1995 to 2000. The surveys and measurements were carried out over one year, from October 1st, 1994 to September 30th, 1995. As shown in 〈Table 1〉, in order to establish the chronological regression models of body height and body weight and to calculate their estimated and predicted values from 17 well-grounded representative research papers, this study statistically tested the adequacy of a linear regression model by residual analysis before fitting a simple linear regression model with year as the independent variable and body height or weight as the dependent variable, computed theoretical values from 1962 to 1994 by means of 2nd- or 3rd-degree polynomial regression models, and with these results produced predicted values for 1995 to 2000. 1. Chronological Change of Body Height and Body Weight: The analysis results from the regression models of chronological body height and body weight for ages 6-16 in both sexes from 1962 to 1994 are given in 〈Table 2-20〉. While the values measured by the respective researchers fluctuated somewhat from year to year, the theoretical and predicted values increased steadily, and all values showed a straight-line growth trend with increasing age. That is, for 6-year-olds, males measured 109.93 cm in 1962 and females 108.93 cm, increasing to 118.0 cm for males and 113.9 cm for females. In the theoretical and predicted values, males increased from 109.88 cm to 117.89 cm and females from 109.27 cm to 115.64 cm, respectively. The same tendency held for all ages. 2. Comparison of Measured and Predicted Values of Body Height and Body Weight in 1994: As shown in 〈Table 21〉, the measured and predicted values of body height by age and sex showed a similar tendency with little difference; for body weight, the predicted values for males were slightly low at all ages, while the predicted values for females were high in adolescence but, on the contrary, low in adulthood. 3. Predicted Values of Body Height and Body Weight from 1995 to 2000: The predictions showed that body height and body weight will increase markedly in adolescence but slowly in adulthood. This study indicates that the Korean physique is improving and should continue to be measured hereafter.
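
The 2nd- or 3rd-degree polynomial regression used for the theoretical and predicted values can be sketched via the normal equations in plain Python. This is a generic illustration of polynomial least squares, not the authors' computation.

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations, solved by
    Gaussian elimination with partial pivoting on the (degree+1)x(degree+1)
    system. Returns coefficients [c0, c1, ...] for c0 + c1*x + c2*x^2 + ..."""
    m = degree + 1
    # normal equations A c = b with A[i][j] = sum x^(i+j), b[i] = sum y * x^i
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, m))) / A[r][r]
    return coef

def polyval(coef, x):
    """Evaluate the fitted polynomial at x, e.g. to extrapolate to a future year."""
    return sum(c * x ** i for i, c in enumerate(coef))
```

For year-valued inputs like 1962-1994, centering the years first (e.g. x = year - 1978) keeps the normal equations well conditioned.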
