• Title/Summary/Keyword: deep machine learning


A Study on the traffic flow prediction through Catboost algorithm (Catboost 알고리즘을 통한 교통흐름 예측에 관한 연구)

  • Cheon, Min Jong;Choi, Hye Jin;Park, Ji Woong;Choi, HaYoung;Lee, Dong Hee;Lee, Ook
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.3 / pp.58-64 / 2021
  • As the number of registered vehicles increases, traffic congestion worsens, which can inhibit urban social and economic development. To prevent congestion, various AI techniques have been applied to accurate traffic flow prediction. This paper uses data from a VDS (Vehicle Detection System) as input variables and predicts traffic flow at five levels (free flow, somewhat delayed, delayed, somewhat congested, and congested) rather than the usual two (free flow and congested). The CatBoost model, a machine-learning algorithm, was used to predict the five levels, and its accuracy was compared with that of other algorithms. In addition, a preprocessed model tuned with RandomizedSearchCV and One-Hot Encoding was compared with the naive one. As a result, the CatBoost model without any hyperparameter tuning showed the highest accuracy, 93%. Overall, the CatBoost model analyzed and predicted the large volume of categorical traffic data better than the other machine learning and deep learning models tested, and its default parameters were already well suited to the task.
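A minimal sketch of the kind of five-level CatBoost classifier the abstract describes, trained with default hyperparameters as the study reports. The feature names and data below are hypothetical stand-ins; the paper's actual VDS variables are not specified.

```python
# Sketch: five-level traffic-flow classification with CatBoost defaults.
# Features are hypothetical stand-ins for the paper's VDS variables.
import numpy as np
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0, 120, n),   # hypothetical mean speed (km/h)
    rng.uniform(0, 3000, n),  # hypothetical traffic volume (veh/h)
    rng.integers(0, 24, n),   # hour of day
])
y = rng.integers(0, 5, n)     # 0 = free flow ... 4 = congested

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# The study found CatBoost with untuned (default) hyperparameters performed best.
model = CatBoostClassifier(loss_function="MultiClass", verbose=False)
model.fit(X_tr, y_tr)
print("accuracy:", (model.predict(X_te).ravel() == y_te).mean())
```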

A Study on the Win-Loss Prediction Analysis of Korean Professional Baseball by Artificial Intelligence Model (인공지능 모델에 따른 한국 프로야구의 승패 예측 분석에 관한 연구)

  • Kim, Tae-Hun;Lim, Seong-Won;Koh, Jin-Gwang;Lee, Jae-Hak
    • The Journal of Bigdata / v.5 no.2 / pp.77-84 / 2020
  • In this study, we analyzed win-loss prediction for Korean professional baseball using artificial intelligence models. Based on the models, we predicted the winner of each game as well as each team's final rank in the league, and we developed a website to aid viewers' understanding. For each game's first, third, and fifth innings, we selected the model that achieved the highest accuracy and the smallest error, and generated league rankings from its results. The rankings were generated from predictions covering May 5, 2020, the season's opening day, through August 30, 2020; for games in which the Kia Tigers did not play, actual game results were used instead. KNN and AdaBoost were selected as the best-performing machine learning models. As a result, we observed a decreasing trend in the ranking error of the predictions as the season progressed. The deep learning model recorded 89% accuracy and showed the same decreasing trend in ranking error as the machine learning models. We expect this study's results to apply to future KBO predictions as well as to other fields. Posting the AI-generated winning percentage for each inning could enhance broadcasts and bring new interest to KBO fans. Furthermore, the per-inning predictions would give teams insights for analyzing data and devising successful strategies.
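An illustrative sketch of win/loss classification with the two models the study selected, KNN and AdaBoost. The features (run differential, season win rate) and synthetic data are assumptions for demonstration, not the paper's actual inputs.

```python
# Sketch: per-game win prediction with KNN and AdaBoost on hypothetical features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.integers(-5, 6, n),        # hypothetical run differential after the 5th inning
    rng.uniform(0, 1, n),          # hypothetical team season win rate
])
y = (X[:, 0] + rng.normal(0, 1, n) > 0).astype(int)  # 1 = home team wins

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("AdaBoost", AdaBoostClassifier(n_estimators=100, random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()   # 5-fold cross-validated accuracy
    print(f"{name}: {acc:.3f}")
```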

A Study on Reducing Learning Time of Deep-Learning using Network Separation (망 분리를 이용한 딥러닝 학습시간 단축에 대한 연구)

  • Lee, Hee-Yeol;Lee, Seung-Ho
    • Journal of IKEEE / v.25 no.2 / pp.273-279 / 2021
  • In this paper, we propose an algorithm that shortens learning time by partitioning the deep learning structure and training the parts individually. The proposed algorithm consists of four processes: setting the network split point, feature vector extraction, feature noise removal, and class classification. First, in the split-point setting process, the point at which the network structure is divided for effective feature vector extraction is set. Second, in the feature vector extraction process, feature vectors are extracted without additional learning, using previously learned weights. Third, in the feature noise removal process, the extracted feature vectors are taken as input and the output value of each class is learned to remove noise from the data. Fourth, in the class classification process, the noise-removed feature vector is fed into a multi-layer perceptron structure, whose output is trained to produce the final result. To evaluate the performance of the proposed algorithm, we experimented with the Extended Yale B face database. In the experiment, the proposed algorithm reduced the time required for a single training pass by 40.7% compared with the existing algorithm, and the number of training iterations needed to reach the target recognition rate was also reduced. These results confirm that both the single-pass training time and the total training time improved over the existing algorithm.
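A hedged sketch of the core idea of splitting the network: a frozen, previously trained front end extracts feature vectors once, and only a small MLP head is then trained on the cached features. The shapes, layer sizes, and random stand-in data are assumptions; the paper's exact architecture and its noise-removal stage are not reproduced here.

```python
# Sketch: split training -- frozen feature extractor + separately trained MLP head.
import numpy as np
import tensorflow as tf

# Stand-in for face images (Extended Yale B crops would replace this).
x = np.random.rand(256, 32, 32, 1).astype("float32")
y = np.random.randint(0, 10, 256)

# Pretend this backbone was trained earlier; here we only freeze and reuse it.
backbone = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
])
backbone.trainable = False

# Extract feature vectors once, with no additional learning in the backbone.
features = backbone.predict(x, verbose=0)

# Train only the lightweight MLP classifier on the cached feature vectors.
head = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(features.shape[1],)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
head.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
             metrics=["accuracy"])
head.fit(features, y, epochs=3, verbose=0)
```

Because the backbone's forward pass is computed only once, each subsequent epoch touches only the small head, which is where the single-pass time savings come from.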

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Corporate defaults have a ripple effect on the local and national economy, beyond the stakeholders of the bankrupt company itself: managers, employees, creditors, and investors. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing a variety of corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even afterward, analysis of past corporate defaults remained focused on specific variables, and when the government carried out restructuring immediately after the global financial crisis, it concentrated only on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid a sudden total collapse like the 'Lehman Brothers Case' of the global financial crisis. The key variables that drive corporate defaults vary over time: Deakin's (1972) study, following the analyses of Beaver (1967, 1968) and Altman (1968), shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found that the importance of predictive variables shifted between Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for time-dependent bias with a time series analysis algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data, from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively. To construct a bankruptcy model that stays consistent as time changes, we first train a time series deep learning model on the pre-crisis data (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that includes the financial crisis period (2007-2008). As a result, we construct a model that shows a pattern similar to the training results and excellent predictive power. After that, each bankruptcy prediction model is retrained on the combined training and validation data (2000-2008), applying the optimal parameters found during validation. Finally, each corporate default prediction model is evaluated and compared on the test data (2009) using the models trained over the preceding nine years, which demonstrates the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that a deep learning time series model based on the three resulting variable sets is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms is compared. Corporate data pose the limitations of nonlinear variables, multicollinearity among variables, and a lack of data: the logit model handles nonlinearity, the Lasso regression model addresses multicollinearity, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis toward automated AI analysis and, ultimately, toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and is more effective in predictive power. Amid the Fourth Industrial Revolution, the current government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and we hope it will serve as comparative material for non-specialists beginning studies that combine financial data and deep learning time series algorithms.
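A hedged sketch of the two-stage pipeline the abstract outlines: Lasso-based variable selection over financial ratios, then an LSTM over the yearly sequence of the selected ratios. The data, dimensions, and network sizes are hypothetical; the paper's actual variable groups and architecture are not given.

```python
# Sketch: Lasso variable selection followed by an LSTM over yearly ratios.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n_firms, n_years, n_ratios = 300, 7, 20
X = rng.normal(size=(n_firms, n_years, n_ratios))  # yearly financial ratios (synthetic)
y = rng.integers(0, 2, n_firms)                    # 1 = default (synthetic labels)

# Stage 1: select variables with Lasso on the most recent year's ratios.
lasso = LassoCV(cv=5).fit(X[:, -1, :], y)
selected = np.flatnonzero(lasso.coef_)
if selected.size == 0:            # fall back to all ratios if Lasso zeroes everything
    selected = np.arange(n_ratios)

# Stage 2: LSTM over the time series of only the selected ratios.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(n_years, len(selected))),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.fit(X[:, :, selected], y, epochs=3, verbose=0)
```

The key design point mirrors the abstract: the LSTM consumes the whole yearly sequence per firm, so drift in which ratios matter over time is captured dynamically rather than by a single static cross-section.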

Performance Evaluation of Machine Learning Optimizers (기계학습 옵티마이저 성능 평가)

  • Joo, Gihun;Park, Chihyun;Im, Hyeonseung
    • Journal of IKEEE / v.24 no.3 / pp.766-776 / 2020
  • Recently, as interest in machine learning (ML) has increased and research using ML has become active, finding an optimal hyperparameter combination for various ML models has grown in importance. In this paper, among the various hyperparameters, we focused on ML optimizers and measured and compared the performance of the major ones on several datasets. In particular, we compared nine optimizers, from the most basic SGD through Momentum, NAG, AdaGrad, RMSProp, AdaDelta, Adam, and AdaMax to Nadam, on the MNIST, CIFAR-10, IRIS, TITANIC, and Boston Housing Price datasets. Experimental results showed that with Adam or Nadam, the loss of the various ML models decreased most rapidly and their F1 scores also improved the most. Meanwhile, AdaMax showed considerable instability during training, and AdaDelta showed slower convergence and lower performance than the other optimizers.
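A minimal sketch of this kind of optimizer comparison: the same small network is retrained on MNIST under each Keras optimizer and the final training loss is compared. The architecture and single-epoch budget are illustrative choices, not the paper's setup.

```python
# Sketch: compare Keras optimizers on the same model and dataset.
import tensorflow as tf

(x_tr, y_tr), _ = tf.keras.datasets.mnist.load_data()
x_tr = x_tr.reshape(-1, 784).astype("float32") / 255.0

def make_model():
    return tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

# Momentum and NAG correspond to SGD(momentum=0.9) and SGD(momentum=0.9,
# nesterov=True); the string identifiers below cover the remaining optimizers.
for name in ["sgd", "adagrad", "rmsprop", "adadelta", "adam", "adamax", "nadam"]:
    model = make_model()
    model.compile(optimizer=name, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    hist = model.fit(x_tr, y_tr, epochs=1, batch_size=128, verbose=0)
    print(f"{name}: loss={hist.history['loss'][-1]:.4f}")
```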

Flood prediction in the Namgang Dam basin using a long short-term memory (LSTM) algorithm

  • Lee, Seungsoo;An, Hyunuk;Hur, Youngteck;Kim, Yeonsu;Byun, Jisun
    • Korean Journal of Agricultural Science / v.47 no.3 / pp.471-483 / 2020
  • Flood prediction is an important issue for preventing damage from flood inundation caused by the increasingly intense rainfall that accompanies climate change. In recent years, machine learning algorithms have been receiving attention in many scientific fields, including hydrology, water resources, and natural hazards. In this study, the performance of a machine learning algorithm was investigated for predicting the water elevation of a river. The aim was to develop a new method for securing a sufficiently long lead time for flood defense by predicting river water elevation with a long short-term memory (LSTM) technique. The water elevation data at the Oisong gauging station were selected to evaluate its applicability. The test data were water elevations measured by K-water at 1-hour intervals from 15 February 2013 to 26 August 2018, approximately 5 years and 6 months. To investigate predictability in terms of data characteristics and the lead time of the prediction, the data were divided into a same-interval data set (group A) and a time-averaged data set (group B), and predictability was evaluated across a total of 36 cases. Based on the results, group A showed more stable water elevation prediction skill than group B for lead times from 1 to 6 h. Thus, the LSTM technique using only measured water elevation data can secure an appropriate lead time for flood defense in a river.
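A hedged sketch of lead-time prediction with an LSTM: a sliding window of past hourly elevations predicts the level a fixed number of hours ahead. The window length, the 6 h lead time, and the synthetic series below are illustrative stand-ins for the Oisong gauge data.

```python
# Sketch: windowed LSTM forecasting of water elevation with a fixed lead time.
import numpy as np
import tensorflow as tf

# Synthetic hourly water-elevation series (stand-in for the gauging-station data).
t = np.arange(5000)
series = 10 + np.sin(t / 24) + 0.1 * np.random.randn(t.size)

window, lead = 24, 6   # 24 h of history predicts the level 6 h ahead
X = np.stack([series[i:i + window] for i in range(series.size - window - lead)])
y = series[window + lead:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[..., None], y, epochs=2, batch_size=64, verbose=0)
```

Varying `lead` from 1 to 6 reproduces the lead-time experiment in spirit: longer leads generally degrade accuracy, which is the trade-off the study quantifies across its 36 cases.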

Machine Learning-based Concrete Crack Detection Framework for Facility Maintenance (시설물의 유지관리를 위한 기계학습 기반 콘크리트 균열 감지 프레임워크)

  • Ji, Bongjun
    • Journal of the Korean GEO-environmental Society / v.22 no.10 / pp.5-12 / 2021
  • The deterioration of facilities is an unavoidable phenomenon. For the management of aging facilities, cracks can be detected and tracked, and the condition of a facility can be inferred indirectly from them, so crack detection plays a crucial role in managing aged facilities. Conventional maintenance is conducted using crack detection results; for example, maintenance activities can be performed to prevent further deterioration. Currently, however, most crack detection relies solely on human judgment: when the facility area is large, excessive cost and time are required, and judgments may differ depending on the expert's competence, causing reliability problems. This paper proposes a machine learning-based concrete crack detection framework to overcome these limitations. Fully automated concrete crack detection was possible through the proposed framework, which showed a high accuracy of 96%. Effective and efficient facility management is expected to be possible through the framework proposed in this paper.
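The abstract does not specify the model inside the framework, so the following is only a generic baseline for the task it describes: a small CNN classifying surface patches as crack or no crack, with random stand-in data.

```python
# Sketch: generic binary crack/no-crack patch classifier (not the paper's model).
import numpy as np
import tensorflow as tf

# Stand-in for labeled concrete-surface image patches (1 = crack present).
x = np.random.rand(200, 64, 64, 3).astype("float32")
y = np.random.randint(0, 2, 200)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2, verbose=0)
```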

Conformity Assessment of Machine Learning Algorithm for Particulate Matter Prediction (미세먼지 예측을 위한 기계 학습 알고리즘의 적합성 평가)

  • Cho, Kyoung-woo;Jung, Yong-jin;Kang, Chul-gyu;Oh, Chang-heon
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.1 / pp.20-26 / 2019
  • Because of the effects of particulate matter on human health, various studies are being conducted to predict it using past data measured by the atmospheric environment monitoring network. However, it is difficult to reproduce precisely the measurement environments and detailed conditions of previously designed predictive models, and problems such as missing weather data make it necessary to design a new predictive model building on existing research results. In this paper, as a preliminary study for particulate matter prediction, the suitability of candidate algorithms was evaluated by designing prediction models with multiple linear regression (MLR) and an artificial neural network (MLP), both machine learning approaches. Comparing prediction performance by RMSE gave 18.13 for the MLR model and 14.31 for the MLP model, so the artificial neural network model was the more suitable for predicting particulate matter concentration.
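A sketch of the two compared model families, scored by RMSE as in the study. The weather features and synthetic target are hypothetical; the paper's actual inputs are not listed in the abstract.

```python
# Sketch: multiple linear regression vs. MLP for PM prediction, compared by RMSE.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))  # hypothetical weather features (temp, humidity, ...)
y = 30 + X @ np.array([5.0, -3.0, 2.0, 1.0]) + rng.normal(0, 5, n)  # synthetic PM level

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [
    ("MLR", LinearRegression()),
    ("MLP", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
]:
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name} RMSE: {rmse:.2f}")
```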

A Pre-processing Study to Solve the Problem of Rare Class Classification of Network Traffic Data (네트워크 트래픽 데이터의 희소 클래스 분류 문제 해결을 위한 전처리 연구)

  • Ryu, Kyung Joon;Shin, DongIl;Shin, DongKyoo;Park, JeongChan;Kim, JinGoog
    • KIPS Transactions on Software and Data Engineering / v.9 no.12 / pp.411-418 / 2020
  • In the field of information security, an IDS (Intrusion Detection System) is normally classified into one of two categories: signature-based IDS and anomaly-based IDS. Many studies of anomaly-based IDS have analyzed network traffic data generated in cyberspace with machine learning algorithms. In this paper, we studied pre-processing methods to overcome the performance degradation caused by rare classes. We evaluated the classification performance of a machine learning algorithm on data sets reconstructed around rare and semi-rare classes. After reconstructing the data into three different sets, wrapper and filter feature selection methods were applied in sequence, and each data set was normalized with a quantile scaler. A deep neural network model was used for learning and validation, and the evaluation results were compared using true positive and false negative values. We obtained improved classification performance on all three data sets.
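A hedged sketch of the preprocessing chain the abstract describes: filter-style feature selection, quantile scaling, and a deep neural network classifier, here as a scikit-learn pipeline. The dimensions, class proportions, and the choice of mutual information as the filter criterion are assumptions, and synthetic data stands in for the network-traffic records.

```python
# Sketch: feature selection -> quantile scaling -> DNN, on imbalanced classes.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import QuantileTransformer

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 40))
# Imbalanced labels: class 2 plays the role of the rare class.
y = rng.choice([0, 1, 2], size=2000, p=[0.70, 0.28, 0.02])

pipe = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=20)),       # filter feature selection
    ("scale", QuantileTransformer(output_distribution="uniform")),  # quantile scaler
    ("dnn", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)),
])
pipe.fit(X, y)
```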

Application of Machine Learning on Voice Signals to Classify Body Mass Index - Based on Korean Adults in the Korean Medicine Data Center (머신러닝 기반 음성분석을 통한 체질량지수 분류 예측 - 한국 성인을 중심으로)

  • Kim, Junho;Park, Ki-Hyun;Kim, Ho-Seok;Lee, Siwoo;Kim, Sang-Hyuk
    • Journal of Sasang Constitutional Medicine / v.33 no.4 / pp.1-9 / 2021
  • Objectives: The purpose of this study was to test whether an individual's Body Mass Index (BMI) class can be predicted by analyzing the voice data collected at the Korean medicine data center (KDC) using machine learning. Methods: We propose a convolutional neural network (CNN)-based BMI classification model. The subjects were Korean adults who completed voice recording and BMI measurement in 2006-2015 among the data established at the Korean Medicine Data Center. Of these, 2,825 records were used to train the model and 566 to assess its performance. Mel-frequency cepstral coefficients (MFCC) extracted from vowel utterances were used as the CNN input features. The model was built to predict four groups defined by gender and BMI: overweight male, normal male, overweight female, and normal female. Results & Conclusions: Performance was evaluated using F1-score and accuracy. For the four-group prediction, the average accuracy was 0.6016 and the average F1-score was 0.5922. Although the model discriminated gender well, performance improvement through follow-up studies is needed to distinguish BMI within each gender. As research on deep learning is active, further performance gains are expected from future work.
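An illustrative sketch of the described approach: MFCC features from vowel recordings fed into a small CNN that predicts one of the four gender/BMI groups. The MFCC shape and the network layout are assumptions, not the paper's exact design; in practice the MFCCs could be computed with, e.g., librosa.feature.mfcc.

```python
# Sketch: MFCC "images" classified into 4 gender/BMI groups with a small CNN.
import numpy as np
import tensorflow as tf

# Stand-in MFCC tensors: (samples, coefficients, time frames, 1 channel).
x = np.random.rand(256, 20, 100, 1).astype("float32")
y = np.random.randint(0, 4, 256)  # 0-3: overweight/normal x male/female

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(20, 100, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=2, verbose=0)
```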