• Title/Summary/Keyword: Machine learning algorithm

Search Results: 1,514

Thermal post-buckling measurement of the advanced nanocomposites reinforced concrete systems via both mathematical modeling and machine learning algorithm

  • Minggui Zhou;Gongxing Yan;Danping Hu;Haitham A. Mahmoud
    • Advances in nano research / v.16 no.6 / pp.623-638 / 2024
  • This study investigates the thermal post-buckling behavior of concrete eccentric annular sector plates reinforced with graphene oxide powders (GOPs). Employing the minimum total potential energy principle, the plates' stability and response under thermal loads are analyzed. The Haber-Schaim foundation model is utilized to account for the support conditions, while the transform differential quadrature method (TDQM) is applied to solve the governing differential equations efficiently. The integration of GOPs significantly enhances the mechanical properties and stability of the plates, making them suitable for advanced engineering applications. Numerical results demonstrate the critical thermal loads and post-buckling paths, providing valuable insights into the design and optimization of such reinforced structures. This study presents a machine learning algorithm designed to predict complex engineering phenomena using datasets derived from the presented mathematical modeling. By leveraging advanced data analytics and machine learning techniques, the algorithm effectively captures and learns intricate patterns from the mathematical models, providing accurate and efficient predictions. The methodology involves generating comprehensive datasets from mathematical simulations, which are then used to train the machine learning model. The trained model is capable of predicting various engineering outcomes, such as stress, strain, and thermal responses, with high precision. This approach significantly reduces the computational time and resources required for traditional simulations, enabling rapid and reliable analysis. This comprehensive approach offers a robust framework for predicting the thermal post-buckling behavior of reinforced concrete plates, contributing to the development of resilient and efficient structural components in civil engineering.
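
As an illustration of the surrogate-modeling workflow the abstract describes (generate a dataset from the mathematical model, then train a machine learning model on it), the following sketch trains a random forest on synthetic simulation data. The input parameters, the stand-in formula for the TDQM solution, and all variable names are assumptions for illustration, not the paper's actual model.

```python
# Hedged sketch: train an ML surrogate on data generated by a (stand-in)
# mathematical model of thermal post-buckling. All inputs are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
gop_fraction = rng.uniform(0.0, 0.05, n)        # GOP weight fraction (assumed range)
thickness_ratio = rng.uniform(0.01, 0.1, n)     # thickness-to-radius ratio (assumed)
sector_angle = rng.uniform(30.0, 120.0, n)      # annular sector angle in degrees (assumed)

X = np.column_stack([gop_fraction, thickness_ratio, sector_angle])
# Stand-in for the output of the governing-equation solution (not the paper's model).
critical_thermal_load = (1.0 + 20.0 * gop_fraction) * thickness_ratio ** 2 \
                        / np.sin(np.radians(sector_angle)) + rng.normal(0, 1e-4, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, critical_thermal_load, random_state=0)
surrogate = RandomForestRegressor(n_estimators=300, random_state=0)
surrogate.fit(X_tr, y_tr)
print("R^2 on held-out simulations:", round(surrogate.score(X_te, y_te), 3))
```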

Comparative characteristic of ensemble machine learning and deep learning models for turbidity prediction in a river (딥러닝과 앙상블 머신러닝 모형의 하천 탁도 예측 특성 비교 연구)

  • Park, Jungsu
    • Journal of Korean Society of Water and Wastewater / v.35 no.1 / pp.83-91 / 2021
  • The increased turbidity in rivers during flood events has various effects on water environmental management, including drinking water supply systems. Thus, prediction of turbid water is essential for water environmental management. Recently, various advanced machine learning algorithms have been increasingly used in water environmental management. Ensemble machine learning algorithms such as random forest (RF) and gradient boosting decision tree (GBDT) are some of the most popular machine learning algorithms used for water environmental management, along with deep learning algorithms such as recurrent neural networks. In this study, GBDT, an ensemble machine learning algorithm, and the gated recurrent unit (GRU), a recurrent neural network algorithm, are used to develop models that predict turbidity in a river. The observation frequencies of the input data used for the models were 2, 4, 8, 24, 48, 120 and 168 h. The root-mean-square error-observations standard deviation ratio (RSR) of GRU and GBDT ranges from 0.182 to 0.766 and from 0.400 to 0.683, respectively. Both models show similar prediction accuracy, with an RSR of 0.682 for GRU and 0.683 for GBDT. GRU shows better prediction accuracy when the observation frequency is relatively short (i.e., 2, 4, and 8 h), whereas GBDT shows better prediction accuracy when the observation frequency is relatively long (i.e., 48, 120, and 168 h). The results suggest that the characteristics of the input data should be considered to develop an appropriate model for predicting turbidity.
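
A minimal sketch of the kind of setup the abstract describes: fit a gradient boosting regressor to lagged turbidity observations and score it with RSR (root-mean-square error divided by the standard deviation of the observations). The synthetic data and lag construction are assumptions; scikit-learn's GradientBoostingRegressor stands in for the GBDT implementation used in the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for hourly turbidity observations.
turbidity = rng.gamma(shape=2.0, scale=5.0, size=2000)

# Use the previous `lag` observations as predictors of the next value (assumed setup).
lag = 8
X = np.column_stack([turbidity[i:len(turbidity) - lag + i] for i in range(lag)])
y = turbidity[lag:]

X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.2)

model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05)
model.fit(X_train, y_train)
pred = model.predict(X_test)

rmse = np.sqrt(np.mean((y_test - pred) ** 2))
rsr = rmse / np.std(y_test)          # lower RSR means better agreement
print(f"RSR = {rsr:.3f}")
```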

A Study on Adaptive Learning Model for Performance Improvement of Stream Analytics (실시간 데이터 분석의 성능개선을 위한 적응형 학습 모델 연구)

  • Ku, Jin-Hee
    • Journal of Convergence for Information Technology / v.8 no.1 / pp.201-206 / 2018
  • Recently, as technologies for realizing artificial intelligence have become more common, machine learning is widely used. Machine learning provides insight by collecting large amounts of data, batch processing it, and taking a final action, but the effects of that work are not immediately integrated into the learning process. This paper proposes an adaptive learning model to improve the performance of real-time stream analytics, a major business issue. Adaptive learning generates an ensemble by adapting to the complexity of the data set, and the algorithm uses the data needed to determine the optimal data points to sample. In experiments on six standard data sets, the adaptive learning model outperformed the simple machine learning model for classification in both learning time and accuracy. In particular, the support vector machine showed excellent performance at the end of all ensembles. Adaptive learning is expected to be applicable to a wide range of problems that require models to be adaptively updated as various parameters change over time.
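
The paper's adaptive ensemble is not reproduced here; as a hedged illustration of the underlying idea, updating a model incrementally as stream batches arrive, the sketch below uses scikit-learn's partial_fit interface (SGDClassifier) as a stand-in and evaluates each batch before learning from it.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=6000, n_features=20, random_state=0)
classes = np.unique(y)

model = SGDClassifier(random_state=0)   # stand-in for the paper's adaptive ensemble

batch_size = 500
for start in range(0, len(X), batch_size):
    Xb = X[start:start + batch_size]
    yb = y[start:start + batch_size]
    # Prequential "test-then-train": score the incoming batch, then learn from it.
    if start > 0:
        print(f"batch starting at {start}: accuracy {model.score(Xb, yb):.3f}")
    model.partial_fit(Xb, yb, classes=classes)
```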

Machine Learning-Based Atmospheric Correction Based on Radiative Transfer Modeling Using Sentinel-2 MSI Data and Its Validation Focusing on Forest (농림위성을 위한 기계학습을 활용한 복사전달모델기반 대기보정 모사 알고리즘 개발 및 검증: 식생 지역을 위주로)

  • Yoojin Kang;Yejin Kim;Jungho Im;Joongbin Lim
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.891-907 / 2023
  • Compact Advanced Satellite 500-4 (CAS500-4) is scheduled to be launched to collect high spatial resolution data focusing on vegetation applications. To achieve this goal, accurate surface reflectance retrieval through atmospheric correction is crucial. Therefore, a machine learning-based atmospheric correction algorithm was developed to simulate atmospheric correction from a radiative transfer model using Sentinel-2 data, which have spectral characteristics similar to those of CAS500-4. The algorithm was then evaluated mainly for forest areas. Utilizing the atmospheric correction parameters extracted from Sentinel-2 and GEOKOMPSAT-2A (GK-2A), the atmospheric correction algorithm was developed based on Random Forest and Light Gradient Boosting Machine (LGBM). Between the two machine learning techniques, LGBM performed better when considering both accuracy and efficiency. Except for one station, the results had a correlation coefficient of more than 0.91 and reflected the temporal variations of the Normalized Difference Vegetation Index (i.e., vegetation phenology) well. GK-2A provides Aerosol Optical Depth (AOD) and water vapor, which are essential parameters for atmospheric correction, but additional processing will be required in the future to mitigate the problems caused by their many missing values. This study provided the basis for the atmospheric correction of CAS500-4 by developing a machine learning-based atmospheric correction simulation algorithm.
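
A hedged sketch of the regression setup the abstract describes: learn a mapping from top-of-atmosphere reflectance and atmospheric parameters (AOD, water vapor, geometry) to radiative-transfer-derived surface reflectance with LGBM. The feature set and the synthetic stand-in for the radiative transfer output are assumptions, not the paper's training data.

```python
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
toa_reflectance = rng.uniform(0.0, 0.6, n)   # top-of-atmosphere reflectance (assumed)
aod = rng.uniform(0.0, 1.0, n)               # aerosol optical depth (assumed)
water_vapor = rng.uniform(0.5, 5.0, n)       # column water vapor (assumed)
solar_zenith = rng.uniform(20.0, 70.0, n)    # solar zenith angle (assumed)

X = np.column_stack([toa_reflectance, aod, water_vapor, solar_zenith])
# Stand-in for the radiative-transfer-model output (surface reflectance).
y = toa_reflectance - 0.05 * aod + rng.normal(0, 0.01, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LGBMRegressor(n_estimators=300, learning_rate=0.05)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
r = np.corrcoef(y_te, pred)[0, 1]            # correlation coefficient, as reported in the paper
print(f"correlation coefficient: {r:.3f}")
```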

Prediction of Net Irrigation Water Requirement in paddy field Based on Machine Learning (머신러닝 기법을 활용한 논 순용수량 예측)

  • Kim, Soo-Jin;Bae, Seung-Jong;Jang, Min-Won
    • Journal of Korean Society of Rural Planning / v.28 no.4 / pp.105-117 / 2022
  • This study tested SVM (support vector machine), RF (random forest), and ANN (artificial neural network) machine-learning models that can predict net irrigation water requirements in paddy fields. For the Jeonju and Jeongeup meteorological stations, the net irrigation water requirement was calculated using K-HAS from 1981 to 2021 and set as the label. For each algorithm, twelve models were constructed based on cumulative precipitation, precipitation, crop evapotranspiration, and month. Compared to the CE model, the CEP model had a higher R2 and lower MAE, RMSE, and MSE. Comprehensively considering learning performance and learning time, the RF algorithm is judged to have the best usability, and the predictive power of the five-day model is better than that of the three-day model. The results of this study are expected to provide the scientific information necessary for the decision-making of on-site water managers when connected with weather forecast data. In the future, if the actual amounts of irrigation and supply are measured, a learning model that reflects them should be developed.
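
A minimal sketch of the model comparison described above: train SVM, RF, and ANN regressors on weather-derived features and compare R2, MAE, and RMSE. The synthetic features stand in for the cumulative precipitation, precipitation, crop evapotranspiration, and month inputs; the target formula is invented for illustration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

rng = np.random.default_rng(1)
n = 3000
cum_precip = rng.gamma(2.0, 20.0, n)
precip = rng.gamma(1.5, 5.0, n)
et_crop = rng.uniform(1.0, 8.0, n)
month = rng.integers(4, 10, n)

X = np.column_stack([cum_precip, precip, et_crop, month])
# Stand-in target: requirement rises with evapotranspiration, falls with rainfall.
y = 10 * et_crop - 0.5 * precip - 0.05 * cum_precip + rng.normal(0, 2.0, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "SVM": SVR(),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "ANN": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"{name}: R2={r2_score(y_te, pred):.3f} "
          f"MAE={mean_absolute_error(y_te, pred):.2f} RMSE={rmse:.2f}")
```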

Reward Design of Reinforcement Learning for Development of Smart Control Algorithm (스마트 제어알고리즘 개발을 위한 강화학습 리워드 설계)

  • Kim, Hyun-Su;Yoon, Ki-Yong
    • Journal of Korean Association for Spatial Structures / v.22 no.2 / pp.39-46 / 2022
  • Recently, machine learning has been widely used to solve optimization problems in various engineering fields. In this study, machine learning is applied to the development of a control algorithm for a smart control device for the reduction of seismic responses. For this purpose, a Deep Q-network (DQN), one of the reinforcement learning algorithms, was employed to develop the control algorithm. A single degree of freedom (SDOF) structure with a smart tuned mass damper (TMD) was used as an example structure. The smart TMD system was composed of an MR (magnetorheological) damper instead of a passive damper. The reward design of the reinforcement learning mainly affects the control performance of the smart TMD. Various hyper-parameters were investigated to optimize the control performance of the DQN-based control algorithm. Usually, decreasing the time step of a numerical simulation is desirable to increase the accuracy of the simulation results. However, the numerical simulation results showed that decreasing the time step for reward calculation might decrease the control performance of the DQN-based control algorithm. Therefore, a proper time step for reward calculation should be selected in the DQN training process.
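
The paper's exact reward is not given in the abstract, so the sketch below only illustrates one common reward choice for seismic response reduction: a penalty on displacement and velocity accumulated over a reward-calculation interval that may span several simulation steps. All function names, weights, and time steps are assumptions.

```python
import numpy as np

def step_reward(displacements, velocities, w_disp=1.0, w_vel=0.1):
    """Hypothetical reward for one reward-calculation interval.

    displacements, velocities: SDOF responses at the (finer) simulation
    time steps that fall inside this reward interval.
    """
    # Penalize the peak displacement and the mean-square velocity in the interval.
    return -(w_disp * np.max(np.abs(displacements))
             + w_vel * np.mean(np.asarray(velocities) ** 2))

# Example: simulation step 0.001 s, reward computed every 0.01 s (10 simulation steps).
sim_disp = 0.02 * np.sin(np.linspace(0, np.pi, 10))
sim_vel = np.gradient(sim_disp, 0.001)
print(step_reward(sim_disp, sim_vel))
```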

A Comparative Study on Collision Detection Algorithms based on Joint Torque Sensor using Machine Learning (기계학습을 이용한 Joint Torque Sensor 기반의 충돌 감지 알고리즘 비교 연구)

  • Jo, Seonghyeon;Kwon, Wookyong
    • The Journal of Korea Robotics Society / v.15 no.2 / pp.169-176 / 2020
  • This paper studied the collision detection of robot manipulators for safe collaboration in human-robot interaction. In sensor-based collision detection, the external torque is obtained by subtracting the robot dynamics from the measured torque. To detect collisions using joint torque sensor data, a comparative study was conducted using data-based machine learning algorithms. Data were collected from an actual 3 degree-of-freedom (DOF) robot manipulator, and the data were labeled by thresholding and by hand. Using the support vector machine (SVM), decision tree, and k-nearest neighbors (KNN) methods, we derive the optimal parameters of each algorithm and compare their collision classification performance. The results are analyzed for each method, and an optimal collision status detection model with high prediction accuracy is confirmed.
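
A hedged sketch of the comparison described above: tune SVM, decision tree, and KNN classifiers on external-torque features and compare collision classification accuracy. The synthetic residual-torque data are stand-ins for the 3-DOF joint torque sensor data used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
n = 2000
ext_torque = rng.normal(0, 0.5, (n, 3))          # residual external torque per joint (assumed)
collision = rng.random(n) < 0.3                  # 30% of samples are collisions (assumed)
ext_torque[collision] += rng.normal(2.0, 0.5, (collision.sum(), 3))

X_tr, X_te, y_tr, y_te = train_test_split(ext_torque, collision.astype(int), random_state=0)

searches = {
    "SVM": GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}),
    "DT": GridSearchCV(DecisionTreeClassifier(), {"max_depth": [3, 5, 10]}),
    "KNN": GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [3, 5, 11]}),
}
for name, search in searches.items():
    search.fit(X_tr, y_tr)
    print(f"{name}: best params {search.best_params_}, "
          f"test accuracy {search.score(X_te, y_te):.3f}")
```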

Study on Automatic Bug Triage using Deep Learning (딥 러닝을 이용한 버그 담당자 자동 배정 연구)

  • Lee, Sun-Ro;Kim, Hye-Min;Lee, Chan-Gun;Lee, Ki-Seong
    • Journal of KIISE / v.44 no.11 / pp.1156-1164 / 2017
  • Existing studies on automatic bug triage have mostly designed the prediction system based on machine learning algorithms. Therefore, it can be said that applying a high-performance machine learning model is at the core of an automatic bug triage system's performance. Related research has mainly used machine learning models with high performance, such as SVM and Naïve Bayes. In this paper, we apply deep learning, which has recently shown good performance in the field of machine learning, to automatic bug triage and evaluate its performance. Experimental results show that the deep learning-based bug triage system achieves 48% accuracy in the active developer experiments, an improvement of up to 69% over conventional machine learning techniques.
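
The paper's deep learning architecture is not detailed in the abstract; the sketch below only shows the general bug-triage setup, vectorizing bug report text and training a neural classifier to predict an assignee, with a small multilayer perceptron as a stand-in. The toy reports and developer labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

bug_reports = [
    "NullPointerException when saving user profile",
    "UI layout breaks on small screens",
    "Database connection pool exhausted under load",
    "Button label overlaps icon in dark mode",
    "Crash on startup after migration script",
    "Slow query on orders table during checkout",
]
assignees = ["backend", "frontend", "backend", "frontend", "backend", "backend"]

# Vectorize report text, then classify to a developer/team label.
triage = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
triage.fit(bug_reports, assignees)

print(triage.predict(["Exception thrown while writing to the users table"]))
```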

A Branch-and-Bound Algorithm for Finding an Optimal Solution of Transductive Support Vector Machines (Transductive SVM을 위한 분지-한계 알고리즘)

  • Park Chan-Kyoo
    • Journal of the Korean Operations Research and Management Science Society / v.31 no.2 / pp.69-85 / 2006
  • The Transductive Support Vector Machine (TSVM) is a semi-supervised learning algorithm that exploits the domain structure of the whole data set by considering labeled and unlabeled data together. Although it was proposed several years ago, there has been no efficient algorithm that can handle problems with more than hundreds of training examples. In this paper, we propose an efficient branch-and-bound algorithm which can solve large-scale TSVM problems with thousands of training examples. The proposed algorithm uses two bounding techniques: a min-cut bound and a reduced SVM bound. The min-cut bound is derived from a capacitated graph whose cuts represent a lower bound on the optimal objective function value of the dual problem. The reduced SVM bound is obtained by constructing the SVM problem with only the labeled data. Experimental results show that the classification accuracy of TSVM can be significantly improved by learning from the optimal solution of TSVM rather than from an approximate solution.
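
A very schematic branch-and-bound skeleton over the labels of the unlabeled examples, to illustrate the search structure only. The paper's min-cut and reduced-SVM bounds are not reproduced; lower_bound and objective below are hypothetical placeholders supplied by the caller.

```python
import heapq

def branch_and_bound(n_unlabeled, lower_bound, objective):
    """Minimize objective(labels) over labels in {-1, +1}^n_unlabeled."""
    best_value, best_labels = float("inf"), None
    # Each node is a partial label assignment; the heap orders nodes by their bound.
    heap = [(lower_bound(()), ())]
    while heap:
        bound, partial = heapq.heappop(heap)
        if bound >= best_value:
            continue                          # prune: cannot beat the incumbent
        if len(partial) == n_unlabeled:
            value = objective(partial)
            if value < best_value:
                best_value, best_labels = value, partial
            continue
        for label in (-1, +1):                # branch on the next unlabeled example
            child = partial + (label,)
            child_bound = lower_bound(child)
            if child_bound < best_value:
                heapq.heappush(heap, (child_bound, child))
    return best_value, best_labels

# Toy usage: the objective counts disagreements with a fixed target labeling.
target = (+1, -1, +1)
obj = lambda labels: sum(a != b for a, b in zip(labels, target))
bound = lambda partial: sum(a != b for a, b in zip(partial, target))
print(branch_and_bound(3, bound, obj))        # -> (0, (1, -1, 1))
```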

Thompson sampling based path selection algorithm in multipath communication system (다중경로 통신 시스템에서 톰슨 샘플링을 이용한 경로 선택 기법)

  • Chung, Byung Chang
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.12 / pp.1960-1963 / 2021
  • In this paper, we propose a multi-play Thompson sampling algorithm for a multipath communication system. A multipath communication system has advantages in communication capacity, robustness, survivability, and so on. It is important to select an appropriate network path according to the status of each individual path. However, it is hard to obtain path-quality information for all paths simultaneously. To solve this issue, we apply Thompson sampling, which is popular in the machine learning area. We found some issues when the algorithm was applied directly to the proposed system and suggested some modifications. Through simulation, we verified that the proposed algorithm can utilize the entire set of network paths. In summary, our proposed algorithm can be applied to path allocation in multipath-based communication systems.
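
A hedged sketch of multi-play Thompson sampling for path selection, in the spirit of the abstract: each path's delivery success is modeled as a Bernoulli arm with a Beta posterior, and the k paths with the highest sampled success rates are used in each round. The success probabilities and the number of selected paths are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
true_success = np.array([0.9, 0.7, 0.5, 0.3])   # unknown per-path quality (assumed)
n_paths, k = len(true_success), 2               # select k paths per round (assumed)

alpha = np.ones(n_paths)                         # Beta posterior parameters per path
beta = np.ones(n_paths)

for t in range(2000):
    samples = rng.beta(alpha, beta)              # one posterior draw per path
    chosen = np.argsort(samples)[-k:]            # k paths with the highest sampled quality
    outcomes = rng.random(k) < true_success[chosen]
    alpha[chosen] += outcomes                    # success: update Beta posterior
    beta[chosen] += ~outcomes                    # failure

print("posterior mean success per path:", np.round(alpha / (alpha + beta), 2))
```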