• Title/Summary/Keyword: machine learning algorithms

Prediction of Larix kaempferi Stand Growth in Gangwon, Korea, Using Machine Learning Algorithms

  • Hyo-Bin Ji;Jin-Woo Park;Jung-Kee Choi
    • Journal of Forest and Environmental Science / v.39 no.4 / pp.195-202 / 2023
  • In this study, we compared and evaluated the accuracy and predictive performance of machine learning algorithms for estimating the growth of individual Larix kaempferi trees in Gangwon Province, Korea. We employed linear regression, random forest, XGBoost, and LightGBM algorithms to predict tree growth using monitoring data organized by thinning intensity. We then compared the goodness-of-fit of these models using the coefficient of determination (R2), mean absolute error (MAE), and root mean square error (RMSE). XGBoost provided the highest goodness-of-fit, with an R2 value of 0.62 across all thinning intensities, while also yielding the lowest MAE and RMSE, indicating the best model fit. When predicting the growth volume of individual trees after 3 years with the XGBoost model, agreement was exceptionally high, reaching approximately 97% for all stand sites across the different thinning intensities. Notably, in non-thinned plots, the predicted volumes were approximately 2.1 m3 lower than the actual volumes, yet agreement remained highly accurate at approximately 99.5%. These findings will contribute to the development of growth prediction models for individual trees using machine learning algorithms.
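The model comparison above rests on three standard goodness-of-fit metrics. As a rough illustration (the tree volumes below are invented for demonstration, not taken from the paper), they can be computed as:

```python
import math

def regression_metrics(y_true, y_pred):
    """Return (R^2, MAE, RMSE) for paired observations and predictions."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot          # fraction of variance explained
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(ss_res / n)      # penalizes large errors more than MAE
    return r2, mae, rmse

# Toy stand volumes (m3): actual vs. predicted growth for five trees
actual = [10.0, 12.5, 9.0, 14.0, 11.5]
predicted = [9.8, 12.9, 9.4, 13.5, 11.6]
r2, mae, rmse = regression_metrics(actual, predicted)
```

A model with higher R2 and lower MAE/RMSE, as reported for XGBoost above, is preferred.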

Selection of Machine Learning Techniques for Network Lifetime Parameters and Synchronization Issues in Wireless Networks

  • Srilakshmi, Nimmagadda;Sangaiah, Arun Kumar
    • Journal of Information Processing Systems / v.15 no.4 / pp.833-852 / 2019
  • In real-time applications, wireless networks play an important role because of their low cost and small size: they receive particular data and transmit it to a base station for analysis, and they are easy to deploy. Owing to various internal and external factors, networks can change dynamically, which affects node localisation, delays, routing mechanisms, geographical coverage, cross-layer design, link quality, fault detection, and quality of service, among others. Conventional methods were programmed for static networks, which made it difficult for networks to respond dynamically. Machine learning strategies can be applied to dynamic networks, enabling self-learning and tools that react quickly and efficiently, with less human intervention and reprogramming. In this paper, we present a survey of wireless networks based on different machine learning algorithms and network lifetime parameters, including the advantages and drawbacks of such systems. Furthermore, we present learning algorithms and techniques for congestion, synchronisation, energy harvesting, and scheduling mobile sinks. Finally, we present a statistical evaluation of the survey, the motivation for choosing specific techniques to deal with wireless network problems, and a brief discussion of the challenges inherent in this area of research.

Development of benthic macroinvertebrate species distribution models using the Bayesian optimization (베이지안 최적화를 통한 저서성 대형무척추동물 종분포모델 개발)

  • Go, ByeongGeon;Shin, Jihoon;Cha, Yoonkyung
    • Journal of Korean Society of Water and Wastewater / v.35 no.4 / pp.259-275 / 2021
  • This study explored the usefulness and implications of the Bayesian hyperparameter optimization in developing species distribution models (SDMs). A variety of machine learning (ML) algorithms, namely, support vector machine (SVM), random forest (RF), boosted regression tree (BRT), XGBoost (XGB), and Multilayer perceptron (MLP) were used for predicting the occurrence of four benthic macroinvertebrate species. The Bayesian optimization method successfully tuned model hyperparameters, with all ML models resulting an area under the curve (AUC) > 0.7. Also, hyperparameter search ranges that generally clustered around the optimal values suggest the efficiency of the Bayesian optimization in finding optimal sets of hyperparameters. Tree based ensemble algorithms (BRT, RF, and XGB) tended to show higher performances than SVM and MLP. Important hyperparameters and optimal values differed by species and ML model, indicating the necessity of hyperparameter tuning for improving individual model performances. The optimization results demonstrate that for all macroinvertebrate species SVM and RF required fewer numbers of trials until obtaining optimal hyperparameter sets, leading to reduced computational cost compared to other ML algorithms. The results of this study suggest that the Bayesian optimization is an efficient method for hyperparameter optimization of machine learning algorithms.
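The AUC criterion used above can be computed directly as the probability that a randomly chosen positive site is scored higher than a randomly chosen negative one; a minimal sketch with invented labels and scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via pairwise comparison of positive
    and negative scores (ties count as 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy occurrence labels (1 = present) and model scores for eight sites
labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.3, 0.8, 0.45, 0.4, 0.2, 0.7, 0.5]
```

An AUC of 0.5 corresponds to random guessing; the study's threshold of 0.7 indicates clearly better-than-chance discrimination.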

Recent Research & Development Trends in Automated Machine Learning (자동 기계학습(AutoML) 기술 동향)

  • Moon, Y.H.;Shin, I.H.;Lee, Y.J.;Min, O.G.
    • Electronics and Telecommunications Trends / v.34 no.4 / pp.32-42 / 2019
  • The performance of machine learning algorithms depends significantly on how a configuration of hyperparameters is identified and how a neural network architecture is designed. However, this requires expert knowledge of the relevant task domains and a prohibitive amount of computation time. To optimize these two processes with minimal effort, many studies have investigated automated machine learning in recent years. This paper reviews the conventional random, grid, and Bayesian methods for hyperparameter optimization (HPO) and discusses recent approaches that speed up the identification of the best set of hyperparameters. We further investigate existing neural architecture search (NAS) techniques based on evolutionary algorithms, reinforcement learning, and gradient-based methods, and analyze their theoretical characteristics and performance results. Moreover, future research directions and challenges in HPO and NAS are described.
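The grid and random search baselines reviewed above can be sketched in a few lines (the hyperparameter names and the toy objective below are illustrative only; a real run would score each configuration by training a model):

```python
import itertools
import random

# Hypothetical hyperparameter space -- names are illustrative
space = {
    "learning_rate": [0.01, 0.05, 0.1, 0.3],
    "max_depth": [3, 5, 7],
    "n_estimators": [100, 300],
}

def objective(cfg):
    """Stand-in for a validation score; peaks at lr=0.05, shallow depth."""
    return -(cfg["learning_rate"] - 0.05) ** 2 - 0.001 * cfg["max_depth"]

# Grid search: exhaustively evaluate every combination
grid = [dict(zip(space, vals)) for vals in itertools.product(*space.values())]
best_grid = max(grid, key=objective)

# Random search: evaluate only a fixed budget of sampled configurations
rng = random.Random(0)
samples = [{k: rng.choice(v) for k, v in space.items()} for _ in range(8)]
best_random = max(samples, key=objective)
```

Grid search cost grows multiplicatively with each added hyperparameter, which is why the Bayesian and accelerated methods surveyed in the paper matter in practice.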

Artificial Intelligence for Clinical Research in Voice Disease (후두음성 질환에 대한 인공지능 연구)

  • Jungirl, Seok;Tack-Kyun, Kwon
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.33 no.3 / pp.142-155 / 2022
  • Diagnosis using voice is non-invasive and can be implemented with various voice recording devices; therefore, it can serve as a screening or diagnostic assistant tool for laryngeal voice disease to help clinicians. The development of artificial intelligence algorithms such as machine learning, led by the latest deep learning technology, began with binary classification distinguishing normal from pathological voices and has since contributed to improving the accuracy of multi-class classification of various types of pathological voices. However, no conclusions that can be applied in the clinical field have yet been reached. Most studies on pathological voice classification have used the sustained short vowel /ah/, which is relatively easier to analyze than continuous or running speech. However, continuous speech has the potential to yield more accurate results, as additional information can be obtained from changes in the voice signal over time. In this review, terms related to artificial intelligence research are explained and the latest trends in machine learning and deep learning algorithms are reviewed; furthermore, the latest research results and limitations are introduced to provide future directions for researchers.

Analysis of Machine Learning Research Patterns from a Quality Management Perspective (품질경영 관점에서 머신러닝 연구 패턴 분석)

  • Ye-eun Kim;Ho Jun Song;Wan Seon Shin
    • Journal of Korean Society for Quality Management / v.52 no.1 / pp.77-93 / 2024
  • Purpose: The purpose of this study is to examine machine learning use cases in manufacturing companies from a digital quality management (DQM) perspective and to analyze and present machine learning research patterns from a quality management perspective. Methods: This study was conducted using a systematic literature review methodology. A comprehensive and systematic review was conducted of manufacturing papers covering the overall quality management process from 2015 to 2022. Three research questions were established according to the goal of the study, and five literature selection criteria were set, on the basis of which approximately 110 research papers were selected. Based on the selected papers, machine learning research patterns were analyzed by quality management activity. Results: Among quality management activities, research on the use of machine learning technology is most active in relation to quality defect analysis. Across quality management activities, NN-based algorithms are used most actively compared with other machine learning methods. Lastly, this study suggests that the unique characteristics of each machine learning algorithm should be considered for efficient and effective quality management in the manufacturing industry. Conclusion: This study is significant in that it presents machine learning research trends in industry from a digital quality management perspective and lays the foundation for identifying optimal machine learning algorithms for future quality management activities.

Development of Machine Learning Based Seismic Response Prediction Model for Shear Wall Structure considering Aging Deteriorations (경년열화를 고려한 전단벽 구조물의 기계학습 기반 지진응답 예측모델 개발)

  • Kim, Hyun-Su;Kim, Yukyung;Lee, So Yeon;Jang, Jun Su
    • Journal of Korean Association for Spatial Structures / v.24 no.2 / pp.83-90 / 2024
  • Machine learning is widely applied in various engineering fields. In structural engineering, it is generally used to predict the structural responses of building structures. The aging deterioration of reinforced concrete affects structural behavior; therefore, aging deterioration should be considered to accurately predict the seismic responses of an R.C. structure. In this study, a machine learning based seismic response prediction model was developed. To this end, four machine learning algorithms were employed, and the prediction performance of each algorithm was compared. A 3-story coupled shear wall structure was selected as the example structure for numerical simulation. Artificial ground motions were generated based on domestic site characteristics. The elastic modulus, damping ratio, and density were varied to account for concrete degradation due to chloride penetration, carbonation, etc. Various intensity measures were used as input parameters of the training database. Performance was evaluated using metrics such as root mean square error, mean square error, mean absolute error, and the coefficient of determination. Hyperparameters were optimized through k-fold cross-validation and grid search. The analysis results show that the neural network and extreme gradient boosting algorithms provide good prediction performance.
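The k-fold cross-validation used above for hyperparameter tuning can be sketched as a plain index split (a simplified stand-in for library implementations such as scikit-learn's KFold; the sample count and fold count below are illustrative):

```python
def k_fold_indices(n_samples, k):
    """Split indices 0..n_samples-1 into k (train, validation) pairs,
    with contiguous validation folds of near-equal size."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        val_set = set(val)
        train = [i for i in range(n_samples) if i not in val_set]
        folds.append((train, val))
        start += size
    return folds

# Each grid-search candidate would be scored as its mean validation
# metric over these k folds
folds = k_fold_indices(10, 5)
```

Averaging the metric over folds gives a less noisy estimate of each hyperparameter configuration than a single train/validation split.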

An Introduction of Machine Learning Theory to Business Decisions

  • Kim, Hyun-Soo
    • Journal of the Korean Operations Research and Management Science Society / v.19 no.2 / pp.153-176 / 1994
  • In this paper, we introduce machine learning theory to business domains for business decisions. First, we review machine learning in general. We take a new look at a previous framework, the version space approach, and introduce the PAC (probably approximately correct) learning paradigm, which has been developed recently. We illustrate the major results of PAC learning with business examples. We then give a theoretical analysis of decision tree induction algorithms within the framework of PAC learning. Finally, we discuss the implications of learning theory for business domains.
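A central result of the PAC paradigm mentioned above is a concrete sample-complexity bound. For a finite hypothesis space and a consistent learner, the classic bound m >= (1/epsilon)(ln|H| + ln(1/delta)) can be evaluated directly (the numbers below are an illustrative example, not from the paper):

```python
import math

def pac_sample_bound(hypothesis_space_size, epsilon, delta):
    """Classic PAC bound for a finite hypothesis space: with at least
    m = ceil((ln|H| + ln(1/delta)) / epsilon) examples, any hypothesis
    consistent with the sample has true error at most epsilon with
    probability at least 1 - delta."""
    return math.ceil((math.log(hypothesis_space_size)
                      + math.log(1.0 / delta)) / epsilon)

# e.g. |H| = 2**10 candidate decision rules, 10% error tolerance,
# 95% confidence
m = pac_sample_bound(2 ** 10, epsilon=0.1, delta=0.05)
```

Note that the bound grows only logarithmically in |H| but linearly in 1/epsilon, so tightening the error tolerance is far more expensive than enlarging the rule space.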
Prediction of Energy Harvesting Efficiency of an Inverted Flag Using Machine Learning Algorithms (머신 러닝 알고리즘을 이용한 역방향 깃발의 에너지 하베스팅 효율 예측)

  • Lim, Sehwan;Park, Sung Goon
    • Journal of the Korean Society of Visualization / v.19 no.3 / pp.31-38 / 2021
  • The energy harvesting system using an inverted flag is analyzed using an immersed boundary method to consider the fluid-solid interaction. The inverted flag flutters at a lower critical velocity than a conventional flag. The fluttering motion is classified into straight, symmetric, asymmetric, biased, and over-flapping modes, and the optimal energy harvesting efficiency is observed in the biased flapping mode. Using three different machine learning algorithms, i.e., artificial neural network, random forest, and support vector regression, the energy harvesting efficiency is predicted by taking bending rigidity, inclination angle, and flapping frequency as input variables. The R2 values of the artificial neural network and random forest algorithms are observed to be more than 0.9.

Regression Algorithms Evaluation for Analysis of Crosstalk in High-Speed Digital System

  • Minhyuk Kim
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.6 / pp.1449-1461 / 2024
  • As technology advances, processor speeds are increasing at a rapid pace and digital systems require significant data bandwidth. As a result, careful consideration of signal integrity is required to ensure reliable, high-speed data processing. Crosstalk has become a vital area of research in signal integrity for electronic packages, mainly because of the high level of integration. In this study, analytic formulas were analyzed to identify the features that can predict crosstalk in multi-conductor transmission lines. Through this analysis, five variables were identified, and a dataset consisting of 302,500 data points was obtained. The study evaluated the performance of various regression models, optimized via automated machine learning, by comparing the machine learning predictions with the analytic solution. Extra trees regression consistently outperformed the other algorithms, with coefficients of determination exceeding 0.9 and root mean square logarithmic errors below 0.35. The study also notes that the different algorithms produced varied predictions across the two metrics.
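The root mean square logarithmic error used as a criterion above measures relative rather than absolute deviation, which suits quantities spanning orders of magnitude; a minimal sketch with invented values:

```python
import math

def rmsle(y_true, y_pred):
    """Root mean square logarithmic error; log1p keeps zero-valued
    targets well-defined and emphasizes relative error."""
    return math.sqrt(sum((math.log1p(p) - math.log1p(t)) ** 2
                         for t, p in zip(y_true, y_pred)) / len(y_true))

# Toy crosstalk magnitudes: actual vs. predicted (illustrative only)
actual = [1.0, 2.0, 4.0, 8.0]
predicted = [1.1, 1.8, 4.5, 7.0]
```

Under RMSLE, underpredicting 8.0 as 7.0 and underpredicting 2.0 as 1.8 cost roughly the same, since both are similar relative errors.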