• Title/Summary/Keyword: MachineLearning

On successive machine learning process for predicting strength and displacement of rectangular reinforced concrete columns subjected to cyclic loading

  • Bu-seog Ju;Shinyoung Kwag;Sangwoo Lee
    • Computers and Concrete
    • /
    • v.32 no.5
    • /
    • pp.513-525
    • /
    • 2023
  • Recently, research on predicting the behavior of reinforced concrete (RC) columns using machine learning methods has been actively conducted. However, most studies have focused on predicting the ultimate strength of RC columns using a regression algorithm. Therefore, this study develops a successive machine learning process for predicting multiple nonlinear behaviors of rectangular RC columns. This process consists of three stages: single machine learning, bagging ensemble, and stacking ensemble. In the case of strength prediction, sufficient prediction accuracy is confirmed even in the first stage. In the case of displacement, although sufficient accuracy is not achieved in the first and second stages, the stacking ensemble model in the third stage performs better than the machine learning models in the first and second stages. In addition, the performance of the final prediction models is verified by comparing the backbone curves and hysteresis loops obtained from predicted outputs with actual experimental data.
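
As a point of reference for the three-stage process described in this abstract, below is a minimal sketch of a stacking ensemble built on top of a bagged base learner. The feature set, model choices, and the use of scikit-learn are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a stacking ensemble over bagged regressors,
# loosely mirroring the single-model -> bagging -> stacking progression.
# Features, targets, and model choices are hypothetical (scikit-learn >= 1.2).
import numpy as np
from sklearn.ensemble import BaggingRegressor, StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 6))                                  # e.g. geometry, material, loading features
y = X @ rng.random(6) + 0.1 * rng.standard_normal(500)    # synthetic target (e.g. strength)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 2: bagging ensembles of single learners
bagged_svr = BaggingRegressor(estimator=SVR(), n_estimators=10, random_state=0)
forest = RandomForestRegressor(n_estimators=100, random_state=0)

# Stage 3: stacking ensemble combining stage-2 models with a meta-learner
stack = StackingRegressor(
    estimators=[("bagged_svr", bagged_svr), ("forest", forest)],
    final_estimator=Ridge(),
)
stack.fit(X_tr, y_tr)
print("R^2 on held-out data:", stack.score(X_te, y_te))
```

The point of the stacking stage is that the meta-learner can correct systematic errors of the individual stage-2 models, which is why it can help where a single regressor or bagging alone is not accurate enough.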

Trends in image processing techniques applied to corrosion detection and analysis (부식 검출과 분석에 적용한 영상 처리 기술 동향)

  • Beomsoo Kim;Jaesung Kwon;Jeonghyeon Yang
    • Journal of Surface Science and Engineering
    • /
    • v.56 no.6
    • /
    • pp.353-370
    • /
    • 2023
  • Corrosion detection and analysis is an important topic for reducing costs and preventing disasters. Recently, image processing techniques have been widely applied to corrosion identification and analysis. In this work, we briefly introduce traditional image processing techniques and machine learning algorithms applied to detect or analyze corrosion in various fields. Machine learning, especially CNN-based algorithms, has recently been widely applied to corrosion detection, and research on applying machine learning to region segmentation is actively underway. Corrosion is reddish-brown in color and highly irregular in shape, so combinations of color- and texture-based techniques, various mathematical methods, and machine learning algorithms are used to detect and analyze it. We present examples of applying traditional image processing techniques and machine learning to corrosion detection and analysis.
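
A minimal sketch of the kind of traditional color-thresholding step such surveys discuss, assuming an OpenCV pipeline; the HSV thresholds and input path are illustrative values that would need tuning for real images.

```python
# Illustrative sketch of a color-based corrosion mask, a traditional technique
# of the kind the survey covers. Thresholds and the input path are hypothetical.
import cv2
import numpy as np

img = cv2.imread("surface.jpg")                  # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Rough reddish-brown range in HSV (assumed values, not from the paper)
lower = np.array([0, 60, 40])
upper = np.array([25, 255, 200])
mask = cv2.inRange(hsv, lower, upper)

# Clean up the irregular mask with morphological opening and closing
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

ratio = mask.mean() / 255.0
print(f"Estimated corroded area fraction: {ratio:.2%}")
```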

Machine Learning Based Asset Risk Management for Highway Sign Support Systems

  • Myungjin CHAE;Jiyong CHOI
    • International conference on construction engineering and project management
    • /
    • 2024.07a
    • /
    • pp.145-151
    • /
    • 2024
  • Road sign support systems are often not well managed because bridges and pavement take budget and maintenance priority, while sign boards and sign supports are treated as miscellaneous items. The authors of this paper suggest implementing simplified machine learning algorithms for asset risk management of highway sign support systems. By harnessing historical and real-time data, machine learning models can forecast potential vulnerabilities, enabling early intervention and proactive maintenance. The raw data were collected from the Connecticut Department of Transportation (CTDOT) asset management database, which includes asset ages, repair history, installation and repair costs, and other administrative information. While many advanced and complicated structural deterioration prediction models exist, a simple deterioration curve is assumed here, and a prediction model was developed using a machine learning algorithm for risk assessment and prediction. Integrating simplified machine learning into asset risk management for highway sign support systems not only enables predictive maintenance but also optimizes resource allocation. This approach ensures that decision-makers are not inundated with excessive detail, making it particularly practical for industry application.
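
A minimal sketch of fitting a simple deterioration curve from age and condition data and flagging high-risk assets, in the spirit of the simplified approach the abstract describes; the synthetic data, linear model, and threshold are assumptions, not the CTDOT data or the authors' model.

```python
# Illustrative sketch: fit a simple deterioration curve (condition vs. age)
# and flag assets whose predicted future condition falls below a threshold.
# The data, deterioration rate, and threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
age_years = rng.uniform(0, 40, 200).reshape(-1, 1)
condition = 100 - 1.8 * age_years.ravel() + rng.normal(0, 5, 200)  # synthetic ratings

model = LinearRegression().fit(age_years, condition)

# Predict condition at a 10-year planning horizon and flag high-risk assets
predicted = model.predict(age_years + 10)
high_risk = predicted < 40                       # assumed intervention threshold
print(f"{high_risk.sum()} of {len(high_risk)} assets flagged for early intervention")
```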

Identifying the Effects of Repeated Tasks in an Apartment Construction Project Using Machine Learning Algorithm (기계적 학습의 알고리즘을 이용하여 아파트 공사에서 반복 공정의 효과 비교에 관한 연구)

  • Kim, Hyunjoo
    • Journal of KIBIM
    • /
    • v.6 no.4
    • /
    • pp.35-41
    • /
    • 2016
  • The learning effect is the observation that the more times a task is performed, the less time is required to produce the same amount of output. The construction industry relies heavily on repeated tasks, for which the learning effect is an important measure. However, most construction durations are calculated and applied in real projects without considering the learning effect in each repeated activity. This paper applied the learning effect to the repeated activities in a small apartment construction project. The result showed a difference of about 10 percent in duration (a total of 41 days for one approach with the learning effect versus 36.5 days for the other without it). To compare the two approaches, a large number of BIM-based computer simulations were generated and useful patterns were recognized using a decision tree machine learning algorithm (See5). Machine learning is a data-driven approach to pattern recognition based on observational evidence.
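
For context, the standard learning-curve model behind the learning effect is Wright's model, T_n = T_1 · n^b with b = log2(r) for learning rate r. The sketch below applies it to hypothetical repeated-activity durations; the 90% learning rate, first-cycle duration, and cycle count are illustrative, not the paper's data.

```python
# Illustrative sketch of Wright's learning-curve model applied to a repeated
# activity: T_n = T_1 * n ** log2(r), where r is the learning rate.
# The 90% rate, 5-day first cycle, and 8 cycles are assumed values.
import math

def unit_duration(t_first, n, learning_rate=0.9):
    """Duration of the n-th repetition under Wright's learning-curve model."""
    b = math.log2(learning_rate)      # negative exponent for r < 1
    return t_first * n ** b

t_first = 5.0                          # days for the first repetition (assumed)
cycles = 8                             # number of repeated units (assumed)

with_learning = sum(unit_duration(t_first, n) for n in range(1, cycles + 1))
without_learning = t_first * cycles
print(f"total with learning effect:    {with_learning:.1f} days")
print(f"total without learning effect: {without_learning:.1f} days")
```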

ACCELERATION OF MACHINE LEARNING ALGORITHMS BY TCHEBYCHEV ITERATION TECHNIQUE

  • LEVIN, MIKHAIL P.
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.22 no.1
    • /
    • pp.15-28
    • /
    • 2018
  • Recently, machine learning algorithms have been widely used to process Big Data in various applications, and many of these applications are executed at run time. Therefore, the speed of machine learning algorithms is a critical issue in such applications. However, most modern iterative machine learning algorithms use the successive iteration technique well known in Numerical Linear Algebra. This technique converges very slowly, needs many iterations to reach a solution of the problems under consideration, and therefore requires considerable processing time even on modern multi-core computers and clusters. The Tchebychev iteration technique, also well known in Numerical Linear Algebra, is an attractive candidate for decreasing the number of iterations in iterative machine learning algorithms and thus their running time, which is especially important in run-time applications. In this paper we consider the use of Tchebychev iterations to accelerate the well-known K-Means clustering and SVM (Support Vector Machine) algorithms in machine learning. Examples of using our approach on modern multi-core computers under the Apache Spark framework are considered and discussed.
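
For reference, one common textbook formulation of the Chebyshev (Tchebychev) semi-iterative acceleration from Numerical Linear Algebra, written for a basic fixed-point iteration $x^{(k+1)} = M x^{(k)} + c$ with spectral radius $\rho = \rho(M) < 1$, is the three-term recurrence below. How this recurrence is mapped onto the K-Means and SVM updates is specific to the paper and is not reproduced here.

\[
y^{(0)} = x^{(0)}, \qquad y^{(1)} = M y^{(0)} + c,
\]
\[
\omega_2 = \frac{2}{2 - \rho^2}, \qquad \omega_{k+1} = \left(1 - \tfrac{\rho^2}{4}\,\omega_k\right)^{-1} \quad (k \ge 2),
\]
\[
y^{(k+1)} = \omega_{k+1}\left(M y^{(k)} + c - y^{(k-1)}\right) + y^{(k-1)}.
\]

For $\rho$ close to 1, this reduces the iteration count from roughly $O\!\left(1/(1-\rho)\right)$ for plain successive iteration to roughly $O\!\left(1/\sqrt{1-\rho}\right)$, which is the kind of reduction such acceleration aims for.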

Trend Analysis of Korea Papers in the Fields of 'Artificial Intelligence', 'Machine Learning' and 'Deep Learning' ('인공지능', '기계학습', '딥 러닝' 분야의 국내 논문 동향 분석)

  • Park, Hong-Jin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.13 no.4
    • /
    • pp.283-292
    • /
    • 2020
  • Artificial intelligence, one of the representative technologies of the 4th industrial revolution, has received wide recognition since 2016. This paper analyzes domestic paper trends for 'Artificial Intelligence', 'Machine Learning', and 'Deep Learning' among the domestic papers provided by the Korea Academic Education and Information Service. Approximately 10,000 papers were retrieved, and word count analysis, topic modeling, and semantic network analysis were used to analyze their trends. The analysis of the extracted papers shows that, compared to 2015, the number of papers in 2016 increased by 600% in the field of artificial intelligence, 176% in machine learning, and 316% in deep learning. In machine learning, support vector machine models have been widely studied, and in deep learning, convolutional neural networks using TensorFlow are widely used. This paper can help set future research directions in the fields of 'artificial intelligence', 'machine learning', and 'deep learning'.
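
A minimal sketch of the word-count and topic-modeling step this kind of trend analysis uses, assuming scikit-learn's LDA on a toy corpus; the corpus, number of topics, and other parameters are illustrative, not the roughly 10,000 papers analyzed in the study.

```python
# Illustrative sketch: term counting and LDA topic modeling over abstracts,
# the kind of analysis described above. The toy corpus and parameters are assumed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "deep learning convolutional neural network image classification",
    "machine learning support vector machine prediction model",
    "artificial intelligence reinforcement learning agent policy",
    "deep learning tensorflow convolutional network training",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(abstracts)          # word-count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {top}")
```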

Research Trend on Machine Learning Healthcare Based on Keyword Frequency and Centrality Analysis : Focusing on the United States, the United Kingdom, Korea (키워드 빈도 및 중심성 분석 기반의 머신러닝 헬스케어 연구 동향 : 미국·영국·한국을 중심으로)

  • Lee Taekkyeun
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.19 no.3
    • /
    • pp.149-163
    • /
    • 2023
  • In this study, we analyze research trends in machine learning for healthcare based on papers from the United States, the United Kingdom, and Korea. From Elsevier's Scopus, we collected 3,425 papers related to machine learning healthcare published from 2018 to 2022. Keyword frequency and centrality analyses were conducted on the abstracts of the collected papers. We identified frequently appearing keywords by calculating keyword frequencies and found central research keywords through centrality analysis by country. The results show that research related to machine learning, deep learning, healthcare, and the COVID virus was the most central and most mediating research in each country. As an implication, studies related to electronic health information-based treatment, natural language processing, and privacy in Korea have lower degree centrality and betweenness centrality than those of the United States and the United Kingdom; thus, more convergence research applying machine learning is needed in these fields.
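
A minimal sketch of computing degree and betweenness centrality over a keyword co-occurrence network, assuming NetworkX; the keyword pairs below are hypothetical examples, not edges from the study's corpus.

```python
# Illustrative sketch: degree and betweenness centrality on a keyword
# co-occurrence network, as in the centrality analysis described above.
# The edges are hypothetical examples, not from the collected papers.
import networkx as nx

cooccurrences = [
    ("machine learning", "healthcare"),
    ("machine learning", "deep learning"),
    ("deep learning", "covid-19"),
    ("healthcare", "natural language processing"),
    ("healthcare", "privacy"),
]

G = nx.Graph()
G.add_edges_from(cooccurrences)

degree = nx.degree_centrality(G)            # how connected a keyword is
betweenness = nx.betweenness_centrality(G)  # how often it bridges other keywords
for node in G.nodes:
    print(f"{node}: degree={degree[node]:.2f}, betweenness={betweenness[node]:.2f}")
```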

Recent advances in deep learning-based side-channel analysis

  • Jin, Sunghyun;Kim, Suhri;Kim, HeeSeok;Hong, Seokhie
    • ETRI Journal
    • /
    • v.42 no.2
    • /
    • pp.292-304
    • /
    • 2020
  • As side-channel analysis and machine learning algorithms share the same objective of classifying data, numerous studies have been proposed for adapting machine learning to side-channel analysis. However, a drawback of machine learning algorithms is that their performance depends on human engineering. Therefore, recent studies in the field focus on exploiting deep learning algorithms, which can extract features automatically from data. In this study, we survey recent advances in deep learning-based side-channel analysis. In particular, we outline how deep learning is applied to side-channel analysis, based on deep learning architectures and application methods. Furthermore, we describe its properties when using different architectures and application methods. Finally, we discuss our perspective on future research directions in this field.
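
As a rough illustration of the classification framing mentioned above (and not any particular architecture from the survey), the sketch below trains a small neural network to classify synthetic side-channel traces; the traces, labels, and network size are entirely made up.

```python
# Illustrative sketch only: framing side-channel analysis as classification of
# trace samples, as discussed above. Traces, labels, and the model are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_traces, trace_len, n_classes = 2000, 100, 4
labels = rng.integers(0, n_classes, n_traces)
# Synthetic traces: noise plus a class-dependent "leakage" bump
traces = rng.normal(0, 1, (n_traces, trace_len))
traces[np.arange(n_traces), 20 + labels * 5] += 3.0

X_tr, X_te, y_tr, y_te = train_test_split(traces, labels, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```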

Generating Training Dataset of Machine Learning Model for Context-Awareness in a Health Status Notification Service (사용자 건강 상태알림 서비스의 상황인지를 위한 기계학습 모델의 학습 데이터 생성 방법)

  • Mun, Jong Hyeok;Choi, Jong Sun;Choi, Jae Young
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.1
    • /
    • pp.25-32
    • /
    • 2020
  • In context-aware systems, rule-based AI technology has been used in the abstraction process for obtaining context information. However, the rules become complicated as user requirements for the service diversify and data usage increases, so there are technical limitations in maintaining rule-based models and processing unstructured data. To overcome these limitations, many studies have applied machine learning techniques to context-aware systems. To utilize such machine learning-based models in a context-aware system, a management process that periodically injects training data is required. A previous study on a machine learning-based context-aware system considered a series of management processes, such as generating and providing training data for operating several machine learning models, but the method was limited to the system to which it was applied. In this paper, we propose a training data generation method for machine learning models that extends the machine learning-based context-aware system. The proposed method defines a training data generation model that reflects the requirements of the machine learning models and generates training data for each model. In the experiment, the training data generation model is defined based on the training data generation schema of a cardiac status analysis model for the elderly in a health status notification service, and the training data are generated by applying the defined model in a real software environment. We also compare the accuracy obtained by training the machine learning model on the generated data to verify the validity of the generated training data.
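
A minimal sketch of what a schema-driven training data generator for a machine learning model might look like; the schema fields, value ranges, and labeling rule are illustrative assumptions, not the paper's cardiac-status schema or the authors' design.

```python
# Illustrative sketch: a schema-driven generator that produces labeled training
# data for a machine learning model. The schema, ranges, and labeling rule are
# hypothetical, not the paper's cardiac-status analysis schema.
import random

def generate_training_data(schema, n_samples):
    """Generate labeled samples according to a simple generation schema."""
    data = []
    for _ in range(n_samples):
        sample = {name: random.uniform(lo, hi)
                  for name, (lo, hi) in schema["features"].items()}
        sample[schema["label"]] = schema["label_rule"](sample)
        data.append(sample)
    return data

# Hypothetical schema for a cardiac-status-style model
cardiac_schema = {
    "features": {"heart_rate": (40, 180), "systolic_bp": (80, 200)},
    "label": "status",
    "label_rule": lambda s: "abnormal" if s["heart_rate"] > 120 or s["systolic_bp"] > 160 else "normal",
}

training_data = generate_training_data(cardiac_schema, 1000)
print(training_data[0])
```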

Load Balancing Scheme for Machine Learning Distributed Environment (기계학습 분산 환경을 위한 부하 분산 기법)

  • Kim, Younggwan;Lee, Jusuk;Kim, Ajung;Hong, Jiman
    • Smart Media Journal
    • /
    • v.10 no.1
    • /
    • pp.25-31
    • /
    • 2021
  • As machine learning becomes more common, development of applications using machine learning is increasing rapidly, and research on machine learning platforms to support such development is also increasing. However, despite this growth, research on load balancing suitable for machine learning platforms is insufficient. Therefore, in this paper, we propose a load balancing scheme that can be applied to distributed machine learning environments. The proposed scheme organizes the distributed servers in a level hash table structure and assigns machine learning tasks to servers in consideration of each server's performance. We implemented the distributed servers, ran experiments, and compared the performance with an existing hashing scheme. Compared with the existing hashing scheme, the proposed scheme showed an average speed improvement of 26% and reduced the number of tasks waiting to be assigned to a server by more than 38%.
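
A minimal sketch of performance-aware task assignment across distributed servers, in the spirit of the scheme described above; the server list, performance scores, and selection rule are assumptions, and the paper's level hash table structure is not reproduced here.

```python
# Illustrative sketch: assign machine learning tasks to the server with the
# lowest load relative to its performance score. Server names, scores, and the
# selection rule are hypothetical; the paper's level hash table is not modeled.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    performance: float            # relative capacity of the server
    queued: int = 0               # number of tasks currently assigned

def assign_task(servers):
    """Pick the server with the smallest load-to-performance ratio."""
    target = min(servers, key=lambda s: s.queued / s.performance)
    target.queued += 1
    return target

servers = [Server("node-a", 1.0), Server("node-b", 2.0), Server("node-c", 0.5)]
for task_id in range(10):
    chosen = assign_task(servers)
    print(f"task {task_id} -> {chosen.name}")
```

Weighting the queue length by server performance is one simple way to avoid the situation a plain hash-based assignment can create, where slow servers accumulate waiting tasks while fast servers sit idle.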