• Title/Summary/Keyword: Learning/Training Algorithms

Estimation of compressive strength of BFS and WTRP blended cement mortars with machine learning models

  • Ozcan, Giyasettin;Kocak, Yilmaz;Gulbandilar, Eyyup
    • Computers and Concrete
    • /
    • v.19 no.3
    • /
    • pp.275-282
    • /
    • 2017
  • The aim of this study is to build machine learning models that evaluate the effect of blast furnace slag (BFS) and waste tire rubber powder (WTRP) on the compressive strength of cement mortars. To develop these models, the 2-, 7-, 28-, and 90-day compressive strength results of 288 specimens from 12 different mixes containing BFS, WTRP, and BFS+WTRP, obtained with standard cement tests, were used to train and test Random Forest, AdaBoost, SVM, and Bayes classifier models. The models take four input parameters, the amounts of Portland cement, BFS, and WTRP and the sample age, and produce one output, the compressive strength of the cement mortar. Experimental observations from compressive strength tests were compared with the predictions of the machine learning methods, implemented in the R programming language with its corresponding packages. On this dataset, the Random Forest, AdaBoost, and SVM models produced notably good results in terms of the coefficient of determination (R²), RMS, and MAPE. Among the algorithms, AdaBoost achieved the best R², RMS, and MAPE values: 0.9831, 5.2425, and 0.1105, respectively. The testing results indicate that the model can estimate the experimental data to a notably close extent.
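
A minimal sketch of the modeling setup this abstract describes: four mix/age inputs, one strength output, AdaBoost regression, and the three reported metrics. The paper used R with corresponding packages; this Python/scikit-learn version with synthetic stand-in data is an illustrative assumption, not the authors' code.

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_percentage_error

rng = np.random.default_rng(0)
# Columns: Portland cement, BFS, WTRP amounts and sample age (days) -- placeholder ranges.
X = rng.uniform([60, 0, 0, 2], [100, 40, 10, 90], size=(288, 4))
y = 0.5 * X[:, 0] - 0.1 * X[:, 2] + 8 * np.log(X[:, 3]) + rng.normal(0, 2, 288)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = AdaBoostRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

print("R2  :", r2_score(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("MAPE:", mean_absolute_percentage_error(y_te, pred))
```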

Analysis of massive data in astronomy (천문학에서의 대용량 자료 분석)

  • Shin, Min-Su
    • The Korean Journal of Applied Statistics
    • /
    • v.29 no.6
    • /
    • pp.1107-1116
    • /
    • 2016
  • Recent astronomical survey observations have produced substantial amounts of data and have completely changed conventional methods of analyzing astronomical data. Both classical statistical inference and modern machine learning methods have been used in every step of data analysis, ranging from data calibration to inference of physical models. Machine learning methods are growing in popularity for classical problems of astronomical data analysis, thanks to low-cost data acquisition with cheap large-scale detectors and fast computer networks that allow large volumes of data to be shared. It is common to consider the effects of inhomogeneous spatial and temporal coverage in the analysis of big astronomical data. The growing size of the data requires parallel distributed computing environments as well as machine learning algorithms, yet distributed data analysis systems have not been widely adopted for the general analysis of massive astronomical data. Because gathering adequate training data is observationally expensive and learning data are generally collected from multiple sources in astronomy, semi-supervised and ensemble machine learning methods will become important for the analysis of big astronomical data.
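
The review singles out semi-supervised learning for the common case of scarce labels; below is a minimal sketch of that setting using scikit-learn's self-training wrapper, with synthetic features standing in for survey measurements. Purely illustrative, not drawn from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
labels = np.full(len(y), -1)   # -1 marks unlabeled objects
labels[:200] = y[:200]         # only 200 objects carry (say, spectroscopic) labels

# Self-training: confident predictions on unlabeled data become pseudo-labels.
clf = SelfTrainingClassifier(RandomForestClassifier(random_state=0), threshold=0.9)
clf.fit(X, labels)
print("accuracy on held-out truth:", clf.score(X[200:], y[200:]))
```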

CALS: Channel State Information Auto-Labeling System for Large-scale Deep Learning-based Wi-Fi Sensing (딥러닝 기반 Wi-Fi 센싱 시스템의 효율적인 구축을 위한 지능형 데이터 수집 기법)

  • Jang, Jung-Ik;Choi, Jaehyuk
    • Journal of IKEEE
    • /
    • v.26 no.3
    • /
    • pp.341-348
    • /
    • 2022
  • Wi-Fi sensing, which uses Wi-Fi technology to sense the surrounding environment, has strong potential in a variety of sensing applications. Several recent deep learning-based solutions using CSI (Channel State Information) data have achieved high performance, but they remain difficult to use in practice because they require explicit data collection and expensive adaptation effort for model retraining. In this study, we propose a Channel State Information Auto-Labeling System (CALS) that automatically collects and labels training CSI data for deep learning-based Wi-Fi sensing systems. The proposed system efficiently collects labeled CSI for supervised learning by using computer vision technologies, such as object detection algorithms, to label the data during collection. We built a prototype of CALS to demonstrate its efficiency and collected data to train deep learning models for detecting the presence of a person in an indoor environment, achieving an accuracy of over 90% with the auto-labeled data sets generated by CALS.
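
A hedged sketch of the auto-labeling loop CALS implies: a vision model watching the scene supplies labels for simultaneously captured CSI frames. The function names (get_csi_frame, get_camera_frame, detect_person) are hypothetical stand-ins; the paper does not specify its APIs, and a real system would plug in a CSI extraction tool and an off-the-shelf object detector.

```python
import csv
import time

def get_csi_frame():
    """Hypothetical: return (timestamp, csi_vector) from the Wi-Fi receiver."""
    raise NotImplementedError

def get_camera_frame():
    """Hypothetical: grab the current camera image."""
    raise NotImplementedError

def detect_person(camera_frame) -> bool:
    """Hypothetical: run an object detector and report whether a person is present."""
    raise NotImplementedError

def collect(n_samples, out_path="csi_labeled.csv"):
    """Pair each CSI frame with a vision-derived label, so no manual labeling is needed."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        for _ in range(n_samples):
            ts, csi = get_csi_frame()
            label = int(detect_person(get_camera_frame()))  # vision result becomes the CSI label
            writer.writerow([ts, label, *csi])
            time.sleep(0.01)
```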

Deep learning-based monitoring for conservation and management of coastal dune vegetation (해안사구 식생의 보전 및 관리를 위한 딥러닝 기반 모니터링)

  • Kim, Dong-woo;Gu, Ja-woon;Hong, Ye-ji;Kim, Se-Min;Son, Seung-Woo
    • Journal of the Korean Society of Environmental Restoration Technology
    • /
    • v.25 no.6
    • /
    • pp.25-33
    • /
    • 2022
  • In this study, a monitoring method using high-resolution images acquired by unmanned aerial vehicles and deep learning algorithms was proposed for the management of the Sinduri coastal sand dunes. Classification was performed with U-net, a semantic segmentation method. Three types of sand dune vegetation were classified into four classes, and the model was trained and tested with a total of 320 training images and 48 test images. An ignored label was applied to improve the performance of the model, which was then evaluated with two loss functions, CE Loss and BCE Loss. In the evaluation, CE Loss gave the highest per-class mIoU values, but BCE Loss can be judged the better choice considering the time consumed in training. This work is meaningful as a pilot application of unmanned aerial vehicles and deep learning for monitoring and managing sand dune vegetation. The feasibility of deep learning image analysis for monitoring sand dune vegetation has been confirmed, and the proposed method is expected to be applicable not only to sand dune vegetation but also to other fields such as forests and grasslands.
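
A minimal PyTorch sketch of the two loss settings compared above; the study's ignored label maps naturally to CrossEntropyLoss's ignore_index, while the BCE variant needs one-hot targets with the ignored pixels masked out. The shapes and the 255 ignore value are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(2, 4, 64, 64)      # U-net output: batch x classes x H x W
target = torch.randint(0, 4, (2, 64, 64))
target[:, :8, :] = 255                  # 255 marks pixels excluded from the loss

# CE Loss: ignored pixels handled directly by ignore_index.
ce = F.cross_entropy(logits, target, ignore_index=255)

# BCE Loss: one-hot targets, with ignored pixels masked out manually.
valid = target != 255
one_hot = F.one_hot(target.clamp(max=3), num_classes=4).permute(0, 3, 1, 2).float()
bce_map = F.binary_cross_entropy_with_logits(logits, one_hot, reduction="none")
bce = bce_map[valid.unsqueeze(1).expand_as(bce_map)].mean()

print(f"CE Loss: {ce:.4f}  BCE Loss: {bce:.4f}")
```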

Development of ensemble machine learning model considering the characteristics of input variables and the interpretation of model performance using explainable artificial intelligence (수질자료의 특성을 고려한 앙상블 머신러닝 모형 구축 및 설명가능한 인공지능을 이용한 모형결과 해석에 대한 연구)

  • Park, Jungsu
    • Journal of Korean Society of Water and Wastewater
    • /
    • v.36 no.4
    • /
    • pp.239-248
    • /
    • 2022
  • The prediction of algal blooms is an important field of study in algal bloom management, and chlorophyll-a concentration (Chl-a) is commonly used to represent the status of an algal bloom. In recent years, advanced machine learning algorithms have increasingly been used for the prediction of algal blooms. In this study, XGBoost (XGB), an ensemble machine learning algorithm, was used to develop a model to predict Chl-a in a reservoir. Daily observations of water quality and climate data were used for training and testing the model. In the first step of the study, the input variables were clustered into two groups (low and high value groups) based on the observed values of water temperature (TEMP), total organic carbon concentration (TOC), total nitrogen concentration (TN), and total phosphorus concentration (TP). For each of the four water quality items, two XGB models were developed using only the data in each clustered group (Model 1). The results were compared to the predictions of an XGB model developed using the entire data before clustering (Model 2). Model performance was evaluated using three indices, including the root mean squared error-observation standard deviation ratio (RSR). Model 1 improved performance for TEMP, TN, and TP, with RSR values of 0.503, 0.477, and 0.493, respectively, against 0.521 for Model 2. On the other hand, Model 2 performed better than Model 1 for TOC, with an RSR of 0.532. Explainable artificial intelligence (XAI) is an ongoing field of research in machine learning, and Shapley value analysis, a novel XAI algorithm, was also used for the quantitative interpretation of the XGB models developed in this study.
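
A hedged sketch of the two-model scheme (Model 1) and the RSR index described above: split the data by a threshold on one observed water-quality variable, train one XGBoost regressor per group, and score each with RSR (RMSE divided by the standard deviation of observations). The synthetic data, median threshold, and hyperparameters are placeholders, not the study's settings.

```python
import numpy as np
import xgboost as xgb

def rsr(obs, pred):
    """RMSE normalized by the standard deviation of the observations."""
    return np.sqrt(np.mean((obs - pred) ** 2)) / np.std(obs)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                 # water-quality and climate features
y = 2 * X[:, 0] + rng.normal(0, 0.5, 500)     # stand-in for observed Chl-a

low = X[:, 0] < np.median(X[:, 0])            # e.g., split on observed water temperature
for name, mask in [("low", low), ("high", ~low)]:
    model = xgb.XGBRegressor(n_estimators=200).fit(X[mask], y[mask])
    print(name, "group RSR:", round(rsr(y[mask], model.predict(X[mask])), 3))

# The paper's XAI step could then use Shapley values, e.g.:
# import shap; shap.TreeExplainer(model).shap_values(X[mask])
```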

A Study on Improvement of Buffer Cache Performance for File I/O in Deep Learning (딥러닝의 파일 입출력을 위한 버퍼캐시 성능 개선 연구)

  • Jeongha Lee;Hyokyung Bahn
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.2
    • /
    • pp.93-98
    • /
    • 2024
  • With the rapid advances in AI (artificial intelligence) and high-performance computing technologies, deep learning is being used in various fields. Deep learning training proceeds by randomly reading a large amount of data and repeating this process. Because a large number of files are referenced randomly and repeatedly, deep learning exhibits access characteristics different from traditional workloads with temporal locality. To cope with the caching difficulties caused by deep learning, we propose a new sampling method that reduces the randomness of dataset reading and operates adaptively on existing buffer cache algorithms. We show that the proposed policy reduces the buffer cache miss rate by 16% on average and up to 33% compared to the existing method, and improves execution time by up to 24%.
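
The abstract does not spell out the sampling method; the sketch below illustrates one plausible reading of "reducing the randomness of dataset reading": confine shuffling to blocks so consecutive reads keep the locality a buffer cache can exploit. This is an assumption for illustration, not the authors' policy.

```python
import random

def block_shuffled_indices(n_items, block_size, seed=0):
    """Yield dataset indices with randomness confined to blocks of block_size."""
    rng = random.Random(seed)
    blocks = [list(range(s, min(s + block_size, n_items)))
              for s in range(0, n_items, block_size)]
    rng.shuffle(blocks)        # randomize the order of blocks
    for block in blocks:
        rng.shuffle(block)     # randomize within each block only
        yield from block

# A permutation of 0..9 whose randomness stays within 4-element blocks:
print(list(block_shuffled_indices(n_items=10, block_size=4, seed=42)))
```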

CycleGAN Based Translation Method between Asphalt and Concrete Crack Images for Data Augmentation (데이터 증강을 위한 순환 생성적 적대 신경망 기반의 아스팔트와 콘크리트 균열 영상 간의 변환 기법)

  • Shim, Seungbo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.21 no.5
    • /
    • pp.171-182
    • /
    • 2022
  • The safe use of a structure requires that it be maintained in an undamaged state, so cracks are a typical factor determining structural safety. Cracks arise from various causes, damage structures in various ways, and appear in different shapes. Worse, if cracks are left unattended, the risk of structural failure increases and can lead to catastrophe. Hence, methods of checking structural damage using deep learning and computer vision technology have recently been introduced. These methods usually presuppose a large amount of training image data, yet such data are always insufficient, and this insufficiency particularly degrades the performance of deep learning crack detection algorithms. In this study, a method of augmenting crack image data based on image translation was therefore developed. Specifically, the method obtains training data for a deep learning neural network model by transforming an asphalt crack image into a concrete crack image, or vice versa. Ultimately, a robust crack detection algorithm is expected to result from the increased diversity of its training data.
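
A minimal PyTorch sketch of the cycle-consistency constraint at the heart of CycleGAN-style translation between the two crack domains; tiny placeholder convolutions stand in for the real generators, and the adversarial losses are omitted.

```python
import torch
import torch.nn as nn

G = nn.Conv2d(3, 3, 3, padding=1)    # asphalt -> concrete (placeholder generator)
F_ = nn.Conv2d(3, 3, 3, padding=1)   # concrete -> asphalt (placeholder generator)
l1 = nn.L1Loss()

asphalt = torch.rand(1, 3, 256, 256)
concrete = torch.rand(1, 3, 256, 256)

# Cycle consistency: translating to the other domain and back should reproduce
# the input; during training this loss is added to the adversarial losses.
cycle_loss = l1(F_(G(asphalt)), asphalt) + l1(G(F_(concrete)), concrete)
print("cycle-consistency loss:", float(cycle_loss))
```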

Comparison of CNN and GAN-based Deep Learning Models for Ground Roll Suppression (그라운드-롤 제거를 위한 CNN과 GAN 기반 딥러닝 모델 비교 분석)

  • Sangin Cho;Sukjoon Pyun
    • Geophysics and Geophysical Exploration
    • /
    • v.26 no.2
    • /
    • pp.37-51
    • /
    • 2023
  • The ground roll is the most common coherent noise in land seismic data and has an amplitude much larger than that of the reflection events we usually want to obtain. Therefore, ground roll suppression is a crucial step in seismic data processing. Several techniques, such as f-k filtering and the curvelet transform, have been developed to suppress the ground roll, but existing methods still require improvements in suppression performance and efficiency. Various recent studies on ground roll suppression have applied deep learning methods developed for image processing. In this paper, we introduce three models (DnCNN (De-noiseCNN), pix2pix, and CycleGAN), based on the convolutional neural network (CNN) or the conditional generative adversarial network (cGAN), for ground roll suppression and explain them in detail through numerical examples. Common shot gathers from the same field were divided into training and test datasets to compare the algorithms. We trained the models on the training data and evaluated their performance on the test data. Training these models with field data requires ground-roll-free data; therefore, the ground roll was suppressed by f-k filtering and the result used as the ground-truth data. To evaluate the deep learning models and compare their training results, we utilized quantitative indicators such as the correlation coefficient and the structural similarity index measure (SSIM), based on similarity to the ground-truth data. The DnCNN model exhibited the best performance, and we confirmed that the other models could also be applied to suppress the ground roll.
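
A small sketch of the evaluation step described above: comparing a model's output against the f-k-filtered ground truth with the correlation coefficient and SSIM. Random arrays stand in for real shot gathers.

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
ground_truth = rng.normal(size=(512, 128))                   # f-k filtered gather
prediction = ground_truth + rng.normal(0, 0.1, (512, 128))   # model output

corr = np.corrcoef(ground_truth.ravel(), prediction.ravel())[0, 1]
ssim = structural_similarity(ground_truth, prediction,
                             data_range=ground_truth.max() - ground_truth.min())
print(f"correlation: {corr:.3f}  SSIM: {ssim:.3f}")
```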

Data-Mining in Business Performance Database Using Explanation-Based Genetic Algorithms (설명기반 유전자알고리즘을 활용한 경영성과 데이터베이스의 데이터마이닝)

  • 조성훈;정민용
    • Korean Management Science Review
    • /
    • v.18 no.1
    • /
    • pp.135-145
    • /
    • 2001
  • In the recent environment of dynamic management, there is growing recognition that information and knowledge management systems are essential for efficient and effective decision making by CEOs. To cope with this situation, we suggest a data-mining scheme as a key component of an integrated information and knowledge management system. The proposed system measures business performance by considering both VA (Value-Added), which represents the stakeholder's point of view, and EVA (Economic Value-Added), which represents the shareholder's point of view. To mine new information and discover knowledge, we apply improved genetic algorithms that simultaneously consider predictability, understandability (lucidity), and reasonability factors, using a linear combination model as the GA learning structure. Although this model's predictability is lower than that of a non-linear model, it increases the understandability of the knowledge, that is, the meaning of the induced values. Moreover, we introduce a random-variable scheme based on the normal distribution for the initial chromosomes in the GAs, which is expected to increase the knowledge's reasonability, that is, the degree of expert acceptability. This scheme uses statistical correlation/determination coefficients calculated from the training data. To demonstrate the performance of the system, we conducted a case study using financial data of the Korean automobile industry over 16 years, from 1981 to 1996, taken from the database of KISFAS (Korea Investors Services Financial Analysis System).
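
A hedged sketch of two ideas in this abstract: chromosomes as coefficient vectors of a linear combination model, and an initial population drawn from normal distributions centered on training-data correlations. The GA operators are reduced to their simplest forms and are not the authors' improved algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                                   # financial indicators
y = X @ np.array([0.6, -0.2, 0.3]) + rng.normal(0, 0.1, 64)    # performance measure

# Initial chromosomes drawn from normal distributions centered on correlations.
corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(3)])
pop = rng.normal(loc=corr, scale=0.5, size=(40, 3))

def fitness(w):
    """Predictability of the linear combination model y = Xw (negative MSE)."""
    return -np.mean((X @ w - y) ** 2)

for _ in range(100):
    pop = pop[np.argsort([fitness(w) for w in pop])][-20:]          # selection
    parents = pop[rng.integers(0, 20, (20, 2))]
    children = parents.mean(axis=1) + rng.normal(0, 0.05, (20, 3))  # crossover + mutation
    pop = np.vstack([pop, children])

print("best linear weights:", max(pop, key=fitness).round(2))
```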

Discretization of Continuous-Valued Attributes for Classification Learning (분류학습을 위한 연속 애트리뷰트의 이산화 방법에 관한 연구)

  • Lee, Chang-Hwan
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.6
    • /
    • pp.1541-1549
    • /
    • 1997
  • Many classification algorithms require that training examples contain only discrete values. In order to use these algorithms when some attributes have continuous numeric values, the numeric attributes must be converted into discrete ones. This paper describes a new way of discretizing numeric values using information theory. Our method is context-sensitive in the sense that it takes into account the value of the target attribute. The amount of information each interval gives about the target attribute is measured using the Hellinger divergence, and the interval boundaries are decided so that each interval contains as equal an amount of information as possible. To compare our discretization method with current discretization methods, several popular classification data sets were selected for experiments. We use the back-propagation algorithm and ID3 as classification tools to compare the accuracy of our discretization method with that of other methods.
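
A small sketch of the paper's measure: the Hellinger divergence between the class distribution inside a candidate interval and the overall class distribution, quantifying how much information the interval carries about the target attribute. The equal-information boundary search itself is omitted, and the data are toy values.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger divergence between two discrete distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def class_dist(labels, n_classes):
    counts = np.bincount(labels, minlength=n_classes)
    return counts / counts.sum()

values = np.array([1.2, 1.5, 2.0, 2.2, 3.1, 3.5, 4.0, 4.4])  # continuous attribute
labels = np.array([0,   0,   0,   1,   1,   1,   0,   1])    # target attribute
overall = class_dist(labels, 2)

# Information carried by the interval values < 2.1 about the class label:
in_interval = labels[values < 2.1]
print("Hellinger divergence:", hellinger(class_dist(in_interval, 2), overall))
```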
