• Title/Summary/Keyword: train model

Search Result 1,719, Processing Time 0.027 seconds

Machine learning in concrete's strength prediction

  • Al-Gburi, Saddam N.A.;Akpinar, Pinar;Helwan, Abdulkader
    • Computers and Concrete
    • /
    • v.29 no.6
    • /
    • pp.433-444
    • /
    • 2022
  • Concrete's compressive strength is widely studied in order to understand many qualities and the grade of the concrete mixture. Conventional civil engineering tests involve time- and resource-consuming laboratory operations, which result in the deterioration of concrete samples. Proposing efficient non-destructive models for the prediction of concrete compressive strength will certainly yield advancements in concrete studies. In this study, the efficiency of using a radial basis function neural network (RBFNN), which is not common in this field, is studied for concrete compressive strength prediction. Complementary studies with a back propagation neural network (BPNN), which is commonly used in this field, have also been carried out in order to verify the efficiency of RBFNN for compressive strength prediction. A total of 13 input parameters, including novel ones such as the compositional information of cement and fly ash, have been employed in the prediction models with RBFNN and BPNN, since all of these parameters are known to influence concrete strength. Three different train:test ratios were tested with both models, while different numbers of hidden neurons, epochs, and spread values were introduced to determine the optimum parameters yielding the best prediction results. The prediction results obtained by RBFNN yield satisfactorily high correlation coefficients and satisfactorily low mean square error values when compared to the results of previous studies, indicating the efficiency of the proposed model.
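
A radial basis function network of the kind described can be prototyped compactly: Gaussian basis functions centered by k-means, a spread parameter, and linear output weights solved by least squares. The sketch below is a minimal illustration in Python (NumPy and scikit-learn); the synthetic 13-feature input, the 70:30 split, the number of centers, and the spread value are assumptions for demonstration, not the paper's settings.

```python
# Minimal RBF-network regressor sketch, assuming a tabular dataset X
# (n_samples x 13 mix/composition features) and target y (compressive
# strength). Centers, spread, and the split ratio are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

def rbf_design_matrix(X, centers, spread):
    # Gaussian basis: phi_ij = exp(-||x_i - c_j||^2 / (2 * spread^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * spread ** 2))

def fit_rbfnn(X, y, n_centers=20, spread=1.0):
    centers = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X).cluster_centers_
    Phi = rbf_design_matrix(X, centers, spread)
    Phi = np.hstack([Phi, np.ones((len(X), 1))])   # bias column
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # linear output weights
    return centers, spread, w

def predict_rbfnn(X, centers, spread, w):
    Phi = rbf_design_matrix(X, centers, spread)
    Phi = np.hstack([Phi, np.ones((len(X), 1))])
    return Phi @ w

# Usage with synthetic stand-in data (13 features, as in the abstract):
X = np.random.rand(200, 13)
y = X @ np.random.rand(13) + 0.1 * np.random.randn(200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
centers, spread, w = fit_rbfnn(X_tr, y_tr)
mse = np.mean((predict_rbfnn(X_te, centers, spread, w) - y_te) ** 2)
```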

KI-HABS: Key Information Guided Hierarchical Abstractive Summarization

  • Zhang, Mengli;Zhou, Gang;Yu, Wanting;Liu, Wenfen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.12
    • /
    • pp.4275-4291
    • /
    • 2021
  • With the unprecedented growth of textual information on the Internet, an efficient automatic summarization system has become an urgent need. Recently, neural network models based on the encoder-decoder architecture with an attention mechanism have demonstrated powerful capabilities in the sentence summarization task. However, for paragraphs or longer documents, these models fail to mine the core information in the input text, which leads to information loss and repetition. In this paper, we propose an abstractive document summarization method that applies guidance signals from key sentences to the encoder based on the hierarchical encoder-decoder architecture, denoted KI-HABS. Specifically, we first train an extractor to extract key sentences from the input document using a hierarchical bidirectional GRU. Then, we encode the key sentences into a sentence-level key information representation. Finally, we adopt selective encoding strategies guided by the key information representation to filter source information, which establishes a connection between the key sentences and the document. We use the CNN/Daily Mail and Gigaword datasets to evaluate our model. The experimental results demonstrate that our method generates more informative and concise summaries, achieving better performance than the competitive models.
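
The selective encoding step, in which each encoder state is filtered by a pooled key-sentence representation, can be illustrated with a small gating module. The sketch below is a hedged Python (PyTorch) illustration; the layer names, the pooling of key sentences into a single vector, and the hidden size are assumptions rather than the KI-HABS reference implementation.

```python
# Hedged sketch of a key-information-guided selective gate, assuming
# sentence-level encoder states h (batch x seq x d) and a pooled
# key-sentence representation k (batch x d).
import torch
import torch.nn as nn

class SelectiveGate(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.w_h = nn.Linear(d_model, d_model, bias=False)
        self.w_k = nn.Linear(d_model, d_model, bias=True)

    def forward(self, h, k):
        # gate_i = sigmoid(W_h h_i + W_k k): filters each encoder state
        # by its relevance to the key-sentence representation.
        gate = torch.sigmoid(self.w_h(h) + self.w_k(k).unsqueeze(1))
        return h * gate

# Usage with dummy tensors:
h = torch.randn(2, 30, 256)          # encoder states for 30 sentences
k = torch.randn(2, 256)              # pooled key-sentence representation
filtered = SelectiveGate(256)(h, k)  # same shape as h
```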

A Defect Detection Algorithm of Denim Fabric Based on Cascading Feature Extraction Architecture

  • Shuangbao, Ma;Renchao, Zhang;Yujie, Dong;Yuhui, Feng;Guoqin, Zhang
    • Journal of Information Processing Systems
    • /
    • v.19 no.1
    • /
    • pp.109-117
    • /
    • 2023
  • Defect detection is one of the key factors in fabric quality control. To improve the speed and accuracy of denim fabric defect detection, this paper proposes a defect detection algorithm based on a cascading feature extraction architecture. Firstly, this paper extracts the weight parameters of a VGG16 model pre-trained on the large-scale ImageNet dataset and exploits their transferability to train a defect detection classifier and a defect recognition classifier, respectively. Secondly, partial weight parameters of the convolutional layers of these two models were retrained and adjusted on a high-definition fabric defect dataset. The last step merges the two models to obtain the defect detection algorithm based on the cascading architecture. Comparative experiments were then conducted between this improved defect detection algorithm and other feature extraction methods, such as VGG16, ResNet-50, and Xception. The experimental results show that the detection accuracy of this algorithm reaches 94.3% and the speed is also increased by 1-3 percentage points.
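
The transfer-learning step described here, reusing ImageNet-trained VGG16 weights and retraining only part of the network on fabric images, can be sketched briefly. The following Python (PyTorch/torchvision) snippet is an illustrative assumption of one such classifier; the two-class head, the choice of which convolutional block to unfreeze, and the cascading merge are not taken from the paper.

```python
# Minimal VGG16 transfer-learning sketch: reuse ImageNet features and
# retrain a small head for fabric defects; unfreezing only the last
# convolutional block is an illustrative choice.
import torch.nn as nn
from torchvision import models

def build_defect_classifier(num_classes=2, unfreeze_last_conv_block=True):
    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for p in vgg.features.parameters():
        p.requires_grad = False                       # freeze convolutional backbone
    if unfreeze_last_conv_block:
        for p in vgg.features[24:].parameters():      # fine-tune last conv block only
            p.requires_grad = True
    vgg.classifier[6] = nn.Linear(4096, num_classes)  # replace 1000-way ImageNet head
    return vgg

model = build_defect_classifier()
```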

Flow Assessment and Prediction in the Asa River Watershed using different Artificial Intelligence Techniques on Small Dataset

  • Kareem Kola Yusuff;Adigun Adebayo Ismail;Park Kidoo;Jung Younghun
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.95-95
    • /
    • 2023
  • Common hydrological problems of developing countries include poor data management, insufficient measuring devices, and ungauged watersheds, leading to small or unreliable data availability. This has greatly affected the adoption of artificial intelligence techniques for flood risk mitigation and damage control in several developing countries. While climate datasets have recorded resounding applications, they exhibit more uncertainties than ground-based measurements. To encourage AI adoption in developing countries with small ground-based datasets, we propose data augmentation for regression tasks and compare the performance of different AI models with and without data augmentation. More focus is placed on simple models that offer lower computational cost and higher accuracy than deeper models, which train longer and consume computing resources that may be insufficient in developing countries. To implement this approach, we modelled and predicted streamflow data of the Asa River Watershed located in Ilorin, Kwara State, Nigeria. Results revealed that adequate hyperparameter tuning and proper model selection improve streamflow prediction on small water datasets. This approach can be implemented in data-scarce regions to ensure that timely flood intervention and early warning systems are adopted in developing countries.
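
Data augmentation for regression on a small hydrological dataset is often done by replicating samples with small perturbations. The sketch below is one hedged way to do this in Python (NumPy); the abstract does not state which augmentation scheme was used, so the jitter scale and copy count are illustrative assumptions.

```python
# Hedged sketch of noise-injection augmentation for a small streamflow
# regression dataset; the 4-feature input is a stand-in.
import numpy as np

def augment_regression(X, y, n_copies=5, noise_frac=0.02, seed=0):
    """Replicate samples with small Gaussian jitter on inputs and targets."""
    rng = np.random.default_rng(seed)
    X_aug, y_aug = [X], [y]
    for _ in range(n_copies):
        X_aug.append(X + rng.normal(0, noise_frac * X.std(axis=0), X.shape))
        y_aug.append(y + rng.normal(0, noise_frac * y.std(), y.shape))
    return np.vstack(X_aug), np.concatenate(y_aug)

# Usage: a 60-sample feature matrix grows to 360 samples.
X = np.random.rand(60, 4)
y = np.random.rand(60)
X_big, y_big = augment_regression(X, y)
```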

Study on the Vibration Characteristics of Yaw Gear System for Large-Capacity Offshore Wind Turbine

  • HyoungWoo Lee;SeoWon Jang;Seok-Hwan Ahn
    • Journal of Ocean Engineering and Technology
    • /
    • v.37 no.4
    • /
    • pp.164-171
    • /
    • 2023
  • Vibration and noise must be considered to maximize the efficiency of a yaw system and reduce the fatigue load acting on a wind turbine. This study investigated a method for analyzing yaw-system vibration based on the change in the load-duration distribution (LDD). A substructure synthesis method was combined with a planetary gear train rotational vibration model and finite element models of the housing and carriers. For the vibration excitation sources, the mass imbalance, gear mesh frequency, and bearing defect frequency were considered, and a critical speed analysis was performed. The analysis results showed that the critical speed did not occur within the operating speed range, but a defect occurred in the bearing of the first-stage planetary gear system. It was found that the bearing stiffness and first natural frequency increased with the LDD load. In addition, no vibration occurred in the operating speed range under any of the LDD loads. Because the rolling bearing stiffness changed with the LDD, it was necessary to consider the LDD when analyzing the wind turbine vibration.

Usage of coot optimization-based random forests analysis for determining the shallow foundation settlement

  • Yi, Han;Xingliang, Jiang;Ye, Wang;Hui, Wang
    • Geomechanics and Engineering
    • /
    • v.32 no.3
    • /
    • pp.271-291
    • /
    • 2023
  • Settlement estimation in cohesive materials is a crucial topic to tackle because of the complexity of cohesive soil texture, which can only be addressed roughly by substitute solutions. The goal of this research was to implement recently developed machine learning methods as effective tools to predict the settlement (Sm) of shallow foundations on cohesive soils. These models include support vector regression (SVR) and random forests (RF) hybridized with the coot optimization algorithm (COM) and the black widow optimization algorithm (BWOA). The results indicate that all created systems accurately simulated the Sm, with R2 better than 0.979 and 0.9765 for the training and testing phases, respectively. This indicates extraordinary efficiency and a good correlation between the experimental and simulated Sm. The models' results outperformed those of ANFIS-PSO, and the COM-RF findings were clearly superior to those reported in the literature. By analyzing the established designs using different analysis aspects, such as various error criteria, Taylor diagrams, uncertainty analyses, and error distributions, it was possible to conclude that the recommended COM-RF was the best-performing approach for forecasting the Sm of shallow foundations, while the other techniques were also reliable.
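
The hybridization amounts to using a metaheuristic to tune the random forest's hyperparameters against a cross-validated fitness. Since the abstract does not detail the coot optimization algorithm, the Python (scikit-learn) sketch below substitutes a plain random search as a stand-in for the metaheuristic; the parameter bounds and the R2 fitness are illustrative assumptions.

```python
# Hedged sketch of hyperparameter search for a random forest settlement
# model; a plain random search stands in for the coot optimizer.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def fitness(params, X, y):
    n_estimators, max_depth = int(params[0]), int(params[1])
    model = RandomForestRegressor(n_estimators=n_estimators, max_depth=max_depth, random_state=0)
    return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

def search(X, y, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(n_iter):
        cand = [rng.integers(50, 500), rng.integers(2, 20)]  # candidate "position"
        score = fitness(cand, X, y)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

# Usage with stand-in data:
X = np.random.rand(120, 6); y = np.random.rand(120)
best_params, best_r2 = search(X, y)
```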

Pixel-level prediction of velocity vectors on hull surface based on convolutional neural network (합성곱 신경망 기반 선체 표면 유동 속도의 픽셀 수준 예측)

  • Jeongbeom Seo;Dayeon Kim;Inwon Lee
    • Journal of the Korean Society of Visualization
    • /
    • v.21 no.1
    • /
    • pp.18-25
    • /
    • 2023
  • Recently, high-dimensional data prediction technology based on neural networks has shown compelling results in many different fields, including engineering. In particular, many variants of the convolutional neural network are widely utilized to develop pixel-level prediction models for high-dimensional data such as images or physical field values from sensors. In this study, the velocity vector field of ideal flow on a ship's hull surface is estimated at the pixel level by a U-Net. First, potential flow analysis was conducted for a set of hull form data generated by a hull form transformation method. Thereafter, four different neural networks with a U-shaped structure were configured to learn the velocity vectors at the node positions of the pre-processed hull form data. As a result, for the test hull forms, it was confirmed that the network with short skip connections gives the most accurate predictions of streamlines and velocity magnitude, and the results are also in good agreement with the potential flow analysis results. However, in some cases that have little in common with the training data in terms of speed or shape, the network shows relatively high error in regions of large curvature.
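
A U-shaped encoder-decoder with skip connections that maps a geometry image to a per-pixel velocity field can be sketched compactly. The Python (PyTorch) snippet below is a minimal illustration; the channel counts, depth, single-channel geometry input, and two-component velocity output are assumptions, not the networks compared in the paper.

```python
# Minimal U-shaped network sketch with short skip connections, mapping a
# hull-geometry image to a 2-channel velocity field.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class MiniUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bott = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)        # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)         # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, out_ch, 1)   # per-pixel velocity components

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# Usage: a 1-channel 64x64 geometry map -> 2-channel velocity field.
pred = MiniUNet()(torch.randn(4, 1, 64, 64))   # shape (4, 2, 64, 64)
```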

A Study on the Concentration of Wave Energy by Construction of a Submerged Coastal Structure (해저구조물 설치에 따른 파랑에너지 집적에 관한 연구)

  • Gug, S.G.;Lee, J.W.
    • Journal of Korean Port Research
    • /
    • v.6 no.1
    • /
    • pp.69-91
    • /
    • 1992
  • A new type of horizontal submerged breakwater, or fixed structure, for controlling waves near the coastal area is introduced to focus wave energy in front of or behind it. The water depth near the structure is intentionally changed gradually to obtain refraction and diffraction effects. The concentration of wave energy due to the structure was analyzed for the selected structure design. The shape of the submerged structure under consideration is a circular arc combined with an elliptical curve, so that the extreme edge of the structure causes wave scattering rather than wave reflection. The orientation of the structure relative to the incident wave can be changed easily in the model. Applying a regular wave train, the following were examined: 1) whether a crescent-shaped submerged structure designed by wave refraction theory can concentrate wave energy at a focal zone behind and in front of it without wave breaking; 2) the location of the maximum wave amplification factor in terms of incident wave direction, wave period, etc. In any event, the study would contribute to controlling waves near the coastal area and protecting a beach from erosion without interrupting the ocean view; it is a useful study for concentrating wave energy efficiently through the increase of wave height.

Teaching-learning-based strategy to retrofit neural computing toward pan evaporation analysis

  • Rana Muhammad Adnan Ikram;Imran Khan;Hossein Moayedi;Loke Kok Foong;Binh Nguyen Le
    • Smart Structures and Systems
    • /
    • v.32 no.1
    • /
    • pp.37-47
    • /
    • 2023
  • Indirect determination of pan evaporation (PE) has been highly regarded due to the advantages of the intelligent models employed for this objective. This work pursues improving the reliability of a popular intelligent model, namely the multi-layer perceptron (MLP), by surmounting its computational drawbacks. Available climatic data from the Fresno weather station (California, USA) are used for this study. In the first step, testing several of the most common trainers of the MLP revealed the superiority of the Levenberg-Marquardt (LM) algorithm; it is therefore considered the classical training approach. Next, the optimum configurations of two metaheuristic algorithms, namely the cuttlefish optimization algorithm (CFOA) and teaching-learning-based optimization (TLBO), are incorporated to optimally train the MLP. In these two models, the LM is replaced with metaheuristic strategies. Overall, the results demonstrated the high competency of the MLP (correlations above 0.997) in the presence of all three strategies. It was also observed that the TLBO enhances the learning and prediction accuracy of the classical MLP (by nearly 7.7% and 9.2%, respectively), while the CFOA performed worse than the LM. Moreover, a comparison of the efficiency of the metaheuristic optimizers used showed that the TLBO is a more time-effective technique for predicting the PE. Hence, it can serve as a promising approach for indirect PE analysis.
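
Replacing a gradient-based trainer with TLBO means treating the MLP weights as a flat vector and evolving a population of such vectors through teacher and learner phases. The Python (NumPy) sketch below illustrates this under stated assumptions; the tiny one-hidden-layer network, population size, and iteration count are illustrative choices, not the paper's configuration.

```python
# Hedged sketch of teaching-learning-based optimization (TLBO) fitting the
# weights of a tiny one-hidden-layer MLP; sizes are illustrative.
import numpy as np

def mlp_forward(w, X, n_in, n_hid):
    # unpack flat weight vector into layer matrices
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid]; i += n_hid
    b2 = w[i]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def tlbo_train(X, y, n_hid=6, pop=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * n_hid + n_hid + n_hid + 1
    P = rng.normal(0, 0.5, (pop, dim))
    mse = lambda w: np.mean((mlp_forward(w, X, X.shape[1], n_hid) - y) ** 2)
    f = np.array([mse(w) for w in P])
    for _ in range(iters):
        teacher, mean = P[f.argmin()], P.mean(axis=0)
        for i in range(pop):
            # teacher phase: move toward the best learner
            TF = rng.integers(1, 3)                      # teaching factor 1 or 2
            cand = P[i] + rng.random(dim) * (teacher - TF * mean)
            if mse(cand) < f[i]:
                P[i], f[i] = cand, mse(cand)
            # learner phase: interact with a random peer
            j = rng.choice([k for k in range(pop) if k != i])
            step = (P[i] - P[j]) if f[i] < f[j] else (P[j] - P[i])
            cand = P[i] + rng.random(dim) * step
            if mse(cand) < f[i]:
                P[i], f[i] = cand, mse(cand)
    return P[f.argmin()]

# Usage with a synthetic climatic-feature stand-in:
X = np.random.rand(100, 4); y = X.sum(axis=1)
w_best = tlbo_train(X, y)
```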

Prediction of Cryogenic- and Room-Temperature Deformation Behavior of Rolled Titanium using Machine Learning (타이타늄 압연재의 기계학습 기반 극저온/상온 변형거동 예측)

  • S. Cheon;J. Yu;S.H. Lee;M.-S. Lee;T.-S. Jun;T. Lee
    • Transactions of Materials Processing
    • /
    • v.32 no.2
    • /
    • pp.74-80
    • /
    • 2023
  • The deformation behavior of commercially pure titanium (CP-Ti) is highly dependent on material and processing parameters, such as deformation temperature, deformation direction, and strain rate. This study aims to predict the multivariable and nonlinear tensile behavior of CP-Ti using machine learning based on three algorithms: artificial neural network (ANN), light gradient boosting machine (LGBM), and long short-term memory (LSTM). The prediction accuracy for tensile behavior at cryogenic temperature was lower than that at room temperature due to the larger data scatter in the training dataset used for machine learning. Although LGBM showed the lowest root mean squared error, it was not the best strategy owing to overfitting and a step-function morphology different from the actual data. LSTM performed the best, as it effectively learned the continuous characteristics of a flow curve and required reduced time for machine learning, even without a sufficient database and hyperparameter tuning.
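
Modeling a flow curve as a sequence, as the LSTM approach implies, can be sketched as a sequence-to-sequence regression: the stress at each strain step is predicted from the process conditions and the running strain. The Python (PyTorch) snippet below is a hedged illustration; the feature layout and layer sizes are assumptions, not the paper's architecture.

```python
# Hedged sketch of an LSTM flow-curve regressor: given process conditions
# plus the strain value at each step, predict the stress sequence.
import torch
import torch.nn as nn

class FlowCurveLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)           # stress at each strain step

    def forward(self, x):                           # x: (batch, steps, features)
        out, _ = self.lstm(x)
        return self.head(out).squeeze(-1)           # (batch, steps)

# Usage: features = [temperature, direction, strain rate, strain] per step.
x = torch.randn(8, 200, 4)
stress = FlowCurveLSTM()(x)                         # (8, 200) predicted flow curve
```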