• Title/Summary/Keyword: ANN model

830 search results

Export-Import Value Nowcasting Procedure Using Big Data-AIS and Machine Learning Techniques

  • NICKELSON, Jimmy;NOORAENI, Rani;EFLIZA, EFLIZA
    • Asian Journal of Business Environment / v.12 no.3 / pp.1-12 / 2022
  • Purpose: This study aims to investigate whether AIS data can be used as a supporting indicator or as an initial signal to describe Indonesia's export-import conditions in real time. Research design, data, and methodology: This study performs several stages of data selection to obtain indicators from AIS that truly reflect export-import activities in Indonesia. It also investigates the potential of AIS indicators for forecasting the value and volume of Indonesian exports and imports using conventional statistical methods and machine learning techniques. Results: The six preprocessing stages defined in this study filtered the AIS data from 661.8 million messages to 73.5 million messages, and seven predictors were formed from the selected AIS data. The AIS indicators can provide an initial signal about Indonesia's export-import activities, and each export or import activity has its own predictor. Conventional statistical methods and machine learning techniques performed comparably in forecasting Indonesia's exports and imports. Conclusions: Big data from AIS can be used as a supporting indicator of the condition of export-import values in Indonesia, and the right method of building indicators makes the data valuable for forecasting performance.
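As a rough illustration of the comparison the abstract describes, the sketch below nowcasts a synthetic monthly trade series from seven stand-in "AIS-derived" predictors using a conventional statistical model (linear regression) and a machine-learning model (random forest). The data, sample sizes, and model choices here are assumptions for illustration, not the study's actual series or methods.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 60                                   # five years of monthly observations
X = rng.normal(size=(n, 7))              # seven stand-in AIS-derived predictors
y = X @ rng.normal(size=7) + rng.normal(scale=0.1, size=n)  # trade value

# Hold out the last 12 months for evaluation.
X_tr, X_te, y_tr, y_te = X[:48], X[48:], y[:48], y[48:]

lin = LinearRegression().fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

rmse_lin = mean_squared_error(y_te, lin.predict(X_te)) ** 0.5
rmse_rf = mean_squared_error(y_te, rf.predict(X_te)) ** 0.5
print(f"linear RMSE={rmse_lin:.3f}  random forest RMSE={rmse_rf:.3f}")
```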

An insight into the prediction of mechanical properties of concrete using machine learning techniques

  • Neeraj Kumar Shukla;Aman Garg;Javed Bhutto;Mona Aggarwal;M.Ramkumar Raja;Hany S. Hussein;T.M. Yunus Khan;Pooja Sabherwal
    • Computers and Concrete / v.32 no.3 / pp.263-286 / 2023
  • Experimenting with concrete to determine its compressive and tensile strengths is a laborious and time-consuming operation that requires close attention to detail. For much of the last several decades, researchers around the world have attempted to use machine learning algorithms to make accurate predictions about the technical qualities of various kinds of concrete. The available research on estimating the strength of concrete highlights the applicability and precision of the various machine learning techniques. This article summarizes previous research on estimating the strength of concrete using a variety of machine learning methods and classifies the existing literature according to the machine learning technique used. Because determining the compressive strength of concrete experimentally is laborious and time-consuming, this review aims to guide researchers working on machine-learning-based prediction of compressive strength by providing recommendations and by stating the benefits and drawbacks associated with each model.

Prediction of Cryogenic- and Room-Temperature Deformation Behavior of Rolled Titanium using Machine Learning (타이타늄 압연재의 기계학습 기반 극저온/상온 변형거동 예측)

  • S. Cheon;J. Yu;S.H. Lee;M.-S. Lee;T.-S. Jun;T. Lee
    • Transactions of Materials Processing / v.32 no.2 / pp.74-80 / 2023
  • The deformation behavior of commercially pure titanium (CP-Ti) is highly dependent on material and processing parameters, such as deformation temperature, deformation direction, and strain rate. This study aims to predict the multivariable and nonlinear tensile behavior of CP-Ti using machine learning based on three algorithms: artificial neural network (ANN), light gradient boosting machine (LGBM), and long short-term memory (LSTM). The predictivity for tensile behavior at cryogenic temperature was lower than that at room temperature owing to larger data scattering in the training dataset used for machine learning. Although LGBM showed the lowest root mean squared error, it was not the best strategy owing to overfitting and a step-function morphology different from the actual data. LSTM performed best, as it effectively learned the continuous characteristics of a flow curve and required less time for machine learning, even without a sufficient database or hyperparameter tuning.
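The step-function morphology noted for LGBM is a general property of tree ensembles. A minimal sketch, assuming a synthetic Hollomon-type flow curve rather than the CP-Ti data, with sklearn's GradientBoostingRegressor and MLPRegressor standing in for LGBM and the ANN:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
strain = np.linspace(0.002, 0.3, 300)
# Hollomon-type hardening curve (illustrative constants), stress in GPa
stress = 0.7 * strain**0.2 + rng.normal(scale=0.002, size=strain.size)

X = strain.reshape(-1, 1)
gbm = GradientBoostingRegressor(random_state=0).fit(X, stress)
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                   random_state=0).fit(X, stress)

# Tree ensembles cannot extrapolate: beyond the training strain range the
# boosted prediction is flat (a step-function artifact), while the neural
# network varies smoothly with strain.
x_new = np.array([[0.35], [0.40]])
print(gbm.predict(x_new))   # two identical values
print(ann.predict(x_new))
```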

Estimation of frost durability of recycled aggregate concrete by hybridized Random Forests algorithms

  • Rui Liang;Behzad Bayrami
    • Steel and Composite Structures / v.49 no.1 / pp.91-107 / 2023
  • An effective approach to promoting sustainability within the construction industry is the use of recycled aggregate concrete (RAC) as a substitute for natural aggregates. Ensuring the frost resilience of RAC technologies is crucial to facilitating their adoption in regions characterized by cold temperatures. The main aim of this study was to use the Random Forests (RF) approach to forecast the frost durability of RAC in cold locations, with a focus on the durability factor (DF) value. Herein, three optimization algorithms, the sine-cosine optimization algorithm (SCA), the black widow optimization algorithm (BWOA), and the equilibrium optimizer (EO), were considered for determining optimal values of the RF hyperparameters. The findings show that all developed systems faithfully represented the DF, with R2 values for the training and testing phases better than 0.9539 and 0.9777, respectively. In both the learning and assessment stages, EO-RF was found to be superior to BWOA-RF and SCA-RF, and the best model (EO-RF) also outperformed an ANN from the literature by raising R2 and reducing RMSE. Considering these justifications, together with the comparisons from the metrics and the Taylor diagram, although the other RF models were also reliable in predicting the frost durability of RAC based on the DF value in cold climates, the developed EO-RF strategy excelled them all.
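The hyperparameter-tuning setup can be sketched generically. Here RandomizedSearchCV stands in for the SCA/BWOA/EO metaheuristics, and make_regression stands in for the RAC durability data; both substitutions, and the parameter grid, are assumptions for illustration:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV, train_test_split

X, y = make_regression(n_samples=300, n_features=8, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Search over RF hyperparameters, scoring candidates by cross-validated R^2,
# in the same spirit as tuning RF with a metaheuristic optimizer.
search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions={
        "n_estimators": [100, 200, 400],
        "max_depth": [None, 5, 10, 20],
        "min_samples_leaf": [1, 2, 4],
    },
    n_iter=10, cv=3, scoring="r2", random_state=0,
)
search.fit(X_tr, y_tr)
r2_test = search.score(X_te, y_te)
print(search.best_params_, f"test R2={r2_test:.4f}")
```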

Numerical data-driven machine learning model to predict the strength reduction of fire damaged RC columns

  • HyunKyoung Kim;Hyo-Gyoung Kwak;Ju-Young Hwang
    • Computers and Concrete / v.32 no.6 / pp.625-637 / 2023
  • This paper introduces the application of ML approaches to determining the resisting capacity of fire-damaged RC columns, on the basis of analysis-data-driven ML modeling. Considering the characteristics of the structural behavior of fire-damaged RC columns, five representative approaches, kernel SVM, ANN, RF, XGB, and LGBM, are adopted and applied. Additional partial monotonic constraints are adopted in the modeling to ensure that the resisting capacity of an RC column decreases monotonically with fire-exposure time, and further measures are added to mitigate the heterogeneous composition of the training data. Determining the resisting capacity of a fire-damaged RC column conventionally requires many complex solution procedures, from heat transfer analysis to rigorous nonlinear analyses repeated over time; because ML approaches significantly reduce this computation time, the introduced approach can be used more effectively in large, complex structures with many RC members. Because very little experimental data is available, the training data are determined analytically from a heat transfer analysis and a subsequent nonlinear finite element (FE) analysis, whose accuracy was previously verified through a correlation study between the numerical results and experimental data. The results show that the resisting capacity of fire-damaged RC columns can be predicted effectively by ML approaches.

Navigating the Transformative Landscape of Virtual Education Trends across India

  • Asha SHARMA;Aditya MISHRA
    • Fourth Industrial Review / v.4 no.1 / pp.1-9 / 2024
  • Purpose: Education is a fundamental human right across the world, and in recent years the trend toward virtual education has grown tremendously. The paper aims to find the impact of adoption, accessibility, interactions, knowledge, and satisfaction on the success of the transformation towards virtual education. Research design, data and methodology: Primary data were gathered through standardized questionnaires answered by students taking admission to virtual higher education. Of the 250 responses, only 122 were complete and were used in further analysis. A convenience random sampling method was used, and the responses were evaluated on a five-point Likert scale. SmartPLS and SPSS 19 were used to apply the statistical tools, and the fitness of the model was re-checked with an artificial neural network (ANN). Result: The results show that adoption, accessibility, and interactions have a significant impact on knowledge, knowledge influences satisfaction, and satisfaction has a meaningful impact on the success of the transformation towards virtual education. Conclusion: It can be concluded that virtual education has the potential to change the future of the education system in India. The highest importance is attributed to satisfaction (100%), followed by adoption (98.7%), knowledge (91.4%), accessibility (62%), and interaction (29.2%).

A conditionally applied neural network algorithm for PAPR reduction without the use of a recovery process

  • Eldaw E. Eldukhri;Mohammed I. Al-Rayif
    • ETRI Journal / v.46 no.2 / pp.227-237 / 2024
  • This study proposes a novel, conditionally applied neural network technique to reduce the overall peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing (OFDM) system while maintaining an acceptable bit error rate (BER) level. The main purpose of the proposed scheme is to adjust only those subcarriers whose peaks exceed a given threshold. In this respect, the developed C-ANN algorithm suppresses only the peaks of the targeted subcarriers by slightly shifting the locations of their corresponding frequency samples without affecting their phase orientations. In turn, this achieves a reasonable system performance by sustaining a tolerable BER. For practical reasons and to cover a wide range of application scenarios, the threshold for the subcarrier peaks was chosen to be proportional to the saturation level of the nonlinear power amplifier used to pass the generated OFDM blocks. Consequently, the optimal values of the factor controlling the peak threshold were obtained that satisfy both reasonable PAPR reduction and acceptable BER levels. Furthermore, the proposed system does not require a recovery process at the receiver, thus making the computational process less complex. The simulation results show that the proposed system model performed satisfactorily, attaining both low PAPR and BER for specific application settings using comparatively fewer computations.
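PAPR itself is simple to compute. The sketch below measures the PAPR of one QPSK-modulated OFDM symbol and then applies naive clipping to show the reduction-versus-distortion trade-off the abstract describes; the subcarrier count, modulation, and clipping rule are illustrative assumptions, not the proposed C-ANN algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # subcarriers
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(N)      # time-domain OFDM symbol

power = np.abs(x) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())

# Crude peak suppression: clip magnitudes above a threshold tied to the
# mean power. This lowers PAPR at the cost of distortion (and hence BER),
# the trade-off the proposed scheme manages by shifting only the targeted
# frequency samples instead.
thresh = 1.2 * np.sqrt(power.mean())
clipped = np.where(np.abs(x) > thresh, thresh * x / np.abs(x), x)
p2 = np.abs(clipped) ** 2
papr_clipped_db = 10 * np.log10(p2.max() / p2.mean())
print(f"PAPR {papr_db:.2f} dB -> clipped {papr_clipped_db:.2f} dB")
```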

Using Machine Learning Techniques for Accurate Attack Detection in Intrusion Detection Systems using Cyber Threat Intelligence Feeds

  • Ehtsham Irshad;Abdul Basit Siddiqui
    • International Journal of Computer Science & Network Security / v.24 no.4 / pp.179-191 / 2024
  • With the advancement of modern technology, cyber-attacks are constantly rising, and organizations need specialized defense systems to protect against these threats. Malicious behavior in the network is discovered using security tools such as intrusion detection systems (IDS), firewalls, antimalware systems, and security information and event management (SIEM), which help defend businesses from attacks. This study presents the role of cyber-threat intelligence (CTI) in delivering advanced threat feeds for precise attack detection in intrusion detection systems. In the proposed work, CTI feeds are utilized to detect attacks accurately in an intrusion detection system, with the ultimate objective of identifying the attacker behind the attack. Several datasets were analyzed for attack detection. With the proposed approach, the ability to identify network attacks improved through the use of machine learning algorithms. The proposed model provides 98% accuracy, 97% precision, and 96% recall.
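The reported figures correspond to standard confusion-matrix metrics for a binary attack/benign classifier. A minimal sketch with made-up counts (not the study's results):

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, and recall from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # of flagged attacks, how many were real
    recall = tp / (tp + fn)      # of real attacks, how many were flagged
    return accuracy, precision, recall

# Hypothetical counts: 100 real attacks, 100 benign flows.
acc, prec, rec = metrics(tp=96, fp=3, fn=4, tn=97)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f}")
```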

EPB-TBM performance prediction using statistical and neural intelligence methods

  • Ghodrat Barzegari;Esmaeil Sedghi;Ata Allah Nadiri
    • Geomechanics and Engineering / v.37 no.3 / pp.197-211 / 2024
  • This research studies the effect of geotechnical factors on EPB-TBM performance parameters. The modeling was performed using simple and multivariate linear regression methods, artificial neural networks (ANNs), and the Sugeno fuzzy logic (SFL) algorithm. In the ANN, 80% of the data were randomly allocated to training and 20% to network testing, while in the SFL algorithm, 75% of the data were used for training and 25% for testing. The coefficient of determination (R2) obtained between the observed and estimated values in this model for the thrust force and cutterhead torque was 0.19 and 0.52, respectively. The results showed that the SFL outperformed the other models in predicting the target parameters: its R2 between observed and predicted values for thrust force and cutterhead torque was 0.73 and 0.63, respectively. The sensitivity analysis shows that the internal friction angle (φ) and standard penetration number (SPT) have the greatest impact on thrust force, while earth pressure and overburden thickness have the greatest effect on cutterhead torque.
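The ANN protocol described (a random 80/20 train/test split and R2 between observed and predicted values) can be sketched as follows; the synthetic predictors and target stand in for the geotechnical data, and the network size is an assumption:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))            # six stand-in geotechnical predictors
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=400)  # e.g. thrust force

# Random 80/20 split, as described for the ANN model.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
ann.fit(X_tr, y_tr)
r2 = r2_score(y_te, ann.predict(X_te))   # observed vs. predicted R^2
print(f"test R2 = {r2:.3f}")
```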

Transfer Learning based DNN-SVM Hybrid Model for Breast Cancer Classification

  • Gui Rae Jo;Beomsu Baek;Young Soon Kim;Dong Hoon Lim
    • Journal of the Korea Society of Computer and Information / v.28 no.11 / pp.1-11 / 2023
  • Breast cancer is the disease that affects women the most worldwide. With the development of computer technology, the efficiency of machine learning has increased, and it therefore plays an important role in cancer detection and diagnosis. Deep learning is a field of machine learning based on artificial neural networks; its performance has improved rapidly in recent years, and its range of applications is expanding. In this paper, we propose a DNN-SVM hybrid model for breast cancer classification that combines a deep neural network (DNN) based on transfer learning with a support vector machine (SVM). The proposed transfer-learning-based model is effective with small training data, learns quickly, and can improve performance by combining the advantages of the individual DNN and SVM models. Performance tests with the WOBC and WDBC breast cancer datasets from the UCI machine learning repository show that, across various performance measures, the proposed DNN-SVM hybrid model is superior to single models such as logistic regression, DNN, and SVM, and to ensemble models such as random forest.
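One common way to realize such a hybrid is to reuse a trained network's hidden layer as a feature extractor for an SVM. The sketch below does this on sklearn's copy of the WDBC data, with a small MLP trained from scratch standing in for the transfer-learning DNN; the layer sizes, kernel, and split are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # WDBC data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Stand-in "DNN": a small MLP trained on the task.
dnn = MLPClassifier(hidden_layer_sizes=(30, 15), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

def hidden_features(mlp, X):
    """ReLU activations of the network's first hidden layer."""
    return np.maximum(0, X @ mlp.coefs_[0] + mlp.intercepts_[0])

# SVM classifies the learned features instead of the raw inputs.
svm = SVC(kernel="rbf").fit(hidden_features(dnn, X_tr), y_tr)
acc = svm.score(hidden_features(dnn, X_te), y_te)
print(f"hybrid test accuracy = {acc:.3f}")
```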