• Title/Summary/Keyword: Performance Metrics

Hybrid machine learning with moth-flame optimization methods for strength prediction of CFDST columns under compression

  • Quang-Viet Vu;Dai-Nhan Le;Thai-Hoan Pham;Wei Gao;Sawekchai Tangaramvong
    • Steel and Composite Structures
    • /
    • v.51 no.6
    • /
    • pp.679-695
    • /
    • 2024
  • This paper presents a novel technique that combines machine learning (ML) with moth-flame optimization (MFO) methods to predict the axial compressive strength (ACS) of concrete-filled double-skin steel tube (CFDST) columns. The proposed model is trained and tested with a dataset containing 125 tests of CFDST columns subjected to compressive loading. Five ML models, including extreme gradient boosting (XGBoost), gradient tree boosting (GBT), categorical gradient boosting (CAT), support vector machines (SVM), and decision tree (DT) algorithms, are utilized in this work. The MFO algorithm is applied to find optimal hyperparameters of these ML models and to determine the most effective model for predicting the ACS of CFDST columns. Predictive results across several performance metrics reveal that the MFO-CAT model provides superior accuracy compared to the other considered models. The accuracy of the MFO-CAT model is validated by comparing its predictive results with existing design codes and formulae. Moreover, the significance and contribution of each feature in the dataset are examined by employing the SHapley Additive exPlanations (SHAP) method. A comprehensive uncertainty quantification of the probabilistic characteristics of the ACS of CFDST columns is conducted for the first time to examine the models' responses to variations of input variables in stochastic environments. Finally, a web-based application is developed to predict the ACS of CFDST columns, enabling rapid practical use without requiring any programming or machine learning expertise.
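
The abstract does not reproduce the implementation, but the core idea of MFO-driven hyperparameter tuning can be sketched. The following Python sketch tunes a scikit-learn gradient-boosting regressor (a stand-in for the paper's tuned models) on synthetic data, since the CFDST test database is not public; the search bounds, population size, and iteration count are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 125-test CFDST database.
X, y = make_regression(n_samples=125, n_features=6, noise=10.0, random_state=0)

# Search space: (n_estimators, learning_rate, max_depth) -- assumed bounds.
lb = np.array([50.0, 0.01, 2.0])
ub = np.array([500.0, 0.3, 8.0])

def fitness(pos):
    """Negative 5-fold CV R^2 (lower is better) for one hyperparameter vector."""
    model = GradientBoostingRegressor(
        n_estimators=int(pos[0]), learning_rate=float(pos[1]),
        max_depth=int(pos[2]), random_state=0)
    return -cross_val_score(model, X, y, cv=5, scoring="r2").mean()

rng = np.random.default_rng(0)
n_moths, n_iter, dim = 8, 15, 3
moths = rng.uniform(lb, ub, size=(n_moths, dim))

for it in range(n_iter):
    cost = np.array([fitness(m) for m in moths])
    order = np.argsort(cost)
    flames, flame_cost = moths[order], cost[order]
    # The flame count shrinks over iterations so moths converge on the best.
    n_flames = max(1, round(n_moths - it * (n_moths - 1) / n_iter))
    a = -1 - it / n_iter                      # decreases linearly from -1 to -2
    for i in range(n_moths):
        j = min(i, n_flames - 1)              # each moth follows one flame
        d = np.abs(flames[j] - moths[i])
        t = (a - 1) * rng.random(dim) + 1     # t in [a, 1]
        # Logarithmic spiral flight around the assigned flame.
        moths[i] = np.clip(d * np.exp(t) * np.cos(2 * np.pi * t) + flames[j],
                           lb, ub)

print("best hyperparameters found:", flames[0], "CV R^2:", -flame_cost[0])
```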

Autoencoder-Based Automotive Intrusion Detection System Using Gaussian Kernel Density Estimation Function (가우시안 커널 밀도 추정 함수를 이용한 오토인코더 기반 차량용 침입 탐지 시스템)

  • Donghyeon Kim;Hyungchul Im;Seongsoo Lee
    • Journal of IKEEE
    • /
    • v.28 no.1
    • /
    • pp.6-13
    • /
    • 2024
  • This paper proposes an approach to detect abnormal data in the automotive controller area network (CAN) using an unsupervised learning model, i.e., an autoencoder combined with a Gaussian kernel density estimation function. The proposed autoencoder model is trained with only the message IDs of CAN data frames. Afterwards, by employing the Gaussian kernel density estimation function, it effectively detects abnormal data based on the trained model, characterized by an optimally determined number of frames and a loss threshold. It was verified and evaluated using four types of attack data: DoS attacks, gear spoofing attacks, RPM spoofing attacks, and fuzzy attacks. Compared with conventional unsupervised learning-based models, it achieved over 99% detection performance across all evaluation metrics.
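
As a rough illustration of the pipeline, the sketch below trains a small PyTorch autoencoder on windows of (simulated) normal CAN message IDs, fits scipy's Gaussian KDE over the resulting reconstruction losses, and flags windows whose loss falls in a low-density region. The window size, network widths, and density cutoff are assumptions; the paper's values are not given in the abstract.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import gaussian_kde

WIN = 16  # frames per window (assumed)
rng = np.random.default_rng(0)
normal_ids = rng.integers(0, 0x7FF, size=10_000) / 0x7FF   # normalized 11-bit IDs
windows = np.lib.stride_tricks.sliding_window_view(normal_ids, WIN)[::WIN]
x = torch.tensor(windows, dtype=torch.float32)

ae = nn.Sequential(nn.Linear(WIN, 8), nn.ReLU(), nn.Linear(8, 4),
                   nn.ReLU(), nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, WIN))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):                      # train on normal traffic only
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(x), x)
    loss.backward()
    opt.step()

with torch.no_grad():
    errs = ((ae(x) - x) ** 2).mean(dim=1).numpy()
kde = gaussian_kde(errs)                  # density of normal reconstruction losses
threshold = np.quantile(kde(errs), 0.01)  # assumed cutoff: 1st-percentile density

def is_attack(window: np.ndarray) -> bool:
    """Flag a window whose reconstruction loss lies in a low-density region."""
    w = torch.tensor(window, dtype=torch.float32)
    with torch.no_grad():
        e = nn.functional.mse_loss(ae(w), w).item()
    return kde(np.array([e]))[0] < threshold
```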

Hybrid machine learning with HHO method for estimating ultimate shear strength of both rectangular and circular RC columns

  • Quang-Viet Vu;Van-Thanh Pham;Dai-Nhan Le;Zhengyi Kong;George Papazafeiropoulos;Viet-Ngoc Pham
    • Steel and Composite Structures
    • /
    • v.52 no.2
    • /
    • pp.145-163
    • /
    • 2024
  • This paper presents six novel hybrid machine learning (ML) models that combine support vector machines (SVM), decision tree (DT), random forest (RF), gradient boosting (GB), extreme gradient boosting (XGB), and categorical gradient boosting (CGB) with the Harris Hawks Optimization (HHO) algorithm. These models, namely HHO-SVM, HHO-DT, HHO-RF, HHO-GB, HHO-XGB, and HHO-CGB, are designed to predict the ultimate shear strength of both rectangular and circular reinforced concrete (RC) columns. The prediction models are established using a comprehensive database consisting of 325 experimental data points for rectangular columns and 172 for circular columns. The ML model hyperparameters are optimized through a combination of the cross-validation technique and the HHO algorithm. The performance of the hybrid ML models is evaluated and compared using various metrics, ultimately identifying the HHO-CGB model as the top-performing model for predicting the ultimate shear strength of both rectangular and circular RC columns. The mean R-value and mean a20-index are relatively high, reaching 0.991 and 0.959, respectively, while the mean absolute error and root mean square error are low (10.302 kN and 27.954 kN, respectively). Another comparison is conducted with four existing formulas to further validate the efficiency of the proposed HHO-CGB model. The SHapley Additive exPlanations (SHAP) method is applied to analyze the contribution of each variable to the output of the HHO-CGB model, providing insights into the local and global influence of variables. The analysis reveals that the depth of the column, the length of the column, and the axial loading exert the most significant influence on the ultimate shear strength of RC columns. A user-friendly graphical interface tool is then developed based on the HHO-CGB model to facilitate practical and cost-effective usage.
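
Since the paper's database is not reproduced in the abstract, the sketch below shows only the SHAP analysis step, with a generic tree-ensemble regressor standing in for the tuned HHO-CGB model; the feature names and synthetic data are illustrative assumptions.

```python
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

feature_names = ["depth", "length", "axial_load", "fc", "rho_w", "rho_l"]  # assumed
X, y = make_regression(n_samples=325, n_features=6, noise=5.0, random_state=1)

model = GradientBoostingRegressor(random_state=1).fit(X, y)
explainer = shap.TreeExplainer(model)        # exact SHAP values for tree ensembles
shap_values = explainer.shap_values(X)       # one row of contributions per sample

# Global importance: mean absolute SHAP value per feature.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda p: -p[1]):
    print(f"{name:>10s}: {imp:.2f}")
```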

Proposal for the Utilization and Refinement Techniques of LLMs for Automated Research Generation (관련 연구 자동 생성을 위한 LLM의 활용 및 정제 기법 제안)

  • Seung-min Choi;Yu-chul Jung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.4
    • /
    • pp.275-287
    • /
    • 2024
  • Research on the integration of Knowledge Graphs (KGs) and Language Models (LMs) has been consistently explored over the years. However, studies focusing on the automatic generation of text using the structured knowledge from KGs have not been as widely developed. In this study, we propose a methodology for automatically generating domain-specific related-work sections (Related Work) at a level comparable to existing papers. This methodology involves: 1) selecting optimal prompts, 2) extracting triples through a four-step refinement process, 3) constructing a knowledge graph, and 4) automatically generating the related work. The proposed approach utilizes GPT-4, one of the large language models (LLMs), and is designed to automatically generate related work by applying the four-step refinement process. The model demonstrated performance metrics of 17.3, 14.1, and 4.2 in triple extraction for #Supp, #Cont, and Fluency, respectively. According to the GPT-4 automatic evaluation criteria, the model's performance improved from 88.5 points before refinement to 96.5 points after refinement out of 100, indicating a significant capability to automatically generate related work at a level similar to that of existing papers.
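
A minimal sketch of the triple-extraction step, assuming the OpenAI Python client: the prompt wording and the single refinement pass below are illustrative placeholders for the paper's optimal prompts and four-step refinement process, which the abstract does not reproduce.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EXTRACT_PROMPT = (
    "Extract knowledge-graph triples from the abstract below. "
    "Return one (subject | relation | object) triple per line.\n\n{text}"
)
REFINE_PROMPT = (
    "Review these triples for redundancy and vagueness; drop or rewrite "
    "faulty ones and return the cleaned list.\n\n{triples}"
)

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

def extract_triples(abstract: str) -> list[str]:
    raw = ask(EXTRACT_PROMPT.format(text=abstract))
    refined = ask(REFINE_PROMPT.format(triples=raw))  # one refinement pass
    return [line.strip() for line in refined.splitlines() if "|" in line]
```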

Classification of mandibular molar furcation involvement in periapical radiographs by deep learning

  • Katerina Vilkomir;Cody Phen;Fiondra Baldwin;Jared Cole;Nic Herndon;Wenjian Zhang
    • Imaging Science in Dentistry
    • /
    • v.54 no.3
    • /
    • pp.257-263
    • /
    • 2024
  • Purpose: The purpose of this study was to classify mandibular molar furcation involvement (FI) in periapical radiographs using a deep learning algorithm. Materials and Methods: Full mouth series taken at East Carolina University School of Dental Medicine from 2011-2023 were screened. Diagnostic-quality mandibular premolar and molar periapical radiographs with healthy or FI mandibular molars were included. The radiographs were cropped into individual molar images, annotated as "healthy" or "FI," and divided into training, validation, and testing datasets. The images were preprocessed by PyTorch transformations. ResNet-18, a convolutional neural network model, was refined using the PyTorch deep learning framework for this specific image classification task. CrossEntropyLoss and the AdamW optimizer were employed as the loss function and the optimizer, respectively. The images were loaded by the PyTorch DataLoader for efficiency. The performance of the ResNet-18 algorithm was evaluated with multiple metrics, including training and validation losses, confusion matrix, accuracy, sensitivity, specificity, the receiver operating characteristic (ROC) curve, and the area under the ROC curve. Results: After adequate training, ResNet-18 classified healthy vs. FI molars in the testing set with an accuracy of 96.47%, indicating its suitability for image classification. Conclusion: The deep learning algorithm developed in this study was shown to be promising for classifying mandibular molar FI. It could serve as a valuable supplemental tool for detecting and managing periodontal diseases.
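
The components named in the abstract (a torchvision ResNet-18, CrossEntropyLoss, AdamW, and a DataLoader) assemble into a training loop along these lines; the directory layout, image size, and hyperparameters here are illustrative assumptions rather than the study's settings.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # radiographs are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=preprocess)  # hypothetical path
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)     # healthy vs. furcation involvement

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```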

Deep Learning-Based Methods for Inspecting Sand Quality for Ready Mixed Concrete

  • Rong-Lu Hong;Dong-Heon Lee;Sang-Jun Park;Ju-Hyung Kim;Yong-jin Won;Seung-Hyeon Wang
    • International conference on construction engineering and project management
    • /
    • 2024.07a
    • /
    • pp.383-390
    • /
    • 2024
  • Sand is a vital component of concrete mixtures for a variety of structures and is one of the most crucial bulk materials used. Assessing the Fineness Modulus (FM) of sand is an essential part of the concrete production process because FM significantly affects workability, cost-effectiveness, porosity, and concrete strength. Traditional sand quality inspection methods, such as the sieve analysis test, are known to be laborious, time-consuming, and cost-ineffective. Previous studies have mainly focused on measuring the physical characteristics of individual sand particles rather than real-time quality assessment of sand, particularly its FM during concrete production. This study introduces an image-based method for detecting flawed sand through deep learning techniques to evaluate the quality of sand used in concrete. The method involves categorizing sand images into three groups (Unavailable, Stable, Dangerous) and seven types based on FM. To achieve a high level of generalization ability and computational efficiency, various deep learning architectures (VGG16, ResNet-101, and MobileNetV3-Small) were evaluated, with transfer learning included to ensure model accuracy. A dataset of labeled sand images was compiled, and image augmentation techniques were employed to effectively enlarge it. The models were trained using the prepared dataset, categorized into the three discrete groups. A comparative analysis based on classification performance metrics identified the VGG16 model as the most effective, achieving an impressive 99.87% accuracy in identifying flawed sand. This finding underscores the potential of deep learning techniques for assessing sand quality in terms of FM and positions this research as a preliminary investigation into the topic.
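
A hedged sketch of the transfer-learning setup for the three-group classification: a pretrained torchvision VGG16 with its convolutional base frozen, a new three-class head, and on-the-fly augmentation of the kind the paper mentions. The dataset path, augmentations, and hyperparameters are assumptions.

```python
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

augment = transforms.Compose([          # enlarge the sand-image dataset on the fly
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("sand/train", transform=augment)  # hypothetical path
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
for p in model.features.parameters():   # freeze the pretrained convolutional base
    p.requires_grad = False
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 3)  # 3 groups

optimizer = optim.AdamW(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# ...training loop as usual: forward pass, loss, backward, optimizer step.
```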

Enhancement of concrete crack detection using U-Net

  • Molaka Maruthi;Lee, Dong Eun;Kim Bubryur
    • International conference on construction engineering and project management
    • /
    • 2024.07a
    • /
    • pp.152-159
    • /
    • 2024
  • Cracks in structural materials present a critical challenge to infrastructure safety and long-term durability. Timely and precise crack detection is essential for proactive maintenance and the prevention of catastrophic structural failures. This study introduces an innovative approach to tackle this issue using the U-Net deep learning architecture. The primary objective of the research is to explore the potential of U-Net to enhance the precision and efficiency of concrete crack detection under various environmental conditions. The work commences with the assembly of a comprehensive dataset featuring diverse images of concrete cracks, with advanced image processing techniques applied to optimize crack visibility and facilitate feature extraction. The U-Net model, well recognized for its proficiency in image segmentation tasks, is implemented to achieve precise segmentation and localization of concrete cracks. In terms of accuracy, our research attests to a substantial advancement in automated detection, achieving 95% accuracy across all tested concrete materials and surpassing traditional manual inspection methods. This accuracy extends to detecting cracks of varying sizes and orientations under challenging lighting conditions, underlining the system's robustness and reliability. The reliability of the proposed model is measured using performance metrics such as precision (93%), recall (96%), and F1-score (94%). For validation, the model was tested on a separate dataset and confirmed an accuracy of 94%. The results show that the system consistently performs well, even with different concrete types and lighting conditions. With real-time monitoring capabilities, the system ensures the prompt detection of cracks as they emerge, holding significant potential for reducing risks associated with structural damage and achieving substantial cost savings.
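
The abstract does not give the network configuration, so the following is a minimal two-level U-Net sketch in PyTorch for binary crack masks; the depth, channel widths, and input size are illustrative assumptions.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    """Two 3x3 conv + ReLU blocks, the basic U-Net building unit."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class UNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = double_conv(3, 64), double_conv(64, 128)
        self.bottleneck = double_conv(128, 256)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = double_conv(256, 128)      # 128 skip + 128 upsampled
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = double_conv(128, 64)       # 64 skip + 64 upsampled
        self.head = nn.Conv2d(64, 1, 1)        # 1-channel crack-mask logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

mask_logits = UNet()(torch.randn(1, 3, 256, 256))  # -> (1, 1, 256, 256)
```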

A PLS Path Modeling Approach on the Cause-and-Effect Relationships among BSC Critical Success Factors for IT Organizations (PLS 경로모형을 이용한 IT 조직의 BSC 성공요인간의 인과관계 분석)

  • Lee, Jung-Hoon;Shin, Taek-Soo;Lim, Jong-Ho
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.207-228
    • /
    • 2007
  • Measuring Information Technology (IT) organizations' activities has long been limited mainly to financial indicators. However, as Information Systems have taken on increasingly diverse functions, a number of studies have examined new measurement methodologies that complement financial measures. In particular, research on the IT Balanced Scorecard (BSC), a concept that applies the BSC to measuring IT activities, has been conducted in recent years. BSC provides more advantages than merely integrating non-financial measures into a performance measurement system. The core of BSC rests on the cause-and-effect relationships between measures, which allow prediction of value chain performance, communication, and realization of the corporate strategy through incentive-controlled actions. More recently, BSC proponents have focused on the need to tie measures together into a causal chain of performance and to test the validity of these hypothesized effects to guide the development of strategy. Kaplan and Norton [2001] argue that one of the primary benefits of the balanced scorecard is its use in gauging the success of strategy. Norreklit [2000] insists that the cause-and-effect chain is central to the balanced scorecard, and it is equally central to the IT BSC. However, the relationships between information systems and enterprise strategies, as well as the connections among various IT performance measurement indicators, have not been studied extensively. Ittner et al. [2003] report that 77% of all surveyed companies with an implemented BSC place no or only little interest on soundly modeled cause-and-effect relationships, despite the importance of cause-and-effect chains as an integral part of BSC. This shortcoming can be explained with one theoretical and one practical reason [Blumenberg and Hinz, 2006]. From a theoretical point of view, causalities within the BSC method and their application are only vaguely described by Kaplan and Norton. From a practical standpoint, modeling corporate causalities is a complex task due to tedious data acquisition and subsequent reliability maintenance. Nevertheless, cause-and-effect relationships are an essential part of BSCs because they differentiate performance measurement systems like BSCs from simple key performance indicator (KPI) lists. KPI lists present an ad-hoc collection of measures to managers but do not allow for a comprehensive view of corporate performance. Performance measurement systems like the BSC, in contrast, try to model the relationships of the underlying value chain as cause-and-effect relationships. Therefore, to overcome the deficiencies of causal modeling in the IT BSC, sound and robust causal modeling approaches are required in both theory and practice. The purpose of this study is to suggest critical success factors (CSFs) and KPIs for measuring the performance of IT organizations and to empirically validate the causal relationships between those CSFs. For this purpose, we define four BSC perspectives for IT organizations following Van Grembergen's study [2000]: the Future Orientation perspective represents the human and technology resources needed by IT to deliver its services; the Operational Excellence perspective represents the IT processes employed to develop and deliver the applications; the User Orientation perspective represents the user evaluation of IT; and the Business Contribution perspective captures the business value of the IT investments. Each of these perspectives has to be translated into corresponding metrics and measures that assess the current situation. This study suggests 12 CSFs for the IT BSC based on previous IT BSC studies and COBIT 4.1; these CSFs consist of 51 KPIs. We define the cause-and-effect relationships among the BSC CSFs for IT organizations as follows: the Future Orientation perspective will have positive effects on the Operational Excellence perspective; the Operational Excellence perspective will have positive effects on the User Orientation perspective; and the User Orientation perspective will have positive effects on the Business Contribution perspective. This research tests the validity of these hypothesized causal effects and the sub-hypothesized causal relationships. For this purpose, we used the Partial Least Squares approach to Structural Equation Modeling (PLS path modeling) to analyze multiple IT BSC CSFs. PLS path modeling has special abilities that make it more appropriate than other techniques, such as multiple regression and LISREL, when analyzing small sample sizes, and its use has been gaining interest among IS researchers because of its ability to model latent constructs under conditions of non-normality and with small to medium sample sizes (Chin et al., 2003). The empirical results of our study using PLS path modeling show that the hypothesized causal effects in the IT BSC are partially significant.
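
As a simplified illustration of the hypothesized chain (Future Orientation → Operational Excellence → User Orientation → Business Contribution), the sketch below approximates each construct with an equally weighted composite of standardized indicators and each path with a standardized OLS slope on simulated data; full PLS path modeling iterates outer weights and is not re-implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150  # hypothetical sample size

def composite(indicators: np.ndarray) -> np.ndarray:
    """Equally weighted composite of standardized indicator columns."""
    z = (indicators - indicators.mean(0)) / indicators.std(0)
    return z.mean(axis=1)

# Simulated indicator blocks (3 KPIs per construct; the paper uses 51 KPIs).
fo = composite(rng.normal(size=(n, 3)))
oe = composite(0.7 * fo[:, None] + rng.normal(size=(n, 3)))
uo = composite(0.6 * oe[:, None] + rng.normal(size=(n, 3)))
bc = composite(0.5 * uo[:, None] + rng.normal(size=(n, 3)))

def path_coef(x: np.ndarray, y: np.ndarray) -> float:
    """Standardized OLS slope, i.e. the correlation for a single predictor."""
    return float(np.corrcoef(x, y)[0, 1])

for name, (x, y) in {"FO -> OE": (fo, oe),
                     "OE -> UO": (oe, uo),
                     "UO -> BC": (uo, bc)}.items():
    print(f"{name}: path coefficient = {path_coef(x, y):.3f}")
```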

Comparison of Compton Image Reconstruction Algorithms for Estimation of Internal Radioactivity Distribution in Concrete Waste During Decommissioning of Nuclear Power Plant (원전 해체 시 방사성 콘크리트 폐기물 내부 방사능 분포 예측을 위한 컴프턴 영상 재구성 방법의 비교)

  • Lee, Tae-Woong;Jo, Seong-Min;Yoon, Chang-Yeon;Kim, Nak-Jeom
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.18 no.2
    • /
    • pp.217-225
    • /
    • 2020
  • Concrete waste accounts for approximately 70~80% of the total waste generated during the decommissioning of nuclear power plants (NPPs). Based upon the concentration of each radionuclide, concrete waste from decommissioning can be classified against the clearance threshold that determines whether the waste is radioactive. To reduce the cost of radioactive concrete waste disposal, it is important to perform decontamination before self-disposal or limited recycling. Therefore, it is necessary to estimate the internal radioactivity distribution of radioactive concrete waste to ensure effective decontamination. In this study, the performance metrics of various Compton reconstruction algorithms were compared in order to identify the best strategy for estimating the internal radioactivity distribution in concrete waste during the decommissioning of NPPs. Four reconstruction algorithms, namely simple back-projection, filtered back-projection, maximum likelihood expectation maximization (MLEM), and energy-deconvolution MLEM (E-MLEM), were used as Compton reconstruction algorithms. The results obtained using these reconstruction algorithms were then compared with one another and evaluated using quantitative evaluation methods. The MLEM and E-MLEM reconstruction algorithms exhibited the best performance in maintaining a high image resolution and signal-to-noise ratio (SNR), respectively. The results of this study demonstrate the feasibility of using Compton images to estimate the internal radioactivity distribution of concrete during the decommissioning of NPPs.
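
Of the four compared algorithms, MLEM has the most compact formulation; the numpy sketch below applies the standard multiplicative MLEM update, with a random nonnegative matrix standing in for the Compton camera's system matrix, which in practice encodes the cone-surface geometry of the measured events.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_meas = 64, 256
A = rng.random((n_meas, n_pix))          # stand-in system matrix (nonnegative)
true_img = np.zeros(n_pix)
true_img[20:24] = 1.0                    # a small "hot spot" of activity
y = rng.poisson(A @ true_img * 50)       # Poisson-distributed measurements

lam = np.ones(n_pix)                     # uniform initial estimate
sensitivity = A.sum(axis=0)              # A^T 1, the normalization term
for _ in range(100):
    expected = A @ lam                   # forward projection of the estimate
    ratio = y / np.maximum(expected, 1e-12)
    lam *= (A.T @ ratio) / sensitivity   # multiplicative MLEM update

print("peak activity recovered at pixels:", np.sort(np.argsort(lam)[-4:]))
```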

Prefetching Mechanism using the User's File Access Pattern Profile in Mobile Computing Environment (이동 컴퓨팅 환경에서 사용자의 FAP 프로파일을 이용한 선인출 메커니즘)

  • Choi, Chang-Ho;Kim, Myung-Il;Kim, Sung-Jo
    • Journal of KIISE:Information Networking
    • /
    • v.27 no.2
    • /
    • pp.138-148
    • /
    • 2000
  • In the mobile computing environment, in order to make copies of important files available during disconnection, the mobile host (client) must store them in its local cache while the connection is maintained. In this paper, we propose a prefetching mechanism for the client to save files which may be accessed in the near future. Our mechanism utilizes an analyzer, a prefetch-list producer, and a prefetch manager. The analyzer records the file access patterns of the user in a FAP (File Access Patterns) profile. Using the profile, the prefetch-list producer creates the prefetch-list, and the prefetch manager requests a file server to return the files on this list. We set the parameter TRP (Threshold of Reference Probability) to ensure that only reasonably related files are prefetched: the prefetch-list producer adds files to the prefetch-list only if their reference probability is greater than the TRP. We also use the parameter TACP (Threshold of Access Counter Probability) to reduce the hoarding size required to store a prefetch-list. Finally, we measure metrics such as the cache hit ratio, the number of files referenced by the client after disconnection, and the hoarding size. The simulation results show that the performance of our mechanism is superior to that of the LRU caching mechanism. Our results also show that prefetching with the TACP can reduce the hoarding size while maintaining performance similar to prefetching without the TACP.
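
A hedged sketch of how the prefetch-list producer might apply the two thresholds: reference probabilities are estimated from access counts in a hypothetical FAP profile, the TRP keeps only strongly related files, and the TACP further trims the list to bound the hoarding size. The profile format, probability definitions, and threshold values are illustrative assumptions.

```python
from collections import Counter

TRP = 0.30   # threshold of reference probability (assumed value)
TACP = 0.05  # threshold of access-counter probability (assumed value)

# Hypothetical FAP profile: per-file access counts observed while connected.
access_counts = Counter({
    "/home/u/report.doc": 42,
    "/home/u/todo.txt": 35,
    "/home/u/old/log.txt": 3,
    "/home/u/tmp/cache.bin": 1,
})

def build_prefetch_list(counts: Counter) -> list[str]:
    """Files whose estimated probabilities clear both thresholds."""
    total = sum(counts.values())
    max_n = max(counts.values())
    prefetch = []
    for path, n in counts.items():
        ref_prob = n / total     # estimated reference probability
        acp = n / max_n          # access-counter probability (assumed form)
        # TRP keeps only reasonably related files; TACP further trims the
        # list so the hoarding size stays bounded.
        if ref_prob > TRP and acp > TACP:
            prefetch.append(path)
    return prefetch

print(build_prefetch_list(access_counts))  # files to hoard before disconnection
```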
