• Title/Summary/Keyword: Cost Prediction


Assessment of cold-formed steel screwed beam-column connections: Experimental tests and numerical simulations

  • Merve Sagiroglu Maali;Mahyar Maali;Zhiyuan Fang;Krishanu Roy
    • Steel and Composite Structures
    • /
    • v.50 no.5
    • /
    • pp.515-529
    • /
    • 2024
  • Cold-formed steel (CFS) is a popular choice for construction due to its low cost, durability, sustainability, resistance to high environmental and seismic loads, and ease of installation. The beam-column connections in residential and medium-rise structures are formed using self-drilling screws that connect two CFS channel sections and a gusset plate. To increase the moment capacity of these CFS screwed beam-column connections, stiffeners are often placed on the web area of each single channel. However, there is limited literature on the effects of stiffeners on the moment capacity of CFS screwed beam-column connections. Hence, this paper proposes a new test approach for determining the moment capacity of CFS screwed beam-column connections. This study describes an experimental programme consisting of eight novel tests, investigating the effect of stiffeners, beam thickness, and gusset plate thickness on the structural behaviour of CFS screwed beam-column connections. In addition, nonlinear elasto-plastic finite element (FE) models were developed and validated against the experimental test data, showing reasonable agreement in terms of moment capacity and failure mode prediction. From the experimental and numerical investigation, it was found that increasing the gusset plate or beam thickness and using stiffeners have no significant effect on the structural behaviour, moment capacity, or rotational capacity of joints exhibiting the same collapse behaviour; however, the moment capacity and energy absorption capacity increased in joints whose failure mode changed with increasing thickness or the use of stiffeners. The thickness change also has little impact on the initial stiffness.

Hybrid machine learning with HHO method for estimating ultimate shear strength of both rectangular and circular RC columns

  • Quang-Viet Vu;Van-Thanh Pham;Dai-Nhan Le;Zhengyi Kong;George Papazafeiropoulos;Viet-Ngoc Pham
    • Steel and Composite Structures
    • /
    • v.52 no.2
    • /
    • pp.145-163
    • /
    • 2024
  • This paper presents six novel hybrid machine learning (ML) models that combine Support Vector Machines (SVM), Decision Tree (DT), Random Forest (RF), Gradient Boosting (GB), eXtreme Gradient Boosting (XGB), and Categorical Gradient Boosting (CGB) with the Harris Hawks Optimization (HHO) algorithm. These models, namely HHO-SVM, HHO-DT, HHO-RF, HHO-GB, HHO-XGB, and HHO-CGB, are designed to predict the ultimate shear strength of both rectangular and circular reinforced concrete (RC) columns. The prediction models are established using a comprehensive database consisting of 325 experimental data points for rectangular columns and 172 for circular columns. The ML model hyperparameters are optimized through a combination of the cross-validation technique and the HHO algorithm. The performance of the hybrid ML models is evaluated and compared using various metrics, ultimately identifying the HHO-CGB model as the top performer for predicting the ultimate shear strength of both rectangular and circular RC columns. The mean R-value and mean a20-index are relatively high, reaching 0.991 and 0.959, respectively, while the mean absolute error and root mean square error are low (10.302 kN and 27.954 kN, respectively). Another comparison is conducted with four existing formulas to further validate the efficiency of the proposed HHO-CGB model. The Shapley Additive Explanations (SHAP) method is applied to analyze the contribution of each variable to the output within the HHO-CGB model, providing insights into the local and global influence of variables. The analysis reveals that the depth of the column, the length of the column, and the axial loading exert the most significant influence on the ultimate shear strength of RC columns. A user-friendly graphical interface tool is then developed based on the HHO-CGB model to facilitate practical and cost-effective usage.
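The tuning loop described above, in which a metaheuristic repeatedly evaluates candidate hyperparameter settings against a cross-validation score, can be sketched roughly as follows. This is not the paper's HHO-CGB implementation: the population update below is a heavily simplified stand-in for the Harris Hawks moves, and `cv_error` is a synthetic error surface in place of the real cross-validated loss of a boosting model.

```python
import random

def cv_error(params):
    """Stand-in for the k-fold cross-validation error of a model with
    hyperparameters (tree depth, learning rate); a real setup would
    train and score the boosting model on each fold instead."""
    depth, lr = params
    # synthetic convex error surface with its optimum at depth=6, lr=0.1
    return (depth - 6.0) ** 2 + 100.0 * (lr - 0.1) ** 2

def tune(n_hawks=10, n_iter=50, seed=0):
    """Greatly simplified population search standing in for HHO: each
    'hawk' is a candidate hyperparameter pair, and the population
    contracts toward the current best solution (the 'rabbit')."""
    rng = random.Random(seed)
    hawks = [(rng.uniform(1, 12), rng.uniform(0.01, 0.5)) for _ in range(n_hawks)]
    best = min(hawks, key=cv_error)
    for _ in range(n_iter):
        moved = []
        for depth, lr in hawks:
            # move toward the best solution with random perturbations
            new_depth = depth + rng.random() * (best[0] - depth) + rng.gauss(0, 0.2)
            new_lr = lr + rng.random() * (best[1] - lr) + rng.gauss(0, 0.005)
            moved.append((new_depth, new_lr))
        hawks = moved
        candidate = min(hawks, key=cv_error)
        if cv_error(candidate) < cv_error(best):
            best = candidate
    return best

best_depth, best_lr = tune()
```

In the paper the inner score is the cross-validated prediction error of the CGB model on the 497-column database; here the search mechanics are the only point being illustrated.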

A comparison study between the realistic random modeling and simplified porous medium for gamma-gamma well-logging

  • Fatemeh S. Rasouli
    • Nuclear Engineering and Technology
    • /
    • v.56 no.5
    • /
    • pp.1747-1753
    • /
    • 2024
  • The accurate determination of formation density and the physical properties of rocks is among the most critical logging tasks, and it can be accomplished using gamma-ray transport and detection tools. Though the simulation works published so far have considerably improved the knowledge of the parameters that govern the responses of the detectors in these tools, recent studies have found considerable differences between the results of a conventional model, a homogeneous mixture of formation and fluid, and those of an inhomogeneous fractured medium. This has increased concerns about the importance of the complexity of the medium model used in simulation works. In the present study, we suggest two different models of fluid flow in porous media and fractured rock for logging purposes. For a typical gamma-gamma logging tool containing a 137Cs source and two NaI detectors, simulated using the MCNPX code, a simplified porous (SP) model was investigated in which the formation is filled with elongated rectangular cubes loaded with either mineral material or oil. In this model, the oil directly reaches the top of the medium and the connection between the pores is not guaranteed. In the other model, the medium is a large 3-D matrix of randomly filled 1 cm³ cubes. The algorithm designed to fill the matrix sites ensures that this realistic random (RR) model provides continuous growth of the oil flow in various disordered directions and therefore addresses the concerns about modeling rock textures consisting of extremely complex pore structures. For an arbitrary set of oil concentrations and various formation materials, the response of the detectors in the logging tool was taken as the criterion for assessing how the model of pore distribution in the formation affects simulation studies. The results show that defining an RR model to describe the heterogeneities of a porous medium does not effectively improve the prediction of logging tool responses. Taking into account the computational cost of particle transport through complex geometries in the Monte Carlo method, the SP model can be satisfactory for gamma-gamma logging purposes.
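The "continuum growth in disordered directions" that distinguishes the RR model from the SP model can be sketched as a connected-cluster fill of a cubic grid. The grid size, growth rule, and seeding below are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def realistic_random_fill(n=10, oil_fraction=0.2, seed=1):
    """Grow a connected oil cluster cell by cell from a random seed,
    expanding in randomly chosen directions, so that the oil phase
    forms one contiguous, disordered region rather than the isolated,
    top-connected slots of the simplified porous (SP) model."""
    rng = random.Random(seed)
    target = int(n ** 3 * oil_fraction)  # number of 1 cm^3 oil cells
    oil = set()
    frontier = [(rng.randrange(n), rng.randrange(n), rng.randrange(n))]
    while len(oil) < target and frontier:
        # picking a random frontier cell gives disordered growth directions
        cell = frontier.pop(rng.randrange(len(frontier)))
        if cell in oil:
            continue
        oil.add(cell)
        x, y, z = cell
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            neighbour = (x + dx, y + dy, z + dz)
            if all(0 <= c < n for c in neighbour) and neighbour not in oil:
                frontier.append(neighbour)
    return oil

oil_cells = realistic_random_fill()
```

Each resulting cell set would then be translated into material definitions for the Monte Carlo transport run; the paper's finding is that this extra geometric realism does not materially change the detector responses.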

Development of System for Enhancing the Quality of Power Generation Facilities Failure History Data Based on Explainable AI (XAI) (XAI 기반 발전설비 고장 기록 데이터 품질 향상 시스템 개발)

  • Kim Yu Rim;Park Jeong In;Park Dong Hyun;Kang Sung Woo
    • Journal of Korean Society for Quality Management
    • /
    • v.52 no.3
    • /
    • pp.479-493
    • /
    • 2024
  • Purpose: The deterioration in the quality of failure history data, caused by differences in how workers at power plants interpret failures and by inconsistency in how failures are recorded, negatively impacts the efficient operation of power plants. The purpose of this study is to propose a system that consistently classifies power generation facility failures based on the failure history text data created by the workers. Methods: This study utilizes data collected from three coal unloaders operated by Korea Midland Power Co., LTD from 2012 to 2023. It classifies failures based on the results of soft voting, which combines the prediction probabilities obtained by applying the predict_proba technique to four machine learning models (Random Forest, Logistic Regression, XGBoost, and SVM) with scores obtained by constructing word dictionaries for each failure type using LIME, one of the XAI (Explainable Artificial Intelligence) methods. Through this, a failure classification system is proposed to improve the quality of power generation facility failure history data. Results: The results of this study are as follows. When the failure classification system was applied to the failure history data of the Continuous Ship Unloader, XGBoost showed the best performance with a Macro_F1 Score of 93%. With the proposed system applied, the Macro_F1 Score of Logistic Regression increased by up to 0.17 compared to applying the model alone, and all four models showed equal or higher Accuracy and Macro_F1 Scores than the single models alone. Conclusion: This study proposes a failure classification system for power generation facilities to improve the quality of failure history data. This will contribute to cost reduction and stability of power generation facilities, as well as further improvement of power plant operating efficiency.
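The combination step described above, averaging the four models' predict_proba outputs (soft voting) and adding a score from the LIME-derived word dictionaries, can be sketched as follows. The weighting rule and the `alpha` blend are illustrative assumptions; the paper does not specify how the two scores are combined numerically.

```python
def classify_failure(text, proba_by_model, word_dicts, alpha=0.7):
    """Combine soft-voting probabilities with word-dictionary scores.
    proba_by_model: list of {failure_type: probability} dicts, one per
    model (e.g. RF, Logistic Regression, XGBoost, SVM).
    word_dicts: {failure_type: set of LIME-derived keywords}."""
    labels = list(word_dicts)
    # soft voting: mean predicted probability per failure class
    vote = {lab: sum(p[lab] for p in proba_by_model) / len(proba_by_model)
            for lab in labels}
    # dictionary score: fraction of a class's keywords present in the text
    tokens = set(text.lower().split())
    dict_score = {lab: len(tokens & word_dicts[lab]) / max(len(word_dicts[lab]), 1)
                  for lab in labels}
    combined = {lab: alpha * vote[lab] + (1 - alpha) * dict_score[lab]
                for lab in labels}
    return max(combined, key=combined.get)
```

For example, a record reading "belt tear found" would be pushed toward the "belt" class even when the probability vote is close, because its tokens match that class's keyword dictionary.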

Pressure Drop Predictions Using Multiple Regression Model in Pulse Jet Type Bag Filter Without Venturi (다중회귀모형을 이용한 벤츄리가 없는 충격기류식 여과집진장치 압력손실 예측)

  • Suh, Jeong-Min;Park, Jeong-Ho;Cho, Jae-Hwan;Jin, Kyung-Ho;Jung, Moon-Sub;Yi, Pyong-In;Hong, Sung-Chul;Sivakumar, S.;Choi, Kum-Chan
    • Journal of Environmental Science International
    • /
    • v.23 no.12
    • /
    • pp.2045-2056
    • /
    • 2014
  • In this study, pressure drop was measured in a pulse jet bag filter without a venturi, on which 16 filter bags (Ø140 × 850 ℓ) were installed, under varying operating conditions (filtration velocity, inlet dust concentration, pulse pressure, and pulse interval) using coke dust from a steel mill. The 180 pressure drop measurements obtained were used to predict pressure drop with a multiple regression model, so that the data can serve to select effective operating conditions and as basic data for economical design. The prediction results showed that when filtration velocity was increased by 1%, pressure drop increased by 2.2%, indicating that filtration velocity contributed the most to pressure drop among the operating conditions. Pressure drop decreased by 1.53% when pulse pressure was increased by 1%, confirming that pulse pressure was the second most influential factor after filtration velocity. Meanwhile, pressure drop increased by 0.3% and 0.37%, respectively, when inlet dust concentration and pulse interval were increased by 1%, implying that their effects were smaller than those of filtration velocity and pulse pressure. Therefore, the operating variables affect the pressure drop of the pulse jet bag filter in the descending order of filtration velocity (V_f), pulse pressure (P_p), inlet dust concentration (C_i), and pulse interval (P_i). The predictions also indicated that stable operation can be achieved with a filtration velocity below 1.5 m/min and an inlet dust concentration below 4 g/m³, whereas pulse pressure and pulse interval need to be adjusted when the inlet dust concentration is higher than 4 g/m³. When filtration velocity and pulse pressure were examined, operation was possible regardless of changes in pulse pressure if the filtration velocity was 1.5 m/min; if the filtration velocity was increased to 2 m/min, operation would be possible only when the pulse pressure was set higher than 5.8 kgf/cm². The prediction of pressure drop with filtration velocity and pulse interval showed that operation with a pulse interval of less than 50 s should be carried out at a filtration velocity of 1.5 m/min, while the pulse interval should be set below 11 s if the filtration velocity is 2 m/min. Under a filtration velocity lower than 1 m/min and a pulse pressure higher than 7 kgf/cm², the pressure drop would be lower, but economic feasibility would also be low: installation and operating costs increase because the dust collection equipment becomes larger, and the high pulse pressure shortens the life of the filter bags.
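The percentage sensitivities reported above read naturally as elasticities from a log-log regression. Combining the four into a single multiplicative power law, as below, is an assumption for illustration; the paper reports the sensitivities individually.

```python
def pressure_drop_ratio(vf_ratio, pp_ratio, ci_ratio, pi_ratio):
    """Power-law reading of the reported sensitivities: pressure drop
    rises ~2.2% per 1% increase in filtration velocity (V_f), falls
    ~1.53% per 1% increase in pulse pressure (P_p), and rises ~0.3%
    and ~0.37% per 1% increase in inlet dust concentration (C_i) and
    pulse interval (P_i). Each argument is the ratio of the new value
    of that variable to its baseline value; the return value is the
    corresponding ratio of pressure drops."""
    return (vf_ratio ** 2.2) * (pp_ratio ** -1.53) \
         * (ci_ratio ** 0.3) * (pi_ratio ** 0.37)
```

For instance, `pressure_drop_ratio(1.01, 1, 1, 1)` reproduces the ~2.2% rise per 1% increase in filtration velocity, while raising only the pulse pressure gives a ratio below 1.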

A Study on Enhancing Personalization Recommendation Service Performance with CNN-based Review Helpfulness Score Prediction (CNN 기반 리뷰 유용성 점수 예측을 통한 개인화 추천 서비스 성능 향상에 관한 연구)

  • Li, Qinglong;Lee, Byunghyun;Li, Xinzhe;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.29-56
    • /
    • 2021
  • Recently, various types of products have been launched with the rapid growth of the e-commerce market. As a result, many users face information overload, which makes the purchasing decision-making process time-consuming. Therefore, the importance of personalized recommendation services that can provide customized products and services to users is growing. For example, global companies such as Netflix, Amazon, and Google have introduced personalized recommendation services to support users' purchasing decisions. Such services can reduce users' information search costs, which can positively affect a company's sales. Most existing research on personalized recommendation services applies the Collaborative Filtering (CF) technique, which predicts user preferences mainly from quantified information. However, recommendation performance may decrease when only quantitative information is used. To address this problem, many studies have used reviews to enhance recommendation performance. However, reviews contain factors that hinder purchasing decisions, such as advertising content, false comments, and meaningless or irrelevant content. Providing a recommendation service using reviews that include these factors can decrease recommendation performance. Therefore, we propose a novel recommendation methodology based on CNN-based review usefulness score prediction to address these problems. The results show that the proposed methodology achieves better prediction performance than the existing recommendation method that considers all preference ratings. In addition, the results suggest that the performance of traditional CF can be enhanced when information on review usefulness is reflected in the personalized recommendation service.
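One simple way to reflect review usefulness in a CF pipeline, in the spirit of the abstract above, is to weight each rating by the predicted helpfulness of its review when aggregating preferences. This is a minimal sketch of that idea: the helpfulness scores are assumed inputs, and the CNN that would produce them is omitted entirely.

```python
def weighted_rating(ratings, helpfulness):
    """Aggregate ratings weighted by the helpfulness score of each
    review, so that unhelpful reviews (ads, false or irrelevant
    comments) count less toward the preference value fed into CF.
    `helpfulness` values are assumed to come from a separately
    trained review-usefulness predictor."""
    total = sum(helpfulness)
    if total == 0:
        # no helpful reviews at all: fall back to the plain mean
        return sum(ratings) / len(ratings)
    return sum(r * h for r, h in zip(ratings, helpfulness)) / total
```

With this weighting, a 5-star rating backed by a highly helpful review dominates a 1-star rating from a near-useless one, whereas a plain mean would treat them equally.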

The Adaptive Personalization Method According to Users Purchasing Index : Application to Beverage Purchasing Predictions (고객별 구매빈도에 동적으로 적응하는 개인화 시스템 : 음료수 구매 예측에의 적용)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.95-108
    • /
    • 2011
  • This is a study of a personalization method that intelligently adapts the level of clustering to a customer's purchasing index. In the e-business era, many companies gather customers' demographic and transactional information such as age, gender, purchasing date, and product category. They use this information to predict customers' preferences or purchasing patterns so that they can provide more customized services. The conventional Customer-Segmentation method provides customized services for each customer group: it clusters the whole customer set into groups based on similarity and builds a predictive model for each resulting group. Thus, it keeps the number of predictive models manageable and provides more data for customers who do not have enough data to build a good predictive model, by borrowing the data of similar customers. However, this method often fails to provide highly personalized services to each customer, which is especially important for VIP customers. Furthermore, it clusters customers who already have a considerable amount of data together with customers who have only a small amount of data, which unnecessarily increases computational cost without significant performance improvement. The other conventional method, the 1-to-1 method, provides more customized services than the Customer-Segmentation method because the predictive model for each customer is built using only that customer's data. This method not only provides highly personalized services but also builds a relatively simple and inexpensive model for each customer. However, the 1-to-1 method does not produce a good predictive model when a customer has only a small amount of data; in other words, if a customer has an insufficient number of transactions, the performance of this method deteriorates. To overcome the limitations of these two conventional methods, we suggest a new method, called the Intelligent Customer Segmentation method, that provides adaptively personalized services according to the customer's purchasing index. The suggested method clusters customers according to their purchasing index, so that predictions for customers who purchase less are based on the data of more intensively clustered groups, while VIP customers, who already have a considerable amount of data, are clustered to a much lesser extent or not at all. The main idea is to apply the clustering technique when the number of transactions of the target customer is less than a predefined criterion data size. To find this criterion, we suggest an algorithm called sliding window correlation analysis, which aims to find the transaction data size below which the performance of the 1-to-1 method decreases radically due to data sparsity. After finding this criterion data size, we apply the conventional 1-to-1 method to customers who have more data than the criterion and apply the clustering technique to those who have less, until each can use at least the criterion amount of data for model building. We apply the two conventional methods and the newly suggested method to Nielsen's beverage purchasing data to predict customers' purchasing amounts and purchasing categories, using two data mining techniques (Support Vector Machine and Linear Regression) and two performance measures (MAE and RMSE). The results show that the suggested Intelligent Customer Segmentation method outperforms the conventional 1-to-1 method in many cases and achieves the same level of performance as the Customer-Segmentation method at much lower computational cost.
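The adaptive rule above, 1-to-1 models for data-rich customers, pooled models for sparse ones, can be sketched as follows. The criterion value of 30 and the greedy pooling order are illustrative assumptions; the paper derives the criterion via sliding window correlation analysis and clusters customers by similarity rather than arrival order.

```python
def assign_models(customer_counts, criterion=30):
    """Assign each customer either an individual ('1-to-1') model or a
    pooled cluster model. Customers whose transaction count reaches
    `criterion` get their own model; sparser customers are pooled
    (here greedily, in insertion order) until each group holds at
    least `criterion` transactions for model building."""
    plan = {}
    sparse = []
    for customer, n in customer_counts.items():
        if n >= criterion:
            plan[customer] = "1-to-1"
        else:
            sparse.append((customer, n))
    group, size, group_id = [], 0, 0
    for customer, n in sparse:
        group.append(customer)
        size += n
        if size >= criterion:
            for c in group:
                plan[c] = f"cluster-{group_id}"
            group, size, group_id = [], 0, group_id + 1
    for c in group:  # leftover sparse customers join the most recent group
        plan[c] = f"cluster-{max(group_id - 1, 0)}"
    return plan
```

A VIP customer with 100 transactions keeps an individual model, while three customers with 10, 15, and 8 transactions are pooled into one group whose combined 33 transactions clear the criterion.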

Level of Service of Signalized Intersections Considering both Delay and Accidents (지체와 사고를 고려한 신호교차로 서비스수준 산정에 관한 연구)

  • Park, Je-Jin;Park, Seong-Yong;Ha, Tae-Jun
    • Journal of Korean Society of Transportation
    • /
    • v.26 no.3
    • /
    • pp.169-178
    • /
    • 2008
  • Level of Service (LOS) is one way to evaluate operational conditions and is a very important factor in evaluating highway facilities in particular. However, some studies have shown that the relationship between the v/c ratio and the accident rate follows a U-shaped quadratic curve, which means there is a gap between LOS and the safety of highway facilities. Therefore, this study presents a method for evaluating a signalized intersection that considers both smooth traffic operation (delay) and traffic safety (accidents). First, our research found that accident rates and EPDO decrease when delay is large; for that reason, it is necessary to define a new Level of Service that includes traffic safety. Second, this study developed a negative binomial regression model based on the relationship between accident patterns and traffic streams. Third, LOS standards are presented, derived from the annual delay costs and annual accident costs calculated at each intersection. Lastly, a worksheet form is presented to express the evaluation steps for a signalized intersection using the traffic accident prediction model and the new LOS.
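The core idea above, grading an intersection by its combined annual delay and accident cost rather than by delay alone, can be sketched as follows. The cost thresholds per grade are illustrative assumptions, not the paper's calibrated values.

```python
def combined_los(annual_delay_cost, annual_accident_cost,
                 thresholds=(1e5, 2e5, 4e5, 8e5, 1.6e6)):
    """Grade a signalized intersection A-F by its total annual cost
    (delay cost + accident cost). `thresholds` gives the upper cost
    limit for grades A through E; anything beyond the last limit is F.
    The threshold values here are placeholders for illustration."""
    total = annual_delay_cost + annual_accident_cost
    for grade, limit in zip("ABCDE", thresholds):
        if total <= limit:
            return grade
    return "F"
```

Under this scheme, an intersection with moderate delay but a high accident cost can fall to a worse grade than a delay-only LOS would assign it, which is exactly the gap the paper aims to close.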

Mobility Support Scheme Based on Machine Learning in Industrial Wireless Sensor Network (산업용 무선 센서 네트워크에서의 기계학습 기반 이동성 지원 방안)

  • Kim, Sangdae;Kim, Cheonyong;Cho, Hyunchong;Jung, Kwansoo;Oh, Seungmin
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.11
    • /
    • pp.256-264
    • /
    • 2020
  • Industrial Wireless Sensor Networks (IWSNs) are exploited to achieve various objectives, such as improving productivity and reducing cost, in a diversity of industrial applications, and they have requirements such as low-delay and high-reliability packet transmission. To meet these requirements, the network manager performs graph construction and resource allocation for the network topology and determines the transmission cycle and path of each node in advance. However, this network management scheme cannot handle mobile devices that cause continuous topology changes, because graph reconstruction and resource reallocation must be performed whenever the network topology changes. That is, despite the growing need for mobile devices in many industries, existing schemes cannot adequately respond to path failures caused by the movement of mobile devices and to packet loss during path recovery. To solve this problem, a network management scheme that prevents packet loss caused by mobile devices is required. Thus, we analyse the location and movement cycle of mobile devices over time using machine learning to predict their mobility patterns. In the proposed scheme, the network manager can prevent the problems caused by mobile devices by performing graph construction and resource allocation for the network topology predicted from the movement patterns. Performance evaluation results show a prediction rate of about 86% compared with the actual movement patterns, as well as a higher packet delivery ratio and a lower resource share than the existing scheme.
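The prediction step above, learning where a mobile device will be next from its past trace so that graphs and resources can be allocated ahead of time, can be sketched with a first-order Markov (frequency) predictor. This is a simple stand-in for the paper's unspecified machine-learning method.

```python
from collections import Counter, defaultdict

def build_predictor(trace):
    """Learn, from a device's past location trace, which location it
    most often visits next, and return a predictor function. The
    network manager could then pre-build the graph and allocate
    resources for the predicted next topology."""
    transitions = defaultdict(Counter)
    for cur, nxt in zip(trace, trace[1:]):
        transitions[cur][nxt] += 1

    def predict(cur):
        # most frequent observed successor of `cur`, or None if unseen
        return transitions[cur].most_common(1)[0][0] if transitions[cur] else None

    return predict

# a device cycling through anchor regions A -> B -> C
predict = build_predictor(["A", "B", "C", "A", "B", "C", "A", "B"])
```

Given a cyclic trace like the one above, the predictor recovers the movement cycle; a real deployment would also use timing information to schedule the reallocation before the device moves.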

Large eddy simulation on the turbulent mixing phenomena in 3×3 bare tight lattice rod bundle using spectral element method

  • Ju, Haoran;Wang, Mingjun;Wang, Yingjie;Zhao, Minfu;Tian, Wenxi;Liu, Tiancai;Su, G.H.;Qiu, Suizheng
    • Nuclear Engineering and Technology
    • /
    • v.52 no.9
    • /
    • pp.1945-1954
    • /
    • 2020
  • The subchannel code is one of the effective simulation tools for thermal-hydraulic analysis of a nuclear reactor core. To reduce the computational cost and improve calculation efficiency, empirical correlations for the turbulent mixing coefficient are employed to calculate the lateral mixing velocity between adjacent subchannels. However, the correlations currently in use are often fitted to data obtained in the central channel of a fuel assembly, which simply neglects wall effects. In this paper, a CFD approach based on the spectral element method is employed to predict the turbulent mixing phenomena through gaps in a 3 × 3 bare tight lattice rod bundle and to investigate the flow pulsation through gaps at different positions. Re = 5000, 10000, and 20500 and P/D = 1.03 and 1.06 are covered in the simulation cases. With a well-verified mesh, lateral velocities at the gap centers between the corner channel and wall channel (W-Co), wall channel and wall channel (W-W), wall channel and center channel (W-C), and center channel and center channel (C-C) are collected and compared with each other. Clearly different turbulent mixing distributions are presented in the different channels of the rod bundle. The peak frequency values in the W-Co channel are about 40%-50% lower than the C-C channel value, and the turbulent mixing coefficient β decreases by around 25%. Corrections for β should therefore be applied in the subchannel code for the wall channel and corner channel to obtain reasonable predictions. A preliminary analysis of the fluctuation at the channel gaps has also been performed; the eddy cascade should be considered carefully in a detailed analysis of fluctuations in the rod bundle.