Title/Summary/Keyword: support optimization


Optimization for trapezoidal combined footings: Optimal design

  • Arnulfo Luévanos-Rojas
    • Advances in concrete construction
    • /
    • v.16 no.1
    • /
    • pp.21-34
    • /
    • 2023
  • This work presents a complete optimal model for trapezoidal combined footings that support a concentric load and moments about the X and Y axes at each column, in order to obtain the minimum area and the minimum cost. The model presented in this article considers a pressure diagram with a linear variation (the real pressure), and its equations are not limited to particular cases. The classic model takes into account a concentric load and the moment about the X axis (transverse axis) applied by each column, i.e., the resultant force is located at the geometric center of the footing on the Y axis (longitudinal axis); when a concentric load and moments about the X and Y axes act on the footing, the pressure applied on the contact surface is assumed uniform and equal to the maximum pressure. Four numerical problems are presented to find the optimal design of a trapezoidal combined footing under a concentric load and moments about the X and Y axes due to the columns: Case 1, not limited in the direction of the Y axis; Case 2, limited in the direction of the Y axis at column 1; Case 3, limited in the direction of the Y axis at column 2; Case 4, limited in the direction of the Y axis at columns 1 and 2. The complete optimal design, in terms of cost optimization, for trapezoidal combined footings can also be used for rectangular combined footings by considering a uniform footing width in the transverse direction, and for different reinforced concrete design codes, simply by modifying the resisting-capacity equations for moment, bending shear, and punching shear according to each code.
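
As a hedged illustration of the kind of minimum-area problem the paper formulates, the sketch below minimizes the plan area of a trapezoidal footing with a constrained-optimization solver. It keeps only an average-pressure check (the paper's full model uses the linear pressure diagram with biaxial moments plus code checks for moment, bending shear, and punching shear), and every symbol and value (R, q_a, the bounds) is an assumption for illustration.

```python
# A minimal sketch of area minimization for a trapezoidal footing,
# not the paper's full model: only the axial-load pressure check and
# geometric bounds are kept, and all values are illustrative.
import numpy as np
from scipy.optimize import minimize

R = 1500.0    # kN, resultant of the column loads (assumed)
q_a = 200.0   # kPa, allowable soil pressure (assumed)

def area(x):
    b1, b2, L = x
    return L * (b1 + b2) / 2.0  # plan area of the trapezoid

# Soil-pressure constraint: average contact pressure must not exceed
# q_a (the paper instead uses the full linear pressure diagram with
# moments about both axes).
cons = [{"type": "ineq", "fun": lambda x: q_a - R / area(x)}]
bounds = [(0.5, 5.0), (0.5, 5.0), (2.0, 10.0)]  # m, assumed limits

res = minimize(area, x0=[2.0, 1.0, 5.0], bounds=bounds, constraints=cons)
b1, b2, L = res.x
print(f"b1={b1:.2f} m, b2={b2:.2f} m, L={L:.2f} m, A={area(res.x):.2f} m^2")
```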

Hybrid machine learning with HHO method for estimating ultimate shear strength of both rectangular and circular RC columns

  • Quang-Viet Vu;Van-Thanh Pham;Dai-Nhan Le;Zhengyi Kong;George Papazafeiropoulos;Viet-Ngoc Pham
    • Steel and Composite Structures
    • /
    • v.52 no.2
    • /
    • pp.145-163
    • /
    • 2024
  • This paper presents six novel hybrid machine learning (ML) models that combine Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Gradient Boosting (GB), eXtreme Gradient Boosting (XGB), and Categorical Gradient Boosting (CGB) with the Harris Hawks Optimization (HHO) algorithm. These models, namely HHO-SVM, HHO-DT, HHO-RF, HHO-GB, HHO-XGB, and HHO-CGB, are designed to predict the ultimate shear strength of both rectangular and circular reinforced concrete (RC) columns. The prediction models are established using a comprehensive database consisting of 325 experimental data points for rectangular columns and 172 for circular columns. The ML model hyperparameters are optimized through a combination of cross-validation and the HHO algorithm. The performance of the hybrid ML models is evaluated and compared using various metrics, ultimately identifying the HHO-CGB model as the top performer for predicting the ultimate shear strength of both rectangular and circular RC columns. The mean R-value and mean a20-index are relatively high, reaching 0.991 and 0.959, respectively, while the mean absolute error and root mean square error are low (10.302 kN and 27.954 kN, respectively). A further comparison is conducted with four existing formulas to validate the efficiency of the proposed HHO-CGB model. The Shapley Additive Explanations method is applied to analyze the contribution of each variable to the output of the HHO-CGB model, providing insights into the local and global influence of the variables. The analysis reveals that the depth of the column, the length of the column, and the axial loading exert the most significant influence on the ultimate shear strength of RC columns. A user-friendly graphical interface tool is then developed based on the HHO-CGB model to facilitate practical and cost-effective usage.
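
The hybrid scheme pairs a metaheuristic searcher with cross-validated model fitting. As a rough sketch of that loop, the snippet below tunes a gradient-boosting regressor with cross-validation; a random search stands in for HHO, which has no common library implementation, and the data are synthetic stand-ins for the column database.

```python
# A hedged sketch of the hybrid idea: tune a gradient-boosting
# regressor's hyperparameters with cross-validation. The paper's
# searcher is the Harris Hawks Optimization metaheuristic; random
# search substitutes for it here, since HHO is not in scikit-learn.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(325, 8))   # stand-in for the rectangular-column database
y = rng.normal(size=325)        # stand-in for ultimate shear strength (kN)

param_space = {
    "n_estimators": [100, 300, 500],
    "learning_rate": [0.01, 0.05, 0.1],
    "max_depth": [2, 3, 4, 5],
}
search = RandomizedSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_space, n_iter=20, cv=5,
    scoring="neg_root_mean_squared_error", random_state=0,
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```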

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.39-55
    • /
    • 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a system that produces an optimal asset-allocation portfolio for investors using financial engineering algorithms, without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest asset-allocation outputs to investors, mathematical or statistical asset-allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset-allocation model: a simple but quite intuitive portfolio strategy in which assets are allocated so as to minimize portfolio risk while maximizing expected portfolio return using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns estimated from past price data; corner solutions, in which only a few assets receive any allocation, are often found. The Black-Litterman optimization model overcomes these problems by choosing a neutral Capital Asset Pricing Model equilibrium point. Implied equilibrium returns for each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns. These new estimates can produce an optimal portfolio via the well-known Markowitz mean-variance optimization algorithm. If the investor has no views on his asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. What if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolio output for Black-Litterman model users. This paper suggests an objective investor-views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial-basis, and polynomial kernel functions are used to learn the hyperplanes. Input variables for the SVM are returns, standard deviations, Stochastic %K, and price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix, and their probabilities are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used, with the value-weighted and equal-weighted market portfolios as benchmark indexes. We collect the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is 2008 to 2015 and the testing period is 2016 to 2018. Our suggested intelligent view model, combined with implied equilibrium returns, produced the optimal Black-Litterman portfolio. Over the out-of-sample period, this portfolio outperformed the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolio. The total return of the three-year Black-Litterman portfolio is 6.4%, the highest value, and its maximum drawdown of -20.8% is the smallest. Its Sharpe ratio, which measures return per unit of risk, is also the highest at 0.17. Overall, our suggested view model shows the possibility of replacing subjective analysts' views with an objective view model for practitioners applying Robo-Advisor asset-allocation algorithms in real trading.
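
For readers unfamiliar with the Black-Litterman mechanics the abstract describes, here is a minimal numpy sketch: implied equilibrium returns are recovered by reverse optimization from market weights, then updated with a view matrix P and view returns Q. All numbers are illustrative, not the paper's KOSPI 200 data, and the view-noise matrix Ω is set with a common textbook default rather than the paper's SVM-derived probabilities.

```python
# A minimal numpy sketch of the Black-Litterman step: implied
# equilibrium returns via reverse optimization, then a Bayesian update
# with views (P, Q). All numbers are illustrative assumptions.
import numpy as np

Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])  # asset covariance (assumed)
w_mkt = np.array([0.6, 0.4])                    # market-cap weights (assumed)
delta = 2.5                                     # risk-aversion coefficient
tau = 0.05                                      # uncertainty scale on the prior

pi = delta * Sigma @ w_mkt                      # implied equilibrium returns

# One view: asset 1 will outperform asset 2 by 2%.
P = np.array([[1.0, -1.0]])
Q = np.array([0.02])
Omega = P @ (tau * Sigma) @ P.T                 # common default for view noise

inv = np.linalg.inv
mid = inv(inv(tau * Sigma) + P.T @ inv(Omega) @ P)
mu_bl = mid @ (inv(tau * Sigma) @ pi + P.T @ inv(Omega) @ Q)

w_bl = inv(delta * Sigma) @ mu_bl               # unconstrained MV weights
print("posterior returns:", mu_bl, "weights:", w_bl)
```

With no views, mu_bl collapses back to pi, which is exactly the abstract's point that a view-free Black-Litterman portfolio reproduces the market portfolio.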

Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taeksoo;Han, Ingoo
    • Proceedings of the Korea Database Society Conference
    • /
    • 1999.06a
    • /
    • pp.175-186
    • /
    • 1999
  • Detecting the features of significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering methods (or multi-scale decompositions) such as wavelet analysis have come to be considered more useful than other methods for handling time series that contain strong quasi-cyclical components, because wavelet analysis theoretically extracts better local information, at different time intervals, from the filtered data. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which suits time series that exhibit self-similar behavior across different time scales. The specific local properties of wavelets can, for example, be particularly useful for describing signals with sharp, spiky, discontinuous, or fractal structure in financial markets based on chaos theory, and they also allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms. To date, studies of wavelet analysis have increasingly been applied in many different fields. In this study, we focus on several wavelet thresholding criteria and techniques that support multi-signal decomposition methods for financial time-series forecasting, and apply them to forecasting the Korean Won / U.S. Dollar currency market as a case study. One of the most important problems to be solved in applying such filtering is the correct choice of filter types and filter parameters: if the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. The threshold is often selected arbitrarily or by adopting certain theoretical or statistical criteria, and new and versatile techniques have recently been introduced for this problem. Our study first analyzes thresholding and filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures, especially in complex financial markets. Secondly, by comparing the results of different filtering techniques, we present the different filtering criteria of wavelet analysis that support neural network learning optimization, and we analyze the critical issues in the optimal filter design problem, namely finding the optimal filter parameters to extract significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints on the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal, useful in forecasting models, especially well-known neural network models.
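
As a concrete instance of the thresholding choice the abstract discusses, the sketch below applies soft wavelet shrinkage with the universal (VisuShrink) threshold using PyWavelets; the paper compares several such criteria, only one is shown here, and the series is synthetic rather than the Won/Dollar data.

```python
# A hedged sketch of wavelet shrinkage on a noisy series, using
# PyWavelets and the universal (VisuShrink) threshold.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 512
t = np.linspace(0, 1, n)
signal = np.sin(8 * np.pi * t) + 0.3 * rng.normal(size=n)  # stand-in for FX data

coeffs = pywt.wavedec(signal, "db4", level=4)      # multi-scale decomposition
sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise scale from finest details
thr = sigma * np.sqrt(2 * np.log(n))               # universal threshold

# Soft-threshold the detail coefficients; too small a threshold
# overfits the noise, too large a one oversmooths the signal.
denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
filtered = pywt.waverec(denoised, "db4")
print("residual std:", np.std(signal - filtered[:n]))
```

The filtered series, rather than the raw one, would then feed the neural network inputs the study describes.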


Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taek-Soo;Han, In-Goo
    • Proceedings of the Korea Intelligent Information Systems Society Conference
    • /
    • 1999.03a
    • /
    • pp.175-186
    • /
    • 1999
  • Detecting the features of significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering methods (or multi-scale decompositions) such as wavelet analysis have come to be considered more useful than other methods for handling time series that contain strong quasi-cyclical components, because wavelet analysis theoretically extracts better local information, at different time intervals, from the filtered data. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which suits time series that exhibit self-similar behavior across different time scales. The specific local properties of wavelets can, for example, be particularly useful for describing signals with sharp, spiky, discontinuous, or fractal structure in financial markets based on chaos theory, and they also allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms. To date, studies of wavelet analysis have increasingly been applied in many different fields. In this study, we focus on several wavelet thresholding criteria and techniques that support multi-signal decomposition methods for financial time-series forecasting, and apply them to forecasting the Korean Won / U.S. Dollar currency market as a case study. One of the most important problems to be solved in applying such filtering is the correct choice of filter types and filter parameters: if the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. The threshold is often selected arbitrarily or by adopting certain theoretical or statistical criteria, and new and versatile techniques have recently been introduced for this problem. Our study first analyzes thresholding and filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures, especially in complex financial markets. Secondly, by comparing the results of different filtering techniques, we present the different filtering criteria of wavelet analysis that support neural network learning optimization, and we analyze the critical issues in the optimal filter design problem, namely finding the optimal filter parameters to extract significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints on the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal, useful in forecasting models, especially well-known neural network models.


Fault Detection Technique for PVDF Sensor Based on Support Vector Machine (서포트벡터머신 기반 PVDF 센서의 결함 예측 기법)

  • Seung-Wook Kim;Sang-Min Lee
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.5
    • /
    • pp.785-796
    • /
    • 2023
  • In this study, a methodology for real-time classification and prediction of defects that may appear in PVDF (polyvinylidene fluoride) sensors, which are widely used for structural integrity monitoring, is proposed. The types of sensor defect appearing according to the sensor attachment environment were classified, and an impact test using an impact hammer was performed to obtain an output signal for each defect type. In order to clearly identify the differences between the output signals of the defect types, time-domain statistical features were extracted and a data set was constructed. Machine-learning-based classification algorithms were trained on the acquired data set and their results analyzed to select the algorithm most suitable for detecting sensor defect types; among them, the SVM (Support Vector Machine) was confirmed to perform best. As a result, sensor defect types were classified with an accuracy of 92.5%, up to 13.95% higher than the other classification algorithms. The sensor defect prediction technique proposed in this study is believed to serve as a base technology for securing the reliability not only of PVDF sensors but also of various sensors for real-time structural health monitoring.
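
A minimal sketch of the pipeline described above: time-domain statistics as features and an SVM as the classifier. The signals are synthetic stand-ins for the impact-hammer responses, and the two "defect types" are invented for illustration.

```python
# A hedged sketch of defect classification from time-domain statistical
# features (mean, std, RMS, kurtosis, skew, peak-to-peak) with an SVM.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def features(sig):
    return [sig.mean(), sig.std(), np.sqrt(np.mean(sig**2)),
            kurtosis(sig), skew(sig), np.ptp(sig)]

# Two invented "defect types": clean vs attenuated/noisy responses.
X, y = [], []
for label, (gain, noise) in enumerate([(1.0, 0.1), (0.4, 0.5)]):
    for _ in range(100):
        t = np.linspace(0, 1, 256)
        sig = gain * np.exp(-5 * t) * np.sin(60 * np.pi * t)
        X.append(features(sig + noise * rng.normal(size=t.size)))
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```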

Development of Wastewater Treatment System by Energy-Saving Photocatalyst Using Combination of Solar Light, UV Lamp and $TiO_2$ (태양광/자외선/이산화티타늄($TiO_2$)을 이용한 에너지 절약형 광촉매 반응 처리시스템 개발)

  • 김현용;양원호
    • Journal of Environmental Health Sciences
    • /
    • v.29 no.1
    • /
    • pp.51-61
    • /
    • 2003
  • Pollution purification using titanium dioxide (TiO$_2$) photocatalyst has attracted a great deal of attention with the increasing number of recent environmental problems. Currently, the application of TiO$_2$ photocatalysts has focused on the purification and treatment of wastewater. However, the use of conventional TiO$_2$ powder photocatalyst has the disadvantages of requiring stirring during the reaction and separation after the reaction, and the use of artificial UV lamps has made photocatalytic treatment systems costly. Consequently, we studied a pilot-scale design to aid in optimizing an energy-saving process, for more thorough development and reactor design, using a solar light / UV lamp / TiO$_2$ system. In this study, we manufactured the TiO$_2$ sol by the sol-gel method. According to analysis by XRD, SEM, and TEM, the TiO$_2$ sol was nano-sized (5-6 nm) and of the anatase type. An inorganic binder (SiO$_2$) was added to the TiO$_2$ sol so that it would coat the support strongly, and a ceramic-bead support was used because its separation rate is lower than that of glass beads. The influences of various experimental parameters, such as TiO$_2$ quantity, pH, flow rate, additives, pollutant concentration, climate condition, and reflection plate, were studied by means of the reaction time and the main characteristics of the obtained materials. In the water treatment system, a variable reactor using solar light and/or a UV lamp according to climate conditions, such as sunny and cloudy days, treated phenol and E. coli (Escherichia coli) effectively.

The Implementation of C Cross-Compiler for ES-C2340 DSP2 by Using the GNU Compiler (GNU 컴파일러를 이용한 ES-C2340 DSP2용 C 교차 컴파일러의 개발)

  • Lee, Si-Yeong;Gwon, Yuk-Chun;Yu, Ha-Yeong;Han, Gi-Cheon;Kim, Seung-Ho
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.1
    • /
    • pp.255-269
    • /
    • 1997
  • In this paper, we describe the implementation of a C cross-compiler for the ES-C2340 DSP2 processor using the GNU compiler. For rapid and efficient development, we reuse the GNU compiler framework and newly implement only the processor-dependent parts, such as the back end. This approach has several advantages. First, because we use the GNU compiler's well-proven optimization methods and multi-language support, we can improve the efficiency and generality of the compiler. Second, we can concentrate on the high-level language as a logic-proving tool in the processor development process. To support the cross-compiler, we also implement a text-level pre-linker.

  • PDF

Variable Geocasting based on Ad Hoc Networks (Ad Hoc 네트워크 기반의 가변 지오캐스팅)

  • Lee Cheol-Seung;Lee Joon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.8
    • /
    • pp.1401-1406
    • /
    • 2006
  • Mobile Ad-hoc networks have recently attracted a lot of attention in the research community as well as in industry. Although previous research has mainly focused on various Ad-hoc routing protocols, in this paper we consider how to efficiently support applications such as variable geocasting based on Ad-hoc networks. The goal of a geocasting protocol is to deliver data packets to the group of nodes located within a specified geocast region. Previous research supporting geocast service in Ad-hoc-based mobile computing suffers from non-optimal data delivery paths, overhead from frequent reconstruction of the geocast tree, and service disruption. In this paper, we propose a mobility-pattern-based geocast technique that uses a variable service range, set according to the mobility of the destination node, together with resource reservation, to solve these problems. The experimental results show that the proposed mechanism improves connection and network overhead compared with previous research.
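
As a toy illustration of the variable-geocast idea (not the paper's protocol), the sketch below grows a circular geocast region with the destination's speed and uses it as the delivery test; all classes, names, and parameters are hypothetical.

```python
# A toy sketch: a packet carries a target region, and a node delivers
# it only if the node lies inside that region. The variable service
# range is modeled by expanding the region with the destination's
# speed; everything here is illustrative, not the paper's protocol.
from dataclasses import dataclass
import math

@dataclass
class GeocastRegion:
    cx: float
    cy: float
    radius: float

    def expanded_for(self, speed: float, horizon: float) -> "GeocastRegion":
        # Variable service range: faster destinations get a larger region.
        return GeocastRegion(self.cx, self.cy, self.radius + speed * horizon)

    def contains(self, x: float, y: float) -> bool:
        return math.hypot(x - self.cx, y - self.cy) <= self.radius

region = GeocastRegion(cx=100.0, cy=100.0, radius=20.0)
effective = region.expanded_for(speed=5.0, horizon=4.0)  # radius grows to 40

for node in [(95.0, 110.0), (160.0, 100.0)]:
    print(node, "deliver" if effective.contains(*node) else "forward only")
```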

Perceptual Color Difference based Image Quality Assessment Method and Evaluation System according to the Types of Distortion (인지적 색 차이 기반의 이미지 품질 평가 기법 및 왜곡 종류에 따른 평가 시스템 제안)

  • Lee, Jee-Yong;Kim, Young-Jin
    • Journal of KIISE
    • /
    • v.42 no.10
    • /
    • pp.1294-1302
    • /
    • 2015
  • Many image quality assessment metrics that precisely reflect the human visual system (HVS) have been researched. The Structural SIMilarity (SSIM) index is a remarkable HVS-aware metric that utilizes structural information, since the HVS is sensitive to the overall structure of an image. However, SSIM fails to deal with color difference in terms of the HVS. To address this problem, the Structural and Hue SIMilarity (SHSIM) index was introduced, with the Hue, Saturation, Intensity (HSI) model as its color space, but it cannot reflect the HVS-aware color difference between two color images. In this paper, we propose a new image quality assessment method for color images using the CIE Lab color space. In addition, using a support vector machine (SVM) classifier, we propose an optimization system that applies the optimal metric according to the type of distortion. To evaluate the proposed index, the LIVE database, the best known in the area of image quality assessment, is employed, and four criteria are used. Experimental results show that the proposed index is more consistent with subjective quality scores than the other methods.
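
As a small sketch of the perceptual color-difference ingredient, the snippet below converts a reference and a distorted image to CIE Lab and averages a per-pixel Delta E 2000 with scikit-image; the paper's full metric combines such a color term with structural information, and the images here are synthetic.

```python
# A minimal sketch of the CIE Lab color-difference term: convert both
# images to Lab and average a per-pixel Delta E_00. Only the color
# ingredient is shown, on synthetic images, not the paper's metric.
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

rng = np.random.default_rng(0)
ref = rng.random((64, 64, 3))                                  # stand-in reference
dist = np.clip(ref + 0.05 * rng.normal(size=ref.shape), 0, 1)  # distorted copy

lab_ref, lab_dist = rgb2lab(ref), rgb2lab(dist)
delta_e = deltaE_ciede2000(lab_ref, lab_dist)  # per-pixel perceptual difference
print("mean Delta E_00:", float(delta_e.mean()))
```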