• Title/Summary/Keyword: Parametric Algorithms


Self-Organizing Polynomial Neural Networks Based on Genetically Optimized Multi-Layer Perceptron Architecture

  • Park, Ho-Sung;Park, Byoung-Jun;Kim, Hyun-Ki;Oh, Sung-Kwun
    • International Journal of Control, Automation, and Systems, v.2 no.4, pp.423-434, 2004
  • In this paper, we introduce a new topology of Self-Organizing Polynomial Neural Networks (SOPNN) based on a genetically optimized Multi-Layer Perceptron (MLP) and discuss its comprehensive design methodology involving mechanisms of genetic optimization. Recall that the design of the 'conventional' SOPNN uses the extended Group Method of Data Handling (GMDH) technique to exploit polynomials and to consider a fixed number of input nodes at the polynomial neurons (or nodes) located in each layer. However, this design process does not guarantee that the conventional SOPNN generated through learning results in an optimal network architecture. The design procedure applied in the construction of each layer of the SOPNN deals with its structural optimization, involving the selection of preferred nodes (PNs) with specific local characteristics (such as the number of input variables, the order of the polynomials, and the input variables themselves), and addresses specific aspects of parametric optimization. An aggregate performance index with a weighting factor is proposed in order to achieve a sound balance between the approximation and generalization (predictive) abilities of the model. To evaluate the performance of the GA-based SOPNN, the model is tested on pH neutralization process data as well as sewage treatment process data. A comparative analysis indicates that the proposed SOPNN achieves higher accuracy and superior predictive capability compared with other intelligent models presented previously.
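The abstract does not give the exact form of the aggregate performance index, but a weighted sum of training (approximation) and testing (generalization) errors is a minimal sketch consistent with its description; the function name and the default value of the weighting factor `theta` are assumptions:

```python
def aggregate_index(train_error, test_error, theta=0.5):
    """Hypothetical aggregate performance index: a weighted balance of
    approximation (training) and generalization (testing) errors, with
    theta as the weighting factor a GA would select nodes against."""
    return theta * train_error + (1.0 - theta) * test_error

# A GA selecting preferred nodes (PNs) would minimize this index.
print(aggregate_index(0.10, 0.30, theta=0.5))
```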

Human Motion Tracking based on 3D Depth Point Matching with Superellipsoid Body Model (타원체 모델과 깊이값 포인트 매칭 기법을 활용한 사람 움직임 추적 기술)

  • Kim, Nam-Gyu
    • Journal of Digital Contents Society, v.13 no.2, pp.255-262, 2012
  • Human motion tracking algorithms are receiving attention from many research areas, such as human-computer interaction, video conferencing, surveillance analysis, and game or entertainment applications. Over the last decade, various tracking technologies have been demonstrated and refined for each application, drawing on real-time computer vision and image processing, advanced man-machine interfaces, and so on. In this paper, we introduce cost-effective, real-time human motion tracking algorithms based on matching 3D points from a depth image to a given superellipsoid body representation. The representative body model is built with a superellipsoid-based parametric volume modeling method and consists of 18 articulated joints. For more accurate estimation, we exploit an initial inverse kinematics solution with classified body-part information; the initial pose is then refined to a more accurate pose by the 3D point matching algorithm.
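The superellipsoid volume primitive used for each body part can be tested against a depth point with the standard inside-outside function; the parameter names below (scales `a`, shape exponents `e1`, `e2`) are the conventional superellipsoid notation, not values taken from the paper:

```python
import numpy as np

def superellipsoid_F(p, a=(1.0, 1.0, 1.0), e1=1.0, e2=1.0):
    """Inside-outside function of a superellipsoid centered at the
    origin: F < 1 inside, F = 1 on the surface, F > 1 outside.
    With e1 = e2 = 1 this reduces to an ordinary ellipsoid."""
    x, y, z = np.abs(np.asarray(p, dtype=float) / np.asarray(a, dtype=float))
    return (x ** (2 / e2) + y ** (2 / e2)) ** (e2 / e1) + z ** (2 / e1)

# Classify a depth point against one body-part primitive:
print(superellipsoid_F((0.5, 0.0, 0.0)) < 1.0)  # point lies inside
```

A full body model would combine 18 such primitives, one per articulated segment, and assign each depth point to the primitive with the smallest F.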

Precise-Optimal Frame Length Based Collision Reduction Schemes for Frame Slotted Aloha RFID Systems

  • Dhakal, Sunil;Shin, Seokjoo
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.1, pp.165-182, 2014
  • RFID systems employ efficient Anti-Collision Algorithms (ACAs) to enhance performance in various applications. The EPC-Global G2 RFID system utilizes Frame Slotted Aloha (FSA) as its ACA. One common approach to maximizing the system performance (tag identification efficiency) of FSA-based RFID systems is to find the optimal frame length relative to the contending population size of the RFID tags. Several analytical models for finding the optimal frame length have been developed; however, they are not perfectly optimized because they lack a precise characterization of the timing details of the underlying ACA. In this paper, we investigate this promising direction by precisely characterizing the timing details of the EPC-Global G2 protocol and use them to derive a precise-optimal frame length model. The main objective of the model is to determine the optimal frame length for the estimated number of tags that maximizes the performance of an RFID system. However, because precise estimation of the contending tags is difficult, we utilize a parametric-heuristic approach to maximize the system performance and propose two simple schemes based on the obtained optimal frame length, namely Improved Dynamic-Frame Slotted Aloha (ID-FSA) and Exponential Random Partitioning-Frame Slotted Aloha (ERP-FSA). The ID-FSA scheme is based on tag set estimation and frame size update mechanisms, whereas the ERP-FSA scheme adjusts the contending tag population so that the applied frame size becomes optimal. Simulation results indicate that ID-FSA outperforms several well-known schemes under various conditions, while ERP-FSA performs well when the frame size is small.
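The classical slot-count analysis behind FSA frame-length optimization can be sketched as follows; the paper's precise-optimal model additionally weights the G2 durations of empty, successful, and collided slots, which this simplified sketch omits:

```python
def efficiency(n, L):
    """Expected fraction of slots holding exactly one tag reply in
    frame slotted ALOHA with n contending tags and frame length L
    (each tag picks one of L slots uniformly at random)."""
    return n / L * (1.0 - 1.0 / L) ** (n - 1)

def best_frame_length(n, candidates=range(1, 513)):
    """Brute-force the frame length maximizing slot efficiency; the
    classical optimum sits near L = n."""
    return max(candidates, key=lambda L: efficiency(n, L))

print(best_frame_length(100))  # close to the tag count n
```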

SSI effects on seismic behavior of smart base-isolated structures

  • Shourestani, Saeed;Soltani, Fazlollah;Ghasemi, Mojtaba;Etedali, Sadegh
    • Geomechanics and Engineering, v.14 no.2, pp.161-174, 2018
  • The present study investigates the soil-structure interaction (SSI) effects on the seismic performance of smart base-isolated structures. The control algorithm adopted for tuning the control force plays a key role in the successful implementation of such structures; however, in most studies reported in the literature, these algorithms are designed without considering the SSI effects. Considering the SSI effects, a linear quadratic regulator (LQR) controller is employed for seismic control of a smart base-isolated structure. A particle swarm optimization (PSO) algorithm is used to tune the gain matrix of the controller both without and with SSI effects. For a parametric study, three types of soil, three well-known earthquakes, and a wide range of superstructure periods are considered to assess the SSI effects on the seismic control of the smart base-isolated structure. The adopted controller achieves a significant reduction in base displacement; however, any attempt to decrease the maximum base displacement results in a slight increase in superstructure accelerations. The maximum and RMS base displacements of the smart base-isolated structure when SSI effects are considered exceed the corresponding responses when SSI effects are ignored. Overall, the maximum and RMS base displacements of the structure increase with the natural period of the superstructure. Furthermore, the maximum and RMS superstructure accelerations are significantly influenced by the frequency content of the earthquake excitations and the natural frequency of the superstructure. The results show that the design of the controller is strongly influenced by the SSI effects. In addition, the simulation results demonstrate that ignoring the SSI effects yields an unfavorable control system, which may degrade the seismic performance of the smart base-isolated structure when SSI effects are present.
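As a rough illustration of the LQR component only (the PSO tuning of the weighting matrices and the SSI model are omitted), a discrete-time LQR gain can be obtained by iterating the Riccati difference equation to a fixed point; the system matrices below are hypothetical toy numbers, not the paper's structural model:

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR gain K (control law u = -K x), obtained by
    iterating the Riccati difference equation:
    P <- Q + A'P(A - BK),  K = (R + B'PB)^-1 B'PA."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical 2-state (displacement, velocity) toy model:
A = np.array([[1.0, 0.01],
              [-0.5, 0.98]])
B = np.array([[0.0],
              [0.01]])
K = dlqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
print(K)
```

In the paper's setting, PSO would search over the controller weighting (hence the gain matrix) with the SSI-augmented model in the loop, rather than using fixed Q and R as here.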

Quantitative Conductivity Estimation Error due to Statistical Noise in Complex $B_1{^+}$ Map (정량적 도전율측정의 오차와 $B_1{^+}$ map의 노이즈에 관한 분석)

  • Shin, Jaewook;Lee, Joonsung;Kim, Min-Oh;Choi, Narae;Seo, Jin Keun;Kim, Dong-Hyun
    • Investigative Magnetic Resonance Imaging, v.18 no.4, pp.303-313, 2014
  • Purpose: In-vivo conductivity reconstruction using the transmit field ($B_1{^+}$) information of MRI has been proposed. We assessed the accuracy of conductivity reconstruction in the presence of statistical noise in the complex $B_1{^+}$ map and provide a parametric model of the conductivity-to-noise ratio. Materials and Methods: The $B_1{^+}$ distribution was simulated for a cylindrical phantom model. By adding complex Gaussian noise to the simulated $B_1{^+}$ map, the quantitative conductivity estimation error was evaluated. The evaluation was repeated over several parameters such as the Larmor frequency, the object radius, and the SNR of the $B_1{^+}$ map, and a parametric model for the conductivity-to-noise ratio was developed from these parameters. Results: According to the simulation results, conductivity estimation is more sensitive to statistical noise in the $B_1{^+}$ phase than to noise in the $B_1{^+}$ magnitude. The conductivity estimate of the object of interest does not depend on the external object surrounding it. The conductivity-to-noise ratio is proportional to the signal-to-noise ratio of the $B_1{^+}$ map, the Larmor frequency, the conductivity value itself, and the number of averaged pixels. To estimate the conductivity of a target tissue accurately, the SNR of the $B_1{^+}$ map and an adequate filtering size have to be taken into account in the reconstruction process. The simulation results were also verified on a conventional 3T MRI scanner. Conclusion: Through these relationships, the quantitative conductivity estimation error due to statistical noise in the $B_1{^+}$ map is modeled. Using this model, further issues regarding filtering and reconstruction algorithms can be investigated for MREPT.
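The proportionalities stated in the results can be collected into a one-line parametric model; since the abstract gives only proportional relationships, the function name, argument names, and the constant `c` are assumptions:

```python
def conductivity_to_noise_ratio(snr_b1, f_larmor_hz, sigma, n_pixels, c=1.0):
    """Sketch of the parametric CNR model from the abstract: CNR scales
    with the B1+ map SNR, the Larmor frequency, the conductivity value
    itself, and the number of averaged pixels; c is an unspecified
    proportionality constant absorbing all remaining factors."""
    return c * snr_b1 * f_larmor_hz * sigma * n_pixels

# Doubling the B1+ map SNR doubles the predicted CNR:
print(conductivity_to_noise_ratio(snr_b1=20, f_larmor_hz=1.28e8,
                                  sigma=0.5, n_pixels=9))
```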

Evolutionally optimized Fuzzy Polynomial Neural Networks Based on Fuzzy Relation and Genetic Algorithms: Analysis and Design (퍼지관계와 유전자 알고리즘에 기반한 진화론적 최적 퍼지다항식 뉴럴네트워크: 해석과 설계)

  • Park, Byoung-Jun;Lee, Dong-Yoon;Oh, Sung-Kwun
    • Journal of the Korean Institute of Intelligent Systems, v.15 no.2, pp.236-244, 2005
  • In this study, we introduce a new topology of Fuzzy Polynomial Neural Networks (FPNN) that is based on fuzzy relations and an evolutionally optimized Multi-Layer Perceptron, discuss a comprehensive design methodology, and carry out a series of numeric experiments. The construction of the evolutionally optimized FPNN (EFPNN) exploits fundamental technologies of Computational Intelligence. The architecture of the resulting EFPNN arises from a synergistic usage of a genetic-optimization-driven hybrid system generated by combining rule-based Fuzzy Neural Networks (FNN) with Polynomial Neural Networks (PNN). The FNN contributes the premise part of the overall rule-based structure of the EFPNN, while the consequence part is designed using the PNN. As the consequence part of the EFPNN, the development of the genetically optimized PNN (gPNN) dwells on two general optimization mechanisms: structural optimization is realized via GAs, whereas for parametric optimization we proceed with standard least-squares-based learning. To evaluate the performance of the EFPNN, the models are tested on several representative numerical examples. A comparative analysis shows that the proposed EFPNN achieves higher accuracy and superior predictive capability compared with other intelligent models presented previously.
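The least-squares step used for parametric optimization of a polynomial consequent can be sketched with a plain linear least-squares solve; the single-input polynomial below is a simplification of the multi-input PNN node polynomials:

```python
import numpy as np

def fit_polynomial_consequent(x, y, order=2):
    """Standard least-squares estimation of polynomial coefficients,
    i.e. the parametric-optimization step for one consequent node.
    Returns coefficients ordered from x^order down to the constant."""
    X = np.vander(x, order + 1)  # columns: x^order, ..., x^1, 1
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Recover a known quadratic 3x^2 + 2x + 1 from samples:
x = np.linspace(-1.0, 1.0, 50)
y = 3 * x**2 + 2 * x + 1
print(fit_polynomial_consequent(x, y, order=2))
```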

Development of Detailed Design Automation Technology for AI-based Exterior Wall Panels and its Backframes

  • Kim, HaYoung;Yi, June-Seong
    • International conference on construction engineering and project management, 2022.06a, pp.1249-1249, 2022
  • The facade, the exterior material of a building, is one of the crucial factors that determine its morphological identity and its functional performance, such as energy efficiency, earthquake resistance, and fire resistance. However, regardless of the type of exterior material, frequent accidents in which exterior materials fall from buildings continue to cause severe property damage and human casualties. The quality of the building envelope depends on the detailed design and is closely related to the back frames that support the exterior material. Detailed design means the creation of a shop drawing, the stage at which the basic design is developed to a level where construction is possible by specifying the exact necessary details. However, due to chronic problems in the construction industry, such as reduced working hours and a lack of design personnel, detailed design is not being implemented appropriately. Considering these characteristics, it is necessary to automate the detailed design process for exterior materials and works based on the domain-expert knowledge of the construction industry using artificial intelligence (AI). Therefore, this study aims to establish a detailed design automation algorithm for AI-based condition-responsive exterior wall panels and their back frames. The scope of the study is limited to "detailed design" performed based on the working drawings during the exterior work process and to "stone panels" among exterior materials. First, working-level data on stone works is collected to analyze the existing detailed design process. After that, design parameters are derived by analyzing factors that affect the design of the building's exterior wall and back frames, such as structure, floor height, wind load, lift limit, and transportation elements. The relational expressions between the derived parameters are then established and algorithmized to implement a rule-based AI design. These algorithms can be applied to detailed designs based on 3D BIM to automatically calculate quantities and unit prices. The next goal is to derive the iterative elements that occur in the process and implement a robotic process automation (RPA)-based system to link the entire "detailed design - quantity calculation - order" process. This study is significant because it expands design automation research, which has been rather limited to the basic design stages, to the detailed design area at the beginning of construction execution, and because it increases productivity by using AI. In addition, it can help fundamentally improve the working environment of the construction industry through the development of technologies that are directly applicable in practice.
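A rule-based design relation of the kind described might look like the following sketch; the rule itself, the parameter names, and all numbers are hypothetical illustrations, not taken from the study:

```python
def max_anchor_spacing_mm(wind_load_kpa, panel_width_mm, anchor_capacity_kn=2.0):
    """Hypothetical rule-based check (illustrative only): the vertical
    anchor spacing on a back frame such that the tributary wind load
    per anchor does not exceed the anchor capacity.
    wind_load_kpa: design wind pressure in kN/m^2.
    panel_width_mm: tributary panel width carried by one anchor line."""
    line_load_kn_per_m = wind_load_kpa * panel_width_mm / 1000.0
    return anchor_capacity_kn / line_load_kn_per_m * 1000.0

# Example: 1.5 kPa wind pressure on a 1200 mm wide stone panel.
print(max_anchor_spacing_mm(wind_load_kpa=1.5, panel_width_mm=1200))
```

In the study's framework, many such parameter relations (floor height, lift limit, transportation constraints, and so on) would be chained into one rule base driving the shop-drawing generation.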


A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems, v.26 no.1, pp.135-149, 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed, compensating for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, so it is stable in the management of large funds and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It scales to billions of examples in limited-memory environments, trains much faster than traditional boosting methods, and is frequently used across many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. This model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period, thereby reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019, composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions with a moving-window method (1,000 in-sample and 20 out-of-sample observations), producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of the cumulative rate of return, obtaining a large sample thanks to the long test period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and the reduction of estimation errors. The total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. Many financial and asset allocation models are of limited use in practical investment because of the fundamental question of whether the past characteristics of assets will persist in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risks with a state-of-the-art algorithm. While there are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization, we suggest a new method that reduces these errors using machine learning. This study is thus meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
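As a minimal sketch of the risk-based allocation idea (the paper feeds XGBoost volatility forecasts into a full covariance-based risk parity optimization, which is not reproduced here), inverse-volatility weighting assigns each asset a weight inversely proportional to its predicted volatility so that each contributes a similar amount of risk:

```python
import numpy as np

def inverse_volatility_weights(predicted_vols):
    """Naive risk-parity-style allocation: weight each asset inversely
    to its (here: forecast) volatility and normalize to sum to 1.
    Any volatility forecaster, e.g. an XGBoost regressor, could supply
    predicted_vols."""
    inv = 1.0 / np.asarray(predicted_vols, dtype=float)
    return inv / inv.sum()

# Low-volatility assets receive larger weights:
print(inverse_volatility_weights([0.10, 0.20, 0.40]))
```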