• Title/Summary/Keyword: error optimization


Motion Simplification of Virtual Character (가상 캐릭터의 동작 단순화 기법)

  • Ahn, Jung-Hyun;Oh, Seung-Woo;Wohn, Kwang-Yun
    • Journal of KIISE:Computer Systems and Theory / v.33 no.10 / pp.759-767 / 2006
  • The level-of-detail (LoD) technique, which reduces the number of polygons in a mesh, is one of the most fundamental techniques in real-time rendering. In this paper, we propose a novel level-of-detail technique applied to a virtual character's motion (Motion LoD). The movement of a virtual character can be defined by the transformation of each joint and its relation to the mesh. The basic idea of the proposed Motion LoD method is to reduce the number of joints in an articulated figure while minimizing the error between the original and simplified motions. For the motion optimization, we propose an error estimation method and, for fast optimization, a linear system reconstructed from this error estimation. The proposed motion simplification method is effective for motion editing and real-time crowd animation.
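
A minimal sketch of the joint-reduction idea: a planar three-joint chain is reduced to two joints, and the remaining joint angles are re-optimized per frame so the end-effector error against the original motion is minimized. The 2D toy kinematics and the coordinate search (standing in for the paper's linear-system solve) are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def fk(angles, lengths):
    """Planar forward kinematics: end-effector position of a joint chain."""
    acc = np.cumsum(angles)                 # absolute orientation of each link
    return np.array([np.sum(lengths * np.cos(acc)),
                     np.sum(lengths * np.sin(acc))])

rng = np.random.default_rng(0)
lengths = np.array([1.0, 0.8, 0.6])         # original 3-joint figure
frames = [rng.uniform(-0.3, 0.3, size=3) for _ in range(20)]  # original motion

simp_lengths = np.array([1.0, 1.4])         # joints 2 and 3 merged into one link

def error(simp_angles, orig_angles):
    return float(np.linalg.norm(fk(simp_angles, simp_lengths)
                                - fk(orig_angles, lengths)))

def simplify_frame(orig, iters=200):
    """Minimize the simplified chain's end-effector error for one frame."""
    a = np.array([orig[0], orig[1] + orig[2]])  # initial guess: summed angles
    best, step = error(a, orig), 0.1
    for _ in range(iters):                      # annealed coordinate search
        for j in range(2):
            for d in (step, -step):
                cand = a.copy()
                cand[j] += d
                e = error(cand, orig)
                if e < best:
                    a, best = cand, e
        step *= 0.97
    return a, best

errs = [simplify_frame(f)[1] for f in frames]
print("mean end-effector error after simplification:", float(np.mean(errs)))
```

Because the simplified chain keeps enough reach, the per-frame error can be driven close to zero; the paper solves the analogous minimization with a linear system rather than a search.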

A Suffix Tree Transform Technique for Substring Selectivity Estimation (부분 문자열 선택도 추정을 위한 서픽스트리 변환 기법)

  • Lee, Hong-Rae;Shim, Kyu-Seok;Kim, Hyoung-Joo
    • Journal of KIISE:Databases / v.34 no.2 / pp.141-152 / 2007
  • Selectivity estimation has been a crucial component of query optimization in relational databases. While extensive research has been done on this topic for predicates over numerical data, little work has addressed substring predicates. We propose novel suffix tree transform algorithms for this problem. Unlike previous approaches, where a full suffix tree is pruned and an estimation algorithm is then employed, we transform a suffix tree into a suffix graph systematically. In our approach, nodes with similar counts are merged while the structural information of the original suffix tree is preserved in a controlled manner. We present both an error-bound algorithm and a space-bound algorithm. Experimental results with real-life data sets show that our algorithms achieve lower average relative error than previous works, as well as good error distribution characteristics.
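
The count-merging idea behind the error-bound variant can be shown in miniature: exact substring counts are replaced by representatives of geometric buckets, so every stored count stays within a chosen relative-error factor of the true count. The bucket dictionary below is a toy stand-in for the paper's suffix graph, and `eps` and the sample strings are illustrative assumptions.

```python
from collections import Counter
import math

def substring_counts(strings, max_len=4):
    """Number of strings containing each substring (up to max_len chars)."""
    counts = Counter()
    for s in strings:
        seen = {s[i:j] for i in range(len(s))
                for j in range(i + 1, min(len(s), i + max_len) + 1)}
        counts.update(seen)
    return counts

def build_summary(counts, eps=0.5):
    """Replace each exact count by its geometric-bucket representative."""
    return {sub: (1 + eps) ** math.floor(math.log(n, 1 + eps))
            for sub, n in counts.items()}

data = ["database", "data", "date", "update", "validate", "mandate"]
true_counts = substring_counts(data)
estimates = build_summary(true_counts, eps=0.5)

rel_errors = [abs(estimates[s] - n) / n for s, n in true_counts.items()]
print("max relative error:", max(rel_errors))   # stays below eps by construction
```

Merging nodes whose counts fall in the same bucket is what bounds the relative estimation error; the real algorithm additionally preserves the tree's structural information while merging.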

Neural Network Structure and Parameter Optimization via Genetic Algorithms (유전알고리즘을 이용한 신경망 구조 및 파라미터 최적화)

  • Han, Seung-Soo (한승수)
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.3 / pp.215-222 / 2001
  • Neural network based models of semiconductor manufacturing processes have been shown to offer advantages in both accuracy and generalization over traditional methods. However, model development is often complicated by the fact that back-propagation neural networks contain several adjustable parameters whose optimal values are unknown during training. These include the learning rate, momentum, training tolerance, and the number of hidden-layer neurons. This paper presents an investigation of the use of genetic algorithms (GAs) to determine the optimal neural network parameters for modeling plasma-enhanced chemical vapor deposition (PECVD) of silicon dioxide films. To find an optimal parameter set for the neural network PECVD models, a performance index was defined and used in the GA objective function. This index was designed to account for network prediction error as well as training error, with a higher emphasis on reducing prediction error. The results of the genetic search were compared with the results of a similar search using the simplex algorithm.


A Threshold Optimization Method for Decentralized Cooperative Spectrum Sensing in Cognitive Radio Networks (인지 무선 네트워크 내 분산 협력 대역 검출을 위한 문턱값 최적화 방법)

  • Kim, Nak-Kyun;Byun, Youn-Shik
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.2 / pp.253-263 / 2015
  • Recently, spectrum sensing performance has been improved by cooperative spectrum sensing, in which the sensing results of several secondary users are reported to a fusion center. Using cognitive radio, a secondary user is able to share the bandwidth allocated to a primary user. In this paper, we propose a new decentralized cooperative spectrum sensing scheme that compensates for the performance degradation of existing decentralized schemes by considering the error probability of the channel over which each secondary user's sensing result is delivered to the fusion center. In addition, a sensing-threshold optimization that minimizes the error probability of decentralized cooperative spectrum sensing is derived, and the optimal sensing threshold is confirmed to maximize decentralized cooperative spectrum sensing performance.
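
The threshold optimization can be illustrated for a single reporting link: with a Gaussian test statistic and a reporting channel that flips each local decision with probability `eps`, the total decision-error probability is minimized over the sensing threshold by grid search. The Gaussian detection model, `eps`, and the equal priors are illustrative assumptions, not the paper's derivation.

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def error_prob(lam, mu=2.0, eps=0.05, p_h1=0.5):
    pfa = 1 - Phi(lam)                      # false alarm (primary user absent)
    pmd = Phi(lam - mu)                     # missed detection (primary present)
    # the reporting channel flips the local decision with probability eps
    pfa_eff = pfa * (1 - eps) + (1 - pfa) * eps
    pmd_eff = pmd * (1 - eps) + (1 - pmd) * eps
    return (1 - p_h1) * pfa_eff + p_h1 * pmd_eff

grid = [i / 1000 for i in range(-2000, 4000)]
best = min(grid, key=error_prob)
print("optimal threshold:", best, "min error probability:", error_prob(best))
```

For equal priors and unit-variance statistics the optimum lands at the midpoint of the two hypothesis means (here mu/2 = 1.0); the flip probability shifts the achievable error floor but not the optimal threshold in this symmetric setup.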

Predicting concrete's compressive strength through three hybrid swarm intelligent methods

  • Zhang Chengquan;Hamidreza Aghajanirefah;Kseniya I. Zykova;Hossein Moayedi;Binh Nguyen Le
    • Computers and Concrete / v.32 no.2 / pp.149-163 / 2023
  • One of the main design parameters traditionally utilized in geotechnical engineering projects is the uniaxial compressive strength. The present paper employed three artificial intelligence methods, i.e., the stochastic fractal search (SFS), multi-verse optimization (MVO), and the vortex search algorithm (VSA), to determine the compressive strength of concrete (CSC). To this end, 1030 concrete specimens were subjected to compressive strength tests. According to the obtained laboratory results, the fly ash, cement, water, slag, coarse aggregate, fine aggregate, and superplasticizer (SP) contents were used as the input parameters of the model in order to decide the optimum input configuration for estimating the compressive strength. Performance was evaluated using three criteria, i.e., the root mean square error (RMSE), the mean absolute error (MAE), and the determination coefficient (R2). The evaluation of the error criteria and the determination coefficient obtained from the three techniques indicates that the SFS-MLP technique outperformed the MVO-MLP and VSA-MLP methods, whose artificial neural network models exhibit larger errors and lower correlation coefficients. The use of the stochastic fractal search algorithm resulted in a considerable enhancement of the precision and accuracy of the evaluations conducted through the artificial neural network. According to the results, the SFS-MLP technique showed the best performance in estimating the compressive strength of concrete (R2 = 0.99932 and 0.99942, and RMSE = 0.32611 and 0.24922). The novelty of this study is the use of a large dataset of 1030 entries and the optimization of the learning scheme of the neural prediction model via a 20:80 testing-to-training data split.
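
The three evaluation criteria named above are standard; a minimal reference implementation follows, with made-up strength values (in MPa) rather than the paper's data.

```python
import math

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def r2(obs, pred):
    """Determination coefficient: 1 - SS_res / SS_tot."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

obs = [35.2, 41.8, 29.6, 52.1, 38.4]       # illustrative observed strengths
pred = [34.8, 42.5, 30.1, 51.0, 39.0]      # illustrative model predictions
print(rmse(obs, pred), mae(obs, pred), r2(obs, pred))
```

A perfect model gives RMSE = MAE = 0 and R2 = 1; the near-unity R2 values reported above mean the residual error is a tiny fraction of the strength variance across the 1030 specimens.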

Reliability-based Design Optimization using Multiplicative Decomposition Method (곱분해기법을 이용한 신뢰성 기반 최적설계)

  • Kim, Tae-Kyun;Lee, Tae-Hee
    • Journal of the Computational Structural Engineering Institute of Korea / v.22 no.4 / pp.299-306 / 2009
  • Design optimization finds the optimum point that minimizes the objective function while satisfying the design constraints. Conventional optimization does not consider the uncertainty that originates from modeling or the manufacturing process, so the optimum often lies on the boundaries of the constraints. Reliability-based design optimization (RBDO) combines an optimization technique with a reliability analysis that calculates the reliability of the system. Reliability analysis can be classified into simulation methods, fast probability integration methods, and moment-based reliability methods. In the most generally used MPP-based reliability analysis, one of the fast probability integration methods, cost and numerical error can increase during the transformation of the constraints into standard normal space when many most probable points (MPPs) exist. In this paper, the multiplicative decomposition method is used as the reliability analysis for RBDO, and sensitivity analysis is performed so that a gradient-based optimization algorithm can be applied. Mathematical and engineering examples illustrate the whole RBDO process.
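
The RBDO structure, an outer design search wrapped around an inner reliability analysis, can be sketched as follows. Plain Monte Carlo sampling stands in for the paper's multiplicative decomposition method, and the limit state, distributions, and target failure probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def failure_prob(d, n=20000, sigma=0.3):
    """Inner reliability analysis: P(g(X) < 0) with X ~ N(d, sigma^2 I)."""
    x = rng.normal(d, sigma, size=(n, 2))
    g = x[:, 0] + x[:, 1] - 3.0            # limit state: failure when g < 0
    return float(np.mean(g < 0))

def cost(d):
    return d[0] ** 2 + d[1] ** 2           # objective to minimize

target_pf = 0.01
best = None
for d1 in np.linspace(1.0, 3.0, 21):       # crude outer design search
    for d2 in np.linspace(1.0, 3.0, 21):
        d = (d1, d2)
        if failure_prob(d) <= target_pf and (best is None or cost(d) < cost(best)):
            best = d
print("reliable optimum:", best, "cost:", cost(best))
```

Note how the reliable optimum backs away from the deterministic boundary d1 + d2 = 3: the uncertainty pushes the design inside the feasible region, which is exactly why the abstract says conventional optima sitting on constraint boundaries are unsafe.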

An Optimization of distributed Hydrologic Model using Multi-Objective Optimization Method (다중최적화기법을 이용한 분포형 수문모형의 최적화)

  • Kim, Jungho;Kim, Taegyun
    • Journal of Wetlands Research / v.21 no.1 / pp.1-8 / 2019
  • In this study, a multi-objective optimization method is applied to optimize a hydrological model that estimates runoff through two hydrological processes. HL-RDHM, a distributed hydrological model that can simultaneously estimate snow accumulation and runoff, was used as the distributed hydrological model, and the Durango River basin in Colorado, USA, was selected as the watershed. MOSCEM was used as the multi-objective optimization method, and parameter calibration and hydrologic model optimization were performed on 5 parameters related to snowmelt and 13 parameters related to runoff. Data from 2004 to 2005 were used to optimize the model, which was then verified using data from 2001 to 2004. By optimizing both the amount of snow and the amount of runoff, the RMSE was reduced by 7% to 40% relative to the initial-solution simulation at three SNOTEL points, and by about 40% at the USGS streamflow observation point.
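
The multi-objective step can be illustrated in miniature: candidate parameter sets are scored on two error objectives (snow RMSE and runoff RMSE here) and only the non-dominated (Pareto) set is retained, which is the selection principle behind MOSCEM-style calibration. The toy error surfaces are illustrative assumptions, not HL-RDHM.

```python
import random

random.seed(3)

def objectives(p):
    a, b = p
    snow_rmse = (a - 0.3) ** 2 + 0.1 * (b - 0.7) ** 2    # toy error surfaces
    runoff_rmse = 0.1 * (a - 0.6) ** 2 + (b - 0.4) ** 2  # that trade off
    return snow_rmse, runoff_rmse

def dominates(f, g):
    """True if f is at least as good as g everywhere and better somewhere."""
    return all(x <= y for x, y in zip(f, g)) and any(x < y for x, y in zip(f, g))

candidates = [(random.random(), random.random()) for _ in range(500)]
scored = [(p, objectives(p)) for p in candidates]
pareto = [(p, f) for p, f in scored
          if not any(dominates(g, f) for _, g in scored)]
print(len(pareto), "non-dominated parameter sets out of", len(scored))
```

Because the two objectives have different optima, no single parameter set minimizes both; the calibration therefore produces a front of compromise solutions rather than one answer.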

A Study of Feedrate Optimization for Tolerance Error of NC Machining (NC가공에서 허용오차를 고려한 가공속도 최적화에 관한 연구)

  • Lee, Hee-Seung;Lee, Cheol-Soo;Kim, Jong-Min;Heo, Eun-Young
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.22 no.5 / pp.852-858 / 2013
  • In numerical control (NC) machining, machining errors generally occur for a variety of reasons. When there is a change of direction in the NC code, the characteristics of the automatic acceleration/deceleration function cause the acceleration and deceleration zones of each axis to overlap, which in turn shifts the actual tool path. Many studies have addressed calibration of the corner error caused by automatic acceleration and deceleration in NC machining. This paper gives a geometric interpretation of the shape and processing characteristics of the operating NC device, and then describes a way to determine a feedrate that achieves the desired tolerance by using linear and parabolic acceleration profiles. Experiments to validate the derived equations were conducted on a three-axis NC machine. The results show that the machining errors were smaller than the machine resolution, and clearly demonstrate that an NC machine with the developed system can successfully predict machining errors induced by a change of direction.
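
The feedrate-versus-tolerance trade-off at a change of direction can be sketched numerically: each axis is modeled as tracking its commanded velocity through a first-order lag (a crude stand-in for the machine's automatic acceleration/deceleration), the resulting corner undercut is measured by simulation, and the largest feedrate whose corner error stays within the tolerance is found by bisection. The lag model and all constants are illustrative assumptions, not the paper's linear/parabolic profiles.

```python
import math

def corner_error(feedrate, tau=0.02, dt=1e-4, seg=5.0):
    """Max deviation from the corner of an L-shaped commanded path."""
    x = y = vx = vy = 0.0
    t, t_corner = 0.0, seg / feedrate
    min_dist = float("inf")
    while t < t_corner + 8 * tau:
        cvx = feedrate if t < t_corner else 0.0   # commanded axis velocities
        cvy = 0.0 if t < t_corner else feedrate
        vx += (cvx - vx) / tau * dt               # first-order accel/decel lag
        vy += (cvy - vy) / tau * dt
        x += vx * dt
        y += vy * dt
        min_dist = min(min_dist, math.hypot(seg - x, y))
        t += dt
    return min_dist                               # how far the path cuts the corner

def max_feedrate(tol, lo=1.0, hi=500.0):
    """Bisection on the (monotone) corner error to meet the tolerance."""
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if corner_error(mid) <= tol:
            lo = mid
        else:
            hi = mid
    return lo

tol = 0.05                                        # allowed corner error
f = max_feedrate(tol)
print("max feedrate within tolerance:", round(f, 2))
```

The corner undercut scales with feedrate times the accel/decel time constant, so tightening the tolerance forces a proportionally lower feedrate at direction changes, which is the trade-off the paper optimizes analytically.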

Modeling of Co(II) adsorption by artificial bee colony and genetic algorithm

  • Ozturk, Nurcan;Senturk, Hasan Basri;Gundogdu, Ali;Duran, Celal
    • Membrane and Water Treatment / v.9 no.5 / pp.363-371 / 2018
  • In this work, we investigated the usability of the artificial bee colony (ABC) and genetic algorithm (GA) methods in modeling the adsorption of Co(II) onto drinking water treatment sludge (DWTS). DWTS, obtained as an inevitable byproduct of the drinking water treatment stages, was used as an adsorbent without any physical or chemical pre-treatment in the adsorption experiments. First, DWTS was characterized using various analytical procedures such as elemental, FT-IR, SEM-EDS, XRD, XRF, and TGA/DTA analysis. Then, adsorption experiments were carried out in a batch system, and the Co(II) removal potential of DWTS was modeled via the ABC and GA methods, considering the effects of certain experimental parameters (initial pH, contact time, initial Co(II) concentration, DWTS dosage) as the input parameters. The accuracy of the ABC and GA methods was determined by applying them to four different functional forms: quadratic, exponential, linear, and power. Several statistical indices (sum of squared errors, root mean square error, mean absolute error, average relative error, and determination coefficient) were used to evaluate the performance of these models. The ABC and GA methods with the quadratic form obtained the best predictions. As a result, ABC and GA were shown to be usable for optimizing the regression-function coefficients in modeling adsorption experiments.
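
The regression-coefficient search can be sketched with a mutation-only GA fitting a quadratic form to synthetic one-variable data; the real model has four inputs and laboratory measurements, so everything below (target coefficients, GA settings) is an illustrative assumption.

```python
import random

random.seed(4)
xs = [i / 10 for i in range(21)]
ys = [2.0 + 1.5 * x - 0.8 * x * x for x in xs]   # synthetic "measurements"

def sse(c):
    """Sum of squared errors of the quadratic form c0 + c1*x + c2*x^2."""
    return sum((c[0] + c[1] * x + c[2] * x * x - y) ** 2
               for x, y in zip(xs, ys))

pop = [[random.uniform(-3, 3) for _ in range(3)] for _ in range(30)]
for gen in range(300):
    sigma = 0.2 * 0.985 ** gen                   # annealed mutation size
    pop.sort(key=sse)
    survivors = pop[:10]                         # elitist selection
    pop = survivors + [[c + random.gauss(0, sigma)
                        for c in random.choice(survivors)]
                       for _ in range(20)]
best = min(pop, key=sse)
print("fitted coefficients:", [round(c, 3) for c in best])
```

Swapping `sse` for any of the other statistical indices in the abstract (RMSE, MAE, average relative error) changes only the fitness function, not the search loop.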

A Cross-Layer Unequal Error Protection Scheme for Prioritized H.264 Video using RCPC Codes and Hierarchical QAM

  • Chung, Wei-Ho;Kumar, Sunil;Paluri, Seethal;Nagaraj, Santosh;Annamalai, Annamalai Jr.;Matyjas, John D.
    • Journal of Information Processing Systems / v.9 no.1 / pp.53-68 / 2013
  • We investigate rate-compatible punctured convolutional (RCPC) codes concatenated with hierarchical QAM to design a cross-layer unequal error protection scheme for H.264 coded sequences. We first divide the H.264 encoded video slices into three priority classes based on their relative importance. We then investigate the system constraints and propose an optimization formulation to compute the optimal parameters of the proposed system for given source significance information. An upper bound on the significance-weighted bit error rate in the proposed system is derived as a function of the system parameters, including the code rate and the geometry of the constellation. A design example for H.264 video communications is given, and a 3.5-4 dB PSNR improvement over existing RCPC-based techniques on AWGN wireless channels is shown through simulations.
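
The priority-to-code-rate allocation at the heart of such a scheme can be sketched as a small search: each of the three priority classes is assigned one of the available RCPC code rates so that the significance-weighted BER is minimized under a total bandwidth budget. The BER table, significance weights, and budget are made-up illustrative numbers, not values from the paper.

```python
from itertools import product

rates = [1/2, 2/3, 4/5]                   # candidate RCPC code rates
ber = {1/2: 1e-6, 2/3: 1e-4, 4/5: 3e-3}   # assumed decoded BER per rate
weights = [0.90, 0.09, 0.01]              # significance of classes A, B, C
budget = 4.8                              # max total bandwidth expansion

best_assign, best_wber = None, float("inf")
for assign in product(rates, repeat=3):
    expansion = sum(1 / r for r in assign)  # lower code rate costs bandwidth
    if expansion > budget:
        continue
    wber = sum(w * ber[r] for w, r in zip(weights, assign))
    if wber < best_wber:
        best_assign, best_wber = assign, wber
print("rates (A, B, C):", best_assign, "weighted BER:", best_wber)
```

With these numbers the optimizer gives the strongest code to the most significant class and the weakest to the least significant one, which is the unequal-protection behavior the paper formalizes (jointly with the hierarchical QAM constellation geometry).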