• Title/Abstract/Keyword: training optimization

Search results: 416 (processing time: 0.02 s)

Fast Training of Structured SVM Using Fixed-Threshold Sequential Minimal Optimization

  • Lee, Chang-Ki;Jang, Myung-Gil
    • ETRI Journal
    • /
    • Vol. 31, No. 2
    • /
    • pp.121-128
    • /
    • 2009
  • In this paper, we describe a fixed-threshold sequential minimal optimization (FSMO) algorithm for structured SVM problems. FSMO is conceptually simple, easy to implement, and faster than the standard support vector machine (SVM) training algorithms for structured SVM problems. Because FSMO uses the fact that the formulation of structured SVM has no bias (that is, the threshold b is fixed at zero), it breaks the quadratic programming (QP) problem of structured SVM down into a series of smallest QP problems, each involving only one variable. Because each QP sub-problem involves only one variable, no subset selection is needed. On various test sets, FSMO is as accurate as an existing structured SVM implementation (SVM-Struct) but is much faster on large data sets. The training time of FSMO empirically scales between O(n) and O($n^{1.2}$), while SVM-Struct scales between O($n^{1.5}$) and O($n^{1.8}$).
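
The single-variable update that FSMO relies on can be sketched for the simpler case of a binary SVM trained without a bias term (b fixed at zero). The toy data, constants, and fixed epoch count below are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch: because b = 0, each dual variable alpha_i can be
# optimized analytically on its own (a Newton step on the dual, clipped
# to the box [0, C]) -- no two-variable subset selection is required.

def linear_kernel(a, b):
    return sum(x * y for x, y in zip(a, b))

def fsmo_train(X, y, C=1.0, epochs=100):
    n = len(X)
    alpha = [0.0] * n
    K = [[linear_kernel(X[i], X[j]) for j in range(n)] for i in range(n)]
    for _ in range(epochs):
        for i in range(n):
            # decision value f(x_i) with the threshold b fixed at zero
            f_i = sum(alpha[j] * y[j] * K[j][i] for j in range(n))
            # dW/dalpha_i = 1 - y_i f(x_i); second derivative is -K_ii,
            # so the one-variable optimum is a clipped Newton step
            if K[i][i] > 0:
                alpha[i] = min(C, max(0.0, alpha[i] + (1 - y[i] * f_i) / K[i][i]))
    # recover the primal weight vector (valid for the linear kernel)
    d = len(X[0])
    return [sum(alpha[i] * y[i] * X[i][k] for i in range(n)) for k in range(d)]

def predict(w, x):
    # no bias term: the separating hyperplane passes through the origin
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
```

Dropping the bias removes the equality constraint that normally couples the dual variables, which is exactly why the sub-problems decouple.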


Optimization Strategies for Federated Learning Using WASM on Device and Edge Cloud (WASM을 활용한 디바이스 및 엣지 클라우드 기반 Federated Learning의 최적화 방안)

  • Jong-Seok Choi
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • Vol. 17, No. 4
    • /
    • pp.213-220
    • /
    • 2024
  • This paper proposes an optimization strategy for performing Federated Learning between devices and edge clouds using WebAssembly (WASM). The proposed strategy aims to maximize efficiency by conducting partial training on devices and the remaining training on edge clouds. Specifically, it mathematically describes and evaluates methods to optimize data transfer between GPU memory segments and the overlapping of computational tasks to reduce overall training time and improve GPU utilization. Through various experimental scenarios, we confirmed that asynchronous data transfer and task overlap significantly reduce training time, enhance GPU utilization, and improve model accuracy. In scenarios where all optimization techniques were applied, training time was reduced by 47%, GPU utilization improved to 91.2%, and model accuracy increased to 89.5%. These results demonstrate that asynchronous data transfer and task overlap effectively reduce GPU idle time and alleviate bottlenecks. This study is expected to contribute to the performance optimization of Federated Learning systems in the future.
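
The benefit of overlapping data transfer with computation can be illustrated by a toy two-stage pipeline: while one batch is being trained on, the next is already in flight. The thread layout and the simulated per-batch times below are assumptions for illustration only, not the paper's WASM/edge setup:

```python
# Sketch of transfer/compute overlap: a producer thread "transfers"
# batches while the consumer "trains" on the previous one, so transfer
# time is hidden behind compute time instead of adding to it.
import queue
import threading
import time

TRANSFER_S = 0.02   # simulated device -> edge transfer time per batch
COMPUTE_S = 0.03    # simulated edge-side training time per batch

def sequential(batches):
    start = time.perf_counter()
    for _ in range(batches):
        time.sleep(TRANSFER_S)   # transfer, then compute, strictly in order
        time.sleep(COMPUTE_S)
    return time.perf_counter() - start

def overlapped(batches):
    q = queue.Queue(maxsize=1)   # one batch may be buffered ahead

    def producer():              # device side: keep sending batches
        for i in range(batches):
            time.sleep(TRANSFER_S)
            q.put(i)
        q.put(None)              # sentinel: no more batches

    start = time.perf_counter()
    t = threading.Thread(target=producer)
    t.start()
    while q.get() is not None:   # edge side: train as soon as a batch lands
        time.sleep(COMPUTE_S)
    t.join()
    return time.perf_counter() - start
```

With these toy numbers the overlapped pipeline approaches one transfer plus N compute intervals, instead of N transfers plus N computes, which is the idle-time reduction the abstract describes.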

Learning an Artificial Neural Network Using Dynamic Particle Swarm Optimization-Backpropagation: Empirical Evaluation and Comparison

  • Devi, Swagatika;Jagadev, Alok Kumar;Patnaik, Srikanta
    • Journal of information and communication convergence engineering
    • /
    • Vol. 13, No. 2
    • /
    • pp.123-131
    • /
    • 2015
  • Training neural networks is a complex task of great importance in the field of supervised learning. In the training process, a set of input-output patterns is repeatedly presented to an artificial neural network (ANN), and the weights of all the interconnections between neurons are adjusted until the specified input yields the desired output. In this paper, a new hybrid algorithm is proposed for global optimization of connection weights in an ANN. Dynamic swarms are shown to converge rapidly during the initial stages of a global search, but around the global optimum, the search process becomes very slow. In contrast, the gradient descent method achieves faster convergence around the global optimum, and at the same time, its convergence accuracy can be relatively high. Therefore, the proposed hybrid algorithm, referred to as the DPSO-BP algorithm, combines the dynamic particle swarm optimization (DPSO) algorithm with the backpropagation (BP) algorithm to train the weights of an ANN. In this paper, we intend to show the superiority (time performance and quality of solution) of the proposed hybrid algorithm (DPSO-BP) over other more standard algorithms in neural network training. The algorithms are compared on two different datasets through simulation.
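
A rough sketch of the hybrid scheme on a toy one-input network: a PSO stage performs the global search over the weights, and a gradient stage then fine-tunes around the best particle. The network size, target function, PSO constants, and the use of numerical gradients as a stand-in for backpropagation are all illustrative assumptions, not the paper's DPSO-BP:

```python
# PSO-then-gradient hybrid on a tiny 1-H-1 tanh network fitting y = x^2.
import math
import random

random.seed(0)

H = 3                                   # hidden units
DATA = [(x / 5.0, (x / 5.0) ** 2) for x in range(-5, 6)]
DIM = 3 * H + 1                         # w1(H), b1(H), w2(H), b2

def net_out(w, x):
    h = [math.tanh(w[i] * x + w[H + i]) for i in range(H)]
    return sum(w[2 * H + i] * h[i] for i in range(H)) + w[3 * H]

def mse(w):
    return sum((net_out(w, x) - t) ** 2 for x, t in DATA) / len(DATA)

def pso(iters=60, particles=20):
    # standard global-best PSO with inertia 0.7 and c1 = c2 = 1.5
    pos = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(particles)]
    vel = [[0.0] * DIM for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pcost = [mse(p) for p in pos]
    g = min(range(particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(DIM):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = mse(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest

def finetune(w, lr=0.05, steps=300, eps=1e-5):
    # central-difference gradient descent stands in for backpropagation
    for _ in range(steps):
        grad = []
        for d in range(DIM):
            w[d] += eps; up = mse(w)
            w[d] -= 2 * eps; down = mse(w)
            w[d] += eps
            grad.append((up - down) / (2 * eps))
        for d in range(DIM):
            w[d] -= lr * grad[d]
    return w

w = finetune(pso())   # global search first, local refinement second
```

The division of labor mirrors the abstract: the swarm supplies a good basin, and the gradient stage supplies the fast, accurate final convergence.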

Application of Ant Colony Optimization and Particle Swarm Optimization for Neural Network Model of Machining Process (절삭가공의 Neural Network 모델을 위한 ACO 및 PSO의 응용)

  • Oh, Soo-Cheol
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • Vol. 18, No. 9
    • /
    • pp.36-43
    • /
    • 2019
  • Turning, a main machining process, is widespread in metal cutting industries. Many researchers have investigated the effects of process parameters on the machining process. In the turning process, input variables including cutting speed, feed, and depth of cut are generally used; surface roughness and electric current consumption are used as output variables in this study. We construct a simulation model for the turning process using a neural network, which predicts the output values from the input values. In a neural network, obtaining the appropriate set of weights, a process called training, is crucial. In general, backpropagation (BP) is widely used for training. In this study, techniques such as ant colony optimization (ACO) and particle swarm optimization (PSO), as well as BP, were used to obtain the weights of the neural network. In particular, two combined techniques, ACO_BP and PSO_BP, were used to train the neural network, and their performances are compared.
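
For the ACO side, here is a sketch of ant colony optimization adapted to continuous weights (archive-based Gaussian sampling, in the spirit of the ACO-R variant), fitting a deliberately tiny linear model that maps (speed, feed, depth) to surface roughness. The synthetic data, the linear stand-in for the network, and all constants are illustrative assumptions, not the paper's setup:

```python
# Continuous ACO sketch: keep an archive of good weight vectors, have each
# "ant" sample a Gaussian around one of the best entries, and shrink the
# sampling width over time.
import random

random.seed(1)

# synthetic "turning" data: roughness rises with feed, falls with speed
CUT_DATA = [((s, f, d), 0.5 * f - 0.2 * s + 0.1 * d)
            for s in (0.2, 0.6, 1.0) for f in (0.2, 0.6, 1.0) for d in (0.2, 0.6)]

DIM = 3  # weights of a single linear neuron, kept small for brevity

def cost(w):
    return sum((sum(wi * xi for wi, xi in zip(w, x)) - t) ** 2
               for x, t in CUT_DATA) / len(CUT_DATA)

def aco_continuous(ants=10, archive=5, iters=80, sigma_decay=0.9):
    # initial archive: best of a batch of random weight vectors
    sols = sorted(([random.uniform(-1, 1) for _ in range(DIM)]
                   for _ in range(archive * 2)), key=cost)[:archive]
    sigma = 0.5
    for _ in range(iters):
        for _ in range(ants):
            guide = random.choice(sols[:2])        # bias toward the best entries
            cand = [g + random.gauss(0, sigma) for g in guide]
            sols = sorted(sols + [cand], key=cost)[:archive]  # elitist archive
        sigma *= sigma_decay                       # narrow the search over time
    return sols[0]

w = aco_continuous()
```

Since the synthetic target is itself linear in the inputs, the archive should converge close to the generating weights, making the convergence behavior easy to inspect.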

Applications of Soft Computing Techniques in Response Surface Based Approximate Optimization

  • Lee, Jongsoo;Kim, Seungjin
    • Journal of Mechanical Science and Technology
    • /
    • Vol. 15, No. 8
    • /
    • pp.1132-1142
    • /
    • 2001
  • The paper describes the construction of global function approximation models for use in design optimization via global search techniques such as genetic algorithms. Two approximation methods, referred to as evolutionary fuzzy modeling (EFM) and neuro-fuzzy modeling (NFM), are implemented in the context of global approximate optimization. EFM and NFM are based on soft computing paradigms that combine fuzzy systems, neural networks, and evolutionary computing techniques. Such approximation methods are promising in cases where training data are insufficient or uncertain information is included in the design process. In both methods, the fuzzy inference system is the central component for identifying the input/output relationship. The paper introduces the general procedures, including fuzzy rule generation, membership function selection, and the inference process for EFM and NFM, and presents their generalization capabilities in terms of the number of fuzzy rules and training data, with application to a three-bar truss optimization.
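
The fuzzy-inference core shared by EFM and NFM can be sketched as a zero-order Takagi-Sugeno system: Gaussian membership functions cover the input range, and the model output is the firing-strength-weighted average of per-rule constants. The rule centers, widths, target function, and the simple weighted-average fitting rule (standing in for evolutionary or neural tuning of the parameters) are illustrative assumptions:

```python
# Zero-order Takagi-Sugeno sketch: one rule per center, Gaussian
# memberships, output = normalized weighted average of rule constants.
import math

CENTERS = [0.0, 0.25, 0.5, 0.75, 1.0]   # one fuzzy rule per center
WIDTH = 0.15                             # shared Gaussian width

def membership(x, c):
    return math.exp(-((x - c) / WIDTH) ** 2)

def fit_consequents(samples):
    # each rule constant = membership-weighted mean of nearby targets,
    # a cheap stand-in for tuning consequents by evolutionary search
    consts = []
    for c in CENTERS:
        wsum = sum(membership(x, c) for x, _ in samples)
        consts.append(sum(membership(x, c) * t for x, t in samples) / wsum)
    return consts

def infer(x, consts):
    mu = [membership(x, c) for c in CENTERS]
    return sum(m * q for m, q in zip(mu, consts)) / sum(mu)

# surrogate for an "expensive" response, here simply f(x) = x^2
train = [(i / 10.0, (i / 10.0) ** 2) for i in range(11)]
consts = fit_consequents(train)
```

With only five rules and eleven samples, the surrogate already tracks the response smoothly, which is the generalization-versus-rule-count trade-off the paper studies.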


Topology optimization of steel plate shear walls in the moment frames

  • Bagherinejad, Mohammad Hadi;Haghollahi, Abbas
    • Steel and Composite Structures
    • /
    • Vol. 29, No. 6
    • /
    • pp.771-783
    • /
    • 2018
  • In this paper, topology optimization (TO) is applied to find a new configuration for the perforated steel plate shear wall (PSPSW), with maximization of the reaction forces as the objective function. An infill steel plate based on an experimental model is introduced for the TO. The TO is conducted using sensitivity analysis, the method of moving asymptotes, and the SIMP method, with a nonlinear analysis (geometry and material) that accounts for buckling. The final area of the optimized plate is equal to 50% of the infill plate. Three plate thicknesses and three length-to-height ratios are defined and their effects on the TO are investigated; the results indicate that plate thickness has no significant impact on the optimization results. The nonlinear behavior of the optimized plates under cyclic loading is studied, and their strength, energy dissipation, and fracture tendency are investigated. For comparison with the optimized plate, four steel plates of equal volume are also introduced: the infill plate, a plate with a central circle, and two types of multi-circle plates.
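
The SIMP method mentioned above interpolates element stiffness with a penalized density. In its standard form (stated here from the general topology-optimization literature, not reproduced from the paper), with the 50% volume fraction used in this study:

```latex
E_e(\rho_e) = \rho_e^{\,p}\, E_0, \qquad
0 < \rho_{\min} \le \rho_e \le 1, \qquad
\sum_e \rho_e \, v_e \le 0.5\, V_0
```

where $E_0$ is the solid-material stiffness, $\rho_e$ the density variable of element $e$, $p$ the penalization exponent (commonly $p = 3$), and $v_e$, $V_0$ the element and total volumes. The penalization drives intermediate densities toward 0 or 1, producing the solid/void (perforated) layouts compared in the study.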

Optimal Design of Nonlinear Structural Systems via EFM Based Approximations (진화퍼지 근사화모델에 의한 비선형 구조시스템의 최적설계)

  • 이종수;김승진
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • Proceedings of the 2000 Spring Conference of the Korea Fuzzy Logic and Intelligent Systems Society
    • /
    • pp.122-125
    • /
    • 2000
  • The paper describes the adaptation of evolutionary fuzzy modeling (EFM) in developing global function approximation tools for use in genetic-algorithm-based optimization of nonlinear structural systems. EFM is an optimization process that determines the fuzzy membership parameters for constructing a global approximation model in cases where training data are insufficient or uncertain information is included in the design process. The paper presents the performance of EFM in terms of the number of fuzzy rules and training data, and then explores the EFM-based sizing of an automotive component for passenger protection.


PSO based tuning of PID controller for coupled tank system

  • Lee, Yun-Hyung;Ryu, Ki-Tak;Hur, Jae-Jung;So, Myung-Ok
    • Journal of Advanced Marine Engineering and Technology
    • /
    • Vol. 38, No. 10
    • /
    • pp.1297-1302
    • /
    • 2014
  • This paper presents modern optimization methods for determining the optimal parameters of a proportional-integral-derivative (PID) controller for coupled tank systems. The main objective is to obtain a fast and stable control system by tuning the PID controller with the particle swarm optimization (PSO) algorithm. The results are compared in terms of transient characteristics in the time domain. The results obtained with the PSO algorithm are also compared to conventional PID tuning methods such as the Ziegler-Nichols method, the Cohen-Coon method, and internal model control (IMC). Simulations carried out in MATLAB show that tuning the PID controller using the PSO algorithm yields a fast and stable control system with low overshoot and short rise and settling times.
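
A compact sketch of the tuning loop: PSO searches over (Kp, Ki, Kd) to minimize an integral-squared-error cost on a simulated step response. The plant below, two cascaded first-order lags standing in for the coupled tanks, along with the time step, gain bounds, and PSO constants, are illustrative assumptions, not the paper's model:

```python
# PSO-based PID tuning on a toy two-lag plant (Euler-discretized).
import random

random.seed(2)

DT, STEPS = 0.05, 200   # 10 s of simulated step response

def step_cost(kp, ki, kd):
    x1 = x2 = 0.0               # two cascaded first-order "tank" states
    integ = prev_err = 0.0
    cost = 0.0
    for _ in range(STEPS):
        err = 1.0 - x2          # unit step reference on the second tank
        integ += err * DT
        deriv = (err - prev_err) / DT
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        x1 += DT * (u - x1)     # tank 1 dynamics
        x2 += DT * (x1 - x2)    # tank 2 dynamics
        cost += err * err * DT  # ISE criterion
        if abs(x2) > 1e6:       # guard against divergent candidates
            return 1e9
    return cost

def pso_tune(particles=15, iters=40):
    pos = [[random.uniform(0, 5) for _ in range(3)] for _ in range(particles)]
    vel = [[0.0] * 3 for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pcost = [step_cost(*p) for p in pos]
    g = min(range(particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(3):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(5.0, max(0.0, pos[i][d] + vel[i][d]))
            c = step_cost(*pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

best_gains, best_cost = pso_tune()
```

The same loop accepts any scalar cost, so overshoot or settling-time penalties can be folded into `step_cost` without touching the optimizer.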

Optimization Numeral Recognition Using Wavelet Feature Based Neural Network (웨이브렛 특징 추출을 이용한 숫자인식의 최적화)

  • 황성욱;임인빈;박태윤;최재호
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • Proceedings of the 2003 Summer Conference of the Korea Institute of Signal Processing and Systems
    • /
    • pp.94-97
    • /
    • 2003
  • In this paper, we propose an optimized recognition training scheme for an MLP (multilayer perceptron) neural network based on the wavelet transform, and apply it to the recognition of numeral images with added noise. Because the wavelet transform preserves the most important information of the original image, the size of the input vector, and with it the number of network nodes and the learning convergence time, can be reduced. We examine how the recognition rate changes as increasing levels of noise are added to the training vectors, using the original images together with images containing 0, 10, 20, 30, 40, and 50 dB of added noise. For test images with 30 to 50 dB of added noise, the recognition rate differs little between training on the original images and training on the noisy images; for test images with 0 to 20 dB of added noise, however, training on the images with 0 to 50 dB of added noise improves the numeral recognition rate by 9 percent.
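
The two ideas in the abstract can be sketched directly: a one-level Haar wavelet transform halves the input vector (shrinking the network), and its approximation coefficients change little under moderate noise (which is what makes noise-augmented training effective). The 1-D signal, noise level, and use of the Haar basis are illustrative assumptions:

```python
# One-level Haar DWT: 32 raw samples -> 16 approximation + 16 detail
# coefficients; an MLP would take the 16 approximation coefficients as
# input instead of the raw 32 samples.
import math
import random

random.seed(3)

def haar_level1(signal):
    # pairwise sums/differences scaled by 1/sqrt(2); length must be even
    approx = [(signal[2 * i] + signal[2 * i + 1]) / math.sqrt(2)
              for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2)
              for i in range(len(signal) // 2)]
    return approx, detail

def add_noise(signal, scale):
    return [s + random.gauss(0, scale) for s in signal]

clean = [math.sin(2 * math.pi * i / 32) for i in range(32)]
noisy = add_noise(clean, 0.1)

a_clean, _ = haar_level1(clean)
a_noisy, _ = haar_level1(noisy)
```

Comparing `a_clean` and `a_noisy` shows the approximation coefficients staying close to the clean ones, so a network trained on noise-augmented coefficient vectors sees inputs that drift only mildly from the clean case.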


Predictive Optimization Adjusted With Pseudo Data From A Missing Data Imputation Technique (결측 데이터 보정법에 의한 의사 데이터로 조정된 예측 최적화 방법)

  • Kim, Jeong-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • Vol. 20, No. 2
    • /
    • pp.200-209
    • /
    • 2019
  • When forecasting future values, a model estimated by minimizing training errors can yield test errors higher than the training errors. This is the over-fitting problem, caused by an increase in model complexity when the model focuses only on a given dataset. Some regularization and resampling methods have been introduced to alleviate this problem and reduce test errors, but they are designed for use with only the given dataset. In this paper, we propose a new optimization approach that reduces test errors by transforming a test error minimization problem into a training error minimization problem. To carry out this transformation, we need additional data beyond the given dataset, termed pseudo data. To make proper use of the pseudo data, we used three types of missing data imputation techniques. As an optimization tool, we chose the least squares method and combined it with an extra pseudo data instance. Furthermore, we present numerical results supporting our proposed approach, which achieved lower test errors than the ordinary least squares method.
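
A minimal sketch of the core idea: build a pseudo observation with a missing-data imputation rule (plain mean imputation here, the simplest stand-in for the paper's three techniques) and refit ordinary least squares on the extended dataset. The toy data and single-pseudo-point setup are illustrative assumptions:

```python
# OLS on a toy dataset, then OLS again after appending one pseudo
# observation whose target is mean-imputed.

def ols(points):
    # simple-linear-regression normal equations: slope and intercept
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    b = sxy / sxx
    return b, my - b * mx

train = [(1, 1.2), (2, 1.9), (3, 3.2), (4, 3.8)]

# pseudo instance: treat the next x as an observation whose y is missing,
# and impute y with the mean of the observed targets
x_new = 5
y_imputed = sum(y for _, y in train) / len(train)
slope, intercept = ols(train + [(x_new, y_imputed)])
```

The pseudo point pulls the fitted slope toward a flatter line, acting here like a mild regularizer, which is the mechanism by which the extended training problem can stand in for test-error minimization.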