• Title/Summary/Keyword: Optimal value function


Calibration of a Network Link Travel Cost Function with the Harmony Search Algorithm (화음탐색법을 이용한 교통망 링크 통행비용함수 정산기법 개발)

  • Kim, Hyun Myung;Hwang, Yong Hwan;Yang, In Chul
    • Journal of Korean Society of Transportation
    • /
    • v.30 no.5
    • /
    • pp.71-82
    • /
    • 2012
  • Some previous studies adopted statistical methods based on observed traffic volumes and travel times to estimate the parameters. Others tried to find an optimal set of parameters that minimizes the gap between observed and estimated traffic volumes using, for instance, an optimization model combined with a traffic assignment model. The latter approach is frequently used in large-scale networks because of its capability to find a set of optimal parameter values, but its appropriateness has never been demonstrated. Thus, we developed a methodology to estimate the parameter values of the BPR (Bureau of Public Roads) function using the Harmony Search (HS) method. HS, developed in the early 2000s, is a global search method reported to be superior to other global search methods (e.g., the Genetic Algorithm or Tabu Search), but it has rarely been adopted in transportation research. The HS-based transportation network calibration algorithm developed in this study is tested on a grid network, and its outcomes are compared to those from the incremental method (Incre) and the Golden Section (GS) method. It is found that the HS algorithm outperforms Incre and GS in replicating the given observed link traffic counts, and it is also pointed out that the popular optimal network calibration techniques based on an objective function of traffic volume replication lack the capability to find an appropriate free-flow travel speed and $\alpha$ value.
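
The calibration loop this abstract describes can be illustrated with a minimal Harmony Search sketch, fitting the BPR function $t = t_0(1 + \alpha (v/c)^\beta)$ to observations. The single-link toy data, parameter bounds, and HS settings (HMS, HMCR, PAR, bandwidth) below are assumptions for illustration only; the paper calibrates a grid network through a traffic assignment model rather than a direct curve fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observations: volume/capacity ratios and measured travel times on one
# link (hypothetical data; the paper works on a grid network).
t0_true, alpha_true, beta_true = 10.0, 0.15, 4.0
vc = rng.uniform(0.2, 1.2, size=30)
t_obs = t0_true * (1.0 + alpha_true * vc**beta_true) + rng.normal(0, 0.1, 30)

def bpr(t0, alpha, beta, vc):
    """BPR link travel time: t = t0 * (1 + alpha * (v/c)**beta)."""
    return t0 * (1.0 + alpha * vc**beta)

def cost(x):
    """Sum of squared gaps between observed and modeled travel times."""
    t0, alpha, beta = x
    return np.sum((t_obs - bpr(t0, alpha, beta, vc)) ** 2)

# Harmony Search parameters (assumed values)
HMS, HMCR, PAR, BW, ITERS = 20, 0.9, 0.3, 0.05, 5000
lo = np.array([1.0, 0.01, 1.0])   # lower bounds for (t0, alpha, beta)
hi = np.array([20.0, 1.0, 8.0])   # upper bounds

# Initialize harmony memory with random solutions
hm = lo + rng.uniform(size=(HMS, 3)) * (hi - lo)
fit = np.array([cost(x) for x in hm])

for _ in range(ITERS):
    new = np.empty(3)
    for j in range(3):
        if rng.uniform() < HMCR:               # memory consideration
            new[j] = hm[rng.integers(HMS), j]
            if rng.uniform() < PAR:            # pitch adjustment
                new[j] += BW * (hi[j] - lo[j]) * rng.uniform(-1, 1)
        else:                                  # random selection
            new[j] = lo[j] + rng.uniform() * (hi[j] - lo[j])
    new = np.clip(new, lo, hi)
    f = cost(new)
    worst = np.argmax(fit)
    if f < fit[worst]:                         # replace the worst harmony
        hm[worst], fit[worst] = new, f

best = hm[np.argmin(fit)]
print("calibrated (t0, alpha, beta):", np.round(best, 3))
```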

Clinical and Imaging Parameters Associated With Impaired Kidney Function in Patients With Acute Decompensated Heart Failure With Reduced Ejection Fraction

  • In-Jeong Cho;Sang-Eun Lee;Dong-Hyeok Kim;Wook Bum Pyun
    • Journal of Cardiovascular Imaging
    • /
    • v.31 no.4
    • /
    • pp.169-177
    • /
    • 2023
  • BACKGROUND: Acute worsening of cardiac function frequently leads to kidney dysfunction. This study aimed to identify clinical and imaging parameters associated with impaired kidney function in patients with acute decompensated heart failure with reduced ejection fraction (HFrEF). METHODS: Data from 131 patients hospitalized with acute decompensated HFrEF (left ventricular ejection fraction < 40%) were analyzed. Patients were divided into two groups according to the glomerular filtration rate (GFR) at admission: those with preserved kidney function (GFR ≥ 60 mL/min/1.73 m²) and those with reduced kidney function (GFR < 60 mL/min/1.73 m²). Various echocardiographic parameters were assessed, and perirenal fat thickness was measured by computed tomography. RESULTS: There were 71 patients with preserved kidney function and 60 patients with reduced kidney function. Increased age (odds ratio [OR], 1.07; 95% confidence interval [CI], 1.04-1.12; p = 0.005), increased log N-terminal pro b-type natriuretic peptide (OR, 1.74; 95% CI, 1.14-2.66; p = 0.010), and increased perirenal fat thickness (OR, 1.19; 95% CI, 1.10-1.29; p < 0.001) were independently associated with reduced kidney function, even after adjusting for various clinical and echocardiographic parameters. The optimal average perirenal fat thickness cut-off value of > 12 mm had a sensitivity of 55% and a specificity of 83% for predicting kidney dysfunction. CONCLUSIONS: Thick perirenal fat was independently associated with impaired kidney function in patients hospitalized for acute decompensated HFrEF. Measurement of perirenal fat thickness may be a promising imaging marker for detecting HFrEF patients who are more susceptible to kidney dysfunction.

Optimal Design of a Continuous Time Deadbeat Controller (연속시간 유한정정제어기의 최적설계)

  • Kim Seung Youal;Lee Keum Won
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.1 no.2
    • /
    • pp.169-176
    • /
    • 2000
  • The deadbeat property is well established in time-domain digital control system design. In continuous-time systems, however, a deadbeat response is impossible with designs that simply reuse digital design theory, because of ripples between sampling points. Several researchers have therefore suggested delay elements, which are constructed from the concept of the finite Laplace transform. From specifications such as internal model stability, physical realizability, and finite-time settling, the unknown coefficients and poles of an error transfer function containing delay elements can be calculated so as to satisfy these specifications. For application to real systems, a robustness property can be added. In this paper, the error transfer function is specified with one delay element, and a robustness condition is considered additionally. As the robustness criterion, the $H_\infty$ norm of a weighted sensitivity function is used, and the poles of the error transfer function are calculated to minimize this criterion. In this sense, an optimal design of the continuous-time deadbeat controller is obtained.
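
The final optimization step, choosing error transfer function poles to minimize a weighted sensitivity $H_\infty$ norm, can be sketched numerically. The error transfer function and weight below are illustrative stand-ins (the paper's version contains a delay element derived from the finite Laplace transform), and the frequency-grid approximation of the $H_\infty$ norm is an assumption of this sketch.

```python
import numpy as np
from scipy.optimize import minimize_scalar

w = np.logspace(-3, 3, 2000)          # frequency grid (rad/s)
s = 1j * w

def hinf_norm(p):
    """Grid approximation of ||W(s)E(s)||_inf for a candidate pole p > 0.

    E(s) is an illustrative error transfer function (not the paper's
    delay-element version); W(s) is an assumed sensitivity weight that
    emphasizes low frequencies.
    """
    E = s / (s + p) ** 2
    W = (0.5 * s + 1.0) / (s + 0.01)
    return np.max(np.abs(W * E))

res = minimize_scalar(hinf_norm, bounds=(0.1, 50.0), method="bounded")
print(f"optimal pole p = {res.x:.3f}, ||WE||_inf ~ {res.fun:.4f}")
```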


Effect of the Extracorporeal Circulation on Renal Function in Adult Open Heart Patients (개심술시 체외순환이 신장기능에 미치는 영향)

  • Lee, Jae-Won;Seo, Gyeong-Pil
    • Journal of Chest Surgery
    • /
    • v.18 no.4
    • /
    • pp.718-731
    • /
    • 1985
  • Renal dysfunction is a common complication of open-heart surgery, a form of controlled hemorrhagic shock, and successful perioperative management depends on recognition of the risk factors, optimal management of the factors influencing renal function (including cardiopulmonary bypass), and early detection of renal failure. Changes in renal functional parameters including Ccr, Cosm, CH2O, FENa, and RFI were observed prospectively in forty-five patients operated on at the Dept. of Thoracic and Cardiovascular Surgery, S.N.U.H., from April to June, 1985. They were 23 males and 22 females with 35 acquired and 10 congenital heart diseases, and their mean age and body surface area were 38.0±10.3 years [22-63] and 1.55±0.18 m² [1.15-1.92], respectively. The following conclusions were drawn. 1. The Ccr, representative of renal function, improved significantly from 90.2±31.3 ml/min/m² preoperatively to 101.5±36.4 ml/min/m² at postoperative day one [P<0.05], and all patients were classified as postoperative renal functional class I of Abel, which represents adequate renal protection during our cardiopulmonary bypass. 2. The Cosm was significantly elevated at the immediate postperfusion time and remained high at postoperative day one, representing osmotic diuresis at that time, but CH2O showed no significant change in the immediate postperfusion period and decreased significantly at postoperative day one, representing recovery of renal concentrating ability with decreasing urine flow. 3. The absolute values and changing tendencies of FENa and RFI during the perioperative period showed no diagnostic reliability, but those of CH2O appear to reveal future renal function more accurately than Ccr. 4. The depth of hypothermia may protect renal function against the ill effects of prolonged nonpulsatile cardiopulmonary bypass. 5. The depth of hypothermia, a pump time of more than 150 minutes, poor cardiac function, and intraoperative events such as embolism appear to be related to immediate postperfusion renal function. 6. Hemoglobinuria and hemolysis, poor preoperative renal function, a history of cardiac surgery, and massive transfusion associated with bleeding appear not to be related to renal dysfunction.


Performance of a Bayesian Design Compared to Some Optimal Designs for Linear Calibration (선형 캘리브레이션에서 베이지안 실험계획과 기존의 최적실험계획과의 효과비교)

  • 김성철
    • The Korean Journal of Applied Statistics
    • /
    • v.10 no.1
    • /
    • pp.69-84
    • /
    • 1997
  • We consider a linear calibration problem, $y_i = \alpha + \beta (x_i - x_0) + \epsilon_i$, $i = 1, 2, \ldots, n$; $y_f = \alpha + \beta (x_f - x_0) + \epsilon_f$, where we observe $(x_i, y_i)$'s in controlled calibration experiments and later make inference about $x_f$ from a new observation $y_f$. The objective of the calibration design problem is to find the optimal design $x = (x_1, \ldots, x_n)$ that gives the best estimate of $x_f$. We compare Kim (1989)'s Bayesian design, which minimizes the expected value of the posterior variance of $x_f$, with some optimal designs from the literature. Kim suggested the Bayesian optimal design based on an analysis of the characteristics of the expected loss function and numerical results, which show that the mean of the design points must equal the prior mean and that the sum of squares should be as large as possible. The designs to be compared are (1) Buonaccorsi (1986)'s AV optimal design, which minimizes the average asymptotic variance of the classical estimators, (2) the D-optimal and A-optimal designs for the linear regression model, which optimize functions of $M(x) = \sum x_i x_i'$, and (3) Hunter & Lamboy (1981)'s reference design from their paper. In order to compare the designs, which are each optimal in some sense, we consider two criteria: first, the expected posterior variance criterion, and second, a Monte Carlo simulation to obtain HPD intervals and compare their lengths. If the prior mean of $x_f$ is at the center of the finite design interval, then the Bayesian, AV optimal, D-optimal, and A-optimal designs are identical, and they are the equally weighted end-point design. However, if the prior mean is not at the center, they are not expected to be identical. In this case, we demonstrate that the almost Bayesian-optimal design is slightly better than the approximate AV optimal design. We also investigate the effects of the prior variance of the parameters and present a solution for the case in which the number of experiments is odd.
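
The design comparison can be mimicked with a small Monte Carlo sketch. The true parameter values, design interval, and sample sizes below are assumptions; the sketch compares the variance of the classical calibration estimator $\hat{x}_f = x_0 + (y_f - \hat\alpha)/\hat\beta$ under an equally weighted end-point design and a uniform design, not the paper's Bayesian posterior-variance criterion.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, x0, sigma = 0.0, 1.0, 0.0, 0.1   # assumed true values
x_f = 0.3                                      # unknown point to recover
n, reps = 10, 20000

def mc_variance(x):
    """Monte Carlo variance of the classical estimator of x_f for design x."""
    est = np.empty(reps)
    for r in range(reps):
        y = alpha + beta * (x - x0) + rng.normal(0, sigma, n)
        y_f = alpha + beta * (x_f - x0) + rng.normal(0, sigma)
        b, a = np.polyfit(x - x0, y, 1)        # fitted slope, intercept
        est[r] = x0 + (y_f - a) / b
    return est.var()

designs = {
    "equally weighted end-point": np.repeat([-1.0, 1.0], n // 2),
    "uniform over [-1, 1]":       np.linspace(-1.0, 1.0, n),
}
for name, x in designs.items():
    print(f"{name:28s} var(x_hat_f) ~ {mc_variance(x):.5f}")
```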


Imbalanced SVM-Based Anomaly Detection Algorithm for Imbalanced Training Datasets

  • Wang, GuiPing;Yang, JianXi;Li, Ren
    • ETRI Journal
    • /
    • v.39 no.5
    • /
    • pp.621-631
    • /
    • 2017
  • Abnormal samples are usually difficult to obtain in production systems, resulting in imbalanced training sample sets in which the number of positive samples is far less than the number of negative samples. Traditional Support Vector Machine (SVM)-based anomaly detection algorithms perform poorly on highly imbalanced datasets: the learned classification hyperplane skews toward the positive samples, resulting in a high false-negative rate. This article proposes a new imbalanced SVM (termed ImSVM)-based anomaly detection algorithm, which assigns a different weight to each positive support vector in the decision function. ImSVM adjusts the learned classification hyperplane so that the decision function achieves a maximum GMean measure value on the dataset. This problem is converted into an unconstrained optimization problem to search for the optimal weight vector. Experiments are carried out on both Cloud datasets and Knowledge Discovery and Data Mining datasets to evaluate ImSVM, with highly imbalanced training sample sets constructed. The experimental results show that ImSVM outperforms over-sampling techniques and several existing imbalanced SVM-based techniques.
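
A minimal sketch of the underlying idea, not the paper's ImSVM: scikit-learn's SVC accepts per-sample weights, which approximates reweighting the positive class (ImSVM instead weights each positive support vector individually), and the GMean measure is computed from the confusion matrix. The synthetic data, the candidate weight grid, and selecting the weight on the test split are simplifications of this sketch.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

# Synthetic imbalanced data: ~5% positive samples
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05],
                           n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def gmean(y_true, y_pred):
    """GMean = sqrt(sensitivity * specificity)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return np.sqrt(tp / (tp + fn) * tn / (tn + fp))

best_w, best_g = None, -1.0
for w in [1, 2, 5, 10, 20, 50]:           # candidate positive-class weights
    sw = np.where(y_tr == 1, w, 1.0)      # upweight the minority positives
    clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr, sample_weight=sw)
    g = gmean(y_te, clf.predict(X_te))
    if g > best_g:
        best_w, best_g = w, g
print(f"best positive weight {best_w}, GMean = {best_g:.3f}")
```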

Centralized Clustering Routing Based on Improved Sine Cosine Algorithm and Energy Balance in WSNs

  • Xiaoling, Guo;Xinghua, Sun;Ling, Li;Renjie, Wu;Meng, Liu
    • Journal of Information Processing Systems
    • /
    • v.19 no.1
    • /
    • pp.17-32
    • /
    • 2023
  • Centralized hierarchical routing protocols are often used to solve the problems of uneven energy consumption and short network life in wireless sensor networks (WSNs), and clustering and cluster head election have become the focus of WSN research. In this paper, an energy-balanced clustering routing algorithm optimized by the sine cosine algorithm (SCA) is proposed. Firstly, the optimal number of cluster heads per round is determined according to the number of surviving nodes, and the candidate cluster head set is formed by selecting high-energy nodes. Secondly, a random population of a certain scale is constructed to represent a group of cluster head selection schemes, and a fitness function is designed according to inter-cluster distance. Thirdly, the SCA is improved by using a monotone decreasing convex function, and a certain number of iterations are then carried out to select the individuals with the minimum fitness function value. In simulation experiments, the interval from the death of the first node to the death of 80% of the nodes spans only about 30 rounds. The improved algorithm balances energy consumption among nodes and avoids the premature death of some nodes, greatly improving energy utilization and extending the effective life of the whole network.
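
The SCA core that the abstract modifies can be sketched as below. The position update X ← X + r1·sin(r2)·|r3·P − X| (or cos) follows the standard SCA; the convex, monotonically decreasing schedule for r1 stands in for the paper's improvement (its exact function is not given in the abstract), and the sphere fitness is a placeholder for the inter-cluster-distance fitness.

```python
import numpy as np

rng = np.random.default_rng(0)

def sca(fitness, dim, lo, hi, pop=30, iters=200, a=2.0):
    """Sine cosine algorithm with a convex, monotonically decreasing r1."""
    X = lo + rng.uniform(size=(pop, dim)) * (hi - lo)
    fit = np.array([fitness(x) for x in X])
    b = int(np.argmin(fit))
    best, best_fit = X[b].copy(), fit[b]      # destination point P
    for t in range(iters):
        r1 = a * (1.0 - t / iters) ** 2       # assumed convex schedule
        for i in range(pop):
            r2 = rng.uniform(0.0, 2.0 * np.pi, dim)
            r3 = rng.uniform(0.0, 2.0, dim)
            trig = np.where(rng.uniform(size=dim) < 0.5,
                            np.sin(r2), np.cos(r2))
            X[i] = np.clip(X[i] + r1 * trig * np.abs(r3 * best - X[i]),
                           lo, hi)
            fit[i] = fitness(X[i])
            if fit[i] < best_fit:
                best, best_fit = X[i].copy(), fit[i]
    return best, best_fit

sphere = lambda x: float(np.sum(x ** 2))      # placeholder fitness
best, best_fit = sca(sphere, dim=5, lo=-10.0, hi=10.0)
print("best fitness:", round(best_fit, 6))
```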

Development of a Multiobjective Optimization Algorithm Using Data Distribution Characteristics (데이터 분포특성을 이용한 다목적함수 최적화 알고리즘 개발)

  • Hwang, In-Jin;Park, Gyung-Jin
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.34 no.12
    • /
    • pp.1793-1803
    • /
    • 2010
  • The weighting method and goal programming require weighting factors or target values to obtain a Pareto optimal solution. However, it is difficult to define these parameters, and a Pareto solution is not guaranteed when the choice of the parameters is incorrect. Recently, the Mahalanobis Taguchi System (MTS) has been introduced to minimize the Mahalanobis distance (MD). However, the MTS method cannot obtain a Pareto optimal solution. We propose a function called the skewed Mahalanobis distance (SMD) to obtain a Pareto optimal solution while retaining the advantages of the MD. The SMD is a new distance scale that multiplies the skewed value of a design point by the MD. The weighting factors are automatically reflected when the SMD is calculated. The SMD always gives a unique Pareto optimal solution. To verify the efficiency of the SMD, we present two numerical examples and show that the SMD can obtain a unique Pareto optimal solution without any additional information.
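
For reference, the Mahalanobis distance that the SMD builds on can be sketched as below. The skew factor is a placeholder, since the abstract does not define how the skewed value of a design point is computed; the SMD call is therefore illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.normal(size=(200, 2))                    # reference group
mu = ref.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))

def mahalanobis(x):
    """MD(x) = sqrt((x - mu)' Sigma^{-1} (x - mu))."""
    d = x - mu
    return np.sqrt(d @ cov_inv @ d)

def smd(x, skew):
    """Skewed MD: the paper multiplies the skewed value of a design point
    by the MD; 'skew' here stands in for that undefined quantity."""
    return skew * mahalanobis(x)

x = np.array([2.0, -1.0])
print(f"MD = {mahalanobis(x):.3f}, SMD (skew=1.3) = {smd(x, 1.3):.3f}")
```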

Analysis of Helicopter Maneuvering Flight Using the Indirect Method - Part I. Optimal Control Formulation and Numerical Methods (Indirect Method를 이용한 헬리콥터 기동비행 해석 - Part I. 최적제어 문제의 정식화와 수치해법)

  • Kim, Chang-Joo;Yang, Chang-Deok;Kim, Seung-Ho;Hwang, Chang-Jeon
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.36 no.1
    • /
    • pp.22-30
    • /
    • 2008
  • This paper deals with a nonlinear optimal control approach to helicopter maneuver problems using the indirect method. We apply a penalty function to the deviation from a prescribed trajectory to convert the maneuver-tracking problem into an unconstrained optimal control problem. The resultant two-point boundary value problem is solved using the multiple-shooting method. This paper focuses on the effect of the number of shooting nodes and of the initialization method on the numerical solution, in order to determine the minimum number of shooting nodes required for numerical convergence and to provide a method for increasing the convergence radius of the indirect method. The results of this study provide an approach to improving the numerical stability and convergence of the indirect method when solving optimal control problems for an inherently unstable helicopter system.
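
Multiple shooting for a two-point boundary value problem can be illustrated on a much simpler system than the paper's helicopter model. The sketch below uses a scalar LQR example, minimizing $\int_0^1 (x^2 + u^2)/2\,dt$ subject to $\dot{x} = u$, whose necessary conditions are $\dot{x} = -\lambda$, $\dot{\lambda} = -x$ with $x(0) = 1$ and $\lambda(1) = 0$; the number of segments and the flat initial guess are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

K = 4                                    # number of shooting segments
nodes = np.linspace(0.0, 1.0, K + 1)

def rhs(t, z):
    x, lam = z
    return [-lam, -x]                    # state and costate dynamics

def shoot(z0, t0, t1):
    """Integrate (x, lambda) from t0 to t1 starting at z0."""
    sol = solve_ivp(rhs, (t0, t1), z0, rtol=1e-10, atol=1e-10)
    return sol.y[:, -1]

def residuals(p):
    """Continuity at interior nodes plus the two boundary conditions."""
    Z = p.reshape(K + 1, 2)              # (x, lambda) guesses at each node
    res = [Z[0, 0] - 1.0]                # x(0) = 1
    for k in range(K):
        res.extend(shoot(Z[k], nodes[k], nodes[k + 1]) - Z[k + 1])
    res.append(Z[-1, 1])                 # lambda(1) = 0
    return np.array(res)

guess = np.zeros((K + 1, 2))
guess[:, 0] = 1.0                        # flat initial guess for the state
Z = fsolve(residuals, guess.ravel()).reshape(K + 1, 2)
print("x at nodes:     ", np.round(Z[:, 0], 4))
print("lambda at nodes:", np.round(Z[:, 1], 4))
```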

Inverse quantization of DCT coefficients using Laplacian pdf (Laplacian pdf를 적용한 DCT 계수의 역양자화)

  • 강소연;이병욱
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.6C
    • /
    • pp.857-864
    • /
    • 2004
  • Many image compression standards, such as JPEG, MPEG, and H.263, are based on the discrete cosine transform (DCT) and quantization, and quantization error is the major source of image quality degradation. The conventional dequantization method assumes a uniform distribution of the DCT coefficients, so the dequantization value is the center of each quantization interval. However, DCT coefficients are better modeled by a Laplacian probability density function (pdf), and the center value of each interval is then not optimal for reducing squared error. We use the mean of each quantization interval under the Laplacian pdf and show the effect of this correction on image quality. We also compare the existing quantization error to the corrected quantization error in closed form. The PSNR improvement due to the compensation on real images is in the range of 0.2-0.4 dB, and the maximum correction value is 1.66 dB.
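
The correction the abstract describes is the conditional mean (centroid) of each quantization interval under the Laplacian pdf, which has a closed form for an interval $[a, b)$ with $0 \le a < b$ and rate $\lambda$: $E[X \mid a \le X < b] = \frac{(a + 1/\lambda)e^{-\lambda a} - (b + 1/\lambda)e^{-\lambda b}}{e^{-\lambda a} - e^{-\lambda b}}$. The sketch below compares this centroid with the interval midpoint used by conventional dequantization; the quantizer step and Laplacian rate are assumed values.

```python
import numpy as np

def laplacian_centroid(a, b, lam):
    """Conditional mean of a Laplacian(rate=lam) over [a, b), 0 <= a < b."""
    ea, eb = np.exp(-lam * a), np.exp(-lam * b)
    return ((a + 1 / lam) * ea - (b + 1 / lam) * eb) / (ea - eb)

q, lam = 16.0, 0.1          # assumed quantizer step and Laplacian rate
for k in range(1, 5):       # positive intervals [k*q - q/2, k*q + q/2)
    a, b = k * q - q / 2, k * q + q / 2
    mid = (a + b) / 2       # conventional reconstruction value
    cen = laplacian_centroid(a, b, lam)
    print(f"interval [{a:5.1f},{b:5.1f}): midpoint {mid:5.1f}, "
          f"Laplacian centroid {cen:6.2f}")
```

Because the Laplacian pdf decays across each interval, the centroid always lies below the midpoint, which is the direction of the correction reported in the abstract.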