SMOOTHING APPROXIMATION TO l1 EXACT PENALTY FUNCTION FOR CONSTRAINED OPTIMIZATION PROBLEMS

  • Received : 2014.10.30
  • Accepted : 2015.02.18
  • Published : 2015.05.30

Abstract

In this paper, a new smoothing approximation to the l1 exact penalty function for constrained optimization problems (COP) is presented. It is shown that an optimal solution to the smoothing penalty optimization problem is an approximate optimal solution to the original optimization problem. Based on the smoothing penalty function, an algorithm is presented to solve COP, and its convergence is proved under some conditions. Numerical examples illustrate that this algorithm is efficient in solving COP.

1. Introduction

Consider the following COP:

$$(P)\qquad \min\ f(x) \quad \text{s.t.}\ \ g_i(x) \le 0,\ i \in I,$$

where f, gi : Rn → R, i ∈ I = {1, 2, ..., m}, are continuously differentiable functions and X0 = {x ∈ Rn | gi(x) ≤ 0, i = 1, 2, ..., m} is the feasible set of (P).

To solve (P), many exact penalty function methods have been introduced in the literature; see [1,3,4,5,7,13,25]. In 1967, Zangwill [25] first introduced the classical l1 exact penalty function:

$$F(x,\rho) = f(x) + \rho\sum_{i=1}^{m}\max\{g_i(x),\, 0\}, \tag{1}$$

where ρ > 0 is a penalty parameter. Obviously, F(x, ρ) is not a smooth function. In many studies, another popular penalty function for (P) is defined as

$$F_2(x,\rho) = f(x) + \rho\sum_{i=1}^{m}\big(\max\{g_i(x),\, 0\}\big)^2, \tag{2}$$

which is called the l2 penalty function. Although F2(x, ρ) is continuously differentiable, it is not an exact penalty function.
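For concreteness, here is a minimal Python sketch of (1) and (2); the helper names and the one-dimensional toy instance are ours, not from the paper.

```python
def l1_penalty(f, gs, x, rho):
    """Classical l1 exact penalty (1): f(x) + rho * sum_i max{g_i(x), 0}."""
    return f(x) + rho * sum(max(g(x), 0.0) for g in gs)

def l2_penalty(f, gs, x, rho):
    """l2 penalty (2): continuously differentiable, but not exact."""
    return f(x) + rho * sum(max(g(x), 0.0) ** 2 for g in gs)

# Toy instance: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.
f = lambda x: x ** 2
gs = [lambda x: 1.0 - x]
print(l1_penalty(f, gs, 0.5, rho=10.0))  # 0.25 + 10 * 0.5  = 5.25
print(l2_penalty(f, gs, 0.5, rho=10.0))  # 0.25 + 10 * 0.25 = 2.75
```

The kink of max{·, 0} at the origin is what makes (1) nonsmooth, while squaring in (2) removes the kink at the price of exactness.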

In recent years, the lower order penalty function

$$F_k(x,\rho) = f(x) + \rho\sum_{i=1}^{m}\big(\max\{g_i(x),\, 0\}\big)^{k} \tag{3}$$
has been introduced and investigated in [10,11,18]. Recently, Huang and Yang [6,23] and Rubinov et al. [14,15,16] discussed a nonlinear Lagrangian penalty function,

$$\Big[f^{k}(x) + \rho\sum_{i=1}^{m}\big(\max\{g_i(x),\, 0\}\big)^{k}\Big]^{1/k}, \tag{4}$$

for some k ∈ (0, +∞).

It is noted that the two penalty functions (3) and (4) (0 < k ≤ 1) are exact but not smooth, which makes certain efficient methods (e.g., Newton-type methods) inapplicable. Therefore, smoothing methods for the exact penalty functions (1), (3), or (4) (0 < k ≤ 1) have attracted much attention; see [2,8,9,10,11,12,18,19,20,21,22,24,26]. Chen et al. [2] introduced a smooth function to approximate the classical l1 penalty function by integrating the sigmoid function 1/(1 + e−αt). Lian [8] and Wu et al. [19] proposed smoothing approximations to the l1 exact penalty function for inequality constrained optimization. Pinar et al. [12] also proposed a smoothing approximation to the l1 exact penalty function, with which an ϵ-optimal minimum can be obtained by solving the smoothed penalty problem. Xu et al. [21] discussed a second-order smooth approximation to the classical l1 exact penalty function for constrained optimization problems.

In this paper, we aim to smooth the l1 exact penalty function of the form (1). To this end, we define a smoothing function pϵ(t) (see Section 2), which is easily shown to be continuously differentiable on R. Using pϵ(t) as the smoothing function, a new smoothing approximation to the l1 exact penalty function is obtained; based on this smoothed penalty function, an algorithm for solving COP is then given.

The rest of this paper is organized as follows. In Section 2, we introduce a smoothing function for the classical l1 exact penalty function and some fundamental properties of this smoothing function. In Section 3, the algorithm based on the smoothed penalty function is proposed, its global convergence is established, and some numerical examples are given. Finally, conclusions are drawn in Section 4.

 

2. A smoothing penalty function

Let p(t) = max{t, 0}. Then, the penalty function (1) becomes

$$F(x,\rho) = f(x) + \rho\sum_{i=1}^{m} p\big(g_i(x)\big),$$

where ρ > 0. The corresponding penalty optimization problem for F(x, ρ) is defined as

$$(P_\rho)\qquad \min_{x\in\mathbb{R}^n} F(x,\rho).$$

In order to smooth p(t), we define the function pϵ(t) : R → R as

where ϵ > 0 is a smoothing parameter.

Remark 2.1. Obviously, pϵ(t) has the following attractive properties: pϵ(t) is continuously differentiable on R, and pϵ(t) → p(t) as ϵ → 0+ for every t ∈ R.
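The concrete formula of pϵ(t) is not reproduced in this excerpt. Purely as an illustration of the properties listed in Remark 2.1, the following sketch uses one classical C¹ smoothing of p(t) = max{t, 0} (in the spirit of the quadratic smoothings in [12,19]); it is a stand-in, not the pϵ(t) defined in this paper.

```python
def p(t):
    """The plus function p(t) = max{t, 0} of Section 2."""
    return max(t, 0.0)

def p_eps(t, eps):
    """A stand-in C^1 smoothing of the plus function (not the paper's p_eps).
    It satisfies 0 <= p(t) - p_eps(t) <= eps / 2 and p_eps -> p as eps -> 0."""
    if t <= 0.0:
        return 0.0
    if t <= eps:
        return t * t / (2.0 * eps)  # quadratic transition smooths the kink at 0
    return t - eps / 2.0
```

Plotting this stand-in for eps = 0.5, 0.1, 0.001 reproduces the qualitative picture in Figure 1 below.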

Figure 1 shows the behavior of p(t) (the solid line), p0.5(t) (the dotted line), p0.1(t) (the dash-dot line), and p0.001(t) (the dashed line).

FIGURE 1. The behavior of p(t) and pϵ(t).

Consider the penalty function for (P) given by

$$F_\epsilon(x,\rho) = f(x) + \rho\sum_{i=1}^{m} p_\epsilon\big(g_i(x)\big). \tag{6}$$

Clearly, Fϵ(x, ρ) is continuously differentiable on Rn. Applying (6), the following penalty problem for (P) is obtained:

$$(NP_{\rho,\epsilon})\qquad \min_{x\in\mathbb{R}^n} F_\epsilon(x,\rho).$$
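Because Fϵ(x, ρ) is continuously differentiable, (NPρ,ϵ) can be attacked with a standard gradient-based solver. A minimal sketch, reusing the stand-in p_eps defined after Remark 2.1 and a hypothetical two-dimensional instance of our own:

```python
import numpy as np
from scipy.optimize import minimize

def smoothed_penalty(f, gs, rho, eps):
    """F_eps(x, rho) = f(x) + rho * sum_i p_eps(g_i(x)), as in (6)."""
    return lambda x: f(x) + rho * sum(p_eps(g(x), eps) for g in gs)

# Hypothetical instance: min (x1-2)^2 + (x2-1)^2  s.t.  x1 + x2 - 2 <= 0,
# whose constrained minimizer is (1.5, 0.5).
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
gs = [lambda x: x[0] + x[1] - 2.0]
res = minimize(smoothed_penalty(f, gs, rho=100.0, eps=1e-3),
               np.zeros(2), method="BFGS")
print(res.x)  # approximately (1.5, 0.5)
```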

Now, the relationship between (Pρ) and (NPρ,ϵ) is studied.

Lemma 2.1. For any given x ∈ Rn, ϵ > 0 and ρ > 0, we have

Proof. For x ∈ Rn and i ∈ I, by the definition of pϵ(t), we have

That is,

Thus,

which implies

Therefore,

that is,

This completes the proof. □

A direct result of Lemma 2.1 is given as follows.

Corollary 2.2. Let {ϵj} → 0 be a sequence of positive numbers and assume that xj is a solution to (NPρ,ϵj) for some given ρ > 0. Let x′ be an accumulation point of the sequence {xj}. Then x′ is an optimal solution to (Pρ).

Definition 2.3. For ϵ > 0, a point xϵ ∈ Rn is called an ϵ-feasible solution to (P) if gi(xϵ) ≤ ϵ, ∀i ∈ I.

Definition 2.4. For ϵ > 0, a point xϵ ∈ X0 is called an ϵ-approximate optimal solution to (P) if

$$f(x_\epsilon) - f^* \le \epsilon,$$

where f∗ is the optimal objective value of (P).
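In code, both definitions amount to simple tolerance checks; a sketch (the function names are ours, and the inequality in Definition 2.4 is as reconstructed above):

```python
def is_eps_feasible(x, gs, eps):
    """Definition 2.3: g_i(x) <= eps for every i in I."""
    return all(g(x) <= eps for g in gs)

def is_eps_approx_optimal(f_x, f_star, eps):
    """Definition 2.4: for feasible x, f(x) - f* <= eps."""
    return f_x - f_star <= eps
```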

Theorem 2.5. Let x∗ be an optimal solution of problem (Pρ) and x′ be an optimal solution to (NPρ,ϵ) for some ρ > 0 and ϵ > 0. Then,

Proof. From Lemma 2.1, for ρ > 0, we have that

Under the assumption that x∗ is an optimal solution to (Pρ) and x′ is an optimal solution to (NPρ,ϵ), we get

Therefore, we obtain that

That is,

This completes the proof. □

Theorem 2.5 shows that an approximate solution to (NPρ,ϵ) is also an approximate solution to (Pρ) when the error ϵ is sufficiently small.

Lemma 2.6 ([20]). Suppose that x∗ is an optimal solution to (Pρ). If x∗ is feasible to (P), then it is an optimal solution to (P).

Theorem 2.7. Suppose that x∗ satisfies the conditions in Lemma 2.6 and let x′ be an optimal solution to (NPρ,ϵ) for some ρ > 0 and ϵ > 0. If x′ is ϵ-feasible to (P), then

that is, x′ is an approximate optimal solution to (P).

Proof. Since x′ is ϵ-feasible to (P), it follows that

As x∗ is a feasible solution to (P), we have

By Theorem 2.5, we get

Thus,

That is,

By Lemma 2.6, x∗ is actually an optimal solution to (P). Thus x′ is an approximate optimal solution to (P). This completes the proof. □

By Theorem 2.7, an optimal solution to (NPρ,ϵ) is an approximate optimal solution to (P) if it is ϵ-feasible to (P). Therefore, under some mild conditions, we can obtain an approximate optimal solution to (P) by solving (NPρ,ϵ).

 

3. Algorithm and numerical examples

In this section, using the smoothed penalty function Fϵ(x, ρ), we propose an algorithm, Algorithm 3.1, to solve COP.

Algorithm 3.1

Remark 3.1. In Algorithm 3.1, since N > 1 and 0 < η < 1, the sequence {ϵj} → 0 and the sequence {ρj} → +∞ as j → +∞.
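The displayed steps of Algorithm 3.1 are not reproduced in this excerpt. The sketch below assembles the scheme that Remark 3.1 and the surrounding text imply: minimize Fϵj(·, ρj) starting from the previous iterate, then update ρj+1 = Nρj and ϵj+1 = ηϵj. The stopping rule on the penalty term is our assumption, not the paper's rule, and p_eps is passed in (e.g., the stand-in from Section 2):

```python
import numpy as np
from scipy.optimize import minimize

def algorithm_3_1(f, gs, p_eps, x0, rho0, eps0, N, eta, tol, max_iter=50):
    """Sketch of the smoothing penalty loop implied by Remark 3.1:
    rho_{j+1} = N * rho_j (N > 1) and eps_{j+1} = eta * eps_j (0 < eta < 1)."""
    x, rho, eps = np.asarray(x0, dtype=float), rho0, eps0
    for j in range(max_iter):
        # x_{j+1}: a stationary point of F_eps_j(., rho_j), cf. (6).
        F = lambda z, r=rho, e=eps: f(z) + r * sum(p_eps(g(z), e) for g in gs)
        x = minimize(F, x, method="BFGS").x
        # Assumed stopping criterion: the penalty term falls below the tolerance.
        if rho * sum(p_eps(g(x), eps) for g in gs) < tol:
            return x, j + 1
        rho, eps = N * rho, eta * eps  # drive rho up and eps down
    return x, max_iter
```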

In practice, it is difficult to compute an exact global minimizer of Fϵj (x, ρj). We generally look for a local minimizer or stationary point of Fϵj (x, ρj) by computing xj+1 such that ∇Fϵj (xj+1, ρj) = 0. For x ∈ Rn, we define

$$I_0(x) = \{\, i \in I \mid g_i(x) < 0 \,\}.$$

Then, the following result is obtained.

Theorem 3.1. Assume that f(x) → +∞ as ║x║ → +∞. Let {xj} be the sequence generated by Algorithm 3.1. Suppose that the sequence {Fϵj (xj , ρj)} is bounded. Then {xj} is bounded, and any limit point x∗ of {xj} is feasible to (P) and satisfies

$$\lambda\,\nabla f(x^*) + \sum_{i=1}^{m}\mu_i\,\nabla g_i(x^*) = 0, \qquad \mu_i = 0 \ \text{ for } i \in I_0(x^*), \tag{10}$$

where λ ≥ 0 and μi ≥ 0, i = 1, 2, ..., m.

Proof. First, we prove that {xj} is bounded. Note that

$$F_{\epsilon_j}(x^j,\rho_j) = f(x^j) + \rho_j\sum_{i=1}^{m} p_{\epsilon_j}\big(g_i(x^j)\big), \tag{11}$$

and by the definition of pϵ(t), we have

$$p_{\epsilon_j}\big(g_i(x^j)\big) \ge 0, \quad i \in I. \tag{12}$$

Suppose to the contrary that {xj} is unbounded. Without loss of generality, we assume that ║xj║ → +∞ as j → +∞. Then f(xj) → +∞, and from (11) and (12) we have

$$F_{\epsilon_j}(x^j,\rho_j) \ge f(x^j) \to +\infty,$$

which results in a contradiction, since the sequence {Fϵj (xj , ρj)} is bounded. Thus {xj} is bounded.

We show next that any limit point x∗ of {xj} is feasible to (P). Without loss of generality, we assume that xj → x∗ as j → +∞. Suppose that x∗ is not feasible to (P). Then there exists some i ∈ I such that gi(x∗) ≥ α > 0, so for all sufficiently large j the set {i | gi(xj) ≥ α} is nonempty. Because I is finite, there exists an i0 ∈ I that satisfies gi0(xj) ≥ α for infinitely many j. Note that, by (11) and (12),

$$F_{\epsilon_j}(x^j,\rho_j) \ge f(x^j) + \rho_j\, p_{\epsilon_j}\big(g_{i_0}(x^j)\big). \tag{13}$$

As j → +∞, we have ρj → +∞ and ϵj → 0, so it follows from (13) that Fϵj (xj , ρj) → +∞, which contradicts the assumption that {Fϵj (xj , ρj)} is bounded. Therefore, x∗ is feasible to (P).

Finally, we show that (10) holds. By Step 2 in Algorithm 3.1, ∇Fϵj (xj , ρj) = 0, that is,

$$\nabla f(x^j) + \rho_j\sum_{i=1}^{m} p'_{\epsilon_j}\big(g_i(x^j)\big)\,\nabla g_i(x^j) = 0. \tag{14}$$
For j = 1, 2, ..., let

$$\gamma_j = 1 + \rho_j\sum_{i=1}^{m} p'_{\epsilon_j}\big(g_i(x^j)\big). \tag{15}$$

Then γj > 1. From (14), we have

$$\frac{1}{\gamma_j}\,\nabla f(x^j) + \sum_{i=1}^{m}\frac{\rho_j\, p'_{\epsilon_j}(g_i(x^j))}{\gamma_j}\,\nabla g_i(x^j) = 0. \tag{16}$$

Let

$$\lambda_j = \frac{1}{\gamma_j}, \qquad \mu_i^j = \frac{\rho_j\, p'_{\epsilon_j}(g_i(x^j))}{\gamma_j}, \quad i \in I. \tag{17}$$

Then we have

$$\lambda_j + \sum_{i\in I}\mu_i^j = 1.$$

When j → ∞, we have (passing to a subsequence if necessary) that λj → λ ≥ 0 and μij → μi ≥ 0, ∀i ∈ I. By (16) and (17), as j → +∞, we have

$$\lambda\,\nabla f(x^*) + \sum_{i=1}^{m}\mu_i\,\nabla g_i(x^*) = 0.$$

For i ∈ I0(x∗), as j → +∞, we get μij → 0. Therefore, μi = 0, ∀i ∈ I0(x∗). So, (10) holds, and this completes the proof. □

Theorem 3.1 points out that the sequence {xj} generated by Algorithm 3.1 may converge to a K-T point of (P) under some conditions.

Now, we solve some COPs with Algorithm 3.1 in MATLAB. In each example, we let ϵ = 10−6, so an ϵ-solution to (P) is expected from Algorithm 3.1. The numerical results show that Algorithm 3.1 yields approximate solutions with better objective function values in comparison with some other algorithms.

Example 3.2. Consider the example in [8],

Let x0 = (0, 0, 0, 0), ρ0 = 4, N = 10, ϵ0 = 0.01, η = 0.05 and ϵ = 10−6.
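The data of (COP1) is not reproduced above, so the following usage sketch applies the same parameter settings to the hypothetical two-dimensional instance from the Section 2 sketches (with algorithm_3_1 and p_eps as defined there), rather than to (COP1) itself:

```python
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2   # hypothetical stand-in
gs = [lambda x: x[0] + x[1] - 2.0]
x_sol, iters = algorithm_3_1(f, gs, p_eps, x0=[0.0, 0.0],
                             rho0=4.0, eps0=0.01, N=10.0, eta=0.05, tol=1e-6)
print(x_sol, iters)  # approaches the constrained minimizer (1.5, 0.5)
```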

Numerical results of Algorithm 3.1 for solving (COP1) are given in Table 1.

Table 1. Numerical results of Algorithm 3.1 with x0 = (0, 0, 0, 0), ρ0 = 4, N = 10

Therefore, we get an approximate solution

at the 3rd iteration. One can easily check that x3 is a feasible solution, since the values of the constraints of (COP1) at x3 are as follows:

The objective function value is f(x3) = −44.233515. The solution obtained here is slightly better than the solution obtained at the 4th iteration by the method in [8] (objective function value f(x∗) = −44.23040) for this example.

Now we change the initial parameters. Let x0 = (0, 0, 0, 0), ρ0 = 8, N = 6, ϵ0 = 0.01, η = 0.03 and ϵ = 10−6. Numerical results of Algorithm 3.1 for solving (COP1) are given in Table 2. Further, with the same parameters ρ0, N, ϵ0, η as above, we change the starting point to x0 = (8, 8, 8, 8). New numerical results are given in Table 3.

Table 2. Numerical results of Algorithm 3.1 with x0 = (0, 0, 0, 0), ρ0 = 8, N = 6

Table 3. Numerical results of Algorithm 3.1 with x0 = (8, 8, 8, 8), ρ0 = 8, N = 6

It is easy to see from Tables 2 and 3 that the convergence behavior of Algorithm 3.1 is the same in both cases and the objective function values are almost the same. That is to say, the efficiency of Algorithm 3.1 does not depend strongly on the choice of starting point in this example.

Note: j is the iteration number in Algorithm 3.1.

Example 3.3. Consider the example in [19],

Let

Thus problem (COP2) is equivalent to the following problem:

Let x0 = (1, 1), ρ0 = 8, N = 10, ϵ0 = 0.5, η = 0.01 and ϵ = 10−6. Numerical results of Algorithm 3.1 for solving (COP2’) are given in Table 4.

Table 4. Numerical results of Algorithm 3.1 with x0 = (1, 1), ρ0 = 8, N = 10

By Table 4, an approximate optimal solution to (COP2’) is obtained at the 3rd iteration, namely x∗ = (0.800000, 1.200000), with corresponding objective function value f(x∗) = −7.200000. The solution obtained here agrees with the solution obtained at the 4th iteration by the method in [19] (objective function value f(x∗) = −7.2000) for this example.

 

4. Conclusion

This paper has presented a smoothing approximation to the l1 exact penalty function and an algorithm based on the resulting smoothed penalty problem. It is shown that an optimal solution to (NPρ,ϵ) is an approximate optimal solution to the original optimization problem under some mild conditions. Numerical results show that the proposed algorithm is efficient in solving some COPs.

References

  1. M.S. Bazaraa and J.J. Goode, Sufficient conditions for a globally exact penalty function without convexity, Mathematical Programming Study, 19 (1982), 1-15. https://doi.org/10.1007/BFb0120980
  2. C.H. Chen and O.L. Mangasarian, Smoothing methods for convex inequalities and linear complementarity problems, Math. Program., 71 (1995), 51-69. https://doi.org/10.1007/BF01592244
  3. G. Di Pillo and L. Grippo, An exact penalty function method with global convergence properties for nonlinear programming problems, Math. Program., 36 (1986), 1-18. https://doi.org/10.1007/BF02591986
  4. G. Di Pillo and L. Grippo, Exact penalty functions in constrained optimization, SIAM J. Control Optim., 27 (1989), 1333-1360. https://doi.org/10.1137/0327068
  5. S.P. Han and O.L. Mangasarian, Exact penalty functions in nonlinear programming, Math. Program., 17 (1979), 257-269. https://doi.org/10.1007/BF01588250
  6. X.X. Huang and X.Q. Yang, Convergence analysis of a class of nonlinear penalization methods for constrained optimization via first-order necessary optimality conditions, J. Optim. Theory Appl., 116 (2003), 311-332. https://doi.org/10.1023/A:1022503820909
  7. J.B. Lasserre, A globally convergent algorithm for exact penalty functions, Eur. J. Oper. Res., 7 (1981), 389-395. https://doi.org/10.1016/0377-2217(81)90097-7
  8. S.J. Lian, Smoothing approximation to l1 exact penalty function for inequality constrained optimization, Appl. Math. Comput., 219 (2012), 3113-3121. https://doi.org/10.1016/j.amc.2012.09.042
  9. B.Z. Liu, On smoothing exact penalty functions for nonlinear constrained optimization problems, J. Appl. Math. Comput., 30 (2009), 259-270. https://doi.org/10.1007/s12190-008-0171-z
  10. Z.Q. Meng, C.Y. Dang and X.Q. Yang, On the smoothing of the square-root exact penalty function for inequality constrained optimization, Comput. Optim. Appl., 35 (2006), 375-398. https://doi.org/10.1007/s10589-006-8720-6
  11. K.W. Meng, S.J. Li and X.Q. Yang, A robust SQP method based on a smoothing lower order penalty function, Optimization, 58 (2009), 22-38. https://doi.org/10.1080/02331930701761193
  12. M.C. Pinar and S.A. Zenios, On smoothing exact penalty functions for convex constrained optimization, SIAM J. Optim., 4 (1994), 486-511. https://doi.org/10.1137/0804027
  13. E. Rosenberg, Exact penalty functions and stability in locally Lipschitz programming, Math. Program., 30 (1984), 340-356. https://doi.org/10.1007/BF02591938
  14. A.M. Rubinov, B.M. Glover and X.Q. Yang, Extended Lagrange and penalty functions in continuous optimization, Optimization, 46 (1999), 327-351. https://doi.org/10.1080/02331939908844460
  15. A.M. Rubinov, X.Q. Yang and A.M. Bagirov, Penalty functions with a small penalty parameter, Optim. Methods Softw., 17 (2002), 931-964. https://doi.org/10.1080/1055678021000066058
  16. A.M. Rubinov and X.Q. Yang, Nonlinear Lagrangian and Penalty Functions in Optimization, Kluwer Academic, Dordrecht, 2003.
  17. X.L. Sun and D. Li, Value-estimation function method for constrained global optimization, J. Optim. Theory Appl., 102 (1999), 385-409. https://doi.org/10.1023/A:1021736608968
  18. Z.Y. Wu, F.S. Bai, X.Q. Yang and L.S. Zhang, An exact lower order penalty function and its smoothing in nonlinear programming, Optimization, 53 (2004), 57-68. https://doi.org/10.1080/02331930410001699928
  19. Z.Y. Wu, H.W.J. Lee, F.S. Bai, L.S. Zhang and X.M. Yang, Quadratic smoothing approximation to l1 exact penalty function in global optimization, J. Ind. Manag. Optim., 1 (2005), 533-547. https://doi.org/10.3934/jimo.2005.1.533
  20. X.S. Xu, Z.Q. Meng, J.W. Sun and R. Shen, A penalty function method based on smoothing lower order penalty function, J. Comput. Appl. Math., 235 (2011), 4047-4058. https://doi.org/10.1016/j.cam.2011.02.031
  21. X.S. Xu, Z.Q. Meng, J.W. Sun, L.Q. Huang and R. Shen, A second-order smooth penalty function algorithm for constrained optimization problems, Comput. Optim. Appl., 55 (2013), 155-172. https://doi.org/10.1007/s10589-012-9504-9
  22. X.Q. Yang, Smoothing approximations to nonsmooth optimization problems, J. Aust. Math. Soc. B, 36 (1994), 274-285. https://doi.org/10.1017/S0334270000010444
  23. X.Q. Yang and X.X. Huang, A nonlinear Lagrangian approach to constrained optimization problems, SIAM J. Optim., 11 (2001), 1119-1144. https://doi.org/10.1137/S1052623400371806
  24. X.Q. Yang, Z.Q. Meng, X.X. Huang and G.T.Y. Pong, Smoothing nonlinear penalty function for constrained optimization, Numer. Funct. Anal. Optim., 24 (2003), 357-364. https://doi.org/10.1081/NFA-120022928
  25. W.I. Zangwill, Non-linear programming via penalty functions, Manag. Sci., 13 (1967), 334-358.
  26. S.A. Zenios, M.C. Pinar and R.S. Dembo, A smooth penalty function algorithm for network-structured problems, Eur. J. Oper. Res., 64 (1993), 258-277. https://doi.org/10.1016/0377-2217(93)90181-L
