Computing 59, 91-93 (1997)
© Springer-Verlag 1997 Printed in Austria
Can Linear Programs be Used to Test Global Optimization Algorithms?*

H. D. Tuan, Nagoya

Received December 1, 1995; revised November 18, 1996
Abstract
The paper contains critical remarks on global optimization algorithms published by R. Horst and N. V. Thoai.
AMS Subject Classification: 90C30

Key words: Global optimization, linear programming.
Global optimization algorithms are devised to solve nonconvex minimization problems (of a certain class) with many local minima which fail to be global (multiextremality). Since these algorithms purport to compute a global optimum and not just a local optimum, it is natural that they should be tested on problems that exhibit multiextremality. It therefore seems beyond question that linear programs are inappropriate for testing global optimization algorithms. However, some recent studies devoted to global optimization [1], [2] seem to suggest a different point of view. The paper [1] reports computational experiments with three algorithms for minimizing a concave function under linear constraints: polyhedral annexation (PAA), cone splitting (CSA) and outer approximation (OAA). The illustrative example, as well as a substantial number of the test problems used in this paper (about half of the total number of problems reported in Table 1 of [1]), are of the form

min{φ(l(x)) | x ∈ D},    (1)
*Editor's Note: In Computing 42 (1989), we published an article 'Modification, Implementation and Comparison of Three Algorithms for Globally Solving Linearly Constrained Concave Minimization Problems' by R. Horst and N. V. Thoai. On December 1, 1995 we received a manuscript of H. D. Tuan expressing doubts on the correctness of this paper. After some correspondence with the authors we received an Erratum from R. Horst and N. V. Thoai (see p. 94 of this issue) and the above modified version of H. D. Tuan's objections. We decided to publish both communications and to leave it to the reader to form an opinion on this controversy. [R. F. Albrecht]
where D ⊂ R^n_+ is a polyhedron, l(x) is a linear function nonnegative on R^n_+, and φ(t) belongs to one of the following types:

φ(t) = -|t|^{3/2}           (type (2) in [1])    (2)
φ(t) = -√(1 + t²)           (type (3) in [1])    (3)
φ(t) = -|t| ln(1 + |t|)     (type (4) in [1])    (4)
In all these cases φ(t) is a monotone decreasing function of t on the halfline [0, +∞), so each of these problems is actually equivalent to the linear program

min{c(x) | x ∈ D},    (5)
where c(x) = -l(x). Clearly it does not make sense to test OAA on linear programs, because the inefficiency of OAA on linear programs is obvious and has long been known. It is also easy to see that the algorithms CSA and PAA, when applied to a linear program (5), terminate right at the first iteration, provided the initial vertex x⁰ is taken to be a local minimum, as prescribed in the basic version of these algorithms. Thus, testing CSA or PAA on a linear program (5) merely reduces to finding a vertex of D achieving a local minimum of c(x) over D, i.e. to solving this linear program. Such a vicious circle is of course meaningless for our purpose.

However, the results reported in [1] do not agree at all with the above observations. Specifically, for solving the linear program given in the illustrative example in this paper, CSA or PAA requires solving 25 linear programs. More surprisingly, ten out of the 21 test problems reported in Table 1 are linear programs, and to solve each of these linear programs the algorithms had to solve on average more than 200 linear programs of the same size as the original program (in one particular case, even 893 linear programs were needed). At first glance such a huge number of auxiliary linear programs could be attributed to a bad choice of the initial vertex x⁰ (in fact, the modified versions of CSA and PAA allow the initial vertex to be chosen with more freedom than in the basic version). However, this explanation does not seem to be in line with the following statements of the authors (see [1], page 288):

'For CSA and PAA the performance of these procedures depends heavily on the choice of the initial vertex v⁰. However, until now, there are no generally valid rules for this choice. It turned out that choosing a local minimum was, in the average, not better than choosing an arbitrary vertex.'
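The equivalence underlying this argument is easy to check on a toy instance: since a concave function attains its minimum over a polytope at a vertex, and a monotone decreasing φ turns minimizing φ(l(x)) into maximizing l(x), both problems select the same vertex. The polytope vertices and the linear function below are illustrative choices, not data from [1]:

```python
# Sketch: a monotone decreasing transform phi does not change the minimizing
# vertex. The polytope and l(x) here are made-up illustrative data.

def phi(t):
    # type (2) of [1]: phi(t) = -|t|^{3/2}, monotone decreasing on [0, +inf)
    return -abs(t) ** 1.5

# Vertices of a small polytope D in R^2_+; a concave objective attains its
# minimum over a polytope at a vertex, so enumerating vertices suffices here.
vertices = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0), (2.0, 2.0)]

def l(x):
    # a linear function, nonnegative on R^2_+
    return 2.0 * x[0] + 1.0 * x[1]

v_nonlinear = min(vertices, key=lambda v: phi(l(v)))   # problem (1)
v_linear    = min(vertices, key=lambda v: -l(v))       # LP (5), c(x) = -l(x)

print(v_nonlinear, v_linear)  # → (4.0, 0.0) (4.0, 0.0): the same vertex
```

Both formulations pick the vertex maximizing l(x), which is exactly why testing CSA or PAA on such problems amounts to solving the underlying linear program.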
These statements imply that, although the authors were aware of the importance of the choice of the initial vertex, they chose an arbitrary vertex because according to their experience, choosing a local minimum would not reduce the number of linear programs needed on average. Since this is wrong, one should conclude that the huge numbers of linear programs needed for solving a single linear program, as reported in Table 1, are not due to a defect of the algorithms, but rather to a bad implementation.
In any case, even supposing that linear programs as test problems do present some interest, it does not seem appropriate to illustrate the typical behaviour of a global optimization algorithm by showing, on a single example, how it works on a linear program. This is all the more regrettable because this linear program is actually incorrectly solved. (The optimal solution given in [1] is wrong, as can easily be checked by solving the linear program directly.)

Later, the method used in [1] for constructing test problems (1) was generalized in [2], where it is argued as if, by starting from a given problem min{c(x) | x ∈ D} and using monotone increasing functions ψ(t), one could generate a whole collection of nontrivial, all different, test problems min{ψ(c(x)) | x ∈ D}. Unfortunately, the 'new' problems generated this way are only new in appearance; they are actually all identical to the old one, at least with regard to concave minimization algorithms like CSA, PAA and OAA. By disguising linear programs as seemingly nonlinear problems, and polyhedra as seemingly nonpolyhedral convex sets, the effect of monotone functions ψ(t) is sometimes misleading. For instance, in a paper devoted to concave minimization under nonlinear convex constraints, to illustrate how an algorithm devised for this problem works, an example is given where the nonlinear convex constraints are

|x₁ + 0.5x₂|^{1.5} ≤ 1.2    (6)
(x₁ + x₂)² ≤ 1.8    (7)
exp(0.2x₁ + 0.1x₂) ≤ 1.5    (8)
-12 ≤ xᵢ ≤ 12,  i = 1, 2    (9)

Obviously, this system actually defines a polyhedron and not a nonpolyhedral convex set as it is supposed to be. By treating the feasible set (6)-(9) as a nonpolyhedral convex set, the corresponding concave minimization problem is solved through 15 auxiliary linear programs, whereas actually only two linear programs would have been needed if the constraints were treated as linear.
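That (6)-(9) define a polyhedron follows from inverting the monotone functions on each side: |u|^{1.5} ≤ 1.2 is |u| ≤ 1.2^{2/3}, u² ≤ 1.8 is |u| ≤ √1.8, and exp(u) ≤ 1.5 is u ≤ ln 1.5, each a pair of (or a single) linear inequalities. A small sketch, assuming this reading of (6)-(9), checks the two descriptions numerically:

```python
import math
import random

# Sketch: the "nonlinear" system (6)-(9) and its linearized counterpart
# describe the same feasible set (membership tested on random sample points).

def nonlinear_feasible(x1, x2):
    return (abs(x1 + 0.5 * x2) ** 1.5 <= 1.2          # (6)
            and (x1 + x2) ** 2 <= 1.8                 # (7)
            and math.exp(0.2 * x1 + 0.1 * x2) <= 1.5  # (8)
            and -12 <= x1 <= 12 and -12 <= x2 <= 12)  # (9)

def linear_feasible(x1, x2):
    return (abs(x1 + 0.5 * x2) <= 1.2 ** (2 / 3)       # (6): two linear cuts
            and abs(x1 + x2) <= math.sqrt(1.8)         # (7): two linear cuts
            and 0.2 * x1 + 0.1 * x2 <= math.log(1.5)   # (8): one linear cut
            and -12 <= x1 <= 12 and -12 <= x2 <= 12)   # (9): box constraints

random.seed(0)
pts = [(random.uniform(-13, 13), random.uniform(-13, 13)) for _ in range(10000)]
assert all(nonlinear_feasible(*p) == linear_feasible(*p) for p in pts)
print("feasible sets agree on all sampled points")
```

The absolute-value bounds expand into ordinary two-sided linear inequalities, so the whole system is a polyhedron described by seven linear cuts plus the box (9).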
References
[1] Horst, R., Thoai, N. V.: Modification, implementation and comparison of three algorithms for globally solving linearly constrained concave minimization problems. Computing 42, 271-289 (1989).
[2] Thoai, N. V.: On the construction of test problems for concave minimization algorithms. J. Global Optim. 5, 399-402 (1994).

H. D. Tuan
Department of Electronic-Mechanical Engineering
Nagoya University
Furo-cho, Chikusa-ku
Nagoya 464-01, Japan
e-mail: [email protected]