
IMA Journal of Management Mathematics Advance Access published June 2, 2014

IMA Journal of Management Mathematics (2014) Page 1 of 14


doi:10.1093/imaman/dpu012

Continuous variable neighbourhood search with modified Nelder–Mead for non-differentiable optimization

Milan Dražić, Zorica Dražić

Faculty of Mathematics, University of Belgrade, Belgrade, Serbia
Nenad Mladenović
Brunel University, London, UK
Dragan Urošević∗
Mathematical Institute SANU, Belgrade, Serbia

Corresponding author: draganu@mi.sanu.ac.rs
and
Qiu Hong Zhao
School of Economics and Management, Beihang University, Beijing, China
[Received on 12 April 2013; accepted on 6 May 2014]

Several variants of variable neighbourhood search (VNS) for solving unconstrained and constrained con-
tinuous optimization problems have been proposed in the literature. In this paper, we suggest two new
variants, one of which uses the recent modified Nelder–Mead (MNM) direct search method as a local
search and the other an extension of the MNM method obtained by increasing the size of the simplex each
time the search cannot be continued. For these new and some previous VNS variants, extensive computa-
tional experiments are performed on standard and large non-differentiable test instances. Some interesting
observations are made regarding the comparison of VNS variants with NM-based local search.

Keywords: global optimization; non-differentiable optimization; simplex method; heuristics; variable neighbourhood search.

1. Introduction
Let us consider an unconstrained non-linear programming (NLP) problem

min{f (x) | x ∈ Rn }, (1.1)

where f : Rn → R is a continuous objective function. In general, the objective function is neither convex
nor concave, and there may be many local minima. In addition, the function f (x) can be non-differentiable;
hence, the first- and second-order methods cannot be used.
Unconstrained NLP problems arise in many applications, e.g. in financial planning, risk manage-
ment, scientific modelling, advanced engineering design, data analysis, chemistry and facility location.
In many cases, such problems are very difficult because there are many local minima, whose number
can grow exponentially with the dimension of the problem. For the problems of even moderate size,
methods that offer a guarantee of finding the true global minimum are too time-consuming. Hence,
different (meta)heuristic approaches, which heavily depend on computer power, have been developed


© The authors 2014. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

(see e.g. Battiti & Tecchiolli, 1996; Chelouah & Siarry, 2000; Hedar & Fukushima, 2002; Audet et al.,
2008; Kovačević et al., 2012).
Variable neighbourhood search (VNS) is a metaheuristic whose basic idea is a systematic change
of neighbourhood structures in the search for a better solution (Mladenović & Hansen, 1997; Hansen et al.,
2010). Several VNS-based heuristics for solving NLP have been proposed (see Dražić et al., 2006;
Mladenović et al., 2008; Toksari & Güner, 2007; Carrizosa et al., 2012). In this paper, we present two
new VNS variants that use a recent modification of the original NM method (Nelder & Mead, 1965) as a

local search, and compare them with previously developed Glob-VNS and Gauss-VNS (Dražić et al.,
2006; Carrizosa et al., 2012).
The paper is organized as follows. In Section 2, we give pseudo-codes of the recent restarted
modified Nelder–Mead (RMNM) method (Zhao et al., 2009, 2012) and of its extension, suggested in
this paper, which uses the VNS principle. In Section 3, we give the steps of the Glob-VNS heuristic for
solving unconstrained NLP, used when comparing the new and old VNS + NM heuristics. In Section 4, the
comparison of the four heuristics is performed on standard small test instances, as well as on small and
large non-differentiable ones. Section 5 concludes the paper.

2. Modifications of the NM method


As mentioned earlier, in their RMNM, Zhao et al. (2009) basically propose two modifications of classical
NM: modify the shrink step, and restart NM with the initial value of the simplex size. In our VNS + restarted
Nelder–Mead (RNM), we use only the second modification; in VNS + RNM, the NM method is restarted
just once, after reaching a local minimum.

2.1 Classical NM
As is well known, the classical NM method consists of:
• choosing a starting point x1 ∈ Rn and creating the initial simplex X by generating remaining n ver-
tices x2 , x3 , . . . , xn+1
• repeating the NM iteration until the stopping condition is met.
The NM iteration consists of the following steps:
• selecting the vertex x1 having the lowest (minimal) value of the optimized function f , the vertices xn+1
and xn having the highest and the second highest function values, and the centroid x̄ of the simplex
vertices excluding the vertex xn+1 ;
• calculating the point xr in the following way:

xr = x̄ + α(x̄ − xn+1 )

and performing one of the following moves according to f (xr ):


– if f (x1 ) ≤ f (xr ) < f (xn ) then, replace the vertex xn+1 with xr
– if f (xr ) < f (x1 ) then, compute xe = x̄ + β(xr − x̄), and replace xn+1 with the better one of xr
and xe

– if f (xr ) ≥ f (xn ) then, compute

  xc = xco = x̄ + γ (xr − x̄),   if f (xr ) < f (xn+1 ),
  xc = xci = x̄ − γ (x̄ − xn+1 ),   if f (xr ) ≥ f (xn+1 ),

and if f (xc ) < f (xn+1 ) then, replace xn+1 with xc ; otherwise perform the shrink step.

Numbers α, β and γ can be considered as parameters, but they are usually set to 1, 2 and 1/2, respectively.
In Zhao et al. (2012), α, β and γ are taken at random from a given interval. The shrink step consists
of moving n vertices of the simplex X along the direction of the best one. In other words, the shrink step
consists of replacing the current vertices, except the best one (vertex x1 ), in the following way: vertex xi is
replaced with x1 + δ(xi − x1 ), where δ usually has the value 1/2.
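For concreteness, the iteration above can be sketched in C++ (the language of the GLOB package used later in the experiments). The names, types and structure below are ours and purely illustrative, not the GLOB implementation; for brevity the sketch re-evaluates f freely instead of caching function values, as a real implementation would.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

using Point = std::vector<double>;
using Objective = std::function<double(const Point&)>;

// One classical NM iteration on a simplex X of n+1 vertices in R^n,
// with the usual parameter values alpha = 1, beta = 2, gamma = delta = 1/2.
void nmIteration(std::vector<Point>& X, const Objective& f) {
    const double alpha = 1.0, beta = 2.0, gamma = 0.5, delta = 0.5;
    // Order the vertices so that X[0] is the best and X[n] the worst.
    std::sort(X.begin(), X.end(),
              [&f](const Point& a, const Point& b) { return f(a) < f(b); });
    const std::size_t n = X.size() - 1;
    Point xbar(n, 0.0);                            // centroid of all vertices but X[n]
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) xbar[j] += X[i][j] / n;
    auto move = [n](const Point& a, const Point& b, double t) {
        Point r(n);                                // r = a + t * (a - b)
        for (std::size_t j = 0; j < n; ++j) r[j] = a[j] + t * (a[j] - b[j]);
        return r;
    };
    Point xr = move(xbar, X[n], alpha);            // reflection
    if (f(X[0]) <= f(xr) && f(xr) < f(X[n - 1])) { X[n] = xr; return; }
    if (f(xr) < f(X[0])) {                         // expansion: xe = xbar + beta*(xr - xbar)
        Point xe = move(xbar, xr, -beta);
        X[n] = (f(xe) < f(xr)) ? xe : xr;
        return;
    }
    Point xc = (f(xr) < f(X[n])) ? move(xbar, xr, -gamma)    // outside contraction
                                 : move(xbar, X[n], -gamma); // inside contraction
    if (f(xc) < f(X[n])) { X[n] = xc; return; }
    for (std::size_t i = 1; i <= n; ++i)           // shrink all vertices towards X[0]
        for (std::size_t j = 0; j < n; ++j)
            X[i][j] = X[0][j] + delta * (X[i][j] - X[0][j]);
}
```

A production implementation would also guard against a degenerate working simplex, one of the weaknesses discussed next.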
The NM method can be easily implemented, the objective function does not have to be smooth
(it can even be discontinuous), and in each iteration the objective function is evaluated just once or
twice. The shrink step occurs rarely, so the method is well suited for problems whose function evaluation
is expensive. On the other hand, the method may not converge to the optimal solution even if the
function is convex and smooth. The shape of the working simplex can almost degenerate while adapting
itself to the local landscape, and the method can then take an enormous number of iterations with negligible
improvement in function value, despite not being near the optimum.

2.2 Modification of the NM iteration


In our modification of Nelder–Mead (MNM), the shrinking step is modified (Zhao et al., 2009). Instead
of ‘shrinking’ all vertices of the current simplex X (except the best one), we shrink only the worst one. The
method proceeds in this way as long as the shrunk vertex produces an improvement in the next iteration.
If there is no improvement in the next step, we reconstruct the previous simplex, perform shrinking of the two
worst vertices, and so on. Once all vertices of the incumbent simplex X have been shrunk, a new simplex
X1 is obtained; it replaces the current X , and the MNM iterations continue. Pseudo-code for
the MNM iteration is given in Algorithm 1.
The complete method that iterates the MNM iteration is called the MNM method (pseudo-code is
given in Algorithm 2).
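The MNM method (Algorithm 2 below) relies on a ReducedShrink(X , ns) operation that shrinks only the ns worst vertices towards the best one. A possible C++ sketch of this step, under the assumption that the simplex is kept ordered by increasing function value, is given here; it is illustrative only.

```cpp
#include <algorithm>
#include <vector>

// Shrink only the ns worst vertices of the simplex towards the best vertex X[0].
// Assumes X is ordered by increasing objective value; illustrative sketch only.
void reducedShrink(std::vector<std::vector<double>>& X, std::size_t ns, double delta = 0.5) {
    const std::size_t n = X.size() - 1;      // the simplex has n+1 vertices
    ns = std::min(ns, n);                    // the best vertex is never shrunk
    for (std::size_t i = n; i > n - ns; --i)
        for (std::size_t j = 0; j < X[i].size(); ++j)
            X[i][j] = X[0][j] + delta * (X[i][j] - X[0][j]);
}
```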

2.3 RMNM method


In the first method, we restart the complete MNM by using the local optimum obtained by the last (modified)
NM run as the initial point, and construct a new initial simplex with this point as one of its vertices (Zhao et al.,
2009). The new initial simplex has the same size as the original one. Restarting is repeated as long as MNM
improves the previous solution.
Pseudo-code is given in Algorithm 3.

2.4 Restarted MNM method with expansion


In the second variant, the MNM is restarted from the previously obtained local optimum, but the size of
the simplex depends on the status of the last run in the following way:
• if the last MNM run improves the best solution found so far, then the simplex X in the next
iteration has the same initial size; we call this size ‘minssize’

Algorithm 1: Iteration of MNM


1 function MNMIteration(X , f );
/* f - function to be optimized */
/* X - current simplex */
2 let x1 be the vertex of the simplex X having the lowest function value (f (x1 ) ≤ f (xi ));
3 let xn+1 and xn be vertices of X having the highest and the second highest value of f ;

4 let x̄ denotes a centroid for vertices of X excluding xn+1 ;
5 xr ← x̄ + α(x̄ − xn+1 );
6 if f (x1 ) ≤ f (xr ) < f (xn ) then
7 X ← X ∪ {xr } \ {xn+1 } /* replace xn+1 with xr */;
8 return true;
9 end
10 if f (xr ) < f (x1 ) then
11 xe ← x̄ + β(xr − x̄);
12 if f (xe ) < f (xr ) then
13 X ← X ∪ {xe } \ {xn+1 } /* replace xn+1 with xe */;
14 else
15 X ← X ∪ {xr } \ {xn+1 } /* replace xn+1 with xr */;
16 end
17 return true;
18 end
19 if f (xr ) < f (xn+1 ) then
20 xc ← x̄ + γ (xr − x̄)
21 else
22 xc ← x̄ − γ (x̄ − xn+1 )
23 end
24 if f (xc ) < f (xn+1 ) then
25 X ← X ∪ {xc } \ {xn+1 } /* replace xn+1 with xc */;
26 return true;
27 else
28 return false;
29 end

• if the last MNM run does not improve the solution, then we restart MNM with the old local optimum as the
initial point but with an increased simplex size.
MNM is repeated until the size of the simplex becomes larger than its maximum allowed value
(the parameter maxssize). The steps of the RMNM with expansion of the simplex size are given in
Algorithm 4.
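A compact C++-style sketch of this restart-with-expansion loop is given below. It assumes an mnm() routine implementing Algorithm 2 (declared but not shown) and uses illustrative parameter names; it is not the GLOB code itself.

```cpp
#include <functional>
#include <vector>

using Point = std::vector<double>;
using Objective = std::function<double(const Point&)>;

// Assumed to implement Algorithm 2: run MNM from x0 with simplex size ssize.
Point mnm(const Objective& f, int n, const Point& x0, double ssize);

// Restarted MNM with expansion (Algorithm 4): restart from the incumbent point,
// enlarging the simplex whenever the last run brought no improvement.
Point rmnme(const Objective& f, int n, Point x0,
            double minssize, double maxssize, double incssize) {
    double ssize = minssize;
    do {
        Point x1 = mnm(f, n, x0, ssize);
        if (f(x1) < f(x0)) {      // improvement: keep the point, reset the size
            x0 = x1;
            ssize = minssize;
        } else {                  // no improvement: try again with a larger simplex
            ssize += incssize;
        }
    } while (ssize <= maxssize);
    return x0;
}
```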

3. Glob-VNS
The idea of using several geometric neighbourhood structures and random distributions in the shaking
step led to the Glob-VNS variant of VNS (Dražić et al., 2006; Mladenović et al., 2008). It turned out

Algorithm 2: Modified Nelder Mead


1 function MNM(f , n, x0 , size);
/* f - function (of n arguments) to be optimized */
/* x0 - starting or initial point */
/* size - size (length of edges) of initial simplex */
2 X ← InitialSimplex(x0 , size);
3 Xo ← X ; ns ← 0;
4 while Stopping condition is not met do
5 if not MNMIteration(X , f ) then
6 X ← Xo ; ns ← ns + 1;
7 X ← ReducedShrink(X , ns) /* shrink only ns worst vertices */;
8 if ns = n then
9 Xo ← X ; ns ← 0;
10 end
11 else
12 Xo ← X ; ns ← 0;
13 end
14 end
15 let x1 be the vertex of X having the lowest value of function f ;
16 return x1 ;

Algorithm 3: Restarted MNM (RMNM)


1 function RMNM(f , n, ssize)
/* f - function (of n arguments) to be optimized */
/* ssize - size of starting (initial) simplex */
2 x1 ← InitialPoint
3 repeat
4 x0 ← x1
5 x1 ← MNM (f , n, x0 , ssize)
6 until f (x1 ) ≥ f (x0 )
7 return x0

to be noticeably more efficient than the variants with fixed geometry and fixed distribution. Its steps are
given in Algorithm 5.
In order to make the algorithm Glob-VNS efficient, some decisions must be made. First, a geometry
for the neighbourhood structure Nk (x), k = 1, . . . , kmax , x ∈ Rn , is needed. The most popular choices are

Nk (x) = {y | ρ(x, y) ≤ ρk }, (3.1)

or
Nk (x) = {y | ρk−1 < ρ(x, y) ≤ ρk }. (3.2)

Algorithm 4: Restarted Modified NM with expansion (RMNME)


1 function RMNME(f , n, minssize, maxssize, incssize)
/* f - function (of n arguments) to be optimized */
/* minssize, maxssize - minimal and maximal simplex size; incssize - simplex size increment */
2 x0 ← InitialPoint
3 ssize ← minssize

4 repeat
5 x1 ← MNM (f , n, x0 , ssize)
6 if f (x1 ) < f (x0 ) then
7 x0 ← x1
8 ssize ← minssize
9 else
10 ssize ← ssize + incssize
11 until ssize > maxssize
12 return x0

Algorithm 5: Glob VNS


1 function Glob-VNS(f , n, kmax )
2 Select a set of neighbourhood structures Nk , k = 1, . . . , kmax
3 Select distribution, one type or more, for obtaining random points in Nk
4 Choose an arbitrary initial point x ∈ S
5 Set x∗ ← x, f ∗ ← f (x)
6 repeat
8 Set k ← 1
9 repeat
10 Shake: Generate a point y ∈ Nk (x) at random; Apply some local search method starting
from y to obtain a local minimum y′
11 if f (y′ ) < f ∗ then
12 x∗ ← y′ , f ∗ ← f (y′ )
13 goto 8
14 else
15 k←k +1
16 until k > kmax
17 Change (in a cyclic way) the distribution to the next type
18 until the stopping condition is met
19 return x∗

Metric ρ(·) is usually an ℓp distance, 1 ≤ p ≤ ∞, typically p = 1, 2, ∞. The geometry of the neighbourhood
structures is thus determined by the choice of the metric ρ(·), and Nk (x) is determined by the choice of the
radii ρk . The radii ρk are monotonically increasing in k and are defined either by the user or calculated
automatically in the optimization process.

One also needs to specify the distribution Pk used for obtaining the random point y from Nk (x) in the
shaking step. Drawing y uniformly in Nk (x) is the most popular choice. Some other distributions, such as
a specially designed hypergeometric distribution, can be very successful and often outperform the uniform
distribution (Dražić et al., 2008).
Recently, Carrizosa et al. (2012) showed the following approach to be successful: all neighbourhoods Nk (x)
are set to the entire solution space, and the distributions Pk are taken to be multivariate Gaussian
distributions with variances ρk , resulting in the Gauss-VNS variant of VNS.
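To illustrate the shaking step, the sketch below draws a random point from Nk (x) in two of the ways discussed above: uniformly in an ℓ∞ ball of radius ρk (a special case of (3.1)), or as a Gaussian perturbation with standard deviation ρk , as in Gauss-VNS. The function name and interface are ours, not the GLOB implementation.

```cpp
#include <random>
#include <vector>

// Generate a random point y in the neighbourhood N_k(x) of radius rho_k.
// gaussian = false: uniform in the l_inf ball; gaussian = true: Gauss-VNS style.
std::vector<double> shake(const std::vector<double>& x, double rho_k,
                          bool gaussian, std::mt19937& gen) {
    std::vector<double> y(x.size());
    if (gaussian) {
        std::normal_distribution<double> d(0.0, rho_k);
        for (std::size_t j = 0; j < x.size(); ++j) y[j] = x[j] + d(gen);
    } else {
        std::uniform_real_distribution<double> d(-rho_k, rho_k);
        for (std::size_t j = 0; j < x.size(); ++j) y[j] = x[j] + d(gen);
    }
    return y;
}
```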

4. Experimental results
For testing the effectiveness of the proposed modifications of the NM method, we used the package
GLOB, a test platform for numerical experiments with VNS (Dražić et al., 2006). GLOB is coded in the C++
programming language. We also implemented the two proposed modifications of the NM method, so
the tests could be performed with the same stopping condition. The quality of any optimization method
is usually measured by the number of function evaluations (computer effort) until the optimal (or best
known) solution is reached. We consider that the global optimum is reached if

(Fmin − Fopt ) / (|Fmin | + |Fopt | + 1) < ε1 ,   ε1 = 10^−4 .   (4.1)

Local optima are detected if

(Fmax − Fmin ) / (|Fmax | + |Fmin | + 1) < ε2 ,   ε2 = 10^−5 ,   (4.2)

where Fmax and Fmin are the maximum and minimum function values at simplex vertices. The initial
simplex for each method is generated to be equilateral with a random point as one of its vertices. Each
experiment is repeated 10 times, and average results are presented. For VNS tests, we fixed kmax = 10
for all standard test functions and kmax = 15 for all large non-differentiable problems. Glob-VNS and
Gauss-VNS variants of VNS performed nearly equally efficiently on the presented test functions, so only
the Glob-VNS variant was used in the reported results. We tested four variants of the NM method within
the VNS heuristic.
• VNS + NM—standard Nelder–Mead method used as a local optimizer for VNS-based heuristic;
• VNS + RNM—same as VNS + NM, except that in the local search NM is restarted only once after reaching
a local optimum;
• RMNM—restarted modified NM given in Algorithm 3.
• RMNME—extended restarted modified NM given in Algorithm 4.
Once the random starting point is defined, RMNME and RMNM operate in a deterministic way and
stop when the exit criteria (4.1) or (4.2) are fulfilled. On the other hand, the VNS variants use the NM method
as a local optimizer and continue to search for better solutions from different initial points randomly
drawn from various neighbourhoods. A maximal allowed running time is a natural exit criterion
for VNS and is most suitable for hard problems in high-dimensional spaces. While VNS has a clear advantage
for functions with many local optima, for convex problems or problems with a single local (also
global) optimum, it is not clear which variant is more efficient.
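The exit tests (4.1) and (4.2) are simple relative-error checks; a minimal C++ sketch, with our own function names, is:

```cpp
#include <cmath>

// Test (4.1): the best value found, Fmin, is close enough to the known optimum Fopt.
bool globalOptimumReached(double Fmin, double Fopt, double eps1 = 1e-4) {
    return (Fmin - Fopt) / (std::fabs(Fmin) + std::fabs(Fopt) + 1.0) < eps1;
}

// Test (4.2): the simplex has collapsed, i.e. the largest and smallest vertex
// values Fmax and Fmin agree to within the relative tolerance eps2.
bool localOptimumDetected(double Fmax, double Fmin, double eps2 = 1e-5) {
    return (Fmax - Fmin) / (std::fabs(Fmax) + std::fabs(Fmin) + 1.0) < eps2;
}
```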

For the overall analysis, two sets of test problems are considered. The first set consists of
standard small dimension test functions. The second set consists of 10 non-differentiable functions with
a variable dimension (Haarala, 2004).

4.1 Standard test functions


The results for standard test functions, well known from the literature, are summarized in Table 1. The
first three columns give the test function name, the space dimension and the global optimum value. For each
variant of NM, the results are presented in three columns. The first column (Feval ) contains the average
number of overall function evaluations over 10 test runs. The second column (succ.) contains the
number of runs in which the global optimum was found according to (4.1). If the optimal value was not found
in all test runs, the column (Fmin ) contains the average of the function values obtained in these 10 runs;
otherwise, it is empty.
It appears that both VNS variants (VNS + NM and VNS + RNM) successfully solve all 15 test
instances. Regarding computational effort, the basic VNS variant needed fewer function evaluations to
reach the known global minima than the new VNS-based method (compare 1.6 million with 2 million). A
possible reason is that the standard test functions are relatively simple. The new RMNME is able to
improve the precision of RMNM (compare 97 and 74 exact solutions reached by RMNME and
RMNM, respectively). For example, for the Shubert function S7, RMNME solved the problem exactly in all 10 runs,
while RMNM failed to find the optimum in all 10 trials. More detailed observations follow.
As Table 1 shows, there is no significant difference among the four variants of NM for test functions
S1, S9, S13 and S15. For function S2, RMNME and RMNM are superior to the VNS variants. For
functions S3 and S7, RMNME found the optimum in all 10 runs with about twice the computer effort of the
VNS variants, while RMNM was not successful. For the eight remaining test functions, the superiority of
VNS + NM and VNS + RNM is evident, since they found the optima in all tests while RMNME and RMNM
failed to do so. For functions S4, S5, S6 and S11, RMNME failed with comparable or larger computer
effort than the successful VNS variants, while RMNM was even less successful but with much less
computer effort. For S8, S10, S12 and S14, both RMNME and RMNM performed badly with significantly
less computer effort.
Within the VNS framework, restarting NM did not improve the efficiency for most standard test functions.
Only for functions S9, S11 and S15 was VNS + RNM marginally better, while for the 12 remaining
functions, VNS + NM was more efficient.

4.2 Large non-differentiable problems


For testing the proposed methods on non-differentiable functions in high-dimensional space, we used
10 test functions from Haarala (2004). They are as follows.
LND1. Generalization of MAXQ

f (x) = max_{1 ≤ i ≤ n} x_i^2 ,    fmin = 0.

LND2. Generalization of MXHILB

f (x) = max_{1 ≤ i ≤ n} | Σ_{j=1}^{n} x_j / (i + j − 1) | ,    fmin = 0.
Table 1 Standard test functions

Function VNS + NM VNS + RNM RMNME RMNM

Name n Fopt Feval succ. Fmin Feval succ. Fmin Feval succ. Fmin Feval succ. Fmin
S1. Branin 2 0.3979 72 10 75 10 72 10 72 10
S2. Goldstein and 2 3 150 10 211 10 109 10 109 10
Price
S3. Hartman H34 3 −3.8626 245 10 367 10 702 10 123 4 −2.1453
S4. Hartman H64 6 −3.3224 1019 10 1375 10 1,547 8 −3.2744 590 5 −2.9421
S5. Shekel 4, 10 4 −10.5364 1367 10 3131 10 1,793 6 −7.3679 222 2 −4.2124
S6. Shekel 4, 5 4 −10.1532 1035 10 1318 10 1,487 5 −6.4069 270 5 −6.4069
S7. Shubert 2 −186.7309 1071 10 1218 10 2,228 10 73 0 −23.378
S8. Ackley 10 0 188,726 10 324,405 10 35,706 0 0.8977 880 0 12.47
S9. Dixon and 10 0 5273 10 4752 10 5,138 10 5,138 10
Price
S10. Griewank 6 0 148,528 10 161,096 10 17,285 0 0.2108 601 0 53.18
S11. Rosenbrock 10 0 9738 10 7917 10 8,482 8 0.7974 6,750 8 0.7974
S12. Schwefel 10 0 1,173,568 10 1,262,060 10 16,426 0 1789.88 2,535 0 1818.45
S13. Zakharov 10 0 4056 10 4218 10 4,508 10 4,508 10
S14. Rastrigin 10 0 130,802 10 222,881 10 32,184 0 7.5618 1,147 0 88.75
S15. Powell 12 0 6092 10 5977 10 6,349 10 6,237 10
Sum 1,671,742 150 2,002,001 150 134,016 97 29,255 74

Table 2 Large non-differentiable test functions

Function VNS + NM VNS + RNM RMNME RMNM


n Fopt Feval succ. Fmin Feval succ. Fmin Feval succ. Fmin Feval succ. Fmin
LND1. Generalization of MAXQ
10 0 5657 10 5601 10 5387 10 5387 10
20 0 28,351 10 26,637 10 29,732 10 26,875 10
30 0 67,566 10 68,105 10 75,889 10 72,932 10
40 0 120,672 10 125,107 10 132,496 10 127,671 10
50 0 253,598 10 243,742 10 251,925 10 247,238 10
LND2. Generalization of MXHILB
10 0 62,219 10 10,873 10 7583 10 7583 10
20 0 5,611,105 0 0.000185 63,118 10 40,325 10 30,845 8 0.0002
30 0 2,627,389 0 0.000488 167,127 10 95,301 10 84,607 7 0.0003
40 0 1,563,528 0 0.000709 747,442 8 0.000101 179,506 10 168,198 9 0.000115
50 0 995,644 0 0.000989 1,005,813 1 0.000151 369,404 6 0.000105 269,360 6 0.000689
LND3. Chained LQ
10 −12.7279 9296 10 5277 10 4669 3 −12.7254 4384 3 −12.7254
20 −26.8701 122,393 10 29,266 10 22,698 7 −26.8697 19,887 3 −26.8648
30 −41.0122 550,353 10 62,813 10 49,191 7 −41.004 39,790 7 −41.0040

40 −55.1543 4,861,354 5 −55.1426 384,261 10 64,210 9 −55.1433 38,966 9 −55.1410


50 −69.2965 2,819,445 8 −69.2823 154,489 10 118,847 3 −69.2825 43,782 7 −69.2455
LND4. Chained CB3 I
10 18 8601 10 8292 10 6334 10 6334 10
20 38 36,317 10 31,394 10 18,183 8 38.007 16,594 8 38.007
30 58 51,788 10 51,699 10 25,192 6 58.011 21,585 6 58.011
40 78 97,106 10 93,124 10 46,030 7 78.015 42,827 6 78.015
50 98 201,912 10 143,845 10 77,979 4 98.0193 58,479 3 98.1884
LND5. Chained CB3 II
10 18 8912 10 9775 10 7975 9 18.003 7170 9 18.003
20 38 28,893 10 48,761 10 77,622 5 38.007 36,018 4 38.009
30 58 65,678 10 86,455 10 43,261 9 58.009 33,839 9 58.010
40 78 82,356 10 133,536 10 156,658 8 78.012 94,093 6 78.015
50 98 107,374 10 145,274 10 266,291 8 98.0196 50,423 6 98.0979

Table 3 Large non-differentiable test functions (cont.)

Function VNS + NM VNS + RNM RMNME RMNM


n Fopt Feval succ. Fmin Feval succ. Fmin Feval succ. Fmin Feval succ. Fmin
LND6. Number of active faces
10 0 199,318 10 7776 10 7681 10 7681 10
20 0 12,991,251 0 0.000218 145,763 10 57,492 10 57,492 10
30 0 8,069,203 0 0.001117 8,189,895 1 0.000139 92,096 10 89,057 8 0.000122
40 0 5,486,154 0 0.00220 5,799,417 0 0.000254 172,636 10 162,874 6 0.000221
50 0 3,878,521 0 0.002894 4,088,098 0 0.000403 354,226 7 0.000101 316,238 6 0.00158
LND7. Non-smooth generalization of Brown function 2
10 0 34,591 10 8498 10 7953 10 7953 10
20 0 6,869,657 1 0.000182 43,204 10 26,373 10 26,373 10
30 0 4,413,961 0 0.000691 79,623 10 52,366 10 49,550 9 0.00010
40 0 3,089,010 0 0.00146 226,122 10 120,818 10 110,215 8 0.00011
50 0 2,344,423 0 0.00299 999,753 9 0.0001 184,837 7 0.000155 101,496 5 0.000301
LND8. Chained Mifflin 2
10 −6.5146 30,960 10 20,413 10 24,977 1 −6.5128 6114 0 −6.5073
20 −13.5831 787,015 10 195,133 10 76,270 1 −13.5739 18,990 0 −13.5646
30 −20.6535 2,595,076 9 −20.6493 1,071,285 10 157,337 2 −20.6365 31,171 0 −20.6210
40 −27.7243 4,619,462 7 −27.718 2,442,460 9 −27.7186 282,583 1 −27.6941 47,385 0 −27.6575
50 −34.795 4,166,819 2 −34.7809 3,179,337 6 −34.7871 389,722 0 −34.7513 74,022 0 −34.7135
LND9. Chained crescent I
10 0 14,831 10 7626 10 6240 10 6240 10
20 0 81,819 10 35,149 10 56,663 10 33,562 3 0.0052
30 0 130,844 10 68,264 10 633,172 10 121,431 0 0.0027
40 0 306,036 10 73,833 10 121,471 10 104,325 0 0.0014
50 0 290,204 10 90,509 10 139,425 10 103,884 0 0.00021
LND10. Chained crescent II
10 0 148,252 10 90,998 10 51,247 10 10,674 0 0.2051
20 0 7,240,978 10 443,301 10 155,093 8 0.0064 45,460 0 0.6214
30 0 8,651,335 1 0.00059 3,280,270 10 422,891 0 0.2071 86,248 0 0.4208

40 0 6,472,405 0 0.00163 6,492,246 2 0.00023 1,529,778 0 0.2739 162,380 0 0.9271


50 0 4,443,330 0 0.0155 3,928,254 5 0.00508 768,620 2 1.465 336,334 0 2.393
Sum 107,742,962 333 44,877,053 441 8,064,655 368 3,672,016 281

LND3. Chained LQ


f (x) = Σ_{i=1}^{n−1} max{−x_i − x_{i+1} , −x_i − x_{i+1} + (x_i^2 + x_{i+1}^2 − 1)} ,    fmin = −(n − 1)√2 .

LND4. Chained CB3 I



f (x) = Σ_{i=1}^{n−1} max{x_i^4 + x_{i+1}^2 , (2 − x_i)^2 + (2 − x_{i+1})^2 , 2 e^{−x_i + x_{i+1}}} ,    fmin = 2(n − 1).

LND5. Chained CB3 II

f (x) = max{ Σ_{i=1}^{n−1} (x_i^4 + x_{i+1}^2) , Σ_{i=1}^{n−1} ((2 − x_i)^2 + (2 − x_{i+1})^2) , Σ_{i=1}^{n−1} 2 e^{−x_i + x_{i+1}} } ,    fmin = 2(n − 1).

LND6. Number of active faces

f (x) = max_{1 ≤ i ≤ n} { g(− Σ_{j=1}^{n} x_j) , g(x_i) } ,    g(y) = ln(|y| + 1) ,    fmin = 0.

LND7. Non-smooth generalization of the Brown function 2


f (x) = Σ_{i=1}^{n−1} (|x_i|^{x_{i+1}^2 + 1} + |x_{i+1}|^{x_i^2 + 1}) ,    fmin = 0.

LND8. Chained Mifflin 2



f (x) = Σ_{i=1}^{n−1} (−x_i + 2(x_i^2 + x_{i+1}^2 − 1) + 1.75 |x_i^2 + x_{i+1}^2 − 1|) ,
fmin varies: fmin = −34.795 for n = 50, fmin = −140.86 for n = 200.

LND9. Chained crescent I

f (x) = max{ Σ_{i=1}^{n−1} (x_i^2 + (x_{i+1} − 1)^2 + x_{i+1} − 1) , Σ_{i=1}^{n−1} (−x_i^2 − (x_{i+1} − 1)^2 + x_{i+1} + 1) } ,    fmin = 0.

LND10. Chained Crescent II



f (x) = Σ_{i=1}^{n−1} max{x_i^2 + (x_{i+1} − 1)^2 + x_{i+1} − 1 , −x_i^2 − (x_{i+1} − 1)^2 + x_{i+1} + 1} ,    fmin = 0.
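For illustration, the test functions above are straightforward to code. The following C++ sketch (with our own names) implements LND1 (generalization of MAXQ) and LND3 (Chained LQ) for an arbitrary dimension.

```cpp
#include <algorithm>
#include <vector>

// LND1, generalization of MAXQ: f(x) = max_i x_i^2.
double lnd1(const std::vector<double>& x) {
    double f = 0.0;
    for (double xi : x) f = std::max(f, xi * xi);
    return f;
}

// LND3, Chained LQ:
// f(x) = sum_{i=1}^{n-1} max{ -x_i - x_{i+1}, -x_i - x_{i+1} + (x_i^2 + x_{i+1}^2 - 1) }.
double lnd3(const std::vector<double>& x) {
    double f = 0.0;
    for (std::size_t i = 0; i + 1 < x.size(); ++i) {
        const double a = -x[i] - x[i + 1];
        f += std::max(a, a + (x[i] * x[i] + x[i + 1] * x[i + 1] - 1.0));
    }
    return f;
}
```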

Since the dimension of all these test functions is variable, we tested all of them using n = 10, 20, 30,
40 and 50. The results are summarized in Tables 2 and 3. For each test function, the first two columns
contain the dimension and the global optimum value. For each variant of NM, as for standard test func-
tions, the results are presented in three columns. The first column (Feval ) contains the average number
of overall function evaluations from 10 test runs. The second column (succ.) contains the number of

runs in which the global optimum was found according to (4.1). If the optimal value was not found in all test runs,
the column (Fmin ) contains the average of the best function values obtained over the 10 runs; otherwise it is empty.
Regarding the accuracy and efficiency of the four NM variants on the large test problems, the results
in Tables 2 and 3 show the following.
• The most reliable method among the four is VNS + RNM. In 500 runs (50 instances, each run 10 times),
the optimal solution was not found only 59 times. For only 2 out of 50 instances were optimal solutions
not found in any of the 10 runs. However, even in those cases (the LND6 instances with n = 40 and
n = 50), the solutions found are very close to the global optimum.
• Surprisingly, our new simple extended MNM (RMNME) is comparable with (and even better on average
than) VNS + NM with respect to solution quality (compare 368 and 333 exactly solved instances for
RMNME and VNS + NM, respectively). VNS + NM solved more problems exactly for test instances
LND3, LND4, LND5 and LND8, while RMNME was more effective for LND2, LND6 and
LND7. The effectiveness of these two heuristics on the other instances is similar.
• As expected, when optimal solutions are found by RMNM and VNS + NM, their computational
efforts are smaller than those of RMNME and VNS + RNM, respectively.
• It is interesting to note that instance LND6, the hardest one for our best method (VNS + RNM),
is successfully solved by RMNME and RMNM.

5. Conclusion
Nelder–Mead (NM) is probably the most popular direct search method for unconstrained continuous
optimization. Because of its simplicity, it is also used as a part of global optimization methods such
as simulated annealing, genetic search and variable neighbourhood search (VNS). Recently, an efficient
modification of NM, called RMNM, has been proposed. In this paper, we extend RMNM to obtain a new version of
NM (RMNME), capable of solving non-convex, non-differentiable problem instances. Finally, we perform
an extensive comparative analysis of four different variants of RMNM and VNS, which use different
NM methods. Special attention has been paid to testing non-differentiable functions, since first- and
second-order methods cannot solve them. It appears that RMNM fits excellently into Glob-VNS. In
addition, the extended RMNME outperforms RMNM, as well as VNS + NM.
Future work may include the insertion of RMNM into other heuristics that use NM as a subroutine.

Funding
The present research work has been supported by Research Grant 174010 of the Serbian Ministry of
Science and Environmental Protection.

References
Audet, C., Béchard, V. & Digabel, S. L. (2008) Nonsmooth optimization through mesh adaptive direct search
and variable neighborhood search. J. Global Optim., 41, 299–318.
Battiti, R. & Tecchiolli, G. (1996) The continuous reactive tabu search: blending combinatorial optimization
and stochastic search for global optimization. Ann. Oper. Res., 63, 153–188.
Carrizosa, E., Dražić, M., Dražić, Z. & Mladenović, N. (2012) Gaussian variable neighborhood search for
continuous optimization. Comput. Oper. Res., 39, 2206–2213.

Chelouah, R. & Siarry, P. (2000) A continuous genetic algorithm designed for global optimization of multimodal
functions. J. Heuristics, 6, 191–213.
Dražić, M., Kovačević-Vujčić, V., Čangalović, M. & Mladenović, N. (2006) GLOB – a new VNS-based software
for global optimization. Global Optimization: From Theory to Implementation (L. Liberti & N. Maculan
eds). Berlin, Heidelberg, New York: Springer, pp. 135–154.
Dražić, M., Lavor, C., Maculan, N. & Mladenović, N. (2008) A continuous variable neighborhood search
heuristic for finding the three-dimensional structure of a molecule. EJOR, 185, 1265–1273.
Haarala, M. (2004) Large-Scale Nonsmooth Optimization: Variable Metric Bundle Method with Limited Memory.

Jyväskylä: University of Jyväskylä, pp. 96–98.
Hansen, P., Mladenović, N. & Pérez, J. A. M. (2010) Variable neighborhood search: methods and applications.
Ann. Oper. Res., 175, 367–407.
Hedar, A. R. & Fukushima, M. (2002) Hybrid simulated annealing and direct search method for nonlinear uncon-
strained global optimization. Optim. Methods Softw., 17, 891–912.
Kovačević, D., Petrović, B. & Milošević, P. (2012) Estimating differential evolution crossover parameter with
VNS approach for continuous global optimization. Electron. Notes Discrete Math., 39, 257–264.
Mladenović, N., Dražić, M., Kovačević-Vujčić, V. & Čangalović, M. (2008) General variable neighborhood
search for the continuous optimization. EJOR, 191, 753–770.
Mladenović, N. & Hansen, P. (1997) Variable neighborhood search. Comput. Oper. Res., 24, 1097–1100.
Nelder, J. A. & Mead, R. (1965) A simplex method for function minimization. Comput. J., 7, 308–313.
Toksari, M. D. & Güner, E. (2007) Solving the unconstrained optimization problem by a variable neighborhood
search. J. Math. Anal. Appl., 328, 1178–1187.
Zhao, Q., Mladenović, N. & Urošević, D. (2012) A parametric simplex search for unconstrained optimization
problem. Trans. Adv. Res., 8, 22–27.
Zhao, Q., Urošević, D., Mladenović, N. & Hansen, P. (2009) A restarted and modified simplex search for
unconstrained optimization. Comput. Oper. Res., 36, 3263–3271.
