Netw Spat Econ
DOI 10.1007/s11067-007-9023-x

Method of Successive Weighted Averages (MSWA) and Self-Regulated Averaging Schemes for Solving Stochastic User Equilibrium Problem

Henry X. Liu & Xiaozheng He & Bingsheng He

© Springer Science + Business Media, LLC 2007

Abstract Although the stochastic user equilibrium (SUE) problem has been studied extensively in the past decades, solution convergence for SUE is generally quite slow because of the use of the method of successive averages (MSA), in which the auxiliary flow pattern generated at each iteration contributes equally to the final solution. Realizing that the auxiliary flow pattern in fact approaches the solution point as the iteration number grows, in this paper we introduce the method of successive weighted averages (MSWA), which uses a new step size sequence giving higher weights to the auxiliary flow patterns from later iterations. We further develop a self-regulated averaging method, in which the step sizes are varying, rather than fixed, depending on the distance between the intermediate solution and the auxiliary point. The proposed step size sequences in both MSWA and the self-regulated averaging method satisfy the Blum theorem, which guarantees convergence for the SUE problem. Computational results demonstrate that the convergence speeds of MSWA and the self-regulated averaging method are much faster than that of MSA, with speedup factors of an order of magnitude for high-accuracy solutions. Besides the SUE problem, the proposed methods can also be applied to other fixed-point problems where MSA is applicable, which have wide-ranging applications in the area of transportation networks.

Keywords Stochastic user equilibrium · Method of successive averages (MSA) · Method of successive weighted averages (MSWA) · Self-regulated averages
H. X. Liu (*) · X. He
Department of Civil Engineering, University of Minnesota–Twin Cities, Minneapolis, MN 55455, USA
e-mail: henryliu@umn.edu

X. He
e-mail: hexxx069@umn.edu

B. He
Department of Mathematics, Nanjing University, Nanjing, China
e-mail: hebma@nju.edu.cn

1 Introduction

In stochastic user equilibrium (SUE) models, non-shortest routes can carry positive flow because probabilistic errors in travelers' perception of route costs are assumed. After Daganzo and Sheffi (1977) proposed the stochastic user equilibrium principle, many studies (e.g., Sheffi and Powell 1982) provided solution algorithms for the SUE problem, most of which rely on the method of successive averages (MSA). However, the slow convergence of MSA is one of the disadvantages that limit the practical application of SUE models. Its poor convergence is mainly due to the predetermined step size sequence, which has inspired researchers to develop alternative methods with optimized step sizes.
Optimized step size algorithms offer better convergence rates than MSA. Chen and Alfa (1991) proposed a convex combination algorithm for the Logit SUE problem, in which an optimized step length is generated by minimizing Fisk's (1980) objective function. Maher (1998) developed an alternative optimized step size choice with a simple formulation, using quadratic interpolation applied to the unconstrained SUE model proposed by Sheffi and Powell (1982).
Noticing that the optimized step size algorithms recalculate the step size based on information from prior iterations, Magnanti and Perakis (1997) generalized these optimized step size algorithms as adaptive averaging methods, in which the varying step sizes minimize certain auxiliary evaluation functions. Bottom et al. (1999) applied the general adaptive averaging methods to a specific fixed point problem in route guidance generation. Although optimized step size methods offer more reasonable step size choice strategies than the fixed sequence in MSA, their algorithms discard the simplicity of MSA because of an additional optimization sub-problem at each step.
The purpose of this paper is to develop alternative step size choice strategies that provide better convergence speed for the SUE problem while maintaining the simplicity of MSA. In the standard solution algorithm for SUE, at each iteration a stochastic network loading (either Logit- or Probit-based) is performed and an auxiliary flow pattern is generated. Realizing that solving SUE is essentially equivalent to solving a fixed point problem, and that the auxiliary flow pattern generated at each iteration gradually approaches the solution point, we develop the method of successive weighted averages (MSWA), in which the step size sequence gives higher weights to the later auxiliary flow patterns. This is in clear contrast with the equal weights in the conventional MSA. In addition, we propose a self-regulated step size choice strategy, which adjusts the step size, instead of using a predetermined one, based on the distance between the intermediate solution and the auxiliary point. Both alternative step size choice strategies significantly improve the performance of MSA for SUE with only minor changes to the original MSA algorithm. Like the sequence used in the conventional MSA, the new step size sequences in both MSWA and the self-regulated averaging method satisfy the Blum theorem, which guarantees convergence for the SUE problem.
The remainder of this paper is organized as follows. In the next section, we review the SUE problem and the MSA algorithm and discuss the properties that lead to our algorithm development. Section 3 describes the details of the method of successive weighted averages (MSWA), which uses a revised step size sequence; two algorithms (MSWA-I and MSWA-II) are proposed for its implementation. Section 4 introduces the self-regulated averaging method, whose step sizes are guided by the distance between the auxiliary flow pattern and the intermediate solution. We discuss and compare the proposed algorithms in Section 5 using a small minimization problem. Numerical examples are provided in Section 6 to demonstrate the improvement of our methods. Finally, Section 7 summarizes the findings and offers future research directions.

2 Stochastic user equilibrium and MSA solution algorithm

2.1 Basic concepts

Consider a directed graph G(N, A), where N and A are the sets of nodes and links, respectively. For each link a ∈ A, the link flow is denoted x_a, with flow-dependent travel cost t_a(x_a). Each origin–destination (OD) pair (r, s) has trip demand q_rs assigned to a path set K_rs. Denote by f_k^rs and c_k^rs the path flow and path travel cost, respectively, on path k ∈ K_rs; the relationship between path flows and OD demand is then

q_rs = Σ_k f_k^rs,  ∀(r, s).

Further define Δ = (…, Δ^rs, …) = [δ_ij] as the link–path incidence matrix, where δ_ij = 1 if path j traverses link i, and δ_ij = 0 otherwise. With this incidence matrix Δ, we have the relationship between link flows and path flows, x = Δf, and between link costs and path costs, c = Δ^T t.
In SUE assignment, the path flows are determined by f_k^rs = q_rs P_k^rs, ∀(r, s), k ∈ K_rs, where P_k^rs is the probability that a traveler traverses path k ∈ K_rs. As shown in Sheffi (1985), when the random term of the discrete route choice follows the Gumbel or Normal distribution, the route choice probability is described by the multinomial Logit or Probit model, respectively. For the Logit model, the route choice probability is defined by:

P_k^rs = exp(−θ c_k^rs) / Σ_{l ∈ K_rs} exp(−θ c_l^rs)    (2.1)

where θ > 0 indicates the degree of drivers' aversion to longer travel times. When θ → 0, the SUE assignment is totally random, regardless of path travel times; when θ → ∞, the SUE assignment converges to the deterministic user equilibrium assignment.
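As an illustration, Eq. 2.1 can be evaluated directly once the path costs are known. The following minimal sketch (our addition, with illustrative costs) computes the Logit probabilities for one OD pair, shifting costs by their minimum for numerical stability:

import numpy as np

def logit_path_probabilities(path_costs, theta):
    # Multinomial Logit probabilities (2.1) for one OD pair.
    # Shifting costs by their minimum leaves the probabilities
    # unchanged but avoids overflow in exp().
    c = np.asarray(path_costs, dtype=float)
    e = np.exp(-theta * (c - c.min()))
    return e / e.sum()

# Example: three paths with costs 10, 11 and 14 and theta = 1.
print(logit_path_probabilities([10.0, 11.0, 14.0], theta=1.0))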
Sheffi and Powell (1982) proposed an unconstrained minimization program for the SUE problem as follows:

min z(x) = −Σ_{rs} q_rs S_rs[c^rs(x)] + Σ_{a∈A} x_a t_a(x_a) − Σ_{a∈A} ∫_0^{x_a} t_a(w) dw    (2.2)

where

S_rs[c^rs(x)] = E[ min_{k∈K_rs} c_k^rs | c^rs(x) ],  and  ∂S_rs(c^rs)/∂c_k^rs = P_k^rs,    (2.3)
define the expected perceived travel cost for OD pair (r, s), which depends on the entire path set between r and s. Since the expectation in 2.3 cannot be readily evaluated except by means of Monte Carlo simulation in the stochastic loading, evaluating the objective function 2.2 is not an easy task. Sheffi (1985) therefore directly applied the standard method of successive averages (MSA) as the solution method, avoiding evaluation of the objective value. To improve the slow convergence rate of MSA, Maher (1998) proposed an optimized step length algorithm (OSLA).
Due to its simplicity, MSA is widely used for solving SUE. However, the conventional MSA only provides a predetermined step size sequence, without considering any property of the SUE problem. Since the auxiliary flow pattern approaches the solution after a sufficiently large number of iterations, the ever smaller step sizes restrict the convergence speed, especially when the iteration number is large, as discussed in Maher and Hughes (1998). This inspires us to study the inherent properties of the SUE problem in order to construct a new step size sequence that utilizes the intermediate information generated at each iteration.

2.2 Analysis of the SUE problem

Solving the unconstrained minimization problem 2.2 is equivalent to solving ∇z(x) = 0, as described in Sheffi (1985). The gradient of z(x) is:

∇z(x) = ∇_x t(x) · [ x − Σ_{rs} q_rs Δ^rs P^rs(x) ].    (2.4)

When the travel cost function is separable, ∇_x t(x) is a diagonal positive definite matrix. This property gives the equation ∇z(x) = 0 an equivalent but simpler expression:

∇z(x) = 0  ⇔  x − Σ_{rs} q_rs Δ^rs P^rs(x) = 0    (2.5)

Let y(x) = Σ_{rs} q_rs Δ^rs P^rs(x); then the minimization program is equivalent to finding a link flow vector x such that x − y(x) = 0, which is a fixed point problem. In addition, y(x) − x is a descent direction of z(x), since

∇z(x)^T [y(x) − x] = [x − y(x)]^T ∇_x t(x) [y(x) − x] < 0.    (2.6)

The MSA provided in Sheffi (1985) uses y(x) − x as the search direction, with a predetermined step size sequence converging to zero. However, considering that the search direction itself approaches zero, it is unreasonable to also require the step size to be small. We therefore study the properties of MSA and provide a more reasonable step size sequence that exploits the fact that the search direction approaches zero.
2.3 Method of successive averages

MSA was originally introduced by Robbins and Monro (1951) for the fixed point problem F(x) = x. The general MSA procedure contains two main steps: first find a temporary point y^k = F(x^k), and then update the solution by

x^{k+1} = x^k + α_k (y^k − x^k),  k = 1, 2, …    (2.7)

The step size sequence {α_k} governs the convergence of this method. Robbins and Monro (1951) and Blum (1954) proved that, under the conditions Σ α_k = ∞ and Σ (α_k)² < ∞, procedure 2.7 converges almost surely to a solution of F(x) = x. Robbins and Monro (1951) also suggested the predetermined step size sequence α_k = 1/k used in the conventional MSA, which is easily shown to satisfy the conditions Σ α_k = ∞ and Σ (α_k)² < ∞. In fact, the second condition Σ (α_k)² < ∞ can be replaced by the more general condition lim_{k→∞} α_k = 0, and convergence can then be proven simply by Taylor expansion under the conditions Σ α_k = ∞ and lim_{k→∞} α_k = 0.
As indicated by the Wolfe conditions (Nocedal and Wright 1999), the step sizes α_k in iterative solution algorithms should be neither too large nor too small; good step sizes satisfy both the sufficient decrease and the curvature conditions. In MSA, the condition lim_{k→∞} α_k = 0 guarantees that the step sizes will not be too large, while the condition Σ α_k = ∞ collectively assures that the step sizes will not be too small. In practice, however, α_k can be too large at the very beginning, so that the objective value does not decrease until after a number of iterations; conversely, after a large number of iterations, α_k can become so small that convergence is extremely slow. Due to these inherent drawbacks, MSA is in general used only when there is not enough information to support a better step size selection.
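To make procedure 2.7 concrete, here is a minimal runnable sketch of the Robbins–Monro/MSA iteration for a generic fixed point mapping F; the contraction in the usage line is illustrative only:

import numpy as np

def msa(F, x0, eps=1e-6, max_iter=100_000):
    # Robbins-Monro averaging for F(x) = x with step sizes 1/k, Eq. (2.7).
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        y = F(x)                        # auxiliary point y^k = F(x^k)
        if np.linalg.norm(x - y) < eps:
            break
        x = x + (y - x) / k             # x^{k+1} = x^k + (1/k)(y^k - x^k)
    return x

# Illustrative usage: a contraction whose fixed point is (1, 2).
F = lambda x: 0.5 * (x + np.array([1.0, 2.0]))
print(msa(F, np.zeros(2)))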
To prevent the step sizes α_k from becoming too small in later iterations, some researchers have proposed alternative predetermined sequences that slow down the decrease of α_k. For example, Polyak (1990) proposed a new averaging method with step sizes α_k = k^{−2/3}, and Nagurney and Zhang (1996) proposed the sequence

{α_k} = {1, 1/2, 1/2, 1/3, 1/3, 1/3, …, 1/k, …, 1/k (k times), …}    (2.8)

Both sequences satisfy Σ α_k = ∞ and lim_{k→∞} α_k = 0. Bottom and Chabini (2001) compared Polyak's averaging method with the regular MSA and found that Polyak's method outperforms MSA in their numerical experiments.
Other studies focus on heuristic methods that provide larger step sizes in the later iterations of the MSA procedure. For instance, Cascetta and Postorino (2001) proposed a restart procedure that acts as an "acceleration step". Recently, Bar-Gera and Boyce (2006) applied constant step size sequences in MSA, which efficiently solved travel forecasting problems. However, the convergence of constant-step averaging methods depends on the properties of the mapping F(x); to achieve high efficiency, the constant step sizes must be tuned to the specific application. Magnanti and Perakis (1997) compared the convergence properties of variable step-size and constant step-size averaging methods.
Compared with the conventional MSA, the step size sequences {α_k} in Polyak (1990) and Nagurney and Zhang (1996) decrease to zero more slowly than O(1/k), which may be too aggressive in the beginning iterations; the method of Cascetta and Postorino (2001) cannot tell where the optimal restarting point is; and Bar-Gera and Boyce (2006) did not offer a general way to find the optimal constant step size for different problems, which may even diverge, since constant step sizes do not satisfy the Blum theorem.
Instead of using predetermined step size sequences, adaptive averaging was summarized by Magnanti and Perakis (1997) as a way to accelerate the convergence of averaging methods for fixed point problems. In adaptive averaging, the step sizes α_k are determined using information from previous iterations, e.g., the objective value of 2.2 for SUE problems. The convex combination algorithm of Chen and Alfa (1991) and the optimized step length algorithm (OSLA) of Maher (1998) are therefore specific cases of adaptive averaging methods.
Note that the conventional MSA is essentially an average of all auxiliary solutions: if we let y^k = F(x^k), iteration n+1 yields x^{n+1} = (1/n) Σ_{k=1}^{n} y^k, where every previous intermediate point contributes the equal weight 1/n to the current solution. However, the weights on the auxiliary solutions y^k should not be equal: y^{k+1} is closer to the solution x* than y^k when k is sufficiently large, and therefore y^{k+1} should contribute more to the final solution than y^k. As the auxiliary points y^k approach the current solution x^k, the highest weight should be given to the latest auxiliary point. Based on this observation, we develop the method of successive weighted averages (MSWA) and the self-regulated averaging method in the following sections.

3 Method of successive weighted averages (MSWA)

The conventional MSA iterates by:

x^{k+1} = x^k + α_k (y^k − x^k),  α_k = 1/k,  k = 1, 2, …

It follows that:

x^{k+1} = (1/k) (y^1 + y^2 + … + y^k).

Based on the fact that y^k approaches the solution x* of the fixed point problem, more weight should be assigned to the later intermediate solutions y^k, instead of taking a simple equal average. A straightforward approach is to assign linearly increasing weights to the elements of the series {y^k}, so that the solution follows the weighted average

x^{k+1} = (1·y^1 + 2·y^2 + 3·y^3 + … + k·y^k) / M.

Since the weights must sum to 1, it follows that M = Σ_{i=1}^{k} i. A generalized formulation of the method of successive weighted averages (MSWA) is then:

x^{k+1} = (1^d·y^1 + 2^d·y^2 + 3^d·y^3 + … + k^d·y^k) / (1^d + 2^d + 3^d + … + k^d)    (3.1)
where the weight parameter d ≥ 0 is a real number. When d is an integer, the step size α_k has a closed form, for example:

d = 0 ⇒ α_k = 1/k
d = 1 ⇒ α_k = 2/(k+1)
d = 2 ⇒ α_k = 6k/((k+1)(2k+1))
d = 3 ⇒ α_k = 4k/(k+1)²
d = 4 ⇒ α_k = 30k³/((k+1)(2k+1)(3k²+3k−1))

The conventional MSA is thus a special case of MSWA with d = 0. The value of d determines how much weight is assigned to the later iterations: the larger d is, the less weight is assigned to the early intermediate points. For example, compared with the weight 1/k when d = 0, the weight of the initial intermediate point y^1 is only 2/(k(k+1)) when d = 1, so more weight is assigned to the later auxiliary points.
We develop two algorithms that implement MSWA in distinct ways.

3.1 MSWA-I

The straightforward implementation is derived directly from Eq. (3.1):

x^{k+1} = x^k + α_k (y^k − x^k),  where α_k = k^d / (1^d + 2^d + 3^d + … + k^d),  k = 1, 2, …    (3.2)
The pseudo code of MSWA-I is written as follows:

Pseudo code of MSWA-I

Initialization:
    Set γ_0 = 0, k = 1, the real number d ≥ 0, and the stopping criterion ε > 0
    Calculate the initial point x^1 and y^1 = F(x^1)
Main iteration:
    While ||x^k − y^k|| ≥ ε do
        β_k = k^d;  γ_k = γ_{k−1} + β_k;  α_k = β_k / γ_k;
        x^{k+1} = x^k + α_k (y^k − x^k);
        y^{k+1} = F(x^{k+1});
        k = k + 1;
    End.
Output: x^k

The main iteration of MSWA-I consists of three basic steps. First, the weighted step size α_k is computed recursively; then a new solution x^{k+1} is generated using the weighted step size α_k, followed by the calculation of a new auxiliary point y^{k+1}. Compared with the original MSA, the implementation cost of MSWA-I is almost the same except for the additional calculation of the weighted step size α_k, which is a trivial task. Note that the sequences x^k and y^k generated by the algorithm differ from those of the original MSA because of the different step sizes α_k.
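For concreteness, a minimal runnable sketch of MSWA-I follows; the mapping F stands in for whatever generates the auxiliary point (a stochastic network loading in the SUE case):

import numpy as np

def mswa_1(F, x0, d=1.0, eps=1e-6, max_iter=100_000):
    # MSWA-I: alpha_k = k^d / (1^d + ... + k^d), Eq. (3.2);
    # d = 0 recovers the conventional MSA.
    x = np.asarray(x0, dtype=float)
    gamma = 0.0                         # running sum of the weights beta_k
    for k in range(1, max_iter + 1):
        y = F(x)
        if np.linalg.norm(x - y) < eps:
            break
        beta = float(k) ** d
        gamma += beta
        x = x + (beta / gamma) * (y - x)
    return x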
Frees and Ruppert (1990) argued that it may be advantageous to use one method to select the design points and another to estimate the solution. Following this argument, we develop an alternative algorithm, MSWA-II, which utilizes the auxiliary solution sequence of the Robbins–Monro process. We discuss MSWA-II in the next section.

3.2 MSWA-II

MSWA-II generates a weighted solution sequence in parallel to the MSA solution. In MSWA-II, we update the auxiliary points y^k = F(x^k) following the standard Robbins–Monro process x^{k+1} = x^k + (1/k)(y^k − x^k), and simultaneously maintain a weighted running average of the iterates as the solution series:

x̃^{k+1} = (1^d·y^1 + 2^d·y^2 + 3^d·y^3 + … + k^d·y^k) / (1^d + 2^d + 3^d + … + k^d).

MSWA-II thus uses the same auxiliary points y^k as the conventional MSA, whereas MSWA-I updates the auxiliary points y^k from the latest solution point x^k as in Eq. (3.1). The detailed implementation of MSWA-II is as follows:

Pseudo code of MSWA-II

Initialization:
    Set γ_0 = 0, ξ_0 = 0, k = 1, the real number d ≥ 0, and the stopping criterion ε > 0
    Calculate the initial point x^1 and y^1 = F(x^1)
Main iteration:
    While ||x^k − y^k|| ≥ ε do
        β_k = k^d;  γ_k = γ_{k−1} + β_k;
        ξ_k = ξ_{k−1} + β_k y^k;  x̃^{k+1} = ξ_k / γ_k;    (*)
        x^{k+1} = x^k + (1/k)(y^k − x^k);
        y^{k+1} = F(x^{k+1});
        k = k + 1;
    End.
Output: x̃^k

Note that the weighted average x̃^k at line (*) need not actually be computed until the final output.
MSWA-II retains the entire conventional MSA process and produces a parallel solution sequence {x̃^k} that follows Eq. (3.1). The sequence {x̃^k} therefore preserves the property of assigning more weight to later points, while avoiding the disadvantage of MSWA-I that, because of the larger step sizes used there, the initial iterates may move farther away from the solution than the corresponding iterates of the conventional MSA.
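A corresponding sketch of MSWA-II, again with a generic mapping F in place of the stochastic loading:

import numpy as np

def mswa_2(F, x0, d=1.0, eps=1e-6, max_iter=100_000):
    # MSWA-II: plain MSA iterates, with the weighted running average
    # of the auxiliary points reported as the solution.
    x = np.asarray(x0, dtype=float)
    gamma = 0.0
    xi = np.zeros_like(x)
    for k in range(1, max_iter + 1):
        y = F(x)
        if np.linalg.norm(x - y) < eps:
            break
        beta = float(k) ** d
        gamma += beta
        xi += beta * y                  # xi_k = xi_{k-1} + beta_k * y^k
        x = x + (y - x) / k             # unchanged Robbins-Monro update
    return xi / gamma if gamma > 0.0 else x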
The step size sequences are determined a priori in both the conventional MSA and MSWA; these sequences do not use any intermediate information generated by the solution procedure. In the next section, we develop a self-regulated averaging method, whose step size choice varies according to information from the previous iterations.

4 Self-regulated averaging method

Let the step size in the conventional MSA be α_k = 1/β_k, where

β_k = 1, if k = 1;  β_k = β_{k−1} + 1, if k ≥ 2.    (4.1)

In the original MSA, no matter how close the current solution is to the optimal solution, β_k grows by an increment of one at each iteration. This fixed increment creates two possible disadvantages. One is that the step size may be too large, so that the next iterate lies farther from the optimal solution than the previous one, i.e., ||x^{k+1} − x*|| ≥ ||x^k − x*||. The other is that, when the current solution is close to the optimal solution, the step size may be too small, making the convergence speed extremely slow. If we could use information across iterations, it would be possible to guide the step size choice, either "speeding up" or "slowing down" its decrease to zero.
Similar in spirit to adaptive averaging methods, we develop a self-regulated averaging method in which the step sizes are updated by evaluating a potential function. The potential function may take various forms, as discussed by Bottom et al. (1999) for a specific fixed point problem of route guidance generation. When the potential function detects that the step size of the previous iteration was effective for convergence, it keeps the current step size converging to zero slowly; otherwise, it speeds up the reduction of the step size in the averaging method.
The most convenient measurement for monitoring convergence is the distance between the auxiliary point y^k and the current solution x^k, because y^k → x*. The self-regulated averaging algorithm therefore regulates the increment of β_k according to the absolute error ||x^k − y^k||. The increment of β_k should be greater than 1 if the iterates tend to diverge, i.e., whenever the distance ||x^k − y^k|| grows; otherwise, the increment of β_k should be smaller than 1 when the solution series tends to converge, i.e., whenever the distance ||x^k − y^k|| shrinks. That is:

β_k = β_{k−1} + Γ,  Γ > 1,  if ||x^k − y^k|| ≥ ||x^{k−1} − y^{k−1}||
β_k = β_{k−1} + γ,  γ < 1,  if ||x^k − y^k|| < ||x^{k−1} − y^{k−1}||    (4.2)

The choice of the step size increment parameters Γ and γ is flexible, e.g., Γ ∈ [1.5, 2] and γ ∈ [0.01, 0.5].
The implementation details of the self-regulated averaging method are described in the following pseudo code:
Pseudo code of the self-regulated averaging method

Initialization:
    Set k = 1, β_0 = 1, Γ > 1, 0 < γ < 1, and the stopping criterion ε > 0
    Calculate the initial point x^1 and y^1 = F(x^1)
Main iteration:
    While ||x^k − y^k|| ≥ ε do
        if ||x^k − y^k|| ≥ ||x^{k−1} − y^{k−1}||
            β_k = β_{k−1} + Γ;
        else
            β_k = β_{k−1} + γ;
        end
        α_k = 1/β_k;
        x^{k+1} = x^k + α_k (y^k − x^k);
        y^{k+1} = F(x^{k+1});
        k = k + 1;
    End.
Output: x^k

(The initialization β_0 = 1, consistent with Eq. 4.1, is implied; at k = 1 there is no previous residual, and the else branch can be taken by convention.)

The self-regulated averaging method is based on the consideration that the step size should be larger, giving more aggressive exploration of the solution space, when the current iterates are converging, and smaller when they are diverging. The step size series {α_k} of the self-regulated averaging method satisfies the conditions Σ α_k = ∞ and lim_{k→∞} α_k = 0, and therefore guarantees convergence for SUE problems. In the self-regulated averaging method, {α_k} is still a monotonically decreasing positive series, but it decreases at a more reasonable speed: in particular, when the iterates are close to the optimal solution, the step sizes decrease slowly to avoid a slow convergence speed.
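A minimal runnable sketch of the self-regulated averaging method follows; the initialization β_0 = 1 and the treatment of the first comparison (no predecessor residual) are our assumptions, as noted above:

import numpy as np

def self_regulated_msa(F, x0, Gamma=1.9, gamma=0.1, eps=1e-6, max_iter=100_000):
    # Self-regulated averaging, Eq. (4.2): beta_k grows by Gamma > 1
    # when the residual ||x - y|| grows (step sizes shrink fast), and
    # by gamma < 1 when it shrinks (step sizes stay large).
    x = np.asarray(x0, dtype=float)
    beta = 1.0                          # beta_0 = 1 (assumption)
    prev_res = np.inf                   # forces the else branch at k = 1
    for k in range(1, max_iter + 1):
        y = F(x)
        res = np.linalg.norm(x - y)
        if res < eps:
            break
        beta += Gamma if res >= prev_res else gamma
        prev_res = res
        x = x + (y - x) / beta          # alpha_k = 1/beta_k
    return x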

5 Algorithm comparisons

Before applying these algorithms to the SUE problem, we use a small minimization problem to explore the convergence characteristics of the different algorithms proposed in this paper.
Suppose we have the optimization problem min 2.5x₁² + 0.25x₂² over (x₁, x₂) ∈ R². The optimal solution is obviously the point x* = (0, 0). Given the starting point x¹ = (3.2, 4.5), we apply the conventional MSA, MSWA, and the self-regulated averaging method to this problem to reveal their convergence properties. In both MSWA-I and MSWA-II, we set the weight parameter d = 1, and we set the step increment parameters Γ = 1.8 and γ = 0.3 in the self-regulated averaging method. The absolute errors from each iteration to the optimal solution for all four methods are summarized in Table 1 and depicted in Fig. 1; the step size curves of the four algorithms are shown in Fig. 2.

Table 1 Convergence comparison between MSA, MSWA and the self-regulated averaging method (ε_k = ||x^k − x*||)

Method      ε1    ε2     ε3     ε4     ε5     ε6     ε7    ε8    ε9    ε10   ε11
MSA         5.52  13.00  19.27  12.88  3.43   1.11   1.02  0.94  0.88  0.84  0.79
MSWA-I      5.52  13.00  29.90  44.81  44.81  29.88  12.82 3.25  0.61  0.45  0.41
MSWA-II     5.52  13.00  29.90  23.50  6.48   0.86   0.77  0.71  0.65  0.61  0.58
SR average  5.52  13.00  10.23  6.36   3.19   1.53   1.03  0.89  0.79  0.71  0.64
[Fig. 1 Convergence performance comparisons: absolute error vs. iteration for MSA, MSWA-I, MSWA-II and SR-Av.]
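This experiment can be reproduced with a short script, assuming the auxiliary mapping is a full gradient step, F(x) = x − ∇z(x); under this assumption the conventional MSA run matches the first errors reported in Table 1 (5.52, 13.00, 19.27, 12.88, 3.43, …):

import numpy as np

# z(x) = 2.5*x1^2 + 0.25*x2^2, so grad z(x) = (5*x1, 0.5*x2).
# Assumed auxiliary mapping: one full gradient step, F(x) = x - grad z(x).
F = lambda x: x - np.array([5.0, 0.5]) * x

x = np.array([3.2, 4.5])                # starting point x^1
errors = [np.linalg.norm(x)]            # distance to x* = (0, 0)
for k in range(1, 11):                  # conventional MSA, alpha_k = 1/k
    y = F(x)
    x = x + (y - x) / k
    errors.append(np.linalg.norm(x))
print(np.round(errors, 2))              # 5.52, 13.0, 19.27, 12.88, 3.43, ...

Swapping the update line for the MSWA or self-regulated sketches above reproduces the other rows of Table 1.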
Figure 1 reflects the differences in convergence speed among the four algorithms. It clearly shows that, although both MSWA algorithms approach the optimal point more closely than the conventional MSA after a few iterations, MSWA can move even farther away from the optimal point in the initial steps. This is because the MSWA algorithms use a larger step size sequence than the conventional MSA, as shown in Fig. 2. Therefore, MSWA is better than the conventional MSA only when higher accuracy is required.
In contrast, the self-regulated averaging method better balances the step sizes between the initial and later iterations. By receiving feedback from the iterations, the self-regulated averaging method effectively adjusts its step sizes. As shown in Fig. 2, when the self-regulated averaging method detects a diverging move, it quickly decreases its search step size to keep the solution from moving farther away from the optimal solution; otherwise, it applies a more aggressive step size to explore the solution space. The self-regulated averaging method can therefore keep its steps neither too large nor too small, and its step size sequence drops much more slowly than that of the conventional MSA as the solution approaches the optimum.

[Fig. 2 Step size comparisons: step size vs. iteration for MSA, MSWA and SR-Av.]

6 Numerical examples

In this section, we first use a small SUE problem to demonstrate the convergence rates of all the new methods compared to the standard MSA. We then use the Sioux Falls network for SUE traffic assignment to explore the applicability of the proposed algorithms to high-accuracy practical problems.

6.1 Small SUE problem

Consider the simple network depicted in Fig. 3, consisting of six nodes and eleven links, connecting nine OD pairs. All OD demands are shown in Table 2. The travel time on each link follows the Bureau of Public Roads (BPR) function:

t_a(x_a) = t_a^0 [1 + 0.15 (x_a / c_a)^4]    (6.1)

where the free-flow travel time t_a^0 and capacity c_a of link a are provided in Table 3.

[Fig. 3 Simple SUE network]
Table 2 Origin–destination demand table

Demand  O1   O2   O3
D1      80   40   20
D2      50   180  35
D3      25   20   30

Table 3 Free-flow travel time and capacity

Link number  1    2   3    4    5    6    7    8    9    10   11
t_a^0        60   40  60   20   20   20   20   20   20   20   20
c_a          100  40  160  300  300  300  300  300  300  300  300

Assume that travelers' route choices follow the Logit model. We apply the unconstrained minimization 2.2 to model this SUE problem. Notice that, at iteration k of MSA, the temporary auxiliary flow pattern y^k is in fact generated by a Logit flow loading; different Logit network loading methods may produce different auxiliary flow patterns y^k and hence different convergence speeds. For the example network, we can enumerate all 27 acyclic paths, so that the convergence speed depends only on the choice of step sizes. Throughout this example, we measure convergence by the difference ε_k = ||x^k − y^k(x^k)||.
We first solve the SUE problem with aversion parameter θ = 1 using MSWA-I, whose convergence performance under different weight parameters d is shown in Table 4. A dash in the table means that the algorithm does not reach the stopping criterion within 50,000 iterations. Note that MSWA-I equals the conventional MSA when d = 0. The bold numbers in Table 4 highlight the fastest algorithm for the same problem under the same stopping criterion. It can be observed that the convergence of MSWA-I with d ≥ 1 is generally much faster than that of the conventional MSA; the weight parameter d in MSWA-I should therefore always be greater than one when high accuracy is required (e.g., ε ≤ 0.1).
Table 4 also exhibits a certain "diagonal rule": a bigger weight parameter d > 0 helps achieve higher accuracy. The weight parameter d may thus be switched to a greater value during an MSWA-I run to reach higher convergence accuracy. Notice also that MSWA-I may converge more slowly than the conventional MSA at low accuracy, because its larger step sizes hurt convergence in the initial iterations.
Table 4 Weight parameter versus accuracy in MSWA-I (iterations to reach each stopping criterion)

d           ε=1   ε=0.1  ε=0.01  ε=0.001  ε=0.0001
0.0 (MSA)   326   2,958  29,576  –        –
0.5         214   455    2,048   9,503    –
1.0         280   296    585     1,754    5,544
1.5         345   356    498     866      2,080
2.0         410   422    525     701      1,155
2.5         475   488    577     719      898

Secondly, we apply MSWA-II to the same SUE problem with the same stopping criteria. Unlike MSWA-I, MSWA-II with a higher value of the weight parameter d always produces better solutions than with a smaller value of d, as demonstrated in Table 5: as d increases, the number of iterations keeps decreasing for the same problem. This suggests that MSWA-II always converges faster than the original MSA as long as a reasonable accuracy is required of the solution. Recall that the auxiliary solution points {y^k} in MSWA-II are the same as those in MSA, and that MSWA-II also equals the conventional MSA when d = 0.
Table 5 Weight parameter versus accuracy in MSWA-II (iterations to reach each stopping criterion)

d           ε=1   ε=0.1  ε=0.01  ε=0.001  ε=0.0001
0.0 (MSA)   326   2,958  29,576  –        –
0.5         146   705    3,684   17,270   –
1.0         146   599    2,238   7,344    23,482
1.5         148   497    1,587   4,346    11,419
2.0         147   427    1,260   3,143    7,486
2.5         147   377    1,070   2,539    5,774
3.0         146   339    947     2,186    4,865
4.0         148   282    796     1,791    3,929
5.0         147   236    705     1,573    3,438
10.0        147   154    507     1,133    2,475

As shown in Table 5, for ε = 1, MSWA-II consistently consumes only about half the number of iterations of the conventional MSA. In contrast, for higher accuracies (e.g., ε ≤ 0.1), the larger the weight parameter d, the better. MSWA-II outperforms MSWA-I at relatively low accuracies (e.g., ε ≥ 0.01), whereas at high accuracy its interior conventional-MSA process provides a poor auxiliary sequence as the iterates approach the optimal solution.
Finally, we apply the self-regulated averaging method to the same SUE problem, with step parameters Γ = 1.9 and γ = 0.01. To benchmark the proposed methods, Table 6 also includes Polyak's averaging method, the optimized step length algorithm (OSLA), and its variation MSA-OSLA (Maher 1998). As indicated in Maher (1998), "the convergence of OSLA cannot be assured, as there is no guarantee that the step length estimated by cubic (or quadratic) interpolation will be close to the true optimal step length". To improve the convergence of OSLA, Maher (1998) suggested the combination MSA-OSLA, which switches to OSLA after a certain number of MSA iterations. However, the switching point from MSA to OSLA cannot be known a priori; we manually tried multiple switching points and found that switching from MSA to OSLA at 200 iterations gave reasonable results, shown in the last column of Table 6.
Note that each OSLA iteration contains two stochastic network flow loadings, in contrast to only one loading in the other algorithms. Since the stochastic network flow loading is the major computational cost of each iteration when solving SUE problems, we compare convergence by the number of loadings instead of the number of iterations.

Table 6 Comparison of the error ε_k = ||x^k − y^k|| after the same number of loadings

Loadings  MSA     MSWA-I  MSWA-II  SR-Average  Polyak's  OSLA     MSA-OSLA
100       64.928  81.566  63.367   69.405      36.814    642.066  64.928
200       1.786   63.619  0.186    9.254       82.260    442.316  1.786
400       0.790   0.029   0.169    0.114       81.430    436.213  0.009
600       0.509   0.009   0.100    0.030       78.985    434.438  8.4e-5
800       0.376   0.005   0.063    0.008       73.210    423.249  8.5e-7
1,000     0.299   0.003   0.044    0.002       63.222    141.162  6.6e-9
1,200     0.248   0.002   0.032    0.001       47.384    433.879  <1e-10
1,400     0.212   0.001   0.024    0.000       17.256    421.494  <1e-10
1,600     0.186   0.001   0.019    0.000       0.001     130.739  <1e-10
Table 6 shows that the self-regulated averaging method may converge more slowly than the conventional MSA in the initial steps, since the algorithm maintains larger step sizes while the solutions approach the auxiliary points. Once the solution sequence keeps its converging trend, however, the self-regulated averaging method converges much faster to high accuracy than the conventional MSA. These larger step sizes also help the self-regulated averaging method meet different stopping criteria better than the MSWA algorithms, since it adjusts the step sizes based on information produced by the iterations. The self-regulated averaging method is therefore preferable to MSWA: the optimal weight parameter d for an individual problem is not known a priori, whereas the step parameters Γ and γ provide good convergence speed across various problems.
The table also shows the poor convergence of Polyak's averaging method. Its aggressive step sizes are applied while the search directions are still ascent directions for problem 2.2 in the initial iterations, so the absolute errors ε_k = ||x^k − y^k|| of Polyak's method are hard to decrease in the initial steps, although the method becomes very efficient in later iterations. In contrast, the self-regulated averaging method efficiently avoids this difficulty by quickly shrinking the step sizes in the initial steps.
In addition, Table 6 demonstrates the divergence of OSLA, as indicated by Maher (1998). MSA-OSLA appears to have the best performance; in practice, however, the switching point between MSA and OSLA is problem-specific and may be difficult to find.

6.2 Sioux Falls network

The second example in this paper is the well-known Sioux Falls network, which contains 24 nodes, 76 links, and 528 origin–destination pairs. All link cost functions are assumed to follow the BPR form 6.1.
Testing the convergence of the SUE model requires a consistent network loading method. For the Sioux Falls network, path enumeration becomes infeasible; therefore, Bell's loading, instead of path enumeration or Dial's loading, is applied at each iteration to generate the auxiliary flow pattern y^k. Dial's loading, as indicated in Bell (1995), assigns the stochastic flow to an inconsistent sub-network that contains only the paths efficient under the current link costs. Although Bell's process provides consistent loading and its convergence has been proven (Wong 1999), the loading can contain unreasonable paths with infinite loops, as shown by Akamatsu (1996); this issue, however, is beyond the scope of this paper.
In this example, MSWA-I and MSWA-II, the self-regulated averaging method, the standard MSA, Polyak's (1990) sequence α_k = k^{−2/3}, and OSLA are applied to the SUE assignment problem with the same aversion parameter θ = 1.0. Their performances are compared by the absolute errors ε_k = ||x^k − y^k(x^k)||. Since OSLA performs two loadings per iteration instead of the single loading of the other algorithms, we use the number of loadings to represent the differences in computational cost. In the experiments, we set the weight parameter of MSWA to d = 1.0 and the step parameters of the self-regulated averaging method to Γ = 1.9 and γ = 0.1. Table 7 aggregates the numbers of loadings under different stopping criteria.

Table 7 Solution methods convergence comparison (loadings to reach each stopping criterion)

Method    ε=1,000  ε=100  ε=10  ε=1  ε=0.1  ε=0.01
MSA       136      –      –     –    –      –
MSWA-I    44       144    458   –    –      –
MSWA-II   33       157    910   –    –      –
SR-MSA    48       64     101   146  204    276
Polyak's  49       61     103   161  237    334
OSLA      –        –      –     –    –      –
The bold numbers in Table 7 highlight the fastest algorithm for each stopping criterion, and a dash means that the algorithm cannot reach the criterion within 1,000 stochastic flow loadings. The absolute errors over the first 300 loadings are also shown in Fig. 4. Table 7 and Fig. 4 demonstrate that: (1) both MSWA and the self-regulated averaging method converge better than the standard MSA after a few iterations; (2) MSWA-II performs better than the standard MSA from the very beginning, but becomes the slowest to reach higher accuracy except for the standard MSA; (3) both the self-regulated averaging method and Polyak's method achieve higher-accuracy solutions in this experiment; (4) the self-regulated averaging method is the best for high-accuracy solutions, faster than Polyak's method, as the next experiment will also demonstrate; and (5) the optimized step length algorithm (OSLA) may diverge; it did not converge after 1,000 loadings.
[Fig. 4 Convergence speed comparisons for the Sioux Falls network]

To achieve high accuracy, Polyak (1990) showed that when the step size α_k → 0 "slower" than O(1/k), the MSA algorithm performs better in later iterations because of its more aggressive step sizes. This is consistent with the observation from our previous experiment using α_k = k^{−2/3}: for example, at iteration 1,000, Polyak's step size is α_1000 = 0.01, which equals the MSA step size at iteration 100. The self-regulated averaging method has a similar property when the iterates are sufficiently close to the optimal solution. For a general real weight parameter d ≥ 0 in MSWA, the step sizes can be written as

α_k = k^d / Σ_{i=1}^{k} i^d = (d+1)/k + o(1/k) ≈ (d+1)/k,

since, by Faulhaber's formula, Σ_{i=1}^{k} i^d = k^{d+1}/(d+1) + O(k^d), and the lower-order Bernoulli terms do not affect the limit. Therefore, the step sizes in MSWA are approximately the standard MSA step sizes multiplied by d + 1. In other words, the step sizes in MSWA converge to zero at a speed equivalent to, not "slower" than, O(1/k), which is why MSWA shows a convergence rate similar to that of the regular MSA in Fig. 4.
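This approximation is easy to check numerically; the following snippet compares the exact α_k with (d+1)/k at k = 1,000 for a few values of d:

# Numerical check that alpha_k = k^d / sum(i^d) is close to (d+1)/k.
k = 1000
for d in (1, 4, 10):
    alpha = k ** d / sum(i ** d for i in range(1, k + 1))
    print(d, alpha, (d + 1) / k)        # the two columns nearly coincide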
Although MSWA and the self-regulated averaging method exhibit performances similar to Polyak's averaging method in Table 7 and Fig. 4, they provide systematic strategies for accommodating different accuracy requirements in various problems by adjusting the algorithm parameters. To give the algorithms more aggressive step sizes for high accuracy, the weight parameter d in MSWA should be larger and the step parameter γ should be smaller; this adjustment strategy follows directly from the aforementioned "diagonal rule". Accordingly, we enlarge the weight parameter of MSWA to d = 10, and for the self-regulated averaging method we reduce the step parameter to γ = 0.01, in order to reach much higher solution accuracy. To avoid divergence, we apply OSLA after 20 MSA iterations, because at least 19 MSA iterations were needed for convergence in our computational test. The benchmark performances of these algorithms are summarized in Table 8, and the absolute errors over 1,000 loadings are depicted in Fig. 5.
In this experiment, the self-regulated averaging method outperforms all other algorithms, especially when higher accuracies are required, as demonstrated by both Table 8 and Fig. 5: only the self-regulated averaging method achieves an error of ε = 10⁻¹⁰ within 500 loadings. By assigning the higher weight parameter d = 10, MSWA-I converges much faster than it does with d = 1, and even faster than Polyak's method. The slow convergence of MSWA-II is attributable to its maintenance of the MSA sequence. Following the "diagonal rule" of MSWA, the parameter γ in the self-regulated averaging method should be reduced to slow down the decrease of the step sizes and thereby reach a high-accuracy solution. Finally, the switching strategy greatly improves OSLA and achieves high accuracy with a linear convergence rate, but no guideline exists for determining the switching point between MSA and OSLA.
Table 8 High accuracy performance comparisons (loadings to reach each stopping criterion)

Method    ε=1  ε=0.01  ε=10⁻⁴  ε=10⁻⁶  ε=10⁻⁸  ε=10⁻¹⁰
MSA       –    –       –       –       –       –
MSWA-I    179  265     396     595     896     –
MSWA-II   734  –       –       –       –       –
SR-MSA    224  249     280     337     398     471
Polyak's  161  334     603     988     –       –
MSA-OSLA  178  284     390     498     606     718
[Fig. 5 High accuracy performance comparisons: error (log10) vs. number of loadings for MSA, MSWA-I, MSWA-II, SR-MSA, Polyak's and MSA-OSLA]


7 Conclusions

In this paper, we present the method of successive weighted averages (MSWA) and self-regulated averaging schemes for solving the SUE problem. MSWA allocates more weight to later iterations instead of the equal averaging of MSA, and the self-regulated averaging method adjusts the step sizes based on the distance between the auxiliary and intermediate solution points. Numerical examples show that both MSWA and the self-regulated averaging method outperform the conventional MSA with much faster convergence speeds, and the speedup factors grow significantly for highly precise solutions.
The parameters of both MSWA and the self-regulated averaging method significantly affect the convergence rates. When only low accuracy is required, MSWA may converge more slowly than the conventional MSA; in contrast, the self-regulated averaging method consistently converges better than the conventional MSA. Although there are numerous ways to choose the weight parameter d in MSWA, its selection follows a "diagonal rule": a higher-accuracy solution requires a bigger weight parameter d. The SUE assignment examples also demonstrate the better performance of the self-regulated averaging method, which we therefore strongly recommend for practical SUE problems requiring high accuracy.
Although we only demonstrate the applicability of MSWA and the self-regulated averaging scheme to the SUE problem, these algorithms can readily be extended to other fixed point problems. Whenever MSA is applicable to a fixed point problem, MSWA and the self-regulated averaging scheme can also be applied to achieve faster convergence and higher accuracy. The proposed algorithms therefore have wide-ranging applications to a class of fixed point problems.

References

Akamatsu T (1996) Cyclic flows, Markov process and stochastic traffic assignment. Transp Res Part B 30(5):369–386
Bar-Gera H, Boyce D (2006) Solving a non-convex combined travel forecasting model by the method of successive averages with constant step sizes. Transp Res Part B 40(5):351–367
Bell MGH (1995) Alternatives to Dial's logit assignment algorithm. Transp Res Part B 29(4):287–295
Blum JR (1954) Multidimensional stochastic approximation methods. Ann Math Stat 25(4):737–744
Bottom J, Ben-Akiva M, Bierlaire M, Chabini I, Koutsopoulos H, Yang Q (1999) Investigation of route guidance generation issues by simulation with DynaMIT. In: Ceder A (ed) Transportation and traffic theory. Proceedings of the 14th International Symposium on Transportation and Traffic Theory, Pergamon, Amsterdam, pp 577–600
Bottom J, Chabini I (2001) Accelerated averaging methods for fixed point problems in transportation analysis and planning. Triennial Symposium on Transportation Analysis (TRISTAN IV), São Miguel, Azores, Preprints 1, pp 69–74
Cascetta E, Postorino MN (2001) Fixed point approaches to the estimation of O/D matrices using traffic counts on congested networks. Transp Sci 35(2):134–147
Chen M, Alfa AS (1991) Algorithms for solving Fisk's stochastic traffic assignment model. Transp Res Part B 25(6):405–412
Daganzo CF, Sheffi Y (1977) On stochastic models of traffic assignment. Transp Sci 11(3):253–274
Fisk C (1980) Some developments in equilibrium traffic assignment. Transp Res Part B 14(3):243–255
Frees E, Ruppert D (1990) Estimation following a sequentially designed experiment. J Am Stat Assoc 85:1123–1129
Magnanti TL, Perakis G (1997) Averaging schemes for variational inequalities and systems of equations. Math Oper Res 22(3):568–587
Maher MJ (1998) Algorithms for logit-based stochastic user equilibrium assignment. Transp Res Part B 32(8):539–549
Maher MJ, Hughes PC (1998) Recent developments in stochastic assignment modeling. Traffic Eng Control 39(3):174–179
Nagurney A, Zhang D (1996) Projected dynamical systems and variational inequalities with applications. Kluwer, Boston, Massachusetts
Nocedal J, Wright SJ (1999) Numerical optimization. Springer, New York
Polyak BT (1990) New method of stochastic approximation type. Autom Remote Control 51(7):937–946
Robbins H, Monro S (1951) A stochastic approximation method. Ann Math Stat 22(3):400–407
Sheffi Y (1985) Urban transportation networks: equilibrium analysis with mathematical programming methods. Prentice-Hall, New York
Sheffi Y, Powell WB (1982) An algorithm for the equilibrium assignment problem with random link times. Networks 12(2):191–207
Wong SC (1999) On the convergence of Bell's logit assignment formulation. Transp Res Part B 33(8):609–616
