1.6 SPARSITY AND THE OPTIMAL ORDERING OF CIRCUIT EQUATIONS
In this final section of the chapter, the efficiency of the equation solution method will be considered and it
will be shown that it can be improved by exploiting certain properties of the circuit equations. From the
previous discussion, we need approximately n^3 long operations (multiplications and divisions) for matrix
factorization and approximately n^2 long operations for forward and back substitution. This can become
expensive with repeated solutions of very large systems of equations. As will be seen, nonlinear circuit
analysis in the time domain requires perhaps several thousand repeated solutions of the system of equations
describing the circuit at different iterations and time points. If the circuit is complex, say 10,000 equations,
huge numbers of long operations may be required, e.g. of the order of 10^12. Even with fast computers this
will take a long time. In order to find ways to reduce the number of operations we can study the properties
of the system of equations describing electronic circuits. Of course, the specific equation formulation
method used must influence the structure of the system of equations from both the topological and the
numerical points of view. Thus, only systems arising from MNA will be considered.

Fig. 1.20 Positive gain amplifier.

The circuit of Fig. 1.20 produces a typical system of equations

(1.44a)

which, if G1=1 S, G2=1 S, G3=1 S, and G4=2 S, becomes

(1.45a)
After factorization and forward and back substitution we obtain

(1.45b)

and for E=1 V,

(1.45c)

What are the properties of this system and its solution? First of all, for the nodal admittance matrix of a
system describing a linear resistive network without any controlled sources, the reduced part of the modified
nodal admittance matrix, Gr in (1.14), is both topologically and numerically symmetric, with diagonal
elements dominating their corresponding row and column. These properties may be exploited both for
reducing the computer memory needed to store the matrix (symmetry) and for more effective pivot
extraction (diagonal dominance). The above properties can be preserved for any electronic circuit if
unilateral elements (controlled sources) are not allowed to affect the top-left corner of the matrix. In the
example above, the operational amplifier is modelled as a voltage-controlled voltage source. Besides the VCVS,
three other controlled sources are used for modelling electronic components such as transistors. In reality,
the rest of a component model (in both frequency and time domain analysis) is bilateral with signal paths
between all the component's terminals. From the topological point of view, it is unlikely that an asymmetric
stamp will be produced, except for the case of the operational amplifier. Numerically, however, electronic
components will always produce asymmetry which is important from the point of view of diagonal
dominance.

Another very important feature of the system matrix in (1.44a) is its sparsity. A nonzero off-diagonal
element is produced during the equation formulation process if a connection exists between the two
corresponding nodes. Bearing in mind that an electronic circuit node has only a few connections to
neighbouring nodes, say three to five, we might expect no more than five nonzero elements in each row of
the circuit matrix. So a circuit described by 10,000 equations would have no more than, say, 40,000
nonzero off-diagonal elements. Compare this with the 10,000 × 10,000 = 10^8 nonzero elements of a fully
connected circuit. A matrix in which most of the elements have the same value (in this case zero) is said to
be sparse. We can exploit this property in order to:

accelerate the solution process and


reduce the memory needed.

In order to do this we have to organise the program so that only nonzero elements are stored (memory
savings) and so that only nonzero elements are manipulated (acceleration of the factorization). The number
of operations may be dramatically reduced so as to be approximately proportional to the number of
equations (n) instead of n^3.
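The scale of this saving can be illustrated with a rough cost model. The figures below are illustrative only: the dense cost follows the n^3 + n^2 estimate quoted earlier, while the sparse cost assumes (hypothetically) about five stored nonzeros per row and a solver whose work grows linearly with n.

```python
# Rough comparison of long-operation counts for dense and sparse solution.
# These are illustrative cost models, not exact operation counts.

def dense_ops(n):
    """Approximate long operations for dense LU factorization (n^3)
    plus forward and back substitution (n^2)."""
    return n**3 + n**2

def sparse_ops(n, c=5):
    """Assumed sparse cost model: roughly proportional to n when each
    row holds no more than c nonzeros (c = 5 is a hypothetical value)."""
    return c * n

n = 10_000
print(dense_ops(n))   # on the order of 10^12 operations
print(sparse_ops(n))  # on the order of 10^4-10^5 operations
```

For a 10,000-equation circuit the two models differ by roughly seven orders of magnitude, which is why sparse storage and sparse-aware factorization are worth the extra bookkeeping.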

Before proceeding to consideration of sparse matrix techniques, let us see what happens to the system
(1.44a) during the solution. There are 36 matrix elements. Of the 30 off-diagonal elements, 19 are equal to
zero which means that, even in this simple example, the circuit matrix may be considered sparse. In
addition, this is an example where the RHS vector is also extremely sparse. This can be exploited as well.

From inspection of (1.45b), however, we find that the number of nonzero off-diagonal elements in the
factorized matrix has increased. There are 12 zero-valued off-diagonal elements in L and U, which,
roughly, is about half the number in A in (1.45a). In other words factorization influences the sparsity and,
here, reduces it. New nonzero elements are produced which are referred to as infills. The origin of an infill
can be seen from (1.41). If at least one nonzero product lip·upk exists for some p, then lik in (1.41a) will become
nonzero irrespective of the value of aik. Similarly, ukj becomes nonzero if a nonzero lkp·upj product
exists in (1.41b). Note that, conversely, a zero-valued lik or ukj may arise even if aik or akj, respectively, is
nonzero, but this can only occur in the unlikely event of complete cancellation and so will not be considered
further, except in the choice of pivot candidates.
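The infill condition above can be checked purely symbolically, using only the zero/nonzero pattern of the matrix. The following minimal sketch (not the book's code) tracks a boolean structure matrix and marks position (i, j) as an infill when, at elimination step k, both (i, k) and (k, j) are nonzero; possible numerical cancellation is deliberately ignored, matching the pessimistic assumption in the text.

```python
# Symbolic in-fill detection on the zero/nonzero structure of a matrix
# (True = nonzero). An in-fill appears at (i, j) at elimination step k
# when both (i, k) and (k, j) are structurally nonzero -- the structural
# analogue of a nonzero l_ik * u_kj product. Cancellation is ignored.

def symbolic_fill_ins(structure):
    """Return the set of (i, j) positions that become nonzero during
    elimination although they were zero in the original matrix."""
    n = len(structure)
    s = [row[:] for row in structure]  # work on a copy
    fills = set()
    for k in range(n):
        for i in range(k + 1, n):
            if not s[i][k]:
                continue
            for j in range(k + 1, n):
                if s[k][j] and not s[i][j]:
                    s[i][j] = True
                    fills.add((i, j))
    return fills

structure = [
    [True, False, True],   # a13 nonzero
    [True, True, False],   # a21 nonzero -> expect an in-fill at (1, 2)
    [False, False, True],
]
print(symbolic_fill_ins(structure))  # {(1, 2)}
```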

In order to understand better how the infills are created, let us consider the transformation of the matrix in
(1.45a) into L and U in (1.45b). A zero valued element in the original matrix A does not produce an infill.
Diagonal elements are also not responsible for infill creation. But a nonzero off-diagonal element in row k
can produce an infill in the non-factored part of the matrix (the 'reduced matrix'). The infill will be created if
a nonzero element is also found in column k in the opposite half of the matrix (where the diagonal is
considered the axis of symmetry). For example, a15 is responsible for creating the following infills: u25
(with l21 ), u45 (with l42 and u25 ), u55 (with l51 ), and l65 (through a set of elements). As a measure of the
number of infills potentially created by a matrix element akj, the Markowitz product (Markowitz, 1957) may
be used. It is computed as

(1.46)    Mkj = (nk − 1)(nj − 1)

where nk is the total number of nonzero elements in row k, and nj is the total number of nonzero elements in
column j of the remaining matrix. At the beginning of the factorization the whole matrix is considered.
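The Markowitz products for a whole remaining matrix can be computed directly from its structure. The sketch below assumes a boolean structure matrix and applies the (nk − 1)(nj − 1) count to every nonzero candidate element.

```python
# Markowitz products for the nonzero elements of the remaining matrix:
# with n_k nonzeros in row k and n_j nonzeros in column j, the product
# (n_k - 1)(n_j - 1) bounds the number of in-fills the pivot can create.

def markowitz_products(structure):
    """structure: list of lists of bools (True = nonzero) for the
    remaining matrix. Returns {(k, j): product} for nonzero elements."""
    n = len(structure)
    row_counts = [sum(row) for row in structure]
    col_counts = [sum(structure[i][j] for i in range(n)) for j in range(n)]
    return {(k, j): (row_counts[k] - 1) * (col_counts[j] - 1)
            for k in range(n) for j in range(n) if structure[k][j]}
```

A product of zero means the pivot's row or column holds a single nonzero, so choosing that element cannot create any infill at all.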

Accordingly, the sparsity may be severely reduced and no benefit will be obtained unless special attention is
paid to preserving the sparsity. This is achieved by adopting a suitable pivoting strategy. Each pivoting
element is chosen so as to produce as few infills as possible.

Finding the optimal set of pivoting elements is an NP-complete problem. This means that the number of
operations needed to find the optimum rises exponentially with the matrix size. A sub-optimal algorithm is
used to find an acceptable solution (Duff, 1977). Two such algorithms are those of Berry (1971) and Markowitz
(1957). The first is perhaps slightly better, while the second is easier to understand and program. Because of
its simplicity, the Markowitz procedure will be explained here. This algorithm is implemented in SPICE
(Nagel, 1975; Rohrer, 1992).

The pivoting process consists of reordering the equations (row pivoting), renumbering the variables
(column pivoting), or both. For example, if the first and fifth rows in (1.45) are interchanged (which is row
pivoting), the 1 at a15 will be replaced by the 0 from a55, which then suppresses all the infills resulting
from a15, as described above. Now the Markowitz product of the a11 element in the reordered system is
equal to zero, which is the best value for reducing the number of infills. At the same time, a nonzero diagonal
element results at a55 . Of course, both the original and the reordered matrix are non-singular and if no
further reordering or renumbering is done we can proceed with factorization. Unfortunately this is not
always the case. To show this let us consider the circuit of Fig. 1.21. It is very similar to that of Fig. 1.20,
the difference being hidden but very important. In the new circuit there are no currents flowing from any
node to ground except for the currents flowing through grounded ideal voltage sources (i4 and i5 ). We say
that there is a cut-set of ideal voltage sources that has no admittance representation. This circuit is described
by

(1.47a)

Fig. 1.21 A circuit producing a cut-set of ideal voltage sources.

No matter what the values of G1 and G2, the LU factorization performed as described will lead to a singular
L matrix.

Of course, this problem was noticed by the authors of MNA, Ho et al. (1975), and they, and others (Fang
and Tsividis, 1980; Hajj et al., 1981; Lee and Park, 1983; Tan, 1986), have proposed different pivoting
strategies to avoid zero pivots. Here the suggestion proposed in Kung (1986) and Berry (1971) will be
used. The reordering will be performed so that the equations with a single 1 will be manipulated first. This
can be achieved by interchanging the branch equation with the corresponding node equation. In the above
example, this holds for the first and fourth, and for the third and fifth equations. After these interchanges are
performed, the reordered system, which now avoids full cancellation on the diagonal, becomes

(1.47b)

This result, of course, is not optimal from the point of view of minimizing infills. A minimum number of
infills may be obtained by a simple interchange of the second and third equations in (1.47b) which
eventually leads to a fully factorized system. A general procedure is needed to choose the next interchange
after equations with a single 1 have been swapped. Markowitz suggests the following. Compute the
Markowitz product for every element in the bottom-right part of the matrix (which is the 'unfactorised' part)
and choose as a pivot the element with the smallest Markowitz number.
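One step of this selection can be sketched as follows. The code below, a hedged illustration rather than the SPICE implementation, scans the unfactorised part of a boolean structure matrix, takes the nonzero element with the smallest (nk − 1)(nj − 1) product, breaking ties by the first element found as in the text's example, and brings it to the pivot position by a row and a column interchange.

```python
# One Markowitz pivot-selection step on the structural (True = nonzero)
# remaining matrix: choose the nonzero element with the smallest
# (n_k - 1)(n_j - 1) product in the bottom-right part, then bring it to
# position (step, step) with a row and a column interchange.

def choose_and_swap_pivot(structure, step):
    """Mutates `structure` so the chosen pivot lands at (step, step).
    Returns the (row, col) position the pivot was found at."""
    n = len(structure)
    rows = range(step, n)
    row_counts = {i: sum(structure[i][step:]) for i in rows}
    col_counts = {j: sum(structure[i][j] for i in rows) for j in rows}
    best, best_cost = None, None
    for i in rows:
        for j in rows:
            if not structure[i][j]:
                continue
            cost = (row_counts[i] - 1) * (col_counts[j] - 1)
            if best is None or cost < best_cost:
                best, best_cost = (i, j), cost
    i, j = best
    structure[step], structure[i] = structure[i], structure[step]  # row pivoting
    for row in structure:                                          # column pivoting
        row[step], row[j] = row[j], row[step]
    return best
```

Repeating this step for step = 0, 1, ..., n − 1 yields the full sparsity-preserving ordering.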

For example, after reordering (1.44a) so that the equation with the single 1 is the first, the following system
is produced

(1.44b)

Note that when equations are reordered both the matrix and the right-hand-side vector are affected. Now,
after the first step of the factorization, the first column and first row are considered to be lik and ukj,
respectively. Thus the bottom-right corner is a 5 × 5 matrix from which the next pivot is chosen. The
Markowitz numbers are as follows

(1.48)

We can see that there are two elements (a46 and a55 ) which have a zero valued Markowitz product. These
are candidates for the next pivot. Without further criteria about which of these two should be chosen, we
will use a46 as a pivot assuming that it was the first found in the search for a zero valued Markowitz
product. This leads to the following

(1.44c)

where equations 2 and 4, and variables 2 (v2 ) and 6 (i6 ) have swapped places.

For this new system, in order to simplify the explanation, only the Markowitz numbers for the bottom-right
corner (4 × 4) of the matrix will be shown:

(1.49)
This suggests that equations 1 and 3, and variables 3 and 5 should be interchanged. Thus

(1.44d)

Now the (3 × 3) bottom-right part of the matrix gives the following set of Markowitz products

(1.50)

This suggests no change in the system (1.44d), i.e. a44 (−G2) will be taken as a pivot. Finally, the set of
Markowitz products for the (2 × 2) bottom-right corner of the matrix is

(1.51)

which again suggests that no changes need be made. This means that (1.44d) may be taken as the
optimal system from the algorithm's point of view. After factorization of this system's matrix, we obtain the
following matrices:

(1.52)

leading to

(1.53)

and

(1.54)

In order to assess this result, let us count the nonzero matrix elements in (1.52). In L and U together there
are 10 off-diagonal elements, which is approximately half the corresponding number for (1.45b): 18.
Note that no pivoting was applied when (1.45b) was produced. The reader should not be confused by the fact
that a smaller number of off-diagonal elements is obtained for the factorized system than for the original one.
Two additional infills were created during factorization, but they are located on the diagonal of the factorized matrix.

Further improvements to this algorithm can be obtained by including the prospective infills in the
Markowitz products, by deciding between equal Markowitz numbers at different matrix positions, and so
on. Nevertheless, a very important criterion is abandoned when pivoting for retaining sparsity is imposed.
This concerns the preservation of accuracy. We cannot expect to obtain the maximum accuracy with an
algorithm intended to preserve sparsity. In order to avoid losing accuracy we should monitor the pivot
value. If it is less than a given threshold then the ordering is unacceptable, and the rest of the matrix has to
be reordered with a new pivot acceptable from both the accuracy and sparsity points of view. The threshold
value is an important matter. A minimum value of 10^-13 is suggested for an equation solver written for a
circuit analysis program using double precision arithmetic. This value is used in the SPICE program. A
more systematic procedure to avoid losing accuracy is described in MacInnes (1991).
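Combining the two criteria can be sketched as below. This is an illustrative compromise, not the SPICE algorithm: candidates are ranked by Markowitz product for sparsity, and the first one whose magnitude passes the accuracy threshold is accepted.

```python
# Combining the sparsity and accuracy criteria: among pivot candidates
# ranked by Markowitz product, accept the first whose magnitude passes
# the threshold (1e-13 is the minimum value quoted for a double
# precision solver).

def select_pivot(candidates, threshold=1e-13):
    """candidates: list of (markowitz_product, value, position) tuples.
    Returns the position of the best-sparsity pivot that also passes
    the accuracy threshold, or None if every candidate is rejected."""
    for product, value, position in sorted(candidates, key=lambda c: c[0]):
        if abs(value) >= threshold:
            return position
    return None
```

A return value of None signals that the remaining matrix must be reordered with different candidates, exactly the fallback the text describes.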

Finally, an important matter related to the reordering of a system of equations describing an electronic
circuit is that this system (with probably, different element values) will be solved many times. For instance,
consider frequency domain analysis where the value of the angular frequency is used as a parameter. A new
system of equations arises at each frequency value but the matrix has the same zero-nonzero structure.
There is no need for the renumbering and reordering to be performed every time the system is reformulated.
Instead, the renumbering and reordering are performed symbolically, based on the predicted matrix structure
before the real matrix elements are computed. During the equation formulation the computer program will
work with an internal order of equations and numbering of variables. After solution the user-defined
enumeration will be re-established and the results displayed. It is clear that accuracy cannot be taken into
account if a reordering is done symbolically, unless we are prepared to pay the heavy price of reordering at
times during the course of the simulation.
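The reuse of one symbolic ordering over many solves can be sketched as follows. The routines `build_system`, `compute_ordering` and `lu_solve` are hypothetical placeholders standing in for the real formulation and solver code; only the control flow, ordering computed once, numerics repeated per frequency, is the point.

```python
# Reusing a single symbolic ordering across a frequency sweep: the
# permutation is derived once from the zero/nonzero structure, then
# applied to every numerical system. The three callables are
# hypothetical placeholders for the real formulation/solver routines.

def frequency_sweep(frequencies, build_system, compute_ordering, lu_solve):
    """Compute the ordering from the first system's structure, then
    solve every system in the sweep with that same ordering."""
    first = build_system(frequencies[0])
    ordering = compute_ordering(first.structure)  # symbolic step, done once
    results = []
    for f in frequencies:
        system = build_system(f)  # same structure, new element values
        results.append(lu_solve(system, ordering))
    return results
```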

Having briefly resolved all the problems relating to equation formulation, reordering and renumbering, and
solution, we can address the problem of storing a sparse system of circuit equations. In order to avoid
lessons in data structures, we will simply note that, for sparse matrices with no more than about five nonzero
off-diagonal elements per row, the best way to represent the circuit matrix is to use an orthogonal linked list. Only
nonzero matrix elements are stored and represented by list nodes. Each matrix node is linked by two
pointers to the matrix elements below and to the right of it. Hence, the following data will characterise a
matrix element record: the matrix element value, the row number, the column number, a pointer to the
element below, and a pointer to the next element to the right. In order to make the manipulation of the
matrix easier three arrays are used. The FIC array contains pointers to the first element in each column of
the matrix. The FIR array contains pointers to the first element in each row of the matrix. Finally, the D
array contains pointers to the diagonal elements of the circuit matrix. This structure is shown in Fig. 1.22.
Fig. 1.22 Orthogonal linked list describing the system matrix for the circuit of Fig. 1.20.
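A minimal Python sketch of this record and its access arrays is given below. The class and method names are illustrative; for brevity the insertion routine links each new element at the head of its row and column lists, whereas a full implementation would keep the lists ordered by index.

```python
# Orthogonal linked list for a sparse matrix: each stored (nonzero)
# element records its value, row and column, plus two links -- to the
# next element below in its column and to the next element to its right
# in its row. FIR, FIC and D are the three access arrays from the text.

class Element:
    def __init__(self, value, row, col):
        self.value = value
        self.row, self.col = row, col
        self.below = None   # next nonzero down the column
        self.right = None   # next nonzero along the row

class SparseMatrix:
    def __init__(self, n):
        self.fir = [None] * n  # FIR: first element in each row
        self.fic = [None] * n  # FIC: first element in each column
        self.d = [None] * n    # D: diagonal elements

    def insert(self, value, row, col):
        """Link a new element at the head of its row and column lists
        (an ordered insertion is omitted to keep the sketch short)."""
        e = Element(value, row, col)
        e.right, self.fir[row] = self.fir[row], e
        e.below, self.fic[col] = self.fic[col], e
        if row == col:
            self.d[row] = e
        return e
```

Traversing a row then amounts to following `right` pointers from `fir[k]`, and traversing a column to following `below` pointers from `fic[j]`, which is exactly the access pattern LU factorization needs.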
