
ANTENNA PATTERN SYNTHESIS BASED ON OPTIMIZATION

IN A PROBABILISTIC SENSE

BY

WILLIAM FORREST RICHARDS

B.S., Old Dominion University, 1970

THESIS

Submitted in partial fulfillment of the requirements


for the degree of Master of Science in Electrical Engineering
in the Graduate College of the
University of Illinois at Urbana-Champaign, 1972

Urbana, Illinois
UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN

THE GRADUATE COLLEGE

August 1972

I HEREBY RECOMMEND THAT THE THESIS PREPARED UNDER MY

SUPERVISION BY WILLIAM FORREST RICHARDS

ENTITLED ANTENNA PATTERN SYNTHESIS BASED ON OPTIMIZATION

IN A PROBABILISTIC SENSE

BE ACCEPTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR

THE DEGREE OF MASTER OF SCIENCE

In Charge of Thesis

Head of Department

Recommendation concurred in†

Committee

on

Final Examination†

† Required for doctor's degree but not for master's.

UNIVERSITY OF ILLINOIS
Urbana-Champaign Campus
The Graduate College
Administration Building

FORMAT APPROVAL

To the Graduate College:

The format of the thesis submitted by William Forrest Richards
for the degree of Master of Science is acceptable to the
Department of Electrical Engineering.

July 31, 1972                         (Signed)
Date                                  Departmental Representative
ACKNOWLEDGEMENT

The author wishes to express his gratitude for the patient guidance
of Professor Y. T. Lo in the studies that led to, and in the preparation
of, this thesis. Appreciation is also expressed to Professor A. H. Sameh
for his discussions with the author on matters relating to numerical
analysis.

Thanks are expressed to Professor O. L. Gaddy for obtaining funds
for typing and publication, and to Mrs. Burns and Miss Anderson for
typing. Special thanks are expressed to Mrs. Brown of the Graduate College
for the extension given during the period of personal tragedy experienced
by the author.

TABLE OF CONTENTS

                                                                    Page

NOTATION . . . . . . . . . . . . . . . . . . . . . . . . . . . . .    1
STATEMENT OF PROBLEM . . . . . . . . . . . . . . . . . . . . . . .    3
MODELING OF ERRORS AND PHILOSOPHIES OF OPTIMIZATION  . . . . . . .    7
MINIMIZATION OF E{ε} . . . . . . . . . . . . . . . . . . . . . . .   11
APPROXIMATE CALCULATION OF THE DISTRIBUTION FUNCTION OF ε  . . . .   16
AN EXAMPLE OF THE SYNTHESIS OF A TWO-DIMENSIONAL PATTERN
  FUNCTION BASED ON MINIMIZATION OF ε̄ . . . . . . . . . . . . . .   21
AN EXAMPLE OF AMPLITUDE PATTERN SYNTHESIS  . . . . . . . . . . . .   49
CONCLUSIONS  . . . . . . . . . . . . . . . . . . . . . . . . . . .   53
APPENDIX A . . . . . . . . . . . . . . . . . . . . . . . . . . . .   55
APPENDIX B . . . . . . . . . . . . . . . . . . . . . . . . . . . .   58
LIST OF REFERENCES . . . . . . . . . . . . . . . . . . . . . . . .   60

LIST OF TABLES

TABLE                                                               Page

1. RESULTS OF OPTIMIZATION AND SIMULATION FOR σ = .1%
   AND VARIOUS VALUES OF M . . . . . . . . . . . . . . . . . . . .   32
2. RESULTS OF OPTIMIZATION AND SIMULATION FOR M = 10
   AND VARIOUS LEVELS OF ERROR . . . . . . . . . . . . . . . . . .   34
3. COMPUTED EIGENVALUES OF G . . . . . . . . . . . . . . . . . . .   42
4. RESULTS OF OPTIMIZATION OF ε̄ . . . . . . . . . . . . . . . . .   50
LIST OF FIGURES

Figure                                                              Page

 1. Illustration of Three Philosophies of Optimization . . . . . .    9
 2. An Example of the General Antenna Structure Used in
    Chapter VI . . . . . . . . . . . . . . . . . . . . . . . . . .   22
 3. Synthesized and Desired Pattern Functions for M = 1  . . . . .   25
 4. Synthesized and Desired Pattern Functions for M = 2  . . . . .   26
 5. Synthesized and Desired Pattern Functions for M = 3  . . . . .   27
 6. Synthesized and Desired Pattern Functions for M = 4  . . . . .   28
 7. Synthesized and Desired Pattern Functions for M = 6  . . . . .   29
 8. Synthesized and Desired Pattern Functions for M = 8  . . . . .   30
 9. Synthesized and Desired Pattern Functions for M = 10 . . . . .   31
10. Synthesized and Desired Pattern Functions for σ = 0% . . . . .   35
11. Synthesized and Desired Pattern Functions for σ = 1% . . . . .   36
12. Synthesized and Desired Pattern Functions for σ = 3% . . . . .   37
13. Synthesized and Desired Pattern Functions for σ = 5% . . . . .   38
14. Synthesized and Desired Pattern Functions for σ = 7% . . . . .   39
15. Synthesized and Desired Pattern Functions for σ = 10%  . . . .   40
16. Mean ε's for Solutions Regularized to Various Noise
    Levels but in the Presence of an Actual σ = 5% . . . . . . . .   41
17. Edgeworth Series and Simulation for N = M = 10, σ = .1%  . . .   46
18. Distribution Functions for Solutions Regularized to
    Various Noise Levels but in the Presence of an Actual
    σ = 5% Level . . . . . . . . . . . . . . . . . . . . . . . . .   47
19. (a) Pattern of Initial Approximation Corresponding to
    Minimum of Equation (8) (K ≠ 0), (b) Pattern of Final Iterate
    Corresponding to Minimum of Equation (17) (K = 0)  . . . . . .   51
20. (a) Phase Function of Pattern of Figure 19 (a), (b) Phase
    Function of Pattern of Figure 19 (b) . . . . . . . . . . . . .   52

I. NOTATION

The following notation is used throughout this report; symbols not
listed below are defined in subsequent chapters.

‾         denotes a three-dimensional Euclidean vector; e.g., A̅ is a vector.

*         denotes complex conjugation of a complex object; e.g., A* is
          the complex conjugate of A.

^         denotes a unit three-dimensional Euclidean vector in the
          sense that if ê is a unit vector, then ê* · ê = 1.

>         denotes a 1 × N matrix whose elements may be complex
          numbers or variables or complex three-dimensional
          Euclidean vectors or vector functions; e.g.,
          J> = [J₁, J₂, ..., J_N] and V> = [V̅₁, V̅₂, ..., V̅_N].

†         denotes the Hermitian conjugate (conjugate transpose) of
          a matrix; e.g., if A is a matrix, then the element in the
          ith row and jth column of A†, [A†]ᵢⱼ, is [A]*ⱼᵢ.

<         denotes (>)†.

<XY>      is the product defined in the usual way.

E{Q}      is the expectation or probability mean of random variable or
          process Q.

<Q>       denotes the sample mean of Q; that is, if Q₁, Q₂, ..., Qₙ
          are random samples of random variable or process Q, then
          <Q> = (1/n)(Q₁ + Q₂ + ... + Qₙ).

Re Q      is the real part of Q.

Im Q      is the imaginary part of Q.

(r, θ, φ) is a point in the usual polar coordinates.

j         denotes √−1.

e^{jωt}   is the time variation of all the fields and currents
          considered in this report.

k₀        denotes the free-space wavenumber.

Z₀        is the wave impedance of free space.

dΩ        is the element of solid angle, sin θ dθ dφ.
II. STATEMENT OF PROBLEM

It is well known [1] that the radiation electric field can be
expressed as

    E̅(r, θ, φ) = −j(k₀Z₀/4πr) f̅(θ, φ) e^{−jk₀r}

where (r, θ, φ) is the point of observation and f̅(θ, φ), called the
pattern function, is given by

    f̅(θ, φ) = −r̂ × r̂ × ∫∫ J̅(r', θ', φ') e^{jk₀ r̅'·r̂} dr' dΩ'

where J̅(r, θ, φ) is the source current density.

Generally, the synthesis problem is to find a realizable function J̅
such that the pattern function f thus realized is close in some sense to
a desired pattern function f_d. To give meaning to the term closeness, a
norm is usually defined with a corresponding performance index
ε = ||f_d − f||. As ε decreases, f becomes closer to f_d. It appears
[2,3] that a pattern function can be approximated as precisely as one
wishes. However, this is done at the expense of requiring very large
levels of currents. Suppose that the errors that will inevitably be
present in the physical realization of the synthesized currents are
relative in nature, that is, as the mean currents grow, the probability
of large errors also grows. For this type of errors (although if built
precisely the synthesized antenna may produce a pattern that approximates
the desired pattern very closely) the large currents necessary for this
precision will likely give rise to large error currents. Since large
error currents usually give rise to large errors in the pattern function
synthesized, it appears that asking for too much precision can do more
harm than good. Large currents also create other problems such as high
ohmic losses and extreme frequency sensitivity.

A classical device for obtaining a function J̅ which minimizes ε is
to restrict J̅ to a finite dimensional linear space spanned by basis
functions u̅₁, u̅₂, ..., u̅_N; i.e., J̅ = J₁u̅₁ + ... + J_N u̅_N. The
pattern thus obtained is a member of a finite linear normed space P
spanned by basis functions

    V̅ᵢ = −r̂ × r̂ × ∫∫ u̅ᵢ e^{jk₀ r̅'·r̂} dr' dΩ',   i = 1, 2, ..., N,

that is, f = J₁V̅₁ + ... + J_N V̅_N = <JV>. It is particularly
convenient to define an inner product between any two members of P, f₁
and f₂, as [f₁, f₂] = ∫ f̅₁* · f̅₂ w dΩ, where w(θ, φ) is positive for all
directions (θ, φ). With the norm defined as ||·||² = [·, ·], the
performance index ε = ||f_d − f||² can be expanded as

    ε = ||f_d||² + <JGJ> − 2Re <JC>,                              (1)

where

    G = ∫ V><V w dΩ

is a positive definite Hermitian [4] matrix and

    C> = ∫ f̅_d* · w V> dΩ.                                       (2)

It can be shown that ε attains its unique minimum when the complex
currents J> are given by

    J₀> = G⁻¹ C>.                                                 (3)

If U is a unitary matrix which diagonalizes G and λ₁ ≥ λ₂ ≥ ... ≥ λ_N > 0
are the eigenvalues of G, then (3) may be written equivalently as

    J₀> = U Diag[λ₁⁻¹, λ₂⁻¹, ..., λ_N⁻¹] U† C>,                   (4)

indicating that J₀> may be strongly influenced by the last few small
eigenvalues of G and that the level of currents, of which <J₀J₀> is a
measure, may be large if some of the eigenvalues of G are small. Indeed,
it can be shown [5] that as more basis functions are added to P (which adds
more rows and columns to G) the smallest eigenvalue can certainly never
increase and may decrease. Thus, as remarked earlier, if the unavoidable
errors made in realization are relative in nature, the solution obtained
from (3) by minimizing ε may not really be an optimum solution in any
practical sense.
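As a modern numerical aside (not part of the original analysis), the eigenvalue sensitivity of Equations (3) and (4) can be sketched in a few lines. The matrices G and C below are made-up diagonal stand-ins, not an actual antenna's matrices; they only illustrate how a single small eigenvalue inflates the current-level measure <J₀J₀>.

```python
import numpy as np

def classical_currents(G, C):
    """J0 = U Diag(1/lambda) U^H C, the eigen-form of Equation (4)."""
    lam, U = np.linalg.eigh(G)            # eigenvalues in ascending order
    return U @ ((U.conj().T @ C) / lam)   # same as solving G J0 = C

# Illustrative stand-ins: a well-conditioned G and one with a tiny eigenvalue.
G_good = np.diag([2.0, 1.0, 0.5])
G_bad = np.diag([2.0, 1.0, 1e-6])
C = np.ones(3)

J_good = classical_currents(G_good, C)
J_bad = classical_currents(G_bad, C)

# Both satisfy G J0 = C exactly, yet <J0 J0> jumps from 5.25 to about 1e12
# because one eigenvalue of G is small.
print(np.vdot(J_good, J_good).real, np.vdot(J_bad, J_bad).real)
```

The point is purely structural: the exact minimizer of ε exists for any positive definite G, but its current level is governed by the reciprocal squares of the eigenvalues, which is what the regularization of the later chapters addresses.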

A number of attempts have been made to define a more realistic sense of
optimization than the minimization of ε. Rhodes [6] minimized ε subject
to the constraint that Taylor's superdirective ratio [7] is some pre-
determined constant, γ. In the same spirit, Lo [8] maximized gain and
signal-to-noise ratio subject to a constraint on Q = <JJ>/<JGJ>. In
both cases, by constraining γ or Q, more practical solutions with lower
current levels than those obtained by simply minimizing ε or maximizing
gain and signal-to-noise ratio, respectively, were found. However, the
former required the use of complicated functions and neither answered the
question of exactly how γ and Q should be determined. Cabayan [9]
synthesized line source distributions by minimizing ε + α ∫₋ₐᵃ |u(x)|² dx,
where u(x) is the source current distribution in an aperture of 2a, and
α is some positive constant called the "regularization parameter." This
method, which is computationally simpler than minimizing ε with Q or γ
constrained, also leads to practical solutions with current levels
∫₋ₐᵃ |u(x)|² dx ≤ (1/α) ||f_d||². Cabayan gave a method for determining
α, but it is one which requires a random simulation, which can be costly
computationally for large sample sizes and subject to question for small
sample sizes. A third method, developed in this paper, approaches the problem
directly by assuming that the error currents are random variables and
relative in nature. Under this assumption, ε becomes a random
variable characterized by a distribution function whose various
properties may be optimized. By minimizing E{ε}, the concept of the
regularization parameter is generalized and the need for simulation is
eliminated. By making use of an asymptotic series derived from the normal
distribution, the distribution function of ε, F(ε; J>), which depends on
the mean currents J>, can be approximated. This approximate distribution
may then be used for optimization of other properties of F(ε; J>).
For the examples considered in this paper, a number of simulations were
performed to check the theory, and their results are found to be in
excellent agreement with theory.
III. MODELING OF ERRORS AND PHILOSOPHIES OF OPTIMIZATION

Accepting the fact that any physical realization of an antenna is
subject to errors of a random nature, the currents will be considered
as random variables J̃>. For convenience, the random error currents, δJ>,
are defined as δJ> = J̃> − J>, where J> ≜ E{J̃>} are the mean currents.
The first moment matrix of δJ> is zero. The second moment matrix of
δJ> is assumed to exist and be of the form

    E{δJ><δJ} = Λ,    Λᵢⱼ = Xᵢⱼ JᵢJⱼ*,                             (5)

where the matrix X formed by the elements Xᵢⱼ is an N × N nonnegative
definite Hermitian matrix. That the covariance matrix Λ is of the
required nonnegative definite form for all J> is easily verified by
forming <YΛY> = Σᵢⱼ (YᵢJᵢ)(YⱼJⱼ)* Xᵢⱼ = <ZXZ> ≥ 0, where Zᵢ = YᵢJᵢ. This
is true for all Y>, so that Λ must also be nonnegative definite. It
should be noted that at this point, except for the form of the first and
second moment matrices of δJ>, no assumption has been made as to the precise
form of the probability law which δJ> obeys.

One observes from the form of (5) that as the amplitudes of the
currents grow, the probability of having large error currents also
grows. Large error currents give rise to large errors in the pattern
function. On the other hand, it was pointed out in Chapter II that if
everything could be done with precision, the better the match to the desired
pattern function, the higher the current level becomes. Thus it appears
that a compromise might be struck such that the resulting antenna is
"optimum" in some probabilistic sense.

If the currents are random, then the pattern function they produce
will also have to be random. Thus the pattern becomes

    f̃(θ, φ) = <J̃V> = <δJ V> + <JV> = δf + f.

Replacing f by f̃ in the definition of ε corresponding to Equation (1),
ε becomes the random variable given by

    ε̃ = ||f_d − f − δf||².                                        (6)

As noted in Chapter II, one classically obtains a solution to the
synthesis problem simply by minimizing ε in (1). When ε is thought of
as a random variable as defined in (6), then it is meaningless to talk
about the minimum of ε̃. Rather, one must choose some attribute of the
probability law of ε̃ on which to optimize. For instance, if F(ε; J>) =
probability of the event {ε̃ < ε for the given set of mean currents J>}
is the distribution function of ε̃, then one might choose to maximize F(ε; J>)
with respect to J>, holding ε fixed. Such a scheme will be referred to
as "vertical optimization." One might set F(ε; J>) = p for p ∈ (0,1), thus
implicitly defining a function ε = ε(J>). In this case one would attempt
to find a set of currents J> which minimizes ε(J>). This scheme will be
called "horizontal optimization." A third method is to minimize E{ε̃} with
respect to J>. The three methods are illustrated in Figure 1 (a, b, and
c, respectively). In each of the cases illustrated in Figure 1, the
distribution functions are labeled by their corresponding mean currents.
Of the three possible solutions given in the illustration, J₁>, J₂>, and
J₃> are the optimum solutions to the first, second, and third schemes,
respectively.

Except for the case where ε of the vertical optimization scheme
is chosen to be the result, ε_min, of the horizontal optimization, the
solutions to the horizontal and vertical optimization schemes will be
different. It is also doubtful that the solution to the third optimization
scheme would coincide with those of the first two methods. Thus, it appears
[Figure 1: (a) vertical optimization of F(ε; J>); (b) horizontal
optimization, showing ε_MIN; (c) optimization of E{ε̃}, with the mean
ε̄_k referred to F(ε; J_k>).]

Figure 1. Illustration of Three Philosophies of Optimization


that these three methods represent fundamentally different philosophies
of optimization. The choice of method one employs among these three
(and many others that can be derived) depends, among other things, on
the amount of computational effort required by each method.

It has been tacitly assumed that solutions exist to the three
optimization schemes discussed above. It is shown in the next chapter
that a solution exists to the third scheme. Although no rigorous proof
is posed for the existence of solutions to the horizontal and vertical
optimization schemes, the following heuristic argument is given. First
of all, F(ε; J>) is bounded from above by unity in the vertical optimization
procedure, and ε(J>), defined implicitly by F(ε; J>) = p, is bounded to
the left at least by zero in the horizontal optimization scheme. Thus,
the only way for solutions to the two problems not to exist is for the
levels of currents to increase without bound as F(ε; J>) increases and
ε(J>) decreases in the first and second methods, respectively. However,
since the errors one is prone to make grow proportionately to the size
of the mean currents, most sample values of ε̃ would eventually be very
large, indicating that F(ε; J>) is reduced for a fixed finite ε in the
vertical scheme, and that ε(J>) for a fixed p = F(ε; J>) is increased
in the horizontal scheme, both being contrary to hypothesis. Therefore,
it appears that the current levels are bounded and hence solutions exist.

IV. MINIMIZATION OF E{ε}

Of the three types of optimization discussed in the previous chapter,
minimization of E{ε̃} is the simplest computationally. The concept of
regularization used by Cabayan [10] is a natural result of this optimization
procedure. Not only does this procedure give physical meaning to
regularization, but it generalizes the concept of the regularization parameter
and provides a simple and direct way of determining the proper amount of
regularization that should be used, without the need for numerical
experimentation. Three related versions of this type of optimization are
presented below.

Equation (6) can be expanded to

    ε̃ = ||f_d − f||² + ||δf||² − 2Re [f_d − f, δf].               (7)

Since E{δJ>} = 0, E{δf} also vanishes and, therefore, the expectation of
anything which is linear in δf will also vanish. Expanding E{||δf||²},

    E{||δf||²} = E{<δJ G δJ>} = Σᵢⱼ E{δJᵢδJⱼ*} Gᵢⱼ
               = Σᵢⱼ Xᵢⱼ Gᵢⱼ JᵢJⱼ* = <JKJ>,

where Kᵢⱼ = Xᵢⱼ Gᵢⱼ, i, j = 1, 2, ..., N. Taking the mean of (7),

    ε̄ = E{ε̃} = ||f_d − f||² + <JKJ>,                              (8)

which is the ε as defined in Equation (1) with the "regularization" <JKJ>
added.

Notice that in the special case when K is the scalar matrix αI,
(8) reduces to ε̄ = ||f_d − f||² + α<JJ>, which is the regularized performance
index used by Cabayan [10] in the discrete array case.

In order to extend the theory to the case of line sources, which
were also considered by Cabayan, let δu(x) be a random process with mean
zero with each sample a continuous function defined on −a ≤ x ≤ a. Let
the mean source distribution be u(x), which generates a pattern

    f(ξ) = e(ξ) ∫₋ₐᵃ e^{jξx} u(x) dx,

where e(ξ) is the normalized element pattern, and ξ = k₀ sin θ with θ
being the angle of deviation from the broadside direction. For any
antenna whose construction errors, δu(x), are random in nature and obey
a probability law with a correlation function

    E{δu(x) δu(y)*} = X(x, y) u(x) u(y)*,

where for all u, 0 ≤ ∫₋ₐᵃ ∫₋ₐᵃ X(x, y) u(x) u(y)* dx dy < ∞, the mean of
the norm squared of δf is

    E{||δf||²} = ∫₋ₐᵃ ∫₋ₐᵃ X(x, y) G(x − y) u(x) u(y)* dx dy

where

    G(x − y) = ∫₋ₖ₀^{k₀} e(ξ)e*(ξ) e^{jξ(x−y)} dξ.

Letting K(x, y) = G(x − y) X(x, y),

    ε̄ = ||f_d − f||² + ∫₋ₐᵃ ∫₋ₐᵃ K(x, y) u(x) u*(y) dx dy.         (9)

In the special case when X(x, y) = αδ(x − y), (9) reduces to

    ε̄ = ||f_d − f||² + α ∫₋ₐᵃ |u(x)|² dx,

which is again the performance index used by Cabayan.

It appears from the analysis above that the concept of the regularization
"parameter" (or more precisely, the regularization matrix or function) is a
natural property of the antenna structure, embodied in the matrix or
function G, and of the correlation matrix or function X. Physically,
the quadratic form of the regularization matrix, <JKJ>, or of the
regularization function, ∫₋ₐᵃ ∫₋ₐᵃ K(x, y) u(x) u(y)* dx dy, is the mean
power radiated by an antenna of the given structure excited by all possible
error currents δJ> or error aperture functions δu(x).

In view of the positive definiteness of G and the nonnegative definiteness
of K (which follows from ||δf||² ≥ 0), a unique set of currents

    J> = (G + K)⁻¹ C>                                             (10)

minimizes ε̄, where C> is defined in Equation (2). (The solution of the
line source case can be obtained by a slight extension of the method
Cabayan used to obtain solutions for scalar α.)

It was remarked in Chapter II that the small eigenvalues of G cause
all the difficulty in inverting G to obtain J>. However, it is clear
from Equation (10) that eigenvalues smaller than the level of errors
expressed in K are essentially "cut off," so they can no longer give
rise to huge currents.

The performance index may be calculated from (1), (8), and (10) to be

    ε̄ = ||f_d||² − <J(G + K)J> = ||f_d||² − <C(G + K)⁻¹C>.        (11)

Substituting (10) into (1),

    ε = ||f_d||² + <C(G + K)⁻¹G(G + K)⁻¹C> − 2Re <C(G + K)⁻¹C>.   (12)

The performance index for the unregularized case, K = 0, is

    ε₀ = ||f_d||² − <C G⁻¹ C>,                                    (13)

corresponding to J₀> defined in Equation (3).
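As an illustrative modern sketch (not part of the original text), the cut-off effect of Equation (10) can be demonstrated with the relative-error regularization Kᵢⱼ = Xᵢⱼ Gᵢⱼ; here X = 2σ²I is assumed, as in the error model of Chapter VI, so that K = 2σ² Diag[G₁₁, ..., G_NN]. The matrices G and C are invented stand-ins:

```python
import numpy as np

# Deterministic orthogonal matrix (a Householder reflector) used to give
# G off-diagonal structure with one very small eigenvalue.
v = np.full(3, 1.0 / np.sqrt(3.0))
Q = np.eye(3) - 2.0 * np.outer(v, v)
G = Q @ np.diag([3.0, 1.0, 1e-7]) @ Q.T       # Hermitian, tiny third eigenvalue
C = np.ones(3)

sigma = 0.05                                  # 5 percent relative error level
K = 2 * sigma**2 * np.diag(np.diag(G))        # K = 2 sigma^2 Diag[G_11, ..., G_NN]

J_unreg = np.linalg.solve(G, C)               # Equation (3): the K = 0 solution
J_reg = np.linalg.solve(G + K, C)             # Equation (10)

# Eigenvalues of G far below the error level in K are effectively "cut off",
# so the regularized current level <JJ> stays at a practical size while the
# unregularized one explodes.
print(np.vdot(J_unreg, J_unreg).real, np.vdot(J_reg, J_reg).real)
```

Note the design point: K is built from the diagonal entries of G, which remain O(1) even when an eigenvalue of G is nearly zero, which is exactly why the inversion in (10) is tamed.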
The performance index defined in Equation (7) requires that the
phase and polarization of f_d(θ, φ) be specified. A more practical
problem is to try to match |f| to |f_d|. Unfortunately, this leads to
nonquadratic performance indices, which in turn lead to nonlinear equations
for the solutions. In this case the performance index given in Equation (1)
is modified to

    ε_a = || |f_d| − |f| ||².                                     (14)

Replacing f_d in Equation (1) by |f_d| · (f/|f|), the currents minimizing
(14) satisfy J₀> = G⁻¹ C(J₀>)>, where

    C(J>)> = ∫ |f_d| (<JV>*/|<JV>|) V> w(θ, φ) dΩ.                (15)

J₀> can be determined by minimizing (14) by some iterative scheme.

When considering currents as random variables, and proceeding exactly
as before, the random performance index for amplitude synthesis becomes

    ε̃_a = || |f_d| − |f + δf| ||².                                (16)

Unfortunately, if one attempts to calculate E{ε̃_a} as defined in (16),
extremely complicated computations are encountered which for simple error
models reduce to the evaluation of integrals of confluent hypergeometric
functions. To avoid these difficulties, the desired pattern function is
chosen to be f_d = |f_d| · (f/|f|), in which case (8) must be minimized by
iteration. The solution obtained by minimizing (8) is not expected to be
the same as, or produce as small an E{ε̃_a} as, that obtained by minimizing
E{ε̃_a}; however, it seems reasonable to expect that minimizing (8) would
yield a better solution in the sense of lower E{ε̃_a} than one obtains for
an arbitrary choice of the phase and polarization of f_d. For the given
choice of f_d, (8) can be written equivalently as

    ε̄_a = || |f_d| − |f| ||² + <JKJ>,                             (17)

which again is a generalization of the performance index used by
Cabayan. The currents minimizing (17) satisfy

    (G + K)J> = C(J>)>                                            (18)

with C(J>)> given by (15).
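The iterative scheme implied by Equation (18) can be sketched numerically: at each step the phase of the current pattern is attached to the desired amplitude, and the resulting linear regularized problem is solved. The sampled basis patterns V, quadrature weights w, desired amplitude fd_mag, and the 1% regularization below are all invented stand-ins, not the thesis's actual antenna; the iteration itself is of the standard alternating (majorize–minimize) type, which never increases the index of Equation (17):

```python
import numpy as np

rng = np.random.default_rng(2)
n_dirs, N = 200, 4
V = rng.normal(size=(n_dirs, N)) + 1j * rng.normal(size=(n_dirs, N))
w = np.full(n_dirs, 1.0 / n_dirs)             # crude quadrature weights
fd_mag = np.abs(np.sin(np.linspace(0.0, np.pi, n_dirs))) + 0.1

G = (V.conj().T * w) @ V                      # discretized G of Equation (1)
K = 0.01 * np.diag(np.diag(G).real)           # a modest regularization matrix
A = G + K

def objective(J):
    """|| |fd| - |f| ||^2 + <JKJ>, the index of Equation (17)."""
    f = V @ J
    return (np.sum(w * (fd_mag - np.abs(f))**2) + (J.conj() @ K @ J)).real

J = np.linalg.solve(A, (V.conj().T * w) @ fd_mag.astype(complex))
obj0 = objective(J)
for _ in range(50):
    f = V @ J
    fd_eff = fd_mag * f / np.abs(f)           # |fd| carrying the phase of f
    J = np.linalg.solve(A, (V.conj().T * w) @ fd_eff)   # Equation (18)
obj = objective(J)
print(obj0, obj)                              # the iteration does not increase (17)
```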

V. APPROXIMATE CALCULATION OF THE DISTRIBUTION FUNCTION OF ε

Minimization of ε̄ was seen in Chapter IV to provide a simple means
of obtaining a practical solution to the synthesis problem. However,
it is also interesting, and necessary if horizontal, vertical, or other
optimization techniques are to be used, to consider other properties
of the distribution function of ε̃. Rearranging Equation (7),

    ε̃ = <XGX> + ε₀,                                               (19)

where ε₀ is given by (13) and X> = δJ> + ΔJ>, where ΔJ> = J> − J₀>
with J₀> defined in Equation (3). Although the first moment of ε̃ is
obtained directly in Chapter IV, it is very complicated to use the
representation of ε̃ in (19) to generate higher moments of ε̃. An
alternative representation is sought.


Let E{Re δJ> Re <δJ} = Λ_RR, E{Im δJ> Im <δJ} = Λ_II, and
E{Re δJ> Im <δJ} = Λ_RI. Then the covariance matrix of the 2N random
variables [Re <δJ, Im <δJ] is given by

    [ Λ_RR    Λ_RI ]
    [ Λ_RIᵗ   Λ_II ].

It follows that Λ of Equation (5) is given by Λ = Λ_RR + Λ_II + j[Λ_RIᵗ − Λ_RI].
In order to simplify computations, the following two assumptions are made.
First, assume that Λ_RR = Λ_II and Λ_RIᵗ = −Λ_RI. Secondly, assume that the
components of δJ> are jointly normally distributed. The first assumption
is not critical; with a little more work using 2N × 2N matrices, one could
dispense with this assumption. It allows the covariance information about
the 2N variables in δJ> to be expressed in the N × N complex matrix Λ.
On the other hand, the second assumption considerably reduces the work
necessary to approximate F(ε; J>).

With these assumptions, a convenient representation of ε̃ can be
derived through the following transformations. Let X'> = SX>, where
S = DU†, U is a unitary matrix that diagonalizes G, and D is a diagonal
matrix of the square roots of the eigenvalues of G. Therefore,
<XGX> = <X'X'>. The covariance matrix of X'> is Λ' = SΛS†, which is,
in general, not diagonal. Let W be a unitary matrix which diagonalizes
Λ' and let ν₁, ν₂, ..., ν_N be the eigenvalues of Λ'. Therefore, if
X"> = TX>, where

    T = W†DU†,                                                    (20)

then

    ε̃ = <X"X"> + ε₀,                                              (21)

where X"> are a set of independent, normally distributed, complex random
variables with means

    ΔJ"> = TΔJ>                                                   (22)

and variances of the real and imaginary parts of X"ᵢ equal to ½νᵢ,
i = 1, 2, ..., N.
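The two-stage transformation of Equations (20)-(22) can be verified numerically; the check below uses randomly generated stand-ins for G and Λ (any Hermitian positive definite G and covariance Λ will do) and confirms that the quadratic form is preserved and that the transformed covariance is Diag[ν₁, ..., ν_N]:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
A_ = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
G = A_ @ A_.conj().T + N * np.eye(N)          # Hermitian positive definite
B_ = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Lam = B_ @ B_.conj().T                        # a covariance matrix as in Equation (5)

lam, U = np.linalg.eigh(G)
S = np.diag(np.sqrt(lam)) @ U.conj().T        # S = D U^H, so that S^H S = G
Lam_p = S @ Lam @ S.conj().T                  # covariance of X' = S X
nu, W = np.linalg.eigh(Lam_p)                 # the nu_i of Equation (21)
T = W.conj().T @ S                            # Equation (20): T = W^H D U^H

X = rng.normal(size=N) + 1j * rng.normal(size=N)
Xpp = T @ X
print(np.vdot(X, G @ X).real, np.vdot(Xpp, Xpp).real)   # equal quadratic forms
cov_pp = T @ Lam @ T.conj().T                 # should be Diag[nu_1, ..., nu_N]
```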

Equation (21) shows that the distribution function of ε̃ is a
generalized noncentral χ² distribution shifted by ε₀. Although the
usual χ² distribution function is known to be given by the incomplete
gamma function [11], for the more general distribution at hand, it
appears that no such simple representation is known [12]. For this
reason, an approximation to the distribution function of ε̃ is sought.
The characteristic function of ε̃ is given by

    φ(t) = [Πᵢ₌₁ᴺ (1 − jνᵢt)]⁻¹ exp{jt[ε₀ + Σᵢ₌₁ᴺ |ΔJ"ᵢ|²/(1 − jνᵢt)]}   (23)

with its corresponding cumulants

    κ₁ = E{ε̃} = ε₀ + Σᵢ₌₁ᴺ (νᵢ + |ΔJ"ᵢ|²),                        (24)

    κ_k = (k − 1)! Σᵢ₌₁ᴺ νᵢᵏ⁻¹ (νᵢ + k|ΔJ"ᵢ|²),   k = 2, 3, ...,   (25)

as derived in Appendix A. The characteristic function φ(t) (which
might be inverted by application of the fast Fourier transform to give
the frequency function of F(ε; J>)) and the cumulants of φ(t) characterize
the distribution function of ε̃. In view of this, one might ask if a
simple formula exists whose parameters depend on the cumulants of
φ, or equivalently on the moments of ε̃, which yields an approximation
to the distribution function of ε̃. There is such a formula: an
asymptotic series derived from the normal distribution called the
Edgeworth series [11].

Since the Edgeworth series is an asymptotic series for N → ∞ and
may not converge, the number of terms to be used is an important
question. One practical way of answering this question is to simulate
the system, which may be done simply on a high-speed digital computer,
and compare the results of the simulation to the approximate distribution.
Since one is dealing with chance anyway, if the results of the simulation
agree with the approximated distribution function to within a few percent,
then the approximate distribution function should be good enough for
practical engineering purposes. This was done in the examples
considered below, and it was found that the following approximate
representation for F(ε; J>) was adequate:

    F(ε; J>) ≈ Φ(z) − φ(z)[(γ₁/6)H₂(z) + (γ₂/24)H₃(z) + (γ₁²/72)H₅(z)]   (26)

where

    H₂(z) = z² − 1,
    H₃(z) = z³ − 3z,
    H₅(z) = z⁵ − 10z³ + 15z

are Hermite polynomials,

    γ₁ = κ₃/κ₂^{3/2}

is the coefficient of skewness,

    γ₂ = κ₄/κ₂²

is the coefficient of excess, Φ(z) is the normal distribution with
zero mean and unit variance, φ(z) = (d/dz) Φ(z) is the frequency function
of Φ(z),

    z = (ε − ε̄)/σ

is the standardized variable, and finally,

    σ = κ₂^{1/2}

is the standard deviation of ε̃. In obtaining the coefficients of φ(z)
in (26), use was made of the formula (dⁿ/dzⁿ) φ(z) = (−1)ⁿ Hₙ(z)φ(z) [11].
Since (26) is an asymptotic formula, one should not be disturbed by the
fact that the "tails" of the distribution may be slightly negative
on the left or slightly greater than unity on the right.
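The chain from the cumulants (24)-(25) to the Edgeworth approximation (26) can be sketched and checked against a direct simulation of ε̃ = <X"X"> + ε₀ from Equation (21). The values of νᵢ, |ΔJ"ᵢ|², and ε₀ below are made-up stand-ins, not those of any example in this thesis:

```python
import numpy as np
from math import erf, exp, factorial, pi, sqrt

nu = np.array([0.5, 0.3, 0.2, 0.1])           # eigenvalues of Lambda'
dJ2 = np.array([0.4, 0.2, 0.1, 0.05])         # |Delta J''_i|^2
eps0 = 1.0

def kappa(k):
    """Cumulants of eps-tilde, Equations (24)-(25)."""
    if k == 1:
        return eps0 + float(np.sum(nu + dJ2))
    return factorial(k - 1) * float(np.sum(nu**(k - 1) * (nu + k * dJ2)))

mean, var = kappa(1), kappa(2)
sig = sqrt(var)
g1 = kappa(3) / var**1.5                      # coefficient of skewness
g2 = kappa(4) / var**2                        # coefficient of excess

def F_edgeworth(eps):
    """Equation (26): normal distribution plus Hermite-polynomial corrections."""
    z = (eps - mean) / sig
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)
    H2, H3, H5 = z * z - 1.0, z**3 - 3.0 * z, z**5 - 10.0 * z**3 + 15.0 * z
    return Phi - phi * (g1 * H2 / 6.0 + g2 * H3 / 24.0 + g1**2 * H5 / 72.0)

# Monte Carlo check: the real and imaginary parts of X''_i each have variance
# nu_i / 2, with the mean Delta J''_i placed on the real axis.
rng = np.random.default_rng(4)
n = 200_000
re = rng.normal(np.sqrt(dJ2), np.sqrt(nu / 2.0), size=(n, 4))
im = rng.normal(0.0, np.sqrt(nu / 2.0), size=(n, 4))
samples = eps0 + np.sum(re**2 + im**2, axis=1)
print(F_edgeworth(mean), np.mean(samples <= mean))
```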

Using (26), not only may an approximation to F(ε; J>) be displayed,
but (26) may also be used in other optimization schemes. Since the
mean currents, J>, appear in a highly nonlinear manner in F(ε; J>),
the only hope of vertically optimizing F is to employ some iterative
scheme. A conjugate gradient method such as that due to Davidon could
be applied, but not without some difficulty. Every term and factor
except for constants in (26) are nonlinear functions of J> defined
implicitly through the eigenvalues and eigenvectors of Λ'. Thus,
although one could work through the various definitions and obtain the
gradient of F(ε; J>), it would be a fairly tedious exercise. It is
more difficult computationally to apply a gradient method to
horizontal optimization, since it requires the inverse function of F(ε; J>) = p.
Although the horizontal and vertical optimizations will not be considered
here, and in spite of the apparent numerical complexity of their
implementation, they do seem to represent a more desirable philosophy of
optimization. After all, if one is only going to build one antenna,
minimizing the mean ε̃ does not seem to make as much sense as maximizing
the probability that the antenna will have an ε̃ less than a given
acceptable ε. It would seem that minimizing ε for a given level of
confidence, p = F(ε; J>), is even more desirable than vertical optimization,
since it does not require an a priori choice of a reasonable ε. In any
case, it is interesting to have the distributions of probable errors, and
they are included in some of the specific examples considered below.


VI. AN EXAMPLE OF THE SYNTHESIS OF A TWO-DIMENSIONAL PATTERN FUNCTION
BASED ON MINIMIZATION OF ε̄

A particular case of the general antenna structure used in the
examples in this section is illustrated in Figure 2. At the intersection
of each ring, of which there are N, and each ray, of which there are 2M,
uniformly oriented in angle in the (x, y) plane, is a vertical Hertzian
dipole. All of the dipoles on the ith ring of radius ρᵢ are to be
identically excited for i = 1, 2, ..., N.

The classical assumption that identical elements, when placed in an
array configuration, retain their individual free-space patterns is made.
Since each element in the array is identically oriented, the basis
functions can be taken as the scalar functions

    Vᵢ = sin θ Σₖ₌₁²ᴹ e^{jρᵢ cos(φ−φₖ) sin θ}
       = 2 sin θ Σₖ₌₁ᴹ cos(ρᵢ cos(φ−φₖ) sin θ),                    (27)

where (θ, φ) is the direction of observation in the usual spherical
coordinates, φₖ = (k − 1)Δφ, and Δφ = 180/M degrees.
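The two forms in Equation (27) and the periodicity claimed below can be checked numerically; the ring radius and observation angles here are arbitrary test values:

```python
import numpy as np

def V_ring(rho, M, theta, phi):
    """M-term cosine form of Equation (27)."""
    phi_k = np.arange(M) * np.pi / M              # phi_k = (k - 1) * pi/M radians
    return 2.0 * np.sin(theta) * np.sum(
        np.cos(rho * np.cos(phi - phi_k) * np.sin(theta)))

def V_ring_2M(rho, M, theta, phi):
    """Full 2M-element phase sum over all rays (first form of Equation (27))."""
    phi_k = np.arange(2 * M) * np.pi / M
    return np.sin(theta) * np.sum(
        np.exp(1j * rho * np.cos(phi - phi_k) * np.sin(theta)))

M, rho, theta, phi = 6, 2.0 * np.pi, 1.0, 0.3
dphi = np.pi / M                                  # 180/M degrees, in radians
v1 = V_ring(rho, M, theta, phi)
v2 = V_ring(rho, M, theta, phi + dphi)            # periodicity in phi
v3 = V_ring_2M(rho, M, theta, phi)                # agrees with the cosine form
print(v1, v2, v3)
```

The agreement of the two forms follows from pairing each ray with its diametrically opposite partner, whose phase term is the complex conjugate.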


k IUI!I

:IU
Since V. is periodic in <P with period ~<P, this type of array is I ~1I1

""I'
1. I!lli
~11I
1 ~III
l.ulliO:

ell-suited to the synthesis of circularly symmetric pattern functions,

~he type considered here. Specifically, let the desired pattern function

sec 8 o < 8 < II'

{ sec II'

o
(Because of the symmetry of the problem about the 8 = 90 plane, angles
[Figure 2: planar antenna using 4 rings and 6 rays, shown in the
(x, y) plane with the z axis vertical.]

Figure 2. An Example of the General Antenna Structure Used in Chapter VI

θ > 90° need not be considered.) The motivation for this type of problem
is to create a pattern that will provide a uniform illumination of a
target over a large angular region [13].

For these examples, the norm was defined by

    ||·||² = (1/8πM) ∫₀^{π/2} ∫₀^{2π} |·|² sin θ dφ dθ.

Fortunately, G can be computed in closed form, and although C> must be
calculated by numerical quadrature, at least the integral over φ can be
done in closed form. Details of their evaluation are given in Appendix B.

For all the examples considered, the errors are assumed to obey the
following model. For simplicity, the errors on each of the 2M elements
on the ith ring are identical and are given by δJᵢ, where the real and
imaginary parts of δJᵢ are independent, normally distributed random
variables, both having mean zero and standard deviations of σ|Jᵢ|. Errors
on different rings are assumed to be uncorrelated. For such a model the
matrix X defined in Equation (5) is 2σ²I, and the corresponding
regularization matrix is K = 2σ² Diag[G₁₁, G₂₂, ..., G_NN].
In order to see the effect of σ on regularized optimum solutions,
some of the examples in this section were deliberately improperly
regularized. To this end, the following notation is introduced. Let
ε̄[σ₁, σ₂] be ε̄ computed from Equation (8) using the regularization matrix
K_{σ₁} and the currents given by Equation (10) using K_{σ₂}; that is,
with J> given by J> = (G + K_{σ₂})⁻¹ C>. Thus ε̄[σ₁, σ₂] is the mean of ε̃
for an actual error level σ₁ of an antenna optimized for an error
level σ₂.
Patterns were synthesized for N = 10, ρᵢ = 2π[2 − (1/2)^{10−i}],
Ψ = 70°, and a number of different values of M and σ. The evolution of

patterns synthesized by minimizing ε̄ for M = 1, 2, 3, 4, 6, 8, 10 with σ = 0.1 percent is displayed in Figures 3 through 9. The figures are

all polar plots of the mean pattern functions in various planes superimposed on plots of the desired pattern. The plots in the first and second quadrants are, respectively, of f(θ, 0) and f(θ, π/2M), both of which are E-plane patterns. The plots in the lower two quadrants of these figures are of the H-plane pattern. Notice that as M increases, the pattern becomes more nearly circularly symmetric.

For each of the examples listed above, 100 independent random samples of δJ> were drawn and corresponding samples of ε and q = <δJ G δJ> were generated. To perform this simulation, independent random samples of normal distributions with means of zero and standard deviations of 5 percent |J_i| (for i = 1, 2, ..., N) were simulated as follows. By the central limit theorem, a random variable formed by adding a number of independent, uniformly distributed random variables is approximately normally distributed. In this case, twelve uniform "pseudo-random" variables (formed by the multiplicative congruential method [14]) were used. With the samples of δJ> computed in this way, the real and imaginary parts of each component were independent and there was no correlation between different components of δJ>.
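The sum-of-twelve-uniforms device can be sketched as follows. The multiplier 16807 modulo 2³¹ − 1 is a common historical choice for the multiplicative congruential method, not necessarily the one used in [14]:

```python
import math

class MCG:
    """Multiplicative congruential generator: x <- a*x mod m."""
    def __init__(self, seed=12345):
        self.state = seed
    def uniform(self):
        # Scale the integer state to a uniform variate in (0, 1).
        self.state = (16807 * self.state) % 2147483647
        return self.state / 2147483647.0

def approx_normal(gen):
    # A sum of 12 U(0,1) variates has mean 6 and variance 1, so
    # subtracting 6 gives an approximately N(0, 1) variate.
    return sum(gen.uniform() for _ in range(12)) - 6.0

gen = MCG()
samples = [approx_normal(gen) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

A complex error sample with the model above is then σ|J_i|(n₁ + jn₂) with n₁, n₂ two independent draws of `approx_normal`.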

Listed in Table 1 are the probability means ε̄[5 percent, 0.1 percent] and q̄ = E{q}, the corresponding sample means of ε and q (denoted by <ε> and <q>, respectively), ε̄[0.1 percent, 0.1 percent], the mean currents J>,
[Figure 3. Synthesized and Desired Pattern Functions for M = 1]


[Figure 4. Synthesized and Desired Pattern Functions for M = 2]


[Figure 5. Synthesized and Desired Pattern Functions for M = 3]


[Figure 6. Synthesized and Desired Pattern Functions for M = 4]


[Figure 7. Synthesized and Desired Pattern Functions for M = 6]


[Figure 8. Synthesized and Desired Pattern Functions for M = 8]


[Figure 9. Synthesized and Desired Pattern Functions for M = 10]


TABLE 1. RESULTS OF OPTIMIZATION AND SIMULATION FOR σ = 0.1% AND VARIOUS VALUES OF M.

M                 1         2         3         4         6         8        10*
J1            .5387    -.1557    -.1651    -.3509    -.8022    -.3383    -.3845
J2            1.473     1.353     1.012     .9788     1.184     .5907     .5878
J3           -.5773    -.7060    -.1293     .5093     1.465     .7157     .7846
J4           -.8400    -.7632    -.9258    -1.866    -3.427    -1.541    -1.693
J5            1.195     1.135     1.187     2.070     2.376     .9383     1.020
J6           -.1516    -.1264    .06008    -.3540     1.615     1.149     1.249
J7            1.105     1.093     1.488    -1.245    -4.726    -2.530    -2.912
J8            1.450     1.421     1.812     1.593     4.937     2.431     2.994
J9           -.9182    -.8805     1.103    -.9550    -2.813    -1.228    -1.719
J10           .2572     .2401     .3019     .2464     .7428     .2946     .4984
ε̄[5%, .1%]  .04066    .01952    .01651    .02149     .1028    .02533    .04215
<ε>          .04168    .01921    .01649    .02161     .1041    .02476    .04323
q̄           .01464    .01230    .01451    .02077     .1026    .02519    .04205
<q>          .01566    .01199    .01450    .02090     .1039    .02463    .04314
ε̄[.1%, .1%] .02603   .007220   .002001  7.211×10⁻⁴  2.323×10⁻⁴  1.423×10⁻⁴  1.09×10⁻⁴
S.D.        .007343   .006610   .008473    .01181    .06200    .01658    .02771
γ1            1.226     1.333     1.460     1.335     1.467     1.583     1.594
γ2            2.546     2.951     3.504     2.876     3.531     4.009     4.075
√<JJ>         3.030     2.879     3.185     3.781     8.909     4.411     5.217

* 2500 samples used

and √<JJ>. The standard deviation (S.D.) and the coefficients of skewness (γ₁) and excess (γ₂) of ε are also listed in Table 1. The table shows close agreement between the sample averages and their corresponding probability means. One also notices that as the number of rays increases, ε̄[0.1 percent, 0.1 percent] decreases.


In order to test the theory developed in previous chapters, solutions for M = 10 regularized to error levels of σ = 0, 1 percent, 3 percent, 5 percent, 7 percent, and 10 percent were obtained. However, in the simulations, an actual error level of σ = 5 percent was used. Thus, one properly regularized and five improperly regularized solutions were obtained. Simulations as described above, with sample sizes of 1000, were performed for each case. S.D., γ₁, γ₂, ε̄[5 percent, σ], q̄, <ε>, <q>, ε̄[σ, σ], the mean currents J_i, and <JJ>^{1/2} are listed in Table 2 for the values of σ given above. The pattern functions corresponding to the currents in Table 2 are displayed in Figures 10 through 15. Figure 16 is a plot of ε̄[5 percent, σ], with its corresponding <ε>, against σ. Notice that ε̄[5 percent, σ] takes on its minimum value for the proper amount of regularization, namely that corresponding to σ = 5 percent in this case, in agreement with the theory presented in Chapter IV. Also note that even for the simple error model used in this example, the concept of a scalar regularization parameter is insufficient to minimize ε̄, since the diagonal elements of G are all different.

The method used to solve the system of equations (G + K)J> = C> was to find the unitary matrix R such that

    Rᵗ(G + K)R = Diag[μ₁, μ₂, ..., μ_N],   so that   Diag[μ₁, ..., μ_N] J′> = Rᵗ C> ≡ C′>,        (28)

where J> = RJ′> and the μ_i are the eigenvalues of G + K. Although this is an inefficient way to solve a system


TABLE 2. RESULTS OF OPTIMIZATION AND SIMULATION FOR M = 10 AND VARIOUS LEVELS OF ERROR.

σ              0%         1%         3%         5%         7%        10%
J1          2.790     .02200     .04127     .04502     .04585     .04557
J2         -4.989      .1808      .1693      .1657      .1626      .1574
J3         -.9968     .04477   -.003275    -.01517    -.02019    -.02464
J4         -3.809    -.09016    -.02591    -.01559    -.01535    -.02045
J5          4.613      .1006      .1112      .1143      .1123      .1061
J6         -.5083      .2124      .1232      .1001     .09032     .08179
J7          4.136     -.1607    -.08305    -.07037    -.06700    -.06599
J8          5.853    -.06149    -.02768    -.01155   -.008039   -.009804
J9         -4.064      .2824      .1751      .1386      .1241      .1126
J10         1.346     -.1377    -.06183    -.03736    -.02643    -.01538
ε̄[5%, σ]   .4718  5.003×10⁻⁴  3.915×10⁻⁴  3.852×10⁻⁴  3.891×10⁻⁴  4.147×10⁻⁴
<ε>         .4662  5.007×10⁻⁴  3.903×10⁻⁴  3.887×10⁻⁴  3.898×10⁻⁴  4.082×10⁻⁴
q̄          .4718  3.631×10⁻⁴  2.401×10⁻⁴  2.244×10⁻⁴  2.156×10⁻⁴  2.024×10⁻⁴
<q>         .4662  3.634×10⁻⁴  2.388×10⁻⁴  2.278×10⁻⁴  2.161×10⁻⁴  1.975×10⁻⁴
ε̄[σ, σ]  6.791×10⁻⁵  1.518×10⁻⁴  2.379×10⁻⁴  3.852×10⁻⁴  5.961×10⁻⁴  1.022×10⁻³
S.D.        .3647  2.168×10⁻⁴  1.673×10⁻⁴  1.660×10⁻⁴  1.639×10⁻⁴  1.609×10⁻⁴
γ1          1.865      1.386      1.728      1.804      1.826      1.841
γ2          5.394      3.087      4.768      5.116      5.216      5.278
√<JJ>       11.81      .4763      .3174      .2806      .2657      .2514
[Figure 10. Synthesized and Desired Pattern Functions for σ = 0%]


[Figure 11. Synthesized and Desired Pattern Functions for σ = 1%]


[Figure 12. Synthesized and Desired Pattern Functions for σ = 3%]


[Figure 13. Synthesized and Desired Pattern Functions for σ = 5%]


[Figure 14. Synthesized and Desired Pattern Functions for σ = 7%]


[Figure 15. Synthesized and Desired Pattern Functions for σ = 10%]


[Figure 16. Mean ε's for Solutions Regularized to Various Noise Levels but in the Presence of an Actual σ = 5% Level]
of linear equations, it does avoid some of the numerical difficulties of Cholesky decomposition and Gaussian elimination for ill-conditioned matrices. The computed eigenvalues of G are listed in Table 3 below.

TABLE 3. COMPUTED EIGENVALUES OF G.

λ1 = 7.742        λ6  = 0.01619
λ2 = 0.9582       λ7  = 1.779×10⁻⁴
λ3 = 0.4329       λ8  = 4.918×10⁻⁷
λ4 = 0.1579       λ9  = -1.907×10⁻⁸
λ5 = 0.04106      λ10 = -1.987×10⁻⁸

The last two entries of Table 3 can be considered as nothing but nonsense, since G is a positive definite matrix and, hence, its eigenvalues must be positive. For an error δG in the computation of G, the corresponding first-order variation in its eigenvalue λ is δλ ≈ <u δG u>, where u> is a corresponding normalized eigenvector of G. In view of this, one cannot hope to calculate λ accurately if its actual value is not much greater than the level of machine precision. Since the currents obtained from Equation (4) are strongly dependent on the smallest eigenvalues of G, it is highly doubtful that the currents in Table 2 corresponding to σ = 0 can be anything even close to J₀>.
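The first-order perturbation formula δλ ≈ <u δG u> can be checked numerically. The matrices below are illustrative, with a well-separated spectrum constructed so that the first-order term dominates:

```python
import numpy as np

# Build a symmetric matrix with known, well-separated eigenvalues 1..4.
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
G = Q @ np.diag([1.0, 2.0, 3.0, 4.0]) @ Q.T

# A small symmetric perturbation dG, playing the role of the error in
# the computation of G.
E = rng.standard_normal((4, 4))
dG = 1e-6 * (E + E.T)

w, U = np.linalg.eigh(G)            # eigenpairs of G
w2, _ = np.linalg.eigh(G + dG)      # eigenvalues of the perturbed matrix

# First-order prediction: shift of eigenvalue i is u_i^T dG u_i.
predicted = np.array([U[:, i] @ dG @ U[:, i] for i in range(4)])
assert np.allclose(w2 - w, predicted, atol=1e-9)
```

The agreement degrades as eigenvalue gaps shrink toward the size of δG, which is exactly why the smallest eigenvalues of an ill-conditioned G cannot be trusted.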
In fact, the solution obtained for σ = 0 is really regularized in the sense that the numerical errors in the computation of G and its eigenvalues have overwhelmed its smallest eigenvalues. Of course, when a sufficient amount of regularization is added, the problem disappears, since the smallest eigenvalue that appears in Equation (28) is then of the order of σ², which can be much greater than the level of numerical errors.
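The diagonalization route of Equation (28) can be sketched as follows. The matrices are toy stand-ins, and a scalar K is used for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
G = A @ A.T                        # symmetric positive semidefinite "G"
K = 0.01 * np.eye(5)               # regularization matrix (scalar toy version)
C = rng.standard_normal(5)

# R is unitary (orthogonal and real here), with R^t (G + K) R diagonal.
mu, R = np.linalg.eigh(G + K)

# In the rotated coordinates the system is diagonal: mu_i J'_i = C'_i.
Cp = R.T @ C
Jp = Cp / mu
J = R @ Jp                         # J> = R J'>

assert np.allclose((G + K) @ J, C)
```

With K present, every μ_i is bounded below by roughly the smallest diagonal of K, so the division by μ_i stays well above the level of numerical error.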

Although J₀> in itself is of no practical interest, J₀> as well as the eigenvalues of G enter into the formulas necessary for the calculation of the moments of ε through Equations (13), (20), and (22). This raises the question: how can the moments and approximate distribution function of ε be calculated accurately by the method of Chapter V? Indeed, if some of the calculated eigenvalues are negative, how can the method of Chapter V even be applied, in view of the form of D? It is demonstrated below that in spite of these numerical difficulties, meaningful results can still be obtained.

From the definition of U in Chapter V, it follows that if X> = UY>, then

    <XGX> ≈ Σ_{k=1}^{r} λ_k |y_k|²,

where r is the "effective rank" of G, the eigenvalues of which are assumed to be in descending order. Certainly r must be chosen small enough to exclude any eigenvalues erroneously calculated to be negative. Although small positive eigenvalues may also be erroneous, one must take care not to exclude too many eigenvalues, since they appear as square roots in T in Equation (20). With the approximation given above (which, of course, is to be interpreted in a probabilistic sense), the S matrix becomes the r × N matrix

    S = [Diag(λ₁^{1/2}, λ₂^{1/2}, ..., λ_r^{1/2}) | 0] Uᵗ,

where 0 denotes r × (N − r) zero columns.
Λ₁ = SΛSᵗ and its unitary matrix W are now r × r matrices, and the transformation matrix T becomes an r × N matrix. From Equations (4) and (20), ΔJ′> depends on

44

-1
Al
-1
0 .. 0
1..
2 0

TJ > =W
t
Q Q .u t . U C'>
o
. -1
.
0
.
0
A
r
..
0
X...,l
N

t [,-1/2 -1/2
= W Diag h
1 '.~2 '

which excludes the questionable eigenvalues. Therefore, although the calculated J₀> may be considerably in error, ΔJ′> computed by (22) with the T given above can be computed accurately. This allows the second and higher moments of ε, and hence the shape of the distribution function of ε, to be computed accurately. The displacement of the distribution function depends on ε₀ which, in turn, depends on J₀>. It is difficult to study the sensitivity of ε₀ to errors δG of the order of machine precision and to the errors in the computation of the smaller eigenvalues of G. Fortunately, the mean of ε can be computed from (11), which does not involve J₀>. Thus all the moments of ε can be computed accurately if the assumption that G is very nearly semidefinite with rank r holds.

Since one has no way of knowing what the smaller inaccurate eigenvalues of G actually are, the best and perhaps the only way to determine how small r should be, and how good the approximations made above are, appears to be simulation using the representation of ε given in (19). This was done for the case where N = 10, M = 10, and σ = 0.1 percent.
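The effective-rank truncation can be sketched as follows. The matrices are toy stand-ins for the near-singular G, and the r × r rotation W of the text is omitted for brevity:

```python
import numpy as np

# A rank-3 matrix contaminated by tiny symmetric "numerical error",
# mimicking the computed G of Table 3 with its meaningless small entries.
rng = np.random.default_rng(3)
B = rng.standard_normal((8, 3))
E = rng.standard_normal((8, 8))
G = B @ B.T + 1e-14 * (E + E.T)
C = rng.standard_normal(8)

lam, U = np.linalg.eigh(G)
order = np.argsort(lam)[::-1]          # descending eigenvalues, as in the text
lam, U = lam[order], U[:, order]

# Effective rank: drop eigenvalues that are tiny or negative.
r = int(np.sum(lam > 1e-8 * lam[0]))

# Truncated operator acting on C' = U^t C: only the r trusted
# eigenvalues enter, through lambda_k^{-1/2}, as in the display above.
TJ0 = (lam[:r] ** -0.5) * (U[:, :r].T @ C)

assert r == 3 and TJ0.shape == (3,)
```

The inaccurate eigenvalues never appear under a square root or an inverse, which is the whole point of the construction.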
45
The results displayed in Figure 17 show excellent agreement between the approximate distribution function and the simulation. For this example, two independent simulations with sample sizes of 2500 were performed. For all the examples considered in this chapter, the Edgeworth series of Equation (26) was used to approximate the distribution function. The cumulants of the distribution function of ε, from which the parameters of (26) are calculated, were computed using the matrix T as modified above, excluding only the erroneous negative eigenvalues of G. It was found that ε̄ calculated by (8) and by the formula ε₀ + E{<X″X″>}, as given by Equation (24), agreed to at least three significant digits for the examples considered.
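Since the exact form of Equation (26) is not reproduced in this section, the sketch below assumes the usual two-term Edgeworth expansion (as in Cramér [11]) for a variable with mean m, standard deviation s, skewness γ₁, and excess γ₂:

```python
import math

def He(n, u):
    # Probabilists' Hermite polynomials via He_{n+1} = u He_n - n He_{n-1}.
    a, b = 1.0, u
    if n == 0:
        return a
    for k in range(1, n):
        a, b = b, u * b - k * a
    return b

def edgeworth_cdf(x, mean, sd, g1, g2):
    # Two-term Edgeworth approximation to the CDF at x.
    u = (x - mean) / sd
    phi = math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))
    corr = (g1 / 6.0) * He(2, u) + (g2 / 24.0) * He(3, u) \
         + (g1 ** 2 / 72.0) * He(5, u)
    return Phi - phi * corr

# With zero skewness and excess the series reduces to the normal CDF.
assert abs(edgeworth_cdf(0.0, 0.0, 1.0, 0.0, 0.0) - 0.5) < 1e-12
```

Feeding in the γ₁ and γ₂ values of Tables 1 and 2 would produce curves of the kind plotted in Figures 17 and 18.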

The distribution functions for the examples where N = 10, M = 10, σ = 0, 1 percent, 3 percent, 5 percent, 7 percent, and 10 percent (considered previously) are plotted with their corresponding simulations in Figure 18. The figure indicates that the distribution corresponding to the solution regularized to the actual σ = 5 percent error level not only has the minimum mean but generally has the most desirable properties of all the distributions plotted. The figure also indicates that, although in view of the scale of ε there is very little practical difference between the distribution functions, it does appear that it is better to "over-regularize" slightly than to "under-regularize" if the exact nature of the probability law of δJ> is not known.

In all of the examples considered in this chapter, the phase function of f_d(θ, φ) was chosen to be unity, which led to real solutions. It is now shown that, for the antenna structure used in this chapter, all stationary points J> of (17) are real. Moreover, it is shown that all solutions obtained in this chapter are stationary points of (17). Due to the symmetry of the antenna, v> and consequently G are real. Equation (15) can be written equivalently as


[Figure 17. Edgeworth Series and Simulation for N = M = 10, σ = 0.1%]

[Figure 18. Distribution Functions for Solutions Regularized to Various Noise Levels but in the Presence of an Actual σ = 5% Level]

    C(J)> = {∫ f_d v><v / |<Jv>| dΩ / 8πM} J> ≡ HJ>,

where the normalization constant used in the definition of ‖·‖² is included. Like G, H is a positive definite real symmetric matrix. Stationary points of (17) satisfy (18), which is equivalent to

    (G + K)J> = HJ>.

But this is an eigenvalue equation with unit eigenvalue and real eigenvector J>. Therefore, if a stationary point of (17) exists, then J> is real. The plots of the patterns obtained in this section show that f(θ, φ) = <Jv> > 0. Evidently, for these patterns, C> calculated from (15) is identical to C> calculated by (2), which implies that the corresponding solution currents J> are also stationary points of (17). Any other stationary points of (17) would have to give rise to a pattern with nulls other than at θ = 0. If any such points actually exist, it seems very doubtful that they would produce an ε from (17) which is less than that produced by the solutions found in this section. Thus, although it is not proven, intuition leads one to suspect that the solutions obtained in this section may also produce the globally minimum ε defined in (17).



VII. AN EXAMPLE OF AMPLITUDE PATTERN SYNTHESIS

It was shown at the end of Chapter VI that the solutions found

to minimize the quadratic performance index of (8) at least make the

nonquadratic index of (17) stationary and possibly minimum. This

type of behavior will occur for structures with certain symmetry. For

structures with no such symmetry, the stationary points of (17) will,

in general, be complex.
To illustrate how the pattern one obtains is improved by allowing the phase to be free, the following example is included. A linear array of five isotropic radiators located at z_i = [(3/2)i − (1/2)^i]/ka

on the z-axis is used to synthesize the one-dimensional (circularly symmetric) pattern

    f_d(θ) = sec θ.

For this problem, V_i = exp[jz_i cos θ] and G_ij = sin(z_i − z_j)/(z_i − z_j).
An initial solution, J_a>, was obtained without regularization from (3), with C> calculated from (2). With this as a starting point, Davidon's method was applied to ε of (17) with K = 0. The gradient of the performance index is expressed in terms of the vector

    η> = ∫₀^π V><VJ> (1 − f_d/|<JV>|) sin θ dθ + KJ>,

which was calculated by Simpson's rule.
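A minimal sketch of this free-phase iteration follows, with plain gradient descent and a crude line search standing in for Davidon's variable-metric method. The element positions, the clipped secant-shaped desired amplitude, and the step size are illustrative assumptions, not the thesis's exact values:

```python
import numpy as np

N = 5
z = np.arange(N, dtype=float)                    # hypothetical positions
theta = np.linspace(1e-3, np.pi - 1e-3, 401)     # quadrature grid
h = theta[1] - theta[0]
w = np.sin(theta) * h                            # crude weights ~ sin(t) dt
V = np.exp(1j * np.outer(z, np.cos(theta)))      # v_i(theta) = exp(j z_i cos t)
fd = np.minimum(1.0 / np.maximum(np.abs(np.cos(theta)), 1e-6), 3.0)  # clipped sec

def eps(J):
    # Amplitude index: int (f_d - |<Jv>|)^2 sin(t) dt
    return float(np.sum((fd - np.abs(J @ V)) ** 2 * w))

def grad(J):
    # Descent vector built from eta> of the text with K = 0:
    # int v><vJ> (1 - f_d/|<Jv>|) sin(t) dt
    f = J @ V
    return np.conj(V) @ ((1.0 - fd / np.maximum(np.abs(f), 1e-9)) * f * w)

J = np.ones(N, dtype=complex)                    # starting currents
e0 = eps(J)
step = 0.05
for _ in range(300):
    g = grad(J)
    while eps(J - step * g) > eps(J):            # crude backtracking line search
        step *= 0.5
    J = J - step * g
e1 = eps(J)
assert e1 < e0                                   # the free-phase fit improves
```

Because the phase of the realized pattern is left free, the iteration can trade phase for amplitude accuracy, which is the source of the improvement reported in Table 4.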


50

Figure 19(a) displays the pattern of the initial solution whi~e

19(b) is the pattern function of the solution .found by iteration.

Figures 20(a) and 20(b) are the respective phase functions. Figure 19(b)

shows a considerable i~provement over 19(a), and this improvement

comes with the additional benefit of a lower Euclidian norm of

J>. The results are summarized in Table 4 below.

TABLE 4. RESULTS OF OPTIMIZATION OF ε.

             Initial J>         Final Iterate J>
J1           1.662 ∠13.87°      .7520 ∠8.43°
J2           2.421 ∠38.6°       1.525 ∠-9.74°
J3           2.405 ∠4.66°       1.075 ∠114.3°
J4           1.802 ∠40.3°       1.175 ∠-6.35°
J5           .5800 ∠1.98°       .7238 ∠47.9°
ε            .5457              .1994
<JJ>^{1/2}   4.237              2.440


[Figure 19. (a) Pattern of Initial Approximation Corresponding to Minimum of Equation (8) (K = 0); (b) Pattern of Final Iterate Corresponding to Minimum of Equation (17) (K = 0)]
[Figure 20. (a) Phase Function of Pattern of Figure 19(a); (b) Phase Function of Pattern of Figure 19(b)]

VIII. CONCLUSIONS

It has been demonstrated that the problem of antenna synthesis may

be approached realistically by considering the excitation of antennas

as random around some mean value. By so doing, the criterion of closeness of the synthesized and desired patterns, ε, becomes a random variable.

Three philosophies of optimization of such a random variable were given.

It was shown that minimization of the mean of ε leads very naturally to a generalization of the concept of regularization and to a simple and direct method of computing the proper "amount" of regularization.

It should be remembered that to apply this method only the knowledge of

the second moments of the errors is required. By making some additional

assumptions on the probability law of the errors, it was demonstrated

that the distribution function of E could be approximated by the

Edgeworth series. Such an approximation could then be used to apply

the horizontal (i.e., minimizing ε for a given probability) and vertical (i.e., maximizing the probability of ε not exceeding a given value)

optimization schemes. Finally, a random performance index suitable for

amplitude pattern synthesis was included.

Several examples were considered, all of which involved the synthesis

of three-dimensional, circularly symmetric pattern functions. Most involved

planar arrays of vertical Hertzian dipoles. In these examples, considerable

simplification was obtained by taking advantage of the high symmetry

properties of the antenna array. To demonstrate the theory, several

improperly regularized solutions and one properly regularized solution

were obtained. It was found in these examples that by choosing the

regularization matrix properly, E{ε} is indeed minimized. In addition, simulations of the errors were performed for each solution obtained. Figure 16 and Tables 1 and 2 demonstrate the close agreement between theory and the results of these simulations.

The distribution function of ε was computed by the Edgeworth series

for a number of examples. This was accomplished in spite of severe

numerical errors in the smaller eigenvalues of G (and consequently in the

unregularized solution, J₀>) by considering G as essentially a positive semidefinite matrix. A number of simulations of the distribution

functions were performed (Figures 17 and 18). In each case, the

distribution function approximated by the Edgeworth series and the simulated

distribution agreed quite well.

An example of amplitude pattern synthesis was included. This example

demonstrated that considerable improvement in the amplitude pattern

function over that realized by minimizing (8) is obtained (at the

expense of requiring iteration) when one minimizes E of Equation (17).



APPENDIX A.

1.0 Calculation of the Characteristic Function of a Generalized Noncentral χ² Distribution

The characteristic function of a random variable ε is defined as φ(t) = E{exp(jtε)}. Consider the random variable X̄² formed by squaring the normal random variable X̄ with mean μ and variance ½v. The characteristic function of X̄² is

    η(t) = E{exp(jtX̄²)} = (1/√(πv)) ∫_{−∞}^{∞} exp[jtx² − (x − μ)²/v] dx

         = (1 − jvt)^{−1/2} {√((1 − jvt)/(πv)) ∫_{−∞}^{∞} exp[−(1 − jvt)(x − μ/(1 − jvt))²/v] dx} exp[jμ²t/(1 − jvt)].

By comparing the factor in braces, {·}, with a normal distribution with variance ½v/(1 − jvt) and mean μ/(1 − jvt), it is evident that {·} = 1 and

    η(t) = exp[jμ²t/(1 − jvt)] / √(1 − jvt).

Next consider the random variable χ̄² = Σ_{i=1}^{N} |X̄_i|², where X̄₁, X̄₂, ..., X̄_N are independent, normally distributed, complex random variables with means μ₁, ..., μ_N and with variances of the real and imaginary parts of ½v₁, ..., ½v_N, respectively. In view of the independence of X̄₁, ..., X̄_N, the characteristic function of χ̄² is the product of the characteristic functions of (Re X̄₁)², (Im X̄₁)², ..., (Re X̄_N)², (Im X̄_N)², and is given by

    ψ(t) = Π_{i=1}^{N} exp[j|μ_i|²t/(1 − jv_i t)] / (1 − jv_i t).

By shifting χ̄² as defined in the previous paragraph by the constant ε₀, Equation (23) results.

2.0 Calculation of the Cumulants of a Generalized Noncentral χ² Distribution

The cumulants of a characteristic function ψ(t) are defined as

    κ_k = j^{−k} (d^k/dt^k) ln ψ(t) |_{t=0},    k = 1, 2, ....

With η(t) defined in the first paragraph,

    ln η(t) = −½ ln(1 − jvt) + jμ²t/(1 − jvt),

    j^{−1} (d/dt) ln η = (v/2)/(1 − jvt) + μ²/(1 − jvt)²,

    j^{−2} (d²/dt²) ln η = (v²/2)/(1 − jvt)² + 2μ²v/(1 − jvt)³,

and, in general,

    j^{−k} (d^k/dt^k) ln η = ((k − 1)!/2) v^k/(1 − jvt)^k + k! μ² v^{k−1}/(1 − jvt)^{k+1},

which is proven by induction on k. Setting t = 0,

    κ_k(η) = ((k − 1)!/2) v^k + k! μ² v^{k−1}

are the cumulants of η(t). It follows from this that the cumulants of ψ(t) are

    κ_k = (k − 1)! Σ_{i=1}^{N} [v_i^k + k v_i^{k−1} |μ_i|²],

from which Equations (24) and (25) may be derived.
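As a numerical sanity check (not part of the thesis), the k = 1, 2 cases of this cumulant formula, κ₁ = Σ(v_i + |μ_i|²) and κ₂ = Σ(v_i² + 2v_i|μ_i|²), can be compared with Monte Carlo estimates of the mean and variance of χ̄²:

```python
import numpy as np

rng = np.random.default_rng(4)
mu = np.array([1.0 + 0.5j, -0.3 + 0.2j])    # complex means mu_i
v = np.array([0.4, 0.9])                    # Re/Im parts have variance v_i/2

# Draw complex normal samples X_i = mu_i + sqrt(v_i/2)(n1 + j n2).
n = 400000
X = mu + np.sqrt(v / 2)[None, :] * (rng.standard_normal((n, 2))
                                    + 1j * rng.standard_normal((n, 2)))
chi2 = np.sum(np.abs(X) ** 2, axis=1)       # samples of chi-bar-squared

k1 = np.sum(v + np.abs(mu) ** 2)            # cumulant formula, k = 1
k2 = np.sum(v ** 2 + 2 * v * np.abs(mu) ** 2)  # cumulant formula, k = 2

assert abs(chi2.mean() - k1) < 0.02
assert abs(chi2.var() - k2) < 0.05
```

The same comparison extends to k = 3, 4, which supply the skewness and excess coefficients γ₁ and γ₂ used in the Edgeworth series.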


APPENDIX B.

1.0 Calculation of G

    V_i = Σ_{k=1}^{2M} exp[jρ_i cos(φ − φ_k) sin θ] sin θ,

    G_ij = ∫₀^{2π} ∫₀^{π/2} V_i V_j* sin θ dθ dφ / 8πM

         = Σ_{k=1}^{2M} Σ_{ℓ=1}^{2M} ∫₀^{2π} ∫₀^{π/2} exp[jρ_ijkℓ cos(φ − ψ_ijkℓ) sin θ] sin³θ dθ dφ / 8πM,

where

    ρ_ijkℓ = [ρ_i² + ρ_j² − 2ρ_i ρ_j cos(φ_k − φ_ℓ)]^{1/2}

and

    ψ_ijkℓ = tan⁻¹[(ρ_i sin φ_k − ρ_j sin φ_ℓ)/(ρ_i cos φ_k − ρ_j cos φ_ℓ)].

Using formula 9.1.21 of [15],

    (1/2π) ∫₀^{2π} exp(jz cos φ) dφ = J₀(z),

where J₀ is the Bessel function of the first kind of order zero,

    G_ij = Σ_{k=1}^{2M} Σ_{ℓ=1}^{2M} ∫₀^{π/2} J₀(ρ_ijkℓ sin θ) sin³θ dθ / 4M.

Using formula 11.4.10 of [15] and formulas in Chapter 10 of [15],

    G_ij = Σ_{k=1}^{2M} Σ_{ℓ=1}^{2M} Θ(ρ_ijkℓ) / 4M,

where

    Θ(ρ) = sin(ρ)(1/ρ − 1/ρ³) + cos(ρ)/ρ².

Finally, taking advantage of the degeneracy in ρ_ijkℓ,

    G_ij = Σ_{k=2}^{M} Θ(ρ_ijk) + ½[Θ(ρ_i + ρ_j) + Θ(ρ_i − ρ_j)],

where ρ_ijk is the value of ρ_ijkℓ corresponding to φ_k − φ_ℓ = (k − 1)π/M. (To apply the formula to G_ii, note that an application of L'Hospital's rule shows that Θ(0) = 2/3.)
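As a check on the closed form (a sketch, using a power-series J₀ rather than a library Bessel routine), Θ(ρ) should reproduce the integral ∫₀^{π/2} J₀(ρ sin θ) sin³θ dθ that it replaces, and Θ(ρ) → 2/3 as ρ → 0:

```python
import math

def j0(x):
    # Power series for the Bessel function J0; adequate for moderate x.
    term, total = 1.0, 1.0
    for m in range(1, 40):
        term *= -(x / 2.0) ** 2 / m ** 2
        total += term
    return total

def theta_fn(p):
    # Closed form: Theta(p) = sin(p)(1/p - 1/p^3) + cos(p)/p^2.
    return math.sin(p) * (1.0 / p - 1.0 / p ** 3) + math.cos(p) / p ** 2

def integrand(p, t):
    return j0(p * math.sin(t)) * math.sin(t) ** 3

def quad(p, n=2000):
    # Composite Simpson's rule on [0, pi/2] (n even).
    h = (math.pi / 2.0) / n
    s = integrand(p, 0.0) + integrand(p, math.pi / 2.0)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * integrand(p, k * h)
    return s * h / 3.0

for p in (0.5, 1.0, 2.0, 5.0):
    assert abs(theta_fn(p) - quad(p)) < 1e-8
assert abs(theta_fn(1e-4) - 2.0 / 3.0) < 1e-6   # the L'Hospital limit
```

The closed form avoids the per-element quadrature entirely, which matters when G must be assembled for many (i, j, k, ℓ) combinations.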

2.0 Calculation of C>

    C_i = ∫₀^{2π} ∫₀^{π/2} f_d(θ) V_i sin θ dθ dφ / 8πM

        = (1/8πM) Σ_{k=1}^{2M} ∫₀^{2π} ∫₀^{π/2} exp[jρ_i cos(φ − φ_k) sin θ] sin²θ f_d(θ) dθ dφ,

which can be integrated by Simpson's rule. J₀ is evaluated using formulas 9.4.1 and 9.4.3 of [15].



LIST OF REFERENCES

[1] R. E. Collin, "Radiation from simple sources," Antenna Theory, Part 1, R. E. Collin and F. J. Zucker, eds., New York: McGraw-Hill, 1969 (Chapter II).

[2] C. J. Bouwkamp and N. G. de Bruijn, "The problem of optimum antenna current distribution," Philips Research Reports, vol. 1, pp. 135-158, 1945-1946.

[3] P. M. Woodward and J. D. Lawson, "The theoretical precision with which an arbitrary radiation pattern may be obtained from a source of finite size," J. IEE (London), Part III, vol. 95, pp. 363-370, Sept. 1948.

[4] N. I. Achieser, Theory of Approximation, C. J. Hyman, translator, New York: Frederick Ungar Publishing Company, 1956 (Chapter I).

[5] B. Noble, Applied Linear Algebra, Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1969, pp. 415-416.

[6] D. R. Rhodes, "The optimum line source for the best mean-square approximation to a given radiation pattern," IEEE Trans. Antennas and Propagation, vol. AP-11, no. 4, pp. 440-446, July 1963.

[7] T. T. Taylor, "Design of line-source antennas for narrow beamwidth and low side lobes," IRE Trans. Antennas and Propagation, vol. AP-3, pp. 16-28, Jan. 1955.

[8] Y. T. Lo, S. W. Lee, and Q. H. Lee, "Optimization of directivity and signal-to-noise ratio of an arbitrary antenna array," Proc. IEEE, vol. 54, pp. 1033-1045, Aug. 1966.

[9] G. A. Deschamps and H. S. Cabayan, "Antenna synthesis and solution of inverse problems by regularization methods," IEEE Trans. Antennas and Propagation, vol. AP-20, no. 3, pp. 268-274, May 1972.

[10] H. S. Cabayan, P. E. Mayes, and G. A. Deschamps, "Techniques for computation and realization of stable solutions for synthesis of antenna patterns," Antenna Laboratory Report No. 70-13, University of Illinois, Oct. 1970.

[11] H. Cramér, Mathematical Methods of Statistics, Princeton: Princeton University Press, 1945.

[12] Y. T. Lo, "A mathematical theory of antenna arrays with randomly spaced elements," IEEE Trans. Antennas and Propagation, vol. AP-12, no. 3, pp. 257-268, May 1964.

[13] L. C. Van Atta and T. J. Keary, "Shaped-beam antennas," Microwave Antenna Theory and Design, S. Silver, ed., New York: McGraw-Hill, 1949 (Chapter XIII).

[14] B. C. Richardson, "Random number generation on the IBM 360," Department of Computer Science Report No. 329, University of Illinois, April 1969.

[15] M. Abramowitz and I. A. Stegun, eds., Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Washington, D.C.: U.S. Government Printing Office, 1968.
