
An Introduction to Genetic Algorithms for Electromagnetics

Randy L. Haupt
Department of Electrical Engineering, US Air Force Academy/DFEE
2354 Fairchild Drive, Suite 2F6, CO 80840-6236
Tel: (719) 472-3190  Fax: (719) 472-3756  E-mail: haupt@sheiter.usafa.d.mil

1. Abstract

This article is a tutorial on using genetic algorithms to optimize antenna and scattering patterns. Genetic algorithms are global numerical-optimization methods, patterned after the natural processes of genetic recombination and evolution. The algorithms encode each parameter into a binary sequence, called a gene, and a set of genes is a chromosome. These chromosomes undergo natural selection, mating, and mutation, to arrive at the final optimal solution. After providing a detailed explanation of how a genetic algorithm works, and a listing of MATLAB code, the article presents three examples. These examples demonstrate how to optimize antenna patterns and backscattering radar-cross-section patterns. Finally, additional details about algorithm design are given.
2. Introduction

Nature abounds with examples of plants and animals adapting to their environments. An animal changes color to hide. A plant develops extensive, deep roots because of strong winds or little moisture. Engineers can use nature's philosophy of adaptation in order to design better products. Computer algorithms that model survival of the fittest are very attractive, because they are simple to program, and are not hidden in arcane mathematical jargon. Turning these algorithms loose on a wide variety of optimization problems leads to some stunning results. This paper shows how to apply evolution and natural selection, in the form of genetic algorithms, to optimize radiation patterns.

Optimizing antennas to closely approximate desired far-field responses, or optimizing absorbing material to control scattering, are topics of considerable interest in electromagnetics. Traditional optimization techniques search for the best solutions using gradients and/or random guesses. Gradient methods quickly converge to a minimum, once an algorithm is close to that minimum. They have the disadvantages of getting stuck in local minima, requiring gradient calculations, working on only continuous parameters, and being limited to optimizing a few parameters. Random-search methods don't require gradient calculations, but tend to be slow, and are susceptible to getting stuck in local minima.

Electromagnetic-optimization problems often involve many parameters, and these parameters may be discrete. An example is optimizing the low sidelobes of a large array antenna, when the amplitude and phase have quantized values. Although there are not an infinite number of possibilities in the search space, the number of possibilities is so large that an exhaustive search is impractical. Two relatively new global-optimization procedures that can handle a large number of discrete parameters are simulated annealing and genetic algorithms. Simulated annealing models the annealing (slow cooling) of metals from a liquid state to a solid state [1, 2], while genetic algorithms model evolution and genetic recombination in nature [3]. Both optimization techniques model natural processes, and tend to be slow, but sure, at finding a good solution. Which technique is best? Davis' book gives case studies to compare the two techniques [4]. Both algorithms are easy to program, and quickly give gratifying results. These algorithms appeal to our intuitive side, and they resist the mathematical rigor of traditional optimization methods. Some examples of using simulated annealing for optimizing electromagnetics problems appear in [5] and [6]. This author is hooked on genetic algorithms, because of my initial experience in using them: I started from scratch, and had a working genetic-algorithm antenna-array-optimization MATLAB program written within one hour of reading a Scientific American article [3].

Genetic algorithms are beginning to show some appeal in optimizing radiation patterns. Michielssen et al. used a genetic algorithm to synthesize a multilayer radar-absorbing coating that maximizes absorption of an electromagnetic wave over a desired range of frequencies and incident angles [7]. The optimization parameters were absorber thickness, permittivity, and permeability. Genetic algorithms were also applied to the problem of thinning linear and planar arrays, to obtain the lowest maximum relative-sidelobe level over a specified bandwidth and scan angle [8]. Genetic algorithms work well to design antennas, but are slow on serial computers, and are not well suited for real-time applications, such as adaptive nulling.

This paper is a tutorial on how to apply genetic algorithms to optimize radiation patterns. Step-by-step procedures are given, with associated MATLAB [9] code, to offer the reader an opportunity to try genetic algorithms. Some simple examples for optimizing array-antenna patterns, and backscattering patterns from strips, are presented. Finally, some practical hints and suggestions are given in the final section.

3. A genetic algorithm

This section begins with a quick overview of genetic algorithms, and then provides a step-by-step implementation. Much more detail on genetic algorithms is found in [10]. In the following sections, specific electromagnetics examples are presented. Hopefully, the reader can quickly use this information to implement a working genetic algorithm. MATLAB code resembles a pseudo-code, and should be understandable even to those not familiar with this software package.

1045-9243/93/$03.00 © 1995 IEEE
IEEE Antennas and Propagation Magazine, Vol. 37, No. 2, April 1995

Genes are the basic building blocks of genetic algorithms. A gene is a binary encoding of a parameter. A chromosome in a computer algorithm is an array of genes. Each chromosome has an associated cost function, assigning a relative merit to that chromosome. The algorithm begins with a large list of random chromosomes. Cost functions are evaluated for each chromosome. The chromosomes are ranked from the most fit to the least fit, according to their respective cost functions. Unacceptable chromosomes are discarded, leaving a superior species-subset of the original list. Genes that survive become parents, by swapping some of their genetic material to produce two new offspring. The parents reproduce enough to offset the discarded chromosomes. Thus, the total number of chromosomes remains constant after each iteration. Mutations cause small random changes in a chromosome. Cost functions are evaluated for the offspring and the mutated chromosomes, and the process is repeated. The algorithm stops after a set number of iterations, or when an acceptable solution is obtained. Figure 1 is a flow chart of a genetic algorithm.

The algorithm begins by defining a chromosome as an array of parameter values to be optimized. If the chromosome has Npar parameters (an Npar-dimensional optimization problem), given by p1, p2, ..., pNpar, then the chromosome is written as

    chromosome = [p1 p2 p3 ... pNpar].    (1)

Each chromosome has a cost, found by evaluating a cost function, f, at p1, p2, ..., pNpar. The cost function is represented by

    cost = f(p1, p2, ..., pNpar).    (2)

The parameters, pn, can be discrete or continuous. If the parameters are continuous, either some limits need to be placed on the parameters, or they should be restricted to a handful of possible values. One way to limit the parameters is to encode them in a binary sequence, such as

    qn = Q Σ (m = 1 to Ln) bn[m] 2^(1−m),    (3)

where

    qn = quantized version of pn,
    Ln = number of quantization levels for qn,
    bn = array containing the binary sequence representing qn,
    Q = the largest quantization level = half the largest-possible value of qn.

The binary-encoded parameter, qn, does not have to mathematically relate to pn, as in Equation (3). Instead, qn may just represent some value of pn. For instance, if pn represents eight values of resistivity, then qn has the following representation:

    qn = 000 = 100 Ω/□
    qn = 001 = 200 Ω/□
    qn = 111 = 800 Ω/□

The implementation of genetic algorithms described here only works with the binary encoding of the parameters, and not the parameters themselves. Whenever the cost function is evaluated, the chromosome must first be decoded. An example of a binary-encoded chromosome that has Npar parameters, each encoded with Npbit = 10 bits, is

    chromosome = [b1[1]...b1[10] b2[1]...b2[10] ... bNpar[1]...bNpar[10]].    (4)

Substituting this binary representation into Equation (3) yields an array of quantized versions of the parameters. This chromosome has a total of Ngbit = Npbit x Npar bits. After devising a scheme to encode and decode the parameters, a list of random chromosomes is generated. Each chromosome has an associated cost, calculated from the cost function in Equation (2). An example of an arbitrary list of Nchrom = 8 random chromosomes, with their associated costs, appears in Table 1. The next step in the algorithm ranks the chromosomes from best to worst. Assuming a low cost is good, the ranking is shown in Table 2.
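The encode/decode step above is easy to prototype. The sketch below is in Python rather than the article's MATLAB, and the function names and the 3-bit resistivity lookup table are illustrative choices of mine, not the article's code:

```python
def decode(bits, Q):
    """Decode a binary gene into a quantized parameter value, per Equation (3):
    q = Q * sum over m of bits[m] * 2**(-m), with m = 0, 1, 2, ..."""
    return Q * sum(b * 2.0 ** (-m) for m, b in enumerate(bits))

def encode(p, nbits, Q):
    """Greedy binary quantization of a continuous parameter p into nbits bits."""
    bits, remainder = [], p
    for m in range(nbits):
        level = Q * 2.0 ** (-m)       # weight of bit m
        bit = 1 if remainder >= level else 0
        bits.append(bit)
        remainder -= bit * level
    return bits

# A pure lookup-table gene, as in the resistivity example: 3 bits index 8 values.
resistivity = {m: 100 * (m + 1) for m in range(8)}  # 000 -> 100, ..., 111 -> 800 ohms/sq

print(decode([1, 0, 1], Q=4.0))   # 4*(1 + 0.25) = 5.0
print(encode(5.0, 3, Q=4.0))      # [1, 0, 1]
print(resistivity[0b001])         # 200
```

Note that the lookup-table form never touches Equation (3) at all: the bits are just an index, which is exactly why binary chromosomes handle inherently discrete parameters so naturally.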
Table 1. A list of 8 random chromosomes and their associated costs.

Figure 1. A flowchart of a genetic algorithm. (The steps: establish encoding/decoding of parameters; generate M random chromosomes; evaluate the cost function for the chromosomes; rank the chromosomes; if done, stop; otherwise discard the inferior chromosomes, mate the remaining chromosomes, apply mutations, and repeat.)

Table 2. Chromosomes are ranked and selected.

    rank   chromosome                cost   keep top 50%   keep cost < 5.0
    1-4    chromosomes 1, 2, 3, 6     ...   keep           keep
    5      #7: 111111110000           2.8   discard        keep
    6      #8: 110011001100           3.0   discard        keep
    7      #5: 001010100001          10.1   discard        discard
    8      #4: 000101010111          12.6   discard        discard

At this point, the unacceptable chromosomes are discarded. "Unacceptable" is user defined. Typically, the top x chromosomes are kept (where x is even), and the bottom Nchrom − x are discarded. As an example, if 50% of the chromosomes are discarded, then chromosomes 1, 2, 3, and 6 are kept, while chromosomes 4, 5, 7, and 8 are discarded (column 4 in Table 2). Another possibility is to require the cost to meet a specified level. As an example, if the cost must be less than 5.0, then chromosomes 1, 2, 3, 6, 7, and 8 are kept, while chromosomes 4 and 5 are discarded (column 5 in Table 2). For this example, we'll keep 50%.

The next step, after ranking and discarding the chromosomes, is to pair the remaining Nchrom/2 chromosomes for mating. Any two chromosomes can mate. Some possible approaches are to pair the chromosomes from the top to the bottom of the list, to pair them randomly, or to pair 1 with Nchrom/2, 2 with Nchrom/2 − 1, etc. Once paired, new offspring are formed from the pair by swapping genetic material. As an example, chromosome 6 is paired with 2, and 3 is paired with 1, from Table 2. Now, a random crossover point is selected. The binary digits to the right of the crossover point are swapped to form the offspring. If this random crossover point is between bits 5 and 6, the new chromosomes are formed from

    parent #1 (chromosome #6)    101010010101
    parent #2 (chromosome #2)    111000100101

    offspring #1                 101010100101
    offspring #2                 111000010101

After the surviving Nchrom/2 chromosomes pair and mate, the list of Nchrom/2 parents and Nchrom/2 offspring results in a total of Nchrom chromosomes (the same number of chromosomes as at the start). At this point, random mutations alter a small percentage of the bits in the list of chromosomes, by changing a 1 to a 0, or vice versa. A bit is randomly selected for mutation from the Nchrom x Ngbit total number of bits in all the chromosomes. Increasing the number of mutations increases the algorithm's freedom to search outside the current region of parameter space. This freedom becomes more important as the algorithm begins to focus on a particular solution. Typically, on the order of 1% of the bits mutate per iteration. Mutations do not occur on the final iteration.

After the mutations take place, the costs associated with the offspring and mutated chromosomes are calculated, and the process is repeated. The number of generations that evolve depends on whether an acceptable solution is reached, or a set number of iterations is exceeded. After a while, all of the chromosomes and associated costs become the same, except for those that are mutated. At this point, the algorithm should be stopped.

The MATLAB code that implements a very simple genetic algorithm appears in Figure 2. After the first for statement, a function call must be inserted by the user, to calculate the cost of the chromosomes. Since the chromosomes are encoded in binary, the function must translate the binary chromosomes into continuous parameters, before calculating the cost of the chromosomes. The number of chromosomes, the number of bits per chromosome, and the number of iterations are set at the beginning of the program. This MATLAB routine is bare-bones, and the reader is welcome to embellish and experiment with it. If you are unfamiliar with MATLAB, the code is generic enough to translate into another language.

    % This is a simple genetic algorithm written in MATLAB
    N=8;          % number of bits in a gene
    M=N;          % number of genes
    last=20;      % number of iterations
    M2=M/2;

    % creates M random genes having N bits
    % Gene is an MxN matrix
    Gene=round(rand(M,N));

    for ib=1:last

      %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
      % Insert a subroutine that calculates the cost    %
      % function. It should have the form               %
      %     cost=function(Gene)                         %
      % where cost is an Mx1 array                      %
      %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

      % ranks results and discards bottom 50%
      [cost,ind]=sort(cost);      % sorts costs from best to worst
      Gene=Gene(ind(1:M2),:);     % sorts Gene according to costs
                                  % and discards bottom half of list

      % mate
      cross=ceil((N-1)*rand(M2,1));  % selects random crossover points
      % pairs genes and swaps binary digits to the right of the
      % crossover points to form the offspring
      for ic=1:2:M2
        pt=cross(ic);
        % offspring #1
        Gene(M2+ic,1:pt)=Gene(ic,1:pt);
        Gene(M2+ic,pt+1:N)=Gene(ic+1,pt+1:N);
        % offspring #2
        Gene(M2+ic+1,1:pt)=Gene(ic+1,1:pt);
        Gene(M2+ic+1,pt+1:N)=Gene(ic,pt+1:N);
      end

      % mutate
      ix=ceil(M*rand);            % random gene
      iy=ceil(N*rand);            % random bit in gene
      Gene(ix,iy)=1-Gene(ix,iy);  % mutate bit iy in gene ix

    end % for ib=1:last

Figure 2. A simple genetic algorithm written for MATLAB.

Three optimization examples are given in the next three sections. The first example shows how to optimize a thinned conducting grid of strips, to obtain the lowest relative-sidelobe level in the backscattering radar cross section (RCS). It is simple, because the parameters naturally have a binary representation. The next example shows how to optimize a nonuniformly spaced array for the lowest possible sidelobe level in the array factor. Continuous spacing parameters are encoded into a chromosome. The final and most complex example shows how to optimize the resistivity and size of loads placed on strips, to produce the lowest possible sidelobe level in the backscattering RCS. Both the resistivities and sizes of the loads must be encoded and optimized. The last two examples are also optimized with a quasi-Newton algorithm, for comparison.
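The Figure 2 listing translates almost line-for-line into other languages. Below is a hedged Python rendition of the same loop (rank, keep the top half, single-point crossover between top-to-bottom pairs, one mutated bit per iteration, none on the last). The toy cost function, the number of 1 bits in each gene, is my stand-in for the user-supplied cost subroutine:

```python
import random

def run_ga(nbits=8, ngenes=8, niter=20, cost_fn=None, seed=1):
    """A minimal genetic algorithm mirroring the article's Figure 2 listing."""
    rng = random.Random(seed)
    if cost_fn is None:
        cost_fn = lambda gene: sum(gene)  # toy cost: number of 1 bits (minimized)
    half = ngenes // 2                    # assumes ngenes divisible by 4
    pop = [[rng.randint(0, 1) for _ in range(nbits)] for _ in range(ngenes)]
    for it in range(niter):
        # evaluate and rank; keep the best half
        pop.sort(key=cost_fn)
        pop = pop[:half]
        # mate: pair survivors top-to-bottom, single-point crossover
        offspring = []
        for i in range(0, half, 2):
            p1, p2 = pop[i], pop[i + 1]
            pt = rng.randint(1, nbits - 1)          # random crossover point
            offspring.append(p1[:pt] + p2[pt:])
            offspring.append(p2[:pt] + p1[pt:])
        pop += offspring
        # mutate one random bit, sparing the current best gene at index 0,
        # and skipping mutation entirely on the final iteration
        if it < niter - 1:
            ix = rng.randint(1, ngenes - 1)
            iy = rng.randint(0, nbits - 1)
            pop[ix][iy] = 1 - pop[ix][iy]
    return min(pop, key=cost_fn)

best = run_ga()
print(best, sum(best))
```

Because the ranked best gene is never mutated or overwritten, the best cost can only improve from one generation to the next, which is the behavior Figure 7 later shows for the "best cost" curve.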

scattered fields

incident and

/
-bX

k-4

-1-

k-d--t(

Figure 3. A diagram of a uniform grid of perfectly conducting strips.

direction, and have a center-to-center spacing of d, and a width of w. Only backscattering is considered, so the direction of the incident wave and the direction of scattering are the same, and are given by 4. Since the width and spacing of the strips are constants, the set of parameters indicates whether a strip is present in the grid or not. Figure 4 shows the correspondence between a chromosome and a grid. Only M bits need to be in a chromosome, because the grid is assumed to be symmetric about its center. A "1" indicates 0 that a particular strip is there, and a " " indicates that particular strip is gone. Thus, a fully populated grid has a chromosome with Nd,m = M 1s. This example presents the simplest-possibleencoding of a parameter: one bit. A similar approach has been used to find optimum-thinning configurations for linear- and planar-array antennas [8]. Assume the incident plane wave has an electric field polar-

ie in the z-direction, and has a magnetic field n o d i to one. zd

- -

physical

When there are 2M symmetric strips, the integral-equation formulation is given by

chromosome = [l

1 0 1 0

-x 13
where

Figure 4. A thinned grid and its corresponding chromosome.

Jz(x') k
2
X",

= induced surface current,


=

= =

2zl2 wavelength distance from origin to center of strip m

HA2)(.)= the zeroth-order Hankel function of the second kind

I5t
10

I'

max re1 sII I-13.27 dB

Point matching, with five pulses per strip, is used, to find the J,(x) on each strip. The RCS from the grid is calculated from

m L1 U

E
$ 2 5 r z .v)

0
CT

The cost function is the maximum relativesidelobe level of Equation (5). This value is found by comparing an array containing 0(4), starting at 4 = 900. with an array containing o(+) sorted from the maximum value to the minimum value. The first place where the two arrays are not similar is assumed to be the highest relative-sidelobelevel. This value is the cost that is to be minimized. Figure 5 is an example of a backscattering pattern of a fullypopulated grid of 2M = 40 strips. The strip width is w = 0.0372, and the spacing between strip centers is d = 0.11. Its maximum relative-sidelobe level is -13.3 dB. The goal is to reduce the maximum sidelobe level in this backscattering pattern by thinning the grid with a genetic algorithm [I I]. Starting with 80 chromosomes, and optimizing over eight generations, produces a genetically superior chromosome given by [I101 10110101 1 1 IooOOl], where the first bit represents the right center strip, and the last bit represents the right edge strip. The thinned grid represented by this chromosome now consists of only 24 strips, but has a maximum relative sidelobe level of -17.1 dB. Figure 6 shows the backscattering pattern of the optimized grid. Note that the peak RCS is also reduced, because 40% of the strips were rboved from the grid. An exhaustive search requires checking 220 possible chromosomes. The genetic-algorithm solution was found by checking at most 80 x 8 = 640 possibilities (not all of the chromosomes are unique). Is this a global minimum? That's a tough question. At least the

-5

--0

0.2

0.4
U

0.6

08 .

Figure 5. The backscattering pattern from a uniform grid of perfectly conducting strips. The incident field is polarized with the electric field parallel to the strip edge. There are 40 strips 0.0371lambda wide and spaced 0.11lambda apart.

4. Optimizing RCS backscattering from thinned grids of strips

The first example finds the lowest-possible relative backscattering-sidelobe level from a grid of strips, by removing selected strips from a periodic grid with 2M perfectly conducting strips. We begin with a finite periodic array of perfectly conducting strips, as shown in Figure 3. The strips are infinitely long in the z10
L

IEEE Antennas and Pm-n

h&gZine, Vol. 37,No. 2, April 1995

answer seems pretty good. Running the algorithm several different times did not produce a better result. Increasing the number of mutations and number of chromosomes didn't improve the solution, either. As a result, I am confident this solution is good. Another possible check of the solution would be to optimize the problem using simulated annealing, and compare the results. Figure 7 charts the performance of the algorithm in terms of best cost and average cost versus iteration. The algorithm quickly found the optimum solution in the second iteration. Average cost of the 80 chromosomes generally decreases with each iteration. Sometimes the average cost increases, due to poor offspring and mutations.
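The sorted-array trick used for the cost function can be sketched in a few lines. This is Python (the article's code is MATLAB), the helper name is mine, and the pattern is assumed to be sampled starting at the main-beam peak, φ = 90°:

```python
import math

def max_rel_sidelobe(pattern):
    """Return the highest sidelobe relative to the peak (linear power ratio).

    Compares the pattern, sampled starting at the main-beam peak, against the
    same samples sorted in descending order: the first disagreement marks where
    the monotone main-beam falloff ends and the sidelobe region begins."""
    ranked = sorted(pattern, reverse=True)
    for sample, best in zip(pattern, ranked):
        if sample != best:
            return best / ranked[0]   # highest sidelobe over main-beam peak
    return 0.0                        # monotone pattern: no sidelobes found

# Synthetic pattern: main beam falls 10 -> 8 -> 5 -> 2, then sidelobes peak at 3.
sigma = [10.0, 8.0, 5.0, 2.0, 1.0, 3.0, 2.0, 0.5, 1.5]
ratio = max_rel_sidelobe(sigma)
print(ratio, 10 * math.log10(ratio))  # 0.3, about -5.2 dB
```

Since the RCS samples are powers, the dB conversion is 10 log10 of the ratio; a cost of -13.27 dB for the uniform grid corresponds to a ratio of about 0.047.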
Figure 6. The backscattering pattern from an optimized thinned grid. This grid has 24 out of 40 strips present, and has a maximum relative-sidelobe level of -17.1 dB.

Figure 7. A plot of the average and best costs over 8 iterations.

5. Optimizing sidelobe levels of nonuniformly spaced arrays

A slightly more difficult example is to find the element spacings, of a nonuniformly spaced array, that yield the lowest-possible maximum relative-sidelobe level. Figure 8 shows a diagram of a linear array, having 2Nel point sources, with sin φ element patterns. The array far-field pattern is given by

    F(u) = sin(φ) Σ (n = 1 to Nel) cos(k Xn u),    (6)

where u = cos φ, d1/2 is the distance of element 1 from the physical center of the array, and dm is the spacing between element m−1 and element m. Note that the distance from element m to the center of the array is Xm = d1/2 + Σ (i = 2 to m) di. This representation of the element spacing insures that element n is closer to the array center than element n+1. The minimum spacing is assumed to be > 0; otherwise, one element would lie on top of the next element.

Figure 8. A diagram of a nonuniformly spaced linear array.

There are an infinite number of continuous element-spacing combinations. A binary encoding of the spacing between elements brings the number of possible combinations to a large, but finite, value. The element spacing is represented by the formula

    dn = d0 + Σ (m = 1 to Npbit) bn[m] Δ (0.5)^(m−1),    (7)

where

    d0 = the minimum allowable element spacing > 0;
    Npbit = the number of bits representing the spacing;
    bn = the vector containing the binary code;
    Δ = the largest quantization level.

This optimization problem has Nel parameters that must be encoded into a chromosome. The genetic algorithm optimized a 48-element array, with sin φ element patterns and three-bit spacing accuracy [12]. Each chromosome has 3 bits/parameter x 24 parameters = 72 bits. An exhaustive search for this chromosome definition requires evaluating 2^72 possible combinations. The minimum spacing is assumed to be d0 = λ/4. This parameter is physically limited by the size of the antenna elements and by mutual coupling. In this problem, Δ = 0.5λ, so the maximum spacing is d0 + 1.75Δ = 1.125λ.

Figure 9 shows the far-field pattern of a 48-element array, with sin φ element patterns and uniform spacings of λ/2. The first sidelobe is -13 dB below the peak of the main beam.

Figure 9. The far-field pattern of a 48-element uniform array of sin φ elements, spaced λ/2 apart.

Figure 10 shows the resulting far-field pattern, due to the optimized element spacings, given by

    dn = 0.25, 0.25, 0.25, 0.375, 0.25, 0.375, 0.375, 0.25, 0.375, 0.25, 0.5, 0.25,
         0.5, 0.375, 0.375, 0.375, 0.5, 0.5, 0.625, 0.5, 0.75, 1.0, 0.75, 0.875 λ.

The resulting far-field pattern has a maximum relative-sidelobe level of -27.2 dB. Increasing the number of bits representing the spacing resulted in a negligible improvement after optimization, so quantization noise doesn't seem to be limiting the solution in this case. A possible extension to this problem is to represent d0 in the chromosome as another three-bit parameter.

Figure 10. The far-field pattern of an optimized (using a genetic algorithm) 48-element array, with sin φ element patterns.

This same problem was optimized with a quasi-Newton algorithm, in the MATLAB Optimization Toolbox. The optimized far-field pattern appears in Figure 11. Its maximum relative-sidelobe level is -21.2 dB, or 6 dB higher than the genetic-algorithm result. The quasi-Newton algorithm out-performed genetic algorithms for arrays with 8 elements or less. As the number of parameters increases, the quasi-Newton method is more likely to get stuck in local minima.

Figure 11. The far-field pattern of an optimized (using a quasi-Newton algorithm) 48-element array, with sin φ element patterns. (Maximum relative-sidelobe level: -21.18 dB.)

6. Optimizing RCS backscattering from resistively loaded strips

This example demonstrates how to use genetic algorithms to find resistive loads that produce the lowest maximum-backscatter relative-sidelobe level from a perfectly conducting strip [13]. The backscattering pattern of a 6λ strip appears in Figure 13, and its relative-sidelobe level is about 13 dB below the peak of the main beam. A model of the loaded strip is shown in Figure 12. Assuming the incident electric field is parallel to the edge of the strip, the physical-optics backscattering RCS is given by [14]

    σ(φ) = (4/k) | kas su(2kau) + Σ (n = 1 to N) [s/(s + 2ηn)] kbns su(kbnu) cos(2kxnu) |²,    (8)

where the first term is due to the perfectly conducting center section, the sum runs over the N resistive loads on each side, and

    s = sin φ, u = cos φ,
    su(x) = sin x / x,
    2a = width of the perfectly conducting strip,
    xn = distance from the strip center to the center of load n,
    bn = width of load n = Σ (m = 1 to Bw) bw[m] 2^(1−m) W,
    ηn = resistivity of load n = Σ (m = 1 to Br) br[m] 2^(1−m) R,
    Bw, Br = number of bits representing the load width and resistivity,
    bw, br = arrays of binary digits that encode the values for the load widths and resistivities,
    W, R = width and resistivity of the largest quantization bit.

Figure 12. A diagram of resistively loaded perfectly conducting strips.

Eight resistive loads are placed on each side of a perfectly conducting strip that is 6λ wide. The widths and resistivities of these loads are optimized to reduce the maximum relative-sidelobe level. Both the width and resistivity of each load are represented by five quantization bits, and W = 1 and R = 5. The optimized values, arrived at by genetic algorithms, are

    ηn = 0.16, 0.31, 0.78, 1.41, 1.88, 3.13, 4.53, 4.22
    wn = 1.31, 1.56, 1.94, 0.88, 0.81, 0.69, 1.00, 0.63 λ.

These values result in a maximum relative-backscattering-sidelobe level of -33.98 dB. Figure 14 shows the optimized backscattering pattern. The resistive loads were also optimized using a quasi-Newton method that updates the Hessian matrix using the BFGS formula. A true gradient search was not used, because the derivative of Equation (8) is difficult to calculate. The quasi-Newton algorithm performed better than genetic algorithms for ten or fewer loads. Using the quasi-Newton method in the previous example resulted in a maximum relative-sidelobe level of -36.86 dB. When 15 loads were optimized, genetic algorithms were clearly superior.
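Equation (7)'s three-bit spacing code from the previous section is easy to check numerically. A small Python sketch (variable names mine), using d0 = 0.25λ and Δ = 0.5λ as in the 48-element example:

```python
def spacing(bits, d0=0.25, delta=0.5):
    """Decode a spacing gene per Equation (7):
    d = d0 + sum over m of bits[m] * delta * 0.5**m, with m = 0, 1, 2, ..."""
    return d0 + sum(b * delta * 0.5 ** m for m, b in enumerate(bits))

# The 8 possible 3-bit codes step the spacing from 0.25 to 1.125 wavelengths
# in uniform 0.125-wavelength increments.
for code in range(8):
    bits = [(code >> (2 - m)) & 1 for m in range(3)]
    print(bits, spacing(bits))
```

The same decoding pattern, a minimum value plus binary-weighted increments, is what Equation (8)'s load widths and resistivities use, just with W or R in place of Δ.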
7. Lessons learned

Figure 13. The backscattering pattern from a 6λ perfectly conducting strip. (Maximum relative-sidelobe level: -13.33 dB.)
Figure 14. The optimized backscattering pattern from a 6λ perfectly conducting strip with eight resistive loads.

The person programming genetic algorithms has many variables to control, and trade-offs to consider. For instance:

Number of bits that represent a parameter. More bits give greater accuracy, but slow convergence. Some guidance may be obtained from formulas developed to relate statistical errors to sidelobe levels or null depths. For instance, the rms phase error of an array with quantized phase shifters is given by [15]

    σq = π / (√3 2^B),    (9)

where B is the number of bits in the phase shifter, or in the chromosome. The relative rms sidelobe level for the array is σq²/N. If the relative sidelobe level is -20 dB, then B is the next integer larger than 2.

Number of chromosomes in the initial random population. More chromosomes provide a better sampling of the solution space, but slow convergence. Ideally, I try to have the total number of chromosomes equal to about 10 times the number of bits in a chromosome. If there are a lot of bits in a chromosome, then the computer may have to swap data with the hard drive, because it runs out of RAM. In this case, I use many fewer chromosomes. You will know when your program is swapping data with the hard drive, because the iterations take a long, long time.

Generating the random list. The type of probability distribution and the weighting of the parameters have a significant impact on the convergence time. I often use a normal distribution to generate the bits in the chromosomes. The center of an array will be more dense than the edges, for low sidelobes in the radiation pattern. Thus, the probability of an element or a strip being present goes down, moving from the center of the array to the edges. Using a priori information helps the algorithm converge faster.

Natural selection. Several methods are available for deciding which chromosomes to discard. In the examples presented here, I discarded chromosomes whose costs were worse than a -13 dB relative-sidelobe level. I figured I could get this sidelobe level with a uniform array of antenna elements or strips. At first, most chromosomes are discarded, so many new random chromosomes must be generated at each iteration. However, I found that the algorithm converged much faster when the survival requirement was set at -13 dB, instead of just discarding 50% of the chromosomes. This approach more than compensated for the time taken to generate the additional random chromosomes.

Pairing the chromosomes for mating. Chromosomes may be paired from the top to the bottom of the list, randomly, best-with-worst, etc. I have tried many ways, and have found that pairing from the top to the bottom of the ranked list seemed to work best.

Number of mutations. Mutations guard against the algorithm getting stuck in a local minimum, but slow convergence. Generally, I mutate about 1% of the bits each iteration. I usually cheat, by not letting the best chromosome in each iteration mutate. I've run several cases where the best chromosome mutated, and the algorithm either failed to find it again, or took a long time to find it again.

Algorithm convergence. Determining a stopping point for the algorithm is difficult. Eventually, the natural-selection and mating processes will cause all of the chromosomes to be the same, except for the mutated chromosomes. I let the algorithm run for 20 to 50 iterations. Sometimes it converges to a good solution very quickly; other times it takes a long time. You will find that the algorithm doesn't always converge on the same solution on separate runs. There is no guarantee that the global optimum will be found. If the algorithm is having problems converging, then 1) increase the number of mutations, 2) increase the number of chromosomes, or 3) add some constraints that you know about from the physics of the problem. Trying another optimization algorithm may be a good idea, too.
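The "cheat" of protecting the best chromosome is ordinary elitism, and it combines naturally with the 1%-of-bits mutation budget. A sketch of such a mutation step (Python; the helper name and population layout are mine, and the population is assumed to be ranked so that row 0 holds the best chromosome):

```python
import random

def mutate(pop, rate=0.01, rng=random):
    """Flip about rate * (total bits) distinct bits in place,
    never touching pop[0], the elite (best) chromosome."""
    nchrom, nbits = len(pop), len(pop[0])
    nmut = max(1, round(rate * nchrom * nbits))
    # candidate (row, column) positions exclude the elite row 0
    slots = [(i, j) for i in range(1, nchrom) for j in range(nbits)]
    for i, j in rng.sample(slots, nmut):
        pop[i][j] = 1 - pop[i][j]
    return nmut

pop = [[0] * 20 for _ in range(10)]   # 10 chromosomes x 20 bits, all zero
flips = mutate(pop, rate=0.01)
print(flips, sum(sum(row) for row in pop), pop[0])  # 2 flips; elite row untouched
```

With 10 chromosomes of 20 bits, a 1% rate flips 2 of the 200 bits per iteration; scaling the same rule to the article's 80-chromosome, 20-bit thinned-grid example gives 16 flips per iteration.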
8. Conclusions

quasi-Newton algorithm out-performed genetic algorithms for a small number of parameters. Few local minima and no quantization noise favor the quasi-Newton algorithm. As the number of parameters increases, genetic algorithmsare the best approach. The examples presented here demonstrate how to optimize antenna- and backscattering-sidelobe levels, using a genetic algorithm. Only a small fraction of the total number of possible combinations are checked to arrive at the optimal solution. Sometimes, parameters are inherently discrete, as in the thinned-grid example, while at other times, continuous parameters am quantized, as in the two final examples. Quantization noise must be considered when optimizingcontinuous parameters. Hopefully, enough information is provided in this article to allow the reader to start exploring optimization with genetic algorithms. Figures 1 and 2 are good starting points for writing your own algorithms. Very quick introductions are found in references [3] and [6]. Much more detail about the algorithms may be found in [lo]. Some of the hints in Section 7 should be useful for the aspiring computer geneticist. The Internet is an excellent place to look for genetic algorithms and information about genetic algorithms. Probably the best place to start is the Genetic Algorithm Archive, at www.aic.nrl.navy.mil.
9. Acknowledgment

I would like to thank Christopher McCormack, of the University of Michigan, for introducing me to genetic algorithms. I would also like to thank Sue Haupt, of the University of Colorado, for her diligent review of this article.
10. References

1. S. Kirkpatrick, C. D. Gelatt, Jr., and M. P. Vecchi, "Optimization by simulated annealing," Science, 220, 4598, 1983, pp. 671-680.

2. W. H. Press, et al., Numerical Recipes, New York, Cambridge University Press, 1992.

3. J. H. Holland, "Genetic algorithms," Scientific American, July 1992, pp. 66-72.

4. L. Davis (ed.), Genetic Algorithms and Simulated Annealing, Los Altos, CA, Morgan Kaufmann Publishers, Inc., 1987.

5. C. S. Ruf, "Numerical annealing of low-redundancy linear arrays," IEEE Transactions on Antennas and Propagation, AP-41, 1, January 1993, pp. 85-90.

6. E. Michielssen and R. Mittra, "RCS reduction of dielectric cylinders using a simulated annealing approach," IEEE AP-S Symposium Digest, Vol. III, May 1990, pp. 1268-1271.

7. E. Michielssen, et al., "Design of lightweight, broad-band microwave absorbers using genetic algorithms," IEEE Transactions on Microwave Theory and Techniques, MTT-41, 6/7, June/July 1993, pp. 1024-1031.

8. R. L. Haupt, "Thinned arrays using genetic algorithms," IEEE Transactions on Antennas and Propagation, AP-42, 7, July 1994, pp. 993-999.

9. MATLAB Reference Guide, Natick, MA, The Mathworks, Inc., 1992.


10. D. E. Goldberg, Genetic Algorithms, New York, Addison-Wesley, 1989, Chapters 1-4.

11. R. L. Haupt and A. S. Ali, "Optimized backscattering sidelobes from an array of strips using a genetic algorithm," Proceedings of the Applied Computational Electromagnetics Conference, Monterey, CA, March 1994, pp. 266-270.

12. R. L. Haupt, "Optimization of array antennas using genetic algorithms," Proceedings of the Progress in Electromagnetics Research Symposium, Noordwijk, The Netherlands, July 1994, p. 172.

13. R. L. Haupt, "Comparison between genetic and gradient-based optimization algorithms for solving electromagnetics problems," Proceedings of the Sixth Biennial IEEE Conference on Electromagnetic Field Computation, Aix-les-Bains, France, July 1994, p. 229.

14. R. L. Haupt, "Grating lobes in the scattering patterns of edge-loaded strips," IEEE Transactions on Antennas and Propagation, AP-41, 8, August 1993, pp. 1139-1143.

15. M. Skolnik (ed.), Radar Handbook, New York, McGraw-Hill, 1990.

16. J. F. Frenzel, "Genetic algorithms, a new breed of optimization," IEEE Potentials, October 1993, pp. 21-24.

Introducing Feature Article Author

Randy L. Haupt received his BS from the USAF Academy (1978), MS from Northeastern University (1983), and PhD from The University of Michigan (1987), all in electrical engineering, and an MS in engineering management from Western New England College (1981). He is currently a Lt. Col. in the USAF, and a Professor of Electrical Engineering at the USAF Academy, CO. He previously worked as an engineer on the USAF Over-the-Horizon Radar system, and as an antenna engineer for Rome Air Development Center. His research interests include antennas, radar cross section, time-frequency analysis, numerical methods, and chaos theory. Dr. Haupt is a registered Professional Engineer in Colorado and a member of Tau Beta Pi. He was named the 1993 Federal Engineer of the Year by the National Society of Professional Engineers.

Editor's Comments Continued from page 6


plane with one or more coaxial loops. Again, this has some significant advantages. He describes the design, and shows some experimental results in this issue. It also makes a pretty cover picture!

If you do numerical electromagnetics, read John Volakis' foreword to the EM Programmer's Corner in this issue. By all means, read the contribution there by Roberto Cecchini, Roberto Coccioli, and Giuseppe Pelosi. It describes a very useful code for analyzing artificially anisotropic surfaces, and the code is available from the authors. But also consider John's request for input regarding error estimation and control. The time has come to address this very important topic.

Take a look at Ed Miller's PCs for AP and EM Reflections. His discussion regarding ethics in reviewing, presenting, and publishing papers is very important. Your input is needed.

Someone new. With this issue, we welcome Tom Milligan to the Magazine Staff. Tom joins Hal Schrank as Associate Editor for the Antenna Designer's Notebook. This has long been one of the most popular elements of our Magazine, thanks to Hal's hard work over more than a decade, and your contributions. Tom will be working with Hal to continue to make the Antenna Designer's Notebook as valuable as it can be.

New Transactions Editor. At the January AdCom meeting, George Uslenghi was approved as the in-coming Editor of our Transactions. Congratulations (and sympathy), George! He will take office July 1. An announcement appears in this issue. Note that submissions should continue to be sent to Ron Marhefka, our current Editor, until after the week of June 18 (Symposium week). Thereafter, they should be sent to George.

Report on those meetings! There are several reports on meetings in this issue. Everyone can't go to every meeting. By sharing what happens at the meetings, we expand our ability to benefit from them. In several cases, the organizers of the meetings were kind enough to do what I asked, namely, to have the chairs of the sessions prepare a one-paragraph summary of the most notable results and discussion in each session. It was then easy to prepare a report for the Magazine, and the views of a number of the participants could be included. If you're organizing a meeting, please try to do this.

Onward to Newport Beach! Spring is almost here, Summer is coming, and people are beginning to think they might have a little time for which they can plan something to do, instead of just reacting to the crisis of the moment. Fear not: it's probably an illusion. However, do try to make some time for sharing and interacting with your colleagues. If you can go, Newport Beach really is going to be fun, in so many ways. Regardless, I hope you have the chance to share your work and your pleasure with others, and that we have an opportunity to see each other some time this year!

IEEE Antennas and Propagation Magazine, Vol. 37, No. 2, April 1995
