
Genetic Algorithms (GAs)

▪ Approx. models: physical vs. mathematical; derivative- vs. function-based; using Taylor series, sensitivity-analysis based, response surfaces, Neural Networks
▪ Analysis for design: cost vs. accuracy; avoiding expensive reanalysis
▪ Optimisation algorithms: all-in-one programming vs. decomposition; decoupling of multi-disciplinary modules; linear, nonlinear, discrete, stochastic (random, simulated annealing, genetic algorithm)
▪ Optimisation procedure: formulation, interpretation, coordination; with & without coordination; hierarchical, non-hierarchical, hybrid
▪ Human interface: distributed shared computing, web-based design
Contents
• Optimisation methods revisited
• Soft computing tools
• Introduction to Genetic Algorithms
• Coding/representation of design variables
• Mechanisms of Genetic Search
• Fitness definition and selection of members
• Basic Operations
• Schema Theorem
• Constraint Handling
• Expression based Strategies
• Real coded Genetic Algorithms
Optimisation methods revisited
▪ Enumerative schemes
 Evaluate objective function at a number of points in the
design space. Most practical problems are not amenable
to this approach
▪ Random search
 Random-walk search techniques are, in effect, no better than enumerative schemes
▪ Mathematical programming
 Efficient for a restrictive class of problems only.
Requires continuity and unimodality of the design space
▪ Soft computing tools bridge this gap
▪ The genetic algorithm is the most popular of the soft computing tools
Soft Computing tools
▪ Soft computing tools are intended to emulate the human brain. They are based on partial truth and are approximate
▪ Soft computing tools tolerate uncertainty and can be less precise than conventional computational methods (hard computing)
▪ Some soft computing tools are
 Genetic algorithms (GAs)
 Simulated annealing (SA)
 Neural networks
 Immune network modelling
 Fuzzy logic
 Machine learning paradigms
Soft Computing tools
▪ Genetic algorithms (GAs) and Simulated annealing
(SA) are used for searching optima in mixed design
space without gradient information
▪ Neural networks are used for function
approximations, identifying causality from data,
control system synthesis, combinatorial optimisation
▪ Immune network modelling methods are used for
decomposition based design, enhancing efficiency of
GA based search, multi-criteria based optimisation
▪ Fuzzy logic is used for modelling of manufacturing
processes, design with non-crisp information
▪ Machine learning paradigms are used in function
modelling and deriving context based rules
Intro to Genetic Algorithms
▪ GAs belong to the family of evolutionary algorithms, which also includes two similar approaches: evolution strategies and evolutionary programming
▪ Evolutionary algorithms are search and optimisation procedures that have their origin and inspiration in the biological world
▪ On the grounds of the evidence accumulated so far, we accept the Darwinian theory of evolution, “the survival of the fittest”, in a changing environment
▪ Evolutionary algorithms abstract and mimic the traits of this ongoing struggle to survive by adapting, and in doing so they search and optimise
Intro to Genetic Algorithms
▪ Engineering solutions are typically based on Newtonian mechanics and are largely deterministic; in biological evolution, solutions are more probabilistic
▪ In GA, we borrow ideas from the theory of biological evolution only as an inspiration
 We note that evolution has taken millions of years and
even now it is an ongoing process.
 Evolution is an extremely complex process and cannot be modelled by us fully.
 We develop a model that essentially captures the distinctive features of natural evolution.
 Also, since we use man-made computing machines, we freely adopt suitable methodologies so that problems are solved quickly within the available computing resources
Generic algorithm for evolutionary
methods
▪ Basic algorithm
produce an initial population of individuals
while (termination condition is not met) do
{
evaluate the fitness of all individuals
select fitter individuals for reproduction
produce new individuals
generate a new population by inserting some new good
individuals and by discarding some old bad individuals
mutate some individuals
}
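As a concrete illustration, this loop can be written in Python roughly as follows; a minimal sketch assuming bit-string individuals and a user-supplied fitness function, with the operator details standing in for the mechanisms discussed later in the lecture.

import random

def evolve(fitness, n_bits=10, pop_size=20, pc=0.7, pm=0.01, generations=50):
    # produce an initial population of random bit-strings
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):                         # termination: fixed number of generations
        scores = [fitness(ind) for ind in pop]            # evaluate the fitness of all individuals
        weights = [s + 1e-9 for s in scores]              # guard against an all-zero population
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = random.choices(pop, weights=weights, k=2)   # select fitter individuals
            c1, c2 = list(p1), list(p2)
            if random.random() < pc:                       # produce new individuals (crossover)
                cut = random.randint(1, n_bits - 1)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (c1, c2):
                for i in range(n_bits):                    # mutate some individuals
                    if random.random() < pm:
                        child[i] = 1 - child[i]
                new_pop.append(child)
        pop = new_pop[:pop_size]                           # the new population replaces the old one
    return max(pop, key=fitness)

# usage: maximise the number of ones in the string
best = evolve(fitness=sum)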
Basis for GA based optimisation
▪ Global search strategies of GAs offer improved
performance over mathematical programming
(hard computing methods)
▪ Populations of individuals (designs), each represented by a string (akin to a chromosome-like structure), are manipulated in a manner analogous to biological evolution
 global search using information from different regions of the design space. This can locate the global optimum as well as relative (local) optima
 easily implementable for mixed variable problems
 requires no gradients
Coding of variables in Binary GA
▪ Variables are mapped into a string
▪ Assume that X can take values from the following
X =1, 3, 4, 6, 7, 8, 11, 14
▪ A 3 digit string offers eight possible combinations
that can provide such mapping
000 → 1    001 → 3
010 → 4    011 → 6
100 → 7    101 → 8
110 → 11   111 → 14
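As an illustration, this table-lookup decoding can be written directly in Python; the list name values is an assumption:

values = [1, 3, 4, 6, 7, 8, 11, 14]          # admissible values of X, in coded order

def decode(bits):
    # interpret the 3-bit string as an index into the value list
    return values[int(bits, 2)]

assert decode('000') == 1 and decode('101') == 8 and decode('111') == 14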
Representation of Discrete/integer variables
▪ Binary representation is ideal for representing
integer variables
▪ The required string length p is given by
$2^p \ge \dfrac{X_U - X_L}{A_c} + 1$, where $A_c$ is the precision;
for a discrete variable with k alternatives, $2^p \ge k$
▪ Properties of coding
 Equality - one-to-one correspondence between a variable and its representation in the GA (desirable)
 Inequality - non-unique representation of a design variable (not desirable)
Inequality handling
▪ Penalty approach
 Compute the smallest p that satisfies the inequality
$2^p \ge \dfrac{X_U - X_L}{A_c} + 1$
 assign a unique string to each of the m discrete values
 assign the remaining strings to out-of-bound integers
▪ Excessive distribution approach
 assign the excess bit-strings to values that are already assigned
 this expands the design space unevenly; alternatively, one could target a string length that gives a more even expansion of the design space
Non-binary Representations
▪ Binary coding is not essential for implementing GA
▪ For example, a composite laminate can be
represented purely based on its stacking sequence
of admissible angles (say 0, +45, -45, and 90
represented by 1,2,3 and 4 respectively)
 Represent the 16-ply laminate (+45/-45/0₄/90₂)s by the 16-digit code 2311114444111132, using the ply-angle code 0° → 1, +45° → 2, -45° → 3, 90° → 4
 The subscript s indicates symmetry
 If symmetry is implicit in the design, the 8-digit string 23111144 can be used. This halves the number of possible designs
Design variables mapping
▪ Map design variables into finite-length strings. This mapping proceeds under some guidelines:
 use the smallest alphabet that can code the problem
 a large alphabet suppresses the similarities that are exploited by the GA search
 Consider a variable X that can take on integer values between 1 and 26: it can be mapped using a 5-digit binary string or the alphabet A-Z
X    Binary representation    Non-binary representation
8    01000                     H
12   01100                     L
24   11000                     X
19   10011                     S
Design variables mapping
▪ If a string of length p is used to represent X, the available integers run from 0 to 2^p − 1
▪ To represent design variables in this range, use a linear map
▪ For p = 5, X_min maps to 00000 and X_max to 11111. The precision of the mapped coding is
$A_c = \dfrac{X_{max} - X_{min}}{2^p - 1} = 0.0323\,(X_{max} - X_{min})$
▪ Intermediate values of binary numbers can be
obtained by linear scaling
▪ Precision can be increased by increasing p or
decreasing (Xmax-Xmin)
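A small sketch of this linear map, assuming the bounds are called x_min and x_max:

def binary_to_real(bits, x_min, x_max):
    # linearly map a p-bit string onto [x_min, x_max]
    p = len(bits)
    step = (x_max - x_min) / (2**p - 1)        # the precision Ac
    return x_min + int(bits, 2) * step

# for p = 5 the precision is (x_max - x_min)/31, i.e. about 0.0323*(x_max - x_min)
print(binary_to_real('00000', 2.0, 5.0), binary_to_real('11111', 2.0, 5.0))   # 2.0  5.0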
Mechanism of Genetic Search
▪ To mimic biological evolution, there is a need to
define a population of designs that must evolve
from one generation to another
▪ The environment to which the population must adapt is defined by the fitness measure of a design (the objective function) and all the constraints
▪ The basic operators which transform a given
population into one that is better adapted to the
environment are:
 Reproduction,
 Crossover, and
 Mutation
Fitness definition
▪ Consider a function maximisation problem
 maximise f(x), xL ≤ x ≤ xU
▪ For function minimisation, the following choices of fitness measure can be adopted
 F = 1/f(x)
 F = Fmax − f(x)
▪ For the treatment of constraints, the fitness
function must be appropriately modified
 Penalty function formulation is traditionally employed
 Fitness function is then sensitive to relative amplitude of
objective function and constraint
 An alternative is to use an expression-operator-based approach
Selection of members for
reproduction
▪ Several methods are possible for selecting members based on their fitness
▪ One of the better known methods is fitness-proportionate selection
▪ In this method, we first obtain the sum of the fitnesses of all individuals
$S = \sum_{i=1}^{popsize} f_i$
▪ Then a probability $p_i$ is assigned to each individual:
$p_i = \dfrac{f_i}{S}$
▪ A cumulative probability is obtained for each individual by adding the selection probabilities of the preceding population members
Selection of members for
reproduction
$c_i = \sum_{k=1}^{i} p_k, \quad i = 1, 2, \dots, popsize$
▪ A random number r, uniformly distributed in [0,1], is drawn popsize times, and each time the ith string is selected such that c_{i-1} < r ≤ c_i. When r ≤ c_1, the first string is selected.
 For example, for a population of four strings with p1 = 0.30, p2 = 0.20, p3 = 0.40 and p4 = 0.10, we get c1 = 0.30, c2 = 0.50, c3 = 0.90, c4 = 1.0
 If r = 0.25, then individual 1 is selected since r < c1. If r = 0.96, then individual 4 is selected (c3 < 0.96 < c4)
Selection of members for
reproduction
▪ It can be seen that the fittest members have proportionately more chances of being reproduced, and the strings representing them can be selected more than once.
▪ Note that the method works only if fitness is
positive
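A direct sketch of this roulette-wheel selection; it assumes, as noted above, that all fitness values are positive:

import random

def roulette_select(fitnesses):
    # return the index of one selected individual
    s = sum(fitnesses)                          # S, the sum of all fitnesses
    r, c = random.random(), 0.0                 # r uniform in [0, 1]
    for i, f in enumerate(fitnesses):
        c += f / s                              # cumulative probability c_i
        if r <= c:
            return i
    return len(fitnesses) - 1                   # guard against floating-point round-off

# drawing popsize times builds the mating pool
pool = [roulette_select([0.30, 0.20, 0.40, 0.10]) for _ in range(4)]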
Mechanics of Genetic Search
▪ Reproduction
 Biases the evolutionary process towards more fit
members of the population. It is implemented by adding
extra copies of more fit members into the population pool
 no new designs are created during this process
▪ Crossover
 This mechanism brings new information into the population. It is akin to the exchange of genes between mating partners. In GA, the crossover mechanism is dominant
▪ Mutation
 This process toggles bits of string at very few random
sites to prevent premature loss of genetic information.
The process is necessary due to finite size populations
used in practical simulations
Basic operations
▪ Reproduction
 Assign each design i a probability of selection $p_i$ given by
$p_i = \dfrac{F_i}{\sum_{i=1}^{N} F_i}$
 Using these probabilities, select N designs at random for the crossover and mutation operations
▪ Crossover
 Of the N member population generated during
reproduction, select mating pairs, two at a time, to
conduct the cross over operation
Basic operations
▪ parent 1: 1100010011
▪ parent 2: 1011011011
▪ crossover after the 6th bit gives
▪ child 1: 1100011011
▪ child 2: 1011010011
▪ Repeat for all mating pairs
▪ Note that real variables can also be used to code design variables into string-like representations
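The example above corresponds to a single-point crossover with the cut after the sixth bit; a minimal sketch:

import random

def single_point_crossover(p1, p2, site=None):
    # swap the tails of two equal-length strings at a random (or given) site
    if site is None:
        site = random.randint(1, len(p1) - 1)
    return p1[:site] + p2[site:], p2[:site] + p1[site:]

c1, c2 = single_point_crossover('1100010011', '1011011011', site=6)
assert (c1, c2) == ('1100011011', '1011010011')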
Basic operations
▪ Consider the design of a rectangular laminated plate, where the plate dimensions are to be sized in addition to determining the number of 0°₂, ±45° and 90°₂ groups
 parent 1: (10.1, 7.3, 5, 0, 0) is a 10.1 × 7.3 plate of layup (0₁₀)s
 parent 2: (12.5, 6.4, 2, 1, 2) is a 12.5 × 6.4 plate of layup (0₄/±45/90₄)s
▪ In such representations, a child design will only contain numbers that were present in the original population
▪ One solution is an averaging crossover
Basic operations
▪ For a string of length L, generate a real random number w = rL between 0 and L, where r is uniform in [0, 1]; define [w] as the integer portion of w
▪ Take the first [w] variables from one parent, the last L − [w] − 1 variables from the second parent, and average the ([w]+1)th variable between the two parents
▪ If x1 and x2 are the ([w]+1)th variables of parents 1 and 2, the averaged child variable is
$x_c = (w - [w])\,x_1 + (1 - w + [w])\,x_2$
 For w = 1.6, [w] = 1 and the child is generated by taking the first variable from parent 1, the last three from parent 2, and averaging the second variable:
 xc = (1.6 − 1)(7.3) + (1 − 1.6 + 1)(6.4) = 6.9
 child 1 = 10.1 6.9 2 1 2
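A sketch of this averaging crossover for real-valued design vectors; the function name and arguments are illustrative:

import math, random

def averaging_crossover(parent1, parent2, w=None):
    # blend two real-valued parents at a random position w in (0, L)
    L = len(parent1)
    if w is None:
        w = random.random() * L
    k = math.floor(w)                                   # [w], the integer portion
    frac = w - k
    child = list(parent1[:k])                           # first [w] variables from parent 1
    child.append(frac * parent1[k] + (1 - frac) * parent2[k])   # averaged variable
    child.extend(parent2[k + 1:])                       # remaining variables from parent 2
    return child

# slide example: w = 1.6 gives the child (10.1, 6.94, 2, 1, 2)
print(averaging_crossover([10.1, 7.3, 5, 0, 0], [12.5, 6.4, 2, 1, 2], w=1.6))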
Basic operations
▪ Mutation
▪ Select strings at random from the population
pool, and at randomly selected sites, switch zero
to one and vice versa
▪ The crossover and mutation operators are assigned finite probabilities
▪ pc ~ 0.7
▪ pm ~ 0.01
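A sketch of this bit-flip mutation applied site by site:

import random

def mutate(bits, pm=0.01):
    # flip each bit independently with probability pm
    return ''.join(('1' if b == '0' else '0') if random.random() < pm else b
                   for b in bits)

mutated = mutate('1100010011', pm=0.01)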
Basic operations
▪ Permutation
 An operator referred to as permutation can be exploited
with some success
 Permutation may be viewed as a mutation that is applied
over a segment of the string
 For the balanced (45/-45/0/90₂/0/45/0₂/-45)s symmetric laminate, the coded string is
 before permutation: 2314412113
 after permutation: 2312144113
 which results in a (45/-45/0/45/0/90₂/0₂/-45)s laminate
Schema Theorem
▪ Schema theorem serves as the analysis tool for
the GA process
▪ It explains why GAs work by showing the
expectation of schema survival
▪ Schema theorem is applicable to
 a canonical GA
 with binary representation
 fixed length individuals
 fitness proportional selection
 single point crossover
 gene-wise mutation
Schema – definition, order and
length
▪ A schema is a template that defines a subset of
strings with similarities at certain string positions
▪ The schema 1##0#1 is the set of all words of length 6 with a 1 at the 1st and 6th positions, a 0 at the 4th position, and either 0 or 1 at the 2nd, 3rd and 5th positions
▪ The order of a schema, O(H), is the number of explicitly stated 0's and 1's
▪ Schema H = 10#1# represents the set of binary strings 10010, 10011, 10110, 10111
▪ The defining length of a schema, δ(H), is the distance between the first and last specific digits in the string
▪ The string 10 of length 2 belongs to 2² = 4 different schemata: ##, #0, 1#, 10
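Order and defining length follow directly from the template; a small sketch using '#' as the wildcard:

def schema_order(h):
    # O(H): number of fixed (non-#) positions
    return sum(c != '#' for c in h)

def schema_length(h):
    # delta(H): distance between the first and last fixed positions
    fixed = [i for i, c in enumerate(h) if c != '#']
    return fixed[-1] - fixed[0] if fixed else 0

assert schema_order('#1#01') == 3 and schema_length('#1#01') == 3
assert schema_order('0####') == 1 and schema_length('0####') == 0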
Schema – definition, order and
length
1001000010 0000001100 1000000000: O(H) = 30, δ(H) = 29
10#####0## 00######0# 100#######: O(H) = 9, δ(H) = 22
#1#01: O(H) = 3, δ(H) = 3
0####: O(H) = 1, δ(H) = 0
▪ m(H, k) denotes the number of instances of H in the kth generation
▪ f(x) denotes the fitness value of x; f(H, k) denotes the average fitness of H in the kth generation
Fitness of a schema
▪ Suppose x is an individual that belongs to
schema H, then x is an instance of H (x ∈ H)
▪ m(H, k) denotes the number of instances of H in
the kth generation
▪ If f(x) denotes fitness value of x, then averaged
fitness of H in kth generation is given by
$f(H, k) = \dfrac{\sum_{x \in H} f(x)}{m(H, k)}$
Effect of GA On a Schema
▪ Each operation in the GA has an effect on a schema:
 Effect of Selection
 Effect of Crossover
 Effect of Mutation
▪ These effects collectively give “Schema Theorem”
Effect of Selection on Schema
▪ Assume fitness-proportionate selection; then the selection probability of an individual x is
$p_s(x) = \dfrac{f(x)}{\sum_{i=1}^{N} f(x_i)}$
where N is the total number of individuals
▪ The expected number of instances of H in the mating pool, M(H, k), is
$M(H, k) = \sum_{x \in H} \dfrac{f(x)}{\bar{f}} = m(H, k)\,\dfrac{f(H, k)}{\bar{f}}$
where $\bar{f}$ is the average fitness of the population
▪ Schemas with fitness greater than the population
average are likely to appear more in the next
generation
Effect of Crossover on schema
▪ Assume that crossover happens at a single point; then schema H survives the crossover operation if
 one of the parents is an instance of the schema H, and
 one of the offspring is an instance of the schema H
▪ Crossover with schema H = #10## surviving:
 Parent 1 = 11010 → child 1 = 11011 ∈ H
 Parent 2 = 10111 → child 2 = 10110 ∉ H
▪ Crossover with schema H = #10## destroyed:
 Parent 1 = 11010 → child 1 = 11111 ∉ H
 Parent 2 = 10111 → child 2 = 10010 ∉ H
Effect of Crossover Operation
▪ Suppose a parent is an instance of a schema H.
When the crossover occurs within the bits of the
defining length, schema is destroyed unless the
other parent repairs the destroyed portion
▪ Given a string of length l and a schema H with
the defining length δ(H), the probability that the
crossover occurs within the bits of the defining
length is δ(H)/(l - 1)
▪ Example: for H = #1##0 (length l = 5, δ(H) = 3), the probability that the crossover site falls within the defining length is 3/4
Crossover Operation
▪ An upper bound on the probability that the schema H is destroyed is given by
$D_c(H) \le p_c\,\dfrac{\delta(H)}{l - 1}$
where $p_c$ is the crossover probability
▪ A lower bound on the probability $S_c(H)$ that H survives is
$S_c(H) = 1 - D_c(H) \ge 1 - p_c\,\dfrac{\delta(H)}{l - 1}$
▪ Schemas with a short defining length are more likely to survive crossover
Mutation Operation
▪ Assume that mutation is applied gene by gene
▪ For a schema H to survive, all fixed bits must
remain unchanged
▪ The probability of a gene not being changed is $(1 - p_m)$, where $p_m$ is the mutation probability of a gene
▪ Net effect of mutation: the probability that a schema H survives mutation is
$S_m(H) = (1 - p_m)^{O(H)}$
▪ Schemas with low order are more likely to survive
Schema Theorem
▪ Expectation of schema H in the next generation ≥ (expectation in the mating pool) × (probability of surviving crossover) × (probability of surviving mutation)
▪ Expectation in the mating pool
$M(H, k) = m(H, k)\,\dfrac{f(H, k)}{\bar{f}}$
▪ Probability of surviving crossover
$S_c(H) \ge 1 - p_c\,\dfrac{\delta(H)}{l - 1}$
▪ Probability of surviving mutation
$S_m(H) = (1 - p_m)^{O(H)}$
Schema Theorem
▪ Mathematically stated,
$E[m(H, k+1)] \ge m(H, k)\,\dfrac{f(H, k)}{\bar{f}}\left(1 - p_c\,\dfrac{\delta(H)}{l - 1}\right)(1 - p_m)^{O(H)}$
▪ The schema theorem states that the schema with
above average fitness, short defining length and
lower order is more likely to survive
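The bound can be evaluated numerically; a small sketch with all quantities passed explicitly (the argument names are illustrative):

def expected_schema_count(m, f_H, f_avg, delta, order, l, pc=0.7, pm=0.01):
    # lower bound on E[m(H, k+1)] from the schema theorem
    return m * (f_H / f_avg) * (1 - pc * delta / (l - 1)) * (1 - pm) ** order

# e.g. a short, low-order schema with 20% above-average fitness in length-10 strings
print(expected_schema_count(m=5, f_H=1.2, f_avg=1.0, delta=2, order=3, l=10))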
Constraint handling - Penalty Formulation
▪ Genetic transformation operations require that a
fitness measure is maximised during the search
process
▪ For unconstrained maximisation, the objective
function itself may be chosen as the fitness value
Z = F(X)
▪ For unconstrained minimisation, the fitness
measure can be defined as
Z = Fmax-F(X)
▪ For constrained minimisation, a composite measure of the objective function and the constraint functions must be used in defining the fitness function
Constraint handling using Penalty
Formulation
▪ Minimise $F = F(x) + P$, and $Z = F_{max} - F$, where P is the penalty term
▪ Penalty term must be chosen carefully so as to
prevent biasing the search in favour of objective
function or constraint functions
▪ For an average fitness of feasible designs $F_{ave}$, define a limit value of the penalty
$L = k\,F_{ave}$, where k ~ 2
$P = \begin{cases} G & \text{if } G \le L \\ L + \varepsilon\,(G - L) & \text{if } G > L \end{cases}$
$G = r \sum_{j=1}^{m} g_j$, where r is the penalty parameter and $g_j$ is a violated constraint
Constraint handling - Penalty Formulation
▪ Scaling prevents constraint violations from biasing the search process.
 If ε = 0, the penalty for all violated constraints is limited to L
 For ε = 0.2 or a similarly small value, constraint violations are allowed to increase the penalty beyond L, with slope ε
 Performance of the penalty function approach is clearly dependent upon user-specified constants:
 penalty parameter r,
 factor k used to establish L, and
 slope parameter ε
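A sketch of this penalty scheme, assuming the violated constraint values g_j are passed in as a list of positive numbers:

def penalty(violations, r=1.0, f_ave=1.0, k=2.0, eps=0.2):
    # scaled penalty P with limit L = k*f_ave and slope eps beyond the limit
    G = r * sum(violations)                 # weighted sum of violated constraints
    L = k * f_ave                           # limit value of the penalty
    return G if G <= L else L + eps * (G - L)

def fitness(f_obj, f_max, violations, **kwargs):
    # Z = Fmax - (F(x) + P) for constrained minimisation
    return f_max - (f_obj + penalty(violations, **kwargs))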
Expression Based Strategies
▪ GA is essentially developed by borrowing ideas
from biological evolution
▪ In the biological analogue of genetic search, the
chromosome is a double-stranded (diploid)
structure and expressed gene at a particular
location is determined on the basis of a dominant
recessive gene theory
▪ Traditional GA implementation represents designs
by a single stranded (haploid) structure
▪ We combine features of feasible and infeasible designs in a given population through the use of an expression operator that is probabilistic in nature
Expression Based Strategies
▪ Consider a temporarily assembled diploid model
 String A (feasible) 1110010110
 String B (infeasible): 1000011011
▪ Expression operator can be applied on a bit-by-
bit basis to determine an expressed chromosome
that would replace string B in the population.
▪ Replace the bit at each location in string B by the corresponding bit of string A with probability ρE.
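A sketch of this bit-by-bit expression operator; the argument rho_e corresponds to ρE above:

import random

def express(feasible, infeasible, rho_e=0.5):
    # replace each bit of the infeasible string by the feasible one with probability rho_e
    return ''.join(a if random.random() < rho_e else b
                   for a, b in zip(feasible, infeasible))

expressed = express('1110010110', '1000011011')   # would replace string B in the population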
Expression Based Strategies
Stepwise Implementation
1. The population is initialized with a uniform random distribution of design variables between the specified lower and upper bounds
2. Evaluate population of designs by evaluating objective and
constraint function values
3. Combine infeasible and feasible designs in the population
through use of an expression operator
4. Determine objective and constraint function values of
expressed designs
5. Selection operation of traditional GA applied to population
of feasible designs
6. Selected designs combined with all infeasible designs to
obtain predetermined population size
7. Perform crossover and mutation as in a traditional GA
implementation
8. Repeat from step 2 until convergence
Expression Based Strategies –
Strategy I
▪ Identify best feasible design in population as Xbest
▪ Rank all infeasible designs in the population (j = 1, ..., N) as Rj, with Rj = N assigned to the design with the most constraint violation
▪ Combine each infeasible design with Xbest through
use of the expression operation on a bit-by-bit basis
$g_i^E = \begin{cases} g_i^B & \text{if } r_i^N \le R_j \\ g_{ij}^v & \text{if } r_i^N > R_j \end{cases}$
$g_i^E$: ith bit on the chromosome representing the expressed design.
$g_{ij}^v$: ith bit on the chromosome representing the jth violated design.
$g_i^B$: ith bit on the chromosome representing the best design.
$r_i^N$: randomly generated integer between 1 and N (the population size)
Expression Based Strategies –
Strategy I
▪ Designs with higher constraint violations thus become more similar to the chromosome of the best design.
Expression Based Strategies –
Strategy II
▪ Selection of feasible-infeasible pair determined by
closeness in terms of the objective function value
▪ Let Δij = obj(i) − obj(j) be the difference in objective function values between an infeasible design j and a feasible design i
▪ The feasible design i with the smallest absolute Δij is chosen for expression.
 A negative Δij is preferred over a positive one of comparable absolute value
Expression Based Strategies –
Strategy II
$g_i^E$: ith bit on the chromosome representing the expressed design.
$g_i^{BF}$: ith bit on the chromosome of the feasible design selected for expression.
$g_{ij}^v$: ith bit on the chromosome representing the jth violated design.
$r_i$: random number (uniform distribution) between 0 and 1
$\rho_E$: fixed probability of expression
$g_i^E = \begin{cases} g_i^{BF} & \text{if } r_i \le \rho_E \\ g_{ij}^v & \text{if } r_i > \rho_E \end{cases}$
Simultaneous Min/Max Identification
Sharing Functions in GA
▪ Based on a principle of sharing available resources
of an environment to maximize individual gains for
distinct species
▪ Sharing is implemented by degrading the fitness of each design in proportion to the number of designs located in its neighbourhood
$\phi_{ij} = \begin{cases} 1 - \dfrac{d_{ij}}{\sigma_{sharing}} & \text{if } d_{ij} < \sigma_{sharing} \\ 0 & \text{otherwise} \end{cases}$
▪ $d_{ij}$ is the metric distance between the ith and jth designs
▪ $\sigma_{sharing}$ is the radius of a defined neighbourhood
Sharing Functions in GA
▪ The fitness of each design is modified as follows
$f_{sharing\_i} = \dfrac{f_i}{\sum_{j=1}^{M} \phi_{ij}}$
 M is the number of designs in the neighbourhood of design i
▪ This helps discourage convergence to a single area of the fitness landscape
▪ If the fitness has to be shared with more individuals, the crowded areas become less attractive
▪ The algorithm should converge to a state in which designs at several distinct optima are obtained and maintained
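A sketch of fitness sharing for a list of designs; the distance metric and the value of sigma_share (σ_sharing above) are assumptions supplied by the caller:

def shared_fitness(fitnesses, designs, sigma_share, dist):
    # degrade each fitness by the niche count of designs within sigma_share
    shared = []
    for i, fi in enumerate(fitnesses):
        niche = 0.0
        for xj in designs:
            d = dist(designs[i], xj)
            if d < sigma_share:
                niche += 1.0 - d / sigma_share   # phi_ij
        shared.append(fi / niche)                # niche >= 1 because d_ii = 0
    return shared

# usage with a simple 1-D distance
f_sh = shared_fitness([3.0, 2.9, 1.0], [0.10, 0.12, 5.0], 0.5, lambda a, b: abs(a - b))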
Real-Coded Genetic Algorithms
(RGA)
▪ Binary-coded GA (BGA) has been used extensively
▪ Binary coding is easy to adapt for discrete and
integer design variables
▪ Continuous variables are treated as pseudo-
discrete variables
 spacing of discrete variables determined by length of
binary bit string and lower/upper bounds on the design
variable
 long strings imply an increased number of design alternatives to consider, and hence higher computational cost
Real-Coded Genetic Algorithms
(RGA)
▪ Real coded GA (RGA) can also be used
▪ Real coded GAs have proved very effective in
problems with continuous valued variables
▪ The central focus here is to devise genetic exchange mechanisms that operate on the variables directly, not on a chromosome-like binary representation
Operators for RGA
▪ Linear crossover is commonly used
▪ Let the parents be
X1={x11,x12,…, x1n} and X2={x21,x22,…x2n}
▪ Produce offspring Hk = {hk1, hk2, …, hkn}, k = 1, 2, 3
h1i=0.5x1i+0.5x2i
h2i=1.5x1i-0.5x2i
h3i=-0.5x1i+1.5x2i
▪ Operation is performed with a crossover probability
pc as in the binary coded genetic algorithm
▪ Best two of three progenies replace parents in next
generation
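A sketch of the linear crossover above; choosing the best two of the three candidates is left to the caller's fitness evaluation:

def linear_crossover(x1, x2):
    # return the three candidate offspring of two real-valued parents
    h1 = [0.5 * a + 0.5 * b for a, b in zip(x1, x2)]
    h2 = [1.5 * a - 0.5 * b for a, b in zip(x1, x2)]
    h3 = [-0.5 * a + 1.5 * b for a, b in zip(x1, x2)]
    return h1, h2, h3

offspring = linear_crossover([1.0, 2.0], [3.0, 1.0])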
Operators for RGA
▪ Mutation: a new value of the variable is obtained as
$x_{after} = x_{before} + \Delta$
▪ $\Delta$ is a random perturbation obtained as $r\sigma$, where r is a random variable between −1 and +1
▪ $\sigma$ is the standard deviation of the variable in the current population
▪ Extension to mixed variable problems requires
some additional manipulations
▪ Discrete variable values are arranged in numerically ascending order in an array Dis(1), Dis(2), ..., Dis(N)
Operators for RGA
▪ A random number between 0 and 1 can be linearly mapped to the range 0.501 to (N + 0.499). This real number is then rounded to its adjacent integer, i, selected from the numbers 1, 2, ..., N
▪ Each choice of the discrete variable then has the same probability (1/N), since the choices are made using a pseudo-random number in (0, 1)
▪ An integer number, i, is used to retrieve the
corresponding discrete element from array Dis(i)
▪ For an integer variable bounded between i and j, a
random number (0,1) is generated and mapped with a
linear variation between i-0.499 and j+0.499
▪ Intermediate real number is then rounded to
corresponding integer value
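A sketch of these mappings; the array name dis is assumed to hold the discrete values in ascending order:

import random

def pick_discrete(dis):
    # map a uniform random number onto one element of the sorted array
    n = len(dis)
    u = 0.501 + random.random() * (n + 0.499 - 0.501)   # linear map onto (0.501, n + 0.499)
    return dis[int(round(u)) - 1]                        # round to the index 1..n

def pick_integer(lo, hi):
    # map a uniform random number onto an integer between lo and hi
    u = (lo - 0.499) + random.random() * ((hi + 0.499) - (lo - 0.499))
    return int(round(u))

value = pick_discrete([0.5, 1.0, 2.5, 4.0])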
Operators for RGA
▪ Note that all design variables are varied
independently
▪ Crossover and mutation may generate real
numbers
▪ For discrete valued variables, representative
integer value of variable is used to produce
crossover variation — resulting real number again
rounded to adjacent integer for recovery of
discrete variable (same process for mutation)
▪ Integer valued variables are handled in a similar
manner— no reference to a discrete array,
however, is required
GA vs. traditional method
▪ GAs work with a coding of the entire set of design variables and not the variables themselves
▪ GAs do not optimise the design by advancing it from point to point. Instead, they form a population of designs, advancing several designs in each cycle of evolution
▪ Only function information is required
▪ Evolution and adaptation are implemented by non-
deterministic transition rules
▪ Implicit parallelism is embedded in the approach. This gives a computational advantage
▪ n evaluations of a population result in the exploration of on the order of n³ schemata
General Observations
▪ RGA tends to produce greater design variable
diversity
▪ RGA seems to perform better in problems with
difficult constraints or where sharp peaks and
valleys can be encountered
▪ In problems involving mixed design variables, RGA tends to consistently produce better designs than BGA
▪ In some cases BGA produces better designs.
▪ Many mutation and crossover operators should be
explored
▪ RGA is robust; should be used whenever possible
Lecture Ends
