Ant Colony Optimization

Table of Contents
Introduction
Biological Inspiration
Swarm Intelligence
Ant colony algorithm
Pheromone trails
Algorithm
The optimization technique
The Ant Colony Optimization Metaheuristic
Application
Travelling salesman problem
Real life example
Advantages
Disadvantages
Theoretical Results
Outlook and Conclusions
References
Introduction
Ant colony optimization (ACO) takes inspiration from the foraging behavior of some ant species. The approach derives from swarm intelligence, a relatively new approach to problem solving that takes inspiration from the social behaviors of insects and of other animals. Among these, the study of ant behavior has been the most successful to date, and ant colony optimization is now widely applied in computer science and operations research. It is a probabilistic technique for solving computational problems that can be reduced to finding good paths, and it belongs to the family of swarm intelligence methods and metaheuristic optimizations. ACO was first proposed by Marco Dorigo in 1992. Since then it has attracted the attention of an increasing number of researchers, and many successful applications are now available. Moreover, a substantial corpus of theoretical results is becoming available that provides useful guidelines to researchers and practitioners in further applications of ACO.

In this study we first deal with the biological inspiration behind ant colony optimization algorithms and show how this inspiration can be transferred into an algorithm for discrete optimization. We then study the optimization technique, the algorithm, and the theoretical results.
Biological Inspiration
Some insect species react to significant stimuli, and the effects of these reactions can act as new significant stimuli both for the insect that produced them and for the other insects in the colony. Stigmergy is a particular type of communication in which workers are stimulated by the performance they have achieved. The two main characteristics of stigmergy that differentiate it from other forms of communication are the following:

1) Stigmergy is an indirect, non-symbolic form of communication mediated by the environment: insects exchange information by modifying their environment; and

2) Stigmergic information is local: it can only be accessed by those insects that visit the locus in which it was released (or its immediate neighborhood).

In many ant species, ants walking to and from a food source deposit on the ground a substance called pheromone. Other ants perceive the presence of pheromone and tend to follow paths where the pheromone concentration is higher. Through this mechanism, ants are able to transport food to their nest in a remarkably effective way.

In an experiment known as the double bridge experiment, the nest of a colony of Argentine ants was connected to a food source by two bridges of equal length. In such a setting, the ants start to explore the surroundings of the nest and eventually reach the food source. Along their path between food source and nest, Argentine ants deposit pheromone. Initially, each ant randomly chooses one of the two bridges. However, due to random fluctuations, after some time one of the two bridges presents a higher concentration of pheromone than the other and therefore attracts more ants. This brings a further amount of pheromone onto that bridge, making it more attractive, with the result that after some time the whole colony converges toward the use of the same bridge. This colony-level behavior, based on autocatalysis, that is, on the exploitation of positive feedback, can be used by ants to find the shortest path between a food source and their nest.

Ant colony optimization is an optimization technique that was introduced in the early 1990s. Its inspiring source is the foraging behavior of real ant colonies.
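The double bridge dynamics described above can be sketched in a few lines of code. This is an illustrative simulation, not taken from the original text: the choice function with parameters k and h follows the form commonly attributed to Deneubourg's model, and the function and variable names are my own.

```python
import random

def double_bridge(n_ants=1000, k=20.0, h=2.0, seed=1):
    """Simulate the equal-length double bridge. Each ant chooses a bridge
    with probability (k + tau_i)**h / sum_j (k + tau_j)**h, then deposits
    one unit of pheromone on the bridge it used."""
    random.seed(seed)
    tau = [0.0, 0.0]                       # pheromone on the two bridges
    for _ in range(n_ants):
        w0 = (k + tau[0]) ** h
        w1 = (k + tau[1]) ** h
        choice = 0 if random.random() < w0 / (w0 + w1) else 1
        tau[choice] += 1.0                 # positive feedback on that bridge
    return tau
```

Early random fluctuations are amplified by the positive feedback, so one bridge typically ends up carrying most of the pheromone, mirroring the colony-level convergence observed in the experiment.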
Swarm Intelligence
Swarm intelligence is a collective system capable of accomplishing difficult tasks in dynamic and varied environments without any external guidance or control and with no central coordination. The result is achieved through collective performance and cannot normally be achieved by an individual acting alone. It constitutes a natural model particularly suited to distributed problem solving. Swarm intelligence has inspired some highly successful optimization algorithms. One of those algorithms is the ant colony algorithm: a way to solve optimization problems based on the behavior of ants searching for food.
Ant colony algorithm
The principle is that the trace (stigmergy) left in the environment by an action stimulates the performance of a subsequent action, by the same or a different agent. Individuals leave markers or messages; these don't solve the problem by themselves, but they affect other individuals in a way that helps them solve the problem.
Pheromone trails
An ant tends to follow strong concentrations of pheromone caused by the repeated passes of other ants; a pheromone trail is thus formed from nest to food source, so at intersections between several trails an ant moves, with high probability, along the trail with the highest pheromone level.

Individual ants lay pheromone trails while travelling from the nest, to the nest, or possibly in both directions. The pheromone trail gradually evaporates over time, but its strength accumulates as multiple ants use the same path.
[Figure: example of ant colony optimization]
Algorithm
Ants are agents that move between nodes in a graph. They choose where to go based on pheromone strength (and possibly other factors, such as distance). An ant's path represents a specific candidate solution. When an ant has finished a solution, pheromone is laid on its path in proportion to the quality of the solution. This pheromone trail affects the behaviour of other ants through stigmergy.
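The pheromone-biased choice just described can be sketched as a roulette-wheel selection. This is an illustrative helper, not code from the original report: `choose_next` is my name, and `alpha` and `beta` are the conventional pheromone and heuristic weights.

```python
import random

def choose_next(current, unvisited, pheromone, distance, alpha=1.0, beta=2.0):
    """Roulette-wheel choice of the next node: candidate j is weighted by
    pheromone[current][j]**alpha * (1/distance[current][j])**beta, so the
    ant probabilistically prefers strong trails and short hops."""
    weights = [pheromone[current][j] ** alpha
               * (1.0 / distance[current][j]) ** beta
               for j in unvisited]
    r = random.random() * sum(weights)
    acc = 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if r <= acc:
            return j
    return unvisited[-1]      # fallback against float rounding
```

With a much stronger trail on one edge, the ant almost always follows it, while the residual randomness keeps some exploration alive.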
The optimization technique
The technique was put forward by Deneubourg and his team, inspired by the foraging behavior of ants. The algorithm works by going through a number of iterations over candidate solutions, each time updating the solutions so that the next iteration leverages what has been learned. This leveraging is done using the environment as a means of communication. The original idea was proposed in the early 1990s, but many later algorithms have used it as a base and developed new forms of it.
There are three main aspects that define the problem:

1. A set of decision variables
2. All the possible constraints the solution should follow
3. A function that defines all the possible solutions
The best solution of the problem defined above is the one which gives the most optimal value. This optimal value is determined by a variable analogous to the pheromone value in the ant case. These values are calculated for each candidate solution produced by the ants over multiple iterations and are compared against each other with respect to the requirements; the optimal solution is then chosen.
The Ant Colony Optimization Metaheuristic

After initialization, the metaheuristic iterates over the three phases explained below.
Construct Ant Solutions: A set of m artificial ants constructs solutions from elements of a finite set of available solution components C = {c_ij}, i = 1, ..., n, j = 1, ..., |D_i|. A solution construction starts from an empty partial solution s^p = ∅. At each construction step, the partial solution s^p is extended by adding a feasible solution component from the set N(s^p) ⊆ C, which is defined as the set of components that can be added to the current partial solution s^p without violating any of the constraints. The choice of a solution component from N(s^p) is guided by a stochastic mechanism, which is biased by the pheromone associated with each element of N(s^p). The rule for the stochastic choice of solution components varies across ACO algorithms, but in all of them it is inspired by the model of the behavior of real ants given in Equation 1.
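Equation 1 is not reproduced in the text; in the standard ACO literature the random-proportional rule it refers to is usually written as follows, with τ the pheromone value, η the heuristic value, and α, β their respective weights:

```latex
p(c_{ij} \mid s^{p}) \;=\;
  \frac{\tau_{ij}^{\alpha}\,\eta_{ij}^{\beta}}
       {\sum_{c_{il} \in N(s^{p})} \tau_{il}^{\alpha}\,\eta_{il}^{\beta}},
  \qquad \forall\, c_{ij} \in N(s^{p})
```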
Apply Local Search: Once solutions have been constructed, and before updating the pheromone, it is common to improve the solutions obtained by the ants through a local search. This phase, which is highly problem-specific, is optional, although it is usually included in state-of-the-art ACO algorithms.
Update Pheromones: The aim of the pheromone update is to increase the pheromone values associated with good or promising solutions and to decrease those associated with bad ones. Usually this is achieved (i) by decreasing all the pheromone values through pheromone evaporation, and (ii) by increasing the pheromone levels associated with a chosen set of good solutions.
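The two-part update above can be sketched as a short function. This is an illustrative sketch, not the document's own code: `update_pheromones`, `rho` (evaporation rate), and `Q` (deposit scale) are my names, following common ACO conventions.

```python
def update_pheromones(pheromone, tours, tour_lengths, rho=0.5, Q=1.0):
    """Pheromone update in two parts: (i) evaporate every trail by factor
    (1 - rho), then (ii) reinforce the edges of each tour with a deposit
    Q / tour_length, so shorter (better) tours deposit more."""
    for i in pheromone:
        for j in pheromone[i]:
            pheromone[i][j] *= 1.0 - rho             # (i) evaporation
    for tour, length in zip(tours, tour_lengths):
        deposit = Q / length
        for a, b in zip(tour, tour[1:] + tour[:1]):  # edges of closed tour
            pheromone[a][b] += deposit               # (ii) reinforcement
            pheromone[b][a] += deposit               # symmetric instance
    return pheromone
```

Evaporation keeps old, unreinforced trails from dominating forever, while the length-scaled deposit implements the fitness-proportional reinforcement.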
Application
Travelling salesman problem
Given a list of cities and the distances between each pair of cities, ACO is used to find the shortest possible route that visits each city exactly once and returns to the origin city. The ant algorithm is shown below:
1. An ant is placed at a random node, as seen in the diagram at node B.
2. The ant decides where to go from that node, based on probabilities calculated from the pheromone strengths and next-hop distances. Suppose this one chooses BC. The ant is now at C and has a tour memory = {B, C}, so it cannot visit B or C again. Again, it decides the next hop (from those allowed) based on pheromone strength and distance; suppose it chooses CD.
3. The ant is now at D, and has a tour memory = {B, C, D}. There is only one place it can go now: A.
4. The ant has now nearly finished its tour, having gone over the links BC, CD, and DA.
5. AB is added to complete the round trip. Now, pheromone on the tour is increased, in line with the fitness of that tour.
6. Next, pheromone everywhere is decreased a little, to model the decay of trail strength over time.
7. We start again, with another ant in a random position.
[Figure: the seven steps above illustrated on a four-city graph]
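The seven steps above can be combined into a minimal ACO loop for the TSP. This is an illustrative sketch, not the report's own code: the function `aco_tsp` and its parameters (alpha, beta for pheromone/heuristic weights, rho for evaporation, Q for deposit scale) are my choices, following common ACO conventions.

```python
import math
import random

def aco_tsp(coords, n_ants=10, n_iters=50, alpha=1.0, beta=2.0,
            rho=0.5, Q=1.0, seed=0):
    """Minimal ant colony optimization for the TSP, following the steps
    above: each ant starts at a random city, builds a closed tour biased
    by pheromone and inverse distance, then trails are evaporated and
    reinforced in line with tour fitness."""
    random.seed(seed)
    n = len(coords)
    dist = [[math.dist(coords[i], coords[j]) for j in range(n)]
            for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]          # uniform initial pheromone
    best_tour, best_len = None, float("inf")

    def tour_length(tour):
        return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)          # step 1: random start city
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:                     # steps 2-4: grow the tour
                i = tour[-1]
                cand = list(unvisited)
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                     for j in cand]
                r, acc, nxt = random.random() * sum(w), 0.0, cand[-1]
                for j, wj in zip(cand, w):       # roulette-wheel selection
                    acc += wj
                    if r <= acc:
                        nxt = j
                        break
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append(tour)
        for row in tau:                          # step 6: evaporation
            for j in range(n):
                row[j] *= 1.0 - rho
        for tour in tours:                       # step 5: fitness deposit
            length = tour_length(tour)
            if length < best_len:
                best_tour, best_len = tour, length
            for a, b in zip(tour, tour[1:] + tour[:1]):
                tau[a][b] += Q / length
                tau[b][a] += Q / length
    return best_tour, best_len
```

On a small instance such as the four corners of a unit square, this quickly finds the perimeter tour of length 4.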
Real life example
We place a salesman at each city. Each salesman then does the following: he makes a complete tour of the cities, coming back to his starting city, using a transition rule to decide which links to follow. By this rule, he chooses each next city at random, but biased partly by the pheromone levels existing on each path and partly by heuristic information. When all salesmen have completed their tours, Global Pheromone Updating occurs.
Advantages
- Positive feedback accounts for the rapid discovery of good solutions.
- Distributed computation avoids premature convergence.
- The greedy heuristic helps find acceptable solutions in the early stages of the search process.
- The collective interaction of a population of agents.
Disadvantages
- Slower convergence than other heuristics.
- Performs poorly for TSP instances larger than 75 cities.
- No centralized processor to guide the system towards good solutions.
Theoretical Results
Experimental work in this regard has shown that successful algorithms can be derived from ACO, and certain theoretical foundations have also been conceptualized based on the studies of researchers. The work highlights the following question when dealing with metaheuristic concerns: will an optimal solution be derived using the given ACO algorithm?

Some of the first convergence proofs were provided in the form of the Graph-Based Ant System (GBAS). The probability of finding the optimal solution under this model comes out to be 1 − ε, for arbitrarily small ε. However, this algorithm is rather peculiar and does not extend to other algorithms generally adopted in applications. Another observation is that these convergence results do not predict the time taken to find the optimal solution. Recently, Gutjahr presented models to predict these time bounds.

This research has led to the opening of new channels:

1) The link between ACO and optimal control and reinforcement learning theory.
2) The link between ACO and probabilistic learning algorithms.

Also, a more comprehensive Model-Based Search (MBS) algorithm has been proposed, which is claimed to improve the understanding of ACO. Convergence proofs do not generally provide implementation guidelines to researchers; in this regard, research efforts that aim for a deeper understanding propose better solutions. First-order deception has also been found in ACO algorithms, along with second-order deception.
ACO has also been applied to a wide range of other combinatorial optimization problems.
References

https://www.ics.uci.edu/~welling/teaching/271fall09/antcolonyopt.pdf
rain.ifmo.ru/~chivdan/presentations
www.macs.hw.ac.uk/~dwcorne/Teaching
Dorigo M., Optimization, Learning and Natural Algorithms. PhD thesis, Dipartimento di Elettronica, Politecnico di Milano, Italy, 1992 [in Italian]
Wikipedia
code.ulb.ac.be/dbfiles/
Wang Ying-lin, Pang Jin-wei, Ant Colony Optimization for Feature Selection in Software Product Lines
mitpress.mit.edu/books/ant-colony-optimization