
Multi-objective Optimization

Using Particle Swarm Optimization

Satchidananda Dehuri, Ph.D.(CS),


Department of Information and
Communication Technology, Fakir Mohan
University, Vyasa Vihar, Balasore-756019,
ORISSA, INDIA.
Single Vs Multi-objective
Single-objective Optimization: When an optimization problem involves
only one objective function, the task of finding the optimal solution
is called single-objective optimization.

Example: Find a car for me with minimum cost.

Multi-objective Optimization: When an optimization problem involves
more than one objective function, the task of finding one or more
optimal solutions is known as multi-objective optimization.

Example: Find a car for me with minimum cost and maximum comfort.
Single Vs Multi-objective: A Simple
Visualization
[Figure: two candidate cars, A and B, plotted against Price and Luxury axes,
illustrating the trade-off between the two objectives.]
Multi-objective Problem (ctd.)
Mapping: from the decision space R^d to the objective (fitness) space F^n

Reference: S. Dehuri, A. Ghosh, and S.-B. Cho, Particle Swarm Optimized Polynomial Neural Network
for Classification: A Multi-objective View, International Journal of Intelligent Defence Support Systems,
vol. 1, no. 3, pp.225-253, 2008.
Concept of Domination
A solution x1 dominates another solution x2 if both of the following
conditions are true:
1. The solution x1 is no worse than x2 in all objectives, i.e.
   fj(x1) is no worse than fj(x2) for all j = 1, 2, ..., M.
2. The solution x1 is strictly better than x2 in at least one
   objective, i.e. fj(x1) is strictly better than fj(x2) for at
   least one j.
(For a minimization problem, "no worse" means fj(x1) <= fj(x2) and
"strictly better" means fj(x1) < fj(x2).)
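A minimal sketch of this dominance check in Python (assuming, purely for
illustration, that every objective is to be minimized):

def dominates(f_x1, f_x2):
    """Return True if objective vector f_x1 dominates f_x2 (all objectives minimized)."""
    no_worse_everywhere = all(a <= b for a, b in zip(f_x1, f_x2))
    strictly_better_somewhere = any(a < b for a, b in zip(f_x1, f_x2))
    return no_worse_everywhere and strictly_better_somewhere

# (1.0, 3.0) dominates (2.0, 3.0): no worse in both objectives, strictly better in the first.
print(dominates((1.0, 3.0), (2.0, 3.0)))   # True
# Neither of (1.0, 3.0) and (0.5, 4.0) dominates the other.
print(dominates((1.0, 3.0), (0.5, 4.0)))   # False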
A Simple Visualization
Minimize f2, Maximize f1.
[Figure: five candidate solutions (labeled 1-5) plotted in objective space;
the non-dominated ones among them form the trade-off front.]
Time complexity of finding the non-dominated set of N solutions with
M objectives: O(MN²).
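A sketch of the naive O(MN²) pairwise check for extracting the non-dominated
set, reusing the dominates() helper above (again assuming all objectives are
minimized):

def non_dominated_set(objective_vectors):
    """Return indices of solutions that no other solution dominates."""
    front = []
    for i, fi in enumerate(objective_vectors):
        if not any(dominates(fj, fi)
                   for j, fj in enumerate(objective_vectors) if j != i):
            front.append(i)
    return front

points = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(non_dominated_set(points))   # [0, 1, 3]; (3.0, 4.0) is dominated by (2.0, 3.0)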
Properties of Dominance
Reflexive: The dominance relation is not reflexive (a solution does not dominate itself).
Symmetric: The dominance relation is also not symmetric.
Transitive: The dominance relation is transitive.

In order for a binary relation to qualify as an ordering relation,
it must be at least transitive [3].

Thus the dominance relation is only a strict partial order relation.
Pareto Optimality
Non-dominated Set: Among a set of solutions P, the non-dominated set
of solutions P' contains those members of P that are not dominated by
any member of P.
Globally Pareto-optimal Set: The non-dominated set of the entire
feasible search space S is the globally Pareto-optimal set.
Locally Pareto-optimal Set: If for every member x in a set P there
exists no solution y in the neighborhood of x (i.e. ||y - x|| <= eps,
where eps is a small positive number) that dominates any member of P,
then the solutions belonging to P constitute a locally Pareto-optimal
set.
Multi-objective Problem (ctd.)
Why PSO for MOP?
The development of preference-based approaches was motivated by the
fact that classical optimization methods can find only a single
optimized solution in a single simulation run.

How do we get multiple trade-off solutions?

Non-classical, stochastic, population-based search methods such as PSO
can help us find multiple trade-off solutions in a single run of the
algorithm.

HOW?
Examples of MOP
Minimization Problem:
1. Minimize f1(x) = x1
   Minimize f2(x) = (1 + x2) / x1
   Domain: 0.1 <= x1 <= 1, 0 <= x2 <= 5
[Figure: sampled objective vectors (f1, f2) for this problem; the
Pareto-optimal front is the lower boundary of the plotted region.]
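A small illustrative sketch (it assumes NumPy and matplotlib are available;
the grid resolution is arbitrary) that samples this domain and plots the
objective vectors, so the Pareto-optimal front appears as the lower boundary
(obtained at x2 = 0):

import numpy as np
import matplotlib.pyplot as plt

# Sample the decision space on a grid.
x1 = np.linspace(0.1, 1.0, 200)
x2 = np.linspace(0.0, 5.0, 200)
X1, X2 = np.meshgrid(x1, x2)

F1 = X1
F2 = (1.0 + X2) / X1

plt.scatter(F1.ravel(), F2.ravel(), s=1, alpha=0.2, label="feasible objective vectors")
plt.plot(x1, 1.0 / x1, "r-", label="Pareto front (x2 = 0)")
plt.xlabel("f1")
plt.ylabel("f2")
plt.legend()
plt.show()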
Examples (ctd.)
2. Maximize f1(x) = x1
   Maximize f2(x) = 1 + x2 - x1^2
   Domain: 0 <= x1 <= 1, 0 <= x2 <= 3
[Figure: objective space plot for this maximization problem.]
MOP Approaches
1) Weighted Sum Approaches
2) Lexicographic Approaches
3) Pareto Approaches
Weighted Sum Approach
Optimize  F(x) = Σ_{m=1}^{M} w_m · f_m(x),
where  w_m ∈ [0, 1]  and  Σ_{m=1}^{M} w_m = 1.

As we have converted the problem into a single-objective one, we can
now proceed using PSO with its associated operators.
Hopefully we will obtain an optimal solution.
Problem: How to fix these weights? (Static/Dynamic)
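A minimal sketch of the weighted-sum scalarization (the weight values and
example objectives are illustrative assumptions; the resulting single-objective
function F could then be handed to PSO or any other optimizer):

def weighted_sum(objectives, weights):
    """Combine M objective functions into one scalar objective F(x)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"

    def F(x):
        return sum(w * f(x) for w, f in zip(weights, objectives))

    return F

# Illustrative objectives taken from the earlier example (both minimized).
f1 = lambda x: x[0]
f2 = lambda x: (1.0 + x[1]) / x[0]
F = weighted_sum([f1, f2], weights=[0.4, 0.6])
print(F([0.5, 1.0]))   # 0.4*0.5 + 0.6*4.0 = 2.6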
Example
Minimize f1(x) = x1
Minimize f2(x) = 1 + x2^2 - x1 - a·sin(b·π·x1)
Domain: x1 ∈ [0, 1], x2 ∈ [-2, 2], a = 0.2, b = 1.
[Figure: plot of f2 versus f1 for this problem.]
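A rough sketch (using a brute-force grid search rather than PSO; the grid
resolution and weight values are illustrative assumptions) showing how
different weight choices pick out different trade-off points for this example:

import math

a, b = 0.2, 1.0
f1 = lambda x1, x2: x1
f2 = lambda x1, x2: 1 + x2 * x2 - x1 - a * math.sin(b * math.pi * x1)

# Coarse grid over the decision space.
grid = [(i / 100, -2.0 + j * 4.0 / 100) for i in range(101) for j in range(101)]

for w in (0.1, 0.3, 0.5, 0.7, 0.9):
    scalar = lambda p: w * f1(*p) + (1 - w) * f2(*p)
    x1_best, x2_best = min(grid, key=scalar)
    print(f"w = {w:.1f}  ->  f1 = {f1(x1_best, x2_best):.3f}, "
          f"f2 = {f2(x1_best, x2_best):.3f}")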
Lexicographic Approach
In the lexicographic approach, different
priorities are assigned to different
objectives, and then the objectives are
optimized in order of their priority.
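A rough sketch of the lexicographic idea under simplifying assumptions
(objectives already sorted by priority, a brute-force search over a finite
candidate set, and a tolerance for near-ties on higher-priority objectives):

def lexicographic_optimize(candidates, objectives, tol=1e-9):
    """Minimize objectives in priority order over a finite candidate set."""
    survivors = list(candidates)
    for f in objectives:                          # highest priority first
        best = min(f(x) for x in survivors)
        # Keep only candidates that are (near-)optimal for this objective.
        survivors = [x for x in survivors if f(x) <= best + tol]
    return survivors[0]

candidates = [(0.2, 0.0), (0.2, 1.0), (0.8, 0.0)]
f1 = lambda x: x[0]                  # priority 1
f2 = lambda x: (1.0 + x[1]) / x[0]   # priority 2
print(lexicographic_optimize(candidates, [f1, f2]))   # (0.2, 0.0)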
Review of the Classical Methods
1. Only one Pareto-optimal solution can be expected to be found in one
   simulation run of a classical algorithm.
2. Not all Pareto-optimal solutions can be found by some algorithms in
   non-convex MOOPs.
3. All algorithms require some problem knowledge, such as suitable
   weights, epsilon values, or target values.
Pareto Approach from EA Domain
VEGA (Vector Evaluated Genetic Algorithm), contributed by David Schaffer in 1984.
VOES (Vector Optimized Evolution Strategy), contributed by Frank Kursawe in 1990.
MOGA (Multi-objective GA), introduced by Fonseca and Fleming in 1993.
NSGA (Non-dominated Sorting GA), introduced by Srinivas and Deb in 1994.
NPGA (Niched-Pareto Genetic Algorithm), introduced by Horn et al. in 1994.
PPES (Predator-Prey Evolution Strategy), introduced by Laumanns et al. in 1998.
DSGA (Distributed Sharing GA), introduced by Hiroyasu et al. in 1999.
DRLA (Distributed Reinforcement Learning Approach), introduced by Mariano
and Morales in 2000.
Nash GA, introduced by Sefrioui and Periaux in 2000, motivated by a
game-theoretic approach.
REMOEA (Rudolph's Elitist MOEA), introduced by Rudolph in 2001.
NSGA-II, by Deb et al. in 2000, and so on.
Potential Research Directions
MOEA in Data Mining [-1]
MOEA in real time task scheduling [0]
MOEA for Ground-water Contamination [1]
MOEA for Land-use Management [2]
More about MOGA
Please Visit:
KanGAL - Kanpur Genetic Algorithms Laboratory (Prof. Kalyanmoy Deb)
CINVESTAV, Mexico (Prof. Carlos A. Coello Coello)
Particle Swarm Optimization
A new Paradigm of Swarm Intelligence
What is Swarm Intelligence (SI)?
Examples from nature
Origins and Inspirations of SI
What is a Swarm?
A collection of interacting agents (software or hardware).
Agents:
  Individuals that belong to a group (but are not necessarily identical).
  They contribute to and benefit from the group.
  They can recognize, communicate, and/or interact with each other.
The instinctive perception of a swarm is a group of agents in motion,
but that does not always have to be the case.
A swarm is better understood if thought of as agents exhibiting a
collective behavior.
Example of Swarms in Nature
Classic Example: Swarm of Wasps/Bees
Can be extended to other similar systems:
Ant colony
Agents: ants
Flock of birds
Agents: birds
Traffic
Agents: cars
Crowd
Agents: humans
Immune system
Agents: cells and molecules
Beginnings of Swarm Intelligence

First introduced by Beni and Wang in 1989 with their study of
cellular robotic systems.
Extended by Theraulaz, Bonabeau, Dorigo, Kennedy, and others.
Definition of Swarm Intelligence

SI is also treated as an artificial intelligence (AI) technique based
on the collective behavior of decentralized, self-organized systems.
Generally made up of agents who interact with each other and the
environment.
No centralized control structures.
Based on group behavior found in nature.
Swarm Intelligence Techniques
A few popular and recent SI techniques:
Particle Swarm Optimization,
Ant Colony Optimization,
Bee colony Optimization,
Wasp Colony Optimization,
Intelligent Water Drops
Success and On-going Research on SI
Tim Burton's Batman Returns was the first movie to make use of swarm
technology for rendering, realistically depicting the movements of a
group of bats using the Boids system. The entertainment industry now
applies swarm techniques to battle and crowd scenes.
The U.S. military is investigating swarm techniques for controlling
unmanned vehicles.
NASA is investigating the use of swarm technology for planetary mapping.
Swarm intelligence has been proposed to control nanobots within the
body for the purpose of killing cancer tumors.
Load balancing in telecommunication networks.
Swarm Robotics
The application of SI principles to collective
robotics.
A group of simple robots that can only
communicate locally and operate in a
biologically inspired manner.
A currently developing area of research.
Advantage of SI Techniques
The systems are scalable because the same control architecture can be
applied to a couple of agents or to thousands of agents.
The systems are flexible because agents can easily be added or removed
without changing the structure.
The systems are robust because agents are simple in design, the
reliance on any individual agent is small, and the failure of a single
agent has little impact on the system's performance.
The systems are able to adapt to new situations easily.
Particle Swarm Optimization
A population-based stochastic optimization technique.
Searches for an optimal solution in the
computable search space.
Developed in 1995 by Eberhart and
Kennedy.
Inspiration: Flocks of Birds, Schools of
Fish.
Particle Swarm Optimization (ctd.)
In PSO, individuals strive to improve themselves and often achieve
this by observing and imitating their neighbors.
Each PSO individual has the ability to remember.
PSO has a simple algorithm and low overhead, making it more popular in
some circumstances than Genetic/Evolutionary Algorithms.
It has only one operator, the velocity calculation:
  Velocity: a vector of numbers that is added to the position
  coordinates to move an individual.
In General: How PSO Works
Individuals in a population learn from previous experiences and the
experiences of those around them.
The direction of movement is a function of:
  Current position
  Velocity
  Location of the individual's best success
  Location of the neighbors' best successes
Therefore, each individual in a population will
gradually move towards the better areas of the
problem space.
Hence, the overall population moves towards
better areas of the problem space.
Particle Swarm Optimization (ctd.)
A swarm consists of N particles in a D-
dimensional search space. Each particle holds a
position (which is a candidate solution to the
problem) and a velocity (which means the flying
direction and step of the particle).
Each particle successively adjust its position
toward the global optimum based on two factors:
the best position visited by itself (pbest) denoted
as Pi=(pi1,pi2,,piD) and the best position
visited by the whole swarm (gbest) denoted as
Pg=(pg1,pg2,,pgD) .
Particle Swarm Optimization (ctd.)
vi(t+1) = w·vi(t) + c1·rand()·(pi - xi(t)) + c2·rand()·(pg - xi(t))
xi(t+1) = xi(t) + vi(t+1)

[Figure: a particle at position x with velocity v is pulled both toward
its own best position pi (pbest, "my best performance") and toward the
swarm's best position pg (gbest, "the best performance of the team").]
Pseudo code
Initialize;
while (not terminated)
{  t = t + 1;
   for i = 1:N                       // for each particle
   {
      Vi(t) = w*Vi(t-1) + c1*rand()*(Pi - Xi(t-1))
                        + c2*rand()*(Pg - Xi(t-1));
      Xi(t) = Xi(t-1) + Vi(t);
      Fitness_i(t) = f(Xi(t));
      if needed, update Pi and Pg;
   }  // end for i
}  // end while
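A minimal runnable sketch of this loop in Python (the sphere test function,
swarm size, and coefficient values are illustrative assumptions, not part of
the original pseudo code):

import random

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Plain single-objective PSO minimizing f over [lo, hi]^dim."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                         # personal best positions (pbest)
    g = min(P, key=f)[:]                          # global best position (gbest)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g, f(g)

sphere = lambda x: sum(v * v for v in x)          # illustrative test function
best_x, best_f = pso(sphere)
print(best_x, best_f)                             # near [0, 0] and 0.0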
PSO vs. GA
Similarity
  Both algorithms start with a randomly generated population.
  Both use fitness values to evaluate the population.
  Both update the population and search for the optimum with random
  techniques.
  Neither system guarantees success.

Dissimilarity
  Unlike GA, PSO has no evolution operators such as crossover and
  mutation.
  In PSO, the potential solutions, called particles, fly through the
  problem space by following the current optimum particles.
  Particles update themselves with their internal velocity.
  They also have memory, which is important to the algorithm.

Advantages
  PSO is easy to implement and there are few parameters to adjust.
  Compared with GA, all the particles tend to converge to the best
  solution quickly, even in the local version, in most cases.
Our Contribution towards PSO
[1]Mishra, B.B., and Dehuri, S., A Novel Stranger Sociometry
Particle Swarm Optimization (S2PSO), ICFAI Journal of
Computer Science, vol. 1, no. 1, 2007.
[2]Dehuri, S., An Empirical Study of Particle Swarm
Optimization for Cluster Analysis, ICFAI Journal of
Information Technology, 2007.
[3] Dehuri, S., Ghosh, A., and Mall, R., Particles with Age for Data
Clustering, Proceedings of the International Conference on
Information Technology, Dec. 18-21, Bhubaneswar, 2006.
[4] Dehuri, S., and Rath, B. K., gbest Multi-swarm for Multi-objective
Rule Mining, Proceedings of the National Conference on Advance
Computing, March 22-23, Tezpur University, 2007.
PSO for MOP
Three main issues need to be considered when extending PSO to
multi-objective optimization:

How to select particles (to be used as leaders) in order to give
preference to non-dominated solutions over those that are dominated?
(A small selection sketch follows this list.)

How to retain the non-dominated solutions found during the search
process, so that the reported solutions are non-dominated with respect
to all past populations and not only with respect to the current one?
It is also desirable that these solutions be well spread along the
Pareto front.

How to maintain diversity in the swarm in order to avoid convergence
to a single solution?
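As a sketch of the leader-selection issue, one simple strategy (an
illustrative assumption, not taken from these slides) is to pick each
particle's gbest uniformly at random from the external archive of
non-dominated solutions; more elaborate schemes bias the choice toward
sparsely populated regions of the front:

import random

def select_leader(archive):
    """Pick a gbest leader for one particle from the external archive.

    Uniform random choice is the simplest possible strategy; crowding- or
    grid-based selection would instead favor less crowded regions of the
    front to preserve spread.
    """
    position, objectives = random.choice(archive)
    return position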
Statistics of MOPSO Development
Growth of GA, PSO and ACO for
MOP
Algorithm: MOPSO
INITIALIZATION of the Swarm.
EVALUATE the fitness of each particle of the swarm.
EX_ARCHIVE = SELECT the non-dominated solutions from the Swarm.
t = 0.
REPEAT
    FOR each particle
        SELECT the gbest
        UPDATE the Velocity
        UPDATE the Position
        MUTATION          /* Optional */
        EVALUATE the Particle
        UPDATE the pbest
    END FOR
    UPDATE the EX_ARCHIVE with the gbests.
    t = t + 1
UNTIL (t >= MAXIMUM_ITERATIONS)
Report the results in the EX_ARCHIVE.
Dehuri, S., and Cho, S.-B., "Multi-criterion Pareto-based particle swarm optimized polynomial
neural network for classification: A Review and State-of-the-Art," Computer Science Review,
Elsevier Science, vol. 3, no. 1, pp. 19-40, 2009.
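A sketch of the EX_ARCHIVE update step, under the common assumption that the
archive keeps only mutually non-dominated solutions (it reuses the dominates()
helper from earlier; archive size limits and crowding-based pruning are
omitted):

def update_archive(archive, candidates):
    """Merge candidates into the external archive, keeping only mutually
    non-dominated entries. Each entry is a (position, objectives) pair."""
    merged = archive + candidates
    new_archive = []
    for i, (pos_i, obj_i) in enumerate(merged):
        if any(dominates(obj_j, obj_i) for j, (_, obj_j) in enumerate(merged) if j != i):
            continue                          # obj_i is dominated; drop it
        if any(obj_i == obj_k for _, obj_k in new_archive):
            continue                          # skip exact duplicates
        new_archive.append((pos_i, obj_i))
    return new_archive

# Example with 2-objective (minimization) vectors:
archive = [((0.2, 0.0), (0.2, 5.0))]
candidates = [((0.5, 0.0), (0.5, 2.0)), ((0.5, 1.0), (0.5, 4.0))]
print(update_archive(archive, candidates))    # keeps (0.2, 5.0) and (0.5, 2.0)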
A Few Contributions
Parsopoulos and Vrahatis [a]
Baumgartner et al. [b]
Hu and Eberhart [c]
Parsopoulos et al. [d]
Chow and Tsui [e]
Moore and Chapman [f]
Ray and Liew [g]
Fieldsend and Singh [h]
Coello et al. [i]
and so on
References
[-1] A. Ghosh, S. Dehuri, and S. Ghosh, Multi-objective Evolutionary
Algorithms for KDD, Springer-Verlag, 2008.
[0] J. Oh and C. Wu, Genetic Algorithms based real-time task scheduling
with multiple goals, The Journal of Systems and Software, vol. 71,
pp. 245-258, 2004.
[1] R. Farmani, et al., An Evolutionary Bayesian Belief Network
Methodology for Optimum Management of Groundwater Contamination,
Environmental Modelling and Software, vol. 24, pp. 303-310, 2009.
[2] D. Dutta, et al., Multi-objective Evolutionary Algorithms for Land-Use
Management Problem, International Journal of Computational
Intelligence Research, vol. 3, no. 4, pp. 371-384, 2007.
[3] V. Chankong and Y. Y. Haimes, Multi-objective Decision Making:
Theory and Methodology, New York: North-Holland, 1983.
References
[a] Konstantinos E. Parsopoulos and Michael N. Vrahatis. Particle swarm
optimization method in multiobjective problems. In Proceedings of the 2002
ACM Symposium on Applied Computing (SAC 2002), pages 603-607,
Madrid, Spain, 2002. ACM Press.
[b] U. Baumgartner, Ch. Magele, and W. Renhart. Pareto optimality and particle
swarm optimization. IEEE Transactions on Magnetics, 40(2):1172-1175,
March 2004.
[c] Xiaohui Hu and Russell Eberhart. Multiobjective optimization using dynamic
neighborhood particle swarm optimization. In Congress on Evolutionary
Computation (CEC 2002), volume 2, pages 1677-1681, Piscataway, New
Jersey, May 2002. IEEE Service Center.
[d] Konstantinos E. Parsopoulos, Dimitris K. Tasoulis, and Michael N. Vrahatis.
Multiobjective optimization using parallel vector evaluated particle swarm
optimization. In Proceedings of the IASTED International Conference on
Artificial Intelligence and Applications (AIA 2004), volume 2, pages 823-828,
Innsbruck, Austria, February 2004. ACTA Press.
References
[e] Chi-kin Chow and Hung-tat Tsui. Autonomous agent response learning by a
multi-species particle swarm optimization. In Congress on Evolutionary
Computation (CEC 2004), volume 1, pages 778-785, Portland, Oregon,
USA, June 2004. IEEE Service Center.
[f] Jacqueline Moore and Richard Chapman. Application of particle swarm to
multiobjective optimization. Department of Computer Science and Software
Engineering, Auburn University, 1999.
[g] Tapabrata Ray and K. M. Liew. A swarm metaphor for multiobjective design
optimization. Engineering Optimization, 34(2):141-153, March 2002.
[h] Jonathan E. Fieldsend and Sameer Singh. A multiobjective algorithm based
upon particle swarm optimisation, an efficient data structure and turbulence.
In Proceedings of the 2002 U.K. Workshop on Computational Intelligence,
pages 37-44, Birmingham, UK, September 2002.
[i] Carlos A. Coello Coello and Maximino Salazar Lechuga. MOPSO: A
proposal for multiple objective particle swarm optimization. In Congress on
Evolutionary Computation (CEC 2002), volume 2, pages 1051-1056,
Piscataway, New Jersey, May 2002. IEEE Service Center.
