Satchidananda Dehuri, Ph.D. (CS), Department of Information and Communication Technology, Fakir Mohan University, Vyasa Vihar, Balasore-756019, Orissa, India.
Single vs. Multi-objective
Single-objective Optimization: When an optimization problem involves only one objective function, the task of finding the optimal solution is called single-objective optimization. Example: find a car for me with minimum cost.
Multi-objective Optimization: When an optimization problem involves more than one objective function, the task of finding one or more optimal solutions is known as multi-objective optimization. Example: find a car for me with minimum cost and maximum comfort.
[Figure: candidate cars plotted in (Price, Luxury) space; solutions 1, 2, A, B illustrate the cost-comfort trade-off.]
Mapping: R^d -> F^n, i.e., the d-dimensional decision space is mapped to the n-dimensional objective space.
Reference: S. Dehuri, A. Ghosh, and S.-B. Cho, Particle Swarm Optimized Polynomial Neural Network for Classification: A Multi-objective View, International Journal of Intelligent Defence Support Systems, vol. 1, no. 3, pp. 225-253, 2008.
Concept of Domination
A solution x1 dominates another solution x2 if both of the following conditions hold: 1. x1 is no worse than x2 in all objectives, i.e., fj(x1) is no worse than fj(x2) for all j = 1, ..., M. 2. x1 is strictly better than x2 in at least one objective, i.e., fj(x1) is strictly better than fj(x2) for at least one j.
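As a minimal sketch (assuming minimization and representing each solution by its objective vector), the two conditions above can be coded as a single predicate:

```python
def dominates(f_x1, f_x2):
    """Return True if objective vector f_x1 dominates f_x2 (minimization):
    no worse in every objective, strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(f_x1, f_x2))
    strictly_better = any(a < b for a, b in zip(f_x1, f_x2))
    return no_worse and strictly_better
```

For example, (1, 2) dominates (2, 3), while (1, 3) and (2, 2) do not dominate each other.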
A Simple Visualization
[Figure: solutions 1-6 plotted with f1 (to be maximized) on the horizontal axis and f2 (to be minimized) on the vertical axis; the non-dominated solutions form the trade-off front.]
Time complexity of identifying the non-dominated set: O(MN^2), where M is the number of objectives and N the number of solutions.
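The O(MN^2) bound comes from comparing every pair of the N solutions across M objectives. A naive sketch, assuming minimization and objective vectors stored as tuples:

```python
def dominates(a, b):
    # a dominates b (minimization): no worse everywhere, strictly better somewhere
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(population):
    """Naive O(M * N^2) extraction of the non-dominated set:
    keep every solution that no other solution dominates."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]
```

Faster methods exist (e.g., Kung's algorithm), but this pairwise scan realizes exactly the O(MN^2) bound stated above.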
Properties of Dominance
Reflexive: The dominance relation is not reflexive, since no solution dominates itself. Symmetric: The dominance relation is also not symmetric: x1 dominating x2 does not imply that x2 dominates x1. Transitive: The dominance relation is transitive: if x1 dominates x2 and x2 dominates x3, then x1 dominates x3.
In order for a binary relation to qualify as an ordering relation, it must be at least transitive [3]. Thus the dominance relation is only a strict partial-order relation.
Pareto Optimality
Non-dominated Set: Among a set of solutions P, the non-dominated set P' consists of those solutions that are not dominated by any member of P. Global Pareto-optimal Set: The non-dominated set of the entire feasible search space S is the globally Pareto-optimal set. Local Pareto-optimal Set: If for every member x of a set P there exists no solution y in the neighborhood of x (i.e., ||y - x|| <= eps, where eps is a small positive number) dominating any member of P, then the solutions belonging to P constitute a locally Pareto-optimal set.
Examples of MOP
Minimization Problem: 1. Minimize f1(x) = x1, f2(x) = (1 + x2)/x1. Domain: {0.1 <= x1 <= 1, 0 <= x2 <= 5}
[Figure: feasible objective space of the problem above; the Pareto-optimal front is its lower-left boundary.]
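For this problem, f2 is minimized by taking x2 = 0, so the Pareto-optimal front is f2 = 1/f1 with f1 in [0.1, 1]. A rough numerical check by random sampling and pairwise dominance filtering (sample size and seed are arbitrary choices):

```python
import random

def f(x1, x2):
    # Objective vector of the example problem
    return (x1, (1 + x2) / x1)

def dominates(a, b):
    return all(u <= v for u, v in zip(a, b)) and any(u < v for u, v in zip(a, b))

# Sample the feasible region 0.1 <= x1 <= 1, 0 <= x2 <= 5
random.seed(0)
points = [f(random.uniform(0.1, 1.0), random.uniform(0.0, 5.0))
          for _ in range(2000)]

# Keep the non-dominated objective vectors (the sampled front)
front = [p for p in points
         if not any(dominates(q, p) for q in points if q != p)]
```

Every sampled front point satisfies f2 >= 1/f1, with the best samples hugging the analytical front f2 = 1/f1.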
Examples (ctd.)
2. Maximize f1(x) = x1, f2(x) = 1 + x2 - x1^2. Domain: {0 <= x1 <= 1, 0 <= x2 <= 3}
[Figure: feasible objective space of the maximization problem above; the Pareto-optimal front is its upper-right boundary.]
MOP Approaches
1) Weighted Sum Approaches 2) Lexicographic Approaches 3) Pareto Approaches
Weighted Sum: F(x) = sum_{m=1}^{M} w_m * f_m(x), where w_m in [0, 1] and sum_{m=1}^{M} w_m = 1.
Having converted the problem into a single-objective one, we can proceed with PSO and its associated operators.
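A minimal sketch of the weighted-sum scalarization (the function and parameter names are illustrative):

```python
def weighted_sum(fs, ws):
    """Collapse a list of objective values fs into one scalar using
    weights ws, with each w in [0, 1] and sum(ws) == 1."""
    assert abs(sum(ws) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * f for w, f in zip(ws, fs))
```

Running a single-objective optimizer repeatedly with different weight vectors yields different points on the Pareto front (though, for non-convex fronts, some Pareto-optimal points cannot be reached by any weight combination).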
Example
Minimize f1(x) = x1, f2(x) = 1 + x2^2 - x1 - a*sin(b*pi*x1). Domain: x1 in [0, 1], x2 in [-2, 2]; a = 0.2, b = 1.
[Figure: objective space of the problem above, with f1 on the horizontal axis (0-1) and f2 on the vertical axis (0-5).]
Lexicographic Approach
In the lexicographic approach, different priorities are assigned to different objectives, and then the objectives are optimized in order of their priority.
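A sketch of the lexicographic comparison, assuming minimization, objectives listed in decreasing priority (index 0 first), and a small tolerance eps for deciding ties:

```python
def lex_better(fa, fb, eps=1e-9):
    """Return True if objective vector fa is lexicographically preferred
    to fb: fa is better on the highest-priority objective where the two
    differ by more than eps (minimization)."""
    for a, b in zip(fa, fb):
        if abs(a - b) > eps:
            return a < b
    return False  # equal on all objectives: neither is preferred
```

Note that a big win on a low-priority objective can never compensate for a loss on a higher-priority one, which is the defining trait of this approach.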
What is a Swarm Intelligence (SI)? Examples from nature Origins and Inspirations of SI
What is a Swarm?
Collection of interacting agents (Soft/Hardware).
Agents (Soft/Hardware):
Individuals that belong to a group (but are not necessarily identical). They contribute to and benefit from the group. They can recognize, communicate, and/or interact with each other.
The instinctive perception of a swarm is a group of agents in motion, but that need not be the case. A swarm is better understood as a group of agents exhibiting collective behavior.
Flock of birds
Agents: birds
Traffic
Agents: cars
Crowd
Agents: humans
Immune system
Agents: cells and molecules
Swarm Robotics
The application of SI principles to collective robotics. A group of simple robots that can only communicate locally and operate in a biologically inspired manner. A currently developing area of research.
Advantage of SI Techniques
The systems are scalable because the same control architecture can be applied to a couple of agents or to thousands of agents. The systems are flexible because agents can easily be added or removed without affecting the structure. The systems are robust because agents are simple in design, reliance on any individual agent is small, and failure of a single agent has little impact on the system's performance. The systems are able to adapt to new situations easily.
Individuals in a population learn from their own previous experiences and from the experiences of those around them. The direction of movement is a function of: the current position, the velocity, the location of the individual's best success, and the locations of the neighbors' best successes. Therefore, each individual in the population gradually moves towards the better areas of the problem space, and hence the overall population moves towards better areas of the problem space.
Position update: x_i(t+1) = x_i(t) + v_i(t+1)
[Figure: particle i at its current position x_i ("Here I am!") being pulled towards its personal best p_i ("My best perf.").]
Pseudo code
Initialize;
while (not terminated) {
  t = t + 1;
  for i = 1:N  // for each particle
  {
    Vi(t) = Vi(t-1) + c1*rand()*(Pi - Xi(t-1)) + c2*rand()*(Pg - Xi(t-1));
    Xi(t) = Xi(t-1) + Vi(t);
    Fitness_i(t) = f(Xi(t));
    if needed, update Pi and Pg;
  } // end for i
} // end while
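The pseudocode above can be translated into a minimal runnable sketch, shown here minimizing the sphere function. An inertia weight w is added (a common stabilizing variant, not part of the pseudocode), and all parameter values are illustrative choices:

```python
import random

def pso(f, dim, n_particles=30, iters=300, c1=1.5, c2=1.5, w=0.7,
        lo=-5.0, hi=5.0, seed=0):
    """Minimal gbest PSO minimizing f over [lo, hi]^dim."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal bests Pi
    p_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: p_val[i])
    G, g_val = P[g][:], p_val[g]                # global best Pg
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Velocity update: inertia + cognitive + social terms
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]              # position update
            fx = f(X[i])
            if fx < p_val[i]:                   # update pbest
                P[i], p_val[i] = X[i][:], fx
                if fx < g_val:                  # update gbest
                    G, g_val = X[i][:], fx
    return G, g_val
```

For example, `pso(lambda x: sum(t*t for t in x), dim=2)` drives the best value close to zero, the sphere function's optimum.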
PSO Vs.GA
Similarities: Both algorithms start with a randomly generated population. Both use fitness values to evaluate the population. Both update the population and search for the optimum with stochastic techniques. Neither system guarantees success.
Dissimilarities: Unlike GA, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. Particles update themselves through their internal velocity. They also have memory, which is important to the algorithm.
Advantages: PSO is easy to implement and has few parameters to adjust. Compared with GA, all particles tend to converge to the best solution quickly, in most cases even in the local version.
[Figure: number of publications per year, 1999-2008.]
Algorithm: MOPSO
INITIALIZATION of the Swarm.
EVALUATE the fitness of each particle of the swarm.
EX_ARCHIVE = SELECT the non-dominated solutions from the Swarm.
t = 0.
REPEAT
  FOR each particle
    SELECT the gbest
    UPDATE the velocity
    UPDATE the position
    MUTATION /* optional */
    EVALUATE the particle
    UPDATE the pbest
  END FOR
  UPDATE the EX_ARCHIVE with the new non-dominated solutions.
  t = t + 1.
UNTIL (t >= MAXIMUM_ITERATIONS)
Report the results in the EX_ARCHIVE.
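The EX_ARCHIVE update step can be sketched as follows (a minimal sketch, assuming minimization and representing each solution by its objective vector): a candidate enters the archive only if no archived member dominates it, and any members it dominates are removed, so the archive stays mutually non-dominated:

```python
def dominates(a, b):
    # a dominates b (minimization)
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Return a new archive after trying to insert candidate."""
    if any(dominates(a, candidate) for a in archive):
        return archive                          # candidate rejected
    pruned = [a for a in archive if not dominates(candidate, a)]
    pruned.append(candidate)
    return pruned
```

Practical MOPSO implementations additionally bound the archive size and use density measures (e.g., crowding or grids) to decide which members to drop.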
S. Dehuri and S.-B. Cho, Multi-criterion Pareto based particle swarm optimized polynomial neural network for classification: a review and state-of-the-art, Computer Science Review, Elsevier, vol. 3, no. 1, pp. 19-40, 2009.
A Few Contributions
Parsopoulos and Vrahatis [a] Baumgartner et al. [b] Hu and Eberhart [c] Parsopoulos et al. [d] Chow and Tsui [e] Moore and Chapman [f] Ray and Liew [g] Fieldsend and Singh [h] Coello et al. [i]
and so on
References
[-1] A. Ghosh, S. Dehuri, and S. Ghosh, Multi-objective Evolutionary Algorithms for KDD, Springer-Verlag, 2008. [0] J. Oh and C. Wu, Genetic Algorithm based real-time task scheduling with multiple goals, The Journal of Systems and Software, vol. 71, pp. 245-258, 2004. [1] R. Farmani, et al., An Evolutionary Bayesian Belief Network Methodology for Optimum Management of Groundwater Contamination, Environmental Modelling and Software, vol. 24, pp. 303-310, 2009. [2] D. Dutta, et al., Multi-objective Evolutionary Algorithms for Land-Use Management Problem, International Journal of Computational Intelligence Research, vol. 3, no. 4, pp. 371-384, 2007. [3] V. Chankong and Y. Y. Haimes, Multi-objective Decision Making: Theory and Methodology, New York: North-Holland, 1983.
References (ctd.)
[a] Konstantinos E. Parsopoulos and Michael N. Vrahatis. Particle swarm optimization method in multiobjective problems. In Proceedings of the 2002 ACM Symposium on Applied Computing (SAC 2002), pages 603-607, Madrid, Spain, 2002. ACM Press. [b] U. Baumgartner, Ch. Magele, and W. Renhart. Pareto optimality and particle swarm optimization. IEEE Transactions on Magnetics, 40(2):1172-1175, March 2004. [c] Xiaohui Hu and Russell Eberhart. Multiobjective optimization using dynamic neighborhood particle swarm optimization. In Congress on Evolutionary Computation (CEC 2002), volume 2, pages 1677-1681, Piscataway, New Jersey, May 2002. IEEE Service Center. [d] Konstantinos E. Parsopoulos, Dimitris K. Tasoulis, and Michael N. Vrahatis. Multiobjective optimization using parallel vector evaluated particle swarm optimization. In Proceedings of the IASTED International Conference on Artificial Intelligence and Applications (AIA 2004), volume 2, pages 823-828, Innsbruck, Austria, February 2004. ACTA Press.
References (ctd.)
[e] Chi-kin Chow and Hung-tat Tsui. Autonomous agent response learning by a multi-species particle swarm optimization. In Congress on Evolutionary Computation (CEC 2004), volume 1, pages 778-785, Portland, Oregon, USA, June 2004. IEEE Service Center. [f] Jacqueline Moore and Richard Chapman. Application of particle swarm to multiobjective optimization. Department of Computer Science and Software Engineering, Auburn University, 1999. [g] Tapabrata Ray and K. M. Liew. A swarm metaphor for multiobjective design optimization. Engineering Optimization, 34(2):141-153, March 2002. [h] Jonathan E. Fieldsend and Sameer Singh. A multiobjective algorithm based upon particle swarm optimisation, an efficient data structure and turbulence. In Proceedings of the 2002 U.K. Workshop on Computational Intelligence, pages 37-44, Birmingham, UK, September 2002. [i] Carlos A. Coello Coello and Maximino Salazar Lechuga. MOPSO: A proposal for multiple objective particle swarm optimization. In Congress on Evolutionary Computation (CEC 2002), volume 2, pages 1051-1056, Piscataway, New Jersey, May 2002. IEEE Service Center.