Almir Becirspahic
s186795
almir.becirspahic@polito.studenti.it
Politecnico di Torino
1. Introduction
Home Automation Systems (HAS) consist of a number of appliances (washing machine, dishwasher, oven, refrigerator, boiler, ...) which are connected by a communication network of some kind. The various elements of the system share common resources (electricity, water and gas) to accomplish their specific tasks, according to user requirements. Since resources are limited, in the sense that their consumption cannot exceed fixed thresholds, competition between the elements of the system generates internal conflicts. Using the information dispatched over the communication network and centralized or decentralized computational resources, control strategies have to be implemented within the system, so as to allocate the limited resources while maximizing performance. The definition of performance indices, the design and tuning of the individual control strategies, and the optimization procedure are the key issues in HAS design.
Resources are allocated by implementing control strategies defined by means of suitable parameters. Performance should be evaluated with respect to given scenarios, which specify, over a period of time, the number, kind and scheduling of the tasks the system is asked to perform, together with the users' preferences. Clearly, avoiding overload situations may increase the delay, and this trade-off gives rise to the optimization problem that the HAS has to solve. This optimization problem is not convex (see figure 1), so we cannot use the classical methods of optimization: the search may get trapped in a local minimum or maximum. To solve this kind of optimization problem we have to use suitable meta-heuristic optimization techniques.
We have to find parameter values which are the best among those in some neighborhood of the search space, even though they may not be the absolute best. The key point is to define a meta-heuristic optimizer, i.e. a rule for choosing new parameters starting from the old ones. That means we start from some value of the parameters, we compute the performance of the system for that value, then we modify the parameters and evaluate the performance again; if there is an improvement we keep the new values, and we keep modifying them looking for better ones.
Consider a very simple scenario with only two appliances, with parameters to1, ts1, to2, ts2, both subject to the same scenario. The procedure works in this way: we give a random initial value to the parameters, we run the simulation and evaluate the performance; if the new parameters perform better we keep them, then we modify the parameters and run the simulation again (see figure 2).
Figure 2: Closed loop between the meta-heuristic optimizer and the simulation model: the optimizer sends input parameters to the simulation model and receives the corresponding responses (performance evaluations).
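As a minimal sketch of this loop, the following Python fragment couples a random-search optimizer with a simulation model. The simulate() function is a hypothetical stand-in for the HAS simulator (here a toy non-convex function of the four parameters); the step size and number of iterations are illustrative choices, not values from the paper.

import math
import random

# Hypothetical stand-in for the HAS simulation model: it takes the four control
# parameters (to1, ts1, to2, ts2) of the two appliances and returns a scalar
# performance index to be minimized. A toy non-convex function is used here.
def simulate(params):
    return sum((p - t) ** 2 + math.sin(3 * p) for p, t in zip(params, (3, 1, 2, 4)))

def optimize(n_iter=500, step=0.5):
    # random initial value for the parameters
    best = [random.uniform(0, 5) for _ in range(4)]
    best_perf = simulate(best)
    for _ in range(n_iter):
        # modify the parameters: small random perturbation of the current best
        candidate = [p + random.uniform(-step, step) for p in best]
        perf = simulate(candidate)
        # keep the new parameters only if the simulated performance improves
        if perf < best_perf:
            best, best_perf = candidate, perf
    return best, best_perf

params, perf = optimize()
print("best (to1, ts1, to2, ts2):", params, "performance:", perf)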
2. Heuristic methods
The term heuristic is linked to algorithms mimicking some behavior found in nature, e.g., the principle of evolution through selection and mutation (genetic algorithms), the annealing process of melted iron (simulated annealing) or the self-organization of ant colonies (ant colony optimization). In fact, a large number of heuristics result from such analogies, up to the great deluge algorithm introduced by Dueck (1993). Heuristic optimization methods are essentially computational, and therefore they were naturally introduced following the development of electronic computing devices. The first contributions go back to Bock (1958) and Croes (1958), who developed procedures to solve the traveling salesman problem, but the most significant advances in the domain were made in the late 1980s and 1990s, when the main techniques were introduced. However, only very recently have desktop computers reached the performance necessary to make the use of these methods really appealing.
All these techniques share the structure of the classical local search (Algorithm 1, after [1]):

Algorithm 1 Classical local search
1: Generate initial solution xc
2: while stopping criteria not met do
3:   Select a neighbor xn of the current solution xc
4:   if the acceptance criterion is met then xc = xn
5: end while
Hill-climbing uses information about the gradient for the selection of a neighbor xn in line 3, whereas local search algorithms choose the neighbors according to some random mechanism. This mechanism, as well as the acceptance criterion in line 4, is specific to each particular heuristic and defines the way the algorithm walks through the solution space. The stopping criterion often consists of a given number of iterations.
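A minimal Python sketch of this template is given below: the neighbor mechanism and the acceptance criterion are passed in as functions (here a random perturbation and a greedy accept-if-better rule), and the stopping criterion is a fixed iteration count. The one-dimensional objective is a toy function chosen only for illustration.

import math
import random

def local_search(f, x0, neighbor, accept, n_iter=1000):
    # Generic local search: the neighbor mechanism and the acceptance
    # criterion are the two ingredients that distinguish the heuristics.
    xc = x0
    for _ in range(n_iter):          # stopping criterion: iteration count
        xn = neighbor(xc)            # line 3: select a neighbor of xc
        if accept(f(xn), f(xc)):     # line 4: acceptance criterion
            xc = xn
    return xc

f = lambda x: x ** 2 + 10 * math.sin(x)                     # toy non-convex objective
random_neighbor = lambda x: x + random.uniform(-0.5, 0.5)   # random mechanism
greedy_accept = lambda fn, fc: fn < fc                      # accept only improvements

x_best = local_search(f, x0=random.uniform(-10, 10),
                      neighbor=random_neighbor, accept=greedy_accept)
print(x_best, f(x_best))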
Local search methods are generally divided into trajectory methods, which work on a single solution, and population based methods, where a whole set of solutions is updated simultaneously. In the first class we find threshold methods and tabu search, whereas the second class consists of genetic algorithms, differential evolution methods and ant colonies. All these local search methods have particular rules for the choice of a neighbor, for the acceptance of a solution, or for both. All methods, except tabu search, allow uphill moves, i.e. they accept solutions which are worse than the previous one, in order to escape local minima.
3. Trajectory methods
The definition of neighborhood is central to these methods and generally depends on the problem under consideration.
Finding efficient neighborhood functions that lead to high quality local optima can be challenging. We consider three
different trajectory methods.
Simulated annealing (SA) This method imitates the annealing process mentioned above: worse solutions are accepted with a probability that decreases with a temperature parameter T, which is progressively reduced over Rmax rounds (pseudo-code after [1]):

Algorithm 2 Simulated annealing
1: Generate initial solution xc, initialize Rmax and temperature T
2: for r = 1 to Rmax do
3:   while stopping criteria not met do
4:     Select a neighbor xn of xc, compute Δ = f(xn) − f(xc) and generate u uniform in (0, 1)
5:     if Δ < 0 or e^(−Δ/T) > u then xc = xn
6:   end while
7:   Reduce T
8: end for
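The following Python sketch follows the structure of Algorithm 2; the geometric cooling schedule (T multiplied by alpha after each round), the step size and the toy objective are assumptions made for illustration.

import math
import random

def simulated_annealing(f, x0, T0=1.0, alpha=0.9, R_max=50, n_inner=100, step=0.5):
    xc, T = x0, T0
    for _ in range(R_max):                       # outer loop over temperature rounds
        for _ in range(n_inner):                 # inner stopping criterion: n_inner moves
            xn = xc + random.uniform(-step, step)          # neighbor of the current solution
            delta = f(xn) - f(xc)
            # accept improvements always, worse moves with probability exp(-delta/T)
            if delta < 0 or math.exp(-delta / T) > random.random():
                xc = xn
        T *= alpha                               # reduce the temperature
    return xc

f = lambda x: x ** 2 + 10 * math.sin(x)          # toy non-convex objective
x = simulated_annealing(f, x0=random.uniform(-10, 10))
print(x, f(x))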
Threshold accepting (TA) This method, suggested by Dueck and Scheuer (1990), is a deterministic analog of simulated annealing where the sequence of temperatures T is replaced by a sequence of thresholds τr. As for SA, these threshold values typically decrease to zero while r increases to Rmax. Algorithm 2 then becomes:

Algorithm 3 Threshold accepting
1: Generate initial solution xc, initialize Rmax and the threshold sequence τr
2: for r = 1 to Rmax do
3:   while stopping criteria not met do
4:     Select a neighbor xn of xc and compute Δ = f(xn) − f(xc)
5:     if Δ < τr then xc = xn
6:   end while
7: end for
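A corresponding Python sketch is given below; the linearly decreasing threshold sequence and the other parameter values are illustrative assumptions, not prescriptions from the text.

import math
import random

def threshold_accepting(f, x0, R_max=20, n_inner=100, step=0.5):
    # decreasing sequence of thresholds tau_r, ending at zero
    taus = [1.0 * (1 - r / (R_max - 1)) for r in range(R_max)]
    xc = x0
    for tau in taus:
        for _ in range(n_inner):
            xn = xc + random.uniform(-step, step)   # neighbor of the current solution
            if f(xn) - f(xc) < tau:                 # deterministic acceptance rule
                xc = xn
    return xc

f = lambda x: x ** 2 + 10 * math.sin(x)             # toy non-convex objective
x = threshold_accepting(f, x0=random.uniform(-10, 10))
print(x, f(x))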
Tabu search (TS) Tabu search keeps a memory of the search, the tabu list T, containing recently visited solutions; these are excluded from the neighborhood when the next move is selected, so that the algorithm does not cycle back to solutions it has just left (pseudo-code after [1]):

Algorithm 4 Tabu search
1: Generate initial solution xc and initialize the tabu list T = ∅
2: while stopping criteria not met do
3:   Compute the admissible neighbors V = N(xc) \ T
4:   Select the best solution xn in V
5:   xc = xn and T = T ∪ xn
6:   Update memory
7: end while
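The sketch below applies this scheme to a toy discrete problem (bit strings with a one-bit-flip neighborhood); the tabu list length, the objective and the iteration count are illustrative choices.

import random
from collections import deque

def tabu_search(f, x0, n_iter=200, tabu_size=20):
    xc = list(x0)
    best = list(xc)
    tabu = deque(maxlen=tabu_size)            # tabu list T: recently visited solutions
    tabu.append(tuple(xc))
    for _ in range(n_iter):
        # neighborhood: all solutions obtained by flipping one bit, minus the tabu list
        neighbors = []
        for j in range(len(xc)):
            xn = list(xc)
            xn[j] = 1 - xn[j]
            if tuple(xn) not in tabu:
                neighbors.append(xn)
        if not neighbors:                     # whole neighborhood is tabu: stop
            break
        xc = min(neighbors, key=f)            # move to the best admissible neighbor
        tabu.append(tuple(xc))                # update the memory
        if f(xc) < f(best):
            best = list(xc)
    return best

# toy objective on bit strings: minimized when the bits alternate 0, 1, 0, 1, ...
f = lambda bits: sum(1 for j, b in enumerate(bits) if b != j % 2)
x0 = [random.randint(0, 1) for _ in range(12)]
print(tabu_search(f, x0))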
4. Population based methods

Genetic algorithms (GA) Genetic algorithms imitate evolution through selection and mutation: a population P of solutions is evolved by selecting parents from a mating pool P′, combining them by crossover, mutating the resulting children and collecting them in a set P′′, from which the next population is chosen by a survivor function (pseudo-code after [1]):

Algorithm 5 Genetic algorithm
1: Generate initial population P of solutions
2: while stopping criteria not met do
3:   Select the mating pool P′ ⊂ P and initialize the set of children P′′ = ∅
4:   for i = 1 to n do
5:     Select parents at random from P′, apply crossover and mutation to produce xchild
6:     P′′ = P′′ ∪ xchild
7:   end for
8:   P = survive(P′, P′′)
9: end while
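A compact Python sketch of this scheme on bit strings is shown below; tournament selection, one-point crossover and the mutation probability are standard choices assumed here for illustration, not prescriptions from the paper.

import random

def genetic_algorithm(f, n_bits=16, pop_size=30, n_gen=100, p_mut=0.05):
    # initial population of random bit strings
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(n_gen):
        children = []
        for _ in range(pop_size):
            # parent selection from the mating pool: binary tournament
            pa = min(random.sample(pop, 2), key=f)
            pb = min(random.sample(pop, 2), key=f)
            # one-point crossover
            cut = random.randint(1, n_bits - 1)
            child = pa[:cut] + pb[cut:]
            # mutation: flip each bit with probability p_mut
            child = [1 - b if random.random() < p_mut else b for b in child]
            children.append(child)
        # survivor selection: keep the best pop_size individuals of parents and children
        pop = sorted(pop + children, key=f)[:pop_size]
    return pop[0]

# toy objective: number of bits differing from the pattern 0, 1, 0, 1, ...
f = lambda bits: sum(1 for j, b in enumerate(bits) if b != j % 2)
best = genetic_algorithm(f)
print(best, f(best))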
Ant colonies (ACO) Ant colony optimization imitates the way ants find short paths between their colony and a source of food: each ant constructs a solution guided by pheromone trails, and after each iteration the trails partially evaporate and are then reinforced, more strongly for better solutions (pseudo-code after [1]):

Algorithm 6 Ant colony optimization
1: Initialize pheromone trails
2: while stopping criteria not met do
3:   for all ants do Construct a solution, choosing each step with a probability that grows with the pheromone level
4:   end for
5:   for all ants do Update pheromone trail (more for better solutions)
6:   end for
7: end while
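As an illustration, the following Python sketch applies this idea to a small traveling salesman instance, a problem the text itself mentions; the distance matrix, the evaporation rate rho and the deposit constant Q are arbitrary illustrative values.

import random

# small symmetric distance matrix (5 cities); purely illustrative data
D = [[0, 2, 9, 10, 7],
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]
n = len(D)

def tour_length(tour):
    return sum(D[tour[i]][tour[(i + 1) % n]] for i in range(n))

def ant_colony(n_ants=10, n_iter=100, rho=0.5, Q=10.0):
    tau = [[1.0] * n for _ in range(n)]            # pheromone trails
    best, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            # each ant builds a tour, choosing the next city with probability
            # proportional to pheromone level divided by distance
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = [tau[i][j] / D[i][j] for j in cand]
                tour.append(random.choices(cand, weights=w)[0])
            tours.append(tour)
        # evaporation, then reinforcement (more pheromone for better solutions)
        tau = [[t * (1 - rho) for t in row] for row in tau]
        for tour in tours:
            L = tour_length(tour)
            if L < best_len:
                best, best_len = tour, L
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += Q / L
                tau[b][a] += Q / L
    return best, best_len

print(ant_colony())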
Differential evolution (DE) Differential evolution updates a population P of nP vectors of dimension d over nG generations: a new candidate for member i is built by adding the difference of two randomly chosen members, scaled by F, to a third one, and is then mixed with the current member by crossover with probability CR; the candidate replaces the member only if it improves the objective (pseudo-code after [1]):

Algorithm 7 Differential evolution
1: Initialize parameters nP, nG, F and CR
2: Initialize population P(1) of nP vectors of dimension d
3: for k = 1 to nG do
4:   P(0) = P(1)
5:   for i = 1 to nP do
6:     Generate r1, r2, r3 ∈ {1, ..., nP} with r1 ≠ r2 ≠ r3 ≠ i
7:     Compute P(v)_i = P(0)_r1 + F × (P(0)_r2 − P(0)_r3)
8:     for j = 1 to d do
9:       Generate u uniform in (0, 1); if u < CR then P(u)_j,i = P(v)_j,i else P(u)_j,i = P(0)_j,i
10:    end for
11:    if f(P(u)_i) < f(P(0)_i) then P(1)_i = P(u)_i else P(1)_i = P(0)_i
12:  end for
13: end for
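A direct Python transcription of Algorithm 7 is sketched below; the population size, F, CR, the bounds and the toy objective are illustrative values.

import random

def differential_evolution(f, d=4, nP=20, nG=200, F=0.8, CR=0.9, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    # initial population: nP random vectors of dimension d
    P = [[random.uniform(lo, hi) for _ in range(d)] for _ in range(nP)]
    for _ in range(nG):
        P_new = []
        for i in range(nP):
            # pick three distinct members different from i
            r1, r2, r3 = random.sample([k for k in range(nP) if k != i], 3)
            # mutant vector: base vector plus scaled difference of two others
            v = [P[r1][j] + F * (P[r2][j] - P[r3][j]) for j in range(d)]
            # binomial crossover with probability CR, component by component
            u = [v[j] if random.random() < CR else P[i][j] for j in range(d)]
            # greedy selection between the trial vector and the current member
            P_new.append(u if f(u) < f(P[i]) else P[i])
        P = P_new
    return min(P, key=f)

# toy objective: shifted sphere function with minimum at (1, 2, 3, 4)
f = lambda x: sum((xi - t) ** 2 for xi, t in zip(x, (1, 2, 3, 4)))
best = differential_evolution(f)
print(best, f(best))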
Particle swarm (PS) Particle swarm optimization moves a population of particles through the search space; at each step k, particle i updates its velocity with a term Δvi that pulls it towards the best position it has visited and towards the best position found by the whole swarm, and then moves accordingly:

vi(k) = vi(k−1) + Δvi,    xi(k) = xi(k−1) + vi(k)
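The sketch below implements this update in Python; the acceleration coefficients c1 and c2, the swarm size, the bounds and the toy objective are assumptions chosen for illustration.

import math
import random

def particle_swarm(f, d=2, n_particles=20, n_iter=200, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(d)] for _ in range(n_particles)]
    v = [[0.0] * d for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                  # best position found by each particle
    gbest = min(pbest, key=f)[:]                 # best position found by the swarm
    for _ in range(n_iter):
        for i in range(n_particles):
            for j in range(d):
                u1, u2 = random.random(), random.random()
                # velocity update: v_i(k) = v_i(k-1) + dv_i
                dv = c1 * u1 * (pbest[i][j] - x[i][j]) + c2 * u2 * (gbest[j] - x[i][j])
                v[i][j] = v[i][j] + dv
                x[i][j] = x[i][j] + v[i][j]      # position update
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

f = lambda x: sum(xi ** 2 + 10 * math.sin(xi) for xi in x)   # toy non-convex objective
print(particle_swarm(f))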
Bibliography
[1] M. Gilli, P. Winker. Heuristic Optimization Methods in Econometrics. University of Geneva, October 2007.
[2] Manfred Gilli. An Introduction to Optimization Heuristics. University of Cyprus, September 2004.
[3] Pasi Airikka. Heuristic Evolutionary Random Optimizer for Non-linear Optimization Including Control and
Identification Problems. Finland.
[4] Premalatha K, Natarajan A M. Combined Heuristic Optimization Techniques for Global Minimization. Tamil
Nadu, India 2010.
[5] Michael Steinbrunn, Guido Moerkotte, Alfons Kemper. Heuristic and randomized optimization for the join ordering problem. The VLDB Journal, 1997.
[6] G. Conte,
[7] Todd W. Neller. Heuristic optimization and dynamical system safety verification. Knowledge Systems Laboratory.