
Computer-Aided Civil and Infrastructure Engineering 20 (2005) 450–460

Aggregate Blending Using Genetic Algorithms

Y. Cengiz Toklu
Civil Engineering Department, Cyprus International University, Lefkosa,
Turkish Republic of Northern Cyprus

Abstract: Aggregate blending consists of finding the for solving aggregate-blending problems. These methods
proportions of fractions to form a final blend satisfying were all characterized by choosing a subset of sieve sizes
predefined specifications. It is a problem which is posed to make the calculations easier. In the TE type calcula-
in many ways, and solved by using different techniques. tions, an iterative procedure was followed for obtaining
These techniques range from simple graphical methods an acceptable solution. In the graphical methods, some
to advanced computer methods such as nonlinear pro- triangular and rectangular charts were designed where
gramming or dynamic programming. In this article, an each side corresponded to one sieve size. The solution
aggregate-blending problem is formulated as a multiob- was highly dependent on the sizes chosen and also the
jective optimization problem and solved by using genetic experience of the engineer. These methods were effec-
algorithms (GAs). It is shown that in this way all existing tive for at most three or four fractions and for the only
formulations of an aggregate-blending problem can be objective of finding a mix within the prescribed limits.
covered and solved. The effectiveness of this new appli- For a higher number of fractions, other graphical meth-
cation is demonstrated through numerical examples. The ods were proposed where straight lines approximated
technique is shown to be quite versatile in tackling multiple the grading curves. Analytical methods, which consisted
objectives including cost minimization, and approaching of solving a system of equations of a number equal to
at best a given target curve. Linear and nonlinear cost the number of fractions considered, were also being
functions can be dealt with equal ease; additional objec- used.
tives may be inserted into the problem with no difficulty. Then came more sophisticated methods, which were
The user has the possibility of defining and finding the adapted for computer applications. With the advances
best solutions with Pareto optimality considerations. in computer technologies and using the advantages
of these methods, more and more complex blending
problems have been solved, such as multiobjective or
1 INTRODUCTION chance-constrained problems or problems with nonlin-
ear constraints. A summary of these methods with a com-
Problems associated with aggregate blending are very prehensive literature analysis is given by Toklu (2002b).
common in the construction industry. Mixing aggregate The common disadvantage of these methods is that all
fractions is necessary when making concrete, mortar, as- of them are especially designed for the formulation con-
phalt concrete, and any soil recomposition, and when sidered. If a problem has different properties, then the
constructing granular bases and sub-bases. In fact, the method should be changed accordingly.
problem can easily be generalized to any blending prob- In the present study, the problem is formulated as a
lem that can be encountered in the food, chemical, multiobjective optimization problem. It has been shown
pharmaceutical, and petrochemical industries and the that this formulation is capable of covering all formula-
like. tions studied before and actually can be considered as
Before the common use of computers, trial-and-error the most general approach. The problem is then solved
(TE) type and graphical methods were used extensively by using a metaheuristic method, namely, a genetic algo-
rithm (GA). Certain aspects are checked by applying a
Towhom correspondence should be addressed. E-mail: yct001@ combinatory approach, scanning the range of all feasible solutions using a step size sufficiently small. Optimality

C 2005 Computer-Aided Civil and Infrastructure Engineering. Published by Blackwell Publishing, 350 Main Street, Malden, MA 02148, USA,
and 9600 Garsington Road, Oxford OX4 2DQ, UK.

2 OVERVIEW OF THE AGGREGATE-BLENDING PROBLEM

2.1 Existence conditions

Let n be the number of sieves in the sieve analysis. Let m be the number of fractions that are used in the grading, having gradation curves characterized by G{G_ij, i = 1, ..., n, j = 1, ..., m}, expressed as passing percentages by weight of fraction j at sieve i. If x = [x_1, x_2, ..., x_m]^T is the vector representing the proportions of the fractions to be used in a blend, the resulting grading curve would be given by the quantities

p_i = Σ_j G_ij x_j, i = 1, 2, ..., n, or p = Gx   (1)

where

x_i ≥ 0, i = 1, 2, ..., m   (2a)

Σ_{i=1}^{m} x_i = 1   (2b)

In general, the required grading is given by two curves: one specifying the upper limit and the other the lower limit. The upper-limit sieve passing percentages are characterized by r{r_i, i = 1, 2, ..., n}, and the lower-limit sieve passing percentages are characterized by s{s_i, i = 1, 2, ..., n}, along with the conditions r_i ≥ s_i, i = 1, 2, ..., n, so that the final gradation will remain within these limits:

s_i ≤ p_i ≤ r_i, i = 1, 2, ..., n   (3)

Thus, the problem can be stated as finding an m-dimensional vector x such that m nonnegativity conditions (Equation (2a)), 2n inequalities (Equation (3)), and one equality constraint (Equation (2b)), thus totaling m + 2n + 1 constraints, will be satisfied. In some problems, more constraints are added to the ones cited above, such as conditions on the fineness modulus or plasticity index (Easa and Can, 1985). The additional constraints coming from these conditions are also linear; thus they do not change the general character of the problem. Their sole effect would be an increase in the number of constraints.

2.2 Optimality conditions

In the formulations above, the problem is merely an existence problem, without any optimization aspect. In fact, the problem can be put into the form of an optimization problem with the introduction of one or more of the following objectives:

1. Cost optimization: The total cost of the blend can be calculated as the sum of the costs of the fractions

C = Σ_j C_j(x_j)   (4)

where C is the total cost of the blend and C_j(x_j) is the cost of the jth fraction, which is of course a function of the amount used of fraction j. C_j(x_j) can, for example, be of the form

C_j = c_j ∫_0^{x_j} φ_j(ξ) dξ   (5)

where φ_j is the unit cost function, c_j is the initial unit cost of fraction j, and ξ is a dummy variable changing between 0 and the current x_j. For the linear cost function, φ_j is equal to 1, so that the integral term in Equation (5) becomes x_j to make

C_j = c_j x_j   (5a)

where c_j is the cost of a unit amount of fraction j. Then, an objective such as "minimize C" can be inserted into the problem to make it an optimization problem.

2. Closeness to a target curve: As stated above, the usual grading constraints are given as an upper limit and a lower limit, imposing the condition that the final blend should be within the envelope defined by the limits. Actually, any solution found near the limits, though satisfying them, is liable to be violated in the next sampling, due to the probabilistic nature of the problem. Second, if an envelope is given, it is always better to be away from the limits to achieve a qualitatively better result, because these limits mark the start of regions with unacceptable results. Thus, one may impose an objective that the final blend be as far as possible from the limits, though within the given envelope. This argumentation results in defining a target gradation which is the median of the envelope, with a vector q:

q{q_i = (r_i + s_i)/2, i = 1, 2, ..., n}   (6)

If the target is chosen as the median of the envelope, it will be equidistant from the upper and lower bounds, thus not favoring either of them.
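For concreteness, the existence conditions above can be sketched in a few lines of NumPy. The two-fraction gradation data below are invented purely for illustration (they are not the paper's example):

```python
import numpy as np

# Illustrative data: G[i][j] is the % passing of fraction j at sieve i.
G = np.array([[100.0, 100.0],
              [ 60.0, 100.0],
              [ 20.0,  80.0],
              [  5.0,  30.0]])
s = np.array([ 90.0, 60.0, 30.0,  5.0])   # lower limits s_i
r = np.array([100.0, 90.0, 60.0, 20.0])   # upper limits r_i

x = np.array([0.4, 0.6])                  # candidate proportions
assert np.all(x >= 0) and np.isclose(x.sum(), 1.0)   # Equations (2a), (2b)

p = G @ x                                 # Equation (1): p = Gx
feasible = bool(np.all((s <= p) & (p <= r)))   # Equation (3)
q = (r + s) / 2                           # Equation (6): median target curve
```

Here p works out to [100, 84, 56, 20], which lies inside the envelope, so this particular x is feasible.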

In the literature, there are other curves that may be chosen as the target, such as Fuller's ideal grading curve, based upon considerations of obtaining a mix with minimum voids (Neville, 2002). In these curves, the cumulative passing percentage at a sieve is equal to the normalized sieve size raised to the power of a number in the order of 0.45 or 0.50. Here, the normalization is obtained by dividing the sieve size by the largest sieve size used for the aggregate at hand. If such a target curve is used, attention should be paid to its concordance with the other constraints of the grading.

In any case, the corresponding objective is to make p as close as possible to q, the target, or, in other words, to make the length of the vector p − q as small as possible. So, the objective becomes the minimization of Δ = ‖p − q‖, where Δ is a measure of the distance between the vectors p and q and may be calculated in a variety of ways.

3 SOLUTION OF THE AGGREGATE-BLENDING PROBLEM USING THE GENETIC ALGORITHMS

3.1 Overview of the genetic algorithms

GAs are optimization techniques based on nature's practice of evolution and survival of the fittest and were developed by Holland (1975). They belong to the class of stochastic search methods such as simulated annealing, ant colony optimization, and tabu search. The main characteristic of GAs is that a population of solutions to the problem at hand is taken as an initial generation, and an iterative process that involves operators of genetic origin, such as reproduction, crossover, and mutation, is applied with the aim of obtaining better generations until sufficiency conditions are satisfied (Goldberg, 1989; Haupt and Haupt, 1998).

The successive generations are formed by individuals, and each individual is represented by a chromosome, which is actually an object representing the characteristics of a candidate solution to the problem at hand. At early times, these chromosomes were formed by binary numbers only, but later they were generalized so as to be formed by other entities such as arrays, lists, trees, mathematical operators, or any kind of number, depending on the type of problem. In a problem such as the one considered in this article, where the task is to find a set of numbers x_1, x_2, ..., x_n, it would be logical to define a chromosome as the set of these numbers, such as {x_1, x_2, ..., x_n}.

The individuals are rated by a fitness function defined according to the type of problem and the objective. The selections then are performed linearly or nonlinearly proportional to the fitness values assumed by these chromosomes (Toklu, 2002a). Although the main principles are the same, the definitions of genetic operators and their relative importance vary from application to application (Haupt and Haupt, 1998).

A typical pseudocode of a simple application of a GA will look like the following:

Begin GA
  generation counter := 0
  Initialize Population P(generation counter)
  Evaluate Population P(generation counter)
    {compute fitness values by using objective function}
  While Not Done Do
    generation counter := generation counter + 1
    Select P(generation counter) from P(generation counter − 1)
    Crossover P(generation counter)
    Reproduce P(generation counter)
    Mutate P(generation counter)
    Evaluate P(generation counter)
  End While
End GA

In the present study, GAs are applied to the aggregate-blending problem with appropriate definitions. The following remarks point out the important points or differences with respect to classical applications.

3.2 Chromosome structure

In the present application, chromosomes are chosen to represent the solution vector x. They are formed by genes, with the ith gene in a chromosome representing the proportion of fraction i in a blend. The first m − 1 elements of vector x are generated randomly in the interval [0, 1], the mth one being calculated using Equation (2b):

x{x_1, x_2, ..., x_{m−1}, 1 − Σ_{i=1}^{m−1} x_i}

In this representation, if the last gene x_m is found to violate the nonnegativity condition, then the chromosome is rejected and a new one is created until this condition is satisfied.

3.3 Crossover operator

Because of the special conditions on the genes of the chromosome defined for blending applications, that is, that the genes will be nonnegative and their sum will be equal to unity, the crossover operator is applied in this application in a modified form. Consider two chromosomes {0.15 0.00 0.63 0.22} and {0.33 0.42 0.11 0.14}, which are randomly chosen to mate, and suppose it is randomly decided that the crossover operator will be applied by section between the first and second genes.

The resulting offspring will be {0.15 0.42 0.11 0.14} and {0.33 0.00 0.63 0.22}, with the sums of their elements being 0.82 and 1.18, respectively. In the present application, a normalization process is applied here to satisfy the unity condition. Hence, the elements of the offspring are multiplied by their respective normalization factors 1.00/0.82 and 1.00/1.18 to yield the normalized crossover offspring {0.183 0.512 0.134 0.171} and {0.280 0.000 0.534 0.186}.

3.4 Mutation operator

For mutations, a modified definition is also applied to satisfy the conditions given in Equation (2). Consider that the first chromosome above, namely {0.15 0.00 0.63 0.22}, is randomly chosen to be subject to mutation, and that the fourth gene is randomly assigned to take the randomly determined value 0.47. With these considerations, the chromosome becomes {0.15 0.00 0.63 0.47}, with the sum of its elements equal to 1.25 instead of 1.00. Applying the normalization procedure defined in the paragraph above, the elements of the mutant will be multiplied by 1.00/1.25 to yield the normalized mutant {0.120 0.000 0.504 0.376}.

3.5 Proportions of genetic operators

Three operators are used in the formulation: crossover, mutation, and reproduction. The first two operators are defined in a modified way, as described above. Reproduction is applied in such a way that a certain percentage of individuals is chosen to be passed to the next generation without being subject to the application of the crossover operator. For example, for an application where the number of individuals is 40 and the proportions are given such that the reproduction proportion is 0.30 and the mutation probability is 0.01, the next generation will be formed as follows: the 12 best individuals will be kept as they are, and 28 individuals will be obtained through an application of the crossover operator. Then, the mutation operator will be applied to all these offspring with probability 1/100 to form the next generation.

3.6 Feasible solutions: satisfaction of constraints

Dealing with constraints is one of the most critical points in the process. In creating individuals for the initial generation or in producing new individuals for the next generation, one approach is to design a procedure such that only feasible solutions, that is, individuals satisfying all the constraints, will enter the generation. This can be obtained either by introducing a routine which solves the set of equations and inequalities that define the constraints and inputting them to the GA system, or by creating random individuals and rejecting the infeasible ones. Both of these approaches may have very important disadvantages. For the first approach, it may be difficult or even impossible to find the solution set for the constraints, especially when dealing with nonlinear equations and inequalities. For the second approach, it is possible that the probability of randomly finding feasible solutions may be very low, which makes this part of the procedure highly time-consuming, or even never ending.

Therefore, in this study, a more natural approach is adopted (Toklu, 2002a). Individuals are accepted into the initial or subsequent generations without checking whether they satisfy all the constraints. The procedure is designed in such a manner that, as can be seen in the paragraphs above, the individuals created from the start or generated through the crossover or mutation operators satisfy only Equation (2); they do not necessarily satisfy the constraints given in Equation (3). The elimination of the infeasible individuals is then left to the natural selection aspect of the GAs, through penalties assigned to the infeasible ones when they are evaluated in the objective function.

3.7 Objective function

To compare and evaluate the fitness values of chromosomes, a proper objective function has to be defined. For a multiobjective problem like this one, this can be achieved in a number of ways (Ehrgott, 2000; Triantaphyllou, 2000). In this application, an appropriate choice would be to use the weighted sum model and to combine the three objectives into one, as

Φ(x) = λ_Δ Δ(x) + λ_C C(x) + λ_Π Π(x)   (7)

where Φ(x) is the objective function to be minimized. In this equation, Δ(x) is the distance of p(x) from the target curve q; C(x) is the cost of the solution x; Π(x) is the penalty function for unsatisfied constraints; and λ_Δ, λ_C, and λ_Π are nonnegative factors arranging the existence and relative importance of the terms in the objective function.

The distance Δ in Equation (7) between the vectors p and q can be calculated in one of the following ways:

Δ_1 = ‖p − q‖_1 = Σ_{i=1}^{n} |p_i − q_i|

Δ_2 = ‖p − q‖_2 = (Σ_{i=1}^{n} |p_i − q_i|²)^{1/2}

Δ_∞ = ‖p − q‖_∞ = max{|p_i − q_i|, i = 1, ..., n}

The penalty function Π is calculated from

Π = Σ_{j=1}^{n} π_j

π_j = p_j − r_j   if p_j > r_j
π_j = s_j − p_j   if p_j < s_j
π_j = 0           if s_j ≤ p_j ≤ r_j

so that Π measures how much a pseudosolution x goes outside the borders of the feasible region.

With this choice, three objectives are combined into one:

1. being as close as possible to the target curve (minimize Δ),
2. obtaining a least-cost solution (minimize C),
3. satisfying the constraints to remain in the prescribed envelope (minimize Π until it vanishes, if possible).

It is to be noted that, among these objectives, the first two are the only real external objectives. The third one is actually part of the algorithm to find feasible solutions. In fact, alternatively, this same program could be formulated without this third objective, with the condition that only chromosomes satisfying all feasibility conditions would be considered in the populations, and any chromosome which does not satisfy the conditions imposed by Equation (3) would be considered prematurely dead. This means that, according to the choice made in this article, individuals in a population are pseudosolutions to the problem, in the sense that they may or may not satisfy the limiting conditions. The nature of the procedure is such that, in the succeeding generations, the ones that satisfy these conditions are preferred, and the feasible elements of the solution space are obtained.

It is to be noted that in this formulation the problem is a multiobjective optimization problem. The third objective is to guarantee the satisfaction of the feasibility conditions. The other objectives may be eliminated or augmented in number, for example, by ignoring the cost-minimization aspect, or by including other objectives such as obtaining a certain plasticity index or fineness modulus.

At the end of the optimization, one may not hope that Δ and C would both vanish, thus arriving at a solution with zero cost and exactly equal to the target curve. This is not the case for Π. To have a feasible solution, the latter has to go down until it vanishes. At the end of the prescribed number of generations, if Π is still not equal to 0, then the iterations may not be sufficient in quantity and quality, or the problem may not have feasible solutions.

4 ILLUSTRATIVE EXAMPLE

The algorithm is applied to the problem with the data presented in Table 1. There are four fractions to be blended; the analysis is carried out with 10 sieves. Gradations for these four fractions, the limiting gradations, and the target gradation, which is the median of the upper and lower limits, are shown in Figure 1. Two types of cost function are considered: linear and nonlinear. For the linear cost function, the unit price of each fraction is independent of the quantity used; on the other hand, for the nonlinear cost function, the unit prices are stepwise functions of the quantity used. As shown in Figure 2, the functions φ_j defined in Equation (5) are taken to be the same for all fractions, just for the sake of simplicity.

Table 1
Input data: required gradation and gradations for fractions to be blended; the unit costs of fractions for the linear cost function

| Sieve no. | Sieve opening (mm) | Lower limits s | Upper limits r | Median curve q | Fraction 1 G_i1 | Fraction 2 G_i2 | Fraction 3 G_i3 | Fraction 4 G_i4 |
|---|---|---|---|---|---|---|---|---|
| 1 | 7.5 | 97 | 100 | 98.5 | 97.8 | 100 | 100 | 100 |
| 2 | 5 | 82 | 95 | 88.5 | 60 | 100 | 100 | 100 |
| 3 | 3.75 | 74 | 90 | 82 | 24.2 | 100 | 100 | 100 |
| 4 | 2.5 | 66 | 82 | 74 | 12.2 | 84 | 100 | 99.2 |
| 5 | 1.87 | 60 | 74 | 67 | 9 | 50 | 100 | 96 |
| 6 | 0.787 | 43 | 57 | 50 | 4 | 8.2 | 91 | 81.5 |
| 7 | 0.331 | 28 | 42 | 35 | 2 | 2.3 | 58 | 60 |
| 8 | 0.165 | 19 | 32 | 25.5 | 1.8 | 1.8 | 36 | 40 |
| 9 | 0.059 | 8 | 18 | 13 | 1.5 | 1.5 | 15.8 | 17.2 |
| 10 | 0.029 | 2 | 8 | 5 | 1 | 1 | 8 | 8 |
| Unit cost: | | | | | 10 | 15 | 5 | 20 |
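The overall procedure of Section 3, with the operator definitions above and the weighted objective of Equation (7), can be sketched as a short runnable program on the Table 1 data. This is an illustrative reimplementation, not the author's Visual Basic program: the rank-based elitist selection, the Dirichlet initialization (used here instead of the rejection scheme of Section 3.2), and the shortened run length are simplifications of ours, so the outputs will not reproduce Table 2 exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

G = np.array([  # Table 1: rows are sieves 1-10, columns are fractions 1-4
    [97.8, 100, 100, 100], [60, 100, 100, 100], [24.2, 100, 100, 100],
    [12.2, 84, 100, 99.2], [9, 50, 100, 96], [4, 8.2, 91, 81.5],
    [2, 2.3, 58, 60], [1.8, 1.8, 36, 40], [1.5, 1.5, 15.8, 17.2],
    [1, 1, 8, 8]], dtype=float)
s = np.array([97, 82, 74, 66, 60, 43, 28, 19, 8, 2], dtype=float)
r = np.array([100, 95, 90, 82, 74, 57, 42, 32, 18, 8], dtype=float)
q = (r + s) / 2                              # target curve, Equation (6)
c = np.array([10, 15, 5, 20], dtype=float)   # linear unit costs

def fitness(x, w_dist=0.5, w_cost=0.5, w_pen=1000.0):
    """Equation (7) with the Delta_1 norm (weights as in run L2)."""
    p = G @ x
    delta1 = np.abs(p - q).sum()
    penalty = np.maximum(p - r, 0).sum() + np.maximum(s - p, 0).sum()
    return w_dist * delta1 + w_cost * (c @ x) + w_pen * penalty

# Initial generation: 40 random points on the unit simplex.
pop = [rng.dirichlet(np.ones(4)) for _ in range(40)]
best0 = min(fitness(x) for x in pop)

for generation in range(200):                # paper uses 5,000 generations
    pop.sort(key=fitness)
    elites = pop[:10]                        # reproduction proportion 0.25
    children = []
    while len(children) < 30:
        i, j = rng.choice(10, 2)             # mate among better individuals
        cut = rng.integers(1, 4)             # one-point crossover position
        child = np.concatenate([elites[i][:cut], elites[j][cut:]])
        if rng.random() < 0.01:              # mutation probability
            child[rng.integers(4)] = rng.random()
        children.append(child / child.sum()) # normalization, Equation (2b)
    pop = elites + children

best = min(pop, key=fitness)
```

Because the best individuals are carried over unchanged, the best fitness in the population can never increase from one generation to the next; with the full 5,000 generations and the selection of Section 3.5 one would expect results of the quality of Table 2, while this shortened sketch runs in well under a second.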

Fig. 1. Required gradation (limiting curves and the median) and gradations of the four fractions to be blended.

Fig. 2. Unit cost function.

The resulting linear and nonlinear cost functions for all four fractions are visualized in Figure 3. The algorithm is applied through a computer program prepared using Visual Basic 6.0. The program is run on a computer with a Pentium III microprocessor that has a clock speed of 750 MHz. Runs are performed with 5,000 generations and 40 individuals in each generation. The reproduction proportion is set to 0.25, with mutation probability 0.01. For each and every data set, computations are repeated three times independently, with different random number sets, that is, on three isolated islands, and the best result is taken as the final output. A typical run took about 1 minute. An exemplary graph visualizing the descent of the best fitness values over the generations is presented in Figure 4.

Fig. 3. Cost functions for the four fractions (dashed lines for linear cost function, continuous lines for nonlinear cost function).

Fig. 4. Typical GA application. Three independent runs.

The runs are labeled D1, D2, and D3 for analyzing the three different norms; L1, L2, L3, and L4 for obtaining solutions about the tradeoff between approaching the target curve and minimizing costs calculated linearly; and N1, N2, N3, and N4 for similar solutions with the nonlinear cost function. The properties of these runs are shown in Table 2. The results of these runs are compared with each other and with the resulting TE solution, obtained in the first trial by taking all x_i to be 1/4. It is to be further noted that solution D2 is identical to the one obtained using the least-squares method (Neumann, 1964). The gradings obtained are visualized in Figures 5–9. In Figures 6–9, a modified vertical scale is used to give a better view of the curves obtained, seeing that the results are difficult to differentiate in Figure 5. The modification is obtained by a normalization such that the upper and lower gradation limits are mapped to +100% and −100%, respectively, and the median is mapped to 0%.

In the runs D1, D2, D3, L4, and N4, the objective function is formulated with two terms, whereas in the runs L1, L2, L3, N1, N2, and N3 this number is three. One of the terms in the objective function is always related to minimizing the total penalties to ensure finding feasible solutions. In all cases this objective was satisfied; that is, feasible solutions are obtained such that at the end of the iterations Π = 0. The coefficient λ_Π is taken to be 1,000 for eliminating infeasible solutions with a reasonable speed. The sum λ_Δ + λ_C of the other two coefficients is kept equal to 1 to obtain a convex linear combination of these objectives.

Table 2
Data for all runs and corresponding outputs

| Code | Objective Φ | x1 | x2 | x3 | x4 | Δ1 | Δ2 | Δ∞ | C(Lin) | C(Nonlin) | Φ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| TE | | 0.2500 | 0.2500 | 0.2500 | 0.2500 | 25.1500 | 9.8215 | 5.6000 | 12.5000 | 12.0000 | |
| D1 | Δ1 | 0.2406 | 0.1753 | 0.0000 | 0.5840 | 9.4207 | 3.9113 | 2.3304 | 16.7170 | 14.5952 | 9.4207 |
| D2 | Δ2 | 0.2539 | 0.1578 | 0.0000 | 0.5883 | 9.6270 | 3.5983 | 2.2630 | 16.6723 | 14.4811 | 3.5983 |
| D3 | Δ∞ | 0.2496 | 0.1472 | 0.0000 | 0.6032 | 11.4534 | 4.0567 | 2.0298 | 16.7680 | 14.4368 | 2.0298 |
| L1 | 0.75Δ1 + 0.25C(L) | 0.2615 | 0.1668 | 0.1044 | 0.4673 | 9.7405 | 4.3105 | 2.6704 | 14.9846 | 13.7926 | 11.0515 |
| L2 | 0.5Δ1 + 0.5C(L) | 0.2653 | 0.1558 | 0.3119 | 0.2670 | 12.1303 | 5.3276 | 2.8477 | 11.8899 | 11.3793 | 12.0101 |
| L3 | 0.25Δ1 + 0.75C(L) | 0.2704 | 0.1411 | 0.5885 | 0.0000 | 17.9950 | 8.0723 | 5.7925 | 7.7629 | 7.1008 | 10.3209 |
| L4 | C(L) | 0.3430 | 0.0502 | 0.6068 | 0.0000 | 30.6231 | 12.6856 | 7.9999 | 7.2173 | 6.3643 | 7.2173 |
| N1 | 0.75Δ1 + 0.25C(NL) | 0.2631 | 0.1515 | 0.0000 | 0.5853 | 9.4511 | 3.8033 | 2.3102 | 16.6109 | 14.4313 | 10.6962 |
| N2 | 0.5Δ1 + 0.5C(NL) | 0.2653 | 0.1558 | 0.3110 | 0.2678 | 12.1206 | 5.3210 | 2.8470 | 11.9024 | 11.3894 | 11.7550 |
| N3 | 0.25Δ1 + 0.75C(NL) | 0.2704 | 0.1410 | 0.5885 | 0.0000 | 17.9959 | 8.0736 | 5.7945 | 7.7626 | 7.1004 | 9.8243 |
| N4 | C(NL) | 0.3430 | 0.0502 | 0.6068 | 0.0000 | 30.6233 | 12.6857 | 8.0000 | 7.2173 | 6.3643 | 6.3643 |
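Since Table 1 gives the full input data, the TE row of Table 2 can be verified directly. A minimal sketch, assuming the distance definitions of Section 3.7 and the linear unit costs of Table 1:

```python
import numpy as np

G = np.array([  # Table 1 gradations: rows are sieves, columns are fractions
    [97.8, 100, 100, 100], [60, 100, 100, 100], [24.2, 100, 100, 100],
    [12.2, 84, 100, 99.2], [9, 50, 100, 96], [4, 8.2, 91, 81.5],
    [2, 2.3, 58, 60], [1.8, 1.8, 36, 40], [1.5, 1.5, 15.8, 17.2],
    [1, 1, 8, 8]], dtype=float)
s = np.array([97, 82, 74, 66, 60, 43, 28, 19, 8, 2], dtype=float)
r = np.array([100, 95, 90, 82, 74, 57, 42, 32, 18, 8], dtype=float)
q = (r + s) / 2

x = np.full(4, 0.25)                      # the trial-and-error blend
p = G @ x                                 # Equation (1)
delta1 = np.abs(p - q).sum()              # -> 25.15, as in Table 2
delta2 = np.sqrt(((p - q) ** 2).sum())    # -> 9.8215
delta_inf = np.abs(p - q).max()           # -> 5.60
cost_lin = np.array([10, 15, 5, 20]) @ x  # -> 12.50
feasible = bool(np.all((s <= p) & (p <= r)))  # True, so the penalty vanishes
```

The computed values match the tabulated TE figures, which also confirms that the equal-proportions starting blend is feasible even before any optimization.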

Fig. 5. Gradation curves obtained for the three different norms and for the TE solution.

Fig. 6. Gradation curves obtained for the three different norms and for the TE solution on the normalized scale (upper limits are +100%, lower limits are −100%, the median is 0%).

4.1 Effect of the type of metric used

The use of the three different norms is compared in the runs D1, D2, and D3, where the fitness value is calculated using Δ1, Δ2, and Δ∞, respectively. The corresponding results, tabulated in Table 2 with the gradings presented in Figures 5 and 6, show, as expected, that the outputs may be quite different according to the choice: x2 = 14.7% for D3 versus x2 = 17.5% for D1. In general, the decision maker is free to choose any one of these norms, and there is no physical evidence for preferring any one of them. In fact, Figure 10 shows that there is a good correlation between these norms (Δ2 = 0.4131Δ1 + 0.048 with correlation coefficient R² = 0.9842, and Δ∞ = 0.2685Δ1 − 0.1861 with correlation coefficient R² = 0.9219, as calculated using all relevant points in Table 2). Based on these considerations, Δ1 is chosen to be the basic norm in this analysis for further applications.

Fig. 7. Gradation curves for extreme conditions on the normalized scale.

Fig. 8. Gradation curves for solutions with different importance levels of linear cost.

Fig. 9. Gradation curves for solutions with different importance levels of nonlinear cost.

Fig. 10. Correlation between norm types (Δ2 = 0.413Δ1 + 0.048, R² = 0.9842; Δ∞ = 0.2685Δ1 − 0.1861, R² = 0.9219).

Fig. 11. Change of x3 and x4 depending on the cost ratio in the final objective (L for linear, N for nonlinear cost function).

Fig. 12. Pareto optimal points for linear and nonlinear cost functions.

4.2 Minimization of the distance from the target curve

It can be seen from the results that the worst solutions, as far as approaching the target curve is concerned, are obtained from the TE method and from the solutions L4 and N4, as expected (see Figure 7). Effectively, the TE method needs to be improved because it yields simply a feasible solution without any optimization, and for L4 and N4 approaching the target curve was not an objective.

The best solution for each type of metric can be read from Table 2 in the columns Δ1, Δ2, and Δ∞. If Δ2 is taken to be the accepted distance measure, obviously the run D2 becomes the absolute best, as expected by definition. This solution gives Δ2(D2) = 3.5983. It is to be noted that the solutions obtained in the run D2 are not the best when compared to the other solutions as far as the other metric definitions are concerned: Δ1(D1) = 9.4207 < Δ1(D2) = 9.6270, and Δ∞(D3) = 2.0298 < Δ∞(D2) = 2.2630.

4.3 Minimization of cost

Cost minimization is analyzed for two types of cost definition: linear and nonlinear. In the first case the unit costs are constant; in the second case the unit costs decrease nonlinearly as the amount used increases (see Figure 3). Among the fractions, the third one is the cheapest and the fourth one is the most expensive. The gradations of these two fractions, as can be seen from Figure 1, are such that one of them can be used to replace the other.

gradations of these two fractions, as it can be seen from

Figure 1, are such that one of them can be used to replace
the other one. This has actually been observed in such
a manner that as the importance of cost increases in the
final objective, more and more of fraction 4 is replaced
by fraction 3, as shown in Figure 11.
It is apparent from the results that the least cost solu-
tions are L4 and N4 where approaching the target curve
is not taken as one of the objectives. As the cost com-
ponent in the aggregate objective decreases, going from
L4 to D1 through L3, L2, and L1, the cost of the blend
increases while moving closer to the target curve. This
trend for the linear cost function can also be seen for Fig. 13. Envelopes for feasible points.
nonlinear cost definition (Figure 12).
In general, the least cost solutions may not be desirable as far as the technical performance of the resulting mix is concerned. From Table 2, one may observe that the least cost solution is more than three times as far from the target curve as the least distance solution (Φ1(L4)/Φ1(D1) = 30.6231/9.4207 = 3.25), and this is well visualized in Figure 7, where the solution L4 (and also N4) is seen to be on the borders of the envelope of feasible solutions at two different points.

4.4 Feasible and Pareto optimal solutions and decision making

Because the problem is a multiobjective optimization problem, the results may be interpreted accordingly. The cases solved involve optimizations with three (C, Φ, and Π) or fewer objectives combined into one aggregate objective function. Among these, the penalty Π is always reduced to 0 to arrive at technically feasible solutions. Therefore, the problem can in fact be considered as having two or fewer functions to be minimized. In Figure 12, the solutions are marked in C-Φ1 space, with the linear and nonlinear cost functions.

The points shown are Pareto optimal (Ehrgott, 2000), or nondominated, meaning that none of them is better (or worse) than any other point in the same set as far as both objectives C and Φ1 are considered. If one moves from one point to another on the same curve, cost will decrease but Φ1 will increase, or vice versa.

The relation between the feasible points and the Pareto optimal set can further be visualized. Consider all points x = [x1, x2, . . . , xm]T, where xi ∈ {0, 0.02, 0.04, . . . , 1}, satisfying Equations (2) and (3). These points are all feasible because they satisfy all the constraints of the problem, and they will lie in a convex polyhedron because the constraints of the problem (Equations (2) and (3)) are linear. With a special program, all these points are determined and their envelopes are shown in Figure 13. Then objective values for these points are calculated, and the images of the feasible points are plotted in the C-Φ1 space as shown in Figure 14. It is obvious from this plot that the Pareto optimal points form a front for the set of feasible points such that every point not on this front is dominated by at least one Pareto optimal point.

A further visualization can be obtained by plotting the images of all feasible points obtained with 0.02 intervals as mentioned above, together with the images of those which satisfy Equation (2) without satisfying constraint (3). Actually, this second set of points, which are out of the envelope defined by Equation (3), satisfying only the nonnegativity and unity conditions (Equations (2a) and (2b)), correspond to pseudofeasible points which are accepted in the GA application and eliminated naturally as explained above. In Figure 15, the images of all these points are shown together with the Pareto optimal points determined in this analysis.

It is obvious that the Pareto optimal set, consisting of the points D1, L1, . . . , L4 and the ones on the curve joining them, dominates infinitely many feasible points, and the decision maker is to choose one point in this set according to the decision criteria to be used.

Fig. 14. Feasible points and Pareto optimal points.
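The enumeration-and-filtering procedure described above can be sketched as follows: grid points on a 0.02 lattice are tested against the gradation envelope, and the Pareto front in (cost, distance) space is extracted. All gradation data, limits, and unit costs below are invented for illustration and are not the values used in the article:

```python
G = [[100, 100, 100],      # G[i][j]: percent passing sieve i for fraction j
     [ 90,  60,  10],
     [ 60,  30,   5],
     [ 20,  10,   0]]
q = [100, 70, 40, 10]      # target gradation (percent passing)
s = [ 95, 55, 25,  5]      # lower-limit gradation
r = [100, 85, 55, 20]      # upper-limit gradation
cost = [5.0, 3.0, 2.0]     # unit cost of each fraction

def blend(x):
    """Percent passing each sieve for fraction proportions x (sum = 1)."""
    return [sum(G[i][j] * x[j] for j in range(len(x))) for i in range(len(G))]

step, n = 0.02, 50
points = []                # feasible points, stored as (cost C, distance Phi_1, x)
for k1 in range(n + 1):
    for k2 in range(n + 1 - k1):
        x = (k1 * step, k2 * step, (n - k1 - k2) * step)
        p = blend(x)
        if all(s[i] <= p[i] <= r[i] for i in range(len(p))):  # envelope check
            C = sum(c * xi for c, xi in zip(cost, x))
            phi1 = sum(abs(pi - qi) for pi, qi in zip(p, q))  # Phi_1 distance
            points.append((C, phi1, x))

def dominated(a, b):
    """True if b is at least as good as a in both objectives, better in one."""
    return b[0] <= a[0] and b[1] <= a[1] and (b[0] < a[0] or b[1] < a[1])

front = [a for a in points if not any(dominated(a, b) for b in points)]
```

Every feasible point is then either on the front or dominated by a front point, which is exactly the structure visible in Figure 14.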
Fig. 15. Pareto optimal points, feasible points, and out-of-envelope points.

4.5 Accuracy

The accuracy of the results certainly depends on a number of factors, such as the number of generations and the population size. Comparison of the closed-form solution using the least-squares method and the corresponding GA solution, D2, shows that sufficient accuracy could be obtained with the current preferences on the parameters of the method.

5 SUMMARY AND CONCLUSIONS

Aggregate blending is an old but usually underestimated problem in civil engineering, and it can be generalized to any blending problem in other fields of science and engineering. It can be formulated in a variety of ways, from simple to more complex approaches. In general, each formulation necessitates a special technique, ranging from graphical methods and the solution of a system of linear equations, through applications of the least-squares method, to linear, nonlinear, and dynamic programming. In this article, a formulation is presented which exhibits the characteristics of a multiobjective optimization problem with linear and nonlinear constraints.

Among the techniques that can be applied to this kind of problem, a metaheuristic technique is chosen, namely, a GA. It has been shown that this technique can be applied very successfully to the problem at hand, and in such a way that all the existing formulations can be tackled with no or minimal modifications. It has been shown that once the problem is formulated as a multiobjective problem, it is very easy to make alternative applications and to play with the degree to which one desires to obtain the target value. In the formulation programmed here, three objectives are envisioned: an internal one for obtaining feasible solutions, and two external ones, one for approaching the target curve as much as possible and another one for finding the least cost mix. The first one is designed for eliminating unfeasible solutions. Therefore, it is always taken into account and with a higher relative importance. The other two can be applied with different importance factors or may not be applied at all, depending on the decision of the engineer. Thus, solutions will be found with least cost, with a minimum distance to the target curve, or a combination of both. This argumentation shows that the solution is not unique but depends on the decision of the user. It is always possible to perform a tradeoff argumentation between the objectives. At this point the concept of Pareto optimality becomes very useful in understanding the problem and in directing the decision maker to grasp the possibilities posed by the solutions.

Another remark would be to note the ease with which other objectives could be added to the problem. The tests conducted in the research show the power of GAs. Two points are to be emphasized: first, the ease with which GAs tackle constraints; second, the ease and versatility with which GAs deal with multiple objectives.

An important effort is spent in the analysis on determining the effect of different measures of distance between the gradation curve found and the target curve. It has been seen that although there is a strong correlation between these measures, the mix proportions may vary considerably depending on the choice of the norm. A blend which is the best for a given choice of metric may not be the best for a different choice.

The application has shown that the aggregate-blending problem, which is formulated in very different ways and solved by using different techniques, can now be solved by a single technique covering all types of formulations. This new solution gives the user the possibility of trading off among the objectives and thus finding the most appropriate solution for their problem. The accuracy of the method is checked via a comparison with the closed-form solution obtained by the application of the least-squares method and found to be quite satisfactory.

The method can be generalized by introducing weighting parameters which depend on the importance of the sieves and/or the interval between the upper and lower limits at a given sieve. A practical computer program based on the principles laid out in the article can be quite useful in industrial applications and help the users in making decisions with the best tradeoff between lower cost and higher technical quality solutions. The method can further be applied to pooling problems if it is revised to tackle nonlinear constraints besides linear
ones.

Finally, it should be noted that once the problem is formulated as a multiobjective optimization problem, many other techniques different from GAs can be used to solve it, such as simulated annealing, ant colony optimization, the steepest descent method, the random search method, etc. It is the belief of the author that the critical point here is the formulation and not the technique chosen.

REFERENCES

Easa, S. M. & Can, E. K. (1985), Optimization model for aggregate blending, Journal of Construction Engineering and Management, ASCE, 111, 216–31.
Ehrgott, M. (2000), Multicriteria Optimization, Springer, New York.
Goldberg, D. E. (1989), Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, New York.
Haupt, R. L. & Haupt, S. E. (1998), Practical Genetic Algorithms, John Wiley, New York.
Holland, J. H. (1975), Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor.
Neumann, D. L. (1964), Mathematical method for blending aggregates, Journal of the Construction Division, ASCE, 90, 1–13.
Neville, A. M. (2002), Properties of Concrete, 4th edn., Prentice Hall, Englewood Cliffs, NJ.
Toklu, Y. C. (2002a), Application of genetic algorithms to construction scheduling with or without resource constraints, Canadian Journal of Civil Engineering, 29(3), 421–9.
Toklu, Y. C. (2002b), Aggregate blending problem: An arena of applications of optimization methods, ICCCBE-IX The Ninth International Conference on Computing in Civil and Building Engineering, Taipei, Taiwan, April, 3–5.
Triantaphyllou, E. (2000), Multi-Criteria Decision Making Methods: A Comparative Study, Kluwer, Dordrecht.

APPENDIX

Notation

The following symbols are used in this article:

C = Total cost of a blend
Cj = Cost of the jth fraction
cj = Initial cost of the jth fraction for nonlinear costs
D1, D2, D3 = Runs with the norms Φ1, Φ2, Φ∞, respectively, and with no cost minimization
Gij = The percentage of the elements passing sieve i, for the fraction j
G = The matrix with components Gij
L1, L2, L3, L4 = Runs with different levels of linear cost and distance minimization
m = The number of fractions that are used in the grading
N1, N2, N3, N4 = Runs with different levels of nonlinear cost and distance minimization
n = The number of sieves
pi = The percentage of the elements passing sieve i, for the recomposed blend
p = The vector with components pi
qi = The percentage of the elements passing sieve i, for the target gradation
q = The vector with components qi
R = Correlation coefficient
ri = The percentage of the elements passing sieve i, for the upper-limit gradation
r = The vector with components ri
si = The percentage of the elements passing sieve i, for the lower-limit gradation
s = The vector with components si
xj = The proportion of fraction j in a blend
x = The vector with components xj
Φ = Function to be minimized
Φ1, Φ2, Φ∞ = Norm (length) of a vector, or distance between two vectors; the subscript indicates the type of norm
λj = Unit cost function for fraction j
ωΦ, ωC, ωΠ = Relative importance factors for distance, cost, and penalty minimization
ξ = Dummy variable used in cost
Π = Total penalty for a chromosome
πj = Penalty function for fraction j due to unsatisfied constraints
i, j = Positive integer indices.