Applied Mathematics and Computation 219 (2013) 10967–10973
Solving nonlinear equations system via an efficient genetic algorithm with symmetric and harmonious individuals
Hongmin Ren a,*, Long Wu a, Weihong Bi b, Ioannis K. Argyros c

a College of Information and Engineering, Hangzhou Radio and TV University, Hangzhou 310012, Zhejiang, PR China
b Department of Mathematics, Zhejiang University, Hangzhou 310028, Zhejiang, PR China
c Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
Keywords: Genetic algorithms; Symmetric and harmonious individuals; Nonlinear equations system; Elitist model; Population diversity

Abstract. We present an efficient genetic algorithm as a general tool for solving optimum problems. As a specialized application, this algorithm can be used to approximate a solution of a system of nonlinear equations in the n-dimensional Euclidean space setting. The new idea involves the introduction of some pairs of symmetric and harmonious individuals at each generation of the genetic algorithm. The population diversity is maintained better this way. The elitist model is used to ensure convergence. Simulation results show that the new method is indeed very effective.

© 2013 Elsevier Inc. All rights reserved.
1. Introduction
In this paper, we introduce genetic algorithms as a general tool for solving optimum problems. As a special case, we use these algorithms to find the solution of the system of nonlinear equations

    f_1(x_1, x_2, ..., x_n) = 0,
    f_2(x_1, x_2, ..., x_n) = 0,
    ...
    f_n(x_1, x_2, ..., x_n) = 0,                                                      (1.1)

where f = (f_1, f_2, ..., f_n) : D = [a_1, b_1] x [a_2, b_2] x ... x [a_n, b_n] ⊆ R^n -> R^n is continuous.

* Corresponding author. E-mail addresses: rhm65@126.com (H. Ren), wl@mail.hzrtvu.edu.cn (L. Wu), bivh@zju.edu.cn (W. Bi), iargyros@cameron.edu (I.K. Argyros).
http://dx.doi.org/10.1016/j.amc.2013.04.041

Genetic algorithms (GA) were first introduced in the 1970s by John Holland at the University of Michigan [8]. Since then, a great deal of development on GA has taken place; see [5,6] and the references therein. GA were used as an adaptive machine learning approach in their early period of development, and they have been successfully applied in numerous areas such as artificial intelligence, self-adaptive control, systems engineering, image processing, combinatorial optimization and financial systems. GA thus show a very extensive application prospect. Genetic algorithms are search algorithms based on the mechanics of natural genetics. In a genetic algorithm, a population of candidate solutions (called individuals or phenotypes) to an optimization problem is evolved towards better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible [5]. The evolution usually starts from a population of randomly generated individuals and happens in generations. In each generation, the fitness of every individual in the population is evaluated, the more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new population. The new population is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population.

When genetic algorithms are used to solve practical problems, the premature phenomenon often appears, which limits the search performance of the genetic algorithm. The cause of the premature phenomenon is that high similarity exists between individuals after some generations, so the opportunity to generate new individuals by further genetic manipulation is greatly reduced. Many ideas have been proposed to avoid the premature phenomenon [6,15,16,18]. Guang and Yonglin [18] introduce two special individuals at each generation of the genetic algorithm so that the population maintains diversity in the problem of finding the minimized distance between surfaces, and the computational efficiency is thereby improved. We introduced two other special individuals at each generation of the genetic algorithm for the same problem in [15], and the computational efficiency was further improved. Furthermore, we suggested putting some symmetric and harmonious individuals at each generation of a genetic algorithm applied to a general optimal problem in [16], and good computational efficiency was obtained. Some applications of our methods in reservoir mid-long term hydraulic power operation are given in [17]. Recently, many authors have used GA to solve nonlinear equations systems; see [2,9,10,12,14]. These works give the impression that GA are effective methods for solving nonlinear equations systems.
However, efforts are still needed in order to solve nonlinear equations systems more effectively. In this paper, we present a new genetic algorithm to solve Eq. (1.1). The paper is organized as follows: we convert the equation problem (1.1) to an optimal problem in Section 2; in Section 3 we present our new genetic algorithm with symmetric and harmonious individuals for the corresponding optimal problem; in Section 4 we give a mixed method combining our method with Newton's method; in Section 5 we provide some numerical examples to show that our new methods are very effective. Some remarks and conclusions are given in the concluding Section 6.
2. Converting (1.1) to an optimal problem
Let us define the function F : D = [a_1, b_1] x [a_2, b_2] x ... x [a_n, b_n] ⊆ R^n -> R as follows:

    F(x_1, x_2, ..., x_n) = (1/n) * sum_{i=1}^{n} |f_i(x_1, x_2, ..., x_n)|.          (2.1)

Then, we convert problem (1.1) to the following optimal problem:

    min  F(x_1, x_2, ..., x_n)
    s.t. (x_1, x_2, ..., x_n) ∈ D.                                                    (2.2)
Suppose x* ∈ D is a solution of Eq. (1.1); then we have F(x*) = 0 from the definition (2.1) of F. Since F(x) ≥ 0 holds for all x ∈ D, we deduce that x* is a solution of problem (2.2). On the other hand, assume x* ∈ D is a solution of problem (2.2). Then for all x ∈ D, we have F(x) ≥ F(x*). Now suppose Eq. (1.1) has at least one solution, denoted by y*. Then we have 0 ≤ F(x*) ≤ F(y*) = 0, that is, F(x*) = 0 and x* is a solution of Eq. (1.1). Hence, Eq. (1.1) is equivalent to problem (2.2) if Eq. (1.1) has at least one solution. In what follows, we always suppose Eq. (1.1) has at least one solution, and we try to find it by finding a solution of (2.2) via a genetic algorithm.
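The equivalence above suggests a direct implementation: collect the residual functions and average their absolute values. The following sketch (illustrative code, not from the paper; the helper name make_objective is ours) does this for the system of Example 5.1 in Section 5.

```python
# Sketch of the conversion (1.1) -> (2.1): F is the mean absolute residual.
# The residuals below are those of Example 5.1 in Section 5.

def make_objective(residuals):
    """Build F(x) = (1/n) * sum_i |f_i(x)| from a list of residual functions."""
    n = len(residuals)
    def F(x):
        return sum(abs(f(x)) for f in residuals) / n
    return F

f1 = lambda x: x[0]**2 + x[1]**2 + x[0] + x[1] - 8.0
f2 = lambda x: x[0]*x[1] + x[0] + x[1] - 5.0
F = make_objective([f1, f2])

print(F((1.0, 2.0)))   # a root of the system, so F = 0
print(F((0.0, 0.0)))   # F = (8 + 5)/2 = 6.5
```

Minimizing F over D is then exactly problem (2.2).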
3. New genetic algorithm: SHEGA
Since the simple genetic algorithm (also called the standard genetic algorithm) is not very efficient in practical computation, many variants have been proposed [6]. In this paper, we give a new genetic algorithm. Our main idea is to put pairs of symmetric and harmonious individuals in the generations.
3.1. Coding
We use the binary code in our method, as used in the simple genetic algorithm [6]. The binary code is the most frequently used code in GA; it represents a candidate solution by a string of 0s and 1s. The length of the string is related to the degree of accuracy needed for the solution, and satisfies the following inequality
    L_i ≥ log_2((b_i - a_i) / e_i),                                                   (3.1)

where L_i is the length of the string standing for the i-th component of an individual, and e_i is the degree of accuracy needed for x_i. In fact, we should choose for L_i the minimal positive integer satisfying (3.1). Usually, all e_i are equal to one another, so the common value can be denoted by e_x. For example, let a_1 = 0, b_1 = 1 and e_x = 10^{-6}; then we can choose L_1 = 20, since L_1 ≥ log_2 10^6 ≈ 19.93156857. The variable x_1 ∈ [a_1, b_1] is a real number, and it can be represented by 20 binary digits: 00000000000000000000 stands for 0, 00000000000000000001 stands for 1/(2^{L_1} - 1), 00000000000000000010 stands for 2/(2^{L_1} - 1), ..., 11111111111111111111 stands for 1.
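The length rule (3.1) and the decoding of a bit string back to a real number can be sketched as follows (an illustration; the decoding denominator 2^L - 1 follows the convention above that the all-ones string stands for the right endpoint).

```python
import math

# Sketch of the binary coding of Section 3.1.

def string_length(a, b, eps):
    """Minimal integer L with L >= log2((b - a)/eps), cf. inequality (3.1)."""
    return math.ceil(math.log2((b - a) / eps))

def decode(bits, a, b):
    """Map a 0/1 string to a real number in [a, b]."""
    L = len(bits)
    return a + int(bits, 2) / (2**L - 1) * (b - a)

L1 = string_length(0.0, 1.0, 1e-6)
print(L1)                           # 20, since log2(10**6) ≈ 19.93
print(decode("0" * 20, 0.0, 1.0))   # 0.0
print(decode("1" * 20, 0.0, 1.0))   # 1.0
```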
3.2. Fitness function
Since the simple genetic algorithm is designed to find a solution of a maximization problem, one should make some change to obtain the fitness function. In this study, we define the fitness function as follows:
    g(x_1, x_2, ..., x_n) = 1 / (1 + F(x_1, x_2, ..., x_n)).                          (3.2)
This function g satisfies: (1) the fitness value is always positive, which is needed by the genetic operators below; (2) the fitness value is larger when the point (x_1, x_2, ..., x_n) is closer to a solution x* of problem (2.2).
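As a small illustration (the numeric values are hypothetical, not from the paper), the transformation (3.2) maps an exact solution (F = 0) to the maximal fitness 1, and worse points to smaller fitness:

```python
# Sketch of the fitness transformation (3.2): g = 1/(1 + F) turns the
# minimization of F >= 0 into a maximization with values in (0, 1].

def fitness(F_value):
    return 1.0 / (1.0 + F_value)

print(fitness(0.0))   # 1.0 at an exact solution (F = 0)
print(fitness(6.5))   # smaller fitness for a worse point
```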
3.3. Selection
We use Roulette Wheel Selection (also called the proportional selection operator) in our method, as used in the simple genetic algorithm. The probability that an individual is selected is proportional to its fitness value. Suppose the size of the population is N and the fitness of individual i is g_i. Individual i is selected for the next generation with probability p_i given by

    p_i = g_i / sum_{j=1}^{N} g_j    (i = 1, 2, ..., N).                              (3.3)
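Roulette Wheel Selection as in (3.3) can be sketched in a few lines; Python's random.choices performs exactly this fitness-proportional draw (the population and fitness values below are hypothetical):

```python
import random

# Sketch of Roulette Wheel Selection (3.3): each individual is drawn with
# probability proportional to its fitness.

def roulette_select(population, fitnesses, k):
    """Select k individuals (with replacement) proportionally to fitness."""
    return random.choices(population, weights=fitnesses, k=k)

random.seed(0)
pop = ["A", "B", "C"]
fit = [0.1, 0.1, 0.8]            # hypothetical fitness values
picks = roulette_select(pop, fit, 1000)
print(picks.count("C") / 1000)   # close to 0.8
```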
3.4. Symmetric and harmonious individuals
We first give the definition of symmetric and harmonious individuals.

Definition 3.1. Suppose individuals M_1 = (x_1, x_2, ..., x_n) and M_2 = (y_1, y_2, ..., y_n) can be represented in the binary code by M_1' = (x_11 x_12 ... x_1L_1  x_21 x_22 ... x_2L_2  ...  x_n1 x_n2 ... x_nL_n) and M_2' = (y_11 y_12 ... y_1L_1  y_21 y_22 ... y_2L_2  ...  y_n1 y_n2 ... y_nL_n), respectively. They are called symmetric and harmonious individuals if and only if x_ij = 1 - y_ij holds for any i = 1, 2, ..., n and j = 1, 2, ..., L_i.

In Definition 3.1, the individuals M_1' and M_2' are complements in the binary sense.

In order to avoid the premature phenomenon of the genetic algorithm, we introduce some pairs of symmetric and harmonious individuals in each generation. We do not use fixed symmetric and harmonious individuals as in [15,18]. On the contrary, we generate the pairs of symmetric and harmonious individuals randomly. On the one hand, these symmetric pairs of individuals continue to enrich the diversity of the population. On the other hand, they continue to explore the space even if they have not been selected to participate in the genetic manipulations of crossover or mutation. Suppose the size of the population is N, and let k ∈ [0, 0.5) be a parameter. Let ⌊r⌋ be the biggest integer equal to or less than r. We introduce ⌊kN⌋ pairs of symmetric and harmonious individuals in the current generation provided that the distance between the best fitness value of the last generation and that of the current generation is less than a preset precision denoted by e_1. Here, we call k the symmetry and harmonious factor.
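Generating a random pair of symmetric and harmonious individuals is then just a random bit string together with its bitwise complement (illustrative sketch):

```python
import random

# Sketch of Definition 3.1: a pair of symmetric and harmonious individuals
# is a bit string together with its bitwise complement.

def random_pair(total_length):
    """Return a random bit string and its complement (a symmetric pair)."""
    m1 = "".join(random.choice("01") for _ in range(total_length))
    m2 = "".join("1" if b == "0" else "0" for b in m1)
    return m1, m2

random.seed(1)
m1, m2 = random_pair(12)
print(m1, m2)   # every position satisfies x_ij = 1 - y_ij
```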
3.5. Crossover and mutation
We use the one-point crossover operator in our genetic algorithm, just as used in the simple genetic algorithm. That is, a single crossover point on both parents' organism strings is selected, and all data beyond that point in either organism string is swapped between the two parent organisms. The resulting organisms are the children. An example is shown as follows:

    A: 10110111|001        A': 10110111|110
    B: 00011100|110   =>   B': 00011100|001

We use bit string mutation, just as used in the simple genetic algorithm. That is, the mutation of bit strings occurs through bit flips at random positions. The following example illustrates this:

    A: 1010 1 0101010   =>   A': 1010 0 0101010
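Both operators can be sketched as follows (illustrative code; the crossover call reproduces the strings above with the cut after position 8):

```python
import random

# Sketch of the two variation operators of Section 3.5: one-point crossover
# (swap all bits beyond a cut point) and bit-flip mutation (flip each bit
# independently with probability p_m).

def one_point_crossover(a, b, point):
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(bits, p_m, rng):
    return "".join(b if rng.random() >= p_m else ("1" if b == "0" else "0")
                   for b in bits)

a, b = "10110111001", "00011100110"
child1, child2 = one_point_crossover(a, b, 8)
print(child1)   # 10110111110
print(child2)   # 00011100001

rng = random.Random(0)
print(mutate("101010101010", 0.05, rng))
```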
3.6. Elitist model
It is well-known that the global convergence of the simple genetic algorithm cannot be assured [6]. In order to guarantee the global convergence of the genetic algorithm, we use an elitist model (or the optimal preservation strategy) in this paper. That is, at each generation we preserve the individual with the maximum fitness value into the next generation.
3.7. Procedure
For the convenience of the discussion, we call our genetic algorithm with symmetric and harmonious individuals SHEGA, and we call the simple genetic algorithm with the elitist model EGA. The procedure of SHEGA is as follows:

Step 1. Assignment of the parameters of the genetic algorithm: the size N of the population, the number n of variables of (2.2), the lengths L_1, L_2, ..., L_n (computed from (3.1)) of the binary strings of the components of an individual, the symmetry and harmonious factor k, the controlled precision e_1 of Section 3.4 used to introduce the symmetric and harmonious individuals, the probability p_c of the crossover operator, the probability p_m of the mutation operator, and the largest genetic generation G.
Step 2. Generate the initial population randomly.
Step 3. Calculate the fitness value of each individual of the contemporary population, and preserve the optimal individual of the contemporary population into the next generation.
Step 4. If the distance between the best fitness value of the last generation and that of the current generation is less than the preset precision e_1, we generate N - 2⌊kN⌋ - 1 individuals using Roulette Wheel Selection and ⌊kN⌋ pairs of symmetric and harmonious individuals randomly. Otherwise, we generate N - 1 individuals using Roulette Wheel Selection directly. The population is then divided into two parts: one is the seed subpopulation constituted by the symmetric and harmonious individuals, and the other is a subpopulation ready to be bred, constituted by the residual individuals.
Step 5. Apply the crossover operator between each individual in the seed subpopulation and one individual selected randomly from the other subpopulation. For the residual individuals in the subpopulation ready to be bred, apply the crossover operator pairwise with probability p_c and the mutation operator with probability p_m.
Step 6. Repeat Step 3-Step 5 until the maximum genetic generation G is reached.
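The steps above can be compressed into the following sketch (an illustration only: Step 5's seed/residual pairing is simplified to plain pairwise crossover, and all parameter values are hypothetical choices in the spirit of Section 5; the objective is that of Example 5.1):

```python
import random

# A compressed, simplified sketch of the SHEGA loop (Steps 1-6) for a
# 2-variable problem. F is the objective (2.1) for Example 5.1.

def F(x):
    f1 = x[0]**2 + x[1]**2 + x[0] + x[1] - 8.0
    f2 = x[0]*x[1] + x[0] + x[1] - 5.0
    return (abs(f1) + abs(f2)) / 2.0

A, B, L = -3.5, 2.5, 20                      # domain and bits per variable
N, G, PC, PM, LAM, E1 = 40, 300, 0.9, 0.005, 0.25, 0.001

def decode(bits):
    return tuple(A + int(bits[i:i+L], 2) / (2**L - 1) * (B - A)
                 for i in range(0, len(bits), L))

def fitness(bits):
    return 1.0 / (1.0 + F(decode(bits)))     # fitness (3.2)

def rand_ind(rng):
    return "".join(rng.choice("01") for _ in range(2 * L))

def shega(rng):
    pop = [rand_ind(rng) for _ in range(N)]
    last_best = -1.0
    for _ in range(G):
        fits = [fitness(p) for p in pop]
        best = max(fits)
        elite = pop[fits.index(best)]        # elitist model (Section 3.6)
        npairs = int(LAM * N) if abs(best - last_best) < E1 else 0
        nsel = N - 1 - 2 * npairs
        new = [elite]
        for _ in range(npairs):              # symmetric and harmonious pairs
            m1 = rand_ind(rng)
            new.append(m1)
            new.append("".join("1" if b == "0" else "0" for b in m1))
        sel = rng.choices(pop, weights=fits, k=nsel)   # selection (3.3)
        for i in range(0, nsel - 1, 2):      # simplified pairwise crossover
            if rng.random() < PC:
                cut = rng.randrange(1, 2 * L)
                sel[i], sel[i+1] = (sel[i][:cut] + sel[i+1][cut:],
                                    sel[i+1][:cut] + sel[i][cut:])
        sel = ["".join(b if rng.random() >= PM else ("1" if b == "0" else "0")
                       for b in s) for s in sel]       # bit-flip mutation
        pop = new + sel
        last_best = best
    fits = [fitness(p) for p in pop]
    return decode(pop[fits.index(max(fits))])

x = shega(random.Random(0))
print(x, F(x))   # best point found; F should be small
```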
4. Mixed algorithm: SHEGA–Newton method
In order to further improve the efficiency of SHEGA, we can mix it with a classical iterative method such as Newton's method [3,13]:

    x^(k+1) = x^(k) - f'(x^(k))^{-1} f(x^(k))    (k ≥ 0),  x^(0) ∈ D.                 (4.1)

Here, f'(x) denotes the Fréchet derivative of the function f. Local as well as semilocal convergence results for Newton's method (4.1) under various assumptions have been given by many authors [1,3,4,7,11,13]. It is well-known that Newton's method converges to the solution quadratically provided that some necessary conditions are satisfied. However, there are two deficiencies that limit the application of Newton's method. First, the function f must be differentiable, which may not be satisfied in practical applications. Second, a good initial point for beginning the iteration is key to ensuring the convergence of the iterative sequence, but it is a difficult task to choose such an initial point in advance. In fact, choosing good initial points to begin the corresponding iteration is a common issue for all the classical iterative methods used to solve Eq. (1.1) [3,13]. Here, we use Newton's method as an example; in fact, one can develop other methods by mixing SHEGA with other iterative methods. We can state the SHEGA-Newton method as follows:
Step 1. Given the maximal iterative step S and the precision accuracy e_y, set s = 0.
Step 2. Find an initial guess x^(0) ∈ D by using SHEGA, given in Section 3.
Step 3. Compute f_i(x_1^(s), x_2^(s), ..., x_n^(s)) (i = 1, 2, ..., n). If F(x_1^(s), x_2^(s), ..., x_n^(s)) ≤ e_y, report that the approximate solution x^(s) = (x_1^(s), x_2^(s), ..., x_n^(s)) is found and exit from the loop, where F is defined in (2.1).
Step 4. Compute the Jacobian J_s = f'(x^(s)) and solve the linear equations

    J_s u^(s) = -f(x^(s)).                                                            (4.2)

Set x^(s+1) = x^(s) + u^(s).
Step 5. If s ≤ S, set s = s + 1 and go to Step 3. Otherwise, report that the approximate solution cannot be found.
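For a concrete 2 x 2 instance, the Newton refinement of Steps 3-5 can be sketched as follows (illustrative code for the system of Example 5.1, with the linear system (4.2) solved by Cramer's rule; the starting point stands in for a SHEGA output):

```python
# Sketch of the Newton refinement (4.1)-(4.2) for the 2x2 system of
# Example 5.1, with the Jacobian and the linear solve written out by hand.

def f(x, y):
    return (x*x + y*y + x + y - 8.0, x*y + x + y - 5.0)

def newton_step(x, y):
    f1, f2 = f(x, y)
    # Jacobian J = [[2x+1, 2y+1], [y+1, x+1]]; solve J u = -f (Cramer's rule).
    a, b, c, d = 2*x + 1, 2*y + 1, y + 1, x + 1
    det = a*d - b*c
    ux = (-f1*d + f2*b) / det
    uy = (-f2*a + f1*c) / det
    return x + ux, y + uy

x, y = 0.8, 1.8            # hypothetical initial guess from SHEGA
for _ in range(20):
    x, y = newton_step(x, y)
print(x, y)                # converges quadratically to the root (1, 2)
```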
5. Numerical examples
In this section, we will provide some examples to show the efﬁciency of our new method.
Example 5.1. Let f be defined in D = [-3.5, 2.5] x [-3.5, 2.5] by

    f_1(x_1, x_2) = x_1^2 + x_2^2 + x_1 + x_2 - 8 = 0,
    f_2(x_1, x_2) = x_1 x_2 + x_1 + x_2 - 5 = 0.                                      (5.1)
Let us choose the parameters as follows:

    N = 40,  p_c = 0.9,  p_m = 0.005,  e_x = 0.001,  e_1 = 0.001.                     (5.2)
Since genetic algorithms are random algorithms, we run each method 30 times and compare the number of convergent runs under various maximal generations G for EGA and SHEGA. The comparison results are given in Table 1. We also give the comparison of the average of the best function value F under the fixed maximal generation G = 300 in Table 2. Here, we say the corresponding genetic algorithm is convergent if the function value F(x_1, x_2, ..., x_n) is less than a fixed precision e_y. We set e_y = 0.05 for this example. Tables 1 and 2 show that SHEGA with a proper symmetry and harmonious factor k performs better than EGA.
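As a quick sanity check (ours, not the paper's), the points (1, 2) and (2, 1) are exact roots of system (5.1), so the objective F of (2.1) vanishes there:

```python
# Illustrative check that (1, 2) and (2, 1) are exact roots of system (5.1).

def F(x1, x2):
    f1 = x1**2 + x2**2 + x1 + x2 - 8.0
    f2 = x1*x2 + x1 + x2 - 5.0
    return (abs(f1) + abs(f2)) / 2.0

print(F(1.0, 2.0))   # 0.0
print(F(2.0, 1.0))   # 0.0
```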
Example 5.2. Let f be defined in D = [-5, 5] x [-1, 3] x [-5, 5] by

    f_1(x_1, x_2, x_3) = 3x_1^2 + sin(x_1 x_2) - x_3^2 + 2 = 0,
    f_2(x_1, x_2, x_3) = 2x_1^3 - x_2^2 x_3 + 3 = 0,
    f_3(x_1, x_2, x_3) = sin(2x_1) + cos(x_2 x_3) + x_2 - 1 = 0.                      (5.3)
Let us choose the parameters as follows:

    N = 50,  p_c = 0.8,  p_m = 0.05,  e_x = 0.0001,  e_1 = 0.001.                     (5.4)
We run each method 20 times and compare the number of convergent runs under various maximal generations G for EGA and SHEGA. The comparison results are given in Table 3. We also give the comparison of the average of the best function value F under the fixed maximal generation G = 500 in Table 4. Here, we say the corresponding genetic algorithm is convergent if the function value F(x_1, x_2, ..., x_n) is less than a fixed precision e_y. We set e_y = 0.02 for this example. Tables 3 and 4 show that SHEGA with a proper symmetry and harmonious factor k performs better than EGA.

Next, we provide an example where f is nondifferentiable. Newton's method (4.1) and SHEGA-Newton cannot be used to solve this problem since f is nondifferentiable. However, SHEGA still applies. Moreover, with the same idea used for the SHEGA-Newton method, one can develop mixed methods which do not require the differentiability of the function f.
Example 5.3. Let f be defined in D = [-2, 2] x [-1, 6] by

    f_1(x_1, x_2) = x_1^2 - x_2 + 1 + (1/9)|x_1 - 1| = 0,
    f_2(x_1, x_2) = x_2^2 + x_1 - 7 + (1/9)|x_2| = 0.                                 (5.5)
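Although f is nondifferentiable at x_1 = 1 and at x_2 = 0, the objective F of (2.1) is still cheap to evaluate, which is all a derivative-free method such as SHEGA needs (illustrative sketch):

```python
# Objective (2.1) for the nondifferentiable system (5.5); no derivative of
# f is needed, which is why a GA applies where Newton's method does not.

def F(x1, x2):
    f1 = x1**2 - x2 + 1.0 + abs(x1 - 1.0) / 9.0
    f2 = x2**2 + x1 - 7.0 + abs(x2) / 9.0
    return (abs(f1) + abs(f2)) / 2.0

print(F(1.0, 2.0))   # f1 vanishes here but f2 = -16/9, so F = 8/9
```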
Table 1
The comparison results of convergence number of times for Example 5.1.

                  G=50   G=100  G=150  G=200  G=250  G=300
EGA                11     11     11     11     12     12
SHEGA (k=0.05)     13     15     18     20     20     21
SHEGA (k=0.10)      8     20     23     26     29     29
SHEGA (k=0.15)     16     21     27     28     28     29
SHEGA (k=0.20)     16     26     29     30     30     30
SHEGA (k=0.25)     20     28     29     29     30     30
SHEGA (k=0.30)     22     28     30     30     30     30
SHEGA (k=0.35)     17     27     29     30     30     30
SHEGA (k=0.40)     17     29     30     30     30     30
SHEGA (k=0.45)     19     30     30     30     30     30
Table 2
The comparison results of the average of the best function value F for Example 5.1.

                  G=300
EGA               0.129795093953972
SHEGA (k=0.05)    0.053101422547384
SHEGA (k=0.10)    0.041427669194903
SHEGA (k=0.15)    0.034877317449825
SHEGA (k=0.20)    0.035701604675096
SHEGA (k=0.25)    0.038051665705034
SHEGA (k=0.30)    0.039332883168632
SHEGA (k=0.35)    0.035780879206619
SHEGA (k=0.40)    0.034509424138501
SHEGA (k=0.45)    0.037425326021257
Table 3
The comparison results of convergence number of times for Example 5.2.

                  G=100  G=200  G=300  G=400  G=500
EGA                 3      4      4      4      4
SHEGA (k=0.05)      2      6      8      9     11
SHEGA (k=0.10)      2      7     11     14     15
SHEGA (k=0.15)      5     11     14     14     16
SHEGA (k=0.20)      3     11     14     17     18
SHEGA (k=0.25)      8     11     14     17     17
SHEGA (k=0.30)      8     16     16     17     17
SHEGA (k=0.35)      6     14     18     19     20
SHEGA (k=0.40)      5     14     19     19     20
SHEGA (k=0.45)      6     17     19     19     20
Table 4
The comparison results of the average of the best function value F for Example 5.2.

                  G=500
EGA               0.1668487275323187
SHEGA (k=0.05)    0.0752619855962109
SHEGA (k=0.10)    0.0500864062405815
SHEGA (k=0.15)    0.0358268275921585
SHEGA (k=0.20)    0.0257859494269335
SHEGA (k=0.25)    0.0239622084932336
SHEGA (k=0.30)    0.0247106452514721
SHEGA (k=0.35)    0.0171980128114993
SHEGA (k=0.40)    0.0179659124369376
SHEGA (k=0.45)    0.0158999282064303

Table 5
The comparison results of convergence number of times for Example 5.3.

                  G=100  G=150  G=200  G=250  G=300
EGA                 6      9     10     10     10
SHEGA (k=0.05)     10     15     16     17     19
SHEGA (k=0.10)     12     15     17     18     20
SHEGA (k=0.15)     15     18     19     20     20
SHEGA (k=0.20)     15     20     20     20     20
SHEGA (k=0.25)     11     14     17     17     18
SHEGA (k=0.30)     10     14     16     17     19
SHEGA (k=0.35)     16     17     18     19     20
SHEGA (k=0.40)     10     12     18     18     20
SHEGA (k=0.45)      9     12     15     15     17
Table 6
The comparison results of the average of the best function value F for Example 5.3.

                  G=300
EGA               0.0081958187235953
SHEGA (k=0.05)    0.0021219384775699
SHEGA (k=0.10)    0.0019286950245614
SHEGA (k=0.15)    0.0018367719544782
SHEGA (k=0.20)    0.0022816080103967
SHEGA (k=0.25)    0.0023297925904943
SHEGA (k=0.30)    0.0023318357433983
SHEGA (k=0.35)    0.0021392510790106
SHEGA (k=0.40)    0.0022381534380744
SHEGA (k=0.45)    0.0025798012930550
Let us choose the parameters as follows:

    N = 40,  p_c = 0.85,  p_m = 0.008,  e_x = 0.001,  e_1 = 0.005.                    (5.6)
We run each method 20 times and compare the number of convergent runs under various maximal generations G for EGA and SHEGA. The comparison results are given in Table 5. We also give the comparison of the average of the best function value F under the fixed maximal generation G = 300 in Table 6. Here, we say the corresponding genetic algorithm is convergent if the function value F(x_1, x_2, ..., x_n) is less than a fixed precision e_y. We set e_y = 0.003 for this example. Tables 5 and 6 show that SHEGA with a proper symmetry and harmonious factor k performs better than EGA.
6. Conclusion
We presented a genetic algorithm as a general tool for solving optimum problems. Note that in the special case of approximating solutions of systems of nonlinear equations there are many deficiencies that limit the application of the usually employed methods. For example, in the case of Newton's method the function f must be differentiable and a good initial point must be found. To avoid these problems we have introduced some pairs of symmetric and harmonious individuals at each generation of a genetic algorithm. The population diversity is preserved this way, and the method guarantees convergence to a solution of the system. Numerical examples illustrate the efficiency of the new algorithm.
Acknowledgement
This work was supported by National Natural Science Foundation of China (No. 10871178), Zhejiang Radio and Television University Research Project (No. xkt200540), and Hangzhou Polytechnic Important Research Project (No. KZYZD20109).
References
[1] S. Amat, S. Busquier, M. Negra, Adaptive approximation of nonlinear operators, Numer. Funct. Anal. Optim. 25 (2004) 397-405.
[2] A.F. Kuri-Morales, Solution of simultaneous non-linear equations using genetic algorithms, WSEAS Trans. Systems 2 (2003) 44-51.
[3] I.K. Argyros, Computational Theory of Iterative Methods, in: C.K. Chui, L. Wuytack (Eds.), Studies in Computational Mathematics 15, Elsevier, New York, 2007.
[4] J.A. Ezquerro, M.A. Hernández, On an application of Newton's method to nonlinear equations with w-conditioned second derivative, BIT 42 (2002) 519-532.
[5] D.B. Fogel, Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, IEEE Press, New York, 2000.
[6] D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, second ed., Addison-Wesley, 1989.
[7] J.M. Gutiérrez, A new semilocal convergence theorem for Newton's method, J. Comput. Appl. Math. 79 (1997) 131-145.
[8] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, 1975.
[9] I.M.M. El-Emary, M.M. Abd El-Kareem, Towards using genetic algorithm for solving nonlinear equation systems, World Appl. Sci. J. 5 (2008) 282-289.
[10] G.N. Nasira, D. Sharmila Devi, Solving nonlinear equations through Jacobian sparsity pattern using genetic algorithm, Int. J. Commun. Eng. 5 (2012) 78-82.
[11] B. Neta, A new iterative method for the solution of systems of nonlinear equations, in: Z. Ziegler (Ed.), Approximation Theory and Applications, Academic Press, 1981, pp. 249-263.
[12] N.E. Mastorakis, Solving non-linear equations via genetic algorithms, in: Proceedings of the 6th WSEAS International Conference on Evolutionary Computing, Lisbon, Portugal, June 16-18, 2005, pp. 24-28.
[13] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[14] P.S. Mhetre, Genetic algorithm for linear and nonlinear equation, Int. J. Adv. Eng. Tech. 3 (2012) 114-118.
[15] Ren Hongmin, Bi Weihong, Wu Qingbiao, Calculation of minimum distance between free-form surfaces by a type of new improved genetic algorithm, Comput. Eng. Appl. 23 (2004) 62-64 (in Chinese).
[16] Ren Hongmin, Wu Qingbiao, Bi Weihong, A genetic algorithm with symmetric and harmonious individuals, Comput. Eng. Appl. 5 (2005) 24-26, 87 (in Chinese).
[17] Wan Xing, Zhou Jianzhong, Application of genetic algorithm for self-adaption, symmetry and congruity in reservoir mid-long term hydraulic power operation, Adv. Water Sci. 18 (2007) 598-603 (in Chinese).
[18] Xi Guang, Cai Yonglin, Finding the minimized distance between surfaces, J. CAD Graphics 14 (2002) 209-213 (in Chinese).