EFFECT OF GRAPHICAL METHOD FOR SOLVING MATHEMATICAL PROGRAMMING PROBLEM
Bimal Chandra Das
Department of Textile Engineering, Daffodil International University, Dhaka
E-mail: bcdas@daffodilvarsity.edu.bd

Abstract: In this paper, a computer implementation of the graphical method for solving mathematical programming problems using MATLAB programming has been developed. To take decisions on programming problems we use modern scientific methods based on computer implementation. It is shown here that, by the graphical method using MATLAB programming, we can determine a particular plan of action from amongst several alternatives, for many kinds of programming problems, in a very short time.

Keywords: Mathematical programming, objective function, feasible region, constraints, optimal solution.

1 Introduction

A mathematical programming problem deals with the optimization (maximization/minimization) of a function of several variables subject to a set of constraints (inequalities or equations) imposed on the values of the variables. Optimization plays the central role in decision making; it is a synonym for maximization/minimization and means choosing the best. In our time, to take any decision, we use modern scientific methods based on computer implementations. Modern optimization theory is based on computing, and with it we can select the best alternative value of the objective function [1]. Modern game theory, dynamic programming and integer programming are also part of optimization theory, with a wide range of applications in modern science, economics and management. In the present work I have tried to compare the solution of mathematical programming problems by the graphical solution method and by other methods, rather than give theoretical descriptions. Unlike linear programming, where multidimensional problems have a great many applications, non-linear programming problems are mostly considered in only two variables. For such problems we therefore have the opportunity to plot the graph in two dimensions and obtain a concrete picture of the solution space, which is a step ahead in their solution. The material of the paper is arranged in the following way: first I discuss the Mathematical Programming (MP) problem; in the second step we discuss the graphical method for solving mathematical programming problems and, taking different kinds of numerical examples, we try to solve them by the graphical method; finally we compare the solutions obtained by the graphical method with those of other methods. For the problems considered, we use MATLAB programming to graph the constraints and obtain the feasible region. We also plot the objective functions to determine the optimum points and compare the solutions thus obtained with the exact solutions.

2 Mathematical Programming Problems

The general mathematical programming (MP) problem in n-dimensional Euclidean space R^n can be stated as follows:

Maximize f(x)
subject to
g_i(x) ≤ 0, i = 1, 2, ..., m    (1)
h_j(x) = 0, j = 1, 2, ..., p    (2)
x ∈ S    (3)

where x = (x1, x2, ..., xn)^T is the vector of unknown decision variables and f(x), g_i(x) (i = 1, 2, ..., m), h_j(x) (j = 1, 2, ..., p) are real-valued functions. The function f(x) is known as the objective function.
The inequalities (1), the equations (2) and the restriction (3) are referred to as the constraints. We have stated the MP problem as a maximization one. This has been done without any loss of generality, since a minimization problem can always be converted into a maximization problem using the identity

min f(x) = −max(−f(x)),    (4)

i.e. the minimization of f(x) is equivalent to the maximization of (−f(x)). The set S is normally taken as a connected subset of R^n; here S is taken as the entire space R^n. The set X = {x ∈ S : g_i(x) ≤ 0, i = 1, 2, ..., m; h_j(x) = 0, j = 1, 2, ..., p} is known as the feasible region, feasible set or constraint set of the program MP, and any point x ∈ X is a feasible solution or feasible point of the program MP, since it satisfies all the constraints of MP. If the constraint set X is empty (i.e. X = φ), then there is no feasible solution; in this case the program MP is inconsistent. This framework was developed by [2]. A feasible point x° ∈ X is known as a global optimal solution to the program MP if f(x) ≤ f(x°) for all x ∈ X, by [3].

3 Graphical Solution Method

The graphical (or geometrical) method for solving a mathematical programming problem is based on a well-defined set of logical steps. Following this systematic procedure, the given programming problem can be solved with a minimum amount of computational effort; the method was introduced by [4]. We know that the simplex method is the well-studied and widely used method for solving linear programming problems, while for the class of non-linear programming problems such a universal method does not exist. Programming problems involving only two variables can easily be solved graphically and, as we will observe, the characteristics of the curves give us additional information. We shall now consider several graphical examples to illustrate more vividly the differences between linear and non-linear programming problems.

Consider the following linear programming problem:

Maximize z = 0.5 x1 + 2 x2
subject to
x1 + x2 ≤ 6
x1 − x2 ≤ 1
2 x1 + x2 ≥ 6
0.5 x1 − x2 ≥ −4
x1 ≥ 1, x2 ≥ 0.

Fig. 1 Optimal solution by graphical method

The graphical solution is shown in Fig. 1, where the region of feasible solutions is shaded. Note that the optimum does occur at an extreme point. In this case, the values of the variables that yield the maximum value of the objective function are unique, and they are given by the point of intersection of the lines x1 + x2 = 6 and 0.5 x1 − x2 = −4, so that the optimal values of the variables x1* and x2* are x1* = 4/3, x2* = 14/3. The maximum value of the objective function is z = 0.5 × 4/3 + 2 × 14/3 = 10, as given by [5].

Now consider a non-linear programming problem which differs from the linear programming problem only in the objective function:

z = 10(x1 − 3.5)² + 20(x2 − 4)².    (5)

Imagine that it is desired to minimize this objective function. Observe that here we have a separable objective function. The graphical solution of this problem is given in Fig. 2.

Fig. 2 Optimal solution by graphical method

The region representing the feasible solutions is, of course, precisely the same as that for the linear programming problem of Fig. 1.
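To give a concrete sense of how such figures can be produced, the following is a minimal MATLAB sketch added here for illustration (the script used for the published figures is not reproduced in the paper). It shades the common feasible region of the two examples and overlays level curves of both objectives, assuming base MATLAB only:

    % Shade the feasible region of the LP example and overlay level curves of
    % the linear objective (Fig. 1 style) and of the quadratic objective (Fig. 2 style).
    [X1, X2] = meshgrid(linspace(0, 8, 400), linspace(0, 8, 400));
    feasible = (X1 + X2 <= 6) & (X1 - X2 <= 1) & (2*X1 + X2 >= 6) & ...
               (0.5*X1 - X2 >= -4) & (X1 >= 1) & (X2 >= 0);
    figure; hold on; xlabel('x_1'); ylabel('x_2');
    contourf(X1, X2, double(feasible), [0.5 0.5]);                 % shaded feasible region
    contour(X1, X2, 0.5*X1 + 2*X2, 0:2:14, '--');                  % level lines of 0.5x1 + 2x2
    contour(X1, X2, 10*(X1-3.5).^2 + 20*(X2-4).^2, [5 15 40 80]);  % elliptical level curves
    plot(4/3, 14/3, 'k*', 2.5, 3.5, 'ko');                         % optima reported in the text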
Here, however, the curves of constant z are ellipses with centers at the point (3.5, 4). The optimal solution is the point at which an ellipse is tangent to a side of the convex set. If the optimal values of the variables are x1* and x2*, and the minimum value of the objective function is z*, then from Figs. 1-2,

x1* + x2* = 6, and
z* = 10(x1* − 3.5)² + 20(x2* − 4)².

Furthermore, the slope of the curve z = 10(x1 − 3.5)² + 20(x2 − 4)² evaluated at (x1*, x2*) must be −1, since this is the slope of x1 + x2 = 6. Thus we have the additional equation x2* − 4 = 0.5(x1* − 3.5). We have now obtained three equations involving x1*, x2* and z*. The unique solution is x1* = 2.50, x2* = 3.50 and z* = 15.
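For readers who wish to check this algebra numerically, the short MATLAB sketch below (my own illustration, not part of the original computation) solves the two linear tangency conditions and then evaluates z*:

    % Tangency conditions from the text:
    %   x1 + x2 = 6                                (active constraint)
    %   x2 - 4  = 0.5*(x1 - 3.5)  =>  -0.5*x1 + x2 = 2.25   (equal slopes)
    A = [1 1; -0.5 1];
    b = [6; 2.25];
    xstar = A \ b;                                   % expected [2.5; 3.5]
    zstar = 10*(xstar(1)-3.5)^2 + 20*(xstar(2)-4)^2; % expected 15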
Now the point which yields the optimal value of the objective function lies on the boundary of the convex set of feasible solutions, but it is not an extreme point of this set. Consequently, any computational procedure for solving problems of this type cannot be one which examines only the extreme points of the convex set of feasible solutions.

By a slight modification of the objective function studied above, the minimum value of the objective function can be made to occur at an interior point of the convex set of feasible solutions. Suppose, for example, that the objective function is

z = 10(x1 − 2)² + 20(x2 − 3)²

and that the convex set of feasible solutions is the same as that considered above. This case is illustrated graphically in Fig. 3.

Fig. 3 Optimal solution by graphical method

The optimal values of x1, x2 and z are x1* = 2, x2* = 3 and z* = 0. Thus it is not even necessary that the optimizing point lie on the boundary. Note that in this case the minimum of the objective function in the presence of the constraints and non-negativity restrictions is the same as the minimum in the absence of any constraints or non-negativity restrictions. In such situations we say that the constraints and non-negativity restrictions are inactive, since the same optimum is obtained whether or not they are included. Each of the examples presented thus far has the property that a local optimum is also a global optimum, as introduced by [5].

As a final example, I shall examine an integer linear programming problem. Let us solve the problem

Maximize z = 0.25 x1 + x2
subject to
0.5 x1 + x2 ≤ 1.75
x1 + 0.30 x2 ≤ 1.50
x1, x2 ≥ 0, x1, x2 integers.

The situation is illustrated geometrically in Fig. 4.

Fig. 4 Optimal solution by graphical method

The shaded region would be the convex set of feasible solutions in the absence of the integrality requirements. When the xj are required to be integers, there are only four feasible solutions, represented by circles in Fig. 4. If we solve the problem as a linear programming problem, ignoring the integrality requirements, the optimal solution is x1* = 0, x2* = 1.75 and z* = 1.75. However, it is clear that when the xj are required to be integers, the optimal solution is x1* = 1, x2* = 1 and z* = 1.25.
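Since only a handful of integer points are feasible, the integer optimum can be confirmed by brute-force enumeration; the following MATLAB sketch (an illustration added here, not from the original paper) does exactly that:

    % Enumerate integer points in a small box and keep the feasible ones
    best = -Inf; bestx = [NaN NaN]; feasiblePts = [];
    for x1 = 0:3
        for x2 = 0:3
            if 0.5*x1 + x2 <= 1.75 && x1 + 0.30*x2 <= 1.50
                feasiblePts(end+1,:) = [x1 x2];   %#ok<AGROW>  collect feasible integer points
                z = 0.25*x1 + x2;
                if z > best, best = z; bestx = [x1 x2]; end
            end
        end
    end
    % feasiblePts lists (0,0), (1,0), (0,1), (1,1); best = 1.25 at bestx = [1 1]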
Note that this is not the solution that would be obtained by solving the linear programming problem and rounding the results to the nearest integers which satisfy the constraints (this would give x1 = 0, x2 = 1 and z = 1). Moreover, in the case of a NLP problem the optimal solution may or may not occur at one of the extreme points of the solution space generated by the constraints and the objective function of the given problem.

Graphical solution algorithm: The solution of a NLP problem by the graphical method, in general, involves the following steps:

Step 1: Construct the graph of the given NLP problem.
Step 2: Identify the convex region (solution space) generated by the objective function and the constraints of the given problem.
Step 3: Determine the point in the convex region at which the objective function is optimum (maximum or minimum).
Step 4: Interpret the optimum solution so obtained.

This algorithm has been introduced by [2]; a small numerical sketch of these steps is given below.
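As a numerical companion to these steps, the sketch that follows (my own illustration of the procedure, not code from the paper) evaluates a two-variable problem on a dense grid: it builds the feasible mask (Steps 1-2), restricts the objective to that mask and reads off an approximate optimum (Step 3). The function handles are placeholders; here the LP of Fig. 1 is used, but any two-variable objective and constraints could be substituted.

    % Illustrative grid-search scaffold for the graphical steps
    objective   = @(x1, x2) 0.5*x1 + 2*x2;
    constraints = @(x1, x2) (x1 + x2 <= 6) & (x1 - x2 <= 1) & ...
                            (2*x1 + x2 >= 6) & (0.5*x1 - x2 >= -4) & ...
                            (x1 >= 1) & (x2 >= 0);
    [X1, X2] = meshgrid(linspace(0, 8, 1000));   % Step 1: grid over the region of interest
    mask = constraints(X1, X2);                  % Step 2: feasible region (solution space)
    Z = objective(X1, X2);  Z(~mask) = -Inf;     % discard infeasible points (maximization)
    [zbest, k] = max(Z(:));                      % Step 3: approximate optimum on the grid
    fprintf('z ~ %.3f near (%.3f, %.3f)\n', zbest, X1(k), X2(k));   % about 10 near (4/3, 14/3)

The grid only approximates the optimum; the examples that follow recover the optimal points exactly.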
4 Solution of Various Kinds of Problems by Graphical Solution Method

4.1 Problem with objective function linear, constraints non-linear

Maximize Z = 2 x1 + 3 x2
subject to the constraints
x1² + x2² ≤ 20
x1 x2 ≤ 8
x1 ≥ 0, x2 ≥ 0.

Let us solve the problem by the graphical method. First we trace the graph of the constraints of the problem, considering the inequalities as equations, in the first quadrant (since x1 ≥ 0, x2 ≥ 0). We get the shaded region OABCD as the opportunity set.

Fig. 5 Optimal solution by graphical method

The point which maximizes the value z = 2 x1 + 3 x2 and lies in the convex region OABCD has to be found. The desired point is obtained by moving parallel to 2 x1 + 3 x2 = k, for some k, so long as 2 x1 + 3 x2 = k touches the extreme boundary point of the convex region. According to this rule, we see that the point C(2, 4) gives the maximum value of Z. Hence we find the optimal solution at this point, by [6]:

Zmax = 2·2 + 3·4 = 16 at x1 = 2, x2 = 4.
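The corner point C can be checked by solving the two active constraints simultaneously; the MATLAB sketch below (added here for illustration) substitutes x2 = 8/x1 into the circle and compares the objective at the resulting intersection points:

    % Intersection of x1^2 + x2^2 = 20 with x1*x2 = 8:
    % substituting x2 = 8/x1 gives x1^4 - 20*x1^2 + 64 = 0.
    r = roots([1 0 -20 0 64]);       % roots in x1
    r = r(r > 0);                    % first-quadrant roots: x1 = 2 and x1 = 4
    pts = [r, 8./r];                 % candidate corner points (2,4) and (4,2)
    Z = 2*pts(:,1) + 3*pts(:,2);     % objective values 16 and 14
    [Zmax, i] = max(Z);              % Zmax = 16, attained at point C(2, 4)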


4.2 Problem with objective function linear, constraints non-linear + linear

Maximize Z = x1 + 2 x2
subject to
x1² + x2² ≤ 1
2 x1 + x2 ≤ 2
x1, x2 ≥ 0.

Let us solve the above problem by the graphical method. We see that the objective function is linear while the constraints are non-linear and linear: constraint one is a circle of radius 1 with center (0, 0), and constraint two is a straight line. Tracing the graph of the constraints of the problem in the first quadrant, we get the shaded region below as the opportunity set.

Fig. 6 Optimal solution by graphical method

Considering the inequalities as equalities,

x1² + x2² = 1    (6)
2 x1 + x2 = 2    (7)

Solving (6) and (7) we get (x1, x2) = (1, 0) and (3/5, 4/5). The extreme points of the convex region are O(0, 0), A(1, 0), B(3/5, 4/5) and C(0, 1).
By moving according to the above rule, we see that the line x1 + 2 x2 = k touches the extreme point (3/5, 4/5) of the convex region. Hence the required solution of the given problem is

Zmax = 3/5 + 2·(4/5) = 3/5 + 8/5 = 11/5 = 2.2 at x1 = 3/5, x2 = 4/5.
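As a check on the intersection computation, the following MATLAB sketch (an illustration added here) solves (6)-(7) by substitution and evaluates the objective at the extreme points listed in the text:

    % Intersection of the unit circle (6) with the line 2*x1 + x2 = 2 (7):
    % substituting x2 = 2 - 2*x1 gives 5*x1^2 - 8*x1 + 3 = 0.
    x1 = roots([5 -8 3]);            % x1 = 1 and x1 = 3/5
    x2 = 2 - 2*x1;                   % corresponding x2 = 0 and x2 = 4/5
    E = [0 0; 1 0; 3/5 4/5; 0 1];    % extreme points O, A, B, C from the text
    Zext = E(:,1) + 2*E(:,2);        % objective values 0, 1, 2.2, 2
    [Zbest, i] = max(Zext);          % largest value 2.2 at B(3/5, 4/5)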
4.3 Problem with objective function non-linear, constraints linear

Minimize Z = x1² + x2²
subject to the constraints:
x1 + x2 ≥ 4
2 x1 + x2 ≥ 5
x1, x2 ≥ 0.

Our objective function is non-linear (its level curves are circles with the origin as center) and the constraints are linear. The problem of minimizing Z = x1² + x2² is equivalent to minimizing the radius of a circle centred at the origin such that it touches the convex region bounded by the given constraints. First we construct the graph of the constraints by MATLAB programming [9] in the first quadrant, since x1 ≥ 0, x2 ≥ 0.

Fig. 7 Optimal solution by graphical method

Since x1 + x2 ≥ 4 and 2 x1 + x2 ≥ 5, the desired point must be somewhere in the unbounded convex region ABC. The desired point will be that point of the region at which a side of the convex region is tangent to the circle. Differentiating the equation of the circle,

2 x1 dx1 + 2 x2 dx2 = 0 ⇒ dx2/dx1 = −x1/x2.    (8)

Considering the inequalities as equalities, 2 x1 + x2 = 5 and x1 + x2 = 4, and differentiating, we get 2 dx1 + dx2 = 0 and dx1 + dx2 = 0, i.e.

dx2/dx1 = −2 and dx2/dx1 = −1.    (9)

Now, from (8) and (9) we get −x1/x2 = −2 ⇒ x1 = 2 x2, and −x1/x2 = −1 ⇒ x1 = x2. This shows that the circle is tangent to (i) the line x1 + x2 = 4 at the point (2, 2), and (ii) the line 2 x1 + x2 = 5 at the point (2, 1). But from the graph we see that the point (2, 1) does not lie in the convex region and hence is to be discarded. Thus our required point is (2, 2).

∴ Minimum Z = 2² + 2² = 8 at the point (2, 2).
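The tangency argument can also be scripted; the sketch below (my own illustration, using only base MATLAB) computes the foot of the perpendicular from the origin to each boundary line, discards infeasible candidates and keeps the smallest objective value:

    % Candidate tangency points: the foot of the perpendicular from the origin
    % to a line a'*x = b is x = a*(b/(a'*a)).
    a1 = [1; 1];  b1 = 4;   p1 = a1*(b1/(a1'*a1));   % (2, 2) on x1 + x2 = 4
    a2 = [2; 1];  b2 = 5;   p2 = a2*(b2/(a2'*a2));   % (2, 1) on 2*x1 + x2 = 5
    cands = [p1, p2];                                % candidate points as columns
    feas = (sum(cands) >= 4) & ([2 1]*cands >= 5) & all(cands >= 0);
    Z = sum(cands.^2);                               % objective at each candidate
    Zmin = min(Z(feas));                             % = 8 at (2, 2); (2, 1) is infeasible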
5 Comparison of Solution by Graphical Method and Others

Let us consider the problem

Maximize Z = 2 x1 + 3 x2 − x1²
subject to the constraints:
x1 + 2 x2 ≤ 4
x1, x2 ≥ 0.

First I want to solve the above problem by the graphical solution method. The given problem can be rewritten as

Maximize Z = −(x1 − 1)² + 3(x2 + 1/3)
subject to the constraints
x1 + 2 x2 ≤ 4
x1, x2 ≥ 0.

We observe that our objective function is a parabola with vertex at (1, −1/3) and the constraints are linear. To solve the problem graphically, first we construct the graph of the constraint in the first quadrant (since x1 ≥ 0 and x2 ≥ 0), considering the inequality as an equation. Here we construct the graph of our problem by MATLAB programming [9]. According to our previous graphical method, our desired point is at (1/4, 15/8).
Fig. 8 Optimum solution by graphical method

Hence we get the maximum value of the objective function at this point. Therefore,

Zmax = 2 x1 + 3 x2 − x1² = 97/16 at x1 = 1/4, x2 = 15/8.
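A quick numerical cross-check of this graphical result (my own sketch, using only base MATLAB) exploits the fact that ∂Z/∂x2 = 3 > 0, so the maximum must lie on the line x1 + 2 x2 = 4; substituting x2 = (4 − x1)/2 reduces the problem to one variable:

    % Restrict Z = 2*x1 + 3*x2 - x1^2 to the boundary x1 + 2*x2 = 4
    g = @(x1) -(2*x1 + 3*(4 - x1)/2 - x1.^2);   % negated for minimization
    [x1s, negZ] = fminbnd(g, 0, 4);             % x1s is approximately 0.25
    x2s = (4 - x1s)/2;                          % x2s is approximately 1.875 = 15/8
    Zs  = -negZ;                                % Zs is approximately 6.0625 = 97/16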
Let us now solve the above problem by using the Kuhn-Tucker conditions [7]. The Lagrangian function of the given problem is

F(x1, x2, λ) ≡ 2 x1 + 3 x2 − x1² + λ(4 − x1 − 2 x2).

By the Kuhn-Tucker conditions, we obtain

(a) ∂F/∂x1 ≡ 2 − 2 x1 − λ ≤ 0,  ∂F/∂x2 ≡ 3 − 2λ ≤ 0
(b) ∂F/∂λ ≡ 4 − x1 − 2 x2 ≥ 0
(c) x1 ∂F/∂x1 + x2 ∂F/∂x2 ≡ x1(2 − 2 x1 − λ) + x2(3 − 2λ) = 0
(d) λ ∂F/∂λ ≡ λ(4 − x1 − 2 x2) = 0, with λ ≥ 0.

Now the following cases arise.

Case (i): Let λ = 0. In this case we get ∂F/∂x1 ≡ 2 − 2 x1 ≤ 0 and ∂F/∂x2 ≡ 3 − 2·0 ≤ 0 ⇒ 3 ≤ 0, which is impossible, so this solution is to be discarded; this style of argument was introduced by [12].

Case (ii): Let λ ≠ 0. In this case, from λ(4 − x1 − 2 x2) = 0 we get 4 − x1 − 2 x2 = 0, i.e.

x1 + 2 x2 = 4.    (10)

Also, from ∂F/∂x1 ≡ 2 − 2 x1 − λ ≤ 0 we have 2 x1 + λ − 2 ≥ 0, and from ∂F/∂x2 ≡ 3 − 2λ ≤ 0 we have 2λ − 3 ≥ 0 ⇒ λ ≥ 3/2. If we take λ = 3/2, then 2 x1 ≥ 1/2. If we consider 2 x1 = 1/2, then x1 = 1/4. Now, putting this value of x1 in (10), we get x2 = 15/8.

∴ (x1, x2, λ) = (1/4, 15/8, 3/2). For this solution,

∂F/∂x1 ≡ 2 − 2·(1/4) − 3/2 = (4 − 1 − 3)/2 = 0, satisfied;
∂F/∂x2 ≡ 3 − 2·(3/2) = 0, satisfied;
∂F/∂λ ≡ 4 − 1/4 − 2·(15/8) = (16 − 1 − 15)/4 = 0, satisfied;
x1 ∂F/∂x1 + x2 ∂F/∂x2 ≡ (1/4)·0 + (15/8)·0 = 0, satisfied;
λ ∂F/∂λ ≡ (3/2)·0 = 0, satisfied.

Thus all the Kuhn-Tucker necessary conditions are satisfied at the point (1/4, 15/8). Hence the optimum (maximum) solution of the given NLP problem is

Zmax = 2 x1 + 3 x2 − x1² = 97/16 at x1 = 1/4, x2 = 15/8.
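The same verification can be scripted; the sketch below (illustrative only, not from the original paper) evaluates the stationarity, primal feasibility and complementary-slackness quantities at (x1, x2, λ) = (1/4, 15/8, 3/2):

    x1 = 1/4; x2 = 15/8; lambda = 3/2;
    dF_dx1 = 2 - 2*x1 - lambda;          % stationarity in x1: expected 0
    dF_dx2 = 3 - 2*lambda;               % stationarity in x2: expected 0
    primal = 4 - x1 - 2*x2;              % primal feasibility: expected 0 (constraint active)
    comp   = lambda*primal;              % complementary slackness: expected 0
    fprintf('%g %g %g %g\n', dF_dx1, dF_dx2, primal, comp);   % prints all zeros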
Let us now solve the problem by Beale's method:

Maximize f(x) = 2 x1 + 3 x2 − x1²
subject to the constraints:
x1 + 2 x2 ≤ 4
x1, x2 ≥ 0.

Introducing a slack variable s, the constraint becomes x1 + 2 x2 + s = 4, with x1, x2, s ≥ 0. Since there is only one constraint, let s be the basic variable. Thus we have, by [13],

xB = (s), xNB = (x1, x2), with s = 4.

Expressing the basic variable xB and the objective function in terms of the non-basic variables xNB, we have s = 4 − x1 − 2 x2 and f = 2 x1 + 3 x2 − x1². Evaluating the partial derivatives of f with respect to the non-basic variables at xNB = 0, we get

∂f/∂x1 at xNB = 0: 2 − 2 x1 = 2 − 2·0 = 2,
∂f/∂x2 at xNB = 0: 3.

Since both partial derivatives are positive, the current solution can be improved. As ∂f/∂x2 gives the most positive value, x2 will enter the basis. Now, to determine the leaving basic variable, we compute the ratios

min{αh0/αhk, γk0/γkk} = min{4/2, 3/0} = 2,

the second ratio being infinite. Since the minimum occurs for the first ratio, s will leave the basis; this procedure was introduced by [8]. Thus, expressing the new basic variable x2, as well as the objective function f, in terms of the new non-basic variables (x1 and s), we have

x2 = 2 − x1/2 − s/2

and

f = 2 x1 + 3(2 − x1/2 − s/2) − x1² = 6 + x1/2 − (3/2) s − x1².

We again evaluate the partial derivatives of f with respect to the non-basic variables:

∂f/∂x1 at xNB = 0: 1/2 − 2 x1 = 1/2,
∂f/∂s at xNB = 0: −3/2.

Since the partial derivatives are not all negative, the current solution is not optimal; clearly, x1 will enter the basis. For the next criterion, we compute the ratios

min{α20/α21, γ10/γ11} = min{2/(1/2), (1/2)/2} = 1/4.

Since the minimum of these ratios corresponds to γ10/γ11, no basic variable is removed. Thus we introduce a free variable u1 as an additional non-basic variable, defined by

u1 = (1/2) ∂f/∂x1 = (1/2)(1/2 − 2 x1) = 1/4 − x1.

Note that the basis now has two basic variables, x2 and x1 (just entered); that is, we have xNB = (s, u1) and xB = (x1, x2). Expressing the basic variables xB in terms of the non-basic xNB, we have

x1 = 1/4 − u1,
x2 = (1/2)(4 − x1 − s) = 15/8 + (1/2) u1 − (1/2) s.

The objective function, expressed in terms of xNB, is

f = 2(1/4 − u1) + 3(15/8 + (1/2) u1 − (1/2) s) − (1/4 − u1)²
  = 97/16 − (3/2) s − u1².

Now,

∂f/∂s at xNB = 0: −3/2;
∂f/∂u1 at xNB = 0: −2 u1 = 0.

Since ∂f/∂xj ≤ 0 for all xj in xNB and ∂f/∂u1 = 0, the current solution is optimal. Hence the optimal basic feasible solution to the given problem is

x1 = 1/4, x2 = 15/8, Z* = 97/16.

Similarly, we can find that by Wolfe's algorithm the optimal point is at (1/4, 15/8), as introduced by [14]. Thus the optimal solution of the given QP problem is

Max Z = 2 x1 + 3 x2 − x1² = 2·(1/4) + 3·(15/8) − (1/4)² = 97/16 at (x1*, x2*) = (1/4, 15/8).

Therefore the solutions obtained by the graphical solution method, the Kuhn-Tucker conditions, Beale's method and Wolfe's algorithm are the same. As for the computational cost, the graphical solution method using MATLAB programming takes a very short time to determine the plan of action, and the solution obtained by the graphical method is more effective than the other methods we considered.
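As a further cross-check outside the methods compared in the paper, the same QP can be handed to a general-purpose solver. The sketch below assumes MATLAB's Optimization Toolbox function quadprog (not used in the original study) and converts the maximization into the standard minimization form it expects:

    % Maximize 2*x1 + 3*x2 - x1^2  <=>  minimize 0.5*x'*H*x + f'*x
    H = [2 0; 0 0];          % quadratic term (only x1^2 appears)
    f = [-2; -3];            % linear term, sign-flipped for maximization
    A = [1 2]; b = 4;        % x1 + 2*x2 <= 4
    lb = [0; 0];             % x1, x2 >= 0
    [x, fval] = quadprog(H, f, A, b, [], [], lb, []);
    % Expected: x approximately [0.25; 1.875], maximum value -fval approximately 97/16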
6 Conclusion

This paper has presented a direct, fast and accurate way of determining an optimum schedule (such as maximizing profit or minimizing cost). The graphical method gives a physical picture of certain geometrical characteristics of programming problems. By using MATLAB programming, the graphical solution can help us to take a decision, or to determine a particular plan of action from amongst several alternatives, in a very short time. All kinds of programming problems (linear, non-linear, integer and quadratic) can be solved by the graphical method, with the limitation that problems involving more than two variables, i.e. 3-D problems, cannot be solved in this way; non-linear programming problems are mostly considered in only two variables. Therefore, from the above discussion, we can say that the graphical method is the best way to take decisions for modern game theory, dynamic programming, science, economics and management from amongst several alternatives.

References

[1] Greig, D. M.: "Optimization". Longman Group Limited, New York (1980).
[2] Keak, N. K.: "Mathematical Programming with Business Applications". McGraw-Hill Book Company, New York.
[3] G. R. Walsh: "Methods of Optimization". John Wiley and Sons Ltd, 1975, Rev. 1985.
[4] Gupta, P. K., Man Mohan: "Linear Programming and Theory of Games". Sultan Chand & Sons, New Delhi.
[5] M. S. Bazaraa & C. M. Shetty: "Nonlinear Programming: Theory and Algorithms".
[6] G. Hadley: "Nonlinear and Dynamic Programming".
[7] Abadie, J.: "On the Kuhn-Tucker Theory", in Non-Linear Programming, J. Abadie (Ed.), 1967.
[8] Kanti Swarup, Gupta, P. K., Man Mohan: "Operations Research". Sultan Chand & Sons, New Delhi, India (1990).
[9] Venkataraman: "Applied Optimization with MATLAB Programming" (Chapter 4).
[10] Yeol Je Cho, Daya Ram Sahu, Jong Soo Jung: "Approximation of Fixed Points of Asymptotically Pseudocontractive Mappings in Banach Spaces". Southwest Journal of Pure and Applied Mathematics, Vol. 4, Issue 2, July 2003.
[11] Mittal, Sethi: "Linear Programming". Pragati Prakashan, Meerut, India, 1997.
[12] D. Gwion Evans: "On the K-theory of Higher Rank Graph C*-algebras". New York Journal of Mathematics, Vol. 14, January 2008.
[13] Hildreth, C.: "A Quadratic Programming Procedure". Naval Research Logistics Quarterly.
[14] Bimal Chandra Das: "A Comparative Study of the Methods of Solving Non-linear Programming Problems". Daffodil Int. Univ. Jour. of Sc. and Tech., Vol. 4, Issue 1, January 2009.

Bimal Chandra Das completed his M.Sc. in Pure Mathematics and B.Sc. (Hons) in Mathematics at Chittagong University. He is now working as a Lecturer in the Department of Textile Engineering at Daffodil International University. His area of research is Operations Research.
