Important mathematicians in the optimization field
by Wikipedians
Note. This book is based on the Wikipedia article:
“Optimization_(mathematics)”. The supporting
articles are those referenced as major expansions
of selected sections.

Table of Contents
Optimization
George Dantzig
Richard E. Bellman
Ronald A. Howard
Leonid Kantorovich
Narendra Karmarkar
William Karush
Harold W. Kuhn
Joseph Louis Lagrange
John von Neumann
Lev Pontryagin
Naum Z. Shor
Albert W. Tucker
Hoang Tuy
Optimization
In mathematics and computer science,
optimization, or mathematical programming,
refers to choosing the best element from some
set of available alternatives.
In the simplest case, this means solving problems
in which one seeks to minimize or maximize a
real function by systematically choosing the
values of real or integer variables from within an
allowed set. This formulation, using a scalar, real-
valued objective function, is probably the
simplest example; the generalization of
optimization theory and techniques to other
formulations comprises a large area of applied
mathematics. More generally, it means finding
"best available" values of some objective function
given a defined domain, including a variety of
different types of objective functions and
different types of domains.

History
The first optimization technique, which is known
as steepest descent, goes back to Gauss.
Historically, the first term to be introduced was
linear programming, which was invented by
George Dantzig in the 1940s. The term
programming in this context does not refer to
computer programming (although computers are
nowadays used extensively to solve mathematical
problems). Instead, the term comes from the use
of program by the United States military to refer
to proposed training and logistics schedules,
which were the problems that Dantzig was
studying at the time. (Additionally, later on, the
use of the term "programming" was apparently
important for receiving government funding, as it
was associated with high-technology research
areas that were considered important.)
Other important mathematicians in the
optimization field include:
•Richard Bellman
•George Dantzig
•Ronald A. Howard
•Leonid Kantorovich
•Narendra Karmarkar
•William Karush
•Leonid Khachiyan
•Bernard Koopman
•Harold Kuhn
•Joseph Louis Lagrange
•László Lovász
•Arkadii Nemirovskii
•Yurii Nesterov
•John von Neumann
•Boris Polyak
•Lev Pontryagin
•James Renegar
•Kees Roos
•Naum Z. Shor
•Michael J. Todd
•Albert Tucker
•Hoang Tuy

Major sub-fields
•Convex programming studies the case when the
objective function is convex and the constraints,
if any, form a convex set. This can be viewed as a
particular case of nonlinear programming or as a
generalization of linear or convex quadratic
programming.
1. Linear programming (LP), a type of
convex programming, studies the case in
which the objective function f is linear
and the set of constraints is specified
using only linear equalities and
inequalities. Such a set is called a
polyhedron or a polytope if it is
bounded (see the worked sketch after
this list).
2. Second order cone programming (SOCP)
is a convex program, and includes
certain types of quadratic programs.
3. Semidefinite programming (SDP) is a
subfield of convex optimization where
the underlying variables are semidefinite
matrices. It is a generalization of linear
and convex quadratic programming.
4. Conic programming is a general form of
convex programming. LP, SOCP and SDP
can all be viewed as conic programs with
the appropriate type of cone.
5. Geometric programming is a technique
whereby objective and inequality
constraints expressed as posynomials
and equality constraints as monomials
can be transformed into a convex
program.
•Integer programming studies linear programs in
which some or all variables are constrained to
take on integer values. This is not convex, and in
general much more difficult than regular linear
programming.
•Quadratic programming allows the objective
function to have quadratic terms, while the set A
must be specified with linear equalities and
inequalities. For specific forms of the quadratic
term, this is a type of convex programming.
•Nonlinear programming studies the general
case in which the objective function or the
constraints or both contain nonlinear parts. This
may or may not be a convex program. In general,
the convexity of the program affects the difficulty
of solving it more than the linearity does.
•Stochastic programming studies the case in
which some of the constraints or parameters
depend on random variables.
•Robust programming is, like stochastic
programming, an attempt to capture uncertainty
in the data underlying the optimization problem.
This is not done through the use of random
variables, but instead, the problem is solved
taking into account inaccuracies in the input
data.
•Combinatorial optimization is concerned with
problems where the set of feasible solutions is
discrete or can be reduced to a discrete one.
•Infinite-dimensional optimization studies the
case when the set of feasible solutions is a subset
of an infinite-dimensional space, such as a space
of functions.
•Heuristic algorithms
1. Metaheuristics
•Constraint satisfaction studies the case in which
the objective function f is constant (this is used in
artificial intelligence, particularly in automated
reasoning).
1. Constraint programming.
•Disjunctive programming is used where at least
one constraint must be satisfied but not all of
them. It is of particular use in scheduling.
•Trajectory optimization is the specialty of
optimizing trajectories for air and space vehicles.
In a number of subfields, the techniques are
designed primarily for optimization in dynamic
contexts (that is, decision making over time):
•Calculus of variations seeks to optimize an
objective defined over many points in time, by
considering how the objective function changes if
there is a small change in the choice path.
•Optimal control theory is a generalization of the
calculus of variations.
•Dynamic programming studies the case in which
the optimization strategy is based on splitting the
problem into smaller subproblems. The equation
that describes the relationship between these
subproblems is called the Bellman equation.
•Mathematical programming with equilibrium
constraints is where the constraints include
variational inequalities or complementarities.
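
Below is the worked sketch promised in the linear
programming entry above: a minimal example,
assuming SciPy is available. The two-variable
objective and constraints are hypothetical, chosen
only to give the solver a small bounded polytope
to work on.

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
from scipy.optimize import linprog

c = [-3.0, -2.0]                  # linprog minimizes, so negate to maximize
A_ub = [[1.0, 1.0],               # x + y <= 4
        [1.0, 3.0]]               # x + 3y <= 6
b_ub = [4.0, 6.0]
bounds = [(0, None), (0, None)]   # x >= 0, y >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, -res.fun)            # optimal vertex of the polytope and its value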

Multi-objective optimization
Adding more than one objective to an
optimization problem adds complexity. For
example, if you wanted to optimize a structural
design, you would want a design that is both light
and rigid. Because these two objectives conflict, a
trade-off exists. There will be one lightest design,
one stiffest design, and an infinite number of
designs that are some compromise of weight and
stiffness. This set of trade-off designs is known as
a Pareto set. The curve created by plotting weight
against stiffness of the best designs is known as
the Pareto frontier.
A design is judged to be Pareto optimal if it is not
dominated by any other design: if a design is
worse than another design in some respects and
no better in any respect, then it is dominated and
is not Pareto optimal.
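
The dominance test itself is mechanical. Here is
a minimal sketch in plain Python (the numeric
weight/stiffness pairs are hypothetical) that
filters a list of candidate designs down to its
Pareto set, treating both objectives as
quantities to be minimized:

def dominates(a, b):
    # a dominates b if it is at least as good in every objective
    # and strictly better in at least one (both minimized here).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_set(designs):
    # Keep each design that no other design dominates.
    return [d for d in designs if not any(dominates(e, d) for e in designs if e != d)]

# Hypothetical (weight, compliance) pairs; lower is better for both.
designs = [(1.0, 9.0), (2.0, 4.0), (3.0, 2.0), (3.5, 2.5), (5.0, 1.5)]
print(pareto_set(designs))   # (3.5, 2.5) drops out: (3.0, 2.0) dominates it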

Multi-modal optimization
Optimization problems are often multi-modal,
that is, they possess multiple good solutions. They
could all be globally good (same cost function
value) or there could be a mix of globally good
and locally good solutions. Obtaining all (or at
least some of) the multiple solutions is the goal of
a multi-modal optimizer.
Classical optimization techniques, due to their
iterative approach, do not perform satisfactorily
when they are used to obtain multiple solutions,
since it is not guaranteed that different solutions
will be obtained even with different starting
points in multiple runs of the algorithm.
Evolutionary Algorithms are however a very
popular approach to obtain multiple solutions in a
multi-modal optimization task. See Evolutionary
multi-modal optimization.
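
The failure mode described above is easy to
reproduce. In the following minimal sketch,
assuming SciPy is available, a local method
started from two different points on a
hypothetical double-well objective returns two
different local minima, only one of which is
globally good:

from scipy.optimize import minimize

# A double-well objective: two local minima of different depths.
f = lambda x: x[0]**4 - 6*x[0]**2 + x[0]

for x0 in (-2.0, 2.0):
    res = minimize(f, [x0])
    print(x0, res.x, res.fun)   # each start converges to a different minimum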

Dimensionless optimization
Dimensionless optimization (DO) is used in design
problems, and consists of the following steps:
•Rendering the dimensions of the design
dimensionless
•Selecting a local region of the design space to
perform analysis on
•Creating an I-optimal design within the local
design space
•Forming response surfaces based on the
analysis
•Optimizing the design based on the evaluation of
the objective function, using the response surface
models

Concepts and notation

Optimization problems
An optimization problem can be represented in
the following way:
Given: a function f : A → R from some set A
to the real numbers
Sought: an element x0 in A such that f(x0) ≤
f(x) for all x in A ("minimization") or such that
f(x0) ≥ f(x) for all x in A ("maximization").
Such a formulation is called an optimization
problem or a mathematical programming
problem (a term not directly related to computer
programming, but still in use for example in
linear programming - see History above). Many
real-world and theoretical problems may be
modeled in this general framework. Problems
formulated using this technique in the fields of
physics and computer vision may refer to the
technique as energy minimization, speaking of
the value of the function f as representing the
energy of the system being modeled.
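
When A is a finite set, the definition can be
realized directly by enumeration. A minimal
sketch in Python, with a hypothetical f and A:

# Given f : A -> R with A finite, find x0 with f(x0) <= f(x) for all x in A.
A = [-2, -1, 0, 1, 2, 3]
f = lambda x: (x - 1)**2

x0 = min(A, key=f)                    # minimization over the choice set
assert all(f(x0) <= f(x) for x in A)  # the defining property of x0
print(x0, f(x0))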
Typically, A is some subset of the Euclidean space
ℝⁿ, often specified by a set of constraints,
equalities or inequalities that the members of A
have to satisfy. The domain A of f is called the
search space or the choice set, while the
elements of A are called candidate solutions or
feasible solutions.
The function f is called, variously, an objective
function, cost function, energy function, or
energy functional. A feasible solution that
minimizes (or maximizes, if that is the goal) the
objective function is called an optimal solution.
Generally, when the feasible region or the
objective function of the problem does not
present convexity, there may be several local
minima and maxima, where a local minimum x* is
defined as a point for which there exists some δ >
0 so that for all x such that
‖x − x*‖ ≤ δ,
the expression
f(x*) ≤ f(x)
holds; that is to say, on some region around x* all
of the function values are greater than or equal to
the value at that point. Local maxima are defined
similarly.
A large number of algorithms proposed for
solving non-convex problems – including the
majority of commercially available solvers – are
not capable of making a distinction between
locally optimal solutions and globally optimal
solutions, and will treat the former as actual
solutions to
the original problem. The branch of applied
mathematics and numerical analysis that is
concerned with the development of deterministic
algorithms that are capable of guaranteeing
convergence in finite time to the actual optimal
solution of a non-convex problem is called global
optimization.

Notation
Optimization problems are often expressed with
special notation. Here are some examples.
min (x² + 1)
x ∈ ℝ

This asks for the minimum value for the objective
function x² + 1, where x ranges over the real
numbers ℝ. The minimum value in this case is 1,
occurring at x = 0.
max 2x
x ∈ ℝ

This asks for the maximum value for the objective
function 2x, where x ranges over the reals. In this
case, there is no such maximum as the objective
function is unbounded, so the answer is "infinity"
or "undefined".

argmin x∈(−∞, −1] (x² + 1)

This asks for the value (or values) of x in the
interval (−∞, −1] that minimizes (or minimize)
the objective function x² + 1 (the actual minimum
value of that function does not matter). In this
case, the answer is x = −1.
argmax x∈[−5,5], y∈ℝ x·cos(y)

This asks for the (x, y) pair (or pairs) that
maximizes (or maximize) the value of the
objective function x·cos(y), with the added
constraint that x lies in the interval [−5, 5]
(again, the actual maximum value of the
expression does not matter). In this case, the
solutions are the pairs of the form (5, 2kπ) and
(−5, (2k+1)π), where k ranges over all integers.
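
These examples can be checked numerically. A
minimal sketch, assuming SciPy and NumPy are
available; the finite bound −100 and the grid
resolution stand in, arbitrarily, for the
unbounded parts of the domains:

import numpy as np
from scipy.optimize import minimize_scalar

# min over x in R of x^2 + 1: value 1, attained at x = 0.
res = minimize_scalar(lambda x: x**2 + 1)
print(res.fun, res.x)

# argmin over x in (-inf, -1] of x^2 + 1, with -inf truncated to -100.
res = minimize_scalar(lambda x: x**2 + 1, bounds=(-100.0, -1.0), method="bounded")
print(res.x)                     # close to -1

# argmax of x*cos(y) over x in [-5, 5], y in R: a coarse grid scan.
xs = np.linspace(-5.0, 5.0, 201)
ys = np.linspace(-7.0, 7.0, 2001)
X, Y = np.meshgrid(xs, ys)
i = np.unravel_index(np.argmax(X * np.cos(Y)), X.shape)
print(X[i], Y[i])                # near (5, 2kπ) or (−5, (2k+1)π)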

Analytical characterization of optima

Is it possible to satisfy all constraints?
The satisfiability problem, also called the
feasibility problem, is just the problem of
finding any feasible solution at all without regard
to objective value. This can be regarded as the
special case of mathematical optimization where
the objective value is the same for every solution,
and thus any solution is optimal.
Many optimization algorithms need to start from
a feasible point. One way to obtain such a point is
to relax the feasibility conditions using a slack
variable; with enough slack, any starting point is
feasible. Then, minimize that slack variable until
slack is null or negative.
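
A minimal sketch of this slack-variable trick,
assuming SciPy is available (the two constraints
are hypothetical): each row a·x ≤ b is relaxed to
a·x − s ≤ b, and the single slack s is minimized;
the constraints are satisfiable exactly when the
optimal s is null or negative.

from scipy.optimize import linprog

# Feasibility of x1 + x2 <= 4 and x1 + x2 >= 2 with x1, x2 >= 0.
# Variables are [x1, x2, s]; only the slack s is minimized.
c = [0.0, 0.0, 1.0]
A_ub = [[1.0, 1.0, -1.0],     #  x1 + x2 - s <= 4
        [-1.0, -1.0, -1.0]]   # -x1 - x2 - s <= -2
b_ub = [4.0, -2.0]
bounds = [(0, None), (0, None), (None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x[:2], res.fun)     # feasible: the optimal slack comes out <= 0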

Does an optimum exist?
The extreme value theorem of Karl Weierstrass
states conditions under which an optimum exists.

How can an optimum be found?
One of Fermat's theorems states that optima of
unconstrained problems are found at stationary
points, where the first derivative or the gradient
of the objective function is zero (see First
derivative test). More generally, they may be
found at critical points, where the first derivative
or gradient of the objective function is zero or is
undefined, or on the boundary of the choice set.
An equation stating that the first derivative
equals zero at an interior optimum is sometimes
called a 'first-order condition'.
Optima of inequality-constrained problems are
instead found by a generalization of the Lagrange
multiplier method. This method calculates a
system of inequalities called the
'Karush-Kuhn-Tucker conditions' or
'complementary slackness conditions', which may
then be used to calculate the optimum.
While the first derivative test identifies points
that might be optima, it cannot distinguish a
point which is a minimum from one that is a
maximum or one that is neither. When the
objective function is twice differentiable, these
cases can be distinguished by checking the
second derivative or the matrix of second
derivatives (called the Hessian matrix) in
unconstrained problems, or a matrix of second
derivatives of the objective function and the
constraints called the bordered Hessian. The
conditions that distinguish maxima and minima
from other stationary points are sometimes called
'second-order conditions' (see 'Second derivative
test').
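
A minimal sketch of both tests, assuming SymPy
is available, on the hypothetical unconstrained
objective f(x) = x³ − 3x:

import sympy as sp

x = sp.symbols("x")
f = x**3 - 3*x

stationary = sp.solve(sp.diff(f, x), x)   # first-order condition: f'(x) = 0
for p in stationary:
    d2 = sp.diff(f, x, 2).subs(x, p)      # second derivative at the point
    if d2 > 0:
        print(p, "local minimum")
    elif d2 < 0:
        print(p, "local maximum")
    else:
        print(p, "second-order test inconclusive")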

How does the optimum change if the problem changes?
The envelope theorem describes how the value of
an optimal solution changes when an underlying
parameter changes.
The maximum theorem of Claude Berge (1963)
describes the continuity of the optimal solution as
a function of underlying parameters.

Computational optimization techniques
Optimization methods are crudely divided into
two groups:
SVO - Single-variable optimization
MVO - Multi-variable optimization
For twice-differentiable functions, unconstrained
problems can be solved by finding the points
where the gradient of the objective function is
zero (that is, the stationary points) and using the
Hessian matrix to classify the type of each point.
If the Hessian is positive definite, the point is a
local minimum, if negative definite, a local
maximum, and if indefinite it is some kind of
saddle point.
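
A minimal sketch of the classification, assuming
NumPy is available; the Hessian below is that of
the hypothetical function f(x, y) = x² − y² at its
stationary point, the origin:

import numpy as np

H = np.array([[2.0, 0.0],          # Hessian of x**2 - y**2 at the origin
              [0.0, -2.0]])
eig = np.linalg.eigvalsh(H)        # eigvalsh is for symmetric matrices

if np.all(eig > 0):
    print("positive definite: local minimum")
elif np.all(eig < 0):
    print("negative definite: local maximum")
else:
    print("indefinite: saddle point")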
The existence of derivatives is not always
assumed and many methods were devised for
specific situations. The basic classes of methods,
based on smoothness of the objective function,
are:
•Combinatorial methods
•Derivative-free methods
•First-order methods
•Second-order methods

Actual methods falling somewhere among the
categories above include:
•Bundle methods
•Conjugate gradient method
•Ellipsoid method
•Frank–Wolfe method
•Gradient descent aka steepest descent or
steepest ascent
•Interior point methods
•Line search - a technique for one dimensional
optimization, usually used as a subroutine for
other, more general techniques.
•Nelder-Mead method aka the Amoeba method
•Newton's method
•Quasi-Newton methods
•Simplex method
•Subgradient method - similar to the gradient
method, for use when gradients do not exist

Should the objective function be convex over the
region of interest, then any local minimum will
also be a global minimum. There exist robust, fast
numerical techniques for optimizing twice
differentiable convex functions.
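
To illustrate the first of those claims, here is
a minimal gradient descent sketch in Python with
NumPy, on a hypothetical convex quadratic; with a
small enough fixed step it converges to the
unique, hence global, minimum:

import numpy as np

# Minimize the convex quadratic f(x) = 0.5 x^T Q x - b^T x.
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: Q @ x - b          # gradient of f

x = np.zeros(2)
for _ in range(200):
    x = x - 0.1 * grad(x)           # fixed step, small enough for this Q

print(x, np.linalg.solve(Q, b))     # iterate vs the exact minimizer Q^{-1} b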
Constrained problems can often be transformed
into unconstrained problems with the help of
Lagrange multipliers.
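
A minimal sketch of that transformation,
assuming SymPy is available: the hypothetical
problem of maximizing x + y on the unit circle
x² + y² = 1 becomes an unconstrained search for
stationary points of the Lagrangian.

import sympy as sp

x, y, lam = sp.symbols("x y lambda")

f = x + y                        # objective
g = x**2 + y**2 - 1              # equality constraint g = 0
L = f - lam * g                  # the Lagrangian

eqs = [sp.diff(L, x), sp.diff(L, y), g]
print(sp.solve(eqs, [x, y, lam], dict=True))   # candidates at (±1/√2, ±1/√2)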
Here are a few other popular methods:
•Ant colony optimization
•Bat algorithm
•Beam search
•Bees algorithm
•Cuckoo search
•Differential evolution
•Dynamic relaxation
•Evolution strategy
•Filled function method
•Firefly algorithm
•Genetic algorithms
•Harmony search
•Hill climbing
•IOSO
•Particle swarm optimization
•Quantum annealing
•Simulated annealing
•Stochastic tunneling
•Tabu search

Applications
Problems in rigid body dynamics (in particular
articulated rigid body dynamics) often require
mathematical programming techniques, since you
can view rigid body dynamics as attempting to
solve an ordinary differential equation on a
constraint manifold; the constraints are various
nonlinear geometric constraints such as "these
two points must always coincide", "this surface
must not penetrate any other", or "this point must
always lie somewhere on this curve". Also, the
problem of computing contact forces can be
addressed by solving a linear complementarity problem,
which can also be viewed as a QP (quadratic
programming) problem.
Many design problems can also be expressed as
optimization programs. This application is called
design optimization. One recent and growing
subset of this field is multidisciplinary design
optimization, which, while useful in many
problems, has in particular been applied to
aerospace engineering problems.
Economics also relies heavily on mathematical
programming. An often studied problem in
microeconomics, the utility maximization
problem, and its dual problem, the expenditure
minimization problem, are economic optimization
problems. Consumers and firms are assumed to
maximize their utility/profit. Also, agents are
most frequently assumed to be risk-averse
thereby wishing to minimize whatever risk they
might be exposed to. Asset prices are also
explained using optimization though the
underlying theory is more complicated than
simple utility or profit optimization. Trade theory
also uses optimization to explain trade patterns
between nations.
Another field that uses optimization techniques
extensively is operations research.

References
•Mordecai Avriel (2003). Nonlinear
Programming: Analysis and Methods. Dover
Publishing. ISBN 0-486-43227-0.
•Stephen Boyd and Lieven Vandenberghe (2004).
Convex Optimization, Cambridge University
Press. ISBN 0-521-83378-7.
•Elster K.-H. (1993), Modern Mathematical
Methods of Optimization, Vch Pub. ISBN 3-05-
501452-9.
•Jorge Nocedal and Stephen J. Wright (2006).
Numerical Optimization, Springer. ISBN 0-387-
30303-0.
•Panos Y. Papalambros and Douglass J. Wilde
(2000). Principles of Optimal Design : Modeling
and Computation, Cambridge University Press.
ISBN 0-521-62727-3.
•Rowe W.B., O'Donoghue J.P. and Cameron A.
(1970). Optimization of externally pressurized
bearing for minimum power and low temperature
rise. Tribology (London).
•Yang X.-S. (2008), Introduction to Mathematical
Optimization: From Linear Programming to
Metaheuristics, Cambridge Int. Science
Publishing. ISBN 1-904602-82-7.

George Dantzig
George Bernard Dantzig
Born November 8, 1914
Portland, Oregon
Died May 13, 2005 (aged 90)
Stanford, California
Citizenship American
Fields Mathematician
Operations research
Computer science
Economics
Statistics
Institutions U.S. Air Force Office of Statistical Control
RAND Corporation
University of California, Berkeley
Stanford University
Alma mater Bachelor's degrees - University of
Maryland
Master's degree - University of Michigan
Doctor of Philosophy - University of
California, Berkeley
Doctoral advisor Jerzy Neyman
Doctoral students Ilan Adler
Kurt Anstreicher
John Birge
Richard W. Cottle
B. Curtis Eaves
Robert Fourer
Saul Gass
Alfredo Iusem
Ellis Johnson
Hiroshi Konno
Irvin Lustig
Thomas Magnanti
S. Thomas McCormick, V
David Morton
Mukund Thapa
Craig Tovey
Alan Tucker
Richard Van Slyke
Roger J-B Wets
Robert Wittrock
Yinyu Ye
Known for Linear programming
Simplex algorithm
Dantzig-Wolfe decomposition principle
Generalized linear programming
Generalized upper bounding
Max-flow min-cut theorem of networks
Quadratic programming
Complementary pivot algorithms
Linear complementarity problem
Stochastic programming
Influences Wassily Leontief
John von Neumann
Marshall K. Wood
Influenced Kenneth J. Arrow
Robert Dorfman
Leonid Hurwicz
Tjalling C. Koopmans
Thomas L. Saaty
Paul Samuelson
Philip Wolfe
Notable awards John von Neumann Theory Prize (1974)
National Medal of Science (1975)
George Bernard Dantzig (November 8, 1914 –
May 13, 2005) was an American mathematician,
and the Professor Emeritus of Transportation
Sciences and Professor of Operations Research
and of Computer Science at Stanford.
Dantzig is known for his development of the
simplex algorithm, an algorithm for solving linear
programming problems, and his work with linear
programming, some years after it was initially
invented by Soviet economist and mathematician
Leonid Kantorovich. Dantzig is also the real-life
source of the story, often retold as an urban
legend, of a student solving unproven statistical
problems after mistaking them for homework (see
"Biography" below).

Biography
George Dantzig was born in Portland, Oregon,
and his middle name "Bernard" was chosen after
the writer George Bernard Shaw. His father,
Tobias Dantzig, was a Baltic German
mathematician and his mother the French
linguist Anja Ourisson. They had met during their
study at the Sorbonne University in Paris, where
Tobias studied with Henri Poincaré. They
immigrated to the United States and settled in
Portland, Oregon. Early in the 1920s the family
moved via Baltimore to Washington, D.C. Anja
Dantzig became a linguist at the Library of
Congress, Dantzig senior became a math tutor at
the University of Maryland, College Park, and
George attended Powell Junior High School and
Central High School. In high school he was
already fascinated by geometry, and this interest
was further nurtured by his father, who
challenged him with complex geometry problems.
George Dantzig earned bachelor's degrees in
mathematics and physics from the University of
Maryland in 1936 and his master's degree in
mathematics from the University of Michigan in
1938. After a two-year period at the Bureau of
Labor Statistics, he enrolled in the doctoral
program in mathematics at the University of
California, Berkeley studying statistics under
mathematician Jerzy Neyman. In 1939, he arrived
late to his statistics class. Seeing two problems
written on the board, he assumed they were a
homework assignment and copied them down,
solved them and handed them in a few days later.
Unbeknownst to him, they were examples of
(formerly) unproven statistical theorems.
Dantzig's story became the stuff of legend, and
inspired the opening scene of the 1997 movie
Good Will Hunting.
With the outbreak of World War II, George took a
leave of absence from the doctoral program at
Berkeley to join the U.S. Air Force Office of
Statistical Control. In 1946, he returned to
Berkeley to complete the requirements of his
program and received his Ph.D. that year.
In 1952 Dantzig joined the mathematics division
of the RAND Corporation. By 1960 he became a
professor in the Department of Industrial
Engineering at UC Berkeley, where he founded
and directed the Operations Research Center. In
1966 he joined the Stanford faculty as Professor
of Operations Research and of Computer Science.
A year later, the Program in Operations Research
became a full-fledged department. In 1973 he
founded the Systems Optimization Laboratory
(SOL) there. On a sabbatical leave that year, he
headed the Methodology Group at the
International Institute for Applied Systems
Analysis (IIASA) in Laxenburg, Austria. Later he
became the C. A. Criley Professor of
Transportation Sciences at Stanford, and kept
going, well beyond his mandatory retirement in
1985.

He was a member of the National Academy of
Sciences, the National Academy of Engineering,
and the American Academy of Arts and Sciences.
He was the recipient of many honors, including
the first John von Neumann Theory Prize in 1974,
the National Medal of Science in 1975, and an
honorary doctorate from the University of
Maryland, College Park in 1976. The
Mathematical Programming Society honored
Dantzig by creating the George B. Dantzig Prize,
bestowed every three years since 1982 on one or
two people who have made a significant impact in
the field of mathematical programming.
Dantzig died on May 13, 2005, in his home in
Stanford, California, of complications from
diabetes and cardiovascular disease. He was 90
years old.

Work
Dantzig is "generally regarded as one of the three
founders of linear programming, along with John
von Neumann and Leonid Kantorovich",
according to Freund (1994), "through his
research in mathematical theory, computation,
economic analysis, and applications to industrial
problems, he has contributed more than any
other researcher to the remarkable development
of linear programming".
Dantzig's seminal work allows the airline
industry, for example, to schedule crews and
make fleet assignments. Based on his work, tools
were developed "that shipping companies use to
determine how many planes they need and where
their delivery trucks should be deployed. The oil
industry long has used linear programming in
refinery planning, as it determines how much of
its raw product should become different grades of
gasoline and how much should be used for
petroleum-based byproducts. It's used in
manufacturing, revenue management,
telecommunications, advertising, architecture,
circuit design and countless other areas".

Mathematical statistics
An event in Dantzig's life became the origin of a
famous story in 1939 while he was a graduate
student at UC Berkeley. Near the beginning of a
class for which Dantzig was late, professor Jerzy
Neyman wrote two examples of famously
unsolved statistics problems on the blackboard.
When Dantzig arrived, he assumed that the two
problems were a homework assignment and
wrote them down. According to Dantzig, the
problems "seemed to be a little harder than
usual", but a few days later he handed in
completed solutions for the two problems, still
believing that they were an assignment that was
overdue.
Six weeks later, Dantzig received a visit from an
excited professor Neyman, eager to tell him that
the homework problems he had solved were two
of the most famous unsolved problems in
statistics. He had prepared one of Dantzig's
solutions for publication in a mathematical
journal. Years later another researcher, Abraham
Wald, was preparing to publish a paper which
arrived at a conclusion for the second problem,
and included Dantzig as its co-author when he
learned of the earlier solution.
This story began to spread, and was used as a
motivational lesson demonstrating the power of
positive thinking. Over time Dantzig's name was
removed and facts were altered, but the basic
story persisted in the form of an urban legend,
and as an introductory scene in the movie Good
Will Hunting.

Linear programming
In 1946, as mathematical adviser to the U.S. Air
Force Comptroller, he was challenged by his
Pentagon colleagues to see what he could do to
mechanize the planning process, "to more rapidly
compute a time-staged deployment, training and
logistical supply program." In those pre-
electronic computer days, mechanization meant
using analog devices or punched-card machines.
"Program" at that time was a military term that
referred not to the instruction used by a
computer to solve problems, which were then
called "codes," but rather to plans or proposed
schedules for training, logistical supply, or
deployment of combat units. The somewhat
confusing name "linear programming," Dantzig
later explained, is based on this military
definition of "program."
In 1963, Dantzig’s Linear Programming and
Extensions was published by Princeton University
Press. Rich in insight and coverage of significant
topics, the book quickly became “the bible” of
linear programming.

Richard E. Bellman
Richard E. Bellman
Born August 26, 1920
New York City, New York
Died March 19, 1984 (aged 63)
Fields Mathematics and Control theory
Alma mater Princeton University
University of Wisconsin–Madison
Brooklyn College
Known for Dynamic programming
Richard Ernest Bellman (August 26, 1920 –
March 19, 1984) was an applied mathematician,
celebrated for his invention of dynamic
programming in 1953, and important
contributions in other fields of mathematics.

Biography
Bellman was born in 1920 in New York City,
where his father John James Bellman ran a small
grocery store on Bergen Street near Prospect
Park in Brooklyn. Bellman completed his studies
at Abraham Lincoln High School in 1937, and
studied mathematics at Brooklyn College where
he received a BA in 1941. He later earned an MA
from the University of Wisconsin–Madison.

During World War II he worked for a Theoretical
Physics Division group in Los Alamos. In 1946 he
received his Ph.D. at Princeton under the
supervision of Solomon Lefschetz.
He was a professor at the University of Southern
California, a Fellow in the American Academy of
Arts and Sciences (1975), and a member of the
National Academy of Engineering (1977).
He was awarded the IEEE Medal of Honor in
1979, "for contributions to decision processes
and control system theory, particularly the
creation and application of dynamic
programming". His key work is the Bellman
equation.

Work

Bellman equation
A Bellman equation, also known as a dynamic
programming equation, is a necessary condition
for optimality associated with the mathematical
optimization method known as dynamic
programming. Almost any problem which can be
solved using optimal control theory can also be
solved by analyzing the appropriate Bellman
equation. The Bellman equation was first applied
to engineering control theory and to other topics
in applied mathematics, and subsequently
became an important tool in economic theory.
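
A minimal sketch of how the equation is used in
computation, on an entirely hypothetical
two-state, two-action Markov decision process:
repeatedly applying the Bellman update
V(s) = max_a [ r(s,a) + γ Σ P(s'|s,a) V(s') ]
converges for discount factors γ < 1.

# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
P = {
    0: {0: [(0, 0.9), (1, 0.1)], 1: [(1, 1.0)]},
    1: {0: [(0, 1.0)], 1: [(1, 0.5), (0, 0.5)]},
}
R = {0: {0: 0.0, 1: 1.0}, 1: {0: 2.0, 1: 0.0}}
gamma = 0.9

V = {0: 0.0, 1: 0.0}
for _ in range(500):   # repeated Bellman updates (value iteration)
    V = {s: max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                for a in P[s])
         for s in P}
print(V)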

Hamilton-Jacobi-Bellman
The Hamilton-Jacobi-Bellman (HJB) equation is
a partial differential equation which is
central to optimal control theory. The solution of
the HJB equation is the 'value function', which
gives the optimal cost-to-go for a given dynamical
system with an associated cost function. Classical
variational problems, for example, the
brachistochrone problem can be solved using this
method as well.
The equation is a result of the theory of dynamic
programming which was pioneered in the 1950s
by Richard Bellman and coworkers. The
corresponding discrete-time equation is usually
referred to as the Bellman equation. In
continuous time, the result can be seen as an
extension of earlier work in classical physics on
the Hamilton-Jacobi equation by William Rowan
Hamilton and Carl Gustav Jacob Jacobi.

Curse of dimensionality
The "Curse of dimensionality", is a term coined by
Bellman to describe the problem caused by the
exponential increase in volume associated with
adding extra dimensions to a (mathematical)
space. One implication of the curse of
dimensionality is that some methods for
numerical solution of the Bellman equation
require vastly more computer time when there
are more state variables in the value function.
For example, 100 evenly spaced sample points
suffice to sample a unit interval with no more
than 0.01 distance between points; an equivalent
sampling of a 10-dimensional unit hypercube with
a lattice with a spacing of 0.01 between adjacent
points would require 10²⁰ sample points: thus, in
some sense, the 10-dimensional hypercube can be
said to be a factor of 10¹⁸ "larger" than the unit
interval. (Adapted from an example by R. E.
Bellman, see below.)

Bellman–Ford algorithm
The Bellman-Ford algorithm, sometimes referred
to as the Label Correcting Algorithm, computes
single-source shortest paths in a weighted
digraph (where some of the edge weights may be
negative). Dijkstra's algorithm solves the
same problem with a lower running time, but
requires edge weights to be non-negative. Thus,
Bellman–Ford is usually used only when there are
negative edge weights.
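
A minimal sketch of the algorithm in Python (the
three-edge digraph is hypothetical): every edge
is relaxed |V| − 1 times, and one extra pass
detects negative cycles.

def bellman_ford(num_nodes, edges, source):
    # Single-source shortest paths; negative edge weights allowed.
    dist = [float("inf")] * num_nodes
    dist[source] = 0.0
    for _ in range(num_nodes - 1):        # relax every edge |V| - 1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                 # extra pass: negative cycle check
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist

edges = [(0, 1, 4.0), (0, 2, 5.0), (1, 2, -2.0)]
print(bellman_ford(3, edges, 0))          # [0.0, 4.0, 2.0]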

Publications
Over the course of his career he published 619
papers and 39 books. During the last 11 years of
his life he published over 100 papers despite
suffering from crippling complications of brain
surgery (Dreyfus, 2003). A selection:
•1959. Asymptotic Behavior of Solutions of
Differential Equations
•1961. An Introduction to Inequalities
•1961. Adaptive Control Processes: A Guided
Tour
•1962. Applied Dynamic Programming
•1967. Introduction to the Mathematical Theory
of Control Processes
•1970. Algorithms, Graphs and Computers
•1972. Dynamic Programming and Partial
Differential Equations
•1982. Mathematical Aspects of Scheduling and
Applications
•1983. Mathematical Methods in Medicine
•1984. Partial Differential Equations
•1984. Eye of the Hurricane: An Autobiography,
World Scientific Publishing.
•1985. Artificial Intelligence
•1995. Modern Elementary Differential
Equations
•1997. Introduction to Matrix Analysis
•2003. Dynamic Programming
•2003. Perturbation Techniques in Mathematics,
Engineering and Physics
•2003. Stability Theory of Differential Equations

Ronald A. Howard
Ronald A. Howard (born August 27, 1934) has
been a professor at Stanford University since
1965. In 1964 he defined the profession of
decision analysis, and since then has been
developing the field as professor in the
Department of Engineering-Economic Systems
(now the Department of Management Science
and Engineering) in the School of Engineering at
Stanford.
Professor Howard directs teaching and research
in decision analysis at Stanford, and is the
Director of the Decisions and Ethics Center,
which examines the efficacy and ethics of social
arrangements. He was a founding Director and
Chairman of Strategic Decisions Group. Current
research interests are improving the quality of
decisions, life-and-death decision making, and the
creation of a coercion-free society.
In 1986 he received the Operations Research
Society of America's Frank P. Ramsey Medal "for
distinguished contributions in decision analysis".
In 1998 he received from the Institute for
Operations Research and the Management
Sciences (INFORMS) the first award for the
teaching of operations research/management
science practice. In 1999 INFORMS invited him
to give the Omega Rho Distinguished Plenary
Lecture at the Cincinnati National Meeting. In
the same year he was elected to the National
Academy of Engineering, and received the Dean's
Award for Academic Excellence.
Professor Howard earned his Sc.D. in Electrical
Engineering from MIT in 1958 and was an
associate professor there until he joined Stanford.
He pioneered the policy iteration method for
solving Markov decision problems, and this
method is sometimes called the 'Howard policy-
improvement algorithm' in his honor (Sargent,
1987, p. 47). He was also instrumental in the
development of the Influence diagram for the
graphical analysis of decision situations.

Publications
•1960. Dynamic Programming and Markov
Processes, The M.I.T. Press.
•1971. Dynamic Probabilistic Systems (two
volumes), John F. Wiley & Sons, Inc., New York
City.
•1977. Readings in Decision Analysis. With Jim E.
Matheson (editors). SRI International, Menlo
Park, California.
•1984. Readings on the Principles and
Applications of Decision Analysis. 2 volumes.
With Jim E. Matheson (editors). Menlo Park CA:
Strategic Decisions Group.
•2008. Ethics for the Real World. With Clinton D.
Korver. Harvard Business Press.

Leonid Kantorovich
Leonid Vitaliyevich Kantorovich
Born 19 January 1912
Saint Petersburg, Russian Empire
Died 7 April 1986 (aged 74)
Moscow, Russia, USSR
Nationality Soviet Russia
Fields Mathematics
Alma mater Leningrad State University
Known for Linear programming
Kantorovich theorem
normed vector lattice (Kantorovich space)
Kantorovich metric
Kantorovich inequality
approximation theory
iterative methods
functional analysis
numerical analysis
scientific computing
Notable awards Sveriges Riksbank Prize in Economic
Sciences in Memory of Alfred Nobel (1975)
Leonid Vitaliyevich Kantorovich (Russian:
Леонид Витальевич Канторович) (19 January
1912, Saint Petersburg – 7 April 1986, Moscow)
was a Soviet/Russian mathematician and
economist, known for his theory and development
of techniques for the optimal allocation of
resources. He was the winner of the Nobel Prize
in Economics in 1975 and the only winner of this
prize from the USSR.
Kantorovich worked for the Soviet government.
He was given the task of optimizing production in
the plywood industry. He came up (1939) with the
mathematical technique now known as linear
programming, some years before it was
reinvented and much advanced by George
Dantzig. He authored several books including
The Mathematical Method of Production Planning
and Organization and The Best Uses of Economic
Resources. For his work, Kantorovich was
awarded the Stalin Prize (1949).
During the Siege of Leningrad, Kantorovich was
in charge of safety on the Road of Life. He
calculated the optimal distance between cars on
ice, depending on thickness of ice and
temperature of the air. In December 1941 and
January 1942, Kantorovich personally walked
between cars driving on the ice of Lake Ladoga,
on the Road of Life, to ensure the cars did not
sink. However, many cars carrying food for
survivors of the siege were destroyed by German
air raids.
For his feat and courage Kantorovich was
awarded the Order of the Patriotic War, and was
decorated with the medal For Defense of
Leningrad.

The Sveriges Riksbank Prize in Economic
Sciences in Memory of Alfred Nobel, which he
shared with Tjalling Koopmans, was given "for
their contributions to the theory of optimal
allocation of resources."

Mathematics
In mathematical analysis, Kantorovich had
important results in functional analysis,
approximation theory, and operator theory.
In particular, Kantorovich formulated
fundamental results in the theory of normed
vector lattices, which are called "K-spaces" in his
honor.
Kantorovich showed that functional analysis
could be used in the analysis of iterative methods,
obtaining the Kantorovich inequalities on the
convergence rate of the gradient method and of
Newton's method.
Kantorovich considered infinite-dimensional
optimization problems, such as the Kantorovich-
Monge problem of mass transfer. His analysis
proposed the Kantorovich metric, which is used
in probability theory, in the theory of the weak
convergence of probability measures.

References
•V. Makarov (1987). "Kantorovich, Leonid
Vitaliyevich" The New Palgrave: A Dictionary of
Economics, v. 3, pp. 14-15.
•L.V. Kantorovich (1939). "Mathematical
Methods of Organizing and Planning Production"
Management Science, Vol. 6, No. 4 (Jul., 1960),
pp. 366-422.
•Klaus Hagendorf (2008). Spreadsheet
presenting all examples of Kantorovich, 1939
with the OpenOffice.org Calc Solver as well as
the lp_solver.

Narendra Karmarkar
Narendra K. Karmarkar (born 1957) is an
Indian mathematician, renowned for developing
Karmarkar's algorithm. He is listed as an ISI
highly cited researcher.

Biography
Narendra Karmarkar was born in Gwalior to a
Marathi family. Karmarkar received his B.Tech
from IIT Bombay in 1978. Later, he received his
M.S. at the California Institute of Technology,
and his Ph.D. in Computer Science at the
University of California, Berkeley.
He published his famous result in 1984 while he
was working for Bell Laboratories in New Jersey.
Karmarkar was a professor at the Tata Institute
of Fundamental Research in Bombay.
He is currently working on a new architecture for
supercomputing. Some of the ideas were presented
at the Fab5 conference organised by the MIT
Center for Bits and Atoms.
Karmarkar received a number of awards for his
algorithm, among them:
•Paris Kanellakis Award, 2000 given by The
Association for Computing Machinery.
•Distinguished Alumnus Award, Computer
Science and Engineering, University of
California, Berkeley (1993)
•Ramanujan Prize for Computing, given by Asian
Institute Informatics (1989)
•Fulkerson Prize in Discrete Mathematics given
jointly by the American Mathematical Society &
Mathematical Programming Society (1988)
•Fellow of Bell Laboratories (1987- )
•Texas Instruments Founders’ Prize (1986)
•Marconi International Young Scientist Award
(1985)
•Frederick W. Lanchester Prize of the Operations
Research Society of America for the Best
Published Contributions to Operations Research
(1984)
•National Science Talent Award in Mathematics,
India (1972)

Work

Karmarkar's algorithm
Karmarkar's algorithm solves linear
programming problems in polynomial time. These
problems are represented by "n" variables and
"m" constraints. The previous method of solving
these problems, the simplex method, represents
the problem as a many-sided solid with many
vertices, where the solution is approached by
traversing from vertex to vertex. Karmarkar's
novel method approaches the solution by cutting
through the interior of the above solid in its
traversal.
Consequently, complex optimization problems are
solved much faster using the Karmarkar
algorithm. A practical example of this efficiency
is the solution to a complex problem in
communications network optimization where the
solution time was reduced from weeks to days.
His algorithm thus enables faster business and
policy decisions. Karmarkar's algorithm has
stimulated the development of several other
interior point methods, some of which are used in
current codes for solving linear programs.

Paris Kanellakis Award
The Association for Computing Machinery
awarded him the prestigious Paris Kanellakis
Award in 2000 for his work. The award citation
reads:
For his theoretical work in devising an
Interior Point method for linear
programming that provably runs in
polynomial time, and for his implementation
work suggesting that Interior Point methods
could be effective for linear programming in
practice as well as theory. Together, these
contributions inspired a renaissance in the
theory and practice of linear programming,
leading to orders of magnitude improvement
in the effectiveness of widely-used
commercial optimization codes.

Research on computer architecture

Galois geometry
After working on the Interior Point Method,
Karmarkar worked on a new architecture for
supercomputing, based on concepts from
finite geometry, especially projective geometry
over finite fields.

Current investigations
Currently, he is synthesizing these concepts with
some new ideas he calls sculpturing free space (a
non-linear analogue of what has popularly been
described as folding the perfect corner). This
approach allows him to extend this work to the
physical design of machines. He is now
publishing updates on his recent work, including
an extended abstract. This new paradigm was
presented at IVNC, Poland on 16 July 2008, and
at MIT on 25 July 2008. Some of the recent work
was presented at the Fab5 conference organised
by the MIT Center for Bits and Atoms.

William Karush
William Karush
Born March 1, 1917
Died February 22, 1997 (aged 79)
Known for Contribution to Karush-Kuhn-Tucker conditions
William Karush (March 1, 1917 – February 22,
1997) was a professor emeritus of California State
University at Northridge and a mathematician
best known for his contribution to the Karush-
Kuhn-Tucker conditions. He was the first to
publish the necessary conditions for the
inequality-constrained problem in his master's
thesis, although he became renowned only after a
seminal conference paper by Harold W. Kuhn and
Albert W. Tucker.

Selected works
•Webster's New World Dictionary of
Mathematics, MacMillan Reference Books,
Revised edition (April 1989), ISBN 978-
0131926677
•On the Maximum Transform and Semigroups of
Transformations (1998), Richard Bellman,
William Karush.
•The crescent dictionary of mathematics, general
editor (1962) William Karush, Oscar Tarcov
•Isoperimetric problems & index theorems.
(1942), William Karush, Thesis (Ph. D.) University
of Chicago, Department of Mathematics.
•Minima of functions of several variables with
inequalities as side conditions (1939), William
Karush, Thesis (M.S.), University of Chicago,
Department of Mathematics.

Harold W. Kuhn
Harold W. Kuhn
Residence United States
Nationality American
Fields Mathematics
Institutions Princeton University
Known for Hungarian method
Karush–Kuhn–Tucker conditions
Notable awards John von Neumann Theory Prize
Harold William Kuhn (born 1925) is an
American mathematician who studied game
theory. He won the 1980 John von Neumann
Theory Prize along with David Gale and Albert W.
Tucker. A professor emeritus of mathematics at
Princeton University, he is known for the Karush-
Kuhn-Tucker conditions, for developing Kuhn
poker, and for his description of the Hungarian
method for the assignment problem.
He is known for his association with John Forbes
Nash, as a fellow graduate student, a lifelong
friend and colleague, and a key figure in getting
Nash the attention of the Nobel Prize committee
that led to Nash's 1994 Nobel Prize in Economics.
Kuhn and Nash both had long associations and
collaborations with Albert W. Tucker, who was
Nash's dissertation advisor. Kuhn co-edited The
Essential John Nash, and is credited as the
mathematics consultant in the 2001 movie
adaptation of Nash's life, A Beautiful Mind.
His son is historian Clifford Kuhn, noted for his
scholarship on the American South and for
collecting oral history. Another son, Nick Kuhn, is
a professor of mathematics at the University of
Virginia.

Bibliography
•Kuhn, H. W. (1955), "The Hungarian method for
the assignment problem", Naval Research
Logistics Quarterly, 2:83–87.
1. Republished as: Kuhn, H. W. (2005),
"The Hungarian method for the
assignment problem", Naval Research
Logistics, 52(1):7–21. DOI:
10.1002/nav.20053.
•Guillermo Owen (2004). "IFORS' Operational
Research Hall of Fame: Harold W. Kuhn".
International Transactions in Operational
Research 11 (6), 715–718. doi:10.1111/j.1475-
3995.2004.00486.
•Kuhn, H.W. "Classics in Game Theory."
(Princeton University Press, 1997). ISBN 978-0-
691-01192-9.
•Kuhn, H.W. "Linear Inequalities and Related
Systems (AM-38)" (Princeton University Press,
1956). ISBN 978-0-691-07999-8.
•Kuhn, H.W. "Contributions to the Theory of
Games, I (AM-24)." (Princeton University Press,
1950). ISBN 978-0-691-07934-9.
•Kuhn, H.W. "Contributions to the Theory of
Games, II (AM-28)." (Princeton University Press,
1953). ISBN 978-0-691-07935-6.
•Kuhn, H.W. "Lectures on the Theory of Games."
(Princeton University Press, 2003). ISBN 978-0-
691-02772-2.
•Kuhn, H.W. and Nasar, Sylvia, editors. "The
Essential John Nash." (Princeton University
Press, 2001). ISBN 978-0-691-09527-1.

Joseph Louis Lagrange
Joseph-Louis Lagrange
Joseph-Louis (Giuseppe Lodovico),
conte de Lagrange
Born 25 January 1736
Turin, Piedmont
Died 10 April 1813 (aged 77)
Paris, France
Residence Piedmont
France
Prussia
Nationality Italian
French
Fields Mathematics
Mathematical physics
Institutions École Polytechnique
Doctoral advisor Leonhard Euler
Doctoral students Joseph Fourier
Giovanni Plana
Simeon Poisson
Known for Analytical mechanics
Celestial mechanics
Mathematical analysis
Number theory

Notes
Note: he did not have a doctoral advisor, but
academic genealogy authorities link his
intellectual heritage to Leonhard Euler, who
played the equivalent role.

Joseph-Louis Lagrange (25 January 1736,
Turin, Piedmont – 10 April 1813), born Giuseppe
Lodovico (Luigi) Lagrangia, was an Italian-
born mathematician and astronomer, who lived
part of his life in Prussia and part in France,
making significant contributions to all fields of
analysis, to number theory, and to classical and
celestial mechanics. On the recommendation of
Euler and D'Alembert, in 1766 Lagrange
succeeded Euler as the director of mathematics
at the Prussian Academy of Sciences in Berlin,
where he stayed for over twenty years, producing
a large body of work and winning several prizes
of the French Academy of Sciences. Lagrange's
treatise on analytical mechanics (Mécanique
Analytique, 4. ed., 2 vols. Paris: Gauthier-Villars
et fils, 1888-89), written in Berlin and first
published in 1788, offered the most
comprehensive treatment of classical mechanics
since Newton and formed a basis for the
development of mathematical physics in the
nineteenth century.
Born Giuseppe Lodovico Lagrangia in Turin of
Italian parents, Lagrange had French ancestors
on his father's side. In 1787, at age 51, he moved
from Berlin to France and became a member of
the French Academy, and he remained in France
until the end of his life. Therefore, Lagrange is
alternatively considered a French and an Italian
scientist. Lagrange survived the French
Revolution and became the first professor of
analysis at the École Polytechnique upon its
opening in 1794. Napoleon named Lagrange to
the Legion of Honour and made him a Count of
the Empire in 1808. He is buried in the Panthéon.

Scientific contribution
Lagrange was one of the creators of the calculus
of variations, deriving the Euler–Lagrange
equations for extrema of functionals. He also
extended the method to take into account
possible constraints, arriving at the method of
Lagrange multipliers. Lagrange invented the
method of solving differential equations known as
variation of parameters, applied differential
calculus to the theory of probabilities and
attained notable work on the solution of
equations. He proved that every natural number
is a sum of four squares. His treatise Theorie des
fonctions analytiques laid some of the
foundations of group theory, anticipating Galois.

In calculus, Lagrange developed a novel
approach to interpolation and Taylor series. He
studied the three-body problem for the Earth,
Sun, and Moon (1764) and the movement of
Jupiter’s satellites (1766), and in 1772 found the
special-case solutions to this problem that are
now known as Lagrangian points. But above all
he impressed on mechanics, having transformed
Newtonian mechanics into a branch of analysis,
Lagrangian mechanics as it is now called, and
exhibited the so-called mechanical "principles" as
simple results of the variational calculus.

Biography

Early years
Lagrange was born, of French and Italian descent
(a paternal great grandfather was a French army
officer who then moved to Turin), as Giuseppe
Lodovico Lagrangia in Turin. His father, who
had charge of the Kingdom of Sardinia's military
chest, was of good social position and wealthy,
but before his son grew up he had lost most of his
property in speculations, and young Lagrange
had to rely on his own abilities for his position.
He was educated at the college of Turin, but it
was not until he was seventeen that he showed
any taste for mathematics – his interest in the
subject being first excited by a paper by Edmond
Halley which he came across by accident. Alone
and unaided he threw himself into mathematical
studies; at the end of a year's incessant toil he
was already an accomplished mathematician, and
was made a lecturer in the artillery school.

Variational calculus
Lagrange is one of the founders of the calculus of
variations. Starting in 1754, he worked on the
problem of the tautochrone, discovering a method of
maximizing and minimizing functionals in a way
similar to finding extrema of functions. Lagrange
wrote several letters to Leonhard Euler between
1754 and 1756 describing his results. He outlined
his "δ-algorithm", leading to the Euler–Lagrange
equations of variational calculus and considerably
simplifying Euler's earlier analysis. Lagrange also
applied his ideas to problems of classical
mechanics, generalizing the results of Euler and
Maupertuis.
Euler was very impressed with Lagrange's
results. It has sometimes been stated that "with
characteristic courtesy he withheld a paper he
had previously written, which covered some of
the same ground, in order that the young Italian
might have time to complete his work, and claim
the undisputed invention of the new calculus",
however, this chivalric view has come to be
disputed. Lagrange published his method in two
memoirs of the Turin Society in 1762 and 1773.

Miscellanea Taurinensia
In 1758, with the aid of his pupils, Lagrange
established a society, which was subsequently
incorporated as the Turin Academy of Sciences,
and most of his early writings are to be found in
the five volumes of its transactions, usually
known as the Miscellanea Taurinensia. Many of
these are elaborate papers. The first volume
contains a paper on the theory of the propagation
of sound; in this he indicates a mistake made by
Newton, obtains the general differential equation
for the motion, and integrates it for motion in a
straight line. This volume also contains the
complete solution of the problem of a string
vibrating transversely; in this paper he points out
a lack of generality in the solutions previously
given by Brook Taylor, D'Alembert, and Euler,
and arrives at the conclusion that the form of the
curve at any time t is given by the equation
y=a sin m xsin n t  . The article concludes with a
masterly discussion of echoes, beats, and
compound sounds. Other articles in this volume
are on recurring series, probabilities, and the


calculus of variations.
The second volume contains a long paper
embodying the results of several papers in the
first volume on the theory and notation of the
calculus of variations; and he illustrates its use by
deducing the principle of least action, and by
solutions of various problems in dynamics.
The third volume includes the solution of several
dynamical problems by means of the calculus of
variations; some papers on the integral calculus;
a solution of Fermat's problem mentioned above:
given an integer n which is not a perfect square,
to find a number x such that $nx^2 + 1$ is a perfect
square; and the general differential equations of
motion for three bodies moving under their
mutual attractions.
The next work he produced was in 1764 on the
libration of the Moon, and an explanation as to
why the same face was always turned to the
earth, a problem which he treated by the aid of
virtual work. His solution is especially interesting
as containing the germ of the idea of generalized
equations of motion, equations which he first
formally proved in 1780.

Berlin Academy
Already in 1756 Euler, with support from
Maupertuis, made an attempt to bring Lagrange
to the Berlin Academy. Later, D'Alembert
interceded on Lagrange's behalf with Frederick
of Prussia and wrote to Lagrange asking him to
leave Turin for a considerably more prestigious
position in Berlin. Lagrange turned down both
offers, responding in 1765 that
It seems to me that Berlin would not be at all
suitable for me while M. Euler is there.
In 1766 Euler left Berlin for Saint Petersburg,
and Frederick wrote to Lagrange expressing the
wish of "the greatest king in Europe" to have "the
greatest mathematician in Europe" resident at his
court. Lagrange was finally persuaded and he
spent the next twenty years in Prussia, where he
produced not only the long series of papers
published in the Berlin and Turin transactions,
but his monumental work, the Mécanique
analytique. His residence at Berlin commenced
with an unfortunate mistake. Finding most of his
colleagues married, and assured by their wives
that it was the only way to be happy, he married;
his wife soon died, but the union was not a happy
one.
Lagrange was a favourite of the king, who used
frequently to discourse to him on the advantages
of perfect regularity of life. The lesson went
home, and thenceforth Lagrange studied his mind
and body as though they were machines, and
found by experiment the exact amount of work
which he was able to do without breaking down.
Every night he set himself a definite task for the
next day, and on completing any branch of a
subject he wrote a short analysis to see what
points in the demonstrations or in the subject-
matter were capable of improvement. He always
thought out the subject of his papers before he
began to compose them, and usually wrote them
straight off without a single erasure or
correction.

France
In 1786, Frederick died, and Lagrange, who had
found the climate of Berlin trying, gladly
accepted the offer of Louis XVI to move to Paris.
He received similar invitations from Spain and
Naples. In France he was received with every
mark of distinction and special apartments in the
Louvre were prepared for his reception, and he
became a member of the French Academy of
Sciences, which later became part of the National
Institute. At the beginning of his residence in
Paris he was seized with an attack of
melancholy, and even the printed copy of his
Mécanique on which he had worked for a quarter
of a century lay for more than two years
unopened on his desk. Curiosity as to the results
of the French revolution first stirred him out of
his lethargy, a curiosity which soon turned to
alarm as the revolution developed.
It was about the same time, 1792, that the
unaccountable sadness of his life and his timidity
moved the compassion of a young girl who
insisted on marrying him, and proved a devoted
wife to whom he became warmly attached.
Although the decree of October 1793 that
ordered all foreigners to leave France specifically
exempted him by name, he was preparing to
escape when he was offered the presidency of the
commission for the reform of weights and
measures. The choice of the units finally selected
was largely due to him, and it was mainly owing
to his influence that the decimal subdivision was
accepted by the commission of 1799. In 1795,
Lagrange was one of the founding members of
the Bureau des Longitudes.
Though Lagrange had determined to escape from
France while there was yet time, he was never in
any danger; and the different revolutionary
governments (and at a later time, Napoleon)
loaded him with honours and distinctions. A
striking testimony to the respect in which he was
held was shown in 1796 when the French
commissary in Italy was ordered to attend in full
state on Lagrange's father, and tender the
congratulations of the republic on the
achievements of his son, who "had done honour
to all mankind by his genius, and whom it was the
special glory of Piedmont to have produced." It
may be added that Napoleon, when he attained
power, warmly encouraged scientific studies in
France, and was a liberal benefactor of them.

École normale
In 1795, Lagrange was appointed to a
mathematical chair at the newly established
École normale, which enjoyed only a brief
existence of four months. His lectures here were
quite elementary, and contain nothing of any
special importance, but they were published
because the professors had to "pledge themselves
to the representatives of the people and to each
other neither to read nor to repeat from
memory," and the discourses were ordered to be
taken down in shorthand in order to enable the
deputies to see how the professors acquitted
themselves.

École Polytechnique
Lagrange was appointed professor of the École
Polytechnique in 1794; and his lectures there are
described by mathematicians who had the good
fortune to be able to attend them, as almost
perfect both in form and matter. Beginning with
the merest elements, he led his hearers on until,
almost unknown to themselves, they were
themselves extending the bounds of the subject:
above all he impressed on his pupils the
advantage of always using general methods
expressed in a symmetrical notation.
On the other hand, Fourier, who attended his
lectures in 1795, wrote:
His voice is very feeble, at least in that he
does not become heated; he has a very
pronounced Italian accent and pronounces
the s like z … The students, of whom the
majority are incapable of appreciating him,
give him little welcome, but the professors
make amends for it.

Late years
In 1810, Lagrange commenced a thorough
revision of the Mécanique analytique, but he was
able to complete only about two-thirds of it
before his death in 1813. He was buried that
same year in the Panthéon in Paris. The French
inscription on his tomb there reads:
JOSEPH LOUIS LAGRANGE. Senator. Count of
the Empire. Grand Officer of the Legion of
Honour. Grand Cross of the Imperial Order of
Réunion. Member of the Institute and the
Bureau of Longitude. Born in Turin on 25
January 1736. Died in Paris on 10 April 1813.

Work in Berlin
Lagrange was extremely active scientifically
during the twenty years he spent in Berlin. Not only
did he produce his splendid Mécanique
analytique, but he contributed between one and
two hundred papers to the Academy of Turin, the
Berlin Academy, and the French Academy. Some
of these are really treatises, and all without
exception are of a high order of excellence.
Except for a short time when he was ill he
produced on average about one paper a month.
Of these, note the following as amongst the most
important.
First, his contributions to the fourth and fifth
volumes, 1766–1773, of the Miscellanea
Taurinensia; of which the most important was the
one in 1771, in which he discussed how
numerous astronomical observations should be
combined so as to give the most probable result.
And later, his contributions to the first two
volumes, 1784–1785, of the transactions of the
Turin Academy; to the first of which he
contributed a paper on the pressure exerted by
fluids in motion, and to the second an article on
integration by infinite series, and the kind of
problems for which it is suitable.
Most of the papers sent to Paris were on
astronomical questions, and among these one
ought to particularly mention his paper on the
Jovian system in 1766, his essay on the problem
of three bodies in 1772, his work on the secular
equation of the Moon in 1773, and his treatise on
cometary perturbations in 1778. These were all
written on subjects proposed by the French
Academy of Sciences, and in each case the prize was
awarded to him.

Lagrangian mechanics
Between 1772 and 1788, Lagrange re-formulated
Classical/Newtonian mechanics to simplify
formulas and ease calculations. These mechanics
are called Lagrangian mechanics.

Algebra
The greater number of his papers during this
time were, however, contributed to the Prussian
Academy of Sciences. Several of them deal with
questions in algebra.
•His discussion of representations of integers by
quadratic forms (1769) and by more general
algebraic forms (1770).
•His tract on the Theory of Elimination, 1770.
•Lagrange's theorem that the order of a
subgroup H of a group G must divide the order of
G.
•His papers of 1770 and 1771 on the general
process for solving an algebraic equation of any
degree via the Lagrange resolvents. This method
fails to give a general formula for solutions of an
equation of degree five and higher, because the
auxiliary equation involved has higher degree
than the original one. The significance of this
method is that it exhibits the already known
formulas for solving equations of second, third,
and fourth degrees as manifestations of a single
principle, and was foundational in Galois theory.
The complete solution of a binomial equation of
any degree is also treated in these papers.
•In 1773, Lagrange considered a functional
determinant of order 3, a special case of a
Jacobian. He also proved the expression for the
volume of a tetrahedron with one of the vertices
at the origin as one sixth of the absolute value
of the determinant formed by the coordinates of
the other three vertices.

Number theory
Several of his early papers also deal with
questions of number theory.
•Lagrange (1766–1769) was the first to prove
that Pell's equation $x^2 - ny^2 = 1$ has a
nontrivial solution in the integers for any
non-square natural number n (a computational
sketch follows this list).
•He proved the theorem, stated by Bachet
without justification, that every positive integer is
the sum of four squares, 1770.
•He proved Wilson's theorem that if n is a prime,
then (n − 1)! + 1 is always a multiple of n, 1771.
•His papers of 1773, 1775, and 1777 gave
demonstrations of several results enunciated by
Fermat, and not previously proved.
•His Recherches d'Arithmétique of 1775
developed a general theory of binary quadratic
forms to handle the general problem of when an
integer is representable by the form
$ax^2 + by^2 + cxy$.

Other mathematical work
There are also numerous articles on various
points of analytical geometry. In two of them,
written rather later, in 1792 and 1793, he
reduced the equations of the quadrics (or
conicoids) to their canonical forms.
During the years from 1772 to 1785, he
contributed a long series of papers which created
the science of partial differential equations. A
large part of these results were collected in the
second edition of Euler's integral calculus which
was published in 1794.
He made contributions to the theory of continued
fractions.

Astronomy
Lastly, there are numerous papers on problems in
astronomy. Of these the most important are the
following:
•Attempting to solve the three-body problem
resulting in the discovery of Lagrangian points,
1772
•On the attraction of ellipsoids, 1773: this is
founded on Maclaurin's work.
•On the secular equation of the Moon, 1773; also
noticeable for the earliest introduction of the idea
of the potential. The potential of a body at any
point is the sum of the mass of every element of
the body when divided by its distance from the
point. Lagrange showed that if the potential of a
body at an external point were known, the
attraction in any direction could be at once found.
The theory of the potential was elaborated in a
paper sent to Berlin in 1777.
•On the motion of the nodes of a planet's orbit,
1774.
•On the stability of the planetary orbits, 1776.
•Two papers in which the method of determining
the orbit of a comet from three observations is
completely worked out, 1778 and 1783: this has
not indeed proved practically available, but his
system of calculating the perturbations by means
of mechanical quadratures has formed the basis
of most subsequent researches on the subject.
•His determination of the secular and periodic
variations of the elements of the planets, 1781-
1784: the upper limits assigned for these agree
closely with those obtained later by Le Verrier,
and Lagrange proceeded as far as the knowledge
then possessed of the masses of the planets
permitted.
•Three papers on the method of interpolation,
1783, 1792 and 1793: the part of finite
differences dealing therewith is now in the same
stage as that in which Lagrange left it (the
interpolation formula named after him is recalled
below).
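Given n + 1 points with distinct abscissae, the
interpolation polynomial named after Lagrange is

$L(x) = \sum_{i=0}^{n} y_i \prod_{j \ne i} \frac{x - x_j}{x_i - x_j},$

the unique polynomial of degree at most n that
passes through all of the points.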

Mécanique analytique
Over and above these various papers he
composed his great treatise, the Mécanique
analytique. In this he lays down the law of virtual
work, and from that one fundamental principle,
by the aid of the calculus of variations, deduces
the whole of mechanics, both of solids and fluids.
The object of the book is to show that the subject
is implicitly included in a single principle, and to
give general formulae from which any particular
result can be obtained. The method of
generalized co-ordinates by which he obtained
this result is perhaps the most brilliant result of
his analysis. Instead of following the motion of
each individual part of a material system, as
D'Alembert and Euler had done, he showed that,
if we determine its configuration by a sufficient
number of variables whose number is the same as
that of the degrees of freedom possessed by the
system, then the kinetic and potential energies of
the system can be expressed in terms of those
variables, and the differential equations of motion
thence deduced by simple differentiation. For
example, in dynamics of a rigid system he
replaces the consideration of the particular
problem by the general equation, which is now
usually written in the form
$\frac{d}{dt}\frac{\partial T}{\partial \dot{\theta}} - \frac{\partial T}{\partial \theta} + \frac{\partial V}{\partial \theta} = 0,$
where T represents the kinetic energy and V
represents the potential energy of the system. He
then presented what we now know as the method
of Lagrange multipliers — though this is not the
first time that method was published — as a
means to solve this equation. Amongst other
minor theorems given here, one may mention the
proposition that the kinetic energy imparted by
the given impulses to a material system under
given constraints is a maximum, and the principle
of least action. All the analysis is so elegant that
Sir William Rowan Hamilton said the work could
only be described as a scientific poem. It may be
interesting to note that Lagrange remarked that
mechanics was really a branch of pure
mathematics analogous to a geometry of four
dimensions, namely, the time and the three
coordinates of the point in space; and it is said
that he prided himself that from the beginning to
the end of the work there was not a single
diagram. At first no printer could be found who
would publish the book; but Legendre at last
persuaded a Paris firm to undertake it, and it was
issued under his supervision in 1788.

Work in France

Differential calculus and calculus of variations
Lagrange's lectures on the differential calculus at
École Polytechnique form the basis of his treatise
Théorie des fonctions analytiques, which was
published in 1797. This work is the extension of
an idea contained in a paper he had sent to the
Berlin papers in 1772, and its object is to
substitute for the differential calculus a group of
theorems based on the development of algebraic
functions in series. A somewhat similar method
had been previously used by John Landen in the
Residual Analysis, published in London in 1758.
Lagrange believed that he could thus get rid of
those difficulties, connected with the use of
infinitely large and infinitely small quantities, to
which philosophers objected in the usual
treatment of the differential calculus. The book is
divided into three parts: of these, the first treats
of the general theory of functions, and gives an
algebraic proof of Taylor's theorem, the validity
of which is, however, open to question; the
second deals with applications to geometry; and
the third with applications to mechanics. Another
treatise on the same lines was his Leçons sur le
calcul des fonctions, issued in 1804, with the
second edition in 1806. It is in this book that
Lagrange formulated his celebrated method of
Lagrange multipliers, in the context of problems
of variational calculus with integral constraints.
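In its now-familiar finite-dimensional form, the
method finds the extrema of f(x) subject to a
constraint g(x) = 0 by solving the stationarity
system

$\nabla f(x) = \lambda \nabla g(x), \qquad g(x) = 0,$

where the auxiliary unknown λ is the Lagrange
multiplier; Lagrange's own setting, as noted
above, was the analogous condition for
variational problems with integral constraints.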
These works devoted to differential calculus and
calculus of variations may be considered as the
starting point for the researches of Cauchy,
Jacobi, and Weierstrass.

Infinitesimals
At a later period Lagrange reverted to the use of
infinitesimals in preference to founding the
differential calculus on the study of algebraic
forms; and in the preface to the second edition of
the Mécanique, which was issued in 1811, he
justifies the employment of infinitesimals, and
concludes by saying that:
When we have grasped the spirit of the
infinitesimal method, and have verified the
exactness of its results either by the
geometrical method of prime and ultimate
ratios, or by the analytical method of derived
functions, we may employ infinitely small
quantities as a sure and valuable means of
shortening and simplifying our proofs.

Continued fractions
His Résolution des équations numériques,
published in 1798, was also the fruit of his
lectures at École Polytechnique. There he gives
the method of approximating to the real roots of
an equation by means of continued fractions, and
enunciates several other theorems. In a note at
the end he shows how Fermat's little theorem
that
$a^{p-1} - 1 \equiv 0 \pmod{p}$
where p is a prime and a is prime to p, may be
applied to give the complete algebraic solution of
any binomial equation. He also here explains how
the equation whose roots are the squares of the
differences of the roots of the original equation
may be used so as to give considerable
information as to the position and nature of those
roots.
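The congruence is easy to check by machine; a
minimal Python verification (illustrative only) is:

# Fermat's little theorem: a^(p-1) ≡ 1 (mod p)
# for every prime p and every a not divisible by p.
for p in (5, 7, 11, 13):
    assert all(pow(a, p - 1, p) == 1 for a in range(1, p))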
The theory of the planetary motions had formed
the subject of some of the most remarkable of
Lagrange's Berlin papers. In 1806 the subject
was reopened by Poisson, who, in a paper read
before the French Academy, showed that
Lagrange's formulae led to certain limits for the
stability of the orbits. Lagrange, who was
present, now discussed the whole subject afresh,
and in a letter communicated to the Academy in
1808 explained how, by the variation of arbitrary
constants, the periodical and secular inequalities
of any system of mutually interacting bodies
could be determined.

Prizes and distinctions
Euler proposed Lagrange for election to the
Berlin Academy and he was elected on 2
September 1756. He was elected a Fellow of the
Royal Society of Edinburgh in 1790, a Fellow of
the Royal Society and a foreign member of the
Royal Swedish Academy of Sciences in 1806. In
1808, Napoleon made Lagrange a Grand Officer
of the Legion of Honour and a Comte of the
Empire. He was awarded the Grand Croix of the
Ordre Impérial de la Réunion in 1813, a week
before his death in Paris.
Lagrange was awarded the 1764 prize of the
French Academy of Sciences for his memoir on
the libration of the Moon. In 1766 the Academy
proposed a problem of the motion of the satellites
of Jupiter, and the prize again was awarded to
Lagrange. He also won the prizes of 1772, 1774,
and 1778.
Lagrange is one of the 72 prominent French
scientists who were commemorated on plaques at
the first stage of the Eiffel Tower when it first
opened. Rue Lagrange in the 5th Arrondissement
in Paris is named after him. In Turin, the street
where the house of his birth still stands is named
via Lagrange. The lunar crater Lagrange also
bears his name.

Apocrypha
He was of medium height and slightly formed,
with pale blue eyes and a colorless complexion.
He was nervous and timid, he detested
controversy, and, to avoid it, willingly allowed
others to take credit for what he had done
himself.
Due to thorough preparation, he was usually able
to write out his papers complete without a single
crossing-out or correction.

References
•Columbia Encyclopedia, 6th ed., 2005,
"Lagrange, Joseph Louis."
•W. W. Rouse Ball, 1908, "Joseph Louis
Lagrange (1736–1813)," A Short Account of the
History of Mathematics, 4th ed.
•Chanson, Hubert, 2007, "Velocity Potential in
Real Fluid Flows: Joseph-Louis Lagrange's
Contribution," La Houille Blanche 5: 127-31.
•Fraser, Craig G., 2005, "Théorie des fonctions
analytiques" in Grattan-Guinness, I., ed.,
Landmark Writings in Western Mathematics.
Elsevier: 258-76.
•Lagrange, Joseph-Louis. (1811). Mécanique
analytique. Courcier (reissued by Cambridge
University Press, 2009; ISBN 9781108001748)
•Lagrange, J.L. (1781) "Mémoire sur la Théorie
du Mouvement des Fluides" (Memoir on the
Theory of Fluid Motion) in Serret, J.A., ed., 1867.
Oeuvres de Lagrange, Vol. 4. Paris: Gauthier-
Villars: 695-748.
•Pulte, Helmut, 2005, "Méchanique Analytique"
in Grattan-Guinness, I., ed., Landmark Writings
in Western Mathematics. Elsevier: 208-24.


John von Neumann


John von Neumann
John von Neumann in the 1940s
Born December 28, 1903
Budapest, Austria-Hungary
Died February 8, 1957 (aged 53)
Washington, D.C., United States
Residence United States
Nationality Hungarian and American
Fields Mathematics and computer science
Institutions University of Berlin
Princeton University
Institute for Advanced Study
Site Y, Los Alamos
Alma mater University of Pázmány Péter
ETH Zürich
Doctoral advisor Leopold Fejér
Doctoral students Donald B. Gillies
Israel Halperin
John P. Mayberry
Other notable students Paul Halmos
Clifford Hugh Dowker
Known for von Neumann Equation
Game theory
von Neumann algebras
von Neumann architecture
Von Neumann bicommutant theorem
Von Neumann cellular automaton
Von Neumann universal constructor
Von Neumann entropy
Von Neumann regular ring
Von Neumann–Bernays–Gödel set theory
Von Neumann universe
Von Neumann conjecture
Von Neumann's inequality
Stone–von Neumann theorem
Von Neumann stability analysis
Minimax theorem
Von Neumann extractor
Von Neumann ergodic theorem
Direct integral
Notable awards Enrico Fermi Award (1956)
John von Neumann (December 28, 1903 –
February 8, 1957) was a Hungarian-American
mathematician who made major contributions to
a vast range of fields, including set theory,
functional analysis, quantum mechanics, ergodic
theory, continuous geometry, economics and
game theory, computer science, numerical
analysis, hydrodynamics (of explosions), and
statistics, as well as many other mathematical
fields. He is generally regarded as one of the
greatest mathematicians in modern history. The
mathematician Jean Dieudonné called von
Neumann "the last of the great mathematicians",
while Peter Lax described him as possessing the
most "fearsome technical prowess" and
"scintillating intellect" of the century. Even in
Budapest, in the time that produced geniuses like
von Kármán (b. 1881), Szilárd (b. 1898), Wigner
(b. 1902), and Edward Teller (b. 1908), his
brilliance stood out.
Von Neumann was a pioneer of the application of
operator theory to quantum mechanics, in the
development of functional analysis, a principal
member of the Manhattan Project and the
Institute for Advanced Study in Princeton (as one
of the few originally appointed), and a key figure
in the development of game theory and the
concepts of cellular automata and the universal
constructor. Along with Teller and Stanislaw
Ulam, von Neumann worked out key steps in the
nuclear physics involved in thermonuclear
reactions and the hydrogen bomb.

Biography
The eldest of three brothers, von Neumann was
born Neumann János Lajos (in Hungarian the
family name comes first) on December 28, 1903
in Budapest, Austro-Hungarian Empire, to a
wealthy Jewish family. His father was Neumann
Miksa (Max Neumann), a lawyer who worked in a
bank. His mother was Kann Margit (Margaret
Kann). Von Neumann's ancestors had originally
immigrated to Hungary from Russia.
János, nicknamed "Jancsi" (Johnny), was a child
prodigy who showed an aptitude for languages,
memorization, and mathematics. By the age of
six, he could exchange jokes in Classical Greek,
memorize telephone directories, and displayed
prodigious mental calculation abilities. He
entered the German-speaking Lutheran Fasori
Gimnázium in Budapest in 1911. Although he
attended school at the grade level appropriate to
his age, his father hired private tutors to give him
advanced instruction in those areas in which he
had displayed an aptitude. Recognized as a
mathematical prodigy, at the age of 15 he began
to study under Gábor Szegő. On their first
meeting, Szegő was so impressed with the boy's
mathematical talent that he was brought to tears.
In 1913, his father was rewarded with
ennoblement for his service to the Austro-
Hungarian empire. (After becoming semi-
autonomous in 1867, Hungary had found itself in
need of a vibrant mercantile class.) The Neumann
family thus acquired the title margittai, and
Neumann János became margittai Neumann
János (John Neumann of Margitta), which he later
changed to the German Johann von Neumann. He
received his Ph.D. in mathematics (with minors in
experimental physics and chemistry) from
Pázmány Péter University in Budapest at the age
of 22. He simultaneously earned his diploma in
chemical engineering from the ETH Zurich in
Switzerland at the behest of his father, who
wanted his son to invest his time in a more
financially viable endeavour than mathematics.
Between 1926 and 1930, he taught as a
Privatdozent at the University of Berlin, the
youngest in its history. By age 25, he had
published ten major papers, and by 30, nearly 36.
Max von Neumann died in 1929. In 1930, von
Neumann, his mother, and his brothers
emigrated to the United States. He anglicized his
first name to John, keeping the Austrian-
aristocratic surname of von Neumann, whereas
his brothers adopted surnames Vonneumann and
Neumann (using the de Neumann form briefly
when first in the U.S.).
Von Neumann was invited to Princeton
University, New Jersey in 1930, and,
subsequently, was one of the first four people
selected for the faculty of the Institute for
Advanced Study (two of the others being Albert
Einstein and Kurt Gödel), where he remained a
mathematics professor from its formation in 1933
until his death.
In 1937, von Neumann became a naturalized
citizen of the US. In 1938, von Neumann was
awarded the Bôcher Memorial Prize for his work
in analysis.
Von Neumann married twice. He married
Mariette Kövesi in 1930, just prior to emigrating
to the United States. They had one daughter (von
Neumann's only child), Marina, who is now a
distinguished professor of international trade and
public policy at the University of Michigan. The
couple divorced in 1937. In 1938, von Neumann
married Klari Dan, whom he had met during his
last trips back to Budapest prior to the outbreak
of World War II. The von Neumanns were very
active socially within the Princeton academic
community, and it is from this aspect of his life
that many of the anecdotes which surround von
Neumann's legend originate.
In 1955, von Neumann was diagnosed with what
was either bone or pancreatic cancer. While he
was in the hospital he wrote a short monograph,
The Computer and the Brain, observing that the
basic computing hardware of the brain indicated
a different methodology than the one used in
developing the computer. Von Neumann died a
year and a half later, in great pain. While at
Walter Reed Hospital in Washington, D.C., he
invited a Roman Catholic priest, Father Anselm
Strittmatter, O.S.B., to visit him for consultation
(a move which shocked some of von Neumann's
friends). The priest then administered to him the
last Sacraments. He died under military security
lest he reveal military secrets while heavily
medicated. John von Neumann was buried at
Princeton Cemetery in Princeton, Mercer County,
New Jersey.
Von Neumann wrote 150 published papers in his
life; 60 in pure mathematics, 20 in physics, and
60 in applied mathematics. His last work,
published in book form as The Computer and the
Brain, gives an indication of the direction of his
interests at the time of his death.

Logic and set theory
The axiomatization of mathematics, on the model
of Euclid's Elements, had reached new levels of
rigor and breadth at the end of the 19th century,
particularly in arithmetic (thanks to Richard
Dedekind and Giuseppe Peano) and geometry
(thanks to David Hilbert). At the beginning of the
twentieth century, set theory, the new branch of
mathematics discovered by Georg Cantor, and
thrown into crisis by Bertrand Russell with the
discovery of his famous paradox (on the set of all
sets which do not belong to themselves), had not
yet been formalized.
The problem of an adequate axiomatization of set
theory was resolved implicitly about twenty years
later (by Ernst Zermelo and Abraham Fraenkel)
by way of a series of principles which allowed for
the construction of all sets used in the actual
practice of mathematics, but which did not
explicitly exclude the possibility of the existence
of sets which belong to themselves. In his
doctoral thesis of 1925, von Neumann
demonstrated how it was possible to exclude this
possibility in two complementary ways: the axiom
of foundation and the notion of class.
The axiom of foundation established that every
set can be constructed from the bottom up in an
ordered succession of steps by way of the
principles of Zermelo and Fraenkel, in such a
manner that if one set belongs to another then
the first must necessarily come before the second
in the succession (hence excluding the possibility
of a set belonging to itself.) To demonstrate that
the addition of this new axiom to the others did
not produce contradictions, von Neumann
introduced a method of demonstration (called the
method of inner models) which later became an
essential instrument in set theory.
The second approach to the problem took as its
base the notion of class, and defines a set as a
class which belongs to other classes, while a
proper class is defined as a class which does not
belong to other classes. Under the
Zermelo/Fraenkel approach, the axioms impede
the construction of a set of all sets which do not
belong to themselves. In contrast, under the von
Neumann approach, the class of all sets which do
not belong to themselves can be constructed, but
it is a proper class and not a set.
With this contribution of von Neumann, the
axiomatic system of the theory of sets became
fully satisfactory, and the next question was
whether or not it was also definitive, and not
subject to improvement. A strongly negative
answer arrived in September 1930 at the historic
mathematical Congress of Königsberg, in which
Kurt Gödel announced his first theorem of
incompleteness: the usual axiomatic systems are
incomplete, in the sense that they cannot prove
every truth which is expressible in their
language. This result was sufficiently innovative
as to confound the majority of mathematicians of
the time. But von Neumann, who had participated
at the Congress, confirmed his fame as an
instantaneous thinker, and in less than a month
was able to communicate to Gödel himself an
interesting consequence of his theorem: namely
that the usual axiomatic systems are unable to
demonstrate their own consistency. It is precisely
this consequence which has attracted the most
attention, even if Gödel originally considered it
only a curiosity, and had derived it independently
anyway (it is for this reason that the result is
called Gödel's second theorem, without mention
of von Neumann.)

Quantum mechanics
At the International Congress of Mathematicians
of 1900, David Hilbert presented his famous list
of twenty-three problems considered central for
the development of the mathematics of the new
century. The sixth of these was the
axiomatization of physical theories. Among the
new physical theories of the century the only one
which had yet to receive such a treatment by the
end of the 1930s was quantum mechanics.
Quantum mechanics found itself in a condition of
foundational crisis similar to that of set theory at
the beginning of the century, facing problems of
both philosophical and technical natures. On the
one hand, its apparent non-determinism had not
been reduced to an explanation of a deterministic
form. On the other, there still existed two
independent but equivalent heuristic
formulations, the so-called matrix mechanical
formulation due to Werner Heisenberg and the
wave mechanical formulation due to Erwin
Schrödinger, but there was not yet a single,
unified satisfactory theoretical formulation.
After having completed the axiomatization of set
theory, von Neumann began to confront the
axiomatization of quantum mechanics. He
immediately realized, in 1926, that a quantum
system could be considered as a point in a so-
called Hilbert space, analogous to the 6N-
dimensional phase space of classical mechanics
(N being the number of particles, with 3
generalized coordinates and 3 canonical momenta
for each), but with infinitely many dimensions
(corresponding to the infinitely many possible
states of the system): the traditional physical
quantities (e.g., position and momentum) could
therefore be represented as particular linear
operators operating in these spaces. The physics
of quantum mechanics was thereby reduced to
the mathematics of the linear Hermitian
operators on Hilbert spaces.
For example, the famous uncertainty principle of
Heisenberg, according to which the
determination of the position of a particle
prevents the determination of its momentum and
vice versa, is translated into the non-
commutativity of the two corresponding
operators. This new mathematical formulation
included as special cases the formulations of both
Heisenberg and Schrödinger, and culminated in
the 1932 classic The Mathematical Foundations
of Quantum Mechanics. However, physicists
generally ended up preferring another approach
to that of von Neumann (which was considered
elegant and satisfactory by mathematicians). This
approach was formulated in 1930 by Paul Dirac.
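For position and momentum the non-commutativity
mentioned above takes the concrete form of the
canonical commutation relation

$\hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar,$

from which the quantitative form of Heisenberg's
principle, $\Delta x\,\Delta p \ge \hbar/2$, follows.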
Von Neumann's abstract treatment permitted him
also to confront the foundational issue of
determinism vs. non-determinism and in the book
he demonstrated a theorem according to which
quantum mechanics could not possibly be derived
by statistical approximation from a deterministic
theory of the type used in classical mechanics.
This demonstration contained a conceptual error,
but it helped to inaugurate a line of research
which, through the work of John Stuart Bell in
1964 on Bell's Theorem and the experiments of
Alain Aspect in 1982, demonstrated that quantum
physics requires a notion of reality substantially
different from that of classical physics.

Economics and game theory
Von Neumann's first significant contribution to
economics was the minimax theorem of 1928.
This theorem establishes that in certain zero sum
games with perfect information (i.e., in which
players know at each time all moves that have
taken place so far), there exists a strategy for
each player which allows both players to
minimize their maximum losses (hence the name
minimax). When examining every possible
strategy, a player must consider all the possible
responses of the player's adversary and the
maximum loss. The player then plays out the
strategy which will result in the minimization of
this maximum loss. Such a strategy, which
minimizes the maximum loss, is called optimal for
both players just in case their minimaxes are
equal (in absolute value) and contrary (in sign). If
the common value is zero, the game becomes
pointless.
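For a finite two-player zero-sum game given by a
payoff matrix, the two security levels described
above are easy to compute directly. The following
Python sketch (names are illustrative) handles
only the pure-strategy case; von Neumann's
theorem guarantees that, once mixed strategies
are allowed, the two levels always coincide:

def pure_security_levels(payoff):
    """payoff[i][j]: amount the column player pays the row player.

    Returns the row player's maximin and the column player's
    minimax over pure strategies; when the two coincide the
    game has a saddle point and that common value is its value.
    """
    row_maximin = max(min(row) for row in payoff)        # row player's guarantee
    col_minimax = min(max(col) for col in zip(*payoff))  # column player's cap
    return row_maximin, col_minimax

# A game with a saddle point: both security levels equal 1.
print(pure_security_levels([[1, 2], [0, -1]]))  # (1, 1)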
Von Neumann eventually improved and extended
the minimax theorem to include games involving
imperfect information and games with more than
two players. This work culminated in the 1944
classic Theory of Games and Economic Behavior
(written with Oskar Morgenstern). The public
interest in this work was such that The New York
Times ran a front page story, something which
only Einstein had previously elicited.
Von Neumann's second important contribution in
this area was the solution, in 1937, of a problem
first described by Léon Walras in 1874, the
existence of situations of equilibrium in
mathematical models of market development
based on supply and demand. He first recognized
that such a model should be expressed through
inequalities and not equations, and then he
found a solution to Walras' problem by applying a
fixed-point theorem derived from the work of L.
E. J. Brouwer. The lasting importance of the work
on general equilibria and the methodology of
fixed point theorems is underscored by the
awarding of Nobel prizes in 1972 to Kenneth
Arrow, in 1983 to Gérard Debreu, and in 1994 to
John Nash who had improved von Neumann's
theory in his Princeton Ph.D. thesis.
Von Neumann was also the inventor of the
method of proof, used in game theory, known as
backward induction (which he first published in
1944 in the book co-authored with Morgenstern,
Theory of Games and Economic Behavior).

Nuclear weapons
Beginning in the late 1930s, von Neumann began
to take more of an interest in applied (as opposed
to pure) mathematics. In particular, he developed
an expertise in explosions—phenomena which are
difficult to model mathematically. This led him to
a large number of military consultancies,
primarily for the Navy, which in turn led to his
involvement in the Manhattan Project. The
involvement included frequent trips by train to
the project's secret research facilities in Los
Alamos, New Mexico.
Von Neumann's principal contribution to the
atomic bomb itself was in the concept and design
of the explosive lenses needed to compress the
plutonium core of the Trinity test device and the
"Fat Man" weapon that was later dropped on
Nagasaki. While von Neumann did not originate
the "implosion" concept, he was one of its most
persistent proponents, encouraging its continued
development against the instincts of many of his
colleagues, who felt such a design to be
unworkable. The lens shape design work was
completed by July 1944.
In a visit to Los Alamos in September 1944, von
Neumann showed that the pressure increase
from explosion shock wave reflection from solid
objects was greater than previously believed if
the angle of incidence of the shock wave was
between 90° and some limiting angle. As a result,
it was determined that the effectiveness of an
atomic bomb would be enhanced with detonation
some kilometers above the target, rather than at
ground level.
Beginning in the spring of 1945, along with four
other scientists and various military personnel,
von Neumann was included in the target
selection committee responsible for choosing the
Japanese cities of Hiroshima and Nagasaki as the
first targets of the atomic bomb. Von Neumann
oversaw computations related to the expected
size of the bomb blasts, estimated death tolls, and
the distance above the ground at which the
bombs should be detonated for optimum shock
wave propagation and thus maximum effect. The
cultural capital Kyoto, which had been spared the
firebombing inflicted upon militarily significant
target cities like Tokyo in World War II, was von
Neumann's first choice, a selection seconded by
Manhattan Project leader General Leslie Groves.
However, this target was dismissed by Secretary
of War Henry Stimson.
On July 16, 1945, with numerous other Los
Alamos personnel, von Neumann was an
eyewitness to the first atomic bomb blast,
conducted as a test of the implosion method
device, 35 miles (56 km) southeast of Socorro,
New Mexico. Based on his observation alone, von
Neumann estimated the test had resulted in a
blast equivalent to 5 kilotons of TNT, but Enrico
Fermi produced a more accurate estimate of 10
kilotons by dropping scraps of torn-up paper as
the shock wave passed his location and watching
how far they scattered. The actual power of the
explosion had been between 20 and 22 kilotons.
After the war, Robert Oppenheimer remarked
that the physicists involved in the Manhattan
project had "known sin". Von Neumann's
response was that "sometimes someone confesses
a sin in order to take credit for it."
Von Neumann continued unperturbed in his work
and became, along with Edward Teller, one of
those who sustained the hydrogen bomb project.
He then collaborated with Klaus Fuchs on further
development of the bomb, and in 1946 the two
filed a secret patent on "Improvement in Methods
and Means for Utilizing Nuclear Energy", which
outlined a scheme for using a fission bomb to
compress fusion fuel to initiate a thermonuclear
reaction. Though this was not the key to the
hydrogen bomb — the Teller-Ulam design — it
was judged to be a move in the right direction.

Computer science
Von Neumann's hydrogen bomb work was also
played out in the realm of computing, where he
and Stanislaw Ulam developed simulations on von
Neumann's digital computers for the
hydrodynamic computations. During this time he
contributed to the development of the Monte
Carlo method, which allowed complicated
problems to be approximated using random
numbers. Because using lists of "truly" random
numbers was extremely slow, von Neumann
developed a way of generating pseudorandom
numbers, using the middle-square method.
Though this method has been criticized as crude,
von Neumann was aware of this: he justified it as
being faster than any other method at his
disposal, and also noted that when it went awry it
did so obviously, unlike methods which could be
subtly incorrect.
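A minimal sketch of the middle-square idea in
Python (the digit count here is illustrative):

def middle_square(seed, digits=4):
    """Von Neumann's middle-square generator: square the current
    value and keep its middle `digits` digits as the next value.
    It falls into short cycles, exactly the flaw noted above that
    von Neumann accepted in exchange for speed."""
    x = seed
    while True:
        x = (x * x) // 10 ** (digits // 2) % 10 ** digits
        yield x

gen = middle_square(1234)
print([next(gen) for _ in range(5)])  # [5227, 3215, 3362, 3030, 1809]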
While consulting for the Moore School of
Electrical Engineering at the University of
Pennsylvania on the EDVAC project, von
Neumann wrote an incomplete First Draft of a
Report on the EDVAC. The paper, which was
widely distributed, described a computer
architecture in which the data and the program
are both stored in the computer's memory in the
same address space. This architecture became
the de facto standard until technology enabled
more advanced architectures. The earliest
computers were 'programmed' by altering the
electronic circuitry. Although the single-memory,
stored program architecture was commonly
called von Neumann architecture as a result of
von Neumann's paper, the architecture's
description was based on the work of J. Presper
Eckert and John William Mauchly, inventors of
the ENIAC at the University of Pennsylvania.
Von Neumann also created the field of cellular
automata without the aid of computers,
constructing the first self-replicating automata
with pencil and graph paper. The concept of a
universal constructor was fleshed out in his
posthumous work Theory of Self Reproducing
Automata. Von Neumann argued that the most
effective way of performing large-scale mining
operations such as mining an entire moon or
asteroid belt would be by using self-replicating
machines, taking advantage of their exponential
growth.
He is credited with at least one contribution to
the study of algorithms. Donald Knuth cites von
Neumann as the inventor, in 1945, of the merge
sort algorithm, in which the first and second
halves of an array are each sorted recursively and
then merged together. His algorithm for
simulating a fair coin with a biased coin is used in
the "software whitening" stage of some hardware
random number generators.
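Both ideas are short enough to sketch in Python
(illustrative implementations, not von Neumann's
original formulations):

def merge_sort(a):
    """Sort a list by recursively sorting each half and merging."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):      # merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def von_neumann_extractor(bits):
    """Unbias a stream of independent but biased coin flips:
    read bits in pairs, emit 0 for 01 and 1 for 10, and discard
    the pairs 00 and 11."""
    return [b1 for b1, b2 in zip(bits[::2], bits[1::2]) if b1 != b2]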
He also engaged in exploration of problems in
numerical hydrodynamics. With R. D. Richtmyer
he developed an algorithm defining artificial
viscosity that improved the understanding of
shock waves. It is possible that we would not
understand much of astrophysics, and might not
have highly developed jet and rocket engines
without that work. The problem was that when
computers solve hydrodynamic or aerodynamic
problems, they try to put too many computational
grid points at regions of sharp discontinuity
(shock waves). The artificial viscosity was a
mathematical trick to slightly smooth the shock
transition without sacrificing basic physics.
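Schematically, the von Neumann–Richtmyer term
adds to the pressure a quantity of the form

$q = c^2 \rho\,(\Delta x)^2 \left(\frac{\partial u}{\partial x}\right)^2$

in zones undergoing compression (and q = 0
otherwise), where c is a dimensionless constant,
ρ the density, Δx the grid spacing, and u the
velocity; the quadratic dependence confines the
smoothing to the shock itself.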

Politics and social affairs
Von Neumann obtained at the age of 29 one of
the first five professorships at the new Institute
for Advanced Study in Princeton, New Jersey
(another had gone to Albert Einstein). He was a
frequent consultant for the Central Intelligence
Agency, the United States Army, the RAND
Corporation, Standard Oil, IBM, and others.
Throughout his life von Neumann had a respect
and admiration for business and government
leaders; something which was often at variance
with the inclinations of his scientific colleagues.
He enjoyed associating with persons in positions
of power, and this led him into government
service.
As President of the Von Neumann Committee for
Missiles, and later as a member of the United
States Atomic Energy Commission, from 1953
until his death in 1957, he was influential in
setting U.S. scientific and military policy.
Through his committee, he developed various
scenarios of nuclear proliferation, the
development of intercontinental and submarine
missiles with atomic warheads, and the
controversial strategic equilibrium called mutual
assured destruction. During a Senate committee
hearing he described his political ideology as
"violently anti-communist, and much more
militaristic than the norm".
Von Neumann's interest in meteorological
prediction led him to propose manipulating the
environment by spreading colorants on the polar
ice caps to enhance absorption of solar radiation
(by reducing the albedo), thereby raising global
temperatures. He also favored a preemptive
nuclear attack on the USSR, believing that doing
so could prevent it from obtaining the atomic
bomb.

Personality
Von Neumann invariably wore a conservative
grey flannel business suit - he was even known to
play tennis wearing his business suit - and he
enjoyed throwing large parties at his home in
Princeton, occasionally twice a week. His white
clapboard house at 26 Westcott Road was one of
the largest in Princeton. Despite being a
notoriously bad driver, he nonetheless enjoyed
driving (frequently while reading a book) -
occasioning numerous arrests as well as
accidents. When Cuthbert Hurd hired him as a
consultant to IBM, Hurd often quietly paid the
fines for his traffic tickets.
He reported one of his car accidents in this way:
"I was proceeding down the road. The trees on
the right were passing me in orderly fashion at 60
miles per hour. Suddenly one of them stepped in
my path." (The von Neumanns would return to
Princeton at the beginning of each academic year
with a new car.) It was said of him at Princeton
that, while he was indeed a demigod, he had
made a detailed study of humans and could
imitate them perfectly.
Von Neumann liked to eat and drink heavily; his
wife, Klara, said that he could count everything
except calories. He enjoyed Yiddish and "off-
color" humor (especially limericks).

Honors
The John von Neumann Theory Prize of the
Institute for Operations Research and the
Management Sciences (INFORMS, previously
TIMS-ORSA) is awarded annually to an individual
(or group) that has made fundamental and
sustained contributions to theory in operations
research and the management sciences.
The IEEE John von Neumann Medal is awarded
annually by the IEEE "for outstanding
achievements in computer-related science and
technology."
The John von Neumann Lecture is given annually
at the Society for Industrial and Applied
Mathematics (SIAM) by a researcher who has
contributed to applied mathematics, and the
chosen lecturer is also awarded a monetary prize.
The crater Von Neumann on the Moon is named
after him.
The John von Neumann Computing Center in
Princeton, New Jersey (40°20′55″N 74°35′32″W)
was named in his honour.
The professional society of Hungarian computer
scientists, John von Neumann Computer Society,
is named after John von Neumann.
On February 15, 1956, von Neumann was presented
with the Presidential Medal of Freedom by
President Dwight Eisenhower.
On May 4, 2005 the United States Postal Service
issued the American Scientists commemorative
postage stamp series, a set of four 37-cent self-
adhesive stamps in several configurations. The
scientists depicted were John von Neumann,
Barbara McClintock, Josiah Willard Gibbs, and
Richard Feynman.
The John von Neumann Award of The Rajk László
College for Advanced Studies was named in his
honour, and has been given every year since
1995 to professors who have made an
outstanding contribution to the exact social
sciences and through their work have strongly
influenced the professional development and
thinking of the members of the college.

Selected works
•Jean van Heijenoort, 1967. A Source Book in
Mathematical Logic, 1879-1931. Harvard Univ.
Press.
1. 1923. On the introduction of transfinite
numbers, 346-54.
2. 1925. An axiomatization of set theory,
393-413.

•1932. Mathematical Foundations of Quantum
Mechanics, Beyer, R. T., trans., Princeton Univ.
Press. 1996 edition: ISBN 0-691-02893-1
•1944. (with Oskar Morgenstern) Theory of
Games and Economic Behavior. Princeton Univ.
Press. 2007 edition: ISBN 978-0-691-13061-3
•1945. First Draft of a Report on the EDVAC.
•1966. (with Arthur W. Burks) Theory of Self-
Reproducing Automata. Univ. of Illinois Press.
•1963. Collected Works of John von Neumann, 6
Volumes. Pergamon Press

Biographical material
•Aspray, William, 1990. John von Neumann and
the Origins of Modern Computing.
•Dalla Chiara, Maria Luisa and Giuntini, Roberto,
1997, La Logica Quantistica in Boniolo, Giovanni,
ed., Filosofia della Fisica (Philosophy of Physics).
Bruno Mondadori.
•Goldstine, Herman, 1980. The Computer from
Pascal to von Neumann.
•Halmos, Paul R., 1985. I Want To Be A
Mathematician Springer-Verlag
•Hashagen, Ulf, 2006: Johann Ludwig Neumann
von Margitta (1903–1957). Teil 1: Lehrjahre eines
jüdischen Mathematikers während der Zeit der
Weimarer Republik. In: Informatik-Spektrum 29
(2), S. 133-141.
•Hashagen, Ulf, 2006: Johann Ludwig Neumann
von Margitta (1903–1957). Teil 2: Ein
Privatdozent auf dem Weg von Berlin nach
Princeton. In: Informatik-Spektrum 29 (3), S. 227-
236.
•Heims, Steve J., 1980. John von Neumann and
Norbert Wiener: From Mathematics to the
Technologies of Life and Death MIT Press
•Macrae, Norman, 1999. John von Neumann: The
Scientific Genius Who Pioneered the Modern
Computer, Game Theory, Nuclear Deterrence,
and Much More. Reprinted by the American
Mathematical Society.
•Poundstone, William. Prisoner's Dilemma: John
von Neumann, Game Theory and the Puzzle of
the Bomb. 1992.
•Redei, Miklos (ed.), 2005 John von Neumann:
Selected Letters American Mathematical Society
•Ulam, Stanisław, 1983. Adventures of a
Mathematician Scribner's
•Vonneuman, Nicholas A. John von Neumann as
Seen by His Brother ISBN 0-9619681-0-9
•1958, Bulletin of the American Mathematical
Society 64.
•1990. Proceedings of the American
Mathematical Society Symposia in Pure
Mathematics 50.
•John von Neumann 1903-1957, biographical
memoir by S. Bochner, National Academy of
Sciences, 1958

Popular periodicals
•Good Housekeeping Magazine, September 1956
Married to a Man Who Believes the Mind Can
Move the World
•Life Magazine, February 25, 1957 Passing of a
Great Mind

Video
•John von Neumann, A Documentary (60 min.),
Mathematical Association of America

References
This article was originally based on material from
the Free On-line Dictionary of Computing, which
is licensed under the GFDL.
•Doran, Robert S. and Kadison, Richard V., eds.
(2004). Operator Algebras, Quantization, and
Noncommutative Geometry: A Centennial
Celebration Honoring John Von Neumann and
Marshall H. Stone. American Mathematical
Society. ISBN 9780821834022.
•Heims, Steve J. (1980). John von Neumann and
Norbert Wiener, from Mathematics to the
Technologies of Life and Death. Cambridge,
Massachusetts: MIT Press. ISBN 0262081059.
•Herken, Gregg (2002). Brotherhood of the
Bomb: The Tangled Lives and Loyalties of Robert
Oppenheimer, Ernest Lawrence, and Edward
Teller. ISBN 978-0805065886.
•Israel, Giorgio; Ana Millan Gasca (1995). The
World as a Mathematical Game: John von
Neumann, Twentieth Century Scientist.
•Macrae, Norman (1992). John von Neumann:
The Scientific Genius Who Pioneered the Modern
Computer, Game Theory, Nuclear Deterrence,
and Much More. Pantheon Press.
ISBN 0679413081.
•Slater, Robert (1989). Portraits in Silicon.
Cambridge, Mass.: MIT Press. pp. 23–33.
ISBN 0262691310.


Lev Pontryagin
Lev Semenovich Pontryagin (Russian: Лев
Семёнович Понтрягин) (3 September 1908 –
3 May 1988) was a Soviet Russian
mathematician. He was born in Moscow and lost
his eyesight in a primus stove explosion when he
was 14. Despite his blindness he was able to
become a mathematician due to the help of his
mother Tatyana Andreevna who read
mathematical books and papers (notably those of
Heinz Hopf, J. H. C. Whitehead and Hassler
Whitney) to him. He made major discoveries in a
number of fields of mathematics, including the
geometric parts of topology.

Work
He worked on duality theory for homology while
still a student. He went on to lay foundations for
the abstract theory of the Fourier transform, now
called Pontryagin duality. In topology he posed
the basic problem of cobordism theory. This led
to the introduction around 1940 of a theory of
characteristic classes, now called Pontryagin
classes, designed to vanish on a manifold that is a
boundary. Moreover, in operator theory there are
specific instances of Krein spaces called
Pontryagin spaces.
Later in his career he worked in optimal control
theory. His maximum principle is fundamental to
the modern theory of optimization. He also
introduced there the idea of a bang-bang
principle, to describe situations where either the
maximum 'steer' should be applied to a system,
or none.
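In one common formulation: for a controlled
system $\dot{x} = f(x, u)$ with running cost
L(x, u), introduce a costate λ(t) and the
Hamiltonian $H(x, u, \lambda) = \lambda^{\top} f(x, u) - L(x, u)$
(sign conventions vary). An optimal control must
then maximize H pointwise,

$H(x^*(t), u^*(t), \lambda(t)) = \max_u H(x^*(t), u, \lambda(t)),$

with $\dot{\lambda} = -\partial H / \partial x$. When H is
linear in a bounded control u, the maximum is
attained at an extreme point of the control set,
which is the bang-bang behaviour just described.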

Controversy and anti-Semitism
Pontryagin was a controversial personality.
Although he had many Jews among his friends
and supported them in his early years, he was
accused of anti-Semitism in his mature years. For
example, he attacked Nathan Jacobson for being a
"mediocre scientist" representing the "Zionism
movement", while both men were vice-presidents
of the International Mathematical Union. He
rejected charges of anti-Semitism in an article
published in Science in 1979, claiming that he
struggled against Zionism, which he considered a
form of racism.
mathematician, Grigory Margulis, was selected
by the IMU to receive the Fields Medal at the upcoming 1978 ICM, Pontryagin, who was a member of the Executive Committee of the IMU
at the time, vigorously objected. Although the
IMU stood by its decision to award Margulis the
Fields Medal, Margulis was denied a Soviet exit
visa by the Soviet authorities and was unable to
attend the 1978 ICM in person. Pontryagin also
participated in a few notorious political
campaigns in the Soviet Union, most notably the Luzin affair.
Pontryagin's students include Dmitri Anosov,
Vladimir Boltyansky, Mikhail Postnikov and
Vladimir Rokhlin.


Naum Z. Shor
Naum Zuselevich Shor
Born 1 January 1937
Kiev, Ukraine, USSR
Died 26 February 2006 (aged 69)
Nationality Soviet Union
Ukraine
Institutions V.M. Glushkov Institute of
Cybernetics, Kiev, Ukraine
Naum Zuselevich Shor (Ukrainian: Шор Наум
Зуселевич) (1 January 1937 – 26 February
2006) was a Soviet and Ukrainian mathematician
specializing in optimization.
He made significant contributions to nonlinear
and stochastic programming, numerical
techniques for non-smooth optimization, discrete
optimization problems, matrix optimization, and dual quadratic bounds in multi-extremal programming problems.
Shor became a full member of the National
Academy of Sciences of Ukraine in 1998.


Subgradient methods
N. Z. Shor is well known for his method of
generalized gradient descent with space dilation
in the direction of the difference of two
successive subgradients (the so-called r-
algorithm). The ellipsoid method was re-
invigorated by A.S. Nemirovsky and D.B. Yudin,
who developed a careful complexity analysis of its
approximation properties for problems of convex
minimization with real data. However, it was
Leonid Khachiyan who provided the rational-
arithmetic complexity analysis, using an ellipsoid
algorithm, that established that linear
programming problems can be solved in
polynomial time.
It has long been known that the ellipsoidal
methods are special cases of these subgradient-
type methods.
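For orientation only, here is a minimal Python sketch of the plain subgradient iteration that Shor-type methods build upon; the step rule and test function are illustrative assumptions, and the sketch omits the space-dilation matrices that distinguish the r-algorithm itself.

import numpy as np

def subgradient_descent(subgrad, x0, steps=500):
    # Basic non-smooth scheme underlying Shor-type methods:
    #   x_{k+1} = x_k - h_k * g_k / ||g_k||,
    # with a diminishing step h_k = 1/sqrt(k+1) (an illustrative choice).
    # subgrad(x) must return any subgradient of f at x.
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        g = subgrad(x)
        norm = np.linalg.norm(g)
        if norm == 0.0:  # a zero subgradient certifies optimality
            break
        x = x - g / (norm * np.sqrt(k + 1))
    return x

# Hypothetical example: minimize f(x) = |x1 - 1| + 2|x2 + 3|, minimized at (1, -3).
f_subgrad = lambda x: np.array([np.sign(x[0] - 1.0), 2.0 * np.sign(x[1] + 3.0)])
print(subgradient_descent(f_subgrad, [10.0, 10.0]))  # approaches [1, -3]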


Bibliography
•"Congratulations to Naum Shor on his 65th
birthday", Journal of Global Optimization 24 (2):
111–114, 2002, doi:10.1023/A:1020215832722.


Albert W. Tucker
Albert W. Tucker
Born 28 November 1905
Oshawa, Ontario, Canada
Died 25 January 1995 (aged 89)
Hightstown, N.J., U.S.
Residence U.S.
Nationality American
Canadian
Fields Mathematician:
Combinatorial topology
Optimization
Institutions Princeton University
Alma mater University of Toronto, Princeton
University
Doctoral advisor Solomon Lefschetz
Doctoral students David Gale
Marvin Minsky
John Forbes Nash
Torrence Parsons
Lloyd Shapley
Known for Prisoner's dilemma
Karush-Kuhn-Tucker conditions
Combinatorial linear algebra
Influenced Harold W. Kuhn
David Gale
R. T. Rockafellar


Albert William Tucker (28 November 1905 – 25 January 1995) was a Canadian-born American
mathematician who made important
contributions in topology, game theory, and non-
linear programming.
Albert Tucker was born in Oshawa, Ontario, Canada, and earned his B.A. at the University of Toronto in 1928. In 1932, he completed his Ph.D. at Princeton University under the supervision
of Solomon Lefschetz, with the thesis An Abstract
Approach to Manifolds.
In 1932–33 he was a National Research Fellow at
Cambridge, Harvard, and the University of
Chicago. He then returned to Princeton to join
the faculty in 1933, where he stayed until 1970. He
chaired the mathematics department for about
twenty years, one of the longest tenures. His
extensive relationships within the field made him
a great source for oral histories of the
mathematics community.
His Ph.D. students include Michel Balinski, David
Gale, Alan Goldman, John Isbell, Stephen Maurer,
Marvin Minsky, Nobel Prize winner John Nash,
Torrence Parsons, Lloyd Shapley, Robert
Singleton, and Marjorie Stein. Although Tucker was not Harold W. Kuhn's dissertation advisor, he advised and collaborated with Kuhn on a number of papers and models.


In 1950, Albert Tucker gave the name and interpretation "prisoner's dilemma" to Merrill M. Flood and Melvin Dresher's model of cooperation and conflict, resulting in the most well-known game-theoretic paradox. He is also well known for the Karush-Kuhn-Tucker conditions, a basic result in non-linear programming, which was published in conference proceedings rather than in a journal.
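For reference, the conditions can be stated in a common modern form (a textbook formulation; the original proceedings paper uses different notation). For minimizing $f(x)$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$, with all functions differentiable, a local minimizer $x^*$ satisfying suitable regularity conditions admits multipliers $\mu$, $\lambda$ with

\[
\begin{aligned}
&\nabla f(x^*) + \textstyle\sum_i \mu_i \nabla g_i(x^*) + \sum_j \lambda_j \nabla h_j(x^*) = 0 && \text{(stationarity)} \\
&g_i(x^*) \le 0, \qquad h_j(x^*) = 0 && \text{(primal feasibility)} \\
&\mu_i \ge 0 && \text{(dual feasibility)} \\
&\mu_i \, g_i(x^*) = 0 && \text{(complementary slackness)}
\end{aligned}
\]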
In the 1960s, he was heavily involved in
mathematics education, as chair of the AP
Calculus committee for the College Board (1960–
1963), through work with the Committee on the
Undergraduate Program in Mathematics (CUPM)
of the MAA (he was president of the MAA in
1961–1962), and through many NSF summer
workshops for high school and college teachers.
In the early 1980s, Tucker recruited Princeton
history professor Charles Gillispie to help him set
up an oral history project to preserve stories
about the Princeton mathematical community in
the 1930s. With funding from the Sloan
Foundation, this project later expanded its scope.
Among those who shared their memories of such
figures as Einstein, von Neumann, and Gödel
were computer pioneer Herman Goldstine and
Nobel laureates John Bardeen and Eugene
Wigner.


Albert Tucker received an honorary degree from Dartmouth College. He died in Hightstown, N.J., in 1995 at age 89.


Hoang Tuy
Hoàng Tụy (born December 17, 1927) is a prominent Vietnamese applied mathematician. He is one of the two founders of mathematics in Vietnam, the other being Le Van Thiem. He received his PhD in mathematics from Moscow State University in 1959. He has worked mainly in the field of global optimization, where he did pioneering work, and has published more than 160 refereed journal and conference articles. He is presently with the Institute of Mathematics of the Vietnamese Academy of Science and Technology, where he was director from 1980 to 1989. His son, Hoang Duong Tuan, is now an Associate Professor in Electrical Engineering and Telecommunications at the University of New South Wales, Australia, where he works on applications of optimization in various engineering fields. His son-in-law, Phan Thien Thach, also works on optimization.
In 1997, a workshop in honor of Hoang Tuy was
organized at the Linköping Institute of
Technology, Sweden.
In December 2007, an international conference
on Nonconvex Programming was held in Rouen,
France to pay tribute to him on the occasion of
his 80th birthday, in recognition of his pioneering achievements which considerably affected the field of global optimization.
