
Notes for MATH 404 Spring, 2012

Chapter 1 [as of 24 January]

Thomas I. Seidman

1 Introduction

"The study of partial differential equations (PDEs) started in the 18th century in the work of Euler, d'Alembert, Lagrange and Laplace as a central tool in the description of mechanics of continua and, more generally, as the principal mode of analytical study of models in the physical sciences. The analysis of physical models has remained to the present day one of the fundamental concerns of the development of PDEs. . . . Up to about 1870 the study of PDE was mainly concerned with heuristic methods for finding solutions of boundary value problems for PDEs, as well as explicit solutions for particular problems." (Brezis/Browder: PDEs in the 20th Century, 1998)

The three classical PDEs which served as the motivating examples and models for much of the development:

    Wave equation:     ∂²u/∂t² − c²Δu = F    (for 1-D: u_tt − c²u_xx = F)
    Heat equation:     ∂u/∂t − DΔu = S
    Laplace equation:  Δu = 0    (Poisson equation: −Δu = F)

(1.1)

already appeared in the 18th and early 19th centuries. [See the notational comments at the beginning of subsection 1.2.]

Exercise 1.1. Verify that each of the following is a solution of the indicated equation:

1. For any smooth f, u(t, x) = f(2t − x) satisfies the 1-D wave equation with c = 2, F = 0.

2. u(t, x) = {0 for t ≤ x; (t − x) for t ≥ x} satisfies the 1-D wave equation with c = 1, F = 0.

3. u(t, x) = t^{−1/2} e^{−x²/4Dt} satisfies the 1-D heat equation with S = 0.

4. Both u₁(t, x, y) = e^{−2t} sin(x + y) and u₂(t, x, y) = e^{−2t} cos(x − y) satisfy the 2-D heat equation with D = 1, S = 0.

5. u(x, y, z) = e^{√2·x} sin(y + z) satisfies the 3-D Laplace equation.

6. u(x, y) = −[e^{3x−4y} + e^y cos x] satisfies the Poisson equation with F = 25 e^{3x−4y}.

7. For any smooth function f of a single variable, the function of two variables defined by u(x, y) = f(x² − 2y) satisfies the first order equation u_x + x u_y = 0.

8. [u = x² − y², v = 2xy] satisfies the C-R system: [u_x = v_y, v_x = −u_y].

The study of these classical equations (1.1) and their solutions (their relation with the physical modeling, some explicit solutions, their behavior, and some approaches to computation) provides a good introduction to some of the most significant ways of thinking about PDEs and working with them, although it is not the most systematic presentation of the modern theory of PDEs. Thus, most of this course will center on these particular equations (and some variants). Our emphasis will be on developing appropriate intuition for the material, an understanding of the important questions which characterize the theory, and a familiarity with some of the solution techniques which have been developed in response to those questions. We note that much of the work of the 20th and 21st centuries in this field has been concerned with technical issues arising in providing rigorous formulation of these questions, extension to nonlinear problems, and justification for these techniques. In this course our emphasis will not be on such proofs, so our manipulations will be essentially formal, without actually stating the conditions for a rigorous proof (although we will try, on occasion, to remain aware that there are such technical issues and may wish to obtain some understanding of the formulation of the more modern theory).
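The verifications asked for in Exercise 1.1 can also be spot-checked numerically. The following sketch is a side remark, not part of the notes' development; the evaluation points and the step size h are arbitrary choices. It applies centered finite differences to items 1 and 4:

```python
import math

h = 1e-3  # step size for the centered differences (arbitrary small value)

def dtt(u, t, x):
    # centered second difference in t
    return (u(t + h, x) - 2 * u(t, x) + u(t - h, x)) / h**2

def dxx(u, t, x):
    # centered second difference in x
    return (u(t, x + h) - 2 * u(t, x) + u(t, x - h)) / h**2

# Item 1 with f = sin: u = f(2t - x) should satisfy u_tt - 4 u_xx = 0.
u1 = lambda t, x: math.sin(2 * t - x)
r1 = dtt(u1, 0.4, 0.9) - 4 * dxx(u1, 0.4, 0.9)

# Item 4: u = e^{-2t} sin(x + y) should satisfy u_t = u_xx + u_yy.
u4 = lambda t, x, y: math.exp(-2 * t) * math.sin(x + y)
t0, x0, y0 = 0.3, 0.7, 1.1
u_t = (u4(t0 + h, x0, y0) - u4(t0 - h, x0, y0)) / (2 * h)
lap = ((u4(t0, x0 + h, y0) - 2 * u4(t0, x0, y0) + u4(t0, x0 - h, y0))
       + (u4(t0, x0, y0 + h) - 2 * u4(t0, x0, y0) + u4(t0, x0, y0 - h))) / h**2
r4 = u_t - lap

print(abs(r1), abs(r4))  # both residuals are O(h^2), essentially zero
```

Of course such a check only tests the PDE at one point; the pencil-and-paper verification is still the real exercise.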
The prerequisites (Multivariate Calculus, Linear Algebra and ODEs) will be used intensively: we assume a thorough familiarity and facility with that material and will use it on that basis, so adequate review is each student's responsibility.

1.1 An outline

Chapter 1  This Introduction, introducing the themes of the course.

Chapter 2  Comments on some of the prerequisite background for our study: this should already be familiar and, as already noted, a thorough review is strongly recommended to make sure of that, since that material will be used with little further explanation. A very brief indication will be given here of some of the most significant material. Our purposes are: to note some vocabulary and notation we will use, for convenient reference (and a few formulas), and also to provide, by analogy in a simpler context, some models of things we will be doing later with PDEs. [Some miscellaneous PDE results are also included here.]

Chapter 3  "Where do PDEs come from?" introducing the Cast of Characters through modeling, providing derivations which connect the mathematical objects of our interest (PDEs) with some physical situations from which they arise and so make plausible why we ask certain kinds of questions about them.

Chapters 4, 5  In these two chapters we look at two very special situations where one can obtain a general solution and then discuss some relevant questions and answers. In Chapter 4 we present d'Alembert's solution of the wave equation in one space dimension; in Chapter 5 we consider the one-dimensional heat equation.

Chapter 6  Sturm-Liouville problems serve as a fairly general class of elliptic equations in one space dimension for discussion of some basic issues. They also provide results needed for the treatment of more general problems.

Chapters 7, 8  Chapter 7 shows how a knowledge of eigenfunctions and eigenvalues can be used to obtain solutions of a variety of basic problems in the form of infinite series. Chapter 8 then shows a method to obtain the eigenfunctions and eigenvalues for certain problems in higher spatial dimension from the results of Chapter 6.
[The method only works when the geometry and the boundary conditions have certain special forms, but these include many of the most important situations for applications.]

Chapter 9  A very brief introduction to some computational methods.

Chapter 10, . . .  As time permits, . . .

1.2 PDEs; some notation and terminology

The unknown of a PDE is a function u(t, x) where, for typical applications, t denotes time (starting at some initial time t₀, usually taken as t₀ = 0) and x usually varies over some fixed spatial domain Ω. [While systems of PDEs occur in important applications, making u vector-valued, we will mostly restrict our attention to scalar PDEs, where u(t, x) is a real number for each (t, x).] The spatial variable x is typically 1-, 2-, or 3-dimensional for familiar applications so, e.g., in the 3-D case, we may write x = (x, y, z) or x = (x₁, x₂, x₃) as convenient (also allowing the possibility of cylindrical or spherical coordinates); we then speak of the d-dimensional wave equation if the spatial region is d-dimensional.

We will usually use subscripts for partials (i.e., partial derivatives), as u_x for ∂u/∂x. As usual, the gradient ∇u = (u_{x₁}, . . .) denotes (the vector of) all first order partials and we will write D²u for the collection of all second order partials. For the classical equations (1.1), the symbol Δ refers to the Laplace operator, meaning: Δu = u_xx = ∂²u/∂x² in one space dimension, with Δu = u_xx + u_yy = ∂²u/∂x² + ∂²u/∂y² in two space dimensions, and Δu = u_xx + u_yy + u_zz = ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² in three dimensions. Thus, Δu = Σ_{j=1}^{d} ∂²u/∂x_j².

The most general second order scalar PDE would have the form (some function of [t, x, u, ∇u, D²u]) = 0, but we restrict our attention to linear PDEs, having the form Lu = f(t, x) where

    L : u ↦ Σ_{j,k} a_{j,k}(t, x) ∂²u/∂x_j∂x_k + Σ_k b_k(t, x) ∂u/∂x_k + c(t, x) u.    (1.2)

We will discuss this idea of linear differential operators further in Subsection 1.6, but only note here that the most important special cases for L are Δ, ∂/∂t − DΔ, and ∂²/∂t² − c²Δ, as in (1.1) (although we may also separate the time and space derivatives to write u_t − Lu = f or u_tt − Lu = f). The leading terms of L are the ones involving second derivatives, given by the

coefficient matrix A = [a_{j,k}] = [a_{j,k}(t, x)]. [Note that we will always assume this matrix A is symmetric:

    a_{j,k} = a_{k,j} for each j, k    (1.3)

since we have equality of the mixed partials: ∂²u/∂x_j∂x_k = ∂²u/∂x_k∂x_j.]

We speak of the function f in a PDE Lu = f as the right hand side (rhs) but also, depending on the physical interpretation, as the forcing term or the source term. We call the equation homogeneous if f ≡ 0. We call the equation autonomous if the coefficients are independent of time t. We say L is in divergence form if the matrix A = [a_{j,k}] and the first order coefficients b_k are related so that

    Lu = ∇·(A∇u) + cu    (1.4)

so the principal part is the divergence of [A acting on the gradient of u]. The equations we discuss in this course will almost all be PDEs involving linear operators L in divergence form; in particular, the Laplace operator of (1.1) is in divergence form (Δu = ∇²u = div grad u) with A the identity matrix. [We will see in Chapter 3 that the natural generalization of the heat equation would take the conduction coefficient D to be a matrix, with the equation valid as written only when D is just a constant number; a similar consideration also applies to the wave equation.] We will introduce still more notation and terminology in Chapter 2.
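To see concretely what divergence form means, consider the 1-D case with a scalar coefficient a(x): there Lu = (a u')' expands to a u'' + a' u', i.e. the general form (1.2) with b = a'. The following small numerical sketch (an aside of ours; the sample coefficient and function are arbitrary choices) confirms the identity by computing both sides with centered differences:

```python
import math

h = 1e-3                                # step for the centered differences
a = lambda x: 2.0 + math.sin(x)         # a sample (positive) coefficient
u = lambda x: math.cos(3.0 * x)         # a sample smooth function

def d(f, x):
    # centered first difference
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 0.6
flux = lambda x: a(x) * d(u, x)         # approximates a(x) u'(x)
div_form = d(flux, x0)                  # approximates (a u')'(x0)
expanded = (a(x0) * (u(x0 + h) - 2 * u(x0) + u(x0 - h)) / h**2
            + d(a, x0) * d(u, x0))      # approximates a u'' + a' u'
print(abs(div_form - expanded))  # small: the two forms agree
```

The same computation in d dimensions, with A a matrix, gives b_k = Σ_j ∂a_{j,k}/∂x_j, so divergence form is a genuine restriction only when the coefficients vary in space.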

1.3 Problems and data

Knowing the situation for ODEs (and noting, e.g., problem 4 of Exercise 1.1), we do not expect a PDE to have a single (unique) solution. This is also clear from the physics. For example, the heat equation models the evolution of the temperature distribution of a body (varying with time t and position x in the body) and this must clearly depend both on the temperature distribution at the initial time and on the heat exchange across the boundary of the body during the intervening interval, as well as on any interior sources described in the equation itself by S. Thus, to obtain a situation where it would become possible to speak of "the" solution, we expect a necessity of providing this additional information in the form of auxiliary conditions imposed along with the PDE itself. Much of this will become clearer in the context of Chapter 3; for now we only note that the equation Lu = f cannot be solved for "the" solution u since f alone typically does not provide

all the information needed. It is combinations of a PDE together with such auxiliary conditions which we must consider. The relevant formulation is not the equation by itself, but the problem, choosing the type of auxiliary data to specify. Since the kind of auxiliary conditions to be imposed will be different for different problems, it is this combination which we must consider. [Even for an ODE such as y″ + y = f(t) we are used to considering the initial value problem (IVP) in fixing the two arbitrary constants, but might alternatively consider the two-point boundary value problem (BVP), specifying values, say, for y(a) and for y(b). Indeed, we will consider that question in a bit more detail in Chapter 6.]

For the heat equation it is natural to consider augmenting the PDE by an initial condition, specifying a function u₀ and requiring u(0, x) = u₀(x). If we are considering the heat equation for a body occupying a bounded region Ω, then we might plausibly assume the temperature values known (e.g., obtained by measurement) at the boundary (which we will denote by ∂Ω); this means working with a Dirichlet boundary condition:

    u(t, x) = g(t, x) for x in ∂Ω    (1.5)

where g is the (known) data at the boundary, and we call this a Dirichlet problem. Alternatively, we might plausibly assume we have information about the heat flux at ∂Ω (which we will see in Chapter 6 is given by −D∇u) and could specify the normal component −D∇u·n of this at ∂Ω, working with a Neumann condition; in particular, the homogeneous Neumann condition −D∇u·n = 0 would correspond to an insulated boundary with no flux across it. [Still another possibility comes from Newton's Law of Cooling, in which the (normal component of) the flux would be proportional to the difference between the body's temperature and the ambient temperature, so

    [−D∇u·n](t, x) + α u(t, x) = g(t, x) for x in ∂Ω    (1.6)

with g given and n the unit outward normal to the boundary at x; this is called a Robin condition (but would be a Neumann condition if α = 0).] Obviously we might also consider situations in which the imposed boundary conditions involve different types of information at different parts of the boundary.

The same types of Dirichlet and Neumann boundary conditions also occur in connection with the wave equation, depending on whether one has information about position deviations or about forces at the boundary. As with the use of Newton's Laws of Motion as ODEs in mechanics, the wave equation is of second order in time, so we expect to require two initial conditions, specifying (independently) functions u₀, u₁ on Ω for auxiliary conditions

    u(0, x) = u₀(x),    u_t(0, x) = u₁(x).    (1.7)

The data for these problems consists of the rhs in the PDE (F or S), the function g in the boundary conditions (if the PDE is being considered in a region Ω), and the initial data: u₀ or u₀, u₁ (or nothing, for the Poisson equation) as appropriate. When we speak of a "problem," however, we are considering the nature of the conditions to be imposed, but not the actual data which would have to be specified for any particular application.

1.4 Problems and Well-Posedness

Much of our basic understanding of PDE problems relates to whether the problem is well-posed, meaning that:

1. The problem has an admissible solution for each admissible set of data.

2. Those solutions are unique.

3. If two sets of data for the problem are close to each other, then the corresponding solutions are close to each other.

We modify our idea of a "problem" to include some specification of what data and what solutions are admissible and when we would consider data or solutions to be close. We briefly consider the conditions above one by one.

1. The new ideas here are the notions of admissible data and admissible solution. To some extent these are related to the concepts of regularity and ideal solution to be introduced in Section 1.5, but we will also see that restrictions on admissibility may have direct physical meaning.

2. This just means that we have augmented the PDE with appropriate auxiliary conditions providing enough information to determine the solution without ambiguity (but not so much as to constrain the existence property unduly).

3. As should be clear (from our description above and from the physical modeling in Chapter 3), the data specified in any particular application is typically obtained from measurement and so is only an approximation, so we need this to provide a good approximation to the "real" solution. The new idea here is that we must specify in each context what we mean by close for this to be useful.

This definition of well-posedness was formulated this way by Hadamard about 1929 and now dominates the discussion of PDEs and their computational solution. Much of that discussion has become quite technical, involving the descriptions of numerous useful classes of admissible functions with various notions of what close might mean in these varying contexts. We will mostly be content in this course with accepting that the problems we obtain from reasonable physical situations will be well-posed when that is interpreted in reasonable ways. To get some idea as to how this works, we look at a 1-D example.

Exercise 1.2. Consider the ODE y″ + y = f. This is a 1-dimensional version of the Poisson equation of (1.1) and we can consider the Dirichlet problem for this ODE on the interval [0, ℓ], so our problem is

    y″ + y = f(x) for 0 ≤ x ≤ ℓ with y(0) = α, y(ℓ) = β.    (1.8)

In this case we can give a general solution¹ of the ODE explicitly:

    y(x) = ∫₀ˣ [sin(x − s)] f(s) ds + a sin x + b cos x    (1.9)

with arbitrary constants a, b, to be determined so the boundary conditions are satisfied.


¹ Calling this a general solution means that for every choice of the 2 parameters a, b this gives a solution and that every solution can be obtained in this way with a unique choice of the pair a, b. [One of the most important considerations in working with PDEs is that there cannot be any general solution involving only a finite number of parameters: one needs either infinitely many arbitrary constants as parameters or else one or more arbitrary functions.]

1. Verify that (1.9) is a general solution of y″ + y = f.

2. Verify that y(x) = x² + a sin x + b cos x is also a general solution of y″ + y = f for f = x² + 2. Solve (1.8) in terms of the data α, β, fixing f(x) = x² + 2.

3. Explain why the problem (1.8) is not well-posed as stated when ℓ is a multiple of π, so sin ℓ = 0. If we do have ℓ = π, what admissibility conditions must be put on the data α, β so a solution would exist? [Is the resulting solution then unique?]

4. Show that the problem (1.8) is well-posed for, e.g., ℓ = 1 if we interpret close for solutions y and ỹ to mean that one has |y(x) − ỹ(x)| ≤ ε (all 0 < x < 1) for a number ε which should get small when the differences |α − α̃| and |β − β̃| both get small.

5. For ℓ = 1, show that the problem (1.8) is well-posed (for the full data f, α, β) if we interpret close for functions f and f̃ to mean that one has |f(x) − f̃(x)| ≤ ε (all 0 < x < 1) for a small enough number ε > 0.
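Part 2 can be carried out concretely. The following sketch (ours, with arbitrarily chosen boundary values) fits the constants a, b in the general solution y = x² + a sin x + b cos x of y″ + y = x² + 2 to prescribed values y(0) and y(1) on [0, 1], then checks the boundary values and the ODE residual:

```python
import math

# Arbitrary boundary data for the two-point problem: y(0) and y(1).
alpha, beta = 0.5, -1.0
b = alpha                                              # from y(0) = b
a = (beta - 1.0 - b * math.cos(1.0)) / math.sin(1.0)   # from y(1) = beta

y = lambda x: x**2 + a * math.sin(x) + b * math.cos(x)

# ODE residual y'' + y - (x^2 + 2), via a centered second difference.
h = 1e-3
x0 = 0.4
resid = (y(x0 + h) - 2 * y(x0) + y(x0 - h)) / h**2 + y(x0) - (x0**2 + 2)
print(y(0.0), y(1.0), abs(resid))
```

Note that the same computation fails on [0, π]: there the coefficient of a is sin π = 0, which is exactly the ill-posedness of part 3.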

1.5 Idealized solutions
If we write the 2-D Poisson equation as

    Δu = ∂²u/∂x² + ∂²u/∂y² = f(x, y),    (1.10)

it is natural to assume that a solution u should be twice differentiable, so [∂²u/∂x²](x, y) and [∂²u/∂y²](x, y) would each be well-defined numbers (varying with x, y), so (1.10) could be interpreted as a numerical identity holding for each x, y. In this case we speak of u as a classical solution. On the other hand, we may view the Laplace operator Δ as giving a function Δu for each suitable function u, so (1.10) would be interpreted as an equivalence of the two functions Δu and f. [These are almost the same thing, but are subtly different since we often consider functions equivalent even if their values differ at some points. This is especially the case in contexts involving integrals: for example, if we consider

    H(x) = { −1 for x < 0; 1 for x > 0 },    H(0) = γ = ?

then ∫Hφ will always give the same value (regardless of γ) for arbitrary smooth functions φ, and this may be sufficient.] Certainly if the function f in (1.10) would represent a load or a source, then we would not expect this physical interpretation to be sensitive to change at a negligible set of points, although this would formally lead to a discontinuity in f which would make a pointwise interpretation of the equation impossible there. More to the point, we would like to use such a common idealization as a point mass.

Example 1.1. We will see that the steady state deflection y(x) of a string (fixed at the ends so we have the BCs: y(0) = 0 = y(1)) under a distributed load f(x) will satisfy −y″ = f. The solution is then given by
    y(x) = ∫₀¹ K(x, s) f(s) ds    where K(x, s) = { s(1 − x) for s ≤ x; x(1 − s) for x ≤ s }.    (1.11)

We can imagine the load distribution f (of total mass m, so ∫f = m) as being concentrated near the point x = ξ. To idealize this as a point load m at x = ξ, we would still want ∫f = m, but would want f(x) = 0 for all x ≠ ξ. No such load distribution can be represented by an actual function, but we may, somewhat formally, consider a so-called delta function δ = δ_ξ whose defining property is that
    ∫₀¹ δ(s) φ(s) ds = φ(ξ)    (1.12)

for every continuous function φ. For each ξ we see that taking loads f in (1.11) for which f → δ in some suitable sense gives y(x) → y_ξ(x) = K(x, ξ), so we may view y_ξ as an idealized solution for the BVP: −y″ = δ_ξ with y(0) = 0 = y(1).

Exercise 1.3.

1. Verify that (1.11) satisfies the conditions of the boundary value problem: [−y″ = f with y(0) = 0, y(1) = 0] for any reasonable function f.

2. Consider functions f_ε(x) = 1/2ε for ξ − ε < x < ξ + ε with f_ε(x) = 0 for |x − ξ| > ε. [Explain why these are reasonable approximations to the point distribution δ_ξ, so we may say f_ε → δ_ξ as ε → 0.] Show that the corresponding solutions y_ε of the BVP converge to y_ξ.
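For part 1 of Exercise 1.3, a quick numerical sanity check (ours, not part of the exercise): with the uniform load f = 1, formula (1.11) should produce y(x) = x(1 − x)/2, since that function satisfies −y″ = 1 with y(0) = y(1) = 0. A trapezoid-rule evaluation of (1.11) agrees:

```python
# Green's-function formula (1.11), evaluated by the trapezoid rule for f = 1.
def K(x, s):
    return s * (1.0 - x) if s <= x else x * (1.0 - s)

def y(x, n=2000):
    # trapezoid rule for the integral of K(x, s) over 0 <= s <= 1
    hs = 1.0 / n
    total = 0.5 * (K(x, 0.0) + K(x, 1.0))
    for i in range(1, n):
        total += K(x, i * hs)
    return hs * total

x0 = 0.3
print(y(x0), x0 * (1 - x0) / 2)  # the two values agree
```

(The trapezoid rule is exact here because K is piecewise linear in s and the kink at s = x lands on a grid point for this choice of x and n.)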

[As another important example of an idealized solution for a PDE, consider 1. of Exercise 1.1 when f may not be smooth. This is the d'Alembert solution of the 1-D wave equation, to be discussed in more detail in Chapter 4.]

We next note an important insight. If two functions f, g are equivalent then

    ∫fφ = ∫gφ for every reasonable function φ    (1.13)

and the converse of this is also true:

    ∫fφ = ∫gφ for enough φ  ⟹  f ≅ g.    (1.14)

If we would know that f, g are continuous, then it is not difficult to prove (1.14), and this will be true more generally if we interpret the equivalence ≅ as meaning that f(x) = g(x) for all x except possibly some negligible set. The importance of this for us is that we can apply it to PDEs such as (1.10). The Divergence Theorem (see Subsection 2.1) then gives ∫φ[Δu] = −∫∇φ·∇u, so (1.10) then becomes

    −∫∇φ·∇u = ∫φ f.    (1.15)

[This assumes that Ω is smooth enough that ∂Ω is nice and also, for example, that φ vanishes near the boundary so the boundary integral ∫_{∂Ω} φ[∇u·n] vanishes.] A function u satisfying (1.15) for all such φ is called a weak solution of the Poisson equation (1.10). The advantage of (1.15) over (1.10) in considering ideal solutions is that it makes perfectly good sense for functions u which are differentiable but not twice differentiable, although a twice differentiable function which is a weak solution is also a classical solution; as we will see in Chapter 3, this weakening of the required regularity has physical significance.
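A 1-D illustration of the weak formulation (a sketch of ours, using the string of Example 1.1): the kinked function u(x) = K(x, ξ) from (1.11) is only once (piecewise) differentiable, yet integrating its derivative against the derivative of a smooth test function φ vanishing at the endpoints reproduces φ(ξ), the 1-D analogue of (1.15) with rhs δ_ξ. The test function below is an arbitrary choice:

```python
import math

xi = 0.4                                        # location of the point load
du = lambda x: (1.0 - xi) if x < xi else -xi    # u'(x) for u(x) = K(x, xi)
phi = lambda x: math.sin(math.pi * x)           # test function, phi(0) = phi(1) = 0
dphi = lambda x: math.pi * math.cos(math.pi * x)

# midpoint rule for the integral of u' phi' over [0, 1]
n = 4000
h = 1.0 / n
integral = sum(du((i + 0.5) * h) * dphi((i + 0.5) * h) for i in range(n)) * h
print(integral, phi(xi))  # the integral reproduces phi(xi)
```

The pointwise equation −u″ = δ_ξ makes no classical sense at x = ξ, but the integrated identity above is perfectly finite, which is exactly the advantage claimed for (1.15).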

1.6 Linearity

As promised, we return to consideration of linear differential operators as in (1.2). At the start of the previous subsection we contrasted a pointwise and

a functional view of (1.10), with the latter viewing functions as the elementary objects of our attention. Similarly we may contrast two quite different views of the derivative. On the one hand we think about this as the slope, emphasizing the interpretation at a single point. On the other hand we may view D = d/dx as an operator (a kind of higher level function) acting on a familiar function f to produce a new function Df = f′ = df/dx, so we think of f as an input and the derivative² as a resulting output. Two key properties of this operator D (and of any partial differentiation ∂/∂x_j) are the scaling property D(cf) = c(Df) for constants c and the sum property D(f + g) = (Df) + (Dg). The latter, when it holds, is also called the Principle of Superposition: the output associated with a sum of (two) possible inputs is the sum of what would have been the outputs for the inputs considered separately. Any operator with these properties is called a linear operator.

Exercise 1.4. Explain how you know that each of these maps is a linear operator, i.e., that it has the two properties defining linearity when applied to suitable functions.

1. The map L of (1.2).

2. The Dirichlet trace map γ₀ : u (defined on Ω) ↦ u|_∂Ω, i.e., restricting the function u to its boundary values as needed for Dirichlet problems.

3. The Neumann trace map γ₁ : u (defined on Ω) ↦ A∇u·n|_∂Ω as needed for Neumann problems.

4. The IV maps: u ↦ u|_{t=0} and u ↦ u_t|_{t=0}.
² In most cases we will not make a great effort to be precise about this, but it should be very clear that a precise definition of any operator must be precise as to which class of functions are being considered. This obviously includes specifying where these functions are defined, whether they are scalar or vector-valued, etc., but typically this is also a matter of regularity: whether the functions are continuous or twice continuously differentiable or . . . .


We will note that the three classical equations are each linear and will emphasize how important that is for the methods we will use. The primary uses of this linearity for us are the possibility of problem splitting and techniques for building up solutions to problems with more general data by sums or integrals from simpler special examples.
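As a closing illustration (a numerical aside, not from the notes; the sample functions and point are arbitrary), superposition for the Laplace operator can be observed directly: approximating Δ by centered differences, both L(cf) − cL(f) and L(f + g) − (L(f) + L(g)) vanish up to roundoff.

```python
import math

h = 1e-3  # step for the centered differences

def lap(u, x, y):
    # five-point finite-difference Laplacian at (x, y)
    return ((u(x + h, y) - 2 * u(x, y) + u(x - h, y))
            + (u(x, y + h) - 2 * u(x, y) + u(x, y - h))) / h**2

f = lambda x, y: math.sin(x) * math.exp(y)
g = lambda x, y: x**2 * y
c = 3.7
x0, y0 = 0.5, 0.8

scaling = lap(lambda x, y: c * f(x, y), x0, y0) - c * lap(f, x0, y0)
summing = lap(lambda x, y: f(x, y) + g(x, y), x0, y0) - (lap(f, x0, y0) + lap(g, x0, y0))
print(abs(scaling), abs(summing))  # zero up to roundoff
```

This is exactly the structure behind problem splitting: a sum of solutions for pieces of the data is a solution for the summed data.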

