
Continuous Dynamical Systems

Lee DeVille

Author address:
111 Cummington St., Department of Mathematics, Boston University, Boston MA 02155

E-mail address: deville@math.bu.edu

Based exclusively on notes given by G. R. Hall in MA 775, Fall 1996, at Boston University

Contents

Introduction

Chapter 1. Introduction
1. Some preliminary definitions (Sept. 4)
2. More examples, changing variables (Sept. 6)
3. Differentiation, change of time variable (Sept. 9)
4. Change of time variables (Sept. 11)
5. Example, using the 2-body problem (Sept. 13)
6. McGehee Collision Manifold (Sept. 16)
7. Finishing analysis of collision manifold (Sept. 18)

Chapter 2. The Two Big Theorems
1. Existence-Uniqueness Theorem (Sept. 20)
2. Some proof of the Existence-Uniqueness Theorem (Sept. 25)
3. Some more proof of the Existence-Uniqueness Theorem (Sept. 27)
4. Picard iteration (Sept. 30)
5. Invariant Sets (Oct. 2)
6. Sinks and conjugacy (Oct. 4 and Oct. 7)
7. Preliminary to Stable/Unstable Manifold Theorem (Oct. 9)
8. Stable/Unstable Manifold Theorem (Oct. 15)
9. More S/U Theorem (analytic version) (Oct. 16)
10. C^k version of S/U Theorem (Oct. 18)
11. Continuation of the C^k proof (Oct. 21)
12. Smoothness (Oct. 28)
13. The Stable/Unstable Manifold MetaTheorem (Oct. 30)

Chapter 3. Using maps to understand flows
1. Periodic orbits and Poincare sections (Nov. 1)
2. Computing Floquet Multipliers (Nov. 4)
3. More computation of Floquet multipliers (Nov. 6)
4. Bifurcation Theory (Nov. 8)
5. Hyperbolicity in bifurcations (Nov. 11)
6. Bifurcation diagrams (Nov. 13)
7. Spiral sinks and spiral sources, normal form calculation (Nov. 15)
8. More normal form calculations, complexification (Nov. 18)

Chapter 4. Topics
1. Setup for Hopf Bifurcation Theorem (Nov. 20)
2. More Hopf Bifurcation Theorem (Nov. 22, 25)
3. Setup for Melnikov method (Nov. 25, Dec. 2)
4. Using Melnikov for existence of a transverse homoclinic (Dec. 4)
5. Calculation of order term (Dec. 6)
6. An example of a Melnikov calculation (Dec. 9)
7. Averaging (Dec. 11)

Index

INTRODUCTION

In Fall 1996, Professor G. R. Hall gave a class at Boston University titled "MA 775 - Ordinary Differential Equations". The following pages are a (hopefully accurate) copy of the notes I took in this class. He deserves all the credit for assembling the material and presenting it. The only mathematics I ended up doing was checking and making sure I had all of the inequalities pointing the right way, etc.

This document was produced using AMS-LaTeX macros on top of Version 3.1415 of the LaTeX2e compiler. I used the amsbook documentclass. The only packages I had to import were epsfig and graphicx for the pictures, and theorem to define the various theorem environments. All of the pictures were made either with xfig or Mathematica, and then dumped into PostScript.

For further information (including how to get a copy of this PostScript file), please go to http://math.bu.edu/people/deville/Notes/. The latest versions of this file will be maintained there when corrections are made. If you find any errors, or have any general comments on the notes, please send email to me at deville@math.bu.edu. I will maintain an errata page for these notes (and all future releases), and any and all comments are greatly appreciated.


In this particular version, a few of the pictures are still not in. They tend to be more toward the end, and most of them concern the Melnikov estimates. I plan to do the rest eventually, but they're hard and complicated, so relax.

CHAPTER 1

Introduction
1. Some preliminary definitions (Sept. 4)

Definition 1.1. Given a vector field f : R^n \to R^n, the differential equation associated with f is

(1) \dot{x} = f(x)
(2) \frac{dx}{dt} = f(x), \quad x = (x_1, x_2, \ldots, x_n).

Definition 1.2. A solution is a curve \gamma : R \to R^n such that

\dot{\gamma}(t) = \frac{d\gamma}{dt} = f(\gamma(t)) for all t.

Definition 1.3. An initial condition is x_0 \in R^n. An initial value problem is

(3) \dot{x} = f(x), \quad x(0) = x_0,

such that the solution is \gamma(t) with \gamma(0) = x_0.

Definition 1.4. A flow is a map \varphi : R \times R^n \to R^n such that

(4) \varphi(0, x_0) = x_0 for all x_0 \in R^n,
(5) \varphi(s + t, x_0) = \varphi(s, \varphi(t, x_0)).

Equation (5) is known as the group property. A flow \varphi is a solution of the differential equation \dot{x} = f(x) if

(6) \frac{\partial \varphi}{\partial t}(t, x_0) = f(\varphi(t, x_0)) for all t, x_0.

Example: If \dot{x} = x on R, then we have x(t) = x_0 e^t with x(0) = x_0, so

\varphi(t, x_0) = x_0 e^t.

Claim: This is a flow.
Proof: (4) is simple to check. For (5), we see that

\varphi(s + t, x_0) = x_0 e^{s+t}

and

\varphi(s, \varphi(t, x_0)) = \varphi(s, x_0 e^t) = (x_0 e^t) e^s = x_0 e^{s+t}.
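The claim is also easy to check numerically. Here is a small Python sketch (not part of the original notes) verifying properties (4)-(6) for the flow phi(t, x0) = x0 e^t of x' = x:

```python
import math

def phi(t, x0):
    # Explicit flow of x' = x on R: phi(t, x0) = x0 * e^t.
    return x0 * math.exp(t)

# Property (4): phi(0, x0) = x0.
assert phi(0.0, 2.5) == 2.5

# Group property (5): phi(s + t, x0) = phi(s, phi(t, x0)).
s, t, x0 = 0.7, -1.3, 2.5
assert abs(phi(s + t, x0) - phi(s, phi(t, x0))) < 1e-12

# Property (6): d/dt phi(t, x0) = f(phi(t, x0)) with f(x) = x,
# checked with a centered difference quotient.
h = 1e-6
deriv = (phi(t + h, x0) - phi(t - h, x0)) / (2 * h)
assert abs(deriv - phi(t, x0)) < 1e-6
```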



The group property says that the "rules of evolution" (i.e. the vector field) do not change with time, i.e. f : R^n \to R^n does not depend on t. Such equations are called autonomous. A "good theorem" would be that every "reasonable" differential equation has (almost) a flow for a solution, and the flow is "nice".

Theorem 1.1. Given a smooth (C^1) flow \varphi : R \times R^n \to R^n, it is the solution of a differential equation, i.e. there exists a vector field f : R^n \to R^n with \varphi as its solution.

Proof: Define f(x_0) = \frac{\partial \varphi}{\partial t}(0, x_0). Check that \varphi is a solution for \dot{x} = f(x), i.e.

\frac{\partial \varphi}{\partial t}(t, x_0) = f(\varphi(t, x_0)) for all t, x_0.

Now, since \varphi(\Delta t + t, x_0) = \varphi(\Delta t, \varphi(t, x_0)),

\frac{\partial \varphi}{\partial t}(t, x_0) = \lim_{\Delta t \to 0} \frac{\varphi(\Delta t, \varphi(t, x_0)) - \varphi(0, \varphi(t, x_0))}{\Delta t} = \frac{\partial \varphi}{\partial t}(0, \varphi(t, x_0)) \overset{\text{def}}{=} f(\varphi(t, x_0)).

Example:

\ddot{\theta} = -\sin(\theta)

Convert to

\dot{\theta} = \omega
\dot{\omega} = -\sin\theta

Example:

\ddot{y} + 3\dot{y} + 2y = \cos(2t)

The 3\dot{y} term is damping, the 2y term is Hooke's Law, and the \cos(2t) term is an external forcing term. Convert this to

\dot{y} = v
\dot{v} = -3v - 2y + \cos(2t)
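This conversion to a first-order system is exactly what a numerical integrator wants. A quick Python sketch (not from the notes), integrating the system above with a hand-rolled RK4 step:

```python
import math

def rhs(t, state):
    # First-order form of y'' + 3y' + 2y = cos(2t):
    #   y' = v,  v' = -3v - 2y + cos(2t).
    y, v = state
    return (v, -3.0 * v - 2.0 * y + math.cos(2.0 * t))

def rk4_step(f, t, state, h):
    k1 = f(t, state)
    k2 = f(t + h/2, tuple(s + h/2 * k for s, k in zip(state, k1)))
    k3 = f(t + h/2, tuple(s + h/2 * k for s, k in zip(state, k2)))
    k4 = f(t + h, tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h/6 * (a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Integrate from (y, v) = (1, 0); the homogeneous part decays
# (eigenvalues -1 and -2), so the solution approaches the forced
# response y_p(t) = -(1/20) cos(2t) + (3/20) sin(2t).
t, state, h = 0.0, (1.0, 0.0), 0.01
for _ in range(1000):
    state = rk4_step(rhs, t, state, h)
    t += h
```

By t = 10 the transient has essentially died out and the computed y(t) agrees with the particular solution to better than 10^{-3}.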


This is not autonomous. Using a new time variable s, we get

\frac{dy}{ds} = v
\frac{dv}{ds} = -3v - 2y + \cos(2t)
\frac{dt}{ds} = 1

Be careful with the initial conditions in this case, since t(s) = s + c.

Example: A worse example. Consider dx/dt = x^2. If we solve, we get

x(t) = -\frac{1}{t + C}

and C = -1/x_0. We get the "flow"

\varphi(t, x_0) = -\frac{1}{t - 1/x_0} = \frac{x_0}{1 - x_0 t}.

We can check that this is a flow, but the problem is that the solution goes to \infty in finite time, and (for x_0 > 0) is only defined for t \in (-\infty, 1/x_0).
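The finite-time blow-up is easy to see numerically. A small Python sketch (not from the notes), using the explicit local flow of x' = x^2:

```python
def phi(t, x0):
    # Local flow of x' = x^2: phi(t, x0) = x0 / (1 - x0 * t),
    # defined (for x0 > 0) only while t < 1/x0.
    return x0 / (1.0 - x0 * t)

x0 = 2.0  # solution exists for t < 1/x0 = 0.5
# Local-flow property (8), where both sides are defined:
s, t = 0.1, 0.2
assert abs(phi(s + t, x0) - phi(s, phi(t, x0))) < 1e-12

# The solution leaves every bounded set as t -> 1/x0:
assert phi(0.499999, x0) > 1e5
```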

2. More examples, changing variables (Sept. 6)

Remark: Think of \varphi(\cdot, x_0) : t \mapsto \varphi(t, x_0) as starting at x_0, and going as far as it can. The advantage of thinking of this as a flow is that we can fix a time T and ask what happens to all (or a large set of) initial conditions after time T. Continuity with respect to initial conditions says that for any fixed t, if x_0 and x are sufficiently close, then \varphi(t, x_0) and \varphi(t, x) are close. Our earlier example \dot{x} = x^2 has a solution "flow"

\varphi(t, x_0) = -\frac{x_0}{x_0 t - 1}.

Definition 2.1. A local flow is a map \varphi : U \to R^n, where U is an open subset of R \times R^n such that, for each x_0 \in R^n,

((-\infty, \infty) \times \{x_0\}) \cap U = (-a, b) \times \{x_0\},

and

(7) \varphi(0, x_0) = x_0
(8) \varphi(s + t, x_0) = \varphi(s, \varphi(t, x_0)) whenever both sides are defined.

Example: The solution of \dot{x} = x^2 has local flow

\varphi(t, x_0) = -\frac{x_0}{x_0 t - 1}

defined on

-\infty < t < 1/x_0 for x_0 > 0, \quad -\infty < t < \infty for x_0 = 0, \quad 1/x_0 < t < \infty for x_0 < 0.
Definition 2.2. A semiflow is a map \varphi : R^+ \times R^n \to R^n satisfying

(9) \varphi(0, x_0) = x_0
(10) \varphi(s + t, x_0) = \varphi(s, \varphi(t, x_0)) where they are defined.

Example: The Kepler Problem. Let q be the difference in position of two masses:

\ddot{q} = -G\mu \frac{q}{\|q\|^3},

where G is the gravity constant and \mu is a constant related to the masses of the objects. Make this a first-order system (q = (q_1, q_2)):

(11) \dot{q}_1 = p_1, \quad \dot{q}_2 = p_2, \quad \dot{p}_1 = \frac{-G\mu\, q_1}{(\sqrt{q_1^2 + q_2^2})^3}, \quad \dot{p}_2 = \frac{-G\mu\, q_2}{(\sqrt{q_1^2 + q_2^2})^3}.

This is not a vector field on all of R^4, because

\frac{1}{(\sqrt{q_1^2 + q_2^2})^3} \to \infty as q_1, q_2 \to 0.

The phase space for this problem is R^4 \setminus \{(0, 0, p_1, p_2)\} ("the collision set"). There are solutions which approach the collision set in finite time.

We can define a differential equation on any object where you have a good notion of tangent vector, such as R^n, open subsets of R^n, smooth surfaces in R^n, or manifolds (which look locally like R^n).

Definition 2.3. Given a flow or a differential equation \dot{x} = f(x), the orbit of x_0 is

O(x_0) = \{\varphi(t, x_0) : \varphi defined at t\},

i.e., the image of the solution curve through x_0. We also define the forward orbit of x_0:

O^+(x_0) = \{\varphi(t, x_0) : t \ge 0\}.


Definition 2.4. A fixed point, rest point, critical point, or stationary point is a point x_0 such that \varphi(t, x_0) = x_0 for all t. x_0 is called a periodic point if \varphi(T, x_0) = x_0 for some T > 0 but \varphi(t, x_0) \ne x_0 for 0 < t < T. T is called the least period.

Philosophy: For ODEs we think of studying flows which come from nice vector fields. The tools we use are calculus tools (local analysis) to build a global picture of the flow.

When are two flows (or differential equations) "the same"? Given a differential equation with vector field f : U \to R^n, suppose U has coordinates X = (x_1, x_2, \ldots, x_n), and V has coordinates Y = (y_1, y_2, \ldots, y_n). The differential equation on U is \dot{X} = f(X). Given a smooth homeomorphism h : V \to U which is smoothly invertible (i.e. h has a sufficient number of partials, and so does h^{-1}), we can define a vector field and a flow on V. Move the vector field: define g : V \to R^n by

(12) g(Y) = (Dh^{-1})|_{h(Y)} (f(h(Y))),

where Dh^{-1} is the derivative of h^{-1} : U \to V.

Figure 1. The map h.

3. Differentiation, change of time variable (Sept. 9)

The derivative is the "best linear approximation". To get a notion of the best linear approximation, we need a linear space. Start at x \in U, with h(x) \in V. We denote the tangent space to U at x by T_x U. The derivative of h is a linear map from T_x U to T_{h(x)} V, i.e.

Dh|_x(v) = w, \quad v \in T_x U, \quad w \in T_{h(x)} V.

To compute Dh|_x(v), pick v \in T_x U. Think of it as a velocity vector of a curve on U through x. Call this curve \gamma(t); then h(\gamma(t)) is a curve on V. The velocity vector of h(\gamma(t)) at h(x) is the velocity of the image of \gamma on V.

Figure 2. The derivative maps tangent planes to tangent planes.

So, define Dh|_x(v) = w, where w is the velocity vector of h(\gamma(t)) at h(x). If

h(x_1, x_2, \ldots, x_n) = (h_1(x_1, \ldots, x_n), h_2(x_1, \ldots, x_n), \ldots, h_n(x_1, \ldots, x_n)),

then

Dh|_x = \begin{pmatrix} \frac{\partial h_1}{\partial x_1} & \cdots & \frac{\partial h_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial h_n}{\partial x_1} & \cdots & \frac{\partial h_n}{\partial x_n} \end{pmatrix}.

For example, start at x and move with velocity (1, 0, 0, \ldots, 0):

\lim_{t \to 0} \frac{h(x_1 + t, x_2, \ldots, x_n) - h(x_1, x_2, \ldots, x_n)}{t} = \left( \frac{\partial h_1}{\partial x_1}, \frac{\partial h_2}{\partial x_1}, \ldots, \frac{\partial h_n}{\partial x_1} \right).

We get

Dh|_x \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} = \left( \frac{\partial h_1}{\partial x_1}, \frac{\partial h_2}{\partial x_1}, \ldots, \frac{\partial h_n}{\partial x_1} \right)^T.


Example: h : R^2 \to R^2,

h(x, y) = (x^2 + 2xy, y + \cos(x)) = (z, w), \quad h(1, 1) = (3, 1 + \cos(1)),

Dh = \begin{pmatrix} 2x + 2y & 2x \\ -\sin(x) & 1 \end{pmatrix}, \quad Dh|_{(1,1)} = \begin{pmatrix} 4 & 2 \\ -\sin(1) & 1 \end{pmatrix}, \quad Dh|_{(1,1)} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 4 \\ -\sin(1) \end{pmatrix}.

Let L(X \to Y) denote the linear maps from X to Y. If h : R^2 \to R^2, then

Dh : R^2 \to L(R^2 \to R^2) = R^4,
D^2 h : R^2 \to L(R^2 \to L(R^2 \to R^2)) = L(R^2 \to R^4) = R^8.

Back to changing variables. Let U \subset R^n be open, and f : U \to R^n be a vector field. Then \dot{x} = f(x) is a differential equation. Let x_0 be an initial condition. Then we have a solution curve x(t), and also a solution (local) flow \varphi : R \times U \to U.
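The Jacobian computed in the example above can be double-checked against difference quotients. A Python sketch (not from the notes):

```python
import math

def h(x, y):
    return (x*x + 2*x*y, y + math.cos(x))

def jacobian_analytic(x, y):
    # Dh = [[2x + 2y, 2x], [-sin(x), 1]]
    return [[2*x + 2*y, 2*x], [-math.sin(x), 1.0]]

def jacobian_numeric(x, y, eps=1e-6):
    # Centered differences, one coordinate direction at a time.
    hx_p, hx_m = h(x + eps, y), h(x - eps, y)
    hy_p, hy_m = h(x, y + eps), h(x, y - eps)
    return [[(hx_p[i] - hx_m[i]) / (2*eps), (hy_p[i] - hy_m[i]) / (2*eps)]
            for i in range(2)]

A = jacobian_analytic(1.0, 1.0)   # [[4, 2], [-sin(1), 1]]
B = jacobian_numeric(1.0, 1.0)
assert all(abs(A[i][j] - B[i][j]) < 1e-5 for i in range(2) for j in range(2))
```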

Figure 3. The map h.

Given h : V \to U (i.e. new variables to old variables):

1. Move the vector field: define g : V \to R^n by

g(z) = (Dh^{-1})|_{h(z)}\, f(h(z)).

Note:

D(h^{-1})|_{h(z)} = (Dh|_z)^{-1},

so

g(z) = (Dh|_z)^{-1} f(h(z)).


2. Move the solution curves: given z_0 \in V, we have x_0 = h(z_0). Use that as the initial condition to obtain a solution x(t). Then define z(t) = h^{-1}(x(t)).

Claim: These are the same operation.

Proof:

\frac{dz}{dt} = Dh^{-1}\left(\frac{dx}{dt}\right) = Dh^{-1}(f(x(t))) = Dh^{-1}(f(h(z(t)))).

So dz/dt = g(z).

Definition 3.1. Given (local) flows \varphi : R \times U \to U and \psi : R \times V \to V, we say \varphi, \psi are conjugate if there exists a homeomorphism h : V \to U such that

(13) h(\psi(t, z_0)) = \varphi(t, h(z_0)).

Figure 4. A conjugacy.

If h is not a diffeomorphism, we can still use it as a conjugacy for flows, but we can't move the vector field.

Remark: Conjugacy is a very strong notion of "the same".

Example: X \in R^n, \dot{X} = AX. Do a linear change of variables, Y = PX, where P is an n \times n matrix. So, in the new variables,

\dot{Y} = P\dot{X} = PAX = PAP^{-1}Y.

Example:

\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}.

The eigenvalues are 1 \pm \sqrt{2}, so there is a P such that

PAP^{-1} = \begin{pmatrix} 1 + \sqrt{2} & 0 \\ 0 & 1 - \sqrt{2} \end{pmatrix}.

So, let Y = PX; then

\dot{Y} = \begin{pmatrix} 1 + \sqrt{2} & 0 \\ 0 & 1 - \sqrt{2} \end{pmatrix} Y, \quad Y = \begin{pmatrix} z \\ w \end{pmatrix},

so

\dot{z} = (1 + \sqrt{2})z \quad and \quad \dot{w} = (1 - \sqrt{2})w.

These are conjugate linear systems.

Another change of variables is a change in the time variable. Start with f : U \to R^n, with a solution (local) flow. Pick x_0 \in U, and let x(t) be the solution through x_0. Change to a new time variable S(t), so the new time is a function of the old time. Let T(s) = t, so that S^{-1} = T, and old time is a function of new time. So what differential equation is x(T(s)) a solution of?
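The eigenvalues 1 \pm \sqrt{2} above can be confirmed directly from the trace and determinant of the 2 \times 2 matrix. A quick Python sketch (not from the notes):

```python
import math

# Characteristic polynomial of A = [[1, 2], [1, 1]]:
# (1 - lam)^2 - 2 = 0, so lam = 1 +/- sqrt(2).
a, b, c, d = 1.0, 2.0, 1.0, 1.0
tr, det = a + d, a*d - b*c
disc = math.sqrt(tr*tr - 4*det)
lam1, lam2 = (tr + disc)/2, (tr - disc)/2
assert abs(lam1 - (1 + math.sqrt(2))) < 1e-12
assert abs(lam2 - (1 - math.sqrt(2))) < 1e-12

# An eigenvector for lam: (A - lam I)v = 0; take v = (2, lam - 1).
for lam in (lam1, lam2):
    v = (2.0, lam - 1.0)
    Av = (a*v[0] + b*v[1], c*v[0] + d*v[1])
    assert abs(Av[0] - lam*v[0]) < 1e-12 and abs(Av[1] - lam*v[1]) < 1e-12
```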

4. Change of time variables (Sept. 11)

Start with a vector field f : U \to R^n, x_0 \in R^n. Change time variables: get a new time variable s, with s = S(t) and t = T(s). For what differential equation is x(T(s)) the solution? Differentiate:

\frac{d}{ds} x(T(s)) = \dot{x}(T(s)) \frac{dt}{ds} = f(x(T(s))) \left.\frac{dt}{ds}\right|_s.

The effect of changing to a new time is only on the speed, not the direction.

Start with f : U \to R^n, all as above. Suppose we have \alpha : U \to R^+, smooth. Then we can make a new vector field

g(x) = \alpha(x) f(x).

What are the solution curves of \dot{x} = g(x)? Start with an IC x_0. Let x(t) be the solution of dx/dt = f(x) with x(0) = x_0. The goal is to change x(t) into a solution of dx/ds = g(x) by defining a new time variable. Suppose T(s) is a change of time variable such that

\frac{d}{ds} x(T(s)) = g(x(T(s))).

But then

f(x(T(s))) \left.\frac{dt}{ds}\right|_s = \alpha(x(T(s))) f(x(T(s))),

so we need

\left.\frac{dt}{ds}\right|_s = \alpha(x(T(s))).

If there exists such a T(s), then we can change from one system to the other just by changing the time variable. But this equation is just a 1-dimensional ODE where we know \alpha and x. By the Existence-Uniqueness Theorem, there is a unique solution for each initial condition. A nice feature is that if you change the speeds (lengths) of the vector field, you don't change the phase portrait. For example, the system

\dot{x} = -x, \quad \dot{y} = 2y

and the system

\dot{x} = -x(x^2 + y^2 + 1), \quad \dot{y} = 2y(x^2 + y^2 + 1)

have exactly the same phase portraits.

Another notion of when flows are "the same":

Definition 4.1. Two flows \varphi : U \times R \to U and \psi : V \times R \to V are said to be topologically equivalent if there exists a homeomorphism h : V \to U such that, for all y_0 \in V,

(14) h(\{\psi(t, y_0) : t \in R\}) = \{\varphi(t, h(y_0)) : t \in R\},

where the orientation of increasing time is also preserved.

Example: Zharkovskii's model of glider flight. (Theory of Oscillators, Andronov.) Velocity:

\vec{v} = v(\cos\theta, \sin\theta), \quad \dot{\vec{v}} = \dot{v}(\cos\theta, \sin\theta) + v\dot{\theta}(-\sin\theta, \cos\theta).

The model is

(15) m \frac{dv}{dt} = -mg\sin\theta - \frac{1}{2}\rho F C_x v^2
(16) mv \frac{d\theta}{dt} = -mg\cos\theta + \frac{1}{2}\rho F C_y v^2

where m is mass, g the acceleration due to gravity, \rho the density of air, F the area of the wing, C_x the resistance to motion per unit area of wing, and C_y the lift per unit area of the wing.

Figure 5. Here the glider is moving with speed v and angle \theta.

So we get

\frac{dv}{dt} = -g\sin\theta - \frac{\rho F C_x}{2m} v^2
\frac{d\theta}{dt} = \frac{1}{v}\left( -g\cos\theta + \frac{\rho F C_y}{2m} v^2 \right)

What are the essential parameters for the qualitative behavior? We have control of the units of distance and time. Use a new variable y for v, such that v = ky, with k constant:

\frac{dy}{dt} = -\frac{g}{k}\sin\theta - \frac{k\rho F C_x}{2m} y^2
\frac{d\theta}{dt} = \frac{1}{y}\left( -\frac{g}{k}\cos\theta + \frac{k\rho F C_y}{2m} y^2 \right)

Use a new time variable \tau with \tau = \beta t:

\frac{dy}{d\tau} = \frac{dy}{dt}\frac{dt}{d\tau} = \frac{1}{\beta}\frac{dy}{dt},

i.e. multiply the old vector field by 1/\beta to get:

\frac{dy}{d\tau} = -\frac{g}{\beta k}\sin\theta - \frac{k\rho F C_x}{2\beta m} y^2
\frac{d\theta}{d\tau} = \frac{1}{y}\left( -\frac{g}{\beta k}\cos\theta + \frac{k\rho F C_y}{2\beta m} y^2 \right)

Pick \beta = g/k and k\rho F C_y / (2\beta m) = 1. Thus

\beta = \sqrt{\frac{g\rho F C_y}{2m}}, \quad k = \sqrt{\frac{2mg}{\rho F C_y}},

and

\frac{dy}{d\tau} = -\sin\theta - \frac{C_x}{C_y} y^2
\frac{d\theta}{d\tau} = \frac{1}{y}\left( -\cos\theta + y^2 \right)

Let a = C_x/C_y = (drag / area) / (lift / area). This is the only parameter. What does this say? It says that the qualitative behavior, up to topological equivalence, depends only on C_x/C_y. This model works for a glider moving through air, but also works for a submarine in water. This process is called nondimensionalization. When ky = v, choose a k with units of meters per second, and then y has no units. Choose k to be some characteristic velocity; we can use k to focus attention on different regions of the v variable.
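As a numerical sanity check of the claim earlier in this section that rescaling a vector field by a positive function preserves orbits: the systems (x' = -x, y' = 2y) and its (x^2 + y^2 + 1)-rescaled version both conserve x^2 y (since d/dt(x^2 y) = -2x^2 y + 2x^2 y = 0), so both orbits through (1, 1) lie on x^2 y = 1. A Python sketch (not from the notes):

```python
def rk4(f, state, h, steps):
    for _ in range(steps):
        k1 = f(state)
        k2 = f(tuple(s + h/2*k for s, k in zip(state, k1)))
        k3 = f(tuple(s + h/2*k for s, k in zip(state, k2)))
        k4 = f(tuple(s + h*k for s, k in zip(state, k3)))
        state = tuple(s + h/6*(a + 2*b + 2*c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

def f(state):                       # x' = -x, y' = 2y
    x, y = state
    return (-x, 2*y)

def g(state):                       # same directions, rescaled speed
    x, y = state
    s = x*x + y*y + 1.0
    return (-x*s, 2*y*s)

# Both orbits through (1, 1) stay on the curve x^2 y = 1.
for field, h, steps in ((f, 1e-3, 1000), (g, 1e-4, 500)):
    x, y = rk4(field, (1.0, 1.0), h, steps)
    assert abs(x*x*y - 1.0) < 1e-6
```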

5. Example, using the 2-body problem (Sept. 13)

Collision in the 2-body problem:

m_1 \ddot{q}_1 = \frac{G m_1 m_2}{\|q_1 - q_2\|^2} \cdot \frac{q_2 - q_1}{\|q_2 - q_1\|}
m_2 \ddot{q}_2 = \frac{G m_1 m_2}{\|q_1 - q_2\|^2} \cdot \frac{q_1 - q_2}{\|q_1 - q_2\|}

Let u be the center of mass:

u = \frac{m_1 q_1 + m_2 q_2}{m_1 + m_2}.

We see that \ddot{u} = 0, so \dot{u} = a and u(t) = at + b. Conservation of momentum tells us that

\dot{u} = \frac{m_1 \dot{q}_1 + m_2 \dot{q}_2}{m_1 + m_2}.

Choosing u as above, and letting r = q_1 - q_2, think of (u, r) as new coordinates, and only consider the r equation. So in (u, r) variables (with m_1\dot{q}_1 = p_1, m_2\dot{q}_2 = p_2),

\dot{u} = v, \quad \dot{v} = 0, \quad \dot{r} = s, \quad \dot{s} = -G\mu \frac{r}{\|r\|^3},

where \mu = m_1 + m_2.

Remark: This is how masses of planets are determined, by looking at orbits of their moons.

Definition 5.1. A constant of motion or integral for a differential equation \dot{x} = f(x) is a function from the phase space to R such that h(x(t)) is constant for every solution.

For

\dot{r} = s, \quad \dot{s} = -G\mu \frac{r}{\|r\|^3},

let r = (x, y) and s = (z, w); then

(17) H(x, y, z, w) = \frac{z^2 + w^2}{2} - \frac{G\mu}{\sqrt{x^2 + y^2}}

is a constant of motion (energy). Note: In fact,

\dot{x} = \frac{\partial H}{\partial z}, \quad \dot{y} = \frac{\partial H}{\partial w}, \quad \dot{z} = -\frac{\partial H}{\partial x}, \quad \dot{w} = -\frac{\partial H}{\partial y}.
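Energy conservation can be observed numerically along any computed Kepler trajectory. A Python sketch (not from the notes, with G\mu set to 1 for convenience):

```python
import math

G_MU = 1.0  # G * mu, set to 1 for the sketch

def field(state):
    # xdot = z, ydot = w, zdot = -G_MU x / r^3, wdot = -G_MU y / r^3
    x, y, z, w = state
    r3 = (x*x + y*y) ** 1.5
    return (z, w, -G_MU * x / r3, -G_MU * y / r3)

def energy(state):
    x, y, z, w = state
    return 0.5 * (z*z + w*w) - G_MU / math.hypot(x, y)

def rk4_step(f, state, h):
    k1 = f(state)
    k2 = f(tuple(s + h/2*k for s, k in zip(state, k1)))
    k3 = f(tuple(s + h/2*k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h*k for s, k in zip(state, k3)))
    return tuple(s + h/6*(a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# A circular orbit of radius 1 needs speed sqrt(G_MU) = 1.
state = (1.0, 0.0, 0.0, 1.0)
h0 = energy(state)           # = -0.5 for this orbit
for _ in range(2000):        # integrate to t = 20, a few revolutions
    state = rk4_step(field, state, 0.01)
assert abs(energy(state) - h0) < 1e-6
```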


So, if (x(t), y(t), z(t), w(t)) is a solution, then

\frac{d}{dt} H(x, y, z, w) = \frac{\partial H}{\partial x}\dot{x} + \frac{\partial H}{\partial y}\dot{y} + \frac{\partial H}{\partial z}\dot{z} + \frac{\partial H}{\partial w}\dot{w} = 0.

Definition 5.2. Given a function H(x_1, x_2, \ldots, x_n, y_1, y_2, \ldots, y_n) : R^{2n} \to R, the Hamiltonian system for H is

(18) \dot{x}_1 = \frac{\partial H}{\partial y_1}, \quad \dot{y}_1 = -\frac{\partial H}{\partial x_1}, \quad \ldots, \quad \dot{x}_n = \frac{\partial H}{\partial y_n}, \quad \dot{y}_n = -\frac{\partial H}{\partial x_n}.

Kepler's Laws:
1. Planets move on a conic with the sun at one focus.
2. The radius vector sweeps out equal areas in equal times.
3. The semimajor axis cubed is proportional to the period squared.

Note: There are some collisions, i.e. some solutions cannot be continued for all time. What happens close to collision, and is the singularity removable? We want to preserve continuity as much as possible. Physically, we want to have a "bounce". What about other force laws?

\ddot{r} = -G\mu \frac{r}{\|r\|^{a+1}}

How do these orbits pass near 0? When a = 2 above, it is the Newtonian problem. Here

H(x, y, z, w) = \frac{z^2 + w^2}{2} - \frac{G\mu}{(a - 1)(\sqrt{x^2 + y^2})^{a-1}}

and

\dot{x} = z, \quad \dot{y} = w, \quad \dot{z} = \frac{-G\mu\, x}{(\sqrt{x^2 + y^2})^{a+1}}, \quad \dot{w} = \frac{-G\mu\, y}{(\sqrt{x^2 + y^2})^{a+1}}.

Our goal is to see what happens near r = 0. First, change variables, using polar coordinates, "blowing up" the origin. Start with (x, y) = (\rho^\gamma \cos\theta, \rho^\gamma \sin\theta), with \gamma a constant. We will also use complex notation, i.e. x + iy = \rho^\gamma e^{i\theta}. Also choose new velocity coordinates

z + iw = \rho^\beta e^{i\theta}(u + iv),

Figure 6. This change of coordinates is singular at \rho = 0.

where u is the "radial velocity", v the "angular velocity", and \beta a constant:

(19) h\begin{pmatrix} \rho \\ \theta \\ u \\ v \end{pmatrix} = \begin{pmatrix} \rho^\gamma \cos\theta \\ \rho^\gamma \sin\theta \\ \rho^\beta(u\cos\theta - v\sin\theta) \\ \rho^\beta(u\sin\theta + v\cos\theta) \end{pmatrix} = \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}.

So we know that

z + iw = \dot{x} + i\dot{y} = \frac{d}{dt}(\rho^\gamma e^{i\theta}) = \gamma\rho^{\gamma-1}\dot{\rho}\, e^{i\theta} + i\dot{\theta}\rho^\gamma e^{i\theta}

and

\rho^\beta e^{i\theta}(u + iv) = \gamma\rho^{\gamma-1}\dot{\rho}\, e^{i\theta} + i\dot{\theta}\rho^\gamma e^{i\theta},

so we get, finally,

\dot{\rho} = \frac{1}{\gamma}\rho^{1-\gamma+\beta} u, \quad \dot{\theta} = \rho^{\beta-\gamma} v.


6. McGehee Collision Manifold (Sept. 16)

Recall that we have

\dot{x} = z, \quad \dot{y} = w, \quad \dot{z} = \frac{-G\mu\, x}{(\sqrt{x^2 + y^2})^{a+1}}, \quad \dot{w} = \frac{-G\mu\, y}{(\sqrt{x^2 + y^2})^{a+1}},

where a = 2 is the "inverse-square" Newtonian problem. To look near the collision, we set

(x, y) = (\rho^\gamma\cos\theta, \rho^\gamma\sin\theta), \quad x + iy = \rho^\gamma e^{i\theta}, \quad z + iw = \rho^\beta e^{i\theta}(u + iv),

and we can choose \gamma, \beta below. We ended up with

\dot{\rho} = \frac{1}{\gamma}\rho^{1-\gamma+\beta} u, \quad \dot{\theta} = \rho^{\beta-\gamma} v.

To get \dot{u}, \dot{v}, we use:

\dot{z} + i\dot{w} = \beta\rho^{\beta-1}\dot{\rho}\, e^{i\theta}(u + iv) + i\dot{\theta} e^{i\theta}\rho^\beta(u + iv) + \rho^\beta e^{i\theta}(\dot{u} + i\dot{v})

and

\dot{z} + i\dot{w} = -G\mu \frac{x + iy}{(\sqrt{x^2 + y^2})^{a+1}} = -G\mu \frac{\rho^\gamma e^{i\theta}}{\rho^{\gamma(a+1)}} = -G\mu\, \rho^{-\gamma a} e^{i\theta}.

We can solve for \dot{u} and \dot{v}, and after a lot of calculation, get:

\dot{u} = -G\mu\, \rho^{-\gamma a - \beta} - \frac{\beta}{\gamma}\rho^{\beta-\gamma} u^2 + \rho^{\beta-\gamma} v^2
\dot{v} = -\left(\frac{\beta}{\gamma} + 1\right)\rho^{\beta-\gamma} uv.

Now choose \gamma, \beta with the condition that -\gamma a - \beta = \beta - \gamma. If we also pick \beta so that \beta - \gamma = 0, then \gamma = 0, which doesn't work (since that would give x + iy = e^{i\theta}). Let \beta - \gamma = -1, and then solve for \gamma. In the case that a = 2, we get \gamma = 2/3, \beta = -1/3:

\dot{\rho} = \frac{3}{2} u, \quad \dot{\theta} = \rho^{-1} v,
\dot{u} = \rho^{-1}\left( -G\mu + \frac{1}{2}u^2 + v^2 \right), \quad \dot{v} = \rho^{-1}\left( -\frac{1}{2}uv \right).

The vector field doesn't exist at \rho = 0. But we can do a change of time coordinate, multiplying the vector field by \rho. This corresponds to a new time \tau where dt/d\tau = \rho. So our new clock "runs slow" at \rho = 0. Thus:

\rho' = \frac{d\rho}{d\tau} = \frac{3}{2}\rho u
\theta' = v
u' = -G\mu + \frac{1}{2}u^2 + v^2
v' = -\frac{1}{2}uv

What do we have?
1. We have removed the singularity at \rho = 0, and extended the vector field to \rho = 0.
2. Also, the system decouples: there are no \theta's in \rho', u', v', and no \theta's or \rho's in u' or v'.

Recall the energy in the new variables:

H(x, y, z, w) = \frac{z^2 + w^2}{2} - \frac{G\mu}{(a - 1)(\sqrt{x^2 + y^2})^{a-1}},

with x + iy = \rho^{2/3} e^{i\theta} and z + iw = \rho^{-1/3} e^{i\theta}(u + iv), so

H(\rho, \theta, u, v) = \rho^{-2/3}\frac{u^2 + v^2}{2} - \frac{G\mu}{\rho^{2/3}} = \rho^{-2/3}\left( \frac{u^2 + v^2}{2} - G\mu \right).

How do we use this? What happens when \rho = 0?


Since \rho' = \frac{3}{2}\rho u, if \rho(0) = 0 then \rho(\tau) = 0 for all \tau. What this means is that we have added a boundary to the phase space, on which

\rho' = 0
\theta' = v
u' = -G\mu + \frac{1}{2}u^2 + v^2
v' = -\frac{1}{2}uv

Figure 7. "Blowing up" the origin: the point x = y = 0 is replaced by the set \rho = 0.

If we fix H(\rho, \theta, u, v) = h (look at solutions with energy h), we get

h = \rho^{-2/3}\left( \frac{u^2 + v^2}{2} - G\mu \right), \quad i.e. \quad \rho^{2/3} h = \frac{u^2 + v^2}{2} - G\mu.

When \rho = 0 we have

\frac{u^2 + v^2}{2} - G\mu = 0.

We can say 3 things:
1. The behavior of the new vector field on the new boundary \rho = 0 is independent of h.
2. We can restrict attention to \rho = 0 and (\theta, u, v) with u^2 + v^2 = 2G\mu, i.e. a circle in (u, v)-space.
3. Note that when \rho = 0, on this circle u' = v^2/2.

So look at the phase space u^2 + v^2 = 2G\mu, for \theta \in [0, 2\pi]. With identifications, this gives us a torus.

Figure 8. The space we restrict to, when \rho = 0.

When we have \rho = 0, we know that u^2 + v^2 = 2G\mu, which gives us a circle in the (u, v)-plane, or a cylinder in (\theta, u, v)-space. But since \theta is periodic of period 2\pi, we can identify the "front" and the "back" of the cylinder to get a torus. So the torus will live in (\theta, u, v)-space with the restriction u^2 + v^2 = 2G\mu, and the vector field

\theta' = v
u' = \frac{v^2}{2}
v' = -\frac{1}{2}uv

Suppose I understand what happens for this flow on the torus. The original problem was to study orbits which come close to collision, i.e. come close to \rho = 0 (x = y = 0). An orbit coming close to \rho = 0 must behave almost the same as an orbit on \rho = 0, because of continuity of the solution flow.

Figure 9. Continuity with respect to initial conditions. The orbits that start close to the orbit on the torus stay close for some time.

This torus is called the McGehee Collision Manifold. The next step will be to analyze the flow for \rho = 0, i.e. "solve" these equations:

\theta' = v, \quad u' = \frac{v^2}{2}, \quad v' = -\frac{1}{2}uv.
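The circle u^2 + v^2 = 2G\mu really is invariant for this system, since d/d\tau(u^2 + v^2) = u v^2 - u v^2 = 0. A Python sketch (not from the notes, with 2G\mu = 1) checking this along a numerically integrated orbit on the collision manifold:

```python
import math

def torus_field(state):
    # Flow on the collision manifold (rho = 0), with 2*G*mu = 1 here:
    # theta' = v, u' = v^2 / 2, v' = -u v / 2.
    theta, u, v = state
    return (v, v*v/2, -u*v/2)

def rk4_step(f, state, h):
    k1 = f(state)
    k2 = f(tuple(s + h/2*k for s, k in zip(state, k1)))
    k3 = f(tuple(s + h/2*k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h*k for s, k in zip(state, k3)))
    return tuple(s + h/6*(a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (0.0, -math.sqrt(0.5), math.sqrt(0.5))  # u^2 + v^2 = 1
for _ in range(5000):
    state = rk4_step(torus_field, state, 0.01)
theta, u, v = state
# The circle u^2 + v^2 = 2*G*mu is invariant, and u has increased.
assert abs(u*u + v*v - 1.0) < 1e-7
assert u > -math.sqrt(0.5)
```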

7. Finishing analysis of collision manifold (Sept. 18)

We have added the boundary \rho = 0 to our phase space. v = 0 gives the rest points, and if v \ne 0, then u' > 0, i.e. u is increasing. \theta increases if v > 0, decreases if v < 0. Since u is always increasing, we can make u the time variable. Since u = u(\tau), we can think of \tau = \tau(u), the inverse. So

\frac{d\theta}{du} = \frac{\theta'}{u'} = \frac{2}{v}, \quad \frac{dv}{du} = \frac{v'}{u'} = -\frac{u}{v}.

Solving, we get

v\, dv = -u\, du, \quad v^2 = -u^2 + C, \quad v = \sqrt{C - u^2}.

We already know that u^2 + v^2 = 2G\mu, so C = 2G\mu. Thus

v = \sqrt{2G\mu - u^2}.

Plug in and solve for \theta:

\frac{d\theta}{du} = \frac{2}{\sqrt{2G\mu - u^2}}, \quad \theta(u) = \int \frac{2}{\sqrt{2G\mu - u^2}}\, du.

Think about this flow. Why does it have rest points? In the original problem, we know that there are solutions which go to collision as t \to t_0. In the new variables, there must be solutions which go to \rho = 0 as \tau \to \infty. The set of points which go to collision are the points which tend to rest points on the bottom (recall \rho' = \frac{3}{2}\rho u and u' = v^2/2 > 0 for v \ne 0). A solution that goes to collision must have u \le 0 for all \tau. The only way is to have the solution approach a rest point on the bottom. To pass close to collision means being on an orbit which is close to an orbit going to a rest point on \rho = 0. Follow the flow on \rho = 0 until near the top of \rho = 0, then leave near an ejection orbit. There are two ways to be close to collision in some direction \theta, "outside" or "inside" the sheet of orbits which go to collision.


Figure 10. There is a circle of rest points on the top (which are unstable) and a circle on the bottom (stable). Orbits which come near the top circle are ejected, and orbits which come near the bottom circle represent near-collision orbits.

Figure 11. The angle at which the point-mass leaves is sensitive with respect to initial conditions. The left picture is the trajectory in phase space near the collision manifold, and the right shows the corresponding paths of the masses in position space, i.e. "in reality".

At this point, there is no a priori reason that this picture couldn't happen, even in the case of the inverse-square law. The integral calculation below shows that in the case of the inverse-square law this picture could not happen, but it could happen with some other force law.

Question: When the orbit comes close to collision, in what direction does it leave collision? This is the same question as: "Given an orbit on \rho = 0 which as \tau \to -\infty goes to (\theta, u) = (\theta_0, -\sqrt{2G\mu}), what happens to that orbit as \tau \to \infty? Which rest point on the top does the orbit approach?"

(20) \Delta\theta = \int_{-\sqrt{2G\mu}}^{\sqrt{2G\mu}} \frac{2}{\sqrt{2G\mu - u^2}}\, du = \theta(\sqrt{2G\mu}) - \theta(-\sqrt{2G\mu}) = 2\pi.

On \rho = 0, solution curves connect a rest point on the bottom to a rest point on the top which turns out to be exactly above the point on the bottom. For other force laws, this is not the case.
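The value 2\pi for the integral in (20) can be confirmed with a crude midpoint rule (slow to converge because of the inverse-square-root endpoint singularities). A Python sketch (not from the notes, with 2G\mu = 1):

```python
import math

# Midpoint-rule estimate of
#   Integral over [-sqrt(A), sqrt(A)] of 2 / sqrt(A - u^2) du
# with A = 2*G*mu = 1; the exact value is 2*pi.
A = 1.0
N = 1_000_000
du = 2 * math.sqrt(A) / N
total = 0.0
for i in range(N):
    u = -math.sqrt(A) + (i + 0.5) * du
    total += 2.0 / math.sqrt(A - u*u) * du

assert abs(total - 2 * math.pi) < 1e-2
```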

CHAPTER 2

The Two Big Theorems

1. Existence-Uniqueness Theorem (Sept. 20)

Theorem 1.1 (The "Big Theorem"). Suppose U \subset R^n, f : U \to R^n is a vector field, and f is Lipschitz, i.e. there exists K such that for all x_1, x_2 \in U,

(21) \frac{\|f(x_1) - f(x_2)\|}{\|x_1 - x_2\|} < K.

Then there exists a local flow \varphi : V \to U, where V \subset R \times U and, for each x_0 \in U, \{t \mid (t, x_0) \in V\} is a single open interval, such that:
1. For all x \in U and all t \in R such that (t, x) \in V,

(22) \frac{\partial \varphi}{\partial t}(t, x) = f(\varphi(t, x)),

i.e. t \mapsto \varphi(t, x) is a solution with initial condition x_0 = \varphi(0, x).
2. \varphi is continuous on V.
3. \varphi is maximal (i.e. if x(t) is a solution of \dot{x} = f(x) and x is defined for t \in (-t_1, t_2), then (-t_1, t_2) \times \{x(0)\} \subset V).
4. \varphi is unique (i.e. if x(t) is a solution for t \in (-t_1, t_2), then x(t) = \varphi(t, x_0)).
Moreover, if f is C^r, C^\infty, C^\omega, then \varphi is C^r, C^\infty, C^\omega, respectively.

Remarks:
1. Condition 2 is called "continuity with respect to initial conditions". If we have x_1, x_2 close, and t_1, t_2 close, then \varphi(t_1, x_1) is close to \varphi(t_2, x_2). This does not imply that if x_1, x_2 are close, then \varphi(t, x_1) and \varphi(t, x_2) are close for all t.
2. We can extend this to cover two more cases. Given \dot{x} = f(x, t), we can rewrite this as

\frac{dx}{ds} = f(x, t), \quad \frac{dt}{ds} = 1.

This new vector field is just as smooth as f. Now, suppose f depends on parameters \lambda \in R^m, i.e. \dot{x} = f_\lambda(x). For example,

\dot{x} = ax + by^2, \quad \dot{y} = cx^2 + d\cos(\lambda y)

has parameters a, b, c, d, \lambda. We rewrite this as

\dot{x} = f_\lambda(x), \quad \dot{\lambda} = 0,

which is just as smooth as f. Given \dot{x} = f_\lambda(x) which is C^r in x and \lambda, then \varphi_\lambda(t, x) is C^r in x, \lambda, and t. This is known as continuity in parameters.
3. Much better existence theorems exist (we can even talk about f being "measurable"). However, this is the best possible result that also has uniqueness. For example:

f(x) = \begin{cases} \sqrt{x}, & x \ge 0 \\ 0, & x < 0 \end{cases}

We can make two solutions with x(0) = 0:

x_1(t) \equiv 0, \quad x_2(t) = \begin{cases} 0, & t \le 0 \\ t^2/4, & t > 0 \end{cases}

4. Suppose f is C^2; then \varphi is also C^2 and

\frac{\partial}{\partial t}\varphi(t, x) = f(\varphi(t, x)).

If we differentiate with respect to x:

\frac{\partial}{\partial t}(D_x \varphi(t, x)) = D_x f|_{\varphi(t, x)}\, D_x \varphi(t, x).

For fixed t, x \mapsto \varphi(t, x) is called the time-t map. D_x \varphi(t, x) is the n \times n matrix of partials of \varphi with respect to x. Consider the differential equation

\dot{X} = D_x f|_{\varphi(t, x_0)} X

for fixed x_0 \in U, where X(t) is an n \times n matrix. If t = 0, what is D_x \varphi(0, x)? Since x \mapsto \varphi(0, x) = x, then D_x \varphi(0, x) = I_n. So for fixed x_0, define the variational equation:

(23) \dot{X} = D_x f|_{\varphi(t, x_0)} X
(24) X(0) = I_n

This is a nonautonomous linear equation on R^{n^2}. The solution turns out to be D_x \varphi(t, x_0) = X(t). If f is C^r, the solution is at least C^{r-1}.

We will prove a bunch of little pieces of the theorem.
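The variational equation is easy to sanity-check in one dimension. A Python sketch (not from the notes): for x' = sin(x), integrate x' = sin(x) and X' = cos(x(t)) X, X(0) = 1, together, and compare X(t) with a finite-difference derivative of the flow.

```python
import math

def field(state):
    # Joint system: x' = sin(x), and the variational equation
    # X' = cos(x) * X (since d/dx sin(x) = cos(x)).
    x, X = state
    return (math.sin(x), math.cos(x) * X)

def rk4(f, state, h, steps):
    for _ in range(steps):
        k1 = f(state)
        k2 = f(tuple(s + h/2*k for s, k in zip(state, k1)))
        k3 = f(tuple(s + h/2*k for s, k in zip(state, k2)))
        k4 = f(tuple(s + h*k for s, k in zip(state, k3)))
        state = tuple(s + h/6*(a + 2*b + 2*c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

def flow(t, x0, h=1e-3):
    return rk4(field, (x0, 1.0), h, int(round(t / h)))[0]

x0, t, eps = 1.0, 2.0, 1e-5
_, X = rk4(field, (x0, 1.0), 1e-3, 2000)   # X ~ d phi/d x0 at t = 2
finite_diff = (flow(t, x0 + eps) - flow(t, x0 - eps)) / (2 * eps)
assert abs(X - finite_diff) < 1e-6
```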

2. Some proof of the Existence-Uniqueness Theorem (Sept. 25)

Theorem 2.1 (Cauchy's Existence Theorem). Suppose f : U \to R^n is analytic, so

f(x_1, x_2, \ldots, x_n) = (f_1(x_1, x_2, \ldots, x_n), \ldots, f_n(x_1, x_2, \ldots, x_n))

and, for k = 1, \ldots, n,

f_k(x_1, x_2, \ldots, x_n) = \sum_{l \in \mathbb{Z}_+^n} a^k_{l_1 l_2 \cdots l_n}\, x_1^{l_1} x_2^{l_2} \cdots x_n^{l_n}.

So if, for example, n = 2, expanding about 0,

f(x_1, x_2) = a_{00} + a_{10}x_1 + a_{01}x_2 + a_{02}x_2^2 + a_{11}x_1 x_2 + a_{20}x_1^2 + \cdots,

and the f_k's have nonzero radius of convergence at each point. If we expand about x^0 = (x^0_1, \ldots, x^0_n),

f(x_1, x_2) = a_{00} + a_{10}(x_1 - x^0_1) + a_{01}(x_2 - x^0_2) + \cdots.

Fix x_0 \in U. Then there exists x(t), where x : (-\varepsilon, \varepsilon) \to U for some \varepsilon > 0, such that:
1. x(0) = x_0,
2. x(t) is a solution,
3. if x(t) = (x_1(t), \ldots, x_n(t)), then

x_k(t) = \sum_{l=0}^{\infty} \beta_{kl} t^l

for each k, with radius of convergence at least \varepsilon.

Idea of Proof. This comes from Siegel and Moser's Celestial Mechanics. Assume f is as above. Fix x_0 \in U; then there are r > 0 and M such that, for all k,

|f_k(x)| < M for |x - x_0| < r.

This gives a "maximum speed" for the solution. Fix \varepsilon = \frac{r}{(n+1)M}. This is how long (at least) we expect the solution to exist. Look for a solution x(t), x(0) = x_0, with \|x(t) - x_0\| < r for t \in (-\varepsilon, \varepsilon). The steps in the proof:
1. Change variables so that x_0 = 0, r = M = 1. (We can rescale x to get r = 1, and rescale time to get M = 1.)
2. Solve formally: write, for k = 1, \ldots, n,

x_k(t) = \sum_{m=0}^{\infty} \alpha_{km} t^m.

Plug this into \dot{x} = f(x) and solve for the \alpha's. For example:

\dot{x}_1(t) = \sum_{m=1}^{\infty} m\alpha_{1m} t^{m-1} = f_1(x_1, x_2, \ldots, x_n) = \sum_{l \in \mathbb{Z}_+^n} a^1_{l_1 l_2 \cdots l_n} \left( \sum_{m=0}^{\infty} \alpha_{1m} t^m \right)^{l_1} \cdots \left( \sum_{m=0}^{\infty} \alpha_{nm} t^m \right)^{l_n}.

Equate coefficients and solve for the \alpha's:

\alpha_{10} = \alpha_{20} = \cdots = \alpha_{n0} = 0
\alpha_{11} (constant on left) = a^1_{00\cdots 0} (constant on right)
2\alpha_{12} = \alpha_{11} a^1_{10\cdots 0} + \alpha_{21} a^1_{010\cdots 0} + \cdots

In general, we find that each \alpha_{km} is a polynomial in the a^k_{l_1 l_2 \cdots l_n} where l_1, l_2, \ldots, l_n < m.

3. We have a formal solution by solving for the \alpha_{km}'s in terms of the a's. Does this power series for x_k(t) converge? (If yes, then it must be the solution.) To show that the x_k(t)'s converge, use the "method of majorants". This is essentially the Comparison Test: we find a power series that converges and that has coefficients bigger than those of the x_k(t)'s. To build the new power series (the majorant), look at

\dot{y} = g(y), \quad g_k(y) = \sum b^k_{l_1 l_2 \cdots l_n}\, y_1^{l_1} \cdots y_n^{l_n}

with |a^k_{l_1 l_2 \cdots l_n}| \le b^k_{l_1 l_2 \cdots l_n}. Then the claim is: if y_k(t) = \sum \beta_{km} t^m is a solution with y(0) = 0, then for all k, m, |\alpha_{km}| \le \beta_{km}.
Note: The equations for the \beta's are the same as the equations for the \alpha's with the a's replaced by b's. The equation for \alpha_{km} is a polynomial in the a's with all positive coefficients. So replacing the a's with bigger b's makes the \beta's bigger than the \alpha's.

4. Find a nice g whose solution we know. A fact from complex analysis: because |f_k| < 1 (= M) on a ball of radius 1 (= r), we know

|a^k_{l_1 l_2 \cdots l_n}| \le 1.

So let all the b^k's be 1. Solve

(25) \dot{y}_k = g_k(y) = \sum_{l_1, l_2, \ldots, l_n} y_1^{l_1} y_2^{l_2} \cdots y_n^{l_n}.

Note: Each component of g(y) is the same.

\sum_{l_1, \ldots, l_n} y_1^{l_1} \cdots y_n^{l_n} = 1 + y_1 + y_2 + \cdots + y_n + y_1^2 + y_1 y_2 + y_1 y_3 + \cdots = \prod_{r=1}^{n} (1 - y_r)^{-1},

since 1 + y + y^2 + \cdots = \frac{1}{1 - y}. So

\dot{y}_1 = \prod_{r=1}^{n} (1 - y_r)^{-1}, \quad \ldots,

with I.C. y(0) = 0. The solution must have y_1(t) = y_2(t) = \cdots = y_n(t) = y(t), so

\dot{y} = (1 - y)^{-n}, \quad y(0) = 0.

Solving,

y_k(t) = y(t) = 1 - (1 - (n+1)t)^{1/(n+1)},

and each y_k(t) is a convergent power series for |t| < \frac{1}{n+1}, with |y| < 1. So the x_k(t)'s converge in at least the same size ball.
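The closed-form solution of the majorant ODE can be verified directly. A Python sketch (not from the notes), checking that y(t) = 1 - (1 - (n+1)t)^{1/(n+1)} satisfies y' = (1 - y)^{-n}, y(0) = 0:

```python
# The majorant ODE y' = (1 - y)^(-n), y(0) = 0, has the closed-form
# solution y(t) = 1 - (1 - (n+1) t)^(1/(n+1)), valid for t < 1/(n+1).
n = 3

def y(t):
    return 1.0 - (1.0 - (n + 1) * t) ** (1.0 / (n + 1))

# Check the ODE with a centered difference quotient at a few points
# inside the interval of convergence (here 1/(n+1) = 0.25).
h = 1e-7
for t in (0.0, 0.05, 0.2):
    lhs = (y(t + h) - y(t - h)) / (2 * h)
    rhs = (1.0 - y(t)) ** (-n)
    assert abs(lhs - rhs) < 1e-4
assert y(0.0) == 0.0
```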

3. Some more proof of the Existence-Uniqueness Theorem (Sept. 27)

Example:

\frac{dx}{dt} = a_1 + a_2 x + a_3 y + a_4 x^2 + a_5 xy + a_6 y^2
\frac{dy}{dt} = b_1 x + b_2 y

Assume x(t) = \sum_{0}^{\infty} \alpha_m t^m, y(t) = \sum_{0}^{\infty} \beta_m t^m, with initial condition \alpha_0 = \beta_0 = 0. Plug in:

\sum m\alpha_m t^{m-1} = a_1 + a_2 \sum \alpha_m t^m + a_3 \sum \beta_m t^m + a_4 \left( \sum \alpha_m t^m \right)^2 + a_5 \left( \sum \alpha_m t^m \right)\left( \sum \beta_m t^m \right) + a_6 \left( \sum \beta_m t^m \right)^2

\sum m\beta_m t^{m-1} = b_1 \sum \alpha_m t^m + b_2 \sum \beta_m t^m

So we need to equate coefficients, and we find that the \alpha_m's and the \beta_m's are polynomials in the a's and b's with positive coefficients. So increasing the a's and b's in absolute value increases the \alpha's and the \beta's.

For now, we assume that we are given f : U \to R^n Lipschitz with constant L, i.e.

\|f(x_1) - f(x_2)\| \le L\|x_1 - x_2\|.

To attack the uniqueness problem, look at the "separation problem". Suppose we are given two initial conditions x_1, x_2 \in U, and two solutions x_1(t), x_2(t), with x_i(0) = x_i. Can we get an estimate on the distance \|x_1(t) - x_2(t)\|? The only thing we know is that \dot{x}_i = f(x_i(t)), so

(26) \left\| \frac{d}{dt}(x_1(t) - x_2(t)) \right\| = \|f(x_1(t)) - f(x_2(t))\|
(27) \le L\|x_1(t) - x_2(t)\|.

Let z = \|x_1(t) - x_2(t)\|. We "sort of" have \dot{z} \le Lz. (We don't exactly, because \left\|\frac{d}{dt}(x_1(t) - x_2(t))\right\| \ne \frac{d}{dt}\|x_1(t) - x_2(t)\|.) The worst case would be \dot{z} = Lz, i.e. z(t) = z(0)e^{Lt}. This means that how fast solutions separate depends on L, or on how fast f changes from point to point.

Change this to integral equations:

\int_0^t \dot{x}_i(\tau)\, d\tau = \int_0^t f(x_i(\tau))\, d\tau

34

2. THE TWO BIG THEOREMS

xi (t) ; xi (0) = xi (t) = xi +


Then we have (28) (29) (30)
uous and then

Zt
0 t 0

f (xi ( )) d

f (xi ( )) d

k(x1 (t) ; x2 (t))k = x1 ; x2 + (f (x1 ( )) ; f (x2 ( ))) d


0

Zt

kx1 ; x2 k + kx1 ; x2 k +

Zt
0 0

Zt Zt
0

kf (x1 ( )) ; f (x2 ( ))k d


L kx1 ( ) ; x2 ( )k d > 0, : R ! 0 1] is contin-

Theorem 3.1 (Gronwall's Inequality). If ,

(t)

+ (t)

( )d

e t:

The proof is left as an exercise. So this lemma gives us that kx1 (t) ; x2 (t)k kx1 ; x2 k eLt Corollary 3.1 (Uniqueness.). Let x1 (t), x2 (t) be solutions with x1 (0) = x2 (0). Then kx1 (t) ; x2 (t)k 0, i.e. x1 (t) = x2 (t) for all t. _ = f (x) then Corollary 3.2. If : R U ! U is the local ow solution for x is continuous in x. Proof: Fix t. Then k t (x1 ) ; t (x2 )k eLt kx1 ; x2 k
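The Gronwall separation bound can be illustrated numerically. A sketch (mine, not from the notes): the vector field $\dot x = \sin x$ is Lipschitz with $L = 1$, and two nearby RK4-integrated solutions should stay within the bound $\|x_1(0) - x_2(0)\|e^{Lt}$; all constants below are arbitrary choices.

```python
import math

# Two solutions of x' = sin(x) (Lipschitz constant L = 1), integrated with RK4;
# Gronwall gives |x1(t) - x2(t)| <= |x1(0) - x2(0)| * exp(L * t).

def rk4(f, x, t_end, dt=1e-3):
    t = 0.0
    while t < t_end - 1e-12:
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        x += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return x

f = math.sin
L, T = 1.0, 2.0
a, b = 0.5, 0.51                 # nearby initial conditions
sep0 = abs(a - b)
sep_T = abs(rk4(f, a, T) - rk4(f, b, T))
bound = sep0 * math.exp(L * T)   # Gronwall upper bound on the separation
```

Since $|\cos x| \le 1$, the actual separation grows strictly slower than $e^{Lt}$ here, so the bound holds with room to spare.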

4. Picard iteration (Sept. 30)

Idea. Start with a candidate for a solution and improve it iteratively, i.e. give a process for improving approximate solutions. We know that $x(t)$ is a solution iff
\[
x(t) = x_0 + \int_0^t f(x(\tau))\,d\tau.
\]
So given $\phi_0 : \mathbb{R} \to U$, we can check to see if it is a solution by checking whether it satisfies
\[
\phi_0(t) = x_0 + \int_0^t f(\phi_0(\tau))\,d\tau.
\]
If $\phi_0$ is not a solution, we might hope to get a closer approximation $\phi_1(t)$ by
\[
\phi_1(t) = x_0 + \int_0^t f(\phi_0(\tau))\,d\tau.
\]
So we define an operator $\Gamma : \phi_0 \mapsto \phi_1$, where
\[
[\Gamma(\phi_0)](t) = x_0 + \int_0^t f(\phi_0(\tau))\,d\tau.
\]
A solution would be a $\phi$ such that $\Gamma(\phi) = \phi$, so $\phi$ would be a fixed point of $\Gamma$. We hope that there is only one fixed point, and that $\Gamma^n(\phi_0) \to \phi$.

So we assume that $f : U \to \mathbb{R}^n$ is Lipschitz with constant $L$. We must set up a domain for $\Gamma$, and show that $\Gamma$ has a fixed point. Note: $\phi : [-\delta, \delta] \to U$ for some $\delta > 0$.

Figure 1. We choose $\delta$ so that $\phi(t)$ cannot travel outside of $U_1$.

We want that $\Gamma(\phi) : [-\delta, \delta] \to U$, so we must ensure that $\Gamma(\phi)$ stays in $U$. Fix $U_1 \subset U$, compact, containing $x_0$. Now let $K = \sup_{x \in U_1} \|f(x)\|$. Choose $V$ contained in the interior of $U_1$, $V$ compact, with $x_0$ in the interior of $V$, and $\epsilon > 0$ such that $\|x - y\| > \epsilon$ whenever $x \in V$, $y \in U \setminus U_1$. Let $\delta = \epsilon/2K$, so that if $\phi(t)$ has speed less than $K$ it can't travel from $x_0$ to outside of $U_1$ in time less than $\delta$.

Let $C([-\delta,\delta], U)$ be the set of continuous functions from $[-\delta,\delta]$ to $U$. If $\phi_1, \phi_2 \in C([-\delta,\delta], U)$, then
\[
\|\phi_1 - \phi_2\| = \sup_{t \in [-\delta,\delta]} \|\phi_1(t) - \phi_2(t)\|.
\]
We know that $C([-\delta,\delta], U)$ is complete with this metric, i.e. if $\{\phi_n\}$ is Cauchy with respect to this norm, then $\lim \phi_n$ exists and is in $C([-\delta,\delta], U)$.

So we really need to show that if $\phi(0) = x_0$ and $\phi \in C([-\delta,\delta], U)$, then $\Gamma(\phi) \in C([-\delta,\delta], U)$.

1. $\Gamma(\phi)$ is continuous, and
\[
\|\Gamma(\phi)(t) - x_0\|
= \Bigl\|x_0 + \int_0^t f(\phi(\tau))\,d\tau - x_0\Bigr\|
\le \int_0^t \|f(\phi(\tau))\|\,d\tau
\le |t|\,K.
\]
If $t \in [-\delta,\delta]$, then
\[
\|\Gamma(\phi)(t) - x_0\| \le \delta K \le \Bigl(\frac{\epsilon}{2K}\Bigr)K = \frac{\epsilon}{2} < \epsilon,
\]
which means that $\Gamma(\phi)(t) \in U_1$ for all $t \in [-\delta,\delta]$. Let $C([-\delta,\delta], U, x_0) = \{\phi \in C([-\delta,\delta], U) \mid \phi(0) = x_0\}$. Then we see that $\Gamma : C([-\delta,\delta], U, x_0) \to C([-\delta,\delta], U, x_0)$.

2. $\Gamma$ is a contraction. Pick $\phi_1, \phi_2 \in C([-\delta,\delta], U, x_0)$ and look at
\[
\|\Gamma(\phi_1) - \Gamma(\phi_2)\|
= \sup_{t \in [-\delta,\delta]} \Bigl\|\int_0^t f(\phi_1(\tau))\,d\tau - \int_0^t f(\phi_2(\tau))\,d\tau\Bigr\|
\le \sup_{t \in [-\delta,\delta]} \int_0^t \|f(\phi_1(\tau)) - f(\phi_2(\tau))\|\,d\tau
\]
\[
\le \sup_{t \in [-\delta,\delta]} \int_0^t L\,\|\phi_1(\tau) - \phi_2(\tau)\|\,d\tau
\le \sup_{t \in [-\delta,\delta]} |t|\,L\,\|\phi_1 - \phi_2\|
\le \delta L\,\|\phi_1 - \phi_2\|.
\]
We have to have $\lambda = \delta L < 1$. Make $\delta$ smaller if necessary, i.e.
\[
\delta < \min\Bigl(\frac{1}{2L}, \frac{\epsilon}{2K}\Bigr).
\]
We get that
\[
\|\Gamma(\phi_1) - \Gamma(\phi_2)\| \le \lambda\,\|\phi_1 - \phi_2\|, \qquad \lambda < 1.
\]
Thus if $\phi_0 \in C([-\delta,\delta], U, x_0)$, we can construct $\phi_{n+1} = \Gamma(\phi_n)$, so that
\[
\|\phi_n - \phi_{n+1}\| = \|\Gamma^n(\phi_0) - \Gamma^n(\phi_1)\| \le \lambda^n\,\|\phi_0 - \phi_1\|.
\]
So if $m > n$,
\[
\|\phi_m - \phi_n\|
\le \|\phi_n - \phi_{n+1}\| + \cdots + \|\phi_{m-1} - \phi_m\|
\le (1 + \lambda + \lambda^2 + \cdots + \lambda^{m-n})\,\|\phi_n - \phi_{n+1}\|
\le \frac{1 - \lambda^{m-n+1}}{1 - \lambda}\,\lambda^n\,\|\phi_0 - \phi_1\|.
\]
So $\{\phi_n\}$ is Cauchy, thus converges to $\phi$, which is fixed because
\[
\Gamma(\phi) = \Gamma\bigl(\lim_{n\to\infty}\phi_n\bigr) = \lim_{n\to\infty}\Gamma(\phi_n) = \lim_{n\to\infty}\phi_{n+1} = \phi.
\]
Also, we know this fixed point is unique, so we're done.

Note: $\phi$ is defined only on a short interval of time. Also,
\[
\phi(t) = x_0 + \int_0^t f(\phi(\tau))\,d\tau,
\]
so $\dot\phi(t) = f(\phi(t))$, and $\phi$ is automatically differentiable.
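Picard iteration can be carried out exactly on polynomial coefficients. A sketch for the scalar test problem $\dot x = x$, $x(0) = 1$ (my example, not from the notes): each application of $\Gamma$ appends one more term of the exponential series, so the iterates converge to the unique fixed point $\phi(t) = e^t$.

```python
from fractions import Fraction
from math import factorial

def gamma(coeffs):
    # [Gamma(phi)](t) = 1 + integral_0^t phi(s) ds, written on the
    # coefficient list of the polynomial phi(t) = sum c_k t^k.
    return [Fraction(1)] + [c / (k + 1) for k, c in enumerate(coeffs)]

phi = [Fraction(1)]            # phi_0(t) = 1, the constant candidate
for _ in range(6):
    phi = gamma(phi)           # phi_6(t) = sum_{k <= 6} t^k / k!
```

After $n$ steps the coefficients agree with $1/k!$ through degree $n$, exactly as the contraction argument predicts.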

5. Invariant Sets (Oct. 2)

From now on we will study a flow $\phi : \mathbb{R} \times U \to U$ associated to a vector field $f : U \to \mathbb{R}^n$, where $U$ is a surface in $\mathbb{R}^n$. Ideally, we would like to find some sort of algorithm for going from $x_0$ to complete information about
\[
O(x_0) = \{\phi(t, x_0) \mid t \in \mathbb{R}\}.
\]
Since we can't do all of that, we have two choices:
1. Get a lot of info about a few orbits, or
2. Get a little info about a lot of orbits.

Definition 5.1. Given a flow $\phi : \mathbb{R} \times U \to U$, an invariant set is a subset $I \subset U$ such that for all $x_0 \in I$, $O(x_0) \subset I$. In other words, if $x_0 \in I$, then $\phi(t, x_0) \in I$ for all $t$. A positively invariant set is a set $J \subset U$ such that if $x_0 \in J$, then $O^+(x_0) \subset J$.

Definition 5.2. An isolated invariant set is a subset $I \subset U$ satisfying
1. $I$ is invariant,
2. $I$ is compact,
3. there exists $N \subset U$ ($N$ compact) such that $I \subset N^o$ and $I$ is the maximal invariant set in $N$ (if $J \subset N$, $J$ invariant, then $J \subset I$).
$N$ is called an isolating neighborhood.

Figure 2. A maximal invariant set.

Lemma 5.1. If $I$ is the maximal invariant set in $N$, and $x_0 \in N \setminus I$, then there is a $t$ such that $\phi(t, x_0) \notin N$.

Proof: If $O(x_0) \subset N$ then $O(x_0)$ is invariant, so $O(x_0) \subset I$.

Theorem 5.1. Suppose $I$, $J$ are invariant for $\phi$. Then $I \cap J$, $I \cup J$, $\bar I$, and $I^o$ are all invariant sets. (If $x_0 \in I \cap J$, then $O(x_0) \subset I$ and $O(x_0) \subset J$.)

Philosophy. We study flows by studying invariant sets, and there are two ways to do this:
1. What is the flow like restricted to an invariant set?
2. How are invariant sets connected?

Figure 3. Orbits moving from one maximal invariant set to another.

Are there orbits with $\phi(t,x) \to J$ (as $t \to \infty$), or $\phi(t,x) \to I$ (as $t \to -\infty$)?

Definition 5.3. For $\phi$ a flow and $x$ an initial point, the $\omega$-limit set of $x$ is
\[
\omega(x) = \bigcap_{t > 0} \overline{\{\phi(s, x) \mid s \ge t\}},
\]
where the overbar denotes closure. The $\alpha$-limit set is the same thing in backwards time:
\[
\alpha(x) = \bigcap_{t > 0} \overline{\{\phi(s, x) \mid s \le -t\}}.
\]

The first type of invariant set is a rest point. For a rest point, question 1 is easy. For question 2, we ask: what do orbits near the rest point do? We use calculus (mostly) to study the local behavior.

Example: $\dot x = Ax$, $x \in \mathbb{R}^n$, $A$ an $n \times n$ matrix. We know that $x = 0$ is a rest point. It is isolated as a rest point provided that $\det(A) \ne 0$ (no other rest points nearby). We have to be careful, however. For example, let
\[
A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},
\]
which has eigenvalues $\pm i$. Then $x = 0$ is not an isolated invariant set. We will see that the condition guaranteeing that $x = 0$ is isolated is that all eigenvalues have nonzero real part.

We get the flow $\phi(t,x) = e^{At}x$. We will start to classify the linear systems:
1. Find the eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$ (with repeats).
2. Put $A$ into normal form, i.e. find $P$ such that $P^{-1}AP = B$, where $B$ is block diagonal with real Jordan blocks of the two types
\[
\begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} C & I & & \\ & C & \ddots & \\ & & \ddots & I \\ & & & C \end{pmatrix},
\qquad C = \begin{pmatrix} a & -b \\ b & a \end{pmatrix},
\]
for real eigenvalues $\lambda$ and complex pairs $a \pm bi$. Essentially, put the matrix in Jordan canonical form.
3. Change variables $x = Py$, so that $\dot y = P^{-1}APy = By$, and if
\[
B = \begin{pmatrix} B_1 & & \\ & B_2 & \\ & & \ddots \end{pmatrix}
\quad\text{then}\quad
y(t) = e^{Bt}y_0 = \begin{pmatrix} e^{B_1 t} & & \\ & \ddots & \end{pmatrix} y_0.
\]
4. Classify each $y_j(t) = e^{B_j t} y_{j,0}$.
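For the rotation example, $e^{At}$ has a closed form and one can check directly that orbits stay on circles, so no compact neighborhood of $0$ has $\{0\}$ as its maximal invariant set. A quick sketch (mine):

```python
import math

# Flow of x' = Ax with A = [[0, 1], [-1, 0]]:
# e^{At} = [[cos t, sin t], [-sin t, cos t]], so every orbit is a circle.
def flow(t, x):
    c, s = math.cos(t), math.sin(t)
    return (c * x[0] + s * x[1], -s * x[0] + c * x[1])

x0 = (0.3, 0.4)
r0 = math.hypot(*x0)
radii = [math.hypot(*flow(t, x0)) for t in (0.5, 1.0, 2.0, 6.0)]
```

The norm is preserved exactly (up to floating-point error), reflecting the purely imaginary eigenvalues $\pm i$.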

We will try to classify the nonlinear behavior in the same way: linearize, and try to describe the behavior near $x = 0$. Now we can start classifying.

Definition 5.4. An isolated invariant set $I$ is called an attractor if there exists an isolating neighborhood $N$ (so $I$ is the maximal invariant set in $N$) such that for all $x \in N$, $\phi(t,x) \in N^o$ for all $t > 0$. Such an $N$ is called an attractor block. An attracting fixed point is called a sink.

Lemma 5.2. If $N$ is an attractor block for $I$, then for all $x \in N$, $\omega(x) \subset I$.

Proof: Note that $\omega(x)$ is an invariant set (exercise), so $\omega(x) \subset I$, since $I$ is maximal in $N$ and $\omega(x) \subset N$.

In a weak sense, sinks are all the same, as we see by the following

Theorem 5.2. If $\phi$, $\psi$ are flows on $\mathbb{R}^n$ with $N \subset \mathbb{R}^n$ an attractor block for both $\phi$ and $\psi$, with maximal invariant sets the fixed points $x_0$ for $\phi$ and $y_0$ for $\psi$, then $\phi|_N$ and $\psi|_N$ are conjugate, i.e. there is a homeomorphism $h : N \to N$ such that
\[
\psi(t, h(x)) = h(\phi(t, x)).
\]

Figure 4. These flows are conjugate.

Lemma 5.3. If $x \in N$ and $x \ne x_0$, then there exists $t \le 0$ such that $\phi(t,x) \in \partial N$ and $\phi(s,x) \in N^o$ for $t < s \le 0$.

Proof: If not, then $O^-(x) \subset N$, which forces $O^-(x) \subset \{x_0\}$ because $\{x_0\}$ is the maximal invariant set in $N$ — a contradiction.

Idea of proof of theorem.
1. Define $h|_{\partial N} =$ identity.
2. For $x \in N^o$, $x \ne x_0$, pick the first $t < 0$ such that $\phi(t,x) \in \partial N$, and define $h(x) = \psi(-t, h(\phi(t,x)))$.
3. Let $h(x_0) = y_0$.
4. Need to show that $h$ is continuous.

6. Sinks and conjugacy (Oct. 4 and Oct. 7)

Given two flows with sinks, we know that these flows are locally conjugate near the sinks:
1. Define $h$ on the boundaries.
2. Extend $h$ inside using the flows.
3. Show continuity of $h$ (and $h^{-1}$).
Note: $h$ is only a homeomorphism, but this is the best one can hope for.

Example: Suppose that $\dot x = f(x)$, $\dot y = g(y)$ with $f(0) = g(0) = 0$. Suppose there is $h : U \to V$, where $U$ is a neighborhood of $0$ in $y$-space and $V$ is a neighborhood of $0$ in $x$-space, such that $h$ is a homeomorphism taking solutions of $g$ to solutions of $f$, i.e. $x = h(y)$. If $h$ were smooth, differentiating $x(t) = h(y(t))$ gives
\[
f(h(y)) = \dot x = Dh|_y\,\dot y = Dh|_y\,g(y),
\]
i.e. $h$ is a smooth conjugacy between the solution flow of $\dot x = f(x)$ and that of $\dot y = g(y)$. In particular $f(h(0)) = Dh|_0\,g(0) = 0$, so $h(0)$ is a fixed point of $f$; assume $h(0) = 0$. Take $f(h(y)) = Dh|_y\,g(y)$ and differentiate:
\[
Df|_{h(y)}\,Dh|_y = D^2h|_y\,g(y) + Dh|_y\,Dg|_y.
\]
At $y = 0$ (where $g(0) = 0$), we get
\[
Df|_0\,Dh|_0 = Dh|_0\,Dg|_0, \qquad
Df|_0 = Dh|_0\,Dg|_0\,(Dh|_0)^{-1}.
\]
So if $n = 1$, we get $f'(0) = g'(0)$. If $n > 1$, then $Df|_0$ is a conjugate matrix of $Dg|_0$, so $Df|_0$ and $Dg|_0$ have the same eigenvalues and the same dimension eigenspaces (same Jordan form). In short: given two vector fields $f$, $g$ with fixed points at $0$, if the solution flows of $f$ and $g$ are smoothly conjugate, then $Df|_0$ and $Dg|_0$ are similar matrices.

Example:
\[
\dot x = -x, \qquad \dot y = -2y.
\]
These are certainly topologically the same, but $h$ cannot be differentiable at $0$.

So if we have $\dot x = f(x)$ with $f(0) = 0$, then $Df|_0$ should help in determining the local structure. Recall, we can write
\[
f(x) = f(0) + Df|_0\,x + \text{higher order terms},
\]
so our system $\dot x = f(x)$ is
\[
\dot x = Df|_0\,x + \text{h.o.t.}
\]

Given $\dot x = f(x)$, $f(0) = 0$, can you tell if $0$ is a sink?

Theorem 6.1. If $\dot x = f(x)$ has $0$ as a rest point and the eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$ of $Df|_0$ all have negative real part, then $0$ is a sink.

Remark: Assume that $f$ is $C^2$.

Proof: Show that there is a one-parameter family of attractor blocks around $0$ which limit onto $0$. (This will show that $\{0\}$ is an isolated invariant set and that $0$ is a sink.)

Figure 5. A sink.

There are two big steps:
1. Compare $\dot x = f(x)$ to $\dot x = Ax$ where $A = Df|_0$.
2. Show the attractor block results for $\dot x = Ax$.

Step 1. Write $\dot x = f(x) = Ax + g(x)$ where $g(x) = f(x) - Ax$. Then there is a $K > 0$ such that for all $\|x\| < 1$,
\[
\|g(x)\| \le K\,\|x\|^2.
\]
We know this is true in one dimension for $C^2$ functions by Taylor's Remainder Theorem. For each $x_0$ with $\|x_0\| = 1$, look at the map $s \mapsto f(sx_0) - A(sx_0)$; each component satisfies the 1-dimensional Taylor theorem, so
\[
\|f(sx_0) - A(sx_0)\| < K_{x_0}\,s^2.
\]
$K_{x_0}$ is determined by $\frac{d^2}{ds^2}f(sx_0)$, which is determined by $D^2f$. But all second partials of $f$ are uniformly bounded in the unit ball, so we can replace $K_{x_0}$ by a uniform $K$.

Step 2. Find $P$ such that $P^{-1}AP = B$, where $B$ is the Jordan form of $A$, and let $x = Py$. Then
\[
P\dot y = \dot x = f(x) = Ax + g(x) = APy + g(Py),
\qquad
\dot y = By + P^{-1}g(Py).
\]
Let $g_1(y) = P^{-1}g(Py)$.

Claim: $\|g_1(y)\| \le K_1\|y\|^2$ provided that $\|y\| < \delta_1$, for some fixed $\delta_1 > 0$.

Proof:
\[
\|P^{-1}g(Py)\| \le \|P^{-1}\|\,\|g(Py)\| \le \|P^{-1}\|\,K\,\|Py\|^2
\le \underbrace{\|P^{-1}\|\,K\,\|P\|^2}_{K_1}\,\|y\|^2,
\]
provided that $\|Py\| < 1$. But there is a $\delta_1 > 0$ such that if $\|y\| < \delta_1$, then $\|Py\| < 1$.

Step 3. Make the off-diagonal terms small in $B$. If $B$ were diagonal, life would be simple, but $B$ may contain Jordan blocks such as
\[
B_1 = \begin{pmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{pmatrix}.
\]
Look for a diagonal matrix $P_\epsilon$ such that
\[
P_\epsilon^{-1}B_1P_\epsilon = \begin{pmatrix} \lambda & \epsilon & 0 \\ 0 & \lambda & \epsilon \\ 0 & 0 & \lambda \end{pmatrix}.
\]
It is easy to check that $P_\epsilon = \mathrm{diag}(1, \epsilon, \epsilon^2)$ works. As an exercise, choose $P_\epsilon$ for
\[
B = \begin{pmatrix} a & -b & 1 & 0 \\ b & a & 0 & 1 \\ 0 & 0 & a & -b \\ 0 & 0 & b & a \end{pmatrix}
\]
and end up with $P_\epsilon$ such that all off-diagonal terms of $P_\epsilon^{-1}BP_\epsilon$ outside the diagonal blocks are of size $\epsilon$. Let $y = P_\epsilon z$. The new equation is
\[
\dot z = \underbrace{P_\epsilon^{-1}BP_\epsilon}_{B_\epsilon}\,z + \underbrace{P_\epsilon^{-1}g_1(P_\epsilon z)}_{g_2(z)}.
\]
There is a $\delta_2 > 0$ such that if $\|z\| < \delta_2$, then $\|g_2(z)\| \le K_2\|z\|^2$.

Step 4. Check that in these coordinates, for small $r > 0$, if $\|z\| = r$ then
\[
(B_\epsilon z + g_2(z)) \cdot z < 0,
\]
i.e. the vector field on the boundary points into the ball, so the ball of radius $r$ is an attractor block.

Figure 6. Any point on the boundary of the ball has a vector which points inside, so this ball is an attractor block.

We want to show that $B_\epsilon z \cdot z$ is sufficiently negative that $g_2(z) \cdot z$ can't make the sum positive. We need to induct on the size of the Jordan blocks of $B_\epsilon$: since
\[
B_\epsilon = \begin{pmatrix} B_1 & & \\ & B_2 & \\ & & \ddots \end{pmatrix},
\qquad
B_\epsilon z \cdot z = B_1 z_1 \cdot z_1 + B_2 z_2 \cdot z_2 + \cdots .
\]
So we have several cases:

1. $B_i = [\lambda_i]$, a simple real eigenvalue; then $B_i z_i \cdot z_i = \lambda_i z_i^2 < 0$ for $\lambda_i < 0$.

2. A real Jordan block with $\epsilon$'s,
\[
B_i = \begin{pmatrix} \lambda & \epsilon & & \\ & \lambda & \ddots & \\ & & \ddots & \epsilon \\ & & & \lambda \end{pmatrix},
\qquad
(B_i z) \cdot z = \sum_{j=1}^m \lambda z_j^2 + \epsilon z_{j+1} z_j
\le \lambda\|z\|^2 + \epsilon\|z\|^2.
\]
So provided that $\epsilon < |\lambda|/2$, we get $B_i z \cdot z \le \lambda\|z\|^2 + \epsilon\|z\|^2 < \frac{\lambda}{2}\|z\|^2 < 0$, because $\lambda < 0$.

3. $B_i = \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$:\quad $(B_i z) \cdot z = a(z_1^2 + z_2^2)$.

4. A complex block with $\epsilon$'s,
\[
B_i = \begin{pmatrix} a & -b & \epsilon & 0 \\ b & a & 0 & \epsilon \\ 0 & 0 & a & -b \\ 0 & 0 & b & a \end{pmatrix}
\ \text{(etc.)},
\qquad
(B_i z) \cdot z \le a\|z\|^2 + \epsilon\|z\|^2,
\]
which is $< 0$ provided that $\epsilon < |a|/2$.

So provided that $\epsilon$ is sufficiently small (less than half the absolute value of the real part of the eigenvalue closest to the imaginary axis), there is a $\mu > 0$ with
\[
B_\epsilon z \cdot z \le -\mu\|z\|^2,
\]
i.e. if the off-diagonal terms of $B_\epsilon$ are sufficiently small, then $B_\epsilon z \cdot z \approx (\mathrm{Diag}\,B_\epsilon)z \cdot z < 0$.

But we wanted $(B_\epsilon z + g_2(z)) \cdot z < 0$. Note that
\[
|g_2(z) \cdot z| \le K_2\|z\|^2\,\|z\| = K_2\|z\|^3
\]
provided that $\|z\| < \delta_2$. Thus we have
\[
(B_\epsilon z + g_2(z)) \cdot z = B_\epsilon z \cdot z + g_2(z) \cdot z
\le -\mu\|z\|^2 + K_2\|z\|^3 = \|z\|^2(-\mu + K_2\|z\|).
\]
So provided that $r < \mu/2K_2$, we have
\[
(B_\epsilon z + g_2(z)) \cdot z < 0.
\]
Pull back to the original coordinates: we get a family of attractor blocks about $x = 0$ in the original variables. Thus $0$ is a sink.
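The inward-pointing condition of Step 4 is easy to check numerically for a concrete system. A sketch (the vector field and the radius are my choices, not from the notes): $f(x,y) = (-x + y^2,\ -2y + x^2)$ has linear part $\mathrm{diag}(-1,-2)$, already diagonal with negative eigenvalues, so $f(z)\cdot z$ should be negative on a small circle.

```python
import math

# Linear part diag(-1, -2) (a sink) plus quadratic terms.
def f(x, y):
    return (-x + y * y, -2.0 * y + x * x)

r = 0.1   # radius of the candidate attractor block
dots = []
for k in range(360):
    th = 2 * math.pi * k / 360
    x, y = r * math.cos(th), r * math.sin(th)
    fx, fy = f(x, y)
    dots.append(fx * x + fy * y)   # f(z) . z on the boundary circle
```

All values are negative (the linear part contributes at most $-r^2$ and the quadratic terms at most $2r^3$), so the disk of radius $0.1$ is an attractor block for this example.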

Remark: We showed that two sinks are conjugate by a homeomorphism. We didn't expect the conjugacy to be smooth, because then the eigenvalues would have to be the same. For example, if we have systems $\dot x = f(x)$ and $\dot y = g(y)$, with $\phi$ a solution flow for $f$ and $\psi$ a solution flow for $g$, and there exists $h : N \to N$ such that
\[
h(\psi(t, y)) = \phi(t, h(y)),
\]
then if $h$ is $C^2$, $Df|_{x_0}$ and $Dg|_{y_0}$ are similar matrices.

Example:
\[
\dot x = -x, \qquad \dot y = -2y.
\]
Then
\[
\phi(t,x) = e^{-t}x, \qquad \psi(t,y) = e^{-2t}y.
\]
Assume that there is an $h : (-1,1) \to (-1,1)$ which conjugates, so that
\[
h(e^{-2t}y) = e^{-t}h(y).
\]
But $h$ can't be $C^1$ at $0$: suppose $h(y_0) = x_0$, so that $h(e^{-2t}y_0) = e^{-t}x_0$. Writing $y = e^{-2t}y_0$, we get $e^{-t} = \sqrt{y/y_0}$, so
\[
h(y) = x_0\sqrt{y/y_0} = k\sqrt{y},
\]
and $h'(0)$ cannot exist.

So far, what we have is: if $\dot x = f(x)$ on $\mathbb{R}^n$ and $f(x_0) = 0$, then $Df|_{x_0}$ having all eigenvalues in the left half-plane implies a sink, and all sinks in $\mathbb{R}^n$ are the same with respect to a continuous change of variables. A symmetric theorem gives us that if $Df|_{x_0}$ has all eigenvalues in the right half-plane, then $x_0$ is a source (i.e. if we replace $f$ by $-f$, $x_0$ is a sink).

Two questions:
1. What about "mixed cases" (some eigenvalues with positive real part, some with negative)?
2. Can we do a better (smoother) classification if we take the eigenvalues into account?

Definition 6.1. Given $\dot x = f(x)$, $f$ a smooth vector field on $\mathbb{R}^n$, $f(x_0) = 0$, we say that $x_0$ is hyperbolic if none of the eigenvalues of $Df|_{x_0}$ have real part $= 0$.

We will study hyperbolic rest points, with two attacks:
1. Try to show that $\dot x = f(x)$ is conjugate to $\dot y = Ay$ near $x_0$, for $A = Df|_{x_0}$;
2. Try to single out the directions in which $x_0$ looks like a sink or a source.
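The failure of differentiability in the example above can be seen numerically. A sketch (mine): check the conjugacy identity $h(e^{-2t}y) = e^{-t}h(y)$ for $h(y) = \sqrt{y}$, and watch the difference quotient of $h$ blow up at $0$.

```python
import math

h = math.sqrt                       # conjugacy candidate on (0, 1)

# h(e^{-2t} y) = e^{-t} h(y): h maps orbits of y' = -2y onto orbits of x' = -x.
t, y = 0.7, 0.25
lhs = h(math.exp(-2 * t) * y)
rhs = math.exp(-t) * h(y)

# But the difference quotient (h(y) - h(0)) / y = y^(-1/2) diverges as y -> 0,
# so h is not differentiable at the fixed point.
q1 = h(1e-4) / 1e-4
q2 = h(1e-8) / 1e-8
```

The conjugacy identity holds exactly, while the difference quotients grow without bound as $y \to 0$.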

7. Preliminary to Stable/Unstable Manifold Theorem (Oct. 9)

Figure 7. This is a continuous conjugacy, but it is obviously not a differentiable conjugacy.

Our goal is to understand orbits in a neighborhood of fixed points. Given $f : \mathbb{R}^n \to \mathbb{R}^n$, $f(0) = 0$, we have two attacks:

1. Let $A = Df|_0$ and try to find a conjugacy between the solution flow of $\dot x = f(x)$ and the solution flow of $\dot y = Ay$ (a local conjugacy). This is good in that it gives us info about all the orbits near $0$; it is bad in that, in general, we can't hope for a differentiable conjugacy (it will only be a homeomorphism). (Hartman-Grobman-Sternberg Theorem.)

2. Try to separate out the most important/interesting orbits and show that they have nice structure.

Suppose $A = Df|_0$ has eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_s$, $\mu_1, \mu_2, \dots, \mu_u$, where the real parts of the $\lambda$'s are $< 0$ and the real parts of the $\mu$'s are $> 0$. If $0$ is hyperbolic, suppose $A$ is in real Jordan form:
\[
A = \begin{pmatrix} \mu\text{-blocks} & 0 \\ 0 & \lambda\text{-blocks} \end{pmatrix}.
\]

Figure 8. This is a saddle point: the $\lambda$'s are stable ($\lambda$-eigenspace) and the $\mu$'s are unstable ($\mu$-eigenspace).

We know from part 1 that near $0$, the phase space of the solution to $\dot x = f(x)$ "looks like" the linear flow up to homeomorphism. We want to show that the sets of points whose orbits go to $0$ as $t \to \infty$, or as $t \to -\infty$, are nice smooth manifolds, i.e. recover some of the structure of the linear flow by restricting to solutions that go to $0$ as $t \to \pm\infty$.

Figure 9. The flow of $\dot x = f(x)$ will look like that of its linear part $\dot y = Ay$.

Theorem 7.1 (Hartman-Grobman-(part of) Sternberg). Let $x_0$ be a hyperbolic fixed point for the $C^2$ vector field $f : \mathbb{R}^n \to \mathbb{R}^n$, with $\phi$ a solution flow for $\dot x = f(x)$. Let $A = Df|_{x_0}$. Then there is a neighborhood $V$ of $x_0$ and a homeomorphism $h : V \to V$ which locally conjugates $\phi$ to the solution flow of $\dot x = A(x - x_0)$, i.e. if $x \in V$, then
\[
\phi(t, h(x)) = h\bigl(x_0 + e^{At}(x - x_0)\bigr)
\]
for all $t$ with both sides staying in $V$ for time between $0$ and $t$.

Remarks:
1. Note we are comparing solutions of $\dot x = f(x)$ to those of $\dot x = A(x - x_0)$, where $f(x) = A(x - x_0) + \text{h.o.t.}$
2. $h$ is only guaranteed to be a homeomorphism. Moreover, if $n = 2$ and $f$ is $C^2$, then $h$ can be taken to be a $C^1$ diffeomorphism (Hartman). Also, if $\lambda_1, \lambda_2, \dots, \lambda_n$ are the eigenvalues of the linear part $Df|_{x_0}$, $f \in C^\infty$, and for all $k$,
\[
\lambda_k \ne \sum_{j=1}^{n} m_j \lambda_j
\quad\text{for all } m_1, m_2, \dots, m_n \in \mathbb{Z}^+ \cup \{0\} \text{ with } \sum_j m_j \ge 2,
\]
then we can choose $h$ to be $C^\infty$ (Sternberg). The converse is not strictly true. An example of the forbidden situation: $A$ has dimension 3 with $\lambda_1 = \lambda_2 + \lambda_3$; this is called "resonance" in the eigenvalues.

Remark: $C^1$ is good, since spiral sinks are distinguished from straight-line sinks, etc.

Definition 7.1. Suppose $X \subset \mathbb{R}^n$ and $X = g(\mathbb{R}^m)$ (or $g(U)$, $U \subset \mathbb{R}^m$) for some $g : \mathbb{R}^m \to \mathbb{R}^n$ such that
1. $g$ is one-to-one, and
2. $Dg|_x$ (an $n \times m$ matrix) has rank $m$ at all $x \in \mathbb{R}^m$.
Then $X$ is called an immersed $m$-submanifold.

Example: $m = 1$, $n \ge 2$, $g : \mathbb{R} \to \mathbb{R}^n$. If $g(\mathbb{R})$ is immersed then:
1. $g(\mathbb{R})$ doesn't cross itself, and
2. $g' \ne 0$ for every $x \in \mathbb{R}$; with $g(x) = (g_1(x), g_2(x))$ and $g'(x) = (g_1'(x), g_2'(x))$, the derivatives $g_1'$ and $g_2'$ are never simultaneously $0$.

Figure 10. These are not allowed. The first has a cusp and the second intersects itself.

Figure 11. These are allowed. The manifold can limit onto itself.

We say that $X$ is $C^k$ if $g$ is $C^k$. An alternate definition:

Definition 7.2. $X \subset \mathbb{R}^n$ is an immersed $m$-submanifold if for every $x_0 \in X$ there is a neighborhood $U$ of $x_0$ in $\mathbb{R}^n$ such that if $V$ is the component of $X \cap U$ containing $x_0$, then $V$ is the graph of a smooth function $\sigma$ from a neighborhood of $0$ in $\mathbb{R}^m$ to $\mathbb{R}^{n-m}$, i.e. we can choose coordinates in $U$ such that
\[
V = \{(x_1, x_2, \dots, x_m, \sigma(x_1, x_2, \dots, x_m)) \mid (x_1, x_2, \dots, x_m, 0, \dots, 0) \in U\},
\]
where $\sigma$ is differentiable. $V$ is a graph of a smooth function, so the only possible weirdness is "global" rather than local, i.e. from inside $X$, it looks like $\mathbb{R}^m$ locally.

Figure 12. This will be the graph of a continuous function if the coordinates are chosen correctly.

8. Stable/Unstable Manifold Theorem (Oct. 15)

Definition 8.1. Let $\dot x = f(x)$, $f : \mathbb{R}^n \to \mathbb{R}^n$ a $C^k$ vector field with $f(x_0) = 0$ and flow $\phi$. For $V$ a neighborhood of $x_0$, we define the local stable set
\[
W^s(x_0, V) = \{z \in V \mid \phi(t,z) \in V \text{ for all } t \ge 0,\ \phi(t,z) \to x_0 \text{ as } t \to \infty\},
\]
the local unstable set
\[
W^u(x_0, V) = \{z \in V \mid \phi(t,z) \in V \text{ for all } t \le 0,\ \phi(t,z) \to x_0 \text{ as } t \to -\infty\},
\]
the stable manifold
\[
W^s(x_0) = \{z \mid \phi(t,z) \to x_0 \text{ as } t \to \infty\},
\]
and the unstable manifold
\[
W^u(x_0) = \{z \mid \phi(t,z) \to x_0 \text{ as } t \to -\infty\}.
\]

Suppose $x_0 = 0$, i.e. $f(0) = 0$, and $Df|_0$ has eigenvalues $\mu_1, \dots, \mu_u$, $\lambda_1, \dots, \lambda_s$, where $\mathrm{Re}(\mu_i) > 0$ and $\mathrm{Re}(\lambda_i) < 0$. Assume $Df|_0$ is in Jordan form, with
\[
Df|_0 = \begin{pmatrix} \mu\text{'s} & 0 \\ 0 & \lambda\text{'s} \end{pmatrix}.
\]
Let
\[
E^u = \{(x_1, x_2, \dots, x_u, 0, \dots, 0)\}, \qquad
E^s = \{(0, \dots, 0, x_{u+1}, \dots, x_n)\}.
\]
If $f$ were linear, $f = Df|_0\,x$, the solution would be $x(t) = e^{(Df|_0)t}x_0$, and the flow would look like Figure 13.

Figure 13. Stable and unstable manifolds of the linear system.

For example, in $\mathbb{R}^2$, with
\[
Df|_0 = \begin{pmatrix} \mu & 0 \\ 0 & \lambda \end{pmatrix},
\qquad
\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}
= \exp\begin{pmatrix} \mu & 0 \\ 0 & \lambda \end{pmatrix}t
\begin{pmatrix} x_1(0) \\ x_2(0) \end{pmatrix}
= \begin{pmatrix} e^{\mu t}x_1(0) \\ e^{\lambda t}x_2(0) \end{pmatrix}.
\]
So $W^s(0) = E^s$ and $W^u(0) = E^u$.

Given a (nonlinear) $f$ as above, the theorem says $W^s(0)$, $W^u(0)$ are immersed $C^k$-submanifolds, with $W^s(0)$ tangent to $E^s$ at $0$ and $W^u(0)$ tangent to $E^u$ at $0$.

Figure 14. The stable and unstable manifolds are tangent to the linear subspaces.

The local version of the Stable/Unstable Manifold Theorem says that there exists a neighborhood $V$ of $x_0$ such that $W^u(0,V)$ is the graph of a $C^k$ function $\phi : E^u \cap V \to E^s$, i.e.
\[
W^u(0,V) = \{\underbrace{(x_1, x_2, \dots, x_u, 0, \dots, 0)}_{\in E^u} + \underbrace{\phi(x_1, x_2, \dots, x_u)}_{\in E^s}\},
\]
where $\phi(0) = 0$, $D\phi|_0 = 0$. Similarly for $W^s(0,V)$: there exists a $C^k$ function $\psi : E^s \cap V \to E^u$ such that $W^s(0,V)$ is the graph of $\psi$, with $\psi(0) = 0$, $D\psi|_0 = 0$. Moreover, if $f_\nu$ depends smoothly on a parameter $\nu$, then so do $W^s$, $W^u$ (i.e. for $\nu$ near $\nu_0$, $f_\nu$ has a rest point near the rest point of $f_{\nu_0}$, and the stable/unstable manifolds depend smoothly on $\nu$), as long as the fixed point of $f_\nu$ is hyperbolic. Moreover, if $z \in V$, $z \notin W^s(0,V)$, then for some $t > 0$, $\phi(t,z) \notin V$; and if $z \notin W^u(0,V)$, then for some $t < 0$, $\phi(t,z) \notin V$.

Figure 15. "The box."

Example: Look at $n = 2$, $u = s = 1$, $f$ analytic. Then $W^s(0,V)$, $W^u(0,V)$ should be graphs of analytic functions. Say
\[
Df|_0 = \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}, \qquad \lambda > 0 > \mu.
\]
Linearization gives us that $\dot x = \lambda x$ and $\dot y = \mu y$, and we see that $E^u = \{(x,0)\}$ and $E^s = \{(0,y)\}$. Let
\[
f(x,y) = (f_1(x,y), f_2(x,y)),
\qquad
f_1(x,y) = \lambda x + \sum_{i+j \ge 2} a_{ij}x^iy^j,
\qquad
f_2(x,y) = \mu y + \sum_{i+j \ge 2} b_{ij}x^iy^j.
\]
Look for $W^u(0,V)$ to be the graph of $\phi : E^u \cap V \to E^s$, so
\[
W^u(0,V) = \{(x, \phi(x)) \mid (x,0) \in V\},
\]
where $\phi(x)$ is analytic, $\phi(0) = 0$, $\frac{d\phi}{dx}\big|_0 = 0$. So
\[
\phi(x) = \sum_{n=2}^{\infty} \phi_n x^n = \phi_2 x^2 + \phi_3 x^3 + \cdots .
\]
If we have an initial condition in $W^u(0,V)$, the only way the solution leaves $W^u(0,V)$ is to leave $V$.

Figure 16. $\phi(x)$ is a graph near the fixed point.

So the vector field on $W^u(0,V)$ must be tangent to $W^u(0,V)$. At a point $(x,\phi(x))$, the vector field is
\[
f(x,\phi(x)) = (f_1(x,\phi(x)), f_2(x,\phi(x))).
\]
A vector in the tangent line at $(x,\phi(x))$ is $(1, \phi'(x))$, so $(\phi'(x), -1)$ is orthogonal to the tangent line, and we want
\[
f(x,\phi(x)) \cdot (\phi'(x), -1) = 0 \quad \text{for all } x,
\qquad\text{i.e.}\qquad
f_1(x,\phi(x))\,\phi'(x) - f_2(x,\phi(x)) = 0.
\]
Plug in:
\[
\bigl[\lambda x + a_{20}x^2 + a_{11}x(\phi_2x^2 + \phi_3x^3 + \cdots) + a_{02}(\phi_2x^2 + \phi_3x^3 + \cdots)^2 + \cdots\bigr]\,(2\phi_2x + 3\phi_3x^2 + \cdots)
\]
\[
-\ \bigl[\mu(\phi_2x^2 + \phi_3x^3 + \cdots) + b_{20}x^2 + b_{11}x(\phi_2x^2 + \phi_3x^3 + \cdots) + \cdots\bigr] = 0.
\]
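Matching powers of $x$ in this identity can be checked numerically. The sketch below (all coefficient values are made-up, and $f$ is truncated at the terms shown) solves the $x^2$ and $x^3$ equations — which give $\phi_2 = b_{20}/(2\lambda-\mu)$ and $\phi_3 = (b_{30} + b_{11}\phi_2 - 2a_{20}\phi_2)/(3\lambda-\mu)$ for this truncation — and then verifies that the residual of the tangency condition is $O(x^4)$:

```python
# Unstable-manifold coefficients for the planar example (made-up numbers):
#   f1 = lam*x + a20 x^2 + a11 x y + a02 y^2
#   f2 = mu*y  + b20 x^2 + b11 x y + b02 y^2 + b30 x^3
# Invariance: f1(x, phi) * phi'(x) - f2(x, phi) = 0 with phi = c2 x^2 + c3 x^3.

lam, mu = 1.0, -1.0
a20, a11, a02 = 0.3, -0.2, 0.1
b20, b11, b02, b30 = 0.5, 0.4, -0.3, 0.2

c2 = b20 / (2 * lam - mu)                               # from the x^2 terms
c3 = (b30 + b11 * c2 - 2 * a20 * c2) / (3 * lam - mu)   # from the x^3 terms

def residual(x):
    y = c2 * x**2 + c3 * x**3
    f1 = lam * x + a20 * x**2 + a11 * x * y + a02 * y**2
    f2 = mu * y + b20 * x**2 + b11 * x * y + b02 * y**2 + b30 * x**3
    dphi = 2 * c2 * x + 3 * c3 * x**2
    return f1 * dphi - f2

# residual(x) should vanish through order x^3
r1, r2 = residual(1e-2), residual(1e-3)
```

The residuals shrink like $x^4$, confirming that the $x^2$ and $x^3$ coefficients have been matched.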

Some of the results will be:
\[
\phi_2 = \frac{b_{20}}{2\lambda - \mu}, \qquad
\phi_3 = \frac{-2a_{20}\phi_2 + b_{11}\phi_2 + b_{30}}{3\lambda - \mu}, \qquad \dots
\]
We can solve for these $\phi_n$'s formally.
1. Do they converge?
2. Are solutions on the graph of $\phi$?
3. Unique?

9. More S/U Theorem (analytic version) (Oct. 16)

Try the $C^k$ version: $\dot x = f(x)$, $x \in \mathbb{R}^2$, $f(0) = 0$. Look at the time-$T$ map associated to the solution flow $\phi$ for $\dot x = f(x)$:
\[
F_T(x) = \phi(x, T).
\]
$F_T$ is as differentiable as $\phi$ is in $x$. $F_T^{-1}$ exists, since $F_T^{-1}(x) = \phi(x, -T)$. $F_T(0) = 0$, a rest point, and
\[
F_T^n(x) = \underbrace{F_T \circ F_T \circ \cdots \circ F_T}_{n \text{ times}}(x) = \phi(x, nT)
\]
by the group property. Let
\[
O_{F_T}(x) = \{F_T^n(x) \mid n \in \mathbb{Z}\}.
\]

Figure 17. The flow $\phi$ and its associated map $F_T$.

Let
\[
W^s_{F_T}(0) = \{x \mid \lim_{n\to\infty} F_T^n(x) = 0\},
\qquad
W^u_{F_T}(0) = \{x \mid \lim_{n\to-\infty} F_T^n(x) = 0\},
\]
and
\[
W^s_{F_T}(0, V) = W^s_{F_T}(0) \cap \{x \mid F_T^n(x) \in V,\ n \ge 0\},
\]
where $V$ is a neighborhood of $0$.

Lemma 9.1. Let $W^s(0)$ be the stable set of $0$ for $\phi$ and $W^s_{F_T}(0)$ the stable set of $0$ for $F_T$, $T > 0$. Then
\[
W^s(0) = W^s_{F_T}(0).
\]

Proof: Clearly $W^s(0) \subset W^s_{F_T}(0)$, because if $x \in W^s(0)$ then $\phi(x,t) \to 0$ as $t \to \infty$, which means that $\phi(x,nT) \to 0$ as $n \to \infty$.

If $x \in W^s_{F_T}(0)$ then $F_T^n(x) \to 0$ as $n \to \infty$, i.e. $\phi(x,nT) \to 0$ as $n \to \infty$. Note that for all $\epsilon > 0$ there is a $\delta > 0$ such that if $z \in \mathbb{R}^2$, $\|z\| < \delta$, then $\|\phi(z,t)\| < \epsilon$ for $0 \le t \le T$. Fix $\epsilon > 0$ and choose $N$ so large that for all $n \ge N$, $\|\phi(x,nT)\| < \delta$; then $\|\phi(x,nT+s)\| < \epsilon$ for $0 \le s \le T$, so $\|\phi(x,t)\| < \epsilon$ for all $t > NT$.

We can deal with $W^s(0,V)$ and $W^s_{F_T}(0,V)$ similarly.

What is a hyperbolic fixed point for a map? Suppose we have the same situation as above for $\dot x = f(x)$, with
\[
Df|_0 = \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}.
\]
How do we compute $DF_T$? Consider the variational equation
\[
\dot x = f(x), \qquad \dot X = Df|_x\,X,
\]
where $X$ is an $n \times n$ matrix, with initial conditions $x(0) = x_0$ and $X(0) = I$. If $x(t), X(t)$ is a solution, then $X(t) = D_x\phi(x_0, t)$. Recall, this comes from differentiating:
\[
\frac{d}{dt}D_x\phi = D_x\Bigl(\frac{d\phi}{dt}\Bigr) = Df|_{\phi(x,t)}\,D_x\phi(x,t).
\]

Suppose we take $x(0) = 0$ and start at the rest point. Solving the $\dot x = f(x)$ part, we have $x(t) \equiv 0$. So
\[
\dot X = Df|_{x(t)}\,X = Df|_0\,X = \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}X,
\qquad X(0) = I,
\]
and we get
\[
X(t) = \exp\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}t,
\qquad
D_xF_T|_0 = \exp\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}T
= \begin{pmatrix} e^{\lambda T} & 0 \\ 0 & e^{\mu T} \end{pmatrix}.
\]
The eigenvalues of $DF_T|_0$ are $e^{(\text{eigenvalues of } Df|_0)\,T}$. So if $\mu$ is an eigenvalue of $Df|_0$ with real part $> 0$, then the corresponding eigenvalue for $DF_T|_0$ is $e^{\mu T}$ with $|e^{\mu T}| > 1$. If $\lambda$ is an eigenvalue of $Df|_0$ with real part $< 0$, then the corresponding eigenvalue for $DF_T|_0$ is $e^{\lambda T}$ with $|e^{\lambda T}| < 1$.
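The relation $DF_T|_0 = e^{(Df|_0)T}$ is easy to confirm numerically in one dimension. A sketch (my example, not from the notes): for $\dot x = -x + x^2$ we have $Df|_0 = -1$, so the time-1 map should have derivative $e^{-1}$ at $0$; estimate it with a centered difference of two RK4-integrated solutions.

```python
import math

def step(x, dt):
    # one RK4 step for x' = f(x) = -x + x^2
    f = lambda u: -u + u * u
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def F_T(x, T=1.0, n=1000):
    dt = T / n
    for _ in range(n):
        x = step(x, dt)
    return x

h = 1e-5
dFT = (F_T(h) - F_T(-h)) / (2 * h)   # ~ derivative of the time-1 map at 0
```

The centered difference kills the quadratic term of $F_T$, so the estimate agrees with $e^{-1}$ to high accuracy.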

Definition 9.1. Given $F : \mathbb{R}^n \to \mathbb{R}^n$, $F(x_0) = x_0$, $F$ a diffeomorphism, then $x_0$ is hyperbolic iff none of the eigenvalues of $DF|_{x_0}$ has norm $1$.

Definition 9.2. We call a fixed point $x_0$ of $F : \mathbb{R}^n \to \mathbb{R}^n$ a sink if there is a neighborhood $U$ of $x_0$ such that for all $z \in U$, $F^n(z) \in U$ for all $n \ge 0$, and $\lim_{n\to\infty}F^n(z) = x_0$.

Theorem 9.1. If all the eigenvalues of $DF|_{x_0}$ are inside the unit circle, then $x_0$ is a sink.

Theorem 9.2 (2-dim Stable/Unstable Manifold Theorem for Maps). Suppose we have $F : \mathbb{R}^2 \to \mathbb{R}^2$ a diffeomorphism, $F(0) = 0$, and
\[
DF|_0 = \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix},
\qquad |\lambda| > 1, \quad |\mu| < 1.
\]
Then there is a neighborhood $V$ of $0$ and $\phi : [-\delta,\delta] \to \mathbb{R}$ with $W^u(0,V)$ the graph of $\phi$, i.e. $W^u(0,V) = \{(x, \phi(x))\}$, and $\phi(0) = 0$, $\phi'(0) = 0$.

Steps:
1. Prove $\phi$ is Lipschitz.
2. Bootstrap to get more smoothness.

Look at a neighborhood of $0$:

Figure 18. What the linear part of $F$ does to the box.

For small boxes, the map will look a lot like its linear part. Note that $F(W^u(0)) = W^u(0)$. Set up a map on graphs in the box and look for fixed points. (Graph Transform Method.)

10. $C^k$ version of S/U Theorem (Oct. 18)

Lemma 10.1. Suppose $F$ is a $C^k$-diffeomorphism and $F(0) = 0$ is a hyperbolic fixed point. If $W^u(0,V)$ is the graph of a $C^k$ function, then $W^u(0)$ is a $C^k$ immersed submanifold, i.e. local implies global.

Proof: Recall our definition of a $C^k$ immersed submanifold: a neighborhood of each point in the manifold is (in nice coordinates) a $C^k$ graph. So pick $z \in W^u(0)$, i.e. $F^{-n}(z) \to 0$ as $n \to \infty$. For any neighborhood $V$ of $0$, there is an $N$ such that $F^{-n}(z) \in V$ for all $n \ge N$. So $F^{-N}(z) \in W^u(0,V)$, and $F^{-N}(z)$ is on the graph of a $C^k$ function. Use $F^{-N}$ to change coordinates; in these coordinates in $V$, $W^u(0)$ near $z$ is a graph. So $W^u(0)$ near $z$ is the graph of a $C^k$ function.

Figure 19. What $F$ does to the box.

Let us set up the 2-dimensional version: $F : \mathbb{R}^2 \to \mathbb{R}^2$, $F(0) = 0$, $F \in C^2$, with
\[
DF|_0 = \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix},
\qquad \lambda > 1 > \mu > 0.
\]
Our goal is to show that $W^u(0,V)$ is the graph of a function $\phi : E^u \cap V \to E^s$. The idea is to set up a nice neighborhood $V$, iterate the graph transform, and hope the graphs converge to $W^u(0,V)$. The local unstable manifold has to map to itself, so $W^u(0,V)$ should be the fixed point of this process.
1. Set up an appropriate space of graphs (of functions $\sigma$). We want $F$(graph in the set) to be a graph in the set, and to preserve as much smoothness as possible.
2. Need our space to be complete.
3. The map on this space has a sink (is a contraction).
4. Need to show that $W^u(0,V)$ is the fixed point.

Details. Start by choosing as a neighborhood $V$ of $0$ a box $[-v,v] \times [-v,v]$, with $v$ chosen below. Now write
\[
F(x,y) = (\lambda x, \mu y) + (g_1(x,y), g_2(x,y)).
\]
By the 2-dimensional Taylor's Theorem, we have $Dg_1|_0 = 0 = Dg_2|_0$, so $Dg_1$, $Dg_2$ are small near $0$, and there is $K_1$ such that if $\|(x,y)\| < \delta$, then
\[
|g_i(x,y)| \le K_1\|(x,y)\|^2.
\]
Take $v < 1$, and use that to get control of $F(V)$. We want the right edge to map beyond the right edge, so we want
\[
|\lambda v + g_1(v,y)| > v \quad\text{for } -v \le y \le v.
\]
Since $|g_1(v,y)| \le K_1(v^2 + y^2) \le 2K_1v^2$, it is enough that $2K_1v^2 < (\lambda - 1)v$, i.e.
\[
v < \frac{\lambda - 1}{2K_1} \qquad (\lambda > 1).
\]

Figure 20. What the map $F$ does to the box $V$.

For the top edge, we need
\[
|\mu v + g_2(x,v)| < v \quad\text{for } -v \le x \le v,
\]
i.e. $|g_2(x,v)| < (1-\mu)v$; since $|g_2(x,v)| \le 2K_1v^2$, it is enough that
\[
v < \frac{1-\mu}{2K_1} \qquad (\mu < 1).
\]
So choose $v$ to be less than all of these bounds.

Next, choose our set of graphs. Let
\[
\mathcal{G}_L = \{\sigma : E^u \cap V \to E^s \mid \text{for all } x_1, x_2 \in [-v,v],\ |\sigma(x_1) - \sigma(x_2)| \le L|x_1 - x_2|\},
\]
and assume that $L < 1$. Let
\[
G_L = \{A \subset V \mid A \text{ is the graph of some } \sigma \in \mathcal{G}_L\}.
\]
Define the graph transform: for $A \in G_L$, let
\[
F_\#(A) = \{F(x,y) \mid (x,y) \in A\} \cap V.
\]
We need that $F_\#(A) \in G_L$. We will first show that $F_\#(A)$ is the graph of some function, and then that this function is in $\mathcal{G}_L$.

Claim: for fixed $A \in G_L$ and each $x \in [-v,v]$, there is a $y \in [-v,v]$ such that $(x,y) \in F_\#(A)$. The graph $A$ is a curve connecting the left edge of $V$ to the right edge, so $F(A)$ does the same for $F(V)$. By the conditions on $F(V)$, we know that for each $x \in [-v,v]$ there is a $y \in [-v,v]$ such that $(x,y) \in F_\#(A)$: if the vertical line through $x$ didn't intersect $F_\#(A)$, this would be a contradiction.

But what if both $(x,y_1)$ and $(x,y_2) \in F_\#(A)$? Look at $F^{-1}(x,y_1)$, $F^{-1}(x,y_2)$ in $A$. We can write
\[
F^{-1}(x,y) = (x/\lambda,\ y/\mu) + (\eta_1(x,y), \eta_2(x,y)),
\]
where
\[
DF^{-1}|_0 = (DF|_0)^{-1} = \begin{pmatrix} 1/\lambda & 0 \\ 0 & 1/\mu \end{pmatrix}.
\]
Then there are $\delta_2$, $K_2$ such that if $\|(x,y)\| < \delta_2$, then $|\eta_i(x,y)| \le K_2\|(x,y)\|^2$ (replacing the old $\delta_1$ by $\delta_2$, and $K_1$ by $K_2$, if necessary). We expect that $F^{-1}(x,y_1)$ and $F^{-1}(x,y_2)$ are "almost" on top of each other, but this is bad because $A$ is the graph of a Lipschitz function.

Figure 21. What $F$ does to $V$ and to $A$.

Figure 22. The points $(x,y_1)$, $(x,y_2)$ are "almost" on top of each other.

11. Continuation of the $C^k$ proof (Oct. 21)

Now we know that each $F_\#(A)$ is the graph of a function, but we want that this function is in $\mathcal{G}_L$.

At each $(x,y) \in V$, let
$$C_{(x,y)} = \{\,(x_1, y_1) \mid |y - y_1| \le L\,|x - x_1|\,\}.$$
Claim: If $(x,y) \in V$ and $F(x,y) \in V$, then $F(C_{(x,y)} \cap V) \cap V \subset C_{F(x,y)}$. We say then that $F$ satisfies a cone condition. (This will be true provided that $v$ is sufficiently small.)
Proof: Suppose not, i.e. $|y - y_1| \le L\,|x - x_1|$ for some $(x,y), (x_1,y_1) \in V$, but
$$|(\mu y + g_2(x,y)) - (\mu y_1 + g_2(x_1,y_1))| > L\,|(\lambda x + g_1(x,y)) - (\lambda x_1 + g_1(x_1,y_1))| .$$
Then
$$\mu\,|y - y_1| + |g_2(x,y) - g_2(x_1,y_1)| > L\lambda\,|x - x_1| - L\,|g_1(x,y) - g_1(x_1,y_1)| ,$$
and
$$|g_i(x,y) - g_i(x_1,y_1)| \le \delta\,|x - x_1| + \delta\,|y - y_1|, \qquad i = 1, 2,$$
where
$$\delta \ge \max\left\{ \left|\frac{\partial g_i}{\partial x}\right|, \left|\frac{\partial g_i}{\partial y}\right| \ :\ i = 1,2,\ (x,y) \in V \right\}.$$
So
$$(\mu + \delta)\,|y - y_1| + \delta\,|x - x_1| > L\lambda\,|x - x_1| - L\delta\,|x - x_1| - L\delta\,|y - y_1|,$$
$$(\mu + \delta + L\delta)\,|y - y_1| > (L\lambda - L\delta - \delta)\,|x - x_1|,$$
which gives us
$$|y - y_1| > \frac{L\lambda - L\delta - \delta}{\mu + \delta + L\delta}\,|x - x_1| .$$
Claim: By taking $v$ sufficiently small,
$$\frac{L\lambda - L\delta - \delta}{\mu + \delta + L\delta} > L, \qquad \text{since} \qquad \frac{L\lambda - L\delta - \delta}{\mu + \delta + L\delta} \to \frac{L\lambda}{\mu} > L \quad (\delta \to 0).$$
(We know that $\partial g_i/\partial x$ and $\partial g_i/\partial y$ are continuous, and $0$ at $0$; by taking $v$ sufficiently small we can make $\delta$ as small as we like.) This contradicts $|y - y_1| \le L\,|x - x_1|$.
So if $A \in \mathcal{G}_L$, then for each $(x,y) \in A$ we have $A \subset C_{(x,y)} \cap V$, hence $F(A) \subset C_{F(x,y)} \cap V$. So $\mathcal{F}(A) \in \mathcal{G}_L$.

Claim: $\mathcal{F}$ is a contraction. Given $\sigma_1, \sigma_2 \in G_L$, compute $\|\mathcal{F}(\sigma_1) - \mathcal{F}(\sigma_2)\|$. We want it to be less than $r\,\|\sigma_1 - \sigma_2\|$ for some $r < 1$, where
$$\|\sigma_1 - \sigma_2\| = \sup_{x \in [-v,v]} |\sigma_1(x) - \sigma_2(x)| .$$
Fix $x \in [-v,v]$ and let $(x, y_i)$ be the point of the graph of $\mathcal{F}(\sigma_i)$ over $x$.


Figure 23. $\mathcal{F}$ is a contraction mapping.

$$F^{-1}(x, y_i) = \big(x/\lambda + \eta_1(x, y_i),\ y_i/\mu + \eta_2(x, y_i)\big).$$
Estimate $|y_1 - y_2|$ compared to $\|\sigma_1 - \sigma_2\|$. First note that
$$\big|\big(x/\lambda + \eta_1(x,y_1)\big) - \big(x/\lambda + \eta_1(x,y_2)\big)\big| \le \delta\,|y_1 - y_2|, \qquad \text{where} \quad \delta \ge \left|\frac{\partial \eta_i}{\partial x}\right|, \left|\frac{\partial \eta_i}{\partial y}\right| .$$
Since the two preimage points lie on the graphs of $\sigma_1$ and $\sigma_2$, with horizontal separation at most $\delta\,|y_1 - y_2|$, we have
$$\big|\big(y_1/\mu + \eta_2(x,y_1)\big) - \big(y_2/\mu + \eta_2(x,y_2)\big)\big| \le \|\sigma_1 - \sigma_2\| + L\,\big(\delta\,|y_1 - y_2|\big),$$
which gives us
$$\tfrac{1}{\mu}\,|y_1 - y_2| - |\eta_2(x,y_1) - \eta_2(x,y_2)| \le \|\sigma_1 - \sigma_2\| + L\delta\,|y_1 - y_2|,$$
$$\tfrac{1}{\mu}\,|y_1 - y_2| - \delta\,|y_1 - y_2| \le \|\sigma_1 - \sigma_2\| + L\delta\,|y_1 - y_2|,$$
$$\big(\tfrac{1}{\mu} - \delta - L\delta\big)\,|y_1 - y_2| \le \|\sigma_1 - \sigma_2\|,$$
$$|y_1 - y_2| \le \frac{1}{1/\mu - \delta - L\delta}\,\|\sigma_1 - \sigma_2\| .$$
Since the partials of $\eta_i$ are $0$ at $0$, take $v$ small so that
$$\frac{1}{1/\mu - \delta - L\delta} = r < 1 .$$
So $\mathcal{F}$ is a contraction. Now use the contraction mapping principle, and the fact that $\mathcal{G}_L$ is a complete metric space: there must exist a fixed point $\sigma^u$ for $\mathcal{F}$.
Claim: $\operatorname{graph} \sigma^u = W^u(0, V)$:
1. If $(x,y) \in \operatorname{graph} \sigma^u$, then $F^{-n}(x,y) \in V$ for all $n \ge 0$ and $F^{-n}(x,y) \to 0$ as $n \to \infty$.
2. If $(x,y) \notin \operatorname{graph} \sigma^u$, then there is an $n \ge 0$ such that $F^{-n}(x,y) \notin V$.

Proof:


Claim: For $v$ sufficiently small, there is an $r < 1$ such that if $(x,y) \in C_0$ and $(x_1, y_1) = F^{-1}(x,y)$, then $|x_1| < r\,|x|$.
Proof: We know that $x_1 = x/\lambda + \eta_1(x,y)$. Then
$$|\eta_1(x,y)| = |\eta_1(x,y) - \eta_1(0,0)| \le \delta\,|x| + \delta\,|y| \le \delta\,|x| + \delta L\,|x| .$$
So $|x_1| \le |x|/\lambda + (\delta + \delta L)\,|x|$, i.e. $|x_1| \le (1/\lambda + \delta + \delta L)\,|x|$. Since $1/\lambda < 1$, take $v$ small so that $1/\lambda + \delta + \delta L = r < 1$.
We know that $\sigma^u(0) = 0$: for any $\sigma \in G_L$ with $\sigma(0) = 0$ we have $\mathcal{F}^n(\sigma) \to \sigma^u$, and if $\sigma(0) = 0$, then the value of $\mathcal{F}(\sigma)$ at $0$ is also $0$. Moreover $\sigma^u \in G_L$, so the graph of $\sigma^u$ lies in $C_0$, and if $(x,y) \in \operatorname{graph} \sigma^u$, then $F^{-1}(x,y) \in \operatorname{graph} \sigma^u$. So $F^{-n}(x,y) \in \operatorname{graph} \sigma^u \subset C_0$ for all $n \ge 0$. By the claim, the $x$-coordinate of $F^{-n}(x,y)$ tends to $0$; since $\sigma^u$ is Lipschitz, the $y$-coordinate tends to $0$ also. Thus $\operatorname{graph} \sigma^u \subset W^u(0,V)$. If a point is not on the graph, its vertical distance from the graph increases under $F^{-1}$, so eventually the backward orbit leaves $V$. Start with $F : \mathbb{R}^2 \to \mathbb{R}^2$, a diffeomorphism, $F(0) = 0$, and lift to the unit tangent bundle
$$T^1\mathbb{R}^2 = \{\,((x,y), v) \mid v \in \mathbb{R}^2,\ \|v\| = 1\,\}.$$
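The graph-transform iteration just used in the proof is concrete enough to compute with. As an illustration (a toy map of my own choosing, not from the notes), take $F(x,y) = (\lambda x,\ \mu y + x^2)$ with $\lambda = 2$, $\mu = 1/2$. The invariant graph satisfies $\sigma(\lambda x) = \mu\,\sigma(x) + x^2$, which for this map has the exact solution $\sigma^u(x) = 2x^2/7$, and iterating the graph transform from $\sigma \equiv 0$ converges to it at the contraction rate $\mu$:

```python
import numpy as np

lam, mu = 2.0, 0.5                     # DF|0 = diag(lam, mu), lam > 1 > mu > 0
xs = np.linspace(-1.0, 1.0, 2001)
sigma = np.zeros_like(xs)              # initial guess: the graph y = 0

for _ in range(80):
    # F(x, y) = (lam*x, mu*y + x**2) maps the graph of sigma to the graph
    # of sigma_new, where sigma_new(u) = mu*sigma(u/lam) + (u/lam)**2
    sigma = mu * np.interp(xs / lam, xs, sigma) + (xs / lam) ** 2

err = np.max(np.abs(sigma - 2.0 * xs**2 / 7.0))   # exact fixed point: 2x^2/7
```

The remaining error is dominated by the linear interpolation on the grid; the iteration itself converges geometrically with ratio $\mu = 1/2$, matching the estimate $r < 1$ above.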

12. Smoothness (Oct. 28)

with
$$DF|_0 = \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}, \qquad \lambda > 1 > \mu > 0 .$$
The lifted map is
$$\hat F\big((x,y), v\big) = \left( F(x,y),\ \frac{DF|_{(x,y)}(v)}{\|DF|_{(x,y)}(v)\|} \right).$$
At $x = y = 0$, the eigendirections of $DF|_0$ give the fixed points of $\hat F$ on the circle of directions; here $\theta$ corresponds to $v = (\cos\theta, \sin\theta)$, and $\theta = 0$ (the unstable eigendirection) is stable in the $\theta$-direction. Compute
$$D\hat F\big|_{((0,0),\,0)} = \begin{pmatrix} \lambda & 0 & 0 \\ 0 & \mu & 0 \\ 0 & 0 & ? \end{pmatrix}.$$
At $x = y = 0$, what does $\hat F$ do to $((0,0), \theta)$?


Figure 24. The fixed points of the lifted map over $(x,y) = (0,0)$

Use the Lipschitz Unstable Manifold Theorem to get a 1-dimensional Lipschitz curve in $T^1\mathbb{R}^2$ through $((0,0), 0)$ corresponding to the unstable eigendirection. We need to show that $W^u((0,0),0)$ is the derivative of $W^u(0,0)$, i.e. for $z \in W^u(0,0)$, the unit tangent vector of $W^u(0,0)$ at $z$ is the corresponding point $(z, \theta(z)) \in W^u((0,0),0)$.
The direction of the image of $((0,0),\theta)$ under $\hat F$ is
$$\frac{(\lambda\cos\theta,\ \mu\sin\theta)}{\sqrt{\lambda^2\cos^2\theta + \mu^2\sin^2\theta}},$$
so the angle of the image is $\arctan\!\big(\frac{\mu\sin\theta}{\lambda\cos\theta}\big)$, i.e.
$$\hat F\big((0,0), \theta\big) = \Big((0,0),\ \arctan\big(\tfrac{\mu}{\lambda}\tan\theta\big)\Big),$$
and at $\theta = 0$,
$$\frac{\partial(\theta\text{-coordinate})}{\partial\theta} = \frac{\mu}{\lambda} .$$

Figure 25. The unstable manifold $W^u((0,0),0)$. The picture may be a bit unclear: the $\theta$-axis moves back into the 3rd dimension. Recall that there are fixed points on the $\theta$-axis, alternating between sink and source.


Figure 26. The unstable manifold $W^u(0,0)$ (what we get when we restrict to the plane).

For $(x, \sigma(x)) \in W^u(0, V)$, define
$$C(x) = \left\{\, v \ \middle|\ v = \lim_{n\to\infty} \frac{(x_n, \sigma(x_n)) - (x, \sigma(x))}{\|(x_n, \sigma(x_n)) - (x, \sigma(x))\|},\ \ x_n \to x \,\right\},$$
the set of limiting secant directions; such limits exist along subsequences because $W^u(0,V)$ is Lipschitz.
Claim: If $F(x, \sigma(x)) = (x_1, \sigma(x_1))$, then (up to length)
$$F(C(x)) = C(x_1),$$
and
$$\text{(31)} \qquad DF\big|_{(x,\sigma(x))}\left(1,\ \lim_{n\to\infty}\frac{\sigma(x_n) - \sigma(x)}{x_n - x}\right) = \lim_{n\to\infty} DF\big|_{(x,\sigma(x))}\left(1,\ \frac{\sigma(x_n) - \sigma(x)}{x_n - x}\right).$$
Proof: If we let
$$DF\big|_{(x,\sigma(x))} = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$
then the slope of the image direction in equation (31) is
$$\lim_{n\to\infty}\frac{c + d\,\frac{\sigma(x_n)-\sigma(x)}{x_n - x}}{a + b\,\frac{\sigma(x_n)-\sigma(x)}{x_n - x}} = \lim_{n\to\infty}\frac{c\,(x_n - x) + d\,(\sigma(x_n) - \sigma(x))}{a\,(x_n - x) + b\,(\sigma(x_n) - \sigma(x))} .$$
We know that $x_n \to x$, so $F(x_n, \sigma(x_n)) \to (x_1, \sigma(x_1))$. Look at $\{F(x_n, \sigma(x_n))\}$, using $X, Y$ for the projections onto the $x$ and $y$ coordinates. Check:
$$\frac{Y(F(x_n,\sigma(x_n))) - Y(F(x,\sigma(x)))}{X(F(x_n,\sigma(x_n))) - X(F(x,\sigma(x)))} = \frac{Y\big(DF|_{(x,\sigma(x))}(x_n - x,\ \sigma(x_n) - \sigma(x))\big) + Y(R)}{X\big(DF|_{(x,\sigma(x))}(x_n - x,\ \sigma(x_n) - \sigma(x))\big) + X(R)} ,$$
where $R$ is the higher order terms, so we can forget about it when we take the limit; we get
$$\lim_{n\to\infty}\frac{c\,(x_n - x) + d\,(\sigma(x_n) - \sigma(x))}{a\,(x_n - x) + b\,(\sigma(x_n) - \sigma(x))} ,$$
thus $\hat F : C(x) \to C(x_1)$.
What does this say? Applying the Lipschitz theorem to the lifted map: if $F$ is $C^3$, then $W^u(0)$ is $C^{1,\mathrm{Lip}}$; if $F$ is $C^\infty$, then $W^u(0)$ is $C^\infty$. In fact, we could show that $W^u(0)$ is as smooth as $F$, i.e. if $F$ is $C^k$, then $W^u(0)$ is $C^k$.

Figure 27. The stable manifold of $((0,0),0)$.

What is the geometric interpretation?
Remarks:
1. The stable manifold works the same way: replace $F$ by $F^{-1}$.
2. In higher dimensions, we showed that $F(\text{graph})$ is a graph. If $A$ is a Lipschitz graph $E^u \to E^s$, we need
$$F(A) \cap (E^s + z) \ne \emptyset \iff A \cap F^{-1}(E^s + z) \ne \emptyset .$$
Near the fixed point, $F^{-1}(E^s + z)$ is a perturbation of $E^s + w$ for some $w$, so use transversality of the intersection of $E^s + w$ with $A$.

13. The Stable/Unstable Manifold MetaTheorem (Oct. 30)
Theorem 13.1. Given a smooth ($C^k$ or $C^\infty$) vector field $F : \mathbb{R}^n \to \mathbb{R}^n$ with $F(0) = 0$, let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the eigenvalues of $DF|_0$ (with redundancy), and $E_1, E_2, \ldots, E_n$


Figure 28. Perturbation of the stable manifold

are the corresponding generalized eigenspaces; e.g. if
$$DF|_0 = \begin{pmatrix} \lambda_1 & & & \\ & a & b & \\ & -b & a & \\ & & & \ddots \end{pmatrix}$$
then $E_1 = \{(x_1, 0, \ldots, 0)\}$, $E_2 = \{(0, x_2, x_3, 0, \ldots, 0)\}$, etc. The $E_i$ are invariant subspaces for $DF|_0$.
For $r \le 0$, there is an invariant manifold $W^s_r(0)$ tangent to $E_1 \oplus \cdots \oplus E_m$, where $\operatorname{Re}\lambda_1, \ldots, \operatorname{Re}\lambda_m \le r$. First, $\dim W^s_r(0) = \dim(E_1 \oplus \cdots \oplus E_m)$. Also, if $r < 0$ and $r < \operatorname{Re}\lambda_i < 0$ for some $i$, then $W^s_r(0)$ is called a strong stable manifold and denoted $W^{ss}_r(0)$. If $r = 0$ and there is some eigenvalue $\lambda_i$ with $\operatorname{Re}\lambda_i = 0$, then $W^s_r(0)$ is called the center stable manifold and denoted $W^{cs}_r(0)$.
The same goes for $r \ge 0$: we get invariant manifolds associated with the eigenvalues $\lambda_j$ with $\operatorname{Re}\lambda_j \ge r$. If $r > 0$ and there is a $j$ with $0 < \operatorname{Re}\lambda_j < r$, then $W^u_r(0)$ is called the strong unstable manifold, denoted $W^{su}_r(0)$. If $r = 0$ and there is a $j$ with $\operatorname{Re}\lambda_j = 0$, then $W^u_r(0)$ is called the center unstable manifold, denoted $W^{cu}_r(0)$.
A center manifold is defined as
$$W^{cs}(0) \cap W^{cu}(0)$$
and is denoted $W^c(0)$. (It is tangent to the eigenspaces for eigenvalues with real part $0$.)
Remark: The proofs use the same sort of ideas (contraction on a set of graphs).


Example:
$$\dot x = x^2, \qquad \dot y = -y, \qquad F\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x^2 \\ -y \end{pmatrix}, \qquad DF|_0 = \begin{pmatrix} 0 & 0 \\ 0 & -1 \end{pmatrix}.$$

Figure 29. A center manifold. $W^{ss}(0)$ is the $y$-axis, $W^{cs}(0)$ is all of $\mathbb{R}^2$, so we have a lot of choices for $W^c(0)$.

Choose any orbit on the left ($x < 0$) together with the positive $x$-axis: each such curve is a center manifold. So center manifolds are not unique, as opposed to the hyperbolic case.
Remarks:
1. Warning: these center manifolds are not necessarily as smooth as $F$.
2. The hyperbolic case is generic: given a vector field with a fixed point at $0$, you can "expect" it to be hyperbolic, i.e. in the set of vector fields with a fixed point at $0$, the vector fields with hyperbolic fixed point at $0$ form an open dense set in the $C^1$ topology.
Proof: (Openness.) If $F$ has a hyperbolic fixed point at $0$ and $G$ is sufficiently $C^1$-close to $F$ with a fixed point at $0$, then $DG|_0$ is close to $DF|_0$, so the eigenvalues are close; if none of the eigenvalues of $DF|_0$ lie on the imaginary axis and $G$ is close enough, the same holds for $G$. (Density.) If $0$ is not hyperbolic for $F$, then $DF|_0$ has an eigenvalue on the imaginary axis. Simply add $\epsilon\,\mathrm{Id}$ to $F$; then $D(F + \epsilon\,\mathrm{Id})|_0 = DF|_0 + \epsilon I_n$, so every eigenvalue has nonzero real part if $\epsilon$ is sufficiently small.
So why do we need center manifolds at all? Suppose we have a one-parameter family of vector fields $F_\mu$, all with a fixed point at $0$. If at $\mu = 0$, $DF_0|_0$ is hyperbolic with an $s$-dimensional stable and an $(n-s)$-dimensional unstable part, but $DF_1|_0$ has an $(s+1)$-dimensional stable and an $(n-s-1)$-dimensional unstable part, then for some $\mu \in [0,1]$, $DF_\mu|_0$ is not hyperbolic. We'll talk a lot more about this when we do bifurcation theory.
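The non-uniqueness in the example can be checked directly: for $\dot x = x^2$, $\dot y = -y$, any curve $y = Ce^{1/x}$ on $x < 0$, glued to the positive $x$-axis, satisfies $dy/dx = -y/x^2$ and is therefore invariant and tangent to the $x$-axis at $0$. A numerical verification of the invariance (my own illustration, not from the notes):

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, z):                 # xdot = x^2, ydot = -y
    x, y = z
    return [x * x, -y]

x0 = -0.5
z0 = [x0, np.exp(1.0 / x0)]  # start on the candidate center manifold y = e^{1/x}
sol = solve_ivp(f, (0.0, 1.0), z0, rtol=1e-11, atol=1e-14)
x1, y1 = sol.y[:, -1]
residual = abs(y1 - np.exp(1.0 / x1))   # the orbit stays on y = e^{1/x}
```

The same check with any other constant $C$ in $y = Ce^{1/x}$ gives another invariant curve — a different center manifold.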


CHAPTER 3

Using maps to understand ows


1. Periodic orbits and Poincaré sections (Nov. 1)

The next most complicated invariant set (after the fixed point) is the periodic orbit. For example, if $\phi$ is a flow on $\mathbb{R}^n$, then $x_0 \in \mathbb{R}^n$ is called a periodic point with least period $T$ if
$$\phi(x_0, T) = x_0, \qquad \phi(x_0, t) \ne x_0 \ \text{ for } 0 < t < T.$$
Of course, we see that $\phi(x_0, nT) = x_0$ for all $n \in \mathbb{N}$. We call $O(x_0)$ a periodic orbit. $O(x_0)$ is the smooth image of a circle in $\mathbb{R}^n$, so the dynamics of $\phi$ restricted to $O(x_0)$ are very easy. What about near periodic orbits?
Idea (due to Poincaré). Let $\Sigma$ be an $(n-1)$-dimensional surface (a copy of the $(n-1)$-dimensional disk) that contains $x_0$, such that for all $x \in \Sigma$, $F(x)$ is not tangent to $\Sigma$.

Figure 1. A Poincaré section

$\Sigma$ is called a Poincaré section for $\phi$. Define a map $P : \Sigma \to \Sigma$ as follows. For each $x \in \Sigma$, let
$$\tau(x) = \inf\{\, t > 0 \mid \phi(x, t) \in \Sigma \,\}.$$
If $\phi(x,t) \notin \Sigma$ for all $t > 0$, let $\tau(x) = \infty$. By continuity with respect to initial conditions, if $x$ is near enough to $x_0$, then $\tau(x) < \infty$. We need to show that $\tau(x)$ is a smooth function in a neighborhood of $x_0$ ($\tau : \Sigma \to \mathbb{R}$).

Figure 2. If the initial condition is far enough away, it may not actually return to $\Sigma$.

We know that $\tau(x_0) = T$, so shrink $\Sigma$ if necessary so that $\tau(x) \in [T-1, T+1]$. (Local theory near periodic orbits.) Next choose coordinates so that $x_0 = 0$ and $\Sigma \subset \{(0, x_2, \ldots, x_n)\}$ in a neighborhood of $x_0$. Let $f : \mathbb{R}^{n-1} \times \mathbb{R} \to \mathbb{R}$,
$$f(x_2, \ldots, x_n, t) = \pi_1\,\phi\big((0, x_2, \ldots, x_n), t\big),$$
where $\pi_1$ is the restriction to the 1st coordinate. This is defined for $x_2, \ldots, x_n$ near $0$ and $t$ near $T$. Look at the level set $f^{-1}(0)$. The Implicit Function Theorem implies that there is a $\tau(x_2, \ldots, x_n)$ such that $f(x_2, \ldots, x_n, \tau(x_2, \ldots, x_n)) = 0$, provided that
$$\frac{\partial f}{\partial t}(0, T) \ne 0 .$$
This will work because
$$\frac{\partial f}{\partial t}(0, T) = \pi_1\,\frac{\partial\phi}{\partial t}(0, T) = \pi_1\, F(\phi(0, T)) = \pi_1\, F(x_0),$$
and $F(x_0)$ is not tangent to $\Sigma$. So $\tau(x)$ is smooth on $\Sigma$ near $x_0$. Define $P : \Sigma \to \Sigma$,
$$P(x_2, \ldots, x_n) = \phi\big((0, x_2, \ldots, x_n),\ \tau(x_2, \ldots, x_n)\big),$$
which is defined and smooth on a neighborhood of $x_0$ in $\Sigma$. $P$ is the first return map or Poincaré map.
Remarks:
1. $P(x_0) = x_0$, so $P$ has a fixed point at $O(x_0) \cap \Sigma$.
2. $P^{-1}$ exists (follow the flow backwards). Of course $P^{-1}$ is smooth, so $P$ is a diffeomorphism on some neighborhood of $x_0$ in $\Sigma$.


Let us denote
$$W^s(O(x_0)) = \{\, z \in \mathbb{R}^n \mid \operatorname{dist}(\phi(z,t),\ O(x_0)) \to 0 \text{ as } t \to \infty \,\},$$
$$W^u(O(x_0)) = \{\, z \in \mathbb{R}^n \mid \operatorname{dist}(\phi(z,t),\ O(x_0)) \to 0 \text{ as } t \to -\infty \,\}.$$
Remark: This does not require that $\phi(z,t) \to \phi(y,t)$ for some $y \in O(x_0)$ when $t$ is large, just that $\phi(z,t)$ is close to the set $O(x_0)$ (i.e. not necessarily "in phase").
Definition 1.1. We say that $O(x_0)$ is a sink if $W^s(O(x_0))$ contains an open neighborhood of $O(x_0)$, and a source if $W^u(O(x_0))$ contains an open neighborhood of $O(x_0)$.
Theorem 1.1. Let $x_0$, $\Sigma$, $P$ be as above and let $\lambda_1, \lambda_2, \ldots, \lambda_{n-1}$ be the eigenvalues of $DP|_{x_0}$ (perhaps with redundancy).
1. If $|\lambda_i| < 1$ for $i = 1, \ldots, n-1$, then $O(x_0)$ is a sink.
2. If $|\lambda_i| > 1$ for $i = 1, \ldots, n-1$, then $O(x_0)$ is a source.
3. If $|\lambda_1|, \ldots, |\lambda_s| < 1$ and $|\lambda_{s+1}|, \ldots, |\lambda_{n-1}| > 1$, i.e. $x_0$ is a hyperbolic fixed point for $P$, and if $E^s$ is the direct sum of the generalized eigenspaces for $\lambda_1, \ldots, \lambda_s$ and $E^u$ is the direct sum of the generalized eigenspaces for $\lambda_{s+1}, \ldots, \lambda_{n-1}$, then $W^s(O(x_0))$ and $W^u(O(x_0))$ are immersed submanifolds, as smooth as $\phi$, of dimension $\dim E^s + 1$ and $\dim E^u + 1$ respectively, and $W^s(O(x_0)) \cap \Sigma$ is tangent to $E^s$ at $x_0$, and $W^u(O(x_0)) \cap \Sigma$ is tangent to $E^u$ at $x_0$.
Proof:
1. Look at the picture.

Figure 3. The Poincaré map of a neighborhood

2. Same picture in backwards time.

72

3. USING MAPS TO UNDERSTAND FLOWS

Figure 4. Just as for the flow, the stable manifold shrinks under the Poincaré map and the unstable manifold grows.

3. We have, by transversality,
$$\dim W^s(O(x_0)) + \dim W^u(O(x_0)) = n + 1 .$$
Note: The periodic orbit is a subset of the intersection,
$$O(x_0) \subset W^u(O(x_0)) \cap W^s(O(x_0)) .$$
They intersect transversally at the orbit, but the following picture is possible:

Figure 5. The fixed point at the origin has two homoclinic orbits in this picture.

Two questions:
1. What of this theory is independent of the choice of $\Sigma$?
2. Can you ever compute the eigenvalues of $DP|_{x_0}$, which are among the eigenvalues of $D_x\phi(x_0, T)$?


2. Computing Floquet Multipliers (Nov. 4)
As before, we have a vector field $F$, flow $\phi$, periodic orbit $O(x_0)$, Poincaré section $\Sigma$, and Poincaré map $P : \Sigma \to \Sigma$.

Figure 6. A Poincaré section

The behavior near $O(x_0)$ is determined by the eigenvalues of $DP|_{x_0}$. The first problem is the independence of the choice of $\Sigma$: we could choose two sections, $P_1 : \Sigma_1 \to \Sigma_1$ and $P_2 : \Sigma_2 \to \Sigma_2$.

Figure 7. Is the Poincaré map independent of the choice of section?

Are the Floquet multipliers independent of the choice of $\Sigma$?


Idea. Define
$$p_1 : \Sigma_1 \to \Sigma_2, \qquad x \mapsto \phi(x, \tau_1(x)),$$
where
$$\tau_1(x) = \min\{\, t \ge 0 \mid \phi(x, t) \in \Sigma_2 \,\}.$$
As before, $p_1$ is defined near $x_0$ and is as smooth as $\phi$ and the $\Sigma_i$. Also, $p_1$ is invertible:
$$p_1^{-1}(x) = \phi(x, \tau_1^{-1}(x)), \qquad \tau_1^{-1}(x) = \max\{\, t \le 0 \mid \phi(x, t) \in \Sigma_1 \,\} \quad (x \in \Sigma_2).$$
Then our claim is that $P_1 = p_1^{-1} \circ P_2 \circ p_1$; this is the group property of $\phi$. So, with $x_1 = p_1(x_0)$,
$$DP_1|_{x_0} = D(p_1^{-1})\big|_{x_1} \cdot DP_2|_{x_1} \cdot Dp_1|_{x_0} .$$
But
$$D(p_1^{-1})\big|_{x_1} = (Dp_1|_{x_0})^{-1} ,$$
so $DP_1|_{x_0}$ is similar to $DP_2|_{x_1}$, and in particular these two matrices have the same eigenvalues. The same idea works if $x_0 = x_1$:

Figure 8. We assume that the Poincaré sections intersect at $x_0$.

Define $p_1 : \Sigma_1 \to \Sigma_2$, $x \mapsto \phi(x, \tau_1(x))$, where $\tau_1(x)$ is the $t$ near $0$ such that $\phi(x,t) \in \Sigma_2$.
So the eigenvalues of the return map are invariants of the orbit. But we still need a way to compute the eigenvalues of the Poincaré map.
Claim: The eigenvalues of $D_x\phi|_{(x_0,T)}$ are $1, \lambda_1, \ldots, \lambda_{n-1}$, where $\lambda_1, \ldots, \lambda_{n-1}$ are the Floquet multipliers (the eigenvalues of $DP|_{x_0}$).
Proof: The time-$T$ map takes a neighborhood of $x_0$ to a neighborhood of $x_0$, and $\phi(\cdot, T)$ preserves the periodic orbit $O(x_0)$. Let $\gamma(t) = \phi(x_0, t)$, so $\gamma : \mathbb{R} \to \mathbb{R}^n$.


Then
$$\phi(\gamma(t), T) = \phi(\phi(x_0, t), T) = \phi(x_0, t + T) = \phi(x_0, t) = \gamma(t),$$
i.e. all points on $\gamma$ are fixed under $\phi(\cdot, T)$. So
$$\frac{d}{dt}\,\phi(\gamma(t), T) = D_x\phi\big|_{(\gamma(t), T)}\,\frac{d\gamma}{dt} .$$
Also,
$$\frac{d}{dt}\,\phi(\gamma(t), T) = \frac{d\gamma}{dt} .$$
If we let $t = 0$ ($\gamma(0) = x_0$),
$$D_x\phi\big|_{(x_0, T)}\,\frac{d\gamma}{dt}\Big|_0 = \frac{d\gamma}{dt}\Big|_0,$$
i.e. $\frac{d\gamma}{dt}\big|_0$ is an eigenvector with eigenvalue $1$. Also,
$$\frac{d\gamma}{dt}\Big|_{t=0} = F(\gamma(0)) = F(x_0),$$
so
$$D_x\phi\big|_{(x_0, T)}\, F(x_0) = F(x_0) .$$
We need that the rest of the eigenvalues of $D_x\phi|_{(x_0,T)}$ are the eigenvalues of $DP|_{x_0}$ for a Poincaré section $P : \Sigma \to \Sigma$. Choose nice variables so that $F(x_0) = (1, 0, \ldots, 0)$, and then
$$D_x\phi\big|_{(x_0,T)} = \begin{pmatrix} 1 & ? & \cdots & ? \\ 0 & & & \\ \vdots & & B & \\ 0 & & & \end{pmatrix}, \qquad
D_x\phi\big|_{(x_0,T)} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} x_1 + ?\,x_2 + \cdots + ?\,x_n \\ \\ B \begin{pmatrix} x_2 \\ \vdots \\ x_n \end{pmatrix} \end{pmatrix}.$$
From linear algebra, we know that the eigenvalues of $B$ are the rest of the eigenvalues of $D_x\phi$.
Next, choose $\Sigma = \{(0, x_2, \ldots, x_n)\}$, where in these nice coordinates $x_0 = 0$. Let $\tau : \Sigma \to \mathbb{R}$ be the first return time map, so the Poincaré map $P : \Sigma \to \Sigma$ is
$$P(x_2, \ldots, x_n) = \pi\Big( \phi\big((0, x_2, \ldots, x_n),\ \tau(0, x_2, \ldots, x_n)\big) \Big), \qquad \pi(x_1, x_2, \ldots, x_n) = (x_2, \ldots, x_n).$$


Extend $\tau$ to a neighborhood of $x_0$ in $\mathbb{R}^n$ continuously, so that $\tau(x_1, \ldots, x_n)$ is the time near $T$ such that
$$\phi\big((x_1, \ldots, x_n),\ \tau(x_1, \ldots, x_n)\big) \in \Sigma .$$
Compute the $(n-1)$-dimensional vector
$$\frac{\partial P}{\partial x_2} = \pi\left( \frac{\partial}{\partial x_2}\,\phi\big((0, x_2, \ldots, x_n),\ \tau(0, x_2, \ldots, x_n)\big) \right),$$
where, by the chain rule,
$$\frac{\partial}{\partial x_2}\,\phi\big(\cdot,\ \tau(\cdot)\big) = \frac{\partial\phi}{\partial x_2}\Big|_{(\cdot,\,\tau(\cdot))} + \frac{\partial\phi}{\partial t}\Big|_{(\cdot,\,\tau(\cdot))}\,\frac{d\tau}{dx_2} .$$
Evaluate this at $x_0 = 0$, so that $\tau(0) = T$:
$$\frac{\partial\phi}{\partial x_2}\Big|_{(x_0,T)} + \frac{\partial\phi}{\partial t}\Big|_{(x_0,T)}\,\frac{d\tau}{dx_2}\Big|_{x_0}
= \frac{\partial\phi}{\partial x_2}\Big|_{(x_0,T)} + \underbrace{(1, 0, \ldots, 0)}_{\text{vector field at } x_0}\,\frac{d\tau}{dx_2}\Big|_{x_0} .$$
So the last $n-1$ components of $\frac{\partial P}{\partial x_2}\big|_{x_0}$ are the same as the last $n-1$ components of $\frac{\partial\phi}{\partial x_2}\big|_{(x_0,T)}$. The proof works exactly the same for $x_3, \ldots, x_n$, so $DP|_{x_0} = B$ in these nice coordinates. Thus the eigenvalues of $DP$ at $x_0$ are the same as those of $B$.
This shifts the problem: how do we compute the eigenvalues of $D_x\phi|_{(x_0,T)}$? There's only one way to get at $D_x\phi$ (unless we have an explicit formula for $\phi$): solve
$$\dot x = F(x), \qquad x(0) = x_0 \qquad (\text{an } n\text{-vector}),$$
$$\dot X = DF|_{x(t)}\, X, \qquad X(0) = I \qquad (\text{an } n \times n \text{ matrix}),$$
where $x(t) = \phi(x_0, t)$ is the periodic orbit and $X(t) = D_x\phi|_{(x_0,t)}$. (Verify that $X$ satisfies this variational equation by differentiating $\frac{\partial\phi}{\partial t}(x_0, t) = F(\phi(x_0, t))$ with respect to $x$.)
So, can we solve this in general? There are some situations where we can:
1. Find an explicit expression for the periodic orbit $\phi(x_0, t)$. (Easier, in general, than finding a formula for $\phi(x,t)$.)
2. $\dot X = DF|_{\phi(x_0,t)}\, X$ has the form $\dot X = A(t) X$, where $A(t)$ is an $n \times n$ matrix, "nice" in a sense to be made precise later.


3. More computation of Floquet multipliers (Nov. 6)

So when can we compute these Floquet multipliers? Let $A(t) = DF|_{\phi(x_0,t)}$, so that we have the equations
$$\dot x = F(x), \quad x(0) = x_0; \qquad \dot X = A(t)\,X, \quad X(0) = I .$$
There are some cases where we can solve this system:
1. $A(t) = A_0$, a constant matrix. In this case the solution is $X(t) = e^{A_0 t}$.
2. If $n = 1$, $A$ is a $1 \times 1$ matrix and $\dot X = a(t) X$. Then
$$X(t) = \exp\left\{ \int_0^t a(s)\, ds \right\}.$$
3. If
$$A(t) = \begin{pmatrix} a_1(t) & & & \\ & a_2(t) & & \\ & & a_3(t) & \\ & & & \ddots \end{pmatrix},$$
i.e. $A$ is diagonal, then
$$X(t) = \begin{pmatrix} \exp\{\int_0^t a_1(s)\,ds\} & & \\ & \exp\{\int_0^t a_2(s)\,ds\} & \\ & & \ddots \end{pmatrix}.$$
We would want this solution to always work (even in the non-diagonal case). Since
$$\dot X = \lim_{\Delta t \to 0} \frac{\exp\{\int_0^{t+\Delta t} A(s)\,ds\} - \exp\{\int_0^t A(s)\,ds\}}{\Delta t},$$
we would like to factor, i.e.
$$\exp\left\{\int_0^t A(s)\,ds\right\} \cdot \frac{\exp\{\int_t^{t+\Delta t} A(s)\,ds\} - I}{\Delta t} .$$
But this presumes that $e^B e^C = e^{B+C}$, which is not true in general, since $B$ and $C$ may not commute.
Example [set-up for bifurcation theory]: $\ddot y = -\sin y$. We can change this to a system:
$$\dot y = v, \qquad \dot v = -\sin y .$$
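Before analyzing the pendulum, note that the failure of $e^B e^C = e^{B+C}$ for non-commuting matrices is easy to check numerically (a quick illustration of my own):

```python
import numpy as np
from scipy.linalg import expm

B = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[0.0, 0.0], [1.0, 0.0]])
# BC != CB, and indeed expm(B) @ expm(C) differs from expm(B + C)
noncommuting_equal = np.allclose(expm(B) @ expm(C), expm(B + C))

D1, D2 = np.diag([1.0, 2.0]), np.diag([-3.0, 0.5])
# diagonal matrices commute, so here the factorization IS valid
commuting_equal = np.allclose(expm(D1) @ expm(D2), expm(D1 + D2))
```

For this $B$ and $C$, $e^B e^C = \begin{pmatrix}2&1\\1&1\end{pmatrix}$ while $e^{B+C} = \begin{pmatrix}\cosh 1&\sinh 1\\\sinh 1&\cosh 1\end{pmatrix}$.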


Figure 9. The acceleration due to gravity has magnitude $\sin(y)$.

Figure 10. The phase plane for $\ddot y = -\sin(y)$

We can see that $(\pm\pi, 0)$ are hyperbolic fixed points. The point $(0,0)$ has eigenvalues $\pm i$. We have an invariant of the motion, $H(y,v) = v^2/2 - \cos y$:
$$\frac{d}{dt}\,H(y(t), v(t)) = \frac{\partial H}{\partial y}\,\dot y + \frac{\partial H}{\partial v}\,\dot v = \sin(y)\,v + v\,(-\sin(y)) = 0,$$
so this is a Hamiltonian system. Add a small periodic forcing:
$$\ddot y = -\sin y + \underbrace{\epsilon\, g(t)}_{\text{external}},$$
where $g(t + T) = g(t)$. We can make this system autonomous:
$$\dot y = v, \qquad \dot v = -\sin(y) + \epsilon\, g(s), \qquad \dot s = 1 .$$
The phase space is
$$\mathbb{R}^2 \times S^1 = \{\,(y, v, s) \mid (y,v) \in \mathbb{R}^2,\ s \in S^1\,\},$$
where $S^1 = [0, T]$ with $0$ and $T$ identified. When $\epsilon = 0$, we get
$$\dot y = v, \qquad \dot v = -\sin y, \qquad \dot s = 1,$$


and the points $(0, 0, 0)$ and $(\pm\pi, 0, 0)$ correspond to periodic orbits.

Figure 11. Some of the periodic orbits of the now-autonomous system

What happens to the periodic orbits when $\epsilon > 0$ but small, i.e. for small periodic forcing? Are there period-$T$ periodic orbits near $(y = 0, v = 0)$ and/or $(y = \pm\pi, v = 0)$? This is a problem in bifurcation theory: what changes, and what stays the same, when we change a parameter? The first step is to understand the orbits in the $\epsilon = 0$ case: compute the Floquet multipliers of $(0,0,0)$ and $(\pm\pi, 0, 0)$.
Let's start with $(\pi, 0, 0)$. We expect a saddle with 1-dimensional $W^s$ and 1-dimensional $W^u$ in the section. Let $\phi((y, v, s), t)$ be the solution flow for
$$\dot y = v, \qquad \dot v = -\sin y, \qquad \dot s = 1 \qquad (= F(y, v, s)).$$
Then $\phi((\pi, 0, 0), t) = (\pi, 0, t)$ is the periodic orbit, and
$$\dot X = DF|_{(\pi, 0, t)}\, X = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} X .$$
If we diagonalize, with
$$P = \begin{pmatrix} 1 & -1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad
P^{-1}\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} P = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$
then making the change of coordinates $X = P Z P^{-1}$ we get
$$\dot Z = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix} Z, \qquad Z(0) = I, \qquad \text{so} \qquad
Z(t) = \begin{pmatrix} e^t & 0 & 0 \\ 0 & e^{-t} & 0 \\ 0 & 0 & 1 \end{pmatrix},$$
and changing back to the original coordinates, $X(t) = P Z(t) P^{-1}$ has eigenvalues $e^t, e^{-t}, 1$. At time $T$ the eigenvalues are $e^T, e^{-T}, 1$. Note that one is greater than $1$ and one is less than $1$: this is a hyperbolic periodic orbit.
Now, at $(0,0,0)$: $\phi((0,0,0), t) = (0, 0, t)$, and
$$\dot X = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} X, \qquad X(0) = I .$$
Thus
$$X(t) = \begin{pmatrix} \cos t & \sin t & 0 \\ -\sin t & \cos t & 0 \\ 0 & 0 & 1 \end{pmatrix},$$
and the eigenvalues of this matrix at $t = T$ are $\cos T \pm i\sin T$ and $1$. This is not hyperbolic. If $T = 2n\pi$ then the eigenvalues are $1, 1, 1$, and if $T = (2n+1)\pi$ then the eigenvalues are $-1, -1, 1$.
4. Bifurcation Theory (Nov. 8)

Consider the family of systems $\dot x = F_\mu(x)$, where $F_\mu : \mathbb{R}^n \to \mathbb{R}^n$ for each $\mu$. If $\mu \in \mathbb{R}$, we call this a 1-parameter family, and if $\mu \in \mathbb{R}^m$, we call it an $m$-parameter family.

Figure 12


How does the solution change as $\mu$ changes? We say that a bifurcation occurs at $\mu = \mu_0$ if the flow for $\mu < \mu_0$ is "different" from the flow for $\mu > \mu_0$. What you mean by "different", however, depends on the context.
Example: Is the flow for
$$\begin{pmatrix} \dot x \\ \dot y \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & -2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \quad \text{(see Figure 13)}$$

Figure 13. A straight sink

different from
$$\begin{pmatrix} \dot x \\ \dot y \end{pmatrix} = \begin{pmatrix} -1 & -1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}? \quad \text{(see Figure 14)}$$

Figure 14. A spiral sink

It is certainly different from
$$\begin{pmatrix} \dot x \\ \dot y \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}. \quad \text{(see Figure 15)}$$


Figure 15. A saddle

The most important thing to note is that bifurcations almost never happen.
Theorem 4.1 (Straightening-Out Lemma). Given a vector field $F : \mathbb{R}^n \to \mathbb{R}^n$ with $F(x_0) \ne 0$, there is a neighborhood $V$ of $x_0$ and coordinates on $V$ such that $F(x) = (1, 0, \ldots, 0)$ for all $x \in V$. In these coordinates, the solution flow is $\phi(x, t) = x + (t, 0, \ldots, 0)$.
Idea of Proof.
1. Choose a surface of section $\Sigma$ at $x_0$ such that the vector field is never tangent to $\Sigma$.
2. Choose coordinates so that $\Sigma \subset \{(0, x_2, \ldots, x_n)\}$.
3. For $x$ close to $\Sigma$, find the small time $t$ so that $\phi(x, t) \in \Sigma$. Then the coordinates of $x$ are $(-t, x_2, \ldots, x_n)$, where $\phi(x, t) = (0, x_2, \ldots, x_n)$.
4. Check, in these coordinates, that $F = (1, 0, \ldots, 0)$.
So, given a 1-parameter family of flows, if $F_{\mu_0}(x_0) \ne 0$ for some choice of $\mu_0, x_0$, then (assuming that $F_\mu$ is at least continuous in $\mu$), for $\mu$ near $\mu_0$, $F_\mu(x_0) \ne 0$. So there is a neighborhood of $x_0$ on which the flow of $F_\mu$ is just the straight-line flow: up to a change of coordinates, locally, there is no change in the flow. This says that bifurcations can only happen
1. where $F_\mu(x_0) = 0$ (at fixed points), or
2. globally.
These global bifurcations are hard to see; we will need another way to make them "local". We will have three ways of attacking this problem:
1. bifurcations of fixed points (infinitely many cases),
2. bifurcations of periodic orbits (fixed points of Poincaré maps), and
3. the Melnikov method.
So, for the first: let $F_\mu$ be a 1-parameter family, and assume that $F_\mu$ is as smooth as necessary in $x$ and $\mu$. Assume we have a fixed point $F_0(0) = 0$. Again, we will see that bifurcations of fixed points almost never happen.


Theorem 4.2. If $0$ is a hyperbolic fixed point of $F_0$, then there is an $\epsilon_1 > 0$ and a map
$$\gamma : (-\epsilon_1, \epsilon_1) \to \mathbb{R}^n$$
satisfying
1. $\gamma(0) = 0$,
2. $F_\mu(\gamma(\mu)) = 0$,
3. $\gamma(\mu)$ is smooth (as smooth as $F$),
4. there exists a neighborhood $V$ of $0$ such that if $\mu \in (-\epsilon_1, \epsilon_1)$, $x \in V$, and $F_\mu(x) = 0$, then $\gamma(\mu) = x$, and
5. the dimensions of $W^s_{F_\mu}(\gamma(\mu))$ and $W^u_{F_\mu}(\gamma(\mu))$ are the same as those of $W^s_{F_0}(0)$ and $W^u_{F_0}(0)$ (and $\gamma(\mu)$ is hyperbolic for $F_\mu$). Also, $D_x F_\mu(\gamma(\mu))$ is continuous in $\mu$.
Proof: Let $\mathbb{F} : \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}^n$, $\mathbb{F}(x, \mu) = F_\mu(x)$. We know that $\mathbb{F}(0, 0) = 0$, and the location of the fixed points is given by the $0$ level set of $\mathbb{F}$. What can we say about $\mathbb{F}^{-1}(0)$? The Implicit Function Theorem implies that there is an $\epsilon_1$ and a smooth function $\gamma : (-\epsilon_1, \epsilon_1) \to \mathbb{R}^n$ so that
$$\mathbb{F}(\gamma(\mu), \mu) = 0,$$
i.e. the level set is the graph of $\gamma$. The hypothesis that has to be satisfied is a condition on the partials of $\mathbb{F}$, which might not hold if the conditions of the theorem do not hold.
Compute $\frac{d\gamma}{d\mu}$, say in dimension $n = 2$, where $\gamma(\mu) = (\gamma_1(\mu), \gamma_2(\mu))$. We know that
$$\mathbb{F}(\gamma_1(\mu), \gamma_2(\mu), \mu) = (0, 0), \qquad \text{so} \qquad \frac{d}{d\mu}\,\mathbb{F}(\gamma_1(\mu), \gamma_2(\mu), \mu) = (0, 0).$$
Let $F_\mu(x) = (f_\mu(x), g_\mu(x))$. Computing, we need:
$$\begin{pmatrix} \frac{\partial f}{\partial x_1} & \frac{\partial f}{\partial x_2} \\[2pt] \frac{\partial g}{\partial x_1} & \frac{\partial g}{\partial x_2} \end{pmatrix} \begin{pmatrix} \frac{d\gamma_1}{d\mu} \\[2pt] \frac{d\gamma_2}{d\mu} \end{pmatrix} + \begin{pmatrix} \frac{df}{d\mu} \\[2pt] \frac{dg}{d\mu} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
So we need
$$\begin{pmatrix} \frac{\partial f}{\partial x_1} & \frac{\partial f}{\partial x_2} \\[2pt] \frac{\partial g}{\partial x_1} & \frac{\partial g}{\partial x_2} \end{pmatrix}\Bigg|_{x=0,\,\mu=0} = D_x F_0|_{x=0}
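Numerically, the curve $\gamma(\mu)$ of Theorem 4.2 is followed with Newton's method, and implicit differentiation of $F_\mu(\gamma(\mu)) = 0$ gives $d\gamma/d\mu = -(D_xF_\mu)^{-1}\,\partial F_\mu/\partial\mu$. A sketch on a made-up hyperbolic family (the family, and all names below, are illustrative assumptions, not from the notes):

```python
import numpy as np

def F(z, mu):                            # a toy family with F_0(0) = 0
    x, y = z
    return np.array([-x + mu + y * y, -2 * y + mu * x + x * x])

def DxF(z, mu):                          # Jacobian in x; hyperbolic at (0, 0)
    x, y = z
    return np.array([[-1.0, 2 * y], [mu + 2 * x, -2.0]])

def DmuF(z, mu):                         # partial derivative in mu
    x, y = z
    return np.array([1.0, x])

def gamma(mu):
    z = np.zeros(2)
    for _ in range(30):                  # Newton iteration on F(., mu) = 0
        z = z - np.linalg.solve(DxF(z, mu), F(z, mu))
    return z

h = 1e-5
fd = (gamma(h) - gamma(-h)) / (2 * h)    # finite-difference d gamma / d mu
formula = -np.linalg.solve(DxF(np.zeros(2), 0.0), DmuF(np.zeros(2), 0.0))
```

For this family, $D_xF_0|_0 = \operatorname{diag}(-1, -2)$ and $\partial F/\partial\mu = (1, 0)$ at the origin, so both computations give $d\gamma/d\mu = (1, 0)$.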


5. Hyperbolicity in bifurcations (Nov. 11)

Recall the proof of the last theorem: $\mathbb{F} : \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}^n$, $(x, \mu) \mapsto F_\mu(x)$, with the fixed points forming the level set $\mathbb{F}^{-1}(0)$. The Implicit Function Theorem gives us that this level set is the graph of a function. If we differentiate $\mathbb{F}(\gamma(\mu), \mu)$ with respect to $\mu$, we get
$$D_x\mathbb{F}\,\frac{d\gamma}{d\mu} + D_\mu\mathbb{F} = 0, \qquad \text{so} \qquad \frac{d\gamma}{d\mu} = -(D_x\mathbb{F})^{-1}\,D_\mu\mathbb{F} .$$
We know that $D_x\mathbb{F}$ is invertible because $0$ is hyperbolic. To get more information about $\gamma(\mu)$, we compute the Taylor series. We need to compare the flow near $\gamma(\mu)$ for $\dot x = F_\mu(x)$ to the flow near $0$ for $F_0(x)$. Compute the linear part: $(D_xF_\mu)|_{\gamma(\mu)}$, which gives a curve of matrices
$$M(\mu) = (D_x F_\mu)|_{x = \gamma(\mu)} .$$
As we change $\mu$, the entries change smoothly with $\mu$, which means the eigenvalues change continuously with $\mu$. (Example: for a matrix of the form $\begin{pmatrix} -1 & \mu \\ -1 & -1 \end{pmatrix}$ the eigenvalues are $-1 \pm i\sqrt{\mu}$ for $\mu > 0$: the roots of a polynomial are only continuous in the coefficients, not necessarily smooth.) So there is an interval where the number of eigenvalues of $M(\mu)$ with positive real part, and the number with negative real part, is constant. Moreover, we know that $W^s_{F_\mu}(\gamma(\mu))$ and $W^u_{F_\mu}(\gamma(\mu))$ change smoothly with $\mu$.
This leads to a Scholium: in the theorem, we only used that $D_xF_0|_0$ was invertible to get a curve $\gamma(\mu)$, i.e. no eigenvalues equal to $0$. Thus we need only invertibility, not hyperbolicity, to apply the Implicit Function Theorem. However, if we do not assume hyperbolicity, the dimensions of the stable and unstable manifolds can change: bifurcations can happen at equilibrium points where eigenvalues are $0$ or pure imaginary.
Example: $F_\mu : \mathbb{R}^2 \to \mathbb{R}^2$, $F_0(0) = 0$,
$$DF_0|_0 = \begin{pmatrix} 0 & 0 \\ 0 & \beta \end{pmatrix}.$$
Assume that $\beta < 0$, and let $F_\mu(x) = (f_\mu(x), g_\mu(x))$. What happens when we change $\mu$? Avoid the question of the behavior of the solutions of $\dot x = F_0(x)$ for as long as possible. Try the same trick:
$$\mathbb{F}(x, \mu) = F_\mu(x) .$$


We know that $(D_x\mathbb{F})^{-1}$ does not exist at $x = 0$, $\mu = 0$. But we're still looking for $\mathbb{F}^{-1}(\{(0,0)\})$. Try to look at the level set as a graph in another direction: let $x = (x_1, x_2)$, and try to find $(\alpha(x_1), \mu(x_1))$ such that $\mathbb{F}(x_1, \alpha(x_1), \mu(x_1)) = (0, 0)$, i.e.
$$f_{\mu(x_1)}(x_1, \alpha(x_1)) = 0, \qquad g_{\mu(x_1)}(x_1, \alpha(x_1)) = 0 .$$

Figure 16. $\gamma$ is a graph over the $x_1$-direction.

Let's try to compute $\frac{d\alpha}{dx_1}$ and $\frac{d\mu}{dx_1}$. Differentiating with respect to $x_1$:
$$\begin{pmatrix} \frac{\partial f}{\partial x_1} + \frac{\partial f}{\partial x_2}\frac{\partial\alpha}{\partial x_1} + \frac{\partial f}{\partial\mu}\frac{\partial\mu}{\partial x_1} \\[3pt] \frac{\partial g}{\partial x_1} + \frac{\partial g}{\partial x_2}\frac{\partial\alpha}{\partial x_1} + \frac{\partial g}{\partial\mu}\frac{\partial\mu}{\partial x_1} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$
so we have to have
$$\begin{pmatrix} \frac{\partial\alpha}{\partial x_1} \\[3pt] \frac{\partial\mu}{\partial x_1} \end{pmatrix} = -\begin{pmatrix} \frac{\partial f}{\partial x_2} & \frac{\partial f}{\partial\mu} \\[3pt] \frac{\partial g}{\partial x_2} & \frac{\partial g}{\partial\mu} \end{pmatrix}^{-1} \begin{pmatrix} \frac{\partial f}{\partial x_1} \\[3pt] \frac{\partial g}{\partial x_1} \end{pmatrix}.$$
Thus we need
$$\begin{pmatrix} \frac{\partial f}{\partial x_2} & \frac{\partial f}{\partial\mu} \\[3pt] \frac{\partial g}{\partial x_2} & \frac{\partial g}{\partial\mu} \end{pmatrix}\Bigg|_{x=0,\,\mu=0}$$
to be invertible. We know that $\frac{\partial f}{\partial x_2}(0,0) = 0$ and $\frac{\partial g}{\partial x_2}(0,0) = \beta$. This gives us
$$\begin{pmatrix} 0 & \frac{\partial f}{\partial\mu} \\[3pt] \beta & \frac{\partial g}{\partial\mu} \end{pmatrix}\Bigg|_{x=0,\,\mu=0},$$
which is clearly invertible iff $\frac{\partial f}{\partial\mu}(0,0) \ne 0$. This means that the $x_1$-component of $F_\mu$ depends on $\mu$, i.e. changes at nonzero speed as $\mu$ changes.
Theorem 5.1. Given $F_\mu : \mathbb{R}^2 \to \mathbb{R}^2$, $F_0(0) = 0$,
$$D_x F_0|_{x=0} = \begin{pmatrix} 0 & 0 \\ 0 & \beta \end{pmatrix}, \qquad \beta < 0,$$
provided that
$$\frac{\partial}{\partial\mu}\{\text{1st component of } F_\mu\}\Big|_{(x=0,\,\mu=0)} \ne 0,$$
then there is an $\epsilon_1 > 0$ and a curve $\gamma : (-\epsilon_1, \epsilon_1) \to \mathbb{R}^2$, $\gamma(x_1) = (\alpha(x_1), \mu(x_1))$, and a neighborhood $V$ of $((0,0), 0)$ such that
1. $F_{\mu(x_1)}(x_1, \alpha(x_1)) = 0$ for all $x_1 \in (-\epsilon_1, \epsilon_1)$,
2. if $((x_1, x_2), \mu) \in V$ and $F_\mu(x_1, x_2) = 0$, then $x_2 = \alpha(x_1)$ and $\mu = \mu(x_1)$, and
3. $\gamma = (\alpha, \mu)$ is smooth.

Figure 17. A curve of fixed points: we can see that there are no fixed points for $\mu < 0$, and two for $\mu > 0$.

Figure 18. Here, we have two fixed points for $\mu < 0$, and none for $\mu > 0$. The difference between this picture and the last is the sign of the second partial.

Figure 19. No bifurcation here.

We need to compute more about $\mu(x_1)$; we need its 1st and 2nd derivatives. We know from before that
$$\begin{pmatrix} \frac{\partial\alpha}{\partial x_1} \\[3pt] \frac{\partial\mu}{\partial x_1} \end{pmatrix} = -\begin{pmatrix} \frac{\partial f}{\partial x_2} & \frac{\partial f}{\partial\mu} \\[3pt] \frac{\partial g}{\partial x_2} & \frac{\partial g}{\partial\mu} \end{pmatrix}^{-1} \begin{pmatrix} \frac{\partial f}{\partial x_1} \\[3pt] \frac{\partial g}{\partial x_1} \end{pmatrix}
= \frac{1}{\frac{\partial f}{\partial x_2}\frac{\partial g}{\partial\mu} - \frac{\partial f}{\partial\mu}\frac{\partial g}{\partial x_2}} \begin{pmatrix} \frac{\partial f}{\partial\mu}\frac{\partial g}{\partial x_1} - \frac{\partial g}{\partial\mu}\frac{\partial f}{\partial x_1} \\[3pt] \frac{\partial g}{\partial x_2}\frac{\partial f}{\partial x_1} - \frac{\partial f}{\partial x_2}\frac{\partial g}{\partial x_1} \end{pmatrix}.$$
At $x = 0$, $\mu = 0$ all the first partials in the numerators vanish, so we get $(0, 0)$: the curve $(x_1, \alpha(x_1), \mu(x_1))$ is tangent to the $x_1$-axis at $(0,0)$. We also have, at $x_1 = 0$,
$$\frac{d^2\mu}{dx_1^2} = \frac{-\,\partial^2 f/\partial x_1^2}{\partial f/\partial\mu} .$$
We want to know if the nondegeneracy conditions are satisfied, so let's try to reduce dimension, since nothing much happens in the $x_2$-direction. Consider
$$\begin{pmatrix} \dot x_1 \\ \dot x_2 \\ \dot\mu \end{pmatrix} = \begin{pmatrix} F^1_\mu(x) \\ F^2_\mu(x) \\ 0 \end{pmatrix} = \hat F .$$
At the fixed point $(0, 0, 0)$,
$$D_{(x_1, x_2, \mu)}\hat F(0, 0, 0) = \begin{pmatrix} 0 & 0 & \partial f/\partial\mu \\ 0 & \beta & \partial g/\partial\mu \\ 0 & 0 & 0 \end{pmatrix}.$$
The eigenvalues are $0$, $0$, and $\beta$. So there is a 2-dimensional center manifold and (since $\beta < 0$) a 1-dimensional strong stable manifold, $W^{ss}$.
Note: All fixed points must be on $W^c(0,0,0)$, since if they are not, in backwards time the distances stretch exponentially. Consider the flow restricted to $W^c(0,0,0)$, and return to the 1-parameter family. If $\mu$ is considered as a coordinate on $W^c(0,0,0)$, then $\dot\mu = 0$, so we have
$$\dot x = h_\mu(x),$$
a 1-dimensional, 1-parameter family. We know that $h_0(0) = 0$ and $\frac{dh}{dx}(0, 0) = 0$, and we require that
$$\frac{\partial h}{\partial\mu} \ne 0 \qquad \text{and} \qquad \frac{\partial^2 h}{\partial x^2} \ne 0 .$$

6. Bifurcation diagrams (Nov. 13)

Bifurcation diagrams:

Figure 20. The possible bifurcations if the second partial $\frac{\partial^2 h}{\partial x^2}$ is negative. The difference between the two rows of pictures is the sign of the first partial $\frac{\partial h}{\partial\mu}$: when it is negative, for example, we go from no fixed points to two.

90

3. USING MAPS TO UNDERSTAND FLOWS

Figure 21. The possible bifurcations if the second partial $\frac{\partial^2 h}{\partial x^2}$ is positive; see the last figure also. Again, the sign of the first partial $\frac{\partial h}{\partial\mu}$ determines whether we go from two fixed points to none or vice-versa.

A saddle-node bifurcation occurs when there is one simple zero eigenvalue together with the nondegeneracy conditions. What about two zero eigenvalues, such as
$$DF_0|_0 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \qquad \text{or even} \qquad \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}?$$
And why would we consider such a degenerate situation? To study these situations, we need a 2-parameter family. (See Figure 25.)
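The 1-dimensional picture above is captured by the model family $h_\mu(x) = \mu - x^2$ (a standard saddle-node normal form, matching the diagrams: $\partial h/\partial\mu = 1 \ne 0$ and $\partial^2 h/\partial x^2 = -2 \ne 0$). A quick numerical count of equilibria on either side of $\mu = 0$:

```python
import numpy as np

def h(x, mu):
    return mu - x * x                  # saddle-node normal form h_mu(x)

def count_equilibria(mu, a=-2.0, b=2.0, n=4001):
    """Count sign changes of h_mu on a grid -- one per simple zero."""
    xs = np.linspace(a, b, n)
    vals = h(xs, mu)
    return int(np.sum(np.sign(vals[:-1]) != np.sign(vals[1:])))

n_before = count_equilibria(-0.5)      # no fixed points for mu < 0
n_after = count_equilibria(0.5)        # two fixed points for mu > 0
# at mu = 0.5: h'(x) = -2x, so +sqrt(mu) is a sink and -sqrt(mu) is a source
```

The two equilibria born at $\mu = 0$ have opposite stability, exactly as in the diagrams.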

7. Spiral sinks and spiral sources, normal form calculation (Nov. 15)

Consider $F_\mu : \mathbb{R}^n \to \mathbb{R}^n$, $F_0(0) = 0$, where $DF_0|_0$ has eigenvalues $\pm i\omega, \lambda_3, \ldots, \lambda_n$ with the real parts of the $\lambda_j$ all nonzero. First, restrict attention to 2 dimensions, and consider $F_\mu : \mathbb{R}^2 \to \mathbb{R}^2$. We can justify this by saying that if the dimension is $> 2$, then we restrict to the center manifold for
$$\begin{pmatrix} \dot x_1 \\ \vdots \\ \dot x_n \\ \dot\mu \end{pmatrix} = \begin{pmatrix} F^1_\mu(x) \\ \vdots \\ F^n_\mu(x) \\ 0 \end{pmatrix} = \hat F,$$
Figure 22. In the left picture, we undergo a bifurcation from no fixed points, through a node, and then to a sink–source pair. The parabola represents the fixed points. Any vertical slice through the picture corresponds to the system at some value of $\mu$. The picture on the right is the same, except in reverse order.

Figure 23. In the left picture, we undergo a bifurcation from no fixed points, through a node, and then to a sink–source pair. The parabola represents the fixed points. Any vertical slice through the picture corresponds to the system at some value of $\mu$. The picture on the right is the same, except in reverse order.

which is 3-dimensional, with one dimension a parameter. We know that there is a curve $\gamma(\mu)$ such that $F_\mu(\gamma(\mu)) = 0$. Do a $\mu$-dependent change of variables to get
$$G_\mu(x) = F_\mu(x + \gamma(\mu)) .$$
Then $G_\mu(0) = 0$ for all $\mu$, so we can assume that $\gamma(\mu) = 0$, and that the fixed point is at $0$.


3. USING MAPS TO UNDERSTAND FLOWS


Figure 24. This is a saddle-node bifurcation. In this figure, we see that there are no fixed points for μ < 0. At μ = 0 is the bifurcation, and we have a node. For positive μ, we have the birth of a sink and a saddle.

Figure 25. You can't get around the two-parameter family.

So we can assume that the system looks like

  DF₀|₀ = ( 0 ω ; −ω 0 ).

We then expect:
1. Higher order terms of F₀ make 0 into a spiral sink or a spiral source.
2. If we change μ, the eigenvalues of DF_μ|₀ have nonzero real part.
The above conditions will be our "nondegeneracy conditions" which ensure that there is a bifurcation. Suppose, for example, the higher order terms of F₀ make 0 a spiral sink; then there must exist an attractor block for this sink. Suppose we change μ a little so that the eigenvalues of DF_μ|₀ have positive real part. Then 0 is a spiral source (hyperbolic). But the old attractor block for μ = 0 is still an attractor block. Orbits entering the block must limit somewhere, but it can't be 0, and there are no other equilibria.

Theorem 7.1 (Poincaré-Bendixson). A bounded orbit of a flow in R² limits onto a periodic orbit or a fixed point or a "limit cycle".

So there must be at least one periodic orbit near zero. The point of the Hopf Bifurcation Theorem is to show
1. that there is only one periodic orbit, and
2. to give some machinery for manipulating higher order terms so we can see for which μ's this orbit exists.
So, as above, we assume that

  DF₀|₀ = ( 0 ω ; −ω 0 ).

We want to try to get a handle on the higher order terms of F:

  F₀(x, y) = ( ωy + ax² + bxy + cy² , −ωx + αx² + βxy + γy² ) + h.o.t.

If 0 is a spiral sink, then a change of variables won't change that. So what is it about the power series expansion that is topological information? Try to simplify the power series of F₀ by changing variables. Is there any topological information in the 2nd order terms, i.e. can we do a change of variables so that in the new variables

  (ż, ẇ)ᵀ = A (z, w)ᵀ + 3rd order and higher?

So, let's try a change of variables of the form

  (x, y)ᵀ = (z, w)ᵀ + h(z, w),

where

  h(z, w) = ( a₁z² + b₁zw + c₁w² , α₁z² + β₁zw + γ₁w² )ᵀ,

i.e. the identity plus 2nd order terms, and we can pick a₁, b₁, …, γ₁. Now write

  F₀(x, y) = A (x, y)ᵀ + F₂(x, y) + h.o.t.

Then

  (ẋ, ẏ)ᵀ = (I + Dh|₍z,w₎) (ż, ẇ)ᵀ = F₀(x, y)
           = A [(z, w)ᵀ + h(z, w)] + F₂(z, w) + h.o.t.,

so

  (I + Dh|₍z,w₎) (ż, ẇ)ᵀ = A (z, w)ᵀ + A h(z, w) + F₂(z, w) + h.o.t.

Thus we have that

  (ż, ẇ)ᵀ = (I + Dh|₍z,w₎)⁻¹ [ A (z, w)ᵀ + A h(z, w) + F₂(z, w) ] + h.o.t.

Can we write (I + Dh)⁻¹ as I + B? Well,

  I = (I + Dh)(I + B) = I + Dh + B + Dh·B,

so, up to first order, (I + Dh)⁻¹ = I − Dh. Thus

  (ż, ẇ)ᵀ = (I − Dh|₍z,w₎) [ A (z, w)ᵀ + A h(z, w) + F₂(z, w) ] + h.o.t.

Since

  Dh|₍z,w₎ = ( 2a₁z + b₁w   b₁z + 2c₁w ; 2α₁z + β₁w   β₁z + 2γ₁w ),

expanding gives

  (ż, ẇ)ᵀ = A (z, w)ᵀ + [ A h(z, w) − Dh|₍z,w₎ A (z, w)ᵀ + F₂(z, w) ] + h.o.t.,

where the bracket collects the 2nd order terms. Make all six of its coefficients (of z², zw, w² in each component) equal to 0: this is a linear system

  M (a₁, b₁, c₁, α₁, β₁, γ₁)ᵀ = (−a, −b, −c, −α, −β, −γ)ᵀ

for a 6×6 matrix M built from ω. The determinant of this matrix is a nonzero multiple of ω⁶, so we can change variables and eliminate all 2nd order terms.

Remark: This is an example of a normal form calculation. We will try the cubic terms next.

In the case we have been talking about, the linear part is a center. But we could have a change in dim Wᵘ, Wˢ at μ = 0. What is happening at μ = 0? Let λ, λ̄ be the eigenvalues of DF_μ|₀, and write λ = α(μ) + iβ(μ), where α(0) = 0 and β(0) = ω. Write

  (ẋ, ẏ)ᵀ = F_μ(x, y) = ( α −β ; β α ) (x, y)ᵀ + h.o.t.

8. More normal form calculations, complexification (Nov. 18)

In the last section, we saw that we can do an algebraic change of variables which eliminates all 2nd order terms of F₀. How many terms can we eliminate? Which terms have geometric information? Let us take advantage of the fact that we're in R², and use complex-valued algebra. Think of (x, y) ∈ C², and diagonalize the linear part: let

  P = ( 1 1 ; i −i ),  so that  P⁻¹ = ½ ( 1 −i ; 1 i ).

Do the change of variables

  (x, y)ᵀ = P (z₁, z₂)ᵀ,  i.e.  (z₁, z₂)ᵀ = P⁻¹ (x, y)ᵀ,

so that

  z₁ = (x − iy)/2,  z₂ = (x + iy)/2,

and get the system

  (ż₁, ż₂)ᵀ = ( iω 0 ; 0 −iω ) (z₁, z₂)ᵀ + h.o.t.

Where did the original R² go, i.e. where is {(x, y) | x, y ∈ R}? Let Λ = {(z₁, z₂) | z₂ = z̄₁}, a 2-dimensional subspace of the 4-dimensional C². Moreover, since the space {(x, y) | x, y ∈ R} is invariant in the original equation, Λ is invariant under the transformed equation. So if we only care about (x, y) ∈ R², then we need only consider solutions on Λ, where z₂ = z̄₁. So we take the expansion ż₁ = iω z₁ + F₂(z₁, z₂) + ⋯ and replace z₂ with z̄₁:

  ż₁ = iω z₁ + Σ_{i+j≥2} a_ij z₁^i z̄₁^j.

(If F is C^{N+1}, then ż₁ = iω z₁ + Σ_{2≤i+j≤N} a_ij z₁^i z̄₁^j + terms of order N + 1.)

Remarks:
1. This is not complex analytic!
2. The a_ij also depend on μ.

Now, try to do a change of variables which eliminates terms of the power series. For example, try to get rid of z^k z̄^l (k + l ≥ 2). Try a new variable w ∈ C, where

  z = w + b w^k w̄^l.

We see that

  ż = ẇ + b k w^{k−1} w̄^l ẇ + b l w^k w̄^{l−1} (d/dt)w̄,

and

  λ z + Σ_{i+j≥2} a_ij z^i z̄^j = λ (w + b w^k w̄^l) + Σ_{i+j≥2} a_ij (w + b w^k w̄^l)^i (w̄ + b̄ w̄^k w^l)^j.

So we get

  ẇ = λ w + Σ_{i+j≥2, (i,j)≠(k,l)} a_ij w^i w̄^j + ( λb − bkλ − blλ̄ + a_kl ) w^k w̄^l + h.o.t.

All we need is

  λb − bkλ − blλ̄ + a_kl = 0.

So let

  b = −a_kl / ( λ − kλ − lλ̄ ),

and we need worry only if

  λ − kλ − lλ̄ = 0.

Remark: This change of variable only affects the (k, l) terms and terms of higher order. It does change the coefficients of the higher order terms, so we must do it in order, from lowest order to highest order.

Write λ = α + iβ. There are two cases:
1. α ≠ 0, β ≠ 0. The denominator λ − kλ − lλ̄ = α(1 − k − l) + iβ(1 − k + l) vanishes only if 1 − k − l = 0 and 1 − k + l = 0 simultaneously, and since k + l ≥ 2, these are never both 0. Thus the denominator is never 0! So we can eliminate any non-linear term. Does this mean that we can linearize, i.e. change variables to ẇ = λw? No: there are infinitely many changes of variable, and the domain may shrink to 0. But if F is C^N, there does exist a neighborhood of 0 and a polynomial change of variables such that in the new variables ẇ = λw + order N. We can linearize formally, though (if F is C^∞ or C^ω). This was Poincaré's thesis.
2. When α = 0, λ = iβ = iω. The denominator iω − kiω + liω = 0 iff 1 − k + l = 0, i.e. l = k − 1. So we can eliminate all terms except

  z² z̄, z³ z̄², z⁴ z̄³, …
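A quick numerical sanity check of the resonance condition (a sketch, with λ = iω purely imaginary and ω = 2 chosen arbitrarily):

```python
# Sanity check of the normal form denominators lam - k*lam - l*conj(lam).
# With lam = i*omega purely imaginary, conj(lam) = -lam, so the denominator
# equals lam*(1 - k + l): it vanishes exactly when l = k - 1, i.e. for the
# resonant terms z^2 zbar, z^3 zbar^2, z^4 zbar^3, ...
omega = 2.0
lam = 1j * omega

def denom(k, l):
    return lam - k * lam - l * lam.conjugate()

resonant = [(k, l) for k in range(2, 6) for l in range(0, 5)
            if k + l >= 2 and abs(denom(k, l)) < 1e-12]
print(resonant)  # [(2, 1), (3, 2), (4, 3), (5, 4)]

# For a non-resonant term a_kl * z^k * zbar^l, the coefficient that kills it:
a_kl = 0.3 - 0.7j          # a hypothetical sample coefficient
k, l = 2, 0
b = -a_kl / denom(k, l)
# the eliminated coefficient lam*b - k*lam*b - l*conj(lam)*b + a_kl is zero:
print(abs(b * denom(k, l) + a_kl))
```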


Formally, we can change variables to

  ż = iω z + Σ_{j≥2} a_j z^j z̄^{j−1};

and if F is C^N, there is a neighborhood of 0 and a change of variables such that

  ż = iω z + Σ_{j≥2, 2j−1<N} a_j z^j z̄^{j−1} + order N.

This is known as "normal form". What's the geometry in this case? Let

  ż = iω z + a₂ z² z̄ + order 4,

and assume a₂ = σ + iτ ≠ 0. Change to polar coordinates, z = r e^{iθ}:

  ż = ṙ e^{iθ} + i r θ̇ e^{iθ} = iω r e^{iθ} + (σ + iτ) r³ e^{iθ} + h.o.t.,

so

  ṙ = σ r³ + h.o.t. (order 4),
  θ̇ = ω + τ r² + h.o.t. (order 3).

If σ > 0, we have a spiral source, as in Figure 26.

Figure 26. The nonlinear part makes this a spiral source.

If σ < 0, we have a spiral sink, as in Figure 27. If σ = 0, we still have no info, so we have to look at the z³z̄² terms.

Remark: What does τ tell you? If τ ≠ 0, the amount of rotation changes with r.
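As a quick numerical illustration of the σ < 0 case (a sketch with the hand-picked values ω = 1 and a₂ = −1 + 0.5i, not taken from the notes):

```python
import math

# Euler integration of z' = i*omega*z + a2*z*z*conj(z); a sketch with
# omega = 1 and a2 = -1 + 0.5j, so sigma = Re(a2) = -1 < 0 (spiral sink).
omega = 1.0
a2 = -1.0 + 0.5j
z = 0.5 + 0.0j
dt, steps = 1e-3, 20000                 # integrate out to t = 20
for _ in range(steps):
    z = z + dt * (1j * omega * z + a2 * z * z * z.conjugate())

# the truncated equation r' = sigma*r^3 gives r(t) = r0/sqrt(1 - 2*sigma*r0^2*t)
r0, t_final, sigma = 0.5, dt * steps, a2.real
r_pred = r0 / math.sqrt(1.0 - 2.0 * sigma * r0**2 * t_final)
print(abs(z), r_pred)   # |z| decays toward ~0.15, as the radial equation predicts
```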


Figure 27. The nonlinear part makes this a spiral sink.

CHAPTER 4

Topics
1. Setup for Hopf Bifurcation Theorem (Nov. 20)

We know from before that the eigenvalues of DF₀|₀ are ±iω. Let λ(0) = iω, and put ż = F_μ(z) into normal form near μ = 0, i.e.

  ż = λ(μ) z + a₁(μ) z² z̄ + h.o.t.

Take λ = α + iβ and a₁(μ) = σ + iτ. In polar coordinates, we get

  ṙ e^{iθ} + i θ̇ r e^{iθ} = (α + iβ) r e^{iθ} + (σ + iτ) r³ e^{iθ} + h.o.t.,

so

  ṙ = α r + σ r³ + h.o.t.
  θ̇ = β + τ r² + h.o.t.

What happens as μ is changed through 0? Assume that at μ = 0, σ < 0, so 0 is a spiral sink. Now let us look at the global picture near μ = 0. There exists an attractor block around 0. Look at

  dα/dμ |_{μ=0} ≠ 0.

There are two cases, dα/dμ > 0 and dα/dμ < 0. In the first case, for μ < 0, 0 is a spiral sink, and for μ > 0, 0 is a spiral source. Also, the behavior at μ = 0 depends on σ: if σ(0) < 0, then for μ near 0, σ < 0. Drop the higher order terms and look at

  ṙ = α r + σ r³.

The fixed points of α r + σ r³ = 0 are r = 0 and r = ±√(−α/σ). In two dimensions, do we get the same picture with the higher order terms? The answer is yes. Set up regions in the (r, θ) plane: show that near 0, even with higher order terms, r is increasing; far from 0, r is decreasing. Inside the ring, show that the derivative of the flow in the r direction is < 1. Look at the Poincaré return map to θ = 0, and show its derivative is < 1.

Three other pictures: the global picture doesn't change for small changes in μ (the r³ term in ṙ is huge when r is big). The behavior near r = 0 is changed by changing μ. So the role of the attractor is transferred from the fixed point to the periodic orbit. One other case: σ(0) = 0, α(μ) = 0 for all μ.


Figure 1. For μ < 0, we have a spiral sink. When μ is 0, we do not have a center, for the nonlinear term makes this still into a spiral sink. And for μ > 0, a periodic orbit is born, which is an attractor.
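The radial picture behind Figure 1 can be checked numerically from the truncated equation ṙ = αr + σr³; a minimal sketch, assuming the illustrative values α = 0.1 and σ = −1:

```python
import math

# Fixed points of r' = alpha*r + sigma*r**3 with alpha > 0 > sigma:
# r = 0 (now repelling) and r* = sqrt(-alpha/sigma), the attracting circle.
alpha, sigma = 0.1, -1.0        # illustrative values, not from the notes
rstar = math.sqrt(-alpha / sigma)

def rdot(r):
    return alpha * r + sigma * r**3

print(rstar)                    # ~0.316
print(rdot(0.9 * rstar) > 0)    # just inside the circle, r increases
print(rdot(1.1 * rstar) < 0)    # just outside, r decreases
```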

Figure 2. The same deal pictured in higher dimensions. The spiral sink at the origin becomes a spiral source, and the attracting cycle is born.

Then

  ṙ = σ(μ) r³ + h.o.t.,
  θ̇ = β + τ r² + h.o.t.,

with

  σ(0) = 0,  dσ/dμ |_{μ=0} > 0.

Think of σ as taking the plane of periodic orbits at μ = 0 and bending it.

Now, how can periodic orbits bifurcate? Bifurcation of periodic orbits of flows corresponds to bifurcations of fixed points of maps. Given a periodic orbit, we can associate a Poincaré map. Look at the 1-parameter family of maps P_μ: Rⁿ → Rⁿ with P₀(0) = 0.


Figure 3. The ring.

Figure 4. The other three possibilities.

Theorem 1.1.
1. Suppose the eigenvalues of DP₀|₀ are off of the unit circle. Then there is a neighborhood V of 0, an ε₁ > 0, and β: (−ε₁, ε₁) → V, as smooth as P, such that
(a) β(0) = 0,


Figure 5. The spiral sink becomes a center which becomes a spiral source.

(b) for all μ ∈ (−ε₁, ε₁), P_μ(β(μ)) = β(μ),
(c) if x ∈ V and μ ∈ (−ε₁, ε₁) and P_μ(x) = x, then x = β(μ),
(d) the dimensions of Wᵘ and Wˢ for P_μ at β(μ) are constant.
2. If DP₀|₀ has no eigenvalue = 1, then there are ε₁, V such that the first three conclusions hold.

Proof: The same as for flows. Define P̃(x, μ) = P_μ(x) − x and look at P̃(x, μ) = 0. To use the Implicit Function Theorem to get β, we need D_x P̃|_{(0,0)} to be non-singular, that is, D_x P₀|₀ must have no eigenvalue equal to 1. Also note that the eigenvalues of DP_μ|_{β(μ)} are continuous in μ.

So, there are 3 different codimension 1 bifurcations for fixed points of maps:
1. one eigenvalue = 1 (saddle-node),
2. one eigenvalue = −1 (period-doubling),
3. a pair of eigenvalues on the unit circle (Hopf, Neimark).

2. More Hopf Bifurcation Theorem (Nov. 22,25)

Remember that bifurcations of fixed points of maps correspond to bifurcations of periodic orbits for flows. As we saw last time, there are three distinct types of bifurcations:
1. Saddle-node bifurcation (one eigenvalue = 1). By the center manifold, it suffices to consider P_μ: R → R. Example: P_μ(x) = μ + x. For μ = 0, all points are fixed, but for μ ≠ 0 there are no fixed points whatsoever. We expect that the generic picture should bend this picture one way or the other. Look at P_μ(x) = μ + x + x². We need the two nondegeneracy conditions

  dP_μ/dμ |_{(0,0)} ≠ 0,   d²P₀/dx² |₀ ≠ 0.
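A small numerical sketch of the saddle-node model map P_μ(x) = μ + x + x² (the value μ = −0.04 is an arbitrary illustrative choice): the fixed points solve x² = −μ, so a sink-source pair exists for μ < 0 and disappears at μ = 0.

```python
import math

# The model saddle-node map P(x) = mu + x + x**2 at the illustrative mu = -0.04.
mu = -0.04

def P(x):
    return mu + x + x**2

sink, source = -math.sqrt(-mu), math.sqrt(-mu)   # fixed points solve x^2 = -mu
print(1 + 2 * sink, 1 + 2 * source)              # multipliers: 0.6 (sink), 1.4 (source)

x = -0.05                                        # iterate from nearby: falls into the sink
for _ in range(300):
    x = P(x)
print(x)                                         # ~ -0.2
```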


Figure 6. For flows, the bifurcations happen as the eigenvalues cross the imaginary axis. For maps, they happen when the eigenvalues cross the unit circle.

Figure 7. A radical bifurcation. No fixed points becomes all fixed points, which again becomes no fixed points.

In one dimension, a source and a sink are born (or die) at the bifurcation (at μ = 0). In 2 or more dimensions, we expect to have either saddle-sink (if all other eigenvalues of D_xP₀|₀ are inside the unit circle), saddle-source (if all other eigenvalues are outside the unit circle), or saddle-saddle.
2. Period-doubling (one eigenvalue = −1). Simple model: we know P₀(0) = 0, and if all other eigenvalues are not on the unit circle, then there is a neighborhood U of 0, an ε₁ > 0, and a smooth curve β: (−ε₁, ε₁) → U such that if P_μ(x) = x and x ∈ U, then x = β(μ). So we may as well assume that P_μ(0) = 0 for all μ. By the center manifold, it suffices to consider P_μ: R → R.
Example: This example is a bit too simple. Consider P_μ(x) = −(1 + μ)x. For μ < 0, 0 is a sink; for μ > 0, a source; and for μ = 0, every point is periodic of period 2.


Figure 8. These are the possible Hopf bifurcations. These are the same pictures as in Figures 22 and 23.


Figure 9. This is the map P_μ(x) = μ + x − x² and its corresponding bifurcation diagram.

Example: P_μ(x) = −(1 + μ)x + x². Here 0 is a sink for μ < 0 and a source for μ > 0. We expect the birth of a period 2 orbit at μ = 0. Look for fixed points of the 2nd iterate:

  P²(x) = −(1 + μ)P(x) + (P(x))² = (1 + μ)² x + (μ + μ²) x² − 2(1 + μ) x³ + x⁴.


Note that at μ = 0, P₀² has derivative 1 at x = 0. Also,

  P₀²(x) = x + 0·x² − 2x³ + x⁴,

so it is not a saddle-node bifurcation. Look for fixed points of P_μ²(x). x = 0 is always fixed. For μ > 0, we also have the fixed points

  ( μ − √(4μ + μ²) ) / 2   and   ( μ + √(4μ + μ²) ) / 2.

(Note that the other fixed point of P_μ itself, x = 2 + μ, is far from x = 0.) The bifurcation for P² is known as a pitchfork bifurcation. If λ(μ) is the eigenvalue with λ(0) = −1, then we need

  dλ/dμ ≠ 0,

together with the nondegeneracy condition

  d³P₀²/dx³ ≠ 0.

Note that d²P₀²/dx² is always 0, because P₀² is not undergoing a saddle-node bifurcation.

The proof for the saddle-node is the same as for flows. Let P̃(x, μ) = P_μ(x) − x, and try to use the Implicit Function Theorem to write the level set of P̃ = 0 as (x, μ(x)), that is, μ = μ(x). There is one nondegeneracy condition used for the Implicit Function Theorem, i.e.

  ∂P̃/∂μ (0, 0) ≠ 0.

The other condition forces

  d²μ/dx² |_{x=0} ≠ 0.
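The period-2 points of P_μ(x) = −(1 + μ)x + x² can be verified directly; a sketch with the illustrative value μ = 0.1:

```python
import math

# Period-doubling model P(x) = -(1+mu)*x + x**2 at the illustrative mu = 0.1.
mu = 0.1

def P(x):
    return -(1 + mu) * x + x**2

def dP(x):
    return -(1 + mu) + 2 * x

s = math.sqrt(4 * mu + mu**2)
p, q = (mu + s) / 2, (mu - s) / 2     # the two period-2 points found above

print(P(p) - q, P(q) - p)             # P swaps p and q: both differences ~ 0
print(dP(p) * dP(q))                  # (P^2)'(p); modulus < 1 here, so this 2-cycle attracts
```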

For period doubling, we try to do the same thing. Let

  Q(x, μ) = ( P_μ²(x) − x ) / x

(extend Q smoothly to x = 0), and apply the Implicit Function Theorem to Q; the level set of Q = 0 is Q(x, μ(x)) = 0.

Now for the Hopf bifurcation. Let F_μ: R³ → R³ be a 1-parameter family of vector fields such that for all μ, ẋ = F_μ(x) has a periodic orbit, and for μ = 0 the eigenvalues of the derivative of the first return map (the Floquet multipliers) are on the unit circle. For maps, let P_μ: R² → R² be a 1-parameter family such that P_μ(0) = 0 and DP₀|₀ has eigenvalues λ, λ̄ on the unit circle, λ = e^{iθ} ≠ ±1. How does the stability change as μ changes? Do exactly the same calculation: complexify and diagonalize in C². On the image of R² we only need one of the complex variables (the other is the conjugate). Write

  P_μ(z) = λ z + Σ_{i+j≥2} a_ij z^i z̄^j,


where λ = λ(μ) is the eigenvalue of DP_μ at x = 0. We want to put this in normal form, i.e. get rid of as many higher order terms as possible. To get rid of z^k z̄^l, do the change of coordinates z = w + b w^k w̄^l and end up with

  b = a_kl / ( λ^k λ̄^l − λ ).

At μ = 0, we get λ = e^{iθ}, so the denominators look like

  e^{ikθ} e^{−ilθ} − e^{iθ} = e^{iθ} ( e^{i(k−l−1)θ} − 1 ),

which must not be 0. So we cannot get rid of terms with k − l − 1 = 0. Another case: if θ = 2πp/q (i.e. e^{iθ} is a root of unity), we can't have

  k − l − 1 ≡ 0 (mod q).

The lowest order term where this happens is k = 0, l = q − 1. So for λ₀ = e^{2πip/q}, the normal form is

  P₀(z) = λ₀ z + a₁ z² z̄ + ⋯ + c z̄^{q−1} + h.o.t.

Let us add the assumption that if λ₀ = e^{2πip/q}, then q ≥ 5. Then

  P_μ(z) = λ z + a₁ z² z̄ + h.o.t.

Switching to polar coordinates: if λ = ρ e^{iα}, then

  P_μ(r e^{iφ}) = ρ r e^{i(φ+α)} + a₁ r³ e^{iφ} + h.o.t.

Let a₁ = a e^{iβ}; then

  P_μ(r e^{iφ}) = (ρ r) e^{i(φ+α)} + a r³ e^{i(φ+β)} + h.o.t.,

so, to leading order, the map in polar coordinates has the form

  r ↦ ρ r + c r³ + h.o.t.,   φ ↦ φ + α + O(r²),

where c = a cos(β − α). Suppose ‖λ_μ‖ < 1 for μ < 0 and ‖λ_μ‖ > 1 for μ > 0. The fixed points of the truncated radial map r ↦ ρ r + c r³ are

  r = 0   and   r = √( (1 − ρ)/c ).
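The radial fixed point can be checked by iteration; a sketch of the truncated radial map r ↦ ρr + cr³ (here c denotes the real cubic coefficient, and ρ = 1.05, c = −1 are illustrative values):

```python
import math

# Truncated radial map r -> rho*r + c*r**3; rho = 1.05, c = -1 are illustrative.
rho, c = 1.05, -1.0
rstar = math.sqrt((1 - rho) / c)       # the invariant circle radius

r = 0.1
for _ in range(400):
    r = rho * r + c * r**3

print(r, rstar)                        # the iteration settles onto r*
print(rho + 3 * c * rstar**2)          # multiplier at r*: 3 - 2*rho = 0.9 < 1, attracting
```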

Figure 10. The sink becomes a source-sink pair.

Combining r and φ, and adjusting the signs of dρ/dμ |_{μ=0} and c, gives the other three pictures:


Figure 11. The sink becomes an attracting cycle.

Figure 12. Depending on the signs of dρ/dμ and c, we may have one of these three pictures.

One can show (by building the attractor and repeller blocks) that if the higher order terms are added in, for near 0, the picture is the same.


The exceptional cases are λ₀ = e^{2πip/q} for q = 1, 2, 3, 4. If q = 1, then λ₀ = 1, and this is a saddle-node bifurcation, provided only one eigenvalue is 1. If q = 2, then λ₀ = −1, and this is a period-doubling bifurcation, again provided only one eigenvalue is −1. The cases q = 3 and q = 4 are special.

3. Setup for Melnikov method (Nov. 25, Dec 2)

Up to this point, every theorem that we've had has had the words "for μ small". Everything has been local, with one exception: in the Stable/Unstable Manifold Theorem, we get the result that the manifolds are globally immersed submanifolds. The reason is that all of our techniques so far have used calculus, so they must be local: we start with something we know, and perturb away. Also, all we have worked with are fixed points and periodic orbits (which are themselves fixed points of maps). But we can also do local analysis near bigger things, e.g. whole nontrivial orbits.

Let ẋ = F_ε(x, t) be a 1-parameter family. Let us assume that we understand an orbit for ε = 0 (it could lie in a Wᵘ or a Wˢ). Then we compute what happens for ε ≠ 0 (but small).

Definition 3.1. If there is a point z ≠ x₀ such that z ∈ Wᵘ(x₀) ∩ Wˢ(x₀), then z is called a homoclinic point, and O(z) is called a homoclinic orbit.

Note: In 2 dimensions, we have either that Wᵘ(x₀) ∩ Wˢ(x₀) = {x₀} or that a branch of Wˢ(x₀) is the same as a branch of Wᵘ(x₀).

Example: ÿ = y − y³. Convert this to the system

  ẏ = v
  v̇ = y − y³.

This system is Hamiltonian, with

  H(y, v) = v²/2 − y²/2 + y⁴/4.

Note that Wᵘ(0) = Wˢ(0).

Figure 13. The phase plane for ÿ = y − y³.


If we perturb and get

  (ẏ, v̇) = (v, y − y³) + ε G(y, v),

for ε small, we can break the connection (adding damping or antidamping) by letting G(y, v) = (0, ±v).

Figure 14. This should be a picture of the phase planes of the damped and anti-damped pendula.

Example: Consider the system ÿ = −sin y, or

  ẏ = v
  v̇ = −sin y;

this is also Hamiltonian, with

  H(y, v) = v²/2 − cos(y).

This is a "heteroclinic connection", i.e. a branch of Wˢ((π, 0)) is the same as a branch of Wᵘ((−π, 0)). We can again break the connection with damping or antidamping. In the 2-dimensional autonomous case, the Wˢ and Wᵘ manifolds of a saddle are made up of 3 orbits: the fixed point and 2 branches. If z ∈ Wˢ(x₀) ∩ Wᵘ(x₀), then O(z) ⊂ Wˢ(x₀) ∩ Wᵘ(x₀). What about the nonautonomous case, e.g.

  (ẏ, v̇) = (v, y − y³) + ε G(y, v, t),

where G(y, v, t + T) = G(y, v, t)? Make this system 3-dimensional:

  (ẏ, v̇, ṡ) = (v, y − y³, 1) + ε (G(y, v, s), 0).


Figure 15. This will be a picture of the phase plane for H(y, v) = v²/2 − cos(y).

We only need s ∈ [0, T] because G is periodic. So our new phase space is R² × [0, T] with 0 and T identified. For ε = 0, there are 3 period-T periodic orbits.

Figure 16. The Floquet multipliers of O(0, 0, 0) make it a saddle (the other two fixed points are centers).

First, draw the Poincaré map for s = 0. This is the period T map of the 2-dimensional flow. The 3-dimensional picture: now make ε > 0. Damping or antidamping is still possible, with G(y, v, t) = (0, ±v).

Remark: By bifurcation theory, we know that if ε is small then the Poincaré map has a fixed point near (0, 0). Another (new) possibility:


Figure 17. This will be a picture of the Poincaré map for the unperturbed flow.

Figure 18. This will be the phase plane for the flow corresponding to the map in the last figure.

We could have that z ∈ Wᵘ(γ_ε) ∩ Wˢ(γ_ε), so O(z) ⊂ Wᵘ(γ_ε) ∩ Wˢ(γ_ε). All we know is that the manifolds go through these points, but the picture could be that of a "transverse homoclinic orbit", with z a transverse homoclinic point, because Wᵘ and Wˢ cross transversally, i.e. not tangentially. The 3-dimensional picture: in this setup, a transverse homoclinic point gives us 3 things:
1. infinitely many homoclinic orbits,
2. very nontrivial recurrence,
3. "escape" orbits, i.e. "chaos".
In this case, note that Wᵘ and Wˢ for the Poincaré map are only immersed (not embedded).


Figure 19. A 3-dimensional picture of the flow, with variables y, v, t.

Figure 20. The Poincaré map, and the stable and unstable manifolds for the damped pendulum.

These are interesting flows. Are there specific examples? Start with ẏ = v, v̇ = y − y³ and devise a perturbation that creates transverse homoclinics:

  (ẏ, v̇) = (v, y − y³) + ε G(y, v, t).

Look at the Poincaré map P: R² → R², and pick a neighborhood V as shown. Choose Q: R² → R² such that Q = I outside V and Q is a small rotation well inside V. Then Q ∘ P creates a transverse homoclinic orbit.


Figure 21. The Poincare map in the case of a homoclinic orbit

Figure 22. A transverse homoclinic crossing

Our goal is to show that a given system has (transverse) homoclinic points. The following is the outline of a method, rather than a theorem. Again, assume

  (ẏ, v̇) = F(y, v) + ε G(y, v, t),

where G(y, v, t) = G(y, v, t + T). Further assume that when ε = 0, F (the unperturbed system) has a saddle x₀ with a homoclinic connection, where a branch of Wᵘ(x₀) is a branch of Wˢ(x₀).

4. Using Melnikov for existence of a transverse homoclinic (Dec. 4)

Notation.
1. Let q₀: R → R² be an orbit which equals this homoclinic connection, and let z = q₀(0); q₀(t) is a solution for F.


Figure 23. A 3-dimensional picture of a transverse homoclinic orbit

Figure 24. How to make a transverse homoclinic orbit out of any homoclinic orbit

2. When ε ≠ 0, think of the system in R² × [0, T] with 0 and T identified. Call the third variable t.
3. For ε ≠ 0, but close to 0, the system

  (ẏ, v̇) = F(y, v) + ε G(y, v, t)

has a period T orbit near y = v = 0. Call it γ_ε.

What happens when we change ε slightly? To see if Wˢ(γ_ε) and Wᵘ(γ_ε) cross, try to measure the distance between them. Steps:
1. Pick z = q₀(0) on the unperturbed connecting orbit,
2. build a section Σ transverse to Wᵘ(γ₀) = Wˢ(γ₀) at z,
3. look at Wᵘ(γ_ε) ∩ Σ and Wˢ(γ_ε) ∩ Σ. There are 3 possibilities:


Figure 25. The three-dimensional homoclinic

Figure 26. The three possible configurations of Wˢ(γ_ε) and Wᵘ(γ_ε)

4. Try to measure the distance between Wᵘ(γ_ε) ∩ Σ and Wˢ(γ_ε) ∩ Σ as a function of t. Let d(t₀, ε) be the distance between Wˢ(γ_ε) ∩ Σ ∩ {t = t₀} and Wᵘ(γ_ε) ∩ Σ ∩ {t = t₀}.
5. Write d(t, ε) as d₀(t) + ε d₁(t) + ε² d₂(t) + ⋯ (assume that we can). When ε = 0, Wᵘ = Wˢ, so d₀(t) = 0, and

  d(t, ε) = ε d₁(t) + ε² d₂(t) + ⋯

What we'll actually compute is d₁(t).

Lemma 4.1. Suppose d(t, ε) = ε d₁(t) + O(ε²), with the remainder bounded, and d(t + T, ε) = d(t, ε) for all t. Then:
(a) If d₁(t) > 0 for all t, then for ε sufficiently small, d(t, ε) > 0 for all t.
(b) If d₁(t) < 0 for all t, then for ε small, d(t, ε) < 0 for all t.
(c) If d₁(t₁) > 0 and d₁(t₂) < 0, then there exists t* such that, for ε sufficiently small, d(t*, ε) = 0.


Figure 27. The geometric interpretation of d(t, ε)

(d) If d₁(t₀) = 0 and d₁′(t₀) ≠ 0, then for ε sufficiently small there is t_ε near t₀ such that d(t_ε, ε) = 0 and

  ∂d/∂t (t_ε, ε) ≠ 0.

6. Compute d₁(t), which is the order ε term of the signed distance between Wˢ(γ_ε) and Wᵘ(γ_ε) on Σ. Then check whether d₁(t) is ever 0. If it is, then for ε sufficiently small, the system has homoclinic points.

Now why do we hope that we can compute d₁(t)? Look at the system

  (ẏ, v̇) = F(y, v) + ε G(y, v, t).

We know that q₀(t) is a solution for ε = 0, and q₀(t) → (0, 0) as t → ±∞. When ε ≠ 0, but small, try to write solutions near q₀(t), using Euler's method:

Figure 28. The errors in Euler's Method


If we follow the vector field for ε ≠ 0, the first step error is of order ε Δt, where Δt is the step size. There are two sources of error on the next step:
1. Because the perturbed orbit is off of q₀(t), the orbits can move exponentially apart. This error is of order ε and fast-growing.

Figure 29. How fast can order-ε error grow?

Remark: In the example systems, we don't have exponential divergence.
2. The same type of error as the first step (we changed the vector field by ε, so there is an ε Δt change in the next step).
3. Compounding of the change in the vector field; but this error is order ε².

We expect d₁(t) to have 2 terms:
1. one for the divergence of nearby orbits when ε = 0,
2. one for the ε-order changes along q₀(t).

We need two tools:
1. A natural way to pick Σ. We will use F⊥(z) to choose Σ: at each point of q₀, measure the perturbation in the F⊥ direction.
2. A nice representation of Wˢ(γ_ε) and Wᵘ(γ_ε).

5. Calculation of order ε term (Dec. 6)

So, we chose Σ to be a surface in R³ through {q₀(0)} × [0, T] containing F⊥(z). We need a way to name orbits on Wˢ(γ_ε), Wᵘ(γ_ε). Let q^s_ε(·, t₀): R → R² be a solution of

  (ẏ, v̇) = F(y, v) + ε G(y, v, t)

with q^s_ε(t, t₀) → γ_ε as t → +∞ (so q^s_ε is on Wˢ(γ_ε)), with (q^s_ε(t₀, t₀), t₀) ∈ Σ, and with (q^s_ε(s, t₀), s) ∉ Σ for all s > t₀. Do the same for q^u_ε(t, t₀), with (q^u_ε(t₀, t₀), t₀) ∈ Σ and (q^u_ε(s, t₀), s) ∉ Σ for all s < t₀. We are interested in measuring ‖q^s_ε(t₀, t₀) − q^u_ε(t₀, t₀)‖ = d(t₀, ε).


Figure 30. F(z) and F⊥(z)

Lemma 5.1. As ε → 0, q^s_ε, q^u_ε → q₀. We can write

  q^s_ε(t, t₀) = q₀(t − t₀) + ε q₁^s(t, t₀) + O(ε²),
  q^u_ε(t, t₀) = q₀(t − t₀) + ε q₁^u(t, t₀) + O(ε²).

This is a corollary of the Stable/Unstable Manifold Theorem. Note: q₀(t − t₀) = q₀(0) = z when t = t₀. We want to compute

  F⊥(z) · [ q^s_ε(t₀, t₀) − q^u_ε(t₀, t₀) ]
    = F⊥(z) · ( [ q₀(0) + ε q₁^s(t₀, t₀) ] − [ q₀(0) + ε q₁^u(t₀, t₀) ] ) + O(ε²)
    = ε F⊥(z) · [ q₁^s(t₀, t₀) − q₁^u(t₀, t₀) ] + O(ε²),

and the ε-term is the one we must compute. Now,

  F⊥(z) · [ q₁^s(t₀, t₀) − q₁^u(t₀, t₀) ] = F⊥(q₀(t₀ − t₀)) · [ q₁^s(t₀, t₀) − q₁^u(t₀, t₀) ].

This is the value at t = t₀ of the function

  F⊥(q₀(t − t₀)) · [ q₁^s(t, t₀) − q₁^u(t, t₀) ].

Compute d/dt of this and hope it turns out nice. First note that

  q^s_ε(t, t₀) = q₀(t − t₀) + ε q₁^s(t, t₀) + O(ε²),

so

  q̇^s_ε(t, t₀) = q̇₀(t − t₀) + ε q̇₁^s(t, t₀) + O(ε²).


We know that q^s_ε is a solution of the differential equation, so

  d/dt ( q^s_ε(t, t₀) ) = F(q^s_ε(t, t₀)) + ε G(q^s_ε(t, t₀), t)
    = F( q₀(t − t₀) + ε q₁^s(t, t₀) + O(ε²) ) + ε G( q₀(t − t₀) + ε q₁^s(t, t₀) + O(ε²), t )
    = F(q₀(t − t₀)) + ε DF|_{q₀(t−t₀)} q₁^s(t, t₀) + ε G(q₀(t − t₀), t) + O(ε²).

So we get

  q̇₀(t − t₀) + ε q̇₁^s(t, t₀) + O(ε²)
    = F(q₀(t − t₀)) + ε DF|_{q₀(t−t₀)} q₁^s(t, t₀) + ε G(q₀(t − t₀), t) + O(ε²),

and since F(q₀(t − t₀)) = q̇₀(t − t₀), matching the ε-order terms gives

  q̇₁^s(t, t₀) = DF|_{q₀(t−t₀)} q₁^s(t, t₀) + G(q₀(t − t₀), t) + O(ε).

Thus

  q̇₁^s(t, t₀) = DF|_{q₀(t−t₀)} q₁^s(t, t₀) + G(q₀(t − t₀), t),

where the first term (term 1) involves the rate of change of F, under which the small error grows exponentially, and the second (term 2) is an order ε perturbation along the unperturbed orbit. Now look at

  Δ^s(t, t₀) = F⊥(q₀(t − t₀)) · q₁^s(t, t₀).

Then

  Δ̇^s(t, t₀) = [ DF⊥|_{q₀(t−t₀)} q̇₀(t − t₀) ] · q₁^s(t, t₀) + F⊥(q₀(t − t₀)) · q̇₁^s(t, t₀)
    = [ DF⊥|_{q₀(t−t₀)} F(q₀(t − t₀)) ] · q₁^s(t, t₀)
      + F⊥(q₀(t − t₀)) · [ DF|_{q₀(t−t₀)} q₁^s(t, t₀) + G(q₀(t − t₀), t) ],

or

  Δ̇^s(t, t₀) = [ DF⊥|_{q₀(t−t₀)} F(q₀(t − t₀)) ] · q₁^s(t, t₀)
              + F⊥(q₀(t − t₀)) · DF|_{q₀(t−t₀)} q₁^s(t, t₀)
              + F⊥(q₀(t − t₀)) · G(q₀(t − t₀), t).

So if we know q₀(t − t₀), then we get a hard ODE. In the example ẏ = v, v̇ = y − y³, we see that we don't always have exponential splitting.

Note: for any vector v,

  [ DF⊥|_{q₀(t−t₀)} F(q₀(t − t₀)) ] · v + F⊥(q₀(t − t₀)) · DF|_{q₀(t−t₀)} v
    = trace( DF|_{q₀(t−t₀)} ) F⊥(q₀(t − t₀)) · v.

Let

  F = ( f₁ , f₂ )ᵀ   and   F⊥ = ( −f₂ , f₁ )ᵀ,

so

  DF = ( ∂f₁/∂y  ∂f₁/∂v ; ∂f₂/∂y  ∂f₂/∂v ),
  DF⊥ = ( −∂f₂/∂y  −∂f₂/∂v ; ∂f₁/∂y  ∂f₁/∂v ).

trace DF measures the rate of growth of pieces of area under the flow of F. Our example was Hamiltonian, with

  F = ( ∂H/∂v , −∂H/∂y )ᵀ,

so

  DF = ( ∂²H/∂y∂v   ∂²H/∂v² ; −∂²H/∂y²   −∂²H/∂v∂y ),

so trace(DF) = 0! So let us assume that F is Hamiltonian (or at least that trace(DF) = 0), so

  Δ̇^s(t, t₀) = F⊥(q₀(t − t₀)) · G(q₀(t − t₀), t),

giving

  Δ^s(t, t₀) = ∫ᵗ F⊥(q₀(τ − t₀)) · G(q₀(τ − t₀), τ) dτ.

Note: F⊥(q₀(t − t₀)) → 0 as t → ±∞, so Δ^s(t, t₀) → 0 as t → +∞, and therefore

  Δ^s(t₀, t₀) = − ∫_{t₀}^{∞} F⊥(q₀(t − t₀)) · G(q₀(t − t₀), t) dt.

The above, and

  (d/dt) Δ^u(t, t₀) = (d/dt) [ F⊥(q₀(t − t₀)) · q₁^u(t, t₀) ],

gives, by the same calculation,

  Δ^u(t₀, t₀) = ∫_{−∞}^{t₀} F⊥(q₀(t − t₀)) · G(q₀(t − t₀), t) dt.

(Recall q₀(0) = q₀(t₀ − t₀) = z.) Putting the two calculations together, we have

  F⊥(z) · [ q₁^s(t₀, t₀) − q₁^u(t₀, t₀) ] = Δ^s(t₀, t₀) − Δ^u(t₀, t₀)
    = − ∫_{t₀}^{∞} F⊥(q₀(t − t₀)) · G(q₀(t − t₀), t) dt − ∫_{−∞}^{t₀} F⊥(q₀(t − t₀)) · G(q₀(t − t₀), t) dt
    = − ∫_{−∞}^{∞} F⊥(q₀(t − t₀)) · G(q₀(t − t₀), t) dt.

So we have calculated the order ε term of the distance between Wˢ(γ_ε) and Wᵘ(γ_ε) on Σ.


6. An example of a Melnikov calculation (Dec. 9)

Theorem 6.1. Assume a Hamiltonian system

  (ẏ, v̇) = ( ∂H/∂v , −∂H/∂y ) = F(y, v)

with saddle x₀ and homoclinic orbit q₀(t). For the perturbed system

  (ẏ, v̇) = F(y, v) + ε G(y, v, t),

where for all t, G(y, v, t + T) = G(y, v, t), let γ_ε denote the hyperbolic periodic orbit near x₀. Let

  M(t₀) = ∫_{−∞}^{∞} F⊥(q₀(t − t₀)) · G(q₀(t − t₀), t) dt
        = ∫_{−∞}^{∞} F⊥(q₀(τ)) · G(q₀(τ), τ + t₀) dτ.

This is known as the Melnikov integral.

Then:
1. If M(t₀) ≠ 0 for all t₀, then for ε small, γ_ε has no homoclinic orbits.
2. If M(t₀) > 0 for some t₀ and M(t₁) < 0 for some t₁, then for small ε, Wˢ(γ_ε) ∩ Wᵘ(γ_ε) contains more than γ_ε.
3. If M(t₀) = 0 and M′(t₀) ≠ 0 for some t₀, then for small ε, Wˢ(γ_ε) intersects Wᵘ(γ_ε) transversally, i.e. there is a transverse homoclinic orbit.

Remark: M(t) is periodic with period T.

Example: [Guckenheimer and Holmes, p. 191] Consider

  ẏ = v
  v̇ = y − y³ + ε ( γ cos(ωt) − δ v ).

For ε = 0, the Hamiltonian is

  H(y, v) = v²/2 − y²/2 + y⁴/4.

There is a saddle at (0, 0) with a homoclinic connection. Solving H = 0 we get

  v = ± y √(1 − y²/2),

and we see that

  q₀(t) = ( √2 sech t , −√2 sech t tanh t ).

The perturbation has 4 parameters: δ is the damping term, γ is the periodic forcing, ω is the frequency, and ε is the total size of the perturbation. So let us fix ω.
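One can check by hand (or numerically, as in this sketch) that q₀ really solves the unperturbed system and lies on the level set H = 0:

```python
import math

def q0(t):
    # the homoclinic solution (sqrt(2) sech t, -sqrt(2) sech t tanh t)
    y = math.sqrt(2) / math.cosh(t)
    return y, -y * math.tanh(t)

h = 1e-5
for t in (-1.3, 0.0, 0.7, 2.1):
    y, v = q0(t)
    (ym, vm), (yp, vp) = q0(t - h), q0(t + h)
    dy, dv = (yp - ym) / (2 * h), (vp - vm) / (2 * h)
    assert abs(dy - v) < 1e-8                             # y' = v
    assert abs(dv - (y - y**3)) < 1e-8                    # v' = y - y^3
    assert abs(v**2 / 2 - y**2 / 2 + y**4 / 4) < 1e-12    # H(q0(t)) = 0

print("q0 solves the unperturbed system and lies on H = 0")
```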


The Melnikov integral is

  M(t₀) = ∫_{−∞}^{∞} F⊥(q₀(t)) · G(q₀(t), t + t₀) dt
        = −√2 γ ∫_{−∞}^{∞} sech t tanh t cos(ω(t + t₀)) dt − 2δ ∫_{−∞}^{∞} sech² t tanh² t dt
        = −(4δ/3) + √2 π γ ω sech(πω/2) sin(ω t₀)
        = −k₁ + k₂ sin(ω t₀),

where k₁ = 4δ/3 and k₂ = √2 π γ ω sech(πω/2). If δ = 0, then M(t₀) changes sign transversally, so there are homoclinics. If |k₂| < |k₁| then there are no homoclinics for small ε, but if |k₂| > |k₁|, then there are homoclinics for ε sufficiently small.

Note: As ω → ∞,

  sech(πω/2) = 2 / ( e^{πω/2} + e^{−πω/2} ) → 0,

so even small damping will kill the homoclinics; ω big means short-period forcing.
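The two definite integrals behind k₁ and k₂ can be confirmed numerically; a sketch using a plain trapezoid rule (the integrands decay like e^{−2|t|}, so truncating at |t| = 30 is harmless):

```python
import math

def trap(f, a, b, n):
    # plain trapezoid rule; effectively spectrally accurate here, since the
    # integrands and all their derivatives vanish at the endpoints
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

sech = lambda t: 1.0 / math.cosh(t)

# integral over R of sech^2(t) tanh^2(t) dt  =  2/3
I1 = trap(lambda t: (sech(t) * math.tanh(t))**2, -30.0, 30.0, 60000)

# integral over R of sech(t) tanh(t) sin(omega t) dt  =  pi*omega*sech(pi*omega/2)
omega = 1.0
I2 = trap(lambda t: sech(t) * math.tanh(t) * math.sin(omega * t), -30.0, 30.0, 60000)

print(I1, 2.0 / 3.0)
print(I2, math.pi * omega * sech(math.pi * omega / 2))
```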

7. Averaging (Dec. 11)

For this section, look at

  ẋ = ε f(x, t, ε),

where x ∈ Rⁿ and, for all t, f(x, t + T, ε) = f(x, t, ε), for ε small.

What can we say about this? Almost nothing, because when ε = 0, ẋ = 0. How can ẋ = 0 bifurcate? Every way! We might ask: how much does it matter that f depends on t? Let

  f̄(x) = (1/T) ∫₀ᵀ f(x, t, 0) dt

be the averaged vector field, and let f̃(x, t, ε) = f(x, t, ε) − f̄(x).

Theorem 7.1. If f is C^r, then there is a C^r change of coordinates x = y + ε w(y, t, ε) such that the system ẋ = ε f(x, t, ε) becomes

  ẏ = ε f̄(y) + ε² f₁(y, t, ε),

where for all t, f₁(y, t + T, ε) = f₁(y, t, ε).

Remark: This is progress because the nonautonomous part is higher order.
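A numerical sketch of the averaging principle, with the hypothetical example f(x, t) = −x + cos t (so f̄(x) = −x, not an example from the notes): on the time scale 1/ε, the solution of ẋ = εf(x, t) stays O(ε)-close to the averaged solution z(t) = z(0)e^{−εt}.

```python
import math

# Compare x' = eps*(-x + cos t)  (so fbar(x) = -x)  against the averaged flow
# z' = -eps*z with z(0) = x(0) = 1, over the time scale 1/eps.
eps = 0.05                     # illustrative value

def rhs(x, t):
    return eps * (-x + math.cos(t))

x, t, dt = 1.0, 0.0, 0.01
worst = 0.0
while t < 1.0 / eps:           # classical RK4 steps
    k1 = rhs(x, t)
    k2 = rhs(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = rhs(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = rhs(x + dt * k3, t + dt)
    x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += dt
    worst = max(worst, abs(x - math.exp(-eps * t)))

print(worst, eps)              # the gap stays O(eps) out to t = 1/eps
```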


Note: This is not the same as

    ẏ = ε f̄(y) + ε f₁(y, t, ε)

where f₁ is arbitrarily small: the f₁ above is not arbitrarily small; rather, its coefficient is ε². Moreover:

1. Let ż = ε f̄(z). If z(t), y(t) are solutions such that |z(0) − y(0)| < O(ε), then |z(t) − y(t)| < O(ε) on a time scale t ∼ 1/ε.
2. If z₀ is a hyperbolic fixed point of ż = ε f̄(z), then there is an ε₀ > 0 such that for 0 < ε < ε₀ there is a periodic orbit y_ε(t) of ẏ = ε f̄(y) + ε² f₁(y, t, ε) with y_ε(t) = z₀ + O(ε).
3. If zˢ(t) is in Wˢ(z₀), yˢ(t) is in Wˢ(y_ε), and |zˢ(0) − yˢ(0)| = O(ε), then |zˢ(t) − yˢ(t)| = O(ε) for all t ≥ 0.

So we can push the time dependence to higher order in ε. We need to find w in the change of coordinates. Since x = y + ε w(y, t, ε),

this gives us

    ẋ = ẏ + ε D_y w(y, t, ε) ẏ + ε ∂w/∂t,

so

    [I + ε D_y w] ẏ = ẋ − ε ∂w/∂t = ε f̄(x) + ε f̃(x, t, ε) − ε ∂w/∂t,

where f̃(x, t, ε) = f(x, t, ε) − f̄(x). So

    ẏ = [I + ε D_y w]⁻¹ ( ε f̄(y + εw) + ε f̃(y + εw, t, ε) − ε ∂w/∂t )
      = [I + ε D_y w]⁻¹ ( ε f̄(y) + ε f̃(y, t, ε) − ε ∂w/∂t ) + O(ε²),

assuming everything is at least C². So take

    ∂w/∂t = f̃(y, t, ε),

so

    w(y, t, ε) = ∫ f̃(y, t, ε) dt.

Since f̃ has average zero over a period, this w can be chosen T-periodic in t. Note:

    [I + ε D_y w]⁻¹ = I + O(ε).
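To see the construction of w in a concrete case, take the toy system ẋ = ε(−x + sin²t), which is my own illustration, not from the notes. Then f̄(y) = −y + 1/2, f̃(y, t) = −cos(2t)/2, and w(y, t) = ∫ f̃ dt = −sin(2t)/4. Here w does not depend on y, so the change of variables y = x − εw is exact and yields ẏ = ε(−y + 1/2) + ε² sin(2t)/4. A sketch checking that the transformed variable tracks the averaged flow an order of ε better than x does:

```python
import math

eps = 0.1
dt = 0.001
n = 10000                                           # integrate up to t = 10

def rk4(rhs, x, t, h):
    # One classical Runge-Kutta step for a scalar ODE x' = rhs(x, t).
    k1 = rhs(x, t)
    k2 = rhs(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = rhs(x + 0.5 * h * k2, t + 0.5 * h)
    k4 = rhs(x + h * k3, t + h)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

full = lambda x, t: eps * (-x + math.sin(t) ** 2)   # xdot = eps f(x, t)
avg = lambda z, t: eps * (-z + 0.5)                 # zdot = eps fbar(z)
w = lambda t: -math.sin(2.0 * t) / 4.0              # w = integral of ftilde dt

x = z = 1.0                                         # identical initial data
err_x = err_y = 0.0
for i in range(n):
    t = i * dt
    x = rk4(full, x, t, dt)
    z = rk4(avg, z, t, dt)
    tn = t + dt
    y = x - eps * w(tn)                 # transformed variable y = x - eps*w
    err_x = max(err_x, abs(x - z))      # O(eps) oscillation about averaged flow
    err_y = max(err_y, abs(y - z))      # O(eps^2) residual
```

With ε = 0.1, the raw deviation err_x is of size ε/4 while err_y is of size ε²/8, illustrating how the near-identity change of coordinates pushes the time dependence to higher order.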


Then

    ẏ = ε f̄(y) + O(ε²) = ε f̄(y) + ε² f₁(y, t, ε).

Compare

    ẏ = ε f̄(y) + ε² f₁(y, t, ε)

to

    ż = ε f̄(z).
Let us prove the first part of the theorem.

Proof:

    z(t) − y(t) = z(0) − y(0) + ∫₀ᵗ ε ( f̄(z(s)) − f̄(y(s)) ) ds − ε² ∫₀ᵗ f₁(y(s), s, ε) ds.

So

    |y(t) − z(t)| ≤ |z(0) − y(0)| + ε ∫₀ᵗ |f̄(z(s)) − f̄(y(s))| ds + ε² ∫₀ᵗ |f₁(y(s), s, ε)| ds,

and the middle integrand is bounded by L |z(s) − y(s)|. Now let ζ(t) = z(t) − y(t). Then

    |ζ(t)| ≤ |ζ(0)| + ε L ∫₀ᵗ |ζ(s)| ds + ε² ∫₀ᵗ C ds,

where L is the Lipschitz constant for f̄ and C is a bound for f₁ on a neighborhood of {z(s), y(s) : 0 ≤ s ≤ t}.

Remember Gronwall's inequality: if

    v(t) ≤ c(t) + ∫₀ᵗ u(s) v(s) ds,

then

    v(t) ≤ c(0) exp( ∫₀ᵗ u(s) ds ) + ∫₀ᵗ c′(s) exp( ∫ₛᵗ u(τ) dτ ) ds.

Let c(t) = |ζ(0)| + ε² C t; then

    |ζ(t)| ≤ |ζ(0)| e^{εLt} + ε² C ∫₀ᵗ e^{εL(t−s)} ds.

So

    |ζ(t)| ≤ |ζ(0)| e^{εLt} + (εC/L) e^{εLt} − εC/L ≤ |ζ(0)| e^{εLt} + (εC/L) e^{εLt},

so

    |ζ(t)| ≤ ( |ζ(0)| + εC/L ) e^{εLt}.

Suppose that |ζ(0)| < εk for some k. Then

    |ζ(t)| ≤ ε ( k + C/L ) e^{εLt},

so for t < 1/ε,

    |ζ(t)| ≤ ε ( k + C/L ) e^L. □

Why look at a system like this? One case may be where you have a center: if you take the right T, the time-T map looks like the identity.
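Gronwall's inequality used in the proof is sharp when the hypothesis holds with equality. A small numerical check (the constants a, b, L are my own arbitrary choices): with u(t) = L constant and c(t) = a + bt, the extremal v solves v′ = b + Lv, v(0) = a, and should attain the bound c(0)e^{Lt} + ∫₀ᵗ c′(s) e^{L(t−s)} ds = a e^{Lt} + (b/L)(e^{Lt} − 1) exactly.

```python
import math

a, b, L = 1.0, 0.5, 0.3        # hypothetical test constants
t_final, dt = 2.0, 1e-4
n = int(round(t_final / dt))

# Solve v' = b + L v, v(0) = a (the equality case of the integral
# inequality) with classical RK4; the RHS has no explicit t dependence.
v = a
for _ in range(n):
    k1 = b + L * v
    k2 = b + L * (v + 0.5 * dt * k1)
    k3 = b + L * (v + 0.5 * dt * k2)
    k4 = b + L * (v + dt * k3)
    v += (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Gronwall bound evaluated at t_final.
bound = a * math.exp(L * t_final) + (b / L) * (math.exp(L * t_final) - 1.0)
```

The computed solution matches the bound to numerical precision, confirming that the estimate in the proof loses nothing in this extremal case.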

Index

α-limit set, 34
ω-limit set, 34
attractor, 35
attractor block, 35, 86
autonomous, 4
averaging, 115
best linear approximation, 8
bifurcation, 75
Cauchy, 31
Cauchy's Existence Theorem, 26
center manifold, 62, 82, 96
center of mass, 15
center stable manifold, 62
change of coordinates, 53, 74, 76, 98, 115, 116
change of variable, 89
change of variables, 8, 9, 11, 17, 19, 70, 86, 88-90
chaos, 105
codimension 1, 85
codimension 1 bifurcation, 85, 93
collision set, 7
Comparison Test, 27
complete metric space, 31
complex notation, 18
cone condition, 55
conjugate, 10
conservation of momentum, 15
constant of motion, 16
continuity in parameters, 25
continuity with respect to initial conditions, 25, 64
contraction, 32, 53, 62
contraction mapping principle, 57
critical point, 7
decouple, 20
derivative, 8
differential equation, 7
differential equation associated with f, 4
differentiation, 8
first return map, 65
fixed point, 7
Floquet multipliers, 66, 71, 98, 103
flow, 4
forward orbit, 7
glider flight, 13
global, 7, 44
global bifurcations, 77
Graph Transform Method, 52
gravity, 6
Gronwall's Inequality, 29, 117
group property, 4
Hamiltonian, 72, 101, 112-114
Hamiltonian system for H, 16
Hartman-Grobman-Sternberg Theorem, 42
heteroclinic connection, 102
homoclinic orbit, 101
homoclinic point, 101
Hopf bifurcation, 98
Hopf Bifurcation Theorem, 86, 92, 94
hyperbolic, 42, 51, 62, 74
hyperbolic fixed point, 50, 52
immersed m-submanifold, 44
Implicit Function Theorem, 65, 77, 98
initial condition, 4, 9
initial value problem, 4
integral, 16
invariant set, 33
Inverse Function Theorem, 78, 79, 93
isolated invariant set, 33
isolating neighborhood, 33
Kepler Problem, 6
Kepler's Laws, 17
least period, 7
level set, 77
limit cycle, 86
linear space, 8
linearization, 37, 48
Lipschitz, 24, 29, 30, 55, 58, 59, 117
local, 7
local flow, 6
local stable set, 45
local unstable set, 45
manifold, 7
map, 50, 94
maximal, 25
maximal invariant set, 33
McGehee Collision Manifold, 22
measurable, 25
Melnikov integral, 113, 115
Melnikov method, 100
method of majorants, 27
nice vector fields, 7
nondegeneracy conditions, 82, 86
normal form, 88, 90, 98
orbit, 7
periodic forcing, 72
periodic orbit, 63
periodic point, 7
periodic point with least period T, 63
phase space, 7
pitchfork bifurcation, 98
Poincare Map, 65
Poincare section, 64, 70, 76
Poincare-Bendixson, 86
positively invariant set, 33
power series, 27
recurrence, 104
resonance, 44
rest point, 7
saddle node bifurcation, 82
semiflow, 6
Siegel and Moser, 27
singularity, 17, 20
sink, 35, 37, 51, 65
solution, 4
source, 42, 65
spiral sink, 86, 90
spiral source, 86, 90
stable manifold, 45
stable set, 50
Stable/Unstable Manifold Theorem, 100
Stable/Unstable Manifold Theorem for Maps, 51
stationary point, 7
strong stable manifold, 62, 82
strong unstable manifold, 62
tangent vector, 7
Taylor series, 78
Taylor's Remainder Theorem, 38
Taylor's Theorem, 53
time T map, 50, 69
time t map, 26
topologically equivalent, 13
torus, 21
transversality, 61, 66
transverse homoclinic orbit, 103, 114
transverse homoclinic point, 103
unit tangent bundle, 58
unstable manifold, 46
variational equation, 26, 71
Zhukovskii, 13
