October-10-12 1:09 PM
Given L and v, can we solve the inverse problem? Is there always a solution to this problem? (As we'll see: not always.)
Given L and v, find u such that v = L(u). Let M be the matrix associated with L, so Mu = v. Can we find N such that NM = I?
So we are looking for N (the inverse of M) that satisfies NM = I:
Mu = v
NMu = Nv
Iu = Nv
u = Nv
Writing M = [a b; c d] and N = [e f; g h], multiplying out NM = I gives:
ea + fc = 1
eb + fd = 0
ga + hc = 0
gb + hd = 1
(a, b, c, d given; e, f, g, h unknown). Therefore, after manipulations:

N = 1/(ad - bc) * [d -b; -c a]

Therefore, we do have a solution for N only if ad - bc is not zero. As such, some matrices do not have an inverse. Example: a projection, which is not unique (many vectors share the same projection), so its matrix has no inverse.
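The 2x2 inverse formula above can be sketched directly in code. This is a minimal illustration (the function name `inverse_2x2` is my own, not from the notes); it raises an error exactly when ad - bc = 0, matching the non-invertible case.

```python
def inverse_2x2(a, b, c, d):
    """Inverse of M = [a b; c d] via N = (1/(ad - bc)) [d -b; -c a]."""
    det = a * d - b * c
    if det == 0:
        # ad - bc = 0: no inverse exists (e.g. a projection matrix)
        raise ValueError("ad - bc = 0: matrix has no inverse")
    return [[d / det, -b / det], [-c / det, a / det]]

print(inverse_2x2(1, 2, 3, 4))  # [[-2.0, 1.0], [1.5, -0.5]]
```

Checking by hand: for M = [1 2; 3 4], det = 1*4 - 2*3 = -2, so N = (1/-2)[4 -2; -3 1].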
Review projections: for the projection onto the x-axis, if given a projection, can we identify a unique vector that would have produced it? No.
Determinant
Anamjit Sivia Page 1
Linear systems arise frequently when modelling a wide range of engineering systems, e.g. electric circuits. Modelling: the art of writing a mathematical description of the system being analyzed. A model may consist of, say, one equation and 3 unknowns.
RECALL: We can write the solution to AX = B by starting at a single solution (a particular solution) and then moving along the homogeneous solutions. The key is that we have been able to express each leading variable in terms of the free variables. AX = B
A matrix is in RNF if:
a. The first non-zero entry in each row is a 1 (a "leading 1").
b. The other entries in the columns containing these leading 1's are zero.
c. The leading 1's move to the right as we move down the rows.
d. Any zero rows are collected at the bottom.
So, if M is already in RNF, it is easy to simply write down the solution to AX = B. If not, then we are going to perform operations on M to put it in reduced normal form (M'), where M' corresponds to an equivalent linear system with the same solution as M.
Housekeeping (row operations):
1. Multiply row 1 by -1
2. Subtract row 1 from row 2
3. Subtract row 1 from row 6
4. Multiply row 2 by -1
5. Row 3 - row 2
6. Row 5 - row 2
7. And so on
Summary:
There are three possible outcomes for a linear system:
1. Consistent Systems:
a. Unique solution (occurs when every variable in the RNF is a leading variable, i.e. there are no free variables)
b. Infinite solutions (occurs when there is at least one free variable)
2. Inconsistent Systems: no solution (occurs when a contradiction is seen in the matrix, e.g. a row of zeros equal to a non-zero constant)
Matrix Ranks
It turns out that the RNF of a matrix is unique. The number of leading 1's in the RNF of a matrix A is called the RANK of A: r = rank(A). In terms of solutions to AX = B with m equations and n variables, if the rank is r, the number of free variables is (n - r).
r = number of leading variables
n - r = number of free variables
n = total number of variables
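As a quick sketch of the rank count (assuming NumPy is available; `matrix_rank` is NumPy's routine, not something from the notes), here is the coefficient matrix of the two-equation, four-unknown example from this section:

```python
import numpy as np

# Coefficients of: x + y + 3w = 2  and  z - 2w = 1  (variables x, y, z, w)
A = np.array([[1., 1., 0.,  3.],
              [0., 0., 1., -2.]])

r = np.linalg.matrix_rank(A)   # number of leading variables
n = A.shape[1]                 # total number of variables
print(r, n - r)                # 2 leading variables, 2 free variables
```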
Can you solve 2x1 + 3x2 - 4x3 = 1? We are looking for all values that satisfy the equation. Declare x1 to be a basic variable and x2 and x3 as free variables:
2x1 = 1 - 3x2 + 4x3, so x1 = (1 - 3x2 + 4x3)/2

Example:
x + y + 3w = 2
z - 2w = 1
Two equations with four unknowns. Are there values of x, y, z and w that satisfy both equations? x and z are leading (basic); y and w are free variables (can be freely assigned):
x = 2 - y - 3w
z = 1 + 2w
In vector form:
[x, y, z, w] = [2, 0, 1, 0] + y[-1, 1, 0, 0] + w[-3, 0, 2, 1]
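A quick sanity check of the parametric solution x = 2 - y - 3w, z = 1 + 2w: for any freely chosen y and w, both original equations should hold. A small sketch:

```python
def solution(y, w):
    """Parametric family of solutions to x + y + 3w = 2 and z - 2w = 1."""
    x = 2 - y - 3 * w
    z = 1 + 2 * w
    return x, y, z, w

# Sample a few free-variable choices and verify both equations
for y in (-1.0, 0.0, 2.5):
    for w in (-2.0, 0.0, 1.5):
        x, y_, z, w_ = solution(y, w)
        assert abs(x + y_ + 3 * w_ - 2) < 1e-12  # x + y + 3w = 2
        assert abs(z - 2 * w_ - 1) < 1e-12       # z - 2w = 1
```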
In general, homogeneous systems (AX = 0):
- x = 0 is always a solution to a homogeneous system and is called the trivial solution.
- Therefore, only two possibilities exist for homogeneous systems: a unique solution (the trivial solution only) or infinite solutions (which include the trivial solution).
Matrix Inverses
- Limit to square systems, with AX = B as a starting point. The inverse of A will be denoted A^-1; it is a unique matrix.
- Assume that AX = B, where A is a square matrix, has a unique solution.
- This means the RNF of [A|B] must be of the form [I|C].
- So we are looking for a matrix D such that DA = I.
Formal Definition of a Matrix Inverse:
- D is called an inverse of A if we can find D such that AD = I and DA = I.
- Can A have more than one inverse? Let's say D and E are both inverses of A, so DA = I and AE = I. Then
D = DI = D(AE) = (DA)E = IE = E
Hence, there is only one inverse to a matrix. It is denoted A^-1.
- Going back to the use of elementary operations to obtain a matrix in RNF. Elementary matrices: we can associate a matrix with each of the three elementary row operations (interchange two rows, multiply a row by a non-zero constant, add a multiple of one row to another). The last type is referred to as a shear matrix. Therefore, we can conclude that there is a sequence of elementary matrices E1, ..., Ek such that:
Ek ... E2 E1 A = RNF(A)
Algorithm to Find A^-1:
- Form the augmented matrix M = [A | I], where A is the matrix to invert and I is the identity matrix of the same dimension. For example, if A is 4x4, I is also 4x4.
- Perform Gaussian elimination on M to bring it to RNF. If A is invertible, the result is [I | A^-1].
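The [A | I] algorithm can be sketched as follows (a minimal implementation assuming NumPy; the partial-pivoting step is a standard numerical precaution, not something stated in the notes):

```python
import numpy as np

def inverse_by_gauss_jordan(A):
    """Row-reduce the augmented matrix [A | I]; the right half becomes A^-1."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])          # build [A | I]
    for col in range(n):
        pivot = col + int(np.argmax(np.abs(M[col:, col])))  # partial pivoting
        if abs(M[pivot, col]) < 1e-12:
            raise ValueError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]  # interchange two rows
        M[col] /= M[col, col]              # scale row to create a leading 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]  # zero the rest of the column
    return M[:, n:]                        # right half is A^-1
```

Each loop step is one of the three elementary row operations from above.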
Therefore: every elementary matrix is itself invertible, and its inverse is also an elementary matrix, corresponding to the inverse of the row operation that produced it.

REVIEW: Every elementary matrix E is invertible, and E^-1 is also an elementary matrix corresponding to the inverse of the row operation that produced E.

Two important properties:
- Property 1: If A and B are both invertible matrices of the same size, then AB is also invertible and (AB)^-1 = B^-1 A^-1. We will make use of the formal definition of an inverse to prove this property.
Proof: Take B^-1 A^-1 as a candidate inverse for AB:
(AB)(B^-1 A^-1) = A(B B^-1)A^-1 = A I A^-1 = I
(B^-1 A^-1)(AB) = B^-1(A^-1 A)B = B^-1 I B = I
Therefore, B^-1 A^-1 is the inverse of AB.
Using Property 1:
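A numeric spot-check of Property 1 (assuming NumPy; the random matrices and the diagonal shift that keeps them comfortably invertible are my own choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Adding 3I keeps these test matrices well away from singular in practice
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)
B = rng.standard_normal((3, 3)) + 3 * np.eye(3)

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)  # note the reversed order
assert np.allclose(lhs, rhs)
```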
Therefore, every invertible matrix can be expressed as a product of elementary matrices. This is a significant result: it says invertibility is exactly "reachable from I by row operations", which is what justifies the [A | I] inversion algorithm above.
Curve Fitting: Take a set of points and fit a function to those points.
Interpolation: Take a curve fit and construct new, related data points on the curve, inside the original data interval.
Extrapolation: Take a curve, extend the graph to generate new data points i.e. new data point outside the original data interval
Least Squares
We are dealing with the class of AX = B problems where there is no solution. We begin by defining an error vector E = AX - B (sizes: A is m x n, X is n x 1, B is m x 1). Let's extend the notion of the magnitude of a vector to:
||v|| = sqrt(v . v) = sqrt(v^T v)
This gives us a geometric way of thinking about the consistency of AX = B, namely: AX = B is consistent if and only if B can be expressed as a linear combination of the column vectors of A (or: B is in the column space of A). Let's define W as the column space of A. Therefore, as X varies over R^n, AX will vary over the entire column space W. The least squares problem arises when the vector B does not lie in the column space W. Solving this problem amounts to finding X* such that AX* is the closest vector in W to B. We can think of this as solving for:
minimize ||AX - B||
It then follows that the error vector AX* - B is orthogonal to W, and therefore orthogonal to every column vector of A:
A^T (A X* - B) = 0, i.e. A^T A X* = A^T B (the normal system)
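The normal system can be solved directly. A sketch assuming NumPy; the data points below are hypothetical (the example's original numbers did not survive), chosen so the system AX = B is inconsistent and a straight-line fit y = c0 + c1*x is sensible:

```python
import numpy as np

def least_squares(A, B):
    """Solve the normal system A^T A X = A^T B for the least-squares solution."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    return np.linalg.solve(A.T @ A, A.T @ B)

# Hypothetical data: fit y = c0 + c1*x through four points
x = np.array([0., 1., 2., 3.])
y = np.array([1., 2., 2., 4.])
A = np.column_stack([np.ones_like(x), x])  # columns: [1, x]
coef = least_squares(A, y)

# Cross-check against NumPy's built-in least-squares solver
assert np.allclose(coef, np.linalg.lstsq(A, y, rcond=None)[0])
```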
X* = (A^T A)^-1 A^T B

This is called the Least Squares Solution (assuming that (A^T A)^-1 exists).

Numerical Example: we have X (the independent variable) and Y (the dependent variable). [The data table and the worked matrices were images and did not survive; only scattered entries remain.] Setting up AX = B from the data gives an inconsistent system. Therefore, no solution. But we will not give up. We are EngScis. Why don't we try a least squares solution? Form the normal system A^T A X* = A^T B, solve for X*, and compute the distance ||AX* - B|| from AX* to B.
Numerical Integration
Recall: the integral of f over [a, b] is F(b) - F(a), where f(x) is a continuous function and F is a function such that F' = f. This cannot be used when an antiderivative F is not defined or cannot be found. We will use numerical integration in such cases, approximating the area under f with simple shapes.
Another approach, rather than using simple rectangles to approximate the area: consider fitting a polynomial P(x) through the discrete values f(x_i) such that P(x) is approximately f(x). We can use the 2-point Newton-Cotes rule to fit a polynomial of degree 1 (a line) to each pair of points, but this is just a trapezoid. Now, let's use the 3-point Newton-Cotes rule to fit a polynomial of degree 2 to each triplet of points:
Because the 'area' is unaffected by horizontal position, we will instead solve the simpler problem of integrating over an interval centred at 0.
Errors pervade all areas of Science. Therefore, it is important to understand, quantify and control them.
The slope of the function does not affect the error much; rather, it is the second derivative that gives us an estimate of the error we can expect (a function whose slope changes faster yields a greater error).

Example: [the specific integrand evaluated here was an image and did not survive.] For the Trapezoid Rule with n subintervals on [a, b], if |f''(x)| <= K on the interval, the standard bound is:

|Error| <= K (b - a)^3 / (12 n^2)

Since the second derivative here is bounded by 1, this gives an estimate of the upper bound of the error. In order to adjust this, we can increase n; the bound will then get smaller. Actual Error: calculate the exact integral and subtract the numerical approximation.
Simpson's Rule:
The order of an integration method is the first integer m such that the method does not compute the integral of x^m exactly. For example, the Trapezoid Rule integrates constants and lines exactly but not x^2: Second Order. Simpson's Rule (which even integrates cubics exactly): Fourth Order.
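The order shows up empirically: halving the step size of a second-order method should cut the error by about 2^2 = 4. A sketch (my own test integrand, sin over [0, pi], exact value 2):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

exact = 2.0  # integral of sin over [0, pi]
e1 = abs(trapezoid(math.sin, 0, math.pi, 10) - exact)
e2 = abs(trapezoid(math.sin, 0, math.pi, 20) - exact)
ratio = e1 / e2
# Second-order method: doubling n should shrink the error by roughly 4x
assert 3.8 < ratio < 4.2
```

The same experiment with Simpson's rule gives a ratio near 16, confirming fourth order.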
Curve Fitting
Sometimes we need to fit data where the data points do not follow a simple linear or quadratic pattern perfectly.
Use Gaussian elimination to obtain a = 2, b = -3, c = 1 (fitting a quadratic y = ax^2 + bx + c through the given points). What if we had tried to fit a straight line rather than a quadratic?
Transpose: If A is an m x n matrix, the transpose A^T is the n x m matrix whose rows are the columns of A, written in the same order.
Properties:
1. (A^T)^T = A
2. (A + B)^T = A^T + B^T
3. (kA)^T = k A^T
4. (AB)^T = B^T A^T
5. If A is invertible, so is its transpose, and (A^T)^-1 = (A^-1)^T
Proof of property 5: A^-1 exists by assumption. Then A^T (A^-1)^T = (A^-1 A)^T = I^T = I, and similarly (A^-1)^T A^T = (A A^-1)^T = I, which covers the other case required by non-commutative multiplication.
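Properties 4 and 5 are easy to spot-check numerically (a sketch assuming NumPy; the random matrices and the diagonal shift keeping C invertible are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))

# Property 4: (AB)^T = B^T A^T (note the reversed order)
assert np.allclose((A @ B).T, B.T @ A.T)

# Property 5: (C^T)^-1 = (C^-1)^T, for an invertible C
C = rng.standard_normal((3, 3)) + 3 * np.eye(3)  # shift keeps C well-conditioned
assert np.allclose(np.linalg.inv(C.T), np.linalg.inv(C).T)
```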
Differential Equations
We haven't formally covered differential equations yet, but we can still set them up.
Circuit Example (Ohm's Law): a change in resistance represents an instant change in voltage. This is a static system.
Spring Example (Newton's Law): a mass is attached to a spring, on a frictionless surface. The spring does not instantaneously change its position when it is stretched or compressed; it oscillates. This is a dynamic system. In terms of derivatives:
y' = f(t, y), for some function f.
What if f is such that no solution can be guessed? In this case, numerical methods may have to be used, as it is not intuitive to guess a function that fulfills the conditions.
Euler's Method
We are going to make use of the fact that the derivative is always known. Start at (t0, y0) and use the slope at this point to step forward.

Euler's Method is a numerical approach to solving first-order IVPs that makes use of derivative information. Given y' = f(t, y) and y(t0) = y0, start at (t0, y0).
Algorithm:
y_{n+1} = y_n + h * f(t_n, y_n)   (f(t_n, y_n) is the slope)
t_{n+1} = t_n + h
where y_n and t_n are approximate values of y and t; we move forward in time by using the slope to estimate the change in y.

Sources of Error:
1. The formula for going from y_n to y_{n+1} is not exact.
2. The information going into the formula is not exact.

How may we improve on the formula? Consider two slopes:
k1 = f(t_n, y_n): slope estimate at t_n
k2 = f(t_{n+1}, y_{n+1}): slope estimate at t_{n+1}
and average them.

Q: What do we use for y_{n+1} inside k2?
A: Replace it by the estimate obtained using Euler's Method: y_{n+1} is approximately y_n + h*k1.
Summary of Algorithm (Improved Euler's Method):
k1 = f(t_n, y_n) and k2 = f(t_n + h, y_n + h*k1)
y_{n+1} = y_n + (h/2)(k1 + k2), t_{n+1} = t_n + h

Euler's Method: y_{n+1} = y_n + h*f(t_n, y_n)

These types of equations are called difference equations. They allow us to solve for y at different times: choose a step size h and march the solution forward from t0 one step at a time.
Recap: numerical solutions to differential equations (Initial Value Problems):
- Euler's Method, Improved Euler's Method
- Both belong to the class of predictor-corrector (Runge-Kutta) methods
A general approach to representing higher-order systems: rewrite them as first-order systems by introducing the derivatives as new variables. For example, consider two springs attached together. Let x_i be the compression of spring i, with spring constant k_i. The equations of motion describing this system are second order in each x_i; introducing the velocities v_i = x_i' converts them to a first-order system, and similarly for any higher-order equation.
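The two-spring equations themselves did not survive in these notes, so here is the reduction technique on the simplest case I can ground: a single spring m*y'' = -k*y, rewritten as a first-order system in the state (y, v) and integrated with Euler's method (all names and parameter values are my own for illustration):

```python
import math

def spring_rhs(t, u, m=1.0, k=1.0):
    """State u = (y, v): y' = v, v' = -(k/m) * y."""
    y, v = u
    return (v, -(k / m) * y)

def euler_system(f, t0, u0, h, steps):
    """Euler's method applied componentwise to a first-order system."""
    t, u = t0, list(u0)
    for _ in range(steps):
        du = f(t, u)
        u = [ui + h * dui for ui, dui in zip(u, du)]
        t = t + h
    return u

# y(0) = 1, v(0) = 0  ->  exact solution y(t) = cos(t) when m = k = 1
y, v = euler_system(spring_rhs, 0.0, (1.0, 0.0), 1e-4, 10000)  # integrate to t = 1
```

The oscillating (dynamic) behaviour from the spring example appears naturally: y follows cos(t).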
Boundary Value Problems
Suppose we want to solve a second-order equation on the interval [0, 1], with boundary conditions given at both endpoints.
This is distinct from the previous chapter: we are given the value of y at two points (one at each end) but nothing about the derivative of y at the start. Looking at the beam example, spanning x = 0 to x = L:
E = Young's Modulus, I = the second moment of area, w = linear load density. If the beam is hinged at x = 0 and free at x = L, the boundary conditions constrain y and its derivatives at the two ends.
As with numerical integration, we are going to partition the interval [a, b] into n equal subintervals of width h = (b - a)/n. We will simultaneously estimate the solution at the grid points x_i = a + i*h.
We will approximate the derivatives in the differential equation using finite differences, keeping in mind that y'(x) is the limit of (y(x + h) - y(x))/h as h goes to 0. Different differences may be used:

Forward difference: y'(x) is approximately (y(x + h) - y(x))/h
Backward difference: y'(x) is approximately (y(x) - y(x - h))/h
Central difference: y'(x) is approximately (y(x + h) - y(x - h))/(2h)
and for the second derivative: y''(x) is approximately (y(x + h) - 2y(x) + y(x - h))/h^2
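These formulas are easy to compare at a single point (my own test function, sin at x = 1, where the true first derivative is cos(1) and the second derivative is -sin(1)):

```python
import math

h = 1e-3
x = 1.0
f = math.sin

forward  = (f(x + h) - f(x)) / h            # O(h) accurate
backward = (f(x) - f(x - h)) / h            # O(h) accurate
central  = (f(x + h) - f(x - h)) / (2 * h)  # O(h^2) accurate
second   = (f(x + h) - 2 * f(x) + f(x - h)) / h**2  # approximates f''(x)
```

The central difference is noticeably more accurate than the one-sided ones for the same h.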
Substituting these at each grid point turns the differential equation into an algebraic equation in the unknown values of y. This can be further simplified and then easily solved as a linear system. Next, approximate the derivatives using finite differences. For this example, let's choose n = 5. We want to simultaneously estimate the interior values y_1, ..., y_4, substituting at the grid points using the given boundary conditions for the endpoint values.

We can refine the grid (increase n) to increase the accuracy of the finite differences; the results here are accurate to about two decimal places. However, doing so would involve solving a larger matrix.
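The whole finite-difference BVP procedure can be sketched end to end. The notes' actual equation did not survive, so this uses an assumed model problem y'' = -1 on [0, 1] with y(0) = y(1) = 0, whose exact solution is y(x) = x(1 - x)/2 (a quadratic, so the central difference is exact at the grid points):

```python
import numpy as np

n = 5                       # number of subintervals, as in the notes
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Central difference at interior points: (y[i-1] - 2 y[i] + y[i+1]) / h^2 = -1.
# With zero boundary values, the boundary terms drop out of the right-hand side.
A = np.zeros((n - 1, n - 1))
for i in range(n - 1):
    A[i, i] = -2.0
    if i > 0:
        A[i, i - 1] = 1.0
    if i < n - 2:
        A[i, i + 1] = 1.0
b = -np.ones(n - 1) * h**2

y_interior = np.linalg.solve(A, b)             # solve the tridiagonal system
y = np.concatenate([[0.0], y_interior, [0.0]])  # re-attach boundary values
```

Increasing n enlarges the tridiagonal matrix but tightens the approximation, exactly as the notes observe.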