Eigen value and Eigen vector and its importance

Submitted by
Kiran Panigrahi, Roll No. ELE201610938
2019 - 2020

Seminar Advisor: Arabinda Panda
ABSTRACT

Eigenvalues were used to determine the theoretical limit to how much information can be transmitted through a communication medium such as a telephone line or the air. This is done by calculating the eigenvectors and eigenvalues of the communication channel (expressed as a matrix) and then water-filling on the eigenvalues. The eigenvalues are then, in essence, the gains of the fundamental modes of the channel, which themselves are captured by the eigenvectors. The natural frequency of a bridge is the eigenvalue of smallest magnitude of a system that models the bridge, and engineers exploit this knowledge to ensure the stability of their constructions. Eigenvalue analysis is also used in the design of car stereo systems, where it helps to reproduce the vibration of the car due to the music. Eigenvalues and eigenvectors are also useful for decoupling three-phase systems through the symmetrical component transformation: they allow us to "reduce" a linear operation to separate, simpler problems. Oil companies frequently use eigenvalue analysis to explore land for oil. Oil, dirt, and other substances all give rise to linear systems with different eigenvalues, so eigenvalue analysis can give a good indication of where oil reserves are located.
ACKNOWLEDGEMENT
I give my sincere thanks to Mr. Arabinda Panda, Seminar Advisor, for giving me the opportunity, motivating me to complete the seminar within the stipulated period of time, and providing a helpful environment.
I acknowledge with immense pleasure the sustained interest, encouraging attitude and constant inspiration rendered by Prof. Sukanta Mohapatra (Chairman), Prof. Sangram Mudali (Director), Prof. Geetika Mudali (Placement Director) & Prof. Ajit Kumar Panda (Dean), N.I.S.T. Their continued drive for better quality in everything that happens at N.I.S.T. and their selfless inspiration have always helped me to move ahead.
KIRAN PANIGRAHI
Roll No. 201610938
TABLE OF CONTENTS
ABSTRACT .................................................................................................................... i
ACKNOWLEDGEMENT .............................................................................................ii
TABLE OF CONTENTS ..............................................................................................iii
LIST OF FIGURES ...................................................................................................... iv
1. INTRODUCTION ..................................................................................................... 1
2. HISTORY .................................................................................................................. 2
3. EIGENVALUES AND EIGENVECTORS OF MATRICES .................................... 3
4. MATRIX EXAMPLES .............................................................................................. 8
5. CALCULATION ..................................................................................................... 12
6. APPLICATION ....................................................................................................... 15
7. CONCLUSION ........................................................................................................ 20
REFERENCES ............................................................................................................ 22
LIST OF FIGURES
Eigen value and Eigen vector and its importance
1. INTRODUCTION
Eigenvectors and eigenvalues have many important applications in computer vision and machine learning in general, for example in the construction of error ellipses. Furthermore, eigendecomposition forms the basis of the geometric interpretation of covariance matrices. This article provides a gentle introduction to this mathematical concept and shows how to manually obtain the eigendecomposition of a 2D square matrix.
Given a square matrix A, a scalar λ, and a non-zero vector v, λ is an eigenvalue of A and v a corresponding eigenvector if the following equation is satisfied:

Av = λv

In other words, if matrix A times the vector v equals the scalar λ times the vector v, then λ is the eigenvalue associated with the eigenvector v. An eigenspace of A is the set of all eigenvectors with the same eigenvalue together with the zero vector; however, the zero vector itself is not an eigenvector. These ideas are often extended to
more general situations, where scalars are elements of any field, vectors are elements
of any vector space, and linear transformations may or may not be represented by
matrix multiplication. For example, instead of real numbers, scalars may be complex
numbers; instead of arrows, vectors may be functions or frequencies; instead of
matrix multiplication, linear transformations may be operators such as the derivative
from calculus. These are only a few of countless examples where eigenvectors and
eigenvalues are important.
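The defining relation Av = λv is easy to check numerically. The following is a minimal sketch using NumPy; the 2×2 matrix here is an arbitrary illustration, not one from the text:

```python
import numpy as np

# An arbitrary 2x2 matrix used purely for illustration.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding (unit-norm) eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify A v = lambda v for every eigenpair.
for i in range(len(eigenvalues)):
    v = eigenvectors[:, i]
    lam = eigenvalues[i]
    assert np.allclose(A @ v, lam * v)
```

Note that `np.linalg.eig` returns eigenvectors as the columns of its second output, normalised to unit length.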
2. HISTORY
Eigenvalues are often introduced in the context of linear algebra or matrix theory.
Historically, however, they arose in the study of quadratic forms and differential
equations. In the 18th century Euler studied the rotational motion of a rigid body and
discovered the importance of the principal axes. Lagrange realized that the principal
axes are the eigenvectors of the inertia matrix. Cauchy coined the term racine caractéristique (characteristic root) for what is now called an eigenvalue; his term survives in the characteristic equation. Cauchy's result that real symmetric matrices have real eigenvalues was extended by Hermite in 1855 to what are
now called Hermitian matrices. Finally, Weierstrass clarified an important aspect in
the stability theory started by Laplace by realizing that defective matrices can cause
instability. In the meantime, Liouville studied eigenvalue problems similar to those of
Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory.
Schwarz studied the first eigenvalue of Laplace's equation on general domains
towards the end of the 19th century, while Poincaré studied Poisson's equation a few
years later. At the start of the 20th century, Hilbert studied the eigenvalues of integral
operators by viewing the operators as infinite matrices. He was the first to use the
German word eigen, which means "own", to denote eigenvalues and eigenvectors in
1904, though he may have been following a related usage by Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today. The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis and Vera Kublanovskaya in 1961.
3. EIGENVALUES AND EIGENVECTORS OF MATRICES
Eigenvalues and eigenvectors are often introduced to students in the context of linear
algebra courses focused on matrices. Furthermore, linear transformations over a
finite-dimensional vector space can be represented using matrices, which is especially
common in numerical and computational applications.
Consider n-dimensional vectors that are formed as a list of n scalars, such as the three-dimensional vectors

x = (1, −3, 4)^T and y = (−20, 60, −80)^T.

These vectors are said to be scalar multiples of each other, or parallel, if there is a scalar λ such that x = λy. In this case λ = −1/20. Now consider the linear transformation of n-dimensional vectors defined by an n × n matrix A,

Av = w.
If it occurs that w is a scalar multiple of v, that is if

Av = w = λv, ......(1)

then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. Equation (1) is the eigenvalue equation for the matrix A. It can be stated equivalently as

(A − λI)v = 0, ......(2)

where I is the n × n identity matrix and 0 is the zero vector.
Equation (2) has a nonzero solution v if and only if the determinant of the matrix (A −
λI) is zero. Therefore, the eigenvalues of A are values of λ that satisfy the equation
det(A − λI) = 0. ......(3)
Using Leibniz's rule for the determinant, the left-hand side of Equation (3) is a polynomial function of the variable λ, and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always (−1)^n λ^n. This polynomial is called the characteristic polynomial of A, and Equation (3) is called the characteristic equation or the secular equation of A.
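The characteristic polynomial and its roots can also be obtained numerically. A sketch assuming NumPy, whose `np.poly` returns the coefficients of the (monic) characteristic polynomial of a square matrix; the matrix here is an arbitrary illustration:

```python
import numpy as np

# Arbitrary example matrix (an illustration, not from the text).
A = np.array([[1.0, 2.0],
              [3.0, 0.0]])

# np.poly on a square matrix returns the coefficients of its
# characteristic polynomial: here lambda^2 - lambda - 6.
coeffs = np.poly(A)

# The roots of the characteristic polynomial are the eigenvalues of A,
# so both computations below agree.
roots = np.roots(coeffs)
direct = np.linalg.eigvals(A)
assert np.allclose(np.sort(roots), np.sort(direct))
```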
The fundamental theorem of algebra implies that the characteristic polynomial can be factored into the product of n linear terms,

det(A − λI) = (λ1 − λ)(λ2 − λ) ⋯ (λn − λ), ......(4)

where each λi may be real but in general is a complex number. The numbers λ1, λ2, ..., λn, which may not all have distinct values, are roots of the polynomial and are the eigenvalues of A.
As a brief example, which is described in more detail in the examples section later, consider the matrix

A = [ 2 1
      1 2 ].

Taking the determinant of (A − λI), the characteristic polynomial of A is

det(A − λI) = (2 − λ)^2 − 1 = λ^2 − 4λ + 3.

Setting the characteristic polynomial equal to zero, it has roots λ = 1 and λ = 3, which are the two eigenvalues of A.
The eigenvalues of the k-th power of A, i.e. the eigenvalues of A^k for any positive integer k, are λ1^k, ..., λn^k.
The matrix A is invertible if and only if every eigenvalue is nonzero. If A is invertible, then the eigenvalues of A^−1 are 1/λ1, ..., 1/λn, and each eigenvalue's geometric multiplicity coincides.
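These properties are easy to confirm numerically. A sketch assuming NumPy, using the 2×2 matrix with eigenvalues 1 and 3 from the example above:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # eigenvalues 1 and 3

lams = np.sort(np.linalg.eigvals(A))

# Eigenvalues of A^k are the k-th powers of the eigenvalues of A.
k = 3
lams_power = np.sort(np.linalg.eigvals(np.linalg.matrix_power(A, k)))
assert np.allclose(lams_power, lams ** k)

# A has no zero eigenvalue, so it is invertible, and the
# eigenvalues of A^-1 are the reciprocals 1 and 1/3.
lams_inv = np.sort(np.linalg.eigvals(np.linalg.inv(A)))
assert np.allclose(lams_inv, np.sort(1.0 / lams))
```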
The eigenvalue and eigenvector problem can also be defined for row vectors that left-multiply the matrix A. In this formulation, the defining equation is

uA = ku,

where k is a scalar and u is a 1 × n matrix. Any row vector u satisfying this equation is called a left eigenvector of A, and k is its associated eigenvalue. Taking the transpose of this equation gives

A^T u^T = k u^T.
Suppose the eigenvectors of A form a basis, i.e. A has n linearly independent eigenvectors v1, v2, ..., vn with associated eigenvalues λ1, λ2, ..., λn. Define a square matrix Q whose columns are these eigenvectors,

Q = [v1 v2 ... vn].

With this in mind, define a diagonal matrix Λ where each diagonal element Λii is the eigenvalue associated with the ith column of Q. Then AQ = QΛ and, since the columns of Q are linearly independent, A can be decomposed as

A = QΛQ^−1.
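This eigendecomposition can be verified numerically. A sketch assuming NumPy, where the eigenvector matrix Q and the diagonal matrix of eigenvalues reconstruct A:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of Q are the eigenvectors; Lam holds the eigenvalues
# on its diagonal.
eigenvalues, Q = np.linalg.eig(A)
Lam = np.diag(eigenvalues)

# A Q = Q Lam, and hence A = Q Lam Q^-1.
assert np.allclose(A @ Q, Q @ Lam)
assert np.allclose(A, Q @ Lam @ np.linalg.inv(Q))
```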
A matrix that is not diagonalizable is said to be defective. For defective matrices, the
notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix
of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed
field, any matrix A has a Jordan normal form and therefore admits a basis of
generalized eigenvectors and a decomposition into generalized eigenspaces.
4. MATRIX EXAMPLES
Consider again the two-dimensional matrix A = [2 1; 1 2] introduced above. (A figure illustrating the effect of this transformation on point coordinates in the plane appeared here in the original.) The eigenvectors v of this transformation satisfy Equation (1), and the values of λ for which the determinant of the matrix (A − λI) equals zero are the eigenvalues.
For λ = 1, Equation (2) becomes (A − I)v = 0, whose nonzero solutions satisfy x + y = 0; for λ = 3 it becomes (A − 3I)v = 0, whose nonzero solutions satisfy x − y = 0. Thus, the vectors vλ=1 = (1, −1)^T and vλ=3 = (1, 1)^T are eigenvectors of A associated with the eigenvalues λ = 1 and λ = 3, respectively.
As a three-dimensional example, consider the matrix

A = [ 2 0 0
      0 3 4
      0 4 9 ].

The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors (1, 0, 0)^T, (0, 2, −1)^T and (0, 1, 2)^T, respectively.
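The three eigenvalues 2, 1 and 11 can be checked numerically. A sketch assuming NumPy; the 3×3 symmetric matrix below is consistent with these eigenvalues:

```python
import numpy as np

# A 3x3 symmetric matrix whose eigenvalues are 1, 2, and 11.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 4.0, 9.0]])

# eigvalsh is the appropriate routine for symmetric matrices and
# returns the eigenvalues in ascending order.
lams = np.linalg.eigvalsh(A)
assert np.allclose(lams, [1.0, 2.0, 11.0])
```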
As a final example, consider the cyclic permutation matrix

A = [ 0 1 0
      0 0 1
      1 0 0 ].

This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is 1 − λ^3, whose roots are the three cube roots of unity: λ1 = 1 and the complex conjugate pair λ2,3 = −1/2 ± (√3/2)i. For the real eigenvalue λ1 = 1, any vector with three equal nonzero entries is an eigenvector, for example v = (1, 1, 1)^T.
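The permutation matrix described here can be built explicitly and its eigenvalues checked against the roots of 1 − λ^3. A sketch assuming NumPy:

```python
import numpy as np

# Permutation matrix that shifts coordinates up by one position and
# moves the first coordinate to the bottom: A (x, y, z) = (y, z, x).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

lams = np.linalg.eigvals(A)

# All three eigenvalues are cube roots of unity: lambda^3 = 1.
assert np.allclose(lams ** 3, 1.0)

# Any vector with three equal nonzero entries is an eigenvector
# for the real eigenvalue lambda = 1.
v = np.array([1.0, 1.0, 1.0])
assert np.allclose(A @ v, v)
```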
Matrices with entries only along the main diagonal are called diagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrix

A = [ 1 0 0
      0 2 0
      0 0 3 ].

Its characteristic polynomial is (1 − λ)(2 − λ)(3 − λ), which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A.
5. CALCULATION
Classical method
The classical method is to first find the eigenvalues, and then calculate the eigenvectors for each eigenvalue. It is in several ways poorly suited for non-exact arithmetic such as floating point.
Eigenvalues
Eigenvectors
Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding nonzero solutions of the eigenvalue equation, which becomes a system of linear equations with known coefficients. For example, once it is known that 6 is an eigenvalue of the matrix

A = [ 4 1
      6 3 ],

the eigenvectors follow by solving the equation Av = 6v. Both equations reduce to the single linear equation y = 2x. Therefore, any vector of the form (a, 2a)^T, for any nonzero real number a, is an eigenvector of A with eigenvalue λ = 6.
The matrix A above has another eigenvalue, λ = 1. A similar calculation shows that the corresponding eigenvectors are the nonzero solutions of 3x + y = 0, that is, any vector of the form (b, −3b)^T, for any nonzero real number b.
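The matrix in this example can be reconstructed from the relations y = 2x (for λ = 6) and 3x + y = 0 (for λ = 1), and the eigenvector families checked directly. A sketch assuming NumPy and that reconstruction:

```python
import numpy as np

# Matrix consistent with the eigenvector relations in the text:
# eigenvalue 6 with eigenvectors (a, 2a), eigenvalue 1 with (b, -3b).
A = np.array([[4.0, 1.0],
              [6.0, 3.0]])

# For lambda = 6, (a, 2a) is an eigenvector for any nonzero a.
a = 5.0
v6 = np.array([a, 2 * a])
assert np.allclose(A @ v6, 6 * v6)

# For lambda = 1, (b, -3b) is an eigenvector for any nonzero b.
b = -2.0
v1 = np.array([b, -3 * b])
assert np.allclose(A @ v1, 1 * v1)
```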
The converse approach, of first seeking the eigenvectors and then determining each eigenvalue from its eigenvector, turns out to be far more tractable for computers. The easiest algorithm here consists of picking an arbitrary starting vector and then repeatedly multiplying it with the matrix (optionally normalising the vector to keep its elements of reasonable size); surprisingly, this makes the vector converge towards an eigenvector. A variation is to instead multiply the vector by (A − μI)^−1; this causes it to converge to an eigenvector of the eigenvalue closest to μ.
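The iteration described above, normalising at each step, can be sketched as a minimal power-method implementation (assuming NumPy; the matrix and iteration count are illustrative choices):

```python
import numpy as np

def power_iteration(A, num_iters=1000):
    """Repeatedly multiply a starting vector by A, normalising each
    time; the vector converges towards an eigenvector of the
    eigenvalue of largest magnitude."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[0])
    for _ in range(num_iters):
        w = A @ v
        v = w / np.linalg.norm(w)   # keep the elements a reasonable size
    # The Rayleigh quotient gives the corresponding eigenvalue estimate.
    return v @ A @ v, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_iteration(A)

# The dominant eigenvalue of this matrix is 3.
assert np.isclose(lam, 3.0)
assert np.allclose(A @ v, lam * v)
```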
Modern methods
Most numeric methods that compute the eigenvalues of a matrix also determine a set
of corresponding eigenvectors as a by-product of the computation, although
sometimes implementors choose to discard the eigenvector information as soon as it is
no longer needed.
6. APPLICATION
However, in the case where one is interested only in the bound state solutions of the
Schrödinger equation, one looks for Ψ within the space of square integrable
functions. Since this space is a Hilbert space with a well-defined scalar product, one
can introduce a basis set in which Ψ and E can be represented as a one-dimensional
array (i.e., a vector) and a matrix respectively. This allows one to represent the
Schrödinger equation in a matrix form. The bra–ket notation is often used in this
context. A vector, which represents a state of the system, in the Hilbert space of
square integrable functions is represented by |Ψ⟩. In this notation, the Schrödinger equation is

H|Ψ_E⟩ = E|Ψ_E⟩,

where |Ψ_E⟩ is an eigenstate of the Hamiltonian H and E is the corresponding energy eigenvalue.
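As a toy illustration of this matrix form, the Hamiltonian of a particle in a one-dimensional box can be discretised by finite differences, turning the Schrödinger equation into an ordinary matrix eigenvalue problem. This is an assumed illustration (units with ħ = m = 1, box length 1, uniform grid; NumPy assumed), not an example from the text:

```python
import numpy as np

# Discretise H = -(1/2) d^2/dx^2 on (0, 1) with zero boundary
# conditions. The second derivative becomes a tridiagonal matrix.
n = 500
dx = 1.0 / (n + 1)
main = np.full(n, 1.0 / dx**2)
off = np.full(n - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# The eigenvalues of H approximate the exact box energies
# E_k = (k * pi)^2 / 2 for k = 1, 2, 3, ...
energies = np.linalg.eigvalsh(H)
exact = (np.pi * np.arange(1, 4)) ** 2 / 2.0
assert np.allclose(energies[:3], exact, rtol=1e-3)
```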
Thus, if one wants to underline this aspect, one speaks of nonlinear eigenvalue problems. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called the Roothaan equations.
In geology, especially in the study of glacial till, eigenvectors and eigenvalues are
used as a method by which a mass of information of a clast fabric's constituents'
orientation and dip can be summarized in a 3-D space by six numbers. In the field, a
geologist may collect such data for hundreds or thousands of clasts in a soil sample,
which can only be compared graphically such as in a Tri-Plot (Sneed and Folk)
diagram, or as a Stereonet on a Wulff Net.
The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors are ordered v1, v2, v3 by their eigenvalues E1 ≥ E2 ≥ E3; v1 is then the primary orientation/dip of the clast, v2 is the secondary and v3 is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of E1, E2, and E3 are dictated by the nature of the sediment's fabric. If E1 = E2 = E3, the fabric is said to be isotropic. If E1 = E2 > E3, the fabric is said to be planar. If E1 > E2 > E3, the fabric is said to be linear.
Principal component analysis is used to study large data sets, such as those
encountered in bioinformatics, data mining, chemical research, psychology, and in
marketing. PCA is also popular in psychology, especially within the field of
psychometrics. In Q methodology, the eigenvalues of the correlation matrix determine
the Q-methodologist's judgment of practical significance (which differs from the
statistical significance of hypothesis testing; cf. criteria for determining the number of
factors). More generally, principal component analysis can be used as a method of
factor analysis in structural equation modeling.
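Concretely, PCA diagonalises the covariance matrix of the data: the eigenvectors are the principal components and the eigenvalues are the variances along them. A minimal sketch assuming NumPy, with synthetic data invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 2-D data, strongly correlated along the direction (1, 2).
x = rng.standard_normal(1000)
data = np.column_stack([x, 2 * x + 0.1 * rng.standard_normal(1000)])

# Eigendecomposition of the covariance matrix; eigh returns the
# eigenvalues of this symmetric matrix in ascending order.
cov = np.cov(data, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# The last column is the first principal component: the direction of
# greatest variance, here roughly proportional to (1, 2).
pc = eigenvectors[:, -1]
slope = pc[1] / pc[0]
```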
6.5 Eigenfaces
7. CONCLUSION
REFERENCES