
Matrix (mathematics)

From Wikipedia, the free encyclopedia


For other uses, see Matrix.
"Matrix theory" redirects here. For the physics topic, see Matrix string theory.

An m × n matrix: the m rows are horizontal and the n columns are vertical. Each element of a matrix is often
denoted by a variable with two subscripts. For example, a2,1 represents the element at the second row and first
column of the matrix.

In mathematics, a matrix (plural matrices) is a rectangular array[1] (see irregular matrix) of numbers, symbols, or expressions, arranged in rows and columns.[2][3] For example, the dimension of the matrix below is 2 × 3 (read "two by three"), because there are two rows and three columns:

$$\begin{bmatrix} 1 & 9 & -13 \\ 20 & 5 & -6 \end{bmatrix}$$
Provided that they have the same size (each matrix has the same number of rows and the same
number of columns as the other), two matrices can be added or subtracted element by element
(see conformable matrix). The rule for matrix multiplication, however, is that two matrices can be
multiplied only when the number of columns in the first equals the number of rows in the
second (i.e., the inner dimensions are the same, n for an (m×n)-matrix times an (n×p)-matrix,
resulting in an (m×p)-matrix). There is no product the other way round, a first hint that matrix
multiplication is not commutative. Any matrix can be multiplied element-wise by a scalar from its
associated field.
The individual items in an m×n matrix A, often denoted by ai,j, where i and j usually vary from 1
to m and n, respectively, are called its elements or entries.[4] For conveniently expressing an
element of the results of matrix operations the indices of the element are often attached to the
parenthesized or bracketed matrix expression; e.g., (AB)i,j refers to an element of a matrix
product. In the context of abstract index notation this ambiguously refers also to the whole matrix
product.
A major application of matrices is to represent linear transformations, that is, generalizations
of linear functions such as f(x) = 4x. For example, the rotation of vectors in three-
dimensional space is a linear transformation, which can be represented by a rotation matrix R:
if v is a column vector (a matrix with only one column) describing the position of a point in space,
the product Rv is a column vector describing the position of that point after a rotation. The
product of two transformation matrices is a matrix that represents the composition of
two transformations. Another application of matrices is in the solution of systems of linear
equations. If the matrix is square, it is possible to deduce some of its properties by computing
its determinant. For example, a square matrix has an inverse if and only if its determinant is
not zero. Insight into the geometry of a linear transformation is obtainable (along with other
information) from the matrix's eigenvalues and eigenvectors.
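A minimal numerical sketch of the rotation example, here in Python with NumPy (an illustrative choice; the angle and vector are arbitrary example values):

```python
import numpy as np

# Rotation matrix R for a rotation about the z-axis by angle theta.
theta = np.pi / 2  # 90 degrees, an illustrative value
R = np.array([
    [np.cos(theta), -np.sin(theta), 0],
    [np.sin(theta),  np.cos(theta), 0],
    [0,              0,             1],
])

v = np.array([1.0, 0.0, 0.0])   # column vector: position of a point
print(R @ v)                    # position after rotation: [0., 1., 0.]

# Composition: applying R twice equals multiplying the matrices first.
print(np.allclose(R @ (R @ v), (R @ R) @ v))  # True
```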
Applications of matrices are found in most scientific fields. In every branch of physics,
including classical mechanics, optics, electromagnetism, quantum mechanics, and quantum
electrodynamics, they are used to study physical phenomena, such as the motion of rigid
bodies. In computer graphics, they are used to manipulate 3D models and project them onto
a 2-dimensional screen. In probability theory and statistics, stochastic matrices are used to
describe sets of probabilities; for instance, they are used within the PageRank algorithm that
ranks the pages in a Google search.[5] Matrix calculus generalizes classical analytical notions
such as derivatives and exponentials to higher dimensions. Matrices are used in economics to
describe systems of economic relationships.
A major branch of numerical analysis is devoted to the development of efficient algorithms for
matrix computations, a subject that is centuries old and is today an expanding area of
research. Matrix decomposition methods simplify computations, both theoretically and
practically. Algorithms that are tailored to particular matrix structures, such as sparse
matrices and near-diagonal matrices, expedite computations in finite element method and other
computations. Infinite matrices occur in planetary theory and in atomic theory. A simple example
of an infinite matrix is the matrix representing the derivative operator, which acts on the Taylor
series of a function.

Contents

 1 Definition
o 1.1 Size
 2 Notation
 3 Basic operations
o 3.1 Addition, scalar multiplication, and transposition
o 3.2 Matrix multiplication
o 3.3 Row operations
o 3.4 Submatrix
 4 Linear equations
 5 Linear transformations
 6 Square matrix
o 6.1 Main types
 6.1.1 Diagonal and triangular matrix
 6.1.2 Identity matrix
 6.1.3 Symmetric or skew-symmetric matrix
 6.1.4 Invertible matrix and its inverse
 6.1.5 Definite matrix
 6.1.6 Orthogonal matrix
o 6.2 Main operations
 6.2.1 Trace
 6.2.2 Determinant
 6.2.3 Eigenvalues and eigenvectors
 7 Computational aspects
 8 Decomposition
 9 Abstract algebraic aspects and generalizations
o 9.1 Matrices with more general entries
o 9.2 Relationship to linear maps
o 9.3 Matrix groups
o 9.4 Infinite matrices
o 9.5 Empty matrices
 10 Applications
o 10.1 Graph theory
o 10.2 Analysis and geometry
o 10.3 Probability theory and statistics
o 10.4 Symmetries and transformations in physics
o 10.5 Linear combinations of quantum states
o 10.6 Normal modes
o 10.7 Geometrical optics
o 10.8 Electronics
 11 History
o 11.1 Other historical usages of the word "matrix" in mathematics
 12 See also
 13 Notes
 14 References
o 14.1 Physics references
o 14.2 Historical references
 15 External links

Definition[edit]
A matrix is a rectangular array of numbers or other mathematical objects for which operations such as addition and multiplication are defined.[6] Most commonly, a matrix over a field F is a rectangular array of scalars, each of which is a member of F.[7][8] Most of this article focuses on real and complex matrices, that is, matrices whose elements are real numbers or complex numbers, respectively. More general types of entries are discussed below. For instance, this is a real matrix:

$$\mathbf{A} = \begin{bmatrix} -1.3 & 0.6 \\ 20.4 & 5.5 \\ 9.7 & -6.2 \end{bmatrix}$$

The numbers, symbols, or expressions in the matrix are called its entries or its elements.
The horizontal and vertical lines of entries in a matrix are called rows and columns,
respectively.
Size[edit]
The size of a matrix is defined by the number of rows and columns that it contains. A matrix
with m rows and n columns is called an m × n matrix or m-by-n matrix, while m and n are
called its dimensions. For example, the matrix A above is a 3 × 2 matrix.
Matrices with a single row are called row vectors, and those with a single column are
called column vectors. A matrix with the same number of rows and columns is called
a square matrix. A matrix with an infinite number of rows or columns (or both) is called
an infinite matrix. In some contexts, such as computer algebra programs, it is useful to
consider a matrix with no rows or no columns, called an empty matrix.

Name           Size   Description
Row vector     1 × n  A matrix with one row, sometimes used to represent a vector
Column vector  n × 1  A matrix with one column, sometimes used to represent a vector
Square matrix  n × n  A matrix with the same number of rows and columns, sometimes used to represent a linear transformation from a vector space to itself, such as reflection, rotation, or shearing.

Notation[edit]
Matrices are commonly written in box brackets or parentheses:

$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$

The specifics of symbolic matrix notation vary widely, with some prevailing trends. Matrices are usually symbolized using upper-case letters (such as A in the examples above), while the corresponding lower-case letters, with two subscript indices (for example, a11 or a1,1), represent the entries. In addition to using upper-case letters to symbolize matrices, many authors use a special typographical style, commonly boldface upright (non-italic), to further distinguish matrices from other mathematical objects. An alternative notation involves the use of a double-underline with the variable name, with or without boldface style (for example, $\underline{\underline{A}}$).


The entry in the i-th row and j-th column of a matrix A is sometimes referred to as the i,j, (i,j), or (i,j)th entry of the matrix, and most commonly denoted as ai,j or aij. Alternative notations for that entry are A[i,j] or Ai,j. For example, the (1,3) entry of the following matrix A is 5 (also denoted a13, a1,3, A[1,3] or A1,3):

$$\mathbf{A} = \begin{bmatrix} 4 & -7 & 5 & 0 \\ -2 & 0 & 11 & 8 \\ 19 & 1 & -3 & 12 \end{bmatrix}$$
Sometimes, the entries of a matrix can be defined by a formula such as ai,j = f(i, j). For example, each of the entries of the following matrix A is determined by aij = i − j:

$$\mathbf{A} = \begin{bmatrix} 0 & -1 & -2 & -3 \\ 1 & 0 & -1 & -2 \\ 2 & 1 & 0 & -1 \end{bmatrix}$$

In this case, the matrix itself is sometimes defined by that formula, within square brackets or double parentheses. For example, the matrix above is defined as A = [i−j], or A = ((i−j)). If the matrix size is m × n, the above-mentioned formula f(i, j) is valid for any i = 1, ..., m and any j = 1, ..., n. This can be either specified separately, or indicated using m × n as a subscript. For instance, the matrix A above is 3 × 4, and can be defined as A = [i − j] (i = 1, 2, 3; j = 1, ..., 4), or A = [i − j]3×4.
Some programming languages utilize doubly subscripted arrays (or arrays of arrays) to represent an m-by-n matrix. Some programming languages start the numbering of array indexes at zero, in which case the entries of an m-by-n matrix are indexed by 0 ≤ i ≤ m − 1 and 0 ≤ j ≤ n − 1.[9] This article follows the more common convention in mathematical writing where enumeration starts from 1.
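A small Python sketch of the zero-based convention (the matrix values are the aij = i − j example from above):

```python
# Representing a 3-by-4 matrix as a list of rows (an array of arrays).
# Python indexes from zero, so the mathematical entry a_{i,j} (1-based
# i and j, as in this article) is A[i - 1][j - 1] in code.
A = [
    [0, -1, -2, -3],
    [1,  0, -1, -2],
    [2,  1,  0, -1],
]
i, j = 1, 3             # 1-based indices as used in the article
print(A[i - 1][j - 1])  # entry a_{1,3} = 1 - 3 = -2
```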
An asterisk is occasionally used to refer to whole rows or columns in a matrix.
For example, ai,∗ refers to the ith row of A, and a∗,j refers to the jth column of A.
The set of all m-by-n matrices is denoted 𝕄(m, n).

Basic operations[edit]
External video: How to organize, add and multiply matrices, Bill Shillito, TED-Ed[10]

There are a number of basic operations that can be applied to modify matrices, called matrix addition, scalar multiplication, transposition, matrix multiplication, row operations, and taking submatrices.[11]
Addition, scalar multiplication, and transposition [edit]
Main articles: Matrix addition, Scalar multiplication, and Transpose

Addition
The sum A + B of two m-by-n matrices A and B is calculated entrywise:
(A + B)i,j = Ai,j + Bi,j, where 1 ≤ i ≤ m and 1 ≤ j ≤ n.

Scalar multiplication
The product cA of a number c (also called a scalar in the parlance of abstract algebra) and a matrix A is computed by multiplying every entry of A by c:
(cA)i,j = c · Ai,j.
This operation is called scalar multiplication, but its result is not named "scalar product" to avoid confusion, since "scalar product" is sometimes used as a synonym for "inner product".

Transposition
The transpose of an m-by-n matrix A is the n-by-m matrix AT (also denoted Atr or tA) formed by turning rows into columns and vice versa:
(AT)i,j = Aj,i.

Familiar properties of numbers extend to these operations on matrices: for example, addition is commutative, that is, the matrix sum does not depend on the order of the summands: A + B = B + A.[12] The transpose is compatible with addition and scalar multiplication, as expressed by (cA)T = c(AT) and (A + B)T = AT + BT. Finally, (AT)T = A.
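These identities are easy to verify numerically; a NumPy sketch with arbitrary example matrices:

```python
import numpy as np

A = np.array([[1, 3], [1, 0]])
B = np.array([[0, 0], [7, 5]])
c = 2

print(A + B)   # entrywise sum
print(c * A)   # scalar multiplication
print(A.T)     # transpose

# The properties stated above:
print(np.array_equal(A + B, B + A))          # addition is commutative
print(np.array_equal((c * A).T, c * (A.T)))  # (cA)^T = c(A^T)
print(np.array_equal((A + B).T, A.T + B.T))  # (A + B)^T = A^T + B^T
print(np.array_equal(A.T.T, A))              # (A^T)^T = A
```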
Matrix multiplication[edit]
Main article: Matrix multiplication

Schematic depiction of the matrix product AB of two matrices A and B.

Multiplication of two matrices is defined if and only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries are given by the dot product of the corresponding row of A and the corresponding column of B:

$$[\mathbf{AB}]_{i,j} = a_{i,1}b_{1,j} + a_{i,2}b_{2,j} + \cdots + a_{i,n}b_{n,j} = \sum_{r=1}^{n} a_{i,r}b_{r,j},$$

where 1 ≤ i ≤ m and 1 ≤ j ≤ p.[13] For example, the underlined entry 2340 in the product is calculated as (2 × 1000) + (3 × 100) + (4 × 10) = 2340:

$$\begin{bmatrix} \underline{2} & \underline{3} & \underline{4} \\ 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & \underline{1000} \\ 1 & \underline{100} \\ 0 & \underline{10} \end{bmatrix} = \begin{bmatrix} 3 & \underline{2340} \\ 0 & 1000 \end{bmatrix}$$
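The worked example can be checked numerically; a NumPy sketch:

```python
import numpy as np

# Entry (row 1, column 2) of the product is 2*1000 + 3*100 + 4*10 = 2340.
A = np.array([[2, 3, 4],
              [1, 0, 0]])
B = np.array([[0, 1000],
              [1,  100],
              [0,   10]])
print(A @ B)
# [[   3 2340]
#  [   0 1000]]
```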

Matrix multiplication satisfies the rules (AB)C = A(BC) (associativity), and (A + B)C = AC + BC as well as C(A + B) = CA + CB (left and right distributivity), whenever the size of the matrices is such that the various products are defined.[14] The product AB may be defined without BA being defined, namely if A and B are m-by-n and n-by-k matrices, respectively, and m ≠ k. Even if both products are defined, they need not be equal, that is, generally

AB ≠ BA,

that is, matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers, whose product is independent of the order of the factors. An example of two matrices not commuting with each other is

$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 3 \end{bmatrix},$$

whereas

$$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 4 \\ 0 & 0 \end{bmatrix}.$$

Besides the ordinary matrix multiplication just described, there exist other less frequently used operations on matrices that can be considered forms of multiplication, such as the Hadamard product and the Kronecker product.[15] They arise in solving matrix equations such as the Sylvester equation.
Row operations[edit]
Main article: Row operations
There are three types of row operations:

1. row addition, that is, adding a row to another;
2. row multiplication, that is, multiplying all entries of a row by a non-zero constant;
3. row switching, that is, interchanging two rows of a matrix.

These operations are used in a number of ways, including solving linear equations and finding matrix inverses.
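A brief NumPy sketch of the three operations on an arbitrary 2-by-2 example (in-place row manipulation is one common idiom, not the only one):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

A[1] += A[0]           # 1. row addition: add row 0 to row 1   -> [[1, 2], [4, 6]]
A[1] *= 0.5            # 2. row multiplication: scale row 1    -> [[1, 2], [2, 3]]
A[[0, 1]] = A[[1, 0]]  # 3. row switching: swap the two rows   -> [[2, 3], [1, 2]]
print(A)
```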
Submatrix[edit]
A submatrix of a matrix is obtained by deleting any collection of rows and/or columns.[16][17][18] For example, from the following 3-by-4 matrix, we can construct a 2-by-3 submatrix by removing row 3 and column 2:

$$\mathbf{A} = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 3 & 4 \\ 5 & 7 & 8 \end{bmatrix}$$
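The same construction in NumPy (a sketch; note that np.delete takes 0-based row and column indices):

```python
import numpy as np

A = np.arange(1, 13).reshape(3, 4)  # the 3-by-4 matrix with entries 1..12
sub = np.delete(np.delete(A, 2, axis=0), 1, axis=1)  # drop row 3, column 2
print(sub)
# [[1 3 4]
#  [5 7 8]]
```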

The minors and cofactors of a matrix are found by computing the determinant of certain submatrices.[18][19]
A principal submatrix is a square submatrix obtained
by removing certain rows and columns. The definition
varies from author to author. According to some
authors, a principal submatrix is a submatrix in which
the set of row indices that remain is the same as the
set of column indices that remain.[20][21] Other authors
define a principal submatrix as one in which the
first k rows and columns, for some number k, are the
ones that remain;[22] this type of submatrix has also
been called a leading principal submatrix.[23]

Linear equations[edit]
Main articles: Linear equation and System of linear equations
Matrices can be used to compactly write and work with multiple linear equations, that is, systems of linear equations. For example, if A is an m-by-n matrix, x designates a column vector (that is, n×1-matrix) of n variables x1, x2, ..., xn, and b is an m×1-column vector, then the matrix equation

Ax = b

is equivalent to the system of linear equations[24]

a1,1x1 + a1,2x2 + ⋯ + a1,nxn = b1
  ⋮
am,1x1 + am,2x2 + ⋯ + am,nxn = bm

Using matrices, this can be solved more compactly than would be possible by writing out all the equations separately. If n = m and the equations are independent, this can be done by writing

x = A−1b,

where A−1 is the inverse matrix of A. If A has no inverse, solutions, if any, can be found using its generalized inverse.
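A numerical sketch with NumPy (the system 2x1 + x2 = 3, x1 + 3x2 = 5 is an arbitrary example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)      # preferred over forming inv(A) @ b explicitly
print(x)                       # [0.8 1.4]
print(np.allclose(A @ x, b))   # True

# For a singular or non-square A, a generalized inverse such as the
# Moore-Penrose pseudoinverse gives least-squares solutions:
x_ls = np.linalg.pinv(A) @ b
```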

Linear transformations[edit]
Main articles: Linear transformation and Transformation matrix

The vectors represented by a 2-by-2 matrix correspond to the sides of a unit square transformed into a parallelogram.

Matrices and matrix multiplication reveal their essential features when related to linear transformations, also known as linear maps. A real m-by-n matrix A gives rise to a linear transformation Rn → Rm mapping each vector x in Rn to the (matrix) product Ax, which is a vector in Rm. Conversely, each linear transformation f: Rn → Rm arises from a unique m-by-n matrix A: explicitly, the (i, j)-entry of A is the ith coordinate of f(ej), where ej = (0, ..., 0, 1, 0, ..., 0) is the unit vector with 1 in the jth position and 0 elsewhere. The matrix A is said to represent the linear map f, and A is called the transformation matrix of f.
For example, the 2×2 matrix

$$\mathbf{A} = \begin{bmatrix} a & c \\ b & d \end{bmatrix}$$

can be viewed as the transform of the unit square into a parallelogram with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d). The parallelogram pictured at the right is obtained by multiplying A with each of the column vectors $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$, $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$, $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$, and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ in turn. These vectors define the vertices of the unit square.
The following table shows a number
of 2-by-2 matrices with the associated
linear maps of R2. The blue original is
mapped to the green grid and shapes.
The origin (0,0) is marked with a black
point.

[Illustrations: horizontal shear with m = 1.25; reflection through the vertical axis; squeeze mapping with r = 3/2; scaling by a factor of 3/2; rotation by π/6 = 30°]
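The maps named above are easy to write down as 2-by-2 matrices; a NumPy sketch applying each to the corners of the unit square:

```python
import numpy as np

# Corners of the unit square as column vectors.
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]])

shear      = np.array([[1, 1.25], [0, 1]])       # horizontal shear, m = 1.25
reflection = np.array([[-1, 0], [0, 1]])         # through the vertical axis
squeeze    = np.array([[1.5, 0], [0, 1 / 1.5]])  # squeeze mapping, r = 3/2
scaling    = np.array([[1.5, 0], [0, 1.5]])      # scaling by a factor of 3/2
t = np.pi / 6                                    # rotation by 30 degrees
rotation   = np.array([[np.cos(t), -np.sin(t)],
                       [np.sin(t),  np.cos(t)]])

for M in (shear, reflection, squeeze, scaling, rotation):
    print(M @ square)  # images of the square's vertices under the map
```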
Under the 1-to-1
correspondence between matrices
and linear maps, matrix multiplication
corresponds to composition of
maps:[25] if a k-by-
m matrix B represents another linear
map g: Rm → Rk, then the
composition g ∘ f is represented
by BA since
(g ∘ f)(x) = g(f(x)) = g(Ax) = B(Ax) = (BA)x.
The last equality follows from the
above-mentioned associativity of
matrix multiplication.
The rank of a matrix A is the
maximum number of linearly
independent row vectors of the
matrix, which is the same as the
maximum number of linearly
independent column
vectors.[26] Equivalently it is
the dimension of the image of the
linear map represented
by A.[27] The rank–nullity
theorem states that the dimension
of the kernel of a matrix plus the
rank equals the number of
columns of the matrix.[28]
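A short NumPy illustration of rank and the rank–nullity theorem (the matrix is an arbitrary example whose third row is the sum of the first two, so its rank is 2):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [5, 7, 9]])  # row 3 = row 1 + row 2

rank = np.linalg.matrix_rank(A)
print(rank)               # 2
# Rank-nullity: dim(kernel) + rank = number of columns.
print(A.shape[1] - rank)  # dimension of the kernel: 1
```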

Square matrix[edit]
Main article: Square matrix
A square matrix is a matrix with
the same number of rows and
columns. An n-by-n matrix is
known as a square matrix of
order n. Any two square matrices
of the same order can be added
and multiplied. The entries aii form
the main diagonal of a square
matrix. They lie on the imaginary
line that runs from the top left
corner to the bottom right corner
of the matrix.
Main types[edit]

Name                     Example with n = 3
Diagonal matrix          $\begin{bmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{bmatrix}$
Lower triangular matrix  $\begin{bmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$
Upper triangular matrix  $\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix}$

Diagonal and triangular matrix[edit]
If all entries of A below the main diagonal are zero, A is called an upper triangular matrix. Similarly, if all entries of A above the main diagonal are zero, A is called a lower triangular matrix. If all entries outside the main diagonal are zero, A is called a diagonal matrix.
Identity matrix[edit]
Main article: Identity matrix
The identity matrix In of size n is the n-by-n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, for example,

$$I_1 = \begin{bmatrix} 1 \end{bmatrix},\quad I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},\quad \ldots,\quad I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$$

It is a square matrix of order n, and also a special kind of diagonal matrix. It is called an identity matrix because multiplication with it leaves a matrix unchanged:

AIn = ImA = A for any m-by-n matrix A.

A nonzero scalar multiple of an identity matrix is called a scalar matrix. If the matrix entries come from a field, the scalar matrices form a group, under matrix multiplication, that is isomorphic to the multiplicative group of nonzero elements of the field.
Symmetric or skew-symmetric matrix[edit]
A square matrix A that is equal to its transpose, that is, A = AT, is a symmetric matrix. If instead A is equal to the negative of its transpose, that is, A = −AT, then A is a skew-symmetric matrix. For complex matrices, symmetry is often replaced by the concept of Hermitian matrices, which satisfy A∗ = A, where the star or asterisk denotes the conjugate transpose of the matrix, that is, the transpose of the complex conjugate of A.
By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; that is, every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real.[29] This theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns; see below.
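A numerical sketch of the spectral theorem for a small arbitrary symmetric matrix, using NumPy's eigh (which exploits symmetry and returns real eigenvalues with orthonormal eigenvectors):

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # real symmetric: S == S.T

eigenvalues, eigenvectors = np.linalg.eigh(S)
print(eigenvalues)          # [1. 3.] -- all real
# The eigenvectors form an orthonormal basis: V V^T = I.
print(np.allclose(eigenvectors @ eigenvectors.T, np.eye(2)))  # True
```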
Invertible matrix and its inverse[edit]
A square matrix A is called invertible or non-singular if there exists a matrix B such that

AB = BA = In,[30][31]

where In is the n×n identity matrix with 1s on the main diagonal and 0s elsewhere. If B exists, it is unique and is called the inverse matrix of A, denoted A−1.
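A NumPy sketch of this definition on an arbitrary invertible example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
B = np.linalg.inv(A)           # the inverse A^{-1}

I2 = np.eye(2)                 # the 2x2 identity matrix
print(np.allclose(A @ B, I2))  # AB = I_n: True
print(np.allclose(B @ A, I2))  # BA = I_n: True
```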
Definite matrix[edit]

Positive definite matrix: Q(x, y) = 1/4 x² + y²; the points with Q(x, y) = 1 form an ellipse.
Indefinite matrix: Q(x, y) = 1/4 x² − 1/4 y²; the points with Q(x, y) = 1 form a hyperbola.

A symmetric n×n-matrix A is called positive-definite if the associated quadratic form

f(x) = xTAx

has a positive value for every nonzero vector x in Rn. If f(x) only yields negative values then A is negative-definite; if f does produce both negative and positive values then A is indefinite.[32] If the quadratic form f yields only non-negative values (positive or zero), the symmetric matrix is called positive-semidefinite (or if only non-positive values, then negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.
A symmetric matrix is positive-definite if and only if all its eigenvalues are positive, that is, the matrix is positive-semidefinite and it is invertible.[33] The table at the right shows two possibilities for 2-by-2 matrices.
Allowing as input two different vectors instead yields the bilinear form associated to A:

BA(x, y) = xTAy.[34]
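The eigenvalue criterion suggests a simple classifier; a NumPy sketch (the two test matrices are the diagonal matrices behind the quadratic forms Q above):

```python
import numpy as np

def classify(S):
    """Coarsely classify a symmetric matrix by the signs of its eigenvalues."""
    ev = np.linalg.eigvalsh(S)  # real eigenvalues of a symmetric matrix
    if np.all(ev > 0):
        return "positive-definite"
    if np.all(ev >= 0):
        return "positive-semidefinite"
    if np.all(ev < 0):
        return "negative-definite"
    if np.all(ev <= 0):
        return "negative-semidefinite"
    return "indefinite"

print(classify(np.array([[0.25, 0], [0, 1.0]])))    # positive-definite
print(classify(np.array([[0.25, 0], [0, -0.25]])))  # indefinite
```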
Orthogonal matrix[edit]
Main article: Orthogonal matrix
An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (that is, orthonormal vectors). Equivalently, a matrix A is orthogonal if its transpose is equal to its inverse:

AT = A−1,

which entails

ATA = AAT = In,

where In is the identity matrix of size n.
An orthogonal matrix A is necessarily invertible (with inverse A−1 = AT), unitary (A−1 = A∗), and normal (A∗A = AA∗). The determinant of any orthogonal matrix is either +1 or −1. A special orthogonal matrix is an orthogonal matrix with determinant +1. As a linear transformation, every orthogonal matrix with determinant +1 is a pure rotation without reflection, i.e., the transformation preserves the orientation of the transformed structure, while every orthogonal matrix with determinant −1 reverses the orientation, i.e., is a composition of a pure reflection and a (possibly null) rotation. The identity matrices have determinant 1, and are pure rotations by an angle zero.
The complex analogue of an orthogonal matrix is a unitary matrix.
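A NumPy sketch checking these properties on two arbitrary examples, a rotation (determinant +1) and a reflection (determinant −1):

```python
import numpy as np

t = np.pi / 3
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])   # a rotation: orthogonal, det +1

print(np.allclose(Q.T @ Q, np.eye(2)))    # Q^T Q = I_n: True
print(np.allclose(Q.T, np.linalg.inv(Q))) # transpose equals inverse: True
print(np.linalg.det(Q))                   # ~1.0 (a special orthogonal matrix)

R = np.array([[-1.0, 0.0], [0.0, 1.0]])   # a reflection: orthogonal, det -1
print(np.linalg.det(R))                   # -1.0
```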
Main operations[edit]
Trace[edit]
The trace, tr(A), of a square matrix A is the sum of its diagonal entries. While matrix multiplication is not commutative as mentioned above, the trace of the product of two matrices is independent of the order of the factors:

tr(AB) = tr(BA).

This is immediate from the definition of matrix multiplication (with A an m-by-n matrix and B an n-by-m matrix):

$$\operatorname{tr}(\mathbf{AB}) = \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij} b_{ji} = \operatorname{tr}(\mathbf{BA}).$$

It follows that the trace of the product of more than two matrices is independent of cyclic permutations of the matrices; however, this does not in general apply for arbitrary permutations (for example, tr(ABC) ≠ tr(BAC), in general). Also, the trace of a matrix is equal to that of its transpose, that is,

tr(A) = tr(AT).
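A quick numerical check of these trace identities, in NumPy (random matrices as arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

print(np.isclose(np.trace(A @ B), np.trace(B @ A)))          # True
# Cyclic permutations leave the trace unchanged...
print(np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B)))  # True
# ...but arbitrary permutations generally do not:
print(np.isclose(np.trace(A @ B @ C), np.trace(B @ A @ C)))  # generally False
print(np.isclose(np.trace(A), np.trace(A.T)))                # True
```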
Determinant[edit]
Main article: Determinant

[Illustration: a linear transformation on R2 given by the indicated matrix. The determinant of this matrix is −1, as the area of the green parallelogram at the right is 1, but the map reverses the orientation, since it turns the counterclockwise orientation of the vectors to a clockwise one.]

The determinant det(A) or |A| of a square matrix A is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in R2) or volume (in R3) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.
The determinant of 2-by-2 matrices is given by

$$\det\begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc.$$

The determinant of 3-by-3 matrices involves 6 terms (rule of Sarrus). The more lengthy Leibniz formula generalises these two formulae to all dimensions.[35]
The determinant of a product of square matrices equals the product of their determinants:

det(AB) = det(A) · det(B).[36]
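A NumPy sketch of the 2-by-2 formula and the product rule, on arbitrary example matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# 2-by-2 formula: det [[a, b], [c, d]] = ad - bc.
print(np.linalg.det(A))  # 1*4 - 2*3 = -2.0
# Multiplicativity: det(AB) = det(A) * det(B).
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))  # True
```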