Linear Algebra Problem Solver (REA)

Ebook, 1,647 pages (13 hours)

About this ebook

The Problem Solvers are an exceptional series of books that are thorough, unusually well-organized, and structured in such a way that they can be used with any text. No other series of study and solution guides has come close to the Problem Solvers in usefulness, quality, and effectiveness. Educators consider the Problem Solvers the most effective series of study aids on the market. Students regard them as most helpful for their school work and studies. With these books, students do not merely memorize the subject matter, they really get to understand it. Each Problem Solver is over 1,000 pages, yet each saves hours of time in studying and finding solutions to problems. These solutions are worked out in step-by-step detail, thoroughly and clearly. Each book is fully indexed for locating specific problems rapidly. This volume serves linear algebra courses, as well as courses in computers, physics, engineering, and the sciences that use linear algebra, and concentrates on solutions to applied problems in economics, mechanics, electricity, chemistry, geometry, business, probability, graph theory, and linear programming.
Language: English
Release date: Jan 1, 2013
ISBN: 9780738668451

    Book preview

    Linear Algebra Problem Solver (REA) - The Editors of REA


    PROBLEM SOLVERS®

    Linear Algebra

    Staff of Research & Education Association

    Research & Education Association

    61 Ethel Road West

    Piscataway, New Jersey

    08854

    E-mail: info@rea.com

    THE LINEAR ALGEBRA

    PROBLEM SOLVER®

    Copyright © 2007, 1999, 1996, 1980 by Research & Education Association, Inc. All rights reserved. No part of this book may be reproduced in any form without permission of the publisher.

    Printed in the United States of America

    Library of Congress Control Number 2006928793

    International Standard Book Number 0-87891-518-4

    J06

    Let REA’s Problem Solvers® work for you

    REA’s Problem Solvers are for anyone—from student to seasoned professional—who seeks a thorough, practical resource. Each Problem Solver offers hundreds of problems and clear step-by-step solutions not found in any other publication.

    Perfect for self-paced study or teacher-directed instruction, from elementary to advanced academic levels, the Problem Solvers can guide you toward mastery of your subject.

    Whether you are preparing for a test, are seeking to solve a specific problem, or simply need an authoritative resource that will pick up where your textbook left off, the Problem Solvers are your best and most trustworthy solution.

    Since the problems and solutions in each Problem Solver increase in complexity and subject depth, these references are found on the shelves of anyone who requires help, wants to raise the academic bar, needs to verify findings, or seeks a challenge.

    For many, Problem Solvers are homework helpers. For others, they’re great research partners. What will Problem Solvers do for you?

    ■    Save countless hours of frustration groping for answers

    ■    Provide a broad range of material

    ■    Offer problems in order of capability and difficulty

    ■    Simplify otherwise complex concepts

    ■    Allow for quick lookup of problem types in the index

    ■    Be a valuable reference for as long as you are learning

    Each Problem Solver book was created to be a reference for life, spanning a subject’s entire breadth with solutions that will be invaluable as you climb the ladder of success in your education or career.

    —Staff of Research & Education Association

    How To Use This Book

    The genius of the Problem Solvers lies in their simplicity. The problems and solutions are presented in a straightforward manner, the book is organized so that its subject coverage lines up easily with your coursework, and the writing is clear and instructive.

    Each chapter opens with an explanation of principles, problem-solving techniques, and strategies to help you master entire groups of problems for each topic.

    The chapters also present progressively more difficult material. Starting with the fundamentals, a chapter builds toward more advanced problems and solutions—just the way one learns a subject in the classroom. The range of problems takes into account critical nuances to help you master the material in a systematic fashion.

    Inside, you will find varied methods of presenting problems, as well as different solution methods, all of which take you through a solution in a step-by-step, point-by-point manner.

    There are no shortcuts in Problem Solvers. You are given no-nonsense information that you can trust and grow with, presented in its simplest form for easy reading and quick comprehension.

    As you can see on the facing page, the key features of this book are:

    ■  Clearly labeled chapters

    ■  Solutions presented in a way that will equip you to distinguish key problem types and solve them on your own more efficiently

    ■  Problems numbered and conveniently indexed by the problem number, not by page number

    Get smarter... Let Problem Solvers go to your head!

         Anatomy of a Problem Solver®

    CONTENTS

    1   VECTOR SPACES

    Examples of Vector Spaces

    Vectors in 2-Space and 3-Space

    Linear Combinations

    The Basis of a Vector Space

    Subspaces

    Operations in Vector Spaces

    2   LINEAR TRANSFORMATIONS

    Examples of Linear Transformations

    The Kernel and Range of Linear Transformations

    Invertible Linear Transformations

    Operations on the Set of Linear Transformations

    3   MATRICES OF LINEAR TRANSFORMATIONS

    The Representation of a Linear Transformation by a Matrix

    The Transition Matrix

    Arithmetic of Linear Transformations

    4   BASIC MATRIX ARITHMETIC

    Addition of Matrices

    Matrix Multiplication

    Special Kinds of Matrices

    5   DETERMINANTS

    Permutations

    Evaluation of Determinants by Minors

    Reduction of Determinants to Triangular Form

    The Adjoint of a Matrix

    Miscellaneous Results in Determinants

    Cramer’s Rule

    6   THE INVERSE OF A MATRIX

    The Inverse and Elementary Row Operations

    The Inverse and Systems of Linear Equations

    7   THE RANK OF A MATRIX

    Finding the Rank of a Matrix

    The Rank and Basis for a Vector Space

    Existence of Solutions

    Determinant Rank

    8   SYSTEMS OF LINEAR EQUATIONS

    Echelon Form Matrices

    The Dimension of the Solution Space

    Homogeneous Systems

    Non-Homogeneous Systems

    Existence of Solutions

    Miscellaneous Results for Systems of Linear Equations

    9   POLYNOMIAL ALGEBRA

    Definition of Polynomials

    Characteristic and Minimum Polynomials

    Polynomial Operations

    10   EIGENVALUE PROBLEMS

    Finding the Eigenvalues of a Matrix

    Finding the Eigenvectors of a Matrix

    The Basis for an Eigenspace

    Applications of the Spectral Theorem

    Eigenvalues and Eigenvectors of Linear Operators

    11   DIAGONALIZATION OF A MATRIX

    Change of Basis

    Finding a Diagonal Matrix to Represent a Linear Transformation

    Properties of Diagonal Matrices

    Orthogonal Diagonalization

    12   THE JORDAN CANONICAL FORM

    Invariant Subspaces

    Jordan Block Matrices

    The Jordan Canonical Form of Matrices and Operators

    13   MATRIX FUNCTIONS

    Functions of Matrices

    Calculating the Exponential of a Matrix

    14   INNER PRODUCT SPACES

    Examples of Inner Product Spaces

    Applications of Dot Product

    Norms of Vectors

    The Cross-Product

    Gram-Schmidt Orthogonalization Process

    15   BILINEAR FORMS

    Bilinear Functions

    Quadratic Forms

    Conic Sections and Quadratic Forms

    16   NUMERICAL METHODS OF LINEAR ALGEBRA

    Definitions and Examples

    Gaussian Elimination with Partial Pivoting

    Crout Method

    Choleski Factorization Process

    Iterative Methods for Solving Systems of Linear Equations

    Methods for Finding Eigenvalues

    Use of Flow Charts in Numerical Methods

    17   LINEAR PROGRAMMING AND GAME THEORY

    Examples, Geometric Solutions and the Dual

    The Simplex Algorithm and Sensitivity Analysis

    Integer Programming

    Two Person Zero-Sum Games

    Non-Zero Sum Games and Mixed Strategies

    18   APPLICATIONS TO DIFFERENTIAL EQUATIONS

    The Differential Operator

    Solving Systems by Finding Eigenvalues

    Converting a System to Diagonal Form

    The Fundamental Matrix

    The Nth Order Linear Differential Equation as a First Order Linear System

    Stability of Non-Linear Systems

    19   ECONOMIC AND STATISTICAL APPLICATIONS

    Problems in Production

    Problems in Consumption

    Games and Programming

    Statistical Applications

    20   APPLICATIONS TO PHYSICAL SYSTEMS I (MECHANICS, CHEMICAL)

    Representing Forces by Vectors

    Vibrating Systems

    Stress Problems

    Chemical Reactions and Mixing Problems

    21   APPLICATIONS TO PHYSICAL SYSTEMS II (ELECTRICAL)

    Kirchhoff’s Voltage Law

    Voltage and Current Laws

    Bridge Networks

    RLC Networks

    22   LINEAR FUNCTIONALS

    Examples of Linear Functionals

    The Dual Space and Dual Basis

    Annihilators, Transposes and Adjoints

    Multilinear Functionals

    23   GEOMETRICAL APPLICATIONS

    Equations for Lines and Planes

    Parametric and Affine Forms of Lines and Planes

    Hyperplanes

    Orthogonal Relations

    Geometrical Problems

    Local Linear Mappings

    24   MISCELLANEOUS APPLICATIONS

    Applications of Matrix Arithmetic

    Applications to Biological Science

    Business Applications

    Probability

    Graph Theory

    INDEX

    CHAPTER 1

    VECTOR SPACES

    EXAMPLES OF VECTOR SPACES

    • PROBLEM 1–1

    Define a field and give an example of i) an infinite field and ii) a finite field.

    Solution: A field is a set of elements F = {a, b, ...} together with two binary operations in F, called multiplication and addition (not necessarily the multiplication and addition we are familiar with) for which the following postulates hold:

    1)   Associative Law of Addition: For every triplet (a, b, c) in F, a + (b+c) = (a+b) + c .

    2)   Commutative Law of Addition: For every pair (a, b) in F, a + b = b + a .

    3)   Associative Law of Multiplication: For every triplet (a, b, c) , a(bc) = (ab)c .

    4)   Commutative Law for Multiplication: For every pair (a, b) in F, ab = ba .

    5)   Distributive Law: For every triplet (a, b, c) in F, a(b+c) = ab + ac .

    The set F must contain an identity element for multiplication and for addition.

    6)   There exists an element 0 in F such that for any a in F, a + 0 = a .

    7)   There exists an element in F, call it 1, such that for every a in F, a · 1 = a .

    Each non–zero element must have an inverse with respect to each operation.

    8)   For each element a in F, there is in F an element –a such that a + (–a) = 0 (0 is identity for addition).

    9)   For each element a in F (a ≠ 0), there is an element a⁻¹ in F such that a⁻¹ · a = 1 (1 is the identity for multiplication).

    i)   The set of real numbers under + and × is an infinite field. Notice that since the definition of a field calls for two binary operations in F, we require that F be closed; that is, the result of an operation between two elements of the field is again in the field. For any three real numbers a, b, c, we know ab and a+b are real numbers. We also know from the properties of real numbers that:

    a)   a + (b+c) = (a+b) + c; b) a(bc) = (ab)c; c) a + b = b + a;

    d)   ab = ba; e) a(b+c) = ab + ac.

    We also have identities for multiplication and addition (1 and 0, respectively). We know that each element a has an additive inverse –a and, for a ≠ 0, a multiplicative inverse 1/a. Thus, R(+, ×) is a field.

    ii)   We can construct a finite field from the set of integers J = {0, ± 1, ± 2, ...} .

    Let p denote a fixed prime and define a ≡ b (mod p) to mean that b is the remainder when a is divided by p or, equivalently, that b – a is divisible by p (e.g., 5 ≡ 2 (mod 3) and 12 ≡ 1 (mod 11)). Choosing p = 5, we can construct a field from the integers mod 5. The modulus 5 (as it is called) relation partitions the integers into 5 mutually disjoint classes. That is, every integer is 0, 1, 2, 3 or 4 mod 5. This set of classes (residue classes) is a finite field over addition and multiplication defined as follows: a(mod 5) + b(mod 5) = (a+b)(mod 5), and a(mod 5) · b(mod 5) = ab(mod 5). Since a and b are integers, we have commutativity, associativity and distribution. We also have identities 0 and 1 for addition and multiplication, respectively.

    If we draw an addition and a multiplication table for integers mod 5, we see that each element has an additive and a multiplicative inverse since there is a 0 or a 1 respectively in each row.

    Notice 0 has no multiplicative inverse. A similar finite field can be constructed using any prime integer.
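    The two tables can be generated mechanically. The following Python sketch (my own illustration, not part of the book's solution) builds the mod-5 addition and multiplication tables and confirms the inverse properties just described:

```python
# Sketch (not from the book): tables for the integers mod 5, and the
# inverse properties a table can exhibit directly.
p = 5
add = [[(a + b) % p for b in range(p)] for a in range(p)]
mul = [[(a * b) % p for b in range(p)] for a in range(p)]

# Each row of the addition table contains 0 exactly once: additive inverses exist.
additive_inverses = all(row.count(0) == 1 for row in add)

# Each non-zero row of the multiplication table contains 1 exactly once:
# every non-zero element has a multiplicative inverse.
multiplicative_inverses = all(row.count(1) == 1 for row in mul[1:])

# The row for 0 contains no 1: 0 has no multiplicative inverse.
zero_has_no_inverse = mul[0].count(1) == 0
```

Replacing 5 by any other prime gives the same result, in line with the closing remark above.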

    • PROBLEM 1–2

    Show that the space Rn (comprised of n–tuples of real numbers (x1, ..., xn)) is a vector space over the field R of real numbers. The operations are addition of n–tuples, i.e., (x1, ..., xn) + (y1, y2, ..., yn) = (x1 + y1, x2 + y2, ..., xn + yn), and scalar multiplication, α(x1, x2, ..., xn)

    = (αx1, αx2, ..., αxn) where α ∈ R.

    Solution: Any set that satisfies the axioms for a vector space over a field is known as a vector space. We must show that Rn satisfies the vector space axioms. The axioms fall into two distinct categories:

    A)    the axioms of addition for elements of a set

    B)    the axioms involving multiplication of vectors by elements from the field.

    1)    Closure under addition

    By definition, (x1, x2, ..., xn) + (y1, y2, ..., yn) = (x1 + y1, x2 + y2, ..., xn + yn).

    Now, since x1, y1, x2, y2, ..., xn, yn are real numbers, the sums of x1 + y1, x2 + y2, ..., xn + yn are also real numbers. Therefore, (x1 + y1, x2 + y2, ..., xn + yn ) is also an n–tuple of real numbers; hence, it belongs to Rn .

    2)    Addition is commutative

    The numbers x1, x2, ..., xn are the coordinates of the vector (x1, x2, ..., xn), and y1, y2, ..., yn are the coordinates of the vector (y1, y2, ..., yn).

    Show that (x1, x2, ..., xn) + (y1, ..., yn) = (y1, ..., yn) + (x1, ..., xn). Now, the coordinates x1 + y1, x2 + y2, ..., xn + yn are sums of real numbers. Since real numbers satisfy the commutativity axiom, x1 + y1 = y1 + x1, x2 + y2 = y2 + x2, ..., xn + yn = yn + xn.

    Thus, (x1, x2, ..., xn) + (y1, y2, ..., yn)

          = (x1 + y1, x2 + y2, ..., xn + yn) (by definition)

          = (y1 + x1, y2 + x2, ..., yn + xn) (by commutativity of real numbers)

          = (y1, y2, ..., yn) + (x1, x2, ..., xn) (by definition)

    We have shown that n–tuples of real numbers satisfy the commutativity axiom for a vector space.

    3)    Addition is associative: (a+b) + c = a + (b+c). Let (x1, x2, ..., xn), (y1, y2, ..., yn) and (z1, z2, ..., zn) be three points in Rn.

    Now, ((x1, x2, ..., xn) + (y1, y2, ..., yn)) + (z1, z2, ..., zn)

    = (x1 + y1, x2 + y2, ..., xn + yn) + (z1, z2, ..., zn)

    = ((x1 + y1) + z1, (x2 + y2) + z2, ..., (xn + yn) + zn) . (1)

    The coordinates (xi + yi) + zi (i = 1, ..., n) are real numbers. Since real numbers satisfy the associativity axiom, (xi + yi) + zi = xi + (yi + zi). Hence, (1) may be rewritten as

    (x1 + (y1 + z1), x2 + (y2 + z2), ..., xn + (yn + zn))

    = (x1, x2, ..., xn) + (y1 + z1, y2 + z2, ..., yn + zn)

    = (x1, x2, ..., xn) + ((y1, y2, ..., yn) + (z1, z2, ..., zn)).

    4)    Existence of a zero element. We require an n–tuple 0 such that x + 0 = x for every x in Rn. The n–tuple (0, 0, ..., 0), where 0 is the unique zero of the real number system, satisfies this requirement.

    5)    Existence and uniqueness of an additive inverse. Let (x1, x2, ..., xn) ∈ Rn. An additive inverse of (x1, x2, ..., xn) is an n–tuple (a1, a2, ..., an) such that (x1, x2, ..., xn) + (a1, a2, ..., an) = (0, 0, ..., 0). Since x1, x2, ..., xn belong to the real number system, they have unique additive inverses (–x1), (–x2), ..., (–xn). Consider (–x1, –x2, ..., –xn) ∈ Rn.

    (x1, x2, ..., xn) + (–x1, –x2, ..., –xn)

    = (x1 + (–x1), x2 + (–x2), ..., xn + (–xn))

    = (0, 0, ..., 0).

    We now turn to the axioms involving scalar multiplication.

    6)    Closure under scalar multiplication.

    By definition, α(x1, x2, ..., xn ) = (αx1, αx2, ..., αxn) where the coordinates αxi are real numbers. Hence,

    (αx1, αx2, ..., αxn) ∈ Rn.

    7)    Associativity of scalar multiplication.

    Let α, β be elements of R. We must show that (αβ)(x1, x2, ..., xn) = α(βx1, βx2, ..., βxn). But, since α, β and x1, x2, ..., xn are real numbers, (αβ)(x1, x2, ..., xn) = (αβx1, αβx2, ..., αβxn) = α(βx1, βx2, ..., βxn).

    8)    The first distributive law

    We must show that α(x+y) = αx + αy where x and y are vectors in Rn and α ∈ R.

          α[(x1, x2, ..., xn) + (y1, y2, ..., yn) ]

         = α((x1 + y1), (x2 + y2), ..., (xn + yn ))

         = (α(x1 + y1), α(x2 + y2), ..., α(xn + yn)) (2)

         (by definition of scalar multiplication).

    Since each coordinate is a product of a real number and the sum of two real numbers, α(xi + yi) = αxi + αyi. Hence,

    (2) becomes ((αx1 + αy1), (αx2 + αy2), ..., (αxn + αyn))

          = (αx1, αx2, ..., αxn) + (αy1, αy2, ..., αyn)

          = αx + αy.

    9)    The second distributive law

    We must show that (α + β) x = αx + βx where α , β ∈ R and x is a vector in Rn. Since α + β is also a scalar,

          (α + β)(x1, x2, ..., xn)

          = ((α + β) x1, (α + β) x2, ..., (α + β) xn) (3)

    Since α, β, xi are all real numbers, then (α + β) xi = αxi + βxi. Therefore, (3) becomes

    ((αx1 + βx1), (αx2 + βx2), ..., (αxn + βxn))

    = (αx1, αx2, ..., αxn) + (βx1, βx2, ..., βxn)

    = α(x1, x2, ..., xn) + β(x1, x2, ..., xn)

    10)    The existence of a unit element from the field.

    We require that there exist a scalar in the field R, call it 1, such that 1 (x1, x2, ..., xn) = (x1, x2, ..., xn).

    Now the real number 1 satisfies this requirement.

    Since a set defined over a field that satisfies (1) – (10) is a vector space, the set Rn of n–tuples is a vector space when equipped with the given operations of addition and scalar multiplication.
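    As an informal cross-check (my own addition, not the book's), the axioms just verified can be spot-checked on concrete tuples in Python. A numerical check is not a proof, but it makes the defined operations concrete:

```python
# Sketch (illustrative): spot-check the vector-space axioms for R^n on
# concrete tuples, with addition and scalar multiplication as defined above.
def add(x, y):
    return tuple(xi + yi for xi, yi in zip(x, y))

def scale(a, x):
    return tuple(a * xi for xi in x)

x, y, z = (1.0, -3.0, 2.0), (4.0, 2.0, 1.0), (0.5, 0.0, -1.0)
a, b = 2.0, -3.0
zero = (0.0, 0.0, 0.0)

checks = [
    add(x, y) == add(y, x),                               # commutativity
    add(add(x, y), z) == add(x, add(y, z)),               # associativity
    add(x, zero) == x,                                    # zero element
    add(x, scale(-1.0, x)) == zero,                       # additive inverse
    scale(a * b, x) == scale(a, scale(b, x)),             # (ab)x = a(bx)
    scale(a, add(x, y)) == add(scale(a, x), scale(a, y)), # a(x+y) = ax + ay
    scale(a + b, x) == add(scale(a, x), scale(b, x)),     # (a+b)x = ax + bx
    scale(1.0, x) == x,                                   # unit scalar
]
```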

    • PROBLEM 1–3

    Show that V, the set of all functions from a set S ≠ ∅ to the field R, is a vector space under the following operations: if f and g ∈ V, then (f + g)(s) = f(s) + g(s), and if c is a scalar from R, then (cf)(s) = cf(s).

    Solution: Since the points of V are functions, V is called a function space. Because our field is R, V is the space of real–valued functions defined on S. Also, since addition of real numbers is commutative, f(s) + g(s) = g(s) + f(s). (Here f(s) and g(s) are real numbers, the values of the functions f and g at the point s ∈ S). Since addition in R is associative, f(s) + [g(s) + h(s)] = [f(s) + g(s)] + h(s) for all s ∈ S. Hence, addition of functions is associative. Next, the unique zero vector is the zero function which assigns to each s ∈ S the value 0 ∈ R. For each f in V, let (–f) be the function given by

    (–f) (s) = –f(s) .

    Then f + (–f) = 0 as required.

    Next, we must verify the scalar axioms. Since multiplication in R is associative,

    (cd) f(s) = c(df (s)).

    The two distributive laws are: c(f + g) (s) = cf(s) + cg(s) and (c + d) f(s) = cf(s) + df(s). They hold by the properties of the real numbers. Finally, for 1 ∈ R, 1f(s) = f(s).

    Thus, V is a vector space. If, instead of R we had considered a field F, we would have had to verify the above axioms using the general properties of a field (any set that satisfies the axioms for a field).
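    The pointwise operations translate directly into Python. This sketch (my own, with arbitrary sample functions) mirrors the definitions (f + g)(s) = f(s) + g(s) and (cf)(s) = cf(s):

```python
# Sketch (not from the book): pointwise addition and scalar multiplication
# on real-valued functions, as defined in Problem 1-3.
def f_add(f, g):
    return lambda s: f(s) + g(s)

def f_scale(c, f):
    return lambda s: c * f(s)

f = lambda s: s * s       # sample function f(s) = s^2
g = lambda s: 3 * s       # sample function g(s) = 3s
zero = lambda s: 0        # the zero function, the additive identity

h = f_add(f, g)           # h(s) = s^2 + 3s
k = f_scale(2, f)         # k(s) = 2s^2
```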

    • PROBLEM 1–4

    Does the space V of potential functions of the nth degree form a vector space over the field of real numbers? Addition and scalar multiplication are as defined below:

    i) f(x, y) + g(x, y) = (f + g)(x, y) for f, g ∈ V.

    ii) c[f(x, y)] = cf(x, y).

    Solution: A potential function is any twice differentiable function V(x, y) that satisfies the second–order partial differential equation

    ∂²V/∂x² + ∂²V/∂y² = 0.

    Examples of functions that satisfy this equation are 1) V = k (a constant); 2) V = x; 3) V = y; 4) V = xy.

    Note that, if V1 and V2 are potential functions,

    ∂²(V1 + V2)/∂x² + ∂²(V1 + V2)/∂y² = (∂²V1/∂x² + ∂²V1/∂y²) + (∂²V2/∂x² + ∂²V2/∂y²) = 0 + 0 = 0,

    so their sum is again a potential function. (This gives us closure under addition.)

    If V is a potential function, then so is cV where c is a scalar. To show this, we observe that

    ∂²(cV)/∂x² = c ∂²V/∂x² and ∂²(cV)/∂y² = c ∂²V/∂y².

    Then,

    ∂²(cV)/∂x² + ∂²(cV)/∂y² = c(∂²V/∂x² + ∂²V/∂y²) = c · 0 = 0.

    Finally, the set of potential functions V is a subset of the vector space of all real valued functions G. Since V is closed under addition and scalar multiplication, it is a subspace of G. Thus, it is a vector space in its own right.

    To find the dimension of the space of nth degree potential functions, let

    V(x, y) = a0xⁿ + a1xⁿ⁻¹y + a2xⁿ⁻²y² + ... + an–2x²yⁿ⁻² + an–1xyⁿ⁻¹ + anyⁿ.

    Then ∂²V/∂x² + ∂²V/∂y² is given by

    n(n–1)a0xⁿ⁻² + (n–1)(n–2)a1xⁿ⁻³y + ... + 3(2)an–3xyⁿ⁻³ + 2an–2yⁿ⁻² + 2a2xⁿ⁻² + 3(2)a3xⁿ⁻³y + ... + (n–1)(n–2)an–1xyⁿ⁻³ + n(n–1)anyⁿ⁻² = 0.

    Collecting terms,

    [n(n–1)a0 + 2a2]xⁿ⁻² + [(n–1)(n–2)a1 + 3(2)a3]xⁿ⁻³y + ... + [3(2)an–3 + (n–1)(n–2)an–1]xyⁿ⁻³ + [2an–2 + n(n–1)an]yⁿ⁻² = 0.

    This holds identically only if the coefficient of each monomial xⁿ⁻ʳ⁻²yʳ is equal to zero. Hence,

    n(n–1)a0 + 2a2 = (n–1)(n–2)a1 + 3(2)a3 = ... = 3(2)an–3 + (n–1)(n–2)an–1 = 2an–2 + n(n–1)an = 0.

    In particular,

    a2 = –n(n–1)a0/2      (1)

    a3 = –(n–1)(n–2)a1/(3·2)      (2)

    and the general recurrence relationship is, for 0 ≤ r ≤ n–2,

    ar+2 = –(n–r)(n–r–1)ar/[(r+2)(r+1)].      (3)

    But from (1) and (2), if n and a0 are given, then a2, a4, ..., can be calculated using the recurrence relation (3). Similarly, if a1 is given, then a3, a5, ..., can be calculated using (3).

    Hence, every potential function of degree n can be expressed as a linear combination of two potential functions of degree n, one for a0 and one for a1.

    Thus, potential functions of degree n form a space of dimension 2.
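    The recurrence relation (3) can be exercised numerically. The following Python sketch (my own, not from the book) builds the coefficients of an nth-degree potential polynomial from the free choices a0 and a1 and checks that its Laplacian vanishes at a sample point; a numerical check, not a proof:

```python
# Sketch (illustrative): coefficients of a potential (harmonic) polynomial
# V(x, y) = sum_r a[r] * x^(n-r) * y^r, generated by recurrence (3) from a0, a1.
def potential_coeffs(n, a0, a1):
    a = [0.0] * (n + 1)
    a[0], a[1] = a0, a1
    for r in range(n - 1):  # recurrence (3): a[r+2] from a[r]
        a[r + 2] = -(n - r) * (n - r - 1) * a[r] / ((r + 2) * (r + 1))
    return a

def laplacian(a, n, x, y):
    # d2V/dx2 + d2V/dy2 evaluated term by term for V = sum_r a[r] x^(n-r) y^r
    total = 0.0
    for r, ar in enumerate(a):
        p = n - r                                       # power of x in this term
        if p >= 2:
            total += ar * p * (p - 1) * x ** (p - 2) * y ** r
        if r >= 2:
            total += ar * r * (r - 1) * x ** p * y ** (r - 2)
    return total

a = potential_coeffs(6, a0=1.0, a1=2.0)
value = laplacian(a, 6, x=1.5, y=-0.7)   # should vanish (up to rounding)
```

The even coefficients depend only on a0 and the odd ones only on a1, which is exactly the two-dimensionality claimed above.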

    • PROBLEM 1–5

    Show that the set of semi–magic squares of order 3 × 3 forms a vector space over the field of real numbers, with addition defined as: aij + bij = (a + b)ij for i, j = 1, ..., 3.

    Solution: First, consider an example of a magic square:

    [Figure (1): a 3 × 3 magic square, not reproduced here.]

    We see that in (1),

    i)   every row has the same total, T;

    ii)  every column has the same total, T;

    iii) the two diagonals have the same total, again T.

    A square array of numbers that satisfies i) and ii) but not iii) is called a semi–magic square; for example, a rearrangement of the entries of (1) that keeps the row and column totals but changes a diagonal total is semi–magic. Let M be the set of 3 × 3 semi–magic squares. We should notice that when two semi–magic squares are added, the result is a semi–magic square. For example, suppose m1 and m2 ∈ M have row (and column) sums of T1 and T2 respectively; then each row and column in m1 + m2 will have a sum of T1 + T2. Now, for m1, m2 and m3 ∈ M

    i) m1 + m2 = m2 + m1.

    This follows from the commutativity property of the real numbers.

    ii) (m1 + m2) + m3 = m1 + (m2 + m3)

    Again, since the elements of m1, m2 and m3 are real numbers and real numbers obey the associative law, semi–magic squares are associative with respect to addition.

    iii) Existence of a zero element

    The 3 × 3 array with zeros everywhere is a semi–magic square.

    iv) Existence of an additive inverse.

    If we replace every element in a semi–magic square with its negative, the result is a semi–magic square. Adding, we obtain the zero semi–magic square.

    Next, let α be a scalar from the field of real numbers. Then, αm1 is still a semi–magic square. The two distributive laws hold, and (αβ)m1 = α(βm1) from the properties of the real numbers. Finally, 1(m1) = m1, i.e., multiplication of a vector in M by the unit element from R leaves the vector unchanged.

    Thus, M is a vector space.
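    Closure under addition can be checked mechanically. In this Python sketch the example squares are my own choices, not the book's:

```python
# Sketch (not from the book): the sum of two 3 x 3 semi-magic squares is
# again semi-magic, with row/column total T1 + T2.
def is_semi_magic(sq):
    t = sum(sq[0])
    rows_ok = all(sum(row) == t for row in sq)
    cols_ok = all(sum(row[j] for row in sq) == t for j in range(3))
    return rows_ok and cols_ok

def sq_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(3)] for i in range(3)]

m1 = [[2, 7, 6], [9, 5, 1], [4, 3, 8]]   # rows and columns all total 15
m2 = [[1, 2, 3], [3, 1, 2], [2, 3, 1]]   # rows and columns all total 6
m3 = sq_add(m1, m2)                       # rows and columns should total 21
```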

    • PROBLEM 1–6

    Show that the set of all arithmetical progressions forms a two–dimensional vector space over the field of real numbers. Addition and scalar multiplication are defined as follows: if x = (x1, x2, ..., xn, ...) and y = (y1, y2, ..., yn, ...), then x + y = (x1 + y1, x2 + y2, ..., xn + yn, ...) and αx = α(x1, x2, ..., xn, ...) = (αx1, αx2, ..., αxn, ...).

    Solution: An arithmetic progression of real numbers is a sequence of real numbers such that the difference between any two successive terms is a constant (xn+1 – xn = constant).

    To make things clearer, consider a numerical example. Let x = (2, 5, 8, 11, 14, ..., 3n–1, ...) and y = (6, 11, 16, 21, 26, ..., 5n+1, ...). Then x + y = (8, 16, 24, 32, 40, ..., 8n, ...). Letting α = 3, we have αx = 3(2, 5, 8, 11, 14, ..., 3n–1, ...) = (6, 15, 24, 33, 42, ..., 9n–3, ...).

    It should be clear that the sum of two progressions is a progression, and that a progression multiplied by a scalar is a progression. That is, the set of progressions is closed with respect to addition and scalar multiplication.

    If we view a progression as an infinite tuple of real numbers, we see that the axioms for a vector space are satisfied. Thus,

    i)     x + y = y + x;

    ii)     (x + y) + z = x + (y + z);

    iii)    (x1, x2, ..., xn, ...) + (0, 0, ...) = (x1, x2, ..., xn, ...);

    iv)     (x1, x2, ..., xn, ...) + ((–x1), (–x2), ...) = (0, 0, ...).

    Similarly, the axioms of scalar multiplication also hold. Thus, the set of all arithmetic progressions is a vector space.

    To find the dimension note that any arithmetical progression can be written as the linear combination of the two vectors: e1 = (1, 1, 1, ...) and e2 = (0, 1, 2, 3, ...).

    For example, letting x = (2, 5, 8, 11, 14, ...), we can express x as

    2e1+ 3e2= 2(1, 1, 1, ...) + 3(0, 1, 2, 3, ...).

    In general, an arithmetic progression has the form x = (a, a+b, a+2b, ..., a+nb, ...).

    This can be decomposed into

    (a, a, a, ...) + (0, b, 2b, ..., nb, ...)

    = a(1, 1, 1, ...) + b(0, 1, 2, 3, ..., n, ...)

    = ae1 + be2.

    The two progressions e1 and e2 form a basis for the space. Since the dimension of a finite dimensional vector space is equal to the number of vectors in the basis, the vector space of arithmetical progressions has dimension 2.
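    The decomposition ae1 + be2 can be verified on a finite prefix of the sequences. This Python sketch (my own illustration) uses the progression x = (2, 5, 8, 11, ...) from above:

```python
# Sketch (not from the book): decompose the arithmetic progression
# x_n = a + n*b (n = 0, 1, 2, ...) against the basis e1 = (1, 1, 1, ...),
# e2 = (0, 1, 2, ...), comparing a finite prefix of each sequence.
N = 10                                   # length of the prefix we compare

e1 = [1] * N
e2 = list(range(N))

def combo(a, b):
    return [a * u + b * v for u, v in zip(e1, e2)]

x = [2 + 3 * n for n in range(N)]        # the progression (2, 5, 8, 11, ...)
decomposed = combo(2, 3)                 # should equal x, since x = 2e1 + 3e2
```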

    • PROBLEM 1–7

    Consider the figure shown. [Figure: a network of seven links labeled a through g.]

    It could, for example, represent an electrical network. Define closed loops to be of the form ab, bcf, cde, etc. What are the appropriate steps whereby one can construct a vector space containing closed loops?

    Solution: Examining the figure, we see that the closed loop bdef can be decomposed into the sum of the two loops bcf and cde. This suggests that we define addition of loops using the links a, b, c, ..., g. Let us represent each link by a 7–tuple, since there are 7 segments in the diagram. Thus, a = (1, 0, 0, 0, 0, 0, 0), b = (0, 1, 0, 0, 0, 0, 0), ..., g = (0, 0, 0, 0, 0, 0, 1). Now, when we add links, we would like to obtain combinations of links. This can be done by defining addition of links modulo 2. That is, 0 + 0 = 0, 1 + 1 = 0 and 0 + 1 = 1. Then the only values the components can take on are 0 and 1. Thus, bcf + cde

    = (0, 1, 1, 0, 0, 1, 0) + (0, 0, 1, 1, 1, 0, 0)

    = (0, 1, 0, 1, 1, 1, 0) = bdef.

    If we define the empty loop as the loop containing no links, then, using the indicated operation, we can satisfy the vector space axioms.
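    The mod-2 loop arithmetic is easy to mimic in Python. This sketch (mine, not the book's) encodes loops as 7-tuples over the links a through g and reproduces bcf + cde = bdef:

```python
# Sketch (not from the book): loops in the 7-link network as 7-tuples over
# {0, 1}, added componentwise mod 2.
links = "abcdefg"

def loop(s):
    # e.g. loop("bcf") -> (0, 1, 1, 0, 0, 1, 0)
    return tuple(1 if c in s else 0 for c in links)

def loop_add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

result = loop_add(loop("bcf"), loop("cde"))   # shared link c cancels
```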

    VECTORS IN 2–SPACE & 3–SPACE

    • PROBLEM 1–8

    A force of 25 newtons is being opposed by a force of 20 newtons, the acute angle between their lines of action being 60°. Use a scale diagram to approximate the magnitude and direction of the resultant force.

    Solution: The situation is shown below where AB and BC represent, respectively, the 25–newton force and the 20–newton force. Using vector addition, we see that AC is the resultant.

    The resultant could also be found using Newton’s parallelogram law of forces. By actual measurement, we find that the resultant is the vector AC with an approximate magnitude of 23 newtons. Its direction differs from that of the 25–newton force by roughly 50°.

    This illustrates that vector addition as it is defined corresponds exactly to Newton’s parallelogram law of forces. Force can accurately be represented by vectors.
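    The scale-diagram estimate can be cross-checked numerically (my addition; the book's solution is graphical). Since the 20-newton force opposes the 25-newton force with a 60° acute angle between their lines of action, the angle between the two force vectors is 120°:

```python
import math

# Sketch (a numerical cross-check, not the book's method).
A, B = 25.0, 20.0
theta = math.radians(120.0)          # angle between the force vectors

# Parallelogram law: |R|^2 = A^2 + B^2 + 2AB cos(theta)
R = math.sqrt(A * A + B * B + 2 * A * B * math.cos(theta))

# Angle of the resultant measured from the 25 N force (law of sines)
phi = math.degrees(math.asin(B * math.sin(theta) / R))
```

The computed values, about 22.9 newtons at 49.1°, agree with the scale-diagram readings of roughly 23 newtons and 50°.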

    • PROBLEM 1–9

    The water in a river is flowing from west to east at a speed of 5 km/h. If a boat is being propelled across the river at a (water) speed of 20 km/h in a direction 50° north of east, use a scale diagram to approximate the direction and land speed of the boat.

    Solution: In physics, vectors are used to represent quantities that have both magnitude and direction. Some common vector quantities are velocity, weight, gravitational attraction, etc.

    In the given problem, the velocities of the water and boat are represented by AB and BC, respectively, with AC representing the resultant. The resultant can be found using the parallelogram law of forces. In the diagram, the scale is 1 in. = 10 km/hr., and actual measurement reveals that |AC| ≈ 2.4 in. and ∠CAB ≈ 40°. Thus, the land speed of the boat is approximately 24 km/hr. in a direction that is roughly 40° north of east.
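    As with the previous problem, the graphical estimate can be cross-checked by components (my addition, not the book's method); the computed values come out near 23.5 km/hr and 40.6° north of east, consistent with the rough diagram readings:

```python
import math

# Sketch (a numerical cross-check, not the book's method).
# Water: 5 km/h due east. Boat (water speed): 20 km/h at 50 degrees north of east.
angle = math.radians(50.0)
east = 5.0 + 20.0 * math.cos(angle)              # east components add
north = 20.0 * math.sin(angle)

speed = math.hypot(east, north)                  # land speed of the boat
bearing = math.degrees(math.atan2(north, east))  # degrees north of east
```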

    • PROBLEM 1–10

    If v = (1, –3, 2) and w = (4, 2, 1) , find 2v, v + w and v – w.

    Solution: v and w represent points in three dimensional space, R³. If we imagine v connected to the origin (0, 0, 0) by an arrow with base at (0, 0, 0) and tip at (1, –3, 2), then this arrow may also be thought of as a vector v. (A vector is actually an equivalence class of arrows.)

    The vector 2v represents another vector in space, each of whose components is double that of the corresponding components of v. Thus,

    2v = 2(1, –3, 2)

    = (2, –6, 4).

    The sum of two vectors is clearly another vector. It is found by adding the corresponding coordinates of v and w. Thus,

    v + w = (1, –3, 2) + (4, 2, 1) = (5, –1, 3).

    Similarly, the difference of two vectors is found by subtracting corresponding coordinates. Thus,

    v – w = (1, –3, 2) – (4, 2, 1) = (– 3, –5, 1).
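    The componentwise operations above translate directly into Python (my own sketch, not the book's):

```python
# Sketch (not from the book): the componentwise operations of Problem 1-10.
def vadd(u, w):
    return tuple(a + b for a, b in zip(u, w))

def vsub(u, w):
    return tuple(a - b for a, b in zip(u, w))

def vscale(c, u):
    return tuple(c * a for a in u)

v, w = (1, -3, 2), (4, 2, 1)
two_v = vscale(2, v)
```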

    • PROBLEM 1–11

    Find the norm of the three dimensional vector u = (– 3, 2, 1) and the distance between the points (– 3, 2, 1) and (4, –3, 1).

    Solution: The norm of a vector, written || v ||, is its length. Consider a vector from the origin (0, 0, 0) to the point (v1, v2, v3) in R³.

    Here, two applications of the Pythagorean theorem are necessary to find || v || . From the figure, ORP is a right triangle. Hence, || v ||² = OP² = OR² + RP².     (1)

    But now, the vector OR in the x y plane is itself the hypotenuse of a right triangle whose other sides are OQ and OS. Hence, OR² = OQ² + OS²         (2)

    Substituting (2) into (1),

    || v ||² = OQ² + OS² + RP² = v1² + v2² + v3², and so || v || = √(v1² + v2² + v3²).      (3)

    The given vector is u = (– 3, 2, 1). Using formula (3),

    || u || = √((–3)² + 2² + 1²) = √(9 + 4 + 1) = √14.

    To find the distance between two points P1, P2 in R³ we reason as follows:

    The distance d is the norm of the vector P1P2. But P1P2 = (v1 – u1, v2 – u2, v3 – u3) and, using (3), d = √((v1 – u1)² + (v2 – u2)² + (v3 – u3)²).

    The given points are u = (– 3, 2, 1) and v = (4, –3, 1). Hence,

    d = √((4 – (–3))² + (–3 – 2)² + (1 – 1)²) = √(49 + 25 + 0) = √74.

    Note the following points: i) in n–dimensional space, (n–1) applications of the Pythagorean theorem will yield the norm of a vector v as

    || v || = √(v1² + v2² + ... + vn²),

    where v = (v1, v2, ..., vn).

    ii) || v – u || = || u – v ||, that is, the distance between two points is a symmetric function of its arguments.
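    Formula (3) and the distance formula are one-liners in Python. This sketch (mine, not the book's) reproduces the values √14 and √74 and the symmetry noted in ii):

```python
import math

# Sketch (not from the book): the norm formula (3) and the distance formula,
# checked against Problem 1-11's numbers.
def norm(v):
    return math.sqrt(sum(c * c for c in v))

def dist(u, v):
    return norm(tuple(a - b for a, b in zip(v, u)))

u = (-3.0, 2.0, 1.0)
v = (4.0, -3.0, 1.0)
n = norm(u)        # sqrt(14)
d = dist(u, v)     # sqrt(74)
sym = dist(v, u)   # distance is symmetric: ||v - u|| = ||u - v||
```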

    • PROBLEM 1–12

    Find the components of the vector v = P1 P2 with initial point P1 (2, –1, 4) and terminal point P2 (7, 5, –8).

    Solution: On the real number line a vector with initial point a and terminal point b has length | b – a |.

    If we take R × R (the Cartesian product of the set of real numbers with itself) we obtain R², the Cartesian plane. Here a vector with initial point (x1, y1) and terminal point (x2, y2) has components (x2 – x1, y2 – y1).

    Forming the Cartesian product R² × R we obtain R³, the set of all ordered triples. A vector with initial point (x1, y1, z1) and terminal point (x2, y2, z2) has components (x2 – x1, y2 – y1, z2 – z1).      (1)

    In the given problem, (x1, y1, z1) = (2, –1, 4) and (x2, y2, z2) = (7, 5, –8). Hence, v = P1 P2 has components (7 – 2, 5 – (–1), –8 – 4) = (5, 6, –12).
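    The subtraction in formula (1) is mechanical; a minimal Python sketch (not part of the original solution):

```python
def components(p1, p2):
    """Components of the vector from initial point p1 to terminal point p2."""
    return tuple(b - a for a, b in zip(p1, p2))

print(components((2, -1, 4), (7, 5, -8)))  # (5, 6, -12)
```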

    LINEAR COMBINATIONS

    • PROBLEM 1–13

    Give an example of a pair of non–collinear vectors in R². Then, show that the point (x1, x2) = (8, 7) can be expressed as a linear combination of the non–collinear vectors.

    Solution: Two vectors in R² are non–collinear when neither is a multiple of the other; that is, v1, v2 (v1 and v2 not both the 0 vector) are non–collinear if there is no scalar c such that v2 = cv1. As justification, note that a vector in R² is an arrow (straight line segment) from the origin to some point (x1, x2). If we multiply this vector by a scalar, we only lengthen or shorten the old vector to obtain a new vector.

    Let e1 = (1, 2); e2 = (2, 1). Then e1 and e2 are non–collinear since setting

    (2, 1) = c (1, 2), we obtain

    (2, 1) = (c, 2c)

         2 = c

         1 = 2c.

    There exists no c satisfying these two inconsistent equations.

    Two non–collinear vectors in R² are sufficient to express any vector in R² as a linear combination. Now,

    (8, 7) = c1 (1, 2) + c2 (2, 1)

    (8, 7) = (c1, 2c1) + (2c2, c2)

    (8, 7) = (c1 + 2c2, 2c1 + c2).

    Setting coordinates equal to each other:

         8 = c1 + 2c2    (1)

         7 = 2c1 + c2.    (2)

    From (1), c1 = 8 – 2c2. Then, from (2),

    7    = 2(8–2c2) + c2; c2 = 3. Substituting in (1),

    8    = c1 + 2(3) gives c1 = 2,

    and    (8, 7) = c1 (1, 2) + c2 (2, 1)

        = 2 (1, 2) + 3 (2, 1).
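    The 2 × 2 system for c1 and c2 can also be solved by Cramer's rule; a small Python sketch (a cross-check, not the substitution method used above) confirming the coefficients:

```python
from fractions import Fraction

def solve2(a, b, c, d, e, f):
    """Solve a*c1 + b*c2 = e, c*c1 + d*c2 = f by Cramer's rule.
    Assumes the determinant a*d - b*c is nonzero (non-collinear columns)."""
    det = a * d - b * c
    return Fraction(e * d - b * f, det), Fraction(a * f - e * c, det)

# (8, 7) = c1*(1, 2) + c2*(2, 1) gives c1 + 2c2 = 8 and 2c1 + c2 = 7
c1, c2 = solve2(1, 2, 2, 1, 8, 7)
print(c1, c2)  # 2 3
```

    A zero determinant would correspond exactly to the collinear (inconsistent or redundant) case discussed above.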

    • PROBLEM 1–14

    Write the polynomial v = t² + 4t – 3 over R as a linear combination of the polynomials e1 = t² – 2t + 5, e2 = 2t² – 3t and e3 = t + 3.

    Solution: The polynomial v = t² + 4t – 3 is a polynomial in a three–dimensional vector space. It is possible to see this because the polynomial is of the form α0 + α1t + α2t², and, therefore, it takes 3 elements, α0, α1 and α2, to determine a polynomial in this space V3. Therefore, we need only 3 independent vectors to form a basis. A basis for an n–dimensional vector space consists of n independent vectors. Here the polynomials {e1, e2, e3} form a basis for V3. To show this, set

    c1 (t² – 2t + 5) + c2 (2t² – 3t) + c3 (t + 3) = 0t² + 0t + 0.

    Hence,       (c1 + 2c2)t² = 0t²      (1)

         (–2c1 – 3c2 + c3)t = 0t     (1’)

         (5c1 + 3c3) = 0      (1")

    From (1), c1 = –2c2. Substitution into (1’) and (1") yields

         c2 + c3 = 0     (2)

         –10c2 + 3c3 = 0     (2’)

    Multiplying (2) by 3 and subtracting from (2’) gives c2 = 0. From (1), c1 =0 and, hence, from (1’), c3 = 0. Thus, c1 e1 + c2 e2 + c3 e3 = 0 implies c1 = c2 = c3 =0. This implies e1, e2, e3 are independent which implies they form a basis for V3.

    To find v as a linear combination of e1, e2 and e3, we proceed as follows:

    Set v as a linear combination of the ei using the unknowns x, y and z: v = xe1 + ye2 + ze3.

    t² + 4t – 3 = x(t² – 2t + 5) + y(2t² – 3t) + z(t + 3)

         = xt² – 2xt + 5x + 2yt² – 3yt + zt + 3z

         = (x + 2y)t² + (–2x – 3y + z)t + (5x + 3z)

    Set coefficients of the same powers of t equal to each other:

         x + 2y = 1

         –2x – 3y + z = 4

         5x + 3z = –3

    Note that the system is consistent and so has a solution. Solve for the unknowns to obtain x = –3, y = 2, z = 4. Thus v = –3e1 + 2e2 + 4e3.
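    Representing each polynomial by its coefficient triple (constant, t, t²), the result v = –3e1 + 2e2 + 4e3 can be verified coefficient by coefficient; a sketch (not part of the original solution):

```python
# Polynomials as (constant, t, t^2) coefficient triples.
e1 = (5, -2, 1)   # t^2 - 2t + 5
e2 = (0, -3, 2)   # 2t^2 - 3t
e3 = (3, 1, 0)    # t + 3
v = (-3, 4, 1)    # t^2 + 4t - 3

combo = tuple(-3 * a + 2 * b + 4 * c for a, b, c in zip(e1, e2, e3))
print(combo == v)  # True
```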

    • PROBLEM 1–15

    Write the vector v = (2, –5, 3) in R³ as a linear combination of the vectors e1 = (1, –3, 2), e2 = (2, –4, –1) and e3 =(1, –5, 7).

    Solution: To write v as a linear combination of e1, e2 and e3, we set

    v = c1 e1 + c2 e2 + c3 e3

    where c1, c2 and c3 are scalars chosen from the field K. Thus,

    (2, –5, 3) = c1 (1, –3, 2) + c2 (2, –4, –1) + c3 (1, –5, 7)

    (2, –5, 3) = (c1, –3c1, 2c1) + (2c2, –4c2, –c2) + (c3, –5c3, 7c3)

    (2, –5, 3) = (c1 + 2c2 + c3, –3c1 –4c2 –5c3, 2c1 –c2 + 7c3)

    L1: c1 + 2c2 + c3 = 2

    L2: –3c1 – 4c2 –5c3 = –5     (1)

    L3: 2c1 – c2 + 7c3 = 3

    We may use elementary row operations to reduce this system of equations to echelon form. The system (1) becomes

    Replace L2 by 3L1 + L2 and replace L3 by –2L1 + L3:

    L1: c1 + 2c2 + c3 = 2

    L2′: 2c2 – 2c3 = 1

    L3′: –5c2 + 5c3 = –1

    Now replace L3′ by 5L2′ + 2L3′:

    c1 + 2c2 + c3 = 2

    2c2 – 2c3 = 1

    0 = 3.

    Hence, the system is inconsistent and we conclude that v cannot be written as a combination of e1, e2 and e3.

    Checking the set {e1, e2, e3} for linear independence we see that

    c1 e1 + c2 e2 + c3 e3 = (0, 0, 0)

    c1 (1, –3, 2) + c2 (2, –4, –1) + c3 (1, –5, 7) = (0, 0, 0)

    c1    +    2c2 + c3 = 0

    –3c1    –    4c2 – 5c3 = 0

    2c1    –    c2 + 7c3 = 0

    Reducing    to echelon form

    c1 +    2c2 + c3 = 0      ;      c1 + 2c2 + c3 = 0

    0    +    2c2 – 2c3 =0     c2 – c3 = 0

    0    –    5c2 + 5c3 = 0

    Hence, c2 = c3 and c1 = –3c3 is a general solution of the system, and we see that the vectors e1, e2 and e3 are dependent. Since a basis for R³ must contain three linearly independent vectors, {e1, e2, e3} cannot be a basis. This explains why we were unable to write v as a linear combination of e1, e2 and e3.

    In fact, setting e3 = c1 e1 + c2 e2, we have

    (1, –5, 7) = (c1 + 2c2, –3c1 –4c2, 2c1 – c2)

    c1 + 2c2 = 1

    –3c1 – 4c2 = –5

    2c1 – c2 = 7

    which reduces to

    c1 + 2c2 = 1

    0 + 2c2 = –2

    0 – 5c2 = 5.

    Hence, c2 = –1 and c1 = 3. Thus,

    (1, –5, 7) = 3 (1, –3, 2) – (2, –4, –1)

    and e3 = 3e1 – e2.
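    The dependence relation e3 = 3e1 – e2 found above is a one-line check; a sketch (not part of the original solution):

```python
e1 = (1, -3, 2)
e2 = (2, -4, -1)
e3 = (1, -5, 7)

# verify e3 = 3*e1 - e2 componentwise
print(tuple(3 * a - b for a, b in zip(e1, e2)) == e3)  # True
```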

    • PROBLEM 1–16

    Determine whether the matrix E can be written as a linear combination of the matrices m1, m2, m3 and m4.

    Solution: One way to proceed would be to attempt to find constants c1, c2, c3 and c4 such that E = c1 m1 + c2 m2 + c3 m3 + c4 m4. Another method of solution would be to reason as follows: We are asked whether E can be written as c1 m1 + c2 m2 + c3 m3 + c4 m4, not actually to find c1, c2, c3 and c4. Now E is a 2 × 2 matrix. Hence, it belongs to a vector space of dimension 4. A basis for a four–dimensional vector space must contain four linearly independent vectors. Also, m1, m2, m3, m4 are four 2 × 2 matrices, i.e., they are elements of the four–dimensional vector space of which E is a member. But are they linearly independent?

    A set of vectors {v1, v2, ..., vn} is linearly independent if

    c1 v1 + c2 v2 + ... + cn vn = 0

    implies c1 = c2 = ... = cn = 0.

    Letting the 2 × 2 zero matrix be the zero element of the vector space of 2 × 2 matrices, we set

    c1 m1 + c2 m2 + c3 m3 + c4 m4 = 0.     (1)

    Two matrices {aij} and {bij} are equal when aij = bij (i, j = 1, ..., n). Thus, from (1),

    c1      = 0       (2)

    c1    + 2c3 + c4 = 0      (2′)

    c1 + c2       +c4 = 0      (2″)

         c2 – c3      = 0      (2″′)

    From (2), c1 = 0. From (2″′), c2 = c3. Hence, (2′) and (2″) reduce to

    L1:  2c3 + c4 = 0

    L2:    c3 + c4 = 0

    Now, –2L2 + L1 gives –2(c3 + c4) + 2c3 + c4 = –c4 = 0. Hence, c4 = 0, and then L2 gives c3 = 0. Further, by (2″′), c2 = 0. Thus, c1 = c2 = c3 = c4 = 0, and we conclude that m1, m2, m3 and m4 are linearly independent, i.e., they form a basis for the vector space of 2 × 2 matrices.

    Therefore, we can say that E can indeed be written as a linear combination of the given matrices mi (i = l, ..., 4).

    The reader should note the following points: 1) questions about linear independence frequently lead to finding the solution set of a system of linear homogeneous equations; 2) an m×n matrix is an element of a vector space of dimension mn.
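    Point 2) suggests a mechanical test: flatten each m × n matrix into a vector of length mn and compute the rank of the resulting rows. The sketch below does this with Gaussian elimination over the rationals; since the matrices mi are not reproduced in this extract, the standard basis of 2 × 2 matrices stands in as a hypothetical example:

```python
from fractions import Fraction

def rank(rows):
    """Rank of a list of equal-length vectors, via Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        # find a pivot at or below row r in this column
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        # eliminate the column from every other row
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Hypothetical stand-ins: the standard basis matrices, flattened row-wise.
flat = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(rank(flat))  # 4: independent, hence a basis for the 2 x 2 matrices
```

    Rank 4 out of 4 rows means the homogeneous system has only the trivial solution, which is exactly the independence criterion used in the solution above.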

    • PROBLEM 1–17

    Write the vector v = (1, –2, 5) as a linear combination of the vectors e1 = (1, 1, 1), e2 = (1, 2, 3) and e3 = (2, –1, 1).

    Solution: We first show that the vectors e1, e2, e3 are linearly independent and form a basis for R³. Then we find the required constants.

    A set of vectors {e1, e2, ..., en} in Rn is linearly independent if, for ci ∈ K,

    c1 e1 + c2 e2 + ..., + cn en = 0

    implies ci =0 (i = 1, ..., n). Setting

    c1 e1 + c2 e2 + c3 e3 = (0, 0, 0),

    c1 (1, 1, 1) + c2 (1, 2, 3) + c3 (2, –1, 1) = (0, 0, 0)

    (c1, c1, c1) + (c2, 2c2, 3c2) + (2c3, –c3, c3) = (0, 0, 0).

    Adding coordinates,

    (c1 + c2 + 2c3, c1 + 2c2 – c3, c1 + 3c2 + c3) = (0, 0, 0)

    Two points in R³ are equal only if their respective coordinates are equal. Hence,

    L1 :  c1 + c2 + 2c3 = 0     (1)

    L2 :  c1 + 2c2 –c3    = 0

    L3 :  c1 + 3c2 + c3  = 0

    We can reduce this system to row echelon form as follows: System (1) becomes

    Replace L2 by L1 – L2 and replace L3 by L1 – L3:

    L1 : c1 + c2 + 2c3 = 0

    L2 : 0 – c2 + 3c3 = 0

    L3 : 0 – 2c2 + c3 = 0

    Now replace L3 by (–2) times L2 added to L3. We arrive at

         c1 + c2 + 2c3 = 0

         0 – c2 + 3c3 = 0

         0 + 0 – 5c3 = 0

    Thus, c1 = c2 = c3 = 0 is the only solution and {e1, e2, e3} is linearly independent. Since a set of n linearly independent n–tuples forms a basis for Rn, the set {e1, e2, e3} is a basis for R³. This means that any triple in R³ (and in particular the given v) can be written as a linear combination of e1, e2 and e3.

    Let v = c1 e1 + c2 e2 + c3 e3.

    (1, –2, 5) = c1 (1, 1, 1) + c2 (1, 2, 3) + c3 (2, –1, 1)

    (1, –2, 5) = (c1, c1, c1) + (c2, 2c2, 3c2) + (2c3, –c3, c3)

    (1, –2, 5) = (c1 + c2 + 2c3, c1+ 2c2 – c3, c1 + 3c2 + c3).

    Setting corresponding coordinates equal to each other,

    L1 : c1 + c2 + 2c3 = 1

    L2 : c1 + 2c2 – c3 = –2       (2)

    L3 : c1 + 3c2 + c3 = 5.

    We can apply row operations on the inhomogeneous system (2) to reduce it to row echelon form. System (2) becomes

    L1 : c1 + c2 + 2c3 = 1

    L2 : 0 + c2 – 3c3 = –3

    L3 : 0 + 2c2 – c3 = 4

    (multiply L1 by –1 and add it to L2. Also multiply L1 by –1 and add it to L3).

    c1 + c2 + 2c3 = 1

    0 + c2 – 3c3 = –3

    0 + 0 + 5c3 = 10

    Multiply L2 by –2 and add to L3, the results of which are given above.

    Solving by back–substitution,

    c3 = 2; c2 = 3; c1 = –6.

    Hence, v = –6e1 + 3e2 + 2e3.
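    The back-substitution can be traced mechanically; a sketch (not part of the original solution) verifying both the coefficients and the final combination:

```python
from fractions import Fraction

# Echelon system from the solution:
# c1 + c2 + 2c3 = 1;  c2 - 3c3 = -3;  5c3 = 10
c3 = Fraction(10, 5)
c2 = -3 + 3 * c3
c1 = 1 - c2 - 2 * c3

e1, e2, e3 = (1, 1, 1), (1, 2, 3), (2, -1, 1)
combo = tuple(c1 * a + c2 * b + c3 * c for a, b, c in zip(e1, e2, e3))
print(c1, c2, c3)          # -6 3 2
print(combo == (1, -2, 5)) # True
```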

    • PROBLEM 1–18

    Determine whether or not u and v are linearly dependent if:

    (i)    u =    (3, 4), v = (1, –3)

    (ii)    u =    (2, –3) , v = (6, –9)

    (iii)    u =    (4, 3, –2), v = (2, –6, 7)

    (iv)    u =    (–4, 6, –2), v = (2, –3, 1)

    (v)

    (vi)

    (vii) u = 2 – 5t + 6t² – t³, v = 3 + 2t – 4t² + 5t³

    (viii) u = 1 – 3t + 2t² – 3t³, v = –3 + 9t – 6t² + 9t³

    Solution: Two vectors v1 and v2 are linearly dependent if (1) c1 v1 + c2 v2 = 0 has a solution for c1 and c2 with c1 and c2 not both 0. If, say, c1 ≠ 0, then (1) gives v1 = (–c2/c1)v2. That is, if two vectors are dependent, then one of the vectors is a multiple of the other.

    i)   If (3, 4) and (1, –3) are dependent, then there exists a scalar c1 such that (3, 4) = c1 (1, –3) = (c1, –3c1). (2) In other words,

    c1 = 3

    –3c1 = 4.

    These two equations are inconsistent, i.e., there is no value of c1 such that (2) holds.

    Thus, u, v are linearly independent.

    ii)   Using the same reasoning as in i), u and v are dependent if there exists a c1 such that c1 u = v. So,

    c1 (2, –3) = (6, –9)

    2c1 = 6      (2)

    –3c1 = –9.       (3)

    From (2), c1 = 3 which, when substituted into (3), is true. Thus, v = 3u and the two vectors are linearly dependent.

    iii)   We wish to find c1 such that

    c1 (4, 3, –2) = (2, –6, 7)     (4)

    is true. Setting the corresponding coordinates of the triple of real numbers equal to each other, we obtain

    4c1 = 2

    3c1 = –6       (4′)

    –2c1 = 7

    From the first equation, c1 = 1/2, but the second gives c1 = –2. We conclude that the two vectors are linearly independent since (4) leads to a contradiction.

    iv)   Setting c1 (–4,     6, –2) = (2, –3, 1)

         we have –4c1 =    2

         6c1 =    –3

          –2c1 =    1.

    Each equation gives c1 = –1/2. Hence, v = –(1/2)u, and the two vectors are linearly dependent.

    v)   Here the vectors are 2 × 3 matrices (an m × n matrix is an element of a vector space of dimension mn), but vectors nevertheless. Hence, setting

    Two matrices {aij}, {bij} are equal if and only if aij = bij (i = 1, ..., m; j = 1, ..., n). Hence, (5) holds if and only if

       c1 = 2       3c1 = 6

    –2c1 = –4       0 = 0

     4c1 = 8       –c1 = –2.

    Since all of the above equalities are consistent, c1 = 2 and v = 2u. Thus, v is dependent on u (and vice–versa).

    vi)  Then cu = v implies, comparing corresponding entries,

         c = 6       (1)

         2c = –5, etc.       (2)

    (1) says c = 6, but (2) says c = –5/2. Since there exists no such c, v and u are linearly independent.

    vii)  The set of all polynomials with degree ≤ n is an (n + 1)–dimensional vector space. Here u and v are vectors from V4. Setting

    c1 (2 – 5t + 6t² – t³) = (3 + 2t – 4t² + 5t³),

    2c1 – 5c1t + 6c1t² – c1t³ = 3 + 2t – 4t² + 5t³.

    Two polynomials are equal when coefficients of like powers of t are equal. Thus,

    2c1 =    3

    –5c1 =    2

    6c1 =    –4

    – c1 =    5

    Since c1 cannot satisfy all these equations simultaneously, we conclude that it does not exist. Hence, u and v are not multiples of each other, i.e., they are independent.

    viii) Again, u and v are two vectors in a 4–dimensional polynomial vector space. Let

    c1 (1 – 3t + 2t² – 3t³) = –3 + 9t – 6t² + 9t³.

    c1 – 3c1t + 2c1t² – 3c1t³ = –3 + 9t – 6t² + 9t³.

    Setting the coefficients of like powers of t equal to each other,

    c1 = –3

    –3c1 = 9

    2c1 = –6

    –3c1 = 9.

    c1 = –3 satisfies all of the above equations. Hence, v = –3u and u and v are dependent.
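    For a pair of vectors, dependence amounts to proportionality of components, i.e., all 2 × 2 minors u_i v_j – u_j v_i vanishing. A sketch covering several of the cases above (not the book's method; polynomials are handled by flattening them into coefficient tuples):

```python
def dependent_pair(u, v):
    """True iff one of u, v is a scalar multiple of the other:
    every 2x2 minor u[i]*v[j] - u[j]*v[i] must vanish."""
    return all(u[i] * v[j] == u[j] * v[i]
               for i in range(len(u)) for j in range(i + 1, len(u)))

print(dependent_pair((3, 4), (1, -3)))                 # (i):    False
print(dependent_pair((2, -3), (6, -9)))                # (ii):   True
print(dependent_pair((-4, 6, -2), (2, -3, 1)))         # (iv):   True
print(dependent_pair((1, -3, 2, -3), (-3, 9, -6, 9)))  # (viii): True
```

    The minor criterion avoids division, so it handles a zero component (or a zero vector, which is dependent with anything) without special cases.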

    • PROBLEM 1–19

    Determine whether the vectors x,     y and z are dependent or independent where

    i)   x = (1, 1, –1) , y = (2, –3,     1), z = (8, –7, 1)

    ii)  x = (1, –2, –3), y = (2, 3,     –1), z = (3, 2, 1)

    iii) x = (x1, x2), y = (y1, y2),     z = (z1, z2).

    Solution: A set of vectors {v1, v2, ..., vn} is said to be linearly dependent if there exist ci, i = 1, 2, ..., n, such that

    c1 v1 + c2 v2 + ... + cn vn = 0      (1)

    where at least one of the ci in (1) ≠ 0. In the given problems we may proceed as follows:

    a)   let c1 x + c2 y + c3 z = 0 where c1, c2, c3 are unknown scalars;

    b)   find the equivalent homogeneous system of equations;

    c)   determine whether the system has a non–zero solution. If the system does, then the vectors are linearly dependent; if the only solution is the trivial solution, they are independent.

    i)   Let c1 x + c2 y + c3 z = 0.

    c1 (1, 1, –1) + c2 (2, –3, 1) + c3 (8, –7, 1) = (0, 0, 0),

    (c1, c1, –c1) + (2c2, –3c2, c2) + (8c3, –7c3, c3) = (0, 0, 0)

    or    c1 + 2c2 + 8c3 = 0

         c1 – 3c2 – 7c3 = 0      (2)

         –c1 + c2 + c3 = 0.

    The system (2) may now be reduced to echelon form. Constructing the coefficient matrix from (2) and applying row operations Ti,

    [ 1   2    8 ]      [ 1   2    8 ]      [ 1   2   8 ]      [ 1   2   8 ]
    [ 1  –3   –7 ]  T1  [ 0  –5  –15 ]  T2  [ 0   1   3 ]  T3  [ 0   1   3 ]
    [–1   1    1 ]      [ 0   3    9 ]      [ 0   1   3 ]

    where the Ti are the row operations as follows: T1: replace the third row with the sum of the first and the third rows; also, replace the second row with the second row minus the first row.

    T2: multiply the second row by –1/5 and the third row by 1/3.

    T3 : The third row is the same as the second and so it may be dropped.

    Thus, c1 + 2c2 + 8c3 = 0       (3)

        c 2 + 3c 3 = 0.

    The system (3), equivalent to system (2), has two equations in three unknowns. The second equation in (3) yields c2 = –3c3. Substituting in the first equation, we get c1 + 2(–3c3) + 8c3 = 0, so c1 = –2c3. Thus, all of the solutions of this system are c1 = –2a, c2 = –3a, c3 = a, where a is any scalar. If a = 1, one non–zero solution is c1 = –2, c2 = –3, c3 = 1. Hence, the vectors are dependent.

    ii)  Let c1 x + c2y + c3z = 0;

    (c1, –2c1, –3c1) + (2c2, 3c2, –c2) + (3c3, 2c3, c3) = (0, 0, 0)

    or   c1 + 2c2 + 3c3 = 0

       –2c1 + 3c2 + 2c3 = 0      (4)

       –3c1 – c2 + c3 = 0

    Reducing the coefficient matrix of (4) to echelon form using appropriate row operations,

    where Ti are elementary row operations: T1: replace the second row with two times the first row plus the second row; replace the third row with three times the first row plus the third row.

    T2: replace the third row with the sum of –5 times the second row and 7 times the third row.

    The system c1 + 2c2 + 3c3 = 0

         7c2 + 8c3 = 0

          30c3 = 0

    has only the trivial solution c1 = c2 = c3 = 0. Thus by definition, the vectors are linearly independent.

    iii) Note here that x, y, z are arbitrary vectors in R².

    Let c1 (x1, x2) + c2 (y1, y2) + c3 (z1, z2) = (0, 0).

    or     x1 c1 + y1 c2 + z1 c3 = 0    (5)

          x2 c1 + y2 c2 + z2 c3 = 0.

    Because there are more unknowns (c1, c2, c3) than equations, the system has a non–zero solution.

    Since x, y, z were arbitrary vectors, this shows that any three vectors in R² are linearly dependent. In general, any n + 1 vectors in Rn are linearly dependent.
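    For three vectors in R³ there is also a determinant cross-check (not the elimination method used above): the vectors are dependent exactly when the determinant of the matrix having them as rows vanishes. A sketch applied to parts i) and ii):

```python
def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c (cofactor expansion)."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

print(det3((1, 1, -1), (2, -3, 1), (8, -7, 1)))   # 0: dependent (part i)
print(det3((1, -2, -3), (2, 3, -1), (3, 2, 1)))   # 30: independent (part ii)
```

    The value 30 is exactly the pivot that appeared in the last echelon row of part ii), which is no accident: elimination with unit leading multipliers preserves the determinant up to the recorded row scalings.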

    • PROBLEM 1–20

    Let V be the vector space of 2 × 2 matrices over R. Determine whether the matrices A, B, C ∈ V are dependent where:

    (i)

    (ii)

    Solution: The 2 × 2 zero matrix is the zero element of the vector space of 2 × 2 matrices. Letting c1, c2, c3 be unknown constants, we set c1 A + c2 B + c3 C = 0.

    The matrices A, B, C are dependent if in a solution of the above matrix equation at least one ci, i = 1, 2, 3, is not 0. From the above equation we have

    ,

    .

    Two matrices {aij}, {bij} are equal if and only if aij = bij (i = 1, ..., m i j = 1, ..., n). Hence,

    c1 + c2 + c3 = 0

    c1      + c3 = 0

    c1      = 0

    c1 + c2       = 0

    We see that c1 = c2 = c3 = 0 is the only solution of the above system. The matrices A, B, C are therefore linearly independent.

    ii)  Proceeding as in i),

    so

    ,

    which gives c1    + 3c2 + c3  = 0

          2c1 – c2 – 5c3    = 0      (1)

          3c1 + 2c2 – 4c3 = 0

         c1 + 2c2      = 0

    We can reduce the system to row echelon form by applying elementary row operations. First multiply the first equation by (–2) and add it to the second equation. Replace the second equation with the result. Then replace the third equation with –3 times the first equation plus the third equation. Lastly, replace the fourth equation with the fourth equation minus the first. Thus, (1) becomes

    c1 + 3c2 + c3 = 0

         –7c2 – 7c3 = 0

         –7c2 – 7c3 = 0

         –c2 – c3 = 0

    or c1 + 3c2 + c3 = 0       (2)

         c2 + c3 = 0.

    c1 = 2; c2 = –1 and c3 = 1 satisfy (2) (among other values). We conclude that the matrices A, B, C are linearly dependent.
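    The matrices A, B and C of part ii) are not reproduced in this extract; reading their entries off the coefficient system (1) suggests (as an inference, not taken from the text) A = [[1, 2], [3, 1]], B = [[3, –1], [2, 2]], C = [[1, –5], [–4, 0]]. With these, the relation 2A – B + C = 0 can be checked directly:

```python
# Entries inferred from system (1) above; a hypothetical reconstruction.
A = [[1, 2], [3, 1]]
B = [[3, -1], [2, 2]]
C = [[1, -5], [-4, 0]]

# form 2A - B + C entrywise
combo = [[2 * a - b + c for a, b, c in zip(ra, rb, rc)]
         for ra, rb, rc in zip(A, B, C)]
print(combo)  # [[0, 0], [0, 0]]
```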

    • PROBLEM 1–21

    Determine whether or not the following vectors in R³ are linearly dependent:

    (i) (1, –2, 1), (2, 1, –1) , (7, –4, 1)

    (ii) (1, –3, 7), (2, 0, –6), (3, –1, –1), (2, 4, –5)

    (iii) (1, 2, –3), (1, –3, 2), (2, –1, 5)

    (iv) (2, –3, 7), (0, 0, 0), (3, –1, –4)

    Solution: All of the given sets of vectors are triples of real numbers. A set of n–tuples {v1, ..., vn} is linearly dependent if there exist ci (i = 1, ..., n), not all zero, such that c1 v1 + c2 v2 + ... + cn vn =0.

    If no such ci exist, v1, ..., vn are linearly independent.

    i)    Set a linear combination of the vectors equal to the zero vector using unknown scalars c1, c2 and c3 from the field K over which the vector space is defined.

    c1 (1, –2, 1) + c2 (2, 1, –1) + c3 (7, –4, 1) = (0, 0, 0).

    Setting corresponding coordinates equal to each other, we obtain the following homogeneous system:

     c1 + 2c2 + 7c3 = 0 I

    –2c1 + c2 – 4c3 = 0 II      (1)

         c1 – c2 + c3 = 0 III

    We wish to solve the system (1) for the unknowns c1, c2 and c3. One method is the row reduction method, in which we successively reduce the number of unknowns in each equation by applying elementary row operations. These are: i) interchanging rows, ii) multiplying a row by a constant, iii) adding a constant multiple of one row to another. Any combination of these operations results in a system of equations which has the same solution(s) as the original system. Applying such operations on (1), we get:

    c1 + 2c2 + 7c3 = 0 I

    0 + 5c2 + 10c3 = 0 II      (2)

    c1 – c2 +      c3 = 0 III

    (where we multiplied I of (1) by 2 and added to II of (1) to obtain II of (2)) .

    Continuing, we get

    c1 + 2c2 + 7c3 = 0 (same)

    0 + 5c2 + 10c3 = 0 (same)      (3)

    0 – 3c2 – 6c3 = 0  (I × (–1) + III of (2)).

    We could proceed to eliminate c2 in the third equation, then read off the solution for c3, and then solve for the other unknowns.
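    The extract breaks off here; as a sketch of the remaining arithmetic (assumed, since the continuation is not shown), the third equation of (3) is –3/5 times the second, so it eliminates to a zero row, c3 is free, and a non-zero solution exists:

```python
from fractions import Fraction

c3 = Fraction(1)        # free variable, take c3 = 1
c2 = -2 * c3            # from 5c2 + 10c3 = 0
c1 = -2 * c2 - 7 * c3   # from c1 + 2c2 + 7c3 = 0

x, y, z = (1, -2, 1), (2, 1, -1), (7, -4, 1)
combo = tuple(c1 * a + c2 * b + c3 * c for a, b, c in zip(x, y, z))
print(c1, c2, c3)          # -3 -2 1
print(combo == (0, 0, 0))  # True: a nontrivial relation, so dependent
```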
