
Grassmann Algebra

Exploring extended vector algebra with Mathematica


Incomplete draft Version 0.50 September 2009

John Browne

Quantica Publishing, Melbourne, Australia

2009 9 3

Grassmann Algebra Book.nb


Table of Contents
1 Introduction
1.1 Background
    The mathematical representation of physical entities · The central concept of the Ausdehnungslehre · Comparison with the vector and tensor algebras · Algebraicizing the notion of linear dependence · Grassmann algebra as a geometric calculus
1.2 The Exterior Product
    The anti-symmetry of the exterior product · Exterior products of vectors in a three-dimensional space · Terminology: elements and entities · The grade of an element · Interchanging the order of the factors in an exterior product · A brief summary of the properties of the exterior product
1.3 The Regressive Product
    The regressive product as a dual product to the exterior product · Unions and intersections of spaces · A brief summary of the properties of the regressive product · The Common Factor Axiom · The intersection of two bivectors in a three-dimensional space
1.4 Geometric Interpretations
    Points and vectors · Sums and differences of points · Determining a mass-centre · Lines and planes · The intersection of two lines
1.5 The Complement
    The complement as a correspondence between spaces · The Euclidean complement · The complement of a complement · The Complement Axiom
1.6 The Interior Product
    The definition of the interior product · Inner products and scalar products · Sequential interior products · Orthogonality · Measure and magnitude · Calculating interior products · Expanding interior products · The interior product of a bivector and a vector · The cross product
1.7 Exploring Screw Algebra
    To be completed
1.8 Exploring Mechanics
    To be completed
1.9 Exploring Grassmann Algebras
    To be completed
1.10 Exploring the Generalized Product
    To be completed
1.11 Exploring Hypercomplex Algebras
    To be completed
1.12 Exploring Clifford Algebras
    To be completed
1.13 Exploring Grassmann Matrix Algebras
    To be completed
1.14 The Various Types of Linear Product
    Introduction · Case 1: The product of an element with itself is zero · Case 2: The product of an element with itself is non-zero · Examples
1.15 Terminology
    Terminology
1.16 Summary
    To be completed


2 The Exterior Product


2.1 Introduction
2.2 The Exterior Product
    Basic properties of the exterior product · Declaring scalar and vector symbols in GrassmannAlgebra · Entering exterior products
2.3 Exterior Linear Spaces
    Composing m-elements · Composing elements automatically · Spaces and congruence · The associativity of the exterior product · Transforming exterior products
2.4 Axioms for Exterior Linear Spaces
    Summary of axioms · Grassmann algebras · On the nature of scalar multiplication · Factoring scalars · Grassmann expressions · Calculating the grade of a Grassmann expression
2.5 Bases
    Bases for exterior linear spaces · Declaring a basis in GrassmannAlgebra · Composing bases of exterior linear spaces · Composing palettes of basis elements · Standard ordering · Indexing basis elements of exterior linear spaces
2.6 Cobases
    Definition of a cobasis · The cobasis of unity · Composing palettes of cobasis elements · The cobasis of a cobasis
2.7 Determinants
    Determinants from exterior products · Properties of determinants · The Laplace expansion technique · Calculating determinants
2.8 Cofactors
    Cofactors from exterior products · The Laplace expansion in cofactor form · Exploring the calculation of determinants using minors and cofactors · Transformations of cobases · Exploring transformations of cobases
2.9 Solution of Linear Equations
    Grassmann's approach to solving linear equations · Example solution: 3 equations in 4 unknowns · Example solution: 4 equations in 4 unknowns
2.10 Simplicity
    The concept of simplicity · All (n-1)-elements are simple · Conditions for simplicity of a 2-element in a 4-space · Conditions for simplicity of a 2-element in a 5-space
2.11 Exterior division
    The definition of an exterior quotient · Division by a 1-element · Division by a k-element · Automating the division process
2.12 Multilinear forms
    The span of a simple element · Composing spans · Example: Refactorizations · Multilinear forms · Defining m:k-forms · Composing m:k-forms · Expanding and simplifying m:k-forms · Developing invariant forms · Properties of m:k-forms · The complete span of a simple element
2.13 Unions and intersections
    Union and intersection as a multilinear form · Where the intersection is evident · Where the intersection is not evident · Intersection with a non-simple element · Factorizing simple elements
2.14 Summary

3 The Regressive Product


3.1 Introduction
3.2 Duality
    The notion of duality · Examples: Obtaining the dual of an axiom · Summary: The duality transformation algorithm
3.3 Properties of the Regressive Product
    Axioms for the regressive product · The unit n-element · The inverse of an n-element · Grassmann's notation for the regressive product
3.4 The Duality Principle
    The dual of a dual · The Grassmann Duality Principle · Using the GrassmannAlgebra function Dual
3.5 The Common Factor Axiom
    Motivation · The Common Factor Axiom · Extension of the Common Factor Axiom to general elements · Special cases of the Common Factor Axiom · Dual versions of the Common Factor Axiom · Application of the Common Factor Axiom · When the common factor is not simple
3.6 The Common Factor Theorem
    Development of the Common Factor Theorem · Proof of the Common Factor Theorem · The A and B forms of the Common Factor Theorem · Example: The decomposition of a 1-element · Example: Applying the Common Factor Theorem · Automating the application of the Common Factor Theorem
3.7 The Regressive Product of Simple Elements
    The regressive product of simple elements · The regressive product of (n-1)-elements · Regressive products leading to scalar results · Expressing an element in terms of another basis · Exploration: The cobasis form of the Common Factor Axiom · Exploration: The regressive product of cobasis elements
3.8 Factorization of Simple Elements
    Factorization using the regressive product · Factorizing elements expressed in terms of basis elements · The factorization algorithm · Factorization of (n-1)-elements · Factorizing simple m-elements · Factorizing contingently simple m-elements · Determining if an element is simple
3.9 Product Formulas for Regressive Products
    The Product Formula · Deriving Product Formulas · Deriving Product Formulas automatically · Computing the General Product Formula · Comparing the two forms of the Product Formula · The invariance of the General Product Formula · Alternative forms for the General Product Formula · The Decomposition Formula · Exploration: Dual forms of the General Product Formulas
3.10 Summary


4 Geometric Interpretations
4.1 Introduction
4.2 Geometrically Interpreted 1-elements
    Vectors · Points · Declaring a basis for a bound vector space · Composing vectors and points · Example: Calculation of the centre of mass
4.3 Geometrically Interpreted 2-elements
    Simple geometrically interpreted 2-elements · Bivectors · Bound vectors · Composing bivectors and bound vectors · The sum of two parallel bound vectors · The sum of two non-parallel bound vectors · Sums of bound vectors · Example: Reducing a sum of bound vectors
4.4 Geometrically Interpreted m-Elements
    Types of geometrically interpreted m-elements · The m-vector · The bound m-vector · Bound simple m-vectors expressed by points · Bound simple bivectors · Composing m-vectors and bound m-vectors
4.5 Geometrically Interpreted Spaces
    Vector and point spaces · Coordinate spaces · Geometric dependence · Geometric duality
4.6 m-planes
    m-planes defined by points · m-planes defined by m-vectors · m-planes as exterior quotients · Computing exterior quotients · The m-vector of a bound m-vector
4.7 Line Coordinates
    Lines in a plane · Lines in a 3-plane · Lines in a 4-plane · Lines in an m-plane
4.8 Plane Coordinates
    Planes in a 3-plane · Planes in a 4-plane · Planes in an m-plane · The coordinates of geometric entities
4.9 Calculation of Intersections
    The intersection of two lines in a plane · The intersection of a line and a plane in a 3-plane · The intersection of two planes in a 3-plane · Example: The osculating plane to a curve
4.10 Decomposition into Components
    The shadow · Decomposition in a 2-space · Decomposition in a 3-space · Decomposition in a 4-space · Decomposition of a point or vector in an n-space
4.11 Projective Space
    The intersection of two lines in a plane · The line at infinity in a plane · Projective 3-space · Homogeneous coordinates · Duality · Desargues' theorem · Pappus' theorem · Projective n-space
4.12 Regions of Space
    Regions of space · Regions of a plane · Regions of a line · Planar regions defined by two lines · Planar regions defined by three lines · Creating a pentagonal region · Creating a 5-star region · Creating a 5-star pyramid · Summary
4.13 Geometric Constructions
    Geometric expressions · Geometric equations for lines and planes · The geometric equation of a conic section in the plane · The geometric equation as a prescription to construct · The algebraic equation of a conic section in the plane · An alternative geometric equation of a conic section in the plane · Conic sections through five points · Dual constructions · Constructing conics in space · A geometric equation for a cubic in the plane · Pascal's Theorem · Pascal lines
4.14 Summary

5 The Complement
5.1 Introduction
5.2 Axioms for the Complement
    The grade of a complement · The linearity of the complement operation · The complement axiom · The complement of a complement axiom · The complement of unity
5.3 Defining the Complement
    The complement of an m-element · The complement of a basis m-element · Defining the complement of a basis 1-element · Constraints on the value of ! · Choosing the value of ! · Defining the complements in matrix form
5.4 The Euclidean Complement
    Tabulating Euclidean complements of basis elements · Formulae for the Euclidean complement of basis elements · Products leading to a scalar or n-element
5.5 Complementary Interlude
    Alternative forms for complements · Orthogonality · Visualizing the complement axiom · The regressive product in terms of complements · Glimpses of the inner product
5.6 The Complement of a Complement
    The complement of a complement axiom · The complement of a cobasis element · The complement of the complement of a basis 1-element · The complement of the complement of a basis m-element · The complement of the complement of an m-element · Idempotent complements
5.7 Working with Metrics
    Working with metrics · The default metric · Declaring a metric · Declaring a general metric · Calculating induced metrics · The metric for a cobasis · Creating palettes of induced metrics
5.8 Calculating Complements
    Entering a complement · Creating palettes of complements of basis elements · Converting complements of basis elements · Simplifying expressions involving complements · Converting expressions involving complements to specified forms · Converting regressive products of basis elements in a metric space
5.9 Complements in a vector space
    The Euclidean complement in a vector 2-space · The non-Euclidean complement in a vector 2-space · The Euclidean complement in a vector 3-space · The non-Euclidean complement in a vector 3-space
5.10 Complements in a bound space
    Metrics in a bound space · The complement of an m-vector · Products of vectorial elements in a bound space · The complement of an element bound through the origin · The complement of the complement of an m-vector · Calculating with vector space complements
5.11 Complements of bound elements
    The Euclidean complement of a point in the plane · The Euclidean complement of a point in a point 3-space · The complement of a bound element · Euclidean complements of bound elements · The regressive product of point complements
5.12 Reciprocal Bases
    Reciprocal bases · The complement of a basis element · The complement of a cobasis element · The complement of a complement of a basis element · The exterior product of basis elements · The regressive product of basis elements · The complement of a simple element is simple
5.13 Summary

6 The Interior Product


6.1 Introduction
6.2 Defining the Interior Product
    Definition of the inner product · Definition of the interior product · Implications of the regressive product axioms · Orthogonality · Example: The interior product of a simple bivector and a vector
6.3 Properties of the Interior Product
    Implications of the Complement Axiom · Extended interior products · Converting interior products · Example: Orthogonalizing a set of 1-elements
6.4 The Interior Common Factor Theorem
    The Interior Common Factor Formula · The Interior Common Factor Theorem · Examples of the Interior Common Factor Theorem · The computational form of the Interior Common Factor Theorem
6.5 The Inner Product
    Implications of the Common Factor Axiom · The symmetry of the inner product · The inner product of complements · The inner product of simple elements · Calculating inner products · Inner products of basis elements
6.6 The Measure of an m-element
    The definition of measure · Unit elements · Calculating measures · The measure of free elements · The measure of bound elements · Determining the multivector of a bound multivector
6.7 The Induced Metric Tensor
    Calculating induced metric tensors · Using scalar products to construct induced metric tensors · Displaying induced metric tensors as a matrix of matrices
6.8 Product Formulae for Interior Products
    The basic interior Product Formula · Deriving interior Product Formulas · Deriving interior Product Formulas automatically · Computable forms of interior Product Formulas · The invariance of interior Product Formulas · An alternative form for the interior Product Formula · The interior decomposition formula · Interior Product Formulas for 1-elements · Interior Product Formulas in terms of double sums
6.9 The Zero Interior Sum Theorem
    The zero interior sum · Composing interior sums · The Gram-Schmidt process · Proving the Zero Interior Sum Theorem
6.10 The Cross Product
    Defining a generalized cross product · Cross products involving 1-elements · Implications of the axioms for the cross product · The cross product as a universal product · Cross product formulae
6.11 The Triangle Formulae
    Triangle components · The measure of the triangle components · Equivalent forms for the triangle components
6.12 Angle
    Defining the angle between elements · The angle between a vector and a bivector · The angle between two bivectors · The volume of a parallelepiped
6.13 Projection
    To be completed
6.14 Interior Products of Interpreted Elements
    To be completed
6.15 The Closest Approach of Multiplanes
    To be completed

7 Exploring Screw Algebra


7.1 Introduction
7.2 A Canonical Form for a 2-Entity
    The canonical form · Canonical forms in an n-plane · Creating 2-entities
7.3 The Complement of a 2-Entity
    Complements in an n-plane · The complement referred to the origin · The complement referred to a general point
7.4 The Screw
    The definition of a screw · The unit screw · The pitch of a screw · The central axis of a screw · Orthogonal decomposition of a screw
7.5 The Algebra of Screws
    To be completed
7.6 Computing with Screws
    To be completed


8 Exploring Mechanics
8.1 Introduction
8.2 Force
    Representing force · Systems of forces · Equilibrium · Force in a metric 3-plane
8.3 Momentum
    The velocity of a particle · Representing momentum · The momentum of a system of particles · The momentum of a system of bodies · Linear momentum and the mass centre · Momentum in a metric 3-plane
8.4 Newton's Law
    Rate of change of momentum · Newton's second law
8.5 The Angular Velocity of a Rigid Body
    To be completed
8.6 The Momentum of a Rigid Body
    To be completed
8.7 The Velocity of a Rigid Body
    To be completed
8.8 The Complementary Velocity of a Rigid Body
    To be completed
8.9 The Infinitesimal Displacement of a Rigid Body
    To be completed
8.10 Work, Power and Kinetic Energy
    To be completed

9 Grassmann Algebra
9.1 Introduction
9.2 Grassmann Numbers
    Creating Grassmann numbers · Body and soul · Even and odd components · The grades of a Grassmann number · Working with complex scalars
9.3 Operations with Grassmann Numbers
    The exterior product of Grassmann numbers · The regressive product of Grassmann numbers · The complement of a Grassmann number · The interior product of Grassmann numbers
9.4 Simplifying Grassmann Numbers
    Elementary simplifying operations · Expanding products · Factoring scalars · Checking for zero terms · Reordering factors · Simplifying expressions
9.5 Powers of Grassmann Numbers
    Direct computation of powers · Powers of even Grassmann numbers · Powers of odd Grassmann numbers · Computing positive powers of Grassmann numbers · Powers of Grassmann numbers with no body · The inverse of a Grassmann number · Integer powers of a Grassmann number · General powers of a Grassmann number
9.6 Solving Equations
    Solving for unknown coefficients · Solving for an unknown Grassmann number
9.7 Exterior Division
    Defining exterior quotients · Special cases of exterior division · The non-uniqueness of exterior division
9.8 Factorization of Grassmann Numbers
    The non-uniqueness of factorization · Example: Factorizing a Grassmann number in 2-space · Example: Factorizing a 2-element in 3-space · Example: Factorizing a 3-element in 4-space
9.9 Functions of Grassmann Numbers
    The Taylor series formula · The form of a function of a Grassmann number · Calculating functions of Grassmann numbers · Powers of Grassmann numbers · Exponential and logarithmic functions of Grassmann numbers · Trigonometric functions of Grassmann numbers · Functions of several Grassmann numbers


10 Exploring the Generalized Grassmann Product




10.1 Introduction
10.2 Defining the Generalized Product
    Definition of the generalized product · Case l = 0: Reduction to the exterior product · Case 0 < l < Min[m, k]: Reduction to exterior and interior products · Case l = Min[m, k]: Reduction to the interior product · Case Min[m, k] < l < Max[m, k]: Reduction to zero · Case l = Max[m, k]: Reduction to zero · Case l > Max[m, k]: Undefined
10.3 The Symmetric Form of the Generalized Product
    Expansion of the generalized product in terms of both factors · The quasi-commutativity of the generalized product · Expansion in terms of the other factor
10.4 Calculating with Generalized Products
    Entering a generalized product · Reduction to interior products · Reduction to inner products · Example: Case Min[m, k] < l < Max[m, k]: Reduction to zero
10.5 The Generalized Product Theorem
    The A and B forms of a generalized product · Example: Verification of the Generalized Product Theorem · Verification that the B form may be expanded in terms of either factor
10.6 Products with Common Factors
    Products with common factors · Congruent factors · Orthogonal factors · Generalized products of basis elements · Finding common factors
10.7 The Zero Interior Sum Theorem
    The Zero Interior Sum theorem · Composing a zero interior sum
10.8 The Zero Generalized Sum
    The zero generalized sum conjecture · Generating the zero generalized sum · Exploring the conjecture
10.9 Nilpotent Generalized Products
    Nilpotent products of simple elements · Nilpotent products of non-simple elements
10.10 Properties of the Generalized Product
    Summary of properties
10.11 The Triple Generalized Sum Conjecture
    The generalized Grassmann product is not associative · The triple generalized sum · The triple generalized sum conjecture · Exploring the triple generalized sum conjecture · An algorithm to test the conjecture
10.12 Exploring Conjectures
    A conjecture · Exploring the conjecture
10.13 The Generalized Product of Intersecting Elements
    The case l < p · The case l ≥ p · The special case of l = p
10.14 The Generalized Product of Orthogonal Elements
    The generalized product of totally orthogonal elements · The generalized product of partially orthogonal elements
10.15 The Generalized Product of Intersecting Orthogonal Elements
    The case l < p · The case l ≥ p
10.16 Generalized Products in Lower Dimensional Spaces
    Generalized products in 0, 1, and 2-spaces · 0-space · 1-space · 2-space
10.17 Generalized Products in 3-Space
    To be completed

11 Exploring Hypercomplex Algebra


11.1 Introduction
11.2 Some Initial Definitions and Properties
    The conjugate · Distributivity · The norm · Factorization of scalars · Multiplication by scalars · The real numbers ℝ
11.3 The Complex Numbers ℂ
    Constraints on the hypercomplex signs · Complex numbers as Grassmann numbers under the hypercomplex product
11.4 The Hypercomplex Product in a 2-Space
    Tabulating products in 2-space · The hypercomplex product of two 1-elements · The hypercomplex product of a 1-element and a 2-element · The hypercomplex square of a 2-element · The product table in terms of exterior and interior products
11.5 The Quaternions ℍ
    The product table for orthonormal elements · Generating the quaternions · The norm of a quaternion · The Cayley-Dickson algebra
11.6 The Norm of a Grassmann number
    The norm · The norm of a simple m-element · The skew-symmetry of products of elements of different grade · The norm of a Grassmann number in terms of hypercomplex products · The norm of a Grassmann number of simple components · The norm of a non-simple element
11.7 Products of two different elements of the same grade
    The symmetrized sum of two m-elements · Symmetrized sums for elements of different grades · The body of a symmetrized sum · The soul of a symmetrized sum · Summary of results of this section
11.8 Octonions
    To be completed

12 Exploring Clifford Algebra


12.1 Introduction
12.2 The Clifford Product
    Definition of the Clifford product · Tabulating Clifford products · The grade of a Clifford product · Clifford products in terms of generalized products · Clifford products in terms of interior products · Clifford products in terms of inner products · Clifford products in terms of scalar products
12.3 The Reverse of an Exterior Product
    Defining the reverse · Computing the reverse
12.4 Special Cases of Clifford Products
    The Clifford product with scalars · The Clifford product of 1-elements · The Clifford product of an m-element and a 1-element · The Clifford product of an m-element and a 2-element · The Clifford product of two 2-elements · The Clifford product of two identical elements
12.5 Alternate Forms for the Clifford Product
    Alternate expansions of the Clifford product · The Clifford product expressed by decomposition of the first factor · Alternative expression by decomposition of the first factor · The Clifford product expressed by decomposition of the second factor · The Clifford product expressed by decomposition of both factors
12.6 Writing Down a General Clifford Product
    The form of a Clifford product expansion · A mnemonic way to write down a general Clifford product
12.7 The Clifford Product of Intersecting Elements
    General formulae for intersecting elements · Special cases of intersecting elements
12.8 The Clifford Product of Orthogonal Elements
    The Clifford product of totally orthogonal elements · The Clifford product of partially orthogonal elements · Testing the formulae
12.9 The Clifford Product of Intersecting Orthogonal Elements
    Orthogonal union · Orthogonal intersection
12.10 Summary of Special Cases of Clifford Products
    Arbitrary elements · Arbitrary and orthogonal elements · Orthogonal elements · Calculating with Clifford products
12.11 Associativity of the Clifford Product
    Associativity of orthogonal elements · A mnemonic formula for products of orthogonal elements · Associativity of non-orthogonal elements · Testing the general associativity of the Clifford product
12.13 Clifford Algebra
    Generating Clifford algebras · Real algebra · Clifford algebras of a 1-space
12.14 Clifford Algebras of a 2-Space
    The Clifford product table in 2-space · Product tables in a 2-space with an orthogonal basis · Case 1: {e1 e1 → +1, e2 e2 → +1} · Case 2: {e1 e1 → +1, e2 e2 → -1} · Case 3: {e1 e1 → -1, e2 e2 → -1}
12.15 Clifford Algebras of a 3-Space
    The Clifford product table in 3-space · Cl3: The Pauli algebra · Cl3+: The Quaternions · The Complex subalgebra · Biquaternions
12.16 Clifford Algebras of a 4-Space
    The Clifford product table in 4-space · Cl4: The Clifford algebra of Euclidean 4-space · Cl4+: The even Clifford algebra of Euclidean 4-space · Cl1,3: The Dirac algebra · Cl0,4: The Clifford algebra of complex quaternions
12.17 Rotations
    To be completed


13 Exploring Grassmann Matrix Algebra


13.1 Introduction
13.2 Creating Matrices
    Creating general symbolic matrices · Creating matrices with specific coefficients · Creating matrices with variable elements
13.3 Manipulating Matrices
    Matrix simplification · The grades of the elements of a matrix · Taking components of a matrix · Determining the type of elements
13.4 Matrix Algebra
    Multiplication by scalars · Matrix addition · The complement of a matrix · Matrix products
13.5 Transpose and Determinant
    Conditions on the transpose of an exterior product · Checking the transpose of a product · Conditions on the determinant of a general matrix · Determinants of even Grassmann matrices
13.6 Matrix Powers
    Positive integer powers · Negative integer powers · Non-integer powers of matrices with distinct eigenvalues · Integer powers of matrices with distinct eigenvalues
13.7 Matrix Inverses
    A formula for the matrix inverse · GrassmannMatrixInverse
13.8 Matrix Equations
    Two types of matrix equations · Solving matrix equations
13.9 Matrix Eigensystems
    Exterior eigensystems of Grassmann matrices · GrassmannMatrixEigensystem
13.10 Matrix Functions
    Distinct eigenvalue matrix functions · GrassmannMatrixFunction · Exponentials and Logarithms · Trigonometric functions · Symbolic matrices · Symbolic functions
13.11 Supermatrices
    To be completed

Appendices
Contents
A1 Guide to GrassmannAlgebra
A2 A Brief Biography of Grassmann
A3 Notation
A4 Glossary
A5 Bibliography
    Bibliography · A note on sources to Grassmann's work

Index


Preface
The origins of this book
This book has grown out of an interest in Grassmann's work over the past three decades. There is something fascinating about the beauty with which the mathematical structures Grassmann discovered (invented, if you will) describe the physical world, and something also fascinating about how these beautiful structures have been largely lost to the mainstreams of mathematics and science.

Who was Grassmann?


Hermann Günther Grassmann was born in 1809 in Stettin, near the border of Germany and Poland. He was only 23 when he discovered the method of adding and multiplying points and vectors which was to become the foundation of his Ausdehnungslehre. In 1839 he composed a work on the study of tides entitled Theorie der Ebbe und Flut, which was the first work ever to use vectorial methods.

In 1844 Grassmann published his first Ausdehnungslehre (Die lineale Ausdehnungslehre, ein neuer Zweig der Mathematik) and in the same year won a prize for an essay which expounded a system satisfying an earlier search by Leibniz for an 'algebra of geometry'. Despite these achievements, Grassmann received virtually no recognition. In 1862 he re-expounded his ideas from a different viewpoint in a second Ausdehnungslehre (Die Ausdehnungslehre. Vollständig und in strenger Form).

Again the work was met with resounding silence from the mathematical community, and it was not until the latter part of his life that he received any significant recognition from his contemporaries. The most significant of these were J. Willard Gibbs, who discovered his works in 1877 (the year of Grassmann's death), and William Kingdon Clifford, who discovered them in depth at about the same time. Both became quite enthusiastic about this new mathematics. A more detailed biography of Grassmann may be found at the end of the book.

From the Ausdehnungslehre to GrassmannAlgebra


The term 'Ausdehnungslehre' is variously translated as 'extension theory', 'theory of extension', or 'calculus of extension'. In this book we will use these terms to refer to Grassmann's original work and to other early work in the same notational and conceptual tradition (particularly that of Edward Wyllys Hyde, Henry James Forder and Alfred North Whitehead).

The term 'Exterior Calculus' will be reserved for the calculus of exterior differential forms, originally developed by Élie Cartan from the Ausdehnungslehre. This is an area in which there are many excellent texts, and it is outside the scope of this book.

The term 'Grassmann algebra' will be used to describe the body of algebraic theory and results based on the Ausdehnungslehre, extended to include more recent results and viewpoints. This will be the basic focus of this book.

Finally, the term 'GrassmannAlgebra' will be used to refer to the Mathematica-based software package which accompanies the book.

Grassmann Algebra Book.nb

23

Finally, the term 'GrassmannAlgebra' will be used to refer to the Mathematica based software package which accompanies it.

What is Grassmann algebra?


The intrinsic power of Grassmann algebra arises from its fundamental product operation, the exterior product. The exterior product codifies the property of linear dependence, so essential for modern applied mathematics, directly into the algebra. Simple non-zero elements of the algebra may be viewed as representing constructs of linearly independent elements. For example, a simple bivector is the exterior product of two vectors; a line is represented by the exterior product of two points; a plane is represented by the exterior product of three points.

The focus of this book


The primary focus of this book is to provide a readable account in modern notation of Grassmann's major algebraic contributions to mathematics and science. I would like it to be accessible to scientists and engineers, students and professionals alike. Consequently I have tried to avoid all mathematical terminology which does not make an essential contribution to understanding the basic concepts. The only assumption I have made about the reader's background is that they have some familiarity with basic linear algebra. The secondary concern of this book is to provide an environment for exploring applications of the Grassmann algebra. For general applications in higher dimensional spaces, computations by hand in any algebra become tedious, indeed limiting, thus restricting the hypotheses that can be explored. For this reason the book is integrated with a package for exploring Grassmann algebra, called GrassmannAlgebra. GrassmannAlgebra has been developed in Mathematica. You can read the book without using the package, or you can use the package to extend the examples in the text, experiment with hypotheses, or explore your own interests.

The power of Mathematica


Mathematica is a powerful system for doing mathematics on a computer. It has an inbuilt programming language ideal for extending its capabilities to other mathematical systems like Grassmann algebra. It also has a sophisticated mathematical typesetting capability. This book uses both. The chapters are typeset by Mathematica in its standard notebook format, making the book interactive in its electronic version with the GrassmannAlgebra package. GrassmannAlgebra can turn an exploration requiring days by hand into one requiring just minutes of computing time.

How you can use this book


Chapter 1: Introduction provides a brief preparatory overview of the book, introducing the seminal concepts of each chapter, and solidifying them with simple examples. This chapter is designed to give you a global appreciation with which better to understand the detail of the chapters which follow. However, it is independent of those chapters, and may be read as far as your interest takes you. Chapters 2 to 6: The Exterior Product, The Regressive Product, Geometric Interpretations, The Complement, and The Interior Product develop the basic theory. These form the essential core for a working knowledge of Grassmann algebra and for the explorations in subsequent chapters. They should be read sequentially. Chapters 7 to 13: Exploring Screw Algebra, Mechanics, Grassmann Algebra, The Generalized Product, Hypercomplex Algebra, Clifford Algebra, and Grassmann Matrix Algebra, may be read independently of each other, with the proviso that the chapters on hypercomplex and Clifford algebra depend on the notion of the generalized product.


Some computational sections within the text use GrassmannAlgebra or Mathematica. Wherever possible, explanation is provided so that the results are still intelligible without a knowledge of Mathematica. Mathematica input/output dialogue is in indented Courier font, with the input in bold. For example:

GrassmannSimplify[1 + x + x ⋀ x]
1 + x

Any chapter or section whose title begins with Exploring may be omitted on first reading without unduly disturbing the flow of concept development. Exploratory chapters cover separate applications. Exploratory sections treat topics that are: important for the justification of later developments, but not essential for their understanding; of a specialized nature; or results which appear to be new and worth documenting.

The GrassmannAlgebra website


As with most books these days, this book has its own website: http://sites.google.com/site/grassmannalgebra From this site you will be able to: email me, download a copy of the book or package, learn of any updates to them, check for known errata or bugs, let me know of any errata or bugs you find, get hints and faqs for using the package, and (eventually) link to where you can find a hardcopy of the book.

Acknowledgements
Finally I would like to acknowledge those who were instrumental in the evolution of this book: Cecil Pengilley who originally encouraged me to look at applying Grassmann's work to engineering; the University of Melbourne, Xerox Corporation, the University of Rochester and Swinburne University of Technology which sheltered me while I pursued this interest; Janet Blagg who peerlessly edited the text; my family who had to put up with my writing; and finally Stephen Wolfram who created Mathematica and provided me with a visiting scholar's grant to work at Wolfram Research Institute in Champaign where I began developing the GrassmannAlgebra package. Above all however, I must acknowledge Hermann Grassmann. His contribution to mathematics and science puts him among the great thinkers of the nineteenth century. I hope you enjoy exploring this beautiful mathematical system.

For I have every confidence that the effort I have applied to the science reported upon here, which has occupied a considerable span of my lifetime and demanded the most intense exertions of my powers, is not to be lost. a time will come when it will be drawn forth from the dust of oblivion and the ideas laid down here will bear fruit. some day these ideas, even if in an altered form, will reappear and with the passage of time will participate in a lively intellectual exchange. For truth is eternal, it is divine; and no phase in the development of truth, however small the domain it embraces, can pass away without a trace. It remains even if the garments in which feeble men clothe it fall into dust.


Hermann Grassmann in the foreword to the Ausdehnungslehre of 1862 translated by Lloyd Kannenberg


1 Introduction

1.1 Background
The mathematical representation of physical entities
Three of the more important mathematical systems for representing the entities of contemporary engineering and physical science are the (three-dimensional) vector algebra, the more general tensor algebra, and geometric algebra. Grassmann algebra is more general than vector algebra, overlaps aspects of the tensor algebra, and underpins geometric algebra. It predates all three. In this book we will show that it is only via Grassmann algebra that many of the geometric and physical entities commonly used in the engineering and physical sciences may be represented mathematically in a way which correctly models their pertinent properties and leads straightforwardly to principal results. As a case in point we may take the concept of force. It is well known that a force is not satisfactorily represented by a (free) vector, yet contemporary practice is still to use a (free) vector calculus for this task. The deficiency may be made up for by verbal appendages to the mathematical statements: for example 'where the force f acts along the line through the point P'. Such verbal appendages, being necessary, and yet not part of the calculus being used, indicate that the calculus itself is not adequate to model force satisfactorily. In practice this inadequacy is coped with in terms of a (free) vector calculus by the introduction of the concept of moment. The conditions of equilibrium of a rigid body include a condition on the sum of the moments of the forces about any point. The justification for this condition is not well treated in contemporary texts. It will be shown later however that by representing a force correctly in terms of an element of the Grassmann algebra, both force-vector and moment conditions for the equilibrium of a rigid body are natural consequences of the algebraic processes alone. Since the application of Grassmann algebra to mechanics was known during the nineteenth century one might wonder why, with the 'progress of science', it is not currently used. 
Indeed the same question might be asked with respect to its application in many other fields. To attempt to answer these questions, a brief biography of Grassmann is included as an appendix. In brief, the scientific world was probably not ready in the nineteenth century for the new ideas that Grassmann proposed, and now, in the twenty-first century, seems only just becoming aware of their potential.


The central concept of the Ausdehnungslehre


Grassmann's principal contribution to the physical sciences was his discovery of a natural language of geometry from which he derived a geometric calculus of significant power. For a mathematical representation of a physical phenomenon to be 'correct' it must be of a tensorial nature and since many 'physical' tensors have direct geometric counterparts, a calculus applicable to geometry may be expected to find application in the physical sciences. The word 'Ausdehnungslehre' is most commonly translated as 'theory of extension', the fundamental product operation of the theory then becoming known as the exterior product. The notion of extension has its roots in the interpretation of the algebra in geometric terms: an element of the algebra may be 'extended' to form a higher order element by its (exterior) product with another, in the way that a point may be extended to a line, or a line to a plane by a point exterior to it. The notion of exteriorness is equivalent algebraically to that of linear independence. If the exterior product of elements of grade 1 (for example, points or vectors) is non-zero, then they are independent. A line may be defined by the exterior product of any two distinct points on it. Similarly, a plane may be defined by the exterior product of any three points in it, and so on for higher dimensions. This independence with respect to the specific points chosen is an important and fundamental property of the exterior product. Each time a higher dimensional object is required it is simply created out of a lower dimensional one by multiplying by a new element in a new dimension. Intersections of elements are also obtainable as products. Simple elements of the Grassmann algebra may be interpreted as defining subspaces of a linear space. The exterior product then becomes the operation for building higher dimensional subspaces (higher order elements) from a set of lower dimensional independent subspaces. 
A second product operation called the regressive product may then be defined for determining the common lower dimensional subspaces of a set of higher dimensional non-independent subspaces.

Comparison with the vector and tensor algebras


The Grassmann algebra is a tensorial algebra, that is, it concerns itself with the types of mathematical entities and operations necessary to describe physical quantities in an invariant manner. In fact, it has much in common with the algebra of anti-symmetric tensors, the exterior product being equivalent to the anti-symmetric tensor product. Nevertheless, there are conceptual and notational differences which make the Grassmann algebra richer and easier to use. Rather than a sub-algebra of the tensor algebra, it is perhaps more meaningful to view the Grassmann algebra as a super-algebra of the three-dimensional vector algebra since both commonly use invariant (component free) notations. The principal differences are that the Grassmann algebra has a dual axiomatic structure, can treat higher order elements than vectors, can differentiate between points and vectors, generalizes the notion of 'cross product', is independent of dimension, and possesses the structure of a true algebra.


Algebraicizing the notion of linear dependence


Another way of viewing Grassmann algebra is as linear or vector algebra onto which has been introduced a product operation which algebraicizes the notion of linear dependence. This product operation is called the exterior product and is symbolized with a wedge ⋀. If vectors x1, x2, x3, … are linearly dependent, then it turns out that their exterior product is zero: x1 ⋀ x2 ⋀ x3 ⋀ … = 0. If they are independent, their exterior product is non-zero. Conversely, if the exterior product of vectors x1, x2, x3, … is zero, then the vectors are linearly dependent. Thus the exterior product brings the critical notion of linear dependence into the realm of direct algebraic manipulation. Although this might appear to be a relatively minor addition to linear algebra, we expect to demonstrate in this book that nothing could be further from the truth: the consequences of being able to model linear dependence with a product operation are far reaching, both in facilitating an understanding of current results, and in the generation of new results for many of the algebras and their entities used in science and engineering today. These include of course linear and multilinear algebra, but also vector and tensor algebra, screw algebra, the complex numbers, quaternions, octonions, Clifford algebra, and Pauli and Dirac algebra.

Grassmann algebra as a geometric calculus


Most importantly however, Grassmann's contribution has enabled the operations and entities of all of these algebras to be interpretable geometrically, thus enabling us to bring to bear the power of geometric visualization and intuition into our algebraic manipulations. It is well known that a vector x1 may be interpreted geometrically as representing a direction in space. If the space has a metric, then the magnitude of x1 is interpreted as its length. The introduction of the exterior product enables us to extend the entities of the space to higher dimensions. The exterior product of two vectors x1 ⋀ x2, called a bivector, may be visualized as the two-dimensional analogue of a direction, that is, a planar direction. Neither vectors nor bivectors are interpreted as being located anywhere since they do not possess sufficient information to specify independently both a direction and a position. If the space has a metric, then the magnitude of x1 ⋀ x2 is interpreted as its area, and similarly for higher order products.

Depicting a simple bivector. This bivector is located nowhere.

For applications to the physical world, however, the Grassmann algebra possesses a critical capability that no other algebra possesses so directly: it can distinguish between points and vectors and treat them as separate tensorial entities. Lines and planes are examples of higher order constructs from points and vectors, which have both position and direction. A line can be represented by the exterior product of any two points on it, or by any point on it with a vector parallel to it.

Point⋀vector depiction

Point⋀point depiction

Two different depictions of a bound vector in its line.

A plane can be represented by the exterior product of any point on it with a bivector parallel to it, any two points on it with a vector parallel to it, or any three points on it.

Point⋀vector⋀vector

Point⋀point⋀vector

Point⋀point⋀point

Three different depictions of a bound bivector.

Finally, it should be noted that the Grassmann algebra subsumes all of real algebra, the exterior product reducing in this case to the usual product operation amongst real numbers. Here then is a geometric calculus par excellence. We hope you enjoy exploring it.

1.2 The Exterior Product


The anti-symmetry of the exterior product
The exterior product of two vectors x and y of a linear space yields the bivector x ⋀ y. The bivector is not a vector, and so does not belong to the original linear space. In fact the bivectors form their own linear space. The fundamental defining characteristic of the exterior product is its anti-symmetry. That is, the product changes sign if the order of the factors is reversed.


x ⋀ y = -y ⋀ x        (1.1)

From this we can easily show the equivalent relation, that the exterior product of a vector with itself is zero.

x ⋀ x = 0        (1.2)

This is as expected because x is linearly dependent on itself. The exterior product is associative, distributive, and behaves linearly as expected with scalars.

Exterior products of vectors in a three-dimensional space


By way of example, suppose we are working in a three-dimensional space, with basis e1 , e2 , and e3 . Then we can express vectors x and y as linear combinations of these basis vectors:
x = a1 e1 + a2 e2 + a3 e3
y = b1 e1 + b2 e2 + b3 e3

Here, the ai and bi are of course scalars. Taking the exterior product of x and y and multiplying out the product allows us to express the bivector x ⋀ y as a linear combination of basis bivectors.

x ⋀ y = (a1 e1 + a2 e2 + a3 e3) ⋀ (b1 e1 + b2 e2 + b3 e3)

x ⋀ y = (a1 b1) e1 ⋀ e1 + (a1 b2) e1 ⋀ e2 + (a1 b3) e1 ⋀ e3 + (a2 b1) e2 ⋀ e1 + (a2 b2) e2 ⋀ e2 + (a2 b3) e2 ⋀ e3 + (a3 b1) e3 ⋀ e1 + (a3 b2) e3 ⋀ e2 + (a3 b3) e3 ⋀ e3

The first simplification we can make is to put all basis bivectors of the form ei ⋀ ei to zero. The second simplification is to use the anti-symmetry of the product and collect the terms of the bivectors which are not essentially different (that is, those that may differ only in the order of their factors, and hence differ only by a sign). The product x ⋀ y can then be written:

x ⋀ y = (a1 b2 - a2 b1) e1 ⋀ e2 + (a2 b3 - a3 b2) e2 ⋀ e3 + (a3 b1 - a1 b3) e3 ⋀ e1
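Hand expansions like this are easy to check numerically. The following sketch is plain Python rather than the book's GrassmannAlgebra package, and the coefficient values are arbitrary illustrative choices.

```python
# Collect the coefficients of the essentially different basis
# bivectors e1⋀e2, e2⋀e3, e3⋀e1 in the expansion of x⋀y.
def bivector_components(a, b):
    a1, a2, a3 = a
    b1, b2, b3 = b
    return (a1 * b2 - a2 * b1,   # coefficient of e1⋀e2
            a2 * b3 - a3 * b2,   # coefficient of e2⋀e3
            a3 * b1 - a1 * b3)   # coefficient of e3⋀e1

x = (2, 3, 5)    # a1, a2, a3
y = (7, 11, 13)  # b1, b2, b3

print(bivector_components(x, y))  # (1, -16, 9)
# Anti-symmetry: reversing the factors negates every component.
print(bivector_components(y, x))  # (-1, 16, -9)
```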

The scalar factors appearing here are just those which would have appeared in the usual vector cross product of x and y. However, there is an important difference. The exterior product expression does not require the vector space to have a metric, while the cross product, because it generates a vector orthogonal to x and y, necessarily assumes a metric. Furthermore, the exterior product is valid for any number of vectors in spaces of arbitrary dimension, while the cross product is necessarily confined to products of two vectors in a space of three dimensions. For example, we may continue the product by multiplying x ⋀ y by a third vector z.
z = c1 e1 + c2 e2 + c3 e3


x ⋀ y ⋀ z = ((a1 b2 - a2 b1) e1 ⋀ e2 + (a2 b3 - a3 b2) e2 ⋀ e3 + (a3 b1 - a1 b3) e3 ⋀ e1) ⋀ (c1 e1 + c2 e2 + c3 e3)

Adopting the same simplification procedures as before we obtain the trivector x ⋀ y ⋀ z expressed in basis form.

x ⋀ y ⋀ z = (a1 b2 c3 - a3 b2 c1 + a2 b3 c1 + a3 b1 c2 - a1 b3 c2 - a2 b1 c3) e1 ⋀ e2 ⋀ e3

A trivector is the exterior product of three vectors.

A trivector in a space of three dimensions has just one component. Its coefficient is the determinant of the coefficients of the original three vectors. Clearly, if these three vectors had been linearly dependent, this determinant would have been zero. In a metric space, this coefficient would be proportional to the volume of the parallelepiped formed by the vectors x, y, and z. Hence the geometric interpretation of the algebraic result: if x, y, and z are lying in a planar direction, that is, they are dependent, then the volume of the parallelepiped defined is zero. We see here also that the exterior product begins to give geometric meaning to the often inscrutable operations of the algebra of determinants. In fact we shall see that all the operations of determinants are straightforward consequences of the properties of the exterior product. In three-dimensional metric vector algebra, the vanishing of the scalar triple product of three vectors is often used as a criterion of their linear dependence, whereas in fact the vanishing of their exterior product (valid also in a non-metric space) would suffice. It is interesting to note that the notation for the scalar triple product, or 'box' product, is Grassmann's original notation, viz [x y z].


Finally, we can see that the exterior product of more than three vectors in a three-dimensional space will always be zero, since they must be dependent.
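The determinant observation can be verified directly. Again this is a plain-Python sketch (not the GrassmannAlgebra package), with arbitrary illustrative vectors.

```python
# The single coefficient of x⋀y⋀z in a three-dimensional space,
# written out exactly as in the expansion above; it equals the
# determinant of the coefficients of x, y, and z.
def trivector_coefficient(a, b, c):
    a1, a2, a3 = a
    b1, b2, b3 = b
    c1, c2, c3 = c
    return (a1 * b2 * c3 - a3 * b2 * c1 + a2 * b3 * c1
            + a3 * b1 * c2 - a1 * b3 * c2 - a2 * b1 * c3)

x, y, z = (2, 3, 5), (7, 11, 13), (17, 19, 23)

print(trivector_coefficient(x, y, z))  # -78: the vectors are independent

# For dependent vectors the coefficient vanishes: here z = x + y.
print(trivector_coefficient(x, y, (9, 14, 18)))  # 0
```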

Terminology: elements and entities


To this point we have been referring to the elements of the space under discussion as vectors, and their higher order constructs in three dimensions as bivectors and trivectors. In the general case we will refer to the exterior product of an unspecified number of vectors as a multivector, and the exterior product of m vectors as an m-vector. The word 'vector' however, is in current practice used in two distinct ways. The first and traditional use endows the vector with its well-known geometric properties of direction, sense, and (possibly) magnitude. In the second and more recent, the term vector may refer to any element of a linear space, even though the space is devoid of geometric context. In this book, we adopt the traditional practice and use the term vector only when we intend it to have its traditional geometric interpretation of a (free) arrow-like entity. When referring to an element of a linear space which we are not specifically interpreting geometrically, we simply use the term element. The exterior product of m 1-elements of a linear space will thus be referred to as an m-element. (In the GrassmannAlgebra package however, we have had to depart somewhat from this convention in the interests of common usage: symbols representing 1-elements are called VectorSymbols). In Grassmann algebra however, a further dimension arises. By introducing a new element called the Origin into the basis of a vector space which has the interpretation of a point, we are able to distinguish two types of 1-element: vectors and points. This simple distinction leads to a sophisticated and powerful tableau of free (involving only vectors) and bound (involving points) entities with which to model geometric and physical systems. Science and engineering make use of mathematics by endowing its constructs with geometric or physical interpretations. We will use the term entity to refer to an element which we specifically wish to endow with a geometric or physical interpretation.
For example we would say that (geometric) points and vectors and (physical) positions and directions are 1-entities because they are represented by 1-elements, while (geometric) bound vectors and bivectors and (physical) forces and angular momenta are 2-entities because they are represented by 2-elements. Points, vectors, bound vectors and bivectors are examples of geometric entities. Positions, directions, forces and momenta are examples of physical entities. Points, lines, planes and hyperplanes may also be conveniently considered for computational purposes as geometric entities, or we may also define them in the more common way as a set of points: the set of all points in the entity. (A 1-element is in an m-element if their exterior product is zero.) We call such a set of points a geometric object. We interpret elements of a linear space geometrically or physically, while we represent geometric or physical entities by elements of a linear space. A more complete summary of terminology is given at the end of this chapter.


The grade of an element


The exterior product of m 1-elements is called an m-element. The value m is called the grade of the m-element. For example the element x ⋀ y ⋀ u ⋀ v is of grade 4. An m-element may be denoted by a symbol underscripted with the value m (rendered here as a subscript). For example:

α_4 = x ⋀ y ⋀ u ⋀ v

For simplicity, however, we do not generally denote 1-elements with an underscripted '1'. The grade of a scalar is 0. We shall see that this is a natural consequence of the exterior product axioms formulated for elements of general grade. The dimension of the underlying linear space of 1-elements is denoted by n. Elements of grade greater than n are zero. The complementary grade of an m-element in an n-space is n-m.

Interchanging the order of the factors in an exterior product


The exterior product is defined to be associative. Hence interchanging the order of any two adjacent 1-element factors will change the sign of the product:

… ⋀ x ⋀ y ⋀ … = -(… ⋀ y ⋀ x ⋀ …)

In fact, interchanging the order of any two non-adjacent 1-element factors will also change the sign of the product.
… ⋀ x ⋀ … ⋀ y ⋀ … = -(… ⋀ y ⋀ … ⋀ x ⋀ …)

To see why this is so, suppose the number of factors between x and y is m. First move y to the immediate left of x. This will cause m+1 changes of sign. Then move x to the position that y vacated. This will cause m changes of sign. In all there will be 2m+1 changes of sign, equivalent to just one sign change. Note that it is only elements of odd grade that anti-commute. If, in a product of two elements, at least one of them is of even grade, then the elements commute. For example, 2-elements commute with all other elements.
(x ⋀ y) ⋀ z = z ⋀ (x ⋀ y)
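The sign-change argument above reduces to counting adjacent interchanges, which is easy to check mechanically. A plain-Python sketch (illustrative only, not part of the GrassmannAlgebra package):

```python
# The sign produced by reordering the 1-element factors of a
# product: each interchange of two adjacent factors contributes
# one change of sign.
def permutation_sign(factors):
    f = list(factors)
    sign = 1
    for i in range(len(f)):           # simple bubble sort,
        for j in range(len(f) - 1):   # counting adjacent swaps
            if f[j] > f[j + 1]:
                f[j], f[j + 1] = f[j + 1], f[j]
                sign = -sign
    return sign

# Interchanging the first and last factors of a product of four
# 1-elements leaves m = 2 factors between them: 2m+1 = 5 adjacent
# interchanges in all, hence a single change of sign.
print(permutation_sign((1, 2, 3, 4)))  # 1
print(permutation_sign((4, 2, 3, 1)))  # -1
```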

A brief summary of the properties of the exterior product


In this section we summarize a few of the more important properties of the exterior product that we have already introduced informally. In Chapter 2: The Exterior Product, the complete set of axioms is discussed. The exterior product of an m-element and a k-element is an (m+k)-element.


The exterior product is associative.

(α_m ⋀ β_k) ⋀ γ_r = α_m ⋀ (β_k ⋀ γ_r)        (1.3)

The unit scalar acts as an identity under the exterior product.

α_m = 1 ⋀ α_m = α_m ⋀ 1        (1.4)

Scalars factor out of products.

(a α_m) ⋀ β_k = α_m ⋀ (a β_k) = a (α_m ⋀ β_k)        (1.5)

An exterior product is anti-commutative whenever the grades of the factors are both odd.

α_m ⋀ β_k = (-1)^(m k) β_k ⋀ α_m        (1.6)

The exterior product is both left and right distributive under addition.

(α_m + β_m) ⋀ γ_r = α_m ⋀ γ_r + β_m ⋀ γ_r
α_m ⋀ (β_r + γ_r) = α_m ⋀ β_r + α_m ⋀ γ_r        (1.7)


1.3 The Regressive Product


The regressive product as a dual product to the exterior product
One of Grassmann's major contributions, which appears to be all but lost to current mathematics, is the regressive product. The regressive product is the real foundation for the theory of the inner and scalar products (and their generalization, the interior product). Yet the regressive product is often ignored and the inner product defined as a new construct independent of the regressive product. This approach not only has potential for inconsistencies, but also fails to capitalize on the wealth of results available from the natural duality between the exterior and regressive products. The approach adopted in this book follows Grassmann's original concept. The regressive product is a simple dual operation to the exterior product and an enticing and powerful symmetry is lost by ignoring it, particularly in the development of metric results involving complements and interior products. The underlying beauty of the Ausdehnungslehre is due to this symmetry, which in turn is due to the fact that linear spaces of m-elements and linear spaces of (n-m)-elements have the same dimension. This too is the key to the duality of the exterior and regressive products. For example, the exterior product of m 1-elements is an m-element. The dual to this is that the regressive product of m (n-1)-elements is an (n-m)-element. This duality has the same form as that in a Boolean algebra: if the exterior product corresponds to a type of 'union' then the regressive product corresponds to a type of 'intersection'. It is this duality that permits the definition of complement, and hence interior, inner and scalar products defined in later chapters. To underscore this duality it is proposed to adopt here the symbol ⋁ ('vee') for the regressive product operation.

Unions and intersections of spaces


Consider a (non-zero) 2-element x ⋀ y. We can test to see if any given 1-element z is in the subspace spanned by x and y by taking the exterior product of x ⋀ y with z and seeing if the result is zero. From this point of view, x ⋀ y is an element which can be used to define the subspace instead of the individual 1-elements x and y. Thus we can define the space of x ⋀ y as the space of all 1-elements z such that x ⋀ y ⋀ z = 0. We extend this to more general elements by defining the space of a simple element α_m as the space of all 1-elements β such that α_m ⋀ β = 0.

We will also need the notion of congruence. Two elements are congruent if one is a scalar factor times the other. For example x and 2 x are congruent; x ⋀ y and -x ⋀ y are congruent. Congruent elements define the same subspace. We denote congruence by the symbol ≅. For two elements, congruence is equivalent to linear dependence. For more than two elements however, although the elements may be linearly dependent, they are not necessarily congruent. The following concepts of union and intersection only make sense up to congruence.


A union of elements is an element defining the subspace they together span. The dual concept to union of elements is intersection of elements. An intersection of elements is an element defining the subspace they span in common. Suppose we have three independent 1-elements: x, y, and z. A union of x ⋀ y and y ⋀ z is any element congruent to x ⋀ y ⋀ z. An intersection of x ⋀ y and y ⋀ z is any element congruent to y. The regressive product enables us to calculate intersections of elements. Thus it is the product operation correctly dual to the exterior product. In fact, in Chapter 3: The Regressive Product, we will develop the axiom set for the regressive product from that of the exterior product, and also show that we can reverse the procedure.

A brief summary of the properties of the regressive product


In this section we summarize a few of the more important properties of the regressive product. In Chapter 3: The Regressive Product, we develop the complete set of axioms from those of the exterior product. By comparing the axioms below with those for the exterior product in the previous section, we see that they are effectively generated by replacing ⋀ with ⋁, and m by n-m. The unit element 1 in its form 1_0 becomes 1_n.

The regressive product of an m-element and a k-element in an n-space is an (m+k−n)-element. The regressive product is associative.

$$(\alpha_m \vee \beta_k) \vee \gamma_r = \alpha_m \vee (\beta_k \vee \gamma_r) \tag{1.8}$$

The unit n-element 1_n acts as an identity under the regressive product.

$$\alpha_m \vee 1_n = \alpha_m = 1_n \vee \alpha_m \tag{1.9}$$

Scalars factor out of products.

$$(a\,\alpha_m) \vee \beta_k = a\,(\alpha_m \vee \beta_k) = \alpha_m \vee (a\,\beta_k) \tag{1.10}$$

A regressive product is anti-commutative whenever the complementary grades of the factors are both odd.

$$\alpha_m \vee \beta_k = (-1)^{(n-m)(n-k)}\;\beta_k \vee \alpha_m \tag{1.11}$$

The regressive product is both left and right distributive under addition.

$$(\alpha_m + \beta_m) \vee \gamma_r = \alpha_m \vee \gamma_r + \beta_m \vee \gamma_r$$
$$\alpha_m \vee (\beta_r + \gamma_r) = \alpha_m \vee \beta_r + \alpha_m \vee \gamma_r \tag{1.12}$$

Note: In GrassmannAlgebra the dimension of the space is displayed as the subscript n in the unit n-element symbol 1_n, to ensure a correct interpretation of the symbol.

The Common Factor Axiom


Up to this point we have no way of connecting the dual axiom structures of the exterior and regressive products. That is, given a regressive product of an m-element and a k-element, how do we find the (m+k−n)-element to which it is equivalent, expressed only in terms of exterior products? To make this connection we need to introduce a further axiom which we call the Common Factor Axiom.

The form of the Common Factor Axiom may seem somewhat arbitrary, but it is in fact one of the simplest forms which enable intersections to be calculated. This can be seen in the following application of the axiom to a vector 3-space. Suppose x, y, and z are three independent vectors in a vector 3-space. The Common Factor Axiom says that the regressive product of the two bivectors x∧z and y∧z may also be expressed as the regressive product of a 3-element x∧y∧z with their common factor z.
$$(x \wedge z) \vee (y \wedge z) = (x \wedge y \wedge z) \vee z$$

Since the space is 3-dimensional, we can write any 3-element such as x∧y∧z as a scalar factor (a, say) times the unit 3-element (introduced in axiom 1.9).

$$(x \wedge y \wedge z) \vee z = (a\,1_3) \vee z = a\,z$$

This then gives us the axiomatic structure to say that the regressive product of two elements possessing an element in common is congruent to that element. Another way of saying this is: the regressive product of two elements defines their intersection.


$$(x \wedge z) \vee (y \wedge z) \equiv z$$

The intersection of two bivectors in a three-dimensional space

Of course this is just a simple case. More generally, let α_m, β_k, and γ_p be simple elements with m+k+p = n, where n is the dimension of the space. Then the Common Factor Axiom states that

$$(\alpha_m \wedge \gamma_p) \vee (\beta_k \wedge \gamma_p) = (\alpha_m \wedge \beta_k \wedge \gamma_p) \vee \gamma_p \qquad m+k+p = n \tag{1.13}$$

There are many rearrangements and special cases of this formula which we will encounter in later chapters. For example, when p is zero, the Common Factor Axiom shows that the regressive product of an m-element with an (n−m)-element is a scalar which can be expressed in the alternative form of a regressive product with the unit 1.

$$\alpha_m \vee \beta_{n-m} = (\alpha_m \wedge \beta_{n-m}) \vee 1$$

The Common Factor Axiom allows us to prove a particularly useful result: the Common Factor Theorem. The Common Factor Theorem expresses any regressive product in terms of exterior products alone. This of course enables us to calculate intersections of arbitrary elements. Most importantly however, we will see later that the Common Factor Theorem has a counterpart expressed in terms of exterior and interior products, called the Interior Common Factor Theorem. This forms the principal expansion theorem for interior products, from which we can derive all the important theorems relating exterior and interior products. The Interior Common Factor Theorem, and the Common Factor Theorem upon which it is based, are possibly the most important theorems in the Grassmann algebra.

In the next section we informally apply the Common Factor Theorem to obtain the intersection of two bivectors in a three-dimensional space.

The intersection of two bivectors in a three-dimensional space


Suppose that x∧y and u∧v are non-congruent bivectors in a three-dimensional space. Since the space has only three dimensions, the bivectors must have an intersection. We denote the regressive product of x∧y and u∧v by z:

$$z = (x \wedge y) \vee (u \wedge v)$$

We will see in Chapter 3: The Regressive Product that this can be expanded by the Common Factor Theorem to give

$$(x \wedge y) \vee (u \wedge v) = (x \wedge y \wedge v) \vee u - (x \wedge y \wedge u) \vee v \tag{1.14}$$

But we have already seen in Section 1.2 that in a 3-space, the exterior product of three vectors will, in any given basis, give the basis trivector, multiplied by the determinant of the components of the vectors making up the trivector. Additionally, we note that the regressive product (intersection) of a vector with an element like the basis trivector completely containing the vector, will just give an element congruent to itself. Thus the regressive product leads us to an explicit expression for an intersection of the two bivectors.
$$z \equiv \mathrm{Det}[x, y, v]\;u - \mathrm{Det}[x, y, u]\;v$$

Here Det[x,y,v] is the determinant of the components of x, y, and v. We could also have obtained an equivalent formula expressing z in terms of x and y instead of u and v by simply interchanging the order of the bivector factors in the original regressive product.

Note carefully however, that this formula only finds the common factor up to congruence, because until we determine an explicit expression for the unit n-element 1_n in terms of basis elements (which we do by introducing the complement operation in Chapter 5), we cannot usefully use axiom 1.9 above. Nevertheless, this is not to be seen as a restriction. Rather, as we shall see in the next section, it leads to interesting insights as to what can be accomplished when we work in spaces without a metric (projective spaces).
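The determinant formula above is easy to check numerically. The following minimal sketch is in plain Python rather than the book's Mathematica (the helper names `det3` and `bivector_intersection` are illustrative only, not from the GrassmannAlgebra package): it computes z = Det[x,y,v] u − Det[x,y,u] v for two bivectors in a 3-space, and verifies that z lies in both bivectors, i.e. that z∧x∧y = 0 and z∧u∧v = 0.

```python
# Sketch: the common factor of two bivectors via z = Det[x,y,v] u - Det[x,y,u] v.

def det3(a, b, c):
    # Determinant of the 3x3 matrix with rows a, b, c: the coefficient of the
    # basis trivector in a^b^c.
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

def bivector_intersection(x, y, u, v):
    # An element congruent to (x^y) v (u^v).
    cu, cv = det3(x, y, v), det3(x, y, u)
    return [cu * ui - cv * vi for ui, vi in zip(u, v)]

x, y = [1, 0, 0], [0, 1, 0]      # bivector x^y (the e1^e2 plane)
u, v = [1, 1, 1], [1, 2, 3]      # bivector u^v
z = bivector_intersection(x, y, u, v)

# z lies in a bivector a^b exactly when z^a^b = 0, i.e. Det[z,a,b] = 0.
assert det3(z, x, y) == 0
assert det3(z, u, v) == 0
```

Since the regressive product only determines the common factor up to congruence, any non-zero scalar multiple of z represents the same intersection.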


1.4 Geometric Interpretations


Points and vectors
In this section we introduce two different types of geometrically interpreted elements which can be represented by the elements of a linear space: vectors and points. Then we look at the interpretations of the various higher order elements that we can generate from them by the exterior product. Finally we see how the regressive product can be used to calculate intersections of these higher order elements.

As discussed in Section 1.2, the term 'vector' is often used to refer to an element of a linear space with no intention of implying an interpretation. In this book however, we reserve the term for a particular type of geometric interpretation: that associated with representing direction. But an element of a linear space may also be interpreted as a point. Of course vectors may also be used to represent points, but only relative to another given point (the information for which is not in the vector). These vectors are properly called position vectors. Common practice often omits explicit reference to this other given point, or perhaps may refer to it verbally. Points can be represented satisfactorily in many cases by position vectors alone, but when both position and direction are required in the same element we must distinguish mathematically between the two.

To describe true position in a three-dimensional physical space, a linear space of four dimensions is required: one for an origin point, and the other three for the three spatial directions. Since the exterior product is independent of the dimension of the underlying space, it can deal satisfactorily with points and vectors together. The usual three-dimensional vector algebra however cannot.

Suppose x, y, and z are elements of a linear space interpreted as vectors. Vectors always have a direction and sense, but only when the linear space has a metric do they also have a magnitude. Since to this stage we have not yet introduced the notion of metric, we will only be discussing interpretations and applications which do not require elements (other than congruent elements) to be commensurable. Of course, vectors may be summed in a space with no metric: the geometric interpretation of this operation is the standard 'triangle rule'.

As we have seen in the previous sections, multiplying vectors together with the exterior product leads to higher order entities like bivectors and trivectors. These entities are interpreted as representing directions and their higher dimensional analogues.

Sums and differences of points


A point is defined as the sum of the origin point and a vector. If 𝒪 is the origin, and x is a vector, then 𝒪 + x is a point.

$$P = \mathcal{O} + x \tag{1.15}$$

The vector x is called the position vector of the point P.

The sum of the origin and a vector is a point.

The sum of a point and a vector is another point.


$$Q = P + y = (\mathcal{O} + x) + y = \mathcal{O} + (x + y)$$

The sum of a point and a vector is another point

The difference of two points is a vector since the origins cancel.


$$P - Q = (\mathcal{O} + x) - (\mathcal{O} + x + y) = -y$$

The difference of two points is a vector.

A scalar multiple of a point is called a weighted point. For example, if m is a scalar, m P is a weighted point with weight m. The sum of two points gives the point halfway between them with a weight of 2.

$$P_1 + P_2 = (\mathcal{O} + x_1) + (\mathcal{O} + x_2) = 2\left(\mathcal{O} + \frac{x_1 + x_2}{2}\right) = 2\,P_c$$

The sum of two points is a point of weight 2 located mid-way between them

Historical Note
The point was originally considered the fundamental geometric entity of interest. However the difference of points was clearly no longer a point, since reference to the origin had been lost. Sir William Rowan Hamilton coined the term 'vector' for this new entity since adding a vector to a point 'carried' the point to a new point.

Determining a mass-centre
A classic application of a sum of weighted points is in the determination of a centre of mass. Consider a collection of points P_i weighted with masses m_i. The sum of the weighted points gives the point P_G at the mass-centre (centre of gravity) weighted with the total mass M. To show this, first add the weighted points and collect the terms involving the origin.

$$M\,P_G = m_1(\mathcal{O} + x_1) + m_2(\mathcal{O} + x_2) + m_3(\mathcal{O} + x_3) + \cdots = (m_1 + m_2 + m_3 + \cdots)\,\mathcal{O} + (m_1 x_1 + m_2 x_2 + m_3 x_3 + \cdots)$$

Dividing through by the total mass M gives the centre of mass.

$$P_G = \mathcal{O} + \frac{m_1 x_1 + m_2 x_2 + m_3 x_3 + \cdots}{m_1 + m_2 + m_3 + \cdots}$$

To fix ideas, we take a simple example demonstrating that centres of mass can be accumulated in any order. Suppose we have three points P, Q, and R with masses p, q, and r. The centres of mass taken two at a time are given by

$$(p + q)\,G_{PQ} = p\,P + q\,Q \qquad (q + r)\,G_{QR} = q\,Q + r\,R \qquad (p + r)\,G_{PR} = p\,P + r\,R$$

Now take the centre of mass of each of these with the other weighted point. Clearly, the three sums will be equal.

Now take the centre of mass of each of these with the other weighted point. Clearly, the three sums will be equal.

$$(p + q)\,G_{PQ} + r\,R = (q + r)\,G_{QR} + p\,P = (p + r)\,G_{PR} + q\,Q = p\,P + q\,Q + r\,R = (p + q + r)\,G_{PQR}$$
Centres of mass can be accumulated in any order
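This order-independence is easy to check numerically. The sketch below is plain Python (not the book's Mathematica); weighted points are stored as (mass, position-vector) pairs, with exact rational arithmetic so the three accumulation orders compare equal exactly. The `combine` helper is our own illustrative name.

```python
# Sketch: weighted points as (mass, position) pairs; combining them
# accumulates the centre of mass.
from fractions import Fraction as F

def combine(wp1, wp2):
    # The sum of two weighted points: masses add, positions combine
    # mass-weighted (this is the division by total mass done above).
    (m1, x1), (m2, x2) = wp1, wp2
    m = m1 + m2
    return (m, [(m1 * a + m2 * b) / m for a, b in zip(x1, x2)])

P = (F(2), [F(0), F(0)])
Q = (F(3), [F(4), F(0)])
R = (F(5), [F(0), F(4)])

# Accumulate the overall centre of mass in three different orders.
g1 = combine(combine(P, Q), R)
g2 = combine(P, combine(Q, R))
g3 = combine(Q, combine(P, R))

assert g1 == g2 == g3                      # same total mass, same centre
assert g1 == (F(10), [F(6, 5), F(2)])      # mass 10 at (6/5, 2)
```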

Lines and planes


The exterior product of a point and a vector gives a bound vector. Bound vectors are the entities we need for mathematically representing lines. A line is the set of points in the bound vector. That is, it consists of all the points whose exterior product with the bound vector is zero. In practice, we usually compute with lines by computing with their bound vectors. For example, to get the intersection of two lines in the plane, we take the regressive product of their bound vectors. By abuse of terminology we may therefore often refer to a bound vector as a line.

A bound vector can be defined by the exterior product of a point and a vector, or of two points. In the first case we represent the line L through the point P in the direction of x by any entity congruent to the exterior product of P and x. In the second case we can introduce Q as P + x to get the same result.

$$L = P \wedge x = P \wedge (Q - P) = P \wedge Q$$


Two different depictions of a bound vector in its line.

A line is independent of the specific points used to define it. To see this, consider any other point R on the line. Since R is on the line it can be represented by the sum of P with an arbitrary scalar multiple of the vector x:

$$L = R \wedge x = (P + a\,x) \wedge x = P \wedge x$$

A line may also be represented by the exterior product of any two points on it.

$$L = P \wedge R = P \wedge (P + a\,x) = a\,P \wedge x$$

Note that the bound vectors P∧x and P∧R are different, but congruent. They therefore define the same line.

$$L \equiv P \wedge x \equiv R \wedge x \equiv P \wedge R \tag{1.16}$$

Two congruent bound vectors defining the same line.

These concepts extend naturally to higher dimensional constructs. For example a plane Π may be represented by the exterior product of a single point on it together with a bivector in the direction of the plane, any two points on it together with a vector in it (not parallel to the line joining the points), or any three points on it (not in the same line).

$$\Pi \equiv P \wedge x \wedge y \equiv P \wedge Q \wedge y \equiv P \wedge Q \wedge R \tag{1.17}$$

Three different depictions of a bound bivector in its plane.

To build higher dimensional geometric entities from lower dimensional ones, we simply take their exterior product. For example we can build a line by taking the exterior product of a point with any point or vector exterior to it. Or we can build a plane by taking the exterior product of a line with any point or vector exterior to it.

The intersection of two lines




To find intersections of geometric entities, we use the regressive product. For example, suppose we have two lines in a plane and we want to find the point of intersection P. As we have seen we can represent the lines in a number of ways. For example:

$$L_1 = P_1 \wedge x_1 = (\mathcal{O} + \nu_1) \wedge x_1 = \mathcal{O} \wedge x_1 + \nu_1 \wedge x_1$$
$$L_2 = P_2 \wedge x_2 = (\mathcal{O} + \nu_2) \wedge x_2 = \mathcal{O} \wedge x_2 + \nu_2 \wedge x_2$$


The intersection of lines as the common factor of bound vectors.

The point of intersection of L_1 and L_2 is the point P given by (congruent to) the regressive product of the lines L_1 and L_2.

$$P \equiv L_1 \vee L_2 = (\mathcal{O} \wedge x_1 + \nu_1 \wedge x_1) \vee (\mathcal{O} \wedge x_2 + \nu_2 \wedge x_2)$$

Expanding the formula for P gives four terms:

$$P = (\mathcal{O} \wedge x_1) \vee (\mathcal{O} \wedge x_2) + (\nu_1 \wedge x_1) \vee (\mathcal{O} \wedge x_2) + (\mathcal{O} \wedge x_1) \vee (\nu_2 \wedge x_2) + (\nu_1 \wedge x_1) \vee (\nu_2 \wedge x_2)$$

The Common Factor Theorem for the regressive product of elements of the form (x∧y)∨(u∧v) in a linear space of three dimensions was introduced as formula 1.14 in Section 1.3 as:

$$(x \wedge y) \vee (u \wedge v) = (x \wedge y \wedge v) \vee u - (x \wedge y \wedge u) \vee v$$

Since a bound 2-space is three-dimensional (its basis contains three elements: the origin and two vectors), we can use this formula to expand each of the terms in P:

$$(\mathcal{O} \wedge x_1) \vee (\mathcal{O} \wedge x_2) = (\mathcal{O} \wedge x_1 \wedge x_2) \vee \mathcal{O} - (\mathcal{O} \wedge x_1 \wedge \mathcal{O}) \vee x_2$$
$$(\nu_1 \wedge x_1) \vee (\mathcal{O} \wedge x_2) = (\nu_1 \wedge x_1 \wedge x_2) \vee \mathcal{O} - (\nu_1 \wedge x_1 \wedge \mathcal{O}) \vee x_2$$
$$(\mathcal{O} \wedge x_1) \vee (\nu_2 \wedge x_2) = -(\nu_2 \wedge x_2) \vee (\mathcal{O} \wedge x_1) = -(\nu_2 \wedge x_2 \wedge x_1) \vee \mathcal{O} + (\nu_2 \wedge x_2 \wedge \mathcal{O}) \vee x_1$$
$$(\nu_1 \wedge x_1) \vee (\nu_2 \wedge x_2) = (\nu_1 \wedge x_1 \wedge x_2) \vee \nu_2 - (\nu_1 \wedge x_1 \wedge \nu_2) \vee x_2$$

The term " x1 " is zero because of the exterior product of repeated factors. The four terms involving the exterior product of three vectors, for example n1 x1 x2 , are also zero since any three vectors in a two-dimensional vector space must be dependent (The vector space is 2dimensional since it is the vector sub-space of a bound 2-space). Hence we can express the point of intersection as congruent to the weighted point P: 2009 9 3

Grassmann Algebra Book.nb

46

The term " x1 " is zero because of the exterior product of repeated factors. The four terms involving the exterior product of three vectors, for example n1 x1 x2 , are also zero since any three vectors in a two-dimensional vector space must be dependent (The vector space is 2dimensional since it is the vector sub-space of a bound 2-space). Hence we can express the point of intersection as congruent to the weighted point P:
P H" x1 x2 L " + H" n2 x2 L x1 - H" n1 x1 L x2

If we express the vectors in terms of a basis, e1 and e2 say, we can reduce this formula (after some manipulation) to:

$$P \equiv \mathcal{O} + \frac{\mathrm{Det}[\nu_2, x_2]}{\mathrm{Det}[x_1, x_2]}\,x_1 - \frac{\mathrm{Det}[\nu_1, x_1]}{\mathrm{Det}[x_1, x_2]}\,x_2$$

Here, the determinants are the determinants of the coefficients of the vectors in the given basis. To verify that P does indeed lie on both the lines L_1 and L_2, we only need to carry out the straightforward verification that the products P∧L_1 and P∧L_2 are both zero.

Although this approach in this simple case is certainly more complex than the standard algebraic approach in the plane, its interest lies in the facts that it is immediately generalizable to intersections of any geometric objects in spaces of any number of dimensions, and that it leads to easily computable solutions.
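The final formula translates directly into code. The sketch below is plain Python (the helper names `det2` and `line_intersection` are our own, not from the GrassmannAlgebra package): it computes the position vector of P for two lines in the plane and then verifies that P lies on both lines.

```python
# Sketch: P = O + (Det[n2,x2] x1 - Det[n1,x1] x2) / Det[x1,x2].

def det2(a, b):
    # Determinant of the 2x2 matrix with rows a and b.
    return a[0] * b[1] - a[1] * b[0]

def line_intersection(n1, x1, n2, x2):
    # Line i passes through the point O + ni and has direction vector xi.
    d = det2(x1, x2)                      # zero when the lines are parallel
    c1, c2 = det2(n2, x2), det2(n1, x1)
    return [(c1 * a - c2 * b) / d for a, b in zip(x1, x2)]

# Line 1 through O + (0,1) with direction (1,0);
# line 2 through O + (1,0) with direction (0,1).
p = line_intersection([0, 1], [1, 0], [1, 0], [0, 1])
assert p == [1.0, 1.0]

# P lies on line i exactly when P - (O + ni) is parallel to xi.
assert det2([p[0] - 0, p[1] - 1], [1, 0]) == 0
assert det2([p[0] - 1, p[1] - 0], [0, 1]) == 0
```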

1.5 The Complement


The complement as a correspondence between spaces
The Grassmann algebra has a duality in its structure which not only gives it a certain elegance, but is also the basis of its power. We have already introduced the regressive product as the dual product operation to the exterior product. In this section we extend the notion of duality to define the complement of an element. The notions of metric, orthogonality, and interior, inner and scalar products are all based on the complement.

Consider a linear space of dimension n with basis e1, e2, …, en. The set of all the essentially different m-element products of these basis elements forms the basis of another linear space, but this time of dimension $\binom{n}{m}$. For example, when n is 3, the linear space of 2-elements has three elements in its basis: e1∧e2, e1∧e3, e2∧e3. The anti-symmetric nature of the exterior product means that there are just as many basis elements in the linear space of (n−m)-elements as there are in the linear space of m-elements.

Because these linear spaces have the same dimension, we can set up a correspondence between m-elements and (n−m)-elements. That is, given any m-element, we can define its corresponding (n−m)-element. The (n−m)-element is called the complement of the m-element. Normally this correspondence is set up between basis elements and extended to all other elements by linearity.

The Euclidean complement




Suppose we have a three-dimensional linear space with basis e1, e2, e3. We define the Euclidean complement of each of the basis elements as the basis 2-element whose exterior product with the basis element gives the basis 3-element e1∧e2∧e3. We denote the complement of an element by placing a 'bar' over it. Thus:

$$\overline{e_1} = e_2 \wedge e_3 \qquad \overline{e_2} = e_3 \wedge e_1 \qquad \overline{e_3} = e_1 \wedge e_2$$
$$e_1 \wedge \overline{e_1} = e_2 \wedge \overline{e_2} = e_3 \wedge \overline{e_3} = e_1 \wedge e_2 \wedge e_3$$

The Euclidean complement is the simplest type of complement and defines a Euclidean metric, that is, one in which the basis elements are mutually orthonormal. This was the only type of complement considered by Grassmann. In Chapter 5: The Complement, we will show however, that Grassmann's concept of complement is easily extended to more general metrics. Note carefully that we will be using the notion of complement to define the notions of orthogonality and metric, and until we do this, we will not be relying on their existence in Chapters 2, 3, and 4.

With the definitions above, we can now proceed to define the Euclidean complement of a general 1-element x = a e1 + b e2 + c e3. To do this we need to endow the complement operation with the property of linearity so that it has meaning for linear combinations of basis elements.

$$\overline{x} = \overline{a\,e_1 + b\,e_2 + c\,e_3} = a\,\overline{e_1} + b\,\overline{e_2} + c\,\overline{e_3} = a\,e_2 \wedge e_3 + b\,e_3 \wedge e_1 + c\,e_1 \wedge e_2$$

A vector and its bivector complement in a three-dimensional vector space

In Section 1.6 we will see that the complement of an element is orthogonal to the element, because we will define the interior product (and hence inner and scalar products) using the complement. We can start to see how the scalar product of a 1-element with itself might arise by expanding the product x∧x̄:

$$x \wedge \overline{x} = (a\,e_1 + b\,e_2 + c\,e_3) \wedge (a\,e_2 \wedge e_3 + b\,e_3 \wedge e_1 + c\,e_1 \wedge e_2) = \left(a^2 + b^2 + c^2\right) e_1 \wedge e_2 \wedge e_3$$

The complement of the basis 2-elements can be defined in a manner analogous to that for 1-elements, that is, such that the exterior product of a basis 2-element with its complement is equal to the basis 3-element. The complement of a 2-element in 3-space is therefore a 1-element.

$$\overline{e_2 \wedge e_3} = e_1 \qquad \overline{e_3 \wedge e_1} = e_2 \qquad \overline{e_1 \wedge e_2} = e_3$$
$$(e_2 \wedge e_3) \wedge \overline{e_2 \wedge e_3} = (e_3 \wedge e_1) \wedge \overline{e_3 \wedge e_1} = (e_1 \wedge e_2) \wedge \overline{e_1 \wedge e_2} = e_1 \wedge e_2 \wedge e_3$$

To complete the definition of Euclidean complement in a 3-space we note that:

$$\overline{1} = e_1 \wedge e_2 \wedge e_3 \qquad \overline{e_1 \wedge e_2 \wedge e_3} = 1$$

Summarizing these results for the Euclidean complement of the basis elements of a Grassmann algebra in three dimensions shows the essential symmetry of the complement operation.

Complement Palette

    Basis          Complement
    1              e1∧e2∧e3
    e1             e2∧e3
    e2             −(e1∧e3)
    e3             e1∧e2
    e1∧e2          e3
    e1∧e3          −e2
    e2∧e3          e1
    e1∧e2∧e3       1

The complement of a complement


Applying the complement operation twice to a 1-element x, we can see that the complement of the complement of x in a 3-space is just x itself.

$$x = a\,e_1 + b\,e_2 + c\,e_3$$
$$\overline{x} = \overline{a\,e_1 + b\,e_2 + c\,e_3} = a\,e_2 \wedge e_3 + b\,e_3 \wedge e_1 + c\,e_1 \wedge e_2$$
$$\overline{\overline{x}} = \overline{a\,e_2 \wedge e_3 + b\,e_3 \wedge e_1 + c\,e_1 \wedge e_2} = a\,e_1 + b\,e_2 + c\,e_3 = x$$

More generally, as we shall see in Chapter 5: The Complement, we can show that the complement of the complement of any element is the element itself, apart from a possible sign.

$$\overline{\overline{\alpha_m}} = (-1)^{m(n-m)}\,\alpha_m \tag{1.18}$$

This result is independent of the correspondence that we set up between the m-elements and (n−m)-elements of the space, except that the correspondence must be symmetric. This is equivalent to the requirement that the metric tensor (and inner product) be symmetric. Whereas in a 3-space the complement of the complement of a 1-element is the element itself, in a 2-space it turns out to be the negative of the element.

Complement Palette

    Basis       Complement
    1           e1∧e2
    e1          e2
    e2          −e1
    e1∧e2       1

Although this sign dependence on the dimension of the space and grade of the element might appear arbitrary, it turns out to capture some essential properties of the elements in their spaces to which we have become accustomed. For example in 2-space the complement x̄ of a vector x rotates it anticlockwise by π/2. Taking the complement of x̄ rotates it anticlockwise by a further π/2 into −x.

$$x = a\,e_1 + b\,e_2$$
$$\overline{x} = \overline{a\,e_1 + b\,e_2} = a\,e_2 - b\,e_1$$
$$\overline{\overline{x}} = \overline{a\,e_2 - b\,e_1} = -a\,e_1 - b\,e_2$$


The complement operation rotating vectors in a vector 2-space

The Complement Axiom


From the Common Factor Axiom we can derive a powerful relationship between the Euclidean complements of elements and their exterior and regressive products. The Euclidean complement of the exterior product of two elements is equal to the regressive product of their complements.

$$\overline{\alpha_m \wedge \beta_k} = \overline{\alpha_m} \vee \overline{\beta_k} \tag{1.19}$$

However, although this may be derived in this simple case, to develop the Grassmann algebra for general metrics we will assume this relationship holds independent of the metric. It thus takes on the mantle of an axiom. This axiom, which we call the Complement Axiom, is the quintessential formula of the Grassmann algebra. It expresses the duality of its two fundamental operations on elements and their complements. We note the formal similarity to de Morgan's law in Boolean algebra.

We will also see that adopting this formula for general complements will enable us to compute the complement of any element of a space once we have defined the complements of its basis 1-elements.

As an example consider two vectors x and y in 3-space, and their exterior product. The Complement Axiom becomes

$$\overline{x \wedge y} = \overline{x} \vee \overline{y}$$

The complement of the bivector x∧y is a vector. The complements of x and y are bivectors. The regressive product of these two bivectors is a vector. The following graphic depicts this relationship, and the orthogonality of the elements. We discuss orthogonality in the next section.

Visualizing the complement axiom in vector three-space

1.6 The Interior Product


The definition of the interior product
The interior product is a generalization of the inner or scalar product to elements of arbitrary grade. First we will define the interior product and then show how the inner and scalar products are special cases. The interior product of an element α_m with an element β_k is denoted α_m⊙β_k and defined to be the regressive product of α_m with the complement of β_k.

$$\alpha_m \odot \beta_k = \alpha_m \vee \overline{\beta_k} \tag{1.20}$$

The grade of an interior product α_m⊙β_k may be seen from the definition to be m+(n−k)−n = m−k.

Note that while the grade of a regressive product depends on the dimension of the underlying linear space, the grade of an interior product is independent of it. This independence underpins the important role the interior product plays in the Grassmann algebra: the exterior product sums grades while the interior product differences them. However, grades may be arbitrarily summed but not arbitrarily differenced, since there are no elements of negative grade. Thus the order of the factors in an interior product is important. When the grade of the first element is less than that of the second, the result is necessarily zero.

Inner products and scalar products


The interior product of two elements α_m and β_m of the same grade is (also) called their inner product. Since the grade of an interior product is the difference of the grades of its factors, an inner product is always of grade zero, hence scalar. A special case of the interior or inner product in which the two factors of the product are of grade 1 is called a scalar product, to conform to the common usage.

In Chapter 6 we will show that the inner product is symmetric, that is, the order of the factors is immaterial.

$$\alpha_m \odot \beta_m = \beta_m \odot \alpha_m \tag{1.21}$$

When the inner product is between simple elements it can be expressed as the determinant of the array of scalar products according to the following formula:

$$(\alpha_1 \wedge \alpha_2 \wedge \cdots \wedge \alpha_m) \odot (\beta_1 \wedge \beta_2 \wedge \cdots \wedge \beta_m) = \mathrm{Det}[\alpha_i \odot \beta_j] \tag{1.22}$$

For example, the inner product of two 2-elements α1∧α2 and β1∧β2 may be written:

$$(\alpha_1 \wedge \alpha_2) \odot (\beta_1 \wedge \beta_2) = (\alpha_1 \odot \beta_1)(\alpha_2 \odot \beta_2) - (\alpha_1 \odot \beta_2)(\alpha_2 \odot \beta_1) \tag{1.23}$$
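Formula 1.23 is easy to check numerically. In the sketch below (plain Python; the Euclidean scalar product is assumed, and the helper names are our own), the inner product of two 2-elements built from vectors in a 3-space is computed as the 2×2 Gram determinant; for a Euclidean 3-space it agrees with the dot product of the cross products of the factors, which are the vector complements of the two bivectors.

```python
# Sketch: the inner product of two 2-elements as a Gram determinant (1.23).

def dot(a, b):
    # Euclidean scalar product of two 1-elements.
    return sum(p * q for p, q in zip(a, b))

def inner2(a1, a2, b1, b2):
    # (a1^a2) . (b1^b2) = (a1.b1)(a2.b2) - (a1.b2)(a2.b1)
    return dot(a1, b1) * dot(a2, b2) - dot(a1, b2) * dot(a2, b1)

def cross(a, b):
    # In a Euclidean 3-space, the vector complement of the bivector a^b.
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

a1, a2 = [1, 0, 0], [0, 1, 0]
b1, b2 = [1, 2, 0], [3, 4, 0]

# The Gram determinant agrees with the scalar product of the complements.
assert inner2(a1, a2, b1, b2) == dot(cross(a1, a2), cross(b1, b2)) == -2
```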

Sequential interior products


Definition 1.20 for the interior product leads to an immediate and powerful formula relating exterior and interior products, by grace of the associativity of the regressive product and the Complement Axiom 1.19.

$$\alpha_m \odot (\beta_1 \wedge \beta_2 \wedge \cdots \wedge \beta_k) = \alpha_m \vee \overline{\beta_1 \wedge \beta_2 \wedge \cdots \wedge \beta_k} = \alpha_m \vee \overline{\beta_1} \vee \overline{\beta_2} \vee \cdots \vee \overline{\beta_k}$$

$$\alpha_m \odot (\beta_1 \wedge \beta_2 \wedge \cdots \wedge \beta_k) = (\alpha_m \odot \beta_1) \odot (\beta_2 \wedge \cdots \wedge \beta_k) = \left(\left(\alpha_m \odot \beta_1\right) \odot \beta_2\right) \odot \cdots \odot \beta_k \tag{1.24}$$

For example, the inner product of two bivectors can be rewritten to display the interior product of the first bivector with either of the vectors u or v:

$$(x \wedge y) \odot (u \wedge v) = ((x \wedge y) \odot u) \odot v = -((x \wedge y) \odot v) \odot u$$

It is the straightforward and consistent derivation of formulae like 1.24 from definition 1.20 using only the fundamental exterior, regressive and complement operations, that shows how powerful Grassmann's approach is. The alternative approach of simply introducing an inner product onto a space cannot bring such power to bear.

Orthogonality
As is well known, two 1-elements are said to be orthogonal if their scalar product is zero. More generally, a 1-element x is orthogonal to a simple element α_m if and only if their interior product α_m⊙x is zero.

However, for α_m⊙(x1∧x2∧⋯∧xk) to be zero it is only necessary that one of the xi be orthogonal to α_m. To show this, suppose it to be (without loss of generality) x1. Then by formula 1.24 we can write

$$\alpha_m \odot (x_1 \wedge x_2 \wedge \cdots \wedge x_k) = (\alpha_m \odot x_1) \odot (x_2 \wedge \cdots \wedge x_k)$$

Hence it becomes immediately clear that if α_m⊙x1 is zero then so is the whole product α_m⊙(x1∧x2∧⋯∧xk).

Measure and magnitude


Formula 1.22 is the basis for computing the measure of a simple m-element. The measure of a simple element A is denoted |A|, and is defined to be the positive square root of the interior product of the element with itself. For A = α1∧α2∧⋯∧αm we have

$$|A|^2 = A \odot A = (\alpha_1 \wedge \alpha_2 \wedge \cdots \wedge \alpha_m) \odot (\alpha_1 \wedge \alpha_2 \wedge \cdots \wedge \alpha_m) = \mathrm{Det}[\alpha_i \odot \alpha_j] \tag{1.25}$$

Under a geometric interpretation of the space in which vectors are interpreted as representing displacements, the concept of measure corresponds to the concept of magnitude. The magnitude of a vector is its length, the magnitude of a bivector is the area of the parallelogram formed by its two vectors, and the magnitude of a trivector is the volume of the parallelepiped formed by its three vectors. The magnitude of a scalar is the scalar itself.

The magnitude of a vector x is, as expected, given by the standard formula.

$$|x| = \sqrt{x \odot x} \tag{1.26}$$

The magnitude of a bivector x∧y is given by formula 1.24 as

$$|x \wedge y| = \sqrt{(x \wedge y) \odot (x \wedge y)} = \sqrt{\mathrm{Det}\!\begin{bmatrix} x \odot x & x \odot y \\ x \odot y & y \odot y \end{bmatrix}} \tag{1.27}$$

Of course, a bivector may be expressed in an infinity of ways as the exterior product of two vectors, for example

$$B = x \wedge y = x \wedge (y + a\,x)$$

From this, the square of its area may be written in either of two ways:

$$B \odot B = (x \wedge y) \odot (x \wedge y)$$
$$B \odot B = (x \wedge (y + a\,x)) \odot (x \wedge (y + a\,x))$$

However, multiplying out these expressions using formula 1.23 shows that terms cancel in the second expression, reducing both to the same expression.

$$B \odot B = (x \odot x)(y \odot y) - (x \odot y)^2$$
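This cancellation is quickly confirmed numerically. The sketch below (plain Python, Euclidean dot product assumed; `measure_sq` is an illustrative name) checks that the squared measure computed from x∧y and from x∧(y + a x) agree for several values of the scalar a.

```python
# Sketch: the squared measure of a bivector is independent of how the
# bivector is factored into two vectors.

def dot(a, b):
    # Euclidean scalar product.
    return sum(p * q for p, q in zip(a, b))

def measure_sq(x, y):
    # |x^y|^2 = (x.x)(y.y) - (x.y)^2
    return dot(x, x) * dot(y, y) - dot(x, y) ** 2

x, y = [1, 2, 2], [3, 0, 4]
for a in (-2, 1, 5):
    y2 = [yi + a * xi for xi, yi in zip(x, y)]   # same bivector, new factors
    assert measure_sq(x, y2) == measure_sq(x, y) == 104
```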

Thus the measure of a bivector is independent of the actual vectors used to express it. Geometrically interpreted, this means that the area of the corresponding parallelogram (with sides corresponding to the displacements represented by the vectors) is independent of its shape. These results extend straightforwardly to simple elements of any grade.

The measure of an element is equal to the measure of its complement. By the definition of the interior product, and formulae 1.18 and 1.19, we have

$$\overline{\alpha_m} \odot \overline{\alpha_m} = \overline{\alpha_m} \vee \overline{\overline{\alpha_m}} = (-1)^{m(n-m)}\,\overline{\alpha_m} \vee \alpha_m = \alpha_m \vee \overline{\alpha_m} = \alpha_m \odot \alpha_m$$

$$\overline{\alpha_m} \odot \overline{\alpha_m} = \alpha_m \odot \alpha_m \tag{1.28}$$

A unit element α̂_m can be defined by the ratio of the element to its measure.

$$\hat{\alpha}_m = \frac{\alpha_m}{|\alpha_m|} \tag{1.29}$$

Calculating interior products


We can use definition 1.20 to calculate interior products directly from their definition. In what follows we calculate the interior products of representative basis elements of a 3-space with Euclidean metric. As expected, the scalar products e1⊙e1 and e1⊙e2 turn out to be 1 and 0 respectively.

$$e_1 \odot e_1 = e_1 \vee \overline{e_1} = e_1 \vee (e_2 \wedge e_3) = (e_1 \wedge e_2 \wedge e_3) \vee 1 = 1_3 \vee 1 = 1$$
$$e_1 \odot e_2 = e_1 \vee \overline{e_2} = e_1 \vee (e_3 \wedge e_1) = (e_1 \wedge e_3 \wedge e_1) \vee 1 = 0 \vee 1 = 0$$

Inner products of identical basis 2-elements are unity:


(e1∧e2)∘(e1∧e2) = (e1∧e2) ∨ (e1∧e2)‾ = (e1∧e2) ∨ e3 ≡ (e1∧e2∧e3) ∨ 1 ≡ 1

Inner products of non-identical basis 2-elements are zero:


(e1∧e2)∘(e2∧e3) = (e1∧e2) ∨ (e2∧e3)‾ = (e1∧e2) ∨ e1 ≡ (e1∧e2∧e1) ∨ 1 ≡ 0

If a basis 2-element contains a given basis 1-element, then their interior product is not zero:
(e1∧e2)∘e1 = (e1∧e2) ∨ e̅1 = (e1∧e2) ∨ (e2∧e3) ≡ (e1∧e2∧e3) ∨ e2 ≡ e2

If a basis 2-element does not contain a given basis 1-element, then their interior product is zero:
(e1∧e2)∘e3 = (e1∧e2) ∨ e̅3 = (e1∧e2) ∨ (e1∧e2) ≡ 0
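These evaluations can be cross-checked in coordinates. The sketch below (plain Python, orthonormal basis assumed) uses the Gram-determinant form of the inner product of 2-elements (formula 1.24) together with the bivector-vector expansion (x1∧x2)∘x = (x∘x1) x2 - (x∘x2) x1 derived later in this section:

```python
# Coordinate cross-check of the basis interior products above
# (orthonormal basis assumed), using the Gram-determinant form of
# the inner product of 2-elements and the bivector-vector expansion.

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def inner22(a1, a2, b1, b2):
    """(a1^a2) o (b1^b2) as a 2x2 Gram determinant."""
    return dot(a1, b1) * dot(a2, b2) - dot(a1, b2) * dot(a2, b1)

def interior21(x1, x2, x):
    """(x1^x2) o x = (x.x1) x2 - (x.x2) x1, in coordinates."""
    return [dot(x, x1) * q - dot(x, x2) * p for p, q in zip(x1, x2)]

e1, e2, e3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]

r1 = inner22(e1, e2, e1, e2)    # (e1^e2) o (e1^e2)
r2 = inner22(e1, e2, e2, e3)    # (e1^e2) o (e2^e3)
r3 = interior21(e1, e2, e1)     # (e1^e2) o e1
r4 = interior21(e1, e2, e3)     # (e1^e2) o e3
```

The four results reproduce the values 1, 0, e2 and 0 computed above.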

Expanding interior products


To expand interior products, we use the Interior Common Factor Theorem. This theorem shows how an interior product of a simple m-element α_m with another, not necessarily simple, element β_k of equal or lower grade may be expressed as a linear combination of the ν (= Binomial[m, k]) essentially different factors αᵢ′ (of grade m-k) of the simple element of higher grade.

α_m ∘ β_k ≡ Σ_{i=1}^{ν} (αᵢ ∘ β_k) αᵢ′        1.30

α_m ≡ α₁∧α₁′ ≡ α₂∧α₂′ ≡ … ≡ α_ν∧α_ν′

Here each αᵢ is of grade k, and each αᵢ′ is of grade m-k.

For example, the Interior Common Factor Theorem may be used to prove a relationship involving the interior product of a 1-element x with the exterior product of two factors, neither of which need be simple. This relationship, and the special cases that derive from it, find application throughout the explorations in the rest of this book.

(α_m ∧ β_k) ∘ x ≡ (α_m ∘ x) ∧ β_k + (-1)^m α_m ∧ (β_k ∘ x)        1.31

The interior product of a bivector and a vector


Suppose x is a vector and B = x1∧x2 is a simple bivector. The interior product of the bivector B with the vector x is the vector B∘x. This can be expanded by the Interior Common Factor Theorem, or the formula above derived from it, to give:

B∘x = (x1∧x2)∘x = (x∘x1) x2 - (x∘x2) x1

Since B x is expressed as a linear combination of x1 and x2 it is clearly contained in the bivector B.


B∧(B∘x) = 0

The resulting vector B∘x is also orthogonal to x. We can show this by taking its scalar product with x, or by using formula 1.24.

(B∘x)∘x = B∘(x∧x) = 0

If B̂ is the unit bivector of B, the projection x∥ of x onto B is given by

x∥ = -B̂∘(B̂∘x)

The component x⊥ of x orthogonal to B is given by

x⊥ = (B̂∧x)∘B̂

(Figure: the interior product of a bivector with a vector.)

These concepts may easily be extended to geometric entities of higher grade. We will explore this further in Chapter 6.
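A small numerical sketch (plain Python, hypothetical vectors in a Euclidean 3-space) confirms the two properties just derived: B∘x lies in the bivector B, and is orthogonal to x.

```python
# For B = x1 ^ x2, the vector B o x = (x.x1) x2 - (x.x2) x1
# lies in B (their exterior product vanishes) and is orthogonal to x.

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def interior(x1, x2, x):
    """B o x for B = x1 ^ x2, in coordinates."""
    return [dot(x, x1) * q - dot(x, x2) * p for p, q in zip(x1, x2)]

def triple(u, v, w):
    """Coefficient of e1^e2^e3 in u ^ v ^ w (a 3x3 determinant)."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

x1, x2 = [1, 0, 2], [0, 3, -1]   # hypothetical factors of B
x = [2, 1, 1]

bx = interior(x1, x2, x)
orthogonal = dot(bx, x)          # (B o x) o x -> 0
contained = triple(x1, x2, bx)   # B ^ (B o x) -> 0
```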

The cross product


The cross or vector product of the three-dimensional vector calculus of Gibbs et al. [Gibbs 1928] corresponds to two operations in Grassmann's more general calculus. Taking the cross product of two vectors in three dimensions corresponds to taking the complement of their exterior product. However, whilst the usual cross product formulation is valid only for vectors in three dimensions, the exterior product formulation is valid for elements of any grade in any number of dimensions. Therefore the opportunity exists to generalize the concept. Because our generalization reduces to the usual definition under the usual circumstances, we take the liberty of continuing to refer to the generalized cross product as, simply, the cross product.

The cross product of α_m and β_k is denoted α_m × β_k and is defined as the complement of their exterior product. The cross product of an m-element and a k-element is thus an (n-(m+k))-element.

α_m × β_k = (α_m ∧ β_k)‾        1.32

This definition preserves the basic property of the cross product: that the cross product of two elements is an element orthogonal to both, and reduces to the usual notion for vectors in a three-dimensional metric vector space. For 1-elements xi the definition has the following consequences, independent of the dimension of the space.

The triple cross product is a 1-element in any number of dimensions.

(x1 × x2) × x3 ≡ (x1∧x2)∘x3 ≡ (x3∘x1) x2 - (x3∘x2) x1        1.33

The box product, or triple vector product, is an (n-3)-element, and therefore a scalar only in three dimensions.

(x1 × x2) ∘ x3 ≡ (x1∧x2∧x3)‾        1.34

The scalar product of two cross products is a scalar in any number of dimensions.

(x1 × x2) ∘ (x3 × x4) ≡ (x1∧x2) ∘ (x3∧x4)        1.35

The cross product of two cross products is a (4-n)-element, and therefore a 1-element only in three dimensions. It corresponds to the regressive product of the two exterior products.

(x1 × x2) × (x3 × x4) ≡ (x1∧x2) ∨ (x3∧x4)        1.36

The cross product of the three-dimensional vector calculus requires the space to have a metric, since it is defined to be orthogonal to its factors. Equation 1.36, however, shows that in this particular case the result does not require a metric.
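In three dimensions the complement-of-wedge definition reproduces the familiar cross product. A plain Python sketch (orthonormal basis assumed) checks the orthogonality property and the triple cross product expansion of formula 1.33:

```python
# The cross product as the complement of the exterior product. In a
# Euclidean 3-space with orthonormal basis, the complement maps
# e2^e3 -> e1, e3^e1 -> e2, e1^e2 -> e3, giving the usual components.

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def cross(x, y):
    """Complement of x ^ y: components on (e1, e2, e3)."""
    return [x[1] * y[2] - x[2] * y[1],    # from e2^e3
            x[2] * y[0] - x[0] * y[2],    # from e3^e1
            x[0] * y[1] - x[1] * y[0]]    # from e1^e2

x1, x2, x3 = [1, 2, 3], [4, 0, -1], [2, 5, 7]   # hypothetical vectors

c = cross(x1, x2)
orth1, orth2 = dot(c, x1), dot(c, x2)   # both 0: orthogonal to its factors

# Triple cross product expansion, as in 1.33:
lhs = cross(cross(x1, x2), x3)
rhs = [dot(x3, x1) * b - dot(x3, x2) * a for a, b in zip(x1, x2)]
```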

1.7 Exploring Screw Algebra


To be completed

1.8 Exploring Mechanics


To be completed

1.9 Exploring Grassmann Algebras


To be completed

1.10 Exploring The Generalized Product


To be completed

1.11 Exploring Hypercomplex Algebras


To be completed

1.12 Exploring Clifford Algebras


To be completed

1.13 Exploring Grassmann Matrix Algebras


To be completed

1.14 The Various Types of Linear Product


Introduction
In this section we explore the different possible types of linear product axiom that can exist between two elements of a linear space. Our exploration will take a different approach to that used by Grassmann in his Ausdehnungslehre of 1862 (Chapter 2, Section 3: The various types of product structure), but we will arrive at essentially the same conclusions: that there are just two types of linear product axioms: one asserting the symmetry of the product operation, and the other asserting the antisymmetry of the product operation. Of course, this does not mean to imply that all linear products must be either symmetric or antisymmetric! Only that, if there is an axiom relating the products of two arbitrary elements of a linear space, then it must either define the product operation to be symmetric, or must define it to be antisymmetric. Intuitively, this makes sense, corresponding to there being just two parities: even and odd. Suppose a and b are (any) two distinct non-zero elements of a given linear space. To help make the development more concrete, we can conceive of them as 1-elements or vectors, but this is not necessary. A linear product axiom involving a and b is defined as one that remains valid under the substitutions
a → p a + q b        b → r a + s b


where p, q, r, and s are scalars, perhaps zero. We also assume distributivity and commutativity with scalars. We denote the product with a dot: a.b. Note that, while a and b must both belong to the same linear space, we make no assertions as to where the product belongs.

There are two essentially different forms of product we can form from the two distinct elements a and b: those involving an element with itself (a.a and b.b), and those involving distinct elements (a.b and b.a). For a product of an element with itself, there are just two distinct possibilities: either it is zero, or it is non-zero. In what follows, we shall take each of these two cases and explore what relations it might imply on products of distinct elements.

Case 1 The product of an element with itself is zero


Suppose a.a = 0 for every element a (thus including b) of the linear space. The assumption of linearity then implies that the product of any linear combination of a and b with itself is also zero. In particular

(a + b).(a + b) = 0

Since we have assumed the product is distributive, we can expand it to give

(a + b).(a + b) = a.a + a.b + b.a + b.b = 0

But since a.a = 0 and b.b = 0, this reduces to

a.b + b.a = 0

Conversely, if we suppose that a.b + b.a = 0 for all a and b, we can replace b with a to obtain 2 a.a = 0, whence a.a = 0. Similarly b.b = 0. Thus for Case 1 we have

a.a = 0,  b.b = 0    ⟺    a.b + b.a = 0        1.37

This means that if a linear product is defined to be nilpotent (that is, a.a = 0), then necessarily it is antisymmetric. Conversely, if it is antisymmetric, it is also nilpotent.
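A concrete instance of such a product is easy to exhibit. The sketch below (plain Python, hypothetical values) takes, for vectors in the plane, a.b = a1 b2 - a2 b1, the coefficient of e1∧e2 in a∧b; nilpotency and antisymmetry then hold together, as the derivation requires:

```python
# A concrete nilpotent linear product: for vectors in the plane,
# take a.b = a1*b2 - a2*b1 (the coefficient of e1^e2 in a ^ b).
# Nilpotency forces antisymmetry, matching 1.37.

def product(a, b):
    return a[0] * b[1] - a[1] * b[0]

a, b = (2, 3), (-1, 4)
s = (a[0] + b[0], a[1] + b[1])                 # a + b

nilpotent_a = product(a, a)                    # -> 0
nilpotent_sum = product(s, s)                  # (a+b).(a+b) -> 0
antisymmetric = product(a, b) + product(b, a)  # a.b + b.a -> 0
```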

Case 2 The product of an element with itself is non-zero


The second case is a little more complex to explore. Suppose a.a is non-zero for every non-zero element a (thus including b) of the linear space. We would like to see if this constraint implies any linear relationship between the products a.a, a.b, b.a and b.b. The most general linear relationship between these products may be written:

A:    α a.a + β a.b + γ b.a + δ b.b = 0

where α, β, γ and δ are scalars. First, replace a by -a to obtain

B:    α a.a - β a.b - γ b.a + δ b.b = 0

then add equations A and B to give

C:    2 α a.a + 2 δ b.b = 0

Since, for the case we are considering, the product of no element with itself is zero, we conclude that both α and δ must be zero. Thus, equation A becomes

D:    β a.b + γ b.a = 0

Now, in this equation, replace b by a to give

E:    (β + γ) a.a = 0

Again, since for this case a.a is not zero, we must have that β + γ is zero, that is, γ is equal to -β. Replace γ by -β in equation D to give

F:    β (a.b - b.a) = 0

Now, either β is zero, or a.b - b.a is zero. In the case β is zero we also have that γ is zero. Additionally, from equation C we have that both α and δ are zero. This condition leads to no linear axiom of the form A being possible. The remaining possibility is that a.b - b.a = 0. Thus we can summarize the results as leading to either of the two possibilities:

a.a ≠ 0,  b.b ≠ 0:    either a.b - b.a = 0, or no linear product axiom exists        1.38

This means that if a linear product is defined such that the product of any non-zero element with itself is never zero, then necessarily either no linear relation is implied between products, or the product operation is symmetric, that is, a.b = b.a.
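The archetypal product of this second kind is the Euclidean scalar product, sketched here in plain Python with hypothetical values:

```python
# A concrete symmetric linear product: the Euclidean scalar product.
# The product of any non-zero element with itself is non-zero, and
# the product is symmetric, matching 1.38.

def product(a, b):
    return sum(p * q for p, q in zip(a, b))

a, b = (2, 3), (-1, 4)
self_a = product(a, a)                       # 13, non-zero
symmetric = product(a, b) == product(b, a)   # a.b = b.a
```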

Summary
In sum, there exist just two types of linear product axiom: one asserting the symmetry of the product operation, and the other asserting the antisymmetry of the product operation. This does not preclude linear products which have no axioms relating products of two elements.

Examples
Symmetric products

Inner products  These are symmetric for factors of (equal) arbitrary grade.

Scalar products  These are a special case of inner products for factors of grade 1.

Exterior products  These are symmetric for factors of (equal) even grade.

Algebraic products  The ordinary algebraic product is symmetric. It is equivalent to the special cases of the inner or exterior products of elements of zero grade.

Antisymmetric products

Exterior products  These are antisymmetric for factors of (equal) odd grade.

Regressive products  These are antisymmetric for factors of (equal) grade, but one whose parity is different from that of the dimension of the space.

Generalized products  These are antisymmetric for factors of (equal) grade, but one whose parity is different from that of the order of the product.

Products defined as sums of symmetric and antisymmetric products


Two types of important product which have not yet been mentioned are the Clifford product, and the hypercomplex product. These products both involve sums of symmetric and antisymmetric products. Since the products are neither symmetric, nor antisymmetric, we might expect no associated linear product axioms to exist for the Clifford and hypercomplex products. To confirm this, we take the simplest case of the Clifford product of two 1-elements, and posit that there does exist a linear product axiom. Our expectation is that this will lead to a contradiction. Consider the linear product axiom
A:    α a.a + β a.b + γ b.a + δ b.b = 0

The Clifford product of two 1-elements is defined as the sum of their exterior product and their inner (scalar) product, so now let a.b = a∧b + a∘b.

B:    α (a∧a + a∘a) + β (a∧b + a∘b) + γ (b∧a + b∘a) + δ (b∧b + b∘b) = 0

Since the exterior and scalar product operations are distinct, leading to elements of different grade, we can write B as two separate linear product axioms:

C:    α (a∧a) + β (a∧b) + γ (b∧a) + δ (b∧b) = 0

D:    α (a∘a) + β (a∘b) + γ (b∘a) + δ (b∘b) = 0

From the antisymmetry of the exterior product and the symmetry of the scalar product, C and D simplify to

E:    (β - γ) (a∧b) = 0

F:    α (a∘a) + (β + γ) (a∘b) + δ (b∘b) = 0

From E we find that β must equal γ. Hence F becomes

G:    α (a∘a) + 2 β (a∘b) + δ (b∘b) = 0

Put a equal to -a in G and subtract the result from G to give 4 β (a∘b) = 0. Hence β must be zero, giving

H:    α (a∘a) + δ (b∘b) = 0

Now put b equal to a to get (α + δ) (a∘a) = 0, whence α must be equal to -δ, yielding finally

I:    a∘a - b∘b = 0

This is clearly a contradiction (as we suspected might be the case), hence there is no linear product axiom relating Clifford products of 1-elements.
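The mixed symmetry is visible even in the smallest example. The sketch below (plain Python, hypothetical vectors in the plane) represents the Clifford product of two 1-elements by the pair (scalar part a∘b, bivector part a∧b); the result is neither symmetric nor antisymmetric:

```python
# The Clifford product of two vectors in the plane, represented by
# the pair (scalar part a o b, coefficient of e1^e2 in a ^ b).
# It is neither symmetric nor antisymmetric in general.

def clifford(a, b):
    scalar = a[0] * b[0] + a[1] * b[1]     # inner (scalar) product
    bivector = a[0] * b[1] - a[1] * b[0]   # exterior product part
    return (scalar, bivector)

a, b = (2, 3), (-1, 4)
ab = clifford(a, b)
ba = clifford(b, a)
neither = ab != ba and ab != (-ba[0], -ba[1])
```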

1.15 Terminology
Terminology
Grassmann algebra is really quite a simple structure, straightforwardly generated by introducing a product operation onto the elements of a linear space to generate a suite of new linear spaces. If we only wish to describe this algebra and its elements, the terminology can be quite compact and consistent with accepted practice. However, in applications of the algebra, for example to geometry and physics, we may want to work in an interpreted version of the algebra. This new interpretation may be as simple as viewing one of the basis elements of the underlying linear space as an origin point, and the rest as vectors, but the interpreted algebraic, and therefore terminological, distinctions are now significantly increased.

Whereas the exterior product on an algebraically uninterpreted basis multiplies the basis elements into a single suite of higher-grade elements, the exterior product on such an interpreted basis multiplies the basis elements into two suites of interpreted higher-grade elements: those containing the origin as a factor, and those which do not. This is of course more complex, and requires more terminology to make all the distinctions.

Deciding on the terminology to use, however, poses a challenge, because many of the distinctions, while new to some, are known to others by different names. For example, E. A. Milne in his Vectorial Mechanics, although not mentioning Grassmann or the exterior product, uses the term 'line vector' for the entity we model by the exterior product of a point with a vector. Robert Stawell Ball in his A Treatise on the Theory of Screws, only briefly mentioning Grassmann, uses the word 'screw' for the entity we model by the sum of a 'line vector' and what is currently known as a bivector.

It is not within my capabilities to extract complete consistency from historically precedent terminology in this highly applicable yet largely unexplored area of mathematics. I have had to adopt some compromises. These compromises have been directed by the desire that the terminology of the book be consistent, historically cognizant, modern, and intuitive. It is certainly not perfect.


One compromise, adopted to avoid tedious extra-locution, is to extend the term space in this book to also mean a Grassmann algebra. I have done this because it seems to me that when we visualize space, it is not only inhabited by points and vectors, but also by lines, planes, bivectors, trivectors, and other multi-dimensional entities.

Linear space
In this book a linear space is defined in the usual way as consisting of an abelian group under addition, a field, and a (scalar) multiplication operation between their elements. The dimension of a linear space is the number of elements in a basis of the space.

Underlying linear space


The underlying linear space L_1 of a Grassmann algebra is the linear space which, together with the exterior product operation, generates the algebra. The field of L_1 is denoted L_0, and the dimension of L_1 by n (or, when using GrassmannAlgebra, n or D).

Exterior linear space


An exterior linear space L_m is the linear space whose basis consists of all the essentially different exterior products of basis elements of L_1, m at a time. The dimension of L_m is therefore Binomial[n, m]. Its elements are called m-elements. The integer m is called the grade of L_m and its elements.

Exterior linear spaces and their elements


Grade   Space   Basis                                   Element
0       L_0     1                                       0-element
1       L_1     e1, e2, e3, …, en                       1-element
2       L_2     e1∧e2, e1∧e3, …, en-1∧en                2-element
m       L_m     e1∧e2∧…∧em, …, en-m+1∧…∧en-1∧en         m-element
n       L_n     e1∧e2∧…∧en                              n-element

Grassmann algebra

A Grassmann algebra L is the direct sum L = L_0 ⊕ L_1 ⊕ L_2 ⊕ … ⊕ L_m ⊕ … ⊕ L_n of an underlying linear space L_1, its field L_0, and the exterior linear spaces L_m (2 ≤ m ≤ n). As well as being an algebra, L is a linear space of dimension 2^n, whose basis consists of all the basis elements of its exterior linear spaces. Its elements of grade m are called m-elements, and its multigraded elements are called L-elements. An m-element is called simple if it can be expressed as the exterior product of 1-element factors.
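The dimension counts quoted above are a one-line check in plain Python (with a hypothetical n):

```python
# The exterior linear space L_m has dimension Binomial[n, m], so the
# Grassmann algebra, the direct sum of L_0, ..., L_n, has dimension
# 2^n (the sum of the binomial coefficients).

from math import comb

n = 4                                       # hypothetical dimension
dims = [comb(n, m) for m in range(n + 1)]   # dimensions of L_0..L_n
total = sum(dims)                           # equals 2**n
```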

Space
In this book, the term space is another term for a Grassmann algebra. The term n-space is another term for a Grassmann algebra whose underlying linear space is of dimension n. By abuse of terminology we will refer to the dimension and basis of an n-space as that of its underlying linear space. In GrassmannAlgebra you can declare that you want to work in an n-space by entering !n.

The space of a simple m-element is the m-space whose underlying linear space consists of all the 1-elements in the m-element. A 1-element is said to be in a simple m-element if and only if their exterior product is zero. A basis of the space of a simple m-element may be taken as any set of m independent factors of the m-element. An n-space and the space of its n-element are identical.

Vector space
A vector space is a space in which the elements of the underlying linear space have been interpreted as vectors. This interpretation views a vector as an (unlocated) direction and graphically depicts it as an arrow. An m-element in a vector space is called an m-vector or multivector. A 2-vector is also called a bivector; a 3-vector is also called a trivector. A simple m-vector is viewed as a multi-dimensional direction, or m-direction.

An element of a vector space may also be called a geometric entity (or simply, entity) to emphasize that it has a geometric interpretation. An m-vector is an entity of grade m. Vectors are 1-entities, bivectors are 2-entities.

A vector n-space is a vector space whose underlying linear space is of dimension n. A vector 3-space is thus richer than the usual 3-dimensional vector algebra, since it also contains bivectors and trivectors, as well as the exterior product operation.


A vector n-space and its entities


Grade   Space   Basis                                   Entity
0       L_0     1                                       scalar
1       L_1     e1, e2, e3, …, en                       vector
2       L_2     e1∧e2, e1∧e3, …, en-1∧en                bivector
m       L_m     e1∧e2∧…∧em, …, en-m+1∧…∧en-1∧en         m-vector
n       L_n     e1∧e2∧…∧en                              n-vector

The space of a simple m-vector is the m-space whose underlying linear space consists of all the vectors in the m-vector.

Bound space
A bound space is a vector space to whose basis has been added an origin. The origin is an element with the geometric interpretation of a point. A bound n-space is a vector n-space to whose basis has been added an origin. The symbol n refers to the number of vectors in the basis of the bound space, thus making the dimension of a bound n-space n+1.

In GrassmannAlgebra you can declare that you want to work in a bound n-space by entering "n. You can determine the dimension of a space by entering D, which in this case would return n+1. Thus the underlying linear space of a bound n-space is of dimension n+1.

An element of a bound space may also be called a geometric entity (or simply, entity) to emphasize it has a geometric interpretation. An m-entity is an entity of grade m. Vectors and points are 1-entities; bivectors and bound vectors are 2-entities.

A bound n-space is thus richer than a vector n-space, since as well as containing vectorial (unlocated) entities (vectors, bivectors, …), it also contains bound (located) entities (points, bound vectors, …) and sums of these. In contradistinction to the term bound space, a vector space may sometimes be called a free space. A bound 3-space is a closer algebraic model to physical 3-space than a vector 3-space, even though its underlying linear space is of four dimensions.


A bound n-space and its entities


Grade   Space     Basis                              Free Entity   Bound Entity
0       L_0       1                                  scalar
1       L_1       𝒪, e1, e2, e3, …, en               vector        weighted point
2       L_2       𝒪∧e1, 𝒪∧e2, …, en-1∧en             bivector      bound vector
m       L_m       𝒪∧e1∧…∧em-1, …, en-m+1∧…∧en        m-vector      bound (m-1)-vector
n       L_n       𝒪∧e1∧…∧en-1, …, e1∧…∧en            n-vector      bound (n-1)-vector
n+1     L_n+1     𝒪∧e1∧…∧en                                        bound n-vector

(Here 𝒪 denotes the origin.)
The space of a bound simple m-vector is the (m+1)-space whose underlying linear space consists of all the points and vectors in the bound simple m-vector.

Geometric objects
The geometric object of a bound simple entity is the set of all points in the entity, that is, all points whose exterior product with the bound simple entity is zero. The geometric object of a bound scalar or weighted point is the point itself. The geometric object of a bound vector is a line (an infinite set of points). The geometric object of a bound simple bivector is a plane (a doubly infinite set of points).

Since the properties of geometric objects are well modelled algebraically by their corresponding entities, we find it convenient to compute with geometric objects by computing with their corresponding bound simple entities; thus, for example, computing the intersection of two lines by computing the regressive product of two bound vectors.


Geometric objects and their corresponding entities


Grade   Space     Geometric Object   Entity
1       L_1       point              bound scalar
2       L_2       line               bound vector
3       L_3       plane              bound simple bivector
m       L_m       (m-1)-plane        bound simple (m-1)-vector
n       L_n       hyperplane         bound (n-1)-vector
n+1     L_n+1     n-plane            bound n-vector

Congruence and orientation


Two elements are said to be congruent if one is a scalar multiple (not zero) of the other. The scalar multiple associated with two congruent elements is called the congruence factor. Two congruent elements are said to be of opposite orientation if their congruence factor is negative.

Points, vectors and carriers


The origin of a bound space is the basis element designated as the origin. A vector in a bound space is a 1-element that does not involve the origin. A point is the sum of the origin with a vector. This vector is called the position vector of the point. A weighted point is a scalar multiple of a point. The scalar multiple is called the weight of the point.

The sum of a point and a vector is another point. The vector may be said to carry the first point into the second.

The sum of a bound simple m-vector and a simple (m+1)-vector containing the m-vector is another bound simple m-vector. The simple (m+1)-vector may be said to carry the first bound simple m-vector into the second.

Position, direction and sense


Congruent weighted points are said to define the same position. Congruent vectors are said to define the same direction.


Congruent vectors of opposite orientation are also said to have opposite sense. Congruent simple m-vectors are said to define the same m-direction. Congruent bound simple elements are said to define the same position and direction.

A bound simple m-vector that has been carried to a new bound simple m-vector by the addition of an (m+1)-vector may be said to have been carried to a new position. The m-direction of the bound simple m-vector is unchanged. The geometric object of a bound simple entity may be said to have the position and direction of its entity.

The notions of position, direction, and sense are geometric interpretations of the algebraic elements.

1.16 Summary
To be completed


2 The Exterior Product

2.1 Introduction
The exterior product is the natural fundamental product operation for elements of a linear space. Although it is not a closed operation (that is, the product of two elements is not itself an element of the same linear space), the products it generates form a series of new linear spaces, the totality of which can be used to define a true algebra, which is closed.

The exterior product is naturally and fundamentally connected with the notion of linear dependence. Several 1-elements are linearly dependent if and only if their exterior product is zero. All the properties of determinants flow naturally from the simple axioms of the exterior product. The notion of 'exterior' is equivalent to the notion of linear independence, since elements which are truly exterior to one another (that is, not lying in the same space) will have a non-zero product.

An exterior product of 1-elements has a straightforward geometric interpretation as the multidimensional equivalent to the 1-elements from which it is constructed. And if the space possesses a metric, its measure or magnitude may be interpreted as a length, area, volume, or hyper-volume according to the grade of the product.

However, the exterior product does not require the linear space to possess a metric. This is in direct contrast to the three-dimensional vector calculus, in which the vector (or cross) product does require a metric, since it is defined as the vector orthogonal to its two factors, and orthogonality is a metric concept. Some of the results which use the cross product can equally well be cast in terms of the exterior product, thus avoiding unnecessary assumptions.

We start the chapter with an informal discussion of the exterior product, and then collect the axioms for exterior linear spaces together more formally into a set of 12 axioms which combine those of the underlying field and underlying linear space with the specific properties of the exterior product.
In later chapters we will derive equivalent sets for the regressive and interior products, to which we will often refer.

Next, we pin down the linear space notions that we have introduced axiomatically by introducing a basis onto the underlying (primary) linear space and showing how this can induce a basis onto each of the other linear spaces generated by the exterior product. A constantly useful partner to a basis element of any of these exterior linear spaces is its cobasis element. The exterior product of a basis element with its cobasis element always gives the basis n-element of the algebra. We develop the notion of cobasis following that of basis.

The next three sections look at some standard topics in linear algebra from a Grassmannian viewpoint: determinants, cofactors, and the solution of systems of linear equations. We show that all the well-known properties of determinants, cofactors, and linear equations proceed directly from the properties of the exterior product.

The next two sections discuss two concepts dependent on the exterior product that have no direct counterpart in standard linear algebra: simplicity and exterior division. If an element is simple, it can be factorized into 1-elements. If a simple element is divided by another simple element contained in it, the result is not unique. Both these properties will find application later in our explorations of geometry.

At the end of the chapter, we take the opportunity to introduce the concepts of span and cospan of a simple element, and the concept that we can define multilinear forms involving the spans and cospans of an element which are invariant to any refactorization of the element. Following this we explore their application to calculating the union and intersection of two simple elements, and the factorization of a simple element. These applications involve only the exterior product. But we will continue to use the concepts of span, cospan and multilinear forms throughout the book, where they will generally involve other product operations.

2.2 The Exterior Product


Basic properties of the exterior product
Let L denote the field of real numbers. Its elements, called 0-elements (or scalars) will be denoted
0

by a, b, c, . Let L denote a linear space of dimension n whose field is L. Its elements, called 11 0

elements, will be denoted by x, y, z, . We call L a linear space rather than a vector space and its elements 1-elements rather than vectors
1

because we will be interpreting its elements as points as well as vectors. A second linear space denoted L may be constructed as the space of sums of exterior products of 12

elements taken two at a time. The exterior product operation is denoted by , and has the following properties:

a (x∧y) = (a x)∧y        2.1

x∧(y + z) = x∧y + x∧z        2.2

(y + z)∧x = y∧x + z∧x        2.3

x∧x = 0        2.4

An important property of the exterior product (which is sometimes taken as an axiom) is the antisymmetry of the product of two 1-elements.

x∧y = -y∧x        2.5

This may be proved from the distributivity and nilpotency axioms since:
(x + y)∧(x + y) = 0
x∧x + x∧y + y∧x + y∧y = 0
x∧y + y∧x = 0
x∧y = -y∧x
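Axioms 2.1 to 2.5 can be checked numerically for 1-elements. The sketch below (plain Python, hypothetical integer coordinates so the arithmetic is exact) represents x∧y by its components on the basis 2-elements of a 3-space:

```python
# Numerical check of axioms 2.1-2.5 for 1-elements of a 3-space,
# representing x ^ y by its components on e1^e2, e1^e3, e2^e3.

def wedge(x, y):
    return (x[0] * y[1] - x[1] * y[0],   # e1^e2 component
            x[0] * y[2] - x[2] * y[0],   # e1^e3 component
            x[1] * y[2] - x[2] * y[1])   # e2^e3 component

def add(u, v):
    return [p + q for p, q in zip(u, v)]

x, y, z = [1, 2, 3], [4, 0, -1], [2, 5, 7]
a = 3

ax_2_1 = wedge([a * c for c in x], y) == tuple(a * c for c in wedge(x, y))
ax_2_2 = wedge(x, add(y, z)) == tuple(p + q for p, q in zip(wedge(x, y), wedge(x, z)))
ax_2_3 = wedge(add(y, z), x) == tuple(p + q for p, q in zip(wedge(y, x), wedge(z, x)))
ax_2_4 = wedge(x, x) == (0, 0, 0)
ax_2_5 = wedge(x, y) == tuple(-c for c in wedge(y, x))
```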

An element of L_2 will be called a 2-element (of grade 2) and is denoted by a kernel letter with a '2' written below. For example:

α_2 = x∧y + z∧w + …

A simple 2-element is the exterior product of two 1-elements.

It is important to note the distinction between a 2-element and a simple 2-element (a distinction of no consequence in L_1, where all elements are simple). A 2-element is in general a sum of simple 2-elements, and is not generally expressible as the product of two 1-elements (except where L_1 is of dimension n ≤ 3). The structure of L_2, whilst still that of a linear space, is thus richer than that of L_1; that is, it has more properties.

Example: 2-elements in a three-dimensional space are simple


By way of example, the following shows that when L_1 is of dimension 3, every element of L_2 is simple.

Suppose that L_1 has basis e1, e2, e3. Then a basis of L_2 is the set of all essentially different (linearly independent) products of basis elements of L_1 taken two at a time: e1∧e2, e2∧e3, e3∧e1. (The product e1∧e2 is not considered essentially different from e2∧e1 in view of the anti-symmetry property.)

Let a general 2-element α_2 be expressed in terms of this basis as:

α_2 = a e1∧e2 + b e2∧e3 + c e3∧e1

Without loss of generality, suppose a ≠ 0. Then α_2 can be recast in the form below, thus proving the proposition.

α_2 = a (e1 - (b/a) e3) ∧ (e2 - (c/a) e3)

Historical Note
In the Ausdehnungslehre of 1844 Grassmann denoted the exterior product of two symbols a and b by simple concatenation, viz. ab; whilst in the Ausdehnungslehre of 1862 he enclosed them in square brackets, viz. [ab]. This notation has survived in the three-dimensional vector calculus as the 'box' product [α β γ] used for the triple scalar product. Modern usage denotes the exterior product operation by the wedge ∧, thus a∧b. Amongst other writers, Whitehead [1898] used the 1844 version whilst Forder [1941] and Cartan [1922] followed the 1862 version.

Declaring scalar and vector symbols in GrassmannAlgebra


The GrassmannAlgebra software needs to know which symbols it will treat as scalar (0-element) symbols, and which symbols it will treat as vector (1-element) symbols. These lists must be distinct (disjoint), as they have different interpretations in the algebra. It loads with default values for lists of these symbols. The default settings are:

ScalarSymbols  {a,b,c,d,e,f,g,h}
VectorSymbols  {p,q,r,s,t,u,v,w,x,y,z}

If you have just loaded the GrassmannAlgebra package, you will see these symbols listed in the status panes at the top of the palette. (You can load the package by clicking the red title button on the GrassmannAlgebra Palette, and get help on any of the topics or commands by clicking the associated ? button.) To declare your own list of scalar symbols, enter DeclareScalarSymbols[list].
DeclareScalarSymbols[{α, β, γ}]
{α, β, γ}

You will now see these new symbols reflected in the Current ScalarSymbols pane. Now, if you enter ScalarSymbols you see the new list:
ScalarSymbols
{α, β, γ}

You can always return to the default declaration by entering the command DeclareDefaultScalarSymbols or its alias ★S. (You can enter this with the keystroke combination ★S, or by using the 5-star symbol ★ on the palette.)

★S
{a, b, c, d, e, f, g, h}

The same procedures apply mutatis mutandis for vector symbols, with the word "Scalar" replaced by the word "Vector" and S by V.


As well as symbols in the lists ScalarSymbols and VectorSymbols, you can also include patterns. Any expression which conforms to such a pattern is considered to be a symbol of the respective type. For further information, check the Preferences section in the palette. Note that the five-pointed star symbol ★ is used in GrassmannAlgebra as the first character in each of its aliases to commonly used commands, objects or functions. You can reset all the default declarations (ScalarSymbols, VectorSymbols, Basis and Metric) by entering ★A, which you can access from the palette. We will do this often throughout the book to ensure we are starting our computations from the default environment.

Entering exterior products


The exterior product is symbolized by the wedge '∧'. This may be entered either by typing \[Wedge], by typing ^ as its escaped form, or by clicking on the ∧ button on the GrassmannAlgebra palette. As an infix operator, x∧y∧z∧⋯ is interpreted as Wedge[x,y,z,…]. A quick way to enter x∧y is to type x, click on ∧ in the palette, then type y. Other ways are to type x^y, or click the ∧ button on the palette. The first placeholder will be selected automatically for immediate typing. To move to the next placeholder, press the tab key. To produce a multifactored exterior product, click the palette button as many times as required. To take the exterior product with an existing expression, select the expression and click the ∧ button. For example, to type (x + y)∧(x + z), first type x + y and select it, click the ∧ button, then type (x + z). For further information, check the Expression Composition section in the palette.

2.3 Exterior Linear Spaces


Composing m-elements

In the previous section the exterior product was introduced as an operation on the elements of the linear space L₁ to produce elements belonging to a new space L₂. Just as L₂ was composed from sums of exterior products of elements of L₁ two at a time, so Lₘ may be composed from sums of exterior products of elements of L₁ m at a time.

A simple m-element is a product of the form:

x₁∧x₂∧⋯∧xₘ        xᵢ ∈ L₁


An m-element is a linear combination of simple m-elements. It is denoted by a kernel letter with an 'm' written below (rendered here as a subscript):

αₘ = a₁ (x₁∧x₂∧⋯∧xₘ) + a₂ (y₁∧y₂∧⋯∧yₘ) + ⋯        aᵢ ∈ L₀

The number m is called the grade of the m-element.

The exterior linear space L₀ has essentially only one element in its basis, which we take as the unit 1. All other elements of L₀ are scalar multiples of this basis element.

The exterior linear space Lₙ (where n is the dimension of L₁) also has essentially only one element in its basis: e₁∧e₂∧⋯∧eₙ. All other elements of Lₙ are scalar multiples of this basis element.

Any m-element, where m > n, is zero, since there are no more than n independent elements available from which to compose it, and the nilpotency property causes it to vanish.

Composing elements automatically


You can automatically compose an element by using ComposeSimpleForm. For example a general element in the algebra of a 3-space may be composed by entering an expression of placeholder elements x of grades 0, 1, 2 and 3 (the grades written below the kernel symbols):

ComposeSimpleForm[a x(0) + b x(1) + c x(2) + d x(3)]

a x0 + b x1 + c x2∧x3 + d x4∧x5∧x6

GrassmannAlgebra understands that the new symbols generated are vector symbols. You can verify this by checking the Current VectorSymbols status pane in the palette, or by entering VectorSymbols.

VectorSymbols
{p, q, r, s, t, u, v, w, x, y, z, x1, x2, x3, x4, x5, x6}

For further information, check the Expression Composition section in the palette.

Spaces and congruence


In this book, the term space is another term for a Grassmann algebra. The term n-space is another term for a Grassmann algebra whose underlying linear space is of dimension n. By abuse of terminology we will refer to the dimension and basis of an n-space as that of its underlying linear space. In GrassmannAlgebra you can declare that you want to work in an n-space by entering !n .

The space of a simple m-element is the m-space whose underlying linear space consists of all the 1-elements in the m-element. A basis of the space of a simple m-element may be taken as any set of m independent factors of the m-element. An n-space and the space of its n-element are identical.

A 1-element a is said to belong to, to be contained in, or simply, to be in a simple m-element αₘ if and only if their exterior product is zero, that is a∧αₘ = 0.

We may also say that a is in the space of αₘ, or that αₘ defines its own space.

A simple element βₖ is said to be contained in another simple element αₘ if and only if the space of βₖ is a subset of that of αₘ.

We say that two elements are congruent if one is a scalar multiple of the other, and denote the relationship with the symbol ≅. For example we may write:

αₘ ≅ a αₘ

We can therefore say that if two elements are congruent, then their spaces are the same. Note that whereas several congruent elements are linearly dependent, several linearly dependent elements are not necessarily congruent.

When we have one element equal to a scalar multiple of another, αₘ = a βₘ say, we may sometimes take the liberty of writing the scalar multiple as a quotient of the two elements:

a = αₘ / βₘ
These notions will be encountered many times in the rest of the book. An overview of the terminology used in this book can found at the end of Chapter 1.

The associativity of the exterior product


The exterior product is associative in all groups of (adjacent) factors. For example:

(x₁∧x₂)∧x₃∧x₄ = x₁∧(x₂∧x₃)∧x₄ = (x₁∧x₂∧x₃)∧x₄ = ⋯

Hence

(αₘ∧βₖ)∧γₚ = αₘ∧(βₖ∧γₚ)        2.6

Thus the brackets may be omitted altogether.

From this associativity together with the anti-symmetric property of 1-elements it may be shown that the exterior product is anti-symmetric in all (1-element) factors. That is, a transposition of any two 1-element factors changes the sign of the product. For example:

x₁∧x₂∧x₃∧x₄ = −x₃∧x₂∧x₁∧x₄ = x₃∧x₄∧x₁∧x₂ = ⋯

Furthermore, from the nilpotency axiom, a product with two identical 1-element factors is zero. For example:

x₁∧x₂∧x₃∧x₂ = 0

Example: Non-simple elements are not generally nilpotent

It should be noted that for simple elements αₘ, αₘ∧αₘ = 0, but that non-simple elements do not necessarily possess this property, as the following example shows.

Suppose L₁ is of dimension 4 with basis e₁, e₂, e₃, e₄. Then the following exterior product of identical elements is not zero:

(e₁∧e₂ + e₃∧e₄)∧(e₁∧e₂ + e₃∧e₄) = 2 e₁∧e₂∧e₃∧e₄
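This arithmetic can be confirmed without the package. The following minimal Python sketch (the tuple-keyed coefficient representation is our own, not the book's) squares the non-simple 2-element and, for contrast, a simple one:

```python
def wedge(u, v):
    """Exterior product on dicts mapping sorted index tuples to coefficients."""
    out = {}
    for iu, cu in u.items():
        for iv, cv in v.items():
            idx = iu + iv
            if len(set(idx)) != len(idx):        # nilpotency: repeated factor
                continue
            inv = sum(1 for i in range(len(idx))
                        for j in range(i + 1, len(idx)) if idx[i] > idx[j])
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + (-1) ** inv * cu * cv
    return {k: c for k, c in out.items() if c != 0}

b = {(1, 2): 1, (3, 4): 1}         # the non-simple 2-element e1^e2 + e3^e4
print(wedge(b, b))                  # {(1, 2, 3, 4): 2} -- not zero

simple = {(1, 2): 1}                # the simple 2-element e1^e2
print(wedge(simple, simple))        # {} -- zero, as for every simple element
```

The cross terms e₁∧e₂∧e₃∧e₄ and e₃∧e₄∧e₁∧e₂ are equal (an even number of transpositions relates them), which is where the factor 2 comes from.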

Transforming exterior products


Expanding exterior products

To expand an exterior product, you can use the function GrassmannExpand. For example, to expand the product in the previous example:

X = GrassmannExpand[(e1∧e2 + e3∧e4)∧(e1∧e2 + e3∧e4)]
e1∧e2∧e1∧e2 + e1∧e2∧e3∧e4 + e3∧e4∧e1∧e2 + e3∧e4∧e3∧e4

An alias for GrassmannExpand is #, which you can find on the GrasssmannAlgebra palette. Note that GrassmannExpand does not simplify the expression.

Simplifying exterior products


To simplify the expression you would use the GrassmannSimplify function. An alias for GrassmannSimplify is $, which you can also find on the palette. However in this case, you must first tell GrassmannAlgebra that you have chosen a 4-space with basis {e1, e2, e3, e4}, otherwise it will not know that the ei are basis 1-elements. You can do that by entering the alias !4 . (We will discuss the detail of declaring bases in a later section).

!4
{e1, e2, e3, e4}

Now GrassmannSimplify will simplify the expanded product.

$[X]
2 e1∧e2∧e3∧e4

Simplifying and expanding exterior products

To expand and then simplify the expression in one operation, you can use the single function GrassmannExpandAndSimplify (which has the alias %).

%[(e1∧e2 + e3∧e4)∧(e1∧e2 + e3∧e4)]
2 e1∧e2∧e3∧e4

Note that you can also use these functions to expand and/or simplify any Grassmann expression. For further information, check the Expression Transformation section in the palette.

2.4 Axioms for Exterior Linear Spaces


Summary of axioms
The axioms for exterior linear spaces are summarized here for future reference. They are a composite of the requisite field, linear space, and exterior product axioms, and may be used to derive the properties discussed informally in the preceding sections. Composite statements are given in list form.

1: The sum of m-elements is itself an m-element.

    {αₘ ∈ Lₘ, βₘ ∈ Lₘ}  ⇒  αₘ + βₘ ∈ Lₘ        2.7

2: Addition of m-elements is associative.

    (αₘ + βₘ) + γₘ = αₘ + (βₘ + γₘ)        2.8

3: m-elements have an additive identity (zero element).

    {∃ 0ₘ, 0ₘ ∈ Lₘ},  αₘ = 0ₘ + αₘ        2.9

4: m-elements have an additive inverse.

    {∃ −αₘ, −αₘ ∈ Lₘ},  0ₘ = αₘ + (−αₘ)        2.10

5: Addition of m-elements is commutative.

    αₘ + βₘ = βₘ + αₘ        2.11

6: The exterior product of an m-element and a k-element is an (m+k)-element.

    {αₘ ∈ Lₘ, βₖ ∈ Lₖ}  ⇒  αₘ∧βₖ ∈ Lₘ₊ₖ        2.12

7: The exterior product is associative.

    (αₘ∧βₖ)∧γⱼ = αₘ∧(βₖ∧γⱼ)        2.13

8: There is a unit scalar which acts as an identity under the exterior product.

    {∃ 1, 1 ∈ L₀},  αₘ = 1∧αₘ        2.14

9: Non-zero scalars have inverses with respect to the exterior product.

    {∃ a⁻¹, a⁻¹ ∈ L₀},  1 = a∧a⁻¹,  provided a ∈ L₀, a ≠ 0        2.15

10: The exterior product of elements of odd grade is anti-commutative.

    αₘ∧βₖ = (−1)^(m k) βₖ∧αₘ        2.16

11: Additive identities act as multiplicative zero elements under the exterior product.

    {∃ 0ₖ, 0ₖ ∈ Lₖ},  0ₖ∧αₘ = 0ₖ₊ₘ        2.17

12: The exterior product is both left and right distributive under addition.

    (αₘ + βₘ)∧γₖ = αₘ∧γₖ + βₘ∧γₖ
    αₘ∧(βₖ + γₖ) = αₘ∧βₖ + αₘ∧γₖ        2.18

13: Scalar multiplication commutes with the exterior product.

    αₘ∧(a βₖ) = (a αₘ)∧βₖ = a (αₘ∧βₖ)        2.19

A convention for the zeros


It can be seen from the above axioms that each of the linear spaces has its own zero. Apart from grade, these zeros all have the same properties, so that for simplicity in computations we will denote them all by the one symbol 0. However, this also means that we cannot determine the grade of this symbol 0 from the symbol alone. In practice, we overcome this problem by defining the grade of 0 to be the (undefined) symbol 0.

Grassmann algebras

Under the foregoing axioms it may be directly shown that:

1. L₀ is a field.
2. The Lₘ are linear spaces over L₀.
3. The direct sum of the Lₘ is an algebra.

This algebra is called a Grassmann algebra. Its elements are sums of elements from the Lₘ, thus allowing closure over both addition and exterior multiplication. For example, we can expand and simplify the product of two elements of the algebra to give another element.

%[(1 + 2 x + 3 x∧y + 4 x∧y∧z)∧(1 + 2 x + 3 x∧y + 4 x∧y∧z)]
1 + 4 x + 6 x∧y + 8 x∧y∧z

A Grassmann algebra is also a linear space of dimension 2ⁿ, where n is the dimension of the underlying linear space L₁. We will sometimes refer to the Grassmann algebra whose underlying linear space is of dimension n as the Grassmann algebra of n-space. Grassmann algebras will be discussed further in Chapter 9: Exploring Grassmann Algebra.

On the nature of scalar multiplication

The anti-commutativity axiom 10 for general elements states that:

αₘ∧βₖ = (−1)^(m k) βₖ∧αₘ

If one of these factors is a scalar (βₖ = a, say; k = 0), the axiom reduces to:

a∧αₘ = αₘ∧a

Since by axiom 6, each of these terms is an m-element, we may permit the exterior product to subsume the normal field multiplication. Thus, if a is a scalar, a∧αₘ is equivalent to a αₘ. The latter (conventional) notation will usually be adopted.

In the usual definitions of linear spaces no discussion is given to the nature of the product of a scalar and an element of the space. A notation is usually adopted (that is, the omission of the product sign) that leads one to suppose this product to be of the same nature as that between two scalars. From the axioms above it may be seen that both the product of two scalars and the product of a scalar and an element of the linear space may be interpreted as exterior products.

a αₘ = a∧αₘ = αₘ∧a        a ∈ L₀        2.20


Factoring scalars

In GrassmannAlgebra scalars can be factored out of a product by GrassmannSimplify (alias $) so that they are collected at the beginning. For example:

$[5∧(2 x)∧(3 y)∧(b z)]
30 b x∧y∧z

If any of the factors in the product is scalar, it will also be collected at the beginning. Here a is a scalar since it has been declared as one (by default).

$[5∧(2 x)∧(3 a)∧(b z)]
30 a b x∧z

GrassmannSimplify works with any Grassmann expression, or lists (to any level) of Grassmann expressions:

$[{{1 + a∧b, a∧x}, {z∧b, 1 − z∧x}}] // MatrixForm

{{1 + a∧b, a∧x}, {−(b∧z), 1 + x∧z}}
Grassmann expressions
A Grassmann expression is any well-formed expression recognized by GrassmannAlgebra as a valid element of a Grassmann algebra. To check whether an expression is considered to be a Grassmann expression, you can use the function GrassmannExpressionQ.

GrassmannExpressionQ[1 + x∧y]
True

GrassmannExpressionQ also works on lists of expressions. Below we determine that whereas the product of a scalar a and a 1-element x is a valid element of the Grassmann algebra, the ordinary multiplication of two 1-elements x y is not.

GrassmannExpressionQ[{x∧y, a x, x y}]
{True, True, False}

For further information, check the Expression Analysis section in the palette.

Calculating the grade of a Grassmann expression


The grade of an m-element is m.

At this stage we have not yet begun to use expressions for which the grade is other than obvious by inspection. However, as will be seen later, the grade will not always be obvious, especially when general Grassmann expressions, for example those involving Clifford or hypercomplex numbers, are being considered. A Clifford number may, for example, have components of different grade.

In GrassmannAlgebra you can use the function Grade (alias G) to calculate the grade of a Grassmann expression. Grade works with single Grassmann expressions or with lists (to any level) of expressions.

Grade[x∧y∧z]
3

Grade[{1, x, x∧y, x∧y∧z}]
{0, 1, 2, 3}

It will also calculate the grades of the elements in a more general expression, returning them in a list.

Grade[1 + x + x∧y + x∧y∧z]
{0, 1, 2, 3}

The grade of zero

As mentioned in the summary of axioms above the zero symbol 0 will be used indiscriminately for the zero element of any of the exterior linear spaces. The grade of the symbol 0 is therefore ambiguous. In GrassmannAlgebra, whenever Grade encounters a 0 it will return the symbol 0. This symbol may be read as "the grade of zero". Grade also returns 0 when it encounters an element whose notional grade is greater than the dimension of the currently declared space (and hence is zero).

As an example, we first declare a 3-space, then compute the grades of three elements. The first element 0 could be the zero of any exterior linear space, hence its grade is returned as 0. The second element, the unit scalar 1, is of grade 0 (this 0 is a true scalar!). The third element has a notional grade of 4, and hence reduces to 0 in the currently declared 3-space.

!3; Grade[{0, 1, x∧y∧z∧w}]
{0, 0, 0}

2.5 Bases
Bases for exterior linear spaces
Suppose e₁, e₂, …, eₙ is a basis for L₁. Then, as is well known, any element of L₁ may be expressed as a linear combination of these basis elements.

A basis for L₂ may be constructed from the eᵢ by assembling all the essentially different non-zero products eᵢ₁∧eᵢ₂ (i₁, i₂: 1, …, n). Two products are essentially different if they do not involve the same 1-elements. There are obviously (n choose 2) such products, making (n choose 2) also the dimension of L₂.

In general, a basis for Lₘ may be constructed from the eᵢ by taking all the essentially different products eᵢ₁∧eᵢ₂∧⋯∧eᵢₘ (i₁, i₂, …, iₘ: 1, …, n). Lₘ is thus (n choose m)-dimensional.

Essentially different products are of course linearly independent.
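The counting argument can be illustrated concretely. This small Python sketch (our own illustration, independent of the GrassmannAlgebra package) enumerates the essentially different products for n = 4 and checks the binomial dimensions and the total dimension 2ⁿ of the algebra:

```python
from itertools import combinations
from math import comb

n = 4
basis1 = [f"e{i}" for i in range(1, n + 1)]

def basis_m(m):
    """Standard-ordered basis of L_m: the essentially different products of
    the basis 1-elements taken m at a time (for m = 0 the single empty
    product stands for the unit 1)."""
    return ["\u2227".join(word) for word in combinations(basis1, m)]

for m in range(n + 1):
    assert len(basis_m(m)) == comb(n, m)          # dimension of L_m

print(basis_m(2))
# ['e1∧e2', 'e1∧e3', 'e1∧e4', 'e2∧e3', 'e2∧e4', 'e3∧e4']
print(sum(comb(n, m) for m in range(n + 1)))      # 16 = 2**n
```

Note that itertools.combinations emits subsets in lexicographic order, which coincides with the book's standard ordering of basis elements.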

Declaring a basis in GrassmannAlgebra


GrassmannAlgebra allows the setting up of an environment in which given symbols are declared basis elements of L₁.

When first loaded GrassmannAlgebra sets up a default basis of {e1, e2, e3}. You can see this from the Current Basis pane on the palette. This is the basis of a 3-dimensional linear space which may be interpreted as a 3-dimensional vector space if you wish. (We will be exploring interpretations other than the vectorial later.) To declare your own basis, enter DeclareBasis[list]. For example:

DeclareBasis[{i, j, k}]
{i, j, k}

Notice that the Current Basis pane of the palette now reads {i, j, k}.

DeclareBasis either takes a list (of your own basis elements) or a positive integer as its argument. By using the positive integer form you can simply declare a subscripted basis of any dimension you please.

DeclareBasis[8]
{e1, e2, e3, e4, e5, e6, e7, e8}

An optional argument gives you control over the kernel symbol used for the basis elements.

DeclareBasis[8, e]
{e1, e2, e3, e4, e5, e6, e7, e8}

A convenient way to change your space to one of dimension n is by entering the symbol !n with n a positive integer. The simplest way to enter this is from the palette using the ! symbol, and entering the integer into the placeholder.

!4
{e1, e2, e3, e4}

You can return to the default basis by entering DeclareBasis[3] or !3 . You can set all preferences back to their default values by entering DeclareAllDefaultPreferences (or its alias ★A). For further information, check the Preferences section of the palette.

Composing bases of exterior linear spaces


The function BasisL[m] generates a list of basis elements of Lₘ arranged in standard order from the declared basis of L₁. For example, we declare the basis to be that of a 3-dimensional vector space, and then compute the bases of L₀, L₁, L₂, and L₃.

!3
{e1, e2, e3}

BasisL[0]
{1}

BasisL[1]
{e1, e2, e3}

BasisL[2]
{e1∧e2, e1∧e3, e2∧e3}

BasisL[3]
{e1∧e2∧e3}

BasisL[] generates a list of the bases of each of the Lₘ.

BasisL[]
{{1}, {e1, e2, e3}, {e1∧e2, e1∧e3, e2∧e3}, {e1∧e2∧e3}}

You can combine these into a single list by using Mathematica's inbuilt Flatten function.

Flatten[BasisL[]]
{1, e1, e2, e3, e1∧e2, e1∧e3, e2∧e3, e1∧e2∧e3}

This is a basis of the Grassmann algebra whose underlying linear space is 3-dimensional. As a linear space, the algebra has 2³ = 8 dimensions corresponding to its 8 basis elements.

Composing palettes of basis elements




If you would like to compose a palette of the basis elements of all the exterior linear spaces induced by the declared basis, you can use the GrassmannAlgebra command BasisPalette. For example suppose you are working in a four-dimensional space.

!4
{e1, e2, e3, e4}

Entering BasisPalette would then give you the 16 basis elements of the corresponding Grassmann algebra:

BasisPalette

Basis Palette
L0:  1
L1:  e1, e2, e3, e4
L2:  e1∧e2, e1∧e3, e1∧e4, e2∧e3, e2∧e4, e3∧e4
L3:  e1∧e2∧e3, e1∧e2∧e4, e1∧e3∧e4, e2∧e3∧e4
L4:  e1∧e2∧e3∧e4

You can click on any of the buttons to get its contents pasted into your notebook. For example you can compose Grassmann numbers of your choice by clicking on the relevant basis elements.

X = a + b e1 + c e1∧e3 + d e1∧e3∧e4
Standard ordering
The standard ordering of the basis of L₁ is defined as the ordering of the elements in the declared list of basis elements. This ordering induces a natural standard ordering on the basis elements of Lₘ: if the basis elements of L₁ were letters of the alphabet arranged alphabetically, then the basis elements of Lₘ would be words arranged alphabetically. Equivalently, if the basis elements were digits, the ordering would be numeric.

For example, if we take {A,B,C,D} as basis, we can see from its BasisPalette that the basis elements for each of the bases are arranged alphabetically.

DeclareBasis[{A, B, C, D}]; BasisPalette

Basis Palette
L0:  1
L1:  A, B, C, D
L2:  A∧B, A∧C, A∧D, B∧C, B∧D, C∧D
L3:  A∧B∧C, A∧B∧D, A∧C∧D, B∧C∧D
L4:  A∧B∧C∧D

Indexing basis elements of exterior linear spaces


Just as we can denote the basis elements of L₁ by indexed symbols eᵢ (i: 1, …, n), so too can we denote the basis elements of Lₘ by indexed symbols eᵢ carrying the grade m written below (i: 1, …, (n choose m)). This would enable us to denote more succinctly the basis elements of an exterior linear space of large or arbitrary dimension.

For example, suppose we have a basis {e1, e2, e3, e4} for L₁, then the standard basis for L₃ is given by

!4; BasisL[3]
{e1∧e2∧e3, e1∧e2∧e4, e1∧e3∧e4, e2∧e3∧e4}

We could, if we wished, set up rules to denote these basis elements more compactly. For example (each symbol on the right carrying the grade 3 written below):

{e1∧e2∧e3 → e1, e1∧e2∧e4 → e2, e1∧e3∧e4 → e3, e2∧e3∧e4 → e4}

We will find this more compact notation of some use in later theoretical derivations. Note however that GrassmannAlgebra is set up to accept only declarations for the basis of L₁ from which it generates the bases of the other exterior linear spaces of the algebra.


2.6 Cobases
Definition of a cobasis
The notion of the cobasis of a basis will be conceptually and notationally useful in our development of later concepts. The cobasis of a basis of Lₘ is simply the basis of Lₙ₋ₘ which has been ordered in a special way relative to the basis of Lₘ.

Let {e₁, e₂, …, eₙ} be a basis for L₁. The cobasis element associated with a basis element eᵢ is denoted e̲ᵢ (with an underbar) and is defined as the product of the remaining basis elements such that the exterior product of a basis element with its cobasis element is equal to the basis element of Lₙ:

eᵢ∧e̲ᵢ = e₁∧e₂∧⋯∧eₙ

That is:

e̲ᵢ = (−1)^(i−1) e₁∧⋯∧êᵢ∧⋯∧eₙ        2.21

where êᵢ means that eᵢ is missing from the product. The choice of the underbar notation to denote the cobasis may be viewed as a mnemonic to indicate that the element e̲ᵢ is the basis element of Lₙ with eᵢ 'struck out' from it.

Cobasis elements have similar properties to Euclidean complements, which are denoted with overbars (see Chapter 5: The Complement). However, it should be noted that the underbar denotes an element, and is not an operation. For example: a̲ is not defined.

More generally, the cobasis element of the basis m-element eᵢ = eᵢ₁∧⋯∧eᵢₘ of Lₘ is denoted e̲ᵢ and is defined as the product of the remaining basis elements such that:

eᵢ∧e̲ᵢ = e₁∧e₂∧⋯∧eₙ        2.22

That is:

e̲ᵢ = (−1)^(Kₘ) e₁∧⋯∧êᵢ₁∧⋯∧êᵢₘ∧⋯∧eₙ        2.23

where Kₘ = Σᵧ iᵧ + m(m+1)/2, the sum being taken over γ from 1 to m.

From the above definition it can be seen that the exterior product of a basis element with the cobasis element of another basis element is zero. Hence we can write:

eᵢ∧e̲ⱼ = δᵢⱼ e₁∧e₂∧⋯∧eₙ        2.24

Here, δᵢⱼ is the Kronecker delta.
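The sign in formula 2.23 can be verified numerically. In this Python sketch (our own check, outside the GrassmannAlgebra package) the cobasis is computed from first principles, by requiring eᵢ∧e̲ᵢ = e₁∧⋯∧eₙ, and the resulting sign is compared with (−1)^Kₘ:

```python
from itertools import combinations

def perm_sign(seq):
    """Sign of the permutation taking sorted(seq) to seq (inversion count)."""
    inv = sum(1 for i in range(len(seq))
                for j in range(i + 1, len(seq)) if seq[i] > seq[j])
    return (-1) ** inv

def cobasis(idx, n):
    """Signed cobasis of the basis m-element with 1-based indices idx,
    chosen so that e_idx ^ cobasis = e1 ^ e2 ^ ... ^ en."""
    rest = tuple(i for i in range(1, n + 1) if i not in idx)
    return perm_sign(idx + rest), rest

n = 4
for m in range(n + 1):
    for idx in combinations(range(1, n + 1), m):
        sign, rest = cobasis(idx, n)
        K_m = sum(idx) + m * (m + 1) // 2        # exponent of formula 2.23
        assert sign == (-1) ** K_m
print("formula 2.23 checked for all basis elements, n =", n)
```

For m = 1 the exponent reduces to i + 1, which has the same parity as the i − 1 of formula 2.21.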

The cobasis of unity


The natural basis of L is 1. The cobasis 1 of this basis element is defined by formula 2.24 as the
0

product of the remaining basis elements such that:


1 1 e1 e2 ! en

Thus:
1 e1 e2 ! en e
n

2.25

Any of these three representations will be called the basis n-element of the space. 1 may also be described as the cobasis of unity. Note carefully that 1 may be different in different bases.

Composing palettes of cobasis elements

You can compose a palette of basis elements of the currently declared basis together with their corresponding cobasis elements by the command CobasisPalette. For the default basis {e1, e2, e3} this would give

!3; CobasisPalette

Cobasis Palette
Basis        Cobasis
1            e1∧e2∧e3
e1           e2∧e3
e2           −(e1∧e3)
e3           e1∧e2
e1∧e2        e3
e1∧e3        −e2
e2∧e3        e1
e1∧e2∧e3     1

If you want to compose a palette for another basis, first declare the basis.

DeclareBasis[{&, '}]; CobasisPalette

Cobasis Palette
Basis     Cobasis
1         &∧'
&         '
'         −&
&∧'       1

The cobasis of a cobasis

Let eᵢ be a basis element of Lₘ and e̲ᵢ be its cobasis element. Then the cobasis element of this cobasis element is denoted e̳ⱼ (with a double underbar) and is defined as expected by the product of the remaining basis elements such that:

e̲ᵢ∧e̳ⱼ = δᵢⱼ e₁∧e₂∧⋯∧eₙ        2.26

The left-hand side may be rearranged to give:

e̲ᵢ∧e̳ⱼ = (−1)^(m(n−m)) e̳ⱼ∧e̲ᵢ

which, by comparison with the definition for the cobasis shows that:

e̳ᵢ = (−1)^(m(n−m)) eᵢ        2.27

That is, the cobasis element of a cobasis element of an element is (apart from a possible sign) equal to the element itself. We shall find this formula useful as we develop general formulae for the complement and interior products in later chapters.
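Formula 2.27 is easy to confirm computationally. The following Python sketch (our own check, not part of the package) applies the signed-cobasis construction twice and verifies that the original basis element returns, multiplied by (−1)^(m(n−m)):

```python
from itertools import combinations

def perm_sign(seq):
    """Sign of the permutation taking sorted(seq) to seq (inversion count)."""
    inv = sum(1 for i in range(len(seq))
                for j in range(i + 1, len(seq)) if seq[i] > seq[j])
    return (-1) ** inv

def cobasis(idx, n):
    """Signed cobasis of the basis m-element e_idx: (sign, remaining indices),
    with e_idx ^ (sign * e_rest) = e1 ^ ... ^ en."""
    rest = tuple(i for i in range(1, n + 1) if i not in idx)
    return perm_sign(idx + rest), rest

n = 5
for m in range(n + 1):
    for idx in combinations(range(1, n + 1), m):
        s1, rest = cobasis(idx, n)       # cobasis of e_idx (grade n - m)
        s2, back = cobasis(rest, n)      # cobasis of that cobasis (grade m)
        assert back == idx
        assert s1 * s2 == (-1) ** (m * (n - m))   # formula 2.27
print("formula 2.27 checked for n =", n)
```

The sign arises exactly as in the text: moving the m factors of eᵢ past the n − m factors of e̲ᵢ costs m(n − m) transpositions.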

2.7 Determinants
Determinants from exterior products

All the properties of determinants follow naturally from the properties of the exterior product. Indeed, it may be reasonably posited that the theory of determinants was a system for answering questions of linear dependence, developed only because Grassmann's work was ignored. Had Grassmann's work been more widely known, his simpler and more understandable approach would have rendered determinant theory of little interest.

The determinant:

| a11  a12  …  a1n |
| a21  a22  …  a2n |
|  ⋮    ⋮   ⋱   ⋮  |
| an1  an2  …  ann |

may be calculated by considering each of its rows (or columns) as a 1-element. Here, we consider rows. Development by columns may be obtained mutatis mutandis. Introduce an arbitrary basis eᵢ in order to encode the position of the entry in the row, and let

aᵢ = aᵢ₁ e₁ + aᵢ₂ e₂ + ⋯ + aᵢₙ eₙ

Then form the exterior product of all the aᵢ. We can arrange the aᵢ in rows to portray the effect of an array:

a₁∧a₂∧⋯∧aₙ = (a₁₁ e₁ + a₁₂ e₂ + ⋯ + a₁ₙ eₙ)
            ∧(a₂₁ e₁ + a₂₂ e₂ + ⋯ + a₂ₙ eₙ)
            ∧ ⋯
            ∧(aₙ₁ e₁ + aₙ₂ e₂ + ⋯ + aₙₙ eₙ)

The determinant D is then the coefficient of the resulting n-element:

a₁∧a₂∧⋯∧aₙ = D e₁∧e₂∧⋯∧eₙ

Because Lₙ is of dimension 1, we can express this uniquely as:

D = (a₁∧a₂∧⋯∧aₙ) / (e₁∧e₂∧⋯∧eₙ)        2.28
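Formula 2.28 translates directly into an algorithm: wedge the rows together and read off the coefficient of e₁∧⋯∧eₙ. Here is a minimal Python sketch (our own, independent of the book's Mathematica code) using the dict-of-index-tuples representation:

```python
def wedge(u, v):
    """Exterior product on dicts mapping sorted index tuples to coefficients."""
    out = {}
    for iu, cu in u.items():
        for iv, cv in v.items():
            idx = iu + iv
            if len(set(idx)) != len(idx):        # repeated factor vanishes
                continue
            inv = sum(1 for i in range(len(idx))
                        for j in range(i + 1, len(idx)) if idx[i] > idx[j])
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + (-1) ** inv * cu * cv
    return {k: c for k, c in out.items() if c != 0}

def det_via_wedge(M):
    """Determinant of M as the coefficient of e1^...^en in the exterior
    product of its rows, each row read as a 1-element (formula 2.28)."""
    n = len(M)
    prod = {(): 1}                               # start from the scalar 1
    for row in M:
        prod = wedge(prod, {(j,): row[j] for j in range(n) if row[j]})
    return prod.get(tuple(range(n)), 0)

M = [[2, 1, 0],
     [1, 3, 4],
     [0, 5, 6]]
print(det_via_wedge(M))      # -10
```

Skipping zero entries when forming each row's 1-element already gives a mild version of the sparsity advantage exploited in the Laplace-style groupings discussed later in this section.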

Properties of determinants
All the well-known properties of determinants proceed directly from the properties of the exterior product.

1 The determinant changes sign on the interchange of any two rows.
The exterior product is anti-symmetric in any two factors.

⋯∧aᵢ∧⋯∧aⱼ∧⋯ = −(⋯∧aⱼ∧⋯∧aᵢ∧⋯)

2 The determinant is zero if two of its rows are equal.
The exterior product is nilpotent.

⋯∧aᵢ∧⋯∧aᵢ∧⋯ = 0

3 The determinant is multiplied by a factor k if any row is multiplied by k.
The exterior product is commutative with respect to scalars.

a₁∧⋯∧(k aᵢ)∧⋯ = k (a₁∧⋯∧aᵢ∧⋯)

4 The determinant is equal to the sum of p determinants if each element of a given row is the sum of p terms.
The exterior product is distributive with respect to addition.

a₁∧⋯∧(Σᵢ aᵢ)∧⋯ = Σᵢ (a₁∧⋯∧aᵢ∧⋯)

5 The determinant is unchanged if to any row is added scalar multiples of other rows.
The exterior product is unchanged if to any factor is added multiples of other factors.

a₁∧⋯∧(aᵢ + Σ_{j≠i} kⱼ aⱼ)∧⋯ = a₁∧⋯∧aᵢ∧⋯

The Laplace expansion technique

The Laplace expansion technique is equivalent to the calculation of the exterior product in four stages:

1. Take the exterior product of any p of the aᵢ.
2. Take the exterior product of the remaining n−p of the aᵢ.
3. Take the exterior product of the results of the first two operations.
4. Adjust the sign to ensure the parity of the original ordering of the aᵢ is preserved.

Each of the first two operations produces an element with (n choose p) = (n choose n−p) terms.

A generalization of the Laplace expansion technique is evident from the fact that the exterior product of the aᵢ may be effected in any grouping and sequence which facilitate the computation.

Calculating determinants
As an example take the following matrix. We can calculate its determinant with Mathematica's inbuilt Det function:

Det[{{a, 0, 0, 0, 0, 0, 0, 0},
     {0, 0, 0, c, 0, 0, 0, a},
     {0, b, 0, 0, 0, f, 0, 0},
     {0, 0, 0, 0, e, 0, 0, 0},
     {0, 0, e, 0, 0, 0, g, 0},
     {0, f, 0, d, 0, c, 0, 0},
     {g, 0, 0, 0, 0, 0, 0, h},
     {0, 0, 0, 0, d, 0, b, 0}}]

a b² c² e² h − a b c e² f² h

Alternatively, we can set L₁ to be 8-dimensional.

DeclareBasis[8]
{e1, e2, e3, e4, e5, e6, e7, e8}

and then expand and simplify the exterior product of the 8 rows or columns expressed as 1-elements. Here we use ExpandAndSimplifyExteriorProducts (as it is somewhat faster in this specialized case than the more general GrassmannExpandAndSimplify) to expand by columns.

ExpandAndSimplifyExteriorProducts[
  (a e1 + g e7)∧(b e3 + f e6)∧(e e5)∧(c e2 + d e6)∧
  (e e4 + d e8)∧(f e3 + c e6)∧(g e5 + b e8)∧(a e2 + h e7)]

(a b² c² e² h − a b c e² f² h) e1∧e2∧e3∧e4∧e5∧e6∧e7∧e8

The determinant is then the coefficient of the basis 8-element.
Speeding up calculations

For maximum speed it is clearly better to use Mathematica's inbuilt Det wherever possible. If however, you are using GrassmannExpandAndSimplify or ExpandAndSimplifyExteriorProducts it is usually faster to apply the generalized Laplace expansion technique. That is, the order in which the products are calculated is arranged with the objective of minimizing the number of resulting terms produced at each stage.

You can do this by dividing the product up into groupings of factors where each grouping contains as few basis elements as possible in common with the others. You then calculate the result from each grouping before combining them into the final result. Of course, you must ensure that the sign of the overall regrouping is the same as that of the original expression.

In the example below, we have rearranged the determinant calculation above into three groupings, which we calculate successively before combining the results. Our determinant calculation then becomes

G = ExpandAndSimplifyExteriorProducts;
- G[G[(a e1 + g e7) ⋀ (c e2 + d e6) ⋀ (a e2 + h e7)] ⋀
   G[(b e3 + f e6) ⋀ (f e3 + c e6)] ⋀
   G[(e e5) ⋀ (e e4 + d e8) ⋀ (g e5 + b e8)]]

a b c e^2 (b c - f^2) h e1 ⋀ e2 ⋀ e3 ⋀ e4 ⋀ e5 ⋀ e6 ⋀ e7 ⋀ e8

This grouping took a third of the time of the direct approach above.

Historical Note

Grassmann applied his Ausdehnungslehre to the theory of determinants and linear equations quite early in his work. Later, Cauchy published his technique of 'algebraic keys' which essentially duplicated Grassmann's results. To claim priority, Grassmann was led to publish his only paper in French, obviously directed at Cauchy: 'Sur les différents genres de multiplication' ('On different types of multiplication') [1855]. For a complete treatise on the theory of determinants from a Grassmannian viewpoint see R. F. Scott [1880].


2.8 Cofactors

Cofactors from exterior products

The cofactor is an important concept in the usual approach to determinants. One often calculates a determinant by summing the products of the elements of a row by their corresponding cofactors. Cofactors divided by the determinant form the elements of an inverse matrix. In the subsequent development of the Grassmann algebra, particularly the development of the complement operation, we will find it useful to see how cofactors arise from exterior products.

Consider the product of n 1-elements introduced in the previous section:

a1 ⋀ a2 ⋀ ⋯ ⋀ an = (a11 e1 + a12 e2 + ⋯ + a1n en) ⋀ (a21 e1 + a22 e2 + ⋯ + a2n en) ⋀ ⋯ ⋀ (an1 e1 + an2 e2 + ⋯ + ann en)

Omitting the first factor a1:

a2 ⋀ ⋯ ⋀ an = (a21 e1 + a22 e2 + ⋯ + a2n en) ⋀ ⋯ ⋀ (an1 e1 + an2 e2 + ⋯ + ann en)

and multiplying out the remaining n−1 factors results in an expression of the form:

a2 ⋀ ⋯ ⋀ an = a̲11 (e2 ⋀ e3 ⋀ ⋯ ⋀ en) + a̲12 (-(e1 ⋀ e3 ⋀ ⋯ ⋀ en)) + ⋯ + a̲1n ((-1)^(n-1) e1 ⋀ e2 ⋀ ⋯ ⋀ en-1)

Here, the signs attached to the basis (n−1)-elements have been specifically chosen so that together they correspond to the cobasis elements of e1 to en. The underscored scalar coefficients have yet to be interpreted. Thus we can write:

a2 ⋀ ⋯ ⋀ an = a̲11 e̲1 + a̲12 e̲2 + ⋯ + a̲1n e̲n

If we now premultiply by the first factor a1 we get a particularly symmetric form.

a1 ⋀ a2 ⋀ ⋯ ⋀ an = (a11 e1 + a12 e2 + ⋯ + a1n en) ⋀ (a̲11 e̲1 + a̲12 e̲2 + ⋯ + a̲1n e̲n)

Multiplying this out and remembering that the exterior product of a basis element with the cobasis element of another basis element is zero, we get the sum:

a1 ⋀ a2 ⋀ ⋯ ⋀ an = (a11 a̲11 + a12 a̲12 + ⋯ + a1n a̲1n) e1 ⋀ e2 ⋀ ⋯ ⋀ en

But we have already seen that:

a1 ⋀ a2 ⋀ ⋯ ⋀ an = D e1 ⋀ e2 ⋀ ⋯ ⋀ en

Hence the determinant D can be expressed as the sum of products:

D = a11 a̲11 + a12 a̲12 + ⋯ + a1n a̲1n

showing that the a̲ij are the cofactors of the aij. Mnemonically we can visualize a̲ij as the determinant with the row and column containing aij 'struck out' by the underscore.

Of course, there is nothing special in the choice of the element a1 about which to expand the determinant. We could have written the expansion in terms of any factor (row or column):

D = ai1 a̲i1 + ai2 a̲i2 + ⋯ + ain a̲in

It can be seen immediately that the product of the elements of a row with the cofactors of any other row is zero, since in the exterior product formulation the row must have been included in the calculation of the cofactors. Summarizing these results for both row and column expansions we have:

Σj=1..n aij a̲kj = Σj=1..n aji a̲jk = D δik        2.29

In matrix terms of course this is equivalent to the standard results

A Ac^T = A^T Ac = D I

where Ac is the matrix of cofactors of the elements of A.
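A quick numerical check of equation 2.29 and its matrix form (an illustrative Python sketch, independent of the book's Mathematica code; the helper names det, minor and cofactor_matrix are ours):

```python
def minor(m, i, j):
    # Delete row i and column j
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det(m):
    # Laplace expansion about the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

def cofactor_matrix(m):
    n = len(m)
    return [[(-1) ** (i + j) * det(minor(m, i, j)) for j in range(n)] for i in range(n)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]
Ac = cofactor_matrix(A)
D = det(A)
n = len(A)
# A.Ac^T == A^T.Ac == D I, i.e. equation 2.29 for both rows and columns
for i in range(n):
    for k in range(n):
        assert sum(A[i][j] * Ac[k][j] for j in range(n)) == D * (i == k)
        assert sum(A[j][i] * Ac[j][k] for j in range(n)) == D * (i == k)
```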

The Laplace expansion in cofactor form

We have already introduced the Laplace expansion technique in our discussion of determinants:
1. Take the exterior product of any m of the ai.
2. Take the exterior product of the remaining n−m of the ai.
3. Take the exterior product of the results of the first two operations (in the correct order).
In this section we revisit it more specifically in the context of the cofactor. The results will be important in deriving later results.

Let ei now denote the basis m-elements and e̲i their corresponding cobasis (n−m)-elements (i: 1, …, ν, where ν = (n choose m)). To fix ideas, suppose we expand a determinant about the first m rows:

(a1 ⋀ ⋯ ⋀ am) ⋀ (am+1 ⋀ ⋯ ⋀ an) = (a1 e1 + a2 e2 + ⋯ + aν eν) ⋀ (a̲1 e̲1 + a̲2 e̲2 + ⋯ + a̲ν e̲ν)

Here, the ai and a̲i are simply the coefficients resulting from the partial expansions. As with basis 1-elements, the exterior product of a basis m-element with the cobasis element of another basis m-element is zero. That is:

ei ⋀ e̲j = δij e1 ⋀ e2 ⋀ ⋯ ⋀ en

Expanding the product and applying this simplification then yields:

D = a1 a̲1 + a2 a̲2 + ⋯ + aν a̲ν

This is the Laplace expansion. The ai are minors and the a̲i are their cofactors.

Exploring the calculation of determinants using minors and cofactors

In this section we explore the relationship between a determinant calculation in terms of minors and cofactors, and its exterior product formulation. This is intended for purely theoretical interest only because, of course, if the simple calculation of an actual determinant is required, Mathematica's inbuilt Det function will certainly be the more effective.

Our example involves a fourth order determinant. To begin then, we declare a 4-space, and then compose a list of 4 vectors, give each a name, and call the list M. The new scalar coefficients generated by ComposeVector are automatically added to the list of declared scalars.

DeclareBasis[4]; M = {x, y, z, w} = ComposeVector[{a, b, c, d}]; MatrixForm[M]

a1 e1 + a2 e2 + a3 e3 + a4 e4
b1 e1 + b2 e2 + b3 e3 + b4 e4
c1 e1 + c2 e2 + c3 e3 + c4 e4
d1 e1 + d2 e2 + d3 e3 + d4 e4

We can extract the scalar coefficients from M using the GrassmannAlgebra function ExtractCoefficients. The result is now a 4×4 matrix Mc whose determinant we proceed to calculate.

Mc = ExtractCoefficients[Basis][M]; MatrixForm[Mc]

a1 a2 a3 a4
b1 b2 b3 b4
c1 c2 c3 c4
d1 d2 d3 d4

Case 1: Calculation by expansion about the first row


Expanding and simplifying the exterior product of rows two, three and four gives the minors of the first row as coefficients of the resulting 3-element.


A = 𝔾[y ⋀ z ⋀ w]

(-b3 c2 d1 + b2 c3 d1 + b3 c1 d2 - b1 c3 d2 - b2 c1 d3 + b1 c2 d3) e1 ⋀ e2 ⋀ e3 +
 (-b4 c2 d1 + b2 c4 d1 + b4 c1 d2 - b1 c4 d2 - b2 c1 d4 + b1 c2 d4) e1 ⋀ e2 ⋀ e4 +
 (-b4 c3 d1 + b3 c4 d1 + b4 c1 d3 - b1 c4 d3 - b3 c1 d4 + b1 c3 d4) e1 ⋀ e3 ⋀ e4 +
 (-b4 c3 d2 + b3 c4 d2 + b4 c2 d3 - b2 c4 d3 - b3 c2 d4 + b2 c3 d4) e2 ⋀ e3 ⋀ e4

The cobasis of the current basis {e1, e2, e3, e4} can be displayed by entering Cobasis.

Cobasis
{e2 ⋀ e3 ⋀ e4, -(e1 ⋀ e3 ⋀ e4), e1 ⋀ e2 ⋀ e4, -(e1 ⋀ e2 ⋀ e3)}

By comparing the basis elements in the expansion of A with the cobasis elements of the current basis above, we see that both e1 ⋀ e2 ⋀ e3 and e1 ⋀ e3 ⋀ e4 require a change of sign to bring them into the required cobasis form. The crux of the calculation is to garner this required sign from the coefficients of the basis elements (and hence also changing the sign of the coefficient). We denote the cobasis elements of {e1, e2, e3, e4} by {e̲1, e̲2, e̲3, e̲4} and use a rule to replace the basis elements of A with the correctly signed values.

A1 = A /. {e2 ⋀ e3 ⋀ e4 -> e̲1, e1 ⋀ e3 ⋀ e4 -> -e̲2, e1 ⋀ e2 ⋀ e4 -> e̲3, e1 ⋀ e2 ⋀ e3 -> -e̲4}

(-b4 c3 d2 + b3 c4 d2 + b4 c2 d3 - b2 c4 d3 - b3 c2 d4 + b2 c3 d4) e̲1 -
 (-b4 c3 d1 + b3 c4 d1 + b4 c1 d3 - b1 c4 d3 - b3 c1 d4 + b1 c3 d4) e̲2 +
 (-b4 c2 d1 + b2 c4 d1 + b4 c1 d2 - b1 c4 d2 - b2 c1 d4 + b1 c2 d4) e̲3 -
 (-b3 c2 d1 + b2 c3 d1 + b3 c1 d2 - b1 c3 d2 - b2 c1 d3 + b1 c2 d3) e̲4

The coefficients of {e̲1, e̲2, e̲3, e̲4} are the cofactors of {a1, a2, a3, a4}, which we denote {a̲1, a̲2, a̲3, a̲4}. We extract them again using ExtractCoefficients.

{a̲1, a̲2, a̲3, a̲4} = ExtractCoefficients[{e̲1, e̲2, e̲3, e̲4}][A1]

{-b4 c3 d2 + b3 c4 d2 + b4 c2 d3 - b2 c4 d3 - b3 c2 d4 + b2 c3 d4,
 b4 c3 d1 - b3 c4 d1 - b4 c1 d3 + b1 c4 d3 + b3 c1 d4 - b1 c3 d4,
 -b4 c2 d1 + b2 c4 d1 + b4 c1 d2 - b1 c4 d2 - b2 c1 d4 + b1 c2 d4,
 b3 c2 d1 - b2 c3 d1 - b3 c1 d2 + b1 c3 d2 + b2 c1 d3 - b1 c2 d3}

We can then calculate the determinant from:

D1 = a1 a̲1 + a2 a̲2 + a3 a̲3 + a4 a̲4

a4 (b3 c2 d1 - b2 c3 d1 - b3 c1 d2 + b1 c3 d2 + b2 c1 d3 - b1 c2 d3) +
 a3 (-b4 c2 d1 + b2 c4 d1 + b4 c1 d2 - b1 c4 d2 - b2 c1 d4 + b1 c2 d4) +
 a2 (b4 c3 d1 - b3 c4 d1 - b4 c1 d3 + b1 c4 d3 + b3 c1 d4 - b1 c3 d4) +
 a1 (-b4 c3 d2 + b3 c4 d2 + b4 c2 d3 - b2 c4 d3 - b3 c2 d4 + b2 c3 d4)

This can be easily verified as equal to the determinant of the original array computed using Mathematica's Det function.

Expand[D1] == Det[Mc]
True

Case 2: Calculation by expansion about the first two rows

Expand and simplify the product of the first and second rows.

B1 = 𝔾[x ⋀ y]

(-a2 b1 + a1 b2) e1 ⋀ e2 + (-a3 b1 + a1 b3) e1 ⋀ e3 + (-a4 b1 + a1 b4) e1 ⋀ e4 +
 (-a3 b2 + a2 b3) e2 ⋀ e3 + (-a4 b2 + a2 b4) e2 ⋀ e4 + (-a4 b3 + a3 b4) e3 ⋀ e4

Extract the coefficients of these basis 2-elements and denote them by ai.

{a1, a2, a3, a4, a5, a6} = ExtractCoefficients[ΛBasis][B1]

{-a2 b1 + a1 b2, -a3 b1 + a1 b3, -a4 b1 + a1 b4, -a3 b2 + a2 b3, -a4 b2 + a2 b4, -a4 b3 + a3 b4}

Expand and simplify the product of the third and fourth rows.

B2 = 𝔾[z ⋀ w]

(-c2 d1 + c1 d2) e1 ⋀ e2 + (-c3 d1 + c1 d3) e1 ⋀ e3 + (-c4 d1 + c1 d4) e1 ⋀ e4 +
 (-c3 d2 + c2 d3) e2 ⋀ e3 + (-c4 d2 + c2 d4) e2 ⋀ e4 + (-c4 d3 + c3 d4) e3 ⋀ e4

Extract the coefficients of these basis 2-elements and denote them by bi. (For a more direct correspondence, we denote them in reverse order.)

{b6, b5, b4, b3, b2, b1} = ExtractCoefficients[ΛBasis][B2]

{-c2 d1 + c1 d2, -c3 d1 + c1 d3, -c4 d1 + c1 d4, -c3 d2 + c2 d3, -c4 d2 + c2 d4, -c4 d3 + c3 d4}

Now, noting the required change in sign from the basis of Λ2 and its cobasis:

{BasisΛ[2], CobasisΛ[2]}

{{e1 ⋀ e2, e1 ⋀ e3, e1 ⋀ e4, e2 ⋀ e3, e2 ⋀ e4, e3 ⋀ e4},
 {e3 ⋀ e4, -(e2 ⋀ e4), e2 ⋀ e3, e1 ⋀ e4, -(e1 ⋀ e3), e1 ⋀ e2}}

we can define the cofactors {a̲6, a̲5, a̲4, a̲3, a̲2, a̲1}.

{a̲6, a̲5, a̲4, a̲3, a̲2, a̲1} = {b6, -b5, b4, b3, -b2, b1}

{-c2 d1 + c1 d2, c3 d1 - c1 d3, -c4 d1 + c1 d4, -c3 d2 + c2 d3, c4 d2 - c2 d4, -c4 d3 + c3 d4}

We can then calculate the determinant from:

D2 = a1 a̲1 + a2 a̲2 + a3 a̲3 + a4 a̲4 + a5 a̲5 + a6 a̲6

(-a4 b3 + a3 b4) (-c2 d1 + c1 d2) + (-a4 b2 + a2 b4) (c3 d1 - c1 d3) +
 (-a4 b1 + a1 b4) (-c3 d2 + c2 d3) + (-a3 b2 + a2 b3) (-c4 d1 + c1 d4) +
 (-a3 b1 + a1 b3) (c4 d2 - c2 d4) + (-a2 b1 + a1 b2) (-c4 d3 + c3 d4)

This can be easily verified as equal to the determinant of the original array computed using Mathematica's Det function.

Expand[D2] == Det[Mc]
True

Transformations of cobases

In this section we show that if a basis of the underlying linear space is transformed by a transformation whose components are aij, then its cobasis is transformed by the induced transformation whose components are the cofactors of the aij. For simplicity in what follows, we use Einstein's summation convention in which a summation over repeated indices is understood.

Let aij be a transformation on the basis ej to give the new basis εi. That is, εi = aij ej. Let a̲ij be the corresponding transformation on the cobasis e̲j to give the new cobasis ε̲i. That is, ε̲i = a̲ij e̲j. Now take the exterior product of these two equations.

εi ⋀ ε̲k = (aip ep) ⋀ (a̲kj e̲j) = aip a̲kj ep ⋀ e̲j

But the product of a basis element and its cobasis element is equal to the n-element of that basis. That is, εi ⋀ ε̲k = δik (ε1 ⋀ ⋯ ⋀ εn) and ep ⋀ e̲j = δpj (e1 ⋀ ⋯ ⋀ en). Substituting in the previous equation gives:

δik (ε1 ⋀ ⋯ ⋀ εn) = aip a̲kj δpj (e1 ⋀ ⋯ ⋀ en)

Using the properties of the Kronecker delta we can simplify the right side to give:

δik (ε1 ⋀ ⋯ ⋀ εn) = aij a̲kj (e1 ⋀ ⋯ ⋀ en)

We can now substitute ε1 ⋀ ⋯ ⋀ εn = D (e1 ⋀ ⋯ ⋀ en), where D = Det[aij] is the determinant of the transformation.

δik D = aij a̲kj

This is precisely the relationship 2.29 derived in the section above for the expansion of a determinant in terms of cofactors. Hence we have shown that the a̲kj are indeed the cofactors of the akj.


εi = aij ej   ⟹   ε̲i = a̲ij e̲j        2.30

In sum: If a basis of the underlying linear space is transformed by a transformation whose components are aij, then its cobasis is transformed by the induced transformation whose components are the cofactors a̲ij of the aij.
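This cobasis transformation law can also be checked numerically with a small exterior-product routine (an illustrative Python sketch, not the book's Mathematica code; all helper names are ours). We build the cobasis (n−1)-elements e̲i, transform them with the cofactor matrix, and verify that wedging with the transformed basis vectors εi reproduces δik D e1⋀⋯⋀en, as the derivation above requires.

```python
def wedge(a, b):
    # Elements as {sorted-index-tuple: coefficient} dicts
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) != len(idx):
                continue  # repeated basis factor gives zero
            sign = 1  # parity of the permutation sorting idx
            for i in range(len(idx)):
                for j in range(i + 1, len(idx)):
                    if idx[i] > idx[j]:
                        sign = -sign
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def minor(m, i, j):
    return [r[:j] + r[j+1:] for k, r in enumerate(m) if k != i]

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

n = 3
A = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]
D = det(A)
# Cobasis (n-1)-elements: e_i ^ cob_i = e1^...^en, e_j ^ cob_i = 0 for j != i
cob = [{tuple(k for k in range(1, n + 1) if k != i): (-1) ** (i - 1)}
       for i in range(1, n + 1)]
eps = [{(j,): A[i][j - 1] for j in range(1, n + 1)} for i in range(n)]  # new basis
cof = [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)] for i in range(n)]
# Transformed cobasis (equation 2.30): eps_cob_i = sum_j cof[i][j] * cob_j
eps_cob = []
for i in range(n):
    acc = {}
    for j in range(n):
        for k, v in cob[j].items():
            acc[k] = acc.get(k, 0) + cof[i][j] * v
    eps_cob.append({k: v for k, v in acc.items() if v != 0})
# Check eps_i ^ eps_cob_k == delta_ik * D * e1^...^en
full = tuple(range(1, n + 1))
for i in range(n):
    for k in range(n):
        assert wedge(eps[i], eps_cob[k]).get(full, 0) == (D if i == k else 0)
```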

Exploring transformations of cobases

In order to verify the equation above we first recast it into a pseudo-code equation and then proceed to translate it into Mathematica commands:

ε̲i /. (εi → aij ej)  ==  a̲ij e̲j

The commands will be valid in an arbitrary-dimensional space, but we exhibit the 3-dimensional case as we proceed.

1 Reset all the defaults and declare the elements of the transformation as scalar symbols.

𝔸; DeclareExtraScalarSymbols[a__]
{a, b, c, d, e, f, g, h, a__}

2 Construct a formula for an n-dimensional transformation An.

An_ := Table[ai,j, {i, 1, n}, {j, 1, n}]; MatrixForm[A3]

a1,1 a1,2 a1,3
a2,1 a2,2 a2,3
a3,1 a3,2 a3,3

3 Construct a formula for the adjoint Cn of An (the matrix of cofactors) using Mathematica's inbuilt functions.

Cn_ := Transpose[Det[An] Inverse[An]]; MatrixForm[C3]

-a2,3 a3,2 + a2,2 a3,3   a2,3 a3,1 - a2,1 a3,3   -a2,2 a3,1 + a2,1 a3,2
a1,3 a3,2 - a1,2 a3,3   -a1,3 a3,1 + a1,1 a3,3   a1,2 a3,1 - a1,1 a3,2
-a1,3 a2,2 + a1,2 a2,3   a1,3 a2,1 - a1,1 a2,3   -a1,2 a2,1 + a1,1 a2,2

4 Declare an ε-basis, and compute its cobasis.

εBasisn_ := DeclareBasis[n, ε]; MatrixForm[εBasis3]

ε1
ε2
ε3


εCobasisn_ := (εBasisn; Cobasis); MatrixForm[εCobasis3]

ε2 ⋀ ε3
-(ε1 ⋀ ε3)
ε1 ⋀ ε2

5 Declare an e-basis, and compute its cobasis.

eBasisn_ := DeclareBasis[n]; MatrixForm[eBasis3]

e1
e2
e3

eCobasisn_ := (eBasisn; Cobasis); MatrixForm[eCobasis3]

e2 ⋀ e3
-(e1 ⋀ e3)
e1 ⋀ e2

6 Compute the left hand side of the pseudo-code equation.

Ln_ := εCobasisn /. Thread[εBasisn → An.eBasisn]; MatrixForm[L3]

(e1 a2,1 + e2 a2,2 + e3 a2,3) ⋀ (e1 a3,1 + e2 a3,2 + e3 a3,3)
-((e1 a1,1 + e2 a1,2 + e3 a1,3) ⋀ (e1 a3,1 + e2 a3,2 + e3 a3,3))
(e1 a1,1 + e2 a1,2 + e3 a1,3) ⋀ (e1 a2,1 + e2 a2,2 + e3 a2,3)

7 Compute the right hand side of the pseudo-code equation.

Rn_ := Cn.eCobasisn; MatrixForm[R3]

(-a2,2 a3,1 + a2,1 a3,2) e1 ⋀ e2 - (a2,3 a3,1 - a2,1 a3,3) e1 ⋀ e3 + (-a2,3 a3,2 + a2,2 a3,3) e2 ⋀ e3
(a1,2 a3,1 - a1,1 a3,2) e1 ⋀ e2 - (-a1,3 a3,1 + a1,1 a3,3) e1 ⋀ e3 + (a1,3 a3,2 - a1,2 a3,3) e2 ⋀ e3
(-a1,2 a2,1 + a1,1 a2,2) e1 ⋀ e2 - (a1,3 a2,1 - a1,1 a2,3) e1 ⋀ e3 + (-a1,3 a2,2 + a1,2 a2,3) e2 ⋀ e3

8 Expand and simplify the exterior products on both sides of the equation.

Fn_ := Expand[ExpandAndSimplifyExteriorProducts[Ln]] == Expand[Rn]; F3
True

9 Verify the equation for a few other dimensions.

Table[Fi, {i, 2, 5}]
{True, True, True, True}


2.9 Solution of Linear Equations

Grassmann's approach to solving linear equations

Because of its encapsulation of the properties of linear independence, Grassmann was able to use the exterior product to present a theory and formulae for the solution of linear equations well before anyone else. Suppose m independent equations in n (m ≤ n) unknowns xi:

a11 x1 + a12 x2 + ⋯ + a1n xn = a1
a21 x1 + a22 x2 + ⋯ + a2n xn = a2
⋮
am1 x1 + am2 x2 + ⋯ + amn xn = am

Multiply these equations by e1, e2, …, em respectively and define

Ci = a1i e1 + a2i e2 + ⋯ + ami em
C0 = a1 e1 + a2 e2 + ⋯ + am em

The Ci and C0 are therefore 1-elements in a linear space of dimension m. Adding the resulting equations then gives the system in the form:

x1 C1 + x2 C2 + ⋯ + xn Cn = C0        2.31

To obtain an equation from which the unknowns xi have been eliminated, it is only necessary to multiply the linear system through by the corresponding Ci. If m = n and a solution for xi exists, it is obtained by eliminating x1, …, xi-1, xi+1, …, xn; that is, by multiplying the linear system through by C1 ⋀ ⋯ ⋀ Ci-1 ⋀ Ci+1 ⋀ ⋯ ⋀ Cn:

xi Ci ⋀ C1 ⋀ ⋯ ⋀ Ci-1 ⋀ Ci+1 ⋀ ⋯ ⋀ Cn = C0 ⋀ C1 ⋀ ⋯ ⋀ Ci-1 ⋀ Ci+1 ⋀ ⋯ ⋀ Cn

Solving for xi gives:

xi = (C0 ⋀ (C1 ⋀ ⋯ ⋀ Ci-1 ⋀ Ci+1 ⋀ ⋯ ⋀ Cn)) / (Ci ⋀ (C1 ⋀ ⋯ ⋀ Ci-1 ⋀ Ci+1 ⋀ ⋯ ⋀ Cn))        2.32

In this form we only have to calculate the (n−1)-element C1 ⋀ ⋯ ⋀ Ci-1 ⋀ Ci+1 ⋀ ⋯ ⋀ Cn once. An alternative form more reminiscent of Cramer's Rule is:

xi = (C1 ⋀ ⋯ ⋀ Ci-1 ⋀ C0 ⋀ Ci+1 ⋀ ⋯ ⋀ Cn) / (C1 ⋀ C2 ⋀ ⋯ ⋀ Cn)        2.33

All the well-known properties of solutions to systems of linear equations proceed directly from the properties of the exterior products of the Ci.
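Equation 2.33 can be exercised numerically (an illustrative Python sketch, not the book's Mathematica code; the helper names are ours). Each unknown is obtained as a ratio of coefficients of the basis n-element, exactly as in Cramer's Rule.

```python
from fractions import Fraction

def wedge(a, b):
    # Elements as {sorted-index-tuple: coefficient} dicts
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) != len(idx):
                continue  # repeated basis factor gives zero
            sign = 1  # parity of the permutation sorting idx
            for i in range(len(idx)):
                for j in range(i + 1, len(idx)):
                    if idx[i] > idx[j]:
                        sign = -sign
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + sign * ca * cb
    return out

def one_element(coeffs):
    return {(i,): Fraction(c) for i, c in enumerate(coeffs, 1)}

def wedge_all(elts):
    prod = elts[0]
    for e in elts[1:]:
        prod = wedge(prod, e)
    return prod

# The system x + 2y = 5, 3x + 4y = 6 written as x C1 + y C2 = C0 (equation 2.31)
C1 = one_element([1, 3])
C2 = one_element([2, 4])
C0 = one_element([5, 6])
top = (1, 2)
# Equation 2.33: xi = (C1 ⋀ ... C0 ... ⋀ Cn) / (C1 ⋀ ... ⋀ Cn)
den = wedge_all([C1, C2])[top]
x = wedge_all([C0, C2])[top] / den
y = wedge_all([C1, C0])[top] / den
# Verify componentwise that x C1 + y C2 == C0
assert C1[(1,)] * x + C2[(1,)] * y == C0[(1,)]
assert C1[(2,)] * x + C2[(2,)] * y == C0[(2,)]
```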

Example solution: 3 equations in 4 unknowns

Consider the following system of 3 equations in 4 unknowns.

a - 2 b + 3 c + 4 d = 2
2 a + 7 c - 5 d = 9
a + b + c + d = 8

To solve this system, first declare a basis of dimension at least equal to the number of equations. In this example, we declare a 4-space so that we can add another equation later.

DeclareBasis[4]
{e1, e2, e3, e4}

Next define a 1-element for each unknown which encodes its coefficients, and one which encodes the constants.

Ca = e1 + 2 e2 + e3;
Cb = -2 e1 + e3;
Cc = 3 e1 + 7 e2 + e3;
Cd = 4 e1 - 5 e2 + e3;
C0 = 2 e1 + 9 e2 + 8 e3;

The system equation then becomes:

a Ca + b Cb + c Cc + d Cd = C0

Suppose we wish to eliminate a and d, thus giving a relationship between b and c. To accomplish this we multiply the system equation through by Ca ⋀ Cd.

(a Ca + b Cb + c Cc + d Cd) ⋀ Ca ⋀ Cd = C0 ⋀ Ca ⋀ Cd

Or, since the terms involving a and d will obviously be eliminated by their product with Ca ⋀ Cd, we have more simply:

(b Cb + c Cc) ⋀ Ca ⋀ Cd = C0 ⋀ Ca ⋀ Cd

This can be put in a form similar to that of equations 2.32 and 2.33 by dividing through by the right side.

((b Cb + c Cc) ⋀ Ca ⋀ Cd) / (C0 ⋀ Ca ⋀ Cd) = 1

Expanding and simplifying the numerator and denominator then gives the required relationship between b and c.

𝔾[(b Cb + c Cc) ⋀ Ca ⋀ Cd] / 𝔾[C0 ⋀ Ca ⋀ Cd]

(1/63) (27 b - 29 c) = 1

Example solution: 4 equations in 4 unknowns

Had we had a fourth equation, say b - 3 c + d = 7, we could have simply added the new information for each coefficient.

Ca = e1 + 2 e2 + e3;
Cb = -2 e1 + e3 + e4;
Cc = 3 e1 + 7 e2 + e3 - 3 e4;
Cd = 4 e1 - 5 e2 + e3 + e4;
C0 = 2 e1 + 9 e2 + 8 e3 + 7 e4;

Now we can use equation 2.33 directly

b = (Ca ⋀ C0 ⋀ Cc ⋀ Cd) / (Ca ⋀ Cb ⋀ Cc ⋀ Cd)

and calculate the result directly by simplifying:

b = 𝔾[Ca ⋀ C0 ⋀ Cc ⋀ Cd] / 𝔾[Ca ⋀ Cb ⋀ Cc ⋀ Cd]

b = -(30/41)

2.10 Simplicity

The concept of simplicity

An important concept in the Grassmann algebra is that of simplicity. Earlier in the chapter we introduced the concept informally. Now we will discuss it in a little more detail.

An element is simple if it is the exterior product of 1-elements. We extend this definition to scalars by defining all scalars to be simple. Clearly also, since any n-element always reduces to a product of 1-elements, all n-elements are simple. Thus we see immediately that all 0-elements, 1-elements, and n-elements are simple. In the next section we show that all (n−1)-elements are also simple.

In 2-space therefore, all elements of Λ0, Λ1, and Λ2 are simple. In 3-space all elements of Λ0, Λ1, Λ2 and Λ3 are simple. In higher dimensional spaces, elements of grade m with 2 ≤ m ≤ n−2 are therefore the only ones that may not be simple. In 4-space, the only elements that may not be simple are those of grade 2.

There is a straightforward way of testing the simplicity of 2-elements not shared by elements of higher grade. A 2-element is simple if and only if its exterior square is zero. (The exterior square of an element a is a ⋀ a.) Since odd elements (elements of odd grade) anti-commute, the exterior square of odd elements will be zero, even if they are not simple. An even element of grade 4 or higher may be of the form of the exterior product of a 1-element with a non-simple 3-element: whence its exterior square is zero without its being simple.

We will return to a further discussion of simplicity from the point of view of factorization in Chapter 3: The Regressive Product.

All (n−1)-elements are simple

In general, (n−1)-elements are simple. We can show this as follows.

Consider first two simple (n−1)-elements. Since they can differ by at most one 1-element factor (otherwise they would together contain more than n independent factors), we can express them as a ⋀ b1 and a ⋀ b2, where a is a common simple (n−2)-element. Summing these, and factoring out the common (n−2)-element, gives a simple (n−1)-element.

a ⋀ b1 + a ⋀ b2 = a ⋀ (b1 + b2)

Any (n−1)-element can be expressed as the sum of simple (n−1)-elements. We can therefore prove the general case by supposing pairs of simple elements to be combined to form another simple element, until just one simple element remains.

Conditions for simplicity of a 2-element in a 4-space

Consider a simple 2-element in a 4-dimensional space. First we declare a 4-dimensional basis for the space, and then compose a general bivector B. We can shortcut the entry of the bivector by using the GrassmannAlgebra function ComposeBivector. ComposeBivector automatically declares the coefficients to be scalar symbols by including them in the list of declared scalar symbols.

DeclareBasis[4]; B = ComposeBivector[a]

a1 e1 ⋀ e2 + a2 e1 ⋀ e3 + a3 e1 ⋀ e4 + a4 e2 ⋀ e3 + a5 e2 ⋀ e4 + a6 e3 ⋀ e4


Since we are supposing B to be simple, B ⋀ B = 0. To see how this constrains the coefficients ai of the terms of B, we expand and simplify the product:

𝔾[B ⋀ B]

(2 a3 a4 - 2 a2 a5 + 2 a1 a6) e1 ⋀ e2 ⋀ e3 ⋀ e4

We thus see that the condition for a 2-element in a 4-dimensional space to be simple may be written:

a3 a4 - a2 a5 + a1 a6 = 0        2.34
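Condition 2.34 can be confirmed numerically: the coefficients of any exterior product of two 1-elements in 4-space satisfy it, while a sum of two such products generally does not. (An illustrative Python sketch; the helper names wedge2 and simple_condition are ours.)

```python
def wedge2(u, v):
    # Coefficients a1..a6 of u ⋀ v, in the order e1⋀e2, e1⋀e3, e1⋀e4, e2⋀e3, e2⋀e4, e3⋀e4
    pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    return [u[i] * v[j] - u[j] * v[i] for i, j in pairs]

def simple_condition(a):
    # Left side of equation 2.34: a3 a4 - a2 a5 + a1 a6
    return a[2] * a[3] - a[1] * a[4] + a[0] * a[5]

B = wedge2([1, 2, 3, 4], [5, 6, 7, 8])        # simple by construction
assert simple_condition(B) == 0

# e1⋀e2 + e3⋀e4, the classic non-simple bivector
C = [b + c for b, c in zip(wedge2([1, 0, 0, 0], [0, 1, 0, 0]),
                           wedge2([0, 0, 1, 0], [0, 0, 0, 1]))]
assert simple_condition(C) != 0
```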

Conditions for simplicity of a 2-element in a 5-space

The situation in 5-space is a little more complex. First we declare a 5-space, and then compose a general bivector.

DeclareBasis[5]; B = ComposeBivector[a]

a1 e1 ⋀ e2 + a2 e1 ⋀ e3 + a3 e1 ⋀ e4 + a4 e1 ⋀ e5 + a5 e2 ⋀ e3 +
 a6 e2 ⋀ e4 + a7 e2 ⋀ e5 + a8 e3 ⋀ e4 + a9 e3 ⋀ e5 + a10 e4 ⋀ e5

Next, we expand and simplify its exterior square.

B2 = 𝔾[B ⋀ B]

(2 a3 a5 - 2 a2 a6 + 2 a1 a8) e1 ⋀ e2 ⋀ e3 ⋀ e4 + (2 a4 a5 - 2 a2 a7 + 2 a1 a9) e1 ⋀ e2 ⋀ e3 ⋀ e5 +
 (2 a4 a6 - 2 a3 a7 + 2 a1 a10) e1 ⋀ e2 ⋀ e4 ⋀ e5 + (2 a4 a8 - 2 a3 a9 + 2 a2 a10) e1 ⋀ e3 ⋀ e4 ⋀ e5 +
 (2 a7 a8 - 2 a6 a9 + 2 a5 a10) e2 ⋀ e3 ⋀ e4 ⋀ e5

For the bivector B to be simple we require this product B2 to be zero, hence each of its five coefficients must be zero simultaneously. We can extract the coefficients by using GrassmannAlgebra's ExtractCoefficients function, and then thread them into a list of five equations.

A = Thread[ExtractCoefficients[ΛBasis][B2] == {0, 0, 0, 0, 0}]

{2 a3 a5 - 2 a2 a6 + 2 a1 a8 == 0, 2 a4 a5 - 2 a2 a7 + 2 a1 a9 == 0,
 2 a4 a6 - 2 a3 a7 + 2 a1 a10 == 0, 2 a4 a8 - 2 a3 a9 + 2 a2 a10 == 0,
 2 a7 a8 - 2 a6 a9 + 2 a5 a10 == 0}

Finally, we can solve these five equations with Mathematica's inbuilt Solve function.

Solve[A, {a8, a9, a10}]

{{a8 → -((a3 a5 - a2 a6)/a1), a9 → -((a4 a5 - a2 a7)/a1), a10 → -((a4 a6 - a3 a7)/a1)}}

By inspection we see that these three solutions correspond to the first three equations. Consequently, from the generality of the expression for the bivector, we expect any three of the equations to suffice.

In summary we can say that a 2-element in a 5-space is simple if any three of the following equations are satisfied:

a3 a5 - a2 a6 + a1 a8 = 0
a4 a5 - a2 a7 + a1 a9 = 0
a4 a6 - a3 a7 + a1 a10 = 0
a4 a8 - a3 a9 + a2 a10 = 0
a7 a8 - a6 a9 + a5 a10 = 0        2.35

The concept of simplicity will be explored in more detail in Chapter 3: The Regressive Product.

Factorizing simple elements

In Chapter 3: The Regressive Product, we will develop an algorithm for obtaining a factorization of a simple element. In this section we discuss factorization from first principles using only the exterior product.

To make the discussion concrete, we take an example. Suppose we have a 4-element X which we would like to factorize. X is expressed in terms of six 1-elements, and is known to be simple. For reference purposes we begin with the element in factored form X0 to assure ourselves that it is simple, but the example will take its expanded form X as starting point. The factorization we will obtain is of course unlikely to be of the same form as X0.

X0 = (2 x + 3 y) ⋀ (u - v - x) ⋀ (2 w - 5 z + y + u) ⋀ (z - x);
X = 𝔾[X0]

3 u ⋀ v ⋀ x ⋀ y + 2 u ⋀ v ⋀ x ⋀ z + 3 u ⋀ v ⋀ y ⋀ z + 6 u ⋀ w ⋀ x ⋀ y +
 4 u ⋀ w ⋀ x ⋀ z + 6 u ⋀ w ⋀ y ⋀ z - 14 u ⋀ x ⋀ y ⋀ z - 6 v ⋀ w ⋀ x ⋀ y -
 4 v ⋀ w ⋀ x ⋀ z - 6 v ⋀ w ⋀ y ⋀ z + 17 v ⋀ x ⋀ y ⋀ z + 6 w ⋀ x ⋀ y ⋀ z

Clearly, a factor of X must be a linear combination of u, v, w, x, y and z. Let us call it f1. The si are scalar coefficients which we declare as scalar symbols.

DeclareExtraScalarSymbols[s_]; f1 = s1 u + s2 v + s3 w + s4 x + s5 y + s6 z;

Now if f1 is to be a factor of X, its exterior product with X must be zero. Hence


X1 = (𝔾[f1 ⋀ X] == 0)

(-6 s1 - 6 s2 + 3 s3) u ⋀ v ⋀ w ⋀ x ⋀ y + (-4 s1 - 4 s2 + 2 s3) u ⋀ v ⋀ w ⋀ x ⋀ z +
 (-6 s1 - 6 s2 + 3 s3) u ⋀ v ⋀ w ⋀ y ⋀ z + (17 s1 + 14 s2 + 3 s4 - 2 s5 + 3 s6) u ⋀ v ⋀ x ⋀ y ⋀ z +
 (6 s1 + 14 s3 + 6 s4 - 4 s5 + 6 s6) u ⋀ w ⋀ x ⋀ y ⋀ z +
 (6 s2 - 17 s3 - 6 s4 + 4 s5 - 6 s6) v ⋀ w ⋀ x ⋀ y ⋀ z == 0

Since the coefficients of each of these terms must be zero, we now have some equations of constraint between the coefficients of f1. In this case it is easy to see that the first three constraints are essentially the same, so we can satisfy them by substituting 2 s1 + 2 s2 for s3.

X2 = (X1 /. s3 → 2 s1 + 2 s2) // Simplify

(17 s1 + 14 s2 + 3 s4 - 2 s5 + 3 s6) (u ⋀ v ⋀ x ⋀ y ⋀ z + 2 u ⋀ w ⋀ x ⋀ y ⋀ z - 2 v ⋀ w ⋀ x ⋀ y ⋀ z) == 0

Similarly we can eliminate s5 to satisfy the remaining constraint.

X3 = (X2 /. s5 → (17 s1 + 14 s2 + 3 s4 + 3 s6)/2) // Simplify
True

Finally, we can substitute back into the expression for f1 to get an expression for a generic factor of X, which we call f2.

f2 = f1 /. s3 → 2 s1 + 2 s2 /. s5 → (17 s1 + 14 s2 + 3 s4 + 3 s6)/2

u s1 + v s2 + w (2 s1 + 2 s2) + x s4 + z s6 + (1/2) y (17 s1 + 14 s2 + 3 s4 + 3 s6)

X can now be constructed as the exterior product of any four linearly independent versions of f2. As an example, we create 1-elements xi by letting one si equal 1, and the others equal 0.

x1 = f2 /. {s1 → 1, s2 → 0, s4 → 0, s6 → 0}
u + 2 w + (17/2) y

x2 = f2 /. {s1 → 0, s2 → 1, s4 → 0, s6 → 0}
v + 2 w + 7 y

x3 = f2 /. {s1 → 0, s2 → 0, s4 → 1, s6 → 0}
x + (3/2) y

x4 = f2 /. {s1 → 0, s2 → 0, s4 → 0, s6 → 1}
(3/2) y + z

The exterior product of these xi is

X4 = 𝔾[x1 ⋀ x2 ⋀ x3 ⋀ x4]

(3/2) u ⋀ v ⋀ x ⋀ y + u ⋀ v ⋀ x ⋀ z + (3/2) u ⋀ v ⋀ y ⋀ z + 3 u ⋀ w ⋀ x ⋀ y +
 2 u ⋀ w ⋀ x ⋀ z + 3 u ⋀ w ⋀ y ⋀ z - 7 u ⋀ x ⋀ y ⋀ z - 3 v ⋀ w ⋀ x ⋀ y -
 2 v ⋀ w ⋀ x ⋀ z - 3 v ⋀ w ⋀ y ⋀ z + (17/2) v ⋀ x ⋀ y ⋀ z + 3 w ⋀ x ⋀ y ⋀ z

This turns out to be half of X. So by doubling one of the xi, say x4, we have the final result, which we can verify.

𝔾[X == (u + 2 w + (17/2) y) ⋀ (v + 2 w + 7 y) ⋀ (x + (3/2) y) ⋀ (3 y + 2 z)]
True

Applying the process to a non-simple element

Consider the case where X is not simple. For example

X = x ⋀ y + u ⋀ v;

If X had a 1-element factor, it would have to be some linear combination of x, y, u, and v.

f = s1 x + s2 y + s3 u + s4 v;

If f is a factor of X, its exterior product with X must be zero.

𝔾[f ⋀ X]

s1 u ⋀ v ⋀ x + s2 u ⋀ v ⋀ y + s3 u ⋀ x ⋀ y + s4 v ⋀ x ⋀ y

Clearly this expression can only be zero if all the coefficients si are zero, which in turn implies f must be zero. Hence in this case, X has no 1-element factors.

More generally, an element being non-simple does not imply that it has no 1-element factors. Here is a non-simple element with two 1-element factors.

X = (x ⋀ y + u ⋀ v) ⋀ w ⋀ z;
f = s1 x + s2 y + s3 u + s4 v + s5 w + s6 z;
𝔾[f ⋀ X]

-s1 u ⋀ v ⋀ w ⋀ x ⋀ z - s2 u ⋀ v ⋀ w ⋀ y ⋀ z + s3 u ⋀ w ⋀ x ⋀ y ⋀ z + s4 v ⋀ w ⋀ x ⋀ y ⋀ z

Hence s1, s2, s3, s4 must be zero, leaving f = s5 w + s6 z as a generic factor of X.


2.11 Exterior Division

The definition of an exterior quotient

It will be seen in Chapter 4: Geometric Interpretations that one way of defining geometric entities like lines and planes is by the exterior quotient of two interpreted elements. Such a quotient does not yield a unique element. Indeed, this is why it is useful for defining geometric entities, for they are composed of sets of elements.

The exterior quotient of a simple (m+k)-element a by another simple k-element b which is contained in a is defined to be the most general m-element contained in a (g, say) such that the exterior product of the quotient g with the denominator b yields the numerator a.

g = a / b   ⟺   g ⋀ b = a        2.36

Note the convention adopted for the order of the factors. In Chapter 9: Exploring Grassmann Algebra we shall generalize this definition of quotient to define the general division of Grassmann numbers.

Division by a 1-element

Consider the quotient of a simple (m+1)-element a by a 1-element b. Since a contains b, we can write it as a1 ⋀ a2 ⋀ ⋯ ⋀ am ⋀ b. However, we could also have written this numerator in the more general form:

(a1 + t1 b) ⋀ (a2 + t2 b) ⋀ ⋯ ⋀ (am + tm b) ⋀ b

where the ti are arbitrary scalars. It is in this more general form that the numerator must be written before b can be 'divided out'. Thus the quotient may be written:

(a1 ⋀ a2 ⋀ ⋯ ⋀ am ⋀ b) / b = (a1 + t1 b) ⋀ (a2 + t2 b) ⋀ ⋯ ⋀ (am + tm b)        2.37
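Formula 2.37 is easy to confirm numerically: wedging the more general factors back with b recovers the original numerator, whatever the ti, because every cross term contains b twice. (An illustrative Python sketch; the helper names wedge and add are ours.)

```python
def wedge(a, b):
    # Elements as {sorted-index-tuple: coefficient} dicts
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) != len(idx):
                continue  # repeated basis factor gives zero
            sign = 1  # parity of the permutation sorting idx
            for i in range(len(idx)):
                for j in range(i + 1, len(idx)):
                    if idx[i] > idx[j]:
                        sign = -sign
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def add(u, v, t=1):
    # u + t*v for 1-elements
    out = dict(u)
    for k, c in v.items():
        out[k] = out.get(k, 0) + t * c
    return {k: c for k, c in out.items() if c != 0}

a1 = {(1,): 1, (2,): 2}    # e1 + 2 e2
a2 = {(2,): 1, (3,): -1}   # e2 - e3
b  = {(1,): 3, (4,): 1}    # 3 e1 + e4

numerator = wedge(wedge(a1, a2), b)
# Any choice of t1, t2 gives a valid quotient: (a1 + t1 b) ⋀ (a2 + t2 b)
for t1, t2 in [(0, 0), (5, -2), (1, 7)]:
    q = wedge(add(a1, b, t1), add(a2, b, t2))
    assert wedge(q, b) == numerator
```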


Division by a k-element

Suppose now we have a quotient with a simple k-element denominator. Since the denominator is contained in the numerator, we can write the quotient as:

((a1 ⋀ a2 ⋀ ⋯ ⋀ am) ⋀ (b1 ⋀ b2 ⋀ ⋯ ⋀ bk)) / (b1 ⋀ b2 ⋀ ⋯ ⋀ bk)

To prepare for the 'dividing out' of the b factors we rewrite the numerator in the more general form:

(a1 + t11 b1 + ⋯ + t1k bk) ⋀ (a2 + t21 b1 + ⋯ + t2k bk) ⋀ ⋯ ⋀ (am + tm1 b1 + ⋯ + tmk bk) ⋀ (b1 ⋀ b2 ⋀ ⋯ ⋀ bk)

where the tij are arbitrary scalars. Hence:

((a1 ⋀ a2 ⋀ ⋯ ⋀ am) ⋀ (b1 ⋀ b2 ⋀ ⋯ ⋀ bk)) / (b1 ⋀ b2 ⋀ ⋯ ⋀ bk)
 = (a1 + t11 b1 + ⋯ + t1k bk) ⋀ (a2 + t21 b1 + ⋯ + t2k bk) ⋀ ⋯ ⋀ (am + tm1 b1 + ⋯ + tmk bk)        2.38

In the special case of m equal to 1, this reduces to:

(a ⋀ b1 ⋀ b2 ⋀ ⋯ ⋀ bk) / (b1 ⋀ b2 ⋀ ⋯ ⋀ bk) = a + t1 b1 + ⋯ + tk bk        2.39

We will later see that this formula neatly defines a hyperplane.

Special cases

In the special cases where the numerator factor or the denominator is a scalar, the results are unique.

(a b1∧b2∧⋯∧bk)/(b1∧b2∧⋯∧bk) ≡ a

2.40

(b a1∧a2∧⋯∧am)/b ≡ a1∧a2∧⋯∧am

2.41
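The 'dividing out' process can be checked numerically. The following Python sketch is ours, not part of the GrassmannAlgebra package: it represents an element as a dictionary mapping sorted tuples of basis indices to coefficients, and verifies that every member of the family of quotients, whatever the arbitrary scalars t1 and t2, multiplies back by the denominator b to the same numerator.

```python
# Toy exterior (wedge) algebra with integer coefficients.
# An element is a dict {sorted index tuple: coefficient}; e.g. {(0,): 1} is e0.

def perm_sign(seq):
    """Sign of the permutation that sorts seq (bubble-sort swap count)."""
    seq, sign = list(seq), 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def wedge(a, b):
    """Exterior product: antisymmetric, so repeated basis factors give zero."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) == len(idx):
                key = tuple(sorted(idx))
                out[key] = out.get(key, 0) + perm_sign(idx) * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def add(a, b):
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def scale(s, a):
    return {k: s * v for k, v in a.items() if s * v != 0}

# Numerator a1 ^ a2 ^ b, and the family of quotients (a1 + t1 b) ^ (a2 + t2 b).
a1, a2, b = {(0,): 1}, {(1,): 1}, {(2,): 1}
numerator = wedge(wedge(a1, a2), b)

for t1, t2 in [(0, 0), (2, -3), (5, 7)]:
    quotient = wedge(add(a1, scale(t1, b)), add(a2, scale(t2, b)))
    assert wedge(quotient, b) == numerator   # the ti terms vanish against b
```

Every member of the family is a legitimate quotient: the arbitrary terms ti b are annihilated on remultiplication by b, which is exactly why the exterior quotient is defined only up to such additions.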


Automating the division process

By using the GrassmannAlgebra function ExteriorQuotient you can automatically generate the expression resulting from the exterior quotient of two exterior products. For example, in a 3-space, the exterior quotient of the basis trivector e1∧e2∧e3 by the basis bivector e1∧e2 results in the variable vector e3 + e1 t1 + e2 t2, where the coefficients t1 and t2 are automatically generated arbitrary scalar symbols (Mathematica automatically puts these scalar symbols after the basis elements because of its internal ordering routine).

DeclareBasis[3]; ExteriorQuotient[(e1∧e2∧e3)/(e1∧e2)]
e3 + e1 t1 + e2 t2

Whereas the exterior quotient of the basis trivector e1∧e2∧e3 by the basis vector e1 results in the variable bivector (e2 + e1 t1)∧(e3 + e1 t2).

ExteriorQuotient[(e1∧e2∧e3)/e1]
(e2 + e1 t1)∧(e3 + e1 t2)

ExteriorQuotient requires that the factors in the denominator are also specifically displayed in the numerator.

ExteriorQuotient[((x - y)∧(y - z)∧(z - x))/((z - x)∧(x - y))]
y - z + (x - y) t1 + (-x + z) t2

More generally, you can use ExteriorQuotient on any Grassmann expression involving exterior quotients.

ExteriorQuotient[a (e1∧e2∧e3)/e1 + b (e1∧e2∧e3)/(e1∧e2) + c (e1∧e2∧e3)/(e1∧e2∧e3)]
c + b (e3 + e1 t1 + e2 t2) + a (e2 + e1 t3)∧(e3 + e1 t4)

2.12 Multilinear Forms


The span of a simple element
We have already introduced the notion of the space of a simple m-element in Section 2.3. The space of a simple m-element is the m-space whose underlying linear space consists of all the 1-elements in the m-element. A 1-element is said to be in a simple m-element if and only if their exterior product is zero. The space of a simple m-element is a subspace of the n-space in which it resides. An n-space and the space of its n-element are thus identical.

In order to explore the space of a simple m-element we can of course choose as basis for its underlying linear space any set of m independent 1-elements whose exterior product is congruent to the given m-element. Such sets span the space of the simple m-element, and by extension, we call each of them a span of the m-element. For example, if A ≡ x∧y∧z, then we can say that a span of A is {x, y, z}. But of course, {x + y, y, z} and {a x, y, z} are also spans of A.

Practically however, the simple m-elements we are working with will often be available to us in some known factored form, that is, as a given exterior product of m 1-elements. In these cases it is convenient for computational purposes to choose these factors to span the element, and, by abuse of terminology, to call the list of these factors the span of the element. (We can read "the span" to mean "the current working span" of the element.) For example, if A ≡ x∧y∧z, then we say that the span of A is {x, y, z}.

More generally, we define the k-span of a simple m-element as the list of all the essentially different exterior products of grade k formed from the span (or 1-span) of the element.

Given this provisory preamble, we can now say that the most straightforward way to conceive of the span of a simple m-element is as the basis of an m-space, where the basis is the list of factors of the m-element in order. Similarly, the k-span of a simple m-element may be conceived of as the basis of the concomitant exterior product space of grade k, and the k-cospan of the element may be conceived of as its cobasis.

The k-span of a simple m-element has no natural notion of invariance attached to it, because the same m-element, factorized differently, will generate different k-spans. On the other hand, the pair comprising the k-span of an m-element and its concomitant (m-k)-span will, if present together appropriately in a multilinear form, generate results invariant to different factorizations.
Many expressions in the algebra are multilinear forms, and an understanding of their properties is instructive. We will explore these forms in the sections to follow. But first, we solidify what these definitions mean by seeing how to compose a span.

Composing spans
In this section we discuss how to compose the span (1-span), k-span or k-cospan of a simple m-element. As a concrete case, let us take a 4-element A. The compositions are independent of the dimension of the currently declared space, but any computations with them will require the dimension of the space to be at least that of the m-element. First we compose A. (By using ComposeSimpleForm, the ai are automatically added to the list of declared vector symbols.)
A = ComposeSimpleForm[a₄, 1]
a1∧a2∧a3∧a4

To compose the span of A, you can use the GrassmannAlgebra function ComposeSpan[A][1] or its alternative representation 𝔖_1[A]. To compose the 2-span of A you can use ComposeSpan[A][2] or its alternative 𝔖_2[A].


{ComposeSpan[A][1], 𝔖_2[A]}
{{a1, a2, a3, a4}, {a1∧a2, a1∧a3, a1∧a4, a2∧a3, a2∧a4, a3∧a4}}

To compose the cospan of A, you can use ComposeCospan[A][1] or 𝔖^1[A]. To compose the k-cospan of A you can use ComposeCospan[A][2] or 𝔖^2[A]. (You can enter the 2 in 𝔖^2 as either a superscript or a power.) Note that while a k-span is a list of k-elements, a k-cospan is a list of (m-k)-elements.

{ComposeCospan[A][1], 𝔖^2[A]}
{{a2∧a3∧a4, -(a1∧a3∧a4), a1∧a2∧a4, -(a1∧a2∧a3)}, {a3∧a4, -(a2∧a4), a2∧a3, a1∧a4, -(a1∧a3), a1∧a2}}

Since the exterior product is Listable (like all Grassmann products in GrassmannAlgebra), the exterior product of a k-span and its k-cospan gives a list of 4-elements, each of which simplifies to A.

𝔖_2[A] ∧ 𝔖^2[A]
{a1∧a2∧a3∧a4, a1∧a3∧-(a2∧a4), a1∧a4∧a2∧a3, a2∧a3∧a1∧a4, a2∧a4∧-(a1∧a3), a3∧a4∧a1∧a2}

In the discussion of multilinear forms to follow we will usually use the alternative representation 𝔖 since its compact nature makes the composition of the forms clearer. Further syntax for using 𝔖 follows. To compose the k-spans or k-cospans for several values of k you can enter the values of k enclosed in [ ]. For example

𝔖_[1,3][A]
{{a1, a2, a3, a4}, {a1∧a2∧a3, a1∧a2∧a4, a1∧a3∧a4, a2∧a3∧a4}}

𝔖^[1,3][A]
{{a2∧a3∧a4, -(a1∧a3∧a4), a1∧a2∧a4, -(a1∧a2∧a3)}, {a4, -a3, a2, -a1}}

If you want to multiply the elements of a span by scalar coefficients you can use Mathematica's inbuilt listability attribute of Times, and simply multiply the span by the list of coefficients (making sure that the two lists are the same length). The alias 𝒮 stands for DeclareExtraScalarSymbols, and is here used to declare any subscripted s as a scalar.

𝒮[s_]; {s1, s2, s3, s4, s5, s6} 𝔖_2[A]
{s1 a1∧a2, s2 a1∧a3, s3 a1∧a4, s4 a2∧a3, s5 a2∧a4, s6 a3∧a4}

If you want to multiply the elements of a span by scalar coefficients and then sum them, you can use Mathematica's Dot (.) function.


{s1, s2, s3, s4, s5, s6} . 𝔖_2[A]
s1 a1∧a2 + s2 a1∧a3 + s3 a1∧a4 + s4 a2∧a3 + s5 a2∧a4 + s6 a3∧a4

If you want to use some other type of listable product between your elements (other than Times) and then sum them, you can use the GrassmannAlgebra alias Σ for Mathematica's Total function, which simply sums the elements of any list.

Σ[{s1, s2, s3, s4, s5, s6} 𝔖_2[A]]
s1 a1∧a2 + s2 a1∧a3 + s3 a1∧a4 + s4 a2∧a3 + s5 a2∧a4 + s6 a3∧a4

To pick out the ith element in any of these lists you can use Mathematica's inbuilt Part function by giving a subscript to the list in the form ⟦i⟧. For example

𝔖_2[A]⟦3⟧
a1∧a4

Thus we can write A as the exterior product of any two corresponding terms from a k-span and a k-cospan.

%[A == 𝔖_2[A]⟦3⟧ ∧ 𝔖^2[A]⟦3⟧]
True
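The span and cospan constructions are straightforward to emulate outside Mathematica. The Python sketch below is our own (the names k_span and k_cospan are not GrassmannAlgebra functions): it builds the k-span in lexicographic order and the signed k-cospan from a factor list, and checks that corresponding span and cospan terms always multiply back to the original m-element.

```python
from itertools import combinations
from functools import reduce

def perm_sign(seq):
    """Sign of the permutation that sorts seq."""
    seq, sign = list(seq), 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def wedge(a, b):
    """Exterior product of {index-tuple: coeff} dictionaries."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) == len(idx):
                key = tuple(sorted(idx))
                out[key] = out.get(key, 0) + perm_sign(idx) * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def wedge_all(factors):
    return reduce(wedge, factors, {(): 1})

def scale(s, a):
    return {k: s * v for k, v in a.items() if s * v != 0}

def k_span(factors, k):
    """All essentially different grade-k products of the factors, lex order."""
    return [wedge_all([factors[i] for i in c])
            for c in combinations(range(len(factors)), k)]

def k_cospan(factors, k):
    """Complementary (m-k)-products, signed so span[i] ^ cospan[i] == A."""
    m, out = len(factors), []
    for c in combinations(range(m), k):
        rest = [i for i in range(m) if i not in c]
        out.append(scale(perm_sign(list(c) + rest),
                         wedge_all([factors[i] for i in rest])))
    return out

# Four independent (non-basis) factors in a 5-dimensional space.
factors = [{(0,): 1, (1,): 2}, {(1,): 1, (2,): -1},
           {(2,): 1, (3,): 3}, {(3,): 1, (4,): 1}]
A = wedge_all(factors)
for k in range(5):
    for s, c in zip(k_span(factors, k), k_cospan(factors, k)):
        assert wedge(s, c) == A   # any corresponding pair rebuilds A
```

For basis factors a1, a2, a3, a4 the computed 1-cospan is {a2∧a3∧a4, -(a1∧a3∧a4), a1∧a2∧a4, -(a1∧a2∧a3)}, matching the signs displayed by ComposeCospan above.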

Example: Refactorizations
Let us suppose we have a simple element A for which we have a factorization, and that we wish to obtain a factorization that has a given 1-element X (which we know belongs to A) as one of the factors. Take again the example of the previous section.
A = (e4 - 2 e7)∧(3 e5 + e2)∧(e1 + 2 e2); X = e1 - 6 e5;

By taking the 2-span of A and multiplying each of the elements by X we get 3 candidates.
𝔖_2[A] ∧ X
{(e4 - 2 e7)∧(e2 + 3 e5)∧(e1 - 6 e5), (e4 - 2 e7)∧(e1 + 2 e2)∧(e1 - 6 e5), (e2 + 3 e5)∧(e1 + 2 e2)∧(e1 - 6 e5)}

Expanding and simplifying these eliminates the third candidate.


%[𝔖_2[A] ∧ X]
{-(e1∧e2∧e4) + 2 e1∧e2∧e7 + 3 e1∧e4∧e5 + 6 e1∧e5∧e7 + 6 e2∧e4∧e5 + 12 e2∧e5∧e7, -2 e1∧e2∧e4 + 4 e1∧e2∧e7 + 6 e1∧e4∧e5 + 12 e1∧e5∧e7 + 12 e2∧e4∧e5 + 24 e2∧e5∧e7, 0}

In expanded form, A was equal to


-(e1∧e2∧e4) + 2 e1∧e2∧e7 + 3 e1∧e4∧e5 + 6 e1∧e5∧e7 + 6 e2∧e4∧e5 + 12 e2∧e5∧e7

The first two candidates are thus valid factorizations congruent to A.
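The refactorization search can be replayed in a few lines of Python with a small model of the exterior product (our own sketch, not package code; e1…e8 are mapped to indices 0…7): wedge each 2-span element with X and keep the candidates that survive expansion.

```python
from functools import reduce

def perm_sign(seq):
    """Sign of the permutation that sorts seq."""
    seq, sign = list(seq), 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def wedge(a, b):
    """Exterior product of {index-tuple: coeff} dictionaries."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) == len(idx):
                key = tuple(sorted(idx))
                out[key] = out.get(key, 0) + perm_sign(idx) * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def wedge_all(fs):
    return reduce(wedge, fs, {(): 1})

def scale(s, a):
    return {k: s * v for k, v in a.items() if s * v != 0}

a1 = {(3,): 1, (6,): -2}    # e4 - 2 e7
a2 = {(1,): 1, (4,): 3}     # 3 e5 + e2
a3 = {(0,): 1, (1,): 2}     # e1 + 2 e2
X  = {(0,): 1, (4,): -6}    # e1 - 6 e5

A = wedge_all([a1, a2, a3])
assert wedge(X, A) == {}    # X is indeed contained in A

two_span = [wedge(a1, a2), wedge(a1, a3), wedge(a2, a3)]
candidates = [wedge(s, X) for s in two_span]

assert candidates[2] == {}            # the third candidate is eliminated
assert candidates[0] == A             # the first is A itself
assert candidates[1] == scale(2, A)   # the second is 2A, congruent to A
```

Both surviving candidates contain X as a factor by construction, so each is a refactorization of (a scalar multiple of) A displaying X.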

Multilinear forms
In the chapters to follow we introduce various new Grassmann products in addition to the exterior product: the regressive, interior, inner, generalized, hypercomplex and Clifford products. In the computation of expressions involving these products we will often come across formulae which express the computation as a sum of terms where each term itself is a product. These sums must have the same natural invariance to transformations possessed by the original expression, and it turns out they are all examples of multilinear forms. For our purposes, a typical multilinear form is a function M[a1, a2, …, am] of 1-elements ai which is linear in each of the ai. That is,

M[a1, a2, …, α1 b1 + α2 b2, …, am] == α1 M[a1, a2, …, b1, …, am] + α2 M[a1, a2, …, b2, …, am]

where the αi are scalars and the bi are also 1-elements. In this book however, because the M are usually viewed as products, we write them in their infix form, just as we have seen that Mathematica allows us to do with the exterior product: entering an expression in the argument form above displays it in its infix form.

Wedge[a1, a2, …, α1 b1 + α2 b2, …, am] == α1 Wedge[a1, a2, …, b1, …, am] + α2 Wedge[a1, a2, …, b2, …, am]
a1∧a2∧⋯∧(α1 b1 + α2 b2)∧⋯∧am == α1 a1∧a2∧⋯∧b1∧⋯∧am + α2 a1∧a2∧⋯∧b2∧⋯∧am

More specifically, we say that a product operation ⊛ is a linear product if it is both left and right distributive under addition, and permits scalars to be factored out.

(α + β) ⊛ γ == α ⊛ γ + β ⊛ γ

α ⊛ (β + γ) == α ⊛ β + α ⊛ γ

α ⊛ (a β) == (a α) ⊛ β == a (α ⊛ β)

2.42

Here α, β and γ are elements of fixed (possibly different) grades, and a is a scalar.

In the chapters to follow, the multilinear forms of interest may involve two, or sometimes three, different linear product operations. To represent these we choose three initially undefined infix operators from Mathematica's library of symbols: ⊗, ⊕, and ⊙ (CircleTimes, CirclePlus, CircleDot). We make them Listable (as all the Grassmann products are), and make them behave linearly within any functions we define intended to operate on expressions involving them.

As a beginning example, consider a simple 3-element A ≡ a1∧a2∧a3 in a space of at least 3 dimensions, the three operations ⊗, ⊕, and ⊙, and two arbitrary Grassmann expressions G1 and G2. From these we can form the expression

F ≡ (G1 ⊗ a1) ⊙ (G2 ⊕ (a2∧a3))

Now if we replace a2, say, by α1 b1 + α2 b2, we can expand the expression to get a sum of two expressions.

F ≡ (G1 ⊗ a1) ⊙ (G2 ⊕ ((α1 b1 + α2 b2)∧a3))
  == (G1 ⊗ a1) ⊙ (G2 ⊕ (α1 b1∧a3 + α2 b2∧a3))
  == (G1 ⊗ a1) ⊙ (α1 G2 ⊕ (b1∧a3) + α2 G2 ⊕ (b2∧a3))
  == (G1 ⊗ a1) ⊙ (α1 G2 ⊕ (b1∧a3)) + (G1 ⊗ a1) ⊙ (α2 G2 ⊕ (b2∧a3))
  == α1 (G1 ⊗ a1) ⊙ (G2 ⊕ (b1∧a3)) + α2 (G1 ⊗ a1) ⊙ (G2 ⊕ (b2∧a3))

Since the individual product operations are linear, the whole expression is multilinear: substituting for any of the Gi or ai with a sum of terms will lead to a sum of expressions. In later applications these linear products will of course be Grassmann products (exterior, regressive, interior, inner, generalized, hypercomplex or Clifford), so the multilinear form F will simply be a Grassmann expression (an element of the algebra under consideration).

Defining m:k-forms
In this section we define in more detail the particular types of multilinear forms we will find useful later in the book. We will find it convenient to call them m:k-forms because, for any given simple m-element, there are forms of grade k, where k runs from 0 to m. To begin, consider a simple m-element A.

A ≡ a1∧a2∧⋯∧ak∧ak+1∧⋯∧am

The k-span and k-cospan of A are denoted 𝔖_k[A] and 𝔖^k[A]. Each of these lists contains n = Binomial[m,k] elements, of grade k and grade m-k respectively. The ith element of either list is denoted with a subscripted ⟦i⟧. Hence A can always be written as

A == 𝔖_k[A]⟦i⟧ ∧ 𝔖^k[A]⟦i⟧

2.43

where A is a simple m-element, k can range from 0 to m, and i can range from 1 to n = Binomial[m,k].

A generic m:k-form for A is denoted Fm:k and can now be defined as

Fm:k ≡ Σ_{i=1}^{n} (G1 ⊗ 𝔖_k[A]⟦i⟧) ⊙ (G2 ⊕ 𝔖^k[A]⟦i⟧)

2.44

Here, A is a simple m-element, 0 ≤ k ≤ m, G1 and G2 are as yet undefined Grassmann expressions, and ⊗, ⊕, and ⊙ are as yet undefined linear products. The form is intended to be generic so that it includes all its special cases, for example

Σ_{i=1}^{n} (𝔖_k[A]⟦i⟧) ⊙ (𝔖^k[A]⟦i⟧)

Σ_{i=1}^{n} (G1 ⊗ 𝔖_k[A]⟦i⟧) ⊙ (𝔖^k[A]⟦i⟧)

Σ_{i=1}^{n} (𝔖_k[A]⟦i⟧) ⊙ (G2 ⊕ 𝔖^k[A]⟦i⟧)

2.45

Composing m:k-forms
All the product operations we will be dealing with are defined with the Mathematica attribute Listable. This means that to build up expressions, you only need to enter a product of lists (with the same number of elements in each), or the product of a single element and a list, to get a list of the products. To see how this works in building up forms, we can take the simple case of A = a1∧a2∧a3 and show the steps as follows.

{𝔖_1[A], 𝔖^1[A]}
{{a1, a2, a3}, {a2∧a3, -(a1∧a3), a1∧a2}}

{G1 ⊗ 𝔖_1[A], G2 ⊕ 𝔖^1[A]}
{{G1 ⊗ a1, G1 ⊗ a2, G1 ⊗ a3}, {G2 ⊕ a2∧a3, G2 ⊕ -(a1∧a3), G2 ⊕ a1∧a2}}

(G1 ⊗ 𝔖_1[A]) ⊙ (G2 ⊕ 𝔖^1[A])
{(G1 ⊗ a1) ⊙ (G2 ⊕ a2∧a3), (G1 ⊗ a2) ⊙ (G2 ⊕ -(a1∧a3)), (G1 ⊗ a3) ⊙ (G2 ⊕ a1∧a2)}

If at any stage you want to sum the terms of a list, you can apply Mathematica's Total function, or its GrassmannAlgebra alias Σ.


Σ[(G1 ⊗ 𝔖_1[A]) ⊙ (G2 ⊕ 𝔖^1[A])]
(G1 ⊗ a1) ⊙ (G2 ⊕ a2∧a3) + (G1 ⊗ a2) ⊙ (G2 ⊕ -(a1∧a3)) + (G1 ⊗ a3) ⊙ (G2 ⊕ a1∧a2)

Because all our product operations are defined to be listable you can compose the forms in the previous section without explicitly indexing the terms (i) and specifying their number (n). All you need to do is eliminate the index and replace the summation sign with Σ.

Σ[(G1 ⊗ 𝔖_k[A]) ⊙ (G2 ⊕ 𝔖^k[A])]
Σ[(𝔖_k[A]) ⊙ (𝔖^k[A])]
Σ[(G1 ⊗ 𝔖_k[A]) ⊙ (𝔖^k[A])]
Σ[(𝔖_k[A]) ⊙ (G2 ⊕ 𝔖^k[A])]

2.46

For example if A = a1∧a2∧a3, the following two expressions give the same result.

A = a1∧a2∧a3;

Σ_{i=1}^{3} (G1 ⊗ 𝔖_1[A]⟦i⟧) ⊙ (G2 ⊕ 𝔖^1[A]⟦i⟧)
(G1 ⊗ a1) ⊙ (G2 ⊕ a2∧a3) + (G1 ⊗ a2) ⊙ (G2 ⊕ -(a1∧a3)) + (G1 ⊗ a3) ⊙ (G2 ⊕ a1∧a2)

Σ[(G1 ⊗ 𝔖_1[A]) ⊙ (G2 ⊕ 𝔖^1[A])]
(G1 ⊗ a1) ⊙ (G2 ⊕ a2∧a3) + (G1 ⊗ a2) ⊙ (G2 ⊕ -(a1∧a3)) + (G1 ⊗ a3) ⊙ (G2 ⊕ a1∧a2)

Expanding and simplifying m:k-forms


If you want to expand and simplify an m:k-form, you can use the GrassmannAlgebra function ExpandAndSimplifyForm. This function distributes products over sums, and factors scalars out of terms. For example

F = (G1 ⊗ (a x + b y)) ⊙ (G2 ⊕ (c z))
(G1 ⊗ (a x + b y)) ⊙ (G2 ⊕ (c z))

Note that, because there are a number of product operators with their own precedence, it is advisable to assure any expected behaviour by liberal use of parentheses in the input expression.

ExpandAndSimplifyForm[F]
a c (G1 ⊗ x) ⊙ (G2 ⊕ z) + b c (G1 ⊗ y) ⊙ (G2 ⊕ z)

As another example let A be a 3-element and F3,1 be a form based on it.


A = a1∧a2∧a3; F3,1 = Σ[(G1 ⊗ 𝔖_1[A]) ⊙ (G2 ⊕ 𝔖^1[A])]
(G1 ⊗ a1) ⊙ (G2 ⊕ a2∧a3) + (G1 ⊗ a2) ⊙ (G2 ⊕ -(a1∧a3)) + (G1 ⊗ a3) ⊙ (G2 ⊕ a1∧a2)

Now if a1 is expressed as a linear combination of other 1-elements, say

A = (a x + b y)∧a2∧a3;

the form is now composed as

F3,1 = Σ[(G1 ⊗ 𝔖_1[A]) ⊙ (G2 ⊕ 𝔖^1[A])]
(G1 ⊗ (a x + b y)) ⊙ (G2 ⊕ a2∧a3) + (G1 ⊗ a2) ⊙ (G2 ⊕ -((a x + b y)∧a3)) + (G1 ⊗ a3) ⊙ (G2 ⊕ (a x + b y)∧a2)

ExpandAndSimplifyForm applied to this new form now gives

ExpandAndSimplifyForm[F3,1]
a (G1 ⊗ x) ⊙ (G2 ⊕ a2∧a3) + b (G1 ⊗ y) ⊙ (G2 ⊕ a2∧a3) - a (G1 ⊗ a2) ⊙ (G2 ⊕ x∧a3) - b (G1 ⊗ a2) ⊙ (G2 ⊕ y∧a3) + a (G1 ⊗ a3) ⊙ (G2 ⊕ x∧a2) + b (G1 ⊗ a3) ⊙ (G2 ⊕ y∧a2)

Developing invariant forms


Our objective in this section is to understand the construction of forms which are invariant to precisely the same types of substitutions that leave an m-element invariant. For example, a 3-element A ≡ a1∧a2∧a3 is left invariant if to any of its 1-element factors is added a linear combination of its other 1-element factors: A is unchanged if we add a a2 to a1.

A = a1∧a2∧a3; Aa = a1∧a2∧a3 /. a1 → a1 + a a2
(a1 + a a2)∧a2∧a3

%[A == Aa]
True

Now consider the expression F involving the factors of A and make the same substitution.

F = (G1 ⊗ a1) ⊙ (G2 ⊕ (a2∧a3));
Fa = (G1 ⊗ a1) ⊙ (G2 ⊕ (a2∧a3)) /. a1 → a1 + a a2
(G1 ⊗ (a1 + a a2)) ⊙ (G2 ⊕ a2∧a3)

Applying ExpandAndSimplifyForm to Fa effects the expansion and extracts the scalar coefficient.


ExpandAndSimplifyForm[Fa]
(G1 ⊗ a1) ⊙ (G2 ⊕ a2∧a3) + a (G1 ⊗ a2) ⊙ (G2 ⊕ a2∧a3)

If the form F had been invariant to this transformation as the 3-element A was, Fa would have simplified back again to F. But Fa now has an extra term which we notice displays a2 twice, but in different parts of the product. This isolates the two occurrences of a2 so that the antisymmetry of the exterior product cannot put their product to zero. This suggests that we should antisymmetrize the original form F to create a new form Fb by adding a term featuring the same factors ai of A but reordered to bring a2 to first place. If we do this we find Fb is invariant to the substitution a1 → a1 + a a2.

Fb = (G1 ⊗ a1) ⊙ (G2 ⊕ (a2∧a3)) + (G1 ⊗ a2) ⊙ (G2 ⊕ (-(a1∧a3)));
Fc = Fb /. a1 → a1 + a a2
(G1 ⊗ a2) ⊙ (G2 ⊕ -((a1 + a a2)∧a3)) + (G1 ⊗ (a1 + a a2)) ⊙ (G2 ⊕ a2∧a3)

ExpandAndSimplifyForm[Fb == Fc]
True

Of course, if we want a form which is antisymmetric with respect to substitution of all of its factors ai, we will need to include all the essentially different decompositions of the 3-element into a 1-element and a 2-element.

Fd = (G1 ⊗ a1) ⊙ (G2 ⊕ (a2∧a3)) + (G1 ⊗ a2) ⊙ (G2 ⊕ (-(a1∧a3))) + (G1 ⊗ a3) ⊙ (G2 ⊕ (a1∧a2));

Now, replacing a1 by a1 + a a2 + b a3 in this form leaves the form invariant to the substitution.

Fe = Fd /. a1 → a1 + a a2 + b a3
(G1 ⊗ a2) ⊙ (G2 ⊕ -((a1 + a a2 + b a3)∧a3)) + (G1 ⊗ a3) ⊙ (G2 ⊕ (a1 + a a2 + b a3)∧a2) + (G1 ⊗ (a1 + a a2 + b a3)) ⊙ (G2 ⊕ a2∧a3)

ExpandAndSimplifyForm[Fd == Fe]
True

The form Fd may be recognized as the m:k-form F3,1 introduced in the previous section.

F3,1 = Σ[(G1 ⊗ 𝔖_1[A]) ⊙ (G2 ⊕ 𝔖^1[A])]
(G1 ⊗ a1) ⊙ (G2 ⊕ a2∧a3) + (G1 ⊗ a2) ⊙ (G2 ⊕ -(a1∧a3)) + (G1 ⊗ a3) ⊙ (G2 ⊕ a1∧a2)
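The antisymmetrization argument can be tested numerically. In the Python sketch below (our own modeling assumption, not package code) the undefined products G1⊗(·) and G2⊕(·) are replaced by arbitrary fixed linear functionals L1 and L2, and ⊙ by ordinary multiplication of the resulting scalars: the single term changes under a1 → a1 + a a2, while the antisymmetrized 3:1-form does not.

```python
def perm_sign(seq):
    """Sign of the permutation that sorts seq."""
    seq, sign = list(seq), 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def wedge(a, b):
    """Exterior product of {index-tuple: coeff} dictionaries."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) == len(idx):
                key = tuple(sorted(idx))
                out[key] = out.get(key, 0) + perm_sign(idx) * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def add(a, b):
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def scale(s, a):
    return {k: s * v for k, v in a.items() if s * v != 0}

# Arbitrary fixed linear functionals standing in for G1⊗(.) and G2⊕(.).
W1 = {(0,): 3, (1,): -1, (2,): 4}            # weights on 1-elements
W2 = {(0, 1): 2, (0, 2): -5, (1, 2): 7}      # weights on 2-elements
def L1(x): return sum(W1.get(k, 0) * v for k, v in x.items())
def L2(x): return sum(W2.get(k, 0) * v for k, v in x.items())

a1, a2, a3 = {(0,): 1}, {(1,): 1}, {(2,): 1}

def single_term(f1, f2, f3):                 # the non-invariant form F
    return L1(f1) * L2(wedge(f2, f3))

def antisym_form(f1, f2, f3):                # the antisymmetrized form Fd
    return (L1(f1) * L2(wedge(f2, f3))
            - L1(f2) * L2(wedge(f1, f3))
            + L1(f3) * L2(wedge(f1, f2)))

b1 = add(a1, scale(5, a2))                   # substitute a1 -> a1 + 5 a2
assert single_term(b1, a2, a3) != single_term(a1, a2, a3)
assert antisym_form(b1, a2, a3) == antisym_form(a1, a2, a3)
```

Since L1 and L2 are arbitrary, the invariance of this scalar pairing reflects the invariance of the underlying 3:1-form itself.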


Properties of m:k-forms
To this point we have discussed simple m-elements A and the m:k-forms Fm,k based on them.

A ≡ a1∧a2∧⋯∧ak∧ak+1∧⋯∧am
Fm,k ≡ Σ[(G1 ⊗ 𝔖_k[A]) ⊙ (G2 ⊕ 𝔖^k[A])]

Our interest is to show that any transformation which leaves A invariant also leaves any of its associated m:k-forms Fm,k invariant. These transformations are equivalent to refactorizations of the m-element: we want to show that an m:k-form is unchanged no matter what factorization of A we use. For concreteness initially, let us take A to be a 3-element as we have in previous sections.

A = a1∧a2∧a3;

A is of course invariant to any of its factorizations. Any factorization Aa of A is congruent to the exterior product of three independent linear combinations of the ai.

Aa = (a1,1 a1 + a1,2 a2 + a1,3 a3)∧(a2,1 a1 + a2,2 a2 + a2,3 a3)∧(a3,1 a1 + a3,2 a2 + a3,3 a3);

Expanding and simplifying Aa gives, as expected, the original factorization A multiplied by the determinant of the coefficients. (But first we need to declare the coefficients as scalars.)

𝒮[a__]; Ab = %[Aa]
(-a1,3 a2,2 a3,1 + a1,2 a2,3 a3,1 + a1,3 a2,1 a3,2 - a1,1 a2,3 a3,2 - a1,2 a2,1 a3,3 + a1,1 a2,2 a3,3) a1∧a2∧a3

det = Det[{{a1,1, a1,2, a1,3}, {a2,1, a2,2, a2,3}, {a3,1, a3,2, a3,3}}]
-a1,3 a2,2 a3,1 + a1,2 a2,3 a3,1 + a1,3 a2,1 a3,2 - a1,1 a2,3 a3,2 - a1,2 a2,1 a3,3 + a1,1 a2,2 a3,3

Any set of coefficients whose determinant is unity will provide a transformation to which A is invariant. Now let us compose the range of 3:k-forms for A, and for Aa.

F3,k = Table[Σ[(G1 ⊗ 𝔖_i[A]) ⊙ (G2 ⊕ 𝔖^i[A])], {i, 0, 3}] // ExpandAndSimplifyForm
{(G1 ⊗ 1) ⊙ (G2 ⊕ a1∧a2∧a3),
 (G1 ⊗ a1) ⊙ (G2 ⊕ a2∧a3) - (G1 ⊗ a2) ⊙ (G2 ⊕ a1∧a3) + (G1 ⊗ a3) ⊙ (G2 ⊕ a1∧a2),
 (G1 ⊗ a1∧a2) ⊙ (G2 ⊕ a3) - (G1 ⊗ a1∧a3) ⊙ (G2 ⊕ a2) + (G1 ⊗ a2∧a3) ⊙ (G2 ⊕ a1),
 (G1 ⊗ a1∧a2∧a3) ⊙ (G2 ⊕ 1)}


Fa,3,k = (Table[Σ[(G1 ⊗ 𝔖_i[Aa]) ⊙ (G2 ⊕ 𝔖^i[Aa])], {i, 0, 3}] // ExpandAndSimplifyForm // Simplify) /. {Simplify[det] → Δ, Simplify[-det] → -Δ}
{Δ (G1 ⊗ 1) ⊙ (G2 ⊕ a1∧a2∧a3),
 Δ ((G1 ⊗ a1) ⊙ (G2 ⊕ a2∧a3) - (G1 ⊗ a2) ⊙ (G2 ⊕ a1∧a3) + (G1 ⊗ a3) ⊙ (G2 ⊕ a1∧a2)),
 Δ ((G1 ⊗ a1∧a2) ⊙ (G2 ⊕ a3) - (G1 ⊗ a1∧a3) ⊙ (G2 ⊕ a2) + (G1 ⊗ a2∧a3) ⊙ (G2 ⊕ a1)),
 Δ (G1 ⊗ a1∧a2∧a3) ⊙ (G2 ⊕ 1)}

Here we have used Mathematica's inbuilt Simplify to extract the determinant of the transformation and denote it symbolically by Δ. If we divide by Δ, we can easily see that this list of forms is equal to the original list of forms.

F3,k == Fa,3,k / Δ
True

Thus transformations (factorizations) which leave the 3-element A invariant also leave all its 3:k-forms invariant.
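The determinant behaviour can also be verified concretely. Keeping the same modeling assumption as before (a fixed linear functional L in place of the undefined products, our own construction rather than the package's machinery), each 3:k-form built from a linearly transformed factor set comes out as exactly det times the form built from the original factors; a unit determinant therefore leaves every form unchanged.

```python
from itertools import combinations
from functools import reduce

def perm_sign(seq):
    """Sign of the permutation that sorts seq."""
    seq, sign = list(seq), 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) == len(idx):
                key = tuple(sorted(idx))
                out[key] = out.get(key, 0) + perm_sign(idx) * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def wedge_all(fs):
    return reduce(wedge, fs, {(): 1})

def scale(s, a):
    return {k: s * v for k, v in a.items() if s * v != 0}

def k_span(fs, k):
    return [wedge_all([fs[i] for i in c])
            for c in combinations(range(len(fs)), k)]

def k_cospan(fs, k):
    m = len(fs)
    return [scale(perm_sign(list(c) + [i for i in range(m) if i not in c]),
                  wedge_all([fs[i] for i in range(m) if i not in c]))
            for c in combinations(range(m), k)]

# A fixed linear functional standing in for the undefined products (assumed).
W = {(): 2, (0,): 3, (1,): -1, (2,): 4,
     (0, 1): 2, (0, 2): -5, (1, 2): 7, (0, 1, 2): 6}
def L(x): return sum(W.get(k, 0) * v for k, v in x.items())

def form(fs, k):
    return sum(L(s) * L(c) for s, c in zip(k_span(fs, k), k_cospan(fs, k)))

def combo(coeffs, vs):
    """Linear combination of 1-elements vs with the given coefficients."""
    out = {}
    for c, v in zip(coeffs, vs):
        for key, val in v.items():
            out[key] = out.get(key, 0) + c * val
    return {k: v for k, v in out.items() if v != 0}

factors = [{(0,): 1}, {(1,): 1}, {(2,): 1}]
M = [[1, 2, 0], [0, 1, 3], [1, 0, 1]]     # transformation matrix, det = 7
det = 7
new_factors = [combo(row, factors) for row in M]
for k in range(4):
    # each 3:k-form scales by exactly the determinant of the refactorization
    assert form(new_factors, k) == det * form(factors, k)
```

This is the numerical counterpart of the symbolic Δ extracted by Simplify above.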

The complete span of a simple element


In this section we discuss the notion of complete span and cospan and a form based on it. We will use the result we derive for the form to develop alternative formulations of the General Product Formula in the next chapter.

The complete span of a simple m-element A is denoted 𝔖[A] and defined to be the ordered list of all the k-spans of A, with k ranging from 0 to m. Similarly the complete cospan of a simple m-element A is denoted 𝔖'[A] and defined to be the ordered list of all the k-cospans of A, with k ranging from 0 to m. For example, if A = u∧x∧y∧z the complete span and complete cospan of A are

A = u∧x∧y∧z;

𝔖[A]
{{1}, {u, x, y, z}, {u∧x, u∧y, u∧z, x∧y, x∧z, y∧z}, {u∧x∧y, u∧x∧z, u∧y∧z, x∧y∧z}, {u∧x∧y∧z}}

𝔖'[A]
{{u∧x∧y∧z}, {x∧y∧z, -(u∧y∧z), u∧x∧z, -(u∧x∧y)}, {y∧z, -(x∧z), x∧y, u∧z, -(u∧y), u∧x}, {z, -y, x, -u}, {1}}


Since the exterior product is Listable, simplifying the exterior product of a complete span with its complete cospan gives lists of the original m-element.

%[𝔖[A] ∧ 𝔖'[A]]
{{u∧x∧y∧z}, {u∧x∧y∧z, u∧x∧y∧z, u∧x∧y∧z, u∧x∧y∧z}, {u∧x∧y∧z, u∧x∧y∧z, u∧x∧y∧z, u∧x∧y∧z, u∧x∧y∧z, u∧x∧y∧z}, {u∧x∧y∧z, u∧x∧y∧z, u∧x∧y∧z, u∧x∧y∧z}, {u∧x∧y∧z}}

Taking the exterior product of a complete cospan with a complete span gives a similar result, but this time the exterior products of the k-cospan elements with the k-span elements include a sign (-1)^(k(m-k)).

%[𝔖'[A] ∧ 𝔖[A]]
{{u∧x∧y∧z}, {-(u∧x∧y∧z), -(u∧x∧y∧z), -(u∧x∧y∧z), -(u∧x∧y∧z)}, {u∧x∧y∧z, u∧x∧y∧z, u∧x∧y∧z, u∧x∧y∧z, u∧x∧y∧z, u∧x∧y∧z}, {-(u∧x∧y∧z), -(u∧x∧y∧z), -(u∧x∧y∧z), -(u∧x∧y∧z)}, {u∧x∧y∧z}}

If m is odd then k(m-k) will be even for all k, and hence there will be no change in sign. If m is even, then k(m-k) will alternate between even and odd as k ranges from 0 to m. Thus we can construct a list of signs dependent on the parity of m.

r[m_] := Mod[Range[0, m], 2]; rr[m_] := If[EvenQ[m], r[m], 0]

For example

{(-1)^rr[1], (-1)^rr[2], (-1)^rr[3], (-1)^rr[4]}
{1, {1, -1, 1}, 1, {1, -1, 1, -1, 1}}

With this sign construction, we can now determine a relationship between products of complete spans and cospans. Below we verify the identity for m ranging from 1 to 5.

DeclareBasis[6]; Table[%[𝔖[Aₘ] ∧ 𝔖'[Aₘ] == (-1)^rr[m] (𝔖'[Aₘ] ∧ 𝔖[Aₘ])], {m, 1, 5}]
{True, True, True, True, True}
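The sign (-1)^(k(m-k)) is just the usual rule for commuting a k-element past an (m-k)-element, and is easy to confirm independently with a small Python model of the exterior product (our own sketch, not package code).

```python
from itertools import combinations
from functools import reduce

def perm_sign(seq):
    """Sign of the permutation that sorts seq."""
    seq, sign = list(seq), 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) == len(idx):
                key = tuple(sorted(idx))
                out[key] = out.get(key, 0) + perm_sign(idx) * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def wedge_all(fs):
    return reduce(wedge, fs, {(): 1})

def scale(s, a):
    return {k: s * v for k, v in a.items() if s * v != 0}

def k_span(fs, k):
    return [wedge_all([fs[i] for i in c])
            for c in combinations(range(len(fs)), k)]

def k_cospan(fs, k):
    m = len(fs)
    return [scale(perm_sign(list(c) + [i for i in range(m) if i not in c]),
                  wedge_all([fs[i] for i in range(m) if i not in c]))
            for c in combinations(range(m), k)]

for m in (3, 4, 5):
    factors = [{(i,): 1} for i in range(m)]
    A = wedge_all(factors)
    for k in range(m + 1):
        sgn = (-1) ** (k * (m - k))
        for s, c in zip(k_span(factors, k), k_cospan(factors, k)):
            assert wedge(s, c) == A              # span then cospan rebuilds A
            assert wedge(c, s) == scale(sgn, A)  # reversed order picks up the sign
```

For odd m the sign is +1 for every k, while for even m it alternates with k, exactly the pattern encoded by rr[m] above.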

In the more general case of a form based on the complete span and cospan of a simple m-element A, we can obtain a similar identity provided we reverse all the lists, that is, reverse each list of elements and also the list of these lists. To simplify this two-level reversing process we define a function ReverseAll with alias ℛ.


ReverseAll[s_List] := Reverse[Reverse /@ s]; ℛ = ReverseAll;

(G1 ⊗ 𝔖[Aₘ]) ⊙ (G2 ⊕ 𝔖'[Aₘ]) == ℛ[(-1)^rr[m] (G1 ⊗ 𝔖'[Aₘ]) ⊙ (G2 ⊕ 𝔖[Aₘ])]

2.47

Again we verify the identity for m ranging from 1 to 5.

Table[ℰ[(G1 ⊗ 𝔖[Aₘ]) ⊙ (G2 ⊕ 𝔖'[Aₘ])] == ℰ[ℛ[(-1)^rr[m] (G1 ⊗ 𝔖'[Aₘ]) ⊙ (G2 ⊕ 𝔖[Aₘ])]], {m, 1, 5}]
{True, True, True, True, True}

As might be expected, the identity remains valid no matter to which form the sign change (-1)^rr[m] or the ReverseAll operation is applied. Let

F1[m_] := (G1 ⊗ 𝔖[Aₘ]) ⊙ (G2 ⊕ 𝔖'[Aₘ])
F2[m_] := (G1 ⊗ 𝔖'[Aₘ]) ⊙ (G2 ⊕ 𝔖[Aₘ])

then the following identities hold

F1[m] == ℛ[(-1)^rr[m] F2[m]] == (-1)^rr[m] ℛ[F2[m]]

2.48

F2[m] == ℛ[(-1)^rr[m] F1[m]] == (-1)^rr[m] ℛ[F1[m]]

2.49

ℛ[F1[m]] == (-1)^rr[m] F2[m]

2.50

ℛ[F2[m]] == (-1)^rr[m] F1[m]

2.51

To verify these identities we first define some functions Ji[m] representing them which simplify the terms, and then tabulate the combined results for m ranging from 1 to 5.

J1[m_] := ℰ[F1[m]] == ℰ[ℛ[(-1)^rr[m] F2[m]]] == ℰ[(-1)^rr[m] ℛ[F2[m]]];
J2[m_] := ℰ[F2[m]] == ℰ[ℛ[(-1)^rr[m] F1[m]]] == ℰ[(-1)^rr[m] ℛ[F1[m]]];


J3[m_] := ℰ[ℛ[F1[m]]] == ℰ[(-1)^rr[m] F2[m]];
J4[m_] := ℰ[ℛ[F2[m]]] == ℰ[(-1)^rr[m] F1[m]];

Table[And[J1[m], J2[m], J3[m], J4[m]], {m, 1, 5}]
{True, True, True, True, True}

2.13 Unions and Intersections


Union and intersection as a multilinear form

One of the more immediate applications of multilinear forms that we can make at this stage is to the computing of unions and intersections of simple elements. The intersection of two simple elements A and B is the element C of highest grade which is common to both A and B.

A ≡ A*∧C;  B ≡ B*∧C;  A*∧C∧B* ≠ 0

The element A*∧C∧B* is called the union of A and B. Two elements A and B are said to be disjoint if C is of grade 0, that is, a scalar: thus A∧B ≠ 0. The formulae that we are going to develop remain valid in this case, so we will not need to consider it further.

The first thing to note is that defining the union and intersection in this way only defines them up to congruence, because the defining relations remain valid if we multiply and divide the factors by an arbitrary scalar c.

A ≡ (A*/c)∧(c C);  B ≡ (B*/c)∧(c C);  (A*/c)∧(c C)∧(B*/c) ≠ 0

Thus now, if c C is the intersection, (1/c) A*∧C∧B* is the union.

The observation that the union and intersection are only defined up to congruence leads us to consider how we might develop a formula for the union and intersection which is not dependent on an arbitrary congruence factor. Suppose ⊙ is a linear product as discussed in the previous section, not equal to the exterior product, but otherwise arbitrary. Then the ⊙ product of A and B can be written as the ⊙ product of their union and intersection, and the congruence factors cancel.

A ⊙ B ≡ (A*∧C) ⊙ (B*∧C) ≡ (A*∧C∧B*) ⊙ C

This formula can be written also as

A ⊙ (B*∧C) == (A∧B*) ⊙ C

2.52


Suppose B is of grade m, and B* is of grade k, then C is of grade m-k. To compute the union A B* we need to 'partition' B into such k and m-k elements, and then find the maximal value of k (equal to k* , say) for which A B* is not zero. To do this partitioning, B has to be simple, and we need to know some factorization of it. If we partition B using its span and cospan for various k and for the given factorization, then our formula becomes a special case of an m:k-form, and invariance of the result to the actual factorization of B is guaranteed. Hence we write the formula for the product of the possibly intersecting simple elements A and B in terms of their union and intersection as

A ⊙ B == Σ[(A ∧ 𝔖_k*[B]) ⊙ 𝔖^k*[B]]

2.53

where k* is the maximum grade for which the right hand side is not zero. In the sections to follow, we will make these notions more concrete with some examples.
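The search for k* can be prototyped directly. In the Python sketch below (our own, not package code; the undefined product ⊙ is kept symbolic simply by returning the two sides as a pair), we scan k downward from the grade of B and stop at the first grade where some term A ∧ 𝔖_k[B]⟦i⟧ survives; the surviving pairs display the union and the intersection.

```python
from itertools import combinations
from functools import reduce

def perm_sign(seq):
    """Sign of the permutation that sorts seq."""
    seq, sign = list(seq), 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) == len(idx):
                key = tuple(sorted(idx))
                out[key] = out.get(key, 0) + perm_sign(idx) * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def wedge_all(fs):
    return reduce(wedge, fs, {(): 1})

def scale(s, a):
    return {k: s * v for k, v in a.items() if s * v != 0}

def k_span(fs, k):
    return [wedge_all([fs[i] for i in c])
            for c in combinations(range(len(fs)), k)]

def k_cospan(fs, k):
    m = len(fs)
    return [scale(perm_sign(list(c) + [i for i in range(m) if i not in c]),
                  wedge_all([fs[i] for i in range(m) if i not in c]))
            for c in combinations(range(m), k)]

def union_intersection_terms(A, b_factors):
    """Largest k with a nonzero A ^ span_k[B][i], plus the surviving pairs
    (A ^ span element, matching cospan element)."""
    for k in range(len(b_factors), -1, -1):
        terms = [(wedge(A, s), c)
                 for s, c in zip(k_span(b_factors, k), k_cospan(b_factors, k))
                 if wedge(A, s)]
        if terms:
            return k, terms
    return 0, []

# A = a1 ^ a2 ^ g1 and B = b1 ^ g1 over basis indices 0..3 (intersection g1).
a1, a2, g1, b1 = {(0,): 1}, {(1,): 1}, {(2,): 1}, {(3,): 1}
A = wedge_all([a1, a2, g1])
kstar, terms = union_intersection_terms(A, [b1, g1])
# kstar == 1; the single surviving pair is the union a1^a2^g1^b1 with
# the cospan element g1, i.e. the intersection.
```

When A and B are disjoint, the scan stops at k equal to the full grade of B and the "intersection" side degenerates to a scalar, in line with the remark above.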

Where the intersection is evident


To see how this formula works, let us first take some simple examples where the two elements are expressed in such a way that their union and intersection are evident by inspection. To enable the computations we first declare any subscripted α, β, or γ as a vector symbol.

DeclareExtraVectorSymbols[α_, β_, γ_];

Example: 2:k-forms
A = α1∧α2∧γ1; B = β1∧γ1;

Here it is obvious that our formula should give us

(α1∧α2∧γ1) ⊙ (β1∧γ1) == (α1∧α2∧γ1∧β1) ⊙ γ1

If we simply compose a table of all the 2:k-forms (k equal to 0, 1, and 2 in this case) without simplifying the forms, we get the three forms

T = Table[Σ[(A ∧ 𝔖_k[B]) ⊙ 𝔖^k[B]], {k, 0, 2}]; TableForm[T]
(α1∧α2∧γ1∧1) ⊙ (β1∧γ1)
(α1∧α2∧γ1∧β1) ⊙ γ1 + (α1∧α2∧γ1∧γ1) ⊙ (-β1)
(α1∧α2∧γ1∧β1∧γ1) ⊙ 1

Clearly, when simplified, the first form (k = 0) is equal to A ⊙ B, and the third form (k = 2) is zero. The second form (k = 1) reduces to a single term, giving us the expected expression for the union and intersection. To simplify a form, or a list of forms, we use ExpandAndSimplifyForm, or its alias ℰ.


ℰ[T] // TableForm
(α1∧α2∧γ1) ⊙ (β1∧γ1)
-(α1∧α2∧β1∧γ1) ⊙ γ1
0

(Note that ℰ has automatically put the factors in the second form into canonical order, based in this case on the alphabetical order of the symbols used.)

Example: 3:k-forms
Had B also been a 3-element, we would have generated four forms.

A = α1∧α2∧γ1; B = β1∧β2∧γ1;
T = Table[Σ[(A ∧ 𝔖_k[B]) ⊙ 𝔖^k[B]], {k, 0, 3}]; TableForm[T]
(α1∧α2∧γ1∧1) ⊙ (β1∧β2∧γ1)
(α1∧α2∧γ1∧β1) ⊙ (β2∧γ1) + (α1∧α2∧γ1∧β2) ⊙ (-(β1∧γ1)) + (α1∧α2∧γ1∧γ1) ⊙ (β1∧β2)
(α1∧α2∧γ1∧β1∧β2) ⊙ γ1 + (α1∧α2∧γ1∧β1∧γ1) ⊙ (-β2) + (α1∧α2∧γ1∧β2∧γ1) ⊙ β1
(α1∧α2∧γ1∧β1∧β2∧γ1) ⊙ 1

ℰ[T] // TableForm
(α1∧α2∧γ1) ⊙ (β1∧β2∧γ1)
-(α1∧α2∧β1∧γ1) ⊙ (β2∧γ1) + (α1∧α2∧β2∧γ1) ⊙ (β1∧γ1)
(α1∧α2∧β1∧β2∧γ1) ⊙ γ1
0

The union and intersection is clearly the third form.

Example: Interchanging the order of A and B


We would not expect that interchanging the order of A and B in the form would make any difference to the results, except perhaps for a change in sign.

A = α1∧α2∧γ1; B = β1∧β2∧γ1;
ℰ[Table[Σ[(B ∧ 𝔖_k[A]) ⊙ 𝔖^k[A]], {k, 0, 3}]] // TableForm
(β1∧β2∧γ1) ⊙ (α1∧α2∧γ1)
-(α1∧β1∧β2∧γ1) ⊙ (α2∧γ1) + (α2∧β1∧β2∧γ1) ⊙ (α1∧γ1)
(α1∧α2∧β1∧β2∧γ1) ⊙ γ1
0


Example: Interchanging the order of the span and cospan


Neither would we expect that interchanging the order of the span 𝔖_k and cospan 𝔖^k functions in the form would make any difference to the results, except again perhaps for a change in sign. However this interchange does reverse the order of the forms, making the form displaying the union and intersection come after the zero form, rather than before it.

A = α1∧α2∧γ1; B = β1∧β2∧γ1;
ℰ[Table[Σ[(A ∧ 𝔖^k[B]) ⊙ 𝔖_k[B]], {k, 0, 3}]] // TableForm
0
(α1∧α2∧β1∧β2∧γ1) ⊙ γ1
-(α1∧α2∧β1∧γ1) ⊙ (β2∧γ1) + (α1∧α2∧β2∧γ1) ⊙ (β1∧γ1)
(α1∧α2∧γ1) ⊙ (β1∧β2∧γ1)

Where the intersection is not evident


In the previous section we explored the application of the union and intersection formula in trivial cases where we could have more easily obtained the results by inspection. The usefulness of the formula becomes evident however when the intersection of A and B is not displayed explicitly.

Example: 2:k-forms
We begin with the same elements as in the first example above. But now, we express all the 1-element factors in terms of the current basis. The process is independent of the actual basis, so we can choose it arbitrarily for the example, say an 8-space.
!8 ;

We compose A as the product of three factors to explicitly display the intersection as the last factor. But the algorithm does not need to have A in its factored form, so we expand and simplify before applying the formula.
A = %[(e4 - 2 e7) ∧ (3 e5 + e2) ∧ (e1 + 2 e2)]
-(e1 ∧ e2 ∧ e4) + 2 e1 ∧ e2 ∧ e7 + 3 e1 ∧ e4 ∧ e5 + 6 e1 ∧ e5 ∧ e7 + 6 e2 ∧ e4 ∧ e5 + 12 e2 ∧ e5 ∧ e7
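This expansion is easy to cross-check outside Mathematica. The following is a minimal Python sketch (not the GrassmannAlgebra package; the function name and encoding are illustrative only) that represents a multivector as a dictionary mapping sorted basis-index tuples to coefficients, and computes exterior products by distributivity and the sign of the sorting permutation:

```python
from itertools import combinations
from collections import defaultdict

def wedge(u, v):
    """Exterior product of multivectors encoded as {index-tuple: coefficient}."""
    out = defaultdict(int)
    for a, ca in u.items():
        for b, cb in v.items():
            if set(a) & set(b):        # repeated 1-element factor: term vanishes
                continue
            merged = a + b
            # sign of the permutation that sorts the merged indices
            inv = sum(1 for i, j in combinations(range(len(merged)), 2)
                      if merged[i] > merged[j])
            out[tuple(sorted(merged))] += (-1) ** inv * ca * cb
    return {k: c for k, c in out.items() if c != 0}

# A = (e4 - 2 e7) ∧ (3 e5 + e2) ∧ (e1 + 2 e2), with e_i encoded as {(i,): 1}
A = wedge(wedge({(4,): 1, (7,): -2}, {(5,): 3, (2,): 1}),
          {(1,): 1, (2,): 2})
```

Printing A reproduces the six terms above: -e1∧e2∧e4 + 2 e1∧e2∧e7 + 3 e1∧e4∧e5 + 6 e1∧e5∧e7 + 6 e2∧e4∧e5 + 12 e2∧e5∧e7.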

The second element B must be supplied to the algorithm in factored form, otherwise its spans and cospans cannot be computed. However the factorization can be arbitrary, so we expand it and refactor it differently so that it does not explicitly display the intersection we have chosen for the example.
B = ExteriorFactorize[%[(2 e3 - e2) ∧ (e1 + 2 e2)]]
(e1 + 4 e3) ∧ (e2 - 2 e3)

In this case we know that k equal to 1 will give us the result we are looking for.


F1 = ExpandAndSimplifyForm[S[(A ∧ △1[B]) ▽1[B]]]
2 (e1 ∧ e2 ∧ e3 ∧ e4) e1 + 4 (e1 ∧ e2 ∧ e3 ∧ e4) e2 - 4 (e1 ∧ e2 ∧ e3 ∧ e7) e1 - 8 (e1 ∧ e2 ∧ e3 ∧ e7) e2 - 3 (e1 ∧ e2 ∧ e4 ∧ e5) e1 - 6 (e1 ∧ e2 ∧ e4 ∧ e5) e2 - 6 (e1 ∧ e2 ∧ e5 ∧ e7) e1 - 12 (e1 ∧ e2 ∧ e5 ∧ e7) e2 + 6 (e1 ∧ e3 ∧ e4 ∧ e5) e1 + 12 (e1 ∧ e3 ∧ e4 ∧ e5) e2 + 12 (e1 ∧ e3 ∧ e5 ∧ e7) e1 + 24 (e1 ∧ e3 ∧ e5 ∧ e7) e2 + 12 (e2 ∧ e3 ∧ e4 ∧ e5) e1 + 24 (e2 ∧ e3 ∧ e4 ∧ e5) e2 + 24 (e2 ∧ e3 ∧ e5 ∧ e7) e1 + 48 (e2 ∧ e3 ∧ e5 ∧ e7) e2

Of course, this is in expanded form. We would like to factorize it to display the union and intersection explicitly. We can do this with GrassmannAlgebra's FactorizeForm.
F2 = FactorizeForm[F1]
((e1 - 6 e5) ∧ (e2 + 3 e5) ∧ (e3 + (3 e5)/2) ∧ (e4 - 2 e7)) (2 e1 + 4 e2)

The intersection displayed here is congruent to the factor that we introduced as our 'hidden' intersection in the A and B supplied to the form, and is therefore a correct result as any congruent factor would be. Although the union does not explicitly display the intersection, we can verify that the intersection is indeed a factor of it, and that it is also a factor of A and B as supplied to the formula. Let U be the union and J be the intersection.
U = (e1 - 6 e5) ∧ (e2 + 3 e5) ∧ (e3 + (3 e5)/2) ∧ (e4 - 2 e7);  J = 2 e1 + 4 e2;

then the following are zero as expected.


%[{U ∧ J, A ∧ J, B ∧ J}]
{0, 0, 0}

All the other factors of the original A and B are also easily verified to belong to U.
%[U ∧ {(e4 - 2 e7), (3 e5 + e2), (2 e3 - e2), (e1 + 4 e3), (e2 - 2 e3)}]
{0, 0, 0, 0, 0}

Intersection with a non-simple element


To this point we have assumed A is simple. But, provided the intersection with B is simple, the algorithm will still yield results. The following examples demonstrate that one may need to pay special attention to their interpretation.


Example 1
We take the simplest example with A non-simple, and a 1-element intersection g1 , which does not occur in the non-simple 2-element factor of A.
A = (a1 ∧ a2 + a3 ∧ a4) ∧ a5 ∧ g1;  B = b1 ∧ g1;

Since the intersection is a 1-element, and B is a 2-element, we can get the union and intersection from the 1-span and 1-cospan of B.

S[(A ∧ △1[B]) ▽1[B]]
((a1 ∧ a2 + a3 ∧ a4) ∧ a5 ∧ g1 ∧ b1) g1 + ((a1 ∧ a2 + a3 ∧ a4) ∧ a5 ∧ g1 ∧ g1) (-b1)

Upon simplification, the second term is clearly zero, leaving the expected result as the first term and showing that, in this case, the union is not simple.

Example 2
Now let B contain one of the 1-element factors which occurs in the non-simple 2-element factor of A, say a1 .
A = (a1 ∧ a2 + a3 ∧ a4) ∧ a5 ∧ g1;  B = a1 ∧ g1;

S[(A ∧ △1[B]) ▽1[B]]
((a1 ∧ a2 + a3 ∧ a4) ∧ a5 ∧ g1 ∧ a1) g1 + ((a1 ∧ a2 + a3 ∧ a4) ∧ a5 ∧ g1 ∧ g1) (-a1)

Upon simplification, the second term is again zero, but the first term, the union, is now simple. Thus we conclude that under some circumstances (Example 1) the union and intersection formula will give straightforwardly interpretable results. But in others (Example 2) one would need to pay special attention to their interpretation.

Factorizing simple elements


In the previous section we have seen how one may obtain the union and intersection of two simple elements. We can now use these results to obtain a factorization of a simple element. For concreteness we take the example from the previous section where A is a simple 3-element in an 8-space. We will work with the expanded form of A, and pretend that we do not know any factorization of it.
A = %[(e4 - 2 e7) ∧ (3 e5 + e2) ∧ (e1 + 2 e2)]
-(e1 ∧ e2 ∧ e4) + 2 e1 ∧ e2 ∧ e7 + 3 e1 ∧ e4 ∧ e5 + 6 e1 ∧ e5 ∧ e7 + 6 e2 ∧ e4 ∧ e5 + 12 e2 ∧ e5 ∧ e7


Since A is a 3-element in an 8-space, we can always obtain a 1-element belonging to it by obtaining the union and intersection of A with a simple 6-element B using the formula involving the 5-span and 5-cospan of B. But as we have seen in the sections above, it is really only the number of distinct symbols (hence independent 1-elements) that is important in computing unions and intersections. Thus we can simplify our computations by considering A as a 3-element in the 5-space {e1, e2, e4, e5, e7} and B as a simple 3-element, using the formula involving the 2-span and 2-cospan of B. To obtain our first factor of A, suppose we take B as e1 ∧ e2 ∧ e4.
B1 = e1 ∧ e2 ∧ e4;
ExpandAndSimplifyForm[S[(A ∧ △2[B1]) ▽2[B1]]] // FactorizeForm
(e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) (6 e1 + 12 e2)

The factor of interest, our first factor of A, is the intersection 6 e1 + 12 e2. We can ignore the union, as it is of no significance in this procedure. Repeating the procedure for each of the remaining independent 3-elements we can construct for B gives
BB = △3[e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7]
{e1 ∧ e2 ∧ e4, e1 ∧ e2 ∧ e5, e1 ∧ e2 ∧ e7, e1 ∧ e4 ∧ e5, e1 ∧ e4 ∧ e7, e1 ∧ e5 ∧ e7, e2 ∧ e4 ∧ e5, e2 ∧ e4 ∧ e7, e2 ∧ e5 ∧ e7, e4 ∧ e5 ∧ e7}
T = (FactorizeForm[ExpandAndSimplifyForm[S[(A ∧ △2[#]) ▽2[#]]]] &) /@ BB
{(e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) (6 e1 + 12 e2), 0, (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) (3 e1 + 6 e2), (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) (2 e1 - 12 e5), (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) (6 e4 - 12 e7), (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) (-e1 + 6 e5), (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) (2 e2 + 6 e5), (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) (-3 e4 + 6 e7), (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) (-e2 - 3 e5), (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) (-e4 + 2 e7)}

By inspection we can see that there are 4 different factors (up to congruence). We can take the simplest form congruent to each. Clearly they are not independent:
AA = (e1 + 2 e2) ∧ (e1 - 6 e5) ∧ (e4 - 2 e7) ∧ (e2 + 3 e5);
%[AA]
0

But it is an easy matter to compute all the products of 3 different factors.


△3[AA]
{(e1 + 2 e2) ∧ (e1 - 6 e5) ∧ (e4 - 2 e7), (e1 + 2 e2) ∧ (e1 - 6 e5) ∧ (e2 + 3 e5), (e1 + 2 e2) ∧ (e4 - 2 e7) ∧ (e2 + 3 e5), (e1 - 6 e5) ∧ (e4 - 2 e7) ∧ (e2 + 3 e5)}


By inspection the second factorization in the list, because it does not display all the symbols of A, will be zero. The others are all candidates for a factorization of A, and all can be seen to be congruent to A when expanded and simplified.
%[△3[AA]]
{-2 e1 ∧ e2 ∧ e4 + 4 e1 ∧ e2 ∧ e7 + 6 e1 ∧ e4 ∧ e5 + 12 e1 ∧ e5 ∧ e7 + 12 e2 ∧ e4 ∧ e5 + 24 e2 ∧ e5 ∧ e7, 0, -(e1 ∧ e2 ∧ e4) + 2 e1 ∧ e2 ∧ e7 + 3 e1 ∧ e4 ∧ e5 + 6 e1 ∧ e5 ∧ e7 + 6 e2 ∧ e4 ∧ e5 + 12 e2 ∧ e5 ∧ e7, -(e1 ∧ e2 ∧ e4) + 2 e1 ∧ e2 ∧ e7 + 3 e1 ∧ e4 ∧ e5 + 6 e1 ∧ e5 ∧ e7 + 6 e2 ∧ e4 ∧ e5 + 12 e2 ∧ e5 ∧ e7}

In Chapter 3, we will explore an equivalent algorithm for factorizing simple elements, based on the regressive product as the multilinear product.

2.14 Summary
This chapter has introduced the exterior product. Its codification of linear dependence is the foundation of Grassmann's contribution to the mathematical and physical sciences. From it he was able to build an algebraic system widely applicable to much of the physical world.

We began the chapter by formally extending the axioms of a linear space to include the exterior product. We then showed how the exterior product leads naturally to an understanding of determinants and linear systems of equations. Although Grassmann was the first to develop an algebraic structure which naturally handled these now accepted denizens of "Linear Algebra", we showed, in discussing the concepts of simplicity and exterior division, that his exterior product spaces (although themselves linear spaces) had the potential to be much richer than the simple linear space of Linear Algebra.

At the end of the chapter we undertook an analysis inspired by Grassmann of the various types of possible product structures. This analysis is really quite fundamental to his contribution, and confirms the central role played by the exterior and interior products in their own right, and, as we shall see later in the book, in the construction of higher level products (the Clifford product, generalized Grassmann product and hypercomplex product) as sums of these two.

The chapter has not yet introduced any geometric or physical interpretations for the elements of the algebra, as, being a mathematical system, its power lies in its potentially multiple interpretations. Nevertheless, I am acutely aware that comprehension is enhanced by context, particularly the geometric. I beg your indulgence that the next chapter is still devoid of any depiction of the entities discussed, but hope that Chapter 4: Geometric Interpretations and subsequent examples will redress this balance.
Chapter 3 will discover that the symmetry in the suite of linear spaces created by the exterior product leads to a beautiful "dual" product: the regressive product. Equipped with these dual products, Chapter 4 will then be able finally to provide a geometric interpretation for all the different types of elements of the spaces. Chapter 5 shows how the symmetry further enables the definition of a new operation: the complement. Then Chapter 6 shows how to combine all of these to define the interior product, a much more general product than the scalar product. These chapters form the critical foundation to Grassmann's algebra. The rest is exploration!


3 The Regressive Product

3.1 Introduction
Since the linear spaces L_m and L_(n-m) are of the same dimension C(n, m) = C(n, n-m), and hence are isomorphic, the opportunity exists to define a product operation (called the regressive product) dual to the exterior product, such that theorems involving exterior products of m-elements have duals involving regressive products of (n-m)-elements. Very roughly speaking, if the exterior product is associated with the notion of union, then the regressive product is associated with the notion of intersection.

The regressive product appears to be little used in the recent literature. Grassmann's original development did not distinguish notationally between the exterior and regressive product operations. Instead he capitalized on the inherent duality and used a notation which, depending on the grade of the elements in the product, could only sensibly be interpreted by one or the other operation. This was a very elegant idea, but its subtlety may have been one of the reasons the notion has become lost. (See "Grassmann's notation for the regressive product" in Section 3.3.)

However, since the regressive product is a simple dual operation to the exterior product, an enticing and powerful symmetry is lost by ignoring it. We will find that its 'intersection' properties are a useful conceptual and algorithmic addition to non-metric geometry, and that its algebraic properties enable a firm foundation for the development of metric geometry. Some of the results which are usually proven in metric spaces via the inner and cross products can also be proven using the exterior and regressive products, thus showing that the result is independent of whether or not the space has a metric.

The approach adopted in this book is to distinguish between the two product operations by using different notations, just as Boolean algebra has its dual operations of union and intersection. We will find that this approach does not detract from the elegance of the results.
We will also find that differentiating the two operations explicitly enhances the simplicity and power of the derivation of results. Since the commonly accepted modern notation for the exterior product operation is the 'wedge' symbol ∧, we will denote the regressive product operation by a 'vee' symbol ∨. Note however that this (unfortunately) does not correspond to the Boolean algebra usage of the 'vee' for union and the 'wedge' for intersection.


3.2 Duality
The notion of duality
In order to ensure that the regressive product is defined as an operation correctly dual to the exterior product, we give the defining axiom set for the regressive product the same formal symbolic structure as the axiom set for the exterior product. This may be accomplished by replacing ∧ by ∨, and replacing the grades of elements and spaces by their complementary grades. The complementary grade of a grade m is defined in a linear space of n dimensions to be n-m.

a_m ↔ a_(n-m)        L_m ↔ L_(n-m)        (3.1)

Note that here we are undertaking the construction of a mathematical structure, and thus there is no specific mapping implied between individual elements at this stage. In Chapter 5: The Complement, we will introduce a mapping between the elements of L_m and L_(n-m) which will lead to the definition of complement and interior product. For concreteness, we take some examples.

Examples: Obtaining the dual of an axiom


The dual of axiom 6
We begin with the exterior product axiom

∧6: The exterior product of an m-element and a k-element is an (m+k)-element.

{a_m ∈ L_m, b_k ∈ L_k}  ⇒  {a_m ∧ b_k ∈ L_(m+k)}

To form the dual of this axiom, replace ∧ with ∨, and the grades of elements and spaces by their complementary grades.

{a_(n-m) ∈ L_(n-m), b_(n-k) ∈ L_(n-k)}  ⇒  {a_(n-m) ∨ b_(n-k) ∈ L_(n-(m+k))}

Although this is indeed the dual of axiom ∧6, it is not necessary to display the grades of what are arbitrary elements in the more complex form specifically involving the dimension n of the space. It will be more convenient to display them as grades denoted by simple symbols like m and k, as were the grades of the elements of the original axiom. To effect this transformation most expeditiously we first let m' = n-m, k' = n-k to get


{a_m' ∈ L_m', b_k' ∈ L_k'}  ⇒  {a_m' ∨ b_k' ∈ L_((m'+k')-n)}

The grade of the space to which the regressive product belongs is n-(m+k) = n-((n-m')+(n-k')) = (m'+k')-n. Finally, since the primes are no longer necessary we drop them. Then the final form of the axiom dual to ∧6, which we label ∨6, becomes:

{a_m ∈ L_m, b_k ∈ L_k}  ⇒  {a_m ∨ b_k ∈ L_(m+k-n)}

In words, this says that the regressive product of an m-element and a k-element is an (m+k-n)-element.
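The grade bookkeeping of this dual pair of axioms can be captured in a few lines of Python (a sketch with illustrative function names, not part of any package):

```python
def exterior_grade(m, k, n):
    """Grade of a_m ∧ b_k in an n-space (axiom ∧6); None when m+k exceeds n."""
    return m + k if m + k <= n else None

def regressive_grade(m, k, n):
    """Grade of a_m ∨ b_k (axiom ∨6): m+k-n; None when m+k falls short of n."""
    return m + k - n if m + k >= n else None
```

The duality is visible in the grades themselves: for m+k ≤ n, the regressive product of the complementary grades n-m and n-k has grade n-(m+k), the complement of the exterior grade.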

The dual of axiom 8


Axiom ∧8 says: There is a unit scalar which acts as an identity under the exterior product.

{∃ 1, 1 ∈ L_0},  a_m ∧ 1 = a_m

For simplicity we do not normally display designated scalars with an underscripted zero. However, the duality transformation will be clearer if we rewrite the axiom with 1_0 in place of 1.

{∃ 1_0, 1_0 ∈ L_0},  a_m ∧ 1_0 = a_m

Replace ∧ with ∨, and the grades of elements and spaces by their complementary grades.

{∃ 1_n, 1_n ∈ L_n},  a_(n-m) ∨ 1_n = a_(n-m)

Replace arbitrary grades m with n-m' (n is not an arbitrary grade).

{∃ 1_n, 1_n ∈ L_n},  a_m' ∨ 1_n = a_m'

Drop the primes.

{∃ 1_n, 1_n ∈ L_n},  a_m ∨ 1_n = a_m

In words, this says that there is a unit n-element which acts as an identity under the regressive product.

The dual of axiom 10


Axiom ∧10 says: The exterior product of elements of odd grade is anti-commutative.




a_m ∧ b_k = (-1)^(m k) b_k ∧ a_m

Replace ∧ with ∨, and the grades of elements by their complementary grades. Note that it is only the grades shown in the underscripts which should be replaced, not those figuring elsewhere in the formula.

a_(n-m) ∨ b_(n-k) = (-1)^(m k) b_(n-k) ∨ a_(n-m)

Replace arbitrary grades m (wherever they occur in the formula) with n-m', and k with n-k'. An arbitrary grade is one without a specific value, like 0, 1, or n.

a_m' ∨ b_k' = (-1)^((n-m')(n-k')) b_k' ∨ a_m'

Drop the primes.

a_m ∨ b_k = (-1)^((n-m)(n-k)) b_k ∨ a_m

In words, this says that the regressive product of elements of odd complementary grade is anti-commutative.

Summary: The duality transformation algorithm


The algorithm for the duality transformation may be summarized as follows:
1. Replace ∧ with ∨, and the grades of elements and spaces by their complementary grades.
2. Replace arbitrary grades m with n-m', k with n-k'. Drop the primes. An arbitrary grade is one without a specific value, like 0, 1, or n.
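For concrete (numeric) grades the two steps collapse into a single recursive transformation. Here is a small Python sketch (an illustrative representation, not the GrassmannAlgebra Dual function) in which an expression is either a graded symbol ("sym", name, grade) or a binary product (operator, left, right):

```python
def dual(expr, n):
    """Swap ∧ and ∨, and replace every grade by its complement n - grade."""
    if expr[0] == "sym":
        _, name, grade = expr
        return ("sym", name, n - grade)
    op, left, right = expr
    return ({"∧": "∨", "∨": "∧"}[op], dual(left, n), dual(right, n))
```

For example, the dual of a ∧ b (grades 2 and 1 in a 3-space) is a ∨ b with grades 1 and 2, and applying dual twice recovers the original expression, which is exactly the involution property required of the duality principle in the next section.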

3.3 Properties of the Regressive Product


Axioms for the regressive product
In this section we collect the results of applying the duality algorithm above to the exterior product axioms ∧6 to ∧12. Axioms 1 to 5 transform unchanged since there are no products involved.

We use the symbol n for the dimension of the underlying linear space wherever we want to ensure the formulas can be understood by GrassmannAlgebra functions (for example, the Dual function to be discussed later). However in general textual comment we will usually retain the simpler symbol n to represent this dimension.


∨6: The regressive product of an m-element and a k-element is an (m+k-n)-element.

{a_m ∈ L_m, b_k ∈ L_k}  ⇒  {a_m ∨ b_k ∈ L_(m+k-n)}        (3.2)

∨7: The regressive product is associative.

(a_m ∨ b_k) ∨ g_j = a_m ∨ (b_k ∨ g_j)        (3.3)

∨8: There is a unit n-element which acts as an identity under the regressive product.

{∃ 1_n, 1_n ∈ L_n},  a_m ∨ 1_n = a_m        (3.4)

∨9: Non-zero n-elements have inverses with respect to the regressive product.

{∃ 1_n/a, 1_n/a ∈ L_n},  1_n = a ∨ (1_n/a),  {∀ a, a ≠ 0, a ∈ L_n}        (3.5)

∨10: The regressive product of elements of odd complementary grade is anti-commutative.

a_m ∨ b_k = (-1)^((n-k)(n-m)) b_k ∨ a_m        (3.6)

∨11: Additive identities act as multiplicative zero elements under the regressive product.

{∃ 0_k, 0_k ∈ L_k},  0_k ∨ a_m = 0_(k+m-n)        (3.7)


∨12: The regressive product is both left and right distributive under addition.

(a_m + b_m) ∨ g_k = a_m ∨ g_k + b_m ∨ g_k        (3.8)

a_m ∨ (b_k + g_k) = a_m ∨ b_k + a_m ∨ g_k

∨13: Scalar multiplication commutes with the regressive product.

a (a_m ∨ b_k) = (a a_m) ∨ b_k = a_m ∨ (a b_k)        (3.9)

The unit n-element


The unit n-element is congruent to any basis n-element.
The duality algorithm has generated a unit n-element 1_n which acts as the multiplicative identity for the regressive product, just as the unit scalar 1 (or 1_0) acts as the multiplicative identity for the exterior product (axioms ∧8 and ∨8).

We have already seen that any basis of L_n contains only one element. If a basis of L_1 is {e1, e2, …, en}, then the single basis element of L_n is congruent to e1 ∧ e2 ∧ … ∧ en. (Two elements are congruent if one is a scalar multiple of the other.) If the basis of L_1 is changed by an arbitrary (non-singular) linear transformation, then the basis of L_n changes by a scalar factor which is the determinant of the transformation. Any basis of L_n may therefore be expressed as a scalar multiple of some given basis, say e1 ∧ e2 ∧ … ∧ en. Hence we can also express e1 ∧ e2 ∧ … ∧ en as some scalar multiple c of 1_n. (We denote the scalar in this form so that GrassmannAlgebra can understand its special role in later computations.)

e1 ∧ e2 ∧ … ∧ en = c 1_n        (3.10)

Defining 1_n any more specifically than this is normally done by imposing a metric onto the space. This we do in Chapter 5: The Complement, and Chapter 6: The Interior Product. It turns out then that 1_n is the n-element whose measure (magnitude, volume) is unity.


On the other hand, for geometric application in spaces without a metric, for example the calculation of intersections of lines, planes, and hyperplanes, it is inconsequential that we only know 1_n up to congruence, because we will see that if an element defines a geometric entity then any element congruent to it will define the same geometric entity.

The unit n-element is idempotent under the regressive product.

By putting a_m equal to 1_n in the regressive product axiom ∨8 we have immediately that:

1_n ∨ 1_n = 1_n        (3.11)

n-elements allow a sort of associativity.

By letting a_n = a 1_n we can show that regressive products with an n-element allow a sort of associativity with the exterior product.

a_n ∨ (b_k ∧ g_j) = (a_n ∨ b_k) ∧ g_j        (3.12)

Division by n-elements.

An equation may be 'divided' through by an n-element under the regressive product.

{b_n ∨ a_m = b_n ∨ g_m}  ⇒  {a_m = g_m}        (3.13)

We can easily see this by putting b_n = b 1_n, and using axiom ∨8.

The inverse of an n-element


Axiom ∨9 says that every (non-zero) n-element has an inverse with respect to the regressive product and the unit n-element.

{∃ 1_n/a, 1_n/a ∈ L_n},  1_n = a ∨ (1_n/a),  {∀ a, a ≠ 0, a ∈ L_n}

Suppose we have an n-element a_n expressed in terms of some basis. Then, according to 3.10, we can express it as a scalar multiple (a, say) of the unit n-element 1_n. We may then write the inverse a_n^-1 of a_n with respect to the regressive product as the scalar multiple 1/a of 1_n.

a_n = a 1_n        a_n^-1 = (1/a) 1_n

We can see this by taking the regressive product of a_n = a 1_n with (1/a) 1_n:

a_n ∨ a_n^-1 = (a 1_n) ∨ ((1/a) 1_n) = 1_n ∨ 1_n = 1_n

If a_n is now expressed in terms of a basis we have:

a_n = b e1 ∧ e2 ∧ … ∧ en = b c 1_n

Hence a_n^-1 can be written as:

a_n^-1 = (1/(b c)) 1_n = (1/(b c^2)) e1 ∧ e2 ∧ … ∧ en

Summarizing these results: If a_n^-1 is defined by a_n ∨ a_n^-1 = 1_n, then

a_n = a 1_n  ⇒  a_n^-1 = (1/a) 1_n        (3.14)

a_n = b e1 ∧ e2 ∧ … ∧ en  ⇒  a_n^-1 = (1/(b c^2)) e1 ∧ e2 ∧ … ∧ en        (3.15)
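Because every n-element is a scalar multiple of the basis n-element e1 ∧ … ∧ en, and e1 ∧ … ∧ en = c 1_n, formulas 3.14 and 3.15 reduce to scalar arithmetic. The following Python sketch (illustrative names and encoding only) models an n-element by its coefficient b relative to the basis n-element:

```python
from fractions import Fraction

def regressive_nn(b1, b2, c):
    """Coefficient of e1∧…∧en in (b1 e1∧…∧en) ∨ (b2 e1∧…∧en).

    Each factor equals (b c) 1_n, the regressive product is (b1 c)(b2 c) 1_n,
    and converting back to basis form divides by c once, giving b1 b2 c.
    """
    return b1 * b2 * c

def inverse_n(b, c):
    """Formula 3.15: the inverse of b (e1∧…∧en) is (1/(b c^2)) (e1∧…∧en)."""
    return Fraction(1, b * c * c)
```

With b = 6 and c = 2, the product of the element with its inverse has basis coefficient 1/c, which is exactly the coefficient of 1_n expressed in basis form, as axiom ∨9 requires.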

Grassmann's notation for the regressive product


In Grassmann's Ausdehnungslehre of 1862 Section 95 (translated by Lloyd C. Kannenberg) he states:

If q and r are the orders of two magnitudes A and B, and n that of the principal domain, then the order of the product [A B] is first equal to q+r if q+r is smaller than n, and second equal to q+r|n if q+r is greater than or equal to n.
Translating this into the terminology used in this book we have:

If q and r are the grades of two elements A and B, and n that of the underlying linear space, then the grade of the product [A B] is first equal to q+r if q+r is smaller than n, and second equal to q+r|n if q+r is greater than or equal to n.


Translating this further into the notation used in this book we have:

[A_p B_q] = A_p ∧ B_q        (p + q < n)
[A_p B_q] = A_p ∨ B_q        (p + q ≥ n)

Grassmann called the product [A B] in the first case a progressive (exterior) product, and in the second case a regressive (exterior) product. (In current terminology, which we adopt in this book, the progressive exterior product is called simply the exterior product, and the regressive exterior product is called simply the regressive product.)

In the equivalence above, Grassmann has opted to define the product of two elements whose grades sum to the dimension n of the space as regressive, and thus a scalar. However, the more explicit notation that we have adopted identifies that some definition is still required for the progressive (exterior) product of two such elements. The advantage of denoting the two products differently is that it enables us to correctly define the exterior product of two elements whose grades sum to n as an n-element. Grassmann, by his choice of notation, has decided to define it as a scalar. In modern terminology this is equivalent to confusing scalars with pseudo-scalars. A separate notation for the two products thus avoids this tensorially invalid confusion.

We can see then how, in not being explicitly denoted, the regressive product may have become lost.
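Grassmann's 1862 reading of the single bracket can be stated as a two-line rule. The sketch below (a hypothetical function name, for illustration only) also exposes the boundary case p + q = n criticized above, where his convention yields a grade-0 (scalar) result rather than an n-element:

```python
def bracket_1862(p, q, n):
    """Interpret Grassmann's [A B] by grade: progressive below n, else regressive."""
    if p + q < n:
        return ("progressive", p + q)       # the (exterior) product, grade p+q
    return ("regressive", p + q - n)        # the regressive product, grade p+q-n
```

For example, in a 3-space the bracket of two 1-elements is progressive of grade 2, the bracket of two 2-elements is regressive of grade 1, and the bracket of a 1-element and a 2-element falls on the boundary and is read as regressive of grade 0.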

3.4 The Grassmann Duality Principle


The dual of a dual
The duality of the axiom sets for the exterior and regressive products is completed by requiring that the dual of a dual of an axiom be the axiom itself. The dual of a regressive product axiom may be obtained by applying the following algorithm:
1. Replace ∨ with ∧, and the grades of elements and spaces by their complementary grades.
2. Replace arbitrary grades m with n-m', k with n-k'. Drop the primes.
This differs from the algorithm for obtaining the dual of an exterior product axiom only in the replacement of ∨ with ∧ instead of vice versa. It is easy to see that applying this algorithm to the regressive product axioms generates the original exterior product axiom set.

We can combine both transformation algorithms by restating them as:
1. Replace each product operation by its dual operation, and the grades of elements and spaces by their complementary grades.
2. Replace arbitrary grades m with n-m', k with n-k'. Drop the primes.


Since this algorithm applies to both sets of axioms, it also applies to any theorem. Thus to each theorem involving exterior or regressive products corresponds a dual theorem obtained by applying the algorithm. We call this the Grassmann Duality Principle (or, where the context is clear, simply the Duality Principle).

The Grassmann Duality Principle


To every theorem involving exterior and regressive products, a dual theorem may be obtained by:
1. Replacing each product operation by its dual operation, and the grades of elements and spaces by their complementary grades.
2. Replacing arbitrary grades m with n-m', k with n-k', then dropping the primes.

Example 1
The following theorem says that the exterior product of two elements is zero if the sum of their grades is greater than the dimension of the linear space.
{a_m ∧ b_k = 0,  m + k - n > 0}

The dual theorem states that the regressive product of two elements is zero if the sum of their grades is less than the dimension of the linear space.
{a_m ∨ b_k = 0,  n - (k + m) > 0}

We can recover the original theorem as the dual of this one.

Example 2
This theorem below says that the exterior product of an element with itself is zero if it is of odd grade.
{a_m ∧ a_m = 0,  m ∈ OddPositiveIntegers}

The dual theorem states that the regressive product of an element with itself is zero if its complementary grade is odd.
{a_m ∨ a_m = 0,  (n - m) ∈ OddPositiveIntegers}

Again, we can recover the original theorem by calculating the dual of this one.

Example 3
The following theorem states that the exterior product of unity with itself any number of times remains unity.


1 ∧ 1 ∧ … ∧ 1 ∧ a_m = a_m

The dual theorem states the corresponding fact for the regressive product of unit n-elements 1_n.

1_n ∨ 1_n ∨ … ∨ 1_n ∨ a_m = a_m

Using the GrassmannAlgebra function Dual


The algorithm of the Duality Principle has been encapsulated in the function Dual in GrassmannAlgebra. Dual takes an expression or list of expressions which comprise an axiom or theorem and generates the list of dual expressions by transforming them according to the replacement rules of the Duality Principle.

Example 1: The Dual of axiom 10


Our first examples show how Dual may be used to develop dual axioms. For example, to take the dual of axiom ∧10 simply enter:

Dual[a_m ∧ b_k == (-1)^(m k) b_k ∧ a_m]

a_m ∨ b_k == (-1)^((-k+n)(-m+n)) b_k ∨ a_m

Example 2: The Dual of axiom 8


Again, to take the Dual of axiom ∧8 enter:

Dual[{∃ 1, 1 ∈ L_0}, a_m ∧ 1 == a_m]

{∃ 1_n, 1_n ∈ L_n},  a_m ∨ 1_n == a_m

Example 3: The Dual of axiom 9


To apply Dual to an axiom involving more than one statement, collect the statements in a list. For example, the dual of axiom ∧9 is obtained as:

Dual[{∃ a^-1, a^-1 ∈ L_0}, 1 == a ∧ a^-1, {∀ a, a ≠ 0, a ∈ L_0}]

{∃ 1_n/a, 1_n/a ∈ L_n},  1_n == a ∨ (1_n/a),  {∀ a, a ≠ 0, a ∈ L_n}
Note here that we are purposely frustrating Mathematica's inbuilt definition of Exists and ForAll by writing them in script.


Example 4: The Dual of the Common Factor Axiom


Our fourth example is to generate the dual of the Common Factor Axiom. This axiom will be introduced in the next section. Note that the dimension of the space must be entered as n in order for Dual to recognize it as such.

CommonFactorAxiom = {(a_m ∧ g_j) ∨ (b_k ∧ g_j) == (a_m ∧ b_k ∧ g_j) ∨ g_j,  m + k + j - n == 0};

Dual[CommonFactorAxiom]

{(a_m ∨ g_j) ∧ (b_k ∨ g_j) == (a_m ∨ b_k ∨ g_j) ∧ g_j,  -j - k - m + 2 n == 0}

We can verify that the dual of the dual of the axiom is the axiom itself.

CommonFactorAxiom == Dual[Dual[CommonFactorAxiom]]
True

Example 5: The Dual of a formula


Our fifth example is to obtain the dual of one of the product formulae derived in a later section of this chapter. Again, the dimension of the space must be entered as n.

F = (a_m ∧ b_k) ∨ x_(n-1) == (a_m ∨ x_(n-1)) ∧ b_k + (-1)^m a_m ∧ (b_k ∨ x_(n-1))

Dual[F]

(a_m ∨ b_k) ∧ x == (-1)^(n-m) a_m ∨ (b_k ∧ x) + (a_m ∧ x) ∨ b_k

Again, we can verify that the dual of the dual of the formula is the formula itself.

F == Dual[Dual[F]]
True

Setting up your own input to the Dual function


You can use the GrassmannAlgebra function Dual with any expression involving exterior and regressive products of graded, vector or scalar symbols. You can include multiple statements by combining them in a list. The statements should not include any expressions which will evaluate when entered. The dimension of the underlying linear space should be denoted as n. Avoid using scalar or vector symbols as (underscripted) grades if they appear other than as underscripts elsewhere in the expression.


3.5 The Common Factor Axiom


Motivation
Although the axiom sets for the exterior and regressive products have been posited, it still remains to propose an axiom explicitly relating the two types of products.

The axiom set for the regressive product, which we have created above simply as a dual axiom set to that of the exterior product, asserts that in an n-space the regressive product of an m-element and a k-element is an (m+k-n)-element. Our first criterion for the new axiom, then, is that it can be used to compute this new element.

Earlier we have also remarked on some enticing correspondences between the dual exterior and regressive products and the dual union and intersection operations of a Boolean algebra. We can already see that the (non-zero) exterior product of simple elements is an element whose space is the 'union' of the spaces of its factors. Our second criterion, then, is that the new axiom enable the 'dual' of this property: the (non-zero) regressive product of simple elements is an element whose space is the 'intersection' of the spaces of its factors.

This criterion leads us first to return to Chapter 2: Unions and Intersections, where, with an arbitrary multilinear product operation, we were able to compute the union and intersection of simple elements. In the cases explored, the simple elements were of arbitrary grade. We saw there that given simple elements A* ∧ C and C ∧ B*, we could obtain their union and intersection as (congruent to) A* ∧ C ∧ B* and C respectively.

A B = (A* ∧ C) (C ∧ B*) ≅ (A* ∧ C ∧ B*) C

Since ∘ is an arbitrary multilinear product (other than the exterior product), we can satisfy the second criterion by positing the same form for the regressive product.

A ∨ B ≡ (A*∧C) ∨ (C∧B*) ≡ (A*∧C∧B*) ∨ C    (3.16)

What additional requirements on this would one need in order to satisfy the first criterion, that is, that the axiom enable a result which does not involve the regressive product? Axiom 8 gives us the clue.
{∃ 1_n : 1_n ∈ L_n, α_m ∨ 1_n ≡ α_m}    (axiom 8, above)

If the grade of A*∧C∧B* is equal to the dimension n of the space, then, since all n-elements are congruent, we can write A*∧C∧B* ≡ c 1_n, where c is some scalar, showing that the right-hand side is congruent to the common factor (intersection) C.

(A*∧C) ∨ (C∧B*) ≡ (A*∧C∧B*) ∨ C ≡ (c 1_n) ∨ C ≡ c C ≡ C


An axiom which satisfies both of our original criteria is then:

{(A*∧C) ∨ (C∧B*) ≡ (A*∧C∧B*) ∨ C, Grade[A*∧C∧B*] == n}

Using graded symbols, we could also write this in either of the following forms:

{(α_m∧γ_j) ∨ (γ_j∧β_k) ≡ (α_m∧γ_j∧β_k) ∨ γ_j, m + k + j − n == 0}

{(α_m∧γ_j) ∨ (β_k∧γ_j) ≡ (α_m∧β_k∧γ_j) ∨ γ_j, m + k + j − n == 0}

As we will see, an axiom of this form works very well. Indeed, it turns out to be one of the fundamental underpinnings of the algebra. We call it the Common Factor Axiom and explore it further below.

Historical Note
The approach we have adopted in this chapter of treating the common factor relation as an axiom is effectively the same as that which Grassmann used in his first Ausdehnungslehre (1844), but differs from the approach he used in his second Ausdehnungslehre (1862). (See Chapter 3, Section 5 in Kannenberg.) In the 1862 version Grassmann proves this relation from another which is (almost) the same as the Complement Axiom that we introduce in Chapter 5: The Complement. Whitehead [1898] and other writers in the Grassmannian tradition follow his 1862 approach. The relation which Grassmann used in the 1862 Ausdehnungslehre is in effect equivalent to assuming the space has a Euclidean metric (his Ergänzung or supplement). However, the Common Factor Axiom does not depend on the space having a metric; that is, it is completely independent of any correspondence we set up between L_m and L_(n−m). Hence we would rather not adopt an approach which introduces an unnecessary constraint, especially since we want to show later that the Ausdehnungslehre is easily extended to more general metrics than the Euclidean.

The Common Factor Axiom


We begin completely afresh in positing the axiom. Let α_m, β_k, and γ_j be simple elements with m + k + j = n, where n is the dimension of the space. Then the Common Factor Axiom states that:

{(α_m∧γ_j) ∨ (β_k∧γ_j) ≡ (α_m∧β_k∧γ_j) ∨ γ_j, m + k + j − n == 0}    (3.17)

Thus, the regressive product of two elements α_m∧γ_j and β_k∧γ_j with a common factor γ_j is equal to the regressive product of the 'union' of the elements, α_m∧β_k∧γ_j, with the common factor γ_j (their 'intersection'). If α_m and β_k still contain some simple elements in common, then the product α_m∧β_k∧γ_j is zero; hence, by the definition above, (α_m∧γ_j) ∨ (β_k∧γ_j) is also zero. In what follows, we suppose that α_m∧β_k∧γ_j is not zero.


Since the union α_m∧β_k∧γ_j is an n-element, we can write it as some scalar factor c, say, of the unit n-element: α_m∧β_k∧γ_j ≡ c 1_n. Hence by axiom 8 we derive immediately that the regressive product of two elements α_m∧γ_j and β_k∧γ_j with a common factor γ_j is congruent to that factor.

{(α_m∧γ_j) ∨ (β_k∧γ_j) ≡ γ_j, m + k + j − n == 0}    (3.18)

It is easy to see, by using the anti-commutativity axiom 10, that the axiom may be arranged in any of a number of alternative forms, the most useful of which are:

{(α_m∧γ_j) ∨ (γ_j∧β_k) ≡ (α_m∧γ_j∧β_k) ∨ γ_j, m + k + j − n == 0}    (3.19)

{(γ_j∧α_m) ∨ (γ_j∧β_k) ≡ (γ_j∧α_m∧β_k) ∨ γ_j, m + k + j − n == 0}    (3.20)

And since, for the regressive product, any n-element commutes with any other element, we can always rewrite the right-hand side with the common factor first.

{(α_m∧γ_j) ∨ (γ_j∧β_k) ≡ γ_j ∨ (α_m∧γ_j∧β_k), m + k + j − n == 0}    (3.21)
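The axiom can be sanity-checked numerically in a small standalone model of the algebra. The sketch below (the helper names `vec`, `wedge`, `comp` and `meet` are our own, not part of GrassmannAlgebra) realises the regressive product through the Euclidean complement with the convention comp(a ∨ b) = comp(a) ∧ comp(b). This quietly imposes a metric, which is used here purely for verification; the axiom itself, as stressed above, is metric-independent. With n = 4, m = k = 1 and j = 2, both sides of the axiom are computed and compared, and the result is checked to be congruent to the common factor γ as formula 3.18 asserts.

```python
def vec(cs):
    # a 1-element as a dict mapping sorted basis index-tuples to coefficients
    return {(i,): c for i, c in enumerate(cs, 1) if c}

def clean(a):
    return {k: v for k, v in a.items() if v}

def wedge(a, b):
    # exterior product on coefficient dicts
    out = {}
    for s, x in a.items():
        for t, y in b.items():
            if set(s) & set(t):
                continue  # a repeated 1-element factor gives zero
            m = s + t
            inv = sum(1 for i in range(len(m))
                        for j in range(i + 1, len(m)) if m[i] > m[j])
            k = tuple(sorted(m))
            out[k] = out.get(k, 0) + (-1) ** inv * x * y
    return clean(out)

def comp(a, n):
    # Euclidean complement: e_S -> sign * e_(S complement)
    out = {}
    for s, x in a.items():
        t = tuple(i for i in range(1, n + 1) if i not in s)
        m = s + t
        inv = sum(1 for i in range(len(m))
                    for j in range(i + 1, len(m)) if m[i] > m[j])
        out[t] = out.get(t, 0) + (-1) ** inv * x
    return clean(out)

def meet(a, b, n):
    # regressive product via comp(a v b) = comp(a) ^ comp(b);
    # comp(comp(x)) = (-1)^(m(n-m)) x undoes the double complement
    w = comp(wedge(comp(a, n), comp(b, n)), n)
    return clean({s: (-1) ** (len(s) * (n - len(s))) * v for s, v in w.items()})

n = 4                                                  # m + k + j = 1 + 1 + 2 = n
alpha, beta = vec([1, 2, 0, 1]), vec([0, 1, 3, 1])
gamma = wedge(vec([1, 0, 1, 0]), vec([0, 1, 0, 2]))    # a simple 2-element

lhs = meet(wedge(alpha, gamma), wedge(beta, gamma), n)  # (alpha^gamma) v (beta^gamma)
rhs = meet(wedge(alpha, wedge(beta, gamma)), gamma, n)  # (alpha^beta^gamma) v gamma

k0 = min(gamma)
congruent = (set(lhs) == set(gamma) and lhs[k0] != 0 and
             all(lhs[k] * gamma[k0] == lhs[k0] * gamma[k] for k in gamma))
```

Here `congruent` records that the result is a non-zero scalar multiple of γ, i.e. congruent to the common factor.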

Extension of the Common Factor Axiom to general elements


The axiom has been stated for simple elements. In this section we show that it remains valid for general (possibly non-simple) elements, provided that the common factor remains simple.

Consider two simple elements α1_m and α2_m. Then the Common Factor Axiom can be written for each as:

(α1_m∧γ_j) ∨ (β_k∧γ_j) ≡ (α1_m∧β_k∧γ_j) ∨ γ_j

(α2_m∧γ_j) ∨ (β_k∧γ_j) ≡ (α2_m∧β_k∧γ_j) ∨ γ_j

Adding these two equations and using the distributivity of ∧ and ∨ gives:

((α1_m + α2_m)∧γ_j) ∨ (β_k∧γ_j) ≡ ((α1_m + α2_m)∧β_k∧γ_j) ∨ γ_j

Extending this process, we see that the formula remains true for arbitrary α_m and β_k, provided γ_j is simple.

{(α_m∧γ_j) ∨ (β_k∧γ_j) ≡ (α_m∧β_k∧γ_j) ∨ γ_j ≡ γ_j, m + k + j − n == 0}    (3.22)

This is an extended version of the Common Factor Axiom. It states that the regressive product of two arbitrary elements containing a simple common factor is congruent to that factor. For applications involving computations in a non-metric space, particularly those with a geometric interpretation, we will see that the congruence form is not restrictive; indeed, it will be quite elucidating. For more general applications in metric spaces we will see that the associated scalar factor is no longer arbitrary but is determined by the metric imposed.

Special cases of the Common Factor Axiom


In this section we list some special cases of the Common Factor Axiom. We assume, without explicitly stating it, that the common factor is simple. When the grades do not conform to the requirement shown, the product is zero.

{(α_m∧γ_j) ∨ (β_k∧γ_j) ≡ 0, m + k + j − n ≠ 0}    (3.23)

If there is no common factor (other than the scalar 1), then the axiom reduces to:

{α_m ∨ β_k ≡ (α_m∧β_k) ∨ 1 ∈ L_0, m + k − n == 0}    (3.24)

The following version yields a sort of associativity for products which are scalar.

{(α_m∧β_k) ∨ γ_j ≡ α_m ∨ (β_k∧γ_j) ∈ L_0, m + k + j − n == 0}    (3.25)

To prove this, we can use the previous formula to show that each side of the equation is equal to (α_m∧β_k∧γ_j) ∨ 1.

Dual versions of the Common Factor Axiom


As discussed earlier in the chapter, dual versions of formulae can be obtained automatically by using the GrassmannAlgebra function Dual. The dual Common Factor Axiom is:

{(α_m∨γ_j) ∧ (β_k∨γ_j) ≡ (α_m∨β_k∨γ_j) ∧ γ_j ≡ γ_j, m + k + j − 2n == 0}    (3.26)

The duals of the three special cases in the previous section are:

{(α_m∨γ_j) ∧ (β_k∨γ_j) ≡ 0, m + k + j − 2n ≠ 0}    (3.27)

{α_m ∧ β_k ≡ (α_m∨β_k) ∧ 1_n ∈ L_n, m + k − n == 0}    (3.28)

{(α_m∨β_k) ∧ γ_j ≡ α_m ∧ (β_k∨γ_j) ∈ L_n, m + k + j − 2n == 0}    (3.29)

Application of the Common Factor Axiom


We now work through an example to illustrate how the Common Factor Axiom might be applied. In most cases, however, results will be obtained more effectively using the Common Factor Theorem, discussed in the section to follow. Suppose we have two general 2-elements X and Y in a 3-space, and we wish to find a formula for their 1-element intersection Z. Because X and Y are in a 3-space, we are assured that they are simple.

X = x1 e1∧e2 + x2 e1∧e3 + x3 e2∧e3
Y = y1 e1∧e2 + y2 e1∧e3 + y3 e2∧e3

We calculate Z as the regressive product of X and Y:

Z ≡ X ∨ Y ≡ (x1 e1∧e2 + x2 e1∧e3 + x3 e2∧e3) ∨ (y1 e1∧e2 + y2 e1∧e3 + y3 e2∧e3)

Expanding this product, and remembering that the regressive product of identical basis 2-elements is zero, we obtain:

Z ≡ (x1 e1∧e2)∨(y2 e1∧e3) + (x1 e1∧e2)∨(y3 e2∧e3) + (x2 e1∧e3)∨(y1 e1∧e2) + (x2 e1∧e3)∨(y3 e2∧e3) + (x3 e2∧e3)∨(y1 e1∧e2) + (x3 e2∧e3)∨(y2 e1∧e3)

In a 3-space, regressive products of 2-elements are anti-commutative, since (−1)^((n−m)(n−k)) = (−1)^((3−2)(3−2)) = −1. Hence we can collect pairs of terms with the same factors by making the corresponding sign change:

Z ≡ (x1 y2 − x2 y1) (e1∧e2)∨(e1∧e3) + (x1 y3 − x3 y1) (e1∧e2)∨(e2∧e3) + (x2 y3 − x3 y2) (e1∧e3)∨(e2∧e3)

We can now apply the Common Factor Axiom to each of these regressive products:

Z ≡ (x1 y2 − x2 y1) (e1∧e2∧e3)∨e1 + (x1 y3 − x3 y1) (e1∧e2∧e3)∨e2 + (x2 y3 − x3 y2) (e1∧e2∧e3)∨e3

Finally, by putting e1∧e2∧e3 ≡ c 1_n, we have Z expressed as a 1-element:

Z = c ((x1 y2 − x2 y1) e1 + (x1 y3 − x3 y1) e2 + (x2 y3 − x3 y2) e3)

Thus, in sum, we have the general congruence relation for the intersection of two 2-elements in a 3-space.

(x1 e1∧e2 + x2 e1∧e3 + x3 e2∧e3) ∨ (y1 e1∧e2 + y2 e1∧e3 + y3 e2∧e3) ≡ (x1 y2 − x2 y1) e1 + (x1 y3 − x3 y1) e2 + (x2 y3 − x3 y2) e3    (3.30)

The result on the right-hand side almost looks as if it could have been obtained from a cross-product operation. However, the cross product requires the space to have a metric, while this result does not. We will see in Chapter 6 how, once we have introduced a metric, we can transform this into the formula for the cross product. This is an example of how the regressive product can generate results independent of whether the space has a metric, whereas the usual vector algebra, in using the cross product, must assume that it does.
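Formula 3.30 can be sanity-checked numerically. The sketch below is independent of GrassmannAlgebra (the function names are ours): it computes the coefficients of Z, and verifies that Z lies in both X and Y by reading off the single e1∧e2∧e3 coefficient of each exterior product, which must vanish. The comment noting the cross-product reshuffle assumes the orthonormal correspondence discussed above.

```python
def intersect_bivectors(x, y):
    # Formula 3.30: (x1, x2, x3) are the coefficients of e1^e2, e1^e3, e2^e3
    x1, x2, x3 = x
    y1, y2, y3 = y
    return (x1 * y2 - x2 * y1, x1 * y3 - x3 * y1, x2 * y3 - x3 * y2)

def triple_coeff(z, x):
    # e1^e2^e3 coefficient of (z1 e1 + z2 e2 + z3 e3) ^ (x1 e1^e2 + x2 e1^e3 + x3 e2^e3):
    # e1^(e2^e3) = +, e2^(e1^e3) = -, e3^(e1^e2) = +
    return z[0] * x[2] - z[1] * x[1] + z[2] * x[0]

# Any two bivectors in 3-space. With c = cross((x1,x2,x3), (y1,y2,y3)), the result
# is the reshuffle (c[2], -c[1], c[0]) -- the metric-dependent cross-product link.
X, Y = (2, -1, 4), (1, 3, 5)
Z = intersect_bivectors(X, Y)
```

Running this with the sample values gives Z = (7, 6, −17), and both triple coefficients vanish, confirming that Z is common to X and Y.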


Check by calculating the exterior products


We can check that Z is indeed an element common to X and Y by determining whether the exterior product of Z with each of X and Y is zero. We compose the expressions for X, Y, and Z, and then use % (the palette alias of GrassmannExpandAndSimplify) to do the check. GrassmannAlgebra already knows that the special congruence symbol c is scalar, and ComposeBivector will automatically declare the scalar coefficients as ScalarSymbols.

!3 ; X = ComposeBivector[x]; Y = ComposeBivector[y];
Z = c ((x1 y2 − x2 y1) e1 + (x1 y3 − x3 y1) e2 + (x2 y3 − x3 y2) e3);
Expand[%[{Z∧X, Z∧Y}]]
{0, 0}

When the common factor is not simple


Let us take a precautionary example to show what happens when we have a factor that looks like a common factor but is not simple. When an m-element is simple, it needs only m independent 1-elements to express it. When an m-element is not simple, it requires at least m+2 independent 1-elements to express it. The simplest example of a non-simple element is the 2-element C, say, equal to u∧v + x∧y, where u, v, x and y are independent 1-elements.

4-space
In a 4-space we have only 4 independent 1-elements available. Let A and B be 3-elements with C apparently a common factor. First, suppose we simplify them before applying the Common Factor Axiom:

A ≡ u∧C ≡ u∧(u∧v + x∧y) ≡ u∧x∧y
B ≡ C∧x ≡ (u∧v + x∧y)∧x ≡ u∧v∧x
A ∨ B ≡ (u∧x∧y) ∨ (u∧v∧x) ≡ (y∧u∧x) ∨ (−(u∧x∧v))

The Common Factor Axiom gives us the correct result for the simplified factors, because the simplified factors had a simple common factor. But of course it is not C.

A ∨ B ≡ (y∧u∧x∧v) ∨ (x∧u)

Now if we formally (that is, looking only at the form), and thus incorrectly, apply the Common Factor Axiom without simplifying, we get

A ∨ B ≡ (u∧C) ∨ (C∧x) ≡ (u∧C∧x) ∨ C

Substituting back for C on the right-hand side gives zero, which is of course an incorrect result.

In sum: The Common Factor Axiom gives the correct result (as expected) when applied to terms resulting from the complete expansion of a regressive product, since these terms will either have simple common factors, or be zero. On the other hand, it will not necessarily give the correct result when the symbol for the common factor takes on a non-simple value.

3.6 The Common Factor Theorem


Development of the Common Factor Theorem
The Common Factor Theorem which we are about to develop is one of the most important in the Grassmann algebra, as it is the source of many formulae. We will also see later that it has a counterpart which forms the principal expansion theorem for interior products.

The example in the previous section applied the Common Factor Axiom to two elements by expanding all the terms in their regressive product, applying the axiom to each of the terms, and then factoring the result. This is always possible, but in situations where there is a large number of terms it may not be very efficient. The Common Factor Theorem enables more efficient solutions.

Let us begin by returning to the motivation of the Common Factor Axiom, where we recast the union and intersection formula from Chapter 2 to become the axiom.

A ∨ B ≡ (A*∧C) ∨ (C∧B*) ≡ (A*∧C∧B*) ∨ C

Here, the grade of A*∧C∧B* is equal to the dimension of the space. Since an n-element will commute with an element of any grade, the term on the right-hand side can always be rewritten as C ∨ (A*∧C∧B*). This latter form is often convenient from a mnemonic point of view, and we may make such exchanges without further comment. Thus

A ∨ B ≡ (A*∧C) ∨ (C∧B*) ≡ (A*∧C∧B*) ∨ C ≡ C ∨ (A*∧C∧B*)    (3.31)

There are two different forms of the Common Factor Theorem which we will develop: the A form and the B form. It is useful to have both forms at hand since, depending on the case, one of them is likely to be more computationally efficient than the other; and the formulae resulting from each, while ultimately yielding the same result, give distinctly different forms to their results. The A form requires A to be simple, and obtains the common factor C from the factors of A by spanning A (that is, composing an appropriate span and cospan of A). The B form requires B to be simple, and obtains the common factor C from the factors of B by spanning B.

The A form
The basic formula for the A form begins with the Common Factor Axiom written as

A ∨ B ≡ (A*∧C) ∨ B ≡ (A*∧B) ∨ C    (3.32)

Let us now display the grades explicitly. Let A be of grade m, B be of grade k, and C be of grade j. A* is then of grade m − j = n − k.


A_m ∨ B_k ≡ (A*_(m−j)∧C_j) ∨ B_k ≡ (A*_(m−j)∧B_k) ∨ C_j

The crux of the development comes from the intrinsic relationship between A, its span, and its cospan: for any i and s, A is equal to the exterior product of the ith s-span element and the corresponding ith s-cospan element.

A = span_s[A]⟦i⟧ ∧ cospan_s[A]⟦i⟧

In our case, s is m−j. So we can write

A = A*_(m−j)∧C_j = span_(m−j)[A]⟦i⟧ ∧ cospan_(m−j)[A]⟦i⟧

We now focus on the right-hand side of the Common Factor Axiom. If, for a given value of i, we were to replace A* by span_(m−j)[A]⟦i⟧ and C by cospan_(m−j)[A]⟦i⟧, we would get the term

(span_(m−j)[A]⟦i⟧ ∧ B_k) ∨ cospan_(m−j)[A]⟦i⟧

Clearly, this term may be different for different values of i since, for example, wherever the span element span_(m−j)[A]⟦i⟧ contains a factor of the common factor it will be zero, because B contains the same factor. In other cases it may be non-zero. As it turns out, the common factor needs the sum of these terms. We will see why in the next section, where we discuss the proof of the theorem.

A_m ∨ B_k ≡ ∑_{i=1}^{ν} (span_(m−j)[A]⟦i⟧ ∧ B_k) ∨ cospan_(m−j)[A]⟦i⟧    (3.33)

The index i ranges from 1 to ν, where ν is equal to the binomial coefficient (m choose j), or equivalently (m choose m−j).

Because the exterior and regressive products have Mathematica's Listable attribute, we do not actually need to index the list elements and then sum them. If we write S as a shorthand for Mathematica's built-in Total function, which sums the elements of a list, we can write

A_m ∨ B_k ≡ S[(span_(m−j)[A] ∧ B_k) ∨ cospan_(m−j)[A]]    (3.34)

To make these ideas more concrete, we now apply the formula to some simple examples before proceeding to a proof in the next section.
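The span/cospan relationship at the heart of the theorem can itself be illustrated numerically. In the standalone sketch below (helper names are ours, not GrassmannAlgebra's), span elements of a simple 3-element in 4-space are indexed by subsets of its factors, and each cospan element carries the permutation sign that makes the product reassemble A. The checks confirm that each span element wedged with its corresponding cospan element returns A, while non-corresponding pairs give zero, since they share a factor.

```python
from itertools import combinations

def vec(cs):
    # a 1-element as a dict mapping sorted basis index-tuples to coefficients
    return {(i,): c for i, c in enumerate(cs, 1) if c}

def wedge(a, b):
    out = {}
    for s, x in a.items():
        for t, y in b.items():
            if set(s) & set(t):
                continue  # a repeated basis factor gives zero
            m = s + t
            inv = sum(1 for i in range(len(m))
                        for j in range(i + 1, len(m)) if m[i] > m[j])
            k = tuple(sorted(m))
            out[k] = out.get(k, 0) + (-1) ** inv * x * y
    return {k: v for k, v in out.items() if v}

def wedge_all(factors):
    out = {(): 1}
    for f in factors:
        out = wedge(out, f)
    return out

def spans(factors, s):
    # s-span and s-cospan of the simple element with the given 1-element factors
    m, span, cospan = len(factors), [], []
    for pos in combinations(range(m), s):
        rest = [i for i in range(m) if i not in pos]
        seq = list(pos) + rest
        inv = sum(1 for i in range(len(seq))
                    for j in range(i + 1, len(seq)) if seq[i] > seq[j])
        span.append(wedge_all([factors[i] for i in pos]))
        cospan.append({k: (-1) ** inv * v
                       for k, v in wedge_all([factors[i] for i in rest]).items()})
    return span, cospan

factors = [vec([1, 0, 2, 0]), vec([0, 1, 1, 0]), vec([1, 1, 0, 1])]
A = wedge_all(factors)
span, cospan = spans(factors, 1)

diag_ok = all(wedge(span[i], cospan[i]) == A for i in range(3))
offdiag_ok = all(wedge(span[i], cospan[j]) == {}
                 for i in range(3) for j in range(3) if i != j)
```

For s = 1 and three factors f1, f2, f3, the cospan produced is {f2∧f3, −(f1∧f3), f1∧f2}, matching the alternating-sign pattern seen in the examples that follow.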

Generalizations
It is evident that if both elements are simple, expansion in factors of the element of lower grade will be the more efficient.

When neither α_m nor β_k is simple, the Common Factor Theorem can still be applied to the simple component terms of the product.

Multiple regressive products may be treated by successive applications of the formulae above.

Example 1
Suppose we are working in a 5-space, and A and B are expressed in such a way as to display their common factor C. We declare the extra vector symbols with the alias V.

!5 ; A = α1∧γ1∧γ2 ; B = γ1∧γ2∧β1∧β2 ; V[α_, β_, γ_];

Thus n is 5, m (the grade of A) is 3, k (the grade of B) is 4, and j (the grade of C) is 2. For reference, the 1-span and 1-cospan of A are

{span_1[A], cospan_1[A]}
{{α1, γ1, γ2}, {γ1∧γ2, −(α1∧γ2), α1∧γ1}}

Now if we simply evaluate the formula, we notice that two of the three terms are zero, due to repeated factors in the exterior product.

F = S[(span_1[A] ∧ B) ∨ cospan_1[A]]
(α1∧γ1∧γ2∧β1∧β2)∨(γ1∧γ2) + (γ1∧γ1∧γ2∧β1∧β2)∨(−(α1∧γ2)) + (γ2∧γ1∧γ2∧β1∧β2)∨(α1∧γ1)

Simplifying then gives us the result.

%[F]
(γ1∧γ2)∨(α1∧β1∧β2∧γ1∧γ2)

Because the second factor in the regressive product is an n-element, we can immediately write this as congruent to the common factor:

(γ1∧γ2) ∨ (α1∧β1∧β2∧γ1∧γ2) ≡ γ1∧γ2

Example 2
We now take the same example as we did earlier when we multiplied out the regressive product term by term. X and Y are both 2-elements in a 3-space, and we wish to compute a 1-element Z common to them (remember that Z can only be computed up to congruence).

!3 ; {X, Y} = {%x , %y}
{x1 e1∧e2 + x2 e1∧e3 + x3 e2∧e3, y1 e1∧e2 + y2 e1∧e3 + y3 e2∧e3}

To apply the Common Factor Theorem in the form above, we need X in factored form.

Xf = ExteriorFactorize[X]
x1 (e1 − (x3/x1) e3) ∧ (e2 + (x2/x1) e3)

To compute the span of an element, the element needs to be a simple product of 1-elements, not a scalar multiple of such a product. The GrassmannAlgebra span function ignores any such scalar factors. In applications where only a result up to congruence is required, this is entirely satisfactory. Otherwise, if we wish to include the scalar factor, we will need to multiply it into one of the other factors, say the first.

Xf = Xf /. a_ (b_∧c__) → (a b)∧c
(x1 e1 − x3 e3) ∧ (e2 + (x2/x1) e3)

The span and cospan of Xf are both of grade 1. We display them for reference.

{span_1[Xf], cospan_1[Xf]}
{{x1 e1 − x3 e3, e2 + (x2/x1) e3}, {e2 + (x2/x1) e3, −(x1 e1 − x3 e3)}}

Application of the formula gives the unsimplified sum.

F = S[(span_1[Xf] ∧ Y) ∨ cospan_1[Xf]]
((x1 e1 − x3 e3) ∧ (y1 e1∧e2 + y2 e1∧e3 + y3 e2∧e3)) ∨ (e2 + (x2/x1) e3) + ((e2 + (x2/x1) e3) ∧ (y1 e1∧e2 + y2 e1∧e3 + y3 e2∧e3)) ∨ (−(x1 e1 − x3 e3))

Expansion and simplification gives

F = %[F]
(−x2 y1 + x1 y2) e1∨(e1∧e2∧e3) + (−x3 y1 + x1 y3) e2∨(e1∧e2∧e3) + (−x3 y2 + x2 y3) e3∨(e1∧e2∧e3)

The n-element in this case is e1∧e2∧e3. Factoring it out gives

F = ((−x2 y1 + x1 y2) e1 + (−x3 y1 + x1 y3) e2 + (−x3 y2 + x2 y3) e3) ∨ (e1∧e2∧e3)

Writing this in congruence form gives us the common factor required, and corroborates the result originally obtained by applying the Common Factor Axiom to each term of the full expansion.

Z ≡ c ((−x2 y1 + x1 y2) e1 + (−x3 y1 + x1 y3) e2 + (−x3 y2 + x2 y3) e3)

Proof of the Common Factor Theorem


Consider a regressive product A_m ∨ B_k, where A is given as a simple product of 1-element factors and m + k = n + j, j > 0. Then A_m ∨ B_k is either zero or, by the Common Factor Axiom, has a common factor C_j. We assume it is not zero. We then express A and B in terms of C as we have done previously.

A_m ≡ A*_(m−j) ∧ C_j
B_k ≡ C_j ∧ B*_(k−j)

Let the (m−j)-span and (m−j)-cospan of A be written as

span_(m−j)[A] = {A1_(m−j), A2_(m−j), …, Aν_(m−j)}
cospan_(m−j)[A] = {Ā1_j, Ā2_j, …, Āν_j}

Then A can be written in any of the forms

A ≡ A1_(m−j)∧Ā1_j ≡ A2_(m−j)∧Ā2_j ≡ … ≡ Aν_(m−j)∧Āν_j

Here ν is equal to (m choose j), or equivalently (m choose m−j).

Since the common factor C_j is a j-element belonging to A, it can be expressed as a linear combination of the cospan elements of A.

C_j ≡ a1 Ā1_j + a2 Ā2_j + ⋯ + aν Āν_j

The exterior product of the ith (m−j)-span element Ai_(m−j) with C can be expanded to give

Ai_(m−j) ∧ C_j ≡ Ai_(m−j) ∧ (a1 Ā1_j + a2 Ā2_j + ⋯ + aν Āν_j)
≡ a1 Ai_(m−j)∧Ā1_j + a2 Ai_(m−j)∧Ā2_j + ⋯ + ai Ai_(m−j)∧Āi_j + ⋯ + aν Ai_(m−j)∧Āν_j

And indeed, because by definition the exterior product of an (m−j)-span element (say, A2_(m−j)) and any of the non-corresponding cospan elements (say, Ā3_j) is zero, we can write for any i

Ai_(m−j) ∧ C_j ≡ ai Ai_(m−j)∧Āi_j ≡ ai A_m

Now return to the Common Factor Axiom in the form

Z ≡ A_m ∨ B_k ≡ (A*_(m−j)∧C_j) ∨ (C_j∧B*_(k−j)) ≡ (A*_(m−j)∧C_j∧B*_(k−j)) ∨ C_j ≡ (A_m∧B*_(k−j)) ∨ C_j

Substituting for the common factor on the right-hand side gives

Z ≡ (A_m∧B*_(k−j)) ∨ C_j ≡ (A_m∧B*_(k−j)) ∨ (a1 Ā1_j + a2 Ā2_j + ⋯ + aν Āν_j)

By the distributivity of the exterior and regressive products, and their behaviour with scalars, the scalar factors can be transferred and attached to A_m.

Z ≡ ((a1 A_m)∧B*_(k−j)) ∨ Ā1_j + ((a2 A_m)∧B*_(k−j)) ∨ Ā2_j + ⋯ + ((aν A_m)∧B*_(k−j)) ∨ Āν_j

But we have shown above that

ai A_m ≡ Ai_(m−j)∧C_j

Hence substituting gives

Z ≡ (A1_(m−j)∧C_j∧B*_(k−j)) ∨ Ā1_j + (A2_(m−j)∧C_j∧B*_(k−j)) ∨ Ā2_j + ⋯ + (Aν_(m−j)∧C_j∧B*_(k−j)) ∨ Āν_j

Finally, a further substitution of B_k for C_j∧B*_(k−j) gives the final result.

A_m ∨ B_k ≡ (A1_(m−j)∧B_k) ∨ Ā1_j + (A2_(m−j)∧B_k) ∨ Ā2_j + ⋯ + (Aν_(m−j)∧B_k) ∨ Āν_j ≡ ∑_{i=1}^{ν} (Ai_(m−j)∧B_k) ∨ Āi_j    (3.35)

The A and B forms of the Common Factor Theorem


We have just proved what we will now call the A form of the Common Factor Theorem (so called because it works by spanning the first factor in the regressive product). An analogous formula may be obtained, mutatis mutandis, by factoring B_k rather than A_m; hence B must now be simple. This formula we will call the B form of the Common Factor Theorem.

In the case where both A and B are simple but not of the same grade, the form which decomposes the element of lower grade will generate the fewer terms and be more computationally efficient. Multiple regressive products may be treated by successive applications of the theorem in either of its forms.


The A form of the Common Factor Theorem


{A_m ∨ B_k ≡ ∑_{i=1}^{ν} (Ai_(m−j)∧B_k) ∨ Āi_j,  m − j + k − n == 0,  ν == (m choose j)}    (3.36)

where A ≡ A1_(m−j)∧Ā1_j ≡ A2_(m−j)∧Ā2_j ≡ … ≡ Aν_(m−j)∧Āν_j.

As we have already seen, this is equivalent to the computationally oriented form

A_m ∨ B_k ≡ ∑_{i=1}^{ν} (span_(m−j)[A]⟦i⟧ ∧ B_k) ∨ cospan_(m−j)[A]⟦i⟧

span_(m−j)[A] = {A1_(m−j), A2_(m−j), …, Aν_(m−j)}
cospan_(m−j)[A] = {Ā1_j, Ā2_j, …, Āν_j}

Or, using Mathematica's built-in Listability,

A_m ∨ B_k ≡ S[(span_(m−j)[A] ∧ B_k) ∨ cospan_(m−j)[A]]

The B form of the Common Factor Theorem

{A_m ∨ B_k ≡ ∑_{i=1}^{ν} (A_m∧B̄i_(k−j)) ∨ Bi_j,  m − j + k − n == 0,  ν == (k choose j)}    (3.37)

where B ≡ B1_j∧B̄1_(k−j) ≡ B2_j∧B̄2_(k−j) ≡ … ≡ Bν_j∧B̄ν_(k−j).

The computationally oriented form is

A_m ∨ B_k ≡ ∑_{i=1}^{ν} (A_m ∧ cospan_j[B]⟦i⟧) ∨ span_j[B]⟦i⟧

span_j[B] = {B1_j, B2_j, …, Bν_j}
cospan_j[B] = {B̄1_(k−j), B̄2_(k−j), …, B̄ν_(k−j)}

Or, using Mathematica's built-in Listability,

A_m ∨ B_k ≡ S[(A_m ∧ cospan_j[B]) ∨ span_j[B]]


Example: The decomposition of a 1-element


The special case of the regressive product of a 1-element β with an n-element α_n enables us to decompose β directly in terms of the factors of α_n. The Common Factor Theorem gives:

α_n ∨ β ≡ ∑_{i=1}^{n} (Ai_(n−1)∧β) ∨ ai

where:

α_n ≡ a1∧a2∧⋯∧an ≡ (−1)^(n−i) (a1∧a2∧⋯∧âi∧⋯∧an)∧ai

The symbol âi means that the ith factor is missing from the product. Substituting in the Common Factor Theorem gives:

α_n ∨ β ≡ ∑_{i=1}^{n} (−1)^(n−i) ((a1∧a2∧⋯∧âi∧⋯∧an)∧β) ∨ ai ≡ ∑_{i=1}^{n} (a1∧a2∧⋯∧a(i−1)∧β∧a(i+1)∧⋯∧an) ∨ ai

Hence the decomposition formula becomes:

(a1∧a2∧⋯∧an) ∨ β ≡ ∑_{i=1}^{n} (a1∧a2∧⋯∧a(i−1)∧β∧a(i+1)∧⋯∧an) ∨ ai    (3.38)

Writing this out in full shows that we can expand the expression simply by interchanging β successively with each of the factors of a1∧a2∧⋯∧an, and summing the results.

(a1∧a2∧⋯∧an) ∨ β ≡ (β∧a2∧⋯∧an) ∨ a1 + (a1∧β∧⋯∧an) ∨ a2 + ⋯ + (a1∧a2∧⋯∧β) ∨ an    (3.39)

We can make the result express the decomposition of β in terms of the ai more explicitly by 'dividing through' by α_n. (The quotient of two n-elements has been defined previously as the quotient of their scalar coefficients.)

β ≡ ∑_{i=1}^{n} ((a1∧a2∧⋯∧a(i−1)∧β∧a(i+1)∧⋯∧an) / (a1∧a2∧⋯∧an)) ai    (3.40)

This expression is, of course, equivalent to that which would have been obtained from Grassmann's approach to solving linear equations (see Chapter 2: The Exterior Product).
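Formula 3.40 is Cramer's rule in disguise: the coefficient of ai is the quotient of two n-element coefficients, i.e. of two determinants, the numerator having β in place of the ith factor. A minimal standalone sketch (helper names ours), using exact rational arithmetic:

```python
from fractions import Fraction

def det(rows):
    # Laplace expansion along the first row; rows is a list of equal-length tuples
    if len(rows) == 1:
        return rows[0][0]
    return sum((-1) ** c * rows[0][c] * det([r[:c] + r[c + 1:] for r in rows[1:]])
               for c in range(len(rows)))

a = [(1, 0, 1), (0, 1, 1), (1, 1, 0)]    # the factors a1, a2, a3 of the n-element
b = (2, 3, 4)                            # the 1-element beta to decompose

d = det(a)
# 3.40: coefficient of ai = (a1 ^ ... ^ beta in place of ai ^ ... ^ an) / (a1 ^ ... ^ an)
coeffs = [Fraction(det(a[:i] + [b] + a[i + 1:]), d) for i in range(3)]
recomposed = tuple(sum(c * ai[k] for c, ai in zip(coeffs, a)) for k in range(3))
```

`recomposed` rebuilds β as the linear combination Σ ci ai, which must reproduce the original 1-element exactly.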

Example: Applying the Common Factor Theorem


In this section we show how the Common Factor Theorem generally leads to a more efficient computation of results than repeated application of the Common Factor Axiom, particularly when done manually, since there are fewer terms in the Common Factor Theorem expansion, and many are evidently zero by inspection.

Again we take the problem of finding the 1-element common to two 2-elements in a 3-space. This time, however, we take a numerical example, and suppose that we know the factors of at least one of the 2-elements. Let:

x1 ≡ 3 e1∧e2 + 2 e1∧e3 + 3 e2∧e3
x2 ≡ (5 e2 + 7 e3) ∧ e1

We keep x1 fixed initially and apply the Common Factor Theorem to rewrite the regressive product as a sum of two products, each due to one of the essentially different rearrangements of x2:

x1 ∨ x2 ≡ (x1 ∧ e1) ∨ (5 e2 + 7 e3) − (x1 ∧ (5 e2 + 7 e3)) ∨ e1

The next step is to expand out the exterior products with x1. Often this step can be done by inspection, since all products with a repeated basis element factor will be zero.

x1 ∨ x2 ≡ (3 e2∧e3∧e1) ∨ (5 e2 + 7 e3) − (10 e1∧e3∧e2 + 21 e1∧e2∧e3) ∨ e1

Factorizing out the 3-element e1∧e2∧e3 then gives:

x1 ∨ x2 ≡ (e1∧e2∧e3) ∨ (−11 e1 + 15 e2 + 21 e3) ≡ c (−11 e1 + 15 e2 + 21 e3) ≡ −11 e1 + 15 e2 + 21 e3

Thus all factors common to both x1 and x2 are congruent to −11 e1 + 15 e2 + 21 e3.

Check by calculating the exterior products

We can check that this result is correct by taking its exterior product with each of x1 and x2. We should get zero in both cases, indicating that the factor determined is indeed common to both original 2-elements. Here we use GrassmannExpandAndSimplify in its alias form %.

!3 ; %[(3 e1∧e2 + 2 e1∧e3 + 3 e2∧e3) ∧ (−11 e1 + 15 e2 + 21 e3)]
0
%[(5 e2 + 7 e3) ∧ e1 ∧ (−11 e1 + 15 e2 + 21 e3)]
0
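The same numbers can be pushed through formula 3.30 as an independent check, in a standalone sketch with our own helper names. Note that x2 = (5 e2 + 7 e3)∧e1 = −5 e1∧e2 − 7 e1∧e3, so its coefficient triple on the basis (e1∧e2, e1∧e3, e2∧e3) is (−5, −7, 0):

```python
def intersect_bivectors(x, y):
    # formula 3.30 on coefficient triples over (e1^e2, e1^e3, e2^e3)
    x1, x2, x3 = x
    y1, y2, y3 = y
    return (x1 * y2 - x2 * y1, x1 * y3 - x3 * y1, x2 * y3 - x3 * y2)

def triple_coeff(z, x):
    # e1^e2^e3 coefficient of z ^ X, used to test that z lies in X
    return z[0] * x[2] - z[1] * x[1] + z[2] * x[0]

X1 = (3, 2, 3)      # 3 e1^e2 + 2 e1^e3 + 3 e2^e3
X2 = (-5, -7, 0)    # (5 e2 + 7 e3)^e1 = -5 e1^e2 - 7 e1^e3
Z = intersect_bivectors(X1, X2)
```

This reproduces Z = (−11, 15, 21), the coefficients of −11 e1 + 15 e2 + 21 e3, and both exterior products vanish as required.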

Automating the application of the Common Factor Theorem




The Common Factor Theorem can be computed in GrassmannAlgebra by using either the A or the B form in its computable form with the appropriate span and cospan. However, GrassmannAlgebra also has a more direct function, ToCommonFactor, which will attempt to handle multiple regressive products in an optimal way.

Using ToCommonFactor
ToCommonFactor reduces any regressive products in a Grassmann expression to their common factor form. For example, in the case explored above we have:

x1 = 3 e1∧e2 + 2 e1∧e3 + 3 e2∧e3 ; x2 = (5 e2 + 7 e3) ∧ e1 ;
A; ToCommonFactor[x1 ∨ x2]
c (−11 e1 + 15 e2 + 21 e3)

The scalar c is called the congruence factor. The congruence factor has already been defined in Section 3.3 above as the connection between the current basis n-element and the unit n-element 1_n (in GrassmannAlgebra input we use n instead of n, in case n is being used elsewhere in your computations).

e1∧e2∧⋯∧en ≡ c 1_n

Since this connection cannot be defined unless a metric is introduced, the congruence factor remains arbitrary in non-metric spaces. Although such arbitrariness may appear at first sight to be disadvantageous, it is, on the contrary, highly elucidating. In application to the computing of unions and intersections of spaces, perhaps under a geometric interpretation where they represent lines, planes and hyperplanes, the notion of congruence becomes central, since spaces can only be determined up to congruence.

In the example above, x1 and x2 were expressed in terms of basis elements. ToCommonFactor can also apply the Common Factor Theorem to more general elements. For example, if x1 and x2 were expressed as symbolic bivectors, the common factor expansion could still be performed.

x1 = x∧y ; x2 = u∧v ;
ToCommonFactor[x1 ∨ x2]
c (y ⟨u∧v∧x⟩ − x ⟨u∧v∧y⟩)

The scalar ⟨u∧v∧x⟩ denotes the coefficient of u∧v∧x when u∧v∧x is expressed in terms of the current basis n-element (in this case a 3-element). As an example, we use the GrassmannAlgebra function ComposeBasisForm to compose an expression for u∧v∧x in terms of basis elements, and then expand and simplify the result using the alias % for GrassmannExpandAndSimplify.

X = ComposeBasisForm[u∧v∧x]
(e1 u1 + e2 u2 + e3 u3) ∧ (e1 v1 + e2 v2 + e3 v3) ∧ (e1 x1 + e2 x2 + e3 x3)

%[X]
(−u3 v2 x1 + u2 v3 x1 + u3 v1 x2 − u1 v3 x2 − u2 v1 x3 + u1 v2 x3) e1∧e2∧e3

Hence in this case ⟨u∧v∧x⟩ is given by

⟨u∧v∧x⟩ ≡ −u3 v2 x1 + u2 v3 x1 + u3 v1 x2 − u1 v3 x2 − u2 v1 x3 + u1 v2 x3
Using ToCommonFactorA and ToCommonFactorB


If you wish to explicitly apply the A form of the Common Factor Theorem, you can use ToCommonFactorA; for the B form, ToCommonFactorB. The results will be equal, but they may look different. We illustrate by taking the example in the section above and expanding it both ways. Again define:

x1 = x∧y ; x2 = u∧v ;
ToCommonFactorA[x1 ∨ x2]
c (y ⟨u∧v∧x⟩ − x ⟨u∧v∧y⟩)
ToCommonFactorB[x1 ∨ x2]
c (−v ⟨u∧x∧y⟩ + u ⟨v∧x∧y⟩)

To see most directly that these are equal, we express the vectors in terms of basis elements.

Z = ComposeBasisForm[x1 ∨ x2]
(e1 x1 + e2 x2 + e3 x3) ∧ (e1 y1 + e2 y2 + e3 y3) ∨ (e1 u1 + e2 u2 + e3 u3) ∧ (e1 v1 + e2 v2 + e3 v3)

Applying the A form of the Common Factor Theorem gives

ToCommonFactorA[Z]
c (e1 (u3 v1 x2 y1 − u1 v3 x2 y1 − u2 v1 x3 y1 + u1 v2 x3 y1 − u3 v1 x1 y2 + u1 v3 x1 y2 + u2 v1 x1 y3 − u1 v2 x1 y3) + e2 (u3 v2 x2 y1 − u2 v3 x2 y1 − u3 v2 x1 y2 + u2 v3 x1 y2 − u2 v1 x3 y2 + u1 v2 x3 y2 + u2 v1 x2 y3 − u1 v2 x2 y3) + e3 (u3 v2 x3 y1 − u2 v3 x3 y1 − u3 v1 x3 y2 + u1 v3 x3 y2 − u3 v2 x1 y3 + u2 v3 x1 y3 + u3 v1 x2 y3 − u1 v3 x2 y3))

The resulting expression is identical to that calculated by the B form.

ToCommonFactorB[Z] == ToCommonFactorA[Z]
True

Expressions involving non-decomposable elements


The GrassmannAlgebra function ToCommonFactor uses the A or B form heuristically. When a common factor can be calculated by using either form, the A form is used by default. If an expansion is not possible using the A form, ToCommonFactor tries the B form.

For example, consider the regressive product of two symbolic 2-elements in a 3-space. Attempting to compute the common factor fails, since neither factor can be decomposed.

A; ToCommonFactor[α_2 ∨ β_2]
α_2 ∨ β_2

However, if either one of the factors is decomposable, then a formula for the common factor can be generated. In the case below, the results are the same; remember that c is an arbitrary scalar factor.

{ToCommonFactor[α_2 ∨ (x∧y)], ToCommonFactor[(x∧y) ∨ α_2]}
{c (−y ⟨x∧α_2⟩ + x ⟨y∧α_2⟩), c (y ⟨x∧α_2⟩ − x ⟨y∧α_2⟩)}
Example: The regressive product of three 3-elements in a 4-space

ToCommonFactor can find the common factor of any number of elements. As an example, we take the regressive product of three 3-elements in a 4-space. This is a 1-element, since 3 + 3 + 3 - 2×4 = 1. Remember also that a 3-element in a 4-space is necessarily simple (see Chapter 2: The Exterior Product).

To demonstrate this we first declare a 4-space, and then use the GrassmannAlgebra function ComposeBasisForm to compose the required product in basis form.

!4; X = ComposeBasisForm[α_3 ⋁ β_3 ⋁ γ_3]

(α3,1 e1∧e2∧e3 + α3,2 e1∧e2∧e4 + α3,3 e1∧e3∧e4 + α3,4 e2∧e3∧e4) ⋁
 (β3,1 e1∧e2∧e3 + β3,2 e1∧e2∧e4 + β3,3 e1∧e3∧e4 + β3,4 e2∧e3∧e4) ⋁
 (γ3,1 e1∧e2∧e3 + γ3,2 e1∧e2∧e4 + γ3,3 e1∧e3∧e4 + γ3,4 e2∧e3∧e4)

The common factor is then determined up to congruence by ToCommonFactor.


Xc = ToCommonFactor[X]

c² (e1 (-α3,3 β3,2 γ3,1 + α3,2 β3,3 γ3,1 + α3,3 β3,1 γ3,2 - α3,1 β3,3 γ3,2 - α3,2 β3,1 γ3,3 + α3,1 β3,2 γ3,3) +
    e2 (-α3,4 β3,2 γ3,1 + α3,2 β3,4 γ3,1 + α3,4 β3,1 γ3,2 - α3,1 β3,4 γ3,2 - α3,2 β3,1 γ3,4 + α3,1 β3,2 γ3,4) +
    e3 (-α3,4 β3,3 γ3,1 + α3,3 β3,4 γ3,1 + α3,4 β3,1 γ3,3 - α3,1 β3,4 γ3,3 - α3,3 β3,1 γ3,4 + α3,1 β3,3 γ3,4) +
    e4 (-α3,4 β3,3 γ3,2 + α3,3 β3,4 γ3,2 + α3,4 β3,2 γ3,3 - α3,2 β3,4 γ3,3 - α3,3 β3,2 γ3,4 + α3,2 β3,3 γ3,4))

Note that the congruence factor c is to the second power. This is because the original expression contained the regressive product operator twice: one c effectively stands in for each basis 4-element that is produced during the calculation (3 + 3 + 3 = 4 + 4 + 1). It is easy to check that this result is indeed a common factor by taking its exterior product with each of the 3-elements. For example:
%[Xc ∧ (α3,1 e1∧e2∧e3 + α3,2 e1∧e2∧e4 + α3,3 e1∧e3∧e4 + α3,4 e2∧e3∧e4)] // Expand

0
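We can mirror this check numerically outside GrassmannAlgebra. In a 4-space, a 1-element x is a factor of a 3-element A = a1 e1∧e2∧e3 + a2 e1∧e2∧e4 + a3 e1∧e3∧e4 + a4 e2∧e3∧e4 exactly when x∧A = 0, that is, when the scalar x1 a4 - x2 a3 + x3 a2 - x4 a1 vanishes. The sketch below (plain Python; the helper names pair, dual and det3 are ad hoc, not package functions) computes the common 1-element of three random 3-elements as a null-space vector of their three 'dual' covectors and confirms it is a factor of all three.

```python
import random

def pair(x, a):
    # scalar s in x ^ A = s e1^e2^e3^e4, where A has coefficients
    # a = (a1, a2, a3, a4) on e1^e2^e3, e1^e2^e4, e1^e3^e4, e2^e3^e4
    return x[0]*a[3] - x[1]*a[2] + x[2]*a[1] - x[3]*a[0]

def det3(rows):
    (a, b, c), (d, e, f), (g, h, i) = rows
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

random.seed(11)
A, B, G = [[random.randint(-9, 9) for _ in range(4)] for _ in range(3)]

# 'dual' covector of a 3-element: x ^ A = <dual(A), x> e1^e2^e3^e4
dual = lambda a: [a[3], -a[2], a[1], -a[0]]
M = [dual(A), dual(B), dual(G)]

# common factor Xc: a null-space vector of M, built from signed 3x3 minors
Xc = [(-1)**k * det3([[row[j] for j in range(4) if j != k] for row in M])
      for k in range(4)]

print([pair(Xc, t) for t in (A, B, G)])   # [0, 0, 0]: Xc is a common factor
```

Here Xc plays the role of ToCommonFactor's result, up to the congruence factor c².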

3.7 The Regressive Product of Simple Elements


The regressive product of simple elements
The regressive product of simple elements is simple. To show this, consider the (non-zero) regressive product α_m ⋁ β_k, where α_m and β_k are simple and m + k ≥ n.

The 1-element factors of α_m and β_k must then have a common subspace of dimension m + k - n = j. Let γ_j be a simple j-element which spans this common subspace. We can then write:

α_m ⋁ β_k ≡ (α_(m-j) ∧ γ_j) ⋁ (β_(k-j) ∧ γ_j) ≡ (α_(m-j) ∧ β_(k-j) ∧ γ_j) ⋁ γ_j

The Common Factor Axiom then shows us that since γ_j is simple, then so is the original product of simple elements α_m ⋁ β_k.


The regressive product of (n-1)-elements

Since we have shown in Chapter 2: The Exterior Product that all (n-1)-elements are simple, and in the previous section that the regressive product of simple elements is simple, it follows immediately that the regressive product of any number of (n-1)-elements is simple.

Example: The regressive product of two 3-elements in a 4-space is simple

As an example of the foregoing result we calculate the regressive product of two 3-elements in a 4-space. We begin by declaring a 4-space and creating two general 3-elements.

!4; Z = ComposeBasisForm[x_3 ⋁ y_3]

(x3,1 e1∧e2∧e3 + x3,2 e1∧e2∧e4 + x3,3 e1∧e3∧e4 + x3,4 e2∧e3∧e4) ⋁
 (y3,1 e1∧e2∧e3 + y3,2 e1∧e2∧e4 + y3,3 e1∧e3∧e4 + y3,4 e2∧e3∧e4)

The common 2-element is:

Zc = ToCommonFactor[Z]

c ((-x3,2 y3,1 + x3,1 y3,2) e1∧e2 + (-x3,3 y3,1 + x3,1 y3,3) e1∧e3 +
   (-x3,3 y3,2 + x3,2 y3,3) e1∧e4 + (-x3,4 y3,1 + x3,1 y3,4) e2∧e3 +
   (-x3,4 y3,2 + x3,2 y3,4) e2∧e4 + (-x3,4 y3,3 + x3,3 y3,4) e3∧e4)

We can show that this 2-element is simple by confirming that its exterior product with itself is zero. (This technique was discussed in Chapter 2: The Exterior Product.)

%[Zc ∧ Zc] // Expand

0
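Independently of the GrassmannAlgebra package, this simplicity check can be replayed numerically. For a 2-element Z = Σ pij ei∧ej in a 4-space, Z∧Z = 2 (p12 p34 - p13 p24 + p14 p23) e1∧e2∧e3∧e4, so simplicity is equivalent to the Plücker relation p12 p34 - p13 p24 + p14 p23 = 0. A minimal Python sketch, with random integers standing in for the symbolic coefficients x3,i and y3,i:

```python
import random

# Coefficients of two 3-elements X, Y in a 4-space, as in the basis form above:
# X = x1 e1^e2^e3 + x2 e1^e2^e4 + x3 e1^e3^e4 + x4 e2^e3^e4, similarly Y.
random.seed(1)
x = [random.randint(-9, 9) for _ in range(4)]
y = [random.randint(-9, 9) for _ in range(4)]

# Coefficients of Zc = ToCommonFactor[X v Y] (taking the congruence factor c = 1),
# read off from the output above.
p12 = -x[1]*y[0] + x[0]*y[1]   # coefficient of e1^e2
p13 = -x[2]*y[0] + x[0]*y[2]   # e1^e3
p14 = -x[2]*y[1] + x[1]*y[2]   # e1^e4
p23 = -x[3]*y[0] + x[0]*y[3]   # e2^e3
p24 = -x[3]*y[1] + x[1]*y[3]   # e2^e4
p34 = -x[3]*y[2] + x[2]*y[3]   # e3^e4

# Zc ^ Zc = 2 (p12 p34 - p13 p24 + p14 p23) e1^e2^e3^e4
print(p12*p34 - p13*p24 + p14*p23)   # 0: Zc is simple
```

The relation vanishes identically here because the pij are 2×2 minors of the 2×4 array of x's and y's.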

Regressive products leading to scalar results

A formula particularly useful in its interior product form to be derived later in Chapter 6 is obtained by application of the Common Factor Theorem to the product

(a1 ∧ ⋯ ∧ am) ⋁ β1 ⋁ ⋯ ⋁ βm

where the ai are 1-elements and the βi are (n-1)-elements.


Expanding by the Common Factor Theorem with respect to βj (the sign (-1)^(j-1) arising from moving βj to the front of the regressive product, since interchanging two (n-1)-elements in a regressive product changes the sign):

(a1 ∧ ⋯ ∧ am) ⋁ β1 ⋁ ⋯ ⋁ βm
  ≡ (-1)^(j-1) ((a1 ∧ ⋯ ∧ am) ⋁ βj) ⋁ β1 ⋁ ⋯ β̂j ⋯ ⋁ βm
  ≡ ∑_i (-1)^(i-1) (ai ⋁ βj) (-1)^(j-1) (a1 ∧ ⋯ âi ⋯ ∧ am) ⋁ β1 ⋁ ⋯ β̂j ⋯ ⋁ βm
  ≡ ∑_i (-1)^(i+j) (ai ⋁ βj) (a1 ∧ ⋯ âi ⋯ ∧ am) ⋁ β1 ⋁ ⋯ β̂j ⋯ ⋁ βm

(a circumflex denotes an omitted factor; each ai ⋁ βj is a scalar, since 1 + (n-1) - n = 0). By repeating this process one obtains finally that:

(a1 ∧ ⋯ ∧ am) ⋁ β1 ⋁ ⋯ ⋁ βm ≡ Det[ai ⋁ βj]        3.41

where Det[ai ⋁ βj] is the determinant of the matrix whose elements are the scalars ai ⋁ βj.

For the case m = 2 we have:

(a1 ∧ a2) ⋁ β1 ⋁ β2 ≡ (a1 ⋁ β1)(a2 ⋁ β2) - (a1 ⋁ β2)(a2 ⋁ β1)

This determinant formula is of central importance in the computation of inner products to be discussed in Chapter 6.

Expressing an element in terms of another basis


The Common Factor Theorem may also be used to express an m-element α_m in terms of another basis, with basis n-element β_n, by expanding the product β_n ⋁ α_m.

Let the new basis be {b1, …, bn} and let β_n ≡ b1∧b2∧ ⋯ ∧bn. The Common Factor Theorem then permits us to write:

β_n ⋁ α_m ≡ ∑_(i=1)^ν (βi_(n-m) ⋁ α_m) βi_m

where ν = Binomial[n, m] and

β_n ≡ β1_(n-m) ∧ β1_m ≡ β2_(n-m) ∧ β2_m ≡ ⋯ ≡ βν_(n-m) ∧ βν_m

runs over the ν essentially different ways of splitting β_n into the exterior product of an (n-m)-element βi_(n-m) with an m-element βi_m.

We can visualize how the formula operates by writing α_m and β_n as simple products and then exchanging the bi with the ai in all the essentially different ways possible whilst always retaining the original ordering. To make this more concrete, suppose n is 5 and m is 2:

(b1∧b2∧b3∧b4∧b5) ⋁ (a1∧a2) ≡
  (a1∧a2∧b3∧b4∧b5) ⋁ (b1∧b2) + (a1∧b2∧a2∧b4∧b5) ⋁ (b1∧b3) +
  (a1∧b2∧b3∧a2∧b5) ⋁ (b1∧b4) + (a1∧b2∧b3∧b4∧a2) ⋁ (b1∧b5) +
  (b1∧a1∧a2∧b4∧b5) ⋁ (b2∧b3) + (b1∧a1∧b3∧a2∧b5) ⋁ (b2∧b4) +
  (b1∧a1∧b3∧b4∧a2) ⋁ (b2∧b5) + (b1∧b2∧a1∧a2∧b5) ⋁ (b3∧b4) +
  (b1∧b2∧a1∧b4∧a2) ⋁ (b3∧b5) + (b1∧b2∧b3∧a1∧a2) ⋁ (b4∧b5)

Now let the new n-elements on the right hand side be written as scalar factors bi times the new basis n-element β_n:

βi_(n-m) ∧ α_m ≡ bi β_n

Substituting gives

β_n ⋁ α_m ≡ ∑_(i=1)^ν bi β_n ⋁ βi_m ≡ β_n ⋁ ∑_(i=1)^ν bi βi_m

Since β_n is an n-element, we can 'divide' through by it to give finally

α_m ≡ ∑_(i=1)^ν bi βi_m,   bi ≡ (βi_(n-m) ⋁ α_m) / β_n        3.42

Example: Expressing a 3-element in terms of another basis

Consider a 4-space with basis {e1, e2, e3, e4} and let α_3 be a 3-element expressed in terms of this basis.

!4; α_3 = 2 e1∧e2∧e3 - 3 e1∧e3∧e4 - 2 e2∧e3∧e4;

Suppose now that we have another basis related to the e basis by:

b1 = 2 e1 + 3 e3;
b2 = 5 e3 - e4;
b3 = e1 - e3;
b4 = e1 + e2 + e3 + e4;

We wish to express α_3 in terms of the new basis. Instead of inverting the transformation to find the ei in terms of the bi, substituting for the ei in α_3, and simplifying, we can almost write the result required by inspection.

(b1∧b2∧b3∧b4) ⋁ α_3 ≡ (b1∧α_3) ⋁ (b2∧b3∧b4) - (b2∧α_3) ⋁ (b1∧b3∧b4) +
    (b3∧α_3) ⋁ (b1∧b2∧b4) - (b4∧α_3) ⋁ (b1∧b2∧b3)

  ≡ (-4 e1∧e2∧e3∧e4) ⋁ (b2∧b3∧b4) - (2 e1∧e2∧e3∧e4) ⋁ (b1∧b3∧b4) +
    (-2 e1∧e2∧e3∧e4) ⋁ (b1∧b2∧b4) - (-e1∧e2∧e3∧e4) ⋁ (b1∧b2∧b3)

  ≡ (e1∧e2∧e3∧e4) ⋁ ((-4 b2∧b3∧b4) + (-2 b1∧b3∧b4) + (-2 b1∧b2∧b4) + (b1∧b2∧b3))

But from the transformation we find that

%[b1∧b2∧b3∧b4]

-5 e1∧e2∧e3∧e4

so that the equation becomes

(b1∧b2∧b3∧b4) ⋁ α_3 ≡
  (b1∧b2∧b3∧b4) ⋁ (-1/5) ((-4 b2∧b3∧b4) + (-2 b1∧b3∧b4) + (-2 b1∧b2∧b4) + (b1∧b2∧b3))
We can simplify this by 'dividing' through by b1∧b2∧b3∧b4, to give finally

α_3 ≡ (4/5) b2∧b3∧b4 + (2/5) b1∧b3∧b4 + (2/5) b1∧b2∧b4 - (1/5) b1∧b2∧b3
We can easily check this by expanding and simplifying the right hand side to give both sides in terms of the original e basis.

α_3 == %[(4/5) b2∧b3∧b4 + (2/5) b1∧b3∧b4 + (2/5) b1∧b2∧b4 - (1/5) b1∧b2∧b3]

True


Expressing a 1-element in terms of another basis

For the special case of a 1-element, the scalar coefficients bi in formula 3.42 can be written more explicitly, and without undue complexity, resulting in a decomposition of a in terms of n independent elements bi, equivalent to expressing a in the basis bi.

a ≡ ∑_(i=1)^n ((b1∧b2∧ ⋯ ∧a∧ ⋯ ∧bn) / (b1∧b2∧ ⋯ ∧bi∧ ⋯ ∧bn)) bi        3.43

Here the denominators are the same in each term, but are expressed this way to show the positioning of the factor a in the numerator: a occupies the position of bi. This result has already been introduced in the section above, Example: The decomposition of a 1-element.

The symmetric expansion of a 1-element in terms of another basis

For the case m = 1, β_n ⋁ a may be expanded by the Common Factor Theorem to give:

(b1∧b2∧ ⋯ ∧bn) ⋁ a ≡ ∑_(i=1)^n (b1∧b2∧ ⋯ ∧b_(i-1)∧a∧b_(i+1)∧ ⋯ ∧bn) ⋁ bi

or, in terms of the mnemonic expansion in the previous section:

(b1∧b2∧b3∧ ⋯ ∧bn) ⋁ a ≡ (a∧b2∧b3∧ ⋯ ∧bn) ⋁ b1 + (b1∧a∧b3∧ ⋯ ∧bn) ⋁ b2 +
  (b1∧b2∧a∧ ⋯ ∧bn) ⋁ b3 + ⋯ + (b1∧b2∧b3∧ ⋯ ∧a) ⋁ bn

Putting a equal to b0, this may be written more symmetrically as:

∑_(i=0)^n (-1)^i (b0∧b1∧ ⋯ b̂i ⋯ ∧bn) bi ≡ 0        3.44

(a circumflex denoting an omitted factor). For example, suppose b0, b1, b2, b3 are four dependent 1-elements which span a 3-space; then the formula reduces to the identity:

(b1∧b2∧b3) b0 - (b0∧b2∧b3) b1 + (b0∧b1∧b3) b2 - (b0∧b1∧b2) b3 ≡ 0        3.45


We can get GrassmannAlgebra to check this formula by composing each of the bi in basis form, and finding the common factor of each term. (To use ComposeBasisForm, we will first have to give the bi new names which do not already involve subscripts.)
A; B = (b1∧b2∧b3) ⋁ b0 - (b0∧b2∧b3) ⋁ b1 + (b0∧b1∧b3) ⋁ b2 - (b0∧b1∧b2) ⋁ b3;
B = B /. {b0 → w, b1 → x, b2 → y, b3 → z}

-((w∧x∧y) ⋁ z) + (w∧x∧z) ⋁ y - (w∧y∧z) ⋁ x + (x∧y∧z) ⋁ w

Bn = ComposeBasisForm[B]

-((e1 w1 + e2 w2 + e3 w3)∧(e1 x1 + e2 x2 + e3 x3)∧(e1 y1 + e2 y2 + e3 y3) ⋁ (e1 z1 + e2 z2 + e3 z3)) +
 (e1 w1 + e2 w2 + e3 w3)∧(e1 x1 + e2 x2 + e3 x3)∧(e1 z1 + e2 z2 + e3 z3) ⋁ (e1 y1 + e2 y2 + e3 y3) -
 (e1 w1 + e2 w2 + e3 w3)∧(e1 y1 + e2 y2 + e3 y3)∧(e1 z1 + e2 z2 + e3 z3) ⋁ (e1 x1 + e2 x2 + e3 x3) +
 (e1 x1 + e2 x2 + e3 x3)∧(e1 y1 + e2 y2 + e3 y3)∧(e1 z1 + e2 z2 + e3 z3) ⋁ (e1 w1 + e2 w2 + e3 w3)

ToCommonFactor[Bn] == 0

True
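Identity 3.45 can also be checked with elementary determinants: in a 3-space each bracket (bi∧bj∧bk) is just the 3×3 determinant of the three coordinate rows, and the identity is then Cramer's rule in disguise. A small Python sketch with random integer vectors (det3 is an ad hoc helper):

```python
import random

def det3(u, v, w):
    # determinant of the 3x3 matrix with rows u, v, w:
    # the coefficient of e1^e2^e3 in u^v^w
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - u[1]*(v[0]*w[2] - v[2]*w[0])
          + u[2]*(v[0]*w[1] - v[1]*w[0]))

random.seed(3)
b0, b1, b2, b3 = [[random.randint(-9, 9) for _ in range(3)] for _ in range(4)]

# (b1^b2^b3) b0 - (b0^b2^b3) b1 + (b0^b1^b3) b2 - (b0^b1^b2) b3 == 0
lhs = [det3(b1, b2, b3)*b0[i] - det3(b0, b2, b3)*b1[i]
     + det3(b0, b1, b3)*b2[i] - det3(b0, b1, b2)*b3[i] for i in range(3)]
print(lhs)   # [0, 0, 0]
```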

Exploration: The cobasis form of the Common Factor Axiom

The Common Factor Axiom has a significantly suggestive form when written in terms of cobasis elements. This form will later help us extend the definition of the interior product to arbitrary elements. We start with three basis elements ei_m, ej_k, es_p whose exterior product is equal to the basis n-element. The basis n-element may also be expressed as the cobasis 1‾ of the unit element 1 of Λ0 (here we write x‾ for the cobasis element of x):

ei_m ∧ ej_k ∧ es_p ≡ e1∧e2∧ ⋯ ∧en ≡ 1‾ ≅ c 1

The Common Factor Axiom can be written for these basis elements as:

(ei_m ∧ es_p) ⋁ (ej_k ∧ es_p) ≡ (ei_m ∧ ej_k ∧ es_p) ⋁ es_p,   m + k + p = n        3.46

From the definition of cobasis elements we have that:

ei_m ∧ es_p ≡ (-1)^(m k) (ej_k)‾

ej_k ∧ es_p ≡ (ei_m)‾

es_p ≡ (ei_m ∧ ej_k)‾

ei_m ∧ ej_k ∧ es_p ≡ 1‾

Substituting these four elements into the Common Factor Axiom above gives:

(-1)^(m k) (ej_k)‾ ⋁ (ei_m)‾ ≡ 1‾ ⋁ (ei_m ∧ ej_k)‾

Or, more symmetrically, by interchanging the first two factors:

(ei_m)‾ ⋁ (ej_k)‾ ≡ 1‾ ⋁ (ei_m ∧ ej_k)‾ ≅ c (ei_m ∧ ej_k)‾        3.47

It can be seen that in this form the Common Factor Axiom does not specifically display the common factor, and indeed remains valid for all basis elements, independent of their grades. In sum: Given any two basis elements, the cobasis element of their exterior product is congruent to the regressive product of their cobasis elements.

Exploration: The regressive product of cobasis elements

In Chapter 5: The Complement, we will have cause to calculate the regressive product of cobasis elements. From the formula derived below we will have an instance of the fact that the regressive product of (n-1)-elements is simple, and we will determine that simple element. First, consider basis elements e1 and e2 of an n-space and their cobasis elements ē1 and ē2. The regressive product of ē1 and ē2 is given by:

ē1 ⋁ ē2 ≡ (e2∧e3∧ ⋯ ∧en) ⋁ (-e1∧e3∧ ⋯ ∧en)

Applying the Common Factor Axiom enables us to write:

ē1 ⋁ ē2 ≡ (e1∧e2∧e3∧ ⋯ ∧en) ⋁ (e3∧ ⋯ ∧en)

We can write this either in the form already derived in the section above:

ē1 ⋁ ē2 ≡ 1‾ ⋁ (e1∧e2)‾

or, by writing 1‾ ≅ c 1, as:

ē1 ⋁ ē2 ≅ c (e1∧e2)‾

Taking the regressive product of this equation with ē3 gives:

ē1 ⋁ ē2 ⋁ ē3 ≅ c (e1∧e2)‾ ⋁ ē3 ≅ c 1‾ ⋁ (e1∧e2∧e3)‾ ≅ c² (e1∧e2∧e3)‾

Continuing this process, we arrive finally at the result:

ē1 ⋁ ē2 ⋁ ⋯ ⋁ ēm ≅ c^(m-1) (e1∧e2∧ ⋯ ∧em)‾        3.48

A special case, which we will have occasion to use in Chapter 5, is where the result reduces to a 1-element (the factor ēj being omitted from the product):

(-1)^(j-1) ē1 ⋁ ē2 ⋁ ⋯ ēĵ ⋯ ⋁ ēn ≅ c^(n-2) (-1)^(n-1) ej        3.49

In sum: The regressive product of cobasis elements of basis 1-elements is congruent to the cobasis element of their exterior product. In fact, this formula is just an instance of a more general result, which says that: The regressive product of cobasis elements of any grade is congruent to the cobasis element of their exterior product. We will discuss a result very similar to this in more detail after we have defined the complement of an element in Chapter 5.

3.8 Factorization of Simple Elements


Factorization using the regressive product

The Common Factor Axiom asserts that in an n-space, the regressive product of a simple m-element with a simple (n-m+1)-element will give either zero, or a 1-element belonging to them both, and hence a factor of the m-element. If m such factors can be obtained which are independent, their product will therefore constitute (apart from an easily determined scalar factor) a factorization of the m-element.

Let α_m be the simple element to be factorized. Choose first an (n-m)-element β_(n-m) whose product with α_m is non-zero. Next, choose a set of m independent 1-elements bj whose products with β_(n-m) are also non-zero. A factorization α1∧α2∧ ⋯ ∧αm of α_m is then obtained from:

α_m ≡ a α1∧α2∧ ⋯ ∧αm,   αj ≡ α_m ⋁ (bj ∧ β_(n-m))        3.50

The scalar factor a may be determined simply by equating any two corresponding terms of the original element and the factorized version.

Note that no factorization is unique. Had different bj been chosen, a different factorization would have been obtained. Nevertheless, any one factorization may be obtained from any other by adding multiples of the factors to each factor.

If an element is simple, then the exterior product of the element with itself is zero. The converse, however, is not true in general for elements of grade higher than 2, for it only requires the element to have just one simple factor to make the product with itself zero. If the method is applied to the factorization of a non-simple element, the result will still be a simple element. Thus an element may be tested to see if it is simple by applying the method of this section: if the factorization is not equivalent to the original element, the hypothesis of simplicity has been violated.

Example: Factorization of a simple 2-element

Suppose we have a 2-element α_2 which we wish to show is simple, and which we wish to factorize.

α_2 ≡ v∧w + v∧x + v∧y + v∧z + w∧z + x∧z + y∧z

There are 5 independent 1-elements in the expression for α_2: v, w, x, y, z; hence we can choose n to be 5. Next, we choose β_3 (= β_(n-m)) arbitrarily as x∧y∧z, b1 as v, and b2 as w. Our two factors then become:

α1 ≡ α_2 ⋁ (b1 ∧ β_3) ≡ α_2 ⋁ (v∧x∧y∧z)

α2 ≡ α_2 ⋁ (b2 ∧ β_3) ≡ α_2 ⋁ (w∧x∧y∧z)

In determining α1, the Common Factor Theorem permits us to write for arbitrary 1-elements x and y:

(x∧y) ⋁ (v∧x∧y∧z) ≡ (x∧v∧x∧y∧z) y - (y∧v∧x∧y∧z) x

Applying this to each of the terms of α_2 gives:

α1 ≡ -(w∧v∧x∧y∧z) v + (w∧v∧x∧y∧z) z ≅ v - z

Similarly for α2 we have the same formula, except that w replaces v.

(x∧y) ⋁ (w∧x∧y∧z) ≡ (x∧w∧x∧y∧z) y - (y∧w∧x∧y∧z) x

Again applying this to each of the terms of α_2 gives:

α2 ≡ (v∧w∧x∧y∧z) w + (v∧w∧x∧y∧z) x + (v∧w∧x∧y∧z) y + (v∧w∧x∧y∧z) z ≅ w + x + y + z

Hence the factorization is congruent to:

α1 ∧ α2 ≅ (v - z) ∧ (w + x + y + z)

Verification by expansion of this product shows that this is indeed a factorization of the original element.

Factorizing elements expressed in terms of basis elements

We now take the special case where the element to be factorized is expressed in terms of basis elements. This will enable us to develop formulae from which we can write down the factorization of an element (almost) by inspection. The development is most clearly apparent from a specific example, but one that is general enough to cover the general concept. Consider a 3-element in a 5-space, where we suppose the coefficients to be such as to ensure the simplicity of the element.

α_3 ≡ a1 e1∧e2∧e3 + a2 e1∧e2∧e4 + a3 e1∧e2∧e5 + a4 e1∧e3∧e4 + a5 e1∧e3∧e5 +
      a6 e1∧e4∧e5 + a7 e2∧e3∧e4 + a8 e2∧e3∧e5 + a9 e2∧e4∧e5 + a10 e3∧e4∧e5

Choose b1 ≡ e1, b2 ≡ e2, b3 ≡ e3, β_2 ≡ e4∧e5; then we can write each of the factors bi ∧ β_2 as a cobasis element (writing x‾ for the cobasis element of x).

b1 ∧ β_2 ≡ e1∧e4∧e5 ≡ (e2∧e3)‾

b2 ∧ β_2 ≡ e2∧e4∧e5 ≡ -(e1∧e3)‾

b3 ∧ β_2 ≡ e3∧e4∧e5 ≡ (e1∧e2)‾

Consider the cobasis element (e2∧e3)‾, and a typical term of α_3 ⋁ (e2∧e3)‾, which we write as (a ei∧ej∧ek) ⋁ (e2∧e3)‾. The Common Factor Theorem tells us that this product is zero if ei∧ej∧ek does not contain e2∧e3. We can thus simplify the product α_3 ⋁ (e2∧e3)‾ by dropping out the terms of α_3 which do not contain e2∧e3. Thus:

α_3 ⋁ (e2∧e3)‾ ≡ (a1 e1∧e2∧e3 + a7 e2∧e3∧e4 + a8 e2∧e3∧e5) ⋁ (e2∧e3)‾

Furthermore, the Common Factor Theorem applied to a typical term (e2∧e3∧ei) ⋁ (e2∧e3)‾ of the expansion, in which both e2∧e3 and (e2∧e3)‾ occur, yields a 1-element congruent to the remaining basis 1-element ei in the product. This effectively cancels out (up to congruence) the product e2∧e3 from the original term.

(e2∧e3∧ei) ⋁ (e2∧e3)‾ ≡ ((e2∧e3) ⋁ (e2∧e3)‾) ei ≅ ei

Thus we can further reduce α_3 ⋁ (e2∧e3)‾ to give:

α1 ≡ α_3 ⋁ (e2∧e3)‾ ≅ a1 e1 + a7 e4 + a8 e5

Similarly we can determine the other factors:

α2 ≡ α_3 ⋁ (-(e1∧e3)‾) ≅ a1 e2 - a4 e4 - a5 e5

α3 ≡ α_3 ⋁ (e1∧e2)‾ ≅ a1 e3 + a2 e4 + a3 e5

It is clear from inspecting the product of the first terms in each 1-element that the product requires a scalar divisor of a1². The final result is then:

α_3 ≡ (1/a1²) α1∧α2∧α3
    ≡ (1/a1²) (a1 e1 + a7 e4 + a8 e5) ∧ (a1 e2 - a4 e4 - a5 e5) ∧ (a1 e3 + a2 e4 + a3 e5)

Verification and derivation of conditions for simplicity

We verify the factorization by multiplying out the factors and comparing the result with the original expression. When we do this we obtain a result which still requires some conditions to be met: those ensuring the original element is simple.

First we declare a 5-dimensional basis and compose α_3 (this automatically declares the coefficients ai to be scalar).

A; !5; ComposeMElement[3, a]

a1 e1∧e2∧e3 + a2 e1∧e2∧e4 + a3 e1∧e2∧e5 + a4 e1∧e3∧e4 + a5 e1∧e3∧e5 +
 a6 e1∧e4∧e5 + a7 e2∧e3∧e4 + a8 e2∧e3∧e5 + a9 e2∧e4∧e5 + a10 e3∧e4∧e5

To effect the multiplication of the factored form we use GrassmannExpandAndSimplify (in its alias form %).

A = %[(1/a1²) (a1 e1 + a7 e4 + a8 e5) ∧ (a1 e2 - a4 e4 - a5 e5) ∧ (a1 e3 + a2 e4 + a3 e5)]

a1 e1∧e2∧e3 + a2 e1∧e2∧e4 + a3 e1∧e2∧e5 + a4 e1∧e3∧e4 + a5 e1∧e3∧e5 +
 (-(a3 a4)/a1 + (a2 a5)/a1) e1∧e4∧e5 + a7 e2∧e3∧e4 + a8 e2∧e3∧e5 +
 (-(a3 a7)/a1 + (a2 a8)/a1) e2∧e4∧e5 + (-(a5 a7)/a1 + (a4 a8)/a1) e3∧e4∧e5

For this expression to be congruent to the original expression we clearly need to apply the simplicity conditions for a general 3-element in a 5-space. Once this is done we retrieve the original 3-element α_3 with which we began. The simplicity conditions can be expressed by the following rules, which we apply to the expression A. We will discuss them in the next section.

A /. {-(a3 a4)/a1 + (a2 a5)/a1 → a6, -(a3 a7)/a1 + (a2 a8)/a1 → a9, -(a5 a7)/a1 + (a4 a8)/a1 → a10}

a1 e1∧e2∧e3 + a2 e1∧e2∧e4 + a3 e1∧e2∧e5 + a4 e1∧e3∧e4 + a5 e1∧e3∧e5 +
 a6 e1∧e4∧e5 + a7 e2∧e3∧e4 + a8 e2∧e3∧e5 + a9 e2∧e4∧e5 + a10 e3∧e4∧e5

In sum: A 3-element in a 5-space of the form:

a1 e1∧e2∧e3 + a2 e1∧e2∧e4 + a3 e1∧e2∧e5 + a4 e1∧e3∧e4 + a5 e1∧e3∧e5 +
 a6 e1∧e4∧e5 + a7 e2∧e3∧e4 + a8 e2∧e3∧e5 + a9 e2∧e4∧e5 + a10 e3∧e4∧e5

whose coefficients are constrained by the relations:

a2 a5 - a3 a4 - a1 a6 = 0
a2 a8 - a3 a7 - a1 a9 = 0
a4 a8 - a5 a7 - a1 a10 = 0

is simple, and has a factorization of:

(1/a1²) (a1 e1 + a7 e4 + a8 e5) ∧ (a1 e2 - a4 e4 - a5 e5) ∧ (a1 e3 + a2 e4 + a3 e5)

This factorization is of course not unique.
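This summary lends itself to a numeric spot-check: choose a1, …, a5, a7, a8 freely, define a6, a9, a10 by the three constraints, and expand the stated factorization. A Python sketch with exact rational arithmetic (wedge3 is an ad hoc helper; basis triples are taken in lexicographic order, matching a1 … a10):

```python
import random
from fractions import Fraction
from itertools import combinations

random.seed(9)
a = {i: Fraction(random.randint(1, 9)) for i in (1, 2, 3, 4, 5, 7, 8)}
a[6] = (a[2]*a[5] - a[3]*a[4]) / a[1]    # a2 a5 - a3 a4 - a1 a6 = 0
a[9] = (a[2]*a[8] - a[3]*a[7]) / a[1]    # a2 a8 - a3 a7 - a1 a9 = 0
a[10] = (a[4]*a[8] - a[5]*a[7]) / a[1]   # a4 a8 - a5 a7 - a1 a10 = 0

f1 = [a[1], 0, 0, a[7], a[8]]      # a1 e1 + a7 e4 + a8 e5
f2 = [0, a[1], 0, -a[4], -a[5]]    # a1 e2 - a4 e4 - a5 e5
f3 = [0, 0, a[1], a[2], a[3]]      # a1 e3 + a2 e4 + a3 e5

def wedge3(u, v, w):
    out = {}
    for c in combinations(range(5), 3):
        m = [[vec[i] for i in c] for vec in (u, v, w)]
        out[c] = (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
                - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
                + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    return out

product = {c: v / a[1]**2 for c, v in wedge3(f1, f2, f3).items()}
original = dict(zip(combinations(range(5), 3), (a[i] for i in range(1, 11))))
print(product == original)   # True
```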


The factorization algorithm

In this section we take the development of the previous sections and extract an algorithm for writing down a factorization by inspection. Suppose we have a simple m-element expressed as a sum of terms, each of which is of the form a x∧y∧ ⋯ ∧z, where a denotes a scalar (which may be unity) and the x, y, …, z are symbols denoting 1-element factors.

To obtain a 1-element factor of the m-element

Select an (m-1)-element belonging to at least one of the terms, for example y∧ ⋯ ∧z. Drop any terms not containing the selected (m-1)-element. Factor this (m-1)-element from the resulting expression and eliminate it. The 1-element remaining is a 1-element factor of the m-element.

To factorize the m-element

Select an m-element belonging to at least one of the terms, for example x∧y∧ ⋯ ∧z. Create m different (m-1)-elements by dropping a different 1-element factor each time. The sign of the result is not important, since a scalar factor will be determined in the last step. Obtain m independent 1-element factors corresponding to each of these (m-1)-elements. The original m-element is congruent to the exterior product of these 1-element factors. Compare this product to the original m-element to obtain the correct scalar factor and hence the final factorization.

Example 1: Factorizing a 2-element in a 4-space

Suppose we have a 2-element in a 4-space, and we wish to apply this algorithm to obtain a factorization. We have already seen that such an element is in general not simple. We may however use the preceding algorithm to obtain the simplicity conditions on the coefficients.

α_2 ≡ a1 e1∧e2 + a2 e1∧e3 + a3 e1∧e4 + a4 e2∧e3 + a5 e2∧e4 + a6 e3∧e4

Select a 2-element belonging to at least one of the terms, say e1∧e2. Drop e2 to create e1. Then drop e1 to create e2.

Select e1. Drop the terms a4 e2∧e3 + a5 e2∧e4 + a6 e3∧e4 since they do not contain e1. Factor e1 from a1 e1∧e2 + a2 e1∧e3 + a3 e1∧e4 and eliminate it to give factor α1.

α1 ≡ a1 e2 + a2 e3 + a3 e4

Select e2. Drop the terms a2 e1∧e3 + a3 e1∧e4 + a6 e3∧e4 since they do not contain e2. Factor e2 from a1 e1∧e2 + a4 e2∧e3 + a5 e2∧e4 and eliminate it to give factor α2.

α2 ≡ -a1 e1 + a4 e3 + a5 e4

The exterior product of these 1-element factors is

α1 ∧ α2 ≡ (a1 e2 + a2 e3 + a3 e4) ∧ (-a1 e1 + a4 e3 + a5 e4)
        ≡ a1 (a1 e1∧e2 + a2 e1∧e3 + a3 e1∧e4 + a4 e2∧e3 + a5 e2∧e4 + ((-a3 a4 + a2 a5)/a1) e3∧e4)

Comparing this product to the original 2-element α_2 gives the final factorization as

α_2 ≡ (1/a1) (a1 e2 + a2 e3 + a3 e4) ∧ (-a1 e1 + a4 e3 + a5 e4)

provided that the simplicity condition on the coefficients is satisfied:

a1 a6 + a3 a4 - a2 a5 = 0

In sum: A 2-element in a 4-space may be factorized if and only if a condition on the coefficients is satisfied.

a1 e1∧e2 + a2 e1∧e3 + a3 e1∧e4 + a4 e2∧e3 + a5 e2∧e4 + a6 e3∧e4 ≡
  (1/a1) (a1 e1 - a4 e3 - a5 e4) ∧ (a1 e2 + a2 e3 + a3 e4),
  a3 a4 - a2 a5 + a1 a6 = 0        3.51

Again, this factorization is not unique.
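Formula 3.51 can be spot-checked numerically: pick a1, …, a5 at random, set a6 from the simplicity condition, and compare the expanded product of the two factors with the original coefficients. A short Python sketch (wedge2 is an ad hoc helper):

```python
import random
from fractions import Fraction

random.seed(5)
a1, a2, a3, a4, a5 = (Fraction(random.randint(1, 9)) for _ in range(5))
a6 = (a2*a5 - a3*a4) / a1          # the simplicity condition  a3 a4 - a2 a5 + a1 a6 = 0

def wedge2(u, v):
    return {(i, j): u[i]*v[j] - u[j]*v[i] for i in range(4) for j in range(i+1, 4)}

u = [a1, 0, -a4, -a5]              # a1 e1 - a4 e3 - a5 e4
v = [0, a1, a2, a3]                # a1 e2 + a2 e3 + a3 e4

product = {k: p / a1 for k, p in wedge2(u, v).items()}   # (1/a1) u^v
original = {(0, 1): a1, (0, 2): a2, (0, 3): a3, (1, 2): a4, (1, 3): a5, (2, 3): a6}
print(product == original)   # True
```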

Example 2: Factorizing a 3-element in a 5-space

This time we have a numerical example of a 3-element in a 5-space. We wish to determine if the element is simple, and if so, to obtain a factorization of it. To achieve this, we first assume that it is simple, and apply the factorization algorithm. We will verify its simplicity (or its non-simplicity) by comparing the results of this process to the original element.

α_3 ≡ -3 e1∧e2∧e3 - 4 e1∧e2∧e4 + 12 e1∧e2∧e5 + 3 e1∧e3∧e4 - 3 e1∧e3∧e5 +
       8 e1∧e4∧e5 + 6 e2∧e3∧e4 - 18 e2∧e3∧e5 + 12 e3∧e4∧e5

Select a 3-element belonging to at least one of the terms, say e1∧e2∧e3. Drop e1 to create e2∧e3. Drop e2 to create e1∧e3. Drop e3 to create e1∧e2.

Select e2∧e3. Drop the terms not containing it, factor it from the remainder, and eliminate it to give α1.

α1 ≡ -3 e1 + 6 e4 - 18 e5 ≅ -e1 + 2 e4 - 6 e5

Select e1∧e3. Drop the terms not containing it, factor it from the remainder, and eliminate it to give α2.

α2 ≡ 3 e2 + 3 e4 - 3 e5 ≅ e2 + e4 - e5

Select e1∧e2. Drop the terms not containing it, factor it from the remainder, and eliminate it to give α3.

α3 ≡ -3 e3 - 4 e4 + 12 e5

The exterior product of these 1-element factors is:

α1∧α2∧α3 ≅ (-e1 + 2 e4 - 6 e5) ∧ (e2 + e4 - e5) ∧ (-3 e3 - 4 e4 + 12 e5)

Multiplying this out (here using GrassmannExpandAndSimplify in its alias form) gives:

A; !5; %[(-e1 + 2 e4 - 6 e5) ∧ (e2 + e4 - e5) ∧ (-3 e3 - 4 e4 + 12 e5)]

3 e1∧e2∧e3 + 4 e1∧e2∧e4 - 12 e1∧e2∧e5 - 3 e1∧e3∧e4 + 3 e1∧e3∧e5 -
 8 e1∧e4∧e5 - 6 e2∧e3∧e4 + 18 e2∧e3∧e5 - 12 e3∧e4∧e5

Comparing this product to the original 3-element α_3 verifies a final factorization as:

α_3 ≡ (e1 - 2 e4 + 6 e5) ∧ (e2 + e4 - e5) ∧ (-3 e3 - 4 e4 + 12 e5)

This factorization is, of course, not unique. For example, a slightly simpler factorization could be obtained by subtracting twice the first factor from the third factor to obtain:

α_3 ≡ (e1 - 2 e4 + 6 e5) ∧ (e2 + e4 - e5) ∧ (-3 e3 - 2 e1)

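As a final cross-check outside GrassmannAlgebra, the factorization can be multiplied out with plain 3×3 determinants and compared with the ten original coefficients (wedge3 is an ad hoc helper; the basis triples come out of itertools.combinations in lexicographic order, matching the order of the terms above):

```python
from itertools import combinations

def wedge3(u, v, w, n=5):
    out = {}
    for c in combinations(range(n), 3):
        m = [[vec[i] for i in c] for vec in (u, v, w)]
        out[c] = (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
                - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
                + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    return out

f1 = [1, 0, 0, -2, 6]    # e1 - 2 e4 + 6 e5
f2 = [0, 1, 0, 1, -1]    # e2 + e4 - e5
f3 = [0, 0, -3, -4, 12]  # -3 e3 - 4 e4 + 12 e5

# original coefficients of a3, in lexicographic basis order:
# e123 e124 e125 e134 e135 e145 e234 e235 e245 e345
a3 = [-3, -4, 12, 3, -3, 8, 6, -18, 0, 12]
print(list(wedge3(f1, f2, f3).values()) == a3)   # True
```

Note the zero coefficient on e2∧e4∧e5: the original element has no such term.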
Factorization of (n-1)-elements

The foregoing method may be used to prove constructively that any (n-1)-element is simple, by obtaining a factorization and subsequently verifying its validity. Let the general (n-1)-element be:

α_(n-1) ≡ ∑_(i=1)^n ai ēi,   a1 ≠ 0

where the ēi are the cobasis elements of the ei. Choose β_(n-m) ≡ e1 (here n - m = 1) and bj ≡ ej, and apply the Common Factor Theorem to obtain (for j ≠ 1):

α_(n-1) ⋁ (e1∧ej) ≡ (α_(n-1) ⋁ ej) e1 - (α_(n-1) ⋁ e1) ej
  ≡ (∑_(i=1)^n ai (ēi ⋁ ej)) e1 - (∑_(i=1)^n ai (ēi ⋁ e1)) ej
  ≅ (-1)^(n-1) (aj e1 - a1 ej)

Factors of α_(n-1) are therefore of the form aj e1 - a1 ej, j ≠ 1, so that α_(n-1) is congruent to:

α_(n-1) ≅ (a2 e1 - a1 e2) ∧ (a3 e1 - a1 e3) ∧ ⋯ ∧ (an e1 - a1 en)

The result may be summarized as follows:

a1 ē1 + a2 ē2 + ⋯ + an ēn ≅
  (a2 e1 - a1 e2) ∧ (a3 e1 - a1 e3) ∧ ⋯ ∧ (an e1 - a1 en),   a1 ≠ 0        3.52

∑_(i=1)^n ai ēi ≅ ⋀_(i=2)^n (ai e1 - a1 ei),   a1 ≠ 0        3.53

By putting equation 3.53 in its equality form rather than its congruence form, we have

a1^(n-2) ∑_(i=1)^n ai ēi = (-1)^(n-1) ⋀_(i=2)^n (ai e1 - a1 ei),   a1 ≠ 0        3.54

Example: Verifying the formula

The left and right sides of formula 3.54 are easily coded in GrassmannAlgebra as functions of the dimension n of the space.

L[n_] := (!n; a1^(n-2) ∑_(i=1)^n ai CobasisElement[ei])

R[n_] := (-1)^(n-1) Wedge @@ Table[ai e1 - a1 ei, {i, 2, n}]

This enables us to tabulate the formula for different dimensions. For example

Table[L[n] == R[n], {n, 3, 5}]

{a1 (a3 e1∧e2 - a2 e1∧e3 + a1 e2∧e3) ==
   (a2 e1 - a1 e2) ∧ (a3 e1 - a1 e3),
 a1² (-a4 e1∧e2∧e3 + a3 e1∧e2∧e4 - a2 e1∧e3∧e4 + a1 e2∧e3∧e4) ==
   -((a2 e1 - a1 e2) ∧ (a3 e1 - a1 e3) ∧ (a4 e1 - a1 e4)),
 a1³ (a5 e1∧e2∧e3∧e4 - a4 e1∧e2∧e3∧e5 + a3 e1∧e2∧e4∧e5 -
      a2 e1∧e3∧e4∧e5 + a1 e2∧e3∧e4∧e5) ==
   (a2 e1 - a1 e2) ∧ (a3 e1 - a1 e3) ∧ (a4 e1 - a1 e4) ∧ (a5 e1 - a1 e5)}

or to verify it for different dimensions by expanding and simplifying both sides.

Table[%[L[n]] == %[R[n]], {n, 3, 8}]

{True, True, True, True, True, True}
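The same verification can be replayed in plain Python for a particular dimension, say n = 4, using the cobasis convention ēi = (-1)^(i-1) e1∧ ⋯ êi ⋯ ∧en (so that ei∧ēi = e1∧ ⋯ ∧en). The wedge helper below is an ad hoc Laplace expansion, not a package function:

```python
import random
from itertools import combinations

n = 4
random.seed(7)
a = [random.randint(1, 9) for _ in range(n)]   # a1 .. an, with a1 != 0

def wedge(vectors):
    """k-element coefficients of v1^...^vk as k x k determinants."""
    k = len(vectors)
    def det(rows, cols):
        if len(rows) == 1:
            return vectors[rows[0]][cols[0]]
        return sum((-1)**j * vectors[rows[0]][cols[j]]
                   * det(rows[1:], cols[:j] + cols[j+1:]) for j in range(len(cols)))
    return {c: det(list(range(k)), list(c)) for c in combinations(range(n), k)}

# LHS: a1^(n-2) * sum_i ai * cobasis(ei), with cobasis(ei) carrying sign (-1)^(i-1)
lhs = {}
for i in range(n):
    c = tuple(j for j in range(n) if j != i)
    lhs[c] = a[0]**(n-2) * (-1)**i * a[i]

# RHS: (-1)^(n-1) * (a2 e1 - a1 e2) ^ (a3 e1 - a1 e3) ^ ... ^ (an e1 - a1 en)
factors = [[a[i] if j == 0 else (-a[0] if j == i else 0) for j in range(n)]
           for i in range(1, n)]
rhs = {c: (-1)**(n-1) * v for c, v in wedge(factors).items()}
print(lhs == rhs)   # True
```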

Factorizing simple m-elements

For m-elements known to be simple we can use the GrassmannAlgebra function ExteriorFactorize to obtain a factorization. Note that the result obtained from ExteriorFactorize will be only one factorization out of the generally infinitely many possibilities. However, every factorization may be obtained from any other by adding scalar multiples of each factor to the other factors.

We have shown above that every (n-1)-element in an n-space is simple. Applying ExteriorFactorize to a series of general (n-1)-elements in spaces of 4, 5, and 6 dimensions corroborates this result (although the form it generates is slightly different). In each case we first declare a basis of the requisite dimension by entering !n;, and then create a general (n-1)-element X with scalars ai using the GrassmannAlgebra function ComposeMElement.

Factorizing a 3-element in a 4-space

A; !4; X = ComposeMElement[3, a]

a1 e1∧e2∧e3 + a2 e1∧e2∧e4 + a3 e1∧e3∧e4 + a4 e2∧e3∧e4

ExteriorFactorize[X]

a1 (e1 + (a4 e4)/a1) ∧ (e2 - (a3 e4)/a1) ∧ (e3 + (a2 e4)/a1)

Factorizing a 4-element in a 5-space

!5; X = ComposeMElement[4, a]

a1 e1∧e2∧e3∧e4 + a2 e1∧e2∧e3∧e5 + a3 e1∧e2∧e4∧e5 + a4 e1∧e3∧e4∧e5 + a5 e2∧e3∧e4∧e5

ExteriorFactorize[X]

a1 (e1 - (a5 e5)/a1) ∧ (e2 + (a4 e5)/a1) ∧ (e3 - (a3 e5)/a1) ∧ (e4 + (a2 e5)/a1)

Factorizing a 5-element in a 6-space

!6; X = ComposeMElement[5, a]

a1 e1∧e2∧e3∧e4∧e5 + a2 e1∧e2∧e3∧e4∧e6 + a3 e1∧e2∧e3∧e5∧e6 +
 a4 e1∧e2∧e4∧e5∧e6 + a5 e1∧e3∧e4∧e5∧e6 + a6 e2∧e3∧e4∧e5∧e6

ExteriorFactorize[X]

a1 (e1 + (a6 e6)/a1) ∧ (e2 - (a5 e6)/a1) ∧ (e3 + (a4 e6)/a1) ∧ (e4 - (a3 e6)/a1) ∧ (e5 + (a2 e6)/a1)

The regressive product of two 3-elements in a 4-space

Let X and Y be 3-elements in a 4-space.

A; !4; X = ComposeMElement[3, x]

x1 e1∧e2∧e3 + x2 e1∧e2∧e4 + x3 e1∧e3∧e4 + x4 e2∧e3∧e4

Y = ComposeMElement[3, y]

y1 e1∧e2∧e3 + y2 e1∧e2∧e4 + y3 e1∧e3∧e4 + y4 e2∧e3∧e4

We can obtain the 2-element common factor of these two 3-elements by applying ToCommonFactor to their regressive product.

Z = ToCommonFactor[X ⋁ Y]

c ((-x2 y1 + x1 y2) e1∧e2 + (-x3 y1 + x1 y3) e1∧e3 + (-x3 y2 + x2 y3) e1∧e4 +
   (-x4 y1 + x1 y4) e2∧e3 + (-x4 y2 + x2 y4) e2∧e4 + (-x4 y3 + x3 y4) e3∧e4)

Applying ExteriorFactorize to this common factor shows that it is simple and can be factorized.

ExteriorFactorize[Z]

c (-x2 y1 + x1 y2) (e1 + (e3 (x4 y1 - x1 y4))/(-x2 y1 + x1 y2) + (e4 (x4 y2 - x2 y4))/(-x2 y1 + x1 y2)) ∧
  (e2 + (e3 (x3 y1 - x1 y3))/(x2 y1 - x1 y2) + (e4 (x3 y2 - x2 y3))/(x2 y1 - x1 y2))

Because this is a 2-element in a 4-space we can of course also prove its simplicity by expanding and simplifying its exterior square to show that it is zero.

Expand[%[Z ∧ Z]]

0

Factorizing contingently simple m-elements


If ExteriorFactorize is applied to an element which is not simple, the element will simply be returned. For example, in a 4-space, the 2-element xy + uv is not simple.
!4 ; ExteriorFactorize@x y + u vD uv + xy

If ExteriorFactorize is applied to an element which may be simple conditional on the values taken by some of its scalar symbol coefficients, the conditional result will be returned.

Contingently factorizing a 2-element in a 4-space


!4 ; X = ComposeMElement@2, aD a1 e1 e2 + a2 e1 e3 + a3 e1 e4 + a4 e2 e3 + a5 e2 e4 + a6 e3 e4 X1 = ExteriorFactorize@XD IfB8a3 a4 - a2 a5 + a1 a6 0<, a1 e1 a4 e3 a1 a5 e4 a1 e2 + a2 e3 a1 + a3 e4 a1 F

This Mathematica If function has syntax If[C,F], where C is a list of constraints on the scalar symbols of the expression, and F is the factorization if the constraints are satisfied. Hence the above may be read: if the simplicity condition is satisfied, then the factorization is as given, else the element is not factorizable. Instead of a list of constraints, we can also recast the list of constraints in predicate form by applying And to the list. (In this example, where there is only one constraint, the effect is simply to remove the braces from the constraint.)
X1 = X1 . c_List And c IfBa3 a4 - a2 a5 + a1 a6 0, a1 e1 a4 e3 a1 a5 e4 a1 e2 + a2 e3 a1 + a3 e4 a1 F

If we are able to assert the condition required, then the 2-element is indeed simple and the factorization is valid. Substituting this condition into the predicate form of the If statement, yields true for the predicate, hence the factorization is returned.

2009 9 3

Grassmann Algebra Book.nb

186

X1 . a6 a1 e1 -

a2 a5 - a3 a4 a1 a1 a5 e4 a1 e2 + a2 e3 a1 + a3 e4 a1

a4 e3

Contingently factorizing a 2-element in a 5-space


!5 ; X = ComposeMElement@2, aD a1 e1 e2 + a2 e1 e3 + a3 e1 e4 + a4 e1 e5 + a5 e2 e3 + a6 e2 e4 + a7 e2 e5 + a8 e3 e4 + a9 e3 e5 + a10 e4 e5 X1 = ExteriorFactorize@XD IfB8a3 a5 - a2 a6 + a1 a8 0, a4 a5 - a2 a7 + a1 a9 0, a4 a6 - a3 a7 + a1 a10 0<, a5 e3 a6 e4 a7 e5 a2 e3 a3 e4 a4 e5 a1 e1 e2 + + + F a1 a1 a1 a1 a1 a1

In this case of a 2-element in a 5-space, we have a list of three constraints, which we can turn into a predicate by applying And to the list.
X1 = X1 /. c_List :> And @@ c
If[a3 a5 - a2 a6 + a1 a8 == 0 && a4 a5 - a2 a7 + a1 a9 == 0 && a4 a6 - a3 a7 + a1 a10 == 0, a1 (e1 - (a5 e3)/a1 - (a6 e4)/a1 - (a7 e5)/a1) ∧ (e2 + (a2 e3)/a1 + (a3 e4)/a1 + (a4 e5)/a1)]

Now, if we assert the constraints, we get the simple factored expression.


X1 /. {a8 → (a2 a6 - a3 a5)/a1, a9 → (a2 a7 - a4 a5)/a1, a10 → (a3 a7 - a4 a6)/a1}
a1 (e1 - (a5 e3)/a1 - (a6 e4)/a1 - (a7 e5)/a1) ∧ (e2 + (a2 e3)/a1 + (a3 e4)/a1 + (a4 e5)/a1)

Elements expressed in terms of independent vector symbols


ExteriorFactorize will also contingently factorize m-elements expressed in terms of (independent) vector symbols. For example, here is a 2-element in a 4-space expressed in terms of independent vector symbols x, y, z, w, and scalars a and b.
!4 ; X = 2 (x∧y) + b (y∧z) - 5 (x∧w) + 7 a (y∧w) - 4 (z∧w);


ExteriorFactorize[X]
If[{8 + 5 b == 0}, -5 (x - (7 a y)/5 + (4 z)/5) ∧ (w - (2 y)/5)]

Determining if an element is simple


To determine if an element is simple, we can use the GrassmannAlgebra function SimpleQ. If an element is simple, SimpleQ returns True. If it is not simple, it returns False. If it may be simple conditional on the values taken by some of its scalar coefficients, SimpleQ returns the conditions required. We take some of the examples discussed in the previous section on factorization.

For simple m-elements A 4-element in a 5-space


!5 ; X = ComposeMElement[4, a]
a1 e1∧e2∧e3∧e4 + a2 e1∧e2∧e3∧e5 + a3 e1∧e2∧e4∧e5 + a4 e1∧e3∧e4∧e5 + a5 e2∧e3∧e4∧e5
SimpleQ[X]
True

For non-simple m-elements


If SimpleQ is applied to an element which is not simple, it returns False.
SimpleQ[x∧y + u∧v]
False

For contingently simple m-elements


If SimpleQ is applied to an element which may be simple conditional on the values taken by some of its scalar coefficients, the conditional result will be returned.

A 2-element in a 4-space
!4 ; X = ComposeMElement[2, a]
a1 e1∧e2 + a2 e1∧e3 + a3 e1∧e4 + a4 e2∧e3 + a5 e2∧e4 + a6 e3∧e4


SimpleQ[X]
If[{a3 a4 - a2 a5 + a1 a6 == 0}, True]

If we are able to assert the condition required, that a3 a4 - a2 a5 + a1 a6 0, then the 2-element is indeed simple.
SimpleQ[X /. a6 → -((a3 a4 - a2 a5)/a1)]
True

A 2-element in a 5-space
!5 ; X = ComposeMElement[2, a]
a1 e1∧e2 + a2 e1∧e3 + a3 e1∧e4 + a4 e1∧e5 + a5 e2∧e3 + a6 e2∧e4 + a7 e2∧e5 + a8 e3∧e4 + a9 e3∧e5 + a10 e4∧e5
SimpleQ[X]
If[{a3 a5 - a2 a6 + a1 a8 == 0, a4 a5 - a2 a7 + a1 a9 == 0, a4 a6 - a3 a7 + a1 a10 == 0}, True]

In this case of a 2-element in a 5-space, we have three conditions on the coefficients to satisfy before being able to assert that the element is simple.
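The three conditions reported by SimpleQ are the Plücker relations coming from the components of X∧X that contain e1∧e2; when a1 ≠ 0 they imply the two remaining relations. A quick check in plain Python (illustrative only, with exact arithmetic via Fraction) is to impose the three listed conditions and verify that every component of X∧X then vanishes:

```python
from fractions import Fraction
from itertools import combinations

def pl(c, i, j, k, l):
    # half the coefficient of ei^ej^ek^el in X^X, for a 2-element X
    # with coefficients c[(i,j)]
    return c[(i, j)] * c[(k, l)] - c[(i, k)] * c[(j, l)] + c[(i, l)] * c[(j, k)]

a = {n: Fraction(n + 1) for n in range(1, 8)}      # a1..a7 arbitrary
# impose the three conditions SimpleQ reports (possible when a1 != 0):
a[8]  = (a[2] * a[6] - a[3] * a[5]) / a[1]         # a3 a5 - a2 a6 + a1 a8  == 0
a[9]  = (a[2] * a[7] - a[4] * a[5]) / a[1]         # a4 a5 - a2 a7 + a1 a9  == 0
a[10] = (a[3] * a[7] - a[4] * a[6]) / a[1]         # a4 a6 - a3 a7 + a1 a10 == 0
c = {(1,2): a[1], (1,3): a[2], (1,4): a[3], (1,5): a[4], (2,3): a[5],
     (2,4): a[6], (2,5): a[7], (3,4): a[8], (3,5): a[9], (4,5): a[10]}
# every component of X^X now vanishes: the remaining Pluecker relations
# follow from the three listed ones
print(all(pl(c, *q) == 0 for q in combinations(range(1, 6), 4)))   # True
```

In particular the relations on e1∧e3∧e4∧e5 and e2∧e3∧e4∧e5, which SimpleQ does not list, come out automatically.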

For general m-elements


SimpleQ will also check m-elements expressed in terms of (independent) vector symbols.
!4 ; X = 2 (x∧y) + b (y∧z) - 5 (x∧w) + 7 a (y∧w) - 4 (z∧w); SimpleQ[X]
If[{8 + 5 b == 0}, True]

3.9 Product Formulas for Regressive Products


The Product Formula
The Common Factor Theorem forms the basis for many of the important formulas relating the various products of the Grassmann algebra. In this section, some of the more basic formulas which involve just the exterior and regressive products will be developed. These in turn will be shown in Chapter 6: The Interior Product to have their counterparts in terms of exterior and interior products. The formulas are usually directed at obtaining alternative expressions (or expansions) for an element, or for products of elements.


The first formula to be developed (which forms a basis for much of the rest) is an expansion for the regressive product of an (n-1)-element with the exterior product of two arbitrary elements. We call this (and its dual) the Product Formula. Let x be an (n-1)-element; then:

(α_m ∧ β_k) ∨ x_(n-1) ≡ (α_m ∨ x_(n-1)) ∧ β_k + (-1)^m α_m ∧ (β_k ∨ x_(n-1))    (3.55)

To prove this, suppose initially that α_m and β_k are simple and can be expressed as α_m ≡ a1∧⋯∧am and β_k ≡ b1∧⋯∧bk. Applying the Common Factor Theorem gives

(α_m ∧ β_k) ∨ x_(n-1) ≡ ((a1∧⋯∧am) ∧ (b1∧⋯∧bk)) ∨ x_(n-1)
≡ Σ_{i=1..m} (-1)^(i-1) (ai ∧ x_(n-1)) ∨ ((a1∧⋯∧ǎi∧⋯∧am) ∧ (b1∧⋯∧bk))
+ Σ_{j=1..k} (-1)^(m+j-1) (bj ∧ x_(n-1)) ∨ ((a1∧⋯∧am) ∧ (b1∧⋯∧b̌j∧⋯∧bk))

(where ǎi and b̌j denote factors omitted from the products). Since the ai∧x_(n-1) and bj∧x_(n-1) are n-elements, we can rearrange the parentheses in the terms of the sums by using the property of n-elements that they behave with regressive products just like scalars behave with exterior products: A_n ∨ (B_b ∧ C_c) ≡ (A_n ∨ B_b) ∧ C_c. So the right hand side becomes

Σ_{i=1..m} (-1)^(i-1) ((ai ∧ x_(n-1)) ∨ (a1∧⋯∧ǎi∧⋯∧am)) ∧ (b1∧⋯∧bk)
+ Σ_{j=1..k} (-1)^(m+j-1) (-1)^(m(k-1)) ((bj ∧ x_(n-1)) ∨ (b1∧⋯∧b̌j∧⋯∧bk)) ∧ (a1∧⋯∧am)

Reapplying the Common Factor Theorem in reverse enables us to condense the sums back to:

(α_m ∨ x_(n-1)) ∧ (b1∧⋯∧bk) + (-1)^(m k) (β_k ∨ x_(n-1)) ∧ (a1∧⋯∧am)

A reordering of the second term gives the final result:

(α_m ∨ x_(n-1)) ∧ (b1∧⋯∧bk) + (-1)^m (a1∧⋯∧am) ∧ (β_k ∨ x_(n-1))

This result may be extended in a straightforward manner to the case where α_m and β_k are not simple: since a non-simple element may be expressed as the sum of simple terms, and the formula is valid for each term, then by addition it can be shown to be valid for the sum.


We can calculate the dual of this formula by applying the GrassmannAlgebra function Dual. (Note that the dimension is entered as the special symbol n, in order to let the Dual function know of its special meaning as the dimension of the space.)

Dual[(α_m ∧ β_k) ∨ x_(n-1) ≡ (α_m ∨ x_(n-1)) ∧ β_k + (-1)^m α_m ∧ (β_k ∨ x_(n-1))]

(α_m ∨ β_k) ∧ x ≡ (-1)^(n-m) α_m ∨ (β_k ∧ x) + (α_m ∧ x) ∨ β_k

which we rearrange slightly to make it more readable as:

(α_m ∨ β_k) ∧ x ≡ (α_m ∧ x) ∨ β_k + (-1)^(n-m) α_m ∨ (β_k ∧ x)    (3.56)

Note that here x is a 1-element.

Deriving Product Formulas


If x is of a grade higher than 1, then similar relations hold, but with extra terms on the right-hand side. For example, if we replace x by x1∧x2 and note that:

(α_m ∨ β_k) ∧ (x1∧x2) ≡ ((α_m ∨ β_k) ∧ x1) ∧ x2

then the right-hand side may be expanded by applying the above Product Formula twice to obtain:

(α_m ∨ β_k) ∧ (x1∧x2) ≡ (α_m ∧ (x1∧x2)) ∨ β_k + α_m ∨ (β_k ∧ (x1∧x2))
+ (-1)^(n-m) [ (α_m ∧ x2) ∨ (β_k ∧ x1) - (α_m ∧ x1) ∨ (β_k ∧ x2) ]    (3.57)
Each successive application doubles the number of terms. We started with two terms on the right hand side of the basic Product Formula. By applying it again we obtain a Product Formula with four terms on the right hand side. The next application would give us eight terms as shown in the Product Formula below.
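The doubling can be mimicked structurally. In the sketch below (plain Python, purely illustrative), a pair (u, v) stands for a term (α∧u)∨(β∧v); each application of the basic rule splits every term in two, so after j steps the 2^j terms are exactly the ways of distributing x1, ..., xj between the α and β factors:

```python
def apply_once(terms, x):
    # the basic Product Formula sends each term (a^u) v (b^v), wedged with x,
    # to (a^u^x) v (b^v) + (sign) (a^u) v (b^v^x): every term becomes two
    out = []
    for u, v in terms:
        out.append((u + (x,), v))
        out.append((u, v + (x,)))
    return out

terms = [((), ())]                       # start from a v b
for j, x in enumerate(["x1", "x2", "x3"], start=1):
    terms = apply_once(terms, x)
    print(j, len(terms))                 # 1 2 / 2 4 / 3 8
```

The signs are omitted here; the point is only that the 2^j terms correspond one-to-one with the subsets of {x1, ..., xj} wedged onto the α factor.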

(α_m ∨ β_k) ∧ (x1∧x2∧x3) ≡
(α_m ∧ x1) ∨ (β_k ∧ x2∧x3) - (α_m ∧ x2) ∨ (β_k ∧ x1∧x3) + (α_m ∧ x3) ∨ (β_k ∧ x1∧x2) + (α_m ∧ x1∧x2∧x3) ∨ β_k
+ (-1)^(n-m) [ α_m ∨ (β_k ∧ x1∧x2∧x3) + (α_m ∧ x1∧x2) ∨ (β_k ∧ x3) - (α_m ∧ x1∧x3) ∨ (β_k ∧ x2) + (α_m ∧ x2∧x3) ∨ (β_k ∧ x1) ]    (3.58)

Thus the Product Formula for (α_m ∨ β_k) ∧ (x1∧x2∧⋯∧xj) would lead to 2^j terms.
Because this derivation process is simply the repeated application of a formula, we can get Mathematica to do it for us automatically.

Deriving Product Formulas automatically


By doing the previous derivations by hand, we identify three basic steps:
1. Devise the rule to be applied at each step
2. Apply the rule as many times as required
3. Simplify the signs of the terms.
We discuss the coding of each of these steps in turn.

1. Devise the rule to be applied at each step


DeriveProductFormulaOnce[F_, x_] :=
  GrassmannExpand[F ∧ x] /.
    (a_. (u_ ∨ v_)) ∧ z_ :>
      a ((u ∧ z) ∨ v) + (-1)^(n - RawGrade[u]) a (u ∨ (v ∧ z));

DeriveProductFormulaOnce takes an existing Product Formula F, together with a new 1-element x, and expands out their exterior product using GrassmannExpand. Each of the resulting terms will be of the form shown by the pattern (a_.(u_∨v_))∧z_, where u_, v_, and z_ may be exterior products, and a_ may be a scalar coefficient.

Each term is now transformed into two terms with the forms a ((u∧z)∨v) and (-1)^(n-RawGrade[u]) a (u∨(v∧z)). Here n is the (symbolic) dimension of the underlying linear space, and the function RawGrade computes the grade of an element without consideration of the dimension of the currently declared space. Both these features enable the formula derivation to apply to an element of any grade.


2. Apply the rule as many times as required


DeriveProductFormula[(A_ ∨ B_) ∧ X_] :=
  (A ∨ B) ∧ ComposeSimpleForm[X, 1] ≡
    DPFSimplify[Fold[DeriveProductFormulaOnce, A ∨ B,
      {ComposeSimpleForm[X, 1]} /. Wedge → Sequence]];

DeriveProductFormula takes the left hand side of the original Product Formula (A∨B)∧X, and uses Mathematica's Fold function to repeatedly apply DeriveProductFormulaOnce to the previous result, each time including a new 1-element factor of X (or of the form resulting from applying ComposeSimpleForm to X).

3. Simplify the signs of the terms.


DPFSimplify is a set of rules which simplify any products of the form (-1)^(a + b m + c n) which arise in the derivation. Here, a, b, and c may be odd or even integers. The symbol m may be symbolic; the symbol n is symbolic.

This code is explanatory only. You do not need to enter it to have DeriveProductFormula work.

Examples: Testing the code


We can test out the code by seeing if we get the same results as the formulas derived by hand above.
A; DeriveProductFormula[(α_m ∨ β_k) ∧ x]
(α_m ∨ β_k) ∧ x ≡ (-1)^(n-m) α_m ∨ (β_k ∧ x) + (α_m ∧ x) ∨ β_k

DeriveProductFormula[(α_m ∨ β_k) ∧ x_2]
(α_m ∨ β_k) ∧ (x1∧x2) ≡ α_m ∨ (β_k ∧ x1∧x2) + (-1)^(n-m) [ -(α_m ∧ x1) ∨ (β_k ∧ x2) + (α_m ∧ x2) ∨ (β_k ∧ x1) ] + (α_m ∧ x1∧x2) ∨ β_k


DeriveProductFormula[(α_m ∨ β_k) ∧ x_3]
(α_m ∨ β_k) ∧ (x1∧x2∧x3) ≡
(α_m ∧ x1) ∨ (β_k ∧ x2∧x3) - (α_m ∧ x2) ∨ (β_k ∧ x1∧x3) + (α_m ∧ x3) ∨ (β_k ∧ x1∧x2)
+ (-1)^(n-m) [ α_m ∨ (β_k ∧ x1∧x2∧x3) + (α_m ∧ x1∧x2) ∨ (β_k ∧ x3) - (α_m ∧ x1∧x3) ∨ (β_k ∧ x2) + (α_m ∧ x2∧x3) ∨ (β_k ∧ x1) ]
+ (α_m ∧ x1∧x2∧x3) ∨ β_k

DeriveProductFormula[(α_m ∨ β_k) ∧ x_4]
(α_m ∨ β_k) ∧ (x1∧x2∧x3∧x4) ≡
α_m ∨ (β_k ∧ x1∧x2∧x3∧x4) + (α_m ∧ x1∧x2) ∨ (β_k ∧ x3∧x4) - (α_m ∧ x1∧x3) ∨ (β_k ∧ x2∧x4) + (α_m ∧ x1∧x4) ∨ (β_k ∧ x2∧x3) + (α_m ∧ x2∧x3) ∨ (β_k ∧ x1∧x4) - (α_m ∧ x2∧x4) ∨ (β_k ∧ x1∧x3) + (α_m ∧ x3∧x4) ∨ (β_k ∧ x1∧x2)
+ (-1)^(n-m) [ -(α_m ∧ x1) ∨ (β_k ∧ x2∧x3∧x4) + (α_m ∧ x2) ∨ (β_k ∧ x1∧x3∧x4) - (α_m ∧ x3) ∨ (β_k ∧ x1∧x2∧x4) + (α_m ∧ x4) ∨ (β_k ∧ x1∧x2∧x3) - (α_m ∧ x1∧x2∧x3) ∨ (β_k ∧ x4) + (α_m ∧ x1∧x2∧x4) ∨ (β_k ∧ x3) - (α_m ∧ x1∧x3∧x4) ∨ (β_k ∧ x2) + (α_m ∧ x2∧x3∧x4) ∨ (β_k ∧ x1) ]
+ (α_m ∧ x1∧x2∧x3∧x4) ∨ β_k

Although these outputs are not quite as easy to read as the manually derived formulas in the previous section (because the outputs omit any brackets deemed unnecessary with the inbuilt precedence of the operators), one can at least see in these cases that the formulas are equivalent.

Computing the General Product Formula


If we look carefully at the form of the product formulas for different grades j of x_j, we can see that each of them is a specific case of a more general explicit formula. We will call this more general explicit formula the General Product Formula. We derive it below by deconstructing the result for x_4, and then confirm its identity to the original iteratively derived form in the first few cases.


F1 = DeriveProductFormula[(α_m ∨ β_k) ∧ x_4]
(α_m ∨ β_k) ∧ (x1∧x2∧x3∧x4) ≡
α_m ∨ (β_k ∧ x1∧x2∧x3∧x4) + (α_m ∧ x1∧x2) ∨ (β_k ∧ x3∧x4) - (α_m ∧ x1∧x3) ∨ (β_k ∧ x2∧x4) + (α_m ∧ x1∧x4) ∨ (β_k ∧ x2∧x3) + (α_m ∧ x2∧x3) ∨ (β_k ∧ x1∧x4) - (α_m ∧ x2∧x4) ∨ (β_k ∧ x1∧x3) + (α_m ∧ x3∧x4) ∨ (β_k ∧ x1∧x2)
+ (-1)^(n-m) [ -(α_m ∧ x1) ∨ (β_k ∧ x2∧x3∧x4) + (α_m ∧ x2) ∨ (β_k ∧ x1∧x3∧x4) - (α_m ∧ x3) ∨ (β_k ∧ x1∧x2∧x4) + (α_m ∧ x4) ∨ (β_k ∧ x1∧x2∧x3) - (α_m ∧ x1∧x2∧x3) ∨ (β_k ∧ x4) + (α_m ∧ x1∧x2∧x4) ∨ (β_k ∧ x3) - (α_m ∧ x1∧x3∧x4) ∨ (β_k ∧ x2) + (α_m ∧ x2∧x3∧x4) ∨ (β_k ∧ x1) ]
+ (α_m ∧ x1∧x2∧x3∧x4) ∨ β_k

The first thing we note is that, apart possibly from the sign (-1)^(n-m), the terms on the right hand side are all products of the form

(α_m ∧ xi_(4-r)) ∨ (β_k ∧ xi_r),    x_j ≡ xi_r ∧ xi_(4-r)

This means we can compute them using the r-span and r-cospan of x_j. For j equal to 4, r will range from 0 to 4. Thus we have


(α_m ∧ cospan_0[x_4]) ∨ (β_k ∧ span_0[x_4])
{(α_m ∧ x1∧x2∧x3∧x4) ∨ (β_k ∧ 1)}

(α_m ∧ cospan_1[x_4]) ∨ (β_k ∧ span_1[x_4])
{(α_m ∧ x2∧x3∧x4) ∨ (β_k ∧ x1), (α_m ∧ (-(x1∧x3∧x4))) ∨ (β_k ∧ x2), (α_m ∧ x1∧x2∧x4) ∨ (β_k ∧ x3), (α_m ∧ (-(x1∧x2∧x3))) ∨ (β_k ∧ x4)}

(α_m ∧ cospan_2[x_4]) ∨ (β_k ∧ span_2[x_4])
{(α_m ∧ x3∧x4) ∨ (β_k ∧ x1∧x2), (α_m ∧ (-(x2∧x4))) ∨ (β_k ∧ x1∧x3), (α_m ∧ x2∧x3) ∨ (β_k ∧ x1∧x4), (α_m ∧ x1∧x4) ∨ (β_k ∧ x2∧x3), (α_m ∧ (-(x1∧x3))) ∨ (β_k ∧ x2∧x4), (α_m ∧ x1∧x2) ∨ (β_k ∧ x3∧x4)}


(α_m ∧ cospan_3[x_4]) ∨ (β_k ∧ span_3[x_4])
{(α_m ∧ x4) ∨ (β_k ∧ x1∧x2∧x3), (α_m ∧ (-x3)) ∨ (β_k ∧ x1∧x2∧x4), (α_m ∧ x2) ∨ (β_k ∧ x1∧x3∧x4), (α_m ∧ (-x1)) ∨ (β_k ∧ x2∧x3∧x4)}

(α_m ∧ cospan_4[x_4]) ∨ (β_k ∧ span_4[x_4])
{(α_m ∧ 1) ∨ (β_k ∧ x1∧x2∧x3∧x4)}

But all these terms can be computed at once using the complete span and the complete cospan.

(α_m ∧ cospan[x_4]) ∨ (β_k ∧ span[x_4])
{{(α_m ∧ x1∧x2∧x3∧x4) ∨ (β_k ∧ 1)},
{(α_m ∧ x2∧x3∧x4) ∨ (β_k ∧ x1), (α_m ∧ (-(x1∧x3∧x4))) ∨ (β_k ∧ x2), (α_m ∧ x1∧x2∧x4) ∨ (β_k ∧ x3), (α_m ∧ (-(x1∧x2∧x3))) ∨ (β_k ∧ x4)},
{(α_m ∧ x3∧x4) ∨ (β_k ∧ x1∧x2), (α_m ∧ (-(x2∧x4))) ∨ (β_k ∧ x1∧x3), (α_m ∧ x2∧x3) ∨ (β_k ∧ x1∧x4), (α_m ∧ x1∧x4) ∨ (β_k ∧ x2∧x3), (α_m ∧ (-(x1∧x3))) ∨ (β_k ∧ x2∧x4), (α_m ∧ x1∧x2) ∨ (β_k ∧ x3∧x4)},
{(α_m ∧ x4) ∨ (β_k ∧ x1∧x2∧x3), (α_m ∧ (-x3)) ∨ (β_k ∧ x1∧x2∧x4), (α_m ∧ x2) ∨ (β_k ∧ x1∧x3∧x4), (α_m ∧ (-x1)) ∨ (β_k ∧ x2∧x3∧x4)},
{(α_m ∧ 1) ∨ (β_k ∧ x1∧x2∧x3∧x4)}}

Now we need to multiply the terms by a scalar factor which is 1 when the grade of the span is even, and (-1)^(n-m) when it is odd. Since the lists of terms in the collection above are arranged according to the grades of their spans, we then need a list whose elements alternate between 1 and (-1)^(n-m). To construct this list, we first define a function r which returns an alternating list of 0s and 1s of the correct length j + 1, multiply this list by n - m, and then use it as a power. Mathematica's inbuilt Listable attributes of Times and Power will again return the result we want.

r[j_] := Mod[Range[0, j], 2]
(-1)^(r[4] (n-m))
{1, (-1)^(n-m), 1, (-1)^(n-m), 1}
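For readers without Mathematica at hand, r and the alternating sign list are easily mimicked. A one-line Python rendering of Mod[Range[0,j],2] (illustrative only):

```python
def r(j):
    # Mod[Range[0, j], 2]: the alternating list 0, 1, 0, 1, ... of length j + 1
    return [i % 2 for i in range(j + 1)]

print(r(4))   # [0, 1, 0, 1, 0]

# raising (-1)^(n-m) to these exponents alternates 1, (-1)^(n-m), 1, ...
signs = ["1" if e == 0 else "(-1)^(n-m)" for e in r(4)]
print(signs)
```

The signs are kept symbolic here, matching the book's (-1)^(n-m) with n and m undetermined.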

Multiplying by the scalar factor now gives us the terms we want.


(-1)^(r[4] (n-m)) (α_m ∧ cospan[x_4]) ∨ (β_k ∧ span[x_4])
{{(α_m ∧ x1∧x2∧x3∧x4) ∨ (β_k ∧ 1)},
{(-1)^(n-m) (α_m ∧ x2∧x3∧x4) ∨ (β_k ∧ x1), (-1)^(n-m) (α_m ∧ (-(x1∧x3∧x4))) ∨ (β_k ∧ x2), (-1)^(n-m) (α_m ∧ x1∧x2∧x4) ∨ (β_k ∧ x3), (-1)^(n-m) (α_m ∧ (-(x1∧x2∧x3))) ∨ (β_k ∧ x4)},
{(α_m ∧ x3∧x4) ∨ (β_k ∧ x1∧x2), (α_m ∧ (-(x2∧x4))) ∨ (β_k ∧ x1∧x3), (α_m ∧ x2∧x3) ∨ (β_k ∧ x1∧x4), (α_m ∧ x1∧x4) ∨ (β_k ∧ x2∧x3), (α_m ∧ (-(x1∧x3))) ∨ (β_k ∧ x2∧x4), (α_m ∧ x1∧x2) ∨ (β_k ∧ x3∧x4)},
{(-1)^(n-m) (α_m ∧ x4) ∨ (β_k ∧ x1∧x2∧x3), (-1)^(n-m) (α_m ∧ (-x3)) ∨ (β_k ∧ x1∧x2∧x4), (-1)^(n-m) (α_m ∧ x2) ∨ (β_k ∧ x1∧x3∧x4), (-1)^(n-m) (α_m ∧ (-x1)) ∨ (β_k ∧ x2∧x3∧x4)},
{(α_m ∧ 1) ∨ (β_k ∧ x1∧x2∧x3∧x4)}}

Flattening this list and then summing the terms gives the final result.
F2 = (α_m ∨ β_k) ∧ (x1∧x2∧x3∧x4) ≡ S[Flatten[(-1)^(r[4] (n-m)) (α_m ∧ cospan[x_4]) ∨ (β_k ∧ span[x_4])]]

(α_m ∨ β_k) ∧ (x1∧x2∧x3∧x4) ≡
(α_m ∧ x1∧x2∧x3∧x4) ∨ (β_k ∧ 1) + (-1)^(n-m) (α_m ∧ x2∧x3∧x4) ∨ (β_k ∧ x1) + (-1)^(n-m) (α_m ∧ (-(x1∧x3∧x4))) ∨ (β_k ∧ x2) + (-1)^(n-m) (α_m ∧ x1∧x2∧x4) ∨ (β_k ∧ x3) + (-1)^(n-m) (α_m ∧ (-(x1∧x2∧x3))) ∨ (β_k ∧ x4) + (α_m ∧ x3∧x4) ∨ (β_k ∧ x1∧x2) + (α_m ∧ (-(x2∧x4))) ∨ (β_k ∧ x1∧x3) + (α_m ∧ x2∧x3) ∨ (β_k ∧ x1∧x4) + (α_m ∧ x1∧x4) ∨ (β_k ∧ x2∧x3) + (α_m ∧ (-(x1∧x3))) ∨ (β_k ∧ x2∧x4) + (α_m ∧ x1∧x2) ∨ (β_k ∧ x3∧x4) + (-1)^(n-m) (α_m ∧ x4) ∨ (β_k ∧ x1∧x2∧x3) + (-1)^(n-m) (α_m ∧ (-x3)) ∨ (β_k ∧ x1∧x2∧x4) + (-1)^(n-m) (α_m ∧ x2) ∨ (β_k ∧ x1∧x3∧x4) + (-1)^(n-m) (α_m ∧ (-x1)) ∨ (β_k ∧ x2∧x3∧x4) + (α_m ∧ 1) ∨ (β_k ∧ x1∧x2∧x3∧x4)
Applying GrassmannExpandAndSimplify will simplify the signs and allow us to confirm the result is identical to that produced by DeriveProductFormula.


%[F1] === %[F2]
True

The General Product Formula


To summarize the results above: we can take the Product Formula derived in the previous section and rewrite it in its computational form using the notions of complete span and complete cospan. The computational form relies on the fact that in GrassmannAlgebra the exterior and regressive products have an inbuilt Listable attribute. (This attribute means that a product of lists is automatically converted to a list of products.)

(α_m ∨ β_k) ∧ x_j ≡ S[Flatten[(-1)^(r[j] (n-m)) (α_m ∧ cospan_j[x]) ∨ (β_k ∧ span_j[x])]]    (3.59)

In this formula, span_j[x] and cospan_j[x] are the complete span and complete cospan of x_j. The sign function r just creates a list of elements alternating between 1 and (-1)^(n-m). For example, for j equal to 8:

(-1)^(r[8] (n-m))
{1, (-1)^(n-m), 1, (-1)^(n-m), 1, (-1)^(n-m), 1, (-1)^(n-m), 1}
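The combinatorial shape of this computable form can be mimicked without Mathematica: the r-th sublist of span/cospan pairs has Binomial[j,r] entries, and flattening gives the 2^j terms. A small Python sketch (the name span_pairs is illustrative, not part of the package):

```python
from itertools import combinations
from math import comb

def span_pairs(j, r):
    # (span, cospan) index pairs for the r-splits of a simple j-element,
    # signs ignored
    xs = list(range(1, j + 1))
    return [(c, tuple(i for i in xs if i not in c)) for c in combinations(xs, r)]

j = 4
counts = [len(span_pairs(j, r)) for r in range(j + 1)]
print(counts)                   # [1, 4, 6, 4, 1]
print(sum(counts) == 2 ** j)    # True: the 2^j terms of the Product Formula
assert all(len(span_pairs(j, r)) == comb(j, r) for r in range(j + 1))
```

This is exactly the count underlying the double sum form (3.63) below, where the inner sum runs over Binomial[p,r] splits.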

Comparing the two forms of the Product Formula


Encapsulating the computable formula
We can easily encapsulate this computable formula in a function GeneralProductFormula for easy comparison with the results from our initial iterative derivation using DeriveProductFormula.
GeneralProductFormula[(A_ ∨ B_) ∧ X_] :=
  (A ∨ B) ∧ ComposeSimpleForm[X, 1] ≡
    S[Flatten[(-1)^(r[G[X]] (n - G[A])) (A ∧ cospan[X]) ∨ (B ∧ span[X])]];
r[m_] := Mod[Range[0, m], 2];


Comparing results from DeriveProductFormula with GeneralProductFormula.


We can compare the results obtained by our two formulations. DeriveProductFormula invoked the successive application of a basic formula. GeneralProductFormula invoked an explicit formula for the general case. Below we show the two formulations give the same result for all simple elements from grades 1 to 10.
!10 ; Table[%[DeriveProductFormula[(α_m ∨ β_k) ∧ x_j]] === %[GeneralProductFormula[(α_m ∨ β_k) ∧ x_j]], {j, 1, 10}]
{True, True, True, True, True, True, True, True, True, True}

The invariance of the General Product Formula


In Chapter 2 we saw that the m:k-forms based on a simple factorized m-element X were, like the m-element, independent of the factorization.

S[(Γ1 ∧ cospan_k[X]) ∨ (Γ2 ∧ span_k[X])]


As we have seen in its development, the General Product Formula is composed of a number of m:kforms, and, as expected, shows the same invariance. For example, suppose we generate the formula F1 for a 3-element X equal to xyz:
F1 = GeneralProductFormula[(α_m ∨ β_k) ∧ (x∧y∧z)]

(α_m ∨ β_k) ∧ (x∧y∧z) ≡
(-1)^(n-m) (α_m ∧ 1) ∨ (β_k ∧ x∧y∧z) + (α_m ∧ x) ∨ (β_k ∧ y∧z) + (α_m ∧ (-y)) ∨ (β_k ∧ x∧z) + (α_m ∧ z) ∨ (β_k ∧ x∧y) + (-1)^(n-m) (α_m ∧ x∧y) ∨ (β_k ∧ z) + (-1)^(n-m) (α_m ∧ (-(x∧z))) ∨ (β_k ∧ y) + (-1)^(n-m) (α_m ∧ y∧z) ∨ (β_k ∧ x) + (α_m ∧ x∧y∧z) ∨ (β_k ∧ 1)

We can express the 3-element as a product of different 1-element factors by adding to any given factor, scalar multiples of the other factors. For example


F2 = GeneralProductFormula[(α_m ∨ β_k) ∧ (x∧y∧(z + a x + b y))]

(α_m ∨ β_k) ∧ (x∧y∧(a x + b y + z)) ≡
(-1)^(n-m) (α_m ∧ 1) ∨ (β_k ∧ x∧y∧(a x + b y + z)) + (α_m ∧ x) ∨ (β_k ∧ y∧(a x + b y + z)) + (α_m ∧ (-y)) ∨ (β_k ∧ x∧(a x + b y + z)) + (α_m ∧ (a x + b y + z)) ∨ (β_k ∧ x∧y) + (-1)^(n-m) (α_m ∧ x∧y) ∨ (β_k ∧ (a x + b y + z)) + (-1)^(n-m) (α_m ∧ (-(x∧(a x + b y + z)))) ∨ (β_k ∧ y) + (-1)^(n-m) (α_m ∧ y∧(a x + b y + z)) ∨ (β_k ∧ x) + (α_m ∧ x∧y∧(a x + b y + z)) ∨ (β_k ∧ 1)

Applying GrassmannExpandAndSimplify to these expressions shows that they are equal.


%[F1 ≡ F2]
True

Alternative forms for the General Product Formula


If we start from the General Product Formula we can rearrange it to interchange the span and cospan, so that the span elements become associated with α_m and the cospan elements become associated with β_k. Of course, there will need to be some associated changes in the signs of the terms as well. We start with the General Product Formula in the form developed above.

(α_m ∨ β_k) ∧ x_j ≡ S[Flatten[(-1)^(r[j] (n-m)) (α_m ∧ cospan_j[x]) ∨ (β_k ∧ span_j[x])]]

From Chapter 2: Multilinear Forms, we have shown that a form such as

(α_m ∧ cospan_j[x]) ∨ (β_k ∧ span_j[x])

can be rewritten in the form

(-1)^(rr[j]) R[(α_m ∧ span_j[x]) ∨ (β_k ∧ cospan_j[x])]

Thus we can write the General Product Formula in the alternative form

(α_m ∨ β_k) ∧ x_j ≡ S[Flatten[(-1)^(r[j] (n-m)) (-1)^(rr[j]) R[(α_m ∧ span_j[x]) ∨ (β_k ∧ cospan_j[x])]]]    (3.60)

Here, the definitions of r, rr and R are


r[m_] := Mod[Range[0, m], 2];
rr[m_] := If[EvenQ[m], r[m], 0];
ReverseAll[s_List] := Reverse[Reverse /@ s];
R = ReverseAll;

Encapsulating the second computable formula


Just as we encapsulated the first computable formula, we can similarly encapsulate this alternative one.
GeneralProductFormulaB[(A_ ∨ B_) ∧ X_] :=
  (A ∨ B) ∧ ComposeSimpleForm[X, 1] ≡
    S[Flatten[(-1)^(r[G[X]] (n - G[A])) (-1)^(rr[G[X]]) R[(A ∧ span[X]) ∨ (B ∧ cospan[X])]]];

Comparing results from DeriveProductFormula with GeneralProductFormulaB.


Again comparing the results of DeriveProductFormula with this alternative formula verifies their equivalence for values of j from 1 to 10.
!10 ; Table[%[DeriveProductFormula[(α_m ∨ β_k) ∧ x_j]] === %[GeneralProductFormulaB[(α_m ∨ β_k) ∧ x_j]], {j, 1, 10}]
{True, True, True, True, True, True, True, True, True, True}

The Decomposition Formula


In the General Product Formula, putting k equal to n-m permits the left-hand side to be expressed as a scalar multiple of x_j and hence expresses a type of 'decomposition' of x_j into components.


x_j ≡ S[Flatten[(-1)^(r[j] (n-m)) (α_m ∧ cospan_j[x]) ∨ (β_(n-m) ∧ span_j[x])]] / (α_m ∨ β_(n-m))    (3.61)

If the element to be decomposed is a 1-element, that is j = 1, there are just two terms, reducing the decomposition formula to:

x ≡ %[S[Flatten[(-1)^(r[1] (n-m)) (α_m ∧ cospan_1[x]) ∨ (β_(n-m) ∧ span_1[x])]]] / (α_m ∨ β_(n-m))
≡ [ (-1)^(m+n) α_m ∨ (β_(n-m) ∧ x) + (α_m ∧ x) ∨ β_(n-m) ] / (α_m ∨ β_(n-m))

To make this a little more readable we add some parentheses and absorb the sign by interchanging the order of the factors x and β_(n-m).

x ≡ [ (α_m ∧ x) ∨ β_(n-m) + α_m ∨ (x ∧ β_(n-m)) ] / (α_m ∨ β_(n-m))    (3.62)

In Chapter 4: Geometric Interpretations we will explore some of the geometric significance of these formulas. They will also find application in Chapter 6: The Interior Product.

Exploration: Dual forms of the General Product Formulas


You can obtain the dual forms of the various specific product formulas by applying the Dual function to them. Below we simply display the duals of the first three specific cases.

First Product Formula


F1 = DeriveProductFormula[(α_m ∨ β_k) ∧ x]
(α_m ∨ β_k) ∧ x ≡ (-1)^(n-m) α_m ∨ (β_k ∧ x) + (α_m ∧ x) ∨ β_k


Dual[F1]
(α_m ∧ β_k) ∨ x_(n-1) ≡ (-1)^m α_m ∧ (β_k ∨ x_(n-1)) + (α_m ∨ x_(n-1)) ∧ β_k
Second Product Formula


F2 = DeriveProductFormula[(α_m ∨ β_k) ∧ x_2]
(α_m ∨ β_k) ∧ (x1∧x2) ≡ α_m ∨ (β_k ∧ x1∧x2) + (-1)^(n-m) [ -(α_m ∧ x1) ∨ (β_k ∧ x2) + (α_m ∧ x2) ∨ (β_k ∧ x1) ] + (α_m ∧ x1∧x2) ∨ β_k

Dual[F2]
(α_m ∧ β_k) ∨ (x1_(n-1) ∨ x2_(n-1)) ≡ α_m ∧ (β_k ∨ x1_(n-1) ∨ x2_(n-1)) + (-1)^m [ -(α_m ∨ x1_(n-1)) ∧ (β_k ∨ x2_(n-1)) + (α_m ∨ x2_(n-1)) ∧ (β_k ∨ x1_(n-1)) ] + (α_m ∨ x1_(n-1) ∨ x2_(n-1)) ∧ β_k

Third Product Formula


F3 = DeriveProductFormula[(α_m ∨ β_k) ∧ x_3]
(α_m ∨ β_k) ∧ (x1∧x2∧x3) ≡
(α_m ∧ x1) ∨ (β_k ∧ x2∧x3) - (α_m ∧ x2) ∨ (β_k ∧ x1∧x3) + (α_m ∧ x3) ∨ (β_k ∧ x1∧x2)
+ (-1)^(n-m) [ α_m ∨ (β_k ∧ x1∧x2∧x3) + (α_m ∧ x1∧x2) ∨ (β_k ∧ x3) - (α_m ∧ x1∧x3) ∨ (β_k ∧ x2) + (α_m ∧ x2∧x3) ∨ (β_k ∧ x1) ]
+ (α_m ∧ x1∧x2∧x3) ∨ β_k


Dual[F3]
(α_m ∧ β_k) ∨ (x1_(n-1) ∨ x2_(n-1) ∨ x3_(n-1)) ≡
(α_m ∨ x1_(n-1)) ∧ (β_k ∨ x2_(n-1) ∨ x3_(n-1)) - (α_m ∨ x2_(n-1)) ∧ (β_k ∨ x1_(n-1) ∨ x3_(n-1)) + (α_m ∨ x3_(n-1)) ∧ (β_k ∨ x1_(n-1) ∨ x2_(n-1))
+ (-1)^m [ α_m ∧ (β_k ∨ x1_(n-1) ∨ x2_(n-1) ∨ x3_(n-1)) + (α_m ∨ x1_(n-1) ∨ x2_(n-1)) ∧ (β_k ∨ x3_(n-1)) - (α_m ∨ x1_(n-1) ∨ x3_(n-1)) ∧ (β_k ∨ x2_(n-1)) + (α_m ∨ x2_(n-1) ∨ x3_(n-1)) ∧ (β_k ∨ x1_(n-1)) ]
+ (α_m ∨ x1_(n-1) ∨ x2_(n-1) ∨ x3_(n-1)) ∧ β_k

The double sum form of the General Product Formula


Although not directly computable, the following double sum form of the General Product Formula may be shown to be equivalent to the computable form. It will find particular application in Chapter 6 where we convert it to its interior product forms.
(α_m ∨ β_k) ∧ x_p ≡ Σ_{r=0}^{p} (-1)^(r(n-m)) Σ_{i=1}^{ν} (α_m ∧ xi_(p-r)) ∨ (β_k ∧ xi_r)    (3.63)

x_p ≡ x1_r ∧ x1_(p-r) ≡ x2_r ∧ x2_(p-r) ≡ ⋯ ≡ xν_r ∧ xν_(p-r),    ν ≡ Binomial[p, r]

3.10 Summary
This chapter has introduced the regressive product as a true dual to the exterior product. This means that to every theorem T involving exterior and regressive products there corresponds a dual theorem Dual[T] such that T ≡ Dual[Dual[T]]. But although the regressive product axioms assert that the regressive product of an m-element with a k-element is an (m+k-n)-element (where n is the dimension of the underlying linear space), the axiom sets for the exterior and regressive products alone do not provide a mechanism for deriving such an element explicitly. A further explicit axiom involving both exterior and regressive products is necessary. We called this axiom the Common Factor Axiom and motivated it by a combination of algebraic and geometric argument. If the exterior product is viewed as a sort of 'union' of independent elements, the regressive product may be viewed as a sort of 'intersection'. Because Grassmann considered only Euclidean spaces, used the same notation for both exterior and regressive products, and equated scalars and pseudo-scalars, the Common Factor Axiom was effectively hidden in his notation.

Grassmann Algebra Book.nb

204

The Common Factor Axiom was then extended to prove one of the most important formulae in the Grassmann algebra, the Common Factor Theorem. This theorem enables the regressive product of any two arbitrary elements of the algebra to be computed in an effective manner. It will be shown in Chapter 5: The Complement that if the underlying linear space is endowed with a metric, then the result is specific, and depends on the metric for its precise value. Otherwise, if there is no metric, the element is specific only up to congruence (that is, up to an arbitrary scalar factor). It was then shown how the Common Factor Theorem could be used in the factorization of simple elements. Finally, it was shown how the Common Factor Theorem leads to a suite of formulas called Product Formulas. These formulas expand expressions involving an exterior product of an element with a regressive product of elements, or, involving a regressive product of an element with an exterior product of elements. In Chapter 6 we will show how these lead naturally to formulas where the regressive products are replaced by interior products. These interior product forms of the product formulae will find application throughout the rest of the book. The next chapter is an interlude in the development of the algebraic fundamentals. In it we begin to explore one of Grassmann algebra's most enticing interpretations: geometry. But at this stage we will only be discussing non-metric geometry. Chapters 5 and 6 to follow will develop the algebra's metric concepts, and thus complete the fundamentals required for subsequent applications and geometric interpretations in metric space.


4 Geometric Interpretations

4.1 Introduction
In Chapter 2, the exterior product operation was introduced onto the elements of a linear space L₁, enabling the construction of a series of new linear spaces Lₘ possessing new types of elements. In this chapter, the elements of L₁ will be interpreted, some as vectors, some as points. This will be done by singling out one particular element of L₁ and conferring upon it the interpretation of origin point. All the other elements of the linear space then divide into two categories. Those that involve the origin point will be called (weighted) points, and those that do not will be called vectors. As this distinction is developed in succeeding sections it will be seen to be both illuminating and consistent with accepted notions. Vectors and points will be called geometric interpretations of the elements of L₁.

Some of the more important consequences, however, of the distinction between vectors and points arise from the distinctions thereby generated in the higher grade spaces Lₘ. It will be shown that a simple element of Lₘ takes on two interpretations. The first, that of a multivector (or m-vector), is when the m-element can be expressed in terms of vectors alone. The second, that of a bound multivector (or bound m-vector), is when the m-element requires both points and vectors to express it. These simple interpreted elements will be found useful for defining geometric entities such as lines and planes and their higher dimensional analogues known as multiplanes (or m-planes). Unions and intersections of multiplanes may then be calculated straightforwardly by using the bound multivectors which define them. A multivector may be visualized as a 'free' entity with no location. A bound multivector may be visualized as 'bound' through a location in space.

It is not only simple interpreted elements which will be found useful in applications, however. In Chapter 7: Exploring Screw Algebra and Chapter 8: Exploring Mechanics, a basis for a theory of mechanics is developed whose principal quantities (for example, systems of forces, momentum of a system of particles, velocity of a rigid body) may be represented by a general interpreted 2-element, that is, by the sum of a bound vector and a bivector.

In the literature of the nineteenth century, wherever vectors and points were considered together, vectors were introduced as point differences. When it is required to designate physical quantities it is not satisfactory that all vectors should arise as the differences of points. In later literature, this problem appeared to be overcome by designating points by their position vectors alone, making vectors the fundamental entities [Gibbs 1886]. This approach is not satisfactory either, since by excluding points much of the power of the calculus for dealing with free and located entities together is excluded. In this book we do not require that vectors be defined in terms of points, but rather propose a difference of interpretation between the origin element and those elements not involving the origin. This approach permits the existence of points and vectors together without the vectors necessarily arising as point differences.


In this chapter, as in the preceding chapters, L₁ and the spaces Lₘ do not yet have a metric. That is, there is no way of calculating a measure or magnitude associated with an element. The interpretation discussed therefore may also be supposed non-metric. In the next chapter a metric will be introduced onto the uninterpreted spaces and the consequences of this for the interpreted elements developed.

In summary then, it is the aim of this chapter to set out the distinct non-metric geometric interpretations of m-elements brought about by the interpretation of one specific element of L₁ as an origin point.

4.2 Geometrically Interpreted 1-Elements


Vectors
Depicting vectors
The most common current geometric interpretation for an element of a linear space is that of a vector. We suppose in this chapter that the linear space does not have a metric (that is, we cannot calculate magnitudes). Such a vector has the geometric properties of direction and sense (but no location and no magnitude), and will be graphically depicted (in the usual way) by a directed and sensed line segment thus:

The usual depiction of a vector. But this vector is located nowhere!

This depiction is unsatisfactory in that the line segment has a definite location and length whilst the vector it is depicting is supposed to possess neither of these properties. One way perhaps to depict an unlocated entity is to show it in many locations.

One way to depict an unlocated entity is to show it in many locations.


Another way to emphasize that a vector has no location is show it on a dynamic graphic. If you are viewing this notebook live with the GrassmannAlgebra package loaded, you can use your mouse to manipulate the vector below to reinforce that any of the vector's locations that retain its direction and sense are equally satisfactory depictions for it. This dynamic depiction is a somewhat more faithful way of depicting something without locating it. But it is still not fully satisfactory, since we are depicting the property of no-location by using any location.

Another way is to allow it to be moveable to any location. (You can drag the arrow by its name with your mouse.)

A linear space, all of whose elements are interpreted as vectors, will be called a vector space.

Depicting the addition of vectors


The geometric interpretation of the addition operation of a vector space is the triangle (or parallelogram) rule. To effect a sum of two vectors using the triangle rule in this 'unlocated' view of vectors, you would slide them (always in a direction parallel to themselves) from where you had each of them conveniently docked, until their tails touched, make the sum in the standard textbook manner, then park the result anywhere you please.

The triangle rule for the addition of two vectors.

From this point onwards we will always show vectors docked in a visually convenient location, often one that is suggestive of the operation we wish to perform. Of course as discussed above, this does not mean it is actually located there.

Comparing vectors
In a metric space we can define the magnitude of a vector and hence compare vectors by comparing their magnitudes even if they are not in the same direction. In spaces where no metric has been imposed, as are the spaces we are discussing in this chapter, we cannot define the magnitude of a vector, and hence we cannot compare it with another vector in a different direction. However, we can compare vectors in the same direction. Vectors in the same direction are congruent. That is, one is a scalar multiple (not zero) of the other. This scalar multiple is called the congruence factor.


Thus vectors a x and b x are congruent with congruence factor a/b (or b/a). Most often it is not the magnitude of the congruence factor that is important, but its sign. If it is positive, the vectors may be said to have the same orientation. If it is negative the vectors may be said to have an opposite orientation. Orientation is thus a relative concept. It applies in the same way to elements of any grade. For the special case of vectors however, the term sense is often used synonymously with orientation. Below we depict three vectors with congruence factors (relative to x) of 1, -1 and 2.


The effect of multiplying the vector x by factors 1, -1, and 2.

Points
The origin
In order to describe position in space it is necessary to have a reference point. This point is usually called the origin. Rather than the standard technique of implicitly assuming the origin and working only with vectors to describe position, we find it important for later applications to augment the vector space with the origin as a new element to create a new linear space with one more element in its basis. For reasons which will appear later, such a linear space will be called a bound vector space. The only difference between the origin element and the vector elements of the linear space is their interpretation. The origin element is interpreted as a point. We will denote it in GrassmannAlgebra by 𝔒, which you can access from the palette or type as *5dsO (a five-star followed by a double-struck capital O).

In order to describe position it is necessary to have a reference point.


The sum of the origin and a vector is a point


The bound vector space in addition to its vectors and its origin now possesses a new set of elements requiring interpretation: those formed from the sum of the origin and a vector.
P " + x

It is these elements that will be used to describe position and that we will call points. The vector x is called the position vector of the point P. A position vector is just like any other vector and is therefore, of course, located nowhere (even though we show it in a conveniently suggestive position!). We depict points (other than the origin) by blue points.

The sum of the origin and a vector is a point.

The difference of two points is a vector


It follows immediately from this definition of a point that the difference of two points is a vector:
P - Q H" + pL - H" + q L p - q x

The difference of two points is a vector.

Remember that the bound vector space we are discussing does not yet have a metric. That is, the distance between two points (the magnitude of the vector equal to the point difference) is not meaningful. However, the relative distances between points on the same line can be measured by means of their intensities.
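The origin bookkeeping behind this can be sketched in a few lines of Python. This is our own minimal representation for illustration, not the GrassmannAlgebra package: a 1-element of a bound 3-space carries the origin coefficient explicitly as its first component.

```python
# A 1-element of a bound 3-space is modeled as [w, x1, x2, x3], where w is the
# coefficient of the origin O: a point has w == 1, a vector has w == 0.
def point(*coords):
    return [1, *coords]

def sub(a, b):
    return [u - v for u, v in zip(a, b)]

P = point(3, 1, 2)
Q = point(1, 1, 0)

d = sub(P, Q)
print(d)   # [0, 2, 0, 2]: the leading 0 means the result is a vector, not a point
```

The origin coefficients cancel in the subtraction, which is exactly why the difference of two points is interpreted as a vector.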

The sum of a point and a vector is a point


The sum of a point and a vector is another point.


Q ≡ P + y ≡ (𝔒 + x) + y ≡ 𝔒 + (x + y)

The sum of a point and a vector is another point

and so a vector may be viewed as a carrier of points. That is, the addition of a vector to a point carries or transforms it to another point. (See the historical note below).

The sum of two points is a weighted point


The sum of two points is not quite another point. Rather, it is a point with weight 2, situated midway between the two points.
P1 + P2 ≡ (𝔒 + x1) + (𝔒 + x2) ≡ 2 𝔒 + (x1 + x2) ≡ 2 (𝔒 + (x1 + x2)/2) ≡ 2 Pc

Wherever pictorially feasible we will show weighted points with their weights attached to their names, and/or a size or colour change to distinguish them from the 'pure' points on the same graphic.

The sum of two points is a point of weight 2 located mid-way between them

Similarly, the sum of n points is a point with weight n, situated at the centre of mass (centre of gravity) of the points.
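The coordinate bookkeeping behind these point sums can be sketched in a few lines of Python. This is our own illustrative representation, not the GrassmannAlgebra package: a 1-element carries the origin coefficient explicitly as its first component.

```python
# A 1-element of a bound 3-space is modeled as [w, x1, x2, x3]: w is the
# coefficient of the origin O, the rest are the vector components. A (pure)
# point has w == 1, a vector has w == 0, a weighted point has w == its weight.
def point(*coords):
    return [1, *coords]

def add(a, b):
    return [u + v for u, v in zip(a, b)]

P1 = point(1, 3, -4)
P2 = point(2, -1, -2)

S = add(P1, P2)
print(S)          # [2, 3, 2, -6]: a point of weight 2

# Dividing out the weight recovers the pure point: the midpoint of P1 and P2.
weight, position = S[0], S[1:]
midpoint = point(*[c / weight for c in position])
print(midpoint)   # [1, 1.5, 1.0, -3.0]
```

The weight accumulating on the origin coefficient is what makes the sum of n points a point of weight n at their centre of mass.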

The sum of weighted points


A scalar multiple of a point (of the form m P or mP) will be called a weighted point. Weighted points are summed just like a set of point masses is deemed equivalent to their total mass located at their centre of mass. For example the sum of two weighted points with weights (masses) m1 and m2 is equal to the weighted point with weight (m1 + m2) located on the line between them. The location is the point Pc about which the 'mass moments' m1 l1 and m2 l2 are equal.


m1 P1 + m2 P2 ≡ (m1 + m2) Pc,   where l1 m1 = l2 m2

The sum of two weighted points is a weighted point between them

In the more general case we have


(∑ mi) Pc ≡ ∑ mi Pi

We can also easily express this equation in terms of the position vectors of the weighted points.
∑ mi Pi ≡ ∑ mi (𝔒 + xi) ≡ (∑ mi) (𝔒 + (∑ mi xi)/(∑ mi))

Thus the sum of a number of mass-weighted points mi Pi is equivalent to the centre of gravity 𝔒 + (∑ mi xi)/(∑ mi) weighted by the total mass ∑ mi. A numerical example is included later in this section.

As will be seen in Section 4.4 below, a weighted point may also be viewed as a bound scalar.

Historical Note
Sir William Rowan Hamilton in his Lectures on Quaternions [Hamilton 1853] was the first to introduce the notion of vector as 'carrier' of points.
... I regard the symbol B - A as denoting "the step from A to B": namely, that step by making which, from the given point A, we should reach or arrive at the sought point B; and so determine, generate, mark or construct that point. This step, (which we always suppose to be a straight line) may also in my opinion be properly called a vector; or more fully, it may be called "the vector of the point B from the point A": because it may be considered as having for its office, function, work, task or business, to transport or carry (in Latin vehere) a moveable point, from the given or initial position A, to the sought or final position B.

Declaring a basis for a bound vector space


Any geometry that we do with points in GrassmannAlgebra will require us to declare the origin 𝔒 as one of the elements of the basis of the space. We have already seen that a shorthand way of declaring a basis {e1, e2, …, en} is by entering !n. Declaring the augmented basis {𝔒, e1, e2, …, en} can be accomplished by entering the alias "n (most easily from the palette). These are 5-star-script-capital letters subscripted with the integer n denoting the desired 'vectorial' dimension of the space. For example, entering "2 gives a basis for the plane.


" 2 8#, e1 , e2 <


"2

e2 ! e1

Declaring a basis for the plane

We do not depict the basis vectors at right angles because the space does not yet have a metric, and orthogonality is a metric concept. As always, you can confirm your currently declared basis by checking the status pane at the top of the palette. We may often precede a calculation with one of these DeclareBasis aliases followed by a semicolon. This accomplishes the declaration of the basis but for brevity suppresses the confirming output. For example, below we declare a bound 3-space, and then compose a palette of the basis of the algebra constructed on it.
"3 ; BasisPalette

Basis Palette

Λ0:  1
Λ1:  𝔒   e1   e2   e3
Λ2:  𝔒∧e1   𝔒∧e2   𝔒∧e3   e1∧e2   e1∧e3   e2∧e3
Λ3:  𝔒∧e1∧e2   𝔒∧e1∧e3   𝔒∧e2∧e3   e1∧e2∧e3
Λ4:  𝔒∧e1∧e2∧e3

Of course, you can declare your own bound vector space basis if you wish. For example if you want the vector basis elements to be i, j, and k, you could enter

DeclareBasis[{i, j, k, 𝔒}]
{𝔒, i, j, k}

(DeclareBasis will always rearrange the ordering to make the origin 𝔒 come first).


Composing vectors and points


Composing vectors
Once you have declared a basis for your bound vector space, say "3 ,
" 3 8#, e1 , e2 , e3 <

you can quickly compose any vectors in this basis with ComposeVector or its alias ! . The placeholder is used for entering the symbol upon which you want the coefficients to be based. Here we choose a.
Va = &a
a1 e1 + a2 e2 + a3 e3

ComposeVector automatically declares the ai to be scalar symbols.

If you enter ! , you get an expression with placeholders in which you can enter your own coefficients just by clicking on any placeholder, and tabbing through them.
& e1 + e2 + e3

Note however that in this case if you want any symbolic coefficients you enter to be recognized as scalar symbols, you would need to ensure this yourself.

Composing points
Similarly, to compose a point you can use " .
Pb = 'b
𝔒 + b1 e1 + b2 e2 + b3 e3

'
𝔒 + e1 + e2 + e3

Now you can use the points and vectors you have composed in expressions. For example, here we add Pb and 2 Va , then use GrassmannSimplify (alias $) to collect their coefficients.
$[Pb + 2 Va]
𝔒 + (2 a1 + b1) e1 + (2 a2 + b2) e2 + (2 a3 + b3) e3


Example: Calculation of the centre of mass


Suppose a space with basis {𝔒, e1, e2, e3} and a set of masses Mi situated at points Pi. It is required to find their centre of mass. First declare the basis, then enter the mass points.
" 3 8#, e1 , e2 , e3 < M1 M2 M3 M4 = 2 P1 ; = 4 P2 ; = 7 P3 ; = 5 P4 ; P1 P2 P3 P4 = " + e1 + 3 e2 - 4 e3 ; = " + 2 e1 - e2 - 2 e3 ; = " - 5 e1 + 3 e2 - 6 e3 ; = " + 4 e1 + 2 e2 - 9 e3 ;

We simply add the mass-weighted points.


M = ∑ᵢ₌₁⁴ Mᵢ

5 (𝔒 + 4 e1 + 2 e2 - 9 e3) + 7 (𝔒 - 5 e1 + 3 e2 - 6 e3) + 2 (𝔒 + e1 + 3 e2 - 4 e3) + 4 (𝔒 + 2 e1 - e2 - 2 e3)

Simplifying this gives a weighted point with weight 18, the scalar attached to the origin.
M = %[M]
18 𝔒 - 5 e1 + 33 e2 - 103 e3

To take the weight out as a factor, that is, expressing the result in the form mass ! point, we can use ToWeightedPointForm.
M = ToWeightedPointForm[M]
18 (𝔒 - (5/18) e1 + (11/6) e2 - (103/18) e3)


The centre of mass of four weighted points

Thus the total mass is 18 situated at the point Pc ≡ 𝔒 - (5/18) e1 + (11/6) e2 - (103/18) e3. If you are reading this notebook in Mathematica, you can rotate this graphic, and check the positions of the points by hovering your mouse over their symbols.
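The numbers above are easy to check independently. The following Python sketch (our own bookkeeping, not the GrassmannAlgebra package) reproduces the weighted sum: each mass-weighted point mP is carried as a 4-tuple whose first component is the weight attached to the origin 𝔒.

```python
from fractions import Fraction

masses = [2, 4, 7, 5]
positions = [(1, 3, -4), (2, -1, -2), (-5, 3, -6), (4, 2, -9)]

# Each mass-weighted point m P is the 4-tuple [m, m x1, m x2, m x3];
# summing the first components accumulates the total weight on the origin O.
total = [0, 0, 0, 0]
for m, (x, y, z) in zip(masses, positions):
    total = [a + b for a, b in zip(total, [m, m * x, m * y, m * z])]

print(total)  # [18, -5, 33, -103], i.e. 18 O - 5 e1 + 33 e2 - 103 e3

# Dividing out the weight recovers the centre of mass Pc.
centre = [Fraction(c, total[0]) for c in total[1:]]
assert centre == [Fraction(-5, 18), Fraction(11, 6), Fraction(-103, 18)]
```

The exact fractions agree with the ToWeightedPointForm output: weight 18 at 𝔒 - (5/18) e1 + (11/6) e2 - (103/18) e3.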

4.3 Geometrically Interpreted 2-Elements


Simple geometrically interpreted 2-elements
It has been seen in Chapter 2 that the linear space Λ2 may be generated from Λ1 by the exterior product operation. In the preceding section the elements of Λ1 have been given two geometric interpretations: that of a vector and that of a point. These interpretations in turn generate various other interpretations for the elements of Λ2.

In Λ2 there are at first sight three possibilities for simple elements:

1. x∧y (vector by vector).
2. P∧x (point by vector).
3. P1∧P2 (point by point).

However, P1∧P2 may be expressed as P1∧(P2 - P1), which is the product of a point and a vector, and thus reduces to the second case. There are thus two simple interpreted elements in Λ2:

1. x∧y (the simple bivector).
2. P∧x (the bound vector).

A note on terminology
The term bound as in the bound vector P∧x indicates that the vector x is conceived of as bound through the point P, rather than to the point P, since the latter conception would give the incorrect impression that the vector was located at the point P. By adhering to the terminology bound through, we get a slightly more correct impression of the 'freedom' that the vector enjoys. The term bound vector space will be used to denote a vector space to whose basis has been added an origin point. This should be read as bound vector-space, rather than bound-vector space. The term bound m-space will be used to denote an m-dimensional vector space to whose basis has been added an origin point.


Bivectors
Depicting bivectors
Earlier in this chapter, a vector was depicted graphically by a directed and sensed line segment supposed to be located nowhere in particular. In like manner we depict a simple bivector by depicting its two component vectors supposed located nowhere in particular. But to indicate that they are part of the same exterior product, the vectors are depicted conveniently docked head to tail in the order of the product, and 'joined' by a somewhat transparent parallelogram in the same colour, named by default with the exterior product symbol.

Depicting a simple bivector. This bivector is located nowhere.

This bivector defines a planar 2-direction, the precise 2-dimensional analog of the unlocated vector. Its vectors are located nowhere, and it is located nowhere. The parallelogram should be viewed as an artifact which is used to visually tie the two vectors together in the exterior product. We motivate this choice by noting that if this bivector were in a metric space, the area of the parallelogram would correspond to the magnitude of the bivector. We will discuss this further in Chapter 6. If we change the sign of both vectors in the product, we do not change the bivector, but we get a different depiction.

Changing the signs of both vectors gives the same bivector.

Any bivector congruent to this bivector (that is, a scalar multiple of it) will represent the same 2-direction. Reversing the order of the factors in a bivector is equivalent to multiplying it by -1, producing an 'opposite orientation'.


Congruent bivectors with different orientations.

This parallelogram depiction of the simple bivector is still misleading in a further major respect that did not arise for vectors. It incorrectly suggests a specific shape of the parallelogram. Indeed, since x∧y ≡ x∧(x + y), another valid depiction of this simple bivector would be a parallelogram with sides constructed from vectors x and x + y.

x∧y ≡ x∧(x + y)

Or, more generally, for any scalar multiplier k:


x∧y ≡ x∧(k x + y)

However in what follows, just as we will usually depict a bivector docked in a location convenient for the discussion, so too we will usually depict it in the shape most convenient for the discussion at hand, usually one with the simplest factors.

Bivectors in a metric space


In the following chapter a metric will be introduced onto Λ1 from which a metric is induced onto Λ2.

This will permit the definition of the measure of a vector (its length) and the measure of the simple bivector (its area). The measure of a simple bivector is geometrically interpreted as the area of the parallelogram formed by its two vectors. However, as demonstrated above, the bivector can be expressed in terms of an infinite number of pairs of vectors. Despite this, the simple geometric fact that they have the same base and height shows that parallelograms formed from all of them have the same area. For example, the areas of the parallelograms in the previous figure are the same. Thus the area definition of the measure of the bivector is truly an invariant measure. From this point of view the parallelogram depiction in a metric space is correctly suggestive, although the parallelogram is not of fixed shape.

In Chapter 6 it will be shown that this parallelogram-area notion of measure is simply the 2-dimensional case of a very much more general notion of measure applicable to entities of any grade.

A sum of simple bivectors is called a bivector. In two and three dimensions all bivectors are simple. This will have important consequences for our exploration of screw algebra in Chapter 7 and its application to mechanics in Chapter 8.

Earlier in the chapter it was remarked that a vector may be viewed as a 'carrier' of points. Analogously, a simple bivector may be viewed as a carrier of bound vectors. This view will be more fully explored in the next section.


Bound vectors
In mechanics the concept of force is paramount. In Chapter 8: Exploring Mechanics we will show that a force may be represented by a bound vector, and that a system of forces may be represented by a sum of bound vectors. It has already been shown that a bound vector may be expressed either as the product of a point with a vector or as the product of two points.
P1∧x ≡ P1∧(P2 - P1) ≡ P1∧P2,   where P2 ≡ P1 + x

The bound vector in the form P1∧x defines a line through the point P1 in the direction of x. Similarly, in the form P1∧P2 it defines the line through P1 and P2. Here we use the word 'define' in the context discussed in Chapter 2 in the section Spaces and congruence. To say that a bound vector B defines a line means that the line may be defined as the set of all points P which belong to the space of B, that is, such that B∧P ≡ 0. A consequence of this definition is that any other bound vector congruent to B defines the same line. We depict a bound vector as located in its line in either of two ways, by:

1. Two points and their vector difference.
2. A single point and a vector.

In both cases the vector indicates the order of the factors in the exterior product.

Point-vector depiction

Point-point depiction

Two different depictions of a bound vector in its line.

These graphical depictions of the bound vector are each misleading in their own way. The first (point-vector) depiction suggests that the vector lies in the line. Since the vector is located nowhere, it is certainly not bound to the point. However as a component of the bound vector, we will often find it convenient to imagine it docked in the line, and to speak of it as bound through the point. And the point, again as a component of the bound vector, can be anywhere in the line, since if P and P* are any two points in the line, P - P* is a vector of the same direction as x and the bound vector can be expressed as either P∧x or P*∧x.
(P - P*)∧x ≡ 0    P∧x ≡ P*∧x

Hence, although a lone vector is located nowhere, and a lone point is immoveably fixed, the bound vector formed by the exterior product of the two is bound to a line through the point in the direction of the vector. And to add to this representational conundrum, the bound vector has no specific location in the line. The following diagram depicts a bound vector docked in two different locations in the line.


A bound vector displaying its ability to slide along its line.


The second (point-point) depiction suggests that the depicted points are of specific importance over other points in the line. However, any pair of points in the line with the same vector difference may be used to express the bound vector. Let x ≡ Q - P ≡ Q* - P*, then

P∧x ≡ P*∧x ≡ P∧(Q - P) ≡ P*∧(Q* - P*) ≡ P∧Q ≡ P*∧Q*

A bound vector displaying its ability to look like a sliding pair of points.

It has been mentioned in the previous section that a simple bivector may be viewed as a 'carrier' of bound vectors. To see this, take any bound vector P∧x and a bivector whose space contains x. The bivector may be expressed in terms of x and some other vector, y say, yielding y∧x. Thus:

P∧x + y∧x ≡ (P + y)∧x ≡ P*∧x

The bivector as a carrier of bound vectors.

The geometric interpretation of the addition of such a simple bivector to a bound vector is then similar to that for the addition of a vector to a point, that is, a shift in position of the bound element.

Composing bivectors and bound vectors


Composing bivectors
Once you have declared a basis for your bound vector space, say "4 ,


" 4 8#, e1 , e2 , e3 , e4 <

you can quickly compose any bivectors in this basis with ComposeBivector or its alias # . The placeholder is used for entering the symbol upon which you want the coefficients to be based. Here we choose a.
Ba = %a
a1 e1∧e2 + a2 e1∧e3 + a3 e1∧e4 + a4 e2∧e3 + a5 e2∧e4 + a6 e3∧e4

Just as with vectors and points, if you enter # , you get an expression with placeholders in which you can enter your own coefficients. (But if you want to do further manipulation, make sure they are declared as scalar symbols).
%
e1∧e2 + e1∧e3 + e1∧e4 + e2∧e3 + e2∧e4 + e3∧e4

Composing simple bivectors


In a 4-space, bivectors are not necessarily simple. If you want to compose a simple bivector, you can use ComposeSimpleBivector or its alias $ .
Bb = (b
(e1 b1 + e2 b2 + e3 b3 + e4 b4)∧(e1 b5 + e2 b6 + e3 b7 + e4 b8)

Composing bound vectors


Similarly, to compose a bound vector you can use ComposeBoundVector, or its alias % . The symbol ) has been chosen because as we have seen, bound vectors are often used to define lines.
L1 = )q
(𝔒 + e1 q1 + e2 q2 + e3 q3 + e4 q4)∧(e1 q5 + e2 q6 + e3 q7 + e4 q8)

The sum of two parallel bound vectors


Suppose we have two bound vectors %1 and %2, and we wish to explore how their sum might be expressed. In this section we explore the special case where they are parallel. The results are valid in a space of any dimension.

The sum of two parallel bound vectors whose vectors are equal but of opposite sense.
Consider a space of any dimension. Let %1 ≡ P∧x and %2 ≡ Q∧y be two bound vectors. Then their sum % is


% ≡ %1 + %2 ≡ P∧x + Q∧y

Since the bound vectors are parallel we can write y equal to a x (where a is a scalar) so that

% ≡ P∧x + Q∧y ≡ P∧x + Q∧(a x) ≡ (P + a Q)∧x

In the special case that y is equal to - x (that is a is -1), we have that % reduces to a bivector
% ≡ (P - Q)∧x

The sum of two bound vectors, whose vectors are equal but of opposite sense, is a bivector.

In mechanics (see Chapters 7 and 8) this is equivalent to two oppositely directed forces of equal magnitude reducing to a couple.

The sum of two parallel bound vectors is in general a bound vector parallel to them
Supposing now that a is not -1, then P + a Q is a weighted point of weight (1+a) situated on the line joining P and Q. We can shift this weight to the vectorial term (so that the bound vector again becomes the product of a (pure) point and a vector) by writing % as
% ≡ R∧((1 + a) x);   R ≡ (P + a Q)/(a + 1)

The sum of two parallel bound vectors is a bound vector parallel to them.

In mechanics, this is equivalent to two parallel forces reducing to a resultant force parallel to them. If the two parallel forces are of equal magnitude (a is equal to 1), the resultant force passes through a point mid-way between them.
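This reduction is mechanical enough to check with a few lines of Python. The sketch below is our own index-tuple bookkeeping, not the GrassmannAlgebra package: index 0 stands for the origin 𝔒, indices 1..n for the basis vectors. With a = 2 it confirms P∧x + Q∧(a x) ≡ R∧((1 + a) x).

```python
from fractions import Fraction
from itertools import product

def wedge(a, b):
    # Multivectors as {sorted index tuple: coefficient}; index 0 is the origin.
    out = {}
    for (ia, ca), (ib, cb) in product(a.items(), b.items()):
        idx = ia + ib
        if len(set(idx)) < len(idx):
            continue  # a repeated factor makes the exterior product zero
        inv = sum(1 for i in range(len(idx)) for j in range(i + 1, len(idx))
                  if idx[i] > idx[j])  # inversions give the reordering sign
        key = tuple(sorted(idx))
        out[key] = out.get(key, 0) + (-1) ** inv * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def add(*ts):
    out = {}
    for t in ts:
        for k, v in t.items():
            out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def scale(s, a):
    return {k: s * v for k, v in a.items()}

O, e1, e2, e3 = {(0,): 1}, {(1,): 1}, {(2,): 1}, {(3,): 1}

P, Q, x, a = add(O, e1), add(O, e2), e3, 2           # y = a x with a = 2
lhs = add(wedge(P, x), wedge(Q, scale(a, x)))        # P ^ x + Q ^ (a x)
R = scale(Fraction(1, a + 1), add(P, scale(a, Q)))   # R = (P + a Q)/(a + 1)
rhs = wedge(R, scale(a + 1, x))                      # R ^ ((1 + a) x)
print(lhs == rhs)                                    # True
```

The weighted point P + a Q acquires weight 1 + a, which is exactly the factor shifted onto the vector to leave a pure point R.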


The sum of two non-parallel bound vectors


We now turn to the more general case where we assume the bound vectors are not parallel. In a space of any dimension we can always express a sum of bound vectors in terms of an arbitrary point, P say, by writing:
% ≡ P∧x + Q∧y ≡ R∧(x + y) + (P - R)∧x + (Q - R)∧y ≡ R∧(x + y) + B

Because our two bound vectors are not parallel, the sum of their vectors cannot be zero. Hence the first term is a new bound vector which is not zero. The remaining terms are simple bivectors. In the general dimensional case, this sum of bivectors B may not be reducible to a simple bivector. In 3-dimensional space, the sum of bivectors B can always be reduced to a single simple bivector. In the plane, a point R can always be found which makes the sum zero, leaving us with the result that the sum of two bound vectors in the plane (whose sum of vectors is not zero) can always be reduced to a single bound vector.

The sum of two non-parallel bound vectors in the plane


This point R is in fact the point of intersection of the lines of the bound vectors. We have seen earlier in the chapter that a bound vector is independent of the point used to express it, provided that the point lies on its line. We can thus imagine 'sliding' each bound vector along its line until its point reaches the point of intersection with the line of the other bound vector. We can then take the common point out of the sum as a factor, and we are left with the vector of the resultant being the sum of the vectors of the bound vectors.

The sum of two non-parallel bound vectors in the plane is a bound vector through their point of intersection.

This result applies of course, not only to bound vectors in the plane, but to any pair of intersecting bound vectors in a space of any dimensions, since together they form a planar subspace. As we might expect, given any two bound vectors in the plane, we can check if they intersect or are parallel by extracting a common factor from their regressive product. If they intersect, their regressive product will give their point of intersection. If they are parallel, it will give their common vector. For example, if they intersect we have
"2 ; B1 = H" + a x + b yL x; B2 = H" + c x + d yL y;


ToCommonFactor[B1 ∨ B2]
c (c x # x∧y + b y # x∧y + 𝔒 # x∧y)

Whence their point of intersection can be extracted as


" + c x + b y

The sum of two non-parallel bound vectors in 3-space


If the lines of the bound vectors in 3-space intersect, then the bound vectors together belong to a plane and the results of the previous section apply. We are thus left to explore the remaining case where the bivectors are neither parallel nor intersecting. In this case, the exterior product of the bound vectors will not be zero. As we have seen above, the sum of two bound vectors in a space of any number of dimensions, may always be reduced to the sum of a bound vector and a bivector. The resultant bound vector can be chosen through an arbitrary point, but its vector must be the sum of the vectors of the original two bound vectors. The bivector will depend on the choice of the point defining the resultant bound vector. And in 3-space this bivector is guaranteed to be simple.
% ≡ P∧x + Q∧y ≡ R∧(x + y) + B

To visualize how this works, we take a simple example. Define the two bound vectors as:
" 3 ; ) = " e1 ; * = H" + 2 e2 L e3

Choose a convenient point through which to define the resultant bound vector, say half way between the points used to define the component bound vectors.
+ = H" + e2 L He1 + e3 L

Then the bivector of the sum will be


B)+*-+ " e1 + H" + 2 e2 L e3 - H" + e2 L He1 + e3 L e2 He3 - e1 L

The following depictions give two different views of the same summation. The original component bound vectors are shown with a somewhat ghostly opacity, while the resultant bound vector and bivector are shown in full-blooded colours.


The sum of two bound vectors. View 1.

The sum of two bound vectors. View 2.
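This little computation can be verified independently. The Python sketch below is our own index-tuple bookkeeping, not the GrassmannAlgebra package: index 0 plays the role of the origin 𝔒. It checks that the two given bound vectors minus the chosen resultant leave exactly the bivector e2∧(e3 - e1).

```python
from itertools import product

def wedge(a, b):
    # Multivectors as {sorted index tuple: coefficient}; index 0 is the origin.
    out = {}
    for (ia, ca), (ib, cb) in product(a.items(), b.items()):
        idx = ia + ib
        if len(set(idx)) < len(idx):
            continue  # a repeated factor makes the exterior product zero
        inv = sum(1 for i in range(len(idx)) for j in range(i + 1, len(idx))
                  if idx[i] > idx[j])  # inversions give the reordering sign
        key = tuple(sorted(idx))
        out[key] = out.get(key, 0) + (-1) ** inv * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def add(*ts):
    out = {}
    for t in ts:
        for k, v in t.items():
            out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def scale(s, a):
    return {k: s * v for k, v in a.items()}

O, e1, e2, e3 = {(0,): 1}, {(1,): 1}, {(2,): 1}, {(3,): 1}

L = wedge(O, e1)                               # O ^ e1
M = wedge(add(O, scale(2, e2)), e3)            # (O + 2 e2) ^ e3
N = wedge(add(O, e2), add(e1, e3))             # (O + e2) ^ (e1 + e3)

B = add(L, M, scale(-1, N))                    # the residual bivector
print(B == wedge(e2, add(e3, scale(-1, e1))))  # True: B = e2 ^ (e3 - e1)
```

All terms involving the origin cancel, so the residue really is a bivector, as the decomposition requires.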

We shall see in the next section that the sum of any number of bound vectors in 3-space may be reduced to the sum of a single bound vector and a single bivector. In Chapter 8: Exploring Mechanics, we will see that bound vectors correspond to forces and bivectors to moments. If the resultant bound vector force is chosen through a point which makes it and its bivector moment orthogonal, such a system of forces is called a wrench.

Sums of bound vectors


Sums of bound vectors in the plane
We have seen in the sections above that the sum of two bound vectors in the plane is either a bound vector or a bivector or zero. We have also seen that the sum of a bound vector and a bivector is another bound vector. A sum of any number of bound vectors in the plane, therefore, is clearly also reducible to a bound vector, a bivector, or zero; for we can simply add them two at a time.

Sums of bound vectors in 3-space


In 3-space we can present the same sort of argument. The sum of two bound vectors in 3-space is either a bound vector, a simple bivector, the sum of a bound vector and a simple bivector, or zero. In addition, the sum of two simple bivectors in 3-space is itself a simple bivector. Thus again, we can add our bound vectors two at a time, and will end up with a bound vector, a simple bivector, the sum of a bound vector and a simple bivector, or zero. In 3-space, the sum of a bound vector and a bivector can also always be recast back into the sum of two bound vectors by adding and subtracting a bound vector.
P∧x + y∧z ≡ (P∧x - P∧z) + (P∧z + y∧z) ≡ P∧(x - z) + (P + y)∧z


Sums of bound vectors in n-space


Sums of bound vectors in n-space (n > 3) differ from sums in 3-space only in one aspect. The bivector may not be simple. If the bivector is simple, a sum of bound vectors in n-space has essentially the same properties as one in 3-space.

A criterion for a sum of bound vectors to be reducible to a single bound vector


A non-zero sum of bound vectors S can be reduced to a single bound vector only in the case that the vector of the bound vector is a factor of the bivector. That is, S can be expressed in the form

S = P∧x + y∧x = (P + y)∧x

In this case it is clear that the exterior square of S is zero.


S∧S = ((P + y)∧x)∧((P + y)∧x) = 0

Suppose now that the vector of the bound vector is not a factor of the bivector. Then we have S = P∧x + y∧z, where x, y, and z are independent. The exterior square of S is then

S∧S = (P∧x + y∧z)∧(P∧x + y∧z) = 2 P∧x∧y∧z

This forms a straightforward test to see if the reduction of the sum to a single bound vector can be effected. If the exterior square is zero, the sum can be reduced to a single bound vector. If the exterior square is not zero, then the sum cannot be reduced to a single bound vector. We have already seen this property of exterior squares of 2-elements in our discussion of simplicity in Chapter 3.
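This exterior-square test is easy to experiment with numerically. The following minimal Python sketch (our own helper functions `wedge` and `msum`, not part of the book's Mathematica code) stores an element of the bound 3-space on the basis {𝕆, e1, e2, e3} as a dictionary mapping sorted tuples of basis indices (index 0 standing for the origin) to coefficients:

```python
def wedge(a, b):
    """Exterior product of two multivectors, each stored as a dict
    mapping a sorted tuple of basis indices to a coefficient."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = list(ia + ib)
            if len(set(idx)) < len(idx):
                continue  # a repeated basis factor makes the term zero
            sign = 1
            for i in range(len(idx)):           # bubble sort the indices,
                for j in range(len(idx) - 1):   # flipping the sign per swap
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            out[tuple(idx)] = out.get(tuple(idx), 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def msum(a, b):
    """Sum of two multivectors."""
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

# Basis indices: 0 is the origin point, 1..3 are e1, e2, e3
O, e1, e2, e3 = {(0,): 1}, {(1,): 1}, {(2,): 1}, {(3,): 1}

# S1 = (O + e2)^e1: the vector e1 is a factor of the whole 2-element
S1 = wedge(msum(O, e2), e1)
print(wedge(S1, S1))   # {} : zero exterior square, reducible to a bound vector

# S2 = O^e1 + e2^e3: the vector e1 is not a factor of the bivector e2^e3
S2 = msum(wedge(O, e1), wedge(e2, e3))
print(wedge(S2, S2))   # {(0, 1, 2, 3): 2} : non-zero, not reducible
```

The sign bookkeeping is just the parity of the permutation that sorts the concatenated indices, which is all the anti-symmetry of the exterior product requires.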

Sums of bound vectors: Summary


A non-zero sum of bound vectors S = Σ Pi∧xi (except in the case Σ xi = 0) may always be reduced to the sum of a bound vector and a bivector (or zero), since, by choosing an arbitrary point P, S = Σ Pi∧xi may always be written in the form:

S = Σ Pi∧xi = P∧(Σ xi) + Σ (Pi - P)∧xi
The first term on the right hand side is a bound vector. The second term is a bivector. This decomposition is clearly valid in a space of any dimension.

4.1

If Σ xi = 0 then the sum is a bivector. In spaces of vector dimension greater than 3, the bivector may not be simple. If Σ (Pi - P)∧xi = 0 for some P, then the sum is a bound vector. Alternatively, if (Σ Pi∧xi)∧(Σ Pi∧xi) = 0, then the sum is a bound vector. This transformation is of fundamental importance in our exploration of mechanics in Chapter 8.

Example: Reducing a sum of bound vectors




Here is an example of a sum of bound vectors and its reduction to the sum of a bound vector and a bivector. We begin by declaring a bound vector space of three vector dimensions, so in this case the bivector is guaranteed to be simple.
A; "3 ;

Next, we define and enter the four bound vectors.


b1 = P1∧x1;   P1 = 𝕆 + e1 + 3 e2 - 4 e3;   x1 = e1 - e3;
b2 = P2∧x2;   P2 = 𝕆 + 2 e1 - e2 - 2 e3;   x2 = e1 - e2 + e3;
b3 = P3∧x3;   P3 = 𝕆 - 5 e1 + 3 e2 - 6 e3;   x3 = 2 e1 + 3 e2;
b4 = P4∧x4;   P4 = 𝕆 + 4 e1 + 2 e2 - 9 e3;   x4 = 5 e3;

The sum of the four bound vectors is:


B = b1 + b2 + b3 + b4
(𝕆 + 4 e1 + 2 e2 - 9 e3)∧(5 e3) + (𝕆 - 5 e1 + 3 e2 - 6 e3)∧(2 e1 + 3 e2) +
(𝕆 + e1 + 3 e2 - 4 e3)∧(e1 - e3) + (𝕆 + 2 e1 - e2 - 2 e3)∧(e1 - e2 + e3)

By expanding these products, simplifying and collecting terms, we obtain the sum of a bound vector (through the origin) 𝕆∧(4 e1 + 2 e2 + 5 e3) and a bivector -25 e1∧e2 + 39 e1∧e3 + 22 e2∧e3. We can use GrassmannExpandAndSimplify to do the computations for us.

B = %[B]
𝕆∧(4 e1 + 2 e2 + 5 e3) - 25 e1∧e2 + 39 e1∧e3 + 22 e2∧e3

We could just as well have expressed this 2-element as bound through (for example) the point 𝕆 + e1. To do this, we simply add e1∧(4 e1 + 2 e2 + 5 e3) to the bound vector and subtract it from the bivector to get:

(𝕆 + e1)∧(4 e1 + 2 e2 + 5 e3) - 27 e1∧e2 + 34 e1∧e3 + 22 e2∧e3

We can of course also express the bivector in factored form in many ways. Using ExteriorFactorize (discussed in Chapter 2) gives a default result

ExteriorFactorize[-27 e1∧e2 + 34 e1∧e3 + 22 e2∧e3]
-27 (e1 + (22/27) e3)∧(e2 - (34/27) e3)

Finally, we can check to see if the sum can be reduced to a bound vector by calculating its exterior square.


%[B∧B]
-230 𝕆∧e1∧e2∧e3

Since this is not zero, the reduction of the sum to a single bound vector is not possible in this case.
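The same computation can be cross-checked outside Mathematica. Here is a hedged Python sketch (the functions `wedge` and `vec` are our own, not part of the GrassmannAlgebra package) which redoes the expansion on the basis {𝕆, e1, e2, e3}, with index 0 standing for the origin:

```python
def wedge(a, b):
    """Exterior product of multivectors stored as {sorted index tuple: coeff}."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = list(ia + ib)
            if len(set(idx)) < len(idx):
                continue  # a repeated basis factor makes the term zero
            sign = 1
            for i in range(len(idx)):           # bubble sort, tracking swaps
                for j in range(len(idx) - 1):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            out[tuple(idx)] = out.get(tuple(idx), 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def vec(c1, c2, c3, origin=0):
    """1-element origin*O + c1 e1 + c2 e2 + c3 e3; index 0 stands for O."""
    d = {(0,): origin, (1,): c1, (2,): c2, (3,): c3}
    return {k: v for k, v in d.items() if v != 0}

points  = [vec(1, 3, -4, 1), vec(2, -1, -2, 1), vec(-5, 3, -6, 1), vec(4, 2, -9, 1)]
vectors = [vec(1, 0, -1), vec(1, -1, 1), vec(2, 3, 0), vec(0, 0, 5)]

B = {}
for P, x in zip(points, vectors):
    for k, v in wedge(P, x).items():
        B[k] = B.get(k, 0) + v
B = {k: v for k, v in B.items() if v != 0}

# B matches O^(4 e1 + 2 e2 + 5 e3) - 25 e1^e2 + 39 e1^e3 + 22 e2^e3
print(wedge(B, B))   # {(0, 1, 2, 3): -230}
```

The non-zero exterior square reproduces the -230 coefficient found above, confirming that this particular sum cannot be reduced to a single bound vector.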

4.4 Geometrically Interpreted m-Elements


Types of geometrically interpreted m-elements
In L_m the situation is analogous to that in L_2. A simple product of m points and vectors may always be reduced to the product of a point and a simple (m-1)-vector by subtracting one of the points from all of the others. For example, take the exterior product of three points and two vectors. By subtracting the first point, say, from the other two, we can cast the product into the form of a bound simple 4-vector.

P∧P1∧P2∧x∧y = P∧(P1 - P)∧(P2 - P)∧x∧y

There are thus only two simple interpreted elements in L_m:

1. x1∧x2∧…∧xm (the simple m-vector).
2. P∧x2∧…∧xm (the bound simple (m-1)-vector).

A sum of simple m-vectors is called an m-vector. If a is a (not necessarily simple) m-vector, then P∧a is called a bound m-vector.

A sum of bound m-vectors may always be reduced to the sum of a bound m-vector and an (m+1)-vector (with the proviso that either or both may be zero). These interpreted elements and their relationships will be discussed further in the following sections.

The m-vector
The simple m-vector, or multivector, is the multidimensional equivalent of the vector. As with a vector, it does not have the property of location. The m-dimensional vector space of a simple m-vector may be used to define the multidimensional direction of the m-vector. If we had access to m-dimensional paper, we would depict the simple m-vector, just as we did for the bivector, by depicting its vectors conveniently docked head to tail in the order of the product, and 'joined' by a somewhat transparent m-dimensional parallelepiped in the same colour. In practice of course, we will only be depicting vectors, bivectors and trivectors. A trivector may be depicted by its three vectors in order joined by a parallelepiped.


Depicting a simple trivector. A trivector is located nowhere.

The orientation is given by the order of the factors in the simple m-vector. An interchange of any two factors produces an m-vector of opposite orientation. By the anti-symmetry of the exterior product, there are just two distinct orientations. In Chapter 6: The Interior Product, it will be shown that the measure of a trivector may be geometrically interpreted as the volume of this parallelepiped. However, the depiction of the simple trivector in the manner above suffers from similar defects to those already described for the bivector: namely, it incorrectly suggests a specific location and shape of the parallelepiped. A simple m-vector may also be viewed as a carrier of bound simple (m-1)-vectors in a manner analogous to that already described for the simple bivector as a carrier of bound vectors. Thus, a simple trivector may be viewed as a carrier for bound simple bivectors. (See below for a depiction of this.) A sum of simple m-vectors (that is, an m-vector) is not necessarily reducible to a simple m-vector, except in L_0, L_1, L_(n-1), and L_n.

The bound m-vector


The exterior product of a point and an m-vector is called a bound m-vector. Note that it belongs to L_(m+1). A sum of bound m-vectors is not necessarily a bound m-vector. However, it may in general be reduced to the sum of a bound m-vector and an (m+1)-vector as follows:

Σ Pi∧ai = Σ (P + bi)∧ai = P∧(Σ ai) + Σ bi∧ai

4.2

The first term P∧(Σ ai) is a bound m-vector providing that Σ ai ≠ 0, and the second term is an (m+1)-vector. When m = 0, a bound 0-vector or bound scalar Pa (= aP) is seen to be equivalent to a weighted point. When m = n (n being the dimension of the underlying vector space), any bound n-vector is but a scalar multiple of the basis (n+1)-element. (The default basis (n+1)-element is denoted 𝕆∧e1∧e2∧…∧en.)


The graphical depiction of bound simple m-vectors presents even greater difficulties than those already discussed for bound vectors. As in the case of the bound vector, the point used to express the bound simple m-vector is not unique. In Section 4.6 we will see how bound simple m-vectors may be used to define multiplanes.

Bound simple m-vectors expressed by points


A bound simple m-vector may always be expressed as a product of m+1 points. Let Pi = P0 + xi, then:

P0∧x1∧x2∧…∧xm = P0∧(P1 - P0)∧(P2 - P0)∧…∧(Pm - P0) = P0∧P1∧P2∧…∧Pm

Conversely, as we have already seen, a product of m+1 points may always be expressed as the product of a point and a simple m-vector by subtracting one of the points from all of the others. The m-vector of a bound simple m-vector P0∧P1∧P2∧…∧Pm may thus be expressed in terms of these points as:

(P1 - P0)∧(P2 - P0)∧…∧(Pm - P0)

A particularly symmetrical formula results from the expansion of this product, reducing it to a form no longer showing preference for P0.

(P1 - P0)∧(P2 - P0)∧…∧(Pm - P0) = Σ (-1)^i P0∧P1∧…∧P̂i∧…∧Pm    (sum over i = 0, …, m)

4.3

Here P̂i denotes deletion of the factor Pi from the product.
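The symmetry of this deletion formula can be checked numerically. Below is a small Python sketch (our own encoding, not the book's code) that verifies the m = 2 case for three concrete points Pi = 𝕆 + pi, with multivectors stored as dictionaries from sorted index tuples to coefficients (index 0 for the origin):

```python
def wedge(a, b):
    """Exterior product of multivectors stored as {sorted index tuple: coeff}."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = list(ia + ib)
            if len(set(idx)) < len(idx):
                continue  # repeated basis factor -> zero
            sign = 1
            for i in range(len(idx)):
                for j in range(len(idx) - 1):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            out[tuple(idx)] = out.get(tuple(idx), 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def add(a, b):
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def sub(a, b):
    return add(a, {k: -v for k, v in b.items()})

# Three points P_i = O + p_i on the basis {O, e1, e2, e3}; index 0 is O
P0 = {(0,): 1, (1,): 1}   # O + e1
P1 = {(0,): 1, (2,): 1}   # O + e2
P2 = {(0,): 1, (3,): 1}   # O + e3

lhs = wedge(sub(P1, P0), sub(P2, P0))
# alternating sum of deletions: P1^P2 - P0^P2 + P0^P1
rhs = add(sub(wedge(P1, P2), wedge(P0, P2)), wedge(P0, P1))
print(lhs == rhs)   # True
```

Note that although each term on the right-hand side is built from points, the origin components cancel in the alternating sum, leaving exactly the pure bivector on the left.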

Bound simple bivectors


In this section, by way of example, we discuss the special case of the bound simple 2-vector, or bound simple bivector. A bound bivector is a 3-element. Any exterior product of three 1-elements, in which at least one 1-element is a point, is a bound simple bivector. (For brevity in this section, we will suppose all the bound bivectors discussed are bound simple bivectors.) Consider three points in a space of any number of dimensions:


P
Q = P + x
R = P + y

We can construct the same bound bivector % from these points in a number of ways

% = P∧Q∧R = Q∧R∧P = R∧P∧Q

By substituting for R and expanding, we see that % can also be expressed as

% = P∧Q∧y = Q∧y∧P = y∧P∧Q

Substituting again for Q and expanding gives

% = P∧x∧y = x∧y∧P = y∧P∧x

Thus we have nine (inessentially different) ways of constructing or denoting the same bound bivector. From here on we will usually only discuss bound bivectors in any of the three conceptually distinct forms in which points are denoted before vectors:

% = P∧Q∧R = P∧Q∧y = P∧x∧y

We depict each of these forms in a slightly different way: in each case emphasizing the entities involved in the form, but in all cases showing the order of factors in the product by the chain of arrows. (These graphical conventions will also apply to our discussion of bound trivectors later.)

Point-vector-vector    Point-point-vector    Point-point-point

Three different depictions of a bound bivector.

The bound bivector % defines a plane through the points P, Q and R in the directions of x and y. Here we use the word 'define' in the context discussed in Chapter 2 in the section Spaces and congruence. To say that a bound bivector % defines a plane means that the plane may be defined as the set of all points P which belong to the space of %, that is, such that %∧P = 0. A consequence of this definition is that any other bound bivector congruent to % defines the same plane. These graphical depictions of the bound bivector are each misleading in their own way. Since the bound bivector defines a plane, we can conceive of it as lying in the plane. However, just as a bound vector can lie anywhere in its line, a bound bivector can lie anywhere in its plane. The following diagram depicts a bound bivector docked in three different locations in its plane. The depictions each have the same vectors, but differ in the point of the plane they use.


A bound bivector displaying its ability to slide around in its plane.

Hence, although a lone bivector is located nowhere, and a lone point is immoveably fixed, the bound bivector formed by the exterior product of the two is bound to a plane through the point in the direction of the bivector. It has been shown earlier that a simple bivector may be viewed as a 'carrier' of bound vectors. In a similar manner, a simple trivector may be viewed as a 'carrier' of bound bivectors. To see this, take any bound bivector P∧y∧z and a trivector x∧y∧z. Thus:

P∧y∧z + x∧y∧z = (P + x)∧y∧z

The trivector as a carrier of bound bivectors.

The geometric interpretation of the addition of such a simple trivector to a bound bivector is then similar to that for the addition of a vector to a point, that is, a shift in position of the bound element.

Composing m-vectors and bound m-vectors


Composing m-vectors
Suppose that we are interested in a bound vector space of 3 dimensions.
" 3 8#, e1 , e2 , e3 <

Even though we are in a bound vector space we can still compose m-vectors with ComposeMVector. In a bound 3-space, we have three possible m-vectors. Here we base the coefficients on the symbol a.

Table[{m, ComposeMVector[m, a]}, {m, 1, 3}] // TableForm
1   a1 e1 + a2 e2 + a3 e3
2   a1 e1∧e2 + a2 e1∧e3 + a3 e2∧e3
3   a e1∧e2∧e3

Composing simple m-vectors


Similarly we can compose simple m-vectors with ComposeSimpleMVector. In this case we use a placeholder to enable later input of specific coefficients.
Table[{m, ComposeSimpleMVector[m, □]}, {m, 1, 3}] // TableForm
1   □ e1 + □ e2 + □ e3
2   (□ e1 + □ e2 + □ e3)∧(□ e1 + □ e2 + □ e3)
3   (□ e1 + □ e2 + □ e3)∧(□ e1 + □ e2 + □ e3)∧(□ e1 + □ e2 + □ e3)

Composing bound simple m-vectors


Table[{m, ComposeMPlaneElement[m, □]}, {m, 1, 3}] // TableForm
1   (𝕆 + □ e1 + □ e2 + □ e3)∧(□ e1 + □ e2 + □ e3)
2   (𝕆 + □ e1 + □ e2 + □ e3)∧(□ e1 + □ e2 + □ e3)∧(□ e1 + □ e2 + □ e3)
3   (𝕆 + □ e1 + □ e2 + □ e3)∧(□ e1 + □ e2 + □ e3)∧(□ e1 + □ e2 + □ e3)∧(□ e1 + …

Composing bound m-vectors


Of course you can also combine composition functions.
Table[{m, ' ∧ ComposeMVector[m, □]}, {m, 1, 3}] // TableForm
1   (𝕆 + □ e1 + □ e2 + □ e3)∧(□ e1 + □ e2 + □ e3)
2   (𝕆 + □ e1 + □ e2 + □ e3)∧(□ e1∧e2 + □ e1∧e3 + □ e2∧e3)
3   (𝕆 + □ e1 + □ e2 + □ e3)∧(□ e1∧e2∧e3)

4.5 Geometrically Interpreted Spaces


Vector and point spaces

The space of a simple non-zero m-element a has been defined in Section 2.3 as the set of 1-elements x whose exterior product with a is zero: {x : a∧x = 0}.

If x is interpreted as a vector v, then the vector space of a is defined as {v : a∧v = 0}.

If x is interpreted as a point P, then the point space of a is defined as {P : a∧P = 0}.

The point space of a simple m-vector is empty. The vector space of a simple m-vector is an m-dimensional vector space. Conversely, the m-dimensional vector space may be said to be defined by the m-vector.

The point space of a bound simple m-vector is called an m-plane (sometimes multiplane). Thus the point space of a bound vector is a 1-plane (or line) and the point space of a bound simple bivector is a 2-plane (or, simply, a plane). The m-plane will be said to be defined by the bound simple m-vector. The vector space of a bound simple m-vector is an m-dimensional vector space.

The geometric interpretation for the notion of set inclusion is taken as 'to lie in'. Thus for example, a point may be said to lie in an m-plane. The point and vector spaces for a bound simple m-element are tabulated below.

m      Bound Simple m-Vector     Point Space    Vector Space
0      bound scalar              point          -
1      bound vector              line           1-dimensional vector space
2      bound simple bivector     plane          2-dimensional vector space
m      bound simple m-vector     m-plane        m-dimensional vector space
n-1    bound (n-1)-vector        hyperplane     (n-1)-dimensional vector space
n      bound n-vector            n-plane        n-dimensional vector space

Two congruent bound simple m-vectors P∧a and a(P∧a) (where a is a non-zero scalar) define the same m-plane. Thus, for example, the point 𝕆 + x and the weighted point 2(𝕆 + x) define the same point.

Bound simple m-vectors as m-planes


We have defined an m-plane as a set of points defined by a bound simple m-vector. It will often turn out however to be more convenient and conceptually fruitful to work with m-planes as if they were the bound simple m-vectors which define them. This is in fact the approach taken by Grassmann and the early workers in the Grassmannian tradition (for example, [Whitehead 1898], [Hyde 1884-1906] and [Forder 1941]). This will be satisfactory provided that the equality relationship we define for m-planes is that of congruence rather than the more specific equals. Thus we may, when speaking in a geometric context, refer to a bound simple m-vector as an m-plane and vice versa. Hence in saying P is an m-plane, we are also saying that any a P (where a is a scalar factor, not zero) is the same m-plane.


Coordinate spaces
The coordinate spaces of a Grassmann algebra are the spaces defined by the basis elements. The coordinate m-spaces are the spaces defined by the basis elements of L_m. For example, if L_1 has basis {𝕆, e1, e2, e3}, that is, we are working in a bound vector 3-space, then the Grassmann algebra it generates has basis:

"3; LBasis
{{1}, {𝕆, e1, e2, e3}, {𝕆∧e1, 𝕆∧e2, 𝕆∧e3, e1∧e2, e1∧e3, e2∧e3}, {𝕆∧e1∧e2, 𝕆∧e1∧e3, 𝕆∧e2∧e3, e1∧e2∧e3}, {𝕆∧e1∧e2∧e3}}

Each one of these basis elements defines a coordinate space. Most familiar are the coordinate m-planes. The coordinate 1-planes 𝕆∧e1, 𝕆∧e2, 𝕆∧e3 define the coordinate axes, while the coordinate 2-planes 𝕆∧e1∧e2, 𝕆∧e1∧e3, 𝕆∧e2∧e3 define the coordinate planes. Additionally however there are the coordinate vectors e1, e2, e3 and the coordinate bivectors e1∧e2, e1∧e3, e2∧e3. Perhaps less familiar is the fact that there are no coordinate m-planes in a vector space, but rather simply coordinate m-vectors.

Geometric dependence
In Chapter 2 the notion of dependence was discussed for elements of a linear space. Non-zero 1elements are said to be dependent if and only if their exterior product is zero. If the elements concerned have been endowed with a geometric interpretation, the notion of dependence takes on an additional geometric interpretation, as the following table shows.
x1∧x2 = 0         x1, x2 are parallel (co-directional)
P1∧P2 = 0         P1, P2 are coincident
x1∧x2∧x3 = 0      x1, x2, x3 are co-2-directional (or parallel)
P1∧P2∧P3 = 0      P1, P2, P3 are collinear (or coincident)
x1∧…∧xm = 0       x1, …, xm are co-k-directional, k < m
P1∧…∧Pm = 0       P1, …, Pm are co-k-planar, k < m - 1
Thus, co-directionality and coincidence, while being distinct interpretive geometric notions, are equivalent algebraically.


Geometric duality
The concept of duality introduced in Chapter 3 is most striking when interpreted geometrically. Suppose:
P defines a point
L defines a line
p defines a plane
V defines a 3-plane

In what follows we tabulate the dual relationships of these entities to each other.

Duality in a plane
In a plane there are just three types of geometric entity: points, lines and planes. In the table below we can see that in the plane, points and lines are 'dual' entities, and planes and scalars are 'dual' entities, because their definitions convert under the application of the Duality Principle.
L = P1∧P2          P = L1∨L2
p = P1∧P2∧P3       1 = L1∨L2∨L3
p = L∧P            1 = P∨L

Duality in a 3-plane
In the 3-plane there are just four types of geometric entity: points, lines, planes and 3-planes. In the table below we can see that in the 3-plane, lines are self-dual, points and planes are now dual, and scalars are now dual to 3-planes.
L = P1∧P2            L = p1∨p2
p = P1∧P2∧P3         P = p1∨p2∨p3
V = P1∧P2∧P3∧P4      1 = p1∨p2∨p3∨p4
p = L∧P              P = L∨p
V = L1∧L2            1 = L1∨L2
V = p∧P              1 = P∨p

Duality in an n-plane
From these cases the types of relationships in higher dimensions may be composed straightforwardly. For example, if P defines a point and H defines a hyperplane (an (n-1)-plane), then we have the dual formulations:

H = P1∧P2∧…∧Pn          P = H1∨H2∨…∨Hn


4.6 m-Planes
In an earlier section m-planes were defined as point spaces of bound simple m-vectors. In this section m-planes will be considered from three other aspects: the first in terms of a simple exterior product of points, the second as an m-vector and the third as an exterior quotient.

m-planes defined by points


Grassmann and those who wrote in the style of the Ausdehnungslehre considered the point more fundamental than the vector for exploring geometry. This approach indeed has its merits. An m-plane is quite straightforwardly defined and expressed as the (space of the) exterior product of m+1 points.

P = P0∧P1∧P2∧…∧Pm

m-planes defined by m-vectors


Consider a bound simple m-vector P0∧x1∧x2∧…∧xm. Its m-plane is the set of points P such that:

P∧P0∧x1∧x2∧…∧xm = 0

This equation is equivalent to the statement: there exist scalars a, a0, ai, not all zero, such that:

a P + a0 P0 + Σ ai xi = 0

And since this is only possible if a = -a0 (since for the sum to be zero, it must be a sum of vectors) then we can rewrite the condition as:

a (P - P0) + Σ ai xi = 0

or equivalently:

(P - P0)∧x1∧x2∧…∧xm = 0

We are thus led to the following alternative definition of an m-plane: an m-plane defined by the bound simple m-vector P0∧x1∧x2∧…∧xm is the set of points:

{P : (P - P0)∧x1∧x2∧…∧xm = 0}

This is of course equivalent to the usual definition of an m-plane. That is, since the vectors (P - P0), x1, x2, …, xm are dependent, then for scalar parameters ti:

P - P0 = t1 x1 + t2 x2 + … + tm xm

4.4
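The equivalence of the exterior condition and the parametric form is easy to check numerically. Here is a hedged Python sketch for the m = 2 case (our own encoding: multivectors as dictionaries from sorted index tuples to coefficients, index 0 standing for the origin):

```python
def wedge(a, b):
    """Exterior product of multivectors stored as {sorted index tuple: coeff}."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = list(ia + ib)
            if len(set(idx)) < len(idx):
                continue  # repeated basis factor -> zero
            sign = 1
            for i in range(len(idx)):
                for j in range(len(idx) - 1):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            out[tuple(idx)] = out.get(tuple(idx), 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def sub(a, b):
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) - v
    return {k: v for k, v in out.items() if v != 0}

P0 = {(0,): 1, (1,): 1, (2,): 2}   # O + e1 + 2 e2
x1 = {(1,): 1, (3,): 1}            # e1 + e3
x2 = {(2,): 1, (3,): -1}           # e2 - e3

# P = P0 + 2 x1 - 3 x2 lies in the 2-plane of P0^x1^x2
P = dict(P0)
for x, t in ((x1, 2), (x2, -3)):
    for k, v in x.items():
        P[k] = P.get(k, 0) + t * v

print(wedge(wedge(sub(P, P0), x1), x2))   # {} : P is in the plane

Q = {(0,): 1, (3,): 7}                    # O + 7 e3
print(wedge(wedge(sub(Q, P0), x1), x2))   # {(1, 2, 3): 6} : Q is not
```

Any choice of the parameters ti gives a point whose difference from P0 is a combination of x1 and x2, so the triple exterior product vanishes; a point off the plane leaves a non-zero 3-element.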

m-planes as exterior quotients




The alternative definition of an m-plane developed above shows that an m-plane may be defined as the set of points P such that:

P∧x1∧x2∧…∧xm = P0∧x1∧x2∧…∧xm

'Solving' for P and noting from Section 2.11 that the quotient of an (m+1)-element by an m-element contained in it is a 1-element with m arbitrary scalar parameters, we can write:

P = (P0∧x1∧x2∧…∧xm) / (x1∧x2∧…∧xm) = P0 + t1 x1 + t2 x2 + … + tm xm
4.5

Computing exterior quotients


In Chapter 2: The Exterior Product, we introduced the GrassmannAlgebra function ExteriorQuotient. In what follows we use ExteriorQuotient to compute the expressions for various m-planes. Note that all the 1-elements in the numerator must be declared vector symbols, so we need to declare any points we use.
A; "3 ; DeclareExtraVectorSymbols@PD;

Points
The exterior quotient of a weighted point by its weight gives (as would be expected), the (pure) point.
ExteriorQuotient[(P a)/a]
P

Points in a line
The exterior quotient of a bound vector with its vector gives all the points in the line defined by the bound vector, parametrized by the scalar t1 .
ExteriorQuotient[(P∧x)/x]
P + t1 x

We can see that any other point used to define the bound vector gives effectively the same parametrization. 2009 9 3

ExteriorQuotient[((P + a x)∧x)/x]
P + a x + t1 x

Points in a plane
The exterior quotient of a bound simple bivector with its simple bivector gives all the points in the plane defined by the bound simple bivector, parametrized by the scalars t1 and t2 .
ExteriorQuotient[(P∧x∧y)/(x∧y)]
P + t1 x + t2 y

Points in a 3-plane
The exterior quotient of a bound simple trivector with its simple trivector gives all the points in the 3-plane defined by the bound simple trivector, parametrized by the scalars t1 , t2 and t3 .
ExteriorQuotient[(P∧x∧y∧z)/(x∧y∧z)]
P + t1 x + t2 y + t3 z

Lines in a plane
The exterior quotient of a bound simple bivector with a vector in it gives a set of bound vectors. These bound vectors define lines in the plane of the bound simple bivector, parametrized by the scalars t1 and t2 . They are characterized by the fact that their exterior product with the vector gives the original bound simple bivector.
ExteriorQuotient[(P∧x∧y)/y]
(P + t1 y)∧(x + t2 y)

Lines in a 3-plane
The exterior quotient of a bound simple trivector with a bivector in it gives a set of bound vectors. These bound vectors define lines in the 3-plane of the bound simple trivector, parametrized by the scalars t1 , t2 , t3 and t4 . They are characterized by the fact that their exterior product with the bivector gives the original bound simple trivector.


ExteriorQuotient[(P∧x∧y∧z)/(y∧z)]
(P + t1 y + t2 z)∧(x + t3 y + t4 z)

Planes in a 3-plane
The exterior quotient of a bound simple trivector with a vector in it gives a set of bound bivectors. These bound bivectors define planes in the 3-plane of the bound simple trivector, parametrized by the scalars t1, t2 and t3. They are characterized by the fact that their exterior product with the vector gives the original bound simple trivector.
ExteriorQuotient[(P∧x∧y∧z)/z]
(P + t1 z)∧(x + t2 z)∧(y + t3 z)

The m-vector of a bound m-vector


We can define an operator $ which takes a simple (m+1)-element of the form P0∧P1∧P2∧…∧Pm and converts it to an m-element of the form (P1 - P0)∧(P2 - P0)∧…∧(Pm - P0). The interesting property of this operation is that when it is applied twice, the result is zero. Operationally, $² = 0. For example:

$(P) = 1
$(P0∧P1) = P1 - P0
$(P1 - P0) = 1 - 1 = 0
$(P0∧P1∧P2) = P1∧P2 + P2∧P0 + P0∧P1
$(P1∧P2 + P2∧P0 + P0∧P1) = (P2 - P1) + (P0 - P2) + (P1 - P0) = 0

Remember that P1∧P2 + P2∧P0 + P0∧P1 is simple since it may be expressed as

(P1 - P0)∧(P2 - P0) = (P2 - P1)∧(P0 - P1) = (P0 - P2)∧(P1 - P2)

This property of nilpotence is shared by the boundary operator of algebraic topology and the exterior derivative. Furthermore, if a product with a given 1-element is considered an operation, then the exterior, regressive and interior products are all likewise nilpotent. In Chapter 6 we will show that under the hybrid metric we will adopt for bound vector spaces (in which the origin is orthogonal to all vectors), taking the interior product of any bound element with the origin 𝕆 has the same effect as the $ operator. Applied to a bound m-vector, this operator generates the m-vector, thus creating a 'free' entity from a bound one. Applied a second time to the result gives zero, since the m-vector is already free.
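The parallel with the boundary operator can be made concrete. The following Python sketch illustrates the topological boundary on formal chains of labelled points (an analogue of the alternating-deletion rule, not an implementation of the GrassmannAlgebra operator itself), together with its nilpotence:

```python
def boundary(chain):
    """Boundary of a formal chain: a dict mapping tuples of point labels
    to integer coefficients. Each simplex maps to the alternating sum of
    its faces, the same alternating-deletion rule as formula 4.3."""
    out = {}
    for simplex, c in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]
            out[face] = out.get(face, 0) + (-1) ** i * c
    return {k: v for k, v in out.items() if v != 0}

tri = {("P0", "P1", "P2"): 1}
print(boundary(tri))
# {('P1', 'P2'): 1, ('P0', 'P2'): -1, ('P0', 'P1'): 1}
print(boundary(boundary(tri)))   # {} : applying the operator twice gives zero
```

Each face of a face is deleted twice with opposite signs, which is why the double application always cancels to zero.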


Applied to a bound m-vector, this operator generates the m-vector, thus creating a 'free' entity from a bound one. Applied a second time to the result gives zero, since the m-vector is aready free. Another way of generating the m-vector from a bound m-vector is to compose the first co-span of the bound m-vector, and then sum the elements. Suppose we have a bound trivector W in a bound m-space (m 3)
W = P1∧P2∧P3∧P0;

Declare the Pi as 'vector' symbols (that is, as 1-elements), and sum the elements of its first co-span.
"5 ; V@P_ D; V1 = SA#1 @WDE P0 P1 P2 - P0 P1 P3 + P0 P2 P3 - P1 P2 P3

We can see this has the simple form we expected by factorizing it.
V2 = ExteriorFactorize[V1]
(P0 - P3)∧(P1 - P3)∧(P2 - P3)

Applying the summed first cospan operation to each of these terms again gives zero, as do the other operators.

S[#1[P0∧P1∧P2]] - S[#1[P0∧P1∧P3]] + S[#1[P0∧P2∧P3]] - S[#1[P1∧P2∧P3]]
0

SA"1 @DE

4.6

4.7 Line Coordinates


We have already seen that lines are defined by bound vectors independent of the dimension of the space. We now look at the types of coordinate descriptions we can use to define lines in bound spaces (multiplanes) of various dimensions. For simplicity of exposition we refer to a bound vector as 'a line', rather than as 'defining a line', and to a bound bivector as 'a plane', rather than as 'defining a plane'.

Lines in a plane
To explore lines in a plane, we first declare the basis of the plane: "2 .
" 2 8#, e1 , e2 <

A line in a plane can be written in several forms. The most intuitive form perhaps is as a product of two points 𝕆 + x and 𝕆 + y where x and y are position vectors.

L H" + xL H" + yL

A line defined by the product of two points

We can automatically generate a basis form for each of the points by using the GrassmannAlgebra ComposePoint function, or its alias form.

L = 'x ∧ 'y
L = (𝕆 + e1 x1 + e2 x2)∧(𝕆 + e1 y1 + e2 y2)

Or, we can express the line as the product of any point in it and a vector parallel to it. For example:
L H" + xL Hy - xL H" + yL Hy - xL


A line defined by the product of a point and a vector

If we want to compose a line element in this form, we can also use ComposeLineElement or its alias. By leaving the placeholder unfilled we get a template for entering our own scalar coefficients.

(𝕆 + □ e1 + □ e2)∧(□ e1 + □ e2)

Alternatively, we can express L without specific reference to points in it. For example:
L " Ha e1 + b e2 L + c e1 e2

The first term " Ha e1 + b e2 L is a vector bound through the origin, and hence defines a line through the origin. The second term c e1 e2 is a bivector whose addition represents a shift in the line parallel to itself, away from the origin.

Components of a line in the plane expressed in basis form.

We know that this can indeed represent a line since we can factorize it into any of the following forms.

A line of gradient b/a through the point with coordinate c/b on the 𝕆∧e1 axis:

L = (𝕆 + (c/b) e1)∧(a e1 + b e2)
A line in the plane expressed through a point on the 𝕆∧e1 axis.

A line of gradient b/a through the point with coordinate -c/a on the 𝕆∧e2 axis:

L = (𝕆 - (c/a) e2)∧(a e1 + b e2)
A line in the plane expressed through a point on the 𝕆∧e2 axis.

Or, a line through both points:

L = (ab/c) (𝕆 - (c/a) e2)∧(𝕆 + (c/b) e1)

Of course the scalar factor ab/c is inessential so we can just as well say:

L = (𝕆 - (c/a) e2)∧(𝕆 + (c/b) e1)
A line in the plane expressed through a point on the 𝕆∧e1 axis.


Information required to express a line in a plane


The expression of the line above in terms of a pair of points requires the four coordinates of the points. Expressed without specific reference to points, we seem to need three parameters. However, the last expression shows, as expected, that it is really only two parameters that are necessary (viz y = m x + c).

Lines in a 3-plane
Our normal notion of three-dimensional space corresponds to a 3-plane. Lines in a 3-plane have the same form when expressed in coordinate-free notation as they do in a plane (2-plane). Remember that a 3-plane is a bound vector 3-space whose basis may be chosen as 3 independent vectors and a point, or equivalently as 4 independent points. For example, we can still express a line in a 3-plane in any of the following equivalent forms.
L H" + xL H" + yL L H" + xL Hy - xL L H" + yL Hy - xL L " Hy - xL + x y

Here, x and y are independent vectors in the 3-plane. The coordinate form however will appear somewhat different to that in the 2-plane case. To explore this, we redeclare the basis with "3 , and use ComposePoint in its alias form to define a line as the product of 2 points.
" 3 8#, e1 , e2 , e3 < L = 'x 'y H# + e1 x1 + e2 x2 + e3 x3 L H# + e1 y1 + e2 y2 + e3 y3 L

We can expand and simplify this product with GrassmannExpandAndSimplify or its alias %.
L = %[L]
𝕆∧(e1 (-x1 + y1) + e2 (-x2 + y2) + e3 (-x3 + y3)) + (-x2 y1 + x1 y2) e1∧e2 + (-x3 y1 + x1 y3) e1∧e3 + (-x3 y2 + x2 y3) e2∧e3

The scalar coefficients in this expression are sometimes called the Plücker coordinates of the line. Alternatively, we can express L in terms of basis elements, but without specific reference to points or vectors in it. For example:
L = " Ha e1 + b e2 + c e3 L + d e1 e2 + e e2 e3 + f e1 e3 ;

The first term 𝒪∧(a e1 + b e2 + c e3) is a vector bound through the origin, and hence defines a line through the origin. The second term d e1∧e2 + e e2∧e3 + f e1∧e3 is a bivector whose addition represents a shift in the line parallel to itself, away from the origin. In order to effect this shift, however, it is necessary that the bivector contain the vector a e1 + b e2 + c e3. Hence there will be some constraint on the coefficients d, e, and f. To determine this we only need to determine the condition that the exterior product of the vector and the bivector is zero.
%[(a e1 + b e2 + c e3) ∧ (d e1∧e2 + e e2∧e3 + f e1∧e3)] ⩵ 0
(c d + a e − b f) e1∧e2∧e3 ⩵ 0

Alternatively, this constraint amongst the coefficients could have been obtained by noting that in order to be a line, L must be simple, hence the exterior product with itself must be zero.
%[L ∧ L] ⩵ 0
(2 c d + 2 a e − 2 b f) 𝒪∧e1∧e2∧e3 ⩵ 0

Thus the constraint that the coefficients must obey in order for a general bound vector of the form L to be a line in a 3-plane is that:
c d + a e − b f ⩵ 0

This constraint is sometimes referred to as the Plücker identity. Given this constraint, and supposing neither a, b nor c to be zero, we can factorize the line into any of the following forms:

L ≡ (𝒪 + (f/c) e1 + (e/c) e2) ∧ (a e1 + b e2 + c e3)

L ≡ (𝒪 + (d/b) e1 − (e/b) e3) ∧ (a e1 + b e2 + c e3)

L ≡ (𝒪 − (d/a) e2 − (f/a) e3) ∧ (a e1 + b e2 + c e3)

Each of these forms represents a line in the direction of a e1 + b e2 + c e3 and intersecting a coordinate plane. For example, the first form intersects the 𝒪∧e1∧e2 coordinate plane in the point 𝒪 + (f/c) e1 + (e/c) e2, with coordinates (f/c, e/c, 0). The most compact form, in terms of the number of scalar parameters used, is when L is expressed as the product of two points, each of which lies in a coordinate plane.
L = (a c / f) (𝒪 − (d/a) e2 − (f/a) e3) ∧ (𝒪 + (f/c) e1 + (e/c) e2);

We can verify that this formulation gives us the original form of the line by expanding and simplifying the product and substituting the constraint relation previously obtained.
%[Expand[%[L] /. (c d + a e → f b)]]
𝒪 ∧ (a e1 + b e2 + c e3) + d e1∧e2 + f e1∧e3 + e e2∧e3
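The Plücker identity can also be cross-checked without any Grassmann machinery. The book's own computations use Mathematica with the GrassmannAlgebra package; the following plain-Python sketch (ours, not the book's) recomputes the six line coordinates from two arbitrary points and confirms the identity numerically:

```python
# Independent numeric check of the Pluecker identity c*d + a*e - b*f == 0
# for the line through two points of the 3-plane. Coefficient names follow
# the text: L = O^(a e1 + b e2 + c e3) + d e1^e2 + e e2^e3 + f e1^e3.
import random

for _ in range(100):
    x1, x2, x3, y1, y2, y3 = (random.randint(-9, 9) for _ in range(6))
    a, b, c = y1 - x1, y2 - x2, y3 - x3      # vector part, bound to O
    d = x1 * y2 - x2 * y1                    # e1^e2 coefficient
    e = x2 * y3 - x3 * y2                    # e2^e3 coefficient
    f = x1 * y3 - x3 * y1                    # e1^e3 coefficient
    assert c * d + a * e - b * f == 0

print("Pluecker identity holds")
```

Because the identity is a polynomial relation in the point coordinates, checking it at many random integer points is strong evidence it holds identically.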

Similar expressions may be obtained for L in terms of points lying in the other coordinate planes. To summarize, there are three possibilities in a 3-plane, corresponding to the number of different pairs of coordinate 2-planes in the 3-plane.

L ≡ (𝒪 + x1 e1 + x2 e2) ∧ (𝒪 + y2 e2 + y3 e3) ≡ P ∧ Q
L ≡ (𝒪 + x1 e1 + x2 e2) ∧ (𝒪 + z1 e1 + z3 e3) ≡ P ∧ R
L ≡ (𝒪 + y2 e2 + y3 e3) ∧ (𝒪 + z1 e1 + z3 e3) ≡ Q ∧ R

4.7

A line in a 3-plane through points in the coordinate planes.

Information required to express a line in a 3-plane


As with a line in a 2-plane, we find that a line in a 3-plane is expressed with the minimum number of parameters by expressing it as the product of two points, each in one of the coordinate planes. In this form, there are just 4 independent scalar parameters (coordinates) required to express the line.

Determining the point of intersection with the third coordinate plane


Suppose we have a line expressed as the exterior product of points in two coordinate planes, say 𝒪∧e1∧e2 and 𝒪∧e2∧e3.

L = (𝒪 + x1 e1 + x2 e2) ∧ (𝒪 + y2 e2 + y3 e3);

We can always find the point on the line in the third coordinate plane 𝒪∧e3∧e1 by computing the intersection of the line and the third plane. To use GrassmannAlgebra for this, we first need to declare the space, and the scalar symbols.

ℳ3; DeclareExtraScalarSymbols[{x1, x2, y2, y3}];

The intersection of the line and the third coordinate plane is given by finding their common factor.
F = ToCommonFactor[L ∨ (𝒪∧e1∧e3)]
c (𝒪 (x2 − y2) − e1 x1 y2 + e3 x2 y3)

The point we need is then


R = ToPointForm[F]
𝒪 − (x1 y2)/(x2 − y2) e1 + (x2 y3)/(x2 − y2) e3

We can confirm that this point is both on the line and on the coordinate plane by simplifying their exterior products.
{Simplify[%[R ∧ L]], %[R ∧ (𝒪∧e1∧e3)]}
{0, 0}
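As an independent check (a plain-Python sketch of ours, not the book's Mathematica), the same point can be found by parametrizing the line through the two defining points and setting its e2 coordinate to zero:

```python
# Find where the line through P = (x1, x2, 0) and Q = (0, y2, y3) meets the
# O^e3^e1 coordinate plane (e2-coordinate zero), and compare with the
# common-factor result O - x1*y2/(x2-y2) e1 + x2*y3/(x2-y2) e3.
from fractions import Fraction as F
import random

random.seed(0)
for _ in range(50):
    x1, x2, y2, y3 = (F(random.randint(1, 9)) for _ in range(4))
    if x2 == y2:
        continue                     # line parallel to the third plane
    P = (x1, x2, F(0))               # point in the O^e1^e2 plane
    Q = (F(0), y2, y3)               # point in the O^e2^e3 plane
    t = x2 / (x2 - y2)               # parameter making the e2-coordinate 0
    R = tuple(p + t * (q - p) for p, q in zip(P, Q))
    assert R[1] == 0
    assert R[0] == -x1 * y2 / (x2 - y2)
    assert R[2] == x2 * y3 / (x2 - y2)

print("third-plane intersection confirmed")
```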

Lines in a 4-plane
Lines in a 4-plane have the same form when expressed in coordinate-free notation as they do in any multiplane. To obtain the Plücker coordinates of a line in a 4-plane, express the line as the exterior product of two points and multiply it out. The resulting coefficients of the basis elements are the Plücker coordinates of the line. Additionally, from the results above, we can expect that a line in a 4-plane may be expressed with the least number of scalar parameters as the exterior product of two points, each point lying in one of the coordinate 3-planes. For example, the expression for the line as the product of points in the coordinate 3-planes 𝒪∧e1∧e2∧e3 and 𝒪∧e2∧e3∧e4 is

L = (𝒪 + x1 e1 + x2 e2 + x3 e3) ∧ (𝒪 + y2 e2 + y3 e3 + y4 e4)

Lines in an m-plane
The formulae below summarize some of the expressions for defining a line, valid in a multiplane of any dimension. Coordinate-free expressions may take any of a number of forms. For example:

L ≡ (𝒪 + x) ∧ (𝒪 + y)
L ≡ (𝒪 + x) ∧ (y − x)
L ≡ (𝒪 + y) ∧ (y − x)
L ≡ 𝒪 ∧ (y − x) + x ∧ y

4.8
A line can be expressed in terms of the 2m coordinates of any two points on it.

L ≡ (𝒪 + x1 e1 + x2 e2 + ⋯ + xm em) ∧ (𝒪 + y1 e1 + y2 e2 + ⋯ + ym em)

4.9

When multiplied out, the expression for the line takes a form explicitly displaying the Plücker coordinates of the line.

L ≡ 𝒪 ∧ ((y1 − x1) e1 + (y2 − x2) e2 + ⋯ + (ym − xm) em) + (x1 y2 − x2 y1) e1∧e2 + (x1 y3 − x3 y1) e1∧e3 + (x1 y4 − x4 y1) e1∧e4 + ⋯ + (xm−1 ym − xm ym−1) em−1∧em

4.10

Alternatively, a line in an m-plane 𝒪∧e1∧e2∧⋯∧em can be expressed in terms of its intersections with two of its coordinate (m−1)-planes, say 𝒪∧e1∧⋯∧êi∧⋯∧em and 𝒪∧e1∧⋯∧êj∧⋯∧em. The notation êi means that the ith element or term is missing.

L ≡ (𝒪 + x1 e1 + ⋯ + x̂i êi + ⋯ + xm em) ∧ (𝒪 + y1 e1 + ⋯ + ŷj êj + ⋯ + ym em)

4.11

This formulation indicates that a line in an m-plane has at most 2(m−1) independent parameters required to describe it.

It also implies that in the special case when the line lies in one of the coordinate (m−1)-spaces, it can be even more economically expressed as the product of two points, each lying in one of the coordinate (m−2)-spaces contained in the (m−1)-space. And so on.

4.8 Plane Coordinates


We have already seen that planes are defined by simple bound bivectors independent of the dimension of the space. We now look at the types of coordinate descriptions we can use to define planes in bound spaces (multiplanes) of various dimensions.

Planes in a 3-plane
A plane 𝒫 in a 3-plane can be written in several forms. The most intuitive form perhaps is as a product of three non-collinear points P, Q, and R.

P ≡ 𝒪 + x    Q ≡ 𝒪 + y    R ≡ 𝒪 + z

𝒫 ≡ P ∧ Q ∧ R = (𝒪 + x) ∧ (𝒪 + y) ∧ (𝒪 + z)

Here, x, y and z are the position vectors of P, Q, and R.

A 2-plane in a 3-plane through three points.

Or, we can express it as the product of any two different points in it and a vector parallel to it (but not in the direction of the line joining the two points). For example:

𝒫 ≡ (𝒪 + x) ∧ (𝒪 + y) ∧ (z − y)

A 2-plane in a 3-plane through two points and a vector.

Or, we can express it as the product of any point in it and any two independent vectors parallel to it. For example:

𝒫 ≡ (𝒪 + x) ∧ (y − x) ∧ (z − y)

A 2-plane in a 3-plane through one point and two vectors.

Or, we can express it as the product of any point in it and any line in it not through the point. For example:

𝒫 ≡ (𝒪 + x) ∧ L

A 2-plane in a 3-plane through a point and a line.

Or, we can express it as the product of any line in it and any vector parallel to it (but not parallel to the line). For example:

𝒫 ≡ (y − x) ∧ L

A 2-plane in a 3-plane through a vector and a line.

Given a basis, we can always express the plane in terms of the coordinates of the points or vectors in the expressions above. However the form which requires the least number of coordinates is that which expresses the plane as the exterior product of its three points of intersection with the coordinate axes.
𝒫 ≡ (𝒪 + a e1) ∧ (𝒪 + b e2) ∧ (𝒪 + c e3)

A 2-plane in a 3-plane through three points on the coordinate axes.

If the plane is parallel to one of the coordinate axes, say 𝒪∧e3, it may be expressed as:

𝒫 ≡ (𝒪 + a e1) ∧ (𝒪 + b e2) ∧ e3

Whereas, if it is parallel to two of the coordinate axes, say 𝒪∧e2 and 𝒪∧e3, it may be expressed as:

𝒫 ≡ (𝒪 + a e1) ∧ e2 ∧ e3

If we wish to express a plane as the exterior product of its intersection points with the coordinate axes, we first determine its points of intersection with the axes and then take the exterior product of the resulting points. This leads to the following identity:
𝒫 ≡ (𝒫 ∨ (𝒪∧e1)) ∧ (𝒫 ∨ (𝒪∧e2)) ∧ (𝒫 ∨ (𝒪∧e3))

Example: To express a plane in terms of its intersections with the coordinate axes
Suppose we have a plane in a 3-plane defined by three points.
ℳ3; 𝒫 = (𝒪 + e1 + 2 e2 + 5 e3) ∧ (𝒪 − e1 + 9 e2) ∧ (𝒪 − 7 e1 + 6 e2 + 4 e3);

To express this plane in terms of its intersections with the coordinate axes we calculate the intersection points with the axes.
ToCommonFactor[𝒫 ∨ (𝒪∧e1)]
c (13 𝒪 + 329 e1)
ToCommonFactor[𝒫 ∨ (𝒪∧e2)]
c (38 𝒪 + 329 e2)
ToCommonFactor[𝒫 ∨ (𝒪∧e3)]
c (48 𝒪 + 329 e3)

We then take the product of these points (ignoring the weights) to form the plane.
𝒫 ≡ (𝒪 + (329/13) e1) ∧ (𝒪 + (329/38) e2) ∧ (𝒪 + (329/48) e3)

To verify that this is indeed the same plane, we can check to see if these points are in the original plane. For example:
%[𝒫 ∧ (𝒪 + (329/13) e1)]
0
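The intercept computation can be cross-checked with ordinary linear algebra. This plain-Python sketch (ours, not part of the GrassmannAlgebra package) recovers the same intercepts 329/13, 329/38 and 329/48 from the plane's normal form:

```python
# Recover the axis intercepts of the plane through the three example points.
from fractions import Fraction as F

pts = [(1, 2, 5), (-1, 9, 0), (-7, 6, 4)]
p1, p2, p3 = [list(map(F, p)) for p in pts]

# Normal n and offset d of the plane n . x = d
v1 = [b - a for a, b in zip(p1, p2)]
v2 = [b - a for a, b in zip(p1, p3)]
n = [v1[1]*v2[2] - v1[2]*v2[1],
     v1[2]*v2[0] - v1[0]*v2[2],
     v1[0]*v2[1] - v1[1]*v2[0]]
d = sum(ni * xi for ni, xi in zip(n, p1))

# Axis intercepts are d / n_i
print([d / ni for ni in n])
# -> [Fraction(329, 13), Fraction(329, 38), Fraction(329, 48)]
```

The weights 13, 38, 48 attached to the origin in the weighted points above are exactly the components of the plane's normal, and 329 is its offset.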

Planes in a 4-plane
From the results above, we can expect that a plane in a 4-plane is most economically expressed as the product of three points, each point lying in one of the coordinate 2-planes. For example:
𝒫 ≡ (𝒪 + x1 e1 + x2 e2) ∧ (𝒪 + y2 e2 + y3 e3) ∧ (𝒪 + z3 e3 + z4 e4)

(This is analogous to expressing a line in a 3-plane as the product of two points, each point lying in one of the coordinate 2-planes.) If a plane is expressed in any other form, we can express it in the form above by first determining its points of intersection with the coordinate planes and then taking the exterior product of the resulting points. This leads to the following identity:
𝒫 ≡ (𝒫 ∨ (𝒪∧e1∧e2)) ∧ (𝒫 ∨ (𝒪∧e2∧e3)) ∧ (𝒫 ∨ (𝒪∧e3∧e4))

Planes in an m-plane
A plane in an m-plane is most economically expressed as the product of three points, each point lying in one of the coordinate (m|2)-planes.


𝒫 ≡ (𝒪 + x1 e1 + ⋯ + x̂i1 êi1 + ⋯ + x̂i2 êi2 + ⋯ + xm em) ∧ (𝒪 + y1 e1 + ⋯ + ŷi3 êi3 + ⋯ + ŷi4 êi4 + ⋯ + ym em) ∧ (𝒪 + z1 e1 + ⋯ + ẑi5 êi5 + ⋯ + ẑi6 êi6 + ⋯ + zm em)

Here the notation x̂i êi means that the term is missing from the sum. This formulation indicates that a plane in an m-plane has at most 3(m−2) independent scalar parameters required to describe it.

The coordinates of geometric entities


In the sections above we have discussed some common geometric entities, and various ways in which they can be represented by coordinates in a given basis system. In this section we make some general observations about the coordinates of geometric entities from a Grassmannian point of view.

We can define a geometric entity as an element of an exterior product space which has a specific geometric interpretation. Examples we have already met include points, lines, planes, m-planes, vectors, bivectors, and m-vectors. These entities are characterized by the fact that they are simple. That is, they may be factorized into a product of 1-elements. The 1-element factors are, of course, not unique. However, this is not a shortcoming. Indeed, the power of such a representation for computations owes much to this non-uniqueness. For example, if we know two lines intersect in a point P, we can represent them immediately and expressly as products involving P (say, P∧Q and P∧R).

If an element is not simple, then it is not a candidate for geometric entities of this sort. (We will look at geometric entities represented by non-simple elements in Chapter 7: Exploring Screw Algebra.) Apart from 0-, 1-, (n−1)- and n-elements, an m-element of an n-space is not necessarily simple. Thus in the plane (a three-dimensional linear space), n is equal to 3, and all entities (scalars, points, lines and planes) are simple. In space, n is equal to 4, and hence 2-elements are the only ones that are potentially not simple. This means that only a subset of the 2-elements of space (the simple 2-elements) can represent lines.

These comments extend to a space of any larger number of dimensions. In particular, (n−1)-elements in a space of n dimensions are simple. Hence any (n−1)-element represents a hyperplane. For other elements (apart from the trivial 0- and n-elements), only a subset of them can represent a geometric entity. This subset may be defined by a set of constraints between the coefficients of the element which ensures that it be simple. If we are constructing a geometric entity as the exterior product of known independent points, or as the regressive product of hyperplanes, then we are assured that the result will be simple. In the examples of the previous section, we have seen several cases of this approach. In what follows, we will explore the results that a consideration of simplicity produces.


Lines in space
Consider a general 2-element in a point 3-space.
ℳ3; A = ComposeMElement[2, a]
a1 𝒪∧e1 + a2 𝒪∧e2 + a3 𝒪∧e3 + a4 e1∧e2 + a5 e1∧e3 + a6 e2∧e3

We can attempt to factorize this using ExteriorFactorize.


A1 = ExteriorFactorize[A]
If[{a3 a4 − a2 a5 + a1 a6 ⩵ 0}, a1 (𝒪 − (a4 e2)/a1 − (a5 e3)/a1) ∧ (e1 + (a2 e2)/a1 + (a3 e3)/a1)]

This says that the given factorization is possible if and only if the condition a3 a4 − a2 a5 + a1 a6 ⩵ 0 on the coefficients of A is met.

There is a straightforward way of testing the simplicity of 2-elements not shared by elements of higher grade. A 2-element is simple if and only if its exterior square is zero. (The exterior square of A is A∧A.) In our case, we can write A as A ≡ 𝒪∧b + B, where b is a vector, and B is a bivector. [Since elements of odd grade anti-commute, the exterior square of odd elements will be zero, even if they are not simple. An even element of grade 4 or higher may be of the form of the exterior product of a 1-element with a non-simple 3-element: whence its exterior square is zero without its being simple.] In the case of a 2-element A ≡ 𝒪∧b + B in a point space then, we require

A ∧ A ≡ (𝒪∧b + B) ∧ (𝒪∧b + B) ≡ 2 𝒪∧b∧B ⩵ 0

Thus we have the reduced requirement only that b∧B ⩵ 0. In the case of a point 3-space

b = a1 e1 + a2 e2 + a3 e3; B = a4 e1∧e2 + a5 e1∧e3 + a6 e2∧e3;

Simplifying b∧B gives

%[b ∧ B]
(a3 a4 − a2 a5 + a1 a6) e1∧e2∧e3
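The simplicity test is easy to replicate outside Mathematica. The sketch below (ours; the dict-of-basis-tuples representation is an assumption of this sketch, not the book's) implements just enough of the exterior product to reproduce the b∧B coefficient:

```python
def wedge(u, v):
    """Exterior product of two multivectors given as {basis-tuple: coeff}
    dicts, e.g. {(1,): a1} for a1*e1 and {(1, 2): a4} for a4*e1^e2."""
    out = {}
    for bu, cu in u.items():
        for bv, cv in v.items():
            idx = bu + bv
            if len(set(idx)) < len(idx):
                continue                       # repeated basis factor -> 0
            # sort indices, tracking the sign of the permutation
            sign, idx = 1, list(idx)
            for i in range(len(idx)):
                for j in range(len(idx) - 1 - i):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * cu * cv
    return {k: c for k, c in out.items() if c != 0}

# b = a1 e1 + a2 e2 + a3 e3,  B = a4 e1^e2 + a5 e1^e3 + a6 e2^e3,
# with sample coefficients a1..a6 = 1..6:
a1, a2, a3, a4, a5, a6 = 1, 2, 3, 4, 5, 6
b = {(1,): a1, (2,): a2, (3,): a3}
B = {(1, 2): a4, (1, 3): a5, (2, 3): a6}
print(wedge(b, B))   # -> {(1, 2, 3): 8}, i.e. a3*a4 - a2*a5 + a1*a6
```

With symbolic coefficients the same routine yields the single trivector coefficient a3 a4 − a2 a5 + a1 a6, the constraint found above.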

Hence by these means we arrive at the same condition we obtained through ExteriorFactorize. At this point we should note that these results have not involved any metric concepts. The simple requirement that b∧B be zero has led to the required constraint. On the other hand, if we require from the start that b∧B ⩵ 0, that is, that B contains the factor b, then we can write B ≡ a∧b, and so write A immediately as the product of a point and a vector.


A ≡ 𝒪∧b + a∧b ≡ (𝒪 + a) ∧ b

The vector a may be visualized as the position vector of a point on the line, and the vector b as a vector in the direction of the line. The coefficients of a∧b are the same as the coefficients of the cross product a×b. The cross product is the complement of the exterior product, and is orthogonal to it. (The cross product and orthogonality are metric concepts which will be introduced in the next chapter.) The (Plücker) coordinates of a line in space are often given by the coefficients of the two vectors a and a×b. Composing a and b in basis form, and then calculating their exterior product displays the coefficients of a×b.

{a = &a, b = &b, %[a ∧ b]}
{a1 e1 + a2 e2 + a3 e3, b1 e1 + b2 e2 + b3 e3, (−a2 b1 + a1 b2) e1∧e2 + (−a3 b1 + a1 b3) e1∧e3 + (−a3 b2 + a2 b3) e2∧e3}

Then with the mapping

{e1∧e2, e1∧e3, e2∧e3} → {e3, −e2, e1}

the Plücker line coordinates are given by

{a1, a2, a3, (a2 b3 − a3 b2), (a3 b1 − a1 b3), (a1 b2 − a2 b1)}

And as we would expect, the first three components as a vector are orthogonal to the second three.

{a1, a2, a3}.{(a2 b3 − a3 b2), (a3 b1 − a1 b3), (a1 b2 − a2 b1)} // Expand
0
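This orthogonality is the familiar fact that a×b is perpendicular to a; a quick plain-Python check (ours, independent of the book's Mathematica session):

```python
# The direction part (a1, a2, a3) of the Pluecker coordinates is orthogonal
# to the moment part, which equals the cross product a x b.
import random

for _ in range(100):
    a = [random.randint(-9, 9) for _ in range(3)]
    b = [random.randint(-9, 9) for _ in range(3)]
    cross = [a[1]*b[2] - a[2]*b[1],
             a[2]*b[0] - a[0]*b[2],
             a[0]*b[1] - a[1]*b[0]]
    assert sum(ai * ci for ai, ci in zip(a, cross)) == 0

print("direction and moment parts are orthogonal")
```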

In sum: Despite this common practice of developing the Plücker coordinates in terms of dot and cross products, we can see from the Grassmannian approach with which we began that it is neither necessary nor particularly illuminating. The coordinates of a line do not need to rely on metric notions. And if the geometry is projective, they perhaps should not.

Lines in 4-space
Now consider a general 2-element in a point 4-space.
ℳ4; A = ComposeMElement[2, a]
a1 𝒪∧e1 + a2 𝒪∧e2 + a3 𝒪∧e3 + a4 𝒪∧e4 + a5 e1∧e2 + a6 e1∧e3 + a7 e1∧e4 + a8 e2∧e3 + a9 e2∧e4 + a10 e3∧e4

Factorizing gives


A1 = ExteriorFactorize[A]
If[{a3 a5 − a2 a6 + a1 a8 ⩵ 0, a4 a5 − a2 a7 + a1 a9 ⩵ 0, a4 a6 − a3 a7 + a1 a10 ⩵ 0}, a1 (𝒪 − (a5 e2)/a1 − (a6 e3)/a1 − (a7 e4)/a1) ∧ (e1 + (a2 e2)/a1 + (a3 e3)/a1 + (a4 e4)/a1)]

On the other hand, adopting the method of the previous section, we can write
b = a1 e1 + a2 e2 + a3 e3 + a4 e4; B = a5 e1∧e2 + a6 e1∧e3 + a7 e1∧e4 + a8 e2∧e3 + a9 e2∧e4 + a10 e3∧e4;

Simplifying b∧B gives

%[b ∧ B]
(a3 a5 − a2 a6 + a1 a8) e1∧e2∧e3 + (a4 a5 − a2 a7 + a1 a9) e1∧e2∧e4 + (a4 a6 − a3 a7 + a1 a10) e1∧e3∧e4 + (a4 a8 − a3 a9 + a2 a10) e2∧e3∧e4

Again, we arrive at the same conditions, except that with this approach we have one extra. However, it is easy to show that the number of independent conditions is still only three. Clearly, because this product will always be a trivector, in a space of n vector dimensions it will always have c = n(n−1)(n−2)/6 components, and hence spawn c conditions. On the other hand, if we require from the start that b∧B ⩵ 0, that is, that B contains the factor b, then we can write B ≡ a∧b, and so write A immediately as the product of a point and a vector.
A ≡ 𝒪∧b + a∧b ≡ (𝒪 + a) ∧ b

The vector a may be visualized as the position vector of a point on the line, and the vector b as a vector in the direction of the line. The coefficients of a∧b cannot be expressed as the coefficients of a cross product as before, since the vector space is now 4-dimensional. The (Plücker) coordinates of a line in 4-space must now be given by the coefficients of the vector a and the bivector a∧b. If

b = &b; Cp = {a = &a, %[a ∧ b]}
{a1 e1 + a2 e2 + a3 e3 + a4 e4, (−a2 b1 + a1 b2) e1∧e2 + (−a3 b1 + a1 b3) e1∧e3 + (−a4 b1 + a1 b4) e1∧e4 + (−a3 b2 + a2 b3) e2∧e3 + (−a4 b2 + a2 b4) e2∧e4 + (−a4 b3 + a3 b4) e3∧e4}

Then the Plücker line coordinates are given by

ExtractCoefficients[ΛBasis][Cp]
{a1, a2, a3, a4, −a2 b1 + a1 b2, −a3 b1 + a1 b3, −a4 b1 + a1 b4, −a3 b2 + a2 b3, −a4 b2 + a2 b4, −a4 b3 + a3 b4}


In sum: In a point 4-space, the Grassmannian approach generalizes, whereas the approach developed through the dot and cross products does not.

4.9 Calculation of Intersections


The intersection of two lines in a plane
Suppose we wish to find the point P of intersection of two lines L1 and L2 in a plane. We have seen in the previous section how we could express a line in a plane as the exterior product of two points, and that these points could be taken as the points of intersection of the line with the coordinate axes. First declare the basis of the plane, and then define the lines.
ℳ2
{𝒪, e1, e2}
L1 = (𝒪 + a e1) ∧ (𝒪 + b e2); L2 = (𝒪 + c e1) ∧ (𝒪 + d e2);

To find the point of intersection of these two lines, we first find their common factor. The common factor of two elements can be found by applying ToCommonFactor to their regressive product. Remember that common factors are only determinable up to congruence (that is, to within an arbitrary scalar multiple). Hence the common factor we determine will, in general, be a weighted point.
wP = ToCommonFactor[L1 ∨ L2]
c ((b c − a d) 𝒪 + (a b c − a c d) e1 + (−a b d + b c d) e2)

We can convert this weighted point to a form which separates the weight from the point by applying ToWeightedPointForm.
ToWeightedPointForm[wP]
(b c − a d) c (𝒪 + (a c (b − d))/(b c − a d) e1 + (b (−a + c) d)/(b c − a d) e2)

Or, if we are only interested in the point, we can apply ToPointForm.


P = ToPointForm[wP]
𝒪 + (a c (b − d))/(b c − a d) e1 + (b (−a + c) d)/(b c − a d) e2

To verify that this point lies in both lines, we can take its exterior product with each of the lines and show the results to be zero.


%[{L1 ∧ P, L2 ∧ P}] // Simplify
{0, 0}
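The common-factor result agrees with ordinary analytic geometry: the two lines have intercept equations b x + a y = a b and d x + c y = c d, and the coordinates of the weighted point above are exactly their Cramer-rule solution. A plain-Python sketch of ours, not the book's:

```python
# The intersection point computed by the regressive product, checked against
# the intercept equations of the two lines.
from fractions import Fraction as F
import random

random.seed(1)
for _ in range(50):
    a, b, c, d = (F(random.randint(1, 9)) for _ in range(4))
    det = b * c - a * d
    if det == 0:
        continue                         # parallel lines
    # coordinates read off the weighted point c((bc-ad)O + ... e1 + ... e2)
    x = (a * b * c - a * c * d) / det
    y = (b * c * d - a * b * d) / det
    # L1 through (a,0) and (0,b):  b x + a y = a b
    assert b * x + a * y == a * b
    # L2 through (c,0) and (0,d):  d x + c y = c d
    assert d * x + c * y == c * d

print("intersection point lies on both lines")
```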

In the special case in which the lines are parallel, that is b c - a d = 0, their intersection is no longer a point, but a vector defining their common direction.

The intersection of a line and a plane in a 3-plane


Suppose we wish to find the point P of intersection of a line L and a plane 𝒫 in a 3-plane. We express the line as the exterior product of two points in two coordinate planes, and the plane as the exterior product of the points of intersection of the plane with the coordinate axes. First declare the basis of the 3-plane, and then define the line and plane.

ℳ3
{𝒪, e1, e2, e3}
L = (𝒪 + a e1 + b e2) ∧ (𝒪 + c e2 + d e3); 𝒫 = (𝒪 + e e1) ∧ (𝒪 + f e2) ∧ (𝒪 + g e3);

To find the point of intersection of the line with the plane, we first determine their common factor (a weighted point) by applying ToCommonFactor to their regressive product, and then drop the weight by applying ToPointForm.
P = ToPointForm[ToCommonFactor[L ∨ 𝒫]]
𝒪 + (a e (d f + (c − f) g))/(d e f − (b e − c e + a f) g) e1 + (f (b e (d − g) + c (−a + e) g))/(d e f − (b e − c e + a f) g) e2 − (d (b e + (a − e) f) g)/(d e f − (b e − c e + a f) g) e3

To verify that this point lies in both the line and the plane, we can take its exterior product with each of the line and the plane and show the results to be zero.

Simplify[%[{L ∧ P, 𝒫 ∧ P}]]
{0, 0}

In the special case in which the line is parallel to the plane, their intersection is no longer a point, but a vector defining their common direction. When the line lies in the plane, the result from the calculation will be zero.

The intersection of two planes in a 3-plane


Suppose we wish to find the line L of intersection of two planes 𝒫1 and 𝒫2 in a 3-plane. First declare the basis of the 3-plane, and then define the planes.


ℳ3
{𝒪, e1, e2, e3}
𝒫1 = (𝒪 + a e1) ∧ (𝒪 + b e2) ∧ (𝒪 + c e3); 𝒫2 = (𝒪 + e e1) ∧ (𝒪 + f e2) ∧ (𝒪 + g e3);

To find the line of intersection of the two planes, we first determine their common factor (a line) by applying ToCommonFactor to their regressive product, and then (if we wish) drop the (arbitrary) congruence factor c by setting it equal to unity. Finally, applying GrassmannSimplify collects the terms in a convenient form with the origin 𝒪 factored out.

L = GrassmannSimplify[ToCommonFactor[𝒫1 ∨ 𝒫2] /. c → 1]
𝒪 ∧ (a e (c f − b g) e1 + b f (−c e + a g) e2 + c (b e − a f) g e3) + a b e f (−c + g) e1∧e2 + a c e (b − f) g e1∧e3 + b c (−a + e) f g e2∧e3

This is of course, a 2-element in a 4-space, and hence is not necessarily simple. However we can verify in a number of ways that this result is indeed simple, and therefore does define a line. The simplest way is to apply the GrassmannAlgebra function SimpleQ.
SimpleQ@LD True

Perhaps more convincing is to factorize L so as to display it as the product of a point and a vector.
Lp = ExteriorFactorize[L]
a e (c f − b g) (𝒪 + (b f (−c + g))/(−c f + b g) e2 + (c (b − f) g)/(−c f + b g) e3) ∧ (e1 + (a b f g − b c e f)/(a c e f − a b e g) e2 + (b c e g − a c f g)/(a c e f − a b e g) e3)

We can extract the point and vector as specific expressions by applying the GrassmannAlgebra function ExtractArguments.

{P, V} = ExtractArguments[Lp]
{𝒪 + (b f (−c + g))/(−c f + b g) e2 + (c (b − f) g)/(−c f + b g) e3, e1 + (a b f g − b c e f)/(a c e f − a b e g) e2 + (b c e g − a c f g)/(a c e f − a b e g) e3}

To verify that both the point and the vector lie in both planes, we can take their exterior product with each of the planes and show the results to be zero.

Simplify[%[{𝒫1 ∧ P, 𝒫1 ∧ V, 𝒫2 ∧ P, 𝒫2 ∧ V}]]
{0, 0, 0, 0}

In the special case in which the planes are parallel, their intersection is no longer a line, but a bivector defining their common 2-direction.

Example: The osculating plane to a curve


The problem
Show that the osculating planes at any three points to the curve defined by:
P ≡ 𝒪 + n e1 + n² e2 + n³ e3

intersect at a point coplanar with these three points. Here, n is a scalar parametrizing the curve.

The solution
The osculating plane 𝒫 to the curve at the point P is given by P∧P′∧P″, where P′ and P″ are the first and second derivatives of P with respect to n.

P = 𝒪 + n e1 + n² e2 + n³ e3; P′ = e1 + 2 n e2 + 3 n² e3; P″ = 2 e2 + 6 n e3; 𝒫 = P ∧ P′ ∧ P″;

We can declare the space to be a 3-plane and subscripts of the parameter n to be scalar, and then use GrassmannSimplify to derive the expression for the osculating plane as a function of n. (We divide out a common multiple of 2 to make the resulting expressions a little simpler.)

ℳ3; DeclareExtraScalarSymbols[{n, n_}];
𝒫 = %[𝒫 / 2]
𝒪 ∧ (e1∧e2 + 3 n e1∧e3 + 3 n² e2∧e3) + n³ e1∧e2∧e3

Now select any three points on the curve, P1, P2, and P3.

P1 = 𝒪 + n1 e1 + n1² e2 + n1³ e3; P2 = 𝒪 + n2 e1 + n2² e2 + n2³ e3; P3 = 𝒪 + n3 e1 + n3² e2 + n3³ e3;

The osculating planes at these three points are:

{𝒫1, 𝒫2, 𝒫3} = {𝒫 /. n → n1, 𝒫 /. n → n2, 𝒫 /. n → n3}
{𝒪 ∧ (e1∧e2 + 3 n1 e1∧e3 + 3 n1² e2∧e3) + n1³ e1∧e2∧e3, 𝒪 ∧ (e1∧e2 + 3 n2 e1∧e3 + 3 n2² e2∧e3) + n2³ e1∧e2∧e3, 𝒪 ∧ (e1∧e2 + 3 n3 e1∧e3 + 3 n3² e2∧e3) + n3³ e1∧e2∧e3}

The (weighted) point of intersection of these three planes may be obtained by calculating their common factor. The congruence factor c appears squared because there are two regressive product operations.
wP = ToCommonFactor[𝒫1 ∨ 𝒫2 ∨ 𝒫3]
c² (𝒪 (−9 n1² n2 + 9 n1 n2² + 9 n1² n3 − 9 n2² n3 − 9 n1 n3² + 9 n2 n3²) + e1 (−3 n1³ n2 + 3 n1 n2³ + 3 n1³ n3 − 3 n2³ n3 − 3 n1 n3³ + 3 n2 n3³) + e2 (−3 n1³ n2² + 3 n1² n2³ + 3 n1³ n3² − 3 n2³ n3² − 3 n1² n3³ + 3 n2² n3³) + e3 (−9 n1³ n2² n3 + 9 n1² n2³ n3 + 9 n1³ n2 n3² − 9 n1 n2³ n3² − 9 n1² n2 n3³ + 9 n1 n2² n3³))

The scalar coefficient attached to the origin factors out of the expression, so that we can extract the point from the resulting weighted point to give the point of intersection Q as

Q = ToPointForm[wP]
𝒪 + (1/3) (n1 + n2 + n3) e1 + (1/3) (n2 n3 + n1 (n2 + n3)) e2 + n1 n2 n3 e3

Finally, to show that this point of intersection Q is coplanar with the points P1, P2, and P3, we compute their exterior product.

Simplify[%[P1 ∧ P2 ∧ P3 ∧ Q]]
0

This proves the original assertion.
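The assertion can also be verified with elementary coordinate geometry: the osculating plane at parameter n has equation 3n²x − 3ny + z = n³, and the intersection point Q read off above must satisfy all three plane equations and be coplanar with the three curve points. A plain-Python sketch of ours, not the book's Mathematica:

```python
# Numeric verification of the osculating-plane example.
from fractions import Fraction as F

n1, n2, n3 = F(2), F(3), F(5)            # any three distinct parameters

def on_osculating_plane(n, pt):
    # osculating plane at P(n) = (n, n^2, n^3): 3 n^2 x - 3 n y + z = n^3
    x, y, z = pt
    return 3*n**2*x - 3*n*y + z == n**3

# The intersection point of the three osculating planes, as derived above
Q = ((n1 + n2 + n3) / 3,
     (n1*n2 + n1*n3 + n2*n3) / 3,
     n1*n2*n3)
assert all(on_osculating_plane(n, Q) for n in (n1, n2, n3))

# Q is coplanar with P(n1), P(n2), P(n3): triple product of differences is 0
P = [(n, n**2, n**3) for n in (n1, n2, n3)]
u, v, w = [tuple(pi - qi for pi, qi in zip(p, Q)) for p in P]
trip = (u[0]*(v[1]*w[2] - v[2]*w[1])
        - u[1]*(v[0]*w[2] - v[2]*w[0])
        + u[2]*(v[0]*w[1] - v[1]*w[0]))
assert trip == 0
print("Q lies on all three osculating planes and is coplanar with the points")
```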

4.10 Decomposition into Components


The shadow
In Chapter 3 we developed a formula which expressed a j-element X as a sum of components, the element being decomposed with respect to a pair of elements α (of grade m) and β (of grade n−m) which together span the whole space:

X ≡ Σ_{i=1}^{2^j} (−1)^{(j+1) G[Xci]} ((α ∨ (Xbi ∧ β)) ∧ Xci) / (α ∨ β)
The first and last terms of this decomposition are:


X ≡ (α ∨ (X ∧ β)) / (α ∨ β) + ⋯ + ((α ∨ X) ∧ β) / (α ∨ β)
Grassmann called the first term the shadow of X on α excluding β.

It can be seen that the last term can be rearranged as the shadow of X on β excluding α.

((α ∨ X) ∧ β) / (α ∨ β) ≡ (β ∨ (X ∧ α)) / (β ∨ α)
If j = 1, the decomposition formula reduces to the sum of just two components, xα and xβ, where xα lies in α and xβ lies in β.

x ≡ xα + xβ ≡ (α ∨ (x ∧ β)) / (α ∨ β) + (β ∨ (x ∧ α)) / (β ∨ α)

4.12
Another variant on this decomposition formula is formula 3.36 derived in Chapter 3:

x ≡ Σ_{i=1}^{n} ((b1 ∧ b2 ∧ ⋯ ∧ x ∧ ⋯ ∧ bn) / (b1 ∧ b2 ∧ ⋯ ∧ bi ∧ ⋯ ∧ bn)) bi

where, in the ith term, x replaces bi in the numerator.
We now explore these formulae with a number of geometric examples, beginning with the simplest case of decomposition in a 2-space.

Decomposition in a 2-space
Decomposition of a vector in a bivector
Suppose we have a vector 2-space defined by the bivector a∧b. We wish to decompose a vector x in the space to give one component in a and the other in b. Applying the decomposition formula above, and noting that x∧b and x∧a are bivectors while (x∧b)/(a∧b) and (x∧a)/(b∧a) are scalars, we can use formula 3.26 and axiom 8 to give:

xa ≡ (a ∨ (x ∧ b)) / (a ∨ b) ≡ ((x ∧ b)/(a ∧ b)) a

xb ≡ (b ∨ (x ∧ a)) / (b ∨ a) ≡ ((x ∧ a)/(b ∧ a)) b

(We are able to replace the coefficients on the right hand sides by (x∧b)/(a∧b) and (x∧a)/(b∧a) because we are in a 2-space. See the section on Exterior Division in Chapter 2.)

Decomposition of a vector in a bivector into two components.

The coefficients of a and b are scalars, showing that xa is congruent to a and xb is congruent to b. If each of the three vectors is expressed in basis form, we can determine these scalars more specifically. For example:

x = x1 e1 + x2 e2; a = a1 e1 + a2 e2; b = b1 e1 + b2 e2;

(x ∧ b)/(a ∧ b) ≡ ((x1 e1 + x2 e2) ∧ (b1 e1 + b2 e2))/((a1 e1 + a2 e2) ∧ (b1 e1 + b2 e2)) ≡ ((x1 b2 − x2 b1) e1∧e2)/((a1 b2 − a2 b1) e1∧e2) ≡ (x1 b2 − x2 b1)/(a1 b2 − a2 b1)

Finally then we can express the original vector x as the required sum of two components, one in a and one in b.
x ≡ ((x1 b2 − x2 b1)/(a1 b2 − a2 b1)) a + ((x1 a2 − x2 a1)/(b1 a2 − b2 a1)) b

We can easily check that the right hand side does indeed reduce to x by composing x, a, and b, and then simplifying:
𝕍2; x = &x; a = &a; b = &b;
((x1 b2 − x2 b1)/(a1 b2 − a2 b1)) a + ((x1 a2 − x2 a1)/(b1 a2 − b2 a1)) b // Simplify
e1 x1 + e2 x2
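In coordinates this decomposition is just a 2×2 linear solve (Cramer's rule), so it is easy to confirm independently. A plain-Python sketch of ours:

```python
# Decompose x = lam*a + mu*b using the exterior-quotient coefficients
# lam = (x1 b2 - x2 b1)/(a1 b2 - a2 b1), mu = (x1 a2 - x2 a1)/(b1 a2 - b2 a1)
from fractions import Fraction as F
import random

random.seed(4)
for _ in range(50):
    x1, x2, a1, a2, b1, b2 = (F(random.randint(-9, 9)) for _ in range(6))
    det = a1*b2 - a2*b1
    if det == 0:
        continue                     # a and b do not span the plane
    lam = (x1*b2 - x2*b1) / det
    mu = (x1*a2 - x2*a1) / (b1*a2 - b2*a1)
    assert lam*a1 + mu*b1 == x1
    assert lam*a2 + mu*b2 == x2

print("decomposition recovers x")
```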

Decomposition of a point in a line


These same calculations apply mutatis mutandis to decomposing a point P into two component weighted points congruent to two given points Q and R in a line. For variety here, we take a more general and less coordinate-based approach. In the first formulae of the previous section above let:

x → P    a → Q    b → R

then the decomposition becomes

P = ((P∧R)/(Q∧R)) Q + ((Q∧P)/(Q∧R)) R

Decomposition of a point in a line into two weighted points.

Because P is a point, we would expect the sum of its two component weights to be unity.

(P∧R)/(Q∧R) + (Q∧P)/(Q∧R) = (P∧(R - Q))/(Q∧R)

We can see immediately that this is indeed so, because adding them gives the same bound vector in the numerator and the denominator. We can either note this equivalence from our previous discussion of bound vectors, or do a substitution. For a substitution, let r and q be scalars, and v be a vector in (parallel to) the line.

Q = P - q v        R = P + r v

(P∧R)/(Q∧R) = (P∧(P + r v))/((P - q v)∧(P + r v)) = (r P∧v)/((r + q) P∧v) = r/(r + q)

(Q∧P)/(Q∧R) = ((P - q v)∧P)/((r + q) P∧v) = (q P∧v)/((r + q) P∧v) = q/(r + q)

Hence, in these terms, the final decomposition reduces to

P = (r/(r + q)) Q + (q/(r + q)) R
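The weights r/(r+q) and q/(r+q) can be checked numerically by modelling points as homogeneous triples (weight, x, y) and vectors as triples of weight zero. This sketch is illustrative only; the names are not from the GrassmannAlgebra package.

```python
# Model a point as (weight, x, y); vectors have weight 0.
def comb(s, A, t, B):
    # linear combination s*A + t*B of homogeneous elements
    return tuple(s*ai + t*bi for ai, bi in zip(A, B))

P = (1.0, 2.0, 3.0)          # a point in the line
v = (0.0, 1.0, 0.5)          # a vector parallel to the line
q, r = 2.0, 5.0
Q = comb(1, P, -q, v)        # Q = P - q v
R = comb(1, P,  r, v)        # R = P + r v

# claimed decomposition: P = (r/(r+q)) Q + (q/(r+q)) R
P_rebuilt = comb(r/(r+q), Q, q/(r+q), R)
```

The rebuilt point equals P, and the two weights sum to unity, as the argument above requires.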

Decomposition of a vector in a line


Again, these same calculations apply mutatis mutandis to the situation above. Although the result turns out to be trivial in terms of our understanding that a vector may be expressed as the difference of two points, we include the example to emphasize that the decomposition formula applies equally easily to either bound or free entities. Let

P → x

then the decomposition becomes

x = ((x∧R)/(Q∧R)) Q + ((Q∧x)/(Q∧R)) R

Decomposition of a vector in a line into two points of opposite weight.

Because x is a vector, we would expect the sum of its two component weights to be zero.

(x∧R)/(Q∧R) + (Q∧x)/(Q∧R) = (x∧(R - Q))/(Q∧R)

Again, we can see immediately that this is so, because adding them gives a zero bivector in the numerator. For a substitution, let a be a scalar; then R - Q is a vector in (parallel to) the line.

x = a (R - Q)

(x∧R)/(Q∧R) = (a (R - Q)∧R)/(Q∧R) = (-a Q∧R)/(Q∧R) = -a

(Q∧x)/(Q∧R) = (a Q∧(R - Q))/(Q∧R) = (a Q∧R)/(Q∧R) = a

Hence, in these terms, the final decomposition reduces to

x = -a Q + a R

which is of course our original definition for x!

Decomposition in a 3-space
Decomposition of a vector in a trivector
Suppose we have a vector 3-space represented by the trivector α∧β∧γ. We wish to decompose a vector x in this 3-space to give one component in α∧β and the other in γ. Applying the decomposition formula gives:

x_αβ = ((α∧β)∨(x∧γ))/((α∧β)∨γ)        x_γ = (γ∨(x∧α∧β))/(γ∨(α∧β))

Because x∧α∧β is a 3-element, it can be seen immediately that the component x_γ can be written as a scalar multiple of γ, where the scalar is expressed either as a ratio of regressive products (scalars) or of exterior products (n-elements).

x_γ = ((x∨(α∧β))/(γ∨(α∧β))) γ = ((α∧β∧x)/(α∧β∧γ)) γ

The component x_αβ will be a linear combination of α and β. To show this we can expand the expression above for x_αβ using the Common Factor Axiom.

x_αβ = ((α∧β)∨(x∧γ))/((α∧β)∨γ) = ((α∨(x∧γ)) β)/((α∧β)∨γ) - ((β∨(x∧γ)) α)/((α∧β)∨γ)

Rearranging these two terms into a similar form as that derived for x_γ gives:

x_αβ = ((x∧β∧γ)/(α∧β∧γ)) α + ((α∧x∧γ)/(α∧β∧γ)) β

Of course we could have obtained this decomposition directly by using the results of the decomposition formula 3.36 for decomposing a 1-element into a linear combination of the factors of an n-element.

x = ((x∧β∧γ)/(α∧β∧γ)) α + ((α∧x∧γ)/(α∧β∧γ)) β + ((α∧β∧x)/(α∧β∧γ)) γ

Decomposition of a vector in a trivector into three components.
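In a basis, each of the three quotients of 3-elements above is a ratio of 3×3 determinants, so the decomposition can be sketched numerically as follows (illustrative Python, not package code):

```python
def det3(u, v, w):
    # scalar coefficient of u ^ v ^ w relative to e1 ^ e2 ^ e3
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - u[1]*(v[0]*w[2] - v[2]*w[0])
          + u[2]*(v[0]*w[1] - v[1]*w[0]))

def decompose3(x, a, b, g):
    # x = (x^b^g)/(a^b^g) a + (a^x^g)/(a^b^g) b + (a^b^x)/(a^b^g) g
    d = det3(a, b, g)
    return det3(x, b, g)/d, det3(a, x, g)/d, det3(a, b, x)/d

a, b, g = (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (1.0, 1.0, 1.0)
x = (6.0, 5.0, 2.0)
ca, cb, cg = decompose3(x, a, b, g)   # coefficients with x = ca*a + cb*b + cg*g
```

Summing the three weighted factors reproduces x, as the formula requires.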

Decomposition of a point in a plane


Suppose we have a plane represented by the bound bivector Q∧R∧S, where Q, R and S are points. We wish to decompose a point P in this plane in two ways: first, as weighted points at Q, R, and S; second, as a weighted point at S and a weighted point in the line represented by Q∧R. Applying the decomposition formula 3.36 gives:

P = ((P∧R∧S)/(Q∧R∧S)) Q + ((Q∧P∧S)/(Q∧R∧S)) R + ((Q∧R∧P)/(Q∧R∧S)) S

This formula immediately displays P as a sum of weighted points situated at Q, R and S.

Decomposition of a point in a bound bivector into three weighted points.

Alternatively, we can group these terms two at a time to obtain a weighted point in the corresponding line. For example, adding the first two terms together gives us the point which we denote P_QR.

P_QR = ((P∧R∧S)/(Q∧R∧S)) Q + ((Q∧P∧S)/(Q∧R∧S)) R

so that

P = P_QR + ((Q∧R∧P)/(Q∧R∧S)) S

Decomposition of a point in a bound bivector into two weighted points.
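Numerically, the three weights are barycentric coordinates: with points written as homogeneous triples (weight, x, y), each quotient of bound trivectors becomes a ratio of 3×3 determinants. A hedged sketch (names illustrative):

```python
def det3(u, v, w):
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - u[1]*(v[0]*w[2] - v[2]*w[0])
          + u[2]*(v[0]*w[1] - v[1]*w[0]))

def weights(P, Q, R, S):
    # weights of P relative to the bound bivector Q^R^S; points as (1, x, y)
    d = det3(Q, R, S)
    return det3(P, R, S)/d, det3(Q, P, S)/d, det3(Q, R, P)/d

Q, R, S = (1, 0, 0), (1, 4, 0), (1, 0, 4)
P = (1, 1.0, 2.0)
wq, wr, ws = weights(P, Q, R, S)   # wq*Q + wr*R + ws*S reproduces P
```

Since P is a point, the three weights sum to unity, mirroring the point-in-line case above.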

Decomposition in a 4-space
Decomposition of a vector in a 4-vector
Suppose we have a vector 4-space represented by the 4-vector α∧β∧γ∧δ, and we wish to decompose a vector x in this 4-space into two vectors, one in α∧β and one in γ∧δ.

x = x_αβ + x_γδ

Applying the decomposition formula 3.36 gives:

x = ((x∧β∧γ∧δ)/(α∧β∧γ∧δ)) α + ((α∧x∧γ∧δ)/(α∧β∧γ∧δ)) β + ((α∧β∧x∧δ)/(α∧β∧γ∧δ)) γ + ((α∧β∧γ∧x)/(α∧β∧γ∧δ)) δ

Hence we can write

x_αβ = ((x∧β∧γ∧δ)/(α∧β∧γ∧δ)) α + ((α∧x∧γ∧δ)/(α∧β∧γ∧δ)) β

x_γδ = ((α∧β∧x∧δ)/(α∧β∧γ∧δ)) γ + ((α∧β∧γ∧x)/(α∧β∧γ∧δ)) δ

As we have previously remarked, we can rearrange these components in whatever combinations or interpretations (as points or vectors) we require.

Decomposition of a point or vector in an n-space


Decomposition of a point or vector in a space of n dimensions is generally most directly accomplished by using the decomposition formula 3.36 derived in Chapter 3: The Regressive Product. For example, providing at least one of the αi is a point, the decomposition of a point P can be written:

P = Σ (i=1 to n) ((α1∧α2∧⋯∧P∧⋯∧αn)/(α1∧α2∧⋯∧αi∧⋯∧αn)) αi        4.13

where P replaces αi in the numerator of the ith term.

As we have already seen in the examples above, the components are simply arranged in whatever combinations are required to achieve the decomposition. In summary, it is worth re-emphasizing that, although the decomposition of elements appears geometrically quite different depending on whether the elements are interpreted as points or vectors, the formulae are identical. As with all the formulae of the Grassmann algebra, the only difference is the interpretation.
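Formula 4.13 is, in coordinates, exactly Cramer's rule: the ith coefficient is the determinant with the ith factor replaced by P, divided by the base determinant. A small illustrative Python sketch of this general case (the function names are assumptions, not package code):

```python
def det(M):
    # Laplace expansion along the first row (adequate for small n)
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def decompose_n(P, A):
    # coefficients c_i with P = sum_i c_i A[i], per the Cramer-style formula 4.13
    d = det(A)
    return [det([P if i == j else A[i] for i in range(len(A))]) / d
            for j in range(len(A))]

A = [[2.0, 0.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 4.0]]
P = [3.0, 3.0, 8.0]
coeffs = decompose_n(P, A)
```

Whether the rows are read as points or as vectors makes no difference to the computation, echoing the remark above that only the interpretation changes.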

4.11 Projective Space


In perusing this chapter, you may well have remarked the similarities between the Grassmann geometry described above and the geometry of projective space. In this section, we will briefly explore these parallels. We will find that there are no essential differences, and that the inessential differences are ones simply of definition or interpretation. The apparent differences stem in the main from the way in which the intersection of two geometric elements is treated in projective geometry on the one hand, and Grassmann geometry on the other. For example, in the plane, both geometries will say that two lines always intersect. In projective geometry the lines intersect in either a point or a point-at-infinity. In Grassmann geometry the lines intersect in either a point or a vector. We see here that there is no essential difference, but an inessential difference of definition or interpretation: a vector and a point-at-infinity are essentially the same object.


The advantage of the Grassmann geometry interpretation, however, is that one can add a metric to it if one wishes to use metric concepts (for example, the inner product).

The intersection of two lines in a plane


In the Grassmann approach to the geometry of the plane, two (distinct) points define a line, and two (non-parallel) lines intersect in a common point. If the lines are parallel, they intersect in a common vector. Thus, as we intimated in the introduction to this section, we need only rename this vector as a point-at-infinity to recover the projective geometry of 2-space. But this correspondence is not as arbitrary as it might first appear. It turns out that computing the intersection of two lines approaching parallelism in the Grassmann algebra gives a result which can be viewed on the one hand as a point approaching infinity, and on the other hand as a weighted point approaching a vector. That is, the point of intersection of two lines moves off towards infinity as the lines approach parallelism, and in the limit becomes a finite vector. To see this we need only apply the methods previously developed for the calculation of intersections to two such lines. For simplicity, and without loss of generality, we can represent the two lines as follows: L1 by a bound vector through a point P, in the direction of the vector x; and L2 by a bound vector through the point P + y, with the vector of the line differing in direction from x by a scalar multiple ε of y.
The intersection of two lines as they approach parallelism.

"2 ; DeclareExtraScalarSymbols@eD; DeclareExtraVectorSymbols@PD; L1 = P x; L2 = HP + yL Hx - e yL;

We obtain the point of intersection up to congruence by applying ToCommonFactor to the regressive product of the lines.
ToCommonFactor@L1 L2 D c H- x P x y - P e P x yL

Since the congruence factor c and P∧x∧y (the scalar coefficient of this exterior product in any basis) are all scalars, we can say that the intersection is congruent to the weighted point wQ given by

wQ = ε P + x

By factoring out the scalar factor ε, we can also say that the intersection is congruent to the point Q given by

Q = P + x/ε

As ε approaches zero, the lines approach parallelism, the point of intersection Q has an ever-increasing vector x/ε, causing its position to approach infinity, and the weighted point wQ has an ever-decreasing influence from the point P, causing it to approach the (ultimately) common vector x. It may be remarked that in this calculation, the point of intersection does not involve the vector y which specifies the position of L2. In general then, we may conclude that the resulting vector, and hence the resulting point-at-infinity, is the same for all lines parallel to L1.
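This limiting behaviour can be watched numerically. In the sketch below (an illustrative homogeneous-coordinate model, not package code), points are triples (weight, x, y), the join of two elements is a 3D cross product, and so is the meet of two lines; the intersection's weight shrinks with ε while its vector part stays fixed.

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

P  = (1.0, 0.0, 0.0)     # the point P (weight 1)
x  = (0.0, 1.0, 0.0)     # the vector x (weight 0)
Py = (1.0, 0.0, 1.0)     # the point P + y

def intersection(eps):
    L1 = cross(P, x)                     # line through P in direction x
    L2 = cross(Py, (0.0, 1.0, -eps))     # line through P + y, direction x - eps*y
    return cross(L1, L2)                 # homogeneous point of intersection
```

For any eps the result is congruent to eps*P + x; as eps shrinks, the weight component tends to zero, i.e. the intersection tends to the vector x.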

The line at infinity in a plane


A basis for the plane involves the origin 𝒪 and two basis vectors, which we denote, say, by e1 and e2. We can now repeat the previous computation for each of the basis vectors to obtain the weighted points

wQ1 = ε 𝒪 + e1        wQ2 = ε 𝒪 + e2

We can now see how the plane may be said to contain a line approaching a line-at-infinity if we define it by the product of these two weighted points.

L = wQ1∧wQ2 = (ε 𝒪 + e1)∧(ε 𝒪 + e2)

This line has the property that in the limit, it contains all the points-at-infinity of the plane. We can show this by taking its exterior product with a third arbitrary point approaching a point-at-infinity, and obtaining an element that becomes zero in the limit as ε approaches zero.

%[(ε 𝒪 + e1)∧(ε 𝒪 + e2)∧(ε 𝒪 + a e1 + b e2)]
(ε - a ε - b ε) 𝒪∧e1∧e2

In fact, in the limit, the line-at-infinity is the bivector of the plane. We can see this by letting ε approach zero in the expression for L or its expansion:

%[(ε 𝒪 + e1)∧(ε 𝒪 + e2)]
𝒪∧(-ε e1 + ε e2) + e1∧e2

Projective 3-space
In projective 3-space, all the above concepts of the projective plane carry over naturally. We can see the parallels between Projective Geometry and Grassmann Geometry as follows:


Projective geometry: Non-parallel planes intersect in lines. Parallel planes intersect in a line-at-infinity. All the lines-at-infinity belong to a single plane-at-infinity.

Grassmann geometry: Non-parallel planes intersect in lines. Parallel planes intersect in a bivector. All the bivectors belong to a single trivector.

Homogeneous coordinates
The plane
As we have seen, a point P in the plane may be represented up to congruence by

P = a0 𝒪 + a1 e1 + a2 e2;

The elements of the triplet {a0, a1, a2} are called the homogeneous coordinates of the point P. In projective geometry, the triplet {a0, a1, a2} and the triplet {k a0, k a1, k a2}, where k is a scalar, are defined to represent the same point. In the Grassmann approach, this corresponds to weighted points with different weights also defining the same point. The cobasis to the basis of the plane is

Cobasis
{e1∧e2, -(𝒪∧e2), 𝒪∧e1}

A line in the plane can be represented up to congruence by a linear combination of these cobasis elements

L = b0 e1∧e2 + b1 (-𝒪∧e2) + b2 𝒪∧e1;

The elements of the triplet {b0, b1, b2} are called the homogeneous coordinates of the line L. The equation of the line may then be obtained from the condition that a point lies in the line, that is, that their exterior product is zero.

P∧L == 0

On simplifying the product, we recover the equation of the line in homogeneous coordinates (a0 b0 + a1 b1 + a2 b2 == 0) by viewing the line as fixed and the point as variable.

DeclareExtraScalarSymbols[a_, b_]; %[P∧L] == 0
(a0 b0 + a1 b1 + a2 b2) 𝒪∧e1∧e2 == 0

However, we can also view the point as fixed, and the line as variable, wherein the same form of equation describes a variable line through a fixed point.
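The incidence condition a0 b0 + a1 b1 + a2 b2 == 0 is symmetric in the point and line triplets, which is why either can be held fixed. A minimal illustrative check in Python (names are assumptions):

```python
def incident(point, line):
    # point (a0, a1, a2) lies on line (b0, b1, b2) iff a0 b0 + a1 b1 + a2 b2 == 0
    return sum(a*b for a, b in zip(point, line)) == 0

A, B = (1, 1, 0), (1, 0, 1)      # two points, in homogeneous coordinates
L = (1, -1, -1)                  # the line through A and B (found by inspection)
```

Any weighted combination of A and B, such as A + B = (2, 1, 1), also satisfies the same incidence relation, while a point off the line does not.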

Points and planes in 3-space


A point P in 3-space may be represented up to congruence by


" 3 ; P = a0 " + a1 e1 + a2 e2 + a3 e3 ;

The elements of the quartet {a0, a1, a2, a3} are called the homogeneous coordinates of the point P. The cobasis to the basis of the 3-space is

Cobasis
{e1∧e2∧e3, -(𝒪∧e2∧e3), 𝒪∧e1∧e3, -(𝒪∧e1∧e2)}

Hence now it is planes in the 3-space that can be represented up to congruence by a linear combination of these cobasis elements

Π = b0 e1∧e2∧e3 + b1 (-𝒪∧e2∧e3) + b2 𝒪∧e1∧e3 + b3 (-𝒪∧e1∧e2);

The elements of the quartet {b0, b1, b2, b3} are called the homogeneous coordinates of the plane Π. The equation of the plane may then be obtained from the condition that a point lies in the plane, that is, that their exterior product is zero.

P∧Π == 0

On simplifying the product, we recover the equation of the plane in homogeneous coordinates by viewing the plane as fixed and the point as variable.

%[P∧Π] == 0
(a0 b0 + a1 b1 + a2 b2 + a3 b3) 𝒪∧e1∧e2∧e3 == 0

And as in the case of the plane, we can also view the point as fixed, and the plane as variable, wherein the same form of equation describes a variable plane through a fixed point.

Points and hyperplanes in n-space


The cases of points and lines in 2-space, and points and planes in 3-space, generalize in an obvious manner to points and hyperplanes ((n-1)-planes) in n-space. The reason that the relationships here are so straightforward is that 1-elements and (n-1)-elements are simple. Hence they can immediately represent geometric objects like points and hyperplanes as linear sums of the basis elements. This is not, in general, the case for elements of intermediate grade. For example, in a 3-space, 2-elements are not necessarily simple. Hence lines in space have no such simple form as a linear sum of the basis 2-elements. We can easily compose such a form and check for its simplicity.

A = ComposeMElement[2, a]
a1 𝒪∧e1 + a2 𝒪∧e2 + a3 𝒪∧e3 + a4 e1∧e2 + a5 e1∧e3 + a6 e2∧e3

SimpleQ[A]
If[{a3 a4 - a2 a5 + a1 a6 == 0}, True]

Thus, such a form will only be simple, and thus able to represent a line, if the coefficients are constrained by the relation

a3 a4 - a2 a5 + a1 a6 == 0
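This Plücker-type constraint can be exercised numerically: the exterior product of any two 1-elements satisfies it identically, while a sum such as 𝒪∧e3 + e1∧e2 does not. An illustrative Python sketch (names and coordinate ordering are assumptions):

```python
def is_simple(a1, a2, a3, a4, a5, a6):
    # the constraint a3 a4 - a2 a5 + a1 a6 == 0 from the text
    return a3*a4 - a2*a5 + a1*a6 == 0

def wedge(u, v):
    # coefficients of u ^ v on (O^e1, O^e2, O^e3, e1^e2, e1^e3, e2^e3),
    # with u, v given on the ordered basis (O, e1, e2, e3)
    return (u[0]*v[1] - u[1]*v[0], u[0]*v[2] - u[2]*v[0], u[0]*v[3] - u[3]*v[0],
            u[1]*v[2] - u[2]*v[1], u[1]*v[3] - u[3]*v[1], u[2]*v[3] - u[3]*v[2])
```

The 2-element with coefficients (0, 0, 1, 1, 0, 0), i.e. 𝒪∧e3 + e1∧e2, fails the test, so it cannot represent a line.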

Duality
The concept of duality in Grassmann algebra corresponds closely to the concept of duality in projective geometry. For example, consider the expression for a line as the exterior product of two points.

L = P∧Q

This may be read variously as:
The points P and Q lie on the line L.
The line L passes through the points P and Q.
The points P and Q join to form the line L.
The line L is the join of the points P and Q.

In a plane, the dual expression to "a line is the exterior product of two points" is "a point is the regressive product of two lines".

P = L∨M

This may be read variously as:
The lines L and M pass through the point P.
The point P lies on the lines L and M.
The lines L and M intersect to form the point P.
The point P is the intersection of the lines L and M.

The collinearity of three points is defined by

P∧Q∧R == 0

The concurrency of three lines is defined by

L∨M∨N == 0

Duality extends naturally to a space of any dimension. Remember that, just as in projective geometry, these entities and expressions are only defined up to congruence.

Desargues' theorem
As an example of how Grassmann algebra can be used in projective geometry, we prove Desargues' theorem. It should be noted that the methods used are simply applications of the exterior and regressive products and, as are all the applications in this chapter, entirely non-metric. This is in contradistinction to "proofs" of the theorem using the dot and cross products of 3-dimensional vector algebra, which are metric concepts.

Grassmann Algebra Book.nb

276

If two triangles have corresponding vertices joined by concurrent lines, then the intersections of corresponding sides are collinear. That is, if the lines PU, QV, and RW all pass through one point O, then the intersections X = (P∧Q)∨(U∧V), Y = (P∧R)∨(U∧W) and Z = (Q∧R)∨(V∧W) all lie on one line.
Desargues' theorem.

We begin by declaring that we wish to work in the plane.

For simplicity, and without loss of generality, we may take the fixed point O as the origin ". We can then define the points P, Q, and R simply by
P = " + p; Q = " + q; R = " + r;

And since U, V, and W are on the same lines (respectively) through O as are P, Q, and R, we can define U, V, and W using scalar multiples of the position vectors of P, Q, and R.
U = " + a p; V = " + b q; W = " + c r;

To confirm that these triplets of points are collinear, we can compute their exterior product.
8%@" P UD, %@" Q VD, %@" R WD< 80, 0, 0<

The next stage is to compute the intersections X, Y, and Z. We do this by using ToCommonFactor.


X = ToCommonFactor[(P∧Q)∨(U∧V)]
c (a (𝒪∧p∧q) p - a b (𝒪∧p∧q) p - b (𝒪∧p∧q) q + a b (𝒪∧p∧q) q + 𝒪 (a (𝒪∧p∧q) - b (𝒪∧p∧q)))

Here X is a weighted point. We can extract the associated point by using ToPointForm. (Remember that 𝒪∧p∧q here represents the scalar coefficient of 𝒪∧p∧q in any basis that might be chosen, and c is the scalar congruence factor.)

X = ToPointForm[X]
𝒪 + ((a - a b)/(a - b)) p + (((-1 + a) b)/(a - b)) q

Similarly, we can proceed with the other two intersections.

Y = ToCommonFactor[(P∧R)∨(U∧W)] // ToPointForm
𝒪 + ((a - a c)/(a - c)) p + (((-1 + a) c)/(a - c)) r

Z = ToCommonFactor[(Q∧R)∨(V∧W)] // ToPointForm
𝒪 + ((b - b c)/(b - c)) q + (((-1 + b) c)/(b - c)) r

We can now check whether these points are collinear by taking their exterior product. The first level of simplification gives a lengthy expression of the form

F = %[X∧Y∧Z]
𝒪∧((⋯) p∧q + (⋯) p∧r + (⋯) q∧r)

in which each elided coefficient is a rational function of the scalars a, b, and c.

On simplifying the scalar coefficients we get zero as expected, thus proving the theorem.

$[Simplify[F]]
0
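The same non-metric proof pattern can be exercised numerically in homogeneous coordinates, where the join of two points and the meet of two lines are both 3D cross products, and collinearity is vanishing of the scalar triple product. This Python sketch uses assumed example coordinates, with PU, QV, RW concurrent at O = (1, 0, 0) by construction:

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def triple(u, v, w):                 # zero iff the three elements are dependent
    return sum(a*b for a, b in zip(u, cross(v, w)))

join = meet = cross                  # join of points; meet of lines

P, Q, R = (1, 1, 0), (1, 0, 1), (1, 1, 1)
U, V, W = (1, 2, 0), (1, 0, 3), (1, 4, 4)   # U, V, W on the lines OP, OQ, OR

X = meet(join(P, Q), join(U, V))
Y = meet(join(P, R), join(U, W))
Z = meet(join(Q, R), join(V, W))     # X, Y, Z come out collinear
```

With integer coordinates the triple product of X, Y, Z is exactly zero, as Desargues' theorem asserts.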

Pappus' theorem
As a second example, we prove Pappus' theorem. The methods are essentially the same as those used for Desargues' theorem in the previous section.
Given two sets of collinear points P, Q, R, and U, V, W, the intersection points X, Y, Z of the line pairs PV and QU, PW and RU, QW and RV, are collinear.

Pappus' theorem.

We begin as before, by declaring that we wish to work in the plane.

As our example, we take the more general case in which the lines through the two sets of collinear points are not parallel. For simplicity, and without loss of generality, we take their intersection point as the origin. We can then define the points P, Q, R, and U, V, W by means of two vectors p and u, and four scalars a, b, c, d.

P = 𝒪 + p;  Q = 𝒪 + a p;  R = 𝒪 + b p;
U = 𝒪 + u;  V = 𝒪 + c u;  W = 𝒪 + d u;

By definition these triplets of points are collinear. The next stage is to compute the intersections X, Y, and Z. We do this by using ToCommonFactor and ToPointForm as in the previous section.


X = ToCommonFactor[(P∧V)∨(Q∧U)] // ToPointForm
𝒪 + ((a (-1 + c))/(-1 + a c)) p + (((-1 + a) c)/(-1 + a c)) u

Y = ToCommonFactor[(P∧W)∨(R∧U)] // ToPointForm
𝒪 + ((b (-1 + d))/(-1 + b d)) p + (((-1 + b) d)/(-1 + b d)) u

Z = ToCommonFactor[(Q∧W)∨(R∧V)] // ToPointForm
𝒪 + ((a b (c - d))/(a c - b d)) p + (((a - b) c d)/(a c - b d)) u

We can now check whether these points are collinear by taking their exterior product. A value of zero proves the theorem.

F = %[X∧Y∧Z] // Simplify
0
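As with Desargues' theorem, a numeric homogeneous-coordinate sketch confirms the result for particular (assumed) positions of the two collinear triples:

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def triple(u, v, w):
    return sum(a*b for a, b in zip(u, cross(v, w)))

join = meet = cross

P, Q, R = (1, 1, 0), (1, 2, 0), (1, 3, 0)   # first collinear triple
U, V, W = (1, 0, 1), (1, 0, 2), (1, 0, 3)   # second collinear triple

X = meet(join(P, V), join(Q, U))
Y = meet(join(P, W), join(R, U))
Z = meet(join(Q, W), join(R, V))            # Pappus: X, Y, Z collinear
```

Again the triple product of X, Y, Z is exactly zero with integer data.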

Projective n-space
A projective n-space is, essentially, a bound n-space; that is, a linear space of n+1 dimensions in which one of the basis elements is interpreted as a point (say, the origin) and the other n basis elements are interpreted as vectors. In projective n-space (n ≥ 2), the concepts of the projective plane carry over naturally. Non-parallel (n-1)-planes intersect in (n-2)-planes. Parallel (n-1)-planes may be said to intersect in an (n-2)-plane-at-infinity. All the (n-2)-planes-at-infinity belong to a single (n-1)-plane-at-infinity (which is in fact the n-vector of the space). Here, a 0-plane is a point, a 1-plane is a line, a 2-plane is a plane, and so on. Two (n-1)-planes may be said to be parallel if their (n-1)-directions are the same (that is, their (n-1)-vectors are congruent). We can see how this works more concretely by thinking in terms of the actual bound elements representing these geometric objects. We can define two (n-1)-planes P and Q by

P = P∧α        Q = Q∧β

where P and Q are points, and α and β are (n-1)-vectors.


The Common Factor Theorem shows that P and Q will have a common (n-1)-element, C say. If α is congruent to β, then C will be congruent to them both, and will be an (n-1)-vector. If α is not congruent to β, then C will be of the form C = R∧γ, where R is a point and γ is an (n-2)-vector, thus representing an (n-2)-plane. Projective space terminology is simply equivalent to defining an (n-1)-vector C as an (n-2)-plane-at-infinity.

All the C from all the possible intersections, being (n-1)-vectors, belong to the n-vector of the space; in projective space terminology: an (n-1)-plane-at-infinity.

In sum
Thus we can see that projective geometry differs in no essential way from the geometry which is a natural consequence of Grassmann's algebra. The superficial differences, as is the case with many mathematical systems covering the same applications, reside in definition or interpretation.

4.12 Regions of Space


Regions of space
Some geometric applications require an assessment of the region in which a point is located. Once we have a test which determines whether a point is inside or outside a region, we can then define the region as the set of points which satisfy the test. In this section we begin with the very simplest of regions, and build up to more complex ones of higher dimension. A closed region of dimension r is bounded by regions of dimension r-1. A closed region is often called a polytope, a generalization of the notion of polygon and polyhedron.

Regions of a plane
Consider a line L in a plane. The line will divide the plane into two regions (half-planes). A half-plane region is an open region if it does not contain any points of the line. Any point P will be located in either one of the half-planes or in the line itself. We can form the exterior product P∧L. This exterior product will vary in weight as P varies over the regions, but will only change sign when P crosses the line. The product (and hence the weight) is of course zero when P is on the line itself. Suppose now we have a reference point Q, not in the line, and we want to find whether P and Q are on the same side of the line. All we need to do is to take the ratio of P∧L to Q∧L. If the ratio is positive, P and Q are on the same side. If the ratio is negative, they are on opposite sides. If the ratio is zero, P is in the line.


Regions of a plane divided by a line L

As can be seen from the figure above, the reason this occurs is that the ordering of the points in the product P∧L is clockwise on one side of the line, and anti-clockwise on the other. (Remember that the line L itself can be expressed as the product of any two points on it.) Because the line L is in both the numerator and denominator of the ratio, the ratio is invariant with respect to how the line is expressed (for example, which two points are chosen to express it). If we know in which region the point Q is located, we can therefore determine the region in which the point P is located. In the sections below we will see how this simple result can be extended in a Boolean manner to regions of higher dimension, and regions with more complex boundaries.
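In homogeneous coordinates, the products P∧L and Q∧L become 3×3 determinants, so the side test reduces to a determinant-sign comparison. A minimal illustrative sketch (the line here is taken as x == 0; names are assumptions):

```python
def det3(u, v, w):
    # coefficient of the trivector u ^ v ^ w; points written as (weight, x, y)
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - u[1]*(v[0]*w[2] - v[2]*w[0])
          + u[2]*(v[0]*w[1] - v[1]*w[0]))

A, B = (1, 0, 0), (1, 0, 1)     # two points fixing the line L = A ^ B (here x == 0)
Q = (1, 1, 0)                   # reference point, off the line

def same_side_as_Q(P):
    # sign of (P ^ L)/(Q ^ L); positive means same side as Q
    return det3(P, A, B) / det3(Q, A, B) > 0
```

The test is independent of which two points on L are chosen, since any such choice rescales numerator and denominator by the same factor.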

Example
As shown in the graphic below, let L be a line through the origin 𝒪 in the direction of the basis vector e2, the reference point Q be at the point 𝒪 + e1, and the points P1 and P2 be at the points 𝒪 + a e1 + e2 and 𝒪 - b e1 + e2 respectively. The scalar coefficients a and b are strictly positive.

Example: Regions of a plane divided by a line

The products of each of the three points with the line are:

Q∧L = (𝒪 + e1)∧𝒪∧e2 = -𝒪∧e1∧e2
P1∧L = (𝒪 + a e1 + e2)∧𝒪∧e2 = -a 𝒪∧e1∧e2
P2∧L = (𝒪 - b e1 + e2)∧𝒪∧e2 = b 𝒪∧e1∧e2

The ratios of P1∧L and P2∧L to the reference product Q∧L are:

(P1∧L)/(Q∧L) = a > 0        (P2∧L)/(Q∧L) = -b < 0

Thus we conclude that P1 is on the same side of the line L as Q, while P2 is on the opposite side.

Regions of a line
Before we explore more complex regions and higher dimensions, let us take a step back and consider a line as the complete space (a 1-space), and a point R in the line as dividing it into two regions. The point R now takes the role of the line L in the previous example. Suppose the 1-space has basis {𝒪, e1}, and for simplicity we take R to be the origin 𝒪. Following the pattern of the example above, we let the reference point Q be the point 𝒪 + e1, and the points P1 and P2 be the points 𝒪 + a e1 and 𝒪 - b e1 respectively. Here, as above, the scalar coefficients a and b are strictly positive.

Example: Regions of a line divided by a point R

The products of each of these three points with the dividing point R are:
Q R H" + e1 L " - " e1 P1 R H" + a e1 L " - a " e1 P2 R H" - b e1 L " b " e1

The quotients of P1 R and P2 R with the reference product QR are:


P1 R QR a >0 P2 R QR -b < 0

Thus we conclude that P1 is on the same side of the dividing point R as Q, while P2 is on the opposite side.


Planar regions defined by two lines


Suppose now we have two intersecting lines L1 and L2 in the plane, and a known reference point Q not on either line. We wish to find in which quadrant defined by the intersecting lines a given point lies. In order to explore this we define the regions by four nominal test points P1, P2, P3, P4, one in each quadrant, and for both lines L1 and L2, making a pair of ratios for each point. For a given region, represented by the point Pi, we can form the ratio as we did above for each of the lines Lj and portray them as follows:

Planar regions defined by two lines

Although, for easier comprehension, the graphic shows the two lines as if they were at right angles and the points as if they were equidistant from the intersection, remember that all the explorations in this chapter are non-metric, and so angles and distances at this stage are undefined. We can summarize the region definitions more succinctly in a table.


Region    (Pi∧L1)/(Q∧L1)    (Pi∧L2)/(Q∧L2)
P1             > 0               > 0
P2             < 0               > 0
P3             < 0               < 0
P4             > 0               < 0

Example
To see how this works, suppose we know that some point Q is in some quadrant, and we would like to know which quadrant another given point P is in. We simply calculate the two ratios and compare their results to the table. We do this for a numerical example below. First define the lines and the point Q:

L1 = (𝒪 + e2)∧(𝒪 + 2 e1 - e2); L2 = (𝒪 - e2)∧(𝒪 + 2 e1 + e2); Q = 𝒪 - 4 e1 + 3 e2;

To see if a point P is in the same quadrant as the reference point Q, we simply check that the signs of both ratios (P∧L1)/(Q∧L1) and (P∧L2)/(Q∧L2) are positive. Below we write a function CheckP to take a random point P and calculate the value of the combined predicate for it.

CheckP := Module[{P}, P = 𝒪 + RandomReal[{-5, 5}] e1 + RandomReal[{-5, 5}] e2;
  {P, %[P∧L1]/%[Q∧L1] > 0 && %[P∧L2]/%[Q∧L2] > 0}]

Below we check 10 random points and find that three of them return True for CheckP and are thus in the same quadrant as Q.


Table[CheckP, {10}] // TableForm

𝒪 + 2.50516 e1 - 0.477195 e2     False
𝒪 - 4.78253 e1 + 4.11593 e2      True
𝒪 - 0.511969 e1 + 2.70047 e2     False
𝒪 - 2.45479 e1 + 1.84245 e2      True
𝒪 + 2.7643 e1 - 2.33078 e2       False
𝒪 + 3.40323 e1 - 4.45599 e2      False
𝒪 + 3.68059 e1 - 4.49096 e2      False
𝒪 - 1.29608 e1 + 1.51291 e2      True
𝒪 + 3.90768 e1 - 1.83592 e2      False
𝒪 + 2.52696 e1 - 0.82049 e2      False

Planar regions defined by three lines


Suppose now we have three intersecting lines L1, L2 and L3 in the plane, and a known reference point Q not on any line. We wish to find in which region defined by the intersecting lines a given point lies. We will assume the lines are not parallel and do not all intersect in a single point. We define the regions by seven points P1, P2, P3, P4, P5, P6, P7, one in each region, and for each of the three lines L1, L2 and L3, making a triplet of ratios for each point.

Planar regions defined by three lines

For a given region, represented by the point Pi, we can form the ratio as we did above for each of the lines Lj, and tabulate the regions as follows:


Region    (Pi∧L1)/(Q∧L1)    (Pi∧L2)/(Q∧L2)    (Pi∧L3)/(Q∧L3)
P1             > 0               > 0               > 0
P2             < 0               > 0               > 0
P3             < 0               < 0               > 0
P4             > 0               < 0               > 0
P5             > 0               > 0               < 0
P6             > 0               < 0               < 0
P7             < 0               < 0               < 0

Example
Here are some actual lines and a reference point Q:

L1 = (𝒪 + e2)∧(𝒪 + 2 e1 - e2); L2 = (𝒪 - e2)∧(𝒪 + 2 e1 + e2);
L3 = (𝒪 - e1 - 2 e2)∧(𝒪 + 3 e1 - 2 e2); Q = 𝒪 - 3 e1 + e2;

To see if a point P is in the centre triangle (the region in which P4 lies), we check that the signs of the three ratios are +, -, + according to the table above, and hence write the predicate CheckP4 for this triangular region as

CheckP4 := Module[{P}, P = 𝒪 + RandomReal[{-3, 3}] e1 + RandomReal[{-3, 3}] e2;
  {P, %[P∧L1]/%[Q∧L1] > 0 && %[P∧L2]/%[Q∧L2] < 0 && %[P∧L3]/%[Q∧L3] > 0}]

Below we check 10 random points and find that only one of them returns True for CheckP4 and is thus in the centre triangle.


Table[CheckP4, {10}] // TableForm

𝒪 - 0.830005 e1 - 0.848417 e2     False
𝒪 - 2.94884 e1 - 2.22242 e2       False
𝒪 + 2.77172 e1 + 0.3699 e2        False
𝒪 + 0.13623 e1 - 0.109629 e2      False
𝒪 - 2.48096 e1 - 1.96646 e2       False
𝒪 - 2.17117 e1 + 0.0203384 e2     False
𝒪 + 1.02258 e1 + 1.51379 e2       False
𝒪 + 0.506026 e1 - 0.54325 e2      True
𝒪 - 1.15566 e1 + 0.842319 e2      False
𝒪 + 2.25353 e1 - 1.15175 e2       False

Creating a pentagonal region


In this section we explore a simple example of a pentagon in the plane defined by 5 (non-parallel) lines. Each line is defined by the exterior product of two points. If we know the vertices of the pentagon, then the most obvious way to construct these lines is as products of pairs of adjacent vertices. However, it is just as easy if all we know are the lines forming the sides of the pentagon; and it is more general, since a line can be constructed as the product of any two points on a side or its extension.

In the section which follows this one, we will extend this example to look at the construction of a 5-star. To make use of common geometry in the two examples, we will define the sides of the pentagon in this example by the vertices of the 5-star (as shown below). We locate the reference point Q in the middle of the pentagon.

Each half-plane in the diagram, on the side of its line which includes the reference point Q, is coloured an opaque grey. Regions which overlap therefore show themselves in a darker grey. There are four shades of grey shown. The pentagonal region is the overlap of all five half-planes, and hence shows in the darkest grey. The lightest shade is the overlap of two half-planes.


Creating a pentagonal region by intersecting the half-planes which include the point Q.

To be specific, we define the points and lines involved.


Q = 𝒪; P1 = 𝒪 + 2 e2; P2 = 𝒪 + 2 e1 + e2; P3 = 𝒪 + 2 e1 - 2 e2; P4 = 𝒪 - 2 e1 - 2 e2; P5 = 𝒪 - 2 e1 + e2;
L1 = P1 ∧ P3; L2 = P2 ∧ P4; L3 = P3 ∧ P5; L4 = P4 ∧ P1; L5 = P5 ∧ P2;

The pentagon in the star


The pentagon internal to the 5-star is the region (including the boundary) defined by the set of points P such that the predicate FiveStarPentagon below is True.
FiveStarPentagon[P_] := And[𝒢[P ∧ L1]/𝒢[Q ∧ L1] ≥ 0, 𝒢[P ∧ L2]/𝒢[Q ∧ L2] ≥ 0, 𝒢[P ∧ L3]/𝒢[Q ∧ L3] ≥ 0, 𝒢[P ∧ L4]/𝒢[Q ∧ L4] ≥ 0, 𝒢[P ∧ L5]/𝒢[Q ∧ L5] ≥ 0];
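The pentagon predicate can be mirrored numerically in plain Python (our own helper names, not package functions), using the determinant form of P ∧ L and the five lines just defined:

```python
# Numeric sketch of FiveStarPentagon: P belongs to the pentagon when it lies
# on the Q-side of (or on) every one of the five lines.

def side(P, A, B):
    """Coefficient of the 3-element P ^ A ^ B (points given as (x, y))."""
    (px, py), (ax, ay), (bx, by) = P, A, B
    return (ax - px) * (by - py) - (ay - py) * (bx - px)

P1, P2, P3, P4, P5 = (0, 2), (2, 1), (2, -2), (-2, -2), (-2, 1)
Q = (0, 0)
lines = [(P1, P3), (P2, P4), (P3, P5), (P4, P1), (P5, P2)]  # L1 .. L5

def five_star_pentagon(P):
    return all(side(P, A, B) / side(Q, A, B) >= 0 for A, B in lines)

print(five_star_pentagon(Q))        # True:  Q is inside the pentagon
print(five_star_pentagon((0, 2)))   # False: the 5-star vertex P1 is outside
```

A point such as (1/2, 1), which we will later identify as a pentagon vertex, also returns True, since the boundary is included.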

For example, the reference point Q belongs to the pentagon.




FiveStarPentagon[Q]
True

The inside of the pentagon


The inside of the 5-star pentagon (excluding the boundary) is defined by replacing the ≥ sign by the > sign in the formula above.
FiveStarPentagonInside[P_] := And[𝒢[P ∧ L1]/𝒢[Q ∧ L1] > 0, 𝒢[P ∧ L2]/𝒢[Q ∧ L2] > 0, 𝒢[P ∧ L3]/𝒢[Q ∧ L3] > 0, 𝒢[P ∧ L4]/𝒢[Q ∧ L4] > 0, 𝒢[P ∧ L5]/𝒢[Q ∧ L5] > 0];

The boundary of the pentagon


The boundary of the 5-star pentagon is defined as the pentagon without its inside.
FiveStarPentagonBoundary[P_] := FiveStarPentagon[P] && Not[FiveStarPentagonInside[P]]

To check, we choose three points one inside, one on, and one outside the pentagon boundary. For the point on the boundary we choose the point M, say, mid-way between P2 and P5 . For the other two we perturb the position of M by a small vertical displacement e. As expected, only the point M on the boundary returns True.
M = (P2 + P5)/2; e = .000001 e2;
FiveStarPentagonBoundary /@ {M - e, M, M + e}
{False, True, False}

Note that the pentagon boundary cannot be defined by a Boolean expression of predicates in which the ratios are zero, because the lines from which the boundary is formed extend beyond the boundary.

The outside of the pentagon


The plane outside the 5-star pentagon is defined simply by the negation of the FiveStarPentagon predicate. Since FiveStarPentagon includes the boundary, FiveStarPentagonOutside excludes it.
FiveStarPentagonOutside[P_] := Not[FiveStarPentagon[P]]

For example, the vertices of the 5-star are outside the pentagon.


FiveStarPentagonOutside /@ {P1, P2, P3, P4, P5}
{True, True, True, True, True}

The vertices of the pentagon


Although in this example it has been unnecessary to know the vertices of the pentagon, we can of course obtain these points by applying ToCommonFactor to each of the pairs of intersecting lines.
{A5, A1, A2, A3, A4} = ToPointForm[ToCommonFactor[#]] & /@ {L1 ∨ L2, L2 ∨ L3, L3 ∨ L4, L4 ∨ L5, L5 ∨ L1}

{𝒪 + (10 e1)/11 + (2 e2)/11, 𝒪 - e2/2, 𝒪 - (10 e1)/11 + (2 e2)/11, 𝒪 - e1/2 + e2, 𝒪 + e1/2 + e2}
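The regressive product of two lines can also be computed numerically in homogeneous coordinates: writing a point O + x e1 + y e2 as the triple (1, x, y), both the line through two points and the intersection point of two lines are cross products. This plain-Python sketch (not package code) reproduces the weighted intersection points, with the weight playing the role of the congruence factor:

```python
from fractions import Fraction

def cross(u, v):
    """Join of two points, or meet of two lines, in homogeneous coordinates."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def pt(x, y):
    """The point O + x e1 + y e2 as a weighted triple (1, x, y)."""
    return (Fraction(1), Fraction(x), Fraction(y))

P1, P2, P3, P4, P5 = pt(0, 2), pt(2, 1), pt(2, -2), pt(-2, -2), pt(-2, 1)
L1, L2, L3, L4, L5 = (cross(P1, P3), cross(P2, P4), cross(P3, P5),
                      cross(P4, P1), cross(P5, P2))

def meet(L, M):
    """Intersection point of two lines, with the weight divided out."""
    w, x, y = cross(L, M)
    return (x / w, y / w)

print(meet(L1, L2))   # the vertex A5 = (10/11, 2/11)
print(meet(L2, L3))   # the vertex A1 = (0, -1/2)
```

Retaining the exact rational weights until the final division avoids the rounding that floating-point coordinates would introduce, echoing the remark above about keeping the weights in computations.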

We confirm that these are on the boundary of the pentagon:


FiveStarPentagonBoundary /@ {A5, A1, A2, A3, A4}
{True, True, True, True, True}

Creating a 5-star region


We now define the 5-star region itself. That is, a Boolean function of the point P which will yield True if P is inside the 5-star or on its boundary, and False otherwise. The Boolean function will of course also contain, as parameters, the reference point Q and the lines defining the 5-star. The vertices of the 5-star are the same as the points we chose in the pentagon example in the previous section, and the edges of the 5-star are segments of the lines we chose.

There are many ways of creating a Boolean expression to define the 5-star. The most conceptually straightforward way is to use either a disjunction or conjunction of sub-regions - each of which is itself defined by a Boolean expression. A disjunction (Boolean Or) would be used when the region being defined is composed of one or other of the sub-regions. A conjunction (Boolean And) would be used when the region being defined is the overlap of the sub-regions.

We avoid discussing solutions requiring knowledge of the vertices of the inner pentagon, as these may not be known (although of course, as shown in the previous section, they can be easily calculated using the regressive product). However, in the diagram below we name them as points Ai (opposite to Pi ) in order to explain our construction.


Building the formula

The triangle


We view the 5-star as two polygons: the triangle P2 P5 A1 , and the quadrilateral P1 P3 A1 P4 . The triangular region can be defined as the intersection of the three half-planes which are defined by the lines L2 , L3 and L5 and which include the point Q. This region is shown in the darkest grey in the figure below.

Triangle[P_] := And[𝒢[P ∧ L2]/𝒢[Q ∧ L2] ≥ 0, 𝒢[P ∧ L3]/𝒢[Q ∧ L3] ≥ 0, 𝒢[P ∧ L5]/𝒢[Q ∧ L5] ≥ 0]

Creating a 5-star region Stage I: Creating the triangular region P2 P5 A1

The quadrilateral
The region defined by the quadrilateral can be viewed as the intersection of the (infinite) region below the lines L1 and L4 , and the (infinite) region above the lines L2 and L3 . The region below the lines L1 and L4 can be defined by the conjunction of the regions on the positive side (relative to Q) of the lines L1 and L4 .


BelowL1L4[P_] := And[𝒢[P ∧ L1]/𝒢[Q ∧ L1] ≥ 0, 𝒢[P ∧ L4]/𝒢[Q ∧ L4] ≥ 0]

Creating a 5-star region Stage IIa: Creating the region under the lines L1 and L4

The region above the lines L2 and L3 can be defined by the disjunction of the regions on the positive side (relative to Q) of the lines L2 and L3 . This region is shown in the figure below with various opacities of red.


AboveL2L3[P_] := Or[𝒢[P ∧ L2]/𝒢[Q ∧ L2] ≥ 0, 𝒢[P ∧ L3]/𝒢[Q ∧ L3] ≥ 0]

Creating a 5-star region Stage IIb: Creating the region above the lines L2 and L3

The quadrilateral is then the conjunction of these two regions.


Quadrilateral[P_] := And[BelowL1L4[P], AboveL2L3[P]]

The 5-star
And the 5-star is the disjunction of the triangle and the quadrilateral.
FiveStar[P_] := Or[Triangle[P], Quadrilateral[P]]

Testing the formula


To test this formula, we can choose test points on, inside and outside the 5-star. The vertices are on the 5-star, so should all yield True.
FiveStar /@ {P1, P2, P3, P4, P5}
{True, True, True, True, True}

And so also are the vertices of the interior pentagon explored in the previous section.


FiveStar /@ {A5, A1, A2, A3, A4}
{True, True, True, True, True}

We can add small perturbations to these vertices to create points just outside the 5-star. These should all yield False.
d = .000001 e1; e = .000001 e2;
{FiveStar /@ {P1 + e, P2 + e, P3 + e, P4 + e, P5 + e}, FiveStar /@ {A1 - e, A2 - d, A3 + e, A4 + e, A5 + d}}
{{False, False, False, False, False}, {False, False, False, False, False}}

Perturbations to create points just inside the 5-star should all yield True.
{FiveStar /@ {P1 - e, P2 - 2 d - e, P3 - d + e, P4 + d + e, P5 + 2 d - e}, FiveStar /@ {A1 + e, A2 + d, A3 - e, A4 - e, A5 - d}}
{{True, True, True, True, True}, {True, True, True, True, True}}
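The Boolean construction of the 5-star can likewise be spot-checked numerically (plain Python, our own function names), combining the triangle and quadrilateral predicates exactly as in the text:

```python
# Numeric sketch of the 5-star region: triangle OR (below L1, L4 AND above L2 or L3).

def side(P, A, B):
    """Coefficient of the 3-element P ^ A ^ B (points given as (x, y))."""
    (px, py), (ax, ay), (bx, by) = P, A, B
    return (ax - px) * (by - py) - (ay - py) * (bx - px)

P1, P2, P3, P4, P5 = (0, 2), (2, 1), (2, -2), (-2, -2), (-2, 1)
Q = (0, 0)
L1, L2, L3, L4, L5 = (P1, P3), (P2, P4), (P3, P5), (P4, P1), (P5, P2)

def ratio(P, L):
    return side(P, *L) / side(Q, *L)

def triangle(P):       # intersection of the Q-side half-planes of L2, L3, L5
    return ratio(P, L2) >= 0 and ratio(P, L3) >= 0 and ratio(P, L5) >= 0

def quadrilateral(P):  # below L1 and L4, and above L2 or L3
    return (ratio(P, L1) >= 0 and ratio(P, L4) >= 0) and \
           (ratio(P, L2) >= 0 or ratio(P, L3) >= 0)

def five_star(P):
    return triangle(P) or quadrilateral(P)

print([five_star(P) for P in (P1, P2, P3, P4, P5)])  # all True: vertices belong
print(five_star((0, 2.01)))                          # False: just beyond P1
```

The tiny perturbation (0, 2.01) plays the same role as the displacement e in the package tests above.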

Defining a general 5-star region formula


To define a general explicit formula for a 5-star region, all we need to do is to collect the regions into one expression.
FiveStar[P_, Q_, L_List] := Or[
  And[𝒢[P ∧ L[[2]]]/𝒢[Q ∧ L[[2]]] ≥ 0, 𝒢[P ∧ L[[3]]]/𝒢[Q ∧ L[[3]]] ≥ 0, 𝒢[P ∧ L[[5]]]/𝒢[Q ∧ L[[5]]] ≥ 0],
  And[
    And[𝒢[P ∧ L[[1]]]/𝒢[Q ∧ L[[1]]] ≥ 0, 𝒢[P ∧ L[[4]]]/𝒢[Q ∧ L[[4]]] ≥ 0],
    Or[𝒢[P ∧ L[[2]]]/𝒢[Q ∧ L[[2]]] ≥ 0, 𝒢[P ∧ L[[3]]]/𝒢[Q ∧ L[[3]]] ≥ 0]]]

This formula is valid for any set of five lines intersecting to form the 5-star, and any reference point Q inside the 5-star region. We test the formula with the points just inside the vertices of the pentagon as above.
FiveStar[#, Q, {L1, L2, L3, L4, L5}] & /@ {A1 + e, A2 + d, A3 - e, A4 - e, A5 - d}
{True, True, True, True, True}

Creating a 5-star pyramid


Having defined a region in the plane, we can easily extend it into three dimensions by choosing points in the third dimension, and converting all the lines to planes by multiplying them by one or another of these points.


Creating a 5-star pyramid.

To create a three-dimensionalized pyramidal-type 5-star we only need to take a single point S above the origin, and extend each of the lines through S to form five intersecting planes. The formula (predicate) for this is given by
FiveStar[P, Q, L ∧ S]

Here, L is the list of lines forming the original 5-star. Since the exterior product operation Wedge is Listable, taking the exterior product of this list of lines with the point S gives the corresponding list of planes through S. We then truncate the (infinite) pyramidal cone thus formed by a sixth plane, equivalent to the plane of the original planar 5-star, by adding an extra predicate term. We define this plane Π as the exterior product of any three of the original 5-star vertices.

Π = P1 ∧ P2 ∧ P3

We must also redefine the location of the reference point Q. Since the original point Q is now on the boundary of the new three dimensional region, we will need to relocate it to an interior point. The required predicate for the truncating plane is then


𝒢[P ∧ Π]/𝒢[Q ∧ Π]

We can now define the solid region FiveStar3D as the conjunction of these two regions.
FiveStar3D[P_] := And[FiveStar[P, Q, L ∧ S], 𝒢[P ∧ Π]/𝒢[Q ∧ Π] ≥ 0]

Here is our original data for the planar 5-star:


P1 = 𝒪 + 2 e2; P2 = 𝒪 + 2 e1 + e2; P3 = 𝒪 + 2 e1 - 2 e2; P4 = 𝒪 - 2 e1 - 2 e2; P5 = 𝒪 - 2 e1 + e2;
L1 = P1 ∧ P3; L2 = P2 ∧ P4; L3 = P3 ∧ P5; L4 = P4 ∧ P1; L5 = P5 ∧ P2; L = {L1, L2, L3, L4, L5};

We now change to a 3-space, and define the two new points, and the new plane.

𝔾3; Q = 𝒪 + e3; S = 𝒪 + 2 e3; Π = P1 ∧ P2 ∧ P3;

To verify the formula, we can first check that the 5-star pyramid contains all its defining points.
FiveStar3D /@ {P1, P2, P3, P4, P5, S}
{True, True, True, True, True, True}

We can also check that points clustered sufficiently close to the origin are inside the region.
Table[FiveStar3D[𝒪 + RandomReal[0.5] e1 + RandomReal[0.5] e2 + RandomReal[0.5] e3], {10}]
{True, True, True, True, True, True, True, True, True, True}

We can add small perturbations to these vertices to create points just outside the 5-star. These should all yield False.
e = .000001 e3;
FiveStar3D /@ ({P1, P2, P3, P4, P5, S} + e)
{False, False, False, False, False, False}
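The side-of-plane predicate used for the truncating plane generalizes the planar determinant directly: in 3-space the coefficient of P ∧ A ∧ B ∧ C is a 4×4 determinant, and its ratio against the reference point Q tests which side of the plane P lies on. A minimal numeric sketch (our own helper names), using the base plane of the pyramid:

```python
# Side-of-plane test in 3-space: the 4x4 determinant with a leading column of
# ones reduces (by row reduction) to the 3x3 determinant of the differences.

def side3(P, A, B, C):
    """Coefficient (up to a fixed basis factor) of P ^ A ^ B ^ C."""
    u = [a - p for a, p in zip(A, P)]
    v = [b - p for b, p in zip(B, P)]
    w = [c - p for c, p in zip(C, P)]
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

# The truncating plane through P1, P2, P3 (the base plane z = 0),
# the interior reference point Q, and the apex S.
plane = ((0, 2, 0), (2, 1, 0), (2, -2, 0))
Q, S = (0, 0, 1), (0, 0, 2)

print(side3(S, *plane) / side3(Q, *plane))           # positive: same side as Q
print(side3((0, 0, -1), *plane) / side3(Q, *plane))  # negative: other side
```

The positive ratio confirms that the apex S satisfies the truncating-plane predicate, while a point below the base does not.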


Summary
In 1-space, a hyperplane is a point. In 2-space, a hyperplane is a line. In 3-space, a hyperplane is a plane. In n-space, a hyperplane is an (n-1)-plane. Hyperplanes divide their space into regions.

If two points P and Q lie on the same side of a hyperplane ℋ, then the ratio R of the exterior products of the two points with the hyperplane is positive:

R = (P ∧ ℋ)/(Q ∧ ℋ) > 0

If P and Q lie on opposite sides of ℋ, then R is negative:

R = (P ∧ ℋ)/(Q ∧ ℋ) < 0

If P lies on ℋ, then R is zero:

R = (P ∧ ℋ)/(Q ∧ ℋ) == 0

The point Q may be considered as a reference point, and the region in which it is located as the reference region. The point P may be considered as a variable point, and the region in which it is located as the region of interest. Regions of space may be defined by Boolean functions of the predicates R > 0, R ≥ 0, R < 0, R ≤ 0, and R == 0.
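In coordinates, this summary amounts to a single (n+1)×(n+1) determinant in every dimension: a hyperplane through n points, wedged with a further point, gives a scalar multiple of the basis n-element. A dimension-independent sketch in plain Python (hypothetical helper names, not package functions):

```python
# Generic same-side test: R = (P ^ H)/(Q ^ H), with the wedge coefficient
# computed as the determinant of homogeneous rows [1, coordinates].

def det(m):
    """Determinant by Laplace expansion (fine for the small n used here)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def wedge_coeff(P, hyperplane):
    """Coefficient of P ^ H for a hyperplane H through n points in n-space."""
    return det([[1, *P]] + [[1, *A] for A in hyperplane])

def same_side(P, Q, hyperplane):
    """R = (P ^ H)/(Q ^ H) > 0 exactly when P and Q lie on the same side."""
    return wedge_coeff(P, hyperplane) / wedge_coeff(Q, hyperplane) > 0

H = [(0, 0, 1), (1, 0, 1), (0, 1, 1)]       # the plane z = 1 in 3-space
print(same_side((0, 0, 0), (5, 5, -3), H))  # True:  both below the plane
print(same_side((0, 0, 2), (5, 5, -3), H))  # False: opposite sides
```

A point lying on the hyperplane makes the determinant vanish, giving the R == 0 case.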

4.13 Geometric Constructions


Geometric expressions
In this chapter, we have been exploring non-metric geometry involving geometric entities: points, lines, planes, ..., hyperplanes. In this section we will explore expressions, called geometric expressions, involving exterior and regressive products of these entities - the exterior product to extend them, and the regressive product to intersect them. When geometric expressions are equated to zero they become geometric equations, leading to corresponding algebraic equations for curves and surfaces of various degrees.

One of the most enticing features of Grassmann algebra applied to geometry which we will also explore in this section, is the idea that some geometric expressions may not only be geometrically depicted, but may also double as prescriptions for their geometric construction.


The types of geometric expressions we will explore are exemplified by that leading to the equation to a conic section in the plane (discussed in the next section).
P ∧ P1 ∧ (L1 ∨ (P0 ∧ (L2 ∨ (P ∧ P2)))) == 0

Here, the factors P0 , P1 , P2 , L1 and L2 are fixed (constant) entities defining the parameters of the conic section. P is considered the independent variable, and since it occurs twice, the expression represents an equation of the second degree. (If P had occurred m times in the expression, then the equation would have been of degree m.)

The reduction of a geometric expression to an algebraic equation for a curve or surface requires that the expression be either a scalar, or an n-element. The expression above is in fact the exterior product of three points, which reduces to a 3-element in the plane (a linear space of 3 dimensions).

Each time a regressive product is simplified to its common factor, for example when two lines intersect, the result is in general a weighted point multiplied by an arbitrary scalar congruence factor (symbolized c in GrassmannAlgebra). The zero on the right hand side of the equation means that we can effectively ignore the weights and congruence factors, and visualize each regressive product as resulting simply in a point. In computations however, it is most effective to retain these weights to avoid coordinates with denominators.

For a geometric expression to result in an n-element it will usually be the exterior product of n points (3 points for the plane, 4 points for space). Geometrically this may be interpreted as the points all belonging to a hyperplane in the space, that is, collinear in the plane, coplanar in 3-space.

For a geometric expression to result in a scalar it will usually be the regressive product of n hyperplanes (3 lines for the plane, 4 planes for 3-space). Geometrically this may be interpreted as the hyperplanes all belonging to a point in the space, that is, three lines intersecting in one point in the plane, four planes intersecting in one point in 3-space.
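The last two observations are easy to verify numerically in homogeneous coordinates (a plain-Python sketch, not package code): with points written as triples (w, x, y), the exterior product of two points (a line) and the regressive product of two lines (a point) are both cross products, and the regressive product of three lines reduces to a 3×3 determinant which vanishes exactly when the lines are concurrent.

```python
def cross(u, v):
    """Join of two points, or meet of two lines, in homogeneous coordinates."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def det3(a, b, c):
    """Scalar regressive product of three lines (up to a constant factor)."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

X = (1, 1, 1)                     # the point O + e1 + e2
A, B, C = (1, 2, 0), (1, 0, 2), (1, 3, 3)

concurrent = [cross(X, A), cross(X, B), cross(X, C)]  # three lines through X
print(det3(*concurrent))          # 0: the lines share the single point X

generic = [cross(A, B), cross(B, C), cross(C, A)]     # sides of a triangle
print(det3(*generic))             # non-zero: no common point
```

Dually, the determinant of three homogeneous points vanishes exactly when the points are collinear.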

Historical Note
Grassmann discusses geometric constructions in Chapter 5 (Applications to Geometry) in his Ausdehnungslehre of 1862, and also in Crelle's Journal volumes XXXI, XXXVI and LII. Probably the most comprehensive additional treatment is given by Alfred North Whitehead in Book IV of his Treatise on Universal Algebra (Chapter IV: Descriptive Geometry and Chapter V: Descriptive Geometry of Conics and Cubics), 1898.

Geometric equations for lines and planes


Before embarking on exploring the more general geometric expressions, we take this opportunity to first review the simplest cases of geometric equations, those defining lines in the plane, and planes in space.


The equation of a line in the plane


Suppose we have a fixed line L through two points in the plane, and we want to derive its normal algebraic equation. Let the line L be defined by the points Pa and Pb .

𝔾2; Pa = 𝒪 + a1 e1 + a2 e2; Pb = 𝒪 + b1 e1 + b2 e2; L = Pa ∧ Pb

(𝒪 + a1 e1 + a2 e2) ∧ (𝒪 + b1 e1 + b2 e2)

The geometric equation to this line is P ∧ L == 0.


P = 𝒪 + x1 e1 + x2 e2; P ∧ L == 0

(𝒪 + x1 e1 + x2 e2) ∧ (𝒪 + a1 e1 + a2 e2) ∧ (𝒪 + b1 e1 + b2 e2) == 0

The algebraic equation to this line is obtained by simplifying this 3-element and extracting its scalar coefficient by dividing through by the basis 3-element of the space
𝒢[P ∧ L]/(𝒪 ∧ e1 ∧ e2) == 0

- a2 b1 + a1 b2 + a2 x1 - b2 x1 - a1 x2 + b1 x2 == 0

Clearly this equation can be rearranged in the more usual forms


(a2 - b2) x1 + (b1 - a1) x2 + (a1 b2 - a2 b1) == 0

x2 == ((b2 - a2) x1 + (a2 b1 - a1 b2))/(b1 - a1)

The first of these is often cast in the "hyperplane" form.


A x1 + B x2 + C == 0
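The determinant route to these coefficients is straightforward to confirm numerically: expanding | 1 x1 x2 ; 1 a1 a2 ; 1 b1 b2 | reproduces A = a2 - b2, B = b1 - a1 and C = a1 b2 - a2 b1. A small plain-Python check (our own function names):

```python
def wedge3(x, a, b):
    """Determinant | 1 x1 x2 ; 1 a1 a2 ; 1 b1 b2 |, the coefficient of P ^ Pa ^ Pb."""
    (x1, x2), (a1, a2), (b1, b2) = x, a, b
    return (a1 * b2 - a2 * b1) - x1 * (b2 - a2) + x2 * (b1 - a1)

def line_coeffs(a, b):
    """The (A, B, C) of A x1 + B x2 + C == 0 for the line through a and b."""
    (a1, a2), (b1, b2) = a, b
    return (a2 - b2, b1 - a1, a1 * b2 - a2 * b1)

a, b = (3, -1), (7, 2)
A, B, C = line_coeffs(a, b)
for x in [(0, 0), (1, 5), (-2, 4)]:
    # the wedge coefficient agrees with the hyperplane form at every point
    assert wedge3(x, a, b) == A * x[0] + B * x[1] + C
print((A, B, C))   # (-3, 4, 13)
```

As expected, both defining points make the wedge coefficient vanish, since each lies on the line.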

The equation of a plane in space


To determine the equation of a plane in 3-space we follow the same process.

𝔾3; Pa = 𝒪 + a1 e1 + a2 e2 + a3 e3; Pb = 𝒪 + b1 e1 + b2 e2 + b3 e3; Pc = 𝒪 + c1 e1 + c2 e2 + c3 e3;
P = 𝒪 + x1 e1 + x2 e2 + x3 e3; Π = Pa ∧ Pb ∧ Pc; P ∧ Π == 0

(𝒪 + x1 e1 + x2 e2 + x3 e3) ∧ (𝒪 + a1 e1 + a2 e2 + a3 e3) ∧ (𝒪 + b1 e1 + b2 e2 + b3 e3) ∧ (𝒪 + c1 e1 + c2 e2 + c3 e3) == 0

Simplifying this expression gives us the equation to a plane in 3-space in the more recognizable form A + B x1 + C x2 + D x3 == 0. Here we introduce the GrassmannAlgebra function ExpandAndCollect for collecting the coefficients of the algebraic equation.


ExpandAndCollect[𝒢[P ∧ Π]/(𝒪 ∧ e1 ∧ e2 ∧ e3), {x1, x2, x3}, 3] == 0

- a3 b2 c1 + a2 b3 c1 + a3 b1 c2 - a1 b3 c2 - a2 b1 c3 + a1 b2 c3 + (a3 b2 - a2 b3 - a3 c2 + b3 c2 + a2 c3 - b2 c3) x1 + (- a3 b1 + a1 b3 + a3 c1 - b3 c1 - a1 c3 + b1 c3) x2 + (a2 b1 - a1 b2 - a2 c1 + b2 c1 + a1 c2 - b1 c2) x3 == 0

The same process applied to a hyperplane in n-space yields, as expected:


C + C1 x1 + C2 x2 + C3 x3 + ⋯ + Cn xn == 0

The geometric equation of a conic section in the plane


The previous section introduced the simplest cases of geometric equations: those of linear elements. We now turn our attention to quadratic elements, exploring how their equations may be generated by constraining the position of a point in a more general manner than we used for the linear elements.

The idea is that an equation of the general form P ∧ H == 0 still holds, where P is a variable point, and H is a hyperplane in the space. But for higher degree curves, H will itself need to be dependent on the point P. Thus for a conic section, P will occur twice in the product, leading ultimately to an equation of the second degree in its coordinates.

The hyperplane in a 2-space is a line, so in this section we are looking for ways in which this line (𝕃a , say) can be nontrivially dependent on P. In what follows, we will use double-struck symbols 𝕃i for line parameters which will not appear in the final expression. We will also use the congruence relation ≅ when defining intersections of lines, since these normally lead to weighted points (although, as we have previously mentioned, the weights are inconsequential in the final result). Thus we begin with
P ∧ 𝕃a == 0

If P is a simple exterior factor of 𝕃a , the equation is true for all P, leading us nowhere. How can 𝕃a depend linearly on P, without P being an (exterior) factor of 𝕃a ? The answer lies in the regressive product. Geometrically, a line is most meaningfully constructed as an exterior product of points (P1 and Q1 , say). Hence

𝕃a ≅ P1 ∧ Q1

So one of these points must depend on P. Suppose it is Q1 . Thus in turn, Q1 must be a regressive product, since in the plane the regressive product of two lines yields a point. Thus
Q1 ≅ L1 ∨ 𝕃b

Again, allowing P to be a simple exterior factor of one of these lines leads to a trivial dependence, so we leave one of them constant (L1 , say) and replace the other, 𝕃b , by an exterior product of points, neither of which is P.


𝕃b ≅ P0 ∧ Q2

As in the previous steps, we leave one factor constant, and express the other as a product (in this case regressive).
Q2 ≅ L2 ∨ 𝕃c

Finally, we are linearly far enough away from P on our chain of lines through points to come back and intersect it non-trivially.
𝕃c ≅ P ∧ P2

We can now collect all these steps together


0 == P ∧ 𝕃a    𝕃a ≅ P1 ∧ Q1    Q1 ≅ L1 ∨ 𝕃b    𝕃b ≅ P0 ∧ Q2    Q2 ≅ L2 ∨ 𝕃c    𝕃c ≅ P ∧ P2

and eliminate 𝕃a , 𝕃b , 𝕃c , Q1 and Q2 to give the final equation for the conic section.

P ∧ P1 ∧ (L1 ∨ (P0 ∧ (L2 ∨ (P ∧ P2)))) == 0    (4.14)

Note that since interchanging the order of factors in an exterior or regressive product changes at most its sign, and that this change is inconsequential to the final result, we can if we wish, rewrite the geometric equation in any of a number of ways, in particular, in reverse order.

((((P2 ∧ P) ∨ L2) ∧ P0) ∨ L1) ∧ P1 ∧ P == 0    (4.15)

The geometric equation as a prescription to construct


This geometric equation, either as a series of steps, or in its final form can be used to guide the construction of points on a conic section. Thus, the equation can be viewed as a prescription to construct a conic section. We begin with the second form of the equation above, and work from left to right.


Constructing a conic section through 5 points

1. Construct the fixed points and lines: P0 , P1 , P2 , L1 , and L2 .
2. Construct a line 𝕃c through P2 to intersect line L2 in point Q2 .
3. Construct a line 𝕃b through Q2 and P0 to intersect line L1 in point Q1 .
4. Construct a line 𝕃a through Q1 and P1 to intersect line 𝕃c in point P.
5. Repeat the construction by drawing a new line 𝕃c through P2 .
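The prescription can be carried out numerically in homogeneous coordinates (plain Python with exact rationals; cross products implement both the exterior product of points and the regressive product of lines). Every point P produced by the steps then satisfies the geometric equation exactly. The fixed points and lines below are our own sample data, not values from the text:

```python
from fractions import Fraction as F

def cross(u, v):
    """Join of two points, or meet of two lines (homogeneous coordinates)."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def det3(a, b, c):
    """Coefficient of the exterior product of three points."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def pt(x, y):
    return (F(1), F(x), F(y))

# Sample fixed entities for the construction (any generic choice will do).
P0, P1, P2 = pt(0, 3), pt(1, 0), pt(4, 1)
L1 = cross(pt(-1, -1), pt(5, 0))
L2 = cross(pt(0, -2), pt(3, 2))

def conic(P):
    """Scalar of the geometric equation P ^ P1 ^ (L1 v (P0 ^ (L2 v (P ^ P2))))."""
    return det3(P, P1, cross(L1, cross(P0, cross(L2, cross(P, P2)))))

# Steps 2-4 of the prescription, for several lines Lc through P2.
for t in (2, 7, -3):
    Lc = cross(P2, pt(t, 0))        # step 2: a line through P2 ...
    Q2 = cross(L2, Lc)              # ... intersecting L2 in Q2
    Q1 = cross(L1, cross(P0, Q2))   # step 3: line Q2 P0 intersects L1 in Q1
    P = cross(cross(Q1, P1), Lc)    # step 4: line Q1 P1 intersects Lc in P
    print(conic(P))                 # 0: every constructed P lies on the conic
```

Because the arithmetic is exact, the geometric equation evaluates to exactly zero at each constructed point; varying the parameter t sweeps out the conic.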

We do of course still need to verify that this formula and construction lead to a scalar second degree equation in two variables. This we do in the next section.

The algebraic equation of a conic section in the plane


To construct the algebraic equation for the conic section we essentially follow the same process as we did with the geometric construction. In simplifying the regressive products, it will be convenient to use the GrassmannAlgebra function CongruenceSimplify (through its shorthand alias). CongruenceSimplify takes a nested expression involving exterior and regressive products, and simplifies it, ignoring any scalar congruence factors c. First, define the variable point P.

𝔾2; DeclareExtraScalarSymbols[x, y]; P = 𝒪 + x e1 + y e2;

1. Define the fixed points and lines: P0 , P1 , P2 , L1 , and L2 .


DeclareExtraScalarSymbols@82, 3, 4, 5, 6, 7, 8, 9, ., +<D;


P0 P1 P2 L1 L2

= " + 6 e1 + 7 e2 ; = " + 2 e1 + 3 e2 ; = " + 4 e1 + 5 e2 ; = H" + 8 e1 L H" + 9 e2 L; = H" + . e1 L H" + + e2 L;

2. Construct a line 𝕃c through P and P2 to intersect line L2 in the (weighted) point Q2 .


Q2 = 1@L2 HP P2 LD H- + ( - , $ + $ x + ( yL # + ( HH- + + $L x + , H- $ + yLL e1 + $ H+ H- ( + xL + H- , + (L yL e2

3. Construct a line 𝕃b through Q2 and P0 to intersect line L1 in the (weighted) point Q1 .


Q1 = 1@L1 HP0 Q2 LD H- - H. H+ ( + , $ - $ x - ( yL + $ H+ H- ( + xL + H- , + (L yLL / H0 H+ ( + , $ - $ x - ( yL + ( HH- + + $L x + , H- $ + yLLLL # + - H0 / $ x + . ( $ x - / ( $ x - + HH. - /L ( x + 0 H/ ( - ( $ + $ xLL + 0 / ( y - 0 ( $ y + , H- H. - /L ( H$ - yL + 0 $ H- / + yLLL e1 + / H. ( HH+ - $L x + , H$ - yLL + 0 $ H+ H- ( + xL + H- , + (L yL - H. H+ ( + , $ - $ x - ( yL + $ H+ H- ( + xL + H- , + (L yLLL e2

4. Construct a line 𝕃a through Q1 and P1 to intersect line 𝕃c in point P. Q1 , P1 and P are thus on the same line, and the left hand side of the equation (which we call #1 ) becomes
#1 = 1@P P1 Q1 D H- 1 - H0 / $ x + . ( $ x - / ( $ x - + HH. - /L ( x + 0 H/ ( - ( $ + $ xLL + 0 / ( y - 0 ( $ y + , H- H. - /L ( H$ - yL + 0 $ H- / + yLLL + 2 / H. ( HH+ - $L x + , H$ - yLL + 0 $ H+ H- ( + xL + H- , + (L yL - H. H+ ( + , $ - $ x - ( yL + $ H+ H- ( + xL + H- , + (L yLLL + y H- H0 / $ x + . ( $ x - / ( $ x - + HH. - /L ( x + 0 H/ ( - ( $ + $ xLL + 0 / ( y - 0 ( $ y + , H- H. - /L ( H$ - yL + 0 $ H- / + yLLL 2 H- - H. H+ ( + , $ - $ x - ( yL + $ H+ H- ( + xL + H- , + (L yLL / H0 H+ ( + , $ - $ x - ( yL + ( HH- + + $L x + , H- $ + yLLLLL x H/ H. ( HH+ - $L x + , H$ - yLL + 0 $ H+ H- ( + xL + H- , + (L yL - H. H+ ( + , $ - $ x - ( yL + $ H+ H- ( + xL + H- , + (L yLLL 1 H- - H. H+ ( + , $ - $ x - ( yL + $ H+ H- ( + xL + H- , + (L yLL / H0 H+ ( + , $ - $ x - ( yL + ( HH- + + $L x + , H- $ + yLLLLLL # e1 e2

Dividing out the basis n-element and collecting the coefficients of x and y.


#2 = ExpandAndCollectB

#1 " e1 e2

, 8x , y <, 2F 0

1+0-/(-2+.-/(+1,0-/$-2,.-/$-1+0-($+ 1,.-($-2+0/($+2,./($-1,-/($+2+-/($+ H- 1 + 0 / ( + 2 + . / ( - 1 + - / ( + + . - / ( + 1 + 0 - $ - 1 , . - $ - 1 , 0 / $ + 2+0/$-2+-/$-10-/$+2.-/$+,.-/$+1+-($-1.-($+ 1 , / ( $ + + 0 / ( $ - 2 . / ( $ - , . / ( $ + 1 - / ( $ - + - / ( $L x + H1 + / ( - + . / ( - 1 + - $ + 1 . - $ + 1 0 / $ - + 0 / $ + + - / $ . - / $ - 1 / ( $ + . / ( $ L x2 + H- 1 , . - ( + 2 + . - ( + 2 + 0 / ( - 2 , . / ( + 1 , - / ( - 1 0 - / ( - + 0 - / ( + 2.-/(-1,0-$+2,.-$+2,-/$-,0-/$-2+-($+10-($+ + 0 - ( $ - , . - ( $ - 2 , / ( $ + 2 0 / ( $ - 2 - / ( $ + , - / ( $L y + H1 . - ( - + . - ( - 1 , / ( - 2 + / ( + 1 0 / ( + , . / ( + + - / ( - . - / ( + 1,-$+2+-$-+0-$-2.-$-20/$+,0/$, - / $ + 0 - / $ - 1 - ( $ + . - ( $ + 2 / ( $ - 0 / ( $L x y + H- 2 . - ( + , . - ( + 2 , / ( - 2 0 / ( - , - / ( + 0 - / ( - 2 , - $ + , 0 - $ + 2 - ( $ - 0 - ( $ L y2 0

To check if this gives the correct form, we can substitute in some values for the coordinates of the points and lines.
#2 . 82 2, 3 1, 4 1, 5 2, 6 1, 7 1, 8 1, 9 0, . 1, + 3< - 3 + 6 x - 3 x2 + 2 x y - y2 0

This is an ellipse.
ContourPlot[-3 + 6 x - 3 x^2 + 2 x y - y^2 == 0, {x, 0, 3}, {y, 0, 3}, GridLines -> Automatic, ImageSize -> {3*72, 3*72}]
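That the curve really is an ellipse can be confirmed from the discriminant of the general conic a x² + b x y + c y² + d x + e y + f = 0, which is an ellipse when b² - 4 a c < 0. A quick check:

```python
# For  -3 + 6 x - 3 x^2 + 2 x y - y^2 == 0  the quadratic part has
# a = -3 (x^2), b = 2 (x y), c = -1 (y^2).
a, b, c = -3, 2, -1
disc = b * b - 4 * a * c
print(disc)        # -8: negative, so the conic is an ellipse
```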

An alternative geometric equation of a conic section in the plane




We can derive an alternative formula which is a little more symmetric. As in the previous derivation, we still take the fundamental form of the equation to be the collinearity of three points.
P0 ∧ Q1 ∧ Q2 == 0

But this time, P0 is a fixed point while Q1 and Q2 are functions of the variable point P. Because we are looking for a more symmetric formula, let us suppose the form for Q1 is the same as that for Q2 in the previous formulation:
Qi ≅ Li ∨ (P ∧ Pi)

Here, the Li and Pi are fixed. The Qi lie on their lines Li . The equation becomes

P0 ∧ (L1 ∨ (P ∧ P1)) ∧ (L2 ∨ (P ∧ P2)) == 0    (4.16)

Thus we can imagine two fixed lines L1 and L2 and three fixed points P0 , P1 , and P2 . The prescription to construct the conic section is as follows:

1. Construct the fixed points and lines: P0 , P1 , P2 , L1 , and L2 .
2. Construct a line through the point P0 to intersect L1 and L2 in Q1 and Q2 respectively.
3. Construct a line through Q1 and P1 .
4. Construct a line through Q2 and P2 .

The first line through P0 , Q1 and Q2 satisfies the condition that P0 ∧ Q1 ∧ Q2 == 0. The second and third lines through Qi and Pi satisfy the conditions that Qi ∧ P ∧ Pi == 0. Thus their intersection gives the point P.

We now have two formulas purporting to describe a conic section: the formula above, and the formula we first derived.
ℱ1 = P ∧ P1 ∧ (L1 ∨ (P0 ∧ (L2 ∨ (P ∧ P2))))
ℱ2 = P0 ∧ (L1 ∨ (P ∧ P1)) ∧ (L2 ∨ (P ∧ P2))

To see if they are the same, we could choose a basis, substitute symbolic coordinates, and check that the same second degree equation results. An alternative method is to prove the identity of the formulas from the Common Factor Theorem by applying CongruenceSimplify to the symbolic expressions in the formulas. To indicate that the symbols for the lines represent elements of grade 2 we underscript them with their grade. To indicate that the symbols for the points represent elements of grade 1 we declare them as vector symbols.

𝔾2; DeclareExtraVectorSymbols[{P, P0, P1, P2}];

A typical computation might then be to find a weighted intersection point:


CongruenceSimplify[L1 ∨ (P ∧ P1)]

P |P1 ∧ L1| - |P ∧ L1| P1

Remember that the bracketing bar expressions are scalar quantities. |Z| is to be interpreted as the coefficient of the n-element Z when expressed in the current basis. See ToCongruenceForm.

We can expand a complete formula in a similar manner to get an alternative expression for the conic section. For example, the first formula is
CongruenceSimplify[P ∧ P1 ∧ (L1 ∨ (P0 ∧ (L2 ∨ (P ∧ P2)))) == 0]

(|P ∧ L2| |P2 ∧ L1| - |P ∧ L1| |P2 ∧ L2|) P ∧ P0 ∧ P1 + |P ∧ L2| |P0 ∧ L1| P ∧ P1 ∧ P2 == 0    (4.17)
This is just one of a number of alternative forms. Expanding the second formula gives a different expression. In order to show that they are in fact equal we need to note that in the formulae there are four points involved, of which only three are independent in the plane. Hence we can arbitrarily express one of them (say P) in terms of the other three. (We use the symbol wP to emphasize that the expression we substitute is actually a weighted point).
wP = a P0 + b P1 + c P2 ;

In this form, the Pi can be visualized as representing a weighted point basis for the plane, and a, b and c as variable coefficients of the weighted point wP in this basis. The formulae for comparison then become
ℱ1 = wP ∧ P1 ∧ (L1 ∨ (P0 ∧ (L2 ∨ (wP ∧ P2))));
ℱ1 = GrassmannCollect[Expand[CongruenceSimplify[ℱ1]]]

(a^2 |P0 ∧ L1| |P0 ∧ L2| + a b |P0 ∧ L1| |P1 ∧ L2| + a c |P0 ∧ L2| |P2 ∧ L1| + b c |P1 ∧ L2| |P2 ∧ L1| - b c |P1 ∧ L1| |P2 ∧ L2|) P0 ∧ P1 ∧ P2


ℱ2 = P0 ∧ (L1 ∨ (wP ∧ P1)) ∧ (L2 ∨ (wP ∧ P2));
ℱ2 = GrassmannCollect[Expand[CongruenceSimplify[ℱ2]]]

(a^2 |P0 ∧ L1| |P0 ∧ L2| + a b |P0 ∧ L1| |P1 ∧ L2| + a c |P0 ∧ L2| |P2 ∧ L1| + b c |P1 ∧ L2| |P2 ∧ L1| - b c |P1 ∧ L1| |P2 ∧ L2|) P0 ∧ P1 ∧ P2

Clearly these two expressions are equal.

ℱ1 == ℱ2
True

Conic sections through five points


Since it is well known that a unique conic section may be constructed through any five points in the plane, we ought to be able to convert the formula above to one involving only these points, rather than involving any lines. Already we can see directly from the formula that the conic must pass through both P1 and P2 . Since we have supposed the lines to be non-parallel, they will intersect in some point, O say. We can then pick two more points, one on each line, R1 and R2 say, with which to define the lines.
Li ≅ O ∧ Ri

But since any points on the lines (except O) will be satisfactory, we can choose them advantageously as the intersections with the lines P0 ∧ Pi .
R1 ≅ L1 ∨ (P0 ∧ P2)    R2 ≅ L2 ∨ (P0 ∧ P1)

The point P0 is then the intersection of the lines P1 ∧ R2 and P2 ∧ R1 .

P0 ≅ (P1 ∧ R2) ∨ (P2 ∧ R1)

On making these substitutions the equation becomes

P0 ∧ (L1 ∨ (P ∧ P1)) ∧ (L2 ∨ (P ∧ P2)) == 0

((P1 ∧ R2) ∨ (P2 ∧ R1)) ∧ ((O ∧ R1) ∨ (P ∧ P1)) ∧ ((O ∧ R2) ∨ (P ∧ P2)) == 0    (4.18)

Note the symmetry. There are five fixed points, O, P1 , P2 , R1 , R2 , and the variable point P. Each point occurs twice. Swapping P1 and P2 while at the same time swapping R1 and R2 leaves the equation unchanged. We can show that these five points satisfy the equation, and thus all lie on the conic section. But since a conic section constructed through five points is unique, this equation represents the general conic section, or general quadric curve in the plane.


We show this as follows. First, we can immediately read off from the equation that it is satisfied when P coincides with either P1 or P2 . To show it for O, R1 and R2 we can expand the regressive products by the Common Factor Theorem, as we did in the previous section, to give a different view of the equation.
DeclareExtraVectorSymbols[{P, O, P1 , P2 , R1 , R2}];
11 = 1[((P1 ∧ R2) ∨ (P2 ∧ R1)) ∧ ((O ∧ R1) ∨ (P ∧ P1)) ∧ ((O ∧ R2) ∨ (P ∧ P2))] == 0

[Output: a sum of four signed terms equated to zero, each term a product of four 3-point brackets such as (O ∧ P ∧ P1)(P ∧ P2 ∧ R2)(P2 ∧ R1 ∧ R2)(O ∧ P1 ∧ R1); every term contains a bracket of the form O ∧ P ∧ X.]
It is clear from the first factor in each of the terms that the equation is satisfied when P is equal to O. But it is not yet clear that the equation is satisfied when P is equal to R1 or R2 . To show this we try applying CongruenceSimplify to a rearranged form where the factors of the regressive products are reversed.
12 = 1[((P2 ∧ R1) ∨ (P1 ∧ R2)) ∧ ((P ∧ P1) ∨ (O ∧ R1)) ∧ ((P ∧ P2) ∨ (O ∧ R2))] == 0

[Output: a sum of four signed terms of 3-point brackets; each term contains both a bracket involving O and P, and a bracket involving P and R1 .]
Now we see that not only is the equation satisfied when P is equal to O, but also when P is equal to R1 . We try again with the factors of only the second and third regressive products reversed.
13 = 1[((P1 ∧ R2) ∨ (P2 ∧ R1)) ∧ ((P ∧ P1) ∨ (O ∧ R1)) ∧ ((P ∧ P2) ∨ (O ∧ R2))] == 0

[Output: a sum of four signed terms of 3-point brackets; each term contains a bracket involving both P and R2 .]
This time we see immediately that the equation is satisfied when P is equal to R2 . Of course it would have been more direct to show these sorts of results by substituting P for the point in question in the equation, and then simplifying. For example, substituting P for R2 in the equation above, then simplifying, gives zero as expected.
1[((P1 ∧ P) ∨ (P2 ∧ R1)) ∧ ((P ∧ P1) ∨ (O ∧ R1)) ∧ ((P ∧ P2) ∨ (O ∧ P))] == 0
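These verifications can also be mirrored numerically outside the GrassmannAlgebra package. In the projective plane the join of two points and the meet of two lines are both given by the cross product of homogeneous 3-coordinates, and the exterior product of three points vanishes exactly when their determinant does. The following Python sketch (the coordinates are arbitrary choices, not taken from the book) confirms that each of the five fixed points satisfies the conic equation:

```python
# Homogeneous-coordinate sketch of the five-point conic equation.
# join(points) and meet(lines) are both the 3-vector cross product;
# the exterior product of three points vanishing is their determinant.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def det3(a, b, c):
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
          - a[1]*(b[0]*c[2] - b[2]*c[0])
          + a[2]*(b[0]*c[1] - b[1]*c[0]))

def conic(P, O, P1, P2, R1, R2):
    # ((P1^R2)v(P2^R1)) ^ ((O^R1)v(P^P1)) ^ ((O^R2)v(P^P2)) as a determinant
    P0 = cross(cross(P1, R2), cross(P2, R1))
    Q1 = cross(cross(O, R1), cross(P, P1))
    Q2 = cross(cross(O, R2), cross(P, P2))
    return det3(P0, Q1, Q2)

# five arbitrarily chosen fixed points (weight 1 in the first slot)
O, P1, P2 = (1, 0, 0), (1, 2, 1), (1, 1, 2)
R1, R2 = (1, 3, -1), (1, -1, 3)
for Q in (O, P1, P2, R1, R2):
    assert conic(Q, O, P1, P2, R1, R2) == 0
```

The determinant vanishes at all five points for any choice of points in general position, mirroring the factor-by-factor argument above.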

We have thus shown that the five fixed points, O, P1 , P2 , R1 , R2 , all lie on the conic section. Here is an hyperbola as a graphic example, plotted from the original formula above.


[Graphic] Constructing an hyperbola through 5 points

And here is a circle; the only difference is the relative location of the five points.


[Graphic] Constructing a circle through 5 points

Dual constructions
The Principle of Duality discussed in Chapter 3 says that to every formula such as
P0 ∧ (L1 ∨ (P ∧ P1)) ∧ (L2 ∨ (P ∧ P2)) == 0

there corresponds a dual formula in which exterior and regressive products are interchanged, and m-elements are interchanged with (n−m)-elements: in this case, points are interchanged with lines. Thus we can write another formula for a conic section as
X1 = L0 ∨ (P1 ∧ (λ ∨ L1)) ∨ (P2 ∧ (λ ∨ L2));

where the Pi are fixed points, the Li are fixed lines, and λ is a variable line. This formula says that three lines intersect at one point. One line L0 is fixed. The other two are of the form
Λi ≡ Pi ∧ (λ ∨ Li)

To see most directly how such a formula translates into an algebraic equation, we take a numerical example with fixed points and lines defined:


P1 = ℴ + 2 e1 + e2 ;  P2 = ℴ + e1 + 2 e2 ;
L0 = (ℴ + 5 e1) ∧ (ℴ + 4 e2);  L1 = ℴ ∧ (ℴ + e1);  L2 = (ℴ − e1) ∧ (ℴ + 3 e2);

We can define the variable line as the product of its intersections with the coordinate axes:
ℬ2; DeclareExtraScalarSymbols[{x, y}]; λ = (ℴ + x e1) ∧ (ℴ + y e2);

Substituting these values into the geometric equation and simplifying gives
Expand[1[X1]] == 0
198 x y − 90 x² y − 21 y² − 80 x y² + 37 x² y² == 0

We can solve for y in terms of x, but remember that these are line coordinates, not point coordinates.
y = 18 (−11 x + 5 x²) / (−21 − 80 x + 37 x²);
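The solution for y can be checked against the line-coordinate equation by exact rational arithmetic. A small Python check (with the coefficients transcribed from the two displays above) confirms that the expression satisfies the equation identically:

```python
from fractions import Fraction as F

def y_of(x):
    # y = 18(-11x + 5x^2) / (-21 - 80x + 37x^2), from the solution above
    return F(18) * (-11*x + 5*x*x) / (-21 - 80*x + 37*x*x)

def equation(x, y):
    # the line-coordinate equation of the conic envelope
    return 198*x*y - 90*x*x*y - 21*y*y - 80*x*y*y + 37*x*x*y*y

# several rational sample values of x (avoiding zeros of the denominator)
for x in (F(1), F(2), F(1, 3), F(-5)):
    assert equation(x, y_of(x)) == 0
```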

By plotting a line for each pair of line coordinates, we see that they form the envelope of a conic section as expected.
Graphics[Table[Line[{{2 x, −y}, {−x, 2 y}}], {x, −20, 20, .05}], PlotRange → {{−1, 3}, {−.5, 3}}]

Constructing conics in space


To construct a conic in 3-space we need only take a 2-space formula such as
P ∧ P1 ∧ (L1 ∨ (P0 ∧ (L2 ∨ (P ∧ P2)))) == 0

and replace the 2-space hyperplanes (the lines) with 3-space hyperplanes (planes). Thus we can rewrite the formula as

P ∧ P1 ∧ (Π1 ∨ (P0 ∧ (Π2 ∨ (P ∧ L)))) == 0


Note that the point P2 becomes the line L in order to make the innermost term into a plane.
1. Construct the fixed points, lines and planes: P0 , P1 , L, Π1 , and Π2 .
2. Construct a plane Πc through L to intersect plane Π2 in line L2 .
L2 ≡ Π2 ∨ Πc    Πc ≡ P ∧ L

4.19

3. Construct a plane Πb through line L2 and point P0 to intersect plane Π1 in line L1 .

L1 ≡ Π1 ∨ Πb    Πb ≡ P0 ∧ L2

4. Construct a plane Πa through line L1 and point P1 to intersect Πc in point P.

P ≡ Πa ∨ Πc    Πa ≡ P1 ∧ L1

5. Repeat the construction by drawing a new plane Πc through L.

We now derive the algebraic equation in the same manner as we have previously for the planar cases. First, define the variable point P and the fixed entities.
ℬ3; DeclareExtraScalarSymbols[{x, y, z}];
P = ℴ + x e1 + y e2 + z e3 ;
DeclareExtraScalarSymbols[{a, b, c, d, f, g, h, j, k, l, m, n, p, q, r, s}];
P0 = ℴ + a e1 + b e2 + c e3 ;  P1 = ℴ + d e1 + f e2 + g e3 ;
L = (ℴ + h e1 + j e2) ∧ (ℴ + k e2 + l e3);
Π1 = (ℴ + m e1) ∧ (ℴ + n e2) ∧ (ℴ + p e3);
Π2 = (ℴ + q e1) ∧ (ℴ + r e2) ∧ (ℴ + s e3);


Next, compute the algebraic formula.

A1 = ExpandAndCollect[1[P ∧ P1 ∧ (Π1 ∨ (P0 ∧ (Π2 ∨ (P ∧ L))))] / (ℴ ∧ e1 ∧ e2 ∧ e3), {x, y, z}, 3]

[Output: a long polynomial in the point coordinates x, y and z, of second degree, with coefficients in the sixteen fixed scalars; too lengthy to reproduce here.]

Substituting some random values for the coordinates of the fixed points, line and planes gives us the equation:
4.753 − 0.900375 z + 2.06412 z² − 12.6175 x + 2.27084 z x + 3.12987 x² + 2.82363 y + 2.80525 z y + 2.39794 x y − 0.486937 y² == 0

The graphic below depicts this conic together with its defining points, line and planes. The construction lines are not shown as they are difficult to portray unambiguously in one view.

[Graphic] Constructing a conic from two points, a line and two planes

Note that the line L and the point P1 are on the surface. This is clear from the geometric equation.
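The join-and-meet steps of this construction can be mirrored with a small generic exterior-algebra sketch in Python (not the GrassmannAlgebra package; the coordinates below are arbitrary choices). Elements are stored as dictionaries from sorted basis-index tuples to coefficients over a 4-dimensional space (origin e0 plus e1, e2, e3), and the regressive product is built from the Euclidean complement, up to an overall congruence factor:

```python
# Minimal exterior algebra over 4 dimensions: points are grade 1,
# lines grade 2, planes grade 3.  Elements: {sorted index tuple: coeff}.

def sort_sign(K):
    # bubble-sort an index sequence, tracking the sign of the permutation
    K, s = list(K), 1
    for i in range(len(K)):
        for j in range(len(K) - 1 - i):
            if K[j] > K[j + 1]:
                K[j], K[j + 1] = K[j + 1], K[j]
                s = -s
    return s, tuple(K)

def wedge(a, b):
    out = {}
    for I, x in a.items():
        for J, y in b.items():
            if set(I) & set(J):
                continue  # repeated factor: exterior product vanishes
            s, K = sort_sign(I + J)
            out[K] = out.get(K, 0) + s * x * y
    return {K: v for K, v in out.items() if v != 0}

def comp(a, n=4):
    # Euclidean complement in an orthonormal basis: e_I -> sign * e_{I'}
    out = {}
    for I, x in a.items():
        J = tuple(i for i in range(n) if i not in I)
        s, _ = sort_sign(I + J)
        out[J] = out.get(J, 0) + s * x
    return out

def meet(a, b):
    # regressive product, correct up to an overall sign
    # (which is all that incidence tests require)
    return comp(wedge(comp(a), comp(b)))

def pt(x, y, z):
    return {(0,): 1, (1,): x, (2,): y, (3,): z}

line = wedge(pt(1, 0, 0), pt(0, 1, 2))                       # join of two points
plane = wedge(wedge(pt(0, 0, 1), pt(1, 1, 1)), pt(2, 0, 3))  # join of three points
X = meet(plane, line)                                        # their intersection point
assert X and not wedge(X, plane) and not wedge(X, line)      # X lies on both
```

The final assertion checks the property used in every construction step above: the regressive product of a plane and a line (not contained in it) is a point incident with both.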


A geometric equation for a cubic in the plane


To this point we have explored just second degree equations in the plane and space. In this section we will explore a geometric formula for a third degree equation (cubic) in the plane. As for the quadratic equation, we will assemble the formula from the condition that three points are collinear, but this time, all three points will depend non-trivially on the variable point P. The point P itself will, instead of being at the intersection of two lines, be at the intersection of three lines. We will write down the proposed formula, and then show that it leads to a cubic equation for the coordinates of the point P. Note carefully that this cubic formula is not a prescription for construction, since the point P is not obtained as the simple intersection of two elements.
X1 = (L1 ∨ (P ∧ P1)) ∧ (L2 ∨ (P ∧ P2)) ∧ (L3 ∨ (P ∧ P3));

As before we define the points and lines in terms of coordinates.


ℬ2; DeclareExtraScalarSymbols[{x, y}];
DeclareExtraScalarSymbols[{a, b, c, d, f, g, h, j, k, l, m, n}];
P = ℴ + x e1 + y e2 ;
P1 = ℴ + a e1 + b e2 ;  P2 = ℴ + c e1 + d e2 ;  P3 = ℴ + f e1 + g e2 ;
K1 = ℴ + h e1 + j e2 ;  K2 = ℴ + k e1 + l e2 ;  K3 = ℴ + m e1 + n e2 ;
L1 = K2 ∧ K3 ;  L2 = K1 ∧ K3 ;  L3 = K2 ∧ K1 ;

A typical factor of the formula, for example the first, simplifies to the weighted point:
Q1 = 1[L1 ∨ (P ∧ P1)]

[Output: a weighted point whose weight and coordinates are linear in x and y.]
Remember that 1 is a shorthand for CongruenceSimplify. It automatically eliminates the congruence factor c arising from the regressive product in the current basis. The complete formula simplifies to


X2 = 1[X1] / (ℴ ∧ e1 ∧ e2)

[Output: the fully expanded condition, a polynomial equation of third degree in x and y, == 0.]

It is easy to check that the expression is indeed a cubic, either by inspection or by application of ExpandAndCollect, as in the previous section. It is instructive perhaps to see a graphical depiction of how the formula works. In the graphic below we plot a typical cubic in the plane defined by the three points P1 , P2 , P3 , and the three lines L1 , L2 , and L3 . The point P is located by the condition that the three points Q1 , Q2 , and Q3 are collinear, where each of the points Qi is defined as the intersection of the line P ∧ Pi with the line Li .
Qi ≡ Li ∨ (P ∧ Pi)

As can be seen from the graphic, the cubic curve passes through nine special points: the three points P1 , P2 , P3 , the three intersections of the three lines L1 , L2 , and L3 , and the intersections Ri of these three lines with the lines through the points Pi taken two at a time.
X1 = (L1 ∨ (P ∧ P1)) ∧ (L2 ∨ (P ∧ P2)) ∧ (L3 ∨ (P ∧ P3))
R1 ≡ L1 ∨ (P2 ∧ P3)    R2 ≡ L2 ∨ (P3 ∧ P1)    R3 ≡ L3 ∨ (P1 ∧ P2)
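With the same homogeneous-coordinate model of the plane as before (join of points and meet of lines both as cross products, collinearity as a determinant), a Python sketch can confirm that all nine special points satisfy the cubic formula; the fixed points below are arbitrary choices:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def det3(a, b, c):
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
          - a[1]*(b[0]*c[2] - b[2]*c[0])
          + a[2]*(b[0]*c[1] - b[1]*c[0]))

def cubic(P, Ps, Ls):
    # (L1 v (P ^ P1)) ^ (L2 v (P ^ P2)) ^ (L3 v (P ^ P3)) as a determinant
    Qs = [cross(L, cross(P, Pt)) for L, Pt in zip(Ls, Ps)]
    return det3(*Qs)

K1, K2, K3 = (1, 3, 0), (1, 0, 3), (1, -2, -2)      # the lines' defining points
Ps = [(1, 1, 0), (1, 0, 1), (1, 1, 1)]              # P1, P2, P3
Ls = [cross(K2, K3), cross(K1, K3), cross(K2, K1)]  # L1, L2, L3
Rs = [cross(Ls[0], cross(Ps[1], Ps[2])),            # R1 = L1 v (P2 ^ P3)
      cross(Ls[1], cross(Ps[2], Ps[0])),            # R2 = L2 v (P3 ^ P1)
      cross(Ls[2], cross(Ps[0], Ps[1]))]            # R3 = L3 v (P1 ^ P2)

# the nine special points: P1, P2, P3; K1, K2, K3; R1, R2, R3
for Q in Ps + [K1, K2, K3] + Rs:
    assert cubic(Q, Ps, Ls) == 0
```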

These special points are displayed as large semi-opaque red points (perhaps behind a blue point).


[Graphic] A cubic equation defined by three points and three lines

Pascal's Theorem
Our discussion up to this point leads us to Pascal's Theorem which may be stated: If a hexagon is inscribed in a conic, then opposite sides intersect in collinear points. In the previous sections we have shown that the geometric equation below, involving five fixed points and a variable point P, leads to the algebraic equation of a conic section:
((P1 ∧ R2) ∨ (P2 ∧ R1)) ∧ ((O ∧ R1) ∨ (P ∧ P1)) ∧ ((O ∧ R2) ∨ (P ∧ P2)) == 0

This is therefore the converse to Pascal's Theorem: If opposite sides of a hexagon intersect in three collinear points, then the hexagon may be inscribed in a conic. To explore Pascal's Theorem further we find it conceptually advantageous to start with a fresh notation. We denote the six points by P1 , P2 , P3 , P4 , P5 , and P6 , and suppose that they are placed on the conic in arbitrary (but of course separate) positions. Let us consider the hexagon formed by these points in order, so that the sides S1 to S6 may be written, starting with P1 as
S1 = P1 ∧ P2 ;  S2 = P2 ∧ P3 ;  S3 = P3 ∧ P4 ;  S4 = P4 ∧ P5 ;  S5 = P5 ∧ P6 ;  S6 = P6 ∧ P1 ;

The pairs of opposite sides can then be intersected to give the collinear points Z1 , Z2 and Z3 .
Z1 = S1 ∨ S4 ;  Z2 = S2 ∨ S5 ;  Z3 = S3 ∨ S6 ;

We will call these points Pascal points. The Pascal line Λ on which these points lie can be written alternatively as


Λ ≡ Z1 ∧ Z2 ≡ Z1 ∧ Z3 ≡ Z2 ∧ Z3

The formula is then constructed from the condition that the points Z1 , Z2 , and Z3 all lie on Λ, that is, their exterior product is zero.
Z1 ∧ Z2 ∧ Z3 == 0

((P1 ∧ P2) ∨ (P4 ∧ P5)) ∧ ((P2 ∧ P3) ∨ (P5 ∧ P6)) ∧ ((P3 ∧ P4) ∨ (P6 ∧ P1)) == 0        4.20

The graphic below depicts these entities for six points on a circle. The sides are shown in dark green and their extensions in light green. The Pascal line for this hexagon is shown in red.

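Pascal's Theorem can be confirmed exactly in the same cross-product model of the plane used in the earlier sketches. The six points below are rational points on the unit circle, obtained from the parametrization (1 + t², 1 − t², 2t), so the collinearity determinant vanishes in exact integer arithmetic:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def det3(a, b, c):
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
          - a[1]*(b[0]*c[2] - b[2]*c[0])
          + a[2]*(b[0]*c[1] - b[1]*c[0]))

# six integer homogeneous points on the circle x^2 + y^2 = w^2
pts = [(1 + t*t, 1 - t*t, 2*t) for t in (0, 1, 2, 3, 4, 5)]
sides = [cross(pts[i], pts[(i + 1) % 6]) for i in range(6)]  # S1..S6
Z = [cross(sides[i], sides[i + 3]) for i in range(3)]        # Pascal points
assert det3(*Z) == 0   # the three Pascal points are collinear
```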

[Graphic] Pascal's theorem: the six points Pi on a circle, their sides Si , and the three collinear Pascal points Zi

As may be observed, this is essentially the same graphic as Constructing a circle through 5 points in the section above on Conic sections through five points, except that the points have been renamed as follows:
P1 → P1 , P2 → P5 , R1 → P4 , R2 → P2 , O → P3 , P → P6 , Q1 → Z3 , Q2 → Z2 , P0 → Z1


Demonstration of Pascal's Theorem in an ellipse


The diagram above demonstrates Pascal's Theorem in a conic in which two opposite sides cross within the conic, hence making the Pascal line intersect it. The following example demonstrates the Theorem for an ellipse in which all the opposite sides intersect outside the conic. We take the ellipse

(x/4)² + (y/3)² == 1

and choose some symmetrically placed points around it:


EllipsePoints = {P1 → ℴ + 3 e2 , P2 → ℴ + 2 e1 + (3/2) √3 e2 , P3 → ℴ + 4 e1 ,
  P4 → ℴ − 3 e2 , P5 → ℴ − 4 e1 , P6 → ℴ − 2 e1 − (3/2) √3 e2};

We defined the sides Si above by


S1 = P1 ∧ P2 ;  S2 = P2 ∧ P3 ;  S3 = P3 ∧ P4 ;  S4 = P4 ∧ P5 ;  S5 = P5 ∧ P6 ;  S6 = P6 ∧ P1 ;

To compute the three Pascal points we take the regressive product of opposite sides, substitute in the values EllipsePoints, and use CongruenceSimplify (in its short form 1) and ToPointForm to do the simplifications.
ℬ2; {Z1 , Z2 , Z3} = ToPointForm[1[{S1 ∨ S4 , S2 ∨ S5 , S3 ∨ S6} /. EllipsePoints]]

[Output: the three Pascal points, with coordinates involving √3.]

Pascal's Theorem says that these three points are collinear, hence their exterior product should be zero.
%[Z1 ∧ Z2 ∧ Z3]
0

The Pascal line may be obtained as the exterior product of any pair of the points.
{%[Z1 ∧ Z2], %[Z1 ∧ Z3], %[Z2 ∧ Z3]}

[Output: three 2-elements, each a scalar multiple of the same line.]


Since the weight is not important, we can divide through by it and simplify to obtain the Pascal line in the form

Λ ≡ (ℴ − 3 √3 e2) ∧ e1

[Graphic] Pascal's theorem in an ellipse

Demonstration of Pascal's Theorem for a regular hexagon


An interesting case occurs when we choose the conic to be a circle and the hexagon to be a regular hexagon, since the pairs of opposite sides are parallel, and hence do not intersect in any finite point. We begin by generating a list of rules for replacing the Pi by the points on a regular hexagon.


RegularHexagonPoints = Table[Pi → ℴ + Cos[i π/3] e1 + Sin[i π/3] e2 , {i, 1, 6}]

{P1 → ℴ + e1/2 + (√3/2) e2 , P2 → ℴ − e1/2 + (√3/2) e2 , P3 → ℴ − e1 ,
 P4 → ℴ − e1/2 − (√3/2) e2 , P5 → ℴ + e1/2 − (√3/2) e2 , P6 → ℴ + e1}

The intersections of the opposite sides of the regular hexagon give

{Z1 , Z2 , Z3} = 1[{S1 ∨ S4 , S2 ∨ S5 , S3 ∨ S6} /. RegularHexagonPoints]

[Output: three vectors, with coordinates involving √3.]

Note that these are all vectors, not points as expected, because the opposite sides are all parallel.

[Graphic] Pascal's theorem for a regular hexagon: opposite sides intersect at points at infinity lying in the same bivector.

These vectors are dependent (as of course they must be in the plane); and consequently, their exterior product is zero.

%[Z1 ∧ Z2 ∧ Z3]
0

Thus Pascal's Theorem is demonstrated in this example to hold for pairs of sides intersecting in points at infinity (vectors). The three points at infinity lie on the same line at infinity. That is, the three vectors lie in the same bivector. (Of course, this must be the case for any three vectors in the plane.)
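The same behaviour can be reproduced in exact integer arithmetic by trading the regular hexagon for any centrally symmetric hexagon inscribed in a conic, here x² + 3y² = 4w² (a stand-in chosen to avoid the √3 of the regular hexagon). Opposite sides are again parallel, so every Pascal point has zero weight, and the three resulting vectors still satisfy the collinearity condition:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def det3(a, b, c):
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
          - a[1]*(b[0]*c[2] - b[2]*c[0])
          + a[2]*(b[0]*c[1] - b[1]*c[0]))

# centrally symmetric integer hexagon on the conic x^2 + 3y^2 = 4 w^2
pts = [(1, 2, 0), (1, 1, 1), (1, -1, 1), (1, -2, 0), (1, -1, -1), (1, 1, -1)]
sides = [cross(pts[i], pts[(i + 1) % 6]) for i in range(6)]
Z = [cross(sides[i], sides[i + 3]) for i in range(3)]
assert all(z[0] == 0 for z in Z)  # zero weight: vectors (points at infinity)
assert det3(*Z) == 0              # they lie on the one line (at infinity)
```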

Pascal lines
Hexagons in a conic
We have seen how Pascal's Theorem establishes a line associated with any hexagon in a conic. Suppose we have fixed the positions of the six points anywhere we wish on the conic, and named them in any order we wish. How many essentially different hexagons (whose sides may of course cross each other) can we form by joining these six points in some order? As before, we name these points P1 , P2 , P3 , P4 , P5 and P6 . Without loss of generality we can start with P1 and join it in any of five ways to the other five points. From that point we then have four ways of forming the next side; then three ways; then two ways. This makes 5×4×3×2 = 120 hexagons. However, for each hexagon P1 Pa Pb Pc Pd Pe there corresponds the same hexagon traversed in the reverse order P1 Pe Pd Pc Pb Pa . Thus there are just 60 essentially different hexagons. We use Mathematica's Permutations function to list them and Union to eliminate the ones traversed in reverse order.
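The count can be checked combinatorially, with Python's itertools standing in for Mathematica's Permutations and Union: fix P1 as the starting vertex, and identify each arrangement with its reversal.

```python
from itertools import permutations

hexagons = set()
for p in permutations((2, 3, 4, 5, 6)):  # the vertices following the fixed P1
    cyc = (1,) + p
    rev = (1,) + p[::-1]                 # the same hexagon traversed backwards
    hexagons.add(min(cyc, rev))          # keep one representative of the pair
assert len(hexagons) == 60
```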


hexagons = Join[{P1}, #] & /@ Union[Permutations[{P2 , P3 , P4 , P5 , P6}], SameTest → (#1 === Reverse[#2] &)]

[Output: the 60 hexagons, each a list of the six points beginning with P1 .]

We take the example of the ellipse and its points from the previous section and display their hexagons.

[Graphic] The 60 hexagons formed from six points on an ellipse

Pascal points
When two opposite sides of a hexagon are extended they intersect in a point which we have called a Pascal point. Pascal's Theorem says that the three Pascal points lie on the one line. We can convert each hexagon in the list of hexagons above to a list of three formulae, one formula for each point.
allPascalPointFormulae = hexagons /. {Pa_, Pb_, Pc_, Pd_, Pe_, Pf_} →
   {(Pa ∧ Pb) ∨ (Pd ∧ Pe), (Pb ∧ Pc) ∨ (Pe ∧ Pf), (Pc ∧ Pd) ∨ (Pf ∧ Pa)}

[Output: 60 triples of Pascal-point formulae, one triple for each hexagon.]

This list is comprised of 3×60 formulae. But some of them define the same point because we can interchange the factors in the regressive product, or interchange the factors in each exterior product, without affecting the result. To reduce these formulae to a list of unique formulae, we can apply GrassmannSimplify to put the products into canonical order, resulting in the duplicates only differing by a sign (which is unimportant).
ℬ2; pascalPointFormulae = Union[$[Flatten[allPascalPointFormulae]] /. −p_ → p]

[Output: the 45 distinct formulae, each of the form (Pa ∧ Pb) ∨ (Pc ∧ Pd).]

Length[pascalPointFormulae]
45
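The count of 45 can also be obtained directly: a Pascal-point formula is an unordered pair of disjoint unordered point-pairs, and there are C(6,2)·C(4,2)/2 = 15·6/2 = 45 of these. A quick Python check:

```python
from itertools import combinations

pairs = list(combinations(range(1, 7), 2))   # the 15 point-pairs (Pa, Pb)
formulae = {frozenset((a, b))                # unordered pair of pairs
            for a in pairs for b in pairs
            if not set(a) & set(b)}          # the two pairs must be disjoint
assert len(formulae) == 45
```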

There are thus in general 45 distinct Pascal point formulae for the hexagons of a conic. Of course, since at this stage we have yet to discuss Pascal lines, these formulae are also valid for the 60 hexagons formed from any six points, not necessarily points constrained to lie on a conic. It should be noted that these are distinct formulae. The number of actual distinct points resulting from their application to any six given points may be less. Some may be duplicates, for example when the hexagon has some symmetry, and some may reduce to vectors when the hexagon sides are parallel. For the ellipse and hexagon points discussed above, we can apply the formulae to get the actual points.
pascalPoints = ToPointForm[1[pascalPointFormulae /. EllipsePoints]]

[Output: the 45 evaluated entities, weighted points and vectors whose coordinates involve √3.]

Two of these points occur in triplicate, ℴ + 3 √3 e2 and ℴ − 3 √3 e2 ; and three of the entities are vectors (points at infinity).

vectors = Select[pascalPoints, VectorFormQ]

{−96 e1 + 72 e2 , 96 e1 + 72 e2 , −48 √3 e1}

Thus, when we plot these points we should be able to count 38 points in blue, and 3 vectors (without their arrowheads) in green.

[Graphic] Red: 6 hexagon vertices. Blue: 38 distinct Pascal points. Green: 3 vectors

Pascal lines
Since there are 60 essentially different hexagons one can form from 6 given points, there are thus 60 corresponding Pascal lines. We can display the equations for these lines.
PascalLineEquations = allPascalPointFormulae /. {Q1_, Q2_, Q3_} → Q1 ∧ Q2 ∧ Q3 == 0

[Output: the 60 equations ((Pa ∧ Pb) ∨ (Pd ∧ Pe)) ∧ ((Pb ∧ Pc) ∨ (Pe ∧ Pf)) ∧ ((Pc ∧ Pd) ∨ (Pf ∧ Pa)) == 0, one for each hexagon.]

We can verify these equations for the points we have chosen on the ellipse.
Union[Simplify[𝔾[PascalLineEquations /. EllipsePoints]]]

{True}
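The same collinearity check can be sketched outside Mathematica. In this plain-Python sketch (not the GrassmannAlgebra package), homogeneous coordinates stand in for the bound algebra of the plane: the 3-vector cross product computes both the join of two points (their line) and the meet of two lines (their intersection point), and three points are collinear exactly when their scalar triple product vanishes. The six points chosen here lie on the unit circle (any conic will do); the parameter values are arbitrary.

```python
import math

def cross(a, b):
    # join of two points, or meet of two lines, in homogeneous coordinates
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def triple(a, b, c):
    # scalar triple product: zero exactly when a, b, c are collinear points
    return sum(a[i]*cross(b, c)[i] for i in range(3))

# six points on a conic (the unit circle), with arbitrary parameters
P = [(math.cos(t), math.sin(t), 1.0) for t in (0.3, 1.1, 1.9, 2.7, 3.9, 5.2)]

# Pascal points: intersections of the three pairs of opposite hexagon sides
Q1 = cross(cross(P[0], P[1]), cross(P[3], P[4]))
Q2 = cross(cross(P[1], P[2]), cross(P[4], P[5]))
Q3 = cross(cross(P[2], P[3]), cross(P[5], P[0]))

print(abs(triple(Q1, Q2, Q3)) < 1e-9)  # the three Pascal points are collinear
```

Pascal's theorem guarantees the triple product is exactly zero; the tolerance only absorbs floating-point error.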

To generate formulae for the Pascal lines themselves, we need only join any two of each line's Pascal points. Here we choose the first two.
PascalLineFormulae = allPascalPointFormulae /. {Q1_, Q2_, Q3_} ⧴ Q1 ∧ Q2

{((P1 ∧ P2) ∨ (P4 ∧ P5)) ∧ ((P2 ∧ P3) ∨ (P5 ∧ P6)),
 ((P1 ∧ P2) ∨ (P5 ∧ P4)) ∧ ((P2 ∧ P3) ∨ (P4 ∧ P6)),
 ((P1 ∧ P2) ∨ (P6 ∧ P4)) ∧ ((P2 ∧ P3) ∨ (P4 ∧ P5)),
 …,
 ((P1 ∧ P5) ∨ (P3 ∧ P2)) ∧ ((P5 ∧ P4) ∨ (P2 ∧ P6))}

(sixty formulae in all)

We can explicitly compute the Pascal lines (bound vectors) for the ellipse by substituting in our chosen points.

PascalLines = allPascalPointFormulae /. {Q1_, Q2_, Q3_} ⧴
   Simplify[ToPointForm[𝔾[Q1 /. EllipsePoints]] ∧ ToPointForm[𝔾[Q2 /. EllipsePoints]]]

[Output: the sixty Pascal lines as bound vectors, each of the form (𝒪 + a e1 + b e2) ∧ (c e1 + d e2), with coefficients a, b, c, d involving √3 for the points chosen on the ellipse.]

We can plot these 60 Pascal lines on their Pascal points. Lines normally have an infinite extent, but to make it clearer which lines correspond to which points, we have plotted them spanning just their three points, where of course one of the points may be at infinity. Thus if the three Pascal points (or vectors) of a Pascal line were Pa, Pb, and Pc, we have plotted the line segments (Pa, Pb), (Pb, Pc), and (Pc, Pa). If this is being read as a Mathematica notebook, the actual line expressions are discernible as Tooltips. However, care should be taken when attempting to correlate the expressions shown with lines, as some lines may overlap.

2009 9 3

Grassmann Algebra Book.nb

337

Sixty Pascal lines each joining its three points or points at infinity

Finally, we plot the sixty Pascal lines and bivectors of a regular hexagon. In this case we have shown the arrowheads on the line-bound vectors and on the bivectors. The lack of apparent symmetry is an artifact caused by the vectors and bivectors showing more information (their signs, weights and orientation) than the geometric lines and 2-directions that they are being used to represent.


The sixty Pascal lines and bivectors of a regular hexagon

4.14 Summary
In this chapter we have explored the implications of interpreting one of the elements of the underlying linear space as an (origin) point, and the rest as vectors. This is a slightly different approach from that adopted by Grassmann and workers in the earlier Grassmannian tradition, who interpreted all the basis elements of the underlying linear space as points, and then treated any consequent point differences as vectors. Although the modern practice is to interpret the elements of a linear space most commonly as vectors, workers in the early nineteenth century treated the point as the paramount entity of a linear space. Indeed, it was Hamilton who coined the term vector in 1853, as noted in the Historical Note at the beginning of this chapter. As we hope to have demonstrated in this chapter, the modern context is significantly poorer for ignoring the possibility of interpreting the elements of a linear space such that they can include points. We can probably trace this development back to Gibbs' early work before he became familiar with Grassmann's work, in which he unwrapped Hamilton's quaternion operation into the dot and cross products. Of course, this chapter involves no metric concepts other than the comparison of weights of points (their scalar factors) or of the relative lengths of vectors in the same direction. This has enabled us to see what types of geometric theorems and constructions can be accomplished without the concept of a metric — or, what is equivalent, with an arbitrary metric. The notion of congruence has been central.


The next two chapters will introduce the two seminal concepts with which we will explore metric geometry: the complement, and the interior product. Armed with these, we will also be able to define the notions of length and angle, and more specifically, orthogonality. Because the Grassmann algebra extends the vector algebra to higher grade entities, we will find these notions also extend naturally to higher grade entities.


5 The Complement

5.1 Introduction
Up to this point, various linear spaces and the dual exterior and regressive product operations have been introduced. The elements of these spaces were incommensurable unless they were congruent; that is, nowhere was there involved the concept of measure or magnitude of an element by which it could be compared with any other element. Thus the subject of the last chapter on Geometric Interpretations was explicitly non-metric geometry; or, to put it another way, it was what geometry is before the ability to compare or measure is added. The question then arises: how do we associate a measure with the elements of a Grassmann algebra in a consistent way? Of course, we already know the approach that has developed over the last century, that of defining a metric tensor. In this book we will indeed define a metric tensor, but we will take an approach which develops from the concepts of the exterior product and the duality operations which are the foundations of the Grassmann algebra. This will enable us to see how the metric tensor on the underlying linear space generates metric tensors on the exterior linear spaces of higher grade; the implications of the symmetry of the metric tensor; and how to consistently generalize the notion of inner product to elements of arbitrary grade. One of the consequences of the anti-symmetry of the exterior product of 1-elements is that the exterior linear space of m-elements has the same dimension as the exterior linear space of (n−m)-elements. We have already seen this property evidenced in the notions of duality and cobasis element. And one of the consequences of the notion of duality is that the regressive product of an m-element and an (n−m)-element is a scalar. Thus there is the opportunity of defining for each m-element a corresponding 'co-m-element' of grade n−m such that the regressive product of these two elements gives a scalar.
We will see that this scalar measures the square of the 'magnitude' of the m-element or the (n−m)-element, and corresponds to the inner product of either of them with itself. We will also see that the notion of orthogonality is defined by the correspondence between m-elements and their 'co-m-elements'. But most importantly, the definition of this inner product as a regressive product of an m-element with a 'co-m-element' is immediately generalizable to elements of arbitrary grade, thus permitting a theory of interior products to be developed which is consistent with the exterior and regressive product axioms and which, via the notion of 'co-m-element', leads to explicit and easily derived formulae between elements of arbitrary (and possibly different) grade. The foundation of the notion of measure or metric, then, is the notion of 'co-m-element'. In this book we use the term complement rather than 'co-m-element'. In this chapter we will develop the notion of complement in preparation for the development of the notions of interior product and orthogonality in the next chapter. The complement of an element is denoted with a horizontal bar over the element. For example, the complements of $\underset{m}{\alpha}$, $x + y$ and $x\wedge y$ are denoted $\overline{\underset{m}{\alpha}}$, $\overline{x+y}$ and $\overline{x\wedge y}$.

Finally, it should be noted that the term 'complement' may be used either to refer to an operation (the operation of taking the complement of an element), or to the element itself (which is the result of the operation).


Historical Note
Grassmann introduced the notion of complement (Ergänzung) into the Ausdehnungslehre of 1862 [Grassmann 1862]. He denoted the complement of an element x by preceding it with a vertical bar, viz |x. For mnemonic reasons, particularly in the derivation of formulas using the Complement Axiom, the notation for the complement used in this book is rather the horizontal bar: $\overline{x}$. In discussing the complement, Grassmann defines the product of the n basis elements (the basis n-element) to be unity. That is, $[e_1 e_2 \cdots e_n] \equiv 1$ or, in the present notation, $e_1\wedge e_2\wedge\cdots\wedge e_n \equiv 1$. Since Grassmann discussed only the Euclidean complement (equivalent to imposing a Euclidean metric $g_{ij} \equiv \delta_{ij}$), this statement in the present notation is equivalent to $\overline{1} \equiv 1$. The introduction of such an identity, however, destroys the essential duality between $\underset{m}{L}$ and $\underset{n-m}{L}$, which requires rather the identity $\overline{1} \equiv \underset{n}{1}$. In current terminology, equating $e_1\wedge e_2\wedge\cdots\wedge e_n$ to 1 is equivalent to equating n-elements (or pseudo-scalars) and scalars.

All other writers in the Grassmannian tradition (for example, Hyde, Whitehead and Forder) followed Grassmann's approach. This enabled them to use the same notation for both the progressive and regressive products. While an attractive approach in a Euclidean system, it is not tenable for general metric spaces. The tenets upon which the Ausdehnungslehre is based are so geometrically fundamental, however, that it is readily extended to more general metrics.

On the use of the term 'complement'


Grassmann's term Ergänzung may be translated as either complement or supplement (as well as other meanings, for example completion). Among the earlier workers in the Grassmannian tradition, Whitehead used supplement, while Hyde used complement. Kannenberg, in his excellent translation of the Ausdehnungslehre of 1862, has used supplement. Whitehead also gives an interesting historical note on extensions of the Ausdehnungslehre to general metric concepts [Book VI (Theory of Metrics) in A Treatise on Universal Algebra (p 369)]. In more modern works the Ergänzung for general metrics has become known as the Hodge star operator. But we do not use that term here, since our aim is to develop the concept by showing how it can be built straightforwardly on the foundations laid by Grassmann. In this text we have chosen to use the word complement for its more recent added evocative meanings in set theory and linear algebra (viz orthogonal complement) and its mnemonic evocation of the concept of co-m-element; but most especially, to slightly differentiate it from Grassmann's original use as may be found in Kannenberg's translation, because our aim is to extend its meaning to include a more general metric (the Riemannian metric) than Grassmann initially envisaged.


5.2 Axioms for the Complement


The grade of a complement
1 : The complement of an m-element is an (n−m)-element.

$\underset{m}{\alpha} \in \underset{m}{L} \;\;\Rightarrow\;\; \overline{\underset{m}{\alpha}} \in \underset{n-m}{L}$   5.1

The grade of the complement of an element is the complementary grade of the element. (The complementary grade of an m-element, in an algebra whose underlying linear space has dimension n, is n−m.)

The linearity of the complement operation


2 : The complement operation is linear.

$\overline{a\,\underset{m}{\alpha} + b\,\underset{k}{\beta}} \;\equiv\; \overline{a\,\underset{m}{\alpha}} + \overline{b\,\underset{k}{\beta}} \;\equiv\; a\,\overline{\underset{m}{\alpha}} + b\,\overline{\underset{k}{\beta}}$   5.2

For scalars a and b, the complement of a sum of elements (perhaps of different grades) is the sum of the complements of the elements. The complement of a scalar multiple of an element is the scalar multiple of the complement of the element.

The complement axiom


3 : The complement of a product is the dual product of the complements.

$\overline{\underset{m}{\alpha}\wedge\underset{k}{\beta}} \;\equiv\; \overline{\underset{m}{\alpha}}\vee\overline{\underset{k}{\beta}}$   5.3

$\overline{\underset{m}{\alpha}\vee\underset{k}{\beta}} \;\equiv\; \overline{\underset{m}{\alpha}}\wedge\overline{\underset{k}{\beta}}$   5.4

Note that for the terms on each side of expression 5.3 to be non-zero we require m+k ≤ n, while in expression 5.4 we require m+k ≥ n. Expressions 5.3 and 5.4 are duals of each other. We call these dual expressions the complement axiom. Note its enticing similarity to De Morgan's law in Boolean algebra. If we wish, we can confirm this duality by applying the GrassmannAlgebra function Dual.

Dual[ $\overline{\underset{m}{\alpha}\wedge\underset{k}{\beta}} \equiv \overline{\underset{m}{\alpha}}\vee\overline{\underset{k}{\beta}}$ ]

$\overline{\underset{m}{\alpha}\vee\underset{k}{\beta}} \equiv \overline{\underset{m}{\alpha}}\wedge\overline{\underset{k}{\beta}}$

This axiom is of central importance in the development of the properties of the complement and of the interior, inner and scalar products, and of formulae relating these to the exterior and regressive products. In particular, it permits us consistently to generate the complements of basis m-elements from the complements of basis 1-elements, and hence, via the linearity axiom, the complements of arbitrary elements.

The forms 5.3 and 5.4 may be written for any number of elements. To see this, let $\underset{k}{\beta} \equiv \underset{p}{\gamma}\wedge\underset{q}{\delta}$ and substitute for $\underset{k}{\beta}$ in expression 5.3:

$\overline{\underset{m}{\alpha}\wedge\underset{p}{\gamma}\wedge\underset{q}{\delta}} \;\equiv\; \overline{\underset{m}{\alpha}}\vee\overline{\underset{p}{\gamma}\wedge\underset{q}{\delta}} \;\equiv\; \overline{\underset{m}{\alpha}}\vee\overline{\underset{p}{\gamma}}\vee\overline{\underset{q}{\delta}}$

In general then, expressions 5.3 and 5.4 may be stated in the equivalent forms:

$\overline{\underset{m}{\alpha}\wedge\underset{k}{\beta}\wedge\cdots\wedge\underset{p}{\gamma}} \;\equiv\; \overline{\underset{m}{\alpha}}\vee\overline{\underset{k}{\beta}}\vee\cdots\vee\overline{\underset{p}{\gamma}}$   5.5

$\overline{\underset{m}{\alpha}\vee\underset{k}{\beta}\vee\cdots\vee\underset{p}{\gamma}} \;\equiv\; \overline{\underset{m}{\alpha}}\wedge\overline{\underset{k}{\beta}}\wedge\cdots\wedge\overline{\underset{p}{\gamma}}$   5.6

In Grassmann's work, this axiom was hidden in his notation. However, since modern notation explicitly distinguishes the progressive and regressive products, this axiom needs to be explicitly stated.

The complement of a complement axiom


4 : The complement of the complement of an element is equal (apart from a possible sign) to the element itself.

$\overline{\overline{\underset{m}{\alpha}}} \;\equiv\; (-1)^{f(m,n)}\;\underset{m}{\alpha}$   5.7


Axiom 1 says that the complement of an m-element is an (n−m)-element. Clearly then the complement of an (n−m)-element is an m-element. Thus the complement of the complement of an m-element is itself an m-element. In the interests of symmetry and simplicity we will require that the complement of the complement of an element is equal (apart from a possible sign) to the element itself. Although consistent algebras could no doubt be developed by rejecting this axiom, it will turn out to be an essential underpinning to the development of the standard Riemannian metric concepts to which we are accustomed. For example, its satisfaction will require the metric tensor to be symmetric. It will turn out, again in compliance with standard results, that the index f(m,n) is m(n−m). But this result is more in the nature of a theorem than an axiom. We discuss this further in Section 5.6. Although this text does not consider pseudo-Riemannian metrics, the equivalent value of the index f would in that case turn out to be m(n−m)+s, where s is the signature of the metric.
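The sign $(-1)^{m(n-m)}$ can be verified directly in the Euclidean case, where the complement of a basis m-element is its cobasis element. The following is a plain-Python sketch (our own bookkeeping, not the GrassmannAlgebra package): a basis m-element $e_S$ maps to $\pm\,e_{S^c}$, the sign being that of the permutation $(S, S^c)$ of $(1,\ldots,n)$, and applying the map twice reproduces $e_S$ up to exactly the factor $(-1)^{m(n-m)}$.

```python
from itertools import combinations

def perm_sign(p):
    # sign of a permutation, computed by counting inversions
    return (-1)**sum(1 for i in range(len(p))
                     for j in range(i + 1, len(p)) if p[i] > p[j])

def complement(sign, S, n):
    # Euclidean complement of sign * e_S: the signed cobasis element e_{S^c}
    Sc = tuple(i for i in range(1, n + 1) if i not in S)
    return sign * perm_sign(S + Sc), Sc

n = 4
for m in range(n + 1):
    for S in combinations(range(1, n + 1), m):
        s1, S1 = complement(1, S, n)     # first complement
        s2, S2 = complement(s1, S1, n)   # complement of the complement
        assert (S2, s2) == (S, (-1)**(m*(n - m)))
print("double-complement sign (-1)^(m(n-m)) verified for n = 4")
```

For n = 4 the sign is −1 precisely for grades 1 and 3, matching m(n−m) odd.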

The complement of unity


5 : The complement of unity is equal to the unit n-element.
In Chapter 3: The Regressive Product, we showed in Section 3.3 that the unit n-element is congruent to any basis n-element, and wrote this result in a form involving an (as yet undetermined) scalar congruence factor c.
$e_1\wedge e_2\wedge\cdots\wedge e_n \;\equiv\; c\;\underset{n}{1}$

In the discussions and non-Mathematica derivations that follow in this chapter, it turns out to be more notationally convenient to use the inverse of this factor, which we denote ϰ, and to rewrite this requirement as

$\overline{1} \;\equiv\; \underset{n}{1} \;\equiv\; \varkappa\; e_1\wedge e_2\wedge\cdots\wedge e_n$   5.8

This is the axiom which finally enables us to define the unit n-element. It requires that in a metric space the unit n-element be identical to the complement of unity. The hitherto unspecified scalar constant ϰ may now be determined from the specific complement mapping or metric under consideration.

The dual of this axiom is $\overline{\underset{n}{1}} \equiv 1$, since 1 and $\underset{n}{1}$ are duals. We can confirm this by applying the Dual function. (Note that Dual requires us to specify unambiguously that n is the dimension of the space concerned.)

Dual[ $\overline{1} \equiv \underset{n}{1}$ ]

$\overline{\underset{n}{1}} \equiv 1$

Hence, taking the complement of expression 5.8 and using this dual axiom tells us that the complement of the complement of unity is unity. This result clearly complies with axiom 4.


$\overline{\overline{1}} \;\equiv\; \overline{\underset{n}{1}} \;\equiv\; 1$   5.9

We can now write the unit n-element as $\overline{1}$ instead of $\underset{n}{1}$ in any Grassmann algebra that has a metric. For example, in a metric space $\overline{1}$ becomes the unit for the regressive product.

$\underset{m}{\alpha} \;\equiv\; \overline{1}\vee\underset{m}{\alpha}$   5.10

5.3 Defining the Complement


The complement of an m-element
To define the complement of a general m-element, we need only define the complements of basis m-elements, since by the linearity axiom 2 we have that, for a general m-element $\underset{m}{\alpha}$ expressed as a linear combination of basis m-elements, the complement of $\underset{m}{\alpha}$ is the corresponding linear combination of the complements of the basis m-elements.

$\underset{m}{\alpha} \equiv \sum_i a_i\;\underset{m}{e}_i \quad\;\Rightarrow\;\quad \overline{\underset{m}{\alpha}} \equiv \sum_i a_i\;\overline{\underset{m}{e}_i}$   5.11

The complement of a basis m-element


To define the complement of an m-element we need to define the complements of the basis m-elements. The complement of a basis m-element, however, cannot be defined independently of the complements of basis elements in exterior linear spaces of other grades, since they are related by the complement axiom. For example, the complement of a basis 4-element may also be expressed as the regressive product of the complements of two basis 2-elements, or as the regressive product of the complement of a basis 3-element and the complement of a basis 1-element.

$\overline{e_1\wedge e_2\wedge e_3\wedge e_4} \;\equiv\; \overline{e_1\wedge e_2}\vee\overline{e_3\wedge e_4} \;\equiv\; \overline{e_1\wedge e_2\wedge e_3}\vee\overline{e_4}$

The complement axiom enables us to define the complement of a basis m-element in terms only of the complements of basis 1-elements.


$\overline{e_1\wedge e_2\wedge\cdots\wedge e_m} \;\equiv\; \overline{e_1}\vee\overline{e_2}\vee\cdots\vee\overline{e_m}$   5.12

Thus, in order to define the complement of any element in a Grassmann algebra, we need only define the complements of the basis 1-elements — that is, the correspondence between basis 1-elements and basis (n−1)-elements.

Defining the complement of a basis 1-element


The most general definition we could devise for the complement of a basis 1-element is a linear combination of basis (n−1)-elements. For example, in a 3-space we could define the complement of e1 as a linear combination of the three basis 2-elements.

$\overline{e_1} \;\equiv\; a_1\; e_1\wedge e_2 \,+\, a_2\; e_1\wedge e_3 \,+\, a_3\; e_2\wedge e_3$

However, it will be much more notationally convenient to define the complements of basis 1-elements as a linear combination of their cobasis elements. (For a definition of cobasis elements, see Chapter 2: The Exterior Product.)

$\overline{e_1} \;\equiv\; a_{11}\,\underline{e_1} + a_{12}\,\underline{e_2} + a_{13}\,\underline{e_3}$

Substituting for the cobasis symbols, we see that we get a somewhat notationally different, though equivalent, form to the definition with which we began.

$\overline{e_1} \;\equiv\; a_{11}\; e_2\wedge e_3 \,+\, a_{12}\,(-\,e_1\wedge e_3) \,+\, a_{13}\; e_1\wedge e_2$

Hence in a space of any number of dimensions we can write:

$\overline{e_i} \;\equiv\; \sum_{j=1}^{n} a_{ij}\;\underline{e_j}$

For reasons which the reader will perhaps discern from the choice of notation involving gij, we extract the scalar factor ϰ as an explicit factor from the coefficients of the complement mapping. We assume, for simplicity, that ϰ is positive. It will turn out to be the reciprocal of the 'volume' (measure) of the basis n-element.

$\overline{e_i} \;\equiv\; \varkappa \sum_{j=1}^{n} g_{ij}\;\underline{e_j}$   5.13

This then is the form in which we will define the complement of basis 1-elements. The scalar ϰ and the scalars gij are at this point otherwise entirely arbitrary. However, we must still ensure that our definition satisfies the complement of a complement axiom 4. We will see that, in order to satisfy 4, some constraints will need to be imposed on ϰ and the gij. Our task in the ensuing sections is therefore to determine these constraints. We begin by determining ϰ in terms of the gij.

Constraints on the value of ϰ




A consideration of axiom 5 in the form $\overline{\overline{1}} \equiv 1$ enables us to determine a relationship between the scalar ϰ and the gij. First we apply some axioms (in order): the complement of axiom 5, axiom 2, and axiom 3.

$1 \;\equiv\; \overline{\overline{1}} \;\equiv\; \overline{\varkappa\; e_1\wedge e_2\wedge\cdots\wedge e_n} \;\equiv\; \varkappa\;\overline{e_1\wedge e_2\wedge\cdots\wedge e_n} \;\equiv\; \varkappa\;\overline{e_1}\vee\overline{e_2}\vee\cdots\vee\overline{e_n}$

Then we use the definition of the complement of a 1-element in terms of cobasis elements (formula 5.13):

$\equiv\; \varkappa\;\varkappa^{n}\left(\sum_{j=1}^{n} g_{1j}\,\underline{e_j}\right)\vee\left(\sum_{j=1}^{n} g_{2j}\,\underline{e_j}\right)\vee\cdots\vee\left(\sum_{j=1}^{n} g_{nj}\,\underline{e_j}\right)$

Now because $\underline{e_i}\vee\underline{e_j}$ is equal to $-\,\underline{e_j}\vee\underline{e_i}$ (see the axioms for the regressive product), just as $e_i\wedge e_j$ is equal to $-\,e_j\wedge e_i$, we can rewrite this regressive product (see the section on determinants in Chapter 2: The Exterior Product) as

$\equiv\; \varkappa\;\varkappa^{n}\,\big|g_{ij}\big|\;\;\underline{e_1}\vee\underline{e_2}\vee\cdots\vee\underline{e_n}$

The symbol $\big|g_{ij}\big|$ denotes the determinant of the gij. In Chapter 3: The Regressive Product, we explored the regressive product of cobasis elements. Formula 3.41 is useful here:

$\underline{e_1}\vee\underline{e_2}\vee\cdots\vee\underline{e_m} \;\equiv\; c^{\,m-1}\;\underline{e_1\wedge e_2\wedge\cdots\wedge e_m}$

Put m equal to n, and c equal to 1/ϰ (as per our original definition of ϰ at the beginning of this chapter), to then give

$1 \;\equiv\; \varkappa\;\varkappa^{n}\,\big|g_{ij}\big|\,\Big(\frac{1}{\varkappa}\Big)^{\!n-1}\;\underline{e_1\wedge e_2\wedge\cdots\wedge e_n} \;\equiv\; \varkappa^{2}\,\big|g_{ij}\big|$

since the cobasis of the basis n-element is unity. Hence

$\varkappa^{2}\;\big|g_{ij}\big| \;\equiv\; 1$
1 1 ! e1 e2 ! e1 e2

2009 9 3

Grassmann Algebra Book.nb

348

! e1 e2 ! !2 Ig11 e1 + g12 e2 M Ig21 e1 + g22 e2 M ! !2 Hg11 g22 - g12 g21 L e1 e2 ! !2 gij e1 e2 ! !2 gij ! !2 gij !2 gij 1 ! 1 ! e1 e2 1

Example in 3 dimensions
1 1 ! e1 e2 e3 ! e1 e2 e3 ! e1 e2 e3 ! !3 Ig11 e1 + g12 e2 + g13 e3 M Ig21 e1 + g22 e2 + g23 e3 M Ig31 e1 + g32 e2 + g33 e3 M ! !3 gij e1 e2 e3 ! !3 gij ! !3 gij !2 gij 1 !2 1 !2 H e1 e2 e3 L 1

Choosing the value of ϰ


Formula 5.13 defined the complement of a basis 1-element as a linear combination of cobasis elements.

$\overline{e_i} \;\equiv\; \varkappa \sum_{j=1}^{n} g_{ij}\;\underline{e_j}$

The previous section showed that, in order for this definition to conform to axiom 5 (that $\overline{\overline{1}} \equiv 1$), the scalar ϰ must be related to the determinant of the scalar coefficients gij by

2009 9 3

Grassmann Algebra Book.nb

349

!2 gij 1

Although we have not yet identified the gij with the metric tensor, our assumption in this text of only considering Riemannian metrics (rather than pseudo-Riemannian metrics, for example), permits us now to further suppose that the determinant of the gij is positive. Combining this with our assumption above that ! is positive, enables us to write
! 1 gij

Hence from this point on then, we take the value of ! to be the reciprocal of the positive square root of the determinant of the coefficients of the complement mapping gij .

1 gij 5.14

Defining the complements in matrix form


We can define all the complements of the basis 1-elements in one matrix equation. Let B be the matrix of basis elements, and G be the matrix of the gij .
e1 e2 en g11 g21 gn1 g12 g22 gn2 g1 n g2 n gn n

Then the formulas


ei 1 gij
n

gij ej
j=1

can be collected in the form


B 1 G e1 e2 en 1 G g11 g21 gn1 g12 g22 gn2 g1 n g2 n gnn e1 e2 en GB

2009 9 3

Grassmann Algebra Book.nb

350

5.4 The Euclidean Complement


Tabulating Euclidean complements of basis elements
The Euclidean complement of a basis m-element may be defined as its cobasis element. Conceptually this is the simplest correspondence we can define between basis m-elements and basis (n|m)-elements. In this case the matrix of the metric tensor is the identity matrix, with the result that the 'volume' of the basis n-element is unity. (However, we cannot yet talk formally ! about 'volumes' because we have not yet introduced the notion of inner product.) We tabulate the basis-complement pairs for spaces of two, three and four dimensions by using the GrassmannAlgebra function ComplementPalette.
1

Basis elements and their Euclidean complements in 2-space


!2 ; ComplementPalette

Complement Palette
Basis 1 e1 e2 e1 e2 Complement e1 e2 e2 - e1 1

2009 9 3

Grassmann Algebra Book.nb

351

Basis elements and their Euclidean complements in 3-space


!3 ; ComplementPalette

Complement Palette
Basis 1 e1 e2 e3 e1 e2 e1 e3 e2 e3 e1 e2 e3 Complement e1 e2 e3 e2 e3 - He1 e3 L e1 e2 e3 - e2 e1 1

2009 9 3

Grassmann Algebra Book.nb

352

Basis elements and their Euclidean complements in 4-space


!4 ; ComplementPalette

Complement Palette
Basis 1 e1 e2 e3 e4 e1 e2 e1 e3 e1 e4 e2 e3 e2 e4 e3 e4 e1 e2 e3 e1 e2 e4 e1 e3 e4 e2 e3 e4 e1 e2 e3 e4 Complement e1 e2 e3 e4 e2 e3 e4 - He1 e3 e4 L e1 e2 e4 - He1 e2 e3 L e3 e4 - He2 e4 L e2 e3 e1 e4 - He1 e3 L e1 e2 e4 - e3 e2 - e1 1

Formulae for the Euclidean complement of basis elements


In this section we summarize some of the formulae involving Euclidean complements of basis elements. Later, we will generalize these formulas for the general Riemannian metric case. The Euclidean complement of a basis 1-element is its cobasis (n|1)-element. For a basis 1-element ei then we have:

2009 9 3

Grassmann Algebra Book.nb

353

e i e i H - 1 L i-1 e 1 ! i ! e n
This simple relationship extends naturally to complements of basis elements of any grade.

5.15

ei ei
m m

5.16

This may also be written:

e i1 ! e i m H - 1 L K m e 1 ! i1 ! i m ! e n
m

Km ig +
g=1

1 2

m H m + 1L

5.17

where the symbol means the corresponding element is missing from the product. In particular, the unit n-element is now the basis n-element:

1 e1 e2 ! en
and the complement of the basis n-element is just unity.

5.18

1 e1 e2 ! en
This simple correspondence is that of a Euclidean complement, defined by a Euclidean metric, simply given by gij dij , with consequence that ! 1.

5.19

gij dij

! 1

5.20

Finally, we can see that exterior and regressive products of basis elements with complements take on particularly simple forms.

ei ej dij 1
m m

5.21

ei ej dij
m m

5.22

These forms will be the basis for the definition of the inner product in the next chapter. We note that the concept of cobasis which we introduced in Chapter 2: The Exterior Product, despite its formal similarity to the Euclidean complement, is only a notational convenience. We do not define it for linear combinations of elements as the definition of a complement requires. 2009 9 3

Grassmann Algebra Book.nb

354

We note that the concept of cobasis which we introduced in Chapter 2: The Exterior Product, despite its formal similarity to the Euclidean complement, is only a notational convenience. We do not define it for linear combinations of elements as the definition of a complement requires.

Products leading to a scalar or n-element


In this section we collect together some simple Euclidean results which we will find useful in the rest of the chapter. From the basis element formulae above we can immediately derive formulae for the exterior and regressive products of general m-elements $\underset{m}{a}$ and $\underset{m}{b}$. Writing

$$\underset{m}{a} = \sum_i a_i\,\underset{m}{e}{}_i \qquad\qquad \underset{m}{b} = \sum_i b_i\,\underset{m}{e}{}_i$$

we have

$$\underset{m}{a}\wedge\overline{\underset{m}{b}} = \Big(\sum_i a_i\,\underset{m}{e}{}_i\Big)\wedge\Big(\sum_k b_k\,\overline{\underset{m}{e}{}_k}\Big) = \Big(\sum_i a_i\,b_i\Big)\,\overline{1} \tag{5.23}$$

Taking the complement of this formula, or else doing a derivation mutatis mutandis, leads to the regressive product form.

$$\underset{m}{a}\vee\overline{\underset{m}{b}} = \sum_i a_i\,b_i \tag{5.24}$$

$$\underset{m}{a}\wedge\overline{\underset{m}{b}} = \Big(\underset{m}{a}\vee\overline{\underset{m}{b}}\Big)\,\overline{1} \tag{5.25}$$

In the case that $\underset{m}{a}$ is equal to $\underset{m}{b}$, and writing $\sum_i a_i^{\,2}$ as $a$, we obtain

$$\underset{m}{a}\wedge\overline{\underset{m}{a}} = \Big(\sum_i a_i^{\,2}\Big)\,\overline{1} = a\,\overline{1} \tag{5.26}$$

$$\underset{m}{a}\vee\overline{\underset{m}{a}} = \sum_i a_i^{\,2} = a \tag{5.27}$$

A common case is that of 1-elements $x = \sum_i a_i\,e_i$ and $y = \sum_i b_i\,e_i$:

$$x\wedge\overline{y} = \Big(\sum_i a_i\,b_i\Big)\,\overline{1} \qquad\qquad x\vee\overline{y} = \sum_i a_i\,b_i \tag{5.28}$$
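Formula 5.28 can be checked numerically in the Euclidean case. The following Python sketch (an independent check, not part of the GrassmannAlgebra package; the helper names are ours) represents a bivector in 3-space by its (B12, B13, B23) components:

```python
import numpy as np

def euclidean_complement_1(y):
    """Euclidean complement of a 1-element in 3-space: returns the
    bivector components (B12, B13, B23), using e1bar = e2^e3,
    e2bar = -(e1^e3), e3bar = e1^e2."""
    y1, y2, y3 = y
    return np.array([y3, -y2, y1])

def wedge_1_2(x, B):
    """Coefficient of e1^e2^e3 in x ^ B, for x a vector and B a bivector."""
    B12, B13, B23 = B
    return x[0] * B23 - x[1] * B13 + x[2] * B12

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, -1.0, 2.0])
# x ^ ybar = (sum a_i b_i) 1bar, i.e. the coefficient is the dot product
assert np.isclose(wedge_1_2(x, euclidean_complement_1(y)), x @ y)
```

The single assertion is exactly the first form of 5.28: the coefficient of the n-element $x\wedge\overline{y}$ is the Euclidean scalar product of x and y.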

5.5 Complementary Interlude


Alternative forms for complements
As a consequence of the complement axiom and the fact that multiplication by a scalar is equivalent to exterior multiplication (see Chapter 2: Section 2.4) we can write the complement of a scalar, an m-element, and a scalar multiple of an m-element in several alternative forms.

Alternative forms for the complement of a scalar


From the linearity axiom 2 we have $\overline{a\,\underset{m}{\alpha}} = a\,\overline{\underset{m}{\alpha}}$; hence, since a scalar a may be written as $a\wedge 1 = a\,1$:

$$\overline{a} = \overline{a\,1} = a\,\overline{1}$$

The complement of a scalar can then be expressed in any of the following forms:

$$\overline{a} = \overline{a\,1} = \overline{a\wedge 1} = a\,\overline{1} = a\wedge\overline{1} = \overline{1}\,a = \overline{1}\wedge a \tag{5.29}$$

Alternative forms for the complement of an m-element

$$\overline{\underset{m}{a}} = \overline{1\,\underset{m}{a}} = \overline{1\wedge\underset{m}{a}} = \overline{1}\vee\overline{\underset{m}{a}} \tag{5.30}$$

Alternative forms for the complement of a scalar multiple of an m-element

$$\overline{a\,\underset{m}{b}} = \overline{a\wedge\underset{m}{b}} = a\,\overline{\underset{m}{b}} = \overline{a}\vee\overline{\underset{m}{b}} \tag{5.31}$$

Orthogonality
The specific complement mapping that we impose on a Grassmann algebra will define the notion of orthogonality for that algebra. A simple element and its complement will be referred to as being orthogonal to each other. In standard linear space terminology, the space of a simple element and the space of its complement are said to be orthogonal complements of each other. This orthogonality is total. That is, every 1-element in a given simple m-element is orthogonal to every 1-element in the complement of the m-element.


In the special case of a 2-dimensional vector space, the complement of a vector is itself a vector.


A vector and its complement in a two-dimensional vector space

In a vector three-space, the complement of a vector is a bivector. And of course the complement of a bivector is a vector.

A vector and its bivector complement in a three-dimensional vector space

More generally, we can say that two simple elements of the same grade, $\underset{m}{\alpha}$ and $\underset{m}{\beta}$, are orthogonal if and only if the exterior product of one with the complement of the other is zero.

$$\underset{m}{\alpha}\wedge\overline{\underset{m}{\beta}} = 0 \qquad\Longleftrightarrow\qquad \underset{m}{\beta}\wedge\overline{\underset{m}{\alpha}} = 0 \tag{5.32}$$

In the case that $\alpha$ is x and $\beta$ is $\overline{x}$, both forms are clearly verified:

$$x\wedge\overline{\overline{x}} = 0 \qquad\qquad \overline{x}\wedge\overline{x} = 0$$

By taking the complement of these equations, and applying both the complement axiom and the complement of a complement axiom, we see that we can also say that two simple elements of the same grade, $\underset{m}{\alpha}$ and $\underset{m}{\beta}$, are orthogonal if and only if the regressive product of one with the complement of the other is zero.

$$\underset{m}{\alpha}\vee\overline{\underset{m}{\beta}} = 0 \qquad\Longleftrightarrow\qquad \underset{m}{\beta}\vee\overline{\underset{m}{\alpha}} = 0 \tag{5.33}$$

We will develop these results in more depth in the next chapter.

Visualizing the complement axiom


We can use this notion of orthogonality to visualize the complement axiom geometrically. Consider the bivector $x\wedge y$. Then $\overline{x\wedge y}$ is orthogonal to $x\wedge y$. But since $\overline{x\wedge y} = \overline{x}\vee\overline{y}$, this also means that the intersection of the two (n-1)-spaces defined by $\overline{x}$ and $\overline{y}$ is orthogonal to $x\wedge y$. We can depict this in 3-space as follows:

Visualizing the complement axiom in vector three-space

The regressive product in terms of complements



All the operations in the Grassmann algebra can be expressed in terms only of the exterior product and the complement operations. It is this fact that makes the complement so important for an understanding of the algebra. In particular the regressive product (discussed in Chapter 3) and the interior product (to be discussed in the next chapter) have simple representations in terms of the exterior and complement operations. We have already introduced the complement axiom 3 (equation 5.3) as part of the definition of the complement.
In its dual form the axiom reads:

$$\overline{\underset{m}{\alpha}\vee\underset{k}{\beta}} = \overline{\underset{m}{\alpha}}\wedge\overline{\underset{k}{\beta}}$$

Taking the complement of both sides of this axiom, noting that the grade of $\underset{m}{\alpha}\vee\underset{k}{\beta}$ is m+k-n, and using the complement of a complement axiom 4 gives:

$$\overline{\overline{\underset{m}{\alpha}\vee\underset{k}{\beta}}} \;=\; \overline{\overline{\underset{m}{\alpha}}\wedge\overline{\underset{k}{\beta}}} \;=\; (-1)^{f(m+k-n,\,n)}\;\underset{m}{\alpha}\vee\underset{k}{\beta}$$

Hence, any regressive product can be written as the complement of an exterior product of complements.

$$\underset{m}{\alpha}\vee\underset{k}{\beta} = (-1)^{f(m+k-n,\,n)}\;\overline{\overline{\underset{m}{\alpha}}\wedge\overline{\underset{k}{\beta}}} \tag{5.34}$$

In Section 5.6 below, we will show that f(m,n) is equal to m(n-m), so that f(m+k-n, n) becomes (m+k-n)(n-(m+k-n)), which has the same parity as (m+k)(m+k-n). For reference here, we include this version of the formula below, although it has not yet been derived.

$$\underset{m}{\alpha}\vee\underset{k}{\beta} = (-1)^{(m+k)\,(m+k-n)}\;\overline{\overline{\underset{m}{\alpha}}\wedge\overline{\underset{k}{\beta}}} \tag{5.35}$$

The expression of a regressive product in terms of an exterior product and a double application of the complement operation poses an interesting conundrum. On the left hand side, there is a product which is defined in a completely non-metric way, and is thus independent of any metric imposed on the space. On the right hand side we have a double application of the complement operation, each of which requires a metric. This leads us to the notion that the complement operation applied twice in this way is independent of any metric used to define it. A second application of the complement operation effectively 'cancels out' the metric elements introduced during the first application.


Glimpses of the inner product


The exterior product of an m-element with the complement of another m-element is either zero, or an n-element. If it is zero, we say that the elements are orthogonal. Orthogonality was discussed in a previous section. If on the other hand it is an n-element, it must be a scalar multiple (a, say) of the unit n-element $\overline{1}$. Hence:

$$\underset{m}{\alpha}\wedge\overline{\underset{m}{\beta}} = a\,\overline{1}$$

Taking the complement of both sides and using the complement axiom gives

$$\overline{\underset{m}{\alpha}\wedge\overline{\underset{m}{\beta}}} = \overline{\underset{m}{\alpha}}\vee\overline{\overline{\underset{m}{\beta}}} = \overline{a\,\overline{1}} = a$$

and applying the complement of a complement axiom,

$$(-1)^{f(m,n)}\;\overline{\underset{m}{\alpha}}\vee\underset{m}{\beta} = a$$

We will see in the following chapter how this formula is the foundation of the definition of the inner product. It shows how a scalar may be obtained from the regressive product of an element with the complement of an element of the same grade. It also shows the 'equivalent' definition in terms of the exterior product. If f(m,n) is equal to m(n-m), as we will soon show, then reordering the regressive product cancels the sign, and the formulas take the simple forms

$$\underset{m}{\alpha}\wedge\overline{\underset{m}{\beta}} = a\,\overline{1} \qquad\qquad \underset{m}{\beta}\vee\overline{\underset{m}{\alpha}} = a \tag{5.36}$$

5.6 The Complement of a Complement


The complement of a complement axiom
In the sections below we determine the function f(m, n) in axiom 4 (formula 5.7) above:

$$\overline{\overline{\underset{m}{\alpha}}} = (-1)^{f(m,n)}\;\underset{m}{\alpha}$$

To find the function f(m, n) we proceed to calculate the complement of the complement of an m-element. What we also discover along the way is that for a formula like this to be valid, the coefficients of our metric mapping $g_{ij}$ must also be symmetric. And, as we have already previewed in Section 5.5 above, f(m, n) must be equal to m(n-m).


The complement of a cobasis element


The complement of the complement of a basis element is obtained by taking the complement of formula 5.13,

$$\overline{e_i} = \sigma \sum_{j=1}^{n} g_{ij}\,\underline{e}_j$$

(here $\sigma$ denotes the scalar factor in the complement mapping, and $\underline{e}_j$ the cobasis element of $e_j$), to give

$$\overline{\overline{e_i}} = \sigma \sum_{j=1}^{n} g_{ij}\,\overline{\underline{e}_j}$$

In order to determine $\overline{\overline{e_i}}$, we need to obtain an expression for the complement of a cobasis element of a 1-element $e_j$. Note that whereas the complement of a cobasis element is defined, the cobasis element of a complement is not. To simplify the development we consider, without loss of generality, a specific basis element $e_1$. First we express the cobasis element as a basis (n-1)-element, and then use the complement axiom to express the right-hand side as a regressive product of complements of 1-elements.

$$\overline{\underline{e}_1} = \overline{e_2\wedge e_3\wedge\cdots\wedge e_n} = \overline{e_2}\vee\overline{e_3}\vee\cdots\vee\overline{e_n}$$

Substituting for the $\overline{e_i}$ from the definition 5.13:

$$\overline{\underline{e}_1} = \sigma^{\,n-1}\,\big(g_{21}\,\underline{e}_1 + g_{22}\,\underline{e}_2 + \cdots + g_{2n}\,\underline{e}_n\big)\vee\big(g_{31}\,\underline{e}_1 + \cdots + g_{3n}\,\underline{e}_n\big)\vee\cdots\vee\big(g_{n1}\,\underline{e}_1 + \cdots + g_{nn}\,\underline{e}_n\big)$$

Expanding this expression and collecting terms gives:

$$\overline{\underline{e}_1} = \sigma^{\,n-1} \sum_{j=1}^{n} \bar{g}_{1j}\,(-1)^{j-1}\;\underline{e}_1\vee\underline{e}_2\vee\cdots\check{j}\cdots\vee\underline{e}_n$$

In this expression, the notation $\check{j}$ means that the jth factor has been omitted. The scalars $\bar{g}_{1j}$ are the cofactors of $g_{1j}$ in the array $g_{ij}$. Further discussion of cofactors, and how they result from such products of (n-1)-elements, may be found in Chapter 2. The results for regressive products are identical mutatis mutandis to those for the exterior product. By equation 3.42, regressive products of the form above simplify to a scalar multiple of the missing element. (Remember, $\sigma$ is now equal to the inverse of the congruence factor c for the purposes of this chapter.)

$$(-1)^{j-1}\;\underline{e}_1\vee\underline{e}_2\vee\cdots\check{j}\cdots\vee\underline{e}_n = \frac{1}{\sigma^{\,n-2}}\,(-1)^{n-1}\,e_j$$

The expression for $\overline{\underline{e}_1}$ then simplifies to:

$$\overline{\underline{e}_1} = (-1)^{n-1}\,\sigma \sum_{j=1}^{n} \bar{g}_{1j}\,e_j$$

The final step is to see that the actual basis element chosen ($e_1$) was, as expected, of no significance in determining the final form of the formula. We thus have the more general result:

$$\overline{\underline{e}_i} = (-1)^{n-1}\,\sigma \sum_{j=1}^{n} \bar{g}_{ij}\,e_j \tag{5.37}$$

Thus we have determined the complement of the cobasis element of a basis 1-element as a specific 1-element. The coefficients of this 1-element are the products of the scalar $\sigma$ with cofactors of the $g_{ij}$, and a possible sign depending on the dimension of the space. In matrix formulation, this can be written as

$$\overline{\underline{B}} = (-1)^{n-1}\,\sigma\,\overline{G}\,B$$

where B is a column matrix of basis vectors, and $\overline{G}$ is the matrix of cofactors of elements of G. Our major use of this formula is to derive an expression for the complement of a complement of a basis element. This we do below.

The complement of the complement of a basis 1-element


Consider again the basis element $e_i$. The complement of $e_i$ is, by definition:

$$\overline{e_i} = \sigma \sum_{j=1}^{n} g_{ij}\,\underline{e}_j$$

Taking the complement a second time gives:

$$\overline{\overline{e_i}} = \sigma \sum_{j=1}^{n} g_{ij}\,\overline{\underline{e}_j}$$

We can now substitute for the $\overline{\underline{e}_j}$ from the formula derived in the previous section to get:

$$\overline{\overline{e_i}} = (-1)^{n-1}\,\sigma^2 \sum_{j=1}^{n}\sum_{k=1}^{n} g_{ij}\,\bar{g}_{jk}\,e_k$$

In the matrix formulation of the previous section, this can be written as

$$\overline{\overline{B}} = (-1)^{n-1}\,\sigma^2\,G\,\overline{G}\,B$$

In Chapter 2 the sum over j of the terms $\bar{g}_{ij}\,g_{kj}$ was shown to be equal to the determinant $|g_{ij}|$ whenever i equals k, and zero otherwise. That is:

$$\sum_{j=1}^{n} \bar{g}_{ij}\,g_{kj} = |g_{ij}|\;\delta_{ik}$$

Note carefully that in this sum, the order of the subscripts in the cofactor term is reversed compared to that in the expression for $\overline{\overline{e_i}}$. Thus we conclude that if and only if the array $g_{ij}$ is symmetric, that is, $g_{ij} = g_{ji}$, can we express the complement of a complement of a basis 1-element in terms only of itself and no other basis element.

$$\overline{\overline{e_i}} = (-1)^{n-1}\,\sigma^2 \sum_{k=1}^{n} |g_{ij}|\;\delta_{ik}\,e_k = (-1)^{n-1}\,\sigma^2\,|g_{ij}|\;e_i$$

Furthermore, since we have already shown in Section 5.3 that $\sigma^2\,|g_{ij}| = 1$, we also have that:

$$\overline{\overline{e_i}} = (-1)^{n-1}\,e_i$$
In sum: In order to satisfy the complement of a complement axiom for m-elements it is necessary that gij gji . For 1-elements it is also sufficient. Below we shall show that the symmetry of the gij is also sufficient for m-elements.
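The cofactor identity invoked above is easily verified numerically. A Python sketch (an independent check; `cofactor_matrix` is our own helper, not a GrassmannAlgebra function):

```python
import numpy as np

def cofactor_matrix(G):
    """Matrix of cofactors gbar_ij of a square matrix G:
    gbar_ij = (-1)^(i+j) * det(minor of G with row i, column j deleted)."""
    n = G.shape[0]
    C = np.empty_like(G, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(G, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

# A symmetric (metric) matrix
G = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 4.0]])
C = cofactor_matrix(G)

# sum_j gbar_ij g_kj  ==  |g| delta_ik
assert np.allclose(C @ G.T, np.linalg.det(G) * np.eye(3))
```

For the symmetric G used here the transpose is immaterial, which is exactly the point of the symmetry condition discussed above.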

The complement of the complement of a basis m-element


Consider the basis m-element $e_1\wedge e_2\wedge\cdots\wedge e_m$. By taking the complement of the complement of this element and applying the complement axiom twice, we obtain:

$$\overline{\overline{e_1\wedge e_2\wedge\cdots\wedge e_m}} = \overline{\overline{e_1}\vee\overline{e_2}\vee\cdots\vee\overline{e_m}} = \overline{\overline{e_1}}\wedge\overline{\overline{e_2}}\wedge\cdots\wedge\overline{\overline{e_m}}$$

But since $\overline{\overline{e_i}} = (-1)^{n-1}\,e_i$ we obtain immediately that:

$$\overline{\overline{e_1\wedge e_2\wedge\cdots\wedge e_m}} = (-1)^{m(n-1)}\; e_1\wedge e_2\wedge\cdots\wedge e_m$$

But of course the form of this result is valid for any basis element $\underset{m}{e}{}_i$. Writing $(-1)^{m(n-1)}$ in the equivalent form $(-1)^{m(n-m)}$ (since the parity of m(n-1) is equal to the parity of m(n-m)), we obtain that for any basis element of any grade:

$$\overline{\overline{\underset{m}{e}{}_i}} = (-1)^{m(n-m)}\;\underset{m}{e}{}_i \tag{5.38}$$

The complement of the complement of an m-element




Consider the general m-element $\underset{m}{\alpha} = \sum_i a_i\,\underset{m}{e}{}_i$. By taking the complement of the complement of this element and substituting from equation 5.38, we obtain:

$$\overline{\overline{\underset{m}{\alpha}}} = \sum_i a_i\,\overline{\overline{\underset{m}{e}{}_i}} = \sum_i a_i\,(-1)^{m(n-m)}\,\underset{m}{e}{}_i = (-1)^{m(n-m)}\,\underset{m}{\alpha}$$

Finally then we have shown that, provided the complement mapping $g_{ij}$ is symmetric and the constant $\sigma$ is such that $\sigma^2\,|g_{ij}| = 1$, the complement of a complement axiom is satisfied by an otherwise arbitrary mapping, with sign $(-1)^{m(n-m)}$.

$$\overline{\overline{\underset{m}{\alpha}}} = (-1)^{m(n-m)}\;\underset{m}{\alpha} \tag{5.39}$$

Special cases
The complement of the complement of a scalar is the scalar itself.

$$\overline{\overline{a}} = a \tag{5.40}$$

The complement of the complement of an n-element is the n-element itself.

$$\overline{\overline{\underset{n}{\alpha}}} = \underset{n}{\alpha} \tag{5.41}$$

The complement of the complement of any element in a 3-space is the element itself, since $(-1)^{m(3-m)}$ is positive for m equal to 0, 1, 2, or 3.

$$\overline{\overline{\underset{m}{\alpha}}} = \underset{m}{\alpha}$$

Alternatively, we can say that $\overline{\overline{\underset{m}{\alpha}}} = \underset{m}{\alpha}$ except when $\underset{m}{\alpha}$ is of odd degree in an even-dimensional space. The simplest case of an element of odd degree in an even-dimensional space is a vector in a vector 2-space.
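The sign behaviour described in these special cases reduces to the parity of m(n-m), which a few lines of Python make explicit (an independent check; the function name is ours):

```python
def cc_sign(m, n):
    """Sign picked up by a double complement of an m-element in n-space,
    per formula 5.39: (-1)^(m(n-m))."""
    return (-1) ** (m * (n - m))

# In 3-space every element returns to itself under a double complement
assert all(cc_sign(m, 3) == 1 for m in range(4))

# A vector in a 2-space reverses sign: the simplest odd-grade,
# even-dimension case
assert cc_sign(1, 2) == -1

# (-1)^(m(n-m)) always has the same parity as (-1)^(m(n-1))
assert all(cc_sign(m, n) == (-1) ** (m * (n - 1))
           for n in range(1, 8) for m in range(n + 1))
```

The last assertion is the parity equivalence used in the derivation of formula 5.38.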


Vectors x, $\overline{x}$, and $\overline{\overline{x}} = -x$ in a two-dimensional vector space

Idempotent complements
No simple element (except zero) can be equal to its complement. This is because the exterior product of a simple element with its complement is congruent to the unit n-element (which is not zero), a property that follows directly from the definition of the complement. However, this is not necessarily true for non-simple elements. Let S be the sum of a simple m-element ($\underset{m}{\alpha}$, say) and its complement.

$$S = \underset{m}{\alpha} + \overline{\underset{m}{\alpha}}$$

Then the complement of S is equal to

$$\overline{S} = \overline{\underset{m}{\alpha}} + \overline{\overline{\underset{m}{\alpha}}} = \overline{\underset{m}{\alpha}} + (-1)^{m(n-m)}\,\underset{m}{\alpha}$$

Since $(-1)^{m(n-m)}$ is equal to $(-1)^{m(n-1)}$, we can see that $\overline{S} = S$ if and only if the dimension n of the space is odd, or the grade m is even. For example, in a vector 3-space, the complement of a sum of a vector and its bivector complement is identical to itself.

!3 ; S = x + x̄; {S, GrassmannComplement[S]}
{x + x̄, x + x̄}

And in a point 3-space, the sum of a 2-element and its 2-element complement is identical to itself (here B is a 2-element).

"3 ; S = B + B̄; {S, GrassmannComplement[S]}
{B + B̄, B + B̄}

In Chapter 7: Exploring Screw Algebra, we will explore this property further.


5.7 Working with Metrics


Working with metrics
In GrassmannAlgebra, there are three basic ways in which you might apply a metric.

First, you can transform expressions involving Clifford, hypercomplex, generalized Grassmann, interior, inner, or scalar products into expressions specifically involving scalar products of basis elements. Your results involving these scalar products of basis elements are valid for any metric. If you wish to make a specific metric apply to your results, you can declare that metric, and use ToMetricElements to perform the substitution of the currently declared metric elements for the corresponding scalar products of basis elements. We will discuss these transformations after we have introduced the interior product in the next chapter.

Second, if you have elements expressed in terms of basis elements, you can compute their complements by first taking their complement (applying GrassmannComplement, or using the OverBar in the GrassmannAlgebra palette), and then applying ConvertComplements. This will use the currently declared metric to convert the complements of basis elements to complementary basis elements. We will discuss these transformations in the section to follow this one.

Third, if you want to display the metrics induced by the various exterior product spaces, you can use MetricL for an individual space, or MetricPalette to display all the induced metrics. We discuss these in the sections below.

The default metric


In order to simplify complements and interior products, and any products defined in terms of them (for example, Clifford and hypercomplex products), GrassmannAlgebra needs to know what metric has been imposed on the underlying linear space L1. Grassmann and all those writing in the early tradition of the Ausdehnungslehre tacitly assumed a Euclidean metric; that is, one in which g_ij = δ_ij. This metric is also the one tacitly assumed in beginning presentations of the three-dimensional vector calculus, and is most evident in the definition of the cross product as a vector normal to the factors of the product. The default metric assumed by GrassmannAlgebra is the Euclidean metric. In the case that the GrassmannAlgebra package has just been loaded, entering Metric will show the components of the default Euclidean metric tensor.

Metric
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}

These components are arranged as the elements of a 3×3 matrix because the default basis is 3-dimensional.

Basis
{e1, e2, e3}

However, if the dimension of the basis is changed, the default metric changes accordingly.

DeclareBasis[{#, i, j, k}]
{#, i, j, k}
Metric
{{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}}

We will take the liberty of referring to such a matrix as the metric.

Declaring a metric
GrassmannAlgebra permits us to declare a metric as a matrix with any numeric or symbolic components. There are just three conditions to which a valid matrix must conform in order to be a metric:

1) It must be symmetric (and hence square).
2) Its order must be the same as the dimension of the declared linear space.
3) Its components must be scalars.

It is up to the user to ensure the first two conditions. The third is handled by GrassmannAlgebra automatically assuming all symbols in the matrix are scalars.
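For readers scripting outside Mathematica, the first two conditions (and the numeric version of the third) are straightforward to check. A Python sketch (a hypothetical helper, not part of the GrassmannAlgebra package; symbolic entries are outside its scope):

```python
import numpy as np

def valid_metric(M, dim):
    """Check the three conditions a matrix must satisfy to serve as a
    metric: symmetric (hence square), of order equal to the dimension
    of the declared space, and with scalar (here: numeric) entries."""
    M = np.asarray(M)
    return (M.ndim == 2 and M.shape[0] == M.shape[1]   # square
            and M.shape[0] == dim                      # order = dimension
            and np.issubdtype(M.dtype, np.number)      # scalar components
            and np.allclose(M, M.T))                   # symmetric

assert valid_metric([[2, 0, 1], [0, 3, 0], [1, 0, 4]], 3)
assert not valid_metric([[1, 2], [3, 4]], 2)           # not symmetric
assert not valid_metric(np.eye(3), 4)                  # wrong order
```

The three boolean clauses mirror the three conditions listed above, one for one.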

Example: Declaring a metric


We return to a vector 3-space (by entering !3 from the palette), create a 3×3 matrix M, and then declare it as the metric.

!3 ; M = {{α, 0, ν}, {0, β, 0}, {ν, 0, γ}}; MatrixForm[M]
α 0 ν
0 β 0
ν 0 γ

DeclareMetric[M]
{{α, 0, ν}, {0, β, 0}, {ν, 0, γ}}

We can verify that this is indeed the metric by entering Metric.

Metric
{{α, 0, ν}, {0, β, 0}, {ν, 0, γ}}

We can verify that α, β, and ν are scalars by using ScalarQ.


ScalarQ[{α, β, ν}]
{True, True, True}

Declaring a general metric


For theoretical calculations it is sometimes useful to be able quickly to declare a metric of general symbolic elements. We can do this as described in the previous section, or we can use the GrassmannAlgebra function DeclareMetric[g], where g is a symbol. We will often use the double-struck symbol 𝕘 for the kernel symbol of the metric components.

!3 ; G = DeclareMetric[𝕘]; MatrixForm[G]
𝕘1,1 𝕘1,2 𝕘1,3
𝕘1,2 𝕘2,2 𝕘2,3
𝕘1,3 𝕘2,3 𝕘3,3

You can check that the components of G are now considered scalars.

ScalarQ[G]
{{True, True, True}, {True, True, True}, {True, True, True}}

You can test to see if a symbol is a component of the currently declared metric by using MetricQ.

MetricQ[{𝕘2,3, 𝕘23, 𝕘3,4}]
{True, False, False}

Calculating induced metrics


The GrassmannAlgebra function for calculating the metric induced on Lm by the metric on L1 is MetricL[m].

Induced general metrics


Suppose we are working in 3-space with the general metric defined above. Then the metrics on L0, L1, L2, and L3 can be calculated by entering:

!3 ; DeclareMetric[𝕘];
M0 = MetricL[0]; MatrixForm[M0]
(1)

M1 = MetricL[1]; MatrixForm[M1]
𝕘1,1 𝕘1,2 𝕘1,3
𝕘1,2 𝕘2,2 𝕘2,3
𝕘1,3 𝕘2,3 𝕘3,3

M2 = MetricL[2]; MatrixForm[M2]
-𝕘1,2² + 𝕘1,1 𝕘2,2    -𝕘1,2 𝕘1,3 + 𝕘1,1 𝕘2,3    -𝕘1,3 𝕘2,2 + 𝕘1,2 𝕘2,3
-𝕘1,2 𝕘1,3 + 𝕘1,1 𝕘2,3    -𝕘1,3² + 𝕘1,1 𝕘3,3    -𝕘1,3 𝕘2,3 + 𝕘1,2 𝕘3,3
-𝕘1,3 𝕘2,2 + 𝕘1,2 𝕘2,3    -𝕘1,3 𝕘2,3 + 𝕘1,2 𝕘3,3    -𝕘2,3² + 𝕘2,2 𝕘3,3

M3 = MetricL[3]; MatrixForm[M3]
(-𝕘1,3² 𝕘2,2 + 2 𝕘1,2 𝕘1,3 𝕘2,3 - 𝕘1,1 𝕘2,3² - 𝕘1,2² 𝕘3,3 + 𝕘1,1 𝕘2,2 𝕘3,3)
The metric on L3 is of course just the determinant of the metric on L1.
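The entries of these induced metrics are just the m×m minors (Gram determinants) of the metric on L1, indexed by the m-element basis index sets. This is easy to reproduce numerically in Python (an independent check, not part of the package; `induced_metric` is our own helper):

```python
import numpy as np
from itertools import combinations

def induced_metric(G, m):
    """Metric induced on the m-elements: the entry for basis m-elements
    e_I and e_J is the Gram determinant det(G[I, J]) over the index
    sets I and J, taken in lexicographic order."""
    n = G.shape[0]
    idx = list(combinations(range(n), m))
    return np.array([[np.linalg.det(G[np.ix_(I, J)]) for J in idx]
                     for I in idx])

# A symmetric 3-space metric
G = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 4.0]])

M2 = induced_metric(G, 2)
M3 = induced_metric(G, 3)
assert np.allclose(M2, M2.T)               # induced metric stays symmetric
assert np.allclose(M3, np.linalg.det(G))   # L3 metric is just det(G)
```

The two assertions correspond to the symmetry property verified below, and to the remark that the metric on L3 is the determinant of the metric on L1.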

Example: Induced specific metrics


We return to the metric discussed in a previous example.

!3 ; M = {{α, 0, ν}, {0, β, 0}, {ν, 0, γ}}; DeclareMetric[M]; MatrixForm[M]
α 0 ν
0 β 0
ν 0 γ

The induced metrics are

MatrixForm[MetricL[#]] & /@ {0, 1, 2, 3}

{(1),
 (α 0 ν; 0 β 0; ν 0 γ),
 (αβ 0 -βν; 0 αγ-ν² 0; -βν 0 βγ),
 (αβγ - βν²)}
Verifying the symmetry of the induced metrics


It is easy to verify the symmetry of the induced metrics in any particular case by inspection. However, to automate this we need only ask Mathematica to compare the metric to its transpose. For example, we can verify the symmetry of the metric induced on L3 by a general metric in 4-space.

!4 ; DeclareMetric[𝕘]; M3 = MetricL[3]; M3 == Transpose[M3]
True


The metric for a cobasis


In the sections above we have shown how to calculate the metric induced on Lm by the metric defined on L1. Because of the way in which the cobasis elements of Lm are naturally ordered alphanumerically, their ordering and signs will not generally correspond to those of the basis elements of L(n-m). The arrangement of the elements of the metric tensor for the cobasis elements of Lm will therefore differ from the arrangement of the elements of the metric tensor for the basis elements of L(n-m). The metric tensor of a cobasis is the tensor of cofactors of the metric tensor of the basis. Hence we can obtain the tensor of cofactors of the elements of the metric tensor of Lm by reordering and resigning the elements of the metric tensor induced on L(n-m).

As an example, take the general metric in 3-space discussed in the previous section. We will show that we can obtain the tensor of the cofactors of the elements of the metric tensor of L1 by reordering and resigning the elements of the metric tensor induced on L2.

The metric on L1 is given as a correspondence G1 between the basis elements and cobasis elements of L1:

(e1, e2, e3)  →G1→  (e2∧e3, -(e1∧e3), e1∧e2)

G1 = {{𝕘1,1, 𝕘1,2, 𝕘1,3}, {𝕘1,2, 𝕘2,2, 𝕘2,3}, {𝕘1,3, 𝕘2,3, 𝕘3,3}};

The metric on L2 is given as a correspondence G2 between the basis elements and cobasis elements of L2:

(e1∧e2, e1∧e3, e2∧e3)  →G2→  (e3, -e2, e1)

G2 = {{-𝕘1,2² + 𝕘1,1 𝕘2,2, -𝕘1,2 𝕘1,3 + 𝕘1,1 𝕘2,3, -𝕘1,3 𝕘2,2 + 𝕘1,2 𝕘2,3},
      {-𝕘1,2 𝕘1,3 + 𝕘1,1 𝕘2,3, -𝕘1,3² + 𝕘1,1 𝕘3,3, -𝕘1,3 𝕘2,3 + 𝕘1,2 𝕘3,3},
      {-𝕘1,3 𝕘2,2 + 𝕘1,2 𝕘2,3, -𝕘1,3 𝕘2,3 + 𝕘1,2 𝕘3,3, -𝕘2,3² + 𝕘2,2 𝕘3,3}};

We can transform the columns of basis elements in this equation into the form of the preceding one with the transformation T, which is simply determined as:

T = {{0, 0, 1}, {0, -1, 0}, {1, 0, 0}};

T maps the column (e1∧e2, e1∧e3, e2∧e3) to (e2∧e3, -(e1∧e3), e1∧e2), and the column (e3, -e2, e1) to (e1, e2, e3).
And since this transformation is its own inverse, we can also transform G2 to T.G2.T. We now expect this transformed array to be the array of cofactors of the metric tensor G1. We can easily check this in Mathematica by entering the predicate
Simplify[G1.(T.G2.T) == Det[G1] IdentityMatrix[3]]
True
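The same predicate can be checked numerically, outside Mathematica, for any particular symmetric metric. A Python sketch (our own construction, building G2 as the matrix of 2×2 minors):

```python
import numpy as np
from itertools import combinations

G1 = np.array([[2.0, 0.0, 1.0],
               [0.0, 3.0, 0.0],
               [1.0, 0.0, 4.0]])

# Metric induced on L2: Gram determinants over index pairs,
# ordered (e1^e2, e1^e3, e2^e3)
pairs = list(combinations(range(3), 2))
G2 = np.array([[np.linalg.det(G1[np.ix_(I, J)]) for J in pairs]
               for I in pairs])

# The reorder-and-resign transformation, which is its own inverse
T = np.array([[0.0, 0.0, 1.0],
              [0.0, -1.0, 0.0],
              [1.0, 0.0, 0.0]])
assert np.allclose(T @ T, np.eye(3))

# T.G2.T is the cofactor array of G1:  G1.(T.G2.T) == det(G1) I
assert np.allclose(G1 @ (T @ G2 @ T), np.linalg.det(G1) * np.eye(3))
```

This reproduces the Mathematica predicate above for one concrete symmetric metric; the symbolic result holds for any symmetric G1.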

Creating palettes of induced metrics


You can create a palette of all the induced metrics by declaring a metric and then entering MetricPalette. The induced metrics of a Euclidean vector 4-space are all Euclidean.
!4 ; MetricPalette

Metric Palette
L0    (1)
L1    the 4×4 identity matrix
L2    the 6×6 identity matrix
L3    the 4×4 identity matrix
L4    (1)

Here are the metrics for a non-Euclidean vector 3-space.


!3 ; M = {{α, 0, ν}, {0, β, 0}, {ν, 0, γ}}; DeclareMetric[M]; MetricPalette

Metric Palette
L0    (1)
L1    {{α, 0, ν}, {0, β, 0}, {ν, 0, γ}}
L2    {{αβ, 0, -βν}, {0, αγ - ν², 0}, {-βν, 0, βγ}}
L3    (αβγ - βν²)
Here are general metrics for a vector 3-space.


!3 ; DeclareMetric[𝕘]; MetricPalette

Metric Palette
L0    (1)
L1    {{𝕘1,1, 𝕘1,2, 𝕘1,3}, {𝕘1,2, 𝕘2,2, 𝕘2,3}, {𝕘1,3, 𝕘2,3, 𝕘3,3}}
L2    {{-𝕘1,2² + 𝕘1,1 𝕘2,2, -𝕘1,2 𝕘1,3 + 𝕘1,1 𝕘2,3, -𝕘1,3 𝕘2,2 + 𝕘1,2 𝕘2,3},
       {-𝕘1,2 𝕘1,3 + 𝕘1,1 𝕘2,3, -𝕘1,3² + 𝕘1,1 𝕘3,3, -𝕘1,3 𝕘2,3 + 𝕘1,2 𝕘3,3},
       {-𝕘1,3 𝕘2,2 + 𝕘1,2 𝕘2,3, -𝕘1,3 𝕘2,3 + 𝕘1,2 𝕘3,3, -𝕘2,3² + 𝕘2,2 𝕘3,3}}
L3    (-𝕘1,3² 𝕘2,2 + 2 𝕘1,2 𝕘1,3 𝕘2,3 - 𝕘1,1 𝕘2,3² - 𝕘1,2² 𝕘3,3 + 𝕘1,1 𝕘2,2 𝕘3,3)

You can paste any of these metric matrices into your notebook by clicking on the palette.

5.8 Calculating Complements




Entering a complement
To enter the expression for the complement of a Grassmann expression X in GrassmannAlgebra, enter GrassmannComplement[X]. For example, to enter the expression for the complement of x∧y, enter:

GrassmannComplement[x ∧ y]
x ∧ y  (displayed with an overbar)

Or, you can simply select the expression x∧y and click the overbar button on the GrassmannAlgebra palette. Note that an expression with a bar over it symbolizes another expression (the complement of the original expression), not a GrassmannAlgebra operation on the expression. Hence no conversions will occur by entering an overbarred expression.

GrassmannComplement will also work for lists, matrices, or tensors of elements. For example, here is a matrix:

M = {{a e1∧e2, b e2}, {-b e2, c e3∧e1}}; MatrixForm[M]
 a e1∧e2   b e2
 -b e2     c e3∧e1

And here is its complement:

GrassmannComplement[M] // MatrixForm
(the same matrix, with each entry overbarred)
Creating palettes of complements of basis elements


GrassmannAlgebra has a function ComplementPalette for tabulating basis elements and their complements in the currently declared space and the currently declared metric. Once the palette is generated you can paste from it by clicking on any of the buttons.

Euclidean metric
Here we repeat the construction of the complement palette of Section 5.4 in a 2-space.


!2 ; ComplementPalette

Complement Palette
Basis    Complement
1        e1 e2
e1       e2
e2       -e1
e1 e2    1

Palettes of complements for other bases are obtained by declaring the bases, then entering the command ComplementPalette.
DeclareBasis[{#, $}]; ComplementPalette

Complement Palette
Basis    Complement
1        # $
#        $
$        -#
# $      1

Non-Euclidean metric 2-space


Here is the complement palette in a 2-space with general metric.


!2 ; DeclareMetric[g]; ComplementPalette

Complement Palette
Basis    Complement
1        (e1 e2)/√g
e1       (g1,1 e2 - g1,2 e1)/√g
e2       (g1,2 e2 - g2,2 e1)/√g
e1 e2    √g
The symbol g represents the determinant of the metric tensor. It (or expressions involving it) can be evaluated in any space with any declared metric using ToMetricElements.
ToMetricElements[g]
-g1,2² + g1,1 g2,2

The palette is arranged to 'collapse' the full determinant expression under the symbol g since it is often large. However if you wish, you can include the full form of g in the palette by substituting for it.
ComplementPalette /. g → ToMetricElements[g]

Complement Palette
Basis    Complement
1        (e1 e2)/√(-g1,2² + g1,1 g2,2)
e1       (g1,1 e2 - g1,2 e1)/√(-g1,2² + g1,1 g2,2)
e2       (g1,2 e2 - g2,2 e1)/√(-g1,2² + g1,1 g2,2)
e1 e2    √(-g1,2² + g1,1 g2,2)

3-space
Here is the corresponding palette in 3-space.


A ; DeclareMetric[g]; ComplementPalette

Complement Palette
Basis       Complement
1           (e1 e2 e3)/√g
e1          (g1,1 e2∧e3 - g1,2 e1∧e3 + g1,3 e1∧e2)/√g
e2          (g1,2 e2∧e3 - g2,2 e1∧e3 + g2,3 e1∧e2)/√g
e3          (g1,3 e2∧e3 - g2,3 e1∧e3 + g3,3 e1∧e2)/√g
e1 e2       ((-g1,2² + g1,1 g2,2) e3 + (g1,2 g1,3 - g1,1 g2,3) e2 + (g1,2 g2,3 - g1,3 g2,2) e1)/√g
e1 e3       ((-g1,2 g1,3 + g1,1 g2,3) e3 + (g1,3² - g1,1 g3,3) e2 + (g1,2 g3,3 - g1,3 g2,3) e1)/√g
e2 e3       ((-g1,3 g2,2 + g1,2 g2,3) e3 + (g1,3 g2,3 - g1,2 g3,3) e2 + (-g2,3² + g2,2 g3,3) e1)/√g
e1 e2 e3    √g

The determinant of this metric tensor is

ToMetricElements[g]
-g1,3² g2,2 + 2 g1,2 g1,3 g2,3 - g1,1 g2,3² - g1,2² g3,3 + g1,1 g2,2 g3,3

Converting complements of basis elements


The GrassmannAlgebra function for converting complements of basis elements to other basis elements is ConvertComplements. If the declared metric is non-Euclidean, the result will include elements of the metric tensor.

Example: Converting complements of individual basis elements in a Euclidean space


Here is the palette of complements for a Euclidean 3-space.


A ; ComplementPalette

Complement Palette
Basis       Complement
1           e1 e2 e3
e1          e2 e3
e2          -(e1 e3)
e3          e1 e2
e1 e2       e3
e1 e3       -e2
e2 e3       e1
e1 e2 e3    1

We could have obtained these complements in three steps:
1. Generate the basis elements (using LBasis).
2. Take their complements (using GrassmannComplement).
3. Convert the complements to other basis elements (using ConvertComplements).

LBasis
{{1}, {e1, e2, e3}, {e1 e2, e1 e3, e2 e3}, {e1 e2 e3}}

GrassmannComplement[LBasis]
(the same lists, with each element overbarred)

ConvertComplements[GrassmannComplement[LBasis]]
{{e1 e2 e3}, {e2 e3, -(e1 e3), e1 e2}, {e3, -e2, e1}, {1}}

Example: Converting complements of individual basis elements in a non-Euclidean space


To convert complements of basis elements in a non-Euclidean space, we follow the same three steps as in the previous section.

!3 ; DeclareMetric[{{α, 0, ν}, {0, β, 0}, {ν, 0, γ}}] // MatrixForm
α 0 ν
0 β 0
ν 0 γ


Here is the palette of complements for this metric.


ComplementPalette

Complement Palette
Basis       Complement
1           (e1 e2 e3)/√g
e1          (α e2∧e3 + ν e1∧e2)/√g
e2          -(β e1∧e3)/√g
e3          (ν e2∧e3 + γ e1∧e2)/√g
e1 e2       (α β e3 - β ν e1)/√g
e1 e3       ((-αγ + ν²) e2)/√g
e2 e3       (β γ e1 - β ν e3)/√g
e1 e2 e3    √g

We could have obtained these complements in three steps:
1. Generate the basis elements (using LBasis).
2. Take their complements (using GrassmannComplement).
3. Convert the complements to other basis elements (using ConvertComplements).

LBasis
{{1}, {e1, e2, e3}, {e1 e2, e1 e3, e2 e3}, {e1 e2 e3}}

GrassmannComplement[LBasis]
(the same lists, with each element overbarred)

ConvertComplements[GrassmannComplement[LBasis]]
{{(e1 e2 e3)/√g}, {(α e2∧e3 + ν e1∧e2)/√g, -(β e1∧e3)/√g, (ν e2∧e3 + γ e1∧e2)/√g}, {(α β e3 - β ν e1)/√g, ((-αγ + ν²) e2)/√g, (β γ e1 - β ν e3)/√g}, {√g}}
The determinant of the metric tensor is


ToMetricElements@gD a b g - b n2

The palette is arranged to 'collapse' the full determinant expression under the symbol g since it is often large. However if you wish, you can include the full form of g in the palette by substituting for it.
ComplementPalette /. g → ToMetricElements[g]

Complement Palette
Basis       Complement
1           (e1 e2 e3)/√(αβγ - βν²)
e1          (α e2∧e3 + ν e1∧e2)/√(αβγ - βν²)
e2          -(β e1∧e3)/√(αβγ - βν²)
e3          (ν e2∧e3 + γ e1∧e2)/√(αβγ - βν²)
e1 e2       (α β e3 - β ν e1)/√(αβγ - βν²)
e1 e3       ((-αγ + ν²) e2)/√(αβγ - βν²)
e2 e3       (β γ e1 - β ν e3)/√(αβγ - βν²)
e1 e2 e3    √(αβγ - βν²)

Example: Verifying the complement of a complement axiom for basis elements


The complement of a complement of any element in 3-space is the element itself. We can verify this for the basis elements by applying the complement conversion procedure twice. This example verifies the result for a general metric in 3-space.
DeclareMetric@gD 88g1,1 , g1,2 , g1,3 <, 8g1,2 , g2,2 , g2,3 <, 8g1,3 , g2,3 , g3,3 <<

2009 9 3

Grassmann Algebra Book.nb

379

G = ConvertComplements@GrassmannComplement@LBasisDD :: e1 e2 e3 g g2,3 e1 e2 g g3,3 e1 e2 g : >, : g1,3 e1 e2 g g2,2 e1 e3 g g2,3 e1 e3 g + + + g1,2 e1 e3 g g1,2 e2 e3 g g1,3 e2 e3 g e2 Hg1,2 g1,3 - g1,1 g2,3 L g , e3 H- g1,2 g1,3 + g1,1 g2,3 L g + e1 H- g1,3 g2,3 + g1,2 g3,3 L g + e2 Hg1,3 g2,3 - g1,2 g3,3 L g >, : g >> + , + + >, , + g1,1 e2 e3 g ,

e3 I- g2 1,2 + g1,1 g2,2 M g

e1 H- g1,3 g2,2 + g1,2 g2,3 L g e2 Ig2 1,3 - g1,1 g3,3 M g e3 H- g1,3 g2,2 + g1,2 g2,3 L g e1 I- g2 2,3 + g2,2 g3,3 M g

Taking the complement of these and converting them again returns us to the original list of basis elements.
ConvertComplements[GrassmannComplement[G]]
{{1}, {e1, e2, e3}, {e1∧e2, e1∧e3, e2∧e3}, {e1∧e2∧e3}}

Example: Converting complements of basis elements in general expressions


ConvertComplements will take any Grassmann expression and convert any complements of basis elements in it. Suppose we are in a 3-space with a Euclidean metric.
A; ConvertComplementsB2 + a b e2 e2 c e3 F 2 + a b c e1 e2

If the metric is more general we get the more general result.


DeclareMetric@gD; F = ConvertComplementsB2 + a b e2 e2 c e3 F 2+ a b c g2,2 g3,3 e1 e2 g Simplify@FD 2 g + a b c g2,2 Hg3,3 e1 e2 - g2,3 e1 e3 + g1,3 e2 e3 L g a b c g2,2 g2,3 e1 e3 g + a b c g1,3 g2,2 e2 e3 g

ConvertComplements will also apply some simplification rules to expressions involving non-basis elements.
A; ConvertComplementsB2 + a b e2 e2 c x F 2+abcx DeclareMetric@gD; ConvertComplementsB2 + a b e2 e2 c x F 2 + a b c x g2,2

Simplifying expressions involving complements


ConvertComplements has been specifically designed to convert expressions involving complements of basis elements according to the currently declared metric of the space. Generally the results will differ when the metrics differ. GrassmannSimplify, on the other hand, only includes simplification rules which are independent of the metric. Generally, these rules apply when an expression involving complements can be expressed without the complements, or can be expressed by another form deemed simpler, often involving the interior product (which will be discussed in the next chapter).

In the example discussed in the previous section we see that using GrassmannSimplify gives a result involving the interior product e2 e2 (in this case a scalar product) which is valid independent of the metric.
% B2 + a b e2 e2 c x F 2 + a b c He2 e2 L x
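Because the scalar product of basis vectors is just a metric tensor entry, this metric dependence can be mimicked with a simple lookup — a hypothetical `scalar_product` helper for illustration, not a GrassmannAlgebra function:

```python
# Sketch: the scalar product of two basis vectors is the metric entry ei∘ej = g[i][j].
def scalar_product(i, j, metric):
    return metric[i - 1][j - 1]

euclidean = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
general   = [[2, 1, 0], [1, 3, 0], [0, 0, 1]]   # an arbitrary symmetric metric

assert scalar_product(2, 2, euclidean) == 1     # e2∘e2 = 1 in the Euclidean case
assert scalar_product(2, 2, general) == 3       # e2∘e2 = g2,2 in the general case
```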

In the case of a Euclidean metric, the scalar product e2 e2 in the result is equal to 1. In the case of a general metric it is equal to g2,2 . Here are some more examples in 3-space.


A; %B:x, x, x y, x y, Hu vL y, u v y, Hu vL y z, u v y z, Hx yL z, Hx yL z>F 8x, x, x y, x y, u v y, u v y, u v y z, u v y z, Hx yL z, x y z<

The dimension of the space may change the sign of some expressions, or indeed make them zero. Here are the same examples in 4-space, giving two sign-changes, and one zero.
!4 ; %B:x, x, x y, x y, Hu vL y, u v y, Hu vL y z, u v y z, Hx yL z, Hx yL z>F 8- x, x, x y, - Hx yL, u v y, u v y, u v y z, u v y z, Hx yL z, 0<

GrassmannSimplify will also handle expressions with symbolic grades.


TableB!n ; :n, %B:a, a b, a b>F>, 8n, 2, 4<F
m m k m k

::2, :H- 1Lm a, H- 1Lm a b , H- 1Lk+m a b>>,


m m k m k

:3, :a, a b, a b>>, :4, :H- 1Lm a, H- 1Lm a b , H- 1Lk+m a b>>>


m m k m k m m k m k
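The sign pattern underlying these results is that the complement of the complement of an m-element in an n-space acquires the factor (−1)^(m(n−m)). A few lines of arithmetic reproduce the table: no extra signs for n = 3, alternating signs (−1)^m for n = 2 and n = 4.

```python
# Sign picked up by the double complement of an m-element in n-space.
def double_complement_sign(m, n):
    return (-1) ** (m * (n - m))

assert all(double_complement_sign(m, 3) == 1 for m in range(4))        # 3-space: no signs
assert [double_complement_sign(m, 2) for m in range(3)] == [1, -1, 1]
assert [double_complement_sign(m, 4) for m in range(5)] == [1, -1, 1, -1, 1]
```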

Converting expressions involving complements to specified forms


In the usual way of developing the algebra, the most basic product is the exterior product. The regressive product is then developed as a "dual" product to the exterior product (that is, it has the same axiom structure, but different interpretation on its symbols). The introduction of a complement operation (mapping) then enables the definition of the interior product (and its special cases the inner product and the scalar product). The interior, inner and scalar products are discussed in the next chapter.

The exterior and regressive products are taken as basic. If a complement operation (or equivalently, a metric) is introduced (defined) on the space, then the exterior and regressive products can be expressed in terms of each other and the complement. The interior product operation can also be expressed in terms either of the exterior product and the complement, or else the regressive product and the complement. To complete the suite of conversion possibilities, the exterior and regressive products can be expressed in terms of the interior product and the complement, and the complement of an element can be expressed as its interior product with the unit n-element.

In this chapter, we discuss only the conversions between the exterior product, the regressive product, and the complement. The examples below show the effects of these conversions on a single expression in both 3 and 4 dimensional spaces. In a 3-space, the complement of the complement of an element of any grade is the element itself, hence these conversions in a 3-space show no extra signs. In spaces of other dimension however, the results may show signs dependent on the grades of the elements. This is exemplified by the 4-dimensional cases.


Example: Conversions in 3 and 4 dimensional spaces


As an example expression we take the regressive product of two exterior products.
A; G = a b g d ;
m k p q

Converting the exterior products to regressive products and complements yields a possible sign change in 4-space depending on the grades of the elements.
8!3 ; ExteriorToRegressive@GD, !4 ; ExteriorToRegressive@GD< : a b g d , H - 1 L k+m a b H - 1 L p+q g d >
m k p q m k p q

Converting the regressive products to exterior products also yields a possible sign change.
8!3 ; RegressiveToExterior@GD, !4 ; RegressiveToExterior@GD< : a b g d , H - 1 L k+m +p+q a b g d >
m k p q m k p q

Converting regressive products of basis elements in a metric space


In a metric space we can compute the regressive product of basis elements in terms of exterior products of basis elements.

Example: The regressive product of two bivectors in Euclidean space.


Here is the regressive product of two bivectors
Z = (a1 e1∧e2 + a2 e1∧e3 + a3 e2∧e3) ∨ (b1 e1∧e2 + b2 e1∧e3 + b3 e2∧e3)

Expanding and simplifying the product gives


Zs = % @ZD H- a2 b1 + a1 b2 L e1 e2 e1 e3 + H- a3 b1 + a1 b3 L e1 e2 e2 e3 + H- a3 b2 + a2 b3 L e1 e3 e2 e3

Now convert the regressive products to exterior products and complements.


Ze = RegressiveToExterior@ZD a1 e1 e2 b1 e1 e2 + a1 e1 e2 b2 e1 e3 + a1 e1 e2 b3 e2 e3 + a2 e1 e3 b1 e1 e2 + a2 e1 e3 b2 e1 e3 + a2 e1 e3 b3 e2 e3 + a3 e2 e3 b1 e1 e2 + a3 e2 e3 b2 e1 e3 + a3 e2 e3 b3 e2 e3

Finally, convert the complements of the basis elements.


ConvertComplements@Ze D H- a2 b1 + a1 b2 L e1 + H- a3 b1 + a1 b3 L e2 + H- a3 b2 + a2 b3 L e3
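This final vector can be checked numerically. In Euclidean 3-space the complement of a bivector is a vector, and the regressive product of two bivectors is the complement of the exterior product of their complements — the sketch below follows that route directly, and is not the package's own algorithm:

```python
# Regressive product of two bivectors in Euclidean 3-space via complements.
import random

def bivector_complement(c12, c13, c23):
    # Euclidean complement in 3-space: e1^e2 -> e3, e1^e3 -> -e2, e2^e3 -> e1
    return (c23, -c13, c12)

def wedge(u, v):
    # components (e1^e2, e1^e3, e2^e3) of the exterior product of two vectors
    return (u[0]*v[1] - u[1]*v[0], u[0]*v[2] - u[2]*v[0], u[1]*v[2] - u[2]*v[1])

random.seed(1)
a1, a2, a3, b1, b2, b3 = (random.randint(-9, 9) for _ in range(6))

A_bar = bivector_complement(a1, a2, a3)
B_bar = bivector_complement(b1, b2, b3)
regressive = bivector_complement(*wedge(A_bar, B_bar))   # A v B

# matches (−a2 b1 + a1 b2) e1 + (−a3 b1 + a1 b3) e2 + (−a3 b2 + a2 b3) e3
assert regressive == (a1*b2 - a2*b1, a1*b3 - a3*b1, a2*b3 - a3*b2)
```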

Example: The regressive product of two bivectors in non-Euclidean space.


Take the same regressive product of bivectors as in the previous example. Declare a non-Euclidean metric.
DeclareMetric[g]
{{g1,1, g1,2, g1,3}, {g1,2, g2,2, g2,3}, {g1,3, g2,3, g3,3}}

Then convert the complements of the basis elements.


ConvertComplements@Ze D g H- a2 b1 + a1 b2 L e1 + g H- a3 b1 + a1 b3 L e2 + g H- a3 b2 + a2 b3 L e3

Example: The regressive product of three elements in Euclidean 3-space


Here is a pair of regressive products of three basis elements in a 3-space.
X = {(e1∧e2) ∨ (e2∧e3) ∨ (e3∧e1), (e1∧e2∧e3) ∨ (e1∧e2∧e3) ∨ e1};

Convert the regressive products to exterior products and complements.


Xe = RegressiveToExterior@XD 9e1 e2 e2 e3 e3 e1 , e1 e2 e3 e1 e2 e3 e1 =

Convert the complements. ConvertComplements assumes the currently declared metric - in this case the default Euclidean metric.
ConvertComplements[Xe]
{1, e1}


Example: The regressive product of three elements in non-Euclidean 3-space


We now change the metric to a general metric.
DeclareMetric[g]
{{g1,1, g1,2, g1,3}, {g1,2, g2,2, g2,3}, {g1,3, g2,3, g3,3}}

Applying ConvertComplements assumes this metric, and shows us that both results now have the determinant of the metric tensor as multipliers.
ConvertComplements[Xe]
{g, g e1}

Note that because there were two regressive product operations, √g was generated twice (giving g). In the example of the regressive product of bivectors in the previous example, there was one regressive product operation, so √g was generated once.

5.9 Complements in a vector space


The Euclidean complement in a vector 2-space
Consider a vector x in a 2-dimensional Euclidean vector space expressed in terms of basis vectors e1 and e2 .
x = a e1 + b e2

Since this is a Euclidean vector space, we can depict the basis vectors at right angles to each other. But note that since it is a vector space, we do not depict an origin.


Complementing a two-dimensional vector

The complement of x is given by:


$$\bar{x} \;=\; \overline{a\,e_1 + b\,e_2} \;=\; a\,e_2 - b\,e_1$$

Remember, the Euclidean complement of a basis element is (formally identical to) its cobasis element, and a basis element and its cobasis element are defined by their exterior product being the basis n-element, in this case e1∧e2. It is clear from the depiction above that x and its complement $\bar{x}$ are at right angles to each other, thus verifying our geometric interpretation of the algebraic notion of orthogonality: a simple element and its complement are orthogonal. Taking the complement of $\bar{x}$ gives −x:

$$\bar{\bar{x}} \;=\; \overline{a\,e_2 - b\,e_1} \;=\; -\,a\,e_1 - b\,e_2 \;=\; -x$$

Or, we could have used the complement of a complement axiom:


$$\bar{\bar{x}} \;=\; (-1)^{1(2-1)}\,x \;=\; -x$$

Continuing to take complements we find that we eventually return to the original element.
$$\bar{\bar{\bar{x}}} \;=\; -\,\bar{x}, \qquad \bar{\bar{\bar{\bar{x}}}} \;=\; x$$

In a vector 2-space, taking the complement of a vector is thus equivalent to rotating the vector by one right angle counterclockwise.
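On components this rotation reads (a, b) → (−b, a), which a few lines of Python confirm:

```python
# Euclidean complement in a vector 2-space acts as a +90° rotation on components.
def complement2(v):
    a, b = v
    return (-b, a)            # a e1 + b e2  ->  a e2 - b e1

x = (3, 4)
assert complement2(x) == (-4, 3)
assert complement2(complement2(x)) == (-3, -4)                      # double complement = -x
assert complement2(complement2(complement2(complement2(x)))) == x   # four complements return x
```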


The non-Euclidean complement in a vector 2-space


Suppose now a vector x in a 2-dimensional non-Euclidean vector space expressed similarly in terms of basis vectors e1 and e2 .
x = a e1 + b e2

Since this is a non-Euclidean vector space, the basis vectors are generally not at right angles to each other. We declare a general metric, and display the Complement and Metric palettes.
!2 ; DeclareMetric@gD; Grid@88ComplementPalette, MetricPalette<<D

Complement Palette
Basis 1 e1 e2 e1 e2
g e2 g1,2 g

Complement
e1 e2 g e2 g1,1

Metric Palette
L
0

H1L g1,1 g1,2 g1,2 g2,2

e1 g1,2 g e1 g2,2 g

L
1

L I - g2 1,2 + g1,1 g2,2 M 2

The complement of x is given by:


ConvertComplements[$\overline{a\,e_1 + b\,e_2}$]

$$\bar{x} \;=\; \frac{(a\,g_{1,1} + b\,g_{1,2})\,e_2 \;-\; (a\,g_{1,2} + b\,g_{2,2})\,e_1}{\sqrt{g}}$$

Taking the complement of $\bar{x}$ still gives −x:

$$\bar{\bar{x}} \;=\; -\,a\,e_1 - b\,e_2 \;=\; -x$$

And taking the complement of $\bar{\bar{x}}$ still gives $-\bar{x}$:


x a e1 + b e2 ConvertComplementsBa e1 + b e2 F

x a e1 + b e2

e2 H- a g1,1 - b g1,2 L g

e1 Ha g1,2 + b g2,2 L g

Example
Suppose we take a simple metric and calculate x = (e1 + e2)/2 and its complements.

!2 ; M = DeclareMetric@882, 1<, 81, 1<<D; Grid@88ComplementPalette, MetricPalette<<D

Complement Palette
Basis 1 e1 e2 e1 e2
e1 + e2 2

Metric Palette
L
0

Complement e1 e2 - e1 + 2 e2 - e1 + e2 1

H1L

L K2 1O 1 1 1 L
2

H1L

; e1 + e2 2

x ConvertComplements@ x - e1 + 3 e2 2

x ConvertComplements@ xe1 2 e2 2

e1 + e2 2


x ConvertComplements@ x e1 3 e2 2

e1 + e2 2

x e2 x e1 x x

Complementing a two-dimensional vector in a non-Euclidean space

We can see that although the basis vectors are neither unit vectors nor orthogonal, as for the Euclidean case $\bar{\bar{x}}$ is equal to −x, and $\bar{\bar{\bar{x}}}$ is equal to $-\bar{x}$. The standard conceptual vehicle for exploring the notion of orthogonality is the interior product and its special cases, the inner and scalar products. These we will develop in the next chapter, where we will revisit examples of this type and connect them more closely with familiar concepts.
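For this particular metric the worked numbers can be checked directly. Reading off the palette, the complements of the basis vectors are ē1 = −e1 + 2 e2 and ē2 = −e1 + e2, so the complement acts on components by the matrix C below (a sketch using exact rationals; C applied twice is −I even though the metric is not Euclidean):

```python
# Complement in the vector 2-space with metric g = [[2,1],[1,1]] (det g = 1).
from fractions import Fraction as F

C = [[F(-1), F(-1)],     # e1-component of x-bar:  -x1 - x2
     [F(2),  F(1)]]      # e2-component of x-bar:  2 x1 + x2

def apply(M, v):
    return (M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1])

x = (F(1, 2), F(1, 2))                       # x = (e1 + e2)/2
xbar = apply(C, x)
assert xbar == (F(-1), F(3, 2))              # x-bar = -e1 + (3/2) e2
assert apply(C, xbar) == (-x[0], -x[1])      # x-double-bar = -x
```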

The Euclidean complement in a vector 3-space


We now explore the Euclidean complement of a vector x in a vector 3-space. This differs somewhat from the vector 2-space case, since the complement of a vector is a bivector, and the complement of a bivector is a vector.
x = a e1 + b e2 + c e3

For reference we display the complement palette.


!3 ; ComplementPalette

Complement Palette
Basis 1 e1 e2 e3 e1 e2 e1 e3 e2 e3 e1 e2 e3 Complement e1 e2 e3 e2 e3 - He1 e3 L e1 e2 e3 - e2 e1 1

The complement of x is therefore given by:


$$\bar{x} \;=\; \overline{a\,e_1 + b\,e_2 + c\,e_3} \;=\; a\,e_2{\wedge}e_3 - b\,e_1{\wedge}e_3 + c\,e_1{\wedge}e_2$$

Remember, the Euclidean complement of a basis element is (formally identical to) its cobasis element, and a basis element and its cobasis element are defined by their exterior product being the basis n-element, in this case e1∧e2∧e3. Taking the complement of $\bar{x}$ gives x. Note that this result differs in sign from the two-dimensional case.

$$\bar{\bar{x}} \;=\; \overline{a\,e_2{\wedge}e_3 - b\,e_1{\wedge}e_3 + c\,e_1{\wedge}e_2} \;=\; a\,e_1 + b\,e_2 + c\,e_3 \;=\; x$$

Or, we could have used the complement of a complement axiom:


$$\bar{\bar{x}} \;=\; (-1)^{1(3-1)}\,x \;=\; x$$

Since in a vector 3-space, the complement of the complement of a vector, or of a bivector returns us to the original element, continuing to take complements as we did in the two-dimensional case leads us to no further elements.
$$\bar{\bar{x}} \;=\; x, \qquad \bar{\bar{\bar{x}}} \;=\; \bar{x}$$
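The Euclidean complements of the 3-space basis elements shown in the palette can be checked with a small combinatorial sketch: the complement of a basis blade is its cobasis blade, signed so that their exterior product is the basis 3-element.

```python
# Combinatorial sketch of the Euclidean complement on basis blades in 3-space.
from itertools import combinations

def wedge_sign(a, b):
    # sign of e_a ^ e_b relative to sorted indices (0 if they share an index)
    if set(a) & set(b):
        return 0, ()
    merged = a + b
    inversions = sum(1 for i in range(len(merged))
                       for j in range(i + 1, len(merged)) if merged[i] > merged[j])
    return (-1) ** inversions, tuple(sorted(merged))

def complement(blade, n=3):
    # signed cobasis blade, so that blade ^ complement = e1^...^en
    rest = tuple(i for i in range(1, n + 1) if i not in blade)
    sign, _ = wedge_sign(blade, rest)
    return sign, rest

# reproduce the palette: e1 -> e2^e3, e2 -> -(e1^e3), e3 -> e1^e2
assert complement((1,)) == (1, (2, 3))
assert complement((2,)) == (-1, (1, 3))
assert complement((3,)) == (1, (1, 2))

# every basis blade wedged with its complement yields +e1^e2^e3
for m in range(4):
    for blade in combinations((1, 2, 3), m):
        sign, rest = complement(blade)
        s2, full = wedge_sign(blade, rest)
        assert sign * s2 == 1 and full == (1, 2, 3)
```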

The bivector $\bar{x}$ is a sum of components, each in one of the coordinate bivectors. We have already shown in Chapter 2: The Exterior Product that a bivector in a 3-space is simple, and hence can be factored into the exterior product of two vectors. It is in this form that the bivector will be most easily interpreted as a geometric entity. There is an infinity of vectors that are orthogonal to the vector x, but they are all contained in the bivector $\bar{x}$. You can express the bivector $\bar{x}$ as the exterior product of two vectors again in an infinity of ways. The GrassmannAlgebra function ExteriorFactorize finds one of them for you. Others may be obtained by adding vectors from the first factor to the second factor, or vice-versa.

ExteriorFactorize[a e2∧e3 − b e1∧e3 + c e1∧e2]

c (e1 − (a/c) e3) ∧ (e2 − (b/c) e3)

The vector x is orthogonal to each of these vectors since the exterior product of x with each of them is zero.
%B: e1 e1 80, 0< a e3 c c Ha e2 e3 - b e1 e3 + c e1 e2 L,

a e3

Ha e2 e3 - b e1 e3 + c e1 e2 L>F

A vector and its bivector as complements in a vector 3-space

The complements of each of these vectors will be bivectors which contain the original vector x. We can verify this easily with ConvertComplements and GrassmannSimplify.
%AConvertComplementsA 9Hc e1 - a e3 L Ha e1 + b e2 + c e3 L, Hc e2 - b e3 L Ha e1 + b e2 + c e3 L=EE 80, 0<
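Scaled to clear denominators, the two factors are (c e1 − a e3) and (c e2 − b e3); a quick dot-product sketch confirms that each is orthogonal to x:

```python
# Each factor of the complement bivector has zero dot product with x.
import random
random.seed(2)
a, b, c = (random.uniform(1, 5) for _ in range(3))

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

x = (a, b, c)
assert abs(dot(x, (c, 0.0, -a))) < 1e-12    # x . (c e1 - a e3) = ac - ca = 0
assert abs(dot(x, (0.0, c, -b))) < 1e-12    # x . (c e2 - b e3) = bc - cb = 0
```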


The non-Euclidean complement in a vector 3-space


Suppose now a vector x in a 3-dimensional non-Euclidean vector space.
x a e1 + b e2 + c e3 !3 ; DeclareMetric@gD;

The complement of x is given by:


x a e1 + b e2 + c e3 ConvertComplements@a e1 + b e2 + c e3 D x a e1 + b e2 + c e3 Ha g1,3 + b g2,3 + c g3,3 L e1 e2 g H- a g1,2 - b g2,2 - c g2,3 L e1 e3 g + Ha g1,1 + b g1,2 + c g1,3 L e2 e3 g +

Taking the complement of x gives x:


x a e1 + b e2 + c e3 ConvertComplementsAa e1 + b e2 + c e3 E x a e1 + b e2 + c e3 a e1 + b e2 + c e3

Example
Suppose we take the metric used in the example above on the non-Euclidean complement in a vector 2-space and extend it orthogonally to the third dimension.


!3 ; M = DeclareMetric@882, 1, 0<, 81, 1, 0<, 80, 0, 1<<D; Grid@88ComplementPalette, MetricPalette<<D

Complement Palette
Basis 1 e1 e2 e3 e1 e2 e1 e3 e2 e3 e1 e2 e3 Complement e1 e2 e3 - He1 e3 L + 2 e2 e3 - He1 e3 L + e2 e3 e1 e2 e3 e1 - 2 e2 e1 - e2 1
1 2

Metric Palette
L
0

H1L 2 1 0 1 1 0 0 0 1 1 0 0 0 2 1 0 1 1 H1L

L
1

L
2

L
3

We now calculate the complement of the same vector x = (e1 + e2)/2 and verify that $\bar{\bar{x}}$ is equal to x.

x ConvertComplements@ x - He1 e3 L + 3 e2 e3 2

x ConvertComplements@ x e1 2 + e2 2

e1 + e2 2

If we factorize the bivector x using ExteriorFactorize, we obtain


ExteriorFactorizeB- He1 e3 L + e1 3 e2 2 e3 3 e2 e3 2 F


and observe again that the vector x is orthogonal to each of these factors, since clearly the exterior product of the complement of x with each of these factors is zero.
%B: e1 80, 0< 3 e2 2 - He1 e3 L + 3 e2 e3 2 , e3 - He1 e3 L + 3 e2 e3 2 >F

5.10 Complements in a bound space


Metrics in a bound space
If we want to interpret one element of a linear space as an origin point, we need to consider which forms of metric make sense, or are useful in some degree.

We have seen above that a Euclidean metric makes sense both in vector spaces (as we expected) but also in a bound space where one basis element is interpreted as the origin point and the others as basis vectors. More general metrics make sense in vector spaces, because the entities all have the same interpretation.

This leads us to consider hybrid metrics in bound spaces in which the vector subspace has a general metric but the origin is orthogonal to all vectors. We can therefore adopt a Euclidean metric for the origin, and a more general metric for the vector subspace.

$$G_{ij} \;=\; \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & g_{11} & \cdots & g_{1n} \\ \vdots & \vdots & & \vdots \\ 0 & g_{n1} & \cdots & g_{nn} \end{pmatrix} \tag{5.42}$$

These hybrid metrics will be useful later when we discuss screw algebra in Chapter 7 and mechanics in Chapter 8. Of course the permitted transformations on a space with such a metric must be restricted to those which maintain the orthogonality of the origin point to the vector subspace.

It will be convenient in what follows to refer to a bound space with underlying linear space of dimension n+1 as an n-plane. All the formulae for complements in the vector subspace still hold. In an n-plane, the vector subspace has dimension n, and hence the complement operation for the n-plane and its vector subspace will not be the same. We will denote the complement operation in the vector subspace by using an overvector instead of an overbar, and call it a vector space complement. The vector space complement (overvector) operation can be entered from the Basic Operations palette by using the button under the complement (overbar) button.

The constant will be the same in both spaces and will still equate to the inverse of the square root of the determinant g of the metric tensor. Hence it is easy to show that in a bound space with the metric tensor above that:

$$\bar{1} \;=\; \frac{1}{\sqrt{g}}\;\mathcal{O}\wedge e_1\wedge e_2\wedge\cdots\wedge e_n \tag{5.43}$$

$$\vec{1} \;=\; \frac{1}{\sqrt{g}}\;e_1\wedge e_2\wedge\cdots\wedge e_n \tag{5.44}$$

$$\bar{1} \;=\; \mathcal{O}\wedge\vec{1} \tag{5.45}$$

$$\bar{\mathcal{O}} \;=\; \vec{1} \tag{5.46}$$

In this interpretation 1 will be called the unit n-plane, while 1 will be called the unit n-vector. The unit n-plane is the unit n-vector bound through the origin.
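Because the origin row and column of the hybrid metric contain only the leading 1, the determinant of the hybrid metric equals that of the vector-subspace metric — which is why the constant is the same in both spaces. A tiny determinant sketch illustrates this:

```python
# Bordering a metric with a unit origin row/column leaves its determinant unchanged.
def det2(m):
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

g = [[2, 1], [1, 1]]                       # vector subspace metric
G = [[1, 0, 0], [0, 2, 1], [0, 1, 1]]      # hybrid metric including the origin
assert det3(G) == det2(g) == 1
```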

The complement of an m-vector


If we define $\underline{e_i}$ as the cobasis element of $e_i$ in the vector basis of the vector subspace (note that this is denoted by an 'underbracket' rather than an 'underbar'), then the formula for the complement of a basis vector in the vector subspace is:

$$\vec{e_i} \;=\; \frac{1}{\sqrt{g}}\sum_{j=1}^{n} g_{ij}\,\underline{e_j} \tag{5.47}$$

In the n-plane, the complement $\bar{e_i}$ of a basis vector $e_i$ is given by the metric of the n-plane as:

$$\bar{e_i} \;=\; \frac{1}{\sqrt{g}}\sum_{j=1}^{n} g_{ij}\left(-\,\mathcal{O}\wedge\underline{e_j}\right)$$

Hence by using the formula above we have that:

$$\bar{e_i} \;=\; -\,\mathcal{O}\wedge\vec{e_i} \tag{5.48}$$

More generally we have that for a basis m-vector, its complement in the n-plane is related to its complement in the vector subspace by:


$$\overline{\underset{m}{e_i}} \;=\; (-1)^m\,\mathcal{O}\wedge\overrightarrow{\underset{m}{e_i}} \tag{5.49}$$

And for a general m-vector:

$$\overline{\underset{m}{a}} \;=\; (-1)^m\,\mathcal{O}\wedge\overrightarrow{\underset{m}{a}} \tag{5.50}$$

The vector space complement of any exterior product involving the origin $\mathcal{O}$ is undefined.

Products of vectorial elements in a bound space


In the previous section we have shown that in a bound space the complement of an m-vector will contain the origin as a factor.

$$\overline{\underset{m}{a}} \;=\; (-1)^m\,\mathcal{O}\wedge\overrightarrow{\underset{m}{a}}$$

Hence the exterior product of two such complements will be zero.

$$\overline{\underset{m}{a}}\wedge\overline{\underset{k}{b}} \;=\; \left((-1)^m\,\mathcal{O}\wedge\overrightarrow{\underset{m}{a}}\right)\wedge\left((-1)^k\,\mathcal{O}\wedge\overrightarrow{\underset{k}{b}}\right) \;=\; 0$$

The complement of this equation is also zero, showing that the regressive product of vectorial elements in a bound space is zero.

$$\overline{\underset{m}{a}\vee\underset{k}{b}} \;=\; \overline{\underset{m}{a}}\wedge\overline{\underset{k}{b}} \;=\; 0 \qquad\Longrightarrow\qquad \underset{m}{a}\vee\underset{k}{b} \;=\; 0$$

For a regressive product to be non-zero, it needs to be able to assemble a copy of the unit nelement out of its factors. If it cannot, the regressive product is zero. In this case, the space is a bound space whose unit n-element contains the origin " but the origin is missing from both factors of the product, hence it is zero. It is important to note that the complement axiom is only defined for complements in the full space. Thus, we cannot say that
$$\overrightarrow{\underset{m}{a}}\wedge\overrightarrow{\underset{k}{b}} \;=\; \overrightarrow{\underset{m}{a}\vee\underset{k}{b}}$$

A simple example using basis elements in a Euclidean point 3-space will suffice. Substituting basis elements and using the definitions for the vector-space complement shows that the left side is nonzero, while the right side is zero.

$$\overrightarrow{e_2{\wedge}e_3}\wedge\overrightarrow{e_3{\wedge}e_1} \;=\; e_1\wedge e_2 \;\ne\; \overrightarrow{(e_2{\wedge}e_3)\vee(e_3{\wedge}e_1)} \;=\; 0$$

The complement of an element bound through the origin


2009 9 3

Grassmann Algebra Book.nb

396

The complement of an element bound through the origin


To express the complement of a bound element in the n-plane in terms of the complement in the vector subspace we can use the complement axiom.

$$\overline{\mathcal{O}\wedge e_i} \;=\; \bar{\mathcal{O}}\vee\bar{e_i} \;=\; \left(\frac{1}{\sqrt{g}}\;e_1{\wedge}e_2{\wedge}\cdots{\wedge}e_n\right)\vee\left(-\,\mathcal{O}\wedge\frac{1}{\sqrt{g}}\sum_{j=1}^{n} g_{ij}\,\underline{e_j}\right)$$

Expanding the regressive product, only those terms survive from which the full basis n-element of the bound space can be assembled; what remains is precisely the vector space complement:

$$\overline{\mathcal{O}\wedge e_i} \;=\; \frac{1}{\sqrt{g}}\sum_{j=1}^{n} g_{ij}\,\underline{e_j} \;=\; \vec{e_i}$$

$$\overline{\mathcal{O}\wedge e_i} \;=\; \vec{e_i} \tag{5.51}$$

More generally, we can show that for a general m-vector, the complement of the m-vector bound through the origin in an n-plane is simply the complement of the m-vector in the vector subspace of the n-plane.

$$\overline{\mathcal{O}\wedge\underset{m}{a}} \;=\; \overrightarrow{\underset{m}{a}} \tag{5.52}$$

The complement of the complement of an m-vector


Let n be the dimension of a bound space. Then n−1 is the dimension of the vector subspace. Let $\underset{m}{a}$ be an m-vector in the bound space. Then the complement of the complement of $\underset{m}{a}$ is given by

$$\overline{\overline{\underset{m}{a}}} \;=\; (-1)^{m(n-m)}\,\underset{m}{a}$$

The vector space complement of the vector space complement of $\underset{m}{a}$ in the vector subspace is

$$\overrightarrow{\overrightarrow{\underset{m}{a}}} \;=\; (-1)^{m(n-1-m)}\,\underset{m}{a}$$

Hence

$$\overline{\overline{\underset{m}{a}}} \;=\; (-1)^m\,\overrightarrow{\overrightarrow{\underset{m}{a}}} \tag{5.53}$$

We can verify this using the formulae derived in the previous section.

$$\overline{\underset{m}{a}} \;=\; (-1)^m\,\mathcal{O}\wedge\overrightarrow{\underset{m}{a}}$$

$$\overline{\overline{\underset{m}{a}}} \;=\; (-1)^m\,\overline{\mathcal{O}\wedge\overrightarrow{\underset{m}{a}}} \;=\; (-1)^m\,\overrightarrow{\overrightarrow{\underset{m}{a}}}$$
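Formula 5.53 is pure sign arithmetic and can be checked for all grades and dimensions at once:

```python
# In a bound space of dimension n the double complement gives (-1)^(m(n-m));
# in its vector subspace (dimension n-1) it gives (-1)^(m(n-1-m)).
# The two signs always differ by exactly (-1)^m, as formula 5.53 states.
def double_complement_sign(m, dim):
    return (-1) ** (m * (dim - m))

for n in range(2, 8):             # n = dimension of the bound space
    for m in range(n):
        assert double_complement_sign(m, n) == \
               (-1) ** m * double_complement_sign(m, n - 1)
```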

Calculating with vector space complements


Entering a vector space complement
To enter a vector space complement of a Grassmann expression X in GrassmannAlgebra you can either use the GrassmannAlgebra palette by selecting the expression X and clicking the button

, or simply enter OverVector[X] directly.


OverVector@e1 e2 D e1 e2

Simplifying a vector space complement


If the basis of your currently declared space does not contain the origin ", then the vector space complement (OverVector) operation is equivalent to the normal (OverBar) operation, and ConvertComplements will treat them as the same.
!3 ; ConvertComplementsA9e1 , x, e1 , x =E 8e2 e3 , x, e2 e3 , x<

If, on the other hand, the basis of your currently declared space does contain the origin ", then ConvertComplements will convert any expressions containing OverVector complements to their equivalent OverBar forms.
"3 ; ConvertComplementsA9e1 , x, e1 , x=E 9e2 e3 , # x, - H# e2 e3 L, x=

The vector space complement of the origin



Note that the vector space complement of the origin, or any exterior product of elements involving the origin is undefined, and will be left unevaluated by ConvertComplements.
"3 ; ConvertComplementsB:", " e1 , " + x, " + e1 >F 9#, # e1 , # x + #, # + e2 e3 =

5.11 Complements of bound elements


The Euclidean complement of a point in the plane
Now suppose we are working in the Euclidean plane with a basis of one origin point " and two basis vectors e1 and e2 . Let us declare this basis then call for a palette of basis elements and their complements.
A; "2 ; ComplementPalette

Complement Palette
Basis 1 ! e1 e2 ! e1 ! e2 e1 e2 ! e1 e2 Complement ! e1 e2 e1 e2 - H! e2 L ! e1 e2 - e1 ! 1

The complement table tells us that each basis vector is orthogonal to the axis involving the other basis vector. For example, e2 is orthogonal to the axis $\mathcal{O}\wedge e_1$. Suppose now we take a general vector x = a e1 + b e2. The complement of this vector is:

$$\bar{x} \;=\; \overline{a\,e_1 + b\,e_2} \;=\; -\,a\,\mathcal{O}{\wedge}e_2 + b\,\mathcal{O}{\wedge}e_1 \;=\; \mathcal{O}\wedge(b\,e_1 - a\,e_2)$$


Again, this is an axis (bound vector) through the origin orthogonal to the vector x. Now let us take a general point P = $\mathcal{O}$ + x and explore what element is orthogonal to this point.

$$\bar{P} \;=\; \overline{\mathcal{O} + x} \;=\; e_1{\wedge}e_2 + \mathcal{O}\wedge(b\,e_1 - a\,e_2)$$

The effect of adding the bivector e1∧e2 to $\bar{x}$ is to shift the bound vector $\mathcal{O}\wedge(b\,e_1 - a\,e_2)$ parallel to itself. We can factor e1∧e2 into the exterior product of two vectors, one parallel to b e1 − a e2.

$$e_1{\wedge}e_2 \;=\; z\wedge(b\,e_1 - a\,e_2)$$

Now we can write $\bar{P}$ as:

$$\bar{P} \;=\; (\mathcal{O} + z)\wedge(b\,e_1 - a\,e_2)$$

The vector z is the position vector of any point on the line defined by the bound vector $\bar{P}$. A particular point of interest is the point on the line closest to the point P or the origin. The position vector z would then be a vector orthogonal to the direction of the line. Thus we can write e1∧e2 as:

$$e_1{\wedge}e_2 \;=\; -\,\frac{(a\,e_1 + b\,e_2)\wedge(b\,e_1 - a\,e_2)}{a^2 + b^2}$$

The final expression for the complement of the point P = $\mathcal{O}$ + x = $\mathcal{O}$ + a e1 + b e2 can then be written as a bound vector in a direction orthogonal to the position vector of P.

$$\bar{P} \;=\; \left(\mathcal{O} - \frac{a\,e_1 + b\,e_2}{a^2 + b^2}\right)\wedge(b\,e_1 - a\,e_2) \;=\; P^{*}\wedge(b\,e_1 - a\,e_2) \tag{5.54}$$

This formula turns out to be a special case of a more general formula which we will derive later in the section, and for which we now propose some convenient notation.

$$P \;=\; \mathcal{O} + x, \qquad P^{*} \;=\; \mathcal{O} + x^{*}, \qquad x^{*} \;=\; -\,\frac{x}{x^2} \tag{5.55}$$

We call the point P* the inverse point to P. Inverse points are situated on the same line through the origin, and on opposite sides of it. The product of their distances from the origin is unity.
Remembering that the vector space complement of the position vector x is denoted $\vec{x}$, we can now rewrite our derived formula in a more condensed form.

$$\bar{P} \;=\; P^{*}\wedge\left(-\,\vec{x}\,\right) \tag{5.56}$$
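The defining property of inverse points — that x* = −x/x² is collinear with x, lies on the opposite side of the origin, and that the product of the two distances from the origin is unity — is easy to verify numerically:

```python
# Inverse point x* = -x / x^2: collinear, opposite side, |x| * |x*| = 1.
import math, random
random.seed(3)

x = [random.uniform(0.5, 2.0), random.uniform(0.5, 2.0)]
x2 = sum(c * c for c in x)
x_star = [-c / x2 for c in x]

assert math.isclose(math.hypot(*x) * math.hypot(*x_star), 1.0)
assert x_star[0] * x[0] < 0                                # opposite sides of the origin
assert abs(x_star[0] * x[1] - x_star[1] * x[0]) < 1e-12    # collinear with x
```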



The Euclidean complement of a point in the plane

Of course, since the bound space of a plane is a three-dimensional linear space, the complement of the complement of a point is the point itself, just like the complement of a complement of a vector in a vector 3-space is the vector itself. To verify this in GrassmannAlgebra we can use ConvertComplements on the formula derived in terms of basis elements above (but because the complement operation is a Protected function, we need to use the extra symbol ^ before the equals sign).
P ^= " Ha e1 + b e2 L a2 + b2 Hb e1 - a e2 L; ConvertComplements@PD

# + a e1 + b e2

The Euclidean complement of a point in a point 3-space


To indicate the generality of the method, we undertake the same derivation in a Euclidean point 3-space.


"3 ; ComplementPalette

Complement Palette
Basis 1 ! e1 e2 e3 ! e1 ! e2 ! e3 e1 e2 e1 e3 e2 e3 ! e1 e2 ! e1 e3 ! e2 e3 e1 e2 e3 ! e1 e2 e3 Complement ! e1 e2 e3 e1 e2 e3 - H! e2 e3 L ! e1 e3 - H! e1 e2 L e2 e3 - He1 e3 L e1 e2 ! e3 - H! e2 L ! e1 e3 - e2 e1 - ! 1

Each basis vector is orthogonal to the coordinate plane involving the other basis vectors. For example, e2 is orthogonal to the plane $\mathcal{O}\wedge e_1\wedge e_3$. Suppose now we take a general vector x = a e1 + b e2 + c e3. The complement of this vector is:

$$\bar{x} \;=\; \overline{a\,e_1 + b\,e_2 + c\,e_3} \;=\; -\,a\,\mathcal{O}{\wedge}e_2{\wedge}e_3 + b\,\mathcal{O}{\wedge}e_1{\wedge}e_3 - c\,\mathcal{O}{\wedge}e_1{\wedge}e_2 \;=\; \mathcal{O}\wedge(-\,a\,e_2{\wedge}e_3 + b\,e_1{\wedge}e_3 - c\,e_1{\wedge}e_2)$$

Again, this is a plane (bound bivector) through the origin orthogonal to the vector x. The complement of the general point P = $\mathcal{O}$ + x is

$$\bar{P} \;=\; \overline{\mathcal{O} + x} \;=\; e_1{\wedge}e_2{\wedge}e_3 + \mathcal{O}\wedge(-\,a\,e_2{\wedge}e_3 + b\,e_1{\wedge}e_3 - c\,e_1{\wedge}e_2)$$

The effect of adding the trivector e1 e2 e3 to x is to shift the bound bivector " H- a e2 e3 + b e1 e3 - c e1 e2 L parallel to itself. We can factor e1 e2 e3 into the exterior product of a vector and a bivector congruent to - a e2 e3 + b e1 e3 - c e1 e2 .
$$e_1{\wedge}e_2{\wedge}e_3 \;=\; z\wedge(-\,a\,e_2{\wedge}e_3 + b\,e_1{\wedge}e_3 - c\,e_1{\wedge}e_2)$$

Now we can write $\bar{P}$ as:

$$\bar{P} \;=\; (\mathcal{O} + z)\wedge(-\,a\,e_2{\wedge}e_3 + b\,e_1{\wedge}e_3 - c\,e_1{\wedge}e_2)$$

The vector z is the position vector of any point on the plane defined by the bound bivector $\bar{P}$. A particular point of interest is the point on the plane closest to the point P or the origin. The position vector z would then be a vector orthogonal to the plane. Thus we can write e1∧e2∧e3 as:

$$e_1{\wedge}e_2{\wedge}e_3 \;=\; -\,\frac{(a\,e_1 + b\,e_2 + c\,e_3)\wedge(-\,a\,e_2{\wedge}e_3 + b\,e_1{\wedge}e_3 - c\,e_1{\wedge}e_2)}{a^2 + b^2 + c^2}$$

The final expression for the complement of the point P = $\mathcal{O}$ + x = $\mathcal{O}$ + a e1 + b e2 + c e3 can then be written as a bound bivector in a direction orthogonal to the position vector of P.

$$\bar{P} \;=\; \left(\mathcal{O} - \frac{a\,e_1 + b\,e_2 + c\,e_3}{a^2 + b^2 + c^2}\right)\wedge(-\,a\,e_2{\wedge}e_3 + b\,e_1{\wedge}e_3 - c\,e_1{\wedge}e_2) \tag{5.57}$$

From this result we see that our condensed formula is also valid in a point 3-space.

$$\bar{P} \;=\; P^{*}\wedge\left(-\,\vec{x}\,\right) \tag{5.58}$$


The Euclidean complement of a point in a point 3-space

In a point 3-space, the complement of the complement of a point is the negative of the point itself, or equivalently, the point with a weight of -1.
P = $\mathcal{O}$ + a e1 + b e2 + c e3; ConvertComplements[$\bar{\bar{P}}$]
− $\mathcal{O}$ − a e1 − b e2 − c e3

The complement of a bound element


We now come to the point where we can determine a formula which expresses the complement of a general bound element in an n-plane in terms of a special point and the complement in the vector subspace of the n-plane. Consider a bound simple m-vector $P\wedge\underset{m}{a}$ in an n-plane, where the position vector x of the point P is orthogonal to $\underset{m}{a}$, that is, to each of the 1-element factors of $\underset{m}{a}$.

$$P\wedge\underset{m}{a} \;=\; (\mathcal{O} + x)\wedge\underset{m}{a} \;=\; \mathcal{O}\wedge\underset{m}{a} \;+\; x\wedge\underset{m}{a}$$

From the Common Factor Theorem we have that

$$\left(x\wedge\underset{m}{a}\right)\vee\bar{x} \;=\; (x\wedge a_1\wedge a_2\wedge\cdots\wedge a_m)\vee\bar{x}$$

$$=\; (x\vee\bar{x})\,(a_1\wedge a_2\wedge\cdots\wedge a_m) \;-\; (a_1\vee\bar{x})\,(x\wedge a_2\wedge a_3\wedge\cdots\wedge a_m) \;+\; (a_2\vee\bar{x})\,(x\wedge a_1\wedge a_3\wedge\cdots\wedge a_m) \;-\;\cdots\;+\;(-1)^m\,(a_m\vee\bar{x})\,(x\wedge a_1\wedge a_2\wedge\cdots\wedge a_{m-1})$$

But since we have chosen x to be orthogonal to each of the $a_i$, the terms $a_i\vee\bar{x}$ are zero, so that

$$\left(x\wedge\underset{m}{a}\right)\vee\bar{x} \;=\; (x\vee\bar{x})\,\underset{m}{a}$$

By taking the complement of this expression, we can manipulate it into the form we want. First apply the complement axiom.

$$\overline{x\wedge\underset{m}{a}}\wedge\bar{\bar{x}} \;=\; (x\vee\bar{x})\,\overline{\underset{m}{a}}$$

Then the complement of a complement axiom on $\bar{\bar{x}}$ gives $\bar{\bar{x}} = (-1)^{n-1}\,x$. Now, the regressive product of an element with its complement is a scalar which, as we will see in the next chapter, is the scalar product of the element with itself. This scalar product may also be written as the square of the magnitude (measure) of the element, $x^2$. Hence

$$x\vee\bar{x} \;=\; x^2$$

Substituting gives

$$\overline{x\wedge\underset{m}{a}}\wedge x \;=\; (-1)^{n-1}\,x^2\;\overline{\underset{m}{a}}$$

On the left side we can change the order of the factors

$$\overline{x\wedge\underset{m}{a}}\wedge x \;=\; (-1)^{n-1+m}\;x\wedge\overline{x\wedge\underset{m}{a}}$$

Collecting these results gives

$$(-1)^m\;x\wedge\overline{x\wedge\underset{m}{a}} \;=\; x^2\;\overline{\underset{m}{a}}$$

We now need to convert these complements to vector space complements.

$$\overline{x\wedge\underset{m}{a}} \;=\; (-1)^{m+1}\,\mathcal{O}\wedge\overrightarrow{x\wedge\underset{m}{a}} \qquad\qquad \overline{\underset{m}{a}} \;=\; (-1)^m\,\mathcal{O}\wedge\overrightarrow{\underset{m}{a}}$$

Substituting and factoring out the origin $\mathcal{O}$ we get

$$\mathcal{O}\wedge\left(x\wedge\overrightarrow{x\wedge\underset{m}{a}}\right) \;=\; \mathcal{O}\wedge\left((-1)^m\,x^2\,\overrightarrow{\underset{m}{a}}\right)$$

Extracting the vectorial factors yields

$$x\wedge\overrightarrow{x\wedge\underset{m}{a}} \;=\; (-1)^m\,x^2\,\overrightarrow{\underset{m}{a}}$$

We can now return to the formula for the bound element under consideration and take its complement.

$P\wedge\underset{m}{\alpha} \equiv (\mathcal{O}+x)\wedge\underset{m}{\alpha} \equiv \mathcal{O}\wedge\underset{m}{\alpha} + x\wedge\underset{m}{\alpha}$

$\overline{P\wedge\underset{m}{\alpha}} \equiv \overline{\mathcal{O}\wedge\underset{m}{\alpha}} + \overline{x\wedge\underset{m}{\alpha}} \equiv \overline{\underset{m}{\alpha}}^{\,v} + (-1)^{m+1}\;\mathcal{O}\wedge\overline{x\wedge\underset{m}{\alpha}}^{\,v}$

Substituting for $\overline{\underset{m}{\alpha}}^{\,v}$ from our derivation above gives

$\overline{P\wedge\underset{m}{\alpha}} \equiv (-1)^{m}\,\frac{x}{x^{2}}\wedge\overline{x\wedge\underset{m}{\alpha}}^{\,v} + (-1)^{m+1}\;\mathcal{O}\wedge\overline{x\wedge\underset{m}{\alpha}}^{\,v} \equiv \left(\mathcal{O} - \frac{x}{x^{2}}\right)\wedge\left(-\,\overline{\underset{m}{\alpha}\wedge x}^{\,v}\right)$

$\overline{P\wedge\underset{m}{\alpha}} \equiv \left(\mathcal{O} - \frac{x}{x^{2}}\right)\wedge\left(-\,\overline{\underset{m}{\alpha}\wedge x}\right),\qquad P \equiv \mathcal{O}+x,\qquad \underset{m}{\alpha}\circ x \equiv 0$   (5.59)

This formula holds only under the condition that the position vector x of the point P is orthogonal to $\underset{m}{\alpha}$. Here, we have preempted the notation of the next chapter by using $\underset{m}{\alpha}\circ x \equiv 0$ to express this condition.

The complement of a point

As a check, we see that putting $\underset{m}{\alpha}$ equal to unity in this formula gives us the formula we have already induced in the previous section.

$\overline{P} \equiv \left(\mathcal{O} - \frac{x}{x^{2}}\right)\wedge\left(-\,\overline{x}\right),\qquad P \equiv \mathcal{O}+x$

Euclidean complements of bound elements

The complement of a bound vector

$\overline{P\wedge\alpha} \equiv \left(\mathcal{O} - \frac{x}{x^{2}}\right)\wedge\overline{x\wedge\alpha},\qquad P \equiv \mathcal{O}+x,\qquad \alpha\circ x \equiv 0$   (5.60)

As a specific example, let x be $e_1$, and α be $e_2$. In the plane, the complement of this bound vector becomes a point.

$\overline{P\wedge e_2} \equiv (\mathcal{O}-e_1)\wedge\overline{e_1\wedge e_2} \equiv (\mathcal{O}-e_1)\,(1) \equiv \mathcal{O}-e_1$

The Euclidean complement of a bound vector in the plane

In a point 3-space, the complement of this bound vector becomes a second bound vector (orthogonal to the first one).

$\overline{P\wedge e_2} \equiv (\mathcal{O}-e_1)\wedge\overline{e_1\wedge e_2} \equiv (\mathcal{O}-e_1)\wedge e_3$

The Euclidean complement of a bound vector in space


The complement of a bound bivector

$\overline{P\wedge\alpha\wedge\beta} \equiv \left(\mathcal{O} - \frac{x}{x^{2}}\right)\wedge\overline{x\wedge\alpha\wedge\beta},\qquad P \equiv \mathcal{O}+x,\qquad (\alpha\wedge\beta)\circ x \equiv 0$   (5.61)

Again, let x be $e_1$ and α be $e_2$; and put β to be $e_3$. In a point 3-space, the complement of this bound bivector becomes a point.

$\overline{P\wedge e_2\wedge e_3} \equiv (\mathcal{O}-e_1)\wedge\overline{e_1\wedge e_2\wedge e_3} \equiv (\mathcal{O}-e_1)\,(1) \equiv \mathcal{O}-e_1$

The Euclidean complement of a bound bivector in space

The regressive product of point complements

In the sections above, we have explored the simplest cases of complements of bound elements. There is another way to compose these results if we express the bound element as an exterior product of bound elements and then apply the complement axiom to obtain a regressive product of complements of bound elements. We look at the simplest case first: that of the complement of a bound vector expressed as the product of two points. Let

$L \equiv P\wedge Q \qquad\qquad \overline{L} \equiv \overline{P\wedge Q} \equiv \overline{P}\vee\overline{Q}$

In geometric terms, this can be interpreted as saying that the complement of a line defined by two points is the intersection of the complements of the points. The simplest case is in the plane, where the complements of the points are themselves lines, and whose intersection is a point.

$\overline{P}\vee\overline{Q} \equiv \overline{P\wedge Q} \equiv R \qquad\qquad \overline{R} \equiv P\wedge Q$

The regressive product of point complements

In a point 3-space, the same relations hold, but their interpretations are different. The complement $\overline{P}$ of a point is a bound bivector as we have depicted earlier. The complements of two different points P and Q yield two distinct bound bivectors which intersect in a line R, say. And the complement of this line R is the line passing through the points P and Q.

The complement of a bound bivector can be composed as the regressive product of the complements of three points. In a point 3-space, this results in the intersection of three bound bivectors yielding a point. The complement of this point is a bound bivector passing through the original three points.

The simple examples explored in the sections above are straightforwardly extended mutatis mutandis to higher dimensional spaces, but of course they are more challenging to depict!


5.12 Reciprocal Bases

Reciprocal bases

Up to this point we have only considered one basis for $\mathrm{L}_1$, which we have denoted with subscripted indices, for example $e_i$. We will refer to this basis as the standard basis. Introducing a second basis reciprocal to this standard basis enables us to write the formulae for complements in a more symmetric way. This section will summarize formulae relating basis elements and their cobases and complements in terms of reciprocal bases. For simplicity, we adopt the Einstein summation convention.

In $\mathrm{L}_1$ the metric tensor $g_{ij}$ forms the relationship between the reciprocal bases.

$e_i \equiv g_{ij}\;e^{j}$   (5.62)

This relationship induces a metric tensor $\underset{m}{g_{ij}}$ on $\mathrm{L}_m$.

$\underset{m}{e_i} \equiv \underset{m}{g_{ij}}\;\underset{m}{e^{j}}$   (5.63)
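As a numerical illustration of formula 5.62 — a plain numpy sketch, independent of the book's GrassmannAlgebra package, with the interior product of 1-elements read as the ordinary dot product in the ambient Euclidean space — we can construct the reciprocal of a concrete basis and verify the defining relations. The particular basis vectors below are arbitrary choices for the example.

```python
import numpy as np

# A (non-orthogonal) basis of R^3: rows are the basis vectors e_1, e_2, e_3.
basis = np.array([[1.0, 0.0, 0.0],
                  [1.0, 1.0, 0.0],
                  [0.0, 1.0, 2.0]])

# Metric tensor g_ij = e_i . e_j
g = basis @ basis.T

# Reciprocal basis e^i = g^ij e_j, where g^ij is the inverse of g_ij.
reciprocal = np.linalg.inv(g) @ basis

# e_i . e^j = delta_i^j : each basis vector is orthogonal to all but its
# corresponding reciprocal vector, and normalized against that one.
print(np.allclose(basis @ reciprocal.T, np.eye(3)))

# e_i = g_ij e^j  (formula 5.62)
print(np.allclose(g @ reciprocal, basis))
```

Both checks print True; the same construction works for any nonsingular basis matrix.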

The complement of a basis element

The complement of a basis 1-element of the standard basis has already been defined by:

$\overline{e_i} \equiv \frac{1}{\sqrt{g}}\;g_{ij}\;\underline{e_j}$   (5.64)

Here $g \equiv \left|g_{ij}\right|$ is the determinant of the metric tensor, and $\underline{e_j}$ denotes the cobasis element of $e_j$. Substituting formula 5.62 into formula 5.64 gives:

$\overline{e^{i}} \equiv \frac{1}{\sqrt{g}}\;\underline{e_i}$   (5.65)

The reciprocal relation to this is:

$\overline{e_i} \equiv \sqrt{g}\;\underline{e^{i}}$   (5.66)

These formulae can be extended to basis elements of $\mathrm{L}_m$.

$\overline{\underset{m}{e_i}} \equiv \sqrt{g}\;\underline{\underset{m}{e^{i}}}$   (5.67)

$\overline{\underset{m}{e^{i}}} \equiv \frac{1}{\sqrt{g}}\;\underline{\underset{m}{e_i}}$   (5.68)

Particular cases of these formulae are:

$\overline{1} \equiv \frac{1}{\sqrt{g}}\;e_1\wedge e_2\wedge\cdots\wedge e_n \equiv \sqrt{g}\;e^{1}\wedge e^{2}\wedge\cdots\wedge e^{n}$   (5.69)

$\overline{e_1\wedge e_2\wedge\cdots\wedge e_n} \equiv \sqrt{g}$   (5.70)

$\overline{e^{1}\wedge e^{2}\wedge\cdots\wedge e^{n}} \equiv \frac{1}{\sqrt{g}}$   (5.71)

$\overline{e_i} \equiv \sqrt{g}\;(-1)^{i-1}\;e^{1}\wedge\cdots\widehat{e^{i}}\cdots\wedge e^{n}$   (5.72)

$\overline{e^{i}} \equiv \frac{1}{\sqrt{g}}\;(-1)^{i-1}\;e_1\wedge\cdots\widehat{e_i}\cdots\wedge e_n$   (5.73)

$\overline{e_{i_1}\wedge\cdots\wedge e_{i_m}} \equiv \sqrt{g}\;(-1)^{K_m}\;e^{1}\wedge\cdots\widehat{e^{i_1}}\cdots\widehat{e^{i_m}}\cdots\wedge e^{n}$   (5.74)

$\overline{e^{i_1}\wedge\cdots\wedge e^{i_m}} \equiv \frac{1}{\sqrt{g}}\;(-1)^{K_m}\;e_1\wedge\cdots\widehat{e_{i_1}}\cdots\widehat{e_{i_m}}\cdots\wedge e_n$   (5.75)

where $K_m = \sum_{\gamma=1}^{m} i_\gamma + \frac{1}{2}\,m\,(m+1)$ and the symbol $\widehat{\ }$ means the corresponding element is missing from the product.

The complement of a cobasis element

The cobasis of a general basis element was developed in Section 2.6 as:

$\underline{\underset{m}{e_i}} \equiv (-1)^{K_m}\;e_1\wedge\cdots\widehat{e_{i_1}}\cdots\widehat{e_{i_m}}\cdots\wedge e_n$

where $K_m \equiv \sum_{\gamma=1}^{m} i_\gamma + \frac{1}{2}\,m\,(m+1)$.

Taking the complement of this formula and applying formula 5.74 gives the required result.

$\overline{\underline{\underset{m}{e_i}}} \equiv (-1)^{K_m}\;\overline{e_1\wedge\cdots\widehat{e_{i_1}}\cdots\widehat{e_{i_m}}\cdots\wedge e_n} \equiv \sqrt{g}\;(-1)^{m(n-m)}\;e^{i_1}\wedge\cdots\wedge e^{i_m}$

$\overline{\underline{\underset{m}{e_i}}} \equiv \sqrt{g}\;(-1)^{m(n-m)}\;\underset{m}{e^{i}}$   (5.76)

Similarly:

$\overline{\underline{\underset{m}{e^{i}}}} \equiv \frac{1}{\sqrt{g}}\;(-1)^{m(n-m)}\;\underset{m}{e_i}$   (5.77)

The complement of a complement of a basis element

To verify the complement of a complement axiom, we begin with the equation for the complement of a standard basis m-element, take the complement of the equation, and then substitute for $\overline{\underline{\underset{m}{e^{i}}}}$ from equation 5.77.

$\overline{\underset{m}{e_i}} \equiv \sqrt{g}\;\underline{\underset{m}{e^{i}}}$

$\overline{\overline{\underset{m}{e_i}}} \equiv \sqrt{g}\;\overline{\underline{\underset{m}{e^{i}}}} \equiv \sqrt{g}\left(\frac{1}{\sqrt{g}}\;(-1)^{m(n-m)}\;\underset{m}{e_i}\right) \equiv (-1)^{m(n-m)}\;\underset{m}{e_i}$

A similar result is obtained for the complement of the complement of a reciprocal basis element.

The exterior product of basis elements

The exterior product of a basis element with the complement of the corresponding basis element in the reciprocal basis is equal to the unit n-element. The exterior product with the complement of any other basis element in the reciprocal basis is zero.

$\underset{m}{e_i}\wedge\overline{\underset{m}{e^{j}}} \equiv \underset{m}{e_i}\wedge\left(\frac{1}{\sqrt{g}}\;\underline{\underset{m}{e_j}}\right) \equiv \underset{m}{\delta_i^{\,j}}\;\overline{1}$

$\underset{m}{e^{i}}\wedge\overline{\underset{m}{e_j}} \equiv \underset{m}{e^{i}}\wedge\left(\sqrt{g}\;\underline{\underset{m}{e^{j}}}\right) \equiv \underset{m}{\delta_i^{\,j}}\;\overline{1}$

$\underset{m}{e_i}\wedge\overline{\underset{m}{e^{j}}} \equiv \underset{m}{\delta_i^{\,j}}\;\overline{1} \qquad\qquad \underset{m}{e^{i}}\wedge\overline{\underset{m}{e_j}} \equiv \underset{m}{\delta_i^{\,j}}\;\overline{1}$   (5.78)

The exterior product of a basis element with the complement of any other basis element in the same basis is equal to the corresponding component of the metric tensor times the unit n-element.

$\underset{m}{e_i}\wedge\overline{\underset{m}{e_j}} \equiv \underset{m}{e_i}\wedge\left(\underset{m}{g_{jk}}\;\overline{\underset{m}{e^{k}}}\right) \equiv \underset{m}{g_{jk}}\;\underset{m}{\delta_i^{\,k}}\;\overline{1} \equiv \underset{m}{g_{ij}}\;\overline{1}$

$\underset{m}{e^{i}}\wedge\overline{\underset{m}{e^{j}}} \equiv \underset{m}{e^{i}}\wedge\left(\underset{m}{g^{jk}}\;\overline{\underset{m}{e_k}}\right) \equiv \underset{m}{g^{jk}}\;\underset{m}{\delta_i^{\,k}}\;\overline{1} \equiv \underset{m}{g^{ij}}\;\overline{1}$

$\underset{m}{e_i}\wedge\overline{\underset{m}{e_j}} \equiv \underset{m}{g_{ij}}\;\overline{1} \qquad\qquad \underset{m}{e^{i}}\wedge\overline{\underset{m}{e^{j}}} \equiv \underset{m}{g^{ij}}\;\overline{1}$   (5.79)

In $\mathrm{L}_1$ these reduce to:

$e_i\wedge\overline{e^{j}} \equiv \delta_i^{\,j}\;\overline{1} \qquad\qquad e^{i}\wedge\overline{e_j} \equiv \delta_i^{\,j}\;\overline{1}$   (5.80)

$e_i\wedge\overline{e_j} \equiv g_{ij}\;\overline{1} \qquad\qquad e^{i}\wedge\overline{e^{j}} \equiv g^{ij}\;\overline{1}$   (5.81)
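The scalar content of formulae 5.80 and 5.81 for 1-elements is easy to check numerically, again reading the products of concrete basis vectors as Euclidean dot products (a plain numpy sketch, independent of the GrassmannAlgebra package; the basis chosen is arbitrary):

```python
import numpy as np

# Concrete non-orthogonal basis of R^2: rows are e_1, e_2.
basis = np.array([[2.0, 0.0],
                  [1.0, 1.0]])

g = basis @ basis.T                    # metric tensor g_ij = e_i . e_j
reciprocal = np.linalg.inv(g) @ basis  # reciprocal basis e^i = g^ij e_j

# e_i . e^j = delta_i^j   (the scalar counterpart of formula 5.80)
assert np.allclose(basis @ reciprocal.T, np.eye(2))

# e_i . e_j = g_ij  and  e^i . e^j = g^ij   (counterparts of formula 5.81)
assert np.allclose(basis @ basis.T, g)
assert np.allclose(reciprocal @ reciprocal.T, np.linalg.inv(g))
print("ok")
```

The last assertion reflects the fact that the metric tensor of the reciprocal basis is the inverse of the metric tensor of the standard basis.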

The regressive product of basis elements

In the series of formulae below we have repeated the formulae derived for the exterior product and shown the equivalent for the regressive product in the same box for comparison. We have used the fact that $\underset{m}{\delta_i^{\,j}}$ and $\underset{m}{g^{ij}}$ are symmetric to interchange the indices.

$\underset{m}{e_i}\wedge\overline{\underset{m}{e^{j}}} \equiv \underset{m}{\delta_i^{\,j}}\;\overline{1} \qquad\qquad \underset{m}{e_i}\vee\overline{\underset{m}{e^{j}}} \equiv \underset{m}{\delta_i^{\,j}}$   (5.82)

$\underset{m}{e^{i}}\wedge\overline{\underset{m}{e_j}} \equiv \underset{m}{\delta_i^{\,j}}\;\overline{1} \qquad\qquad \underset{m}{e^{i}}\vee\overline{\underset{m}{e_j}} \equiv \underset{m}{\delta_i^{\,j}}$   (5.83)

$\underset{m}{e_i}\wedge\overline{\underset{m}{e_j}} \equiv \underset{m}{g_{ij}}\;\overline{1} \qquad\qquad \underset{m}{e_i}\vee\overline{\underset{m}{e_j}} \equiv \underset{m}{g_{ij}}$   (5.84)

$\underset{m}{e^{i}}\wedge\overline{\underset{m}{e^{j}}} \equiv \underset{m}{g^{ij}}\;\overline{1} \qquad\qquad \underset{m}{e^{i}}\vee\overline{\underset{m}{e^{j}}} \equiv \underset{m}{g^{ij}}$   (5.85)

The complement of a simple element is simple

The expression for the complement of a basis element in terms of its cobasis element in the reciprocal basis allows us a straightforward proof that the complement of a simple element is simple. To show this, suppose a simple element $\underset{m}{\alpha}$ expressed in terms of its factors $a_i$.

$\underset{m}{\alpha} \equiv a_1\wedge a_2\wedge\cdots\wedge a_m$

Take any other n−m 1-elements $a_{m+1}, a_{m+2}, \ldots, a_n$ such that $a_1, a_2, \ldots, a_m, a_{m+1}, \ldots, a_n$ form an independent set, and thus a basis of $\mathrm{L}_1$. Then by equation 5.67 we have:

$\overline{\underset{m}{\alpha}} \equiv \overline{a_1\wedge a_2\wedge\cdots\wedge a_m} \equiv \sqrt{g_a}\;a^{m+1}\wedge a^{m+2}\wedge\cdots\wedge a^{n}$

where $g_a$ is the determinant of the metric tensor in the a basis, and the $a^{i}$ are the elements of the basis reciprocal to the $a_i$. The complement of $a_1\wedge a_2\wedge\cdots\wedge a_m$ is thus a scalar multiple of $a^{m+1}\wedge a^{m+2}\wedge\cdots\wedge a^{n}$. Since this is evidently simple, the assertion is proven.

5.13 Summary

In this chapter we have shown that by defining the complement operation on a basis of $\mathrm{L}_1$ as

$\overline{e_i} \equiv c\,\sum_{j=1}^{n} g_{ij}\;\underline{e_j}$

and accepting the complement axiom

$\overline{\underset{m}{\alpha}\wedge\underset{k}{\beta}} \equiv \overline{\underset{m}{\alpha}}\vee\overline{\underset{k}{\beta}}$

as the mechanism for extending the complement to higher grade elements, the requirement that

$\overline{\overline{\underset{m}{\alpha}}} \equiv (-1)^{m(n-m)}\;\underset{m}{\alpha}$

constrains $g_{ij}$ to be symmetric and the scalar factor c to have the value $\pm\frac{1}{\sqrt{g}}$, where g is the determinant of the metric tensor $g_{ij}$.

The metric tensor $g_{ij}$ was introduced as a mapping between the two linear spaces $\mathrm{L}_1$ and $\mathrm{L}_{n-1}$.

To this point none of these concepts has yet been related to notions of interior, inner or scalar products. This will be addressed in the next chapter, where we will also see that the constraint $c = \pm\frac{1}{\sqrt{g}}$ is equivalent to requiring that the magnitude of the unit n-element is unity.

Once we have constrained $g_{ij}$ to be symmetric, we are able to introduce the standard notion of a reciprocal basis. The formulae for complements of basis elements in any of the $\mathrm{L}_m$ then become much simpler to express.

This chapter also discusses how the complement operation can be interpreted geometrically both in vector spaces and in bound spaces. In both spaces the complement axiom holds, but its geometric interpretation in the two spaces is quite different.

6 The Interior Product

6.1 Introduction

To this point we have defined three important operations in the Grassmann algebra: the exterior product, the regressive product, and the complement. In this chapter we introduce the interior product, the fourth operation of fundamental importance to the algebra.

The interior product of two elements is defined as the regressive product of one element with the complement of the other. Whilst the exterior product of an m-element and a k-element generates an (m+k)-element, the interior product of an m-element and a k-element (m ≥ k) generates an (m−k)-element. This means that the interior product of two elements of equal grade is a 0-element (or scalar).

The interior product of an element with itself is a scalar, and it is this scalar that is used to define the measure of the element. The interior product of two 1-elements corresponds to the usual notion of inner, scalar, or dot product. But we will see that the notion of measure is not restricted to 1-elements. Just as one may associate the measure of a vector with a length, the measure of a bivector may be associated with an area, and the measure of a trivector with a volume.

If the exterior product of an m-element and a 1-element is zero, then it is known that the 1-element is contained in the m-element. If the interior product of an m-element and a 1-element is zero, then this means that the 1-element is contained in the complement of the m-element. In this case it may be said that the 1-element is orthogonal to the m-element.

The basing of the notion of interior product on the notions of regressive product and complement follows here the Grassmannian tradition rather than that of the current literature, which introduces the inner product onto a linear space as an arbitrary extra definition. We do this in the belief that it is the most straightforward way to obtain consistency within the algebra, to see and exploit the relationships between the notions of exterior product, regressive product, complement and interior product, and to discover and prove formulae relating them.

We use the term 'interior' in addition to 'inner' to signal that the products are not quite the same. In traditional usage the inner product has resulted in a scalar. The interior product is however more general, being able to operate on two elements of any, and perhaps different, grades. We reserve the term inner product for the interior product of two elements of the same grade. An inner product of two elements of grade 1 is called a scalar product. In sum: inner products are scalar whilst, in general, interior products are not.

We denote the interior product with a small circle with a 'bar' through it, rendered here as ∘. This is to signify that it has a more extended meaning than the inner product. Thus the interior product of $\underset{m}{\alpha}$ with $\underset{k}{\beta}$ becomes $\underset{m}{\alpha}\circ\underset{k}{\beta}$. This product is zero if m < k. Thus the order is important: for a non-zero product the element of higher grade should be on the left. The interior product has the same left associativity as the negation operator, or minus sign. It is possible to define both left and right interior products, but in practice the added complexity is not rewarded by an increase in utility.

We will see in Chapter 10: The Generalized Product that the interior product of two elements can be expressed as a certain generalized product, which is independent of the order of its factors.

6.2 Defining the Interior Product

Definition of the inner product

Grassmann defined the complement $\overline{\underset{m}{\alpha}}$ of an element $\underset{m}{\alpha}$ such that the regressive product of $\underset{m}{\alpha}$ by $\overline{\underset{m}{\alpha}}$, in that order, was always a non-negative scalar. That is, $\underset{m}{\alpha}\vee\overline{\underset{m}{\alpha}} \geq 0$ for any element of any grade m in a space of any dimension. As is well known, this non-negativeness is a naturally accepted property of Euclidean inner products.

This immediately suggests a definition for the inner product of an element with itself as $\underset{m}{\alpha}\vee\overline{\underset{m}{\alpha}}$, and, by extension, a formula for the inner product of any two m-elements as $\underset{m}{\alpha}\vee\overline{\underset{m}{\beta}}$.

The inner product of $\underset{m}{\alpha}$ and $\underset{m}{\beta}$ is denoted $\underset{m}{\alpha}\circ\underset{m}{\beta}$ and is defined by:

$\underset{m}{\alpha}\circ\underset{m}{\beta} \equiv \underset{m}{\alpha}\vee\overline{\underset{m}{\beta}},\qquad \underset{m}{\alpha}\circ\underset{m}{\beta}\in\mathrm{L}_0$   (6.1)

The result of taking the regressive product of $\underset{m}{\alpha}$ and $\overline{\underset{m}{\alpha}}$ in the reverse order introduces a potential change of sign.

$\overline{\underset{m}{\alpha}}\vee\underset{m}{\alpha} \equiv (-1)^{m(n-m)}\;\underset{m}{\alpha}\vee\overline{\underset{m}{\alpha}}$

Because of the form of Grassmann's definition of the complement, the element $\underset{m}{\alpha}\vee\overline{\underset{m}{\alpha}}$ is always a non-negative scalar. However, it is clear from the above that the element $\overline{\underset{m}{\alpha}}\vee\underset{m}{\alpha}$ may well not be, and furthermore that it may depend on the dimension of the space. Hence in the definition of the inner product it is important that the complemented element be the second factor. A scalar product is the inner product of two 1-elements.

Definition of the interior product

The fundamentals of Grassmann's algebra (based on the two products and the complement operation) are however strong enough for this expression to be meaningful for elements of any grade, leading to the definition of the interior product.

The interior product of $\underset{m}{\alpha}$ and $\underset{k}{\beta}$ is denoted $\underset{m}{\alpha}\circ\underset{k}{\beta}$ and is defined by:

$\underset{m}{\alpha}\circ\underset{k}{\beta} \equiv \underset{m}{\alpha}\vee\overline{\underset{k}{\beta}},\qquad \underset{m}{\alpha}\circ\underset{k}{\beta}\in\mathrm{L}_{m-k},\qquad m\geq k$   (6.2)

Thus, the interior product does not depend on the dimension of the space, since the dependences of the regressive product and the complement operations on the dimension 'cancel' each other out. This makes it, like the exterior product, of special importance in the algebra.

If m < k, then $\underset{m}{\alpha}\circ\underset{k}{\beta}$ is necessarily zero (otherwise the grade of the product would be negative), hence:

$\underset{m}{\alpha}\circ\underset{k}{\beta} \equiv 0,\qquad m < k$   (6.3)

An important convention

In order to avoid unnecessarily distracting caveats on every formula involving interior products, in the rest of this book we will suppose that the grade of the first factor is always greater than or equal to the grade of the second factor. The formulae will remain true even if this is not the case, but they will be trivially so by virtue of their terms reducing to zero.
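The definition of the interior product as a regressive product with a complement can be made concrete in Euclidean $\mathrm{R}^3$ with an orthonormal basis. Under those assumptions the complement of a vector b is the 2-element with the same components on the basis $(e_2\wedge e_3,\; e_3\wedge e_1,\; e_1\wedge e_2)$, and the regressive product of a 1-element with a 2-element is the coefficient of the unit 3-element in their exterior product. The following plain Python sketch (not the book's GrassmannAlgebra package) recovers the ordinary dot product from $a\vee\overline{b}$:

```python
import numpy as np

def complement(b):
    """Euclidean complement of a 1-element in an orthonormal basis of R^3:
    the 2-element with the same components on (e2^e3, e3^e1, e1^e2)."""
    return b.copy()

def regressive_1_2(a, B):
    """Regressive product of a 1-element a with a 2-element B in a 3-space:
    the scalar s such that a ^ B = s (e1^e2^e3).  Expanding the wedge,
    only a1*B1, a2*B2, a3*B3 survive (the cross terms repeat a factor)."""
    return a[0] * B[0] + a[1] * B[1] + a[2] * B[2]

def interior(a, b):
    """Interior product a o b = a v (complement of b)   (definition 6.2)."""
    return regressive_1_2(a, complement(b))

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 0.5])

print(np.isclose(interior(a, b), np.dot(a, b)))   # True
```

The Euclidean identifications above are the simplest possible case; with a general metric the complement carries the $\frac{1}{\sqrt{g}}\,g_{ij}$ factors of Chapter 5.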

Left and right interior products

To deal with the apparent asymmetry of the interior product, some authors (for example E. Tonti, On the Formal Structure of Physical Theories, page 68, 1975) have introduced the notion of left and right interior products, equivalent to the definitions:

$\underset{m}{\alpha}\circ\underset{k}{\beta} \equiv \underset{m}{\alpha}\vee\overline{\underset{k}{\beta}}\in\mathrm{L}_{m-k} \qquad\qquad \underset{m}{\alpha}\;\bar{\circ}\;\underset{k}{\beta} \equiv \overline{\underset{m}{\alpha}}\vee\underset{k}{\beta}\in\mathrm{L}_{k-m}$   (6.4)

Here, $\underset{m}{\alpha}\circ\underset{k}{\beta}$ is the right interior product, and $\underset{m}{\alpha}\,\bar{\circ}\,\underset{k}{\beta}$ is the left interior product. By interchanging the order of the factors in one of the regressive products, and then interchanging the m-element with the k-element, we can see that they are related by the formula:

$\underset{m}{\alpha}\circ\underset{k}{\beta} \equiv (-1)^{k(n-m)}\;\underset{k}{\beta}\;\bar{\circ}\;\underset{m}{\alpha}$   (6.5)

In this book, we do not follow this dual-definition approach since we have found it to be an unnecessary complication, especially with its dependence on grade and the dimension of the space. In Chapter 10 we will discover a new product (the generalized Grassmann product) which leads to the same interior product independent of the order of its factors, thus retrieving, in this wider system, the sought-after symmetry.

Historical note

Grassmann and workers in the Grassmannian tradition define the interior product of two elements as the product of one with the complement of the other, the product being either exterior or regressive depending on which interpretation produces a non-zero result. Furthermore, when the grades of the elements are equal, it is defined either way. This definition involves the confusion between scalars and n-elements discussed in Chapter 5, Section 5.1 (equivalent to assuming a Euclidean metric and identifying scalars with pseudoscalars). It is to obviate this inconsistency and restriction on generality that the approach adopted here bases its definition of the interior product explicitly on the regressive product.

Implications of the regressive product axioms

By expressing one or more elements as a complement, the relations of the regressive product axiom set may be rewritten in terms of the interior product, thus yielding some of its more fundamental properties.

6: The interior product of an m-element and a k-element is an (m−k)-element.

The grade of the interior product of two elements is the difference of their grades.

$\underset{m}{\alpha}\in\mathrm{L}_m,\;\;\underset{k}{\beta}\in\mathrm{L}_k \quad\Rightarrow\quad \underset{m}{\alpha}\circ\underset{k}{\beta}\in\mathrm{L}_{m-k}$   (6.6)

Thus, in contradistinction to the regressive product, the grade of an interior product does not depend on the dimension of the underlying linear space.

If the grade of the first factor is less than that of the second, the interior product is zero.

$\underset{m}{\alpha}\circ\underset{k}{\beta} \equiv 0,\qquad m < k$   (6.7)

7: The interior product is not associative.

The following formulae are derived directly from the associativity of the regressive product, and show that the interior product is not associative.

$\left(\underset{m}{\alpha}\circ\underset{k}{\beta}\right)\circ\underset{r}{\gamma} \equiv \underset{m}{\alpha}\circ\left(\underset{k}{\beta}\wedge\underset{r}{\gamma}\right)$   (6.8)

$\left(\underset{m}{\alpha}\vee\underset{k}{\beta}\right)\circ\underset{r}{\gamma} \equiv \underset{m}{\alpha}\vee\left(\underset{k}{\beta}\circ\underset{r}{\gamma}\right)$   (6.9)

8: The unit scalar 1 is the identity for the interior product.

The interior product of an element with the unit scalar 1 does not change it. The interior product of scalars is equivalent to ordinary scalar multiplication. The complement operation may be viewed as the interior product with the unit n-element $\overline{1}$.

$\underset{m}{\alpha}\circ 1 \equiv \underset{m}{\alpha}$   (6.10)

$a\circ b \equiv a\vee\overline{b} \equiv a\,b$   (6.11)

$1\circ 1 \equiv 1$   (6.12)

$\overline{\underset{m}{\alpha}} \equiv \overline{1}\circ\underset{m}{\alpha}$   (6.13)

9: The inverse of a scalar with respect to the interior product is its complement.

The interior product of the complement of a scalar and the reciprocal of the scalar is the unit n-element.

$\overline{a}\circ\frac{1}{a} \equiv \overline{1}$   (6.14)

10: The interior product of two elements is congruent to the interior product of their complements in reverse order.

The interior product of two elements is equal (apart from a possible sign) to the interior product of their complements in reverse order.

$\underset{m}{\alpha}\circ\underset{k}{\beta} \equiv (-1)^{(n-m)(m-k)}\;\overline{\underset{k}{\beta}}\circ\overline{\underset{m}{\alpha}}$   (6.15)

If the elements are of the same grade, the interior product of two elements is equal to the interior product of their complements. (It will be shown later that, because of the symmetry of the metric tensor, the interior product of two elements of the same grade is symmetric, hence the order of the factors on either side of the equation may be reversed.)

$\underset{m}{\alpha}\circ\underset{m}{\beta} \equiv \overline{\underset{m}{\beta}}\circ\overline{\underset{m}{\alpha}}$   (6.16)

11: An interior product with zero is zero.

Every exterior linear space has a zero element whose interior product with any other element is zero.

$\underset{m}{\alpha}\circ 0 \equiv 0 \equiv 0\circ\underset{m}{\alpha}$   (6.17)

12: The interior product is distributive over addition.

The interior product is both left and right distributive over addition.

$\left(\underset{m}{\alpha}+\underset{m}{\beta}\right)\circ\underset{r}{\gamma} \equiv \underset{m}{\alpha}\circ\underset{r}{\gamma} + \underset{m}{\beta}\circ\underset{r}{\gamma}$   (6.18)

$\underset{m}{\alpha}\circ\left(\underset{r}{\beta}+\underset{r}{\gamma}\right) \equiv \underset{m}{\alpha}\circ\underset{r}{\beta} + \underset{m}{\alpha}\circ\underset{r}{\gamma}$   (6.19)

Orthogonality

Orthogonality is a concept generated by the complement operation.

If x is a 1-element and $\underset{m}{\alpha}$ is a simple m-element, then x and $\underset{m}{\alpha}$ may be said to be linearly dependent if and only if $\underset{m}{\alpha}\wedge x \equiv 0$, that is, x is contained in (the space of) $\underset{m}{\alpha}$. Similarly, x and $\underset{m}{\alpha}$ are said to be orthogonal if and only if $\overline{\underset{m}{\alpha}}\wedge x \equiv 0$, that is, x is contained in $\overline{\underset{m}{\alpha}}$.

By taking the complement of $\overline{\underset{m}{\alpha}}\wedge x \equiv 0$, we can see that $\overline{\underset{m}{\alpha}}\wedge x \equiv 0$ if and only if $\underset{m}{\alpha}\circ x \equiv 0$. Thus it may also be said that a 1-element x is orthogonal to a simple element $\underset{m}{\alpha}$ if and only if their interior product is zero, that is, $\underset{m}{\alpha}\circ x \equiv 0$.

If $\underset{k}{x} \equiv x_1\wedge x_2\wedge\cdots\wedge x_k$, then $\underset{m}{\alpha}$ and $\underset{k}{x}$ are said to be totally orthogonal if and only if $\underset{m}{\alpha}\circ x_i \equiv 0$ for all $x_i$ contained in $\underset{k}{x}$.

However, for $\underset{m}{\alpha}\circ(x_1\wedge x_2\wedge\cdots\wedge x_k)$ to be zero it is only necessary that one of the $x_i$ be orthogonal to $\underset{m}{\alpha}$. To show this, suppose it to be (without loss of generality) $x_1$. Then, using the left-associativity of the interior product (formula 6.33 below), we can write $\underset{m}{\alpha}\circ(x_1\wedge x_2\wedge\cdots\wedge x_k)$ as $\left(\underset{m}{\alpha}\circ x_1\right)\circ(x_2\wedge\cdots\wedge x_k)$, whence it becomes immediately clear that if $\underset{m}{\alpha}\circ x_1$ is zero then so is the whole product $\underset{m}{\alpha}\circ\underset{k}{x}$.

Just as the vanishing of the exterior product of two simple elements $\underset{m}{\alpha}$ and $\underset{k}{x}$ implies only that they have some 1-element in common, so the vanishing of their interior product implies only that $\overline{\underset{m}{\alpha}}$ and $\underset{k}{x}$ (m ≥ k) have some 1-element in common, and conversely. That is, there is a 1-element x such that the following implications hold.

$\underset{m}{\alpha}\circ\underset{k}{x} \equiv 0 \quad\Longleftrightarrow\quad \left\{\;\overline{\underset{m}{\alpha}}\wedge x \equiv 0,\;\; \underset{k}{x}\wedge x \equiv 0\;\right\}$   (6.20)

Example: The interior product of a simple bivector and a vector

In this section we preview the simplest case of an interior product which is not an inner product: that of a bivector with a vector. At this stage we just sketch the computation sufficiently to be able to depict it geometrically. We will revisit the general case in more detail later in the chapter.

Let x be a vector and $B \equiv x_1\wedge x_2$ be a simple bivector in a space of any number of dimensions. The interior product of the bivector B with the vector x is the vector B∘x, which is orthogonal to both x and its orthogonal projection $x_{\parallel}$ onto B. The vector B∘x can be expanded by the Interior Common Factor Theorem (which we will discuss later) to give:

$B\circ x \equiv (x_1\wedge x_2)\circ x \equiv (x\circ x_1)\,x_2 - (x\circ x_2)\,x_1$

Since B∘x is expressed as a linear combination of $x_1$ and $x_2$, it is clearly contained in the bivector B. Thus

$B\wedge(B\circ x) \equiv 0$

The resulting vector B∘x is also orthogonal to x. We can show this by taking its scalar product with x.

$(B\circ x)\circ x \equiv B\circ(x\wedge x) \equiv 0$

The unit bivector $\hat{B}$ is defined by

$\hat{B} \equiv \frac{B}{\sqrt{B\circ B}}$

The orthogonal projection $x_{\parallel}$ of x onto B is given by

$x_{\parallel} \equiv -\,\hat{B}\circ\left(\hat{B}\circ x\right)$

And the component $x_{\perp}$ of x orthogonal to B is given by

$x_{\perp} \equiv \left(\hat{B}\wedge x\right)\circ\hat{B}$

The interior product of a bivector and a vector in a vector 3-space

The analysis above has shown us that in a vector n-space, B∘x lies in B and is orthogonal to x. It is also orthogonal to the component $x_{\parallel}$ of x in B, and its relationship to the component $x_{\perp}$ of x orthogonal to B can be depicted in a vector 3-space as follows:

The interior product of a bivector with a vector in a vector 3-space
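The expansion $B\circ x \equiv (x\circ x_1)\,x_2 - (x\circ x_2)\,x_1$ is easy to check numerically in Euclidean $\mathrm{R}^3$. The plain numpy sketch below (not the book's GrassmannAlgebra package; the vectors are arbitrary choices, with B taken as a coordinate plane so that $x_{\parallel}$ can be written down directly) verifies that B∘x lies in B and is orthogonal both to x and to $x_{\parallel}$:

```python
import numpy as np

def bivector_dot_vector(x1, x2, x):
    """Interior product (x1 ^ x2) o x, via the expansion
    (x o x1) x2 - (x o x2) x1."""
    return np.dot(x, x1) * x2 - np.dot(x, x2) * x1

x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([1.0, 2.0, 0.0])      # B = x1 ^ x2 spans the z = 0 plane
x  = np.array([1.0, 1.0, 3.0])

Bx = bivector_dot_vector(x1, x2, x)

# B o x is orthogonal to x:  (B o x) o x = B o (x ^ x) = 0
assert abs(np.dot(Bx, x)) < 1e-12

# B o x lies in B:  B ^ (B o x) = 0, i.e. x1, x2, Bx are linearly dependent
assert abs(np.linalg.det(np.column_stack([x1, x2, Bx]))) < 1e-12

# Since B is the z = 0 plane here, the projection of x onto B is immediate.
x_par = np.array([x[0], x[1], 0.0])    # component of x in B
assert abs(np.dot(Bx, x_par)) < 1e-12  # B o x is orthogonal to x_par too
print("ok")
```

In the general case $x_{\parallel}$ would be computed as $-\hat{B}\circ(\hat{B}\circ x)$ rather than read off from the coordinates.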


The interior product of a bivector and a vector in a vector 2-space

In a vector 2-space, the same analysis is valid. However, $x_{\perp}$ is zero because it involves the exterior product of three vectors in a 2-space; and $x_{\parallel}$ is identical to x because the unit bivector $\hat{B}$ must be equal to the unit 2-element $\overline{1}$ of the space. (We discussed unit n-elements in Chapter 5.)

$x_{\perp} \equiv \left(\hat{B}\wedge x\right)\circ\hat{B} \equiv 0$

$x_{\parallel} \equiv -\,\hat{B}\circ\left(\hat{B}\circ x\right) \equiv -\,\overline{1}\circ\left(\overline{1}\circ x\right) \equiv -\,\overline{1}\circ\overline{x} \equiv -\,\overline{\overline{x}} \equiv x$

The interior product of a bivector with a vector in a vector 2-space

6.3 Properties of the Interior Product

Implications of the complement axioms

Our aim in this section is to show how the interior product of two elements, or complements of elements, can be expressed in terms of either exterior or regressive products together (perhaps) with the complement operation.

Product and complement axioms

In addition to the definitions and formulae we have already derived for the interior product, there are many more that can be derived by successive application of just three groups of axioms: the anti-commutativity, complement, and complement of a complement axioms. We list them here again for reference.

Anticommutativity axioms

$\underset{m}{\alpha}\wedge\underset{k}{\beta} \equiv (-1)^{mk}\;\underset{k}{\beta}\wedge\underset{m}{\alpha} \qquad\qquad \underset{m}{\alpha}\vee\underset{k}{\beta} \equiv (-1)^{(n-m)(n-k)}\;\underset{k}{\beta}\vee\underset{m}{\alpha}$   (6.21)

Complement axioms

$\overline{\underset{m}{\alpha}\wedge\underset{k}{\beta}} \equiv \overline{\underset{m}{\alpha}}\vee\overline{\underset{k}{\beta}} \qquad\qquad \overline{\underset{m}{\alpha}\vee\underset{k}{\beta}} \equiv \overline{\underset{m}{\alpha}}\wedge\overline{\underset{k}{\beta}}$   (6.22)

Complement of a complement axiom

$\overline{\overline{\underset{m}{\alpha}}} \equiv (-1)^{m(n-m)}\;\underset{m}{\alpha}$   (6.23)

Formula 1

The definition, together with the anticommutativity axiom for the regressive product, gives

$\underset{m}{\alpha}\circ\underset{k}{\beta} \equiv \underset{m}{\alpha}\vee\overline{\underset{k}{\beta}} \equiv (-1)^{k(n-m)}\;\overline{\underset{k}{\beta}}\vee\underset{m}{\alpha}$   (6.24)

Formula 2

$\overline{\overline{\underset{m}{\alpha}}}\circ\underset{k}{\beta} \equiv \overline{\overline{\underset{m}{\alpha}}}\vee\overline{\underset{k}{\beta}} \equiv (-1)^{m(n-m)}\;\underset{m}{\alpha}\vee\overline{\underset{k}{\beta}} \equiv (-1)^{m(n-m)}\;\underset{m}{\alpha}\circ\underset{k}{\beta}$

$\overline{\overline{\underset{m}{\alpha}}}\circ\underset{k}{\beta} \equiv (-1)^{m(n-m)}\;\underset{m}{\alpha}\circ\underset{k}{\beta} \equiv (-1)^{(m+k)(n-m)}\;\overline{\underset{k}{\beta}}\vee\underset{m}{\alpha}$   (6.25)

Formula 3

$\overline{\underset{m}{\alpha}}\circ\underset{k}{\beta} \equiv \overline{\underset{m}{\alpha}}\vee\overline{\underset{k}{\beta}} \equiv \overline{\underset{m}{\alpha}\wedge\underset{k}{\beta}} \equiv (-1)^{mk}\;\overline{\underset{k}{\beta}}\circ\underset{m}{\alpha}$

$\overline{\underset{m}{\alpha}}\circ\underset{k}{\beta} \equiv \overline{\underset{m}{\alpha}\wedge\underset{k}{\beta}} \equiv (-1)^{mk}\;\overline{\underset{k}{\beta}}\circ\underset{m}{\alpha}$   (6.26)


Formula 4

$\underset{m}{\alpha}\vee\underset{k}{\beta} \equiv (-1)^{k(n-k)}\;\underset{m}{\alpha}\vee\overline{\overline{\underset{k}{\beta}}} \equiv (-1)^{k(n-k)}\;\underset{m}{\alpha}\circ\overline{\underset{k}{\beta}}$

$\underset{m}{\alpha}\vee\underset{k}{\beta} \equiv (-1)^{k(n-k)}\;\underset{m}{\alpha}\circ\overline{\underset{k}{\beta}}$   (6.27)

Formula 5

$\underset{m}{\alpha}\vee\underset{k}{\beta} \equiv (-1)^{(n-m)(n-k)}\;\underset{k}{\beta}\vee\underset{m}{\alpha} \equiv (-1)^{(n-m)(n-k)+m(n-m)}\;\underset{k}{\beta}\circ\overline{\underset{m}{\alpha}}$

$\underset{m}{\alpha}\vee\underset{k}{\beta} \equiv (-1)^{(n-m)(n+m-k)}\;\underset{k}{\beta}\circ\overline{\underset{m}{\alpha}}$   (6.28)

Formula 6

$\underset{m}{\alpha}\circ\overline{\overline{\underset{k}{\beta}}} \equiv (-1)^{k(n-k)}\;\underset{m}{\alpha}\circ\underset{k}{\beta} \equiv (-1)^{k(n-k)+k(n-m)}\;\overline{\underset{k}{\beta}}\vee\underset{m}{\alpha}$

$\underset{m}{\alpha}\circ\overline{\overline{\underset{k}{\beta}}} \equiv (-1)^{k(m+k)}\;\overline{\underset{k}{\beta}}\vee\underset{m}{\alpha}$   (6.29)

Formula 7

$\underset{m}{\alpha}\wedge\underset{k}{\beta} \equiv (-1)^{m(n-m)+k(n-k)}\;\overline{\overline{\underset{m}{\alpha}}}\wedge\overline{\overline{\underset{k}{\beta}}} \equiv (-1)^{m(n-m)+k(n-k)}\;\overline{\overline{\underset{m}{\alpha}}\vee\overline{\underset{k}{\beta}}}$

$\underset{m}{\alpha}\wedge\underset{k}{\beta} \equiv (-1)^{(m+k)(n-(m+k))}\;\overline{\overline{\underset{m}{\alpha}}\vee\overline{\underset{k}{\beta}}} \equiv (-1)^{(m+k)(n-(m+k))+mk}\;\overline{\overline{\underset{k}{\beta}}\vee\overline{\underset{m}{\alpha}}}$   (6.30)

Formula 8

$\underset{m}{\alpha}\circ\overline{\underset{k}{\beta}} \equiv \underset{m}{\alpha}\vee\overline{\overline{\underset{k}{\beta}}} \equiv (-1)^{k(n-k)}\;\underset{m}{\alpha}\vee\underset{k}{\beta}$

$\underset{m}{\alpha}\circ\overline{\underset{k}{\beta}} \equiv (-1)^{k(n-k)}\;\underset{m}{\alpha}\vee\underset{k}{\beta}$   (6.31)

Formula 9

$\overline{\overline{\underset{m}{\alpha}}}\circ\overline{\overline{\underset{k}{\beta}}} \equiv (-1)^{k(n-k)}\;\overline{\overline{\underset{m}{\alpha}}}\circ\underset{k}{\beta} \equiv (-1)^{k(n-k)+m(n-m)}\;\underset{m}{\alpha}\circ\underset{k}{\beta}$

$\overline{\overline{\underset{m}{\alpha}}}\circ\overline{\overline{\underset{k}{\beta}}} \equiv (-1)^{k(n-k)+m(n-m)}\;\underset{m}{\alpha}\circ\underset{k}{\beta}$   (6.32)

Extended interior products

The interior product of an element with the exterior product of several elements may be shown directly from its definition to be a type of 'extended' interior product.

$\underset{m}{\alpha}\circ\left(\underset{k_1}{\beta_1}\wedge\underset{k_2}{\beta_2}\wedge\cdots\wedge\underset{k_p}{\beta_p}\right) \equiv \underset{m}{\alpha}\circ\underset{k_1}{\beta_1}\circ\underset{k_2}{\beta_2}\circ\cdots\circ\underset{k_p}{\beta_p}$   (6.33)

Thus, the interior product is left-associative.

If the interior product of any of the $\underset{k_i}{\beta_i}$ with $\underset{m}{\alpha}$ is zero, then the complete interior product is zero. We can see this by rearranging the factors in the exterior product to bring that factor into first position.

By interchanging the order of the factors in the exterior product we can derive alternative expressions. For the case of two factors we have

$\underset{m}{\alpha}\circ\left(\underset{k}{\beta}\wedge\underset{r}{\gamma}\right) \equiv (-1)^{kr}\;\underset{m}{\alpha}\circ\left(\underset{r}{\gamma}\wedge\underset{k}{\beta}\right)$

$\underset{m}{\alpha}\circ\underset{k}{\beta}\circ\underset{r}{\gamma} \equiv (-1)^{kr}\left(\underset{m}{\alpha}\circ\underset{r}{\gamma}\right)\circ\underset{k}{\beta}$   (6.34)


Converting interior products

GrassmannAlgebra has a number of functions for converting between exterior, regressive, and interior products, and (perhaps) the complement operation. We give some examples below. Note carefully however, that the conversion is carried out in the currently declared space; that is, GrassmannAlgebra will use the current dimension of the space for its conversion so that the formula for any attached sign may be simplified. All of the functions are effectively listable.

Precedence of the exterior, regressive, and interior products

GrassmannAlgebra has Mathematica's natural precedence ranking for its operators, which it uses to simplify the appearance of output by removing brackets if the input conforms to the precedence. The interior product operator has the same precedence as the minus operator.

{(((α∘β)∘γ)∘δ), α∘(β∘(γ∘δ))}
{α∘β∘γ∘δ, α∘(β∘(γ∘δ))}

The exterior product operator has a higher precedence than the interior product operator.

{α∧β∘γ, α∧(β∘γ), (α∧β)∘γ}
{α∧β∘γ, α∧(β∘γ), α∧β∘γ}

The regressive product operator has a higher precedence than the interior product operator.

{α∨β∘γ, α∨(β∘γ), (α∨β)∘γ}
{α∨β∘γ, α∨(β∘γ), α∨β∘γ}

The exterior product operator has a higher precedence than the regressive product operator.

{α∧β∨γ, α∧(β∨γ), (α∧β)∨γ}
{α∧β∨γ, α∧(β∨γ), α∧β∨γ}

Converting interior products to regressive products

A = $\{\underset{m}{\alpha}\circ\underset{k}{\beta}\circ\underset{j}{\gamma},\;\;\underset{m}{\alpha}\circ(\underset{k}{\beta}\wedge\underset{j}{\gamma})\}$;

With a 3-space declared:

InteriorToRegressive[A]
$\{\underset{m}{\alpha}\vee\overline{\underset{k}{\beta}}\vee\overline{\underset{j}{\gamma}},\;\;\underset{m}{\alpha}\vee\overline{\underset{k}{\beta}\wedge\underset{j}{\gamma}}\}$

Converting interior products to regressive products is independent of the dimension of the space, since the definitional formula is used.

Converting interior products to exterior products


In a 3-space, the complement of a complement of an element is the element itself, hence there are no extra signs resulting from the conversion.
!3 ; InteriorToExterior@AD :a b g, a b g>
m k j m k j

In a 4-space however, this is not so, so there may be extra signs resulting from the conversion.
!4 ; InteriorToExterior@AD :H- 1Lk+m H- 1Lm a b g, H- 1Lm a H- 1Lk b g >
m k j m k j

You can aggregate and simplify the signs by applying AggregateSigns.


InteriorToExterior@AD AggregateSigns :H- 1Lk a b g, H- 1Lk+m a b g>
m k j m k j

Converting exterior and regressive products to interior products

Converting exterior and regressive products to interior products may utilize the conversion in which the complement of an element is replaced by its interior product with the unit n-element 1̄.

B = {α_m∘β_k∧γ_j, α_m∧β_k∘γ_j};

In a 3-space, conversion to interior products gives sign factors (−1)^(j m + k m) and (−1)^(1 + j + k + j k + m + j m) on the two terms.

!3 ; ToInteriorForm[B]

But in a 4-space, the signs may be different: here they are (−1)^(j + k + m + j m + k m) and (−1)^(k + j k + m + j m).

!4 ; ToInteriorForm[B]

And in a 5-space, the signs return to those of the (also odd) 3-space.

!5 ; ToInteriorForm[B]


!5 ; ToInteriorForm@BD :H- 1Lj m+k m 1 1 b g a


k j m

, H - 1 L 1+j+k+j k+m +j m g 1 a b
j m k

>

Example: Orthogonalizing a set of 1-elements


Suppose we have a set of independent 1-elements a1, a2, a3, …, and we wish to create an orthogonal set e1, e2, e3, … spanning the same space. We begin by choosing one of the elements, a1 say, arbitrarily, and setting this to be the first element e1 of the orthogonal set to be created.

e1 = a1

To create a second element e2 orthogonal to e1 within the space concerned, we choose a second element of the space, a2 say, and form the interior product.

e2 = (e1∧a2)∘e1

From our discussion of the interior product of a simple bivector and a vector above, we can see that e2 is orthogonal to e1. Alternatively, we can take their interior product to see that it is zero:

e2∘e1 = ((e1∧a2)∘e1)∘e1 = (e1∧a2)∘(e1∧e1) = 0

We can also see that a2 lies in the 2-element e1∧e2 (that is, a2 is a linear combination of e1 and e2) by taking their exterior product and expanding it.

e1∧((e1∧a2)∘e1)∧a2 = e1∧((e1∘e1) a2 − (e1∘a2) e1)∧a2 = 0

We will develop the formula used for this expansion in Section 6.4: The Interior Common Factor Theorem. For the meantime however, all we need to observe is that e2 must be a linear combination of e1 and a2, because it is expressed in no other terms. We create the rest of the orthogonal elements in a similar manner.

e3 = (e1∧e2∧a3)∘(e1∧e2)
e4 = (e1∧e2∧e3∧a4)∘(e1∧e2∧e3)
…

In general then, the (i+1)th element e_(i+1) of the orthogonal set e1, e2, e3, … is obtained from the previous i elements and the (i+1)th element a_(i+1) of the original set.

e_(i+1) = (e1∧e2∧…∧e_i∧a_(i+1))∘(e1∧e2∧…∧e_i)    6.35

Following a similar procedure to the one used for the second element, we can easily show that e_(i+1) is orthogonal to e1∧e2∧…∧e_i, and hence to each of e1, e2, …, e_i; and that a_i lies in e1∧e2∧…∧e_i, and hence is a linear combination of e1, e2, …, e_i. This procedure is, as might be expected, equivalent to the Gram-Schmidt orthogonalization process.
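As a numeric cross-check, the construction above can be sketched in Python. The helper names are ours, and we use classic Gram-Schmidt, which formula 6.35 reproduces up to (nonzero) scalar factors in a Euclidean metric:

```python
from math import isclose

def dot(u, v):
    # Euclidean scalar product
    return sum(a * b for a, b in zip(u, v))

def orthogonalize(vectors):
    """Classic Gram-Schmidt: equivalent, up to scalar factors, to
    e_(i+1) = (e1 ^ ... ^ ei ^ a_(i+1)) o (e1 ^ ... ^ ei)."""
    es = []
    for a in vectors:
        e = list(a)
        for prev in es:
            c = dot(prev, a) / dot(prev, prev)
            e = [x - c * p for x, p in zip(e, prev)]
        es.append(e)
    return es

a1, a2, a3 = [1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]
e1, e2, e3 = orthogonalize([a1, a2, a3])

# the e_i are mutually orthogonal
assert isclose(dot(e1, e2), 0.0, abs_tol=1e-12)
assert isclose(dot(e1, e3), 0.0, abs_tol=1e-12)
assert isclose(dot(e2, e3), 0.0, abs_tol=1e-12)
```

Each e_i here differs from the element produced by 6.35 only by a positive scalar multiple, so orthogonality of the two sets coincides.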


6.4 The Interior Common Factor Theorem


The Interior Common Factor Formula

The dual Common Factor Axiom introduced in Chapter 3 is:

{(α_m∨γ_j)∧(β_k∨γ_j) = (α_m∨β_k∨γ_j)∧γ_j,  m + k + j = 2 n}

Interchanging the order of the factors and replacing α_m and β_k by their complements gives

{(γ_j∨ᾱ_m)∧(γ_j∨β̄_k) = (γ_j∨ᾱ_m∨β̄_k)∧γ_j,  j = m + k}

Finally, by replacing ᾱ_m∨β̄_k with (α_m∧β_k)‾ we get the Interior Common Factor Formula.

(γ_j∘α_m)∧(γ_j∘β_k) = (γ_j∘(α_m∧β_k))∧γ_j,  j = m + k    6.36

In this case γ_j is simple, the expression is a j-element, and j = m + k.

The Interior Common Factor Theorem


The Common Factor Theorem introduced in Chapter 3 enabled an explicit expression for a regressive product to be derived. We now derive the interior product version, called the Interior Common Factor Theorem. This theorem is a source of many useful relations in the Grassmann algebra. We start with the Common Factor Theorem in the form:

α_m∨β_s = Σ_(i=1…ν) (α^i_(m−p)∧β_s)∨α^i_p

where α_m = α^1_(m−p)∧α^1_p = α^2_(m−p)∧α^2_p = … = α^ν_(m−p)∧α^ν_p, α_m is simple, p = m + s − n, and ν = Binomial[m, p].

Suppose now that β_s = β̄_k, k = n − s = m − p. Substituting for β_s and using the formula α_m∧β̄_m = (α_m∘β_m) 1̄ (which is derived in the section below: The Inner Product) allows us to write the term in brackets as

α^i_(m−p)∧β_s = α^i_k∧β̄_k = (α^i_k∘β_k) 1̄

so that the Common Factor Theorem becomes:

α_m∘β_k = Σ_(i=1…ν) (α^i_k∘β_k) α^i_(m−k)    6.37

α_m = α^1_k∧α^1_(m−k) = α^2_k∧α^2_(m−k) = … = α^ν_k∧α^ν_(m−k)

where k ≤ m, ν = Binomial[m, k], and α_m is simple.

The formula indicates that an interior product of a simple element with another, not necessarily simple, element of equal or lower grade may be expressed in terms of the factors of the simple element of higher grade. When α_m is not simple, it may always be expressed as the sum of simple components: α_m = α^1_m + α^2_m + α^3_m + … (the i in the α^i_m are superscripts, not powers). From the linearity of the interior product, the Interior Common Factor Theorem may then be applied to each of the simple terms.

α_m∘β_k = (α^1_m + α^2_m + α^3_m + …)∘β_k = α^1_m∘β_k + α^2_m∘β_k + α^3_m∘β_k + …

Thus, the Interior Common Factor Theorem may be applied to the interior product of any two elements, simple or non-simple. Indeed, the application to components in the preceding expansion may be applied to the components of α_m even if it is simple.

To make the Interior Common Factor Theorem more explicit, suppose α_m = α1∧α2∧…∧αm; then:

(α1∧α2∧…∧αm)∘β_k = Σ_(i1<…<ik) ((α_i1∧…∧α_ik)∘β_k) (−1)^(K_k) α1∧…∧α̂_i1∧…∧α̂_ik∧…∧αm    6.38

K_k = Σ_(g=1…k) i_g + ½ k (k + 1)

where α̂_j means that αj is missing from the product.


Examples of the Interior Common Factor Theorem

In this section we take the formula just developed and give examples for specific values of m and k. We do this using the GrassmannAlgebra function InteriorCommonFactorTheorem, which generates the specific case of the theorem when given an argument of the form α_m∘β_k. The grades m and k must be given as positive integers.

β is a scalar

(α1∧α2∧…∧αm)∘β_0 = β_0 (α1∧α2∧…∧αm)    6.39

β is a 1-element

(α1∧α2∧…∧αm)∘β = Σ_(i=1…m) (αi∘β) (−1)^(i−1) α1∧…∧α̂i∧…∧αm    6.40

InteriorCommonFactorTheorem[α_2∘β_1]

α_2∘β_1 = (α1∧α2)∘β1 = −(α2∘β1) α1 + (α1∘β1) α2

InteriorCommonFactorTheorem[α_3∘β_1]

α_3∘β_1 = (α1∧α2∧α3)∘β1 = (α3∘β1) α1∧α2 − (α2∘β1) α1∧α3 + (α1∘β1) α2∧α3

InteriorCommonFactorTheorem[α_4∘β_1]

α_4∘β_1 = (α1∧α2∧α3∧α4)∘β1 = −(α4∘β1) α1∧α2∧α3 + (α3∘β1) α1∧α2∧α4 − (α2∘β1) α1∧α3∧α4 + (α1∘β1) α2∧α3∧α4

β is a 2-element

(α1∧α2∧…∧αm)∘β_2 = Σ_(i<j) ((αi∧αj)∘β_2) (−1)^(i+j−1) α1∧…∧α̂i∧…∧α̂j∧…∧αm    6.41

InteriorCommonFactorTheorem[α_3∘β_2]

α_3∘β_2 = (α1∧α2∧α3)∘β_2 = (β_2∘(α2∧α3)) α1 − (β_2∘(α1∧α3)) α2 + (β_2∘(α1∧α2)) α3

InteriorCommonFactorTheorem[α_4∘β_2]

α_4∘β_2 = (α1∧α2∧α3∧α4)∘β_2 = (β_2∘(α3∧α4)) α1∧α2 − (β_2∘(α2∧α4)) α1∧α3 + (β_2∘(α2∧α3)) α1∧α4 + (β_2∘(α1∧α4)) α2∧α3 − (β_2∘(α1∧α3)) α2∧α4 + (β_2∘(α1∧α2)) α3∧α4

β is a 3-element

InteriorCommonFactorTheorem[α_4∘β_3]

α_4∘β_3 = (α1∧α2∧α3∧α4)∘β_3 = −(β_3∘(α2∧α3∧α4)) α1 + (β_3∘(α1∧α3∧α4)) α2 − (β_3∘(α1∧α2∧α4)) α3 + (β_3∘(α1∧α2∧α3)) α4
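The m = 2, k = 1 case lends itself to a quick numeric sanity check. In a Euclidean space, (α1∧α2)∘β = −(α2∘β) α1 + (α1∘β) α2, and the resulting 1-element can be tested against the Gram-determinant form of the inner product, since ((α1∧α2)∘β)∘x = (α1∧α2)∘(β∧x) for any vector x. A sketch (helper names are ours):

```python
from math import isclose

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def inner2(u1, u2, v1, v2):
    # inner product of bivectors: (u1^u2) o (v1^v2) = det [ui o vj]
    return dot(u1, v1) * dot(u2, v2) - dot(u1, v2) * dot(u2, v1)

a1, a2 = [1.0, 2.0, 0.0], [0.0, 1.0, 3.0]
b = [2.0, -1.0, 1.0]

# ICFT expansion: (a1^a2) o b = -(a2 o b) a1 + (a1 o b) a2
lhs = [-dot(a2, b) * x + dot(a1, b) * y for x, y in zip(a1, a2)]

# check against ((a1^a2) o b) o x == (a1^a2) o (b^x) for basis vectors x
for x in ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]):
    assert isclose(dot(lhs, x), inner2(a1, a2, b, x), abs_tol=1e-9)
```

The loop exercises the expansion against three independent probe vectors, which determines the 1-element completely in a 3-space.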

The computational form of the Interior Common Factor Theorem


The Interior Common Factor Theorem rewritten in terms of the symbols we have been using for multilinear forms is

A_m∘B_k = Σ_(i=1…ν) (A^i_k∘B_k) A^i_(m−k)    6.42

A_m = A^1_k∧A^1_(m−k) = A^2_k∧A^2_(m−k) = … = A^ν_k∧A^ν_(m−k)

where k ≤ m, ν = Binomial[m, k], and A_m is simple.

Just as we did for the Common Factor Theorem in Chapter 3, we can express this in a computational form. If 𝒞k[A_m] denotes the list of k-element factors of A_m, and 𝒟k[A_m] the corresponding list of (m−k)-element cofactors,

𝒞k[A_m] = {A^1_k, A^2_k, …, A^ν_k}

𝒟k[A_m] = {A^1_(m−k), A^2_(m−k), …, A^ν_(m−k)}    6.43

then

A_m∘B_k = Σ_(i=1…ν) (𝒞k[A_m][[i]]∘B_k) 𝒟k[A_m][[i]]    6.44

where k ≤ m, ν = Binomial[m, k], and A_m is simple.

Or, using Mathematica's inbuilt Listability

A_m∘B_k = Σ[(𝒞k[A_m]∘B_k) 𝒟k[A_m]]    6.45

Or again, using Mathematica's Dot function to do the summation.

A_m∘B_k = (𝒞k[A_m]∘B_k).𝒟k[A_m]    6.46

(The Dot function is an operation on lists, and should not be confused with the interior product.) In the next section we will discuss the inner product (where the grades of the factors are equal), and show that the inner product is symmetric. Thus we can write the above formulas in the alternative forms:

A_m∘B_k = Σ_(i=1…ν) (B_k∘𝒞k[A_m][[i]]) 𝒟k[A_m][[i]]    6.47

A_m∘B_k = Σ[(B_k∘𝒞k[A_m]) 𝒟k[A_m]]    6.48

A_m∘B_k = (B_k∘𝒞k[A_m]).𝒟k[A_m]    6.49

Examples

Here are the three computational forms compared to the result from the InteriorCommonFactorTheorem function, which implements the initially derived theorem.

m = 3; k = 2; ν = 3;

Each of the three computational forms (the explicit sum, the Listable product, and the Dot form) returns the same expansion:

A_3∘B_2 = (B_2∘(A2∧A3)) A1 − (B_2∘(A1∧A3)) A2 + (B_2∘(A1∧A2)) A3

InteriorCommonFactorTheorem[A_3∘B_2]

A_3∘B_2 = (A1∧A2∧A3)∘B_2 = (B_2∘(A2∧A3)) A1 − (B_2∘(A1∧A3)) A2 + (B_2∘(A1∧A2)) A3

6.5 The Inner Product


Implications of the Common Factor Axiom

One of the special cases of the Common Factor Axiom is

{α_m∨β_k = (α_m∨β_k) 1,  m + k − n = 0}

Putting β_k equal to β̄_m then gives immediately

α_m∨β̄_m = (α_m∨β̄_m) 1 = (α_m∘β_m) 1

α_m∘β_m = (α_m∘β_m) 1    6.50

The dual to this special case of the Common Factor Axiom says that an exterior product of two elements whose grades sum to the dimension of the space is equal to the (scalar) regressive product of the two elements multiplied by the unit n-element.

{α_m∧β_k = (α_m∨β_k) 1_n,  m + k − n = 0}

Now that we have defined the complement operation, the unit n-element 1_n becomes 1̄. We can then rewrite this dual axiom as

α_m∧β̄_m = (α_m∨β̄_m) 1̄ = (α_m∘β_m) 1̄

α_m∧β̄_m = (α_m∘β_m) 1̄    6.51

Taking the complement of this equation gives another form for the inner product.

α_m∘β_m = (α_m∧β̄_m)‾    6.52

The symmetry of the inner product

It is a normally accepted feature of the inner product that it be symmetric, that is, independent of the order of its factors. To show that our definition does indeed imply this symmetry, we need only perform some simple transformations on the right-hand side of the last equation.

α_m∘β_m = α_m∨β̄_m = (−1)^(m(n−m)) β̄_m∨α_m = β̄_m∨(ᾱ_m)‾‾ = (β_m∧ᾱ_m)‾ = β_m∘α_m

α_m∘β_m = β_m∘α_m    6.53

This symmetry is, of course, a property that we would expect of an inner product. However, it is interesting to remember that the major result which permits this symmetry is that the complement operation has been required to satisfy the complement of a complement axiom; that is, that the complement of the complement of an element should be, apart from a possible sign, equal to the element itself.


The inner product of complements

We have already shown earlier (in the section Implications of the regressive product axioms) that the inner product of two elements is equal to the inner product of their complements in reverse order. This followed from the more general axiom for interior products. However, since we have now shown that the inner product is symmetric, we can immediately see that the inner product of two elements is equal to the inner product of their complements.

ᾱ_m∘β̄_m = β_m∘α_m = α_m∘β_m

ᾱ_m∘β̄_m = α_m∘β_m    6.54

The inner product of simple elements

The inner product of simple elements can be expressed as a determinant. We begin with the form

α_m∘β_m = (α1∧α2∧…∧αm)∘(β1∧β2∧…∧βm)

The formula for extended interior products is

α_m∘(β_k1∧β_k2∧…∧β_kp) = (α_m∘β_k1)∘(β_k2∧…∧β_kp)

So we can write (α1∧α2∧…∧αm)∘(β1∧β2∧…∧βm) as:

((α1∧α2∧…∧αm)∘β1)∘(β2∧…∧βm)

or, by rearranging the β factors to bring βj to the beginning of the product, as:

((α1∧α2∧…∧αm)∘βj)∘((−1)^(j−1) β1∧β2∧…∧β̂j∧…∧βm)

The Interior Common Factor Theorem gives the first factor of this expression as:

(α1∧α2∧…∧αm)∘βj = Σ_(i=1…m) (αi∘βj) (−1)^(i−1) α1∧α2∧…∧α̂i∧…∧αm

Thus, (α1∧α2∧…∧αm)∘(β1∧β2∧…∧βm) becomes:

(α1∧α2∧…∧αm)∘(β1∧β2∧…∧βm) = Σ_(i=1…m) (αi∘βj) (−1)^(i+j) (α1∧…∧α̂i∧…∧αm)∘(β1∧…∧β̂j∧…∧βm)    6.55

If this process is repeated, we find the strikingly simple formula:

(α1∧α2∧…∧αm)∘(β1∧β2∧…∧βm) = Det[αi∘βj]    6.56

that is, the determinant

| α1∘β1  α1∘β2  …  α1∘βm |
| α2∘β1  α2∘β2  …  α2∘βm |
|   ⋮       ⋮           ⋮   |
| αm∘β1  αm∘β2  …  αm∘βm |    6.57
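Formula 6.56 is straightforward to exercise numerically. The sketch below (helper names are ours; a Euclidean scalar product is assumed) computes the inner product of two 3-elements as the determinant of the matrix of scalar products, and confirms the symmetry 6.53 by swapping the arguments:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def det3(m):
    # cofactor expansion of a 3x3 determinant along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def inner(alphas, betas):
    # (a1^a2^a3) o (b1^b2^b3) = Det[ai o bj]   (formula 6.56)
    return det3([[dot(a, b) for b in betas] for a in alphas])

A = [[1.0, 0.0, 2.0], [0.0, 1.0, 1.0], [1.0, 1.0, 0.0]]
B = [[2.0, 1.0, 0.0], [0.0, 3.0, 1.0], [1.0, 0.0, 1.0]]

# symmetry of the inner product (6.53)
assert abs(inner(A, B) - inner(B, A)) < 1e-9
```

The symmetry here comes from the symmetry of the scalar product matrix, mirroring the argument in the text.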

Calculating inner products


The expression for the inner product of two simple elements can be developed in GrassmannAlgebra by using either the operation ComposeScalarProductMatrix or the operation ToScalarProducts.

ComposeScalarProductMatrix

ComposeScalarProductMatrix[A∘B] develops the inner product of A and B into the matrix of the scalar products of their factors. You can use Mathematica's inbuilt Det operation on this matrix to get its determinant, and hence the inner product.

M1 = ComposeScalarProductMatrix[α_3∘β_3]; MatrixForm[M1]

α1∘β1  α1∘β2  α1∘β3
α2∘β1  α2∘β2  α2∘β3
α3∘β1  α3∘β2  α3∘β3

Det[M1]

−(α1∘β3)(α2∘β2)(α3∘β1) + (α1∘β2)(α2∘β3)(α3∘β1) + (α1∘β3)(α2∘β1)(α3∘β2) − (α1∘β1)(α2∘β3)(α3∘β2) − (α1∘β2)(α2∘β1)(α3∘β3) + (α1∘β1)(α2∘β2)(α3∘β3)

If you just want to display M1 to look like a determinant, you can use Mathematica's Grid and BracketingBar.

BracketingBar[Grid[M1]]

| α1∘β1  α1∘β2  α1∘β3 |
| α2∘β1  α2∘β2  α2∘β3 |
| α3∘β1  α3∘β2  α3∘β3 |

If we reverse the order of the two elements and then take the transpose of the resulting scalar product matrix, we get a matrix M2 which differs from M1 only by the ordering of its scalar products.

M2 = ComposeScalarProductMatrix[β_3∘α_3]; MatrixForm[Transpose[M2]]

β1∘α1  β2∘α1  β3∘α1
β1∘α2  β2∘α2  β3∘α2
β1∘α3  β2∘α3  β3∘α3

This shows clearly how the symmetry of the inner product depends on the symmetry of the scalar product (which in turn depends on the symmetry of the metric tensor).

ToScalarProducts

To obtain the expansion of the inner product more directly, you can use ToScalarProducts.

ToScalarProducts[α_3∘β_3]

−(α1∘β3)(α2∘β2)(α3∘β1) + (α1∘β2)(α2∘β3)(α3∘β1) + (α1∘β3)(α2∘β1)(α3∘β2) − (α1∘β1)(α2∘β3)(α3∘β2) − (α1∘β2)(α2∘β1)(α3∘β3) + (α1∘β1)(α2∘β2)(α3∘β3)
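The six-term expansion above is just the Leibniz (permutation) expansion of the 3×3 determinant of scalar products. A small numeric mirror of it, with our own helper names:

```python
from itertools import permutations

def parity(p):
    # sign of a permutation given as a tuple, by counting transpositions
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def det_leibniz(m):
    # sum over all permutations, as in the six-term expansion of Det[M1]
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        prod = 1
        for i, j in enumerate(p):
            prod *= m[i][j]
        total += parity(p) * prod
    return total

M = [[2, 1, 0], [1, 3, 1], [0, 1, 1]]
# cofactor expansion gives 2*(3-1) - 1*(1-0) + 0 = 3
assert det_leibniz(M) == 3
```

For a 3×3 matrix the sum has exactly 3! = 6 terms, matching the printed expansion.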

In fact, ToScalarProducts works to convert any interior products in a Grassmann expression to scalar products. For example, here is an expression involving three interior products.

!5 ; X1 = x a + b (e1∧e2∧e3)∘e1 + c (e1∧e2)∘(e2∧e3);

Applying ToScalarProducts expands any interior products until the expression only contains scalar products.

X2 = ToScalarProducts[X1]

a x + c (−(e1∘e3)(e2∘e2) + (e1∘e2)(e2∘e3)) + b (e1∘e3) e1∧e2 − b (e1∘e2) e1∧e3 + b (e1∘e1) e2∧e3

You can convert these scalar products to metric elements with ToMetricElements. For the default Euclidean metric you would get:

X3 = ToMetricElements[X2]

a x + b e2∧e3
For a general metric you get the original expression with the general metric elements substituted for the scalar products.

DeclareMetric[g]; X4 = ToMetricElements[X2]

a x − c g1,3 g2,2 + c g1,2 g2,3 + b g1,3 e1∧e2 − b g1,2 e1∧e3 + b g1,1 e2∧e3

Inner products of basis elements


The following formulas collect together various results for the inner product of two basis m-elements. Here, α_m and β_m refer to any basis elements, e_i to a basis m-element, and e̲_i to its cobasis element. The reader is referred to Chapter 2 for a discussion of cobases, and to Chapter 5 for a discussion of the complement and the metric tensor.

α_m∘β_m = β_m∘α_m = ᾱ_m∘β̄_m = β̄_m∘ᾱ_m    6.58

e_i∘e̲_j = δ_ij    6.59

e_i∘e_j = ē_i∘ē_j = g_ij  (the metric tensor induced on the m-elements)    6.60

e̲_i∘e̲_j = (1/g) g̃_ij  (g the determinant of the metric tensor, g̃_ij the cofactor of g_ij)    6.61

6.6 The Measure of an m-element


The definition of measure

The measure of an m-element α_m is denoted ‖α_m‖ and defined as the positive square root of the inner product of α_m with itself.

‖α_m‖ = √(α_m∘α_m)    6.62

The measure of a scalar is its absolute value.

‖a‖ = √(a²)    6.63

The measure of unity (or its negative) is, as would be expected, equal to unity.

‖1‖ = 1    ‖−1‖ = 1    6.64

The measure of an n-element α_n = a e1∧…∧en is the absolute value of its single scalar coefficient multiplied by the square root of the determinant g of the metric tensor (which, in this book, we assume to be positive).

α_n = a e1∧…∧en    ‖α_n‖ = √(a² g) = ‖a‖ √g    6.65

The measure of the unit n-element (or its negative) is, as would be expected, equal to unity.

‖1̄‖ = 1    ‖−1̄‖ = 1    6.66

The measure of an element is equal to the measure of its complement since, as we have shown above, α_m∘α_m = ᾱ_m∘ᾱ_m.

‖α_m‖ = ‖ᾱ_m‖    6.67

(The different heights of the bracketing bars in this formula are an artifact of Mathematica's typesetting, and are inconsequential to the meaning of the notation.) The square of the measure of a simple element α_m is equal to the determinant of the scalar products of its factors.

‖α_m‖² = (α1∧α2∧…∧αm)∘(α1∧α2∧…∧αm) = Det[αi∘αj]    6.68

In a Euclidean metric, the measure of an element is given by the familiar 'root sum of squares' formula, yielding a measure that is always real and positive.

α_m = Σ ai e_i    ‖α_m‖ = √(Σ ai²)    6.69

Unit elements

The unit m-element associated with the element α_m is denoted α̂_m and may now be defined as:

α̂_m = α_m / ‖α_m‖    6.70

The unit scalar â corresponding to the scalar a is equal to +1 if a is positive, and equal to −1 if a is negative.

â = a / ‖a‖ = { 1, a > 0; −1, a < 0 }    6.71

1̂ = 1    (−1)^ = −1    6.72

The unit n-element α̂_n corresponding to the n-element α_n = a e1∧…∧en is equal to the unit n-element 1̄ if a is positive, and equal to −1̄ if a is negative.

α̂_n = α_n / ‖α_n‖ = a e1∧…∧en / (‖a‖ √g) = { 1̄, a > 0; −1̄, a < 0 }    6.73

1̄^ = 1̄    (−1̄)^ = −1̄    6.74

The measure of a unit m-element α̂_m is therefore always unity.

‖α̂_m‖ = 1    6.75

Calculating measures

To calculate the measure of any simple element you can use the GrassmannAlgebra function Measure.

Measure[x∧y]

√(−(x∘y)² + (x∘x)(y∘y))

Measure will compute the measure of a simple element using the currently declared metric. For example, with the default Euclidean metric, the measure of this 3-element is 5.

Measure[(2 e1 − 3 e2)∧(e1 + e2 + e3)∧(5 e2 + e3)]

5

With a general metric in 3-space

DeclareMetric[g]

{{g1,1, g1,2, g1,3}, {g1,2, g2,2, g2,3}, {g1,3, g2,3, g3,3}}

Measure[(2 e1 − 3 e2)∧(e1 + e2 + e3)∧(5 e2 + e3)]

√(50 g1,2 g1,3 g2,3 + 25 g1,1 g2,2 g3,3 − 25 (g1,3² g2,2 + g1,1 g2,3² + g1,2² g3,3))

Measure will automatically apply any simplifications due to the symmetry of the scalar product.

The measure of free elements

The measure of an m-element has a well-defined geometric significance.

Scalars: m = 0

‖a‖ = √(a²)

The measure of a scalar is its absolute value.

Measure[a]

√(a²)

Vectors: m = 1

‖x‖ = √(x∘x)

If x is interpreted as a vector, then ‖x‖ is called the length of x.

Measure[x]

√(x∘x)

Bivectors: m = 2

‖x∧y‖² = (x∧y)∘(x∧y) = | x∘x  y∘x |
                       | x∘y  y∘y |

If x∧y is interpreted as a bivector, then ‖x∧y‖ is called the area of x∧y. The scalar ‖x∧y‖ is in fact the area of the parallelogram formed by the vectors x and y.

Measure[x∧y]

√(−(x∘y)² + (x∘x)(y∘y))

[Graphic showing a parallelogram formed by two vectors x and y, and its area.]

Because of the nilpotent properties of the exterior product, the measure of the bivector is independent of the way in which it is expressed in terms of its vector factors.

Measure[(x + y)∧y]

√(−(x∘y)² + (x∘x)(y∘y))

[Graphic showing a parallelogram formed by vectors (x + y) and y, and its area.]

Trivectors: m = 3

‖x∧y∧z‖² = (x∧y∧z)∘(x∧y∧z) = | x∘x  y∘x  z∘x |
                             | x∘y  y∘y  z∘y |
                             | x∘z  y∘z  z∘z |

If x∧y∧z is interpreted as a trivector, then ‖x∧y∧z‖ is called the volume of x∧y∧z. The scalar ‖x∧y∧z‖ is in fact the volume of the parallelepiped formed by the vectors x, y and z.

Measure[x∧y∧z]

√(−(x∘z)²(y∘y) + 2 (x∘y)(x∘z)(y∘z) − (x∘x)(y∘z)² − (x∘y)²(z∘z) + (x∘x)(y∘y)(z∘z))

[Graphic showing a parallelepiped formed by vectors x, y and z, and its volume.]
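The area interpretation, and its invariance under re-expression of the factors, can be checked numerically. A sketch with our own helper names, assuming a Euclidean metric:

```python
from math import sqrt, isclose

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def area(x, y):
    # measure of the bivector x^y: square root of the 2x2 Gram determinant
    return sqrt(dot(x, x) * dot(y, y) - dot(x, y) ** 2)

x, y = [2.0, 0.0, 1.0], [1.0, 3.0, 0.0]

# nilpotency of the exterior product: (x+y)^y = x^y, so the areas agree
xy = [a + b for a, b in zip(x, y)]
assert isclose(area(x, y), area(xy, y), rel_tol=1e-12)
```

The same Gram-determinant pattern extends to the 3×3 case for volumes of parallelepipeds.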


The measure of bound elements

In this section we explore some relationships for measures of bound elements in a bound n-space under the hybrid metric introduced in Chapter 5. Essentially this metric has the properties that:
1. The scalar product of the origin with itself (and hence its measure) is unity.
2. The scalar product of the origin with any vector or multivector is zero.
3. The scalar product of any two vectors is given by the metric tensor on the n-dimensional vector subspace of the bound n-space.
We summarize these properties as follows:

𝒪∘𝒪 = 1    α_m∘𝒪 = 0    ei∘ej = gij    6.76

The measure of a point

Since a point P is defined as the sum of the origin 𝒪 and the point's position vector x relative to the origin, we can write:

P∘P = (𝒪 + x)∘(𝒪 + x) = 𝒪∘𝒪 + 2 𝒪∘x + x∘x = 1 + x∘x

P = 𝒪 + x    ‖P‖² = 1 + ‖x‖²    6.77

Although this is at first sight a strange result, it is due entirely to the hybrid metric referred to above. Unlike a vector difference of two points, whose measure starts off from zero and increases continuously as the two points move further apart, the measure of a single point starts off from unity as it coincides with the origin and increases continuously as it moves further away from it. The measure of the difference of two points is, as expected, equal to the measure of the associated vector.

P1 = 𝒪 + x1    P2 = 𝒪 + x2

(P1 − P2)∘(P1 − P2) = P1∘P1 − 2 P1∘P2 + P2∘P2 = (1 + x1∘x1) − 2 (1 + x1∘x2) + (1 + x2∘x2) = x1∘x1 − 2 x1∘x2 + x2∘x2 = (x1 − x2)∘(x1 − x2)
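These point-measure identities are easy to mirror numerically by representing a point as the coordinate tuple (1, x) with the block-diagonal hybrid metric diag(1, g). The helper names below are ours, and g is taken Euclidean for simplicity:

```python
from math import isclose

def dot(u, v):
    # Euclidean scalar product on the vector part
    return sum(a * b for a, b in zip(u, v))

def point(x):
    # a point 'origin + x' as the coordinate tuple (1, x1, ..., xn)
    return [1.0] + list(x)

def hybrid_dot(p, q):
    # hybrid metric: origin o origin = 1, origin o vector = 0,
    # vector o vector = Euclidean scalar product
    return p[0] * q[0] + dot(p[1:], q[1:])

x1, x2 = [1.0, 2.0, 2.0], [3.0, -1.0, 0.0]
P1, P2 = point(x1), point(x2)

# ||P||^2 = 1 + ||x||^2   (formula 6.77)
assert isclose(hybrid_dot(P1, P1), 1.0 + dot(x1, x1))

# the measure of a point difference equals that of the vector difference
d = [a - b for a, b in zip(P1, P2)]
dx = [a - b for a, b in zip(x1, x2)]
assert isclose(hybrid_dot(d, d), dot(dx, dx))
```

The origin coordinate cancels in any point difference, which is why the second assertion holds identically.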

The measure of a bound multivector

The second Triangle Formula (from the section on triangle formulas later in this chapter) is

(α_m∘β_m)(x∘z) = (α_m∧x)∘(β_m∧z) + (α_m∘z)∘(β_m∘x)

If we put x = z = P and β_m = α_m in this formula, and then rearrange the order, we get:

(P∧α_m)∘(P∧α_m) = (α_m∘α_m)(P∘P) − (α_m∘P)∘(α_m∘P)

But α_m∘P = α_m∘(𝒪 + x) = α_m∘x, and P∘P = 1 + x∘x, so that:

(P∧α_m)∘(P∧α_m) = (α_m∘α_m)(1 + x∘x) − (α_m∘x)∘(α_m∘x)

By applying the second Triangle Formula again we can also write

(α_m∧x)∘(α_m∧x) = (α_m∘α_m)(x∘x) − (α_m∘x)∘(α_m∘x)

Subtracting these gives the final result.

(P∧α_m)∘(P∧α_m) = α_m∘α_m + (α_m∧x)∘(α_m∧x)    6.78

‖P∧α_m‖² = ‖α_m‖² + ‖α_m∧x‖²    6.79

‖P∧α̂_m‖² = 1 + ‖α̂_m∧x‖²    6.80

Note that the measure of a multivector bound through the origin (x = 0) is just the measure of the multivector alone.

‖𝒪∧α_m‖ = ‖α_m‖    6.81

‖𝒪∧α̂_m‖ = 1    6.82

If x⊥ is the component of x orthogonal to α_m, then we can once again apply the triangle formula above to give

(α_m∧x⊥)∘(α_m∧x⊥) = (α_m∘α_m)(x⊥∘x⊥) − (α_m∘x⊥)∘(α_m∘x⊥)

But α_m∧x⊥ = α_m∧x and α_m∘x⊥ = 0, so that this simplifies to

(α_m∧x)∘(α_m∧x) = (α_m∘α_m)(x⊥∘x⊥)

We can thus write formulas for the measure of a bound element in the alternative forms:

‖P∧α_m‖² = ‖α_m‖² (1 + ‖x⊥‖²)    6.83

‖P∧α̂_m‖² = 1 + ‖x⊥‖²    6.84

Thus the measure of a bound unit multivector indicates its minimum distance from the origin.

Determining the multivector of a bound multivector

The hybrid metric that we have chosen to explore has the property that the origin is orthogonal to any multivector. Hence if we take the interior product of any bound multivector with the origin, the result will be to 'unbind' or 'free' it. To see this, begin with a simple bound m-vector, take its interior product with the origin, and then apply the Common Factor Theorem.

P∧α_m    α_m = α1∧α2∧…∧αm

(P∧α_m)∘𝒪 = (𝒪∘P) α_m − (𝒪∘α1)(P∧α2∧…∧αm) + …

All terms except the first are clearly zero, and since 𝒪∘P reduces to 1, the first term reduces to α_m.

(P∧α_m)∘𝒪 = α_m    6.85

Example

Suppose we are in a bound 3-space, and we consider the bound bivector expressed as the product of three points: P∧Q∧R. Such a 2-element could, for example, define a plane. We want to find the bivector.

ℙ = P∧Q∧R;  P = 𝒪 + e1;  Q = 𝒪 + e2;  R = 𝒪 + e3;

Expanding and simplifying gives four terms.

B1 = 𝒢[ℙ∘𝒪]

(𝒪∧e1∧e2)∘𝒪 − (𝒪∧e1∧e3)∘𝒪 + (𝒪∧e2∧e3)∘𝒪 + (e1∧e2∧e3)∘𝒪

We can convert these to scalar products using ToScalarProducts (which is based on the Interior Common Factor Theorem).

B2 = ToScalarProducts[B1]

𝒪∧((𝒪∘e2 − 𝒪∘e3) e1 + (−(𝒪∘e1) + 𝒪∘e3) e2 + (𝒪∘e1 − 𝒪∘e2) e3) + (𝒪∘𝒪 + 𝒪∘e3) e1∧e2 + (−(𝒪∘𝒪) − 𝒪∘e2) e1∧e3 + (𝒪∘𝒪 + 𝒪∘e1) e2∧e3

Now, when we apply ToMetricElements, all the scalar products of basis vectors with the origin are put to zero, and we get the required bivector.

B3 = ToMetricElements[B2]

e1∧e2 − e1∧e3 + e2∧e3

This bivector is simple, and if we wish, we can factorize it.

B4 = ExteriorFactorize[B3]

(e1 − e3)∧(e2 − e3)

In terms of the original points we can rewrite this bivector just as we would if we were deriving it geometrically from the plane of P∧Q∧R.

B5 = (P − R)∧(Q − R)

Expanding this bivector gives an important form related to the original simple exterior product P∧Q∧R.

B6 = P∧Q − P∧R + Q∧R

This form was discussed in Chapter 4: m-Planes: The m-vector of a bound m-vector.
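The bivector recovered above can be double-checked with ordinary coordinates: for the points P, Q, R used here, the bivector (P − R)∧(Q − R) has components given by 2×2 minors of the position-vector differences. A sketch with our own helpers:

```python
def wedge2(u, v):
    # components of u^v on e1^e2, e1^e3, e2^e3 (the 2x2 minors)
    return (u[0] * v[1] - u[1] * v[0],
            u[0] * v[2] - u[2] * v[0],
            u[1] * v[2] - u[2] * v[1])

# position vectors of P, Q, R relative to the origin
p, q, r = [1, 0, 0], [0, 1, 0], [0, 0, 1]
pr = [a - b for a, b in zip(p, r)]
qr = [a - b for a, b in zip(q, r)]

# (P-R)^(Q-R) = e1^e2 - e1^e3 + e2^e3
assert wedge2(pr, qr) == (1, -1, 1)
```

The component signs reproduce the expansion e1∧e2 − e1∧e3 + e2∧e3 obtained symbolically.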

6.7 The Induced Metric Tensor


Calculating induced metric tensors

In the last chapter we saw how we could use the GrassmannAlgebra function MetricL[m] to calculate the matrix of metric tensor elements induced on Λ_m by the metric tensor declared on Λ_1. For example, you can declare a general metric in a 3-space:

!3 ; DeclareMetric[g]

{{g1,1, g1,2, g1,3}, {g1,2, g2,2, g2,3}, {g1,3, g2,3, g3,3}}

and then ask for the metric induced on Λ_2.

MetricL[2]

{{−g1,2² + g1,1 g2,2, −g1,2 g1,3 + g1,1 g2,3, −g1,3 g2,2 + g1,2 g2,3},
{−g1,2 g1,3 + g1,1 g2,3, −g1,3² + g1,1 g3,3, −g1,3 g2,3 + g1,2 g3,3},
{−g1,3 g2,2 + g1,2 g2,3, −g1,3 g2,3 + g1,2 g3,3, −g2,3² + g2,2 g3,3}}

You can also get a palette of induced metrics by entering MetricPalette.

MetricPalette

[The Metric Palette tabulates the metrics induced on Λ_0 through Λ_3: on Λ_0 the single element 1; on Λ_1 the declared metric gi,j; on Λ_2 the matrix of 2×2 minors displayed above; and on Λ_3 the single element −g1,3² g2,2 + 2 g1,2 g1,3 g2,3 − g1,1 g2,3² − g1,2² g3,3 + g1,1 g2,2 g3,3, the determinant of the declared metric.]

Using scalar products to construct induced metric tensors

An alternative way of viewing and calculating the metric tensor on Λ_m is to construct the matrix of inner products of the basis elements of Λ_m with themselves, reduce these inner products to scalar products, and then substitute for each scalar product the corresponding metric tensor element from Λ_1. In what follows we show how to reproduce the result of the previous section.

First, we calculate the basis of Λ_2.

!3 ; B = BasisL[2]

{e1∧e2, e1∧e3, e2∧e3}

Second, we use the Mathematica function Outer to construct the matrix of all combinations of interior products of these basis elements.

M1 = Outer[InteriorProduct, B, B]; MatrixForm[M1]

(e1∧e2)∘(e1∧e2)  (e1∧e2)∘(e1∧e3)  (e1∧e2)∘(e2∧e3)
(e1∧e3)∘(e1∧e2)  (e1∧e3)∘(e1∧e3)  (e1∧e3)∘(e2∧e3)
(e2∧e3)∘(e1∧e2)  (e2∧e3)∘(e1∧e3)  (e2∧e3)∘(e2∧e3)

Third, we use the GrassmannAlgebra function ComposeScalarProductMatrix to convert each of these inner products into matrices of scalar products. The result is a 3×3 matrix whose elements are themselves 2×2 matrices of scalar products of the factors of the corresponding basis 2-elements.

M2 = ComposeScalarProductMatrix[M1]; MatrixForm[M2]

Fourth, we substitute the metric element gi,j for ei∘ej in the matrix.

DeclareMetric[g]; M3 = ToMetricElements[M2]; MatrixForm[M3]

(g1,1 g1,2; g1,2 g2,2)  (g1,1 g1,2; g1,3 g2,3)  (g1,2 g2,2; g1,3 g2,3)
(g1,1 g1,3; g1,2 g2,3)  (g1,1 g1,3; g1,3 g3,3)  (g1,2 g2,3; g1,3 g3,3)
(g1,2 g1,3; g2,2 g2,3)  (g1,2 g1,3; g2,3 g3,3)  (g2,2 g2,3; g2,3 g3,3)

Fifth, we compute the determinants of each of the elemental matrices.

M4 = Map[Det, M3, {2}]; MatrixForm[M4]

−g1,2² + g1,1 g2,2    −g1,2 g1,3 + g1,1 g2,3    −g1,3 g2,2 + g1,2 g2,3
−g1,2 g1,3 + g1,1 g2,3    −g1,3² + g1,1 g3,3    −g1,3 g2,3 + g1,2 g3,3
−g1,3 g2,2 + g1,2 g2,3    −g1,3 g2,3 + g1,2 g3,3    −g2,3² + g2,2 g3,3

We can have Mathematica check that this is the same matrix as obtained from the MetricL function.

M4 == MetricL[2]

True
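The same induced metric can be computed numerically: the Λ_2 metric element for the basis pairs (i,j) and (k,l) is the 2×2 minor formed from the Λ_1 metric. A sketch with our own helper names:

```python
from itertools import combinations

def induced_metric_2(g):
    # metric induced on the 2-elements: 2x2 minors of the 1-element metric
    pairs = list(combinations(range(len(g)), 2))
    return [[g[i][k] * g[j][l] - g[i][l] * g[j][k]
             for (k, l) in pairs]
            for (i, j) in pairs]

# a Euclidean metric on a 3-space induces the identity on the 2-elements
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert induced_metric_2(I3) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# for any symmetric declared metric, the induced metric is symmetric too
g = [[2, 1, 0], [1, 3, 1], [0, 1, 1]]
G2 = induced_metric_2(g)
assert all(G2[a][b] == G2[b][a] for a in range(3) for b in range(3))
```

Replacing the minor formula by k×k determinants over k-element index combinations gives the metric induced on Λ_k generally.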


Displaying induced metric tensors as a matrix of matrices

GrassmannAlgebra has an inbuilt function MetricMatrix for displaying an induced metric in matrix form. For example, we can display the induced metric on Λ_3 in a 4-space.

!4 ; DeclareMetric[g]

{{g1,1, g1,2, g1,3, g1,4}, {g1,2, g2,2, g2,3, g2,4}, {g1,3, g2,3, g3,3, g3,4}, {g1,4, g2,4, g3,4, g4,4}}

MetricMatrix[3] // MatrixForm

[A 4×4 matrix is displayed whose i,j element is the 3×3 matrix of metric elements formed from the factors of the ith and jth basis 3-elements.]

Of course, we are only displaying the metric tensor in this form. Its elements are correctly the determinants of the matrices displayed, not the matrices themselves.

6.8 Product Formulas for Interior Products


In discussing product formulas for the interior product we refer to the section on Product Formulas for Regressive Products in Chapter 3. In transforming the results of that section into results involving interior products, we may often simply transform expressions (or their duals) involving regressive products with (n−m)-elements x̄_m into expressions involving interior products with m-elements x_m, using

…∨x̄_m = …∘x_m

The Interior Common Factor Theorem plays the same role in this development that the Common Factor Theorem did in Chapter 3.

The basic interior Product Formula

The basic interior Product Formula is a transformation of the first regressive Product Formula of Chapter 3. Beginning with

(α_m∧β_k)∨x_(n−1) = (α_m∨x_(n−1))∧β_k + (−1)^m α_m∧(β_k∨x_(n−1))

replace x_(n−1) with x̄ (the complement of a 1-element x) to get

(α_m∧β_k)∘x = (α_m∘x)∧β_k + (−1)^m α_m∧(β_k∘x)    6.86

Derivation using the Interior Common Factor Theorem

The basic interior Product Formula may also be shown using the Interior Common Factor Theorem. To do this, suppose initially that α_m and β_k are simple. Then they can be expressed as α_m = α1∧…∧αm and β_k = β1∧…∧βk. Thus:

(α_m∧β_k)∘x = ((α1∧…∧αm)∧(β1∧…∧βk))∘x

= Σ_(i=1…m) (−1)^(i−1) (x∘αi) α1∧…∧α̂i∧…∧αm∧β1∧…∧βk + Σ_(j=1…k) (−1)^(m+j−1) (x∘βj) α1∧…∧αm∧β1∧…∧β̂j∧…∧βk

= (α_m∘x)∧(β1∧…∧βk) + (−1)^m (α1∧…∧αm)∧(β_k∘x)

Deriving interior Product Formulas

Deriving interior Product Formulas manually

If x is of a grade higher than 1, then similar relations hold, but with extra terms on the right-hand side. For example, if we replace x by x1∧x2 and note that:

α_m∧β_k∘(x1∧x2) = (α_m∧β_k∘x1)∘x2

then the right-hand side may be expanded by applying the basic Product Formula twice. The first application is

α_m∧β_k∘(x1∧x2) = ((α_m∘x1)∧β_k + (−1)^m α_m∧(β_k∘x1))∘x2

The second application is to each term on the right-hand side.

((α_m∘x1)∧β_k)∘x2 = ((α_m∘x1)∘x2)∧β_k + (−1)^(m−1) (α_m∘x1)∧(β_k∘x2)

(−1)^m (α_m∧(β_k∘x1))∘x2 = (−1)^m (α_m∘x2)∧(β_k∘x1) + α_m∧(β_k∘(x1∧x2))

Adding these equations and remembering that (Z∘x1)∘x2 = Z∘(x1∧x2) gives

α_m∧β_k∘(x1∧x2) = (α_m∘(x1∧x2))∧β_k + α_m∧(β_k∘(x1∧x2)) − (−1)^m ((α_m∘x1)∧(β_k∘x2) − (α_m∘x2)∧(β_k∘x1))

As with the product formula for regressive products, successive application doubles the number of terms. We started with two terms on the right-hand side of the basic interior Product Formula. By applying it again we obtained a Product Formula with four terms on the right-hand side. The next application would give us eight terms. Thus the Product Formula for α_m∧β_k∘(x1∧x2∧…∧xj) would lead to 2^j terms.
m k
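The lowest-grade case m = k = 1 of the basic formula reads (a ∧ b) ∘ x ≡ (a ∘ x) b − (b ∘ x) a, and it can be checked numerically. The sketch below is not part of the GrassmannAlgebra package: it assumes Python with numpy, and uses the three-dimensional identification of (a ∧ b) ∘ x with the double cross product (a × b) × x.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, x = rng.standard_normal((3, 3))

# Basic interior Product Formula for m = k = 1:
# (a ^ b) . x == (a . x) b + (-1)^1 (b . x) a
lhs = np.cross(np.cross(a, b), x)          # (a ^ b) . x realized via 3-D cross products
rhs = np.dot(a, x) * b - np.dot(b, x) * a  # (a . x) b - (b . x) a

assert np.allclose(lhs, rhs)
```

Repeating the run with other random vectors gives the same agreement, consistent with the formula being an identity rather than a coincidence of the sample.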

Deriving interior Product Formulas from the dual regressive Product Formulas
A second derivation may be obtained directly by first computing the Dual of a regressive Product Formula, and then directly using the substitution

A ∨ x̄ → A ∘ x

where x̄ is the complement of a 1-element x.

For example, here is a regressive Product Formula.

F1 = DeriveProductFormula[(α_m ∨ β_k) ∧ x_2]

(α_m ∨ β_k) ∧ (x1 ∧ x2) ≡ α_m ∨ (β_k ∧ x1 ∧ x2) + (−1)^(−m+n) (−(α_m ∧ x1) ∨ (β_k ∧ x2) + (α_m ∧ x2) ∨ (β_k ∧ x1)) + (α_m ∧ x1 ∧ x2) ∨ β_k


F2 = Dual[F1]

(α_m ∧ β_k) ∨ x1 ∨ x2 ≡ α_m ∧ (β_k ∨ x1 ∨ x2) + (−1)^m (−(α_m ∨ x1) ∧ (β_k ∨ x2) + (α_m ∨ x2) ∧ (β_k ∨ x1)) + (α_m ∨ x1 ∨ x2) ∧ β_k

Here each x_i is of grade n−1. Replacing each regressive product with an (n−1)-element by the corresponding interior product then gives the interior Product Formula.

F3 = F2 /. A_ ∨ x_(n−1) → A ∘ x

Deriving interior Product Formulas sequentially


We can automate this process by encoding the two rules used in the derivation above and having Mathematica apply them. The first rule is the basic Interior Product Formula.
(α_m ∧ β_k) ∘ x ≡ (α_m ∘ x) ∧ β_k + (−1)^m α_m ∧ (β_k ∘ x)

which we encode as R1 .
R1 := (s_. (A_ ∧ B_) ∘ x_ → s (A ∘ x) ∧ B + s (−1)^G[A] A ∧ (B ∘ x))

The second rule is the repeated interior product formula


(Z ∘ x) ∘ y ≡ Z ∘ (x ∧ y)

which we encode as R2.

R2 := ((Z_ ∘ x_) ∘ y_ → Z ∘ (x ∧ y))

We will also need to declare the xi as extra vector symbols.


V@x_ D;

To apply these rules, start with an exterior product. Call it F0 .


F0 = α_m ∧ β_k;

Take the interior product of F0 with x1 , and apply both rules as many times as they are applicable to get F1 .
F1 = F0 ∘ x1 //. {R1, R2}

(α_m ∘ x1) ∧ β_k + (−1)^m α_m ∧ (β_k ∘ x1)

Repeat by taking the interior product of F1 with x2 , expanding, and applying the rules to get F2 .


F2 = GrassmannExpand[F1 ∘ x2] //. {R1, R2}

(−1)^(−1+m) (α_m ∘ x1) ∧ (β_k ∘ x2) + (−1)^m (α_m ∘ x2) ∧ (β_k ∘ x1) + (α_m ∘ (x1 ∧ x2)) ∧ β_k + (−1)^(2m) α_m ∧ (β_k ∘ (x1 ∧ x2))

We can simplify the signs by using GrassmannAlgebra's SimplifyPowerSigns.


F2 = GrassmannExpand[F1 ∘ x2] //. {R1, R2} // SimplifyPowerSigns

(−1)^(1+m) (α_m ∘ x1) ∧ (β_k ∘ x2) + (−1)^m (α_m ∘ x2) ∧ (β_k ∘ x1) + (α_m ∘ (x1 ∧ x2)) ∧ β_k + α_m ∧ (β_k ∘ (x1 ∧ x2))

To get succeeding formulas, we simply repeat this process.


F3 = GrassmannExpand[F2 ∘ x3] //. {R1, R2} // SimplifyPowerSigns

(α_m ∘ x1) ∧ (β_k ∘ (x2 ∧ x3)) − (α_m ∘ x2) ∧ (β_k ∘ (x1 ∧ x3)) + (α_m ∘ x3) ∧ (β_k ∘ (x1 ∧ x2)) + (−1)^m (α_m ∘ (x1 ∧ x2)) ∧ (β_k ∘ x3) + (−1)^(1+m) (α_m ∘ (x1 ∧ x3)) ∧ (β_k ∘ x2) + (−1)^m (α_m ∘ (x2 ∧ x3)) ∧ (β_k ∘ x1) + (α_m ∘ (x1 ∧ x2 ∧ x3)) ∧ β_k + (−1)^m α_m ∧ (β_k ∘ (x1 ∧ x2 ∧ x3))

Collecting the terms with the same signs gives

(α_m ∧ β_k) ∘ (x1 ∧ x2 ∧ x3) ≡
    (α_m ∘ x1) ∧ (β_k ∘ (x2 ∧ x3)) − (α_m ∘ x2) ∧ (β_k ∘ (x1 ∧ x3)) + (α_m ∘ x3) ∧ (β_k ∘ (x1 ∧ x2))
  + (−1)^m ((α_m ∘ (x1 ∧ x2)) ∧ (β_k ∘ x3) − (α_m ∘ (x1 ∧ x3)) ∧ (β_k ∘ x2) + (α_m ∘ (x2 ∧ x3)) ∧ (β_k ∘ x1) + α_m ∧ (β_k ∘ (x1 ∧ x2 ∧ x3)))
  + (α_m ∘ (x1 ∧ x2 ∧ x3)) ∧ β_k        3.65


Deriving interior Product Formulas automatically


Just as we did in Chapter 3, it is straightforward to encapsulate this sequential derivation in a function for automatic application.

1. Define the rules to be applied at each step


DeriveInteriorProductFormulaOnce[F_, x_] := GrassmannExpand[F ∘ x] //. {s_. (A_ ∧ B_) ∘ z_ → s (A ∘ z) ∧ B + s (−1)^RawGrade[A] A ∧ (B ∘ z), (Z_ ∘ z_) ∘ y_ → Z ∘ (z ∧ y)};

2. Apply the rules as many times as required


DeriveInteriorProductFormula[(A_ ∧ B_) ∘ X_] := (A ∧ B) ∘ ComposeSimpleForm[X, 1] ≡ SimplifyPowerSigns[Fold[DeriveInteriorProductFormulaOnce, A ∧ B, {ComposeSimpleForm[X, 1]} /. Wedge → Sequence]];

Check the results


H1 = DeriveInteriorProductFormula[(α_m ∧ β_k) ∘ x1]

(α_m ∧ β_k) ∘ x1 ≡ (α_m ∘ x1) ∧ β_k + (−1)^m α_m ∧ (β_k ∘ x1)

H2 = DeriveInteriorProductFormula[(α_m ∧ β_k) ∘ x_2]

(α_m ∧ β_k) ∘ (x1 ∧ x2) ≡ (−1)^(1+m) (α_m ∘ x1) ∧ (β_k ∘ x2) + (−1)^m (α_m ∘ x2) ∧ (β_k ∘ x1) + (α_m ∘ (x1 ∧ x2)) ∧ β_k + α_m ∧ (β_k ∘ (x1 ∧ x2))


H3 = DeriveInteriorProductFormula[(α_m ∧ β_k) ∘ x_3]

(α_m ∧ β_k) ∘ (x1 ∧ x2 ∧ x3) ≡ (α_m ∘ x1) ∧ (β_k ∘ (x2 ∧ x3)) − (α_m ∘ x2) ∧ (β_k ∘ (x1 ∧ x3)) + (α_m ∘ x3) ∧ (β_k ∘ (x1 ∧ x2)) + (−1)^m (α_m ∘ (x1 ∧ x2)) ∧ (β_k ∘ x3) + (−1)^(1+m) (α_m ∘ (x1 ∧ x3)) ∧ (β_k ∘ x2) + (−1)^m (α_m ∘ (x2 ∧ x3)) ∧ (β_k ∘ x1) + (α_m ∘ (x1 ∧ x2 ∧ x3)) ∧ β_k + (−1)^m α_m ∧ (β_k ∘ (x1 ∧ x2 ∧ x3))

These give the same results as the manually derived formulas

{((α_m ∧ β_k) ∘ x1 ≡ F1) === H1,
 ((α_m ∧ β_k) ∘ (x1 ∧ x2) ≡ F2) === H2,
 ((α_m ∧ β_k) ∘ (x1 ∧ x2 ∧ x3) ≡ F3) === H3}

{True, True, True}

Computable forms of interior Product Formulas


The computable form of the interior Product Formula
As in Chapter 3, we can take the interior Product Formula derived in the previous sections and rewrite it in its computational form using the notions of complete span and complete cospan. The computational form relies on the fact that in GrassmannAlgebra the exterior and interior products have an inbuilt Listable attribute.

(α_m ∧ β_k) ∘ x_j ≡ Σ[Flatten[(−1)^(ρ[j] m) (α_m ∘ 𝔰[x_j]) ∧ (β_k ∘ 𝔠[x_j])]]    3.66

In this formula, 𝔰[x_j] and 𝔠[x_j] are the complete span and complete cospan of x_j. The sign function ρ creates a list of elements alternating between 1 and −1. For example, for j equal to 2, and simplifying the right hand side, we have


(α_m ∧ β_k) ∘ (x1 ∧ x2) ≡ Σ[Flatten[(−1)^(ρ[2] m) (α_m ∘ 𝔰[x1 ∧ x2]) ∧ (β_k ∘ 𝔠[x1 ∧ x2])]]

(α_m ∧ β_k) ∘ (x1 ∧ x2) ≡ −(−1)^m (α_m ∘ x1) ∧ (β_k ∘ x2) + (−1)^m (α_m ∘ x2) ∧ (β_k ∘ x1) + (α_m ∘ (x1 ∧ x2)) ∧ β_k + α_m ∧ (β_k ∘ (x1 ∧ x2))

Encapsulating the computable formula


We can easily encapsulate this computable formula in a function ComputeInteriorProductFormula for easy comparison with the results from our automatic derivation above using DeriveInteriorProductFormula.
ComputeInteriorProductFormula[(A_ ∧ B_) ∘ X_] := (A ∧ B) ∘ ComposeSimpleForm[X, 1] ≡ Σ[Flatten[(−1)^(ρ[RawGrade[X]] RawGrade[A]) (A ∘ 𝔰[X]) ∧ (B ∘ 𝔠[X])]];

Comparing results
We can compare the results obtained by our two formulations.
!5; Table[ComputeInteriorProductFormula[(α_m ∧ β_k) ∘ x_j] === (DeriveInteriorProductFormula[(α_m ∧ β_k) ∘ x_j] /. (−1)^(m+1) → −(−1)^m), {j, 1, 5}]

{True, True, True, True, True}

The invariance of interior Product Formulas


In Chapter 2 we saw that the m:k-forms based on a simple factorized m-element X were, like the m-element, independent of the factorization.

Σ[(G1 ∧ 𝔰k[X]) ∘ (G2 ∧ 𝔠k[X])]

Just as in Chapter 3, we can see that the interior Product Formula is composed of a number of m:k-forms, and, as expected, shows the same invariance. For example, suppose we generate the formula F1 for a 3-element X equal to x∧y∧z:


F1 = ComputeInteriorProductFormula[(α_m ∧ β_k) ∘ (x ∧ y ∧ z)]

(α_m ∧ β_k) ∘ (x ∧ y ∧ z) ≡ (α_m ∘ x) ∧ (β_k ∘ (y ∧ z)) − (α_m ∘ y) ∧ (β_k ∘ (x ∧ z)) + (α_m ∘ z) ∧ (β_k ∘ (x ∧ y)) + (−1)^m (α_m ∘ (x ∧ y)) ∧ (β_k ∘ z) − (−1)^m (α_m ∘ (x ∧ z)) ∧ (β_k ∘ y) + (−1)^m (α_m ∘ (y ∧ z)) ∧ (β_k ∘ x) + (α_m ∘ (x ∧ y ∧ z)) ∧ β_k + (−1)^m α_m ∧ (β_k ∘ (x ∧ y ∧ z))

We can express the 3-element as a product of different 1-element factors by adding, to any given factor, scalar multiples of the other factors. For example

F2 = ComputeInteriorProductFormula[(α_m ∧ β_k) ∘ (x ∧ y ∧ (z + a x + b y))]

(α_m ∧ β_k) ∘ (x ∧ y ∧ (a x + b y + z)) ≡ (α_m ∘ x) ∧ (β_k ∘ (y ∧ (a x + b y + z))) − (α_m ∘ y) ∧ (β_k ∘ (x ∧ (a x + b y + z))) + (α_m ∘ (a x + b y + z)) ∧ (β_k ∘ (x ∧ y)) + (−1)^m (α_m ∘ (x ∧ y)) ∧ (β_k ∘ (a x + b y + z)) − (−1)^m (α_m ∘ (x ∧ (a x + b y + z))) ∧ (β_k ∘ y) + (−1)^m (α_m ∘ (y ∧ (a x + b y + z))) ∧ (β_k ∘ x) + (α_m ∘ (x ∧ y ∧ (a x + b y + z))) ∧ β_k + (−1)^m α_m ∧ (β_k ∘ (x ∧ y ∧ (a x + b y + z)))

Applying GrassmannExpandAndSimplify to these expressions shows that they are equal.


GrassmannExpandAndSimplify[F1 ≡ F2]

True

An alternative form for the interior Product Formula


Again, as in Chapter 3, we can rearrange the interior Product Formula by interchanging the span and cospan so that the span elements become associated with α_m and the cospan elements become associated with β_k (and changing some signs).

Thus we can write the interior Product Formula in the alternative form

(α_m ∧ β_k) ∘ x_j ≡ Σ[Flatten[(−1)^(ρ[j] m) (−1)^(ρρ[j]) R[(α_m ∘ 𝔰[x_j]) ∧ (β_k ∘ 𝔠[x_j])]]]    3.67

Encapsulating the alternative computable formula


Just as we encapsulated the first computable formula, we can similarly encapsulate this alternative one. We call it ComputeInteriorProductFormulaB.

ComputeInteriorProductFormulaB[(A_ ∧ B_) ∘ X_] := (A ∧ B) ∘ ComposeSimpleForm[X, 1] ≡ Σ[Flatten[(−1)^(ρ[RawGrade[X]] RawGrade[A]) (−1)^(ρρ[RawGrade[X]]) R[(A ∘ 𝔰[X]) ∧ (B ∘ 𝔠[X])]]];

Comparing results
Comparing the results of ComputeInteriorProductFormula with ComputeInteriorProductFormulaB verifies their equivalence, here for values of j from 1 to 5.

!10; Table[GrassmannExpandAndSimplify[ComputeInteriorProductFormula[(α_m ∧ β_k) ∘ x_j]] === GrassmannExpandAndSimplify[ComputeInteriorProductFormulaB[(α_m ∧ β_k) ∘ x_j]], {j, 1, 5}]

{True, True, True, True, True}

The interior decomposition formula


The basic interior Product Formula is

(α_m ∧ β_k) ∘ x ≡ (α_m ∘ x) ∧ β_k + (−1)^m α_m ∧ (β_k ∘ x)

By putting β_k equal to x we get

(α_m ∧ x) ∘ x ≡ (α_m ∘ x) ∧ x + (−1)^m α_m (x ∘ x)

Rearranging terms expresses a 'decomposition' of the element α_m in terms of the 1-element x:

α_m ≡ (−1)^m ((α_m ∧ x̂) ∘ x̂ − (α_m ∘ x̂) ∧ x̂)    6.87

Here, x̂ is a unit element: x̂ ≡ x/√(x ∘ x).
Note that (α_m ∘ x̂) ∧ x̂ and (α_m ∧ x̂) ∘ x̂ are orthogonal.
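For m = 1 the decomposition 6.87 is the familiar resolution of a vector α into a component along the unit vector x̂ and a component orthogonal to it. The following numeric sketch (assuming Python with numpy; it is not part of the GrassmannAlgebra package) checks both the decomposition and the orthogonality of the two parts in a 4-dimensional space.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = rng.standard_normal(4)          # a 1-element (m = 1); any dimension works
x = rng.standard_normal(4)
xh = x / np.linalg.norm(x)              # the unit 1-element x^

par = np.dot(alpha, xh) * xh            # (alpha . x^) ^ x^ : the component along x^
perp = alpha - par                      # equals -(-1)^1 (alpha ^ x^) . x^

assert np.allclose(par + perp, alpha)   # the decomposition 6.87 for m = 1
assert abs(np.dot(par, perp)) < 1e-12   # the two components are orthogonal
```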

Interior Product Formulas for 1-elements


Interior Product Formula 1
For completeness we repeat the basic interior Product Formula introduced at the beginning of the section.

(α_m ∧ β_k) ∘ x ≡ (α_m ∘ x) ∧ β_k + (−1)^m α_m ∧ (β_k ∘ x)    6.88

Interior Product Formula 2


Now take the dual of the first regressive Product Formula

(α_m ∨ β_k) ∧ x ≡ (α_m ∧ x) ∨ β_k + (−1)^(n−m) α_m ∨ (β_k ∧ x)

Here, x is a 1-element. Put β_k equal to its complement β̄_k

(α_m ∨ β̄_k) ∧ x ≡ (α_m ∧ x) ∨ β̄_k + (−1)^(n−m) α_m ∨ (β̄_k ∧ x)

and note that:

α_m ∨ (β̄_k ∧ x) ≡ (−1)^(n−1) α_m ∨ \overline{β_k ∘ x} ≡ (−1)^(n−1) α_m ∘ (β_k ∘ x)

(α_m ∘ β_k) ∧ x ≡ (α_m ∧ x) ∘ β_k + (−1)^(m−1) α_m ∘ (β_k ∘ x)    6.89

This product formula is the basis for the derivation of a set of formulas of geometric interest later in the chapter: the Triangle Formulas.


Interior Product Formula 3


An interior product of elements can always be expressed as the interior product of their complements in reverse order.

α_m ∘ β_k ≡ (−1)^((m−k)(n−m)) β̄_k ∘ ᾱ_m

When applied to interior Product Formula 2, the terms on the right-hand side interchange forms.

(α_m ∧ x) ∘ β_k ≡ (−1)^((m−k)(n−m)+n+k+1) β̄_k ∘ \overline{α_m ∧ x}

α_m ∘ (β_k ∘ x) ≡ (−1)^((m−k)(n−m)+m+1) (β̄_k ∧ x) ∘ ᾱ_m

Hence interior Product Formula 2 may be expressed in a variety of ways, for example:

(α_m ∘ β_k) ∧ x ≡ (α_m ∧ x) ∘ β_k + (−1)^((m−k)(n−m)) (β̄_k ∧ x) ∘ ᾱ_m    6.90

Interior Product Formula 4


Taking the interior product of interior Product Formula 2 with a second 1-element x2 gives:

((α_m ∘ β_k) ∧ x1) ∘ x2 ≡ (α_m ∧ x1) ∘ (β_k ∧ x2) + (−1)^(m+k) (α_m ∘ x2) ∘ (β_k ∘ x1)    6.91

Interior Product Formulas in terms of double sums


In this section we summarize the three major interior Product Formulas expressed in terms of double sums. These are equivalent to the computable forms already discussed.

Interior Product Formula 5


Consider the double sum regressive General Product Formula from Chapter 3.

(α_m ∨ β_k) ∧ x_p ≡ Σ_{r=0}^{p} (−1)^(r(n−m)) Σ_{i=1}^{ν} (α_m ∧ (x_i)_(p−r)) ∨ (β_k ∧ (x_i)_r)

x_p ≡ (x_1)_r ∧ (x_1)_(p−r) ≡ (x_2)_r ∧ (x_2)_(p−r) ≡ ⋯ ≡ (x_ν)_r ∧ (x_ν)_(p−r),    ν ≡ C(p, r)



Now replace β_k with the complement β̄_k of a k-element (so that the grade n−k is replaced by k where it occurs), and use β̄_k ∧ (x_i)_r ≡ (−1)^(r(n−r)) \overline{β_k ∘ (x_i)_r}, to give:

(α_m ∘ β_k) ∧ x_p ≡ Σ_{r=0}^{p} (−1)^(r(m−r)) Σ_{i=1}^{ν} (α_m ∧ (x_i)_(p−r)) ∘ (β_k ∘ (x_i)_r)    6.92

x_p ≡ (x_1)_r ∧ (x_1)_(p−r) ≡ (x_2)_r ∧ (x_2)_(p−r) ≡ ⋯ ≡ (x_ν)_r ∧ (x_ν)_(p−r),    ν ≡ C(p, r)

Interior Product Formula 6


By a similar process we obtain:

(α_m ∧ β_k) ∘ x_p ≡ Σ_{r=0}^{p} (−1)^(r m) Σ_{i=1}^{ν} (α_m ∘ (x_i)_(p−r)) ∧ (β_k ∘ (x_i)_r)    6.93

x_p ≡ (x_1)_r ∧ (x_1)_(p−r) ≡ (x_2)_r ∧ (x_2)_(p−r) ≡ ⋯ ≡ (x_ν)_r ∧ (x_ν)_(p−r),    ν ≡ C(p, r)

Interior Product Formula 7


In Formula 5 putting β_k equal to the unit m-element α̂_m expresses a simple p-element in terms of products with a unit m-element.

x_p ≡ Σ_{r=0}^{p} (−1)^(r(m−r)) Σ_{i=1}^{ν} (α̂_m ∧ (x_i)_(p−r)) ∘ (α̂_m ∘ (x_i)_r)    6.94

x_p ≡ (x_1)_r ∧ (x_1)_(p−r) ≡ (x_2)_r ∧ (x_2)_(p−r) ≡ ⋯ ≡ (x_ν)_r ∧ (x_ν)_(p−r),    ν ≡ C(p, r)

6.9 The Zero Interior Sum Theorem


The zero interior sum
There is an interesting theorem that Grassmann proves in Section 183 of his Ausdehnungslehre of 1862. Roughly translated into the terminology and notation of this book, this theorem states: If, from a series of m 1-elements, one forms the exterior products of any given grade, and conjoins each of them in an interior product with its supplementary combination, then the sum of these products is zero, that is


S1 ∘ C1 + S2 ∘ C2 + ⋯ ≡ 0

if S1, S2, … are the exterior products of the m 1-elements a1, a2, …, am of any given (say the kth) grade, and C1, C2, … are the supplementary combinations. By the supplementary combination to an exterior product of k 1-elements out of a series of m 1-elements, he is referring to the exterior product of the remaining m−k 1-elements ordered consistently with the original series. In our terminology the Si and Ci are respectively the k-span and k-cospan of the m-element A ≡ a1 ∧ a2 ∧ ⋯ ∧ am. This theorem is a useful source of identities involving sums of interior products. We shall also see in Chapter 10: Exploring the Generalized Product, that it has an important role to play in proving the Generalized Product Theorem, which shows the equivalence of two different forms for the expansion of the generalized product in terms of interior and exterior products. Chapter 10 will also discuss the generalization of the Zero Interior Sum Theorem leading to a Zero Generalized Sum Theorem. As we will see in the following sections, the Zero Interior Sum Theorem turns out to be another result associated with the invariance of an m:k-form to a factorization of the m-element A.

Composing interior sums


An interior sum is a particular case of an m:k-form. In Chapter 2 we considered various m:k-forms. In this case we are interested in the simplest one in which the product is the interior product.

Σ[𝔰k[A] ∘ 𝔠k[A]]

To generate an interior m:k-sum using this form, all we need to do is to specify m and k.

F[m_, k_] := Σ[𝔰k[α_m] ∘ 𝔠k[α_m]]

Here are the simplest cases. It will be immediately clear from these cases that m:k-sums in which the span grade k is less than the cospan grade m−k are zero, since each interior product term is then zero.

Interior sums for a 2-element


F[2, 1]

a1 ∘ a2 + a2 ∘ (−a1)

Interior sums for a 3-element


F[3, 1]

a1 ∘ (a2 ∧ a3) + a2 ∘ (−(a1 ∧ a3)) + a3 ∘ (a1 ∧ a2)


F[3, 2]

(a1 ∧ a2) ∘ a3 + (a1 ∧ a3) ∘ (−a2) + (a2 ∧ a3) ∘ a1

Interior sums for a 4-element


F[4, 1]

a1 ∘ (a2 ∧ a3 ∧ a4) + a2 ∘ (−(a1 ∧ a3 ∧ a4)) + a3 ∘ (a1 ∧ a2 ∧ a4) + a4 ∘ (−(a1 ∧ a2 ∧ a3))

F[4, 2]

(a1 ∧ a2) ∘ (a3 ∧ a4) + (a1 ∧ a3) ∘ (−(a2 ∧ a4)) + (a1 ∧ a4) ∘ (a2 ∧ a3) + (a2 ∧ a3) ∘ (a1 ∧ a4) + (a2 ∧ a4) ∘ (−(a1 ∧ a3)) + (a3 ∧ a4) ∘ (a1 ∧ a2)

F[4, 3]

(a1 ∧ a2 ∧ a3) ∘ a4 + (a1 ∧ a2 ∧ a4) ∘ (−a3) + (a1 ∧ a3 ∧ a4) ∘ a2 + (a2 ∧ a3 ∧ a4) ∘ (−a1)

Verifying the theorem


To verify the theorem, we only need to convert the interior products to scalar products and simplify.
GrassmannExpandAndSimplify[ToScalarProducts[{F[2,1], F[3,2], F[4,2], F[4,3]}]]

{0, 0, 0, 0}
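The 3:2 case of this verification can also be reproduced numerically. The sketch below (assuming Python with numpy, not part of the GrassmannAlgebra package) expands each interior product of a 2-element with a 1-element via formula 6.96, (a ∧ b) ∘ c ≡ (c ∘ a) b − (c ∘ b) a, and checks that the interior sum F[3,2] vanishes for random vectors.

```python
import numpy as np

def int_2_1(a, b, c):
    """Interior product (a^b).c of a simple 2-element with a 1-element (formula 6.96)."""
    return np.dot(c, a) * b - np.dot(c, b) * a

rng = np.random.default_rng(2)
a1, a2, a3 = rng.standard_normal((3, 5))   # any dimension >= 3 works; 5-D here

# F[3,2] = (a1^a2).a3 + (a1^a3).(-a2) + (a2^a3).a1
F32 = int_2_1(a1, a2, a3) + int_2_1(a1, a3, -a2) + int_2_1(a2, a3, a1)
assert np.allclose(F32, 0)                 # the Zero Interior Sum Theorem for m=3, k=2
```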

The Gram-Schmidt process


To prove the Zero Interior Sum Theorem, we will need the Gram-Schmidt process. Suppose we are given any simple m-element A ≡ a1 ∧ a2 ∧ a3 ∧ ⋯ ∧ am. The Gram-Schmidt process tells us that we can always refactor A into a product of orthogonal 1-element factors of the form:

A ≡ a1 ∧ (a2 + a a1) ∧ (a3 + b a1 + c a2) ∧ ⋯

Beginning with the first two factors a1 and a2 + a a1, we can choose the scalar a to make the factors orthogonal.

a1 ∘ (a2 + a a1) ≡ 0  ⟹  a ≡ −(a1 ∘ a2)/(a1 ∘ a1)

For the first three factors, we can choose the scalars to make all three factors mutually orthogonal. Let
f13 = a1 ∘ (a3 + b a1 + c a2); f23 = (a2 + a a1) ∘ (a3 + b a1 + c a2);

We can follow through the standard Gram-Schmidt process, or we can use Mathematica to solve for the coefficients.


S = Solve[GrassmannExpandAndSimplify[{f13 ≡ 0, f23 ≡ 0}], {b, c}]

{{b → ((a1 ∘ a2)(a2 ∘ a3) − (a1 ∘ a3)(a2 ∘ a2))/((a1 ∘ a1)(a2 ∘ a2) − (a1 ∘ a2)²),
  c → ((a1 ∘ a2)(a1 ∘ a3) − (a1 ∘ a1)(a2 ∘ a3))/((a1 ∘ a1)(a2 ∘ a2) − (a1 ∘ a2)²)}}

It is of interest to note that these scalars may be expressed in terms of interior products of 2-elements.

{b → ((a1 ∧ a2) ∘ (a2 ∧ a3))/((a1 ∧ a2) ∘ (a1 ∧ a2)), c → −((a1 ∧ a2) ∘ (a1 ∧ a3))/((a1 ∧ a2) ∘ (a1 ∧ a2))}

It is easy to confirm that with these definitions for a, b, and c, the first three factors of A are mutually orthogonal. First we make lists of the factors and values for the scalars we have derived.
F = {f1 → a1, f2 → a2 + a a1, f3 → a3 + b a1 + c a2};
H = {a → −(a1 ∘ a2)/(a1 ∘ a1), b → ((a1 ∧ a2) ∘ (a2 ∧ a3))/((a1 ∧ a2) ∘ (a1 ∧ a2)), c → −((a1 ∧ a2) ∘ (a1 ∧ a3))/((a1 ∧ a2) ∘ (a1 ∧ a2))};

Then we substitute them into the three orthogonality conditions, and simplify to get zero in each case.
Together[GrassmannExpandAndSimplify[{f1 ∘ f2, f1 ∘ f3, f2 ∘ f3} /. F /. ToScalarProducts[H]]]

{0, 0, 0}

Finally we confirm that this refactorization does not change the original product.
GrassmannExpandAndSimplify[f1 ∧ f2 ∧ f3 /. F /. ToScalarProducts[H]]

a1 ∧ a2 ∧ a3
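The closed-form coefficients derived above can be checked numerically. The sketch below (assuming Python with numpy, not part of the GrassmannAlgebra package) builds a, b, and c from scalar products of random vectors and verifies that the three refactored Gram-Schmidt factors are mutually orthogonal.

```python
import numpy as np

rng = np.random.default_rng(3)
a1, a2, a3 = rng.standard_normal((3, 4))

# Closed-form Gram-Schmidt coefficients derived in the text
a = -np.dot(a1, a2) / np.dot(a1, a1)
den = np.dot(a1, a1) * np.dot(a2, a2) - np.dot(a1, a2) ** 2   # (a1^a2).(a1^a2)
b = (np.dot(a1, a2) * np.dot(a2, a3) - np.dot(a1, a3) * np.dot(a2, a2)) / den
c = (np.dot(a1, a2) * np.dot(a1, a3) - np.dot(a1, a1) * np.dot(a2, a3)) / den

f1, f2, f3 = a1, a2 + a * a1, a3 + b * a1 + c * a2            # the refactored factors
assert abs(np.dot(f1, f2)) < 1e-10
assert abs(np.dot(f1, f3)) < 1e-10
assert abs(np.dot(f2, f3)) < 1e-10
```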

Proving the Zero Interior Sum Theorem


Proving the Interior Sum Theorem is now straightforward and relies on two results: 1. That an m:k-form is invariant to factorizations of its simple m-element. 2. That a simple m-element may be refactorized into mutually orthogonal factors. For the first result we rely on the discussion of multilinear forms in Chapter 2. For the second result we rely on the well-known Gram-Schmidt process discussed in the previous section. Consider an arbitrary simple m-element A.
A ≡ a1 ∧ a2 ∧ a3 ∧ ⋯ ∧ am


Any m:k-form which is the sum of interior products of k-span and k-cospan elements is given by the expression:

F[m:k] ≡ Σ[𝔰k[A] ∘ 𝔠k[A]]


This expression is invariant to factorizations of A. By the Gram-Schmidt process A may be expressed as the exterior product of mutually orthogonal factors. The form based on this refactorization is clearly zero since it is composed of sums of terms, each of which involves the interior product of mutually orthogonal factors. Hence the theorem is proved.

Example
To demonstrate this proof in a simple case, take A to be a simple 3-element.
A = a1 ∧ a2 ∧ a3;

The 3:2-form based on A is


F32 = Σ[𝔰2[A] ∘ 𝔠2[A]]

(a1 ∧ a2) ∘ a3 + (a1 ∧ a3) ∘ (−a2) + (a2 ∧ a3) ∘ a1

Orthogonalize A by the Gram-Schmidt process to rewrite it as the exterior product of mutually orthogonal factors.
A = a1 ∧ (a2 + a a1) ∧ (a3 + b a1 + c a2);

The form F32 may be rewritten as


H32 = Σ[𝔰2[A] ∘ 𝔠2[A]]

(a1 ∧ (a a1 + a2)) ∘ (b a1 + c a2 + a3) + (a1 ∧ (b a1 + c a2 + a3)) ∘ (−a a1 − a2) + ((a a1 + a2) ∧ (b a1 + c a2 + a3)) ∘ a1

Since the factors are mutually orthogonal, each term is individually zero. Hence H32 ≡ 0. Since H32 is equal to the original interior sum F32, the theorem is confirmed.

GrassmannExpandAndSimplify[H32 ≡ F32]

True

6.10 The Cross Product


Defining a generalized cross product
The cross or vector product of the three-dimensional vector algebra of Gibbs et al. [Gibbs 1928] corresponds to two operations in Grassmann's more general algebra. Taking the cross product of two vectors in three dimensions corresponds to taking the complement of their exterior product. However, whilst the usual cross product formulation is valid only for vectors in three dimensions, the exterior product formulation is valid for elements of any grade in any number of dimensions. Therefore the opportunity exists to generalize the concept. Because our generalization reduces to the usual definition under the usual circumstances, we take the liberty of continuing to refer to the generalized cross product as, simply, the cross product.

The cross product of α_m and β_k is denoted α_m × β_k and is defined as the complement of their exterior product. The cross product of an m-element and a k-element is thus an (n−(m+k))-element.

α_m × β_k ≡ \overline{α_m ∧ β_k}    6.95

This definition preserves the basic property of the cross product: that the cross product of two elements is an element orthogonal to both, and reduces to the usual notion for vectors in a three dimensional metric vector space.

Cross products involving 1-elements


For 1-elements xi the definition above has the following consequences, independent of the dimension of the space.

The triple cross product


The triple cross product is a 1-element in any number of dimensions.
(x1 × x2) × x3 ≡ \overline{\overline{x1 ∧ x2} ∧ x3} ≡ (x1 ∧ x2) ∘ x3

(x1 × x2) × x3 ≡ (x1 ∧ x2) ∘ x3 ≡ (x3 ∘ x1) x2 − (x3 ∘ x2) x1    6.96

x1 × (x2 × x3) ≡ \overline{x1 ∧ \overline{x2 ∧ x3}} ≡ (−1)^(n−2) \overline{\overline{x2 ∧ x3} ∧ x1}

x1 × (x2 × x3) ≡ (−1)^n (x2 ∧ x3) ∘ x1    6.97
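Formula 6.96 can be checked directly in three dimensions, where the generalized cross product coincides with the usual one. The sketch below assumes Python with numpy (it is not part of the GrassmannAlgebra package).

```python
import numpy as np

rng = np.random.default_rng(4)
x1, x2, x3 = rng.standard_normal((3, 3))

# Formula 6.96 in n = 3: (x1 x x2) x x3 == (x3 . x1) x2 - (x3 . x2) x1
lhs = np.cross(np.cross(x1, x2), x3)
rhs = np.dot(x3, x1) * x2 - np.dot(x3, x2) * x1
assert np.allclose(lhs, rhs)
```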


The box product or triple vector product


The box product, or triple vector product, is an (n−3)-element, and therefore a scalar only in three dimensions.

(x1 × x2) ∘ x3 ≡ \overline{x1 ∧ x2} ∘ x3 ≡ \overline{x1 ∧ x2 ∧ x3}

(x1 × x2) ∘ x3 ≡ \overline{x1 ∧ x2 ∧ x3}    6.98

Because Grassmann identified n-elements with their scalar Euclidean complements (see the historical note in the introduction to Chapter 5), he considered x1∧x2∧x3 in a 3-space to be a scalar. His notation for the exterior product of elements was to use square brackets [x1 x2 x3], thus originating the 'box' product notation for (x1 × x2) ∘ x3 used in the three-dimensional vector algebra in the early decades of the twentieth century.

The scalar product of two cross products


The scalar product of two cross products is a scalar in any number of dimensions.

(x1 × x2) ∘ (x3 × x4) ≡ \overline{x1 ∧ x2} ∘ \overline{x3 ∧ x4} ≡ (x1 ∧ x2) ∘ (x3 ∧ x4)

(x1 × x2) ∘ (x3 × x4) ≡ (x1 ∧ x2) ∘ (x3 ∧ x4)    6.99
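In three dimensions, formula 6.99 together with the expansion of (x1 ∧ x2) ∘ (x3 ∧ x4) into scalar products is the classical Binet-Cauchy (Lagrange) identity. A numeric check, assuming Python with numpy (not part of the GrassmannAlgebra package):

```python
import numpy as np

rng = np.random.default_rng(5)
x1, x2, x3, x4 = rng.standard_normal((4, 3))

# (x1 x x2) . (x3 x x4) == (x1^x2).(x3^x4) == (x1.x3)(x2.x4) - (x1.x4)(x2.x3)
lhs = np.dot(np.cross(x1, x2), np.cross(x3, x4))
rhs = np.dot(x1, x3) * np.dot(x2, x4) - np.dot(x1, x4) * np.dot(x2, x3)
assert np.isclose(lhs, rhs)
```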

The cross product of two cross products


The cross product of two cross products is a (4−n)-element, and therefore a 1-element only in three dimensions. It corresponds to the regressive product of two exterior products.

(x1 × x2) × (x3 × x4) ≡ \overline{\overline{x1 ∧ x2} ∧ \overline{x3 ∧ x4}} ≡ (x1 ∧ x2) ∨ (x3 ∧ x4)

(x1 × x2) × (x3 × x4) ≡ (x1 ∧ x2) ∨ (x3 ∧ x4)    6.100

Note that this product, unlike the other products above, does not rely on a metric, because the expression involves only exterior and regressive products.

Implications of the axioms for the cross product


By expressing one or more elements as a complement, axioms for the exterior and regressive products may be rewritten in terms of the cross product, thus yielding some of its more fundamental properties.


6: The cross product of an m-element and a k-element is an (n−(m+k))-element.

The grade of the cross product of an m-element and a k-element is n−(m+k).

α_m ∈ L, β_k ∈ L  ⟹  α_m × β_k is of grade n−(m+k)    6.101

7: The cross product is not associative.

The cross product is not associative. However, it can be expressed in terms of exterior and interior products.

(α_m × β_k) × γ_r ≡ (−1)^((n−1)(m+k)) (α_m ∧ β_k) ∘ γ_r    6.102

α_m × (β_k × γ_r) ≡ (−1)^((n−(k+r))(m+k+r)) (β_k ∧ γ_r) ∘ α_m    6.103

8: The cross product with unity yields the complement.

The cross product of an element with the unit scalar 1 yields its complement. Thus the complement operation may be viewed as the cross product with unity.

1 × α_m ≡ α_m × 1 ≡ ᾱ_m    6.104

The cross product of an element with a scalar yields that scalar multiple of its complement.

a × α_m ≡ α_m × a ≡ a ᾱ_m    6.105

9: The cross product of unity with itself is the unit n-element.

The cross product of 1 with itself is the unit n-element \overline{1}. The cross product of a scalar and its reciprocal is also the unit n-element.

1 × 1 ≡ \overline{1} ≡ a × (1/a)    6.106

10: The cross product of two 1-elements anti-commutes.

The cross product of two elements is equal (apart from a possible sign) to their cross product in reverse order. The cross product of two 1-elements is anti-commutative, just as is the exterior product.

α_m × β_k ≡ (−1)^(m k) β_k × α_m    6.107

11: The cross product with zero is zero.

0 × α_m ≡ 0 ≡ α_m × 0    6.108

12: The cross product is both left and right distributive under addition.

(α_m + β_m) × γ_r ≡ α_m × γ_r + β_m × γ_r    6.109

α_m × (β_r + γ_r) ≡ α_m × β_r + α_m × γ_r    6.110

The cross product as a universal product


We have already shown that all products can be expressed in terms of the exterior product and the complement operation. Additionally, 8 above shows that the complement operation may be written as the cross product with unity.

1 × α_m ≡ α_m × 1 ≡ ᾱ_m

We can therefore write the exterior, regressive, and interior products in terms only of the cross product.

α_m ∧ β_k ≡ (−1)^((m+k)(n−(m+k))) 1 × (α_m × β_k)    6.111


α_m ∨ β_k ≡ (−1)^(m(n−m)+k(n−k)) (1 × α_m) × (1 × β_k)    6.112

α_m ∘ β_k ≡ (−1)^(m(n−m)) (1 × α_m) × β_k    6.113

These formulae show that any result in the Grassmann algebra involving exterior, regressive or interior products, or complements, could be expressed in terms of the generalized cross product alone. This is somewhat reminiscent of the role played by the Sheffer stroke (or Peirce arrow) symbol of Boolean algebra.

Cross product formulae


The complement of a cross product
The complement of a cross product of two elements is, but for a possible sign, the exterior product of the elements.

\overline{α_m × β_k} ≡ (−1)^((m+k)(n−(m+k))) α_m ∧ β_k    6.114

The Common Factor Axiom for cross products


The Common Factor Axiom can be written for m+k+p = n.

(α_m ∧ γ_p) ∨ (β_k ∧ γ_p) ≡ ((α_m ∧ β_k) × γ_p) γ_p ≡ (−1)^(p(n−p)) (γ_p ∘ (α_m × β_k)) γ_p    6.115

Product formulae for cross products


Of course, many of the product formulae derived previously have counterparts involving cross products. For example the complement of formula 3.41 may be written:

(α_m × β_k) ∧ x ≡ (−1)^(n−1) (α_m ∘ x) × β_k + (−1)^m α_m × (β_k ∘ x)    6.116

The measure of a cross product


The measure of the cross product of two elements is equal to the measure of their exterior product.


‖α_m × β_k‖ ≡ ‖α_m ∧ β_k‖    6.117

6.11 The Triangle Formulae


Triangle components
In this section several formulae will be developed which are generalizations of the fundamental Pythagorean relationship for the right-angled triangle and the trigonometric identity cos²(θ) + sin²(θ) = 1. These formulae will be found useful in determining projections, shortest distances between multiplanes, and a variety of other geometric results. The starting point is the product formula 6.89 with k = m.

(α_m ∘ β_m) ∧ x ≡ (α_m ∧ x) ∘ β_m + (−1)^(m−1) α_m ∘ (β_m ∘ x)

By dividing through by the inner product α_m ∘ β_m, we obtain a decomposition of the 1-element x into two components.

x ≡ ((α_m ∧ x) ∘ β_m)/(α_m ∘ β_m) + (−1)^(m−1) (α_m ∘ (β_m ∘ x))/(α_m ∘ β_m)    6.118
Now take formula 6.91:


a b x 1 x 2 K a x 1 O b x 2 + H - 1 L m +k K a x 2 O b x 1
m k m k m k

and put k = m, x1 equal to x and x2 equal to z.

Ka bO Hx zL Ka xO Kb zO + Ka zO Kb xO
m m m m m m

6.119

When a is equal to b and x is equal to z, this reduces to a relationship similar to the fundamental
m m

trigonometric identity.

` ` ` ` x x Ka xO Ka xO + Ka xO Ka xO
m m m m

6.120

Or, more succinctly:


‖α̂_m ∧ x̂‖² + ‖α̂_m ∘ x̂‖² ≡ 1    6.121

Similarly, formula 6.118 reduces to a more obvious decomposition for x in terms of a unit m-element.

x ≡ (α̂_m ∧ x) ∘ α̂_m + (−1)^(m−1) α̂_m ∘ (α̂_m ∘ x)    6.122

Because of its geometric significance when α_m is simple, the properties of this equation are worth investigating further. It will be shown that the square (scalar product with itself) of each of its terms is the corresponding term of formula 6.120. That is:

((α̂_m ∧ x̂) ∘ α̂_m) ∘ ((α̂_m ∧ x̂) ∘ α̂_m) ≡ (α̂_m ∧ x̂) ∘ (α̂_m ∧ x̂)    6.123

(α̂_m ∘ (α̂_m ∘ x̂)) ∘ (α̂_m ∘ (α̂_m ∘ x̂)) ≡ (α̂_m ∘ x̂) ∘ (α̂_m ∘ x̂)    6.124

Further, taking the scalar product of formula 6.122 with itself, and using formulae 6.120, 6.123, and 6.124, yields the result that the terms on the right-hand side of formula 6.122 are orthogonal.

((α̂_m ∧ x̂) ∘ α̂_m) ∘ (α̂_m ∘ (α̂_m ∘ x̂)) ≡ 0    6.125
It is these facts that suggest the name triangle formulae for formulae 6.121 and 6.122.

[Diagram of a vector x decomposed into components in and orthogonal to α_m.]

The measure of the triangle components


Let α̂_m ≡ a1 ∧ ⋯ ∧ am, then:

((α̂_m ∧ x) ∘ α̂_m) ∘ ((α̂_m ∧ x) ∘ α̂_m) ≡ (−1)^m ((α̂_m ∧ x) ∘ ((α̂_m ∧ x) ∘ α̂_m)) ∘ α̂_m

Focusing now on the first factor of the inner product on the right-hand side we get:


(α̂_m ∧ x) ∘ ((α̂_m ∧ x) ∘ α̂_m) ≡ (a1 ∧ ⋯ ∧ am ∧ x) ∘ ((α̂_m ∧ x) ∘ α̂_m)

Note that the second factor in the interior product of the right-hand side is of grade 1. We apply the Interior Common Factor Theorem to give:

(a1 ∧ ⋯ ∧ am ∧ x) ∘ ((α̂_m ∧ x) ∘ α̂_m) ≡ (−1)^m (((α̂_m ∧ x) ∘ α̂_m) ∘ x) a1 ∧ ⋯ ∧ am
    + Σ_i (−1)^(i−1) (((α̂_m ∧ x) ∘ α̂_m) ∘ a_i) a1 ∧ ⋯ ∧ a_{i−1} ∧ a_{i+1} ∧ ⋯ ∧ am ∧ x

But the terms in this last sum are zero since:

((α̂_m ∧ x) ∘ α̂_m) ∘ a_i ≡ (α̂_m ∧ x) ∘ (α̂_m ∧ a_i) ≡ 0

This leaves only the first term in the expansion. Thus:

(α̂_m ∧ x) ∘ ((α̂_m ∧ x) ∘ α̂_m) ≡ (−1)^m (((α̂_m ∧ x) ∘ α̂_m) ∘ x) α̂_m ≡ (−1)^m ((α̂_m ∧ x) ∘ (α̂_m ∧ x)) α̂_m

Since α̂_m ∘ α̂_m ≡ 1, we have finally that:

((α̂_m ∧ x) ∘ α̂_m) ∘ ((α̂_m ∧ x) ∘ α̂_m) ≡ (α̂_m ∧ x) ∘ (α̂_m ∧ x)

The measure of (α̂_m ∧ x) ∘ α̂_m is therefore the same as the measure of α̂_m ∧ x.

‖(α̂_m ∧ x) ∘ α̂_m‖ ≡ ‖α̂_m ∧ x‖    6.126
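For m = 1 formula 6.126 can be checked numerically: with a unit vector â, (â ∧ x) ∘ â ≡ x − (â ∘ x) â is the component of x orthogonal to â, and its length equals ‖â ∧ x‖, which in three dimensions is the length of the cross product â × x. The sketch below assumes Python with numpy (not part of the GrassmannAlgebra package).

```python
import numpy as np

rng = np.random.default_rng(6)
a = rng.standard_normal(3)
a /= np.linalg.norm(a)                  # the unit 1-element a^
x = rng.standard_normal(3)

comp = x - np.dot(a, x) * a             # (a^ ^ x) . a^ for m = 1
assert np.isclose(np.linalg.norm(comp),             # |(a^ ^ x) . a^|
                  np.linalg.norm(np.cross(a, x)))   # |a^ ^ x| via the 3-D cross product
```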

Equivalent forms for the triangle components


There is an interesting relationship between the two terms (α_m ∧ x) ∘ α_m and α_m ∘ (α_m ∘ x) which enables formula 6.124 to be proved in an identical manner to formula 6.123. Using formula 6.16 we find that each form may be expressed (apart from a possible sign) in the other form, provided α_m is replaced by its complement ᾱ_m.


(α_m ∧ x) ∘ α_m ≡ (−1)^(n−m−1) ᾱ_m ∘ (ᾱ_m ∘ x)    6.127

In a similar manner it may be shown that:

α_m ∘ (α_m ∘ x) ≡ (−1)^(m−1) (ᾱ_m ∧ x) ∘ ᾱ_m    6.128

Since the proof of formula 6.123 is valid for any simple α̂_m, it is also valid for the unit complement ᾱ̂_m, since this is also simple. The proof of formula 6.124 is then completed by applying formula 6.127.

The next four sections will look at some applications of these formulae.

6.12 Angle
Defining the angle between elements
The angle between 1-elements
The interior product enables us to define, as is standard practice, the angle θ between two 1-elements x1 and x2.

Cos[θ]² ≡ (x̂1 ∘ x̂2)²    6.129

[Diagram of two vectors showing the angle between them.]

However, putting α_m equal to x1 and x equal to x2 in formula 6.121 yields:

(x̂1 ∘ x̂2)² + ‖x̂1 ∧ x̂2‖² ≡ 1    6.130

This, together with the definition for angle above, implies that:

Sin[θ]² ≡ ‖x̂1 ∧ x̂2‖²    6.131

This is a formula equivalent to the three-dimensional cross product formulation:


Sin[θ]² ≡ ‖x̂1 × x̂2‖²

but one which is not restricted to three-dimensional space. Thus formula 6.130 is an identity equivalent to sin²(θ) + cos²(θ) = 1.
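The identity 6.130 is easy to verify numerically in three dimensions, where ‖x̂1 ∧ x̂2‖ equals the length of the cross product. This sketch assumes Python with numpy (not part of the GrassmannAlgebra package).

```python
import numpy as np

rng = np.random.default_rng(7)
x1 = rng.standard_normal(3); x1 /= np.linalg.norm(x1)   # unit 1-elements
x2 = rng.standard_normal(3); x2 /= np.linalg.norm(x2)

cos2 = np.dot(x1, x2) ** 2                              # (x1^ . x2^)^2
sin2 = np.dot(np.cross(x1, x2), np.cross(x1, x2))       # |x1^ ^ x2^|^2 in 3-D
assert np.isclose(cos2 + sin2, 1.0)                     # formula 6.130
```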

The angle between a 1-element and a simple m-element


One may however carry this concept further, and show that it is meaningful to talk about the angle between a 1-element x and a simple m-element α_m, for which the general formula 6.121 holds.

‖α̂_m ∧ x̂‖² + ‖α̂_m ∘ x̂‖² ≡ 1

Sin[θ]² ≡ ‖α̂_m ∧ x̂‖²    6.132

Cos[θ]² ≡ ‖α̂_m ∘ x̂‖²    6.133

We will explore this more fully in what follows.

The angle between a vector and a bivector


As a simple example, take the case where x is rewritten as x1 and α_m is interpreted as the bivector x2 ∧ x3.

[Diagram of three vectors showing the angles between them, and the angle between the bivector and the vector.]

The angle between any two of the vectors is given by formula 6.129.

Cos[θij]² ≡ (x̂i ∘ x̂j)²

The cosine of the angle between the vector x1 and the bivector x2 ∧ x3 may be obtained from formula 6.124.

Cos[θ]² ≡ (((x2 ∧ x3) ∘ x1) ∘ ((x2 ∧ x3) ∘ x1)) / (((x2 ∧ x3) ∘ (x2 ∧ x3)) (x1 ∘ x1))

To express the right-hand side in terms of angles, let:


A = (((x2 ∧ x3) ∘ x1) ∘ ((x2 ∧ x3) ∘ x1)) / (((x2 ∧ x3) ∘ (x2 ∧ x3)) (x1 ∘ x1))

First, convert the interior products to scalar products and then convert the scalar products to angle form given by formula 6.129. GrassmannAlgebra provides the function ToAngleForm for doing this in one operation.

ToAngleForm[A, θ]

(Cos[θ1,2]² + Cos[θ1,3]² − 2 Cos[θ1,2] Cos[θ1,3] Cos[θ2,3]) Csc[θ2,3]²
This result may be verified by elementary geometry to be Cos[θ]², where θ is defined in the diagram above. Thus we see a verification that formula 6.124 permits the calculation of the angle between a vector and an m-vector in terms of the angles between the given vector and any m vector factors of the m-vector.
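The angle-form expression just derived can be checked against an independent computation of the same angle: the cosine of the angle between x1 and the plane of x2 and x3 is the ratio of the length of the orthogonal projection of x1 onto that plane to the length of x1. The sketch below assumes Python with numpy (not part of the GrassmannAlgebra package) and uses a QR factorization to build an orthonormal basis of the plane.

```python
import numpy as np

rng = np.random.default_rng(8)
x1, x2, x3 = rng.standard_normal((3, 3))

def cosa(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

c12, c13, c23 = cosa(x1, x2), cosa(x1, x3), cosa(x2, x3)
# The Csc[q23]^2 expression from ToAngleForm, as Cos[q]^2
rhs = (c12**2 + c13**2 - 2 * c12 * c13 * c23) / (1 - c23**2)

# Cos[q]^2 computed directly: project x1 onto span{x2, x3}
Q, _ = np.linalg.qr(np.stack([x2, x3], axis=1))   # orthonormal basis of the plane
proj = Q @ (Q.T @ x1)
lhs = np.dot(proj, proj) / np.dot(x1, x1)
assert np.isclose(lhs, rhs)
```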

The angle between two bivectors


Suppose we have two bivectors x₁⋀x₂ and x₃⋀x₄. In a 3-space we can find the vector complements of the bivectors, and then find the angle between these vectors. This is equivalent to the cross product formulation. Let θ be the angle between these vectors; then:

Cos[θ] == ((x₁⋀x₂)‾ ⋅ (x₃⋀x₄)‾) / (|(x₁⋀x₂)‾| |(x₃⋀x₄)‾|)

But from formulas 6.28 and 6.67 we can remove the complement operations to get the simpler expression:

Cos[θ] == ((x₁⋀x₂) ⋅ (x₃⋀x₄)) / (|x₁⋀x₂| |x₃⋀x₄|)        6.134

Note that, in contradistinction to the 3-space formulation using cross products, this formulation is valid in a space of any number of dimensions. An actual calculation is most readably expressed by expanding each of the terms separately. We can either represent the xᵢ in terms of basis elements or deal with them directly. A direct expansion, valid for any metric, is:

ToScalarProducts[(x₁⋀x₂) ⋅ (x₃⋀x₄)]
−(x₁ ⋅ x₄)(x₂ ⋅ x₃) + (x₁ ⋅ x₃)(x₂ ⋅ x₄)


Measure[x₁⋀x₂] Measure[x₃⋀x₄]
Sqrt[−(x₁ ⋅ x₂)² + (x₁ ⋅ x₁)(x₂ ⋅ x₂)] Sqrt[−(x₃ ⋅ x₄)² + (x₃ ⋅ x₃)(x₄ ⋅ x₄)]
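The scalar-product expansion above is the Binet-Cauchy (Lagrange) identity in disguise. A small Python check (independent of the GrassmannAlgebra package; a Euclidean metric is assumed, and integer vectors keep the arithmetic exact) confirms that the expansion of (x₁⋀x₂)⋅(x₃⋀x₄) agrees with the cross product formulation, and that the Measure expansion agrees with |x₁×x₂|².

```python
# Integer vectors make every comparison exact.
x1, x2, x3, x4 = [1, 2, 3], [4, 0, -1], [2, -2, 5], [0, 3, 1]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

# Scalar-product expansion of (x1^x2).(x3^x4):
ip = dot(x1, x3)*dot(x2, x4) - dot(x1, x4)*dot(x2, x3)

# In 3-space this must agree with (x1 x x2).(x3 x x4) -- Binet-Cauchy:
assert ip == dot(cross(x1, x2), cross(x3, x4))

# Measure expansion: |x1^x2|^2 = (x1.x1)(x2.x2) - (x1.x2)^2.
assert dot(cross(x1, x2), cross(x1, x2)) == dot(x1, x1)*dot(x2, x2) - dot(x1, x2)**2
```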

The volume of a parallelepiped


We can calculate the volume of a parallelepiped as the measure of the trivector whose vectors make up the sides of the parallelepiped. GrassmannAlgebra provides the function Measure for expressing the volume in terms of scalar products:

V = Measure[x₁⋀x₂⋀x₃]
Sqrt[−(x₁ ⋅ x₃)² (x₂ ⋅ x₂) + 2 (x₁ ⋅ x₂)(x₁ ⋅ x₃)(x₂ ⋅ x₃) − (x₁ ⋅ x₁)(x₂ ⋅ x₃)² − (x₁ ⋅ x₂)² (x₃ ⋅ x₃) + (x₁ ⋅ x₁)(x₂ ⋅ x₂)(x₃ ⋅ x₃)]

Note that this has been simplified somewhat as permitted by the symmetry of the scalar product. By putting this in its angle form we get the usual expression for the volume of a parallelepiped:

ToAngleForm[V, θ]
Sqrt[−|x₁|² |x₂|² |x₃|² (−1 + Cos[θ₁,₂]² + Cos[θ₁,₃]² − 2 Cos[θ₁,₂] Cos[θ₁,₃] Cos[θ₂,₃] + Cos[θ₂,₃]²)]

A slight rearrangement gives the volume of the parallelepiped as:

V == |x₁| |x₂| |x₃| Sqrt[1 + 2 Cos[θ₁,₂] Cos[θ₁,₃] Cos[θ₂,₃] − Cos[θ₁,₂]² − Cos[θ₁,₃]² − Cos[θ₂,₃]²]        6.135
We can of course use the same approach in any number of dimensions. For example, the 'volume' of a 4-dimensional parallelepiped in terms of the lengths of its sides and the angles between them is:
!4; ToAngleForm[Measure[x₁⋀x₂⋀x₃⋀x₄], θ]
Sqrt[|x₁|² |x₂|² |x₃|² |x₄|² (1 − Cos[θ₂,₃]² − Cos[θ₂,₄]² + 2 Cos[θ₁,₃] Cos[θ₁,₄] Cos[θ₃,₄] − Cos[θ₃,₄]² + 2 Cos[θ₂,₃] Cos[θ₂,₄] (−Cos[θ₁,₃] Cos[θ₁,₄] + Cos[θ₃,₄]) + 2 Cos[θ₁,₂] (Cos[θ₁,₄] (Cos[θ₂,₄] − Cos[θ₂,₃] Cos[θ₃,₄]) + Cos[θ₁,₃] (Cos[θ₂,₃] − Cos[θ₂,₄] Cos[θ₃,₄])) − Cos[θ₁,₄]² Sin[θ₂,₃]² − Cos[θ₁,₃]² Sin[θ₂,₄]² − Cos[θ₁,₂]² Sin[θ₃,₄]²)]
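The quantity under the square root in each of these Measure computations is the Gram determinant of the vectors. The following Python sketch (an independent numeric check, not the GrassmannAlgebra package; integer vectors for exactness) verifies that the Gram determinant equals the square of the ordinary determinant of the matrix of components, in both 3 and 4 dimensions.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def det(m):
    # Laplace expansion along the first row; fine for small matrices.
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j, a in enumerate(m[0]):
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1)**j * a * det(minor)
    return total

def gram_det(vectors):
    # Determinant of the matrix of mutual scalar products.
    return det([[dot(u, v) for v in vectors] for u in vectors])

# 3-dimensional parallelepiped: squared volume = Gram determinant = det(M)^2.
v3 = [[1, 2, 0], [0, 1, 3], [2, -1, 1]]
assert gram_det(v3) == det(v3)**2

# The same approach works in 4 dimensions:
v4 = [[1, 0, 2, -1], [3, 1, 0, 0], [0, 2, 1, 1], [1, 1, 1, 2]]
assert gram_det(v4) == det(v4)**2
```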


6.13 Projection
To be completed.

6.14 Interior Products of Interpreted Elements


To be completed.

6.15 The Closest Approach of Multiplanes


To be completed.


7 Exploring Screw Algebra

7.1 Introduction
In Chapter 8: Exploring Mechanics, we will see that systems of forces and momenta, and the velocity and infinitesimal displacement of a rigid body may be represented by the sum of a bound vector and a bivector. We have already noted in Chapter 1 that a single force is better represented by a bound vector than by a (free) vector. Systems of forces are then better represented by sums of bound vectors; and a sum of bound vectors may always be reduced to the sum of a single bound vector and a single bivector. We call the sum of a bound vector and a bivector a 2-entity. These geometric entities are therefore worth exploring for their ability to represent the principal physical entities of mechanics.

In this chapter we begin by establishing some properties of 2-entities in an n-plane, and then show how, in a 3-plane, they take on a particularly symmetrical and potent form. This form, a 2-entity in a 3-plane, is called a screw. Since it is in the 3-plane that we wish to explore 3-dimensional mechanics we then explore the properties of screws in more detail.

The objective of this chapter then, is to lay the algebraic and geometric foundations for the chapter on mechanics to follow.

Historical Note The classic text on screws and their application to mechanics is by Sir Robert Stawell Ball: A Treatise on the Theory of Screws (1900). Ball was aware of Grassmann's work as he explains in the Biographical Notes to this text in a comment on the Ausdehnungslehre of 1862.

This remarkable work, a development of an earlier volume (1844), by the same author, contains much that is of instruction and interest in connection with the present theory. ... Here we have a very general theory, which includes screw coordinates as a particular case.
The principal proponent of screw theory from a Grassmannian viewpoint was Edward Wyllys Hyde. In 1888 he wrote a paper entitled 'The Directional Theory of Screws' on which Ball comments, again in the Biographical Notes to A Treatise on the Theory of Screws.

The author writes: "I shall define a screw to be the sum of a point-vector and a planevector perpendicular to it, the former being a directed and posited line, the latter the product of two vectors, hence a directed but not posited plane." Prof. Hyde proves by his [sic] calculus many of the fundamental theorems in the present theory in a very concise manner.


7.2 A Canonical Form for a 2-Entity


The canonical form
The most general 2-entity in a bound vector space may always be written as the sum of a bound vector and a bivector.

𝒮 == P⋀α + β

Here, P is a point, α a vector and β a bivector. Remember that only in vector 1, 2 and 3-spaces are bivectors necessarily simple. In what follows we will show that in a metric space the point P may always be chosen in such a way that the bivector β is orthogonal to the vector α. This property, when specialised to three-dimensional space, is important for the theory of screws to be developed in the rest of the chapter. We show it as follows: To the above equation, add and subtract the bound vector P*⋀α such that (P − P*)⋅α == 0, giving:

𝒮 == P*⋀α + β*        β* == β + (P − P*)⋀α

We want to choose P* such that β* is orthogonal to α, that is:

(β + (P − P*)⋀α) ⋅ α == 0

Expanding the left-hand side of this equation gives:

β⋅α + (α⋅(P − P*)) α − (α⋅α)(P − P*) == 0

But from our first condition on the choice of P* (that is, (P − P*)⋅α == 0) we see that the second term is zero, giving:

P* == P − (β⋅α)/(α⋅α)

whence β* may be expressed as:

β* == β + ((β⋅α)/(α⋅α)) ⋀ α

Finally, substituting for P* and β* in the expression for 𝒮 above gives the canonical form for the 2-entity 𝒮, in which the new bivector component is now orthogonal to the vector α. The bound vector component defines a line called the central axis of 𝒮.


𝒮 == (P − (β⋅α)/(α⋅α)) ⋀ α + (β + ((β⋅α)/(α⋅α)) ⋀ α)        7.1
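Formula 7.1 can be spot-checked numerically. In the Python sketch below (an independent check, not GrassmannAlgebra package code) a bivector is stored as the antisymmetric matrix of its components, the interior product of a bivector with a vector is computed from the expansion (u⋀v)⋅w == (u⋅w)v − (v⋅w)u used later in this chapter, and exact rational arithmetic confirms that the corrected bivector β* is orthogonal to α.

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def wedge(u, v):
    # Bivector u^v stored as the antisymmetric matrix u_i v_j - u_j v_i.
    n = len(u)
    return [[u[i]*v[j] - u[j]*v[i] for j in range(n)] for i in range(n)]

def interior(B, w):
    # (u^v).w = (u.w)v - (v.w)u, extended linearly over the bivector B:
    # component i is sum_j w_j B[j][i].
    n = len(w)
    return [sum(w[j]*B[j][i] for j in range(n)) for i in range(n)]

def bivector_add(B1, B2):
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(B1, B2)]

alpha = [Fraction(2), Fraction(-1), Fraction(3)]
# A bivector (in a 3-space every bivector is simple, but the check is linear anyway):
beta = bivector_add(wedge([1, 4, 0], [2, 1, 5]), wedge([0, 2, 1], [3, 0, 1]))

# beta* = beta + ((beta.alpha)/(alpha.alpha)) ^ alpha  -- the bivector term of 7.1.
aa = dot(alpha, alpha)
w = [c / aa for c in interior(beta, alpha)]
beta_star = bivector_add(beta, wedge(w, alpha))

# The new bivector is orthogonal to alpha: beta* . alpha == 0.
assert interior(beta_star, alpha) == [0, 0, 0]
```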

Canonical forms in an n-plane


Canonical forms in a 1-plane
In a bound vector space of one dimension, that is a 1-plane, there are points, vectors, and bound vectors. There are no bivectors. Hence every 2-entity is of the form 𝒮 == P⋀α and is therefore in some sense already in its canonical form.

Canonical forms in a 2-plane
In a bound vector space of two dimensions (the plane), every bound element can be expressed as a bound vector. To see this we note that any bivector is simple and can be expressed using the vector of the bound vector as one of its factors. For example, if P is a point and α, β₁, β₂ are vectors, any bound element in the plane can be expressed in the form:

P⋀α + β₁⋀β₂

But because any bivector in the plane can be expressed as a scalar factor times any other bivector, we can also write:

β₁⋀β₂ == ν⋀α

for some vector ν, so that the bound element may now be written as the bound vector:

P⋀α + β₁⋀β₂ == P⋀α + ν⋀α == (P + ν)⋀α == P*⋀α

We can use GrassmannAlgebra to verify that formula 7.1 gives this result in a 2-space. First we declare the 2-space, and write α and β in terms of basis elements:

'2; α = a e₁ + b e₂; β = c e₁⋀e₂;

For the canonical expression [7.1] above we wish to show that the bivector term is zero. We do this by converting the interior products to scalar products using ToScalarProducts.

Simplify[ToScalarProducts[β + ((β⋅α)/(α⋅α)) ⋀ α]]
0

In sum: A bound 2-element in the plane, P⋀α + β, may always be expressed as a bound vector:


𝒮 == P⋀α + β == (P − (β⋅α)/(α⋅α)) ⋀ α        7.2
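The same machinery used above confirms the 2-plane result numerically: in two dimensions the corrective term of formula 7.1 is the negative of β itself, so the bivector term of the canonical form vanishes identically. (Again a package-independent Python check with exact rationals; the helper names are ad hoc.)

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def wedge(u, v):
    n = len(u)
    return [[u[i]*v[j] - u[j]*v[i] for j in range(n)] for i in range(n)]

def interior(B, w):
    n = len(w)
    return [sum(w[j]*B[j][i] for j in range(n)) for i in range(n)]

alpha = [Fraction(3), Fraction(1)]
beta = wedge([Fraction(2), Fraction(5)], [Fraction(-1), Fraction(4)])  # any bivector in the plane

aa = dot(alpha, alpha)
w = [c / aa for c in interior(beta, alpha)]
correction = wedge(w, alpha)
beta_star = [[b + c for b, c in zip(r1, r2)] for r1, r2 in zip(beta, correction)]

# In 2 dimensions the bivector term of the canonical form vanishes:
assert beta_star == [[0, 0], [0, 0]]
```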

Canonical forms in a 3-plane
In a bound vector space of three dimensions, that is a 3-plane, every 2-element can be expressed as the sum of a bound vector and a bivector orthogonal to the vector of the bound vector. (The bivector is necessarily simple, because all bivectors in a 3-space are simple.) Such canonical forms are called screws, and will be discussed in more detail in the sections to follow.

Creating 2-entities
A 2-entity may be created by applying the GrassmannAlgebra function CreateElement. For example in 3-dimensional space we create a 2-entity based on the symbol s by entering:

'3; S = CreateElement[s₂]
s₁ 𝒪⋀e₁ + s₂ 𝒪⋀e₂ + s₃ 𝒪⋀e₃ + s₄ e₁⋀e₂ + s₅ e₁⋀e₃ + s₆ e₂⋀e₃

Note that CreateElement automatically declares the generated symbols sᵢ as scalars by adding the pattern s_. We can confirm this by entering Scalars.

Scalars
{a, b, c, d, e, f, g, h, %, (_ ⋅ _)?InnerProductQ, s_, _₀}

To explicitly factor out the origin and express the 2-element in the form 𝒮 == P⋀α + β, we can use the GrassmannAlgebra function GrassmannSimplify.

GrassmannSimplify[S]
𝒪⋀(e₁ s₁ + e₂ s₂ + e₃ s₃) + s₄ e₁⋀e₂ + s₅ e₁⋀e₃ + s₆ e₂⋀e₃


7.3 The Complement of a 2-Entity


Complements in an n-plane
In this section an expression will be developed for the complement of a 2-entity in an n-plane with a metric. In an n-plane, the complement of the sum of a bound vector and a bivector is the sum of a bound (n−2)-vector and an (n−1)-vector. It is of pivotal consequence for the theory of mechanics that such a complement in a 3-plane (the usual three-dimensional space) is itself the sum of a bound vector and a bivector.

Geometrically, a quantity and its complement have the same measure but are orthogonal. In addition, for a 2-element (such as the sum of a bound vector and a bivector) in a 3-plane, the complement of the complement of an entity is the entity itself. These results will find application throughout Chapter 8: Exploring Mechanics.

The metric we choose to explore is the hybrid metric Gij defined in [5.33], in which the origin is orthogonal to all vectors, but otherwise the vector space metric is arbitrary.

The complement referred to the origin


Consider the general sum of a bound vector and a bivector expressed in its simplest form referred to the origin. "Referring" an element to the origin means expressing its bound component as bound through the origin, rather than through some other more general point.

X == 𝒪⋀α + β

The complement of X is then:

X̄ == (𝒪⋀α + β)‾

which, by formulae 5.41 and 5.43 derived in Chapter 5, gives:

X̄ == 𝒪⋀β̄ + ᾱ

X == 𝒪⋀α + β        X̄ == 𝒪⋀β̄ + ᾱ        7.3

It may be seen that the effect of taking the complement of X when it is in a form referred to the origin is equivalent to interchanging the vector α and the bivector β whilst taking their vector space complements. This means that the vector that was bound through the origin becomes a free (n−1)-vector, whilst the bivector that was free becomes an (n−2)-vector bound through the origin.

The complement referred to a general point


Let X == 𝒪⋀α + β and P == 𝒪 + ν. We can refer X to the point P by adding and subtracting ν⋀α to and from the above expression for X.


X == 𝒪⋀α + ν⋀α + β − ν⋀α == (𝒪 + ν)⋀α + (β − ν⋀α)

Or, equivalently:

X == P⋀α + β_P        β_P == β − ν⋀α

By manipulating the complement (P⋀α)‾, we can write it in the alternative form −ᾱ⋅P.

(P⋀α)‾ == (−α⋀P)‾ == −ᾱ∨P̄ == −ᾱ⋅P

Further, from formula 5.41, we have that:

−ᾱ⋅P == (𝒪⋀ᾱ)⋅P        (β_P)‾ == 𝒪⋀β̄_P

(where the bar inside the exterior products denotes the free, vector-space complement).

So that the relationship between X and its complement X̄ can finally be written:

X == P⋀α + β_P        X̄ == 𝒪⋀β̄_P + (𝒪⋀ᾱ)⋅P        7.4

Remember, this formula is valid for the hybrid metric [5.33] in an n-plane of arbitrary dimension. We explore its application to 3-planes in the section below.

7.4 The Screw


The definition of a screw
A screw is the canonical form of a 2-entity in a three-plane, and may always be written in the form:

𝒮 == P⋀α + s ᾱ        7.5

where:
α is the vector of the screw;
P⋀α is the central axis of the screw;
s is the pitch of the screw;
s ᾱ is the bivector of the screw.

Remember that ᾱ is the (free) complement of the vector α in the three-plane, and hence is a simple bivector.


The unit screw


Let the scalar a denote the magnitude of the vector α of the screw, so that α == a α̂. A unit screw 𝒮̂ may then be defined by 𝒮 == a 𝒮̂, and written as:

𝒮̂ == P⋀α̂ + s (α̂)‾        7.6

Note that the unit screw does not have unit magnitude. The magnitude of a screw will be discussed in Section 7.5.

The pitch of a screw


An explicit formula for the pitch s of a screw is obtained by taking the exterior product of the screw expression with its vector α.

𝒮⋀α == (P⋀α + s ᾱ)⋀α

The first term in the expansion of the right-hand side is zero, leaving:

𝒮⋀α == s (ᾱ⋀α)

Further, by taking the free complement of this expression and invoking the definition of the interior product we get:

(𝒮⋀α)‾ == s (ᾱ⋀α)‾ == s (α⋅α)

Dividing through by the square of the magnitude of α gives:

(𝒮⋀α̂)‾ == s (α̂⋅α̂) == s

In sum: We can obtain the pitch of a screw from any of the following formulae:

s == (𝒮⋀α)‾/(α⋅α) == (𝒮⋀α̂)‾        7.7
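In the language of classical three-dimensional vector algebra, a screw referred to the origin may be transcribed as a pair (α, m), where m is the free complement of its bivector part (essentially Plücker, or "motor", coordinates: for an axis through P == 𝒪 + ν, the bivector ν⋀α + s ᾱ complements to m = ν×α + s α). Formula 7.7 then reads s == α⋅m/(α⋅α). The Python sketch below builds a screw with a known pitch and recovers it; the pair representation and helper names are assumptions of this transcription, not GrassmannAlgebra package code.

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

# A screw with vector alpha, pitch s, and central axis through O + nu:
nu = [Fraction(1), Fraction(-2), Fraction(3)]
alpha = [Fraction(2), Fraction(1), Fraction(-1)]
s = Fraction(5, 7)

# Complement vector of the bivector part referred to the origin:
m = [c + s*a for c, a in zip(cross(nu, alpha), alpha)]

# Formula 7.7 recovers the pitch: s = alpha.m / alpha.alpha
# (the nu x alpha contribution is orthogonal to alpha, so it drops out).
assert dot(alpha, m) / dot(alpha, alpha) == s
```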

The central axis of a screw


An explicit formula for the central axis P⋀α of a screw is obtained by taking the interior product of the screw expression with its vector α.

𝒮⋅α == (P⋀α + s ᾱ)⋅α


The second term in the expansion of the right-hand side is zero (since an element is always orthogonal to its complement), leaving:
𝒮⋅α == (P⋀α)⋅α

The right-hand side of this equation may be expanded using the Interior Common Factor Theorem to give:

𝒮⋅α == (α⋅P) α − (α⋅α) P

By taking the exterior product of this expression with α, we eliminate the first term on the right-hand side to get:

(𝒮⋅α)⋀α == −(α⋅α)(P⋀α)

By dividing through by the square of the magnitude of α, we can express this in terms of unit elements.

(𝒮⋅α̂)⋀α̂ == −(P⋀α)

In sum: We can obtain the central axis P⋀α of a screw from any of the following formulae:

P⋀α == −((𝒮⋅α)⋀α)/(α⋅α) == −(𝒮⋅α̂)⋀α̂ == α̂⋀(𝒮⋅α̂)        7.8
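Continuing with the same (α, m) transcription used for the pitch, formula 7.8 locates the central axis: the point with position vector α×m/(α⋅α) is the foot of the perpendicular from the origin to the axis, and lies on the line through P in the direction α. A Python check with exact rationals (again an independent sketch, not package code):

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

nu = [Fraction(1), Fraction(-2), Fraction(3)]     # position vector of a point P on the axis
alpha = [Fraction(2), Fraction(1), Fraction(-1)]  # vector of the screw
s = Fraction(5, 7)                                # pitch
m = [c + s*a for c, a in zip(cross(nu, alpha), alpha)]

# Candidate point on the central axis: r = (alpha x m)/(alpha.alpha).
aa = dot(alpha, alpha)
r = [c / aa for c in cross(alpha, m)]

# r is the foot of the perpendicular from the origin to the axis ...
expected = [n - a*dot(alpha, nu)/aa for n, a in zip(nu, alpha)]
assert r == expected
# ... and in particular lies on the line through P with direction alpha:
assert cross([ri - ni for ri, ni in zip(r, nu)], alpha) == [0, 0, 0]
```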

Orthogonal decomposition of a screw


By taking the expression for a screw and substituting the expressions derived in [7.7] for the pitch and [7.8] for the central axis we obtain:

𝒮 == −(𝒮⋅α̂)⋀α̂ + ((𝒮⋀α)‾/(α⋅α)) ᾱ

In order to transform the second term into the form we want, we note first that (𝒮⋀α̂)‾ is a scalar. So that we can write:

(𝒮⋀α̂)‾ (α̂)‾ == ((𝒮⋀α̂)‾ ⋀ α̂)‾ == (𝒮⋀α̂)⋅α̂

Hence 𝒮 can be written as:

𝒮 == −(𝒮⋅α̂)⋀α̂ + (𝒮⋀α̂)⋅α̂        7.9

This type of decomposition is in fact valid for any m-element 𝒮 and 1-element α, as we have shown in Chapter 6, formula 6.68. The central axis is the term −(𝒮⋅α̂)⋀α̂, which is the component of 𝒮 parallel to α, and the term (𝒮⋀α̂)⋅α̂ is s ᾱ, the component of 𝒮 orthogonal to α.
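The vector-space analogue of this decomposition for a bivector B and unit 1-element â in 3-space reads B == −(B⋅â)⋀â + (B⋀â)⋅â. The sketch below verifies it numerically, using the contraction conventions assumed earlier ((u⋀v)⋅w == (u⋅w)v − (v⋅w)u, and its trivector analogue); it is an independent Python check, not package code, and uses a non-unit vector a with the appropriate 1/(a⋅a) rescaling.

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def wedge(u, v):
    n = len(u)
    return [[u[i]*v[j] - u[j]*v[i] for j in range(n)] for i in range(n)]

def interior_biv(B, w):
    # (u^v).w = (u.w)v - (v.w)u, extended linearly over the bivector B.
    n = len(w)
    return [sum(w[j]*B[j][i] for j in range(n)) for i in range(n)]

def interior_triv(t, w):
    # In 3-space a trivector is one number t (coefficient of e1^e2^e3);
    # (e1^e2^e3).w = w1 e2^e3 - w2 e1^e3 + w3 e1^e2, as a bivector matrix.
    return [[0, t*w[2], -t*w[1]], [-t*w[2], 0, t*w[0]], [t*w[1], -t*w[0], 0]]

def triv_coeff(B, w):
    # Coefficient of e1^e2^e3 in B^w.
    return B[0][1]*w[2] - B[0][2]*w[1] + B[1][2]*w[0]

B = wedge([Fraction(1), Fraction(4), Fraction(0)], [Fraction(2), Fraction(1), Fraction(5)])
a = [Fraction(2), Fraction(-1), Fraction(3)]
aa = dot(a, a)

# Component of B containing a:  -((B.a)^a)/(a.a)
par = [[-x/aa for x in row] for row in wedge(interior_biv(B, a), a)]
# Component of B orthogonal to a:  ((B^a).a)/(a.a)
orth = [[x/aa for x in row] for row in interior_triv(triv_coeff(B, a), a)]

# The two components reassemble B exactly:
assert all(B[i][j] == par[i][j] + orth[i][j] for i in range(3) for j in range(3))
```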


7.5 The Algebra of Screws


To be completed

7.6 Computing with Screws


To be completed


8 Exploring Mechanics

8.1 Introduction
Grassmann algebra applied to the field of mechanics performs an interesting synthesis between what now seem to be regarded as disparate concepts. In particular we will explore the synthesis it can effect between the concepts of force and moment; velocity and angular velocity; and linear momentum and angular momentum.

This synthesis has an important concomitant, namely that the form in which the mechanical entities are represented is, for many results of interest, independent of the dimension of the space involved. It may be argued therefore, that such a representation is more fundamental than one specifically requiring a three-dimensional context (as indeed does that which uses the Gibbs-Heaviside vector algebra). This is a more concrete result than may be apparent at first sight since the form, as well as being valid for spaces of dimension three or greater, is also valid for spaces of dimension zero, one or two.

Of most interest, however, is the fact that the complementary form of a mechanical entity takes on a different form depending on the number of dimensions concerned. For example, the velocity of a rigid body is the sum of a bound vector and a bivector in a space of any dimensions. Its complement is dependent on the dimension of the space, but in each case it may be viewed as representing the points of the body (if they exist) which have zero velocity. In three dimensions the complementary velocity can specify the axis of rotation of the rigid body, while in two dimensions it can specify the centre (point) of rotation.

Furthermore, some results in mechanics (for example, Newton's second law or the conditions of equilibrium of a rigid body) will be shown not to require the space to be a metric space. On the other hand, use of the Gibbs-Heaviside 'cross product' to express angular momentum or the moment condition immediately supposes the space to possess a metric.
Mechanics as it is known today is in the strange situation of being a field of mathematical physics in which location is very important, and of which the calculus traditionally used (being vectorial) can take no proper account. As already discussed in Chapter 1, one may take the example of the concept of force. A (physical) force is not satisfactorily represented by a vector, yet contemporary practice is still to use a vector calculus for this task. To patch up this inadequacy the concept of moment is introduced and the conditions of equilibrium of a rigid body augmented by a condition on the sum of the moments. The justification for this condition is often not well treated in contemporary texts.

Many of these texts will of course rightly state that forces are not (represented by) free vectors and yet will proceed to use the Gibbs-Heaviside calculus to denote and manipulate them. Although confusing, this inaptitude is usually offset by various comments attached to the symbolic descriptions and calculations. For example, a position is represented by a 'position vector'. A position vector is described as a (free?) vector with its tail (fixed?) to the origin (point?) of the coordinate system. The coordinate system itself is supposed to consist of an origin point and a number of basis vectors. But whereas the vector calculus can cope with the vectors, it cannot cope with the origin. This confusion between vectors, free vectors, bound vectors, sliding vectors and position vectors would not occur if the calculus used to describe them were a calculus of position (points) as well as direction (vectors).

In order to describe the phenomena of mechanics in purely vectorial terms it has been necessary therefore to devise the notions of couple and moment as notions almost distinct from that of force, thus effectively splitting in two all the results of mechanics. In traditional mechanics, to every 'linear' quantity: force, linear momentum, translational velocity, etc., corresponds an 'angular' quantity: moment, angular momentum, angular velocity. In this chapter we will show that by representing mechanical quantities correctly in terms of elements of a Grassmann algebra this dichotomy disappears and mechanics takes on a more unified form.
In particular we will show that there exists a screw ℱ representing a system of forces, moments included (remember that a system of forces in three dimensions is not necessarily replaceable by a single force); a screw 𝒫 representing the momentum of a system of bodies (linear and angular); and a screw 𝒱 representing the velocity of a rigid body (linear and angular): all invariant combinations of the linear and angular components. For example, the velocity 𝒱 is a complete characterization of the kinematics of the rigid body independent of any particular point on the body used to specify its motion. Expressions for work, power and kinetic energy of a system of forces and a rigid body will be shown to be determinable by an interior product between the relevant screws. For example, the interior product of ℱ with 𝒱 will give the power of the system of forces acting on the rigid body, that is, the sum of the translational and rotational powers.

Historical Note The application of the Ausdehnungslehre to mechanics has obtained far fewer proponents than might be expected. This may be due to the fact that Grassmann himself was late going into the subject. His 'Die Mechanik nach den principien der Ausdehnungslehre' (1877) was written just a few weeks before his death. Furthermore, in the period before his ideas became completely submerged by the popularity of the Gibbs-Heaviside system (around the turn of the century) there were few people with sufficient understanding of the Ausdehnungslehre to break new ground using its methods. There are only three people who have written substantially in English using the original concepts of the Ausdehnungslehre: Edward Wyllys Hyde (1888), Alfred North Whitehead (1898), and Henry James Forder (1941). Each of them has discussed the theory of screws in more or less detail, but none has addressed the complete field of mechanics. The principal works in other languages are in German, and apart from Grassmann's paper in 1877 mentioned above, are the book by Jahnke (1905) which includes applications to mechanics, and the short monograph by Lotze (1922) which lays particular emphasis on rigid body mechanics.


8.2 Force
Representing force
The notion of force as used in mechanics involves the concepts of magnitude, sense, and line of action. It will be readily seen in what follows that such a physical entity may be faithfully represented by a bound vector. Similarly, all the usual properties of systems of forces are faithfully represented by the analogous properties of sums of bound vectors. For the moment it is not important whether the space has a metric, or what its dimension is. Let ℱ denote a force. Then:

ℱ == P⋀f        8.1

The vector f is called the force vector and expresses the sense and direction of the force. In a metric space it would also express its magnitude. The point P is any point on the line of action of the force. It can be expressed as the sum of the origin point 𝒪 and the position vector ν of the point.

This simple representation delineates clearly the role of the (free) vector f. The vector f represents all the properties of the force except the position of its line of action. The operation P⋀ has the effect of 'binding' the force vector to the line through P. On the other hand the expression of a bound vector as a product of points is also useful when expressing certain types of forces, for example gravitational forces. Newton's law of gravitation for the force exerted on a point mass m₁ (weighted point) by a point mass m₂ may be written:

ℱ₁₂ == (G m₁ m₂ / R³) P₁⋀P₂        8.2

(here R is the distance between the two points, so that the measure of the right-hand side is G m₁ m₂ / R²). Note that this expression correctly changes sign if the masses are interchanged.

Systems of forces
A system of forces may be represented by a sum of bound vectors.

ℱ == Σ ℱᵢ == Σ Pᵢ⋀fᵢ        8.3


A sum of bound vectors is not necessarily reducible to a bound vector: a system of forces is not necessarily reducible to a single force. However, a sum of bound vectors is in general reducible to the sum of a bound vector and a bivector. This is done simply by adding and subtracting the term P⋀(Σ fᵢ) to and from the sum [8.3].

ℱ == P⋀(Σ fᵢ) + Σ (Pᵢ − P)⋀fᵢ        8.4

This 'adding and subtracting' operation is a common one in our treatment of mechanics. We call it referring the sum to the point P. Note that although the expression for ℱ now involves the point P, it is completely independent of P since the terms involving P cancel. The sum may be said to be invariant with respect to any point used to express it. Examining the terms in the expression 8.4 above we can see that since Σ fᵢ is a vector (f, say) the first term reduces to the bound vector P⋀f representing a force through the chosen point P. The vector f is called the resultant force vector.

f == Σ fᵢ

The second term is a sum of bivectors of the form (Pᵢ − P)⋀fᵢ. Such a bivector may be seen to faithfully represent a moment: specifically, the moment of the force represented by Pᵢ⋀fᵢ about the point P. Let this moment be denoted ℳ_P^i.

ℳ_P^i == (Pᵢ − P)⋀fᵢ

To see that it is more reasonable that a moment be represented as a bivector rather than as a vector, one has only to consider that the physical dimensions of a moment are the product of a length and a force unit. The expression for moment above clarifies distinctly how it can arise from two located entities: the bound vector Pᵢ⋀fᵢ and the point P, and yet itself be a 'free' entity. The bivector, it will be remembered, has no concept of location associated with it. This has certainly been a source of confusion among students of mechanics using the usual Gibbs-Heaviside three-dimensional vector calculus. It is well known that the moment of a force about a point does not possess the property of location, and yet it still depends on that point. While notions of 'free' and 'bound' are not properly mathematically characterized, this type of confusion is likely to persist. The second term in [8.4] representing the sum of the moments of the forces about the point P will be denoted:

ℳ_P == Σ (Pᵢ − P)⋀fᵢ

Then, any system of forces may be represented by the sum of a bound vector and a bivector.

ℱ == P⋀f + ℳ_P        8.5

The bound vector P⋀f represents a force through an arbitrary point P with force vector equal to the sum of the force vectors of the system. The bivector ℳ_P represents the sum of the moments of the forces about the same point P.


Suppose the system of forces is referred to some other point P*. The system of forces may then be written in either of the two forms:

P⋀f + ℳ_P == P*⋀f + ℳ_P*

Solving for ℳ_P* gives us a formula relating the moment sum about different points.

ℳ_P* == ℳ_P + (P − P*)⋀f
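In a metric 3-space this moment-transfer formula can be checked in its complement form, in which the bivector moments become the ordinary moment vectors of elementary statics. A small Python check with integer data (independent of the GrassmannAlgebra package; helper names are ad hoc):

```python
def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

def vsub(u, v):
    return [a - b for a, b in zip(u, v)]

def vadd(u, v):
    return [a + b for a, b in zip(u, v)]

# A system of forces: application points (as position vectors) and force vectors.
points = [[1, 0, 2], [3, -1, 0], [0, 2, 1]]
forces = [[0, 1, 1], [2, 0, -1], [1, 1, 0]]

f = [sum(c) for c in zip(*forces)]  # resultant force vector

def moment_sum(p):
    # Complement form of M_P: sum over i of (P_i - P) x f_i.
    total = [0, 0, 0]
    for pi, fi in zip(points, forces):
        total = vadd(total, cross(vsub(pi, p), fi))
    return total

P, Pstar = [1, 1, 1], [-2, 0, 3]
# M_P* == M_P + (P - P*)^f, checked here in cross-product form:
assert moment_sum(Pstar) == vadd(moment_sum(P), cross(vsub(P, Pstar), f))
```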

If the bound vector term P⋀f in formula 8.5 is zero then the system of forces is called a couple. If the bivector term ℳ_P is zero then the system of forces reduces to a single force.

Equilibrium
If a sum of forces is zero, that is:

ℱ == P⋀f + ℳ_P == 0

then it is straightforward to show that each of P⋀f and ℳ_P must be zero. Indeed if such were not the case then the bound vector would be equal to the negative of the bivector, a possibility excluded by the implicit presence of the origin in a bound vector, and its absence in a bivector. Furthermore, P⋀f being zero implies f is zero. These considerations lead us directly to the basic theorem of statics. For a body to be in equilibrium, the sum of the forces must be zero.

ℱ == Σ ℱᵢ == 0        8.6

This one expression encapsulates both the usual conditions for equilibrium of a body, that is: that the sum of the force vectors be zero; and that the sum of the moments of the forces about an arbitrary point P be zero.

Force in a metric 3-plane


In a 3-plane the bivector ℳ_P is necessarily simple. In a metric 3-plane, the vector-space complement of the simple bivector ℳ_P is just the usual moment vector of the three-dimensional Gibbs-Heaviside vector calculus. In three dimensions, the complement of an exterior product is equivalent to the usual cross product.

(ℳ_P)‾ == Σ (Pᵢ − P) × fᵢ

Thus, a system of forces in a metric 3-plane may be reduced to a single force through an arbitrarily chosen point P plus the vector-space complement of the usual moment vector about P.


ℱ == P⋀f + (Σ (Pᵢ − P) × fᵢ)‾        8.7

8.3 Momentum
The velocity of a particle
Suppose a point P with position vector ν, that is, P == 𝒪 + ν. Since the origin is fixed, the velocity Ṗ of the point P is clearly a vector given by the time-derivative of the position vector of P.

Ṗ == dP/dt == dν/dt        8.8

Representing momentum
As may be suspected from Newton's second law, momentum is of the same tensorial nature as force. The momentum of a particle is represented by a bound vector, as is a force. The momentum of a system of particles (or of a rigid body, or of a system of rigid bodies) is representable by a sum of bound vectors, as is a system of forces. The momentum of a particle comprises three factors: the mass, the position, and the velocity of the particle. The mass is represented by a scalar, the position by a point, and the velocity by a vector.

𝒫 == P⋀(m Ṗ)        8.9

Here:
𝒫 is the particle momentum;
m is the particle mass;
P is the particle position point;
Ṗ is the particle velocity;
m P is the particle point mass;
m Ṗ is the particle momentum vector.

The momentum of a particle may be viewed either as a momentum vector bound to a line through the position of the particle, or as a velocity vector bound to a line through the point mass. The simple description [8.9] delineates clearly the role of the (free) vector m Ṗ. It represents all the properties of the momentum of a particle except its position.


The momentum of a system of particles


The momentum of a system of particles is denoted by 𝒫, and may be represented by the sum of bound vectors:

𝒫 == Σ 𝒫ᵢ == Σ Pᵢ⋀(mᵢ Ṗᵢ)        8.10

A sum of bound vectors is not necessarily reducible to a bound vector: the momentum of a system of particles is not necessarily reducible to the momentum of a single particle. However, a sum of bound vectors is in general reducible to the sum of a bound vector and a bivector. This is done simply by adding and subtracting the term P⋀(Σ mᵢ Ṗᵢ) to and from the sum [8.10].

𝒫 == P⋀(Σ mᵢ Ṗᵢ) + Σ (Pᵢ − P)⋀(mᵢ Ṗᵢ)        8.11

It cannot be too strongly emphasized that although the momentum of the system has now been referred to the point P, it is completely independent of P.

Examining the terms in formula 8.11 we see that since Σ mᵢ Ṗᵢ is a vector (l, say) the first term reduces to the bound vector P⋀l representing the (linear) momentum of a 'particle' situated at the point P. The vector l is called the linear momentum of the system. The 'particle' may be viewed as a particle with mass equal to the total mass of the system and with velocity equal to the velocity of the centre of mass.

l == Σ mᵢ Ṗᵢ

The second term is a sum of bivectors of the form (Pᵢ − P)⋀(mᵢ Ṗᵢ). Such a bivector may be seen to faithfully represent the moment of momentum or angular momentum of the system of particles about the point P. Let this moment of momentum for particle i be denoted ℋ_P^i.

ℋ_P^i == (Pᵢ − P)⋀(mᵢ Ṗᵢ)

The expression for angular momentum Hip above clarifies distinctly how it can arise from two located entities: the bound vector Pi Kmi Pi O and the point P, and yet itself be a 'free' entity. Similarly to the notion of moment, the notion of angular momentum treated using the threedimensional Gibbs-Heaviside vector calculus has caused some confusion amongst students of mechanics: the angular momentum of a particle about a point depends on the positions of the particle and of the point and yet itself has no property of location. The second term in formula 8.11 representing the sum of the moments of momenta about the point P will be denoted ,P . 2009 9 3

The second term in formula 8.11, representing the sum of the moments of momenta about the point P, will be denoted ℒP.

ℒP = Σᵢ (Pᵢ − P)∧(mᵢ Ṗᵢ)

Then the momentum ℳ of any system of particles may be represented by the sum of a bound vector and a bivector:

ℳ = P∧l + ℒP    [8.12]

The bound vector P∧l represents the linear momentum of the system referred to an arbitrary point P. The bivector ℒP represents the angular momentum of the system about the point P.
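The decomposition above, and the independence of ℳ from the reference point, can be checked numerically. The sketch below (illustrative Python, not part of the book's Mathematica code) represents each bound vector in a metric 3-space by its resultant vector together with its moment about the origin, the standard cross-product reduction; the particle data and helper names are hypothetical.

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def momentum(particles, P):
    """Refer the system momentum to P: returns (l, L_P) as in formula 8.11,
    with bivectors represented by their complement vectors (cross products)."""
    l, L = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
    for m, Pi, vi in particles:
        mv = [m*c for c in vi]
        l = add(l, mv)
        L = add(L, cross([a - b for a, b in zip(Pi, P)], mv))
    return l, L

# two particles: (mass, position, velocity) -- arbitrary illustrative data
particles = [(2.0, [1.0, 0.0, 2.0], [0.0, 1.0, 1.0]),
             (3.0, [0.0, 1.0, -1.0], [2.0, 0.0, 1.0])]

# the moment about the origin, reconstructed from two different reference
# points P, must agree: the total momentum P∧l + L_P is independent of P
moments = []
for P in ([0.0, 0.0, 0.0], [5.0, -2.0, 1.0]):
    l, L_P = momentum(particles, P)
    moments.append(add(cross(P, l), L_P))
assert moments[0] == moments[1]
```

The check reflects the fact that the two terms of [8.12] each depend on P, while their sum does not.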

The momentum of a system of bodies


The momentum of a system of bodies (rigid or non-rigid) is of the same form as that for a system of particles [8.12], since to calculate momentum it is not necessary to consider any constraints between the particles. Suppose a number of bodies with momenta ℳᵢ; then the total momentum ℳ = Σᵢ ℳᵢ may be written:

ℳ = Σᵢ (Pᵢ∧lᵢ + ℒPᵢ)

Referring this momentum sum to a point P by adding and subtracting the terms P∧lᵢ we obtain:

ℳ = Σᵢ P∧lᵢ + Σᵢ ((Pᵢ − P)∧lᵢ + ℒPᵢ)

It is thus natural to represent the linear momentum vector l of the system of bodies by:

l = Σᵢ lᵢ

and the angular momentum ℒP of the system of bodies by:

ℒP = Σᵢ ((Pᵢ − P)∧lᵢ + ℒPᵢ)

That is, the momentum of a system of bodies is of the same form as the momentum of a system of particles. The linear momentum vector of the system is the sum of the linear momentum vectors of the bodies. The angular momentum of the system is made up of two parts: the sum of the angular momenta of the bodies about their respective chosen points; and the sum of the moments of the linear momenta of the bodies (referred to their respective chosen points) about the point P. Again it may be worth emphasizing that the total momentum ℳ is independent of the point P, whilst its component terms are dependent on it. However, we can easily convert the formulae to refer to some other point, say P*.

ℳ = P∧l + ℒP = P*∧l + ℒP*

Hence the angular momentum referred to the new point is given by:

ℒP* = ℒP + (P − P*)∧l
Linear momentum and the mass centre


There is a simple relationship between the linear momentum of the system and the momentum of a 'particle' at the mass centre. The centre of mass PG of a system may be defined by:

M PG = Σᵢ mᵢ Pᵢ

Here M = Σᵢ mᵢ is the total mass of the system, Pᵢ is the position of the ith particle (or of the centre of mass of the ith body), and PG is the position of the centre of mass of the system.

Differentiating this equation yields:

M ṖG = Σᵢ mᵢ Ṗᵢ = l

Formula 8.12 may then be written:

ℳ = P∧(M ṖG) + ℒP

If now we refer the momentum to the centre of mass, that is, we choose P to be PG, we can write the momentum of the system as:

ℳ = PG∧(M ṖG) + ℒPG    [8.13]

Thus the momentum ℳ of a system is equivalent to the momentum of a particle of mass equal to the total mass M of the system situated at the centre of mass PG, plus the angular momentum about the centre of mass.

Momentum in a metric 3-plane


In a 3-plane the bivector ℒP is necessarily simple. In a metric 3-plane, the vector-space complement of the simple bivector ℒP is just the usual angular momentum vector of the three-dimensional Gibbs-Heaviside vector calculus: in three dimensions, the complement of the exterior product of two vectors is equivalent to the usual cross product.

ℒ̄P = Σᵢ (Pᵢ − P) × lᵢ

Thus, the momentum of a system in a metric 3-plane may be reduced to the momentum of a single particle through an arbitrarily chosen point P, plus the vector-space complement of the usual moment of momentum vector about P.

ℳ = P∧l + Σᵢ (Pᵢ − P)∧lᵢ    [8.14]

8.4 Newton's Law


Rate of change of momentum
The momentum of a system of particles has been given by [8.10] as:

ℳ = Σᵢ Pᵢ∧(mᵢ Ṗᵢ)

The rate of change of this momentum with respect to time is:

ℳ̇ = Σᵢ Ṗᵢ∧(mᵢ Ṗᵢ) + Σᵢ Pᵢ∧(ṁᵢ Ṗᵢ) + Σᵢ Pᵢ∧(mᵢ P̈ᵢ)

In what follows it will be supposed for simplicity that the masses are not time varying (so that the second term vanishes); and since the first term is zero by the nilpotency of the exterior product (Ṗᵢ∧Ṗᵢ = 0), this expression becomes:

ℳ̇ = Σᵢ Pᵢ∧(mᵢ P̈ᵢ)    [8.15]
Consider now the system with momentum:

ℳ = P∧l + ℒP

The rate of change of this momentum with respect to time is:

ℳ̇ = Ṗ∧l + P∧l̇ + ℒ̇P    [8.16]

The term Ṗ∧l (or, what is equivalent, the term Ṗ∧(M ṖG)) is zero under any of the following conditions: P is a fixed point; l is zero; Ṗ is parallel to ṖG; or P is equal to PG.

In particular then, when the point to which the momentum is referred is the centre of mass, the rate of change of momentum [8.16] becomes:

ℳ̇ = PG∧l̇ + ℒ̇PG    [8.17]

Newton's second law


For a system of bodies of momentum ℳ acted upon by a system of forces ℱ, Newton's second law may be expressed as:

ℱ = ℳ̇    [8.18]

This equation captures Newton's law in its most complete form, encapsulating both the linear and the angular terms. Substituting for ℱ and ℳ̇ from equations 8.5 and 8.16 (where f is the resultant force of the system and 𝕄P its moment bivector about P) we have:

P∧f + 𝕄P = Ṗ∧l + P∧l̇ + ℒ̇P    [8.19]

By equating the bound vector terms of [8.19] we obtain the vector equation:

f = l̇

By equating the bivector terms of [8.19] we obtain the bivector equation:

𝕄P = Ṗ∧l + ℒ̇P

In a metric 3-plane, it is the vector complement of this bivector equation which is usually given as the moment condition:

𝕄̄P = Ṗ×l + ℒ̄̇P

If P is a fixed point, so that Ṗ is zero, Newton's law [8.18] is equivalent to the pair of equations:

f = l̇
𝕄P = ℒ̇P    [8.20]

8.5 The Angular Velocity of a Rigid Body


To be completed.

8.6 The Momentum of a Rigid Body


To be completed.

8.7 The Velocity of a Rigid Body


To be completed.

8.8 The Complementary Velocity of a Rigid Body


To be completed.

8.9 The Infinitesimal Displacement of a Rigid Body


To be completed.

8.10 Work, Power and Kinetic Energy


To be completed.

9 Grassmann Algebra

9.1 Introduction
The Grassmann algebra is an algebra of "numbers" composed of linear sums of elements from any of the exterior linear spaces generated from a given linear space. These are numbers in the same sense that, for example, a complex number or a matrix is a number. The essential property that an algebra has, in addition to being a linear space, is, broadly speaking, one of closure under a product operation. That is, the algebra has a product operation such that the product of any elements of the algebra is also an element of the algebra. Thus the exterior product spaces Λm discussed up to this point are not algebras, since products (exterior, regressive or interior) of their elements are not generally elements of the same space. However, there are two important exceptions. Λ0 is not only a linear space, but an algebra (and a field) as well under the exterior and interior products. Λn is an algebra and a field under the regressive product.

Many of our examples will use GrassmannAlgebra functions, and because we will often be changing the dimension of the space depending on the example under consideration, we will indicate a change without comment simply by entering the appropriate DeclareBasis symbol from the palette. For example the following entry:

𝔅3;

effects the change to a 3-dimensional linear or vector space.

9.2 Grassmann Numbers


Creating Grassmann numbers
A Grassmann number is a sum of elements from any of the exterior product spaces Λm. Grassmann numbers thus form a linear space in their own right, which we call Λ.

A basis for Λ is obtained from the currently declared basis of Λ1 by collecting together all the basis elements of the various Λm. This basis may be generated by entering BasisL[].

𝔅3; BasisL[]
{1, e1, e2, e3, e1∧e2, e1∧e3, e2∧e3, e1∧e2∧e3}
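The structure of this basis is easy to model outside Mathematica: the 2ⁿ basis elements of Λ correspond to the subsets of the declared 1-element basis, with the empty subset giving the scalar 1. The Python sketch below is illustrative only; basis_L is a hypothetical stand-in for BasisL.

```python
from itertools import combinations

def basis_L(names):
    """All 2^n basis elements of the full Grassmann algebra over the given
    1-element basis names, ordered by grade."""
    out = ["1"]                                     # grade 0: the scalar 1
    for k in range(1, len(names) + 1):
        out += ["∧".join(c) for c in combinations(names, k)]
    return out

basis = basis_L(["e1", "e2", "e3"])
assert len(basis) == 2 ** 3
assert basis == ["1", "e1", "e2", "e3", "e1∧e2", "e1∧e3", "e2∧e3", "e1∧e2∧e3"]
```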

If we wish to generate a general symbolic Grassmann number in the currently declared basis, we can use the GrassmannAlgebra function CreateGrassmannNumber. CreateGrassmannNumber[symbol] creates a Grassmann number with scalar coefficients formed by subscripting the symbol given. For example, if we enter CreateGrassmannNumber[x], we obtain:

CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3

The positional ordering of the xi with respect to the basis elements in any given term is due to Mathematica's internal ordering routines, and does not affect the meaning of the result. The CreateGrassmannNumber operation automatically adds the pattern for the generated coefficients xi to the list of declared scalars, as we see by entering Scalars.

Scalars
{a, b, c, d, e, f, g, h, 𝔠, (_⊙_)?InnerProductQ, x_, _₀}
If we wish to enter our own coefficients, we can use CreateGrassmannNumber with a placeholder (□) as argument, and then tab through the placeholders generated, replacing them with the desired coefficients:

CreateGrassmannNumber[□]
□ + □ e1 + □ e2 + □ e3 + □ e1∧e2 + □ e1∧e3 + □ e2∧e3 + □ e1∧e2∧e3

Note that the coefficients entered here will not be automatically put on the list of declared scalars. If several Grassmann numbers are required, we can apply the operation to a list of symbols; that is, CreateGrassmannNumber is Listable. Below we generate three numbers in 2-space.

𝔅2; CreateGrassmannNumber[{x, y, z}] // MatrixForm
( x1 + e1 x3 + e2 x4 + x2 e1∧e2
  y1 + e1 y3 + e2 y4 + y2 e1∧e2
  z1 + e1 z3 + e2 z4 + z2 e1∧e2 )

Grassmann numbers can of course be entered in general symbolic form, not just in terms of basis elements. For example a Grassmann number might take the form:

2 + x − 3 y + 5 x∧y + x∧y∧z

In some cases it might be faster to change basis temporarily in order to generate a template with all but the required scalars formed.

DeclareBasis[{x, y, z}]; T = Table[CreateGrassmannNumber[□], {3}]; 𝔅3; MatrixForm[T]
( □ + □ x + □ y + □ z + □ x∧y + □ x∧z + □ y∧z + □ x∧y∧z
  □ + □ x + □ y + □ z + □ x∧y + □ x∧z + □ y∧z + □ x∧y∧z
  □ + □ x + □ y + □ z + □ x∧y + □ x∧z + □ y∧z + □ x∧y∧z )

Body and soul


For certain operations (particularly the inverse) it is critically important whether or not a Grassmann number has a (non-zero) scalar component. The scalar component of a Grassmann number is called its body. The body may be obtained from a Grassmann number by using the Body function. Suppose we have a general Grassmann number X in a 3-space.

𝔅3; X = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3

The body of X is then:

Body[X]
x0

The rest of a Grassmann number is called its soul. The soul may be obtained from a Grassmann number by using the Soul function. The soul of the number X is:

Soul[X]
e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3

Body and Soul apply to Grassmann numbers whose components are given in any form.

Z = (x⊙y) + ((x∧(y + 2))⊙(z − 2)) + (z∨(y∧x)); {Body[Z], Soul[Z]}
{x⊙y + 2 (x⊙z) + z∨(y∧x), −4 x + (x∧y)⊙z − 2 x∧y}

The first two terms of the body are scalar because they are interior products of 1-elements (scalar products). The third term of the body is a scalar in a space of 3 dimensions.
Body and Soul are both Listable. Taking the Soul of the list of Grassmann numbers generated in 2-space above gives:

Soul[( x0 + e1 x1 + e2 x2 + x3 e1∧e2
       y0 + e1 y1 + e2 y2 + y3 e1∧e2
       z0 + e1 z1 + e2 z2 + z3 e1∧e2 )] // MatrixForm
( e1 x1 + e2 x2 + x3 e1∧e2
  e1 y1 + e2 y2 + y3 e1∧e2
  e1 z1 + e2 z2 + z3 e1∧e2 )

Even and odd components


Changing the order of the factors in an exterior product changes its sign according to the axiom:

a∧b = (−1)^(m k) b∧a    (a of grade m, b of grade k)

Thus factors of odd grade anti-commute, whereas if one of the factors is of even grade, they commute. This means that if X is a Grassmann number whose components are all of odd grade, then it is nilpotent. The even components of a Grassmann number can be extracted with the function EvenGrade and the odd components with OddGrade.
X
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3

EvenGrade[X]
x0 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3

OddGrade[X]
e1 x1 + e2 x2 + e3 x3 + x7 e1∧e2∧e3

It should be noted that the result returned by EvenGrade or OddGrade is dependent on the dimension of the current space. For example, the calculations above for the even and odd components of the number X were carried out in a 3-space. If now we change to a 2-space and repeat the calculations, we will get a different result for OddGrade[X], because the term of grade three is necessarily zero in this 2-space.

𝔅2; OddGrade[X]
e1 x1 + e2 x2 + e3 x3

For EvenGrade and OddGrade to apply, it is not necessary that the Grassmann number be in terms of basis elements or simple exterior products. Applying EvenGrade and OddGrade to the number Z below still separates the number into its even and odd components:

Z = (x⊙y) + ((x∧(y + 2))⊙(z − 2)) + (z∨(y∧x));

𝔅3; {EvenGrade[Z], OddGrade[Z]}
{x⊙y + 2 (x⊙z) + z∨(y∧x) − 2 x∧y, −4 x + (x∧y)⊙z}

EvenGrade and OddGrade are Listable.

To test whether a number is even or odd we use EvenGradeQ and OddGradeQ. Consider again the general Grassmann number X in 3-space.

{EvenGradeQ[X], OddGradeQ[X], EvenGradeQ[EvenGrade[X]], OddGradeQ[EvenGrade[X]]}
{False, False, True, False}

EvenGradeQ and OddGradeQ are not Listable. When applied to a list of elements, they require all the elements in the list to conform in order to return True. They can of course still be mapped over a list of elements to question the evenness or oddness of the individual components.

EvenGradeQ /@ {1, x, x∧y, x∧y∧z}
{True, False, True, False}

Finally, there is the question as to whether the Grassmann number 0 is even or odd. It may be recalled from Chapter 2 that the single symbol 0 is actually a shorthand for the zero element of any of the exterior linear spaces, and hence is of indeterminate grade. Entering Grade[0] will return a flag Grade0 which can be manipulated as appropriate. Therefore, both EvenGradeQ[0] and OddGradeQ[0] will return False.

{Grade[0], EvenGradeQ[0], OddGradeQ[0]}
{Grade0, False, False}
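The sign axiom underlying this even/odd split can be checked with a compact stand-alone model of the exterior product (an illustrative Python sketch, not the GrassmannAlgebra package), in which a multivector is a dictionary from sorted index tuples to coefficients:

```python
def wedge(x, y):
    """Exterior product of multivectors stored as {tuple of basis indices: coeff}."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            idx = a + b
            if len(set(idx)) < len(idx):          # repeated 1-element factor => 0
                continue
            sign, lst = 1, list(idx)
            for i in range(len(lst)):             # parity of the sorting permutation
                for j in range(i + 1, len(lst)):
                    if lst[i] > lst[j]:
                        sign = -sign
            k = tuple(sorted(idx))
            out[k] = out.get(k, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

u, v = {(1,): 1}, {(2,): 1}                       # two odd (grade 1) factors
assert wedge(u, v) == {(1, 2): 1}
assert wedge(v, u) == {(1, 2): -1}                # (-1)^(1*1): anticommute

A, w = {(1, 2): 1}, {(3,): 1}                     # grades 2 and 1: (-1)^2 = +1
assert wedge(A, w) == wedge(w, A)

B, C = {(1, 2): 1}, {(3, 4, 5): 1}                # grades 2 and 3: (-1)^6 = +1
assert wedge(B, C) == wedge(C, B)
```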

The grades of a Grassmann number


The GrassmannAlgebra function Grade will determine the grades of each of the components in a Grassmann number, and return a list of the grades. For example, applying Grade to a general number X in 3-space shows that it contains components of all grades up to 3.

X
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3

Grade[X]
{0, 1, 2, 3}

Grade will also work with more general Grassmann number expressions. For example we can apply it to the number Z which we previously defined above.

Z
x⊙y + (x∧(2 + y))⊙(−2 + z) + z∨(y∧x)
Grade[Z]
{0, 1, 2}

Because it may be necessary to expand out a Grassmann number into a sum of terms (each of which has a definite grade) before the grades can be calculated, and because Mathematica will reorganize the ordering of those terms in the sum according to its own internal algorithms, direct correspondence with the original form is often lost. However, if a list of the grades of each term in an expansion of a Grassmann number is required, we should: expand and simplify the number into a sum of terms; create a list of the terms; and then map Grade over the list. For example:

A = 𝒢[Z]
−4 x + x⊙y + 2 (x⊙z) + (x∧y)⊙z − x∧y∧z∨1 − 2 x∧y

A = List @@ A
{−4 x, x⊙y, 2 (x⊙z), (x∧y)⊙z, −(x∧y∧z∨1), −2 x∧y}

Grade /@ A
{1, 0, 0, 1, 0, 2}

To extract the components of a Grassmann number of a given grade or grades, we can use the GrassmannAlgebra function ExtractGrade. ExtractGrade[m][X] takes a Grassmann number and extracts the components of grade m. Extracting the components of grade 1 from the number Z defined above gives:

ExtractGrade[1][Z]
−4 x + (x∧y)⊙z

ExtractGrade also works on lists or tensors of Grassmann numbers.

DeclareExtraScalars[{y_, z_}]
{a, b, c, d, e, f, g, h, 𝔠, (_⊙_)?InnerProductQ, z_, x_, y_, _₀}

ExtractGrade[1][( x0 + e1 x1 + e2 x2 + x3 e1∧e2
                  y0 + e1 y1 + e2 y2 + y3 e1∧e2
                  z0 + e1 z1 + e2 z2 + z3 e1∧e2 )] // MatrixForm
( e1 x1 + e2 x2
  e1 y1 + e2 y2
  e1 z1 + e2 z2 )

Working with complex scalars


The imaginary unit ⅈ is treated by Mathematica and the GrassmannAlgebra package just as if it were a numeric quantity. This means that, just like other numeric quantities, it will not appear in the list of declared scalars, even if explicitly entered.

DeclareScalars[{a, b, c, d, 2, ⅈ, π}]
{a, b, c, d}

However, ⅈ, or any other numeric quantity, is treated as a scalar.

ScalarQ[{a, b, c, 2, ⅈ, π, 2 − 3 ⅈ}]
{True, True, True, True, True, True, True}

This feature allows the GrassmannAlgebra package to deal with complex numbers just as it would any other scalars.

𝒢[((a + ⅈ b) x)∧(ⅈ y)]
(ⅈ a − b) x∧y

9.3 Operations with Grassmann Numbers


The exterior product of Grassmann numbers
We have already shown how to create a general Grassmann number in 3-space:

X = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3

We now create a second Grassmann number Y so that we can look at various operations applied to two Grassmann numbers in 3-space.

Y = CreateGrassmannNumber[y]
y0 + e1 y1 + e2 y2 + e3 y3 + y4 e1∧e2 + y5 e1∧e3 + y6 e2∧e3 + y7 e1∧e2∧e3

The exterior product of two general Grassmann numbers in 3-space is obtained by multiplying out the numbers termwise and simplifying the result.
𝒢[X∧Y]
x0 y0 + e1 (x1 y0 + x0 y1) + e2 (x2 y0 + x0 y2) + e3 (x3 y0 + x0 y3) +
  (x4 y0 − x2 y1 + x1 y2 + x0 y4) e1∧e2 + (x5 y0 − x3 y1 + x1 y3 + x0 y5) e1∧e3 +
  (x6 y0 − x3 y2 + x2 y3 + x0 y6) e2∧e3 +
  (x7 y0 + x6 y1 − x5 y2 + x4 y3 + x3 y4 − x2 y5 + x1 y6 + x0 y7) e1∧e2∧e3

When the bodies of the numbers are zero we get only components of grades two and three.

𝒢[Soul[X]∧Soul[Y]]
(−x2 y1 + x1 y2) e1∧e2 + (−x3 y1 + x1 y3) e1∧e3 + (−x3 y2 + x2 y3) e2∧e3 +
  (x6 y1 − x5 y2 + x4 y3 + x3 y4 − x2 y5 + x1 y6) e1∧e2∧e3
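The coefficients in these expansions can be cross-checked with a small dictionary-based model of the exterior product, written here in Python as an illustration (it is not the package's implementation). For instance, the coefficient of e1∧e2 should reproduce x4 y0 − x2 y1 + x1 y2 + x0 y4:

```python
import random

def wedge(x, y):
    """Exterior product of multivectors stored as {tuple of basis indices: coeff}."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            idx = a + b
            if len(set(idx)) < len(idx):          # nilpotency: repeated factor
                continue
            sign, lst = 1, list(idx)
            for i in range(len(lst)):             # parity of the sorting permutation
                for j in range(i + 1, len(lst)):
                    if lst[i] > lst[j]:
                        sign = -sign
            k = tuple(sorted(idx))
            out[k] = out.get(k, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

random.seed(1)
xc = {i: random.randint(-5, 5) for i in range(8)}
yc = {i: random.randint(-5, 5) for i in range(8)}
keys = [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]  # x0..x7 order
X = {k: xc[i] for i, k in enumerate(keys)}
Y = {k: yc[i] for i, k in enumerate(keys)}
Z = wedge(X, Y)
# coefficient of e1∧e2, as read off from the expansion above
assert Z.get((1, 2), 0) == xc[4]*yc[0] - xc[2]*yc[1] + xc[1]*yc[2] + xc[0]*yc[4]
# coefficient of e1∧e2∧e3
assert Z.get((1, 2, 3), 0) == (xc[7]*yc[0] + xc[6]*yc[1] - xc[5]*yc[2] + xc[4]*yc[3]
                               + xc[3]*yc[4] - xc[2]*yc[5] + xc[1]*yc[6] + xc[0]*yc[7])
```

Because the three basis indices cannot produce a repetition-free product of grade greater than three, terms beyond e1∧e2∧e3 vanish automatically in this model.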

The regressive product of Grassmann numbers


The regressive product of X and Y is again obtained by multiplying out the numbers termwise and simplifying.

Z = 𝒢[X∨Y]
e1∧e2∧e3 ∨ (x7 y0 + x6 y1 − x5 y2 + x4 y3 + x3 y4 − x2 y5 + x1 y6 + x0 y7 +
  e1 (x7 y1 − x5 y4 + x4 y5 + x1 y7) + e2 (x7 y2 − x6 y4 + x4 y6 + x2 y7) +
  e3 (x7 y3 − x6 y5 + x5 y6 + x3 y7) + (x7 y4 + x4 y7) e1∧e2 +
  (x7 y5 + x5 y7) e1∧e3 + (x7 y6 + x6 y7) e2∧e3 + x7 y7 e1∧e2∧e3)

We cannot obtain the result in the form of a pure exterior product without relating the unit n-element to the basis n-element. In general these two elements may be related by a scalar multiple which we have denoted 𝔠. In 3-space this relationship is given by 1̲ = 𝔠 e1∧e2∧e3. To obtain the result as an exterior product, but one that will also involve the scalar 𝔠, we use the GrassmannAlgebra function ToCongruenceForm.

Z1 = ToCongruenceForm[Z]
(1/𝔠) (x7 y0 + x6 y1 − x5 y2 + x4 y3 + x3 y4 − x2 y5 + x1 y6 + x0 y7 +
  e1 (x7 y1 − x5 y4 + x4 y5 + x1 y7) + e2 (x7 y2 − x6 y4 + x4 y6 + x2 y7) +
  e3 (x7 y3 − x6 y5 + x5 y6 + x3 y7) + (x7 y4 + x4 y7) e1∧e2 +
  (x7 y5 + x5 y7) e1∧e3 + (x7 y6 + x6 y7) e2∧e3 + x7 y7 e1∧e2∧e3)

In a space with a Euclidean metric 𝔠 is unity. GrassmannSimplify looks at the currently declared metric and substitutes the value of 𝔠. Since the currently declared metric is by default Euclidean, applying GrassmannSimplify puts 𝔠 to unity.

𝒢[Z1]
x7 y0 + x6 y1 − x5 y2 + x4 y3 + x3 y4 − x2 y5 + x1 y6 + x0 y7 +
  e1 (x7 y1 − x5 y4 + x4 y5 + x1 y7) + e2 (x7 y2 − x6 y4 + x4 y6 + x2 y7) +
  e3 (x7 y3 − x6 y5 + x5 y6 + x3 y7) + (x7 y4 + x4 y7) e1∧e2 +
  (x7 y5 + x5 y7) e1∧e3 + (x7 y6 + x6 y7) e2∧e3 + x7 y7 e1∧e2∧e3

The complement of a Grassmann number


Consider again a general Grassmann number X in 3-space.

X
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3

The complement of X is denoted X̄. Entering X̄ applies the complement operation, but does not simplify it in any way.

X̄
(x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3)̄

Further simplification to an explicit Grassmann number depends on the metric of the space concerned, and may be accomplished by applying GrassmannSimplify. GrassmannSimplify will look at the metric and make the necessary transformations. In the Euclidean metric assumed by GrassmannAlgebra as the default we have:

𝒢[X̄]
e3 x4 − e2 x5 + e1 x6 + x7 + x3 e1∧e2 − x2 e1∧e3 + x1 e2∧e3 + x0 e1∧e2∧e3
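The pattern in this Euclidean result, where each basis m-element maps to its complementary (n−m)-element signed by a permutation parity, can be reproduced in a few lines. The following Python sketch covers the Euclidean case only, and its function names are hypothetical (it is not the package's implementation):

```python
def perm_sign(seq):
    """Parity of the permutation taking sorted(seq) to seq."""
    s, lst = 1, list(seq)
    for i in range(len(lst)):
        for j in range(i + 1, len(lst)):
            if lst[i] > lst[j]:
                s = -s
    return s

def complement(mv, n=3):
    """Euclidean complement: e_S -> sign * e_T, with T the complementary
    indices and sign the parity of the concatenated sequence (S, T)."""
    out = {}
    for S, c in mv.items():
        T = tuple(i for i in range(1, n + 1) if i not in S)
        out[T] = out.get(T, 0) + perm_sign(S + T) * c
    return out

# reproduces the coefficients in the Euclidean 3-space result above
assert complement({(1, 2): 1}) == {(3,): 1}       # x4 term  ->  e3 x4
assert complement({(1, 3): 1}) == {(2,): -1}      # x5 term  -> -e2 x5
assert complement({(2, 3): 1}) == {(1,): 1}       # x6 term  ->  e1 x6
assert complement({(2,): 1}) == {(1, 3): -1}      # x2 term  -> -x2 e1∧e3
assert complement({(): 1}) == {(1, 2, 3): 1}      # body     ->  x0 e1∧e2∧e3
```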

For metrics other than Euclidean, the expression for the complement will be more complex. We can explore the complement of a general Grassmann number in a general metric by entering DeclareMetric[g] and then applying GrassmannSimplify to X̄. As an example, we take the complement of a general Grassmann number in a 2-space with a general metric. We choose a 2-space rather than a 3-space because the result is less complex to display.

𝔅2; DeclareMetric[g]
{{g1,1, g1,2}, {g1,2, g2,2}}

U = CreateGrassmannNumber[u]
u0 + e1 u1 + e2 u2 + u3 e1∧e2

𝒢[Ū] // Simplify
(−u3 g1,2² + e2 (u1 g1,1 + u2 g1,2) + u3 g1,1 g2,2 − e1 (u1 g1,2 + u2 g2,2) + u0 e1∧e2) / √(−g1,2² + g1,1 g2,2)
The interior product of Grassmann numbers


The interior product a⊙b of two elements a (of grade m) and b (of grade k), where m is less than k, is always zero. Thus the interior product X⊙Y is zero if the components of X are of lesser grade than those of Y. For two general Grassmann numbers we will therefore have some of the products of the components being zero. By way of example we take two general Grassmann numbers U and V in 2-space:

𝔅2; {U = CreateGrassmannNumber[u], V = CreateGrassmannNumber[w]}
{u0 + e1 u1 + e2 u2 + u3 e1∧e2, w0 + e1 w1 + e2 w2 + w3 e1∧e2}

The interior product of U with V is:

W1 = 𝒢[U⊙V]
u0 w0 + e1 u1 w0 + e2 u2 w0 + ((e1⊙e1) u1 + (e1⊙e2) u2) w1 + (e1∧e2⊙e1) u3 w1 +
  ((e1⊙e2) u1 + (e2⊙e2) u2) w2 + (e1∧e2⊙e2) u3 w2 + (e1∧e2⊙e1∧e2) u3 w3 + u3 w0 e1∧e2

Note that there are no products of the form ei⊙(ej∧ek), as these have been put to zero by GrassmannSimplify. If we wish, we can convert the remaining interior products to scalar products using the GrassmannAlgebra function ToScalarProducts.

W2 = ToScalarProducts[W1]
u0 w0 + e1 u1 w0 + e2 u2 w0 + (e1⊙e1) u1 w1 + (e1⊙e2) u2 w1 −
  (e1⊙e2) e1 u3 w1 + (e1⊙e1) e2 u3 w1 + (e1⊙e2) u1 w2 + (e2⊙e2) u2 w2 −
  (e2⊙e2) e1 u3 w2 + (e1⊙e2) e2 u3 w2 −
  (e1⊙e2)² u3 w3 + (e1⊙e1) (e2⊙e2) u3 w3 + u3 w0 e1∧e2

Finally, we can substitute the values from the currently declared metric tensor for the scalar products using the GrassmannAlgebra function ToMetricForm. For example, if we first declare a general metric:

DeclareMetric[g]
{{g1,1, g1,2}, {g1,2, g2,2}}

ToMetricForm[W2]
u0 w0 + e1 u1 w0 + e2 u2 w0 + u1 w1 g1,1 + e2 u3 w1 g1,1 + u2 w1 g1,2 − e1 u3 w1 g1,2 +
  u1 w2 g1,2 + e2 u3 w2 g1,2 − u3 w3 g1,2² + u2 w2 g2,2 − e1 u3 w2 g2,2 +
  u3 w3 g1,1 g2,2 + u3 w0 e1∧e2

The interior product of two general Grassmann numbers in 3-space is:

x0 y0 + x1 y1 + x2 y2 + x3 y3 + x4 y4 + x5 y5 + x6 y6 + x7 y7 +
  e1 (x1 y0 − x4 y2 − x5 y3 + x7 y6) + e2 (x2 y0 + x4 y1 − x6 y3 − x7 y5) +
  e3 (x3 y0 + x5 y1 + x6 y2 + x7 y4) + (x4 y0 + x7 y3) e1∧e2 +
  (x5 y0 − x7 y2) e1∧e3 + (x6 y0 + x7 y1) e2∧e3 + x7 y0 e1∧e2∧e3

Now consider the case of two general Grassmann numbers in 3-space.

𝔅3; X = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3

Y = CreateGrassmannNumber[y]
y0 + e1 y1 + e2 y2 + e3 y3 + y4 e1∧e2 + y5 e1∧e3 + y6 e2∧e3 + y7 e1∧e2∧e3

The interior product of X with Y is:

Z = 𝒢[X⊙Y]
x0 y0 + e1 x1 y0 + e2 x2 y0 + e3 x3 y0 +
  ((e1⊙e1) x1 + (e1⊙e2) x2 + (e1⊙e3) x3) y1 + (e1∧e2⊙e1) x4 y1 + (e1∧e3⊙e1) x5 y1 +
  (e2∧e3⊙e1) x6 y1 + (e1∧e2∧e3⊙e1) x7 y1 +
  ((e1⊙e2) x1 + (e2⊙e2) x2 + (e2⊙e3) x3) y2 + (e1∧e2⊙e2) x4 y2 + (e1∧e3⊙e2) x5 y2 +
  (e2∧e3⊙e2) x6 y2 + (e1∧e2∧e3⊙e2) x7 y2 +
  ((e1⊙e3) x1 + (e2⊙e3) x2 + (e3⊙e3) x3) y3 + (e1∧e2⊙e3) x4 y3 + (e1∧e3⊙e3) x5 y3 +
  (e2∧e3⊙e3) x6 y3 + (e1∧e2∧e3⊙e3) x7 y3 +
  ((e1∧e2⊙e1∧e2) x4 + (e1∧e2⊙e1∧e3) x5 + (e1∧e2⊙e2∧e3) x6) y4 + (e1∧e2∧e3⊙e1∧e2) x7 y4 +
  ((e1∧e2⊙e1∧e3) x4 + (e1∧e3⊙e1∧e3) x5 + (e1∧e3⊙e2∧e3) x6) y5 + (e1∧e2∧e3⊙e1∧e3) x7 y5 +
  ((e1∧e2⊙e2∧e3) x4 + (e1∧e3⊙e2∧e3) x5 + (e2∧e3⊙e2∧e3) x6) y6 + (e1∧e2∧e3⊙e2∧e3) x7 y6 +
  (e1∧e2∧e3⊙e1∧e2∧e3) x7 y7 + x4 y0 e1∧e2 + x5 y0 e1∧e3 + x6 y0 e2∧e3 + x7 y0 e1∧e2∧e3

We can expand these interior products to inner products and replace them by elements of the metric tensor by using the GrassmannAlgebra function ToMetricForm. For example, in the case of a Euclidean metric we have:

𝔾1; ToMetricForm[Z]
x0 y0 + e1 x1 y0 + e2 x2 y0 + e3 x3 y0 + x1 y1 + e2 x4 y1 + e3 x5 y1 + x2 y2 −
  e1 x4 y2 + e3 x6 y2 + x3 y3 − e1 x5 y3 − e2 x6 y3 + x4 y4 + e3 x7 y4 + x5 y5 −
  e2 x7 y5 + x6 y6 + e1 x7 y6 + x7 y7 + x4 y0 e1∧e2 + x7 y3 e1∧e2 + x5 y0 e1∧e3 −
  x7 y2 e1∧e3 + x6 y0 e2∧e3 + x7 y1 e2∧e3 + x7 y0 e1∧e2∧e3
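For an orthonormal basis, every interior product of basis elements appearing above reduces to a combinatorial rule: e_S⊙e_T is zero unless the indices T are contained in S, and otherwise equals ±e_{S\T}, the sign being the parity of rearranging S into T followed by S\T. A Python sketch of this rule (illustrative only, Euclidean case; the function names are hypothetical and it is not the package's ToMetricForm):

```python
def perm_sign(seq):
    """Parity of the permutation taking sorted(seq) to seq."""
    s, lst = 1, list(seq)
    for i in range(len(lst)):
        for j in range(i + 1, len(lst)):
            if lst[i] > lst[j]:
                s = -s
    return s

def interior(S, T):
    """Interior product e_S ⊙ e_T of basis elements in an orthonormal basis.
    Returns (sign, remaining indices), or None when the product is zero."""
    if not set(T) <= set(S):
        return None                      # zero: T must be contained in S
    rest = tuple(i for i in S if i not in T)
    # parity of rearranging sorted S into (T, S minus T)
    return perm_sign(tuple(T) + rest), rest

# checks against terms of the Euclidean ToMetricForm[Z] result above
assert interior((1, 2), (1,)) == (1, (2,))        # +e2 x4 y1
assert interior((1, 2), (2,)) == (-1, (1,))       # -e1 x4 y2
assert interior((1, 2, 3), (1, 3)) == (-1, (2,))  # -e2 x7 y5
assert interior((1, 2, 3), (1, 2, 3)) == (1, ())  # +x7 y7
assert interior((1,), (2,)) is None               # orthogonal 1-elements: zero
```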

9.4 Simplifying Grassmann Numbers


Elementary simplifying operations
All of the elementary simplifying operations that work with elements of a single exterior linear space also work with Grassmann numbers. In this section we take a Grassmann number A in 2-space involving both exterior and interior products, and explore the application of those component operations of GrassmannSimplify which apply to exterior and interior products.

A = ((1 + x)∧(2 + y + y∧x))⊙(3 + z)
((1 + x)∧(2 + y + y∧x))⊙(3 + z)

Expanding products
ExpandProducts performs the first level of the simplification operation by simply expanding out any products. It treats scalars as elements of the Grassmann algebra, just like those of higher grade. Taking the expression A and expanding out both the exterior and interior products gives:

A1 = ExpandProducts[A]
1∧2⊙3 + 1∧2⊙z + 1∧y⊙3 + 1∧y⊙z + x∧2⊙3 + x∧2⊙z + x∧y⊙3 + x∧y⊙z +
  1∧y∧x⊙3 + 1∧y∧x⊙z + x∧y∧x⊙3 + x∧y∧x⊙z

Factoring scalars
FactorScalars collects all the scalars in each term, which figure either by way of ordinary multiplication or by way of exterior multiplication, and multiplies them together as a scalar factor at the beginning of each term. Remember that both the exterior and the interior product of scalars is a scalar. Mathematica will also automatically rearrange the terms in this sum according to its internal ordering algorithm.

A2 = FactorScalars[A1]
6 + 6 x + 3 y + 2 (1⊙z) + 2 (x⊙z) + y⊙z + x∧y⊙z + y∧x⊙z + x∧y∧x⊙z + 3 x∧y + 3 y∧x + 3 x∧y∧x

Checking for zero terms


ZeroSimplify looks at each term and decides if it is zero, either from nilpotency or from 'expotency'. A term is nilpotent if it contains repeated 1-element factors. A term is expotent if the number of 1-element factors in it is greater than the dimension of the space. (We have coined the term 'expotent' for convenience.) An interior product is zero if the grade of the first factor is less than the grade of the second factor. ZeroSimplify then puts those terms it finds to be zero equal to zero.

A3 = ZeroSimplify[A2]
6 + 6 x + 3 y + 2 (x⊙z) + y⊙z + x∧y⊙z + y∧x⊙z + 3 x∧y + 3 y∧x
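The two vanishing tests applied to the exterior factors of a term can be sketched directly (illustrative Python; term_is_zero is a hypothetical name, not a package function):

```python
def term_is_zero(factors, dim):
    """A product of 1-elements vanishes if a factor repeats (nilpotency)
    or if there are more factors than the dimension of the space (expotency)."""
    return len(set(factors)) < len(factors) or len(factors) > dim

assert term_is_zero(["x", "y", "x"], 2)        # nilpotent: x repeats
assert term_is_zero(["x", "y", "z"], 2)        # expotent: grade 3 in 2-space
assert not term_is_zero(["x", "y"], 2)         # x∧y survives in 2-space
```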

Reordering factors
In order to simplify further it is necessary to put the factors of each term into a canonical order, so that terms of opposite sign in an exterior product will cancel, and any inner product will be transformed into just one of its two symmetric forms. This reordering may be achieved by using the GrassmannAlgebra ToStandardOrdering function. (The rules for GrassmannAlgebra standard ordering are given in the package documentation.)

A4 = ToStandardOrdering[A3]
6 + 6 x + 3 y + 2 (x⊙z) + y⊙z

Simplifying expressions
We can perform all of the simplifying operations above at once with GrassmannSimplify. Applying GrassmannSimplify to our original expression A, we recover the expression A4.

𝒢[A]
6 + 6 x + 3 y + 2 (x⊙z) + y⊙z

9.5 Powers of Grassmann Numbers


Direct computation of powers
The zeroth power of a Grassmann number is defined to be the unit scalar 1. The first power is defined to be the Grassmann number itself. Higher powers can be obtained by simply taking the exterior product of the number with itself the requisite number of times, and then applying GrassmannSimplify to simplify the result. To refer to an exterior power p of a Grassmann number X (but not to compute it), we will use the usual notation X^p. As an example we take the general element of 3-space, X, introduced at the beginning of this chapter, and calculate some powers:

X = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3

𝒢[X∧X]
x0² + 2 e1 x0 x1 + 2 e2 x0 x2 + 2 e3 x0 x3 + 2 x0 x4 e1∧e2 + 2 x0 x5 e1∧e3 +
  2 x0 x6 e2∧e3 + (−2 x2 x5 + 2 (x3 x4 + x1 x6 + x0 x7)) e1∧e2∧e3

𝒢[X∧X∧X]
x0³ + 3 e1 x0² x1 + 3 e2 x0² x2 + 3 e3 x0² x3 + 3 x0² x4 e1∧e2 + 3 x0² x5 e1∧e3 +
  3 x0² x6 e2∧e3 + (6 x0 x3 x4 − 6 x0 x2 x5 + 6 x0 x1 x6 + 3 x0² x7) e1∧e2∧e3

Direct computation is generally an inefficient way to calculate a power, as the terms are all expanded out before checking to see which are zero. It will be seen below that we can generally calculate powers more efficiently by using knowledge of which terms in the product would turn out to be zero.

Powers of even Grassmann numbers


We take the even component of X, and construct a table of its first, second, third and fourth powers.

Xe = EvenGrade[X]
x0 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3
𝒢[{Xe, Xe∧Xe, Xe∧Xe∧Xe, Xe∧Xe∧Xe∧Xe}] // Simplify // TableForm
x0 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3
x0 (x0 + 2 (x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3))
x0² (x0 + 3 (x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3))
x0³ (x0 + 4 (x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3))

The soul of the even part of a general Grassmann number in 3-space just happens to be nilpotent, due to its being simple. (See Section 2.10.)

Xes = EvenSoul[X]
x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3

𝒢[Xes∧Xes]
0

However, in the general case an even soul will not be nilpotent, but its powers will become zero when the grades of the terms in the power expansion exceed the dimension of the space. We can show this by calculating the powers of an even soul in spaces of successively higher dimensions. Let

Zes = m∧p + q∧r∧s∧t + u∧v∧w∧x∧y∧z;

Table[{𝔅n; Dimension, 𝒢[Zes∧Zes]}, {n, 3, 8}] // TableForm
3   0
4   0
5   0
6   2 m∧p∧q∧r∧s∧t
7   2 m∧p∧q∧r∧s∧t
8   2 m∧p∧q∧r∧s∧t + 2 m∧p∧u∧v∧w∧x∧y∧z
Powers of odd Grassmann numbers


Odd Grassmann numbers are nilpotent; hence all positive integer powers except the first are zero.

Xo = OddGrade[X]
e1 x1 + e2 x2 + e3 x3 + x7 e1∧e2∧e3

𝒢[Xo∧Xo]
0
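Nilpotency of odd numbers does not depend on repeated factors alone. In a 4-space, for example, the odd number e1 + e2∧e3∧e4 squares to zero because its two cross terms cancel in pairs. A small dictionary model of the exterior product (illustrative Python, not the package) shows this, and also shows that an even bodyless number need not be nilpotent:

```python
def wedge(x, y):
    """Exterior product of multivectors stored as {tuple of basis indices: coeff}."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            idx = a + b
            if len(set(idx)) < len(idx):          # repeated factor => zero
                continue
            sign, lst = 1, list(idx)
            for i in range(len(lst)):             # parity of sorting permutation
                for j in range(i + 1, len(lst)):
                    if lst[i] > lst[j]:
                        sign = -sign
            k = tuple(sorted(idx))
            out[k] = out.get(k, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

v = {(1,): 1, (2, 3, 4): 1}        # odd number: grades 1 and 3
assert wedge(v, v) == {}           # e1∧e234 and e234∧e1 cancel: nilpotent

w = {(1, 2): 1, (3, 4): 1}         # even, bodyless, but NOT nilpotent in 4-space
assert wedge(w, w) == {(1, 2, 3, 4): 2}
```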

Computing positive powers of Grassmann numbers


We can make use of the nilpotency of the odd component to compute powers of Grassmann numbers more efficiently than by the direct approach described at the beginning of this section, by noting that it causes a binomial expansion of the sum of the even and odd components to terminate after the first two terms. Let X = Xe + Xo; then:

X^p = (Xe + Xo)^p = Xe^p + p Xe^(p−1)∧Xo + 0 + …

X^p = Xe^(p−1)∧(Xe + p Xo)    [9.1]

where the powers p and p−1 refer to the exterior powers of the Grassmann numbers. (The binomial expansion applies because the even component Xe commutes with Xo, and all higher terms contain Xo∧Xo, which is zero.) For example, the cube of a general Grassmann number X in 3-space may then be calculated as:

𝔅3; Xe = EvenGrade[X]; Xo = OddGrade[X];
𝒢[Xe∧Xe∧(Xe + 3 Xo)]
x0³ + 3 e1 x0² x1 + 3 e2 x0² x2 + 3 e3 x0² x3 + 3 x0² x4 e1∧e2 + 3 x0² x5 e1∧e3 +
  3 x0² x6 e2∧e3 + (x0 (−6 x2 x5 + 6 (x3 x4 + x1 x6)) + 3 x0² x7) e1∧e2∧e3

This algorithm is used in the general power function GrassmannPower described below.
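Formula 9.1 can be verified numerically with the same kind of dictionary model of the exterior product (an illustrative Python sketch; it is not GrassmannPower): the direct cube of a random X in 3-space agrees with Xe∧Xe∧(Xe + 3 Xo).

```python
import random

def wedge(x, y):
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            idx = a + b
            if len(set(idx)) < len(idx):          # nilpotency of 1-elements
                continue
            sign, lst = 1, list(idx)
            for i in range(len(lst)):
                for j in range(i + 1, len(lst)):
                    if lst[i] > lst[j]:
                        sign = -sign
            k = tuple(sorted(idx))
            out[k] = out.get(k, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def scale(x, s):
    return {k: s * v for k, v in x.items()}

def madd(x, y):
    out = dict(x)
    for k, v in y.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

random.seed(7)
keys = [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
X = {k: random.randint(-4, 4) for k in keys}
Xe = {k: v for k, v in X.items() if len(k) % 2 == 0}   # even part
Xo = {k: v for k, v in X.items() if len(k) % 2 == 1}   # odd part (nilpotent)

direct = wedge(wedge(X, X), X)                         # X∧X∧X
shortcut = wedge(wedge(Xe, Xe), madd(Xe, scale(Xo, 3)))  # Xe∧Xe∧(Xe + 3 Xo)
assert direct == shortcut
```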

Powers of Grassmann numbers with no body


A further simplification in the computation of powers of Grassmann numbers arises when they have no body. Powers of numbers with no body will eventually become zero. In this section we determine formulae for the highest non-zero power of such numbers. The formula developed above for positive powers of a Grassmann number still applies to the specific case of numbers without a body. If we can determine an expression for the highest non-zero power of an even number with no body, we can substitute it into the formula to obtain the required result.

Let S be a Grassmann number with no body. Express S as a sum of components s_i, where i is the grade of the component. (Components may be a sum of several terms. Components with no underscript are 1-elements by default.) As a first example we take a general element in 3-space.

S = s + s₂ + s₃;

The (exterior) square of S may be obtained by expanding and simplifying S∧S.

%[S∧S]
s∧s₂ + s₂∧s

which simplifies to:


S∧S == 2 s∧s₂

It is clear from this that because this expression is of grade three, further multiplication by S would give zero. It thus represents an expression for the highest non-zero power (the square) of a general bodyless element in 3-space.

To generalize this result, we proceed as follows:
1) Determine an expression for the highest non-zero power of an even Grassmann number with no body in an even-dimensional space.
2) Substitute this expression into the general expression for the positive power of a Grassmann number in the last section.
3) Repeat these steps for the case of an odd-dimensional space.

The highest non-zero power pmax of an even Grassmann number (with no body) can be seen to be that which enables the smallest term (of grade 2, and hence commutative) to be multiplied by itself the largest number of times, without the result being zero. For a space of even dimension n this is obviously pmax = n/2.

For example, let X₂ be a 2-element in a 4-space.

&4 ; X₂ = CreateElement[x₂]
x1 e1∧e2 + x2 e1∧e3 + x3 e1∧e4 + x4 e2∧e3 + x5 e2∧e4 + x6 e3∧e4

The square of this number is a 4-element. Any further multiplication by a bodyless Grassmann number will give zero.
%[X₂∧X₂]
(-2 x2 x5 + 2 (x3 x4 + x1 x6)) e1∧e2∧e3∧e4

Substituting s₂ for Xe and n/2 for p into formula 9.1 for the pth power gives:

X^p == s₂^(n/2 - 1)∧(s₂ + (n/2) Xo)

The only odd component Xo capable of generating a non-zero term is the one of grade 1. Hence Xo is equal to s, and we have finally that:

The maximum non-zero power of a bodiless Grassmann number in a space of even dimension n is equal to n/2 and may be computed from the formula

X^pmax == X^(n/2) == s₂^(n/2 - 1)∧(s₂ + (n/2) s)    n even    9.2


A similar argument may be offered for odd-dimensional spaces. The highest non-zero power of an even Grassmann number (with no body) can be seen to be that which enables the smallest term (of grade 2, and hence commutative) to be multiplied by itself the largest number of times, without the result being zero. For a space of odd dimension n this is obviously (n-1)/2. However the highest power of a general bodiless number will be one greater than this, due to the possibility of multiplying this even product by the element of grade 1. Hence pmax = (n+1)/2.

Substituting s₂ for Xe, s for Xo, and (n+1)/2 for p into the formula for the pth power, and noting that the term involving the power of s₂ only is zero, leads to the following:

The maximum non-zero power of a bodiless Grassmann number in a space of odd dimension n is equal to (n+1)/2 and may be computed from the formula

X^pmax == X^((n+1)/2) == ((n+1)/2) s∧s₂^((n-1)/2)    n odd    9.3

A formula which gives the highest power for both even and odd spaces is

pmax == (1/4) (2 n + 1 - (-1)^n)    9.4

The following table gives the maximum non-zero power of a bodiless Grassmann number, and the formula for it, for spaces up to dimension 8.

n   pmax   X^pmax
1   1      s
2   1      s + s₂
3   2      2 s∧s₂
4   2      (2 s + s₂)∧s₂
5   3      3 s∧s₂∧s₂
6   3      (3 s + s₂)∧s₂∧s₂
7   4      4 s∧s₂∧s₂∧s₂
8   4      (4 s + s₂)∧s₂∧s₂∧s₂

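Formula 9.4 can be sanity-checked against this table with a few lines of Python (p_max is a hypothetical helper name used only for this sketch, not a GrassmannAlgebra function):

```python
def p_max(n):
    # formula 9.4: highest non-zero power of a bodiless
    # Grassmann number in a space of dimension n
    return (2 * n + 1 - (-1) ** n) // 4

print([p_max(n) for n in range(1, 9)])  # [1, 1, 2, 2, 3, 3, 4, 4]
```

The values reproduce the pmax column of the table: n/2 for even n and (n+1)/2 for odd n.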
Verifying the formulae in a 3-space


Declare a 3-space and create a Grassmann number X3 with no body.
&3 ; X3 = Soul[CreateGrassmannNumber[x]]
e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3


{s = ExtractGrade[1][X3], s₂ = ExtractGrade[2][X3]}
{e1 x1 + e2 x2 + e3 x3, x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3}
%[X3∧X3 == 2 s∧s₂]
True

Verifying the formulae in a 4-space


Declare a 4-space and create a Grassmann number X4 with no body.
&4 ; X4 = Soul[CreateGrassmannNumber[x]]
e1 x1 + e2 x2 + e3 x3 + e4 x4 + x5 e1∧e2 + x6 e1∧e3 + x7 e1∧e4 + x8 e2∧e3 + x9 e2∧e4 + x10 e3∧e4 + x11 e1∧e2∧e3 + x12 e1∧e2∧e4 + x13 e1∧e3∧e4 + x14 e2∧e3∧e4 + x15 e1∧e2∧e3∧e4
{s = ExtractGrade[1][X4], s₂ = ExtractGrade[2][X4]}
{e1 x1 + e2 x2 + e3 x3 + e4 x4, x5 e1∧e2 + x6 e1∧e3 + x7 e1∧e4 + x8 e2∧e3 + x9 e2∧e4 + x10 e3∧e4}
%[X4∧X4 == (2 s + s₂)∧s₂]
True

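The even- and odd-dimensional formulae can also be checked numerically. The Python sketch below is an illustrative stand-in for the Mathematica computations above — wedge, power and p_max are assumed helper names, and the dictionary encoding of basis elements is an assumption of this sketch. It constructs a bodiless Grassmann number with unit coefficients in each space of dimension 1 to 5 and confirms that its pmax-th exterior power is non-zero while the next power vanishes:

```python
from itertools import product, combinations

def wedge(a, b):
    """Exterior product of Grassmann numbers stored as
    {sorted-basis-index-tuple: coefficient} dictionaries."""
    out = {}
    for (ka, ca), (kb, cb) in product(a.items(), b.items()):
        if set(ka) & set(kb):
            continue                      # repeated basis factor -> 0
        merged = ka + kb
        inv = sum(1 for i in range(len(merged))
                    for j in range(i + 1, len(merged))
                    if merged[i] > merged[j])
        key = tuple(sorted(merged))
        out[key] = out.get(key, 0) + (-1) ** inv * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def power(x, p):
    result = {(): 1}                      # exterior 0th power is 1
    for _ in range(p):
        result = wedge(result, x)
    return result

def p_max(n):
    return (2 * n + 1 - (-1) ** n) // 4   # formula 9.4

results = []
for n in range(1, 6):
    # bodiless number with every coefficient equal to 1
    x = {k: 1 for g in range(1, n + 1) for k in combinations(range(n), g)}
    pm = p_max(n)
    results.append((n, pm, power(x, pm) != {}, power(x, pm + 1) == {}))
print(results)
```

Each tuple reports (n, pmax, X^pmax ≠ 0, X^(pmax+1) == 0); all the boolean entries come out True.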
The inverse of a Grassmann number


Let X be a general Grassmann number in 3-space.
&3 ; X = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3

Finding an inverse of X is equivalent to finding the Grassmann number A such that X∧A == 1 or A∧X == 1. In what follows we shall show that A commutes with X and therefore is a unique inverse, which we may call the inverse of X. To find A we need to solve the equation X∧A - 1 == 0. First we create a general Grassmann number for A.
A = CreateGrassmannNumber[a]
a0 + e1 a1 + e2 a2 + e3 a3 + a4 e1∧e2 + a5 e1∧e3 + a6 e2∧e3 + a7 e1∧e2∧e3

Then calculate and simplify the expression X∧A - 1.


%[X∧A - 1]
-1 + a0 x0 + e1 (a1 x0 + a0 x1) + e2 (a2 x0 + a0 x2) + e3 (a3 x0 + a0 x3) + (a4 x0 + a2 x1 - a1 x2 + a0 x4) e1∧e2 + (a5 x0 + a3 x1 - a1 x3 + a0 x5) e1∧e3 + (a6 x0 + a3 x2 - a2 x3 + a0 x6) e2∧e3 + (a7 x0 + a6 x1 - a5 x2 + a4 x3 + a3 x4 - a2 x5 + a1 x6 + a0 x7) e1∧e2∧e3

To make this expression zero we need to solve for the ai in terms of the xi which make the coefficients zero. This is most easily done with the GrassmannSolve function to be introduced later. Here, to see more clearly the process, we do it directly with Mathematica's Solve function.
S = Solve[{-1 + a0 x0 == 0, a1 x0 + a0 x1 == 0, a2 x0 + a0 x2 == 0, a3 x0 + a0 x3 == 0, a4 x0 + a2 x1 - a1 x2 + a0 x4 == 0, a5 x0 + a3 x1 - a1 x3 + a0 x5 == 0, a6 x0 + a3 x2 - a2 x3 + a0 x6 == 0, a7 x0 + a6 x1 - a5 x2 + a4 x3 + a3 x4 - a2 x5 + a1 x6 + a0 x7 == 0}] // Flatten
{a0 → 1/x0, a1 → -x1/x0^2, a2 → -x2/x0^2, a3 → -x3/x0^2, a4 → -x4/x0^2, a5 → -x5/x0^2, a6 → -x6/x0^2, a7 → (2 x3 x4 - 2 x2 x5 + 2 x1 x6 - x0 x7)/x0^3}

We denote the inverse of X by Xr . We obtain an explicit expression for Xr by substituting the values obtained above in the formula for A.
Xr = A /. S
1/x0 - e1 x1/x0^2 - e2 x2/x0^2 - e3 x3/x0^2 - x4 e1∧e2/x0^2 - x5 e1∧e3/x0^2 - x6 e2∧e3/x0^2 + (2 x3 x4 - 2 x2 x5 + 2 x1 x6 - x0 x7) e1∧e2∧e3/x0^3

To verify that this is indeed the inverse, and that the inverse commutes, we calculate the products of X and Xr.

%[{X∧Xr, Xr∧X}] // Simplify
{1, 1}

To avoid having to rewrite the coefficients for the Solve equations, we can use the GrassmannAlgebra function GrassmannSolve (discussed in Section 9.6) to get the same results. To use GrassmannSolve we only need to enter a single undefined symbol (here we have used Y) for the Grassmann number we are looking to solve for.


GrassmannSolve[X∧Y == 1, Y]
{{Y → 1/x0 - e1 x1/x0^2 - e2 x2/x0^2 - e3 x3/x0^2 - x4 e1∧e2/x0^2 - x5 e1∧e3/x0^2 - x6 e2∧e3/x0^2 - (-2 x3 x4 + 2 x2 x5 - 2 x1 x6 + x0 x7) e1∧e2∧e3/x0^3}}

It is evident from this example that (at least in 3-space), for a Grassmann number to have an inverse, it must have a non-zero body. To calculate an inverse directly, we can use the function GrassmannInverse. To see the pattern for inverses in general, we calculate the inverses of a general Grassmann number in 1, 2, 3, and 4-spaces:

Inverse in a 1-space
&1 ; X1 = CreateGrassmannNumber[x]
x0 + e1 x1
X1r = GrassmannInverse[X1]
1/x0 - e1 x1/x0^2

Inverse in a 2-space
&2 ; X2 = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + x3 e1∧e2
X2r = GrassmannInverse[X2]
1/x0 - e1 x1/x0^2 - e2 x2/x0^2 - x3 e1∧e2/x0^2

Inverse in a 3-space
&3 ; X3 = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3
X3r = GrassmannInverse[X3]
1/x0 - e1 x1/x0^2 - e2 x2/x0^2 - e3 x3/x0^2 - x4 e1∧e2/x0^2 - x5 e1∧e3/x0^2 - x6 e2∧e3/x0^2 + (-x7/x0^2 + (-2 x2 x5 + 2 (x3 x4 + x1 x6))/x0^3) e1∧e2∧e3


Inverse in a 4-space
&4 ; X4 = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + e3 x3 + e4 x4 + x5 e1∧e2 + x6 e1∧e3 + x7 e1∧e4 + x8 e2∧e3 + x9 e2∧e4 + x10 e3∧e4 + x11 e1∧e2∧e3 + x12 e1∧e2∧e4 + x13 e1∧e3∧e4 + x14 e2∧e3∧e4 + x15 e1∧e2∧e3∧e4
X4r = GrassmannInverse[X4]
1/x0 - e1 x1/x0^2 - e2 x2/x0^2 - e3 x3/x0^2 - e4 x4/x0^2 - x5 e1∧e2/x0^2 - x6 e1∧e3/x0^2 - x7 e1∧e4/x0^2 - x8 e2∧e3/x0^2 - x9 e2∧e4/x0^2 - x10 e3∧e4/x0^2 + (-x11/x0^2 + (-2 x2 x6 + 2 (x3 x5 + x1 x8))/x0^3) e1∧e2∧e3 + (-x12/x0^2 + (-2 x2 x7 + 2 (x4 x5 + x1 x9))/x0^3) e1∧e2∧e4 + (-x13/x0^2 + (-2 x3 x7 + 2 (x4 x6 + x1 x10))/x0^3) e1∧e3∧e4 + (-x14/x0^2 + (-2 x3 x9 + 2 (x4 x8 + x2 x10))/x0^3) e2∧e3∧e4 + (-x15/x0^2 + (-2 x6 x9 + 2 (x7 x8 + x5 x10))/x0^3) e1∧e2∧e3∧e4
The form of the inverse of a Grassmann number


To understand the form of the inverse of a Grassmann number it is instructive to generate it by yet another method. Let b be a bodiless Grassmann number and b^q its qth exterior power. Then the following identity will hold.

(1 + b)∧(1 - b + b^2 - b^3 + b^4 - ... ± b^q) == 1 ± b^(q+1)

We can see this most easily by writing the expansion in the form below and noting that all the intermediate terms cancel.

(1 - b + b^2 - b^3 + b^4 - ... ± b^q) + (b - b^2 + b^3 - b^4 + ... ± b^(q+1)) == 1 ± b^(q+1)

Alternatively, consider the product in the reverse order (1 - b + b^2 - b^3 + b^4 - ... ± b^q)∧(1 + b). Expanding this gives precisely the same result. Hence a Grassmann number and its inverse commute.

Furthermore, it has been shown in the previous section that for a bodiless Grassmann number in a space of n dimensions, the greatest non-zero power is pmax = (1/4)(2 n + 1 - (-1)^n). Thus if q is equal to pmax, b^(q+1) is equal to zero, and the identity becomes:

(1 + b)∧(1 - b + b^2 - b^3 + b^4 - ... ± b^pmax) == 1    9.5

We have thus shown that 1 - b + b^2 - b^3 + b^4 - ... ± b^pmax is the inverse of 1 + b.

If now we have a general Grassmann number X, say, we can write X as X == x0 (1 + b), so that if Xs is the soul of X, then

X == x0 (1 + b) == x0 + Xs,    b == Xs/x0

The inverse of X then becomes:

X^-1 == (1/x0) (1 - Xs/x0 + (Xs/x0)^2 - (Xs/x0)^3 + ... ± (Xs/x0)^pmax)    9.6

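Formula 9.6 can be illustrated numerically in a 2-space, where pmax = 1 and the series terminates after the linear term. The Python sketch below (gp and inverse are hypothetical helper names, and the 4-tuple encoding of a 2-space Grassmann number is an assumption of this sketch, not the GrassmannAlgebra representation) confirms that the truncated series gives a two-sided inverse:

```python
def gp(a, b):
    """Exterior product in 2-space; a number is a tuple
    (c0, c1, c2, c12) meaning c0 + c1 e1 + c2 e2 + c12 e1∧e2."""
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return (a0 * b0,
            a0 * b1 + a1 * b0,
            a0 * b2 + a2 * b0,
            a0 * b12 + a12 * b0 + a1 * b2 - a2 * b1)

def inverse(x):
    # formula 9.6 with pmax = 1 in 2-space:
    # X^-1 = (1/x0)(1 - Xs/x0)
    x0, x1, x2, x12 = x
    return (1 / x0, -x1 / x0**2, -x2 / x0**2, -x12 / x0**2)

X = (2.0, 3.0, -1.0, 5.0)
print(gp(X, inverse(X)), gp(inverse(X), X))
# (1.0, 0.0, 0.0, 0.0) (1.0, 0.0, 0.0, 0.0)
```

Both orders of multiplication give 1, echoing the commutation argument above.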
We tabulate some examples for X == x0 + Xs, expressing the inverse X^-1 both in terms of Xs and x0 alone, and in terms of the odd and even components s and s₂ of Xs.

n   pmax   X^-1 (in terms of Xs)                  X^-1 (in terms of s and s₂)
1   1      (1/x0)(1 - Xs/x0)                      (1/x0)(1 - Xs/x0)
2   1      (1/x0)(1 - Xs/x0)                      (1/x0)(1 - Xs/x0)
3   2      (1/x0)(1 - Xs/x0 + (Xs/x0)^2)         (1/x0)(1 - Xs/x0 + (2/x0^2) s∧s₂)
4   2      (1/x0)(1 - Xs/x0 + (Xs/x0)^2)         (1/x0)(1 - Xs/x0 + (1/x0^2)(2 s + s₂)∧s₂)
It is easy to verify the formulae of the table for dimensions 1 and 2 from the results above. The formulae for spaces of dimension 3 and 4 are given below. But for a slight rearrangement of terms they are identical to the results above.

Verifying the formula in a 3-space


&3 ; Xs = Soul[CreateGrassmannNumber[x]]
e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3


{s = ExtractGrade[1][Xs], s₂ = ExtractGrade[2][Xs]}
{e1 x1 + e2 x2 + e3 x3, x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3}
%[(1/x0)(1 - Xs/x0 + (2/x0^2) s∧s₂)∧(x0 + Xs) == 1]
True
Verifying the formula in a 4-space


&4 ; Xs = Soul[CreateGrassmannNumber[x]]
e1 x1 + e2 x2 + e3 x3 + e4 x4 + x5 e1∧e2 + x6 e1∧e3 + x7 e1∧e4 + x8 e2∧e3 + x9 e2∧e4 + x10 e3∧e4 + x11 e1∧e2∧e3 + x12 e1∧e2∧e4 + x13 e1∧e3∧e4 + x14 e2∧e3∧e4 + x15 e1∧e2∧e3∧e4
{s = ExtractGrade[1][Xs], s₂ = ExtractGrade[2][Xs]}
{e1 x1 + e2 x2 + e3 x3 + e4 x4, x5 e1∧e2 + x6 e1∧e3 + x7 e1∧e4 + x8 e2∧e3 + x9 e2∧e4 + x10 e3∧e4}
%[(1/x0)(1 - Xs/x0 + (1/x0^2)(2 s + s₂)∧s₂)∧(x0 + Xs) == 1]
True
Integer powers of a Grassmann number


In the GrassmannAlgebra function GrassmannPower we collect together the algorithm introduced above for positive integer powers of Grassmann numbers with that for calculating inverses, so that GrassmannPower works with any integer power, positive or negative.
? GrassmannPower GrassmannPower@X,nD computes the nth power of a Grassmann number X in a space of the currently declared number of dimensions. n may be any numeric or symbolic quantity.

Powers in terms of basis elements


&2 ; X = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + x3 e1∧e2
Y = GrassmannPower[X, 3]
x0^3 + 3 e1 x0^2 x1 + 3 e2 x0^2 x2 + 3 x0^2 x3 e1∧e2


Z = GrassmannPower[X, -3]
1/x0^3 - 3 e1 x1/x0^4 - 3 e2 x2/x0^4 - 3 x3 e1∧e2/x0^4
As usual, we verify with GrassmannSimplify.


%[{Y∧Z, Z∧Y}]
{1, 1}

General powers
GrassmannPower will also work with general elements that are not expressed in terms of basis elements.

X = 1 + x + x∧y + x∧y∧z;
{Y = GrassmannPower[X, 3], Z = GrassmannPower[X, -3]}
{1 + 3 x + 3 x∧y, 1 - 3 x - 3 x∧y}
%[{Y∧Z, Z∧Y}]
{1, 1}

Symbolic powers
We take a general Grassmann number in 2-space.
&2 ; X = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + x3 e1∧e2
Y = GrassmannPower[X, n]
x0^n + n e1 x0^(n-1) x1 + n e2 x0^(n-1) x2 + n x0^(n-1) x3 e1∧e2
Z = GrassmannPower[X, -n]
x0^-n - n e1 x0^(-n-1) x1 - n e2 x0^(-n-1) x2 - n x0^(-n-1) x3 e1∧e2
Before we verify that these are indeed inverses of each other, we need to declare the power n to be a scalar.
DeclareExtraScalars[n];
%[{Y∧Z, Z∧Y}]
{1, 1}

The nth power of a general Grassmann number in 3-space is:


&3 ; X = CreateGrassmannNumber[x]; Y = GrassmannPower[X, n]
x0^n + n e1 x0^(n-1) x1 + n e2 x0^(n-1) x2 + n e3 x0^(n-1) x3 + n x0^(n-1) x4 e1∧e2 + n x0^(n-1) x5 e1∧e3 + n x0^(n-1) x6 e2∧e3 + (x0^(n-2) ((n - n^2) x2 x5 + (-n + n^2) (x3 x4 + x1 x6)) + n x0^(n-1) x7) e1∧e2∧e3
Z = GrassmannPower[X, -n]
x0^-n - n e1 x0^(-n-1) x1 - n e2 x0^(-n-1) x2 - n e3 x0^(-n-1) x3 - n x0^(-n-1) x4 e1∧e2 - n x0^(-n-1) x5 e1∧e3 - n x0^(-n-1) x6 e2∧e3 + (x0^(-n-2) (x3 (n x4 + n^2 x4) - n x2 x5 - n^2 x2 x5 + n x1 x6 + n^2 x1 x6) - n x0^(-n-1) x7) e1∧e2∧e3

%[{Y∧Z, Z∧Y}]
{1, 1}

General powers of a Grassmann number


As well as integer powers, GrassmannPower is able to compute non-integer powers.

The square root of a Grassmann number


The square root of a general Grassmann number X in 3-space is given by:
&3 ; X = CreateGrassmannNumber[x]; Y = GrassmannPower[X, 1/2]
Sqrt[x0] + e1 x1/(2 Sqrt[x0]) + e2 x2/(2 Sqrt[x0]) + e3 x3/(2 Sqrt[x0]) + x4 e1∧e2/(2 Sqrt[x0]) + x5 e1∧e3/(2 Sqrt[x0]) + x6 e2∧e3/(2 Sqrt[x0]) + (x7/(2 Sqrt[x0]) + (-x3 x4 + x2 x5 - x1 x6)/(4 x0^(3/2))) e1∧e2∧e3

We verify that this is indeed the square root.


Simplify[%[X == Y∧Y]]
True

The pth power of a Grassmann number


Consider a general Grassmann number X in 2-space raised to the power of p and -p.
&2 ; X = CreateGrassmannNumber[x]; Y = GrassmannPower[X, p]
x0^p + p e1 x0^(p-1) x1 + p e2 x0^(p-1) x2 + p x0^(p-1) x3 e1∧e2


Z = GrassmannPower[X, -p]
x0^-p - p e1 x0^(-p-1) x1 - p e2 x0^(-p-1) x2 - p x0^(-p-1) x3 e1∧e2

We verify that these are indeed inverses.


%[{Y∧Z, Z∧Y}]
{1, 1}

9.6 Solving Equations


Solving for unknown coefficients
Suppose we have a Grassmann number whose coefficients are functions of a number of parameters, and we wish to find the values of the parameters which will make the number zero. For example:
Z = a - b - 1 + (b + 3 - 5 a) e1 + (d - 2 c + 6) e3 + (e + 4 a) e1∧e2 + (c + 2 + 2 g) e2∧e3 + (-1 + e - 7 g + b) e1∧e2∧e3;

We can calculate the values of the parameters by using the GrassmannAlgebra function GrassmannScalarSolve.
GrassmannScalarSolve[Z == 0]
{{a → 1/2, b → -1/2, c → -1, d → -8, e → -2, g → -1/2}}

GrassmannScalarSolve will also work for several equations and for general symbols (other than basis elements).

Z1 = {a (-x) + b y∧(-2 x) + f == c x + d x∧y, (7 - a) y == (13 - d) y∧x - f};
GrassmannScalarSolve[Z1]
{{a → 7, c → -7, d → 13, b → 13/2, f → 0}}

GrassmannScalarSolve can be used in several other ways. As with other Mathematica functions its usage statement can be obtained by entering ?GrassmannScalarSolve.


? GrassmannScalarSolve GrassmannScalarSolve@eqnsD attempts to find the values of those HdeclaredL scalar variables which make the equations True. GrassmannScalarSolve@eqns,scalarsD attempts to find the values of the scalars which make the equations True. If not already in the DeclaredScalars list the scalars will be added to it. GrassmannScalarSolve@eqns,scalars,elimsD attempts to find the values of the scalars which make the equations True while eliminating the scalars elims. If the equations are not fully solvable, GrassmannScalarSolve will still find the values of the scalar variables which reduce the number of terms in the equations as much as possible, and will additionally return the reduced equations.

Solving for an unknown Grassmann number


The function for solving for an unknown Grassmann number, or several unknown Grassmann numbers is GrassmannSolve.
? GrassmannSolve GrassmannSolve@eqns,varsD attempts to solve an equation or set of equations for the Grassmann number variables vars. If the equations are not fully solvable, GrassmannSolve will still find the values of the Grassmann number variables which reduce the number of terms in the equations as much as possible, and will additionally return the reduced equations.

Suppose first that we have an equation involving an unknown Grassmann number, A, say. For example, if X is a general number in 3-space, and we want to find its inverse A as we did in Section 9.5 above, we can solve the following equation for A:
&3 ; X = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3
R = GrassmannSolve[X∧A == 1, A]
{{A → 1/x0 - e1 x1/x0^2 - e2 x2/x0^2 - e3 x3/x0^2 - x4 e1∧e2/x0^2 - x5 e1∧e3/x0^2 - x6 e2∧e3/x0^2 - (-2 x3 x4 + 2 x2 x5 - 2 x1 x6 + x0 x7) e1∧e2∧e3/x0^3}}

As usual, we verify the result.


%[X∧A == 1 /. R]
{True}

In general, GrassmannSolve can solve the same sorts of equations that Mathematica's inbuilt Solve routines can deal with because it uses Solve as its main calculation engine. In particular, it can handle powers of the unknowns. Further, it does not require the equations necessarily to be expressed in terms of basis elements. For example, we can take a quadratic equation in a:
Q = a∧a + 2 (1 + x + x∧y)∧a + (1 + 2 x + 2 x∧y) == 0;
S = GrassmannSolve[Q, a]
{{a → -1 + x C1 + C2 x∧y}}

Here there are two arbitrary parameters C1 and C2 introduced by the GrassmannSolve function. (C is a symbol reserved by Mathematica for the generation of unknown coefficients in the solution of equations.) Now test whether a is indeed a solution to the equation:
%[Q /. S]
{True}

Note that here we have a double infinity of solutions, parametrized by the pair {C1 ,C2 }. This means that for any values of C1 and C2 , S is a solution. For example for various C1 and C2 :
{%[Q /. a → -1 - x], %[Q /. a → -1], %[Q /. a → -1 - 9 x∧y]}
{True, True, True}

GrassmannSolve can also take several equations in several unknowns. As an example we take the following pair of equations:

Q = {(1 + x)∧b + x∧y∧a == 3 + 2 x∧y, (1 - y)∧a∧a + (2 + y + x + 5 x∧y) == 7 - 5 x};

First, we solve for the two unknown Grassmann numbers a and b.


S = GrassmannSolve[Q, {a, b}]
{{a → Sqrt[5] - 3 x/Sqrt[5] + 2 y/Sqrt[5] + x∧y/(2 Sqrt[5]), b → 3 - 3 x + (2 - Sqrt[5]) x∧y},
 {a → -Sqrt[5] + 3 x/Sqrt[5] - 2 y/Sqrt[5] - x∧y/(2 Sqrt[5]), b → 3 - 3 x + (2 + Sqrt[5]) x∧y}}

Here we get two solutions, which we verify with GrassmannSimplify.


%[Q /. S]
{{True, True}, {True, True}}


Note that GrassmannAlgebra assumes that all other symbols (here, x and y) are 1-elements. We can add the symbol x to the list of Grassmann numbers to be solved for.
S = GrassmannSolve@Q, 8a, b, x<D ::a C2 + y I- 31 - 36 C1 + 11 C2 2M 12 C2 18 - 11 + C2 2 1 6 + y 2 - C2 , 12 - 11 + C2 2 + 203 y 121 6 C2 - 11 + C2 2 5 6 -

b-

108 C1 I- 11 +
2 C2 2M

x y C1 +

I5 - C2 2 M>, :a y C3 , b

18 11

, x

31 y 36

>>

Again we get two solutions, but this time involving some arbitrary scalar constants.
%[Q /. S] // Simplify
{{True, True}, {True, True}}

9.7 Exterior Division


Defining exterior quotients
An exterior quotient of two Grassmann numbers X and Y is a solution Z to the equation:

X == Y∧Z

or to the equation:

X == Z∧Y
To distinguish these two possibilities, we call the solution Z to the first equation X == Y∧Z the left exterior quotient of X by Y, because the denominator Y multiplies the quotient Z from the left. Correspondingly, we call the solution Z to the second equation X == Z∧Y the right exterior quotient of X by Y, because the denominator Y multiplies the quotient Z from the right. To solve for Z in either case, we can use the GrassmannSolve function. For example, if X and Y are general elements in a 2-space, the left exterior quotient is obtained as:
&2 ; X = CreateGrassmannNumber[x]; Y = CreateGrassmannNumber[y];
GrassmannSolve[X == Y∧Z, {Z}]
{{Z → x0/y0 - e1 (-x1 y0 + x0 y1)/y0^2 - e2 (-x2 y0 + x0 y2)/y0^2 - (-x3 y0 + x2 y1 - x1 y2 + x0 y3) e1∧e2/y0^2}}


Whereas the right exterior quotient is obtained as:


GrassmannSolve[X == Z∧Y, {Z}]
{{Z → x0/y0 - e1 (-x1 y0 + x0 y1)/y0^2 - e2 (-x2 y0 + x0 y2)/y0^2 - (-x3 y0 - x2 y1 + x1 y2 + x0 y3) e1∧e2/y0^2}}

Note the difference in signs in the e1∧e2 term. To obtain these quotients more directly, GrassmannAlgebra provides the functions LeftGrassmannDivide and RightGrassmannDivide.
? LeftGrassmannDivide LeftGrassmannDivide@X,YD calculates the Grassmann number Z such that X == YZ. ? RightGrassmannDivide RightGrassmannDivide@X,YD calculates the Grassmann number Z such that X == ZY.

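The left/right distinction can be made concrete with a small numeric sketch in 2-space (Python; gp and inverse are hypothetical helpers, and the tuple encoding is an assumption of this sketch). The left quotient is Y^-1∧X while the right quotient is X∧Y^-1, and the two generally differ in the e1∧e2 term:

```python
def gp(a, b):
    """Exterior product in 2-space; numbers are tuples
    (c0, c1, c2, c12) for c0 + c1 e1 + c2 e2 + c12 e1∧e2."""
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return (a0 * b0,
            a0 * b1 + a1 * b0,
            a0 * b2 + a2 * b0,
            a0 * b12 + a12 * b0 + a1 * b2 - a2 * b1)

def inverse(y):
    y0, y1, y2, y12 = y
    return (1 / y0, -y1 / y0**2, -y2 / y0**2, -y12 / y0**2)

X, Y = (4, 2, 6, 10), (2, 1, 1, 1)
Zl = gp(inverse(Y), X)     # left quotient:  X == Y ∧ Zl
Zr = gp(X, inverse(Y))     # right quotient: X == Zr ∧ Y
print(Zl, Zr)              # (2.0, 0.0, 2.0, 3.0) (2.0, 0.0, 2.0, 5.0)
print(gp(Y, Zl) == gp(Zr, Y) == (4, 2, 6, 10))  # True
```

Both quotients recover X when multiplied back on the appropriate side, yet they differ in their e1∧e2 coefficients, exactly as in the symbolic results above.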
A shorthand infix notation for the division operation is provided by the down-vector symbol LeftDownVector B (LeftGrassmannDivide) and RightDownArrow E (RightGrassmannDivide). The previous examples would then take the form
X B Y
x0/y0 - e1 (-x1 y0 + x0 y1)/y0^2 - e2 (-x2 y0 + x0 y2)/y0^2 - (-x3 y0 + x2 y1 - x1 y2 + x0 y3) e1∧e2/y0^2
X E Y
x0/y0 - e1 (-x1 y0 + x0 y1)/y0^2 - e2 (-x2 y0 + x0 y2)/y0^2 - (-x3 y0 - x2 y1 + x1 y2 + x0 y3) e1∧e2/y0^2

Special cases of exterior division


The inverse of a Grassmann number
The inverse of a Grassmann number may be obtained as either the left or right form quotient.


{1 B X, 1 E X}
{1/x0 - e1 x1/x0^2 - e2 x2/x0^2 - x3 e1∧e2/x0^2, 1/x0 - e1 x1/x0^2 - e2 x2/x0^2 - x3 e1∧e2/x0^2}

Division by a scalar
Division by a scalar is just the Grassmann number divided by the scalar as expected.
{X B a, X E a}
{x0/a + e1 x1/a + e2 x2/a + x3 e1∧e2/a, x0/a + e1 x1/a + e2 x2/a + x3 e1∧e2/a}

Division of a Grassmann number by itself


Division of a Grassmann number by itself gives unity.

{X B X, X E X}
{1, 1}

Division by scalar multiples


Division involving scalar multiples also gives the results expected.
{(a X) B X, X E (a X)}
{a, 1/a}

Division by an even Grassmann number


Division by an even Grassmann number will give the same result for both division operations, since exterior multiplication by an even Grassmann number is commutative.
{X B (1 + e1∧e2), X E (1 + e1∧e2)}
{x0 + e1 x1 + e2 x2 + (-x0 + x3) e1∧e2, x0 + e1 x1 + e2 x2 + (-x0 + x3) e1∧e2}

The non-uniqueness of exterior division


Because of the nilpotency of the exterior product, the division of Grassmann numbers does not necessarily lead to a unique result. To take a simple case, consider the division of a simple 2-element x∧y by one of its 1-element factors x.


(x∧y) B x
y + x C1 + C2 x∧y

The quotient is returned as a linear combination of elements containing arbitrary scalar constants C1 and C2 . To see that this is indeed a left quotient, we can multiply it by the denominator from the left and see that we get the numerator.
%[x∧(y + x C1 + C2 x∧y)]
x∧y

We get a slightly different result for the right quotient which we verify by multiplying from the right.
(x∧y) E x
-y + x C1 + C2 x∧y
%[(-y + x C1 + C2 x∧y)∧x]
x∧y

Here is a slightly more complex example.


&3 ; (x∧y∧z) B (x - 2 y + 3 z + x∧z)
z (3/2 - (9/2) C1 - 3 C2 - (3/2) C3) + y (-1 + 3 C1 + 2 C2 + C3) + x (1/2 - (3/2) C1 - C2 - (1/2) C3) + C1 x∧y + C2 x∧z + C3 y∧z + C4 x∧y∧z

In some circumstances we may not want the most general result. Say, for example, we knew that the result we wanted had to be a 1-element. We can use the GrassmannAlgebra function ExtractGrade.
ExtractGrade[1][(x∧y∧z) B (x - 2 y + 3 z + x∧z)]
z (3/2 - (9/2) C1 - 3 C2 - (3/2) C3) + y (-1 + 3 C1 + 2 C2 + C3) + x (1/2 - (3/2) C1 - C2 - (1/2) C3)

Should there be insufficient information to determine a result, the quotient will be returned unchanged. For example.
(x∧y) B z
(x∧y) B z


9.8 Factorization of Grassmann Numbers


The non-uniqueness of factorization
Factorization of a Grassmann number will rarely be unique due to the nilpotency property of the exterior product. However, using the capability of GrassmannScalarSolve to generate general solutions with arbitrary constants we may be able to find a factorization in the form we require. The GrassmannAlgebra function which implements the attempt to factorize is GrassmannFactor.
? GrassmannFactor GrassmannFactor@X,FD attempts to factorize the Grassmann number X into the form given by F. F must be an expression involving the symbols of X, together with sufficient scalar coefficients whose values are to be determined. GrassmannFactor@X,F,SD attempts to factorize the Grassmann number X by determining the scalars S. If the scalars S are not already declared as scalars, they will be automatically declared. If no factorization can be effected in the form given, the original expression will be returned.

Example: Factorizing a Grassmann number in 2-space


First, consider a general Grassmann number in 2-space.
&2 ; X = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + x3 e1∧e2

By using GrassmannFactor we find a general factorization into two Grassmann numbers of lower maximum grade. We want to see if there is a factorization in the form (a + b e1 + c e2)∧(f + g e1 + h e2), so we enter this as the second argument. The scalars that we want to find are {a,b,c,f,g,h}, so we enter this list as the third argument.
Xf = GrassmannFactor[X, (a + b e1 + c e2)∧(f + g e1 + h e2), {a, b, c, f, g, h}]
(x0/f + e1 (-h x0 x1 + f x1 x2 + f x0 x3)/(f^2 x2) + e2 (-h x0 + f x2)/f^2)∧(f + e1 (h x1 - f x3)/x2 + h e2)

This factorization is valid for any values of the scalars f and h which appear in the result, as we can verify by applying GrassmannSimplify.


%[Xf]
x0 + e1 x1 + e2 x2 + x3 e1∧e2

Example: Factorizing a 2-element in 3-space


Next, consider a general 2-element in 3-space. We know that 2-elements in 3-space are simple, hence we would expect to be able to obtain a factorization.
&3 ; Y = CreateElement[y₂]
y1 e1∧e2 + y2 e1∧e3 + y3 e2∧e3

By using GrassmannFactor we can determine a factorization. In this example we obtain two classes of solution, each with different parameters.
Xf = GrassmannFactor[Y, (a e1 + b e2 + c e3)∧(d e1 + e e2 + f e3), {a, b, c, d, e, f}]
{(e1 (y2 + c (-f y1 + e y2)/y3)/f + e2 (c e + y3)/f + c e3)∧(e1 (-f y1 + e y2)/y3 + e e2 + f e3),
 (e1 (y1 + b e y2/y3)/e + b e2 - e3 y3/e)∧(e e1 y2/y3 + e e2)}

Both may be verified as valid factorizations.


%[Xf]
{y1 e1∧e2 + y2 e1∧e3 + y3 e2∧e3, y1 e1∧e2 + y2 e1∧e3 + y3 e2∧e3}

Example: Factorizing a 3-element in 4-space


Finally, consider a specific 3-element in 4-space.
&4 ; X = -14 w∧x∧y + 26 w∧x∧z + 11 w∧y∧z + 13 x∧y∧z;

Since every (n-1)-element in an n-space is simple, we know the 3-element can be factorized. Suppose also that we would like the factorization in the following form:

J = (x a1 + y a2)∧(y b2 + z b3)∧(z g3 + w g4)
(x a1 + y a2)∧(y b2 + z b3)∧(z g3 + w g4)


By using GrassmannFactor we can explore if a factorization into such a form exists.


XJ = GrassmannFactor[X, J]
(x a1 + 11 y a1/26)∧(-14 y/(a1 g4) + 26 z/(a1 g4))∧(-13 z g4/14 + w g4)

We check that this is indeed a factorization by applying GrassmannSimplify.


%[XJ]
-14 x∧y∧w + 13 x∧y∧z + 26 x∧z∧w + 11 y∧z∧w

9.9 Functions of Grassmann Numbers


The Taylor series formula
Functions of Grassmann numbers can be defined by power series, as with ordinary numbers. Let X be a Grassmann number composed of a body x0 and a soul Xs such that X == x0 + Xs. Then the Taylor series expanded about the body x0 (and truncated after the third-order term) is:

Normal[Series[f[X], {X, x0, 3}]]
f[x0] + (X - x0) f'[x0] + (1/2)(X - x0)^2 f''[x0] + (1/6)(X - x0)^3 f^(3)[x0]

It has been shown in Section 9.5 that the degree pmax of the highest non-zero power of a Grassmann number in a space of n dimensions is given by pmax = (1/4)(2 n + 1 - (-1)^n). If we expand the series up to the term in this power, it then becomes a formula for a function f of a Grassmann number in that space. The tables below give explicit expressions for a function f[X] of a Grassmann variable in spaces of dimensions 1 to 4.
n   pmax   f[X]
1   1      f[x0] + f'[x0] Xs
2   1      f[x0] + f'[x0] Xs
3   2      f[x0] + f'[x0] Xs + (1/2) f''[x0] Xs^2
4   2      f[x0] + f'[x0] Xs + (1/2) f''[x0] Xs^2

If we replace the square of the soul of X by its simpler formulation in terms of the components of X of the first and second grade, we get a computationally more convenient expression in the case of dimensions 3 and 4.


n   pmax   f[X]
1   1      f[x0] + f'[x0] Xs
2   1      f[x0] + f'[x0] Xs
3   2      f[x0] + f'[x0] Xs + f''[x0] s∧s₂
4   2      f[x0] + f'[x0] Xs + (1/2) f''[x0] (2 s + s₂)∧s₂

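The n = 1 row of the table is just the dual-number rule f(x0 + x1 e1) = f(x0) + f'(x0) x1 e1. A minimal Python sketch (the Dual class and the exp helper are illustrative assumptions of this sketch, not part of any package) checks it for f = exp by confirming that exp(X)∧exp(-X) == 1:

```python
import math

class Dual:
    """1-space Grassmann number x0 + x1 e1 (body x0, soul x1 e1);
    the soul squares to zero under the exterior product."""
    def __init__(self, x0, x1):
        self.x0, self.x1 = x0, x1
    def __mul__(self, other):
        # (a + b e1)(c + d e1) = ac + (ad + bc) e1, since e1 ∧ e1 = 0
        return Dual(self.x0 * other.x0,
                    self.x0 * other.x1 + self.x1 * other.x0)

def exp(X):
    # the n = 1 row of the table: f(X) = f(x0) + f'(x0) Xs
    return Dual(math.exp(X.x0), math.exp(X.x0) * X.x1)

P = exp(Dual(0.5, 3.0)) * exp(Dual(-0.5, -3.0))   # exp(X) ∧ exp(-X)
print(abs(P.x0 - 1) < 1e-12, abs(P.x1) < 1e-12)   # True True
```

The product has body 1 and vanishing soul (up to floating-point round-off), as the functional identity requires.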
The form of a function of a Grassmann number


To automatically generate the form of formula for a function of a Grassmann number, we can use the GrassmannAlgebra function GrassmannFunctionForm.
? GrassmannFunctionForm GrassmannFunctionForm@8f@xD,x<,Xb,XsD expresses the function f@xD of a Grassmann number in a space of the currently declared number of dimensions in terms of the symbols Xb and Xs, where Xb represents the body, and Xs represents the soul of the number. GrassmannFunctionForm@8f@x,y,z,...D,8x,y,z,...<<,8Xb,Yb,Zb,...<,8 Xs,Ys,Zs,...<D expresses the function f@x,y,z,...D of Grassmann numbers x,y,z,... in terms of the symbols Xb,Yb,Zb,... and Xs,Ys,Zs..., where Xb,Yb,Zb,... represent the bodies, and Xs,Ys,Zs... represent the souls of the numbers. GrassmannFunctionForm@f@Xb,XsDD expresses the function f@XD of a Grassmann number X=Xb+Xs in a space of the currently declared number of dimensions in terms of body and soul symbols Xb and Xs. GrassmannFunctionForm@f@8Xb,Yb,Zb,...<,8Xs,Ys,Zs,...<DD is equivalent to the second form above.

GrassmannFunctionForm for a single variable


The form of a function will be different, depending on the dimension of the space it is in - the higher the dimension, the potentially more terms there are in the series. We can generate a table similar to the first of the pair above by using GrassmannFunctionForm.


Table[{i, (&i ; GrassmannFunctionForm[f[Xb, Xs]])}, {i, 1, 8}] // TableForm
1   f[Xb] + Xs f'[Xb]
2   f[Xb] + Xs f'[Xb]
3   f[Xb] + Xs f'[Xb] + (1/2) Xs^2 f''[Xb]
4   f[Xb] + Xs f'[Xb] + (1/2) Xs^2 f''[Xb]
5   f[Xb] + Xs f'[Xb] + (1/2) Xs^2 f''[Xb] + (1/6) Xs^3 f^(3)[Xb]
6   f[Xb] + Xs f'[Xb] + (1/2) Xs^2 f''[Xb] + (1/6) Xs^3 f^(3)[Xb]
7   f[Xb] + Xs f'[Xb] + (1/2) Xs^2 f''[Xb] + (1/6) Xs^3 f^(3)[Xb] + (1/24) Xs^4 f^(4)[Xb]
8   f[Xb] + Xs f'[Xb] + (1/2) Xs^2 f''[Xb] + (1/6) Xs^3 f^(3)[Xb] + (1/24) Xs^4 f^(4)[Xb]

GrassmannFunctionForm for several variables

GrassmannFunctionForm will also give the form of the function for functions of several Grassmann variables. Due to their length, we display them individually rather than in table form. We display only the odd dimensional results, since the following even dimensional ones are the same.

⋀1; GrassmannFunctionForm[f[{Xb, Yb}, {Xs, Ys}]]

    f[Xb,Yb] + Ys f^(0,1)[Xb,Yb] + Xs f^(1,0)[Xb,Yb] + Xs∧Ys f^(1,1)[Xb,Yb]

⋀3; GrassmannFunctionForm[f[{Xb, Yb}, {Xs, Ys}]]

    f[Xb,Yb] + Ys f^(0,1)[Xb,Yb] + (1/2) Ys^2 f^(0,2)[Xb,Yb] +
    Xs f^(1,0)[Xb,Yb] + Xs∧Ys f^(1,1)[Xb,Yb] + (1/2) Xs∧Ys^2 f^(1,2)[Xb,Yb] +
    (1/2) Xs^2 f^(2,0)[Xb,Yb] + (1/2) Xs^2∧Ys f^(2,1)[Xb,Yb] + (1/4) Xs^2∧Ys^2 f^(2,2)[Xb,Yb]


⋀5; GrassmannFunctionForm[f[{Xb, Yb}, {Xs, Ys}]]

    f[Xb,Yb] + Ys f^(0,1)[Xb,Yb] + (1/2) Ys^2 f^(0,2)[Xb,Yb] + (1/6) Ys^3 f^(0,3)[Xb,Yb] +
    Xs f^(1,0)[Xb,Yb] + Xs∧Ys f^(1,1)[Xb,Yb] + (1/2) Xs∧Ys^2 f^(1,2)[Xb,Yb] + (1/6) Xs∧Ys^3 f^(1,3)[Xb,Yb] +
    (1/2) Xs^2 f^(2,0)[Xb,Yb] + (1/2) Xs^2∧Ys f^(2,1)[Xb,Yb] + (1/4) Xs^2∧Ys^2 f^(2,2)[Xb,Yb] + (1/12) Xs^2∧Ys^3 f^(2,3)[Xb,Yb] +
    (1/6) Xs^3 f^(3,0)[Xb,Yb] + (1/6) Xs^3∧Ys f^(3,1)[Xb,Yb] + (1/12) Xs^3∧Ys^2 f^(3,2)[Xb,Yb] + (1/36) Xs^3∧Ys^3 f^(3,3)[Xb,Yb]

Calculating functions of Grassmann numbers

To calculate functions of Grassmann numbers, GrassmannAlgebra uses the GrassmannFunction operation, based on GrassmannFunctionForm.

? GrassmannFunction
GrassmannFunction[{f[x],x},X] or GrassmannFunction[f[X]] computes the function f of a single Grassmann number X in a space of the currently declared number of dimensions. In the first form f[x] may be any formula; in the second, f must be a pure function.
GrassmannFunction[{f[x,y,z,..],{x,y,z,..}},{X,Y,Z,..}] or GrassmannFunction[f[X,Y,Z,..]] computes the function f of Grassmann numbers X,Y,Z,... In the first form f[x,y,z,..] may be any formula (with xi corresponding to Xi); in the second, f must be a pure function. In both these forms the result of the function may be dependent on the ordering of the arguments X,Y,Z,...

In the sections below we explore the GrassmannFunction operation and give some examples. We will work in a 3-space with the general Grassmann number X.

⋀3; X = CreateGrassmannNumber[x]

    x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3


Powers of Grassmann numbers

The computation of powers of Grassmann numbers has already been discussed in Section 9.5. We can also calculate powers of Grassmann numbers using the GrassmannFunction operation. The syntax we will usually use is:

GrassmannFunction[X^a]

    x0^a + a x0^(a-1) e1 x1 + a x0^(a-1) e2 x2 + a x0^(a-1) e3 x3 +
    a x0^(a-1) x4 e1∧e2 + a x0^(a-1) x5 e1∧e3 + a x0^(a-1) x6 e2∧e3 +
    (x0^(a-2) ((a - a^2) x2 x5 + (-a + a^2) (x3 x4 + x1 x6)) + a x0^(a-1) x7) e1∧e2∧e3

GrassmannFunction[X^-1]

    1/x0 - (e1 x1)/x0^2 - (e2 x2)/x0^2 - (e3 x3)/x0^2 -
    (x4 e1∧e2)/x0^2 - (x5 e1∧e3)/x0^2 - (x6 e2∧e3)/x0^2 +
    (-(x7/x0^2) + (-2 x2 x5 + 2 (x3 x4 + x1 x6))/x0^3) e1∧e2∧e3

But we can also use GrassmannFunction[{x^a, x}, X], (#^a &)[X] // GrassmannFunction, or GrassmannPower[X, a] to get the same result.
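The X^-1 result can be checked numerically with the same dictionary-based Python sketch used earlier (gfunc below is our own helper, not the package's GrassmannFunction): the defining property is that the exterior product X ∧ X^-1 is 1.

```python
import math

def blade_mul(ia, ib):
    if set(ia) & set(ib):
        return 0, None
    idx, sign = list(ia + ib), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            s, key = blade_mul(ia, ib)
            if s:
                out[key] = out.get(key, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def gfunc(derivs, X):
    """Body/soul Taylor series: f(X) = sum_n f^(n)(x0)/n! soul^n (exterior powers)."""
    soul = {k: v for k, v in X.items() if k != ()}
    out, power, fact = {(): derivs[0]}, {(): 1.0}, 1.0
    for n in range(1, len(derivs)):
        power = wedge(power, soul)
        fact *= n
        if not power:
            break
        for key, v in power.items():
            out[key] = out.get(key, 0.0) + derivs[n] / fact * v
    return out

# General Grassmann number in 3-space with body x0 = 0.5
X = {(): 0.5, (1,): 0.1, (2,): 0.2, (3,): 0.3,
     (1, 2): 0.4, (1, 3): 0.5, (2, 3): 0.6, (1, 2, 3): 0.7}

# Derivatives of 1/x at x0 = 0.5: 1/x0, -1/x0^2, 2/x0^3, -6/x0^4
Xinv = gfunc([2.0, -4.0, 16.0, -96.0], X)
prod = wedge(X, Xinv)     # should be the scalar 1
```

The grade-1, grade-2 and grade-3 parts of the product cancel term by term, leaving only the scalar 1.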

Exponential and logarithmic functions of Grassmann numbers

Exponential functions

Here is the exponential of a general Grassmann number in 3-space.

expX = GrassmannFunction[Exp[X]]

    E^x0 + E^x0 e1 x1 + E^x0 e2 x2 + E^x0 e3 x3 + E^x0 x4 e1∧e2 + E^x0 x5 e1∧e3 +
    E^x0 x6 e2∧e3 + E^x0 (x3 x4 - x2 x5 + x1 x6 + x7) e1∧e2∧e3

Here is the exponential of its negative.

expXn = GrassmannFunction[Exp[-X]]

    E^-x0 - E^-x0 e1 x1 - E^-x0 e2 x2 - E^-x0 e3 x3 - E^-x0 x4 e1∧e2 - E^-x0 x5 e1∧e3 -
    E^-x0 x6 e2∧e3 + E^-x0 (x3 x4 - x2 x5 + x1 x6 - x7) e1∧e2∧e3

We verify that they are indeed inverses of one another.

%[expX ∧ expXn] // Simplify

    1
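The same verification can be run numerically with our Python sketch (gfunc is our own helper; the derivative lists are those of e^x and e^-x evaluated at the body x0 = 0.5). The e1∧e2∧e3 coefficient of exp(X) also matches the closed form E^x0 (x7 + x3 x4 - x2 x5 + x1 x6) displayed above.

```python
import math

def blade_mul(ia, ib):
    if set(ia) & set(ib):
        return 0, None
    idx, sign = list(ia + ib), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            s, key = blade_mul(ia, ib)
            if s:
                out[key] = out.get(key, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def gfunc(derivs, X):
    soul = {k: v for k, v in X.items() if k != ()}
    out, power, fact = {(): derivs[0]}, {(): 1.0}, 1.0
    for n in range(1, len(derivs)):
        power = wedge(power, soul)
        fact *= n
        if not power:
            break
        for key, v in power.items():
            out[key] = out.get(key, 0.0) + derivs[n] / fact * v
    return out

X = {(): 0.5, (1,): 0.1, (2,): 0.2, (3,): 0.3,
     (1, 2): 0.4, (1, 3): 0.5, (2, 3): 0.6, (1, 2, 3): 0.7}

b = math.exp(0.5)
expX = gfunc([b, b, b, b], X)                       # derivatives of e^x at 0.5
expXn = gfunc([1 / b, -1 / b, 1 / b, -1 / b], X)    # derivatives of e^-x at 0.5
prod = wedge(expX, expXn)                           # should be the scalar 1
# x7 + x3 x4 - x2 x5 + x1 x6 = 0.7 + 0.12 - 0.10 + 0.06 = 0.78 for these values
```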

Logarithmic functions
The logarithm of X is:

GrassmannFunction[Log[X]]

    Log[x0] + (e1 x1)/x0 + (e2 x2)/x0 + (e3 x3)/x0 + (x4 e1∧e2)/x0 + (x5 e1∧e3)/x0 +
    (x6 e2∧e3)/x0 + (x7/x0 + (-x3 x4 + x2 x5 - x1 x6)/x0^2) e1∧e2∧e3

We expect the logarithm of the previously calculated exponential to be the original number X.

Log[expX] // GrassmannFunction // Simplify // PowerExpand

    x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3
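This round trip can be checked numerically with the same Python sketch (gfunc is our own helper): apply the Taylor series of log, with derivatives evaluated at the body of exp(X), and recover X.

```python
import math

def blade_mul(ia, ib):
    if set(ia) & set(ib):
        return 0, None
    idx, sign = list(ia + ib), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            s, key = blade_mul(ia, ib)
            if s:
                out[key] = out.get(key, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def gfunc(derivs, X):
    soul = {k: v for k, v in X.items() if k != ()}
    out, power, fact = {(): derivs[0]}, {(): 1.0}, 1.0
    for n in range(1, len(derivs)):
        power = wedge(power, soul)
        fact *= n
        if not power:
            break
        for key, v in power.items():
            out[key] = out.get(key, 0.0) + derivs[n] / fact * v
    return out

X = {(): 0.5, (1,): 0.1, (2,): 0.2, (3,): 0.3,
     (1, 2): 0.4, (1, 3): 0.5, (2, 3): 0.6, (1, 2, 3): 0.7}

b = math.exp(0.5)
expX = gfunc([b, b, b, b], X)
B = expX[()]                                    # body of exp(X), i.e. e^0.5
# Derivatives of log x at B: log B, 1/B, -1/B^2, 2/B^3
logX = gfunc([math.log(B), 1 / B, -1 / B ** 2, 2 / B ** 3], expX)
roundtrip_ok = all(abs(logX.get(k, 0.0) - X.get(k, 0.0)) < 1e-9
                   for k in set(logX) | set(X))
```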

Trigonometric functions of Grassmann numbers

Sines and cosines of a Grassmann number

We explore the classic trigonometric relation Sin[X]^2 + Cos[X]^2 == 1 and show that it remains true even for Grassmann numbers.

s2X = GrassmannFunction[{Sin[x]^2, x}, X]

    Sin[x0]^2 + 2 Cos[x0] Sin[x0] e1 x1 + 2 Cos[x0] Sin[x0] e2 x2 + 2 Cos[x0] Sin[x0] e3 x3 +
    2 Cos[x0] Sin[x0] x4 e1∧e2 + 2 Cos[x0] Sin[x0] x5 e1∧e3 + 2 Cos[x0] Sin[x0] x6 e2∧e3 +
    ((-2 Cos[x0]^2 + 2 Sin[x0]^2) x2 x5 + (2 Cos[x0]^2 - 2 Sin[x0]^2) (x3 x4 + x1 x6) +
     2 Cos[x0] Sin[x0] x7) e1∧e2∧e3

c2X = GrassmannFunction[{Cos[x]^2, x}, X]

    Cos[x0]^2 - 2 Cos[x0] Sin[x0] e1 x1 - 2 Cos[x0] Sin[x0] e2 x2 - 2 Cos[x0] Sin[x0] e3 x3 -
    2 Cos[x0] Sin[x0] x4 e1∧e2 - 2 Cos[x0] Sin[x0] x5 e1∧e3 - 2 Cos[x0] Sin[x0] x6 e2∧e3 +
    ((2 Cos[x0]^2 - 2 Sin[x0]^2) x2 x5 + (-2 Cos[x0]^2 + 2 Sin[x0]^2) (x3 x4 + x1 x6) -
     2 Cos[x0] Sin[x0] x7) e1∧e2∧e3

%[s2X + c2X == 1] // Simplify

    True

The tangent of a Grassmann number

Finally we show that the tangent of a Grassmann number is the exterior product of its sine and its secant.

tanX = GrassmannFunction[Tan[X]]

    Tan[x0] + Sec[x0]^2 e1 x1 + Sec[x0]^2 e2 x2 + Sec[x0]^2 e3 x3 +
    Sec[x0]^2 x4 e1∧e2 + Sec[x0]^2 x5 e1∧e3 + Sec[x0]^2 x6 e2∧e3 +
    Sec[x0]^2 (x7 + (-2 x2 x5 + 2 (x3 x4 + x1 x6)) Tan[x0]) e1∧e2∧e3

sinX = GrassmannFunction[Sin[X]]

    Sin[x0] + Cos[x0] e1 x1 + Cos[x0] e2 x2 + Cos[x0] e3 x3 +
    Cos[x0] x4 e1∧e2 + Cos[x0] x5 e1∧e3 + Cos[x0] x6 e2∧e3 +
    (Sin[x0] (-x3 x4 + x2 x5 - x1 x6) + Cos[x0] x7) e1∧e2∧e3

secX = GrassmannFunction[Sec[X]]

    Sec[x0] + Sec[x0] Tan[x0] e1 x1 + Sec[x0] Tan[x0] e2 x2 + Sec[x0] Tan[x0] e3 x3 +
    Sec[x0] x4 Tan[x0] e1∧e2 + Sec[x0] x5 Tan[x0] e1∧e3 + Sec[x0] x6 Tan[x0] e2∧e3 +
    (Sec[x0]^3 (x3 x4 - x2 x5 + x1 x6) +
     Sec[x0] (x7 Tan[x0] + (x3 x4 - x2 x5 + x1 x6) Tan[x0]^2)) e1∧e2∧e3

tanX == %[sinX ∧ secX] // Simplify

    True
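The identity Sin[X]^2 + Cos[X]^2 == 1 can also be checked numerically with our Python sketch: the derivative lists of sin^2 and cos^2 at the body are exact negatives of each other beyond order zero, so all soul terms cancel.

```python
import math

def blade_mul(ia, ib):
    if set(ia) & set(ib):
        return 0, None
    idx, sign = list(ia + ib), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            s, key = blade_mul(ia, ib)
            if s:
                out[key] = out.get(key, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def gfunc(derivs, X):
    soul = {k: v for k, v in X.items() if k != ()}
    out, power, fact = {(): derivs[0]}, {(): 1.0}, 1.0
    for n in range(1, len(derivs)):
        power = wedge(power, soul)
        fact *= n
        if not power:
            break
        for key, v in power.items():
            out[key] = out.get(key, 0.0) + derivs[n] / fact * v
    return out

def madd(*ms):
    out = {}
    for m in ms:
        for k, v in m.items():
            out[k] = out.get(k, 0.0) + v
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

X = {(): 0.3, (1,): 0.1, (2,): 0.2, (3,): 0.3,
     (1, 2): 0.4, (1, 3): 0.5, (2, 3): 0.6, (1, 2, 3): 0.7}
x0 = 0.3
# Derivatives of sin^2 x: sin^2, sin 2x, 2 cos 2x, -4 sin 2x
fd = [math.sin(x0) ** 2, math.sin(2 * x0), 2 * math.cos(2 * x0), -4 * math.sin(2 * x0)]
# Derivatives of cos^2 x = 1 - sin^2 x: cos^2, then the negatives of the above
gd = [math.cos(x0) ** 2, -math.sin(2 * x0), -2 * math.cos(2 * x0), 4 * math.sin(2 * x0)]

tot = madd(gfunc(fd, X), gfunc(gd, X))   # should be the scalar 1
```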

Functions of several Grassmann numbers

GrassmannFunction can calculate functions of several Grassmann variables. It is important to realize, however, that there may be several different results depending on the order in which the exterior products occur in the series expansion of the function, a factor that does not arise with functions of scalar variables. This means that the usual function notation for several variables is not adequate for specifying precisely which form is required. For simplicity, in what follows we work in 2-space.

⋀2; {X = CreateGrassmannNumber[x], Y = CreateGrassmannNumber[y]}

    {x0 + e1 x1 + e2 x2 + x3 e1∧e2, y0 + e1 y1 + e2 y2 + y3 e1∧e2}

Products of two Grassmann numbers

As a first example, we look at the product. Calculating the exterior product of two Grassmann numbers in 2-space gives:


%[X ∧ Y]

    x0 y0 + e1 (x1 y0 + x0 y1) + e2 (x2 y0 + x0 y2) + (x3 y0 - x2 y1 + x1 y2 + x0 y3) e1∧e2

Using GrassmannFunction in the form below, we get the same result.

GrassmannFunction[{x y, {x, y}}, {X, Y}]

    x0 y0 + e1 (x1 y0 + x0 y1) + e2 (x2 y0 + x0 y2) + (x3 y0 - x2 y1 + x1 y2 + x0 y3) e1∧e2

However, to allow for the two different exterior products that X and Y can form together, the variables in the second argument {X,Y} may be interchanged. The parameters x and y of the first argument {x y,{x,y}} are simply dummy variables and can stay the same. Thus Y∧X may be obtained by evaluating:

%[Y ∧ X]

    x0 y0 + e1 (x1 y0 + x0 y1) + e2 (x2 y0 + x0 y2) + (x3 y0 + x2 y1 - x1 y2 + x0 y3) e1∧e2

or by using GrassmannFunction with {Y,X} as its second argument.

GrassmannFunction[{x y, {x, y}}, {Y, X}]

    x0 y0 + e1 (x1 y0 + x0 y1) + e2 (x2 y0 + x0 y2) + (x3 y0 + x2 y1 - x1 y2 + x0 y3) e1∧e2
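The only place the two orderings differ is the e1∧e2 coefficient, where the cross terms x2 y1 and x1 y2 swap sign. A Python sketch with our dictionary representation makes this concrete:

```python
def blade_mul(ia, ib):
    if set(ia) & set(ib):
        return 0, None
    idx, sign = list(ia + ib), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            s, key = blade_mul(ia, ib)
            if s:
                out[key] = out.get(key, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

X = {(): 1.5, (1,): 0.1, (2,): 0.2, (1, 2): 0.3}
Y = {(): -0.5, (1,): 0.4, (2,): 0.6, (1, 2): 0.7}

XY = wedge(X, Y)
YX = wedge(Y, X)
# e1∧e2 coefficient of X∧Y: x3 y0 - x2 y1 + x1 y2 + x0 y3 = 0.88
# e1∧e2 coefficient of Y∧X: x3 y0 + x2 y1 - x1 y2 + x0 y3 = 0.92
# The scalar and grade-1 parts agree in both orderings.
```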

The exponential of a sum

As a final example we show how GrassmannFunction manages the two different exterior product equivalents of the scalar identity Exp[x+y] == Exp[x] Exp[y]. First calculate Exp[X] and Exp[Y].

expX = GrassmannFunction[Exp[X]]

    E^x0 + E^x0 e1 x1 + E^x0 e2 x2 + E^x0 x3 e1∧e2

expY = GrassmannFunction[Exp[Y]]

    E^y0 + E^y0 e1 y1 + E^y0 e2 y2 + E^y0 y3 e1∧e2

If we compute their product using the exterior product operation we observe that the order must be important, and that a different result is obtained when the order is reversed.

%[expX ∧ expY]

    E^(x0+y0) + E^(x0+y0) e1 (x1 + y1) + E^(x0+y0) e2 (x2 + y2) + E^(x0+y0) (x3 - x2 y1 + x1 y2 + y3) e1∧e2


%[expY ∧ expX]

    E^(x0+y0) + E^(x0+y0) e1 (x1 + y1) + E^(x0+y0) e2 (x2 + y2) + E^(x0+y0) (x3 + x2 y1 - x1 y2 + y3) e1∧e2

We can also compute these two results using GrassmannFunction. Note that the second argument in the second computation has its components in reverse order.

GrassmannFunction[{Exp[x] Exp[y], {x, y}}, {X, Y}]

    E^(x0+y0) + E^(x0+y0) e1 (x1 + y1) + E^(x0+y0) e2 (x2 + y2) + E^(x0+y0) (x3 - x2 y1 + x1 y2 + y3) e1∧e2

GrassmannFunction[{Exp[x] Exp[y], {x, y}}, {Y, X}]

    E^(x0+y0) + E^(x0+y0) e1 (x1 + y1) + E^(x0+y0) e2 (x2 + y2) + E^(x0+y0) (x3 + x2 y1 - x1 y2 + y3) e1∧e2

On the other hand, if we wish to compute the exponential of the sum X + Y we need to use GrassmannFunction, because there are two possibilities for its definition. That is, although Exp[x+y] appears to be independent of the order of its arguments, an interchange in the 'ordering' argument from {X,Y} to {Y,X} will change the signs in the result.

GrassmannFunction[{Exp[x + y], {x, y}}, {X, Y}]

    E^(x0+y0) + E^(x0+y0) e1 (x1 + y1) + E^(x0+y0) e2 (x2 + y2) + E^(x0+y0) (x3 - x2 y1 + x1 y2 + y3) e1∧e2

GrassmannFunction[{Exp[x + y], {x, y}}, {Y, X}]

    E^(x0+y0) + E^(x0+y0) e1 (x1 + y1) + E^(x0+y0) e2 (x2 + y2) + E^(x0+y0) (x3 + x2 y1 - x1 y2 + y3) e1∧e2
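The displayed closed forms can be confirmed numerically with our Python sketch: computing exp(X) and exp(Y) by the body/soul series and taking their exterior products in both orders reproduces the two e1∧e2 coefficients above.

```python
import math

def blade_mul(ia, ib):
    if set(ia) & set(ib):
        return 0, None
    idx, sign = list(ia + ib), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            s, key = blade_mul(ia, ib)
            if s:
                out[key] = out.get(key, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def gfunc(derivs, X):
    soul = {k: v for k, v in X.items() if k != ()}
    out, power, fact = {(): derivs[0]}, {(): 1.0}, 1.0
    for n in range(1, len(derivs)):
        power = wedge(power, soul)
        fact *= n
        if not power:
            break
        for key, v in power.items():
            out[key] = out.get(key, 0.0) + derivs[n] / fact * v
    return out

# Two general numbers in 2-space
X = {(): 0.2, (1,): 0.1, (2,): 0.2, (1, 2): 0.3}
Y = {(): 0.4, (1,): 0.4, (2,): 0.6, (1, 2): 0.7}

bx, by = math.exp(0.2), math.exp(0.4)
expX = gfunc([bx, bx, bx], X)
expY = gfunc([by, by, by], Y)

P = wedge(expX, expY)        # ordering {X, Y}
Q = wedge(expY, expX)        # ordering {Y, X}
ebody = math.exp(0.6)        # E^(x0 + y0)
# e1∧e2 coefficient / ebody: x3 + y3 + x1 y2 - x2 y1 = 0.98 for P, 1.02 for Q
```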

In sum: A function of several Grassmann variables may have different results depending on the ordering of the variables in the function, even if their usual (scalar) form is commutative.

9.10 Summary
To be completed


10 Exploring the Generalized Grassmann Product

10.1 Introduction
In this chapter we define and explore a new product which we call the generalized Grassmann product. The generalized Grassmann product was originally developed by the author in order to treat the hypercomplex and Clifford products of general elements in a succinct manner. The hypercomplex and Clifford products of general elements can lead to quite complex expressions; however, we will show that such expressions always reduce to simple sums of generalized Grassmann products. We discuss the hypercomplex product in the next chapter, and the Clifford product in Chapter 12.

The generalized Grassmann product is really a sequence of products indexed by a parameter, called the order of the product. When the order is zero, the product reduces to the exterior product. When the order is its maximum value, the product reduces to the interior product. In between, the sequence of generalized Grassmann products makes a transition between the two.

For example, consider an exterior product of simple elements. It only takes one 1-element to be common to both factors of the exterior product for the product to be zero. At the other end of the sequence, even if one of the simple elements has all its 1-element factors in common with the other simple element, their interior product is not zero. The elements of the sequence of generalized products require progressively more common 1-elements in their factors for them to reduce to zero. Specifically, if two simple elements contain a common factor of grade g, then their generalized Grassmann products of order less than g are zero.

The generalized Grassmann product has another enticing property. Although the interior product of elements of different grade is not commutative, the generalized Grassmann product form of the interior product is commutative.

For brevity in the rest of this book, where no confusion will arise, we may call the generalized Grassmann product simply the generalized product.


10.2 Defining the Generalized Product

Definition of the generalized product

The generalized Grassmann product of order λ of an m-element α_m and a simple k-element β_k is denoted α_m ∘λ β_k and defined by

    α_m ∘λ β_k ≡ Σ_{j=1}^{C(k,λ)} (α_m ⊙ β_λ^j) ∧ β_{k-λ}^j        10.1

    β_k ≡ β_λ^1 ∧ β_{k-λ}^1 ≡ β_λ^2 ∧ β_{k-λ}^2 ≡ ...

(In this linear rendering, subscripts denote grades, superscripts index the decompositions, ∘λ denotes the generalized product of order λ, ⊙ the interior product, ∧ the exterior product, and C(k,λ) the binomial coefficient.)

Here, β_k is expressed in all of the C(k,λ) essentially different arrangements of its 1-element factors into a λ-element and a (k-λ)-element. This pairwise decomposition of a simple exterior product is the same as that used in the Common Factor Theorem in Chapters 3 and 6.

The grade of a generalized Grassmann product α_m ∘λ β_k is therefore m + k - 2λ, and like the grades of the exterior and interior products in terms of which it is defined, is independent of the dimension of the underlying space.

Note the similarity to the form of the Interior Common Factor Theorem. However, in the definition of the generalized product there is no requirement for the interior product on the right hand side to be an inner product, since an exterior product has replaced the ordinary multiplication operator.

For brevity in the discussion that follows, we will assume, without explicitly drawing attention to the fact, that an element undergoing a factorization is necessarily simple.

Case λ = 0: Reduction to the exterior product

When the order λ of a generalized product is zero, β_λ^j reduces to a scalar, and the product reduces to the exterior product.

    α_m ∘0 β_k ≡ α_m ∧ β_k        λ = 0        10.2


Case 0 < λ < Min[m, k]: Reduction to exterior and interior products

When the order λ of the generalized product is greater than zero but less than the smaller of the grades of the factors in the product, the product may be expanded in terms of either factor (provided it is simple). The formula for expansion in terms of the first factor is similar to the definition [10.1], which expands in terms of the second factor.

    α_m ∘λ β_k ≡ Σ_{i=1}^{C(m,λ)} α_{m-λ}^i ∧ (β_k ⊙ α_λ^i)        0 < λ < Min[m, k]        10.3

    α_m ≡ α_λ^1 ∧ α_{m-λ}^1 ≡ α_λ^2 ∧ α_{m-λ}^2 ≡ ...

In Section 10.3 we will prove that the product can actually be expanded in terms of both factors, and that this formula is an almost immediate consequence. For completeness it should be remarked that [10.3] is also valid for λ equal to 0 or Min[m,k]. We shall include these special cases in the ranges for λ in subsequent formulas.

Case λ = Min[m, k]: Reduction to the interior product

When the order λ of the product is equal to the smaller of the grades of the factors of the product, there is only one term in the sum and the generalized product reduces to the interior product.

    α_m ∘λ β_k ≡ α_m ⊙ β_k        λ = k ≤ m
                                                        10.4
    α_m ∘λ β_k ≡ β_k ⊙ α_m        λ = m ≤ k

This is a particularly enticing property of the generalized product. If the interior product of two elements is non-zero but they are of unequal grade, an interchange in the order of their factors will give an interior product which is zero: if an interior product of two factors is to be non-zero, it must have the larger factor first. The interior product is non-commutative. By contrast, the generalized product form of the interior product of two elements of unequal grade is commutative, and equal to whichever interior product is non-zero.

    α_m ∘k β_k ≡ β_k ∘k α_m ≡ α_m ⊙ β_k        k ≤ m

We discuss the (quasi-)commutativity of the generalized product in the next section.


Case Min[m, k] < λ < Max[m, k]: Reduction to zero

When the order λ of the product is greater than the smaller of the grades of the factors of the product, but less than the larger of the grades, the generalized product may still be expanded in terms of the factors of the element of larger grade, leading however to a sum of terms which is zero.

    α_m ∘λ β_k ≡ 0        Min[m, k] < λ < Max[m, k]        10.5

This result, in which a sum of terms is identically zero, is a source of many useful identities between exterior and interior products. It will be explored in Section 10.4 below.

Case λ = Max[m, k]: Reduction to zero

When λ is equal to the larger of the grades of the factors, the generalized product reduces to a single interior product which is zero by virtue of its left factor being of lesser grade than its right factor.

    α_m ∘λ β_k ≡ 0        λ = Max[m, k]        10.6

Case λ > Max[m, k]: Undefined

When the order λ of the product is greater than the larger of the grades of the factors of the product, the generalized product cannot be expanded in terms of either of its factors and is therefore undefined.

    α_m ∘λ β_k    Undefined        λ > Max[m, k]        10.7

10.3 The Symmetric Form of the Generalized Product

Expansion of the generalized product in terms of both factors

If α_m and β_k are both simple, we can express the generalized Grassmann product more symmetrically by converting the interior products with the Interior Common Factor Theorem (see Chapter 6). To show this, we start with the definition [10.1] of the generalized Grassmann product.


    α_m ∘λ β_k ≡ Σ_{j=1}^{C(k,λ)} (α_m ⊙ β_λ^j) ∧ β_{k-λ}^j

From the Interior Common Factor Theorem we can write the interior product α_m ⊙ β_λ^j as:

    α_m ⊙ β_λ^j ≡ Σ_{i=1}^{C(m,λ)} (α_λ^i ⊙ β_λ^j) α_{m-λ}^i

Substituting this in the above expression for the generalized product gives:

    α_m ∘λ β_k ≡ Σ_{j=1}^{C(k,λ)} Σ_{i=1}^{C(m,λ)} (α_λ^i ⊙ β_λ^j) α_{m-λ}^i ∧ β_{k-λ}^j

The expansion of the generalized product in terms of both factors α_m and β_k is thus given by:

    α_m ∘λ β_k ≡ Σ_{i=1}^{C(m,λ)} Σ_{j=1}^{C(k,λ)} (α_λ^i ⊙ β_λ^j) α_{m-λ}^i ∧ β_{k-λ}^j        0 ≤ λ ≤ Min[m, k]        10.8

    α_m ≡ α_λ^1 ∧ α_{m-λ}^1 ≡ α_λ^2 ∧ α_{m-λ}^2 ≡ ...,    β_k ≡ β_λ^1 ∧ β_{k-λ}^1 ≡ β_λ^2 ∧ β_{k-λ}^2 ≡ ...

The quasi-commutativity of the generalized product

From the symmetric form of the generalized product [10.8] we can show directly that the ordering of the factors in a product is immaterial except perhaps for a sign. To show this, we rewrite the symmetric form for the product of β_k with α_m, that is, β_k ∘λ α_m. We then reverse the order of the exterior product to obtain the terms of α_m ∘λ β_k, except possibly for a sign. The inner product is of course symmetric, and so can be written in either ordering.

    β_k ∘λ α_m ≡ Σ_{j=1}^{C(k,λ)} Σ_{i=1}^{C(m,λ)} (β_λ^j ⊙ α_λ^i) β_{k-λ}^j ∧ α_{m-λ}^i
               ≡ Σ_{j=1}^{C(k,λ)} Σ_{i=1}^{C(m,λ)} (α_λ^i ⊙ β_λ^j) (-1)^((m-λ)(k-λ)) α_{m-λ}^i ∧ β_{k-λ}^j

Comparison with [10.8] then gives:

    α_m ∘λ β_k ≡ (-1)^((m-λ)(k-λ)) β_k ∘λ α_m        10.9

It is easy to see, by using the distributivity of the generalized product, that this relationship holds also for non-simple α_m and β_k.

Expansion in terms of the other factor

We can now prove the alternative expression [10.3] for α_m ∘λ β_k in terms of the factors of a simple α_m by expanding the right hand side of the quasi-commutativity relationship [10.9].

    α_m ∘λ β_k ≡ (-1)^((m-λ)(k-λ)) β_k ∘λ α_m ≡ (-1)^((m-λ)(k-λ)) Σ_{i=1}^{C(m,λ)} (β_k ⊙ α_λ^i) ∧ α_{m-λ}^i

Interchanging the order of the terms of the exterior product then gives the required alternative expression:

    α_m ∘λ β_k ≡ Σ_{i=1}^{C(m,λ)} α_{m-λ}^i ∧ (β_k ⊙ α_λ^i)        0 ≤ λ ≤ Min[m, k]

    α_m ≡ α_λ^1 ∧ α_{m-λ}^1 ≡ α_λ^2 ∧ α_{m-λ}^2 ≡ ...
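Both the either-factor expansion and the quasi-commutativity sign of [10.9] can be checked numerically with our Python model (an orthonormal-basis sketch with our own sign conventions, not the GrassmannAlgebra package). Here expand_second implements the definition [10.1] and expand_first implements [10.3], and we use factors that are not bare basis vectors so the check is non-trivial.

```python
from itertools import combinations

def blade_mul(ia, ib):
    if set(ia) & set(ib):
        return 0, None
    idx, sign = list(ia + ib), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            s, key = blade_mul(ia, ib)
            if s:
                out[key] = out.get(key, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def interior(a, b):
    out = {}
    for ib, cb in b.items():
        for ia, ca in a.items():
            if set(ib) <= set(ia):
                rest = tuple(i for i in ia if i not in ib)
                s, _ = blade_mul(ib, rest)
                out[rest] = out.get(rest, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def parts(factors, S):
    rest = tuple(i for i in range(len(factors)) if i not in S)
    sgn, _ = blade_mul(S, rest)
    pS, pR = {(): 1.0}, {(): 1.0}
    for i in S:
        pS = wedge(pS, factors[i])
    for i in rest:
        pR = wedge(pR, factors[i])
    return sgn, pS, pR

def expand_second(alpha, bf, lam):      # definition 10.1: Σ (α ⊙ β_λ) ∧ β_{k-λ}
    out = {}
    for S in combinations(range(len(bf)), lam):
        sgn, bS, bR = parts(bf, S)
        for key, v in wedge(interior(alpha, bS), bR).items():
            out[key] = out.get(key, 0.0) + sgn * v
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def expand_first(af, beta, lam):        # formula 10.3: Σ α_{m-λ} ∧ (β ⊙ α_λ)
    out = {}
    for S in combinations(range(len(af)), lam):
        sgn, aS, aR = parts(af, S)
        for key, v in wedge(aR, interior(beta, aS)).items():
            out[key] = out.get(key, 0.0) + sgn * v
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

a1, a2 = {(1,): 1.0}, {(2,): 1.0, (3,): 1.0}          # e1, e2+e3
b1, b2 = {(1,): 1.0, (4,): 1.0}, {(2,): 1.0}          # e1+e4, e2
alpha, beta = wedge(a1, a2), wedge(b1, b2)

r2 = expand_second(alpha, [b1, b2], 1)   # expand in the second factor
r1 = expand_first([a1, a2], beta, 1)     # expand in the first factor
rswap = expand_second(beta, [a1, a2], 1)  # β ∘1 α, for the sign check
```

Here m = k = 2 and λ = 1, so (-1)^((m-λ)(k-λ)) = -1 and the swapped product is the negative of the original, as [10.9] requires.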

10.4 Calculating with Generalized Products

Entering a generalized product

In GrassmannAlgebra you can enter a generalized product in the form:

GeneralizedProduct[λ][X, Y]

where λ is the order of the product and X and Y are the factors. On entering this expression into Mathematica it will display it in the form:

X ∘λ Y

Alternatively, you can click on the generalized product button on the GrassmannAlgebra palette, and then tab through the placeholders, entering the elements of the product as you go.


To enter multiple products, click repeatedly on the button. For example, to enter a concatenated product of four elements, click three times. If you select this expression and enter it, you will see how the inherently binary product is grouped.

    ((_ ∘_ _) ∘_ _) ∘_ _

Tabbing through the placeholders and entering factors and product orders, we might obtain something like:

    ((X ∘4 Y) ∘3 Z) ∘2 W

Here, X, Y, Z, and W represent any Grassmann expressions. Using Mathematica's FullForm operation shows how this is internally represented.

FullForm[((X ∘4 Y) ∘1 Z) ∘0 W]

    GeneralizedProduct[0][GeneralizedProduct[1][GeneralizedProduct[4][X, Y], Z], W]

You can edit the expression at any stage to group the products in a different order.

FullForm[X ∘4 ((Y ∘3 Z) ∘2 W)]

    GeneralizedProduct[4][X, GeneralizedProduct[2][GeneralizedProduct[3][Y, Z], W]]

Finally, you can also enter a generalized product sequentially by typing the first factor, then clicking on the product operator in the first column of the GrassmannAlgebra palette, then typing the second factor.

Reduction to interior products

If the generalized product α_m ∘λ β_k of simple elements is expanded in terms of the second factor β_k, it will result in a sum of C(k,λ) terms (0 ≤ λ ≤ k) involving just exterior and interior products. If it is expanded in terms of the first factor α_m, it will result in a sum of C(m,λ) terms (0 ≤ λ ≤ m). These sums, although they appear different because they are expressed in terms of interior products, are of course equal, as was shown in the previous section.


The GrassmannAlgebra function ToInteriorProducts will take any generalized product and convert it to a sum involving exterior and interior products by expanding whichever factor leads to the simplest expression. If the elements are given in symbolic grade-underscripted form, ToInteriorProducts will first create a corresponding simple exterior product before expanding.

For example, suppose the generalized product is given in symbolic form as α_m ∘2 β_3. Applying ToInteriorProducts expands with respect to β_3 and gives:

A = ToInteriorProducts[α_m ∘2 β_3]

    (α ⊙ β1∧β2) ∧ β3 - (α ⊙ β1∧β3) ∧ β2 + (α ⊙ β2∧β3) ∧ β1

To expand with respect to α we must of course give m as an integer.

B = ToInteriorProducts[α_4 ∘2 β_k]

    (β ⊙ a1∧a2) ∧ a3∧a4 - (β ⊙ a1∧a3) ∧ a2∧a4 + (β ⊙ a1∧a4) ∧ a2∧a3 +
    (β ⊙ a2∧a3) ∧ a1∧a4 - (β ⊙ a2∧a4) ∧ a1∧a3 + (β ⊙ a3∧a4) ∧ a1∧a2

Note that in the expansion A there are C(3,2) (= 3) terms, while in expansion B there are C(4,2) (= 6) terms.

If now we let m equal 4 and k equal 3, these two expressions become equal.

A1 = ToInteriorProducts[α_m ∘2 β_3] /. α_m -> ComposeSimpleForm[α_4]

    (a1∧a2∧a3∧a4 ⊙ β1∧β2) ∧ β3 - (a1∧a2∧a3∧a4 ⊙ β1∧β3) ∧ β2 + (a1∧a2∧a3∧a4 ⊙ β2∧β3) ∧ β1

B1 = ToInteriorProducts[α_4 ∘2 β_k] /. β_k -> ComposeSimpleForm[β_3]

    (β1∧β2∧β3 ⊙ a1∧a2) ∧ a3∧a4 - (β1∧β2∧β3 ⊙ a1∧a3) ∧ a2∧a4 + (β1∧β2∧β3 ⊙ a1∧a4) ∧ a2∧a3 +
    (β1∧β2∧β3 ⊙ a2∧a3) ∧ a1∧a4 - (β1∧β2∧β3 ⊙ a2∧a4) ∧ a1∧a3 + (β1∧β2∧β3 ⊙ a3∧a4) ∧ a1∧a2
Although these expressions are equal, they appear different because of their expansion in terms of interior products. To display them in the same form, we need to reduce the interior products to inner (and exterior) products.

Reduction to inner products


Reducing the interior products in a generalized product expansion to inner products is equivalent to applying the symmetric form of the expansion.


Consider the product of the previous section with m equal to 4 and k equal to 3. To explore this product we must first increase the dimension of the space to at least 4 (since the product is zero in a space of lesser dimension). Then we convert any interior products to inner products by applying ToInnerProducts, and simplify the result with GrassmannExpandAndSimplify.

⋀4; A2 = %[ToInnerProducts[A1]]

    (a3∧a4 ⊙ β2∧β3) a1∧a2∧β1 - (a3∧a4 ⊙ β1∧β3) a1∧a2∧β2 + (a3∧a4 ⊙ β1∧β2) a1∧a2∧β3 -
    (a2∧a4 ⊙ β2∧β3) a1∧a3∧β1 + (a2∧a4 ⊙ β1∧β3) a1∧a3∧β2 - (a2∧a4 ⊙ β1∧β2) a1∧a3∧β3 +
    (a2∧a3 ⊙ β2∧β3) a1∧a4∧β1 - (a2∧a3 ⊙ β1∧β3) a1∧a4∧β2 + (a2∧a3 ⊙ β1∧β2) a1∧a4∧β3 +
    (a1∧a4 ⊙ β2∧β3) a2∧a3∧β1 - (a1∧a4 ⊙ β1∧β3) a2∧a3∧β2 + (a1∧a4 ⊙ β1∧β2) a2∧a3∧β3 -
    (a1∧a3 ⊙ β2∧β3) a2∧a4∧β1 + (a1∧a3 ⊙ β1∧β3) a2∧a4∧β2 - (a1∧a3 ⊙ β1∧β2) a2∧a4∧β3 +
    (a1∧a2 ⊙ β2∧β3) a3∧a4∧β1 - (a1∧a2 ⊙ β1∧β3) a3∧a4∧β2 + (a1∧a2 ⊙ β1∧β2) a3∧a4∧β3

B2 = %[ToInnerProducts[B1]]

    (a3∧a4 ⊙ β2∧β3) a1∧a2∧β1 - (a3∧a4 ⊙ β1∧β3) a1∧a2∧β2 + (a3∧a4 ⊙ β1∧β2) a1∧a2∧β3 -
    (a2∧a4 ⊙ β2∧β3) a1∧a3∧β1 + (a2∧a4 ⊙ β1∧β3) a1∧a3∧β2 - (a2∧a4 ⊙ β1∧β2) a1∧a3∧β3 +
    (a2∧a3 ⊙ β2∧β3) a1∧a4∧β1 - (a2∧a3 ⊙ β1∧β3) a1∧a4∧β2 + (a2∧a3 ⊙ β1∧β2) a1∧a4∧β3 +
    (a1∧a4 ⊙ β2∧β3) a2∧a3∧β1 - (a1∧a4 ⊙ β1∧β3) a2∧a3∧β2 + (a1∧a4 ⊙ β1∧β2) a2∧a3∧β3 -
    (a1∧a3 ⊙ β2∧β3) a2∧a4∧β1 + (a1∧a3 ⊙ β1∧β3) a2∧a4∧β2 - (a1∧a3 ⊙ β1∧β2) a2∧a4∧β3 +
    (a1∧a2 ⊙ β2∧β3) a3∧a4∧β1 - (a1∧a2 ⊙ β1∧β3) a3∧a4∧β2 + (a1∧a2 ⊙ β1∧β2) a3∧a4∧β3

By inspection we see that these two expressions are the same. Checking with Mathematica verifies this.

A2 == B2

    True

The identity of the forms A2 and B2 is an example of the expansion of a generalized product in terms of either factor yielding the same result.

Tabulating generalized products


In order to get a clearer concept of how the sequence of generalized products of two elements are related, we can look at building up this sequence from the Spine and Cospine of one of the elements. (See Chapter 3: The Regressive Product, for a discussion of Spine and Cospine.)

Consider a generalized product α_m ∘λ β_k of an arbitrary m-element α_m and a simple k-element β_k expressed as a simple exterior product. We need to be specific about the grade k, so for this example we choose k to be 3. We can allow m to be symbolic, but should assume it to be greater than or equal to k in order to have the resulting formulas remain valid should we later substitute specific values.

The spine and cospine of β_3 are given by

S = Spine[β_3]

    {1, β1, β2, β3, β1∧β2, β1∧β3, β2∧β3, β1∧β2∧β3}

Sc = Cospine[β_3]

    {β1∧β2∧β3, β2∧β3, -(β1∧β3), β1∧β2, β3, -β2, β1, 1}

We can now generate the list of products of the form (α_m ⊙ s) ∧ c, where s is an element of S and c is the corresponding element of Sc, by threading these operations across the two lists.

T1 = Thread[Thread[α_m ⊙ S] ∧ Sc]

    {(α ⊙ 1) ∧ β1∧β2∧β3, (α ⊙ β1) ∧ β2∧β3, (α ⊙ β2) ∧ -(β1∧β3), (α ⊙ β3) ∧ β1∧β2,
     (α ⊙ β1∧β2) ∧ β3, (α ⊙ β1∧β3) ∧ -β2, (α ⊙ β2∧β3) ∧ β1, (α ⊙ β1∧β2∧β3) ∧ 1}

The collection of elements of this list which are of the same grade (g, say) will form the terms in the sum of the generalized product of the same grade g. We can see the distribution of grades by applying Grade to the list.

Grade[T1]

    {3 + m, 1 + m, 1 + m, 1 + m, -1 + m, -1 + m, -1 + m, -3 + m}

We can select the terms of a given grade (using Select), add them together (using Plus), and tabulate these sums over the different grades g (using Table). We also reverse the list to get the grades in decreasing order.

T2 = Reverse[Table[Plus @@ Select[T1, GradeQ[g]], {g, m - 3, m + 3, 2}]]

    {(α ⊙ 1) ∧ β1∧β2∧β3,
     (α ⊙ β2) ∧ -(β1∧β3) + (α ⊙ β1) ∧ β2∧β3 + (α ⊙ β3) ∧ β1∧β2,
     (α ⊙ β1∧β2) ∧ β3 + (α ⊙ β1∧β3) ∧ -β2 + (α ⊙ β2∧β3) ∧ β1,
     (α ⊙ β1∧β2∧β3) ∧ 1}

The elements of this list are now the generalized products α_m ∘λ β_3.


T3 = Table[α_m ∘λ β_3, {λ, 0, 3}]

    {α ∘0 β, α ∘1 β, α ∘2 β, α ∘3 β}

We can display the individual formulas by again using Thread.

TableForm[Thread[T3 == T2]]

    α ∘0 β == (α ⊙ 1) ∧ β1∧β2∧β3
    α ∘1 β == (α ⊙ β2) ∧ -(β1∧β3) + (α ⊙ β1) ∧ β2∧β3 + (α ⊙ β3) ∧ β1∧β2
    α ∘2 β == (α ⊙ β1∧β2) ∧ β3 + (α ⊙ β1∧β3) ∧ -β2 + (α ⊙ β2∧β3) ∧ β1
    α ∘3 β == (α ⊙ β1∧β2∧β3) ∧ 1

Finally, we can collect these steps into a single function, generalize it for an arbitrary simple β_k, and restrict it to apply only to the grades of factors we have assumed in this development. That is, the grade of α should be symbolic, or constrained to be greater than or equal to the grade of β, which should itself be greater than 1.

GeneralizedProductList[GeneralizedProduct[λ_][a_, b_]] /;
    (Head[RawGrade[a]] === Symbol || RawGrade[a] >= RawGrade[b]) && RawGrade[b] > 1 :=
  Module[{m, k, B, Bc, T1, T2, T3, g},
    m = RawGrade[a]; k = RawGrade[b];
    B = Spine[b]; Bc = Cospine[b];
    T1 = Thread[Thread[a ⊙ B] ∧ Bc];
    T2 = Reverse[Table[Plus @@ Select[T1, RawGrade[#] == g &], {g, m - k, m + k, 2}]];
    T3 = Table[a ∘λ b, {λ, 0, k}];
    TableForm[Thread[T3 == T2]]]

Example
Here are the formulas for a 2-element and a 4-element as second factors.


GeneralizedProductList[α_m ∘λ β_2]

    α ∘0 β == (α ⊙ 1) ∧ β1∧β2
    α ∘1 β == (α ⊙ β1) ∧ β2 + (α ⊙ β2) ∧ -β1
    α ∘2 β == (α ⊙ β1∧β2) ∧ 1

GeneralizedProductList[α_m ∘λ β_4]

    α ∘0 β == (α ⊙ 1) ∧ β1∧β2∧β3∧β4
    α ∘1 β == (α ⊙ β1) ∧ β2∧β3∧β4 + (α ⊙ β2) ∧ -(β1∧β3∧β4) + (α ⊙ β3) ∧ β1∧β2∧β4 + (α ⊙ β4) ∧ -(β1∧β2∧β3)
    α ∘2 β == (α ⊙ β1∧β2) ∧ β3∧β4 + (α ⊙ β1∧β3) ∧ -(β2∧β4) + (α ⊙ β1∧β4) ∧ β2∧β3 +
              (α ⊙ β2∧β3) ∧ β1∧β4 + (α ⊙ β2∧β4) ∧ -(β1∧β3) + (α ⊙ β3∧β4) ∧ β1∧β2
    α ∘3 β == (α ⊙ β1∧β2∧β3) ∧ β4 + (α ⊙ β1∧β2∧β4) ∧ -β3 + (α ⊙ β1∧β3∧β4) ∧ β2 + (α ⊙ β2∧β3∧β4) ∧ -β1
    α ∘4 β == (α ⊙ β1∧β2∧β3∧β4) ∧ 1
10.5 The Generalized Product Theorem

The A and B forms of a generalized product

There is a surprising alternative form for the expansion of the generalized product in terms of exterior and interior products. We distinguish the two forms by calling the first (definitional) form the A form, and the new second form the B form.

    A:    α_m ∘λ β_k ≡ Σ_{j=1}^{C(k,λ)} (α_m ⊙ β_λ^j) ∧ β_{k-λ}^j
                                                                            10.10
    B:    α_m ∘λ β_k ≡ Σ_{j=1}^{C(k,λ)} (α_m ∧ β_{k-λ}^j) ⊙ β_λ^j

    β_k ≡ β_λ^1 ∧ β_{k-λ}^1 ≡ β_λ^2 ∧ β_{k-λ}^2 ≡ ...

The identity between the A form and the B form is the source of many useful relations in the Grassmann and Clifford algebras. We call it the Generalized Product Theorem.


In the previous sections, the identity between the expansions in terms of either of the two factors was shown to hold at the inner product level. The identity between the A form and the B form lies at a further level of complexity - that of the scalar product. That is, in order to show the identity between the two forms, a generalized product may need to be reduced to an expression involving exterior and scalar products.
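Before turning to the proofs, the theorem can be checked numerically with our Python model of the orthonormal-basis case (a sketch with our own sign conventions, not the GrassmannAlgebra package). a_form implements the definitional expansion and b_form the alternative one, and we test them on both a basis-blade case and a case with composite 1-element factors.

```python
from itertools import combinations

def blade_mul(ia, ib):
    if set(ia) & set(ib):
        return 0, None
    idx, sign = list(ia + ib), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            s, key = blade_mul(ia, ib)
            if s:
                out[key] = out.get(key, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def interior(a, b):
    out = {}
    for ib, cb in b.items():
        for ia, ca in a.items():
            if set(ib) <= set(ia):
                rest = tuple(i for i in ia if i not in ib)
                s, _ = blade_mul(ib, rest)
                out[rest] = out.get(rest, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def parts(factors, S):
    rest = tuple(i for i in range(len(factors)) if i not in S)
    sgn, _ = blade_mul(S, rest)
    pS, pR = {(): 1.0}, {(): 1.0}
    for i in S:
        pS = wedge(pS, factors[i])
    for i in rest:
        pR = wedge(pR, factors[i])
    return sgn, pS, pR

def a_form(alpha, bf, lam):   # A: Σ (α ⊙ β_λ) ∧ β_{k-λ}
    out = {}
    for S in combinations(range(len(bf)), lam):
        sgn, bS, bR = parts(bf, S)
        for key, v in wedge(interior(alpha, bS), bR).items():
            out[key] = out.get(key, 0.0) + sgn * v
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def b_form(alpha, bf, lam):   # B: Σ (α ∧ β_{k-λ}) ⊙ β_λ
    out = {}
    for S in combinations(range(len(bf)), lam):
        sgn, bS, bR = parts(bf, S)
        for key, v in interior(wedge(alpha, bR), bS).items():
            out[key] = out.get(key, 0.0) + sgn * v
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

e = lambda i: {(i,): 1.0}
# Case 1: basis blades, one common factor
lhs1 = a_form(wedge(e(1), e(2)), [e(1), e(3)], 1)
rhs1 = b_form(wedge(e(1), e(2)), [e(1), e(3)], 1)
# Case 2: composite 1-element factors
alpha2 = wedge(e(1), {(2,): 1.0, (3,): 1.0})
bf2 = [{(1,): 1.0, (4,): 1.0}, e(2)]
lhs2 = a_form(alpha2, bf2, 1)
rhs2 = b_form(alpha2, bf2, 1)
```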

Simple proofs for l equal to 0, 1, k-1 and k


The proofs for l equal to 0 and k are trivial. Substitution into the formulas shows that they reduce respectively to the exterior and interior products. We can also show straightforwardly that the A and B forms are equal for l equal to 1 and k-1, hence proving the theorem for all generalized products of the form a b in which k is less than or
m l k

equal to 3. I have not yet found such a straightforward proof for the general case. The example of the next section will extend this result to show that the theorem is true for m and k equal to 4 and l equal to 2. By using the quasi-commutativity of the generalized product we can thus show that the theorem is valid for all generalized products in a 4-space. The example will also make clearer the detail by which the equality of the two forms is achieved.

λ = 1
We begin with the B form for λ equal to 1, and show that it reduces to the A form for λ equal to 1.

α_m ∘_1 β_k ≡ Σ_{j=1}^{k} (α_m ∧ β̄_j) ∙ β_j,    β_k ≡ β_1∧β̄_1 + β_2∧β̄_2 + ⋯

(each β_j of grade 1, each β̄_j of grade k−1)

We next retrieve an interior product theorem from Chapter 6


(α_m ∧ β_k) ∙ x ≡ (α_m ∙ x) ∧ β_k + (−1)^m α_m ∧ (β_k ∙ x)

which enables us to expand the summand to give two types of terms


α_m ∘_1 β_k ≡ Σ_{j=1}^{k} (α_m ∙ β_j) ∧ β̄_j + (−1)^m Σ_{j=1}^{k} α_m ∧ (β̄_j ∙ β_j)

The first sum is of the A form, while the second sum is zero on account of the Zero Interior Sum theorem. Hence we have proved the theorem for l equal to 1.
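The zero interior sum used in this proof can be checked numerically. Below is a small Python sketch (my own, not the book's package), assuming an orthonormal Euclidean metric: for a simple element β = b_1∧⋯∧b_k, the sum Σ_j β̄_j ∙ β_j over the grade-1 splits, with the signed cofactors β̄_j = (−1)^(j−1) b_1∧⋯b̂_j⋯∧b_k, vanishes identically.

```python
def merge_sign(I, J):
    # Sign of sorting the concatenation of disjoint sorted tuples I, J; 0 on overlap.
    if set(I) & set(J):
        return 0
    s = 1
    for x in I:
        s *= (-1) ** sum(1 for y in J if y < x)
    return s

def add(u, v):
    w = dict(u)
    for k, c in v.items():
        w[k] = w.get(k, 0) + c
    return {k: c for k, c in w.items() if c}

def wedge(u, v):
    w = {}
    for I, x in u.items():
        for J, y in v.items():
            s = merge_sign(I, J)
            if s:
                K = tuple(sorted(I + J))
                w[K] = w.get(K, 0) + s * x * y
    return {k: c for k, c in w.items() if c}

def interior(u, v):
    # Orthonormal-basis interior product: e_I . e_J = +/- e_{I\J} when J is within I.
    w = {}
    for I, x in u.items():
        for J, y in v.items():
            if set(J) <= set(I):
                R = tuple(i for i in I if i not in J)
                w[R] = w.get(R, 0) + merge_sign(J, R) * x * y
    return {k: c for k, c in w.items() if c}

def zero_interior_sum(vectors):
    # sum_j (cofactor_j . b_j) for beta = b1 ^ ... ^ bk,
    # where cofactor_j = (-1)^j * (wedge of the other vectors, in order).
    total = {}
    for j, bj in enumerate(vectors):
        cof = {(): (-1) ** j}
        for w in vectors[:j] + vectors[j + 1:]:
            cof = wedge(cof, w)
        total = add(total, interior(cof, bj))
    return total

b1 = {(1,): 1, (2,): 2}
b2 = {(1,): 3, (3,): 1}
b3 = {(2,): 1, (4,): 5}
print(zero_interior_sum([b1, b2]))      # {}
print(zero_interior_sum([b1, b2, b3]))  # {}
```

The empty dict represents the zero element, confirming that the second sum in the proof vanishes term by term.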

l = k-1
This time we begin with the A form for l equal to k-1, and show that it reduces to the B form for l equal to k-1.


α_m ∘_{k−1} β_k ≡ Σ_{j=1}^{k} (α_m ∙ β_j) ∧ β̄_j,    β_k ≡ β_1∧β̄_1 + β_2∧β̄_2 + ⋯

(each β_j of grade k−1, each β̄_j of grade 1)

We then retrieve a second interior product theorem from Chapter 6


(α_m ∙ β_k) ∧ x ≡ (α_m ∧ x) ∙ β_k + (−1)^{m+1} α_m ∙ (β_k ∙ x)

which enables us to expand the summand again to give two types of terms
α_m ∘_{k−1} β_k ≡ Σ_{j=1}^{k} (α_m ∧ β̄_j) ∙ β_j + (−1)^{m+1} Σ_{j=1}^{k} α_m ∙ (β_j ∙ β̄_j)

The first sum is of the B form, while the second sum is zero on account of the Zero Interior Sum theorem. Hence we have proved the theorem for l equal to k-1.

Example: Verification of the Generalized Product Theorem


As an example we take the generalized product α_m ∘_2 β_4, and generate the interior product form for both the A and B forms. (Note the 'B' at the end of the function name ToInteriorProductsB in the second case, to indicate that it will generate the B form.)

A = ToInteriorProducts[α_m ∘_2 β_4]

(α_m ∙ b1∧b2) ∧ b3∧b4 − (α_m ∙ b1∧b3) ∧ b2∧b4 + (α_m ∙ b1∧b4) ∧ b2∧b3 +
(α_m ∙ b2∧b3) ∧ b1∧b4 − (α_m ∙ b2∧b4) ∧ b1∧b3 + (α_m ∙ b3∧b4) ∧ b1∧b2

B = ToInteriorProductsB[α_m ∘_2 β_4]

(α_m∧b1∧b2) ∙ (b3∧b4) − (α_m∧b1∧b3) ∙ (b2∧b4) + (α_m∧b1∧b4) ∙ (b2∧b3) +
(α_m∧b2∧b3) ∙ (b1∧b4) − (α_m∧b2∧b4) ∙ (b1∧b3) + (α_m∧b3∧b4) ∙ (b1∧b2)

Note that the A form and the B form at this first (interior product) level of their expansion both have the same number of terms (in this case C(4,2) = 6). Although the expressions also both have the same concatenation of factors, their grouping is different. Due to Mathematica's inbuilt precedence, parentheses are not needed in the second case: the exterior product has higher precedence than the interior product, so parentheses effectively surround the exterior products.


The next step is to convert these two expressions to their inner product forms. But to do this we will need to specify the grade of α_m. For this example, we take m equal to 4, and replace α_m in the expressions above by a simple 4-element 𝒜 (and consequently we also need to increase the dimension of the space above the default value of 3, to prevent 𝒜 being treated as zero).

DeclareBasis[6]; 𝒜 = ComposeSimpleForm[α_4]
a1∧a2∧a3∧a4

A1 = GrassmannSimplify[ToInnerProducts[A /. α_4 → 𝒜]]

(a3∧a4 ∙ b3∧b4) a1∧a2∧b1∧b2 − (a3∧a4 ∙ b2∧b4) a1∧a2∧b1∧b3 + (a3∧a4 ∙ b2∧b3) a1∧a2∧b1∧b4 +
(a3∧a4 ∙ b1∧b4) a1∧a2∧b2∧b3 − (a3∧a4 ∙ b1∧b3) a1∧a2∧b2∧b4 + (a3∧a4 ∙ b1∧b2) a1∧a2∧b3∧b4 −
(a2∧a4 ∙ b3∧b4) a1∧a3∧b1∧b2 + (a2∧a4 ∙ b2∧b4) a1∧a3∧b1∧b3 − (a2∧a4 ∙ b2∧b3) a1∧a3∧b1∧b4 −
(a2∧a4 ∙ b1∧b4) a1∧a3∧b2∧b3 + (a2∧a4 ∙ b1∧b3) a1∧a3∧b2∧b4 − (a2∧a4 ∙ b1∧b2) a1∧a3∧b3∧b4 +
(a2∧a3 ∙ b3∧b4) a1∧a4∧b1∧b2 − (a2∧a3 ∙ b2∧b4) a1∧a4∧b1∧b3 + (a2∧a3 ∙ b2∧b3) a1∧a4∧b1∧b4 +
(a2∧a3 ∙ b1∧b4) a1∧a4∧b2∧b3 − (a2∧a3 ∙ b1∧b3) a1∧a4∧b2∧b4 + (a2∧a3 ∙ b1∧b2) a1∧a4∧b3∧b4 +
(a1∧a4 ∙ b3∧b4) a2∧a3∧b1∧b2 − (a1∧a4 ∙ b2∧b4) a2∧a3∧b1∧b3 + (a1∧a4 ∙ b2∧b3) a2∧a3∧b1∧b4 +
(a1∧a4 ∙ b1∧b4) a2∧a3∧b2∧b3 − (a1∧a4 ∙ b1∧b3) a2∧a3∧b2∧b4 + (a1∧a4 ∙ b1∧b2) a2∧a3∧b3∧b4 −
(a1∧a3 ∙ b3∧b4) a2∧a4∧b1∧b2 + (a1∧a3 ∙ b2∧b4) a2∧a4∧b1∧b3 − (a1∧a3 ∙ b2∧b3) a2∧a4∧b1∧b4 −
(a1∧a3 ∙ b1∧b4) a2∧a4∧b2∧b3 + (a1∧a3 ∙ b1∧b3) a2∧a4∧b2∧b4 − (a1∧a3 ∙ b1∧b2) a2∧a4∧b3∧b4 +
(a1∧a2 ∙ b3∧b4) a3∧a4∧b1∧b2 − (a1∧a2 ∙ b2∧b4) a3∧a4∧b1∧b3 + (a1∧a2 ∙ b2∧b3) a3∧a4∧b1∧b4 +
(a1∧a2 ∙ b1∧b4) a3∧a4∧b2∧b3 − (a1∧a2 ∙ b1∧b3) a3∧a4∧b2∧b4 + (a1∧a2 ∙ b1∧b2) a3∧a4∧b3∧b4


B1 = GrassmannSimplify[ToInnerProducts[B /. α_4 → 𝒜]]

(2 (b1∧b2 ∙ b3∧b4) − 2 (b1∧b3 ∙ b2∧b4) + 2 (b1∧b4 ∙ b2∧b3)) a1∧a2∧a3∧a4 +
(−(a4∧b2 ∙ b3∧b4) + a4∧b3 ∙ b2∧b4 − a4∧b4 ∙ b2∧b3) a1∧a2∧a3∧b1 −
(a4∧b1 ∙ b3∧b4 − a4∧b3 ∙ b1∧b4 + a4∧b4 ∙ b1∧b3) a1∧a2∧a3∧b2 +
(−(a4∧b1 ∙ b2∧b4) + a4∧b2 ∙ b1∧b4 − a4∧b4 ∙ b1∧b2) a1∧a2∧a3∧b3 −
(a4∧b1 ∙ b2∧b3 − a4∧b2 ∙ b1∧b3 + a4∧b3 ∙ b1∧b2) a1∧a2∧a3∧b4 +
(a3∧b2 ∙ b3∧b4 − a3∧b3 ∙ b2∧b4 + a3∧b4 ∙ b2∧b3) a1∧a2∧a4∧b1 +
(−(a3∧b1 ∙ b3∧b4) + a3∧b3 ∙ b1∧b4 − a3∧b4 ∙ b1∧b3) a1∧a2∧a4∧b2 −
(a3∧b1 ∙ b2∧b4 − a3∧b2 ∙ b1∧b4 + a3∧b4 ∙ b1∧b2) a1∧a2∧a4∧b3 +
(−(a3∧b1 ∙ b2∧b3) + a3∧b2 ∙ b1∧b3 − a3∧b3 ∙ b1∧b2) a1∧a2∧a4∧b4 +
(a3∧a4 ∙ b3∧b4) a1∧a2∧b1∧b2 − (a3∧a4 ∙ b2∧b4) a1∧a2∧b1∧b3 + (a3∧a4 ∙ b2∧b3) a1∧a2∧b1∧b4 +
(a3∧a4 ∙ b1∧b4) a1∧a2∧b2∧b3 − (a3∧a4 ∙ b1∧b3) a1∧a2∧b2∧b4 + (a3∧a4 ∙ b1∧b2) a1∧a2∧b3∧b4 +
(−(a2∧b2 ∙ b3∧b4) + a2∧b3 ∙ b2∧b4 − a2∧b4 ∙ b2∧b3) a1∧a3∧a4∧b1 −
(a2∧b1 ∙ b3∧b4 − a2∧b3 ∙ b1∧b4 + a2∧b4 ∙ b1∧b3) a1∧a3∧a4∧b2 +
(−(a2∧b1 ∙ b2∧b4) + a2∧b2 ∙ b1∧b4 − a2∧b4 ∙ b1∧b2) a1∧a3∧a4∧b3 −
(a2∧b1 ∙ b2∧b3 − a2∧b2 ∙ b1∧b3 + a2∧b3 ∙ b1∧b2) a1∧a3∧a4∧b4 −
(a2∧a4 ∙ b3∧b4) a1∧a3∧b1∧b2 + (a2∧a4 ∙ b2∧b4) a1∧a3∧b1∧b3 − (a2∧a4 ∙ b2∧b3) a1∧a3∧b1∧b4 −
(a2∧a4 ∙ b1∧b4) a1∧a3∧b2∧b3 + (a2∧a4 ∙ b1∧b3) a1∧a3∧b2∧b4 − (a2∧a4 ∙ b1∧b2) a1∧a3∧b3∧b4 +
(a2∧a3 ∙ b3∧b4) a1∧a4∧b1∧b2 − (a2∧a3 ∙ b2∧b4) a1∧a4∧b1∧b3 + (a2∧a3 ∙ b2∧b3) a1∧a4∧b1∧b4 +
(a2∧a3 ∙ b1∧b4) a1∧a4∧b2∧b3 − (a2∧a3 ∙ b1∧b3) a1∧a4∧b2∧b4 + (a2∧a3 ∙ b1∧b2) a1∧a4∧b3∧b4 +
(a1∧b2 ∙ b3∧b4 − a1∧b3 ∙ b2∧b4 + a1∧b4 ∙ b2∧b3) a2∧a3∧a4∧b1 +
(−(a1∧b1 ∙ b3∧b4) + a1∧b3 ∙ b1∧b4 − a1∧b4 ∙ b1∧b3) a2∧a3∧a4∧b2 −
(a1∧b1 ∙ b2∧b4 − a1∧b2 ∙ b1∧b4 + a1∧b4 ∙ b1∧b2) a2∧a3∧a4∧b3 +
(−(a1∧b1 ∙ b2∧b3) + a1∧b2 ∙ b1∧b3 − a1∧b3 ∙ b1∧b2) a2∧a3∧a4∧b4 +
(a1∧a4 ∙ b3∧b4) a2∧a3∧b1∧b2 − (a1∧a4 ∙ b2∧b4) a2∧a3∧b1∧b3 + (a1∧a4 ∙ b2∧b3) a2∧a3∧b1∧b4 +
(a1∧a4 ∙ b1∧b4) a2∧a3∧b2∧b3 − (a1∧a4 ∙ b1∧b3) a2∧a3∧b2∧b4 + (a1∧a4 ∙ b1∧b2) a2∧a3∧b3∧b4 −
(a1∧a3 ∙ b3∧b4) a2∧a4∧b1∧b2 + (a1∧a3 ∙ b2∧b4) a2∧a4∧b1∧b3 − (a1∧a3 ∙ b2∧b3) a2∧a4∧b1∧b4 −
(a1∧a3 ∙ b1∧b4) a2∧a4∧b2∧b3 + (a1∧a3 ∙ b1∧b3) a2∧a4∧b2∧b4 − (a1∧a3 ∙ b1∧b2) a2∧a4∧b3∧b4 +
(a1∧a2 ∙ b3∧b4) a3∧a4∧b1∧b2 − (a1∧a2 ∙ b2∧b4) a3∧a4∧b1∧b3 + (a1∧a2 ∙ b2∧b3) a3∧a4∧b1∧b4 +
(a1∧a2 ∙ b1∧b4) a3∧a4∧b2∧b3 − (a1∧a2 ∙ b1∧b3) a3∧a4∧b2∧b4 + (a1∧a2 ∙ b1∧b2) a3∧a4∧b3∧b4

It can be seen that at this inner product level, these two expressions are not of the same form: B1 contains additional terms to those of A1. The interior products of A1 are only of the form (ai∧aj ∙ br∧bs), whereas the extra terms of B1 are of the forms (bi∧bj ∙ br∧bs) and (ai∧bj ∙ br∧bs). Calculating their difference gives:


AB = B1 − A1

(2 (b1∧b2 ∙ b3∧b4) − 2 (b1∧b3 ∙ b2∧b4) + 2 (b1∧b4 ∙ b2∧b3)) a1∧a2∧a3∧a4 +
(−(a4∧b2 ∙ b3∧b4) + a4∧b3 ∙ b2∧b4 − a4∧b4 ∙ b2∧b3) a1∧a2∧a3∧b1 −
(a4∧b1 ∙ b3∧b4 − a4∧b3 ∙ b1∧b4 + a4∧b4 ∙ b1∧b3) a1∧a2∧a3∧b2 +
(−(a4∧b1 ∙ b2∧b4) + a4∧b2 ∙ b1∧b4 − a4∧b4 ∙ b1∧b2) a1∧a2∧a3∧b3 −
(a4∧b1 ∙ b2∧b3 − a4∧b2 ∙ b1∧b3 + a4∧b3 ∙ b1∧b2) a1∧a2∧a3∧b4 +
(a3∧b2 ∙ b3∧b4 − a3∧b3 ∙ b2∧b4 + a3∧b4 ∙ b2∧b3) a1∧a2∧a4∧b1 +
(−(a3∧b1 ∙ b3∧b4) + a3∧b3 ∙ b1∧b4 − a3∧b4 ∙ b1∧b3) a1∧a2∧a4∧b2 −
(a3∧b1 ∙ b2∧b4 − a3∧b2 ∙ b1∧b4 + a3∧b4 ∙ b1∧b2) a1∧a2∧a4∧b3 +
(−(a3∧b1 ∙ b2∧b3) + a3∧b2 ∙ b1∧b3 − a3∧b3 ∙ b1∧b2) a1∧a2∧a4∧b4 +
(−(a2∧b2 ∙ b3∧b4) + a2∧b3 ∙ b2∧b4 − a2∧b4 ∙ b2∧b3) a1∧a3∧a4∧b1 −
(a2∧b1 ∙ b3∧b4 − a2∧b3 ∙ b1∧b4 + a2∧b4 ∙ b1∧b3) a1∧a3∧a4∧b2 +
(−(a2∧b1 ∙ b2∧b4) + a2∧b2 ∙ b1∧b4 − a2∧b4 ∙ b1∧b2) a1∧a3∧a4∧b3 −
(a2∧b1 ∙ b2∧b3 − a2∧b2 ∙ b1∧b3 + a2∧b3 ∙ b1∧b2) a1∧a3∧a4∧b4 +
(a1∧b2 ∙ b3∧b4 − a1∧b3 ∙ b2∧b4 + a1∧b4 ∙ b2∧b3) a2∧a3∧a4∧b1 +
(−(a1∧b1 ∙ b3∧b4) + a1∧b3 ∙ b1∧b4 − a1∧b4 ∙ b1∧b3) a2∧a3∧a4∧b2 −
(a1∧b1 ∙ b2∧b4 − a1∧b2 ∙ b1∧b4 + a1∧b4 ∙ b1∧b2) a2∧a3∧a4∧b3 +
(−(a1∧b1 ∙ b2∧b3) + a1∧b2 ∙ b1∧b3 − a1∧b3 ∙ b1∧b2) a2∧a3∧a4∧b4

By inspection, we can see that each scalar coefficient is a zero interior sum, thus making AB equal to zero and proving the theorem for this particular case. We can also verify this directly by reducing the inner products of AB to scalar products.
GrassmannSimplify[ToScalarProducts[AB]]
0

Verification that the B form may be expanded in terms of either factor


In this section we verify that just as for the A form, the B form may be expanded in terms of either of its factors. The principal difference between an expansion using the A form, and one using the B form, is that the A form needs only to be expanded to the inner product level to show the identity between two equal expressions. On the other hand, expansion of the B form requires expansion to the scalar product level, often involving significantly more computation. We take the generalized product α_4 ∘_2 β_3, and show that we need to expand the product to the scalar product level before the identity of the two expansions becomes evident.


{𝒜, ℬ} = {ComposeSimpleForm[α_4], ComposeSimpleForm[β_3]}

{a1∧a2∧a3∧a4, b1∧b2∧b3}

Reduction to interior products


A = ToInteriorProductsB[𝒜 ∘_2 ℬ]

(a1∧a2∧a3∧a4∧b1) ∙ (b2∧b3) − (a1∧a2∧a3∧a4∧b2) ∙ (b1∧b3) + (a1∧a2∧a3∧a4∧b3) ∙ (b1∧b2)

To expand with respect to α_4 we reverse the order of the generalized product to give ℬ ∘_2 𝒜 and multiply by (−1)^{(m−λ)(k−λ)} (which we note for this case to be 1) to give

B = ToInteriorProductsB[ℬ ∘_2 𝒜]

(a1∧a2∧b1∧b2∧b3) ∙ (a3∧a4) − (a1∧a3∧b1∧b2∧b3) ∙ (a2∧a4) + (a1∧a4∧b1∧b2∧b3) ∙ (a2∧a3) +
(a2∧a3∧b1∧b2∧b3) ∙ (a1∧a4) − (a2∧a4∧b1∧b2∧b3) ∙ (a1∧a3) + (a3∧a4∧b1∧b2∧b3) ∙ (a1∧a2)

Reduction to inner products


A1 = GrassmannSimplify[ToInnerProducts[A]]

(a4∧b1 ∙ b2∧b3 − a4∧b2 ∙ b1∧b3 + a4∧b3 ∙ b1∧b2) a1∧a2∧a3 +
(−(a3∧b1 ∙ b2∧b3) + a3∧b2 ∙ b1∧b3 − a3∧b3 ∙ b1∧b2) a1∧a2∧a4 +
(a3∧a4 ∙ b2∧b3) a1∧a2∧b1 − (a3∧a4 ∙ b1∧b3) a1∧a2∧b2 + (a3∧a4 ∙ b1∧b2) a1∧a2∧b3 +
(a2∧b1 ∙ b2∧b3 − a2∧b2 ∙ b1∧b3 + a2∧b3 ∙ b1∧b2) a1∧a3∧a4 −
(a2∧a4 ∙ b2∧b3) a1∧a3∧b1 + (a2∧a4 ∙ b1∧b3) a1∧a3∧b2 − (a2∧a4 ∙ b1∧b2) a1∧a3∧b3 +
(a2∧a3 ∙ b2∧b3) a1∧a4∧b1 − (a2∧a3 ∙ b1∧b3) a1∧a4∧b2 + (a2∧a3 ∙ b1∧b2) a1∧a4∧b3 +
(−(a1∧b1 ∙ b2∧b3) + a1∧b2 ∙ b1∧b3 − a1∧b3 ∙ b1∧b2) a2∧a3∧a4 +
(a1∧a4 ∙ b2∧b3) a2∧a3∧b1 − (a1∧a4 ∙ b1∧b3) a2∧a3∧b2 + (a1∧a4 ∙ b1∧b2) a2∧a3∧b3 −
(a1∧a3 ∙ b2∧b3) a2∧a4∧b1 + (a1∧a3 ∙ b1∧b3) a2∧a4∧b2 − (a1∧a3 ∙ b1∧b2) a2∧a4∧b3 +
(a1∧a2 ∙ b2∧b3) a3∧a4∧b1 − (a1∧a2 ∙ b1∧b3) a3∧a4∧b2 + (a1∧a2 ∙ b1∧b2) a3∧a4∧b3


B1 = GrassmannSimplify[ToInnerProducts[B]]

(a3∧a4 ∙ b2∧b3) a1∧a2∧b1 − (a3∧a4 ∙ b1∧b3) a1∧a2∧b2 + (a3∧a4 ∙ b1∧b2) a1∧a2∧b3 −
(a2∧a4 ∙ b2∧b3) a1∧a3∧b1 + (a2∧a4 ∙ b1∧b3) a1∧a3∧b2 − (a2∧a4 ∙ b1∧b2) a1∧a3∧b3 +
(a2∧a3 ∙ b2∧b3) a1∧a4∧b1 − (a2∧a3 ∙ b1∧b3) a1∧a4∧b2 + (a2∧a3 ∙ b1∧b2) a1∧a4∧b3 +
(a2∧a3 ∙ a4∧b3 − a2∧a4 ∙ a3∧b3 + a2∧b3 ∙ a3∧a4) a1∧b1∧b2 +
(−(a2∧a3 ∙ a4∧b2) + a2∧a4 ∙ a3∧b2 − a2∧b2 ∙ a3∧a4) a1∧b1∧b3 +
(a2∧a3 ∙ a4∧b1 − a2∧a4 ∙ a3∧b1 + a2∧b1 ∙ a3∧a4) a1∧b2∧b3 +
(a1∧a4 ∙ b2∧b3) a2∧a3∧b1 − (a1∧a4 ∙ b1∧b3) a2∧a3∧b2 + (a1∧a4 ∙ b1∧b2) a2∧a3∧b3 −
(a1∧a3 ∙ b2∧b3) a2∧a4∧b1 + (a1∧a3 ∙ b1∧b3) a2∧a4∧b2 − (a1∧a3 ∙ b1∧b2) a2∧a4∧b3 +
(−(a1∧a3 ∙ a4∧b3) + a1∧a4 ∙ a3∧b3 − a1∧b3 ∙ a3∧a4) a2∧b1∧b2 +
(a1∧a3 ∙ a4∧b2 − a1∧a4 ∙ a3∧b2 + a1∧b2 ∙ a3∧a4) a2∧b1∧b3 +
(−(a1∧a3 ∙ a4∧b1) + a1∧a4 ∙ a3∧b1 − a1∧b1 ∙ a3∧a4) a2∧b2∧b3 +
(a1∧a2 ∙ b2∧b3) a3∧a4∧b1 − (a1∧a2 ∙ b1∧b3) a3∧a4∧b2 + (a1∧a2 ∙ b1∧b2) a3∧a4∧b3 +
(a1∧a2 ∙ a4∧b3 − a1∧a4 ∙ a2∧b3 + a1∧b3 ∙ a2∧a4) a3∧b1∧b2 +
(−(a1∧a2 ∙ a4∧b2) + a1∧a4 ∙ a2∧b2 − a1∧b2 ∙ a2∧a4) a3∧b1∧b3 +
(a1∧a2 ∙ a4∧b1 − a1∧a4 ∙ a2∧b1 + a1∧b1 ∙ a2∧a4) a3∧b2∧b3 +
(−(a1∧a2 ∙ a3∧b3) + a1∧a3 ∙ a2∧b3 − a1∧b3 ∙ a2∧a3) a4∧b1∧b2 +
(a1∧a2 ∙ a3∧b2 − a1∧a3 ∙ a2∧b2 + a1∧b2 ∙ a2∧a3) a4∧b1∧b3 +
(−(a1∧a2 ∙ a3∧b1) + a1∧a3 ∙ a2∧b1 − a1∧b1 ∙ a2∧a3) a4∧b2∧b3 +
(2 (a1∧a2 ∙ a3∧a4) − 2 (a1∧a3 ∙ a2∧a4) + 2 (a1∧a4 ∙ a2∧a3)) b1∧b2∧b3

By observation we can see that A1 and B1 contain some common terms. We compute the difference of the two inner product forms and collect terms with the same exterior product factor:


AB = B1 − A1

−(a4∧b1 ∙ b2∧b3 − a4∧b2 ∙ b1∧b3 + a4∧b3 ∙ b1∧b2) a1∧a2∧a3 −
(−(a3∧b1 ∙ b2∧b3) + a3∧b2 ∙ b1∧b3 − a3∧b3 ∙ b1∧b2) a1∧a2∧a4 −
(a2∧b1 ∙ b2∧b3 − a2∧b2 ∙ b1∧b3 + a2∧b3 ∙ b1∧b2) a1∧a3∧a4 +
(a2∧a3 ∙ a4∧b3 − a2∧a4 ∙ a3∧b3 + a2∧b3 ∙ a3∧a4) a1∧b1∧b2 +
(−(a2∧a3 ∙ a4∧b2) + a2∧a4 ∙ a3∧b2 − a2∧b2 ∙ a3∧a4) a1∧b1∧b3 +
(a2∧a3 ∙ a4∧b1 − a2∧a4 ∙ a3∧b1 + a2∧b1 ∙ a3∧a4) a1∧b2∧b3 −
(−(a1∧b1 ∙ b2∧b3) + a1∧b2 ∙ b1∧b3 − a1∧b3 ∙ b1∧b2) a2∧a3∧a4 +
(−(a1∧a3 ∙ a4∧b3) + a1∧a4 ∙ a3∧b3 − a1∧b3 ∙ a3∧a4) a2∧b1∧b2 +
(a1∧a3 ∙ a4∧b2 − a1∧a4 ∙ a3∧b2 + a1∧b2 ∙ a3∧a4) a2∧b1∧b3 +
(−(a1∧a3 ∙ a4∧b1) + a1∧a4 ∙ a3∧b1 − a1∧b1 ∙ a3∧a4) a2∧b2∧b3 +
(a1∧a2 ∙ a4∧b3 − a1∧a4 ∙ a2∧b3 + a1∧b3 ∙ a2∧a4) a3∧b1∧b2 +
(−(a1∧a2 ∙ a4∧b2) + a1∧a4 ∙ a2∧b2 − a1∧b2 ∙ a2∧a4) a3∧b1∧b3 +
(a1∧a2 ∙ a4∧b1 − a1∧a4 ∙ a2∧b1 + a1∧b1 ∙ a2∧a4) a3∧b2∧b3 +
(−(a1∧a2 ∙ a3∧b3) + a1∧a3 ∙ a2∧b3 − a1∧b3 ∙ a2∧a3) a4∧b1∧b2 +
(a1∧a2 ∙ a3∧b2 − a1∧a3 ∙ a2∧b2 + a1∧b2 ∙ a2∧a3) a4∧b1∧b3 +
(−(a1∧a2 ∙ a3∧b1) + a1∧a3 ∙ a2∧b1 − a1∧b1 ∙ a2∧a3) a4∧b2∧b3 +
(2 (a1∧a2 ∙ a3∧a4) − 2 (a1∧a3 ∙ a2∧a4) + 2 (a1∧a4 ∙ a2∧a3)) b1∧b2∧b3

Reduction to scalar products


In order to show the equality of the two forms, we need to show that the above difference is zero. As in the previous section we may do this by converting the inner products to scalar products. Alternatively we may observe that the coefficients of the terms are zero by virtue of the Zero Interior Sum Theorem discussed later in this chapter.
GrassmannSimplify[ToScalarProducts[AB]]
0

This is an example of how the B form of the generalized product may be expanded in terms of either factor.

Example: Case Min[m, k] < l < Max[m, k]: Reduction to zero


When the order of the product l is greater than the smaller of the grades of the factors of the product, but less than the larger of the grades, the generalized product may still be expanded in terms of the factors of the element of larger grade. This leads however to a sum of terms which is zero.
α_m ∘_λ β_k ≡ 0,    Min[m, k] < λ < Max[m, k]

When l is equal to the larger of the grades of the factors, the generalized product reduces to a single interior product which is zero by virtue of its left factor being of lesser grade than its right factor. Suppose l = k > m. Then:


α_m ∘_k β_k ≡ α_m ∙ β_k ≡ 0,    λ = k > m

These relationships are the source of an interesting suite of identities relating exterior and interior products. We take some examples; in each case we verify that the result is zero by converting the expression to its scalar product form. But we will need to circumvent the default behaviour of ToInteriorProducts by not initially telling it the grade of one of the factors (since it would normally return 0 for this type of product). We also need to declare some vector symbols and use the B variant of ToInteriorProducts (ToInteriorProductsB) to force the desired expansion.

DeclareExtraVectorSymbols[a_];

Example 1
A123 = ToInteriorProductsB[α ∘_2 β_3] /. α → a1

(a1∧b1) ∙ (b2∧b3) − (a1∧b2) ∙ (b1∧b3) + (a1∧b3) ∙ (b1∧b2)

GrassmannSimplify[ToScalarProducts[A123]]
0

Example 2
A124 = ToInteriorProductsB[α ∘_2 β_4] /. α → a1

(a1∧b1∧b2) ∙ (b3∧b4) − (a1∧b1∧b3) ∙ (b2∧b4) + (a1∧b1∧b4) ∙ (b2∧b3) +
(a1∧b2∧b3) ∙ (b1∧b4) − (a1∧b2∧b4) ∙ (b1∧b3) + (a1∧b3∧b4) ∙ (b1∧b2)

GrassmannSimplify[ToScalarProducts[A124]]
0

Example 3
A234 = ToInteriorProductsB[α ∘_3 β_4] /. α → a1∧a2

−(a1∧a2∧b1) ∙ (b2∧b3∧b4) + (a1∧a2∧b2) ∙ (b1∧b3∧b4) −
(a1∧a2∧b3) ∙ (b1∧b2∧b4) + (a1∧a2∧b4) ∙ (b1∧b2∧b3)

GrassmannSimplify[ToScalarProducts[A234]]
0
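These reductions to zero can be checked numerically. The sketch below (my own Python, assuming an orthonormal Euclidean metric; multivectors are dicts from sorted index tuples to coefficients) expands the B form over the factors of the larger element and confirms the sum vanishes when Min[m, k] < λ ≤ Max[m, k].

```python
from itertools import combinations

def merge_sign(I, J):
    if set(I) & set(J):
        return 0
    s = 1
    for x in I:
        s *= (-1) ** sum(1 for y in J if y < x)
    return s

def add(u, v):
    w = dict(u)
    for k, c in v.items():
        w[k] = w.get(k, 0) + c
    return {k: c for k, c in w.items() if c}

def wedge(u, v):
    w = {}
    for I, x in u.items():
        for J, y in v.items():
            s = merge_sign(I, J)
            if s:
                K = tuple(sorted(I + J))
                w[K] = w.get(K, 0) + s * x * y
    return {k: c for k, c in w.items() if c}

def interior(u, v):
    # Orthonormal-basis interior product: e_I . e_J = +/- e_{I\J} when J is within I.
    w = {}
    for I, x in u.items():
        for J, y in v.items():
            if set(J) <= set(I):
                R = tuple(i for i in I if i not in J)
                w[R] = w.get(R, 0) + merge_sign(J, R) * x * y
    return {k: c for k, c in w.items() if c}

def gp_B(u, v, lam):
    # B form: sum over grade-lam splits of each blade of v of (u ^ bbar_j) . b_j.
    out = {}
    for K, y in v.items():
        for S in combinations(K, lam):
            R = tuple(i for i in K if i not in S)
            out = add(out, interior(wedge(u, {R: merge_sign(S, R) * y}), {S: 1}))
    return out

# grade 1 against a simple 3-element, order 2 (strictly between the grades)
b1 = {(1,): 1, (2,): 1}
b2 = {(2,): 1, (3,): 2}
b3 = {(3,): 1, (4,): 1}
v3 = wedge(wedge(b1, b2), b3)
u1 = {(1,): 2, (4,): -1}
print(gp_B(u1, v3, 2))   # {}

# grade 2 against a simple 4-element, order 3
b4 = {(4,): 1, (5,): 3}
v4 = wedge(v3, b4)
u2 = wedge({(1,): 1, (5,): 1}, {(2,): 1, (4,): 2})
print(gp_B(u2, v4, 3))   # {}
```

The individual terms of each expansion are generally non-zero; it is only their sum which reduces to zero.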


10.6 Products with Common Factors


Products with common factors
One of the more interesting properties of the generalized Grassmann product of simple elements is its behaviour when these elements contain a common factor. If two simple elements contain a common factor of grade g, then their generalized Grassmann product of order g reduces to a simple interior product, and their generalized products of order less than g are zero. We can show this most directly from the B form of the generalized product formula 10.10. The B form is
α_m ∘_λ β_k ≡ Σ_{j=1}^{C(k,λ)} (α_m ∧ β̄_j) ∙ β_j,    β_k ≡ β_1∧β̄_1 + β_2∧β̄_2 + ⋯

(each β_j of grade λ, each β̄_j of grade k−λ)

Let α_m and β_k have a common factor γ of grade λ, and residual factors A and B:

α_m ≡ γ_λ ∧ A_a        β_k ≡ γ_λ ∧ B_b

Substituting these products into the formula above enables us to begin writing the generalized product sum:
(γ_λ∧A_a) ∘_λ (γ_λ∧B_b) ≡ (γ_λ∧A_a∧B_b) ∙ γ_λ + ⋯

But we quickly see that because the subsequent terms must each contain a repeated factor in their exterior product, they must all reduce to zero. Hence, for simple elements, if the order of the generalized product is equal to the grade of the common factor, the generalized product reduces to a simple interior product.

(γ_λ∧A_a) ∘_λ (γ_λ∧B_b) ≡ (γ_λ∧A_a∧B_b) ∙ γ_λ        (10.11)

By interchanging the positions of γ, A, and B we also have two simple alternate expressions:


(A_a∧γ_λ) ∘_λ (γ_λ∧B_b) ≡ (A_a∧γ_λ∧B_b) ∙ γ_λ        (10.12)

(A_a∧γ_λ) ∘_λ (B_b∧γ_λ) ≡ (A_a∧B_b∧γ_λ) ∙ γ_λ

In a similar way, we can see that if the order of the generalized product is less than the grade of the common factor, all terms must now contain some common terms in their exterior product, and the generalized product reduces to zero.

(γ_m∧A_a) ∘_λ (γ_m∧B_b) ≡ 0,    λ < m        (10.13)

Of course, this result is also valid in the case that either or both of A and B are scalars.
γ_m ∘_λ (γ_m∧B_b) ≡ 0,    (γ_m∧A_a) ∘_λ γ_m ≡ 0,    γ_m ∘_λ γ_m ≡ 0,    λ < m        (10.14)
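Formulas 10.11 and 10.13 can be checked numerically. Below is a Python sketch (my own, assuming an orthonormal Euclidean metric, with multivectors as dicts from sorted index tuples to coefficients): two elements built with a common simple grade-2 factor γ give zero at orders below 2, and at order 2 give exactly (γ∧A∧B) ∙ γ.

```python
from itertools import combinations

def merge_sign(I, J):
    if set(I) & set(J):
        return 0
    s = 1
    for x in I:
        s *= (-1) ** sum(1 for y in J if y < x)
    return s

def add(u, v):
    w = dict(u)
    for k, c in v.items():
        w[k] = w.get(k, 0) + c
    return {k: c for k, c in w.items() if c}

def wedge(u, v):
    w = {}
    for I, x in u.items():
        for J, y in v.items():
            s = merge_sign(I, J)
            if s:
                K = tuple(sorted(I + J))
                w[K] = w.get(K, 0) + s * x * y
    return {k: c for k, c in w.items() if c}

def interior(u, v):
    # Orthonormal-basis interior product: e_I . e_J = +/- e_{I\J} when J is within I.
    w = {}
    for I, x in u.items():
        for J, y in v.items():
            if set(J) <= set(I):
                R = tuple(i for i in I if i not in J)
                w[R] = w.get(R, 0) + merge_sign(J, R) * x * y
    return {k: c for k, c in w.items() if c}

def gp_A(u, v, lam):
    # A form: sum over grade-lam splits of each blade of v of (u . b_j) ^ bbar_j.
    out = {}
    for K, y in v.items():
        for S in combinations(K, lam):
            R = tuple(i for i in K if i not in S)
            out = add(out, wedge(interior(u, {S: 1}), {R: merge_sign(S, R) * y}))
    return out

g = wedge({(1,): 1, (2,): 2}, {(3,): 1, (5,): 1})   # simple gamma of grade 2
A = {(4,): 1}
B = {(1,): 1, (4,): 2}
gA, gB = wedge(g, A), wedge(g, B)
gAB = wedge(gA, B)

print(gp_A(gA, gB, 0))                      # {}   lambda < 2
print(gp_A(gA, gB, 1))                      # {}   lambda < 2
print(gp_A(gA, gB, 2) == interior(gAB, g))  # True: (g^A^B).g
```

Note that the order-2 result is non-zero here, so the agreement with (γ∧A∧B) ∙ γ is not vacuous.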

Congruent factors
If both A and B are scalars (that is, the grades a and b are equal to 0), they may be factored out of the formulas.

γ_m ∘_λ γ_m ≡ γ_m ∙ γ_m,  λ = m;        γ_m ∘_λ γ_m ≡ 0,  λ ≠ m        (10.15)

If α_n and β_n are congruent n-elements (that is, they differ by at most a scalar factor), then:

α_n ∘_λ β_n ≡ α_n ∙ β_n,  λ = n;        α_n ∘_λ β_n ≡ 0,  λ ≠ n        (10.16)
These results give rise to a series of relationships among the exterior and interior products of factors of a simple element. These relationships are independent of the dimension of the space concerned. For example, applying the result to simple 2, 3 and 4-elements gives the following relationships at the interior product level:


Example: 2-elements
ToInteriorProducts[α_2 ∘_1 α_2 == 0]

(α_2 ∙ a1) ∧ a2 − (α_2 ∙ a2) ∧ a1 == 0

Example: 3-elements
ToInteriorProducts[α_3 ∘_1 α_3 == 0]

a1∧a2∧(α_3 ∙ a3) − a1∧a3∧(α_3 ∙ a2) + a2∧a3∧(α_3 ∙ a1) == 0

ToInteriorProducts[α_3 ∘_2 α_3 == 0]

(α_3 ∙ a1∧a2) ∧ a3 − (α_3 ∙ a1∧a3) ∧ a2 + (α_3 ∙ a2∧a3) ∧ a1 == 0

Example: 4-elements
ToInteriorProducts[α_4 ∘_1 α_4 == 0]

(α_4 ∙ a1) ∧ a2∧a3∧a4 − (α_4 ∙ a2) ∧ a1∧a3∧a4 +
(α_4 ∙ a3) ∧ a1∧a2∧a4 − (α_4 ∙ a4) ∧ a1∧a2∧a3 == 0

ToInteriorProducts[α_4 ∘_2 α_4 == 0]

a1∧a2∧(α_4 ∙ a3∧a4) − a1∧a3∧(α_4 ∙ a2∧a4) + a1∧a4∧(α_4 ∙ a2∧a3) +
a2∧a3∧(α_4 ∙ a1∧a4) − a2∧a4∧(α_4 ∙ a1∧a3) + a3∧a4∧(α_4 ∙ a1∧a2) == 0

ToInteriorProducts[α_4 ∘_3 α_4 == 0]

(α_4 ∙ a1∧a2∧a3) ∧ a4 − (α_4 ∙ a1∧a2∧a4) ∧ a3 +
(α_4 ∙ a1∧a3∧a4) ∧ a2 − (α_4 ∙ a2∧a3∧a4) ∧ a1 == 0

At the inner product level, some of the generalized products yield further relationships. For example, α_4 ∘_2 α_4 confirms the Zero Interior Sum theorem.


GrassmannSimplify[ToInnerProducts[α_4 ∘_2 α_4 == 0]]

(2 (a1∧a2 ∙ a3∧a4) − 2 (a1∧a3 ∙ a2∧a4) + 2 (a1∧a4 ∙ a2∧a3)) a1∧a2∧a3∧a4 == 0
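The scalar coefficient above is a zero interior sum, and this can be checked numerically for arbitrary vectors. The sketch below (my own Python, orthonormal Euclidean metric, multivectors as dicts) evaluates (a1∧a2 ∙ a3∧a4) − (a1∧a3 ∙ a2∧a4) + (a1∧a4 ∙ a2∧a3) for concrete vectors and confirms it vanishes.

```python
def merge_sign(I, J):
    if set(I) & set(J):
        return 0
    s = 1
    for x in I:
        s *= (-1) ** sum(1 for y in J if y < x)
    return s

def wedge(u, v):
    w = {}
    for I, x in u.items():
        for J, y in v.items():
            s = merge_sign(I, J)
            if s:
                K = tuple(sorted(I + J))
                w[K] = w.get(K, 0) + s * x * y
    return {k: c for k, c in w.items() if c}

def interior(u, v):
    w = {}
    for I, x in u.items():
        for J, y in v.items():
            if set(J) <= set(I):
                R = tuple(i for i in I if i not in J)
                w[R] = w.get(R, 0) + merge_sign(J, R) * x * y
    return {k: c for k, c in w.items() if c}

def ip22(p, q, r, t):
    # scalar inner product (p ^ q) . (r ^ t)
    return interior(wedge(p, q), wedge(r, t)).get((), 0)

a1, a2 = {(1,): 1, (2,): 2}, {(2,): 1, (3,): 1}
a3, a4 = {(1,): 3, (4,): 1}, {(3,): 1, (4,): 2}

s = ip22(a1, a2, a3, a4) - ip22(a1, a3, a2, a4) + ip22(a1, a4, a2, a3)
print(s)   # 0
```

This is the same quadratic identity among scalar products that makes each coefficient of the AB differences earlier in the chapter vanish.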

Products of non-simple elements


Suppose X_m is a general, not necessarily simple, m-element. We can then write it as a sum of simple terms:

X_m ≡ α_1 + α_2 + α_3 + ⋯    (each α_i simple of grade m)

From the axioms for the exterior product it is straightforward to show that X_m ∧ X_m ≡ (−1)^m X_m ∧ X_m. This

means, of course, that the exterior product of a general m-element with itself is zero if m is odd. In a similar manner, the formula for the quasi-commutativity of the generalized product is:
α_m ∘_λ β_k ≡ (−1)^{(m−λ)(k−λ)} β_k ∘_λ α_m

This shows that the generalized product of a general m-element with itself is:
X_m ∘_λ X_m ≡ (−1)^{(m−λ)} X_m ∘_λ X_m

Thus the generalization of the above result for the exterior product is that the generalized product of a general (not necessarily simple) m-element with itself is zero if m-l is odd, or alternatively, if just one of m and l is odd.

(α_1 + α_2 + α_3 + ⋯) ∘_λ (α_1 + α_2 + α_3 + ⋯) ≡ 0,    m − λ ∈ OddIntegers        (10.17)

(each α_i of grade m)
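This parity result can be checked numerically. The Python sketch below (my own, orthonormal Euclidean metric, multivectors as dicts) takes a non-simple 2-element X: at order 1 (m − λ = 1, odd) the generalized square vanishes, while at order 0 (m − λ = 2, even) it need not.

```python
from itertools import combinations

def merge_sign(I, J):
    if set(I) & set(J):
        return 0
    s = 1
    for x in I:
        s *= (-1) ** sum(1 for y in J if y < x)
    return s

def add(u, v):
    w = dict(u)
    for k, c in v.items():
        w[k] = w.get(k, 0) + c
    return {k: c for k, c in w.items() if c}

def wedge(u, v):
    w = {}
    for I, x in u.items():
        for J, y in v.items():
            s = merge_sign(I, J)
            if s:
                K = tuple(sorted(I + J))
                w[K] = w.get(K, 0) + s * x * y
    return {k: c for k, c in w.items() if c}

def interior(u, v):
    w = {}
    for I, x in u.items():
        for J, y in v.items():
            if set(J) <= set(I):
                R = tuple(i for i in I if i not in J)
                w[R] = w.get(R, 0) + merge_sign(J, R) * x * y
    return {k: c for k, c in w.items() if c}

def gp_A(u, v, lam):
    out = {}
    for K, y in v.items():
        for S in combinations(K, lam):
            R = tuple(i for i in K if i not in S)
            out = add(out, wedge(interior(u, {S: 1}), {R: merge_sign(S, R) * y}))
    return out

X = {(1, 2): 1, (1, 3): 1, (3, 4): 1}   # non-simple: X ^ X = 2 e1^e2^e3^e4
print(gp_A(X, X, 1))                    # {}   (m - lambda = 1, odd)
print(gp_A(X, X, 0))                    # {(1, 2, 3, 4): 2}  (even: non-zero)
```

Note that the order-1 terms cancel in pairs, exactly as the quasi-commutativity argument predicts.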

Orthogonal factors
The right hand side of formula 10.11 can be expanded using the Interior Common Factor Theorem (see Chapter 6, The Interior Product) below:
α_m ∙ β_k ≡ Σ_{i=1}^{ν} (α_i ∙ β_k) ᾱ_i,    ν = C(m, k)

α_m ≡ α_1∧ᾱ_1 + α_2∧ᾱ_2 + ⋯ + α_ν∧ᾱ_ν    (each α_i of grade k, each ᾱ_i of grade m−k)

To do this we put

α_m ≡ γ_λ ∧ A_a ∧ B_b,        β_k ≡ γ_λ

in this formula, allowing the expansion to begin to be written


(γ_λ∧A_a∧B_b) ∙ γ_λ ≡ (γ_λ ∙ γ_λ) A_a∧B_b + ⋯

Since γ_λ, A_a, and B_b are simple, they can be written as:

γ_λ ≡ γ_1∧γ_2∧⋯∧γ_λ        A_a ≡ A_1∧A_2∧⋯∧A_a        B_b ≡ B_1∧B_2∧⋯∧B_b

Each of the subsequent terms in the sum on the right hand side of this expansion will contain at least one of the A_i or B_j in an inner product. Thus, if γ_λ is totally orthogonal to both A_a and B_b, then these terms will be zero and the formula becomes simply:

(γ_λ∧A_a) ∘_λ (γ_λ∧B_b) ≡ (γ_λ∧A_a∧B_b) ∙ γ_λ ≡ (γ_λ ∙ γ_λ) A_a∧B_b,        (10.18)

γ_i ∙ A_j = 0,  γ_i ∙ B_j = 0
(A simple element is totally orthogonal to another simple element if each of its 1-element factors is orthogonal to each of the 1-element factors of the other element.)
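Formula 10.18 can be illustrated with orthonormal basis elements, which are automatically totally orthogonal. The Python sketch below (my own, Euclidean metric, multivectors as dicts) takes γ = e1∧e2, A = e3, B = e4 and confirms that the order-2 generalized product equals both (γ∧A∧B) ∙ γ and (γ ∙ γ) A∧B.

```python
from itertools import combinations

def merge_sign(I, J):
    if set(I) & set(J):
        return 0
    s = 1
    for x in I:
        s *= (-1) ** sum(1 for y in J if y < x)
    return s

def add(u, v):
    w = dict(u)
    for k, c in v.items():
        w[k] = w.get(k, 0) + c
    return {k: c for k, c in w.items() if c}

def wedge(u, v):
    w = {}
    for I, x in u.items():
        for J, y in v.items():
            s = merge_sign(I, J)
            if s:
                K = tuple(sorted(I + J))
                w[K] = w.get(K, 0) + s * x * y
    return {k: c for k, c in w.items() if c}

def interior(u, v):
    w = {}
    for I, x in u.items():
        for J, y in v.items():
            if set(J) <= set(I):
                R = tuple(i for i in I if i not in J)
                w[R] = w.get(R, 0) + merge_sign(J, R) * x * y
    return {k: c for k, c in w.items() if c}

def gp_A(u, v, lam):
    out = {}
    for K, y in v.items():
        for S in combinations(K, lam):
            R = tuple(i for i in K if i not in S)
            out = add(out, wedge(interior(u, {S: 1}), {R: merge_sign(S, R) * y}))
    return out

g = {(1, 2): 1}                 # gamma = e1^e2
A, B = {(3,): 1}, {(4,): 1}     # totally orthogonal to gamma

lhs = gp_A(wedge(g, A), wedge(g, B), 2)
rhs_interior = interior(wedge(wedge(g, A), B), g)   # (g^A^B).g
gg = interior(g, g).get((), 0)                      # g.g = 1
rhs_exterior = {k: gg * c for k, c in wedge(A, B).items()}
print(lhs, rhs_interior, rhs_exterior)              # all e3^e4
```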

Generalized products of basis elements


All of the basis elements of a Grassmann algebra have 1-elements in common with some other basis elements (except for the algebra of a 1-space). The formulas discussed above are part of the rule-base of GrassmannSimplify, and so we can see how they work by applying GrassmannSimplify to various combinations of basis elements. It is also often the case that we work with a Euclidean metric. The examples below tabulate the effect of the generalized product on the basis elements of a Grassmann algebra in both the general metric and the Euclidean metric cases. These results will be important when we discuss hypercomplex and Clifford algebras in later chapters.

The code for the examples


Here is the piece of code used to generate the examples in the next section. If you want to generate your own examples, you will need to click anywhere in the box, and then press the Enter (or Shift-Enter) key.


TabulateBasisProducts[n_][x_ ∘ y_] :=
  Module[{R, T},
    DeclareBasis[n];
    T = ComposeTable[{"Generalized Products of Basis Elements"},
      Table[{R = GrassmannSimplify[x ∘_λ y],
             ToMetricElements[ToScalarProducts[R]]}, {λ, 0, n}],
      Range[0, n], {"General Metric", "Euclidean Metric"}];
    T]

Examples of basis products in 3-space


Here are tabulations of the generalized products of all the basis elements of a Grassmann algebra of 3-space.
TabulateBasisProducts[3][e1∧e2∧e3 ∘ e1∧e2∧e3]

Generalized Products of Basis Elements
λ                  0    1    2    3
General Metric     0    0    0    e1∧e2∧e3 ∙ e1∧e2∧e3
Euclidean Metric   0    0    0    1

TabulateBasisProducts[3][e1∧e2∧e3 ∘ e1∧e2]

Generalized Products of Basis Elements
λ                  0    1    2                      3
General Metric     0    0    e1∧e2∧e3 ∙ e1∧e2       0
Euclidean Metric   0    0    e3                     0


TabulateBasisProducts[3][e1∧e2∧e3 ∘ e1]

Generalized Products of Basis Elements
λ                  0    1                   2    3
General Metric     0    e1∧e2∧e3 ∙ e1       0    0
Euclidean Metric   0    e2∧e3               0    0

TabulateBasisProducts[3][e1∧e2 ∘ e1∧e2]

Generalized Products of Basis Elements
λ                  0    1    2                  3
General Metric     0    0    e1∧e2 ∙ e1∧e2      0
Euclidean Metric   0    0    1                  0

TabulateBasisProducts[3][e1∧e2 ∘ e1∧e3]

Generalized Products of Basis Elements
λ                  0    1                   2                  3
General Metric     0    e1∧e2∧e3 ∙ e1       e1∧e2 ∙ e1∧e3      0
Euclidean Metric   0    e2∧e3               0                  0

TabulateBasisProducts[3][e1∧e2 ∘ e1]

Generalized Products of Basis Elements
λ                  0    1               2    3
General Metric     0    e1∧e2 ∙ e1      0    0
Euclidean Metric   0    e2              0    0


TabulateBasisProducts[3][e1∧e2 ∘ e3]

Generalized Products of Basis Elements
λ                  0           1               2    3
General Metric     e1∧e2∧e3    e1∧e2 ∙ e3      0    0
Euclidean Metric   e1∧e2∧e3    0               0    0

TabulateBasisProducts[3][e1 ∘ e1]

Generalized Products of Basis Elements
λ                  0    1          2    3
General Metric     0    e1 ∙ e1    0    0
Euclidean Metric   0    1          0    0

TabulateBasisProducts[3][e1 ∘ e2]

Generalized Products of Basis Elements
λ                  0        1          2    3
General Metric     e1∧e2    e1 ∙ e2    0    0
Euclidean Metric   e1∧e2    0          0    0
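The Euclidean rows of these tables can be regenerated with a short Python sketch (my own, not the book's package; orthonormal metric, multivectors as dicts from sorted index tuples to coefficients). Each row lists the generalized products of a pair of basis elements for λ = 0, 1, 2, 3.

```python
from itertools import combinations

def merge_sign(I, J):
    if set(I) & set(J):
        return 0
    s = 1
    for x in I:
        s *= (-1) ** sum(1 for y in J if y < x)
    return s

def add(u, v):
    w = dict(u)
    for k, c in v.items():
        w[k] = w.get(k, 0) + c
    return {k: c for k, c in w.items() if c}

def wedge(u, v):
    w = {}
    for I, x in u.items():
        for J, y in v.items():
            s = merge_sign(I, J)
            if s:
                K = tuple(sorted(I + J))
                w[K] = w.get(K, 0) + s * x * y
    return {k: c for k, c in w.items() if c}

def interior(u, v):
    w = {}
    for I, x in u.items():
        for J, y in v.items():
            if set(J) <= set(I):
                R = tuple(i for i in I if i not in J)
                w[R] = w.get(R, 0) + merge_sign(J, R) * x * y
    return {k: c for k, c in w.items() if c}

def gp_A(u, v, lam):
    out = {}
    for K, y in v.items():
        for S in combinations(K, lam):
            R = tuple(i for i in K if i not in S)
            out = add(out, wedge(interior(u, {S: 1}), {R: merge_sign(S, R) * y}))
    return out

def euclidean_row(x, y, n=3):
    return [gp_A(x, y, lam) for lam in range(n + 1)]

e123, e12, e13 = {(1, 2, 3): 1}, {(1, 2): 1}, {(1, 3): 1}
e1, e2 = {(1,): 1}, {(2,): 1}

for x, y in [(e123, e123), (e123, e12), (e12, e12), (e12, e13), (e1, e1), (e1, e2)]:
    print(euclidean_row(x, y))
```

The printed rows reproduce the Euclidean entries of the tables above (the empty dict standing for 0, {(): 1} for the scalar 1).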

Finding common factors


Formula 10.11 tells us that if we have two elements with a common factor, their generalized product of an order equal to the grade of the common factor should take the form of an interior product with the common factor appearing as the second factor.
(γ_λ∧A_a) ∘_λ (γ_λ∧B_b) ≡ (γ_λ∧A_a∧B_b) ∙ γ_λ


Grassmann Algebra Book.nb

575

This suggests a way in which we might be able to find the common factor of two elements. In Chapter 3: The Regressive Product, we developed the Common Factor Theorem based on the regressive product. Based on this theorem, we were able to derive an Interior Common Factor Theorem in Chapter 6. The formula defining the generalized product is a generalization of this theorem. Thus, it is not so surprising that the generalized product might have application to finding common factors.

In Chapter 3 we explored finding common factors without yet having introduced the notion of metric. The generalized product formula above, however, supposes that a metric has been introduced onto the underlying linear space. But although the interior product (γ_λ∧A_a∧B_b) ∙ γ_λ does depend on the metric, the factors γ_λ∧A_a∧B_b and γ_λ do not.

The formulas for finding common factors using the regressive product developed in Chapter 3 required that the sum of the grades of the elements which contained the common factors be greater than the dimension of the underlying linear space. The formula above has the advantage of being independent of the dimension of the space. A concomitant disadvantage, however, is that the product γ_λ∧A_a∧B_b may not be displayed in simple form, making it harder to extract the common factor γ_λ.

We note also that even though the formula seems to explicitly display the common factor γ_λ, this is only because the arguments γ_λ∧A_a and γ_λ∧B_b to the generalized product are displayed explicitly in an already factorized form. In fact we are faced with the same situation that occurs with the common factor calculation based on the regressive product: a common factor is determined only up to congruence. We can see this by noting that the formula shows that, for any scalar s, s γ_λ is also a common factor:

(γ_λ∧A_a) ∘_λ (γ_λ∧B_b) ≡ (1/s) (γ_λ∧A_a∧B_b) ∙ (s γ_λ)
Example
To begin, suppose we are working in a space of large enough dimension to require none of the products to be zero because their grade exceeded the dimension of the space. For the purpose of this example we set it (arbitrarily) to be 8.
DeclareBasis[8];

Suppose also that we have two elements of interest.


X = 3 e1∧e2∧e3 + 2 e1∧e2∧e5 − 14 e1∧e3∧e5 − 3 e2∧e3∧e4 + 2 e2∧e4∧e5 − 14 e3∧e4∧e5;
Y = 6 e1∧e2∧e3∧e4 − 30 e1∧e2∧e3∧e5 − 4 e1∧e2∧e4∧e5 − 15 e1∧e3∧e4∧e5 + 42 e2∧e3∧e4∧e5;
We check first to see if they are simple.


{SimpleQ[X], SimpleQ[Y]}
{True, True}

Next, we check to see if they have a common factor.


GrassmannSimplify[X ∧ Y]
0

With the knowledge that they are simple, and (because their exterior product is zero) have a (simple) common factor (of still unknown grade) we begin computing the progressively higher orders of generalized products. The first non-zero generalized product will be an expression from which we can extract the common factor. The order of this non-zero generalized product will give the grade of the common factor. The zero-order generalized product is just the exterior product that we computed above. Its returning 0 means that the common factor is of grade at least 1.
GrassmannSimplify[X ∘_0 Y]
0

The first-order generalized product also returns 0. This means that the common factor is of grade at least 2.
GrassmannSimplify[X ∘_1 Y]
0

The second-order generalized product does not return 0. This means that the common factor is of grade 2.
Z = GrassmannSimplify[X ∘_2 Y]

−129 (e1∧e2∧e3∧e4∧e5 ∙ e1∧e3) − 86 (e1∧e2∧e3∧e4∧e5 ∙ e1∧e5) +
36 (e1∧e2∧e3∧e4∧e5 ∙ e2∧e3) + 24 (e1∧e2∧e3∧e4∧e5 ∙ e2∧e5) −
129 (e1∧e2∧e3∧e4∧e5 ∙ e3∧e4) − 168 (e1∧e2∧e3∧e4∧e5 ∙ e3∧e5) +
86 (e1∧e2∧e3∧e4∧e5 ∙ e4∧e5)

Since we can only extract the common factor up to congruence, we can simply replace each term of the form e1∧e2∧e3∧e4∧e5 ∙ ei∧ej with ei∧ej.

Z1 = Z /. (e1∧e2∧e3∧e4∧e5 ∙ e_) → e

−129 e1∧e3 − 86 e1∧e5 + 36 e2∧e3 + 24 e2∧e5 − 129 e3∧e4 − 168 e3∧e5 + 86 e4∧e5

We can verify that Z1 is a common factor of each of X and Y by taking their generalized products of order 1 and seeing if they do indeed result in zero.


GrassmannSimplify[ToInnerProducts[GrassmannSimplify[{Z1 ∘_1 X, Z1 ∘_1 Y}]]]

{0, 0}

Since Z1 is simple, we can also factorize it.


Z2 = ExteriorFactorize[Z1]

−129 (e1 − (12 e2)/43 − e4 − (56 e5)/43) ∧ (e3 + (2 e5)/3)

Remember that an exterior factorization is never unique, reflecting the fact that any linear combination of these factors is a factor of both X and Y.

Z3 = a (e1 − (12 e2)/43 − e4 − (56 e5)/43) + b (e3 + (2 e5)/3);

GrassmannSimplify[{Z3 ∧ X, Z3 ∧ Y}]
{0, 0}
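The grade-detection step of this procedure can be checked numerically. The Python sketch below (my own, assuming an orthonormal Euclidean metric, multivectors as dicts from sorted index tuples to coefficients) takes the same X, Y and Z1 as above: the generalized products of orders 0 and 1 vanish, the order-2 product does not (so the common factor has grade 2), and the first-order products of Z1 with X and with Y both vanish, confirming Z1 as a common factor.

```python
from itertools import combinations

def merge_sign(I, J):
    if set(I) & set(J):
        return 0
    s = 1
    for x in I:
        s *= (-1) ** sum(1 for y in J if y < x)
    return s

def add(u, v):
    w = dict(u)
    for k, c in v.items():
        w[k] = w.get(k, 0) + c
    return {k: c for k, c in w.items() if c}

def wedge(u, v):
    w = {}
    for I, x in u.items():
        for J, y in v.items():
            s = merge_sign(I, J)
            if s:
                K = tuple(sorted(I + J))
                w[K] = w.get(K, 0) + s * x * y
    return {k: c for k, c in w.items() if c}

def interior(u, v):
    w = {}
    for I, x in u.items():
        for J, y in v.items():
            if set(J) <= set(I):
                R = tuple(i for i in I if i not in J)
                w[R] = w.get(R, 0) + merge_sign(J, R) * x * y
    return {k: c for k, c in w.items() if c}

def gp_A(u, v, lam):
    out = {}
    for K, y in v.items():
        for S in combinations(K, lam):
            R = tuple(i for i in K if i not in S)
            out = add(out, wedge(interior(u, {S: 1}), {R: merge_sign(S, R) * y}))
    return out

X = {(1, 2, 3): 3, (1, 2, 5): 2, (1, 3, 5): -14, (2, 3, 4): -3, (2, 4, 5): 2, (3, 4, 5): -14}
Y = {(1, 2, 3, 4): 6, (1, 2, 3, 5): -30, (1, 2, 4, 5): -4, (1, 3, 4, 5): -15, (2, 3, 4, 5): 42}

orders = [lam for lam in range(4) if gp_A(X, Y, lam) != {}]
print(orders[0])   # 2: the common factor has grade 2

Z1 = {(1, 3): -129, (1, 5): -86, (2, 3): 36, (2, 5): 24,
      (3, 4): -129, (3, 5): -168, (4, 5): 86}
print(gp_A(Z1, X, 1), gp_A(Z1, Y, 1))   # {} {}
```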

10.7 Properties of the Generalized Product


Summary of properties
The following simple results may be obtained from the definition and results derived above.

The generalized product is distributive with respect to addition

(α_m + β_k) ∘_λ γ_p ≡ α_m ∘_λ γ_p + β_k ∘_λ γ_p        (10.19)

γ_p ∘_λ (α_m + β_k) ≡ γ_p ∘_λ α_m + γ_p ∘_λ β_k        (10.20)

The generalized product with a zero element is zero

0 ∘_λ α_m ≡ α_m ∘_λ 0 ≡ 0        (10.21)


The generalized product with a scalar reduces to the field product

The only non-zero generalized product with a scalar is of order zero, leading to the ordinary field product.

a ∘_0 α_m ≡ α_m ∘_0 a ≡ a α_m        (10.22)

a ∘_0 b ≡ a b ≡ a ∧ b ≡ a ∙ b        (10.23)

The generalized products of a scalar with an m-element are zero

For λ greater than zero:

a ∘_λ α_m ≡ α_m ∘_λ a ≡ 0,    λ > 0        (10.24)

Scalars may be factored out of generalized products

(a α_m) ∘_λ β_k ≡ α_m ∘_λ (a β_k) ≡ a (α_m ∘_λ β_k)        (10.25)

The generalized product is quasi-commutative

α_m ∘_λ β_k ≡ (−1)^{(m−λ)(k−λ)} β_k ∘_λ α_m        (10.26)

This relationship was proven in Section 10.3.

The generalized product of congruent elements is their interior product, or zero


The generalized product of congruent elements of grade m is equal to their interior product whenever the order l of the generalized product is equal to the grade of the elements, and is zero otherwise.

α_m ∘_λ β_m ≡ α_m ∙ β_m,  λ = m;        α_m ∘_λ β_m ≡ 0,  λ ≠ m        (10.27)


The generalized product containing a common factor is zero when l < m


The generalized product of elements containing a common factor is zero whenever the order of the generalized product l is less than the grade of the common factor m.

(γ_m∧A_a) ∘_λ (γ_m∧B_b) ≡ 0,    λ < m        (10.28)

The generalized product containing a common factor reduces to an interior product when l = m
The generalized product of elements containing a common factor reduces to an interior product whenever the order of the generalized product l is equal to the grade of the common factor.

(γ_λ∧A_a) ∘_λ (γ_λ∧B_b) ≡ (γ_λ∧A_a∧B_b) ∙ γ_λ        (10.29)

The generalized square of a general (not necessarily simple) element is zero if m-l is odd.
The generalized product of a general (not necessarily simple) m-element with itself is zero if m-l is odd.

(α_1 + α_2 + α_3 + ⋯) ∘_λ (α_1 + α_2 + α_3 + ⋯) ≡ 0,    m − λ ∈ OddIntegers        (10.30)

(each α_i of grade m)

The generalized product reduces to an exterior product when the common factor is totally orthogonal to the remaining factors

The generalized product of elements containing a common factor reduces to an exterior product whenever the common factor is totally orthogonal to the other elements.

(γ_λ∧A_a) ∘_λ (γ_λ∧B_b) ≡ (γ_λ∧A_a∧B_b) ∙ γ_λ ≡ (γ_λ ∙ γ_λ) A_a∧B_b,        (10.31)

γ_i ∙ A_j = 0,  γ_i ∙ B_j = 0


10.8 The Meaning of the Generalized Product


What does a generalized product represent?
In Chapters 1 and 2 we saw that the exterior product was intimately bound up with the notion of "linear dependence". In Chapter 6, we saw that the interior product generalized the notion of "orthogonality" to elements of arbitrary grade. The generalized products of varying orders are defined in terms of both the exterior and interior products, forming a suite of products (roughly speaking) which transition from the exterior to the interior. The question then arises, how might we interpret these generalized products in terms of the linear dependence and orthogonality of their factors. To explore this we start with the symmetric form of the generalized product:
α_m ∘_λ β_k ≡ Σ_{i=1}^{C(m,λ)} Σ_{j=1}^{C(k,λ)} (α_i ∙ β_j) ᾱ_i ∧ β̄_j,    0 < λ < Min[m, k]

α_m ≡ α_1∧ᾱ_1 + α_2∧ᾱ_2 + ⋯,        β_k ≡ β_1∧β̄_1 + β_2∧β̄_2 + ⋯

(each α_i and β_j of grade λ; each ᾱ_i of grade m−λ, each β̄_j of grade k−λ)

For simplicity, and without loss of generality (given the quasi-commutativity of the generalized product), we can assume k is less than or equal to m, so that l is then less than k. We begin by looking at the simplest case, that in which the order l is equal to 1. The formula above then becomes:
α_m ⊙1 β_k == Σ_{i=1}^{m} Σ_{j=1}^{k} (αi ∘ βj) αi'_(m−1) ∧ βj'_(k−1)

α_m == α1 ∧ α1'_(m−1) + α2 ∧ α2'_(m−1) + …
β_k == β1 ∧ β1'_(k−1) + β2 ∧ β2'_(k−1) + …

To get a clearer picture of the form of this sum, we can take a specific example, say for m equal to 3 and k equal to 2:
α_3 ⊙1 β_2 == Σ_{i=1}^{3} Σ_{j=1}^{2} (αi ∘ βj) αi' ∧ βj'

α_3 == α1 ∧ (α2∧α3) + α2 ∧ (−(α1∧α3)) + α3 ∧ (α1∧α2)
β_2 == β1 ∧ (β2) + β2 ∧ (−β1)

G = ToScalarProducts[α_3 ⊙1 β_2]

−(α3∘β2) α1∧α2∧β1 + (α3∘β1) α1∧α2∧β2 + (α2∘β2) α1∧α3∧β1 −
(α2∘β1) α1∧α3∧β2 − (α1∘β2) α2∧α3∧β1 + (α1∘β1) α2∧α3∧β2


Thus we can see that the generalized product is a linear combination of terms, each of which is an exterior product with a scalar product as coefficient. The distinguishing feature of this linear combination is the way in which it combines all the essentially different combinations of 1-element factors of the original elements. Before we proceed further, we will explore ways in which we can visualize the essential components that make up this type of expression.
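As a cross-check outside Mathematica, this structure can be reproduced numerically. The following Python sketch is our own construction for this illustration (the representation and the names wedge, vec, cofactor and gp1 are not GrassmannAlgebra functions): it builds the order-1 generalized product of a 3-element with a 2-element from spines and signed cospines, exactly as in the formula above.

```python
# A numeric sketch of the order-1 generalized product a1^a2^a3 (order 1) b1^b2.
# Multivectors are dicts mapping sorted tuples of basis indices to coefficients.

DIM = 5  # dimension of the underlying space; any DIM >= 5 will do

def wedge(x, y):
    """Exterior product of two multivectors."""
    out = {}
    for ix, cx in x.items():
        for iy, cy in y.items():
            idx = list(ix + iy)
            if len(set(idx)) < len(idx):
                continue  # a repeated 1-element gives zero
            sign = 1  # parity of the permutation that sorts the indices
            for p in range(len(idx)):
                for q in range(len(idx) - 1 - p):
                    if idx[q] > idx[q + 1]:
                        idx[q], idx[q + 1] = idx[q + 1], idx[q]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0.0) + sign * cx * cy
    return out

def vec(coords):
    """A 1-element from a coordinate list."""
    return {(i,): c for i, c in enumerate(coords) if c}

def dot(u, v):
    """Scalar product of two 1-elements (Euclidean metric assumed)."""
    return sum(a * b for a, b in zip(u, v))

def cofactor(vecs, i):
    """Cospine entry: signed exterior product of all factors except the i-th."""
    out = {(): (-1.0) ** i}
    for r, v in enumerate(vecs):
        if r != i:
            out = wedge(out, vec(v))
    return out

def gp1(avecs, bvecs):
    """Order-1 generalized product of a1^...^am with b1^...^bk."""
    total = {}
    for i, ai in enumerate(avecs):
        for j, bj in enumerate(bvecs):
            term = wedge(cofactor(avecs, i), cofactor(bvecs, j))
            s = dot(ai, bj)
            for key, c in term.items():
                total[key] = total.get(key, 0.0) + s * c
    return total
```

For example, with a1 = e1, a2 = e2, a3 = e3 and b1 = e1 + e4, b2 = e5, only the scalar product a1∘b1 survives, and the product reduces to (a1∘b1) a2∧a3∧b2; substituting b1 = a1, b2 = a2 instead gives zero, since the factors then share a common factor of grade 2, which exceeds the order 1.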

Visualizing generalized products


Key to visualizing the generalized product is to show how it may be built up from four lists: the αi_l, the βj_l, the αi'_(m−l), and the βj'_(k−l). In GrassmannAlgebra we can compose these lists by using the functions Spine and Cospine.

As a first example, we will take m equal to 3, k equal to 2, and l equal to 1, as above. Composing the lists and displaying them in MatrixForm for easier visualization gives

A = List /@ Spine[α_3, 1] // MatrixForm;
Ac = List /@ Cospine[α_3, 1] // MatrixForm;
{A, Ac}

{( α1 )   ( α2∧α3    )
 ( α2 ),  ( −(α1∧α3) )
 ( α3 )   ( α1∧α2    )}

B = {Spine[β_2, 1]} // MatrixForm; Bc = {Cospine[β_2, 1]} // MatrixForm;
{B, Bc}

{( β1  β2 ), ( β2  −β1 )}

We can compose a matrix of the interior products

( α1 )
( α2 ) ∘ ( β1  β2 )
( α3 )

and multiply it out (using the interior product as the product operation) by applying the GrassmannAlgebra function MatrixProduct (or its alias). (Note that MatrixProduct removes any MatrixForm wrappers.)


Mi = MatrixProduct[A ∘ B] // MatrixForm

( α1∘β1  α1∘β2 )
( α2∘β1  α2∘β2 )
( α3∘β1  α3∘β2 )

Similarly we can compose a matrix of the exterior products

( α2∧α3    )
( −(α1∧α3) ) ∧ ( β2  −β1 )
( α1∧α2    )

and multiply it out (using the exterior product as the product operation). We won't simplify the double negative signs, in order to retain the correspondence with the previous expressions.

Me = MatrixProduct[Ac ∧ Bc] // MatrixForm

( α2∧α3∧β2      α2∧α3∧(−β1)     )
( −(α1∧α3)∧β2   −(α1∧α3)∧(−β1)  )
( α1∧α2∧β2      α1∧α2∧(−β1)     )

Finally, we can put these two matrices side by side to show the interior and exterior components of the final expression.

Mi Me

( α1∘β1  α1∘β2 )   ( α2∧α3∧β2      α2∧α3∧(−β1)     )
( α2∘β1  α2∘β2 )   ( −(α1∧α3)∧β2   −(α1∧α3)∧(−β1)  )
( α3∘β1  α3∘β2 )   ( α1∧α2∧β2      α1∧α2∧(−β1)     )

The generalized product is now the sum of the element-by-element products (not the matrix product) of these matrices.

GrassmannSimplify[Plus @@ Flatten[Mi Me /. MatrixForm → Identity]]

−(α3∘β2) α1∧α2∧β1 + (α3∘β1) α1∧α2∧β2 + (α2∘β2) α1∧α3∧β1 −
(α2∘β1) α1∧α3∧β2 − (α1∘β2) α2∧α3∧β1 + (α1∘β1) α2∧α3∧β2

Visualization examples
We can collect the operations of the previous section into a simple function to easily visualize any generalized product.


VizualizeGeneralizedProduct[X_ ⊙l_ Y_] :=
  Module[{A, Ac, B, Bc, Mi, Me},
    A = List /@ Spine[X, l]; Ac = List /@ Cospine[X, l];
    B = {Spine[Y, l]}; Bc = {Cospine[Y, l]};
    Mi = MatrixProduct[A ∘ B]; Me = MatrixProduct[Ac ∧ Bc];
    MatrixForm[Mi] * MatrixForm[Me]];

Example 1
In this first example we visualize all the non-zero generalized products of a 3-element and a 2-element.

VizualizeGeneralizedProduct[α_3 ⊙0 β_2]

( 1∘1 ) ( α1∧α2∧α3∧β1∧β2 )

VizualizeGeneralizedProduct[α_3 ⊙1 β_2]

( α1∘β1  α1∘β2 )   ( α2∧α3∧β2      α2∧α3∧(−β1)     )
( α2∘β1  α2∘β2 )   ( −(α1∧α3)∧β2   −(α1∧α3)∧(−β1)  )
( α3∘β1  α3∘β2 )   ( α1∧α2∧β2      α1∧α2∧(−β1)     )

VizualizeGeneralizedProduct[α_3 ⊙2 β_2]

( α1∧α2 ∘ β1∧β2 )   ( α3∧1  )
( α1∧α3 ∘ β1∧β2 )   ( −α2∧1 )
( α2∧α3 ∘ β1∧β2 )   ( α1∧1  )

Example 2
In this example we visualize some non-zero generalized products of a 4-element and a 3-element.

VizualizeGeneralizedProduct[α_4 ⊙1 β_3]

( α1∘β1  α1∘β2  α1∘β3 )
( α2∘β1  α2∘β2  α2∘β3 )
( α3∘β1  α3∘β2  α3∘β3 )
( α4∘β1  α4∘β2  α4∘β3 )

( α2∧α3∧α4∧(β2∧β3)      α2∧α3∧α4∧(−(β1∧β3))      α2∧α3∧α4∧(β1∧β2)    )
( −(α1∧α3∧α4)∧(β2∧β3)   −(α1∧α3∧α4)∧(−(β1∧β3))   −(α1∧α3∧α4)∧(β1∧β2) )
( α1∧α2∧α4∧(β2∧β3)      α1∧α2∧α4∧(−(β1∧β3))      α1∧α2∧α4∧(β1∧β2)    )
( −(α1∧α2∧α3)∧(β2∧β3)   −(α1∧α2∧α3)∧(−(β1∧β3))   −(α1∧α2∧α3)∧(β1∧β2) )


VizualizeGeneralizedProduct[α_4 ⊙2 β_3]

( α1∧α2 ∘ β1∧β2   α1∧α2 ∘ β1∧β3   α1∧α2 ∘ β2∧β3 )
( α1∧α3 ∘ β1∧β2   α1∧α3 ∘ β1∧β3   α1∧α3 ∘ β2∧β3 )
( α1∧α4 ∘ β1∧β2   α1∧α4 ∘ β1∧β3   α1∧α4 ∘ β2∧β3 )
( α2∧α3 ∘ β1∧β2   α2∧α3 ∘ β1∧β3   α2∧α3 ∘ β2∧β3 )
( α2∧α4 ∘ β1∧β2   α2∧α4 ∘ β1∧β3   α2∧α4 ∘ β2∧β3 )
( α3∧α4 ∘ β1∧β2   α3∧α4 ∘ β1∧β3   α3∧α4 ∘ β2∧β3 )

( α3∧α4∧β3      α3∧α4∧(−β2)      α3∧α4∧β1    )
( −(α2∧α4)∧β3   −(α2∧α4)∧(−β2)   −(α2∧α4)∧β1 )
( α2∧α3∧β3      α2∧α3∧(−β2)      α2∧α3∧β1    )
( α1∧α4∧β3      α1∧α4∧(−β2)      α1∧α4∧β1    )
( −(α1∧α3)∧β3   −(α1∧α3)∧(−β2)   −(α1∧α3)∧β1 )
( α1∧α2∧β3      α1∧α2∧(−β2)      α1∧α2∧β1    )

VizualizeGeneralizedProduct[α_4 ⊙3 β_3]

( α1∧α2∧α3 ∘ β1∧β2∧β3 )   ( α4∧1  )
( α1∧α2∧α4 ∘ β1∧β2∧β3 )   ( −α3∧1 )
( α1∧α3∧α4 ∘ β1∧β2∧β3 )   ( α2∧1  )
( α2∧α3∧α4 ∘ β1∧β2∧β3 )   ( −α1∧1 )

The components
Now that we are able to visualize the generalized product in terms of its exterior and interior product components, it becomes easier to see the influence on the product result of its factors. In the examples above we expressed the generalized product as the sum of the element-by-element (ordinary) products of an interior product matrix, and an exterior product matrix.
VizualizeGeneralizedProduct[α_3 ⊙1 β_2]

( α1∘β1  α1∘β2 )   ( α2∧α3∧β2      α2∧α3∧(−β1)     )
( α2∘β1  α2∘β2 )   ( −(α1∧α3)∧β2   −(α1∧α3)∧(−β1)  )
( α3∘β1  α3∘β2 )   ( α1∧α2∧β2      α1∧α2∧(−β1)     )

The interior products

The interior products of this generalized product of order 1 comprise all the essentially different inner products between a 1-element of α_3 and a 1-element of β_2.

( α1∘β1  α1∘β2 )
( α2∘β1  α2∘β2 )
( α3∘β1  α3∘β2 )

Consider now the inner (scalar, in this case) product of a general element belonging to α_3 with a general element belonging to β_2. Expanding and simplifying the product gives

GrassmannSimplify[(a α1 + b α2 + c α3) ∘ (d β1 + e β2)]

a d (α1∘β1) + a e (α1∘β2) + b d (α2∘β1) + b e (α2∘β2) + c d (α3∘β1) + c e (α3∘β2)

We can see from this that β_2 is totally orthogonal to α_3 if and only if every one of the 6 scalar products is zero.

The exterior products

Similarly we can consider the exterior products

( α2∧α3∧β2      α2∧α3∧(−β1)     )
( −(α1∧α3)∧β2   −(α1∧α3)∧(−β1)  )
( α1∧α2∧β2      α1∧α2∧(−β1)     )

The exterior product of a general element belonging to α_3 with a general element belonging to β_2 is:

GrassmannSimplify[(a α1 + b α2 + c α3) ∧ (d β1 + e β2)]

a d α1∧β1 + a e α1∧β2 + b d α2∧β1 + b e α2∧β2 + c d α3∧β1 + c e α3∧β2

Thus, if each of these exterior products is zero

Common elements
The generalized product is

G = ToScalarProducts[α_3 ⊙1 β_2]

−(α3∘β2) α1∧α2∧β1 + (α3∘β1) α1∧α2∧β2 + (α2∘β2) α1∧α3∧β1 −
(α2∘β1) α1∧α3∧β2 − (α1∘β2) α2∧α3∧β1 + (α1∘β1) α2∧α3∧β2

G1 = GrassmannSimplify[G /. β1 → α1]

−(α1∘β2) α1∧α2∧α3 + (α1∘α3) α1∧α2∧β2 −
(α1∘α2) α1∧α3∧β2 + (α1∘α1) α2∧α3∧β2

G2 = GrassmannSimplify[G1 /. β2 → α2]

0


!8; H = GrassmannSimplify[ToInnerProducts[α_4 ⊙1 β_3]]

−(α4∘β3) α1∧α2∧α3∧β1∧β2 + (α4∘β2) α1∧α2∧α3∧β1∧β3 −
(α4∘β1) α1∧α2∧α3∧β2∧β3 + (α3∘β3) α1∧α2∧α4∧β1∧β2 −
(α3∘β2) α1∧α2∧α4∧β1∧β3 + (α3∘β1) α1∧α2∧α4∧β2∧β3 −
(α2∘β3) α1∧α3∧α4∧β1∧β2 + (α2∘β2) α1∧α3∧α4∧β1∧β3 −
(α2∘β1) α1∧α3∧α4∧β2∧β3 + (α1∘β3) α2∧α3∧α4∧β1∧β2 −
(α1∘β2) α2∧α3∧α4∧β1∧β3 + (α1∘β1) α2∧α3∧α4∧β2∧β3

H1 = GrassmannSimplify[H /. β1 → α1]

−(α1∘β3) α1∧α2∧α3∧α4∧β2 + (α1∘β2) α1∧α2∧α3∧α4∧β3 −
(α1∘α4) α1∧α2∧α3∧β2∧β3 + (α1∘α3) α1∧α2∧α4∧β2∧β3 −
(α1∘α2) α1∧α3∧α4∧β2∧β3 + (α1∘α1) α2∧α3∧α4∧β2∧β3

H2 = GrassmannSimplify[H1 /. β2 → α2]

0

General common elements

The generalized product is

G = ToScalarProducts[α_3 ⊙1 β_2]

−(α3∘β2) α1∧α2∧β1 + (α3∘β1) α1∧α2∧β2 + (α2∘β2) α1∧α3∧β1 −
(α2∘β1) α1∧α3∧β2 − (α1∘β2) α2∧α3∧β1 + (α1∘β1) α2∧α3∧β2

G1 = GrassmannSimplify[G /. β1 → (a α1 + b α2 + c α3)]

(−a (α1∘β2) − b (α2∘β2) − c (α3∘β2)) α1∧α2∧α3 +
(a (α1∘α3) + b (α2∘α3) + c (α3∘α3)) α1∧α2∧β2 +
(−a (α1∘α2) − b (α2∘α2) − c (α2∘α3)) α1∧α3∧β2 +
(a (α1∘α1) + b (α1∘α2) + c (α1∘α3)) α2∧α3∧β2

G2 = GrassmannSimplify[G1 /. β2 → (d α1 + e α2 + f α3)]

0


A larger expression
!8; H = GrassmannSimplify[ToInnerProducts[α_4 ⊙1 β_3]]

−(α4∘β3) α1∧α2∧α3∧β1∧β2 + (α4∘β2) α1∧α2∧α3∧β1∧β3 −
(α4∘β1) α1∧α2∧α3∧β2∧β3 + (α3∘β3) α1∧α2∧α4∧β1∧β2 −
(α3∘β2) α1∧α2∧α4∧β1∧β3 + (α3∘β1) α1∧α2∧α4∧β2∧β3 −
(α2∘β3) α1∧α3∧α4∧β1∧β2 + (α2∘β2) α1∧α3∧α4∧β1∧β3 −
(α2∘β1) α1∧α3∧α4∧β2∧β3 + (α1∘β3) α2∧α3∧α4∧β1∧β2 −
(α1∘β2) α2∧α3∧α4∧β1∧β3 + (α1∘β1) α2∧α3∧α4∧β2∧β3

H1 = GrassmannSimplify[H /. β3 → z /. α4 → z]

(z∘β2) z∧α1∧α2∧α3∧β1 − (z∘β1) z∧α1∧α2∧α3∧β2 + (z∘α3) z∧α1∧α2∧β1∧β2 −
(z∘α2) z∧α1∧α3∧β1∧β2 + (z∘α1) z∧α2∧α3∧β1∧β2 − (z∘z) α1∧α2∧α3∧β1∧β2

If there is one element in common, orthogonal to the others, then

(α1∧α2∧α3∧z) ⊙1 (β1∧β2∧z) == −(z∘z) α1∧α2∧α3∧β1∧β2

H2 = GrassmannSimplify[H1 /. β2 → y /. α2 → y]

0

More completely

H3 = GrassmannSimplify[H /. β1 → (a α1 + b α2 + c α3 + d α4)]

(−a (α1∘β3) − b (α2∘β3) − c (α3∘β3) − d (α4∘β3)) α1∧α2∧α3∧α4∧β2 +
(a (α1∘β2) + b (α2∘β2) + c (α3∘β2) + d (α4∘β2)) α1∧α2∧α3∧α4∧β3 +
(−a (α1∘α4) − b (α2∘α4) − c (α3∘α4) − d (α4∘α4)) α1∧α2∧α3∧β2∧β3 +
(a (α1∘α3) + b (α2∘α3) + c (α3∘α3) + d (α3∘α4)) α1∧α2∧α4∧β2∧β3 +
(−a (α1∘α2) − b (α2∘α2) − c (α2∘α3) − d (α2∘α4)) α1∧α3∧α4∧β2∧β3 +
(a (α1∘α1) + b (α1∘α2) + c (α1∘α3) + d (α1∘α4)) α2∧α3∧α4∧β2∧β3

H4 = GrassmannSimplify[H3 /. β2 → (e α1 + f α2 + g α3 + h α4)]

0

General common elements in the matrix pair expression

( α2∧α3∧β2      α2∧α3∧(−β1)     )
( −(α1∧α3)∧β2   −(α1∧α3)∧(−β1)  )
( α1∧α2∧β2      α1∧α2∧(−β1)     )

{{α2∧α3∧β2, α2∧α3∧(−β1)}, {−(α1∧α3)∧β2, −(α1∧α3)∧(−β1)}, {α1∧α2∧β2, α1∧α2∧(−β1)}}

Mi1 = MatrixForm[{{α1∘β1, α1∘β2}, {α2∘β1, α2∘β2}, {α3∘β1, α3∘β2}}] /.
  β1 → (a α1 + b α2 + c α3) /. β2 → (d α1 + e α2 + f α3)

( α1 ∘ (a α1 + b α2 + c α3)   α1 ∘ (d α1 + e α2 + f α3) )
( α2 ∘ (a α1 + b α2 + c α3)   α2 ∘ (d α1 + e α2 + f α3) )
( α3 ∘ (a α1 + b α2 + c α3)   α3 ∘ (d α1 + e α2 + f α3) )

Me1 = MatrixForm[{{α2∧α3∧β2, α2∧α3∧(−β1)}, {−(α1∧α3)∧β2, −(α1∧α3)∧(−β1)},
    {α1∧α2∧β2, α1∧α2∧(−β1)}}] /.
  β1 → (a α1 + b α2 + c α3) /. β2 → (d α1 + e α2 + f α3)

( α2∧α3∧(d α1 + e α2 + f α3)      α2∧α3∧(−a α1 − b α2 − c α3)    )
( −(α1∧α3)∧(d α1 + e α2 + f α3)   −(α1∧α3)∧(−a α1 − b α2 − c α3) )
( α1∧α2∧(d α1 + e α2 + f α3)      α1∧α2∧(−a α1 − b α2 − c α3)    )

Me2 = GrassmannSimplify[Me1]

( d α1∧α2∧α3   −a α1∧α2∧α3 )
( e α1∧α2∧α3   −b α1∧α2∧α3 )
( f α1∧α2∧α3   −c α1∧α2∧α3 )

This reduces the matrix pair to

( α1 ∘ d (a α1 + b α2 + c α3)   α1 ∘ (−a)(d α1 + e α2 + f α3) )
( α2 ∘ e (a α1 + b α2 + c α3)   α2 ∘ (−b)(d α1 + e α2 + f α3) )
( α3 ∘ f (a α1 + b α2 + c α3)   α3 ∘ (−c)(d α1 + e α2 + f α3) )

It can be seen that for every scalar product there is a corresponding negative version which will cancel it when the terms are summed. Hence, if a generalized product of order 1 has either
1. factors with at least two 1-elements in common, or
2. factors which are totally orthogonal,
then it is zero. Conversely, if a generalized product of order 1 is zero, then it may have
1. factors with at least two 1-elements in common, or
2. factors which are totally orthogonal.
These two cases are disjoint: factors cannot both be totally orthogonal and have elements in common, for elements in common will give rise to scalar products of the form

α1 ∘ α1

For suppose the simplest case


For 1 element orthogonal, and 1 common

G = ToScalarProducts[α_3 ⊙1 β_2]

−(α3∘β2) α1∧α2∧β1 + (α3∘β1) α1∧α2∧β2 + (α2∘β2) α1∧α3∧β1 −
(α2∘β1) α1∧α3∧β2 − (α1∘β2) α2∧α3∧β1 + (α1∘β1) α2∧α3∧β2

If β2 is orthogonal to the αi

Go = G /. {α1∘β2 → 0, α2∘β2 → 0, α3∘β2 → 0}

(α3∘β1) α1∧α2∧β2 − (α2∘β1) α1∧α3∧β2 + (α1∘β1) α2∧α3∧β2

which is ((α1∧α2∧α3) ∘ β1) ∧ β2.

A larger expression
!8; J = GrassmannSimplify[ToInnerProducts[α_4 ⊙2 β_3]]

(α3∧α4 ∘ β2∧β3) α1∧α2∧β1 − (α3∧α4 ∘ β1∧β3) α1∧α2∧β2 +
(α3∧α4 ∘ β1∧β2) α1∧α2∧β3 − (α2∧α4 ∘ β2∧β3) α1∧α3∧β1 +
(α2∧α4 ∘ β1∧β3) α1∧α3∧β2 − (α2∧α4 ∘ β1∧β2) α1∧α3∧β3 +
(α2∧α3 ∘ β2∧β3) α1∧α4∧β1 − (α2∧α3 ∘ β1∧β3) α1∧α4∧β2 +
(α2∧α3 ∘ β1∧β2) α1∧α4∧β3 + (α1∧α4 ∘ β2∧β3) α2∧α3∧β1 −
(α1∧α4 ∘ β1∧β3) α2∧α3∧β2 + (α1∧α4 ∘ β1∧β2) α2∧α3∧β3 −
(α1∧α3 ∘ β2∧β3) α2∧α4∧β1 + (α1∧α3 ∘ β1∧β3) α2∧α4∧β2 −
(α1∧α3 ∘ β1∧β2) α2∧α4∧β3 + (α1∧α2 ∘ β2∧β3) α3∧α4∧β1 −
(α1∧α2 ∘ β1∧β3) α3∧α4∧β2 + (α1∧α2 ∘ β1∧β2) α3∧α4∧β3

J1 = GrassmannSimplify[J /. β3 → z /. α4 → z]

−(z∧α3 ∘ β1∧β2) z∧α1∧α2 + (z∧α2 ∘ β1∧β2) z∧α1∧α3 +
(z∧β2 ∘ α2∧α3) z∧α1∧β1 − (z∧β1 ∘ α2∧α3) z∧α1∧β2 −
(z∧α1 ∘ β1∧β2) z∧α2∧α3 − (z∧β2 ∘ α1∧α3) z∧α2∧β1 +
(z∧β1 ∘ α1∧α3) z∧α2∧β2 + (z∧β2 ∘ α1∧α2) z∧α3∧β1 −
(z∧β1 ∘ α1∧α2) z∧α3∧β2 + (z∧α3 ∘ z∧β2) α1∧α2∧β1 −
(z∧α3 ∘ z∧β1) α1∧α2∧β2 − (z∧α2 ∘ z∧β2) α1∧α3∧β1 +
(z∧α2 ∘ z∧β1) α1∧α3∧β2 + (z∧α1 ∘ z∧β2) α2∧α3∧β1 −
(z∧α1 ∘ z∧β1) α2∧α3∧β2

J2 = GrassmannSimplify[J1 /. β2 → x /. α3 → x]

(x∧α2 ∘ z∧β1 − x∧β1 ∘ z∧α2) x∧z∧α1 +
(−(x∧α1 ∘ z∧β1) + x∧β1 ∘ z∧α1) x∧z∧α2 +
(x∧z ∘ α1∧α2) x∧z∧β1 + (x∧z ∘ z∧β1) x∧α1∧α2 −
(x∧z ∘ z∧α2) x∧α1∧β1 + (x∧z ∘ z∧α1) x∧α2∧β1 −
(x∧z ∘ x∧β1) z∧α1∧α2 + (x∧z ∘ x∧α2) z∧α1∧β1 −
(x∧z ∘ x∧α1) z∧α2∧β1 + (x∧z ∘ x∧z) α1∧α2∧β1

GrassmannSimplify[ToScalarProducts[J2]]

(−(x∘β1)(z∘α2) + (x∘α2)(z∘β1)) x∧z∧α1 +
((x∘β1)(z∘α1) − (x∘α1)(z∘β1)) x∧z∧α2 +
(−(x∘α2)(z∘α1) + (x∘α1)(z∘α2)) x∧z∧β1 +
(−(x∘β1)(z∘z) + (x∘z)(z∘β1)) x∧α1∧α2 +
((x∘α2)(z∘z) − (x∘z)(z∘α2)) x∧α1∧β1 +
(−(x∘α1)(z∘z) + (x∘z)(z∘α1)) x∧α2∧β1 +
((x∘z)(x∘β1) − (x∘x)(z∘β1)) z∧α1∧α2 +
(−(x∘z)(x∘α2) + (x∘x)(z∘α2)) z∧α1∧β1 +
((x∘z)(x∘α1) − (x∘x)(z∘α1)) z∧α2∧β1 +
(−(x∘z)^2 + (x∘x)(z∘z)) α1∧α2∧β1

J3 = GrassmannSimplify[J2 /. β1 → y /. α2 → y]

(−(x∧y ∘ z∧α1) + x∧z ∘ y∧α1 − x∧α1 ∘ y∧z) x∧y∧z

GrassmannSimplify[ToScalarProducts[J3]]

0

(α1∧y∧x∧z) ⊙2 (y∧x∧z) == 0

Bringing the formulas together

Common elements 2

For common elements, the generalized products cease being zero when the order of the product equals or exceeds the grade of the common factor.


(α_m ∧ γ_p) ⊙l (β_k ∧ γ_p) == 0,   l < p

(α_m ∧ γ_p) ⊙l (β_k ∧ γ_p) == (α_m ∧ β_k ∧ γ_p) ∘ γ_p,   l == p

(α_m ∧ γ_p) ⊙l (β_k ∧ γ_p) ≠ 0 (in general),   l > p    (10.32)

(γ_l ∧ α_a) ⊙l (γ_l ∧ β_b) == (γ_l ∧ α_a ∧ β_b) ∘ γ_l == (γ_l ∘ γ_l) α_a ∧ β_b,
γi ∘ αj == 0,  γi ∘ βj == 0

(γ_p ∧ α_m) ⊙l (γ_p ∧ β_k) == (γ_p ∘ γ_p) (α_m ⊙(l−p) β_k),   l ≥ p    (10.33)

where γ_p == γ1 ∧ γ2 ∧ … ∧ γp,   α ∘ γi == 0,  β ∘ γi == 0

(γ_p ∘ γ_p) (α_m ⊙l β_k) == (−1)^(p k) (γ_p ∧ α_m) ⊙(l+p) (β_k ∧ γ_p)
                         == (−1)^(p m) (α_m ∧ γ_p) ⊙(l+p) (γ_p ∧ β_k)    (10.34)

where γ_p == γ1 ∧ γ2 ∧ … ∧ γp,   α ∘ γi == 0,  β ∘ γi == 0

Hypothesis

(γ_l ∧ A_m) ⊙m (γ_l ∧ B_k) == (γ_l ⊙l γ_l) (A_m ⊙(m−l) B_k),
γi ∘ Aj == 0,  γi ∘ Bj == 0

A = α1∧α2∧γ1; B = β1∧β2∧β3∧γ1;
GrassmannSimplify[ToInnerProducts[A ⊙0 B]]

0


AB1 = GrassmannSimplify[ToScalarProducts[A ⊙1 B]]

−(γ1∘γ1) α1∧α2∧β1∧β2∧β3 + (β3∘γ1) α1∧α2∧β1∧β2∧γ1 −
(β2∘γ1) α1∧α2∧β1∧β3∧γ1 + (β1∘γ1) α1∧α2∧β2∧β3∧γ1 −
(α2∘γ1) α1∧β1∧β2∧β3∧γ1 + (α1∘γ1) α2∧β1∧β2∧β3∧γ1

AB2 = GrassmannSimplify[ToScalarProducts[(α1∧α2∧β1∧β2∧β3∧γ1) ∘ γ1]]

−(γ1∘γ1) α1∧α2∧β1∧β2∧β3 + (β3∘γ1) α1∧α2∧β1∧β2∧γ1 −
(β2∘γ1) α1∧α2∧β1∧β3∧γ1 + (β1∘γ1) α1∧α2∧β2∧β3∧γ1 −
(α2∘γ1) α1∧β1∧β2∧β3∧γ1 + (α1∘γ1) α2∧β1∧β2∧β3∧γ1

AB3 = GrassmannSimplify[ToScalarProducts[A ⊙2 B]]

(−(α2∘γ1)(β3∘γ1) + (α2∘β3)(γ1∘γ1)) α1∧β1∧β2 +
((α2∘γ1)(β2∘γ1) − (α2∘β2)(γ1∘γ1)) α1∧β1∧β3 +
(−(α2∘β3)(β2∘γ1) + (α2∘β2)(β3∘γ1)) α1∧β1∧γ1 +
(−(α2∘γ1)(β1∘γ1) + (α2∘β1)(γ1∘γ1)) α1∧β2∧β3 +
((α2∘β3)(β1∘γ1) − (α2∘β1)(β3∘γ1)) α1∧β2∧γ1 +
(−(α2∘β2)(β1∘γ1) + (α2∘β1)(β2∘γ1)) α1∧β3∧γ1 +
((α1∘γ1)(β3∘γ1) − (α1∘β3)(γ1∘γ1)) α2∧β1∧β2 +
(−(α1∘γ1)(β2∘γ1) + (α1∘β2)(γ1∘γ1)) α2∧β1∧β3 +
((α1∘β3)(β2∘γ1) − (α1∘β2)(β3∘γ1)) α2∧β1∧γ1 +
((α1∘γ1)(β1∘γ1) − (α1∘β1)(γ1∘γ1)) α2∧β2∧β3 +
(−(α1∘β3)(β1∘γ1) + (α1∘β1)(β3∘γ1)) α2∧β2∧γ1 +
((α1∘β2)(β1∘γ1) − (α1∘β1)(β2∘γ1)) α2∧β3∧γ1 +
(−(α1∘γ1)(α2∘β3) + (α1∘β3)(α2∘γ1)) β1∧β2∧γ1 +
((α1∘γ1)(α2∘β2) − (α1∘β2)(α2∘γ1)) β1∧β3∧γ1 +
(−(α1∘γ1)(α2∘β1) + (α1∘β1)(α2∘γ1)) β2∧β3∧γ1

AB4 = GrassmannSimplify[AB3 OrthogonalSimplify[{{α1∧α2, γ1}, {β1∧β2∧β3, γ1}}]]

(α2∘β3)(γ1∘γ1) α1∧β1∧β2 − (α2∘β2)(γ1∘γ1) α1∧β1∧β3 +
(α2∘β1)(γ1∘γ1) α1∧β2∧β3 − (α1∘β3)(γ1∘γ1) α2∧β1∧β2 +
(α1∘β2)(γ1∘γ1) α2∧β1∧β3 − (α1∘β1)(γ1∘γ1) α2∧β2∧β3

AB5 = GrassmannSimplify[ToScalarProducts[(α1∧α2) ⊙1 (β1∧β2∧β3)]]

−(α2∘β3) α1∧β1∧β2 + (α2∘β2) α1∧β1∧β3 − (α2∘β1) α1∧β2∧β3 +
(α1∘β3) α2∧β1∧β2 − (α1∘β2) α2∧β1∧β3 + (α1∘β1) α2∧β2∧β3

AB4x = AB4 /. (γ1∘γ1) → 1

(α2∘β3) α1∧β1∧β2 − (α2∘β2) α1∧β1∧β3 + (α2∘β1) α1∧β2∧β3 −
(α1∘β3) α2∧β1∧β2 + (α1∘β2) α2∧β1∧β3 − (α1∘β1) α2∧β2∧β3

Orthogonal elements 2
Taking the exterior product of an element with a generalized product is equivalent to taking the exterior product only with the elements which are not orthogonal to it. Consider an element made up of two factors, β_k and γ_p, and a third element α_m totally orthogonal to β_k. Then the generalized product of α_m with γ_p ∧ β_k is equal to the exterior product of β_k with the generalized product of α_m and γ_p. That is, there is a sort of quasi-associativity between the factors.

α_m ⊙l (γ_p ∧ β_k) == (α_m ⊙l γ_p) ∧ β_k,   αi ∘ βj == 0,  l ≤ Min[m, p]    (10.35)

α_m ⊙l (γ_p ∧ β_k) == 0,   αi ∘ βj == 0,  l > Min[m, p]    (10.36)

For orthogonal elements, the generalized products cease being zero when the order of the product equals or exceeds the grade of the common factor. We assume in the following that

δi ∘ εj == 0

(α_m ∧ δ_r) ⊙l (β_k ∧ ε_p) ≠ 0,   l < m + k

(α_m ∧ δ_r) ⊙l (β_k ∧ ε_p) == ,   l == m + k

(α_m ∧ δ_r) ⊙l (β_k ∧ ε_p),   l > m + k

A; !12;
DeclareExtraVectorSymbols[{Subscript[α, _], Subscript[β, _],
  Subscript[γ, _], Subscript[δ, _], Subscript[ε, _]}]

{p, q, r, s, t, u, v, w, x, y, z, α_, β_, γ_, δ_, ε_}


For (α_m ∧ δ_r) ⊙l ε_p with δi ∘ εj == 0:

(α_m ∧ δ_r) ⊙l ε_p ≠ 0 (in general),   l < m

(α_m ∧ δ_r) ⊙l ε_p == δ_r ∧ (α_m ⊙l ε_p),   l ≤ Min[m, p]

(α_m ∧ δ_r) ⊙l ε_p == 0,   l > m

α_m ⊙l (γ_p ∧ β_k) == (α_m ⊙l γ_p) ∧ β_k,   αi ∘ βj == 0,  l ≤ Min[m, p]

A = α1∧α2∧δ1; B = ε1∧ε2∧ε3;
VizualizeGeneralizedProduct[A ⊙0 B]

( 1∘1 ) ( α1∧α2∧δ1∧ε1∧ε2∧ε3 )

VizualizeGeneralizedProduct[A ⊙1 B] OrthogonalSimplify[{{δ1∧δ2∧δ3, ε1∧ε2∧ε3∧ε4}}]

( α1∘ε1  α1∘ε2  α1∘ε3 )   ( α2∧δ1∧(ε2∧ε3)      α2∧δ1∧(−(ε1∧ε3))      α2∧δ1∧(ε1∧ε2)    )
( α2∘ε1  α2∘ε2  α2∘ε3 )   ( −(α1∧δ1)∧(ε2∧ε3)   −(α1∧δ1)∧(−(ε1∧ε3))   −(α1∧δ1)∧(ε1∧ε2) )
( 0      0      0     )   ( α1∧α2∧(ε2∧ε3)      α1∧α2∧(−(ε1∧ε3))      α1∧α2∧(ε1∧ε2)    )

ToScalarProducts[(α1∧α2) ⊙1 (ε1∧ε2∧ε3)]

−(α2∘ε3) α1∧ε1∧ε2 + (α2∘ε2) α1∧ε1∧ε3 − (α2∘ε1) α1∧ε2∧ε3 +
(α1∘ε3) α2∧ε1∧ε2 − (α1∘ε2) α2∧ε1∧ε3 + (α1∘ε1) α2∧ε2∧ε3

VizualizeGeneralizedProduct[A ⊙2 B] OrthogonalSimplify[{{δ1∧δ2∧δ3, ε1∧ε2∧ε3∧ε4}}]

( α1∧α2 ∘ ε1∧ε2   α1∧α2 ∘ ε1∧ε3   α1∧α2 ∘ ε2∧ε3 )   ( 1∧ε3   1∧(−ε2)   1∧ε1 )


AB2a = GrassmannSimplify[ToInnerProducts[A ⊙2 B]
  OrthogonalSimplify[{{δ1∧δ2∧δ3, ε1∧ε2∧ε3∧ε4}}]]

(α1∧α2 ∘ ε3∧ε4) δ1∧δ2∧δ3∧ε1∧ε2 − (α1∧α2 ∘ ε2∧ε4) δ1∧δ2∧δ3∧ε1∧ε3 +
(α1∧α2 ∘ ε2∧ε3) δ1∧δ2∧δ3∧ε1∧ε4 + (α1∧α2 ∘ ε1∧ε4) δ1∧δ2∧δ3∧ε2∧ε3 −
(α1∧α2 ∘ ε1∧ε3) δ1∧δ2∧δ3∧ε2∧ε4 + (α1∧α2 ∘ ε1∧ε2) δ1∧δ2∧δ3∧ε3∧ε4

AB2b = GrassmannSimplify[ToInnerProducts[((α1∧α2) ⊙2 (ε1∧ε2∧ε3∧ε4)) ∧ δ1∧δ2∧δ3]]

(α1∧α2 ∘ ε3∧ε4) δ1∧δ2∧δ3∧ε1∧ε2 − (α1∧α2 ∘ ε2∧ε4) δ1∧δ2∧δ3∧ε1∧ε3 +
(α1∧α2 ∘ ε2∧ε3) δ1∧δ2∧δ3∧ε1∧ε4 + (α1∧α2 ∘ ε1∧ε4) δ1∧δ2∧δ3∧ε2∧ε3 −
(α1∧α2 ∘ ε1∧ε3) δ1∧δ2∧δ3∧ε2∧ε4 + (α1∧α2 ∘ ε1∧ε2) δ1∧δ2∧δ3∧ε3∧ε4

AB2a == AB2b

True

VizualizeGeneralizedProduct[A ⊙3 B] OrthogonalSimplify[{{δ1∧δ2∧δ3, ε1∧ε2∧ε3∧ε4}}]

Inner::incom : Length 0 of dimension 1 in {} is incommensurate with length 1 of dimension 1 in {{ε1∧ε2∧ε3}}.
Inner::incom : Length 0 of dimension 1 in {} is incommensurate with length 1 of dimension 1 in {{1}}.

Inner[CircleMinus, {}, {{ε1∧ε2∧ε3}}, Plus] Inner[Wedge, {}, {{1}}, Plus]

VizualizeGeneralizedProduct[A ⊙4 B] OrthogonalSimplify[{{δ1∧δ2∧δ3, ε1∧ε2∧ε3∧ε4}}]

( 0 )   ( δ3∧1  )
( 0 )   ( −δ2∧1 )
( 0 )   ( δ1∧1  )
( 0 )   ( −α2∧1 )
( 0 )   ( α1∧1  )


10.9 The Zero Generalized Sum Theorem

The Zero Interior Sum theorem
Suppose in the Generalized Product Theorem that α_m is unity. Then we can expand the product 1 ⊙m β_k in both the A and B forms to give:

1 ⊙m β_k == Σ_{j=1}^{C(k,m)} (1 ∘ βj_m) ∧ βj'_(k−m) == Σ_{j=1}^{C(k,m)} (1 ∧ βj'_(k−m)) ∘ βj_m

When m is equal to zero we have the trivial identity that:

1 ⊙0 β_k == β_k

When m is greater than zero, the first (A) form is clearly zero, because the grade of the left factor of each interior product (1, of grade 0) is less than that of the right factor. Hence the B form is zero also. Thus we have the immediate result that

Σ_{j=1}^{C(k,m)} βj'_(k−m) ∘ βj_m == 0,
β_k == β1_m ∧ β1'_(k−m) + β2_m ∧ β2'_(k−m) + …    (10.37)

We might express this more mnemonically as

β_k == β1 ∧ β1' + β2 ∧ β2' + β3 ∧ β3' + …   ⟹
β1' ∘ β1 + β2' ∘ β2 + β3' ∘ β3 + … == 0

If m is less than or equal to k − m, this sum is zero as shown by 10.16. However, the sum created by interchanging the order of the factors in the interior products is also zero, since each term in the sum now becomes zero by virtue of the second factor being of higher grade than the first.

β1 ∘ β1' + β2 ∘ β2' + β3 ∘ β3' + … == 0

These two results are collected together and referred to as the Zero Interior Sum Theorem. It is valid for 0 < m < k.

β_k == β1_m ∧ β1'_(k−m) + β2_m ∧ β2'_(k−m) + β3_m ∧ β3'_(k−m) + …   ⟹

β1_m ∘ β1'_(k−m) + β2_m ∘ β2'_(k−m) + β3_m ∘ β3'_(k−m) + … == 0
β1'_(k−m) ∘ β1_m + β2'_(k−m) ∘ β2_m + β3'_(k−m) ∘ β3_m + … == 0    (10.38)

Composing a zero interior sum

We can easily compose a zero interior sum by noting its relationship to the base and cobase of an element. (See Chapter 3: The Regressive Product, for a discussion of Base and Cobase.) For example, consider a simple element β_3. Its base and cobase are given by:

Base[β_3]

{1, β1, β2, β3, β1∧β2, β1∧β3, β2∧β3, β1∧β2∧β3}

Cobase[β_3]

{β1∧β2∧β3, β2∧β3, −(β1∧β3), β1∧β2, β3, −β2, β1, 1}

If we take the interior products of the corresponding elements of these lists (using Thread), and then add (Plus) together the cases (Cases) in which the grade of the second factor is equal to the chosen value of m (here we choose m equal to 1), we get the terms of the zero interior sum.

Plus @@ Cases[Thread[Base[β_3] ∘ Cobase[β_3]], a_ ∘ b_ /; RawGrade[b] == 1]

β1∧β2 ∘ β3 + β1∧β3 ∘ (−β2) + β2∧β3 ∘ β1

We then give our function a name and add a condition 0 < m < RawGrade[b] to restrict the formulas to the valid range of m.

ComposeZeroInteriorSum[m_][b_] /; 0 < m < RawGrade[b] :=
  Plus @@ Cases[Thread[Base[b] ∘ Cobase[b]], a_ ∘ b_ /; RawGrade[b] == m];

Examples
For a 2-element, the zero interior sum is obviously zero.


ComposeZeroInteriorSum[1][β_2]

β1 ∘ β2 + β2 ∘ (−β1)

For a 3-element, with m equal to 1 (one 1-element factor to the right of the interior product sign), we get

ComposeZeroInteriorSum[1][β_3]

β1∧β2 ∘ β3 + β1∧β3 ∘ (−β2) + β2∧β3 ∘ β1

Two factors to the right of the interior product sign means one to the left. For m equal to 2 (two 1-element factors to the right of the interior product sign), we get

ComposeZeroInteriorSum[2][β_3]

β1 ∘ β2∧β3 + β2 ∘ (−(β1∧β3)) + β3 ∘ β1∧β2

In this case, each of the interior products is zero in its own right. However, remember that ComposeZeroInteriorSum is a GrassmannAlgebra "composer", and hence by design does not effect any simplifications. For an element of grade 4, we get non-trivial results for m equal to 1, and for m equal to 2.

ComposeZeroInteriorSum[1][β_4]

β1∧β2∧β3 ∘ β4 + β1∧β2∧β4 ∘ (−β3) + β1∧β3∧β4 ∘ β2 + β2∧β3∧β4 ∘ (−β1)

Z1 = ComposeZeroInteriorSum[2][β_4]

β1∧β2 ∘ β3∧β4 + β1∧β3 ∘ (−(β2∧β4)) + β1∧β4 ∘ β2∧β3 +
β2∧β3 ∘ β1∧β4 + β2∧β4 ∘ (−(β1∧β3)) + β3∧β4 ∘ β1∧β2

Some zero interior sums can be simplified. For example, the previous result can be written

Z2 = GrassmannSimplify[Z1]

2 (β1∧β2 ∘ β3∧β4) − 2 (β1∧β3 ∘ β2∧β4) + 2 (β1∧β4 ∘ β2∧β3)

We can also verify that these expressions are indeed zero by converting them to their scalar product form. For example:

GrassmannSimplify[ToScalarProducts[Z2]]

0
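This last zero sum can also be spot-checked numerically, independently of the GrassmannAlgebra package. The following Python sketch is our own construction (the names dot and ip2 are not GrassmannAlgebra functions): it evaluates the six interior products of Z1 for four random 1-elements, using the Gram-determinant form of the inner product of two 2-elements, and sums them.

```python
import random

random.seed(1)
DIM = 6
b1, b2, b3, b4 = [[random.uniform(-1.0, 1.0) for _ in range(DIM)]
                  for _ in range(4)]

def dot(u, v):
    """Scalar product of two 1-elements (Euclidean metric assumed)."""
    return sum(x * y for x, y in zip(u, v))

def ip2(x, y):
    """Inner product of two 2-elements (x1^x2) o (y1^y2) as a Gram determinant."""
    (x1, x2), (y1, y2) = x, y
    return dot(x1, y1) * dot(x2, y2) - dot(x1, y2) * dot(x2, y1)

# The six terms of the zero interior sum for b1^b2^b3^b4 with m = 2
zero_sum = (ip2((b1, b2), (b3, b4))
            - ip2((b1, b3), (b2, b4))
            + ip2((b1, b4), (b2, b3))
            + ip2((b2, b3), (b1, b4))
            - ip2((b2, b4), (b1, b3))
            + ip2((b3, b4), (b1, b2)))
```

Up to floating-point rounding, zero_sum vanishes for any choice of the four 1-elements, in agreement with the theorem.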


The Zero Generalized Sum theorem

The Zero Generalized Sum theorem is of the same form as the Zero Interior Sum theorem, but with the interior product replaced by the generalized product. The formulas are valid for β_k simple, 0 < m < k, and l ≠ 0. The Zero Interior Sum theorem is the special case of the Zero Generalized Sum theorem in which l == Min[m, k−m].

β_k == β1_m ∧ β1'_(k−m) + β2_m ∧ β2'_(k−m) + β3_m ∧ β3'_(k−m) + …   ⟹

β1_m ⊙l β1'_(k−m) + β2_m ⊙l β2'_(k−m) + β3_m ⊙l β3'_(k−m) + … == 0,   l ≠ 0
β1'_(k−m) ⊙l β1_m + β2'_(k−m) ⊙l β2_m + β3'_(k−m) ⊙l β3_m + … == 0,   l ≠ 0    (10.39)

To compose a zero generalized sum, we simply modify the code for the zero interior sum to replace the interior product by the generalized product, and add another argument and condition for l.

ComposeZeroGeneralizedSum[m_, l_][b_] /; 0 < m < RawGrade[b] && l > 0 :=
  Plus @@ Cases[Thread[Base[b] ⊙l Cobase[b]], a_ ⊙l b_ /; RawGrade[b] == m];

Examples
Here is the only valid non-trivial zero generalized sum for a 4-element that does not reduce to a zero interior sum.

ComposeZeroGeneralizedSum[2, 1][β_4]

β1∧β2 ⊙1 β3∧β4 + β1∧β3 ⊙1 (−(β2∧β4)) + β1∧β4 ⊙1 β2∧β3 +
β2∧β3 ⊙1 β1∧β4 + β2∧β4 ⊙1 (−(β1∧β3)) + β3∧β4 ⊙1 β1∧β2

Here we tabulate the first 50 cases to confirm that they reduce to zero.

Flatten[Table[
  GrassmannSimplify[ToScalarProducts[ComposeZeroGeneralizedSum[m, l][β_k]]],
  {k, 2, 8}, {m, 1, k − 1}, {l, 1, Min[m, k − m]}]]

{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}

The mechanism behind zero generalized sums

To see the mechanism behind why these sums are zero, we take a specific case and follow through the steps. A general proof of the theorem need only abstract these few simple steps. The proof will rely on the Zero Interior Sum theorem. Suppose we have a simple 5-element β_5,

β_5 == β1∧β2∧β3∧β4∧β5

and we wish to show that

β1_m ⊙l β1'_(5−m) + β2_m ⊙l β2'_(5−m) + β3_m ⊙l β3'_(5−m) + … == 0

where 0 < m < 5, l > 0, and

β_5 == β1_m ∧ β1'_(5−m) + β2_m ∧ β2'_(5−m) + β3_m ∧ β3'_(5−m) + …

The generalized product βi_m ⊙l βi'_(5−m) is zero when l is greater than the minimum of m and 5 − m. It is also quasi-commutative, only changing by a (possible) sign when its factors are interchanged:

βi_m ⊙l βi'_(k−m) == (−1)^((m−l)(k−m−l)) βi'_(k−m) ⊙l βi_m

Thus the only essentially different nontrivial cases remaining are

F1 ≡ β1_1 ⊙1 β1'_4 + β2_1 ⊙1 β2'_4 + β3_1 ⊙1 β3'_4 + … == 0
F2 ≡ β1_2 ⊙1 β1'_3 + β2_2 ⊙1 β2'_3 + β3_2 ⊙1 β3'_3 + … == 0
F3 ≡ β1_2 ⊙2 β1'_3 + β2_2 ⊙2 β2'_3 + β3_2 ⊙2 β3'_3 + … == 0

Since the terms in F1 and F3 are already interior products, they are zero by the Zero Interior Sum Theorem. It remains to show that F2 is zero.


To display the terms of F2 for β_5, we use the ComposeZeroGeneralizedSum function defined in the previous section. Specifically, the terms of F2 can be written

Z1 = ComposeZeroGeneralizedSum[2, 1][β_5]

β1∧β2∧β3 ⊙1 β4∧β5 + β1∧β2∧β4 ⊙1 (−(β3∧β5)) +
β1∧β2∧β5 ⊙1 β3∧β4 + β1∧β3∧β4 ⊙1 β2∧β5 + β1∧β3∧β5 ⊙1 (−(β2∧β4)) +
β1∧β4∧β5 ⊙1 β2∧β3 + β2∧β3∧β4 ⊙1 (−(β1∧β5)) + β2∧β3∧β5 ⊙1 β1∧β4 +
β2∧β4∧β5 ⊙1 (−(β1∧β3)) + β3∧β4∧β5 ⊙1 β1∧β2

Before we proceed with the proof for this specific case, we verify first that Z1 is indeed zero.

GrassmannSimplify[ToScalarProducts[Z1]]

0

1. Convert the generalized products into their interior product form

Z2 = ToInteriorProducts[Z1]

β1∧(β2∧β3∧β4 ∘ β5) − β1∧(β2∧β3∧β5 ∘ β4) + β1∧(β2∧β4∧β5 ∘ β3) −
β1∧(β3∧β4∧β5 ∘ β2) − β2∧(β1∧β3∧β4 ∘ β5) + β2∧(β1∧β3∧β5 ∘ β4) −
β2∧(β1∧β4∧β5 ∘ β3) + β2∧(β3∧β4∧β5 ∘ β1) + β3∧(β1∧β2∧β4 ∘ β5) −
β3∧(β1∧β2∧β5 ∘ β4) + β3∧(β1∧β4∧β5 ∘ β2) − β3∧(β2∧β4∧β5 ∘ β1) −
β4∧(β1∧β2∧β3 ∘ β5) + β4∧(β1∧β2∧β5 ∘ β3) − β4∧(β1∧β3∧β5 ∘ β2) +
β4∧(β2∧β3∧β5 ∘ β1) + β5∧(β1∧β2∧β3 ∘ β4) − β5∧(β1∧β2∧β4 ∘ β3) +
β5∧(β1∧β3∧β4 ∘ β2) − β5∧(β2∧β3∧β4 ∘ β1)

2. Collect together the terms having the same exterior product factor
A simple way to do this is to replace the exterior product with an arbitrary scalar multiple. In this form it becomes clear that the sum of the terms associated with each scalar multiple is a zero interior sum, and hence zero, making the complete expression zero, and thus proving the theorem for this case.

Z3 = Z2 /. βi_ ∧ B_ → βi B

(β2∧β3∧β4 ∘ β5) β1 − (β2∧β3∧β5 ∘ β4) β1 + (β2∧β4∧β5 ∘ β3) β1 −
(β3∧β4∧β5 ∘ β2) β1 − (β1∧β3∧β4 ∘ β5) β2 + (β1∧β3∧β5 ∘ β4) β2 −
(β1∧β4∧β5 ∘ β3) β2 + (β3∧β4∧β5 ∘ β1) β2 + (β1∧β2∧β4 ∘ β5) β3 −
(β1∧β2∧β5 ∘ β4) β3 + (β1∧β4∧β5 ∘ β2) β3 − (β2∧β4∧β5 ∘ β1) β3 −
(β1∧β2∧β3 ∘ β5) β4 + (β1∧β2∧β5 ∘ β3) β4 − (β1∧β3∧β5 ∘ β2) β4 +
(β2∧β3∧β5 ∘ β1) β4 + (β1∧β2∧β3 ∘ β4) β5 − (β1∧β2∧β4 ∘ β3) β5 +
(β1∧β3∧β4 ∘ β2) β5 − (β2∧β3∧β4 ∘ β1) β5


3. Confirm the result by converting the interior products to scalar products

To confirm that the complete expression is zero, we need first to declare the βi as scalar symbols, so that GrassmannSimplify can effect the collection of terms resulting from expanding the interior products to scalar products.

DeclareExtraScalarSymbols[β_];
GrassmannSimplify[ToScalarProducts[Z3]]

0

10.10 The Triple Generalized Sum Conjecture

In this section we discuss an interesting conjecture associated with our definition of the Clifford product in Chapter 12. As already mentioned, we have defined the generalized Grassmann product to facilitate the definition of the Clifford product of general Grassmann expressions. As is well known, the Clifford product is associative. But in general, a Clifford product will involve both exterior and interior products, and the interior product is not associative. In this section we look at a conjecture which, if established, will prove the validity of our definition of the Clifford product in terms of generalized Grassmann products.

The generalized product is not associative

We take a simple example to show that the generalized product is not associative. First we set up the assertion:

H = (x ⊙0 y) ⊙1 z == x ⊙0 (y ⊙1 z);

Then we convert the generalized products to scalar products.

ToScalarProducts[H]

(x∘z) y − (y∘z) x == (y∘z) x

Clearly the two sides of the relation are not in general equal. Hence the generalized product is not in general associative.
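The same inequality can be seen numerically. In the sketch below (plain Python; the closed forms for the two groupings of 1-elements follow from the expansion formula earlier in this chapter, and the helper names are our own), (x ⊙0 y) ⊙1 z expands to (x∘z) y − (y∘z) x, while x ⊙0 (y ⊙1 z) is simply (y∘z) x, and generic vectors separate the two sides.

```python
def dot(u, v):
    """Scalar product of two 1-elements."""
    return sum(a * b for a, b in zip(u, v))

def scale(c, v):
    return [c * t for t in v]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

x, y, z = [1.0, 2.0, 0.0], [0.0, 1.0, 1.0], [3.0, 0.0, 1.0]

# (x (0) y) (1) z expands to (x o z) y - (y o z) x
lhs = sub(scale(dot(x, z), y), scale(dot(y, z), x))
# x (0) (y (1) z): y (1) z is the scalar y o z, so this is (y o z) x
rhs = scale(dot(y, z), x)
```

Here dot(x, z) is 3 and dot(y, z) is 1, so lhs is 3y − x = [−1, 1, 3] while rhs is x = [1, 2, 0]: the two groupings disagree.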


The triple generalized sum

Triple generalized products
We define a triple generalized product to be a product either of the form (x_m ⊙l y_k) ⊙μ z_p or of the form x_m ⊙l (y_k ⊙μ z_p). Since we have shown above that these forms will in general be different, we will refer to the first one as the A form, and to the second as the B form.

Signed triple generalized products

We define the signed triple generalized product of form A to be the triple generalized product of form A multiplied by the factor (−1)^sA, where sA == m l + ½ l (1 + l) + (k + m) μ + ½ μ (1 + μ).

We define the signed triple generalized product of form B to be the triple generalized product of form B multiplied by the factor (−1)^sB, where sB == m l + ½ l (1 + l) + k μ + ½ μ (1 + μ).

Triple generalized sums

We define a triple generalized sum of form A to be the sum of all the signed triple generalized products of form A of the same elements and which have the same grade n.

Σ_{l=0}^{n} (−1)^sA (x_m ⊙l y_k) ⊙(n−l) z_p

where sA == m l + ½ l (l + 1) + (m + k)(n − l) + ½ (n − l)(n − l + 1).

We define a triple generalized sum of form B to be the sum of all the signed triple generalized products of form B of the same elements and which have the same grade n.

Σ_{l=0}^{n} (−1)^sB x_m ⊙l (y_k ⊙(n−l) z_p)

where sB == m l + ½ l (l + 1) + k (n − l) + ½ (n − l)(n − l + 1).

For example, we tabulate below the triple generalized sums of form A for grades of 0, 1 and 2:


Table[{i, TripleGeneralizedSumA[i][x_m, y_k, z_p]}, {i, 0, 2}] // TableForm

0   (x ⊙0 y) ⊙0 z
1   (−1)^(1+m) (x ⊙1 y) ⊙0 z + (−1)^(1+k+m) (x ⊙0 y) ⊙1 z
2   −(x ⊙2 y) ⊙0 z + (−1)^k (x ⊙1 y) ⊙1 z − (x ⊙0 y) ⊙2 z

Similarly the triple generalized sums of form B for grades of 0, 1 and 2 are

Table[{i, TripleGeneralizedSumB[i][x_m, y_k, z_p]}, {i, 0, 2}] // TableForm

0   x ⊙0 (y ⊙0 z)
1   (−1)^(1+k) x ⊙0 (y ⊙1 z) + (−1)^(1+m) x ⊙1 (y ⊙0 z)
2   −x ⊙0 (y ⊙2 z) + (−1)^(2+k+m) x ⊙1 (y ⊙1 z) − x ⊙2 (y ⊙0 z)

The triple generalized sum conjecture

As we have shown above, a single triple generalized product is not in general associative. That is, the A form and the B form are not in general equal. However, we conjecture that the triple generalized sum of form A is equal to the triple generalized sum of form B.

Σ_{l=0}^{n} (−1)^sA (x_m ⊙l y_k) ⊙(n−l) z_p ==
Σ_{l=0}^{n} (−1)^sB x_m ⊙l (y_k ⊙(n−l) z_p)    (10.40)

where
sA == m l + ½ l (l + 1) + (m + k)(n − l) + ½ (n − l)(n − l + 1)
sB == m l + ½ l (l + 1) + k (n − l) + ½ (n − l)(n − l + 1).

If this conjecture can be shown to be true, then the associativity of the Clifford product of a general Grassmann expression can be straightforwardly proven using the definition of the Clifford product in terms of exterior and interior products (or, what is equivalent, in terms of generalized products). That is, the rules of Clifford algebra may be entirely determined by the axioms and theorems of the Grassmann algebra, making it unnecessary, and indeed potentially inconsistent, to introduce any special Clifford algebra axioms.

Exploring the triple generalized sum conjecture


We can explore the triple generalized sum conjecture by converting the generalized products into scalar products. For example, we first generate the triple generalized sums.

A = TripleGeneralizedSumA[2][x_3, y_2, z]

−(x_3 ∘_2 y_2) ∘_0 z + (x_3 ∘_1 y_2) ∘_1 z − (x_3 ∘_0 y_2) ∘_2 z

B = TripleGeneralizedSumB[2][x_3, y_2, z]

−x_3 ∘_0 (y_2 ∘_2 z) − x_3 ∘_1 (y_2 ∘_1 z) − x_3 ∘_2 (y_2 ∘_0 z)

We could demonstrate the equality of these two sums directly by converting their difference into scalar product form, thus allowing terms to cancel.
Expand[ToScalarProducts[A − B]]
0

However it is more instructive to convert each of the terms in the difference separately to scalar products.
X = List @@ (A − B)

{x_3 ∘_0 (y_2 ∘_2 z), −(x_3 ∘_2 y_2) ∘_0 z, x_3 ∘_1 (y_2 ∘_1 z),
 (x_3 ∘_1 y_2) ∘_1 z, x_3 ∘_2 (y_2 ∘_0 z), −(x_3 ∘_0 y_2) ∘_2 z}

X1 = Expand[ToScalarProducts[X]]

{0,
 −(x2·y2)(x3·y1) z∧x1 + (x2·y1)(x3·y2) z∧x1 + (x1·y2)(x3·y1) z∧x2 − (x1·y1)(x3·y2) z∧x2 − (x1·y2)(x2·y1) z∧x3 + (x1·y1)(x2·y2) z∧x3,
 −(z·y2)(x3·y1) x1∧x2 + (z·y1)(x3·y2) x1∧x2 + (z·y2)(x2·y1) x1∧x3 − (z·y1)(x2·y2) x1∧x3 − (z·y2)(x1·y1) x2∧x3 + (z·y1)(x1·y2) x2∧x3,
 (z·y2)(x3·y1) x1∧x2 − (z·y1)(x3·y2) x1∧x2 − (z·y2)(x2·y1) x1∧x3 + (z·y1)(x2·y2) x1∧x3 − (z·x3)(x2·y2) x1∧y1 + (z·x2)(x3·y2) x1∧y1 + (z·x3)(x2·y1) x1∧y2 − (z·x2)(x3·y1) x1∧y2 + (z·y2)(x1·y1) x2∧x3 − (z·y1)(x1·y2) x2∧x3 + (z·x3)(x1·y2) x2∧y1 − (z·x1)(x3·y2) x2∧y1 − (z·x3)(x1·y1) x2∧y2 + (z·x1)(x3·y1) x2∧y2 − (z·x2)(x1·y2) x3∧y1 + (z·x1)(x2·y2) x3∧y1 + (z·x2)(x1·y1) x3∧y2 − (z·x1)(x2·y1) x3∧y2,
 (x2·y2)(x3·y1) z∧x1 − (x2·y1)(x3·y2) z∧x1 − (x1·y2)(x3·y1) z∧x2 + (x1·y1)(x3·y2) z∧x2 + (x1·y2)(x2·y1) z∧x3 − (x1·y1)(x2·y2) z∧x3 + (z·x3)(x2·y2) x1∧y1 − (z·x2)(x3·y2) x1∧y1 − (z·x3)(x2·y1) x1∧y2 + (z·x2)(x3·y1) x1∧y2 − (z·x3)(x1·y2) x2∧y1 + (z·x1)(x3·y2) x2∧y1 + (z·x3)(x1·y1) x2∧y2 − (z·x1)(x3·y1) x2∧y2 + (z·x2)(x1·y2) x3∧y1 − (z·x1)(x2·y2) x3∧y1 − (z·x2)(x1·y1) x3∧y2 + (z·x1)(x2·y1) x3∧y2,
 0}

(Here x_3 = x1∧x2∧x3 and y_2 = y1∧y2.)

Converting back to a sum shows that the terms add to zero.


X2 = Plus @@ X1
0

An algorithm to test the conjecture


It is simple to automate testing the conjecture by collecting the steps above in a function:

TestTripleGeneralizedSumConjecture[n_][x_, y_, z_] :=
  Expand[ToScalarProducts[TripleGeneralizedSumA[n][x, y, z] −
      TripleGeneralizedSumB[n][x, y, z]]]
As an example of this procedure we run through the first 320 cases. A value of zero will validate the conjecture for that case.


Table[TestTripleGeneralizedSumConjecture[n][x_m, y_k, z_p],
    {n, 0, 4}, {m, 0, 3}, {k, 0, 3}, {p, 0, 3}] // Flatten

{0, 0, 0, …, 0}    (a list of 320 zeros)

10.11 Exploring Conjectures


As we have already shown, it is easy to explore conjectures using Mathematica to compute individual cases. By 'explore' of course, we mean the generation of cases which either disprove the conjecture or increase our confidence that the conjecture is correct.

A conjecture
As an example we suggest a conjecture for a relationship amongst generalized products of order 1 of the following form, valid for any m, k, and p.

α_m ∘_1 β_k ∘_1 γ_p + (−1)^x β_k ∘_1 γ_p ∘_1 α_m + (−1)^y γ_p ∘_1 α_m ∘_1 β_k = 0    10.41

The signs (−1)^x and (−1)^y are at this point unknown, but if the conjecture is true we expect them to be simple functions of m, k, p and their products. We conjecture therefore that in any particular case they will be either +1 or −1.


Exploring the conjecture


The first step therefore is to test some cases to see if the formula fails for some combination of signs. We do that by setting up a function that calculates the formula for each of the possible sign combinations, returning True if any one of them satisfies the formula, and False otherwise. Then we create a table of cases. Below we let the grades m, k, and p range from 0 to 3 (that is, two instances of odd parity and two instances of even parity each, in all combinations). Although we have written the function as printing the intermediate results as they are calculated, we do not show this output as the results are summarized in the final list.

TestPostulate[{m_, k_, p_}] := Module[{X, Y, Z, α, β, γ},
  X = ToScalarProducts[α_m ∘_1 β_k ∘_1 γ_p];
  Y = ToScalarProducts[β_k ∘_1 γ_p ∘_1 α_m];
  Z = ToScalarProducts[γ_p ∘_1 α_m ∘_1 β_k];
  TrueQ[X + Y + Z == 0 || X + Y − Z == 0 ||
        X − Y + Z == 0 || X − Y − Z == 0]]

Table[T = TestPostulate[{m, k, p}]; Print[{{m, k, p}, T}]; T,
    {m, 0, 3}, {k, 0, 3}, {p, 0, 3}] // Flatten

{True, True, True, …, True}    (64 values, all True)

It can be seen that the formula is valid for some values of x and y in each of the first 64 cases. This gives us some confidence that our efforts may not be wasted if we wished to prove the formula and/or find the values of x and y in terms of m, k, and p.


10.12 The Generalized Product of Intersecting Elements


The case λ < p

Consider three simple elements α_m, β_k and γ_p. The elements γ_p ∧ α_m and γ_p ∧ β_k may then be considered elements with an intersection γ_p. Generalized products of such intersecting elements are zero whenever the order λ of the product is less than the grade of the intersection.

(γ_p ∧ α_m) ∘_λ (γ_p ∧ β_k) = 0,    λ < p    10.42

We can test this by reducing the expression to scalar products and tabulating cases.

Flatten[Table[A = ToScalarProducts[(γ_p ∧ α_m) ∘_λ (γ_p ∧ β_k)];
    Print[{{m, k, p, λ}, A}]; A,
    {m, 0, 3}, {k, 0, 3}, {p, 1, 3}, {λ, 0, p − 1}]]

{0, 0, 0, …, 0}    (96 zeros)
Of course, this result is also valid in the case that either or both of the grades of α_m and β_k are zero.

γ_p ∘_λ (γ_p ∧ β_k) = (γ_p ∧ α_m) ∘_λ γ_p = γ_p ∘_λ γ_p = 0,    λ < p    10.43

The case λ ≥ p

For the case where λ is equal to, or greater than, the grade p of the intersection, the generalized product of such intersecting elements may be expressed as the interior product of the common factor γ_p with a generalized product of the remaining factors. This generalized product is of an order lower by the grade of the common factor. Hence the formula is only valid for λ ≥ p (since the generalized product has only been defined for non-negative orders).

(γ_p ∧ α_m) ∘_λ (γ_p ∧ β_k) = (−1)^(p(λ−p)) ((γ_p ∧ α_m) ∘_(λ−p) β_k) ⌋ γ_p    10.44

(γ_p ∧ α_m) ∘_λ (γ_p ∧ β_k) = (−1)^(p m) (α_m ∘_(λ−p) (γ_p ∧ β_k)) ⌋ γ_p
We can test these formulae by reducing the expressions on either side to scalar products and tabulating cases. For λ > p, the first formula gives

Flatten[Table[A = ToScalarProducts[(γ_p ∧ α_m) ∘_λ (γ_p ∧ β_k) −
      (−1)^(p(λ−p)) ((γ_p ∧ α_m) ∘_(λ−p) β_k) ⌋ γ_p];
    Print[{{m, k, p, λ}, A}]; A,
    {m, 0, 3}, {k, 0, 3}, {p, 0, 3}, {λ, p + 1, Min[m, k]}]]

{0, 0, 0, …, 0}    (20 zeros)

The case of λ = p is tabulated in the next section. The second formula can be derived from the first by using the quasi-commutativity of the generalized product. To check, we tabulate the first few cases:

Flatten[Table[ToScalarProducts[(γ_p ∧ α_m) ∘_λ (γ_p ∧ β_k) −
      (−1)^(p m) (α_m ∘_(λ−p) (γ_p ∧ β_k)) ⌋ γ_p],
    {m, 1, 2}, {k, 1, m}, {p, 0, m}, {λ, p + 1, Min[p + m, p + k]}]]

{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}

Finally, we can put μ = λ − p and rearrange the above to give:

(α_m ∘_μ (γ_p ∧ β_k)) ⌋ γ_p = (−1)^(p m) ((γ_p ∧ α_m) ∘_μ β_k) ⌋ γ_p    10.45

Flatten[Table[ToScalarProducts[(α_m ∘_μ (γ_p ∧ β_k)) ⌋ γ_p −
      (−1)^(p m) ((γ_p ∧ α_m) ∘_μ β_k) ⌋ γ_p],
    {m, 0, 2}, {k, 0, m}, {p, 0, m}, {μ, 0, Min[m, k]}]]

{0, 0, 0, …, 0}    (25 zeros)

The special case of λ = p

By putting λ = p into the first formula of 10.44 we get:

(γ_p ∧ α_m) ∘_p (γ_p ∧ β_k) = (γ_p ∧ α_m ∧ β_k) ⌋ γ_p

By interchanging γ_p, α_m, and β_k we also have two simple alternate expressions:

(γ_p ∧ α_m) ∘_p (γ_p ∧ β_k) = (γ_p ∧ α_m ∧ β_k) ⌋ γ_p
(α_m ∧ γ_p) ∘_p (γ_p ∧ β_k) = (α_m ∧ γ_p ∧ β_k) ⌋ γ_p    10.46
(α_m ∧ γ_p) ∘_p (β_k ∧ γ_p) = (α_m ∧ β_k ∧ γ_p) ⌋ γ_p

Again we can demonstrate this identity by converting to scalar products.

Flatten[Table[A = ToScalarProducts[(γ_p ∧ α_m) ∘_p (γ_p ∧ β_k) −
      (γ_p ∧ α_m ∧ β_k) ⌋ γ_p];
    Print[{{m, k, p}, A}]; A, {m, 0, 3}, {k, 0, 3}, {p, 0, 3}]]

{0, 0, 0, …, 0}    (64 zeros)

10.13 The Generalized Product of Orthogonal Elements

The generalized product of totally orthogonal elements

It is simple to see from the definition of the generalized product that if α_m and β_k are totally orthogonal, and λ is not zero, then their generalized product is zero.

α_m ∘_λ β_k = 0,    α_i · β_j = 0,    λ > 0    10.47

We can verify this easily:

Flatten[Table[ToScalarProducts[α_m ∘_λ β_k] /.
      OrthogonalSimplificationRules[{{α_m, β_k}}],
    {m, 0, 4}, {k, 0, m}, {λ, 1, Min[m, k]}]]

{0, 0, 0, …, 0}    (20 zeros)

The GrassmannAlgebra function OrthogonalSimplificationRules generates a list of rules which put to zero all the scalar products of a 1-element from α_m with a 1-element from β_k.

The generalized product of partially orthogonal elements

Consider the generalized product α_m ∘_λ (γ_p ∧ β_k) of an element α_m with an element γ_p ∧ β_k in which α_m and β_k are totally orthogonal, but γ_p is arbitrary.

Since α_i · β_j = 0, the interior products of α_m with factors of γ_p ∧ β_k in the expansion of the generalized product will be zero whenever they contain any factors of β_k. That is, the only non-zero terms in the expansion of the generalized product are those in which the factors of α_m pair only with factors of γ_p. Hence:

α_m ∘_λ (γ_p ∧ β_k) = (α_m ∘_λ γ_p) ∧ β_k,    α_i · β_j = 0,    λ ≤ Min[m, p]    10.48

α_m ∘_λ (γ_p ∧ β_k) = 0,    α_i · β_j = 0,    λ > Min[m, p]    10.49

Flatten[Table[ToScalarProducts[α_m ∘_λ (γ_p ∧ β_k) − (α_m ∘_λ γ_p) ∧ β_k] /.
      OrthogonalSimplificationRules[{{α_m, β_k}}],
    {m, 0, 3}, {k, 0, m}, {p, 0, m}, {λ, 0, Min[m, p]}]]

{0, 0, 0, …, 0}    (65 zeros)

By using the quasi-commutativity of the generalized and exterior products, we can transform the above results to:

(α_m ∧ γ_p) ∘_λ β_k = (−1)^(m λ) α_m ∧ (γ_p ∘_λ β_k),    α_i · β_j = 0,    λ ≤ Min[k, p]    10.50

(γ_p ∧ α_m) ∘_λ β_k = 0,    α_i · β_j = 0,    λ > Min[k, p]    10.51

Flatten[Table[ToScalarProducts[(α_m ∧ γ_p) ∘_λ β_k −
      (−1)^(m λ) α_m ∧ (γ_p ∘_λ β_k)] /.
      OrthogonalSimplificationRules[{{α_m, β_k}}],
    {m, 0, 2}, {k, 0, m}, {p, 0, m}, {λ, 0, Min[k, p]}]]

{0, 0, 0, …, 0}    (20 zeros)

10.14 The Generalized Product of Intersecting Orthogonal Elements

The case λ < p

Consider three simple elements α_m, β_k and γ_p. The elements γ_p ∧ α_m and γ_p ∧ β_k may then be considered elements with an intersection γ_p. As has been shown in Section 10.12, generalized products of such intersecting elements are zero whenever the order λ of the product is less than the grade of the intersection. Hence the product will be zero for λ < p, independent of the orthogonality relationships of its factors.

The case λ ≥ p

Consider now the case of three simple elements α_m, β_k and γ_p where γ_p is totally orthogonal to both α_m and β_k (and hence to α_m ∧ β_k). A simple element γ_p is totally orthogonal to an element α_m if and only if α_m ⌋ γ_i = 0 for all γ_i belonging to γ_p.

The generalized product of the elements γ_p ∧ α_m and γ_p ∧ β_k of order λ ≥ p is then a scalar factor γ_p ⌋ γ_p times the generalized product of the factors α_m and β_k, but of an order lower by the grade of the common factor; that is, of order λ − p.

(γ_p ∧ α_m) ∘_λ (γ_p ∧ β_k) = (γ_p ⌋ γ_p) (α_m ∘_(λ−p) β_k)    10.52
λ ≥ p,    γ_p = γ1 ∧ γ2 ∧ … ∧ γp,    α_m ⌋ γ_i = β_k ⌋ γ_i = 0

Note here that the factors of γ_p are not necessarily orthogonal to each other. Neither are the factors of α_m, nor the factors of β_k.

We can test this out by making a table of cases. Here we look at the first 25 cases.

Flatten[Table[ToScalarProducts[(γ_p ∧ α_m) ∘_λ (γ_p ∧ β_k) −
      (γ_p ⌋ γ_p) (α_m ∘_(λ−p) β_k)] /.
      OrthogonalSimplificationRules[{{α_m, γ_p}, {β_k, γ_p}}],
    {m, 0, 2}, {k, 0, m}, {p, 1, m + 1}, {λ, p, p + Min[m, k]}]]

{0, 0, 0, …, 0}    (25 zeros)

By combining the results of the previous section with those of this section, we see that we can also write:

((γ_p ∧ α_m) ∘_λ β_k) ⌋ γ_p = (−1)^(p λ) (γ_p ⌋ γ_p) (α_m ∘_λ β_k)
(α_m ∘_λ (γ_p ∧ β_k)) ⌋ γ_p = (−1)^(p m) (γ_p ⌋ γ_p) (α_m ∘_λ β_k)    10.53
γ_p = γ1 ∧ γ2 ∧ … ∧ γp,    α_m ⌋ γ_i = β_k ⌋ γ_i = 0

We check the second rule by taking the first 25 cases.

Flatten[Table[ToScalarProducts[(α_m ∘_λ (γ_p ∧ β_k)) ⌋ γ_p −
      (−1)^(p m) (γ_p ⌋ γ_p) (α_m ∘_λ β_k)] /.
      OrthogonalSimplificationRules[{{α_m, γ_p}, {β_k, γ_p}}],
    {m, 0, 2}, {k, 0, m}, {p, 1, m + 1}, {λ, 0, Min[m, k]}]]

{0, 0, 0, …, 0}    (25 zeros)

10.15 Generalized Products in Lower Dimensional Spaces


Generalized products in 0, 1, and 2-spaces

In this section we summarize the properties of generalized products for spaces of dimension 0, 1 and 2. In spaces of all dimensions, generalized products in which one of the factors is a scalar will reduce either to the usual (field or linear space) product by the scalar quantity (when the order of the product is zero) or else to zero (when the order of the product is greater than zero). Except for a brief discussion of the implications of this for 0-space, further discussion will assume that the factors of the generalized products are not scalar.

In spaces of dimension 0, 1, or 2, all generalized products reduce either to the ordinary field product, the exterior product, the interior product, or zero. Hence they bring little new to a study of these spaces. These results will be clear upon reference to the simple properties of generalized products discussed in previous sections. The only result that may not be immediately obvious is that of the generalized product of order one of two 2-elements in a 2-space, for example α_2 ∘_1 β_2. In a 2-space, α_2 and β_2 must be congruent (differ only by (possibly) a scalar factor), and hence by formula 10.17 the product is zero. If one of the factors is a scalar, then the only non-zero generalized product is that of order zero, equivalent to the ordinary field product (and, incidentally, also to the exterior and interior products).

0-space

In a space of zero dimensions (the underlying field of the Grassmann algebra), the generalized product reduces to the exterior product and hence to the usual scalar (field) product.

ToScalarProducts[a ∘_0 b]
a b

Higher order products (for example a ∘_1 b) are of course zero since the order of the product is greater than the minimum of the grades of the factors (which in this case is zero). In sum: the only non-zero generalized product in a space of zero dimensions is the product of order zero, equivalent to the underlying field product.

1-space

Products of zero order

In a space of one dimension there is only one basis element, so the product of zero order (that is, the exterior product) of any two elements is zero:

(a e1) ∘_0 (b e1) = a b e1∧e1 = 0

ToScalarProducts[(a e1) ∘_0 (b e1)]
0

Products of the first order

The product of the first order reduces to the scalar product.

(a e1) ∘_1 (b e1) = a b (e1·e1)

ToScalarProducts[(a e1) ∘_1 (b e1)]
a b (e1·e1)

In sum: the only non-zero generalized product of elements (other than where one is a scalar) in a space of one dimension is the product of order one, equivalent to the scalar product.

2-space

Products of zero order

The product of zero order of two elements reduces to their exterior product. Hence in a 2-space, the only non-zero product (apart from when one factor is a scalar) is the exterior product of two 1-elements.

(a e1 + b e2) ∘_0 (c e1 + d e2) = (a e1 + b e2) ∧ (c e1 + d e2)

Products of the first order

The product of the first order of two 1-elements is commutative and reduces to their scalar product.

(a e1 + b e2) ∘_1 (c e1 + d e2) = (a e1 + b e2) ⌋ (c e1 + d e2)

ToScalarProducts[(a e1 + b e2) ∘_1 (c e1 + d e2)]
a c (e1·e1) + b c (e1·e2) + a d (e1·e2) + b d (e2·e2)

The product of the first order of a 1-element and a 2-element is also commutative and reduces to their interior product.

(a e1 + b e2) ∘_1 (c e1∧e2) = (c e1∧e2) ∘_1 (a e1 + b e2) = (c e1∧e2) ⌋ (a e1 + b e2)

ToScalarProducts[(a e1 + b e2) ∘_1 (c e1∧e2)]
−a c (e1·e2) e1 − b c (e2·e2) e1 + a c (e1·e1) e2 + b c (e1·e2) e2

The product of the first order of two 2-elements is zero. This can be determined directly from [10.17].

α_m ∘_λ α_m = 0,    λ ≠ m

ToScalarProducts[(a e1∧e2) ∘_1 (b e1∧e2)]
0

Products of the second order

Products of the second order of two 1-elements, or of a 1-element and a 2-element, are necessarily zero since the order of the product is greater than the minimum of the grades of the factors.

ToScalarProducts[(a e1 + b e2) ∘_2 (c e1 + d e2)]
0

Products of the second order of two 2-elements reduce to their inner product:

(a e1∧e2) ∘_2 (b e1∧e2) = (a e1∧e2) ⌋ (b e1∧e2)

ToInnerProducts[(a e1∧e2) ∘_2 (b e1∧e2)]
(a e1∧e2) ⌋ (b e1∧e2)

In sum: the only non-zero generalized products of elements (other than where one is a scalar) in a space of two dimensions reduce to exterior, interior, inner or scalar products.
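The 2-space reductions summarized above are small enough to check numerically. The sketch below (a Python model of our own, with an orthonormal basis e1·e1 = e2·e2 = 1, e1·e2 = 0) represents a 1-element a e1 + b e2 as the pair (a, b):

```python
def ext(u, v):
    # order-0 product of two 1-elements = exterior product;
    # returns the coefficient of e1^e2
    return u[0] * v[1] - u[1] * v[0]

def scalar(u, v):
    # order-1 product of two 1-elements = scalar product (Euclidean metric)
    return u[0] * v[0] + u[1] * v[1]

def interior(c, u):
    # order-1 product of the 2-element c e1^e2 with a 1-element:
    # (c e1^e2) _| (a e1 + b e2) = -b c e1 + a c e2 in the Euclidean metric
    return (-u[1] * c, u[0] * c)

u, v = (2, 3), (5, 7)
print(ext(u, v), scalar(u, v), interior(4, u))   # -1 31 (-12, 8)
```

The order-1 product of two 1-elements is visibly symmetric in its arguments, matching the commutativity noted above.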

10.16 Generalized Products in 3-Space


To be completed

10.17 Summary
To be completed


11 Exploring Hypercomplex Algebra

11.1 Introduction
In this chapter we explore the relationship of Grassmann algebra to the hypercomplex algebras. Typical examples of hypercomplex algebras are the algebras of the real numbers ℝ, complex numbers ℂ, quaternions ℍ, octonions (or Cayley numbers) 𝕆, sedenions 𝕊, and the Clifford algebras. We will show that each of these algebras can be generated as an algebra of Grassmann numbers under a new product which we take the liberty of calling the hypercomplex product. This hypercomplex product of an m-element α_m and a k-element β_k is denoted α_m ⊚ β_k and is defined as a sum of signed generalized Grassmann products of α_m with β_k.

α_m ⊚ β_k = ∑_{λ=0}^{Min[m,k]} σ_{m,λ,k} (α_m ∘_λ β_k)    11.1

Here, the sign σ_{m,λ,k} is a function of the grades m and k of the factors and the order λ of the generalized product, and takes the values +1 or −1 depending on the type of hypercomplex product being defined. Note particularly that this approach does not require the algebra to be defined in terms of generators or basis elements other than those of the Grassmann algebra. We will see in what follows that this leads to considerably more insight into the nature of the hypercomplex algebras than is afforded by the traditional algebraic approach.

The generalized Grassmann products on the right-hand side subsume and extend the principal dimension-independent products of geometric linear algebra: the exterior and interior products. It is enticing to postulate that many algebras with a geometric interpretation may be defined by sums of generalized products.

Our approach then, is not to define a hypercomplex algebra by defining new (hypercomplex) elements, but rather to define a new product called the hypercomplex product which acts on ordinary Grassmann numbers. This approach has the conceptual advantage that the hypercomplex algebra can now sit comfortably inside the Grassmann algebra and be used in geometric constructions just like the exterior and interior products.

Strictly speaking, this means that there are no hypercomplex numbers per se, only Grassmann numbers acting under the hypercomplex product. The hypercomplex product defined above, in which the hypercomplex signs σ_{m,λ,k} are not yet specified, may be viewed as a definition of the most unconstrained hypercomplex algebra. Different hypercomplex algebras may be defined by constraining these signs to have given values. The traditional hypercomplex algebras will have one set of constraints, while the Clifford algebras will have another. And just as with the Grassmann algebras, they will have significantly different properties depending on the dimension of their underlying space.

We will also see that one of the major sources of difference is due to the fact that spaces of dimension higher than three can have non-simple elements. For example, this is the reason why the algebra of octonions, since it lives in a Grassmann algebra with underlying space of three dimensions, is the largest hypercomplex algebra with a scalar norm.

There is some care needed with terminology. In our approach to hypercomplex algebras, there are no hypercomplex numbers per se, but only different types of hypercomplex product (that is, with different sets of constraints on the hypercomplex signs), and spaces of different dimension. These two factors together give the traditional hypercomplex algebras their individual flavours. So, what then is a quaternion? From our approach, it is not an object, but rather a Grassmann number living in a two-dimensional subspace (the Grassmann algebra on an underlying linear space of two dimensions has 4 basis elements) responding to a specific type of hypercomplex product. With this meaning, and in order to concord with traditional usage, we shall continue to speak of hypercomplex numbers, such as quaternions, as if they were objects in their own right. "Hypercomplex number" should be read as "Grassmann number under the hypercomplex product". "Quaternion" should be read as "hypercomplex number in a Grassmann algebra of two dimensions".
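The one-dimensional case already illustrates the idea. A Grassmann number in one dimension has the form a + b e1; with a Euclidean metric (e1·e1 = 1) and the sign constraint σ_{1,1,1} = −1 derived later in this chapter, e1 ⊚ e1 = −1, and the hypercomplex product reproduces complex multiplication. A minimal Python sketch (the pair representation and function name are ours):

```python
def hc_mult(x, y):
    # x = (a, b) represents the Grassmann number a + b e1;
    # with sigma_{1,1,1} = -1 and e1.e1 = 1 we have e1 (*) e1 = -1,
    # so the product is exactly complex multiplication
    a1, b1 = x
    a2, b2 = y
    return (a1 * a2 - b1 * b2, a1 * b2 + b1 * a2)

print(hc_mult((1, 2), (3, 4)))   # (-5, 10), matching (1 + 2i)(3 + 4i) = -5 + 10i
```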
More carefully, by "Grassmann algebra of two dimensions", we mean a Grassmann algebra constructed on a two-dimensional subspace of the underlying linear space. It turns out that the real numbers are hypercomplex numbers in a subspace of zero dimensions, the complex numbers are hypercomplex numbers in a subspace of one dimension, the quaternions are hypercomplex numbers in a subspace of two dimensions, the octonions are hypercomplex numbers in a subspace of three dimensions, and the sedenions are hypercomplex numbers in a subspace of four dimensions. The number of base elements in the hypercomplex algebra is equal to the number of basis elements in the corresponding Grassmann algebra. We can see from this approach that different hypercomplex numbers can now live and work in the same space. For example, complex numbers can operate on vectors, or quaternions on octonions. In Chapter 12: Exploring Clifford Algebra we will also show that the Clifford product may be defined in a space of any number of dimensions as a hypercomplex product with signs:

σ_{m,λ,k} = (−1)^(λ(m−λ) + λ(λ−1)/2)
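As a quick check on this sign formula, the following Python sketch (the function name is ours) tabulates the sign for low grades. For two 1-elements (m = 1) both the λ = 0 (exterior) and λ = 1 (interior) terms carry sign +1, giving the familiar Clifford product of vectors as the sum of their exterior and interior products.

```python
def clifford_sign(m, lam):
    # sigma_{m,lam,k} = (-1)^(lam*(m - lam) + lam*(lam - 1)/2);
    # note it depends only on m and lam, not on k
    return (-1) ** (lam * (m - lam) + lam * (lam - 1) // 2)

print([[clifford_sign(m, lam) for lam in range(m + 1)] for m in range(4)])
# [[1], [1, 1], [1, -1, -1], [1, 1, -1, -1]]
```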

We could also explore the algebras generated by products of the type defined by definition 11.1, but which have some of the σ_{m,λ,k} defined to be zero. For example the relations

σ_{m,λ,k} = 1, λ = 0;    σ_{m,λ,k} = 0, λ > 0

define the algebra with the hypercomplex product reducing to the exterior product, and

σ_{m,λ,k} = 1, λ = Min[m, k];    σ_{m,λ,k} = 0, λ ≠ Min[m, k]

define the algebra with the hypercomplex product reducing to the interior product. Both of these definitions however lead to products having zero divisors; that is, some products can be zero even though neither of their factors is zero. Because one of the principally useful characteristics of the scalar-normed hypercomplex numbers (that is, hypercomplex numbers up to and including the octonions) is that they have no zero divisors, we shall limit ourselves in this chapter to exploring just those algebras. That is, we shall always assume that σ_{m,λ,k} is not zero.

We begin by exploring the constraints we would require on the hypercomplex signs σ_{m,λ,k} in definition 11.1 which will lead to algebras with a scalar norm. It turns out that we can ensure a scalar norm (up to the octonions) by constraining only a subset of the signs. The specification of the rest of the signs will then be explored by considering spaces of increasing dimension, starting with a space of zero dimensions generating the real numbers. In order to embed the algebras defined on lower dimensional spaces within those defined on higher dimensional spaces, we will maintain the lower dimensional relations we determine for the hypercomplex signs when we explore the higher dimensional spaces. Finally, we note that this approach to hypercomplex numbers is invariant in a tensorial sense. The traditional product tables for the hypercomplex numbers are recovered in our approach by assuming the underlying space has a Euclidean metric. However, we will show that the hypercomplex product definition in 11.1 above is still meaningful under different metrics. Indeed, a simple change of metric can lead to a whole range of new hypercomplex algebras.

11.2 Some Initial Definitions and Properties

Distributivity

Distributivity of hypercomplex products of sums of elements of the same grade can be shown from definition 11.1 using the distributivity of the generalized Grassmann products.

(α1_m + α2_m + …) ⊚ β_k = α1_m ⊚ β_k + α2_m ⊚ β_k + …    11.2
α_m ⊚ (β1_k + β2_k + …) = α_m ⊚ β1_k + α_m ⊚ β2_k + …

We extend this by definition to sums of elements of different grades.

(α1_{m1} + α2_{m2} + …) ⊚ β_k = α1_{m1} ⊚ β_k + α2_{m2} ⊚ β_k + …    11.3
α_m ⊚ (β1_{k1} + β2_{k2} + …) = α_m ⊚ β1_{k1} + α_m ⊚ β2_{k2} + …


Factorization of scalars

By using the definition 11.1 of the hypercomplex product and the properties of the generalized Grassmann product we can easily show that scalars may be factorized out of any hypercomplex product:

(a α_m) ⊚ β_k = a (α_m ⊚ β_k) = α_m ⊚ (a β_k)    11.4

Multiplication by scalars

The next property we will require of the hypercomplex product is that it behaves as expected when one of the elements is a scalar. That is:

α_m ⊚ b = b ⊚ α_m = b α_m    11.5

From the relations 11.5 and the definition 11.1 we can determine the constraints on the σ_{m,λ,k} which will accomplish this.

α_m ⊚ b = ∑_{λ=0}^{Min[m,0]} σ_{m,λ,0} (α_m ∘_λ b) = σ_{m,0,0} (α_m ∘_0 b) = σ_{m,0,0} b α_m

b ⊚ α_m = ∑_{λ=0}^{Min[0,m]} σ_{0,λ,m} (b ∘_λ α_m) = σ_{0,0,m} (b ∘_0 α_m) = σ_{0,0,m} b α_m

Hence the first constraints we impose on the σ_{m,λ,k} to ensure the properties we require for multiplication by scalars are that:

σ_{m,0,0} = 1,    σ_{0,0,m} = 1    11.6

Hypercomplex scalar constraints

We call this pair of constraints HypercomplexScalarConstraints (alias Hsc) and encode them as a pair of rules.

HypercomplexScalarConstraints := {σ_{m_,0,0} → 1, σ_{0,0,k_} → 1}    11.7

These rules are the first in a collection which we will eventually use to define the hypercomplex algebras.

The real numbers ℝ

The real numbers are a simple consequence of the relations determined above for scalar multiplication. When the grades of both factors are zero we have that σ_{0,0,0} = 1. Hence:

a ⊚ b = b ⊚ a = a b    11.8

The hypercomplex product in a Grassmann algebra of zero dimensions is therefore equivalent to the (usual) real field product of the underlying linear space. Hypercomplex numbers in a Grassmann algebra of zero dimensions are therefore (isomorphic to) the real numbers.

The conjugate

The conjugate of a Grassmann number X is denoted Xc and is defined as the body Xb (scalar part) of X minus the soul Xs (non-scalar part) of X.

X = Xb + Xs        Xc = Xb − Xs    11.9

Example

Let X be a Grassmann number in 3 dimensions.

X = a0 + a1 e1 + a2 e2 + a3 e3 + a4 e1∧e2 + a5 e1∧e3 + a6 e2∧e3 + a7 e1∧e2∧e3

The body and soul of X can be computed as

{Body[X], Soul[X]}
{a0, a1 e1 + a2 e2 + a3 e3 + a4 e1∧e2 + a5 e1∧e3 + a6 e2∧e3 + a7 e1∧e2∧e3}

The conjugate of X can be computed as

GrassmannConjugate[X]
a0 − a1 e1 − a2 e2 − a3 e3 − a4 e1∧e2 − a5 e1∧e3 − a6 e2∧e3 − a7 e1∧e2∧e3
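The body, soul, and conjugate operations are straightforward to model outside Mathematica as well. The sketch below (our own representation, not the GrassmannAlgebra package's) stores a Grassmann number in 3 dimensions as a dictionary from basis-index tuples to coefficients, with () the scalar basis element and (1, 2) standing for e1∧e2:

```python
def body(x):
    # the scalar part of the Grassmann number
    return x.get((), 0)

def soul(x):
    # the non-scalar part
    return {k: v for k, v in x.items() if k != ()}

def conjugate(x):
    # body minus soul: negate every non-scalar coefficient
    return {k: (v if k == () else -v) for k, v in x.items()}

X = {(): 1, (1,): 2, (1, 2): 3, (1, 2, 3): 4}   # 1 + 2 e1 + 3 e1^e2 + 4 e1^e2^e3
print(conjugate(X))   # {(): 1, (1,): -2, (1, 2): -3, (1, 2, 3): -4}
```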

11.3 The Norm of a Grassmann Number
The norm

One of the important properties that ℝ, ℂ, ℍ, and 𝕆 possess in common is that of having a positive scalar norm. The norm of a Grassmann number X is denoted 𝒩[X] and is defined as the hypercomplex product of X with its conjugate.

𝒩[X] = X ⊚ Xc = (Xb + Xs) ⊚ (Xb − Xs) = Xb² − Xs ⊚ Xs    11.10

The norm of an m-element α_m is then denoted 𝒩[α_m] and defined as the hypercomplex product of α_m with its conjugate α_m^c. If α_m is not scalar, then α_m^c is simply −α_m. Hence

𝒩[α_m] = α_m ⊚ α_m^c = −α_m ⊚ α_m,    m ≠ 0    11.11

The norm of a simple m-element

If α_m is simple, then the generalized Grassmann product α_m ∘_λ α_m is zero for all λ except λ equal to m, in which case the generalized product (and hence the hypercomplex product) becomes an inner product. Thus for m not zero we have:

α_m ⊚ α_m = σ_{m,m,m} (α_m ⌋ α_m),    m ≠ 0

Equation 11.11 then implies that for the norm to be a positive scalar quantity we must have σ_{m,m,m} = −1.

𝒩[α_m] = −α_m ⊚ α_m = α_m ⌋ α_m,    α_m simple    11.12

σ_{m,m,m} = −1,    m ≠ 0    11.13

The norm of a scalar a is just a² = a ⊚ a, so formula 11.12 applies also for m equal to zero.
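For a simple element, the inner product α_m ⌋ α_m is the Gram determinant of the scalar products of its 1-element factors, which is non-negative for real factors; this is what makes the norm of a simple element a positive scalar. A small Python sketch for a simple 2-element v1 ∧ v2 in a Euclidean space (the function names are ours):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm_of_bivector(v1, v2):
    # (v1 ^ v2) _| (v1 ^ v2) = det [[v1.v1, v1.v2], [v2.v1, v2.v2]],
    # the Gram determinant, non-negative by the Cauchy-Schwarz inequality
    return dot(v1, v1) * dot(v2, v2) - dot(v1, v2) ** 2

print(norm_of_bivector((1, 2, 0), (0, 1, 1)))   # 5*2 - 2**2 = 6
```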

Hypercomplex square constraints


We call these constraints the HypercomplexSquareConstraints (alias Hsq) and encode them as the rules:

HypercomplexSquareConstraints := {σ_{m_,m_,m_} :> −1 /; m ≠ 0}        11.14

These rules are the second in a collection which we will eventually use to define the hypercomplex algebras. We remark that, although the constraints 11.14 have been determined from a consideration of the hypercomplex product of a simple element with itself, they must also now apply to the interior product term of any hypercomplex product of two elements of the same grade. From formula 11.1, the hypercomplex product of two elements of the same grade can be written

α_m ∘ β_m ≡ Σ_{λ=0}^{m} σ_{m,λ,m} α_m ∘_λ β_m ≡ σ_{m,0,m} α_m ∘₀ β_m + σ_{m,1,m} α_m ∘₁ β_m + … + σ_{m,m,m} α_m ∘_m β_m

The last term involves σ_{m,m,m}, and we must take it to be −1 here also. This expansion also reminds us that while α_m ∘ α_m is necessarily scalar, α_m ∘ β_m is not.
m m m m

m,m,m

The antisymmetry of products of elements of different grade


We consider a general Grassmann number X ≡ a + A, where a is its body (scalar component) and A is its soul (non-scalar component). The norm 𝒩[X] of X is then:

𝒩[X] ≡ X ∘ Xc ≡ (a + A)∘(a − A) ≡ a² − A∘A

To investigate the norm further we look at the product A∘A. Suppose A is the sum of two (not necessarily simple) elements of different grade: an m-element α_m and a k-element β_k. Then A∘A becomes:

A∘A ≡ (α_m + β_k)∘(α_m + β_k) ≡ α_m∘α_m + α_m∘β_k + β_k∘α_m + β_k∘β_k


We would like the norm of a general Grassmann number to be a scalar quantity. But none of the generalized product components of α_m∘β_k or β_k∘α_m can be scalar if m and k are different, so we must choose to make β_k∘α_m equal to −α_m∘β_k as a fundamental defining axiom of the hypercomplex product, thus eliminating both products from the expression for the norm. This requirement of antisymmetry can always be satisfied because, since the grades are different, the two products are always distinguishable.

α_m∘β_k ≡ −β_k∘α_m        11.15

Hypercomplex antisymmetry constraints


It has been shown in the previous section that the hypercomplex product of two elements α_m and β_k of different grade must be antisymmetric in order to enable the norm of a general sum of simple elements to be scalar. We now explore what constraints are required on the hypercomplex signs in order to effect this antisymmetry. We begin with the definition of the hypercomplex product.

α_m ∘ β_k ≡ Σ_{λ=0}^{Min[m,k]} σ_{m,λ,k} α_m ∘_λ β_k

Now write the definition for the factors in the reverse order, and reverse the order of the generalized Grassmann products on the right hand side back to their initial order. (The formula for doing this is in the previous chapter.)

β_k ∘ α_m ≡ Σ_{λ=0}^{Min[m,k]} σ_{k,λ,m} β_k ∘_λ α_m ≡ Σ_{λ=0}^{Min[m,k]} σ_{k,λ,m} (−1)^{(m−λ)(k−λ)} α_m ∘_λ β_k

Hence if we are to require that α_m ∘ β_k ≡ −β_k ∘ α_m then we must have

σ_{m,λ,k} ≡ −(−1)^{(m−λ)(k−λ)} σ_{k,λ,m}

This in turn implies that if (m−λ)(k−λ) is odd, then σ_{m,λ,k} ≡ σ_{k,λ,m}; if (m−λ)(k−λ) is even, then σ_{m,λ,k} ≡ −σ_{k,λ,m}.

Below is a table of parities of (m−λ)(k−λ) as a function of the parities of m, k, and λ.

2009 9 3

Grassmann Algebra Book.nb

627

m            even  even  even  even  odd   odd   odd   odd
k            even  even  odd   odd   even  even  odd   odd
λ            even  odd   even  odd   even  odd   even  odd
m − λ        even  odd   even  odd   odd   even  odd   even
k − λ        even  odd   odd   even  even  odd   odd   even
(m−λ)(k−λ)   even  odd   even  even  even  even  odd   even

This table can be summarized by the rule: if m and k are of the same parity, but λ is of the opposite parity, then the parity of (m−λ)(k−λ) is odd; otherwise it is even. We encode these rules as the HypercomplexAntisymmetryConstraints (alias Has).

HypercomplexAntisymmetryConstraints := {
  σ_{m_?EvenQ,λ_?OddQ,k_?EvenQ} :> σ_{k,λ,m} /; m < k,
  σ_{m_?OddQ,λ_?EvenQ,k_?OddQ} :> σ_{k,λ,m} /; m < k,
  σ_{m_,λ_,k_} :> −σ_{k,λ,m} /; m < k}        11.16

Note that this constraint does not constrain the values of the remaining σ_{m,λ,k} for m > k.
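The parity rule behind these constraints can be checked mechanically. The Python sketch below is our own verification (not package code): it computes the parity of (m−λ)(k−λ) directly and confirms it is odd exactly when m and k share a parity that λ does not.

```python
def product_parity_odd(m, l, k):
    """True when (m - l)(k - l) is odd."""
    return ((m - l) * (k - l)) % 2 == 1

def rule_predicts_odd(m, l, k):
    """The summary rule: m and k of the same parity, l of the opposite parity."""
    return m % 2 == k % 2 and l % 2 != m % 2

# Exhaustive check over a range of grades m, k and orders l.
assert all(
    product_parity_odd(m, l, k) == rule_predicts_odd(m, l, k)
    for m in range(8) for k in range(8) for l in range(8)
)
```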

The norm of a Grassmann number in terms of hypercomplex products


More generally, suppose A were the sum of several elements of different grade.

A ≡ α_m + β_k + γ_p + …

The antisymmetry axiom then allows us to write A∘A as the sum of hypercomplex squares of the components of different grade of A.

A∘A ≡ (α_m + β_k + γ_p + …)∘(α_m + β_k + γ_p + …) ≡ α_m∘α_m + β_k∘β_k + γ_p∘γ_p + …

And hence the norm is expressed simply as

𝒩[X] ≡ a² − α_m∘α_m − β_k∘β_k − γ_p∘γ_p − …
m m k k p p


𝒩[a + α_m + β_k + γ_p + …] ≡ a² − α_m∘α_m − β_k∘β_k − γ_p∘γ_p − …        11.17

The norm of a Grassmann number of simple components


Consider a Grassmann number A in which the elements α_m, β_k, … of each grade are simple.

A ≡ α_m + β_k + γ_p + …

Then by 11.12 and 11.15 we have that the norm of A is positive and may be written as the sum of inner products of the individual simple elements.

𝒩[a + α_m + β_k + γ_p + …] ≡ a² + α_m⋅α_m + β_k⋅β_k + γ_p⋅γ_p + …        11.18

The norm of a non-simple element


We now turn our attention to the case in which one of the component elements of X may not be simple. This is the general case for Grassmann numbers in spaces of dimension greater than 3, and indeed is the reason why the suite of hypercomplex numbers with positive scalar norm ends with the octonions. An element of any grade in a space of dimension 3 or less must be simple. So in spaces of dimension 0, 1, 2, and 3 the norm of an element will always be a scalar.

We now focus our attention on the product α_m∘α_m (where α_m may or may not be simple). The following analysis will be critical to our understanding of the properties of hypercomplex numbers in spaces of dimension greater than 3. Suppose α_m is the sum of two simple m-elements α₁ and α₂. The product α_m∘α_m then becomes:

α_m∘α_m ≡ (α₁ + α₂)∘(α₁ + α₂) ≡ α₁∘α₁ + α₁∘α₂ + α₂∘α₁ + α₂∘α₂

Since α₁ and α₂ are simple, α₁∘α₁ and α₂∘α₂ are scalar, and can be written:

α₁∘α₁ + α₂∘α₂ ≡ −α₁⋅α₁ − α₂⋅α₂

The expression of the remaining terms α₁∘α₂ and α₂∘α₁ in terms of exterior and interior products is more interesting and complex. We discuss this in section 11.4.


Summary of the hypercomplex constraints


Because of their importance in what follows, we collect here a summary of the constraints we will need to apply to the hypercomplex signs in order to get the hypercomplex product to operate as expected with scalars, and generate a positive scalar norm.

Scalar constraints
HypercomplexScalarConstraints := {σ_{m_,0,0} :> 1, σ_{0,0,k_} :> 1}

Square constraints
HypercomplexSquareConstraints := {σ_{m_,m_,m_} :> −1 /; m ≠ 0}

Antisymmetry constraints
HypercomplexAntisymmetryConstraints := {
  σ_{m_?EvenQ,λ_?OddQ,k_?EvenQ} :> σ_{k,λ,m} /; m < k,
  σ_{m_?OddQ,λ_?EvenQ,k_?OddQ} :> σ_{k,λ,m} /; m < k,
  σ_{m_,λ_,k_} :> −σ_{k,λ,m} /; m < k}

Note that whereas the scalar and square constraints gave specific values, these antisymmetry constraints still leave half of each pair unspecified.

Hypercomplex constraints
We collect these as our hypercomplex constraints which we designate H.
H := Flatten[{HypercomplexScalarConstraints, HypercomplexSquareConstraints, HypercomplexAntisymmetryConstraints}]

Residual constraints
Although these constraints cover a significant range of values of the hypercomplex signs, there are two residual sets of signs that must be specified by other considerations: the signs σ_{m,λ,k} with m > k, and the signs σ_{m,λ,m} with m > λ.

We shall explore these further in the sections below.

11.4 Products of two different elements of the same grade


The symmetrized product of two m-elements
We now investigate the remaining types of hypercomplex products which can show up as terms in the computation of a norm: products of two different elements of the same grade. These types of term introduce the most complexity into the nature of the hypercomplex product. It is their potential to introduce non-scalar terms into the (usually defined) norms of hypercomplex numbers of order higher than the octonions that makes them of most interest.

Returning to our definition of the hypercomplex product 11.1, and specializing it to the case where both elements are of the same grade m (writing them α₁ and α₂), we get:

α₁∘α₂ ≡ Σ_{λ=0}^{m} σ_{m,λ,m} α₁ ∘_λ α₂

Reversing the order of the factors in the hypercomplex product gives:

α₂∘α₁ ≡ Σ_{λ=0}^{m} σ_{m,λ,m} α₂ ∘_λ α₁

The generalized products on the right may now be reversed, together with the change of sign (−1)^{(m−λ)(m−λ)} = (−1)^{m−λ}.

α₂∘α₁ ≡ Σ_{λ=0}^{m} (−1)^{m−λ} σ_{m,λ,m} α₁ ∘_λ α₂

The sum α₁∘α₂ + α₂∘α₁ may now be written as:

α₁∘α₂ + α₂∘α₁ ≡ Σ_{λ=0}^{m} (1 + (−1)^{m−λ}) σ_{m,λ,m} α₁ ∘_λ α₂        11.19

Because we will need to refer to this sum of products subsequently, we will call it the symmetrized product of two m-elements (because the expression does not change if the order of the elements is reversed).


Symmetrized products for elements of different grades


For two (in general, different) simple elements of the same grade α₁ and α₂ we have shown above that:

α₁∘α₂ + α₂∘α₁ ≡ Σ_{λ=0}^{m} (1 + (−1)^{m−λ}) σ_{m,λ,m} α₁ ∘_λ α₂

The term 1 + (−1)^{m−λ} will be 2 for m and λ both even or both odd, and zero otherwise. To get a clearer picture of what this formula entails, we write it out explicitly for the lower values of m. Note that we use the fact already established that σ_{m,m,m} ≡ −1.

m=0:   a∘b + b∘a ≡ 2 a b        11.20

m=1:   α₁∘α₂ + α₂∘α₁ ≡ −2 α₁⋅α₂        11.21

m=2:   α₁∘α₂ + α₂∘α₁ ≡ 2 σ_{2,0,2} α₁∧α₂ − 2 α₁⋅α₂        11.22

m=3:   α₁∘α₂ + α₂∘α₁ ≡ 2 σ_{3,1,3} α₁ ∘₁ α₂ − 2 α₁⋅α₂        11.23

m=4:   α₁∘α₂ + α₂∘α₁ ≡ 2 σ_{4,0,4} α₁∧α₂ + 2 σ_{4,2,4} α₁ ∘₂ α₂ − 2 α₁⋅α₂        11.24


The body of a hypercomplex square


We now break the symmetrized product up into its body (scalar component) and soul (non-scalar components). It is apparent from formula 11.19 and the examples given above that the body of the symmetrized product α₁∘α₂ + α₂∘α₁ is the inner product term in which λ becomes equal to m, that is, 2 σ_{m,m,m} (α₁⋅α₂). And since the hypercomplex square constraint requires that σ_{m,m,m} ≡ −1, we can thus write:

Body[α₁∘α₂ + α₂∘α₁] ≡ −2 α₁⋅α₂        11.25

Suppose now that we have an m-element written as the sum of two m-elements.

α ≡ α₁ + α₂

Calculating the body of α∘α gives:

Body[α∘α] ≡ Body[α₁∘α₁] + Body[α₁∘α₂ + α₂∘α₁] + Body[α₂∘α₂]

Body[α∘α] ≡ −α₁⋅α₁ − 2 α₁⋅α₂ − α₂⋅α₂ ≡ −α⋅α

Hence, even if a component element like α is not simple, the body of its hypercomplex square is given by the same expression as if it were simple, that is, the negative of its inner product square.

Body[α∘α] ≡ −α⋅α        11.26

It is straightforward to see that this is valid for α being a sum of any number of m-elements. Thus we can say that the body of the hypercomplex square of an arbitrary m-element is the negative of its inner product square.

The soul of a hypercomplex square


The soul of α∘α is then the non-scalar residue given by the formula:

Soul[α∘α] ≡ Σ_{λ=0}^{m−2} (1 + (−1)^{m−λ}) σ_{m,λ,m} α₁ ∘_λ α₂        11.27

(Here we have used the fact that, because of the coefficient (1 + (−1)^{m−λ}), the term with λ equal to m−1 is always zero, so we only need to sum to m−2.)

For convenience in generating examples of this expression, we temporarily define the soul of α∘α as a function of m, and denote it h[m]. To make it easier to read we convert the generalized products where possible to exterior and interior products.

h[m_] := Sum[(1 + (−1)^(m−λ)) σ_{m,λ,m} α₁ ∘_λ α₂, {λ, 0, m−2}] // SimplifyGeneralizedProducts

With this function we can make a palette of souls for various values of m. We do this below for m from 1 to 6.
ComposePalette[{"Soul of a Hypercomplex Square"}, Table[{m, h[m]}, {m, 1, 6}]]

Soul of a Hypercomplex Square
1   0
2   2 σ_{2,0,2} α₁∧α₂
3   2 σ_{3,1,3} α₁ ∘₁ α₂
4   2 σ_{4,0,4} α₁∧α₂ + 2 σ_{4,2,4} α₁ ∘₂ α₂
5   2 σ_{5,1,5} α₁ ∘₁ α₂ + 2 σ_{5,3,5} α₁ ∘₃ α₂
6   2 σ_{6,0,6} α₁∧α₂ + 2 σ_{6,2,6} α₁ ∘₂ α₂ + 2 σ_{6,4,6} α₁ ∘₄ α₂

Of particular note is the soul of the symmetrized product of two different simple 2-elements.

2 σ_{2,0,2} α₁∧α₂

This 4-element is the simplest critical potentially non-scalar residue in a norm calculation. It is always zero in spaces of dimension 2 or 3. Hence the norm of a Grassmann number in these spaces under the hypercomplex product is guaranteed to be scalar.


In a space of 4 dimensions, on the other hand, this element may not be zero. Hence a space of dimension 3 is the highest-dimensional space in which the norm of a Grassmann number is guaranteed to be scalar.

Summary of results of this section


Consider a general Grassmann number X.

X ≡ a + α_m + β_k + γ_p + …

Here, we suppose a to be scalar, and the rest of the terms to be non-scalar simple or non-simple elements. The generalized hypercomplex norm 𝒩[X] of X may be written as:

𝒩[X] ≡ a² − α_m∘α_m − β_k∘β_k − γ_p∘γ_p − …

The hypercomplex square α_m∘α_m of an element has, in general, both a body and a soul. The body of α_m∘α_m is the negative of the inner product square.

Body[α_m∘α_m] ≡ −α_m⋅α_m

The soul of α_m∘α_m depends on the terms of which it is composed. If α_m ≡ α₁ + α₂ + α₃ + … then

Soul[α_m∘α_m] ≡ Σ_{λ=0}^{m−2} (1 + (−1)^{m−λ}) σ_{m,λ,m} (α₁ ∘_λ α₂ + α₁ ∘_λ α₃ + α₂ ∘_λ α₃ + …)

If the component elements of different grade of X are simple, then its soul is zero, and its norm becomes the scalar:

𝒩[X] ≡ a² + α_m⋅α_m + β_k⋅β_k + γ_p⋅γ_p + …

It is only in 3-space that we are guaranteed that all the components of a Grassmann number are simple. Therefore it is only in 3-space that we are guaranteed that the norm of a Grassmann number under the hypercomplex product is scalar.


11.5 The Complex Numbers ℂ


Constraints on the hypercomplex signs
Complex numbers may be viewed as hypercomplex numbers in a space of one dimension. In one dimension all 1-elements are of the form a e, where a is a scalar and e is the basis element. Let α be a 1-element. From the definition of the hypercomplex product we see that the hypercomplex product of a 1-element with itself is the (possibly signed) scalar product.

α∘α ≡ Σ_{λ=0}^{Min[1,1]} σ_{1,λ,1} α ∘_λ α ≡ σ_{1,0,1} α∧α + σ_{1,1,1} α⋅α ≡ σ_{1,1,1} α⋅α

In the usual notation, the product of two complex numbers would be written:

(a + b ⅈ)(c + d ⅈ) ≡ (a c − b d) + (b c + a d) ⅈ

In the general hypercomplex notation we have:

(a + b e)∘(c + d e) ≡ a c + a (d e) + (b e) c + (b e)∘(d e)

Simplifying this using the relations 11.6 and 11.7 above gives:

(a c + b d e∘e) + (b c + a d) e ≡ (a c + b d σ_{1,1,1} e⋅e) + (b c + a d) e

Isomorphism with the complex numbers is then obtained by constraining σ_{1,1,1} e⋅e to be −1. One immediate interpretation that we can explore to satisfy this is that e is a unit 1-element (with e⋅e ≡ 1), and σ_{1,1,1} ≡ −1. This then is the constraint we will impose to allow the incorporation of complex numbers in the hypercomplex structure.

σ_{1,1,1} ≡ −1        11.28

Complex numbers as Grassmann numbers under the hypercomplex product


The imaginary unit ⅈ is then equivalent to a unit basis element (e, say) of the 1-space under the hypercomplex product.


ⅈ ⟷ e,   e⋅e ≡ 1        11.29

ⅈ∘ⅈ ≡ e∘e ≡ −1        11.30

In this interpretation, instead of ⅈ being a new entity with the special property ⅈ² ≡ −1, the focus is shifted to interpret it as a unit 1-element acting under a new product operation ∘. Complex numbers are then interpreted as Grassmann numbers in a space of one dimension under the hypercomplex product operation. For example, the norm of a complex number a + b ⅈ ⟷ a + b e can be calculated as:

𝒩[a + b e] ≡ (a + b e)∘(a − b e) ≡ a² − b² e∘e ≡ a² + b² e⋅e ≡ a² + b²        11.31
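This isomorphism with ℂ is easy to exhibit numerically. In the Python sketch below (our own illustration, with the rule e∘e = −1 hard-coded), a 1-space Grassmann number a + b e is represented as the pair (a, b), and the hypercomplex product reproduces ordinary complex multiplication.

```python
def hyper(x, y):
    """Hypercomplex product of a + b e and c + d e.

    With sigma_{1,1,1} = -1 and e.e = 1 we have e*e = -1, so the
    product is (a c - b d) + (b c + a d) e, i.e. complex multiplication.
    """
    a, b = x
    c, d = y
    return (a * c - b * d, b * c + a * d)

def norm(x):
    """N[a + b e] = (a + b e)*(a - b e) = a^2 + b^2 (formula 11.31)."""
    a, b = x
    return hyper(x, (x[0], -x[1]))[0]
```

For example, `hyper((2, 3), (4, -1))` agrees with (2 + 3ⅈ)(4 − ⅈ) = 11 + 10ⅈ.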

11.6 The Hypercomplex Product in a 2-Space


Tabulating products in 2-space
From our previous interpretation of complex numbers as Grassmann numbers in a space of one dimension under the hypercomplex product operation, we shall require any one dimensional subspace of the 2-space under consideration to have the same properties. The product table for the types of hypercomplex products that can be encountered in a 2-space can be entered by using the GrassmannAlgebra function HypercomplexProductTable.
T = HypercomplexProductTable[{1, a, b, a∧b}]
{{1∘1, 1∘a, 1∘b, 1∘(a∧b)}, {a∘1, a∘a, a∘b, a∘(a∧b)}, {b∘1, b∘a, b∘b, b∘(a∧b)}, {(a∧b)∘1, (a∧b)∘a, (a∧b)∘b, (a∧b)∘(a∧b)}}

To make this easier to read we format this with the GrassmannAlgebra function TableTemplate.

TableTemplate[T]

1∘1       1∘a       1∘b       1∘(a∧b)
a∘1       a∘a       a∘b       a∘(a∧b)
b∘1       b∘a       b∘b       b∘(a∧b)
(a∧b)∘1   (a∧b)∘a   (a∧b)∘b   (a∧b)∘(a∧b)


Products where one of the factors is a scalar have already been discussed. Products of a 1-element with itself have been discussed in the section on complex numbers. Applying these results to the table above enables us to simplify it to:

1      a          b          a∧b
a      −(a⋅a)     a∘b        a∘(a∧b)
b      b∘a        −(b⋅b)     b∘(a∧b)
a∧b    (a∧b)∘a    (a∧b)∘b    (a∧b)∘(a∧b)

In this table there are four essentially different new products which we have not yet discussed.

a∘b    a∘(a∧b)    (a∧b)∘a    (a∧b)∘(a∧b)

In the next three subsections we will take each of these products and, with a view to developing the quaternion algebra, show how they may be expressed in terms of exterior and interior products.

The hypercomplex product of two 1-elements


From the definition, and the constraint σ_{1,1,1} ≡ −1 derived above from the discussion on complex numbers, we can write the hypercomplex product of two (possibly) distinct 1-elements as

a∘b ≡ σ_{1,0,1} a∧b + σ_{1,1,1} (a⋅b) ≡ σ_{1,0,1} a∧b − (a⋅b)

The hypercomplex product b∘a can be obtained by reversing the sign of the exterior product, since the scalar product is symmetric.

a∘b ≡ σ_{1,0,1} a∧b − (a⋅b)        b∘a ≡ −σ_{1,0,1} a∧b − (a⋅b)        11.32

The hypercomplex product of a 1-element and a 2-element


From the definition of the hypercomplex product 11.1 we can obtain expressions for the hypercomplex products of a 1-element and a 2-element in a 2-space. Since the space is only of two dimensions, the 2-element may be represented without loss of generality as a product which incorporates the 1-element as one of its factors.

a∘(a∧b) ≡ Σ_{λ=0}^{Min[1,2]} σ_{1,λ,2} a ∘_λ (a∧b) ≡ σ_{1,0,2} a∧(a∧b) + σ_{1,1,2} a ∘₁ (a∧b)

(a∧b)∘a ≡ Σ_{λ=0}^{Min[2,1]} σ_{2,λ,1} (a∧b) ∘_λ a ≡ σ_{2,0,1} (a∧b)∧a + σ_{2,1,1} (a∧b) ∘₁ a

Since the first term involving only exterior products is zero, and a generalized product of order 1 between a 1-element and a 2-element reduces to the interior product (a∧b)⋅a, we obtain:

a∘(a∧b) ≡ σ_{1,1,2} (a∧b)⋅a        (a∧b)∘a ≡ σ_{2,1,1} (a∧b)⋅a        11.33

The hypercomplex square of a 2-element


All 2-elements in a space of 2 dimensions differ by only a scalar factor. We need only therefore consider the hypercomplex product of a 2-element with itself. The generalized product of two identical elements is zero in all cases except for that which reduces to the inner product. From this fact and the definition of the hypercomplex product we obtain

(a∧b)∘(a∧b) ≡ Σ_{λ=0}^{Min[2,2]} σ_{2,λ,2} (a∧b) ∘_λ (a∧b) ≡ σ_{2,2,2} (a∧b)⋅(a∧b)

(a∧b)∘(a∧b) ≡ σ_{2,2,2} (a∧b)⋅(a∧b)        11.34

The product table in terms of exterior and interior products


Collecting together the results above, we can rewrite the hypercomplex product table solely in terms of exterior and interior products and some (as yet) undetermined signs.

1      a                          b                          a∧b
a      −(a⋅a)                     σ_{1,0,1} a∧b − (a⋅b)      σ_{1,1,2} (a∧b)⋅a
b      −σ_{1,0,1} a∧b − (a⋅b)    −(b⋅b)                     σ_{1,1,2} (a∧b)⋅b
a∧b    σ_{2,1,1} (a∧b)⋅a         σ_{2,1,1} (a∧b)⋅b          σ_{2,2,2} (a∧b)⋅(a∧b)        11.35


11.7 The Quaternions ℍ


The product table for orthonormal elements
Suppose now that a and b are orthonormal. Then:

a⋅a ≡ b⋅b ≡ 1    a⋅b ≡ 0    (a∧b)⋅a ≡ (a⋅a) b ≡ b    (a∧b)⋅b ≡ −(b⋅b) a ≡ −a    (a∧b)⋅(a∧b) ≡ (a⋅a)(b⋅b) ≡ 1

The hypercomplex product table then becomes:

1      a                 b                 a∧b
a      −1                σ_{1,0,1} a∧b     σ_{1,1,2} b
b      −σ_{1,0,1} a∧b    −1                −σ_{1,1,2} a
a∧b    σ_{2,1,1} b       −σ_{2,1,1} a      σ_{2,2,2}
Generating the quaternions


Exploring a possible isomorphism with the quaternions leads us to the correspondence:

a ⟷ 𝕚    b ⟷ 𝕛    a∧b ⟷ 𝕜        11.36

In terms of the quaternion units, the product table becomes:

1    𝕚                𝕛                𝕜
𝕚    −1               σ_{1,0,1} 𝕜      σ_{1,1,2} 𝕛
𝕛    −σ_{1,0,1} 𝕜    −1               −σ_{1,1,2} 𝕚
𝕜    σ_{2,1,1} 𝕛     −σ_{2,1,1} 𝕚     σ_{2,2,2}

To obtain the product table for the quaternions we therefore require:

σ_{1,0,1} ≡ σ_{2,1,1} ≡ 1    σ_{1,1,2} ≡ σ_{2,2,2} ≡ −1        11.37

These values give the quaternion table as expected.

1    𝕚     𝕛     𝕜
𝕚    −1    𝕜     −𝕛
𝕛    −𝕜    −1    𝕚
𝕜    𝕛     −𝕚    −1        11.38
Substituting these values back into the original table 11.35 gives a hypercomplex product table in terms only of exterior and interior products. This table defines the real, complex, and quaternion product operations.

1      a                   b                   a∧b
a      −(a⋅a)              a∧b − (a⋅b)         −(a∧b)⋅a
b      −(a∧b) − (a⋅b)      −(b⋅b)              −(a∧b)⋅b
a∧b    (a∧b)⋅a             (a∧b)⋅b             −((a∧b)⋅(a∧b))        11.39

The norm of a quaternion


Let Q be a quaternion given in basis-free form as an element of a 2-space under the hypercomplex product.

Q ≡ a + α + a∧b        11.40

Here, a is a scalar, α is a 1-element and a∧b is congruent to any 2-element of the space.

The norm of Q is denoted 𝒩[Q] and given as the hypercomplex product of Q with its conjugate Qc. Expanding using formula 11.5 gives:

𝒩[Q] ≡ (a + α + a∧b)∘(a − α − a∧b) ≡ a² − (α + a∧b)∘(α + a∧b)

Expanding the last term gives:

𝒩[Q] ≡ a² − α∘α − α∘(a∧b) − (a∧b)∘α − (a∧b)∘(a∧b)

From table 11.39 we see that:

α∘(a∧b) ≡ −(a∧b)⋅α        (a∧b)∘α ≡ (a∧b)⋅α

Whence:

α∘(a∧b) ≡ −(a∧b)∘α

Using table 11.39 again then allows us to write the norm of a quaternion either in terms of the hypercomplex product or the inner product.

𝒩[Q] ≡ a² − α∘α − (a∧b)∘(a∧b)        𝒩[Q] ≡ a² + α⋅α + (a∧b)⋅(a∧b)        11.41

In terms of 𝕚, 𝕛, and 𝕜:

Q ≡ a + b 𝕚 + c 𝕛 + d 𝕜

𝒩[Q] ≡ a² − (b 𝕚 + c 𝕛)∘(b 𝕚 + c 𝕛) − (d 𝕜)∘(d 𝕜) ≡ a² + b² (𝕚⋅𝕚) + c² (𝕛⋅𝕛) + d² (𝕜⋅𝕜) ≡ a² + b² + c² + d²        11.42
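The derived table 11.38 can be verified computationally. This Python sketch is our own structure-constant encoding of that table (not package code); it multiplies quaternions componentwise and checks the norm formula 11.42.

```python
# Structure constants from table 11.38, basis order (1, i, j, k).
# TABLE[x][y] = (index, sign) of the product of basis elements x and y.
TABLE = [[(0, 1), (1, 1), (2, 1), (3, 1)],
         [(1, 1), (0, -1), (3, 1), (2, -1)],
         [(2, 1), (3, -1), (0, -1), (1, 1)],
         [(3, 1), (2, 1), (1, -1), (0, -1)]]

def qmul(p, q):
    """Quaternion product of 4-component tuples (a, b, c, d)."""
    out = [0, 0, 0, 0]
    for x in range(4):
        for y in range(4):
            idx, sgn = TABLE[x][y]
            out[idx] += sgn * p[x] * q[y]
    return tuple(out)

def qnorm(p):
    """N[Q] = Q * Qc = a^2 + b^2 + c^2 + d^2 (formula 11.42)."""
    a, b, c, d = p
    return qmul(p, (a, -b, -c, -d))[0]
```

With this table, 𝕚∘𝕛 = 𝕜, 𝕛∘𝕚 = −𝕜, and the norm is multiplicative, as expected for ℍ.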

The Cayley-Dickson algebra


If we set a and b to be orthogonal (but not necessarily orthonormal) in the general hypercomplex product table 11.35, we retrieve the multiplication table for the Cayley-Dickson algebra with 4 generators and 2 parameters.

1      a            b            a∧b
a      −(a⋅a)       a∧b          −(a⋅a) b
b      −(a∧b)       −(b⋅b)       (b⋅b) a
a∧b    (a⋅a) b      −(b⋅b) a     −(a⋅a)(b⋅b)        11.43

In the notation we have used, the generators are 1, a, b, and a∧b. The parameters are a⋅a and b⋅b.


12 Exploring Clifford Algebra

12.1 Introduction
In this chapter we define a new type of algebra: the Clifford algebra. Clifford algebra was originally proposed by William Kingdon Clifford based on Grassmann's ideas. Clifford algebras have now found an important place as mathematical systems for describing some of the more fundamental theories of mathematical physics.

Clifford algebra is based on a new kind of product called the Clifford product. In this chapter we will show how the Clifford product can be defined straightforwardly in terms of the exterior and interior products that we have already developed, without introducing any new axioms. This approach has the dual advantage of ensuring consistency and enabling all the results we have developed previously to be applied to Clifford numbers.

In the previous chapter we have gone to some lengths to define and explore the generalized Grassmann product. In this chapter we will use the generalized Grassmann product to define the Clifford product in its most general form as a simple sum of generalized products. Computations in general Clifford algebras can be complicated, and this approach at least permits us to render the complexities in a straightforward manner. From the general case, we then show how Clifford products of intersecting and orthogonal elements simplify. This is the normal case treated in introductory discussions of Clifford algebra.

Clifford algebras occur throughout mathematical physics, the most well known being the real numbers, complex numbers, quaternions, and complex quaternions. In this book we show how Clifford algebras can be firmly based on the Grassmann algebra as a sum of generalized Grassmann products.

Historical Note
The seminal work on Clifford algebra is Clifford's paper Applications of Grassmann's Extensive Algebra, published in the American Journal of Mathematics Pure and Applied in 1878.

Clifford became a great admirer of Grassmann and one of those rare contemporaries who appears to have understood his work. The first paragraph of this paper contains the following passage.

Until recently I was unacquainted with the Ausdehnungslehre, and knew only so much of it as is contained in the author's geometrical papers in Crelle's Journal and in Hankel's Lectures on Complex Numbers. I may, perhaps, therefore be permitted to express my profound admiration of that extraordinary work, and my conviction that its principles will exercise a vast influence on the future of mathematical science.


12.2 The Clifford Product


Definition of the Clifford product
The Clifford product of elements α_m and β_k is denoted α_m ∘ β_k and defined to be

α_m ∘ β_k ≡ Σ_{λ=0}^{Min[m,k]} (−1)^{λ(m−λ) + λ(λ−1)/2} α_m ∘_λ β_k        12.1

Here, α_m ∘_λ β_k is the generalized Grassmann product of order λ.

The most surprising property of the Clifford product is its associativity, even though it is defined in terms of products which are non-associative. The associativity of the Clifford product is not directly evident from the definition.

Tabulating Clifford products


In order to see what this formula gives we can tabulate the Clifford products in terms of generalized products. Here we maintain the grade of the first factor general, and vary the grade of the second.

Table[{α_m ∘ β_k, Sum[CreateSignedGeneralizedProducts[λ][α_m, β_k], {λ, 0, k}]}, {k, 0, 4}] // SimplifySigns // TableForm

α_m ∘ β_0 ≡ α_m ∘₀ β_0
α_m ∘ β_1 ≡ α_m ∘₀ β_1 − (−1)^m α_m ∘₁ β_1
α_m ∘ β_2 ≡ α_m ∘₀ β_2 − (−1)^m α_m ∘₁ β_2 − α_m ∘₂ β_2
α_m ∘ β_3 ≡ α_m ∘₀ β_3 − (−1)^m α_m ∘₁ β_3 − α_m ∘₂ β_3 + (−1)^m α_m ∘₃ β_3
α_m ∘ β_4 ≡ α_m ∘₀ β_4 − (−1)^m α_m ∘₁ β_4 − α_m ∘₂ β_4 + (−1)^m α_m ∘₃ β_4 + α_m ∘₄ β_4
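The sign pattern in this table follows directly from the exponent λ(m−λ) + λ(λ−1)/2 in definition 12.1. The Python sketch below is our own check (not package code): it reproduces the tabulated coefficients +1, −(−1)^m, −1, +(−1)^m, +1 for λ = 0, …, 4.

```python
def clifford_sign(m, l):
    """Sign of the order-l generalized product term in the Clifford
    product of an m-element with a k-element (definition 12.1).
    Note the sign depends only on m and l, not on k."""
    return (-1) ** (l * (m - l) + l * (l - 1) // 2)

# The tabulated pattern for l = 0..4: +1, -(-1)^m, -1, +(-1)^m, +1.
for m in range(6):
    expected = [1, -(-1) ** m, -1, (-1) ** m, 1]
    assert [clifford_sign(m, l) for l in range(5)] == expected
```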


The grade of a Clifford product


It can be seen that, from its definition 12.1 in terms of generalized products, the Clifford product of α_m and β_k is a sum of elements with grades ranging from m + k down to |m − k| in steps of 2. All the elements of a given Clifford product are either of even grade (if m + k is even), or of odd grade (if m + k is odd).

To calculate the full range of grades in a space of unspecified dimension we can use the GrassmannAlgebra function RawGrade. For example:

RawGrade[α_3 ∘ β_2]
{1, 3, 5}

Given a space of a certain dimension, some of these elements may necessarily be zero (and thus result in a grade of Grade0), because their grade is larger than the dimension of the space. For example, in a 3-space:

ℬ3; Grade[α_3 ∘ β_2]
{1, 3, Grade0}

In the general discussion of the Clifford product that follows, we will assume that the dimension of the space is high enough to avoid any terms of the product becoming zero because their grade exceeds the dimension of the space. In later more specific examples, however, the dimension of the space becomes an important factor in determining the structure of the particular Clifford algebra under consideration.
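The grade bookkeeping can likewise be sketched in Python. The helpers below are our own, mimicking what RawGrade and Grade report rather than calling the package functions: the raw grades run from |m − k| up to m + k in steps of 2, and a space of dimension n kills those exceeding n.

```python
def raw_grades(m, k):
    """Grades occurring in the Clifford product of an m- and a k-element."""
    return list(range(abs(m - k), m + k + 1, 2))

def grades_in_dimension(m, k, n):
    """Replace grades exceeding the space dimension n by the marker 'Grade0'."""
    return [g if g <= n else "Grade0" for g in raw_grades(m, k)]
```

For m = 3, k = 2 this gives {1, 3, 5} in general, and {1, 3, Grade0} in a 3-space, matching the outputs above.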

Clifford products in terms of generalized products


As can be seen from the definition, the Clifford product of a simple m-element and a simple k-element can be expressed as the sum of signed generalized products. For example, α_3 ∘ β_2 may be expressed as the sum of three signed generalized products of grades 1, 3 and 5. In GrassmannAlgebra we can effect this conversion by applying the ToGeneralizedProducts function.

ToGeneralizedProducts[α_3 ∘ β_2]
α_3 ∘₀ β_2 + α_3 ∘₁ β_2 − α_3 ∘₂ β_2

Multiple Clifford products can be expanded in the same way. For example:

ToGeneralizedProducts[α_3 ∘ β_2 ∘ γ]
(α_3 ∘₀ β_2) ∘₀ γ + (α_3 ∘₁ β_2) ∘₀ γ − (α_3 ∘₂ β_2) ∘₀ γ + (α_3 ∘₀ β_2) ∘₁ γ + (α_3 ∘₁ β_2) ∘₁ γ − (α_3 ∘₂ β_2) ∘₁ γ

ToGeneralizedProducts can also be used to expand Clifford products in any Grassmann expression, or list of Grassmann expressions. For example we can express the Clifford product of two general Grassmann numbers in 2-space in terms of generalized products.

ℬ2; X = CreateGrassmannNumber[x] ∘ CreateGrassmannNumber[y]
(x0 + e1 x1 + e2 x2 + x3 e1∧e2) ∘ (y0 + e1 y1 + e2 y2 + y3 e1∧e2)

X1 = ToGeneralizedProducts[X]
x0 ∘₀ y0 + x0 ∘₀ (e1 y1) + x0 ∘₀ (e2 y2) + x0 ∘₀ (y3 e1∧e2) + (e1 x1) ∘₀ y0 + (e1 x1) ∘₀ (e1 y1) + (e1 x1) ∘₀ (e2 y2) + (e1 x1) ∘₀ (y3 e1∧e2) + (e2 x2) ∘₀ y0 + (e2 x2) ∘₀ (e1 y1) + (e2 x2) ∘₀ (e2 y2) + (e2 x2) ∘₀ (y3 e1∧e2) + (x3 e1∧e2) ∘₀ y0 + (x3 e1∧e2) ∘₀ (e1 y1) + (x3 e1∧e2) ∘₀ (e2 y2) + (x3 e1∧e2) ∘₀ (y3 e1∧e2) + (e1 x1) ∘₁ (e1 y1) + (e1 x1) ∘₁ (e2 y2) + (e1 x1) ∘₁ (y3 e1∧e2) + (e2 x2) ∘₁ (e1 y1) + (e2 x2) ∘₁ (e2 y2) + (e2 x2) ∘₁ (y3 e1∧e2) − (x3 e1∧e2) ∘₁ (e1 y1) − (x3 e1∧e2) ∘₁ (e2 y2) − (x3 e1∧e2) ∘₁ (y3 e1∧e2) − (x3 e1∧e2) ∘₂ (y3 e1∧e2)

Clifford products in terms of interior products


The primary definitions of Clifford and generalized products are in terms of exterior and interior products. To transform a Clifford product to exterior and interior products, we can either apply the GrassmannAlgebra function ToInteriorProducts to the results obtained above, or directly to the expression involving the Clifford product.

ToInteriorProducts[α_3 ∘ β_2]
−(α₁∧α₂∧α₃)⋅(β₁∧β₂) + ((α₁∧α₂∧α₃)⋅β₁)∧β₂ − ((α₁∧α₂∧α₃)⋅β₂)∧β₁ + α₁∧α₂∧α₃∧β₁∧β₂

Note that ToInteriorProducts works with explicit elements, but also, as above, automatically creates a simple exterior product from an underscripted element.

ToInteriorProducts expands the Clifford product by using the A form of the generalized product. A second function ToInteriorProductsB expands the Clifford product by using the B form of the expansion to give a different but equivalent expression.

ToInteriorProductsB[α_3 ∘ β_2]
−(α₁∧α₂∧α₃)⋅(β₁∧β₂) − (α₁∧α₂∧(α₃⋅β₁))∧β₂ + (α₁∧α₂∧(α₃⋅β₂))∧β₁ + α₁∧α₂∧α₃∧β₁∧β₂

In this example, the difference is evidenced in the second and third terms.

Clifford products in terms of inner products


Clifford products may also be expressed in terms of inner products, some of which may be scalar products. From this form it is easy to read the grade of the terms.

ToInnerProducts[α_3 ∘ β_2]
−(α₂∧α₃ ⋅ β₁∧β₂) α₁ + (α₁∧α₃ ⋅ β₁∧β₂) α₂ − (α₁∧α₂ ⋅ β₁∧β₂) α₃ − (α₃⋅β₂) α₁∧α₂∧β₁ + (α₃⋅β₁) α₁∧α₂∧β₂ + (α₂⋅β₂) α₁∧α₃∧β₁ − (α₂⋅β₁) α₁∧α₃∧β₂ − (α₁⋅β₂) α₂∧α₃∧β₁ + (α₁⋅β₁) α₂∧α₃∧β₂ + α₁∧α₂∧α₃∧β₁∧β₂

Clifford products in terms of scalar products


Finally, Clifford products may be expressed in terms of scalar and exterior products only.

ToScalarProducts[α_3 ∘ β_2]
(α₂⋅β₂)(α₃⋅β₁) α₁ − (α₂⋅β₁)(α₃⋅β₂) α₁ − (α₁⋅β₂)(α₃⋅β₁) α₂ + (α₁⋅β₁)(α₃⋅β₂) α₂ + (α₁⋅β₂)(α₂⋅β₁) α₃ − (α₁⋅β₁)(α₂⋅β₂) α₃ − (α₃⋅β₂) α₁∧α₂∧β₁ + (α₃⋅β₁) α₁∧α₂∧β₂ + (α₂⋅β₂) α₁∧α₃∧β₁ − (α₂⋅β₁) α₁∧α₃∧β₂ − (α₁⋅β₂) α₂∧α₃∧β₁ + (α₁⋅β₁) α₂∧α₃∧β₂ + α₁∧α₂∧α₃∧β₁∧β₂

12.3 The Reverse of an Exterior Product


Defining the reverse
We will find in our discussion of Clifford products that many operations and formulae are simplified by expressing some of the exterior products in a form which reverses the order of the 1-element factors in the products. We denote the reverse of a simple m-element α_m by α_m†, and define it to be:

2009 9 3

Grassmann Algebra Book.nb

647

We denote the reverse of a simple m-element a by a , and define it to be:


m m

H a 1 a 2 ! a m L a m a m -1 ! a 1
We can easily work out the number of permutations to achieve this rearrangement as
1 2

12.2

m Hm - 1L. Mathematica automatically simplifies H- 1L 2


1

m Hm -1L

to m Hm-1L .

a H- 1L 2
m

m H m -1L

a m H m -1L a
m m

12.3

The operation of taking the reverse of an element is called reversion. In Mathematica, the superscript dagger is represented by SuperDagger. Thus
SuperDaggerBaF returns a .
m m

The pattern of signs, as m increases from zero, alternates in pairs:


1, 1, - 1, - 1, 1, 1, - 1, - 1, 1, 1, - 1, - 1, !
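The sign (−1)^{½m(m−1)} and its paired alternation are easy to verify numerically. A minimal sketch in Python (the book's own examples use Mathematica; this standalone function is only an illustration):

```python
def reverse_sign(m):
    """Sign relating a simple m-element to its reverse: (-1)^(m(m-1)/2)."""
    return (-1) ** (m * (m - 1) // 2)

# The pattern alternates in pairs as m increases from zero:
print([reverse_sign(m) for m in range(12)])
# [1, 1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1]
```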

Computing the reverse

In GrassmannAlgebra you can take the reverse of the exterior products in a Grassmann expression, or list of Grassmann expressions, by using the function GrassmannReverse. GrassmannReverse expands products and factors out scalars before reversing the factors. It will operate on graded variables of either numeric or symbolic grade. For example:

GrassmannReverse[{1, x, x∧y, 3α₃, βₖ, αₘ∧βₖ − γₚ}]

{1, x, y∧x, 3 α₃∧α₂∧α₁, βₖ∧⋯∧β₂∧β₁, αₘ∧⋯∧α₂∧α₁∧βₖ∧⋯∧β₂∧β₁ − αₘ∧⋯∧α₂∧α₁∧γₚ∧⋯∧γ₂∧γ₁}

On the other hand, if we wish just to put exterior products into reverse form (that is, reversing the factors and changing the sign of the product so that the result remains equal to the original), we can use the GrassmannAlgebra function ReverseForm.

ReverseForm[{1, x, x∧y, α₃, βₖ, αₘ∧βₖ − γₚ}]

{1, x, −(y∧x), −(α₃∧α₂∧α₁), ⅈ^{(−1+k) k} βₖ∧⋯∧β₂∧β₁, ⅈ^{(−1+k) k+(−1+m) m} (αₘ∧⋯∧α₂∧α₁∧βₖ∧⋯∧β₂∧β₁) − ⅈ^{(−1+m) m+(−1+p) p} (αₘ∧⋯∧α₂∧α₁∧γₚ∧⋯∧γ₂∧γ₁)}

12.4 Special Cases of Clifford Products

The Clifford product with scalars
The Clifford product of a scalar with any element simplifies to the usual (field) product.
ToScalarProducts[a∘αₘ]

a α₁∧α₂∧⋯∧αₘ

a∘αₘ == a αₘ        12.4

Hence the Clifford product of any number of scalars is just their underlying field product.

ToScalarProducts[a∘b∘c]

a b c

The Clifford product of 1-elements

The Clifford product of two 1-elements is just the sum of their interior (here scalar) and exterior products. Hence it is of grade {0, 2}.

ToScalarProducts[x∘y]

x·y + x∧y

x∘y == x·y + x∧y        12.5

The Clifford product of any number of 1-elements can be computed.

ToScalarProducts[x∘y∘z]

z (x·y) − y (x·z) + x (y·z) + x∧y∧z

ToScalarProducts[w∘x∘y∘z]

(w·z)(x·y) − (w·y)(x·z) + (w·x)(y·z) + (y·z) w∧x − (x·z) w∧y + (x·y) w∧z + (w·z) x∧y − (w·y) x∧z + (w·x) y∧z + w∧x∧y∧z
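The identity x∘y == x·y + x∧y can be checked numerically on an orthonormal basis. The following is a Python sketch, not part of the GrassmannAlgebra package: multivectors are stored as dictionaries mapping basis blades (sorted index tuples) to coefficients, and a Euclidean metric is assumed.

```python
def blade_mul(a, b):
    """Clifford product of two basis blades in an orthonormal Euclidean
    basis. Blades are sorted tuples of indices; returns (sign, blade)."""
    sign, out = 1, list(a)
    for i in b:
        sign *= (-1) ** sum(1 for j in out if j > i)  # count transpositions
        if i in out:
            out.remove(i)                             # e_i . e_i = 1
        else:
            out.append(i)
            out.sort()
    return sign, tuple(out)

def mul(x, y):
    """Clifford product of multivectors stored as {blade: coefficient}."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s, blade = blade_mul(a, b)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c}

def wedge(x, y):
    """Exterior product: like mul, but blades sharing a factor give zero."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            if set(a) & set(b):
                continue
            s, blade = blade_mul(a, b)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c}

# Two 1-elements: x = e1 + 2 e2, y = 3 e1 + e3.
x = {(1,): 1, (2,): 2}
y = {(1,): 3, (3,): 1}

dot = sum(c * y.get(b, 0) for b, c in x.items())  # scalar product x.y = 3
expected = dict(wedge(x, y))
expected[()] = dot                                 # x.y + x ∧ y
print(mul(x, y) == expected)                       # True
```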


The Clifford product of an m-element and a 1-element

The Clifford product of an arbitrary simple m-element and a 1-element results in just two terms: their exterior and interior products.

x∘αₘ == x∧αₘ + αₘ⌋x        12.6

αₘ∘x == αₘ∧x − (−1)^m αₘ⌋x        12.7

By rewriting equation 12.7 as:

(−1)^m αₘ∘x == x∧αₘ − αₘ⌋x

we can add and subtract equations 12.6 and 12.7 to express the exterior product and interior product of a 1-element and an m-element in terms of Clifford products:

x∧αₘ == ½ (x∘αₘ + (−1)^m αₘ∘x)        12.8

αₘ⌋x == ½ (x∘αₘ − (−1)^m αₘ∘x)        12.9
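Equation 12.8 can be spot-checked numerically with the same blade-dictionary encoding used above (a self-contained Python sketch assuming an orthonormal Euclidean basis; it is not the package's implementation). To stay in integer arithmetic the check compares 2 (x∧α) against x∘α + (−1)^m α∘x.

```python
def blade_mul(a, b):
    """Clifford product of basis blades (sorted index tuples),
    orthonormal Euclidean basis; returns (sign, blade)."""
    sign, out = 1, list(a)
    for i in b:
        sign *= (-1) ** sum(1 for j in out if j > i)
        if i in out:
            out.remove(i)            # e_i . e_i = 1
        else:
            out.append(i)
            out.sort()
    return sign, tuple(out)

def mul(x, y):
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s, blade = blade_mul(a, b)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c}

def wedge(x, y):
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            if set(a) & set(b):
                continue
            s, blade = blade_mul(a, b)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c}

def add(x, y):
    out = dict(x)
    for b, c in y.items():
        out[b] = out.get(b, 0) + c
    return {b: c for b, c in out.items() if c}

def scal(c, x):
    return {b: c * v for b, v in x.items()}

x = {(1,): 1, (4,): 2}                           # a 1-element
for m, alpha in [(2, {(1, 2): 1, (2, 3): 3}),    # a grade-2 element
                 (3, {(1, 2, 3): 2})]:           # a grade-3 element
    lhs = scal(2, wedge(x, alpha))               # 2 (x ∧ α)
    rhs = add(mul(x, alpha), scal((-1) ** m, mul(alpha, x)))
    assert lhs == rhs                            # equation 12.8
```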

The Clifford product of an m-element and a 2-element

The Clifford product of an arbitrary m-element and a 2-element is given by just three terms: an exterior product, a generalized product of order 1, and an interior product.

αₘ∘β₂ == αₘ∧β₂ − (−1)^m (αₘ Δ_1 β₂) − αₘ⌋β₂        12.10

The Clifford product of two 2-elements

By way of example we explore the various forms into which we can cast a Clifford product of two 2-elements. The highest level is a sum of generalized products. From this we can expand the terms into interior, inner or scalar products where appropriate. We expect the grade of the Clifford product to be a composite one.

RawGrade[(x∧y)∘(u∧v)]

{0, 2, 4}

ToGeneralizedProducts[(x∧y)∘(u∧v)]

(x∧y) Δ_0 (u∧v) − (x∧y) Δ_1 (u∧v) − (x∧y) Δ_2 (u∧v)

The first of these generalized products is equivalent to an exterior product of grade 4. The second is a generalized product of grade 2. The third reduces to an interior product of grade 0. We can see this more explicitly by converting to interior products.

ToInteriorProducts[(x∧y)∘(u∧v)]

−(u∧v ⌋ x∧y) − (x∧y ⌋ u)∧v + (x∧y ⌋ v)∧u + u∧v∧x∧y

We can convert the middle terms to inner (in this case scalar) products.

ToInnerProducts[(x∧y)∘(u∧v)]

−(u∧v · x∧y) + (v·y) u∧x − (v·x) u∧y − (u·y) v∧x + (u·x) v∧y + u∧v∧x∧y

Finally we can express the Clifford number in terms only of exterior and scalar products.

ToScalarProducts[(x∧y)∘(u∧v)]

(u·y)(v·x) − (u·x)(v·y) + (v·y) u∧x − (v·x) u∧y − (u·y) v∧x + (u·x) v∧y + u∧v∧x∧y

The Clifford product of two identical elements

The Clifford product of two identical elements γₚ is, by definition:

γₚ∘γₚ == ∑_{λ=0}^{p} (−1)^{λ(p−λ)+½λ(λ−1)} γₚ Δ_λ γₚ

Since the only non-zero generalized product of the form γₚ Δ_λ γₚ is that for which λ = p, that is γₚ⌋γₚ, we have immediately that:

γₚ∘γₚ == (−1)^{½p(p−1)} γₚ⌋γₚ

Or, alternatively:

γₚ∘γₚ == γₚ†⌋γₚ == γₚ⌋γₚ†        12.11
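For a unit blade built from orthonormal factors, γₚ⌋γₚ == 1, so the relation predicts γₚ∘γₚ == (−1)^{½p(p−1)}. A standalone numeric check (a Python sketch with a Euclidean metric assumed; not part of the GrassmannAlgebra package):

```python
def blade_mul(a, b):
    """Clifford product of two basis blades in an orthonormal Euclidean
    basis. Blades are sorted tuples of indices; returns (sign, blade)."""
    sign, out = 1, list(a)
    for i in b:
        sign *= (-1) ** sum(1 for j in out if j > i)  # transpositions
        if i in out:
            out.remove(i)                             # e_i . e_i = 1
        else:
            out.append(i)
            out.sort()
    return sign, tuple(out)

for p in range(7):
    g = tuple(range(1, p + 1))     # unit p-blade e1∧e2∧...∧ep, with γ⌋γ == 1
    sign, blade = blade_mul(g, g)
    assert blade == ()             # the square of a simple element is a scalar
    assert sign == (-1) ** (p * (p - 1) // 2)
```

The signs reproduce the paired pattern 1, 1, −1, −1, … of the reversion signs, as 12.11 requires.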

12.5 Alternate Forms for the Clifford Product

Alternate expansions of the Clifford product

The Clifford product has been defined in Section 12.2 as:

αₘ∘βₖ == ∑_{λ=0}^{Min[m,k]} (−1)^{λ(m−λ)+½λ(λ−1)} αₘ Δ_λ βₖ

Alternate forms for the generalized product have been discussed in Chapter 10. The generalized product, and hence the Clifford product, may be expanded by decomposition of αₘ, by decomposition of βₖ, or by decomposition of both αₘ and βₖ.

The Clifford product expressed by decomposition of the first factor

The generalized product, expressed by decomposition of αₘ, is:

αₘ Δ_λ βₖ == ∑_{i=1}^{C(m,λ)} αᵢ_{m−λ} ∧ (βₖ ⌋ αᵢ_{λ})        αₘ == α₁_{λ}∧α₁_{m−λ} + α₂_{λ}∧α₂_{m−λ} + ⋯

Substituting this into the expression for the Clifford product gives:

αₘ∘βₖ == ∑_{λ=0}^{Min[m,k]} (−1)^{λ(m−λ)+½λ(λ−1)} ∑_{i=1}^{C(m,λ)} αᵢ_{m−λ} ∧ (βₖ ⌋ αᵢ_{λ})

Our objective is to rearrange the formula for the Clifford product so that the signs are absorbed into the formula, thus making the form of the formula independent of the values of m and λ. We can do this by writing (−1)^{½λ(λ−1)} αᵢ_{λ} == αᵢ_{λ}† (where αᵢ_{λ}† is the reverse of αᵢ_{λ}), and by interchanging the order of the decomposition of αₘ into a λ-element and an (m−λ)-element to absorb the (−1)^{λ(m−λ)} factor: (−1)^{λ(m−λ)} αₘ == α₁_{m−λ}∧α₁_{λ} + α₂_{m−λ}∧α₂_{λ} + ⋯. This gives:

αₘ∘βₖ == ∑_{λ=0}^{Min[m,k]} ∑_{i=1}^{C(m,λ)} αᵢ_{m−λ} ∧ (βₖ ⌋ αᵢ_{λ}†)        αₘ == α₁_{m−λ}∧α₁_{λ} + α₂_{m−λ}∧α₂_{λ} + ⋯

Because of the symmetry of the expression with respect to λ and (m−λ), we can write μ equal to (m−λ), and then λ becomes (m−μ). This enables us to write the formula a little more simply by arranging for the factors of grade μ to come before those of grade (m−μ). Finally, because of the inherent arbitrariness of the symbol μ, we can change it back to λ to get the formula in a more accustomed form.

αₘ∘βₖ == ∑_{λ=0}^{Min[m,k]} ∑_{i=1}^{C(m,λ)} αᵢ_{λ} ∧ (βₖ ⌋ αᵢ_{m−λ}†)        αₘ == α₁_{λ}∧α₁_{m−λ} + α₂_{λ}∧α₂_{m−λ} + ⋯        12.12

The right hand side of this expression is a sum of interior products. In GrassmannAlgebra we can develop the Clifford product αₘ∘βₖ as a sum of interior products in this particular form by using ToInteriorProductsD (note the final 'D'). Since αₘ is to be decomposed, it must have an explicit numerical grade.

Example 1

We can expand any explicit elements.

ToInteriorProductsD[(x∧y)∘(u∧v)]

(x∧y ⌋ v∧u) + x∧(u∧v ⌋ y) − y∧(u∧v ⌋ x) + u∧v∧x∧y

Note that this is a different (although equivalent) result from that obtained using ToInteriorProducts.

ToInteriorProducts[(x∧y)∘(u∧v)]

−(u∧v ⌋ x∧y) − (x∧y ⌋ u)∧v + (x∧y ⌋ v)∧u + u∧v∧x∧y

Example 2

We can also enter the elements as graded variables and have GrassmannAlgebra create the requisite products.

ToInteriorProductsD[α₂∘β₂]

(β₁∧β₂ ⌋ α₂∧α₁) + α₁∧(β₁∧β₂ ⌋ α₂) − α₂∧(β₁∧β₂ ⌋ α₁) + α₁∧α₂∧β₁∧β₂

Example 3

The formula shows that it does not depend on the grade of βₖ for its form. Thus we can still obtain an expansion for a general βₖ. For example we can take:

A = ToInteriorProductsD[α₂∘βₖ]

(βₖ ⌋ α₂∧α₁) + α₁∧(βₖ ⌋ α₂) − α₂∧(βₖ ⌋ α₁) + α₁∧α₂∧βₖ

Example 4

Note that if k is less than the grade of the first factor, some of the interior product terms may be zero, thus simplifying the expression.

B = ToInteriorProductsD[α₂∘β]   (β of grade 1)

−(α₁∧α₂ ⌋ β) + β∧α₁∧α₂

If we put k equal to 1 in the expression derived for general k in Example 3, we get:

A1 = A /. k → 1

(β ⌋ α₂∧α₁) + α₁ (β ⌋ α₂) − α₂ (β ⌋ α₁) + α₁∧α₂∧β

Although this does not immediately look like the expression B above, we can see that it is the same by noting that the first term of A1 is zero, and expanding their difference to scalar products.

ToScalarProducts[B − A1]

0

Alternative expression by decomposition of the first factor

An alternative expression for the Clifford product expressed by a decomposition of αₘ is obtained by reversing the order of the factors in the generalized products, and then expanding the generalized products in their B form expansion.

αₘ∘βₖ == ∑_{λ=0}^{Min[m,k]} (−1)^{λ(m−λ)+½λ(λ−1)+(m−λ)(k−λ)} βₖ Δ_λ αₘ
== ∑_{λ=0}^{Min[m,k]} (−1)^{k(m−λ)+½λ(λ−1)} ∑_{i=1}^{C(m,λ)} (βₖ ∧ αᵢ_{m−λ}) ⌋ αᵢ_{λ}

As before we write (−1)^{½λ(λ−1)} αᵢ_{λ} == αᵢ_{λ}†, and interchange the order of βₖ and αᵢ_{m−λ} to absorb the sign (−1)^{k(m−λ)}, to get:

αₘ∘βₖ == ∑_{λ=0}^{Min[m,k]} ∑_{i=1}^{C(m,λ)} (αᵢ_{m−λ} ∧ βₖ) ⌋ αᵢ_{λ}†        αₘ == α₁_{λ}∧α₁_{m−λ} + α₂_{λ}∧α₂_{m−λ} + ⋯

In order to get this into a form for direct comparison with our previously derived result in formula 12.12, we write (−1)^{λ(m−λ)} αₘ == α₁_{m−λ}∧α₁_{λ} + α₂_{m−λ}∧α₂_{λ} + ⋯, then interchange λ and (m−λ) as before to get finally:

αₘ∘βₖ == ∑_{λ=0}^{Min[m,k]} ∑_{i=1}^{C(m,λ)} (−1)^{λ(m−λ)} (αᵢ_{λ} ∧ βₖ) ⌋ αᵢ_{m−λ}†        αₘ == α₁_{λ}∧α₁_{m−λ} + α₂_{λ}∧α₂_{m−λ} + ⋯        12.13

As before, the right hand side of this expression is a sum of interior products. In GrassmannAlgebra we can develop the Clifford product αₘ∘βₖ in this form by using ToInteriorProductsC. This is done in Section 12.6 below.

The Clifford product expressed by decomposition of the second factor

If we wish to expand a Clifford product in terms of the second factor βₖ, we can use formulas A and B of the generalized product theorem (Section 10.5) and substitute directly to get either of:

αₘ∘βₖ == ∑_{λ=0}^{Min[m,k]} ∑_{j=1}^{C(k,λ)} (−1)^{λ(m−λ)} (αₘ ⌋ βⱼ_{λ}†) ∧ βⱼ_{k−λ}        βₖ == β₁_{λ}∧β₁_{k−λ} + β₂_{λ}∧β₂_{k−λ} + ⋯        12.14

αₘ∘βₖ == ∑_{λ=0}^{Min[m,k]} ∑_{j=1}^{C(k,λ)} (−1)^{λ(m−λ)} (αₘ ∧ βⱼ_{k−λ}) ⌋ βⱼ_{λ}†        βₖ == β₁_{λ}∧β₁_{k−λ} + β₂_{λ}∧β₂_{k−λ} + ⋯        12.15

The Clifford product expressed by decomposition of both factors

We can similarly decompose the Clifford product in terms of both αₘ and βₖ. The sign (−1)^{½λ(λ−1)} can be absorbed by taking the reverse of either αᵢ_{λ} or βⱼ_{λ}.

αₘ∘βₖ == ∑_{λ=0}^{Min[m,k]} ∑_{i=1}^{C(m,λ)} ∑_{j=1}^{C(k,λ)} (−1)^{λ(m−λ)} (αᵢ_{λ}† ⌋ βⱼ_{λ}) αᵢ_{m−λ} ∧ βⱼ_{k−λ}

αₘ∘βₖ == ∑_{λ=0}^{Min[m,k]} ∑_{i=1}^{C(m,λ)} ∑_{j=1}^{C(k,λ)} (−1)^{λ(m−λ)} (αᵢ_{λ} ⌋ βⱼ_{λ}†) αᵢ_{m−λ} ∧ βⱼ_{k−λ}        12.16

αₘ == α₁_{λ}∧α₁_{m−λ} + α₂_{λ}∧α₂_{m−λ} + ⋯        βₖ == β₁_{λ}∧β₁_{k−λ} + β₂_{λ}∧β₂_{k−λ} + ⋯

12.6 Writing Down a General Clifford Product

The form of a Clifford product expansion

We take as example the Clifford product α₄∘βₖ and expand it in terms of the factors of its first factor in both the D form and the C form. We see that all terms except the interior and exterior products at the ends of the expression appear to be different. There are of course equal numbers of terms in either form. Note that if the parentheses were not there, the expressions would be identical except for (possibly) the signs of the terms. Although these central terms differ between the two forms, we can show by reducing them to scalar products that their sums are the same.

The D form

X = ToInteriorProductsD[α₄∘βₖ]

(βₖ ⌋ α₄∧α₃∧α₂∧α₁) + α₁∧(βₖ ⌋ α₄∧α₃∧α₂) − α₂∧(βₖ ⌋ α₄∧α₃∧α₁) + α₃∧(βₖ ⌋ α₄∧α₂∧α₁) − α₄∧(βₖ ⌋ α₃∧α₂∧α₁) + α₁∧α₂∧(βₖ ⌋ α₄∧α₃) − α₁∧α₃∧(βₖ ⌋ α₄∧α₂) + α₁∧α₄∧(βₖ ⌋ α₃∧α₂) + α₂∧α₃∧(βₖ ⌋ α₄∧α₁) − α₂∧α₄∧(βₖ ⌋ α₃∧α₁) + α₃∧α₄∧(βₖ ⌋ α₂∧α₁) + α₁∧α₂∧α₃∧(βₖ ⌋ α₄) − α₁∧α₂∧α₄∧(βₖ ⌋ α₃) + α₁∧α₃∧α₄∧(βₖ ⌋ α₂) − α₂∧α₃∧α₄∧(βₖ ⌋ α₁) + α₁∧α₂∧α₃∧α₄∧βₖ

The C form

Note that since the exterior product has higher precedence in Mathematica than the interior product, α₁∧βₖ⌋α₄∧α₃∧α₂ is equivalent to (α₁∧βₖ)⌋(α₄∧α₃∧α₂), and no parentheses are necessary in the terms of the C form.

ToInteriorProductsC[α₄∘βₖ]

βₖ ⌋ α₄∧α₃∧α₂∧α₁ − α₁∧βₖ ⌋ α₄∧α₃∧α₂ + α₂∧βₖ ⌋ α₄∧α₃∧α₁ − α₃∧βₖ ⌋ α₄∧α₂∧α₁ + α₄∧βₖ ⌋ α₃∧α₂∧α₁ + α₁∧α₂∧βₖ ⌋ α₄∧α₃ − α₁∧α₃∧βₖ ⌋ α₄∧α₂ + α₁∧α₄∧βₖ ⌋ α₃∧α₂ + α₂∧α₃∧βₖ ⌋ α₄∧α₁ − α₂∧α₄∧βₖ ⌋ α₃∧α₁ + α₃∧α₄∧βₖ ⌋ α₂∧α₁ − α₁∧α₂∧α₃∧βₖ ⌋ α₄ + α₁∧α₂∧α₄∧βₖ ⌋ α₃ − α₁∧α₃∧α₄∧βₖ ⌋ α₂ + α₂∧α₃∧α₄∧βₖ ⌋ α₁ + α₁∧α₂∧α₃∧α₄∧βₖ

Either form is easy to write down directly by observing simple mnemonic rules.

A mnemonic way to write down a general Clifford product

We can begin developing the D form mnemonically by listing the αᵢ_{λ}. This list has the same form as the basis of the Grassmann algebra of a 4-space.

1. Calculate the αᵢ_{λ}

These are all the essentially different combinations of factors of all grades. The list has the same form as the basis of the Grassmann algebra whose n-element is αₘ.

DeclareBasis[4, α]; B = BasisL[]

{1, α₁, α₂, α₃, α₄, α₁∧α₂, α₁∧α₃, α₁∧α₄, α₂∧α₃, α₂∧α₄, α₃∧α₄, α₁∧α₂∧α₃, α₁∧α₂∧α₄, α₁∧α₃∧α₄, α₂∧α₃∧α₄, α₁∧α₂∧α₃∧α₄}

2. Calculate the cobasis with respect to αₘ

A = CobasisL[]

{α₁∧α₂∧α₃∧α₄, α₂∧α₃∧α₄, −(α₁∧α₃∧α₄), α₁∧α₂∧α₄, −(α₁∧α₂∧α₃), α₃∧α₄, −(α₂∧α₄), α₂∧α₃, α₁∧α₄, −(α₁∧α₃), α₁∧α₂, α₄, −α₃, α₂, −α₁, 1}

3. Take the reverse of the cobasis

R = GrassmannReverse[A]

{α₄∧α₃∧α₂∧α₁, α₄∧α₃∧α₂, −(α₄∧α₃∧α₁), α₄∧α₂∧α₁, −(α₃∧α₂∧α₁), α₄∧α₃, −(α₄∧α₂), α₃∧α₂, α₄∧α₁, −(α₃∧α₁), α₂∧α₁, α₄, −α₃, α₂, −α₁, 1}

4. Take the interior product of each of these elements with βₖ

S = Thread[βₖ ⌋ R]

{βₖ ⌋ α₄∧α₃∧α₂∧α₁, βₖ ⌋ α₄∧α₃∧α₂, βₖ ⌋ −(α₄∧α₃∧α₁), βₖ ⌋ α₄∧α₂∧α₁, βₖ ⌋ −(α₃∧α₂∧α₁), βₖ ⌋ α₄∧α₃, βₖ ⌋ −(α₄∧α₂), βₖ ⌋ α₃∧α₂, βₖ ⌋ α₄∧α₁, βₖ ⌋ −(α₃∧α₁), βₖ ⌋ α₂∧α₁, βₖ ⌋ α₄, βₖ ⌋ −α₃, βₖ ⌋ α₂, βₖ ⌋ −α₁, βₖ ⌋ 1}

5. Take the exterior product of the elements of the two lists B and S

T = Thread[B ∧ S]

{1∧(βₖ ⌋ α₄∧α₃∧α₂∧α₁), α₁∧(βₖ ⌋ α₄∧α₃∧α₂), α₂∧(βₖ ⌋ −(α₄∧α₃∧α₁)), α₃∧(βₖ ⌋ α₄∧α₂∧α₁), α₄∧(βₖ ⌋ −(α₃∧α₂∧α₁)), α₁∧α₂∧(βₖ ⌋ α₄∧α₃), α₁∧α₃∧(βₖ ⌋ −(α₄∧α₂)), α₁∧α₄∧(βₖ ⌋ α₃∧α₂), α₂∧α₃∧(βₖ ⌋ α₄∧α₁), α₂∧α₄∧(βₖ ⌋ −(α₃∧α₁)), α₃∧α₄∧(βₖ ⌋ α₂∧α₁), α₁∧α₂∧α₃∧(βₖ ⌋ α₄), α₁∧α₂∧α₄∧(βₖ ⌋ −α₃), α₁∧α₃∧α₄∧(βₖ ⌋ α₂), α₂∧α₃∧α₄∧(βₖ ⌋ −α₁), α₁∧α₂∧α₃∧α₄∧(βₖ ⌋ 1)}

6. Add the terms and simplify the result by factoring out the scalars

U = FactorScalars[Plus @@ T]

(βₖ ⌋ α₄∧α₃∧α₂∧α₁) + α₁∧(βₖ ⌋ α₄∧α₃∧α₂) − α₂∧(βₖ ⌋ α₄∧α₃∧α₁) + α₃∧(βₖ ⌋ α₄∧α₂∧α₁) − α₄∧(βₖ ⌋ α₃∧α₂∧α₁) + α₁∧α₂∧(βₖ ⌋ α₄∧α₃) − α₁∧α₃∧(βₖ ⌋ α₄∧α₂) + α₁∧α₄∧(βₖ ⌋ α₃∧α₂) + α₂∧α₃∧(βₖ ⌋ α₄∧α₁) − α₂∧α₄∧(βₖ ⌋ α₃∧α₁) + α₃∧α₄∧(βₖ ⌋ α₂∧α₁) + α₁∧α₂∧α₃∧(βₖ ⌋ α₄) − α₁∧α₂∧α₄∧(βₖ ⌋ α₃) + α₁∧α₃∧α₄∧(βₖ ⌋ α₂) − α₂∧α₃∧α₄∧(βₖ ⌋ α₁) + α₁∧α₂∧α₃∧α₄∧βₖ

7. Compare to the original expression

X == U

True


12.7 The Clifford Product of Intersecting Elements

General formulae for intersecting elements

Suppose two simple elements γₚ∧αₘ and γₚ∧βₖ have a simple element γₚ in common. Then by definition their Clifford product may be written as a sum of generalized products.

(γₚ∧αₘ)∘(γₚ∧βₖ) == ∑_{λ=0}^{Min[m,k]+p} (−1)^{λ(p+m−λ)+½λ(λ−1)} (γₚ∧αₘ) Δ_λ (γₚ∧βₖ)

But it has been shown in Section 10.12 that for λ ≥ p:

(γₚ∧αₘ) Δ_λ (γₚ∧βₖ) == (−1)^{p(λ−p)} ((γₚ∧αₘ) Δ_{λ−p} βₖ) ⌋ γₚ

Substituting in the formula above gives:

(γₚ∧αₘ)∘(γₚ∧βₖ) == ∑_{λ=0}^{Min[m,k]+p} (−1)^{p+λ(m−λ)+½λ(λ−1)} ((γₚ∧αₘ) Δ_{λ−p} βₖ) ⌋ γₚ

Since the terms on the right hand side are zero for λ < p, we can define μ = λ − p and rewrite the right hand side as:

(γₚ∧αₘ)∘(γₚ∧βₖ) == (−1)^w ∑_{μ=0}^{Min[m,k]} (−1)^{μ(p+m−μ)+½μ(μ−1)} ((γₚ∧αₘ) Δ_μ βₖ) ⌋ γₚ

where w = m p + ½p(p−1). Hence we can finally cast the right hand side as a Clifford product by taking out the second γₚ factor.

(γₚ∧αₘ)∘(γₚ∧βₖ) == (−1)^{mp+½p(p−1)} ((γₚ∧αₘ)∘βₖ) ⌋ γₚ        12.17

In a similar fashion we can show that the right hand side can be written in the alternative form:

(γₚ∧αₘ)∘(γₚ∧βₖ) == (−1)^{½p(p−1)} (αₘ∘(γₚ∧βₖ)) ⌋ γₚ        12.18

By reversing the order of αₘ and γₚ in the first factor, formula 12.17 can be written as:

(αₘ∧γₚ)∘(γₚ∧βₖ) == (−1)^{½p(p−1)} ((γₚ∧αₘ)∘βₖ) ⌋ γₚ        12.19

And by noting that the reverse γₚ† of γₚ is (−1)^{½p(p−1)} γₚ, we can absorb the sign into the formulae by changing one of the γₚ to γₚ†. For example:

(αₘ∧γₚ)∘(γₚ†∧βₖ) == ((γₚ∧αₘ)∘βₖ) ⌋ γₚ        12.20

In sum: we can write:

(αₘ∧γₚ)∘(γₚ†∧βₖ) == (−1)^{mp} ((αₘ∧γₚ)∘βₖ) ⌋ γₚ        12.21

(αₘ∧γₚ)∘(γₚ†∧βₖ) == (−1)^{mp} (αₘ∘(γₚ∧βₖ)) ⌋ γₚ        12.22

((αₘ∧γₚ)∘βₖ) ⌋ γₚ == (αₘ∘(γₚ∧βₖ)) ⌋ γₚ        12.23

The relations derived up to this point are completely general, and apply to Clifford products in an arbitrary space with an arbitrary metric. For example they do not require any of the elements to be orthogonal.

Special cases of intersecting elements

We can derive special cases of the formulae derived above by putting αₘ and βₖ equal to unity. First, put βₖ equal to unity. Remembering that the Clifford product with a scalar reduces to the ordinary product, we obtain:

(αₘ∧γₚ)∘γₚ† == (−1)^{mp} (αₘ∧γₚ) ⌋ γₚ == (−1)^{mp} (αₘ∘γₚ) ⌋ γₚ

Next, put αₘ equal to unity, and then, to enable comparison with the previous formulae, replace βₖ by αₘ.

γₚ∘(γₚ†∧αₘ) == (γₚ∧αₘ) ⌋ γₚ == (γₚ∘αₘ) ⌋ γₚ

Finally we note that, since the far right hand sides of these two equations are equal, all the expressions are equal.

(αₘ∧γₚ)∘γₚ† == γₚ∘(γₚ†∧αₘ) == (γₚ∧αₘ)⌋γₚ == (γₚ∘αₘ)⌋γₚ == (−1)^{mp} (αₘ∧γₚ)⌋γₚ == (−1)^{mp} (αₘ∘γₚ)⌋γₚ        12.24

By putting αₘ to unity in these equations we can recover the relation 12.11 between the Clifford product of identical elements and their interior product.

12.8 The Clifford Product of Orthogonal Elements

The Clifford product of totally orthogonal elements

In Chapter 10 on generalized products we showed that if αₘ and βₖ are totally orthogonal (that is, αᵢ·βⱼ == 0 for each αᵢ belonging to αₘ and each βⱼ belonging to βₖ), then αₘ Δ_λ βₖ == 0, except when λ = 0. Thus we see immediately from the definition of the Clifford product that the Clifford product of two totally orthogonal elements is equal to their exterior product.

αₘ∘βₖ == αₘ∧βₖ        αᵢ·βⱼ == 0        12.25
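In an orthonormal basis, basis blades with no index in common are totally orthogonal, so 12.25 predicts that their Clifford and exterior products coincide. A quick exhaustive check in a Euclidean 4-space (a Python sketch using the blade-dictionary encoding, not the package's own representation):

```python
from itertools import combinations

def blade_mul(a, b):
    """Clifford product of basis blades (sorted index tuples),
    orthonormal Euclidean basis; returns (sign, blade)."""
    sign, out = 1, list(a)
    for i in b:
        sign *= (-1) ** sum(1 for j in out if j > i)
        if i in out:
            out.remove(i)            # e_i . e_i = 1
        else:
            out.append(i)
            out.sort()
    return sign, tuple(out)

def mul(x, y):
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s, blade = blade_mul(a, b)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c}

def wedge(x, y):
    """Exterior product: blades sharing a factor give zero."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            if set(a) & set(b):
                continue
            s, blade = blade_mul(a, b)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c}

# All basis blades of a 4-space; disjoint pairs are totally orthogonal.
blades = [c for r in range(5) for c in combinations(range(1, 5), r)]
for a in blades:
    for b in blades:
        if not (set(a) & set(b)):
            assert mul({a: 1}, {b: 1}) == wedge({a: 1}, {b: 1})

# With a shared (non-orthogonal) factor they differ: e1∘e1 == 1, e1∧e1 == 0.
print(mul({(1,): 1}, {(1,): 1}), wedge({(1,): 1}, {(1,): 1}))
```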

The Clifford product of partially orthogonal elements

Suppose now we introduce an arbitrary element γₚ into αₘ∘βₖ, and expand the expression in terms of generalized products.

αₘ∘(γₚ∧βₖ) == ∑_{λ=0}^{Min[m,k+p]} (−1)^{λ(m−λ)+½λ(λ−1)} αₘ Δ_λ (γₚ∧βₖ)

But from formula 10.35 we have that:

αₘ Δ_λ (γₚ∧βₖ) == (αₘ Δ_λ γₚ)∧βₖ        αᵢ·βⱼ == 0,  λ ≤ Min[m, p]

Hence:

αₘ∘(γₚ∧βₖ) == (αₘ∘γₚ)∧βₖ        αᵢ·βⱼ == 0        12.26

Similarly, expressing (αₘ∧γₚ)∘βₖ in terms of generalized products and substituting from equation 10.37 gives:

(αₘ∧γₚ)∘βₖ == ∑_{λ=0}^{Min[p,k]} (−1)^{λ(m+p−λ)+½λ(λ−1)+mλ} αₘ∧(γₚ Δ_λ βₖ)

But (−1)^{λ(m+p−λ)+½λ(λ−1)+mλ} == (−1)^{λ(p−λ)+½λ(λ−1)}, hence we can write:

(αₘ∧γₚ)∘βₖ == αₘ∧(γₚ∘βₖ)        αᵢ·βⱼ == 0        12.27
Testing the formulae

To test any of these formulae we may always tabulate specific cases. Here we convert the difference between the sides of equation 12.27 to scalar products, and then put to zero any products whose factors are orthogonal. To do this we use the GrassmannAlgebra function OrthogonalSimplificationRules. We verify the formula for the first 50 cases.

? OrthogonalSimplificationRules

OrthogonalSimplificationRules[{{X1,Y1},{X2,Y2},…}] develops a list of rules which put to zero all the scalar products of a 1-element from Xi and a 1-element from Yi. Xi and Yi may be either variables, basis elements, exterior products of these, or graded variables.

Flatten[Table[ToScalarProducts[(αₘ∧γₚ)∘βₖ − αₘ∧(γₚ∘βₖ)] /. OrthogonalSimplificationRules[{{αₘ, βₖ}}], {m, 0, 3}, {k, 0, m}, {p, 0, m + 2}]]

{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}

12.9 The Clifford Product of Intersecting Orthogonal Elements

Orthogonal union

We have shown in Section 12.7 above that the Clifford product of arbitrary intersecting elements may be expressed by:

(αₘ∧γₚ)∘(γₚ†∧βₖ) == (−1)^{mp} ((αₘ∧γₚ)∘βₖ) ⌋ γₚ

(αₘ∧γₚ)∘(γₚ†∧βₖ) == (−1)^{mp} (αₘ∘(γₚ∧βₖ)) ⌋ γₚ

But we have also shown in Section 12.8 that if αₘ and βₖ are totally orthogonal, then:

(αₘ∧γₚ)∘βₖ == αₘ∧(γₚ∘βₖ)

αₘ∘(γₚ∧βₖ) == (αₘ∘γₚ)∧βₖ

Hence the right hand sides of the first equations may be written, in the orthogonal case, as:

(αₘ∧γₚ)∘(γₚ†∧βₖ) == (−1)^{mp} (αₘ∧(γₚ∘βₖ)) ⌋ γₚ        αᵢ·βⱼ == 0        12.28

(αₘ∧γₚ)∘(γₚ†∧βₖ) == (−1)^{mp} ((αₘ∘γₚ)∧βₖ) ⌋ γₚ        αᵢ·βⱼ == 0        12.29

Orthogonal intersection

Consider the case of three simple elements αₘ, βₖ and γₚ, where γₚ is totally orthogonal to both αₘ and βₖ (and hence to αₘ∘βₖ). A simple element γₚ is totally orthogonal to an element αₘ if and only if αₘ⌋γᵢ == 0 for all γᵢ belonging to γₚ. Here, we do not assume that αₘ and βₖ are orthogonal.

As before we consider the Clifford product of intersecting elements, but because of the orthogonality relationships, we can replace the generalized products on the right hand side by the right hand side of equation 10.39.

(γₚ∧αₘ)∘(γₚ∧βₖ) == ∑_{λ=0}^{Min[m,k]+p} (−1)^{λ(p+m−λ)+½λ(λ−1)} (γₚ∧αₘ) Δ_λ (γₚ∧βₖ)
== ∑_{λ=p}^{Min[m,k]+p} (−1)^{λ(p+m−λ)+½λ(λ−1)} (γₚ⌋γₚ) (αₘ Δ_{λ−p} βₖ)

Let μ = λ − p; then this equals:

(γₚ∧αₘ)∘(γₚ∧βₖ) == (γₚ⌋γₚ) ∑_{μ=0}^{Min[m,k]} (−1)^{(μ+p)(m−μ)+½(μ+p)(μ+p−1)} αₘ Δ_μ βₖ
== (−1)^{mp+½p(p−1)} (γₚ⌋γₚ) ∑_{μ=0}^{Min[m,k]} (−1)^{μ(m−μ)+½μ(μ−1)} αₘ Δ_μ βₖ

Hence we can cast the right hand side as a Clifford product.

(γₚ∧αₘ)∘(γₚ∧βₖ) == (−1)^{mp+½p(p−1)} (γₚ⌋γₚ) (αₘ∘βₖ)

Or finally, by absorbing the signs into the left-hand side:

(αₘ∧γₚ)∘(γₚ†∧βₖ) == (γₚ⌋γₚ) (αₘ∘βₖ)        αₘ⌋γᵢ == βₖ⌋γᵢ == 0        12.30

Note carefully that, for this formula to be true, αₘ and βₖ are not necessarily orthogonal. However, if they are orthogonal we can express the Clifford product on the right hand side as an exterior product.

(αₘ∧γₚ)∘(γₚ†∧βₖ) == (γₚ⌋γₚ) (αₘ∧βₖ)        αᵢ·βⱼ == αᵢ·γₛ == βⱼ·γₛ == 0        12.31

12.10 Summary of Special Cases of Clifford Products

Arbitrary elements

The main results of the preceding sections concerning Clifford products of intersecting and orthogonal elements are summarized below.

γₚ is an arbitrary repeated factor

γₚ∘γₚ == γₚ†⌋γₚ == γₚ⌋γₚ†        12.32

γₚ†∘γₚ == γₚ∘γₚ† == γₚ⌋γₚ        12.33

γₚ∘γₚ∘γₚ == (γₚ†⌋γₚ) γₚ == γₚ (γₚ†⌋γₚ)        12.34

γₚ∘γₚ == (−1)^{½p(p−1)} γₚ⌋γₚ        12.35

γₚ is an arbitrary repeated factor, αₘ is arbitrary

(αₘ∧γₚ)∘γₚ† == γₚ∘(γₚ†∧αₘ) == (γₚ∧αₘ)⌋γₚ == (γₚ∘αₘ)⌋γₚ == (−1)^{mp} (αₘ∧γₚ)⌋γₚ == (−1)^{mp} (αₘ∘γₚ)⌋γₚ        12.36

γₚ is an arbitrary repeated factor, αₘ and βₖ are arbitrary

(αₘ∧γₚ)∘(γₚ†∧βₖ) == (−1)^{mp} ((αₘ∧γₚ)∘βₖ) ⌋ γₚ        12.37

(αₘ∧γₚ)∘(γₚ†∧βₖ) == (−1)^{mp} (αₘ∘(γₚ∧βₖ)) ⌋ γₚ        12.38

Arbitrary and orthogonal elements

γₚ is arbitrary, αₘ and βₖ are totally orthogonal

αₘ∘(γₚ∧βₖ) == (αₘ∘γₚ)∧βₖ        αᵢ·βⱼ == 0        12.39

(αₘ∧γₚ)∘βₖ == αₘ∧(γₚ∘βₖ)        αᵢ·βⱼ == 0        12.40

γₚ is an arbitrary repeated factor, αₘ and βₖ are totally orthogonal

(αₘ∧γₚ)∘(γₚ†∧βₖ) == (−1)^{mp} (αₘ∧(γₚ∘βₖ)) ⌋ γₚ        αᵢ·βⱼ == 0        12.41

(αₘ∧γₚ)∘(γₚ†∧βₖ) == (−1)^{mp} ((αₘ∘γₚ)∧βₖ) ⌋ γₚ        αᵢ·βⱼ == 0        12.42

αₘ and βₖ are arbitrary, γₚ is a repeated factor totally orthogonal to both αₘ and βₖ

(αₘ∧γₚ)∘(γₚ†∧βₖ) == (γₚ⌋γₚ) (αₘ∘βₖ)        αᵢ·γₛ == βⱼ·γₛ == 0        12.43

Orthogonal elements

αₘ and βₖ are totally orthogonal

αₘ∘βₖ == αₘ∧βₖ        αᵢ·βⱼ == 0        12.44

αₘ, βₖ, and γₚ are totally orthogonal

(αₘ∧γₚ)∘(γₚ†∧βₖ) == (γₚ⌋γₚ) (αₘ∧βₖ)        αᵢ·βⱼ == αᵢ·γₛ == βⱼ·γₛ == 0        12.45
Calculating with Clifford products

The previous section summarizes some alternative formulations for a Clifford product when its factors contain a common element, and when they have an orthogonality relationship between them. In this section we discuss how these relations can make it easy to calculate with these types of Clifford products.

The Clifford product of totally orthogonal elements reduces to the exterior product

If we know that all the factors of a Clifford product are totally orthogonal, then we can interchange the Clifford product and the exterior product at will. Hence, for totally orthogonal elements, the Clifford and exterior products are associative, and we do not need to include parentheses.

αₘ∘βₖ∘γₚ == (αₘ∧βₖ)∘γₚ == αₘ∘(βₖ∧γₚ) == αₘ∧βₖ∧γₚ        αᵢ·βⱼ == αᵢ·γₛ == βⱼ·γₛ == 0        12.46

Note carefully however, that this associativity does not extend to the factors of the m-, k-, or p-elements unless the factors of the m-, k-, or p-element concerned are mutually orthogonal. In which case we could, for example, then write:

α₁∘α₂∘⋯∘αₘ == α₁∧α₂∧⋯∧αₘ        αᵢ·αⱼ == 0,  i ≠ j        12.47

Example

For example, if x∧y and z are totally orthogonal, that is x·z == 0 and y·z == 0, then we can write:

(x∧y)∘z == x∧y∧z == x∧(y∘z) == −y∧(x∘z)

But since x is not necessarily orthogonal to y, these are not the same as:

x∘y∘z == (x∘y)∘z == x∘(y∘z)

We can see the difference by reducing the Clifford products to scalar products.

ToScalarProducts[{(x∧y)∘z, x∧y∧z, x∧(y∘z), −y∧(x∘z)}] /. OrthogonalSimplificationRules[{{x∧y, z}}]

{x∧y∧z, x∧y∧z, x∧y∧z, x∧y∧z}

ToScalarProducts[{x∘y∘z, (x∘y)∘z, x∘(y∘z)}] /. OrthogonalSimplificationRules[{{x∧y, z}}]

{z (x·y) + x∧y∧z, z (x·y) + x∧y∧z, z (x·y) + x∧y∧z}


12.11 Associativity of the Clifford Product

Associativity of orthogonal elements

In what follows we use formula 12.45 to show that the Clifford product as defined at the beginning of this chapter is associative for totally orthogonal elements. This result will then enable us to show that, by re-expressing the elements in terms of an orthogonal basis, the Clifford product is, in general, associative. Note that the approach we have adopted in this book has not required us to adopt ab initio the associativity of the Clifford product as an axiom, but rather shows that associativity is a natural consequence of its definition.

Formula 12.45 tells us that the Clifford product of (possibly) intersecting but otherwise totally orthogonal elements is given by:

(αₘ∧γₚ)∘(γₚ†∧βₖ) == (γₚ⌋γₚ) (αₘ∧βₖ)        αᵢ·βⱼ == αᵢ·γₛ == βⱼ·γₛ == 0

Note that it is not necessary that the factors of αₘ be orthogonal to each other. This is true also for the factors of βₖ or of γₚ.

We begin by writing a Clifford product of three elements, and associating the first pair. The elements contain factors which are specific to each element (α of grade m, β of grade k, ω of grade r), pairwise common (γ of grade p, δ of grade q, ε of grade s), and common to all three (θ of grade z). This product will therefore represent the most general case, since other cases may be obtained by letting one or more factors reduce to unity.

Associating the first pair and applying formula 12.45 twice — first with repeated factor θ∧γ, then with repeated factor ε∧δ, picking up a factor (−1)^{sk} from moving ε (of grade s) past β (of grade k) — gives:

(α∧ε∧θ∧γ)∘(γ∧θ∧β∧δ)∘(δ∧ε∧ω∧θ)
== ((θ∧γ)†⌋(θ∧γ)) (α∧ε∧β∧δ)∘(δ∧ε∧ω∧θ)
== (−1)^{sk} ((θ∧γ)†⌋(θ∧γ)) ((ε∧δ)†⌋(ε∧δ)) α∧β∧ω∧θ

On the other hand we can associate the second pair first. Contracting the repeated factor θ∧δ between the second and third factors, and then the factors γ and ε against the first, accumulates the signs (−1)^{zk+z(s+r)+sz+ks} == (−1)^{zk+zr+ks}, which after rearranging the surviving factors reduces again to (−1)^{sk}, leading to exactly the same result:

(α∧ε∧θ∧γ)∘((γ∧θ∧β∧δ)∘(δ∧ε∧ω∧θ))
== (−1)^{sk} ((θ∧γ)†⌋(θ∧γ)) ((ε∧δ)†⌋(ε∧δ)) α∧β∧ω∧θ

In sum: we have shown that the Clifford product of possibly intersecting but otherwise totally orthogonal elements is associative. Any factors which make up a given individual element are not specifically involved, and hence it is of no consequence to the associativity of the element with another whether or not these factors are mutually orthogonal.

A mnemonic formula for products of orthogonal elements

Because θ and γ are orthogonal to each other, and ε and δ are orthogonal to each other, we can rewrite the inner products in the alternative forms:

(θ∧γ)†⌋(θ∧γ) == (θ†⌋θ)(γ†⌋γ)        (ε∧δ)†⌋(ε∧δ) == (ε†⌋ε)(δ†⌋δ)

The results of the previous section may then be summarized as:

(α∧ε∧θ∧γ)∘(γ∧θ∧β∧δ)∘(δ∧ε∧ω∧θ) == (−1)^{sk} (γ†⌋γ)(θ†⌋θ)(ε†⌋ε)(δ†⌋δ) α∧β∧ω∧θ        12.48
A mnemonic for making this transformation is then:

1. Rearrange the factors in a Clifford product to get common factors adjacent to the Clifford product symbol, taking care to include any change of sign due to the quasi-commutativity of the exterior product.
2. Replace the common factors by their inner product, but with one copy being reversed.
3. If there are no common factors in a Clifford product, the Clifford product can be replaced by the exterior product.

Remember that for these relations to hold, all the elements must be totally orthogonal to each other. Note that if, in addition, the 1-element factors of any of these elements, γₚ say, are orthogonal to each other, then:

γₚ⌋γₚ == (γ₁·γ₁)(γ₂·γ₂)⋯(γₚ·γₚ)
Associativity of non-orthogonal elements

Consider the Clifford product αₘ∘βₖ∘γₚ where there are no restrictions on the factors αₘ, βₖ and γₚ. It has been shown in Section 6.3 that an arbitrary simple m-element may be expressed in terms of m orthogonal 1-element factors (the Gram-Schmidt orthogonalization process). Suppose that n such orthogonal 1-elements e₁, e₂, …, eₙ have been found, in terms of which αₘ, βₖ, and γₚ can be expressed. Writing the m-elements, k-elements and p-elements formed from the eᵢ as eⱼ, eᵣ, and eₛ respectively (of grades m, k, and p), we can write:

αₘ == ∑ aⱼ eⱼ        βₖ == ∑ bᵣ eᵣ        γₚ == ∑ cₛ eₛ

Thus we can write the Clifford product as:

αₘ∘βₖ∘γₚ == (∑ aⱼ eⱼ)∘(∑ bᵣ eᵣ)∘(∑ cₛ eₛ) == ∑∑∑ aⱼ bᵣ cₛ eⱼ∘eᵣ∘eₛ

But we have already shown in the previous section that the Clifford product of orthogonal elements is associative. That is:

(eⱼ∘eᵣ)∘eₛ == eⱼ∘(eᵣ∘eₛ) == eⱼ∘eᵣ∘eₛ

Hence we can write:

αₘ∘βₖ∘γₚ == ∑∑∑ aⱼ bᵣ cₛ eⱼ∘eᵣ∘eₛ

We thus see that the Clifford product of general elements is associative.

(αₘ∘βₖ)∘γₚ == αₘ∘(βₖ∘γₚ) == αₘ∘βₖ∘γₚ        12.49

The associativity of the Clifford product is usually taken as an axiom. However, in this book we have chosen to show that associativity is a consequence of our definition of the Clifford product in terms of exterior and interior products. In this way we can ensure that the Grassmann and Clifford algebras are consistent.

Testing the general associativity of the Clifford product


We can easily create a function in Mathematica to test the associativity of Clifford products of elements of different grades in spaces of different dimensions. Below we define a function called CliffordAssociativityTest which takes the dimension of the space and the grades of three elements as arguments. The steps are as follows:
- Declare a space of the given dimension. It does not matter what the metric is, since we do not use it.
- Create general elements of the given grades in a space of the given dimension.
- To make the code more readable, define a function which converts a product to scalar product form.
- Compute the products associated in the two different ways, and subtract them.
The associativity test is successful if a result of 0 is returned.

CliffordAssociativityTest[n_][m_, k_, p_] := Module[{X, Y, Z, S},
  &n ;
  X = CreateElement[x]; Y = CreateElement[y]; Z = CreateElement[z];
  S[x_] := ToScalarProducts[x];
  S[S[X∘Y]∘Z] - S[X∘S[Y∘Z]]]

(In the notebook the symbols x, y and z carry the underscripts m, k and p, so that X, Y and Z are general elements of those grades.)

We can either test individual cases, for example 2-elements in a 4-space:


CliffordAssociativityTest[4][2, 2, 2]
0

Or, we can tabulate a number of results together. For example, elements of all grades in all spaces of dimension 0, 1, 2, and 3.
Table[CliffordAssociativityTest[n][m, k, p],
  {n, 0, 3}, {m, 0, n}, {k, 0, n}, {p, 0, m}]
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
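The same kind of check can be sketched outside Mathematica. The following Python fragment (an illustrative stand-in, not part of the GrassmannAlgebra package) encodes the basis blades of an orthogonal n-space as bitmasks, multiplies multivectors with the Clifford product, and confirms numerically that three randomly chosen multivectors associate.

```python
# Clifford product over an orthogonal basis, blades encoded as bitmasks.
# metric[i] holds the square e_i . e_i of the i-th basis 1-element.
import random

def reorder_sign(a, b):
    # Sign from moving the basis vectors of blade b into canonical order
    # past those of blade a (counts the required transpositions).
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps & 1 else 1

def clifford(x, y, metric):
    # x, y: multivectors as {bitmask: coefficient} dictionaries.
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s = reorder_sign(ba, bb)
            common = ba & bb
            for i, g in enumerate(metric):
                if common >> i & 1:
                    s *= g          # each shared vector contributes e_i . e_i
            blade = ba ^ bb
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c != 0}

random.seed(0)
metric = [1, 1, -1]                  # any orthogonal metric will do
rand = lambda: {b: random.randint(-3, 3) for b in range(8)}
X, Y, Z = rand(), rand(), rand()
left  = clifford(clifford(X, Y, metric), Z, metric)
right = clifford(X, clifford(Y, Z, metric), metric)
print(left == right)                 # True
```

Because the metric enters only through the squares of the basis 1-elements, editing the metric list checks associativity for any orthogonal metric; the coefficients are integers, so the comparison is exact.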


12.12 Decomposing a Clifford Product


Even and odd generalized products
A Clifford product α_m ∘ β_k can be decomposed into two sums: one whose terms are generalized products of even order, (α_m ∘ β_k)_e, and one whose terms are generalized products of odd order, (α_m ∘ β_k)_o. That is:

α_m ∘ β_k = (α_m ∘ β_k)_e + (α_m ∘ β_k)_o        12.50

where, from the definition of the Clifford product:

α_m ∘ β_k = Σ_{λ=0}^{Min[m,k]} (-1)^(λ(m-λ) + λ(λ-1)/2) (α_m ∘_λ β_k)

we can take just the even generalized products to get:

(α_m ∘ β_k)_e = Σ_{λ=0,2,4,…}^{Min[m,k]} (-1)^(λ/2) (α_m ∘_λ β_k)        12.51

(Here, the limit of the sum is understood to mean the greatest even integer less than or equal to Min[m,k].) The odd generalized products are:

(α_m ∘ β_k)_o = (-1)^m Σ_{λ=1,3,5,…}^{Min[m,k]} (-1)^((λ+1)/2) (α_m ∘_λ β_k)        12.52

(In this case, the limit of the sum is understood to mean the greatest odd integer less than or equal to Min[m,k].) Note carefully that the evenness and oddness to which we refer is that of the order of the generalized product, not the grade of the Clifford product.


The Clifford product in reverse order


The expressions for the even and odd components of the Clifford product taken in the reverse order are simply obtained from the formulae above by using the quasi-commutativity of the generalized product.

(β_k ∘ α_m)_e = Σ_{λ=0,2,4,…}^{Min[m,k]} (-1)^(λ/2) (β_k ∘_λ α_m) = Σ_{λ=0,2,4,…}^{Min[m,k]} (-1)^(λ/2 + (m-λ)(k-λ)) (α_m ∘_λ β_k)

But, since λ is even, this simplifies to:

(β_k ∘ α_m)_e = (-1)^(m k) Σ_{λ=0,2,4,…}^{Min[m,k]} (-1)^(λ/2) (α_m ∘_λ β_k)        12.53

Hence, the even terms of both products are equal, except when both factors of the product are of odd grade.

(β_k ∘ α_m)_e = (-1)^(m k) (α_m ∘ β_k)_e        12.54

In a like manner we can show that for λ odd:

(β_k ∘ α_m)_o = (-1)^k Σ_{λ=1,3,5,…}^{Min[m,k]} (-1)^((λ+1)/2 + (m-λ)(k-λ)) (α_m ∘_λ β_k)

(β_k ∘ α_m)_o = -(-1)^(m k) (-1)^m Σ_{λ=1,3,5,…}^{Min[m,k]} (-1)^((λ+1)/2) (α_m ∘_λ β_k)        12.55

(β_k ∘ α_m)_o = -(-1)^(m k) (α_m ∘ β_k)_o        12.56

The decomposition of a Clifford product


Finally, therefore, we can write:

α_m ∘ β_k = (α_m ∘ β_k)_e + (α_m ∘ β_k)_o        12.57

(-1)^(m k) (β_k ∘ α_m) = (α_m ∘ β_k)_e - (α_m ∘ β_k)_o        12.58

which we can add and subtract to give:

(α_m ∘ β_k)_e = 1/2 (α_m ∘ β_k + (-1)^(m k) β_k ∘ α_m)        12.59

(α_m ∘ β_k)_o = 1/2 (α_m ∘ β_k - (-1)^(m k) β_k ∘ α_m)        12.60

Example: An m-element and a 1-element

Putting k equal to 1 gives:

(α_m ∘ β)_e = 1/2 (α_m ∘ β + (-1)^m β ∘ α_m)        12.61

(α_m ∘ β)_o = 1/2 (α_m ∘ β - (-1)^m β ∘ α_m) = -(-1)^m (α_m ∘_1 β)        12.62

Example: An m-element and a 2-element

Putting k equal to 2 gives:

(α_m ∘ β_2)_e = 1/2 (α_m ∘ β_2 + β_2 ∘ α_m) = α_m ∧ β_2 - α_m ∘_2 β_2        12.63

(α_m ∘ β_2)_o = 1/2 (α_m ∘ β_2 - β_2 ∘ α_m) = -(-1)^m (α_m ∘_1 β_2)        12.64
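As a numerical cross-check (again a Python sketch, independent of the GrassmannAlgebra package), the following fragment verifies formula 12.59 on all pairs of basis blades of a Euclidean 4-space: the symmetrized combination (a∘b + (-1)^(mk) b∘a)/2 retains exactly those terms arising from generalized products of even order λ, a term of order λ having grade m + k - 2λ.

```python
# Blades are bitmasks; for two blades the Clifford product is a single term
# whose order λ is the number of shared basis vectors.
def reorder_sign(a, b):
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps & 1 else 1

def clifford(x, y, metric):
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s = reorder_sign(ba, bb)
            common = ba & bb
            for i, g in enumerate(metric):
                if common >> i & 1:
                    s *= g
            out[ba ^ bb] = out.get(ba ^ bb, 0) + s * ca * cb
    return out

grade = lambda blade: bin(blade).count("1")
metric = [1, 1, 1, 1]
for a in range(16):
    for b in range(16):
        m, k = grade(a), grade(b)
        ab = clifford({a: 1}, {b: 1}, metric)
        ba = clifford({b: 1}, {a: 1}, metric)
        even = {bl: (c + (-1) ** (m * k) * ba.get(bl, 0)) // 2
                for bl, c in ab.items()}
        for bl, c in even.items():
            lam = (m + k - grade(bl)) // 2
            assert c == 0 or lam % 2 == 0   # only even-order terms survive
print("formula 12.59 verified on all pairs of basis blades")
```

All arithmetic is on integers, so the check is exact; replacing the symmetrized combination by the antisymmetrized one (formula 12.60) leaves exactly the odd-order terms instead.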

12.13 Clifford Algebra


Generating Clifford algebras
Up to this point we have concentrated on the definition and properties of the Clifford product. We are now able to turn our attention to the algebras that such a product is capable of generating in different spaces. Broadly speaking, an algebra can be constructed from a set of elements of a linear space which has a product operation, provided all products of the elements are again elements of the set.

In what follows we shall discuss Clifford algebras. The generating elements will be selected subsets of the basis elements of the full Grassmann algebra on the space. The product operation will be the Clifford product, which we have defined in the first part of this chapter in terms of the generalized Grassmann product. Thus, Clifford algebras may be viewed as living very much within the Grassmann algebra, relying on it for both their elements and their operations.

In this development, therefore, a particular Clifford algebra is ultimately defined by the values of the scalar products of basis vectors of the underlying Grassmann algebra, and thus by the metric on the space. In many cases we will be defining the specific Clifford algebras in terms of orthogonal (not necessarily orthonormal) basis elements. Clifford algebras include the real numbers, complex numbers, quaternions, biquaternions, and the Pauli and Dirac algebras.

Real algebra
All the products that we have developed up to this stage (the exterior, interior, generalized Grassmann, and Clifford products) possess the valuable and consistent property that, when applied to scalars, they yield results equivalent to the usual (underlying field) product. The field has been identified with a space of zero dimensions, and the scalars identified with its elements. Thus, if a and b are scalars, then:

a ∘ b = a ∧ b = a · b = a ∘_0 b = a b        12.65

Thus the real algebra is isomorphic with the Clifford algebra of a space of zero dimensions.


Clifford algebras of a 1-space


We begin our discussion of Clifford algebras with the simplest case: the Clifford algebras of 1-space. Suppose the basis for the 1-space is e1; then the basis for the associated Grassmann algebra is {1, e1}.

&1 ; BasisL[]
{1, e1}

There are only four possible Clifford products of these basis elements. We can construct a table of these products by using the GrassmannAlgebra function CliffordProductTable.

CliffordProductTable[]
{{1∘1, 1∘e1}, {e1∘1, e1∘e1}}

Usually however, to make the products easier to read and use, we will display them in the form of a palette using the GrassmannAlgebra function PaletteForm. (We can click on the palette to enter any of its expressions into the notebook.)

C1 = CliffordProductTable[]; PaletteForm[C1]

1∘1     1∘e1
e1∘1    e1∘e1

In the general case any Clifford product may be expressed in terms of exterior and interior products. We can see this by applying ToInteriorProducts to the table (although only interior (here scalar) products result in this simple case).

C2 = ToInteriorProducts[C1]; PaletteForm[C2]

1       e1
e1      e1·e1

Different Clifford algebras may be generated depending on the metric chosen for the space. In this example we can see that the types of Clifford algebra which we can generate in a 1-space depend only on the choice of a single scalar value for the scalar product e1·e1. The Clifford product of two general elements of the algebra is

(a + b e1) ∘ (c + d e1) // ToScalarProducts
a c + b d (e1·e1) + b c e1 + a d e1

It is clear that if we choose e1·e1 = -1, we have an algebra isomorphic to the complex algebra. The basis 1-element e1 then plays the role of the imaginary unit 𝕚. We can generate this particular algebra immediately by declaring the metric, and then generating the product table.


DeclareBasis[{𝕚}]; DeclareMetric[{{-1}}];
CliffordProductTable[] // ToScalarProducts // ToMetricForm // PaletteForm

1    𝕚
𝕚    -1
However, our main purpose in discussing this very simple example in so much detail is to emphasize that even in this case there are an infinite number of Clifford algebras on a 1-space, depending on the choice of the scalar value for e1·e1. The complex algebra, although it has surely proven itself to be the most useful, is just one among many. Finally, we note that all Clifford algebras possess the real algebra as their simplest even subalgebra.
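The isomorphism with the complex algebra can be seen at work numerically in a short Python sketch (illustrative only, not package code): a Clifford number of the 1-space is stored as a coefficient pair (scalar part, e1 part), and with e1·e1 = -1 its product reproduces complex multiplication.

```python
# Multiply a + b e1 and c + d e1 in the Clifford algebra of a 1-space.
# g is the metric value e1·e1; g = -1 gives the complex algebra.
def cl1_mul(x, y, g=-1):
    (a, b), (c, d) = x, y
    return (a * c + g * b * d, a * d + b * c)

z1, z2 = (2, 3), (-1, 4)                 # 2 + 3 e1 and -1 + 4 e1
w = cl1_mul(z1, z2)
print(w)                                 # (-14, 5)
print(complex(2, 3) * complex(-1, 4))    # (-14+5j)
```

Changing g changes the algebra: g = +1 gives the "double" (split-complex) numbers, and g = 0 the dual numbers, illustrating the point that the complex algebra is just one of infinitely many Clifford algebras on a 1-space.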

12.14 Clifford Algebras of a 2-Space


The Clifford product table in 2-space
In this section we explore the Clifford algebras of 2-space. As might be expected, the Clifford algebras of 2-space are significantly richer than those of 1-space. First we declare a (not necessarily orthogonal) basis for the 2-space, and generate the associated Clifford product table.
&2 ; C1 = CliffordProductTable[]; PaletteForm[C1]

1∘1            1∘e1            1∘e2            1∘(e1∧e2)
e1∘1           e1∘e1           e1∘e2           e1∘(e1∧e2)
e2∘1           e2∘e1           e2∘e2           e2∘(e1∧e2)
(e1∧e2)∘1      (e1∧e2)∘e1      (e1∧e2)∘e2      (e1∧e2)∘(e1∧e2)

To see the way in which these Clifford products reduce to generalized Grassmann products we can apply the GrassmannAlgebra function ToGeneralizedProducts.

C2 = ToGeneralizedProducts[C1]; PaletteForm[C2]

1∘₀1           1∘₀e1                      1∘₀e2                      1∘₀(e1∧e2)
e1∘₀1          e1∘₀e1 + e1∘₁e1            e1∘₀e2 + e1∘₁e2            e1∘₀(e1∧e2) + e1∘₁(e1∧e2)
e2∘₀1          e2∘₀e1 + e2∘₁e1            e2∘₀e2 + e2∘₁e2            e2∘₀(e1∧e2) + e2∘₁(e1∧e2)
(e1∧e2)∘₀1     (e1∧e2)∘₀e1 - (e1∧e2)∘₁e1  (e1∧e2)∘₀e2 - (e1∧e2)∘₁e2  (e1∧e2)∘₀(e1∧e2) - (e1∧e2)∘₁(e1∧e2) - (e1∧e2)∘₂(e1∧e2)

(Here ∘₀, ∘₁ and ∘₂ denote the generalized products of orders 0, 1 and 2, written in the notebook with the order beneath the product symbol.)
The next level of expression is obtained by applying ToInteriorProducts to reduce the generalized products to exterior and interior products.
C3 = ToInteriorProducts[C2]; PaletteForm[C3]

1          e1                  e2                  e1∧e2
e1         e1·e1               e1·e2 + e1∧e2       e1⨼(e1∧e2)
e2         e1·e2 - e1∧e2       e2·e2               e2⨼(e1∧e2)
e1∧e2      -((e1∧e2)⨼e1)       -((e1∧e2)⨼e2)       -((e1∧e2)⨼(e1∧e2)) - ⋯


Finally, we can expand the interior products to scalar products.
C4 = ToScalarProducts[C3]; PaletteForm[C4]

1          e1                          e2                          e1∧e2
e1         e1·e1                       e1·e2 + e1∧e2               (e1·e1) e2 - (e1·e2) e1
e2         e1·e2 - e1∧e2               e2·e2                       (e1·e2) e2 - (e2·e2) e1
e1∧e2      (e1·e2) e1 - (e1·e1) e2    (e2·e2) e1 - (e1·e2) e2    (e1·e2)² - (e1·e1)(e2·e2)

It should be noted that the GrassmannAlgebra operation ToScalarProducts also applies the function ToStandardOrdering. Hence scalar products are reordered to present the basis element with lower index first. For example the scalar product e2·e1 does not appear in the table above.

Product tables in a 2-space with an orthogonal basis


This is the Clifford product table for a general basis. If however we choose an orthogonal basis, in which e1·e2 is zero, the table simplifies to:

C5 = C4 /. e1·e2 → 0; PaletteForm[C5]

1          e1              e2              e1∧e2
e1         e1·e1           e1∧e2           (e1·e1) e2
e2         -(e1∧e2)        e2·e2           -(e2·e2) e1
e1∧e2      -(e1·e1) e2     (e2·e2) e1      -(e1·e1)(e2·e2)


This table defines all the Clifford algebras on the 2-space. Different Clifford algebras may be generated by choosing different metrics for the space, that is, by choosing the two scalar values e1·e1 and e2·e2. Note however that allocating scalar values a to e1·e1 and b to e2·e2 would lead to essentially the same structure as allocating b to e1·e1 and a to e2·e2.

In the rest of what follows, however, we will restrict ourselves to metrics in which e1·e1 = ±1 and e2·e2 = ±1, whence there are three cases of interest.
{e1·e1 → +1, e2·e2 → +1}
{e1·e1 → +1, e2·e2 → -1}
{e1·e1 → -1, e2·e2 → -1}

As observed previously, the case {e1·e1 → -1, e2·e2 → +1} is isomorphic to the case {e1·e1 → +1, e2·e2 → -1}, so we do not need to consider it.

Case 1: {e1·e1 → +1, e2·e2 → +1}

This is the standard case of a 2-space with an orthonormal basis. Making the replacements in the table gives:

C6 = C5 /. {e1·e1 → 1, e2·e2 → 1}; PaletteForm[C6]

1          e1            e2            e1∧e2
e1         1             e1∧e2         e2
e2         -(e1∧e2)      1             -e1
e1∧e2      -e2           e1            -1

Inspecting this table for interesting structures or substructures, we note first that the even subalgebra (that is, the algebra based on products of the even basis elements) is isomorphic to the complex algebra. For our own explorations we can use the palette to construct a product table for the subalgebra, or we can create a table using the GrassmannAlgebra function TableTemplate, and edit it by deleting the middle rows and columns.
TableTemplate[C6]

1          e1            e2            e1∧e2
e1         1             e1∧e2         e2
e2         -(e1∧e2)      1             -e1
e1∧e2      -e2           e1            -1

If we want to create a palette for the subalgebra, we have to edit the normal list (matrix) form and then apply PaletteForm. For even subalgebras we can also apply the GrassmannAlgebra function EvenCliffordProductTable which creates a Clifford product table from just the basis elements of even grade. We then set the metric we want, convert the Clifford product to scalar products, evaluate the scalar products according to the chosen metric, and display the resulting table as a palette.

C7 = EvenCliffordProductTable[]; DeclareMetric[{{1, 0}, {0, 1}}];
PaletteForm[ToMetricForm[ToScalarProducts[C7]]]

1          e1∧e2
e1∧e2      -1

Here we see that the basis element e1∧e2 has the property that its (Clifford) square is -1. We can see how this arises by carrying out the elementary operations on the product. Note that e1·e1 = e2·e2 = 1 since we have assumed the 2-space under consideration is Euclidean.

(e1∧e2) ∘ (e1∧e2) = -(e2∘e1) ∘ (e1∘e2) = -(e1·e1)(e2·e2) = -1

Thus the pair {1, e1∧e2} generates an algebra under the Clifford product operation, isomorphic to the complex algebra. It is also the basis of the even Grassmann algebra of the 2-space.

Case 2: {e1·e1 → +1, e2·e2 → -1}

Here is an example of a Clifford algebra which does not have any popular applications of which the author is aware.

C7 = C5 /. {e1·e1 → 1, e2·e2 → -1}; PaletteForm[C7]

1          e1            e2            e1∧e2
e1         1             e1∧e2         e2
e2         -(e1∧e2)      -1            e1
e1∧e2      -e2           -e1           1

Case 3: {e1·e1 → -1, e2·e2 → -1}

Our third case, in which the metric is {{-1,0},{0,-1}}, is isomorphic to the quaternions.

C8 = C5 /. {e1·e1 → -1, e2·e2 → -1}; PaletteForm[C8]

1          e1            e2            e1∧e2
e1         -1            e1∧e2         -e2
e2         -(e1∧e2)      -1            e1
e1∧e2      e2            -e1           -1

We can see this isomorphism more clearly by substituting the usual quaternion symbols (here we use Mathematica's double-struck symbols) and choosing the correspondence {e1 → 𝕚, e2 → 𝕛, e1∧e2 → 𝕜}.
C9 = C8 /. {e1 → 𝕚, e2 → 𝕛, e1∧e2 → 𝕜}; PaletteForm[C9]

1     𝕚     𝕛     𝕜
𝕚     -1    𝕜     -𝕛
𝕛     -𝕜    -1    𝕚
𝕜     𝕛     -𝕚    -1

Having verified that the structure is indeed quaternionic, let us return to the original specification in terms of the basis of the Grassmann algebra. A quaternion can be written in terms of these basis elements as:

Q = a + b e1 + c e2 + d (e1∧e2) = (a + b e1) + (c + d e1)∧e2

Now, because e1 and e2 are orthogonal, e1∧e2 is equal to e1∘e2. But for any further calculations we will need to use the Clifford product form. Hence we write

Q = a + b e1 + c e2 + d (e1∘e2) = (a + b e1) + (c + d e1)∘e2        12.66

Hence under one interpretation, each of e1 and e2 and their Clifford product e1∘e2 behaves as a different imaginary unit. Under the second interpretation, a quaternion is a complex number with imaginary unit e2, whose components are complex numbers based on e1 as the imaginary unit.
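The quaternionic structure can be confirmed with a short Python sketch (an independent illustration, not package code): writing a general element over the basis {1, e1, e2, e1∘e2} with e1·e1 = e2·e2 = -1, the multiplication rule below is exactly quaternion multiplication with 𝕚 = e1, 𝕛 = e2, 𝕜 = e1∘e2.

```python
# Clifford product in the 2-space with metric diag(-1, -1).
# An element a0 + a1 e1 + a2 e2 + a3 (e1∘e2) is stored as (a0, a1, a2, a3).
def cl02_mul(x, y):
    a0, a1, a2, a3 = x
    b0, b1, b2, b3 = y
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,      # scalar part
            a0*b1 + a1*b0 + a2*b3 - a3*b2,      # e1 part
            a0*b2 + a2*b0 - a1*b3 + a3*b1,      # e2 part
            a0*b3 + a3*b0 + a1*b2 - a2*b1)      # e1∘e2 part

one, i, j, k = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(cl02_mul(i, i))   # (-1, 0, 0, 0)   i∘i = -1
print(cl02_mul(i, j))   # (0, 0, 0, 1)    i∘j = k
print(cl02_mul(j, k))   # (0, 1, 0, 0)    j∘k = i
print(cl02_mul(k, i))   # (0, 0, 1, 0)    k∘i = j
```

These are the defining quaternion relations, so the Clifford algebra of the anti-Euclidean 2-space is indeed isomorphic to the quaternions.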

12.15 Clifford Algebras of a 3-Space


The Clifford product table in 3-space
In this section we explore the Clifford algebras of 3-space. First we declare a (not necessarily orthogonal) basis for the 3-space, and generate the associated Clifford product table. Because of the size of the table, only the first few columns are shown in the print version.


&3 ; C1 = CliffordProductTable[]; PaletteForm[C1]

(The palette is the 8×8 array of formal products x∘y, where x and y run over the basis elements 1, e1, e2, e3, e1∧e2, e1∧e3, e2∧e3, e1∧e2∧e3; it is too wide to reproduce in full here.)

Cℓ3: The Pauli algebra


Our first exploration is to Clifford algebras in Euclidean space, hence we accept the default metric which was automatically declared when we declared the basis.
MatrixForm[Metric]
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}

Since the basis elements are orthogonal, we can use the GrassmannAlgebra function CliffordToOrthogonalScalarProducts for computing the Clifford products. This function is faster for larger calculations than the more general ToScalarProducts used in the previous examples.
C2 = ToMetricForm[CliffordToOrthogonalScalarProducts[C1]]; PaletteForm[C2]

1           e1             e2             e3             e1∧e2          e1∧e3          e2∧e3          e1∧e2∧e3
e1          1              e1∧e2          e1∧e3          e2             e3             e1∧e2∧e3       e2∧e3
e2          -(e1∧e2)       1              e2∧e3          -e1            -(e1∧e2∧e3)    e3             -(e1∧e3)
e3          -(e1∧e3)       -(e2∧e3)       1              e1∧e2∧e3       -e1            -e2            e1∧e2
e1∧e2       -e2            e1             e1∧e2∧e3       -1             -(e2∧e3)       e1∧e3          -e3
e1∧e3       -e3            -(e1∧e2∧e3)    e1             e2∧e3          -1             -(e1∧e2)       e2
e2∧e3       e1∧e2∧e3       -e3            e2             -(e1∧e3)       e1∧e2          -1             -e1
e1∧e2∧e3    e2∧e3          -(e1∧e3)       e1∧e2          -e3            e2             -e1            -1

We can show that this Clifford algebra of Euclidean 3-space is isomorphic to the Pauli algebra. Pauli's representation of the algebra was by means of 2×2 matrices over the complex field:
σ1 = {{0, 1}, {1, 0}}    σ2 = {{0, -𝕚}, {𝕚, 0}}    σ3 = {{1, 0}, {0, -1}}

The isomorphism is direct and straightforward:

{I ↔ 1, σ1 ↔ e1, σ2 ↔ e2, σ3 ↔ e3, σ1.σ2 ↔ e1∧e2, σ1.σ3 ↔ e1∧e3, σ2.σ3 ↔ e2∧e3, σ1.σ2.σ3 ↔ e1∧e2∧e3}

To show this we can:

1) Replace the symbols for the basis elements of the Grassmann algebra in the table above by symbols for the Pauli matrices, and replace the exterior product operation by the matrix product operation.
2) Replace the symbols by the actual matrices, allowing Mathematica to perform the matrix products.
3) Replace the resulting Pauli matrices with the corresponding basis elements of the Grassmann algebra.
4) Verify that the resulting table is the same as the original table.

We perform these steps in sequence. Because of the size of the output, only the electronic version will show the complete tables.

Step 1: Replace symbols for entities and operations


C3 = (C2 // ReplaceNegativeUnit) /. {1 → I, e1 → σ1, e2 → σ2, e3 → σ3} /. Wedge → Dot; PaletteForm[C3]

I           σ1           σ2           σ3           σ1.σ2        σ1.σ3        σ2.σ3        σ1.σ2.σ3
σ1          I            σ1.σ2        σ1.σ3        σ2           σ3           σ1.σ2.σ3     σ2.σ3
σ2          -σ1.σ2       I            σ2.σ3        -σ1          -σ1.σ2.σ3    σ3           -σ1.σ3
σ3          -σ1.σ3       -σ2.σ3       I            σ1.σ2.σ3     -σ1          -σ2          σ1.σ2
σ1.σ2       -σ2          σ1           σ1.σ2.σ3     -I           -σ2.σ3       σ1.σ3        -σ3
σ1.σ3       -σ3          -σ1.σ2.σ3    σ1           σ2.σ3        -I           -σ1.σ2       σ2
σ2.σ3       σ1.σ2.σ3     -σ3          σ2           -σ1.σ3       σ1.σ2        -I           -σ1
σ1.σ2.σ3    σ2.σ3        -σ1.σ3       σ1.σ2        -σ3          σ2           -σ1          -I


Step 2: Substitute matrices and calculate


C4 = C3 /. {I → {{1, 0}, {0, 1}}, σ1 → {{0, 1}, {1, 0}},
            σ2 → {{0, -𝕚}, {𝕚, 0}}, σ3 → {{1, 0}, {0, -1}}};
MatrixForm[C4]

(The output, an 8×8 array of 2×2 matrices, is too large to reproduce here.)

Step 3:
Here we let the first row (column) of the product table correspond back to the basis elements of the Grassmann representation, and make the substitution throughout the whole table.

C5 = C4 /. Thread[First[C4] → BasisL[]] /. Thread[-First[C4] → -BasisL[]]
{{1, e1, e2, e3, e1∧e2, e1∧e3, e2∧e3, e1∧e2∧e3},
 {e1, 1, e1∧e2, e1∧e3, e2, e3, e1∧e2∧e3, e2∧e3},
 {e2, -(e1∧e2), 1, e2∧e3, -e1, -(e1∧e2∧e3), e3, -(e1∧e3)},
 {e3, -(e1∧e3), -(e2∧e3), 1, e1∧e2∧e3, -e1, -e2, e1∧e2},
 {e1∧e2, -e2, e1, e1∧e2∧e3, -1, -(e2∧e3), e1∧e3, -e3},
 {e1∧e3, -e3, -(e1∧e2∧e3), e1, e2∧e3, -1, -(e1∧e2), e2},
 {e2∧e3, e1∧e2∧e3, -e3, e2, -(e1∧e3), e1∧e2, -1, -e1},
 {e1∧e2∧e3, e2∧e3, -(e1∧e3), e1∧e2, -e3, e2, -e1, -1}}

Step 4: Verification
Verify that this is the table with which we began.
C5 == C2
True
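The matrix side of the isomorphism can also be spot-checked directly in Python (a sketch independent of Mathematica): the Pauli matrices square to the identity and anticommute, mirroring the behaviour of the orthonormal basis 1-elements e1, e2, e3 under the Clifford product.

```python
# Pauli matrices as nested lists; mul is plain 2x2 complex matrix product.
def mul(A, B):
    return [[sum(A[r][t] * B[t][c] for t in range(2)) for c in range(2)]
            for r in range(2)]

I2 = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

print(mul(s1, s1) == I2)                    # True: s1.s1 = I, like e1∘e1 = 1
print(mul(s1, s2) == [[1j, 0], [0, -1j]])   # True: s1.s2 = i s3
print(mul(s2, s1) == [[-1j, 0], [0, 1j]])   # True: s2.s1 = -(s1.s2)
```

The anticommutation s1.s2 = -(s2.s1) corresponds to e1∘e2 = -(e2∘e1) = e1∧e2 in the table above.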


Cℓ3+: The Quaternions

Multiplication of two even elements always generates an even element, hence the even elements form a subalgebra. In this case the basis for the subalgebra is composed of the unit 1 and the bivectors ei∧ej.

C4 = EvenCliffordProductTable[] // CliffordToOrthogonalScalarProducts // ToMetricForm;
PaletteForm[C4]

1          e1∧e2         e1∧e3         e2∧e3
e1∧e2      -1            -(e2∧e3)      e1∧e3
e1∧e3      e2∧e3         -1            -(e1∧e2)
e2∧e3      -(e1∧e3)      e1∧e2         -1

From this multiplication table we can see that the even subalgebra of the Clifford algebra of 3-space is isomorphic to the quaternions. To see the isomorphism more clearly, replace the bivectors by 𝕚, 𝕛, and 𝕜.

C5 = C4 /. {e2∧e3 → 𝕚, e1∧e3 → 𝕛, e1∧e2 → 𝕜}; PaletteForm[C5]

1     𝕜     𝕛     𝕚
𝕜     -1    -𝕚    𝕛
𝕛     𝕚     -1    -𝕜
𝕚     -𝕛    𝕜     -1

The Complex subalgebra


The subalgebra generated by the pair of basis elements {1, e1∧e2∧e3} is isomorphic to the complex algebra. Although, under the Clifford product, each of the 2-elements behaves like an imaginary unit, it is only the 3-element e1∧e2∧e3 that also commutes with each of the other basis elements.

Biquaternions
We now explore the metric in which {e1·e1 → -1, e2·e2 → -1, e3·e3 → -1}. To generate the Clifford product table for this metric we enter:


DeclareMetric[DiagonalMatrix[{-1, -1, -1}]];
C6 = CliffordProductTable[] // ToScalarProducts // ToMetricForm; PaletteForm[C6]

1           e1             e2             e3             e1∧e2          e1∧e3          e2∧e3          e1∧e2∧e3
e1          -1             e1∧e2          e1∧e3          -e2            -e3            e1∧e2∧e3       -(e2∧e3)
e2          -(e1∧e2)       -1             e2∧e3          e1             -(e1∧e2∧e3)    -e3            e1∧e3
e3          -(e1∧e3)       -(e2∧e3)       -1             e1∧e2∧e3       e1             e2             -(e1∧e2)
e1∧e2       e2             -e1            e1∧e2∧e3       -1             e2∧e3          -(e1∧e3)       -e3
e1∧e3       e3             -(e1∧e2∧e3)    -e1            -(e2∧e3)       -1             e1∧e2          e2
e2∧e3       e1∧e2∧e3       e3             -e2            e1∧e3          -(e1∧e2)       -1             -e1
e1∧e2∧e3    -(e2∧e3)       e1∧e3          -(e1∧e2)       -e3            e2             -e1            1

Note that every one of the basis elements of the Grassmann algebra (except 1 and e1∧e2∧e3) acts as an imaginary unit under the Clifford product. This enables us to build up a general element of the algebra as a sum of nested complex numbers, or of nested quaternions. To show this, we begin by writing a general element in terms of the basis elements of the Grassmann algebra:

QQ = a + b e1 + c e2 + d e3 + e e1∧e2 + f e1∧e3 + g e2∧e3 + h e1∧e2∧e3

Or, as previously argued, since the basis 1-elements are orthogonal, we can replace the exterior product by the Clifford product and rearrange the terms in the sum to give

QQ = (a + b e1 + c e2 + e e1∘e2) + (d + f e1 + g e2 + h e1∘e2)∘e3

This sum may be viewed as the complex sum of two quaternions

Q1 = a + b e1 + c e2 + e e1∘e2 = (a + b e1) + (c + e e1)∘e2
Q2 = d + f e1 + g e2 + h e1∘e2 = (d + f e1) + (g + h e1)∘e2
QQ = Q1 + Q2∘e3
Historical Note

This complex sum of two quaternions was called a biquaternion by Hamilton [Hamilton, Elements of Quaternions, p 133], but Clifford, in a footnote to his Preliminary Sketch of Biquaternions [Clifford, Mathematical Papers, Chelsea], says 'Hamilton's biquaternion is a quaternion with complex coefficients; but it is convenient (as Prof. Pierce remarks) to suppose from the beginning that all scalars may be complex. As the word is thus no longer wanted in its old meaning, I have made bold to use it in a new one.'

Hamilton uses the word biscalar for a complex number, and bivector [p 225, Elements of Quaternions] for a complex vector, that is, for a vector x + 𝕚y, where x and y are vectors; and the word biquaternion for a complex quaternion q0 + 𝕚q1, where q0 and q1 are quaternions. He emphasizes here that '𝕚 is the (scalar) imaginary of algebra, and not a symbol for a geometrically real right versor'. Hamilton introduces his biquaternion as the quotient of a bivector (his usage) by a (real) vector.

12.16 Clifford Algebras of a 4-Space


The Clifford product table in 4-space
In this section we explore the Clifford algebras of 4-space. First we declare a (not necessarily orthogonal) basis for the 4-space, and generate the associated Clifford product table. Because of the size of the table, only the first few columns are shown in the print version.
&4 ; C1 = CliffordProductTable[]; PaletteForm[C1]

(The palette is the 16×16 array of formal products x∘y, where x and y run over the basis elements of the Grassmann algebra of 4-space; it is too wide to reproduce in full here.)

Cℓ4: The Clifford algebra of Euclidean 4-space
Our first exploration is to the Clifford algebra of Euclidean space, hence we accept the default metric which was automatically declared when we declared the basis.
Metric
{{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}}

Since the basis elements are orthogonal, we can use the faster GrassmannAlgebra function CliffordToOrthogonalScalarProducts for computing the Clifford products.
C2 = CliffordToOrthogonalScalarProducts[C1] // ToMetricForm; PaletteForm[C2]

(The 16×16 palette of products of the basis elements of the Grassmann algebra of 4-space is too wide to reproduce in full here.)

This table is well off the page in the printed version, but we can condense the notation for the basis elements of Λ by replacing e_i∧…∧e_j by e_i,…,j. To do this we can use the GrassmannAlgebra function ToBasisIndexedForm. For example


1 - e1∧e2∧e3 // ToBasisIndexedForm
1 - e1,2,3

To display the table in condensed notation we make up a rule for each of the basis elements.
C3 = C2 /. Reverse[Thread[BasisL[] → ToBasisIndexedForm[BasisL[]]]]; PaletteForm[C3]

(Even in the condensed e_i,…,j notation the 16×16 palette is too wide to reproduce in full here.)

Cℓ4+: The even Clifford algebra of Euclidean 4-space

The even subalgebra is composed of the unit 1, the bivectors ei∧ej, and the single basis 4-element e1∧e2∧e3∧e4.

C3 = EvenCliffordProductTable[] // CliffordToOrthogonalScalarProducts // ToMetricForm;
PaletteForm[C3]

1              e1∧e2            e1∧e3            e1∧e4            e2∧e3            e2∧e4            e3∧e4            e1∧e2∧e3∧e4
e1∧e2          -1               -(e2∧e3)         -(e2∧e4)         e1∧e3            e1∧e4            e1∧e2∧e3∧e4      -(e3∧e4)
e1∧e3          e2∧e3            -1               -(e3∧e4)         -(e1∧e2)         -(e1∧e2∧e3∧e4)   e1∧e4            e2∧e4
e1∧e4          e2∧e4            e3∧e4            -1               e1∧e2∧e3∧e4      -(e1∧e2)         -(e1∧e3)         -(e2∧e3)
e2∧e3          -(e1∧e3)         e1∧e2            e1∧e2∧e3∧e4      -1               -(e3∧e4)         e2∧e4            -(e1∧e4)
e2∧e4          -(e1∧e4)         -(e1∧e2∧e3∧e4)   e1∧e2            e3∧e4            -1               -(e2∧e3)         e1∧e3
e3∧e4          e1∧e2∧e3∧e4      -(e1∧e4)         e1∧e3            -(e2∧e4)         e2∧e3            -1               -(e1∧e2)
e1∧e2∧e3∧e4    -(e3∧e4)         e2∧e4            -(e2∧e3)         -(e1∧e4)         e1∧e3            -(e1∧e2)         1

Cℓ1,3: The Dirac algebra

To generate the Dirac algebra we need the Minkowski metric, in which there is one time-like basis element, e1·e1 → +1, and three space-like basis elements, ei·ei → -1.

{e1·e1 → 1, e2·e2 → -1, e3·e3 → -1, e4·e4 → -1}

To generate the Clifford product table for this metric we enter:

&4 ; DeclareMetric[DiagonalMatrix[{1, -1, -1, -1}]]
{{1, 0, 0, 0}, {0, -1, 0, 0}, {0, 0, -1, 0}, {0, 0, 0, -1}}


C6 = CliffordToOrthogonalScalarProducts[C1] // ToMetricForm; PaletteForm[C6]

(The 16×16 palette for the Minkowski metric is too wide to reproduce in full here.)

To confirm that this structure is isomorphic to the Dirac algebra we can go through the same procedure that we followed in the case of the Pauli algebra.


Step 1: Replace symbols for entities and operations


C7 = (C6 // ReplaceNegativeUnit) /. {1 → I, e1 → γ0, e2 → γ1, e3 → γ2, e4 → γ3} /. Wedge → Dot; PaletteForm[C7]

(The 16×16 palette in the γ symbols is too wide to reproduce in full here.)

Step 2: Substitute matrices and calculate


C8 = C7 /. {I → {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}},
  γ0 → {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, -1, 0}, {0, 0, 0, -1}},
  γ1 → {{0, 0, 0, 1}, {0, 0, 1, 0}, {0, -1, 0, 0}, {-1, 0, 0, 0}},
  γ2 → {{0, 0, 0, -𝕚}, {0, 0, 𝕚, 0}, {0, 𝕚, 0, 0}, {-𝕚, 0, 0, 0}},
  γ3 → {{0, 0, 1, 0}, {0, 0, 0, -1}, {-1, 0, 0, 0}, {0, 1, 0, 0}}};
MatrixForm[C8]

(The output, a 16×16 array of 4×4 matrices, is too large to reproduce here.)

Step 3:
C9 = C8 . Thread@First@C8 D BasisL@DD . Thread@- First@C8 D - BasisL@DD;

Step 4: Verification

C9 == C6
True

Cl0,4: The Clifford algebra of complex quaternions


To generate the Clifford algebra of complex quaternions we need a metric in which the scalar products of the orthogonal basis elements with themselves are all equal to -1; that is, ei ei == -1 for each i (and ei ej == 0 for i ≠ j):

{e1 e1 → -1, e2 e2 → -1, e3 e3 → -1, e4 e4 → -1}

To generate the Clifford product table for this metric we enter:


&4; DeclareMetric[DiagonalMatrix[{-1, -1, -1, -1}]]
{{-1, 0, 0, 0}, {0, -1, 0, 0}, {0, 0, -1, 0}, {0, 0, 0, -1}}
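As a cross-check on the behaviour of this metric, here is a minimal Python sketch (illustrative only, not part of the GrassmannAlgebra package) of the Clifford product of basis blades under a diagonal metric. It confirms that each basis vector, and each basis bivector, squares to -1, which is what makes the nested quaternion construction below possible.

```python
# Illustrative sketch: Clifford product of basis blades for a diagonal metric.
# A blade is a tuple of strictly increasing basis indices; g maps i -> ei.ei.
def blade_mul(a, b, g):
    """Multiply basis blades a and b; return (sign, resulting blade)."""
    s, seq, i = 1, list(a) + list(b), 0
    while i < len(seq) - 1:
        if seq[i] == seq[i + 1]:          # contract a repeated index via the metric
            s *= g[seq[i]]
            del seq[i:i + 2]
            i = max(i - 1, 0)
        elif seq[i] > seq[i + 1]:         # swap out-of-order indices: sign flips
            seq[i], seq[i + 1] = seq[i + 1], seq[i]
            s = -s
            i = max(i - 1, 0)
        else:
            i += 1
    return s, tuple(seq)

g = {i: -1 for i in (1, 2, 3, 4)}         # the Cl(0,4) metric declared above
print(blade_mul((1,), (1,), g))           # e1 e1 -> (-1, ())
print(blade_mul((1, 2), (1, 2), g))       # (e1 e2)(e1 e2) -> (-1, ())
```

Both products reduce to -1 times the empty blade, i.e. the scalar -1, so vectors and bivectors alike behave as imaginary units under this metric.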

C6 = CliffordToOrthogonalScalarProducts[C1] // ToMetricForm; PaletteForm[C6]

[Table: the 16×16 Clifford product table for this metric. Its first row and column list the basis elements 1, e1, e2, e3, e4, e1 e2, ..., e1 e2 e3 e4; the body of the table, giving their pairwise Clifford products (with each vector squaring to -1), is scrambled in this extraction.]

Note that both the vectors and the bivectors act as imaginary units under the Clifford product. This enables us to build up a general element of the algebra as a sum of nested complex numbers, quaternions, or complex quaternions. To show this, we begin by writing a general element in terms of the basis elements of the Grassmann algebra:

X = CreateGrassmannNumber[a]
a0 + e1 a1 + e2 a2 + e3 a3 + e4 a4 + a5 e1∧e2 + a6 e1∧e3 + a7 e1∧e4 + a8 e2∧e3 + a9 e2∧e4 + a10 e3∧e4 + a11 e1∧e2∧e3 + a12 e1∧e2∧e4 + a13 e1∧e3∧e4 + a14 e2∧e3∧e4 + a15 e1∧e2∧e3∧e4

We can then factor this expression to display it as a nested sum of numbers of the form Ck == ai + aj e1. (Of course, any basis element will do as the base of the factorization.)

X == (a0 + a1 e1) + (a2 + a5 e1)∧e2 + ((a3 + a6 e1) + (a8 + a11 e1)∧e2)∧e3 + ((a4 + a7 e1) + (a9 + a12 e1)∧e2 + ((a10 + a13 e1) + (a14 + a15 e1)∧e2)∧e3)∧e4

Which can be rewritten in terms of the Ci as

X == C1 + C2∧e2 + (C3 + C4∧e2)∧e3 + (C5 + C6∧e2 + (C7 + C8∧e2)∧e3)∧e4

Again, we can rewrite each of these elements as Qk == Ci + Cj e2 to get


X == Q1 + Q2∧e3 + (Q3 + Q4∧e3)∧e4

Finally, we write QQk == Qi + Qj e3 to get


X == QQ1 + QQ2∧e4

Since the basis 1-elements are orthogonal, we can replace the exterior product by the Clifford product to give

X == (a0 + a1 e1) + (a2 + a5 e1) e2 + ((a3 + a6 e1) + (a8 + a11 e1) e2) e3 + ((a4 + a7 e1) + (a9 + a12 e1) e2 + ((a10 + a13 e1) + (a14 + a15 e1) e2) e3) e4
  == C1 + C2 e2 + (C3 + C4 e2) e3 + (C5 + C6 e2 + (C7 + C8 e2) e3) e4
  == Q1 + Q2 e3 + (Q3 + Q4 e3) e4
  == QQ1 + QQ2 e4
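The nested regrouping above is easy to verify numerically. The following Python sketch (illustrative only, not the GrassmannAlgebra package) implements the exterior product on basis subsets of a 4-space and checks that the nested sum of pairs Ck == ai + aj e1 reproduces the general element X:

```python
# Illustrative sketch: verify the nested factorization of a general element
# of the Grassmann algebra of 4-space. Grassmann numbers are dicts mapping
# sorted index tuples (basis blades) to coefficients.
import random

def wedge_sign(a, b):
    """Sign of merging sorted index tuples a and b; 0 if they share an index."""
    if set(a) & set(b):
        return 0
    s = 1
    for i in a:                       # each index of b below i must pass over i
        s *= (-1) ** sum(1 for j in b if j < i)
    return s

def wedge(x, y):
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s = wedge_sign(a, b)
            if s:
                k = tuple(sorted(a + b))
                out[k] = out.get(k, 0) + s * ca * cb
    return out

def plus(*xs):
    out = {}
    for x in xs:
        for k, v in x.items():
            out[k] = out.get(k, 0) + v
    return out

basis = [(), (1,), (2,), (3,), (4,), (1, 2), (1, 3), (1, 4), (2, 3), (2, 4),
         (3, 4), (1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4), (1, 2, 3, 4)]
a = [random.randint(-9, 9) for _ in range(16)]
X = {b: c for b, c in zip(basis, a)}      # general element, as in the text

e2, e3, e4 = {(2,): 1}, {(3,): 1}, {(4,): 1}
C = lambda i, j: {(): a[i], (1,): a[j]}   # Ck == ai + aj e1
C1, C2, C3, C4 = C(0, 1), C(2, 5), C(3, 6), C(8, 11)
C5, C6, C7, C8 = C(4, 7), C(9, 12), C(10, 13), C(14, 15)
nested = plus(C1, wedge(C2, e2),
              wedge(plus(C3, wedge(C4, e2)), e3),
              wedge(plus(C5, wedge(C6, e2),
                         wedge(plus(C7, wedge(C8, e2)), e3)), e4))
print(nested == X)                        # -> True
```

Because the nesting always multiplies by a higher-indexed basis element on the right, every sign in the expansion is +1, which is why the regrouping works term for term.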

12.17 Rotations
To be completed

13 Exploring Grassmann Matrix Algebra

13.1 Introduction
This chapter introduces the concept of a matrix of Grassmann numbers, which we will call a Grassmann matrix. Wherever it makes sense, the operations discussed will also work for listed collections of the components of tensors of any order, as per the Mathematica representation: a set of lists nested to a certain number of levels. Thus, for example, an operation may also work for vectors, or for a list containing only one element.

We begin by discussing some quick methods for generating matrices of Grassmann numbers, particularly matrices of symbolic Grassmann numbers, where it can become tedious to define all the scalar factors involved. We then discuss the common algebraic operations: multiplication by scalars, addition, multiplication, taking the complement, finding the grade of elements, simplification, taking components, and determining the type of elements involved.

Separate sections are devoted to the notions of transpose, determinant and adjoint of a Grassmann matrix. These notions do not carry over directly into the algebra of Grassmann matrices due to the non-commutative nature of Grassmann numbers.

Matrix powers are then discussed, and the inverse of a Grassmann matrix is defined in terms of its positive integer powers. Non-integer powers are defined for matrices whose bodies have distinct eigenvalues, and the determination of the eigensystem of a Grassmann matrix is discussed for this class of matrix.

Finally, functions of Grassmann matrices are discussed, based on the determination of their eigensystems. We show that relationships that we expect of scalars, for example Log[Exp[A]] == A or Sin[A]^2 + Cos[A]^2 == 1, can still apply to Grassmann matrices.

13.2 Creating Matrices


Creating general symbolic matrices
Unless a Grassmann matrix is particularly simple, and particularly when a large symbolic matrix is required, it is useful to be able to generate the initial form of the matrix automatically. We do this with the GrassmannAlgebra CreateMatrixForm function.

? CreateMatrixForm
CreateMatrixForm[D][X,S] constructs an array of the specified dimensions with copies of the expression X formed by indexing its scalars and variables with subscripts. S is an optional list of excluded symbols. D is a list of dimensions of the array (which may be symbolic for lists and matrices). Note that an indexed scalar is not recognised as a scalar unless it has an underscripted 0, or is declared as a scalar.

Suppose we require a 2×2 matrix of general Grassmann numbers of the form X (given below) in a 2-space.

X = a + b e1 + c e2 + d e1∧e2
a + b e1 + c e2 + d e1∧e2

We declare the 2-space, and then create a 2!2 matrix of the form X.
&2 ; M = CreateMatrixForm@82, 2<D@XD; MatrixForm@MD a1,1 + e1 b1,1 + e2 c1,1 + d1,1 e1 e2 a1,2 + e1 b1,2 + e2 c1,2 + d1,2 e1 e2 a2,1 + e1 b2,1 + e2 c2,1 + d2,1 e1 e2 a2,2 + e1 b2,2 + e2 c2,2 + d2,2 e1 e2

We need to declare the extra scalars that we have formed. This can be done with just one pattern (provided we wish all symbols with this pattern to be scalar).
DeclareExtraScalars@8Subscript@_, _, _D<D :a, b, c, d, e, f, g, h, %, H_ _L ?InnerProductQ, __,_ , _>
0

Creating matrices with specific coefficients


To enter a Grassmann matrix with specific coefficients, enter a placeholder as the kernel symbol. An output will be returned, which will be converted to an input form as soon as you attempt to edit it. You can select a placeholder and enter the value you want; the Tab key can be used to select the next placeholder. The steps are: first, generate the form you want.
Y = CreateGrassmannNumber@D + e1 + e2 + e1 e2 CreateMatrixForm@82, 2<D@YD 88 + e1 + e2 + e1 e2 , + e1 + e2 + e1 e2 <, 8 + e1 + e2 + e1 e2 , + e1 + e2 + e1 e2 <<

Start editing the form above.

882 + e1 + e2 + e1 e2 , + e1 + e2 + e1 e2 <, 8 + e1 + e2 + e1 e2 , + e1 + e2 + e1 e2 <<

Finish off your editing and give the matrix a name.


A = {{2 + 3 e1, e1 + 6 e2}, {5 - 9 e1∧e2, e2 + e1∧e2}}; MatrixForm[A]
( 2 + 3 e1      e1 + 6 e2
  5 - 9 e1∧e2   e2 + e1∧e2 )

Creating matrices with variable elements


To create a matrix in which the Grassmann numbers are expressed in terms of x, y, and z, say, you can temporarily declare the basis as {x,y,z} before generating the form with CreateGrassmannNumber. For example:
B = Basis; DeclareBasis@8x, y, z<D 8x, y, z< Z = CreateGrassmannNumber@D + x + y + z + xy + xz + yz + xyz

Here we create a vector of elements of the form Z.


CreateMatrixForm@83<D@ZD MatrixForm + x + y + z + xy + xz + yz + xyz + x + y + z + xy + xz + yz + xyz + x + y + z + xy + xz + yz + xyz

Remember to return to your original basis.


DeclareBasis@BD 8e1 , e2 , e3 <

13.3 Manipulating Matrices


Matrix simplification
Simplification of the elements of a matrix may be effected by using GrassmannSimplify. For example, suppose we are working in a Euclidean 3-space and have a matrix B:

&3 ; B = 881 + x x, 2 + e1 e2 <, 8e1 e2 , 3 + x y z w<<; MatrixForm@BD 1 + xx 2 + e1 e2 e1 e2 3 + x y z w

To simplify the elements of the matrix we enter:


% @BD 881, 2 + e1 e2 <, 8e3 , 3<<

If we wish to simplify any orthogonal elements, we can use ToMetricForm. Since we are in a Euclidean space the scalar product term is zero.
ToMetricForm@%@BDD 881, 2<, 8e3 , 3<<

The grades of the elements of a matrix


The GrassmannAlgebra function Grade returns a matrix with each element replaced by a list of the different grades of its components. Thus, if an element is composed of several components of the same grade, only that one grade will be returned for that element. For example, take the matrix A created in the previous section:
A MatrixForm K 2 + 3 e1 e1 + 6 e2 O 5 - 9 e1 e2 e2 + e1 e2

Grade@AD MatrixForm K 80, 1< 1 O 80, 2< 81, 2<

Taking components of a matrix


We take the matrix M created in Section 13.2.
M MatrixForm a1,1 + e1 b1,1 + e2 c1,1 + d1,1 e1 e2 a1,2 + e1 b1,2 + e2 c1,2 + d1,2 e1 e2 a2,1 + e1 b2,1 + e2 c2,1 + d2,1 e1 e2 a2,2 + e1 b2,2 + e2 c2,2 + d2,2 e1 e2

The body of M is given by:

Body@MD MatrixForm a1,1 a1,2 a2,1 a2,2

and its soul by:


Soul@MD MatrixForm e1 b1,1 + e2 c1,1 + d1,1 e1 e2 e1 b1,2 + e2 c1,2 + d1,2 e1 e2 e1 b2,1 + e2 c2,1 + d2,1 e1 e2 e1 b2,2 + e2 c2,2 + d2,2 e1 e2

The even components are extracted by using EvenGrade.


EvenGrade@MD MatrixForm a1,1 + d1,1 e1 e2 a1,2 + d1,2 e1 e2 a2,1 + d2,1 e1 e2 a2,2 + d2,2 e1 e2

The odd components are extracted by using OddGrade.


OddGrade@MD MatrixForm e1 b1,1 + e2 c1,1 e1 b1,2 + e2 c1,2 e1 b2,1 + e2 c2,1 e1 b2,2 + e2 c2,2

Finally, we can extract the elements of any specific grade using ExtractGrade.
ExtractGrade@2D@MD MatrixForm d1,1 e1 e2 d1,2 e1 e2 d2,1 e1 e2 d2,2 e1 e2

Determining the type of elements


We take the matrix A created in Section 13.2.
A MatrixForm K 2 + 3 e1 e1 + 6 e2 O 5 - 9 e1 e2 e2 + e1 e2

EvenGradeQ, OddGradeQ, EvenSoulQ and GradeQ all interrogate the individual elements of a matrix. EvenGradeQ@AD MatrixForm K False False O True False

OddGradeQ@AD MatrixForm K False True O False False

EvenSoulQ@AD MatrixForm K False False O False False

GradeQ@1D@AD MatrixForm K False True O False False

To check the type of a complete matrix we can ask whether it is free of either True or False values.
FreeQ@GradeQ@1D@AD, FalseD False FreeQ@EvenSoulQ@AD, TrueD True

13.4 Matrix Algebra


Multiplication by scalars
Due to the Listable attribute of the Times operation in Mathematica, multiplication by scalars behaves as expected. For example, multiplying the Grassmann matrix A created in Section 13.2 by the scalar a gives:
A MatrixForm K 2 + 3 e1 e1 + 6 e2 O 5 - 9 e1 e2 e2 + e1 e2

a A MatrixForm K a H2 + 3 e1 L a He1 + 6 e2 L O a H5 - 9 e1 e2 L a He2 + e1 e2 L

Matrix addition
Due to the Listable attribute of the Plus operation in Mathematica, matrices of the same size add automatically.

A+M 882 + 3 e1 + a1,1 + e1 b1,1 + e2 c1,1 + d1,1 e1 e2 , e1 + 6 e2 + a1,2 + e1 b1,2 + e2 c1,2 + d1,2 e1 e2 <, 85 + a2,1 + e1 b2,1 + e2 c2,1 - 9 e1 e2 + d2,1 e1 e2 , e2 + a2,2 + e1 b2,2 + e2 c2,2 + e1 e2 + d2,2 e1 e2 <<

We can collect like terms by using GrassmannSimplify.


%@ A + M D 882 + a1,1 + e1 H3 + b1,1 L + e2 c1,1 + d1,1 e1 e2 , a1,2 + e1 H1 + b1,2 L + e2 H6 + c1,2 L + d1,2 e1 e2 <, 85 + a2,1 + e1 b2,1 + e2 c2,1 + H- 9 + d2,1 L e1 e2 , a2,2 + e1 b2,2 + e2 H1 + c2,2 L + H1 + d2,2 L e1 e2 <<

The complement of a matrix


Since the complement is a linear operation, the complement of a matrix is just the matrix of the complements of the elements.
M MatrixForm a1,1 + e1 b1,1 + e2 c1,1 + d1,1 e1 e2 a1,2 + e1 b1,2 + e2 c1,2 + d1,2 e1 e2 a2,1 + e1 b2,1 + e2 c2,1 + d2,1 e1 e2 a2,2 + e1 b2,2 + e2 c2,2 + d2,2 e1 e2

This can be simplified by using GrassmannSimplify. In a Euclidean space this gives:


%@ M D MatrixForm e2 b1,1 - e1 c1,1 + d1,1 + a1,1 e1 e2 e2 b1,2 - e1 c1,2 + d1,2 + a1,2 e1 e2 e2 b2,1 - e1 c2,1 + d2,1 + a2,1 e1 e2 e2 b2,2 - e1 c2,2 + d2,2 + a2,2 e1 e2
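The element-wise rule can be sketched in a few lines of Python (illustrative only, not the GrassmannAlgebra package). In Euclidean 2-space the complement maps 1 → e1∧e2, e1 → e2, e2 → -e1 and e1∧e2 → 1, exactly the pattern visible in the simplified output above:

```python
# Illustrative sketch: the Euclidean complement in 2-space, applied element
# by element to a matrix. Grassmann numbers are dicts {basis tuple: coeff}.
COMPLEMENT = {(): ((1, 2), 1),    # complement of 1 is e1^e2
              (1,): ((2,), 1),    # complement of e1 is e2
              (2,): ((1,), -1),   # complement of e2 is -e1
              (1, 2): ((), 1)}    # complement of e1^e2 is 1

def complement(x):
    out = {}
    for k, v in x.items():
        nk, s = COMPLEMENT[k]
        out[nk] = out.get(nk, 0) + s * v
    return out

def matrix_complement(M):
    return [[complement(x) for x in row] for row in M]

# a + b e1 + c e2 + d e1^e2  ->  d + b e2 - c e1 + a e1^e2
x = {(): 5, (1,): 2, (2,): 3, (1, 2): 7}
print(complement(x))
```

Note that applying the complement twice to a 1-element in 2-space returns its negative, consistent with the general sign rule for repeated complements.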

Matrix products
As with the product of two Grassmann numbers, entering the product of two matrices of Grassmann numbers does not effect any multiplication. For example, take the matrix A in a 2-space created in the previous section.
MatrixForm@AD MatrixForm@AD K 2 + 3 e1 e1 + 6 e2 2 + 3 e1 e1 + 6 e2 OK O 5 - 9 e1 e2 e2 + e1 e2 5 - 9 e1 e2 e2 + e1 e2

To multiply out the product A∧A, we use the GrassmannAlgebra MatrixProduct function, which performs the multiplication. The shorthand for MatrixProduct is a script capital M (ℳ), obtained by typing Esc scM Esc.

B@ A A D 88H2 + 3 e1 L H2 + 3 e1 L + He1 + 6 e2 L H5 - 9 e1 e2 L, H2 + 3 e1 L He1 + 6 e2 L + He1 + 6 e2 L He2 + e1 e2 L<, 8H5 - 9 e1 e2 L H2 + 3 e1 L + He2 + e1 e2 L H5 - 9 e1 e2 L, H5 - 9 e1 e2 L He1 + 6 e2 L + He2 + e1 e2 L He2 + e1 e2 L<<

To take the product and simplify at the same time, apply the GrassmannSimplify function.
%@B@A ADD MatrixForm K 4 + 17 e1 + 30 e2 2 e1 + 12 e2 + 19 e1 e2 O 10 + 15 e1 + 5 e2 - 13 e1 e2 5 e1 + 30 e2

Regressive products
We can also take the regressive product.
%@B@A ADD MatrixForm K e1 e2 H- 9 e1 - 54 e2 L e1 e2 H19 + e1 + 6 e2 L O e1 e2 H- 13 - 27 e1 - 9 e2 - 9 e1 e2 L e1 e2 H- 9 e1 - 52 e2 + e1 e2 L

Interior products
We calculate an interior product of two matrices in the same way.
P = %@B@A ADD 884 + 9 He1 e1 L + 11 e1 + 30 e2 , 3 He1 e1 L + 19 He1 e2 L + 6 He2 e2 L<, 810 - 27 He1 e2 e1 L - 9 He1 e2 e1 e2 L + 5 e2 - 13 e1 e2 , e2 e2 - 9 He1 e2 e1 L - 53 He1 e2 e2 L + e1 e2 e1 e2 <<

To convert the interior products to scalar products, use the GrassmannAlgebra function ToScalarProducts on the matrix.
P1 = ToScalarProducts@PD 984 + 9 He1 e1 L + 11 e1 + 30 e2 , 3 He1 e1 L + 19 He1 e2 L + 6 He2 e2 L<, 910 + 9 He1 e2 L2 - 9 He1 e1 L He2 e2 L + 27 He1 e2 L e1 + 5 e2 27 He1 e1 L e2 - 13 e1 e2 , - He1 e2 L2 + e2 e2 + He1 e1 L He2 e2 L + 9 He1 e2 L e1 + 53 He2 e2 L e1 - 9 He1 e1 L e2 - 53 He1 e2 L e2 ==

This product can be simplified further if the values of the scalar products of the basis elements are known. In a Euclidean space, we obtain:
ToMetricForm@P1D MatrixForm K 13 + 11 e1 + 30 e2 9 O 1 - 22 e2 - 13 e1 e2 2 + 53 e1 - 9 e2

Clifford products
The Clifford product of two matrices can be calculated in stages. First we multiply the elements and expand and simplify the terms with GrassmannSimplify.
C1 = %@B@A ADD 882 2 + 3 2 e1 + 3 e1 2 + e1 5 + 9 e1 e1 - 9 e1 He1 e2 L + 6 e2 5 - 54 e2 He1 e2 L, 2 e1 + 6 2 e2 + 3 e1 e1 + 19 e1 e2 + e1 He1 e2 L + 6 e2 e2 + 6 e2 He1 e2 L<, 85 2 + 3 5 e1 + e2 5 - 9 e2 He1 e2 L - 9 He1 e2 L 2 + He1 e2 L 5 - 27 He1 e2 L e1 - 9 He1 e2 L He1 e2 L, 5 e1 + 6 5 e2 + e2 e2 + e2 He1 e2 L - 9 He1 e2 L e1 53 He1 e2 L e2 + He1 e2 L He1 e2 L<<

Next we can convert the Clifford products into exterior and scalar products with ToScalarProducts.
C2 = ToScalarProducts@C1D 984 + 9 He1 e1 L + 17 e1 + 9 He1 e2 L e1 + 54 He2 e2 L e1 + 30 e2 - 9 He1 e1 L e2 - 54 He1 e2 L e2 , 3 He1 e1 L + 19 He1 e2 L + 6 He2 e2 L + 2 e1 - He1 e2 L e1 6 He2 e2 L e1 + 12 e2 + He1 e1 L e2 + 6 He1 e2 L e2 + 19 e1 e2 <, 910 - 9 He1 e2 L2 + 9 He1 e1 L He2 e2 L + 15 e1 - 27 He1 e2 L e1 + 9 He2 e2 L e1 + 5 e2 + 27 He1 e1 L e2 - 9 He1 e2 L e2 - 13 e1 e2 , He1 e2 L2 + e2 e2 - He1 e1 L He2 e2 L + 5 e1 - 9 He1 e2 L e1 54 He2 e2 L e1 + 30 e2 + 9 He1 e1 L e2 + 54 He1 e2 L e2 ==

Finally, if we are in a Euclidean space, this simplifies significantly.


ToMetricForm@C2D MatrixForm K 13 + 71 e1 + 21 e2 9 - 4 e1 + 13 e2 + 19 e1 e2 O 19 + 24 e1 + 32 e2 - 13 e1 e2 - 49 e1 + 39 e2

13.5 The Transpose


Conditions on the transpose of an exterior product
If we adopt the usual definition of the transpose of a matrix, in which rows become columns and columns become rows, we find that, in general, we lose the relation that the transpose of a product is the product of the transposes in reverse order. To see under what conditions this remains true, we break the matrices up into their even and odd components. Let A == Ae + Ao and B == Be + Bo, where the subscript e refers to the even component and o to the odd component. Then

(A∧B)^T == ((Ae + Ao)∧(Be + Bo))^T == (Ae∧Be + Ae∧Bo + Ao∧Be + Ao∧Bo)^T == (Ae∧Be)^T + (Ae∧Bo)^T + (Ao∧Be)^T + (Ao∧Bo)^T

B^T∧A^T == (Be + Bo)^T∧(Ae + Ao)^T == (Be^T + Bo^T)∧(Ae^T + Ao^T) == Be^T∧Ae^T + Be^T∧Ao^T + Bo^T∧Ae^T + Bo^T∧Ao^T

If the elements of two Grassmann matrices commute then, just as in the usual case, the transpose of a product is the product of the transposes in reverse order. If they anti-commute, the transpose of a product is the negative of the product of the transposes in reverse order. Thus the corresponding terms in the two expansions above which involve an even matrix will be equal, and the last terms, involving two odd matrices, will differ by a sign:

(Ae∧Be)^T == Be^T∧Ae^T
(Ae∧Bo)^T == Bo^T∧Ae^T
(Ao∧Be)^T == Be^T∧Ao^T
(Ao∧Bo)^T == -(Bo^T∧Ao^T)

Thus

(A∧B)^T == B^T∧A^T - 2 (Bo^T∧Ao^T) == B^T∧A^T + 2 (Ao∧Bo)^T    13.1

(A∧B)^T == B^T∧A^T  ⇔  Ao∧Bo == 0    13.2
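Relation 13.1 can also be checked numerically outside Mathematica. This Python sketch (illustrative only; it builds a toy exterior algebra in 2-space rather than using the GrassmannAlgebra package) generates random Grassmann matrices and verifies the transpose relation:

```python
# Illustrative sketch: check (A^B)^T == B^T ^ A^T + 2 (Ao^Bo)^T for random
# 2x2 matrices of Grassmann numbers in 2-space.
import random

def wedge_sign(a, b):
    if set(a) & set(b):
        return 0
    s = 1
    for i in a:
        s *= (-1) ** sum(1 for j in b if j < i)
    return s

def wedge(x, y):
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s = wedge_sign(a, b)
            if s:
                k = tuple(sorted(a + b))
                out[k] = out.get(k, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v}

def plus(*xs):
    out = {}
    for x in xs:
        for k, v in x.items():
            out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

def scale(c, x):
    return {k: c * v for k, v in x.items()}

def mwedge(A, B):
    """Matrix exterior product: (A^B)[i][k] = sum_j A[i][j] ^ B[j][k]."""
    return [[plus(*(wedge(A[i][j], B[j][k]) for j in range(len(B))))
             for k in range(len(B[0]))] for i in range(len(A))]

def mT(A):
    return [list(r) for r in zip(*A)]

def modd(A):
    """Element-wise odd part (components of odd grade)."""
    return [[{k: v for k, v in x.items() if len(k) % 2 == 1} for x in row]
            for row in A]

def rand_gn():  # a general Grassmann number in 2-space
    return {k: random.randint(-5, 5) for k in [(), (1,), (2,), (1, 2)]}

A = [[rand_gn() for _ in range(2)] for _ in range(2)]
B = [[rand_gn() for _ in range(2)] for _ in range(2)]

lhs = mT(mwedge(A, B))
rhs = [[plus(p, scale(2, q)) for p, q in zip(r1, r2)]
       for r1, r2 in zip(mwedge(mT(B), mT(A)), mT(mwedge(modd(A), modd(B))))]
print(lhs == rhs)   # -> True
```

The identity holds element by element because u∧v == v∧u + 2 uo∧vo for any two Grassmann numbers u and v, the odd-odd parts being the only anti-commuting ones.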

Checking the transpose of a product


It is useful to be able to run a check on a theoretically derived relation such as this. To reduce duplication of effort in checking several cases, we devise a test function called TestTransposeRelation.

TestTransposeRelation[A_, B_] := Module[{T}, T = Transpose;
  GrassmannSimplify[T[MatrixProduct[A∧B]]] ==
    GrassmannSimplify[MatrixProduct[T[B]∧T[A]] -
      2 MatrixProduct[T[OddGrade[B]]∧T[OddGrade[A]]]]]


As an example, we test it for two 2×2 matrices in 2-space:
A MatrixForm K 2 + 3 e1 e1 + 6 e2 O 5 - 9 e1 e2 e2 + e1 e2

M MatrixForm a1,1 + e1 b1,1 + e2 c1,1 + d1,1 e1 e2 a1,2 + e1 b1,2 + e2 c1,2 + d1,2 e1 e2 a2,1 + e1 b2,1 + e2 c2,1 + d2,1 e1 e2 a2,2 + e1 b2,2 + e2 c2,2 + d2,2 e1 e2 TestTransposeRelation@M, AD True

It should be remarked that for the transpose of a product to be equal to the product of the transposes in reverse order, the evenness of the matrices is a sufficient condition, but not a necessary one. All that is required is that the exterior product of the odd components of the matrices be zero, which may be achieved without the odd components themselves being zero.

Conditions on the determinant of a general matrix


Due to the non-commutativity of Grassmann numbers, the notion of determinant does not carry over directly to matrices of Grassmann numbers. To see why, consider the diagonal Grassmann matrix composed of 1-elements x and y:
K x 0 O 0 y

The natural definition of the determinant of this matrix would be the product of the diagonal elements. But which product should it be: x∧y or y∧x? The possible non-commutativity of the Grassmann numbers on the diagonal may give rise to different products of the diagonal elements, depending on their ordering. On the other hand, if the product of the diagonal elements of a diagonal Grassmann matrix is unique, as is the case for even elements, the determinant may be defined uniquely as that product.

Determinants of even Grassmann matrices


If a Grassmann matrix is even, its elements commute and its determinant may be defined in the same way as for matrices of scalars. To calculate the determinant of an even Grassmann matrix, we can use the same basic algorithm as for scalar matrices. In GrassmannAlgebra this is implemented with the function GrassmannDeterminant. For example:
B = EvenGrade@MD; MatrixForm@BD a1,1 + d1,1 e1 e2 a1,2 + d1,2 e1 e2 a2,1 + d2,1 e1 e2 a2,2 + d2,2 e1 e2

B1 = GrassmannDeterminant[B]
(a1,1 + d1,1 e1∧e2)∧(a2,2 + d2,2 e1∧e2) - (a1,2 + d1,2 e1∧e2)∧(a2,1 + d2,1 e1∧e2)

Note that GrassmannDeterminant does not carry out any simplification, but instead leaves the products in raw form to facilitate the interpretation of the process occurring. To simplify the result one needs to use GrassmannSimplify.
%@B1D - a1,2 a2,1 + a1,1 a2,2 + Ha2,2 d1,1 - a2,1 d1,2 - a1,2 d2,1 + a1,1 d2,2 L e1 e2
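Because even elements of 2-space have the form a + d e1∧e2, and e1∧e2 wedged with itself is zero, their product rule is (a + d e12)(b + e e12) == ab + (ae + db) e12. The following Python sketch (illustrative only, not the GrassmannAlgebra package) uses this to confirm the expanded determinant formula above for random even matrices:

```python
# Illustrative sketch: determinant of an even 2x2 Grassmann matrix in 2-space.
# An even element a + d e1^e2 is represented by the pair (a, d).
import random

def emul(x, y):
    """Product of two even elements; the (e1^e2)^(e1^e2) term vanishes."""
    (a, d), (b, e) = x, y
    return (a * b, a * e + d * b)

def eadd(x, y):
    return (x[0] + y[0], x[1] + y[1])

M = [[(random.randint(-9, 9), random.randint(-9, 9)) for _ in range(2)]
     for _ in range(2)]

det = eadd(emul(M[0][0], M[1][1]),
           tuple(-t for t in emul(M[0][1], M[1][0])))

# Compare with the expanded formula from the text.
(a11, d11), (a12, d12) = M[0]
(a21, d21), (a22, d22) = M[1]
expected = (a11 * a22 - a12 * a21,
            a22 * d11 - a21 * d12 - a12 * d21 + a11 * d22)
print(det == expected)   # -> True
```

Since emul is commutative, the determinant is well-defined regardless of the order in which the diagonal products are taken.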

13.6 Matrix Powers


Positive integer powers
Powers of matrices with no body
Positive integer powers of a matrix of Grassmann numbers are calculated simply by taking the exterior product of the matrix with itself the requisite number of times, and then using GrassmannSimplify to compute and simplify the result. For example, we begin in a 4-space and let

&4; A = {{x, y}, {u, v}}; MatrixForm[A]
( x  y
  u  v )

The second power is:


B@ A A D 88x x + y u, x y + y v<, 8u x + v u, u y + v v<<

When simplified this becomes:


A2 = %@B@A ADD; MatrixForm@A2 D K - Hu yL - Hv yL + x y O - Hu vL + u x uy

We can extend this process to form higher powers:


A3 = %@B@A A2 DD; MatrixForm@A3 D K - Hu v yL + 2 u x y vxy O - Hu v xL -2 u v y + u x y

Because this matrix has no body, there will eventually be a power which is zero, dependent on the dimension of the space, which in this case is 4. Here we see that it is the fourth power.
A4 = %@B@A A3 DD; MatrixForm@A4 D K 0 0 O 0 0
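The vanishing of the fourth power can be reproduced in a small Python sketch (illustrative only), taking x, y, u, v to be the four basis 1-elements of the 4-space (an assumption made purely for this numerical check):

```python
# Illustrative sketch: powers of the bodiless matrix {{x,y},{u,v}} in 4-space,
# with x = e1, y = e2, u = e3, v = e4. The fourth power vanishes.
def wedge_sign(a, b):
    if set(a) & set(b):
        return 0
    s = 1
    for i in a:
        s *= (-1) ** sum(1 for j in b if j < i)
    return s

def wedge(x, y):
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s = wedge_sign(a, b)
            if s:
                k = tuple(sorted(a + b))
                out[k] = out.get(k, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v}

def plus(*xs):
    out = {}
    for x in xs:
        for k, v in x.items():
            out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

def mwedge(A, B):
    return [[plus(*(wedge(A[i][j], B[j][k]) for j in range(len(B))))
             for k in range(len(B[0]))] for i in range(len(A))]

x, y, u, v = {(1,): 1}, {(2,): 1}, {(3,): 1}, {(4,): 1}
A = [[x, y], [u, v]]
P = A
for _ in range(3):          # compute A^2, A^3, A^4 in turn
    P = mwedge(P, A)
print(P)                    # -> [[{}, {}], [{}, {}]]
```

Every entry of the fourth power is a sum of exterior products of four 1-elements drawn from a set of four, and each such term either repeats a factor or cancels against another, so the whole matrix vanishes.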

Powers of matrices with a body


By giving the matrix a body, we can see that higher powers may no longer be zero.
B = A + IdentityMatrix@2D; MatrixForm@BD K 1+x y O u 1+v

B4 = %@B@B B@B B@B BDDDD; MatrixForm@B4 D K 1 + 4 x - 6 uy - 4 uvy + 8 uxy 4 y - 6 vy + 6 xy + 4 vxy 4 u - 6 uv + 6 ux - 4 uvx 1 + 4 v + 6 uy - 8 uvy + 4 ux

GrassmannIntegerPower
Although we will see later that the GrassmannAlgebra function GrassmannMatrixPower deals with more general cases than the positive integer powers discussed here, we can write a simple function GrassmannIntegerPower to calculate any positive integer power. Like GrassmannMatrixPower, it can store its results for further use in the same Mathematica session, which makes subsequent calculations involving the powers already calculated much faster.

GrassmannIntegerPower[M_, n_?PositiveIntegerQ] := Module[{P},
  P = NestList[GrassmannSimplify[MatrixProduct[M∧#]] &, M, n - 1];
  GrassmannIntegerPower[M, m_, Dimension] := First[Take[P, {m}]];
  Last[P]]


As an example we take the matrix B created above.
B 881 + x, y<, 8u, 1 + v<<

Now we apply GrassmannIntegerPower to obtain the fourth power of B.

GrassmannIntegerPower@B, 4D MatrixForm K 1 + 4 x - 6 uy - 4 uvy + 8 uxy 4 y - 6 vy + 6 xy + 4 vxy 4 u - 6 uv + 6 ux - 4 uvx 1 + 4 v + 6 uy - 8 uvy + 4 ux

However, on the way to calculating the fourth power, the function GrassmannIntegerPower has remembered the values of all the integer powers up to the fourth, in this case the second, third and fourth. We can get immediate access to these by adding the Dimension of the space as a third argument. Since the power of a matrix may take a different form in spaces of different dimensions (some terms being zero because their degree exceeds the dimension of the space), the power is recalled only by including the Dimension as the third argument.
Dimension 4 GrassmannIntegerPower@B, 2, 4D MatrixForm Timing :0. Second, K 1 + 2 x - uy 2 y - vy + xy O> 2 u - uv + ux 1 + 2 v + uy

GrassmannIntegerPower@B, 3, 4D MatrixForm Timing :0. Second, K 1 + 3 x - 3 uy - uvy + 2 uxy 3 y - 3 vy + 3 xy + 3 u - 3 uv + 3 ux - uvx 1 + 3 v + 3 uy - 2 uv

GrassmannIntegerPower@B, 4, 4D MatrixForm Timing :0. Second, K 1 + 4 x - 6 uy - 4 uvy + 8 uxy 4 y - 6 vy + 6 xy 4 u - 6 uv + 6 ux - 4 uvx 1 + 4 v + 6 uy - 8 u

GrassmannMatrixPower
The principal and more general function for calculating matrix powers provided by GrassmannAlgebra is GrassmannMatrixPower. Here we verify that it gives the same result as our simple GrassmannIntegerPower function.
GrassmannMatrixPower@B, 4D 881 + 4 x - 6 u y - 4 u v y + 8 u x y, 4 y - 6 v y + 6 x y + 4 v x y<, 84 u - 6 u v + 6 u x - 4 u v x, 1 + 4 v + 6 u y - 8 u v y + 4 u x y<<

Negative integer powers


From this point onwards we will be discussing the application of the general GrassmannAlgebra function GrassmannMatrixPower. We take as an example the following simple 2×2 matrix.

A = {{1, x}, {y, 1}}; MatrixForm[A]
( 1  x
  y  1 )

The matrix A-4 is calculated by GrassmannMatrixPower by taking the fourth power of the inverse.
iA4 = GrassmannMatrixPower@A, - 4D 881 + 10 x y, - 4 x<, 8- 4 y, 1 - 10 x y<<

We could of course get the same result by taking the inverse of the fourth power.
GrassmannMatrixInverse@GrassmannMatrixPower@A, 4DD 881 + 10 x y, - 4 x<, 8- 4 y, 1 - 10 x y<<

We check that this is indeed the inverse.


%@B@iA4 GrassmannMatrixPower@A, 4DDD 881, 0<, 80, 1<<

Non-integer powers of matrices with distinct eigenvalues


If the matrix has distinct eigenvalues, GrassmannMatrixPower uses GrassmannMatrixFunction to compute any non-integer power of a matrix of Grassmann numbers. Let A be the 2×2 matrix which differs from the matrix discussed above by having distinct eigenvalues 1 and 2. (We discuss eigenvalues in Section 13.9 below.)

A = {{1, x}, {y, 2}}; MatrixForm[A]
( 1  x
  y  2 )

We compute the square root of A.


sqrtA = GrassmannMatrixPower[A, 1/2]
[The output, a 2×2 Grassmann matrix whose entries involve √2 and x∧y terms, is scrambled in this extraction.]
To verify that this is indeed the square root of A we 'square' it.


%@B@sqrtA sqrtADD

More generally, we can extract a symbolic pth root. But to have the simplest result we need to ensure that p has been declared a scalar first,
DeclareExtraScalars@8p<D :a, b, c, d, e, f, g, h, p, %, H_ _L ?InnerProductQ, _>
0

pthRootA = GrassmannMatrixPower[A, 1/p]
[The output, a 2×2 Grassmann matrix whose entries involve the powers 2^(1/p) and x∧y terms, is scrambled in this extraction.]

We can verify that this gives us the previous result when p = 2.

sqrtA == pthRootA /. p → 2 // Simplify
True

If the power required is not integral and the eigenvalues are not distinct, GrassmannMatrixPower will return a message and the unevaluated input. Let B be a simple 2×2 matrix with equal eigenvalues 1 and 1.

B = {{1, x}, {y, 1}}; MatrixForm[B]
( 1  x
  y  1 )

GrassmannMatrixPower[B, 1/2]

Eigenvalues::notDistinct: The matrix {{1, x}, {y, 1}} does not have distinct scalar eigenvalues. The operation applies only to matrices with distinct scalar eigenvalues.

GrassmannMatrixPower[{{1, x}, {y, 1}}, 1/2]

Integer powers of matrices with distinct eigenvalues


In some circumstances, if a matrix is known to have distinct eigenvalues, it will be more efficient to compute powers using GrassmannMatrixFunction as the basic calculation engine. The GrassmannAlgebra function which enables this is GrassmannDistinctEigenvaluesMatrixPower.

? GrassmannDistinctEigenvaluesMatrixPower
GrassmannDistinctEigenvaluesMatrixPower[A,p] calculates the power p of a Grassmann matrix A with distinct eigenvalues. p may be either numeric or symbolic.

We can check whether a matrix has distinct eigenvalues with DistinctEigenvaluesQ. For example, suppose we wish to calculate the 100th power of the matrix A below.
A MatrixForm K 1 x O y 2

DistinctEigenvaluesQ@AD True A 881, x<, 8y, 2<< GrassmannDistinctEigenvaluesMatrixPower@A, 100D 881 + 1 267 650 600 228 229 401 496 703 205 275 x y, 1 267 650 600 228 229 401 496 703 205 375 x<, 81 267 650 600 228 229 401 496 703 205 375 y, 1 267 650 600 228 229 401 496 703 205 376 62 114 879 411 183 240 673 338 457 063 425 x y<<

13.7 Matrix Inverses


A formula for the matrix inverse
We can develop a formula for the inverse of a matrix of Grassmann numbers following the same approach that we used for calculating the inverse of a Grassmann number. Suppose I is the identity matrix, Xk is a bodiless matrix of Grassmann numbers, and Xk^i is its ith exterior power. Then we can write:

(I + Xk)∧(I - Xk + Xk^2 - Xk^3 + Xk^4 - ... ± Xk^q) == I ± Xk^(q+1)

Now, since Xk has no body, its highest possibly non-zero power is the dimension n of the space; that is, Xk^(n+1) == 0. Thus

(I + Xk)∧(I - Xk + Xk^2 - Xk^3 + Xk^4 - ... ± Xk^n) == I    13.3

We have thus shown that for a bodiless Xk, (I - Xk + Xk^2 - Xk^3 + Xk^4 - ... ± Xk^n) is the inverse of (I + Xk). If now we have a general Grassmann matrix X, we can write X as X == Xb + Xs, where Xb is the body of X and Xs is the soul of X, and then take Xb out as a factor to get:
X == Xb + Xs == Xb∧(I + Xb^-1∧Xs) == Xb∧(I + Xk)

Pre- and post-multiplying the equation above for the inverse of (I + Xk) by Xb and Xb^-1 respectively gives:

Xb∧(I + Xk)∧(I - Xk + Xk^2 - Xk^3 + Xk^4 - ... ± Xk^n)∧Xb^-1 == I

Since the first factor is simply X, we finally obtain the inverse of X in terms of Xk == Xb^-1∧Xs as:

X^-1 == (I - Xk + Xk^2 - Xk^3 + Xk^4 - ... ± Xk^n)∧Xb^-1    13.4

Or, specifically in terms of the body and soul of X:

X^-1 == (I - (Xb^-1∧Xs) + (Xb^-1∧Xs)^2 - ... ± (Xb^-1∧Xs)^n)∧Xb^-1    13.5
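Formula 13.4 can be exercised in a small Python sketch (illustrative only, not the GrassmannAlgebra package). For X == {{1, x}, {y, 1}} in 2-space the body is the identity, so Xk is just the soul, and the series terminates at the second power:

```python
# Illustrative sketch: invert a Grassmann matrix via the terminating series
# I - Xk + Xk^2 of formula 13.4, for X = {{1, x}, {y, 1}} in 2-space.
def wedge_sign(a, b):
    if set(a) & set(b):
        return 0
    s = 1
    for i in a:
        s *= (-1) ** sum(1 for j in b if j < i)
    return s

def wedge(x, y):
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s = wedge_sign(a, b)
            if s:
                k = tuple(sorted(a + b))
                out[k] = out.get(k, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v}

def plus(*xs):
    out = {}
    for x in xs:
        for k, v in x.items():
            out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

def scale(c, x):
    return {k: c * v for k, v in x.items()}

def mwedge(A, B):
    return [[plus(*(wedge(A[i][j], B[j][k]) for j in range(len(B))))
             for k in range(len(B[0]))] for i in range(len(A))]

one, zero = {(): 1}, {}
x, y = {(1,): 1}, {(2,): 1}
I = [[one, zero], [zero, one]]
Xk = [[zero, x], [y, zero]]                     # the (bodiless) soul of X
X = [[plus(p, q) for p, q in zip(r, s)] for r, s in zip(I, Xk)]

inv, P, sign = I, Xk, -1                        # accumulate I - Xk + Xk^2
for _ in range(2):                              # n = 2 terms beyond I
    inv = [[plus(p, scale(sign, q)) for p, q in zip(r, s)]
           for r, s in zip(inv, P)]
    P, sign = mwedge(P, Xk), -sign

print(mwedge(X, inv) == I)                      # -> True
```

The computed inverse is {{1 + x∧y, -x}, {-y, 1 - x∧y}}, agreeing with the GrassmannMatrixInverse example below.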

GrassmannMatrixInverse
This formula is straightforward to implement as a function. In GrassmannAlgebra we can calculate the inverse of a Grassmann matrix X by using GrassmannMatrixInverse.
? GrassmannMatrixInverse GrassmannMatrixInverse@XD computes the inverse of the matrix X of Grassmann numbers in a space of the currently declared number of dimensions.

A definition for GrassmannMatrixInverse may be developed quite straightforwardly from formula 13.4.

GrassmannMatrixInverse[X_] := Module[{B, S, iB, iBS, K},
  B = Body[X]; S = Soul[X];
  iB = Inverse[B]; iBS = GrassmannSimplify[MatrixProduct[iB∧S]];
  K = Sum[(-1)^i GrassmannMatrixPower[iBS, i], {i, 0, Dimension}];
  GrassmannSimplify[MatrixProduct[K∧iB]]]


As a first simple example we take the matrix:

A = {{1, x}, {y, 1}}; MatrixForm[A]
( 1  x
  y  1 )

The inverse of A is:


iA = GrassmannMatrixInverse[A]; MatrixForm[iA]
( 1 + x∧y   -x
  -y        1 - x∧y )

We can check that this is indeed the inverse.


%@8B@A iAD, B@iA AD<D 8881, 0<, 80, 1<<, 881, 0<, 80, 1<<<

As a second example we take a somewhat more complex matrix.


A = {{1 + e1, 2, e3∧e2}, {3, e2∧e4, e4}, {e2, e1, 1 + e4∧e1}}; MatrixForm[A]

iA = GrassmannMatrixInverse[A] // Simplify
[The output, a 3×3 matrix of Grassmann numbers with rational scalar coefficients, is scrambled in this extraction.]

We check that this is indeed the inverse


%@8B@A iAD, B@iA AD<D 8881, 0, 0<, 80, 1, 0<, 80, 0, 1<<, 881, 0, 0<, 80, 1, 0<, 80, 0, 1<<<

Finally, if we try to find the inverse of a matrix with a singular body we get the expected type of error messages.

A = {{1 + x, 2 + y}, {1 + z, 2 + w}}; MatrixForm[A]
( 1 + x   2 + y
  1 + z   2 + w )

GrassmannMatrixInverse[A]

Inverse::sing: Matrix {{1, 2}, {1, 2}} is singular.

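Only the body matters for invertibility, so the failure mode mirrors ordinary linear algebra exactly. In Python terms (an illustrative sketch, not GrassmannAlgebra):

```python
import numpy as np

body = np.array([[1.0, 2.0],
                 [1.0, 2.0]])       # the body of the matrix above: singular

try:
    np.linalg.inv(body)
except np.linalg.LinAlgError as err:
    print("no inverse:", err)       # no inverse: Singular matrix
```

A Grassmann matrix is invertible precisely when its body is, so this check on the body alone decides the question.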
13.8 Matrix Equations

Two types of matrix equations

If we can recast a system of equations into a matrix form which maintains the correct ordering of the factors, we can use the matrix inverse to obtain the solution. Suppose we have an equation M ∧ X ≡ Y, where M is an invertible matrix of Grassmann numbers, X is a vector of unknown Grassmann numbers, and Y is a vector of known Grassmann numbers. Then the solution X is given by:

%[B[GrassmannMatrixInverse[M] ∧ Y]]

Alternatively, the equations may need to be expressed in the form X ∧ M ≡ Y, in which case the solution X is given by:

%[B[Y ∧ GrassmannMatrixInverse[M]]]

GrassmannAlgebra implements these as GrassmannLeftLinearSolve and GrassmannRightLinearSolve respectively.

? GrassmannLeftLinearSolve
GrassmannLeftLinearSolve[M,Y] calculates a matrix or vector X which solves the equation M∧X==Y.

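The need for two solvers is easy to illustrate even with ordinary commuting entries, since matrix multiplication is already non-commutative: the solutions of MX == Y and XM == Y generally differ. A small numpy sketch (names are ours):

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [0.0, 3.0]])
Y = np.array([3.0, 7.0])

X_left  = np.linalg.solve(M, Y)      # column unknown:  M @ X_left  == Y
X_right = Y @ np.linalg.inv(M)       # row unknown:     X_right @ M == Y

print(np.allclose(M @ X_left, Y), np.allclose(X_right @ M, Y))  # True True
print(np.allclose(X_left, X_right))  # False: the two solutions differ
```

With Grassmann entries the distinction is even sharper, because the entries themselves no longer commute with one another.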
Solving matrix equations


Here we take an example and show that GrassmannLeftLinearSolve and GrassmannRightLinearSolve both give solutions to a matrix exterior equation.

M = {{1 + x, x∧y}, {1 - y, 2 + x + y + 5 x∧y}}; MatrixForm[M]
( 1 + x   x∧y
  1 - y   2 + x + y + 5 x∧y )

Y = {3 + 2 x∧y, 7 - 5 x}; MatrixForm[Y]
( 3 + 2 x∧y
  7 - 5 x )

GrassmannLeftLinearSolve gives:

X = GrassmannLeftLinearSolve[M, Y]; MatrixForm[X]
( 3 - 3 x
  2 - 2 x + y/2 - (19 x∧y)/4 )

%[B[M ∧ X]] == Y
True

GrassmannRightLinearSolve gives:

X = GrassmannRightLinearSolve[M, Y]; MatrixForm[X]
( -1/2 + (19 x)/4 + (21 y)/4 + (41 x∧y)/4
  7/2 - (17 x)/4 - (7 y)/4 - (29 x∧y)/4 )

%[B[X ∧ M]] == Y
True

13.9 Matrix Eigensystems

Exterior eigensystems of Grassmann matrices

Grassmann matrices have eigensystems just as do real or complex matrices. Eigensystems are useful because they allow us to calculate functions of matrices. One major difference in approach with Grassmann matrices is that we may no longer be able to use the determinant to obtain the characteristic equation, and hence the eigenvalues. We can, however, return to the basic definition of an eigenvalue and eigenvector to obtain our results. We treat only those matrices with distinct eigenvalues.

Suppose the matrix A is n×n and has distinct eigenvalues. Then we are looking for an n×n diagonal matrix L (the matrix of eigenvalues) and an n×n matrix X (the matrix of eigenvectors) such that:

A ∧ X ≡ X ∧ L        13.6

The pair {X, L} is called the eigensystem of A. If X is invertible we can post-multiply by X⁻¹ to get:

A ≡ X ∧ L ∧ X⁻¹        13.7

and pre-multiply by X⁻¹ to get:

X⁻¹ ∧ A ∧ X ≡ L        13.8

It is this decomposition which allows us to define functions of Grassmann matrices. This will be discussed further in Section 13.10.

The number of scalar equations resulting from the matrix equation A∧X ≡ X∧L is n². The number of unknowns that we are likely to be able to determine is therefore n². There are n unknown eigenvalues in L, leaving n² − n unknowns to be determined in X. Since X occurs on both sides of the equation, we need only determine its columns (or rows) to within a scalar multiple. Hence there are really only n² − n unknowns to be determined in X, which leads us to hope that a decomposition of this form is indeed possible.

The process we adopt is to assume unknown general Grassmann numbers for the diagonal components of L and for the components of X. If the dimension of the space is N, there will be 2^N scalar coefficients to be determined for each of the unknowns. We then use Mathematica's powerful Solve function to obtain the values of these unknowns.

In practice, during the calculations, we assume that the basis of the space is composed of just the non-scalar symbols existing in the matrix A. This enables the reduction of computation should the currently declared basis be of higher dimension than the number of different 1-elements in the matrix, and also allows the eigensystem to be obtained for matrices which are not expressed in terms of basis elements. For simplicity we also use Mathematica's JordanDecomposition function to determine the scalar eigensystem of the matrix A before proceeding on to find the non-scalar components.

To see the relationship of the scalar eigensystem to the complete Grassmann eigensystem, rewrite the eigensystem equation A∧X ≡ X∧L in terms of body and soul, and extract that part of the equation that is pure body.
(Ab + As) ∧ (Xb + Xs) == (Xb + Xs) ∧ (Lb + Ls)

Expanding this equation gives

Ab Xb + (Ab ∧ Xs + As ∧ Xb + As ∧ Xs) == Xb Lb + (Xb ∧ Ls + Xs ∧ Lb + Xs ∧ Ls)

The first term on each side is the body of the equation, and the second terms (in brackets) its soul. The body of the equation is simply the eigensystem equation for the body of A and shows that the scalar components of the unknown X and L matrices are simply the eigensystem of the body of A.

Ab Xb ≡ Xb Lb        13.9

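Equation 13.9 means that the first step of the computation is always an ordinary scalar eigendecomposition of the body. A numpy sketch with illustrative values:

```python
import numpy as np

Ab = np.array([[2.0, 3.0],
               [2.0, 3.0]])          # a body with distinct eigenvalues 0 and 5

lam, Xb = np.linalg.eig(Ab)
Lb = np.diag(lam)

# the defining relation of equation 13.9: Ab.Xb == Xb.Lb
print(np.allclose(Ab @ Xb, Xb @ Lb))          # True
print(np.allclose(sorted(lam), [0.0, 5.0]))   # True
```

The soul components of X and L are then built on top of this scalar skeleton, grade by grade.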
By decomposing the soul of the equation into a sequence of equations relating matrices of elements of different grades, the solution for the unknown elements of X and L may be obtained sequentially by using the solutions from the equations of lower grade. For example, the equation for the elements of grade one is

Ab ∧ Xs₁ + As₁ ∧ Xb ≡ Xb ∧ Ls₁ + Xs₁ ∧ Lb        13.10

where As₁, Xs₁, and Ls₁ are the components of As, Xs, and Ls of grade 1. Here we know Ab and As₁, and have already solved for Xb and Lb, leaving just Xs₁ and Ls₁ to be determined. However, although this process would be helpful if trying to solve by hand, it is not necessary when using the Solve function in Mathematica.

GrassmannMatrixEigensystem

The function implemented in GrassmannAlgebra for calculating the eigensystem of a matrix of Grassmann numbers is GrassmannMatrixEigensystem. It is capable of calculating the eigensystem only of matrices whose body has distinct eigenvalues.

? GrassmannMatrixEigensystem
GrassmannMatrixEigensystem[A] calculates a list comprising the matrix of eigenvectors and the diagonal matrix of eigenvalues for a Grassmann matrix A whose body has distinct eigenvalues.

If the matrix does not have distinct eigenvalues, GrassmannMatrixEigensystem will return a message telling us. For example:

A = {{2, x}, {y, 2}}; MatrixForm[A]
( 2  x
  y  2 )

GrassmannMatrixEigensystem[A]

Eigenvalues::notDistinct : The matrix {{2, x}, {y, 2}} does not have distinct scalar eigenvalues. The operation applies only to matrices with distinct scalar eigenvalues.
GrassmannMatrixEigensystem[{{2, x}, {y, 2}}]

If the matrix has distinct eigenvalues, a list of two matrices is returned. The first is the matrix of eigenvectors (whose columns have been normalized with an algorithm which tries to produce as simple a form as possible), and the second is the (diagonal) matrix of eigenvalues.
A = {{2, 3 + x}, {2 + y, 3}}; MatrixForm[A]
( 2       3 + x
  2 + y   3 )

{X, L} = GrassmannMatrixEigensystem[A]; MatrixForm /@ {X, L}

{ ( -3 - (2 x)/5 + (9 y)/10   1 + x/5 - y/5
    2                         1 ),
  ( -(2 x)/5 - (3 y)/5 + (x∧y)/5   0
    0                              5 + (2 x)/5 + (3 y)/5 - (x∧y)/5 ) }

To verify that this is indeed the eigensystem for A we can evaluate the equation A∧X ≡ X∧L.

%[B[A ∧ X]] == %[B[X ∧ L]]
True

Next we take a slightly more complex example.

A = {{2 + e1 + 2 e1∧e2, 0, 0}, {0, 2 + 2 e1, 5 + e1∧e2}, {0, -1 + e1 + e2, -2 - e2}}; MatrixForm[A]
( 2 + e1 + 2 e1∧e2   0              0
  0                  2 + 2 e1       5 + e1∧e2
  0                  -1 + e1 + e2   -2 - e2 )

{X, L} = GrassmannMatrixEigensystem[A];

Here the scalar eigenvalues of the body are 2 and the complex pair ±ⅈ, so the diagonal entries of L are complex Grassmann numbers with bodies ±ⅈ, together with the uncoupled entry 2 + e1 + 2 e1∧e2 arising from the first row and column.

%[B[A ∧ X]] == %[B[X ∧ L]]
True

13.10 Matrix Functions

Distinct eigenvalue matrix functions

As shown above in the section on matrix eigensystems, we can express a matrix with distinct eigenvalues in the form:

A ≡ X ∧ L ∧ X⁻¹

Hence we can write its exterior square as:

A^∧2 ≡ (X ∧ L ∧ X⁻¹) ∧ (X ∧ L ∧ X⁻¹) ≡ X ∧ L ∧ (X⁻¹ ∧ X) ∧ L ∧ X⁻¹ ≡ X ∧ L^∧2 ∧ X⁻¹

This result is easily generalized to give an expression for any power of a matrix.

A^∧p ≡ X ∧ L^∧p ∧ X⁻¹        13.11

Indeed, it is also valid for any linear combination of powers,

Σp ap A^∧p ≡ Σp ap X ∧ L^∧p ∧ X⁻¹ ≡ X ∧ (Σp ap L^∧p) ∧ X⁻¹

and hence any function f[A] defined by a linear combination of powers (series), f[A] ≡ Σp ap A^∧p:

f[A] ≡ X ∧ f[L] ∧ X⁻¹        13.12

Now, since L is a diagonal matrix:

L ≡ DiagonalMatrix[{λ1, λ2, …, λm}]

its pth power is just the diagonal matrix of the pth powers of the elements.

L^∧p ≡ DiagonalMatrix[{λ1^p, λ2^p, …, λm^p}]

Hence:

Σp ap L^∧p ≡ DiagonalMatrix[{Σp ap λ1^p, Σp ap λ2^p, …, Σp ap λm^p}] ≡ DiagonalMatrix[{f[λ1], f[λ2], …, f[λm]}]

Hence, finally we have an expression for the function f of the matrix A in terms of the eigenvector matrix and a diagonal matrix in which each diagonal element is the function f of the corresponding eigenvalue.

f[A] ≡ X ∧ DiagonalMatrix[{f[λ1], f[λ2], …, f[λm]}] ∧ X⁻¹        13.13

GrassmannMatrixFunction

We have already seen in Chapter 9 how to calculate a function of a Grassmann number by using GrassmannFunction. We use GrassmannFunction and the formula above to construct GrassmannMatrixFunction for calculating functions of matrices.

? GrassmannMatrixFunction
GrassmannMatrixFunction[f[A]] calculates a function f of a Grassmann matrix A with distinct eigenvalues, where f is a pure function. GrassmannMatrixFunction[{fx,x},A] calculates a function fx of a Grassmann matrix A with distinct eigenvalues, where fx is a formula with x as variable.

It should be remarked that, just as functions of a Grassmann number will have different forms in spaces of different dimension, so too will functions of a Grassmann matrix.

Exponentials and Logarithms

In what follows we shall explore some examples of functions of Grassmann matrices in 3-space.

A = {{1 + x, x∧z}, {2, 3 - z}}; MatrixForm[A]
( 1 + x   x∧z
  2       3 - z )

We first calculate the exponential of A.

expA = GrassmannMatrixFunction[Exp[A]]
{{ⅇ + ⅇ x + (1/2) (-3 ⅇ + ⅇ³) x∧z,
  (1/2) (-ⅇ + ⅇ³) x∧z},
 {-ⅇ + ⅇ³ + (1/2) (-3 ⅇ + ⅇ³) x - (1/2) (ⅇ + ⅇ³) z + 3 ⅇ x∧z,
  ⅇ³ - ⅇ³ z + (1/2) (ⅇ + ⅇ³) x∧z}}

We then calculate the logarithm of this exponential and see that it is A as we expect.

GrassmannMatrixFunction[Log[expA]]
{{1 + x, x∧z}, {2, 3 - z}}

Trigonometric functions

We compute the square of the sine and the square of the cosine of A:

s2A = GrassmannMatrixFunction[{Sin[x]^2, x}, A]
{{Sin[1]² + 2 x Cos[1] Sin[1] + (1/2) Sin[1] (-4 Cos[1] + Sin[3] + Sin[5]) x∧z,
  Cos[2] Sin[2]² x∧z},
 {Sin[2] Sin[4] + (1/2) x Sin[2] (-2 + Sin[4]) - z Sin[2] - 2 z Cos[4] Sin[2] + (1/2) z Sin[2] Sin[4] + (1/2) Sin[2] (6 + 6 Cos[4] - 3 Sin[4]) x∧z,
  Sin[3]² - z Sin[6] + (1/4) (-Cos[2] + Cos[6] + 4 Sin[6]) x∧z}}

c2A = GrassmannMatrixFunction[{Cos[x]^2, x}, A]
{{Cos[1]² - 2 x Cos[1] Sin[1] + (1/2) Cos[1] (-Cos[3] + Cos[5] + 4 Sin[1]) x∧z,
  -Cos[2] Sin[2]² x∧z},
 {-Sin[2] Sin[4] - (1/2) x Sin[2] (-2 + Sin[4]) + z Sin[2] + 2 z Cos[4] Sin[2] - (1/2) z Sin[2] Sin[4] - (1/2) Sin[2] (6 + 6 Cos[4] - 3 Sin[4]) x∧z,
  Cos[3]² + z Sin[6] - (1/4) (-Cos[2] + Cos[6] + 4 Sin[6]) x∧z}}

and show that sin²x + cos²x ≡ 1, even for Grassmann matrices:

s2A + c2A // Simplify // MatrixForm
( 1  0
  0  1 )

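The same identity can be checked numerically for ordinary matrices through the eigendecomposition route (an illustrative numpy sketch, with names of our choosing):

```python
import numpy as np

def matrix_function(A, f):
    """f(A) = X . diag(f(eigenvalues)) . X^-1 for a diagonalizable matrix."""
    lam, X = np.linalg.eig(A)
    return X @ np.diag(f(lam)) @ np.linalg.inv(X)

A = np.array([[1.0, 0.0],
              [2.0, 3.0]])

s2 = matrix_function(A, lambda t: np.sin(t) ** 2)
c2 = matrix_function(A, lambda t: np.cos(t) ** 2)
print(np.allclose(s2 + c2, np.eye(2)))   # True
```

Any identity among scalar functions lifts to the matrices in this way, because it holds eigenvalue by eigenvalue on the diagonal.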
Symbolic matrices

B = {{a + x, x∧z}, {b, c - z}}; MatrixForm[B]
( a + x   x∧z
  b       c - z )

GrassmannMatrixFunction[Exp[B]]
{{ⅇ^a (1 + x) + (b (-ⅇ^a + a ⅇ^a - c ⅇ^a + ⅇ^c) x∧z)/(a - c)²,
  ((ⅇ^a - ⅇ^c) x∧z)/(a - c)},
 {(b (ⅇ^a - ⅇ^c))/(a - c) + (b (-ⅇ^a + a ⅇ^a - c ⅇ^a + ⅇ^c) x)/(a - c)² + (b (-ⅇ^a + ⅇ^c + a ⅇ^c - c ⅇ^c) z)/(a - c)² + (b (1 + b) ((-2 + a - c) ⅇ^a + (2 + a - c) ⅇ^c) x∧z)/(a - c)³,
  -ⅇ^c (-1 + z) + (b (ⅇ^a - ⅇ^c - a ⅇ^c + c ⅇ^c) x∧z)/(a - c)²}}

Symbolic functions

Suppose we wish to compute the symbolic function f of the matrix A.

A // MatrixForm
( 1 + x   x∧z
  2       3 - z )

If we try to compute this we get an incorrect result, because the derivatives evaluated at various scalar values are not recognized by GrassmannSimplify as scalars. Here we print out just three lines of the result.

Short[GrassmannMatrixFunction[f[A]], 3]
{{«1»}, {-f[1] + f[3] + «22», f[3] + «9»}}

We see from this that the derivatives in question are the zeroth and first, f[_] and f'[_]. We therefore declare these patterns to be scalars, so that the derivatives evaluated at any scalar arguments will be considered by GrassmannSimplify as scalars.

DeclareExtraScalars[{f[_], f'[_]}]
{a, b, c, d, e, f, g, h, …, f[_], (_ · _)?InnerProductQ, _0, f'[_]}


fA = GrassmannMatrixFunction[f[A]]
{{f[1] + x f'[1] - (1/2) (f[1] - f[3] + 2 f'[1]) x∧z,
  -(1/2) (f[1] - f[3]) x∧z},
 {-f[1] + f[3] - (1/2) (f[1] - f[3] + 2 f'[1]) x - (1/2) (f[1] - f[3] + 2 f'[3]) z + (3/2) (f[1] - f[3] + f'[1] + f'[3]) x∧z,
  f[3] - z f'[3] + (1/2) (f[1] - f[3] + 2 f'[3]) x∧z}}

The function f may be replaced by any specific function. We replace it here by Exp and check that the result is the same as that given above.

expA == (fA /. f → Exp) // Simplify
True

13.11 Supermatrices
To be completed.


A1 Guide to GrassmannAlgebra
To be completed


A2 A Brief Biography of Grassmann


Hermann Günther Grassmann was born in 1809 in Stettin, a town in Pomerania a short distance inland from the Baltic. His father Justus Günther Grassmann taught mathematics and physical science at the Stettin Gymnasium. Hermann was no child prodigy. His father used to say that he would be happy if Hermann became a craftsman or a gardener.

In 1827 Grassmann entered the University of Berlin with the intention of studying theology. As his studies progressed he became more and more interested in studying philosophy. At no time whilst a student in Berlin was he known to attend a mathematics lecture. Grassmann was however only 23 when he made his first important geometric discovery: a method of adding and multiplying lines. This method was to become the foundation of his Ausdehnungslehre (extension theory). His own account of this discovery is given below.

Grassmann was interested ultimately in a university post. In order to improve his academic standing in science and mathematics he composed in 1839 a work (over 200 pages) on the study of tides entitled Theorie der Ebbe und Flut. This work contained the first presentation of a system of spatial analysis based on vectors including vector addition and subtraction, vector differentiation, and the elements of the linear vector function, all developed for the first time. His examiners failed to see its importance.

Around Easter of 1842 Grassmann began to turn his full energies to the composition of his first 'Ausdehnungslehre', and by the autumn of 1843 he had finished writing it. The following is an excerpt from the foreword in which he describes how he made his seminal discovery. The translation is by Lloyd Kannenberg (Grassmann 1844).
The initial incentive was provided by the consideration of negatives in geometry; I was used to regarding the displacements AB and BA as opposite magnitudes. From this it follows that if A, B, C are points of a straight line, then AB + BC = AC is always true, whether AB and BC are directed similarly or oppositely, that is even if C lies between A and B. In the latter case AB and BC are not interpreted merely as lengths, but rather their directions are simultaneously retained as well, according to which they are precisely oppositely oriented. Thus the distinction was drawn between the sum of lengths and the sum of such displacements in which the directions were taken into account. From this there followed the demand to establish this latter concept of a sum, not only for the case that the displacements were similarly or oppositely directed, but also for all other cases. This can most easily be accomplished if the law AB + BC = AC is imposed even when A, B, C do not lie on a single straight line. Thus the first step was taken toward an analysis that subsequently led to the new branch of mathematics presented here. However, I did not then recognize the rich and fruitful domain I had reached; rather, that result seemed scarcely worthy of note until it was combined with a related idea. While I was pursuing the concept of product in geometry as it had been established by my father, I concluded that not only rectangles but also parallelograms in general may be regarded as products of an adjacent pair of their sides, provided one again interprets the product, not as the product of their lengths, but as that of the two displacements with their directions taken into account. 
When I combined this concept of the product with that previously established for the sum, the most striking harmony resulted; thus whether I multiplied the sum (in the sense just given) of two displacements by a third displacement lying in the same plane, or the individual terms by the same displacement and added the products with due regard for their positive and negative values, the same result obtained, and must always obtain. This harmony did indeed enable me to perceive that a completely new domain had thus been disclosed, one that could lead to important results. Yet this idea remained dormant for some time since the demands of my job led me to other tasks; also, I was initially perplexed by the remarkable result that, although the laws of ordinary multiplication, including the relation of multiplication to addition, remained valid for this new type of product, one could only interchange factors if one simultaneously changed the sign (i.e. changed + into − and vice versa).

As with his earlier work on tides, the importance of this work was ignored. Since few copies were sold, most ended by being used as waste paper by the publisher. The failure to find acceptance for Grassmann's ideas was probably due to two main reasons. The first was that Grassmann was just a simple schoolteacher, and had none of the academic charisma that other contemporaries, like Hamilton for example, had. History seems to suggest that the acceptance of radical discoveries often depends more on the discoverer than the discovery. The second reason is that Grassmann adopted the format and the approach of the modern mathematician. He introduced and developed his mathematical structure axiomatically and abstractly. The abstract nature of the work, initially devoid of geometric or physical significance, was just too new and formal for the mathematicians of the day and they all seemed to find it too difficult. More fully than any earlier mathematician, Grassmann seems to have understood the associative, commutative and distributive laws; yet still, great mathematicians like Möbius found it unreadable, and Hamilton was led to write to De Morgan that to be able to read Grassmann he 'would have to learn to smoke'.

In the year of publication of the Ausdehnungslehre (1844) the Jablonowski Society of Leipzig offered a prize for the creation of a mathematical system fulfilling the idea that Leibniz had sketched in 1679. Grassmann entered with his 'Geometrische Analyse geknüpft an die von Leibniz erfundene geometrische Charakteristik', and was awarded the prize. Yet as with the Ausdehnungslehre it was subsequently received with almost total silence.

However, in the few years following, three of Grassmann's contemporaries were forced to take notice of his work because of priority questions. In 1845 Saint-Venant published a paper in which he developed vector sums and products essentially identical to those already occurring in Grassmann's earlier works (Barré 1845).

In 1853 Cauchy published his method of 'algebraic keys' for solving sets of linear equations (Cauchy 1853). Algebraic keys behaved identically to Grassmann's units under the exterior product. In the same year Saint-Venant published an interpretation of the algebraic keys geometrically and in terms of determinants (Barré 1853). Since such were fundamental to Grassmann's already published work he wrote a reply for Crelle's Journal in 1855 entitled 'Sur les différents genres de multiplication' in which he claimed priority over Cauchy and Saint-Venant and published some new results (Grassmann 1855).

It was not until 1853 that Hamilton heard of the Ausdehnungslehre. He set to reading it and soon after wrote to De Morgan.

I have recently been reading more than a hundred pages of Grassmann's Ausdehnungslehre, with great admiration and interest… If I could hope to be put in rivalship with Des Cartes on the one hand and with Grassmann on the other, my scientific ambition would be fulfilled.


During the period 1844 to 1862 Grassmann published seventeen scientific papers, including important papers in physics, and a number of mathematics and language textbooks. He edited a political paper for a time and published materials on the evangelization of China. This, on top of a heavy teaching load and the raising of a large family. However, this same period saw only few mathematicians (Hamilton, Cauchy, Möbius, Saint-Venant, Bellavitis and Cremona) having any acquaintance with, or appreciation of, his work.

In 1862 Grassmann published a completely rewritten Ausdehnungslehre: Die Ausdehnungslehre: Vollständig und in strenger Form. In the foreword Grassmann discussed the poor reception accorded his earlier work and stated that the content of the new book was presented in 'the strongest mathematical form that is actually known to us; that is, Euclidean'. It was a book of theorems and proofs largely unsupported by physical example. This apparently was a mistake, for the reception accorded this new work was as quiet as that accorded the first, although it contained many new results including a solution to Pfaff's problem. Friedrich Engel (the editor of Grassmann's collected works) comments: 'As in the first Ausdehnungslehre so in the second: matters which Grassmann had published in it were later independently rediscovered by others, and only much later was it realized that Grassmann had published them earlier' (Engel 1896).

Thus Grassmann's works were almost totally neglected for forty-five years after his first discovery. In the second half of the 1860s recognition slowly started to dawn on his contemporaries, among them Hankel, Clebsch, Schlegel, Klein, Noether, Sylvester, Clifford and Gibbs. Gibbs discovered Grassmann's works in 1877 (the year of Grassmann's death), and Clifford discovered them in depth about the same time. Both became quite enthusiastic about Grassmann's new mathematics. Grassmann's activities after 1862 continued to be many and diverse.
His contribution to philology rivals his contribution to mathematics. In 1849 he had begun a study of Sanskrit and in 1870 published his Wörterbuch zum Rig-Veda, a work of 1784 pages, and his translation of the Rig-Veda, a work of 1123 pages, both still in use today. In addition he published on mathematics, languages, botany, music and religion. In 1876 he was made a member of the American Oriental Society, and received an honorary doctorate from the University of Tübingen. On 26 September 1877 Hermann Grassmann died, departing from a world only just beginning to recognize the brilliance of the mathematical creations of one of its most outstanding eclectics.


A3 Notation
To be completed.

Operations

∧   exterior product operation
∨   regressive product operation
    interior product operation
    generalized product operation
    Clifford product operation
    hypercomplex product operation
    complement operation
    vector subspace complement operation
×   cross product operation
    measure
    defined equal to
=   equal to
≡   congruence

Elements

a, b, c, …              scalars
x, y, z, …              1-elements or vectors
α, β, γ, … (grade m)    m-elements
X, Y, Z, …              Grassmann numbers
ν1, ν2, …               position vectors
P1, P2, …               points
L1, L2, …               lines
Π1, Π2, …               planes

Declarations

declare a linear or vector space of n dimensions
declare an n-space comprising an origin point and a vector space of n dimensions
declare extra scalar symbols
declare extra vector symbols

Special objects

1   unit scalar
    unit n-element
    origin point
n   dimension of the currently declared space
    symbolic dimension of a space
c   symbolic congruence factor
g   determinant of the metric tensor

Spaces

L0   linear space of scalars, or field of scalars
L1   linear space of 1-elements, the underlying linear space
Lm   linear space of m-elements
Ln   linear space of n-elements

Basis elements

ei              basis 1-element or covariant basis element
e^i             contravariant basis 1-element
ei (grade m)    basis m-element or covariant basis m-element
e^i (grade m)   contravariant basis m-element
                cobasis element of ei (grade m)

A4 Glossary
To be completed.

Ausdehnungslehre
The term Ausdehnungslehre is variously translated as 'extension theory', 'theory of extension', or 'calculus of extension'. Refers to Grassmann's original work and other early work in the same notational and conceptual tradition.

Bivector
A bivector is a sum of simple bivectors.

Bound vector
A bound vector B is the exterior product of a point and a vector: B ≡ P∧x. It may also always be expressed as the exterior product of two points.

Bound vector space
A bound vector space is a linear space whose basis elements are interpreted as an origin point and vectors.

Bound bivector
A bound bivector B is the exterior product of a point P and a bivector W: B ≡ P∧W.

Bound simple bivector
A bound simple bivector B is the exterior product of a point P and a simple bivector x∧y: B ≡ P∧x∧y. It may also always be expressed as the exterior product of two points and a vector, or the exterior product of three points.

Cofactor
The cofactor of a minor M of a matrix A is the signed determinant formed from the rows and columns of A which are not in M. The sign may be determined from (-1)^(Σ(ri + ci)), where ri and ci are the row and column numbers.

Dimension of a linear space
The dimension of a linear space is the maximum number of independent elements in it.

Dimension of an exterior linear space
The dimension of an exterior linear space Lm is the binomial coefficient (n choose m), where n is the dimension of the underlying linear space L1. The binomial coefficient (n choose m) is equal to the number of combinations of n elements taken m at a time.

Dimension of a Grassmann algebra
The dimension of a Grassmann algebra is the sum of the dimensions of its component exterior linear spaces. The dimension of a Grassmann algebra is then given by 2^n, where n is the dimension of the underlying linear space.

Direction
A direction is the space of a vector and is therefore the set of all vectors parallel to a given vector.

Displacement
A displacement is a physical interpretation of a vector. It may also be viewed as the difference of two points.

Exterior linear space
An exterior linear space of grade m, denoted Lm, is the linear space generated by m-elements.

Force
A force is a physical entity represented by a bound vector. This differs from common usage in which a force is represented by a vector. For reasons discussed in the text, common use does not provide a satisfactory model.

Force vector
A force vector is the vector of the bound vector representing the force.

General geometrically interpreted 2-element
A general geometrically interpreted 2-element U is the sum of a bound vector P∧x and a bivector W. That is, U ≡ P∧x + W.


Geometric entities
Points, lines, planes, … are geometric entities. Each is defined as the space of a geometrically interpreted element. A point is a geometric 1-entity. A line is a geometric 2-entity. A plane is a geometric 3-entity.

Geometric interpretations
Points, weighted points, vectors, bound vectors, bivectors, … are geometric interpretations of m-elements.

Geometrically interpreted algebra
A geometrically interpreted algebra is a Grassmann algebra with a geometrically interpreted underlying linear space.

Grade
The grade of an m-element is m. The grade of a simple m-element is the number of 1-element factors in it. The grade of the exterior product of an m-element and a k-element is m+k. The grade of the regressive product of an m-element and a k-element is m+k-n. The grade of the complement of an m-element is n-m. The grade of the interior product of an m-element and a k-element is m-k. The grade of a scalar is zero. (The dimension n is the dimension of the underlying linear space.)

GrassmannAlgebra
The concatenated italicized term GrassmannAlgebra refers to the Mathematica software package which accompanies this book.

A Grassmann algebra
A Grassmann algebra is the direct sum of an underlying linear space L1, its field L0, and the exterior linear spaces Lm (2 ≤ m ≤ n):

L0 ⊕ L1 ⊕ L2 ⊕ … ⊕ Lm ⊕ … ⊕ Ln

The Grassmann algebra
The Grassmann algebra is used to describe that body of algebraic theory and results based on the Ausdehnungslehre, but extended to include more recent results and viewpoints.


Intersection
An intersection of two simple elements is any of the congruent elements defined by the intersection of their spaces.

Laplace expansion theorem
The Laplace expansion theorem states: If any r rows are fixed in a determinant, then the value of the determinant may be obtained as the sum of the products of the minors of rth order (corresponding to the fixed rows) by their cofactors.

Line
A line is the space of a bound vector. Thus a line consists of all the (perhaps weighted) points on it and all the vectors parallel to it.

Linear space
A linear space is a mathematical structure defined by a standard set of axioms. It is often referred to simply as a 'space'.

Minor
A minor of a matrix A is the determinant (or sometimes matrix) of degree (or order) k formed from A by selecting the elements at the intersection of k distinct columns and k distinct rows.

m-direction
An m-direction is the space of a simple m-vector. It is also therefore the set of all vectors parallel to a given simple m-vector.

m-element
An m-element is a sum of simple m-elements.

m-plane
An m-plane is the space of a bound simple m-vector. Thus an m-plane consists of all the (perhaps weighted) points on it and all the vectors parallel to it.

m-vector
An m-vector is a sum of simple m-vectors.

n-algebra
The term n-algebra is an alias for the phrase Grassmann algebra with an underlying linear space of n dimensions.

Origin
The origin 𝒪 is the geometric interpretation of a specific 1-element as a reference point.

Plane
A plane is the space of a bound simple bivector. Thus a plane consists of all the (perhaps weighted) points on it and all the vectors parallel to it.

Point
A point P is the sum of the origin 𝒪 and a vector x: P = 𝒪 + x.

Point mass
A point mass is a physical interpretation of a weighted point.

Physical entities
Point masses, displacements, velocities, forces, moments, and angular velocities are physical entities. Each is represented by a geometrically interpreted element. A point mass, displacement or velocity is a physical 1-entity. A force, moment or angular velocity is a physical 2-entity.

Physical representations
Points, weighted points, vectors, bound vectors, and bivectors are geometric representations of physical entities such as point masses, displacements, velocities, forces, and moments. Physical entities are represented by geometrically interpreted elements.

Scalar
A scalar is an element of the field L₀ of the underlying linear space L₁. A scalar is of grade zero.

Screw
A screw is a geometrically interpreted 2-element in a three-dimensional (physical) space (four-dimensional linear space) in which the bivector is orthogonal to the vector of the bound vector. The bivector is necessarily simple since the vector subspace is three-dimensional.

Simple element
A simple element is an element which may be expressed as the exterior product of 1-elements.
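The entries for the origin, points, and weighted points can be modelled concretely. A minimal sketch in plain Python (illustrative names, not the book's notation), using coordinates in a basis {𝒪, e1, e2, e3} where the first coordinate carries the weight (the coefficient of the origin):

```python
# Elements of a bound vector space in the basis {O, e1, e2, e3}:
# coordinate 0 is the coefficient of the origin O (the "weight"),
# the remaining coordinates are vector components.

def vector(v):
    return [0.0] + list(v)      # vectors have no origin component

def point(v):
    return [1.0] + list(v)      # P = O + x : weight 1

def scale(a, elem):             # weighted point aP, or any scalar multiple
    return [a * c for c in elem]

def add(p, q):
    return [a + b for a, b in zip(p, q)]

P = point([1.0, 2.0, 0.0])
Q = point([3.0, 0.0, 4.0])

# The difference of two points is a vector (weight 0):
assert add(P, scale(-1.0, Q))[0] == 0.0

# The sum of weighted points aP + bQ is a weighted point of
# weight a + b (located at the mass-centre):
R = add(scale(2.0, P), scale(1.0, Q))
assert R[0] == 3.0
```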

Simple bivector
A simple bivector V is the exterior product of two vectors: V = x⋀y.

Simple m-element
A simple m-element is the exterior product of m 1-elements.

Simple m-vector
A simple m-vector is the exterior product of m vectors.

Space of a simple m-element
The space of a simple m-element αₘ is the set of all 1-elements x such that αₘ⋀x = 0. The space of a simple m-element is a linear space of dimension m.

Space of a non-simple m-element
The space of a non-simple m-element is the union of the spaces of its component simple m-elements.

2-direction
A 2-direction is the space of a simple bivector. It is therefore the set of all vectors parallel to a given simple bivector.

Underlying linear space
The underlying linear space of a Grassmann algebra is the linear space L₁ of 1-elements, which, together with the exterior product operation, generates the algebra. The dimension of an underlying linear space is denoted by the symbol n.

Underlying bound vector space
An underlying bound vector space is an underlying linear space whose basis elements are interpreted as an origin point 𝒪 and vectors. It can be shown that from this basis a second basis can be constructed, all of whose basis elements are points.

Union
A union of two simple elements is any of the congruent elements which define the union of their spaces.

Vector
A vector is a geometric interpretation of a 1-element.
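The membership test αₘ⋀x = 0 for the space of a simple element can be demonstrated with a generic exterior product on basis coordinates. A minimal sketch in plain Python (not the GrassmannAlgebra package; the representation and names are illustrative): an m-element is stored as a map from sorted basis-index tuples to coefficients, and the wedge of two elements picks up the sign of the permutation that sorts the concatenated indices.

```python
def wedge(a, b):
    # Exterior product of elements given as {sorted index tuple: coefficient}.
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) < len(idx):
                continue                      # repeated 1-element factor: term vanishes
            # Sign of the permutation that sorts idx (count inversions).
            sign = 1
            for i in range(len(idx)):
                for j in range(i + 1, len(idx)):
                    if idx[i] > idx[j]:
                        sign = -sign
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def vector(coords):
    # A 1-element in the basis e1, ..., en (0-based indices).
    return {(i,): c for i, c in enumerate(coords) if c != 0}

x = vector([1, 2, 0])     # x = e1 + 2 e2
y = vector([0, 1, 3])     # y = e2 + 3 e3
V = wedge(x, y)           # simple bivector V = x ⋀ y, of grade 1 + 1 = 2
```

Here wedge(x, x) returns the empty element, reflecting the antisymmetry of the exterior product, and any vector in the 2-direction of V, such as z = 2x + 3y, satisfies wedge(z, V) == {}.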

Vector space
A vector space is a linear space in which the elements are interpreted as vectors.

Vector subspace of a geometrically interpreted underlying linear space
The vector subspace of a geometrically interpreted underlying linear space is the subspace of elements which do not involve the origin.

Weighted point
A weighted point is a scalar multiple a of a point P: aP.

A5 Bibliography
To be completed.

Bibliography
Note
This bibliography is specifically directed at covering the algebraic implications of Grassmann's work. It is outside the scope of this book to cover any applications of Grassmann's work to calculus (for example, the theory of exterior differential forms), nor to the discussion of other algebraic systems (for example Clifford algebra) except where they clearly intersect with Grassmann's algebraic concepts. Of course, a result which may be expressed by an author in one system may well find expression in others.

Armstrong H L 1959
'On an alternative definition of the vector product in n-dimensional vector analysis'
Matrix and Tensor Quarterly, IX no 4, pp 107-110.
The author proposes a definition equivalent to the complement of the exterior product of n-1 vectors.

Ball R S 1876
The Theory of Screws: A Study in the Dynamics of a Rigid Body
Dublin
See the note on Hyde (1888).

Ball R S 1900
A Treatise on the Theory of Screws
Cambridge University Press (1900). Reprinted 1998.
A classical work on the theory of screws, containing an annotated bibliography in which Ball refers to the Ausdehnungslehre of 1862: 'This remarkable work contains much that is of instruction and interest in connection with the present theory. ... Here we have a very general theory which includes screw coordinates as a special case.' Ball does not use Grassmann's methods in his treatise.

Barton H 1927
'A Modern Presentation of Grassmann's Tensor Analysis'
American Journal of Mathematics, XLIX, pp 598-614.
This paper covers similar ground to that of Moore (1926).

Bourbaki N 1948
Algèbre
Actualités Scientifiques et Industrielles No. 1044, Paris
Chapter III treats multilinear algebra.

Bowen R M and Wang C-C 1976
Introduction to Vectors and Tensors
Plenum Press. Two volumes.
This is the only contemporary text on vectors and tensors I have sighted which relates points and vectors via the explicit introduction of the origin into the calculus (p 254).

Brand L 1947
Vector and Tensor Analysis
Wiley, New York.
Contains a chapter on motor algebra which, according to the author in his preface, '... is apparently destined to play an important role in mechanics as well as in line geometry'. There is also a chapter on quaternions.

Buchheim A 1884-1886
'On the Theory of Screws in Elliptic Space'
Proceedings of the London Mathematical Society, xiv (1884) pp 83-98, xvi (1885) pp 15-27, xvii (1886) pp 240-254, xvii p 88.
The author writes 'My special object is to show that the Ausdehnungslehre supplies all the necessary materials for a calculus of screws in elliptic space. Clifford was apparently led to construct his theory of biquaternions by the want of such a calculus, but Grassmann's method seems to afford a simpler and more natural means of expression than biquaternions.' (xiv, p 90) Later he extends this theory to '... all kinds of space.' (xvi, p 15)

Burali-Forti C 1897
Introduction à la Géométrie Différentielle suivant la Méthode de H. Grassmann
Gauthier-Villars, Paris
This work covers both algebra and differential geometry in the tradition of the Peano approach to the Ausdehnungslehre.

Burali-Forti C and Marcolongo R 1910
Éléments de Calcul Vectoriel
Hermann, Paris
Mostly a treatise on standard vector analysis, but it does contain an appendix (pp 176-198) on the methods of the Ausdehnungslehre and some interesting historical notes on the vector calculi and their notations. The authors use the wedge to denote Gibbs' cross product, and use the symbol ×, initially introduced by Grassmann, to denote the scalar or inner product.

Cartan É 1922
Leçons sur les Invariants Intégraux
Hermann, Paris
In this work Cartan develops the theory of exterior differential forms, remarking that he has called them 'extérieures' since they obey Grassmann's rules of 'multiplication extérieure'.

Cartan É 1938
Leçons sur la Théorie des Spineurs
Hermann, Paris
In the introduction Cartan writes "One of the principal aims of this work is to develop the theory of spinors systematically by giving a purely geometrical definition of these mathematical entities: because of this geometrical origin, the matrices used by physicists in quantum mechanics appear of their own accord, and we can grasp the profound origin of the property, possessed by Clifford algebras, of representing rotations in space having any number of dimensions". Contains a short section on multivectors and complements (the French term is supplément).

Cartan É 1946
Leçons sur la Géométrie des Espaces de Riemann
Gauthier-Villars, Paris
This is a classic text (originating in 1925) on the application of the exterior calculus to differential geometry. Cartan begins with a chapter on exterior algebra and its expression in tensor index notation. He uses square brackets to denote the exterior product (as did Grassmann), and a wedge to denote the cross product.

Carvallo M E 1892
'La Méthode de Grassmann'
Nouvelles Annales de Mathématiques, serie 3, XI, pp 8-37.
An exposition of some of Grassmann's methods applied to three dimensional geometry following the approach of Peano. It does not treat the interior product.

Chevalley C 1955
The Construction and Study of Certain Important Algebras
Mathematical Society of Japan, Tokyo.
Lectures given at the University of Tokyo on graded, tensor, Clifford and exterior algebras.

Clifford W K 1873
'Preliminary Sketch of Biquaternions'
Proceedings of the London Mathematical Society, IV, nos 64 and 65, pp 381-395
This paper includes an interesting discussion of the geometric nature of mechanical quantities. Clifford adopts the term 'rotor' for the bound vector, and 'motor' for the general sum of rotors. By analogy with the quaternion as a quotient of vectors he defines the biquaternion as a quotient of motors.

Clifford W K 1878
'Applications of Grassmann's Extensive Algebra'
American Journal of Mathematics Pure and Applied, I, pp 350-358
In this paper Clifford lays the foundations for general Clifford algebras.

Clifford W K 1882
Mathematical Papers
Reprinted by Chelsea Publishing Co, New York (1968).
Of particular interest in addition to his two published papers above are the otherwise unpublished notes:
'Notes on Biquaternions' (~1873)
'Further Note on Biquaternions' (1876)
'On the Classification of Geometric Algebras' (1876)
'On the Theory of Screws in a Space of Constant Positive Curvature' (1876).

Coffin J G 1909
Vector Analysis
Wiley, New York.
This is the second English text in the Gibbs-Heaviside tradition. It contains an appendix comparing the various notations in use at the time, including his view of the Grassmannian notation.

Collins J V 1899-1900
'An elementary Exposition of Grassmann's Ausdehnungslehre or Theory of Extension'
American Mathematical Monthly, 6 (1899) pp 193-198, 261-266, 297-301; 7 (1900) pp 31-35, 163-166, 181-187, 207-214, 253-258.
This work follows in summary form the Ausdehnungslehre of 1862 as regards general theory but differs in its discussion of applications. It includes applications to geometry and brief applications to linear equations, mechanics and logic.


Coolidge J L 1940
'Grassmann's Calculus of Extension' in A History of Geometrical Methods
Oxford University Press, pp 252-257
This brief treatment of Grassmann's work is characterized by its lack of clarity. The author variously describes an exterior product as 'essentially a matrix' and as 'a vector perpendicular to the factors' (p 254). Confusion also arises between Grassmann's matrix and the division of two exterior products (p 256).

Cox H 1882
'On the application of Quaternions and Grassmann's Ausdehnungslehre to different kinds of Uniform Space'
Cambridge Philosophical Transactions, XIII part II, pp 69-143.
The author shows that the exterior product is the multiplication required to describe nonmetric geometry, for 'it involves no ideas of distance' (p 115). He then discusses exterior, regressive and interior products, applying them to geometry, systems of forces, and linear complexes, using the notation of 1844. In other papers Cox applies the Ausdehnungslehre to non-Euclidean geometry (1873) and to the properties of circles (1890).

Crowe M J 1967, 1985
A History of Vector Analysis
Notre Dame 1967. Republished by Dover 1985.
This is the most informative work available on the history of vector analysis from the discovery of the geometric representation of complex numbers to the development of the Gibbs-Heaviside system. Crowe's thesis is that the Gibbs-Heaviside system grew mostly out of quaternions rather than from the Ausdehnungslehre. His explanation of Grassmannian concepts is particularly accurate, in contradistinction to many who supply a more casual reference.

Dibag I 1974
'Factorization in Exterior Algebras'
Journal of Algebra, 30, pp 259-262
The author develops necessary and sufficient conditions for an m-element to have a certain number of 1-element factors. He also shows that an (n-2)-element in an odd dimensional space always has a 1-element factor.

Drew T B 1961
Handbook of Vector and Polyadic Analysis
Reinhold, New York


Tensor analysis in invariant notation. Of particular interest here is a section (p 57) on 'polycross products', an extension of the (three-dimensional) cross product to polyads.

Efimov N V and Rozendorn E R 1975
Linear Algebra and Multi-Dimensional Geometry
MIR, Moscow
Contains a chapter on multivectors and exterior forms.

Fehr H 1899
Application de la Méthode Vectorielle de Grassmann à la Géométrie Infinitésimale
Georges Carré, Paris
Comprises an initial chapter on exterior algebra as well as standard differential geometry.

Fleming W H 1965
Functions of Several Variables
Addison-Wesley
Contains a chapter on exterior algebra.

Forder H G 1941
The Calculus of Extension
Cambridge
This text is one of the most recent devoted to an exposition of Grassmann's methods. It is an extensive work (490 pages) largely using Grassmann's notations and relying primarily on the Ausdehnungslehre of 1862. Its application is particularly to geometry, including many examples well illustrating the power of the methods, a chapter on forces, screws and linear complexes, and a treatment of the algebra of circles.

Gibbs J W 1886
'On multiple algebra'
Address to the American Association for the Advancement of Science. In Collected Works, Gibbs 1928, vol 2.
This paper is probably the most authoritative historical comparison of the different 'vectorial' algebras of the time. Gibbs was obviously very enthusiastic about the Ausdehnungslehre, and shows himself here to be one of Grassmann's greatest proponents.

Gibbs J W 1891
'Quaternions and the Ausdehnungslehre'
Nature, 44, pp 79-82. Also in Collected Works, Gibbs 1928.
Gibbs compares Hamilton's Quaternions with Grassmann's Ausdehnungslehre and concludes that '... Grassmann's system is of indefinitely greater extension ...'. Here he also concludes that to Grassmann must be attributed the discovery of matrices. Gibbs published a further three papers in Nature (also in Collected Works, Gibbs 1928) on the relationship between quaternions and vector analysis, providing an enlightening insight into the quaternion-vector analysis controversy of the time.


Gibbs J W 1928
The Collected Works of J. Willard Gibbs Ph.D. LL.D.
Two volumes. Longmans, New York.
In part 2 of Volume 2 is reprinted Gibbs' only personal work on vector analysis: Elements of Vector Analysis, Arranged for the Use of Students of Physics (1881-1884). This was not published elsewhere.

Grassmann H G 1844
Die lineale Ausdehnungslehre: ein neuer Zweig der Mathematik
Leipzig.
The full title is Die lineale Ausdehnungslehre: ein neuer Zweig der Mathematik, dargestellt und durch Anwendungen auf die übrigen Zweige der Mathematik, wie auch auf die Statik, Mechanik, die Lehre vom Magnetismus und die Krystallonomie erläutert. This first book on Grassmann's new mathematics is known shortly as Die Ausdehnungslehre von 1844. It develops the theory of exterior multiplication and division and regressive exterior multiplication. It does not treat complements or interior products in the way of the Ausdehnungslehre of 1862. The original Die Ausdehnungslehre von 1844 was republished in 1878. The best source to this work is Volume 1 of Grassmann's collected works (1896), of which an English translation has been made by Lloyd C. Kannenberg (1995).

Grassmann H G 1855
'Sur les différents genres de multiplication'
Crelle's Journal, 49, pp 123-141
This paper was written to claim priority over Cauchy for a method of solving linear equations.

Grassmann H G 1862
Die Ausdehnungslehre. Vollständig und in strenger Form
Berlin.
This is Grassmann's second attempt to publish his new discoveries in book form. It adopts a substantially different approach to the Ausdehnungslehre of 1844, relying more on the theorem-proof approach. The work comprises two main parts: the first on the exterior algebra and the second on the theory of functions. The first part includes chapters on addition and subtraction, products in general, combinatorial products (exterior and regressive), the interior product, and applications to geometry. This is probably Grassmann's most important work. The best source is Volume 1 of Grassmann's collected works (1896), of which an English translation has been made by Lloyd C. Kannenberg (2000). In the collected works edition, the editor Friedrich Engel has appended extensive notes and comments, which Kannenberg has also translated.

Grassmann H G 1878
'Verwendung der Ausdehnungslehre für die allgemeine Theorie der Polaren und den Zusammenhang algebraischer Gebilde'
Crelle's Journal, 84, pp 273-283.
This is Grassmann's last paper. It contains, among other material, his most complete discussion on the notion of 'simplicity'.

Grassmann H G 1896-1911
Hermann Grassmanns Gesammelte Mathematische und Physikalische Werke
Teubner, Leipzig. Volume 1 (1896), Volume 2 (1902, 1904), Volume 3 (1911).
Grassmann's complete collected works appeared between 1896 and 1911 under the editorship of Friedrich Engel and with the collaboration of Jakob Lüroth, Eduard Study, Justus Grassmann, Hermann Grassmann jr. and Georg Scheffers. The following is a summary of their contents.
Volume 1: Die lineale Ausdehnungslehre: ein neuer Zweig der Mathematik (1844); Geometrische Analyse: geknüpft an die von Leibniz erfundene geometrische Charakteristik (1847); Die Ausdehnungslehre. Vollständig und in strenger Form (1862).
Volume 2: Papers on geometry, analysis, mechanics and physics.
Volume 3: Theorie der Ebbe und Flut (1840); further papers on mathematical physics.
Parts of Volume 1 (Die lineale Ausdehnungslehre (1844) and Geometrische Analyse (1847)) together with selected papers on mathematics and physics have been translated into English by Lloyd C Kannenberg and published as A New Branch of Mathematics (1995). Geometrische Analyse is Grassmann's prize-winning essay fulfilling Leibniz' search for an algebra of geometry. The remainder of Volume 1 (Die Ausdehnungslehre (1862)) has been translated into English by Lloyd C Kannenberg and published as Extension Theory (2000). Volume 2 comprises papers on geometry, analysis, analytical mechanics and mathematical physics plus two texts, one on arithmetic and the other on trigonometry. Volume 3 comprises Grassmann's earliest major work (Theorie der Ebbe und Flut (1840)) and further papers, particularly on wave theory. The Theory of Tides begins to apply Grassmann's new approach to vector analysis.

Grassmann H der Jüngere 1909, 1913, 1927
Projektive Geometrie der Ebene
Teubner, Leipzig.
A comprehensive treatment in three books using the methods of the elder H. Grassmann.

Greenberg M J 1976
'Element of area via exterior algebra'
American Mathematical Monthly, 83, pp 274-275
The author suggests that treating elements of area by using the exterior product would be more satisfactory than the treatment normally given in calculus texts.

Greub W H 1967
Multilinear Algebra
Springer-Verlag, Berlin.
Contains chapters on exterior algebra.

Gurevich G B 1964
Foundations of the Theory of Algebraic Invariants
Noordhoff, Groningen
Contains a chapter on m-vectors (called here polyvectors) with an extensive treatment of the conditions for divisibility of an m-vector by one or more vectors (pp 354-395).

Hamilton W R 1853
Lectures on Quaternions
Dublin
The first English text on quaternions. The introduction briefly mentions Grassmann.

Hamilton W R 1866
Elements of Quaternions
Dublin
The editor Charles Joly in his preface remarks in relation to the quaternion's associativity that "For example, Grassmann's multiplication is sometimes associative, but sometimes it is not." The exterior product is of course associative. However, here it seems Joly may be suffering from a confusion caused by Grassmann's notation for expressions in which both exterior and regressive products appear. The second edition of 1899 has been reprinted in 1969 by Chelsea Publishing Company.

Hardy A S 1895
Elements of Quaternions
Ginn and Company, Boston
The author claims this to be an introduction to quaternions at an elementary level.


Heath A E 1917
'Hermann Grassmann 1809-1877', The Monist, 27, pp 1-21
'The Neglect of the Work of H. Grassmann', The Monist, 27, pp 22-35
'The Geometric Analysis of Hermann Grassmann and its connection with Leibniz's characteristic', The Monist, 27, pp 36-56

Hestenes D 1966
Space-Time Algebra
Gordon and Breach
This work is a seminal exposition of Clifford algebra emphasising the geometric nature of the quantities and operations involved. The author writes "... ideas of Grassmann are used to motivate the construction of Clifford algebra and to provide a geometric interpretation of Clifford numbers. This is to be contrasted with other treatments of Clifford algebra which are for the most part formal algebra. By insisting on Grassmann's geometric viewpoint, we are led to look upon the Dirac algebra with new eyes." (p 2)

Hestenes D 1968
'Multivector Calculus'
Journal of Mathematical Analysis and Applications, 24, pp 313-325
In the words of the author: "The object of this paper is to show how differential and integral calculus in many dimensions can be greatly simplified using Clifford algebra."

Hodge W V D 1952
Theory and Applications of Harmonic Integrals
Cambridge
The "star" operator defined by Hodge is of similar nature to Grassmann's complement operator.

Hunt K H 1970
Screw systems in Spatial Kinematics (Screw Systems Surveyed and Applied to Jointed Rigid Bodies)
Report MMERS 3, Department of Mechanical Engineering, Monash University, Clayton, Australia
Although in this report the author has intentionally confined himself to well known methods of pure and analytical geometry, he is a strong proponent of the screw as being the natural language for investigating spatial mechanisms.


Hyde E W 1884
'Calculus of Direction and Position'
American Journal of Mathematics, VI, pp 1-13
In this paper the author compares quaternions to the methods of the Ausdehnungslehre and concludes that Grassmann's system is far preferable as a system of directed quantities.

Hyde E W 1888
'Geometric Division of Non-congruent Quantities'
Annals of Mathematics, 4, pp 9-18
This paper deals with the concept of exterior division more extensively than did Grassmann.

Hyde E W 1888
'The Directional Theory of Screws'
Annals of Mathematics, 4, pp 137-155
This paper is an account of the theory of screws using the Ausdehnungslehre. Hyde claims that "A screw evidently belongs thoroughly to the realm of the Directional Calculus and will not be easily or naturally treated by Cartesian methods; and Ball's treatment is throughout essentially Cartesian in its nature." Here he is referring to The Theory of Screws: A Study in the Dynamics of a Rigid Body (1876). In A Treatise on the Theory of Screws (1900) Ball comments: "Prof. Hyde proves by his [sic] calculus many of the fundamental theorems in the present theory in a very concise manner" (p 531).

Hyde E W 1890
The Directional Calculus based upon the Methods of Hermann Grassmann
Ginn and Company, Boston
The author discusses geometric applications in 2 and 3 dimensions including screws and complements of bound elements (for example, points, lines and planes).

Hyde E W 1906
Grassmann's Space Analysis
Wiley, New York
In the words of the author: "This little book is an attempt to present simply and concisely the principles of the "Extensive Analysis" as fully developed in the comprehensive treatises of Hermann Grassmann, restricting the treatment however to the geometry of two and three dimensional space."


Jahnke E 1905
Vorlesungen über die Vektorenrechnung mit Anwendungen auf Geometrie, Mechanik und Mathematische Physik
Teubner, Leipzig
This work is full of examples of application of the Ausdehnungslehre (notation of 1862) to geometry, mechanics and physics. Only two and three dimensional problems are considered.

Lasker E 1896
'An Essay on the Geometrical Calculus'
Proceedings of the London Mathematical Society, XXVIII, pp 217-260
This work differs from most of the other papers of this era on the geometrical applications of the Ausdehnungslehre by concentrating on a space of arbitrary dimension rather than two or three.

Lewis G N 1910
'On four-dimensional vector calculus and its application in electrical theory'
Proceedings of the American Academy of Arts and Sciences, XLVI, pp 165-181
A specialization of the Ausdehnungslehre to four dimensions and its applications to electromagnetism in Minkowskian terms. The author introduces the new concepts with the minimum of explanation: for example, the anti-symmetric properties of bivectors and trivectors are justified as conventions! (p 167).

Lotze A 1922
Die Grundgleichungen der Mechanik insbesondere Starrer Körper
Teubner, Leipzig
This short monograph is one of the rare works addressing mechanics using the methods of the Ausdehnungslehre. It treats the dynamics of systems of particles and the kinematics and dynamics of rigid bodies.

Macfarlane A 1904
Bibliography of Quaternions and Allied Systems of Mathematics
Dublin
Published for the International Association for Promoting the Study of Quaternions and Allied Systems of Mathematics, this bibliography together with supplements to 1913 contains about 2500 articles including many on the Ausdehnungslehre and vector analysis.

Marcus M 1966
'The Cauchy-Schwarz inequality in the exterior algebra'
Quarterly Journal of Mathematics, 17, pp 61-63
The author shows that a classical inequality for positive definite hermitian matrices is a special case of the Cauchy-Schwarz inequality in the appropriate exterior algebra.

Marcus M 1975
Finite Dimensional Multilinear Algebra
Marcel Dekker, New York
Part II contains chapters on Grassmann and Clifford algebras.

Mehmke R 1913
Vorlesungen über Punkt- und Vektorenrechnung (2 volumes)
Teubner, Leipzig
Volume 1 (394 pages) deals with the analysis of bound elements (points, lines and planes) and projective geometry.

Milne E A 1948
Vectorial Mechanics
Methuen, London
An exposition of three-dimensional vectorial (and tensorial) mechanics using 'invariant' notation. Milne is one of the rare authors who realises that physical forces and the linear momenta of particles are better modelled by line vectors (bound vectors) rather than by the usual vector algebra. He treats systems of line vectors (bound vectors) as vector pairs.

Moore C L E 1926
'Grassmannian Geometry in Riemannian Space'
Journal of Mathematics and Physics, 5, pp 191-200
This paper treats the complement, and the exterior, regressive and interior products in a Riemannian space in tensor index notation using the alternating tensors and the generalised Kronecker symbol. This classic use of tensor notation does not enhance the readability of the exposition.

Murnaghan F D 1925
'The Generalised Kronecker Symbol and its Application to the Theory of Determinants'
American Mathematical Monthly, 32, pp 233-241
The generalised Kronecker symbol is essentially the generalisation to an exterior product space of the usual Kronecker symbol.

Murnaghan F D 1925
'The Tensor Character of the Generalised Kronecker Symbol'
Bulletin of the American Mathematical Society, 31, pp 323-329
The author states "It will be readily recognised that there is an intimate connection here with Grassmann's Ausdehnungslehre, and we believe, in fact, that a systematic exposition of this theory with the aid of the generalised Kronecker symbol would help to make it more widely understood".

The author states "It will be readily recognised that there is an intimate connection here with Grassmann's Ausdehnungslehre, and we believe, in fact, that a systematic exposition of this theory with the aid of the generalised Kronecker symbol would help to make it more widely understood". Peano G 1895 'Essay on Geometrical Calculus' in Selected Works of Guiseppe Peano Chapter XV, p 169-188 Allen and Unwin, London The essay is translated from 'Saggio di calcolo geometrico' Atti, Accad. Sci. Torino, 31, (1895-6) pp 952-975. Peano claims to have understood Grassmann's ideas by reconstructing them himself. His geometric ideas come through with clarity, substantiating his claim. Peano's principle exposition of Grassmann's work was in Calcolo geometrico secondo l'Ausdehnunglehre di H. Grassmann, Turin (1888) of which a translation of selected passages appears on pp 90-100 of the above Selected Works. Peano G 1901 Formulaire de Mathematiques Gauthier-Villars, Paris A compendium of axioms and results. The last part (pp 192-209) is devoted to point and vector spaces and includes some interesting historical comments. Pedoe D 1967 'On a geometrical theorem in exterior algebra' Canadian Journal of Mathematics, 19, pp 1187-1191 The author remarks 'This paper owes its inspiration to the remarkable book by H. G. Forder The Calculus of Extension Forder introduces many concepts which I find difficult to bring down to earth. But the methods developed in his book are powerful ones, and it is evident that much work can usefully be done in simplifying and interpreting some of the concepts he uses'. He does not mention Grassmann. Saddler W 1927 'Apolar triads on a cubic curve' Proceedings of the London Mathematical Society, Series 2, 26, pp 249-256 'Apolar tetrads on the Grassmann quartic surface' Journal of the London Mathematical Society, 2, pp 185-189 Exterior algebra applied to geometric construction. 
Sain M 1976 'The Growing Algebraic Presence in Systems Engineering: An Introduction' Proceedings of the IEEE, 64, p 96
A modern algebraic discussion culminating in the definition of the exterior algebra and its relationship to the theory of determinants and some systems-theoretical applications.

Schlegel V 1872, 1875 System der Raumlehre (2 volumes) Leipzig
Geometry using Grassmann's methods.

Schouten J A 1951 Tensor Analysis for Physicists Oxford
Contains a chapter on m-vectors from a tensor-analytic viewpoint.

Schweitzer A R 1950 Bulletin of the American Mathematical Society, 56
'Grassmann's extensive algebra and modern number theory' (Part I, p 355; Part II, p 458)
'On the place of the algebraic equation in Grassmann's extensive algebra' (p 459)
'On the derivation of the regressive product in Grassmann's geometrical calculus' (p 463)
'A metric generalisation of Grassmann's geometric calculus' (p 464)
Résumés only of these papers are printed.

Scott R F 1880 A Treatise on the Theory of Determinants Cambridge, London
The author states in the preface that 'The principal novelty in the treatise lies in its systematic use of Grassmann's alternate units, by means of which the study of determinants is, I believe, much simplified.'

Shephard G C 1966 Vector Spaces of Finite Dimension Oliver and Boyd, London
Chapter IV contains an introduction to exterior products via tensors and multilinear algebra.

Thomas J M 1962 Systems and Roots W Byrd Press
The author uses exterior algebraic concepts in some of his network analysis.


Tonti E 1972 Accademia Nazionale dei Lincei, Serie VIII, Volume LII
'On the Mathematical Structure of a Large Class of Physical Theories' (Fasc. 1, p 48)
'A Mathematical Model for Physical Theories' (Fasc. 2-3, pp 176, 351)
These papers begin the author's investigations into the structure of physical theories, in which he considers the geometrical calculus to play an important part.

Tonti E 1975 On the Formal Structure of Physical Theories Report of the Istituto di Matematica del Politecnico di Milano
In this report the author constructs a classification scheme for physical quantities and the equations of physical theories. The mathematical structures needed for this, and which are reviewed in this report, are algebraic topology, exterior algebra, exterior differential forms, and Clifford algebra. The author shows that the underlying structure of physical theories is basically capable of a geometric interpretation.

Whitehead A N 1898 A Treatise on Universal Algebra (Volume 1) Cambridge
No further volumes appeared. This is probably the best and most complete exposition of Grassmann's works in English (586 pages). The author recreates many of Grassmann's results, in many cases extending and clarifying them with original contributions. Whitehead considers non-Euclidean metrics and spaces of arbitrary dimension. However, like Grassmann, he does not distinguish between n-elements and scalars.

Willmore T J 1959 Introduction to Differential Geometry Oxford
Includes a brief discussion of exterior algebra and its application to differential geometry (p 189).

Wilson E B 1901 Vector Analysis New York
The first formally published book entirely devoted to presenting the Gibbs-Heaviside system of vector analysis, based on Gibbs' lectures and Heaviside's papers in the Electrician in 1893. (Wilson uses the term 'bivector', but by it means a vector with real and imaginary parts.)


Wilson E B and Lewis G N 1912 'The space-time manifold of relativity. The non-Euclidean geometry of mechanics and electromagnetics' Proceedings of the American Academy of Arts and Sciences, XLVIII, pp 387-507
This treatise uses a four-dimensional vector calculus developed by Lewis (see Lewis G N) by specializing the exterior calculus to four dimensions (with the scalar products of time-like vectors negative). This is a good example of the power of the exterior calculus.

Ziwet A 1885-6 'A Brief Account of H. Grassmann's Geometrical Theories' Annals of Mathematics, 2, (1885 pp 1-11; 1886 pp 25-34)
In the words of the author: 'It is the object of the present paper to give in the simplest form possible, a succinct account of Grassmann's mathematical theories and methods in their application to plane geometry'. (Follows in the main Schlegel's System der Raumlehre.)

A note on sources for Grassmann's work


The best source for Grassmann's contributions to science is his Collected Works (Grassmann 1896), which contain in volume 1 both Die Ausdehnungslehre von 1844 and Die Ausdehnungslehre von 1862, as well as Geometrische Analyse, his prizewinning essay fulfilling Leibniz's search for an algebra of geometry. Volume 2 contains papers on geometry, analysis, mechanics and physics, while volume 3 contains Theorie der Ebbe und Flut.

Die Ausdehnungslehre von 1862, fully titled Die Ausdehnungslehre. Vollständig und in strenger Form, is perhaps Grassmann's most important mathematical work. It comprises two main parts: the first devoted basically to the Ausdehnungslehre (212 pages) and the second to the theory of functions (155 pages). The Collected Works edition contains 98 pages of notes and comments. The discussion of the Ausdehnungslehre includes chapters on addition and subtraction, products in general, progressive and regressive products, interior products, and applications to geometry. A Cartesian metric is assumed.

Both of Grassmann's Ausdehnungslehre texts have been translated into English by Lloyd C Kannenberg. The 1844 version is published as A New Branch of Mathematics: The Ausdehnungslehre of 1844 and Other Works, Open Court 1995. The translation contains Die Ausdehnungslehre von 1844, Geometrische Analyse, selected papers on mathematics and physics, a bibliography of Grassmann's principal works, and extensive editorial notes. The 1862 version is published as Extension Theory. It contains work on both the theory of extension and the theory of functions. Particularly useful are the editorial and supplementary notes.

Apart from these translations, probably the best and most complete exposition on the Ausdehnungslehre in English is in Alfred North Whitehead's A Treatise on Universal Algebra (Whitehead 1898).
Whitehead saw Grassmann's work as one of the foundation stones on which he hoped to build an algebraic theory which united the several important new mathematical systems which emerged during the nineteenth century: the algebra of symbolic logic, Grassmann's theory of extension, quaternions, matrices and the general theory of linear algebras.

The second most complete exposition of the Ausdehnungslehre is Henry George Forder's The Calculus of Extension (Forder 1941). Forder's interest is mainly in the geometric applications of the theory of extension.


The only other books on Grassmann in English are those by Edward Wyllys Hyde, The Directional Calculus (Hyde 1890) and Grassmann's Space Analysis (Hyde 1906). They treat the theory of extension in two and three-dimensional geometric contexts and include some applications to statics. Several topics, such as Hyde's treatment of screws, are original contributions.

The seminal papers on Clifford algebra are by William Kingdon Clifford and can be found in his collected works Mathematical Papers (Clifford 1882), republished in a facsimile edition by Chelsea.

Fortunately for those interested in the evolution of the emerging 'geometric algebras', The International Association for Promoting the Study of Quaternions and Allied Systems of Mathematics published a bibliography (Macfarlane 1913) which, together with supplements to 1913, contains about 2500 articles. It therefore most likely contains all the works on the Ausdehnungslehre and related subjects up to 1913.

The only other recent text devoted specifically to Grassmann algebra (to the author's knowledge as of 2001) is Arno Zaddach's Grassmanns Algebra in der Geometrie, BI-Wissenschaftsverlag (Zaddach 1994).


Index
To be completed.

