John Browne
2009 9 3
Table of Contents
1 Introduction
1.1 Background
    The mathematical representation of physical entities; The central concept of the Ausdehnungslehre; Comparison with the vector and tensor algebras; Algebraicizing the notion of linear dependence; Grassmann algebra as a geometric calculus
1.2 The Exterior Product
    The anti-symmetry of the exterior product; Exterior products of vectors in a three-dimensional space; Terminology: elements and entities; The grade of an element; Interchanging the order of the factors in an exterior product; A brief summary of the properties of the exterior product
1.3 The Regressive Product
    The regressive product as a dual product to the exterior product; Unions and intersections of spaces; A brief summary of the properties of the regressive product; The Common Factor Axiom; The intersection of two bivectors in a three-dimensional space
1.4 Geometric Interpretations
    Points and vectors; Sums and differences of points; Determining a mass-centre; Lines and planes; The intersection of two lines
1.5 The Complement
    The complement as a correspondence between spaces; The Euclidean complement; The complement of a complement; The Complement Axiom
1.6 The Interior Product
    The definition of the interior product; Inner products and scalar products; Sequential interior products; Orthogonality; Measure and magnitude; Calculating interior products; Expanding interior products; The interior product of a bivector and a vector; The cross product
1.7 Exploring Screw Algebra
    To be completed
1.8 Exploring Mechanics
    To be completed
1.9 Exploring Grassmann Algebras
    To be completed
1.10 Exploring the Generalized Product
    To be completed
1.11 Exploring Hypercomplex Algebras
    To be completed
1.12 Exploring Clifford Algebras
    To be completed
1.13 Exploring Grassmann Matrix Algebras
    To be completed
1.14 The Various Types of Linear Product
    Introduction; Case 1: The product of an element with itself is zero; Case 2: The product of an element with itself is non-zero; Examples
1.15 Terminology
    Terminology
1.16 Summary
    To be completed
2.3 Exterior Linear Spaces
    Composing m-elements; Composing elements automatically; Spaces and congruence; The associativity of the exterior product; Transforming exterior products
2.4 Axioms for Exterior Linear Spaces
    Summary of axioms; Grassmann algebras; On the nature of scalar multiplication; Factoring scalars; Grassmann expressions; Calculating the grade of a Grassmann expression
2.5 Bases
    Bases for exterior linear spaces; Declaring a basis in GrassmannAlgebra; Composing bases of exterior linear spaces; Composing palettes of basis elements; Standard ordering; Indexing basis elements of exterior linear spaces
2.6 Cobases
    Definition of a cobasis; The cobasis of unity; Composing palettes of cobasis elements; The cobasis of a cobasis
2.7 Determinants
    Determinants from exterior products; Properties of determinants; The Laplace expansion technique; Calculating determinants
2.8 Cofactors
    Cofactors from exterior products; The Laplace expansion in cofactor form; Exploring the calculation of determinants using minors and cofactors; Transformations of cobases; Exploring transformations of cobases
2.9 Solution of Linear Equations
    Grassmann's approach to solving linear equations; Example solution: 3 equations in 4 unknowns; Example solution: 4 equations in 4 unknowns
2.10 Simplicity
    The concept of simplicity; All (n−1)-elements are simple; Conditions for simplicity of a 2-element in a 4-space; Conditions for simplicity of a 2-element in a 5-space
2.11 Exterior division
    The definition of an exterior quotient; Division by a 1-element; Division by a k-element; Automating the division process
2.12 Multilinear forms
    The span of a simple element; Composing spans; Example: Refactorizations; Multilinear forms; Defining m:k-forms; Composing m:k-forms; Expanding and simplifying m:k-forms; Developing invariant forms; Properties of m:k-forms; The complete span of a simple element
2.13 Unions and intersections
    Union and intersection as a multilinear form; Where the intersection is evident; Where the intersection is not evident; Intersection with a non-simple element; Factorizing simple elements
2.14 Summary
3.5 The Common Factor Axiom
    Motivation; The Common Factor Axiom; Extension of the Common Factor Axiom to general elements; Special cases of the Common Factor Axiom; Dual versions of the Common Factor Axiom; Application of the Common Factor Axiom; When the common factor is not simple
3.6 The Common Factor Theorem
    Development of the Common Factor Theorem; Proof of the Common Factor Theorem; The A and B forms of the Common Factor Theorem; Example: The decomposition of a 1-element; Example: Applying the Common Factor Theorem; Automating the application of the Common Factor Theorem
3.7 The Regressive Product of Simple Elements
    The regressive product of simple elements; The regressive product of (n−1)-elements; Regressive products leading to scalar results; Expressing an element in terms of another basis; Exploration: The cobasis form of the Common Factor Axiom; Exploration: The regressive product of cobasis elements
3.8 Factorization of Simple Elements
    Factorization using the regressive product; Factorizing elements expressed in terms of basis elements; The factorization algorithm; Factorization of (n−1)-elements; Factorizing simple m-elements; Factorizing contingently simple m-elements; Determining if an element is simple
3.9 Product Formulas for Regressive Products
    The Product Formula; Deriving Product Formulas; Deriving Product Formulas automatically; Computing the General Product Formula; Comparing the two forms of the Product Formula; The invariance of the General Product Formula; Alternative forms for the General Product Formula; The Decomposition Formula; Exploration: Dual forms of the General Product Formulas
3.10 Summary
4 Geometric Interpretations
4.1 Introduction
4.2 Geometrically Interpreted 1-elements
    Vectors; Points; Declaring a basis for a bound vector space; Composing vectors and points; Example: Calculation of the centre of mass
4.3 Geometrically Interpreted 2-elements
    Simple geometrically interpreted 2-elements; Bivectors; Bound vectors; Composing bivectors and bound vectors; The sum of two parallel bound vectors; The sum of two non-parallel bound vectors; Sums of bound vectors; Example: Reducing a sum of bound vectors
4.4 Geometrically Interpreted m-Elements
    Types of geometrically interpreted m-elements; The m-vector; The bound m-vector; Bound simple m-vectors expressed by points; Bound simple bivectors; Composing m-vectors and bound m-vectors
4.5 Geometrically Interpreted Spaces
    Vector and point spaces; Coordinate spaces; Geometric dependence; Geometric duality
4.6 m-planes
    m-planes defined by points; m-planes defined by m-vectors; m-planes as exterior quotients; Computing exterior quotients; The m-vector of a bound m-vector
4.7 Line Coordinates
    Lines in a plane; Lines in a 3-plane; Lines in a 4-plane; Lines in an m-plane
4.8 Plane Coordinates
    Planes in a 3-plane; Planes in a 4-plane; Planes in an m-plane; The coordinates of geometric entities
4.9 Calculation of Intersections
    The intersection of two lines in a plane; The intersection of a line and a plane in a 3-plane; The intersection of two planes in a 3-plane; Example: The osculating plane to a curve
4.10 Decomposition into Components
    The shadow; Decomposition in a 2-space; Decomposition in a 3-space; Decomposition in a 4-space; Decomposition of a point or vector in an n-space
4.11 Projective Space
    The intersection of two lines in a plane; The line at infinity in a plane; Projective 3-space; Homogeneous coordinates; Duality; Desargues' theorem; Pappus' theorem; Projective n-space
4.12 Regions of Space
    Regions of space; Regions of a plane; Regions of a line; Planar regions defined by two lines; Planar regions defined by three lines; Creating a pentagonal region; Creating a 5-star region; Creating a 5-star pyramid; Summary
4.13 Geometric Constructions
    Geometric expressions; Geometric equations for lines and planes; The geometric equation of a conic section in the plane; The geometric equation as a prescription to construct; The algebraic equation of a conic section in the plane; An alternative geometric equation of a conic section in the plane; Conic sections through five points; Dual constructions; Constructing conics in space; A geometric equation for a cubic in the plane; Pascal's Theorem; Pascal lines
4.14 Summary
5 The Complement
5.1 Introduction
5.2 Axioms for the Complement
    The grade of a complement; The linearity of the complement operation; The complement axiom; The complement of a complement axiom; The complement of unity
5.3 Defining the Complement
    The complement of an m-element; The complement of a basis m-element; Defining the complement of a basis 1-element; Constraints on the value of !; Choosing the value of !; Defining the complements in matrix form
5.4 The Euclidean Complement
    Tabulating Euclidean complements of basis elements; Formulae for the Euclidean complement of basis elements; Products leading to a scalar or n-element
5.5 Complementary Interlude
    Alternative forms for complements; Orthogonality; Visualizing the complement axiom; The regressive product in terms of complements; Glimpses of the inner product
5.6 The Complement of a Complement
    The complement of a complement axiom; The complement of a cobasis element; The complement of the complement of a basis 1-element; The complement of the complement of a basis m-element; The complement of the complement of an m-element; Idempotent complements
5.7 Working with Metrics
    Working with metrics; The default metric; Declaring a metric; Declaring a general metric; Calculating induced metrics; The metric for a cobasis; Creating palettes of induced metrics
5.8 Calculating Complements
    Entering a complement; Creating palettes of complements of basis elements; Converting complements of basis elements; Simplifying expressions involving complements; Converting expressions involving complements to specified forms; Converting regressive products of basis elements in a metric space
5.9 Complements in a vector space
    The Euclidean complement in a vector 2-space; The non-Euclidean complement in a vector 2-space; The Euclidean complement in a vector 3-space; The non-Euclidean complement in a vector 3-space
5.10 Complements in a bound space
    Metrics in a bound space; The complement of an m-vector; Products of vectorial elements in a bound space; The complement of an element bound through the origin; The complement of the complement of an m-vector; Calculating with vector space complements
5.11 Complements of bound elements
    The Euclidean complement of a point in the plane; The Euclidean complement of a point in a point 3-space; The complement of a bound element; Euclidean complements of bound elements; The regressive product of point complements
5.12 Reciprocal Bases
    Reciprocal bases; The complement of a basis element; The complement of a cobasis element; The complement of a complement of a basis element; The exterior product of basis elements; The regressive product of basis elements; The complement of a simple element is simple
5.13 Summary
6.4 The Interior Common Factor Theorem
    The Interior Common Factor Formula; The Interior Common Factor Theorem; Examples of the Interior Common Factor Theorem; The computational form of the Interior Common Factor Theorem
6.5 The Inner Product
    Implications of the Common Factor Axiom; The symmetry of the inner product; The inner product of complements; The inner product of simple elements; Calculating inner products; Inner products of basis elements
6.6 The Measure of an m-element
    The definition of measure; Unit elements; Calculating measures; The measure of free elements; The measure of bound elements; Determining the multivector of a bound multivector
6.7 The Induced Metric Tensor
    Calculating induced metric tensors; Using scalar products to construct induced metric tensors; Displaying induced metric tensors as a matrix of matrices
6.8 Product Formulae for Interior Products
    The basic interior Product Formula; Deriving interior Product Formulas; Deriving interior Product Formulas automatically; Computable forms of interior Product Formulas; The invariance of interior Product Formulas; An alternative form for the interior Product Formula; The interior decomposition formula; Interior Product Formulas for 1-elements; Interior Product Formulas in terms of double sums
6.9 The Zero Interior Sum Theorem
    The zero interior sum; Composing interior sums; The Gram-Schmidt process; Proving the Zero Interior Sum Theorem
6.10 The Cross Product
    Defining a generalized cross product; Cross products involving 1-elements; Implications of the axioms for the cross product; The cross product as a universal product; Cross product formulae
6.11 The Triangle Formulae
    Triangle components; The measure of the triangle components; Equivalent forms for the triangle components
6.12 Angle
    Defining the angle between elements; The angle between a vector and a bivector; The angle between two bivectors; The volume of a parallelepiped
6.13 Projection
    To be completed
6.14 Interior Products of Interpreted Elements
    To be completed
6.15 The Closest Approach of Multiplanes
    To be completed
8 Exploring Mechanics
8.1 Introduction
8.2 Force
    Representing force; Systems of forces; Equilibrium; Force in a metric 3-plane
8.3 Momentum
    The velocity of a particle; Representing momentum; The momentum of a system of particles; The momentum of a system of bodies; Linear momentum and the mass centre; Momentum in a metric 3-plane
8.4 Newton's Law
    Rate of change of momentum; Newton's second law
8.5 The Angular Velocity of a Rigid Body
    To be completed
8.6 The Momentum of a Rigid Body
    To be completed
8.7 The Velocity of a Rigid Body
    To be completed
8.8 The Complementary Velocity of a Rigid Body
    To be completed
8.9 The Infinitesimal Displacement of a Rigid Body
    To be completed
8.10 Work, Power and Kinetic Energy
    To be completed
9 Grassmann Algebra
9.1 Introduction
9.2 Grassmann Numbers
    Creating Grassmann numbers; Body and soul; Even and odd components; The grades of a Grassmann number; Working with complex scalars
9.3 Operations with Grassmann Numbers
    The exterior product of Grassmann numbers; The regressive product of Grassmann numbers; The complement of a Grassmann number; The interior product of Grassmann numbers
9.4 Simplifying Grassmann Numbers
    Elementary simplifying operations; Expanding products; Factoring scalars; Checking for zero terms; Reordering factors; Simplifying expressions
9.5 Powers of Grassmann Numbers
    Direct computation of powers; Powers of even Grassmann numbers; Powers of odd Grassmann numbers; Computing positive powers of Grassmann numbers; Powers of Grassmann numbers with no body; The inverse of a Grassmann number; Integer powers of a Grassmann number; General powers of a Grassmann number
9.6 Solving Equations
    Solving for unknown coefficients; Solving for an unknown Grassmann number
9.7 Exterior Division
    Defining exterior quotients; Special cases of exterior division; The non-uniqueness of exterior division
9.8 Factorization of Grassmann Numbers
    The non-uniqueness of factorization; Example: Factorizing a Grassmann number in 2-space; Example: Factorizing a 2-element in 3-space; Example: Factorizing a 3-element in 4-space
9.9 Functions of Grassmann Numbers
    The Taylor series formula; The form of a function of a Grassmann number; Calculating functions of Grassmann numbers; Powers of Grassmann numbers; Exponential and logarithmic functions of Grassmann numbers; Trigonometric functions of Grassmann numbers; Functions of several Grassmann numbers
10.10 Properties of the Generalized Product
    Summary of properties
10.11 The Triple Generalized Sum Conjecture
    The generalized Grassmann product is not associative; The triple generalized sum; The triple generalized sum conjecture; Exploring the triple generalized sum conjecture; An algorithm to test the conjecture
10.12 Exploring Conjectures
    A conjecture; Exploring the conjecture
10.13 The Generalized Product of Intersecting Elements
    The case l < p; The case l ≥ p; The special case of l = p
10.14 The Generalized Product of Orthogonal Elements
    The generalized product of totally orthogonal elements; The generalized product of partially orthogonal elements
10.15 The Generalized Product of Intersecting Orthogonal Elements
    The case l < p; The case l ≥ p
10.16 Generalized Products in Lower Dimensional Spaces
    Generalized products in 0, 1, and 2-spaces; 0-space; 1-space; 2-space
10.17 Generalized Products in 3-Space
    To be completed
11.4 The Hypercomplex Product in a 2-Space
    Tabulating products in 2-space; The hypercomplex product of two 1-elements; The hypercomplex product of a 1-element and a 2-element; The hypercomplex square of a 2-element; The product table in terms of exterior and interior products
11.5 The Quaternions
    The product table for orthonormal elements; Generating the quaternions; The norm of a quaternion; The Cayley-Dickson algebra
11.6 The Norm of a Grassmann number
    The norm; The norm of a simple m-element; The skew-symmetry of products of elements of different grade; The norm of a Grassmann number in terms of hypercomplex products; The norm of a Grassmann number of simple components; The norm of a non-simple element
11.7 Products of two different elements of the same grade
    The symmetrized sum of two m-elements; Symmetrized sums for elements of different grades; The body of a symmetrized sum; The soul of a symmetrized sum; Summary of results of this section
11.8 Octonions
    To be completed
12.4 Special Cases of Clifford Products
    The Clifford product with scalars; The Clifford product of 1-elements; The Clifford product of an m-element and a 1-element; The Clifford product of an m-element and a 2-element; The Clifford product of two 2-elements; The Clifford product of two identical elements
12.5 Alternate Forms for the Clifford Product
    Alternate expansions of the Clifford product; The Clifford product expressed by decomposition of the first factor; Alternative expression by decomposition of the first factor; The Clifford product expressed by decomposition of the second factor; The Clifford product expressed by decomposition of both factors
12.6 Writing Down a General Clifford Product
    The form of a Clifford product expansion; A mnemonic way to write down a general Clifford product
12.7 The Clifford Product of Intersecting Elements
    General formulae for intersecting elements; Special cases of intersecting elements
12.8 The Clifford Product of Orthogonal Elements
    The Clifford product of totally orthogonal elements; The Clifford product of partially orthogonal elements; Testing the formulae
12.9 The Clifford Product of Intersecting Orthogonal Elements
    Orthogonal union; Orthogonal intersection
12.10 Summary of Special Cases of Clifford Products
    Arbitrary elements; Arbitrary and orthogonal elements; Orthogonal elements; Calculating with Clifford products
12.11 Associativity of the Clifford Product
    Associativity of orthogonal elements; A mnemonic formula for products of orthogonal elements; Associativity of non-orthogonal elements; Testing the general associativity of the Clifford product
12.13 Clifford Algebra
    Generating Clifford algebras; Real algebra; Clifford algebras of a 1-space
12.14 Clifford Algebras of a 2-Space
    The Clifford product table in 2-space; Product tables in a 2-space with an orthogonal basis; Case 1: {e1·e1 = +1, e2·e2 = +1}; Case 2: {e1·e1 = +1, e2·e2 = −1}; Case 3: {e1·e1 = −1, e2·e2 = −1}
12.15 Clifford Algebras of a 3-Space
    The Clifford product table in 3-space; Cl3: The Pauli algebra; Cl3+: The Quaternions; The Complex subalgebra; Biquaternions
12.16 Clifford Algebras of a 4-Space
    The Clifford product table in 4-space; Cl4: The Clifford algebra of Euclidean 4-space; Cl4+: The even Clifford algebra of Euclidean 4-space; Cl1,3: The Dirac algebra; Cl0,4: The Clifford algebra of complex quaternions
12.17 Rotations
    To be completed
13.6 Matrix Powers
    Positive integer powers; Negative integer powers; Non-integer powers of matrices with distinct eigenvalues; Integer powers of matrices with distinct eigenvalues
13.7 Matrix Inverses
    A formula for the matrix inverse; GrassmannMatrixInverse
13.8 Matrix Equations
    Two types of matrix equations; Solving matrix equations
13.9 Matrix Eigensystems
    Exterior eigensystems of Grassmann matrices; GrassmannMatrixEigensystem
13.10 Matrix Functions
    Distinct eigenvalue matrix functions; GrassmannMatrixFunction; Exponentials and Logarithms; Trigonometric functions; Symbolic matrices; Symbolic functions
13.11 Supermatrices
    To be completed
Appendices
Contents
A1 Guide to GrassmannAlgebra
A2 A Brief Biography of Grassmann
A3 Notation
A4 Glossary
A5 Bibliography
    Bibliography; A note on sources to Grassmann's work
Index
Preface
The origins of this book
This book has grown out of an interest in Grassmann's work over the past three decades. There is something fascinating about the beauty with which the mathematical structures Grassmann discovered (invented, if you will) describe the physical world, and something also fascinating about how these beautiful structures have been largely lost to the mainstreams of mathematics and science.
Finally, the term 'GrassmannAlgebra' will be used to refer to the Mathematica-based software package which accompanies this book.
Chapters 7 to 13 (Exploring Screw Algebra, Mechanics, Grassmann Algebra, The Generalized Product, Hypercomplex Algebra, Clifford Algebra, and Grassmann Matrix Algebra) may be read independently of each other, with the proviso that the chapters on hypercomplex and Clifford algebra depend on the notion of the generalized product. Some computational sections within the text use GrassmannAlgebra or Mathematica. Wherever possible, explanation is provided so that the results are still intelligible without a knowledge of Mathematica. Mathematica input/output dialogue is set in indented Courier font, with the input in bold. For example:
GrassmannSimplify[1 + x + x∧x]
1 + x
Any chapter or section whose title begins with 'Exploring' may be omitted on first reading without unduly disturbing the flow of concept development. Exploratory chapters cover separate applications. Exploratory sections treat topics that are important for the justification of later developments but not essential for their understanding; topics of a specialized nature; or results which appear to be new and worth documenting.
Acknowledgements
Finally I would like to acknowledge those who were instrumental in the evolution of this book: Cecil Pengilley, who originally encouraged me to look at applying Grassmann's work to engineering; the University of Melbourne, Xerox Corporation, the University of Rochester and Swinburne University of Technology, which sheltered me while I pursued this interest; Janet Blagg, who peerlessly edited the text; my family, who had to put up with my writing; and finally Stephen Wolfram, who created Mathematica and provided me with a visiting scholar's grant to work at Wolfram Research Institute in Champaign, where I began developing the GrassmannAlgebra package. Above all however, I must acknowledge Hermann Grassmann. His contribution to mathematics and science puts him among the great thinkers of the nineteenth century. I hope you enjoy exploring this beautiful mathematical system.
For I have every confidence that the effort I have applied to the science reported upon here, which has occupied a considerable span of my lifetime and demanded the most intense exertions of my powers, is not to be lost. A time will come when it will be drawn forth from the dust of oblivion and the ideas laid down here will bear fruit. Some day these ideas, even if in an altered form, will reappear and with the passage of time will participate in a lively intellectual exchange. For truth is eternal, it is divine; and no phase in the development of truth, however small the domain it embraces, can pass away without a trace. It remains even if the garments in which feeble men clothe it fall into dust.
Hermann Grassmann in the foreword to the Ausdehnungslehre of 1862 translated by Lloyd Kannenberg
1 Introduction
1.1 Background
The mathematical representation of physical entities
Three of the more important mathematical systems for representing the entities of contemporary engineering and physical science are the (three-dimensional) vector algebra, the more general tensor algebra, and geometric algebra. Grassmann algebra is more general than vector algebra, overlaps aspects of the tensor algebra, and underpins geometric algebra. It predates all three. In this book we will show that it is only via Grassmann algebra that many of the geometric and physical entities commonly used in the engineering and physical sciences may be represented mathematically in a way which correctly models their pertinent properties and leads straightforwardly to principal results.

As a case in point we may take the concept of force. It is well known that a force is not satisfactorily represented by a (free) vector, yet contemporary practice is still to use a (free) vector calculus for this task. The deficiency may be made up for by verbal appendages to the mathematical statements: for example 'where the force f acts along the line through the point P'. Such verbal appendages, being necessary, and yet not part of the calculus being used, indicate that the calculus itself is not adequate to model force satisfactorily.

In practice this inadequacy is coped with in terms of a (free) vector calculus by the introduction of the concept of moment. The conditions of equilibrium of a rigid body include a condition on the sum of the moments of the forces about any point. The justification for this condition is not well treated in contemporary texts. It will be shown later, however, that by representing a force correctly in terms of an element of the Grassmann algebra, both force-vector and moment conditions for the equilibrium of a rigid body are natural consequences of the algebraic processes alone.

Since the application of Grassmann algebra to mechanics was known during the nineteenth century, one might wonder why, with the 'progress of science', it is not currently used.
Indeed the same question might be asked with respect to its application in many other fields. To attempt to answer these questions, a brief biography of Grassmann is included as an appendix. In brief, the scientific world was probably not ready in the nineteenth century for the new ideas that Grassmann proposed, and now, in the twenty-first century, seems only just becoming aware of their potential.
For applications to the physical world, however, the Grassmann algebra possesses a critical capability that no other algebra possesses so directly: it can distinguish between points and vectors and treat them as separate tensorial entities. Lines and planes are examples of higher order constructs from points and vectors, which have both position and direction. A line can be represented by the exterior product of any two points on it, or by any point on it with a vector parallel to it.
[Figures: point–vector and point–point depictions of a line]
A plane can be represented by the exterior product of any point on it with a bivector parallel to it, any two points on it with a vector parallel to it, or any three points on it.
[Figures: point–vector–vector, point–point–vector, and point–point–point depictions of a plane]
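These two ways of representing a line — as the exterior product of two points on it, or of a point and a parallel vector — can be checked numerically. The sketch below is purely illustrative (it is not the book's GrassmannAlgebra package): a point is modelled, by assumption, as a 1-element whose first coordinate is the weight of the origin, and the exterior product of two 1-elements is modelled as an antisymmetric coefficient matrix.

```python
import numpy as np

def point(position):
    """Model a point as a 1-element: origin weight 1, then position coordinates.
    (An illustrative convention, not the book's package.)"""
    return np.concatenate(([1.0], np.asarray(position, dtype=float)))

def wedge(a, b):
    """Exterior product of two 1-elements as an antisymmetric coefficient matrix."""
    return np.outer(a, b) - np.outer(b, a)

P = point([1.0, 0.0, 0.0])
Q = point([0.0, 1.0, 0.0])

line_pp = wedge(P, Q)       # the line as a product of two points on it
line_pv = wedge(P, Q - P)   # the same line as a point and a parallel vector

# The two depictions agree, since P^(Q - P) = P^Q - P^P = P^Q:
print(np.allclose(line_pp, line_pv))  # True
```

The agreement is forced by the algebra alone: subtracting P from Q only adds a term P∧P, which vanishes.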
Finally, it should be noted that the Grassmann algebra subsumes all of real algebra, the exterior product reducing in this case to the usual product operation amongst real numbers. Here then is a geometric calculus par excellence. We hope you enjoy exploring it.
x∧y = −y∧x
From this we can easily show the equivalent relation, that the exterior product of a vector with itself is zero.
x∧x = 0        (1.1)
This is as expected because x is linearly dependent on itself. The exterior product is associative, distributive, and behaves linearly as expected with scalars.
Suppose now that x and y are expressed in terms of a basis:

x = a1 e1 + a2 e2 + a3 e3        y = b1 e1 + b2 e2 + b3 e3        (1.2)

Here, the ai and bi are of course scalars. Taking the exterior product of x and y and multiplying out the product allows us to express the bivector x∧y as a linear combination of basis bivectors.
x∧y = (a1 e1 + a2 e2 + a3 e3)∧(b1 e1 + b2 e2 + b3 e3)

x∧y = (a1 b1) e1∧e1 + (a1 b2) e1∧e2 + (a1 b3) e1∧e3 + (a2 b1) e2∧e1 + (a2 b2) e2∧e2 + (a2 b3) e2∧e3 + (a3 b1) e3∧e1 + (a3 b2) e3∧e2 + (a3 b3) e3∧e3
The first simplification we can make is to set all basis bivectors of the form ei∧ei to zero. The second simplification is to use the anti-symmetry of the product and collect the terms of the bivectors which are not essentially different (that is, those that may differ only in the order of their factors, and hence differ only by a sign). The product x∧y can then be written:
x∧y = (a1 b2 − a2 b1) e1∧e2 + (a2 b3 − a3 b2) e2∧e3 + (a3 b1 − a1 b3) e3∧e1
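This bivector expansion is easy to check numerically. The following sketch is purely illustrative — it represents a bivector by its antisymmetric matrix of coefficients, and is not the book's GrassmannAlgebra package:

```python
import numpy as np

def wedge(x, y):
    """Exterior product of two 1-elements; entry [i, j] is the coefficient of ei^ej."""
    return np.outer(x, y) - np.outer(y, x)

x = np.array([1.0, 2.0, 3.0])   # a1, a2, a3
y = np.array([4.0, 5.0, 6.0])   # b1, b2, b3

B = wedge(x, y)
# The essentially different components e1^e2, e2^e3, e3^e1:
print(B[0, 1], B[1, 2], B[2, 0])     # a1 b2 - a2 b1, a2 b3 - a3 b2, a3 b1 - a1 b3
print(np.allclose(B, -wedge(y, x)))  # anti-symmetry: x^y = -(y^x)
# The same three scalars appear as the components of the vector cross product:
print(np.cross(x, y))                # components B[1,2], B[2,0], B[0,1]
```

Note that `wedge` itself uses no metric: the cross product comparison at the end needs one, but the bivector coefficients do not.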
The scalar factors appearing here are just those which would have appeared in the usual vector cross product of x and y. However, there is an important difference. The exterior product expression does not require the vector space to have a metric, while the cross product, because it generates a vector orthogonal to x and y, necessarily assumes a metric. Furthermore, the exterior product is valid for any number of vectors in spaces of arbitrary dimension, while the cross product is necessarily confined to products of two vectors in a space of three dimensions. For example, we may continue the product by multiplying x∧y by a third vector z.
$z = c_1 e_1 + c_2 e_2 + c_3 e_3$
Adopting the same simplification procedures as before, we obtain the trivector $x \wedge y \wedge z$ expressed in basis form.
$x \wedge y \wedge z = (a_1 b_2 c_3 - a_3 b_2 c_1 + a_2 b_3 c_1 + a_3 b_1 c_2 - a_1 b_3 c_2 - a_2 b_1 c_3)\, e_1 \wedge e_2 \wedge e_3$
A trivector in a space of three dimensions has just one component. Its coefficient is the determinant of the coefficients of the original three vectors. Clearly, if these three vectors had been linearly dependent, this determinant would have been zero. In a metric space, this coefficient would be proportional to the volume of the parallelepiped formed by the vectors x, y, and z. Hence the geometric interpretation of the algebraic result: if x, y, and z lie in a planar direction, that is, they are dependent, then the volume of the parallelepiped they define is zero.

We see here also that the exterior product begins to give geometric meaning to the often inscrutable operations of the algebra of determinants. In fact we shall see that all the operations of determinants are straightforward consequences of the properties of the exterior product. In three-dimensional metric vector algebra, the vanishing of the scalar triple product of three vectors is often used as a criterion of their linear dependence, whereas in fact the vanishing of their exterior product (valid also in a non-metric space) would suffice. It is interesting to note that the notation for the scalar triple product, or 'box' product, is Grassmann's original notation, viz [x y z].

Finally, we can see that the exterior product of more than three vectors in a three-dimensional space will always be zero, since they must be dependent.
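The identification of the single trivector component with the determinant of the coefficient array is easy to check numerically. The sketch below (plain NumPy; the function names are our own) expands x∧y∧z as the signed permutation sum — the surviving terms after discarding repeated basis factors — and compares it with the 3×3 determinant.

```python
import numpy as np
from itertools import permutations

def sign(p):
    # parity of a permutation, via its inversion count
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def trivector_coefficient(x, y, z):
    """Coefficient of e1^e2^e3 in x^y^z: expand the exterior product,
    discard repeated-factor terms, and collect signs (the Leibniz sum)."""
    vecs = (x, y, z)
    return sum(sign(p) * vecs[0][p[0]] * vecs[1][p[1]] * vecs[2][p[2]]
               for p in permutations(range(3)))

x = np.array([1., 2., 0.])
y = np.array([0., 1., 3.])
z = np.array([2., 1., 1.])
assert np.isclose(trivector_coefficient(x, y, z),
                  np.linalg.det(np.array([x, y, z])))
# dependent vectors give a zero trivector (zero "volume")
assert np.isclose(trivector_coefficient(x, y, x + 2 * y), 0.0)
```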
For simplicity, however, we do not generally denote 1-elements with an underscripted '1'. The grade of a scalar is 0. We shall see that this is a natural consequence of the exterior product axioms formulated for elements of general grade. The dimension of the underlying linear space of 1-elements is denoted by n. Elements of grade greater than n are zero. The complementary grade of an m-element in an n-space is n−m.
In fact, interchanging the order of any two non-adjacent 1-element factors will also change the sign of the product.
$x \wedge \cdots \wedge y = -\,(y \wedge \cdots \wedge x)$
To see why this is so, suppose the number of factors between x and y is m. First move y to the immediate left of x. This will cause m+1 changes of sign. Then move x to the position that y vacated. This will cause m changes of sign. In all there will be 2m+1 changes of sign, equivalent to just one sign change. Note that it is only elements of odd grade that anti-commute. If, in a product of two elements, at least one of them is of even grade, then the elements commute. For example, 2-elements commute with all other elements.
$(x \wedge y) \wedge z = z \wedge (x \wedge y)$
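The counting argument above — interchanging two factors separated by m others costs 2m+1 adjacent interchanges, hence always one net sign change — can be checked directly. A small sketch (function names our own) performs the interchange by adjacent swaps, counts them, and compares the resulting permutation parity:

```python
def parity(p):
    # sign of a permutation, via its inversion count
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def swap_outer_by_adjacent(p):
    """Interchange the first and last entries using only adjacent swaps:
    move the last entry to the front (m+1 swaps), then move the displaced
    first entry to the back (m swaps). Returns the result and the count."""
    p = list(p)
    count = 0
    for i in range(len(p) - 1, 0, -1):       # m+1 adjacent swaps
        p[i - 1], p[i] = p[i], p[i - 1]
        count += 1
    for i in range(1, len(p) - 1):           # m more adjacent swaps
        p[i], p[i + 1] = p[i + 1], p[i]
        count += 1
    return p, count

for m in range(5):
    p = list(range(m + 2))                   # factors x, u1..um, y
    q, swaps = swap_outer_by_adjacent(p)
    assert q == [p[-1]] + p[1:-1] + [p[0]]   # outer factors interchanged
    assert swaps == 2 * m + 1                # an odd number of sign changes
    assert parity(q) == -parity(p)           # so the sign always flips
```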
The exterior product of an m-element and a k-element is an (m+k)-element. The exterior product is associative.
$(\alpha_m \wedge \beta_k) \wedge \gamma_r = \alpha_m \wedge (\beta_k \wedge \gamma_r) \qquad (1.3)$
$\alpha_m = 1 \wedge \alpha_m = \alpha_m \wedge 1 \qquad (1.4)$
$(a\,\alpha_m) \wedge \beta_k = a\,(\alpha_m \wedge \beta_k) = \alpha_m \wedge (a\,\beta_k) \qquad (1.5)$
An exterior product is anti-commutative whenever the grades of the factors are both odd.
$\alpha_m \wedge \beta_k = (-1)^{m k}\, \beta_k \wedge \alpha_m \qquad (1.6)$
The exterior product is both left and right distributive under addition.
$(\alpha_m + \beta_m) \wedge \gamma_r = \alpha_m \wedge \gamma_r + \beta_m \wedge \gamma_r \qquad (1.7)$

$\alpha_m \wedge (\beta_r + \gamma_r) = \alpha_m \wedge \beta_r + \alpha_m \wedge \gamma_r$
We will also need the notion of congruence. Two elements are congruent if one is a scalar factor times the other. For example, x and 2x are congruent; $x \wedge y$ and $-x \wedge y$ are congruent. Congruent elements define the same subspace. We denote congruence by the symbol ≅. For two elements, congruence is equivalent to linear dependence. For more than two elements, however, although the elements may be linearly dependent, they are not necessarily congruent. The following concepts of union and intersection only make sense up to congruence.
A union of elements is an element defining the subspace they together span. The dual concept to union of elements is intersection of elements. An intersection of elements is an element defining the subspace they span in common. Suppose we have three independent 1-elements: x, y, and z. A union of x y and y z is any element congruent to x y z. An intersection of x y and y z is any element congruent to y. The regressive product enables us to calculate intersections of elements. Thus it is the product operation correctly dual to the exterior product. In fact, in Chapter 3: The Regressive Product, we will develop the axiom set for the regressive product from that of the exterior product, and also show that we can reverse the procedure.
The regressive product of an m-element and a k-element in an n-space is an (m+k−n)-element. The regressive product is associative.
$(\alpha_m \vee \beta_k) \vee \gamma_r = \alpha_m \vee (\beta_k \vee \gamma_r) \qquad (1.8)$
$\alpha_m = 1_n \vee \alpha_m = \alpha_m \vee 1_n \qquad (1.9)$

(Here $1_n$ denotes the unit n-element.)
$(a\,\alpha_m) \vee \beta_k = a\,(\alpha_m \vee \beta_k) = \alpha_m \vee (a\,\beta_k) \qquad (1.10)$
A regressive product is anti-commutative whenever the complementary grades of the factors are both odd.
$\alpha_m \vee \beta_k = (-1)^{(n-m)(n-k)}\, \beta_k \vee \alpha_m \qquad (1.11)$
The regressive product is both left and right distributive under addition.
$(\alpha_m + \beta_m) \vee \gamma_r = \alpha_m \vee \gamma_r + \beta_m \vee \gamma_r \qquad (1.12)$

$\alpha_m \vee (\beta_r + \gamma_r) = \alpha_m \vee \beta_r + \alpha_m \vee \gamma_r$
Since the space is 3-dimensional, we can write any 3-element such as $x \wedge y \wedge z$ as a scalar factor (a, say) times the unit 3-element $1_3$ (introduced in axiom 1.9).
$(x \wedge y \wedge z) \vee z = (a\,1_3) \vee z = a\,z$
This then gives us the axiomatic structure to say that the regressive product of two elements possessing an element in common is congruent to that element. Another way of saying this is: the regressive product of two elements defines their intersection.
$(x \wedge z) \vee (y \wedge z) \cong z$
Of course this is just a simple case. More generally, let $\alpha_m$, $\beta_k$, and $\gamma_p$ be simple elements with m+k+p = n, where n is the dimension of the space. Then the Common Factor Axiom states that

$(\alpha_m \wedge \gamma_p) \vee (\beta_k \wedge \gamma_p) = (\alpha_m \wedge \beta_k \wedge \gamma_p) \vee \gamma_p, \qquad m+k+p = n \qquad (1.13)$
There are many rearrangements and special cases of this formula which we will encounter in later chapters. For example, when p is zero, the Common Factor Axiom shows that the regressive product of an m-element with an (n−m)-element is a scalar which can be expressed in the alternative form of a regressive product with the unit 1.
$\alpha_m \vee \beta_{n-m} = (\alpha_m \wedge \beta_{n-m}) \vee 1$
The Common Factor Axiom allows us to prove a particularly useful result: the Common Factor Theorem. The Common Factor Theorem expresses any regressive product in terms of exterior products alone. This of course enables us to calculate intersections of arbitrary elements.

Most importantly, however, we will see later that the Common Factor Theorem has a counterpart expressed in terms of exterior and interior products, called the Interior Common Factor Theorem. This forms the principal expansion theorem for interior products, from which we can derive all the important theorems relating exterior and interior products. The Interior Common Factor Theorem, and the Common Factor Theorem upon which it is based, are possibly the most important theorems in the Grassmann algebra.
In the next section we informally apply the Common Factor Theorem to obtain the intersection of two bivectors in a three-dimensional space.
We will see in Chapter 3: The Regressive Product that this can be expanded by the Common Factor Theorem to give
$(x \wedge y) \vee (u \wedge v) = (x \wedge y \wedge v)\, u - (x \wedge y \wedge u)\, v \qquad (1.14)$
But we have already seen in Section 1.2 that in a 3-space, the exterior product of three vectors will, in any given basis, give the basis trivector, multiplied by the determinant of the components of the vectors making up the trivector. Additionally, we note that the regressive product (intersection) of a vector with an element like the basis trivector completely containing the vector, will just give an element congruent to itself. Thus the regressive product leads us to an explicit expression for an intersection of the two bivectors.
$z \cong \mathrm{Det}[x, y, v]\, u - \mathrm{Det}[x, y, u]\, v$
Here Det[x,y,v] is the determinant of the components of x, y, and v. We could also have obtained an equivalent formula expressing z in terms of x and y instead of u and v by simply interchanging the order of the bivector factors in the original regressive product. Note carefully however, that this formula only finds the common factor up to congruence, because until we determine an explicit expression for the unit n-element 1 in terms of basis elements
n
(which we do by introducing the complement operation in Chapter 5), we cannot usefully use axiom 1.9 above. Nevertheless, this is not to be seen as a restriction. Rather, as we shall see in the next section it leads to interesting insights as to what can be accomplished when we work in spaces without a metric (projective spaces).
$P = \mathcal{O} + x \qquad (1.15)$

The vector x is called the position vector of the point P.
A scalar multiple of a point is called a weighted point. For example, if m is a scalar, m P is a weighted point with weight m. The sum of two points gives the point halfway between them with a weight of 2.
[Figure: the sum of two points with position vectors $x_1$ and $x_2$ is the weighted point $2 P_c$, where $P_c = \mathcal{O} + \tfrac{x_1 + x_2}{2}$.] The sum of two points is a point of weight 2 located mid-way between them.
Historical Note
The point was originally considered the fundamental geometric entity of interest. However the difference of points was clearly no longer a point, since reference to the origin had been lost. Sir William Rowan Hamilton coined the term 'vector' for this new entity since adding a vector to a point 'carried' the point to a new point.
Determining a mass-centre
A classic application of a sum of weighted points is in the determination of a centre of mass. Consider a collection of points Pi weighted with masses mi . The sum of the weighted points gives the point PG at the mass-centre (centre of gravity) weighted with the total mass M. To show this, first add the weighted points and collect the terms involving the origin.
$M\,P_G = m_1(\mathcal{O} + x_1) + m_2(\mathcal{O} + x_2) + m_3(\mathcal{O} + x_3) + \cdots = (m_1 + m_2 + m_3 + \cdots)\,\mathcal{O} + (m_1 x_1 + m_2 x_2 + m_3 x_3 + \cdots)$
To fix ideas, we take a simple example demonstrating that centres of mass can be accumulated in any order. Suppose we have three points P, Q, and R with masses p, q, and r. The centres of mass taken two at a time are given by
$(p+q)\,G_{PQ} = p\,P + q\,Q \qquad (q+r)\,G_{QR} = q\,Q + r\,R \qquad (p+r)\,G_{PR} = p\,P + r\,R$
Now take the centre of mass of each of these with the other weighted point. Clearly, the three sums will be equal.
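Weighted points are easy to model as (weight, weighted position vector) pairs; addition is then componentwise, and the mass-centre is recovered by dividing through by the total weight. A minimal sketch (function names and masses our own) confirms that the accumulation order is immaterial:

```python
import numpy as np

# A weighted point m*P is modelled as the pair (m, m*x): the weight,
# plus the weighted position vector relative to the origin.
def point(m, x):
    return (m, m * np.asarray(x, dtype=float))

def add(p, q):
    # addition of weighted points is componentwise
    return (p[0] + q[0], p[1] + q[1])

def position(p):
    # recover the unweighted point: divide through by the total weight
    return p[1] / p[0]

P = point(2.0, [0, 0])
Q = point(3.0, [4, 0])
R = point(5.0, [0, 4])

g1 = add(add(P, Q), R)   # accumulate P and Q first
g2 = add(P, add(R, Q))   # accumulate Q and R first
assert g1[0] == 10.0                          # total mass
assert np.allclose(position(g1), position(g2))
assert np.allclose(position(g1), [1.2, 2.0])  # (3*4/10, 5*4/10)
```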
[Figure: the pairwise centres of mass $G_{PQ}$, $G_{QR}$, $G_{PR}$, and the common overall centre $G_{PQR}$.] Centres of mass can be accumulated in any order.
[Figure: a line L represented as $L = P \wedge x$ (the exterior product of a point P and a vector x), and as $L = P \wedge Q$ (the exterior product of two points).]
A line is independent of the specific points used to define it. To see this, consider any other point R on the line. Since R is on the line it can be represented by the sum of P with an arbitrary scalar multiple of the vector x:
$L = R \wedge x = (P + a\,x) \wedge x = P \wedge x$
A line may also be represented by the exterior product of any two points on it.
$L = P \wedge R = P \wedge (P + a\,x) = a\,P \wedge x$
Note that the bound vectors $P \wedge x$ and $P \wedge R$ are different, but congruent. They therefore define the same line.
$L \cong P \wedge x \cong R \wedge x \cong P \wedge R \qquad (1.16)$
[Figure: two congruent bound vectors defining the same line.]
These concepts extend naturally to higher-dimensional constructs. For example, a plane $\Pi$ may be represented by the exterior product of a single point on it together with a bivector in the direction of the plane, any two points on it together with a vector in it (not parallel to the line joining the points), or any three points on it (not in the same line).
$\Pi \cong P \wedge x \wedge y \cong P \wedge Q \wedge y \cong P \wedge Q \wedge R \qquad (1.17)$
[Figure: the three representations $P \wedge x \wedge y$, $P \wedge Q \wedge y$, and $P \wedge Q \wedge R$ of a plane.]
To build higher dimensional geometric entities from lower dimensional ones, we simply take their exterior product. For example we can build a line by taking the exterior product of a point with any point or vector exterior to it. Or we can build a plane by taking the exterior product of a line with any point or vector exterior to it.
[Figure: two lines $L_1$ and $L_2$, through the points $P_1$ and $P_2$ with directions $x_1$ and $x_2$, intersecting in a point P.]
The point of intersection of L1 and L2 is the point P given by (congruent to) the regressive product of the lines L1 and L2 .
$P \cong L_1 \vee L_2 = (\mathcal{O} \wedge x_1 + \nu_1 \wedge x_1) \vee (\mathcal{O} \wedge x_2 + \nu_2 \wedge x_2)$

Here $\nu_i$ is the position vector of the point $P_i$, so that $L_i = P_i \wedge x_i = (\mathcal{O} + \nu_i) \wedge x_i = \mathcal{O} \wedge x_i + \nu_i \wedge x_i$.
The Common Factor Theorem for the regressive product of elements of the form $(x \wedge y) \vee (u \wedge v)$ in a linear space of three dimensions was introduced as formula 1.14 in Section 1.3 as:
$(x \wedge y) \vee (u \wedge v) = (x \wedge y \wedge v)\, u - (x \wedge y \wedge u)\, v$
Since a bound 2-space is three dimensional (its basis contains three elements - the origin and two vectors), we can use this formula to expand each of the terms in P:
$(\mathcal{O} \wedge x_1) \vee (\mathcal{O} \wedge x_2) = (\mathcal{O} \wedge x_1 \wedge x_2)\,\mathcal{O} - (\mathcal{O} \wedge x_1 \wedge \mathcal{O})\,x_2$

$(\nu_1 \wedge x_1) \vee (\mathcal{O} \wedge x_2) = (\nu_1 \wedge x_1 \wedge x_2)\,\mathcal{O} - (\nu_1 \wedge x_1 \wedge \mathcal{O})\,x_2$

$(\mathcal{O} \wedge x_1) \vee (\nu_2 \wedge x_2) = -(\nu_2 \wedge x_2) \vee (\mathcal{O} \wedge x_1) = -(\nu_2 \wedge x_2 \wedge x_1)\,\mathcal{O} + (\nu_2 \wedge x_2 \wedge \mathcal{O})\,x_1$

$(\nu_1 \wedge x_1) \vee (\nu_2 \wedge x_2) = (\nu_1 \wedge x_1 \wedge x_2)\,\nu_2 - (\nu_1 \wedge x_1 \wedge \nu_2)\,x_2$
The term $\mathcal{O} \wedge x_1 \wedge \mathcal{O}$ is zero because of the exterior product of repeated factors. The four terms involving the exterior product of three vectors, for example $\nu_1 \wedge x_1 \wedge x_2$, are also zero, since any three vectors in a two-dimensional vector space must be dependent. (The vector space is 2-dimensional since it is the vector subspace of a bound 2-space.) Hence we can express the point of intersection as congruent to the weighted point P:
$P \cong (\mathcal{O} \wedge x_1 \wedge x_2)\,\mathcal{O} + (\mathcal{O} \wedge \nu_2 \wedge x_2)\,x_1 - (\mathcal{O} \wedge \nu_1 \wedge x_1)\,x_2$
If we express the vectors in terms of a basis, e1 and e2 say, we can reduce this formula (after some manipulation) to:
$P \cong \mathcal{O} + \frac{\mathrm{Det}[\nu_2, x_2]}{\mathrm{Det}[x_1, x_2]}\, x_1 - \frac{\mathrm{Det}[\nu_1, x_1]}{\mathrm{Det}[x_1, x_2]}\, x_2$
Here, the determinants are the determinants of the coefficients of the vectors in the given basis. To verify that P does indeed lie on both the lines $L_1$ and $L_2$, we only need to carry out the straightforward verification that the products $P \wedge L_1$ and $P \wedge L_2$ are both zero. Although in this simple case this approach is certainly more complex than the standard algebraic approach in the plane, its interest lies in the facts that it generalizes immediately to intersections of any geometric objects in spaces of any number of dimensions, and that it leads to easily computable solutions.
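The determinant formula for the intersection point can be checked against two concrete lines in the plane. In this sketch (NumPy; names our own), $\nu_i$ is the position vector of a point on line i and $x_i$ its direction:

```python
import numpy as np

def det2(a, b):
    # determinant of the coefficients of two vectors in the plane
    return a[0] * b[1] - a[1] * b[0]

def intersect(nu1, x1, nu2, x2):
    """Intersection point (position vector) of the line through nu1 with
    direction x1 and the line through nu2 with direction x2, using
    P = O + (Det[nu2,x2] x1 - Det[nu1,x1] x2) / Det[x1,x2]."""
    d = det2(x1, x2)
    return (det2(nu2, x2) * np.asarray(x1, float)
            - det2(nu1, x1) * np.asarray(x2, float)) / d

nu1, x1 = np.array([0., 1.]), np.array([1., 1.])    # the line y = x + 1
nu2, x2 = np.array([0., 3.]), np.array([1., -1.])   # the line y = 3 - x
p = intersect(nu1, x1, nu2, x2)

assert np.allclose(p, [1., 2.])              # x + 1 = 3 - x at x = 1
assert abs(det2(p - nu1, x1)) < 1e-12        # p lies on the first line
assert abs(det2(p - nu2, x2)) < 1e-12        # p lies on the second line
```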
In a 3-space, for example, the linear space of 2-elements has three elements in its basis: $e_1 \wedge e_2$, $e_1 \wedge e_3$, $e_2 \wedge e_3$. The anti-symmetric nature of the exterior product means that there are just as many basis elements in the linear space of (n−m)-elements as there are in the linear space of m-elements. Because these linear spaces have the same dimension, we can set up a correspondence between m-elements and (n−m)-elements. That is, given any m-element, we can define its corresponding (n−m)-element. The (n−m)-element is called the complement of the m-element. Normally this correspondence is set up between basis elements and extended to all other elements by linearity.
The Euclidean complement is the simplest type of complement and defines a Euclidean metric, that is, one in which the basis elements are mutually orthonormal. This was the only type of complement considered by Grassmann. In Chapter 5: The Complement, we will show however that Grassmann's concept of complement is easily extended to more general metrics. Note carefully that we will be using the notion of complement to define the notions of orthogonality and metric, and until we do this, we will not be relying on their existence in Chapters 2, 3, and 4. With the definitions above, we can now proceed to define the Euclidean complement of a general 1-element $x = a\,e_1 + b\,e_2 + c\,e_3$. To do this we need to endow the complement operation with the property of linearity, so that it has meaning for linear combinations of basis elements.
$\overline{x} = \overline{a\,e_1 + b\,e_2 + c\,e_3} = a\,\overline{e_1} + b\,\overline{e_2} + c\,\overline{e_3} = a\,e_2 \wedge e_3 + b\,e_3 \wedge e_1 + c\,e_1 \wedge e_2$
In Section 1.6 we will see that the complement of an element is orthogonal to the element, because we will define the interior product (and hence inner and scalar products) using the complement. We can start to see how the scalar product of a 1-element with itself might arise by expanding the product $x \wedge \overline{x}$:
$x \wedge \overline{x} = (a\,e_1 + b\,e_2 + c\,e_3) \wedge (a\,e_2 \wedge e_3 + b\,e_3 \wedge e_1 + c\,e_1 \wedge e_2) = (a^2 + b^2 + c^2)\, e_1 \wedge e_2 \wedge e_3$
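This identity can be verified numerically. In a Euclidean 3-space the complement bivector of x may be represented by any spanning pair u, v with u × v = x, so the coefficient of $x \wedge \overline{x}$ is the determinant Det[x, u, v] = x·(u × v) = x·x. A sketch (NumPy; the particular choice of u is our own):

```python
import numpy as np

x = np.array([2., -1., 3.])

# Represent the complement bivector of x by a spanning pair u, v with
# u x v = x (any u orthogonal to x will do).
u = np.cross(x, [1., 0., 0.])
v = np.cross(x, u) / np.dot(u, u)
assert np.allclose(np.cross(u, v), x)

# x ^ xbar = x ^ u ^ v; its e1^e2^e3 coefficient is Det[x, u, v]
coeff = np.linalg.det(np.array([x, u, v]))
assert np.isclose(coeff, np.dot(x, x))   # = a^2 + b^2 + c^2 = 14
```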
The complement of the basis 2-elements can be defined in a manner analogous to that for 1-elements, that is, such that the exterior product of a basis 2-element with its complement is equal to the basis 3-element. The complement of a 2-element in 3-space is therefore a 1-element.
$\overline{e_2 \wedge e_3} = e_1 \qquad (e_2 \wedge e_3) \wedge \overline{e_2 \wedge e_3} = e_1 \wedge e_2 \wedge e_3$

$\overline{e_3 \wedge e_1} = e_2 \qquad (e_3 \wedge e_1) \wedge \overline{e_3 \wedge e_1} = e_1 \wedge e_2 \wedge e_3$

$\overline{e_1 \wedge e_2} = e_3 \qquad (e_1 \wedge e_2) \wedge \overline{e_1 \wedge e_2} = e_1 \wedge e_2 \wedge e_3$
Summarizing these results for the Euclidean complement of the basis elements of a Grassmann algebra in three dimensions shows the essential symmetry of the complement operation.
Complement Palette

Basis: 1, $e_1$, $e_2$, $e_3$, $e_1 \wedge e_2$, $e_1 \wedge e_3$, $e_2 \wedge e_3$, $e_1 \wedge e_2 \wedge e_3$
Complement: $e_1 \wedge e_2 \wedge e_3$, $e_2 \wedge e_3$, $-(e_1 \wedge e_3)$, $e_1 \wedge e_2$, $e_3$, $-e_2$, $e_1$, 1
More generally, as we shall see in Chapter 5: The Complement, we can show that the complement of the complement of any element is the element itself, apart from a possible sign.
$\overline{\overline{\alpha_m}} = (-1)^{m(n-m)}\, \alpha_m \qquad (1.18)$
This result is independent of the correspondence that we set up between the m-elements and (n−m)-elements of the space, except that the correspondence must be symmetric. This is equivalent to the requirement that the metric tensor (and inner product) be symmetric. Whereas in a 3-space the complement of the complement of a 1-element is the element itself, in a 2-space it turns out to be the negative of the element.
Complement Palette

Basis: 1, $e_1$, $e_2$, $e_1 \wedge e_2$
Complement: $e_1 \wedge e_2$, $e_2$, $-e_1$, 1
Although this sign dependence on the dimension of the space and grade of the element might appear arbitrary, it turns out to capture some essential properties of the elements in their spaces to which we have become accustomed. For example, in 2-space the complement $\overline{x}$ of a vector x rotates it anticlockwise by π/2. Taking the complement $\overline{\overline{x}}$ of the complement rotates it anticlockwise by a further π/2, into −x.

$x = a\,e_1 + b\,e_2$
$\overline{x} = \overline{a\,e_1 + b\,e_2} = a\,e_2 - b\,e_1 \qquad\qquad \overline{\overline{x}} = \overline{a\,e_2 - b\,e_1} = -a\,e_1 - b\,e_2$
[Figure: the vector x, its complement $\overline{x}$, and the complement of the complement, $\overline{\overline{x}} = -x$.]
$\overline{\alpha_m \wedge \beta_k} = \overline{\alpha_m} \vee \overline{\beta_k} \qquad (1.19)$
However, although this may be derived in this simple case, to develop the Grassmann algebra for general metrics we will assume this relationship holds independent of the metric. It thus takes on the mantle of an axiom. This axiom, which we call the Complement Axiom, is the quintessential formula of the Grassmann algebra. It expresses the duality of its two fundamental operations on elements and their complements. We note the formal similarity to de Morgan's law in Boolean algebra. We will also see that adopting this formula for general complements will enable us to compute the complement of any element of a space once we have defined the complements of its basis 1-elements. As an example, consider two vectors x and y in 3-space, and their exterior product $x \wedge y$. The Complement Axiom becomes
$\overline{x \wedge y} = \overline{x} \vee \overline{y}$
The complement $\overline{x \wedge y}$ of the bivector $x \wedge y$ is a vector. The complements $\overline{x}$ and $\overline{y}$ are bivectors. The regressive product of these two bivectors is a vector. The following graphic depicts this relationship, and the orthogonality of the elements. We discuss orthogonality in the next section.
The interior product of an element $\alpha_m$ with an element $\beta_k$ is defined as the regressive product of $\alpha_m$ with the complement of $\beta_k$:

$\alpha_m \lrcorner \beta_k \equiv \alpha_m \vee \overline{\beta_k} \qquad (1.20)$
The grade of an interior product $\alpha_m \lrcorner \beta_k$ may be seen from the definition to be m + (n−k) − n = m − k.
Note that while the grade of a regressive product depends on the dimension of the underlying linear space, the grade of an interior product is independent of the dimension of the underlying space. This independence underpins the important role the interior product plays in the Grassmann algebra - the exterior product sums grades while the interior product differences them. However, grades may be arbitrarily summed, but not arbitrarily differenced, since there are no elements of negative grade. Thus the order of factors in an interior product is important. When the grade of the first element is less than that of the second element, the result is necessarily zero.
Since the grade of an interior product is the difference of the grades of its factors, an inner product is always of grade zero, hence scalar. A special case of the interior or inner product in which the two factors of the product are of grade 1 is called a scalar product to conform to the common usage. In Chapter 6 we will show that the inner product is symmetric, that is, the order of the factors is immaterial.
$\alpha_m \lrcorner \beta_m = \beta_m \lrcorner \alpha_m \qquad (1.21)$
When the inner product is between simple elements, it can be expressed as the determinant of the array of mutual scalar products:

$(\alpha_1 \wedge \alpha_2 \wedge \cdots \wedge \alpha_m) \lrcorner (\beta_1 \wedge \beta_2 \wedge \cdots \wedge \beta_m) = \mathrm{Det}\left[\alpha_i \lrcorner \beta_j\right] \qquad (1.22)$

For two bivectors this reduces to:

$(x \wedge y) \lrcorner (u \wedge v) = (x \lrcorner u)(y \lrcorner v) - (x \lrcorner v)(y \lrcorner u) \qquad (1.23)$
An interior product with a simple element may also be calculated sequentially, factor by factor:

$\alpha_m \lrcorner (\beta_1 \wedge \beta_2 \wedge \cdots \wedge \beta_k) = (\alpha_m \lrcorner \beta_1) \lrcorner (\beta_2 \wedge \cdots \wedge \beta_k) = \left(\cdots\left((\alpha_m \lrcorner \beta_1) \lrcorner \beta_2\right) \cdots\right) \lrcorner \beta_k \qquad (1.24)$
For example, the inner product of two bivectors can be rewritten to display the interior product of the first bivector with either of the vectors
$(x \wedge y) \lrcorner (u \wedge v) = \left((x \wedge y) \lrcorner u\right) \lrcorner v = -\left((x \wedge y) \lrcorner v\right) \lrcorner u$
It is the straightforward and consistent derivation of formulae like 1.24 from definition 1.20 using only the fundamental exterior, regressive and complement operations, that shows how powerful Grassmann's approach is. The alternative approach of simply introducing an inner product onto a space cannot bring such power to bear.
Orthogonality
As is well known, two 1-elements are said to be orthogonal if their scalar product is zero. More generally, a 1-element x is orthogonal to a simple element $\alpha_m$ if and only if their interior product $\alpha_m \lrcorner x$ is zero.
Hence it becomes immediately clear that if $\alpha_m \lrcorner x_1$ is zero, then so is the whole product:

$\alpha_m \lrcorner (x_1 \wedge x_2 \wedge \cdots \wedge x_k) = 0 \qquad (1.25)$
Under a geometric interpretation of the space in which vectors are interpreted as representing displacements, the concept of measure corresponds to the concept of magnitude. The magnitude of a vector is its length, the magnitude of a bivector is the area of the parallelogram formed by its two vectors, and the magnitude of a trivector is the volume of the parallelepiped formed by its three vectors. The magnitude of a scalar is the scalar itself. The magnitude of a vector x is, as expected, given by the standard formula.
$\lvert x \rvert = \sqrt{x \lrcorner x} \qquad (1.26)$
Similarly, the square of the magnitude of a bivector $x \wedge y$ is given by its inner product with itself:

$(x \wedge y) \lrcorner (x \wedge y) = \mathrm{Det}\!\begin{bmatrix} x \lrcorner x & x \lrcorner y \\ x \lrcorner y & y \lrcorner y \end{bmatrix} \qquad (1.27)$
Of course, a bivector may be expressed in an infinity of ways as the exterior product of two vectors, for example
$B = x \wedge y = x \wedge (y + a\,x)$
From this, the square of its area may be written in either of two ways
$B \lrcorner B = (x \wedge y) \lrcorner (x \wedge y) \qquad\qquad B \lrcorner B = \left(x \wedge (y + a\,x)\right) \lrcorner \left(x \wedge (y + a\,x)\right)$
However, multiplying out these expressions using formula 1.23 shows that terms cancel in the second expression, thus reducing them both to the same expression.
$B \lrcorner B = (x \lrcorner x)(y \lrcorner y) - (x \lrcorner y)^2$
Thus the measure of a bivector is independent of the actual vectors used to express it. Geometrically interpreted, this means that the area of the corresponding parallelogram (with sides corresponding to the displacements represented by the vectors) is independent of its shape. These results extend straightforwardly to simple elements of any grade.

The measure of an element is equal to the measure of its complement. By the definition of the interior product and formulae 1.18 and 1.19, we have
$\overline{\alpha_m} \lrcorner \overline{\alpha_m} = \overline{\alpha_m} \vee \overline{\overline{\alpha_m}} = (-1)^{m(n-m)}\, \overline{\alpha_m} \vee \alpha_m = \alpha_m \vee \overline{\alpha_m} = \alpha_m \lrcorner \alpha_m$

$\overline{\alpha_m} \lrcorner \overline{\alpha_m} = \alpha_m \lrcorner \alpha_m \qquad (1.28)$
A unit element $\hat{\alpha}_m$ can be defined as the ratio of the element to its measure.
$\hat{\alpha}_m = \frac{\alpha_m}{\sqrt{\alpha_m \lrcorner \alpha_m}} \qquad (1.29)$
If a basis 2-element contains a given basis 1-element, then their interior product is not zero:
$(e_1 \wedge e_2) \lrcorner e_1 = (e_1 \wedge e_2) \vee \overline{e_1} = (e_1 \wedge e_2) \vee (e_2 \wedge e_3) = (e_1 \wedge e_2 \wedge e_3) \vee e_2 = 1_3 \vee e_2 = e_2$
If a basis 2-element does not contain a given basis 1-element, then their interior product is zero:
$(e_1 \wedge e_2) \lrcorner e_3 = (e_1 \wedge e_2) \vee \overline{e_3} = (e_1 \wedge e_2) \vee (e_1 \wedge e_2) = 0$
The Interior Common Factor Theorem expresses the interior product $\alpha_m \lrcorner \beta_k$ of a simple m-element with a simple k-element (k ≤ m) as a sum over the $\nu = \binom{m}{k}$ essentially different ways of factorizing $\alpha_m$ into a k-element $a_i$ and an (m−k)-element $a_i'$:

$\alpha_m \lrcorner \beta_k = \sum_{i=1}^{\nu} \left(a_i \lrcorner \beta_k\right)\, a_i' \qquad (1.30)$

where

$\alpha_m = a_1 \wedge a_1' = a_2 \wedge a_2' = \cdots = a_{\nu} \wedge a_{\nu}'$
For example, the Interior Common Factor Theorem may be used to prove a relationship involving the interior product of a 1-element x with the exterior product of two factors, each of which may not be simple. This relationship and the special cases that derive from it find application throughout the explorations in the rest of this book.
$(\alpha_m \wedge \beta_k) \lrcorner x = \left(\alpha_m \lrcorner x\right) \wedge \beta_k + (-1)^m\, \alpha_m \wedge \left(\beta_k \lrcorner x\right) \qquad (1.31)$
The resulting vector $B \lrcorner x$ is also orthogonal to x. We can show this by taking its scalar product with x, or by using formula 1.24.
$(B \lrcorner x) \lrcorner x = B \lrcorner (x \wedge x) = 0$

If $\hat{B}$ is the unit bivector of B, the projection $x_{\parallel}$ of x onto B is given by:

$x_{\parallel} = -\,\hat{B} \lrcorner \left(\hat{B} \lrcorner x\right)$
These concepts may easily be extended to geometric entities of higher grade. We will explore this further in Chapter 6.
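In a metric 3-space the interior product of the unit bivector $\hat{B}$ with a vector acts as the cross product with the unit normal n of $\hat{B}$ (under the sign conventions used above, $\hat{B} \lrcorner x$ corresponds to n × x), so the projection formula can be checked with ordinary vector algebra. A sketch (NumPy; names our own):

```python
import numpy as np

x = np.array([1., 2., 3.])
n = np.array([0., 0., 1.])   # unit normal of the unit bivector B = e1^e2

def B_interior(v):
    # for the unit bivector with unit normal n, B interior v acts as n x v
    return np.cross(n, v)

# B interior x is orthogonal to x
assert np.isclose(np.dot(B_interior(x), x), 0.0)

# projection of x onto the plane of B: x_par = -B interior (B interior x)
x_par = -B_interior(B_interior(x))
assert np.allclose(x_par, x - np.dot(x, n) * n)   # i.e. (1, 2, 0)
```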
The cross product of $\alpha_m$ and $\beta_k$ may be defined as the complement of their exterior product:

$\alpha_m \times \beta_k \equiv \overline{\alpha_m \wedge \beta_k} \qquad (1.32)$
This definition preserves the basic property of the cross product: that the cross product of two elements is an element orthogonal to both, and reduces to the usual notion for vectors in a three dimensional metric vector space. For 1-elements xi the definition has the following consequences, independent of the dimension of the space. The triple cross product is a 1-element in any number of dimensions.
$(x_1 \times x_2) \times x_3 = (x_1 \wedge x_2) \lrcorner x_3 \qquad (1.33)$
The scalar product of two cross products is a scalar in any number of dimensions.
$(x_1 \times x_2) \lrcorner (x_3 \times x_4) = (x_1 \wedge x_2) \lrcorner (x_3 \wedge x_4) \qquad (1.34)$

$(x_1 \wedge x_2) \lrcorner (x_3 \wedge x_4) = (x_1 \lrcorner x_3)(x_2 \lrcorner x_4) - (x_1 \lrcorner x_4)(x_2 \lrcorner x_3) \qquad (1.35)$
The cross product of two cross products is a (4−n)-element, and therefore a 1-element only in three dimensions. It corresponds to the regressive product of two exterior products.
$(x_1 \times x_2) \times (x_3 \times x_4) = (x_1 \wedge x_2) \vee (x_3 \wedge x_4) \qquad (1.36)$
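In a 3-space these consequences reduce to familiar cross product identities, which can be checked numerically. The sketch below (NumPy; names our own) uses the expansion $(x_1 \wedge x_2) \lrcorner x_3 = (x_1 \lrcorner x_3)\,x_2 - (x_2 \lrcorner x_3)\,x_1$, which follows from the interior products of basis elements computed earlier:

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2, x3, x4 = rng.standard_normal((4, 3))

def interior_bivec_vec(x, y, z):
    # (x^y) interior z = (x.z) y - (y.z) x, a 1-element in any dimension
    return np.dot(x, z) * y - np.dot(y, z) * x

# 1.33: the triple cross product equals (x1 ^ x2) interior x3
assert np.allclose(np.cross(np.cross(x1, x2), x3),
                   interior_bivec_vec(x1, x2, x3))

# 1.34/1.35: the scalar product of two cross products
lhs = np.dot(np.cross(x1, x2), np.cross(x3, x4))
rhs = np.dot(x1, x3) * np.dot(x2, x4) - np.dot(x1, x4) * np.dot(x2, x3)
assert np.isclose(lhs, rhs)
```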
where p, q, r, and s are scalars, perhaps zero. We also assume distributivity and commutativity with scalars. We denote the product with a dot: a.b. Note that, while a and b must both belong to the same linear space, we make no assertions as to where the product belongs. There are two essentially different forms of product we can form from the two distinct elements a and b: those involving an element with itself (a.a and b.b); and those involving distinct elements (a.b and b.a). For a product of an element with itself, there are just two distinct possibilities: either it is zero, or it is non-zero. In what follows, we shall take each of these two cases and explore for any relations it might imply on products of distinct elements.
Conversely, if we suppose that a.b + b.a = 0 for all a and b, we can replace b with a to obtain 2(a.a) = 0, whence a.a = 0. Similarly b.b = 0. Thus for Case 1 we have
a.a = 0, b.b = 0

a.b + b.a = 0 (1.37)
This means that if a linear product is defined to be nilpotent (that is, a.a = 0), then necessarily it is antisymmetric. Conversely, if it is antisymmetric, it is also nilpotent.
Since, for the case we are considering, the product of no element with itself is zero, we conclude that both a and d must be zero. Thus, equation A becomes
D: b(a.b) + c(b.a) = 0
Again, since for this case a.a is not zero, we must have that b + c is zero, that is, c is equal to -b. Replace c by -b in equation D to give
F: b(a.b − b.a) = 0
Now, either b is zero, or a.b − b.a is zero. In the case b is zero, we also have that c is zero. Additionally, from equation C we have that both a and d are zero. This condition leads to no linear axiom of the form A being possible. The remaining possibility is that a.b − b.a = 0. Thus we can summarize the results as leading to either of the two possibilities
a.a ≠ 0, b.b ≠ 0

a.b − b.a = 0 (1.38)
This means that if a linear product is defined such that the product of any non-zero element with itself is never zero, then necessarily either no linear relation is implied between products, or the product operation is symmetric, that is, a.b = b.a.
Summary
In sum, there exist just two types of linear product axiom: one asserting the symmetry of the product operation, and the other asserting its antisymmetry. This does not preclude linear products which have no axioms relating products of two elements.
Examples
Symmetric products
Inner products: These are symmetric for factors of (equal) arbitrary grade.
Scalar products: These are a special case of inner products, for factors of grade 1.
Exterior products: These are symmetric for factors of (equal) even grade.
Algebraic products: The ordinary algebraic product is symmetric. It is equivalent to the special cases of the inner or exterior products of elements of zero grade.
Antisymmetric products
Exterior products: These are antisymmetric for factors of (equal) odd grade.
Regressive products: These are antisymmetric for factors of (equal) grade, but one whose parity is different from that of the dimension of the space.
Generalized products: These are antisymmetric for factors of (equal) grade, but one whose parity is different from that of the order of the product.
The Clifford product of two 1-elements is defined as the sum of their exterior product and their inner (scalar) product, so now let $a.b = a \wedge b + a \lrcorner b$.
$B{:}\;\; a\,(a \wedge a + a \lrcorner a) + b\,(a \wedge b + a \lrcorner b) + c\,(b \wedge a + b \lrcorner a) + d\,(b \wedge b + b \lrcorner b) = 0$
Since the exterior and scalar product operations are distinct, leading to elements of different grade, we can write B as two separate linear product axioms:
$C{:}\;\; a\,(a \wedge a) + b\,(a \wedge b) + c\,(b \wedge a) + d\,(b \wedge b) = 0$

$D{:}\;\; a\,(a \lrcorner a) + b\,(a \lrcorner b) + c\,(b \lrcorner a) + d\,(b \lrcorner b) = 0$
Equation C, with $a \wedge a = b \wedge b = 0$ and $b \wedge a = -a \wedge b$, reduces to $(b - c)(a \wedge b) = 0$, so that c = b. Substituting c = b into D gives equation G: $a\,(a \lrcorner a) + 2b\,(a \lrcorner b) + d\,(b \lrcorner b) = 0$. Put a equal to −a in G and subtract the result from G to give $4b\,(a \lrcorner b) = 0$. Hence b must be zero, giving
$H{:}\;\; a\,(a \lrcorner a) + d\,(b \lrcorner b) = 0$
Now put b equal to a to get $(a + d)(a \lrcorner a) = 0$, whence a must be equal to −d, yielding finally
$I{:}\;\; a \lrcorner a - b \lrcorner b = 0$
This is clearly a contradiction (as we suspected might be the case), hence there is no linear product axiom relating Clifford products of 1-elements.
1.15 Terminology
Terminology
Grassmann algebra is really quite a simple structure, straightforwardly generated by introducing a product operation onto the elements of a linear space to generate a suite of new linear spaces. If we only wish to describe this algebra and its elements, the terminology can be quite compact and consistent with accepted practice.

However in applications of the algebra, for example to geometry and physics, we may want to work in an interpreted version of the algebra. This new interpretation may be as simple as viewing one of the basis elements of the underlying linear space as an origin point, and the rest as vectors, but the interpreted algebraic, and therefore terminological, distinctions are now significantly increased. Whereas the exterior product on an algebraically uninterpreted basis multiplied the basis elements into a single suite of higher grade elements, the exterior product on such an interpreted basis multiplies the basis elements into two suites of interpreted higher grade elements: those containing the origin as a factor, and those which do not. This is of course more complex, and requires more terminology to make all the distinctions.

Deciding on the terminology to use however poses a challenge, because many of the distinctions, while new to some, are known to others by different names. For example, E. A. Milne in his Vectorial Mechanics, although not mentioning Grassmann or the exterior product, uses the term 'line vector' for the entity we model by the exterior product of a point with a vector. Robert Stawell Ball in his A Treatise on the Theory of Screws, only briefly mentioning Grassmann, uses the word 'screw' for the entity we model by the sum of a 'line vector' and what is currently known as a bivector.

It is not within my capabilities to extract complete consistency from historically precedent terminology in this highly applicable yet largely unexplored area of mathematics. I have had to adopt some compromises.
These compromises have been directed by the desire that the terminology of the book be consistent, historically cognizant, modern, and intuitive. It is certainly not perfect.
One compromise, adopted to avoid tedious extra-locution, is to extend the term space in this book to also mean a Grassmann algebra. I have done this because it seems to me that when we visualize space, it is not only inhabited by points and vectors, but also by lines, planes, bivectors, trivectors, and other multi-dimensional entities.
Linear space
In this book a linear space is defined in the usual way as consisting of an abelian group under addition, a field, and a (scalar) multiplication operation between their elements. The dimension of a linear space is the number of elements in a basis of the space.
exterior product operation generates the algebra. The field of L_1 is denoted L_0, and the dimension of L_m is the binomial coefficient C(n, m). Its elements are called m-elements. The integer m is called the grade of L_m and of its elements.
Grade   Space   Basis
0       L_0     1
1       L_1     e1, e2, e3, …, en
2       L_2     e1∧e2, e1∧e3, …, en−1∧en
⋮       ⋮       ⋮
m       L_m     ei1∧ei2∧…∧eim  (i1 < i2 < … < im)
n       L_n     e1∧e2∧…∧en
Grassmann algebra
A Grassmann algebra L is the direct sum L_0 ⊕ L_1 ⊕ L_2 ⊕ … ⊕ L_m ⊕ … ⊕ L_n of an underlying linear space L_1, its field L_0, and the exterior linear spaces L_m (2 ≤ m ≤ n). As well as being an algebra, L is a linear space of dimension 2^n, whose basis consists of all the basis elements of its exterior linear spaces. Its elements of grade m are called m-elements, and its multigraded elements are called L-elements. An m-element is called simple if it can be expressed as the exterior product of 1-element factors.
Space
In this book, the term space is another term for a Grassmann algebra. The term n-space is another term for a Grassmann algebra whose underlying linear space is of dimension n. By abuse of terminology we will refer to the dimension and basis of an n-space as that of its underlying linear space. In GrassmannAlgebra you can declare that you want to work in an n-space by entering !n. The space of a simple m-element is the m-space whose underlying linear space consists of all the 1-elements in the m-element. A 1-element is said to be in a simple m-element if and only if their exterior product is zero. A basis of the space of a simple m-element may be taken as any set of m independent factors of the m-element. An n-space and the space of its n-element are identical.
Vector space
A vector space is a space in which the elements of the underlying linear space have been interpreted as vectors. This interpretation views a vector as an (unlocated) direction and graphically depicts it as an arrow. An m-element in a vector space is called an m-vector or multivector. A 2-vector is also called a bivector. A 3-vector is also called a trivector. A simple m-vector is viewed as a multi-dimensional direction, or m-direction. An element of a vector space may also be called a geometric entity (or simply, entity) to emphasize that it has a geometric interpretation. An m-vector is an entity of grade m. Vectors are 1-entities, bivectors are 2-entities. A vector n-space is a vector space whose underlying linear space is of dimension n. A vector 3-space is thus richer than the usual 3-dimensional vector algebra, since it also contains bivectors and trivectors, as well as the exterior product operation.
The space of a simple m-vector is the m-space whose underlying linear space consists of all the vectors in the m-vector.
Bound space
A bound space is a vector space to whose basis has been added an origin. The origin is an element with the geometric interpretation of a point. A bound n-space is a vector n-space to whose basis has been added an origin. The symbol n refers to the number of vectors in the basis of the bound space, thus making the dimension of a bound n-space n+1. In GrassmannAlgebra you can declare that you want to work in a bound n-space by entering "n. You can determine the dimension of a space by entering D, which in this case would return n+1. Thus the underlying linear space of a bound n-space is of dimension n+1. An element of a bound space may also be called a geometric entity (or simply, entity) to emphasize it has a geometric interpretation. An m-entity is an entity of grade m. Vectors and points are 1-entities. Bivectors and bound vectors are 2-entities. A bound n-space is thus richer than a vector n-space, since as well as containing vectorial (unlocated) entities (vectors, bivectors, …), it also contains bound (located) entities (points, bound vectors, …) and sums of these. In contradistinction to the term bound space, a vector space may sometimes be called a free space. A bound 3-space is a closer algebraic model to physical 3-space than a vector 3-space, even though its underlying linear space is of four dimensions.
Grade   Free entity   Bound entity
0       scalar
1       vector        weighted point
2       bivector      bound vector
⋮       ⋮             ⋮
m       m-vector      bound (m−1)-vector
n       n-vector      bound (n−1)-vector
n+1                   bound n-vector
The space of a bound simple m-vector is the (m+1)-space whose underlying linear space consists of all the points and vectors in the bound simple m-vector.
Geometric objects
The geometric object of a bound simple entity is the set of all points in the entity, that is, all points whose exterior product with the bound simple entity is zero. The geometric object of a bound scalar or weighted point is the point itself. The geometric object of a bound vector is a line (an infinite set of points). The geometric object of a bound simple bivector is a plane (a doubly infinite set of points). Since the properties of geometric objects are well modelled algebraically by their corresponding entities, we find it convenient to compute with geometric objects by computing with their corresponding bound simple entities; thus for example computing the intersection of two lines by computing the regressive product of two bound vectors.
Grade   Bound entity
1       bound scalar
2       bound vector
3       bound simple bivector
⋮       ⋮
m       bound simple (m−1)-vector
n       bound (n−1)-vector
n+1     bound n-vector
Congruent vectors of opposite orientation are also said to have opposite sense. Congruent simple m-vectors are said to define the same m-direction. Congruent bound simple elements are said to define the same position and direction. A bound simple m-vector that has been carried to a new bound simple m-vector by the addition of an (m+1)-vector may be said to have been carried to a new position. The m-direction of the bound simple m-vector is unchanged. The geometric object of a bound simple entity may be said to have the position and direction of its entity. The notions of position, direction, and sense are geometric interpretations on the algebraic elements.
1.16 Summary
To be completed
2.1 Introduction
The exterior product is the natural fundamental product operation for elements of a linear space. Although it is not a closed operation (that is, the product of two elements is not itself an element of the same linear space), the products it generates form a series of new linear spaces, the totality of which can be used to define a true algebra, which is closed.

The exterior product is naturally and fundamentally connected with the notion of linear dependence. Several 1-elements are linearly dependent if and only if their exterior product is zero. All the properties of determinants flow naturally from the simple axioms of the exterior product. The notion of 'exterior' is equivalent to the notion of linear independence, since elements which are truly exterior to one another (that is, not lying in the same space) will have a non-zero product.

An exterior product of 1-elements has a straightforward geometric interpretation as the multidimensional equivalent to the 1-elements from which it is constructed. And if the space possesses a metric, its measure or magnitude may be interpreted as a length, area, volume, or hyper-volume according to the grade of the product. However, the exterior product does not require the linear space to possess a metric. This is in direct contrast to the three-dimensional vector calculus, in which the vector (or cross) product does require a metric, since it is defined as the vector orthogonal to its two factors, and orthogonality is a metric concept. Some of the results which use the cross product can equally well be cast in terms of the exterior product, thus avoiding unnecessary assumptions.

We start the chapter with an informal discussion of the exterior product, and then collect the axioms for exterior linear spaces together more formally into a set of 12 axioms which combine those of the underlying field and underlying linear space with the specific properties of the exterior product.
In later chapters we will derive equivalent sets for the regressive and interior products, to which we will often refer.

Next, we pin down the linear space notions that we have introduced axiomatically by introducing a basis onto the underlying (primary) linear space and showing how this can induce a basis onto each of the other linear spaces generated by the exterior product. A constantly useful partner to a basis element of any of these exterior linear spaces is its cobasis element. The exterior product of a basis element with its cobasis element always gives the basis n-element of the algebra. We develop the notion of cobasis following that of basis.

The next three sections look at some standard topics in linear algebra from a Grassmannian viewpoint: determinants, cofactors, and the solution of systems of linear equations. We show that all the well-known properties of determinants, cofactors, and linear equations proceed directly from the properties of the exterior product.
The next two sections discuss two concepts dependent on the exterior product that have no direct counterpart in standard linear algebra: simplicity and exterior division. If an element is simple it can be factorized into 1-elements. If a simple element is divided by another simple element contained in it, the result is not unique. Both these properties will find application later in our explorations of geometry.

At the end of the chapter, we take the opportunity to introduce the concepts of span and cospan of a simple element, and the concept that we can define multilinear forms involving the spans and cospans of an element which are invariant to any refactorization of the element. Following this we explore their application to calculating the union and intersection of two simple elements, and the factorization of a simple element. These applications involve only the exterior product. But we will continue to use the concepts of span, cospan and multilinear forms throughout the book, where they will generally involve other product operations.
by a, b, c, …. Let L_1 denote a linear space of dimension n whose field is L_0. Its elements, called 1-elements, will be denoted by x, y, z, …. We call L_1 a linear space rather than a vector space, and its elements 1-elements rather than vectors, because we will be interpreting its elements as points as well as vectors.

A second linear space, denoted L_2, may be constructed as the space of sums of exterior products of 1-elements taken two at a time. The exterior product operation is denoted by ∧, and has the following properties:
a (x ∧ y) = (a x) ∧ y        (2.1)

x ∧ (y + z) = x ∧ y + x ∧ z        (2.2)

(y + z) ∧ x = y ∧ x + z ∧ x        (2.3)

x ∧ x = 0        (2.4)
An important property of the exterior product (which is sometimes taken as an axiom) is the antisymmetry of the product of two 1-elements.
x ∧ y = −y ∧ x        (2.5)

This may be proved from the distributivity and nilpotency axioms since:

(x + y) ∧ (x + y) = 0
x∧x + x∧y + y∧x + y∧y = 0
x∧y + y∧x = 0
x ∧ y = −y ∧ x
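These axioms can be exercised concretely. The sketch below (plain Python, an illustration entirely independent of the GrassmannAlgebra package; `wedge` is a hypothetical helper name) represents an element as a dictionary mapping sorted tuples of basis indices to coefficients, with the nilpotency and anti-symmetry rules built in:

```python
def wedge(u, v):
    """Exterior product of elements represented as dicts mapping
    sorted tuples of basis indices to coefficients."""
    out = {}
    for iu, cu in u.items():
        for iv, cv in v.items():
            idx = list(iu + iv)
            if len(set(idx)) < len(idx):
                continue  # nilpotency (2.4): a repeated factor kills the term
            sign = 1
            # bubble-sort the indices, flipping the sign per transposition
            for i in range(len(idx)):
                for j in range(len(idx) - 1 - i):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * cu * cv
    return {k: c for k, c in out.items() if c != 0}

x = {(1,): 1}           # x = e1
y = {(2,): 1}           # y = e2
print(wedge(x, y))      # {(1, 2): 1}   i.e. e1^e2
print(wedge(y, x))      # {(1, 2): -1}  anti-symmetry (2.5)
print(wedge(x, x))      # {}            nilpotency (2.4)
```

Distributivity (2.2) also holds term by term, since the double loop simply multiplies out the sums.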
An element of L_2 will be called a 2-element (of grade 2) and is denoted by a kernel letter with a '2' written below. A simple 2-element is the exterior product of two 1-elements.

It is important to note the distinction between a 2-element and a simple 2-element (a distinction of no consequence in L_1, where all elements are simple). A 2-element is in general a sum of simple 2-elements, and is not generally expressible as the product of two 1-elements (except where L_1 is of dimension n ≤ 3). The structure of L_2, whilst still that of a linear space, is thus richer than that of L_1.
In a space of three dimensions, however, every 2-element is simple. Suppose that L_1 has basis e1, e2, e3. Then a basis of L_2 is the set of all essentially different (linearly independent) products of pairs of these basis elements: e1∧e2, e2∧e3, e3∧e1 (the product e1∧e2 is not considered essentially different from e2∧e1 in view of the anti-symmetry property). Let a general 2-element α_2 be expressed in terms of this basis as:

α_2 = a e1∧e2 + b e2∧e3 + c e3∧e1

Without loss of generality, suppose a ≠ 0. Then α_2 can be recast in the form below, thus proving the proposition.

α_2 = a (e1 − (b/a) e3) ∧ (e2 − (c/a) e3)
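The factorization can be verified mechanically. The following sketch (plain Python with illustrative sample coefficients; `wedge` is a hypothetical helper, not part of the GrassmannAlgebra package) confirms that a (e1 − (b/a) e3) ∧ (e2 − (c/a) e3) reproduces a e1∧e2 + b e2∧e3 + c e3∧e1:

```python
from fractions import Fraction

def wedge(u, v):
    # minimal exterior product on dicts {sorted index tuple: coefficient}
    out = {}
    for iu, cu in u.items():
        for iv, cv in v.items():
            idx = list(iu + iv)
            if len(set(idx)) < len(idx):
                continue                      # nilpotency
            sign = 1
            for i in range(len(idx)):
                for j in range(len(idx) - 1 - i):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * cu * cv
    return {k: c for k, c in out.items() if c != 0}

a, b, c = Fraction(2), Fraction(3), Fraction(5)
# alpha = a e1^e2 + b e2^e3 + c e3^e1   (note e3^e1 = -e1^e3)
alpha = {(1, 2): a, (2, 3): b, (1, 3): -c}
f1 = {(1,): Fraction(1), (3,): -b / a}    # e1 - (b/a) e3
f2 = {(2,): Fraction(1), (3,): -c / a}    # e2 - (c/a) e3
product = wedge({(): a}, wedge(f1, f2))   # a (f1 ^ f2)
print(product == alpha)                   # True: the 2-element factorizes
```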
Historical Note
In the Ausdehnungslehre of 1844 Grassmann denoted the exterior product of two symbols α and β by a simple concatenation, viz. αβ; whilst in the Ausdehnungslehre of 1862 he enclosed them in square brackets, viz. [αβ]. This notation has survived in the three-dimensional vector calculus as the 'box' product [α β γ] used for the triple scalar product. Modern usage denotes the exterior product operation by the wedge ∧, thus α∧β. Amongst other writers, Whitehead [1898] used the 1844 version whilst Forder [1941] and Cartan [1922] followed the 1862 version.
If you have just loaded the GrassmannAlgebra package, you will see these symbols listed in the status panes at the top of the palette. (You can load the package by clicking the red title button on the GrassmannAlgebra Palette, and get help on any of the topics or commands by clicking the associated ? button.) To declare your own list of scalar symbols, enter DeclareScalarSymbols[list].
DeclareScalarSymbols[{α, β, γ}]
{α, β, γ}
You will now see these new symbols reflected in the Current ScalarSymbols pane. Now, if you enter ScalarSymbols you see the new list:
ScalarSymbols
{α, β, γ}
You can always return to the default declaration by entering the command DeclareDefaultScalarSymbols or its alias S. (You can enter this with the keystroke combination of *5S, or by using the 5-star symbol on the palette.)
S
{a, b, c, d, e, f, g, h}
The same procedures apply mutatis mutandis for vector symbols, with the word "Scalar" replaced by the word "Vector" and S by V.
As well as symbols in the lists ScalarSymbols and VectorSymbols, you can also include patterns. Any expression which conforms to such a pattern is considered to be a symbol of the respective type. For further information, check the Preferences section in the palette. Note that the five-pointed star symbol (*5) is used in GrassmannAlgebra as the first character in each of its aliases to commonly used commands, objects or functions. You can reset all the default declarations (ScalarSymbols, VectorSymbols, Basis and Metric) by entering A, which you can access from the palette. We will do this often throughout the book to ensure we are starting our computations from the default environment.
sums of exterior products of elements of L_1 taken two at a time, so L_m may be composed from sums of exterior products of elements of L_1 taken m at a time.
An m-element is a linear combination of simple m-elements. It is denoted by a kernel letter with an 'm' written below:
α_m = a1 (x1∧x2∧…∧xm) + a2 (y1∧y2∧…∧ym) + …,   ai ∈ L_0
The number m is called the grade of the m-element.

The exterior linear space L_0 has essentially only one element in its basis, which we take as the unit 1. The exterior linear space L_n (where n is the dimension of L_1) also has essentially only one element in its basis: e1∧e2∧…∧en. All other elements of L_n are scalar multiples of this basis element.

Any m-element, where m > n, is zero, since there are no more than n independent elements available from which to compose it, and the nilpotency property causes it to vanish.
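This vanishing is really a pigeonhole observation: in an n-space, every exterior product of more than n basis 1-elements must repeat a factor. A small illustrative check (plain Python, independent of the GrassmannAlgebra package):

```python
from itertools import product

n = 3
# Every choice of 4 basis-vector factors from a 3-space repeats an index,
# so each such product vanishes by nilpotency, and with it every 4-element.
repeats = all(len(set(p)) < 4 for p in product(range(1, n + 1), repeat=4))
print(repeats)   # True
```

By contrast, products of only n factors can have all indices distinct, which is why L_n survives with a one-element basis.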
a x0 + b x1 + c x2∧x3 + d x4∧x5∧x6
GrassmannAlgebra understands that the new symbols generated are vector symbols. You can verify this by checking the Current VectorSymbols status pane in the palette, or by entering VectorSymbols.
VectorSymbols
{p, q, r, s, t, u, v, w, x, y, z, x1, x2, x3, x4, x5, x6}
For further information, check the Expression Composition section in the palette.
We may also say that α_m is in the space of α_m, or that α_m defines its own space.

A simple element β_k is said to be contained in another simple element α_m if and only if the space of β_k is a subset of that of α_m.
We say that two elements are congruent if one is a scalar multiple of the other, and denote the relationship with the symbol ≅. For example we may write:

α_m ≅ a α_m

We can therefore say that if two elements are congruent, then their spaces are the same. Note that whereas several congruent elements are linearly dependent, several linearly dependent elements are not necessarily congruent.

When we have one element equal to a scalar multiple of another, α_m = a β_m say, we may sometimes take the liberty of writing the scalar multiple as a quotient of the two elements:

a = α_m / β_m
These notions will be encountered many times in the rest of the book. An overview of the terminology used in this book can found at the end of Chapter 1.
Hence

(α_m ∧ β_k) ∧ γ_p = α_m ∧ (β_k ∧ γ_p)        (2.6)
Thus the brackets may be omitted altogether. From this associativity together with the anti-symmetric property of 1-elements it may be shown that the exterior product is anti-symmetric in all (1-element) factors. That is, a transposition of any two 1-element factors changes the sign of the product. For example:
x1∧x2∧x3∧x4 = −x3∧x2∧x1∧x4 = x3∧x4∧x1∧x2 = …
Furthermore, from the nilpotency axiom, a product with two identical 1-element factors is zero. For example:

x1∧x2∧x3∧x2 = 0
necessarily possess this property, as the following example shows. Suppose L_1 is of dimension 4 with basis e1, e2, e3, e4. Then the following exterior product of a 2-element with itself is non-zero:

(e1∧e2 + e3∧e4) ∧ (e1∧e2 + e3∧e4) = 2 e1∧e2∧e3∧e4
An alias for GrassmannExpand is #, which you can find on the GrassmannAlgebra palette. Note that GrassmannExpand does not simplify the expression.
Note that you can also use these functions to expand and/or simplify any Grassmann expression. For further information, check the Expression Transformation section in the palette.
1:
2.7
2: Addition of m-elements is associative.

(α_m + β_m) + γ_m = α_m + (β_m + γ_m)        (2.8)
3:
2.9
4:
2.10
5:
2.11
6:
2.12
7: The exterior product is associative.

(α_m ∧ β_k) ∧ γ_j = α_m ∧ (β_k ∧ γ_j)        (2.13)
8: There is a unit scalar which acts as an identity under the exterior product.

1 ∈ L_0,   α_m ∧ 1 = α_m        (2.14)
9:
2.15
10:
2.16
11: Additive identities act as multiplicative zero elements under the exterior product.

0_k ∈ L_k,   0_k ∧ α_m = 0_{k+m}        (2.17)
12: The exterior product is both left and right distributive under addition.

(α_m + β_m) ∧ γ_k = α_m ∧ γ_k + β_m ∧ γ_k
α_m ∧ (β_k + γ_k) = α_m ∧ β_k + α_m ∧ γ_k        (2.18)
13: Scalar factors may be moved into or out of either factor of an exterior product.

a (α_m ∧ β_k) = (a α_m) ∧ β_k = α_m ∧ (a β_k)        (2.19)
Grassmann algebras
Under the foregoing axioms it may be directly shown that:

1. L_0 is a field.
This algebra is called a Grassmann algebra. Its elements are sums of elements from the L_m, thus allowing closure over both addition and exterior multiplication. For example, we can expand and simplify the product of two elements of the algebra to give another element.
%[(1 + 2 x + 3 x∧y + 4 x∧y∧z) ∧ (1 + 2 x + 3 x∧y + 4 x∧y∧z)]
1 + 4 x + 6 x∧y + 8 x∧y∧z
A Grassmann algebra is also a linear space of dimension 2^n, where n is the dimension of the underlying linear space L_1. We will sometimes refer to the Grassmann algebra whose underlying linear space is of dimension n as the Grassmann algebra of n-space. Grassmann algebras will be discussed further in Chapter 9: Exploring Grassmann Algebras.
α_m ∧ β_k = (−1)^(m k) β_k ∧ α_m

If one of these factors is a scalar (β_k = a, say; k = 0), the axiom reduces to:

α_m ∧ a = a ∧ α_m

Since by axiom 6, each of these terms is an m-element, we may permit the exterior product to subsume the normal field multiplication. Thus, if a is a scalar, a ∧ α_m is equivalent to a α_m. The latter (conventional) notation will usually be adopted.

In the usual definitions of linear spaces no discussion is given to the nature of the product of a scalar and an element of the space. A notation is usually adopted (that is, the omission of the product sign) that leads one to suppose this product to be of the same nature as that between two scalars. From the axioms above it may be seen that both the product of two scalars and the product of a scalar and an element of the linear space may be interpreted as exterior products.

α_m ∧ a = a ∧ α_m = a α_m,   a ∈ L_0        (2.20)
Factoring scalars
In GrassmannAlgebra scalars can be factored out of a product by GrassmannSimplify (alias $) so that they are collected at the beginning. For example:

$[5 (2 x) ∧ (3 y) ∧ (b z)]
30 b x∧y∧z

If any of the factors in the product is scalar, it will also be collected at the beginning. Here a is a scalar since it has been declared as one (by default).

$[5 (2 x) ∧ (3 a) ∧ (b z)]
30 a b x∧z

GrassmannSimplify works with any Grassmann expression, or lists (to any level) of Grassmann expressions:

$[{{1 + a∧b, a∧x}, {z∧b, 1 − z∧x}}] // MatrixForm
{{1 + a b, a x}, {b z, 1 + x∧z}}
Grassmann expressions
A Grassmann expression is any well-formed expression recognized by GrassmannAlgebra as a valid element of a Grassmann algebra. To check whether an expression is considered to be a Grassmann expression, you can use the function GrassmannExpressionQ.
GrassmannExpressionQ[1 + x∧y]
True

GrassmannExpressionQ also works on lists of expressions. Below we determine that whereas the product of a scalar a and a 1-element x is a valid element of the Grassmann algebra, the ordinary multiplication of two 1-elements x y is not.

GrassmannExpressionQ[{x∧y, a x, x y}]
{True, True, False}
For further information, check the Expression Analysis section in the palette.
At this stage we have not yet begun to use expressions for which the grade is other than obvious by inspection. However, as will be seen later, the grade will not always be obvious, especially when general Grassmann expressions, for example those involving Clifford or hypercomplex numbers, are being considered. A Clifford number may, for example, have components of different grade. In GrassmannAlgebra you can use the function Grade (alias G) to calculate the grade of a Grassmann expression. Grade works with single Grassmann expressions or with lists (to any level) of expressions.
Grade[x∧y∧z]
3

Grade[{1, x, x∧y, x∧y∧z}]
{0, 1, 2, 3}
It will also calculate the grades of the elements in a more general expression, returning them in a list.
Grade[1 + x + x∧y + x∧y∧z]
{0, 1, 2, 3}
2.5 Bases
Bases for exterior linear spaces
Suppose e1, e2, …, en is a basis for L_1. Then, as is well known, any element of L_1 may be expressed as a linear combination of these basis elements.

A basis for L_2 may be constructed from the ei by assembling all the essentially different non-zero products ei1∧ei2 (i1, i2 : 1, …, n). Two products are essentially different if they do not involve the same 1-elements. There are obviously C(n, 2) such products, making C(n, 2) also the dimension of L_2.
In general, a basis for L_m may be constructed from the ei by taking all the essentially different non-zero products of the ei taken m at a time, making L_m C(n, m)-dimensional.
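These dimension counts are easy to check by enumerating the essentially different basis products directly. A quick sketch (plain Python, independent of the GrassmannAlgebra package):

```python
from itertools import combinations
from math import comb

n = 4  # dimension of the underlying linear space L_1
for m in range(n + 1):
    # basis of L_m: all strictly increasing m-tuples of basis indices
    basis_m = list(combinations(range(1, n + 1), m))
    print(m, len(basis_m), comb(n, m))   # dimension of L_m is C(n, m)

# total dimension of the Grassmann algebra is 2^n
print(sum(comb(n, m) for m in range(n + 1)))   # 16
```

Summing the binomial coefficients over all grades recovers the 2^n dimension of the full algebra quoted earlier.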
When first loaded GrassmannAlgebra sets up a default basis of {e1, e2, e3}. You can see this from the Current Basis pane on the palette. This is the basis of a 3-dimensional linear space which may be interpreted as a 3-dimensional vector space if you wish. (We will be exploring interpretations other than the vectorial later.) To declare your own basis, enter DeclareBasis[list]. For example:
DeclareBasis[{i, j, k}]
{i, j, k}
Notice that the Current Basis pane of the palette now reads {i, j, k}.
DeclareBasis either takes a list (of your own basis elements) or a positive integer as its argument. By using the positive integer form you can simply declare a subscripted basis of any dimension you please.

DeclareBasis[8]
{e1, e2, e3, e4, e5, e6, e7, e8}
An optional argument gives you control over the kernel symbol used for the basis elements.
DeclareBasis[8, e]
{e1, e2, e3, e4, e5, e6, e7, e8}
A convenient way to change your space to one of dimension n is by entering the symbol !n with n a positive integer. The simplest way to enter this is from the palette, using the ! symbol and entering the integer into the placeholder.
!4
{e1, e2, e3, e4}
You can return to the default basis by entering DeclareBasis[3] or !3. You can set all preferences back to their default values by entering DeclareAllDefaultPreferences (or its alias A). For further information, check the Preferences section of the palette.
the declared basis of L_1. For example, we declare the basis to be that of a 3-dimensional vector space:

!3
{e1, e2, e3}

BasisL[0]
{1}

BasisL[1]
{e1, e2, e3}

BasisL[2]
{e1∧e2, e1∧e3, e2∧e3}

BasisL[3]
{e1∧e2∧e3}

BasisL[] generates a list of the bases of each of the L_m.
You can combine these into a single list by using Mathematica's inbuilt Flatten function.
Flatten[BasisL[]]
{1, e1, e2, e3, e1∧e2, e1∧e3, e2∧e3, e1∧e2∧e3}
This is a basis of the Grassmann algebra whose underlying linear space is 3-dimensional. As a linear space, the algebra has 23 = 8 dimensions corresponding to its 8 basis elements.
Entering BasisPalette would then give you the 16 basis elements of the corresponding Grassmann algebra:
BasisPalette
Basis Palette
L_0: 1
L_1: e1, e2, e3, e4
L_2: e1∧e2, e1∧e3, e1∧e4, e2∧e3, e2∧e4, e3∧e4
L_3: e1∧e2∧e3, e1∧e2∧e4, e1∧e3∧e4, e2∧e3∧e4
L_4: e1∧e2∧e3∧e4
You can click on any of the buttons to get its contents pasted into your notebook. For example you can compose Grassmann numbers of your choice by clicking on the relevant basis elements.
X = a + b e1 + c e1∧e3 + d e1∧e3∧e4
Standard ordering
The standard ordering of the basis of L_1 is defined as the ordering of the elements in the declared list of basis elements. This ordering induces a natural standard ordering on the basis elements of L_m: if the basis elements of L_1 were letters of the alphabet arranged alphabetically, then the basis elements of L_m would be words arranged alphabetically. Equivalently, if the basis elements were digits, the ordering would be numeric. For example, if we take {A,B,C,D} as basis, we can see from its BasisPalette that the basis elements for each of the bases are arranged alphabetically.
Basis Palette

L_0: 1
L_1: A, B, C, D
L_2: A∧B, A∧C, A∧D, B∧C, B∧D, C∧D
L_3: A∧B∧C, A∧B∧D, A∧C∧D, B∧C∧D
L_4: A∧B∧C∧D
denote more succinctly the basis elements of an exterior linear space of large or arbitrary dimension. For example, suppose we have a basis {e1, e2, e3, e4} for L_1; then the standard basis for L_3 is given by:

!4; BasisL[3]
{e1∧e2∧e3, e1∧e2∧e4, e1∧e3∧e4, e2∧e3∧e4}
We could, if we wished, set up rules to denote these basis elements more compactly. For example:
{e1∧e2∧e3 → e1, e1∧e2∧e4 → e2, e1∧e3∧e4 → e3, e2∧e3∧e4 → e4}

where each new symbol ei on the right carries a '3' written below it to indicate its grade.
We will find this more compact notation of some use in later theoretical derivations. Note however that GrassmannAlgebra is set up to accept only declarations for the basis of L_1, from which it generates the bases of the other exterior linear spaces of the algebra.
2.6 Cobases
Definition of a cobasis
The notion of cobasis of a basis will be conceptually and notationally useful in our development of later concepts. The cobasis of a basis of L_m is simply the basis of L_{n−m} whose elements have been ordered and signed in a special way.
Let {e1, e2, …, en} be a basis for L_1. The cobasis element associated with a basis element ei is denoted e̲i and is defined as the product of the remaining basis elements such that the exterior product of a basis element with its cobasis element is equal to the basis element of L_n:

ei ∧ e̲i = e1∧e2∧…∧en

That is:

e̲i = (−1)^(i−1) e1∧…∧êi∧…∧en        (2.21)

where êi means that ei is missing from the product. The choice of the underbar notation to denote the cobasis may be viewed as a mnemonic to indicate that the element e̲i is the basis element of L_n with ei 'struck out' from it.
Cobasis elements have similar properties to Euclidean complements, which are denoted with overbars (see Chapter 5: The Complement). However, it should be noted that the underbar denotes an element, and is not an operation: the underbar of a general (non-basis) element is not defined.

More generally, the cobasis element of the basis m-element ei_m = ei1∧…∧eim of L_m is denoted e̲i_m and is defined as the product of the remaining basis elements such that:

ei_m ∧ e̲i_m = e1∧e2∧…∧en        (2.22)

That is:

e̲i_m = (−1)^(K_m) e1∧…∧êi1∧…∧êim∧…∧en        (2.23)

where K_m = (i1 − 1) + (i2 − 2) + … + (im − m), and êij means that eij is missing from the product.
From the above definition it can be seen that the exterior product of a basis element with the cobasis element of another basis element is zero. Hence we can write:

ei_m ∧ e̲j_m = δij e1∧e2∧…∧en        (2.24)
Thus:

1̲ = e1∧e2∧…∧en = e_n        (2.25)

Any of these three representations will be called the basis n-element of the space. 1̲ may also be described as the cobasis of unity. Note carefully that 1̲ may be different in different bases.
!3; CobasisPalette

Cobasis Palette

Basis        Cobasis
1            e1∧e2∧e3
e1           e2∧e3
e2           −(e1∧e3)
e3           e1∧e2
e1∧e2        e3
e1∧e3        −e2
e2∧e3        e1
e1∧e2∧e3     1
If you want to compose a palette for another basis, first declare the basis.
DeclareBasis[{&, '}]; CobasisPalette

Cobasis Palette

Basis    Cobasis
1        &∧'
&        '
'        −&
&∧'      1
The cobasis element of a cobasis element e̲j_m is itself denoted with a second underbar and is defined as expected by the product of the remaining basis elements such that:

e̲i_m ∧ e̲̲j_m = δij e1∧e2∧…∧en        (2.26)

which, by comparison with the definition for the cobasis, shows that:

e̲̲i_m = (−1)^(m(n−m)) ei_m        (2.27)
That is, the cobasis element of a cobasis element of an element is (apart from a possible sign) equal to the element itself. We shall find this formula useful as we develop general formulae for the complement and interior products in later chapters.
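The signs in formulas 2.21 and 2.27 can be checked mechanically. The sketch below (plain Python; cobasis_sign is a hypothetical helper, not a GrassmannAlgebra function) computes the sign that makes a basis element wedged with its cobasis equal the basis n-element:

```python
from itertools import combinations

def cobasis_sign(indices, n):
    """Sign s making e_{i1}^...^e_{im} ^ (s * product of the remaining
    basis elements, in ascending order) equal to e_1^...^e_n."""
    rest = [i for i in range(1, n + 1) if i not in indices]
    perm = list(indices) + rest
    # parity of the permutation sorting perm back to (1, ..., n)
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

print(cobasis_sign((2,), 3))   # -1: the cobasis of e2 is -(e1^e3)

# Formula 2.27: taking the cobasis twice returns the element,
# up to the sign (-1)^(m(n-m)).
n = 4
for m in range(n + 1):
    for s in combinations(range(1, n + 1), m):
        rest = tuple(i for i in range(1, n + 1) if i not in s)
        assert cobasis_sign(s, n) * cobasis_sign(rest, n) == (-1) ** (m * (n - m))
```

For single basis elements the function reproduces the (−1)^(i−1) of formula 2.21 directly.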
2.7 Determinants
Determinants from exterior products
All the properties of determinants follow naturally from the properties of the exterior product. Indeed, it may reasonably be posited that the theory of determinants was a system for answering questions of linear dependence, developed only because Grassmann's work was ignored. Had Grassmann's work been more widely known, his simpler and more understandable approach would have rendered determinant theory of little interest. The determinant:
| a11  a12  …  a1n |
| a21  a22  …  a2n |
|  ⋮    ⋮        ⋮  |
| an1  an2  …  ann |
may be calculated by considering each of its rows (or columns) as a 1-element. Here, we consider rows. Development by columns may be obtained mutatis mutandis. Introduce an arbitrary basis ei in order to encode the position of the entry in the row, and let:

ai = ai1 e1 + ai2 e2 + … + ain en
Then form the exterior product of all the ai . We can arrange the ai in rows to portray the effect of an array:
a1∧a2∧⋯∧an = D e1∧e2∧⋯∧en
2.28
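To make the correspondence concrete, here is a minimal Python sketch (ours, not the book's Mathematica code) that evaluates a determinant exactly as equation 2.28 prescribes: each row is treated as a 1-element, and the surviving terms of the expanded exterior product — one distinct basis vector chosen from each row, with the sign of the reordering — are collected as the coefficient of e1∧e2∧⋯∧en.

```python
from itertools import permutations

def wedge_det(rows):
    """Determinant computed Grassmann-style: the coefficient of
    e1 ^ e2 ^ ... ^ en in the expansion of a1 ^ a2 ^ ... ^ an,
    where row i encodes the 1-element ai = ai1 e1 + ... + ain en."""
    n = len(rows)
    total = 0
    for p in permutations(range(n)):
        # sign of sorting e_{p(0)+1} ^ ... ^ e_{p(n-1)+1} into e1 ^ ... ^ en
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        term = 1
        for i in range(n):
            term *= rows[i][p[i]]
        total += sign * term
    return total

print(wedge_det([[1, 2], [3, 4]]))  # -2
```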
Properties of determinants
All the well-known properties of determinants proceed directly from the properties of the exterior product.
4 The determinant is equal to the sum of p determinants if each element of a given row is the sum of p terms.
The exterior product is distributive with respect to addition.
a1∧⋯∧(Σi ai)∧⋯ = Σi (a1∧⋯∧ai∧⋯)
5 The determinant is unchanged if to any row is added scalar multiples of other rows.
The exterior product is unchanged if to any factor is added multiples of other factors.
a1∧⋯∧(ai + Σ_{j≠i} kj aj)∧⋯ = a1∧⋯∧ai∧⋯
A generalization of the Laplace expansion technique is evident from the fact that the exterior product of the ai may be effected in any grouping and sequence which facilitate the computation.
Calculating determinants
As an example take the following matrix. We can calculate its determinant with Mathematica's inbuilt Det function:
Det[{{a, 0, 0, 0, 0, 0, g, 0}, {0, 0, b, 0, 0, f, 0, 0}, {0, 0, 0, 0, e, 0, 0, 0},
     {0, c, 0, 0, 0, d, 0, 0}, {0, 0, 0, e, 0, 0, 0, d}, {0, 0, f, 0, 0, c, 0, 0},
     {0, 0, 0, 0, g, 0, 0, b}, {0, a, 0, 0, 0, 0, h, 0}}]
a b^2 c^2 e^2 h − a b c e^2 f^2 h
and then expand and simplify the exterior product of the 8 rows or columns expressed as 1-elements. Here we use ExpandAndSimplifyExteriorProducts (as it is somewhat faster in this specialized case than the more general GrassmannExpandAndSimplify) to expand by columns.
ExpandAndSimplifyExteriorProducts[
 (a e1 + g e7)∧(b e3 + f e6)∧(e e5)∧(c e2 + d e6)∧
 (e e4 + d e8)∧(f e3 + c e6)∧(g e5 + b e8)∧(a e2 + h e7)]
(a b^2 c^2 e^2 h − a b c e^2 f^2 h) e1∧e2∧e3∧e4∧e5∧e6∧e7∧e8
Speeding up calculations
For maximum speed it is clearly better to use Mathematica's inbuilt Det wherever possible. If however, you are using GrassmannExpandAndSimplify or ExpandAndSimplifyExteriorProducts it is usually faster to apply the generalized Laplace expansion technique. That is, the order in which the products are calculated is arranged with the objective of minimizing the number of resulting terms produced at each stage. You can do this by dividing the product up into groupings of factors where each grouping contains as few basis elements as possible in common with the others. You then calculate the result from each grouping before combining them into the final result. Of course, you must ensure that the sign of the overall regrouping is the same as the original expression. In the example below, we have rearranged the determinant calculation above into three groupings, which we calculate successively before combining the results. Our determinant calculation then becomes
G = ExpandAndSimplifyExteriorProducts;
−G[G[(a e1 + g e7)∧(c e2 + d e6)∧(a e2 + h e7)]∧
   G[(b e3 + f e6)∧(f e3 + c e6)]∧
   G[(e e5)∧(e e4 + d e8)∧(g e5 + b e8)]]
a b c e^2 (b c − f^2) h e1∧e2∧e3∧e4∧e5∧e6∧e7∧e8
This grouping took a third of the time of the direct approach above.
Historical Note
Grassmann applied his Ausdehnungslehre to the theory of determinants and linear equations quite early in his work. Later, Cauchy published his technique of 'algebraic keys', which essentially duplicated Grassmann's results. To claim priority, Grassmann was led to publish his only paper in French, obviously directed at Cauchy: 'Sur les différents genres de multiplication' ('On different types of multiplication') [1855]. For a complete treatise on the theory of determinants from a Grassmannian viewpoint see R. F. Scott [1880].
2.8 Cofactors
Cofactors from exterior products
The cofactor is an important concept in the usual approach to determinants. One often calculates a determinant by summing the products of the elements of a row by their corresponding cofactors. Cofactors divided by the determinant form the elements of an inverse matrix. In the subsequent development of the Grassmann algebra, particularly the development of the complement operation, we will find it useful to see how cofactors arise from exterior products. Consider the product of n 1-elements introduced in the previous section:
a1∧a2∧⋯∧an = (a11 e1 + a12 e2 + ⋯ + a1n en)∧(a21 e1 + a22 e2 + ⋯ + a2n en)∧⋯∧(an1 e1 + an2 e2 + ⋯ + ann en)
Multiplying out the remaining n−1 factors results in an expression of the form:

a2∧⋯∧an = ā11 (e2∧e3∧⋯∧en) + ā12 (−e1∧e3∧⋯∧en) + ⋯ + ā1n ((−1)^(n−1) e1∧e2∧⋯∧e(n−1))
Here, the signs attached to the basis (n−1)-elements have been specifically chosen so that together they correspond to the cobasis elements of e1 to en. The barred scalar coefficients ā1j have yet to be interpreted. Thus we can write:
a2∧⋯∧an = ā11 ē1 + ā12 ē2 + ⋯ + ā1n ēn
Multiplying this out and remembering that the exterior product of a basis element with the cobasis element of another basis element is zero, we get the sum:
a1∧a2∧⋯∧an = (a11 ā11 + a12 ā12 + ⋯ + a1n ā1n) e1∧e2∧⋯∧en
showing that the āij are the cofactors of the aij. Mnemonically we can visualize āij as the determinant with the row and column containing aij 'struck out'.
Of course, there is nothing special in the choice of the element a1 about which to expand the determinant. We could have written the expansion in terms of any factor (row or column):
D = ai1 āi1 + ai2 āi2 + ⋯ + ain āin
It can be seen immediately that the product of the elements of a row with the cofactors of any other row is zero, since in the exterior product formulation the row must have been included in the calculation of the cofactors. Summarizing these results for both row and column expansions we have:
Σ_{j=1}^{n} aij ākj = δik D = Σ_{j=1}^{n} aji ājk

2.29
Here, the ai and ai are simply the coefficients resulting from the partial expansions. As with basis 1-elements, the exterior product of a basis m-element with the cobasis element of another basis melement is zero. That is:
ei ∧ ēj = δij e1∧e2∧⋯∧en   (ei and ej basis m-elements)
This is the Laplace expansion: the ai are minors and the āi are their cofactors.
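The two facts just derived — a row against its own cofactors yields the determinant, while a row against an alien row's cofactors yields zero (relation 2.29) — can be checked with a short Python sketch. The helper names are ours; the cofactor is computed exactly as the mnemonic describes, by striking out a row and a column.

```python
from itertools import permutations

def leibniz_det(rows):
    """Determinant via the fully expanded exterior product of the rows."""
    total = 0
    for p in permutations(range(len(rows))):
        sign = 1
        for i in range(len(p)):
            for j in range(i + 1, len(p)):
                if p[i] > p[j]:
                    sign = -sign
        term = 1
        for i in range(len(p)):
            term *= rows[i][p[i]]
        total += sign * term
    return total

def cofactor(rows, i, j):
    """Cofactor of entry (i, j): signed determinant with row i, column j struck out."""
    sub = [[r[c] for c in range(len(rows)) if c != j]
           for k, r in enumerate(rows) if k != i]
    return (-1) ** (i + j) * leibniz_det(sub)

M = [[2, 0, 1], [1, 3, 0], [0, 1, 4]]
# a row against its own cofactors reproduces the determinant ...
own = sum(M[0][j] * cofactor(M, 0, j) for j in range(3))
# ... while a row against another row's cofactors gives zero
alien = sum(M[1][j] * cofactor(M, 0, j) for j in range(3))
print(own, alien)  # 25 0
```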
We can extract the scalar coefficients from M using the GrassmannAlgebra function ExtractCoefficients. The result is now a 4!4 matrix Mc whose determinant we proceed to calculate.
Mc = ExtractCoefficients[Basis][M]; MatrixForm[Mc]
a1 b1 c1 d1
a2 b2 c2 d2
a3 b3 c3 d3
a4 b4 c4 d4
A = %[y∧z∧w]
(−b3 c2 d1 + b2 c3 d1 + b3 c1 d2 − b1 c3 d2 − b2 c1 d3 + b1 c2 d3) e1∧e2∧e3 +
(−b4 c2 d1 + b2 c4 d1 + b4 c1 d2 − b1 c4 d2 − b2 c1 d4 + b1 c2 d4) e1∧e2∧e4 +
(−b4 c3 d1 + b3 c4 d1 + b4 c1 d3 − b1 c4 d3 − b3 c1 d4 + b1 c3 d4) e1∧e3∧e4 +
(−b4 c3 d2 + b3 c4 d2 + b4 c2 d3 − b2 c4 d3 − b3 c2 d4 + b2 c3 d4) e2∧e3∧e4
The cobasis of the current basis {e1, e2, e3, e4} can be displayed by entering Cobasis.

Cobasis
{e2∧e3∧e4, −(e1∧e3∧e4), e1∧e2∧e4, −(e1∧e2∧e3)}
By comparing the basis elements in the expansion of A with the cobasis elements of the current basis above, we see that both e1∧e2∧e3 and e1∧e3∧e4 require a change of sign to bring them into the required cobasis form. The crux of the calculation is to garner this required sign from the coefficients of the basis elements (and hence also change the sign of the coefficient). We denote the cobasis elements of {e1, e2, e3, e4} by {ē1, ē2, ē3, ē4} and use a rule to replace the basis elements of A with the correctly signed values.
A1 = A /. {e2∧e3∧e4 → ē1, e1∧e3∧e4 → −ē2, e1∧e2∧e4 → ē3, e1∧e2∧e3 → −ē4}
(−b4 c3 d2 + b3 c4 d2 + b4 c2 d3 − b2 c4 d3 − b3 c2 d4 + b2 c3 d4) ē1 −
(−b4 c3 d1 + b3 c4 d1 + b4 c1 d3 − b1 c4 d3 − b3 c1 d4 + b1 c3 d4) ē2 +
(−b4 c2 d1 + b2 c4 d1 + b4 c1 d2 − b1 c4 d2 − b2 c1 d4 + b1 c2 d4) ē3 −
(−b3 c2 d1 + b2 c3 d1 + b3 c1 d2 − b1 c3 d2 − b2 c1 d3 + b1 c2 d3) ē4
The coefficients of {ē1, ē2, ē3, ē4} are the cofactors of {a1, a2, a3, a4}, which we denote {ā1, ā2, ā3, ā4}. We extract them again using ExtractCoefficients.

{ā1, ā2, ā3, ā4} = ExtractCoefficients[{ē1, ē2, ē3, ē4}][A1]
{−b4 c3 d2 + b3 c4 d2 + b4 c2 d3 − b2 c4 d3 − b3 c2 d4 + b2 c3 d4,
 b4 c3 d1 − b3 c4 d1 − b4 c1 d3 + b1 c4 d3 + b3 c1 d4 − b1 c3 d4,
 −b4 c2 d1 + b2 c4 d1 + b4 c1 d2 − b1 c4 d2 − b2 c1 d4 + b1 c2 d4,
 b3 c2 d1 − b2 c3 d1 − b3 c1 d2 + b1 c3 d2 + b2 c1 d3 − b1 c2 d3}
This can be easily verified as equal to the determinant of the original array computed using Mathematica's Det function.
Expand[D1] == Det[Mc]
True
Expand and simplify the product of the third and fourth rows.
B2 = %[z∧w]
(−c2 d1 + c1 d2) e1∧e2 + (−c3 d1 + c1 d3) e1∧e3 + (−c4 d1 + c1 d4) e1∧e4 +
(−c3 d2 + c2 d3) e2∧e3 + (−c4 d2 + c2 d4) e2∧e4 + (−c4 d3 + c3 d4) e3∧e4
Extract the coefficients of these basis 2-elements and denote them by bi . (For a more direct correspondence, we denote them in reverse order).
{b6, b5, b4, b3, b2, b1} = ExtractCoefficients[LBasis][B2]
{−c2 d1 + c1 d2, −c3 d1 + c1 d3, −c4 d1 + c1 d4, −c3 d2 + c2 d3, −c4 d2 + c2 d4, −c4 d3 + c3 d4}
Now, noting the required change in sign between the basis of L2 (the space of 2-elements) and its cobasis:
D2 = a1 ā1 + a2 ā2 + a3 ā3 + a4 ā4 + a5 ā5 + a6 ā6
(−a4 b3 + a3 b4)(−c2 d1 + c1 d2) + (−a4 b2 + a2 b4)(c3 d1 − c1 d3) +
(−a4 b1 + a1 b4)(−c3 d2 + c2 d3) + (−a3 b2 + a2 b3)(−c4 d1 + c1 d4) +
(−a3 b1 + a1 b3)(c4 d2 − c2 d4) + (−a2 b1 + a1 b2)(−c4 d3 + c3 d4)
This can be easily verified as equal to the determinant of the original array computed using Mathematica's Det function.
Expand[D2] == Det[Mc]
True
Transformations of cobases
In this section we show that if a basis of the underlying linear space is transformed by a transformation whose components are aij, then its cobasis is transformed by the induced transformation whose components are the cofactors of the aij. For simplicity in what follows, we use Einstein's summation convention, in which a summation over repeated indices is understood. Let aij be a transformation on the basis ej giving the new basis εi; that is, εi = aij ej. Let āij be the corresponding transformation on the cobasis ēj giving the new cobasis ε̄i; that is, ε̄i = āij ēj. Now take the exterior product of these two equations.
εi ∧ ε̄k = (aip ep)∧(ākj ēj) = aip ākj ep∧ēj
But the product of a basis element and its cobasis element is equal to the n-element of that basis. That is, εi ∧ ε̄k = δik E and ep ∧ ēj = δpj e, where E = ε1∧⋯∧εn and e = e1∧⋯∧en are the n-elements of the two bases. Substituting in the previous equation gives:

δik E = aip ākj δpj e

Using the properties of the Kronecker delta we can simplify the right side to give:

δik E = aij ākj e
This is precisely the relationship 2.29 derived in the section above for the expansion of a determinant in terms of cofactors. Hence we have shown that the components ākj of the induced cobasis transformation are indeed the cofactors of the akj.
εi = aij ej   ⟹   ε̄i = āij ēj
In sum: If a basis of the underlying linear space is transformed by a transformation whose components are aij, then its cobasis is transformed by the induced transformation whose components āij are the cofactors of the aij.
2.30
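The matrix form of this result — Σj aij ākj = δik D, with ākj the cofactors — can be verified numerically. Below is a small Python sketch of ours (not the book's Mathematica session) that builds the cofactor matrix by striking out rows and columns and checks the relation on a sample matrix.

```python
from itertools import permutations

def det(M):
    """Leibniz expansion: the coefficient of e1 ^ ... ^ en in the wedge of the rows."""
    total = 0
    for p in permutations(range(len(M))):
        sign = 1
        for i in range(len(p)):
            for j in range(i + 1, len(p)):
                if p[i] > p[j]:
                    sign = -sign
        prod = 1
        for i in range(len(p)):
            prod *= M[i][p[i]]
        total += sign * prod
    return total

def cofactor_matrix(M):
    """Entry (i, j): the determinant with row i and column j struck out, signed."""
    n = len(M)
    return [[(-1) ** (i + j) * det([[M[r][c] for c in range(n) if c != j]
                                    for r in range(n) if r != i])
             for j in range(n)] for i in range(n)]

A = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]
C = cofactor_matrix(A)
D = det(A)
# relation 2.29 in matrix form: sum_j a_ij * abar_kj = delta_ik * D
for i in range(3):
    for k in range(3):
        assert sum(A[i][j] * C[k][j] for j in range(3)) == (D if i == k else 0)
print("induced cobasis transformation = cofactor matrix; det =", D)
```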
The commands will be valid in an arbitrary-dimensional space, but we exhibit the 3-dimensional case as we proceed. 1 Reset all the defaults and declare the elements of the transformation as scalar symbols.
A; DeclareExtraScalarSymbols[a__]
{a, b, c, d, e, f, g, h, a__}
3 Construct a formula for the adjoint Cn of An (the matrix of cofactors) using Mathematica's inbuilt functions.
Cn_ := Transpose[Det[An] Inverse[An]]; MatrixForm[C3]
−a2,3 a3,2 + a2,2 a3,3    a2,3 a3,1 − a2,1 a3,3    −a2,2 a3,1 + a2,1 a3,2
a1,3 a3,2 − a1,2 a3,3    −a1,3 a3,1 + a1,1 a3,3    a1,2 a3,1 − a1,1 a3,2
−a1,3 a2,2 + a1,2 a2,3    a1,3 a2,1 − a1,1 a2,3    −a1,2 a2,1 + a1,1 a2,2
8 Expand and simplify the exterior products on both sides of the equation.
Fn_ := Expand[ExpandAndSimplifyExteriorProducts[Ln]] == Expand[Rn]; F3
True
am1 x1 + am2 x2 + ⋯ + amn xn = am
The Ci and C0 are therefore 1-elements in a linear space of dimension m. Adding the resulting equations then gives the system in the form:
x1 C1 + x2 C2 + ⋯ + xn Cn = C0
2.31
To obtain an equation from which the unknowns xi have been eliminated, it is only necessary to multiply the linear system through by the corresponding Ci. If m = n and a solution for xi exists, it is obtained by eliminating x1, …, xi−1, xi+1, …, xn; that is, by multiplying the linear system through by C1∧⋯∧Ci−1∧Ci+1∧⋯∧Cn:
xi (Ci∧C1∧⋯∧Ci−1∧Ci+1∧⋯∧Cn) = C0∧C1∧⋯∧Ci−1∧Ci+1∧⋯∧Cn
2.32
In this form we only have to calculate the (n−1)-element C1∧⋯∧Ci−1∧Ci+1∧⋯∧Cn once. An alternative form more reminiscent of Cramer's Rule is:
xi = (C1∧⋯∧Ci−1∧C0∧Ci+1∧⋯∧Cn) / (C1∧C2∧⋯∧Cn)
2.33
All the well-known properties of solutions to systems of linear equations proceed directly from the properties of the exterior products of the Ci .
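Equation 2.33 can be sketched numerically in a few lines of Python (an illustration of ours, with a tiny hand-rolled exterior product on index-tuple dictionaries; it is not the GrassmannAlgebra package). Each column of coefficients is encoded as a 1-element, and each unknown is a ratio of two n-element coefficients.

```python
from fractions import Fraction

def wedge(u, v):
    """Exterior product of multivectors stored as {sorted index tuple: coefficient}."""
    out = {}
    for I, a in u.items():
        for J, b in v.items():
            if set(I) & set(J):
                continue  # a repeated basis factor gives zero
            K = list(I + J)
            sign = 1
            for i in range(len(K)):      # sign of sorting the concatenated indices
                for j in range(i + 1, len(K)):
                    if K[i] > K[j]:
                        sign = -sign
            key = tuple(sorted(K))
            out[key] = out.get(key, 0) + sign * a * b
    return out

def vec(coeffs):
    """Encode a column of coefficients as a 1-element."""
    return {(i + 1,): Fraction(c) for i, c in enumerate(coeffs) if c != 0}

# system x*C1 + y*C2 + z*C3 = C0, built so that (x, y, z) = (1, 2, 3)
C1, C2, C3 = vec([1, 0, 1]), vec([0, 1, 1]), vec([1, 1, 0])
C0 = vec([4, 5, 3])
top = (1, 2, 3)
den = wedge(wedge(C1, C2), C3)[top]
x = wedge(wedge(C0, C2), C3)[top] / den   # equation 2.33 with i = 1
y = wedge(wedge(C1, C0), C3)[top] / den
z = wedge(wedge(C1, C2), C0)[top] / den
print(x, y, z)  # 1 2 3
```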
To solve this system, first declare a basis of dimension at least equal to the number of equations. In this example, we declare a 4-space so that we can add another equation later.
DeclareBasis[4]
{e1, e2, e3, e4}
Next define a 1-element for each unknown which encodes its coefficients, and one which encodes the constants
Ca = e1 + 2 e2 + e3;
Cb = −2 e1 + e3;
Cc = 3 e1 + 7 e2 + e3;
Cd = 4 e1 − 5 e2 + e3;
C0 = 2 e1 + 9 e2 + 8 e3;
Suppose we wish to eliminate a and d thus giving a relationship between b and c. To accomplish this we multiply the system equation through by Ca Cd .
(a Ca + b Cb + c Cc + d Cd)∧Ca∧Cd = C0∧Ca∧Cd
Or, since the terms involving a and d will obviously be eliminated by their product with Ca Cd , we have more simply:
(b Cb + c Cc)∧Ca∧Cd = C0∧Ca∧Cd
This can be put in a form similar to that of equations 2.32 and 2.33 by dividing through by the right side.
((b Cb + c Cc)∧Ca∧Cd) / (C0∧Ca∧Cd) = 1
Expanding and simplifying the numerator and denominator then gives the required relationship between b and c.
%[(b Cb + c Cc)∧Ca∧Cd] / %[C0∧Ca∧Cd]
(1/63)(27 b − 29 c)

Setting this ratio equal to 1 gives 27 b − 29 c = 63, which may be solved for b in terms of c.
2.10 Simplicity
The concept of simplicity
An important concept in the Grassmann algebra is that of simplicity. Earlier in the chapter we introduced the concept informally. Now we will discuss it in a little more detail.
An element is simple if it is the exterior product of 1-elements. We extend this definition to scalars by defining all scalars to be simple. Clearly also, since any n-element always reduces to a product of 1-elements, all n-elements are simple. Thus we see immediately that all 0-elements, 1-elements, and n-elements are simple. In the next section we show that all (n−1)-elements are also simple.

In 2-space, therefore, all elements of L0, L1, and L2 are simple. In 3-space all elements of L0, L1, L2 and L3 are simple. In higher dimensional spaces, elements of grade m with 2 ≤ m ≤ n−2 are therefore the only ones that may not be simple. In 4-space, the only elements that may not be simple are those of grade 2.

There is a straightforward way of testing the simplicity of 2-elements not shared by elements of higher grade. A 2-element is simple if and only if its exterior square is zero. (The exterior square of a is a∧a.) Since odd elements (elements of odd grade) anti-commute, the exterior square of an odd element will be zero, even if the element is not simple. An even element of grade 4 or higher may be of the form of the exterior product of a 1-element with a non-simple 3-element, whence its exterior square is zero without its being simple. We will return to a further discussion of simplicity from the point of view of factorization in Chapter 3: The Regressive Product.
Any two simple (n−1)-elements can be arranged to share a common simple (n−2)-element factor a, say, so that their sum is itself simple:

a∧b1 + a∧b2 = a∧(b1 + b2)

Any (n−1)-element can be expressed as the sum of simple (n−1)-elements. We can therefore prove the general case by supposing pairs of simple elements to be combined to form another simple element, until just one simple element remains.
Since we are supposing B to be simple, B∧B = 0. To see how this constrains the coefficients ai of the terms of B, we expand and simplify the product:
%[B∧B]
(2 a3 a4 − 2 a2 a5 + 2 a1 a6) e1∧e2∧e3∧e4
We thus see that the condition for a 2-element in a 4-dimensional space to be simple may be written:
a3 a4 − a2 a5 + a1 a6 = 0
2.34
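The exterior-square test is easy to sketch in Python. The helper below (ours, with the coefficient ordering a1 e1∧e2, a2 e1∧e3, a3 e1∧e4, a4 e2∧e3, a5 e2∧e4, a6 e3∧e4 assumed as in equation 2.34) returns the single surviving coefficient of B∧B in a 4-space: only complementary index pairs survive the product.

```python
def wedge_square_coeff(a):
    """Coefficient of e1^e2^e3^e4 in B^B, for the 2-element
    B = a1 e12 + a2 e13 + a3 e14 + a4 e23 + a5 e24 + a6 e34.
    Only the complementary pairs (12,34), (13,24), (14,23) survive,
    with signs +, -, + respectively, each occurring twice."""
    a1, a2, a3, a4, a5, a6 = a
    return 2 * (a1 * a6 - a2 * a5 + a3 * a4)

# e1^e2 + e3^e4 is the classic non-simple 2-element: exterior square nonzero
assert wedge_square_coeff([1, 0, 0, 0, 0, 1]) != 0
# (e1 + e3)^(e2 + e4) = e12 + e14 - e23 + e34 is simple: exterior square zero
assert wedge_square_coeff([1, 0, 1, -1, 0, 1]) == 0
print("simplicity test via the exterior square: ok")
```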
For the bivector B to be simple we require the product B∧B to be zero; hence each of its five coefficients must be zero simultaneously. We can extract the coefficients by using GrassmannAlgebra's ExtractCoefficients function, and then thread them into a list of five equations.
A = Thread[ExtractCoefficients[LBasis][B2] == {0, 0, 0, 0, 0}]
{2 a3 a5 − 2 a2 a6 + 2 a1 a8 == 0, 2 a4 a5 − 2 a2 a7 + 2 a1 a9 == 0,
 2 a4 a6 − 2 a3 a7 + 2 a1 a10 == 0, 2 a4 a8 − 2 a3 a9 + 2 a2 a10 == 0,
 2 a7 a8 − 2 a6 a9 + 2 a5 a10 == 0}
Finally, we can solve these five equations with Mathematica's inbuilt Solve function.
Solve[A, {a8, a9, a10}]
{{a8 → (a3 a5 − a2 a6)/a1, a9 → (a4 a5 − a2 a7)/a1, a10 → (a4 a6 − a3 a7)/a1}}
By inspection we see that these three solutions correspond to the first three equations. Consequently, from the generality of the expression for the bivector, we expect any three equations to suffice. In summary we can say that a 2-element in a 5-space is simple if any three of the following equations are satisfied:
a3 a5 − a2 a6 + a1 a8 = 0
a4 a5 − a2 a7 + a1 a9 = 0
a4 a6 − a3 a7 + a1 a10 = 0
a4 a8 − a3 a9 + a2 a10 = 0
a7 a8 − a6 a9 + a5 a10 = 0
2.35
The concept of simplicity will be explored in more detail in Chapter 3: The Regressive Product.
Clearly, a factor of X must be a linear combination of u, v, w, x, y and z. Let us call it f1. The si are scalar coefficients, which we declare as scalar symbols by entering S[s_].

S[s_]; f1 = s1 u + s2 v + s3 w + s4 x + s5 y + s6 z;
X1 = %[f1∧X == 0]
(−6 s1 − 6 s2 + 3 s3) u∧v∧w∧x∧y + (−4 s1 − 4 s2 + 2 s3) u∧v∧w∧x∧z +
(−6 s1 − 6 s2 + 3 s3) u∧v∧w∧y∧z + (17 s1 + 14 s2 + 3 s4 − 2 s5 + 3 s6) u∧v∧x∧y∧z +
(6 s1 + 14 s3 + 6 s4 − 4 s5 + 6 s6) u∧w∧x∧y∧z + (6 s2 − 17 s3 − 6 s4 + 4 s5 − 6 s6) v∧w∧x∧y∧z == 0
Since the coefficients of each of these terms must be zero, we now have some equations of constraint between the coefficients of f1 . In this case it is easy to see that the first 3 are identical, so we can satisfy them by substituting 2 s1 + 2 s2 for s3 .
X2 = (X1 /. s3 → 2 s1 + 2 s2) // Simplify
(17 s1 + 14 s2 + 3 s4 − 2 s5 + 3 s6) (u∧v∧x∧y∧z + 2 u∧w∧x∧y∧z − 2 v∧w∧x∧y∧z) == 0
Finally, we can substitute back into the expression for f1 to get an expression for a generic factor of X, which we call f2 .
f2 = f1 /. s3 → 2 s1 + 2 s2 /. s5 → (17 s1 + 14 s2 + 3 s4 + 3 s6)/2
u s1 + v s2 + w (2 s1 + 2 s2) + x s4 + z s6 + (y/2) (17 s1 + 14 s2 + 3 s4 + 3 s6)
X can now be constructed as the exterior product of any four linearly independent versions of f2. As an example, we create 1-elements xi by letting one si equal 1, and the others equal 0.

x1 = f2 /. {s1 → 1, s2 → 0, s4 → 0, s6 → 0}
u + 2 w + (17/2) y
x4 = f2 /. {s1 → 0, s2 → 0, s4 → 0, s6 → 1}
(3/2) y + z
This turns out to be half of X. So by doubling one of the xi , say x4 , we have the final result which we can verify.
%[X == (u + 2 w + (17/2) y)∧(v + 2 w + 7 y)∧(x + (3/2) y)∧(3 y + 2 z)]
True
Clearly this expression can only be zero if all the coefficients si are zero, which in turn implies f must be zero. Hence in this case, X has no 1-element factors. More generally, an element being non-simple does not imply that it has no 1-element factors. Here is a non-simple element with two 1-element factors.
X = (x∧y + u∧v)∧w∧z; f = s1 x + s2 y + s3 u + s4 v + s5 w + s6 z;
%[f∧X]
−s1 u∧v∧w∧x∧z − s2 u∧v∧w∧y∧z + s3 u∧w∧x∧y∧z + s4 v∧w∧x∧y∧z
The exterior quotient of a simple (m+k)-element a by a simple k-element b contained in a is defined to be the most general m-element contained in a (g, say) such that the exterior product of the quotient g with the denominator b yields the numerator a:

a / b = g   ⟺   g∧b = a

2.36

(Here a is of grade m+k, b of grade k, and g of grade m.)
Note the convention adopted for the order of the factors. In Chapter 9: Exploring Grassmann Algebra we shall generalize this definition of quotient to define the general division of Grassmann numbers.
Division by a 1-element
Consider the quotient of a simple (m+1)-element a by a 1-element b. Since a contains b we can write it as a1∧a2∧⋯∧am∧b. However, we could also have written this numerator in the more general form:

(a1 + t1 b)∧(a2 + t2 b)∧⋯∧(am + tm b)∧b

where the ti are arbitrary scalars. It is in this more general form that the numerator must be written before b can be 'divided out'. Thus the quotient may be written:

(a1∧a2∧⋯∧am∧b) / b = (a1 + t1 b)∧(a2 + t2 b)∧⋯∧(am + tm b)

2.37
Division by a k-element
Suppose now we have a quotient with a simple k-element denominator. Since the denominator is contained in the numerator, we can write the quotient as:

(a1∧a2∧⋯∧am)∧(b1∧b2∧⋯∧bk) / (b1∧b2∧⋯∧bk)

To prepare for the 'dividing out' of the b factors we rewrite the numerator in the more general form:

(a1 + t11 b1 + ⋯ + t1k bk)∧(a2 + t21 b1 + ⋯ + t2k bk)∧⋯∧(am + tm1 b1 + ⋯ + tmk bk)∧(b1∧b2∧⋯∧bk)

2.38

In the simplest case, where the numerator is a∧b1∧b2∧⋯∧bk with a a 1-element:

(a∧b1∧b2∧⋯∧bk) / (b1∧b2∧⋯∧bk) = a + t1 b1 + ⋯ + tk bk

2.39
Special cases
In the special cases where a or b is a scalar, the results are unique:

(a b1∧b2∧⋯∧bk) / (b1∧b2∧⋯∧bk) = a

2.40

(b a1∧a2∧⋯∧am) / b = a1∧a2∧⋯∧am

2.41
Whereas the exterior quotient of the basis trivector e1∧e2∧e3 by the basis vector e1 results in the variable bivector (e2 + e1 t1)∧(e3 + e1 t2):

ExteriorQuotient[(e1∧e2∧e3) / e1]
(e2 + e1 t1)∧(e3 + e1 t2)

ExteriorQuotient requires that the factors in the denominator are also specifically displayed in the numerator.

ExteriorQuotient[((x − y)∧(y − z)∧(z − x)) / ((z − x)∧(x − y))]
y − z + (x − y) t1 + (−x + z) t2
More generally, you can use ExteriorQuotient on any Grassmann expression involving exterior quotients.
ExteriorQuotient[a (e1∧e2∧e3)/e1 + b (e1∧e2∧e3)/(e1∧e2) + c (e1∧e2∧e3)/(e1∧e2∧e3)]
In order to explore the space of a simple m-element we can of course choose as basis for its underlying linear space any set of m independent 1-elements whose exterior product is congruent to the given m-element. Such sets span the space of the simple m-element, and by extension, we call each of them a span of the m-element. For example, if A = x∧y∧z, then we can say that a span of A is {x, y, z}. But of course, {x + y, y, z} and {a x, y, z} are also spans of A.

Practically however, the simple m-elements we are working with will often be available to us in some known factored form, that is, as a given exterior product of m 1-elements. In these cases it is convenient for computational purposes to choose these factors to span the element, and, by abuse of terminology, call the list of these factors the span of the element. (We can read "the span" to mean "the current working span" of the element.) For example, if A = x∧y∧z, then we say that the span of A is {x, y, z}. More generally, we define the k-span of a simple m-element as the list of all the essentially different exterior products of grade k formed from the span (or 1-span) of the element.

Given this provisory preamble, we can now say that the most straightforward way to conceive of the span of a simple m-element is as the basis of an m-space, where the basis is the list of factors of the m-element in order. Similarly, the k-span of a simple m-element may be conceived of as the basis of the concomitant exterior product space of grade k, and the k-cospan of the element may be conceived of as its cobasis.

The k-span of a simple m-element has no natural notion of invariance attached to it, because the same m-element, factorized differently, will generate different k-spans. On the other hand, the pair comprising the k-span of an m-element and its concomitant (m−k)-span will, if present together appropriately in a multilinear form, generate results invariant to different factorizations.
Many expressions in the algebra are multilinear forms, and an understanding of their properties is instructive. We will explore these forms in the sections to follow. But first, we solidify what these definitions mean by seeing how to compose a span.
Composing spans
In this section we discuss how to compose the span (1-span), k-span or k-cospan of a simple melement. As a concrete case, let us take a 4-element A. The compositions are independent of the dimension of the currently declared space, but any computations with them will require the dimension of the space to be at least that of the m-element. First we compose A. (By using ComposeSimpleForm, the ai are automatically added to the list of declared vector symbols.)
A = ComposeSimpleForm[a, 1]   (with a declared of grade 4)
a1∧a2∧a3∧a4
To compose the span of A, you can use the GrassmannAlgebra function ComposeSpan[A][1] or its alternative representation #1[A]. To compose the 2-span of A you can use ComposeSpan[A][2] or its alternative #2[A].
To compose the cospan of A, you can use ComposeCospan[A][1] or #^1[A]. To compose the k-cospan of A you can use ComposeCospan[A][2] or #^2[A]. (You can enter the 2 in #^2 as either a superscript or a power.) Note that while a k-span is a list of k-elements, a k-cospan is a list of (m−k)-elements.
{ComposeCospan[A][1], #^2[A]}
{{a2∧a3∧a4, −(a1∧a3∧a4), a1∧a2∧a4, −(a1∧a2∧a3)},
 {a3∧a4, −(a2∧a4), a2∧a3, a1∧a4, −(a1∧a3), a1∧a2}}
Since the exterior product is Listable (like all Grassmann products in GrassmannAlgebra), the exterior product of a k-span and its k-cospan gives a list of 4-elements, each of which simplifies to a copy of A.
#2[A] ∧ #^2[A]
{a1∧a2∧a3∧a4, a1∧a3∧(−(a2∧a4)), a1∧a4∧a2∧a3, a2∧a3∧a1∧a4, a2∧a4∧(−(a1∧a3)), a3∧a4∧a1∧a2}
In the discussion of multilinear forms to follow we will usually use the alternative representation # since its compact nature makes the composition of the forms clearer. Further syntax for using # follows. To compose the k-spans or k-cospans for several values of k you can enter the values of k enclosed in [ ]. For example
#[1,3][A]
{{a1, a2, a3, a4}, {a1∧a2∧a3, a1∧a2∧a4, a1∧a3∧a4, a2∧a3∧a4}}

#^[1,3][A]
{{a2∧a3∧a4, −(a1∧a3∧a4), a1∧a2∧a4, −(a1∧a2∧a3)}, {a4, −a3, a2, −a1}}
If you want to multiply the elements of a span by scalar coefficients you can use Mathematica's inbuilt listability attribute of Times, and simply multiply the span by the list of coefficients (making sure that the two lists are the same length). The alias S stands for DeclareExtraScalarSymbols, and is here used to declare any subscripted s as a scalar.
S[s_]; {s1, s2, s3, s4, s5, s6} #2[A]
{s1 a1∧a2, s2 a1∧a3, s3 a1∧a4, s4 a2∧a3, s5 a2∧a4, s6 a3∧a4}
If you want to multiply the elements of a span by scalar coefficients and then sum them, you can use Mathematica's Dot (.) function.
{s1, s2, s3, s4, s5, s6} . #2[A]
s1 a1∧a2 + s2 a1∧a3 + s3 a1∧a4 + s4 a2∧a3 + s5 a2∧a4 + s6 a3∧a4
If you want to use some other type of listable product between your elements (other than Times) and then sum them, you can use the GrassmannAlgebra alias for Mathematica's Total function S which simply sums the elements of any list.
To pick out the ith element in any of these lists you can use Mathematica's inbuilt Part function by giving a subscript to the list in the form ⟦i⟧. For example

#2[A]⟦3⟧
a1∧a4
Thus we can write A as the exterior product of any two corresponding terms from a k-span and a k-cospan.

%[A == #2[A]⟦3⟧ ∧ #^2[A]⟦3⟧]
True
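The span and cospan constructions themselves are easy to mimic in a few lines of Python (a sketch of ours, working purely with factor indices rather than symbolic 1-elements). Each k-cospan entry carries the sign that makes the wedge of corresponding span and cospan terms reproduce the original m-element.

```python
from itertools import combinations

def perm_sign(seq):
    """Sign of the permutation taking seq (distinct indices) to sorted order."""
    sign, s = 1, list(seq)
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            if s[i] > s[j]:
                sign = -sign
    return sign

def k_span(m, k):
    """k-span of a1 ^ ... ^ am: all sorted k-subsets of the factor indices."""
    return list(combinations(range(1, m + 1), k))

def k_cospan(m, k):
    """k-cospan: for each k-subset I, the signed complement (sign, J) chosen
    so that (span term) ^ (cospan term) = +a1 ^ ... ^ am."""
    out = []
    for I in k_span(m, k):
        J = tuple(i for i in range(1, m + 1) if i not in I)
        out.append((perm_sign(I + J), J))
    return out

print(k_span(4, 2))
print(k_cospan(4, 2))
```

Running this for m = 4, k = 2 lists the six 2-subsets and their signed complements, matching the signs shown in the 2-cospan display above (for example the span term a1∧a3 pairs with −(a2∧a4)).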
Example: Refactorizations
Let us suppose we have a simple element A for which we have a factorization, and that we wish to obtain a factorization that has a given 1-element X (which we know belongs to A) as one of the factors. Take again the example of the previous section.
A = (e4 − 2 e7)∧(3 e5 + e2)∧(e1 + 2 e2); X = e1 − 6 e5;
By taking the 2-span of A and multiplying each of the elements by X we get 3 candidates.
#2[A] ∧ X
{(e4 − 2 e7)∧(e2 + 3 e5)∧(e1 − 6 e5), (e4 − 2 e7)∧(e1 + 2 e2)∧(e1 − 6 e5), (e2 + 3 e5)∧(e1 + 2 e2)∧(e1 − 6 e5)}
−(e1∧e2∧e4) + 2 e1∧e2∧e7 + 3 e1∧e4∧e5 + 6 e1∧e5∧e7 + 6 e2∧e4∧e5 + 12 e2∧e5∧e7
Multilinear forms
In the chapters to follow we introduce various new Grassmann products in addition to the exterior product: the regressive, interior, inner, generalized, hypercomplex and Clifford products. In the computation of expressions involving these products we will often come across formulae which express the computation as a sum of terms where each term itself is a product. These sums must have the same natural invariance to transformations possessed by the original expression, and it turns out they are all examples of multilinear forms. For our purposes, a typical multilinear form is a function M@a1 , a2 , , am D of 1-elements ai which is linear in each of the ai . That is,
M[a1, a2, …, α1 b1 + α2 b2, …, am] = α1 M[a1, a2, …, b1, …, am] + α2 M[a1, a2, …, b2, …, am]
where the αi are scalars and the bi are also 1-elements. In this book however, because the M are usually viewed as products, we write them in their infix form, just as we have seen that Mathematica allows us to do with the exterior product: entering an expression in the argument form above displays it in its infix form.
Wedge[a1, a2, …, α1 b1 + α2 b2, …, am] = α1 Wedge[a1, a2, …, b1, …, am] + α2 Wedge[a1, a2, …, b2, …, am]
a1∧a2∧⋯∧(α1 b1 + α2 b2)∧⋯∧am = α1 a1∧a2∧⋯∧b1∧⋯∧am + α2 a1∧a2∧⋯∧b2∧⋯∧am
More specifically, we say that a product operation ⊗ is a linear product if it is both left and right distributive under addition, and permits scalars to be factored out.

α⊗(β + γ) = α⊗β + α⊗γ   (α of grade m; β, γ of grade k)

2.42
In the chapters to follow, the multilinear forms of interest may involve two, or sometimes three, different linear product operations. To represent these we choose three initially undefined infix operators from Mathematica's library of symbols: ⊗, ⊕, and ⊙ (CircleTimes, CirclePlus, CircleDot). We make them Listable (as all the Grassmann products are), and make them behave linearly within any functions we define intended to operate on expressions involving them.
As a beginning example, consider a simple 3-element A a1 a2 a3 in a space of at least 3 dimensions, the three operations ", #, and , and two arbitrary Grassmann expressions G1 , and G2 . From these we can form the expression
F = (G1 ⊗ a1) ⊙ (G2 ⊕ (a2∧a3))
Now if we replace a2, say, by α1 b1 + α2 b2, we can expand the expression to get a sum of two expressions.

F = (G1 ⊗ a1) ⊙ (G2 ⊕ ((α1 b1 + α2 b2)∧a3))
  = (G1 ⊗ a1) ⊙ (G2 ⊕ (α1 b1∧a3 + α2 b2∧a3))
  = (G1 ⊗ a1) ⊙ (α1 G2 ⊕ (b1∧a3) + α2 G2 ⊕ (b2∧a3))
  = (G1 ⊗ a1) ⊙ (α1 G2 ⊕ (b1∧a3)) + (G1 ⊗ a1) ⊙ (α2 G2 ⊕ (b2∧a3))
  = α1 (G1 ⊗ a1) ⊙ (G2 ⊕ (b1∧a3)) + α2 (G1 ⊗ a1) ⊙ (G2 ⊕ (b2∧a3))
Since the individual product operations are linear, the whole expression is multilinear: substituting for any of the Gi or ai with a sum of terms will lead to a sum of expressions. In later applications these linear products will of course be Grassmann products (exterior, regressive, interior, inner, generalized, hypercomplex or Clifford), so the multilinear form F will simply be a Grassmann expression (an element of the algebra under consideration).
Defining m:k-forms
In this section we define in more detail the particular types of multilinear forms we will find useful later in the book. We will find it convenient to call them m:k-forms because, for any given simple m-element, there are forms of grade k, where k runs from 0 to m. To begin consider a simple melement A.
A = a1 ∧ a2 ∧ ⋯ ∧ ak ∧ ak+1 ∧ ⋯ ∧ am    (2.43)

The k-span and k-cospan of A are denoted 𝔰k[A] and 𝔠k[A]. Each of these lists contains n = (m choose k) elements, of grade k and grade m−k respectively. The i-th element of either list is denoted with a superscript (i).
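The combinatorics of the k-span and k-cospan can be sketched in Python (a toy model with hypothetical helper names, not the GrassmannAlgebra package): span elements are index tuples, and each cospan entry carries the permutation sign that makes span[i] ∧ cospan[i] reproduce the original m-element.

```python
from itertools import combinations
from math import comb

def k_span(m, k):
    """Index tuples of the grade-k factors of a simple m-element."""
    return list(combinations(range(m), k))

def k_cospan(m, k):
    """Signed complementary (m-k)-element index tuples: the sign makes
    span[i] ^ cospan[i] reproduce the original m-element."""
    out = []
    for s in combinations(range(m), k):
        c = tuple(i for i in range(m) if i not in s)
        perm = list(s) + list(c)
        sign = 1
        for i in range(len(perm)):          # parity of the permutation
            for j in range(i + 1, len(perm)):
                if perm[i] > perm[j]:
                    sign = -sign
        out.append((sign, c))
    return out

# Each list has binomial(m, k) entries, of grades k and m - k respectively.
assert len(k_span(3, 1)) == comb(3, 1) == 3
# For A = a1 ^ a2 ^ a3 the 1-cospan is {a2 ^ a3, -(a1 ^ a3), a1 ^ a2}.
assert k_cospan(3, 1) == [(1, (1, 2)), (-1, (0, 2)), (1, (0, 1))]
```

The signed cospan here matches the list displayed in the next section for A = a1 ∧ a2 ∧ a3.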
2.45
Composing m:k-forms
All the product operations we will be dealing with are defined with the Mathematica attribute Listable. This means that to build up expressions, you only need to enter a product of lists (with the same number of elements in each), or the product of a single element and a list, to get a list of the products. To see how this works in building up forms, we can take the simple case of A = a1 ∧ a2 ∧ a3 and show the steps as follows.
{𝔰1[A], 𝔠1[A]}
{{a1, a2, a3}, {a2 ∧ a3, −(a1 ∧ a3), a1 ∧ a2}}

{G1 ⊕ 𝔰1[A], G2 ⊗ 𝔠1[A]}
{{G1 ⊕ a1, G1 ⊕ a2, G1 ⊕ a3}, {G2 ⊗ (a2 ∧ a3), G2 ⊗ (−(a1 ∧ a3)), G2 ⊗ (a1 ∧ a2)}}

(G1 ⊕ 𝔰1[A]) ⊙ (G2 ⊗ 𝔠1[A])
{(G1 ⊕ a1) ⊙ (G2 ⊗ (a2 ∧ a3)), (G1 ⊕ a2) ⊙ (G2 ⊗ (−(a1 ∧ a3))), (G1 ⊕ a3) ⊙ (G2 ⊗ (a1 ∧ a2))}
If at any stage you want to sum the terms of a list, you can apply Mathematica's Total function, or its GrassmannAlgebra alias S.
Because all our product operations are defined to be Listable, you can compose the forms of the previous section without explicitly indexing the terms (i) or specifying their number (n). All you need do is eliminate the index and replace the summation sign with S.
S[(G1 ⊕ 𝔰k[A]) ⊙ (G2 ⊗ 𝔠k[A])]
S[(𝔰k[A]) ⊙ (𝔠k[A])]
S[(G1 ⊕ 𝔰k[A]) ⊙ (𝔠k[A])]
S[(𝔰k[A]) ⊙ (G2 ⊗ 𝔠k[A])]
For example if A = a1 a2 a3 , the following two expressions give the same result.
A = a1 ∧ a2 ∧ a3;

(G1 ⊕ a1) ⊙ (G2 ⊗ (a2 ∧ a3)) + (G1 ⊕ a2) ⊙ (G2 ⊗ (−(a1 ∧ a3))) + (G1 ⊕ a3) ⊙ (G2 ⊗ (a1 ∧ a2))
Note that, because there are a number of product operators each with its own precedence, it is advisable to ensure the expected behaviour by liberal use of parentheses in the input expression.
ExpandAndSimplifyForm[F]
a c (G1 ⊕ x) ⊙ (G2 ⊗ z) + b c (G1 ⊕ y) ⊙ (G2 ⊗ z)
A = a1 ∧ a2 ∧ a3; F3,1 = S[(G1 ⊕ 𝔰1[A]) ⊙ (G2 ⊗ 𝔠1[A])]
(G1 ⊕ a1) ⊙ (G2 ⊗ (a2 ∧ a3)) + (G1 ⊕ a2) ⊙ (G2 ⊗ (−(a1 ∧ a3))) + (G1 ⊕ a3) ⊙ (G2 ⊗ (a1 ∧ a2))
Now consider the expression F involving the factors of A and make the same substitution.
F = (G1 ⊗ a1) ⊙ (G2 ⊕ (a2 ∧ a3));
Fa = (G1 ⊗ a1) ⊙ (G2 ⊕ (a2 ∧ a3)) /. a1 → a1 + a a2
(G1 ⊗ (a1 + a a2)) ⊙ (G2 ⊕ (a2 ∧ a3))
Applying ExpandAndSimplifyForm to Fa effects the expansion and extracts the scalar coefficient.
ExpandAndSimplifyForm[Fa]
(G1 ⊗ a1) ⊙ (G2 ⊕ (a2 ∧ a3)) + a (G1 ⊗ a2) ⊙ (G2 ⊕ (a2 ∧ a3))
If the form F had been invariant under this transformation, as the 3-element A was, Fa would have simplified back again to F. But Fa now has an extra term, which we notice displays a2 twice, but in different parts of the product. This separation of the two occurrences of a2 means that the antisymmetry of the exterior product cannot reduce their product to zero. This suggests that we should antisymmetrize the original form F to create a new form Fb by adding a term featuring the same factors ai of A, but reordered to bring a2 to first place. If we do this we find Fb is invariant under the substitution a1 → a1 + a a2.
Fb = (G1 ⊗ a1) ⊙ (G2 ⊕ (a2 ∧ a3)) + (G1 ⊗ a2) ⊙ (G2 ⊕ (−(a1 ∧ a3)));
Fc = Fb /. a1 → a1 + a a2
(G1 ⊗ a2) ⊙ (G2 ⊕ (−((a1 + a a2) ∧ a3))) + (G1 ⊗ (a1 + a a2)) ⊙ (G2 ⊕ (a2 ∧ a3))

ExpandAndSimplifyForm[Fb == Fc]
True
Of course, if we want a form which is antisymmetric with respect to substitution of all of its factors ai, we will need to include all the essentially different decompositions of the 3-element into a 1-element and a 2-element.
Fd = (G1 ⊗ a1) ⊙ (G2 ⊕ (a2 ∧ a3)) + (G1 ⊗ a2) ⊙ (G2 ⊕ (−(a1 ∧ a3))) + (G1 ⊗ a3) ⊙ (G2 ⊕ (a1 ∧ a2));
Replacing a1 by a1 + a a2 + b a3 now leaves this form invariant.
Fe = Fd /. a1 → a1 + a a2 + b a3
(G1 ⊗ a2) ⊙ (G2 ⊕ (−((a1 + a a2 + b a3) ∧ a3))) + (G1 ⊗ a3) ⊙ (G2 ⊕ ((a1 + a a2 + b a3) ∧ a2)) + (G1 ⊗ (a1 + a a2 + b a3)) ⊙ (G2 ⊕ (a2 ∧ a3))

ExpandAndSimplifyForm[Fd == Fe]
True
The form Fd may be recognized as the m:k-form F3,1 introduced in the previous section.
F3,1 = S[(G1 ⊕ 𝔰1[A]) ⊙ (G2 ⊗ 𝔠1[A])]
(G1 ⊕ a1) ⊙ (G2 ⊗ (a2 ∧ a3)) + (G1 ⊕ a2) ⊙ (G2 ⊗ (−(a1 ∧ a3))) + (G1 ⊕ a3) ⊙ (G2 ⊗ (a1 ∧ a2))
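Since the two linear products above are left unspecified, we can sanity-check the invariance of the antisymmetrized form numerically with a concrete (and entirely hypothetical) choice of products in a 3-space: model the exterior product of two vectors by the cross product, both linear products by dot products, and the juxtaposition product by ordinary multiplication of the resulting scalars. A minimal Python sketch (not the GrassmannAlgebra package):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def F31(G1, G2, a1, a2, a3):
    """Fully antisymmetrized 3:1-form: the 1-span of A paired with
    the signed 1-cospan {a2 ^ a3, -(a1 ^ a3), a1 ^ a2}."""
    return (dot(G1, a1) * dot(G2, cross(a2, a3))
            - dot(G1, a2) * dot(G2, cross(a1, a3))
            + dot(G1, a3) * dot(G2, cross(a1, a2)))

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

def scale(t, u):
    return tuple(t * x for x in u)

G1, G2 = (2, -1, 5), (3, 4, -2)
a1, a2, a3 = (1, 0, 2), (0, 3, 1), (4, 1, 0)

# Substituting a1 -> a1 + 7 a2 - 3 a3 refactorizes A but leaves the form unchanged.
a1s = add(a1, add(scale(7, a2), scale(-3, a3)))
assert F31(G1, G2, a1s, a2, a3) == F31(G1, G2, a1, a2, a3)
```

The single-term form F, by contrast, does change under the same substitution, which is exactly the point of the antisymmetrization.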
Properties of m:k-forms
To this point we have discussed simple m-elements A and the m:k-forms Fm,k based on them.
A = a1 ∧ a2 ∧ ⋯ ∧ ak ∧ ak+1 ∧ ⋯ ∧ am

Fm,k == S[(G1 ⊕ 𝔰k[A]) ⊙ (G2 ⊗ 𝔠k[A])]
Our interest is to show that any transformation which leaves A invariant, also leaves any of its associated m:k-forms Fm,k invariant. These transformations are equivalent to refactorizations of the m-element: we want to show that an m:k-form is unchanged no matter what factorization of A we use. For concreteness initially, let us take A to be a 3-element as we have in previous sections.
A = a1 ∧ a2 ∧ a3;

A is of course invariant to any of its factorizations. Any factorization Aa of A is congruent to the exterior product of three independent linear combinations of the ai.

Aa = (a1,1 a1 + a1,2 a2 + a1,3 a3) ∧ (a2,1 a1 + a2,2 a2 + a2,3 a3) ∧ (a3,1 a1 + a3,2 a2 + a3,3 a3);
Expanding and simplifying Aa gives, as expected, the original factorization A multiplied by the determinant of the coefficients. (But first we need to declare the coefficients as scalars.)
DeclareExtraScalars[a__];
Ab = 𝒢[Aa]
(−a1,3 a2,2 a3,1 + a1,2 a2,3 a3,1 + a1,3 a2,1 a3,2 − a1,1 a2,3 a3,2 − a1,2 a2,1 a3,3 + a1,1 a2,2 a3,3) a1 ∧ a2 ∧ a3

det = Det[{{a1,1, a1,2, a1,3}, {a2,1, a2,2, a2,3}, {a3,1, a3,2, a3,3}}]
−a1,3 a2,2 a3,1 + a1,2 a2,3 a3,1 + a1,3 a2,1 a3,2 − a1,1 a2,3 a3,2 − a1,2 a2,1 a3,3 + a1,1 a2,2 a3,3
Any set of coefficients whose determinant is unity will provide a transformation under which A is invariant. Now let us compose the range of 3:k-forms for A, and for Aa.
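The determinant factor can be checked independently of Mathematica with a toy exterior algebra in Python, representing multivectors as dictionaries from sorted index tuples to coefficients (the names wedge, vec, and det are ours, not the package's):

```python
from itertools import permutations

def det(M):
    """Leibniz-formula determinant."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign, term = 1, 1
        for i in range(n):
            term *= M[i][p[i]]
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        total += sign * term
    return total

def wedge(x, y):
    """Exterior product of multivectors {sorted index tuple: coefficient}."""
    out = {}
    for ix, cx in x.items():
        for iy, cy in y.items():
            idx = list(ix + iy)
            if len(set(idx)) < len(idx):
                continue  # repeated factor: a ^ a == 0
            sign = 1
            for i in range(len(idx)):       # bubble sort, tracking the sign
                for j in range(len(idx) - 1 - i):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * cx * cy
    return {k: v for k, v in out.items() if v != 0}

def vec(coords):
    """A 1-element with the given coordinates in the basis a1, a2, a3."""
    return {(i,): c for i, c in enumerate(coords) if c != 0}

M = [[1, 2, 0], [0, 1, 3], [2, 0, 1]]
Aa = wedge(wedge(vec(M[0]), vec(M[1])), vec(M[2]))
# The transformed factorization is the original 3-element times det(M).
assert Aa == {(0, 1, 2): det(M)}
assert det(M) == 13
```

Any coefficient matrix with det(M) == 1 thus reproduces A exactly, which is the invariance used in the text.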
F3,k = Table[S[(G1 ⊕ 𝔰i[A]) ⊙ (G2 ⊗ 𝔠i[A])], {i, 0, 3}] // ExpandAndSimplifyForm
{(G1 ⊕ 1) ⊙ (G2 ⊗ (a1 ∧ a2 ∧ a3)),
 (G1 ⊕ a1) ⊙ (G2 ⊗ (a2 ∧ a3)) − (G1 ⊕ a2) ⊙ (G2 ⊗ (a1 ∧ a3)) + (G1 ⊕ a3) ⊙ (G2 ⊗ (a1 ∧ a2)),
 (G1 ⊕ (a1 ∧ a2)) ⊙ (G2 ⊗ a3) − (G1 ⊕ (a1 ∧ a3)) ⊙ (G2 ⊗ a2) + (G1 ⊕ (a2 ∧ a3)) ⊙ (G2 ⊗ a1),
 (G1 ⊕ (a1 ∧ a2 ∧ a3)) ⊙ (G2 ⊗ 1)}
Fa,3,k = (Table[S[(G1 ⊕ 𝔰i[Aa]) ⊙ (G2 ⊗ 𝔠i[Aa])], {i, 0, 3}] // ExpandAndSimplifyForm // Simplify) /. {Simplify[det] → $, Simplify[−det] → −$}
{$ (G1 ⊕ 1) ⊙ (G2 ⊗ (a1 ∧ a2 ∧ a3)),
 $ ((G1 ⊕ a1) ⊙ (G2 ⊗ (a2 ∧ a3)) − (G1 ⊕ a2) ⊙ (G2 ⊗ (a1 ∧ a3)) + (G1 ⊕ a3) ⊙ (G2 ⊗ (a1 ∧ a2))),
 $ ((G1 ⊕ (a1 ∧ a2)) ⊙ (G2 ⊗ a3) − (G1 ⊕ (a1 ∧ a3)) ⊙ (G2 ⊗ a2) + (G1 ⊕ (a2 ∧ a3)) ⊙ (G2 ⊗ a1)),
 $ (G1 ⊕ (a1 ∧ a2 ∧ a3)) ⊙ (G2 ⊗ 1)}
Here we have used Mathematica's inbuilt Simplify to extract the determinant of the transformation and denote it symbolically by $. If we divide by $, we can easily see that this list of forms is equal to the original list of forms.
F3,k == Fa,3,k / $
True
Thus transformations (refactorizations) which leave the 3-element A invariant also leave all its 3:k-forms invariant.
Since the exterior product is Listable, simplifying the exterior product of a complete span with its complete cospan gives lists of the original m-element.
A = u ∧ x ∧ y ∧ z; 𝒢[𝔰[A] ∧ 𝔠[A]]
{{u ∧ x ∧ y ∧ z},
 {u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z},
 {u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z},
 {u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z},
 {u ∧ x ∧ y ∧ z}}
Taking the exterior product of a complete cospan with a complete span gives a similar result, but this time the exterior products of the k-cospan elements with the k-span elements include a sign (−1)^(k(m−k)).
𝒢[𝔠[A] ∧ 𝔰[A]]
{{u ∧ x ∧ y ∧ z},
 {−(u ∧ x ∧ y ∧ z), −(u ∧ x ∧ y ∧ z), −(u ∧ x ∧ y ∧ z), −(u ∧ x ∧ y ∧ z)},
 {u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z},
 {−(u ∧ x ∧ y ∧ z), −(u ∧ x ∧ y ∧ z), −(u ∧ x ∧ y ∧ z), −(u ∧ x ∧ y ∧ z)},
 {u ∧ x ∧ y ∧ z}}
If m is odd then k(m-k) will be even for all k, and hence there will be no change in sign. If m is even, then k(m-k) will alternate between even and odd as k ranges from 0 to m. Thus we can construct a list of signs dependent on the parity of m.
r[m_] := Mod[Range[0, m], 2];
rr[m_] := If[EvenQ[m], r[m], 0]
For example
{(−1)^rr[1], (−1)^rr[2], (−1)^rr[3], (−1)^rr[4]}
{1, {1, −1, 1}, 1, {1, −1, 1, −1, 1}}
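The parity argument behind rr can be checked directly in Python (a sketch; rr and rr_list are our names): for even m the parity of k(m−k) equals the parity of k, since k(m−k) = km − k² ≡ k (mod 2).

```python
def rr(m, k):
    """Parity of k(m - k), the exponent governing the cospan-span sign."""
    return (k * (m - k)) % 2

def rr_list(m):
    """Sign list (-1)^rr as in the text: all 1 for odd m, alternating for even m."""
    return [(-1) ** rr(m, k) for k in range(m + 1)]

# Odd m: k(m - k) is even for every k, so there are no sign changes.
assert rr_list(1) == [1, 1] and rr_list(3) == [1, 1, 1, 1]
# Even m: k(m - k) has the parity of k, so the signs alternate.
assert rr_list(2) == [1, -1, 1]
assert rr_list(4) == [1, -1, 1, -1, 1]
```

The lists for m = 2 and m = 4 match the Mathematica output above (the scalar 1 displayed for odd m corresponds to the constant sign).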
With this sign construction, we can now determine a relationship between products of complete spans and cospans. Below we verify the identity for m ranging from 1 to 5.
𝔹6; Table[𝒢[𝔰[A] ∧ 𝔠[A] == (−1)^rr[m] (𝔠[A] ∧ 𝔰[A])], {m, 1, 5}]
{True, True, True, True, True}

(Here A denotes a general simple m-element.)
In the more general case of a form based on the complete span and cospan of a simple m-element A, we can obtain a similar identity provided we reverse all the lists; that is, we reverse each list of elements and the list of these lists. To simplify this two-level reversing process we define a function ReverseAll with alias R.
As might be expected, the identity remains valid no matter to which form the sign change (−1)^rr[m] or the ReverseAll operation is applied. Let
F1[m_] := (G1 ⊕ 𝔰[A]) ⊙ (G2 ⊗ 𝔠[A])

and, with the span and cospan interchanged,

F2[m_] := (G1 ⊕ 𝔠[A]) ⊙ (G2 ⊗ 𝔰[A])
To verify these identities we first define some functions Ji[m] representing them, which simplify the terms, and then tabulate the combined results for m ranging from 1 to 5.
J1[m_] := ℰ[F1[m]] == ℰ[R[(−1)^rr[m] F2[m]]] == ℰ[(−1)^rr[m] R[F2[m]]];
J2[m_] := ℰ[F2[m]] == ℰ[R[(−1)^rr[m] F1[m]]] == ℰ[(−1)^rr[m] R[F1[m]]];
J3[m_] := ℰ[R[F1[m]]] == ℰ[(−1)^rr[m] F2[m]];
J4[m_] := ℰ[R[F2[m]]] == ℰ[(−1)^rr[m] F1[m]];
Table[And[J1[m], J2[m], J3[m], J4[m]], {m, 1, 5}]
{True, True, True, True, True}
The element A* ∧ C ∧ B* is called the union of A and B. Two elements A and B are said to be disjoint if C is of grade 0, that is, a scalar: thus A ∧ B ≠ 0. The formulae that we are going to develop remain valid in this case, so we will not need to consider it further. The first thing to note is that defining the union and intersection only defines them up to congruence, because the defining relations remain valid if we multiply and divide the factors in the formulae by an arbitrary scalar.
A == (A*/c) ∧ (c C);  B == (c C) ∧ (B*/c);  (A*/c) ∧ (c C) ∧ (B*/c) ≠ 0
Thus now, if c C is the intersection, (1/c) A* ∧ C ∧ B* is the union. The observation that the union and intersection are only defined up to congruence leads us to consider how we might develop a formula for them which is not dependent on an arbitrary congruence factor. Suppose ∘ is a linear product as discussed in the previous section, not equal to the exterior product, but otherwise arbitrary. Then the product of A and B can be written as the product of their union and intersection, and the congruence factors cancel.
A ∘ B == (A* ∧ C) ∘ (B* ∧ C) == (A* ∧ C ∧ B*) ∘ C

This formula can also be written as

A ∘ (B* ∧ C) == (A ∧ B*) ∘ C    (2.52)
Suppose B is of grade m and B* is of grade k; then C is of grade m−k. To compute the union A ∧ B* we need to 'partition' B into such k- and (m−k)-elements, and then find the maximal value of k (equal to k*, say) for which A ∧ B* is not zero. To do this partitioning, B has to be simple, and we need to know some factorization of it. If we partition B using its span and cospan for various k and for the given factorization, then our formula becomes a special case of an m:k-form, and invariance of the result to the actual factorization of B is guaranteed. Hence we write the formula for the product of the possibly intersecting simple elements A and B in terms of their union and intersection as
A ∘ B == S[(A ∧ 𝔰k[B]) ∘ 𝔠k[B]]    (2.53)
Example: 2:k-forms
A = a1 ∧ a2 ∧ g1; B = b1 ∧ g1;
If we simply compose a table of all the 2:k-forms (k equal to 0, 1, and 2 in this case) without simplifying the forms, we get the three forms
T = Table[S[(A ∧ 𝔰k[B]) ∘ 𝔠k[B]], {k, 0, 2}]; TableForm[T]
(a1 ∧ a2 ∧ g1 ∧ 1) ∘ (b1 ∧ g1)
(a1 ∧ a2 ∧ g1 ∧ b1) ∘ g1 + (a1 ∧ a2 ∧ g1 ∧ g1) ∘ (−b1)
(a1 ∧ a2 ∧ g1 ∧ b1 ∧ g1) ∘ 1
Clearly, when simplified, the first form (k = 0) is equal to A ∘ B, and the third form (k = 2) is zero. The second form (k = 1) reduces to a single term, giving us the expected expression for the union and intersection. To simplify a form, or a list of forms, we use ExpandAndSimplifyForm, or its alias ℰ.
(Note that ℰ has automatically put the factors in the second form into canonical order, based in this case on the alphabetical order of the symbols used.)
Example: 3:k-forms
Had B also been a 3-element, we would have generated four forms.
A = a1 ∧ a2 ∧ g1; B = b1 ∧ b2 ∧ g1;
T = Table[S[(A ∧ 𝔰k[B]) ∘ 𝔠k[B]], {k, 0, 3}]; TableForm[T]
(a1 ∧ a2 ∧ g1 ∧ 1) ∘ (b1 ∧ b2 ∧ g1)
(a1 ∧ a2 ∧ g1 ∧ b1) ∘ (b2 ∧ g1) + (a1 ∧ a2 ∧ g1 ∧ b2) ∘ (−(b1 ∧ g1)) + (a1 ∧ a2 ∧ g1 ∧ g1) ∘ (b1 ∧ b2)
(a1 ∧ a2 ∧ g1 ∧ b1 ∧ b2) ∘ g1 + (a1 ∧ a2 ∧ g1 ∧ b1 ∧ g1) ∘ (−b2) + (a1 ∧ a2 ∧ g1 ∧ b2 ∧ g1) ∘ b1
(a1 ∧ a2 ∧ g1 ∧ b1 ∧ b2 ∧ g1) ∘ 1

ℰ[T] // TableForm
(a1 ∧ a2 ∧ g1) ∘ (b1 ∧ b2 ∧ g1)
−(a1 ∧ a2 ∧ b1 ∧ g1) ∘ (b2 ∧ g1) + (a1 ∧ a2 ∧ b2 ∧ g1) ∘ (b1 ∧ g1)
(a1 ∧ a2 ∧ b1 ∧ b2 ∧ g1) ∘ g1
0
Example: 2:k-forms
We begin with the same elements as in the first example above. But now we express all the 1-element factors in terms of the current basis. The process is independent of the actual basis, so we can choose it arbitrarily for the example, say an 8-space.
𝔹8;
We compose A as the product of three factors to explicitly display the intersection as the last factor. But the algorithm does not need to have A in its factored form, so we expand and simplify before applying the formula.
A = 𝒢[(e4 − 2 e7) ∧ (3 e5 + e2) ∧ (e1 + 2 e2)]
−(e1 ∧ e2 ∧ e4) + 2 e1 ∧ e2 ∧ e7 + 3 e1 ∧ e4 ∧ e5 + 6 e1 ∧ e5 ∧ e7 + 6 e2 ∧ e4 ∧ e5 + 12 e2 ∧ e5 ∧ e7
The second element B must be supplied to the algorithm in factored form, otherwise its spans and cospans cannot be computed. However the factorization can be arbitrary, so we expand it and refactor it differently so that it does not explicitly display the intersection we have chosen for the example.
B = ExteriorFactorize[𝒢[(2 e3 − e2) ∧ (e1 + 2 e2)]]
(e1 + 4 e3) ∧ (e2 − 2 e3)
In this case we know that k equal to 1 will give us the result we are looking for.
F1 = ℰ[S[(A ∧ 𝔰1[B]) ∘ 𝔠1[B]]]
2 (e1 ∧ e2 ∧ e3 ∧ e4) ∘ e1 + 4 (e1 ∧ e2 ∧ e3 ∧ e4) ∘ e2 − 4 (e1 ∧ e2 ∧ e3 ∧ e7) ∘ e1 − 8 (e1 ∧ e2 ∧ e3 ∧ e7) ∘ e2 − 3 (e1 ∧ e2 ∧ e4 ∧ e5) ∘ e1 − 6 (e1 ∧ e2 ∧ e4 ∧ e5) ∘ e2 − 6 (e1 ∧ e2 ∧ e5 ∧ e7) ∘ e1 − 12 (e1 ∧ e2 ∧ e5 ∧ e7) ∘ e2 + 6 (e1 ∧ e3 ∧ e4 ∧ e5) ∘ e1 + 12 (e1 ∧ e3 ∧ e4 ∧ e5) ∘ e2 + 12 (e1 ∧ e3 ∧ e5 ∧ e7) ∘ e1 + 24 (e1 ∧ e3 ∧ e5 ∧ e7) ∘ e2 + 12 (e2 ∧ e3 ∧ e4 ∧ e5) ∘ e1 + 24 (e2 ∧ e3 ∧ e4 ∧ e5) ∘ e2 + 24 (e2 ∧ e3 ∧ e5 ∧ e7) ∘ e1 + 48 (e2 ∧ e3 ∧ e5 ∧ e7) ∘ e2
Of course, this is in expanded form. We would like to factorize it to display the union and intersection explicitly. We can do this with GrassmannAlgebra's FactorizeForm.
F2 = FactorizeForm[F1]
((e1 − 6 e5) ∧ (e2 + 3 e5) ∧ (e3 + (3 e5)/2) ∧ (e4 − 2 e7)) ∘ (2 e1 + 4 e2)
The intersection displayed here is congruent to the factor that we introduced as our 'hidden' intersection in the A and B supplied to the form, and is therefore a correct result as any congruent factor would be. Although the union does not explicitly display the intersection, we can verify that the intersection is indeed a factor of it, and that it is also a factor of A and B as supplied to the formula. Let U be the union and J be the intersection.
U = (e1 − 6 e5) ∧ (e2 + 3 e5) ∧ (e3 + (3 e5)/2) ∧ (e4 − 2 e7); J = 2 e1 + 4 e2;
All the other factors of the original A and B are also easily verified to belong to U.
𝒢[U ∧ {(e4 − 2 e7), (3 e5 + e2), (2 e3 − e2), (e1 + 4 e3), (e2 − 2 e3)}]
{0, 0, 0, 0, 0}
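The union and intersection computed above can be cross-checked with ordinary linear algebra, since for simple elements they correspond to the join and meet of the underlying subspaces. A Python sketch over the rationals (rank and lin are our helper names):

```python
from fractions import Fraction

def rank(rows):
    """Row rank by Gauss-Jordan elimination over the rationals."""
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def lin(*terms):
    """Linear combination of basis vectors: lin((c, i), ...) = sum of c * e_i."""
    v = [0] * 8
    for c, i in terms:
        v[i - 1] += c
    return v

A = [lin((1, 4), (-2, 7)), lin((3, 5), (1, 2)), lin((1, 1), (2, 2))]  # e4-2e7, 3e5+e2, e1+2e2
B = [lin((2, 3), (-1, 2)), lin((1, 1), (2, 2))]                       # 2e3-e2, e1+2e2
J = lin((1, 1), (2, 2))                                               # e1+2e2

# dim(intersection) = dim A + dim B - dim(union) = 3 + 2 - 4 = 1 ...
assert rank(A) == 3 and rank(B) == 2 and rank(A + B) == 4
# ... and e1 + 2 e2 spans it: adjoining it to either span does not raise the rank.
assert rank(A + [J]) == 3 and rank(B + [J]) == 2
```

This confirms that the 'hidden' intersection e1 + 2 e2 (congruent to 2 e1 + 4 e2 above) lies in both subspaces and that their join is 4-dimensional, in agreement with the factorized form F2.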
Example 1
We take the simplest example with A non-simple, and a 1-element intersection g1 , which does not occur in the non-simple 2-element factor of A.
A = (a1 ∧ a2 + a3 ∧ a4) ∧ a5 ∧ g1; B = b1 ∧ g1;
Since the intersection is a 1-element, and B is a 2-element, we can get the union and intersection from the 1-span and 1-cospan of B.
Upon simplification, the second term is clearly zero, leading to the expected result as the first term, and showing, in this case, that the union is not simple.
Example 2
Now let B contain one of the 1-element factors which occurs in the non-simple 2-element factor of A, say a1 .
A = (a1 ∧ a2 + a3 ∧ a4) ∧ a5 ∧ g1; B = a1 ∧ g1;
Upon simplification, the second term is again zero, but the first term, the union, is now simple. Thus we conclude that under some circumstances (Example 1) the union and intersection formula will give straightforwardly interpretable results. But in others (Example 2) one would need to pay special attention to their interpretation.
Since A is a 3-element in an 8-space, we can always obtain a 1-element belonging to it by obtaining the union and intersection of A with a simple 6-element B, using the formula involving the 5-span and 5-cospan of B. But as we have seen in the sections above, it is really only the number of distinct symbols (hence independent 1-elements) that is important in computing unions and intersections. Thus we can simplify our computations by considering A as a 3-element in the 5-space {e1, e2, e4, e5, e7} and B as a simple 3-element, using the formula involving the 2-span and 2-cospan of B. To obtain our first factor of A, suppose we take B as e1 ∧ e2 ∧ e4. Then
B1 = e1 ∧ e2 ∧ e4; ℰ[S[(A ∧ 𝔰2[B1]) ∘ 𝔠2[B1]]] // FactorizeForm
(e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ∘ (6 e1 + 12 e2)
The factor of interest, our first factor of A, is the intersection 6 e1 + 12 e2. We can ignore the union, as it is of no significance in this procedure. Repeating the procedure for the remaining possible independent 3-elements we can construct for B gives
BB = 𝔰3[e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7]
{e1 ∧ e2 ∧ e4, e1 ∧ e2 ∧ e5, e1 ∧ e2 ∧ e7, e1 ∧ e4 ∧ e5, e1 ∧ e4 ∧ e7, e1 ∧ e5 ∧ e7, e2 ∧ e4 ∧ e5, e2 ∧ e4 ∧ e7, e2 ∧ e5 ∧ e7, e4 ∧ e5 ∧ e7}

T = (FactorizeForm[ℰ[S[(A ∧ 𝔰2[#]) ∘ 𝔠2[#]]]] &) /@ BB
{(e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ∘ (6 e1 + 12 e2), 0, (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ∘ (3 e1 + 6 e2), (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ∘ (2 e1 − 12 e5), (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ∘ (6 e4 − 12 e7), (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ∘ (−e1 + 6 e5), (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ∘ (2 e2 + 6 e5), (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ∘ (−3 e4 + 6 e7), (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ∘ (−e2 − 3 e5), (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ∘ (−e4 + 2 e7)}
By inspection we can see that there are four different factors up to congruence. We can take the simplest form congruent to each. Clearly they are not independent:
AA = (e1 + 2 e2) ∧ (e1 − 6 e5) ∧ (e4 − 2 e7) ∧ (e2 + 3 e5); 𝒢[AA]
0
By inspection the second factorization in the list, because it does not display all the symbols of A, will be zero. The others are all candidates for a factorization of A, and all can be seen to be congruent to A when expanded and simplified.
𝒢[𝔰3[AA]]
{−2 e1 ∧ e2 ∧ e4 + 4 e1 ∧ e2 ∧ e7 + 6 e1 ∧ e4 ∧ e5 + 12 e1 ∧ e5 ∧ e7 + 12 e2 ∧ e4 ∧ e5 + 24 e2 ∧ e5 ∧ e7, 0, −(e1 ∧ e2 ∧ e4) + 2 e1 ∧ e2 ∧ e7 + 3 e1 ∧ e4 ∧ e5 + 6 e1 ∧ e5 ∧ e7 + 6 e2 ∧ e4 ∧ e5 + 12 e2 ∧ e5 ∧ e7, −(e1 ∧ e2 ∧ e4) + 2 e1 ∧ e2 ∧ e7 + 3 e1 ∧ e4 ∧ e5 + 6 e1 ∧ e5 ∧ e7 + 6 e2 ∧ e4 ∧ e5 + 12 e2 ∧ e5 ∧ e7}
In Chapter 3, we will explore an equivalent algorithm for factorizing simple elements, based on the regressive product as the multilinear product.
2.14 Summary
This chapter has introduced the exterior product. Its codification of linear dependence is the foundation of Grassmann's contribution to the mathematical and physical sciences. From it he was able to build an algebraic system widely applicable to much of the physical world.

We began the chapter by formally extending the axioms of a linear space to include the exterior product. We then showed how the exterior product leads naturally to an understanding of determinants and linear systems of equations. Although Grassmann was the first to develop an algebraic structure which naturally handled these now accepted denizens of "Linear Algebra", we showed, in discussing the concepts of simplicity and exterior division, that his exterior product spaces (although themselves linear spaces) had the potential to be much richer than the simple linear space of Linear Algebra.

At the end of the chapter we undertook an analysis inspired by Grassmann of the various types of possible product structures. This analysis is really quite fundamental to his contribution, and confirms the central role played by the exterior and interior products in their own right and, as we shall see later in the book, in the construction of higher level products (the Clifford product, generalized Grassmann product and hypercomplex product) as sums of these two.

The chapter has not yet introduced any geometric or physical interpretations for the elements of the algebra, as, being a mathematical system, its power lies in its potentially multiple interpretations. Nevertheless, I am acutely aware that comprehension is enhanced by context, particularly the geometric. I beg your indulgence that the next chapter is still devoid of any depiction of the entities discussed, but hope that Chapter 4: Geometric Interpretations and subsequent examples will redress this balance.
In Chapter 3 we will discover that the symmetry in the suite of linear spaces created by the exterior product leads to a beautiful "dual" product: the regressive product. Equipped with these dual products, Chapter 4 will then finally be able to provide a geometric interpretation for all the different types of elements of the spaces. Chapter 5 shows how the symmetry further enables the definition of a new operation: the complement. Then Chapter 6 shows how to combine all of these to define the interior product, a much more general product than the scalar product. These chapters form the critical foundation to Grassmann's algebra. The rest is exploration!
Chapter 3: The Regressive Product

3.1 Introduction
Since the linear spaces L_m and L_(n−m) are of the same dimension, the binomial coefficient (n choose m), and hence isomorphic, the opportunity exists to define a product operation (called the regressive product) dual to the exterior product, such that theorems involving exterior products of m-elements have duals involving regressive products of (n−m)-elements. Very roughly speaking, if the exterior product is associated with the notion of union, then the regressive product is associated with the notion of intersection.

The regressive product appears to be little used in the recent literature. Grassmann's original development did not distinguish notationally between the exterior and regressive product operations. Instead he capitalized on the inherent duality and used a notation which, depending on the grade of the elements in the product, could only sensibly be interpreted by one or the other operation. This was a very elegant idea, but its subtlety may have been one of the reasons the notion has become lost. (See "Grassmann's notation for the regressive product" in Section 3.3.)

However, since the regressive product is a simple dual operation to the exterior product, an enticing and powerful symmetry is lost by ignoring it. We will find that its 'intersection' properties are a useful conceptual and algorithmic addition to non-metric geometry, and that its algebraic properties enable a firm foundation for the development of metric geometry. Some of the results which are usually proven in metric spaces via the inner and cross products can also be proven using the exterior and regressive products, thus showing that the result is independent of whether or not the space has a metric.

The approach adopted in this book is to distinguish between the two product operations by using different notations, just as Boolean algebra has its dual operations of union and intersection. We will find that this approach does not detract from the elegance of the results.
We will also find that differentiating the two operations explicitly enhances the simplicity and power of the derivation of results. Since the commonly accepted modern notation for the exterior product operation is the 'wedge' symbol ∧, we will denote the regressive product operation by a 'vee' symbol ∨. Note however that this (unfortunately) does not correspond to the Boolean algebra usage of the 'vee' for union and the 'wedge' for intersection.
3.2 Duality
The notion of duality
In order to ensure that the regressive product is defined as an operation correctly dual to the exterior product, we give the defining axiom set for the regressive product the same formal symbolic structure as the axiom set for the exterior product. This may be accomplished by replacing ∧ by ∨, and replacing the grades of elements and spaces by their complementary grades. The complementary grade of a grade m is defined in a linear space of n dimensions to be n−m.
a_m ↔ a_(n−m)    L_m ↔ L_(n−m)    (3.1)
Note that here we are undertaking the construction of a mathematical structure, and thus there is no specific mapping implied between individual elements at this stage. In Chapter 5: The Complement, we will introduce a mapping between the elements of L_m and L_(n−m) which will lead to the definition of complement and interior product. For concreteness, we take some examples.
Axiom 6 says:

{a_m ∈ L_m, b_k ∈ L_k} ⇒ a_m ∧ b_k ∈ L_(m+k)
To form the dual of this axiom, replace ∧ with ∨, and the grades of elements and spaces by their complementary grades.
{a_(n−m) ∈ L_(n−m), b_(n−k) ∈ L_(n−k)} ⇒ a_(n−m) ∨ b_(n−k) ∈ L_(n−(m+k))
Although this is indeed the dual of axiom 6, it is not necessary to display the grades of what are arbitrary elements in the more complex form specifically involving the dimension n of the space. It will be more convenient to display them as grades denoted by simple symbols like m and k, as were the grades of the elements of the original axiom. To effect this transformation most expeditiously we first let m′ = n−m and k′ = n−k, to get
{a_m′ ∈ L_m′, b_k′ ∈ L_k′} ⇒ a_m′ ∨ b_k′ ∈ L_((m′+k′)−n)
The grade of the space to which the regressive product belongs is n−(m+k) = n−((n−m′)+(n−k′)) = (m′+k′)−n. Finally, since the primes are no longer necessary, we drop them. The final form of the axiom dual to axiom 6 is then

{a_m ∈ L_m, b_k ∈ L_k} ⇒ a_m ∨ b_k ∈ L_((m+k)−n)

In words, this says that the regressive product of an m-element and a k-element is an ((m+k)−n)-element.
Axiom 8 says:

{∃1, 1 ∈ L_0; a_m ∧ 1 == a_m}

There is a unit scalar which acts as an identity under the exterior product. For simplicity we do not normally display designated scalars with an underscripted zero. However, the duality transformation will be clearer if we rewrite the axiom with 1_0 in place of 1.
Replace ∧ with ∨, and the grades of elements and spaces by their complementary grades.

{∃1_n, 1_n ∈ L_n; a_(n−m) ∨ 1_n == a_(n−m)}

In words, this says that there is a unit n-element which acts as an identity under the regressive product.
Axiom 10 says:

a_m ∧ b_k == (−1)^(m k) b_k ∧ a_m
Replace ∧ with ∨, and the grades of elements by their complementary grades. Note that it is only the grades shown in the underscripts which should be replaced, not those figuring elsewhere in the formula.
a_(n−m) ∨ b_(n−k) == (−1)^(m k) b_(n−k) ∨ a_(n−m)
Now replace each arbitrary grade m (wherever it occurs in the formula) with n−m′, and k with n−k′. An arbitrary grade is one without a specific value, like 0, 1, or n.
a_m′ ∨ b_k′ == (−1)^((n−m′)(n−k′)) b_k′ ∨ a_m′
In words, this says that the regressive product of elements of odd complementary grade is anti-commutative.
We use the symbol 𝕟 for the dimension of the underlying linear space wherever we want to ensure the formulas can be understood by GrassmannAlgebra functions (for example, the Dual function to be discussed later). However, in general textual comment we will usually retain the simpler symbol n to represent this dimension.
Axiom 6: The regressive product of an m-element and a k-element is an ((m+k)−n)-element.

{a_m ∈ L_m, b_k ∈ L_k} ⇒ a_m ∨ b_k ∈ L_((k+m)−n)    (3.2)
Axiom 7: The regressive product is associative.

(a_m ∨ b_k) ∨ g_j == a_m ∨ (b_k ∨ g_j)    (3.3)
Axiom 8: There is a unit n-element which acts as an identity under the regressive product.

{∃1_n, 1_n ∈ L_n; a_m ∨ 1_n == a_m}    (3.4)
Axiom 9:    (3.5)

Axiom 10: The regressive product of elements of odd complementary grade is anti-commutative.

a_m ∨ b_k == (−1)^((n−k)(n−m)) b_k ∨ a_m    (3.6)
Axiom 11: Additive identities act as multiplicative zero elements under the regressive product.

{∃0_k, 0_k ∈ L_k; 0_k ∨ a_m == 0_((k+m)−n)}    (3.7)
Axiom 12: The regressive product is both left and right distributive under addition.

(a_m + b_m) ∨ g_k == a_m ∨ g_k + b_m ∨ g_k

a_m ∨ (b_k + g_k) == a_m ∨ b_k + a_m ∨ g_k    (3.8)
Axiom 13: Scalar multiplication commutes with the regressive product.

a (a_m ∨ b_k) == (a a_m) ∨ b_k == a_m ∨ (a b_k)    (3.9)
The unit n-element 1_n acts as the multiplicative identity for the regressive product, just as the unit scalar 1 (or 1_0) acts as the multiplicative identity for the exterior product (the two versions of axiom 8).

We have already seen that any basis of L_n contains only one element. If a basis of L_1 is {e1, e2, …, en}, then the single basis element of L_n is congruent to e1 ∧ e2 ∧ ⋯ ∧ en. (Two elements are congruent if one is a scalar multiple of the other.) If the basis of L_1 is changed by an arbitrary (non-singular) linear transformation, then the basis of L_n changes by a scalar factor which is the determinant of the transformation. Any basis of L_n may therefore be expressed as a scalar multiple of some given basis, say e1 ∧ e2 ∧ ⋯ ∧ en. Hence we can also express e1 ∧ e2 ∧ ⋯ ∧ en as some scalar multiple c of 1_n. We denote the scalar in this form so that

e1 ∧ e2 ∧ ⋯ ∧ en == c 1_n    (3.10)
Defining 1_n any more specifically than this is normally done by imposing a metric onto the space. This we do in Chapter 5: The Complement, and Chapter 6: The Interior Product. It turns out then that 1_n is the n-element whose measure (magnitude, volume) is unity.
On the other hand, for geometric application in spaces without a metric, for example the calculation of intersections of lines, planes, and hyperplanes, it is inconsequential that we only know 1_n up to congruence, because we will see that if an element defines a geometric entity then any element congruent to it will define the same geometric entity.
1_n ∨ 1_n == 1_n    (3.11)

(a_k ∧ b_j) ∨ g_n == a_k ∧ (b_j ∨ g_n)    (3.12)
Division by n-elements

An equation may be 'divided' through by an n-element under the regressive product.

{b_n ∨ a_m == b_n ∨ g_m} ⇒ {a_m == g_m}    (3.13)
The existence of an inverse with respect to the regressive product follows from axiom 8. Suppose we have an n-element a_n expressed in terms of some basis. Then, according to 3.10, we can write it as a scalar multiple of 1_n:

a_n == b e1 ∧ e2 ∧ ⋯ ∧ en == b c 1_n

We may then write the inverse a_n^(−1) of a_n with respect to the regressive product as the scalar multiple 1/(b c) of 1_n, since then

a_n ∨ a_n^(−1) == (b c 1_n) ∨ (1/(b c) 1_n) == 1_n ∨ 1_n == 1_n

Summarizing: if a_n == a 1_n, then

a_n ∨ a_n^(−1) == 1_n,  a_n^(−1) == (1/a) 1_n    (3.14)

and in terms of the basis, if a_n == b e1 ∧ e2 ∧ ⋯ ∧ en with e1 ∧ e2 ∧ ⋯ ∧ en == c 1_n, then

a_n^(−1) == (1/(b c²)) e1 ∧ e2 ∧ ⋯ ∧ en    (3.15)
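The scalar bookkeeping of equations 3.14 and 3.15 can be checked with a few lines of Python (a sketch: an n-element is represented only by its coefficient relative to the unit n-element 1_n, and the regressive product of alpha 1_n and beta 1_n is (alpha beta) 1_n):

```python
from fractions import Fraction

c = Fraction(5)      # e1 ^ ... ^ en == c 1_n (depends on the chosen basis)
b = Fraction(3)      # a_n == b (e1 ^ ... ^ en)
a = b * c            # hence a_n == a 1_n

inv_unit = 1 / a                 # (3.14): a_n^-1 == (1/a) 1_n
inv_basis = 1 / (b * c ** 2)     # (3.15): a_n^-1 == (1/(b c^2)) (e1 ^ ... ^ en)

# Both expressions denote the same n-element, and a_n v a_n^-1 == 1_n.
assert inv_basis * c == inv_unit
assert a * inv_unit == 1
```

The factor c² in 3.15 arises because converting both a_n and its inverse between the basis n-element and 1_n each contributes one factor of c.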
If q and r are the orders of two magnitudes A and B, and n that of the principal domain, then the order of the product [A B] is first equal to q+r if q+r is smaller than n, and second equal to q+r−n if q+r is greater than or equal to n.
Translating this into the terminology used in this book we have:
If q and r are the grades of two elements A and B, and n that of the underlying linear space, then the grade of the product [A B] is first equal to q+r if q+r is smaller than n, and second equal to q+r−n if q+r is greater than or equal to n.
Translating this further into the notation used in this book we have:
[A_p B_q] == A_p ∧ B_q    (p + q < n)

[A_p B_q] == A_p ∨ B_q    (p + q ≥ n)
Grassmann called the product [A B] in the first case a progressive (exterior) product, and in the second case a regressive (exterior) product. (In current terminology, which we adopt in this book, the progressive exterior product is called simply the exterior product, and the regressive exterior product is called simply the regressive product.)

In the equivalence above, Grassmann has opted to define the product of two elements whose grades sum to the dimension n of the space as regressive, and thus a scalar. However, the more explicit notation that we have adopted identifies that some definition is still required for the progressive (exterior) product of two such elements. Denoting the two products differently enables us to correctly define the exterior product of two elements whose grades sum to n as an n-element. Grassmann, by his choice of notation, has decided to define it as a scalar. In modern terminology this is equivalent to confusing scalars with pseudo-scalars. A separate notation for the two products thus avoids this tensorially invalid confusion. We can see, then, how in not being explicitly denoted the regressive product may have become lost.
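Grassmann's grade rule for the undifferentiated bracket product can be captured in a one-line Python function (a sketch with our own function name):

```python
def product_grade(p, q, n):
    """Grade of Grassmann's undifferentiated product [A B]: progressive
    (exterior) when p + q < n, regressive when p + q >= n."""
    return p + q if p + q < n else p + q - n

# In a 3-space: two vectors meet progressively in a bivector,
# a bivector and a vector meet regressively in a scalar,
# and two bivectors (planes) meet regressively in a 1-element.
assert product_grade(1, 1, 3) == 2
assert product_grade(2, 1, 3) == 0
assert product_grade(2, 2, 3) == 1
```

The boundary case p + q == n is exactly where the text notes the ambiguity: Grassmann's rule returns 0 (a scalar), whereas the explicit wedge notation would keep it as an n-element.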
Since this algorithm applies to both sets of axioms, it also applies to any theorem. Thus to each theorem involving exterior or regressive products corresponds a dual theorem obtained by applying the algorithm. We call this the Grassmann Duality Principle (or, where the context is clear, simply the Duality Principle).
Example 1
The following theorem says that the exterior product of two elements is zero if the sum of their grades is greater than the dimension of the linear space.
$\{\,\alpha_m\wedge\beta_k = 0,\quad m + k - n > 0\,\}$
The dual theorem states that the regressive product of two elements is zero if the sum of their grades is less than the dimension of the linear space.
$\{\,\alpha_m\vee\beta_k = 0,\quad n - (k + m) > 0\,\}$
Example 2
The theorem below says that the exterior product of an element with itself is zero if it is of odd grade.
$\{\,\alpha_m\wedge\alpha_m = 0,\quad m \in \text{OddPositiveIntegers}\,\}$
The dual theorem states that the regressive product of an element with itself is zero if its complementary grade is odd.
$\{\,\alpha_m\vee\alpha_m = 0,\quad (n - m) \in \text{OddPositiveIntegers}\,\}$
Again, we can recover the original theorem by calculating the dual of this one.
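The involutive character of the Duality Principle lends itself to a small computational check. In this Python sketch (a hypothetical toy model, not the package's Dual function) a statement is reduced to a product symbol together with the grades of its factors; the dual swaps the exterior and regressive products and complements each grade, and applying it twice recovers the original:

```python
def dual(theorem, n):
    """Toy Grassmann Duality Principle: a theorem is (op, grades) with op
    'exterior' or 'regressive'. The dual swaps the two products and
    replaces each grade m by its complement n - m."""
    op, grades = theorem
    swapped = "regressive" if op == "exterior" else "exterior"
    return (swapped, tuple(n - m for m in grades))

theorem = ("exterior", (1, 2))       # e.g. a vector wedged with a bivector
print(dual(theorem, 4))              # ('regressive', (3, 2))
print(dual(dual(theorem, 4), 4) == theorem)  # True: duality is an involution
```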
Example 3
The following theorem states that the exterior product of unity with itself any number of times remains unity.
$1\wedge 1\wedge\cdots\wedge 1\wedge\alpha_m = \alpha_m$

The dual theorem states the corresponding fact for the regressive product of unit n-elements $\mathbf{1}_n$.

$\mathbf{1}_n\vee\mathbf{1}_n\vee\cdots\vee\mathbf{1}_n\vee\alpha_m = \alpha_m$
To obtain the dual of axiom 10, simply enter:

  Dual[α_m ∧ β_k == (-1)^(m k) β_k ∧ α_m]

To obtain the dual of axiom 8, enter it in the same way; the result is returned as:

  {∃[1_n, 1_n ∈ L_n], α_m ∨ 1_n == α_m}

The dual of axiom 9 is obtained as:

  {α_m ∨ 0 == 0}
Note here that we are purposely frustrating Mathematica's inbuilt definition of Exists and ForAll by writing them in script.
  Dual[CommonFactorAxiom]

$\{\,(\alpha_m\vee\gamma_j)\wedge(\beta_k\vee\gamma_j) \equiv (\alpha_m\vee\beta_k\vee\gamma_j)\wedge\gamma_j,\quad -j - k - m + 2n = 0\,\}$
We can verify that the dual of the dual of the axiom is the axiom itself.
  CommonFactorAxiom == Dual[Dual[CommonFactorAxiom]]
  True
  Dual[F]

$(\alpha_m\wedge\beta_k)\vee x_{n-1} \;\equiv\; (-1)^{\,n-m}\,\alpha_m\wedge\big(\beta_k\vee x_{n-1}\big) \;+\; \big(\alpha_m\vee x_{n-1}\big)\wedge\beta_k$
Again, we can verify that the dual of the dual of the formula is the formula itself.
  F == Dual[Dual[F]]
  True
Since ∨ is at this point an arbitrary multilinear product (other than the exterior product), we can satisfy the second criterion by positing the same form for the regressive product:

$A\vee B \;\equiv\; (A^*\wedge C)\vee(C\wedge B^*) \;\equiv\; (A^*\wedge C\wedge B^*)\vee C$   (3.16)
What additional requirements on this would one need in order to satisfy the first criterion, that is, that the axiom enables a result which does not involve the regressive product? Axiom 8 gives us the clue:

$\{\,\exists\,\mathbf{1}_n,\ \mathbf{1}_n\in\mathrm{L}_n:\ \alpha_m\vee\mathbf{1}_n = \alpha_m\,\}$
If the grade of $A^*\wedge C\wedge B^*$ is equal to the dimension of the space n, then, since all n-elements are congruent, we can write $A^*\wedge C\wedge B^* \equiv c\,\mathbf{1}_n$, where c is some scalar, showing that the right hand side is congruent to the common factor (intersection) C:

$A\vee B \;\equiv\; (A^*\wedge C)\vee(C\wedge B^*) \;\equiv\; (A^*\wedge C\wedge B^*)\vee C \;\equiv\; \big(c\,\mathbf{1}_n\big)\vee C \;=\; c\,C \;\cong\; C$
Using graded symbols, we could also write this in either of the following forms:

$\{\,(\alpha_m\wedge\gamma_j)\vee(\gamma_j\wedge\beta_k) \equiv (\alpha_m\wedge\gamma_j\wedge\beta_k)\vee\gamma_j,\quad m + k + j - n = 0\,\}$

$\{\,(\alpha_m\wedge\gamma_j)\vee(\beta_k\wedge\gamma_j) \equiv (\alpha_m\wedge\beta_k\wedge\gamma_j)\vee\gamma_j,\quad m + k + j - n = 0\,\}$
As we will see, an axiom of this form works very well. Indeed, it turns out to be one of the fundamental underpinnings of the algebra. We call it the Common Factor Axiom and explore it further below.
Historical Note
The approach we have adopted in this chapter of treating the common factor relation as an axiom is effectively the same as Grassmann used in his first Ausdehnungslehre (1844), but differs from the approach that he used in his second Ausdehnungslehre (1862). (See Chapter 3, Section 5 in Kannenberg.) In the 1862 version Grassmann proves this relation from another which is (almost) the same as the Complement Axiom that we introduce in Chapter 5: The Complement. Whitehead [1898] and other writers in the Grassmannian tradition follow his 1862 approach. The relation which Grassmann used in the 1862 Ausdehnungslehre is in effect equivalent to assuming the space has a Euclidean metric (his Ergänzung or supplement). However, the Common Factor Axiom does not depend on the space having a metric; that is, it is completely independent of any correspondence we set up between $\mathrm{L}_m$ and $\mathrm{L}_{n-m}$. Hence we would rather not adopt an approach which introduces an unnecessary constraint, especially since we want to show later that the Ausdehnungslehre is easily extended to more general metrics than the Euclidean.
Suppose that the grades of the elements concerned satisfy m + k + j = n, where n is the dimension of the space. Then the Common Factor Axiom states that:

$\{\,(\alpha_m\wedge\gamma_j)\vee(\beta_k\wedge\gamma_j) \equiv (\alpha_m\wedge\beta_k\wedge\gamma_j)\vee\gamma_j,\quad m + k + j - n = 0\,\}$   (3.17)
Thus, the regressive product of two elements $\alpha_m\wedge\gamma_j$ and $\beta_k\wedge\gamma_j$ with a common factor $\gamma_j$ is equal to the regressive product of the 'union' of the elements $\alpha_m\wedge\beta_k\wedge\gamma_j$ with the common factor $\gamma_j$ (their 'intersection'). If $\alpha_m$ and $\beta_k$ still contain some simple elements in common, then the product $\alpha_m\wedge\beta_k\wedge\gamma_j$ is zero; hence, by the above definition, $(\alpha_m\wedge\gamma_j)\vee(\beta_k\wedge\gamma_j)$ is also zero. In what follows, we suppose that $\alpha_m\wedge\beta_k\wedge\gamma_j$ is not zero.
Since the union $\alpha_m\wedge\beta_k\wedge\gamma_j$ is an n-element, we can write it as some scalar factor c, say, of the unit n-element $\mathbf{1}_n$, so that the axiom becomes:
$\{\,(\alpha_m\wedge\gamma_j)\vee(\beta_k\wedge\gamma_j) = c\,\gamma_j,\quad m + k + j - n = 0\,\}$   (3.18)
It is easy to see, using the anti-commutativity axiom 10, that the axiom may be arranged in any of a number of alternative forms, the most useful of which are:
$\{\,(\alpha_m\wedge\gamma_j)\vee(\gamma_j\wedge\beta_k) \equiv (\alpha_m\wedge\gamma_j\wedge\beta_k)\vee\gamma_j,\quad m + k + j - n = 0\,\}$   (3.19)

$\{\,(\gamma_j\wedge\alpha_m)\vee(\gamma_j\wedge\beta_k) \equiv (\gamma_j\wedge\alpha_m\wedge\beta_k)\vee\gamma_j,\quad m + k + j - n = 0\,\}$   (3.20)
And since for the regressive product, any n-element commutes with any other element, we can always rewrite the right hand side with the common factor first.
$\{\,(\alpha_m\wedge\gamma_j)\vee(\gamma_j\wedge\beta_k) \equiv \gamma_j\vee(\alpha_m\wedge\gamma_j\wedge\beta_k),\quad m + k + j - n = 0\,\}$   (3.21)
Suppose now that $\alpha_m$ is not simple, but is the sum of two simple terms $\alpha_1$ and $\alpha_2$ (each of grade m). The axiom may be written for each term as:
$(\alpha_1\wedge\gamma_j)\vee(\beta_k\wedge\gamma_j) \equiv (\alpha_1\wedge\beta_k\wedge\gamma_j)\vee\gamma_j$

$(\alpha_2\wedge\gamma_j)\vee(\beta_k\wedge\gamma_j) \equiv (\alpha_2\wedge\beta_k\wedge\gamma_j)\vee\gamma_j$

Adding these two equations and using the distributivity of ∧ and ∨ gives:

$\big((\alpha_1+\alpha_2)\wedge\gamma_j\big)\vee(\beta_k\wedge\gamma_j) \equiv \big((\alpha_1+\alpha_2)\wedge\beta_k\wedge\gamma_j\big)\vee\gamma_j$

Extending this process, we see that the formula remains true for arbitrary $\alpha_m$ and $\beta_k$, providing $\gamma_j$ is simple.
$\{\,(\alpha_m\wedge\gamma_j)\vee(\beta_k\wedge\gamma_j) \equiv (\alpha_m\wedge\beta_k\wedge\gamma_j)\vee\gamma_j \cong \gamma_j,\quad m + k + j - n = 0\,\}$   (3.22)
This is an extended version of the Common Factor Axiom. It states that: the regressive product of two arbitrary elements containing a simple common factor is congruent to that factor. For applications involving computations in a non-metric space, particularly those with a geometric interpretation, we will see that the congruence form is not restrictive. Indeed, it will be quite elucidating. For more general applications in metric spaces we will see that the associated scalar factor is no longer arbitrary but is determined by the metric imposed.
$\{\,(\alpha_m\wedge\gamma_j)\vee(\beta_k\wedge\gamma_j) = 0,\quad m + k + j - n \neq 0\,\}$   (3.23)
If there is no common factor (other than the scalar 1), then the axiom reduces to:
$\{\,\alpha_m\vee\beta_k = (\alpha_m\wedge\beta_k)\vee 1 \in \mathrm{L}_0,\quad m + k - n = 0\,\}$   (3.24)
The following version yields a sort of associativity for products which are scalar.
$\{\,(\alpha_m\wedge\beta_k)\vee\gamma_j = \alpha_m\vee(\beta_k\wedge\gamma_j) \in \mathrm{L}_0,\quad m + k + j - n = 0\,\}$   (3.25)
To prove this, we can use the previous formula to show that each side of the equation is equal to $(\alpha_m\wedge\beta_k\wedge\gamma_j)\vee 1$.
Applying the Duality Principle to the extended Common Factor Axiom gives:

$\{\,(\alpha_m\vee\gamma_j)\wedge(\beta_k\vee\gamma_j) \equiv (\alpha_m\vee\beta_k\vee\gamma_j)\wedge\gamma_j \cong \gamma_j,\quad m + k + j - 2n = 0\,\}$   (3.26)
The duals of the three special cases in the previous section are:
$\{\,(\alpha_m\vee\gamma_j)\wedge(\beta_k\vee\gamma_j) = 0,\quad m + k + j - 2n \neq 0\,\}$   (3.27)
$\{\,\alpha_m\wedge\beta_k = (\alpha_m\vee\beta_k)\wedge\mathbf{1}_n \in \mathrm{L}_n,\quad m + k - n = 0\,\}$   (3.28)
$\{\,(\alpha_m\vee\beta_k)\wedge\gamma_j = \alpha_m\wedge(\beta_k\vee\gamma_j) \in \mathrm{L}_n,\quad m + k + j - 2n = 0\,\}$   (3.29)
Let X and Y be 2-elements in a 3-space:

$X = x_1\,e_1\wedge e_2 + x_2\,e_1\wedge e_3 + x_3\,e_2\wedge e_3 \qquad Y = y_1\,e_1\wedge e_2 + y_2\,e_1\wedge e_3 + y_3\,e_2\wedge e_3$
Expanding this product, and remembering that the regressive product of identical basis 2-elements is zero, we obtain:
$Z \equiv (x_1\,e_1\wedge e_2)\vee(y_2\,e_1\wedge e_3) + (x_1\,e_1\wedge e_2)\vee(y_3\,e_2\wedge e_3) + (x_2\,e_1\wedge e_3)\vee(y_1\,e_1\wedge e_2) + (x_2\,e_1\wedge e_3)\vee(y_3\,e_2\wedge e_3) + (x_3\,e_2\wedge e_3)\vee(y_1\,e_1\wedge e_2) + (x_3\,e_2\wedge e_3)\vee(y_2\,e_1\wedge e_3)$
In a 3-space, regressive products of 2-elements are anti-commutative, since $(-1)^{(n-m)(n-k)} = (-1)^{(3-2)(3-2)} = -1$. Hence we can collect pairs of terms with the same factors by making the corresponding sign change:
$Z \equiv (x_1y_2 - x_2y_1)\,(e_1\wedge e_2)\vee(e_1\wedge e_3) + (x_1y_3 - x_3y_1)\,(e_1\wedge e_2)\vee(e_2\wedge e_3) + (x_2y_3 - x_3y_2)\,(e_1\wedge e_3)\vee(e_2\wedge e_3)$
We can now apply the Common Factor Axiom to each of these regressive products:
$Z \equiv (x_1y_2 - x_2y_1)\,(e_1\wedge e_2\wedge e_3)\vee e_1 + (x_1y_3 - x_3y_1)\,(e_1\wedge e_2\wedge e_3)\vee e_2 + (x_2y_3 - x_3y_2)\,(e_1\wedge e_2\wedge e_3)\vee e_3$
Thus, in sum, we have the general congruence relation for the intersection of two 2-elements in a 3-space:

$Z \cong (x_1y_2 - x_2y_1)\,e_1 + (x_1y_3 - x_3y_1)\,e_2 + (x_2y_3 - x_3y_2)\,e_3$   (3.30)
The result on the right hand side almost looks as if it could have been obtained from a cross-product operation. However, the cross product requires the space to have a metric, while this does not. We will see in Chapter 6 how, once we have introduced a metric, we can transform this into the formula for the cross product. This is an example of how the regressive product can generate results independent of whether the space has a metric; whereas the usual vector algebra, in using the cross product, must assume that it does.
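The congruence relation above is easily verified numerically. The sketch below (plain Python, independent of the GrassmannAlgebra package) computes the candidate intersection of two 2-elements in a 3-space and confirms that it is a common factor by checking that its exterior product with each 2-element vanishes:

```python
def intersect_bivectors(x, y):
    """Common 1-element factor, up to congruence, of two 2-elements
    X = x1 e1^e2 + x2 e1^e3 + x3 e2^e3 and Y (same component order),
    per the congruence relation derived above."""
    x1, x2, x3 = x
    y1, y2, y3 = y
    return (x1 * y2 - x2 * y1, x1 * y3 - x3 * y1, x2 * y3 - x3 * y2)

def wedge_vec_biv(z, x):
    """Coefficient of e1^e2^e3 in Z ^ X for Z = z1 e1 + z2 e2 + z3 e3."""
    z1, z2, z3 = z
    x1, x2, x3 = x
    return z1 * x3 - z2 * x2 + z3 * x1

X, Y = (2, 3, 5), (7, 11, 13)
Z = intersect_bivectors(X, Y)
# Z lies in both bivectors: wedging it with either gives zero.
print(wedge_vec_biv(Z, X), wedge_vec_biv(Z, Y))  # 0 0
```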
4-space
In a 4-space we only have 4 independent 1-elements available. Let A and B be 3-elements with C apparently a common factor. First, suppose we simplify them before applying the Common Factor Axiom
$A = u\wedge C = u\wedge(u\wedge v + x\wedge y) = u\wedge x\wedge y$

$B = C\wedge x = (u\wedge v + x\wedge y)\wedge x = u\wedge v\wedge x$

$A\vee B = (u\wedge x\wedge y)\vee(u\wedge v\wedge x) = (y\wedge u\wedge x)\vee(-\,u\wedge x\wedge v)$
The Common Factor Axiom gives us the correct result for the simplified factors because the simplified factors had a simple common factor. But of course it is not C.
$A\vee B \equiv (y\wedge u\wedge x\wedge v)\vee(x\wedge u)$
Now if we formally (that is, looking only at the form, and thus incorrectly) apply the Common Factor Axiom without simplifying, we get
$A\vee B \equiv (u\wedge C)\vee(C\wedge x) \equiv (u\wedge C\wedge x)\vee C$
Substituting back for C on the right hand side gives zero, which is of course an incorrect result.

In sum: the Common Factor Axiom gives the correct result (as expected) when applied to terms resulting from the complete expansion of a regressive product, since these terms will either have simple common factors, or be zero. On the other hand, it will not necessarily give the correct result when the symbol for the common factor takes on a non-simple value.
Here, the grade of $A^*\wedge C\wedge B^*$ is equal to the dimension of the space. Since an n-element will commute with an element of any grade, the term on the right hand side can always be rewritten as $C\vee(A^*\wedge C\wedge B^*)$. This latter form is often convenient from a mnemonic point of view, and we may make such exchanges without further comment. Thus
$A\vee B \;\equiv\; (A^*\wedge C)\vee(C\wedge B^*) \;\equiv\; (A^*\wedge C\wedge B^*)\vee C \;\equiv\; C\vee(A^*\wedge C\wedge B^*)$   (3.31)
There are two different forms of the Common Factor Theorem which we will be developing: the A form and the B form. It is useful to have both forms at hand, since, depending on the case, one of them is likely to be more computationally efficient than the other; and the formulae resulting from each, while ultimately yielding the same result, give distinctly different forms to their results. The A form requires A to be simple and obtains the common factor C from the factors of A by spanning A (that is, composing an appropriate span and cospan of A). The B form requires B to be simple and obtains the common factor C from the factors of B by spanning B.
The A form
The basic formula for the A form begins with the Common Factor Axiom written as
$A\vee B \;\equiv\; (A^*\wedge C)\vee B \;\equiv\; (A^*\wedge B)\vee C$   (3.32)
Let us now display the grades explicitly. Let A be of grade m, B of grade k, and C of grade j. $A^*$ is then of grade m − j = n − k.
$A_m\vee B_k \;\equiv\; \big(A^*_{m-j}\wedge C_j\big)\vee B_k \;\equiv\; \big(A^*_{m-j}\wedge B_k\big)\vee C_j$
The crux of the development comes from the intrinsic relationship between A, its span, and its cospan: that is, for any i and s, A is equal to the exterior product of the ith s-span element and the corresponding ith s-cospan element.
$A = \mathrm{Span}_s[A]\llbracket i\rrbracket \;\wedge\; \mathrm{CoSpan}_s[A]\llbracket i\rrbracket$
We now focus on the right hand side of the Common Factor Axiom. If, for a given value of i, we were to replace $A^*$ by $\mathrm{Span}_{m-j}[A]\llbracket i\rrbracket$ and C by $\mathrm{CoSpan}_{m-j}[A]\llbracket i\rrbracket$, we would get the term

$\big(\mathrm{Span}_{m-j}[A]\llbracket i\rrbracket\wedge B_k\big)\vee \mathrm{CoSpan}_{m-j}[A]\llbracket i\rrbracket$

Clearly, this term may be different for different values of i: wherever the span element of A contains a factor of the common factor, the term will be zero, since B contains the same factor; in other cases it may be non-zero. As it turns out, it is the sum of these terms that yields the common factor. We will see why in the next section when we discuss the proof of the theorem.
$A_m\vee B_k = \sum_{i=1}^{\nu}\,\big(\mathrm{Span}_{m-j}[A]\llbracket i\rrbracket\wedge B_k\big)\vee \mathrm{CoSpan}_{m-j}[A]\llbracket i\rrbracket$   (3.33)

where the number of terms is $\nu = \binom{m}{j}$ (or equivalently $\binom{m}{m-j}$).
Because the exterior and regressive products have Mathematica's Listable attribute, we do not actually need to index the list elements and then sum them. If we write Σ as a shorthand for Mathematica's inbuilt Total function, which sums the elements of a list, we can write

$A_m\vee B_k = \Sigma\big[\big(\mathrm{Span}_{m-j}[A]\wedge B_k\big)\vee \mathrm{CoSpan}_{m-j}[A]\big]$   (3.34)
To make these ideas more concrete we now apply the formula to some simple examples before proceeding to a proof in the next section.
Generalizations
It is evident that if both elements are simple, expansion in factors of the element of lower grade will be the more efficient. When neither $\alpha_m$ nor $\beta_k$ is simple, the Common Factor Theorem can still be applied to the simple component terms of the product. Multiple regressive products may be treated by successive applications of the formulae above.
Example 1
Suppose we are working in a 5-space and A and B are expressed in such a way as to display their common factor C. We declare the extra vector symbols with the alias V.
  !5;  A = α₁∧γ₁∧γ₂;  B = γ₁∧γ₂∧β₁∧β₂;  V[α_, β_, γ_];
Thus, n is 5, m (the grade of A) is 3, k (the grade of B) is 4, and j (the grade of C) is 2. For reference, the 1-span and 1-cospan of A are
  {Span₁[A], CoSpan₁[A]}
  {{α₁, γ₁, γ₂}, {γ₁∧γ₂, −(α₁∧γ₂), α₁∧γ₁}}
Now if we simply evaluate the formula, we notice that two of the three terms are zero due to repeated factors in the exterior product.
  F = Σ[(Span₁[A] ∧ B) ∨ CoSpan₁[A]]
  (α₁∧γ₁∧γ₂∧β₁∧β₂)∨(γ₁∧γ₂) + (γ₁∧γ₁∧γ₂∧β₁∧β₂)∨(−(α₁∧γ₂)) + (γ₂∧γ₁∧γ₂∧β₁∧β₂)∨(α₁∧γ₁)
Because the second factor in the regressive product is an n-element, we can immediately write this as congruent to the common factor:
$(\gamma_1\wedge\gamma_2)\vee(\alpha_1\wedge\beta_1\wedge\beta_2\wedge\gamma_1\wedge\gamma_2) \;\cong\; \gamma_1\wedge\gamma_2$
Example 2
We now take the same example as we did earlier when we multiplied out the regressive product term by term. X and Y are both 2-elements in a 3-space, and we wish to compute a 1-element Z common to them (remember Z can only be computed up to congruence).
  !3;  {X, Y} = {x₁ e₁∧e₂ + x₂ e₁∧e₃ + x₃ e₂∧e₃,  y₁ e₁∧e₂ + y₂ e₁∧e₃ + y₃ e₂∧e₃}
To apply the Common Factor Theorem in the form above, we need X in factored form.
  Xf = ExteriorFactorize[X]
  x₁ (e₁ − (x₃/x₁) e₃) ∧ (e₂ + (x₂/x₁) e₃)
To compute the span of an element, the element needs to be a simple product of 1-elements, not a scalar multiple of such a product. The GrassmannAlgebra span function ignores any such scalar factors. In applications where only a result up to congruence is required, this is entirely satisfactory. Otherwise, if we wish to include the scalar factor, we will need to multiply it into one of the other factors, say the first.
  Xf = Xf /. a_ (b_ ∧ c__) :> (a b) ∧ c
  (x₁ e₁ − x₃ e₃) ∧ (e₂ + (x₂/x₁) e₃)
The span and cospan of Xf are both of grade 1. We display them for reference.
  {Span₁[Xf], CoSpan₁[Xf]}
  {{x₁e₁ − x₃e₃, e₂ + (x₂/x₁)e₃}, {e₂ + (x₂/x₁)e₃, −(x₁e₁ − x₃e₃)}}
Applying the Common Factor Theorem then gives, with $Y = y_1\,e_1\wedge e_2 + y_2\,e_1\wedge e_3 + y_3\,e_2\wedge e_3$:

$Z = \big((x_1e_1 - x_3e_3)\wedge Y\big)\vee\Big(e_2 + \tfrac{x_2}{x_1}e_3\Big) \;+\; \Big(\Big(e_2 + \tfrac{x_2}{x_1}e_3\Big)\wedge Y\Big)\vee\big(-(x_1e_1 - x_3e_3)\big)$
Writing this in congruence form gives us the common factor required, and corroborates the result originally obtained by applying the Common Factor Axiom to each term of the full expansion.
$Z \equiv (-x_2y_1 + x_1y_2)\,e_1 + (-x_3y_1 + x_1y_3)\,e_2 + (-x_3y_2 + x_2y_3)\,e_3 \;\cong\; c\,\big(e_1(-x_2y_1 + x_1y_2) + e_2(-x_3y_1 + x_1y_3) + e_3(-x_3y_2 + x_2y_3)\big)$
Suppose that m + k = n + j, with j > 0. Then $A_m\vee B_k$ is either zero or, by the Common Factor Axiom, has a common factor $C_j$. We assume it is not zero. We then express A and B in terms of C as we have done previously:

$A_m = A^*_{m-j}\wedge C_j$

$B_k = C_j\wedge B^*_{k-j}$
$A_m = A_1\wedge\bar A_1 = A_2\wedge\bar A_2 = \cdots = A_\nu\wedge\bar A_\nu$

where each span element $A_i$ is of grade m − j and each cospan element $\bar A_i$ of grade j. Here ν is equal to $\binom{m}{j}$, or equivalently $\binom{m}{m-j}$. Since C is a j-element of the space of A, it can be expressed in terms of the cospan elements:

$C_j = a_1\,\bar A_1 + a_2\,\bar A_2 + \cdots + a_\nu\,\bar A_\nu$
The exterior product of the ith (m−j)-span element $A_i$ with C can be expanded to give

$A_i\wedge C = A_i\wedge\big(a_1\bar A_1 + a_2\bar A_2 + \cdots + a_\nu\bar A_\nu\big) = a_1\,A_i\wedge\bar A_1 + a_2\,A_i\wedge\bar A_2 + \cdots + a_i\,A_i\wedge\bar A_i + \cdots + a_\nu\,A_i\wedge\bar A_\nu$

And indeed, because by definition the exterior product of an (m−j)-span element (say, $A_2$) and any of the non-corresponding cospan elements (say, $\bar A_3$) is zero, we can write for any i

$A_i\wedge C = a_i\,A_i\wedge\bar A_i = a_i\,A_m$
$Z = A_m\vee B_k = \big(A^*_{m-j}\wedge C_j\big)\vee\big(C_j\wedge B^*_{k-j}\big) = \big(A^*_{m-j}\wedge C_j\wedge B^*_{k-j}\big)\vee C_j = \big(A_m\wedge B^*_{k-j}\big)\vee C_j$
Substituting for the common factor in the right hand side gives

$Z = \big(A_m\wedge B^*_{k-j}\big)\vee C_j = \big(A_m\wedge B^*_{k-j}\big)\vee\big(a_1\bar A_1 + a_2\bar A_2 + \cdots + a_\nu\bar A_\nu\big)$
By the distributivity of the exterior and regressive products, and their behaviour with scalars, the scalar factors $a_i$ can be transferred and attached to $A_m$, giving:

$A_m\vee B_k = (A_1\wedge B_k)\vee\bar A_1 + (A_2\wedge B_k)\vee\bar A_2 + \cdots + (A_\nu\wedge B_k)\vee\bar A_\nu$   (3.35)
In summation form, this is the A form of the theorem:

$A_m\vee B_k = \sum_{i=1}^{\nu}\,(A_i\wedge B_k)\vee\bar A_i$

An entirely analogous derivation, decomposing B instead of A, leads to

$A_m\vee B_k = \sum_{i=1}^{\nu}\,(A_m\vee B_i)\wedge\bar B_i$

which we will call the B form of the Common Factor Theorem. In the case where both A and B are simple but not of the same grade, the form which decomposes the element of lower grade will generate the least number of terms and be more computationally efficient. Multiple regressive products may be treated by successive applications of the theorem in either of its forms.
2009 9 3
160
$\Big\{\,A_m\vee B_k = \sum_{i=1}^{\nu}\,(A_i\wedge B_k)\vee\bar A_i,\qquad m - j + k - n = 0,\ \ \nu = \binom{m}{j}\Big\}$   (3.36)

where

$A_m = A_1\wedge\bar A_1 = A_2\wedge\bar A_2 = \cdots = A_\nu\wedge\bar A_\nu$

or, in terms of the span and cospan of A,

$A_m\vee B_k = \sum_{i=1}^{\nu}\,\big(\mathrm{Span}_{m-j}[A]\llbracket i\rrbracket\wedge B_k\big)\vee\mathrm{CoSpan}_{m-j}[A]\llbracket i\rrbracket$
$\Big\{\,A_m\vee B_k = \sum_{i=1}^{\nu}\,(A_m\vee B_i)\wedge\bar B_i,\qquad m - j + k - n = 0,\ \ \nu = \binom{k}{j}\Big\}$   (3.37)

where each $B_i$ is of grade k − j, each $\bar B_i$ of grade j, and

$B_k = \bar B_1\wedge B_1 = \bar B_2\wedge B_2 = \cdots = \bar B_\nu\wedge B_\nu$

or, in terms of the span and cospan of B,

$A_m\vee B_k = \sum_{i=1}^{\nu}\,\big(A_m\vee\mathrm{CoSpan}_{j}[B]\llbracket i\rrbracket\big)\wedge\mathrm{Span}_{j}[B]\llbracket i\rrbracket$

with $\mathrm{CoSpan}_j[B] = \{B_1, B_2, \ldots, B_\nu\}$, each element being of grade k − j.
Suppose b is a 1-element and $a_n = a_1\wedge a_2\wedge\cdots\wedge a_n$ is a simple n-element; we can decompose b directly in terms of the factors of $a_n$. First write:

$a_n = a_1\wedge a_2\wedge\cdots\wedge a_n = (-1)^{n-i}\,\big(a_1\wedge a_2\wedge\cdots\ \check a_i\ \cdots\wedge a_n\big)\wedge a_i$

The symbol $\check a_i$ means that the ith factor is missing from the product. Substituting in the Common Factor Theorem gives:

$a_n\vee b = \sum_{i=1}^{n}(-1)^{n-i}\,\big(\big(a_1\wedge a_2\wedge\cdots\ \check a_i\ \cdots\wedge a_n\big)\wedge b\big)\vee a_i = \sum_{i=1}^{n}\big(a_1\wedge a_2\wedge\cdots\wedge a_{i-1}\wedge b\wedge a_{i+1}\wedge\cdots\wedge a_n\big)\vee a_i$

Hence:

$\big(a_1\wedge a_2\wedge\cdots\wedge a_n\big)\vee b = \sum_{i=1}^{n}\big(a_1\wedge a_2\wedge\cdots\wedge a_{i-1}\wedge b\wedge a_{i+1}\wedge\cdots\wedge a_n\big)\vee a_i$   (3.38)
Writing this out in full shows that we can expand the expression simply by interchanging b successively with each of the factors of $a_1\wedge a_2\wedge\cdots\wedge a_n$, and summing the results:

$(a_1\wedge a_2\wedge\cdots\wedge a_n)\vee b = (b\wedge a_2\wedge\cdots\wedge a_n)\vee a_1 + (a_1\wedge b\wedge a_3\wedge\cdots\wedge a_n)\vee a_2 + \cdots + (a_1\wedge a_2\wedge\cdots\wedge b)\vee a_n$   (3.39)

Since each coefficient here is an n-element, dividing through by $a_1\wedge a_2\wedge\cdots\wedge a_n$ gives the decomposition

$b = \sum_{i=1}^{n}\left(\frac{a_1\wedge a_2\wedge\cdots\wedge a_{i-1}\wedge b\wedge a_{i+1}\wedge\cdots\wedge a_n}{a_1\wedge a_2\wedge\cdots\wedge a_n}\right) a_i$   (3.40)
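Formula (3.40) is Cramer's rule in exterior-algebra dress: once a basis is chosen, each quotient of n-elements becomes a quotient of determinants in which b replaces the ith factor. A small Python sketch (illustrative only, in a 3-space):

```python
from fractions import Fraction

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def decompose(b, a):
    """Coefficients c_i with b = sum_i c_i a_i, per formula (3.40):
    replace a_i by b in the numerator wedge and divide by a1^a2^a3."""
    d = det3(a)
    coeffs = []
    for i in range(3):
        rows = [list(r) for r in a]
        rows[i] = list(b)        # b takes the place of the ith factor
        coeffs.append(Fraction(det3(rows), d))
    return coeffs

a = [(1, 0, 0), (1, 1, 0), (1, 1, 1)]
b = (2, 3, 5)
c = decompose(b, a)
# Reconstruct b from the coefficients to confirm the decomposition.
rec = [sum(ci * ai[j] for ci, ai in zip(c, a)) for j in range(3)]
print(c, rec)
```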
This expression is, of course, equivalent to that which would have been obtained from Grassmann's approach to solving linear equations (see Chapter 2: The Exterior Product).
We keep x1 fixed initially and apply the Common Factor Theorem to rewrite the regressive product as a sum of the two products, each due to one of the essentially different rearrangements of x2 :
$x_1\vee x_2 \equiv \big(x_1\wedge e_1\big)\vee\big(5e_2 + 7e_3\big) \;-\; \big(x_1\wedge(5e_2 + 7e_3)\big)\vee e_1$
The next step is to expand out the exterior products with x1 . Often this step can be done by inspection, since all products with a repeated basis element factor will be zero.
$x_1\vee x_2 \equiv \big(3\,e_2\wedge e_3\wedge e_1\big)\vee\big(5e_2 + 7e_3\big) \;-\; \big(10\,e_1\wedge e_3\wedge e_2 + 21\,e_1\wedge e_2\wedge e_3\big)\vee e_1$
Using ToCommonFactor
ToCommonFactor reduces any regressive products in a Grassmann expression to their common factor form.
The scalar c is called the congruence factor. The congruence factor has already been defined in Section 3.3 above as the connection between the current basis n-element and the unit n-element $\mathbf{1}_n$ (in GrassmannAlgebra input we use 𝕟 instead of n, in case n is being used elsewhere in your computations):

$e_1\wedge e_2\wedge\cdots\wedge e_n \equiv c\,\mathbf{1}_n$
Since this connection cannot be defined unless a metric is introduced, the congruence factor remains arbitrary in non-metric spaces. Although such an arbitrariness may appear at first sight to be disadvantageous, it is, on the contrary, highly elucidating. In application to the computing of unions and intersections of spaces, perhaps under a geometric interpretation where they represent lines, planes and hyperplanes, the notion of congruence becomes central, since spaces can only be determined up to congruence. In the example above, x1 and x2 were expressed in terms of basis elements. ToCommonFactor can also apply the Common Factor Theorem to more general elements. For example, if x1 and x2 were expressed as symbolic bivectors, the common factor expansion could still be performed.
  x1 = x∧y;  x2 = u∧v;
  ToCommonFactor[x1 ∨ x2]
  c (y (u∧v∧x) − x (u∧v∧y))
The expression (u∧v∧x) here represents the scalar coefficient of u∧v∧x when u∧v∧x is expressed in terms of the current basis n-element (in this case a 3-element). As an example, we use the GrassmannAlgebra function ComposeBasisForm to compose an expression for u∧v∧x in terms of basis elements, and then expand and simplify the result using the alias % for GrassmannExpandAndSimplify.
  X = ComposeBasisForm[u∧v∧x]
  (e₁u₁ + e₂u₂ + e₃u₃) ∧ (e₁v₁ + e₂v₂ + e₃v₃) ∧ (e₁x₁ + e₂x₂ + e₃x₃)

  %[X]
  (−u₃v₂x₁ + u₂v₃x₁ + u₃v₁x₂ − u₁v₃x₂ − u₂v₁x₃ + u₁v₂x₃) e₁∧e₂∧e₃
To see most directly that these are equal, we express the vectors in terms of basis elements.
  Z = ComposeBasisForm[x1 ∨ x2]
  (e₁x₁ + e₂x₂ + e₃x₃) ∧ (e₁y₁ + e₂y₂ + e₃y₃) ∨ (e₁u₁ + e₂u₂ + e₃u₃) ∧ (e₁v₁ + e₂v₂ + e₃v₃)
If neither factor is decomposable, no explicit common factor can be extracted from the regressive product $\alpha_2\vee\beta_2$ of two general symbolic 2-elements.
However, if either one of the factors is decomposable, then a formula for the common factor can be generated. In the case below, the results are the same; remember that c is an arbitrary scalar factor.
  {ToCommonFactor[α₂ ∨ (x∧y)], ToCommonFactor[(x∧y) ∨ α₂]}
  {c (−y (x∧α₂) + x (y∧α₂)), c (y (x∧α₂) − x (y∧α₂))}
To demonstrate this we first declare a 4-space, and then use the GrassmannAlgebra function ComposeBasisForm to compose the required product in basis form.
  !4;  X = ComposeBasisForm[α₃ ∨ β₃ ∨ γ₃]
  (a3,1 e₁∧e₂∧e₃ + a3,2 e₁∧e₂∧e₄ + a3,3 e₁∧e₃∧e₄ + a3,4 e₂∧e₃∧e₄) ∨ (b3,1 e₁∧e₂∧e₃ + b3,2 e₁∧e₂∧e₄ + b3,3 e₁∧e₃∧e₄ + b3,4 e₂∧e₃∧e₄) ∨ (g3,1 e₁∧e₂∧e₃ + g3,2 e₁∧e₂∧e₄ + g3,3 e₁∧e₃∧e₄ + g3,4 e₂∧e₃∧e₄)
  Xc = ToCommonFactor[X]
  c² (e₁ (−a3,3 b3,2 g3,1 + a3,2 b3,3 g3,1 + a3,3 b3,1 g3,2 − a3,1 b3,3 g3,2 − a3,2 b3,1 g3,3 + a3,1 b3,2 g3,3) + e₂ (−a3,4 b3,2 g3,1 + a3,2 b3,4 g3,1 + a3,4 b3,1 g3,2 − a3,1 b3,4 g3,2 − a3,2 b3,1 g3,4 + a3,1 b3,2 g3,4) + e₃ (−a3,4 b3,3 g3,1 + a3,3 b3,4 g3,1 + a3,4 b3,1 g3,3 − a3,1 b3,4 g3,3 − a3,3 b3,1 g3,4 + a3,1 b3,3 g3,4) + e₄ (−a3,4 b3,3 g3,2 + a3,3 b3,4 g3,2 + a3,4 b3,2 g3,3 − a3,2 b3,4 g3,3 − a3,3 b3,2 g3,4 + a3,2 b3,3 g3,4))
Note that the congruence factor c is to the second power. This is because the original expression contained the regressive product operator twice: one c effectively stands in for each basis 4-element that is produced during the calculation (3+3+3 = 4+4+1). It is easy to check that this result is indeed a common factor by taking its exterior product with each of the 3-elements. For example:
  %[Xc ∧ (a3,1 e₁∧e₂∧e₃ + a3,2 e₁∧e₂∧e₄ + a3,3 e₁∧e₃∧e₄ + a3,4 e₂∧e₃∧e₄)] // Expand
  0
The 1-element factors of $\alpha_m$ and $\beta_k$ must then have a common subspace of dimension m + k − n = j. Let $\gamma_j$ be a simple j-element which spans this common subspace. We can then write:
$\alpha_m\vee\beta_k = \big(\alpha_{m-j}\wedge\gamma_j\big)\vee\big(\beta_{k-j}\wedge\gamma_j\big) = \big(\alpha_{m-j}\wedge\beta_{k-j}\wedge\gamma_j\big)\vee\gamma_j \;\cong\; \gamma_j$

The Common Factor Axiom then shows us that, since $\gamma_j$ is simple, so is the original product $\alpha_m\vee\beta_k$ of simple elements.
We can show that this 2-element is simple by confirming that its exterior product with itself is zero. (This technique was discussed in Chapter 2: The Exterior Product.)
  %[Zc ∧ Zc] // Expand
  0
Consider the regressive product of a simple m-element $a_1\wedge\cdots\wedge a_m$ with a sequence of (n−1)-elements $\bar b_1, \ldots, \bar b_m$:

$(a_1\wedge\cdots\wedge a_m)\vee\bar b_1\vee\cdots\vee\bar b_m$

Each product $a_i\vee\bar b_j$ of a 1-element with an (n−1)-element is a scalar. Expanding the regressive products in turn by the Common Factor Theorem generates terms of the form

$(-1)^{i+j}\,\big(a_i\vee\bar b_j\big)\ \big(a_1\wedge\cdots\ \check a_i\ \cdots\wedge a_m\big)\vee\bar b_1\vee\cdots\ \check{\bar b}_j\ \cdots\vee\bar b_m$

and continuing the expansion until the regressive products are exhausted collects the scalars into a determinant:

$(a_1\wedge\cdots\wedge a_m)\vee\bar b_1\vee\cdots\vee\bar b_m = \mathrm{Det}\big[\,a_i\vee\bar b_j\,\big]$   (3.41)

where $\mathrm{Det}[a_i\vee\bar b_j]$ is the determinant of the m×m matrix whose elements are the scalars $a_i\vee\bar b_j$.
For example, for m = 2:

$(a_1\wedge a_2)\vee\bar b_1\vee\bar b_2 = \big(a_1\vee\bar b_1\big)\big(a_2\vee\bar b_2\big) - \big(a_1\vee\bar b_2\big)\big(a_2\vee\bar b_1\big)$
This determinant formula is of central importance in the computation of inner products to be discussed in Chapter 6.
Let the new basis be $\{b_1, \ldots, b_n\}$ and let $B \equiv b_1\wedge\cdots\wedge b_n$; then the Common Factor Theorem permits us to write:

$B\vee\alpha_m = \sum_{i=1}^{\nu}\,\big(b_i\wedge\alpha_m\big)\vee\bar b_i$

where $\nu = \binom{n}{m}$ and

$B = b_1\wedge\bar b_1 = b_2\wedge\bar b_2 = \cdots = b_\nu\wedge\bar b_\nu$

with each span element $b_i$ of grade n − m and each cospan element $\bar b_i$ of grade m.
We can visualize how the formula operates by writing $\alpha_m$ and B as simple products and then exchanging the $b_i$ with the $a_i$ in all the essentially different ways possible, whilst always retaining the original ordering. To make this more concrete, suppose n is 5 and m is 2:
$(b_1\wedge b_2\wedge b_3\wedge b_4\wedge b_5)\vee(a_1\wedge a_2) = (a_1\wedge a_2\wedge b_3\wedge b_4\wedge b_5)\vee(b_1\wedge b_2) + (a_1\wedge b_2\wedge a_2\wedge b_4\wedge b_5)\vee(b_1\wedge b_3) + (a_1\wedge b_2\wedge b_3\wedge a_2\wedge b_5)\vee(b_1\wedge b_4) + (a_1\wedge b_2\wedge b_3\wedge b_4\wedge a_2)\vee(b_1\wedge b_5) + (b_1\wedge a_1\wedge a_2\wedge b_4\wedge b_5)\vee(b_2\wedge b_3) + (b_1\wedge a_1\wedge b_3\wedge a_2\wedge b_5)\vee(b_2\wedge b_4) + (b_1\wedge a_1\wedge b_3\wedge b_4\wedge a_2)\vee(b_2\wedge b_5) + (b_1\wedge b_2\wedge a_1\wedge a_2\wedge b_5)\vee(b_3\wedge b_4) + (b_1\wedge b_2\wedge a_1\wedge b_4\wedge a_2)\vee(b_3\wedge b_5) + (b_1\wedge b_2\wedge b_3\wedge a_1\wedge a_2)\vee(b_4\wedge b_5)$
Now let the new n-elements on the right hand side be written as scalar factors $\beta_i$ times the new basis n-element B:

$b_i\wedge\alpha_m = \beta_i\,B$

Substituting gives

$B\vee\alpha_m = \sum_{i=1}^{\nu}\big(\beta_i\,B\big)\vee\bar b_i = B\vee\Big(\sum_{i=1}^{\nu}\beta_i\,\bar b_i\Big)$

so that

$\alpha_m = \sum_{i=1}^{\nu}\left(\frac{b_i\wedge\alpha_m}{B}\right)\bar b_i$   (3.42)
To see how this works, we express a given 3-element in terms of a new basis. First declare a 4-space and the 3-element in the current (e) basis:

  !4;  α₃ = 2 e₁∧e₂∧e₃ − 3 e₁∧e₃∧e₄ − 2 e₂∧e₃∧e₄;
Suppose now that we have another basis related to the e basis by:
  b₁ = 2 e₁ + 3 e₃;  b₂ = 5 e₃ − e₄;  b₃ = e₁ − e₃;  b₄ = e₁ + e₂ + e₃ + e₄;

We wish to express α₃ in terms of the new basis. Instead of inverting the transformation to find the eᵢ in terms of the bᵢ, substituting for the eᵢ in α₃, and simplifying, we can almost write the result required by inspection:
$(b_1\wedge b_2\wedge b_3\wedge b_4)\vee\alpha_3 = (b_1\wedge\alpha_3)\vee(b_2\wedge b_3\wedge b_4) - (b_2\wedge\alpha_3)\vee(b_1\wedge b_3\wedge b_4) + (b_3\wedge\alpha_3)\vee(b_1\wedge b_2\wedge b_4) - (b_4\wedge\alpha_3)\vee(b_1\wedge b_2\wedge b_3)$

whence

$\alpha_3 = \tfrac{4}{5}\,b_2\wedge b_3\wedge b_4 + \tfrac{2}{5}\,b_1\wedge b_3\wedge b_4 + \tfrac{2}{5}\,b_1\wedge b_2\wedge b_4 - \tfrac{1}{5}\,b_1\wedge b_2\wedge b_3$
We can easily check this by expanding and simplifying the right hand side to give both sides in terms of the original e basis.
  α₃ == %[(4/5) b₂∧b₃∧b₄ + (2/5) b₁∧b₃∧b₄ + (2/5) b₁∧b₂∧b₄ − (1/5) b₁∧b₂∧b₃]
  True
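The same numbers can be checked outside Mathematica. The Python sketch below (an independent check, not package code) builds each trivector of new basis vectors via 3×3 minors and confirms the expansion of α₃ found above:

```python
from fractions import Fraction
from itertools import combinations

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def wedge3(u, v, w, n=4):
    """Components of u^v^w over ordered index triples of an n-space:
    each component is a 3x3 minor of the matrix with rows u, v, w."""
    return {cols: det3([[vec[c] for c in cols] for vec in (u, v, w)])
            for cols in combinations(range(n), 3)}

# New basis vectors in the e basis, and the 3-element from the text:
# alpha = 2 e1^e2^e3 - 3 e1^e3^e4 - 2 e2^e3^e4 (indices from 0).
b1, b2, b3, b4 = (2, 0, 3, 0), (0, 0, 5, -1), (1, 0, -1, 0), (1, 1, 1, 1)
alpha = {(0, 1, 2): 2, (0, 2, 3): -3, (1, 2, 3): -2}

combo = [(Fraction(4, 5), wedge3(b2, b3, b4)),
         (Fraction(2, 5), wedge3(b1, b3, b4)),
         (Fraction(2, 5), wedge3(b1, b2, b4)),
         (Fraction(-1, 5), wedge3(b1, b2, b3))]
result = {}
for coeff, triv in combo:
    for key, val in triv.items():
        result[key] = result.get(key, 0) + coeff * val
result = {k: v for k, v in result.items() if v != 0}
print(result == alpha)  # True
```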
$a = \sum_{i=1}^{n}\left(\frac{b_1\wedge b_2\wedge\cdots\wedge a\wedge\cdots\wedge b_n}{b_1\wedge b_2\wedge\cdots\wedge b_i\wedge\cdots\wedge b_n}\right) b_i$   (3.43)

Here the denominators are the same in each term, but are expressed this way to show the positioning of the factor a in the numerator (a replaces $b_i$). This result has already been introduced in the section above, Example: The decomposition of a 1-element.
$(b_1\wedge b_2\wedge\cdots\wedge b_n)\vee a = \sum_{i=1}^{n}\big(b_1\wedge b_2\wedge\cdots\wedge b_{i-1}\wedge a\wedge b_{i+1}\wedge\cdots\wedge b_n\big)\vee b_i$

Putting $a = b_0$ and collecting all the terms on one side gives the identity:

$\sum_{i=0}^{n}\,(-1)^i\,\big(b_0\wedge b_1\wedge\cdots\ \check b_i\ \cdots\wedge b_n\big)\vee b_i = 0$   (3.44)
For example, suppose $b_0, b_1, b_2, b_3$ are four dependent 1-elements spanning a 3-space; then the formula reduces to the identity:

$(b_1\wedge b_2\wedge b_3)\vee b_0 - (b_0\wedge b_2\wedge b_3)\vee b_1 + (b_0\wedge b_1\wedge b_3)\vee b_2 - (b_0\wedge b_1\wedge b_2)\vee b_3 = 0$   (3.45)
We can get GrassmannAlgebra to check this formula by composing each of the $b_i$ in basis form, and finding the common factor of each term. (To use ComposeBasisForm, we first have to give the $b_i$ new names which do not already involve subscripts.)
  B = (b₀∧b₁∧b₂... — entering the identity and substituting new names:
  B = (b₁∧b₂∧b₃)∨b₀ − (b₀∧b₂∧b₃)∨b₁ + (b₀∧b₁∧b₃)∨b₂ − (b₀∧b₁∧b₂)∨b₃;
  B = B /. {b₀ → w, b₁ → x, b₂ → y, b₃ → z}
  −((w∧x∧y)∨z) + (w∧x∧z)∨y − (w∧y∧z)∨x + (x∧y∧z)∨w

  Bn = ComposeBasisForm[B]
  −((e₁w₁ + e₂w₂ + e₃w₃)∧(e₁x₁ + e₂x₂ + e₃x₃)∧(e₁y₁ + e₂y₂ + e₃y₃))∨(e₁z₁ + e₂z₂ + e₃z₃) + ((e₁w₁ + e₂w₂ + e₃w₃)∧(e₁x₁ + e₂x₂ + e₃x₃)∧(e₁z₁ + e₂z₂ + e₃z₃))∨(e₁y₁ + e₂y₂ + e₃y₃) − ((e₁w₁ + e₂w₂ + e₃w₃)∧(e₁y₁ + e₂y₂ + e₃y₃)∧(e₁z₁ + e₂z₂ + e₃z₃))∨(e₁x₁ + e₂x₂ + e₃x₃) + ((e₁x₁ + e₂x₂ + e₃x₃)∧(e₁y₁ + e₂y₂ + e₃y₃)∧(e₁z₁ + e₂z₂ + e₃z₃))∨(e₁w₁ + e₂w₂ + e₃w₃)

  ToCommonFactor[Bn]
  0
Suppose now that $e_i$, $e_j$ and $e_s$ are basis elements of grades m, k and p respectively, such that $e_i\wedge e_j\wedge e_s = e_1\wedge e_2\wedge\cdots\wedge e_n \equiv c\,\mathbf{1}_n$. The Common Factor Axiom can be written for these basis elements as:

$\big\{\,(e_i\wedge e_s)\vee(e_j\wedge e_s) \equiv (e_i\wedge e_j\wedge e_s)\vee e_s,\quad m + k + p = n\,\big\}$   (3.46)
Denoting cobasis elements by an overbar, we have

$\overline{e_i} \cong e_j\wedge e_s \qquad \overline{e_j} \cong e_s\wedge e_i \qquad \overline{e_s} \cong e_i\wedge e_j \qquad \overline{1} = e_i\wedge e_j\wedge e_s$

Substituting these four elements into the Common Factor Axiom above gives:

$(-1)^{mk}\;\overline{e_j}\vee\overline{e_i} \;=\; \overline{e_i}\vee\overline{e_j} \;\equiv\; \overline{1}\vee\overline{e_i\wedge e_j} \;=\; c\;\overline{e_i\wedge e_j}$   (3.47)
It can be seen that in this form the Common Factor Axiom does not specifically display the common factor, and indeed remains valid for all basis elements, independent of their grades. In sum: Given any two basis elements, the cobasis element of their exterior product is congruent to the regressive product of their cobasis elements.
We can write this either in the form already derived in the section above:

$\overline{e_1}\vee\overline{e_2} = \overline{1}\vee\overline{e_1\wedge e_2}$

or, absorbing the unit, as

$\overline{e_1}\vee\overline{e_2} = c\;\overline{e_1\wedge e_2}$
$\overline{e_1}\vee\overline{e_2}\vee\overline{e_3} = c\;\overline{e_1\wedge e_2}\vee\overline{e_3} = c^{2}\;\overline{e_1\wedge e_2\wedge e_3}$

and generally

$\overline{e_1}\vee\overline{e_2}\vee\cdots\vee\overline{e_m} = c^{\,m-1}\;\overline{e_1\wedge e_2\wedge\cdots\wedge e_m}$   (3.48)
A special case, which we will have occasion to use in Chapter 5, is where the result reduces to a 1-element:

$\overline{e_1}\vee\overline{e_2}\vee\cdots\vee\overline{e_{n-1}} = c^{\,n-2}\;\overline{e_1\wedge e_2\wedge\cdots\wedge e_{n-1}}$   (3.49)
In sum: The regressive product of cobasis elements of basis 1-elements is congruent to the cobasis element of their exterior product. In fact, this formula is just an instance of a more general result which says that: The regressive product of cobasis elements of any grade is congruent to the cobasis element of their exterior product. We will discuss a result very similar to this in more detail after we have defined the complement of an element in Chapter 5.
First, choose an (n−m)-element $b_{n-m}$ such that its regressive product with $\alpha_m$ is non-zero. Next, choose a set of m independent 1-elements $b_j$ whose exterior products with $b_{n-m}$ are also non-zero. The element may then be factorized as:

$\alpha_m = a\;\alpha_1\wedge\alpha_2\wedge\cdots\wedge\alpha_m \qquad \alpha_j = \alpha_m\vee\big(b_j\wedge b_{n-m}\big)$   (3.50)
The scalar factor a may be determined simply by equating any two corresponding terms of the original element and the factorized version. Note that no factorization is unique. Had different $b_j$ been chosen, a different factorization would have been obtained. Nevertheless, any one factorization may be obtained from any other by adding multiples of the factors to each factor.

If an element is simple, then the exterior product of the element with itself is zero. The converse, however, is not true in general for elements of grade higher than 2, for it only requires the element to have just one simple factor to make the product with itself zero. If the method is applied to the factorization of a non-simple element, the result will still be a simple element. Thus an element may be tested to see if it is simple by applying the method of this section: if the factorization is not equivalent to the original element, the hypothesis of simplicity has been violated.
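For 2-elements, the self-wedge test is conclusive, and it is mechanical to carry out. A sketch in Python (basis form; illustrative, not package code) wedges a 2-element with itself and reports whether the 4-element part vanishes:

```python
def wedge22(a, b):
    """Exterior product of two 2-elements given as {(i, j): coeff} dicts
    with i < j; the result maps ordered index 4-tuples to coefficients,
    signed by the permutation sorting the concatenated factor indices."""
    out = {}
    for (i, j), ca in a.items():
        for (k, l), cb in b.items():
            idx = (i, j, k, l)
            if len(set(idx)) < 4:
                continue  # repeated factor: the term vanishes
            inversions = sum(1 for p in range(4) for q in range(p + 1, 4)
                             if idx[p] > idx[q])
            sign = -1 if inversions % 2 else 1
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: c for k, c in out.items() if c != 0}

def is_simple_2element(a):
    """A 2-element is simple exactly when its self-wedge is zero."""
    return wedge22(a, a) == {}

print(is_simple_2element({(1, 2): 1, (1, 3): 1}))  # True:  e1^(e2 + e3)
print(is_simple_2element({(1, 2): 1, (3, 4): 1}))  # False: e1^e2 + e3^e4
```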
Consider as an example the 2-element

$\alpha = v\wedge w + v\wedge x + v\wedge y + v\wedge z + w\wedge z + x\wedge z + y\wedge z$

There are 5 independent 1-elements in the expression for α: v, w, x, y, z; hence we can take n to be 5 and m to be 2. Choosing $b_{n-m} = x\wedge y\wedge z$, $b_1 = v$ and $b_2 = w$, the factors become:
$\alpha_1 = \alpha\vee\big(b_1\wedge b_{n-m}\big) = \alpha\vee(v\wedge x\wedge y\wedge z)$

$\alpha_2 = \alpha\vee\big(b_2\wedge b_{n-m}\big) = \alpha\vee(w\wedge x\wedge y\wedge z)$
In determining $\alpha_1$, the Common Factor Theorem permits us to write, for arbitrary 1-elements x and y:

$(x\wedge y)\vee(v\wedge x\wedge y\wedge z) = \big(x\wedge v\wedge x\wedge y\wedge z\big)\vee y - \big(y\wedge v\wedge x\wedge y\wedge z\big)\vee x$

whence

$\alpha_1 = -\big(w\wedge v\wedge x\wedge y\wedge z\big)\vee v + \big(w\wedge v\wedge x\wedge y\wedge z\big)\vee z \;\cong\; v - z$

$\alpha_2 = \big(v\wedge w\wedge x\wedge y\wedge z\big)\vee w + \big(v\wedge w\wedge x\wedge y\wedge z\big)\vee x + \big(v\wedge w\wedge x\wedge y\wedge z\big)\vee y + \big(v\wedge w\wedge x\wedge y\wedge z\big)\vee z \;\cong\; w + x + y + z$
$\alpha_1\wedge\alpha_2 \;\cong\; (v - z)\wedge(w + x + y + z)$
Verification by expansion of this product shows that this is indeed a factorization of the original element.
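That expansion can be corroborated with a few lines of Python (symbols ordered alphabetically; an illustration, not package code):

```python
def wedge1(u, v):
    """Exterior product of two 1-elements given as {symbol: coeff} dicts;
    the result is a 2-element {(s, t): coeff} with s < t alphabetically."""
    out = {}
    for s, cs in u.items():
        for t, ct in v.items():
            if s == t:
                continue  # x ^ x = 0
            key, sign = ((s, t), 1) if s < t else ((t, s), -1)
            out[key] = out.get(key, 0) + sign * cs * ct
    return {k: c for k, c in out.items() if c != 0}

# The original 2-element: v^w + v^x + v^y + v^z + w^z + x^z + y^z.
original = {pair: 1 for pair in
            [("v", "w"), ("v", "x"), ("v", "y"), ("v", "z"),
             ("w", "z"), ("x", "z"), ("y", "z")]}
# The factorization found above: (v - z) ^ (w + x + y + z).
factored = wedge1({"v": 1, "z": -1}, {"w": 1, "x": 1, "y": 1, "z": 1})
print(factored == original)  # True
```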
$\alpha = a_1\,e_1\wedge e_2\wedge e_3 + a_2\,e_1\wedge e_2\wedge e_4 + a_3\,e_1\wedge e_2\wedge e_5 + a_4\,e_1\wedge e_3\wedge e_4 + a_5\,e_1\wedge e_3\wedge e_5 + a_6\,e_1\wedge e_4\wedge e_5 + a_7\,e_2\wedge e_3\wedge e_4 + a_8\,e_2\wedge e_3\wedge e_5 + a_9\,e_2\wedge e_4\wedge e_5 + a_{10}\,e_3\wedge e_4\wedge e_5$
In this case it is simplest to choose each product $b_j\wedge b_{n-m}$ to be a basis 3-element (taking $b_{n-m} = e_4\wedge e_5$ and $b_1 = e_1$, $b_2 = e_2$, $b_3 = e_3$); each is then congruent to a cobasis element:

$b_1\wedge b_{n-m} = e_1\wedge e_4\wedge e_5 \cong \overline{e_2\wedge e_3}$

$b_2\wedge b_{n-m} = e_2\wedge e_4\wedge e_5 \cong -\,\overline{e_1\wedge e_3}$

$b_3\wedge b_{n-m} = e_3\wedge e_4\wedge e_5 \cong \overline{e_1\wedge e_2}$
Consider a typical product $(a\,e_i\wedge e_j\wedge e_k)\vee\overline{e_2\wedge e_3}$. The Common Factor Theorem tells us that this product is zero if $e_i\wedge e_j\wedge e_k$ does not contain $e_2\wedge e_3$. We can thus simplify the product $\alpha\vee\overline{e_2\wedge e_3}$ by dropping all terms which do not contain $e_2\wedge e_3$:

$\alpha\vee\overline{e_2\wedge e_3} = \big(a_1\,e_1\wedge e_2\wedge e_3 + a_7\,e_2\wedge e_3\wedge e_4 + a_8\,e_2\wedge e_3\wedge e_5\big)\vee\overline{e_2\wedge e_3}$

Furthermore, the Common Factor Theorem applied to a typical term $(e_2\wedge e_3\wedge e_i)\vee\overline{e_2\wedge e_3}$ of the expansion, in which both $e_2\wedge e_3$ and $\overline{e_2\wedge e_3}$ occur, yields a 1-element congruent to the remaining basis 1-element $e_i$ in the product. This effectively cancels out (up to congruence) the product $e_2\wedge e_3$ from the original term.

$(e_2\wedge e_3\wedge e_i)\vee\overline{e_2\wedge e_3} = \big((e_2\wedge e_3)\vee\overline{e_2\wedge e_3}\big)\wedge e_i \;\cong\; e_i$
a1 = a ∨ b1 ≅ a1 e1 + a7 e4 + a8 e5
a3 = a ∨ b3 ≅ a1 e3 + a2 e4 + a3 e5

The factor a2 is obtained from b2 in the same way. It is clear from inspecting the product of the first terms in each 1-element that the product requires a scalar divisor of a1². The final result is then:

a ≅ (1/a1²) a1∧a2∧a3
To effect the multiplication of the factored form we use GrassmannExpandAndSimplify (in its alias form).
Multiplying the factored form out gives an expression A: a 3-element with a scalar divisor of a1², expressed in the basis 3-elements e1∧e2∧e3 through e3∧e4∧e5. For this expression to be congruent to the original expression we clearly need to apply the simplicity conditions for a general 3-element in a 5-space. Once this is done we retrieve the original 3-element a with which we began. The simplicity conditions can be expressed by the following rules, which we apply to the expression A. We will discuss them in the next section.
A /. {−(a3 a4)/a1 + (a2 a5)/a1 → a6, −(a3 a7)/a1 + (a2 a8)/a1 → a9, −(a5 a7)/a1 + (a4 a8)/a1 → a10}

a1 e1∧e2∧e3 + a2 e1∧e2∧e4 + a3 e1∧e2∧e5 + a4 e1∧e3∧e4 + a5 e1∧e3∧e5 + a6 e1∧e4∧e5 + a7 e2∧e3∧e4 + a8 e2∧e3∧e5 + a9 e2∧e4∧e5 + a10 e3∧e4∧e5
Select a 1-element belonging to at least one of the terms, say from e1∧e2. Drop e2 to create e1. Then drop e1 to create e2.

Select e1. Drop the terms a4 e2∧e3 + a5 e2∧e4 + a6 e3∧e4 since they do not contain e1. Factor e1 from a1 e1∧e2 + a2 e1∧e3 + a3 e1∧e4 and eliminate it to give factor a1:

a1 = a1 e2 + a2 e3 + a3 e4
Select e2. Drop the terms a2 e1∧e3 + a3 e1∧e4 + a6 e3∧e4 since they do not contain e2. Factor e2 from a1 e1∧e2 + a4 e2∧e3 + a5 e2∧e4 and eliminate it to give factor a2:

a2 = −a1 e1 + a4 e3 + a5 e4

Multiplying the two factors together gives

a1∧a2 = a1² e1∧e2 + a1 a2 e1∧e3 + a1 a3 e1∧e4 + a1 a4 e2∧e3 + a1 a5 e2∧e4 + (a2 a5 − a3 a4) e3∧e4
Comparing this product to the original 2-element a gives the final factorization as

a ≅ (1/a1) (a1 e2 + a2 e3 + a3 e4) ∧ (−a1 e1 + a4 e3 + a5 e4)
In sum: a 2-element in a 4-space may be factorized if and only if a condition on its coefficients, a3 a4 − a2 a5 + a1 a6 = 0, is satisfied.
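The select/drop scheme and its condition can be checked numerically. The sketch below is plain Python, not GrassmannAlgebra code; the dict-based `wedge` helper is our own construction (a multivector is a dict from sorted index tuples to coefficients). It rebuilds the general 2-element from the two factors and confirms that the exterior square vanishes when the coefficients satisfy the condition.

```python
def wedge(A, B):
    """Exterior product of dict-based multivectors.

    Keys are sorted tuples of basis indices, e.g. (1, 2) for e1^e2;
    values are scalar coefficients."""
    out = {}
    for I, a in A.items():
        for J, b in B.items():
            if set(I) & set(J):
                continue  # repeated basis vector => term vanishes
            merged = I + J
            # sign = parity of the permutation that sorts the indices
            inv = sum(merged[i] > merged[j]
                      for i in range(len(merged))
                      for j in range(i + 1, len(merged)))
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0) + (-1) ** inv * a * b
    return {k: v for k, v in out.items() if v != 0}

# General 2-element a1 e12 + a2 e13 + a3 e14 + a4 e23 + a5 e24 + a6 e34,
# with a6 chosen so that a3 a4 - a2 a5 + a1 a6 = 0:
a1, a2, a3, a4, a5 = 1, 2, 3, 4, 5
a6 = (a2 * a5 - a3 * a4) / a1
X = {(1, 2): a1, (1, 3): a2, (1, 4): a3,
     (2, 3): a4, (2, 4): a5, (3, 4): a6}

u = {(2,): a1, (3,): a2, (4,): a3}    # a1 e2 + a2 e3 + a3 e4
v = {(1,): -a1, (3,): a4, (4,): a5}   # -a1 e1 + a4 e3 + a5 e4
product = {k: c / a1 for k, c in wedge(u, v).items()}

assert product == X   # the factorization reproduces X
assert wedge(X, X) == {}   # exterior square zero: X is simple
```

With any other choice of a6 the exterior square X∧X acquires a non-zero e1∧e2∧e3∧e4 term, exhibiting the condition as the simplicity test.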
Consider the 3-element

a = −3 e1∧e2∧e3 − 4 e1∧e2∧e4 + 12 e1∧e2∧e5 + 3 e1∧e3∧e4 − 3 e1∧e3∧e5 + 8 e1∧e4∧e5 + 6 e2∧e3∧e4 − 18 e2∧e3∧e5 + 12 e3∧e4∧e5
Select a 3-element belonging to at least one of the terms, say e1∧e2∧e3. Drop e1 to create e2∧e3. Drop e2 to create e1∧e3. Drop e3 to create e1∧e2.

Select e2∧e3. Drop the terms not containing it, factor it from the remainder, and eliminate it to give a1:

a1 = −3 e1 + 6 e4 − 18 e5 ≅ −e1 + 2 e4 − 6 e5
Select e1∧e3. Drop the terms not containing it, factor it from the remainder, and eliminate it to give a2:

a2 = 3 e2 + 3 e4 − 3 e5 ≅ e2 + e4 − e5
Select e1∧e2. Drop the terms not containing it, factor it from the remainder, and eliminate it to give a3:

a3 = −3 e3 − 4 e4 + 12 e5
Multiplying this out (here using GrassmannExpandAndSimplify in its alias form) gives:

A; !5; %[(−e1 + 2 e4 − 6 e5)∧(e2 + e4 − e5)∧(−3 e3 − 4 e4 + 12 e5)]
3 e1∧e2∧e3 + 4 e1∧e2∧e4 − 12 e1∧e2∧e5 − 3 e1∧e3∧e4 + 3 e1∧e3∧e5 − 8 e1∧e4∧e5 − 6 e2∧e3∧e4 + 18 e2∧e3∧e5 − 12 e3∧e4∧e5
Comparing this product to the original 3-element a verifies a final factorization as:

a ≅ (e1 − 2 e4 + 6 e5)∧(e2 + e4 − e5)∧(−3 e3 − 4 e4 + 12 e5)
This factorization is, of course, not unique. For example, a slightly simpler factorization could be obtained by subtracting twice the first factor from the third factor to obtain:

a ≅ (e1 − 2 e4 + 6 e5)∧(e2 + e4 − e5)∧(−3 e3 − 2 e1)
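The claim that adding a multiple of one factor to another leaves the product unchanged is easy to confirm numerically. The following plain-Python sketch (the `wedge` helper is our own, not GrassmannAlgebra's) multiplies out both versions of the factorization and checks they agree:

```python
def wedge(A, B):
    # dict-based exterior product: keys are sorted index tuples, values scalars
    out = {}
    for I, a in A.items():
        for J, b in B.items():
            if set(I) & set(J):
                continue  # repeated basis vector => zero
            m = I + J
            inv = sum(m[i] > m[j] for i in range(len(m))
                                  for j in range(i + 1, len(m)))
            k = tuple(sorted(m))
            out[k] = out.get(k, 0) + (-1) ** inv * a * b
    return {k: v for k, v in out.items() if v != 0}

f1 = {(1,): 1, (4,): -2, (5,): 6}    # e1 - 2 e4 + 6 e5
f2 = {(2,): 1, (4,): 1, (5,): -1}    # e2 + e4 - e5
f3 = {(3,): -3, (4,): -4, (5,): 12}  # -3 e3 - 4 e4 + 12 e5
f3b = {(1,): -2, (3,): -3}           # -3 e3 - 2 e1  (= f3 - 2 f1)

p1 = wedge(wedge(f1, f2), f3)
p2 = wedge(wedge(f1, f2), f3b)
assert p1 == p2            # subtracting 2 f1 from f3 changes nothing
assert p1[(1, 2, 3)] == -3  # e.g. the e1^e2^e3 coefficient
```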
Factorization of (n−1)-elements
The foregoing method may be used to prove constructively that any (n−1)-element is simple, by obtaining a factorization and subsequently verifying its validity. Let the general (n−1)-element be:

a = Σ_{i=1}^{n} ai ēi,   a1 ≠ 0

where ēi denotes the cobasis element of ei. Choose b1 = e1 and bj = ej, and apply the Common Factor Theorem to obtain (for j ≠ 1):
a ∨ (e1∧ej) = (a ∨ ej) e1 − (a ∨ e1) ej ≅ aj e1 − a1 ej   (3.52)

since ēi ∨ ej is zero unless i = j, in which case it is congruent to a scalar. Taking the exterior product of these 1-elements for j = 2, …, n then gives the factorization:

a = Σ_{i=1}^{n} ai ēi ≅ (a2 e1 − a1 e2)∧(a3 e1 − a1 e3)∧…∧(an e1 − a1 en),   a1 ≠ 0   (3.53)
By putting equation 3.46 in its equality form rather than its congruence form, we have

Σ_{i=1}^{n} ai ēi = ((−1)^(n−1)/a1^(n−2)) (a2 e1 − a1 e2)∧(a3 e1 − a1 e3)∧…∧(an e1 − a1 en),   a1 ≠ 0   (3.54)
This enables us to tabulate the formula for different dimensions. For example
Table[L[n] == R[n], {n, 3, 5}]

{a1 (a3 e1∧e2 − a2 e1∧e3 + a1 e2∧e3) == (a2 e1 − a1 e2)∧(a3 e1 − a1 e3),
 a1² (−a4 e1∧e2∧e3 + a3 e1∧e2∧e4 − a2 e1∧e3∧e4 + a1 e2∧e3∧e4) == −((a2 e1 − a1 e2)∧(a3 e1 − a1 e3)∧(a4 e1 − a1 e4)),
 a1³ (a5 e1∧e2∧e3∧e4 − a4 e1∧e2∧e3∧e5 + a3 e1∧e2∧e4∧e5 − a2 e1∧e3∧e4∧e5 + a1 e2∧e3∧e4∧e5) == (a2 e1 − a1 e2)∧(a3 e1 − a1 e3)∧(a4 e1 − a1 e4)∧(a5 e1 − a1 e5)}
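As a numeric spot check of the tabulated n = 3 case, the following plain-Python sketch (the `wedge` helper is our own, not GrassmannAlgebra's) confirms that a1 (a3 e1∧e2 − a2 e1∧e3 + a1 e2∧e3) equals (a2 e1 − a1 e2)∧(a3 e1 − a1 e3):

```python
def wedge(A, B):
    # dict-based exterior product: keys are sorted index tuples, values scalars
    out = {}
    for I, a in A.items():
        for J, b in B.items():
            if set(I) & set(J):
                continue  # repeated basis vector => zero
            m = I + J
            inv = sum(m[i] > m[j] for i in range(len(m))
                                  for j in range(i + 1, len(m)))
            k = tuple(sorted(m))
            out[k] = out.get(k, 0) + (-1) ** inv * a * b
    return {k: v for k, v in out.items() if v != 0}

a1, a2, a3 = 2, 3, 5
# a1 (a3 e12 - a2 e13 + a1 e23):
lhs = {(1, 2): a1 * a3, (1, 3): -a1 * a2, (2, 3): a1 * a1}
# (a2 e1 - a1 e2) ^ (a3 e1 - a1 e3):
rhs = wedge({(1,): a2, (2,): -a1}, {(1,): a3, (3,): -a1})
assert lhs == rhs
```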
ExteriorFactorize[X]

a1 (e1 − (a5 e5)/a1)∧(e2 + (a4 e5)/a1)∧(e3 − (a3 e5)/a1)∧(e4 + (a2 e5)/a1)
We can obtain the 2-element common factor of these two 3-elements by applying ToCommonFactor to their regressive product.
Z = ToCommonFactor[X ∨ Y]

c ((−x2 y1 + x1 y2) e1∧e2 + (−x3 y1 + x1 y3) e1∧e3 + (−x3 y2 + x2 y3) e1∧e4 + (−x4 y1 + x1 y4) e2∧e3 + (−x4 y2 + x2 y4) e2∧e4 + (−x4 y3 + x3 y4) e3∧e4)
Applying ExteriorFactorize to this common factor shows that it is simple and can be factorized.
ExteriorFactorize[Z]

c (−x2 y1 + x1 y2) (e1 + (e3 (x4 y1 − x1 y4))/(−x2 y1 + x1 y2) + (e4 (x4 y2 − x2 y4))/(−x2 y1 + x1 y2)) ∧ (e2 + (e3 (x3 y1 − x1 y3))/(x2 y1 − x1 y2) + (e4 (x3 y2 − x2 y3))/(x2 y1 − x1 y2))
Because this is a 2-element in a 4-space we can of course also prove its simplicity by expanding and simplifying its exterior square to show that it is zero.
Expand[%[Z∧Z]]
0
If ExteriorFactorize is applied to an element which may be simple conditional on the values taken by some of its scalar symbol coefficients, the conditional result will be returned.
This Mathematica If function has syntax If[C,F], where C is a list of constraints on the scalar symbols of the expression, and F is the factorization if the constraints are satisfied. Hence the above may be read: if the simplicity condition is satisfied, then the factorization is as given, else the element is not factorizable. Instead of a list of constraints, we can also recast the list of constraints in predicate form by applying And to the list. (In this example, where there is only one constraint, the effect is simply to remove the braces from the constraint.)
X1 = X1 /. c_List :> And @@ c

If[a3 a4 − a2 a5 + a1 a6 == 0, a1 (e1 − (a4 e3)/a1 − (a5 e4)/a1)∧(e2 + (a2 e3)/a1 + (a3 e4)/a1)]
If we are able to assert the condition required, then the 2-element is indeed simple and the factorization is valid. Substituting this condition into the predicate form of the If statement, yields true for the predicate, hence the factorization is returned.
X1 /. a6 → (a2 a5 − a3 a4)/a1

a1 (e1 − (a4 e3)/a1 − (a5 e4)/a1)∧(e2 + (a2 e3)/a1 + (a3 e4)/a1)
In this case of a 2-element in a 5-space, we have a list of three constraints, which we can turn into a predicate by applying And to the list.
X1 = X1 /. c_List :> And @@ c

If[a3 a5 − a2 a6 + a1 a8 == 0 && a4 a5 − a2 a7 + a1 a9 == 0 && a4 a6 − a3 a7 + a1 a10 == 0,
 a1 (e1 − (a5 e3)/a1 − (a6 e4)/a1 − (a7 e5)/a1)∧(e2 + (a2 e3)/a1 + (a3 e4)/a1 + (a4 e5)/a1)]
A 2-element in a 4-space
!4; X = ComposeMElement[2, a]
a1 e1∧e2 + a2 e1∧e3 + a3 e1∧e4 + a4 e2∧e3 + a5 e2∧e4 + a6 e3∧e4
If we are able to assert the condition required, that a3 a4 − a2 a5 + a1 a6 = 0, then the 2-element is indeed simple.

SimpleQ[X /. a6 → −((a3 a4 − a2 a5)/a1)]
True
A 2-element in a 5-space
!5; X = ComposeMElement[2, a]
a1 e1∧e2 + a2 e1∧e3 + a3 e1∧e4 + a4 e1∧e5 + a5 e2∧e3 + a6 e2∧e4 + a7 e2∧e5 + a8 e3∧e4 + a9 e3∧e5 + a10 e4∧e5

SimpleQ[X]
If[{a3 a5 − a2 a6 + a1 a8 == 0, a4 a5 − a2 a7 + a1 a9 == 0, a4 a6 − a3 a7 + a1 a10 == 0}, True]
In this case of a 2-element in a 5-space, we have three conditions on the coefficients to satisfy before being able to assert that the element is simple.
The first formula to be developed (which forms a basis for much of the rest) is an expansion for the regressive product of an (n−1)-element with the exterior product of two arbitrary elements. We call this (and its dual) the Product Formula. Let x be an (n−1)-element, a an m-element, and b a k-element; then:

(a∧b) ∨ x = (a ∨ x)∧b + (−1)^m a∧(b ∨ x)   (3.55)
To prove this, suppose initially that a and b are simple and can be expressed as a = a1∧…∧am and b = b1∧…∧bk. Then:

(a∧b) ∨ x = ((a1∧…∧am)∧(b1∧…∧bk)) ∨ x
Since the ai∧x and bj∧x are n-elements, we can rearrange the parentheses in the terms of the sums by using the property of n-elements that they behave with regressive products just like scalars behave with exterior products: A ∨ (B∧C) = (A ∨ B)∧C, where A is an n-element. So the right-hand side becomes a sum of such rearranged terms.
Reapplying the Common Factor Theorem in reverse enables us to condense the sums back to:

(a ∨ x)∧(b1∧…∧bk) + (−1)^m (a1∧…∧am)∧(b ∨ x)
This result may be extended in a straightforward manner to the case where a and b are not simple: since a non-simple element may be expressed as the sum of simple terms, and the formula is valid for each term, then by addition it can be shown to be valid for the sum.
We can calculate the dual of this formula by applying the GrassmannAlgebra function Dual. (Note that we use n instead of n in order to let the Dual function know of its special meaning as the dimension of the space.)
Dual[(a∧b) ∨ x == (a ∨ x)∧b + (−1)^m a∧(b ∨ x)]
(a∨b)∧x == (−1)^(n−m) a∨(b∧x) + (a∧x)∨b

(a∨b)∧x = (a∧x)∨b + (−1)^(n−m) a∨(b∧x)   (3.56)
If we now let x be an exterior product of two 1-elements x1 and x2, giving the left-hand side

(a∨b)∧(x1∧x2)

then the right-hand side may be expanded by applying the above Product Formula twice to obtain:

(a∨b)∧(x1∧x2) = a∨(b∧x1∧x2) + (a∧x1∧x2)∨b + (−1)^(n−m) ((a∧x2)∨(b∧x1) − (a∧x1)∨(b∧x2))   (3.57)
Each successive application doubles the number of terms. We started with two terms on the right hand side of the basic Product Formula. By applying it again we obtain a Product Formula with four terms on the right hand side. The next application would give us eight terms as shown in the Product Formula below.
a b Hx1 x2 x3 L
m k
Ka x1 O b x2 x3 - Ka x2 O b x1 x3 +
m k m k
3.58
2009 9 3
191
Ka x3 O b x1 x2 + Ka x1 x2 x3 O b +
m k m k
H - 1 L n- m a b x 1 x 2 x 3 + K a x 1 x 2 O b x 3 m k m k
Ka x1 x3 O b x2 + Ka x2 x3 O b x1
m k m k
Because this derivation process is simply the repeated application of a formula, we can get Mathematica to do it for us automatically.
Each term is now transformed into two terms, with the forms a∧((u∨z)∧v) and (−1)^(n·RawGrade[u]) a∧(u∧(v∨z)). Here n is the (symbolic) dimension of the underlying linear space, and the function RawGrade computes the grade of an element without regard to the dimension of the currently declared space. Both these features enable the formula derivation to apply to an element of any grade.
This code is explanatory only. You do not need to enter it to have DeriveProductFormula work.
(a∨b)∧x == (−1)^(n−m) a∨(b∧x) + (a∧x)∨b

DeriveProductFormula[(a∨b)∧x, 2]

(a∨b)∧(x1∧x2) == a∨(b∧x1∧x2) + (−1)^(n−m) (−(a∧x1)∨(b∧x2) + (a∧x2)∨(b∧x1)) + (a∧x1∧x2)∨b
DeriveProductFormula[(a∨b)∧x, 3]

(a∨b)∧(x1∧x2∧x3) ==
(a∧x1)∨(b∧x2∧x3) − (a∧x2)∨(b∧x1∧x3) + (a∧x3)∨(b∧x1∧x2) +
(−1)^(n−m) (a∨(b∧x1∧x2∧x3) + (a∧x1∧x2)∨(b∧x3) − (a∧x1∧x3)∨(b∧x2) + (a∧x2∧x3)∨(b∧x1)) +
(a∧x1∧x2∧x3)∨b
DeriveProductFormula[(a∨b)∧x, 4]

(a∨b)∧(x1∧x2∧x3∧x4) ==
a∨(b∧x1∧x2∧x3∧x4) + (a∧x1∧x2)∨(b∧x3∧x4) − (a∧x1∧x3)∨(b∧x2∧x4) + (a∧x1∧x4)∨(b∧x2∧x3) + (a∧x2∧x3)∨(b∧x1∧x4) − (a∧x2∧x4)∨(b∧x1∧x3) + (a∧x3∧x4)∨(b∧x1∧x2) +
(−1)^(n−m) (−(a∧x1)∨(b∧x2∧x3∧x4) + (a∧x2)∨(b∧x1∧x3∧x4) − (a∧x3)∨(b∧x1∧x2∧x4) + (a∧x4)∨(b∧x1∧x2∧x3) − (a∧x1∧x2∧x3)∨(b∧x4) + (a∧x1∧x2∧x4)∨(b∧x3) − (a∧x1∧x3∧x4)∨(b∧x2) + (a∧x2∧x3∧x4)∨(b∧x1)) +
(a∧x1∧x2∧x3∧x4)∨b
Although these outputs are not quite as easy to read as the manually derived formulas in the previous section (because the outputs omit any brackets deemed unnecessary with the inbuilt precedence of the operators), one can at least see in these cases that the formulas are equivalent.
Each of them is a specific case of a more general explicit formula. We will call this more general explicit formula the General Product Formula. We derive it below by deconstructing the result for the case of four 1-elements, and then confirm its identity to the original iteratively derived form in the first few cases.
F1 = DeriveProductFormula[(a∨b)∧x, 4]

(the same sixteen-term expansion of (a∨b)∧(x1∧x2∧x3∧x4) as displayed above)
The first thing we note is that, apart possibly from the sign (−1)^(n−m), the terms on the right-hand side are all products of the form (a∧u)∨(b∧v), where u∧v ≅ x, u is of grade 4−r, and v is of grade r. This means we can compute them using the r-span and r-cospan of x. For j equal to 4, r will range from 0 to 4.
r = 0:
{(a∧x1∧x2∧x3∧x4)∨(b∧1)}

r = 1:
{(a∧x2∧x3∧x4)∨(b∧x1), (a∧(−(x1∧x3∧x4)))∨(b∧x2), (a∧x1∧x2∧x4)∨(b∧x3), (a∧(−(x1∧x2∧x3)))∨(b∧x4)}

r = 2:
{(a∧x3∧x4)∨(b∧x1∧x2), (a∧(−(x2∧x4)))∨(b∧x1∧x3), (a∧x2∧x3)∨(b∧x1∧x4), (a∧x1∧x4)∨(b∧x2∧x3), (a∧(−(x1∧x3)))∨(b∧x2∧x4), (a∧x1∧x2)∨(b∧x3∧x4)}

r = 3:
{(a∧x4)∨(b∧x1∧x2∧x3), (a∧(−x3))∨(b∧x1∧x2∧x4), (a∧x2)∨(b∧x1∧x3∧x4), (a∧(−x1))∨(b∧x2∧x3∧x4)}

r = 4:
{(a∧1)∨(b∧x1∧x2∧x3∧x4)}

But all these terms can be computed at once using the complete span and complete cospan of x, which return the nested list of all five of the lists above.
Now we need to multiply the terms by a scalar factor which is 1 when the grade of the span is even, and H- 1Ln-m when it is odd. Since the lists of terms in the collection above are arranged according to the grades of their spans, we then need a list whose elements alternate between 1 and H- 1Ln-m . To construct this list, we first define a function r which returns an alternating list of 1s and 0s of the correct length j + 1, multiply this list by n - m, and then use it as a power. Mathematica's inbuilt Listable attributes of Times and Power will again return the result we want.
r[j_] := Mod[Range[0, j], 2]

(−1)^(r[4] (n−m))
{1, (−1)^(n−m), 1, (−1)^(n−m), 1}
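The r function and the resulting sign list are easy to mirror outside Mathematica. A small Python sketch (the function names are ours) reproducing them:

```python
def r(j):
    # Mod[Range[0, j], 2]: the alternating list 0, 1, 0, 1, ... of length j + 1
    return [i % 2 for i in range(j + 1)]

def sign_list(j, n_minus_m):
    # (-1)^(r[j] (n - m)): 1 at even positions, (-1)^(n-m) at odd ones
    return [(-1) ** (ri * n_minus_m) for ri in r(j)]

assert r(4) == [0, 1, 0, 1, 0]
assert sign_list(4, 3) == [1, -1, 1, -1, 1]  # n - m odd: signs alternate
assert sign_list(4, 2) == [1, 1, 1, 1, 1]    # n - m even: all terms positive
```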
Multiplying the collection of lists by this sign list attaches the factor (−1)^(n−m) to the terms in the odd-grade (r = 1 and r = 3) lists, and leaves the terms in the even-grade lists unchanged.
Flattening this list and then summing the terms gives the final result.
F2 = (a∨b)∧(x1∧x2∧x3∧x4)

(the same sixteen-term expansion of (a∨b)∧(x1∧x2∧x3∧x4) obtained earlier, now assembled from the signed span and cospan terms)
Applying GrassmannExpandAndSimplify will simplify the signs and allow us to confirm the result is identical to that produced by DeriveProductFormula.
The general pattern may thus be expressed as a single formula:

(a∨b)∧x = S[Flatten[(−1)^(r[j] (n−m)) ((a∧ span of x) ∨ (b∧ cospan of x))]]   (3.59)

In this formula the complete span and complete cospan of x (an exterior product of j 1-elements) appear, and the sign factor (−1)^(n−m) attaches to the terms whose cospan is of odd grade.
{True, True, True, True, True, True, True, True, True, True}
(a∨b)∧(x∧y∧z) ==
(−1)^(n−m) (a∧1)∨(b∧x∧y∧z) + (a∧x)∨(b∧y∧z) + (a∧(−y))∨(b∧x∧z) + (a∧z)∨(b∧x∧y) +
(−1)^(n−m) ((a∧x∧y)∨(b∧z) + (a∧(−(x∧z)))∨(b∧y) + (a∧y∧z)∨(b∧x)) +
(a∧x∧y∧z)∨(b∧1)
We can express the 3-element as a product of different 1-element factors by adding to any given factor, scalar multiples of the other factors. For example
F2 = GeneralProductFormula[(a∨b)∧(x∧y∧(z + a x + b y))]

(a∨b)∧(x∧y∧(a x + b y + z)) ==
(−1)^(n−m) (a∧1)∨(b∧x∧y∧(a x + b y + z)) + (a∧x)∨(b∧y∧(a x + b y + z)) + (a∧(−y))∨(b∧x∧(a x + b y + z)) + (a∧(a x + b y + z))∨(b∧x∧y) +
(−1)^(n−m) ((a∧x∧y)∨(b∧(a x + b y + z)) + (a∧(−(x∧(a x + b y + z))))∨(b∧y) + (a∧y∧(a x + b y + z))∨(b∧x)) +
(a∧x∧y∧(a x + b y + z))∨(b∧1)
The General Product Formula may equally well be written with the span associated with b and the cospan associated with a. Of course, there will need to be some associated changes in the signs of the terms as well. We start with the General Product Formula in the form developed above:

(a∨b)∧x = S[Flatten[(−1)^(r[j] (n−m)) ((a∧ span of x) ∨ (b∧ cospan of x))]]

Thus we can write the General Product Formula in the alternative form

(a∨b)∧x = S[Flatten[(−1)^(r[j] (n−k)) ((a∧ cospan of x) ∨ (b∧ span of x))]]   (3.60)
{True, True, True, True, True, True, True, True, True, True}
By dividing through by the scalar a∨b (where b is now of grade n−m, so that a∨b is a scalar), the General Product Formula yields a formula which decomposes an element x into components:

x = S[Flatten[(−1)^(r[j] (n−m)) ((a∧ span of x) ∨ (b∧ cospan of x))]] / (a∨b)   (3.61)

If the element to be decomposed is a 1-element, that is j = 1, there are just two terms, reducing the decomposition formula to:

x = ((−1)^(n−m) a∨(b∧x) + (a∧x)∨b) / (a∨b)
To make this a little more readable we add some parentheses and absorb the sign by interchanging the order of the factors x and b (of grade n−m):

x = (a∨(x∧b) + (a∧x)∨b) / (a∨b)   (3.62)
In Chapter 4: Geometric Interpretations we will explore some of the geometric significance of these formulas. They will also find application in Chapter 6: The Interior Product.
We now compute the duals of the Product Formulas, beginning with the basic formula:

F1 = ((a∨b)∧x == (−1)^(n−m) a∨(b∧x) + (a∧x)∨b)
Dual[F1]
(a∧b)∨x == (−1)^m a∧(b∨x) + (a∨x)∧b

where x is now of grade n−1. This is just the original Product Formula 3.55.
F2 = ((a∨b)∧(x1∧x2) == a∨(b∧x1∧x2) + (−1)^(n−m) (−(a∧x1)∨(b∧x2) + (a∧x2)∨(b∧x1)) + (a∧x1∧x2)∨b)
Dual[F2]
(a∧b)∨(x1∨x2) == a∧(b∨x1∨x2) + (−1)^m (−(a∨x1)∧(b∨x2) + (a∨x2)∧(b∨x1)) + (a∨x1∨x2)∧b

where x1 and x2 are now of grade n−1.
F3 = ((a∨b)∧(x1∧x2∧x3) ==
(a∧x1)∨(b∧x2∧x3) − (a∧x2)∨(b∧x1∧x3) + (a∧x3)∨(b∧x1∧x2) +
(−1)^(n−m) (a∨(b∧x1∧x2∧x3) + (a∧x1∧x2)∨(b∧x3) − (a∧x1∧x3)∨(b∧x2) + (a∧x2∧x3)∨(b∧x1)) +
(a∧x1∧x2∧x3)∨b)
Dual[F3]
(a∧b)∨(x1∨x2∨x3) ==
(a∨x1)∧(b∨x2∨x3) − (a∨x2)∧(b∨x1∨x3) + (a∨x3)∧(b∨x1∨x2) +
(−1)^m (a∧(b∨x1∨x2∨x3) + (a∨x1∨x2)∧(b∨x3) − (a∨x1∨x3)∧(b∨x2) + (a∨x2∨x3)∧(b∨x1)) +
(a∨x1∨x2∨x3)∧b

where the xi are now of grade n−1.
The dual General Product Formula may thus be written:

(a∧b)∨x = S[ Σ_{r=0}^{p} (−1)^(m r) Σ_{i=1}^{ν} (a∨xi(p−r))∧(b∨xi(r)) ]   (3.63)

where x is of grade p, each of its factors of grade n−1, and

x = x1(r)∨x1(p−r) = x2(r)∨x2(p−r) = … = xν(r)∨xν(p−r),   ν = Binomial[p, r]

are the ν essentially different ways of partitioning x into an r-element and a (p−r)-element.
3.10 Summary
This chapter has introduced the regressive product as a true dual to the exterior product. This means that to every theorem T involving exterior and regressive products there corresponds a dual theorem Dual[T] such that T == Dual[Dual[T]]. But although the regressive product axioms assert that the regressive product of an m-element with a k-element is an (m+k−n)-element (where n is the dimension of the underlying linear space), the axiom sets for the exterior and regressive products alone do not provide a mechanism for deriving such an element explicitly. A further explicit axiom involving both exterior and regressive products is necessary. We called this axiom the Common Factor Axiom and motivated it by a combination of algebraic and geometric argument.

If the exterior product is viewed as a sort of 'union' of independent elements, the regressive product may be viewed as a sort of 'intersection'. Because Grassmann considered only Euclidean spaces, used the same notation for both exterior and regressive products, and equated scalars and pseudo-scalars, the Common Factor Axiom was effectively hidden in his notation.
The Common Factor Axiom was then extended to prove one of the most important formulae in the Grassmann algebra, the Common Factor Theorem. This theorem enables the regressive product of any two arbitrary elements of the algebra to be computed in an effective manner. It will be shown in Chapter 5: The Complement that if the underlying linear space is endowed with a metric, then the result is specific, and depends on the metric for its precise value. Otherwise, if there is no metric, the element is specific only up to congruence (that is, up to an arbitrary scalar factor). It was then shown how the Common Factor Theorem could be used in the factorization of simple elements.

Finally, it was shown how the Common Factor Theorem leads to a suite of formulas called Product Formulas. These formulas expand expressions involving an exterior product of an element with a regressive product of elements, or, involving a regressive product of an element with an exterior product of elements. In Chapter 6 we will show how these lead naturally to formulas where the regressive products are replaced by interior products. These interior product forms of the product formulae will find application throughout the rest of the book.

The next chapter is an interlude in the development of the algebraic fundamentals. In it we begin to explore one of Grassmann algebra's most enticing interpretations: geometry. But at this stage we will only be discussing non-metric geometry. Chapters 5 and 6 to follow will develop the algebra's metric concepts, and thus complete the fundamentals required for subsequent applications and geometric interpretations in metric space.
4 Geometric Interpretations
4.1 Introduction
In Chapter 2, the exterior product operation was introduced onto the elements of a linear space L1, enabling the construction of a series of new linear spaces Lm possessing new types of elements. In this chapter, the elements of L1 will be interpreted, some as vectors, some as points. This will be done by singling out one particular element of L1 and conferring upon it the interpretation of origin point. All the other elements of the linear space then divide into two categories. Those that involve the origin point will be called (weighted) points, and those that do not will be called vectors. As this distinction is developed in succeeding sections it will be seen to be both illuminating and consistent with accepted notions. Vectors and points will be called geometric interpretations of the elements of L1.
Some of the more important consequences, however, of the distinction between vectors and points arise from the distinctions thereby generated in the higher grade spaces Lm. It will be shown that a simple element of Lm takes on two interpretations. The first, that of a multivector (or m-vector), is when the m-element can be expressed in terms of vectors alone. The second, that of a bound multivector (or bound m-vector), is when the m-element requires both points and vectors to express it. These simple interpreted elements will be found useful for defining geometric entities such as lines and planes and their higher dimensional analogues known as multiplanes (or m-planes). Unions and intersections of multiplanes may then be calculated straightforwardly by using the bound multivectors which define them. A multivector may be visualized as a 'free' entity with no location. A bound multivector may be visualized as 'bound' through a location in space.

It is not only simple interpreted elements which will be found useful in applications, however. In Chapter 7: Exploring Screw Algebra and Chapter 8: Exploring Mechanics, a basis for a theory of mechanics is developed whose principal quantities (for example, systems of forces, momentum of a system of particles, velocity of a rigid body) may be represented by a general interpreted 2-element, that is, by the sum of a bound vector and a bivector.

In the literature of the nineteenth century, wherever vectors and points were considered together, vectors were introduced as point differences. When it is required to designate physical quantities it is not satisfactory that all vectors should arise as the differences of points. In later literature, this problem appeared to be overcome by designating points by their position vectors alone, making vectors the fundamental entities [Gibbs 1886]. This approach is not satisfactory either, since by excluding points much of the power of the calculus for dealing with free and located entities together is excluded. In this book we do not require that vectors be defined in terms of points, but rather propose a difference of interpretation between the origin element and those elements not involving the origin. This approach permits the existence of points and vectors together without the vectors necessarily arising as point differences.
2009 9 3
In this chapter, as in the preceding chapters, L1 and the spaces Lm do not yet have a metric. That is, there is no way of calculating a measure or magnitude associated with an element. The interpretation discussed therefore may also be supposed non-metric. In the next chapter a metric will be introduced onto the uninterpreted spaces and the consequences of this for the interpreted elements developed.

In summary then, it is the aim of this chapter to set out the distinct non-metric geometric interpretations of m-elements brought about by the interpretation of one specific element of L1 as an origin point.
This depiction is unsatisfactory in that the line segment has a definite location and length whilst the vector it is depicting is supposed to possess neither of these properties. One way perhaps to depict an unlocated entity is to show it in many locations.
Another way to emphasize that a vector has no location is to show it on a dynamic graphic. If you are viewing this notebook live with the GrassmannAlgebra package loaded, you can use your mouse to manipulate the vector below to reinforce that any of the vector's locations that retain its direction and sense are equally satisfactory depictions for it. This dynamic depiction is a somewhat more faithful way of depicting something without locating it. But it is still not fully satisfactory, since we are depicting the property of no-location by using any location.
Another way is to allow it to be moveable to any location. (You can drag the arrow by its name with your mouse.)
A linear space, all of whose elements are interpreted as vectors, will be called a vector space.
From this point onwards we will always show vectors docked in a visually convenient location, often one that is suggestive of the operation we wish to perform. Of course as discussed above, this does not mean it is actually located there.
Comparing vectors
In a metric space we can define the magnitude of a vector and hence compare vectors by comparing their magnitudes even if they are not in the same direction. In spaces where no metric has been imposed, as are the spaces we are discussing in this chapter, we cannot define the magnitude of a vector, and hence we cannot compare it with another vector in a different direction. However, we can compare vectors in the same direction. Vectors in the same direction are congruent. That is, one is a scalar multiple (not zero) of the other. This scalar multiple is called the congruence factor.
Thus vectors a x and b x are congruent, with congruence factor a/b (or b/a). When comparing congruent elements it is not the magnitude of the congruence factor that is important, but its sign. If it is positive, the vectors may be said to have the same orientation. If it is negative, the vectors may be said to have an opposite orientation. Orientation is thus a relative concept. It applies in the same way to elements of any grade. For the special case of vectors, however, the term sense is often used synonymously with orientation. Below we depict three vectors with congruence factors (relative to x) of 1, −1 and 2.
Points
The origin
In order to describe position in space it is necessary to have a reference point. This point is usually called the origin. Rather than the standard technique of implicitly assuming the origin and working only with vectors to describe position, we find it important for later applications to augment the vector space with the origin as a new element, to create a new linear space with one more element in its basis. For reasons which will appear later, such a linear space will be called a bound vector space. The only difference between the origin element and the vector elements of the linear space is their interpretation. The origin element is interpreted as a point. We will denote it in GrassmannAlgebra by 𝕆, which you can access from the palette or type as a double-struck capital O.
It is these elements that will be used to describe position and that we will call points. The vector x is called the position vector of the point P. A position vector is just like any other vector and is therefore, of course, located nowhere (even though we show it in a conveniently suggestive position!). We depict points (other than the origin) by blue points.
Remember that the bound vector space we are discussing does not yet have a metric. That is, the distance between two points (the magnitude of the vector equal to the point difference) is not meaningful. However, the relative distances between points on the same line can be measured by means of their intensities.
Q = P + y = (𝕆 + x) + y = 𝕆 + (x + y)
and so a vector may be viewed as a carrier of points. That is, the addition of a vector to a point carries or transforms it to another point. (See the historical note below).
Wherever pictorially feasible we will show weighted points with their weights attached to their names, and/or a size or colour change to distinguish them from the 'pure' points on the same graphic.
The sum of two points is a point of weight 2 located mid-way between them
Similarly, the sum of n points is a point with weight n, situated at the centre of mass (centre of gravity) of the points.
Two mass-weighted points m1 P1 and m2 P2 and their centre of mass (m1 + m2) Pc, located where l1 m1 = l2 m2.
We can also easily express this equation in terms of the position vectors of the weighted points.
Σ mi Pi = Σ mi (𝕆 + xi) = (Σ mi) (𝕆 + (Σ mi xi)/(Σ mi))

Thus the sum of a number of mass-weighted points mi Pi is equivalent to the point 𝕆 + (Σ mi xi)/(Σ mi) weighted by the total mass Σ mi. A numerical example is included later in this section.
As will be seen in Section 4.4 below, a weighted point may also be viewed as a bound scalar.
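The weighted-point arithmetic above is easy to check numerically. The following sketch uses plain coordinate arithmetic; the masses and positions are illustrative and not taken from the text:

```python
import numpy as np

# Mass-weighted points m_i * P_i, each P_i = O + x_i, represented here
# simply by a mass m_i and a position vector x_i (illustrative values).
masses = np.array([2.0, 3.0, 5.0])
positions = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 2.0]])

# Sum of weighted points = (total mass) * (O + weighted mean of positions).
total_mass = masses.sum()
centre = (masses[:, None] * positions).sum(axis=0) / total_mass

print(total_mass)   # 10.0
print(centre)       # [1.2 1. ]
```

The centre of mass is independent of the origin chosen, just as the algebraic identity above suggests: only the weights and the position vectors enter the formula.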
Historical Note
Sir William Rowan Hamilton in his Lectures on Quaternions [Hamilton 1853] was the first to introduce the notion of vector as 'carrier' of points.
... I regard the symbol B-A as denoting "the step from A to B": namely, that step by making which, from the given point A, we should reach or arrive at the sought point B; and so determine, generate, mark or construct that point. This step, (which we always suppose to be a straight line) may also in my opinion be properly called a vector; or more fully, it may be called "the vector of the point B from the point A": because it may be considered as having for its office, function, work, task or business, to transport or carry (in Latin vehere) a moveable point, from the given or initial position A, to the sought or final position B.
We do not depict the basis vectors at right angles because the space does not yet have a metric, and orthogonality is a metric concept. As always, you can confirm your currently declared basis by checking the status pane at the top of the palette. We may often precede a calculation with one of these DeclareBasis aliases followed by a semicolon. This accomplishes the declaration of the basis but for brevity suppresses the confirming output. For example, below we declare a bound 3-space, and then compose a palette of the basis of the algebra constructed on it.
𝕆3; BasisPalette
Basis Palette
Λ0: 1
Λ1: 𝕆, e1, e2, e3
Λ2: 𝕆∧e1, 𝕆∧e2, 𝕆∧e3, e1∧e2, e1∧e3, e2∧e3
Λ3: 𝕆∧e1∧e2, 𝕆∧e1∧e3, 𝕆∧e2∧e3, e1∧e2∧e3
Λ4: 𝕆∧e1∧e2∧e3
Of course, you can declare your own bound vector space basis if you wish. For example if you want the vector basis elements to be i, j, and k, you could enter
DeclareBasis[{i, j, k, 𝕆}]
{𝕆, i, j, k}
(DeclareBasis will always rearrange the ordering to make the origin 𝕆 come first).
You can quickly compose any vectors in this basis with ComposeVector or its palette alias. The placeholder is used for entering the symbol upon which you want the coefficients to be based. Here we choose a.
Va = ComposeVector[a]
a1 e1 + a2 e2 + a3 e3

ComposeVector automatically declares the ai to be scalar symbols.
If you enter the alias by itself, you get an expression with placeholders in which you can enter your own coefficients just by clicking on any placeholder, and tabbing through them.
⎕ e1 + ⎕ e2 + ⎕ e3
Note however that in this case if you want any symbolic coefficients you enter to be recognized as scalar symbols, you would need to ensure this yourself.
Composing points
Similarly, to compose a point you can use ComposePoint or its palette alias.
Pb = ComposePoint[b]
𝕆 + b1 e1 + b2 e2 + b3 e3

Entering the alias by itself gives the placeholder form 𝕆 + ⎕ e1 + ⎕ e2 + ⎕ e3.
Now you can use the points and vectors you have composed in expressions. For example, here we add Pb and 2 Va, then use GrassmannSimplify to collect their coefficients.
GrassmannSimplify[Pb + 2 Va]
𝕆 + (2 a1 + b1) e1 + (2 a2 + b2) e2 + (2 a3 + b3) e3
Suppose M is the sum of four mass-weighted points Mi:

M = Σi Mi = 5 (𝕆 + 4 e1 + 2 e2 - 9 e3) + 7 (𝕆 - 5 e1 + 3 e2 - 6 e3) + 2 (𝕆 + e1 + 3 e2 - 4 e3) + 4 (𝕆 + 2 e1 - e2 - 2 e3)
Simplifying this gives a weighted point with weight 18, the scalar attached to the origin.
M = GrassmannSimplify[M]
18 𝕆 - 5 e1 + 33 e2 - 103 e3
To take the weight out as a factor, that is, expressing the result in the form weight × point, we can use ToWeightedPointForm.
M = ToWeightedPointForm[M]
18 (𝕆 - (5 e1)/18 + (11 e2)/6 - (103 e3)/18)
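The numerical example above can be verified with ordinary arithmetic. This sketch takes only the weights and coordinates from the example; the variable names are our own:

```python
from fractions import Fraction

# The four mass-weighted points M_i = m_i*(O + x_i) from the example above.
weights = [5, 7, 2, 4]
vectors = [(4, 2, -9), (-5, 3, -6), (1, 3, -4), (2, -1, -2)]

# Total weight (the coefficient of the origin O) and weighted coordinate sums.
total = sum(weights)
coords = [sum(m * v[k] for m, v in zip(weights, vectors)) for k in range(3)]
print(total, coords)   # 18 [-5, 33, -103]

# ToWeightedPointForm in effect factors the weight 18 out of the coordinates.
point = [Fraction(c, total) for c in coords]
print([str(f) for f in point])   # ['-5/18', '11/6', '-103/18']
```

The factored coordinates agree with the output of ToWeightedPointForm: -5/18, 11/6 (= 33/18) and -103/18.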
The four weighted points and their sum, the weighted point 18 Pc. If you are reading this notebook in Mathematica, you can rotate this graphic, and check the positions of the points by hovering your mouse over their symbols.
In the preceding section the elements of Λ1 have been given two geometric interpretations: that of a vector and that of a point. These interpretations in turn generate various other interpretations for the elements of Λ2:
1. x∧y (vector by vector).
2. P∧x (point by vector).
3. P1∧P2 (point by point).
However, P1∧P2 may be expressed as P1∧(P2 - P1), which is the product of a point and a vector, and thus reduces to the second case. There are thus two simple interpreted elements in Λ2: the bivector x∧y and the bound vector P∧x.
A note on terminology
The term bound as in the bound vector Px indicates that the vector x is conceived of as bound through the point P, rather than to the point P, since the latter conception would give the incorrect impression that the vector was located at the point P. By adhering to the terminology bound through, we get a slightly more correct impression of the 'freedom' that the vector enjoys. The term bound vector space will be used to denote a vector space to whose basis has been added an origin point. This should be read as bound vector-space, rather than bound-vector space. The term bound m-space will be used to denote an m-dimensional vector space to whose basis has been added an origin point.
Bivectors
Depicting bivectors
Earlier in this chapter, a vector was depicted graphically by a directed and sensed line segment supposed to be located nowhere in particular. In like manner we depict a simple bivector by depicting its two component vectors supposed located nowhere in particular. But to indicate that they are part of the same exterior product, the vectors are depicted conveniently docked head to tail in the order of the product, and 'joined' by a somewhat transparent parallelogram in the same colour, named by default with the exterior product symbol.
This bivector defines a planar 2-direction, the precise 2-dimensional analog of the unlocated vector. Its vectors are located nowhere, and it is located nowhere. The parallelogram should be viewed as an artifact which is used to visually tie the two vectors together in the exterior product. We motivate this choice by noting that if this bivector were in a metric space, the area of the parallelogram would correspond to the magnitude of the bivector. We will discuss this further in Chapter 6. If we change the sign of both vectors in the product, we do not change the bivector, but we get a different depiction.
Changing the signs of both vectors gives the same bivector.
Any bivector congruent to this bivector (that is, a scalar multiple of it) will represent the same 2-direction. Reversing the order of the factors in a bivector is equivalent to multiplying it by -1, producing an 'opposite orientation'.
This parallelogram depiction of the simple bivector is still misleading in a further major respect that did not arise for vectors. It incorrectly suggests a specific shape of the parallelogram. Indeed, since x∧y = x∧(x + y), another valid depiction of this simple bivector would be a parallelogram with sides constructed from vectors x and x + y.
Two depictions of the same bivector: x∧y and x∧(x + y).
However in what follows, just as we will usually depict a bivector docked in a location convenient for the discussion, so too we will usually depict it in the shape most convenient for the discussion at hand, usually one with the simplest factors.
In Chapter 6 a metric will be introduced onto the space. This will permit the definition of the measure of a vector (its length) and the measure of the simple bivector (its area). The measure of a simple bivector is geometrically interpreted as the area of the parallelogram formed by its two vectors. However, as demonstrated above, the bivector can be expressed in terms of an infinite number of pairs of vectors. Despite this, the simple geometric fact that they have the same base and height shows that parallelograms formed from all of them have the same area. For example, the areas of the parallelograms in the previous figure are the same. Thus the area definition of the measure of the bivector is truly an invariant measure. From this point of view the parallelogram depiction in a metric space is correctly suggestive, although the parallelogram is not of fixed shape.

In Chapter 6 it will be shown that this parallelogram-area notion of measure is simply the 2-dimensional case of a very much more general notion of measure applicable to entities of any grade.

A sum of simple bivectors is called a bivector. In two and three dimensions all bivectors are simple. This will have important consequences for our exploration of screw algebra in Chapter 7 and its application to mechanics in Chapter 8.

Earlier in the chapter it was remarked that a vector may be viewed as a 'carrier' of points. Analogously, a simple bivector may be viewed as a carrier of bound vectors. This view will be more fully explored in the next section.
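The invariance of the parallelogram area can be illustrated numerically. In a metric space the measure of x∧y equals the area |x × y|; the sketch below uses the cross product as a stand-in for the bivector's measure, with illustrative vectors:

```python
import numpy as np

# Area (measure) of the bivector formed by two vectors, via the cross product.
def bivector_area(x, y):
    return np.linalg.norm(np.cross(x, y))

x = np.array([3.0, 0.0, 0.0])
y = np.array([1.0, 2.0, 0.0])

# x^y and x^(x + y) are the same bivector, so the two parallelograms
# have the same area: same base x, same height.
print(bivector_area(x, y))        # 6.0
print(bivector_area(x, x + y))    # 6.0
```

Any other factorization x∧(a x + y) gives the same area, because adding a multiple of x to y leaves the cross product with x unchanged.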
Bound vectors
In mechanics the concept of force is paramount. In Chapter 8: Exploring Mechanics we will show that a force may be represented by a bound vector, and that a system of forces may be represented by a sum of bound vectors. It has already been shown that a bound vector may be expressed either as the product of a point with a vector or as the product of two points.
P1∧x = P1∧(P2 - P1) = P1∧P2,  where  P2 = P1 + x
The bound vector in the form P1∧x defines a line through the point P1 in the direction of x. Similarly, in the form P1∧P2 it defines the line through P1 and P2. Here we use the word 'define' in the context discussed in Chapter 2 in the section Spaces and congruence. To say that a bound vector B defines a line means that the line may be defined as the set of all points P which belong to the space of B, that is, such that B∧P = 0. A consequence of this definition is that any other bound vector congruent to B defines the same line.

We depict a bound vector as located in its line in either of two ways, by:
1. Two points and their vector difference.
2. A single point and a vector.
In both cases the vector indicates the order of the factors in the exterior product.
Point-vector depiction
Point-point depiction
These graphical depictions of the bound vector are each misleading in their own way. The first (point-vector) depiction suggests that the vector lies in the line. Since the vector is located nowhere, it is certainly not bound to the point. However, as a component of the bound vector, we will often find it convenient to imagine it docked in the line, and to speak of it as bound through the point. And the point, again as a component of the bound vector, can be anywhere in the line, since if P and P* are any two points in the line, P - P* is a vector of the same direction as x, and the bound vector can be expressed as either P∧x or P*∧x.
(P - P*)∧x = 0  ⟹  P∧x = P*∧x
Hence, although a lone vector is located nowhere, and a lone point is immoveably fixed, the bound vector formed by the exterior product of the two is bound to a line through the point in the direction of the vector. And to add to this representational conundrum, the bound vector has no specific location in the line. The following diagram depicts a bound vector docked in two different locations in the line.
A bound vector displaying its ability to slide along its line.
The second (point-point) depiction suggests that the depicted points are of specific importance over other points in the line. However, any pair of points in the line with the same vector difference may be used to express the bound vector. Let x = Q - P = Q* - P*; then
P∧x = P*∧x = P∧(Q - P) = P*∧(Q* - P*) = P∧Q = P*∧Q*
A bound vector displaying its ability to look like a sliding pair of points.
It has been mentioned in the previous section that a simple bivector may be viewed as a 'carrier' of bound vectors. To see this, take any bound vector P∧x and a bivector whose space contains x. The bivector may be expressed in terms of x and some other vector, y say, yielding y∧x. Thus:
P∧x + y∧x = (P + y)∧x = P*∧x
The geometric interpretation of the addition of such a simple bivector to a bound vector is then similar to that for the addition of a vector to a point, that is, a shift in position of the bound element.
You can quickly compose any bivectors in this basis with ComposeBivector or its palette alias. The placeholder is used for entering the symbol upon which you want the coefficients to be based. Here we choose a.
Ba = ComposeBivector[a]
a1 e1∧e2 + a2 e1∧e3 + a3 e1∧e4 + a4 e2∧e3 + a5 e2∧e4 + a6 e3∧e4
Just as with vectors and points, if you enter the alias by itself, you get an expression with placeholders in which you can enter your own coefficients. (But if you want to do further manipulation, make sure they are declared as scalar symbols).
⎕ e1∧e2 + ⎕ e1∧e3 + ⎕ e1∧e4 + ⎕ e2∧e3 + ⎕ e2∧e4 + ⎕ e3∧e4
The sum of two parallel bound vectors whose vectors are equal but of opposite sense.
Consider a space of any dimension. Let β1 = P∧x and β2 = Q∧y be two bound vectors. Then their sum β is
β = β1 + β2 = P∧x + Q∧y
Since the bound vectors are parallel we can write y equal to a x (where a is a scalar) so that
β = P∧x + Q∧y = P∧x + Q∧(a x) = (P + a Q)∧x
In the special case that y is equal to -x (that is, a is -1), β reduces to a bivector:

β = (P - Q)∧x
The sum of two bound vectors, whose vectors are equal but of opposite sense, is a bivector.
In mechanics (see Chapters 7 and 8) this is equivalent to two oppositely directed forces of equal magnitude reducing to a couple.
The sum of two parallel bound vectors is in general a bound vector parallel to them
Supposing now that a is not -1, then P + a Q is a weighted point of weight (1 + a) situated on the line joining P and Q. We can shift this weight to the vectorial term (so that the bound vector again becomes the product of a (pure) point and a vector) by writing β as
β = R∧((1 + a) x),  where  R = (P + a Q)/(1 + a)
The sum of two parallel bound vectors is a bound vector parallel to them.
In mechanics, this is equivalent to two parallel forces reducing to a resultant force parallel to them. If the two parallel forces are of equal magnitude (a is equal to 1), the resultant force passes through a point mid-way between them.
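This reduction can be checked with a small computation. The sketch below represents points and vectors in homogeneous coordinates on the basis (𝕆, e1, e2), and encodes the exterior product of two 1-elements as an antisymmetrized outer product; this is an illustrative encoding, not the GrassmannAlgebra package:

```python
import numpy as np

# A point P = O + p has first component 1; a vector has first component 0.
# The exterior product of two 1-elements, as an antisymmetric matrix.
def wedge(u, v):
    return np.outer(u, v) - np.outer(v, u)

P = np.array([1.0, 0.0, 0.0])    # P = O
Q = np.array([1.0, 4.0, 0.0])    # Q = O + 4 e1
x = np.array([0.0, 0.0, 1.0])    # x = e2
a = 3.0                          # y = a x, so the bound vectors are parallel

# Sum of the two parallel bound vectors P^x + Q^(a x) ...
S = wedge(P, x) + wedge(Q, a * x)

# ... equals the single bound vector R^((1+a) x), R = (P + a Q)/(1 + a).
R = (P + a * Q) / (1 + a)
assert np.allclose(S, wedge(R, (1 + a) * x))
print(R)   # [1. 3. 0.], i.e. O + 3 e1: on the line PQ, nearer the larger weight
```

With a = 1 (equal magnitudes) the same computation puts R mid-way between P and Q, matching the mechanics statement above.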
Because our two bound vectors are not parallel, the sum of their vectors cannot be zero. Hence the first term is a new bound vector which is not zero. The remaining terms are simple bivectors. In the general dimensional case, this sum of bivectors B may not be reducible to a simple bivector. In 3-dimensional space, the sum of bivectors B can always be reduced to a single simple bivector. In the plane, a point R can always be found which makes the sum zero, leaving us with the result that the sum of two bound vectors in the plane (whose sum of vectors is not zero) can always be reduced to a single bound vector.
The sum of two non-parallel bound vectors in the plane is a bound vector through their point of intersection.
This result applies of course, not only to bound vectors in the plane, but to any pair of intersecting bound vectors in a space of any dimension, since together they form a planar subspace. As we might expect, given any two bound vectors in the plane, we can check if they intersect or are parallel by extracting a common factor from their regressive product. If they intersect, their regressive product will give their point of intersection. If they are parallel, it will give their common vector. For example, if they intersect we have
𝕆2; B1 = (𝕆 + a x + b y)∧x; B2 = (𝕆 + c x + d y)∧y;
ToCommonFactor[B1 ∨ B2]
c (𝕆 + c x + b y)
To visualize how this works, we take a simple example. Define the two bound vectors as:
𝕆3; α = 𝕆∧e1; β = (𝕆 + 2 e2)∧e3
Choose a convenient point through which to define the resultant bound vector, say half way between the points used to define the component bound vectors.
γ = (𝕆 + e2)∧(e1 + e3)
The following depictions give two different views of the same summation. The original component bound vectors are shown with a somewhat ghostly opacity, while the resultant bound vector and bivector are shown in full-blooded colours.
We shall see in the next section that the sum of any number of bound vectors in 3-space may be reduced to the sum of a single bound vector and a single bivector. In Chapter 8: Exploring Mechanics, we will see that bound vectors correspond to forces and bivectors to moments. If the resultant bound vector force is chosen through a point which makes it and its bivector moment orthogonal, such a system of forces is called a wrench.
Suppose now that the vector of the bound vector is not a factor of the bivector. Then we have S = P∧x + y∧z, where x, y, and z are independent. The exterior square of S is then
S∧S = (P∧x + y∧z)∧(P∧x + y∧z) = 2 P∧x∧y∧z
This forms a straightforward test to see if the reduction of the sum to a single bound vector can be effected. If the exterior square is zero, the sum can be reduced to a single bound vector. If the exterior square is not zero, then the sum cannot be reduced to a single bound vector. We have already seen this property of exterior squares of 2-elements in our discussion of simplicity in Chapter 3.
Σ Pi∧xi = P∧(Σ xi) + Σ (Pi - P)∧xi    (4.1)

The first term on the right hand side is a bound vector. The second term is a bivector. This decomposition is clearly valid in a space of any dimension.
If Σ xi = 0 then the sum is a bivector. In spaces of vector dimension greater than 3, the bivector may not be simple. If Σ (Pi - P)∧xi = 0 for some P, then the sum is a bound vector. Alternatively, if (Σ Pi∧xi)∧(Σ Pi∧xi) = 0, then the sum is a bound vector. This transformation is of fundamental importance in our exploration of mechanics in Chapter 8.
Suppose B is the sum of four bound vectors:

B = Σi Bi = (𝕆 + 4 e1 + 2 e2 - 9 e3)∧(5 e3) + (𝕆 - 5 e1 + 3 e2 - 6 e3)∧(2 e1 + 3 e2) + (𝕆 + e1 + 3 e2 - 4 e3)∧(e1 - e3) + (𝕆 + 2 e1 - e2 - 2 e3)∧(e1 - e2 + e3)
By expanding these products, simplifying and collecting terms, we obtain the sum of a bound vector (through the origin) 𝕆∧(4 e1 + 2 e2 + 5 e3) and a bivector -25 e1∧e2 + 39 e1∧e3 + 22 e2∧e3. We can use GrassmannExpandAndSimplify to do the computations for us.
B = GrassmannExpandAndSimplify[B]
𝕆∧(4 e1 + 2 e2 + 5 e3) - 25 e1∧e2 + 39 e1∧e3 + 22 e2∧e3
We could just as well have expressed this 2-element as bound through (for example) the point 𝕆 + e1. To do this, we simply add e1∧(4 e1 + 2 e2 + 5 e3) to the bound vector and subtract it from the bivector to get:
(𝕆 + e1)∧(4 e1 + 2 e2 + 5 e3) - 27 e1∧e2 + 34 e1∧e3 + 22 e2∧e3
We can of course also express the bivector in factored form in many ways. Using ExteriorFactorize (discussed in Chapter 2) gives a default result
ExteriorFactorize[-27 e1∧e2 + 34 e1∧e3 + 22 e2∧e3]
-27 (e1 + (22 e3)/27)∧(e2 - (34 e3)/27)
Finally, we can check to see if the sum can be reduced to a bound vector by calculating its exterior square.
GrassmannExpandAndSimplify[B∧B]
-230 𝕆∧e1∧e2∧e3
Since this is not zero, the reduction of the sum to a single bound vector is not possible in this case.
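The whole computation, the reduction of B and its exterior square, can be replicated outside Mathematica. The sketch below encodes 1-elements as coefficient vectors on the basis (𝕆, e1, e2, e3) and 2-elements as antisymmetric matrices; this is an illustrative encoding, not the GrassmannAlgebra package:

```python
import numpy as np

# Basis order: (O, e1, e2, e3). The exterior product of two 1-elements is the
# antisymmetric matrix B[i, j] = u[i] v[j] - u[j] v[i].
def wedge1(u, v):
    return np.outer(u, v) - np.outer(v, u)

# The four points and vectors from the example above.
points = np.array([[1, 4, 2, -9], [1, -5, 3, -6], [1, 1, 3, -4], [1, 2, -1, -2]], float)
vectors = np.array([[0, 0, 0, 5], [0, 2, 3, 0], [0, 1, 0, -1], [0, 1, -1, 1]], float)

B = sum(wedge1(p, v) for p, v in zip(points, vectors))

# Components B[0, k] give the bound vector O^(4 e1 + 2 e2 + 5 e3) ...
print(B[0, 1:])                    # [4. 2. 5.]
# ... and B[i, j], 1 <= i < j, the bivector -25 e1^e2 + 39 e1^e3 + 22 e2^e3.
print(B[1, 2], B[1, 3], B[2, 3])   # -25.0 39.0 22.0

# Exterior square B^B: the coefficient of O^e1^e2^e3 for a 2-element in
# four dimensions is 2*(B01*B23 - B02*B13 + B03*B12).
square = 2 * (B[0, 1] * B[2, 3] - B[0, 2] * B[1, 3] + B[0, 3] * B[1, 2])
print(square)                      # -230.0: non-zero, so B is not a single bound vector
```

The exterior-square coefficient -230 agrees with the Mathematica output, confirming that this sum of bound vectors is not reducible to a single bound vector.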
A simple element which is a product of points and vectors may always be reduced to the product of a point and a simple (m-1)-vector by subtracting one of the points from all of the others. For example, take the exterior product of three points and two vectors. By subtracting the first point, say, from the other two, we can cast the product into the form of a bound simple 4-vector.
P∧P1∧P2∧x∧y = P∧(P1 - P)∧(P2 - P)∧x∧y
1. x1∧x2∧…∧xm (the simple m-vector).
2. P∧x2∧…∧xm (the bound simple (m-1)-vector).
A sum of simple m-vectors is called an m-vector. If α is a (not necessarily simple) m-vector, then P∧α is called a bound m-vector.
A sum of bound m-vectors may always be reduced to the sum of a bound m-vector and an (m+1)-vector (with the proviso that either or both may be zero). These interpreted elements and their relationships will be discussed further in the following sections.
The m-vector
The simple m-vector, or multivector, is the multidimensional equivalent of the vector. As with a vector, it does not have the property of location. The m-dimensional vector space of a simple m-vector may be used to define the multidimensional direction of the m-vector. If we had access to m-dimensional paper, we would depict the simple m-vector, just as we did for the bivector, by depicting its vectors conveniently docked head to tail in the order of the product, and 'joined' by a somewhat transparent m-dimensional parallelepiped in the same colour. In practice of course, we will only be depicting vectors, bivectors and trivectors. A trivector may be depicted by its three vectors in order joined by a parallelepiped.
The orientation is given by the order of the factors in the simple m-vector. An interchange of any two factors produces an m-vector of opposite orientation. By the anti-symmetry of the exterior product, there are just two distinct orientations.

In Chapter 6: The Interior Product, it will be shown that the measure of a trivector may be geometrically interpreted as the volume of this parallelepiped. However, the depiction of the simple trivector in the manner above suffers from similar defects to those already described for the bivector: namely, it incorrectly suggests a specific location and shape of the parallelepiped.

A simple m-vector may also be viewed as a carrier of bound simple (m-1)-vectors in a manner analogous to that already described for the simple bivector as a carrier of bound vectors. Thus, a simple trivector may be viewed as a carrier for bound simple bivectors. (See below for a depiction of this.)

A sum of simple m-vectors (that is, an m-vector) is not necessarily reducible to a simple m-vector, except in Λ0, Λ1, Λn-1 and Λn.
Consider a sum of bound m-vectors Pi∧αi, and write each point as Pi = P + bi:

Σ Pi∧αi = Σ (P + bi)∧αi = P∧(Σ αi) + Σ bi∧αi    (4.2)

The first term P∧(Σ αi) is a bound m-vector providing that Σ αi ≠ 0, and the second term is an (m+1)-vector.

When m = 0, a bound 0-vector or bound scalar P∧a (= a P) is seen to be equivalent to a weighted point. When m = n (n being the dimension of the underlying vector space), any bound n-vector is but a scalar multiple of the basis (n+1)-element. (The default basis (n+1)-element is denoted 𝕆∧e1∧e2∧…∧en.)
The graphical depiction of bound simple m-vectors presents even greater difficulties than those already discussed for bound vectors. As in the case of the bound vector, the point used to express the bound simple m-vector is not unique. In Section 4.6 we will see how bound simple m-vectors may be used to define multiplanes.
Conversely, as we have already seen, a product of m+1 points may always be expressed as the product of a point and a simple m-vector by subtracting one of the points from all of the others. The m-vector of a bound simple m-vector P0∧P1∧P2∧…∧Pm may thus be expressed in terms of these points as:
(P1 - P0)∧(P2 - P0)∧…∧(Pm - P0)
A particularly symmetrical formula results from the expansion of this product reducing it to a form no longer showing preference for P0 .
(P1 - P0)∧(P2 - P0)∧…∧(Pm - P0) = Σi=0..m (-1)^i P0∧P1∧…∧P̂i∧…∧Pm    (4.3)

where P̂i denotes that the factor Pi is omitted from the product.
Consider three points: P, Q = P + x, and R = P + y.
We can construct the same bound bivector β from these points in a number of ways:
β = P∧Q∧R = Q∧R∧P = R∧P∧Q
Thus we have nine (inessentially different) ways of constructing or denoting the same bound bivector. From here on we will usually only discuss bound bivectors in any of the three conceptually distinct forms in which points are denoted before vectors:
β = P∧Q∧R = P∧Q∧y = P∧x∧y
We depict each of these forms in a slightly different way: in each case emphasizing the entities involved in the form, but in all cases showing the order of factors in the product by the chain of arrows. (These graphical conventions will also apply to our discussion of bound trivectors later.)
Point-vector-vector
Point-point-vector
Point-point-point
The bound bivector β defines a plane through the points P, Q and R in the directions of x and y. Here we use the word 'define' in the context discussed in Chapter 2 in the section Spaces and congruence. To say that a bound bivector β defines a plane means that the plane may be defined as the set of all points P which belong to the space of β, that is, such that β∧P = 0. A consequence of this definition is that any other bound bivector congruent to β defines the same plane.

These graphical depictions of the bound bivector are each misleading in their own way. Since the bound bivector defines a plane, we can conceive of it as lying in the plane. However, just as a bound vector can lie anywhere in its line, a bound bivector can lie anywhere in its plane. The following diagram depicts a bound bivector docked in three different locations in its plane. The depictions each have the same vectors, but differ in the point of the plane they use.
Hence, although a lone bivector is located nowhere, and a lone point is immoveably fixed, the bound bivector formed by the exterior product of the two is bound to a plane through the point in the direction of the bivector. It has been shown earlier that a simple bivector may be viewed as a 'carrier' of bound vectors. In a similar manner, a simple trivector may be viewed as a 'carrier' of bound bivectors. To see this, take any bound bivector P∧y∧z and a trivector x∧y∧z. Thus:
P∧y∧z + x∧y∧z = (P + x)∧y∧z
The geometric interpretation of the addition of such a simple trivector to a bound bivector is then similar to that for the addition of a vector to a point, that is, a shift in position of the bound element.
Even though we are in a bound vector space we can still compose m-vectors with ComposeMVector. In a bound 3-space, we have three possible m-vectors. Here we base the coefficients on the symbol a.
Table[{m, ComposeMVector[m, a]}, {m, 1, 3}] // TableForm
1   a1 e1 + a2 e2 + a3 e3
2   a1 e1∧e2 + a2 e1∧e3 + a3 e2∧e3
3   a e1∧e2∧e3
The point space of a simple m-vector is empty. The vector space of a simple m-vector is an m-dimensional vector space. Conversely, the m-dimensional vector space may be said to be defined by the m-vector.

The point space of a bound simple m-vector is called an m-plane (sometimes multiplane). Thus the point space of a bound vector is a 1-plane (or line) and the point space of a bound simple bivector is a 2-plane (or, simply, a plane). The m-plane will be said to be defined by the bound simple m-vector. The vector space of a bound simple m-vector is an m-dimensional vector space.

The geometric interpretation for the notion of set inclusion is taken as 'to lie in'. Thus for example, a point may be said to lie in an m-plane. The point and vector spaces for a bound simple m-element are tabulated below.
m      Bound simple m-vector        Point space    Vector space
0      bound scalar                 point          —
1      bound vector                 line           1-dimensional vector space
2      bound simple bivector        plane          2-dimensional vector space
m      bound simple m-vector        m-plane        m-dimensional vector space
n-1    bound simple (n-1)-vector    hyperplane     (n-1)-dimensional vector space
n      bound n-vector               n-plane        n-dimensional vector space
Two congruent bound simple m-vectors P∧α and a P∧α define the same m-plane. Thus, for example, the point 𝕆 + x and the weighted point 2 (𝕆 + x) define the same point.
Thus we may, when speaking in a geometric context, refer to a bound simple m-vector as an m-plane and vice versa. Hence in saying P is an m-plane, we are also saying that any a P (where a is a scalar factor, not zero) is the same m-plane.
Coordinate spaces
The coordinate spaces of a Grassmann algebra are the spaces defined by the basis elements. The coordinate m-spaces are the spaces defined by the basis elements of Λm. For example, if Λ1 has basis {𝕆, e1, e2, e3}, that is, we are working in a bound vector 3-space, then the Grassmann algebra it generates has basis:

𝕆3; LBasis
{{1}, {𝕆, e1, e2, e3}, {𝕆∧e1, 𝕆∧e2, 𝕆∧e3, e1∧e2, e1∧e3, e2∧e3}, {𝕆∧e1∧e2, 𝕆∧e1∧e3, 𝕆∧e2∧e3, e1∧e2∧e3}, {𝕆∧e1∧e2∧e3}}
Each one of these basis elements defines a coordinate space. Most familiar are the coordinate m-planes. The coordinate 1-planes 𝕆∧e1, 𝕆∧e2, 𝕆∧e3 define the coordinate axes, while the coordinate 2-planes 𝕆∧e1∧e2, 𝕆∧e1∧e3, 𝕆∧e2∧e3 define the coordinate planes. Additionally however there are the coordinate vectors e1, e2, e3 and the coordinate bivectors e1∧e2, e1∧e3, e2∧e3. Perhaps less familiar is the fact that there are no coordinate m-planes in a vector space, but rather simply coordinate m-vectors.
Geometric dependence
In Chapter 2 the notion of dependence was discussed for elements of a linear space. Non-zero 1-elements are said to be dependent if and only if their exterior product is zero. If the elements concerned have been endowed with a geometric interpretation, the notion of dependence takes on an additional geometric interpretation, as the following table shows.
x1∧x2 = 0            x1, x2 are parallel (co-directional)
P1∧P2 = 0            P1, P2 are coincident
x1∧x2∧x3 = 0         x1, x2, x3 are co-2-directional (or parallel)
P1∧P2∧P3 = 0         P1, P2, P3 are collinear (or coincident)
x1∧…∧xm = 0          x1, …, xm are co-k-directional, k < m
P1∧…∧Pm = 0          P1, …, Pm are co-k-planar, k < m - 1
Thus, co-directionality and coincidence, while being distinct interpretive geometric notions, are equivalent algebraically.
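The algebraic equivalence of coincidence and co-directionality can be seen numerically: in homogeneous coordinates both tests reduce to a vanishing exterior product, here evaluated as a determinant. A small sketch with illustrative coordinates:

```python
import numpy as np

# Points in homogeneous form (O, e1, e2): P = O + x. Three points are
# collinear exactly when P1^P2^P3 = 0, i.e. a 3x3 determinant vanishes.
P1 = np.array([1.0, 0.0, 0.0])
P2 = np.array([1.0, 1.0, 1.0])
P3 = np.array([1.0, 2.0, 2.0])   # on the line through P1 and P2

print(np.linalg.det(np.stack([P1, P2, P3])))   # ~0 -> collinear

# The corresponding vector statement: x1, x2 parallel iff x1^x2 = 0.
x1 = np.array([1.0, 1.0])
x2 = np.array([3.0, 3.0])
print(x1[0] * x2[1] - x1[1] * x2[0])           # 0.0 -> parallel
```

The same determinant, applied to vectors instead of points, tests co-directionality, which is the algebraic equivalence the table expresses.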
Geometric duality
The concept of duality introduced in Chapter 3 is most striking when interpreted geometrically. Suppose:
P denotes a point, L a line, p a plane, and V a 3-plane.
In what follows we tabulate the dual relationships of these entities to each other.
Duality in a plane
In a plane there are just three types of geometric entity: points, lines and planes. In the table below we can see that in the plane, points and lines are 'dual' entities, and planes and scalars are 'dual' entities, because their definitions convert under the application of the Duality Principle.
L = P1∧P2          P = L1∨L2
p = P1∧P2∧P3       1 = L1∨L2∨L3
p = L∧P            1 = P∨L
Duality in a 3-plane
In the 3-plane there are just four types of geometric entity: points, lines, planes and 3-planes. In the table below we can see that in the 3-plane, lines are self-dual, points and planes are now dual, and scalars are now dual to 3-planes.
L = P1∧P2             L = p1∨p2
p = P1∧P2∧P3          P = p1∨p2∨p3
V = P1∧P2∧P3∧P4       1 = p1∨p2∨p3∨p4
p = L∧P               P = L∨p
V = L1∧L2             1 = L1∨L2
V = p∧P               1 = P∨p
Duality in an n-plane
From these cases the types of relationships in higher dimensions may be composed straightforwardly. For example, if P defines a point and H defines a hyperplane (an (n-1)-plane), then we have the dual formulations:
H = P1∧P2∧…∧Pn-1          P = H1∨H2∨…∨Hn-1
4.6 m-Planes
In an earlier section m-planes were defined as point spaces of bound simple m-vectors. In this section m-planes will be considered from three other aspects: the first in terms of a simple exterior product of points, the second as an m-vector and the third as an exterior quotient.
A point P lies in the m-plane defined by the bound simple m-vector P0∧x1∧x2∧…∧xm if and only if P∧P0∧x1∧x2∧…∧xm = 0. This equation is equivalent to the statement: there exist scalars a, a0, ai, not all zero, such that:
a P + a0 P0 + Σ ai xi = 0
And since this is only possible if a = -a0 (since for the sum to be zero, the point terms must cancel, leaving a sum of vectors), we can rewrite the condition as:
a (P - P0) + Σ ai xi = 0
or equivalently:
(P - P0)∧x1∧x2∧…∧xm = 0
We are thus led to the following alternative definition of an m-plane: an m-plane defined by the bound simple m-vector P0∧x1∧x2∧…∧xm is the set of points:
8P : HP - P0 L x1 x2 ! xm 0<
This is of course equivalent to the usual definition of an m-plane. That is, since the vectors HP - P0 L, x1 , x2 , !, xm are dependent, then for scalar parameters ti :
HP - P0 L t1 x1 + t2 x2 + ! + t m x m
(4.4)
'Solving' for P, and noting from Section 2.11 that the quotient of an (m+1)-element by an m-element contained in it is a 1-element with m arbitrary scalar parameters, we can write:

P ≡ (P0 ∧ x1 ∧ x2 ∧ ⋯ ∧ xm) / (x1 ∧ x2 ∧ ⋯ ∧ xm) ≡ P0 + t1 x1 + t2 x2 + ⋯ + tm xm
(4.5)
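The membership condition (P − P0) ∧ x1 ∧ ⋯ ∧ xm ≡ 0 can be checked numerically: for 1-elements it is simply the vanishing of a determinant. The following is a minimal Python sketch (outside the GrassmannAlgebra package), with an invented 2-plane in a 3-dimensional vector space.

```python
# Hypothetical numeric example (not from the text): a 2-plane through the
# point P0 with direction vectors x1 and x2, inside a 3-dimensional vector
# space.  A point P lies in the plane exactly when the vectors P - P0, x1,
# x2 are linearly dependent, i.e. (P - P0) ^ x1 ^ x2 = 0, which for three
# 1-elements is a vanishing 3x3 determinant.
def det3(u, v, w):
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

P0, x1, x2 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def in_plane(P):
    d = tuple(p - p0 for p, p0 in zip(P, P0))
    return det3(d, x1, x2) == 0

# P0 + t1 x1 + t2 x2 lies in the plane for any parameters t1, t2 ...
assert in_plane((1, 2, -3))
# ... while a point with P - P0 outside the span of x1, x2 does not.
assert not in_plane((2, 0, 0))
```

The parametrized form P0 + t1 x1 + t2 x2 and the determinant condition are equivalent descriptions of the same point set.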
Points
The exterior quotient of a weighted point by its weight gives (as would be expected) the (pure) point.

ExteriorQuotient[ (a P) / a ]
P
Points in a line
The exterior quotient of a bound vector with its vector gives all the points in the line defined by the bound vector, parametrized by the scalar t1 .
ExteriorQuotient[ (P ∧ x) / x ]
P + t1 x
We can see that any other point used to define the bound vector gives effectively the same parametrization.
ExteriorQuotient[ ((P + a x) ∧ x) / x ]
P + a x + t1 x
Points in a plane
The exterior quotient of a bound simple bivector with its simple bivector gives all the points in the plane defined by the bound simple bivector, parametrized by the scalars t1 and t2 .
ExteriorQuotient[ (P ∧ x ∧ y) / (x ∧ y) ]
P + t1 x + t2 y
Points in a 3-plane
The exterior quotient of a bound simple trivector with its simple trivector gives all the points in the 3-plane defined by the bound simple trivector, parametrized by the scalars t1 , t2 and t3 .
ExteriorQuotient[ (P ∧ x ∧ y ∧ z) / (x ∧ y ∧ z) ]
P + t1 x + t2 y + t3 z
Lines in a plane
The exterior quotient of a bound simple bivector with a vector in it gives a set of bound vectors. These bound vectors define lines in the plane of the bound simple bivector, parametrized by the scalars t1 and t2 . They are characterized by the fact that their exterior product with the vector gives the original bound simple bivector.
ExteriorQuotient[ (P ∧ x ∧ y) / y ]
(P + t1 y) ∧ (x + t2 y)
Lines in a 3-plane
The exterior quotient of a bound simple trivector with a bivector in it gives a set of bound vectors. These bound vectors define lines in the 3-plane of the bound simple trivector, parametrized by the scalars t1 , t2 , t3 and t4 . They are characterized by the fact that their exterior product with the bivector gives the original bound simple trivector.
ExteriorQuotient[ (P ∧ x ∧ y ∧ z) / (y ∧ z) ]
(P + t1 y + t2 z) ∧ (x + t3 y + t4 z)
Planes in a 3-plane
The exterior quotient of a bound simple trivector with a vector in it gives a set of bound bivectors. These bound bivectors define planes in the 3-plane of the bound simple trivector, parametrized by the scalars t1, t2 and t3. They are characterized by the fact that their exterior product with the vector gives the original bound simple trivector.
ExteriorQuotient[ (P ∧ x ∧ y ∧ z) / z ]
(P + t1 z) ∧ (x + t2 z) ∧ (y + t3 z)
This property of nilpotence is shared by the boundary operator of algebraic topology and the exterior derivative. Furthermore, if a product with a given 1-element is considered an operation, then the exterior, regressive and interior products are all likewise nilpotent. In Chapter 6 we will show that under the hybrid metric we will adopt for bound vector spaces (in which the origin is orthogonal to all vectors), taking the interior product of any bound element with the origin 𝒪 has the same effect as this operator. Applied to a bound m-vector, it generates the m-vector, thus creating a 'free' entity from a bound one. Applied a second time to the result, it gives zero, since the m-vector is already free.
Another way of generating the m-vector from a bound m-vector is to compose the first co-span of the bound m-vector, and then sum the elements. Suppose we have a bound trivector W in a bound m-space (m ≥ 3):

W = P1 ∧ P2 ∧ P3 ∧ P0;

Declare the Pi as 'vector' symbols (that is, as 1-elements), and sum the elements of the first co-span of W:

V1 = P0∧P1∧P2 − P0∧P1∧P3 + P0∧P2∧P3 − P1∧P2∧P3

We can see this has the simple form we expected by factorizing it.

V2 = ExteriorFactorize[V1]
(P0 − P3) ∧ (P1 − P3) ∧ (P2 − P3)

Applying the summed first co-span operation again to this result gives zero, as do the other operators.
(4.6)
Lines in a plane
To explore lines in a plane, we first declare the basis of the plane to be that of the 2-plane:

{𝒪, e1, e2}
A line in a plane can be written in several forms. The most intuitive form perhaps is as a product of two points 𝒪 + x and 𝒪 + y, where 𝒪 is the origin and x and y are position vectors.
L ≡ (𝒪 + x) ∧ (𝒪 + y)
We can automatically generate a basis form for each of the points by using the GrassmannAlgebra ComposePoint function.

L ≡ (𝒪 + e1 x1 + e2 x2) ∧ (𝒪 + e1 y1 + e2 y2)
Or, we can express the line as the product of any point in it and a vector parallel to it. For example:
L ≡ (𝒪 + x) ∧ (y − x) ≡ (𝒪 + y) ∧ (y − x)
If we want to compose a line element in this form, we can also use ComposeLineElement. By leaving the placeholders unfilled we get a template for entering our own scalar coefficients.
(𝒪 + ▫ e1 + ▫ e2) ∧ (▫ e1 + ▫ e2)
Alternatively, we can express L without specific reference to points in it. For example:
L ≡ 𝒪 ∧ (a e1 + b e2) + c e1∧e2
The first term 𝒪 ∧ (a e1 + b e2) is a vector bound through the origin, and hence defines a line through the origin. The second term c e1∧e2 is a bivector whose addition represents a shift of the line parallel to itself, away from the origin.
We know that this can indeed represent a line, since we can factorize it into any of the following forms. Each is a line of gradient b/a.

As the product of the point 𝒪 + (c/b) e1 and the direction a e1 + b e2:

L ≡ (𝒪 + (c/b) e1) ∧ (a e1 + b e2)

As the product of the point 𝒪 − (c/a) e2 and the same direction:

L ≡ (𝒪 − (c/a) e2) ∧ (a e1 + b e2)

As a weighted product of its two points of intersection with the coordinate axes:

L ≡ (a b / c) (𝒪 − (c/a) e2) ∧ (𝒪 + (c/b) e1)
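To see concretely that these factorized forms all reproduce L ≡ 𝒪 ∧ (a e1 + b e2) + c e1∧e2, here is a minimal numeric sketch in Python (outside the GrassmannAlgebra package); the coefficients a, b, c are invented test values.

```python
from fractions import Fraction as F

# 1-elements of the plane on the basis {O, e1, e2}: (w, u1, u2) means
# w O + u1 e1 + u2 e2.  The exterior product of two 1-elements is a
# 2-element with components on O^e1, O^e2 and e1^e2.
def wedge(p, q):
    (w1, u1, u2), (w2, v1, v2) = p, q
    return (w1 * v1 - w2 * u1, w1 * v2 - w2 * u2, u1 * v2 - u2 * v1)

a, b, c = F(2), F(3), F(5)                   # invented nonzero coefficients

# Target: O^(a e1 + b e2) + c e1^e2.
t = wedge((1, 0, 0), (0, a, b))
target = (t[0], t[1], t[2] + c)

f1 = wedge((1, c / b, 0), (0, a, b))         # point O + (c/b) e1, direction
f2 = wedge((1, 0, -c / a), (0, a, b))        # point O - (c/a) e2, direction
p = wedge((1, 0, -c / a), (1, c / b, 0))     # product of the axis intercepts
f3 = tuple(a * b / c * k for k in p)         # ... weighted by ab/c

assert f1 == f2 == f3 == target
```

Exact rational arithmetic is used so that the three forms agree identically, not merely to floating-point tolerance.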
Lines in a 3-plane
Our normal notion of three-dimensional space corresponds to a 3-plane. Lines in a 3-plane have the same form when expressed in coordinate-free notation as they do in a plane (2-plane). Remember that a 3-plane is a bound vector 3-space whose basis may be chosen as 3 independent vectors and a point, or equivalently as 4 independent points. For example, we can still express a line in a 3-plane in any of the following equivalent forms.
L ≡ (𝒪 + x) ∧ (𝒪 + y)
L ≡ (𝒪 + x) ∧ (y − x)
L ≡ (𝒪 + y) ∧ (y − x)
L ≡ 𝒪 ∧ (y − x) + x ∧ y
Here, x and y are independent vectors in the 3-plane. The coordinate form however will appear somewhat different to that in the 2-plane case. To explore this, we redeclare the basis for the 3-plane, and use ComposePoint to define a line as the product of two points.
{𝒪, e1, e2, e3}
L = (𝒪 + e1 x1 + e2 x2 + e3 x3) ∧ (𝒪 + e1 y1 + e2 y2 + e3 y3)
We can expand and simplify this product with GrassmannExpandAndSimplify.
L = 𝒪 ∧ (e1 (y1 − x1) + e2 (y2 − x2) + e3 (y3 − x3)) + (x1 y2 − x2 y1) e1∧e2 + (x1 y3 − x3 y1) e1∧e3 + (x2 y3 − x3 y2) e2∧e3
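The coefficients in this expansion are just the 2×2 minors of the matrix formed by the homogeneous coordinates of the two points, which is easy to check numerically. A minimal Python sketch (outside the GrassmannAlgebra package, with invented sample points):

```python
import itertools

# Plücker coordinates of the line through two points of a 3-plane, written
# in homogeneous coordinates (1, x1, x2, x3) on the basis {O, e1, e2, e3}:
# the six 2x2 minors of the 2x4 matrix formed by the two points.
def pluecker(x, y):
    px, py = (1,) + tuple(x), (1,) + tuple(y)
    return {(i, j): px[i] * py[j] - px[j] * py[i]
            for i, j in itertools.combinations(range(4), 2)}

x, y = (1, 2, 3), (4, 0, 1)                  # invented sample points
c = pluecker(x, y)
# The coefficients of O^e1, O^e2, O^e3 give the direction y - x ...
assert (c[(0, 1)], c[(0, 2)], c[(0, 3)]) == (3, -2, -2)
# ... and the coefficient of e1^e2 is x1 y2 - x2 y1, here 1*0 - 2*4 = -8.
assert c[(1, 2)] == -8
```

The index pair (0, j) marks a coefficient bound to the origin; the pairs (i, j) with i, j ≥ 1 give the free bivector part.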
The scalar coefficients in this expanded product are sometimes called the Plücker coordinates of the line. Alternatively, we can express L in terms of basis elements, but without specific reference to points or vectors in it. For example:
L = 𝒪 ∧ (a e1 + b e2 + c e3) + d e1∧e2 + e e2∧e3 + f e1∧e3;
The first term 𝒪 ∧ (a e1 + b e2 + c e3) is a vector bound through the origin, and hence defines a line through the origin. The second term d e1∧e2 + e e2∧e3 + f e1∧e3 is a bivector whose addition represents a shift of the line parallel to itself, away from the origin. In order to effect this shift, however, it is necessary that the bivector contain the vector a e1 + b e2 + c e3. Hence there will be some constraint on the coefficients d, e, and f. To determine it we only need the condition that the exterior product of the vector and the bivector is zero.
(a e1 + b e2 + c e3) ∧ (d e1∧e2 + e e2∧e3 + f e1∧e3) ≡ 0
(c d + a e − b f) e1∧e2∧e3 ≡ 0
Alternatively, this constraint amongst the coefficients could have been obtained by noting that in order to be a line, L must be simple, hence the exterior product with itself must be zero.
L ∧ L ≡ 0
(2 c d + 2 a e − 2 b f) 𝒪∧e1∧e2∧e3 ≡ 0
Thus the constraint that the coefficients must obey in order for a general bound vector of the form L to be a line in a 3-plane is that:
c d + a e − b f = 0
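This constraint can be checked numerically: coordinates derived from any pair of points automatically satisfy it. A short Python sketch (outside the GrassmannAlgebra package, with invented sample points):

```python
# For the coordinates (a, b, c, d, e, f) of
#   L = O^(a e1 + b e2 + c e3) + d e1^e2 + e e2^e3 + f e1^e3
# derived from two points, c d + a e - b f = 0 always holds.
# (Points are given by their coordinates on {O, e1, e2, e3}.)
def line_coeffs(x, y):
    a, b, c = (y[i] - x[i] for i in range(3))
    d = x[0] * y[1] - x[1] * y[0]            # coefficient of e1^e2
    e = x[1] * y[2] - x[2] * y[1]            # coefficient of e2^e3
    f = x[0] * y[2] - x[2] * y[0]            # coefficient of e1^e3
    return a, b, c, d, e, f

for x, y in [((1, 2, 3), (4, 0, 1)), ((0, 5, -2), (7, 7, 7))]:
    a, b, c, d, e, f = line_coeffs(x, y)
    assert c * d + a * e - b * f == 0
```

Conversely, six arbitrary coefficients violating the constraint give a non-simple 2-element, which is bound but is not a line.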
This constraint is sometimes referred to as the Plücker identity. Given this constraint, and supposing that none of a, b, c is zero, we can factorize the line into any of the following forms:
L ≡ (𝒪 + (f/c) e1 + (e/c) e2) ∧ (a e1 + b e2 + c e3)

L ≡ (𝒪 + (d/b) e1 − (e/b) e3) ∧ (a e1 + b e2 + c e3)

L ≡ (𝒪 − (d/a) e2 − (f/a) e3) ∧ (a e1 + b e2 + c e3)
Each of these forms represents a line in the direction of a e1 + b e2 + c e3 and intersecting a coordinate plane. For example, the first form intersects the 𝒪∧e1∧e2 coordinate plane in the point 𝒪 + (f/c) e1 + (e/c) e2, with coordinates (f/c, e/c, 0). The most compact form, in terms of the number of scalar parameters used, is when L is expressed as the product of two points, each of which lies in a coordinate plane.
L = (a c / f) (𝒪 − (d/a) e2 − (f/a) e3) ∧ (𝒪 + (f/c) e1 + (e/c) e2);
We can verify that this formulation gives us the original form of the line by expanding and simplifying the product and substituting the constraint relation previously obtained.
Expanding and simplifying L, and substituting c d + a e → b f, returns:

𝒪 ∧ (a e1 + b e2 + c e3) + d e1∧e2 + f e1∧e3 + e e2∧e3
Similar expressions may be obtained for L in terms of points lying in the other coordinate planes. To summarize, there are three possibilities in a 3-plane, corresponding to the number of different pairs of coordinate 2-planes in the 3-plane.
L ≡ (𝒪 + x1 e1 + x2 e2) ∧ (𝒪 + y2 e2 + y3 e3) ≡ P ∧ Q
L ≡ (𝒪 + x1 e1 + x2 e2) ∧ (𝒪 + z1 e1 + z3 e3) ≡ P ∧ R
L ≡ (𝒪 + y2 e2 + y3 e3) ∧ (𝒪 + z1 e1 + z3 e3) ≡ Q ∧ R
(4.7)
We can always find the point on the line in the third coordinate plane 𝒪∧e3∧e1 by computing the intersection of the line and the third plane. To use GrassmannAlgebra for this, we first need to declare the space and the scalar symbols.
DeclareExtraScalarSymbols[{x1, x2, y2, y3}];
The intersection of the line and the third coordinate plane is given by finding their common factor.
F = ToCommonFactor[L ∨ (𝒪∧e1∧e3)]
c (𝒪 (x2 − y2) − e1 x1 y2 + e3 x2 y3)
We can confirm that this point is both on the line and on the coordinate plane by simplifying their exterior products.
{GrassmannSimplify[F ∧ L], GrassmannSimplify[F ∧ (𝒪∧e1∧e3)]}
{0, 0}
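The weighted point (x2 − y2) 𝒪 − x1 y2 e1 + x2 y3 e3 found as the common factor can also be confirmed by elementary geometry: it is the point of the segment from P to Q at which the e2 coordinate vanishes. A small Python sketch with invented sample values:

```python
from fractions import Fraction as F

# P = O + x1 e1 + x2 e2 and Q = O + y2 e2 + y3 e3, as in the text.
x1, x2, y2, y3 = F(2), F(3), F(5), F(7)      # invented sample values
P = (x1, x2, F(0))
Q = (F(0), y2, y3)

# Divide the weight x2 - y2 out of the common factor to get the point.
w = x2 - y2
point = tuple(coord / w for coord in (-x1 * y2, F(0), x2 * y3))

# The same point from elementary geometry: P + t (Q - P) with the e2
# coordinate forced to zero, i.e. t = x2 / (x2 - y2).
t = x2 / (x2 - y2)
expected = tuple(p + t * (q - p) for p, q in zip(P, Q))
assert point == expected and point[1] == 0
```

Note that the common factor itself is only determined up to the congruence factor c; dividing by the weight removes that ambiguity.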
Lines in a 4-plane
Lines in a 4-plane have the same form when expressed in coordinate-free notation as they do in any multiplane. To obtain the Plücker coordinates of a line in a 4-plane, express the line as the exterior product of two points and multiply it out. The resulting coefficients of the basis elements are the Plücker coordinates of the line. Additionally, from the results above, we can expect that a line in a 4-plane may be expressed with the least number of scalar parameters as the exterior product of two points, each point lying in one of the coordinate 3-planes. For example, the expression for the line as the product of points in the coordinate 3-planes 𝒪∧e1∧e2∧e3 and 𝒪∧e2∧e3∧e4 is
L = (𝒪 + x1 e1 + x2 e2 + x3 e3) ∧ (𝒪 + y2 e2 + y3 e3 + y4 e4)
Lines in an m-plane
The formulae below summarize some of the expressions for defining a line, valid in a multiplane of any dimension. Coordinate-free expressions may take any of a number of forms. For example:
L ≡ (𝒪 + x) ∧ (𝒪 + y)    (4.8)
L ≡ (𝒪 + x) ∧ (𝒪 + y)
L ≡ (𝒪 + x) ∧ (y − x)
L ≡ (𝒪 + y) ∧ (y − x)
L ≡ 𝒪 ∧ (y − x) + x ∧ y
A line can be expressed in terms of the 2m coordinates of any two points on it.
L ≡ (𝒪 + x1 e1 + x2 e2 + ⋯ + xm em) ∧ (𝒪 + y1 e1 + y2 e2 + ⋯ + ym em)
When multiplied out, the expression for the line takes a form explicitly displaying the Plücker coordinates of the line.
4.9
4.10
Alternatively, a line in an m-plane 𝒪∧e1∧e2∧⋯∧em can be expressed in terms of its intersections with two of its coordinate (m−1)-planes, say 𝒪∧e1∧⋯∧êi∧⋯∧em and 𝒪∧e1∧⋯∧êj∧⋯∧em. The notation êi means that the ith element or term is missing.
L ≡ (𝒪 + x1 e1 + ⋯ [ith term omitted] ⋯ + xm em) ∧ (𝒪 + y1 e1 + ⋯ [jth term omitted] ⋯ + ym em)
This formulation indicates that a line in an m-plane has at most 2(m−1) independent parameters required to describe it.
4.11
It also implies that in the special case when the line lies in one of the coordinate (m−1)-spaces, it can be even more economically expressed as the product of two points, each lying in one of the coordinate (m−2)-spaces contained in the (m−1)-space. And so on.
Planes in a 3-plane
A plane 𝒫 in a 3-plane can be written in several forms. The most intuitive form perhaps is as a product of three non-collinear points P, Q, and R.

𝒫 ≡ P ∧ Q ∧ R
Or, we can express it as the product of any two different points in it and a vector parallel to it (but not in the direction of the line joining the two points). For example:
𝒫 ≡ (𝒪 + x) ∧ (𝒪 + y) ∧ (z − y)
Or, we can express it as the product of any point in it and any two independent vectors parallel to it. For example:
𝒫 ≡ (𝒪 + x) ∧ (y − x) ∧ (z − y)
Or, we can express it as the product of any point in it and any line in it not through the point. For example:
𝒫 ≡ (𝒪 + x) ∧ L
Or, we can express it as the product of any line in it and any vector parallel to it (but not parallel to the line). For example:
𝒫 ≡ (y − x) ∧ L
Given a basis, we can always express the plane in terms of the coordinates of the points or vectors in the expressions above. However the form which requires the least number of coordinates is that which expresses the plane as the exterior product of its three points of intersection with the coordinate axes.
𝒫 ≡ (𝒪 + a e1) ∧ (𝒪 + b e2) ∧ (𝒪 + c e3)
If the plane is parallel to one of the coordinate axes, say 𝒪∧e3, it may be expressed as:
𝒫 ≡ (𝒪 + a e1) ∧ (𝒪 + b e2) ∧ e3
Whereas, if it is parallel to two of the coordinate axes, say 𝒪∧e2 and 𝒪∧e3, it may be expressed as:
𝒫 ≡ (𝒪 + a e1) ∧ e2 ∧ e3
If we wish to express a plane as the exterior product of its intersection points with the coordinate axes, we first determine its points of intersection with the axes and then take the exterior product of the resulting points. This leads to the following identity:
𝒫 ≡ (𝒫 ∨ (𝒪∧e1)) ∧ (𝒫 ∨ (𝒪∧e2)) ∧ (𝒫 ∨ (𝒪∧e3))
Example: To express a plane in terms of its intersections with the coordinate axes
Suppose we have a plane in a 3-plane defined by three points.
𝒫 = (𝒪 + e1 + 2 e2 + 5 e3) ∧ (𝒪 − e1 + 9 e2) ∧ (𝒪 − 7 e1 + 6 e2 + 4 e3);
To express this plane in terms of its intersections with the coordinate axes we calculate the intersection points with the axes.
ToCommonFactor[𝒫 ∨ (𝒪∧e1)]
c (13 𝒪 + 329 e1)

ToCommonFactor[𝒫 ∨ (𝒪∧e2)]
c (38 𝒪 + 329 e2)

ToCommonFactor[𝒫 ∨ (𝒪∧e3)]
c (48 𝒪 + 329 e3)
We then take the product of these points (ignoring the weights) to form the plane.
𝒫 ≡ (𝒪 + (329/13) e1) ∧ (𝒪 + (329/38) e2) ∧ (𝒪 + (329/48) e3)
To verify that this is indeed the same plane, we can check to see if these points are in the original plane. For example:
𝒫 ∧ (𝒪 + (329/13) e1)
0
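The weights 13, 38, 48 and 329 appearing above can be reproduced with elementary vector algebra: the normal of the plane through the three points has components (13, 38, 48), and the plane offset is 329. A short Python check (outside the GrassmannAlgebra package):

```python
from fractions import Fraction as F

# The three points of the example, as coordinates relative to the origin.
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

P1, P2, P3 = (1, 2, 5), (-1, 9, 0), (-7, 6, 4)
u = tuple(b - a for a, b in zip(P1, P2))
v = tuple(b - a for a, b in zip(P1, P3))
n = cross(u, v)                               # plane normal
d = sum(ni * pi for ni, pi in zip(n, P1))     # plane offset n . P1
assert n == (13, 38, 48) and d == 329

# Axis intercepts 329/13, 329/38, 329/48, i.e. the points
# O + (329/13) e1, O + (329/38) e2, O + (329/48) e3 of the text.
intercepts = [F(d, ni) for ni in n]
assert intercepts[0] == F(329, 13)
```

Of course this check uses metric notions (the cross product) that the Grassmannian computation avoids; it is only a numerical confirmation.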
Planes in a 4-plane
From the results above, we can expect that a plane in a 4-plane is most economically expressed as the product of three points, each point lying in one of the coordinate 2-planes. For example:
𝒫 ≡ (𝒪 + x1 e1 + x2 e2) ∧ (𝒪 + y2 e2 + y3 e3) ∧ (𝒪 + z3 e3 + z4 e4)
(This is analogous to expressing a line in a 3-plane as the product of two points, each point lying in one of the coordinate 2-planes.) If a plane is expressed in any other form, we can express it in the form above by first determining its points of intersection with the coordinate planes and then taking the exterior product of the resulting points. This leads to the following identity:
𝒫 ≡ (𝒫 ∨ (𝒪∧e1∧e2)) ∧ (𝒫 ∨ (𝒪∧e2∧e3)) ∧ (𝒫 ∨ (𝒪∧e3∧e4))
Planes in an m-plane
A plane in an m-plane is most economically expressed as the product of three points, each point lying in one of the coordinate (m−2)-planes.
Here a circumflexed term x̂i êi indicates that the term xi ei is missing from the sum. This formulation indicates that a plane in an m-plane has at most 3(m−2) independent scalar parameters required to describe it.
Lines in space
Consider a general 2-element in a point 3-space.
A = ComposeMElement[2, a]
a1 𝒪∧e1 + a2 𝒪∧e2 + a3 𝒪∧e3 + a4 e1∧e2 + a5 e1∧e3 + a6 e2∧e3
This says that the given factorization is possible if and only if the condition a3 a4 − a2 a5 + a1 a6 = 0 on the coefficients of A is met. There is a straightforward way of testing the simplicity of 2-elements not shared by elements of higher grade: a 2-element is simple if and only if its exterior square is zero. (The exterior square of A is A ∧ A.) In our case, we can write A as A ≡ 𝒪∧b + B, where b is a vector and B is a bivector. [Since elements of odd grade anti-commute, the exterior square of an odd element is zero even if it is not simple. An even element of grade 4 or higher may have the form of the exterior product of a 1-element with a non-simple 3-element, whence its exterior square is zero without its being simple.] In the case of a 2-element A ≡ 𝒪∧b + B in a point space then, we require
A ∧ A ≡ (𝒪∧b + B) ∧ (𝒪∧b + B) ≡ 2 𝒪∧b∧B ≡ 0
Thus we have the reduced requirement only that b ∧ B ≡ 0. In the case of a point 3-space:
b = a1 e1 + a2 e2 + a3 e3;  B = a4 e1∧e2 + a5 e1∧e3 + a6 e2∧e3;
Simplifying b ∧ B gives
(a3 a4 − a2 a5 + a1 a6) e1∧e2∧e3
Hence by these means we arrive at the same condition we obtained through ExteriorFactorize. At this point we should note that these results have not involved any metric concepts: the simple requirement that b∧B be zero has led to the required constraint. On the other hand, if we require from the start that b∧B ≡ 0, that is, that B contains the factor b, then we can write B ≡ a∧b, and so write A immediately as the product of a point and a vector.
A ≡ 𝒪∧b + a∧b ≡ (𝒪 + a) ∧ b
The vector a may be visualized as the position vector of a point on the line, and the vector b as a vector in the direction of the line. The coefficients of a∧b are the same as the coefficients of the cross product a×b. The cross product is the complement of the exterior product, and is orthogonal to it. (The cross product and orthogonality are metric concepts which will be introduced in the next chapter.) The (Plücker) coordinates of a line in space are often given by the coefficients of the two vectors a and a×b. Composing a and b in basis form, and then calculating their exterior product, displays the coefficients of a×b.
{a, b, a∧b}
{a1 e1 + a2 e2 + a3 e3, b1 e1 + b2 e2 + b3 e3, (a1 b2 − a2 b1) e1∧e2 + (a1 b3 − a3 b1) e1∧e3 + (a2 b3 − a3 b2) e2∧e3}
And as we would expect, the first three components as a vector are orthogonal to the second three.
{a1, a2, a3}.{a2 b3 − a3 b2, a3 b1 − a1 b3, a1 b2 − a2 b1} // Expand
0
In sum: despite the common practice of developing the Plücker coordinates in terms of dot and cross products, we can see from the Grassmannian approach with which we began that this is neither necessary nor particularly elucidating. The coordinates of a line do not need to rely on metric notions; and if the geometry is projective, they perhaps should not.
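The correspondence between the coefficients of a∧b and the components of a×b can be spelled out numerically. A small Python sketch (outside the GrassmannAlgebra package, invented sample vectors):

```python
# Coefficients of a^b on e1^e2, e1^e3, e2^e3, for two invented vectors ...
a, b = (2, -3, 5), (7, 1, -4)
w12 = a[0] * b[1] - a[1] * b[0]
w13 = a[0] * b[2] - a[2] * b[0]
w23 = a[1] * b[2] - a[2] * b[1]

# ... are, up to order and one sign, the components of the cross product
# a x b = (a2 b3 - a3 b2, a3 b1 - a1 b3, a1 b2 - a2 b1), which is
# orthogonal to both a and b.
cross = (w23, -w13, w12)
assert sum(c * x for c, x in zip(cross, a)) == 0
assert sum(c * x for c, x in zip(cross, b)) == 0
```

The reordering and sign flip are exactly the complement map of the next chapter: e1∧e2 ↦ e3, e1∧e3 ↦ −e2, e2∧e3 ↦ e1.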
Lines in 4-space
Now consider a general 2-element in a point 4-space.
A = ComposeMElement[2, a]
a1 𝒪∧e1 + a2 𝒪∧e2 + a3 𝒪∧e3 + a4 𝒪∧e4 + a5 e1∧e2 + a6 e1∧e3 + a7 e1∧e4 + a8 e2∧e3 + a9 e2∧e4 + a10 e3∧e4
Factorizing gives
On the other hand, adopting the method of the previous section, we can write
b = a1 e1 + a2 e2 + a3 e3 + a4 e4;  B = a5 e1∧e2 + a6 e1∧e3 + a7 e1∧e4 + a8 e2∧e3 + a9 e2∧e4 + a10 e3∧e4;
Simplifying b ∧ B gives
(a3 a5 − a2 a6 + a1 a8) e1∧e2∧e3 + (a4 a5 − a2 a7 + a1 a9) e1∧e2∧e4 + (a4 a6 − a3 a7 + a1 a10) e1∧e3∧e4 + (a4 a8 − a3 a9 + a2 a10) e2∧e3∧e4
Again we arrive at the same conditions, except that this approach yields one extra condition; it is easy to show, however, that the number of independent conditions is still only three. Because this product will always be a trivector, in a space of n vector dimensions it will have c = n(n−1)(n−2)/6 components, and hence spawn c conditions. On the other hand, if we require from the start that b∧B ≡ 0, that is, that B contains the factor b, then we can write B ≡ a∧b, and so write A immediately as the product of a point and a vector.
A ≡ 𝒪∧b + a∧b ≡ (𝒪 + a) ∧ b
The vector a may be visualized as the position vector of a point on the line, and the vector b as a vector in the direction of the line. The coefficients of a∧b can no longer be expressed as the coefficients of a cross product, since the vector space is now 4-dimensional. The (Plücker) coordinates of a line in 4-space must now be given by the coefficients of the vector a and the bivector a∧b.
Cp = {a, a∧b}
{a1 e1 + a2 e2 + a3 e3 + a4 e4, (a1 b2 − a2 b1) e1∧e2 + (a1 b3 − a3 b1) e1∧e3 + (a1 b4 − a4 b1) e1∧e4 + (a2 b3 − a3 b2) e2∧e3 + (a2 b4 − a4 b2) e2∧e4 + (a3 b4 − a4 b3) e3∧e4}
In sum: in a point 4-space the Grassmannian approach generalizes, whereas the approach developed through the dot and cross products does not.
To find the point of intersection of these two lines, we first find their common factor. The common factor of two elements can be found by applying ToCommonFactor to their regressive product. Remember that common factors are only determinable up to congruence (that is, to within an arbitrary scalar multiple). Hence the common factor we determine will, in general, be a weighted point.
wP = ToCommonFactor[L1 ∨ L2]
c ((b c − a d) 𝒪 + (a b c − a c d) e1 + (−a b d + b c d) e2)
We can convert this weighted point to a form which separates the weight from the point by applying ToWeightedPointForm.
ToWeightedPointForm[wP]
(b c − a d) c (𝒪 + (a c (b − d))/(b c − a d) e1 + (b d (c − a))/(b c − a d) e2)
To verify that this point lies in both lines, we can take its exterior product with each of the lines and show the results to be zero.
In the special case in which the lines are parallel, that is b c - a d = 0, their intersection is no longer a point, but a vector defining their common direction.
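The weighted point above can be confirmed by elementary coordinate geometry. Assuming, as the common factor suggests, that L1 joins the points 𝒪 + a e1 and 𝒪 + b e2, and L2 joins 𝒪 + c e1 and 𝒪 + d e2 (their exact definition precedes this excerpt), the intersection point must satisfy both intercept-form line equations. A short Python check with invented values:

```python
from fractions import Fraction as F

# Hypothesized setup: L1 has axis intercepts a and b, L2 has intercepts
# c and d, so their equations are x/a + y/b = 1 and x/c + y/d = 1.
# The weighted point (b c - a d) O + a c (b - d) e1 + b d (c - a) e2
# should, after dividing out its weight, satisfy both.
a, b, c, d = F(2), F(3), F(5), F(7)          # invented sample values
w = b * c - a * d
x = a * c * (b - d) / w
y = b * d * (c - a) / w
assert x / a + y / b == 1
assert x / c + y / d == 1
```

When b c − a d = 0 the division fails, which is precisely the parallel case noted in the text.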
To find the point of intersection of the line with the plane, we first determine their common factor (a weighted point) by applying ToCommonFactor to their regressive product, and then drop the weight by applying ToPointForm.
P = ToPointForm[ToCommonFactor[L ∨ 𝒫]]
𝒪 + (a e (d f + (c − f) g))/(d e f − (b e − c e + a f) g) e1 + (f (b e (d − g) + c (−a + e) g))/(d e f − (b e − c e + a f) g) e2 + (d (b e + (a − e) f) g)/(d e f − (b e − c e + a f) g) e3
To verify that this point lies in both the line and the plane, we can take its exterior product with each of the line and the plane and show the results to be zero.
Simplify[GrassmannSimplify[{L ∧ P, 𝒫 ∧ P}]]
{0, 0}
In the special case in which the line is parallel to the plane, their intersection is no longer a point, but a vector defining their common direction. When the line lies in the plane, the result from the calculation will be zero.
To find the line of intersection of the two planes, we first determine their common factor (a line) by applying ToCommonFactor to their regressive product, and then (if we wish) drop the (arbitrary) congruence factor c by setting it equal to unity. Finally, applying GrassmannSimplify collects the terms in a convenient form with the origin 𝒪 factored out.
L = GrassmannSimplify[ToCommonFactor[𝒫1 ∨ 𝒫2] /. c → 1]
𝒪 ∧ (a e (c f − b g) e1 + b f (−c e + a g) e2 + c (b e − a f) g e3) + a b e f (−c + g) e1∧e2 + a c e (b − f) g e1∧e3 + b c (−a + e) f g e2∧e3
This is of course, a 2-element in a 4-space, and hence is not necessarily simple. However we can verify in a number of ways that this result is indeed simple, and therefore does define a line. The simplest way is to apply the GrassmannAlgebra function SimpleQ.
SimpleQ@LD True
Perhaps more convincing is to factorize L so as to display it as the product of a point and a vector.
Lp = ExteriorFactorize[L]
a e (c f − b g) (𝒪 + (b f (−c + g))/(−c f + b g) e2 + (c (b − f) g)/(−c f + b g) e3) ∧ (e1 + (b c e f − a b f g)/(a c e f − a b e g) e2 + (b c e g − a c f g)/(a c e f − a b e g) e3)
We can extract the point and vector as specific expressions by applying the GrassmannAlgebra function ExtractArguments.
{P, V} = ExtractArguments[Lp]
{𝒪 + (b f (−c + g))/(−c f + b g) e2 + (c (b − f) g)/(−c f + b g) e3, e1 + (b c e f − a b f g)/(a c e f − a b e g) e2 + (b c e g − a c f g)/(a c e f − a b e g) e3}
To verify that both the point and the vector lie in both planes, we can take their exterior product with each of the planes and show the results to be zero.
Simplify[GrassmannSimplify[{𝒫1 ∧ P, 𝒫1 ∧ V, 𝒫2 ∧ P, 𝒫2 ∧ V}]]
{0, 0, 0, 0}
In the special case in which the planes are parallel, their intersection is no longer a line, but a bivector defining their common 2-direction.
The problem is to show that the osculating planes to a curve at three of its points intersect at a point coplanar with these three points. Here, n is a scalar parametrizing the curve.
The solution
The osculating plane 𝒫 to the curve at the point P is given by 𝒫 ≡ P ∧ P′ ∧ P″, where P′ and P″ are the first and second derivatives of P with respect to n.
P = 𝒪 + n e1 + n² e2 + n³ e3;  P′ = e1 + 2 n e2 + 3 n² e3;  P″ = 2 e2 + 6 n e3;  𝒫 = P ∧ P′ ∧ P″;
We can declare the space to be a 3-plane and subscripts of the parameter n to be scalar, and then use GrassmannSimplify to derive the expression for the osculating plane as a function of n. (We divide out a common multiple of 2 to make the resulting expressions a little simpler.)
DeclareExtraScalarSymbols[{n, n_}];
𝒫 = GrassmannSimplify[𝒫 / 2]
𝒪 ∧ (e1∧e2 + 3 n e1∧e3 + 3 n² e2∧e3) + n³ e1∧e2∧e3
The (weighted) point of intersection of these three planes may be obtained by calculating their common factor. The congruence factor c appears squared because there are two regressive product operations.
wP = ToCommonFactor[𝒫1 ∨ 𝒫2 ∨ 𝒫3]
c² (𝒪 (⋯) + e1 (⋯) + e2 (⋯) + e3 (⋯))

where the suppressed scalar coefficients are lengthy alternating polynomials in n1, n2 and n3.
The scalar coefficient attached to the origin factors out of the expression so that we can extract the point from the resulting weighted point to give the point of intersection Q as
Q = ToPointForm[wP]
𝒪 + (1/3)(n1 + n2 + n3) e1 + (1/3)(n2 n3 + n1 (n2 + n3)) e2 + n1 n2 n3 e3
Finally, to show that this point of intersection Q is coplanar with the points P1 , P2 , and P3 , we compute their exterior product.
Simplify[GrassmannSimplify[P1 ∧ P2 ∧ P3 ∧ Q]]
0
whole space.

X ≡ (a ∨ (X ∧ b)) / (a ∨ b) + ⋯ + (b ∨ (X ∧ a)) / (b ∨ a)

Here X is of grade j, a is of grade m, and b is of grade n − m. It can be seen that the last term can be rearranged as the shadow of X on b excluding a:

(a ∨ (X ∧ b)) / (a ∨ b) ≡ (b ∨ (X ∧ a)) / (b ∨ a)
If j = 1, the decomposition formula reduces to the sum of just two components, xa and xb, where xa lies in a and xb lies in b:

x ≡ xa + xb ≡ (a ∨ (x ∧ b)) / (a ∨ b) + (b ∨ (x ∧ a)) / (b ∨ a)    (4.12)
When b1 ∧ b2 ∧ ⋯ ∧ bn is an n-element, a 1-element x decomposes into components along its factors:

x ≡ Σ (i = 1 to n) ((b1 ∧ b2 ∧ ⋯ ∧ x ∧ ⋯ ∧ bn) / (b1 ∧ b2 ∧ ⋯ ∧ bi ∧ ⋯ ∧ bn)) bi

where, in the ith term, x occupies the ith position in the numerator.
We now explore these formulae with a number of geometric examples, beginning with the simplest case of decomposition in a 2-space.
Decomposition in a 2-space
Decomposition of a vector in a bivector
Suppose we have a vector 2-space defined by the bivector a∧b. We wish to decompose a vector x in the space to give one component in a and the other in b. Applying the decomposition formula above, and noting that x∧b and x∧a are bivectors whose quotients with a∧b are scalars, we can use formula 3.26 and axiom 8 to give:
xa ≡ (a ∨ (x ∧ b)) / (a ∨ b) ≡ ((x ∧ b) / (a ∧ b)) a
xb ≡ (b ∨ (x ∧ a)) / (b ∨ a) ≡ ((x ∧ a) / (b ∧ a)) b

The quotients (x ∧ b)/(a ∧ b) and (x ∧ a)/(b ∧ a) are scalars because we are in a 2-space.
The coefficients of a and b are scalars showing that xa is congruent to a and xb is congruent to b. If each of the three vectors is expressed in basis form, we can determine these scalars more specifically. For example:
x = x1 e1 + x2 e2;  a = a1 e1 + a2 e2;  b = b1 e1 + b2 e2;

x ∧ b ≡ (x1 b2 − x2 b1) e1∧e2    a ∧ b ≡ (a1 b2 − a2 b1) e1∧e2

so that (x ∧ b)/(a ∧ b) = (x1 b2 − x2 b1)/(a1 b2 − a2 b1), and similarly (x ∧ a)/(b ∧ a) = (x1 a2 − x2 a1)/(b1 a2 − b2 a1).
Finally then we can express the original vector x as the required sum of two components, one in a and one in b.
x ≡ ((x1 b2 − x2 b1)/(a1 b2 − a2 b1)) a + ((x1 a2 − x2 a1)/(b1 a2 − b2 a1)) b
We can easily check that the right hand side does indeed reduce to x by composing x, a, and b, and then simplifying:
Composing x, a, and b in basis form and simplifying the right-hand side returns:

e1 x1 + e2 x2
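The two scalar quotients are just ratios of 2×2 determinants, so the decomposition can be checked in a few lines of Python (outside the GrassmannAlgebra package, invented vectors):

```python
from fractions import Fraction as F

# Decomposition of x into components along a and b in the plane:
#   x = [(x^b)/(a^b)] a + [(x^a)/(b^a)] b,
# the quotient of two bivectors in a 2-space being a scalar.
def w(u, v):                          # coefficient of u^v on e1^e2
    return u[0] * v[1] - u[1] * v[0]

a, b, x = (F(2), F(1)), (F(-1), F(3)), (F(4), F(5))   # invented vectors
ca = w(x, b) / w(a, b)
cb = w(x, a) / w(b, a)
xa = (ca * a[0], ca * a[1])           # component congruent to a
xb = (cb * b[0], cb * b[1])           # component congruent to b
assert (xa[0] + xb[0], xa[1] + xb[1]) == x
```

This is of course Cramer's rule in disguise: the exterior quotients are the determinant ratios that solve x = ca a + cb b.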
A point P in the line Q ∧ R may be decomposed into components congruent to Q and R:

P ≡ ((P ∧ R)/(Q ∧ R)) Q + ((Q ∧ P)/(Q ∧ R)) R

Because P is a point, we would expect the sum of its two component weights to be unity.
(P ∧ R)/(Q ∧ R) + (Q ∧ P)/(Q ∧ R) ≡ (P ∧ (R − Q))/(Q ∧ R)
We can see immediately that this is indeed so because adding them gives the same bound vector in the numerator and the denominator. We can either note this equivalence from our previous discussion of bound vectors, or do a substitution. For a substitution, let r and q be scalars, and v be a vector in (parallel to) the line.
Q = P − q v    R = P + r v

(P ∧ R)/(Q ∧ R) ≡ (P ∧ (P + r v))/((P − q v) ∧ (P + r v)) ≡ (r P ∧ v)/((r + q) P ∧ v) ≡ r/(r + q)

(Q ∧ P)/(Q ∧ R) ≡ ((P − q v) ∧ P)/((P − q v) ∧ (P + r v)) ≡ (q P ∧ v)/((r + q) P ∧ v) ≡ q/(r + q)
P ≡ (r/(r + q)) Q + (q/(r + q)) R

Similarly, a vector x parallel to the line Q ∧ R may be decomposed as:

x ≡ ((x ∧ R)/(Q ∧ R)) Q + ((Q ∧ x)/(Q ∧ R)) R
Because x is a vector, we would expect the sum of its two component weights to be zero.
(x ∧ R)/(Q ∧ R) + (Q ∧ x)/(Q ∧ R) ≡ (x ∧ (R − Q))/(Q ∧ R)
Again, we can see immediately that this is so because adding them gives a zero bivector in the numerator. For a substitution, let a be a scalar; then R - Q is a vector in (parallel to) the line.
x = a HR - QL xR QR Qx QR a HR - QL R QR Q a HR - QL QR -a QR QR -a
QR QR
Decomposition in a 3-space
Decomposition of a vector in a trivector
Suppose we have a vector 3-space represented by the trivector a∧b∧g. We wish to decompose a vector x in this 3-space to give one component in a∧b and the other in g. Applying the decomposition formula gives:

x_ab = ((a∧b)∨(x∧g))/((a∧b)∧g)

x_g = (g∨(x∧a∧b))/(g∧(a∧b))

Because x∧a∧b is a 3-element, it can be seen immediately that the component x_g can be written as a scalar multiple of g, where the scalar is expressed either as a ratio of regressive products (scalars) or exterior products (n-elements).

x_g = ((x∧a∧b)/(g∧a∧b)) g = ((a∧b∧x)/(a∧b∧g)) g

The component x_ab will be a linear combination of a and b. To show this we can expand the expression above for x_ab using the Common Factor Axiom.

x_ab = ((a∧b)∨(x∧g))/((a∧b)∧g) = ((a∧x∧g) b - (b∧x∧g) a)/((a∧b)∧g)

Rearranging these two terms into a similar form as that derived for x_g gives:

x_ab = ((x∧b∧g)/(a∧b∧g)) a + ((a∧x∧g)/(a∧b∧g)) b
Of course we could have obtained this decomposition directly by using the results of the decomposition formula 3.36 for decomposing a 1-element into a linear combination of the factors of an n-element.
x = ((x∧b∧g)/(a∧b∧g)) a + ((a∧x∧g)/(a∧b∧g)) b + ((a∧b∧x)/(a∧b∧g)) g
P ≡ ((P∧R∧S)/(Q∧R∧S)) Q + ((Q∧P∧S)/(Q∧R∧S)) R + ((Q∧R∧P)/(Q∧R∧S)) S

Alternatively, we can group these terms two at a time to obtain a weighted point in the corresponding line. For example, adding the first two terms together gives us the point which we denote P_QR.

P_QR = ((P∧R∧S)/(Q∧R∧S)) Q + ((Q∧P∧S)/(Q∧R∧S)) R
Decomposition in a 4-space
Decomposition of a vector in a 4-vector
Suppose we have a vector 4-space represented by the 4-element a∧b∧g∧d, and we wish to decompose a vector x in this 4-space into two vectors, one in a∧b and one in g∧d.

x = x_ab + x_gd

Hence we can write
x_ab = ((x∧b∧g∧d)/(a∧b∧g∧d)) a + ((a∧x∧g∧d)/(a∧b∧g∧d)) b

x_gd = ((a∧b∧x∧d)/(a∧b∧g∧d)) g + ((a∧b∧g∧x)/(a∧b∧g∧d)) d
As we have previously remarked, we can rearrange these components in whatever combinations or interpretations (as points or vectors) we require.
P = Σ (i = 1 to n) ((a1∧a2∧⋯∧P∧⋯∧an)/(a1∧a2∧⋯∧ai∧⋯∧an)) ai        4.13

where P in the numerator occupies the position of ai in the denominator.
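Formula 4.13 is just Cramer's rule in exterior-product clothing: replacing ai by P in the numerator product is replacing the i-th row of a determinant. A minimal sketch in plain Python (an assumption here, not the book's package):

```python
# Decomposition of P onto a basis a_1, ..., a_n via ratios of determinants.

def det(m):
    """Determinant by Laplace expansion (fine for the small n used here)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += ((-1) ** j) * m[0][j] * det(minor)
    return total

def decompose(P, A):
    """Weights c_i with P = sum_i c_i * A[i]; A is a list of n n-vectors."""
    d = det(A)
    weights = []
    for i in range(len(A)):
        numerator = [P if k == i else A[k] for k in range(len(A))]
        weights.append(det(numerator) / d)
    return weights

A = [[1.0, 0.0, 1.0], [0.0, 2.0, 1.0], [1.0, 1.0, 0.0]]
P = [2.0, 3.0, 2.0]
w = decompose(P, A)
reconstructed = [sum(w[i] * A[i][k] for i in range(3)) for k in range(3)]
print(reconstructed)  # recovers P
```

The sample vectors are arbitrary; any A with nonzero determinant (i.e. a1∧⋯∧an ≠ 0) works.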
As we have already seen in the examples above, the components are simply arranged in whatever combinations are required to achieve the decomposition. In summary, it is worth re-emphasizing that, although the decomposition of elements is depicted geometrically quite differently depending on whether the elements are interpreted as points or vectors, the formulae are identical. As with all the formulae of the Grassmann algebra, the only difference is the interpretation.
The advantage of the Grassmann geometry interpretation, however, is that one can add a metric to it if one wishes to use metric concepts (for example the inner product).
[Figure: the line L1 through the point P with direction x, and the line L2 through the point P + y with direction x - e y.]

We obtain the point of intersection up to congruence by applying ToCommonFactor to the regressive product of the lines.

ToCommonFactor[L1∨L2]
c (- Pxy x - e Pxy P)
Since the congruence factor c and Pxy (the coefficient of the exterior product P∧x∧y in any basis) are all scalars, we can say that the intersection is congruent to the weighted point wQ given by

wQ = e P + x
By factoring out the scalar factor e, we can also say that the intersection is congruent to the point Q given by

Q = P + x/e

As e approaches zero, the lines approach parallelism: the point of intersection Q has an ever-increasing vector component x/e, causing its position to approach infinity, and the weighted point wQ has an ever-decreasing contribution from the point P, causing it to approach the (ultimately) common vector x. It may be remarked that in this calculation, the point of intersection does not involve the vector y which specifies the position of L2. In general then, we may conclude that the resulting vector, and hence the resulting point-at-infinity, is the same for all lines parallel to L1.
We can now see how the plane may be said to contain a line approaching a line-at-infinity if we define it by the product of these two weighted points.
L = wQ1∧wQ2 = (e ℯ + e1)∧(e ℯ + e2)
This line has the property that in the limit, it contains all the points-at-infinity of the plane. We can show this by taking its exterior product with a third arbitrary point approaching a point-at-infinity, and obtain an element that becomes zero in the limit as e approaches zero.
%[(e ℯ + e1)∧(e ℯ + e2)∧(e ℯ + a e1 + b e2)]
(e - a e - b e) ℯ∧e1∧e2
In fact, in the limit, the line-at-infinity is the bivector of the plane. We can see this by letting e approach zero in the expression for L or its expansion:
%[(e ℯ + e1)∧(e ℯ + e2)]
ℯ∧(- e e1 + e e2) + e1∧e2
Projective 3-space
In projective 3-space, all the above concepts of the projective plane carry over naturally. We can see the parallels between Projective Geometry and Grassmann Geometry as follows:
Projective Geometry: Non-parallel planes intersect in lines. Parallel planes intersect in a line-at-infinity. All the lines-at-infinity belong to a single plane-at-infinity.
Grassmann Geometry: Non-parallel planes intersect in lines. Parallel planes intersect in a bivector. All the bivectors belong to a single trivector.
Homogeneous coordinates
The plane
As we have seen, a point P in the plane may be represented up to congruence by
P = a0 ℯ + a1 e1 + a2 e2 ;

The elements of the triplet {a0, a1, a2} are called the homogeneous coordinates of the point P. In projective geometry, the triplet {a0, a1, a2} and the triplet {k a0, k a1, k a2}, where k is a scalar, are defined to represent the same point. In the Grassmann approach, this corresponds to weighted points with different weights also defining the same point. The cobasis to the basis of the plane is

{e1∧e2, -(ℯ∧e2), ℯ∧e1}

A line in the plane can be represented up to congruence by a linear combination of these cobasis elements

L = b0 e1∧e2 + b1 (-ℯ∧e2) + b2 ℯ∧e1 ;
The elements of the triplet {b0, b1, b2} are called the homogeneous coordinates of the line L. The equation to the line may then be obtained from the condition that a point lies in the line, that is, their exterior product is zero.

P∧L = 0

On simplifying the product, we recover the equation of the line in homogeneous coordinates (a0 b0 + a1 b1 + a2 b2 = 0) by viewing the line as fixed and the point as variable.

DeclareExtraScalarSymbols[a_, b_] ;  %[P∧L] = 0
(a0 b0 + a1 b1 + a2 b2) ℯ∧e1∧e2 = 0
However, we can also view the point as fixed, and the line as variable, wherein the same form of equation describes a variable line through a fixed point.
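In these coordinates the incidence condition is just a dot product of triples, and (with the cobasis ordering above) the line joining two points works out to the familiar homogeneous cross product. A small sketch in plain Python, not the GrassmannAlgebra package:

```python
# Point-line incidence and joins in homogeneous coordinates.
# A point (a0, a1, a2) lies on a line (b0, b1, b2) exactly when
# a0*b0 + a1*b1 + a2*b2 == 0.

def incident(point, line, tol=1e-12):
    return abs(sum(p * l for p, l in zip(point, line))) < tol

def join(P, Q):
    """Line coordinates of the line through points P and Q: with the
    cobasis ordering used in the text this is the cross product."""
    return (P[1] * Q[2] - P[2] * Q[1],
            P[2] * Q[0] - P[0] * Q[2],
            P[0] * Q[1] - P[1] * Q[0])

P = (1.0, 2.0, 3.0)
Q = (1.0, -1.0, 4.0)
L = join(P, Q)
print(incident(P, L), incident(Q, L))  # True True
```

The sample triples are arbitrary weighted points; any scalar multiple of P, Q, or L represents the same point or line, and the incidence test is unchanged.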
" 3 ; P = a0 " + a1 e1 + a2 e2 + a3 e3 ;
The elements of the quartet 8ao , a1 , a2 , a3 < are called the homogeneous coordinates of the point P. The cobasis to the basis of the 3-space is
Cobasis 8e1 e2 e3 , - H# e2 e3 L, # e1 e3 , - H# e1 e2 L<
Hence now it is planes in the 3-space that can be represented up to congruence by a linear combination of these cobasis elements
P = b0 e1 e2 e3 + b1 H- " e2 e3 L + b2 " e1 e3 + b3 H- " e1 e2 L;
The elements of the quartet 8b0 , b1 , b2 , b3 < are called the homogeneous coordinates of the plane P. The equation to the plane may then be obtained from the condition that a point lies in the plane, that is, their exterior product is zero.
PP 0
On simplifying the product, we recover the equation of the plane in homogeneous coordinates by viewing the plane as fixed and the point as variable.
% @P PD 0 Ha0 b0 + a1 b1 + a2 b2 + a3 b3 L # e1 e2 e3 0
And as in the case of the plane, we can also view the point as fixed, and the plane as variable, wherein the same form of equation describes a variable plane through a fixed point.
Thus, such a form will only be simple, and thus able to represent a line, if the coefficients are constrained by the relation
a3 a4 - a2 a5 + a1 a6 = 0
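This quadratic (Plücker) condition can be checked numerically. The sketch below is plain Python, and the coordinate ordering is an assumption of this example: for the line through the points ℯ + p and ℯ + q we take (a1, ..., a6) to be the direction part followed by the moment part of the bivector.

```python
# Check the simplicity condition on the six bivector coordinates of
# a line in 3-space (assumed ordering: a1..a3 direction, a4..a6 moment).

def line_coords(p, q):
    d = [q[i] - p[i] for i in range(3)]          # direction components
    m = [p[0] * q[1] - p[1] * q[0],              # moment components
         p[0] * q[2] - p[2] * q[0],
         p[1] * q[2] - p[2] * q[1]]
    return d + m

p = [1.0, 2.0, 3.0]
q = [4.0, -1.0, 2.0]
a1, a2, a3, a4, a5, a6 = line_coords(p, q)
print(a3 * a4 - a2 * a5 + a1 * a6)  # 0 for any line (simple bivector)
```

Expanding the expression symbolically shows it vanishes identically for coordinates built this way, which is exactly the statement that a bivector of the form P∧Q is simple.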
Duality
The concept of duality in Grassmann algebra corresponds closely to the concept of duality in projective geometry. For example, consider the expression for a line as the exterior product of two points.
L = P∧Q
This may be read variously as: The points P and Q lie on the line L. The line L passes through the points P and Q. The points P and Q join to form the line L. The line L is the join of the points P and Q. In a plane, the dual expression to "a line is the exterior product of two points", is "a point is the regressive product of two lines".
P = L∨M
This may be read variously as: The lines L and M pass through the point P. The point P lies on the lines L and M. The lines L and M intersect to form the point P. The point P is the intersection of the lines L and M. The collinearity of three points is defined by

P∧Q∧R = 0
Duality extends naturally to a space of any dimension. Remember that, just as in projective geometry, these entities and expressions are only defined up to congruence.
Desargues' theorem
As an example of how Grassmann algebra can be used in Projective Geometry, we prove Desargues' theorem. It should be noted that the methods used are simply applications of the exterior and regressive products, and, as are all the applications in this chapter, entirely non-metric. This is in contradistinction to "proofs" of the theorem using the dot and cross products of 3-dimensional vector algebra, which are metric concepts. If two triangles have corresponding vertices joined by concurrent lines, then the intersections of corresponding sides are collinear. That is, if PU, QV, and RW all pass through one point O, then the intersections X = PQ.UV, Y = PR.UW and Z = QR.VW all lie on one line.
[Figure: the two triangles PQR and UVW in perspective from the point O, with the collinear intersection points X, Y, Z.]
Desargues' theorem.
For simplicity, and without loss of generality, we may take the fixed point O as the origin ℯ. We can then define the points P, Q, and R simply by

P = ℯ + p ;  Q = ℯ + q ;  R = ℯ + r ;

And since U, V, and W are on the same lines (respectively) through O as are P, Q, and R, we can define U, V, and W using scalar multiples of the position vectors of P, Q, and R.

U = ℯ + a p ;  V = ℯ + b q ;  W = ℯ + c r ;

To confirm that these triplets of points are collinear, we can compute their exterior products.

{%[ℯ∧P∧U], %[ℯ∧Q∧V], %[ℯ∧R∧W]}
{0, 0, 0}
The next stage is to compute the intersections X, Y, and Z. We do this by using ToCommonFactor.
Here X is a weighted point. We can extract the associated point by using ToPointForm. (Remember that ℯpq represents the scalar coefficient of ℯ∧p∧q in any basis that might be chosen, and c is the scalar congruence factor.)

X = ToPointForm[X]
ℯ + ((a - a b)/(a - b)) p + (((-1 + a) b)/(a - b)) q
We can now check whether these points are collinear by taking their exterior product. The first level of simplification gives

F = %[X∧Y∧Z]
(a lengthy 3-element: a sum of rational-function coefficients in a, b, and c multiplying p∧q, p∧r, and q∧r)

On simplifying the scalar coefficients we get zero as expected, thus proving the theorem.

Simplify[F]
0
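A numeric version of the same proof can be run in plain Python (an assumption of this sketch, not the book's package), using the standard homogeneous-coordinate fact that both the join of two points and the meet of two lines are cross products, and collinearity of three points is a vanishing 3×3 determinant.

```python
# Numeric check of Desargues' theorem for one (arbitrary) configuration.

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def det3(p, q, r):
    return sum(p[i] * cross(q, r)[i] for i in range(3))

def point(v, s=1.0):
    """Homogeneous triple (weight, x, y) for the point e + s*v."""
    return (1.0, s * v[0], s * v[1])

p, q, r = (1.0, 0.5), (-0.3, 1.2), (0.8, -0.7)
a, b, c = 2.0, 3.0, 0.5
P, Q, R = point(p), point(q), point(r)
U, V, W = point(p, a), point(q, b), point(r, c)   # collinear with O and P, Q, R

X = cross(cross(P, Q), cross(U, V))   # PQ meet UV
Y = cross(cross(P, R), cross(U, W))   # PR meet UW
Z = cross(cross(Q, R), cross(V, W))   # QR meet VW
print(det3(X, Y, Z))  # ~0: X, Y, Z are collinear
```

The vectors p, q, r and scalars a, b, c are arbitrary; the determinant vanishes (to rounding error) for any choice, since the three joining lines are concurrent at the origin by construction.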
Pappus' theorem

As a second example, we prove Pappus' theorem. The methods are essentially the same as used for Desargues' theorem in the previous section. Given two sets of collinear points P, Q, R, and U, V, W, then the intersection points X, Y, Z of line pairs PV and QU, PW and RU, QW and RV, are collinear.
[Figure: the collinear triples P, Q, R and U, V, W, with the collinear intersection points X, Y, Z.]
Pappus' theorem.
As our example, we take the more general case in which the lines through the two sets of collinear points are not parallel. For simplicity, and without loss of generality, we take their intersection point as the origin. We can then define the points P, Q, R, and U, V, W by means of two vectors p and u; and four scalars a, b, c, d.
P = " + p; Q = " + a p; R = " + b p; U = " + u; V = " + c u; W = " + d u;
By definition these triplets of points are collinear. The next stage is to compute the intersections X, Y, and Z. We do this by using ToCommonFactor and ToPointForm as in the previous section.
We can now check whether these points are collinear by taking their exterior product. A value of zero proves the theorem.

F = %[X∧Y∧Z] // Simplify
0
Projective n-space
A projective n-space is essentially a bound n-space, that is, a linear space of n+1 dimensions in which one of the basis elements is interpreted as a point (say the origin) and the other n basis elements are interpreted as vectors. In projective n-space (n ≥ 2), the concepts of the projective plane carry over naturally. Non-parallel (n-1)-planes intersect in (n-2)-planes. Parallel (n-1)-planes may be said to intersect in an (n-2)-plane-at-infinity. All the (n-2)-planes-at-infinity belong to a single (n-1)-plane-at-infinity (which is in fact the n-vector of the space). Here, a 0-plane is a point, a 1-plane is a line, a 2-plane is a plane, and so on. Two (n-1)-planes may be said to be parallel if their (n-1)-directions are the same (that is, their (n-1)-vectors are congruent). We can see how this works more concretely by thinking in terms of the actual bound elements representing these geometric objects. We can define two (n-1)-planes Π1 and Π2 by

Π1 = P∧a ;  Π2 = Q∧b

where P and Q are points and a and b are (n-1)-vectors. The Common Factor Theorem shows that Π1 and Π2 will have a common element C. If a is not congruent to b, then C will be of the form C = R∧g (R a point, g an (n-2)-vector), thus representing an (n-2)-plane. If a is congruent to b, the planes are parallel and C is simply an (n-1)-vector; projective space terminology is equivalent to defining this C as an (n-2)-plane-at-infinity. All such C from all the possible parallel intersections, being (n-1)-vectors, belong to the n-vector of the space: in projective space terminology, an (n-1)-plane-at-infinity.

In sum: we can see that Projective Geometry differs in no essential way from the geometry which is a natural consequence of Grassmann's algebra. The superficial differences, as is the case with many mathematical systems covering the same applications, reside in definition or interpretation.
Regions of a plane
Consider a line L in a plane. The line will divide the plane into two regions (half-planes). A half-plane region is an open region if it does not contain any points of the line. Any point P will be located in either one of the half-planes or in the line itself. We can form the exterior product P∧L. This exterior product will vary in weight as P varies over the regions, but will only change sign when P crosses the line. The product (and hence the weight) is of course zero when P is on the line itself. Suppose now we have a reference point Q, not in the line, and we want to find if P and Q are on the same side of the line. All we need to do is to take the ratio of P∧L to Q∧L. If the ratio is positive, P and Q are on the same side. If the ratio is negative, they are on opposite sides. If the ratio is zero, P is in the line.
[Figure: points P1 and P2 on opposite sides of the line L, with the reference point Q on the same side as P1: P1∧L/(Q∧L) > 0 and P2∧L/(Q∧L) < 0.]

As can be seen from the figure above, the reason this occurs is that the ordering of the points in the product P∧L is clockwise on one side of the line, and anti-clockwise on the other. (Remember that the line L itself can be expressed as the product of any two points on it.) Because the line L is in both the numerator and denominator of the ratio, the ratio is invariant with respect to how the line is expressed (for example, which two points are chosen to express it). If we know in which region the point Q is located, we can therefore determine the region in which the point P is located. In the sections below we will see how this simple result can be extended in a Boolean manner to regions of higher dimension, and regions with more complex boundaries.
Example
As shown in the graphic below, let L be a line through the origin ℯ in the direction of the basis vector e2, the reference point Q at the point ℯ + e1, and points P1 and P2 at the points ℯ + a e1 + e2 and ℯ - b e1 + e2 respectively. The scalar coefficients a and b are strictly positive.

[Figure: P1∧L/(Q∧L) = a > 0 and P2∧L/(Q∧L) = -b < 0.]

The products of each of the three points with the line are:

Q∧L = (ℯ + e1)∧(ℯ∧e2) = -ℯ∧e1∧e2
P1∧L = (ℯ + a e1 + e2)∧(ℯ∧e2) = -a ℯ∧e1∧e2
P2∧L = (ℯ - b e1 + e2)∧(ℯ∧e2) = b ℯ∧e1∧e2
Thus we conclude that P1 is on the same side of the line L as Q, while P2 is on the opposite side.
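In coordinates, P∧L is (up to the basis trivector) a 3×3 determinant, so the side test reduces to comparing the signs of two determinants. A plain-Python sketch of this example (not the GrassmannAlgebra package):

```python
# Same-side test for a line through two points A and B.

def side(P, A, B):
    """Signed weight of P^(A^B), for points given as (x, y) pairs."""
    return ((A[0] - P[0]) * (B[1] - P[1])
            - (A[1] - P[1]) * (B[0] - P[0]))

def same_side(P, Q, A, B):
    """True when the ratio of the two weights is positive."""
    return side(P, A, B) * side(Q, A, B) > 0

# L is the line through the origin in the direction e2, i.e. through
# (0,0) and (0,1); Q, P1, P2 as in the example (with a = b = 1).
A, B = (0.0, 0.0), (0.0, 1.0)
Q, P1, P2 = (1.0, 0.0), (1.0, 1.0), (-1.0, 1.0)
print(same_side(P1, Q, A, B), same_side(P2, Q, A, B))  # True False
```

Multiplying the two weights rather than dividing them avoids a division while giving the same sign test; either form is invariant under rescaling or re-choosing the two points that express L.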
Regions of a line
Before we explore more complex regions and higher dimensions, let us take a step back to consider a line as the complete space (a 1-space), and a point R in the line as dividing it into two regions. The point R now takes the role of the line L in the previous example. Suppose the 1-space has basis {ℯ, e1}, and for simplicity we take R to be the origin ℯ. Following the pattern of the example above, we let the reference point Q be the point ℯ + e1, and points P1 and P2 be the points ℯ + a e1 and ℯ - b e1 respectively. Here, as above, the scalar coefficients a and b are strictly positive.
[Figure: P1∧R/(Q∧R) = a > 0 and P2∧R/(Q∧R) = -b < 0.]
The products of each of these three points with the dividing point R are:
Q∧R = (ℯ + e1)∧ℯ = -ℯ∧e1
P1∧R = (ℯ + a e1)∧ℯ = -a ℯ∧e1
P2∧R = (ℯ - b e1)∧ℯ = b ℯ∧e1
Thus we conclude that P1 is on the same side of the dividing point R as Q, while P2 is on the opposite side.
[Figure: two lines L1 and L2 dividing the plane into four quadrants, with the reference point Q in the quadrant of P1. The ratios are: P1, both positive; P2, negative for L1 and positive for L2; P3, both negative; P4, positive for L1 and negative for L2.]

Although for easier comprehension the graphic shows the two lines as if they are at right angles, and the points as if they are equidistant from the intersection, remember that all the explorations in this chapter are non-metric, and so angles and distances at this stage are undefined. We can summarize the region definitions more succinctly in a table.
Region    P∧L1/(Q∧L1)    P∧L2/(Q∧L2)
P1             +               +
P2             -               +
P3             -               -
P4             +               -
Example
To see how this works, suppose we know that some point Q is in some quadrant, and we would like to know which quadrant another given point P is in. We simply calculate the two ratios and compare their results to the table. We do this for a numerical example below. First define the lines and the point Q:

L1 = (ℯ + e2)∧(ℯ + 2 e1 - e2) ;  L2 = (ℯ - e2)∧(ℯ + 2 e1 + e2) ;  Q = ℯ - 4 e1 + 3 e2 ;

To see if a point P is in the same quadrant as the reference point Q, we simply check that the signs of both ratios %[P∧L1]/%[Q∧L1] and %[P∧L2]/%[Q∧L2] are positive. Below we write a function CheckP to take a random point P and calculate the value of the combined predicate for it.

CheckP := Module[{P}, P = ℯ + RandomReal[{-5, 5}] e1 + RandomReal[{-5, 5}] e2 ;
  {P, %[P∧L1]/%[Q∧L1] > 0 && %[P∧L2]/%[Q∧L2] > 0}]

Below we check 10 random points and find that three of them return True for CheckP and are thus in the same quadrant as Q.
Table[CheckP, {10}] // TableForm
ℯ + 2.50516 e1 - 0.477195 e2     False
ℯ - 4.78253 e1 + 4.11593 e2      True
ℯ - 0.511969 e1 + 2.70047 e2     False
ℯ - 2.45479 e1 + 1.84245 e2      True
ℯ + 2.7643 e1 - 2.33078 e2       False
ℯ + 3.40323 e1 - 4.45599 e2      False
ℯ + 3.68059 e1 - 4.49096 e2      False
ℯ - 1.29608 e1 + 1.51291 e2      True
ℯ + 3.90768 e1 - 1.83592 e2      False
ℯ + 2.52696 e1 - 0.82049 e2      False
[Figure: three lines L1, L2, L3 dividing the plane into seven regions, represented by the points P1, ..., P7, with the reference point Q.]
Planar regions defined by three lines.

For a given region, represented by the point Pi, we can form the ratio as we did above for each of the lines Lj and tabulate the regions as follows:
[Table: the sign of Pi∧Lj/(Q∧Lj) for each of the seven regions P1, ..., P7 against each of the three lines L1, L2, L3; the centre triangle, region P4, has the sign pattern (+, -, +).]
Example
Here are some actual lines and a reference point Q:

L1 = (ℯ + e2)∧(ℯ + 2 e1 - e2) ;  L2 = (ℯ - e2)∧(ℯ + 2 e1 + e2) ;
L3 = (ℯ - e1 - 2 e2)∧(ℯ + 3 e1 - 2 e2) ;  Q = ℯ - 3 e1 + e2 ;

To see if a point P is in the centre triangle (the region containing P4), we check that the signs of the products are +, -, +, according to the table above, and hence write the predicate CheckP4 for this triangular region as

CheckP4 := Module[{P}, P = ℯ + RandomReal[{-3, 3}] e1 + RandomReal[{-3, 3}] e2 ;
  {P, %[P∧L1]/%[Q∧L1] > 0 && %[P∧L2]/%[Q∧L2] < 0 && %[P∧L3]/%[Q∧L3] > 0}]

Below we check 10 random points and find that only one of them returns True for CheckP4 and is thus in the centre triangle.
Table[CheckP4, {10}] // TableForm
ℯ - 0.830005 e1 - 0.848417 e2    False
ℯ - 2.94884 e1 - 2.22242 e2      False
ℯ + 2.77172 e1 + 0.3699 e2       False
ℯ + 0.13623 e1 - 0.109629 e2     False
ℯ - 2.48096 e1 - 1.96646 e2      False
ℯ - 2.17117 e1 + 0.0203384 e2    False
ℯ + 1.02258 e1 + 1.51379 e2      False
ℯ + 0.506026 e1 - 0.54325 e2     True
ℯ - 1.15566 e1 + 0.842319 e2     False
ℯ + 2.25353 e1 - 1.15175 e2      False
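The triangle predicate can also be run as plain Python (an assumption of this sketch, not the book's package): each ratio is a quotient of 3×3 determinants, and the centre-triangle test is the conjunction of the three sign conditions +, -, + from the table.

```python
# Centre-triangle predicate from three sign ratios, with the lines and
# reference point of the example (coordinates relative to the origin).

def side(P, A, B):
    """Signed weight of P^(A^B) for points as (x, y) pairs."""
    return ((A[0] - P[0]) * (B[1] - P[1])
            - (A[1] - P[1]) * (B[0] - P[0]))

def ratio(P, Q, A, B):
    return side(P, A, B) / side(Q, A, B)

L1 = ((0.0, 1.0), (2.0, -1.0))
L2 = ((0.0, -1.0), (2.0, 1.0))
L3 = ((-1.0, -2.0), (3.0, -2.0))
Q = (-3.0, 1.0)

def check_p4(P):
    return (ratio(P, Q, *L1) > 0 and ratio(P, Q, *L2) < 0
            and ratio(P, Q, *L3) > 0)

print(check_p4((1.0, -1.3)), check_p4(Q))  # True False
```

The point (1, -1.3) lies inside the triangle cut out by the three lines, while Q itself (all three ratios equal to 1) fails the pattern, as it lies in a different region.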
Creating a pentagonal region by intersecting the half-planes which include the point Q.
To check, we choose three points: one inside, one on, and one outside the pentagon boundary. For the point on the boundary we choose the point M, say, mid-way between P2 and P5. For the other two we perturb the position of M by a small vertical displacement e. As expected, only the point M on the boundary returns True.

M = (P2 + P5)/2 ;  e = .000001 e2 ;
FiveStarPentagonBoundary /@ {M - e, M, M + e}
{False, True, False}
Note that the pentagon boundary cannot be defined by a Boolean expression of predicates in which the ratios are zero, because the lines from which the boundary is formed extend beyond the boundary.
For example, the vertices of the 5-star are outside the pentagon.
The quadrilateral
The region defined by the quadrilateral can be viewed as the intersection of the (infinite) region below the lines L1 and L4 , and the (infinite) region above the lines L2 and L3 . The region below the lines L1 and L4 can be defined by the conjunction of the regions on the positive side (relative to Q) of the lines L1 and L4 .
BelowL1L4[P_] := And[%[P∧L1]/%[Q∧L1] ≥ 0, %[P∧L4]/%[Q∧L4] ≥ 0]

Creating a 5-star region. Stage IIa: Creating the region under the lines L1 and L4.

The region above the lines L2 and L3 can be defined by the disjunction of the regions on the positive side (relative to Q) of the lines L2 and L3. This region is shown in the figure below with various opacities of red.
AboveL2L3[P_] := Or[%[P∧L2]/%[Q∧L2] ≥ 0, %[P∧L3]/%[Q∧L3] ≥ 0]

Creating a 5-star region. Stage IIb: Creating the region above the lines L2 and L3.
The 5-star
And the 5-star is the disjunction of the triangle and the quadrilateral.
FiveStar[P_] := Or[Triangle[P], Quadrilateral[P]]
And so also are the vertices of the interior pentagon explored in the previous section.
We can add small perturbations to these vertices to create points just outside the 5-star. These should all yield False.
d = .000001 e1 ;  e = .000001 e2 ;
{FiveStar /@ {P1 + e, P2 + e, P3 + e, P4 + e, P5 + e}, FiveStar /@ {A1 - e, A2 - d, A3 + e, A4 + e, A5 + d}}
{{False, False, False, False, False}, {False, False, False, False, False}}
Perturbations to create points just inside the 5-star should all yield True.
{FiveStar /@ {P1 - e, P2 - 2 d - e, P3 - d + e, P4 + d + e, P5 + 2 d - e}, FiveStar /@ {A1 + e, A2 + d, A3 - e, A4 - e, A5 - d}}
{{True, True, True, True, True}, {True, True, True, True, True}}
[Definition of the generalized predicate FiveStar[P, Q, {L1, ..., L5}], built as a Boolean combination of the sign conditions %[P∧Li]/%[Q∧Li] ≥ 0 for the five lines.]
This formula is valid for any set of five lines intersecting to form the 5-star, and any reference point Q inside the 5-star region. We test the formula with the points just inside the vertices of the pentagon as above.
FiveStar[#, Q, {L1, L2, L3, L4, L5}] & /@ {A1 + e, A2 + d, A3 - e, A4 - e, A5 - d}
{True, True, True, True, True}
To create a three-dimensionalized pyramidal-type 5-star we only need to take a single point S above the origin, and extend each of the lines through S to form five intersecting planes. The formula (predicate) for this is given by
FiveStar[P, Q, L∧S]

Here, L is the list of lines forming the original 5-star. Since the exterior product operation Wedge is Listable, taking the exterior product of this list of lines with the point S gives the corresponding list of planes through S. We then truncate the (infinite) pyramidal cone thus formed by a sixth plane, equivalent to the plane of the original planar 5-star, by adding an extra predicate term. We define this plane Π as the exterior product of any three of the original 5-star vertices.

Π = P1∧P2∧P3

We must also redefine the location of the reference point Q. Since the original point Q is now on the boundary of the new three-dimensional region, we will need to relocate it to an interior point. The required predicate for the truncating plane is then
%[P∧Π]/%[Q∧Π] ≥ 0

We can now define the solid region FiveStar3D as the conjunction of these two regions.

FiveStar3D[P_] := And[FiveStar[P, Q, L∧S], %[P∧Π]/%[Q∧Π] ≥ 0]

We now change to a 3-space, and define the two new points, and the new plane.

In the 3-space:  Q = ℯ + e3 ;  S = ℯ + 2 e3 ;  Π = P1∧P2∧P3 ;
To verify the formula, we can first check that the 5-star pyramid contains all its defining points.
FiveStar3D /@ {P1, P2, P3, P4, P5, S}
{True, True, True, True, True, True}
We can also check that points clustered sufficiently close to the origin are inside the region.
Table[FiveStar3D[ℯ + RandomReal[0.5] e1 + RandomReal[0.5] e2 + RandomReal[0.5] e3], {10}]
{True, True, True, True, True, True, True, True, True, True}
We can add small perturbations to these vertices to create points just outside the 5-star pyramid. These should all yield False.

e = .000001 e3 ;  FiveStar3D /@ ({P1, P2, P3, P4, P5, S} + e)
{False, False, False, False, False, False}
Summary
In 1-space, a hyperplane is a point. In 2-space, a hyperplane is a line. In 3-space, a hyperplane is a plane. In n-space, a hyperplane is an (n-1)-plane. Hyperplanes divide their space into regions. If two points P and Q lie on the same side of a hyperplane H, then the ratio R of the exterior products of the two points with the hyperplane is positive:

R = (P∧H)/(Q∧H) > 0

The point Q may be considered as a reference point, and the region in which it is located as the reference region. The point P may be considered as a variable point, and the region in which it is located as the region of interest. Regions of space may be defined by Boolean functions of the predicates R > 0, R ≥ 0, R < 0, R ≤ 0, and R = 0.
One of the most enticing features of Grassmann algebra applied to geometry which we will also explore in this section, is the idea that some geometric expressions may not only be geometrically depicted, but may also double as prescriptions for their geometric construction. The types of geometric expressions we will explore are exemplified by that leading to the equation to a conic section in the plane (discussed in the next section).
P∧P1∧(L1∨(P0∧(L2∨(P∧P2)))) = 0

Here, the factors P0, P1, P2, L1 and L2 are fixed (constant) entities defining the parameters of the conic section. P is considered the independent variable, and since it occurs twice, the expression represents an equation of the second degree. (If P had occurred m times in the expression, then the equation would have been of degree m.)

The reduction of a geometric expression to an algebraic equation for a curve or surface requires that the expression be either a scalar, or an n-element. The expression above is in fact the exterior product of three points, which reduces to a 3-element in the plane (a linear space of 3 dimensions). Each time a regressive product is simplified to its common factor, for example when two lines intersect, the result is in general a weighted point multiplied by an arbitrary scalar congruence factor (symbolized c in GrassmannAlgebra). The zero on the right hand side of the equation means that we can effectively ignore the weights and congruence factors, and visualize each regressive product as resulting simply in a point. In computations however, it is most effective to retain these weights to avoid coordinates with denominators.

For a geometric expression to result in an n-element it will usually be the exterior product of n points (3 points for the plane, 4 points for space). Geometrically this may be interpreted as the points all belonging to a hyperplane in the space, that is, collinear in the plane, coplanar in 3-space. For a geometric expression to result in a scalar it will usually be the regressive product of n hyperplanes (3 lines for the plane, 4 planes for 3-space). Geometrically this may be interpreted as the hyperplanes all belonging to a point in the space, that is, three lines intersecting in one point in the plane, four planes intersecting in one point in 3-space.
Historical Note
Grassmann discusses geometric constructions in Chapter 5 (Applications to Geometry) in his Ausdehnungslehre of 1862, and also in Crelle's Journal volumes XXXI, XXXVI and LII. Probably the most comprehensive additional treatment is given by Alfred North Whitehead in Book IV of his Treatise on Universal Algebra (Chapter IV: Descriptive Geometry and Chapter V: Descriptive Geometry of Conics and Cubics) 1898.
The algebraic equation to this line is obtained by simplifying this 3-element and extracting its scalar coefficient by dividing through by the basis 3-element of the space.

%[P∧L]/(ℯ∧e1∧e2) = 0
- a2 b1 + a1 b2 + a2 x1 - b2 x1 - a1 x2 + b1 x2 = 0
Simplifying this expression gives us the equation to a plane in 3-space in the more recognizable form A + B x1 + C x2 + D x3 = 0. Here we introduce the GrassmannAlgebra function ExpandAndCollect for collecting the coefficients of the algebraic equation.
ExpandAndCollect[%[P∧Π]/(ℯ∧e1∧e2∧e3), {x1, x2, x3}, 3] = 0
- a3 b2 c1 + a2 b3 c1 + a3 b1 c2 - a1 b3 c2 - a2 b1 c3 + a1 b2 c3 +
  (a3 b2 - a2 b3 - a3 c2 + b3 c2 + a2 c3 - b2 c3) x1 +
  (- a3 b1 + a1 b3 + a3 c1 - b3 c1 - a1 c3 + b1 c3) x2 +
  (a2 b1 - a1 b2 - a2 c1 + b2 c1 + a1 c2 - b1 c2) x3 = 0
If P is a simple exterior factor of ℓa, the equation is true for all P, leading us nowhere. How can ℓa depend linearly on P, without P being an (exterior) factor of ℓa? The answer lies in the regressive product. Geometrically, a line is most meaningfully constructed as an exterior product of points (P1 and Q1, say). Hence

ℓa = P1∧Q1

So one of these points must depend on P. Suppose it is Q1. Thus in turn, Q1 must be a regressive product, since in the plane the regressive product of two lines yields a point. Thus

Q1 = L1∨ℓb

Again, allowing P to be a simple exterior factor of one of these lines leads to a trivial dependence, so we leave one of them constant (L1, say) and replace the other, ℓb, by an exterior product of points, neither of which is P.

ℓb = P0∧Q2

As in the previous steps, we leave one factor constant, and express the other as a product (in this case regressive).

Q2 = L2∨ℓc

Finally, we are linearly far enough away from P on our chain of lines through points to come back and intersect it non-trivially.

ℓc = P∧P2

We can then eliminate ℓa, ℓb, ℓc, Q1 and Q2 to give the final equation for the conic section.

P∧P1∧(L1∨(P0∧(L2∨(P∧P2)))) = 0        4.14
Note that since interchanging the order of factors in an exterior or regressive product changes at most its sign, and since this change is inconsequential to the final result, we can if we wish rewrite the geometric equation in any of a number of ways, in particular in reverse order.

((((P2∧P)∨L2)∧P0)∨L1)∧P1∧P = 0        4.15
1. Construct the fixed points and lines: P0, P1, P2, L1, and L2.
2. Construct a line ℓc through P2 to intersect line L2 in point Q2.
3. Construct a line ℓb through Q2 and P0 to intersect line L1 in point Q1.
4. Construct a line ℓa through Q1 and P1 to intersect line ℓc in point P.
5. Repeat the construction by drawing a new line ℓc through P2.
We do of course still need to verify that this formula and construction lead to a scalar second degree equation in two variables. This we do in the next section.
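This degree check can also be previewed outside the GrassmannAlgebra package by modelling the plane in homogeneous coordinates (w, x, y), where the join of two points and the meet of two lines are both given by the cross product. The sketch below assumes SymPy, and the coordinates chosen for the fixed points and lines are arbitrary illustrations, not the book's values.

```python
# Build the chain Q2 = L2 v (P ^ P2), Q1 = L1 v (P0 ^ Q2) and expand
# the conic condition P ^ P1 ^ Q1 = 0 in the cross-product model.
import sympy as sp

x, y = sp.symbols('x y')

def cross(a, b):
    return sp.Matrix(a).cross(sp.Matrix(b))

P  = sp.Matrix([1, x, y])                               # variable point
P0 = sp.Matrix([1, 2, 0])                               # arbitrary fixed points
P1 = sp.Matrix([1, 0, 1])
P2 = sp.Matrix([1, 3, 2])
L1 = cross(sp.Matrix([1, 0, 0]), sp.Matrix([1, 1, 3]))  # arbitrary fixed lines
L2 = cross(sp.Matrix([1, 2, 3]), sp.Matrix([1, 3, 0]))

Q2 = cross(L2, cross(P, P2))        # Q2 = L2 v (P ^ P2)
Q1 = cross(L1, cross(P0, Q2))       # Q1 = L1 v (P0 ^ Q2)
conic = sp.expand(sp.Matrix.hstack(P, P1, Q1).det())

print(sp.Poly(conic, x, y).total_degree())  # → 2
```

Substituting the coordinates of P1 or P2 for (x, y) gives zero, as each collapses one link of the chain, so both points lie on the curve.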
(The fixed points P0, P1, P2 and fixed lines L1, L2 are first given specific numeric coordinates.)
4. Construct a line ℓa through Q1 and P1 to intersect line ℓc in point P. Q1, P1 and P are thus on the same line, and the left hand side of the equation (which we call ℰ1) becomes
ℰ1 = CongruenceSimplify[P ∧ P1 ∧ Q1]
(a lengthy scalar polynomial in x and y multiplying the basis n-element 𝒪 ∧ e1 ∧ e2; omitted here)
Dividing out the basis n-element and collecting the coefficients of x and y.
ℰ2 = ExpandAndCollect[ℰ1 / (𝒪 ∧ e1 ∧ e2), {x, y}, 2] == 0
To check if this gives the correct form, we can substitute in some values for the coordinates of the points and lines.
ℰ2 /. (numeric values for the coordinate symbols)

- 3 + 6 x - 3 x² + 2 x y - y² = 0
This is an ellipse.
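That the curve is an ellipse can be read off from the discriminant B² - 4AC of its quadratic part; a quick check in Python:

```python
# Classify A x^2 + B x y + C y^2 + D x + E y + F = 0 by its discriminant:
# negative -> ellipse, zero -> parabola, positive -> hyperbola.
A, B, C, D, E, F = -3, 2, -1, 6, 0, -3   # coefficients of the equation above
disc = B**2 - 4*A*C
kind = "ellipse" if disc < 0 else ("parabola" if disc == 0 else "hyperbola")
print(disc, kind)  # → -8 ellipse
```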
ContourPlot[- 3 + 6 x - 3 x² + 2 x y - y² == 0, {x, 0, 3}, {y, 0, 3}, GridLines → Automatic, ImageSize → {3*72, 3*72}]
But this time, P0 is a fixed point while Q1 and Q2 are functions of the variable point P. Because we are looking for a more symmetric formula, let us suppose the form for Q1 is the same as that for Q2 in the previous formulation:
Qi = Li ∨ (P ∧ Pi)
Here, the Li and Pi are fixed. The Qi lie on their lines Li . The equation becomes
P0 ∧ (L1 ∨ (P ∧ P1)) ∧ (L2 ∨ (P ∧ P2)) = 0

4.16

Thus we can imagine two fixed lines L1 and L2 and three fixed points P0, P1, and P2. The prescription to construct the conic section is as follows:
1. Construct the fixed points and lines: P0, P1, P2, L1, and L2.
2. Construct a line through the point P0 to intersect L1 and L2 in Q1 and Q2 respectively.
3. Construct a line through Q1 and P1.
4. Construct a line through Q2 and P2.
The first line, through P0, Q1 and Q2, satisfies the condition P0 ∧ Q1 ∧ Q2 = 0. The second and third lines, through Qi and Pi, satisfy the conditions Qi ∧ P ∧ Pi = 0. Thus their intersection gives the point P. We now have two formulas purporting to describe a conic section: the formula above, and the formula we first derived.
𝒜1 = P ∧ P1 ∧ (L1 ∨ (P0 ∧ (L2 ∨ (P ∧ P2))))
𝒜2 = P0 ∧ (L1 ∨ (P ∧ P1)) ∧ (L2 ∨ (P ∧ P2))
To see if they are the same, we could choose a basis, substitute symbolic coordinates, and check that the same second degree equation results. An alternative method is to prove the identity of the formulas from the Common Factor Theorem by applying CongruenceSimplify to the symbolic expressions in the formulas. To indicate that the symbols for the lines represent elements of grade 2 we underscript them with their grade. To indicate that the symbols for the points represent elements of grade 1 we declare them as vector symbols.
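The identity can also be checked independently by modelling join and meet with cross products in homogeneous coordinates. The sketch below assumes SymPy; the coordinates are arbitrary choices, not the book's.

```python
# The two conic formulas, evaluated in a cross-product model of join (^)
# and meet (v) in homogeneous plane coordinates (w, x, y).
import sympy as sp

x, y = sp.symbols('x y')

def cross(a, b):
    return sp.Matrix(a).cross(sp.Matrix(b))

P  = sp.Matrix([1, x, y])
P0, P1, P2 = (sp.Matrix(v) for v in ([1, 2, 0], [1, 0, 1], [1, 3, 2]))
L1 = cross(sp.Matrix([1, 0, 0]), sp.Matrix([1, 1, 3]))
L2 = cross(sp.Matrix([1, 2, 3]), sp.Matrix([1, 3, 0]))

A1 = sp.Matrix.hstack(P, P1, cross(L1, cross(P0, cross(L2, cross(P, P2))))).det()
A2 = sp.Matrix.hstack(P0, cross(L1, cross(P, P1)), cross(L2, cross(P, P2))).det()

# The two polynomials agree up to an overall sign, so they define the same curve
print(sp.expand(A1 - A2) == 0 or sp.expand(A1 + A2) == 0)
```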
𝔾2; DeclareExtraVectorSymbols[{P, P0, P1, P2}];
CongruenceSimplify[L1 ∨ (P ∧ P1)]

|P1 ∧ L1| P - |P ∧ L1| P1
Remember that the bracketing bar expressions are scalar quantities: |Z| is to be interpreted as the coefficient of the basis n-element when Z is expressed in the current basis. See ToCongruenceForm. We can expand a complete formula in a similar manner to get an alternative expression for the conic section. For example, the first formula is
CongruenceSimplify[P ∧ P1 ∧ (L1 ∨ (P0 ∧ (L2 ∨ (P ∧ P2))))] == 0

(|P ∧ L2| |P2 ∧ L1| - |P ∧ L1| |P2 ∧ L2|) P ∧ P0 ∧ P1 + |P ∧ L2| |P0 ∧ L1| P ∧ P1 ∧ P2 = 0

4.17
This is just one of a number of alternative forms. Expanding the second formula gives a different expression. In order to show that they are in fact equal we need to note that in the formulae there are four points involved, of which only three are independent in the plane. Hence we can arbitrarily express one of them (say P) in terms of the other three. (We use the symbol wP to emphasize that the expression we substitute is actually a weighted point).
wP = a P0 + b P1 + c P2 ;
In this form, the Pi can be visualized as representing a weighted point basis for the plane, and a, b and c as variable coefficients of the weighted point wP in this basis. The formulae for comparison then become
𝒜1 = wP ∧ P1 ∧ (L1 ∨ (P0 ∧ (L2 ∨ (wP ∧ P2))));
𝒜1 = GrassmannCollect[Expand[CongruenceSimplify[𝒜1]]]

(a² |P0 ∧ L1| |P0 ∧ L2| + a b |P0 ∧ L1| |P1 ∧ L2| + a c |P0 ∧ L2| |P2 ∧ L1| + b c |P1 ∧ L2| |P2 ∧ L1| - b c |P1 ∧ L1| |P2 ∧ L2|) P0 ∧ P1 ∧ P2
𝒜2 = P0 ∧ (L1 ∨ (wP ∧ P1)) ∧ (L2 ∨ (wP ∧ P2));
𝒜2 = GrassmannCollect[Expand[CongruenceSimplify[𝒜2]]]

(a² |P0 ∧ L1| |P0 ∧ L2| + a b |P0 ∧ L1| |P1 ∧ L2| + a c |P0 ∧ L2| |P2 ∧ L1| + b c |P1 ∧ L2| |P2 ∧ L1| - b c |P1 ∧ L1| |P2 ∧ L2|) P0 ∧ P1 ∧ P2

The two expansions are identical, so the two formulas describe the same conic section.
But since any points on the lines (except O) will be satisfactory, we can choose them advantageously as the intersections with the lines P0 ∧ Pi.
R1 = L1 ∨ (P0 ∧ P2)    R2 = L2 ∨ (P0 ∧ P1)

4.18
Note the symmetry. There are five fixed points, O, P1 , P2 , R1 , R2 , and the variable point P. Each point occurs twice. Swapping P1 and P2 while at the same time swapping R1 and R2 leaves the equation unchanged. We can show that these five points satisfy the equation, and thus all lie on the conic section. But since a conic section constructed through five points is unique, this equation represents the general conic section, or general quadric curve in the plane.
We show this as follows. First, we can immediately read off from the equation that it is satisfied when P coincides with either P1 or P2. To show it for O, R1 and R2 we can expand the regressive products by the Common Factor Theorem, as we did in the previous section, to obtain a different view of the equation.
DeclareExtraVectorSymbols[{P, O, P1, P2, R1, R2}];
𝒞1 = CongruenceSimplify[((P1 ∧ R2) ∨ (P2 ∧ R1)) ∧ ((O ∧ R1) ∨ (P ∧ P1)) ∧ ((O ∧ R2) ∨ (P ∧ P2))] == 0

|O ∧ P ∧ P1| |P ∧ P2 ∧ R2| |P2 ∧ R1 ∧ R2| |O ∧ P1 ∧ R1| - |O ∧ P ∧ P2| |P ∧ P1 ∧ R1| |P2 ∧ R1 ∧ R2| |O ∧ P1 ∧ R2| + |O ∧ P ∧ P1| |P ∧ P2 ∧ R2| |P1 ∧ P2 ∧ R1| |O ∧ R1 ∧ R2| - |O ∧ P ∧ P1| |O ∧ P ∧ P2| |P2 ∧ R1 ∧ R2| |P1 ∧ R1 ∧ R2| = 0
It is clear from the first factor in each of the terms that the equation is satisfied when P is equal to O. But it is not yet clear that the equation is satisfied when P is equal to R1 or R2 . To show this we try applying CongruenceSimplify to a rearranged form where the factors of the regressive products are reversed.
𝒞2 = CongruenceSimplify[((P2 ∧ R1) ∨ (P1 ∧ R2)) ∧ ((P ∧ P1) ∨ (O ∧ R1)) ∧ ((P ∧ P2) ∨ (O ∧ R2))] == 0

|O ∧ P ∧ R1| |O ∧ P2 ∧ R2| |P1 ∧ R1 ∧ R2| |P ∧ P1 ∧ P2| - |O ∧ P ∧ R1| |O ∧ P2 ∧ R2| |P1 ∧ P2 ∧ R2| |P ∧ P1 ∧ R1| + |O ∧ P ∧ R2| |O ∧ P1 ∧ R1| |P1 ∧ P2 ∧ R2| |P ∧ P2 ∧ R1| - |O ∧ P ∧ R1| |O ∧ P ∧ R2| |P1 ∧ P2 ∧ R2| |P1 ∧ P2 ∧ R1| = 0
Now we see that the equation is satisfied not only when P is equal to O, but also when P is equal to R1. We try again with the factors of only the second and third regressive products reversed.
𝒞3 = CongruenceSimplify[((P1 ∧ R2) ∨ (P2 ∧ R1)) ∧ ((P ∧ P1) ∨ (O ∧ R1)) ∧ ((P ∧ P2) ∨ (O ∧ R2))] == 0

- |O ∧ P ∧ R2| |O ∧ P1 ∧ R1| |P2 ∧ R1 ∧ R2| |P ∧ P1 ∧ P2| + |O ∧ P ∧ R1| |O ∧ P2 ∧ R2| |P1 ∧ P2 ∧ R1| |P ∧ P1 ∧ R2| - |O ∧ P ∧ R2| |O ∧ P1 ∧ R1| |P1 ∧ P2 ∧ R1| |P ∧ P2 ∧ R2| + |O ∧ P ∧ R1| |O ∧ P ∧ R2| |P1 ∧ P2 ∧ R1| |P1 ∧ P2 ∧ R2| = 0
This time we see immediately that the equation is satisfied when P is equal to R2. Of course it would have been more direct to show these sorts of results by replacing the point concerned by P in the equation, and then simplifying. For example, replacing R2 by P in the above equation, then simplifying, gives zero as expected.
CongruenceSimplify[((P1 ∧ P) ∨ (P2 ∧ R1)) ∧ ((P ∧ P1) ∨ (O ∧ R1)) ∧ ((P ∧ P2) ∨ (O ∧ P))]
0
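The same membership claims can be replayed numerically by modelling join and meet with cross products in homogeneous coordinates. This is a sketch assuming NumPy; the five fixed points are arbitrary choices in general position, not the book's.

```python
# Evaluate the five-point conic equation at a point P given in homogeneous
# coordinates (w, x, y); joins of points and meets of lines are cross products.
import numpy as np

def conic_eq(P, O, P1, P2, R1, R2):
    j = np.cross                                   # join and meet alike
    a = j(j(P1, R2), j(P2, R1))                    # (P1^R2) v (P2^R1)
    b = j(j(O, R1), j(P, P1))                      # (O^R1) v (P^P1)
    c = j(j(O, R2), j(P, P2))                      # (O^R2) v (P^P2)
    return np.linalg.det(np.column_stack([a, b, c]))

O  = np.array([1.0, 0, 0])
P1 = np.array([1.0, 2, 1]);  P2 = np.array([1.0, -1, 2])
R1 = np.array([1.0, 1, -1]); R2 = np.array([1.0, 3, 2])

# Each of the five fixed points satisfies the equation; a generic point does not
fixed = [conic_eq(Q, O, P1, P2, R1, R2) for Q in (O, P1, P2, R1, R2)]
other = conic_eq(np.array([1.0, 1, 1]), O, P1, P2, R1, R2)
print(max(abs(v) for v in fixed), other)
```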
We have thus shown that the five fixed points, O, P1 , P2 , R1 , R2 , all lie on the conic section. Here is an hyperbola as a graphic example, plotted from the original formula above.
Constructing an hyperbola through 5 points
And here is a circle, the only difference being the relative location of the five points.
Constructing a circle through 5 points
Dual constructions
The Principle of Duality discussed in Chapter 3 says that to every formula such as
P0 ∧ (L1 ∨ (P ∧ P1)) ∧ (L2 ∨ (P ∧ P2)) = 0
there corresponds a dual formula in which exterior and regressive products are interchanged, and m-elements are interchanged with (n-m)-elements: in this case, points are interchanged with lines. Thus we can write another formula for a conic section as
𝒟1 = L0 ∨ (P1 ∧ (ξ ∨ L1)) ∨ (P2 ∧ (ξ ∨ L2));
where the Pi are fixed points, the Li are fixed lines, and ξ is a variable line. This formula says that three lines intersect in one point. One line, L0, is fixed. The other two are of the form
ℓi = Pi ∧ (ξ ∨ Li)
To see most directly how such a formula translates into an algebraic equation, we take a numerical example with fixed points and lines defined:
(The fixed points P1, P2 and fixed lines L0, L1, L2 are first given specific numeric coordinates.)
We can define the variable line as the product of its intersections with the coordinate axes:
𝔾2; DeclareExtraScalarSymbols[x, y]; ξ = (𝒪 + x e1) ∧ (𝒪 + y e2);
Substituting these values into the geometric equation and simplifying gives
Expand[CongruenceSimplify[𝒟1]] == 0

198 x y - 90 x² y - 21 y² - 80 x y² + 37 x² y² = 0
We can solve for y in terms of x, but remember that these are line coordinates, not point coordinates.
y = (18 (- 11 x + 5 x²)) / (- 21 - 80 x + 37 x²);
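As a consistency check (a sketch assuming SymPy), substituting this expression for y back into the equation above reduces it identically to zero:

```python
import sympy as sp

x = sp.symbols('x')
y = 18*(-11*x + 5*x**2) / (-21 - 80*x + 37*x**2)

# The left hand side of the line-coordinate equation, with y substituted
residue = 198*x*y - 90*x**2*y - 21*y**2 - 80*x*y**2 + 37*x**2*y**2
print(sp.simplify(residue))  # → 0
```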
By plotting a line for each pair of line coordinates, we see that they form the envelope of a conic section as expected.
Graphics[Table[Line[{{2 x, -y}, {-x, 2 y}}], {x, -20, 20, .05}], PlotRange → {{-1, 3}, {-.5, 3}}]
and replace the 2-space hyperplanes (the lines) with 3-space hyperplanes (planes). Thus we can rewrite the formula as
4.19
5. Repeat the construction by drawing a new plane Πc through L.

We now derive the algebraic equation in the same manner as we have previously for the planar cases. First, define the variable point P and the fixed entities.
𝔾3; DeclareExtraScalarSymbols[x, y, z];
P = 𝒪 + x e1 + y e2 + z e3;
(declarations of scalar symbols for the fixed coordinates, and definitions in terms of them of the points P0 and P1, the line L, and the planes Π1 and Π2)
(The expanded formula is a lengthy polynomial of second degree in x, y and z multiplying the basis n-element; it is omitted here.)
Substituting some random values for the coordinates of the fixed points, line and planes gives us the equation:
4.753 - 0.900375 z + 2.06412 z² - 12.6175 x + 2.27084 z x + 3.12987 x² + 2.82363 y + 2.80525 z y + 2.39794 x y - 0.486937 y² = 0
The graphic below depicts this quadric surface together with its defining points, line and planes. The construction lines are not shown as they are difficult to portray unambiguously in one view.
Note that the line L and the point P1 are on the surface. This is clear from the geometric equation.
A typical factor of the formula, for example the first, simplifies to the weighted point:
Q1 = CongruenceSimplify[L1 ∨ (P ∧ P1)]
(a weighted point whose weight and coordinates are polynomials in x and y; omitted here)
Remember that CongruenceSimplify automatically eliminates the congruence factor c arising from the regressive product in the current basis. The complete formula simplifies to
𝒟2 = CongruenceSimplify[𝒟1] / (𝒪 ∧ e1 ∧ e2)
(a lengthy polynomial in x and y, equated to zero; omitted here)
It is easy to check that the expression is indeed a cubic, either by inspection or by application of ExpandAndCollect, as in the previous section. It is instructive perhaps to see a graphical depiction of how the formula works. In the graphic below we plot a typical cubic in the plane defined by the three points P1, P2, P3, and the three lines L1, L2, and L3. The point P is located by the condition that the three points Q1, Q2, and Q3 are collinear, where each of the points Qi is defined as the intersection of the line P ∧ Pi with the line Li.
Qi = Li ∨ (P ∧ Pi)
As can be seen from the graphic, the cubic curve passes through nine special points: the three points P1, P2, P3, the three intersections of the three lines L1, L2, and L3, and the intersections Ri of these three lines with the lines through the points Pi taken two at a time.
𝒟1 = (L1 ∨ (P ∧ P1)) ∧ (L2 ∨ (P ∧ P2)) ∧ (L3 ∨ (P ∧ P3))
R1 = L1 ∨ (P2 ∧ P3)    R2 = L2 ∨ (P3 ∧ P1)    R3 = L3 ∨ (P1 ∧ P2)
These special points are displayed as large semi-opaque red points (perhaps behind a blue point).
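In the cross-product model of join and meet in homogeneous coordinates, one can confirm that the formula is of degree at most three (three for generic data) and passes through the points Pi. This is a sketch assuming SymPy, with arbitrarily chosen coordinates rather than the book's.

```python
# The cubic condition: the three points Qi = Li v (P ^ Pi) are collinear.
import sympy as sp

x, y = sp.symbols('x y')

def cross(a, b):
    return sp.Matrix(a).cross(sp.Matrix(b))

P = sp.Matrix([1, x, y])
P1, P2, P3 = (sp.Matrix(v) for v in ([1, 0, 0], [1, 2, 0], [1, 1, 2]))
L1 = cross(sp.Matrix([1, 3, 1]), sp.Matrix([1, 0, 2]))
L2 = cross(sp.Matrix([1, -1, 1]), sp.Matrix([1, 2, 3]))
L3 = cross(sp.Matrix([1, 1, -1]), sp.Matrix([1, 3, 2]))

Q = [cross(L, cross(P, Pi)) for L, Pi in ((L1, P1), (L2, P2), (L3, P3))]
cubic = sp.expand(sp.Matrix.hstack(*Q).det())

# Degree at most three in x and y; zero at each Pi, since there Qi collapses
print(sp.Poly(cubic, x, y).total_degree())
```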
Pascal's Theorem
Our discussion up to this point leads us to Pascal's Theorem which may be stated: If a hexagon is inscribed in a conic, then opposite sides intersect in collinear points. In the previous sections we have shown that the geometric equation below, involving five fixed points and a variable point P, leads to the algebraic equation of a conic section:
((P1 ∧ R2) ∨ (P2 ∧ R1)) ∧ ((O ∧ R1) ∨ (P ∧ P1)) ∧ ((O ∧ R2) ∨ (P ∧ P2)) = 0
This is therefore the converse to Pascal's Theorem: If opposite sides of a hexagon intersect in three collinear points, then the hexagon may be inscribed in a conic. To explore Pascal's Theorem further we find it conceptually advantageous to start with a fresh notation. We denote the six points by P1 , P2 , P3 , P4 , P5 , and P6 , and suppose that they are placed on the conic in arbitrary (but of course separate) positions. Let us consider the hexagon formed by these points in order, so that the sides S1 to S6 may be written, starting with P1 as
S1 = P1 ∧ P2;  S2 = P2 ∧ P3;  S3 = P3 ∧ P4;  S4 = P4 ∧ P5;  S5 = P5 ∧ P6;  S6 = P6 ∧ P1;
The pairs of opposite sides can then be intersected to give the collinear points Z1 , Z2 and Z3 .
Z1 = S1 ∨ S4;  Z2 = S2 ∨ S5;  Z3 = S3 ∨ S6;
We will call these points Pascal points. The Pascal line ℓ on which these points lie can be written alternatively as
ℓ ≅ Z1 ∧ Z2 ≅ Z1 ∧ Z3 ≅ Z2 ∧ Z3
The formula is then constructed from the condition that the points Z1, Z2, and Z3 all lie on ℓ, that is, their exterior product is zero.
Z1 ∧ Z2 ∧ Z3 = 0

((P1 ∧ P2) ∨ (P4 ∧ P5)) ∧ ((P2 ∧ P3) ∨ (P5 ∧ P6)) ∧ ((P3 ∧ P4) ∨ (P6 ∧ P1)) = 0

4.20
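The condition can be checked numerically for six points on a circle, modelling join and meet by cross products in homogeneous coordinates. This is a sketch assuming NumPy; the hexagon angles are arbitrary choices.

```python
# Pascal's theorem on the unit circle: the three meets of opposite sides
# of an inscribed hexagon are collinear (their triple product vanishes).
import numpy as np

def pt(t):                       # homogeneous point (w, x, y) on the circle
    return np.array([1.0, np.cos(t), np.sin(t)])

P = [pt(t) for t in (0.3, 1.1, 1.9, 2.8, 4.0, 5.2)]      # hexagon in order
S = [np.cross(P[i], P[(i + 1) % 6]) for i in range(6)]   # sides S1..S6
Z = [np.cross(S[i], S[i + 3]) for i in range(3)]         # Z1, Z2, Z3
print(abs(np.linalg.det(np.column_stack(Z))))            # ≈ 0
```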
Pascal's theorem
As may be observed, this is essentially the same graphic as Constructing a circle through 5 points in the section above on Conic sections through five points, except that the points have been renamed as follows:
P1 → P1, P2 → P5, R1 → P4, R2 → P2, O → P3, P → P6, Q1 → Z3, Q2 → Z2, P0 → Z1
To compute the three Pascal points we take the regressive product of opposite sides, substitute in the values EllipsePoints, and use CongruenceSimplify and ToPointForm to do the simplifications.
𝔾2; {Z1, Z2, Z3} = ToPointForm[CongruenceSimplify[{S1 ∨ S4, S2 ∨ S5, S3 ∨ S6} /. EllipsePoints]]
(three weighted points whose coordinates involve √3; output omitted here)
Pascal's Theorem says that these three points are collinear, hence their exterior product should be zero.
GrassmannSimplify[Z1 ∧ Z2 ∧ Z3]
0
The Pascal line may be obtained as the exterior product of any pair of the points.
{GrassmannSimplify[Z1 ∧ Z2], GrassmannSimplify[Z1 ∧ Z3], GrassmannSimplify[Z2 ∧ Z3]}
(three 2-elements, each a scalar multiple of the same line; output omitted here)
Since the weight is not important, we can divide through by it and simplify to obtain the Pascal line in the form
ℓ ≅ (𝒪 - 3 √3 e2) ∧ e1
Pascal's theorem in an ellipse
Note that these are all vectors, not points as expected, because the opposite sides are all parallel.
Pascal's theorem for a regular hexagon: opposite sides intersect in points at infinity lying in the same bivector
These vectors are dependent (as of course they must be in the plane); and consequently, their exterior product is zero.
GrassmannSimplify[Z1 ∧ Z2 ∧ Z3]
0
Thus Pascal's Theorem is demonstrated in this example to hold for pairs of sides intersecting in points at infinity (vectors). The three points at infinity lie on the same line at infinity. That is, the three vectors lie in the same bivector. (Of course, this must be the case for any three vectors in the plane.)
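The point-at-infinity behaviour is easy to reproduce in the same homogeneous-coordinate model of join and meet: for a regular hexagon the meet of each pair of opposite sides has zero weight, that is, it is a vector. A sketch assuming NumPy:

```python
# A regular hexagon inscribed in the unit circle: opposite sides are
# parallel, so their meets have (numerically) zero weight coordinate.
import numpy as np

def pt(t):
    return np.array([1.0, np.cos(t), np.sin(t)])

P = [pt(np.pi * k / 3) for k in range(6)]                # vertices at 60 degrees
S = [np.cross(P[i], P[(i + 1) % 6]) for i in range(6)]   # the six sides
Z = [np.cross(S[i], S[i + 3]) for i in range(3)]         # meets of opposite sides

print([abs(z[0]) for z in Z])                            # weights, all ≈ 0
print(abs(np.linalg.det(np.column_stack(Z))))            # the three are dependent
```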
Pascal lines
Hexagons in a conic
We have seen how Pascal's Theorem establishes a line associated with any hexagon in a conic. Suppose we have fixed the positions of the six points anywhere we wish on the conic, and named them in any order we wish. How many essentially different hexagons (whose sides may of course cross each other) can we form by joining these six points in some order? As before, we name these points P1, P2, P3, P4, P5 and P6. Without loss of generality we can start with P1 and join it in any of five ways to the other five points. From that point we then have four ways of forming the next side; then three ways; then two ways. This makes 5 × 4 × 3 × 2 = 120 hexagons. However, for each hexagon P1 Pa Pb Pc Pd Pe there corresponds the same hexagon traversed in the reverse order P1 Pe Pd Pc Pb Pa. Thus there are just 60 essentially different hexagons. We use Mathematica's Permutations function to list them and Union to eliminate the ones traversed in reverse order.
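The count of 60 can also be confirmed with a few lines of Python, treating two orderings as the same hexagon when one is the reverse of the other:

```python
from itertools import permutations

# Hexagons through six labelled points, always starting at point 1;
# an ordering and its reverse describe the same hexagon.
seen = set()
for p in permutations((2, 3, 4, 5, 6)):
    if tuple(reversed(p)) not in seen:
        seen.add(p)
print(len(seen))  # → 60
```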
hexagons = Join[{P1}, #] & /@ Union[Permutations[{P2, P3, P4, P5, P6}], SameTest → (#1 === Reverse[#2] &)]
(the 60 hexagons, each a list of six points beginning with P1; output omitted here)
We take the example of the ellipse and its points from the previous section and display their hexagons.
Pascal points
When two opposite sides of a hexagon are extended they intersect in a point which we have called a Pascal point. Pascal's Theorem says that the three Pascal points lie on the one line. We can convert each hexagon in the list of hexagons above to a list of three formulae, one formula for each point.
allPascalPointFormulae = hexagons /. {Pa_, Pb_, Pc_, Pd_, Pe_, Pf_} → {(Pa ∧ Pb) ∨ (Pd ∧ Pe), (Pb ∧ Pc) ∨ (Pe ∧ Pf), (Pc ∧ Pd) ∨ (Pf ∧ Pa)}
(a list of 60 triples of Pascal point formulae, one triple for each hexagon; output omitted here)
This list is comprised of 3 × 60 = 180 formulae. But some of them define the same point, because we can interchange the factors in the regressive product, or interchange the factors in each exterior product, without affecting the result. To reduce these formulae to a list of unique formulae, we can apply GrassmannSimplify to put the products into canonical order, resulting in the duplicates only differing by a sign (which is unimportant).
𝔾2; pascalPointFormulae = Union[GrassmannSimplify[Flatten[allPascalPointFormulae]] /. -p_ → p]
(the 45 distinct formulae, each a regressive product of the form (Pa ∧ Pb) ∨ (Pc ∧ Pd); output omitted here)
Length[pascalPointFormulae]
45
There are thus in general 45 distinct Pascal point formulae for the hexagons of a conic. Of course, since at this stage we have yet to discuss Pascal lines, these formulae are also valid for the 60 hexagons formed from any six points, not necessarily those constrained to lie on a conic. It should be noted that these are distinct formulae. The number of actual distinct points resulting from their application to any six given points may be less. Some may be duplicates, for example when the hexagon has some symmetry, and some may reduce to vectors when the hexagon sides are parallel. For the ellipse and hexagon points discussed above, we can apply the formulae to get the actual points.
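The count of 45 is just the number of unordered pairs of disjoint point-pairs chosen from six points, C(6,2) × C(4,2) / 2; a Python sketch:

```python
from itertools import combinations

# A Pascal point formula is a regressive product (Pa^Pb) v (Pc^Pd) of two
# sides on four distinct points; the order of the two sides does not matter.
formulae = set()
for side1 in combinations(range(1, 7), 2):
    for side2 in combinations(range(1, 7), 2):
        if set(side1) & set(side2):
            continue                       # the sides must not share a point
        formulae.add(frozenset((side1, side2)))
print(len(formulae))  # → 45
```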
pascalPoints = ToPointForm[CongruenceSimplify[pascalPointFormulae /. EllipsePoints]]
(a list of 45 weighted points and vectors whose coordinates involve √3; output omitted here)
Two of these points occur in triplicate, and three of the results are vectors (points at infinity).
Thus, when we plot these points we should be able to count 38 points in blue, and 3 vectors (without their arrowheads) in green.
Pascal lines
Since there are 60 essentially different hexagons one can form from 6 given points, there are thus 60 corresponding Pascal lines. We can display the equations for these lines.
PascalLineEquations = allPascalPointFormulae /. {Q1_, Q2_, Q3_} → (Q1 ⋀ Q2 ⋀ Q3 == 0)

{((P1⋀P2)∨(P4⋀P5)) ⋀ ((P2⋀P3)∨(P5⋀P6)) ⋀ ((P3⋀P4)∨(P6⋀P1)) == 0,
 ((P1⋀P2)∨(P4⋀P6)) ⋀ ((P2⋀P3)∨(P6⋀P5)) ⋀ ((P3⋀P4)∨(P5⋀P1)) == 0,
 ((P1⋀P2)∨(P5⋀P4)) ⋀ ((P2⋀P3)∨(P4⋀P6)) ⋀ ((P3⋀P5)∨(P6⋀P1)) == 0,
 ⋯ (60 equations in all)}
We can verify these equations for the points we have chosen on the ellipse.
Union[Simplify[𝒢[PascalLineEquations /. EllipsePoints]]]

{True}
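The same verification can be sketched numerically, here in Python with homogeneous coordinates standing in for weighted points (the six points on the ellipse below are arbitrary choices, not necessarily those of the text): lines are joins (cross products) of point pairs, Pascal points are meets of opposite sides, and collinearity is a vanishing 3×3 determinant.

```python
import numpy as np

def join(p, q):
    """Line through two homogeneous points (their cross product)."""
    return np.cross(p, q)

def meet(l1, l2):
    """Intersection point of two homogeneous lines."""
    return np.cross(l1, l2)

# Six points on the ellipse x^2/16 + y^2/9 = 1, in homogeneous coordinates
angles = [0.3, 1.1, 1.9, 2.7, 4.0, 5.2]
P = [np.array([4 * np.cos(t), 3 * np.sin(t), 1.0]) for t in angles]

# Pascal points: intersections of the three pairs of opposite sides
Q1 = meet(join(P[0], P[1]), join(P[3], P[4]))
Q2 = meet(join(P[1], P[2]), join(P[4], P[5]))
Q3 = meet(join(P[2], P[3]), join(P[5], P[0]))

# Pascal's theorem: the three points are collinear, so the determinant
# of their (normalized) homogeneous coordinates vanishes
Q = np.vstack([q / np.linalg.norm(q) for q in (Q1, Q2, Q3)])
print(abs(np.linalg.det(Q)) < 1e-9)  # True
```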
To generate formulae for the Pascal lines themselves, we need only choose any two of each line's three Pascal points. Here we choose the first two.
PascalLineFormulae = allPascalPointFormulae /. {Q1_, Q2_, Q3_} → Q1 ⋀ Q2

{((P1⋀P2)∨(P4⋀P5)) ⋀ ((P2⋀P3)∨(P5⋀P6)),
 ((P1⋀P2)∨(P5⋀P4)) ⋀ ((P2⋀P3)∨(P4⋀P6)),
 ((P1⋀P2)∨(P6⋀P4)) ⋀ ((P2⋀P3)∨(P4⋀P5)),
 ⋯ (60 formulae in all)}
We can explicitly compute the Pascal lines (bound vectors) for the ellipse by substituting in our chosen points.
PascalLines = allPascalPointFormulae /. {Q1_, Q2_, Q3_} →
   Simplify[ToPointForm[𝒢[Q1 /. EllipsePoints]] ⋀ ToPointForm[𝒢[Q2 /. EllipsePoints]]]

[Notebook output: the list of 60 bound vectors, whose point and vector coefficients involve surds in √3, is lengthy and is omitted here.]
We can plot these 60 Pascal lines on their Pascal points. Lines normally have an infinite extent, but to make it clearer which lines correspond to which points, we have plotted them spanning just their three points, where of course one of the points may be at infinity. Thus if the three Pascal points (or vectors) of a Pascal line were Pa, Pb, and Pc, we have plotted the line segments (Pa, Pb), (Pb, Pc), and (Pc, Pa). If this is being read as a Mathematica notebook, the actual line expressions are discernible as Tooltips. However, care should be taken when attempting to correlate the expressions shown with lines, as some lines may overlap.
Sixty Pascal lines, each joining its three points or points at infinity
Finally, we plot the sixty Pascal lines and bivectors of a regular hexagon. In this case we have shown the arrowheads on the line-bound vectors and on the bivectors. The lack of apparent symmetry is an artifact caused by the vectors and bivectors showing more information (their signs, weights and orientation) than the geometric lines and 2-directions that they are being used to represent.
4.14 Summary
In this chapter we have explored the implications of interpreting one of the elements of the underlying linear space as an (origin) point, and the rest as vectors. This is a slightly different approach from that adopted by Grassmann and workers in the earlier Grassmannian tradition, which interpreted all the basis elements of the underlying linear space as points, and then treated any consequent point differences as vectors. Although the modern convention is to interpret elements of a linear space most commonly as vectors, workers in the early nineteenth century treated the point as the paramount entity of a linear space. Indeed, it was Hamilton who coined the term vector in 1853, as noted in the Historical Note at the beginning of this chapter. As we hope to have demonstrated in this chapter, the modern context is significantly poorer for ignoring the possibility of interpreting the elements of a linear space such that they can include points. We can probably trace this development back to Gibbs' early work, before he became familiar with Grassmann's work, in which he unwrapped Hamilton's quaternion operation into the dot and cross products.

Of course, this chapter involves no metric concepts other than the comparison of weights of points (their scalar factors) or the relative lengths of vectors in the same direction. This has enabled us to see what types of geometric theorems and constructions can be accomplished without the concept of a metric; or, what is equivalent, with an arbitrary metric. The notion of congruence has been central.

The next two chapters will introduce the two seminal concepts with which we will explore metric geometry: the complement and the interior product. Armed with these, we will also be able to define the notions of length and angle, and more specifically, orthogonality. Because the Grassmann algebra extends the vector algebra to higher grade entities, we will find these notions also extend naturally to higher grade entities.
5 The Complement
5.1 Introduction
Up to this point, various linear spaces and the dual exterior and regressive product operations have been introduced. The elements of these spaces were incommensurable unless they were congruent; that is, nowhere was there involved a concept of measure or magnitude of an element by which it could be compared with any other element. Thus the subject of the last chapter on Geometric Interpretations was explicitly non-metric geometry; or, to put it another way, it was what geometry is before the ability to compare or measure is added.

The question then arises: how do we associate a measure with the elements of a Grassmann algebra in a consistent way? Of course, we already know the approach that has developed over the last century, that of defining a metric tensor. In this book we will indeed define a metric tensor, but we will take an approach which develops from the concepts of the exterior product and the duality operations which are the foundations of the Grassmann algebra. This will enable us to see how the metric tensor on the underlying linear space generates metric tensors on the exterior linear spaces of higher grade; the implications of the symmetry of the metric tensor; and how to consistently generalize the notion of inner product to elements of arbitrary grade.

One of the consequences of the anti-symmetry of the exterior product of 1-elements is that the exterior linear space of m-elements has the same dimension as the exterior linear space of (n-m)-elements. We have already seen this property evidenced in the notions of duality and cobasis element. And one of the consequences of the notion of duality is that the regressive product of an m-element and an (n-m)-element is a scalar. Thus there is the opportunity of defining for each m-element a corresponding 'co-m-element' of grade n-m such that the regressive product of these two elements gives a scalar.

We will see that this scalar measures the square of the 'magnitude' of the m-element or the (n-m)-element, and corresponds to the inner product of either of them with itself. We will also see that the notion of orthogonality is defined by the correspondence between m-elements and their 'co-m-elements'. But most importantly, the definition of this inner product as a regressive product of an m-element with a 'co-m-element' is immediately generalizable to elements of arbitrary grade, thus permitting a theory of interior products to be developed which is consistent with the exterior and regressive product axioms and which, via the notion of 'co-m-element', leads to explicit and easily derived formulae between elements of arbitrary (and possibly different) grade.

The foundation of the notion of measure or metric, then, is the notion of 'co-m-element'. In this book we use the term complement rather than 'co-m-element'. In this chapter we will develop the notion of complement in preparation for the development of the notions of interior product and orthogonality in the next chapter. The complement of an element is denoted with a horizontal bar over the element. For example, the complements of $a$, $x + y$, and $x \wedge y$ are denoted $\overline{a}$, $\overline{x + y}$, and $\overline{x \wedge y}$.

Finally, it should be noted that the term 'complement' may be used either to refer to an operation (the operation of taking the complement of an element), or to the element itself (which is the result of the operation).
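The dimensional coincidence that underlies this duality is just the symmetry of the binomial coefficients: the space of m-elements over an n-space has dimension C(n, m), which equals C(n, n-m), the dimension of the space of (n-m)-elements. A one-line check (Python here, standing outside the text's Mathematica):

```python
from math import comb

# dim of the exterior linear space of m-elements over an n-space is C(n, m)
for n in range(1, 9):
    for m in range(n + 1):
        assert comb(n, m) == comb(n, n - m)
print("dim of m-elements equals dim of (n-m)-elements for all tested n")
```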
Historical Note
Grassmann introduced the notion of complement (Ergänzung) into the Ausdehnungslehre of 1862 [Grassmann 1862]. He denoted the complement of an element x by preceding it with a vertical bar, viz |x. For mnemonic reasons, particularly in the derivation of formulas using the Complement Axiom, the notation used for the complement in this book is rather the horizontal bar: $\overline{x}$.

In discussing the complement, Grassmann defines the product of the n basis elements (the basis n-element) to be unity. That is, $[e_1 e_2 \cdots e_n] = 1$ or, in the present notation, $e_1 \wedge e_2 \wedge \cdots \wedge e_n = 1$. Since Grassmann discussed only the Euclidean complement (equivalent to imposing a Euclidean metric $g_{ij} = \delta_{ij}$), this statement in the present notation is equivalent to $\overline{1} = 1$. The introduction of such an identity, however, destroys the essential duality between $\Lambda_m$ and $\Lambda_{n-m}$, which requires rather the identity

$$\overline{1} \;\equiv\; e_1 \wedge e_2 \wedge \cdots \wedge e_n \qquad (5.1)$$
The grade of the complement of an element is the complementary grade of the element. (The complementary grade of an m-element, in an algebra whose underlying linear space has dimension n, is n-m.)

$$\operatorname{grade}\!\left(\overline{\underset{m}{\alpha}}\right) \;=\; n - m \qquad (5.2)$$
For scalars a and b, the complement of a sum of elements (perhaps of different grades) is the sum of the complements of the elements, and the complement of a scalar multiple of an element is the scalar multiple of the complement of the element.

$$\overline{\underset{m}{\alpha} \wedge \underset{k}{\beta}} \;\equiv\; \overline{\underset{m}{\alpha}} \vee \overline{\underset{k}{\beta}} \qquad (5.3)$$

$$\overline{\underset{m}{\alpha} \vee \underset{k}{\beta}} \;\equiv\; \overline{\underset{m}{\alpha}} \wedge \overline{\underset{k}{\beta}} \qquad (5.4)$$
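As a concrete instance of expression 5.3, take the Euclidean complement in a 3-space (tabulated later in this chapter), for which $\overline{e_1} = e_2 \wedge e_3$ and $\overline{e_2} = -(e_1 \wedge e_3)$. Using the common factor manipulation of Chapter 3 and the unit property $\overline{1} \vee \alpha = \alpha$, the right-hand side of 5.3 reduces to the expected complement of $e_1 \wedge e_2$:

```latex
\overline{e_1 \wedge e_2}
  \;\equiv\; \overline{e_1} \vee \overline{e_2}
  \;\equiv\; (e_2 \wedge e_3) \vee \bigl(-(e_1 \wedge e_3)\bigr)
  \;\equiv\; -\,(e_2 \wedge e_1 \wedge e_3) \vee e_3
  \;\equiv\; (e_1 \wedge e_2 \wedge e_3) \vee e_3
  \;\equiv\; e_3
```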
Note that for the terms on each side of expression 5.3 to be non-zero we require m+k ≤ n, while in expression 5.4 we require m+k ≥ n. Expressions 5.3 and 5.4 are duals of each other. We call these dual expressions the complement axiom. Note its enticing similarity to De Morgan's law in Boolean algebra. If we wish, we can confirm this duality by applying the GrassmannAlgebra function Dual.

Dual[$\overline{\underset{m}{\alpha} \wedge \underset{k}{\beta}} \equiv \overline{\underset{m}{\alpha}} \vee \overline{\underset{k}{\beta}}$]

$\overline{\underset{m}{\alpha} \vee \underset{k}{\beta}} \equiv \overline{\underset{m}{\alpha}} \wedge \overline{\underset{k}{\beta}}$
This axiom is of central importance in the development of the properties of the complement and the interior, inner and scalar products, and of the formulae relating these with exterior and regressive products. In particular, it permits us to consistently generate the complements of basis m-elements from the complements of basis 1-elements, and hence, via the linearity axiom, the complements of arbitrary elements.

The forms 5.3 and 5.4 may be written for any number of elements. To see this, let $\underset{k}{\beta} \equiv \underset{p}{\gamma} \wedge \underset{q}{\delta}$; then

$$\overline{\underset{m}{\alpha} \wedge \underset{p}{\gamma} \wedge \underset{q}{\delta}} \;\equiv\; \overline{\underset{m}{\alpha}} \vee \overline{\underset{p}{\gamma} \wedge \underset{q}{\delta}} \;\equiv\; \overline{\underset{m}{\alpha}} \vee \overline{\underset{p}{\gamma}} \vee \overline{\underset{q}{\delta}}$$

In general, then, expressions 5.3 and 5.4 may be stated in the equivalent forms:

$$\overline{\underset{m}{\alpha} \wedge \underset{k}{\beta} \wedge \cdots \wedge \underset{p}{\gamma}} \;\equiv\; \overline{\underset{m}{\alpha}} \vee \overline{\underset{k}{\beta}} \vee \cdots \vee \overline{\underset{p}{\gamma}} \qquad (5.5)$$

$$\overline{\underset{m}{\alpha} \vee \underset{k}{\beta} \vee \cdots \vee \underset{p}{\gamma}} \;\equiv\; \overline{\underset{m}{\alpha}} \wedge \overline{\underset{k}{\beta}} \wedge \cdots \wedge \overline{\underset{p}{\gamma}} \qquad (5.6)$$
In Grassmann's work, this axiom was hidden in his notation. However, since modern notation explicitly distinguishes the progressive and regressive products, the axiom needs to be explicitly stated.

The next requirement is the complement of a complement axiom: the complement of the complement of an element is, apart from a possible sign, the element itself.

$$\overline{\overline{\underset{m}{\alpha}}} \;\equiv\; (-1)^{\,f(m,n)}\; \underset{m}{\alpha} \qquad (5.7)$$
Axiom 1 says that the complement of an m-element is an (n-m)-element. Clearly, then, the complement of an (n-m)-element is an m-element; thus the complement of the complement of an m-element is itself an m-element. In the interests of symmetry and simplicity we will require that the complement of the complement of an element is equal, apart from a possible sign, to the element itself. Although consistent algebras could no doubt be developed by rejecting this axiom, it will turn out to be an essential underpinning to the development of the standard Riemannian metric concepts to which we are accustomed. For example, its satisfaction will require the metric tensor to be symmetric. It will turn out, again in compliance with standard results, that the index f(m,n) is m(n-m). But this result is more in the nature of a theorem than an axiom. We discuss this further in Section 5.6. Although this text does not consider pseudo-Riemannian metrics, the equivalent value for the index f would in that case turn out to be m(n-m)+s, where s is the signature of the metric.
In the discussions and non-Mathematica derivations that follow in this chapter, it turns out to be more notationally convenient to use the inverse of this factor, which we denote $\square$, and rewrite this requirement as

$$\overline{1} \;\equiv\; \underset{n}{1} \;\equiv\; \square\;\left(e_1 \wedge e_2 \wedge \cdots \wedge e_n\right) \qquad (5.8)$$
This is the axiom which finally enables us to define the unit n-element. It requires that in a metric space the unit n-element be identical to the complement of unity. The hitherto unspecified scalar constant $\square$ may now be determined from the specific complement mapping or metric under consideration.

The dual of this axiom is $\overline{\underset{n}{1}} \equiv 1$, since $1$ and $\underset{n}{1}$ are duals. We can confirm this by applying the Dual function. (Note that Dual requires us to unambiguously specify that n is the dimension of the space concerned by writing the unit n-element as $\underset{n}{1}$.)

Dual[$\overline{1} \equiv \underset{n}{1}$]

$\overline{\underset{n}{1}} \equiv 1$

Hence taking the complement of expression 5.8 and using this dual axiom tells us that the complement of the complement of unity is unity. This result clearly complies with axiom 4.

$$\overline{\overline{1}} \;\equiv\; \overline{\underset{n}{1}} \;\equiv\; 1 \qquad (5.9)$$
We can now write the unit n-element as $\overline{1}$ instead of $\underset{n}{1}$ in any Grassmann algebra that has a metric. For example, in a metric space $\overline{1}$ becomes the unit for the regressive product:

$$\underset{m}{\alpha} \;\equiv\; \overline{1} \vee \underset{m}{\alpha} \qquad (5.10)$$
To compute the complement of an arbitrary element expressed in terms of basis m-elements, the linearity axiom lets us work basis element by basis element:

$$\underset{m}{\alpha} \;\equiv\; \sum_i a_i\, \underset{m}{e_i} \qquad\Longrightarrow\qquad \overline{\underset{m}{\alpha}} \;\equiv\; \sum_i a_i\, \overline{\underset{m}{e_i}} \qquad (5.11)$$
The complement axiom enables us to define the complement of a basis m-element in terms only of the complements of basis 1-elements:

$$\overline{e_1 \wedge e_2 \wedge \cdots \wedge e_m} \;\equiv\; \overline{e_1} \vee \overline{e_2} \vee \cdots \vee \overline{e_m} \qquad (5.12)$$

Thus, in order to define the complement of any element in a Grassmann algebra, we need only define the complements of the basis 1-elements, that is, the correspondence between basis 1-elements and basis (n-1)-elements.
However, it will be much more notationally convenient to define the complements of basis 1-elements as a linear combination of their cobasis elements. (For a definition of cobasis elements, see Chapter 2: The Exterior Product.) For example, in three dimensions we might put

$$\overline{e_1} \;\equiv\; a_{11}\,\underline{e_1} + a_{12}\,\underline{e_2} + a_{13}\,\underline{e_3}$$

Substituting for the cobasis symbols, we see that we get a somewhat notationally different, though equivalent, form to the definition with which we began:

$$\overline{e_1} \;\equiv\; a_{11}\,(e_2 \wedge e_3) + a_{12}\,\bigl(-(e_1 \wedge e_3)\bigr) + a_{13}\,(e_1 \wedge e_2)$$

In general, then, we write

$$\overline{e_i} \;\equiv\; \sum_{j=1}^{n} a_{ij}\,\underline{e_j}$$

For reasons which the reader will perhaps discern from the choice of notation involving $g_{ij}$, we extract the scalar factor $\square$ as an explicit factor from the coefficients of the complement mapping. We assume, for simplicity, that $\square$ is positive. It will turn out to be the reciprocal of the 'volume' (measure) of the basis n-element.

$$\overline{e_i} \;\equiv\; \square \sum_{j=1}^{n} g_{ij}\,\underline{e_j} \qquad (5.13)$$

This, then, is the form in which we will define the complement of basis 1-elements. The scalar $\square$ and the scalars $g_{ij}$ are at this point otherwise entirely arbitrary. However, we must still ensure that our definition satisfies the complement of a complement axiom 4. We will see that in order to satisfy it, some constraints will need to be imposed on $\square$ and the $g_{ij}$. Our task in the ensuing sections is to determine these constraints. We begin by determining $\square$ in terms of the $g_{ij}$.
We begin with the complement axiom applied to the basis n-element, and then use the definition of the complement of a 1-element in terms of cobasis elements (formula 5.13):

$$\overline{e_1 \wedge e_2 \wedge \cdots \wedge e_n} \;\equiv\; \overline{e_1} \vee \overline{e_2} \vee \cdots \vee \overline{e_n} \;\equiv\; \square^{\,n} \left(\sum_{j=1}^{n} g_{1j}\,\underline{e_j}\right) \vee \left(\sum_{j=1}^{n} g_{2j}\,\underline{e_j}\right) \vee \cdots \vee \left(\sum_{j=1}^{n} g_{nj}\,\underline{e_j}\right)$$

Now because $\underline{e_i} \vee \underline{e_j}$ is equal to $-\,\underline{e_j} \vee \underline{e_i}$ (see the axioms for the regressive product), just as $e_i \wedge e_j$ is equal to $-\,e_j \wedge e_i$, we can rewrite this regressive product (see the section on determinants in Chapter 2: The Exterior Product) as

$$\equiv\;\; \square^{\,n}\; \bigl|\,g_{ij}\,\bigr|\;\; \underline{e_1} \vee \underline{e_2} \vee \cdots \vee \underline{e_n}$$

The symbol $|g_{ij}|$ denotes the determinant of the $g_{ij}$. In Chapter 3: The Regressive Product, we explore the regressive product of cobasis elements. Formula 3.41 is useful here:

$$\underline{e_1} \vee \underline{e_2} \vee \cdots \vee \underline{e_m} \;\equiv\; c^{\,m-1}\;\; \underline{e_1 \wedge e_2 \wedge \cdots \wedge e_m}$$

Applying this with m = n, and noting that the cobasis element of the basis n-element $e_1 \wedge e_2 \wedge \cdots \wedge e_n$ is unity, gives $\underline{e_1} \vee \cdots \vee \underline{e_n} \equiv c^{\,n-1} = 1/\square^{\,n-1}$, whence

$$\overline{e_1 \wedge e_2 \wedge \cdots \wedge e_n} \;\equiv\; \square\; \bigl|\,g_{ij}\,\bigr|$$
Example in 2 dimensions

$$\overline{\overline{1}} \;\equiv\; \square\; \overline{e_1 \wedge e_2} \;\equiv\; \square\; \overline{e_1} \vee \overline{e_2}$$

$$\overline{e_1} \vee \overline{e_2} \;\equiv\; \square^2 \left(g_{11}\,\underline{e_1} + g_{12}\,\underline{e_2}\right) \vee \left(g_{21}\,\underline{e_1} + g_{22}\,\underline{e_2}\right) \;\equiv\; \square^2 \left(g_{11}\, g_{22} - g_{12}\, g_{21}\right)\; \underline{e_1} \vee \underline{e_2} \;\equiv\; \square\; \bigl|\,g_{ij}\,\bigr|$$

so that $\overline{\overline{1}} \equiv \square^2\, |g_{ij}|$, and the requirement $\overline{\overline{1}} \equiv 1$ gives $\square^2\, |g_{ij}| = 1$.

Example in 3 dimensions

The same computation with three factors gives

$$\overline{e_1} \vee \overline{e_2} \vee \overline{e_3} \;\equiv\; \square^3 \left(\sum_j g_{1j}\,\underline{e_j}\right) \vee \left(\sum_j g_{2j}\,\underline{e_j}\right) \vee \left(\sum_j g_{3j}\,\underline{e_j}\right) \;\equiv\; \square^3\, \bigl|\,g_{ij}\,\bigr|\;\; \underline{e_1} \vee \underline{e_2} \vee \underline{e_3}$$

and, since $\underline{e_1} \vee \underline{e_2} \vee \underline{e_3} \equiv 1/\square^2$, the requirement again reduces to $\square^2\, |g_{ij}| = 1$.
We have defined the complement of a basis 1-element by

$$\overline{e_i} \;\equiv\; \square \sum_{j=1}^{n} g_{ij}\,\underline{e_j}$$

The previous section showed that, in order for this definition to conform to axiom 5 ($\overline{1} \equiv \underset{n}{1}$), the scalar $\square$ must be related to the determinant of the scalar coefficients $g_{ij}$ by

$$\square^2\, \bigl|\,g_{ij}\,\bigr| \;=\; 1$$

Although we have not yet identified the $g_{ij}$ with the metric tensor, our assumption in this text of considering only Riemannian metrics (rather than pseudo-Riemannian metrics, for example) permits us now to further suppose that the determinant of the $g_{ij}$ is positive. Combining this with our earlier assumption that $\square$ is positive enables us to write

$$\square \;=\; \frac{1}{\sqrt{\bigl|\,g_{ij}\,\bigr|}} \qquad (5.14)$$

Hence, from this point on, we take the value of $\square$ to be the reciprocal of the positive square root of the determinant of the coefficients $g_{ij}$ of the complement mapping:

$$\overline{e_i} \;\equiv\; \frac{1}{\sqrt{\bigl|\,g_{ij}\,\bigr|}} \sum_{j=1}^{n} g_{ij}\,\underline{e_j}$$
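To see this machinery at work in the plane (n = 2, where the cobasis elements of e1 and e2 are e2 and -e1), we can write the complement mapping of 5.13 as a 2×2 matrix C, with the scalar factor of 5.14, and check numerically that applying it twice returns minus the identity: the sign (-1)^{m(n-m)} = -1 predicted for 1-elements in a 2-space. This is a sketch in Python (not the text's Mathematica), assuming a symmetric positive-definite array g:

```python
import numpy as np

# A symmetric positive-definite metric array g_ij on a 2-space
g = np.array([[2.0, 0.7],
              [0.7, 1.5]])
box = 1.0 / np.sqrt(np.linalg.det(g))  # the scalar factor of 5.14

# Complement per 5.13 with cobasis(e1) = e2 and cobasis(e2) = -e1:
#   e1bar = box*(g11*e2 - g12*e1),  e2bar = box*(g21*e2 - g22*e1).
# Columns of C are the complements of e1, e2 in the (e1, e2) basis.
C = box * np.array([[-g[0, 1], -g[1, 1]],
                    [ g[0, 0],  g[1, 0]]])

# Complement of a complement: C@C should be (-1)^{1*(2-1)} times the identity
assert np.allclose(C @ C, -np.eye(2))
print("complement of complement of a vector in 2-space is minus the vector")
```

Repeating the run with a non-symmetric g breaks the assertion, which previews the symmetry requirement derived in Section 5.6.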
Complement Palette (2-space)

  Basis        Complement
  1            e1⋀e2
  e1           e2
  e2           -e1
  e1⋀e2        1

Complement Palette (3-space)

  Basis        Complement
  1            e1⋀e2⋀e3
  e1           e2⋀e3
  e2           -(e1⋀e3)
  e3           e1⋀e2
  e1⋀e2        e3
  e1⋀e3        -e2
  e2⋀e3        e1
  e1⋀e2⋀e3     1

Complement Palette (4-space)

  Basis          Complement
  1              e1⋀e2⋀e3⋀e4
  e1             e2⋀e3⋀e4
  e2             -(e1⋀e3⋀e4)
  e3             e1⋀e2⋀e4
  e4             -(e1⋀e2⋀e3)
  e1⋀e2          e3⋀e4
  e1⋀e3          -(e2⋀e4)
  e1⋀e4          e2⋀e3
  e2⋀e3          e1⋀e4
  e2⋀e4          -(e1⋀e3)
  e3⋀e4          e1⋀e2
  e1⋀e2⋀e3       e4
  e1⋀e2⋀e4       -e3
  e1⋀e3⋀e4       e2
  e2⋀e3⋀e4       -e1
  e1⋀e2⋀e3⋀e4    1
In these palettes we can see a simple pattern: the complement of a basis 1-element is, up to sign, the exterior product of the remaining basis elements, that is, its cobasis element:

$$\overline{e_i} \;\equiv\; \underline{e_i} \;\equiv\; (-1)^{\,i-1}\; e_1 \wedge \cdots \wedge \check{e}_i \wedge \cdots \wedge e_n \qquad (5.15)$$

This simple relationship extends naturally to complements of basis elements of any grade:

$$\overline{\underset{m}{e_i}} \;\equiv\; \underline{\underset{m}{e_i}} \qquad (5.16)$$

$$\overline{e_{i_1} \wedge \cdots \wedge e_{i_m}} \;\equiv\; (-1)^{K_m}\; e_1 \wedge \cdots \wedge \check{e}_{i_1} \wedge \cdots \wedge \check{e}_{i_m} \wedge \cdots \wedge e_n, \qquad K_m \;=\; \sum_{\gamma=1}^{m} i_\gamma + \tfrac{1}{2}\, m\,(m+1) \qquad (5.17)$$

where the symbol ˇ means that the corresponding element is missing from the product. In particular, the unit n-element is now the basis n-element:

$$\overline{1} \;\equiv\; e_1 \wedge e_2 \wedge \cdots \wedge e_n \qquad (5.18)$$

and the complement of the basis n-element is just unity:

$$\overline{e_1 \wedge e_2 \wedge \cdots \wedge e_n} \;\equiv\; 1 \qquad (5.19)$$

This simple correspondence is that of a Euclidean complement, defined by a Euclidean metric, with the consequence that $\square = 1$:

$$g_{ij} \;=\; \delta_{ij}, \qquad \square \;=\; 1 \qquad (5.20)$$
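The sign rule 5.17 is easy to mechanize. The following sketch (Python, with 1-based indices as in the text) computes the Euclidean complement of a basis m-element and reproduces entries of the 3-space palette above:

```python
def euclidean_complement(indices, n):
    """Euclidean complement of e_i1 ^ ... ^ e_im in an n-space.

    `indices` is a sorted tuple of 1-based indices. Returns (sign, missing),
    where the complement is sign times the exterior product of the basis
    elements whose indices are in `missing`, per formula 5.17.
    """
    m = len(indices)
    K = sum(indices) + m * (m + 1) // 2
    missing = tuple(i for i in range(1, n + 1) if i not in indices)
    return (-1) ** K, missing

# Reproduce the 3-space palette: e.g. the complement of e1^e3 is -e2
assert euclidean_complement((1, 3), 3) == (-1, (2,))
assert euclidean_complement((2,), 3) == (-1, (1, 3))
assert euclidean_complement((1, 2, 3), 3) == (1, ())
print(euclidean_complement((1,), 3))  # (1, (2, 3)): complement of e1 is e2^e3
```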
Finally, we can see that exterior and regressive products of basis elements with complements take on particularly simple forms:

$$\underset{m}{e_i} \wedge \overline{\underset{m}{e_j}} \;\equiv\; \delta_{ij}\; \overline{1} \qquad (5.21)$$

$$\underset{m}{e_i} \vee \overline{\underset{m}{e_j}} \;\equiv\; \delta_{ij} \qquad (5.22)$$

These forms will be the basis for the definition of the inner product in the next chapter.

We note that the concept of cobasis, which we introduced in Chapter 2: The Exterior Product, despite its formal similarity to the Euclidean complement, is only a notational convenience. We do not define it for linear combinations of elements, as the definition of a complement requires.
Consider two elements expressed in terms of basis m-elements, $\underset{m}{\alpha} \equiv \sum_i a_i\, \underset{m}{e_i}$ and $\underset{m}{\beta} \equiv \sum_i b_i\, \underset{m}{e_i}$, so that $\overline{\underset{m}{\alpha}} \equiv \sum_i a_i\, \overline{\underset{m}{e_i}}$ and $\overline{\underset{m}{\beta}} \equiv \sum_i b_i\, \overline{\underset{m}{e_i}}$. Then formula 5.21 gives

$$\underset{m}{\alpha} \wedge \overline{\underset{m}{\beta}} \;\equiv\; \left(\sum_i a_i\, \underset{m}{e_i}\right) \wedge \left(\sum_k b_k\, \overline{\underset{m}{e_k}}\right) \;\equiv\; \left(\sum_i a_i\, b_i\right) \overline{1} \qquad (5.23)$$

Taking the complement of this formula, or else repeating the derivation mutatis mutandis with formula 5.22, leads to the regressive product form:

$$\underset{m}{\alpha} \vee \overline{\underset{m}{\beta}} \;\equiv\; \sum_i a_i\, b_i \qquad (5.24)$$

Comparing the two shows that

$$\underset{m}{\alpha} \wedge \overline{\underset{m}{\beta}} \;\equiv\; \left(\underset{m}{\alpha} \vee \overline{\underset{m}{\beta}}\right) \overline{1} \qquad (5.25)$$

Putting $\underset{m}{\beta}$ equal to $\underset{m}{\alpha}$ gives the special cases

$$\underset{m}{\alpha} \wedge \overline{\underset{m}{\alpha}} \;\equiv\; \left(\sum_i a_i^{\,2}\right) \overline{1} \qquad (5.26)$$

$$\underset{m}{\alpha} \vee \overline{\underset{m}{\alpha}} \;\equiv\; \sum_i a_i^{\,2} \qquad (5.27)$$

For 1-elements $x \equiv \sum_i a_i\, e_i$ and $y \equiv \sum_i b_i\, e_i$, these become

$$x \wedge \overline{y} \;\equiv\; \left(\sum_i a_i\, b_i\right) \overline{1}, \qquad x \vee \overline{y} \;\equiv\; \sum_i a_i\, b_i \qquad (5.28)$$
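For 1-elements in a Euclidean 3-space this is easy to check by brute force: represent an element by its coefficient triple, expand $x \wedge \overline{y}$ term by term, and compare the resulting coefficient of $e_1 \wedge e_2 \wedge e_3$ (that is, of $\overline{1}$) with $\sum_i a_i b_i$. This is a sketch in Python; the complement signs are those of the 3-space palette:

```python
import numpy as np

# Complements of e1, e2, e3 in Euclidean 3-space, written as
# (sign, sorted index pair): e1bar = e2^e3, e2bar = -(e1^e3), e3bar = e1^e2
COMP = {1: (+1, (2, 3)), 2: (-1, (1, 3)), 3: (+1, (1, 2))}

def wedge_coeff(a, b):
    """Coefficient of e1^e2^e3 in x ^ ybar, for x = sum a_i e_i, y = sum b_i e_i."""
    total = 0.0
    for i in range(1, 4):
        s, pair = COMP[i]
        if i in pair:
            continue  # a repeated factor makes the term vanish
        # sign needed to sort e_i into e_pair as e1^e2^e3
        swaps = sum(1 for j in pair if j < i)
        total += a[i - 1] * b[i - 1] * s * (-1) ** swaps
    return total

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)
assert np.isclose(wedge_coeff(a, b), a @ b)  # x ^ ybar == (sum a_i b_i) 1bar
print("5.28 verified for random 1-elements")
```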
The complement of a scalar can then be expressed in any of the following forms:

$$\overline{a} \;\equiv\; a\;\overline{1} \qquad (5.29)$$

$$\overline{a} \;\equiv\; a\;\underset{n}{1} \qquad (5.30)$$

$$\overline{a} \;\equiv\; a\;\square\;\left(e_1 \wedge e_2 \wedge \cdots \wedge e_n\right) \qquad (5.31)$$
Orthogonality

The specific complement mapping that we impose on a Grassmann algebra will define the notion of orthogonality for that algebra. A simple element and its complement will be referred to as being orthogonal to each other. In standard linear space terminology, the space of a simple element and the space of its complement are said to be orthogonal complements of each other. This orthogonality is total: every 1-element in a given simple m-element is orthogonal to every 1-element in the complement of the m-element.

In the special case of a 2-dimensional vector space, the complement $\overline{x}$ of a vector x is itself a vector, orthogonal to x. In a vector three-space, the complement of a vector is a bivector; and of course the complement of a bivector is a vector.
More generally, we can say that two simple elements $\alpha$ and $\beta$ of the same grade are orthogonal if and only if the exterior product of one with the complement of the other is zero:

$$\alpha \wedge \overline{\beta} \;\equiv\; 0 \qquad\Longleftrightarrow\qquad \beta \wedge \overline{\alpha} \;\equiv\; 0 \qquad (5.32)$$

By taking the complement of these equations, and applying both the complement axiom and the complement of a complement axiom, we see that we can equally say that two simple elements of the same grade are orthogonal if and only if the regressive product of one with the complement of the other is zero:

$$\alpha \vee \overline{\beta} \;\equiv\; 0 \qquad\Longleftrightarrow\qquad \beta \vee \overline{\alpha} \;\equiv\; 0 \qquad (5.33)$$
Taking the complement of both sides of the complement axiom, noting that the grade of $\underset{m}{\alpha} \vee \underset{k}{\beta}$ is m+k-n, and applying the complement of a complement axiom, we find that any regressive product can be written as the complement of an exterior product of complements:

$$\underset{m}{\alpha} \vee \underset{k}{\beta} \;\equiv\; (-1)^{\,f(m+k-n,\,n)}\;\; \overline{\overline{\underset{m}{\alpha}} \wedge \overline{\underset{k}{\beta}}} \qquad (5.34)$$

In Section 5.6 below, we will show that f(m,n) is equal to m(n-m), so that f(m+k-n, n) becomes (m+k-n)(n-(m+k-n)), which, as an exponent of -1, simplifies to (m+k)(m+k-n). For reference we include this version of the formula here, although it has not yet been derived:

$$\underset{m}{\alpha} \vee \underset{k}{\beta} \;\equiv\; (-1)^{\,(m+k)(m+k-n)}\;\; \overline{\overline{\underset{m}{\alpha}} \wedge \overline{\underset{k}{\beta}}} \qquad (5.35)$$
The expression of a regressive product in terms of an exterior product and a double application of the complement operation poses an interesting conundrum. On the left hand side, there is a product which is defined in a completely non-metric way, and is thus independent of any metric imposed on the space. On the right hand side we have a double application of the complement operation, each of which requires a metric. This leads us to the notion that the complement operation applied twice in this way is independent of any metric used to define it. A second application of the complement operation effectively 'cancels out' the metric elements introduced during the first application.
Consider now the products of an m-element with the complement of an element of the same grade. By formula 5.25,

$$\underset{m}{\alpha} \wedge \overline{\underset{m}{\beta}} \;\equiv\; \left(\underset{m}{\alpha} \vee \overline{\underset{m}{\beta}}\right)\,\overline{1}$$

Taking the complement of the left-hand side, and applying the complement axiom, the complement of a complement axiom, and the quasi-commutativity of the regressive product, gives

$$\overline{\underset{m}{\alpha} \wedge \overline{\underset{m}{\beta}}} \;\equiv\; \overline{\underset{m}{\alpha}} \vee \overline{\overline{\underset{m}{\beta}}} \;\equiv\; (-1)^{\,f(m,n)}\;\; \overline{\underset{m}{\alpha}} \vee \underset{m}{\beta} \;\equiv\; (-1)^{\,m(n-m)+f(m,n)}\;\; \underset{m}{\beta} \vee \overline{\underset{m}{\alpha}}$$

We will see in the following chapter how this formula is the foundation of the definition of the inner product. It shows how a scalar may be obtained from the regressive product of an element with the complement of an element of the same grade. It also shows the 'equivalent' definition in terms of the exterior product. If f(m,n) is equal to m(n-m), as we will soon show, then the last formula simplifies, giving

$$\underset{m}{\alpha} \wedge \overline{\underset{m}{\beta}} \;\equiv\; \underset{m}{\beta} \wedge \overline{\underset{m}{\alpha}}, \qquad \underset{m}{\alpha} \vee \overline{\underset{m}{\beta}} \;\equiv\; \underset{m}{\beta} \vee \overline{\underset{m}{\alpha}} \qquad (5.36)$$
To find the function f(m, n), we proceed to calculate the complement of the complement of an m-element. What we also discover along the way is that, for a formula like this to be valid, the coefficients $g_{ij}$ of our metric mapping must also be symmetric. And, as we have already previewed in Section 5.5 above, f(m, n) must be equal to m(n-m).
ei ! gij ej
j=1
to give
n
ei ! gij ej
j=1
In order to determine ei , we need to obtain an expression for the complement of a cobasis element of a 1-element ej . Note that whereas the complement of a cobasis element is defined, the cobasis element of a complement is not. To simplify the development we consider, without loss of generality, a specific basis element e1 . First we express the cobasis element as a basis (n|1)-element, and then use the complement axiom to express the right-hand side as a regressive product of complements of 1-elements.
\[ \overline{\underline{e_1}} \;=\; \overline{e_2\wedge e_3\wedge\cdots\wedge e_n} \;=\; \overline{e_2}\vee\overline{e_3}\vee\cdots\vee\overline{e_n} \]

Expanding each complement on the right and collecting terms gives

\[ \overline{\underline{e_1}} \;=\; \kappa^{\,n-1}\sum_{j=1}^{n} g^{1j}\,(-1)^{\,j-1}\;\underline{e_1}\vee\underline{e_2}\vee\cdots\check{\jmath}\cdots\vee\underline{e_n} \]

In this expression, the notation \check{\jmath} means that the jth factor has been omitted. The scalars g^{1j} are the cofactors of g_{1j} in the array g_{ij}. Further discussion of cofactors, and how they result from such products of (n-1)-elements, may be found in Chapter 2. The results for regressive products are identical mutatis mutandis to those for the exterior product. By equation 3.42, regressive products of the form above simplify to a scalar multiple of the missing element. (Remember that for the purposes of this chapter the constant \kappa is equal to 1/\sqrt{g}.)

\[ (-1)^{\,j-1}\;\underline{e_1}\vee\underline{e_2}\vee\cdots\check{\jmath}\cdots\vee\underline{e_n} \;=\; \frac{1}{\kappa^{\,n-2}}\,(-1)^{\,n-1}\,e_j \]
\[ \overline{\underline{e_1}} \;=\; (-1)^{\,n-1}\,\kappa\sum_{j=1}^{n} g^{1j}\,e_j \]

The final step is to see that the actual basis element chosen (e_1) was, as expected, of no significance in determining the final form of the formula. We thus have the more general result:

\[ \overline{\underline{e_i}} \;=\; (-1)^{\,n-1}\,\kappa\sum_{j=1}^{n} g^{ij}\,e_j \qquad (5.37) \]
Thus we have determined the complement of the cobasis element of a basis 1-element as a specific 1-element. The coefficients of this 1-element are the products of the scalar \kappa with cofactors of the g_{ij}, and a possible sign depending on the dimension of the space. In matrix formulation, this can be written as
\[ \overline{\underline{B}} \;=\; (-1)^{\,n-1}\,\kappa\;G^{\mathrm{c}}\,B \]

where B is a column matrix of basis vectors, and G^{c} is the matrix of cofactors of elements of G. Our major use of this formula is to derive an expression for the complement of a complement of a basis element. This we do below.
\[ \overline{e_i} \;=\; \kappa\sum_{j=1}^{n} g_{ij}\,\underline{e_j} \qquad\qquad \overline{\overline{e_i}} \;=\; \kappa\sum_{j=1}^{n} g_{ij}\,\overline{\underline{e_j}} \]

We can now substitute for the \overline{\underline{e_j}} from the formula derived in the previous section to get:

\[ \overline{\overline{e_i}} \;=\; (-1)^{\,n-1}\,\kappa^{2}\sum_{j=1}^{n}\sum_{k=1}^{n} g_{ij}\,g^{jk}\,e_k \]
\[ \overline{\overline{B}} \;=\; (-1)^{\,n-1}\,\frac{1}{g}\;G\,G^{\mathrm{c}}\,B \]

In Chapter 2 the sum over j of the terms g_{ij} g^{kj} was shown to be equal to the determinant g of the g_{ij} whenever i equals k, and zero otherwise. That is:

\[ \sum_{j=1}^{n} g_{ij}\,g^{kj} \;=\; g\,\delta_{ik} \]

Note carefully that in this sum, the order of the subscripts is reversed in the cofactor term compared to that in the expression for \overline{\overline{e_i}}. Thus we conclude that if and only if the array g_{ij} is symmetric, that is g_{ij} = g_{ji}, can we express the complement of a complement of a basis 1-element in terms only of itself and no other basis element.

\[ \overline{\overline{e_i}} \;=\; (-1)^{\,n-1}\,\kappa^{2}\,g\sum_{k=1}^{n}\delta_{ik}\,e_k \;=\; (-1)^{\,n-1}\,\kappa^{2}\,g\;e_i \]

Furthermore, since we have already shown in Section 5.3 that \kappa^{2}\,g = 1, we also have that:

\[ \overline{\overline{e_i}} \;=\; (-1)^{\,n-1}\,e_i \]

In sum: in order to satisfy the complement of a complement axiom for m-elements it is necessary that g_{ij} = g_{ji}. For 1-elements it is also sufficient. Below we shall show that the symmetry of the g_{ij} is also sufficient for m-elements.
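The cofactor identity used in this derivation is easy to spot-check numerically. The following Python sketch (ours, not part of the GrassmannAlgebra package; the matrix is an arbitrary symmetric example) verifies that the sum over j of g_{ij} g^{kj} is g when i equals k and zero otherwise:

```python
# Numerical check of the "alien cofactor" identity:
# sum_j g[i][j] * cofactor(g, k, j) == det(g) if i == k, else 0.
def det(m):
    # Laplace expansion along the first row (fine for small matrices).
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

def minor(m, i, j):
    return [row[:j] + row[j + 1:] for r, row in enumerate(m) if r != i]

def cofactor(m, i, j):
    return (-1) ** (i + j) * det(minor(m, i, j))

g = [[2, 1, 0], [1, 1, 0], [0, 0, 3]]   # a symmetric metric
n, d = len(g), det(g)
for i in range(n):
    for k in range(n):
        s = sum(g[i][j] * cofactor(g, k, j) for j in range(n))
        assert s == (d if i == k else 0)
print(d)  # 3
```

For a non-symmetric array the identity with the subscripts in this order fails, which is exactly the source of the symmetry requirement above.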
But of course the form of this result is valid for any basis m-element \underset{m}{e_i}. Writing (-1)^{m(n-1)} in the equivalent form (-1)^{m(n-m)} (since the parity of m(n-1) is equal to the parity of m(n-m)), we obtain that for any basis element of any grade:

\[ \overline{\overline{\underset{m}{e_i}}} \;=\; (-1)^{\,m(n-m)}\;\underset{m}{e_i} \qquad (5.38) \]
Finally then we have shown that, provided the complement mapping g_{ij} is symmetric and the constant \kappa is such that \kappa^{2}\,g = 1, the complement of a complement axiom is satisfied by an otherwise arbitrary mapping, with sign (-1)^{m(n-m)}.

\[ \overline{\overline{\underset{m}{a}}} \;=\; (-1)^{\,m(n-m)}\;\underset{m}{a} \qquad (5.39) \]
Special cases

The complement of the complement of a scalar is the scalar itself.

\[ \overline{\overline{a}} \;=\; a \qquad (5.40) \]

The complement of the complement of an n-element is the n-element itself.

\[ \overline{\overline{\underset{n}{a}}} \;=\; \underset{n}{a} \qquad (5.41) \]

The complement of the complement of any element in a 3-space is the element itself, since (-1)^{m(3-m)} is positive for m equal to 0, 1, 2, or 3.

\[ \overline{\overline{\underset{m}{a}}} \;=\; \underset{m}{a} \]

Alternatively, we can say that \overline{\overline{\underset{m}{a}}} = \underset{m}{a} except when \underset{m}{a} is of odd degree in an even-dimensional space. The simplest case of an element of odd degree in an even-dimensional space is a vector in a vector 2-space.
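The sign rule is simple enough to tabulate mechanically. A short Python sketch (ours, just the parity rule above in code):

```python
# Tabulate the sign of the complement of a complement: (-1)^(m(n-m)).
def double_complement_sign(m, n):
    return (-1) ** (m * (n - m))

# In a 3-space every grade returns positive ...
assert all(double_complement_sign(m, 3) == 1 for m in range(4))
# ... but a vector (odd degree) in a 2-space (even dimension) picks up a minus sign.
assert double_complement_sign(1, 2) == -1
# More generally the sign is negative exactly when m is odd and n is even.
for n in range(1, 7):
    for m in range(n + 1):
        expected = -1 if (m % 2 == 1 and n % 2 == 0) else 1
        assert double_complement_sign(m, n) == expected
print("sign table verified")
```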
\[ \overline{\overline{x}} \;=\; -x \]
Idempotent complements
No simple element (except zero) can be equal to its complement. This is because the exterior product of a simple element with its complement is congruent to the unit n-element (which is not zero) - a property that follows directly from the definition of complement. However, this is not necessarily true for non-simple elements. Let S be the sum of a simple m-element (\underset{m}{a}, say) and its complement.

\[ S \;=\; \underset{m}{a} + \overline{\underset{m}{a}} \]

Since (-1)^{m(n-m)} is equal to (-1)^{m(n-1)}, we can see that \overline{S} = S if and only if the dimension n of the space is odd, or the grade m is even. For example, in a vector 3-space, the complement of a sum of a vector and its bivector complement is identical to itself.

𝔙3; S = x + x̄; {S, S̄}
{x + x̄, x + x̄}

And in a point 3-space, the sum of a 2-element and its 2-element complement is identical to itself.

ℙ3; S = B₂ + B̄₂; {S, S̄}
{B₂ + B̄₂, B₂ + B̄₂}
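The condition for such a sum to be fixed by the complement can be checked directly. A Python sketch (ours) of the parity argument:

```python
# S = a + a-bar equals its own complement exactly when the sign
# (-1)^(m(n-m)) (equivalently (-1)^(m(n-1))) is +1, i.e. n odd or m even.
def fixed_by_complement(m, n):
    return (-1) ** (m * (n - m)) == 1

assert fixed_by_complement(1, 3)      # vector plus bivector complement in a 3-space
assert fixed_by_complement(2, 4)      # 2-element plus its 2-element complement
assert not fixed_by_complement(1, 2)  # a vector in a 2-space is not fixed
for n in range(1, 7):
    for m in range(n + 1):
        assert fixed_by_complement(m, n) == (n % 2 == 1 or m % 2 == 0)
```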
The early tradition of the Ausdehnungslehre tacitly assumed a Euclidean metric; that is, one in which g_{ij} = δ_{ij}. This metric is also the one tacitly assumed in beginning presentations of the three-dimensional vector calculus, and is most evident in the definition of the cross product as a vector normal to the factors of the product. The default metric assumed by GrassmannAlgebra is the Euclidean metric. In the case that the GrassmannAlgebra package has just been loaded, entering Metric will show the components of the default Euclidean metric tensor.

Metric
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
These components are arranged as the elements of a 3×3 matrix because the default basis is 3-dimensional.

Basis
{e1, e2, e3}
However, if the dimension of the basis is changed, the default metric changes accordingly.
DeclareBasis[{𝔬, i, j, k}]
{𝔬, i, j, k}
Metric
{{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}}
Declaring a metric
GrassmannAlgebra permits us to declare a metric as a matrix with any numeric or symbolic components. There are just three conditions to which a valid matrix must conform in order to be a metric:
1) It must be symmetric (and hence square).
2) Its order must be the same as the dimension of the declared linear space.
3) Its components must be scalars.
It is up to the user to ensure the first two conditions. The third is handled by GrassmannAlgebra automatically assuming all symbols in the matrix are scalars.
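The three conditions translate directly into code. A Python sketch (ours; the function name and the crude "scalar" test are our own illustration, not package behaviour):

```python
# A sketch of the three validity checks a declared metric must pass.
def valid_metric(matrix, dimension):
    # 2) its order must equal the dimension of the declared linear space (hence square)
    if len(matrix) != dimension or any(len(row) != dimension for row in matrix):
        return False
    # 1) it must be symmetric
    if any(matrix[i][j] != matrix[j][i]
           for i in range(dimension) for j in range(dimension)):
        return False
    # 3) its components must be scalars -- here: plain numbers or symbol names
    return all(isinstance(x, (int, float, str)) for row in matrix for x in row)

assert valid_metric([[1, 0, 0], [0, 1, 0], [0, 0, 1]], 3)          # Euclidean
assert valid_metric([["a", 0, "n"], [0, "b", 0], ["n", 0, "g"]], 3)  # symbolic
assert not valid_metric([[1, 2], [3, 4]], 2)                       # not symmetric
assert not valid_metric([[1, 0], [0, 1]], 3)                       # wrong order
```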
You can check that the components of G are now considered scalars.

ScalarQ[G]
{{True, True, True}, {True, True, True}, {True, True, True}}
You can test to see if a symbol is a component of the currently declared metric by using MetricQ.
MetricQ[{g2,3, g23, g3,4}]
{True, False, False}

The metric tensor induced on the space of m-elements can be obtained with MetricL[m].
M1 = MetricL[1]; MatrixForm[M1]

\[ \begin{pmatrix} g_{1,1} & g_{1,2} & g_{1,3}\\ g_{1,2} & g_{2,2} & g_{2,3}\\ g_{1,3} & g_{2,3} & g_{3,3} \end{pmatrix} \]

M2 = MetricL[2]; MatrixForm[M2]

\[ \begin{pmatrix} g_{1,1}g_{2,2}-g_{1,2}^{2} & g_{1,1}g_{2,3}-g_{1,2}g_{1,3} & g_{1,2}g_{2,3}-g_{1,3}g_{2,2}\\ g_{1,1}g_{2,3}-g_{1,2}g_{1,3} & g_{1,1}g_{3,3}-g_{1,3}^{2} & g_{1,2}g_{3,3}-g_{1,3}g_{2,3}\\ g_{1,2}g_{2,3}-g_{1,3}g_{2,2} & g_{1,2}g_{3,3}-g_{1,3}g_{2,3} & g_{2,2}g_{3,3}-g_{2,3}^{2} \end{pmatrix} \]

M3 = MetricL[3]; MatrixForm[M3]

\[ \begin{pmatrix} g_{1,1}g_{2,2}g_{3,3} + 2\,g_{1,2}g_{1,3}g_{2,3} - g_{1,3}^{2}g_{2,2} - g_{1,1}g_{2,3}^{2} - g_{1,2}^{2}g_{3,3} \end{pmatrix} \]
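The induced metric on 2-elements is just the matrix of 2×2 minors of the metric on 1-elements. A Python sketch (ours; the basis ordering e1∧e2, e1∧e3, e2∧e3 and the numeric stand-ins for the symbols are our assumptions):

```python
# Metric induced on 2-elements as the matrix of 2x2 minors of the 1-element metric.
from itertools import combinations

def metric_L2(g):
    pairs = list(combinations(range(len(g)), 2))   # e1^e2, e1^e3, e2^e3, ...
    return [[g[a][c] * g[b][d] - g[a][d] * g[b][c]
             for (c, d) in pairs] for (a, b) in pairs]

# Specific metric {{a,0,nu},{0,b,0},{nu,0,gamma}} with numbers for the symbols.
a, b, nu, gam = 2, 3, 1, 5
g = [[a, 0, nu], [0, b, 0], [nu, 0, gam]]
m2 = metric_L2(g)
# Matches {{a b, 0, -b nu}, {0, a gamma - nu^2, 0}, {-b nu, 0, b gamma}}.
assert m2 == [[a * b, 0, -b * nu], [0, a * gam - nu ** 2, 0], [-b * nu, 0, b * gam]]
print(m2)
```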
defined on Λ_m. Because of the way in which the cobasis elements of Λ_m are naturally ordered alphanumerically, their ordering and signs will not generally correspond to that of the basis elements of Λ_{n-m}. The arrangement of the elements of the metric tensor for the cobasis elements of Λ_m will therefore differ from the arrangement of the elements of the metric tensor for the basis elements of Λ_{n-m}. The metric tensor of a cobasis is the tensor of cofactors of the metric tensor of the basis. Hence we can obtain the tensor of cofactors of the elements of the metric tensor of Λ_m by computing the metric tensor of its cobasis elements.
As an example, take the general metric in 3-space discussed in the previous section. We will show that we can obtain the tensor of the cofactors of the elements of the metric tensor of Λ_1 by computing the metric tensor of the cobasis of Λ_1.

The metric on Λ_1 is given as a correspondence G_1 between the basis elements and cobasis elements of Λ_1:

\[ \begin{pmatrix}\overline{e_1}\\ \overline{e_2}\\ \overline{e_3}\end{pmatrix} \;=\; \kappa\,G_1\begin{pmatrix}e_2\wedge e_3\\ -(e_1\wedge e_3)\\ e_1\wedge e_2\end{pmatrix},\qquad G_1 = \begin{pmatrix} g_{1,1} & g_{1,2} & g_{1,3}\\ g_{1,2} & g_{2,2} & g_{2,3}\\ g_{1,3} & g_{2,3} & g_{3,3}\end{pmatrix} \]

The metric on Λ_2 is given as a correspondence G_2 between the basis elements and cobasis elements of Λ_2:

\[ \begin{pmatrix}\overline{e_1\wedge e_2}\\ \overline{e_1\wedge e_3}\\ \overline{e_2\wedge e_3}\end{pmatrix} \;=\; \kappa\,G_2\begin{pmatrix}e_3\\ -e_2\\ e_1\end{pmatrix},\qquad G_2 = \begin{pmatrix} g_{1,1}g_{2,2}-g_{1,2}^{2} & g_{1,1}g_{2,3}-g_{1,2}g_{1,3} & g_{1,2}g_{2,3}-g_{1,3}g_{2,2}\\ g_{1,1}g_{2,3}-g_{1,2}g_{1,3} & g_{1,1}g_{3,3}-g_{1,3}^{2} & g_{1,2}g_{3,3}-g_{1,3}g_{2,3}\\ g_{1,2}g_{2,3}-g_{1,3}g_{2,2} & g_{1,2}g_{3,3}-g_{1,3}g_{2,3} & g_{2,2}g_{3,3}-g_{2,3}^{2}\end{pmatrix} \]
We can transform the columns of basis elements in this equation into the form of the preceding one with the transformation T which is simply determined as:
\[ T = \begin{pmatrix}0&0&1\\ 0&-1&0\\ 1&0&0\end{pmatrix};\qquad \begin{pmatrix}e_1\wedge e_2\\ e_1\wedge e_3\\ e_2\wedge e_3\end{pmatrix} = T\begin{pmatrix}e_2\wedge e_3\\ -(e_1\wedge e_3)\\ e_1\wedge e_2\end{pmatrix},\qquad \begin{pmatrix}e_3\\ -e_2\\ e_1\end{pmatrix} = T\begin{pmatrix}e_1\\ e_2\\ e_3\end{pmatrix} \]
And since this transformation is its own inverse, we can also transform G2 to T.G2.T. We now expect this transformed array to be the array of cofactors of the metric tensor G1. We can easily check this in Mathematica by entering the predicate
Simplify[G1.(T.G2.T) == Det[G1] IdentityMatrix[3]]
True
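The same predicate can be spot-checked numerically outside Mathematica. A Python sketch (ours; an arbitrary symmetric integer metric stands in for the symbolic one):

```python
# Check that G1 . (T . G2 . T) equals Det[G1] times the identity,
# where G2 is the matrix of 2x2 minors of G1 and T re-orders/signs the cobasis.
from itertools import combinations

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

G1 = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]          # any symmetric metric
T = [[0, 0, 1], [0, -1, 0], [1, 0, 0]]
pairs = list(combinations(range(3), 2))          # e1^e2, e1^e3, e2^e3
G2 = [[G1[a][c] * G1[b][d] - G1[a][d] * G1[b][c] for (c, d) in pairs]
      for (a, b) in pairs]
detG1 = (G1[0][0] * (G1[1][1] * G1[2][2] - G1[1][2] * G1[2][1])
         - G1[0][1] * (G1[1][0] * G1[2][2] - G1[1][2] * G1[2][0])
         + G1[0][2] * (G1[1][0] * G1[2][1] - G1[1][1] * G1[2][0]))
product = matmul(G1, matmul(T, matmul(G2, T)))
assert product == [[detG1 if i == j else 0 for j in range(3)] for i in range(3)]
```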
Metric Palette
[Euclidean metrics induced on each grade in a 4-space: Λ0: (1); Λ1: the 4×4 identity; Λ2: the 6×6 identity; Λ3: the 4×4 identity; Λ4: (1)]
Metric Palette
[Λ0: (1); Λ1: ((a, 0, ν), (0, b, 0), (ν, 0, γ)); Λ2: ((a b, 0, -b ν), (0, a γ - ν², 0), (-b ν, 0, b γ)); Λ3: (a b γ - b ν²)]
Metric Palette
[Λ0: (1); Λ1: the matrix of the g_{i,j}; Λ2: the matrix of their 2×2 minors; Λ3: the single entry g_{1,1}g_{2,2}g_{3,3} + 2 g_{1,2}g_{1,3}g_{2,3} - g_{1,3}²g_{2,2} - g_{1,1}g_{2,3}² - g_{1,2}²g_{3,3}]

You can paste any of these metric matrices into your notebook by clicking on the palette.
Or, you can simply select the expression and click the complement button on the GrassmannAlgebra palette. Note that an expression with a bar over it symbolizes another expression (the complement of the original expression), not a GrassmannAlgebra operation on the expression. Hence no conversions will occur by entering an overbarred expression.
GrassmannComplement will also work for lists, matrices, or tensors of elements. For example, here is a matrix:

M = {{a e1∧e2, b e2}, {-b e2, c e3∧e1}}; MatrixForm[M]

\[ \begin{pmatrix} a\,e_1\wedge e_2 & b\,e_2\\ -b\,e_2 & c\,e_3\wedge e_1 \end{pmatrix} \]
Euclidean metric
Here we repeat the construction of the complement palette of Section 5.4 in a 2-space.
𝔙2; ComplementPalette

Complement Palette
Basis      Complement
1          e1∧e2
e1         e2
e2         -e1
e1∧e2      1
Palettes of complements for other bases are obtained by declaring the bases, then entering the command ComplementPalette.
DeclareBasis[{ξ, η}]; ComplementPalette

Complement Palette
Basis      Complement
1          ξ∧η
ξ          η
η          -ξ
ξ∧η        1
𝔙2; DeclareMetric[g]; ComplementPalette

Complement Palette
Basis      Complement
1          (e1∧e2)/√g
e1         (g1,1 e2 - g1,2 e1)/√g
e2         (g1,2 e2 - g2,2 e1)/√g
e1∧e2      √g
The symbol g represents the determinant of the metric tensor. It (or expressions involving it) can be evaluated in any space with any declared metric using ToMetricElements.
ToMetricElements[g]
g1,1 g2,2 - g1,2²
The palette is arranged to 'collapse' the full determinant expression under the symbol g since it is often large. However if you wish, you can include the full form of g in the palette by substituting for it.
ComplementPalette /. g → ToMetricElements[g]

Complement Palette
Basis      Complement
1          (e1∧e2)/√(g1,1 g2,2 - g1,2²)
e1         (g1,1 e2 - g1,2 e1)/√(g1,1 g2,2 - g1,2²)
e2         (g1,2 e2 - g2,2 e1)/√(g1,1 g2,2 - g1,2²)
e1∧e2      √(g1,1 g2,2 - g1,2²)
3-space
Here is the corresponding palette in 3-space.
𝔙3; DeclareMetric[g]; ComplementPalette

Complement Palette
[Basis: 1, e1, e2, e3, e1∧e2, e1∧e3, e2∧e3, e1∧e2∧e3. Each complement is a combination of the basis elements of complementary grade, with cofactor coefficients divided by √g; for example the complement of 1 is (e1∧e2∧e3)/√g and the complement of e1 is (g1,3 e1∧e2 - g1,2 e1∧e3 + g1,1 e2∧e3)/√g.]

The determinant of this metric tensor is

ToMetricElements[g]
g1,1 g2,2 g3,3 + 2 g1,2 g1,3 g2,3 - g1,3² g2,2 - g1,1 g2,3² - g1,2² g3,3
𝔙3; ComplementPalette

Complement Palette
Basis         Complement
1             e1∧e2∧e3
e1            e2∧e3
e2            -(e1∧e3)
e3            e1∧e2
e1∧e2         e3
e1∧e3         -e2
e2∧e3         e1
e1∧e2∧e3      1

We could have obtained these complements in three steps: 1. Generate the basis elements (using LBasis). 2. Take their complements (using GrassmannComplement). 3. Convert the complements to other basis elements (using ConvertComplements).

LBasis
{{1}, {e1, e2, e3}, {e1∧e2, e1∧e3, e2∧e3}, {e1∧e2∧e3}}
GrassmannComplement[LBasis]
[the same list with an overbar on each element]
ConvertComplements[GrassmannComplement[LBasis]]
{{e1∧e2∧e3}, {e2∧e3, -(e1∧e3), e1∧e2}, {e3, -e2, e1}, {1}}
Complement Palette
Basis         Complement
1             (e1∧e2∧e3)/√g
e1            (ν e1∧e2 + a e2∧e3)/√g
e2            -(b e1∧e3)/√g
e3            (γ e1∧e2 + ν e2∧e3)/√g
e1∧e2         (a b e3 - b ν e1)/√g
e1∧e3         ((ν² - a γ) e2)/√g
e2∧e3         (b γ e1 - b ν e3)/√g
e1∧e2∧e3      √g

We could have obtained these complements in three steps: 1. Generate the basis elements (using LBasis). 2. Take their complements (using GrassmannComplement). 3. Convert the complements to other basis elements (using ConvertComplements).

LBasis
{{1}, {e1, e2, e3}, {e1∧e2, e1∧e3, e2∧e3}, {e1∧e2∧e3}}
GrassmannComplement[LBasis]
[the same list with an overbar on each element]
ConvertComplements[GrassmannComplement[LBasis]]
{{(e1∧e2∧e3)/√g}, {(ν e1∧e2 + a e2∧e3)/√g, -(b e1∧e3)/√g, (γ e1∧e2 + ν e2∧e3)/√g}, {(a b e3 - b ν e1)/√g, ((ν² - a γ) e2)/√g, (b γ e1 - b ν e3)/√g}, {√g}}
The palette is arranged to 'collapse' the full determinant expression under the symbol g since it is often large. However if you wish, you can include the full form of g in the palette by substituting for it.

ComplementPalette /. g → ToMetricElements[g]

Complement Palette
[the same palette with g replaced throughout by a b γ - b ν²; for example the complement of e1 becomes (ν e1∧e2 + a e2∧e3)/√(a b γ - b ν²)]
G = ConvertComplements[GrassmannComplement[LBasis]]
[the nested list of complements of all the basis elements for the general metric, with cofactor coefficients over √g; for example the complement of e1 is (g1,3 e1∧e2 - g1,2 e1∧e3 + g1,1 e2∧e3)/√g]
Taking the complement of these and converting them again returns us to the original list of basis elements.
ConvertComplements[GrassmannComplement[G]]
{{1}, {e1, e2, e3}, {e1∧e2, e1∧e3, e2∧e3}, {e1∧e2∧e3}}
DeclareMetric[g];
F = ConvertComplements[2 + a b (e2 ∨ ē2) c ē3]
2 + (a b c g2,2 g3,3 e1∧e2)/√g - (a b c g2,2 g2,3 e1∧e3)/√g + (a b c g1,3 g2,2 e2∧e3)/√g
Simplify[F]
2 + (a b c g2,2 (g3,3 e1∧e2 - g2,3 e1∧e3 + g1,3 e2∧e3))/√g

ConvertComplements will also apply some simplification rules to expressions involving non-basis elements.

𝔙3; ConvertComplements[2 + a b (e2 ∨ ē2) c x]
2 + a b c x
DeclareMetric[g]; ConvertComplements[2 + a b (e2 ∨ ē2) c x]
2 + a b c x g2,2

In the example discussed in the previous section we see that using GrassmannSimplify gives a result involving the interior product e2 ⌣ e2 (in this case a scalar product) which is valid independent of the metric.

𝒢[2 + a b (e2 ∨ ē2) c x]
2 + a b c (e2 ⌣ e2) x

In the case of a Euclidean metric, the scalar product e2 ⌣ e2 in the result is equal to 1. In the case of a general metric it is equal to g2,2. Here are some more examples in 3-space.
The dimension of the space may change the sign of some expressions, or indeed make them zero. Here are the same examples in 4-space, giving two sign changes and one zero.

𝔙4; [the same list of expressions evaluated in 4-space]
{-x, x, x∧y, -(x∧y), u∧v∧y, u∧v∧y, u∧v∧y∧z, u∧v∧y∧z, (x∧y)∧z, 0}
In this chapter, we discuss only the conversions between the exterior product, the regressive product, and the complement. The examples below show the effects of these conversions on a single expression in both 3 and 4 dimensional spaces. In a 3-space, the complement of the complement of an element of any grade is the element itself, hence these conversions in a 3-space show no extra signs. In spaces of other dimension however, the results may show signs dependent on the grades of the elements. This is exemplified by the 4-dimensional cases.
Converting the exterior products to regressive products and complements yields a possible sign change in 4-space depending on the grades of the elements.
{𝔙3; ExteriorToRegressive[G], 𝔙4; ExteriorToRegressive[G]}

\[ \Big\{\;\overline{\overline{\underset{m}{a}}\vee\overline{\underset{k}{b}}}\;\vee\;\overline{\overline{\underset{p}{g}}\vee\overline{\underset{q}{d}}}\,,\qquad (-1)^{k+m}\,\overline{\overline{\underset{m}{a}}\vee\overline{\underset{k}{b}}}\;\vee\;(-1)^{p+q}\,\overline{\overline{\underset{p}{g}}\vee\overline{\underset{q}{d}}}\;\Big\} \]
Converting the regressive products to exterior products also yields a possible sign change.
{𝔙3; RegressiveToExterior[G], 𝔙4; RegressiveToExterior[G]}

\[ \Big\{\;\overline{\,\overline{\underset{m}{a}\wedge\underset{k}{b}}\wedge\overline{\underset{p}{g}\wedge\underset{q}{d}}\,}\,,\qquad (-1)^{k+m+p+q}\;\overline{\,\overline{\underset{m}{a}\wedge\underset{k}{b}}\wedge\overline{\underset{p}{g}\wedge\underset{q}{d}}\,}\;\Big\} \]
Ze = RegressiveToExterior[Z]

\[ \sum_{i=1}^{3}\sum_{j=1}^{3} a_i\,b_j\;\overline{\,\overline{B_i}\wedge\overline{B_j}\,},\qquad (B_1, B_2, B_3) = (e_1\wedge e_2,\; e_1\wedge e_3,\; e_2\wedge e_3) \]

Convert the complements. ConvertComplements assumes the currently declared metric - in this case the default Euclidean metric.

ConvertComplements[Xe]
{1, e1}
Applying ConvertComplements assumes this metric, and shows us that both results now have the determinant of the metric tensor as a multiplier.

ConvertComplements[Xe]
{g, g e1}

Note that each regressive product operation generates one factor of the determinant g. In the example of the regressive product of bivectors in the previous example, g was likewise generated once.
Since this is a Euclidean vector space, we can depict the basis vectors at right angles to each other. But note that since it is a vector space, we do not depict an origin.
Remember, the Euclidean complement of a basis element is (formally identical to) its cobasis element, and a basis element and its cobasis element are defined by their exterior product being the basis n-element, in this case e1 e2 . It is clear from the depiction above that x and x are at right angles to each other, thus verifying our geometric interpretation of the algebraic notion of orthogonality: a simple element and its complement are orthogonal. Taking the complement of x gives -x:
\[ \overline{\overline{x}} \;=\; \overline{a\,e_2 - b\,e_1} \;=\; -a\,e_1 - b\,e_2 \;=\; -x \]
Continuing to take complements we find that we eventually return to the original element.
\[ \overline{\overline{\overline{x}}} \;=\; -\overline{x},\qquad \overline{\overline{\overline{\overline{x}}}} \;=\; x \]
In a vector 2-space, taking the complement of a vector is thus equivalent to rotating the vector by one right angle counterclockwise.
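In components, the quarter-turn interpretation is a one-line computation. A Python sketch (ours, purely a coordinate check of the Euclidean 2-space complement):

```python
import math

# Euclidean complement of a vector in a 2-space, in components:
# the complement of a e1 + b e2 is a e2 - b e1, i.e. (a, b) -> (-b, a).
def complement_2space(v):
    a, b = v
    return (-b, a)   # components of a e2 - b e1 in the (e1, e2) basis

x = (3.0, 1.0)
xbar = complement_2space(x)
# The complement is a counterclockwise quarter turn ...
angle = math.atan2(xbar[1], xbar[0]) - math.atan2(x[1], x[0])
assert math.isclose(angle % (2 * math.pi), math.pi / 2)
# ... so two complements negate the vector, and four return it.
assert complement_2space(complement_2space(x)) == (-3.0, -1.0)
assert complement_2space(complement_2space(complement_2space(complement_2space(x)))) == x
```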
Since this is a non-Euclidean vector space, the basis vectors are generally not at right angles to each other. We declare a general metric, and display the Complement and Metric palettes.
𝔙2; DeclareMetric[g]; Grid[{{ComplementPalette, MetricPalette}}]

Complement Palette
Basis      Complement
1          (e1∧e2)/√g
e1         (g1,1 e2 - g1,2 e1)/√g
e2         (g1,2 e2 - g2,2 e1)/√g
e1∧e2      √g

Metric Palette
[Λ0: (1); Λ1: ((g1,1, g1,2), (g1,2, g2,2)); Λ2: (g1,1 g2,2 - g1,2²)]

x = a e1 + b e2; ConvertComplements[x̄]
((a g1,1 + b g1,2) e2 - (a g1,2 + b g2,2) e1)/√g
Example

Suppose we take a simple metric, {{2, 1}, {1, 1}}, and calculate the complements of x = (e1 + e2)/2. The Complement and Metric palettes for this metric are:

Complement Palette
Basis      Complement
1          e1∧e2
e1         -e1 + 2 e2
e2         -e1 + e2
e1∧e2      1

Metric Palette
[Λ0: (1); Λ1: ((2, 1), (1, 1)); Λ2: (1)]

x = (e1 + e2)/2;
ConvertComplements[\overline{x}]
-e1 + (3/2) e2
ConvertComplements[\overline{\overline{x}}]
-(1/2) e1 - (1/2) e2
ConvertComplements[\overline{\overline{\overline{x}}}]
e1 - (3/2) e2

[Figure: the vectors x, x̄, x̿ and the triple complement plotted on the e1, e2 basis]

We can see that although the basis vectors are neither unit vectors nor orthogonal, as for the Euclidean case \overline{\overline{x}} is equal to -x, and \overline{\overline{\overline{x}}} is equal to -\overline{x}. The standard conceptual vehicle for exploring the notion of orthogonality is the interior product and its special cases, the inner and scalar products. These we will develop in the next chapter where we will revisit examples of this type, and connect them more closely with familiar concepts.
𝔙3; ComplementPalette

Complement Palette
Basis         Complement
1             e1∧e2∧e3
e1            e2∧e3
e2            -(e1∧e3)
e3            e1∧e2
e1∧e2         e3
e1∧e3         -e2
e2∧e3         e1
e1∧e2∧e3      1
Remember, the Euclidean complement of a basis element is (formally identical to) its cobasis element, and a basis element and its cobasis element are defined by their exterior product being the basis n-element, in this case e1 e2 e3 . Taking the complement of x gives x. Note that this result differs in sign from the two-dimensional case.
\[ \overline{\overline{x}} \;=\; \overline{a\,e_2\wedge e_3 - b\,e_1\wedge e_3 + c\,e_1\wedge e_2} \;=\; a\,e_1 + b\,e_2 + c\,e_3 \;=\; x \]
Since in a vector 3-space, the complement of the complement of a vector, or of a bivector returns us to the original element, continuing to take complements as we did in the two-dimensional case leads us to no further elements.
\[ \overline{\overline{x}} \;=\; x,\qquad \overline{\overline{\overline{x}}} \;=\; \overline{x} \]
The bivector x̄ is a sum of components, each in one of the coordinate bivectors. We have already shown in Chapter 2: The Exterior Product that a bivector in a 3-space is simple, and hence can be factored into the exterior product of two vectors. It is in this form that the bivector will be most easily interpreted as a geometric entity. There is an infinity of vectors that are orthogonal to the vector x, but they are all contained in the bivector x̄. You can express the bivector x̄ as the exterior product of two vectors in an infinity of ways. The GrassmannAlgebra function ExteriorFactorize finds one of them for you. Others may be obtained by adding vectors from the first factor to the second factor, or vice versa.
ExteriorFactorize[a e2∧e3 - b e1∧e3 + c e1∧e2]
c (e1 - (a e3)/c)∧(e2 - (b e3)/c)
The vector x is orthogonal to each of these vectors since the exterior product of x with each of them is zero.
%B: e1 e1 80, 0< a e3 c c Ha e2 e3 - b e1 e3 + c e1 e2 L,
a e3
Ha e2 e3 - b e1 e3 + c e1 e2 L>F
The complements of each of these vectors will be bivectors which contain the original vector x. We can verify this easily with ConvertComplements and GrassmannSimplify.

𝒢[ConvertComplements[{\overline{(c e1 - a e3)}∧(a e1 + b e2 + c e3), \overline{(c e2 - b e3)}∧(a e1 + b e2 + c e3)}]]
{0, 0}
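Because this is the Euclidean case, the factorization can also be checked with ordinary dot and cross products. A Python sketch (ours; the numeric values for a, b, c are arbitrary):

```python
# Component check of the factorization above: in Euclidean 3-space the bivector
# complement of x = (a, b, c) is the plane with normal x. The two factors
# e1 - (a/c) e3 and e2 - (b/c) e3 (scaled by c) must both be perpendicular
# to x, and their cross product must point back along x.
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

a, b, c = 2.0, 5.0, 4.0
x = (a, b, c)
f1 = (1.0, 0.0, -a / c)      # e1 - (a/c) e3
f2 = (0.0, 1.0, -b / c)      # e2 - (b/c) e3
assert dot(x, f1) == 0 and dot(x, f2) == 0
# c * (f1 x f2) reproduces x, so c f1 ^ f2 is indeed the complement of x.
assert all(abs(c * w - xi) < 1e-12 for w, xi in zip(cross(f1, f2), x))
```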
Example
Suppose we take the metric used in the example above on the non-Euclidean complement in a vector 2-space and extend it orthogonally to the third dimension.
Complement Palette
Basis         Complement
1             e1∧e2∧e3
e1            -(e1∧e3) + 2 e2∧e3
e2            -(e1∧e3) + e2∧e3
e3            e1∧e2
e1∧e2         e3
e1∧e3         e1 - 2 e2
e2∧e3         e1 - e2
e1∧e2∧e3      1

Metric Palette
[Λ0: (1); Λ1: ((2, 1, 0), (1, 1, 0), (0, 0, 1)); Λ2: ((1, 0, 0), (0, 2, 1), (0, 1, 1)); Λ3: (1)]

x = (e1 + e2)/2;
ConvertComplements[\overline{x}]
-(e1∧e3) + (3/2) e2∧e3
ConvertComplements[\overline{\overline{x}}]
(1/2) e1 + (1/2) e2
and observe again that the vector x is orthogonal to each of these factors, since clearly the exterior product of the complement of x with each of these factors is zero.

𝒢[{(e1 - (3/2) e2)∧(-(e1∧e3) + (3/2) e2∧e3), e3∧(-(e1∧e3) + (3/2) e2∧e3)}]
{0, 0}
\[ G_{ij} \;=\; \begin{pmatrix} 1 & 0 & \cdots & 0\\ 0 & g_{11} & \cdots & g_{1n}\\ \vdots & \vdots & & \vdots\\ 0 & g_{n1} & \cdots & g_{nn} \end{pmatrix} \qquad (5.42) \]
These hybrid metrics will be useful later when we discuss screw algebra in Chapter 7 and mechanics in Chapter 8. Of course the permitted transformations on a space with such a metric must be restricted to those which maintain the orthogonality of the origin point to the vector subspace. It will be convenient in what follows to refer to a bound space with underlying linear space of dimension n+1 as an n-plane. All the formulae for complements in the vector subspace still hold. In an n-plane, the vector subspace has dimension n, and hence the complement operation for the n-plane and its vector subspace will not be the same. We will denote the complement operation in the vector subspace by using an overvector instead of an overbar, and call it a vector space complement. The vector space complement (overvector) operation can be entered from the Basic Operations palette by using the button under the complement (overbar) button. The constant κ will be the same in both spaces and still equate to the inverse of the square root of the determinant g of the metric tensor. Hence it is easy to show that in a bound space with the metric tensor above:
\[ \overline{1} \;=\; \frac{1}{\sqrt{g}}\;\mathfrak{o}\wedge e_1\wedge e_2\wedge\cdots\wedge e_n \qquad (5.43) \]

\[ \vec{1} \;=\; \frac{1}{\sqrt{g}}\;e_1\wedge e_2\wedge\cdots\wedge e_n \qquad (5.44) \]

\[ \overline{1} \;=\; \mathfrak{o}\wedge\vec{1} \qquad (5.45) \]

\[ \overline{\overline{1}} \;=\; \vec{\vec{1}} \;=\; 1 \qquad (5.46) \]
In this interpretation \overline{1} will be called the unit n-plane, while \vec{1} will be called the unit n-vector. The unit n-plane is the unit n-vector bound through the origin.
In the vector subspace, the vector space complement of a basis vector e_i is given by

\[ \vec{e_i} \;=\; \frac{1}{\sqrt{g}}\sum_{j=1}^{n} g_{ij}\,\underline{e_j} \qquad (5.47) \]

In the n-plane, the complement \overline{e_i} of a basis vector e_i is given by the metric of the n-plane as:

\[ \overline{e_i} \;=\; -\,\mathfrak{o}\wedge\vec{e_i} \qquad (5.48) \]
More generally we have that for a basis m-vector, its complement in the n-plane is related to its complement in the vector subspace by:
\[ \overline{\underset{m}{e_i}} \;=\; (-1)^{m}\;\mathfrak{o}\wedge\vec{\underset{m}{e_i}} \qquad (5.49) \]

\[ \overline{\underset{m}{a}} \;=\; (-1)^{m}\;\mathfrak{o}\wedge\vec{\underset{m}{a}} \qquad (5.50) \]
The vector space complement of any exterior product involving the origin 𝔬 is undefined.
The complement of this equation is also zero, showing that the regressive product of vectorial elements in a bound space is zero.

\[ \overline{\underset{m}{a}}\wedge\overline{\underset{k}{b}} \;=\; 0 \quad\Longrightarrow\quad \underset{m}{a}\vee\underset{k}{b} \;=\; 0 \]
For a regressive product to be non-zero, it needs to be able to assemble a copy of the unit n-element out of its factors. If it cannot, the regressive product is zero. In this case, the space is a bound space whose unit n-element contains the origin 𝔬, but the origin is missing from both factors of the product, hence it is zero. It is important to note that the complement axiom is only defined for complements in the full space. Thus, we cannot say that
\[ \vec{\underset{m}{a}}\wedge\vec{\underset{k}{b}} \;=\; \overrightarrow{\underset{m}{a}\vee\underset{k}{b}} \]
A simple example using basis elements in a Euclidean point 3-space will suffice. Substituting basis elements and using the definitions for the vector-space complement shows that the left side is non-zero, while the right side is zero.

\[ \vec{e_1\wedge e_2}\wedge\vec{e_2\wedge e_3} \;=\; e_3\wedge e_1 \;\neq\; \overrightarrow{(e_1\wedge e_2)\vee(e_2\wedge e_3)} \;=\; 0 \]
Note first that the cobasis of the origin is \underline{\mathfrak{o}} = e_1\wedge e_2\wedge\cdots\wedge e_n, so that

\[ \overline{\mathfrak{o}} \;=\; \frac{1}{\sqrt{g}}\;e_1\wedge e_2\wedge\cdots\wedge e_n \;=\; \vec{1} \]

Hence, for a basis vector bound through the origin:

\[ \overline{\mathfrak{o}\wedge e_i} \;=\; \frac{1}{\sqrt{g}}\sum_{j=1}^{n} g_{ij}\,\underline{e_j} \;=\; \vec{e_i} \]

\[ \overline{\mathfrak{o}\wedge e_i} \;=\; \vec{e_i} \qquad (5.51) \]
More generally, we can show that for a general m-vector, the complement of the m-vector bound through the origin in an n-plane is simply the complement of the m-vector in the vector subspace of the n-plane.
\[ \overline{\mathfrak{o}\wedge\underset{m}{a}} \;=\; \vec{\underset{m}{a}} \qquad (5.52) \]
Let \underset{m}{a} be an m-vector in the bound space. Then the complement of the complement of \underset{m}{a} is given by

\[ \overline{\overline{\underset{m}{a}}} \;=\; (-1)^{\,m(n-m)}\;\underset{m}{a} \]

The vector space complement of the vector space complement of \underset{m}{a} in the vector subspace is

\[ \vec{\vec{\underset{m}{a}}} \;=\; (-1)^{\,m(n-1-m)}\;\underset{m}{a} \]

Hence

\[ \overline{\overline{\underset{m}{a}}} \;=\; (-1)^{m}\;\vec{\vec{\underset{m}{a}}} \qquad (5.53) \]

We can verify this using the formulae derived in the previous section.

\[ \overline{\underset{m}{a}} \;=\; (-1)^{m}\;\mathfrak{o}\wedge\vec{\underset{m}{a}} \]
If, on the other hand, the basis of your currently declared space does contain the origin 𝔬, then ConvertComplements will convert any expressions containing OverVector complements to their equivalent OverBar forms.
ℙ3; ConvertComplements[{\vec{e_1}, \overline{\vec{x}}, \overline{e_1}, \vec{\vec{x}}}]
{e2∧e3, 𝔬∧x, -(𝔬∧e2∧e3), x}
Complement Palette
Basis         Complement
1             𝔬∧e1∧e2
𝔬             e1∧e2
e1            -(𝔬∧e2)
e2            𝔬∧e1
𝔬∧e1          e2
𝔬∧e2          -e1
e1∧e2         𝔬
𝔬∧e1∧e2       1
The complement table tells us that each basis vector is orthogonal to the axis involving the other basis vector. For example, e2 is orthogonal to the axis 𝔬∧e1. Suppose now we take a general vector x = a e1 + b e2. The complement of this vector is:

\[ \overline{x} \;=\; \overline{a\,e_1 + b\,e_2} \;=\; -a\,\mathfrak{o}\wedge e_2 + b\,\mathfrak{o}\wedge e_1 \;=\; \mathfrak{o}\wedge(b\,e_1 - a\,e_2) \]
Again, this is an axis (bound vector) through the origin orthogonal to the vector x. Now let us take a general point P = 𝔬 + x and explore what element is orthogonal to this point.

\[ \overline{P} \;=\; \overline{\mathfrak{o} + x} \;=\; e_1\wedge e_2 + \mathfrak{o}\wedge(b\,e_1 - a\,e_2) \]
The effect of adding the bivector e1∧e2 to the bound vector is to shift 𝔬∧(b e1 - a e2) parallel to itself. We can factor e1∧e2 into the exterior product of two vectors, one parallel to b e1 - a e2.

\[ e_1\wedge e_2 \;=\; z\wedge(b\,e_1 - a\,e_2) \]
The vector z is the position vector of any point on the line defined by the bound vector P. A particular point of interest is the point on the line closest to the point P or the origin. The position vector z would then be a vector orthogonal to the direction of the line. Thus we can write e1 e2 as:
\[ e_1\wedge e_2 \;=\; -\,\frac{a\,e_1 + b\,e_2}{a^{2}+b^{2}}\wedge(b\,e_1 - a\,e_2) \]
The final expression for the complement of the point P = 𝔬 + x = 𝔬 + a e1 + b e2 can then be written as a bound vector in a direction orthogonal to the position vector of P.

\[ \overline{P} \;=\; \Big(\mathfrak{o} - \frac{a\,e_1 + b\,e_2}{a^{2}+b^{2}}\Big)\wedge(b\,e_1 - a\,e_2) \;=\; P^{*}\wedge(b\,e_1 - a\,e_2) \qquad (5.54) \]
This formula turns out to be a special case of a more general formula which we will derive later in the section, and for which we now propose some convenient notation.
\[ P \;=\; \mathfrak{o} + x,\qquad P^{*} \;=\; \mathfrak{o} + x^{*},\qquad x^{*} \;=\; -\,\frac{x}{x^{2}} \qquad (5.55) \]
We call the point P* the inverse point to P. Inverse points are situated on the same line through the origin, and on opposite sides of it. The product of their distances from the origin is unity.
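The defining properties of the inverse point are easy to verify in coordinates. A Python sketch (ours; the sample vector is arbitrary):

```python
import math

# The inverse point is P* = origin + x*, with x* = -x / |x|^2.
def inverse_position(x):
    s = sum(c * c for c in x)        # x . x = |x|^2
    return tuple(-c / s for c in x)

x = (3.0, 4.0)                       # |x| = 5
xs = inverse_position(x)
# Same line through the origin, opposite side of it:
assert math.isclose(xs[0] / x[0], xs[1] / x[1]) and xs[0] / x[0] < 0
# The product of the two distances from the origin is unity:
assert math.isclose(math.hypot(*x) * math.hypot(*xs), 1.0)
```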
Remembering that the vector space complement of the position vector x is denoted \vec{x}, we can now rewrite our derived formula in a more condensed form.

\[ \overline{P} \;=\; P^{*}\wedge\big(-\vec{x}\,\big) \qquad (5.56) \]
[Figure: the point P = 𝔬 + x, its inverse point P* = 𝔬 + x*, and the complement \overline{P} = P*∧(-\vec{x}), a bound vector through P* orthogonal to the position vector of P]
Of course, since the bound space of a plane is a three-dimensional linear space, the complement of the complement of a point is the point itself, just like the complement of a complement of a vector in a vector 3-space is the vector itself. To verify this in GrassmannAlgebra we can use ConvertComplements on the formula derived in terms of basis elements above (but because the complement operation is a Protected function, we need to use the extra symbol ^ before the equals sign).
\overline{P} ^= (𝔬 - (a e1 + b e2)/(a² + b²))∧(b e1 - a e2); ConvertComplements[\overline{\overline{P}}]
𝔬 + a e1 + b e2
ℙ3; ComplementPalette
Complement Palette
Basis           Complement
1               𝔬∧e1∧e2∧e3
𝔬               e1∧e2∧e3
e1              -(𝔬∧e2∧e3)
e2              𝔬∧e1∧e3
e3              -(𝔬∧e1∧e2)
𝔬∧e1            e2∧e3
𝔬∧e2            -(e1∧e3)
𝔬∧e3            e1∧e2
e1∧e2           𝔬∧e3
e1∧e3           -(𝔬∧e2)
e2∧e3           𝔬∧e1
𝔬∧e1∧e2         e3
𝔬∧e1∧e3         -e2
𝔬∧e2∧e3         e1
e1∧e2∧e3        -𝔬
𝔬∧e1∧e2∧e3      1
Each basis vector is orthogonal to the coordinate plane involving the origin and the other basis vectors. For example, e2 is orthogonal to the plane 𝔬∧e1∧e3. Suppose now we take a general vector x = a e1 + b e2 + c e3. The complement of this vector is:
\[ \overline{x} \;=\; \overline{a\,e_1 + b\,e_2 + c\,e_3} \;=\; -a\,\mathfrak{o}\wedge e_2\wedge e_3 + b\,\mathfrak{o}\wedge e_1\wedge e_3 - c\,\mathfrak{o}\wedge e_1\wedge e_2 \;=\; \mathfrak{o}\wedge(-a\,e_2\wedge e_3 + b\,e_1\wedge e_3 - c\,e_1\wedge e_2) \]

Again, this is a plane (bound bivector) through the origin orthogonal to the vector x. The complement of the general point P = 𝔬 + x is

\[ \overline{P} \;=\; \overline{\mathfrak{o} + x} \;=\; e_1\wedge e_2\wedge e_3 + \mathfrak{o}\wedge(-a\,e_2\wedge e_3 + b\,e_1\wedge e_3 - c\,e_1\wedge e_2) \]
The effect of adding the trivector e1∧e2∧e3 to the bound bivector is to shift 𝔬∧(-a e2∧e3 + b e1∧e3 - c e1∧e2) parallel to itself. We can factor e1∧e2∧e3 into the exterior product of a vector and a bivector congruent to -a e2∧e3 + b e1∧e3 - c e1∧e2.

\[ e_1\wedge e_2\wedge e_3 \;=\; z\wedge(-a\,e_2\wedge e_3 + b\,e_1\wedge e_3 - c\,e_1\wedge e_2) \]
The vector z is the position vector of any point on the plane defined by the bound bivector P. A particular point of interest is the point on the plane closest to the point P or the origin. The position vector z would then be a vector orthogonal to the plane. Thus we can write e1 e2 e3 as:
\[ e_1\wedge e_2\wedge e_3 \;=\; -\,\frac{a\,e_1 + b\,e_2 + c\,e_3}{a^{2}+b^{2}+c^{2}}\wedge(-a\,e_2\wedge e_3 + b\,e_1\wedge e_3 - c\,e_1\wedge e_2) \]
The final expression for the complement of the point P = 𝔬 + x = 𝔬 + a e1 + b e2 + c e3 can then be written as a bound bivector orthogonal to the position vector of P.

\[ \overline{P} \;=\; \Big(\mathfrak{o} - \frac{a\,e_1 + b\,e_2 + c\,e_3}{a^{2}+b^{2}+c^{2}}\Big)\wedge(-a\,e_2\wedge e_3 + b\,e_1\wedge e_3 - c\,e_1\wedge e_2) \qquad (5.57) \]

From this result we see that our condensed formula is also valid in a point 3-space.

\[ \overline{P} \;=\; P^{*}\wedge\big(-\vec{x}\,\big) \qquad (5.58) \]
In a point 3-space, the complement of the complement of a point is the negative of the point itself, or equivalently, the point with a weight of -1.

P = 𝔬 + a e1 + b e2 + c e3; ConvertComplements[\overline{\overline{P}}]
-𝔬 - a e1 - b e2 - c e3
\[ P\wedge\underset{m}{a} \;=\; (\mathfrak{o}+x)\wedge\underset{m}{a} \;=\; \mathfrak{o}\wedge\underset{m}{a} + x\wedge\underset{m}{a} \]

But since we have chosen x to be orthogonal to each of the a_i, the terms a_i ⌣ x are zero, so that
$$\left(x\wedge\alpha_m\right)\circ x = \left(x\circ x\right)\alpha_m$$
By taking the complement of this expression, we can manipulate it into the form we want.
$$\overline{\left(x\wedge\alpha_m\right)\circ x} = \overline{\left(x\circ x\right)\alpha_m}$$
First apply the complement axiom (once on the left and twice on the right).
$$\overline{x\wedge\alpha_m}\wedge\overline{\overline{x}} = \left(\overline{\overline{x}}\vee\overline{x}\right)\overline{\alpha_m}$$
Now, the regressive product of an element with its complement is a scalar which, as we will see in the next chapter, is the scalar product of the element with itself. This scalar product may also be written as the square of the magnitude (measure) of the element, x². Hence on the right side we have

$$\overline{\overline{x}}\vee\overline{x} = (-1)^{n-1}\,x\vee\overline{x} = (-1)^{n-1}\,x\circ x = (-1)^{n-1}\,x^2$$

Substituting gives

$$\overline{x\wedge\alpha_m}\wedge\overline{\overline{x}} = (-1)^{n-1}\,x^2\,\overline{\alpha_m}$$
a H- 1Lm " a
m m
We can now return to the formula for the bound element under consideration and take its complement.
P a H" + xL a " a + x a
m m m m
P a " a + x a a + H - 1 L m +1 " x a
m m m m m
P a H- 1Lm
m
x x
2
x a + H - 1 L m +1 " x a
m m
" -
x x2
K- a xO
m
P a ! m
x x2
K- a xO
m
P ! + x
ax 0
m
5.59
This formula holds only under the condition that the position vector x of the point P is orthogonal to $\alpha_m$. Here we have preempted the notation of the next chapter by using $\alpha_m\circ x = 0$ to express this condition.
$$\overline{P\wedge\alpha_m} = \left(\mathcal{O} - \frac{x}{x^2}\right)\wedge\overline{\left(x\wedge\alpha_m\right)}\,,\qquad P = \mathcal{O} + x\,,\quad \alpha_m\circ x = 0\qquad\qquad 5.60$$
As a specific example, let x be e1 , and a be e2 . In the plane, the complement of this bound vector becomes a point.
In a point 3-space, the complement of this bound vector becomes a second bound vector (orthogonal to the first one).
P e2 H" - e1 L He1 e2 L H" - e1 L e3
$$\overline{P\wedge a\wedge b} = \left(\mathcal{O} - \frac{x}{x^2}\right)\wedge\overline{\left(x\wedge a\wedge b\right)}\,,\qquad P = \mathcal{O} + x\,,\quad \left(a\wedge b\right)\circ x = 0\qquad\qquad 5.61$$
Again, let x be e1 and a be e2 ; and put b to be e3 . In a point 3-space, the complement of this bound bivector becomes a point.
P e2 e3 H" - e1 L He1 e2 e3 L H" - e1 L H1L " - e1
In geometric terms, this can be interpreted as saying that the complement of a line defined by two points is the intersection of the complements of the points.
The simplest case is in the plane, where the complements of the points are themselves lines, whose intersection is a point.
Graphic showing two points P and Q in the plane, their complements $\overline{P}$ and $\overline{Q}$ (lines), the point of intersection R of these lines, and the line $\overline{R}$ through P and Q.
In a point 3-space, the same relations hold, but their interpretations are different. The complement $\overline{P}$ of a point is a bound bivector as we have depicted earlier. The complements of two different points P and Q yield two distinct bound bivectors which intersect in a line R, say. And the complement of this line R is the line passing through the points P and Q. The complement of a bound bivector can be composed as the regressive product of the complements of three points. In a point 3-space, this results in the intersection of three bound bivectors yielding a point. The complement of this point is a bound bivector passing through the original three points. The simple examples explored in the sections above extend straightforwardly, mutatis mutandis, to higher dimensional spaces, but of course they are more challenging to depict!
indices, for example $e_i$. We will refer to this basis as the standard basis. Introducing a second basis reciprocal to this standard basis enables us to write the formulae for complements in a more symmetric way. This section will summarize formulae relating basis elements and their cobases and complements in terms of reciprocal bases. For simplicity, we adopt the Einstein summation convention. In $\Lambda^1$ the metric tensor $g_{ij}$ forms the relationship between the reciprocal bases.

$$e_i = g_{ij}\,e^j\qquad\qquad 5.62$$

This relationship induces a metric tensor $g_{\mathbf{ij}}$ on $\Lambda^m$.
$$e_{\mathbf{i}} = g_{\mathbf{ij}}\,e^{\mathbf{j}}\qquad\qquad 5.63$$
$$\overline{e_i} = \frac{1}{\sqrt{g}}\;g_{ij}\,\underline{e_j}\qquad\qquad 5.64$$

Here $g = \left|g_{ij}\right|$ is the determinant of the metric tensor. Taking the complement of formula 5.64 and substituting from formula 5.62 gives:
$$\overline{e^i} = \frac{1}{\sqrt{g}}\;\underline{e_i}\qquad\qquad 5.65$$
$$\overline{e_i} = \sqrt{g}\;\underline{e^i}\qquad\qquad 5.66$$
$$\overline{e_{\mathbf{i}}} = \sqrt{g}\;\underline{e^{\mathbf{i}}}\qquad\qquad 5.67$$
$$\overline{e^{\mathbf{i}}} = \frac{1}{\sqrt{g}}\;\underline{e_{\mathbf{i}}}\qquad\qquad 5.68$$
$$\overline{1} = \frac{1}{\sqrt{g}}\;e_1\wedge e_2\wedge\cdots\wedge e_n = \sqrt{g}\;e^1\wedge e^2\wedge\cdots\wedge e^n\qquad\qquad 5.69$$
$$\overline{e_1\wedge e_2\wedge\cdots\wedge e_n} = \sqrt{g}\qquad\qquad 5.70$$
$$\overline{e^1\wedge e^2\wedge\cdots\wedge e^n} = \frac{1}{\sqrt{g}}\qquad\qquad 5.71$$
$$\overline{e_i} = \sqrt{g}\;(-1)^{i-1}\;e^1\wedge\cdots\wedge\widehat{e^i}\wedge\cdots\wedge e^n\qquad\qquad 5.72$$
$$\overline{e^i} = \frac{1}{\sqrt{g}}\;(-1)^{i-1}\;e_1\wedge\cdots\wedge\widehat{e_i}\wedge\cdots\wedge e_n\qquad\qquad 5.73$$
$$\overline{e_{i_1}\wedge\cdots\wedge e_{i_m}} = \sqrt{g}\;(-1)^{K_m}\;e^1\wedge\cdots\wedge\widehat{e^{i_1}}\wedge\cdots\wedge\widehat{e^{i_m}}\wedge\cdots\wedge e^n\qquad\qquad 5.74$$
$$\overline{e^{i_1}\wedge\cdots\wedge e^{i_m}} = \frac{1}{\sqrt{g}}\;(-1)^{K_m}\;e_1\wedge\cdots\wedge\widehat{e_{i_1}}\wedge\cdots\wedge\widehat{e_{i_m}}\wedge\cdots\wedge e_n\qquad\qquad 5.75$$
where $K_m = \sum_{\gamma=1}^{m} i_\gamma + \frac{1}{2}\,m\left(m+1\right)$ and the hat symbol means the corresponding element is missing from the product.
Taking the complement of this formula and applying formula 5.69 gives the required result.
e i H - 1 L K m e 1 ! i1 ! i m ! e n
m
ei
m
g H - 1 L m Hn- m L e i
m
5.76
Similarly:
i e m
1 g
H - 1 L m Hn- m L e i
m
5.77
g ei
m
g N
1 g
H - 1 L m Hn-m L e i
m
H - 1 L m Hn-m L e i
m
A similar result is obtained for the complement of the complement of a reciprocal basis element.
1 g
ej di 1
m
ei ej ei J
m m m
g N ej di j 1
m
ei ej di 1
m m
ei ej di j 1
m m
5.78
The exterior product of a basis element with the complement of any other basis element in the same basis is equal to the corresponding component of the metric tensor times the unit n-element.
ei ej ei Kgjk ek O ei gjk
m m m m m m m
1 g J
ek gij 1
m m ij
ei ej ei Kg
m m m
m jk
ek O ei g
m m
m jk
g N ek g
m
ei ej gij 1
m m
ei ej g
m m
m ij
5.79
ei ej di 1
ei ej di j 1
5.80
ei ej gij 1
ei ej gij 1
5.81
ei ej di 1
m m
ei ej di
m m
5.82
ei ej di j 1
m m
ei ej di j
m m
5.83
ei ej gij 1
m m
ei ej gij
m m
5.84
ei ej g
m m
m ij
ei ej g
m m
m ij
5.85
$$\alpha_m = a_1\wedge a_2\wedge\cdots\wedge a_m$$

Take any other n−m 1-elements $a_{m+1}, a_{m+2}, \ldots, a_n$ such that $a_1, a_2, \ldots, a_m, a_{m+1}, \ldots, a_n$ form an independent set, and thus a basis of $\Lambda^1$. Then by equation 5.76 we have:
$$\overline{\alpha_m} = \overline{a_1\wedge a_2\wedge\cdots\wedge a_m} = \sqrt{g_a}\;(-1)^{K_m}\;a^{m+1}\wedge a^{m+2}\wedge\cdots\wedge a^n$$

where $g_a$ is the determinant of the metric tensor in the a basis. The complement of $a_1\wedge a_2\wedge\cdots\wedge a_m$ is thus a scalar multiple of $a^{m+1}\wedge a^{m+2}\wedge\cdots\wedge a^n$. Since this is evidently simple, the assertion is proven.
5.13 Summary
In this chapter we have shown that by defining the complement operation on a basis of $\Lambda^1$ as

$$\overline{e_i} = \sigma\sum_{j=1}^{n} g_{ij}\,\underline{e_j}$$

and by adopting the complement axiom as the mechanism for extending the complement to higher grade elements, the requirement that

$$\overline{\overline{a_m}} = \pm\,a_m$$

constrains $g_{ij}$ to be symmetric and $\sigma$ to have the value $\pm\frac{1}{\sqrt{g}}$, where $g$ is the determinant of the metric tensor $g_{ij}$.
The metric tensor $g_{ij}$ was introduced as a mapping between the two linear spaces $\Lambda^1$ and $\Lambda^{n-1}$.
To this point none of these concepts has yet been related to notions of interior, inner or scalar products. This will be addressed in the next chapter, where we will also see that the constraint $\sigma = \pm\frac{1}{\sqrt{g}}$ is equivalent to requiring that the magnitude of the unit n-element is unity.
Once we have constrained $g_{ij}$ to be symmetric, we are able to introduce the standard notion of a reciprocal basis. The formulae for complements of basis elements in any of the $\Lambda^m$ then become much simpler to express. This chapter also discusses how the complement operation can be interpreted geometrically both in vector spaces and in bound spaces. In both spaces the complement axiom holds, but its geometric interpretation in the two spaces is quite different.
6.1 Introduction
To this point we have defined three important operations in the Grassmann algebra: the exterior product, the regressive product, and the complement. In this chapter we introduce the interior product, the fourth operation of fundamental importance to the algebra. The interior product of two elements is defined as the regressive product of one element with the complement of the other. Whilst the exterior product of an m-element and a k-element generates an (m+k)-element, the interior product of an m-element and a k-element (m ≥ k) generates an (m−k)-element. This means that the interior product of two elements of equal grade is a 0-element (or scalar). The interior product of an element with itself is a scalar, and it is this scalar that is used to define the measure of the element. The interior product of two 1-elements corresponds to the usual notion of inner, scalar, or dot product. But we will see that the notion of measure is not restricted to 1-elements. Just as one may associate the measure of a vector with a length, the measure of a bivector may be associated with an area, and the measure of a trivector with a volume. If the exterior product of an m-element and a 1-element is zero, then it is known that the 1-element is contained in the m-element. If the interior product of an m-element and a 1-element is zero, then this means that the 1-element is contained in the complement of the m-element. In this case it may be said that the 1-element is orthogonal to the m-element. The basing of the notion of interior product on the notions of regressive product and complement follows here the Grassmannian tradition rather than that of the current literature, which introduces the inner product onto a linear space as an arbitrary extra definition.
We do this in the belief that it is the most straightforward way to obtain consistency within the algebra, and to see and exploit the relationships between the notions of exterior product, regressive product, complement and interior product, and to discover and prove formulae relating them. We use the term 'interior' in addition to 'inner' to signal that the products are not quite the same. In traditional usage the inner product results in a scalar. The interior product is however more general, being able to operate on two elements of any, and perhaps different, grades. We reserve the term inner product for the interior product of two elements of the same grade. An inner product of two elements of grade 1 is called a scalar product. In sum: inner products are scalars whilst, in general, interior products are not. We denote the interior product with a small circle with a 'bar' through it (rendered here as ∘). This is to signify that it has a more extended meaning than the inner product. Thus the interior product of $\alpha_m$ with $\beta_k$ becomes $\alpha_m\circ\beta_k$. This product is zero if m < k. Thus the order is important: for a non-zero product the element of higher grade should be on the left. The interior product has the same left associativity as the negation operator, or minus sign. It is possible to define both left and right interior products, but in practice the added complexity is not rewarded by an increase in utility.
We will see in Chapter 10: The Generalized Product that the interior product of two elements can be expressed as a certain generalized product, which is independent of the order of its factors.
that order, is always a non-negative scalar. That is, $\alpha_m\vee\overline{\alpha_m}\ge 0$ for any element of any grade m in a space of any dimension. As is well known, this non-negativeness is a naturally accepted property of Euclidean inner products. This immediately suggests a definition for the inner product of an element with itself as $\alpha_m\vee\overline{\alpha_m}$, and, more generally, for the inner product of two elements of equal grade:

$$\alpha_m\circ\beta_m = \alpha_m\vee\overline{\beta_m}\,,\qquad \alpha_m\circ\beta_m\in\Lambda^0\qquad\qquad 6.1$$
The result of taking the regressive product of $\overline{\alpha_m}$ and $\alpha_m$ in the reverse order introduces a potential change of sign.

$$\overline{\alpha_m}\vee\alpha_m = (-1)^{m(n-m)}\;\alpha_m\vee\overline{\alpha_m}$$

Because of the form of Grassmann's definition of the complement, the element $\alpha_m\vee\overline{\alpha_m}$ is always a non-negative scalar. However, it is clear from the above that the element $\overline{\alpha_m}\vee\alpha_m$ may well not be. And furthermore, it may depend on the dimension of the space. Hence in the definition of the inner product it is important that the complemented element be the second factor. A scalar product is the inner product of two 1-elements.
$$\alpha_m\circ\beta_k = \alpha_m\vee\overline{\beta_k}\,,\qquad \alpha_m\circ\beta_k\in\Lambda^{m-k}\qquad\qquad 6.2$$
Thus, the interior product does not depend on the dimension of the space, since the dependences on the dimension of the regressive product and the complement operations 'cancel' each other out. This makes it, like the exterior product, of special importance in the algebra. If m < k, then $\alpha_m\circ\beta_k$ is necessarily zero (otherwise the grade of the product would be negative), hence:

$$\alpha_m\circ\beta_k = 0\,,\qquad m < k\qquad\qquad 6.3$$
An important convention
In order to avoid unnecessarily distracting caveats on every formula involving interior products, in the rest of this book we will suppose that the grade of the first factor is always greater than or equal to the grade of the second factor. The formulae will remain true even if this is not the case, but they will be trivially so by virtue of their terms reducing to zero.
$$\alpha_m\circ\beta_k = \alpha_m\vee\overline{\beta_k}\in\Lambda^{m-k}\,,\quad m\ge k\,;\qquad \alpha_m\,\overline{\circ}\,\beta_k = \overline{\alpha_m}\vee\beta_k\in\Lambda^{k-m}\,,\quad k\ge m\qquad\qquad 6.4$$
Here, $\alpha_m\circ\beta_k$ is the right interior product, and $\alpha_m\,\overline{\circ}\,\beta_k$ is the left interior product. By interchanging the order of the factors in one of the regressive products, and then interchanging the m-element with the k-element, we can see that they are related by the formula:

$$\alpha_m\circ\beta_k = (-1)^{k(n-m)}\;\beta_k\,\overline{\circ}\,\alpha_m\qquad\qquad 6.5$$
In this book, we do not follow this dual-definition approach since we have found it to be an unnecessary complication, especially with its dependence on grade and the dimension of the space. In Chapter 10 we will discover a new product (the generalized Grassmann product) which leads to the same interior product independent of the order of its factors, thus retrieving, in this wider system, the sought-after symmetry.
Historical note
Grassmann and workers in the Grassmannian tradition define the interior product of two elements as the product of one with the complement of the other, the product being either exterior or regressive depending on which interpretation produces a non-zero result. Furthermore, when the grades of the elements are equal, it is defined either way. This definition involves the confusion between scalars and n-elements discussed in Chapter 5, Section 5.1 (equivalent to assuming a Euclidean metric and identifying scalars with pseudoscalars). It is to obviate this inconsistency and restriction on generality that the approach adopted here bases its definition of the interior product explicitly on the regressive product and the complement.
$$\alpha_m\in\Lambda^m\,,\quad \beta_k\in\Lambda^k\quad\Longrightarrow\quad \alpha_m\circ\beta_k\in\Lambda^{m-k}\qquad\qquad 6.6$$
Thus, in contradistinction to the regressive product, the grade of an interior product does not depend on the dimension of the underlying linear space. If the grade of the first factor is less than that of the second, the interior product is zero.
$$\alpha_m\circ\beta_k = 0\,,\qquad m < k\qquad\qquad 6.7$$
ab
m k
g a bg
r m k r
6.8
ab
m k
g a bg
r m k r
6.9
a1 a
m m
6.10
aa aa a a
m m m
6.11
11 1
6.12
a 1a
m
6.13
9: The inverse of a scalar with respect to the interior product is its complement.
The interior product of the complement of a scalar and the reciprocal of the scalar is the unit nelement.
$$\overline{a}\circ\frac{1}{a} = \overline{1}\qquad\qquad 6.14$$
10: The interior product of two elements is congruent to the interior product of their complements in reverse order.
The interior product of two elements is equal (apart from a possible sign) to the interior product of their complements in reverse order.
$$\alpha_m\circ\beta_k = (-1)^{(n-m)(m-k)}\;\overline{\beta_k}\circ\overline{\alpha_m}\qquad\qquad 6.15$$
If the elements are of the same grade, the interior product of two elements is equal to the interior product of their complements. (It will be shown later that, because of the symmetry of the metric tensor, the interior product of two elements of the same grade is symmetric, hence the order of the factors on either side of the equation may be reversed.)
$$\alpha_m\circ\beta_m = \overline{\beta_m}\circ\overline{\alpha_m}\qquad\qquad 6.16$$
$$\alpha_m\circ 0 = 0 = 0\circ\alpha_m\qquad\qquad 6.17$$

$$\left(\alpha_m + \beta_m\right)\circ\gamma_r = \alpha_m\circ\gamma_r + \beta_m\circ\gamma_r\qquad\qquad 6.18$$

$$\alpha_m\circ\left(\beta_r + \gamma_r\right) = \alpha_m\circ\beta_r + \alpha_m\circ\gamma_r\qquad\qquad 6.19$$
Orthogonality
Orthogonality is a concept generated by the complement operation. If x is a 1-element and $\alpha_m$ is a simple m-element, then x and $\alpha_m$ may be said to be linearly dependent if and only if $\alpha_m\wedge x = 0$, that is, if x is contained in $\alpha_m$. Similarly, x and $\alpha_m$ are said to be orthogonal if and only if $\alpha_m\circ x = 0$, that is, if x is contained in $\overline{\alpha_m}$. It may also be said that a 1-element x is orthogonal to a simple element $\alpha_m$ if and only if their interior product is zero.
all xi contained in x.
k
orthogonal to $\alpha_m$. To show this, suppose it to be (without loss of generality) $x_1$. Then by formula 7 we can write $\alpha_m\circ\left(x_1\wedge x_2\wedge\cdots\wedge x_k\right)$ as $\left(\alpha_m\circ x_1\right)\circ\left(x_2\wedge\cdots\wedge x_k\right)$, whence it becomes
Just as the vanishing of the exterior product of two simple elements $\alpha_m$ and $x_k$ implies only that they have some 1-element in common, so the vanishing of their interior product implies only that $\overline{\alpha_m}$ and $x_k$ (m ≥ k) have some 1-element in common, and conversely. That is, there is a 1-element x such that:
$$\alpha_m\circ x_k = 0\qquad\Longleftrightarrow\qquad \overline{\alpha_m}\wedge x = 0\,,\quad x_k\wedge x = 0\qquad\qquad 6.20$$
Since B∘x is expressed as a linear combination of $x_1$ and $x_2$ it is clearly contained in the bivector B. Thus

$$B\wedge\left(B\circ x\right) = 0$$

The resulting vector B∘x is also orthogonal to x. We can show this by taking its scalar product with x.
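Both claims are easy to check numerically. The sketch below (Python, Euclidean scalar products) uses the expansion $(x_1\wedge x_2)\circ x = (x_1\circ x)\,x_2 - (x_2\circ x)\,x_1$ from the Interior Common Factor Theorem discussed later; note that the orthogonality conclusion is insensitive to the overall sign convention of that expansion:

```python
# B = x1 ^ x2; B∘x expands (Interior Common Factor Theorem, 1-element case) as
#   B∘x = (x1∘x) x2 - (x2∘x) x1
# The result lies in B (so B ^ (B∘x) = 0) and is orthogonal to x.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def interior_bivector_vector(x1, x2, x):
    s1, s2 = dot(x1, x), dot(x2, x)
    return tuple(s1 * b - s2 * a for a, b in zip(x1, x2))

def det3(r1, r2, r3):
    # triple product: coefficient of e1^e2^e3 in r1 ^ r2 ^ r3
    a, b, c = r1; d, e, f = r2; g, h, i = r3
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

x1, x2, x = (1.0, 2.0, 0.0), (0.0, 1.0, 3.0), (2.0, -1.0, 1.0)
Bx = interior_bivector_vector(x1, x2, x)
print(abs(dot(Bx, x)) < 1e-12)        # True: B∘x is orthogonal to x
print(abs(det3(x1, x2, Bx)) < 1e-12)  # True: B ^ (B∘x) = 0
```

The particular vectors are arbitrary; the orthogonality holds identically, since $(x_1\circ x)(x_2\circ x) - (x_2\circ x)(x_1\circ x) = 0$.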
Anticommutativity axioms

$$\alpha_m\wedge\beta_k = (-1)^{mk}\;\beta_k\wedge\alpha_m$$

$$\alpha_m\vee\beta_k = (-1)^{(n-k)(n-m)}\;\beta_k\vee\alpha_m\qquad\qquad 6.21$$
Complement axioms

$$\overline{\alpha_m\wedge\beta_k} = \overline{\alpha_m}\vee\overline{\beta_k}\qquad\qquad 6.22$$

$$\overline{\alpha_m\vee\beta_k} = \overline{\alpha_m}\wedge\overline{\beta_k}\qquad\qquad 6.23$$
Formula 1

$$\alpha_m\circ\beta_k = \alpha_m\vee\overline{\beta_k} = (-1)^{k(n-m)}\;\overline{\beta_k}\vee\alpha_m\qquad\qquad 6.24$$
Formula 2
a b a b H - 1 L m Hn-m L a b H - 1 L m Hn-m L a b
m k m k m k m k
6.25
Formula 3
a b a b a b H- 1Lm k b a
m k m k m k k m
a b a b a b H- 1L m k b a
m k m k m k k m
6.26
Formula 4

$$\alpha_m\circ\overline{\beta_k} = \alpha_m\vee\overline{\overline{\beta_k}} = (-1)^{k(n-k)}\;\alpha_m\vee\beta_k$$

$$\alpha_m\circ\overline{\beta_k} = (-1)^{k(n-k)}\;\alpha_m\vee\beta_k\qquad\qquad 6.27$$
Formula 5
a b H - 1 L k Hn-kL a b H - 1 L k Hn-kL+m Hn-kL b a
m k m k k m
a b H - 1 L H m +kL Hn-kL b a
m k k m
6.28
Formula 6
a b a b H - 1 L k Hn-kL a b H - 1 L k Hn-kL+k Hn-m L b a
m k m k m k k m
a b H - 1 L k H m +kL b a
m k k m
6.29
Formula 7
a b a b a b H - 1 L m Hn-m L+k Hn -k L a b
m k m k m k m k
a b a b H- 1LHm+kL Hn-Hm+kLL a b
m k m k m k
6.30
Formula 8
a b H - 1 L k Hn-kL a b H - 1 L k Hn-kL a b
m k m k m k
a b H - 1 L k Hn-kL a b
m k m k
6.31
Formula 9
a b H - 1 L k Hn-kL a b H - 1 L k Hn-kL+m Hn-m L a b
m k m k m k
a b H - 1 L k Hn-kL+ m Hn- m L a b
m k m k
6.32
$$\alpha_m\circ\left(\beta_{k_1}\wedge\beta_{k_2}\wedge\cdots\wedge\beta_{k_p}\right) = \left(\cdots\left(\left(\alpha_m\circ\beta_{k_1}\right)\circ\beta_{k_2}\right)\cdots\right)\circ\beta_{k_p}\qquad\qquad 6.33$$
Thus, the interior product is left-associative. If the interior product of any of the $\beta_{k_i}$ with $\alpha_m$ is zero, then the complete interior product is zero. We can see this by rearranging the factors in the exterior product to bring that factor into first position. By interchanging the order of the factors in the exterior product we can derive alternative expressions. For the case of two factors we have
$$\alpha_m\circ\left(\beta_k\wedge\gamma_r\right) = (-1)^{kr}\;\alpha_m\circ\left(\gamma_r\wedge\beta_k\right) = (-1)^{kr}\;\left(\alpha_m\circ\gamma_r\right)\circ\beta_k\qquad\qquad 6.34$$
The exterior product operator has a higher precedence than the interior product operator.

{α∧β∘γ, α∧(β∘γ), (α∧β)∘γ}  →  {(α∧β)∘γ, α∧(β∘γ), (α∧β)∘γ}

The regressive product operator has a higher precedence than the interior product operator.

{α∨β∘γ, α∨(β∘γ), (α∨β)∘γ}  →  {(α∨β)∘γ, α∨(β∘γ), (α∨β)∘γ}

The exterior product operator has a higher precedence than the regressive product operator.

{α∧β∨γ, α∧(β∨γ), (α∧β)∨γ}  →  {(α∧β)∨γ, α∧(β∨γ), (α∧β)∨γ}
!3 ; InteriorToRegressive@AD :a b g, a b g>
m k j m k j
Converting interior products to regressive products is independent of the dimension of the space since the definitional formula is used.
In a 4-space however, this is not so, so there may be extra signs resulting from the conversion.
!4 ; InteriorToExterior@AD :H- 1Lk+m H- 1Lm a b g, H- 1Lm a H- 1Lk b g >
m k j m k j
, H - 1 L 1+j+k+j k+m +j m g 1 a b
j m k
>
, H - 1 L k+j k+m +j m g 1 a b
j m k
>
And in a 5-space, the signs return to those of the (also odd) 3-space.
, H - 1 L 1+j+k+j k+m +j m g 1 a b
j m k
>
To create a second element $e_2$ orthogonal to $e_1$ within the space concerned, we choose a second element of the space, $a_2$ say, and form the interior product.

$$e_2 = \left(e_1\wedge a_2\right)\circ e_1$$

From our discussion of the interior product of a simple bivector and a vector above, we can see that $e_2$ is orthogonal to $e_1$. Alternatively we can take their interior product to see that it is zero:

$$e_2\circ e_1 = \left(\left(e_1\wedge a_2\right)\circ e_1\right)\circ e_1 = \left(e_1\wedge a_2\right)\circ\left(e_1\wedge e_1\right) = 0$$
We can also see that $a_2$ lies in the 2-element $e_1\wedge e_2$ (that is, $a_2$ is a linear combination of $e_1$ and $e_2$) by taking their exterior product and expanding it.

$$e_1\wedge\left(\left(e_1\wedge a_2\right)\circ e_1\right)\wedge a_2 = e_1\wedge\left(\left(e_1\circ e_1\right)a_2 - \left(e_1\circ a_2\right)e_1\right)\wedge a_2 = 0$$
We will develop the formula used for this expansion in Section 6.4: The Interior Common Factor Theorem. For the meantime however, all we need to observe is that e2 must be a linear combination of e1 and a2 , because it is expressed in no other terms. We create the rest of the orthogonal elements in a similar manner.
$$e_3 = \left(e_1\wedge e_2\wedge a_3\right)\circ\left(e_1\wedge e_2\right)\qquad e_4 = \left(e_1\wedge e_2\wedge e_3\wedge a_4\right)\circ\left(e_1\wedge e_2\wedge e_3\right)\qquad\ldots$$

In general then, the (i+1)th element $e_{i+1}$ of the orthogonal set $e_1, e_2, e_3, \ldots$ is obtained from the previous i elements and the (i+1)th element $a_{i+1}$ of the original set.

$$e_{i+1} = \left(e_1\wedge e_2\wedge\cdots\wedge e_i\wedge a_{i+1}\right)\circ\left(e_1\wedge e_2\wedge\cdots\wedge e_i\right)\qquad\qquad 6.35$$
Following a similar procedure to the one used for the second element, we can easily show that $e_{i+1}$ is orthogonal to $e_1\wedge e_2\wedge\cdots\wedge e_i$ and hence to each of the $e_1, e_2, \ldots, e_i$, and that $a_i$ lies in $e_1\wedge e_2\wedge\cdots\wedge e_i$ and hence is a linear combination of $e_1, e_2, \ldots, e_i$. This procedure is, as might be expected, equivalent to the Gram-Schmidt orthogonalization process.
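The equivalent Gram-Schmidt recursion encoded by formula 6.35 is easy to sketch in plain Python (Euclidean scalar products assumed; the elements produced are unnormalized, and may differ from those of formula 6.35 by positive scalar factors):

```python
# Unnormalized Gram-Schmidt: e_{i+1} is a_{i+1} minus its projections onto the
# previously constructed orthogonal elements e_1, ..., e_i.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def orthogonalize(vectors):
    basis = []
    for a in vectors:
        e = list(a)
        for b in basis:
            coeff = dot(a, b) / dot(b, b)
            e = [x - coeff * y for x, y in zip(e, b)]
        basis.append(e)
    return basis

a1, a2, a3 = (1.0, 1.0, 0.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)
e1, e2, e3 = orthogonalize([a1, a2, a3])
print(abs(dot(e1, e2)) < 1e-12)   # True
print(abs(dot(e1, e3)) < 1e-12)   # True
print(abs(dot(e2, e3)) < 1e-12)   # True

# The first step matches the expansion (e1 ^ a2)∘e1 = (e1∘e1) a2 - (a2∘e1) e1
# up to the positive factor e1∘e1:
step2 = [dot(e1, e1) * x - dot(a2, e1) * y for x, y in zip(a2, e1)]
print(all(abs(u - dot(e1, e1) * v) < 1e-12 for u, v in zip(step2, e2)))  # True
```

The vectors a1, a2, a3 are arbitrary illustrative inputs, not drawn from the text.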
Interchanging the order of the factors and replacing $\alpha_m$ and $\beta_k$ by their complements gives
: g a g b g a b g L, j m + k>
j m j k j m k j j
ga gb g ab
j m j k j m k
g
j
j m+k
6.36
a b ai b ai
m s i=1 m -p s p
m O. k
Suppose now that b b, k = n|s = m|p. Substituting for b and using the formula
s k s
a b a b 1 (which is derived in the section below: The Inner Product) allows us to write
m m m m
m m m m
ai b 1
k k
$$\alpha_m\circ\beta_k = \sum_{i=1}^{\nu}\left(\alpha^i_k\circ\beta_k\right)\alpha^i_{m-k}\qquad\qquad 6.37$$

$$\alpha_m = \alpha^1_k\wedge\alpha^1_{m-k} = \alpha^2_k\wedge\alpha^2_{m-k} = \cdots = \alpha^\nu_k\wedge\alpha^\nu_{m-k}$$

where $k\le m$, $\nu = \binom{m}{k}$, and $\alpha_m$ is simple.
The formula indicates that an interior product of a simple element with another, not necessarily simple, element of equal or lower grade may be expressed in terms of the factors of the simple element of higher grade. When $\alpha_m$ is not simple, it may always be expressed as the sum of simple components: $\alpha_m = \alpha_m^1 + \alpha_m^2 + \alpha_m^3 + \cdots$ (the i in the $\alpha_m^i$ are superscripts, not powers). From the linearity of the interior product, the Interior Common Factor Theorem may then be applied to each of the simple terms.

$$\alpha_m\circ\beta_k = \left(\alpha_m^1 + \alpha_m^2 + \alpha_m^3 + \cdots\right)\circ\beta_k = \alpha_m^1\circ\beta_k + \alpha_m^2\circ\beta_k + \alpha_m^3\circ\beta_k + \cdots$$

Thus, the Interior Common Factor Theorem may be applied to the interior product of any two elements, simple or non-simple. Indeed, the application to components in the preceding expansion may be applied to the components of $\alpha_m$ even if it is simple.
To make the Interior Common Factor Theorem more explicit, suppose $\alpha_m = a_1\wedge a_2\wedge\cdots\wedge a_m$; then:

$$\left(a_1\wedge a_2\wedge\cdots\wedge a_m\right)\circ\beta_k = \sum_{i_1<\cdots<i_k}\left(\left(a_{i_1}\wedge\cdots\wedge a_{i_k}\right)\circ\beta_k\right)(-1)^{K_k}\;a_1\wedge\cdots\wedge\widehat{a_{i_1}}\wedge\cdots\wedge\widehat{a_{i_k}}\wedge\cdots\wedge a_m\qquad\qquad 6.38$$

$$K_k = \sum_{\gamma=1}^{k} i_\gamma + \frac{1}{2}\,k\left(k+1\right)$$
β is a scalar

$$\left(a_1\wedge a_2\wedge\cdots\wedge a_m\right)\circ\beta = \beta\,\left(a_1\wedge a_2\wedge\cdots\wedge a_m\right)\qquad\qquad 6.39$$

β is a 1-element

$$\left(a_1\wedge a_2\wedge\cdots\wedge a_m\right)\circ\beta = \sum_{i=1}^{m}\left(a_i\circ\beta\right)(-1)^{i-1}\;a_1\wedge\cdots\wedge\widehat{a_i}\wedge\cdots\wedge a_m\qquad\qquad 6.40$$
InteriorCommonFactorTheorem[α₂∘β₁]

$$\alpha_2\circ\beta_1 = \left(a_1\wedge a_2\right)\circ\beta = -\left(a_2\circ\beta\right)a_1 + \left(a_1\circ\beta\right)a_2$$

InteriorCommonFactorTheorem[α₃∘β₁]

$$\alpha_3\circ\beta_1 = \left(a_1\wedge a_2\wedge a_3\right)\circ\beta = \left(a_3\circ\beta\right)a_1\wedge a_2 - \left(a_2\circ\beta\right)a_1\wedge a_3 + \left(a_1\circ\beta\right)a_2\wedge a_3$$

InteriorCommonFactorTheorem[α₄∘β₁]

$$\alpha_4\circ\beta_1 = \left(a_1\wedge a_2\wedge a_3\wedge a_4\right)\circ\beta = -\left(a_4\circ\beta\right)a_1\wedge a_2\wedge a_3 + \left(a_3\circ\beta\right)a_1\wedge a_2\wedge a_4 - \left(a_2\circ\beta\right)a_1\wedge a_3\wedge a_4 + \left(a_1\circ\beta\right)a_2\wedge a_3\wedge a_4$$
β is a 2-element

$$\left(a_1\wedge a_2\wedge\cdots\wedge a_m\right)\circ\beta_2 = \sum_{i<j}\left(\left(a_i\wedge a_j\right)\circ\beta_2\right)(-1)^{i+j-1}\;a_1\wedge\cdots\wedge\widehat{a_i}\wedge\cdots\wedge\widehat{a_j}\wedge\cdots\wedge a_m\qquad\qquad 6.41$$

InteriorCommonFactorTheorem[α₃∘β₂]

$$\alpha_3\circ\beta_2 = \left(a_1\wedge a_2\wedge a_3\right)\circ\beta_2 = \left(\beta_2\circ\left(a_2\wedge a_3\right)\right)a_1 - \left(\beta_2\circ\left(a_1\wedge a_3\right)\right)a_2 + \left(\beta_2\circ\left(a_1\wedge a_2\right)\right)a_3$$

InteriorCommonFactorTheorem[α₄∘β₂]

$$\alpha_4\circ\beta_2 = \left(a_1\wedge a_2\wedge a_3\wedge a_4\right)\circ\beta_2 = \left(\beta_2\circ\left(a_3\wedge a_4\right)\right)a_1\wedge a_2 - \left(\beta_2\circ\left(a_2\wedge a_4\right)\right)a_1\wedge a_3 + \left(\beta_2\circ\left(a_2\wedge a_3\right)\right)a_1\wedge a_4 + \left(\beta_2\circ\left(a_1\wedge a_4\right)\right)a_2\wedge a_3 - \left(\beta_2\circ\left(a_1\wedge a_3\right)\right)a_2\wedge a_4 + \left(\beta_2\circ\left(a_1\wedge a_2\right)\right)a_3\wedge a_4$$
β is a 3-element

InteriorCommonFactorTheorem[α₄∘β₃]

$$\alpha_4\circ\beta_3 = \left(a_1\wedge a_2\wedge a_3\wedge a_4\right)\circ\beta_3 = -\left(\beta_3\circ\left(a_2\wedge a_3\wedge a_4\right)\right)a_1 + \left(\beta_3\circ\left(a_1\wedge a_3\wedge a_4\right)\right)a_2 - \left(\beta_3\circ\left(a_1\wedge a_2\wedge a_4\right)\right)a_3 + \left(\beta_3\circ\left(a_1\wedge a_2\wedge a_3\right)\right)a_4$$
$$A_m\circ B_k = \sum_{i=1}^{\nu}\left(A^i_k\circ B_k\right)A^i_{m-k}\qquad\qquad 6.42$$

$$A_m = A^1_k\wedge A^1_{m-k} = A^2_k\wedge A^2_{m-k} = \cdots = A^\nu_k\wedge A^\nu_{m-k}$$

where $k\le m$, $\nu = \binom{m}{k}$, and $A_m$ is simple.
Just as we did for the Common Factor Theorem in Chapter 3, we can express this in a computational form. If
#k BAF :A1 , A2 , !, An >
m
#k BAF : A1 , A2 , !, An >
m
m -k
m -k
m -k
6.43
then
n
A B K"k BAF
m k i=1
m PiT
BO "k BAF
k
m PiT
6.44
where k m, and n ! K
m O, and A is simple. k m
6.45
6.46
(The Dot function is an operation on lists, and should not be confused with the interior product.) In the next section we will discuss the inner product (where the grades of the factors are equal), and show that the inner product is symmetric. Thus we can write the above formulas in the alternative forms:
n
A B KB "k BAF
m k i=1 k
m PiT
O "k BAF
m PiT
6.47
6.48
6.49
Examples

Here are the three computational forms compared to the result from the InteriorCommonFactorTheorem function which implements the initially derived theorem.

m = 3; k = 2; ν = 3;

Each of the three computational forms returns the same expansion:

$$A_3\circ B_2 = \left(B\circ\left(A_2\wedge A_3\right)\right)A_1 - \left(B\circ\left(A_1\wedge A_3\right)\right)A_2 + \left(B\circ\left(A_1\wedge A_2\right)\right)A_3$$

as does InteriorCommonFactorTheorem[A₃∘B₂]:

$$A_3\circ B_2 = \left(A_1\wedge A_2\wedge A_3\right)\circ B_2 = \left(B\circ\left(A_2\wedge A_3\right)\right)A_1 - \left(B\circ\left(A_1\wedge A_3\right)\right)A_2 + \left(B\circ\left(A_1\wedge A_2\right)\right)A_3$$
$$\alpha_m\circ\beta_m = \alpha_m\vee\overline{\beta_m}$$

$$\alpha_m\wedge\overline{\beta_m} = \left(\alpha_m\circ\beta_m\right)\,1_n\qquad\qquad 6.50$$

The dual to this special case of the Common Factor Axiom says that an exterior product of two elements whose grades sum to the dimension of the space is equal to the (scalar) regressive product of the two elements multiplied by the unit n-element.

$$\left\{\alpha_m\wedge\beta_k = \left(\alpha_m\vee\beta_k\right)\,1_n\,,\qquad m + k - n = 0\right\}$$

Now that we have defined the complement operation, the unit n-element $1_n$ becomes $\overline{1}$. We can therefore write

$$\alpha_m\wedge\overline{\beta_m} = \left(\alpha_m\circ\beta_m\right)\overline{1}\qquad\qquad 6.51$$
Taking the complement of this equation gives another form for the inner product.

$$\alpha_m\circ\beta_m = \overline{\alpha_m}\circ\overline{\beta_m}\qquad\qquad 6.52$$

$$\alpha_m\circ\beta_m = \beta_m\circ\alpha_m\qquad\qquad 6.53$$
This symmetry is, of course, a property that we would expect of an inner product. However it is interesting to remember that the major result which permits this symmetry is that the complement operation has been required to satisfy the complement of a complement axiom, that is, that the complement of the complement of an element should be, apart from a possible sign, equal to the element itself.
$$\overline{\alpha_m}\circ\overline{\beta_m} = \alpha_m\circ\beta_m\qquad\qquad 6.54$$
$$\alpha_m\circ\left(\beta_{k_1}\wedge\beta_{k_2}\wedge\cdots\wedge\beta_{k_p}\right)$$

or, by rearranging the b factors to bring $b_j$ to the beginning of the product, as:

$$\left(\left(a_1\wedge a_2\wedge\cdots\wedge a_m\right)\circ b_j\right)\circ\left((-1)^{j-1}\;b_1\wedge b_2\wedge\cdots\wedge\widehat{b_j}\wedge\cdots\wedge b_m\right)$$

The Interior Common Factor Theorem gives the first factor of this expression as:

$$\left(a_1\wedge a_2\wedge\cdots\wedge a_m\right)\circ b_j = \sum_{i=1}^{m}\left(a_i\circ b_j\right)(-1)^{i-1}\;a_1\wedge a_2\wedge\cdots\wedge\widehat{a_i}\wedge\cdots\wedge a_m$$
$$\left(a_1\wedge a_2\wedge\cdots\wedge a_m\right)\circ\left(b_1\wedge b_2\wedge\cdots\wedge b_m\right) = \sum_{i=1}^{m}\left(a_i\circ b_j\right)(-1)^{i+j}\;\left(a_1\wedge\cdots\wedge\widehat{a_i}\wedge\cdots\wedge a_m\right)\circ\left(b_1\wedge\cdots\wedge\widehat{b_j}\wedge\cdots\wedge b_m\right)\qquad\qquad 6.55$$

If this process is repeated, we find the strikingly simple formula:

$$\left(a_1\wedge a_2\wedge\cdots\wedge a_m\right)\circ\left(b_1\wedge b_2\wedge\cdots\wedge b_m\right) = \mathrm{Det}\left[a_i\circ b_j\right]\qquad\qquad 6.56$$
ComposeScalarProductMatrix

ComposeScalarProductMatrix[A∘B] develops the inner product of A and B into the matrix of the scalar products of their factors. You can use Mathematica's inbuilt Det operation on this matrix to get its determinant, and hence the inner product.

M1 = ComposeScalarProductMatrix[α₃∘β₃]; MatrixForm[M1]

a1∘b1  a1∘b2  a1∘b3
a2∘b1  a2∘b2  a2∘b3
a3∘b1  a3∘b2  a3∘b3

Det[M1]

−(a1∘b3)(a2∘b2)(a3∘b1) + (a1∘b2)(a2∘b3)(a3∘b1) + (a1∘b3)(a2∘b1)(a3∘b2) − (a1∘b1)(a2∘b3)(a3∘b2) − (a1∘b2)(a2∘b1)(a3∘b3) + (a1∘b1)(a2∘b2)(a3∘b3)
If you just want to display M1 to look like a determinant, you can use Mathematica's Grid and BracketingBar.
BracketingBar[Grid[M1]]

| a1∘b1  a1∘b2  a1∘b3 |
| a2∘b1  a2∘b2  a2∘b3 |
| a3∘b1  a3∘b2  a3∘b3 |
If we reverse the order of the two elements and then take the transpose of the resulting scalar product matrix we get a matrix M2 which differs from M1 only by the ordering of its scalar products.
M2 = ComposeScalarProductMatrix[β₃∘α₃]; MatrixForm[Transpose[M2]]

b1∘a1  b2∘a1  b3∘a1
b1∘a2  b2∘a2  b3∘a2
b1∘a3  b2∘a3  b3∘a3
This shows clearly how the symmetry of the inner product depends on the symmetry of the scalar product (which in turn depends on the symmetry of the metric tensor).
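Formula 6.56 can also be checked numerically. Under a Euclidean metric in 3-space, the determinant of the matrix of scalar products M = A·Bᵀ equals det A · det B, where the rows of A and B are the factor vectors (a Python sketch with arbitrary illustrative vectors):

```python
# Inner product of two trivectors as the determinant of scalar products
# (formula 6.56), checked against det(A) * det(B), since M = A.B^T.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [(1, 2, 0), (0, 1, 3), (2, 0, 1)]
B = [(1, 0, 1), (2, 1, 0), (0, 3, 1)]

M = [[dot(a, b) for b in B] for a in A]   # M[i][j] = a_i ∘ b_j
inner = det3(M)
print(inner == det3(A) * det3(B))         # True
```

The symmetry of the inner product corresponds to the fact that transposing M (that is, swapping the roles of the a's and b's) leaves the determinant unchanged.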
ToScalarProducts
To obtain the expansion of the inner product more directly, you can use ToScalarProducts.
ToScalarProducts[α₃∘β₃]

−(a1∘b3)(a2∘b2)(a3∘b1) + (a1∘b2)(a2∘b3)(a3∘b1) + (a1∘b3)(a2∘b1)(a3∘b2) − (a1∘b1)(a2∘b3)(a3∘b2) − (a1∘b2)(a2∘b1)(a3∘b3) + (a1∘b1)(a2∘b2)(a3∘b3)
In fact, ToScalarProducts works to convert any interior products in a Grassmann expression to scalar products. For example, here is an expression involving three interior products.
!5; X1 = x∘a + b (e1∧e2∧e3)∘e1 + c (e1∧e2)∘(e2∧e3);
Applying ToScalarProducts expands any interior products until the expression only contains scalar products.
X2 = ToScalarProducts[X1]

a∘x + c (−(e1∘e3)(e2∘e2) + (e1∘e2)(e2∘e3)) + b (e1∘e3) e1∧e2 − b (e1∘e2) e1∧e3 + b (e1∘e1) e2∧e3
You can convert these scalar products to metric elements with ToMetricElements. For the default Euclidean metric you would get:
X3 = ToMetricElements[X2]

a∘x + b e2∧e3
For a general metric you get the original expression with the general metric elements substituted for the scalar products.
DeclareMetric[g]; X4 = ToMetricElements[X2]

a∘x − c g₁,₃ g₂,₂ + c g₁,₂ g₂,₃ + b g₁,₃ e1∧e2 − b g₁,₂ e1∧e3 + b g₁,₁ e2∧e3
discussion of cobases, and to Chapter 5 for a discussion of the complement and the metric tensor.
$$\alpha_m\circ\beta_m = \beta_m\circ\alpha_m = \overline{\alpha_m}\circ\overline{\beta_m} = \overline{\beta_m}\circ\overline{\alpha_m}\qquad\qquad 6.58$$
$$e_i\circ e^j = \delta_i{}^j\,,\qquad e_{\mathbf{i}}\circ e^{\mathbf{j}} = \delta_{\mathbf{i}}{}^{\mathbf{j}}\qquad\qquad 6.59$$

$$e_i\circ e_j = g_{ij}\,,\qquad e_{\mathbf{i}}\circ e_{\mathbf{j}} = g_{\mathbf{ij}}\qquad\qquad 6.60$$

$$e^i\circ e^j = g^{ij}\,,\qquad e^{\mathbf{i}}\circ e^{\mathbf{j}} = g^{\mathbf{ij}}\qquad\qquad 6.61$$
$$\left|\alpha_m\right| = \sqrt{\alpha_m\circ\alpha_m}\qquad\qquad 6.62$$
$$\left|a\right| = \sqrt{a^2}\qquad\qquad 6.63$$
The measure of unity (or its negative) is, as would be expected, equal to unity.

$$\left|1\right| = 1\qquad \left|-1\right| = 1\qquad\qquad 6.64$$
The measure of an n-element $\alpha_n = a\,e_1\wedge\cdots\wedge e_n$ is the absolute value of its single scalar coefficient multiplied by the square root of the determinant g of the metric tensor (which, in this book, we assume to be positive).

$$\alpha_n = a\,e_1\wedge\cdots\wedge e_n\quad\Longrightarrow\quad \left|\alpha_n\right| = \sqrt{a^2\,g} = \sqrt{g}\,\left|a\right|\qquad\qquad 6.65$$
The measure of the unit n-element (or its negative) is, as would be expected, equal to unity.

$$\left|\overline{1}\right| = 1\qquad \left|-\overline{1}\right| = 1\qquad\qquad 6.66$$
The measure of an element is equal to the measure of its complement since, as we have shown above, $\alpha_m\circ\alpha_m = \overline{\alpha_m}\circ\overline{\alpha_m}$.

$$\left|\alpha_m\right| = \left|\overline{\alpha_m}\right|\qquad\qquad 6.67$$
(The different heights of the bracketing bars in this formula are an artifact of Mathematica's typesetting, and are inconsequential to the meaning of the notation.) The square of the measure of a simple element $\alpha_m$ is equal to the determinant of the scalar products of its factors.

$$\left|\alpha_m\right|^2 = \mathrm{Det}\left[a_i\circ a_j\right]\qquad\qquad 6.68$$
In a Euclidean metric, the measure of an element is given by the familiar 'root sum of squares' formula, yielding a measure that is always real and positive.

$$\alpha_m = \sum a_{\mathbf{i}}\,e_{\mathbf{i}}\quad\Longrightarrow\quad \left|\alpha_m\right| = \sqrt{\sum a_{\mathbf{i}}^{\,2}}\qquad\qquad 6.69$$
Unit elements
The unit m-element associated with the element $\alpha_m$ is denoted $\hat{\alpha}_m$ and may now be defined as:

$$\hat{\alpha}_m = \frac{\alpha_m}{\left|\alpha_m\right|}\qquad\qquad 6.70$$
The unit scalar â corresponding to the scalar a is equal to +1 if a is positive and to −1 if a is negative:

â = a/|a| = +1 (a > 0),  −1 (a < 0)    (6.71)

In particular:

1̂ = 1    (6.72)

For an n-element α_n = a e1∧⋯∧en:

α̂_n = α_n / |α_n| = (a e1∧⋯∧en)/(|a| √g) = +1_n (a > 0),  −1_n (a < 0)    (6.73)

1̂_n = 1_n,   (−1_n)^ = −1_n    (6.74)

The measure of any unit element is, of course, unity:

|α̂_m| = 1    (6.75)
Calculating measures
To calculate the measure of any simple element you can use the GrassmannAlgebra function Measure.
Measure[x∧y]
√(−(x∘y)² + (x∘x)(y∘y))

Measure will compute the measure of a simple element using the currently declared metric.
For example, with the default Euclidean metric, the measure of this 3-element is 5.

Measure[(2 e1 − 3 e2)∧(e1 + e2 + e3)∧(5 e2 + e3)]
5
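The same computation can be sketched outside Mathematica via the Gram determinant of formula 6.68; this Python fragment is our own illustration (the function names are not part of GrassmannAlgebra), assuming the default Euclidean metric:

```python
import math
from itertools import permutations

def dot(u, v):
    # Euclidean scalar product
    return sum(a * b for a, b in zip(u, v))

def det(M):
    # Leibniz-formula determinant; fine for the small Gram matrices used here
    n = len(M)
    total = 0.0
    for perm in permutations(range(n)):
        sign, prod = 1, 1.0
        for i in range(n):
            prod *= M[i][perm[i]]
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        total += sign * prod
    return total

def measure(*factors):
    # |x1 ^ x2 ^ ... ^ xm| = sqrt(det[xi . xj])  -- formula 6.68
    gram = [[dot(u, v) for v in factors] for u in factors]
    return math.sqrt(det(gram))

print(measure((2, -3, 0), (1, 1, 1), (0, 5, 1)))  # → 5.0
```

The same function computes lengths (m = 1), areas (m = 2) and volumes (m = 3) uniformly, which is exactly the point of the Gram-determinant form.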
Scalars: m = 0

|a|² = a²

Vectors: m = 1

|x|² = x ∘ x
Bivectors: m = 2

|x∧y|² = (x∧y) ∘ (x∧y) = det[[x∘x, y∘x], [x∘y, y∘y]]

If x∧y is interpreted as a bivector, then |x∧y| is called the area of x∧y. The scalar |x∧y| is in fact the area of the parallelogram formed by the vectors x and y.

Measure[x∧y]
√(−(x∘y)² + (x∘x)(y∘y))

Graphic showing a parallelogram formed by two vectors x and y, and its area.

Because of the nilpotent properties of the exterior product, the measure of the bivector is independent of the way in which it is expressed in terms of its vector factors.

Measure[(x + y)∧y]
√(−(x∘y)² + (x∘x)(y∘y))
Trivectors: m = 3

|x∧y∧z|² = (x∧y∧z) ∘ (x∧y∧z) = det[[x∘x, y∘x, z∘x], [x∘y, y∘y, z∘y], [x∘z, y∘z, z∘z]]

If x∧y∧z is interpreted as a trivector, then |x∧y∧z| is called the volume of x∧y∧z. The scalar |x∧y∧z| is in fact the volume of the parallelepiped formed by the vectors x, y and z.

Measure[x∧y∧z]
√(−(x∘z)²(y∘y) + 2(x∘y)(x∘z)(y∘z) − (x∘x)(y∘z)² − (x∘y)²(z∘z) + (x∘x)(y∘y)(z∘z))
In the hybrid metric of a bound space, the origin has unit measure and is orthogonal to every element of the vector subspace:

𝒪 ∘ 𝒪 = 1,   α_m ∘ 𝒪 = 0,   ei ∘ ej = gij    (6.76)

The measure of a point P = 𝒪 + x is then given by:

|P|² = P ∘ P = 1 + x ∘ x    (6.77)
Although this is at first sight a strange result, it is due entirely to the hybrid metric referred to above. Unlike a vector difference of two points whose measure starts off from zero and increases continuously as the two points move further apart, the measure of a single point starts off from unity as it coincides with the origin and increases continuously as it moves further away from it. The measure of the difference of two points is, as expected, equal to the measure of the associated vector.
P1 = 𝒪 + x1,   P2 = 𝒪 + x2

(P1 − P2) ∘ (P1 − P2) = P1∘P1 − 2 P1∘P2 + P2∘P2 = (1 + x1∘x1) − 2(1 + x1∘x2) + (1 + x2∘x2) = x1∘x1 − 2 x1∘x2 + x2∘x2 = (x1 − x2) ∘ (x1 − x2)
(α_m∧x) ∘ (β_m∧z) = (α_m∘β_m)(x∘z) − (α_m∘z)∘(β_m∘x)

If we put x = z = P and β_m = α_m in this formula, and then rearrange the order, we get:

(P∧α_m) ∘ (P∧α_m) = (α_m∘α_m)(P∘P) − (α_m∘P)∘(α_m∘P)

With P = 𝒪 + x, so that P∘P = 1 + x∘x and α_m∘P = α_m∘x:

(P∧α_m) ∘ (P∧α_m) = (α_m∘α_m)(1 + x∘x) − (α_m∘x)∘(α_m∘x)

(P∧α_m) ∘ (P∧α_m) = α_m∘α_m + (α_m∧x)∘(α_m∧x)    (6.78)

|P∧α_m|² = |α_m|² + |α_m∧x|²    (6.79)

|P∧α̂_m|² = 1 + |α̂_m∧x|²    (6.80)
Note that the measure of a multivector bound through the origin (x = 0) is just the measure of the multivector alone:

|𝒪∧α_m| = |α_m|    (6.81)

|𝒪∧α̂_m| = 1    (6.82)
If x⊥ is the component of x orthogonal to α_m, then we can once again apply the triangle formula above to give:

(α_m∘x⊥)∘(α_m∘x⊥) = (α_m∘α_m)(x⊥∘x⊥) − (α_m∧x⊥)∘(α_m∧x⊥)

Since α_m∘x⊥ = 0 and α_m∧x = α_m∧x⊥, this reduces to:

(α_m∧x)∘(α_m∧x) = (α_m∘α_m)(x⊥∘x⊥)

We can thus write formulas for the measure of a bound element in the alternative forms:

|P∧α_m|² = |α_m|² (1 + |x⊥|²)    (6.83)

|P∧α̂_m|² = 1 + |x⊥|²    (6.84)
Thus the measure of a bound unit multivector indicates its minimum distance from the origin.
All terms except the first are clearly zero, and since 𝒪∘P reduces to 1, the first term reduces to α_m:

(P∧α_m) ∘ 𝒪 = α_m    (6.85)
Example
Suppose we are in a bound 3-space, and we consider the bound bivector B1 expressed as the product of three points: P∧Q∧R. Such a 2-element could, for example, define a plane. We want to find the bivector.

B1 = P∧Q∧R;  P = 𝒪 + e1;  Q = 𝒪 + e2;  R = 𝒪 + e3;
We can convert these to scalar products using ToScalarProducts (which is based on the Interior Common Factor Theorem).
B2 = ToScalarProducts[B1]
𝒪∧((𝒪∘e2 − 𝒪∘e3) e1 + (−(𝒪∘e1) + 𝒪∘e3) e2 + (𝒪∘e1 − 𝒪∘e2) e3) + (𝒪∘𝒪 + 𝒪∘e3) e1∧e2 + (−(𝒪∘𝒪) − 𝒪∘e2) e1∧e3 + (𝒪∘𝒪 + 𝒪∘e1) e2∧e3
Now, when we apply ToMetricElements, all the scalar products of basis vectors with the origin are put to zero, and we get the required bivector.
B3 = ToMetricElements[B2]
e1∧e2 − e1∧e3 + e2∧e3
In terms of the original points we can rewrite this bivector just as we would if we were deriving it geometrically from the plane of P Q R.
B5 = (P − R)∧(Q − R)
Expanding this bivector gives an important form related to the original simple exterior product P Q R.
B6 = P∧Q − P∧R + Q∧R
This form was discussed in Chapter 4: m-Planes: The m-vector of a bound m-vector.
ℬ3; DeclareMetric[g]
{{g1,1, g1,2, g1,3}, {g1,2, g2,2, g2,3}, {g1,3, g2,3, g3,3}}

MetricL[2]
{{−g1,2² + g1,1 g2,2, −g1,2 g1,3 + g1,1 g2,3, −g1,3 g2,2 + g1,2 g2,3}, {−g1,2 g1,3 + g1,1 g2,3, −g1,3² + g1,1 g3,3, −g1,3 g2,3 + g1,2 g3,3}, {−g1,3 g2,2 + g1,2 g2,3, −g1,3 g2,3 + g1,2 g3,3, −g2,3² + g2,2 g3,3}}
Metric Palette

The Metric Palette displays the induced metric on each exterior linear space: for Λ0 the scalar 1, for Λ1 the declared metric tensor itself, for Λ2 the matrix of 2×2 minors computed above, and for Λ3 the single scalar −g1,3² g2,2 + 2 g1,2 g1,3 g2,3 − g1,1 g2,3² − g1,2² g3,3 + g1,1 g2,2 g3,3, the determinant of the metric tensor.
inner products of the basis elements of Λm with themselves, reduce these inner products to scalar products, and then substitute for each scalar product the corresponding metric tensor element from Λ1. In what follows we will show an example of how to reproduce the result in the previous section.
Second, we use the Mathematica function Outer to construct the matrix of all combinations of interior products of these basis elements.
M1 = Outer[InteriorProduct, B, B]; MatrixForm[M1]
[[e1∧e2 ∘ e1∧e2, e1∧e2 ∘ e1∧e3, e1∧e2 ∘ e2∧e3], [e1∧e3 ∘ e1∧e2, e1∧e3 ∘ e1∧e3, e1∧e3 ∘ e2∧e3], [e2∧e3 ∘ e1∧e2, e2∧e3 ∘ e1∧e3, e2∧e3 ∘ e2∧e3]]
Third, we use the GrassmannAlgebra function ComposeScalarProductMatrix to convert each of these inner products into matrices of scalar products.
M2 = ComposeScalarProductMatrix[M1]; MatrixForm[M2]

(a 3×3 matrix whose entries are the 2×2 matrices of scalar products [[ei∘ek, ei∘el], [ej∘ek, ej∘el]] corresponding to each interior product (ei∧ej) ∘ (ek∧el))
We can have Mathematica check that this is the same matrix as obtained from the MetricL function.

M4 == MetricL[2]
True
ℬ4; DeclareMetric[g]
{{g1,1, g1,2, g1,3, g1,4}, {g1,2, g2,2, g2,3, g2,4}, {g1,3, g2,3, g3,3, g3,4}, {g1,4, g2,4, g3,4, g4,4}}

MetricMatrix[3] // MatrixForm

(a 4×4 matrix whose entries display as 3×3 matrices of metric elements; the induced metric on Λ3 is formed from their determinants)
Of course, we are only displaying the metric tensor in this form. Its elements are correctly the determinants of the matrices displayed, not the matrices themselves.
Replacing each element x̄_m by the corresponding element x_(n−m) converts formulas involving the regressive product into formulas involving the interior product. The Interior Common Factor Theorem plays the same role in this development that the Common Factor Theorem did in Chapter 3.
For a 1-element x:

(α_m∧β_k) ∘ x = (α_m∘x)∧β_k + (−1)^m α_m∧(β_k∘x)    (6.86)

To see this, let α_m = a1∧⋯∧am and β_k = b1∧⋯∧bk. Thus:

(α_m∧β_k) ∘ x = ((a1∧⋯∧am)∧(b1∧⋯∧bk)) ∘ x
= Σ_{i=1}^{m} (−1)^(i−1) (x∘ai) (a1∧⋯ǎi⋯∧am)∧(b1∧⋯∧bk)
+ Σ_{j=1}^{k} (−1)^(m+j−1) (x∘bj) (a1∧⋯∧am)∧(b1∧⋯b̌j⋯∧bk)

(where ǎi denotes omission of the factor ai)
If instead we take (α_m∧β_k) ∘ (x1∧x2), then the right-hand side may be expanded by applying the basic Product Formula twice. The first application is:
(α_m∧β_k) ∘ (x1∧x2) = ((α_m∘x1)∧β_k + (−1)^m α_m∧(β_k∘x1)) ∘ x2

Expanding each of the two terms by a second application gives:

((α_m∘x1)∧β_k) ∘ x2 = ((α_m∘x1)∘x2)∧β_k + (−1)^(m−1) (α_m∘x1)∧(β_k∘x2)

(−1)^m (α_m∧(β_k∘x1)) ∘ x2 = (−1)^m (α_m∘x2)∧(β_k∘x1) + α_m∧((β_k∘x1)∘x2)

Collecting terms yields the four-term Product Formula:

(α_m∧β_k) ∘ (x1∧x2) = (α_m∘(x1∧x2))∧β_k + (−1)^(1+m) (α_m∘x1)∧(β_k∘x2) + (−1)^m (α_m∘x2)∧(β_k∘x1) + α_m∧(β_k∘(x1∧x2))    (3.64)
As with the product formula for regressive products, successive application doubles the number of terms. We started with two terms on the right-hand side of the basic interior Product Formula. By applying it again we obtain a Product Formula with four terms on the right-hand side. The next application would give us eight terms. Thus the Product Formula for (α_m∧β_k) ∘ (x1∧x2∧⋯∧xj) would lead to 2^j terms.
Deriving interior Product Formulas from the dual regressive Product Formulas
A second derivation may be obtained directly by first computing the Dual of a regressive Product Formula, and then directly using the substitution x̄_m → x_(n−m).
F1: (α_m∧β_k) ∨ (x1∧x2) = α_m∧(β_k∨(x1∧x2)) + (−1)^(n−m) (−(α_m∨x1)∧(β_k∨x2) + (α_m∨x2)∧(β_k∨x1)) + (α_m∨(x1∧x2))∧β_k
F2 = Dual[F1]

(the dual formula: the same four terms with exterior and regressive products interchanged, each xi now carrying grade n−1)
F3 = F2 /. A_ ∧ x_(n−1) → A ∘ x

which we encode as R1:

R1 := (s_. (A_∧B_) ∘ x_ :> s (A∘x)∧B + s (−1)^G[A] A∧(B∘x))

which we encode as R2:

R2 := ((Z_∘x_) ∘ y_ :> Z ∘ (x∧y))
Take the interior product of F0 with x1 , and apply both rules as many times as they are applicable to get F1 .
F1 = F0 ∘ x1 /. {R1, R2}
(α_m∘x1)∧β_k + (−1)^m α_m∧(β_k∘x1)
Repeat by taking the interior product of F1 with x2 , expanding, and applying the rules to get F2 .
F2 = F1 ∘ x2 /. {R1, R2}

(α_m∧β_k) ∘ (x1∧x2) = (−1)^(1+m) (α_m∘x1)∧(β_k∘x2) + (−1)^m (α_m∘x2)∧(β_k∘x1) + (α_m∘(x1∧x2))∧β_k + α_m∧(β_k∘(x1∧x2))

A third application gives the eight-term formula:

(α_m∧β_k) ∘ (x1∧x2∧x3) = (α_m∘x1)∧(β_k∘(x2∧x3)) − (α_m∘x2)∧(β_k∘(x1∧x3)) + (α_m∘x3)∧(β_k∘(x1∧x2)) + (−1)^m ((α_m∘(x1∧x2))∧(β_k∘x3) − (α_m∘(x1∧x3))∧(β_k∘x2) + (α_m∘(x2∧x3))∧(β_k∘x1)) + (α_m∘(x1∧x2∧x3))∧β_k + (−1)^m α_m∧(β_k∘(x1∧x2∧x3))    (3.65)
H1 = DeriveInteriorProductFormula[(α_m∧β_k) ∘ x_1]

(α_m∧β_k) ∘ x1 = (α_m∘x1)∧β_k + (−1)^m α_m∧(β_k∘x1)

H2 = DeriveInteriorProductFormula[(α_m∧β_k) ∘ x_2]

(α_m∧β_k) ∘ (x1∧x2) = (−1)^(1+m) (α_m∘x1)∧(β_k∘x2) + (−1)^m (α_m∘x2)∧(β_k∘x1) + (α_m∘(x1∧x2))∧β_k + α_m∧(β_k∘(x1∧x2))
H3 = DeriveInteriorProductFormula[(α_m∧β_k) ∘ x_3]

(α_m∧β_k) ∘ (x1∧x2∧x3) = (α_m∘x1)∧(β_k∘(x2∧x3)) − (α_m∘x2)∧(β_k∘(x1∧x3)) + (α_m∘x3)∧(β_k∘(x1∧x2)) + (−1)^m (α_m∘(x1∧x2))∧(β_k∘x3) + (−1)^(1+m) (α_m∘(x1∧x3))∧(β_k∘x2) + (−1)^m (α_m∘(x2∧x3))∧(β_k∘x1) + (α_m∘(x1∧x2∧x3))∧β_k + (−1)^m α_m∧(β_k∘(x1∧x2∧x3))
{F2 === H2, F3 === H3}
(α_m∧β_k) ∘ x_j    (3.66)

In this formula, the two inner factors are the complete span and complete cospan of x_j. The sign function ρ creates a list of elements alternating between 1 and −1. For example, for j equal to 2, and simplifying the right-hand side, we have:
(α_m∧β_k) ∘ (x1∧x2) = −(−1)^m (α_m∘x1)∧(β_k∘x2) + (−1)^m (α_m∘x2)∧(β_k∘x1) + (α_m∘(x1∧x2))∧β_k + α_m∧(β_k∘(x1∧x2))
Comparing results
We can compare the results obtained by our two formulations.
ℬ5; Table[ComputeInteriorProductFormula[(α_m∧β_k) ∘ x_j] === (DeriveInteriorProductFormula[(α_m∧β_k) ∘ x_j] /. (−1)^(m+1) → −(−1)^m), {j, 1, 5}]
{True, True, True, True, True}
F1 = ComputeInteriorProductFormula[(α_m∧β_k) ∘ (x∧y∧z)]

(α_m∧β_k) ∘ (x∧y∧z) = (α_m∘x)∧(β_k∘(y∧z)) − (α_m∘y)∧(β_k∘(x∧z)) + (α_m∘z)∧(β_k∘(x∧y)) + (−1)^m (α_m∘(x∧y))∧(β_k∘z) − (−1)^m (α_m∘(x∧z))∧(β_k∘y) + (−1)^m (α_m∘(y∧z))∧(β_k∘x) + (α_m∘(x∧y∧z))∧β_k + (−1)^m α_m∧(β_k∘(x∧y∧z))
We can express the 3-element as a product of different 1-element factors by adding to any given factor, scalar multiples of the other factors. For example
F2 = ComputeInteriorProductFormula[(α_m∧β_k) ∘ (x∧y∧(z + a x + b y))]

(α_m∧β_k) ∘ (x∧y∧(a x + b y + z)) = (α_m∘x)∧(β_k∘(y∧(a x + b y + z))) − (α_m∘y)∧(β_k∘(x∧(a x + b y + z))) + (α_m∘(a x + b y + z))∧(β_k∘(x∧y)) + (−1)^m (α_m∘(x∧y))∧(β_k∘(a x + b y + z)) − (−1)^m (α_m∘(x∧(a x + b y + z)))∧(β_k∘y) + (−1)^m (α_m∘(y∧(a x + b y + z)))∧(β_k∘x) + (α_m∘(x∧y∧(a x + b y + z)))∧β_k + (−1)^m α_m∧(β_k∘(x∧y∧(a x + b y + z)))
Thus we can write the interior Product Formula in the alternative form
3.67
a b x SBFlattenB
m k j
Comparing results
Comparing the results of ComputeInteriorProductFormula with ComputeInteriorProductFormulaB verifies their equivalence for values of j from 1 to 10.
ℬ10; Table[%[ComputeInteriorProductFormula[(α_m∧β_k) ∘ x_j]] === %[ComputeInteriorProductFormulaB[(α_m∧β_k) ∘ x_j]], {j, 1, 10}]
Ka xO x Ka xO x + H- 1Lm a Hx xL
m m m
3.67
α̂_m = (−1)^m ((α̂_m∧x̂) ∘ x̂ − (α̂_m∘x̂)∧x̂)    (6.87)
(α_m∧β_k) ∘ x = (α_m∘x)∧β_k + (−1)^m α_m∧(β_k∘x)    (6.88)

Its regressive counterpart is:

(α_m∨β_k) ∘ x = (α_m∘x)∨β_k + (−1)^(n−m) α_m∨(β_k∘x)

(α_m∨β_k) ∘ x = (α_m∘x)∨β_k + (−1)^(m−1) α_m∨(β_k∘x)    (6.89)
This product formula is the basis for the derivation of a set of formulas of geometric interest later in the chapter: the Triangle Formulas.
When applied to interior Product Formula 2, the terms on the right-hand side interchange forms:

(α_m∘x)∨β_k = (−1)^((m−k)(n−m)+n+k+1) β_k∨(α_m∘x)

Hence interior Product Formula 2 may be expressed in a variety of ways, for example:

(α_m∨β_k) ∘ x = (α_m∘x)∨β_k + (−1)^((m−k)(n−m)) (β_k∘x)∨α_m    (6.90)
(α_m∨β_k) ∘ (x1∧x2) = (α_m∘x1)∨(β_k∘x2) + (−1)^(m+k) (α_m∘x2)∨(β_k∘x1)    (6.91)
(α_m∨β_k) ∘ x_p = Σ_{r=0}^{p} (−1)^(r(n−m)) Σ_{i=1}^{ν} (α_m∘xi_r)∨(β_k∘xi_(p−r))

x_p = x1_r∧x1_(p−r) + x2_r∧x2_(p−r) + ⋯ + xν_r∧xν_(p−r),   ν = C(p, r)

Now replace β_k with β̄_(n−k), β_k∘xi with (−1)^(r(n−r)) β̄_(n−k)∘xi, and n−k with k to give:

(α_m∧β_k) ∘ x_p = Σ_{r=0}^{p} (−1)^(r(m−r)) Σ_{i=1}^{ν} (α_m∘xi_r)∧(β_k∘xi_(p−r))    (6.92)

x_p = x1_r∧x1_(p−r) + x2_r∧x2_(p−r) + ⋯ + xν_r∧xν_(p−r),   ν = C(p, r)

An alternative form is:

(α_m∧β_k) ∘ x_p = Σ_{r=0}^{p} (−1)^(rm) Σ_{i=1}^{ν} (α_m∘xi_r)∧(β_k∘xi_(p−r))    (6.93)

x_p = x1_r∧x1_(p−r) + x2_r∧x2_(p−r) + ⋯ + xν_r∧xν_(p−r),   ν = C(p, r)

For unit elements this yields a decomposition of the unit p-element:

x̂_p = Σ_{r=0}^{p} (−1)^(r(m−r)) Σ_{i=1}^{ν} (α̂_m∘x̂i_(p−r))∧(α̂_m∘x̂i_r)    (6.94)

x̂_p = x̂1_r∧x̂1_(p−r) + x̂2_r∧x̂2_(p−r) + ⋯ + x̂ν_r∧x̂ν_(p−r),   ν = C(p, r)
If, from a series of m 1-elements, one forms the exterior products of any given grade, and conjoins each of them in an interior product with its supplementary combination, then the sum of these products is zero, that is
S1∘C1 + S2∘C2 + ⋯ = 0

if S1, S2, … are the exterior products of the m 1-elements a1, a2, …, am of any given (say the kth) grade, and C1, C2, … are the supplementary combinations. By the supplementary combination to an exterior product of k 1-elements out of a series of m 1-elements, he is referring to the exterior product of the remaining m−k 1-elements ordered consistently with the original series. In our terminology the Si and Ci are respectively the k-span and k-cospan of the m-element A = a1∧a2∧⋯∧am. This theorem is a useful source of identities involving sums of interior products. We shall also see in Chapter 10: Exploring the Generalized Product, that it has an important role to play in proving the Generalized Product Theorem, which shows the equivalence of two different forms for the expansion of the generalized product in terms of interior and exterior products. Chapter 10 will also discuss the generalization of the Zero Interior Sum Theorem leading to a Zero Generalized Sum Theorem. As we will see in the following sections, the Zero Interior Sum Theorem turns out to be another result associated with the invariance of an m:k-form to a factorization of the m-element A.
To generate an interior m:k-sum using this form, all we need to do is to specify m and k.
Fm_,k_ := SBK#k BaFO K#k BaFOF
m m
Here are the simplest cases. It will be immediately clear from these cases that m:k-sums in which k is less than m are zero, since each interior product term is zero.
F3,2 = (a1∧a2)∘a3 + (a1∧a3)∘(−a2) + (a2∧a3)∘a1
Beginning with the first two factors a1 and a2 + a a1, we can choose the scalar a to make the factors orthogonal:

a1∘(a2 + a a1) = 0  ⇒  a = −(a1∘a2)/(a1∘a1)
For the first three factors, we can choose the scalars to make all three factors mutually orthogonal. Let
f13 = a1∘(a3 + b a1 + c a2);  f23 = (a2 + a a1)∘(a3 + b a1 + c a2);
We can follow through the standard Gram-Schmidt process, or we can use Mathematica to solve for the coefficients.
S = Solve[%[{f13 == 0, f23 == 0}], {b, c}]
{{b → ((a1∘a2)(a2∘a3) − (a1∘a3)(a2∘a2)) / ((a1∘a1)(a2∘a2) − (a1∘a2)²), c → ((a1∘a2)(a1∘a3) − (a1∘a1)(a2∘a3)) / ((a1∘a1)(a2∘a2) − (a1∘a2)²)}}
It is of interest to note that these scalars may be expressed in terms of interior products of 2elements.
{b → ((a1∧a2)∘(a2∧a3)) / ((a1∧a2)∘(a1∧a2)), c → −((a1∧a2)∘(a1∧a3)) / ((a1∧a2)∘(a1∧a2))}
It is easy to confirm that with these definitions for a, b, and c, the first three factors of A are mutually orthogonal. First we make lists of the factors and values for the scalars we have derived.
F = {f1 → a1, f2 → a2 + a a1, f3 → a3 + b a1 + c a2};
H = {a → −(a1∘a2)/(a1∘a1), b → ((a1∧a2)∘(a2∧a3))/((a1∧a2)∘(a1∧a2)), c → −((a1∧a2)∘(a1∧a3))/((a1∧a2)∘(a1∧a2))};
Then we substitute them into the three orthogonality conditions, and simplify to get zero in each case.
Together[%[{f1∘f2, f1∘f3, f2∘f3} /. F /. ToScalarProducts[H]]]
{0, 0, 0}
Finally we confirm that this refactorization does not change the original product.
%[f1∧f2∧f3 /. F /. ToScalarProducts[H]]
a1∧a2∧a3
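The refactorization used here is ordinary Gram-Schmidt orthogonalization; a minimal Python sketch (our own, assuming a Euclidean scalar product) shows the same two facts: the new factors are mutually orthogonal, and — since each step only adds multiples of earlier factors — the exterior product of the list is unchanged:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    # Subtract from each vector its projections onto the earlier factors.
    # Adding multiples of earlier factors leaves a1 ^ a2 ^ ... unchanged.
    ortho = []
    for v in vectors:
        w = list(v)
        for u in ortho:
            c = dot(w, u) / dot(u, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        ortho.append(w)
    return ortho

f1, f2, f3 = gram_schmidt([(1, 1, 0), (1, 0, 1), (0, 1, 1)])
print(dot(f1, f2), dot(f1, f3), dot(f2, f3))  # all (numerically) zero
```

The coefficients accumulated in the inner loop are exactly the scalars a, b, c solved for symbolically above.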
Any m:k-form which is the sum of interior products of k-span and k-cospan elements is given by the expression:
Example
To demonstrate this proof in a simple case, take A to be a simple 3-element.
A = a1 a2 a3 ;
Orthogonalize A by the Gram-Schmidt process to rewrite it as the exterior product of mutually orthogonal factors.
A = a1 Ha2 + a a1 L Ha3 + b a1 + c a2 L;
Since these factors are mutually orthogonal, each term is individually zero. Hence H3,2 = 0. Since H3,2 is equal to the original interior sum F3,2, the theorem is confirmed.

%[H3,2 == F3,2]
True
α_m × β_k ≡ (α_m ∧ β_k)‾    (6.95)
This definition preserves the basic property of the cross product: that the cross product of two elements is an element orthogonal to both, and reduces to the usual notion for vectors in a three dimensional metric vector space.
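In the familiar three-dimensional Euclidean case (m = k = 1, n = 3), both properties are easy to verify numerically; this Python sketch uses the ordinary component formula for the complement of u∧v:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    # ordinary 3-dimensional cross product: the complement of u ^ v
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

c = cross((1, 2, 3), (4, 5, 6))
print(c, dot(c, (1, 2, 3)), dot(c, (4, 5, 6)))  # orthogonal to both factors
```

Anti-commutativity (formula 6.107 with m = k = 1) can be checked the same way: cross(u, v) is the negative of cross(v, u).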
6.96
x1 × (x2∧x3) = (−1)^n (x2∧x3) × x1    (6.97)
(x1 × x2) ∘ x3 = (x1∧x2∧x3)‾    (6.98)
Because Grassmann identified n-elements with their scalar Euclidean complements (see the historical note in the introduction to Chapter 5), he considered x1∧x2∧x3 in a 3-space to be a scalar. His notation for the exterior product of elements was to use square brackets [x1 x2 x3], thus originating the 'box' product notation for (x1 × x2)∘x3 used in three-dimensional vector algebra in the early decades of the twentieth century.
6.99
6.100
Note that this product, unlike the other products above, does not rely on a metric, because the expression involves only exterior and regressive products.
6:

The cross product of an m-element and a k-element is an (n−(m+k))-element:

α_m ∈ Λm, β_k ∈ Λk  ⇒  α_m × β_k ∈ Λ(n−(m+k))    (6.101)
7:

The cross product is not associative. However, it can be expressed in terms of exterior and interior products:

(α_m × β_k) × γ_r = (−1)^((n−1)(m+k)) (α_m ∧ β_k) ∘ γ_r    (6.102)

α_m × (β_k × γ_r) = (−1)^((n−(k+r))(m+k+r)) (β_k ∧ γ_r) ∘ α_m    (6.103)
8:

The cross product of an element with the unit scalar 1 yields its complement. Thus the complement operation may be viewed as the cross product with unity:

1 × α_m = α_m × 1 = ᾱ_m    (6.104)

The cross product of an element with a scalar yields that scalar multiple of its complement:

a × α_m = α_m × a = a ᾱ_m    (6.105)
9:

The cross product of 1 with itself is the unit n-element. More generally, the cross product of a scalar and its reciprocal is the unit n-element:

1 × 1 = 1̄,   a × (1/a) = 1̄    (6.106)
10:
The cross product of two elements is equal (apart from a possible sign) to their cross product in reverse order. The cross product of two 1-elements is anti-commutative, just as is the exterior product:

α_m × β_k = (−1)^(mk) β_k × α_m    (6.107)
11:
6.108
12:

The cross product is both left and right distributive under addition:

(α_m + β_m) × γ_r = α_m × γ_r + β_m × γ_r    (6.109)

α_m × (β_r + γ_r) = α_m × β_r + α_m × γ_r    (6.110)
We can therefore write the exterior, regressive, and interior products in terms only of the cross product:

α_m ∧ β_k = (−1)^((m+k)(n−(m+k))) 1 × (α_m × β_k)    (6.111)
6.112
α_m ∘ β_k = (−1)^(m(n−m)) (1 × α_m) × β_k    (6.113)
These formulae show that any result in the Grassmann algebra involving exterior, regressive or interior products, or complements, could be expressed in terms of the generalized cross product alone. This is somewhat reminiscent of the role played by the Sheffer stroke (or Peirce arrow) symbol of Boolean algebra.
(α_m × β_k)‾ = (−1)^((m+k)(n−(m+k))) α_m ∧ β_k    (6.114)
ag bg g ab
m p k p p m k
g H - 1 L p Hn-pL g a b
p p m k
g
p
6.115
(α_m × β_k) ∘ x = (−1)^(n−1) ((α_m∘x) × β_k + (−1)^m α_m × (β_k∘x))    (6.116)
(α_m∘α_m) x = (α_m∘x)∘α_m + (−1)^(m−1) α_m∘(α_m∧x)    (6.117)

By dividing through by the inner product α_m∘α_m, we obtain a decomposition of the 1-element x into two components:

x = ((α_m∘x)∘α_m)/(α_m∘α_m) + (−1)^(m−1) (α_m∘(α_m∧x))/(α_m∘α_m)    (6.118)
(α_m∘β_m)(x∘z) = (α_m∧x)∘(β_m∧z) + (α_m∘z)∘(β_m∘x)    (6.119)

When α_m is equal to β_m and x is equal to z, this reduces to a relationship similar to the fundamental trigonometric identity:

x̂∘x̂ = (α̂_m∘x̂)∘(α̂_m∘x̂) + (α̂_m∧x̂)∘(α̂_m∧x̂)    (6.120)
|α̂_m∧x̂|² + |α̂_m∘x̂|² = 1    (6.121)
Similarly, formula 6.118 reduces to a more obvious decomposition for x in terms of a unit m-element:

x̂ = (α̂_m∘x̂)∘α̂_m + (−1)^(m−1) α̂_m∘(α̂_m∧x̂)    (6.122)
Because of its geometric significance when α_m is simple, the properties of this equation are worth investigating further. It will be shown that the square (scalar product with itself) of each of its terms is the corresponding term of formula 6.120. That is:

((α̂_m∘x̂)∘α̂_m)∘((α̂_m∘x̂)∘α̂_m) = (α̂_m∘x̂)∘(α̂_m∘x̂)    (6.123)

(α̂_m∘(α̂_m∧x̂))∘(α̂_m∘(α̂_m∧x̂)) = (α̂_m∧x̂)∘(α̂_m∧x̂)    (6.124)

Further, taking the scalar product of formula 6.122 with itself, and using formulae 6.120, 6.123, and 6.124, yields the result that the terms on the right-hand side of formula 6.122 are orthogonal:

((α̂_m∘x̂)∘α̂_m)∘(α̂_m∘(α̂_m∧x̂)) = 0    (6.125)
It is these facts that suggest the name triangle formulae for formulae 6.121 and 6.122.

Diagram of a vector x decomposed into components in and orthogonal to α_m.
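For m = 1 the decomposition is the familiar resolution of a vector x into components parallel and orthogonal to a vector a; this Python sketch (our own, Euclidean scalar product assumed) checks the orthogonality property (6.125) and the Pythagorean property (6.121) in that case:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def decompose(x, a):
    # x = ((a.x)/(a.a)) a  +  rest : formulas 6.118/6.122 specialized to m = 1
    c = dot(a, x) / dot(a, a)
    par = [c * ai for ai in a]
    perp = [xi - pi for xi, pi in zip(x, par)]
    return par, perp

par, perp = decompose((3, 4, 12), (1, 0, 0))
print(par, perp, dot(par, perp))  # the two components are orthogonal
```

Squaring each component and summing recovers |x|², which is the content of the triangle formulae in this simplest case.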
Focusing now on the first factor of the inner product on the right-hand side we get:
Note that the second factor in the interior product of the right-hand side is of grade 1. We apply the Interior Common Factor Theorem to give:
` ` ` ` Ha1 ! am xL KKa xO aO H- 1Lm KKKa xO aO xO a1 ! am +
m m m m
H - 1 L i-1
i
` ` KKKa xO aO ai O a1 ! i ! am x
m m
` ` ` ` ` ` KKa xO aO KKa xO aO Ka xO Ka xO
m m m m m m
` ` ` Ka xO a a x
m m m
6.126
This enables formula 6.124 to be proved in an identical manner to formula 6.123. Using formula 6.16 we find that each form may be expressed (apart from a possible sign) in the other form, provided α_m is replaced by its complement:

(α_m∘x)∘α_m = (−1)^(n−m−1) ᾱ_m∘(ᾱ_m∧x)
(α_m∘x)∘α_m = (−1)^(n−m−1) ᾱ_m∘(ᾱ_m∧x)    (6.127)

α_m∘(α_m∧x) = (−1)^(m−1) (ᾱ_m∘x)∘ᾱ_m    (6.128)
Since the proof of formula 6.123 is valid for any simple unit element, it is also valid for the complement of α̂_m, since this is also simple. The proof of formula 6.124 is then completed by applying formula 6.127. The next four sections will look at some applications of these formulae.
6.12 Angle
Defining the angle between elements
The angle between 1-elements
The interior product enables us to define, as is standard practice, the angle between two 1-elements
Cos[θ]² = (x̂1∘x̂2)²    (6.129)
Diagram of two vectors showing the angle between them.

However, putting α̂_m equal to x̂1 and x̂ equal to x̂2 in formula 6.121 yields:

|x̂1∧x̂2|² + |x̂1∘x̂2|² = 1    (6.130)

This, together with the definition for angle above, implies that:

Sin[θ]² = |x̂1∧x̂2|²    (6.131)

This is a formula equivalent to the three-dimensional cross product formulation:

Sin[θ]² = |x̂1 × x̂2|²

but one which is not restricted to three-dimensional space. Thus formula 6.130 is an identity equivalent to sin²(θ) + cos²(θ) = 1.
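A quick numerical check of the identity, valid in any number of dimensions; sin²θ is computed here from the Gram-determinant form |x1∧x2|² = (x1∘x1)(x2∘x2) − (x1∘x2)², an assumed but standard restatement of 6.131 (our own Python sketch, Euclidean metric):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cos2(u, v):
    return dot(u, v) ** 2 / (dot(u, u) * dot(v, v))

def sin2(u, v):
    # |u ^ v|^2 = (u.u)(v.v) - (u.v)^2, normalized to unit vectors
    gram = dot(u, u) * dot(v, v) - dot(u, v) ** 2
    return gram / (dot(u, u) * dot(v, v))

u, v = (1, 0, 0, 0), (1, 1, 1, 1)   # 4-dimensional vectors
print(cos2(u, v), sin2(u, v), cos2(u, v) + sin2(u, v))  # → 0.25 0.75 1.0
```

Nothing in the computation refers to a third dimension, which is exactly the advantage over the cross-product formulation.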
|α̂_m∧x̂|² + |α̂_m∘x̂|² = 1

Sin[θ]² = |α̂_m∧x̂|²    (6.132)

Cos[θ]² = |α̂_m∘x̂|²    (6.133)
Consider now a vector x1 and a bivector x2∧x3.

Diagram of three vectors showing the angles between them, and the angle between the bivector and the vector.

The angle between any two of the vectors is given by formula 6.129:

Cos[θij]² = (x̂i∘x̂j)²

The cosine of the angle between the vector x1 and the bivector x2∧x3 may be obtained from formula 6.124:

Cos[θ]² = |α̂_m∘x̂|² = (((x2∧x3)∘x1)∘((x2∧x3)∘x1)) / (((x2∧x3)∘(x2∧x3))(x1∘x1))
A=
First, convert the interior products to scalar products and then convert the scalar products to angle form given by formula 6.129. GrassmannAlgebra provides the function ToAngleForm for doing this in one operation.
V[x_]; ToAngleForm[A, θ]
(Cos[θ1,2]² + Cos[θ1,3]² − 2 Cos[θ1,2] Cos[θ1,3] Cos[θ2,3]) Csc[θ2,3]²
This result may be verified by elementary geometry to be Cos@qD2 where q is defined on the diagram above. Thus we see a verification that formula 6.124 permits the calculation of the angle between a vector and an m-vector in terms of the angles between the given vector and any m vector factors of the m-vector.
But from formulas 6.28 and 6.67 we can remove the complement operations to get the simpler expression:

Cos[θ] = ((x1∧x2)∘(x3∧x4)) / (|x1∧x2| |x3∧x4|)    (6.134)
Note that, in contradistinction to the 3-space formulation using cross products, this formulation is valid in a space of any number of dimensions. An actual calculation is most readably expressed by expanding each of the terms separately. We can either represent the xi in terms of basis elements or deal with them directly. A direct expansion, valid for any metric is:
ToScalarProducts[(x1∧x2)∘(x3∧x4)]
−(x1∘x4)(x2∘x3) + (x1∘x3)(x2∘x4)
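In a Euclidean 3-space this expansion can be checked against the classical identity (u×v)·(w×z) = (u·w)(v·z) − (u·z)(v·w); a Python sketch (our own function names):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def bivector_inner(x1, x2, x3, x4):
    # (x1 ^ x2) . (x3 ^ x4) = (x1.x3)(x2.x4) - (x1.x4)(x2.x3)
    return dot(x1, x3) * dot(x2, x4) - dot(x1, x4) * dot(x2, x3)

x1, x2, x3, x4 = (1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 1, 1)
print(bivector_inner(x1, x2, x3, x4), dot(cross(x1, x2), cross(x3, x4)))  # both 1
```

The scalar-product form on the right works unchanged in any dimension; only the cross-product comparison is tied to 3-space.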
Note that this has been simplified somewhat as permitted by the symmetry of the scalar product. By putting this in its angle form we get the usual expression for the volume of a parallelepiped:

ToAngleForm[V, θ]
√(x1² x2² x3² (1 − Cos[θ1,2]² − Cos[θ1,3]² − Cos[θ2,3]² + 2 Cos[θ1,2] Cos[θ1,3] Cos[θ2,3]))    (6.135)
We can of course use the same approach in any number of dimensions. For example, the 'volume' of a 4-dimensional parallelepiped in terms of the lengths of its sides and the angles between them is:

ℬ4; ToAngleForm[Measure[x1∧x2∧x3∧x4], θ]
√(x1² x2² x3² x4² (1 − Cos[θ2,3]² − Cos[θ2,4]² − Cos[θ3,4]² + 2 Cos[θ1,3] Cos[θ1,4] Cos[θ3,4] + 2 Cos[θ2,3] Cos[θ2,4] (Cos[θ3,4] − Cos[θ1,3] Cos[θ1,4]) + 2 Cos[θ1,2] (Cos[θ1,4] (Cos[θ2,4] − Cos[θ2,3] Cos[θ3,4]) + Cos[θ1,3] (Cos[θ2,3] − Cos[θ2,4] Cos[θ3,4])) − Cos[θ1,4]² Sin[θ2,3]² − Cos[θ1,3]² Sin[θ2,4]² − Cos[θ1,2]² Sin[θ3,4]²))
6.13 Projection
To be completed.
7.1 Introduction
In Chapter 8: Exploring Mechanics, we will see that systems of forces and momenta, and the velocity and infinitesimal displacement of a rigid body, may be represented by the sum of a bound vector and a bivector. We have already noted in Chapter 1 that a single force is better represented by a bound vector than by a (free) vector. Systems of forces are then better represented by sums of bound vectors; and a sum of bound vectors may always be reduced to the sum of a single bound vector and a single bivector. We call the sum of a bound vector and a bivector a 2-entity. These geometric entities are therefore worth exploring for their ability to represent the principal physical entities of mechanics.

In this chapter we begin by establishing some properties of 2-entities in an n-plane, and then show how, in a 3-plane, they take on a particularly symmetrical and potent form. This form, a 2-entity in a 3-plane, is called a screw. Since it is in the 3-plane that we wish to explore 3-dimensional mechanics, we then explore the properties of screws in more detail.

The objective of this chapter, then, is to lay the algebraic and geometric foundations for the chapter on mechanics to follow.
Historical Note The classic text on screws and their application to mechanics is by Sir Robert Stawell Ball: A Treatise on the Theory of Screws (1900). Ball was aware of Grassmann's work as he explains in the Biographical Notes to this text in a comment on the Ausdehnungslehre of 1862.
This remarkable work, a development of an earlier volume (1844), by the same author, contains much that is of instruction and interest in connection with the present theory. ... Here we have a very general theory, which includes screw coordinates as a particular case.
The principal proponent of screw theory from a Grassmannian viewpoint was Edward Wyllys Hyde. In 1888 he wrote a paper entitled 'The Directional Theory of Screws' on which Ball comments, again in the Biographical Notes to A Treatise on the Theory of Screws.
The author writes: "I shall define a screw to be the sum of a point-vector and a planevector perpendicular to it, the former being a directed and posited line, the latter the product of two vectors, hence a directed but not posited plane." Prof. Hyde proves by his [sic] calculus many of the fundamental theorems in the present theory in a very concise manner.
Here, P is a point, a a vector and β a bivector. Remember that only in vector 1, 2 and 3-spaces are bivectors necessarily simple. In what follows we will show that in a metric space the point P may always be chosen in such a way that the bivector β is orthogonal to the vector a. This property, when specialised to three-dimensional space, is important for the theory of screws to be developed in the rest of the chapter. We show it as follows: To the above equation, add and subtract a bound vector P*∧a, giving:

S = P*∧a + β*,   β* = β + (P − P*)∧a

Requiring the new bivector β* to be orthogonal to a (that is, β*∘a = 0) then determines P*:

P* = P − (β∘a)/(a∘a)
Finally, substituting for P* and b* in the expression for S above gives the canonical form for the 2entity S, in which the new bivector component is now orthogonal to the vector a. The bound vector component defines a line called the central axis of S.
S = (P − (β∘a)/(a∘a))∧a + (β + ((β∘a)/(a∘a))∧a)    (7.1)
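In three-dimensional mechanics terms, with the bivector represented by its complementary moment vector, this reduction finds the central axis of a force system. The Python sketch below is our own formulation (not from GrassmannAlgebra): given a force f acting at a point r together with a free couple c, it returns a point on the central axis and the pitch:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def central_axis(r, f, c):
    # total moment about the origin, then shift to the point where the
    # residual couple is parallel to f (the canonical form of equation 7.1)
    M = [m + ci for m, ci in zip(cross(r, f), c)]
    ff = dot(f, f)
    pitch = dot(f, M) / ff
    point = [x / ff for x in cross(f, M)]   # point on the central axis
    return point, pitch

point, pitch = central_axis((0, 0, 0), (0, 0, 1), (1, 0, 0))
print(point, pitch)  # → [0.0, 1.0, 0.0] 0.0
```

In the printed case the couple is entirely absorbed by shifting the line of action sideways, so the pitch is zero; a couple parallel to f would instead survive as pure pitch.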
Canonical forms in !2
In a bound vector space of two dimensions (the plane), every bound element can be expressed as a bound vector. To see this we note that any bivector is simple and can be expressed using the vector of the bound vector as one of its factors. For example, if P is a point and a, b1 , b2 , are vectors, any bound element in the plane can be expressed in the form:
P∧a + b1∧b2
But because all bivectors in the plane are scalar multiples of one another, the bivector b1∧b2 can be written as the exterior product of some vector k with a:

b1∧b2 = k∧a

so that the bound element may now be written as the bound vector:

P∧a + b1∧b2 = P∧a + k∧a = (P + k)∧a = P*∧a
We can use GrassmannAlgebra to verify that formula 7.1 gives this result in a 2-space. First we declare the 2-space, and write a and b in terms of basis elements:
ℬ2; a = a e1 + b e2; β = c e1∧e2;
For the canonical expression [7.1] above we wish to show that the bivector term is zero. We do this by converting the interior products to scalar products using ToScalarProducts.

Simplify[ToScalarProducts[β + ((β∘a)/(a∘a))∧a]]
0
In sum: a bound 2-element in the plane, P∧a + β, may always be expressed as a bound vector:
S = P∧a + β = (P − (β∘a)/(a∘a))∧a    (7.2)
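A quick Python sketch of this planar reduction (our own illustration), representing the bivector by its single coordinate k (the coefficient of e1∧e2) and a point by its two vector coordinates; the shifted point P* reproduces the original bound element:

```python
def reduce_to_bound_vector(P, a, k):
    # P ^ a + k e1^e2  ->  P* ^ a, with P* shifted as in formula 7.2
    aa = a[0] ** 2 + a[1] ** 2
    return (P[0] + k * a[1] / aa, P[1] - k * a[0] / aa)

def moment(P, a):
    # coefficient of e1 ^ e2 in P ^ a (the bivector part of the bound vector)
    return P[0] * a[1] - P[1] * a[0]

P, a, k = (0, 0), (1, 0), 2
Pstar = reduce_to_bound_vector(P, a, k)
print(Pstar, moment(Pstar, a) == moment(P, a) + k)  # → (0.0, -2.0) True
```

The check confirms that P*∧a carries exactly the bivector part of P∧a + k e1∧e2, which is the content of formula 7.2.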
Canonical forms in !3
In a bound vector space of three dimensions, that is a 3-plane, every 2-element can be expressed as the sum of a bound vector and a bivector orthogonal to the vector of the bound vector. (The bivector is necessarily simple, because all bivectors in a 3-space are simple.) Such canonical forms are called screws, and will be discussed in more detail in the sections to follow.
Creating 2-entities
A 2-entity may be created by applying the GrassmannAlgebra function CreateElement. For example in 3-dimensional space we create a 2-entity based on the symbol s by entering:
ℬ3; S = CreateElement[s_2]
s1 𝒪∧e1 + s2 𝒪∧e2 + s3 𝒪∧e3 + s4 e1∧e2 + s5 e1∧e3 + s6 e2∧e3
Note that CreateElement automatically declares the generated symbols si as scalars by adding the pattern s_ . We can confirm this by entering Scalars.
Scalars
{a, b, c, d, e, f, g, h, %, (_∘_)?InnerProductQ, s_, _0}
To explicitly factor out the origin and express the 2-element in the form S Pa + b, we can use the GrassmannAlgebra function GrassmannSimplify.
𝒢[S]
(e1 s1 + e2 s2 + e3 s3) + s4 e1∧e2 + s5 e1∧e3 + s6 e2∧e3
X = 𝕆∧a + b
X̄ = 𝕆∧b̄ + ā
7.3
It may be seen that the effect of taking the complement of X when it is in a form referred to the origin is equivalent to interchanging the vector a and the bivector b whilst taking their vector space complements. This means that the vector that was bound through the origin becomes a free (n−1)-vector, whilst the bivector that was free becomes an (n−2)-vector bound through the origin.
X = 𝕆∧a + n∧a + b − n∧a = (𝕆 + n)∧a + (b − n∧a)
Or, equivalently:
X = P∧a + bₚ,   bₚ = b − n∧a
So that the relationship between X and its complement X̄ can finally be written:
X = P∧a + bₚ
X̄ = 𝕆∧b̄ₚ + ā
7.4
Remember, this formula is valid for the hybrid metric [5.33] in an n-plane of arbitrary dimension. We explore its application to 3-planes in the section below.
S = Pₐ∧a + s ā    7.5

where:
a is the vector of the screw,
Pₐ∧a is the central axis of the screw,
s is the pitch of the screw,
s ā is the bivector of the screw.
Remember that ā is the (free) complement of the vector a in the three-plane, and hence is a simple bivector.
Ŝ = Pₐ∧â + s (â)‾    7.6
Note that the unit screw does not have unit magnitude. The magnitude of a screw will be discussed in Section 7.5.
The first term in the expansion of the right-hand side is zero, leaving:
S∧a = s ā∧a
Further, by taking the free complement of this expression and invoking the definition of the interior product we get:
(S∧a)‾ = s (ā∧a)‾ = s (a⌋a)
In sum: We can obtain the pitch of a screw from any of the following formulae:
s = (S∧a)/(ā∧a) = (S∧a)‾/(a⌋a) = (Ŝ∧â)‾    7.7
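These pitch formulae have a classical screw-theory counterpart which we can test numerically. In the sketch below (ours, Python with NumPy, assuming Plücker-style line coordinates rather than the book's notation) a screw S = Pₐ∧a + s ā is carried by the pair (a, m) with m = p×a + s a, and the pitch is recovered as s = (m·a)/(a·a):

```python
# Screw in Plücker-style coordinates: direction a and "bivector part" m.
# This coordinate correspondence is an assumption of the sketch.
import numpy as np

a = np.array([1.0, 2.0, 2.0])    # vector of the screw
p = np.array([0.5, -1.0, 3.0])   # position vector of a point on the central axis
s = 0.25                         # pitch

m = np.cross(p, a) + s * a       # moment of the axis plus the pitch term

# Analogue of formula 7.7: the pitch is the a-component of m
s_rec = np.dot(m, a) / np.dot(a, a)
assert abs(s_rec - s) < 1e-12

# Analogue of formula 7.8: removing the pitch term leaves the moment p x a
# of the central axis line
m_axis = m - s_rec * a
assert np.allclose(m_axis, np.cross(p, a))
```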
The second term in the expansion of the right-hand side is zero (since an element is always orthogonal to its complement), leaving:
S⌋a = (Pₐ∧a)⌋a
The right-hand side of this equation may be expanded using the Interior Common Factor Theorem to give:
S⌋a = (a⌋Pₐ) a − (a⌋a) Pₐ
By taking the exterior product of this expression with a, we eliminate the first term on the righthand side to get:
(S⌋a)∧a = −(a⌋a) (Pₐ∧a)
By dividing through by the square of the magnitude of a, we can express this in terms of unit elements.
(Ŝ⌋â)∧â = −(Pₐ∧â)
In sum: We can obtain the central axis Pₐ∧a of a screw from any of the following formulae:

Pₐ∧a = −((S⌋a)∧a)/(a⌋a),   Pₐ∧â = −(Ŝ⌋â)∧â = â∧(Ŝ⌋â)    7.8
In order to transform the second term into the form we want, we note first that S⌋ā is a scalar, so that we can write:

s ā = ((S⌋ā)/(ā⌋ā)) ā = (S⌋(â)‾) (â)‾
S = −(S⌋â)∧â + (S⌋(â)‾) (â)‾    7.9
This type of decomposition is in fact valid for any m-element S and 1-element a, as we have shown in Chapter 6, formula 6.68. The central axis is the term −(S⌋â)∧â, which is the component of S parallel to â, and the term (S⌋(â)‾)(â)‾ is s ā, the component of S orthogonal to â.
8 Exploring Mechanics
8.1 Introduction
Grassmann algebra applied to the field of mechanics performs an interesting synthesis between what now seem to be regarded as disparate concepts. In particular we will explore the synthesis it can effect between the concepts of force and moment; velocity and angular velocity; and linear momentum and angular momentum. This synthesis has an important concomitant, namely that the form in which the mechanical entities are represented is, for many results of interest, independent of the dimension of the space involved. It may be argued therefore that such a representation is more fundamental than one specifically requiring a three-dimensional context (as indeed does that which uses the Gibbs-Heaviside vector algebra). This is a more concrete result than may be apparent at first sight, since the form, as well as being valid for spaces of dimension three or greater, is also valid for spaces of dimension zero, one or two.

Of most interest, however, is the fact that the complementary form of a mechanical entity takes on a different form depending on the number of dimensions concerned. For example, the velocity of a rigid body is the sum of a bound vector and a bivector in a space of any dimension. Its complement is dependent on the dimension of the space, but in each case it may be viewed as representing the points of the body (if they exist) which have zero velocity. In three dimensions the complementary velocity can specify the axis of rotation of the rigid body, while in two dimensions it can specify the centre (point) of rotation.

Furthermore, some results in mechanics (for example, Newton's second law or the conditions of equilibrium of a rigid body) will be shown not to require the space to be a metric space. On the other hand, use of the Gibbs-Heaviside 'cross product' to express angular momentum or the moment condition immediately supposes the space to possess a metric.
Mechanics as it is known today is in the strange situation of being a field of mathematical physics in which location is very important, and of which the calculus traditionally used (being vectorial) can take no proper account. As already discussed in Chapter 1, one may take the example of the concept of force. A (physical) force is not satisfactorily represented by a vector, yet contemporary practice is still to use a vector calculus for this task. To patch up this inadequacy the concept of moment is introduced and the conditions of equilibrium of a rigid body augmented by a condition on the sum of the moments. The justification for this condition is often not well treated in contemporary texts.

Many of these texts will of course rightly state that forces are not (represented by) free vectors and yet will proceed to use the Gibbs-Heaviside calculus to denote and manipulate them. Although confusing, this inaptitude is usually offset by various comments attached to the symbolic descriptions and calculations. For example, a position is represented by a 'position vector'. A position vector is described as a (free?) vector with its tail (fixed?) to the origin (point?) of the coordinate system. The coordinate system itself is supposed to consist of an origin point and a number of basis vectors. But whereas the vector calculus can cope with the vectors, it cannot cope with the origin. This confusion between vectors, free vectors, bound vectors, sliding vectors and position vectors would not occur if the calculus used to describe them were a calculus of position (points) as well as direction (vectors).
In order to describe the phenomena of mechanics in purely vectorial terms it has been necessary therefore to devise the notions of couple and moment as notions almost distinct from that of force, thus effectively splitting in two all the results of mechanics. In traditional mechanics, to every 'linear' quantity: force, linear momentum, translational velocity, etc., corresponds an 'angular' quantity: moment, angular momentum, angular velocity. In this chapter we will show that by representing mechanical quantities correctly in terms of elements of a Grassmann algebra this dichotomy disappears and mechanics takes on a more unified form.
In particular we will show that there exists a screw ℱ representing a system of forces together with their moments (remember that a system of forces in three dimensions is not necessarily replaceable by a single force); a screw 𝒫 representing the momentum of a system of bodies (linear and angular); and a screw 𝒱 representing the velocity of a rigid body (linear and angular): all invariant combinations of the linear and angular components. For example, the velocity 𝒱 is a complete characterization of the kinematics of the rigid body independent of any particular point on the body used to specify its motion. Expressions for work, power and kinetic energy of a system of forces and a rigid body will be shown to be determinable by an interior product between the relevant screws. For example, the interior product of ℱ with 𝒱 will give the power of the system of forces acting on the rigid body, that is, the sum of the translational and rotational powers.
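The claim that the interior product of ℱ with 𝒱 yields the power can be illustrated in coordinates. In the Euclidean 3-space reading (an assumption of this sketch, ours rather than the book's), a force screw is the pair (f, τ_P) and a velocity screw the pair (ω, v_P); their pairing f·v_P + τ_P·ω is independent of the reference point P:

```python
import numpy as np

f     = np.array([1.0, 2.0, 3.0])    # resultant force vector
tau_P = np.array([0.5, -1.0, 2.0])   # moment about P (vector form of the bivector)
omega = np.array([0.2, 0.3, -0.1])   # angular velocity
v_P   = np.array([1.0, 0.0, 0.5])    # velocity of the body point at P

d = np.array([2.0, -1.0, 1.0])       # displacement from P to another point Q
v_Q   = v_P + np.cross(omega, d)     # rigid-body velocity transfer
tau_Q = tau_P - np.cross(d, f)       # moment transfer: tau_Q = tau_P + (P - Q) x f

power_P = np.dot(f, v_P) + np.dot(tau_P, omega)
power_Q = np.dot(f, v_Q) + np.dot(tau_Q, omega)
assert abs(power_P - power_Q) < 1e-12
```

The two transfer terms cancel by the cyclic symmetry of the scalar triple product, which is why the power is an invariant of the two screws.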
Historical Note The application of the Ausdehnungslehre to mechanics has obtained far fewer proponents than might be expected. This may be due to the fact that Grassmann himself was late going into the subject. His 'Die Mechanik nach den principien der Ausdehnungslehre' (1877) was written just a few weeks before his death. Furthermore, in the period before his ideas became completely submerged by the popularity of the Gibbs-Heaviside system (around the turn of the century) there were few people with sufficient understanding of the Ausdehnungslehre to break new ground using its methods. There are only three people who have written substantially in English using the original concepts of the Ausdehnungslehre: Edward Wyllys Hyde (1888), Alfred North Whitehead (1898), and Henry James Forder (1941). Each of them has discussed the theory of screws in more or less detail, but none has addressed the complete field of mechanics. The principal works in other languages are in German, and apart from Grassmann's paper in 1877 mentioned above, are the book by Jahnke (1905) which includes applications to mechanics, and the short monograph by Lotze (1922) which lays particular emphasis on rigid body mechanics.
8.2 Force
Representing force
The notion of force as used in mechanics involves the concepts of magnitude, sense, and line of action. It will be readily seen in what follows that such a physical entity may be faithfully represented by a bound vector. Similarly, all the usual properties of systems of forces are faithfully represented by the analogous properties of sums of bound vectors. For the moment it is not important whether the space has a metric, or what its dimension is. Let ℱ denote a force. Then:
ℱ = P∧f    8.1

The vector f is called the force vector and expresses the sense and direction of the force. In a metric space it would also express its magnitude. The point P is any point on the line of action of the force. It can be expressed as the sum of the origin point 𝕆 and the position vector n of the point.
This simple representation delineates clearly the role of the (free) vector f. The vector f represents all the properties of the force except the position of its line of action. The exterior product with P has the effect of 'binding' the force vector to the line through P. On the other hand, the expression of a bound vector as a product of points is also useful when expressing certain types of forces, for example gravitational forces. Newton's law of gravitation for the force exerted on a point mass m1 (weighted point) by a point mass m2 may be written:
ℱ12 = (G / R³) (m1 P1)∧(m2 P2)    8.2
Note that this expression correctly changes sign if the masses are interchanged.
Systems of forces
A system of forces may be represented by a sum of bound vectors.
ℱ = ∑ᵢ ℱᵢ = ∑ᵢ Pᵢ∧fᵢ    8.3
A sum of bound vectors is not necessarily reducible to a bound vector: a system of forces is not necessarily reducible to a single force. However, a sum of bound vectors is in general reducible to the sum of a bound vector and a bivector. This is done simply by adding and subtracting the term P∧(∑ᵢ fᵢ) to and from the sum [8.3].
ℱ = P∧(∑ᵢ fᵢ) + ∑ᵢ (Pᵢ − P)∧fᵢ    8.4
This 'adding and subtracting' operation is a common one in our treatment of mechanics. We call it referring the sum to the point P. Note that although the expression for ℱ now involves the point P, it is completely independent of P, since the terms involving P cancel. The sum may be said to be invariant with respect to any point used to express it. Examining the terms in the expression 8.4 above, we can see that since ∑ᵢ fᵢ is a vector (f, say) the first term reduces to the bound vector P∧f representing a force through the chosen point P. The vector f is called the resultant force vector.
f = ∑ᵢ fᵢ
The second term is a sum of bivectors of the form (Pᵢ − P)∧fᵢ. Such a bivector may be seen to faithfully represent a moment: specifically, the moment of the force represented by Pᵢ∧fᵢ about the point P. Let this moment be denoted 𝕄ᵢᴾ.
𝕄ᵢᴾ = (Pᵢ − P)∧fᵢ
To see that it is more reasonable that a moment be represented as a bivector rather than as a vector, one has only to consider that the physical dimensions of a moment are the product of a length and a force unit. The expression for moment above clarifies distinctly how it can arise from two located entities, the bound vector Pᵢ∧fᵢ and the point P, and yet itself be a 'free' entity. The bivector, it will be remembered, has no concept of location associated with it. This has certainly been a source of confusion among students of mechanics using the usual Gibbs-Heaviside three-dimensional vector calculus. It is well known that the moment of a force about a point does not possess the property of location, and yet it still depends on that point. While the notions of 'free' and 'bound' are not properly mathematically characterized, this type of confusion is likely to persist. The second term in [8.4], representing the sum of the moments of the forces about the point P, will be denoted:
𝕄ᴾ = ∑ᵢ (Pᵢ − P)∧fᵢ
Then, any system of forces may be represented by the sum of a bound vector and a bivector.
ℱ = P∧f + 𝕄ᴾ    8.5
The bound vector P∧f represents a force through an arbitrary point P with force vector equal to the sum of the force vectors of the system. The bivector 𝕄ᴾ represents the sum of the moments of the forces about the same point P.
Suppose the system of forces is referred to some other point P*. The system of forces may then be written in either of the two forms:
P∧f + 𝕄ᴾ = P*∧f + 𝕄ᴾ*
Solving for 𝕄ᴾ* gives us a formula relating the moment sums about different points.
𝕄ᴾ* = 𝕄ᴾ + (P − P*)∧f
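This transfer formula is easy to exercise numerically. The sketch below (ours, Python with NumPy) uses the Euclidean 3-space correspondence between bivectors and cross products — an assumption of the sketch, not of the metric-free statement above:

```python
import numpy as np

# A small system of forces: application points and force vectors
points = [np.array(q) for q in ([1., 0., 0.], [0., 2., 1.], [3., 1., -1.])]
forces = [np.array(g) for g in ([0., 1., 0.], [2., 0., 1.], [-1., 1., 1.])]

P  = np.array([0., 0., 0.])     # first reference point
Ps = np.array([1., -2., 0.5])   # second reference point P*

f    = sum(forces)                                          # resultant force vector
M_P  = sum(np.cross(q - P,  g) for q, g in zip(points, forces))   # moment sum about P
M_Ps = sum(np.cross(q - Ps, g) for q, g in zip(points, forces))   # moment sum about P*

# M_P* = M_P + (P - P*) ^ f, in cross-product form
assert np.allclose(M_Ps, M_P + np.cross(P - Ps, f))
```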
If the bound vector term P∧f in formula 8.5 is zero then the system of forces is called a couple. If the bivector term 𝕄ᴾ is zero then the system of forces reduces to a single force.
Equilibrium
If a sum of forces is zero, that is:
ℱ = P∧f + 𝕄ᴾ = 0
then it is straightforward to show that each of P∧f and 𝕄ᴾ must be zero. Indeed, if such were not the case, the bound vector would be equal to the negative of the bivector, a possibility excluded by the implicit presence of the origin in a bound vector and its absence in a bivector. Furthermore, P∧f being zero implies f is zero. These considerations lead us directly to the basic theorem of statics: for a body to be in equilibrium, the sum of the forces must be zero.
ℱ = ∑ᵢ ℱᵢ = 0    8.6
This one expression encapsulates both the usual conditions for equilibrium of a body, that is: that the sum of the force vectors be zero; and that the sum of the moments of the forces about an arbitrary point P be zero.
Thus, a system of forces in a metric 3-plane may be reduced to a single force through an arbitrarily chosen point P plus the vector-space complement of the usual moment vector about P.
ℱ = P∧f + (∑ᵢ (Pᵢ − P) × fᵢ)‾    8.7
8.3 Momentum
The velocity of a particle
Suppose a point P with position vector n, that is, P = 𝕆 + n. Since the origin is fixed, the velocity Ṗ of the point P is clearly a vector given by the time-derivative of the position vector of P.

Ṗ = dP/dt = dn/dt    8.8
Representing momentum
As may be suspected from Newton's second law, momentum is of the same tensorial nature as force. The momentum of a particle is represented by a bound vector as is a force. The momentum of a system of particles (or of a rigid body, or of a system of rigid bodies) is representable by a sum of bound vectors as is a system of forces. The momentum of a particle comprises three factors: the mass, the position, and the velocity of the particle. The mass is represented by a scalar, the position by a point, and the velocity by a vector.
𝒫 = P∧(m Ṗ)    8.9

Here:
𝒫 is the particle momentum,
m is the particle mass,
P is the particle position point,
Ṗ is the particle velocity,
mP is the particle point mass,
mṖ is the particle momentum vector.
The momentum of a particle may be viewed either as a momentum vector bound to a line through the position of the particle, or as a velocity vector bound to a line through the point mass. The simple description [8.9] delineates clearly the role of the (free) vector mṖ. It represents all the properties of the momentum of a particle except its position.
𝒫 = ∑ᵢ 𝒫ᵢ = ∑ᵢ Pᵢ∧(mᵢ Ṗᵢ)    8.10
A sum of bound vectors is not necessarily reducible to a bound vector: the momentum of a system of particles is not necessarily reducible to the momentum of a single particle. However, a sum of bound vectors is in general reducible to the sum of a bound vector and a bivector. This is done simply by adding and subtracting the term P∧(∑ᵢ mᵢ Ṗᵢ) to and from the sum [8.10].
𝒫 = P∧(∑ᵢ mᵢ Ṗᵢ) + ∑ᵢ (Pᵢ − P)∧(mᵢ Ṗᵢ)    8.11

It cannot be too strongly emphasized that although the momentum of the system has now been referred to the point P, it is completely independent of P.
Examining the terms in formula 8.11 we see that since ∑ᵢ mᵢ Ṗᵢ is a vector (l, say) the first term reduces to the bound vector P∧l representing the (linear) momentum of a 'particle' situated at the point P. The vector l is called the linear momentum of the system. The 'particle' may be viewed as a particle with mass equal to the total mass of the system and with velocity equal to the velocity of the centre of mass.
l = ∑ᵢ mᵢ Ṗᵢ
The second term is a sum of bivectors of the form (Pᵢ − P)∧(mᵢ Ṗᵢ). Such a bivector may be seen to faithfully represent the moment of momentum or angular momentum of the system of particles about the point P. Let this moment of momentum for particle i be denoted Hᵢᴾ.
Hᵢᴾ = (Pᵢ − P)∧(mᵢ Ṗᵢ)
The expression for angular momentum Hᵢᴾ above clarifies distinctly how it can arise from two located entities, the bound vector Pᵢ∧(mᵢ Ṗᵢ) and the point P, and yet itself be a 'free' entity. Similarly to the notion of moment, the notion of angular momentum treated using the three-dimensional Gibbs-Heaviside vector calculus has caused some confusion amongst students of mechanics: the angular momentum of a particle about a point depends on the positions of the particle and of the point, and yet itself has no property of location.
The second term in formula 8.11, representing the sum of the moments of momenta about the point P, will be denoted Hᴾ.
Hᴾ = ∑ᵢ (Pᵢ − P)∧(mᵢ Ṗᵢ)
Then the momentum 𝒫 of any system of particles may be represented by the sum of a bound vector and a bivector.
𝒫 = P∧l + Hᴾ    8.12
The bound vector P∧l represents the linear momentum of the system referred to an arbitrary point P. The bivector Hᴾ represents the angular momentum of the system about the point P.
Referring this momentum sum to a point P by adding and subtracting the term P∧(∑ᵢ lᵢ) we obtain:

𝒫 = ∑ᵢ 𝒫ᵢ = P∧(∑ᵢ lᵢ) + ∑ᵢ (Pᵢ − P)∧lᵢ + ∑ᵢ Hᵢᴾᵢ
It is thus natural to represent the linear momentum vector l of the system of bodies by:
l = ∑ᵢ lᵢ
That is, the momentum of a system of bodies is of the same form as the momentum of a system of particles. The linear momentum vector of the system is the sum of the linear momentum vectors of the bodies. The angular momentum of the system is made up of two parts: the sum of the angular momenta of the bodies about their respective chosen points; and the sum of the moments of the linear momenta of the bodies (referred to their respective chosen points) about the point P. Again it may be worth emphasizing that the total momentum 𝒫 is independent of the point P, whilst its component terms are dependent on it. However, we can easily convert the formulae to refer to some other point, say P*.
𝒫 = P∧l + Hᴾ = P*∧l + Hᴾ*
Hence the angular momentum referred to the new point is given by:
Hᴾ* = Hᴾ + (P − P*)∧l
Here:
M is the total mass of the system,
Pᵢ is the position of the ith particle or centre of mass of the ith body,
PG is the position of the centre of mass of the system.
If now we refer the momentum to the centre of mass, that is, we choose P to be PG , we can write the momentum of the system as:
𝒫 = PG∧(M ṖG) + Hᴾᴳ    8.13
Thus the momentum 𝒫 of a system is equivalent to the momentum of a particle of mass equal to the total mass M of the system situated at the centre of mass PG, plus the angular momentum about the centre of mass.
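Formula 8.13 and the point-transfer relation above can be checked in coordinates. The sketch below (ours, Python with NumPy, Euclidean 3-space with bivectors in cross-product form) computes the linear momentum, the angular momentum about the centre of mass, and verifies both l = M ṖG and the transfer relation:

```python
import numpy as np

ms = [1.0, 2.0, 3.0]                                              # particle masses
rs = [np.array(q) for q in ([0., 0., 0.], [1., 2., 0.], [0., 1., 3.])]  # positions
vs = [np.array(q) for q in ([1., 0., 0.], [0., 1., 1.], [2., 0., -1.])] # velocities

M  = sum(ms)
l  = sum(m * v for m, v in zip(ms, vs))        # linear momentum vector
rG = sum(m * r for m, r in zip(ms, rs)) / M    # centre of mass P_G
vG = sum(m * v for m, v in zip(ms, vs)) / M    # velocity of the centre of mass

# l equals the total mass times the velocity of the centre of mass
assert np.allclose(l, M * vG)

# angular momentum about P_G, then about an arbitrary point P
H_G = sum(np.cross(r - rG, m * v) for m, r, v in zip(ms, rs, vs))
P = np.array([5., -1., 2.])
H_P = sum(np.cross(r - P, m * v) for m, r, v in zip(ms, rs, vs))

# H_P = H_G + (P_G - P) ^ l, the cross-product form of the transfer formula
assert np.allclose(H_P, H_G + np.cross(rG - P, l))
```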
Thus, the momentum of a system in a metric 3-plane may be reduced to the momentum of a single particle through an arbitrarily chosen point P plus the vector-space complement of the usual moment of momentum vector about P.
𝒫 = P∧l + (∑ᵢ (Pᵢ − P) × lᵢ)‾    8.14
In what follows, it will be supposed for simplicity that the masses are not time-varying. Differentiating the momentum [8.10] with respect to time, the first term of each derivative, Ṗᵢ∧(mᵢ Ṗᵢ), is zero, so this expression becomes:

𝒫̇ = ∑ᵢ Pᵢ∧(mᵢ P̈ᵢ)
Consider now the system with momentum:

𝒫 = P∧l + Hᴾ    8.15

Differentiating with respect to time gives the rate of change of momentum:

𝒫̇ = Ṗ∧l + P∧l̇ + Ḣᴾ    8.16
The term Ṗ∧l, or what is equivalent, the term Ṗ∧(M ṖG), is zero under any of the following conditions:

Ṗ is zero (P is a fixed point).
l is zero.
Ṗ is parallel to ṖG.
P is equal to PG.
In particular then, when the point to which the momentum is referred is the centre of mass, the rate of change of momentum [8.16] becomes:

𝒫̇ = PG∧l̇ + Ḣᴾᴳ    8.17
ℱ = 𝒫̇    8.18

This equation captures Newton's law in its most complete form, encapsulating both linear and angular terms. Substituting for ℱ and 𝒫̇ from equations 8.5 and 8.16 we have:

P∧f + 𝕄ᴾ = Ṗ∧l + P∧l̇ + Ḣᴾ    8.19
By equating the bound vector terms of [8.19] we obtain the vector equation:

f = l̇
Equating the bivector terms of [8.19] gives:

𝕄ᴾ = Ṗ∧l + Ḣᴾ

In a metric 3-plane, it is the vector complement of this bivector equation which is usually given as the moment condition.
If P is a fixed point so that Ṗ is zero, Newton's law [8.18] is equivalent to the pair of equations:

f = l̇
𝕄ᴾ = Ḣᴾ
8.20
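This pair of equations can be illustrated with a finite-difference check for a single particle referred to the fixed origin. The sketch is ours (Python with NumPy); the trajectory is an arbitrary smooth choice, and the force is m times the acceleration:

```python
import numpy as np

m = 2.0
r   = lambda t: np.array([np.cos(t), np.sin(t), t])       # arbitrary trajectory
v   = lambda t: np.array([-np.sin(t), np.cos(t), 1.0])    # dr/dt
acc = lambda t: np.array([-np.cos(t), -np.sin(t), 0.0])   # dv/dt

t, h = 0.7, 1e-6
H = lambda s: np.cross(r(s), m * v(s))    # angular momentum about the origin

# f = l-dot: the force m*acc is the time-derivative of the linear momentum m*v
ldot_fd = m * (v(t + h) - v(t - h)) / (2 * h)
assert np.allclose(ldot_fd, m * acc(t), atol=1e-6)

# M_O = H_O-dot: the moment of the force about the fixed origin equals the
# time-derivative of the angular momentum (the v x mv term vanishes)
Hdot_fd = (H(t + h) - H(t - h)) / (2 * h)
moment = np.cross(r(t), m * acc(t))
assert np.allclose(Hdot_fd, moment, atol=1e-6)
```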
9 Grassmann Algebra
9.1 Introduction
The Grassmann algebra is an algebra of "numbers" composed of linear sums of elements from any of the exterior linear spaces generated from a given linear space. These are numbers in the same sense that, for example, a complex number or a matrix is a number. The essential property that an algebra has, in addition to being a linear space, is, broadly speaking, one of closure under a product operation. That is, the algebra has a product operation such that the product of any elements of the algebra is also an element of the algebra. Thus the exterior product spaces Λm discussed up to this point are not algebras, since products (exterior, regressive or interior) of their elements are not generally elements of the same space. However, there are two important exceptions. Λ0 is not only a linear space, but an algebra (and a field) as well under the exterior and interior products. Λn is an algebra and a field under the regressive product.

Many of our examples will be using GrassmannAlgebra functions, and because we will often be changing the dimension of the space depending on the example under consideration, we will indicate a change without comment simply by entering the appropriate DeclareBasis symbol from the palette. For example the following entry declares a 3-space:

ℬ3 ;

The elements of all the exterior linear spaces Λm together form a linear space in their own right, which we call Λ. A basis for Λ is obtained from the currently declared basis of Λ1 by collecting together all the basis elements of each of the Λm.
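The collection step can be sketched in a few lines of Python (ours, not the package): gathering the basis elements of every Λm generated by an n-element basis of Λ1 yields the 2ⁿ basis elements of Λ.

```python
from itertools import combinations

def algebra_basis(basis_1):
    """Collect the basis elements of Lambda_0 .. Lambda_n as index tuples."""
    n = len(basis_1)
    return [c for m in range(n + 1) for c in combinations(basis_1, m)]

basis = algebra_basis(("e1", "e2", "e3"))

# () stands for the basis element 1 of Lambda_0 (the scalars);
# the full Grassmann algebra has dimension 2^n
assert len(basis) == 2 ** 3
assert ("e1", "e2", "e3") in basis   # the single basis element of Lambda_3
```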
If we wish to generate a general symbolic Grassmann number in the currently declared basis, we can use the GrassmannAlgebra function CreateGrassmannNumber. CreateGrassmannNumber[symbol] creates a Grassmann number with scalar coefficients formed by subscripting the symbol given. For example, if we enter CreateGrassmannNumber[x], we obtain:
CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3
The positional ordering of the xi with respect to the basis elements in any given term is due to Mathematica's internal ordering routines, and does not affect the meaning of the result. The CreateGrassmannNumber operation automatically adds the pattern for the generated coefficients xi to the list of declared scalars, as we see by entering Scalars.
Scalars
{a, b, c, d, e, f, g, h, %, (_⌋_)?InnerProductQ, x_, _₀}
If we wish to enter our own coefficients, we can use CreateGrassmannNumber with a placeholder (□) as argument, and then tab through the placeholders generated, replacing them with the desired coefficients:

CreateGrassmannNumber[□]
□ + □ e1 + □ e2 + □ e3 + □ e1∧e2 + □ e1∧e3 + □ e2∧e3 + □ e1∧e2∧e3
Note that the coefficients entered here will not be automatically put on the list of declared scalars. If several Grassmann numbers are required, we can apply the operation to a list of symbols. That is, CreateGrassmannNumber is Listable. Below we generate three numbers in 2-space.
ℬ2 ; CreateGrassmannNumber[{x, y, z}] // MatrixForm
x1 + e1 x3 + e2 x4 + x2 e1∧e2
y1 + e1 y3 + e2 y4 + y2 e1∧e2
z1 + e1 z3 + e2 z4 + z2 e1∧e2
Grassmann numbers can of course be entered in general symbolic form, not just using basis elements. For example a Grassmann number might take the form:
2 + x − 3 y + 5 x∧y + x∧y∧z
In some cases it might be faster to change basis temporarily in order to generate a template with all but the required scalars formed.
The scalar component of a Grassmann number is called its body; the rest of a Grassmann number is called its soul. The soul may be obtained from a Grassmann number by using the Soul function. The soul of the number X is:
Soul[X]
e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3

Body and Soul apply to Grassmann numbers whose components are given in any form.

Z = (x⌋y) + ((x∧(y + 2))⌋(z − 2)) + (z∨(y∧x));
{Body[Z], Soul[Z]}
{x⌋y + 2 (x⌋z) + z∨y∧x, −4 x + x∧y∧z − 2 x∧y}
The first two terms of the body are scalar because they are interior products of 1-elements (scalar products). The third term of the body is a scalar in a space of 3 dimensions.
Body and Soul are both Listable. Taking the Soul of the list of Grassmann numbers generated in 2-space above gives
Soul[{x0 + e1 x1 + e2 x2 + x3 e1∧e2, y0 + e1 y1 + e2 y2 + y3 e1∧e2, z0 + e1 z1 + e2 z2 + z3 e1∧e2}] // MatrixForm
e1 x1 + e2 x2 + x3 e1∧e2
e1 y1 + e2 y2 + y3 e1∧e2
e1 z1 + e2 z2 + z3 e1∧e2
Thus factors of odd grade anti-commute; whereas, if one of the factors is of even grade, they commute. This means that, if X is a Grassmann number whose components are all of odd grade, then it is nilpotent. The even components of a Grassmann number can be extracted with the function EvenGrade and the odd components with OddGrade.
X
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3
EvenGrade[X]
x0 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3
OddGrade[X]
e1 x1 + e2 x2 + e3 x3 + x7 e1∧e2∧e3
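The nilpotency of purely odd numbers is easy to confirm with a toy implementation (ours, in Python; the GrassmannAlgebra package is not required). A Grassmann number is stored as a map from sorted basis-index tuples to coefficients, and the exterior product sorts concatenated indices while counting swaps:

```python
def wedge(X, Y):
    """Exterior product of dict-represented Grassmann numbers (sketch)."""
    Z = {}
    for bx, cx in X.items():
        for by, cy in Y.items():
            idx, sign = list(bx + by), 1
            for i in range(len(idx)):             # bubble sort, counting swaps
                for j in range(len(idx) - 1 - i):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            if len(set(idx)) == len(idx):         # repeated factor -> zero
                b = tuple(idx)
                Z[b] = Z.get(b, 0) + sign * cx * cy
    return {b: c for b, c in Z.items() if c != 0}

def even_grade(X): return {b: c for b, c in X.items() if len(b) % 2 == 0}
def odd_grade(X):  return {b: c for b, c in X.items() if len(b) % 2 == 1}

X = {(): 1.0, (1,): 2.0, (2,): 3.0, (3,): 4.0, (1, 2): 5.0, (1, 2, 3): 6.0}
Xo = odd_grade(X)

assert wedge(Xo, Xo) == {}                 # a purely odd number is nilpotent
assert even_grade(X) == {(): 1.0, (1, 2): 5.0}
```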
It should be noted that the result returned by EvenGrade or OddGrade is dependent on the dimension of the current space. For example, the calculations above for the even and odd components of the number X were carried out in a 3-space. If now we change to a 2-space and repeat the calculations, we will get a different result for OddGrade[X] because the term of grade three is necessarily zero in this 2-space.
ℬ2 ; OddGrade[X]
e1 x1 + e2 x2 + e3 x3
For EvenGrade and OddGrade to apply, it is not necessary that the Grassmann number be in terms of basis elements or simple exterior products. Applying EvenGrade and OddGrade to the number Z below still separates the number into its even and odd components:
Z = (x⌋y) + ((x∧(y + 2))⌋(z − 2)) + (z∨(y∧x));
To test whether a number is even or odd we use EvenGradeQ and OddGradeQ. Consider again the general Grassmann number X in 3-space.
{EvenGradeQ[X], OddGradeQ[X], EvenGradeQ[EvenGrade[X]], OddGradeQ[EvenGrade[X]]}
{False, False, True, False}

EvenGradeQ and OddGradeQ are not Listable. When applied to a list of elements, they require all the elements in the list to conform in order to return True. They can of course still be mapped over a list of elements to question the evenness or oddness of the individual components.

EvenGradeQ /@ {1, x, x∧y, x∧y∧z}
{True, False, True, False}
Finally, there is the question as to whether the Grassmann number 0 is even or odd. It may be recalled from Chapter 2 that the single symbol 0 is actually a shorthand for the zero element of any of the exterior linear spaces, and hence is of indeterminate grade. Entering Grade[0] will return a flag Grade0 which can be manipulated as appropriate. Therefore, both EvenGradeQ[0] and OddGradeQ[0] will return False.
{Grade[0], EvenGradeQ[0], OddGradeQ[0]}
{Grade0, False, False}
Because it may be necessary to expand out a Grassmann number into a sum of terms (each of which has a definite grade) before the grades can be calculated, and because Mathematica will reorganize the ordering of those terms in the sum according to its own internal algorithms, direct correspondence with the original form is often lost. However, if a list of the grades of each term in an expansion of a Grassmann number is required, we should:

Expand and simplify the number into a sum of terms.
Create a list of the terms.
Map Grade over the list.

For example:
A = 𝒢[Z]
−4 x + x⌋y + 2 (x⌋z) + (x∧y)⌋z − (x∧y∧z∨1) − 2 x∧y
A = List @@ A
{−4 x, x⌋y, 2 (x⌋z), (x∧y)⌋z, −(x∧y∧z∨1), −2 x∧y}
Grade /@ A
{1, 0, 0, 1, 0, 2}
To extract the components of a Grassmann number of a given grade or grades, we can use the GrassmannAlgebra ExtractGrade[m][X] function which takes a Grassmann number and extracts the components of grade m. Extracting the components of grade 1 from the number Z defined above gives:
ExtractGrade[1][Z]
−4 x + (x∧y)⌋z

ExtractGrade also works on lists or tensors of Grassmann numbers.

DeclareExtraScalars[{y_, z_}]
{a, b, c, d, e, f, g, h, %, (_⌋_)?InnerProductQ, z_, x_, y_, _₀}

ExtractGrade[1][{x0 + e1 x1 + e2 x2 + x3 e1∧e2, y0 + e1 y1 + e2 y2 + y3 e1∧e2, z0 + e1 z1 + e2 z2 + z3 e1∧e2}] // MatrixForm
e1 x1 + e2 x2
e1 y1 + e2 y2
e1 z1 + e2 z2
This feature allows the GrassmannAlgebra package to deal with complex numbers just as it would any other scalars.
𝒢[((a + ⅈ b) x)∧(ⅈ y)]
(ⅈ a − b) x∧y
We now create a second Grassmann number Y so that we can look at various operations applied to two Grassmann numbers in 3-space.
Y = CreateGrassmannNumber[y]
y0 + e1 y1 + e2 y2 + e3 y3 + y4 e1∧e2 + y5 e1∧e3 + y6 e2∧e3 + y7 e1∧e2∧e3
The exterior product of two general Grassmann numbers in 3-space is obtained by multiplying out the numbers termwise and simplifying the result.
When the bodies of the numbers are zero we get only components of grades two and three.
𝒢[Soul[X]∧Soul[Y]]
(−x2 y1 + x1 y2) e1∧e2 + (−x3 y1 + x1 y3) e1∧e3 + (−x3 y2 + x2 y3) e2∧e3 + (x6 y1 − x5 y2 + x4 y3 + x3 y4 − x2 y5 + x1 y6) e1∧e2∧e3
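These component formulas can be spot-checked numerically with a small Python sketch of the exterior product (ours; the dictionary representation is an assumption of the sketch, not the package's internals):

```python
def wedge(X, Y):
    """Exterior product: sort concatenated index tuples, counting swaps."""
    Z = {}
    for bx, cx in X.items():
        for by, cy in Y.items():
            idx, sign = list(bx + by), 1
            for i in range(len(idx)):
                for j in range(len(idx) - 1 - i):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            if len(set(idx)) == len(idx):    # repeated basis factor -> zero
                Z[tuple(idx)] = Z.get(tuple(idx), 0) + sign * cx * cy
    return Z

x1, x2, x3, x4, x5, x6, x7 = 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0
y1, y2, y3, y4, y5, y6, y7 = 1.5, -1.0, 2.0, 0.5, 1.0, -2.0, 3.0
soulX = {(1,): x1, (2,): x2, (3,): x3, (1,2): x4, (1,3): x5, (2,3): x6, (1,2,3): x7}
soulY = {(1,): y1, (2,): y2, (3,): y3, (1,2): y4, (1,3): y5, (2,3): y6, (1,2,3): y7}

Z = wedge(soulX, soulY)
# e1^e2 coefficient and e1^e2^e3 coefficient, as displayed above
assert abs(Z[(1, 2)] - (-x2*y1 + x1*y2)) < 1e-12
assert abs(Z[(1, 2, 3)] - (x6*y1 - x5*y2 + x4*y3 + x3*y4 - x2*y5 + x1*y6)) < 1e-12
```

The grade-3 soul components contribute nothing here, since any product of a trivector with a non-scalar repeats a basis factor in a 3-space.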
We cannot obtain the result in the form of a pure exterior product without relating the unit n-element to the basis n-element. In general these two elements may be related by a scalar multiple, which we write here as σ. In 3-space this relationship is 1̄ ≡ σ e1∧e2∧e3. To obtain the result as an exterior product, but one that will also involve the scalar σ, we use the GrassmannAlgebra function ToCongruenceForm.
Z1 = ToCongruenceForm[Z]
(1/σ) (x7 y0 + x6 y1 - x5 y2 + x4 y3 + x3 y4 - x2 y5 + x1 y6 + x0 y7 +
  e1 (x7 y1 - x5 y4 + x4 y5 + x1 y7) + e2 (x7 y2 - x6 y4 + x4 y6 + x2 y7) +
  e3 (x7 y3 - x6 y5 + x5 y6 + x3 y7) + (x7 y4 + x4 y7) e1∧e2 +
  (x7 y5 + x5 y7) e1∧e3 + (x7 y6 + x6 y7) e2∧e3 + x7 y7 e1∧e2∧e3)
In a space with a Euclidean metric σ is unity. GrassmannSimplify looks at the currently declared metric and substitutes the value of σ. Since the currently declared metric is by default Euclidean, applying GrassmannSimplify puts σ to unity.
The complement of X is denoted X̄. Entering X̄ applies the complement operation, but does not simplify it in any way.
X x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 e2 + x5 e1 e3 + x6 e2 e3 + x7 e1 e2 e3
Further simplification to an explicit Grassmann number depends on the metric of the space concerned and may be accomplished by applying GrassmannSimplify. GrassmannSimplify will look at the metric and make the necessary transformations. In the Euclidean metric assumed by GrassmannAlgebra as the default we have:
𝒢[X̄]
e3 x4 - e2 x5 + e1 x6 + x7 + x3 e1∧e2 - x2 e1∧e3 + x1 e2∧e3 + x0 e1∧e2∧e3
For metrics other than Euclidean, the expression for the complement will be more complex. We can explore the complement of a general Grassmann number in a general metric by declaring a general metric with DeclareMetric and then applying GrassmannSimplify to X̄. As an example, we take the complement of a general Grassmann number in a 2-space with a general metric. We choose a 2-space rather than a 3-space because the result is less complex to display.
𝕍2; DeclareMetric[g]
{{g1,1, g1,2}, {g1,2, g2,2}}
U = CreateGrassmannNumber[u]
u0 + e1 u1 + e2 u2 + u3 e1∧e2
𝒢[Ū] // Simplify
(u3 (g1,1 g2,2 - g1,2²) + e2 (u1 g1,1 + u2 g1,2) - e1 (u1 g1,2 + u2 g2,2) + u0 e1∧e2) /
 √(g1,1 g2,2 - g1,2²)
The interior product X⨼Y is zero if the components of X are of lesser grade than those of Y. For two general Grassmann numbers, some of the products of the components will therefore be zero. By way of example we take two general Grassmann numbers U and V in 2-space:
𝕍2; {U = CreateGrassmannNumber[u], V = CreateGrassmannNumber[w]}
{u0 + e1 u1 + e2 u2 + u3 e1∧e2, w0 + e1 w1 + e2 w2 + w3 e1∧e2}
Note that there are no products of the form ei⨼(ej∧ek), as these have been put to zero by GrassmannSimplify. If we wish, we can convert the remaining interior products to scalar products using the GrassmannAlgebra function ToScalarProducts.
W2 = ToScalarProducts[W1]
u0 w0 + e1 u1 w0 + e2 u2 w0 + (e1⨼e1) u1 w1 + (e1⨼e2) u2 w1 -
 (e1⨼e2) e1 u3 w1 + (e1⨼e1) e2 u3 w1 + (e1⨼e2) u1 w2 + (e2⨼e2) u2 w2 -
 (e2⨼e2) e1 u3 w2 + (e1⨼e2) e2 u3 w2 - (e1⨼e2)² u3 w3 +
 (e1⨼e1) (e2⨼e2) u3 w3 + u3 w0 e1∧e2
Finally, we can substitute the values from the currently declared metric tensor for the scalar products using the GrassmannAlgebra function ToMetricForm. For example, if we first declare a general metric:
DeclareMetric[g]
{{g1,1, g1,2}, {g1,2, g2,2}}
ToMetricForm[W2]
u0 w0 + e1 u1 w0 + e2 u2 w0 + u1 w1 g1,1 + e2 u3 w1 g1,1 + u2 w1 g1,2 -
 e1 u3 w1 g1,2 + u1 w2 g1,2 + e2 u3 w2 g1,2 - u3 w3 g1,2² + u2 w2 g2,2 -
 e1 u3 w2 g2,2 + u3 w3 g1,1 g2,2 + u3 w0 e1∧e2
We can expand these interior products to inner products and replace them by elements of the metric tensor by using the GrassmannAlgebra function ToMetricForm. For example in the case of a Euclidean metric we have:
ToMetricForm[Z]
x0 y0 + e1 x1 y0 + e2 x2 y0 + e3 x3 y0 + x1 y1 + e2 x4 y1 + e3 x5 y1 + x2 y2 -
 e1 x4 y2 + e3 x6 y2 + x3 y3 - e1 x5 y3 - e2 x6 y3 + x4 y4 + e3 x7 y4 + x5 y5 -
 e2 x7 y5 + x6 y6 + e1 x7 y6 + x7 y7 + x4 y0 e1∧e2 + x7 y3 e1∧e2 + x5 y0 e1∧e3 -
 x7 y2 e1∧e3 + x6 y0 e2∧e3 + x7 y1 e2∧e3 + x7 y0 e1∧e2∧e3
Expanding products
ExpandProducts performs the first level of the simplification operation by simply expanding out any products. It treats scalars as elements of the Grassmann algebra, just like those of higher grade. Taking the expression A and expanding out both the exterior and interior products gives:
A1 = ExpandProducts[A]
1∧2∧3 + 1∧2⨼z + 1∧y∧3 + 1∧y⨼z + x∧2∧3 + x∧2⨼z + x∧y∧3 + x∧y⨼z +
 1∧y∧x∧3 + 1∧y∧x⨼z + x∧y∧x∧3 + x∧y∧x⨼z
Factoring scalars
FactorScalars collects all the scalars in each term, whether they occur by way of ordinary multiplication or by way of exterior multiplication, and multiplies them into a scalar factor at the beginning of each term. (Remember that both the exterior and interior product of scalars is a scalar.) Mathematica will also automatically rearrange the terms in this sum according to its internal ordering algorithm.
A2 = FactorScalars[A1]
6 + 6 x + 3 y + 2 (1⨼z) + 2 (x⨼z) + y⨼z + x∧y⨼z + y∧x⨼z + x∧y∧x⨼z +
 3 x∧y + 3 y∧x + 3 x∧y∧x
Reordering factors
In order to simplify further it is necessary to put the factors of each term into a canonical order, so that terms of opposite sign in an exterior product will cancel and any inner product will be transformed into just one of its two symmetric forms. This reordering may be achieved by using the GrassmannAlgebra function ToStandardOrdering. (The rules for GrassmannAlgebra standard ordering are given in the package documentation.)
A4 = ToStandardOrdering[A3]
6 + 6 x + 3 y + 2 (x⨼z) + y⨼z
Simplifying expressions
We can perform all the simplifying operations above with GrassmannSimplify. Applying GrassmannSimplify to our original expression A we get the expression A4 .
𝒢[A]
6 + 6 x + 3 y + 2 (x⨼z) + y⨼z
Direct computation is generally an inefficient way to calculate a power, since all the terms are expanded out before checking which of them are zero. As will be seen below, we can generally compute powers more efficiently by using knowledge of which terms in the product will turn out to be zero.
𝒢[{Xe, Xe∧Xe, Xe∧Xe∧Xe, Xe∧Xe∧Xe∧Xe}] // Simplify // TableForm
x0 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3
x0 (x0 + 2 (x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3))
x0² (x0 + 3 (x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3))
x0³ (x0 + 4 (x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3))
The soul of the even part of a general Grassmann number in 3-space happens to be nilpotent because it is simple (see Section 2.10).
Xes = EvenSoul[X]
x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3
𝒢[Xes∧Xes]
0
However, in the general case an even soul will not be nilpotent, but its powers will become zero when the grades of the terms in the power expansion exceed the dimension of the space. We can show this by calculating the powers of an even soul in spaces of successively higher dimensions. Let
Zes = m∧p + q∧r∧s∧t + u∧v∧w∧x∧y∧z;
Table[{𝕍n; Dimension, 𝒢[Zes∧Zes]}, {n, 3, 8}] // TableForm
3   0
4   0
5   0
6   2 m∧p∧q∧r∧s∧t
7   2 m∧p∧q∧r∧s∧t
8   2 m∧p∧q∧r∧s∧t + 2 m∧p∧u∧v∧w∧x∧y∧z
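The same behaviour (an even soul nilpotent in 3-space, but not in 4-space) can be reproduced numerically with the bitmask exterior-product sketch used earlier; the helpers below are our own illustration, not the package:

```python
def _sign(a, b):
    # parity of the number of index pairs (i in a, j in b) with i > j
    s, a = 0, a >> 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def wedge(x, y):
    # exterior product of multivectors stored as {basis-bitmask: coefficient}
    out = {}
    for ka, va in x.items():
        for kb, vb in y.items():
            if ka & kb:
                continue
            k = ka | kb
            out[k] = out.get(k, 0) + _sign(ka, kb) * va * vb
    return {k: v for k, v in out.items() if v}

# in 4-space the even soul e1^e2 + e3^e4 is NOT nilpotent:
B = {0b0011: 1, 0b1100: 1}                 # e1^e2 + e3^e4
print(wedge(B, B))                         # {15: 2}, i.e. 2 e1^e2^e3^e4

# but in 3-space every even soul is simple, hence nilpotent:
B3 = {0b011: 3, 0b101: -1, 0b110: 2}       # x4 e1^e2 + x5 e1^e3 + x6 e2^e3
print(wedge(B3, B3))                       # {}
```
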
X^p ≡ Xe^(p−1) ∧ (Xe + p Xo)
where the powers p and p−1 refer to exterior powers of the Grassmann numbers. For example, the cube of a general Grassmann number X in 3-space may then be calculated as
𝕍3; Xe = EvenGrade[X]; Xo = OddGrade[X];
𝒢[Xe∧Xe∧(Xe + 3 Xo)]
x0³ + 3 e1 x0² x1 + 3 e2 x0² x2 + 3 e3 x0² x3 + 3 x0² x4 e1∧e2 + 3 x0² x5 e1∧e3 +
 3 x0² x6 e2∧e3 + (x0 (-6 x2 x5 + 6 (x3 x4 + x1 x6)) + 3 x0² x7) e1∧e2∧e3
9.1
This algorithm is used in the general power function GrassmannPower described below.
Underscripts indicate the grade of the component. (Components may be a sum of several terms; components with no underscript are 1-elements by default.) As a first example we take a general element in 3-space.
S = s + s₂ + s₃;
S∧S
2 s∧s₂
It is clear from this that because this expression is of grade 3, further multiplication by S would give zero. It thus represents the highest non-zero power (the square) of a general bodiless element in 3-space. To generalize this result, we proceed as follows:
1) Determine an expression for the highest non-zero power of an even Grassmann number with no body in an even-dimensional space.
2) Substitute this expression into the general expression for the positive power of a Grassmann number from the last section.
3) Repeat these steps for the case of an odd-dimensional space.
The highest non-zero power pmax of an even Grassmann number (with no body) can be seen to be that which enables the smallest term (of grade 2, and hence commutative) to be multiplied by itself the largest number of times without the result being zero. For a space of even dimension n this is clearly pmax = n/2. For example, let X₂ be a 2-element in a 4-space.
𝕍4; X₂ = CreateElement[x₂]
x1 e1∧e2 + x2 e1∧e3 + x3 e1∧e4 + x4 e2∧e3 + x5 e2∧e4 + x6 e3∧e4
The square of this number is a 4-element. Any further multiplication by a bodyless Grassmann number will give zero.
𝒢[X₂∧X₂]
(-2 x2 x5 + 2 (x3 x4 + x1 x6)) e1∧e2∧e3∧e4
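The coefficient of this square can be verified numerically with the bitmask exterior-product sketch (again, an independent illustration with our own helper names, not the package):

```python
def _sign(a, b):
    # parity of the number of index pairs (i in a, j in b) with i > j
    s, a = 0, a >> 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def wedge(x, y):
    # exterior product of multivectors stored as {basis-bitmask: coefficient}
    out = {}
    for ka, va in x.items():
        for kb, vb in y.items():
            if ka & kb:
                continue
            k = ka | kb
            out[k] = out.get(k, 0) + _sign(ka, kb) * va * vb
    return {k: v for k, v in out.items() if v}

# x1 e1^e2 + x2 e1^e3 + x3 e1^e4 + x4 e2^e3 + x5 e2^e4 + x6 e3^e4
x1, x2, x3, x4, x5, x6 = 2, -3, 5, 7, 1, -4
X2 = {0b0011: x1, 0b0101: x2, 0b1001: x3, 0b0110: x4, 0b1010: x5, 0b1100: x6}
sq = wedge(X2, X2)
print(sq)    # a single e1^e2^e3^e4 term with the stated coefficient
assert sq == {0b1111: -2 * x2 * x5 + 2 * (x3 * x4 + x1 * x6)}
```
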
Substituting s₂ for Xe, s for Xo, and n/2 for p in the formula for the pth power gives

X^p ≡ s₂^(n/2 − 1) ∧ (s₂ + (n/2) Xo)

The only odd component Xo capable of generating a non-zero term is the one of grade 1. Hence Xo is equal to s, and we have finally that:

The maximum non-zero power of a bodiless Grassmann number in a space of even dimension n is equal to n/2 and may be computed from the formula

X^pmax ≡ X^(n/2) ≡ s₂^(n/2 − 1) ∧ (s₂ + (n/2) s)        n even        (9.2)
A similar argument may be offered for odd-dimensional spaces. The highest non-zero power of an even Grassmann number (with no body) is again that which enables the smallest term (of grade 2, and hence commutative) to be multiplied by itself the largest number of times without the result being zero. For a space of odd dimension n this is clearly (n − 1)/2. However, the highest power of a general bodiless number will be one greater than this, due to the possibility of multiplying this even product by the element of grade 1. Hence pmax = (n + 1)/2.
Substituting s₂ for Xe, s for Xo, and (n + 1)/2 for p into the formula for the pth power, and noting that the term involving the power of s₂ alone is zero, leads to the following:
The maximum non-zero power of a bodiless Grassmann number in a space of odd dimension n is equal to (n + 1)/2 and may be computed from the formula
X^pmax ≡ X^((n+1)/2) ≡ ((n + 1)/2) s ∧ s₂^((n−1)/2)        n odd        (9.3)
A formula which gives the highest power for both even and odd spaces is

pmax = (1/4) (2 n + 1 − (−1)^n)        (9.4)
The following table gives the maximum non-zero power of a bodiless Grassmann number, and the formula for it, for spaces of dimension up to 8.
n   pmax   X^pmax
1   1      s
2   1      s + s₂
3   2      2 s∧s₂
4   2      (2 s + s₂)∧s₂
5   3      3 s∧s₂∧s₂
6   3      (3 s + s₂)∧s₂∧s₂
7   4      4 s∧s₂∧s₂∧s₂
8   4      (4 s + s₂)∧s₂∧s₂∧s₂
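Formula 9.4 can be cross-checked by brute force: take a bodiless element built to achieve the maximum power (a grade-1 term for odd dimensions plus a chain of grade-2 terms) and wedge it with itself until the result vanishes. The construction and helper names below are assumptions of this sketch, not the package:

```python
def _sign(a, b):
    s, a = 0, a >> 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def wedge(x, y):
    out = {}
    for ka, va in x.items():
        for kb, vb in y.items():
            if ka & kb:
                continue
            k = ka | kb
            out[k] = out.get(k, 0) + _sign(ka, kb) * va * vb
    return {k: v for k, v in out.items() if v}

def pmax(n):
    # formula 9.4
    return (2 * n + 1 - (-1) ** n) // 4

def witness_soul(n):
    # e1 + e2^e3 + e4^e5 + ... (odd n), or e1^e2 + e3^e4 + ... (even n)
    s, start = {}, 0
    if n % 2:
        s[0b1], start = 1, 1
    for i in range(start, n - 1, 2):
        s[(1 << i) | (1 << (i + 1))] = 1
    return s

def highest_power(n):
    soul = witness_soul(n)
    p, acc = 1, dict(soul)
    while True:
        nxt = wedge(acc, soul)
        if not nxt:
            return p
        acc, p = nxt, p + 1

print([highest_power(n) for n in range(1, 9)])   # [1, 1, 2, 2, 3, 3, 4, 4]
assert all(highest_power(n) == pmax(n) for n in range(1, 9))
```
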
True
True
Finding an inverse of X is equivalent to finding the Grassmann number A such that X∧A ≡ 1 or A∧X ≡ 1. In what follows we shall show that A commutes with X, and is therefore a unique inverse which we may call the inverse of X. To find A we need to solve the equation X∧A − 1 ≡ 0. First we create a general Grassmann number for A.
A = CreateGrassmannNumber[a]
a0 + e1 a1 + e2 a2 + e3 a3 + a4 e1∧e2 + a5 e1∧e3 + a6 e2∧e3 + a7 e1∧e2∧e3
To make this expression zero we need to solve for the ai, in terms of the xi, so that all the coefficients vanish. This is most easily done with the GrassmannSolve function to be introduced later. Here, to see the process more clearly, we do it directly with Mathematica's Solve function.
S = Solve[{-1 + a0 x0 == 0, a1 x0 + a0 x1 == 0, a2 x0 + a0 x2 == 0,
   a3 x0 + a0 x3 == 0, a4 x0 + a2 x1 - a1 x2 + a0 x4 == 0,
   a5 x0 + a3 x1 - a1 x3 + a0 x5 == 0, a6 x0 + a3 x2 - a2 x3 + a0 x6 == 0,
   a7 x0 + a6 x1 - a5 x2 + a4 x3 + a3 x4 - a2 x5 + a1 x6 + a0 x7 == 0}] // Flatten
{a7 → (2 x3 x4 - 2 x2 x5 + 2 x1 x6 - x0 x7)/x0³, a6 → -x6/x0², a5 → -x5/x0²,
 a4 → -x4/x0², a3 → -x3/x0², a2 → -x2/x0², a1 → -x1/x0², a0 → 1/x0}
We denote the inverse of X by Xr. We obtain an explicit expression for Xr by substituting the values obtained above into the formula for A.
Xr = A /. S
1/x0 - e1 x1/x0² - e2 x2/x0² - e3 x3/x0² - x4 e1∧e2/x0² - x5 e1∧e3/x0² -
 x6 e2∧e3/x0² + ((2 x3 x4 - 2 x2 x5 + 2 x1 x6 - x0 x7)/x0³) e1∧e2∧e3
To verify that this is indeed the inverse, and that the inverse commutes, we calculate the products of X and Xr:
𝒢[{X∧Xr, Xr∧X}] // Simplify
{1, 1}
To avoid having to rewrite the coefficients for the Solve equations, we can use the GrassmannAlgebra function GrassmannSolve (discussed in Section 9.6) to get the same results. To use GrassmannSolve we need only enter a single undefined symbol (here we have used Y) for the Grassmann number we are solving for.
GrassmannSolve[X∧Y == 1, Y]
{{Y → 1/x0 - e1 x1/x0² - e2 x2/x0² - e3 x3/x0² - x4 e1∧e2/x0² - x5 e1∧e3/x0² -
   x6 e2∧e3/x0² - ((-2 x3 x4 + 2 x2 x5 - 2 x1 x6 + x0 x7)/x0³) e1∧e2∧e3}}
It is evident from this example that (at least in 3-space), for a Grassmann number to have an inverse, it must have a non-zero body. To calculate an inverse directly, we can use the function GrassmannInverse. To see the pattern for inverses in general, we calculate the inverses of a general Grassmann number in 1-, 2-, 3-, and 4-spaces:
Inverse in a 1-space
𝕍1; X1 = CreateGrassmannNumber[x]
x0 + e1 x1
X1r = GrassmannInverse[X1]
1/x0 - e1 x1/x0²
Inverse in a 2-space
𝕍2; X2 = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + x3 e1∧e2
X2r = GrassmannInverse[X2]
1/x0 - e1 x1/x0² - e2 x2/x0² - x3 e1∧e2/x0²
Inverse in a 3-space
𝕍3; X3 = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3
X3r = GrassmannInverse[X3]
1/x0 - e1 x1/x0² - e2 x2/x0² - e3 x3/x0² - x4 e1∧e2/x0² - x5 e1∧e3/x0² -
 x6 e2∧e3/x0² + ((-2 x2 x5 + 2 (x3 x4 + x1 x6))/x0³ - x7/x0²) e1∧e2∧e3
Inverse in a 4-space
𝕍4; X4 = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + e3 x3 + e4 x4 + x5 e1∧e2 + x6 e1∧e3 + x7 e1∧e4 + x8 e2∧e3 +
 x9 e2∧e4 + x10 e3∧e4 + x11 e1∧e2∧e3 + x12 e1∧e2∧e4 + x13 e1∧e3∧e4 +
 x14 e2∧e3∧e4 + x15 e1∧e2∧e3∧e4
X4r = GrassmannInverse[X4]
1/x0 - e1 x1/x0² - e2 x2/x0² - e3 x3/x0² - e4 x4/x0² - x5 e1∧e2/x0² -
 x6 e1∧e3/x0² - x7 e1∧e4/x0² - x8 e2∧e3/x0² - x9 e2∧e4/x0² - x10 e3∧e4/x0² +
 ((-2 x2 x6 + 2 (x3 x5 + x1 x8))/x0³ - x11/x0²) e1∧e2∧e3 +
 ((-2 x2 x7 + 2 (x4 x5 + x1 x9))/x0³ - x12/x0²) e1∧e2∧e4 +
 ((-2 x3 x7 + 2 (x4 x6 + x1 x10))/x0³ - x13/x0²) e1∧e3∧e4 +
 ((-2 x3 x9 + 2 (x4 x8 + x2 x10))/x0³ - x14/x0²) e2∧e3∧e4 +
 ((-2 x6 x9 + 2 (x7 x8 + x5 x10))/x0³ - x15/x0²) e1∧e2∧e3∧e4
We can see this most easily by writing the expansion in the form below and noting that all the intermediate terms cancel.
(1 + b)∧(1 − b + b² − b³ + b⁴ − … ± b^q)
 ≡ (1 − b + b² − b³ + b⁴ − … ± b^q) + (b − b² + b³ − b⁴ + … ± b^(q+1))
 ≡ 1 ± b^(q+1)
Alternatively, consider the product in the reverse order, (1 − b + b² − b³ + b⁴ − … ± b^q)∧(1 + b). Expanding this gives precisely the same result. Hence a Grassmann number and its inverse commute.
Furthermore, it has been shown in the previous section that for a bodiless Grassmann number in a space of n dimensions, the greatest non-zero power is pmax = (1/4)(2 n + 1 − (−1)^n). Thus if q is equal to pmax, b^(q+1) is equal to zero, and the identity becomes:
(1 + b)∧(1 − b + b² − b³ + b⁴ − … ± b^pmax) ≡ 1
We have thus shown that 1 − b + b² − b³ + b⁴ − … ± b^pmax is the inverse of 1 + b.
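This truncated geometric series gives a direct numerical recipe for the inverse, which can be tested with the bitmask exterior-product sketch (our own illustration; the package computes inverses with GrassmannInverse):

```python
def _sign(a, b):
    s, a = 0, a >> 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def wedge(x, y):
    out = {}
    for ka, va in x.items():
        for kb, vb in y.items():
            if ka & kb:
                continue
            k = ka | kb
            out[k] = out.get(k, 0) + _sign(ka, kb) * va * vb
    return {k: v for k, v in out.items() if v}

def inverse(X, n):
    # X^-1 = (1/x0)(1 - b + b^2 - ... ± b^pmax), b = Xs/x0 nilpotent
    x0 = X[0]
    nb = {k: -v / x0 for k, v in X.items() if k != 0}     # -b
    pm = (2 * n + 1 - (-1) ** n) // 4
    term, total = {0: 1.0}, {0: 1.0}
    for _ in range(pm):
        term = wedge(term, nb)
        for k, v in term.items():
            total[k] = total.get(k, 0) + v
    return {k: v / x0 for k, v in total.items()}

X = {0: 2.0, 0b001: 1.0, 0b010: -3.0, 0b100: 0.5,
     0b011: 4.0, 0b101: -1.0, 0b110: 2.0, 0b111: 5.0}
Xi = inverse(X, 3)
for P in (wedge(X, Xi), wedge(Xi, X)):                    # the inverse commutes
    assert abs(P[0] - 1) < 1e-12
    assert all(abs(v) < 1e-12 for k, v in P.items() if k)
```
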
9.5
If now we have a general Grassmann number X, say, we can write X as X ≡ x0 (1 + b), so that if Xs is the soul of X, then

X ≡ x0 (1 + b) ≡ x0 + Xs        b ≡ Xs/x0

X⁻¹ ≡ (1/x0) (1 − Xs/x0 + (Xs/x0)² − (Xs/x0)³ + … ± (Xs/x0)^pmax)
9.6
We tabulate some examples for X ≡ x0 + Xs, expressing the inverse X⁻¹ both in terms of Xs and x0 alone, and in terms of the odd and even components s and s₂ of Xs.
n   pmax   X⁻¹
1   1      (1/x0) (1 − Xs/x0)              ≡ (1/x0) (1 − s/x0)
2   1      (1/x0) (1 − Xs/x0)              ≡ (1/x0) (1 − (s + s₂)/x0)
3   2      (1/x0) (1 − Xs/x0 + (Xs/x0)²)   ≡ (1/x0) (1 − (s + s₂)/x0 + 2 s∧s₂/x0²)
4   2      (1/x0) (1 − Xs/x0 + (Xs/x0)²)   ≡ (1/x0) (1 − (s + s₂)/x0 + (2 s + s₂)∧s₂/x0²)
It is easy to verify the formulae of the table for dimensions 1 and 2 from the results above. The formulae for spaces of dimension 3 and 4 are given below. But for a slight rearrangement of terms they are identical to the results above.
Z = GrassmannPower[X, -3]
1/x0³ - 3 e1 x1/x0⁴ - 3 e2 x2/x0⁴ - 3 x3 e1∧e2/x0⁴
General powers
GrassmannPower will also work with general elements that are not expressed in terms of basis elements.
X = 1 + x + x∧y + x∧y∧z;
{Y = GrassmannPower[X, 3], Z = GrassmannPower[X, -3]}
{1 + 3 x + 3 x∧y, 1 - 3 x - 3 x∧y}
𝒢[{Y∧Z, Z∧Y}]
{1, 1}
Symbolic powers
We take a general Grassmann number in 2-space.
𝕍2; X = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + x3 e1∧e2
Y = GrassmannPower[X, n]
x0^n + n e1 x0^(n−1) x1 + n e2 x0^(n−1) x2 + n x0^(n−1) x3 e1∧e2
Z = GrassmannPower[X, -n]
x0^(−n) - n e1 x0^(−1−n) x1 - n e2 x0^(−1−n) x2 - n x0^(−1−n) x3 e1∧e2
Before we verify that these are indeed inverses of each other, we need to declare the power n to be a scalar.
DeclareExtraScalars[n];
𝒢[{Y∧Z, Z∧Y}]
{1, 1}
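For integer n the symbolic formula above is easy to confirm by direct repeated multiplication; the bitmask sketch below (our own helpers, not the package) checks the 2-space case with n = 4:

```python
def _sign(a, b):
    s, a = 0, a >> 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def wedge(x, y):
    out = {}
    for ka, va in x.items():
        for kb, vb in y.items():
            if ka & kb:
                continue
            k = ka | kb
            out[k] = out.get(k, 0) + _sign(ka, kb) * va * vb
    return {k: v for k, v in out.items() if v}

# verify X^n = x0^n + n x0^(n-1) (x1 e1 + x2 e2 + x3 e1^e2) for n = 4
x0, x1, x2, x3 = 3, 2, -1, 5
X = {0b00: x0, 0b01: x1, 0b10: x2, 0b11: x3}
P = {0: 1}
for _ in range(4):
    P = wedge(P, X)
formula = {0b00: x0**4, 0b01: 4 * x0**3 * x1,
           0b10: 4 * x0**3 * x2, 0b11: 4 * x0**3 * x3}
print(P)
assert P == formula
```
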
Z = GrassmannPower[X, -n]
x0^(−n) - n e1 x0^(−1−n) x1 - n e2 x0^(−1−n) x2 - n e3 x0^(−1−n) x3 -
 n x0^(−1−n) x4 e1∧e2 - n x0^(−1−n) x5 e1∧e3 - n x0^(−1−n) x6 e2∧e3 +
 (x0^(−2−n) (x3 (n x4 + n² x4) - n x2 x5 - n² x2 x5 + n x1 x6 + n² x1 x6) -
  n x0^(−1−n) x7) e1∧e2∧e3
-x3 x4 + x2 x5 - x1 x6 4
2 x3 0
e1 e2 e3
Z = GrassmannPower[X, -p]
x0^(−p) - p e1 x0^(−1−p) x1 - p e2 x0^(−1−p) x2 - p x0^(−1−p) x3 e1∧e2
We can calculate the values of the parameters by using the GrassmannAlgebra function GrassmannScalarSolve.
GrassmannScalarSolve[Z == 0]
{{a → 1/2, b → 1/2, c → -1, d → -8, e → -2, g → 1/2}}
GrassmannScalarSolve will also work for several equations and for general symbols (other than basis elements).
Z1 = {a (-x) + b y∧(-2 x) + f == c x + d x∧y, (7 - a) y == (13 - d) y∧x - f};
GrassmannScalarSolve[Z1]
{{a → 7, c → -7, d → 13, b → 13/2, f → 0}}
GrassmannScalarSolve can be used in several other ways. As with other Mathematica functions its usage statement can be obtained by entering ?GrassmannScalarSolve.
? GrassmannScalarSolve
GrassmannScalarSolve[eqns] attempts to find the values of those (declared) scalar variables which make the equations True. GrassmannScalarSolve[eqns,scalars] attempts to find the values of the scalars which make the equations True. If not already in the DeclaredScalars list the scalars will be added to it. GrassmannScalarSolve[eqns,scalars,elims] attempts to find the values of the scalars which make the equations True while eliminating the scalars elims. If the equations are not fully solvable, GrassmannScalarSolve will still find the values of the scalar variables which reduce the number of terms in the equations as much as possible, and will additionally return the reduced equations.
Suppose first that we have an equation involving an unknown Grassmann number A, say. For example, if X is a general number in 3-space and we want to find its inverse A, as we did in Section 9.5 above, we can solve the following equation for A:
𝕍3; X = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3
R = GrassmannSolve[X∧A == 1, A]
{{A → 1/x0 - e1 x1/x0² - e2 x2/x0² - e3 x3/x0² - x4 e1∧e2/x0² - x5 e1∧e3/x0² -
   x6 e2∧e3/x0² - ((-2 x3 x4 + 2 x2 x5 - 2 x1 x6 + x0 x7)/x0³) e1∧e2∧e3}}
𝒢[X∧A == 1 /. R]
{True}
In general, GrassmannSolve can solve the same sorts of equations that Mathematica's built-in Solve routines can deal with, because it uses Solve as its main calculation engine. In particular, it can handle powers of the unknowns. Further, it does not require the equations to be expressed in terms of basis elements. For example, we can take a quadratic equation in a:
Q = a∧a + 2 (1 + x + x∧y)∧a + (1 + 2 x + 2 x∧y) == 0;
S = GrassmannSolve[Q, a]
{{a → -1 + x C1 + C2 x∧y}}
Here there are two arbitrary parameters C1 and C2 introduced by the GrassmannSolve function. (C is a symbol reserved by Mathematica for the generation of unknown coefficients in the solution of equations.) Now test whether a is indeed a solution to the equation:
𝒢[Q /. S]
{True}
Note that here we have a double infinity of solutions, parametrized by the pair {C1, C2}. This means that for any values of C1 and C2, S is a solution. For example, for various C1 and C2:

{𝒢[Q /. a → -1 - x], 𝒢[Q /. a → -1], 𝒢[Q /. a → -1 - 9 x∧y]}
{True, True, True}

GrassmannSolve can also take several equations in several unknowns. As an example we take the following pair of equations:
Q = {(1 + x)∧b + x∧y∧a == 3 + 2 x∧y, (1 - y)∧a∧a + (2 + y + x + 5 x∧y) == 7 - 5 x};
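That the quadratic's solution family a = −1 + C1 x + C2 x∧y works for every C1, C2 can be confirmed by direct expansion; here is a small bitmask-based check in Python (an independent illustration, with our own `add`/`scal` helpers):

```python
def _sign(a, b):
    s, a = 0, a >> 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def wedge(x, y):
    out = {}
    for ka, va in x.items():
        for kb, vb in y.items():
            if ka & kb:
                continue
            k = ka | kb
            out[k] = out.get(k, 0) + _sign(ka, kb) * va * vb
    return {k: v for k, v in out.items() if v}

one, x, xy = {0: 1}, {0b01: 1}, {0b11: 1}

def add(*ms):
    out = {}
    for m in ms:
        for k, v in m.items():
            out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

def scal(c, m):
    return {k: c * v for k, v in m.items() if c * v}

for C1, C2 in [(0, 0), (-1, 0), (0, -9), (3, 7)]:
    a = add({0: -1}, scal(C1, x), scal(C2, xy))           # a = -1 + C1 x + C2 x^y
    lhs = add(wedge(a, a),
              wedge(scal(2, add(one, x, xy)), a),
              add(one, scal(2, x), scal(2, xy)))
    assert lhs == {}          # a∧a + 2(1+x+x∧y)∧a + (1+2x+2x∧y) == 0
print("all parameter choices solve the quadratic")
```
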
Note that GrassmannAlgebra assumes that all other symbols (here, x and y) are 1-elements. We can add the symbol x to the list of Grassmann numbers to be solved for.
S = GrassmannSolve@Q, 8a, b, x<D ::a C2 + y I- 31 - 36 C1 + 11 C2 2M 12 C2 18 - 11 + C2 2 1 6 + y 2 - C2 , 12 - 11 + C2 2 + 203 y 121 6 C2 - 11 + C2 2 5 6 -
b-
108 C1 I- 11 +
2 C2 2M
x y C1 +
I5 - C2 2 M>, :a y C3 , b
18 11
, x
31 y 36
>>
Again we get two solutions, but this time involving some arbitrary scalar constants.
𝒢[Q /. S] // Simplify
{{True, True}, {True, True}}
or to the equation:

X ≡ Z∧Y
To distinguish these two possibilities, we call the solution Z to the first equation, X ≡ Y∧Z, the left exterior quotient of X by Y, because the denominator Y multiplies the quotient Z from the left. Correspondingly, we call the solution Z to the second equation, X ≡ Z∧Y, the right exterior quotient of X by Y, because the denominator Y multiplies the quotient Z from the right. To solve for Z in either case, we can use the GrassmannSolve function. For example, if X and Y are general elements in a 2-space, the left exterior quotient is obtained as:
𝕍2; X = CreateGrassmannNumber[x]; Y = CreateGrassmannNumber[y];
GrassmannSolve[X == Y∧Z, {Z}]
{{Z → x0/y0 + e1 (x1 y0 - x0 y1)/y0² + e2 (x2 y0 - x0 y2)/y0² +
   ((x3 y0 - x2 y1 + x1 y2 - x0 y3)/y0²) e1∧e2}}
GrassmannSolve[X == Z∧Y, {Z}]
{{Z → x0/y0 + e1 (x1 y0 - x0 y1)/y0² + e2 (x2 y0 - x0 y2)/y0² +
   ((x3 y0 + x2 y1 - x1 y2 - x0 y3)/y0²) e1∧e2}}
Note the differences in signs in the e1 e2 term. To obtain these quotients more directly, GrassmannAlgebra provides the functions LeftGrassmannDivide and RightGrassmannDivide.
? LeftGrassmannDivide
LeftGrassmannDivide[X,Y] calculates the Grassmann number Z such that X == Y∧Z.
? RightGrassmannDivide
RightGrassmannDivide[X,Y] calculates the Grassmann number Z such that X == Z∧Y.
A shorthand infix notation for the division operations is provided by the down-vector symbols ⇃ (LeftGrassmannDivide) and ⇂ (RightGrassmannDivide). The previous examples would then take the form
X ⇃ Y
x0/y0 + e1 (x1 y0 - x0 y1)/y0² + e2 (x2 y0 - x0 y2)/y0² +
 ((x3 y0 - x2 y1 + x1 y2 - x0 y3)/y0²) e1∧e2
X ⇂ Y
x0/y0 + e1 (x1 y0 - x0 y1)/y0² + e2 (x2 y0 - x0 y2)/y0² +
 ((x3 y0 + x2 y1 - x1 y2 - x0 y3)/y0²) e1∧e2
{1 ⇃ X, 1 ⇂ X}
{1/x0 - e1 x1/x0² - e2 x2/x0² - x3 e1∧e2/x0²,
 1/x0 - e1 x1/x0² - e2 x2/x0² - x3 e1∧e2/x0²}
Division by a scalar
Division by a scalar is just the Grassmann number divided by the scalar as expected.
{X ⇃ a, X ⇂ a}
{x0/a + e1 x1/a + e2 x2/a + x3 e1∧e2/a, x0/a + e1 x1/a + e2 x2/a + x3 e1∧e2/a}
(x∧y) ⇃ x
y + x C1 + C2 x∧y
The quotient is returned as a linear combination of elements containing arbitrary scalar constants C1 and C2 . To see that this is indeed a left quotient, we can multiply it by the denominator from the left and see that we get the numerator.
𝒢[x∧(y + x C1 + C2 x∧y)]
x∧y
We get a slightly different result for the right quotient, which we verify by multiplying from the right.

(x∧y) ⇂ x
-y + x C1 + C2 x∧y
𝒢[(-y + x C1 + C2 x∧y)∧x]
x∧y
In some circumstances we may not want the most general result. Say, for example, we knew that the result we wanted had to be a 1-element. We can use the GrassmannAlgebra function ExtractGrade.
ExtractGrade@1D@Hx y zL Hx - 2 y + 3 z + x zLD z x 3 2 1 2 9 C1 - 3 C2 - C2 3 C3 2 C3 2 + + y H- 1 + 3 C1 + 2 C2 + C3 L
2 3 C1 2
Should there be insufficient information to determine a result, the quotient will be returned unchanged. For example:

(x∧y) ⇃ z
(x∧y) ⇃ z
By using GrassmannFactor we can find a general factorization into two Grassmann numbers of lower maximum grade. We want to see if there is a factorization of the form (a + b e1 + c e2)∧(f + g e1 + h e2), so we enter this as the second argument. The scalars that we want to find are {a,b,c,f,g,h}, so we enter this list as the third argument.
Xf = GrassmannFactor[X, (a + b e1 + c e2)∧(f + g e1 + h e2), {a, b, c, f, g, h}]
(x0/f + e1 (-h x0 x1 + f x1 x2 + f x0 x3)/(f² x2) + e2 (-h x0 + f x2)/f²) ∧
 (f + h e2 + e1 (h x1 - f x3)/x2)
This factorization is valid for any values of the scalars which appear in the result (here f and h), as we can verify by applying GrassmannSimplify to the result.
𝒢[Xf]
x0 + e1 x1 + e2 x2 + x3 e1∧e2
y1 e1∧e2 + y2 e1∧e3 + y3 e2∧e3
By using GrassmannFactor we can determine a factorization. In this example we obtain two classes of solution, each with different parameters.
Xf = GrassmannFactor[Y, (a e1 + b e2 + c e3)∧(d e1 + e e2 + f e3), {a, b, c, d, e, f}]
{(e1 (y2 + c (-f y1 + e y2)/y3)/f + e2 (c e + y3)/f + c e3) ∧
  (e1 (-f y1 + e y2)/y3 + e e2 + f e3),
 (e1 (y1 + b e y2/y3)/e + b e2 - e3 y3/e) ∧ (e e1 y2/y3 + e e2)}
Since every (n−1)-element in an n-space is simple, we know the 3-element can be factorized. Suppose also that we would like the factorization in the following form:
J = (x α1 + y α2)∧(y β2 + z β3)∧(z γ3 + w γ4)
(x α1 + y α2)∧(y β2 + z β3)∧(z γ3 + w γ4)
It has been shown in Section 9.5 that the degree pmax of the highest non-zero power of a (bodiless) Grassmann number in a space of n dimensions is given by pmax = (1/4)(2 n + 1 − (−1)^n). If we expand the series up to the term in this power, it then becomes a formula for a function f of a Grassmann number in that space. The tables below give explicit expressions for a function f[X] of a Grassmann variable in spaces of dimensions 1 to 4.
n   pmax   f[X]
1   1      f[x0] + f′[x0] Xs
2   1      f[x0] + f′[x0] Xs
3   2      f[x0] + f′[x0] Xs + (1/2) f″[x0] Xs²
4   2      f[x0] + f′[x0] Xs + (1/2) f″[x0] Xs²
If we replace the square of the soul of X by its simpler formulation in terms of the components of X of the first and second grades, we get a computationally more convenient expression in the case of dimensions 3 and 4.
n   pmax   f[X]
1   1      f[x0] + f′[x0] Xs
2   1      f[x0] + f′[x0] Xs
3   2      f[x0] + f′[x0] Xs + f″[x0] s∧s₂
4   2      f[x0] + f′[x0] Xs + (1/2) f″[x0] (2 s + s₂)∧s₂
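The n = 3 row (f[X] = f[x0] + f′[x0] Xs + (1/2) f″[x0] Xs², since Xs³ = 0 in 3-space) can be checked numerically for f = exp against a brute-force power series, using the bitmask sketch (our own helpers, not the package):

```python
import math

def _sign(a, b):
    s, a = 0, a >> 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def wedge(x, y):
    out = {}
    for ka, va in x.items():
        for kb, vb in y.items():
            if ka & kb:
                continue
            k = ka | kb
            out[k] = out.get(k, 0) + _sign(ka, kb) * va * vb
    return {k: v for k, v in out.items() if v}

x = [0.5, 1.0, -2.0, 0.25, 3.0, -1.0, 2.0, 0.75]             # x0 .. x7
masks = [0b001, 0b010, 0b100, 0b011, 0b101, 0b110, 0b111]    # e1..e123
Xs = {m: x[i + 1] for i, m in enumerate(masks)}
Xs2 = wedge(Xs, Xs)
assert wedge(Xs2, Xs) == {}                                  # Xs^3 = 0 in 3-space

f0 = math.exp(x[0])          # for f = exp, f = f' = f'' at x0
taylor = {0: f0}
for k, v in Xs.items():
    taylor[k] = taylor.get(k, 0) + f0 * v
for k, v in Xs2.items():
    taylor[k] = taylor.get(k, 0) + 0.5 * f0 * v

# brute-force exp(X) as a truncated power series
X = dict(Xs); X[0] = x[0]
series, term = {0: 1.0}, {0: 1.0}
for k in range(1, 14):
    term = {m: v / k for m, v in wedge(term, X).items()}
    for m, v in term.items():
        series[m] = series.get(m, 0) + v
assert all(abs(series.get(m, 0) - taylor.get(m, 0)) < 1e-9
           for m in set(series) | set(taylor))
```
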
Table[{i, (𝕍i; GrassmannFunctionForm[f[Xb, Xs]])}, {i, 1, 8}] // TableForm
1   f[Xb] + Xs f′[Xb]
2   f[Xb] + Xs f′[Xb]
3   f[Xb] + Xs f′[Xb] + (1/2) Xs² f″[Xb]
4   f[Xb] + Xs f′[Xb] + (1/2) Xs² f″[Xb]
5   f[Xb] + Xs f′[Xb] + (1/2) Xs² f″[Xb] + (1/6) Xs³ f⁽³⁾[Xb]
6   f[Xb] + Xs f′[Xb] + (1/2) Xs² f″[Xb] + (1/6) Xs³ f⁽³⁾[Xb]
7   f[Xb] + Xs f′[Xb] + (1/2) Xs² f″[Xb] + (1/6) Xs³ f⁽³⁾[Xb] + (1/24) Xs⁴ f⁽⁴⁾[Xb]
8   f[Xb] + Xs f′[Xb] + (1/2) Xs² f″[Xb] + (1/6) Xs³ f⁽³⁾[Xb] + (1/24) Xs⁴ f⁽⁴⁾[Xb]
f[Xb, Yb] + Ys f^(0,1)[Xb, Yb] + (1/2) Ys² f^(0,2)[Xb, Yb] + Xs f^(1,0)[Xb, Yb] +
 Xs Ys f^(1,1)[Xb, Yb] + (1/2) Xs Ys² f^(1,2)[Xb, Yb] + (1/2) Xs² f^(2,0)[Xb, Yb] +
 (1/2) Xs² Ys f^(2,1)[Xb, Yb] + (1/4) Xs² Ys² f^(2,2)[Xb, Yb]
f[Xb, Yb] + Ys f^(0,1)[Xb, Yb] + (1/2) Ys² f^(0,2)[Xb, Yb] + (1/6) Ys³ f^(0,3)[Xb, Yb] +
 Xs f^(1,0)[Xb, Yb] + Xs Ys f^(1,1)[Xb, Yb] + (1/2) Xs Ys² f^(1,2)[Xb, Yb] +
 (1/6) Xs Ys³ f^(1,3)[Xb, Yb] + (1/2) Xs² f^(2,0)[Xb, Yb] + (1/2) Xs² Ys f^(2,1)[Xb, Yb] +
 (1/4) Xs² Ys² f^(2,2)[Xb, Yb] + (1/12) Xs² Ys³ f^(2,3)[Xb, Yb] +
 (1/6) Xs³ f^(3,0)[Xb, Yb] + (1/6) Xs³ Ys f^(3,1)[Xb, Yb] +
 (1/12) Xs³ Ys² f^(3,2)[Xb, Yb] + (1/36) Xs³ Ys³ f^(3,3)[Xb, Yb]
In the sections below we explore the GrassmannFunction operation and give some examples. We will work in a 3-space with the general Grassmann number X.
𝕍3; X = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3
GrassmannFunction[X⁻¹]
1/x0 - e1 x1/x0² - e2 x2/x0² - e3 x3/x0² - x4 e1∧e2/x0² - x5 e1∧e3/x0² -
 x6 e2∧e3/x0² + ((-2 x2 x5 + 2 (x3 x4 + x1 x6))/x0³ - x7/x0²) e1∧e2∧e3
But we can also use GrassmannFunction[{x^a, x}, X], or (#^a &)[X] // GrassmannFunction, or GrassmannPower[X, a] to get the same result.
Logarithmic functions
The logarithm of X is:
GrassmannFunction[Log[X]]
Log[x0] + e1 x1/x0 + e2 x2/x0 + e3 x3/x0 + x4 e1∧e2/x0 + x5 e1∧e3/x0 +
 x6 e2∧e3/x0 + ((-x3 x4 + x2 x5 - x1 x6)/x0² + x7/x0) e1∧e2∧e3
We expect the logarithm of the previously calculated exponential to be the original number X.
Log[expX] // GrassmannFunction // Simplify // PowerExpand
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3
However, to allow for the two different exterior products that X and Y can form together, the variables in the second argument {X,Y} may be interchanged. The parameters x and y of the first argument {x y, {x,y}} are simply dummy variables and can stay the same. Thus Y∧X may be obtained by evaluating:
𝒢[Y∧X]
x0 y0 + e1 (x1 y0 + x0 y1) + e2 (x2 y0 + x0 y2) + (x3 y0 + x2 y1 - x1 y2 + x0 y3) e1∧e2
If we compute their product using the exterior product operation, we observe that the order is important: a different result is obtained when the order is reversed.
𝒢[expX∧expY]
ℯ^(x0+y0) + ℯ^(x0+y0) e1 (x1 + y1) + ℯ^(x0+y0) e2 (x2 + y2) +
 ℯ^(x0+y0) (x3 - x2 y1 + x1 y2 + y3) e1∧e2
We can compute these two results also by using GrassmannFunction. Note that the second argument in the second computation has its components in reverse order.
GrassmannFunction[{Exp[x] Exp[y], {x, y}}, {X, Y}]
ℯ^(x0+y0) + ℯ^(x0+y0) e1 (x1 + y1) + ℯ^(x0+y0) e2 (x2 + y2) +
 ℯ^(x0+y0) (x3 - x2 y1 + x1 y2 + y3) e1∧e2
GrassmannFunction[{Exp[x] Exp[y], {x, y}}, {Y, X}]
ℯ^(x0+y0) + ℯ^(x0+y0) e1 (x1 + y1) + ℯ^(x0+y0) e2 (x2 + y2) +
 ℯ^(x0+y0) (x3 + x2 y1 - x1 y2 + y3) e1∧e2
On the other hand, if we wish to compute the exponential of the sum X + Y we need to use GrassmannFunction, because there can be two possibilities for its definition. That is, although Exp[x+y] appears to be independent of the order of its arguments, an interchange in the 'ordering' argument from {X,Y} to {Y,X} will change the signs in the result.
GrassmannFunction[{Exp[x + y], {x, y}}, {X, Y}]
ℯ^(x0+y0) + ℯ^(x0+y0) e1 (x1 + y1) + ℯ^(x0+y0) e2 (x2 + y2) +
 ℯ^(x0+y0) (x3 - x2 y1 + x1 y2 + y3) e1∧e2
GrassmannFunction[{Exp[x + y], {x, y}}, {Y, X}]
ℯ^(x0+y0) + ℯ^(x0+y0) e1 (x1 + y1) + ℯ^(x0+y0) e2 (x2 + y2) +
 ℯ^(x0+y0) (x3 + x2 y1 - x1 y2 + y3) e1∧e2
In sum: a function of several Grassmann variables may give different results depending on the ordering of the variables in the function, even when its usual (scalar) form is commutative.
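This order-dependence is easy to reproduce numerically. In 2-space the soul of a Grassmann number squares to zero, so exp(X) = ℯ^x0 (1 + Xs), and wedging two such exponentials in the two orders gives e1∧e2 coefficients that differ exactly as in the outputs above. The sketch below uses our own bitmask helpers, not the package:

```python
import math

def _sign(a, b):
    s, a = 0, a >> 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def wedge(x, y):
    out = {}
    for ka, va in x.items():
        for kb, vb in y.items():
            if ka & kb:
                continue
            k = ka | kb
            out[k] = out.get(k, 0) + _sign(ka, kb) * va * vb
    return {k: v for k, v in out.items() if v}

def gexp2(c):
    # exp of x0 + x1 e1 + x2 e2 + x3 e1^e2 in 2-space:
    # the soul squares to zero, so exp(X) = e^x0 (1 + Xs)
    x0, x1, x2, x3 = c
    e = math.exp(x0)
    return {0b00: e, 0b01: e * x1, 0b10: e * x2, 0b11: e * x3}

x = (0.3, 1.0, -2.0, 0.5)
y = (-0.1, 0.7, 0.4, 1.5)
AB = wedge(gexp2(x), gexp2(y))
BA = wedge(gexp2(y), gexp2(x))
E = math.exp(x[0] + y[0])
# the e1^e2 coefficients differ exactly as in the text:
assert abs(AB[0b11] - E * (x[3] - x[2]*y[1] + x[1]*y[2] + y[3])) < 1e-12
assert abs(BA[0b11] - E * (x[3] + x[2]*y[1] - x[1]*y[2] + y[3])) < 1e-12
```
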
9.10 Summary
To be completed
10.1 Introduction
In this chapter we define and explore a new product which we call the generalized Grassmann product. The generalized Grassmann product was originally developed by the author in order to treat the hypercomplex and Clifford products of general elements in a succinct manner. The hypercomplex and Clifford products of general elements can lead to quite complex expressions; however, we will show that such expressions always reduce to simple sums of generalized Grassmann products. We discuss the hypercomplex product in the next chapter, and the Clifford product in Chapter 12.

The generalized Grassmann product is really a sequence of products indexed by a parameter called the order of the product. When the order is zero, the product reduces to the exterior product. When the order is at its maximum value, the product reduces to the interior product. In between, the sequence of generalized Grassmann products makes a transition between the two. For example, consider an exterior product of simple elements: it takes only one 1-element common to both factors for the product to be zero. At the other end of the sequence, even if one of the simple elements has all of its 1-element factors in common with the other simple element, their interior product is not zero. The elements of the sequence of generalized products require progressively more common 1-elements in their factors before they reduce to zero. Specifically, if two simple elements contain a common factor of grade g, then their generalized Grassmann products of order less than g are zero.

The generalized Grassmann product has another enticing property. Although the interior product of elements of different grade is not commutative, the generalized Grassmann product form of the interior product is commutative.

For brevity in the rest of this book, where no confusion will arise, we may call the generalized Grassmann product simply the generalized product.
The generalized Grassmann product of order l of a simple m-element α_m and a simple k-element β_k is defined by:

$$\alpha_m \circ_l \beta_k \;\equiv\; \sum_{j=1}^{\binom{k}{l}} \left(\alpha_m \bullet \beta_j^{\,l}\right)\wedge \beta_j^{\,k-l} \tag{10.1}$$

$$\beta_k \;=\; \beta_1^{\,l}\wedge\beta_1^{\,k-l} \;=\; \beta_2^{\,l}\wedge\beta_2^{\,k-l} \;=\; \cdots$$

(Here a subscript on a kernel symbol denotes its grade, ∧ denotes the exterior product, ∙ the interior product, and β_j^l and β_j^{k−l} denote the grade-l and grade-(k−l) factors of the j-th decomposition of β_k.)
Here β_k is expressed in each of the C(k,l) essentially different arrangements of its 1-element factors into an l-element and a (k−l)-element. This pairwise decomposition of a simple exterior product is the same as that used in the Common Factor Theorem in Chapters 3 and 6. The grade of a generalized Grassmann product α_m ∘_l β_k is therefore m + k − 2l and, like the grades of the exterior and interior products in terms of which it is defined, is independent of the dimension of the underlying space. Note the similarity to the form of the Interior Common Factor Theorem. However, in the definition of the generalized product there is no requirement for the interior product on the right hand side to be an inner product, since an exterior product has replaced the ordinary multiplication operator. For brevity in the discussion that follows, we will assume, without explicitly drawing attention to the fact, that an element undergoing a factorization is necessarily simple.
Case l = 0: Reduction to the exterior product

When the order l of the generalized product is zero, the product reduces to the exterior product:

$$\alpha_m \circ_0 \beta_k \;=\; \alpha_m \wedge \beta_k \tag{10.2}$$
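The defining sum [10.1] and its behaviour at the ends of the order range can be checked in a small computational model. The following Python sketch is not part of the original text: it models only basis blades in a Euclidean orthonormal basis (a simplifying assumption), and the function names (`perm_sign`, `wedge`, `interior`, `gen`) are illustrative, not GrassmannAlgebra functions; the sign conventions are a modeling choice.

```python
from itertools import combinations

def perm_sign(seq):
    # Sign of the permutation sorting seq; 0 if any index repeats.
    seq = list(seq)
    if len(set(seq)) != len(seq):
        return 0
    s = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

def wedge(a, b):
    # Exterior product of basis blades given as sorted index tuples.
    s = perm_sign(a + b)
    return s, (tuple(sorted(a + b)) if s else ())

def interior(a, b):
    # Euclidean interior product a . b of basis blades; zero unless b's
    # indices are contained in a's.  Sign chosen so a = sign * (b ^ rest).
    if not set(b) <= set(a):
        return 0, ()
    rest = tuple(i for i in a if i not in b)
    return perm_sign(b + rest), rest

def gen(a, b, lam):
    # Generalized product a o_lam b of basis blades, following [10.1]:
    # sum over the C(k, lam) splits of b into a lam-element and the rest.
    out = {}
    for bs in combinations(b, lam):
        rest = tuple(i for i in b if i not in bs)
        s0 = perm_sign(bs + rest)      # b = s0 * (bs ^ rest)
        s1, c = interior(a, bs)        # a . bs
        s2, blade = wedge(c, rest)     # (a . bs) ^ rest
        if s0 * s1 * s2:
            out[blade] = out.get(blade, 0) + s0 * s1 * s2
    return {bl: v for bl, v in out.items() if v}

# Order 0 reduces to the exterior product: e1^e2 o_0 e3 = e1^e2^e3.
assert gen((1, 2), (3,), 0) == {(1, 2, 3): 1}
# A common factor of grade 2 kills orders 0 and 1 ...
assert gen((1, 2, 3), (2, 3), 0) == {} and gen((1, 2, 3), (2, 3), 1) == {}
# ... while the maximal order gives the interior product: e123 . e23 = e1.
assert gen((1, 2, 3), (2, 3), 2) == {(1,): 1}
# The grade of every resulting blade is m + k - 2*lam.
assert all(len(bl) == 3 + 2 - 2 for bl in gen((1, 2, 3), (3, 4), 1))
```

In this model a product of basis blades is a dictionary mapping result blades to integer coefficients; the empty dictionary represents zero.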
Case 0 < l < Min[m, k]: Reduction to exterior and interior products
When the order l of the generalized product is greater than zero but less than the smaller of the grades of the factors in the product, the product may be expanded in terms of either factor (provided it is simple). The formula for expansion in terms of the first factor is similar to the definition [10.1] which expands in terms of the second factor.
$$\alpha_m \circ_l \beta_k \;=\; \sum_{i=1}^{\binom{m}{l}} \alpha_i^{\,m-l}\wedge\left(\beta_k \bullet \alpha_i^{\,l}\right) \tag{10.3}$$

$$\alpha_m \;=\; \alpha_1^{\,l}\wedge\alpha_1^{\,m-l} \;=\; \alpha_2^{\,l}\wedge\alpha_2^{\,m-l} \;=\; \cdots$$
In Section 10.3 we will prove that the product can actually be expanded in terms of both factors, and that this formula is an almost immediate consequence. For completeness it should be remarked that [10.3] is also valid for l equal to 0 or Min[m,k]. We shall include these special cases in the ranges for l in subsequent formulas.
Case l = Min[m, k]: Reduction to the interior product

When the order l of the generalized product is equal to the smaller of the grades of its factors, the product reduces to an interior product:

$$\alpha_m \circ_l \beta_k = \alpha_m \bullet \beta_k, \quad l = k \le m; \qquad \alpha_m \circ_l \beta_k = \beta_k \bullet \alpha_m, \quad l = m \le k$$

This is a particularly enticing property of the generalized product. If the interior product of two elements of unequal grade is non-zero, an interchange in the order of their factors will give an interior product which is zero: a non-zero interior product must have the factor of larger grade first, so the interior product is non-commutative. By contrast, the generalized product form of the interior product of two elements of unequal grade is commutative, and equal to whichever of the two interior products is non-zero:

$$\alpha_m \circ_k \beta_k \;=\; \beta_k \circ_k \alpha_m \;=\; \alpha_m \bullet \beta_k, \qquad k \le m \tag{10.4}$$
Case Min[m, k] < l ≤ Max[m, k]: Reduction to zero

When the order l exceeds the smaller of the grades of the factors, the generalized product is zero:

$$\alpha_m \circ_l \beta_k \;=\; 0 \tag{10.5}$$
This result, in which a sum of terms is identically zero, is a source of many useful identities between exterior and interior products. It will be explored in Section 10.4 below.
$$\alpha_m \circ_l \beta_k \;=\; 0, \qquad \mathrm{Min}[m,k] < l \le \mathrm{Max}[m,k] \tag{10.6}$$

Case l > Max[m, k]: Undefined

$$\alpha_m \circ_l \beta_k \;=\; \text{Undefined}, \qquad l > \mathrm{Max}[m,k] \tag{10.7}$$
The expansion of the generalized product in terms of both of its factors may be obtained by converting the interior products with the Interior Common Factor Theorem (see Chapter 6). To show this, we start with the definition [10.1] of the generalized Grassmann product.
$$\alpha_m \circ_l \beta_k \;=\; \sum_{j=1}^{\binom{k}{l}} \left(\alpha_m \bullet \beta_j^{\,l}\right)\wedge \beta_j^{\,k-l}$$
From the Interior Common Factor Theorem we can write the interior product α_m ∙ β_j^l as:

$$\alpha_m \bullet \beta_j^{\,l} \;=\; \sum_{i=1}^{\binom{m}{l}} \left(\alpha_i^{\,l}\bullet\beta_j^{\,l}\right)\,\alpha_i^{\,m-l}$$
Substituting this in the above expression for the generalized product gives:
$$\alpha_m \circ_l \beta_k \;=\; \sum_{j=1}^{\binom{k}{l}}\sum_{i=1}^{\binom{m}{l}} \left(\alpha_i^{\,l}\bullet\beta_j^{\,l}\right)\,\alpha_i^{\,m-l}\wedge\beta_j^{\,k-l}$$
The expansion of the generalized product in terms of both factors α_m and β_k is thus given by:
$$\alpha_m \circ_l \beta_k \;=\; \sum_{i=1}^{\binom{m}{l}}\sum_{j=1}^{\binom{k}{l}} \left(\alpha_i^{\,l}\bullet\beta_j^{\,l}\right)\,\alpha_i^{\,m-l}\wedge\beta_j^{\,k-l}, \qquad 0 \le l \le \mathrm{Min}[m,k] \tag{10.8}$$

$$\alpha_m = \alpha_1^{\,l}\wedge\alpha_1^{\,m-l} = \alpha_2^{\,l}\wedge\alpha_2^{\,m-l} = \cdots, \qquad \beta_k = \beta_1^{\,l}\wedge\beta_1^{\,k-l} = \beta_2^{\,l}\wedge\beta_2^{\,k-l} = \cdots$$
In this double-sum form we may reverse the order of the factors in each exterior product to obtain the terms of β_k ∘_l α_m, except possibly for sign. Expanding β_k ∘_l α_m in the same way gives:

$$\beta_k \circ_l \alpha_m \;=\; \sum_{j=1}^{\binom{k}{l}}\sum_{i=1}^{\binom{m}{l}} \left(\beta_j^{\,l}\bullet\alpha_i^{\,l}\right)\,\beta_j^{\,k-l}\wedge\alpha_i^{\,m-l}$$

Since the inner product is symmetric, and interchanging the factors of each exterior product gives:

$$\left(\alpha_i^{\,l}\bullet\beta_j^{\,l}\right)\,\alpha_i^{\,m-l}\wedge\beta_j^{\,k-l} \;=\; (-1)^{(m-l)(k-l)}\left(\beta_j^{\,l}\bullet\alpha_i^{\,l}\right)\,\beta_j^{\,k-l}\wedge\alpha_i^{\,m-l}$$

we obtain the quasi-commutativity relation:
$$\alpha_m \circ_l \beta_k \;=\; (-1)^{(m-l)(k-l)}\;\beta_k \circ_l \alpha_m \tag{10.9}$$
It is easy to see, by using the distributivity of the generalized product, that this relationship also holds for non-simple α_m and β_k.
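Quasi-commutativity can also be spot-checked in the basis-blade model used earlier. This Python sketch is illustrative (Euclidean orthonormal basis assumed; `gen` and helpers are not package functions); it verifies the sign factor (−1)^((m−l)(k−l)) exhaustively over a 4-space.

```python
from itertools import combinations

def perm_sign(seq):
    # Sign of the permutation sorting seq; 0 if any index repeats.
    seq = list(seq)
    if len(set(seq)) != len(seq):
        return 0
    s = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

def wedge(a, b):
    s = perm_sign(a + b)
    return s, (tuple(sorted(a + b)) if s else ())

def interior(a, b):
    if not set(b) <= set(a):
        return 0, ()
    rest = tuple(i for i in a if i not in b)
    return perm_sign(b + rest), rest

def gen(a, b, lam):
    # Generalized product a o_lam b of basis blades (formula 10.1).
    out = {}
    for bs in combinations(b, lam):
        rest = tuple(i for i in b if i not in bs)
        s0 = perm_sign(bs + rest)
        s1, c = interior(a, bs)
        s2, blade = wedge(c, rest)
        if s0 * s1 * s2:
            out[blade] = out.get(blade, 0) + s0 * s1 * s2
    return {bl: v for bl, v in out.items() if v}

blades = [t for r in range(1, 5) for t in combinations((1, 2, 3, 4), r)]
for a in blades:
    for b in blades:
        m, k = len(a), len(b)
        for lam in range(0, min(m, k) + 1):
            sgn = (-1) ** ((m - lam) * (k - lam))
            flipped = {bl: sgn * c for bl, c in gen(b, a, lam).items()}
            assert gen(a, b, lam) == flipped   # formula 10.9
```

For example e1∧e2 ∘₁ e2∧e3 and e2∧e3 ∘₁ e1∧e2 differ exactly by the factor (−1)^(1·1) = −1.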
To derive the expansion in terms of the first factor, we apply the definition to β_k ∘_l α_m and use quasi-commutativity. Interchanging the order of the terms of the exterior product then gives the required alternative expression:

$$\alpha_m \circ_l \beta_k \;=\; \sum_{i=1}^{\binom{m}{l}} \alpha_i^{\,m-l}\wedge\left(\beta_k\bullet\alpha_i^{\,l}\right), \qquad 0 \le l \le \mathrm{Min}[m,k]$$

$$\alpha_m \;=\; \alpha_1^{\,l}\wedge\alpha_1^{\,m-l} \;=\; \alpha_2^{\,l}\wedge\alpha_2^{\,m-l} \;=\; \cdots$$
A generalized product may be entered in the form GeneralizedProduct[l][X,Y], where l is the order of the product and X and Y are the factors. On entering this expression into Mathematica it will display it in the form:
X ∘_l Y
Alternatively, you can click on the generalized product button on the GrassmannAlgebra palette, and then tab through the placeholders, entering the elements of the product as you go.
To enter multiple products, click repeatedly on the button. For example to enter a concatenated product of four elements, click three times:
If you select this expression and enter it, you will see how the inherently binary product is grouped.
((☐ ∘ ☐) ∘ ☐) ∘ ☐
Tabbing through the placeholders and entering factors and product orders, we might obtain something like:
((X ∘_4 Y) ∘_3 Z) ∘_2 W
Here, X, Y, Z, and W represent any Grassmann expressions. Using Mathematica's FullForm operation shows how this is internally represented.
FullForm[((X ∘_4 Y) ∘_1 Z) ∘_0 W]
You can edit the expression at any stage to group the products in a different order.
FullForm[X ∘_4 ((Y ∘_3 Z) ∘_2 W)]
Finally, you can also enter a generalized product sequentially by typing the first factor, then clicking on the product operator from the first column of the GrassmannAlgebra palette, and then entering the order and the second factor.
These sums, although appearing different because expressed in terms of interior products, are of course equal, as has been shown in the previous section. The GrassmannAlgebra function ToInteriorProducts will take any generalized product and convert it to a sum involving exterior and interior products by expanding whichever factor leads to the simplest expression. If the elements are given in symbolic grade-underscripted form, ToInteriorProducts will first create a corresponding simple exterior product before expanding.
For example, suppose the generalized product is given in symbolic form as α_m ∘_2 β_3. Applying ToInteriorProducts gives:

A = ToInteriorProducts[α_m ∘_2 β_3]

(α_m ∙ b1∧b2) ∧ b3 − (α_m ∙ b1∧b3) ∧ b2 + (α_m ∙ b2∧b3) ∧ b1
B = ToInteriorProducts[α_4 ∘_2 β_k]

(β_k ∙ a1∧a2) ∧ a3∧a4 − (β_k ∙ a1∧a3) ∧ a2∧a4 + (β_k ∙ a1∧a4) ∧ a2∧a3 + (β_k ∙ a2∧a3) ∧ a1∧a4 − (β_k ∙ a2∧a4) ∧ a1∧a3 + (β_k ∙ a3∧a4) ∧ a1∧a2
The first expansion has C(3,2) = 3 terms; the second has C(4,2) = 6.
If now we let m equal 4 and k equal 3, these two expressions become equal.
A1 = ToInteriorProducts[α_m ∘_2 β_3] /. α_m → ComposeSimpleForm[α_4]
Although these expressions are equal, they appear different because of their expansion in terms of interior products. To display them in the same form, we need to reduce the interior products to inner (and exterior) products.
Consider the product of the previous section with m equal to 4 and k equal to 3. To explore this product we must first increase the dimension of the space to at least 4 (since it is zero in a space of lesser dimension). Then, we convert any interior products to inner products by applying ToInnerProducts, and simplify the result with GrassmannExpandAndSimplify.
!4 ; A2 = %[ToInnerProducts[A1]]

(a3∧a4 ∙ b2∧b3) a1∧a2∧b1 − (a3∧a4 ∙ b1∧b3) a1∧a2∧b2 + (a3∧a4 ∙ b1∧b2) a1∧a2∧b3
− (a2∧a4 ∙ b2∧b3) a1∧a3∧b1 + (a2∧a4 ∙ b1∧b3) a1∧a3∧b2 − (a2∧a4 ∙ b1∧b2) a1∧a3∧b3
+ (a2∧a3 ∙ b2∧b3) a1∧a4∧b1 − (a2∧a3 ∙ b1∧b3) a1∧a4∧b2 + (a2∧a3 ∙ b1∧b2) a1∧a4∧b3
+ (a1∧a4 ∙ b2∧b3) a2∧a3∧b1 − (a1∧a4 ∙ b1∧b3) a2∧a3∧b2 + (a1∧a4 ∙ b1∧b2) a2∧a3∧b3
− (a1∧a3 ∙ b2∧b3) a2∧a4∧b1 + (a1∧a3 ∙ b1∧b3) a2∧a4∧b2 − (a1∧a3 ∙ b1∧b2) a2∧a4∧b3
+ (a1∧a2 ∙ b2∧b3) a3∧a4∧b1 − (a1∧a2 ∙ b1∧b3) a3∧a4∧b2 + (a1∧a2 ∙ b1∧b2) a3∧a4∧b3

B2 = %[ToInnerProducts[B1]]

(a3∧a4 ∙ b2∧b3) a1∧a2∧b1 − (a3∧a4 ∙ b1∧b3) a1∧a2∧b2 + (a3∧a4 ∙ b1∧b2) a1∧a2∧b3
− (a2∧a4 ∙ b2∧b3) a1∧a3∧b1 + (a2∧a4 ∙ b1∧b3) a1∧a3∧b2 − (a2∧a4 ∙ b1∧b2) a1∧a3∧b3
+ (a2∧a3 ∙ b2∧b3) a1∧a4∧b1 − (a2∧a3 ∙ b1∧b3) a1∧a4∧b2 + (a2∧a3 ∙ b1∧b2) a1∧a4∧b3
+ (a1∧a4 ∙ b2∧b3) a2∧a3∧b1 − (a1∧a4 ∙ b1∧b3) a2∧a3∧b2 + (a1∧a4 ∙ b1∧b2) a2∧a3∧b3
− (a1∧a3 ∙ b2∧b3) a2∧a4∧b1 + (a1∧a3 ∙ b1∧b3) a2∧a4∧b2 − (a1∧a3 ∙ b1∧b2) a2∧a4∧b3
+ (a1∧a2 ∙ b2∧b3) a3∧a4∧b1 − (a1∧a2 ∙ b1∧b3) a3∧a4∧b2 + (a1∧a2 ∙ b1∧b2) a3∧a4∧b3
By inspection we see that these two expressions are the same. Checking with Mathematica verifies this.
A2 == B2
True
The identity of the forms A2 and B2 is an example of the expansion of a generalized product in terms of either factor yielding the same result.
Consider a generalized product α_m ∘_l β_k of an arbitrary m-element α and a simple k-element β expressed as a simple exterior product.
We need to be specific about the grade k, so for this example we choose k to be 3. We can allow m to be symbolic, but should assume it to be greater than or equal to k in order to have the resulting formulas remain valid should we later substitute specific values. The spine and cospine of β_3 are given by:

S = Spine[β_3]
We can now generate the list of products of the form (α_m ∙ σ) ∧ γ, where σ is an element of S, and γ is the corresponding element of Sc, by threading these operations across the two lists.

T1 = Thread[Thread[α_m ∙ S] ∧ Sc]
{(α_m ∙ 1) ∧ b1∧b2∧b3, (α_m ∙ b1) ∧ b2∧b3, (α_m ∙ b2) ∧ −(b1∧b3), (α_m ∙ b3) ∧ b1∧b2, (α_m ∙ b1∧b2) ∧ b3, (α_m ∙ b1∧b3) ∧ −b2, (α_m ∙ b2∧b3) ∧ b1, (α_m ∙ b1∧b2∧b3) ∧ 1}
The collection of elements of this list which are of the same grade (g, say) will form the terms in the sum of the generalized product of the same grade g. We can see the distribution of grades by applying Grade to the list.
Grade[T1]
{3 + m, 1 + m, 1 + m, 1 + m, −1 + m, −1 + m, −1 + m, −3 + m}
We can select the terms of a given grade (using Select), add them together (using Plus), and tabulate these sums over the different grades g (using Table). We also reverse the list to get the grades in decreasing order.
T2 = Reverse[Table[Plus @@ Select[T1, GradeQ[g]], {g, m − 3, m + 3, 2}]]

{(α_m ∙ 1) ∧ b1∧b2∧b3,
 (α_m ∙ b2) ∧ −(b1∧b3) + (α_m ∙ b1) ∧ b2∧b3 + (α_m ∙ b3) ∧ b1∧b2,
 (α_m ∙ b1∧b2) ∧ b3 + (α_m ∙ b1∧b3) ∧ −b2 + (α_m ∙ b2∧b3) ∧ b1,
 (α_m ∙ b1∧b2∧b3) ∧ 1}
{α_m ∘_0 β_3, α_m ∘_1 β_3, α_m ∘_2 β_3, α_m ∘_3 β_3}

α_m ∘_0 β_3 = (α_m ∙ 1) ∧ b1∧b2∧b3
α_m ∘_1 β_3 = (α_m ∙ b2) ∧ −(b1∧b3) + (α_m ∙ b1) ∧ b2∧b3 + (α_m ∙ b3) ∧ b1∧b2
α_m ∘_2 β_3 = (α_m ∙ b1∧b2) ∧ b3 + (α_m ∙ b1∧b3) ∧ −b2 + (α_m ∙ b2∧b3) ∧ b1
α_m ∘_3 β_3 = (α_m ∙ b1∧b2∧b3) ∧ 1
Finally, we can collect these steps into a single function, generalize it for arbitrary simple β_k, and restrict it to apply only to the grades of factors we have assumed in this development. That is, the grade of α should be symbolic, but constrained to be greater than or equal to the grade of β, which should itself be greater than 1.
GeneralizedProductList[GeneralizedProduct[l_][a_, b_]] /;
   (Head[RawGrade[a]] === Symbol && RawGrade[a] >= RawGrade[b] && RawGrade[b] > 1) :=
 Module[{m, k, B, Bc, T1, T2, T3, g},
  m = RawGrade[a]; k = RawGrade[b];
  B = Spine[b]; Bc = Cospine[b];
  T1 = Thread[Thread[a ∙ B] ∧ Bc];
  T2 = Reverse[Table[Plus @@ Select[T1, RawGrade[#] == g &], {g, m − k, m + k, 2}]];
  T3 = Table[a ∘_l b, {l, 0, k}];
  TableForm[Thread[T3 == T2]]]
Example
Here are the formulas for a 2-element and a 4-element as second factors.
GeneralizedProductList[α_m ∘_l β_2]

α_m ∘_0 β_2 = (α_m ∙ 1) ∧ b1∧b2
α_m ∘_1 β_2 = (α_m ∙ b1) ∧ b2 + (α_m ∙ b2) ∧ −b1
α_m ∘_2 β_2 = (α_m ∙ b1∧b2) ∧ 1
GeneralizedProductList[α_m ∘_l β_4]

α_m ∘_0 β_4 = (α_m ∙ 1) ∧ b1∧b2∧b3∧b4
α_m ∘_1 β_4 = (α_m ∙ b2) ∧ −(b1∧b3∧b4) + (α_m ∙ b4) ∧ −(b1∧b2∧b3) + (α_m ∙ b1) ∧ b2∧b3∧b4 + (α_m ∙ b3) ∧ b1∧b2∧b4
α_m ∘_2 β_4 = (α_m ∙ b1∧b3) ∧ −(b2∧b4) + (α_m ∙ b2∧b4) ∧ −(b1∧b3) + (α_m ∙ b1∧b2) ∧ b3∧b4 + (α_m ∙ b1∧b4) ∧ b2∧b3 + (α_m ∙ b2∧b3) ∧ b1∧b4 + (α_m ∙ b3∧b4) ∧ b1∧b2
α_m ∘_3 β_4 = (α_m ∙ b1∧b2∧b3) ∧ b4 + (α_m ∙ b1∧b2∧b4) ∧ −b3 + (α_m ∙ b1∧b3∧b4) ∧ b2 + (α_m ∙ b2∧b3∧b4) ∧ −b1
α_m ∘_4 β_4 = (α_m ∙ b1∧b2∧b3∧b4) ∧ 1
A:

$$\alpha_m \circ_l \beta_k \;=\; \sum_{j=1}^{\binom{k}{l}} \left(\alpha_m\bullet\beta_j^{\,l}\right)\wedge\beta_j^{\,k-l} \tag{10.10}$$

B:

$$\alpha_m \circ_l \beta_k \;=\; \sum_{j=1}^{\binom{k}{l}} \left(\alpha_m\wedge\beta_j^{\,k-l}\right)\bullet\beta_j^{\,l}$$

$$\beta_k \;=\; \beta_1^{\,l}\wedge\beta_1^{\,k-l} \;=\; \beta_2^{\,l}\wedge\beta_2^{\,k-l} \;=\; \cdots$$
The identity between the A form and the B form is the source of many useful relations in the Grassmann and Clifford algebras. We call it the Generalized Product Theorem.
In the previous sections, the identity between the expansions in terms of either of the two factors was shown to be at the inner product level. The identity between the A form and the B form is at a further level of complexity - that of the scalar product. That is, in order to show the identity between the two forms, a generalized product may need to be reduced to an expression involving exterior and scalar products.
equal to 3. I have not yet found such a straightforward proof for the general case. The example of the next section will extend this result to show that the theorem is true for m and k equal to 4 and l equal to 2. By using the quasi-commutativity of the generalized product we can thus show that the theorem is valid for all generalized products in a 4-space. The example will also make clearer the detail by which the equality of the two forms is achieved.
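In the basis-blade model used earlier, the A and B forms can be compared directly. The Python sketch below is illustrative only (Euclidean orthonormal basis assumed; names are not package functions); in this simplified model the two forms agree already blade by blade, whereas in the general case the agreement appears only at the scalar product level.

```python
from itertools import combinations

def perm_sign(seq):
    # Sign of the permutation sorting seq; 0 if any index repeats.
    seq = list(seq)
    if len(set(seq)) != len(seq):
        return 0
    s = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

def wedge(a, b):
    s = perm_sign(a + b)
    return s, (tuple(sorted(a + b)) if s else ())

def interior(a, b):
    if not set(b) <= set(a):
        return 0, ()
    rest = tuple(i for i in a if i not in b)
    return perm_sign(b + rest), rest

def gen_A(a, b, lam):
    # A form [10.10]: (a . bs) ^ rest, summed over splits of b.
    out = {}
    for bs in combinations(b, lam):
        rest = tuple(i for i in b if i not in bs)
        s0 = perm_sign(bs + rest)
        s1, c = interior(a, bs)
        s2, blade = wedge(c, rest)
        if s0 * s1 * s2:
            out[blade] = out.get(blade, 0) + s0 * s1 * s2
    return {bl: v for bl, v in out.items() if v}

def gen_B(a, b, lam):
    # B form: (a ^ rest) . bs, summed over the same splits of b.
    out = {}
    for bs in combinations(b, lam):
        rest = tuple(i for i in b if i not in bs)
        s0 = perm_sign(bs + rest)
        s1, c = wedge(a, rest)
        s2, blade = interior(c, bs)
        if s0 * s1 * s2:
            out[blade] = out.get(blade, 0) + s0 * s1 * s2
    return {bl: v for bl, v in out.items() if v}

blades = [t for r in range(1, 5) for t in combinations((1, 2, 3, 4), r)]
for a in blades:
    for b in blades:
        for lam in range(0, len(b) + 1):
            assert gen_A(a, b, lam) == gen_B(a, b, lam)
```

Note that the loop also covers orders above Min[m,k], where both forms return zero, consistent with [10.6].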
l=1
We begin with the B form for l equal to 1, and show that it reduces to the A form for l equal to 1.
$$\alpha_m \circ_1 \beta_k \;=\; \sum_{j=1}^{k} \left(\alpha_m\wedge\beta_j^{\,k-1}\right)\bullet\beta_j^{\,1}, \qquad \beta_k = \beta_1^{\,1}\wedge\beta_1^{\,k-1} = \beta_2^{\,1}\wedge\beta_2^{\,k-1} = \cdots$$

Expanding each summand gives two types of terms:

$$\alpha_m \circ_1 \beta_k \;=\; \sum_{j=1}^{k}\left(\alpha_m\bullet\beta_j^{\,1}\right)\wedge\beta_j^{\,k-1} \;+\; (-1)^m \sum_{j=1}^{k} \alpha_m\wedge\left(\beta_j^{\,k-1}\bullet\beta_j^{\,1}\right)$$
The first sum is of the A form, while the second sum is zero on account of the Zero Interior Sum theorem. Hence we have proved the theorem for l equal to 1.
l = k-1
This time we begin with the A form for l equal to k-1, and show that it reduces to the B form for l equal to k-1.
$$\alpha_m \circ_{k-1} \beta_k \;=\; \sum_{j=1}^{k} \left(\alpha_m\bullet\beta_j^{\,k-1}\right)\wedge\beta_j^{\,1}, \qquad \beta_k = \beta_1^{\,k-1}\wedge\beta_1^{\,1} = \beta_2^{\,k-1}\wedge\beta_2^{\,1} = \cdots$$

which enables us to expand the summand again to give two types of terms:

$$\alpha_m \circ_{k-1} \beta_k \;=\; \sum_{j=1}^{k}\left(\alpha_m\wedge\beta_j^{\,1}\right)\bullet\beta_j^{\,k-1} \;+\; (-1)^{m+1}\sum_{j=1}^{k} \alpha_m\bullet\left(\beta_j^{\,k-1}\bullet\beta_j^{\,1}\right)$$
The first sum is of the B form, while the second sum is zero on account of the Zero Interior Sum theorem. Hence we have proved the theorem for l equal to k-1.
To verify the theorem in a specific case, we can compute both the A and B forms. (Note the 'B' at the end of the function name ToInteriorProductsB in the second case, indicating that it will generate the B form.)
A = ToInteriorProducts[α_m ∘_2 β_4]

(α_m ∙ b1∧b2) ∧ b3∧b4 − (α_m ∙ b1∧b3) ∧ b2∧b4 + (α_m ∙ b1∧b4) ∧ b2∧b3 + (α_m ∙ b2∧b3) ∧ b1∧b4 − (α_m ∙ b2∧b4) ∧ b1∧b3 + (α_m ∙ b3∧b4) ∧ b1∧b2
B = ToInteriorProductsB[α_m ∘_2 β_4]

(α_m ∧ b1∧b2) ∙ b3∧b4 − (α_m ∧ b1∧b3) ∙ b2∧b4 + (α_m ∧ b1∧b4) ∙ b2∧b3 + (α_m ∧ b2∧b3) ∙ b1∧b4 − (α_m ∧ b2∧b4) ∙ b1∧b3 + (α_m ∧ b3∧b4) ∙ b1∧b2
Note that the A form and the B form at this first (interior product) level of their expansion both have the same number of terms (in this case C(4,2) = 6). Although the expressions also both have the same concatenation of factors, their grouping is different. Due to Mathematica's inbuilt operator precedence, parentheses are not needed in the second case: the products group so that parentheses effectively surround the exterior products.
The next step is to convert these two expressions to their inner product forms. But to do this we will need to specify the grade of α. For this example, we take m equal to 4, and replace α_m in the expressions above by a simple 4-element (and consequently we also need to increase the dimension of the space above the default value of 3 to prevent this element being treated as zero).
!6 ; σ = ComposeSimpleForm[α_4]

a1∧a2∧a3∧a4

A1 = %[ToInnerProducts[A /. α_m → σ]]

(a3∧a4 ∙ b3∧b4) a1∧a2∧b1∧b2 − (a3∧a4 ∙ b2∧b4) a1∧a2∧b1∧b3 + (a3∧a4 ∙ b2∧b3) a1∧a2∧b1∧b4 + (a3∧a4 ∙ b1∧b4) a1∧a2∧b2∧b3 − (a3∧a4 ∙ b1∧b3) a1∧a2∧b2∧b4 + (a3∧a4 ∙ b1∧b2) a1∧a2∧b3∧b4
− (a2∧a4 ∙ b3∧b4) a1∧a3∧b1∧b2 + (a2∧a4 ∙ b2∧b4) a1∧a3∧b1∧b3 − (a2∧a4 ∙ b2∧b3) a1∧a3∧b1∧b4 − (a2∧a4 ∙ b1∧b4) a1∧a3∧b2∧b3 + (a2∧a4 ∙ b1∧b3) a1∧a3∧b2∧b4 − (a2∧a4 ∙ b1∧b2) a1∧a3∧b3∧b4
+ (a2∧a3 ∙ b3∧b4) a1∧a4∧b1∧b2 − (a2∧a3 ∙ b2∧b4) a1∧a4∧b1∧b3 + (a2∧a3 ∙ b2∧b3) a1∧a4∧b1∧b4 + (a2∧a3 ∙ b1∧b4) a1∧a4∧b2∧b3 − (a2∧a3 ∙ b1∧b3) a1∧a4∧b2∧b4 + (a2∧a3 ∙ b1∧b2) a1∧a4∧b3∧b4
+ (a1∧a4 ∙ b3∧b4) a2∧a3∧b1∧b2 − (a1∧a4 ∙ b2∧b4) a2∧a3∧b1∧b3 + (a1∧a4 ∙ b2∧b3) a2∧a3∧b1∧b4 + (a1∧a4 ∙ b1∧b4) a2∧a3∧b2∧b3 − (a1∧a4 ∙ b1∧b3) a2∧a3∧b2∧b4 + (a1∧a4 ∙ b1∧b2) a2∧a3∧b3∧b4
− (a1∧a3 ∙ b3∧b4) a2∧a4∧b1∧b2 + (a1∧a3 ∙ b2∧b4) a2∧a4∧b1∧b3 − (a1∧a3 ∙ b2∧b3) a2∧a4∧b1∧b4 − (a1∧a3 ∙ b1∧b4) a2∧a4∧b2∧b3 + (a1∧a3 ∙ b1∧b3) a2∧a4∧b2∧b4 − (a1∧a3 ∙ b1∧b2) a2∧a4∧b3∧b4
+ (a1∧a2 ∙ b3∧b4) a3∧a4∧b1∧b2 − (a1∧a2 ∙ b2∧b4) a3∧a4∧b1∧b3 + (a1∧a2 ∙ b2∧b3) a3∧a4∧b1∧b4 + (a1∧a2 ∙ b1∧b4) a3∧a4∧b2∧b3 − (a1∧a2 ∙ b1∧b3) a3∧a4∧b2∧b4 + (a1∧a2 ∙ b1∧b2) a3∧a4∧b3∧b4
B1 = %[ToInnerProducts[B /. α_m → σ]]

(2 (b1∧b2 ∙ b3∧b4) − 2 (b1∧b3 ∙ b2∧b4) + 2 (b1∧b4 ∙ b2∧b3)) a1∧a2∧a3∧a4
+ (−(a4∧b2 ∙ b3∧b4) + (a4∧b3 ∙ b2∧b4) − (a4∧b4 ∙ b2∧b3)) a1∧a2∧a3∧b1
− ((a4∧b1 ∙ b3∧b4) − (a4∧b3 ∙ b1∧b4) + (a4∧b4 ∙ b1∧b3)) a1∧a2∧a3∧b2
+ (−(a4∧b1 ∙ b2∧b4) + (a4∧b2 ∙ b1∧b4) − (a4∧b4 ∙ b1∧b2)) a1∧a2∧a3∧b3
− ((a4∧b1 ∙ b2∧b3) − (a4∧b2 ∙ b1∧b3) + (a4∧b3 ∙ b1∧b2)) a1∧a2∧a3∧b4
+ ((a3∧b2 ∙ b3∧b4) − (a3∧b3 ∙ b2∧b4) + (a3∧b4 ∙ b2∧b3)) a1∧a2∧a4∧b1
+ (−(a3∧b1 ∙ b3∧b4) + (a3∧b3 ∙ b1∧b4) − (a3∧b4 ∙ b1∧b3)) a1∧a2∧a4∧b2
− ((a3∧b1 ∙ b2∧b4) − (a3∧b2 ∙ b1∧b4) + (a3∧b4 ∙ b1∧b2)) a1∧a2∧a4∧b3
+ (−(a3∧b1 ∙ b2∧b3) + (a3∧b2 ∙ b1∧b3) − (a3∧b3 ∙ b1∧b2)) a1∧a2∧a4∧b4
+ (a3∧a4 ∙ b3∧b4) a1∧a2∧b1∧b2 − (a3∧a4 ∙ b2∧b4) a1∧a2∧b1∧b3 + (a3∧a4 ∙ b2∧b3) a1∧a2∧b1∧b4 + (a3∧a4 ∙ b1∧b4) a1∧a2∧b2∧b3 − (a3∧a4 ∙ b1∧b3) a1∧a2∧b2∧b4 + (a3∧a4 ∙ b1∧b2) a1∧a2∧b3∧b4
+ (−(a2∧b2 ∙ b3∧b4) + (a2∧b3 ∙ b2∧b4) − (a2∧b4 ∙ b2∧b3)) a1∧a3∧a4∧b1
− ((a2∧b1 ∙ b3∧b4) − (a2∧b3 ∙ b1∧b4) + (a2∧b4 ∙ b1∧b3)) a1∧a3∧a4∧b2
+ (−(a2∧b1 ∙ b2∧b4) + (a2∧b2 ∙ b1∧b4) − (a2∧b4 ∙ b1∧b2)) a1∧a3∧a4∧b3
− ((a2∧b1 ∙ b2∧b3) − (a2∧b2 ∙ b1∧b3) + (a2∧b3 ∙ b1∧b2)) a1∧a3∧a4∧b4
− (a2∧a4 ∙ b3∧b4) a1∧a3∧b1∧b2 + (a2∧a4 ∙ b2∧b4) a1∧a3∧b1∧b3 − (a2∧a4 ∙ b2∧b3) a1∧a3∧b1∧b4 − (a2∧a4 ∙ b1∧b4) a1∧a3∧b2∧b3 + (a2∧a4 ∙ b1∧b3) a1∧a3∧b2∧b4 − (a2∧a4 ∙ b1∧b2) a1∧a3∧b3∧b4
+ (a2∧a3 ∙ b3∧b4) a1∧a4∧b1∧b2 − (a2∧a3 ∙ b2∧b4) a1∧a4∧b1∧b3 + (a2∧a3 ∙ b2∧b3) a1∧a4∧b1∧b4 + (a2∧a3 ∙ b1∧b4) a1∧a4∧b2∧b3 − (a2∧a3 ∙ b1∧b3) a1∧a4∧b2∧b4 + (a2∧a3 ∙ b1∧b2) a1∧a4∧b3∧b4
+ ((a1∧b2 ∙ b3∧b4) − (a1∧b3 ∙ b2∧b4) + (a1∧b4 ∙ b2∧b3)) a2∧a3∧a4∧b1
+ (−(a1∧b1 ∙ b3∧b4) + (a1∧b3 ∙ b1∧b4) − (a1∧b4 ∙ b1∧b3)) a2∧a3∧a4∧b2
− ((a1∧b1 ∙ b2∧b4) − (a1∧b2 ∙ b1∧b4) + (a1∧b4 ∙ b1∧b2)) a2∧a3∧a4∧b3
+ (−(a1∧b1 ∙ b2∧b3) + (a1∧b2 ∙ b1∧b3) − (a1∧b3 ∙ b1∧b2)) a2∧a3∧a4∧b4
+ (a1∧a4 ∙ b3∧b4) a2∧a3∧b1∧b2 − (a1∧a4 ∙ b2∧b4) a2∧a3∧b1∧b3 + (a1∧a4 ∙ b2∧b3) a2∧a3∧b1∧b4 + (a1∧a4 ∙ b1∧b4) a2∧a3∧b2∧b3 − (a1∧a4 ∙ b1∧b3) a2∧a3∧b2∧b4 + (a1∧a4 ∙ b1∧b2) a2∧a3∧b3∧b4
− (a1∧a3 ∙ b3∧b4) a2∧a4∧b1∧b2 + (a1∧a3 ∙ b2∧b4) a2∧a4∧b1∧b3 − (a1∧a3 ∙ b2∧b3) a2∧a4∧b1∧b4 − (a1∧a3 ∙ b1∧b4) a2∧a4∧b2∧b3 + (a1∧a3 ∙ b1∧b3) a2∧a4∧b2∧b4 − (a1∧a3 ∙ b1∧b2) a2∧a4∧b3∧b4
+ (a1∧a2 ∙ b3∧b4) a3∧a4∧b1∧b2 − (a1∧a2 ∙ b2∧b4) a3∧a4∧b1∧b3 + (a1∧a2 ∙ b2∧b3) a3∧a4∧b1∧b4 + (a1∧a2 ∙ b1∧b4) a3∧a4∧b2∧b3 − (a1∧a2 ∙ b1∧b3) a3∧a4∧b2∧b4 + (a1∧a2 ∙ b1∧b2) a3∧a4∧b3∧b4
It can be seen that at this inner product level, these two expressions are not of the same form: B1 contains additional terms to those of A1. The inner products of A1 are only of the form (ai∧aj ∙ br∧bs), whereas the extra terms of B1 are of the forms (bi∧bj ∙ br∧bs) and (ai∧bj ∙ br∧bs). Calculating their difference gives:
AB = B1 − A1

(2 (b1∧b2 ∙ b3∧b4) − 2 (b1∧b3 ∙ b2∧b4) + 2 (b1∧b4 ∙ b2∧b3)) a1∧a2∧a3∧a4
+ (−(a4∧b2 ∙ b3∧b4) + (a4∧b3 ∙ b2∧b4) − (a4∧b4 ∙ b2∧b3)) a1∧a2∧a3∧b1
− ((a4∧b1 ∙ b3∧b4) − (a4∧b3 ∙ b1∧b4) + (a4∧b4 ∙ b1∧b3)) a1∧a2∧a3∧b2
+ (−(a4∧b1 ∙ b2∧b4) + (a4∧b2 ∙ b1∧b4) − (a4∧b4 ∙ b1∧b2)) a1∧a2∧a3∧b3
− ((a4∧b1 ∙ b2∧b3) − (a4∧b2 ∙ b1∧b3) + (a4∧b3 ∙ b1∧b2)) a1∧a2∧a3∧b4
+ ((a3∧b2 ∙ b3∧b4) − (a3∧b3 ∙ b2∧b4) + (a3∧b4 ∙ b2∧b3)) a1∧a2∧a4∧b1
+ (−(a3∧b1 ∙ b3∧b4) + (a3∧b3 ∙ b1∧b4) − (a3∧b4 ∙ b1∧b3)) a1∧a2∧a4∧b2
− ((a3∧b1 ∙ b2∧b4) − (a3∧b2 ∙ b1∧b4) + (a3∧b4 ∙ b1∧b2)) a1∧a2∧a4∧b3
+ (−(a3∧b1 ∙ b2∧b3) + (a3∧b2 ∙ b1∧b3) − (a3∧b3 ∙ b1∧b2)) a1∧a2∧a4∧b4
+ (−(a2∧b2 ∙ b3∧b4) + (a2∧b3 ∙ b2∧b4) − (a2∧b4 ∙ b2∧b3)) a1∧a3∧a4∧b1
− ((a2∧b1 ∙ b3∧b4) − (a2∧b3 ∙ b1∧b4) + (a2∧b4 ∙ b1∧b3)) a1∧a3∧a4∧b2
+ (−(a2∧b1 ∙ b2∧b4) + (a2∧b2 ∙ b1∧b4) − (a2∧b4 ∙ b1∧b2)) a1∧a3∧a4∧b3
− ((a2∧b1 ∙ b2∧b3) − (a2∧b2 ∙ b1∧b3) + (a2∧b3 ∙ b1∧b2)) a1∧a3∧a4∧b4
+ ((a1∧b2 ∙ b3∧b4) − (a1∧b3 ∙ b2∧b4) + (a1∧b4 ∙ b2∧b3)) a2∧a3∧a4∧b1
+ (−(a1∧b1 ∙ b3∧b4) + (a1∧b3 ∙ b1∧b4) − (a1∧b4 ∙ b1∧b3)) a2∧a3∧a4∧b2
− ((a1∧b1 ∙ b2∧b4) − (a1∧b2 ∙ b1∧b4) + (a1∧b4 ∙ b1∧b2)) a2∧a3∧a4∧b3
+ (−(a1∧b1 ∙ b2∧b3) + (a1∧b2 ∙ b1∧b3) − (a1∧b3 ∙ b1∧b2)) a2∧a3∧a4∧b4
By inspection, we can see that each scalar coefficient is a zero interior sum, thus making AB equal to zero and proving the theorem for this particular case. We can also verify this directly by reducing the inner products of AB to scalar products.
%[ToScalarProducts[AB]]
0
Note that we need to expand the product to the scalar product level before the identity of the two expansions becomes evident.
For a second example, take m equal to 4 and k equal to 3, with elements

{a1∧a2∧a3∧a4, b1∧b2∧b3}

Expanding the B form of the generalized product of order 2 with respect to b gives:

(a1∧a2∧a3∧a4 ∧ b1) ∙ (b2∧b3) − (a1∧a2∧a3∧a4 ∧ b2) ∙ (b1∧b3) + (a1∧a2∧a3∧a4 ∧ b3) ∙ (b1∧b2)
To expand with respect to a we reverse the order of the generalized product to give β_3 ∘_2 α_4, and obtain:

(a1∧a2 ∧ b1∧b2∧b3) ∙ (a3∧a4) − (a1∧a3 ∧ b1∧b2∧b3) ∙ (a2∧a4) + (a1∧a4 ∧ b1∧b2∧b3) ∙ (a2∧a3) + (a2∧a3 ∧ b1∧b2∧b3) ∙ (a1∧a4) − (a2∧a4 ∧ b1∧b2∧b3) ∙ (a1∧a3) + (a3∧a4 ∧ b1∧b2∧b3) ∙ (a1∧a2)
B1 = %[ToInnerProducts[B]]

(a3∧a4 ∙ b2∧b3) a1∧a2∧b1 − (a3∧a4 ∙ b1∧b3) a1∧a2∧b2 + (a3∧a4 ∙ b1∧b2) a1∧a2∧b3
− (a2∧a4 ∙ b2∧b3) a1∧a3∧b1 + (a2∧a4 ∙ b1∧b3) a1∧a3∧b2 − (a2∧a4 ∙ b1∧b2) a1∧a3∧b3
+ (a2∧a3 ∙ b2∧b3) a1∧a4∧b1 − (a2∧a3 ∙ b1∧b3) a1∧a4∧b2 + (a2∧a3 ∙ b1∧b2) a1∧a4∧b3
+ ((a2∧a3 ∙ a4∧b3) − (a2∧a4 ∙ a3∧b3) + (a2∧b3 ∙ a3∧a4)) a1∧b1∧b2
+ (−(a2∧a3 ∙ a4∧b2) + (a2∧a4 ∙ a3∧b2) − (a2∧b2 ∙ a3∧a4)) a1∧b1∧b3
+ ((a2∧a3 ∙ a4∧b1) − (a2∧a4 ∙ a3∧b1) + (a2∧b1 ∙ a3∧a4)) a1∧b2∧b3
+ (a1∧a4 ∙ b2∧b3) a2∧a3∧b1 − (a1∧a4 ∙ b1∧b3) a2∧a3∧b2 + (a1∧a4 ∙ b1∧b2) a2∧a3∧b3
− (a1∧a3 ∙ b2∧b3) a2∧a4∧b1 + (a1∧a3 ∙ b1∧b3) a2∧a4∧b2 − (a1∧a3 ∙ b1∧b2) a2∧a4∧b3
+ (−(a1∧a3 ∙ a4∧b3) + (a1∧a4 ∙ a3∧b3) − (a1∧b3 ∙ a3∧a4)) a2∧b1∧b2
+ ((a1∧a3 ∙ a4∧b2) − (a1∧a4 ∙ a3∧b2) + (a1∧b2 ∙ a3∧a4)) a2∧b1∧b3
+ (−(a1∧a3 ∙ a4∧b1) + (a1∧a4 ∙ a3∧b1) − (a1∧b1 ∙ a3∧a4)) a2∧b2∧b3
+ (a1∧a2 ∙ b2∧b3) a3∧a4∧b1 − (a1∧a2 ∙ b1∧b3) a3∧a4∧b2 + (a1∧a2 ∙ b1∧b2) a3∧a4∧b3
+ ((a1∧a2 ∙ a4∧b3) − (a1∧a4 ∙ a2∧b3) + (a1∧b3 ∙ a2∧a4)) a3∧b1∧b2
+ (−(a1∧a2 ∙ a4∧b2) + (a1∧a4 ∙ a2∧b2) − (a1∧b2 ∙ a2∧a4)) a3∧b1∧b3
+ ((a1∧a2 ∙ a4∧b1) − (a1∧a4 ∙ a2∧b1) + (a1∧b1 ∙ a2∧a4)) a3∧b2∧b3
+ (−(a1∧a2 ∙ a3∧b3) + (a1∧a3 ∙ a2∧b3) − (a1∧b3 ∙ a2∧a3)) a4∧b1∧b2
+ ((a1∧a2 ∙ a3∧b2) − (a1∧a3 ∙ a2∧b2) + (a1∧b2 ∙ a2∧a3)) a4∧b1∧b3
+ (−(a1∧a2 ∙ a3∧b1) + (a1∧a3 ∙ a2∧b1) − (a1∧b1 ∙ a2∧a3)) a4∧b2∧b3
+ (2 (a1∧a2 ∙ a3∧a4) − 2 (a1∧a3 ∙ a2∧a4) + 2 (a1∧a4 ∙ a2∧a3)) b1∧b2∧b3
By observation we can see that A1 and B1 contain some common terms. We compute the difference of the two inner product forms and collect terms with the same exterior product factor:
AB = B1 − A1

− ((a4∧b1 ∙ b2∧b3) − (a4∧b2 ∙ b1∧b3) + (a4∧b3 ∙ b1∧b2)) a1∧a2∧a3
+ (−(a3∧b1 ∙ b2∧b3) + (a3∧b2 ∙ b1∧b3) − (a3∧b3 ∙ b1∧b2)) a1∧a2∧a4
− ((a2∧b1 ∙ b2∧b3) − (a2∧b2 ∙ b1∧b3) + (a2∧b3 ∙ b1∧b2)) a1∧a3∧a4
+ ((a2∧a3 ∙ a4∧b3) − (a2∧a4 ∙ a3∧b3) + (a2∧b3 ∙ a3∧a4)) a1∧b1∧b2
+ (−(a2∧a3 ∙ a4∧b2) + (a2∧a4 ∙ a3∧b2) − (a2∧b2 ∙ a3∧a4)) a1∧b1∧b3
+ ((a2∧a3 ∙ a4∧b1) − (a2∧a4 ∙ a3∧b1) + (a2∧b1 ∙ a3∧a4)) a1∧b2∧b3
+ (−(a1∧b1 ∙ b2∧b3) + (a1∧b2 ∙ b1∧b3) − (a1∧b3 ∙ b1∧b2)) a2∧a3∧a4
+ (−(a1∧a3 ∙ a4∧b3) + (a1∧a4 ∙ a3∧b3) − (a1∧b3 ∙ a3∧a4)) a2∧b1∧b2
+ ((a1∧a3 ∙ a4∧b2) − (a1∧a4 ∙ a3∧b2) + (a1∧b2 ∙ a3∧a4)) a2∧b1∧b3
+ (−(a1∧a3 ∙ a4∧b1) + (a1∧a4 ∙ a3∧b1) − (a1∧b1 ∙ a3∧a4)) a2∧b2∧b3
+ ((a1∧a2 ∙ a4∧b3) − (a1∧a4 ∙ a2∧b3) + (a1∧b3 ∙ a2∧a4)) a3∧b1∧b2
+ (−(a1∧a2 ∙ a4∧b2) + (a1∧a4 ∙ a2∧b2) − (a1∧b2 ∙ a2∧a4)) a3∧b1∧b3
+ ((a1∧a2 ∙ a4∧b1) − (a1∧a4 ∙ a2∧b1) + (a1∧b1 ∙ a2∧a4)) a3∧b2∧b3
+ (−(a1∧a2 ∙ a3∧b3) + (a1∧a3 ∙ a2∧b3) − (a1∧b3 ∙ a2∧a3)) a4∧b1∧b2
+ ((a1∧a2 ∙ a3∧b2) − (a1∧a3 ∙ a2∧b2) + (a1∧b2 ∙ a2∧a3)) a4∧b1∧b3
+ (−(a1∧a2 ∙ a3∧b1) + (a1∧a3 ∙ a2∧b1) − (a1∧b1 ∙ a2∧a3)) a4∧b2∧b3
+ (2 (a1∧a2 ∙ a3∧a4) − 2 (a1∧a3 ∙ a2∧a4) + 2 (a1∧a4 ∙ a2∧a3)) b1∧b2∧b3
This is an example of how the B form of the generalized product may be expanded in terms of either factor.
When l is equal to the larger of the grades of the factors, the generalized product reduces to a single interior product which is zero by virtue of its left factor being of lesser grade than its right factor. Suppose l = k > m. Then:
$$\alpha_m \circ_k \beta_k \;=\; \alpha_m \bullet \beta_k \;=\; 0, \qquad l = k > m$$
These relationships are the source of an interesting suite of identities relating exterior and interior products. We take some examples; in each case we verify that the result is zero by converting the expression to its scalar product form. But we will need to circumvent the default behaviour of ToInteriorProducts by not initially telling it the grade of one of the terms (since it would normally return 0 for this type of product). We also need to declare some vector symbols and use the B variant of ToInteriorProducts (ToInteriorProductsB) to force the desired expansion.
DeclareExtraVectorSymbols[a_];
Example 1
A123 = ToInteriorProductsB[α ∘_2 β_3] /. α → a1

(a1 ∧ b1) ∙ (b2∧b3) − (a1 ∧ b2) ∙ (b1∧b3) + (a1 ∧ b3) ∙ (b1∧b2)

%[ToScalarProducts[A123]]
0
Example 2
A124 = ToInteriorProductsB[α ∘_2 β_4] /. α → a1

(a1 ∧ b1∧b2) ∙ (b3∧b4) − (a1 ∧ b1∧b3) ∙ (b2∧b4) + (a1 ∧ b1∧b4) ∙ (b2∧b3) + (a1 ∧ b2∧b3) ∙ (b1∧b4) − (a1 ∧ b2∧b4) ∙ (b1∧b3) + (a1 ∧ b3∧b4) ∙ (b1∧b2)

%[ToScalarProducts[A124]]
0
Example 3
A234 = ToInteriorProductsB[α ∘_3 β_4] /. α → a1∧a2

−(a1∧a2 ∧ b1) ∙ (b2∧b3∧b4) + (a1∧a2 ∧ b2) ∙ (b1∧b3∧b4) − (a1∧a2 ∧ b3) ∙ (b1∧b2∧b4) + (a1∧a2 ∧ b4) ∙ (b1∧b2∧b3)

%[ToScalarProducts[A234]]
0
To explore the result of a generalized product whose factors possess a common factor, we start again from the B form:

$$\alpha_m \circ_l \beta_k \;=\; \sum_{j=1}^{\binom{k}{l}} \left(\alpha_m\wedge\beta_j^{\,k-l}\right)\bullet\beta_j^{\,l}, \qquad \beta_k = \beta_1^{\,l}\wedge\beta_1^{\,k-l} = \beta_2^{\,l}\wedge\beta_2^{\,k-l} = \cdots$$
Suppose now that the two factors have a common factor γ_l whose grade is equal to the order of the product:

α_m = γ_l ∧ A_a,  β_k = γ_l ∧ B_b
Substituting these products into the formula above enables us to begin writing the generalized product sum:
$$\left(\gamma_l\wedge A_a\right)\circ_l\left(\gamma_l\wedge B_b\right) \;=\; \left(\gamma_l\wedge A_a\wedge B_b\right)\bullet\gamma_l \;+\; \cdots$$
But we quickly see that because the subsequent terms must now contain some common terms in their exterior product, they must all reduce to zero. Hence, for simple elements if the order of the generalized product is equal to the grade of the common factor, the generalized product reduces to a simple interior product.
$$\left(\gamma_l\wedge A_a\right)\circ_l\left(\gamma_l\wedge B_b\right) \;=\; \left(\gamma_l\wedge A_a\wedge B_b\right)\bullet\gamma_l \tag{10.11}$$
$$\left(A_a\wedge\gamma_l\right)\circ_l\left(\gamma_l\wedge B_b\right) \;=\; \left(A_a\wedge\gamma_l\wedge B_b\right)\bullet\gamma_l \tag{10.12}$$

$$\left(A_a\wedge\gamma_l\right)\circ_l\left(B_b\wedge\gamma_l\right) \;=\; \left(A_a\wedge B_b\wedge\gamma_l\right)\bullet\gamma_l$$
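Formula [10.11] can be spot-checked in the basis-blade model introduced earlier. The Python sketch below is illustrative only (Euclidean orthonormal basis, so γ ∙ γ = 1 for basis blades; names are not package functions); it compares the two sides for a couple of common-factor configurations.

```python
from itertools import combinations

def perm_sign(seq):
    # Sign of the permutation sorting seq; 0 if any index repeats.
    seq = list(seq)
    if len(set(seq)) != len(seq):
        return 0
    s = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

def wedge(a, b):
    s = perm_sign(a + b)
    return s, (tuple(sorted(a + b)) if s else ())

def interior(a, b):
    if not set(b) <= set(a):
        return 0, ()
    rest = tuple(i for i in a if i not in b)
    return perm_sign(b + rest), rest

def gen(a, b, lam):
    # Generalized product of basis blades (formula 10.1).
    out = {}
    for bs in combinations(b, lam):
        rest = tuple(i for i in b if i not in bs)
        s0 = perm_sign(bs + rest)
        s1, c = interior(a, bs)
        s2, blade = wedge(c, rest)
        if s0 * s1 * s2:
            out[blade] = out.get(blade, 0) + s0 * s1 * s2
    return {bl: v for bl, v in out.items() if v}

def lhs(g, A, B):
    # (g ^ A) o_l (g ^ B), with l the grade of the common factor g.
    s1, ga = wedge(g, A)
    s2, gb = wedge(g, B)
    return {bl: s1 * s2 * c for bl, c in gen(ga, gb, len(g)).items()}

def rhs(g, A, B):
    # (g ^ A ^ B) . g, the right hand side of formula 10.11.
    s1, gA = wedge(g, A)
    s2, gAB = wedge(gA, B)
    s3, blade = interior(gAB, g)
    return {blade: s1 * s2 * s3} if s1 * s2 * s3 else {}

assert lhs((2,), (1,), (3,)) == rhs((2,), (1,), (3,)) == {(1, 3): 1}
assert lhs((2, 3), (1,), (4,)) == rhs((2, 3), (1,), (4,)) == {(1, 4): 1}
```

Since the basis here is orthonormal, γ is totally orthogonal to A and B and γ ∙ γ = 1, so the result is just A ∧ B, consistent with the orthogonal-factor formula [10.18] later in the chapter.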
In a similar way, we can see that if the order of the generalized product is less than the grade of the common factor, all terms must now contain some common terms in their exterior product, and the generalized product reduces to zero.
$$\left(\gamma_m\wedge A_a\right)\circ_l\left(\gamma_m\wedge B_b\right) \;=\; 0, \qquad l < m \tag{10.13}$$
Of course, this result is also valid in the case that either or both of A and B are scalars.
$$\gamma_m\circ_l\left(\gamma_m\wedge B_b\right) \;=\; 0, \qquad \left(\gamma_m\wedge A_a\right)\circ_l\gamma_m \;=\; 0, \qquad \gamma_m\circ_l\gamma_m \;=\; 0, \qquad l < m \tag{10.14}$$
Congruent factors
If both A_a and B_b are scalars, that is, the grades a and b are equal to 0, they may be factored out of the formulas, giving:
$$\gamma_m\circ_l\gamma_m \;=\; \gamma_m\bullet\gamma_m, \quad l = m; \qquad \gamma_m\circ_l\gamma_m \;=\; 0, \quad l \ne m \tag{10.15}$$
More generally, if α_n and β_n are congruent n-elements (that is, they differ by at most a scalar factor), then:
$$\alpha_n\circ_l\beta_n \;=\; \alpha_n\bullet\beta_n, \quad l = n; \qquad \alpha_n\circ_l\beta_n \;=\; 0, \quad l \ne n \tag{10.16}$$
These results give rise to a series of relationships among the exterior and interior products of factors of a simple element. These relationships are independent of the dimension of the space concerned. For example, applying the result to simple 2, 3 and 4-elements gives the following relationships at the interior product level:
Example: 2-elements
ToInteriorProducts[α_2 ∘_1 α_2] == 0

(α_2 ∙ a1) ∧ a2 − (α_2 ∙ a2) ∧ a1 == 0
Example: 3-elements
ToInteriorProducts[α_3 ∘_1 α_3] == 0

a1∧a2 ∧ (α_3 ∙ a3) − a1∧a3 ∧ (α_3 ∙ a2) + a2∧a3 ∧ (α_3 ∙ a1) == 0

ToInteriorProducts[α_3 ∘_2 α_3] == 0

(α_3 ∙ a1∧a2) ∧ a3 − (α_3 ∙ a1∧a3) ∧ a2 + (α_3 ∙ a2∧a3) ∧ a1 == 0
Example: 4-elements
ToInteriorProducts[α_4 ∘_1 α_4] == 0

(α_4 ∙ a1) ∧ a2∧a3∧a4 − (α_4 ∙ a2) ∧ a1∧a3∧a4 + (α_4 ∙ a3) ∧ a1∧a2∧a4 − (α_4 ∙ a4) ∧ a1∧a2∧a3 == 0

ToInteriorProducts[α_4 ∘_2 α_4] == 0

a1∧a2 ∧ (α_4 ∙ a3∧a4) − a1∧a3 ∧ (α_4 ∙ a2∧a4) + a1∧a4 ∧ (α_4 ∙ a2∧a3) + a2∧a3 ∧ (α_4 ∙ a1∧a4) − a2∧a4 ∧ (α_4 ∙ a1∧a3) + a3∧a4 ∧ (α_4 ∙ a1∧a2) == 0

ToInteriorProducts[α_4 ∘_3 α_4] == 0

(α_4 ∙ a1∧a2∧a3) ∧ a4 − (α_4 ∙ a1∧a2∧a4) ∧ a3 + (α_4 ∙ a1∧a3∧a4) ∧ a2 − (α_4 ∙ a2∧a3∧a4) ∧ a1 == 0
At the inner product level, some of the generalized products yield further relationships. For example, α_4 ∘_2 α_4 confirms the Zero Interior Sum theorem.
%[ToInnerProducts[α_4 ∘_2 α_4]]
0
Let X_m = α_m⁽¹⁾ + α_m⁽²⁾ + α_m⁽³⁾ + ⋯ be a general (not necessarily simple) m-element. From the axioms for the exterior product it is straightforward to show that X_m ∧ X_m = (−1)^m X_m ∧ X_m. This means, of course, that the exterior product of a general m-element with itself is zero if m is odd. In a similar manner, the formula for the quasi-commutativity of the generalized product is:
$$\alpha_m \circ_l \beta_k \;=\; (-1)^{(m-l)(k-l)}\,\beta_k\circ_l\alpha_m$$
This shows that the generalized product of a general m-element with itself is:
$$X_m \circ_l X_m \;=\; (-1)^{(m-l)}\,X_m\circ_l X_m$$
Thus the generalization of the above result for the exterior product is that the generalized product of a general (not necessarily simple) m-element with itself is zero if m-l is odd, or alternatively, if just one of m and l is odd.
$$\left(\alpha_m^{(1)}+\alpha_m^{(2)}+\alpha_m^{(3)}+\cdots\right)\circ_l\left(\alpha_m^{(1)}+\alpha_m^{(2)}+\alpha_m^{(3)}+\cdots\right) \;=\; 0, \qquad m-l \text{ odd} \tag{10.17}$$
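This vanishing for non-simple elements can be checked by extending the basis-blade model bilinearly. The Python sketch is illustrative only (Euclidean orthonormal basis; names are not package functions); it uses the non-simple 2-element X = e1∧e2 + e2∧e3 + e3∧e4 in a 4-space, for which m − l = 1 is odd when l = 1.

```python
from itertools import combinations

def perm_sign(seq):
    # Sign of the permutation sorting seq; 0 if any index repeats.
    seq = list(seq)
    if len(set(seq)) != len(seq):
        return 0
    s = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

def wedge(a, b):
    s = perm_sign(a + b)
    return s, (tuple(sorted(a + b)) if s else ())

def interior(a, b):
    if not set(b) <= set(a):
        return 0, ()
    rest = tuple(i for i in a if i not in b)
    return perm_sign(b + rest), rest

def gen(a, b, lam):
    # Generalized product of basis blades (formula 10.1).
    out = {}
    for bs in combinations(b, lam):
        rest = tuple(i for i in b if i not in bs)
        s0 = perm_sign(bs + rest)
        s1, c = interior(a, bs)
        s2, blade = wedge(c, rest)
        if s0 * s1 * s2:
            out[blade] = out.get(blade, 0) + s0 * s1 * s2
    return {bl: v for bl, v in out.items() if v}

def gen_mv(A, B, lam):
    # Bilinear extension to multivectors given as {blade: coeff}.
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            for blade, c in gen(a, b, lam).items():
                out[blade] = out.get(blade, 0) + ca * cb * c
    return {bl: v for bl, v in out.items() if v}

X = {(1, 2): 1, (2, 3): 1, (3, 4): 1}     # non-simple: X ^ X != 0
assert gen_mv(X, X, 1) == {}              # m - l = 1 odd, so X o_1 X = 0
assert gen_mv(X, X, 0) == {(1, 2, 3, 4): 2}  # m - l = 2 even: X ^ X = 2 e1234
```

The order-1 cross terms cancel in pairs by quasi-commutativity (sign (−1)^(m−l) = −1), while the order-0 product reproduces the familiar non-zero X ∧ X of a non-simple even-grade element.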
Orthogonal factors
The right hand side of formula 10.11 can be expanded using the Interior Common Factor Theorem (see Chapter 6, The Interior Product) below:
$$\alpha_m\bullet\beta_k \;=\; \sum_{i=1}^{\binom{m}{k}}\left(\alpha_i^{\,k}\bullet\beta_k\right)\,\alpha_i^{\,m-k}, \qquad \alpha_m = \alpha_1^{\,k}\wedge\alpha_1^{\,m-k} = \alpha_2^{\,k}\wedge\alpha_2^{\,m-k} = \cdots$$
To do this we put:

α_m = γ_l ∧ A_a ∧ B_b,  β_k = γ_l

with

γ_l = γ1∧γ2∧⋯∧γl,  A_a = A1∧A2∧⋯∧Aa,  B_b = B1∧B2∧⋯∧Bb
Each of the subsequent terms in the sum on the right hand side of this expansion will contain at least one of the Ai or Bj in the inner product. Thus, if γ_l is totally orthogonal to both A_a and B_b, then:
$$\left(\gamma_l\wedge A_a\right)\circ_l\left(\gamma_l\wedge B_b\right) \;=\; \left(\gamma_l\wedge A_a\wedge B_b\right)\bullet\gamma_l \;=\; \left(\gamma_l\bullet\gamma_l\right)\,A_a\wedge B_b, \qquad \gamma_i\bullet A_j = 0,\; \gamma_i\bullet B_j = 0 \tag{10.18}$$
(A simple element is totally orthogonal to another simple element if each of its 1-element factors is orthogonal to each of the 1-element factors of the other element.)
TabulateBasisProducts[n_][x_ ∘_l y_] :=
 Module[{R, T},
  DeclareBasis[n];
  T = ComposeTable[{"Generalized Products of Basis Elements"},
    Table[{R = x ∘_l y, ToMetricElements[ToScalarProducts[R]]}, {l, 0, n}],
    Range[0, n], {"General Metric", "Euclidean Metric"}];
  T]
TabulateBasisProducts[3][e1∧e2∧e3 ∘_l e1∧e2∧e3]

l:                0   1   2   3
General Metric:   0   0   0   e1∧e2∧e3 ∙ e1∧e2∧e3
Euclidean Metric: 0   0   0   1
TabulateBasisProducts[3][e1∧e2∧e3 ∘_l e1∧e2]

l:                0   1   2                 3
General Metric:   0   0   e1∧e2∧e3 ∙ e1∧e2  0
Euclidean Metric: 0   0   e3                0
TabulateBasisProducts[3][e1∧e2∧e3 ∘_l e1]

l:                0   1              2   3
General Metric:   0   e1∧e2∧e3 ∙ e1  0   0
Euclidean Metric: 0   e2∧e3          0   0
TabulateBasisProducts[3][e1∧e2 ∘_l e1∧e2]

l:                0   1   2              3
General Metric:   0   0   e1∧e2 ∙ e1∧e2  0
Euclidean Metric: 0   0   1              0
TabulateBasisProducts[3][e1∧e2 ∘_l e1∧e3]

l:                0   1              2              3
General Metric:   0   e1∧e2∧e3 ∙ e1  e1∧e2 ∙ e1∧e3  0
Euclidean Metric: 0   e2∧e3          0              0
TabulateBasisProducts[3][e1∧e2 ∘_l e1]

l:                0   1           2   3
General Metric:   0   e1∧e2 ∙ e1  0   0
Euclidean Metric: 0   e2          0   0
TabulateBasisProducts[3][e1∧e2 ∘_l e3]

l:                0         1           2   3
General Metric:   e1∧e2∧e3  e1∧e2 ∙ e3  0   0
Euclidean Metric: e1∧e2∧e3  0           0   0
TabulateBasisProducts[3][e1 ∘_l e1]

l:                0   1        2   3
General Metric:   0   e1 ∙ e1  0   0
Euclidean Metric: 0   1        0   0
TabulateBasisProducts[3][e1 ∘_l e2]

l:                0      1        2   3
General Metric:   e1∧e2  e1 ∙ e2  0   0
Euclidean Metric: e1∧e2  0        0   0
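The Euclidean-metric columns of these tables can be reproduced in the basis-blade model used throughout this chapter's sketches. The Python code below is illustrative only (orthonormal Euclidean metric assumed; names are not package functions); each table row corresponds to one order l.

```python
from itertools import combinations

def perm_sign(seq):
    # Sign of the permutation sorting seq; 0 if any index repeats.
    seq = list(seq)
    if len(set(seq)) != len(seq):
        return 0
    s = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

def wedge(a, b):
    s = perm_sign(a + b)
    return s, (tuple(sorted(a + b)) if s else ())

def interior(a, b):
    if not set(b) <= set(a):
        return 0, ()
    rest = tuple(i for i in a if i not in b)
    return perm_sign(b + rest), rest

def gen(a, b, lam):
    # Generalized product of basis blades (formula 10.1).
    out = {}
    for bs in combinations(b, lam):
        rest = tuple(i for i in b if i not in bs)
        s0 = perm_sign(bs + rest)
        s1, c = interior(a, bs)
        s2, blade = wedge(c, rest)
        if s0 * s1 * s2:
            out[blade] = out.get(blade, 0) + s0 * s1 * s2
    return {bl: v for bl, v in out.items() if v}

# e1^e2^e3 o_l e1^e2 : zero except at order 2, where it gives e3.
assert [gen((1, 2, 3), (1, 2), l) for l in range(4)] == \
       [{}, {}, {(3,): 1}, {}]
# e1^e2 o_l e1^e2 : zero except at order 2, where it gives the scalar 1.
assert [gen((1, 2), (1, 2), l) for l in range(3)] == [{}, {}, {(): 1}]
# e1 o_l e2 : the exterior product at order 0, zero thereafter.
assert [gen((1,), (2,), l) for l in range(2)] == [{(1, 2): 1}, {}]
```

Here the blade `()` with coefficient 1 stands for the scalar 1, matching the Euclidean-metric entries of the tables above.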
This suggests a way in which we might be able to find the common factor of two elements. In Chapter 3: The Regressive Product, we developed the Common Factor Theorem based on the regressive product. Based on this theorem, we were able to derive an Interior Common Factor Theorem in Chapter 6. The formula defining the generalized product is a generalization of this theorem. Thus, it is not so surprising that the generalized product might have application to finding common factors.

In Chapter 3 we explored finding common factors without yet having introduced the notion of metric. The generalized product formula above, however, supposes that a metric has been introduced onto the underlying linear space. But although the interior product (γ_l ∧ A_a ∧ B_b) ∙ γ_l
The formulas for finding common factors using the regressive product developed in Chapter 3 required that the sum of the grades of the elements which contained the common factors be greater than the dimension of the underlying linear space. The formula above has the advantage of being independent of the dimension of the space. A concomitant disadvantage, however, is that the product (γ_l ∧ A_a ∧ B_b) ∙ γ_l may not be displayed in simple form, making it harder to extract the common factor γ_l.
We note also that even though the formula seems to explicitly display the common factor γ_l, this is only because the arguments γ_l ∧ A_a and γ_l ∧ B_b to the generalized product are displayed explicitly in an already factorized form. In fact we are faced with the same situation that occurs with the common factor calculation based on the regressive product: a common factor is determined only up to congruence. We can see this by noting that, for any scalar s, the formula holds equally with the congruent factor s·γ_l, since the compensating scalar 1/s may be absorbed into A_a or B_b, leaving the product (γ_l ∧ A_a ∧ B_b) ∙ γ_l unchanged up to a scalar factor.
Example
To begin, suppose we are working in a space of dimension large enough that none of the products is zero merely because its grade exceeds the dimension of the space. For the purposes of this example we set the dimension (arbitrarily) to 8.
!8 ;
With the knowledge that they are simple, and (because their exterior product is zero) have a (simple) common factor (of still unknown grade) we begin computing the progressively higher orders of generalized products. The first non-zero generalized product will be an expression from which we can extract the common factor. The order of this non-zero generalized product will give the grade of the common factor. The zero-order generalized product is just the exterior product that we computed above. Its returning 0 means that the common factor is of grade at least 1.
%[X ∘₀ Y]
0
The first-order generalized product also returns 0. This means that the common factor is of grade at least 2.
%[X ∘₁ Y]
0
The second-order generalized product does not return 0. This means that the common factor is of grade 2.
Z = %[X ∘₂ Y]
−129 (e1∧e2∧e3∧e4∧e5 ∙ e1∧e3) − 86 (e1∧e2∧e3∧e4∧e5 ∙ e1∧e5) + 36 (e1∧e2∧e3∧e4∧e5 ∙ e2∧e3) + 24 (e1∧e2∧e3∧e4∧e5 ∙ e2∧e5) − 129 (e1∧e2∧e3∧e4∧e5 ∙ e3∧e4) − 168 (e1∧e2∧e3∧e4∧e5 ∙ e3∧e5) + 86 (e1∧e2∧e3∧e4∧e5 ∙ e4∧e5)
Since we can only extract the common factor up to congruence, we can simply replace each term of the form e1∧e2∧e3∧e4∧e5 ∙ ei∧ej with ei∧ej.
Z1 = Z /. (e1∧e2∧e3∧e4∧e5 ∙ e_) → e
−129 e1∧e3 − 86 e1∧e5 + 36 e2∧e3 + 24 e2∧e5 − 129 e3∧e4 − 168 e3∧e5 + 86 e4∧e5
We can verify that Z1 is a common factor of each of X and Y by taking their generalized products of order 1 and seeing if they do indeed result in zero.
%[ToInnerProducts[%[{Z1 ∘₁ X, Z1 ∘₁ Y}]]]
{0, 0}
Remember that an exterior factorization is never unique, reflecting the fact that any linear combination of these factors is a factor of both X and Y.
Z3 = a e1 12 e2 43 - e4 56 e5 43 + b e3 + 2 e5 3 ;
$$\gamma_p \circ_l (\alpha_m + \beta_k) = \gamma_p \circ_l \alpha_m + \gamma_p \circ_l \beta_k \qquad (10.19)$$

$$(\alpha_m + \beta_k) \circ_l \gamma_p = \alpha_m \circ_l \gamma_p + \beta_k \circ_l \gamma_p \qquad (10.20)$$

$$(a\,\alpha_m) \circ_l \beta_k = a\,(\alpha_m \circ_l \beta_k) = \alpha_m \circ_l (a\,\beta_k) \qquad (10.21)$$
$$\alpha_m \circ_0 \alpha_m = \alpha_m \wedge \alpha_m \qquad (10.22)$$

$$\alpha_m \circ_0 \beta_k = \alpha_m \wedge \beta_k \qquad (10.23)$$
$$\alpha_m \circ_l \alpha_m = 0, \qquad 0 < l < m \quad (\alpha_m \text{ simple}) \qquad (10.24)$$
ab
m l k
10.25
10.26
$$\alpha_m \circ_l \beta_m = \alpha_m \cdot \beta_m, \quad l = m; \qquad \alpha_m \circ_l \beta_m = 0, \quad l > m \qquad (10.27)$$
$$(\gamma_m \wedge A_a) \circ_l (\gamma_m \wedge B_b) = 0, \qquad l < m \qquad (10.28)$$
The generalized product containing a common factor reduces to an interior product when l = m
The generalized product of elements containing a common factor reduces to an interior product whenever the order of the generalized product l is equal to the grade of the common factor.
$$(\gamma_l \wedge A_a) \circ_l (\gamma_l \wedge B_b) = (\gamma_l \wedge A_a \wedge B_b) \cdot \gamma_l \qquad (10.29)$$
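Formula 10.29 lends itself to a numerical spot-check. The following Python sketch is our own hedged prototype, not the GrassmannAlgebra package: simple elements are lists of 1-element vectors, multivectors are dictionaries from basis blades (tuples of basis indices) to coefficients, the metric is assumed Euclidean, and the interior product on the right-hand side is computed as the generalized product whose order equals the grade of its (simple) second factor.

```python
# A hedged sketch of the generalized product of simple elements (Euclidean metric).
# Helper names (dot, det, wedge, subset_sign, gp, mv_eq) are our own, not Browne's.
from itertools import combinations
import random

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def det(M):
    """Determinant by Laplace expansion; fine for the small matrices used here."""
    if len(M) == 0:
        return 1.0
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def wedge(vectors, dim):
    """Coefficients of v1^...^vn on the basis n-blades (index tuples) of R^dim."""
    n = len(vectors)
    return {S: det([[vectors[c][r] for c in range(n)] for r in S])
            for S in combinations(range(dim), n)}

def subset_sign(S):
    """Sign of reordering a simple element so the factors indexed by S come first."""
    return (-1) ** sum(i - p for p, i in enumerate(S))

def gp(A, B, l, dim):
    """Order-l generalized product: sum over l-factorizations of
    (alpha_i . beta_j)(rest_i ^ rest_j)."""
    out = {}
    for S in combinations(range(len(A)), l):
        for T in combinations(range(len(B)), l):
            gram = [[dot(A[i], B[j]) for j in T] for i in S]
            c = det(gram) * subset_sign(S) * subset_sign(T)
            rest = [A[i] for i in range(len(A)) if i not in S] + \
                   [B[j] for j in range(len(B)) if j not in T]
            for blade, w in wedge(rest, dim).items():
                out[blade] = out.get(blade, 0.0) + c * w
    return out

def mv_eq(u, v, tol=1e-9):
    return all(abs(u.get(S, 0.0) - v.get(S, 0.0)) < tol for S in set(u) | set(v))

random.seed(1)
dim = 6
rand = lambda: [random.uniform(-1, 1) for _ in range(dim)]
g = [rand()]                      # common factor gamma of grade l = 1
A = [rand()]                      # remaining factor of the first element
B = [rand(), rand()]              # remaining factors of the second element
lhs = gp(g + A, g + B, 1, dim)    # (gamma ^ A) o_1 (gamma ^ B)
rhs = gp(g + A + B, g, 1, dim)    # (gamma ^ A ^ B) . gamma
```

For random vectors `mv_eq(lhs, rhs)` holds, and the same comparison goes through for a grade-2 common factor with an order-2 product.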
The generalized square of a general (not necessarily simple) element is zero if m-l is odd.
The generalized product of a general (not necessarily simple) m-element with itself is zero if m-l is odd.
$$\left(\alpha^1_m + \alpha^2_m + \alpha^3_m + \cdots\right) \circ_l \left(\alpha^1_m + \alpha^2_m + \alpha^3_m + \cdots\right) = 0, \qquad m - l \text{ odd} \qquad (10.30)$$
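For a simple element this statement is easy to confirm numerically. The sketch below is our own hedged prototype of the generalized product under a Euclidean metric (not the GrassmannAlgebra package); note that the theorem covers general, not necessarily simple, elements, which this little sketch does not represent. We compute the generalized square of a 3-element for l = 2 (m − l odd, so it must vanish) and contrast it with l = 3 (m − l even; this is the non-zero interior square).

```python
# Hedged sketch, Euclidean metric; helper names are our own, not Browne's.
from itertools import combinations
import random

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def det(M):
    if len(M) == 0:
        return 1.0
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def wedge(vectors, dim):
    n = len(vectors)
    return {S: det([[vectors[c][r] for c in range(n)] for r in S])
            for S in combinations(range(dim), n)}

def subset_sign(S):
    return (-1) ** sum(i - p for p, i in enumerate(S))

def gp(A, B, l, dim):
    """Order-l generalized product of simple elements given as lists of vectors."""
    out = {}
    for S in combinations(range(len(A)), l):
        for T in combinations(range(len(B)), l):
            gram = [[dot(A[i], B[j]) for j in T] for i in S]
            c = det(gram) * subset_sign(S) * subset_sign(T)
            rest = [A[i] for i in range(len(A)) if i not in S] + \
                   [B[j] for j in range(len(B)) if j not in T]
            for blade, w in wedge(rest, dim).items():
                out[blade] = out.get(blade, 0.0) + c * w
    return out

random.seed(2)
dim = 5
A = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(3)]  # simple 3-element
sq_odd = gp(A, A, 2, dim)    # m - l = 1 is odd: the generalized square vanishes
sq_even = gp(A, A, 3, dim)   # m - l = 0 is even: the interior square, non-zero
```

The vanishing for odd m − l is exactly the quasi-commutativity sign at work: interchanging the equal factors multiplies the product by (−1)^((m−l)²) = −1, so it equals its own negative.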
The generalized product reduces to an exterior product when the common factor is totally orthogonal to the remaining factors.
The generalized product of elements containing a common factor reduces to an exterior product whenever the common factor is totally orthogonal to the other elements.
$$(\gamma_l \wedge A_a) \circ_l (\gamma_l \wedge B_b) = (\gamma_l \wedge A_a \wedge B_b) \cdot \gamma_l = (\gamma_l \cdot \gamma_l)\, A_a \wedge B_b, \qquad \gamma_i \cdot A_j = 0, \;\; \gamma_i \cdot B_j = 0 \qquad (10.31)$$
$$\alpha_m \circ_l \beta_k = \sum_{i} \sum_{j} \left(\alpha^i_l \cdot \beta^j_l\right)\, \bar\alpha^i_{m-l} \wedge \bar\beta^j_{k-l}$$

where $\alpha_m = \alpha^1_l \wedge \bar\alpha^1_{m-l} = \alpha^2_l \wedge \bar\alpha^2_{m-l} = \cdots$, and similarly for $\beta_k$.
For simplicity, and without loss of generality (given the quasi-commutativity of the generalized product), we can assume k is less than or equal to m, so that l is then less than k. We begin by looking at the simplest case, that in which the order l is equal to 1. The formula above then becomes:
$$\alpha_m \circ_1 \beta_k = \sum_{i=1}^{m} \sum_{j=1}^{k} \left(\alpha^i_1 \cdot \beta^j_1\right)\, \bar\alpha^i_{m-1} \wedge \bar\beta^j_{k-1}$$

where $\alpha_m = \alpha^1_1 \wedge \bar\alpha^1_{m-1} = \alpha^2_1 \wedge \bar\alpha^2_{m-1} = \cdots$ and $\beta_k = \beta^1_1 \wedge \bar\beta^1_{k-1} = \beta^2_1 \wedge \bar\beta^2_{k-1} = \cdots$
To get a clearer picture of the form of this sum, we can take a specific example, say for m equal to 3 and k equal to 2:
$$\alpha_3 \circ_1 \beta_2 = \sum_{i=1}^{3} \sum_{j=1}^{2} \left(a_i \cdot b_j\right)\, \bar a_i \wedge \bar b_j$$

G = ToScalarProducts[α3 ∘₁ β2]
Thus we can see that the generalized product is a linear combination of terms, each of which is an exterior product with a scalar product as coefficient. The distinguishing feature of this linear combination is the way in which it combines all the essentially different combinations of 1-element factors of the original elements. Before we proceed further, we will explore ways in which we can visualize the essential components that make up this type of expression.
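Because the expansion is completely combinatorial, it is straightforward to prototype outside of Mathematica. The following Python sketch is our own hedged illustration of the double-sum formula — it is not the GrassmannAlgebra package, and the helper names are our own. Simple elements are lists of 1-element vectors, multivectors are dictionaries from basis blades (sorted tuples of basis indices) to coefficients, and the metric is assumed Euclidean.

```python
# Hedged sketch of the double-sum formula for the generalized product;
# not the GrassmannAlgebra package. Metric assumed Euclidean.
from itertools import combinations

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def det(M):
    """Determinant by Laplace expansion; adequate for the small matrices here."""
    if len(M) == 0:
        return 1.0
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def wedge(vectors, dim):
    """Coefficients of v1^...^vn on the basis n-blades of R^dim."""
    n = len(vectors)
    return {S: det([[vectors[c][r] for c in range(n)] for r in S])
            for S in combinations(range(dim), n)}

def subset_sign(S):
    """Sign of reordering a simple element so the factors indexed by S come first."""
    return (-1) ** sum(i - p for p, i in enumerate(S))

def gp(A, B, l, dim):
    """Order-l generalized product of a1^...^am and b1^...^bk:
    sum over l-factorizations of (alpha_i . beta_j)(rest_i ^ rest_j)."""
    out = {}
    for S in combinations(range(len(A)), l):
        for T in combinations(range(len(B)), l):
            gram = [[dot(A[i], B[j]) for j in T] for i in S]
            c = det(gram) * subset_sign(S) * subset_sign(T)
            rest = [A[i] for i in range(len(A)) if i not in S] + \
                   [B[j] for j in range(len(B)) if j not in T]
            for blade, w in wedge(rest, dim).items():
                out[blade] = out.get(blade, 0.0) + c * w
    return out

# m = 3, k = 2, l = 1 with basis factors: only the (a1.b1) a2^a3^b2 term survives.
e = lambda i, dim=5: [1.0 if j == i else 0.0 for j in range(dim)]
G = gp([e(0), e(1), e(2)], [e(0), e(3)], 1, 5)
```

With these conventions the sketch reproduces the six-term pattern displayed above; for instance, when a1 = b1 = e1 the surviving coefficient in `G` is that of the blade `(1, 2, 3)`, corresponding to the term (a1∙b1) a2∧a3∧b2.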
As a first example, we will take m equal to 3, k equal to 2, and l equal to 1, as above. Composing the lists and displaying them in MatrixForm for easier visualization gives
A = List SpineBa, 1F MatrixForm;
3
and multiply it out (using the interior product as the product operation) by applying the GrassmannAlgebra function MatrixProduct (or its alias B). (Note that MatrixProduct removes any MatrixForm wrappers.)
Mi = MatrixProduct[A B] // MatrixForm

a1∙b1   a1∙b2
a2∙b1   a2∙b2
a3∙b1   a3∙b2
and multiply it out (using the exterior product as the product operation). We won't simplify the double negative signs in order to retain the correspondence with the previous expressions.
Me = MatrixProduct[Ac Bc] // MatrixForm

a2∧a3 ∧ b2      a2∧a3 ∧ (−b1)
−(a1∧a3) ∧ b2   −(a1∧a3) ∧ (−b1)
a1∧a2 ∧ b2      a1∧a2 ∧ (−b1)
Finally, we can put these two matrices side by side to show the interior and exterior components of the final expression.
Mi Me

a1∙b1   a1∙b2        a2∧a3 ∧ b2      a2∧a3 ∧ (−b1)
a2∙b1   a2∙b2        −(a1∧a3) ∧ b2   −(a1∧a3) ∧ (−b1)
a3∙b1   a3∙b2        a1∧a2 ∧ b2      a1∧a2 ∧ (−b1)
The generalized product is now the sum of the element-by-element products (not the matrix product) of these matrices.
$[Plus @@ Flatten[Mi Me /. MatrixForm → Identity]]
−(a3∙b2) a1∧a2∧b1 + (a3∙b1) a1∧a2∧b2 + (a2∙b2) a1∧a3∧b1 − (a2∙b1) a1∧a3∧b2 − (a1∙b2) a2∧a3∧b1 + (a1∙b1) a2∧a3∧b2
Visualization examples
We can collect the operations of the previous section into a simple function to easily visualize any generalized product.
VizualizeGeneralizedProduct[X_ ∘l_ Y_] :=
Module[{A, Ac, B, Bc, Mi, Me},
  A = List /@ Spine[X, l]; Ac = List /@ Cospine[X, l];
  B = {Spine[Y, l]}; Bc = {Cospine[Y, l]};
  Mi = MatrixProduct[A B]; Me = MatrixProduct[Ac Bc];
  MatrixForm[Mi] * MatrixForm[Me]];
Example 1
In this first example we visualize all the non-zero generalized products of a 3-element and a 2-element.
VizualizeGeneralizedProduct[α3 ∘₀ β2]

( 1∙1 )   ( a1∧a2∧a3 ∧ b1∧b2 )

VizualizeGeneralizedProduct[α3 ∘₁ β2]

a1∙b1   a1∙b2        a2∧a3 ∧ b2      a2∧a3 ∧ (−b1)
a2∙b1   a2∙b2        −(a1∧a3) ∧ b2   −(a1∧a3) ∧ (−b1)
a3∙b1   a3∙b2        a1∧a2 ∧ b2      a1∧a2 ∧ (−b1)

VizualizeGeneralizedProduct[α3 ∘₂ β2]

a1∧a2 ∙ b1∧b2        a3 (1)
a1∧a3 ∙ b1∧b2        −a2 (1)
a2∧a3 ∙ b1∧b2        a1 (1)
Example 2
In this example we visualize some non-zero generalized products of a 4-element and 3-element.
VizualizeGeneralizedProduct[α4 ∘₁ β3]

a1∙b1   a1∙b2   a1∙b3
a2∙b1   a2∙b2   a2∙b3
a3∙b1   a3∙b2   a3∙b3
a4∙b1   a4∙b2   a4∙b3

a2∧a3∧a4 ∧ b2∧b3      a2∧a3∧a4 ∧ (−(b1∧b3))      a2∧a3∧a4 ∧ b1∧b2
−(a1∧a3∧a4) ∧ b2∧b3   −(a1∧a3∧a4) ∧ (−(b1∧b3))   −(a1∧a3∧a4) ∧ b1∧b2
a1∧a2∧a4 ∧ b2∧b3      a1∧a2∧a4 ∧ (−(b1∧b3))      a1∧a2∧a4 ∧ b1∧b2
−(a1∧a2∧a3) ∧ b2∧b3   −(a1∧a2∧a3) ∧ (−(b1∧b3))   −(a1∧a2∧a3) ∧ b1∧b2

VizualizeGeneralizedProduct[α4 ∘₂ β3]

a1∧a2 ∙ b1∧b2   a1∧a2 ∙ b1∧b3   a1∧a2 ∙ b2∧b3
a1∧a3 ∙ b1∧b2   a1∧a3 ∙ b1∧b3   a1∧a3 ∙ b2∧b3
a1∧a4 ∙ b1∧b2   a1∧a4 ∙ b1∧b3   a1∧a4 ∙ b2∧b3
a2∧a3 ∙ b1∧b2   a2∧a3 ∙ b1∧b3   a2∧a3 ∙ b2∧b3
a2∧a4 ∙ b1∧b2   a2∧a4 ∙ b1∧b3   a2∧a4 ∙ b2∧b3
a3∧a4 ∙ b1∧b2   a3∧a4 ∙ b1∧b3   a3∧a4 ∙ b2∧b3

a3∧a4 ∧ b3      a3∧a4 ∧ (−b2)      a3∧a4 ∧ b1
−(a2∧a4) ∧ b3   −(a2∧a4) ∧ (−b2)   −(a2∧a4) ∧ b1
a2∧a3 ∧ b3      a2∧a3 ∧ (−b2)      a2∧a3 ∧ b1
a1∧a4 ∧ b3      a1∧a4 ∧ (−b2)      a1∧a4 ∧ b1
−(a1∧a3) ∧ b3   −(a1∧a3) ∧ (−b2)   −(a1∧a3) ∧ b1
a1∧a2 ∧ b3      a1∧a2 ∧ (−b2)      a1∧a2 ∧ b1

VizualizeGeneralizedProduct[α4 ∘₃ β3]

a1∧a2∧a3 ∙ b1∧b2∧b3      a4 (1)
a1∧a2∧a4 ∙ b1∧b2∧b3      −a3 (1)
a1∧a3∧a4 ∙ b1∧b2∧b3      a2 (1)
a2∧a3∧a4 ∙ b1∧b2∧b3      −a1 (1)
The components
Now that we are able to visualize the generalized product in terms of its exterior and interior product components, it becomes easier to see the influence on the product result of its factors. In the examples above we expressed the generalized product as the sum of the element-by-element (ordinary) products of an interior product matrix, and an exterior product matrix.
VizualizeGeneralizedProduct[α3 ∘₁ β2]

a1∙b1   a1∙b2        a2∧a3 ∧ b2      a2∧a3 ∧ (−b1)
a2∙b1   a2∙b2        −(a1∧a3) ∧ b2   −(a1∧a3) ∧ (−b1)
a3∙b1   a3∙b2        a1∧a2 ∧ b2      a1∧a2 ∧ (−b1)
Consider now the inner (scalar, in this case) product of a general element belonging to $\alpha_3$ with a general element belonging to $\beta_2$. We can see from this that $\beta_2$ is totally orthogonal to $\alpha_3$ if and only if every one of the 6 scalar products is zero.
The exterior product of a general element belonging to $\alpha_3$ with a general element belonging to $\beta_2$ is:

%[(a a1 + b a2 + c a3) ∧ (d b1 + e b2)]
a d a1∧b1 + a e a1∧b2 + b d a2∧b1 + b e a2∧b2 + c d a3∧b1 + c e a3∧b2
Common elements
The generalized product is

G = ToScalarProducts[α3 ∘₁ β2]
−(a3∙b2) a1∧a2∧b1 + (a3∙b1) a1∧a2∧b2 + (a2∙b2) a1∧a3∧b1 − (a2∙b1) a1∧a3∧b2 − (a1∙b2) a2∧a3∧b1 + (a1∙b1) a2∧a3∧b2

G1 = $[G /. b1 → a1]
−(a1∙b2) a1∧a2∧a3 + (a1∙a3) a1∧a2∧b2 − (a1∙a2) a1∧a3∧b2 + (a1∙a1) a2∧a3∧b2

G2 = $[G1 /. b2 → a2]
0
!8 ; H = %BToInnerProductsBa bFF
4 1 3
- Ha4 b3 L a1 a2 a3 b1 b2 + Ha4 b2 L a1 a2 a3 b1 b3 Ha4 b1 L a1 a2 a3 b2 b3 + Ha3 b3 L a1 a2 a4 b1 b2 Ha3 b2 L a1 a2 a4 b1 b3 + Ha3 b1 L a1 a2 a4 b2 b3 Ha2 b3 L a1 a3 a4 b1 b2 + Ha2 b2 L a1 a3 a4 b1 b3 Ha2 b1 L a1 a3 a4 b2 b3 + Ha1 b3 L a2 a3 a4 b1 b2 Ha1 b2 L a2 a3 a4 b1 b3 + Ha1 b1 L a2 a3 a4 b2 b3 H1 = $@H . b1 a1 D - Ha1 b3 L a1 a2 a3 a4 b2 + Ha1 b2 L a1 a2 a3 a4 b3 Ha1 a4 L a1 a2 a3 b2 b3 + Ha1 a3 L a1 a2 a4 b2 b3 Ha1 a2 L a1 a3 a4 b2 b3 + Ha1 a1 L a2 a3 a4 b2 b3 H2 = $@H1 . b2 a2 D 0
- Ha3 b2 L a1 a2 b1 + Ha3 b1 L a1 a2 b2 + Ha2 b2 L a1 a3 b1 Ha2 b1 L a1 a3 b2 - Ha1 b2 L a2 a3 b1 + Ha1 b1 L a2 a3 b2 G1 = %@G . b1 Ha a1 + b a2 + c a3 LD H- a Ha1 b2 L - b Ha2 b2 L - c Ha3 b2 LL a1 a2 a3 + Ha Ha1 a3 L + b Ha2 a3 L + c Ha3 a3 LL a1 a2 b2 + H- a Ha1 a2 L - b Ha2 a2 L - c Ha2 a3 LL a1 a3 b2 + Ha Ha1 a1 L + b Ha1 a2 L + c Ha1 a3 LL a2 a3 b2 G2 = %@G1 . b2 Hd a1 + e a2 + f a3 LD 0
A larger expression
!8 ; H = %BToInnerProductsBa bFF
4 1 3
- Ha4 b3 L a1 a2 a3 b1 b2 + Ha4 b2 L a1 a2 a3 b1 b3 Ha4 b1 L a1 a2 a3 b2 b3 + Ha3 b3 L a1 a2 a4 b1 b2 Ha3 b2 L a1 a2 a4 b1 b3 + Ha3 b1 L a1 a2 a4 b2 b3 Ha2 b3 L a1 a3 a4 b1 b2 + Ha2 b2 L a1 a3 a4 b1 b3 Ha2 b1 L a1 a3 a4 b2 b3 + Ha1 b3 L a2 a3 a4 b1 b2 Ha1 b2 L a2 a3 a4 b1 b3 + Ha1 b1 L a2 a3 a4 b2 b3 H1 = $@H . b3 z . a4 zD Hz b2 L z a1 a2 a3 b1 - Hz b1 L z a1 a2 a3 b2 + Hz a3 L z a1 a2 b1 b2 - Hz a2 L z a1 a3 b1 b2 + Hz a1 L z a2 a3 b1 b2 - Hz zL a1 a2 a3 b1 b2
H2 = $@H1 . b2 y . a2 yD 0
More completely
H3 = %@H . b1 Ha a1 + b a2 + c a3 + d a4 LD H- a Ha1 b3 L - b Ha2 b3 L - c Ha3 b3 L - d Ha4 b3 LL a1 a2 a3 a4 b2 + Ha Ha1 b2 L + b Ha2 b2 L + c Ha3 b2 L + d Ha4 b2 LL a1 a2 a3 a4 b3 + H- a Ha1 a4 L - b Ha2 a4 L - c Ha3 a4 L - d Ha4 a4 LL a1 a2 a3 b2 b3 + Ha Ha1 a3 L + b Ha2 a3 L + c Ha3 a3 L + d Ha3 a4 LL a1 a2 a4 b2 b3 + H- a Ha1 a2 L - b Ha2 a2 L - c Ha2 a3 L - d Ha2 a4 LL a1 a3 a4 b2 b3 + Ha Ha1 a1 L + b Ha1 a2 L + c Ha1 a3 L + d Ha1 a4 LL a2 a3 a4 b2 b3 H4 = %@H3 . b2 He a1 + f a2 + g a3 + h a4 LD 0
Mi1 = MatrixForm@88a1 b1 , a1 b2 <, 8a2 b1 , a2 b2 <, 8a3 b1 , a3 b2 <<D . b1 Ha a1 + b a2 + c a3 L . b2 Hd a1 + e a2 + f a3 L a1 Ha a1 + b a2 + c a3 L a1 Hd a1 + e a2 + f a3 L a2 Ha a1 + b a2 + c a3 L a2 Hd a1 + e a2 + f a3 L a3 Ha a1 + b a2 + c a3 L a3 Hd a1 + e a2 + f a3 L Me1 = MatrixForm@88a2 a3 b2 , a2 a3 -b1 <, 8- Ha1 a3 L b2 , - Ha1 a3 L -b1 <, 8a1 a2 b2 , a1 a2 -b1 <<D . b1 Ha a1 + b a2 + c a3 L . b2 Hd a1 + e a2 + f a3 L a2 a3 Hd a1 + e a2 + f a3 L a2 a3 H- a a1 - b a2 - c a3 L - Ha1 a3 L Hd a1 + e a2 + f a3 L - Ha1 a3 L H- a a1 - b a2 - c a3 L a1 a2 Hd a1 + e a2 + f a3 L a1 a2 H- a a1 - b a2 - c a3 L Me2 = % Me1 d a1 a2 a3 - a a1 a2 a3 e a1 a2 a3 - b a1 a2 a3 f a1 a2 a3 - c a1 a2 a3
This reduces to
a1 d Ha a1 + b a2 + c a3 L a1 H- aL Hd a1 + e a2 + f a3 L a2 e Ha a1 + b a2 + c a3 L a2 H- bL Hd a1 + e a2 + f a3 L a3 f Ha a1 + b a2 + c a3 L a3 H- cL Hd a1 + e a2 + f a3 L
It can be seen that for every scalar product there is a corresponding negative version which will cancel it when the terms are summed. Hence, if a generalized product of order 1 has either (1) factors with at least two 1-elements in common, or (2) factors which are totally orthogonal, then it is zero. Conversely, if a generalized product of order 1 is zero, then it may have (1) factors with at least two 1-elements in common, or (2) factors which are totally orthogonal. These two cases are disjoint: factors cannot both be totally orthogonal and have elements in common, for elements in common give rise to scalar products of the form a1∙a1.
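Both halves of this observation are easy to confirm numerically. Below is our own hedged Python sketch of the order-1 generalized product of simple elements (Euclidean metric; not the GrassmannAlgebra package), applied first to factors sharing two 1-elements, then to totally orthogonal factors.

```python
# Hedged sketch, Euclidean metric; helper names are our own, not Browne's.
from itertools import combinations
import random

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def det(M):
    if len(M) == 0:
        return 1.0
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def wedge(vectors, dim):
    n = len(vectors)
    return {S: det([[vectors[c][r] for c in range(n)] for r in S])
            for S in combinations(range(dim), n)}

def subset_sign(S):
    return (-1) ** sum(i - p for p, i in enumerate(S))

def gp(A, B, l, dim):
    """Order-l generalized product of simple elements given as lists of vectors."""
    out = {}
    for S in combinations(range(len(A)), l):
        for T in combinations(range(len(B)), l):
            gram = [[dot(A[i], B[j]) for j in T] for i in S]
            c = det(gram) * subset_sign(S) * subset_sign(T)
            rest = [A[i] for i in range(len(A)) if i not in S] + \
                   [B[j] for j in range(len(B)) if j not in T]
            for blade, w in wedge(rest, dim).items():
                out[blade] = out.get(blade, 0.0) + c * w
    return out

random.seed(3)
dim = 6
rand = lambda: [random.uniform(-1, 1) for _ in range(dim)]
c, d, x = rand(), rand(), rand()
shared = gp([c, d, x], [c, d], 1, dim)                   # two 1-elements in common
orthA = [v[:3] + [0.0] * 3 for v in (rand(), rand())]    # supported on e1..e3
orthB = [[0.0] * 3 + v[3:] for v in (rand(), rand())]    # supported on e4..e6
ortho = gp(orthA, orthB, 1, dim)                         # totally orthogonal factors
```

Both `shared` and `ortho` come out identically zero, matching the pairwise cancellation described above.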
If b2 is orthogonal to the factors ai:

Go = G /. {a1∙b2 → 0, a2∙b2 → 0, a3∙b2 → 0}
(a3∙b1) a1∧a2∧b2 − (a2∙b1) a1∧a3∧b2 + (a1∙b1) a2∧a3∧b2
((a1∧a2∧a3) ∙ b1) ∧ b2
A larger expression
!8 ; J = %BToInnerProductsBa bFF
4 2 3
Ha3 a4 b2 b3 L a1 a2 b1 - Ha3 a4 b1 b3 L a1 a2 b2 + Ha3 a4 b1 b2 L a1 a2 b3 - Ha2 a4 b2 b3 L a1 a3 b1 + Ha2 a4 b1 b3 L a1 a3 b2 - Ha2 a4 b1 b2 L a1 a3 b3 + Ha2 a3 b2 b3 L a1 a4 b1 - Ha2 a3 b1 b3 L a1 a4 b2 + Ha2 a3 b1 b2 L a1 a4 b3 + Ha1 a4 b2 b3 L a2 a3 b1 Ha1 a4 b1 b3 L a2 a3 b2 + Ha1 a4 b1 b2 L a2 a3 b3 Ha1 a3 b2 b3 L a2 a4 b1 + Ha1 a3 b1 b3 L a2 a4 b2 Ha1 a3 b1 b2 L a2 a4 b3 + Ha1 a2 b2 b3 L a3 a4 b1 Ha1 a2 b1 b3 L a3 a4 b2 + Ha1 a2 b1 b2 L a3 a4 b3 J1 = $@J . b3 z . a4 zD - Hz a3 b1 b2 L z a1 a2 + Hz a2 b1 b2 L z a1 a3 + Hz b2 a2 a3 L z a1 b1 Hz b1 a2 a3 L z a1 b2 - Hz a1 b1 b2 L z a2 a3 Hz b2 a1 a3 L z a2 b1 + Hz b1 a1 a3 L z a2 b2 Hz b2 a1 a2 L z a3 b1 - Hz b1 a1 a2 L z a3 b2 Hz a3 z b2 L a1 a2 b1 - Hz a3 z b1 L a1 a2 b2 Hz a2 z b2 L a1 a3 b1 + Hz a2 z b1 L a1 a3 b2 Hz a1 z b2 L a2 a3 b1 - Hz a1 z b1 L a2 a3 b2 + + +
J2 = $@J1 . b2 x . a3 xD Hx a2 z b1 - x b1 z a2 L x z a1 + H- Hx a1 z b1 L + x b1 z a1 L x z a2 + Hx z a1 a2 L x z b1 + Hx z z b1 L x a1 a2 Hx z z a2 L x a1 b1 + Hx z z a1 L x a2 b1 Hx z x b1 L z a1 a2 + Hx z x a2 L z a1 b1 Hx z x a1 L z a2 b1 + Hx z x zL a1 a2 b1 %@ToScalarProducts@J2DD H- Hx b1 L Hz a2 L + Hx a2 L Hz b1 LL x z a1 + HHx b1 L Hz a1 L - Hx a1 L Hz b1 LL x z a2 + H- Hx a2 L Hz a1 L + Hx a1 L Hz a2 LL x z b1 + H- Hx b1 L Hz zL + Hx zL Hz b1 LL x a1 a2 + HHx a2 L Hz zL - Hx zL Hz a2 LL x a1 b1 + H- Hx a1 L Hz zL + Hx zL Hz a1 LL x a2 b1 + HHx zL Hx b1 L - Hx xL Hz b1 LL z a1 a2 + H- Hx zL Hx a2 L + Hx xL Hz a2 LL z a1 b1 + HHx zL Hx a1 L - Hx xL Hz a1 LL z a2 b1 + I- Hx zL2 + Hx xL Hz zLM a1 a2 b1 J3 = $@J2 . b1 y . a2 yD H- Hx y z a1 L + x z y a1 - x a1 y zL x y z %@ToScalarProducts@J3DD 0 Ha1 y x zL Hy x zL 0
$$(\alpha_m \wedge \gamma_p) \circ_l (\beta_k \wedge \gamma_p) = 0, \qquad l < p$$

$$(\alpha_m \wedge \gamma_p) \circ_l (\beta_k \wedge \gamma_p) = (\alpha_m \wedge \beta_k \wedge \gamma_p) \cdot \gamma_p, \qquad l = p$$

$$(\alpha_m \wedge \gamma_p) \circ_l (\beta_k \wedge \gamma_p) \neq 0 \text{ (in general)}, \qquad l > p$$
$$(\gamma_l \wedge A_a) \circ_l (\gamma_l \wedge B_b) = (\gamma_l \wedge A_a \wedge B_b) \cdot \gamma_l = (\gamma_l \cdot \gamma_l)\, A_a \wedge B_b, \qquad \gamma_i \cdot A_j = 0, \;\; \gamma_i \cdot B_j = 0 \qquad (10.32)$$
$$(\gamma_p \wedge \alpha_m) \circ_l (\gamma_p \wedge \beta_k) = (\gamma_p \cdot \gamma_p)\left(\alpha_m \circ_{l-p} \beta_k\right), \qquad l \geq p \qquad (10.33)$$

where $\gamma_p = \gamma_1 \wedge \gamma_2 \wedge \cdots \wedge \gamma_p$ and $a_i \cdot \gamma_j = 0$, $b_i \cdot \gamma_j = 0$.
$$(\gamma_p \wedge \alpha_m) \circ_l \beta_k = (-1)^{p\,l}\, \gamma_p \wedge \left(\alpha_m \circ_l \beta_k\right) \qquad (10.34)$$

where $\gamma_p = \gamma_1 \wedge \gamma_2 \wedge \cdots \wedge \gamma_p$ and $a_i \cdot \gamma_j = 0$, $b_i \cdot \gamma_j = 0$.
Hypothesis
gA gB g A B
l m m l k l m m-l k
g g g A B,
l l l m m-l k
H- Ha2 g1 L Hb3 g1 L + Ha2 b3 L Hg1 g1 LL a1 b1 b2 + HHa2 g1 L Hb2 g1 L - Ha2 b2 L Hg1 g1 LL a1 b1 b3 + H- Ha2 b3 L Hb2 g1 L + Ha2 b2 L Hb3 g1 LL a1 b1 g1 + H- Ha2 g1 L Hb1 g1 L + Ha2 b1 L Hg1 g1 LL a1 b2 b3 + HHa2 b3 L Hb1 g1 L - Ha2 b1 L Hb3 g1 LL a1 b2 g1 + H- Ha2 b2 L Hb1 g1 L + Ha2 b1 L Hb2 g1 LL a1 b3 g1 + HHa1 g1 L Hb3 g1 L - Ha1 b3 L Hg1 g1 LL a2 b1 b2 + H- Ha1 g1 L Hb2 g1 L + Ha1 b2 L Hg1 g1 LL a2 b1 b3 + HHa1 b3 L Hb2 g1 L - Ha1 b2 L Hb3 g1 LL a2 b1 g1 + HHa1 g1 L Hb1 g1 L - Ha1 b1 L Hg1 g1 LL a2 b2 b3 + H- Ha1 b3 L Hb1 g1 L + Ha1 b1 L Hb3 g1 LL a2 b2 g1 + HHa1 b2 L Hb1 g1 L - Ha1 b1 L Hb2 g1 LL a2 b3 g1 + H- Ha1 g1 L Ha2 b3 L + Ha1 b3 L Ha2 g1 LL b1 b2 g1 + HHa1 g1 L Ha2 b2 L - Ha1 b2 L Ha2 g1 LL b1 b3 g1 + H- Ha1 g1 L Ha2 b1 L + Ha1 b1 L Ha2 g1 LL b2 b3 g1 AB4 = $@AB3 OrthogonalSimplify@88a1 a2 , g1 <, 8b1 b2 b3 , g1 <<DD Ha2 b3 L Hg1 g1 L a1 b1 b2 - Ha2 b2 L Hg1 g1 L a1 b1 b3 + Ha2 b1 L Hg1 g1 L a1 b2 b3 - Ha1 b3 L Hg1 g1 L a2 b1 b2 + Ha1 b2 L Hg1 g1 L a2 b1 b3 - Ha1 b1 L Hg1 g1 L a2 b2 b3 AB5 = $BToScalarProductsBa1 a2 b1 b2 b3 FF
1
- Ha2 b3 L a1 b1 b2 + Ha2 b2 L a1 b1 b3 - Ha2 b1 L a1 b2 b3 + Ha1 b3 L a2 b1 b2 - Ha1 b2 L a2 b1 b3 + Ha1 b1 L a2 b2 b3 AB4x = AB4 . Hg1 g1 L 1 Ha2 b3 L a1 b1 b2 - Ha2 b2 L a1 b1 b3 + Ha2 b1 L a1 b2 b3 Ha1 b3 L a2 b1 b2 + Ha1 b2 L a2 b1 b3 - Ha1 b1 L a2 b2 b3
Orthogonal elements 2
Taking the exterior product of an element with a generalized product is equivalent to taking the exterior product only with the elements which are not orthogonal to it. Consider an element made up of two factors, B and G, and a third element A totally orthogonal to B. Then the generalized product of A with G B is equal to the exterior product of B with the generalized product of A and G. That is, there is a sort of quasi-associativity between the factors.
$$\alpha_m \circ_l (\gamma_p \wedge \beta_k) = (\alpha_m \circ_l \gamma_p) \wedge \beta_k, \qquad a_i \cdot b_j = 0, \quad l \leq \min(m, p) \qquad (10.35)$$

$$\alpha_m \circ_l (\gamma_p \wedge \beta_k) = 0, \qquad a_i \cdot b_j = 0, \quad l > \min(m, p) \qquad (10.36)$$
For orthogonal elements, the generalized products cease being zero when the order of the product equals or exceeds the grade of the common factor. We assume in the following that
$$\delta_i \cdot \varepsilon_j = 0$$

$$(\alpha_m \wedge \delta_r) \circ_l (\beta_k \wedge \varepsilon_p) \neq 0 \text{ (in general)}, \qquad l < m + k$$

$$(\alpha_m \wedge \delta_r) \circ_l (\beta_k \wedge \varepsilon_p) = \cdots, \qquad l = m + k$$

$$(\alpha_m \wedge \delta_r) \circ_l (\beta_k \wedge \varepsilon_p) = 0, \qquad l > m + k$$
A; !12 ; DeclareExtraVectorSymbols@ 8Subscript@a, _D, Subscript@b, _D, Subscript@g, _D, Subscript@d, _D, Subscript@e, _D<D 8p, q, r, s, t, u, v, w, x, y, z, a_ , b_ , g_ , d_ , e_ <
Assuming $\delta_i \cdot \varepsilon_j = 0$:

$$(\alpha_m \wedge \delta_r) \circ_l \varepsilon_p \neq 0 \text{ (in general)}, \qquad l < m$$

$$(\alpha_m \wedge \delta_r) \circ_l \varepsilon_p = (-1)^{r(m-l)}\, \delta_r \wedge (\alpha_m \circ_l \varepsilon_p), \qquad l \leq \min(m, p)$$

$$(\alpha_m \wedge \delta_r) \circ_l \varepsilon_p = 0, \qquad l > m$$
$$\alpha_m \circ_l (\gamma_p \wedge \beta_k) = (\alpha_m \circ_l \gamma_p) \wedge \beta_k, \qquad a_i \cdot b_j = 0, \quad l \leq \min(m, p)$$
H 1 1 L H a1 a2 d1 e1 e2 e3 L VizualizeGeneralizedProductBA BF
1
OrthogonalSimplify@88d1 d2 d3 , e1 e2 e3 e4 <<D a1 e1 a1 e2 a1 e3 a2 e1 a2 e2 a2 e3 0 0 0 a2 d1 e2 e3 a2 d1 - He1 e3 L a2 d1 e1 e2 - Ha1 d1 L e2 e3 - Ha1 d1 L - He1 e3 L - Ha1 d1 L e1 e2 a1 a2 e2 e3 a1 a2 - He1 e3 L a1 a2 e1 e2 ToScalarProductsBHa1 a2 L He1 e2 e3 LF
1
AB2a = $BToInnerProductsBA BF
2
OrthogonalSimplify@88d1 d2 d3 , e1 e2 e3 e4 <<DF Ha1 a2 e3 e4 L d1 d2 d3 e1 e2 Ha1 a2 e2 e4 L d1 d2 d3 e1 e3 + Ha1 a2 e2 e3 L d1 d2 d3 e1 e4 + Ha1 a2 e1 e4 L d1 d2 d3 e2 e3 Ha1 a2 e1 e3 L d1 d2 d3 e2 e4 + Ha1 a2 e1 e2 L d1 d2 d3 e3 e4 AB2b = %BToInnerProductsBJHa1 a2 L He1 e2 e3 e4 LNF d1 d2 d3 F
2
Ha1 a2 e3 e4 L d1 d2 d3 e1 e2 Ha1 a2 e2 e4 L d1 d2 d3 e1 e3 + Ha1 a2 e2 e3 L d1 d2 d3 e1 e4 + Ha1 a2 e1 e4 L d1 d2 d3 e2 e3 Ha1 a2 e1 e3 L d1 d2 d3 e2 e4 + Ha1 a2 e1 e2 L d1 d2 d3 e3 e4 AB2a AB2b True VizualizeGeneralizedProductBA BF
3
OrthogonalSimplify@88d1 d2 d3 , e1 e2 e3 e4 <<D
Inner::incom : Length 0 of dimension 1 in 8< is incommensurate with length 1 of dimension 1 in 88e1 e2 e3 <<. More Inner::incom : Length 0 of dimension 1 in 8< is incommensurate with length 1 of dimension 1 in 881<<. More
Inner@CircleMinus, 8<, 88e1 e2 e3 <<, PlusD Inner@Wedge, 8<, 881<<, PlusD VizualizeGeneralizedProductBA BF
4
1 b 1 bj bj 1 bj bj
j=1 j=1
When m is greater than zero, the first A form is clearly zero because the grade of the left factor of the interior product (1) is less than that of the right factor. Hence the B form is zero also. Thus we have the immediate result that
$$\sum_{j=1}^{\binom{k}{m}} \bar\beta^j_{k-m} \cdot \beta^j_m = 0 \qquad (10.37)$$
where $\beta_k = \beta^1_m \wedge \bar\beta^1_{k-m} = \beta^2_m \wedge \bar\beta^2_{k-m} = \cdots$, so that

$$\bar\beta^1_{k-m} \cdot \beta^1_m + \bar\beta^2_{k-m} \cdot \beta^2_m + \bar\beta^3_{k-m} \cdot \beta^3_m + \cdots = 0$$
If m is less than or equal to k-m, this sum is zero as shown by 10.16. However, the sum created by interchanging the order of the factors in the interior products is also zero, since each term in the sum now becomes zero by virtue of the second factor being of higher grade than the first.
$$\beta^1_m \cdot \bar\beta^1_{k-m} + \beta^2_m \cdot \bar\beta^2_{k-m} + \beta^3_m \cdot \bar\beta^3_{k-m} + \cdots = 0$$
These two results are collected together and referred to as the Zero Interior Sum Theorem. It is valid for 0 < m < k.
$$\beta_k = \beta^1_m \wedge \bar\beta^1_{k-m} = \beta^2_m \wedge \bar\beta^2_{k-m} = \beta^3_m \wedge \bar\beta^3_{k-m} = \cdots$$

$$\beta^1_m \cdot \bar\beta^1_{k-m} + \beta^2_m \cdot \bar\beta^2_{k-m} + \cdots = 0 \qquad (10.38)$$

$$\bar\beta^1_{k-m} \cdot \beta^1_m + \bar\beta^2_{k-m} \cdot \beta^2_m + \cdots = 0$$
BaseBbF
3
If we take the interior products of the corresponding elements of this list (using Thread), and then add (Plus) together the cases (Cases) in which the grade of the second factor is equal to the chosen value of m (here we choose m equal to 1), we get the terms of the zero interior sum.
Plus @@ Cases[Thread[Base[β3] ∙ Cobase[β3]], a_ ∙ b_ /; RawGrade[b] == 1]
b1∧b2 ∙ b3 + b1∧b3 ∙ (−b2) + b2∧b3 ∙ b1
We then give our function a name and add a condition 0 < m < RawGrade[b] to restrict the formulas to the valid range of m.
ComposeZeroInteriorSum[m_][b_] /; 0 < m < RawGrade[b] := Plus @@ Cases[Thread[Base[b] ∙ Cobase[b]], a_ ∙ b_ /; RawGrade[b] == m];
Examples
For a 2-element, the zero interior sum is obviously zero.
ComposeZeroInteriorSum[1][β2]
b1 ∙ b2 + b2 ∙ (−b1)
For a 3-element, with m equal to 1 (one 1-element factor to the right of the interior product sign), we get
ComposeZeroInteriorSum[1][β3]
b1∧b2 ∙ b3 + b1∧b3 ∙ (−b2) + b2∧b3 ∙ b1
For m equal to 2 (two 1-element factors to the right of the interior product sign, and hence one to the left), we get
ComposeZeroInteriorSum[2][β3]
b1 ∙ b2∧b3 + b2 ∙ (−(b1∧b3)) + b3 ∙ b1∧b2
In this case, each of the interior products is zero in its own right. However, remember that ComposeZeroInteriorSum is a GrassmannAlgebra "composer", and hence by design does not effect any simplifications. For an element of grade 4, we get non-trivial results for m equal to 1 and for m equal to 2.
ComposeZeroInteriorSum[1][β4]
b1∧b2∧b3 ∙ b4 + b1∧b2∧b4 ∙ (−b3) + b1∧b3∧b4 ∙ b2 + b2∧b3∧b4 ∙ (−b1)

ComposeZeroInteriorSum[2][β4]
b1∧b2 ∙ b3∧b4 + b1∧b3 ∙ (−(b2∧b4)) + b1∧b4 ∙ b2∧b3 + b2∧b3 ∙ b1∧b4 + b2∧b4 ∙ (−(b1∧b3)) + b3∧b4 ∙ b1∧b2
Some zero interior sums can be simplified. For example the previous result can be written
Z2 = $[Z1]
2 (b1∧b2 ∙ b3∧b4) − 2 (b1∧b3 ∙ b2∧b4) + 2 (b1∧b4 ∙ b2∧b3)
We can also verify that these expressions are indeed zero by converting them to their scalar product form. For example:
$@ToScalarProducts@Z2 DD 0
valid for 0 < m < k, and l ≠ 0. The Zero Interior Sum Theorem is the special case of the Zero Generalized Sum Theorem in which l = Min[m, k − m].
$$\beta_k = \beta^1_m \wedge \bar\beta^1_{k-m} = \beta^2_m \wedge \bar\beta^2_{k-m} = \beta^3_m \wedge \bar\beta^3_{k-m} = \cdots$$

$$\beta^1_m \circ_l \bar\beta^1_{k-m} + \beta^2_m \circ_l \bar\beta^2_{k-m} + \cdots = 0, \qquad l \neq 0 \qquad (10.39)$$

$$\bar\beta^1_{k-m} \circ_l \beta^1_m + \bar\beta^2_{k-m} \circ_l \beta^2_m + \cdots = 0, \qquad l \neq 0$$
To compose a zero generalized sum, we simply modify the code for the zero interior sum to replace the interior product by the generalized product, and add another argument and condition for l.
ComposeZeroGeneralizedSum[m_, l_][b_] /; 0 < m < RawGrade[b] && l > 0 := Plus @@ Cases[Thread[Base[b] ∘l Cobase[b]], a_ ∘l b_ /; RawGrade[b] == m];
Examples
Here is the only valid non-trivial zero generalized sum for a 4-element that does not reduce to a zero interior sum.
ComposeZeroGeneralizedSum[2, 1][β4]
b1∧b2 ∘₁ b3∧b4 + b1∧b3 ∘₁ (−(b2∧b4)) + b1∧b4 ∘₁ b2∧b3 + b2∧b3 ∘₁ b1∧b4 + b2∧b4 ∘₁ (−(b1∧b3)) + b3∧b4 ∘₁ b1∧b2
Here we tabulate the first 50 cases to confirm that they reduce to zero.
β5 = b1∧b2∧b3∧b4∧b5
The generalized product is zero when l is greater than the minimum of m and 5 − m. It is also quasi-commutative, only changing by a (possible) sign when its factors are interchanged:
$$\beta^i_m \circ_l \bar\beta^i_{k-m} = (-1)^{(m-l)(k-m-l)}\, \bar\beta^i_{k-m} \circ_l \beta^i_m$$
$$F_2 = \beta^1_2 \circ_1 \bar\beta^1_3 + \beta^2_2 \circ_1 \bar\beta^2_3 + \beta^3_2 \circ_1 \bar\beta^3_3 + \cdots = 0$$

$$F_3 = \beta^1_2 \circ_2 \bar\beta^1_3 + \beta^2_2 \circ_2 \bar\beta^2_3 + \beta^3_2 \circ_2 \bar\beta^3_3 + \cdots = 0$$
Since the terms in F1 and F3 are already interior products, they are zero by the Zero Interior Sum Theorem. It remains to show that F2 is zero. To display the terms of F2 for $\beta_5$, we use the ComposeZeroGeneralizedSum function defined in the previous section.
Z1 = ComposeZeroGeneralizedSum[2, 1][β5]
b1∧b2∧b3 ∘₁ b4∧b5 + b1∧b2∧b4 ∘₁ (−(b3∧b5)) + b1∧b2∧b5 ∘₁ b3∧b4 + b1∧b3∧b4 ∘₁ b2∧b5 + b1∧b3∧b5 ∘₁ (−(b2∧b4)) + b1∧b4∧b5 ∘₁ b2∧b3 + b2∧b3∧b4 ∘₁ (−(b1∧b5)) + b2∧b3∧b5 ∘₁ b1∧b4 + b2∧b4∧b5 ∘₁ (−(b1∧b3)) + b3∧b4∧b5 ∘₁ b1∧b2
Before we proceed with the proof for this specific case, we verify first that Z1 is indeed zero.
%@ToScalarProducts@Z1 DD 0
2. Collect together the terms having the same exterior product factor
A simple way to do this is to replace the exterior product with an arbitrary scalar multiple. In this form it becomes clear that each sum of the terms associated with each scalar multiple is a zero interior sum, and hence zero, making the complete expression zero, and thus proving the theorem for this case.
Z3 = Z2 . bi_ B_ bi B Hb2 b3 b4 b5 L b1 - Hb2 b3 b5 b4 L b1 + Hb2 b4 b5 b3 L b1 - Hb3 b4 b5 b2 L b1 - Hb1 b3 b4 b5 L b2 Hb1 b3 b5 b4 L b2 - Hb1 b4 b5 b3 L b2 + Hb3 b4 b5 b1 L b2 Hb1 b2 b4 b5 L b3 - Hb1 b2 b5 b4 L b3 + Hb1 b4 b5 b2 L b3 Hb2 b4 b5 b1 L b3 - Hb1 b2 b3 b5 L b4 + Hb1 b2 b5 b3 L b4 Hb1 b3 b5 b2 L b4 + Hb2 b3 b5 b1 L b4 + Hb1 b2 b3 b4 L b5 Hb1 b2 b4 b3 L b5 + Hb1 b3 b4 b2 L b5 - Hb2 b3 b4 b1 L b5 + + -
Clearly the two sides of the relation are not in general equal. Hence the generalized product is not in general associative.
form $\left(x_m \circ_l y_k\right) \circ_\mu z_p$ or $x_m \circ_l \left(y_k \circ_\mu z_p\right)$. Since we have shown above that these forms will in general be different, we will refer to the first one as the A form, and to the second as the B form.
We define a triple generalized sum of form A to be the sum of all the signed triple generalized products of form A of the same elements and which have the same grade n:

$$\sum_{l=0}^{n} (-1)^{s_A} \left(x_m \circ_l y_k\right) \circ_{n-l} z_p$$

where $s_A = m\,l + \tfrac{1}{2}\,l(l+1) + (m+k)(n-l) + \tfrac{1}{2}(n-l)(n-l+1)$.
We define a triple generalized sum of form B to be the sum of all the signed triple generalized products of form B of the same elements and which have the same grade n.
$$\sum_{l=0}^{n} (-1)^{s_B}\, x_m \circ_l \left(y_k \circ_{n-l} z_p\right)$$

where $s_B = m\,l + \tfrac{1}{2}\,l(l+1) + k(n-l) + \tfrac{1}{2}(n-l)(n-l+1)$.
For example, we tabulate below the triple generalized sums of form A for grades of 0, 1 and 2:
Table[{i, TripleGeneralizedSumA[i][x, y, z]}, {i, 0, 2}] // TableForm

0   (x ∘₀ y) ∘₀ z
1   (−1)^(1+m) (x ∘₁ y) ∘₀ z + (−1)^(1+k+m) (x ∘₀ y) ∘₁ z
2   −(x ∘₂ y) ∘₀ z + (−1)^k (x ∘₁ y) ∘₁ z − (x ∘₀ y) ∘₂ z
Similarly the triple generalized sums of form B for grades of 0, 1 and 2 are
Table[{i, TripleGeneralizedSumB[i][x, y, z]}, {i, 0, 2}] // TableForm

0   x ∘₀ (y ∘₀ z)
1   (−1)^(1+k) x ∘₀ (y ∘₁ z) + (−1)^(1+m) x ∘₁ (y ∘₀ z)
2   −x ∘₀ (y ∘₂ z) + (−1)^(2+k+m) x ∘₁ (y ∘₁ z) − x ∘₂ (y ∘₀ z)
We conjecture that the two forms are equal:

$$\sum_{l=0}^{n} (-1)^{s_A} \left(x_m \circ_l y_k\right) \circ_{n-l} z_p = \sum_{l=0}^{n} (-1)^{s_B}\, x_m \circ_l \left(y_k \circ_{n-l} z_p\right) \qquad (10.40)$$

where $s_A = m\,l + \tfrac{1}{2}\,l(l+1) + (m+k)(n-l) + \tfrac{1}{2}(n-l)(n-l+1)$ and $s_B = m\,l + \tfrac{1}{2}\,l(l+1) + k(n-l) + \tfrac{1}{2}(n-l)(n-l+1)$.
If this conjecture can be shown to be true, then the associativity of the Clifford product of a general Grassmann expression can be straightforwardly proven using the definition of the Clifford product in terms of exterior and interior products (or, what is equivalent, in terms of generalized products). That is, the rules of Clifford algebra may be entirely determined by the axioms and theorems of the Grassmann algebra, making it unnecessary, and indeed potentially inconsistent, to introduce any special Clifford algebra axioms.
Here x is of grade 3 and y of grade 2:

A = TripleGeneralizedSumA[2][x, y, z]
−(x ∘₂ y) ∘₀ z + (x ∘₁ y) ∘₁ z − (x ∘₀ y) ∘₂ z

B = TripleGeneralizedSumB[2][x, y, z]
−x ∘₀ (y ∘₂ z) − x ∘₁ (y ∘₁ z) − x ∘₂ (y ∘₀ z)
We could demonstrate the equality of these two sums directly by converting their difference into scalar product form, thus allowing terms to cancel.
Expand@ToScalarProducts@A - BDD 0
However it is more instructive to convert each of the terms in the difference separately to scalar products.
X = List @@ (A − B)
{−(x ∘₀ y) ∘₂ z, −(x ∘₂ y) ∘₀ z, (x ∘₁ y) ∘₁ z, x ∘₁ (y ∘₁ z), x ∘₂ (y ∘₀ z), x ∘₀ (y ∘₂ z)}
X1 = Expand@ToScalarProducts@XDD 80, - Hx2 y2 L Hx3 y1 L z x1 + Hx2 y1 L Hx3 y2 L z x1 + Hx1 y2 L Hx3 y1 L z x2 - Hx1 y1 L Hx3 y2 L z x2 Hx1 y2 L Hx2 y1 L z x3 + Hx1 y1 L Hx2 y2 L z x3 , - Hz y2 L Hx3 y1 L x1 x2 + Hz y1 L Hx3 y2 L x1 x2 + Hz y2 L Hx2 y1 L x1 x3 - Hz y1 L Hx2 y2 L x1 x3 Hz y2 L Hx1 y1 L x2 x3 + Hz y1 L Hx1 y2 L x2 x3 , Hz y2 L Hx3 y1 L x1 x2 - Hz y1 L Hx3 y2 L x1 x2 Hz y2 L Hx2 y1 L x1 x3 + Hz y1 L Hx2 y2 L x1 x3 Hz x3 L Hx2 y2 L x1 y1 + Hz x2 L Hx3 y2 L x1 y1 + Hz x3 L Hx2 y1 L x1 y2 - Hz x2 L Hx3 y1 L x1 y2 + Hz y2 L Hx1 y1 L x2 x3 - Hz y1 L Hx1 y2 L x2 x3 + Hz x3 L Hx1 y2 L x2 y1 - Hz x1 L Hx3 y2 L x2 y1 Hz x3 L Hx1 y1 L x2 y2 + Hz x1 L Hx3 y1 L x2 y2 Hz x2 L Hx1 y2 L x3 y1 + Hz x1 L Hx2 y2 L x3 y1 + Hz x2 L Hx1 y1 L x3 y2 - Hz x1 L Hx2 y1 L x3 y2 , Hx2 y2 L Hx3 y1 L z x1 - Hx2 y1 L Hx3 y2 L z x1 Hx1 y2 L Hx3 y1 L z x2 + Hx1 y1 L Hx3 y2 L z x2 + Hx1 y2 L Hx2 y1 L z x3 - Hx1 y1 L Hx2 y2 L z x3 + Hz x3 L Hx2 y2 L x1 y1 - Hz x2 L Hx3 y2 L x1 y1 Hz x3 L Hx2 y1 L x1 y2 + Hz x2 L Hx3 y1 L x1 y2 Hz x3 L Hx1 y2 L x2 y1 + Hz x1 L Hx3 y2 L x2 y1 + Hz x3 L Hx1 y1 L x2 y2 - Hz x1 L Hx3 y1 L x2 y2 + Hz x2 L Hx1 y2 L x3 y1 - Hz x1 L Hx2 y2 L x3 y1 Hz x2 L Hx1 y1 L x3 y2 + Hz x1 L Hx2 y1 L x3 y2 , 0<
TestTripleGeneralizedSumConjecture[n_][x_, y_, z_] := Expand[ToScalarProducts[TripleGeneralizedSumA[n][x, y, z] − TripleGeneralizedSumB[n][x, y, z]]]
As an example of this procedure we run through the first 320 cases. A value of zero will validate the conjecture for that case.
Table[TestTripleGeneralizedSumConjecture[n][x, y, z], {n, 0, 4}, {m, 0, 3}, {k, 0, 3}, {p, 0, 3}] // Flatten
{0, 0, 0, …, 0}   (320 zeros)
A conjecture
As an example we suggest a conjecture for a relationship amongst generalized products of order 1 of the following form, valid for any m, k, and p.
$$\left(\alpha_m \circ_1 \beta_k\right) \circ_1 \gamma_p + (-1)^{x} \left(\beta_k \circ_1 \gamma_p\right) \circ_1 \alpha_m + (-1)^{y} \left(\gamma_p \circ_1 \alpha_m\right) \circ_1 \beta_k = 0 \qquad (10.41)$$
The signs $(-1)^x$ and $(-1)^y$ are at this point unknown, but if the conjecture is true we expect them to be simple functions of m, k, p and their products. We conjecture therefore that in any particular case they will be either +1 or −1.
Y = ToScalarProductsB b g aF;
k 1 p 1 m
X + Y + Z == 0 X + Y - Z == 0 X - Y + Z == 0 X - Y - Z == 0DF
Table@T = TestPostulate@8m, k, p<D; Print@88m, k, p<, T<D; T, 8m, 0, 3<, 8k, 0, 3<, 8p, 0, 3<D Flatten 8True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True<
It can be seen that the formula is valid for some values of x and y in the first 64 cases. This gives us some confidence that our efforts may not be wasted if we wished to prove the formula and/or find the values of x and y in terms of m, k, and p.
elements with an intersection $\gamma_p$. Generalized products of such intersecting elements are zero whenever the order l of the product is less than the grade of the intersection.
$$(\gamma_p \wedge \alpha_m) \circ_l (\gamma_p \wedge \beta_k) = 0, \qquad l < p \qquad (10.42)$$
We can test this by reducing the expression to scalar products and tabulating cases.
FlattenBTableBA = ToScalarProductsB g a g b F;
p m l p k
Print@88m, k, p, l<, A<D; A, 8m, 0, 3<, 8k, 0, 3<, 8p, 1, 3<, 8l, 0, p - 1<FF 80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<
Of course, this result is also valid in the case that either or both of the grades of $\alpha_m$ and $\beta_k$ are zero.

$$\gamma_p \circ_l (\gamma_p \wedge \beta_k) = (\gamma_p \wedge \alpha_m) \circ_l \gamma_p = \gamma_p \circ_l \gamma_p = 0, \qquad l < p \qquad (10.43)$$
The case l ≥ p

For the case where l is equal to or greater than the grade of the intersection p, the generalized product of such intersecting elements may be expressed as the interior product of the common factor $\gamma_p$ with a generalized product of the remaining factors. This generalized product is of order lower by the grade of the common factor; hence the formula is only valid for l ≥ p (since the generalized product has only been defined for non-negative orders).
$$(\gamma_p \wedge \alpha_m) \circ_l (\gamma_p \wedge \beta_k) = (-1)^{p(l-p)} \left(\left(\gamma_p \wedge \alpha_m\right) \circ_{l-p} \beta_k\right) \cdot \gamma_p \qquad (10.44)$$

$$(\gamma_p \wedge \alpha_m) \circ_l (\gamma_p \wedge \beta_k) = (-1)^{p\,m} \left(\alpha_m \circ_{l-p} \left(\gamma_p \wedge \beta_k\right)\right) \cdot \gamma_p$$
We can test these formulae by reducing the expressions on either side to scalar products and tabulating cases. For λ > p, the first formula gives:
Flatten[Table[A = ToScalarProducts[(γ_p∧α_m) ⊙_λ (γ_p∧β_k) - (-1)^(p(λ-p)) ((γ_p∧α_m) ⊙_(λ-p) β_k) ∘ γ_p];
  Print[{{m, k, p, λ}, A}]; A,
  {m, 0, 3}, {k, 0, 3}, {p, 0, 3}, {λ, p + 1, Min[m, k]}]]
{0, 0, 0, …, 0}   (20 cases, all zero)
The case of λ = p is tabulated in the next section. The second formula can be derived from the first by using the quasi-commutativity of the generalized product. To check, we tabulate the first few cases:
Flatten[Table[ToScalarProducts[(γ_p∧α_m) ⊙_λ (γ_p∧β_k) - (-1)^(p m) (α_m ⊙_(λ-p) (γ_p∧β_k)) ∘ γ_p],
  {m, 1, 2}, {k, 1, m}, {p, 0, m}, {λ, p + 1, Min[p + m, p + k]}]]
{0, 0, 0, …, 0}   (11 cases, all zero)
(α_m ⊙_λ (γ_p∧β_k)) ∘ γ_p = (-1)^(p m) ((α_m∧γ_p) ⊙_λ β_k) ∘ γ_p   (10.45)

Flatten[Table[ToScalarProducts[(α_m ⊙_λ (γ_p∧β_k)) ∘ γ_p - (-1)^(p m) ((α_m∧γ_p) ⊙_λ β_k) ∘ γ_p],
  {m, 0, 2}, {k, 0, m}, {p, 0, m}, {λ, 0, Min[m, k]}]]
{0, 0, 0, …, 0}   (25 cases, all zero)
(γ_p∧α_m) ⊙_p (γ_p∧β_k) = (γ_p∧α_m∧β_k) ∘ γ_p

(α_m∧γ_p) ⊙_p (γ_p∧β_k) = (α_m∧γ_p∧β_k) ∘ γ_p   (10.46)

(α_m∧γ_p) ⊙_p (β_k∧γ_p) = (α_m∧β_k∧γ_p) ∘ γ_p

Flatten[Table[A = ToScalarProducts[(γ_p∧α_m) ⊙_p (γ_p∧β_k) - (γ_p∧α_m∧β_k) ∘ γ_p];
  Print[{{m, k, p}, A}]; A, {m, 0, 3}, {k, 0, 3}, {p, 0, 3}]]
{0, 0, 0, …, 0}   (64 cases, all zero)
Consider next two simple elements α_m and β_k which are totally orthogonal, that is, every 1-element factor α_i of α_m is orthogonal to every 1-element factor β_j of β_k. All generalized products of positive order of such elements are zero:

α_m ⊙_λ β_k = 0,   α_i ∘ β_j = 0,   λ > 0   (10.47)
Flatten[Table[ToScalarProducts[α_m ⊙_λ β_k] /.
    OrthogonalSimplificationRules[{{α_m, β_k}}], …]]

The GrassmannAlgebra function OrthogonalSimplificationRules generates a list of rules which put to zero all the scalar products of a 1-element from α_m with a 1-element from β_k.
But since α_i ∘ β_j = 0, the interior products of α_m with any factors of γ_p∧β_k in the expansion of the generalized product will be zero whenever they contain any factors of β_k. That is, the only non-zero terms in the expansion of the generalized product are of the form (α_m ∘ γ^i_λ)∧(γ^i_(p-λ)∧β_k). Hence:

α_m ⊙_λ (γ_p∧β_k) = (α_m ⊙_λ γ_p)∧β_k,   α_i ∘ β_j = 0,   λ ≤ Min[m, p]   (10.48)

α_m ⊙_λ (γ_p∧β_k) = 0,   α_i ∘ β_j = 0,   λ > Min[m, p]   (10.49)
Flatten[Table[ToScalarProducts[α_m ⊙_λ (γ_p∧β_k) - (α_m ⊙_λ γ_p)∧β_k] /.
    OrthogonalSimplificationRules[{{α_m, β_k}}],
  {m, 0, 3}, {k, 0, m}, {p, 0, m}, {λ, 0, Min[m, p]}]]
{0, 0, 0, …, 0}   (65 cases, all zero)
By using the quasi-commutativity of the generalized and exterior products, we can transform the above results to:

(α_m∧γ_p) ⊙_λ β_k = (-1)^(m λ) α_m∧(γ_p ⊙_λ β_k),   α_i ∘ β_j = 0,   λ ≤ Min[k, p]   (10.50)

(γ_p∧α_m) ⊙_λ β_k = 0,   α_i ∘ β_j = 0,   λ > Min[k, p]   (10.51)

Flatten[Table[ToScalarProducts[(α_m∧γ_p) ⊙_λ β_k - (-1)^(m λ) α_m∧(γ_p ⊙_λ β_k)] /.
    OrthogonalSimplificationRules[{{α_m, β_k}}],
  {m, 0, 2}, {k, 0, m}, {p, 0, m}, {λ, 0, Min[k, p]}]]
{0, 0, 0, …, 0}   (20 cases, all zero)
Consider again two elements γ_p∧α_m and γ_p∧β_k with an intersection γ_p. As has been shown in Section 10.12, generalized products of such intersecting elements are zero whenever the grade λ of the product is less than the grade of the intersection. Hence the product will be zero for λ < p, independent of the orthogonality relationships of its factors.
The case λ ≥ p

Consider now the case of three simple elements α_m, β_k and γ_p where γ_p is totally orthogonal to both α_m and β_k (and hence to α_m∧β_k). A simple element γ_p is totally orthogonal to an element α_m if and only if the interior product of α_m with every 1-element factor γ_i of γ_p is zero.

The generalized product of the elements γ_p∧α_m and γ_p∧β_k of order λ ≥ p is a scalar factor γ_p∘γ_p times the generalized product of the factors α_m and β_k, but of an order lower by the grade of the common factor.

(γ_p∧α_m) ⊙_λ (γ_p∧β_k) = (γ_p∘γ_p) (α_m ⊙_(λ-p) β_k),   λ ≥ p   (10.52)

where γ_p = γ_1∧γ_2∧⋯∧γ_p and α_m ∘ γ_i = β_k ∘ γ_i = 0.

Note here that the factors of γ_p are not necessarily orthogonal to each other. Neither are the factors of α_m required to be orthogonal to those of β_k.

We can test this out by making a table of cases. Here we look at the first 25 cases.

Flatten[Table[ToScalarProducts[(γ_p∧α_m) ⊙_λ (γ_p∧β_k) - (γ_p∘γ_p) (α_m ⊙_(λ-p) β_k)] /.
    OrthogonalSimplificationRules[…],
  {m, 0, 2}, {k, 0, m}, {p, 1, m + 1}, {λ, p, p + Min[m, k]}]]
{0, 0, 0, …, 0}   (25 cases, all zero)
By combining the results of the previous section with those of this section, we see that we can also write:

(γ_p∘γ_p) (α_m ⊙_λ β_k) = (-1)^(p λ) ((γ_p∧α_m) ⊙_λ β_k) ∘ γ_p = (-1)^(p m) (α_m ⊙_λ (γ_p∧β_k)) ∘ γ_p   (10.53)

where γ_p = γ_1∧γ_2∧⋯∧γ_p and α_m ∘ γ_i = β_k ∘ γ_i = 0.

Flatten[Table[ToScalarProducts[(α_m ⊙_λ (γ_p∧β_k)) ∘ γ_p - (-1)^(p m) (γ_p∘γ_p) (α_m ⊙_λ β_k)] /.
    OrthogonalSimplificationRules[…],
  {m, 0, 2}, {k, 0, m}, {p, 1, m + 1}, {λ, 0, Min[m, k]}]]
{0, 0, 0, …, 0}   (25 cases, all zero)
In such a space the 2-elements α_2 and β_2 must be congruent (differing only, possibly, by a scalar factor), and hence by formula 10.17 their product is zero. If one of the factors is a scalar, then the only non-zero generalized product is that of order zero, equivalent to the ordinary field product (and, incidentally, also to the exterior and interior products).

0-space

In a space of zero dimensions (the underlying field of the Grassmann algebra), the generalized product reduces to the exterior product and hence to the usual scalar (field) product.
ToScalarProducts[a ⊙_0 b]
a b

Higher order products (for example a ⊙_1 b) are of course zero, since the order of the product is greater than the minimum of the grades of the factors (which in this case is zero). In sum: the only non-zero generalized product in a space of zero dimensions is the product of order zero, equivalent to the underlying field product.
1-space

Products of zero order

In a space of one dimension there is only one basis element, so the product of zero order (that is, the exterior product) of any two 1-elements is zero:

(a e1) ⊙_0 (b e1) = a b (e1∧e1) = 0

ToScalarProducts[(a e1) ⊙_0 (b e1)]
0

Products of the first order reduce to the scalar product:

ToScalarProducts[(a e1) ⊙_1 (b e1)]
a b (e1∘e1)

In sum: the only non-zero generalized product of elements (other than where one is a scalar) in a space of one dimension is the product of order one, equivalent to the scalar product.
2-space

Products of zero order

The product of zero order of two elements reduces to their exterior product. Hence in a 2-space, the only non-zero product of zero order (apart from cases where one factor is a scalar) is the exterior product of two 1-elements.

(a e1 + b e2) ⊙_0 (c e1 + d e2) = (a e1 + b e2) ∧ (c e1 + d e2)

Products of the first order

The product of the first order of two 1-elements reduces to their scalar product:

ToScalarProducts[(a e1 + b e2) ⊙_1 (c e1 + d e2)]
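As a cross-check outside the package, the zero-order (exterior) product of two 1-elements in 2-space can be computed by hand: since e1∧e1 = e2∧e2 = 0 and e2∧e1 = -e1∧e2, only the e1∧e2 coefficient survives. A minimal plain-Python sketch (the function name is ours, not the GrassmannAlgebra package's):

```python
# Exterior product of two 1-elements of a 2-space, each represented as a
# coefficient pair (a, b) standing for a*e1 + b*e2. Using e1^e1 = e2^e2 = 0
# and e2^e1 = -e1^e2, the product collapses to a single e1^e2 coefficient.
def exterior_1_1(u, v):
    (a, b), (c, d) = u, v
    return a * d - b * c  # coefficient of e1^e2
```

The product is antisymmetric and vanishes exactly when the two factors are congruent (parallel), in agreement with the remarks above.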
The product of the first order of a 1-element and a 2-element is also commutative and reduces to their interior product.

(a e1 + b e2) ⊙_1 (c e1∧e2) = (c e1∧e2) ⊙_1 (a e1 + b e2)

ToScalarProducts[(a e1 + b e2) ⊙_1 (c e1∧e2)]
The product of the first order of two 2-elements is zero. This can be determined directly from formula 10.17:

α_m ⊙_λ α_m = 0,   λ ≠ m   (10.17)

ToScalarProducts[(a e1∧e2) ⊙_1 (b e1∧e2)]
0
Products of the second order of two 2-elements reduce to their inner product:

(a e1∧e2) ⊙_2 (b e1∧e2) = (a e1∧e2) ∘ (b e1∧e2)

ToInnerProducts[(a e1∧e2) ⊙_2 (b e1∧e2)]
(a e1∧e2) ∘ (b e1∧e2)
In sum: The only non-zero generalized products of elements (other than where one is a scalar) in a space of two dimensions reduce to exterior, interior, inner or scalar products.
10.17 Summary
To be completed
11.1 Introduction
In this chapter we explore the relationship of Grassmann algebra to the hypercomplex algebras. Typical examples of hypercomplex algebras are the algebras of the real numbers ℝ, complex numbers ℂ, quaternions ℍ, octonions (or Cayley numbers) 𝕆, sedenions 𝕊, and the Clifford algebras. We will show that each of these algebras can be generated as an algebra of Grassmann numbers under a new product which we take the liberty of calling the hypercomplex product. This hypercomplex product of an m-element α_m and a k-element β_k is denoted α_m ⊛ β_k and defined as a sum of generalized Grassmann products:

α_m ⊛ β_k = Σ_(λ=0)^(Min[m,k]) σ_(m,λ,k) (α_m ⊙_λ β_k)   (11.1)
Here, the sign σ_(m,λ,k) is a function of the grades m and k of the factors and the order λ of the generalized product, and takes the values +1 or -1 depending on the type of hypercomplex product being defined. Note particularly that this approach does not require the algebra to be defined in terms of generators or basis elements other than those of the Grassmann algebra. We will see in what follows that this leads to considerably more insight into the nature of the hypercomplex algebras than is afforded by the traditional algebraic approach. The generalized Grassmann products on the right-hand side subsume and extend the principal dimension-independent products of geometric linear algebra: the exterior and interior products. It is enticing to postulate that many algebras with a geometric interpretation may be defined by sums of generalized products.

Our approach, then, is not to define a hypercomplex algebra by defining new (hypercomplex) elements, but rather to define a new product called the hypercomplex product which acts on ordinary Grassmann numbers. This approach has the conceptual advantage that the hypercomplex algebra can now sit comfortably inside the Grassmann algebra and be used in geometric constructions just like the exterior and interior products. Strictly speaking, this means that there are no hypercomplex numbers per se, only Grassmann numbers acting under the hypercomplex product.

The hypercomplex product defined above, in which the hypercomplex signs σ_(m,λ,k) are not yet specified, may be viewed as a definition of the most unconstrained hypercomplex algebra. Different hypercomplex algebras may be defined by constraining these signs to have given values. The traditional hypercomplex algebras will have one set of constraints, while the Clifford algebras will have another. And just as with the Grassmann algebras, they will have significantly different properties depending on the dimension of their underlying space.

We will also see that one of the major sources of difference is due to the fact that spaces of dimension higher than three can have non-simple elements. For example, this is the reason why the algebra of octonions, since it lives in a Grassmann algebra with an underlying space of three dimensions, is the largest hypercomplex algebra with a scalar norm.

There is some care needed with terminology. In our approach to hypercomplex algebras, there are no hypercomplex numbers per se, but only different types of hypercomplex product (that is, with different sets of constraints on the hypercomplex signs), and spaces of different dimension. These two factors together give the traditional hypercomplex algebras their individual flavours. So, what then is a quaternion? From our approach, it is not an object, but rather a Grassmann number living in a two-dimensional subspace (the Grassmann algebra on an underlying linear space of two dimensions has 4 basis elements) responding to a specific type of hypercomplex product. With this meaning, and in order to concord with traditional usage, we shall continue to speak of hypercomplex numbers, such as quaternions, as if they were objects in their own right. "Hypercomplex number" should be read as "Grassmann number under the hypercomplex product". "Quaternion" should be read as "hypercomplex number in a Grassmann algebra of two dimensions". More carefully, by "Grassmann algebra of two dimensions" we mean a Grassmann algebra constructed on a two-dimensional subspace of the underlying linear space.

It turns out that the real numbers are hypercomplex numbers in a subspace of zero dimensions, the complex numbers are hypercomplex numbers in a subspace of one dimension, the quaternions are hypercomplex numbers in a subspace of two dimensions, the octonions are hypercomplex numbers in a subspace of three dimensions, and the sedenions are hypercomplex numbers in a subspace of four dimensions. The number of base elements in the hypercomplex algebra is equal to the number of basis elements in the corresponding Grassmann algebra. We can see from this approach that different hypercomplex numbers can now live and work in the same space. For example, complex numbers can operate on vectors, or quaternions on octonions.

In Chapter 12: Exploring Clifford Algebra we will also show that the Clifford product may be defined in a space of any number of dimensions as a hypercomplex product with signs:
σ_(m,λ,k) = (-1)^(λ(m-λ) + λ(λ-1)/2)
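This sign function is easy to tabulate numerically. A small plain-Python sketch (the function name is ours; note that λ(λ−1)/2 is always an integer, and that, as written above, the sign depends only on λ and the grade m of the first factor):

```python
def clifford_sign(m, l, k):
    # sigma_{m,l,k} = (-1)^(l*(m-l) + l*(l-1)/2) for the Clifford product;
    # k is accepted for uniformity but does not enter the formula as given.
    return (-1) ** (l * (m - l) + l * (l - 1) // 2)
```

For instance the order-zero signs are all +1, while the top-order sign of two 2-elements is -1.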
We could also explore the algebras generated by products of the type defined by definition 11.1, but which have some of the σ_(m,λ,k) zero. For example, the choice

σ_(m,λ,k) = 1 for λ = 0,   σ_(m,λ,k) = 0 for λ > 0

defines the algebra with the hypercomplex product reducing to the exterior product, and

σ_(m,λ,k) = 1 for λ = Min[m, k],   σ_(m,λ,k) = 0 for λ ≠ Min[m, k]

defines the algebra with the hypercomplex product reducing to the interior product. Both of these definitions however lead to products having zero divisors; that is, some products can be zero even though neither of their factors is zero. Because one of the principally useful characteristics of the scalar-normed hypercomplex numbers (that is, hypercomplex numbers up to and including the octonions) is that they have no zero divisors, we shall limit ourselves in this chapter to exploring just those algebras. That is, we shall always assume that σ_(m,λ,k) is not zero.
We begin by exploring the constraints we would require on the hypercomplex signs σ_(m,λ,k) in definition 11.1 which will lead to algebras with a scalar norm. It turns out that we can ensure a scalar norm (up to the octonions) by only constraining a subset of the signs. The specification of the rest of the signs will then be explored by considering spaces of increasing dimension, starting with a space of zero dimensions generating the real numbers. In order to embed the algebras defined on lower dimensional spaces within those defined on higher dimensional spaces we will maintain the lower dimensional relations we determine for the hypercomplex signs when we explore the higher dimensional spaces. Finally, we note that this approach to hypercomplex numbers is invariant in a tensorial sense. The traditional product tables for the hypercomplex numbers are recovered in our approach by assuming the underlying space has a Euclidean metric. However, we will show that the hypercomplex product definition in 11.1 above is still meaningful under different metrics. Indeed, a simple change of metric can lead to a whole range of new hypercomplex algebras.
Distributivity

(α1 + α2 + ⋯) ⊛ β_k = α1 ⊛ β_k + α2 ⊛ β_k + ⋯
α_m ⊛ (β1 + β2 + ⋯) = α_m ⊛ β1 + α_m ⊛ β2 + ⋯   (11.2)

(α_m1 + α_m2 + ⋯) ⊛ β_k = α_m1 ⊛ β_k + α_m2 ⊛ β_k + ⋯
α_m ⊛ (β_k1 + β_k2 + ⋯) = α_m ⊛ β_k1 + α_m ⊛ β_k2 + ⋯   (11.3)
Factorization of scalars
By using the definition 11.1 of the hypercomplex product and the properties of the generalized Grassmann product we can easily show that scalars may be factorized out of any hypercomplex products:
(a α_m) ⊛ β_k = a (α_m ⊛ β_k) = α_m ⊛ (a β_k)   (11.4)
Multiplication by scalars
The next property we will require of the hypercomplex product is that it behaves as expected when one of the elements is a scalar. That is:
a ⊛ β_m = β_m ⊛ a = a β_m   (11.5)
From the relations 11.5 and the definition 11.1 we can determine the constraints on the σ_(m,λ,k) which will accomplish this.

α_m ⊛ a = Σ_(λ=0)^(Min[m,0]) σ_(m,λ,0) (α_m ⊙_λ a) = σ_(m,0,0) (α_m ⊙_0 a) = σ_(m,0,0) a α_m

a ⊛ α_m = Σ_(λ=0)^(Min[0,m]) σ_(0,λ,m) (a ⊙_λ α_m) = σ_(0,0,m) (a ⊙_0 α_m) = σ_(0,0,m) a α_m

Hence the first constraints we impose on the σ, to ensure correct multiplication by scalars, are that:

σ_(m,0,0) = σ_(0,0,m) = 1   (11.6)
HypercomplexScalarConstraints := {σ_(m_,0,0) → 1, σ_(0,0,k_) → 1}   (11.7)
These rules are the first in a collection which we will eventually use to define the hypercomplex algebras.
In a Grassmann algebra of zero dimensions every element is a scalar, and the only hypercomplex sign involved is σ_(0,0,0), which the scalar constraints set to 1. Hence:

a ⊛ b = b ⊛ a = a b   (11.8)

The hypercomplex product in a Grassmann algebra of zero dimensions is therefore equivalent to the (usual) real field product of the underlying linear space. Hypercomplex numbers in a Grassmann algebra of zero dimensions are therefore (isomorphic to) the real numbers.
The conjugate

The conjugate of a Grassmann number X is denoted Xc and is defined as the body Xb (scalar part) of X minus the soul Xs (non-scalar part) of X.

X = Xb + Xs,   Xc = Xb - Xs   (11.9)

Example

Let X be a Grassmann number in 3 dimensions:

X = a0 + a1 e1 + a2 e2 + a3 e3 + a4 e1∧e2 + a5 e1∧e3 + a6 e2∧e3 + a7 e1∧e2∧e3
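The conjugate is simple to model outside the package: represent a Grassmann number as a map from basis-index tuples to coefficients, and negate every coefficient of non-zero grade. A hedged sketch (the dict representation and function name are ours, not the GrassmannAlgebra package's):

```python
def conjugate(x):
    # X_c = X_b - X_s: keep the scalar part (empty basis tuple)
    # and negate every component of grade >= 1.
    return {basis: (c if basis == () else -c) for basis, c in x.items()}

# e.g. X = 2 + 3 e1 - e1^e2 + 5 e1^e2^e3, with the symbolic a_i taken numeric:
X = {(): 2, (1,): 3, (1, 2): -1, (1, 2, 3): 5}
```

Applying `conjugate` twice returns the original number, as expected from the definition 11.9.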
Xc = a0 - a1 e1 - a2 e2 - a3 e3 - a4 e1∧e2 - a5 e1∧e3 - a6 e2∧e3 - a7 e1∧e2∧e3   (11.10)
The norm of an m-element α_m is then denoted 𝒩[α_m] and defined as the hypercomplex product of α_m with its conjugate:

𝒩[α_m] = α_m ⊛ α_mc = -α_m ⊛ α_m,   m ≠ 0   (11.11)

If α_m is simple, all the generalized products α_m ⊙_λ α_m with λ ≠ m are zero, in which case the generalized product (and hence the hypercomplex product) becomes an inner product. Thus for m not zero we have:

α_m ⊛ α_m = σ_(m,m,m) (α_m ∘ α_m),   m ≠ 0

Equation 11.11 then implies that for the norm to be a positive scalar quantity we must have σ_(m,m,m) = -1.

𝒩[α_m] = -α_m ⊛ α_m = α_m ∘ α_m,   α_m simple   (11.12)

σ_(m,m,m) = -1,   m ≠ 0   (11.13)

The norm of a scalar a is just a² = a·ac, so formula 11.12 applies also for m equal to zero.
HypercomplexSquareConstraints := {σ_(m_,m_,m_) /; m ≠ 0 → -1}   (11.14)
These rules are the second in a collection which we will eventually use to define the hypercomplex algebras. We remark that, although the constraints 11.14 have been determined from a consideration of the hypercomplex product of a simple element with itself, they must also now apply to the interior product term of any hypercomplex product of two elements of the same grade. From formula 11.1, the hypercomplex product of two elements of the same grade can be written:

α_m ⊛ β_m = Σ_(λ=0)^(m) σ_(m,λ,m) (α_m ⊙_λ β_m) = σ_(m,0,m) (α_m ⊙_0 β_m) + σ_(m,1,m) (α_m ⊙_1 β_m) + ⋯ + σ_(m,m,m) (α_m ⊙_m β_m)

The last term involves σ_(m,m,m), and we must take it to be -1 here also. This expansion also reminds us that while the hypercomplex square of a simple element is necessarily scalar, α_m ⊛ β_m is not.
To investigate the norm further we look at the product A ⊛ A. Suppose A is the sum of two (not necessarily simple) elements of different grade: an m-element α_m and a k-element β_k. Then A ⊛ A becomes:

A ⊛ A = (α_m + β_k) ⊛ (α_m + β_k) = α_m ⊛ α_m + α_m ⊛ β_k + β_k ⊛ α_m + β_k ⊛ β_k

We would like the norm of a general Grassmann number to be a scalar quantity. But none of the generalized product components of α_m ⊛ β_k or β_k ⊛ α_m can be scalar if m and k are different, so we must require the hypercomplex product of elements of different grade to be an antisymmetric product, thus eliminating both products from the expression for the norm. This requirement of antisymmetry can always be satisfied because, since the grades are different, the two products are always distinguishable.

α_m ⊛ β_k = -β_k ⊛ α_m,   m ≠ k   (11.15)

That is, the hypercomplex product of elements of different grade must be antisymmetric in order to enable the norm of a general sum of simple elements to be scalar. We now explore what constraints are required on the hypercomplex signs in order to effect this antisymmetry. We begin with the definition of the hypercomplex product.
α_m ⊛ β_k = Σ_(λ=0)^(Min[m,k]) σ_(m,λ,k) (α_m ⊙_λ β_k)

Now write the definition for the factors in the reverse order, and reverse the order of the generalized Grassmann products on the right-hand side back to their initial order. (The formula for doing this is in the previous chapter.)

β_k ⊛ α_m = Σ_(λ=0)^(Min[m,k]) σ_(k,λ,m) (β_k ⊙_λ α_m) = Σ_(λ=0)^(Min[m,k]) σ_(k,λ,m) (-1)^((m-λ)(k-λ)) (α_m ⊙_λ β_k)

Antisymmetry then requires, term by term, that

σ_(m,λ,k) = -(-1)^((m-λ)(k-λ)) σ_(k,λ,m)

so the relation between σ_(m,λ,k) and σ_(k,λ,m) depends on the parity of (m-λ)(k-λ).
m-λ	k-λ	(m-λ)(k-λ)
even	even	even
even	odd	even
odd	even	even
odd	odd	odd

This table can be summarized by the rule: if m and k are of the same parity, but λ is of the opposite parity, then the parity of (m-λ)(k-λ) is odd; otherwise it is even. We encode these rules as the HypercomplexAntisymmetryConstraints (alias Has).
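The parity rule is easy to verify exhaustively for small grades, independently of the package (a plain-Python check):

```python
def reversal_parity_odd(m, l, k):
    # parity of the reversal exponent (m - l)(k - l)
    return ((m - l) * (k - l)) % 2 == 1

def rule(m, l, k):
    # odd exactly when m and k share a parity and l has the opposite parity
    return m % 2 == k % 2 and l % 2 != m % 2

# exhaustive check over a small grid of grades and orders
assert all(reversal_parity_odd(m, l, k) == rule(m, l, k)
           for m in range(8) for l in range(8) for k in range(8))
```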
HypercomplexAntisymmetryConstraints := {
  σ_(m_?EvenQ, l_?OddQ, k_?EvenQ) /; m < k → σ_(k,l,m),
  σ_(m_?OddQ, l_?EvenQ, k_?OddQ) /; m < k → σ_(k,l,m),
  σ_(m_, l_, k_) /; m < k → -σ_(k,l,m)}   (11.16)

Note that these rules only rewrite signs with m < k in terms of those with the grades reversed; they do not constrain the values of the remaining signs σ_(m,l,k) with m > k.
The antisymmetry axiom then allows us to write A ⊛ A as the sum of the hypercomplex squares of the components of different grade of A.

A ⊛ A = (α_m + β_k + γ_p + ⋯) ⊛ (α_m + β_k + γ_p + ⋯) = α_m ⊛ α_m + β_k ⊛ β_k + γ_p ⊛ γ_p + ⋯

𝒩[a + α_m + β_k + γ_p + ⋯] = a² - α_m ⊛ α_m - β_k ⊛ β_k - γ_p ⊛ γ_p - ⋯   (11.17)

If the components of A = a + α_m + β_k + γ_p + ⋯ are simple, then by 11.12 and 11.15 we have that the norm of A is positive and may be written as the sum of inner products of the individual simple elements.

𝒩[a + α_m + β_k + γ_p + ⋯] = a² + α_m ∘ α_m + β_k ∘ β_k + γ_p ∘ γ_p + ⋯   (11.18)
The following analysis will be critical to our understanding of the properties of hypercomplex numbers in spaces of dimension greater than 3. Suppose α_m is the sum of two simple elements α1 and α2 (both of grade m). The product α_m ⊛ α_m then becomes:

α_m ⊛ α_m = (α1 + α2) ⊛ (α1 + α2) = α1 ⊛ α1 + α1 ⊛ α2 + α2 ⊛ α1 + α2 ⊛ α2

Since α1 and α2 are simple, α1 ⊛ α1 and α2 ⊛ α2 are scalar, and can be written:

α1 ⊛ α1 + α2 ⊛ α2 = -α1 ∘ α1 - α2 ∘ α2

The expression of the remaining terms α1 ⊛ α2 and α2 ⊛ α1 in terms of exterior and interior products is taken up below.
Scalar constraints

HypercomplexScalarConstraints := {σ_(m_,0,0) → 1, σ_(0,0,k_) → 1}

Square constraints

HypercomplexSquareConstraints := {σ_(m_,m_,m_) /; m ≠ 0 → -1}

Antisymmetry constraints

HypercomplexAntisymmetryConstraints := {
  σ_(m_?EvenQ, l_?OddQ, k_?EvenQ) /; m < k → σ_(k,l,m),
  σ_(m_?OddQ, l_?EvenQ, k_?OddQ) /; m < k → σ_(k,l,m),
  σ_(m_, l_, k_) /; m < k → -σ_(k,l,m)}

Note that whereas the scalar and square constraints gave specific values, these antisymmetry constraints still leave half of each pair unspecified.

Hypercomplex constraints

We collect these as our hypercomplex constraints, which we designate H.

H := Flatten[{HypercomplexScalarConstraints, HypercomplexSquareConstraints, HypercomplexAntisymmetryConstraints}]

Residual constraints

Although these constraints cover a significant range of values of the hypercomplex signs, there are two residual families of signs that must be specified by other considerations: the signs σ_(m,l,k) with m > k (and m > l), and the signs σ_(m,l,m) with l < m.
α1 ⊛ α2 = Σ_(λ=0)^(m) σ_(m,λ,m) (α1 ⊙_λ α2)

α2 ⊛ α1 = Σ_(λ=0)^(m) σ_(m,λ,m) (α2 ⊙_λ α1)

The generalized products on the right may now be reversed, together with the change of sign (-1)^((m-λ)(m-λ)) = (-1)^(m-λ).

α2 ⊛ α1 = Σ_(λ=0)^(m) (-1)^(m-λ) σ_(m,λ,m) (α1 ⊙_λ α2)

α1 ⊛ α2 + α2 ⊛ α1 = Σ_(λ=0)^(m) (1 + (-1)^(m-λ)) σ_(m,λ,m) (α1 ⊙_λ α2)   (11.19)

Because we will need to refer to this sum of products subsequently, we will call it the symmetrized product of two m-elements (because the expression does not change if the order of the elements is reversed).
Thus we have that:

α1 ⊛ α2 + α2 ⊛ α1 = Σ_(λ=0)^(m) (1 + (-1)^(m-λ)) σ_(m,λ,m) (α1 ⊙_λ α2)   (11.20)

The term 1 + (-1)^(m-λ) will be 2 for m and λ both even or both odd, and zero otherwise. To get a clearer picture of what this formula entails, we write it out explicitly for the lower values of m. Note that we use the fact already established that σ_(m,m,m) = -1.

m = 1:   α1 ⊛ α2 + α2 ⊛ α1 = -2 (α1 ∘ α2)   (11.21)

m = 2:   α1 ⊛ α2 + α2 ⊛ α1 = 2 σ_(2,0,2) (α1 ∧ α2) - 2 (α1 ∘ α2)   (11.22)

m = 3:   α1 ⊛ α2 + α2 ⊛ α1 = 2 σ_(3,1,3) (α1 ⊙_1 α2) - 2 (α1 ∘ α2)   (11.23)

m = 4:   α1 ⊛ α2 + α2 ⊛ α1 = 2 σ_(4,0,4) (α1 ∧ α2) + 2 σ_(4,2,4) (α1 ⊙_2 α2) - 2 (α1 ∘ α2)   (11.24)
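The pattern in these expansions — only the orders λ with the same parity as m survive, each with coefficient 2 — can be checked directly (a plain-Python sketch; the function name is ours):

```python
def surviving_orders(m):
    # orders l for which the coefficient 1 + (-1)^(m-l) is 2 rather than 0,
    # i.e. the generalized-product orders that appear in the symmetrized product
    return [l for l in range(m + 1) if (1 + (-1) ** (m - l)) == 2]
```

For m = 1 only the interior-product term (λ = 1) survives, matching 11.21; for m = 4 the orders 0, 2 and 4 survive, matching 11.24.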
The only term in the symmetrized product which can contribute a scalar part (body) is the term of order m, σ_(m,m,m) (α1 ⊙_m α2) = σ_(m,m,m) (α1 ∘ α2). And since the hypercomplex square constraint requires that σ_(m,m,m) = -1, we can thus write:

Body[α1 ⊛ α2 + α2 ⊛ α1] = -2 (α1 ∘ α2)   (11.25)

Suppose now that we have an m-element written as the sum of two m-elements.

α_m = α1 + α2

Body[α_m ⊛ α_m] = -α1 ∘ α1 - 2 (α1 ∘ α2) - α2 ∘ α2 = -α_m ∘ α_m

Hence, even if a component element like α_m is not simple, the body of its hypercomplex square is given by the same expression as if it were simple, that is, the negative of its inner product square.

Body[α_m ⊛ α_m] = -α_m ∘ α_m   (11.26)

It is straightforward to see that this is valid for α_m being a sum of any number of m-elements. Thus we can say that the body of the hypercomplex square of an arbitrary m-element is the negative of its inner product square.
Soul[α_m ⊛ α_m] = Σ_(λ=0)^(m-2) (1 + (-1)^(m-λ)) σ_(m,λ,m) (α1 ⊙_λ α2)   (11.27)

(Here we have used the fact that, because of the coefficient (1 + (-1)^(m-λ)), the term with λ equal to m-1 is always zero, so we only need to sum to m-2.) For convenience in generating examples of this expression, we temporarily define the soul of α_m ⊛ α_m as a function of m, and denote it h[m]. To make it easier to read we convert the generalized products where possible to exterior and interior products.

h[m_] := Σ_(λ=0)^(m-2) (1 + (-1)^(m-λ)) σ_(m,λ,m) (α1 ⊙_λ α2) // SimplifyGeneralizedProducts

With this function we can make a palette of souls for various values of m. We do this below for m from 1 to 6.

ComposePalette[{"Soul of a Hypercomplex Square"}, Table[{m, h[m]}, {m, 1, 6}]]
m	Soul of α_m ⊛ α_m
1	0
2	2 σ_(2,0,2) (α1 ∧ α2)
3	2 σ_(3,1,3) (α1 ⊙_1 α2)
4	2 σ_(4,0,4) (α1 ∧ α2) + 2 σ_(4,2,4) (α1 ⊙_2 α2)
5	2 σ_(5,1,5) (α1 ⊙_1 α2) + 2 σ_(5,3,5) (α1 ⊙_3 α2)
6	2 σ_(6,0,6) (α1 ∧ α2) + 2 σ_(6,2,6) (α1 ⊙_2 α2) + 2 σ_(6,4,6) (α1 ⊙_4 α2)
Of particular note is the soul of the symmetrized product of two different simple 2-elements:

2 σ_(2,0,2) (α1 ∧ α2)

This 4-element is the simplest critical potentially non-scalar residue in a norm calculation. It is always zero in spaces of dimension 2 or 3; hence the norm of a Grassmann number in these spaces under the hypercomplex product is guaranteed to be scalar. In a space of 4 dimensions, on the other hand, this element may not be zero. Hence the space of dimension 3 is the highest-dimensional space in which the norm of a Grassmann number is guaranteed to be scalar.
Consider a general Grassmann number X = a + α_m + β_k + γ_p + ⋯. Here, we suppose a to be scalar, and the rest of the terms to be non-scalar simple or non-simple elements. The generalized hypercomplex norm 𝒩[X] of X may be written as:

𝒩[X] = a² - α_m ⊛ α_m - β_k ⊛ β_k - γ_p ⊛ γ_p - ⋯

The hypercomplex square α_m ⊛ α_m of an element has, in general, both a body and a soul.

Body[α_m ⊛ α_m] = -α_m ∘ α_m

The soul of α_m ⊛ α_m depends on the terms of which it is composed. If α_m = α1 + α2 + α3 + ⋯ then

Soul[α_m ⊛ α_m] = Σ_(λ=0)^(m-2) (1 + (-1)^(m-λ)) σ_(m,λ,m) (α1 ⊙_λ α2 + α1 ⊙_λ α3 + α2 ⊙_λ α3 + ⋯)

If the component elements of different grade of X are simple, then its soul is zero, and its norm becomes the scalar:

𝒩[X] = a² + α_m ∘ α_m + β_k ∘ β_k + γ_p ∘ γ_p + ⋯

It is only in 3-space that we are guaranteed that all the components of a Grassmann number are simple. Therefore it is only in 3-space that we are guaranteed that the norm of a Grassmann number under the hypercomplex product is scalar.
For a 1-element α, the hypercomplex square is:

α ⊛ α = Σ_(λ=0)^(1) σ_(1,λ,1) (α ⊙_λ α) = σ_(1,0,1) (α ∧ α) + σ_(1,1,1) (α ∘ α) = σ_(1,1,1) (α ∘ α)

In the usual notation, the product of two complex numbers would be written:

(a + 𝕚 b)(c + 𝕚 d) = (a c - b d) + 𝕚 (b c + a d)

Writing the complex numbers instead as Grassmann numbers a + b e and c + d e, where e is a basis 1-element of a one-dimensional space, and simplifying the hypercomplex product using the relations 11.6 and 11.7 above, gives:

(a + b e) ⊛ (c + d e) = (a c + b d (e ⊛ e)) + (b c + a d) e = (a c + b d σ_(1,1,1) (e ∘ e)) + (b c + a d) e

For this to agree with the usual complex product we require e ⊛ e to be -1. One immediate interpretation that we can explore to satisfy this is that e is a unit 1-element (with e ∘ e = 1), and σ_(1,1,1) = -1. This then is the constraint we will impose to allow the incorporation of complex numbers in the hypercomplex structure.

σ_(1,1,1) = -1   (11.28)

e ∘ e = 1   (11.29)

e ⊛ e = -1   (11.30)
In this interpretation, instead of 𝕚 being a new entity with the special property 𝕚² = -1, the focus is shifted to interpreting it as a unit 1-element e acting under a new product operation ⊛. Complex numbers are then interpreted as Grassmann numbers in a space of one dimension under the hypercomplex product operation. For example, the norm of a complex number a + b 𝕚 ≡ a + b e can be calculated as:

𝒩[a + b e] = (a + b e) ⊛ (a - b e) = a² - b² (e ⊛ e) = a² + b² (e ∘ e) = a² + b²   (11.31)
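This model of the complex numbers is easy to exercise numerically. A hedged sketch representing a + b e as a pair (a, b), with e ∘ e = 1 and σ_(1,1,1) = -1 built in (names are ours):

```python
def hc_product(x, y):
    # (a + b e) # (c + d e) = ac + (ad + bc) e + bd (e # e)
    #                       = (ac - bd) + (ad + bc) e,  since e # e = -1
    (a, b), (c, d) = x, y
    return (a * c - b * d, a * d + b * c)

def norm(x):
    # N[a + b e] = (a + b e) # (a - b e) = a^2 + b^2
    a, b = x
    return hc_product(x, (a, -b))[0]
```

The product coincides with ordinary complex multiplication; for instance hc_product((0, 1), (0, 1)) recovers e ⊛ e = -1.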
To make this easier to read we format this with the GrassmannAlgebra function TableTemplate.

TableTemplate[T]

⊛ | 1 | a | b | a∧b
1 | 1⊛1 | 1⊛a | 1⊛b | 1⊛(a∧b)
a | a⊛1 | a⊛a | a⊛b | a⊛(a∧b)
b | b⊛1 | b⊛a | b⊛b | b⊛(a∧b)
a∧b | (a∧b)⊛1 | (a∧b)⊛a | (a∧b)⊛b | (a∧b)⊛(a∧b)

Products where one of the factors is a scalar have already been discussed. Products of a 1-element with itself have been discussed in the section on complex numbers. Applying these results to the table above enables us to simplify it to:

⊛ | 1 | a | b | a∧b
1 | 1 | a | b | a∧b
a | a | -(a∘a) | a⊛b | a⊛(a∧b)
b | b | b⊛a | -(b∘b) | b⊛(a∧b)
a∧b | a∧b | (a∧b)⊛a | (a∧b)⊛b | (a∧b)⊛(a∧b)

In this table there are four essentially different new products which we have not yet discussed:

a⊛b,   a⊛(a∧b),   (a∧b)⊛a,   (a∧b)⊛(a∧b)

In the next three subsections we will take each of these products and, with a view to developing the quaternion algebra, show how they may be expressed in terms of exterior and interior products.
The product of two distinct 1-elements a and b expands as:

a ⊛ b = σ_(1,0,1) (a ∧ b) + σ_(1,1,1) (a ∘ b) = σ_(1,0,1) (a ∧ b) - (a ∘ b)

The hypercomplex product b ⊛ a can be obtained by reversing the sign of the exterior product term, since the scalar product is symmetric.

a ⊛ b = σ_(1,0,1) (a ∧ b) - (a ∘ b)
b ⊛ a = -σ_(1,0,1) (a ∧ b) - (a ∘ b)   (11.32)
a ⊛ (a∧b) = Σ_(λ=0)^(Min[1,2]) σ_(1,λ,2) (a ⊙_λ (a∧b)) = σ_(1,0,2) (a ∧ a ∧ b) + σ_(1,1,2) (a ⊙_1 (a∧b))

(a∧b) ⊛ a = Σ_(λ=0)^(Min[2,1]) σ_(2,λ,1) ((a∧b) ⊙_λ a) = σ_(2,0,1) (a ∧ b ∧ a) + σ_(2,1,1) ((a∧b) ⊙_1 a)

Since in each case the first term, involving only exterior products, is zero, we obtain:

a ⊛ (a∧b) = σ_(1,1,2) ((a∧b) ∘ a)
(a∧b) ⊛ a = σ_(2,1,1) ((a∧b) ∘ a)   (11.33)
(a∧b) ⊛ (a∧b) = Σ_(λ=0)^(2) σ_(2,λ,2) ((a∧b) ⊙_λ (a∧b)) = σ_(2,2,2) ((a∧b) ∘ (a∧b))

since the generalized products of orders 0 and 1 of the simple 2-element a∧b with itself are zero. Hence:

(a∧b) ⊛ (a∧b) = -((a∧b) ∘ (a∧b))   (11.34)
Substituting these expressions into the simplified table gives:

⊛ | 1 | a | b | a∧b
1 | 1 | a | b | a∧b
a | a | -(a∘a) | σ_(1,0,1) (a∧b) - (a∘b) | σ_(1,1,2) ((a∧b)∘a)
b | b | -σ_(1,0,1) (a∧b) - (a∘b) | -(b∘b) | σ_(1,1,2) ((a∧b)∘b)
a∧b | a∧b | σ_(2,1,1) ((a∧b)∘a) | σ_(2,1,1) ((a∧b)∘b) | -((a∧b)∘(a∧b))
(11.35)
If we now take a and b to be orthonormal (a∘a = b∘b = 1, a∘b = 0), and write 𝕚 for a, 𝕛 for b, and 𝕜 for the unit 2-element, the table becomes one parametrized only by the residual signs σ_(1,0,1), σ_(1,1,2) and σ_(2,1,1) (tables 11.36 and 11.37). Choosing these residual signs so that the products satisfy the relations 𝕚 ⊛ 𝕛 = 𝕜, 𝕛 ⊛ 𝕜 = 𝕚 and 𝕜 ⊛ 𝕚 = 𝕛 then gives the classical quaternion product table:

⊛ | 1 | 𝕚 | 𝕛 | 𝕜
1 | 1 | 𝕚 | 𝕛 | 𝕜
𝕚 | 𝕚 | -1 | 𝕜 | -𝕛
𝕛 | 𝕛 | -𝕜 | -1 | 𝕚
𝕜 | 𝕜 | 𝕛 | -𝕚 | -1
(11.38)
Substituting these values back into the original table gives a hypercomplex product table in terms only of exterior and interior products. This table defines the real, complex, and quaternion product operations.

⊛ | 1 | a | b | a∧b
1 | 1 | a | b | a∧b
a | a | -(a∘a) | -(a∧b) - (a∘b) | (a∧b)∘a
b | b | (a∧b) - (a∘b) | -(b∘b) | (a∧b)∘b
a∧b | a∧b | -((a∧b)∘a) | -((a∧b)∘b) | -((a∧b)∘(a∧b))
(11.39)
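Specialized to an orthonormal pair (a∘a = b∘b = 1, a∘b = 0), this table can be encoded as structure constants and checked to reproduce the quaternion relations; in this sign convention 𝕜 corresponds to -(a∧b). A hedged plain-Python sketch of that check (the encoding is ours, not the package's):

```python
# Basis order: 1, a, b, a^b. Entry T[i][j] = (sign, index) means
# basis_i # basis_j = sign * basis_index, reading off the table above
# with a o a = b o b = 1 and a o b = 0.
T = [
    [(1, 0), (1, 1), (1, 2), (1, 3)],    # 1 # x = x
    [(1, 1), (-1, 0), (-1, 3), (1, 2)],  # a#a = -1, a#b = -a^b, a#(a^b) = b
    [(1, 2), (1, 3), (-1, 0), (-1, 1)],  # b#a = a^b, b#b = -1, b#(a^b) = -a
    [(1, 3), (-1, 2), (1, 1), (-1, 0)],  # (a^b)#a = -b, (a^b)#b = a, sq = -1
]

def hc(x, y):
    # bilinear extension of the basis product table to 4-component numbers
    out = [0] * 4
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            s, k = T[i][j]
            out[k] += s * xi * yj
    return out

one = [1, 0, 0, 0]
i, j = [0, 1, 0, 0], [0, 0, 1, 0]
k = [0, 0, 0, -1]  # k = -(a^b) in this sign convention
```

With these identifications the products satisfy 𝕚⊛𝕛 = 𝕜, 𝕛⊛𝕜 = 𝕚, 𝕜⊛𝕚 = 𝕛 and 𝕚² = 𝕛² = 𝕜² = -1.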
A general quaternion Q may be written:

Q = a + b + c∧d   (11.40)

Here, a is a scalar, b is a 1-element, and c∧d is congruent to any 2-element of the space.

The norm of Q is denoted 𝒩[Q] and given as the hypercomplex product of Q with its conjugate Qc. Expanding using formula 11.5 gives:

𝒩[Q] = (a + b + c∧d) ⊛ (a - b - c∧d) = a² - (b + c∧d) ⊛ (b + c∧d)

Whence, by the antisymmetry of the hypercomplex product of elements of different grade, the cross terms cancel:

b ⊛ (c∧d) = -(c∧d) ⊛ b
Using the product table above then allows us to write the norm of a quaternion either in terms of the hypercomplex product or the inner product.

𝒩[Q] = a² - b ⊛ b - (c∧d) ⊛ (c∧d) = a² + b ∘ b + (c∧d) ∘ (c∧d)   (11.41)

In terms of 𝕚, 𝕛, and 𝕜

Q = a + b 𝕚 + c 𝕛 + d 𝕜

𝒩[Q] = a² - (b 𝕚 + c 𝕛) ⊛ (b 𝕚 + c 𝕛) - (d 𝕜) ⊛ (d 𝕜)
     = a² - b² (𝕚 ⊛ 𝕚) - c² (𝕛 ⊛ 𝕛) - d² (𝕜 ⊛ 𝕜)
     = a² + b² (𝕚 ∘ 𝕚) + c² (𝕛 ∘ 𝕛) + d² (𝕜 ∘ 𝕜)
     = a² + b² + c² + d²   (11.42)
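The scalar norm can be checked with an ordinary quaternion product in the 1, 𝕚, 𝕛, 𝕜 basis (a plain-Python sketch of 11.42; the multiplicativity of this norm is what rules out zero divisors):

```python
def qmul(p, q):
    # Hamilton product in the basis (1, i, j, k), matching table 11.38
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qnorm(q):
    # N[Q] = Q # Q_c = a^2 + b^2 + c^2 + d^2
    a, b, c, d = q
    return qmul(q, (a, -b, -c, -d))[0]
```

For instance, qnorm((1, 2, 3, 4)) gives 30, and the norm of a product equals the product of the norms, so a product of non-zero quaternions can never be zero.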
More generally, retaining the parameters a∘a and b∘b (but with a∘b = 0), the product table becomes:

⊛ | 1 | a | b | a∧b
1 | 1 | a | b | a∧b
a | a | -(a∘a) | -(a∧b) | (a∘a) b
b | b | a∧b | -(b∘b) | -(b∘b) a
a∧b | a∧b | -(a∘a) b | (b∘b) a | -(a∘a)(b∘b)
(11.43)

In the notation we have used, the generators are 1, a, b, and a∧b. The parameters are a∘a and b∘b.
12.1 Introduction
In this chapter we define a new type of algebra: the Clifford algebra. Clifford algebra was originally proposed by William Kingdon Clifford (Clifford ) based on Grassmann's ideas. Clifford algebras have now found an important place as mathematical systems for describing some of the more fundamental theories of mathematical physics.

Clifford algebra is based on a new kind of product called the Clifford product. In this chapter we will show how the Clifford product can be defined straightforwardly in terms of the exterior and interior products that we have already developed, without introducing any new axioms. This approach has the dual advantage of ensuring consistency and enabling all the results we have developed previously to be applied to Clifford numbers.

In the previous chapter we have gone to some lengths to define and explore the generalized Grassmann product. In this chapter we will use the generalized Grassmann product to define the Clifford product in its most general form as a simple sum of generalized products. Computations in general Clifford algebras can be complicated, and this approach at least permits us to render the complexities in a straightforward manner. From the general case, we then show how Clifford products of intersecting and orthogonal elements simplify. This is the normal case treated in introductory discussions of Clifford algebra.

Clifford algebras occur throughout mathematical physics, the most well known being the real numbers, complex numbers, quaternions, and complex quaternions. In this book we show how Clifford algebras can be firmly based on the Grassmann algebra as a sum of generalized Grassmann products.

Historical Note

The seminal work on Clifford algebra is by Clifford in his paper Applications of Grassmann's Extensive Algebra, which he published in the American Journal of Mathematics Pure and Applied in 1878 (Clifford ).
Clifford became a great admirer of Grassmann and one of those rare contemporaries who appears to have understood his work. The first paragraph of this paper contains the following passage.
Until recently I was unacquainted with the Ausdehnungslehre, and knew only so much of it as is contained in the author's geometrical papers in Crelle's Journal and in Hankel's Lectures on Complex Numbers. I may, perhaps, therefore be permitted to express my profound admiration of that extraordinary work, and my conviction that its principles will exercise a vast influence on the future of mathematical science.
The Clifford product of α_m and β_k is defined as a sum of generalized products (writing α_m Δλ β_k for the generalized product of order λ):

α_m ∘ β_k  =  Σ_{λ=0}^{Min[m,k]} (−1)^{λ(m−λ) + ½λ(λ−1)} α_m Δλ β_k      (12.1)
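Definition 12.1 is stated in terms of generalized products, but its net effect on basis blades in an orthogonal basis can be cross-checked against the standard bitmask representation of a Clifford algebra. The following is a minimal Python sketch of that representation; it is my own illustration, not the GrassmannAlgebra implementation, and all names are illustrative.

```python
def bmul(a, b, g):
    """Product of two basis blades given as bitmasks (bit i set = factor e_i).
    g is the diagonal metric [e1.e1, e2.e2, ...]. Returns (sign, result_blade)."""
    s, t = 0, a >> 1
    while t:                         # count the transpositions needed to
        s += bin(t & b).count("1")   # interleave the generators of a and b
        t >>= 1
    sign = -1 if s & 1 else 1
    for i, gi in enumerate(g):       # contract repeated generators: e_i o e_i = g[i]
        if (a >> i) & (b >> i) & 1:
            sign *= gi
    return sign, a ^ b

def mul(x, y, g):
    """Clifford product of multivectors stored as {blade: coefficient} dicts."""
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, b = bmul(ba, bb, g)
            out[b] = out.get(b, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c}

g = [1, 1, 1]                        # Euclidean 3-space
e1, e2 = {0b001: 1}, {0b010: 1}
print(mul(e1, e2, g))                # {3: 1}   e1 o e2 = e1^e2
print(mul(e2, e1, g))                # {3: -1}  e2 o e1 = -(e1^e2)
print(mul(e1, e1, g))                # {0: 1}   e1 o e1 = e1.e1 = 1
```

The same helpers are reused (repeated verbatim, to keep each snippet self-contained) in the later numerical checks of this chapter.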
The most surprising property of the Clifford product is its associativity, even though it is defined in terms of products which are themselves non-associative. The associativity of the Clifford product is not directly evident from the definition.
Expanding the sum of equation 12.1 for the first few values of k gives:

α_m ∘ β_0 = α_m Δ0 β_0
α_m ∘ β_1 = α_m Δ0 β_1 − (−1)^m α_m Δ1 β_1
α_m ∘ β_2 = α_m Δ0 β_2 − (−1)^m α_m Δ1 β_2 − α_m Δ2 β_2
α_m ∘ β_3 = α_m Δ0 β_3 − (−1)^m α_m Δ1 β_3 − α_m Δ2 β_3 + (−1)^m α_m Δ3 β_3
α_m ∘ β_4 = α_m Δ0 β_4 − (−1)^m α_m Δ1 β_4 − α_m Δ2 β_4 + (−1)^m α_m Δ3 β_4 + α_m Δ4 β_4
The elements of a given Clifford product are either all of even grade (if m + k is even) or all of odd grade (if m + k is odd). To calculate the full range of grades in a space of unspecified dimension we can use the GrassmannAlgebra function RawGrade. For example:

RawGrade[α_3 ∘ β_2]
{1, 3, 5}

Given a space of a certain dimension, some of these elements may necessarily be zero (and thus return a grade of Grade0), because their grade is larger than the dimension of the space. For example, in a 3-space:

Grade[α_3 ∘ β_2]
{1, 3, Grade0}
In the general discussion of the Clifford product that follows, we will assume that the dimension of the space is high enough to avoid any terms of the product becoming zero because their grade exceeds the dimension of the space. In later more specific examples, however, the dimension of the space becomes an important factor in determining the structure of the particular Clifford algebra under consideration.
and 5. In GrassmannAlgebra we can effect this conversion by applying the ToGeneralizedProducts function.

ToGeneralizedProducts[α_3 ∘ β_2]
α_3 Δ0 β_2 + α_3 Δ1 β_2 − α_3 Δ2 β_2
Multiple Clifford products can be expanded in the same way. For example:
ToGeneralizedProducts[α_3 ∘ β_2 ∘ γ]
(α_3 Δ0 β_2) Δ0 γ + (α_3 Δ1 β_2) Δ0 γ − (α_3 Δ2 β_2) Δ0 γ +
(α_3 Δ0 β_2) Δ1 γ + (α_3 Δ1 β_2) Δ1 γ − (α_3 Δ2 β_2) Δ1 γ
ToGeneralizedProducts can also be used to expand Clifford products in any Grassmann expression, or list of Grassmann expressions. For example we can express the Clifford product of two general Grassmann numbers in 2-space in terms of generalized products.

X = CreateGrassmannNumber[x] ∘ CreateGrassmannNumber[y]
(x0 + e1 x1 + e2 x2 + x3 e1∧e2) ∘ (y0 + e1 y1 + e2 y2 + y3 e1∧e2)

X1 = ToGeneralizedProducts[X]
x0 y0 + x0 (e1 y1) + x0 (e2 y2) + x0 (y3 e1∧e2) + ⋯
Note that ToInteriorProducts works with explicit elements, but also as above, automatically creates a simple exterior product from an underscripted element.
ToInteriorProducts expands the Clifford product by using the A form of the generalized product. A second function ToInteriorProductsB expands the Clifford product by using the B form of the expansion to give a different but equivalent expression.
ToInteriorProductsB[α_3 ∘ β_2]
(α1∧α2∧α3)∙(β2∧β1) + α-terms of grades 3 and 5 in which the factors β1 and β2 appear in reversed order in the second and third terms

In this example, the difference between the two forms is evidenced in the second and third terms. Expanding fully to scalar products:

(α2∙β2)(α3∙β1) α1 − (α2∙β1)(α3∙β2) α1 − (α1∙β2)(α3∙β1) α2 + (α1∙β1)(α3∙β2) α2 +
(α1∙β2)(α2∙β1) α3 − (α1∙β1)(α2∙β2) α3 − (α3∙β2) α1∧α2∧β1 + (α3∙β1) α1∧α2∧β2 +
(α2∙β2) α1∧α3∧β1 − (α2∙β1) α1∧α3∧β2 − (α1∙β2) α2∧α3∧β1 + (α1∙β1) α2∧α3∧β2 +
α1∧α2∧α3∧β1∧β2
For a simple element α_m = a1∧a2∧⋯∧am, the reverse of α_m is obtained by reversing the order of its factors:

(a1∧a2∧⋯∧am)† = am∧am−1∧⋯∧a1      (12.2)

We can easily work out the number of transpositions required to achieve this rearrangement as ½ m(m−1). Hence:

α_m† = (−1)^{½ m(m−1)} α_m      (12.3)

The operation of taking the reverse of an element is called reversion. In Mathematica, the superscript dagger is represented by SuperDagger; thus SuperDagger[α_m] returns α_m†.
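The transposition count ½ m(m−1), and hence the reversion sign, can be confirmed by actually reversing a list with adjacent swaps. This Python sketch is an independent illustration (names are my own):

```python
def reverse_sign(m):
    # reversing a_1 ^ ... ^ a_m needs m(m-1)/2 adjacent transpositions
    return -1 if (m * (m - 1) // 2) % 2 else 1

def swap_count(m):
    """Count adjacent swaps needed to fully reverse a list of m factors."""
    seq, target, count = list(range(m)), list(range(m))[::-1], 0
    while seq != target:
        for i in range(m - 1):
            if seq[i] < seq[i + 1]:          # bubble toward reversed order
                seq[i], seq[i + 1] = seq[i + 1], seq[i]
                count += 1
    return count

# the sign pattern (-1)^(m(m-1)/2) repeats with period 4: +, +, -, -, +, +, ...
print([reverse_sign(m) for m in range(1, 7)])   # [1, -1, -1, 1, 1, -1]
```

Note the pattern for m = 1, 2, 3, 4: +, −, −, +, then repeating.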
{1, x, y∧x, 3 a3∧a2∧a1, bk∧⋯∧b2∧b1, (am∧⋯∧a2∧a1)∧(bk∧⋯∧b2∧b1) − (am∧⋯∧a2∧a1)∧(gp∧⋯∧g2∧g1)}

On the other hand, if we wish just to put exterior products into reverse form (that is, reversing the factors, and changing the sign of the product so that the result remains equal to the original), we can use the GrassmannAlgebra function ReverseForm.

ReverseForm[{1, x, x∧y, α_3, β_k, α_m∧β_k − γ_p}]
For α_m = a1∧a2∧⋯∧am and a scalar a we have simply:

a ∘ α_m = a α_m      (12.4)

Hence the Clifford product of any number of scalars is just their underlying field product.

ToScalarProducts[a ∘ b ∘ c]
a b c

For 1-elements the defining sum has just two terms:

x ∘ y = x∧y + x∙y      (12.5)

The Clifford product of any number of 1-elements can be computed.

ToScalarProducts[x ∘ y ∘ z]
z (x∙y) − y (x∙z) + x (y∙z) + x∧y∧z

ToScalarProducts[w ∘ x ∘ y ∘ z]
(w∙z)(x∙y) − (w∙y)(x∙z) + (w∙x)(y∙z) + (y∙z) w∧x − (x∙z) w∧y + (x∙y) w∧z +
(w∙z) x∧y − (w∙y) x∧z + (w∙x) y∧z + w∧x∧y∧z
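The three-factor expansion can be verified numerically for arbitrary vectors, using the bitmask blade product sketched earlier (repeated here so the snippet is self-contained; this is my own illustration, not GrassmannAlgebra code). The exterior product is obtained from the same routine by using a zero metric, which kills every contraction term.

```python
import random

def bmul(a, b, g):
    s, t = 0, a >> 1
    while t:
        s += bin(t & b).count("1"); t >>= 1
    sign = -1 if s & 1 else 1
    for i, gi in enumerate(g):
        if (a >> i) & (b >> i) & 1:
            sign *= gi
    return sign, a ^ b

def mul(x, y, g):
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, b = bmul(ba, bb, g)
            out[b] = out.get(b, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c}

def vec(c):   return {1 << i: ci for i, ci in enumerate(c) if ci}
def scal(s, x): return {b: s * c for b, c in x.items() if s * c}
def add(*xs):
    out = {}
    for x in xs:
        for b, c in x.items():
            out[b] = out.get(b, 0) + c
    return {b: c for b, c in out.items() if c}

random.seed(2)
g, w = [1, 1, 1], [0, 0, 0]          # Euclidean metric; zero metric = exterior product
cx, cy, cz = [[random.randint(-3, 3) for _ in range(3)] for _ in range(3)]
x, y, z = vec(cx), vec(cy), vec(cz)
dot = lambda u, v: sum(p * q for p, q in zip(u, v))

lhs = mul(mul(x, y, g), z, g)        # x o y o z
rhs = add(scal(dot(cx, cy), z), scal(-dot(cx, cz), y),
          scal(dot(cy, cz), x), mul(mul(x, y, w), z, w))
assert lhs == rhs                    # z(x.y) - y(x.z) + x(y.z) + x^y^z
```

The assertion holds for any choice of vectors, since the expansion is an identity.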
The Clifford product of a 1-element x with an arbitrary element α_m may be expressed in terms of exterior and interior products:

x ∘ α_m = x∧α_m + α_m∙x      (12.6)

α_m ∘ x = α_m∧x − (−1)^m α_m∙x      (12.7)

We can add and subtract equations 12.6 and 12.7 to express the exterior product and interior product of a 1-element and an m-element in terms of Clifford products:

x∧α_m = ½ (x ∘ α_m + (−1)^m α_m ∘ x)      (12.8)

α_m∙x = ½ (x ∘ α_m − (−1)^m α_m ∘ x)      (12.9)

For a 2-element β_2 the Clifford product with α_m has three terms:

α_m ∘ β_2 = α_m∧β_2 − (−1)^m α_m Δ1 β_2 − α_m∙β_2      (12.10)
The first of these generalized products is equivalent to an exterior product of grade 4. The second is a generalized product of grade 2. The third reduces to an interior product of grade 0. We can see this more explicitly by converting to interior products.

ToInteriorProducts[(x∧y) ∘ (u∧v)]
−(u∧v)∙(x∧y) − ((x∧y)∙u)∧v + ((x∧y)∙v)∧u + u∧v∧x∧y

We can convert the middle terms to inner (in this case scalar) products.

ToInnerProducts[(x∧y) ∘ (u∧v)]
−(u∧v)∙(x∧y) + (v∙y) u∧x − (v∙x) u∧y − (u∙y) v∧x + (u∙x) v∧y + u∧v∧x∧y

Finally we can express the Clifford number in terms only of exterior and scalar products.

ToScalarProducts[(x∧y) ∘ (u∧v)]
(u∙y)(v∙x) − (u∙x)(v∙y) + (v∙y) u∧x − (v∙x) u∧y − (u∙y) v∧x + (u∙x) v∧y + u∧v∧x∧y
The Clifford product of a simple p-element γ_p with itself is:

γ_p ∘ γ_p = Σ_{λ=0}^{p} (−1)^{λ(p−λ) + ½λ(λ−1)} γ_p Δλ γ_p

Since the only non-zero generalized product of the form γ_p Δλ γ_p is that for which λ = p, that is γ_p∙γ_p, we have:

γ_p ∘ γ_p = (−1)^{½ p(p−1)} γ_p∙γ_p

Or, alternatively:

γ_p ∘ γ_p = γ_p†∙γ_p = γ_p∙γ_p†      (12.11)
The definition of the Clifford product is

α_m ∘ β_k  =  Σ_{λ=0}^{Min[m,k]} (−1)^{λ(m−λ) + ½λ(λ−1)} α_m Δλ β_k

Alternate forms for the generalized product have been discussed in Chapter 10. The generalized product, and hence the Clifford product, may be expanded by decomposing α_m into a λ-element and an (m−λ)-element in all essentially different ways. Writing α_i,λ ∧ α_i,m−λ for the i-th such decomposition:

α_m Δλ β_k  =  Σ_{i=1}^{(m λ)} α_i,m−λ ∧ (β_k ∙ α_i,λ),    α_m = α_1,λ ∧ α_1,m−λ = α_2,λ ∧ α_2,m−λ = ⋯

Substituting this into the expression for the Clifford product gives a double sum carrying the explicit signs (−1)^{λ(m−λ) + ½λ(λ−1)}. Our objective is to rearrange the formula for the Clifford product so that the signs are absorbed into the formula, thus making the form of the formula independent of the values of m and λ. We can do this by choosing the order of the decomposition of α_m into a λ-element and an (m−λ)-element to absorb the factor (−1)^{λ(m−λ)}, and the orientation of the contracted λ-element to absorb the reversion factor (−1)^{½λ(λ−1)}. Because of the symmetry of the expression with respect to λ and (m−λ), we can write μ equal to (m−λ), arranging for the factors of grade μ to come before those of grade (m−μ); finally, because of the inherent arbitrariness of the symbol μ, we can change it back to λ to get the formula in a more accustomed form:

α_m ∘ β_k  =  Σ_{λ=0}^{Min[m,k]} Σ_{i=1}^{(m λ)} α_i,m−λ ∧ (β_k ∙ α_i,λ)      (12.12)

α_m = α_1,λ ∧ α_1,m−λ = α_2,λ ∧ α_2,m−λ = ⋯

in which the signs are understood to be absorbed into the orientations of the decompositions. The right hand side of this expression is a sum of interior products. In GrassmannAlgebra we can develop the Clifford product α_m ∘ β_k as a sum of interior products in this particular form by using ToInteriorProductsD.
Example 1

We can expand any explicit elements.

ToInteriorProductsD[(x∧y) ∘ (u∧v)]
(x∧y)∙(v∧u) + x∧((u∧v)∙y) − y∧((u∧v)∙x) + u∧v∧x∧y

Note that this is a different (although equivalent) result from that obtained using ToInteriorProducts.

ToInteriorProducts[(x∧y) ∘ (u∧v)]
−(u∧v)∙(x∧y) − ((x∧y)∙u)∧v + ((x∧y)∙v)∧u + u∧v∧x∧y

Example 2

We can also enter the elements as graded variables and have GrassmannAlgebra create the requisite products.

ToInteriorProductsD[α_2 ∘ β_2]
(β1∧β2)∙(α2∧α1) + α1∧((β1∧β2)∙α2) − α2∧((β1∧β2)∙α1) + α1∧α2∧β1∧β2

Example 3

The formula shows that its form does not depend on the grade of β_k. Thus we can still obtain the expansion when the grade is symbolic:

A = ToInteriorProductsD[α_2 ∘ β_k]
β_k∙(α2∧α1) + α1∧(β_k∙α2) − α2∧(β_k∙α1) + α1∧α2∧β_k

Example 4

Note that if k is less than the grade of the first factor, some of the interior product terms may be zero, thus simplifying the expression. For a 1-element β:

B = ToInteriorProductsD[α_2 ∘ β]
−(α1∧α2)∙β + β∧α1∧α2

Although this does not immediately look like the expression A1 (the expression A with β_k replaced by the 1-element β), we can see that it is the same by noting that the first term of A1 is zero, and expanding their difference to scalar products.

ToScalarProducts[B − A1]
0
An equivalent expansion may be obtained by reversing the order of the factors in the generalized products, and then expanding the generalized products in their B form expansion:

α_m ∘ β_k  =  Σ_{λ=0}^{Min[m,k]} (−1)^{λ(m−λ) + ½λ(λ−1) + (m−λ)(k−λ)} β_k Δλ α_m

Expanding each generalized product by decomposition of α_m, and absorbing the reversion factor (−1)^{½λ(λ−1)} as before, we write (−1)^{λ(m−λ)} α_m = α_1,λ ∧ α_1,m−λ = α_2,λ ∧ α_2,m−λ = ⋯, then interchange λ and (m−λ) as before to get finally:

α_m ∘ β_k  =  Σ_{λ=0}^{Min[m,k]} Σ_{i=1}^{(m λ)} (−1)^{λ(m−λ)} α_i,m−λ ∧ (β_k ∙ α_i,λ)      (12.13)

α_m = α_1,λ ∧ α_1,m−λ = α_2,λ ∧ α_2,m−λ = ⋯
As before, the right hand side of this expression is a sum of interior products. In GrassmannAlgebra we can develop the Clifford product α_m ∘ β_k in this form by using ToInteriorProductsC.

Alternatively, we can take the B form of the generalized product theorem (Section 10.5) and substitute directly to get either of:

α_m ∘ β_k  =  Σ_{λ=0}^{Min[m,k]} Σ_{j=1}^{(k λ)} (−1)^{λ(m−λ)} (α_m ∙ β_j,λ) ∧ β_j,k−λ      (12.14)

β_k = β_1,λ ∧ β_1,k−λ = β_2,λ ∧ β_2,k−λ = ⋯

α_m ∘ β_k  =  Σ_{λ=0}^{Min[m,k]} Σ_{j=1}^{(k λ)} (−1)^{λ(m−λ)} (α_m ∙ β_j,k−λ) ∧ β_j,λ      (12.15)

β_k = β_1,λ ∧ β_1,k−λ = β_2,λ ∧ β_2,k−λ = ⋯
Decomposing both factors gives the corresponding double-sum forms:

α_m ∘ β_k  =  Σ_{λ=0}^{Min[m,k]} Σ_{i=1}^{(m λ)} Σ_{j=1}^{(k λ)} (−1)^{λ(m−λ) + ½λ(λ−1)} (α_i,λ ∙ β_j,λ) α_i,m−λ ∧ β_j,k−λ

α_m ∘ β_k  =  Σ_{λ=0}^{Min[m,k]} Σ_{i=1}^{(m λ)} Σ_{j=1}^{(k λ)} (−1)^{λ(m−λ)} (α_i,λ ∙ β_j,λ) α_i,m−λ ∧ β_j,k−λ      (12.16)

α_m = α_1,λ ∧ α_1,m−λ = ⋯,    β_k = β_1,λ ∧ β_1,k−λ = ⋯

where in the second form the reversion sign has again been absorbed into the orientation of one of the decompositions. We can compare the expansions in both the D form and the C form. We see that all terms except the interior and exterior products at the ends of the expression appear to be different. There are of course equal numbers of terms in either form. Note that if the parentheses were not there, the expressions would be identical except for (possibly) the signs of the terms. Although these central terms differ between the two forms, we can show by reducing them to scalar products that their sums are the same.
The D form

X = ToInteriorProductsD[α_4 ∘ β_k]
β_k∙(α4∧α3∧α2∧α1) + α1∧(β_k∙(α4∧α3∧α2)) − α2∧(β_k∙(α4∧α3∧α1)) +
α3∧(β_k∙(α4∧α2∧α1)) − α4∧(β_k∙(α3∧α2∧α1)) + α1∧α2∧(β_k∙(α4∧α3)) −
α1∧α3∧(β_k∙(α4∧α2)) + α1∧α4∧(β_k∙(α3∧α2)) + α2∧α3∧(β_k∙(α4∧α1)) −
α2∧α4∧(β_k∙(α3∧α1)) + α3∧α4∧(β_k∙(α2∧α1)) + α1∧α2∧α3∧(β_k∙α4) −
α1∧α2∧α4∧(β_k∙α3) + α1∧α3∧α4∧(β_k∙α2) − α2∧α3∧α4∧(β_k∙α1) + α1∧α2∧α3∧α4∧β_k

The C form

Note that since the exterior product has higher precedence in Mathematica than the interior product, α1∧β_k∙α2 is equivalent to α1∧(β_k∙α2), and no parentheses are necessary in the terms of the C form.

ToInteriorProductsC[α_4 ∘ β_k]
β_k∙(α4∧α3∧α2∧α1) − α1∧(β_k∙(α4∧α3∧α2)) + α2∧(β_k∙(α4∧α3∧α1)) −
α3∧(β_k∙(α4∧α2∧α1)) + α4∧(β_k∙(α3∧α2∧α1)) + α1∧α2∧(β_k∙(α4∧α3)) −
α1∧α3∧(β_k∙(α4∧α2)) + α1∧α4∧(β_k∙(α3∧α2)) + α2∧α3∧(β_k∙(α4∧α1)) −
α2∧α4∧(β_k∙(α3∧α1)) + α3∧α4∧(β_k∙(α2∧α1)) − α1∧α2∧α3∧(β_k∙α4) +
α1∧α2∧α4∧(β_k∙α3) − α1∧α3∧α4∧(β_k∙α2) + α2∧α3∧α4∧(β_k∙α1) + α1∧α2∧α3∧α4∧β_k

Either form is easy to write down directly by observing simple mnemonic rules.

1. Calculate the α_i,λ
These are all the essentially different combinations of factors of all grades. The list has the same form as the basis of the Grassmann algebra whose n-element is α_m.

S = Thread[β_k ∙ R]
{β_k∙(α4∧α3∧α2∧α1), β_k∙(α4∧α3∧α2), β_k∙(−α4∧α3∧α1), β_k∙(α4∧α2∧α1), …}

5. Take the exterior product of the elements of the two lists B and S

T = Thread[B ∧ S]
{1∧(β_k∙(α4∧α3∧α2∧α1)), α1∧(β_k∙(α4∧α3∧α2)), α2∧(β_k∙(−α4∧α3∧α1)),
α3∧(β_k∙(α4∧α2∧α1)), α4∧(β_k∙(−α3∧α2∧α1)), α1∧α2∧(β_k∙(α4∧α3)),
α1∧α3∧(β_k∙(−α4∧α2)), α1∧α4∧(β_k∙(α3∧α2)), α2∧α3∧(β_k∙(α4∧α1)),
α2∧α4∧(β_k∙(−α3∧α1)), α3∧α4∧(β_k∙(α2∧α1)), α1∧α2∧α3∧(β_k∙α4),
α1∧α2∧α4∧(β_k∙(−α3)), α1∧α3∧α4∧(β_k∙α2), α2∧α3∧α4∧(β_k∙(−α1)), α1∧α2∧α3∧α4∧(β_k 1)}

6. Add the terms and simplify the result by factoring out the scalars

U = FactorScalars[Plus @@ T]
β_k∙(α4∧α3∧α2∧α1) + α1∧(β_k∙(α4∧α3∧α2)) − α2∧(β_k∙(α4∧α3∧α1)) + ⋯ + α1∧α2∧α3∧α4∧β_k

the same result as the expression X obtained earlier.
Consider now the Clifford product of two intersecting elements γ_p∧α_m and γ_p∧β_k with common factor γ_p:

(γ_p∧α_m) ∘ (γ_p∧β_k)  =  Σ_{λ} (−1)^{λ(p+m−λ) + ½λ(λ−1)} (γ_p∧α_m) Δλ (γ_p∧β_k)

Since the terms on the right hand side are zero for λ < p, we can define μ = λ − p and rewrite the right hand side as a sum over μ, where the collected sign is (−1)^w with w = m p + ½ p(p−1). Hence we can finally cast the right hand side as a Clifford product:

(γ_p∧α_m) ∘ (γ_p∧β_k)  =  (−1)^{mp + ½p(p−1)} ((γ_p∧α_m) ∘ β_k) ∙ γ_p      (12.17)

In a similar fashion we can show that the right hand side can be written in the alternative form:

(γ_p∧α_m) ∘ (γ_p∧β_k)  =  (−1)^{½p(p−1)} (α_m ∘ (γ_p∧β_k)) ∙ γ_p      (12.18)

By reversing the order of α_m and γ_p in the first factor, formula 12.17 can be written as:
2009 9 3
660
(α_m∧γ_p) ∘ (γ_p∧β_k)  =  (−1)^{½p(p−1)} ((γ_p∧α_m) ∘ β_k) ∙ γ_p      (12.19)

and formula 12.18 correspondingly as:

(α_m∧γ_p) ∘ (γ_p∧β_k)  =  (−1)^{mp + ½p(p−1)} (α_m ∘ (γ_p∧β_k)) ∙ γ_p      (12.20)

Each of these may also be written with the common factor attached to the other element of the Clifford product on the right hand side (12.21, 12.22). Comparing the resulting forms shows in particular that:

((α_m∧γ_p) ∘ β_k) ∙ γ_p  =  (α_m ∘ (γ_p∧β_k)) ∙ γ_p      (12.23)
The relations derived up to this point are completely general and apply to Clifford products in an arbitrary space with an arbitrary metric. For example they do not require any of the elements to be orthogonal.
To obtain simpler consequences, put β_k equal to unity. Remembering that the Clifford product with a scalar reduces to the ordinary product, we obtain:

(α_m∧γ_p) ∘ γ_p  =  (−1)^{mp} (γ_p∧α_m) ∘ γ_p  =  (−1)^{mp} (α_m ∘ γ_p) ∙ γ_p

Next, put α_m equal to unity, and then, to enable comparison with the previous formulae, replace β_k by α_m.
This gives the product γ_p ∘ (γ_p∧α_m) in the corresponding forms. Finally we note that, since the far right hand sides of these two equations are equal, all the expressions are equal:

(α_m∧γ_p) ∘ γ_p  =  γ_p ∘ (γ_p∧α_m),    γ_p ∘ (α_m∧γ_p)  =  (γ_p∧α_m) ∘ γ_p  =  (−1)^{mp} (α_m∧γ_p) ∘ γ_p      (12.24)
By putting α_m equal to unity in these equations we can recover relation 12.11 for the Clifford product of γ_p with itself.
Thus we see immediately from the definition of the Clifford product that the Clifford product of two totally orthogonal elements is equal to their exterior product:

α_m ∘ β_k = α_m ∧ β_k,    α_i ∙ β_j = 0      (12.25)

(here α_i and β_j denote any 1-element factors of α_m and β_k).
Next consider products in which one factor contains an element γ_p orthogonal to the other factor. Expanding α_m ∘ (γ_p∧β_k) as a sum of generalized products:

α_m ∘ (γ_p∧β_k)  =  Σ_{λ=0}^{Min[m,k+p]} (−1)^{λ(m−λ) + ½λ(λ−1)} α_m Δλ (γ_p∧β_k)

If α_m is totally orthogonal to β_k, the generalized products reduce according to

α_m Δλ (γ_p∧β_k)  =  (α_m Δλ γ_p) ∧ β_k,    α_i ∙ β_j = 0,    λ ≤ Min[m, p]

Hence:

α_m ∘ (γ_p∧β_k)  =  (α_m ∘ γ_p) ∧ β_k,    α_i ∙ β_j = 0      (12.26)
Similarly, expanding (α_m∧γ_p) ∘ β_k with equation 10.37 gives:

(α_m∧γ_p) ∘ β_k  =  Σ_{λ=0}^{Min[p,k]} (−1)^{λ(m+p−λ) + ½λ(λ−1) + mλ} (α_m∧γ_p) Δλ β_k

which, when α_m is totally orthogonal to β_k, reduces to:

(α_m∧γ_p) ∘ β_k  =  α_m ∧ (γ_p ∘ β_k),    α_i ∙ β_j = 0      (12.27)
These relations can be verified for specific grades in GrassmannAlgebra by expanding both sides to scalar products under orthogonality assumptions, for example:

Flatten[Table[ToScalarProducts[α_m ∘ (γ_p∧β_k) − (α_m ∘ γ_p) ∧ β_k] /. OrthogonalSimplificationRules[{{α_m, β_k}}], …]]

which returns lists of zeros. Recall also from the intersecting case that:

(α_m∧γ_p) ∘ (γ_p∧β_k)  =  (−1)^{mp} ((α_m∧γ_p) ∘ β_k) ∙ γ_p  =  (−1)^{mp} (α_m ∘ (γ_p∧β_k)) ∙ γ_p
But we have also shown in Section 12.8 that if α_m and β_k are totally orthogonal, then:

α_m ∘ (γ_p∧β_k) = (α_m ∘ γ_p) ∧ β_k,    (α_m∧γ_p) ∘ β_k = α_m ∧ (γ_p ∘ β_k)

Hence the right hand sides of the first equations may be written, in the orthogonal case, as:

(α_m∧γ_p) ∘ (γ_p∧β_k)  =  (−1)^{mp} (α_m ∧ (γ_p ∘ β_k)) ∙ γ_p,    α_i ∙ β_j = 0      (12.28)

(α_m∧γ_p) ∘ (γ_p∧β_k)  =  (−1)^{mp} ((α_m ∘ γ_p) ∧ β_k) ∙ γ_p,    α_i ∙ β_j = 0      (12.29)
Orthogonal intersection

Consider the case of three simple elements α_m, β_k and γ_p where γ_p is totally orthogonal to both α_m and β_k (and hence to α_m∧β_k). A simple element γ_p is totally orthogonal to an element α_m if and only if α_m ∙ γ_i = 0 for every 1-element factor γ_i of γ_p. Here, we do not assume that α_m and β_k are orthogonal.
As before we consider the Clifford product of intersecting elements, but because of the orthogonality relationships, we can replace the generalized products on the right hand side by the right hand side of equation 10.39:

(γ_p∧α_m) ∘ (γ_p∧β_k)  =  Σ_{λ=0}^{Min[m,k]+p} (−1)^{λ(p+m−λ) + ½λ(λ−1)} (γ_p∧α_m) Δλ (γ_p∧β_k)
                        =  Σ_{λ=p}^{Min[m,k]+p} (−1)^{λ(p+m−λ) + ½λ(λ−1)} (γ_p∙γ_p) (α_m Δ_{λ−p} β_k)

Setting μ = λ − p and extracting the common sign:

(γ_p∧α_m) ∘ (γ_p∧β_k)  =  (−1)^{mp + ½p(p−1)} (γ_p∙γ_p) Σ_{μ=0}^{Min[m,k]} (−1)^{μ(m−μ) + ½μ(μ−1)} α_m Δμ β_k

so that finally:

(γ_p∧α_m) ∘ (γ_p∧β_k)  =  (−1)^{mp + ½p(p−1)} (γ_p∙γ_p) (α_m ∘ β_k)

(α_m∧γ_p) ∘ (γ_p∧β_k)  =  (γ_p ∘ γ_p) (α_m ∘ β_k)  =  (−1)^{½p(p−1)} (γ_p∙γ_p) (α_m ∘ β_k),    α_m∙γ_i = β_k∙γ_i = 0      (12.30)
Note carefully that, for this formula to be true, α_m and β_k need not be orthogonal to each other. However, if they are orthogonal, we can express the Clifford product on the right hand side as an exterior product.
(α_m∧γ_p) ∘ (γ_p∧β_k)  =  (−1)^{½p(p−1)} (γ_p∙γ_p) (α_m∧β_k),    α_i∙β_j = α_i∙γ_s = β_j∙γ_s = 0      (12.31)
Equations 12.32 to 12.35 record the corresponding special cases for products of γ_p with itself and with its reverse, each reducing to a scalar multiple involving γ_p∙γ_p: in particular γ_p ∘ γ_p = γ_p†∙γ_p = γ_p∙γ_p† (12.32), with the companion relations for γ_p† ∘ γ_p, for triple products of γ_p, and for iterated products (12.33–12.35).
Summary. For elements α_m and β_k each totally orthogonal to the common factor γ_p, the relations derived above specialize to:

(α_m∧γ_p) ∘ γ_p = γ_p ∘ (γ_p∧α_m),    γ_p ∘ (α_m∧γ_p) = (γ_p∧α_m) ∘ γ_p = (−1)^{mp} (α_m∧γ_p) ∘ γ_p      (12.36)

(α_m∧γ_p) ∘ (γ_p∧β_k) = (−1)^{mp} ((α_m∧γ_p) ∘ β_k) ∙ γ_p      (12.37)

(α_m∧γ_p) ∘ (γ_p∧β_k) = (−1)^{mp} (α_m ∘ (γ_p∧β_k)) ∙ γ_p      (12.38)

α_m ∘ (γ_p∧β_k) = (α_m ∘ γ_p) ∧ β_k,    α_i∙β_j = 0      (12.39)

(α_m∧γ_p) ∘ β_k = α_m ∧ (γ_p ∘ β_k),    α_i∙β_j = 0      (12.40)

(α_m∧γ_p) ∘ (γ_p∧β_k) = (−1)^{mp} (α_m ∧ (γ_p ∘ β_k)) ∙ γ_p,    α_i∙β_j = 0      (12.41)

(α_m∧γ_p) ∘ (γ_p∧β_k) = (−1)^{mp} ((α_m ∘ γ_p) ∧ β_k) ∙ γ_p,    α_i∙β_j = 0      (12.42)

(α_m∧γ_p) ∘ (γ_p∧β_k) = (−1)^{½p(p−1)} (γ_p∙γ_p) (α_m∧β_k),    α_i∙γ_s = β_j∙γ_s = 0      (12.43)
Orthogonal elements

If α_m and β_k are totally orthogonal:

α_m ∘ β_k = α_m ∧ β_k,    α_i ∙ β_j = 0      (12.44)

(α_m∧γ_p) ∘ (γ_p∧β_k) = (γ_p ∘ γ_p) (α_m∧β_k),    α_i∙β_j = α_i∙γ_s = β_j∙γ_s = 0      (12.45)

The Clifford product of totally orthogonal elements reduces to the exterior product. If we know that all the factors of a Clifford product are totally orthogonal, then we can interchange the Clifford product and the exterior product at will. Hence, for totally orthogonal elements, the Clifford and exterior products are associative, and we do not need to include parentheses:

α_m ∘ β_k ∘ γ_p = α_m ∧ β_k ∧ γ_p,    α_i∙β_j = α_i∙γ_s = β_j∙γ_s = 0      (12.46)
Note carefully, however, that this associativity does not extend to the 1-element factors of the m-, k-, or p-elements unless the factors of the element concerned are mutually orthogonal, in which case we could for example then write:

a1 ∘ a2 ∘ ⋯ ∘ am = a1 ∧ a2 ∧ ⋯ ∧ am,    a_i ∙ a_j = 0, i ≠ j      (12.47)
Example

For example if x∧y and z are totally orthogonal, that is x∙z = 0 and y∙z = 0, then we can write

(x∧y) ∘ z = x∧y∧z = x∧(y∘z) = −y∧(x∘z)

But since x is not necessarily orthogonal to y, these are not the same as

x∘y∘z = (x∘y)∘z = x∘(y∘z)

We can see the difference by reducing the Clifford products to scalar products.

ToScalarProducts[{(x∧y)∘z, x∧y∧z, x∧(y∘z), −y∧(x∘z)}] /. OrthogonalSimplificationRules[{{x∧y, z}}]
{x∧y∧z, x∧y∧z, x∧y∧z, x∧y∧z}

ToScalarProducts[{x∘y∘z, (x∘y)∘z, x∘(y∘z)}] /. OrthogonalSimplificationRules[{{x∧y, z}}]
{z (x∙y) + x∧y∧z, z (x∙y) + x∧y∧z, z (x∙y) + x∧y∧z}
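The same distinction shows up numerically in the bitmask blade sketch used earlier (repeated for self-containment; my own illustration): with z orthogonal to x and y, (x∧y)∘z collapses to x∧y∧z, but x∘y∘z retains the z(x∙y) term.

```python
def bmul(a, b, g):
    s, t = 0, a >> 1
    while t:
        s += bin(t & b).count("1"); t >>= 1
    sign = -1 if s & 1 else 1
    for i, gi in enumerate(g):
        if (a >> i) & (b >> i) & 1:
            sign *= gi
    return sign, a ^ b

def mul(x, y, g):
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, b = bmul(ba, bb, g)
            out[b] = out.get(b, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c}

def vec(c): return {1 << i: ci for i, ci in enumerate(c) if ci}

g, w = [1, 1, 1], [0, 0, 0]
x, y, z = vec([1, 2, 0]), vec([3, -1, 0]), vec([0, 0, 2])   # z orthogonal to x, y
xy = mul(x, y, w)                                            # the 2-element x^y
assert mul(xy, z, g) == mul(xy, z, w)        # (x^y) o z = x^y^z
xyz = mul(mul(x, y, g), z, g)                # but x o y o z = z(x.y) + x^y^z
# here x.y = 1, so xyz = {e3: 2, e123: -14} while x^y^z = {e123: -14}
```

The extra grade-1 component is exactly z (x∙y), since x is not orthogonal to y.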
Recall:

(α_m∧γ_p) ∘ (γ_p∧β_k)  =  (−1)^{½p(p−1)} (γ_p∙γ_p) (α_m∧β_k),    α_i∙β_j = α_i∙γ_s = β_j∙γ_s = 0

Note that it is not necessary that the factors of α_m be orthogonal to each other. This is true also for the factors of β_k or of γ_p.
We begin by writing a Clifford product of three elements, and associating the first pair. The elements contain factors which are specific to each element (α_m, β_k, ω_r), pairwise common (γ_p, δ_q, ε_s), and common to all three (θ_z). This product will therefore represent the most general case. Expanding the first pair using the results above extracts the scalar factors (θ_z∙θ_z)(γ_p∙γ_p) (each inner product taken with one copy reversed); expanding the remaining Clifford product then extracts (ε_s∙ε_s)(δ_q∙δ_q), leaving, apart from these scalar factors and the sign (−1)^{sk} arising from the rearrangement of the exterior factors, the exterior product α_m∧β_k∧ω_r∧θ_z.

On the other hand we can associate the second pair to obtain the same result: the same scalar factors (θ_z∙θ_z)(γ_p∙γ_p)(ε_s∙ε_s)(δ_q∙δ_q), the same sign, and the same exterior product α_m∧β_k∧ω_r∧θ_z. The two associations therefore agree.
In sum: We have shown that the Clifford product of possibly intersecting but otherwise totally orthogonal elements is associative. Any factors which make up a given individual element are not specifically involved, and hence it is of no consequence to the associativity of the element with another whether or not these factors are mutually orthogonal.
Equation 12.48 records this common value: the product of the inner products of the common factors (each taken with one copy reversed), the sign (−1)^{sk}, and the exterior product α_m∧β_k∧ω_r∧θ_z.      (12.48)
A mnemonic for making this transformation is then:

1. Rearrange the factors in a Clifford product to get common factors adjacent to the Clifford product symbol, taking care to include any change of sign due to the quasi-commutativity of the exterior product.
2. Replace the common factors by their inner product, but with one copy being reversed.
3. If there are no common factors in a Clifford product, the Clifford product can be replaced by the exterior product.

Remember that for these relations to hold, all the elements must be totally orthogonal to each other. Note that if, in addition, the 1-element factors of any of these elements, γ_p say, are orthogonal to each other, then …
It has been shown in Section 6.3 that an arbitrary simple m-element may be expressed in terms of m orthogonal 1-element factors (the Gram-Schmidt orthogonalization process). Suppose that n such orthogonal 1-elements e1, e2, …, en have been found in terms of which α_m, β_k, and γ_p can be expressed. Writing the m-elements, k-elements and p-elements formed from the ei as e_j (of grade m), e_r (of grade k), and e_s (of grade p), we have:

α_m = Σ_j a_j e_j,    β_k = Σ_r b_r e_r,    γ_p = Σ_s c_s e_s

so that, for example:

(α_m ∘ β_k) ∘ γ_p = Σ_j Σ_r Σ_s a_j b_r c_s (e_j ∘ e_r) ∘ e_s

But we have already shown in the previous section that the Clifford product of orthogonal elements is associative. That is:

(e_j ∘ e_r) ∘ e_s = e_j ∘ (e_r ∘ e_s) = e_j ∘ e_r ∘ e_s

Hence:

(α_m ∘ β_k) ∘ γ_p = α_m ∘ (β_k ∘ γ_p) = α_m ∘ β_k ∘ γ_p      (12.49)
The associativity of the Clifford product is usually taken as an axiom. However, in this book we have chosen to show that associativity is a consequence of our definition of the Clifford product in terms of exterior and interior products. In this way we can ensure that the Grassmann and Clifford algebras are consistent.
Y = CreateElement[y_k]; Z = CreateElement[z_p];

Or, we can tabulate a number of results together. For example, elements of all grades in all spaces of dimension 0, 1, 2, and 3:

Table[CliffordAssociativityTest[n][m, k, p], {n, 0, 3}, {m, 0, n}, {k, 0, n}, {p, 0, m}]
{0, 0, 0, …, 0}

All the results are zero, confirming associativity in each case.
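An analogous brute-force associativity test can be run on the bitmask blade sketch used earlier (repeated for self-containment; my own illustration, not the CliffordAssociativityTest function), here over random multivectors in a space with a mixed-signature diagonal metric.

```python
import random

def bmul(a, b, g):
    s, t = 0, a >> 1
    while t:
        s += bin(t & b).count("1"); t >>= 1
    sign = -1 if s & 1 else 1
    for i, gi in enumerate(g):
        if (a >> i) & (b >> i) & 1:
            sign *= gi
    return sign, a ^ b

def mul(x, y, g):
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, b = bmul(ba, bb, g)
            out[b] = out.get(b, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c}

random.seed(0)
g = [1, -1, 1]                              # an arbitrary diagonal metric

def rand_mv():
    # random multivector over all 8 basis blades of the 3-space
    return {b: random.randint(-5, 5) for b in range(8)}

for _ in range(20):
    X, Y, Z = rand_mv(), rand_mv(), rand_mv()
    assert mul(mul(X, Y, g), Z, g) == mul(X, mul(Y, Z, g), g)
```

Integer coefficients make the comparison exact, so the loop is a genuine (if finite) associativity check.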
A Clifford product α_m ∘ β_k splits naturally into the sum of its even and odd generalized-product components, (α_m ∘ β_k)_e and (α_m ∘ β_k)_o. That is:

α_m ∘ β_k = (α_m ∘ β_k)_e + (α_m ∘ β_k)_o      (12.50)

From the definition

α_m ∘ β_k  =  Σ_{λ=0}^{Min[m,k]} (−1)^{λ(m−λ) + ½λ(λ−1)} α_m Δλ β_k

the sum of the even generalized products is:

(α_m ∘ β_k)_e  =  Σ_{λ=0,2,4,…} (−1)^{λ/2} α_m Δλ β_k      (12.51)

(Here, the limit of the sum is understood to mean the greatest even integer less than or equal to Min[m,k].) The sum of the odd generalized products is:

(α_m ∘ β_k)_o  =  (−1)^m Σ_{λ=1,3,5,…} (−1)^{(λ+1)/2} α_m Δλ β_k      (12.52)

(In this case, the limit of the sum is understood to mean the greatest odd integer less than or equal to Min[m,k].) Note carefully that the evenness and oddness to which we refer is in the order of the generalized product, not in the grade of the Clifford product.
Reversing the factors in the even part, and using the quasi-commutativity of the generalized product:

(β_k ∘ α_m)_e  =  Σ_{λ=0,2,4,…} (−1)^{λ/2} β_k Δλ α_m  =  Σ_{λ=0,2,4,…} (−1)^{λ/2 + (m−λ)(k−λ)} α_m Δλ β_k

For even λ, (−1)^{(m−λ)(k−λ)} = (−1)^{mk}, so:

(β_k ∘ α_m)_e  =  (−1)^{mk} Σ_{λ=0,2,4,…} (−1)^{λ/2} α_m Δλ β_k      (12.53)

Hence, the even terms of both products are equal, except when both factors of the product are of odd grade:

(β_k ∘ α_m)_e  =  (−1)^{mk} (α_m ∘ β_k)_e      (12.54)
Similarly for the odd part:

(β_k ∘ α_m)_o  =  (−1)^k Σ_{λ=1,3,5,…} (−1)^{(λ+1)/2 + (m−λ)(k−λ)} α_m Δλ β_k

For odd λ, (−1)^{(m−λ)(k−λ)} = (−1)^{mk+m+k+1}, so:

(β_k ∘ α_m)_o  =  −(−1)^{mk} (−1)^m Σ_{λ=1,3,5,…} (−1)^{(λ+1)/2} α_m Δλ β_k      (12.55)

whence:

(β_k ∘ α_m)_o  =  −(−1)^{mk} (α_m ∘ β_k)_o      (12.56)
Collecting these results:

α_m ∘ β_k = (α_m ∘ β_k)_e + (α_m ∘ β_k)_o      (12.57)

(−1)^{mk} β_k ∘ α_m = (α_m ∘ β_k)_e − (α_m ∘ β_k)_o      (12.58)

Adding and subtracting gives the even and odd components in terms of Clifford products:

(α_m ∘ β_k)_e = ½ (α_m ∘ β_k + (−1)^{mk} β_k ∘ α_m)      (12.59)

(α_m ∘ β_k)_o = ½ (α_m ∘ β_k − (−1)^{mk} β_k ∘ α_m)      (12.60)

For a 1-element β these become:

(α_m ∘ β)_e = ½ (α_m ∘ β + (−1)^m β ∘ α_m)      (12.61)

(α_m ∘ β)_o = ½ (α_m ∘ β − (−1)^m β ∘ α_m) = −(−1)^m α_m ∙ β      (12.62)

For a 2-element β_2:

(α_m ∘ β_2)_e = ½ (α_m ∘ β_2 + β_2 ∘ α_m) = α_m∧β_2 − α_m∙β_2      (12.63)

(α_m ∘ β_2)_o = ½ (α_m ∘ β_2 − β_2 ∘ α_m) = −(−1)^m α_m Δ1 β_2      (12.64)
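Formulas 12.59 and 12.60 can be probed numerically with the bitmask blade sketch used earlier (repeated for self-containment; my own illustration). For two 2-elements, (−1)^{mk} = +1, and the even part should contain only the grade-0 and grade-4 components (orders λ = 0, 2) while the odd part is the single grade-2 component (order λ = 1).

```python
def bmul(a, b, g):
    s, t = 0, a >> 1
    while t:
        s += bin(t & b).count("1"); t >>= 1
    sign = -1 if s & 1 else 1
    for i, gi in enumerate(g):
        if (a >> i) & (b >> i) & 1:
            sign *= gi
    return sign, a ^ b

def mul(x, y, g):
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, b = bmul(ba, bb, g)
            out[b] = out.get(b, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c}

def vec(c): return {1 << i: ci for i, ci in enumerate(c) if ci}

g, w = [1, 1, 1, 1], [0, 0, 0, 0]
A = mul(vec([1, 2, 0, -1]), vec([0, 1, 3, 2]), w)    # a simple 2-element
B = mul(vec([2, 0, 1, 1]), vec([-1, 1, 0, 3]), w)    # another simple 2-element
AB, BA = mul(A, B, g), mul(B, A, g)
keys = set(AB) | set(BA)
even = {b: c for b in keys if (c := (AB.get(b, 0) + BA.get(b, 0)) / 2)}
odd  = {b: c for b in keys if (c := (AB.get(b, 0) - BA.get(b, 0)) / 2)}
assert all(bin(b).count("1") in (0, 4) for b in even)   # orders 0 and 2
assert all(bin(b).count("1") == 2 for b in odd)         # order 1
```

The two halves also reassemble to the full product, mirroring equation 12.57.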
Real algebra
All the products that we have developed up to this stage, the exterior, interior, generalized Grassmann and Clifford products possess the valuable and consistent property that when applied to scalars yield results equivalent to the usual (underlying field) product. The field has been identified with a space of zero dimensions, and the scalars identified with its elements. Thus, if a and b are scalars, then:
a ∘ b = a Δ0 b = a∧b = a∙b = a b      (12.65)
Thus the real algebra is isomorphic with the Clifford algebra of a space of zero dimensions.
There are only four possible Clifford products of these basis elements. We can construct a table of these products by using the GrassmannAlgebra function CliffordProductTable.

CliffordProductTable[]
{{1∘1, 1∘e1}, {e1∘1, e1∘e1}}

Usually however, to make the products easier to read and use, we will display them in the form of a palette using the GrassmannAlgebra function PaletteForm. (We can click on the palette to enter any of its expressions into the notebook.)

C1 = CliffordProductTable[]; PaletteForm[C1]

1     e1
e1    e1∘e1

In the general case any Clifford product may be expressed in terms of exterior and interior products. We can see this by applying ToInteriorProducts to the table (although only interior, here scalar, products result from this simple case).

C2 = ToInteriorProducts[C1]; PaletteForm[C2]

1     e1
e1    e1∙e1

Different Clifford algebras may be generated depending on the metric chosen for the space. In this example we can see that the types of Clifford algebra which we can generate in a 1-space are dependent only on the choice of a single scalar value for the scalar product e1∙e1. The Clifford product of two general elements of the algebra is

ToScalarProducts[(a + b e1) ∘ (c + d e1)]
a c + b d (e1∙e1) + b c e1 + a d e1

It is clear to see that if we choose e1∙e1 = −1, we have an algebra isomorphic to the complex algebra. The basis 1-element e1 then plays the role of the imaginary unit 𝕚. We can generate this particular algebra immediately by declaring the metric, and then generating the product table.
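The isomorphism with the complex numbers can be demonstrated directly with the bitmask blade sketch used earlier (repeated for self-containment; my own illustration): with the metric e1∙e1 = −1, multiplying elements a + b e1 reproduces complex multiplication exactly.

```python
def bmul(a, b, g):
    s, t = 0, a >> 1
    while t:
        s += bin(t & b).count("1"); t >>= 1
    sign = -1 if s & 1 else 1
    for i, gi in enumerate(g):
        if (a >> i) & (b >> i) & 1:
            sign *= gi
    return sign, a ^ b

def mul(x, y, g):
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, b = bmul(ba, bb, g)
            out[b] = out.get(b, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c}

g = [-1]                                   # the 1-space metric e1.e1 = -1

def cnum(z):
    # embed a complex number a + b i as the multivector a + b e1
    return {b: v for b, v in ((0, z.real), (1, z.imag)) if v}

z, w = complex(2, 3), complex(-1, 4)
assert mul(cnum(z), cnum(w), g) == cnum(z * w)    # (2+3i)(-1+4i) = -14+5i
```

The Clifford product and the complex product agree for every pair of elements, since both are bilinear and agree on the basis {1, e1}.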
1     e1
e1    −1
However, our main purpose in discussing this very simple example in so much detail is to emphasize that even in this case, there are an infinite number of Clifford algebras on a 1-space depending on the choice of the scalar value for e1 e1 . The complex algebra, although it has surely proven itself to be the most useful, is just one among many. Finally, we note that all Clifford algebras possess the real algebra as their simplest even subalgebra.
In 2-space the basis of the full Grassmann algebra is {1, e1, e2, e1∧e2}, and the Clifford product table is 4 by 4. Expanding the products of the basis elements in terms of generalized products gives, for example:

1∘1 = 1,    1∘e1 = e1,    1∘e2 = e2
e1∘e1 = e1 Δ0 e1 + e1 Δ1 e1,    e1∘e2 = e1 Δ0 e2 + e1 Δ1 e2
e2∘e1 = e2 Δ0 e1 + e2 Δ1 e1,    e2∘e2 = e2 Δ0 e2 + e2 Δ1 e2
The next level of expression is obtained by applying ToInteriorProducts to reduce the generalized products to exterior and interior products.

C3 = ToInteriorProducts[C2]; PaletteForm[C3]

The resulting palette has entries such as

e1∘e1 = e1∙e1,    e1∘e2 = e1∧e2 + e1∙e2,    e2∘e1 = −(e1∧e2) + e1∙e2,
e1∘(e1∧e2) = (e1∙e1) e2 − (e1∙e2) e1,    (e1∧e2)∘(e1∧e2) = (e1∙e2)² − (e1∙e1)(e2∙e2),

each Clifford product now written in terms of exterior and interior products only.
In the rest of what follows, however, we will restrict ourselves to metrics in which e1∙e1 = ±1 and e2∙e2 = ±1, whence there are three cases of interest:

{e1∙e1 → +1, e2∙e2 → +1},    {e1∙e1 → +1, e2∙e2 → −1},    {e1∙e1 → −1, e2∙e2 → −1}

As observed previously, the case {e1∙e1 → −1, e2∙e2 → +1} is isomorphic to the case {e1∙e1 → +1, e2∙e2 → −1}, so we do not need to consider it.
For the Euclidean metric {+1, +1} the product table becomes:

∘        1        e1          e2          e1∧e2
1        1        e1          e2          e1∧e2
e1       e1       1           e1∧e2       e2
e2       e2       −(e1∧e2)    1           −e1
e1∧e2    e1∧e2    −e2         e1          −1
Inspecting this table for interesting structures or substructures, we note first that the even subalgebra (that is, the algebra based on products of the even basis elements) is isomorphic to the complex algebra. For our own explorations we can use the palette to construct a product table for the subalgebra, or we can create a table using the GrassmannAlgebra function TableTemplate, and edit it by deleting the middle rows and columns.
TableTemplate[C6]

∘        1        e1          e2          e1∧e2
1        1        e1          e2          e1∧e2
e1       e1       1           e1∧e2       e2
e2       e2       −(e1∧e2)    1           −e1
e1∧e2    e1∧e2    −e2         e1          −1
If we want to create a palette for the subalgebra, we have to edit the normal list (matrix) form and then apply PaletteForm. For even subalgebras we can also apply the GrassmannAlgebra function EvenCliffordProductTable which creates a Clifford product table from just the basis elements of even grade. We then set the metric we want, convert the Clifford product to scalar products, evaluate the scalar products according to the chosen metric, and display the resulting table as a palette.
2009 9 3
If we want to create a palette for the subalgebra, we have to edit the normal list Algebra (matrix) form and Grassmann Book.nb 681 then apply PaletteForm. For even subalgebras we can also apply the GrassmannAlgebra function EvenCliffordProductTable which creates a Clifford product table from just the basis elements of even grade. We then set the metric we want, convert the Clifford product to scalar products, evaluate the scalar products according to the chosen metric, and display the resulting table as a palette.
C7 = EvenCliffordProductTable[]; DeclareMetric[{{1, 0}, {0, 1}}];
PaletteForm[ToMetricForm[ToScalarProducts[C7]]]
          1        e1∧e2
1         1        e1∧e2
e1∧e2     e1∧e2    -1
Here we see that the basis element e1∧e2 has the property that its (Clifford) square is -1. We can see how this arises by carrying out the elementary operations on the product. Note that e1⋅e1 == e2⋅e2 == 1, since we have assumed the 2-space under consideration is Euclidean.
(e1∧e2) ∘ (e1∧e2) == - (e2∘e1) ∘ (e1∘e2) == - (e1⋅e1) (e2⋅e2) == -1
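The isomorphism with the complex algebra can be checked mechanically. The following Python sketch (our own illustration, not part of the original text) stores an element a + b e1∧e2 of the even subalgebra as the pair (a, b), and multiplies pairs using only the rule (e1∧e2)∘(e1∧e2) == -1 just derived:

```python
# Even-subalgebra element a + b (e1^e2) stored as the pair (a, b).
# The single product rule needed is (e1^e2)∘(e1^e2) == -1.
def even_mul(p, q):
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

# This is exactly the complex multiplication rule (a + b i)(c + d i):
w = even_mul((2, 3), (-1, 4))
z = complex(2, 3) * complex(-1, 4)
print(w, z)   # (-14, 5) and (-14+5j)
```

Identifying e1∧e2 with the imaginary unit therefore identifies the even subalgebra with the complex numbers.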
Thus the pair {1, e1∧e2} generates an algebra under the Clifford product operation, isomorphic to the complex algebra. It is also the basis of the even Grassmann algebra of the 2-space.
For the metric {e1⋅e1 → 1, e2⋅e2 → -1} the product table is

          1        e1          e2         e1∧e2
1         1        e1          e2         e1∧e2
e1        e1       1           -(e1∧e2)   -e2
e2        e2       e1∧e2       -1         -e1
e1∧e2     e1∧e2    e2          e1         1
and for the metric {e1⋅e1 → -1, e2⋅e2 → -1}:

          1        e1          e2         e1∧e2
1         1        e1          e2         e1∧e2
e1        e1       -1          -(e1∧e2)   e2
e2        e2       e1∧e2       -1         -e1
e1∧e2     e1∧e2    -e2         e1         -1
We can see this isomorphism more clearly by substituting the usual quaternion symbols (here we use Mathematica's double-struck symbols) and choosing the correspondence {e1 → 𝕚, e2 → 𝕛, e1∧e2 → 𝕜}.
C9 = C8 /. {e1 → 𝕚, e2 → 𝕛, e1∧e2 → 𝕜}; PaletteForm[C9]
          1     𝕚      𝕛      𝕜
1         1     𝕚      𝕛      𝕜
𝕚         𝕚     -1     -𝕜     𝕛
𝕛         𝕛     𝕜      -1     -𝕚
𝕜         𝕜     -𝕛     𝕚      -1
Having verified that the structure is indeed quaternionic, let us return to the original specification in terms of the basis of the Grassmann algebra. A quaternion can be written in terms of these basis elements as:
Q == a + b e1 + c e2 + d e1∧e2 == (a + b e1) + (c + d e1) ∧ e2
Now, because e1 and e2 are orthogonal, the exterior product e1∧e2 is equal to the Clifford product e1∘e2. But for any further calculations we will need to use the Clifford product form. Hence we write
Q == a + b e1 + c e2 + d e1∘e2 == (a + b e1) + (c + d e1) ∘ e2        12.66
Hence under one interpretation, each of e1 and e2 and their Clifford product e1∘e2 behaves as a different imaginary unit. Under the second interpretation, a quaternion is a complex number with imaginary unit e2, whose components are complex numbers based on e1 as the imaginary unit.
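These statements can be checked with a small computation. The sketch below (ours; not package code) implements the Clifford product of basis blades over two orthogonal generators with e1⋅e1 == e2⋅e2 == -1, coding a blade as a bitmask (0b01 for e1, 0b10 for e2, 0b11 for e1∘e2):

```python
# Clifford product of basis blades for the metric {e1.e1 -> -1, e2.e2 -> -1}.
METRIC = (-1, -1)

def blade_mul(a, b):
    # Sign from reordering: count the generator swaps needed to interleave b into a.
    s, t = 0, a >> 1
    while t:
        s += bin(t & b).count("1")
        t >>= 1
    sign = -1 if s % 2 else 1
    # Contract repeated generators using the metric.
    for i, gi in enumerate(METRIC):
        if (a & b) >> i & 1:
            sign *= gi
    return a ^ b, sign        # (resulting blade, scalar factor)

i, j = 0b01, 0b10             # e1 and e2
k, sk = blade_mul(i, j)       # e1∘e2 = e1^e2, with sign +1
print(blade_mul(i, i), blade_mul(j, j), blade_mul(k, k))
# each gives (0, -1): e1, e2 and e1∘e2 all square to -1
```

The three units multiply exactly as Hamilton's 𝕚, 𝕛 and 𝕜: for instance blade_mul(j, i) returns the blade k with sign -1, mirroring 𝕛∘𝕚 == -𝕜.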
Since the basis elements are orthogonal, we can use the GrassmannAlgebra function CliffordToOrthogonalScalarProducts for computing the Clifford products. This function is faster for larger calculations than the more general ToScalarProducts used in the previous examples.
C2 = ToMetricForm[CliffordToOrthogonalScalarProducts[C1]]; PaletteForm[C2]
           1          e1          e2            e3          e1∧e2         e1∧e3         e2∧e3        e1∧e2∧e3
1          1          e1          e2            e3          e1∧e2         e1∧e3         e2∧e3        e1∧e2∧e3
e1         e1         1           -(e1∧e2)      -(e1∧e3)    -e2           -e3           e1∧e2∧e3     e2∧e3
e2         e2         e1∧e2       1             -(e2∧e3)    e1            -(e1∧e2∧e3)   -e3          -(e1∧e3)
e3         e3         e1∧e3       e2∧e3         1           e1∧e2∧e3      e1            e2           e1∧e2
e1∧e2      e1∧e2      e2          -e1           e1∧e2∧e3    -1            e2∧e3         -(e1∧e3)     -e3
e1∧e3      e1∧e3      e3          -(e1∧e2∧e3)   -e1         -(e2∧e3)      -1            e1∧e2        e2
e2∧e3      e2∧e3      e1∧e2∧e3    e3            -e2         e1∧e3         -(e1∧e2)      -1           -e1
e1∧e2∧e3   e1∧e2∧e3   e2∧e3       -(e1∧e3)      e1∧e2       -e3           e2            -e1          -1
We can show that this Clifford algebra of Euclidean 3-space is isomorphic to the Pauli algebra. Pauli's representation of the algebra was by means of 2!2 matrices over the complex field:
σ1 == ( 0  1 )        σ2 == ( 0  -𝕚 )        σ3 == ( 1   0 )
      ( 1  0 )              ( 𝕚   0 )              ( 0  -1 )
To show this we can:
1) Replace the symbols for the basis elements of the Grassmann algebra in the table above by symbols for the Pauli matrices, and replace the exterior product operation by the matrix product operation.
2) Replace the symbols by the actual matrices, allowing Mathematica to perform the matrix products.
3) Replace the resulting Pauli matrices with the corresponding basis elements of the Grassmann algebra.
4) Verify that the resulting table is the same as the original table.
We perform these steps in sequence. Because of the size of the output, only the electronic version will show the complete tables.
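The matrix relations being relied on here are easy to confirm directly. A small Python check (our own sketch) of the Pauli products, using plain nested lists for the 2×2 complex matrices:

```python
# Pauli matrices as plain 2x2 complex matrices.
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]

def mul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def smul(c, A):
    return [[c * x for x in row] for row in A]

print(mul(s1, s1) == I2, mul(s2, s2) == I2, mul(s3, s3) == I2)  # squares are I
print(mul(s1, s2) == smul(1j, s3))     # s1.s2 == i s3
print(mul(s2, s1) == smul(-1j, s3))    # s2.s1 == -i s3: distinct sigmas anticommute
```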
(Steps 1 and 2 rewrite the table first in terms of the symbols σ1, σ2, σ3 and the matrix product, with entries such as σ2.σ1 and σ3.σ1, and then in terms of the explicit matrices; only the electronic version shows these tables complete.)
C4 = C3 /. {1 → {{1, 0}, {0, 1}}, σ1 → {{0, 1}, {1, 0}},
    σ2 → {{0, -𝕚}, {𝕚, 0}}, σ3 → {{1, 0}, {0, -1}}};
Step 3:
Here we let the first row (column) of the product table correspond back to the basis elements of the Grassmann representation, and make the corresponding substitutions throughout the whole table.
C5 = C4 /. Thread[First[C4] → BasisL[]] /. Thread[-First[C4] → -BasisL[]]
{{1, e1, e2, e3, e1∧e2, e1∧e3, e2∧e3, e1∧e2∧e3},
 {e1, 1, e1∧e2, e1∧e3, e2, e3, e1∧e2∧e3, e2∧e3},
 {e2, -(e1∧e2), 1, e2∧e3, -e1, -(e1∧e2∧e3), e3, -(e1∧e3)},
 {e3, -(e1∧e3), -(e2∧e3), 1, e1∧e2∧e3, -e1, -e2, e1∧e2},
 {e1∧e2, -e2, e1, e1∧e2∧e3, -1, -(e2∧e3), e1∧e3, -e3},
 {e1∧e3, -e3, -(e1∧e2∧e3), e1, e2∧e3, -1, -(e1∧e2), e2},
 {e2∧e3, e1∧e2∧e3, -e3, e2, -(e1∧e3), e1∧e2, -1, -e1},
 {e1∧e2∧e3, e2∧e3, -(e1∧e3), e1∧e2, -e3, e2, -e1, -1}}
Step 4: Verification
Verify that this is the table with which we began.
C5 == C2
True
          1        e1∧e2       e1∧e3        e2∧e3
1         1        e1∧e2       e1∧e3        e2∧e3
e1∧e2     e1∧e2    -1          e2∧e3        -(e1∧e3)
e1∧e3     e1∧e3    -(e2∧e3)    -1           e1∧e2
e2∧e3     e2∧e3    e1∧e3       -(e1∧e2)     -1
From this multiplication table we can see that the even subalgebra of the Clifford algebra of 3-space is isomorphic to the quaternions. To see the isomorphism more clearly, replace the bivectors by 𝕚, 𝕛, and 𝕜.
C5 = C4 /. {e2∧e3 → 𝕚, e1∧e3 → 𝕛, e1∧e2 → 𝕜}; PaletteForm[C5]
          1     𝕜      𝕛      𝕚
1         1     𝕜      𝕛      𝕚
𝕜         𝕜     -1     𝕚      -𝕛
𝕛         𝕛     -𝕚     -1     𝕜
𝕚         𝕚     𝕛      -𝕜     -1
Biquaternions
We now explore the metric in which {e1⋅e1 → -1, e2⋅e2 → -1, e3⋅e3 → -1}. To generate the Clifford product table for this metric we enter:
           1          e1            e2            e3          e1∧e2         e1∧e3         e2∧e3        e1∧e2∧e3
1          1          e1            e2            e3          e1∧e2         e1∧e3         e2∧e3        e1∧e2∧e3
e1         e1         -1            -(e1∧e2)      -(e1∧e3)    e2            e3            e1∧e2∧e3     -(e2∧e3)
e2         e2         e1∧e2         -1            -(e2∧e3)    -e1           -(e1∧e2∧e3)   e3           e1∧e3
e3         e3         e1∧e3         e2∧e3         -1          e1∧e2∧e3      -e1           -e2          -(e1∧e2)
e1∧e2      e1∧e2      -e2           e1            e1∧e2∧e3    -1            -(e2∧e3)      e1∧e3        -e3
e1∧e3      e1∧e3      -e3           -(e1∧e2∧e3)   e1          e2∧e3         -1            -(e1∧e2)     e2
e2∧e3      e2∧e3      e1∧e2∧e3      -e3           e2          -(e1∧e3)      e1∧e2         -1           -e1
e1∧e2∧e3   e1∧e2∧e3   -(e2∧e3)      e1∧e3         -(e1∧e2)    -e3           e2            -e1          1
Note that every one of the basis elements of the Grassmann algebra (except 1 and e1∧e2∧e3) acts as an imaginary unit under the Clifford product. This enables us to build up a general element of the algebra as a sum of nested complex numbers, or of nested quaternions. To show this, we begin by writing a general element in terms of the basis elements of the Grassmann algebra:
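This can be confirmed computationally. The sketch below (our own, with blades coded as bitmasks) forms the Clifford square of each of the eight basis elements in the metric {e1⋅e1 → -1, e2⋅e2 → -1, e3⋅e3 → -1}:

```python
# Clifford squares of all basis blades in the anti-Euclidean 3-space metric.
METRIC = (-1, -1, -1)

def blade_mul(a, b):
    s, t = 0, a >> 1
    while t:                          # reordering sign
        s += bin(t & b).count("1")
        t >>= 1
    sign = -1 if s % 2 else 1
    for i, gi in enumerate(METRIC):   # contract repeated generators
        if (a & b) >> i & 1:
            sign *= gi
    return a ^ b, sign

squares = {blade: blade_mul(blade, blade)[1] for blade in range(8)}
print(squares)
# {0: 1, 1: -1, 2: -1, 3: -1, 4: -1, 5: -1, 6: -1, 7: 1}
# only 1 (blade 0) and e1^e2^e3 (blade 7) square to +1
```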
QQ == a + b e1 + c e2 + d e3 + e e1∧e2 + f e1∧e3 + g e2∧e3 + h e1∧e2∧e3
Or, as previously argued, since the basis 1-elements are orthogonal, we can replace the exterior product by the Clifford product and rearrange the terms in the sum to give
QQ == (a + b e1 + c e2 + e e1∘e2) + (d + f e1 + g e2 + h e1∘e2) ∘ e3
Historical Note
This complex sum of two quaternions was called a biquaternion by Hamilton [Hamilton, Elements of Quaternions, p 133], but Clifford, in a footnote to his Preliminary Sketch of Biquaternions [Clifford, Mathematical Papers, Chelsea], says 'Hamilton's biquaternion is a quaternion with complex coefficients; but it is convenient (as Prof. Pierce remarks) to suppose from the beginning that all scalars may be complex. As the word is thus no longer wanted in its old meaning, I have made bold to use it in a new one.' Hamilton uses the word biscalar for a complex number, and bivector [p 225, Elements of Quaternions] for a complex vector, that is, for a vector x + 𝕚y, where x and y are vectors; and the word biquaternion for a complex quaternion q0 + 𝕚q1, where q0 and q1 are quaternions. He emphasizes here that 𝕚 is the (scalar) imaginary of algebra, and not a symbol for a geometrically real right versor.
Hamilton introduces his biquaternion as the quotient of a bivector (his usage) by a (real) vector.
(The raw Clifford product table C1 for the 4-space is a 16×16 array whose entries are unexpanded products such as 1∘e1, e1∘e2 and (e1∧e2)∘e1; only the electronic version shows it complete.)
Since the basis elements are orthogonal, we can use the faster GrassmannAlgebra function CliffordToOrthogonalScalarProducts for computing the Clifford products.
C2 = CliffordToOrthogonalScalarProducts[C1] // ToMetricForm; PaletteForm[C2]
1  e1  e2  e3  e4  e1∧e2  e1∧e3  e1∧e4  e2∧e3  e2∧e4  e3∧e4  e1∧e2∧e3  e1∧e2∧e4  e1∧e3∧e4  e2∧e3∧e4  e1∧e2∧e3∧e4
This table is well off the page in the printed version, but we can condense the notation for the basis elements by replacing ei∧…∧ej by ei,…,j. To do this we can use the GrassmannAlgebra function ToBasisIndexedForm.
To display the table in condensed notation we make up a rule for each of the basis elements.
C3 = C2 /. Reverse[Thread[BasisL[] → ToBasisIndexedForm[BasisL[]]]]; PaletteForm[C3]
1 e1 e2 e3 e4 e1,2 e1,3 e1,4 e2,3 e2,4 e3,4 e1,2,3 e1,2,4 e1,3,4 e2,3,4 e1,2,3,4
e1 1 - e1,2 - e1,3 - e1,4 - e2 - e3 - e4 e1,2,3 e1,2,4 e1,3,4 e2,3 e2,4 e3,4 - e1,2,3,4 - e2,3,4
e2 e1,2 1 - e2,3 - e2,4 e1 - e1,2,3 - e1,2,4 - e3 - e4 e2,3,4 - e1,3 - e1,4 e1,2,3,4 e3,4 e1,3,4
e3 e1,3 e2,3 1 - e3,4 e1,2,3 e1 - e1,3,4 e2 - e2,3,4 - e4 e1,2 - e1,2,3,4 - e1,4 - e2,4 - e1,2,4
e4 e1,4 e2,4 e3,4 1 e1,2,4 e1,3,4 e1 e2,3,4 e2 e3 e1,2,3,4 e1,2 e1,3 e2,3 e1,2,3
e1,2 e2 - e1 e1,2,3 e1,2,4 -1 e2,3 e2,4 - e1,3 - e1,4 e1,2,3,4 - e3 - e4 e2,3,4 - e1,3,4 - e3,4
e1,3 e3 - e1,2,3 - e1 e1,3,4 - e2,3 -1 e3,4 e1,2 - e1,2,3,4 - e1,4 e2 - e2,3,4 - e4 e1,2,4 e2,4
e1,4 e4 - e1,2,4 - e1,3,4 - e1 - e2,4 - e3,4 -1 e1,2,3,4 e1,2 e1,3 e2,3,4 e2 e3 - e1,2,3 - e2,3
1  e1∧e2  e1∧e3  e1∧e4  e2∧e3  e2∧e4  e3∧e4  e1∧e2∧e3∧e4
1  e1  e2  e3  e4  e1∧e2  e1∧e3  e1∧e4  e2∧e3  e2∧e4  e3∧e4  e1∧e2∧e3  e1∧e2∧e4  e1∧e3∧e4  e2∧e3∧e4  e1∧e2∧e3∧e4
To confirm that this structure is isomorphic to the Dirac algebra we can go through the same procedure that we followed in the case of the Pauli algebra.
I g0 g1 g2 g3 g0 .g1 g0 .g2 g0 .g3 g1 .g2 g1 .g3 g2 .g3 g0 .g1 .g2 g0 .g1 .g3 g0 .g2 .g3 g1 .g2 .g3 g0 .g1 .g2 .g3
g0 I -g0 .g1 -g0 .g2 -g0 .g3 -g1 -g2 -g3 g0 .g1 .g2 g0 .g1 .g3 g0 .g2 .g3 g1 .g2 g1 .g3 g2 .g3 -g0 .g1 .g2 .g3 -g1 .g2 .g3
g1 g0 .g1 -I -g1 .g2 -g1 .g3 -g0 -g0 .g1 .g2 -g0 .g1 .g3 g2 g3 g1 .g2 .g3 g0 .g2 g0 .g3 g0 .g1 .g2 .g3 -g2 .g3 -g0 .g2 .g3
g2 g0 .g2 g1 .g2 -I -g2 .g3 g0 .g1 .g2 -g0 -g0 .g2 .g3 -g1 -g1 .g2 .g3 g3 -g0 .g1 -g0 .g1 .g2 .g3 g0 .g3 g1 .g3 g0 .g1 .g3
g3 g0 .g3 g1 .g3 g2 .g3 -I g0 .g1 .g3 g0 .g2 .g3 -g0 g1 .g2 .g3 -g1 -g2 g0 .g1 .g2 . -g0 .g1 -g0 .g2 -g1 .g2 -g0 .g1 .g2
C8 = C7 /. {I → {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}},
    g0 → {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, -1, 0}, {0, 0, 0, -1}},
    g1 → {{0, 0, 0, 1}, {0, 0, 1, 0}, {0, -1, 0, 0}, {-1, 0, 0, 0}},
    g2 → {{0, 0, 0, -𝕚}, {0, 0, 𝕚, 0}, {0, 𝕚, 0, 0}, {-𝕚, 0, 0, 0}},
    g3 → {{0, 0, 1, 0}, {0, 0, 0, -1}, {-1, 0, 0, 0}, {0, 1, 0, 0}}};
MatrixForm[C8]
(The result is a 16×16 table whose entries are 4×4 complex matrices; only the electronic version shows it complete.)
Step 3:
C9 = C8 /. Thread[First[C8] → BasisL[]] /. Thread[-First[C8] → -BasisL[]];
Step 4: Verification
C9 == C6
True
1  e1  e2  e3  e4  e1∧e2  e1∧e3  e1∧e4  e2∧e3  e2∧e4  e3∧e4  e1∧e2∧e3  e1∧e2∧e4  e1∧e3∧e4  e2∧e3∧e4  e1∧e2∧e3∧e4
Note that both the vectors and the bivectors act as imaginary units under the Clifford product. This enables us to build up a general element of the algebra as a sum of nested complex numbers, quaternions, or complex quaternions. To show this, we begin by writing a general element in terms of the basis elements of the Grassmann algebra:
X = CreateGrassmannNumber[a]
a0 + e1 a1 + e2 a2 + e3 a3 + e4 a4 + a5 e1∧e2 + a6 e1∧e3 + a7 e1∧e4 + a8 e2∧e3 + a9 e2∧e4 + a10 e3∧e4 + a11 e1∧e2∧e3 + a12 e1∧e2∧e4 + a13 e1∧e3∧e4 + a14 e2∧e3∧e4 + a15 e1∧e2∧e3∧e4
We can then factor this expression to display it as a nested sum of numbers of the form Ck == ai + aj e1. (Of course any basis element will do as the basis of the factorization.)
X == (a0 + a1 e1) + (a2 + a5 e1)∧e2 + ((a3 + a6 e1) + (a8 + a11 e1)∧e2)∧e3 + ((a4 + a7 e1) + (a9 + a12 e1)∧e2 + ((a10 + a13 e1) + (a14 + a15 e1)∧e2)∧e3)∧e4
Since the basis 1-elements are orthogonal, we can replace the exterior product by the Clifford product to give
X == (a0 + a1 e1) + (a2 + a5 e1)∘e2 + ((a3 + a6 e1) + (a8 + a11 e1)∘e2)∘e3 + ((a4 + a7 e1) + (a9 + a12 e1)∘e2 + ((a10 + a13 e1) + (a14 + a15 e1)∘e2)∘e3)∘e4
  == C1 + C2∘e2 + (C3 + C4∘e2)∘e3 + (C5 + C6∘e2 + (C7 + C8∘e2)∘e3)∘e4
  == Q1 + Q2∘e3 + (Q3 + Q4∘e3)∘e4
  == QQ1 + QQ2∘e4
12.17 Rotations
To be completed
13.1 Introduction
This chapter introduces the concept of a matrix of Grassmann numbers, which we will call a Grassmann matrix. Wherever it makes sense, the operations discussed will also work for listed collections of the components of tensors of any order, as per the Mathematica representation: a set of lists nested to a certain number of levels. Thus, for example, an operation may also work for vectors, or for a list containing only one element.

We begin by discussing some quick methods for generating matrices of Grassmann numbers, particularly matrices of symbolic Grassmann numbers, where it can become tedious to define all the scalar factors involved. We then discuss the common algebraic operations: multiplication by scalars, addition, multiplication, taking the complement, finding the grade of elements, simplification, taking components, and determining the type of elements involved.

Separate sections are devoted to the notions of transpose, determinant and adjoint of a Grassmann matrix. These notions do not carry over directly into the algebra of Grassmann matrices, due to the non-commutative nature of Grassmann numbers.

Matrix powers are then discussed, and the inverse of a Grassmann matrix is defined in terms of its positive integer powers. Non-integer powers are defined for matrices whose bodies have distinct eigenvalues. The determination of the eigensystem of a Grassmann matrix is discussed for this class of matrix. Finally, functions of Grassmann matrices are discussed, based on the determination of their eigensystems. We show that relationships that we expect of scalars, for example Log[Exp[A]] == A or Sin[A]^2 + Cos[A]^2 == 1, can still apply to Grassmann matrices.
? CreateMatrixForm
CreateMatrixForm[D][X, S] constructs an array of the specified dimensions with copies of the expression X, formed by indexing its scalars and variables with subscripts. S is an optional list of excluded symbols. D is a list of dimensions of the array (which may be symbolic for lists and matrices). Note that an indexed scalar is not recognised as a scalar unless it has an underscripted 0, or is declared as a scalar.
Suppose we require a 2×2 matrix of general Grassmann numbers of the form X (given below) in a 2-space.
X = a + b e1 + c e2 + d e1∧e2
a + b e1 + c e2 + d e1∧e2
We declare the 2-space, and then create a 2×2 matrix of the form X.
M = CreateMatrixForm[{2, 2}][X]; MatrixForm[M]
( a1,1 + e1 b1,1 + e2 c1,1 + d1,1 e1∧e2    a1,2 + e1 b1,2 + e2 c1,2 + d1,2 e1∧e2 )
( a2,1 + e1 b2,1 + e2 c2,1 + d2,1 e1∧e2    a2,2 + e1 b2,2 + e2 c2,2 + d2,2 e1∧e2 )
We need to declare the extra scalars that we have formed. This can be done with just one pattern (provided we wish all symbols with this pattern to be scalar).
DeclareExtraScalars[{Subscript[_, _, _]}]
{a, b, c, d, e, f, g, h, (_⋅_)?InnerProductQ, Subscript[_, _, _], _₀}
If we wish to simplify any orthogonal elements, we can use ToMetricForm. Since we are in a Euclidean space the scalar product term is zero.
ToMetricForm[𝒢[B]]
{{1, 2}, {e3, 3}}
Finally, we can extract the elements of any specific grade using ExtractGrade.
ExtractGrade[2][M] // MatrixForm
( d1,1 e1∧e2    d1,2 e1∧e2 )
( d2,1 e1∧e2    d2,2 e1∧e2 )
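The effect of ExtractGrade is easy to mimic outside the package. A hypothetical Python sketch of ours (tuple positions index the basis 1, e1, e2, e1∧e2 of the 2-space):

```python
# Grades of the four basis elements of the 2-space.
GRADES = (0, 1, 1, 2)

def extract_grade(k, a):
    # keep the coefficients of basis elements of grade k, zero the rest
    return tuple(c if g == k else 0 for c, g in zip(a, GRADES))

M = [[(0, 0, 0, 5), (1, 2, 0, 7)],
     [(4, 0, 3, 0), (0, 0, 0, -2)]]
print([[extract_grade(2, e) for e in row] for row in M])
# [[(0, 0, 0, 5), (0, 0, 0, 7)], [(0, 0, 0, 0), (0, 0, 0, -2)]]
```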
EvenGradeQ, OddGradeQ, EvenSoulQ and GradeQ all interrogate the individual elements of a matrix.
EvenGradeQ[A] // MatrixForm
( False    False )
( True     False )
To check the type of a complete matrix we can ask whether it is free of either True or False values.
FreeQ[GradeQ[1][A], False]
False
FreeQ[EvenSoulQ[A], True]
True
Matrix addition
Due to the Listable attribute of the Plus operation in Mathematica, matrices of the same size add automatically.
A + M
{{2 + 3 e1 + a1,1 + e1 b1,1 + e2 c1,1 + d1,1 e1∧e2, e1 + 6 e2 + a1,2 + e1 b1,2 + e2 c1,2 + d1,2 e1∧e2},
 {5 + a2,1 + e1 b2,1 + e2 c2,1 - 9 e1∧e2 + d2,1 e1∧e2, e2 + a2,2 + e1 b2,2 + e2 c2,2 + e1∧e2 + d2,2 e1∧e2}}
Matrix products
As with the product of two Grassmann numbers, entering the product of two matrices of Grassmann numbers does not effect any multiplication. For example, take the matrix A in a 2-space created in the previous section.
MatrixForm[A] MatrixForm[A]
( 2 + 3 e1       e1 + 6 e2  ) ( 2 + 3 e1       e1 + 6 e2  )
( 5 - 9 e1∧e2    e2 + e1∧e2 ) ( 5 - 9 e1∧e2    e2 + e1∧e2 )
To multiply out the product A A, we use the GrassmannAlgebra function MatrixProduct, which performs the multiplication. The short-hand for MatrixProduct is a script capital M, ℳ, obtained by typing scM.
ℳ[A ∧ A]
{{(2 + 3 e1)∧(2 + 3 e1) + (e1 + 6 e2)∧(5 - 9 e1∧e2), (2 + 3 e1)∧(e1 + 6 e2) + (e1 + 6 e2)∧(e2 + e1∧e2)},
 {(5 - 9 e1∧e2)∧(2 + 3 e1) + (e2 + e1∧e2)∧(5 - 9 e1∧e2), (5 - 9 e1∧e2)∧(e1 + 6 e2) + (e2 + e1∧e2)∧(e2 + e1∧e2)}}
To take the product and simplify at the same time, apply the GrassmannSimplify function.
𝒢[ℳ[A ∧ A]] // MatrixForm
( 4 + 17 e1 + 30 e2                 2 e1 + 12 e2 + 19 e1∧e2 )
( 10 + 15 e1 + 5 e2 - 13 e1∧e2      5 e1 + 30 e2 )
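This simplified table can be confirmed independently. The Python sketch below (ours, not GrassmannAlgebra code) holds each Grassmann number in the 2-space as a coefficient tuple over the basis (1, e1, e2, e1∧e2), and computes the matrix product with the wedge as elementwise multiplication:

```python
# Wedge product of Grassmann numbers held as 4-tuples (1, e1, e2, e1^e2).
from functools import reduce

def wedge(a, b):
    return (a[0]*b[0],
            a[0]*b[1] + a[1]*b[0],
            a[0]*b[2] + a[2]*b[0],
            a[0]*b[3] + a[3]*b[0] + a[1]*b[2] - a[2]*b[1])

def add(a, b):
    return tuple(u + v for u, v in zip(a, b))

def mat_wedge(A, B):
    n = len(A)
    return [[reduce(add, (wedge(A[r][k], B[k][c]) for k in range(n)))
             for c in range(n)] for r in range(n)]

# A = {{2 + 3 e1, e1 + 6 e2}, {5 - 9 e1^e2, e2 + e1^e2}}
A = [[(2, 3, 0, 0), (0, 1, 6, 0)],
     [(5, 0, 0, -9), (0, 0, 1, 1)]]
print(mat_wedge(A, A))
# [[(4, 17, 30, 0), (0, 2, 12, 19)], [(10, 15, 5, -13), (0, 5, 30, 0)]]
```

The four tuples reproduce the four entries of the simplified table above.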
Regressive products
We can also take the regressive product.
𝒢[ℳ[A ∨ A]] // MatrixForm
( (e1∧e2) ∨ (-9 e1 - 54 e2)                     (e1∧e2) ∨ (19 + e1 + 6 e2) )
( (e1∧e2) ∨ (-13 - 27 e1 - 9 e2 - 9 e1∧e2)      (e1∧e2) ∨ (-9 e1 - 52 e2 + e1∧e2) )
Interior products
We calculate an interior product of two matrices in the same way.
P = 𝒢[ℳ[A ⋅ A]]
{{4 + 9 (e1⋅e1) + 11 e1 + 30 e2, 3 (e1⋅e1) + 19 (e1⋅e2) + 6 (e2⋅e2)},
 {10 - 27 (e1∧e2 ⋅ e1) - 9 (e1∧e2 ⋅ e1∧e2) + 5 e2 - 13 e1∧e2, e2⋅e2 - 9 (e1∧e2 ⋅ e1) - 53 (e1∧e2 ⋅ e2) + e1∧e2 ⋅ e1∧e2}}
To convert the interior products to scalar products, use the GrassmannAlgebra function ToScalarProducts on the matrix.
P1 = ToScalarProducts[P]
{{4 + 9 (e1⋅e1) + 11 e1 + 30 e2, 3 (e1⋅e1) + 19 (e1⋅e2) + 6 (e2⋅e2)},
 {10 + 9 (e1⋅e2)² - 9 (e1⋅e1)(e2⋅e2) + 27 (e1⋅e2) e1 + 5 e2 - 27 (e1⋅e1) e2 - 13 e1∧e2,
  -(e1⋅e2)² + e2⋅e2 + (e1⋅e1)(e2⋅e2) + 9 (e1⋅e2) e1 + 53 (e2⋅e2) e1 - 9 (e1⋅e1) e2 - 53 (e1⋅e2) e2}}
This product can be simplified further if the values of the scalar products of the basis elements are known. In a Euclidean space, we obtain:
ToMetricForm[P1] // MatrixForm
( 13 + 11 e1 + 30 e2          9 )
( 1 - 22 e2 - 13 e1∧e2        2 + 53 e1 - 9 e2 )
Clifford products
The Clifford product of two matrices can be calculated in stages. First we multiply the elements and expand and simplify the terms with GrassmannSimplify.
C1 = 𝒢[ℳ[A ∘ A]]
{{2∘2 + 3 (2∘e1) + 3 (e1∘2) + e1∘5 + 9 (e1∘e1) - 9 (e1∘(e1∧e2)) + 6 (e2∘5) - 54 (e2∘(e1∧e2)),
  2∘e1 + 6 (2∘e2) + 3 (e1∘e1) + 19 (e1∘e2) + e1∘(e1∧e2) + 6 (e2∘e2) + 6 (e2∘(e1∧e2))},
 {5∘2 + 3 (5∘e1) + e2∘5 - 9 (e2∘(e1∧e2)) - 9 ((e1∧e2)∘2) + (e1∧e2)∘5 - 27 ((e1∧e2)∘e1) - 9 ((e1∧e2)∘(e1∧e2)),
  5∘e1 + 6 (5∘e2) + e2∘e2 + e2∘(e1∧e2) - 9 ((e1∧e2)∘e1) - 53 ((e1∧e2)∘e2) + (e1∧e2)∘(e1∧e2)}}
Next we can convert the Clifford products into exterior and scalar products with ToScalarProducts.
C2 = ToScalarProducts[C1]
{{4 + 9 (e1⋅e1) + 17 e1 + 9 (e1⋅e2) e1 + 54 (e2⋅e2) e1 + 30 e2 - 9 (e1⋅e1) e2 - 54 (e1⋅e2) e2,
  3 (e1⋅e1) + 19 (e1⋅e2) + 6 (e2⋅e2) + 2 e1 - (e1⋅e2) e1 - 6 (e2⋅e2) e1 + 12 e2 + (e1⋅e1) e2 + 6 (e1⋅e2) e2 + 19 e1∧e2},
 {10 - 9 (e1⋅e2)² + 9 (e1⋅e1)(e2⋅e2) + 15 e1 - 27 (e1⋅e2) e1 + 9 (e2⋅e2) e1 + 5 e2 + 27 (e1⋅e1) e2 - 9 (e1⋅e2) e2 - 13 e1∧e2,
  (e1⋅e2)² + e2⋅e2 - (e1⋅e1)(e2⋅e2) + 5 e1 - 9 (e1⋅e2) e1 - 54 (e2⋅e2) e1 + 30 e2 + 9 (e1⋅e1) e2 + 54 (e1⋅e2) e2}}
If the elements of two Grassmann matrices commute, then just as for the usual case, the transpose of a product is the product of the transposes in reverse order. If they anti-commute, then the transpose of a product is the negative of the product of the transposes in reverse order. Thus, the corresponding terms in the two expansions above which involve an even matrix will be equal, and the last terms involving two odd matrices will differ by a sign:
(Ae ∧ Be)ᵀ == Beᵀ ∧ Aeᵀ        (Ae ∧ Bo)ᵀ == Boᵀ ∧ Aeᵀ
(Ao ∧ Be)ᵀ == Beᵀ ∧ Aoᵀ        (Ao ∧ Bo)ᵀ == - (Boᵀ ∧ Aoᵀ)
Thus
(A ∧ B)ᵀ == Bᵀ ∧ Aᵀ - 2 (Boᵀ ∧ Aoᵀ) == Bᵀ ∧ Aᵀ + 2 (Ao ∧ Bo)ᵀ        13.1

(A ∧ B)ᵀ == Bᵀ ∧ Aᵀ        Ao ∧ Bo == 0        13.2
M // MatrixForm
( a1,1 + e1 b1,1 + e2 c1,1 + d1,1 e1∧e2    a1,2 + e1 b1,2 + e2 c1,2 + d1,2 e1∧e2 )
( a2,1 + e1 b2,1 + e2 c2,1 + d2,1 e1∧e2    a2,2 + e1 b2,2 + e2 c2,2 + d2,2 e1∧e2 )
TestTransposeRelation[M, A]
True
It should be remarked that for the transpose of a product to be equal to the product of the transposes in reverse order, the evenness of the matrices is a sufficient condition, but not a necessary one. All that is required is that the exterior product of the odd components of the matrices be zero. This may be achieved without the odd components themselves necessarily being zero.
The natural definition of the determinant of this matrix would be the product of the diagonal elements. However, which product should it be, x∧y or y∧x? The possible non-commutativity of the Grassmann numbers on the diagonal may give rise to different products of the diagonal elements, depending on the ordering of the elements. On the other hand, if the product of the diagonal elements of a diagonal Grassmann matrix is unique, as would be the case for even elements, the determinant may be defined uniquely as that product.
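The ordering problem is easy to exhibit concretely. In the sketch below (ours), a Grassmann number in the 2-space of x and y is a coefficient tuple over the basis (1, x, y, x∧y):

```python
# Wedge product on coefficient tuples (1, x, y, x^y).
def wedge(a, b):
    return (a[0]*b[0],
            a[0]*b[1] + a[1]*b[0],
            a[0]*b[2] + a[2]*b[0],
            a[0]*b[3] + a[3]*b[0] + a[1]*b[2] - a[2]*b[1])

x = (0, 1, 0, 0)
y = (0, 0, 1, 0)
print(wedge(x, y), wedge(y, x))    # (0, 0, 0, 1) versus (0, 0, 0, -1)

# For even elements the ambiguity disappears:
p = (1, 0, 0, 2)                   # 1 + 2 x^y
q = (3, 0, 0, -1)                  # 3 - x^y
print(wedge(p, q) == wedge(q, p))  # True
```

Odd diagonal elements thus give two candidate "determinants" differing in sign, while even diagonal elements give a unique one.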
Note that GrassmannDeterminant does not carry out any simplification, but instead leaves the products in raw form to facilitate the interpretation of the process occurring. To simplify the result one needs to use GrassmannSimplify.
𝒢[B1]
-a1,2 a2,1 + a1,1 a2,2 + (a2,2 d1,1 - a2,1 d1,2 - a1,2 d2,1 + a1,1 d2,2) e1∧e2
Because this matrix has no body, there will eventually be a power which is zero, dependent on the dimension of the space, which in this case is 4. Here we see that it is the fourth power.
A4 = 𝒢[ℳ[A ∧ A3]]; MatrixForm[A4]
( 0    0 )
( 0    0 )
B4 = 𝒢[ℳ[B ∧ ℳ[B ∧ ℳ[B ∧ B]]]]; MatrixForm[B4]
( 1 + 4 x - 6 u∧y - 4 u∧v∧y + 8 u∧x∧y      4 y - 6 v∧y + 6 x∧y + 4 v∧x∧y )
( 4 u - 6 u∧v + 6 u∧x - 4 u∧v∧x            1 + 4 v + 6 u∧y - 8 u∧v∧y + 4 u∧x∧y )
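The vanishing of powers of a bodiless matrix can be checked independently: in a 2-space any product of three soul factors has grade at least 3, and so vanishes. A sketch of ours, with Grassmann numbers again as tuples over (1, x, y, x∧y):

```python
def gmul(a, b):
    return (a[0]*b[0],
            a[0]*b[1] + a[1]*b[0],
            a[0]*b[2] + a[2]*b[0],
            a[0]*b[3] + a[3]*b[0] + a[1]*b[2] - a[2]*b[1])

def gadd(a, b):
    return tuple(u + v for u, v in zip(a, b))

def mat_mul(A, B):
    n = len(A)
    C = []
    for r in range(n):
        row = []
        for c in range(n):
            acc = (0, 0, 0, 0)
            for k in range(n):
                acc = gadd(acc, gmul(A[r][k], B[k][c]))
            row.append(acc)
        C.append(row)
    return C

# A bodiless matrix over {x, y}: every entry has zero scalar part.
S = [[(0, 1, 0, 0), (0, 0, 1, 0)],
     [(0, 1, 1, 0), (0, 0, 0, 1)]]
S2 = mat_mul(S, S)
S3 = mat_mul(S2, S)
print(S3 == [[(0, 0, 0, 0)] * 2 for _ in range(2)])   # True: cube vanishes in 2-space
```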
GrassmannIntegerPower
Although we will see later that the GrassmannAlgebra function GrassmannMatrixPower will actually deal with more general cases than the positive integer powers discussed here, we can write a simple function GrassmannIntegerPower to calculate any integer power. Like GrassmannMatrixPower we can also easily store the result for further use in the same Mathematica session. This makes subsequent calculations involving the powers already calculated much faster.
However, on the way to calculating the fourth power, the function GrassmannIntegerPower has remembered the values of all the integer powers up to the fourth, in this case the second, third and fourth. We can get immediate access to these by adding the Dimension of the space as a third argument. Since the power of a matrix may take on a different form in spaces of different dimensions (due to some terms being zero because their degree exceeds the dimension of the space), the power is recalled only by including the Dimension as the third argument.
Dimension
4
GrassmannIntegerPower[B, 2, 4] // MatrixForm // Timing
{0. Second, ( 1 + 2 x - u∧y       2 y - v∧y + x∧y
              2 u - u∧v + u∧x     1 + 2 v + u∧y )}
GrassmannMatrixPower
The principal and more general function for calculating matrix powers provided by GrassmannAlgebra is GrassmannMatrixPower. Here we verify that it gives the same result as our simple GrassmannIntegerPower function.
GrassmannMatrixPower[B, 4]
{{1 + 4 x - 6 u∧y - 4 u∧v∧y + 8 u∧x∧y, 4 y - 6 v∧y + 6 x∧y + 4 v∧x∧y},
 {4 u - 6 u∧v + 6 u∧x - 4 u∧v∧x, 1 + 4 v + 6 u∧y - 8 u∧v∧y + 4 u∧x∧y}}
The matrix A-4 is calculated by GrassmannMatrixPower by taking the fourth power of the inverse.
iA4 = GrassmannMatrixPower[A, -4]
{{1 + 10 x∧y, -4 x}, {-4 y, 1 - 10 x∧y}}
We could of course get the same result by taking the inverse of the fourth power.
GrassmannMatrixInverse[GrassmannMatrixPower[A, 4]]
{{1 + 10 x∧y, -4 x}, {-4 y, 1 - 10 x∧y}}
More generally, we can extract a symbolic pth root. But to have the simplest result we need to ensure that p has been declared a scalar first,
DeclareExtraScalars[{p}]
{a, b, c, d, e, f, g, h, p, (_⋅_)?InnerProductQ, _₀}
pthRootA = GrassmannMatrixPower[A, 1/p]
{{1 + (-1 - 1/p + 2^(1/p)) x∧y, (-1 + 2^(1/p)) x},
 {(-1 + 2^(1/p)) y, 2^(1/p) + (-1 + 2^(1/p) - (1/p) 2^(-1 + 1/p)) x∧y}}
If the power required is not integral and the eigenvalues are not distinct, GrassmannMatrixPower will return a message and the unevaluated input. Let B be a simple 2×2 matrix with equal eigenvalues 1 and 1.
B = {{1, x}, {y, 1}}; MatrixForm[B]
( 1    x )
( y    1 )
GrassmannMatrixPower[B, 1/2]
Eigenvalues::notDistinct : The matrix {{1, x}, {y, 1}} does not have distinct scalar eigenvalues. The operation applies only to matrices with distinct scalar eigenvalues.
GrassmannMatrixPower[{{1, x}, {y, 1}}, 1/2]
? GrassmannDistinctEigenvaluesMatrixPower
GrassmannDistinctEigenvaluesMatrixPower[A, p] calculates the power p of a Grassmann matrix A with distinct eigenvalues. p may be either numeric or symbolic.
We can check whether a matrix has distinct eigenvalues with DistinctEigenvaluesQ. For example, suppose we wish to calculate the 100th power of the matrix A below.
A // MatrixForm
( 1    x )
( y    2 )
DistinctEigenvaluesQ[A]
True
A
{{1, x}, {y, 2}}
GrassmannDistinctEigenvaluesMatrixPower[A, 100]
{{1 + 1267650600228229401496703205275 x∧y, 1267650600228229401496703205375 x},
 {1267650600228229401496703205375 y, 1267650600228229401496703205376 - 62114879411183240673338457063425 x∧y}}
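This printed result can be checked by direct multiplication, without any eigensystem machinery. In the sketch below (ours, not package code), each Grassmann number is a coefficient tuple over the basis (1, x, y, x∧y), and we simply form the hundredth power of A by repeated matrix multiplication:

```python
def gmul(a, b):
    return (a[0]*b[0],
            a[0]*b[1] + a[1]*b[0],
            a[0]*b[2] + a[2]*b[0],
            a[0]*b[3] + a[3]*b[0] + a[1]*b[2] - a[2]*b[1])

def gadd(a, b):
    return tuple(u + v for u, v in zip(a, b))

def mat_mul(A, B):
    n = len(A)
    C = [[None] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            acc = (0, 0, 0, 0)
            for k in range(n):
                acc = gadd(acc, gmul(A[r][k], B[k][c]))
            C[r][c] = acc
    return C

A = [[(1, 0, 0, 0), (0, 1, 0, 0)],     # {{1, x}, {y, 2}}
     [(0, 0, 1, 0), (2, 0, 0, 0)]]
P = A
for _ in range(99):
    P = mat_mul(P, A)

print(P[0][0] == (1, 0, 0, 2**100 - 101))                  # True
print(P[0][1] == (0, 2**100 - 1, 0, 0))                    # True
print(P[1][1] == (2**100, 0, 0, -(49 * 2**100 + 1)))       # True
```

Note that 2^100 = 1267650600228229401496703205376, so the coefficients agree exactly with the printed table.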
Now, since Xk has no body, its highest non-zero power is at most the dimension n of the space. That is, Xk^(n+1) == 0. Thus
(I + Xk)(I - Xk + Xk² - Xk³ + Xk⁴ - … ± Xkⁿ) == I        13.3
We have thus shown that, for a bodiless Xk, (I - Xk + Xk² - Xk³ + Xk⁴ - … ± Xkⁿ) is the inverse of (I + Xk). If now we have a general Grassmann matrix X, say, we can write X as X == Xb + Xs, where Xb is the body of X and Xs is the soul of X, and then take Xb out as a factor to get:
X == Xb + Xs == Xb (I + Xb⁻¹ Xs) == Xb (I + Xk)
Pre- and post-multiplying the equation above for the inverse of (I + Xk) by Xb and Xb⁻¹ respectively gives:
Xb (I + Xk)(I - Xk + Xk² - Xk³ + Xk⁴ - … ± Xkⁿ) Xb⁻¹ == I
Since the first factor is simply X, we finally obtain the inverse of X in terms of Xk == Xb⁻¹ Xs as:
X⁻¹ == (I - Xk + Xk² - Xk³ + Xk⁴ - … ± Xkⁿ) Xb⁻¹        13.4

Or, specifically in terms of the body and soul of X:

X⁻¹ == (I - Xb⁻¹ Xs + (Xb⁻¹ Xs)² - (Xb⁻¹ Xs)³ + … ± (Xb⁻¹ Xs)ⁿ) Xb⁻¹        13.5
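Formula 13.4 can be verified on a small example. The following sketch (ours, using exact fractions and a tuple representation of Grassmann numbers in a 2-space over x and y) builds X == Xb + Xs with invertible body, forms Xk == Xb⁻¹ Xs, sums the terminating series, and checks that it yields a genuine inverse:

```python
from fractions import Fraction as F

ZERO = (F(0), F(0), F(0), F(0))

def g(a=0, x=0, y=0, xy=0):
    return (F(a), F(x), F(y), F(xy))

def gmul(a, b):
    return (a[0]*b[0],
            a[0]*b[1] + a[1]*b[0],
            a[0]*b[2] + a[2]*b[0],
            a[0]*b[3] + a[3]*b[0] + a[1]*b[2] - a[2]*b[1])

def gadd(a, b):
    return tuple(u + v for u, v in zip(a, b))

def _dot(A, B, r, c, n):
    acc = ZERO
    for k in range(n):
        acc = gadd(acc, gmul(A[r][k], B[k][c]))
    return acc

def mat_mul(A, B):
    n = len(A)
    return [[_dot(A, B, r, c, n) for c in range(n)] for r in range(n)]

def mat_add(A, B):
    return [[gadd(p, q) for p, q in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_neg(A):
    return [[tuple(-u for u in e) for e in row] for row in A]

I2 = [[g(1), ZERO], [ZERO, g(1)]]

X   = [[g(1, xy=1), g(x=1)], [g(y=1), g(2)]]   # X = {{1 + x^y, x}, {y, 2}}
Xbi = [[g(1), ZERO], [ZERO, g(F(1, 2))]]       # inverse of the body {{1, 0}, {0, 2}}
Xs  = [[g(xy=1), g(x=1)], [g(y=1), ZERO]]      # the soul of X
Xk  = mat_mul(Xbi, Xs)

series = mat_add(mat_add(I2, mat_neg(Xk)), mat_mul(Xk, Xk))  # I - Xk + Xk^2
Xinv = mat_mul(series, Xbi)
print(mat_mul(X, Xinv) == I2)   # True: the series terminates and inverts X
```

The series stops at Xk² because Xk is bodiless and the space has dimension 2, exactly as formula 13.4 predicts.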
GrassmannMatrixInverse
This formula is straightforward to implement as a function. In GrassmannAlgebra we can calculate the inverse of a Grassmann matrix X by using GrassmannMatrixInverse.
? GrassmannMatrixInverse
GrassmannMatrixInverse[X] computes the inverse of the matrix X of Grassmann numbers in a space of the currently declared number of dimensions.
A definition for GrassmannMatrixInverse may be quite straightforwardly developed from formula 13.4.
1/12 (6 + e1∧e4 + e2∧e4 - 3 e1∧e2∧e3 + e1∧e2∧e4),
Finally, if we try to find the inverse of a matrix with a singular body we get the expected type of error messages.
GrassmannMatrixInverse[A]
Alternatively, the equations may need to be expressed as X M == Y, in which case the solution X is given by:
𝒢[ℳ[Y ∧ GrassmannMatrixInverse[M]]]
A X == X L        13.6
The pair {X,L} is called the eigensystem of A. If X is invertible we can post-multiply by X-1 to get:
A == X L X⁻¹        13.7

and pre-multiply by X⁻¹ to get:
X⁻¹ A X == L        13.8
It is this decomposition which allows us to define functions of Grassmann matrices. This will be discussed further in Section 13.10.

The number of scalar equations resulting from the matrix equation A X == X L is n². The number of unknowns that we are likely to be able to determine is therefore n². There are n unknown eigenvalues in L, leaving n² - n unknowns to be determined in X. Since X occurs on both sides of the equation, we need only determine its columns (or rows) to within a scalar multiple. Hence there are really only n² - n unknowns to be determined in X, which leads us to hope that a decomposition of this form is indeed possible.

The process we adopt is to assume unknown general Grassmann numbers for the diagonal components of L and for the components of X. If the dimension of the space is N, there will be 2^N scalar coefficients to be determined for each of the n² unknowns. We then use Mathematica's powerful Solve function to obtain the values of these unknowns.

In practice, during the calculations, we assume that the basis of the space is composed of just the non-scalar symbols existing in the matrix A. This enables a reduction of computation should the currently declared basis be of higher dimension than the number of different 1-elements in the matrix, and also allows the eigensystem to be obtained for matrices which are not expressed in terms of basis elements. For simplicity we also use Mathematica's JordanDecomposition function to determine the scalar eigensystem of the matrix A before proceeding to find the non-scalar components.

To see the relationship of the scalar eigensystem to the complete Grassmann eigensystem, rewrite the eigensystem equation A X == X L in terms of body and soul, and extract that part of the equation that is pure body.
(Ab + As) (Xb + Xs) == (Xb + Xs) (Lb + Ls)
The first term on each side is the body of the equation, and the second terms (in brackets) its soul. The body of the equation is simply the eigensystem equation for the body of A and shows that the scalar components of the unknown X and L matrices are simply the eigensystem of the body of A.
Ab Xb == Xb Lb

13.9
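Equation 13.9 says that the scalar part of a Grassmann matrix eigensystem is just the ordinary eigensystem of its body. As an illustrative sketch (in Python with numpy, neither of which is part of GrassmannAlgebra), here is the body of the example matrix A = {{2, 3 + x}, {2 + y, 3}} used below, obtained by setting the souls x and y to zero:

```python
import numpy as np

# Body of the example matrix A = {{2, 3 + x}, {2 + y, 3}}:
# setting the souls x and y to zero leaves the scalar matrix Ab.
Ab = np.array([[2.0, 3.0],
               [2.0, 3.0]])

# Ordinary scalar eigensystem: Ab Xb == Xb Lb, equation 13.9.
lb, Xb = np.linalg.eig(Ab)
Lb = np.diag(lb)

print(np.allclose(Ab @ Xb, Xb @ Lb))                  # True
print(np.allclose(np.linalg.solve(Xb, Ab @ Xb), Lb))  # X^-1 A X == L, True
```

The scalar eigenvalues here are 0 and 5; these are the bodies of the Grassmann eigenvalues computed by GrassmannMatrixEigensystem below.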
By decomposing the soul of the equation into a sequence of equations relating matrices of elements of different grades, the solution for the unknown elements of X and L may be obtained sequentially by using the solutions from the equations of lower grade. For example, the equation for the elements of grade one is
Ab Xs1 + As1 Xb == Xb Ls1 + Xs1 Lb

13.10
where As1, Xs1, and Ls1 are the components of As, Xs, and Ls of grade 1. Here we know Ab and As, and have already solved for Xb and Lb, leaving just Xs1 and Ls1 to be determined. However, although this process would be helpful if solving by hand, it is not necessary when using the Solve function in Mathematica.
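To see the grade-by-grade process concretely, consider the simplest setting: a single generator e1, so that e1 ∧ e1 == 0 and Grassmann numbers a + b e1 multiply like dual numbers. The following Python/numpy sketch (an illustration of the idea only, not the GrassmannAlgebra implementation) solves the grade-0 equation as an ordinary eigensystem and then solves the grade-1 equation in closed form by writing Xs = Xb C:

```python
import numpy as np

# A = Ab + As e1, with a single Grassmann generator e1 (e1 ^ e1 = 0).
Ab = np.array([[2.0, 3.0],
               [2.0, 3.0]])
As = np.array([[0.0, 1.0],
               [1.0, 0.0]])

# Grade 0: the body equation Ab Xb == Xb Lb is an ordinary eigensystem.
lb, Xb = np.linalg.eig(Ab)

# Grade 1: Ab Xs + As Xb == Xb Ls + Xs Lb.  Writing Xs = Xb C and
# F = Xb^-1 As Xb, this reduces to F == Ls + C Lb - Lb C, so
#   ls_i = F_ii   and   C_ij = F_ij / (lb_j - lb_i)   for i != j.
F = np.linalg.solve(Xb, As @ Xb)
ls = np.diag(F).copy()
denom = lb[None, :] - lb[:, None]
np.fill_diagonal(denom, 1.0)      # avoid division by zero; C_ii = 0
C = F / denom
np.fill_diagonal(C, 0.0)
Xs = Xb @ C

# Check the grade-1 equation.
lhs = Ab @ Xs + As @ Xb
rhs = Xb @ np.diag(ls) + Xs @ np.diag(lb)
print(np.allclose(lhs, rhs))      # True
```

The division by lb_j - lb_i shows directly why the method requires the body to have distinct eigenvalues.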
GrassmannMatrixEigensystem
The function implemented in GrassmannAlgebra for calculating the eigensystem of a matrix of Grassmann numbers is GrassmannMatrixEigensystem. It is capable of calculating the eigensystem only of matrices whose body has distinct eigenvalues.
? GrassmannMatrixEigensystem

GrassmannMatrixEigensystem[A] calculates a list comprising the matrix of eigenvectors and the diagonal matrix of eigenvalues for a Grassmann matrix A whose body has distinct eigenvalues.
If the matrix does not have distinct eigenvalues, GrassmannMatrixEigensystem will return a message telling us. For example:
A = {{2, x}, {y, 2}}; MatrixForm[A]

( 2  x
  y  2 )
GrassmannMatrixEigensystem[A]

Eigenvalues::notDistinct : The matrix {{2, x}, {y, 2}} does not have distinct scalar eigenvalues. The operation applies only to matrices with distinct scalar eigenvalues.

GrassmannMatrixEigensystem[{{2, x}, {y, 2}}]
If the matrix has distinct eigenvalues, a list of two matrices is returned. The first is the matrix of eigenvectors (whose columns have been normalized with an algorithm which tries to produce as simple a form as possible), and the second is the (diagonal) matrix of eigenvalues.
A = {{2, 3 + x}, {2 + y, 3}}; MatrixForm[A]

( 2      3 + x
  2 + y  3     )

GrassmannMatrixEigensystem[A]

(The result is a pair {X, L}: an eigenvector matrix X whose entries involve terms in x, y and x y, and a diagonal eigenvalue matrix L whose scalar parts are the body eigenvalues 0 and 5.)
To verify that this is indeed the eigensystem for A we can evaluate the equation A X == X L.
This result is easily generalized to give an expression for any power of a matrix.
A^p == X L^p X⁻¹
Indeed, it is also valid for any linear combination of powers,
Σ ap A^p == Σ ap X L^p X⁻¹ == X (Σ ap L^p) X⁻¹

13.11
and hence any function f[A] defined by a linear combination of powers (series):
f[A] == Σ ap A^p
f[A] == X f[L] X⁻¹
Now, since L is a diagonal matrix:
L == DiagonalMatrix[{l1, l2, …, lm}]

13.12
its pth power is just the diagonal matrix of the pth powers of the elements.
L^p == DiagonalMatrix[{l1^p, l2^p, …, lm^p}]
Hence:
Σ ap L^p == DiagonalMatrix[{Σ ap l1^p, Σ ap l2^p, …, Σ ap lm^p}]

f[L] == DiagonalMatrix[{f[l1], f[l2], …, f[lm]}]
Hence, finally we have an expression for the function f of the matrix A in terms of the eigenvector matrix and a diagonal matrix in which each diagonal element is the function f of the corresponding eigenvalue.
f[A] == X DiagonalMatrix[{f[l1], f[l2], …, f[lm]}] X⁻¹

13.13
GrassmannMatrixFunction
We have already seen in Chapter 9 how to calculate a function of a Grassmann number by using GrassmannFunction. We use GrassmannFunction and the formula above to construct GrassmannMatrixFunction for calculating functions of matrices.
? GrassmannMatrixFunction

GrassmannMatrixFunction[f[A]] calculates a function f of a Grassmann matrix A with distinct eigenvalues, where f is a pure function. GrassmannMatrixFunction[{fx, x}, A] calculates a function fx of a Grassmann matrix A with distinct eigenvalues, where fx is a formula with x as variable.
It should be remarked that, just as functions of a Grassmann number will have different forms in spaces of different dimension, so too will functions of a Grassmann matrix.
expA = GrassmannMatrixFunction[Exp[A]]

(The result is a matrix of Grassmann numbers whose entries involve terms in x, z and x z.)
We then calculate the logarithm of this exponential and see that it is A as we expect.
GrassmannMatrixFunction[Log[expA]]

{{1 + x, x z}, {2, 3 - z}}
Trigonometric functions
We compute the square of the sine and the square of the cosine of A:
s2A = GrassmannMatrixFunction[{Sin[x]^2, x}, A]

c2A = GrassmannMatrixFunction[{Cos[x]^2, x}, A]

(The results are matrices of Grassmann numbers whose entries involve sines and cosines of the scalar eigenvalues together with terms in x, z and x z.)
Symbolic matrices
B = {{a + x, x z}, {b, c - z}}; MatrixForm[B]

( a + x  x z
  b      c - z )
Symbolic functions
Suppose we wish to compute the symbolic function f of the matrix A.
A // MatrixForm

( 1 + x  x z
  2      3 - z )
If we try to compute this we get an incorrect result because the derivatives evaluated at various scalar values are not recognized by GrassmannSimplify as scalars. Here we print out just three lines of the result.
Short[GrassmannMatrixFunction[f[A]], 3]

(Part of the result, containing unevaluated terms in f[1], f[3], f′[1] and f′[3].)
We see from this that the derivatives in question are the zeroth and first: f[_] and f′[_]. We therefore declare these patterns to be scalars so that the derivatives evaluated at any scalar arguments will be considered by GrassmannSimplify as scalars.
DeclareExtraScalars[{f[_], f′[_]}]

{a, b, c, d, e, f, g, h, %, f[_], (_ _) ? InnerProductQ, _, f′[_]}
fA = GrassmannMatrixFunction[f[A]]

(The result is a matrix of Grassmann numbers whose entries involve f[1], f[3], f′[1] and f′[3] together with terms in x, z and x z.)
The function f may be replaced by any specific function. We replace it here by Exp and check that the result is the same as that given above.
expA == (fA /. f → Exp) // Simplify

True
13.11 Supermatrices
To be completed.
A1 Guide to GrassmannAlgebra
To be completed
Thus the first step was taken toward an analysis that subsequently led to the new branch of mathematics presented here. However, I did not then recognize the rich and fruitful domain I had reached; rather, that result seemed scarcely worthy of note until it was combined with a related idea. While I was pursuing the concept of product in geometry as it had been established by my father, I concluded that not only rectangles but also parallelograms in general may be regarded as products of an adjacent pair of their sides, provided one again interprets the product, not as the product of their lengths, but as that of the two displacements with their directions taken into account. When I combined this concept of the product with that previously established for the sum, the most striking harmony resulted; thus whether I multiplied the sum (in the sense just given) of two displacements by a third displacement lying in the same plane, or the individual terms by the same displacement and added the products with due regard for their positive and negative values, the same result obtained, and must always obtain. This harmony did indeed enable me to perceive that a completely new domain had thus been disclosed, one that could lead to important results. Yet this idea remained dormant for some time since the demands of my job led me to other tasks; also, I was initially perplexed by the remarkable result that, although the laws of ordinary multiplication, including the relation of multiplication to addition, remained valid for this new type of product, one could only interchange factors if one simultaneously changed the sign (i.e. changed + into − and vice versa).
As with his earlier work on tides, the importance of this work was ignored. Since few copies were sold, most ended by being used as waste paper by the publisher. The failure to find acceptance for Grassmann's ideas was probably due to two main reasons. The first was that Grassmann was just a simple schoolteacher, and had none of the academic charisma that other contemporaries, like Hamilton for example, had. History seems to suggest that the acceptance of radical discoveries often depends more on the discoverer than the discovery. The second reason is that Grassmann adopted the format and the approach of the modern mathematician. He introduced and developed his mathematical structure axiomatically and abstractly. The abstract nature of the work, initially devoid of geometric or physical significance, was just too new and formal for the mathematicians of the day and they all seemed to find it too difficult. More fully than any earlier mathematician, Grassmann seems to have understood the associative, commutative and distributive laws; yet still, great mathematicians like Möbius found it unreadable, and Hamilton was led to write to De Morgan that to be able to read Grassmann he 'would have to learn to smoke'. In the year of publication of the Ausdehnungslehre (1844) the Jablonowski Society of Leipzig offered a prize for the creation of a mathematical system fulfilling the idea that Leibniz had sketched in 1679. Grassmann entered with 'Die Geometrische Analyse geknüpft an die von Leibniz erfundene geometrische Charakteristik', and was awarded the prize. Yet as with the Ausdehnungslehre it was subsequently received with almost total silence. However, in the few years following, three of Grassmann's contemporaries were forced to take notice of his work because of priority questions. In 1845 Saint-Venant published a paper in which he developed vector sums and products essentially identical to those already occurring in Grassmann's earlier works (Barr 1845).
In 1853 Cauchy published his method of 'algebraic keys' for solving sets of linear equations (Cauchy 1853). Algebraic keys behaved identically to Grassmann's units under the exterior product. In the same year Saint-Venant published an interpretation of the algebraic keys geometrically and in terms of determinants (Barr 1853). Since such results were fundamental to Grassmann's already published work he wrote a reply for Crelle's Journal in 1855 entitled 'Sur les différents genres de multiplication' in which he claimed priority over Cauchy and Saint-Venant and published some new results (Grassmann 1855). It was not until 1853 that Hamilton heard of the Ausdehnungslehre. He set to reading it and soon after wrote to De Morgan.
I have recently been reading more than a hundred pages of Grassmann's Ausdehnungslehre, with great admiration and interest ... If I could hope to be put in rivalship with Des Cartes on the one hand and with Grassmann on the other, my scientific ambition would be fulfilled.
During the period 1844 to 1862 Grassmann published seventeen scientific papers, including important papers in physics, and a number of mathematics and language textbooks. He edited a political paper for a time and published materials on the evangelization of China. This, on top of a heavy teaching load and the raising of a large family. However, this same period saw only a few mathematicians ~ Hamilton, Cauchy, Möbius, Saint-Venant, Bellavitis and Cremona ~ having any acquaintance with, or appreciation of, his work.
In 1862 Grassmann published a completely rewritten Ausdehnungslehre: Die Ausdehnungslehre: Vollständig und in strenger Form. In the foreword Grassmann discussed the poor reception accorded his earlier work and stated that the content of the new book was presented in 'the strongest mathematical form that is actually known to us; that is Euclidean'. It was a book of theorems and proofs largely unsupported by physical example. This apparently was a mistake, for the reception accorded this new work was as quiet as that accorded the first, although it contained many new results including a solution to Pfaff's problem. Friedrich Engel (the editor of Grassmann's collected works) comments: 'As in the first Ausdehnungslehre so in the second: matters which Grassmann had published in it were later independently rediscovered by others, and only much later was it realized that Grassmann had published them earlier' (Engel 1896). Thus Grassmann's works were almost totally neglected for forty-five years after his first discovery. In the second half of the 1860s recognition slowly started to dawn on his contemporaries, among them Hankel, Clebsch, Schlegel, Klein, Noth, Sylvester, Clifford and Gibbs. Gibbs discovered Grassmann's works in 1877 (the year of Grassmann's death), and Clifford discovered them in depth about the same time. Both became quite enthusiastic about Grassmann's new mathematics. Grassmann's activities after 1862 continued to be many and diverse.
His contribution to philology rivals his contribution to mathematics. In 1849 he had begun a study of Sanskrit and in 1870 published his Wörterbuch zum Rig-Veda, a work of 1784 pages, and his translation of the Rig-Veda, a work of 1123 pages, both still in use today. In addition he published on mathematics, languages, botany, music and religion. In 1876 he was made a member of the American Oriental Society, and received an honorary doctorate from the University of Tübingen. On 26 September 1877 Hermann Grassmann died, departing from a world only just beginning to recognize the brilliance of the mathematical creations of one of its most outstanding eclectics.
A3 Notation
To be completed.
Operations
exterior product operation
regressive product operation
interior product operation
generalized product operation
Clifford product operation
hypercomplex product operation
complement operation
vector subspace complement operation
cross product operation
measure
defined equal to
equal to
congruence
Elements
a, b, c, …      scalars
x, y, z, …      1-elements or vectors
α, β, γ, …      m-elements (the grade m shown as an underscript)
X, Y, Z, …      Grassmann numbers
ν1, ν2, …       position vectors
P1, P2, …       points
L1, L2, …       lines
L1, L2, …       lines
Π1, Π2, …       planes
Declarations
declare a linear or vector space of n dimensions
declare an n-space comprising an origin point and a vector space of n dimensions
declare extra scalar symbols
declare extra vector symbols
Special objects
unit scalar (1)
unit n-element
origin point
dimension of the currently declared space
symbolic dimension of a space (n)
symbolic congruence factor (c)
determinant of the metric tensor (g)
Spaces
L0      linear space of scalars, or field of scalars
L1      linear space of 1-elements, the underlying linear space
Lm      linear space of m-elements
Ln      linear space of n-elements
Basis elements
ei      basis 1-element
e^i     contravariant basis 1-element
ei m    basis m-element, or covariant basis m-element
e^i m   contravariant basis m-element
cobasis element of ei m
A4 Glossary
To be completed.
Ausdehnungslehre
The term Ausdehnungslehre is variously translated as 'extension theory', 'theory of extension', or 'calculus of extension'. Refers to Grassmann's original work and other early work in the same notational and conceptual tradition.

Bivector
A bivector is a sum of simple bivectors.

Bound vector
A bound vector B is the exterior product of a point and a vector: B = P∧x. It may also always be expressed as the exterior product of two points.

Bound vector space
A bound vector space is a linear space whose basis elements are interpreted as an origin point and vectors.

Bound bivector
A bound bivector B is the exterior product of a point P and a bivector W: B = P∧W.

Bound simple bivector
A bound simple bivector B is the exterior product of a point P and a simple bivector x∧y: B = P∧x∧y. It may also always be expressed as the exterior product of two points and a vector, or the exterior product of three points.

Cofactor
The cofactor of a minor M of a matrix A is the signed determinant formed from the rows and columns of A which are not in M. The sign may be determined from (-1)^Σ(ri + ci), where ri and ci are the row and column numbers.

Dimension of a linear space
The dimension of a linear space is the maximum number of independent elements in it.
Dimension of a Grassmann algebra
The dimension of a Grassmann algebra is the sum of the dimensions of its component exterior linear spaces. The dimension of a Grassmann algebra is then given by 2^n, where n is the dimension of the underlying linear space.

Direction
A direction is the space of a vector and is therefore the set of all vectors parallel to a given vector.

Displacement
A displacement is a physical interpretation of a vector. It may also be viewed as the difference of two points.

Exterior linear space
An exterior linear space of grade m, denoted Lm, is the linear space generated by m-elements.

Force
A force is a physical entity represented by a bound vector. This differs from common usage in which a force is represented by a vector. For reasons discussed in the text, common use does not provide a satisfactory model.

Force vector
A force vector is the vector of the bound vector representing the force.

General geometrically interpreted 2-element
A general geometrically interpreted 2-element U is the sum of a bound vector P∧x and a bivector W. That is, U = P∧x + W.
Geometric entities
Points, lines, planes, … are geometric entities. Each is defined as the space of a geometrically interpreted element. A point is a geometric 1-entity. A line is a geometric 2-entity. A plane is a geometric 3-entity.

Geometric interpretations
Points, weighted points, vectors, bound vectors, bivectors, … are geometric interpretations of m-elements.

Geometrically interpreted algebra
A geometrically interpreted algebra is a Grassmann algebra with a geometrically interpreted underlying linear space.

Grade
The grade of an m-element is m. The grade of a simple m-element is the number of 1-element factors in it. The grade of the exterior product of an m-element and a k-element is m+k. The grade of the regressive product of an m-element and a k-element is m+k-n. The grade of the complement of an m-element is n-m. The grade of the interior product of an m-element and a k-element is m-k. The grade of a scalar is zero. (The dimension n is the dimension of the underlying linear space.)

GrassmannAlgebra
The concatenated italicized term GrassmannAlgebra refers to the Mathematica software package which accompanies this book.

A Grassmann algebra
A Grassmann algebra is the direct sum of an underlying linear space L1, its field L0, and the exterior linear spaces generated by them:

L = L0 ⊕ L1 ⊕ L2 ⊕ … ⊕ Lm ⊕ … ⊕ Ln
The Grassmann algebra
The Grassmann algebra is used to describe that body of algebraic theory and results based on the Ausdehnungslehre, but extended to include more recent results and viewpoints.
Intersection
An intersection of two simple elements is any of the congruent elements defined by the intersection of their spaces.

Laplace expansion theorem
The Laplace expansion theorem states: If any r rows are fixed in a determinant, then the value of the determinant may be obtained as the sum of the products of the minors of rth order (corresponding to the fixed rows) by their cofactors.

Line
A line is the space of a bound vector. Thus a line consists of all the (perhaps weighted) points on it and all the vectors parallel to it.

Linear space
A linear space is a mathematical structure defined by a standard set of axioms. It is often referred to simply as a 'space'.

Minor
A minor of a matrix A is the determinant (or sometimes matrix) of degree (or order) k formed from A by selecting the elements at the intersection of k distinct columns and k distinct rows.

m-direction
An m-direction is the space of a simple m-vector. It is also therefore the set of all vectors parallel to a given simple m-vector.

m-element
An m-element is a sum of simple m-elements.

m-plane
An m-plane is the space of a bound simple m-vector. Thus an m-plane consists of all the (perhaps weighted) points on it and all the vectors parallel to it.

m-vector
An m-vector is a sum of simple m-vectors.

n-algebra
The term n-algebra is an alias for the phrase Grassmann algebra with an underlying linear space of n dimensions.
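The cofactor sign and the Laplace expansion theorem can be checked numerically. This Python sketch (numpy, itertools and the helper name laplace_det are my own illustrative choices, not from the book) expands a 4x4 determinant along two fixed rows:

```python
import itertools
import numpy as np

def laplace_det(A, fixed_rows=(0, 1)):
    """Determinant of A by Laplace expansion along the fixed rows:
    the sum over column selections of (minor) x (signed cofactor)."""
    n = A.shape[0]
    comp_rows = [i for i in range(n) if i not in fixed_rows]
    total = 0.0
    for cols in itertools.combinations(range(n), len(fixed_rows)):
        comp_cols = [j for j in range(n) if j not in cols]
        sign = (-1) ** (sum(fixed_rows) + sum(cols))   # (-1)^Sum(ri + ci)
        minor = np.linalg.det(A[np.ix_(list(fixed_rows), list(cols))])
        cofactor = sign * np.linalg.det(A[np.ix_(comp_rows, comp_cols)])
        total += minor * cofactor
    return total

A = np.array([[2.0, 1.0, 0.0, 3.0],
              [1.0, 4.0, 2.0, 1.0],
              [0.0, 2.0, 5.0, 2.0],
              [3.0, 1.0, 2.0, 6.0]])
print(np.isclose(laplace_det(A), np.linalg.det(A)))   # True
```

With 0-based row and column indices the sign works out the same as with the 1-based formula, since the two index sums differ by an even number.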
Origin
The origin " is the geometric interpretation of a specific 1-element as a reference point.

Plane
A plane is the space of a bound simple bivector. Thus a plane consists of all the (perhaps weighted) points on it and all the vectors parallel to it.

Point
A point P is the sum of the origin " and a vector x: P = " + x.

Point mass
A point mass is a physical interpretation of a weighted point.

Physical entities
Point masses, displacements, velocities, forces, moments, angular velocities, … are physical entities. Each is represented by a geometrically interpreted element. A point mass, displacement or velocity is a physical 1-entity. A force, moment or angular velocity is a physical 2-entity.

Physical representations
Points, weighted points, vectors, bound vectors, bivectors, … are geometric representations of physical entities such as point masses, displacements, velocities, forces, moments. Physical entities are represented by geometrically interpreted elements.

Scalar
A scalar is an element of the field L0 of the underlying linear space L1. A scalar is of grade zero.

Screw
A screw is a geometrically interpreted 2-element in a three-dimensional (physical) space (four-dimensional linear space) in which the bivector is orthogonal to the vector of the bound vector. The bivector is necessarily simple since the vector subspace is three-dimensional.

Simple element
A simple element is an element which may be expressed as the exterior product of 1-elements.
Simple bivector
A simple bivector V is the exterior product of two vectors: V = x∧y.

Simple m-element
A simple m-element is the exterior product of m 1-elements.

Simple m-vector
A simple m-vector is the exterior product of m vectors.

Space of a simple m-element
The space of a simple m-element a is the set of all 1-elements x such that a∧x = 0.
The space of a simple m-element is a linear space of dimension m.

Space of a non-simple m-element
The space of a non-simple m-element is the union of the spaces of its component simple m-elements.

2-direction
A 2-direction is the space of a simple bivector. It is therefore the set of all vectors parallel to a given simple bivector.

Underlying linear space
The underlying linear space of a Grassmann algebra is the linear space L1 of 1-elements, which together with the exterior product operation, generates the algebra. The dimension of an underlying linear space is denoted by the symbol n.

Underlying bound vector space
An underlying bound vector space is an underlying linear space whose basis elements are interpreted as an origin point " and vectors. It can be shown that from this basis a second basis can be constructed, all of whose basis elements are points.

Union
A union of two simple elements is any of the congruent elements which define the union of their spaces.

Vector
A vector is a geometric interpretation of a 1-element.
Vector space A vector space is a linear space in which the elements are interpreted as vectors. Vector subspace of a geometrically interpreted underlying linear space The vector subspace of a geometrically interpreted underlying linear space is the subspace of elements which do not involve the origin. Weighted point A weighted point is a scalar multiple a of a point P: a P.
A5 Bibliography
To be completed.
Bibliography
Note
This bibliography is specifically directed at covering the algebraic implications of Grassmann's work. It is outside the scope of this book to cover any applications of Grassmann's work to calculus (for example, the theory of exterior differential forms), nor to the discussion of other algebraic systems (for example Clifford algebra) except where they clearly intersect with Grassmann's algebraic concepts. Of course, a result which may be expressed by an author in one system may well find expression in others.

Armstrong H L 1959
'On an alternative definition of the vector product in n-dimensional vector analysis' Matrix and Tensor Quarterly, IX no 4, pp 107-110.
The author proposes a definition equivalent to the complement of the exterior product of n-1 vectors.

Ball R S 1876
The Theory of Screws: A Study in the Dynamics of a Rigid Body Dublin
See the note on Hyde (1888).

Ball R S 1900
A Treatise on the Theory of Screws Cambridge University Press (1900). Reprinted 1998.
A classical work on the theory of screws, containing an annotated bibliography in which Ball refers to the Ausdehnungslehre of 1862: 'This remarkable work contains much that is of instruction and interest in connection with the present theory ... Here we have a very general theory which includes screw coordinates as a special case.' Ball does not use Grassmann's methods in his treatise.

Barton H 1927
'A Modern Presentation of Grassmann's Tensor Analysis' American Journal of Mathematics, XLIX, pp 598-614.
This paper covers similar ground to that of Moore (1926).
Bourbaki N 1948
Algèbre Actualités Scientifiques et Industrielles No.1044, Paris
Chapter III treats multilinear algebra.

Bowen R M and Wang C-C 1976
Introduction to Vectors and Tensors Plenum Press. Two volumes.
This is the only contemporary text on vectors and tensors I have sighted which relates points and vectors via the explicit introduction of the origin into the calculus (p 254).

Brand L 1947
Vector and Tensor Analysis Wiley, New York.
Contains a chapter on motor algebra which, according to the author in his preface '... is apparently destined to play an important role in mechanics as well as in line geometry'. There is also a chapter on quaternions.

Buchheim A 1884-1886
'On the Theory of Screws in Elliptic Space' Proceedings of the London Mathematical Society, xiv (1884) pp 83-98, xvi (1885) pp 15-27, xvii (1886) pp 240-254, xvii p 88.
The author writes 'My special object is to show that the Ausdehnungslehre supplies all the necessary materials for a calculus of screws in elliptic space. Clifford was apparently led to construct his theory of biquaternions by the want of such a calculus, but Grassmann's method seems to afford a simpler and more natural means of expression than biquaternions.' (xiv, p 90) Later he extends this theory to '... all kinds of space.' (xvi, p 15)

Burali-Forti C 1897
Introduction à la Géométrie Différentielle suivant la Méthode de H. Grassmann Gauthier-Villars, Paris
This work covers both algebra and differential geometry in the tradition of the Peano approach to the Ausdehnungslehre.

Burali-Forti C and Marcolongo R 1910
Éléments de Calcul Vectoriel Hermann, Paris
Mostly a treatise on standard vector analysis, but it does contain an appendix (pp 176-198) on the methods of the Ausdehnungslehre and some interesting historical notes on the vector calculi and their notations. The authors use the wedge to denote Gibbs' cross product, and use the symbol initially introduced by Grassmann to denote the scalar or inner product.
Cartan É 1922
Leçons sur les Invariants Intégraux Hermann, Paris
In this work Cartan develops the theory of exterior differential forms, remarking that he has called them 'extérieures' since they obey Grassmann's rules of 'multiplication extérieure'.

Cartan É 1938
Leçons sur la Théorie des Spineurs Hermann, Paris
In the introduction Cartan writes "One of the principal aims of this work is to develop the theory of spinors systematically by giving a purely geometrical definition of these mathematical entities: because of this geometrical origin, the matrices used by physicists in quantum mechanics appear of their own accord, and we can grasp the profound origin of the property, possessed by Clifford algebras, of representing rotations in space having any number of dimensions". Contains a short section on multivectors and complements (the French term is supplément).

Cartan É 1946
Leçons sur la Géométrie des Espaces de Riemann Gauthier-Villars, Paris
This is a classic text (originating in 1925) on the application of the exterior calculus to differential geometry. Cartan begins with a chapter on exterior algebra and its expression in tensor index notation. He uses square brackets to denote the exterior product (as did Grassmann), and a wedge to denote the cross product.

Carvallo M E 1892
'La Méthode de Grassmann' Nouvelles Annales de Mathématiques, serie 3 XI, pp 8-37.
An exposition of some of Grassmann's methods applied to three dimensional geometry following the approach of Peano. It does not treat the interior product.

Chevalley C 1955
The Construction and Study of Certain Important Algebras Mathematical Society of Japan, Tokyo.
Lectures given at the University of Tokyo on graded, tensor, Clifford and exterior algebras.
Clifford W K 1873 'Preliminary Sketch of Biquaternions' Proceedings of the London Mathematical Society, IV, nos 64 and 65, pp 381-395 This paper includes an interesting discussion of the geometric nature of mechanical quantities. Clifford adopts the term 'rotor' for the bound vector, and 'motor' for the general sum of rotors. By analogy with the quaternion as a quotient of vectors he defines the biquaternion as a quotient of motors. Clifford W K 1878 'Applications of Grassmann's Extensive Algebra' American Journal of Mathematics Pure and Applied, I, pp 350-358 In this paper Clifford lays the foundations for general Clifford algebras. Clifford W K 1882 Mathematical Papers Reprinted by Chelsea Publishing Co, New York (1968). Of particular interest in addition to his two published papers above are the otherwise unpublished notes: 'Notes on Biquaternions' (~1873) 'Further Note on Biquaternions' (1876) 'On the Classification of Geometric Algebras' (1876) 'On the Theory of Screws in a Space of Constant Positive Curvature' (1876). Coffin J G 1909 Vector Analysis Wiley, New York. This is the second English text in the Gibbs-Heaviside tradition. It contains an appendix comparing the various notations in use at the time, including his view of the Grassmannian notation. Collins J V 1899-1900 'An elementary Exposition of Grassmann's Ausdehnungslehre or Theory of Extension' American Mathematical Monthly, 6 (1899) pp 193-198, 261-266, 297-301; 7 (1900) pp 31-35, 163-166, 181-187, 207-214, 253-258. This work follows in summary form the Ausdehnungslehre of 1862 as regards general theory but differs in its discussion of applications. It includes applications to geometry and brief applications to linear equations, mechanics and logic.
Coolidge J L 1940 'Grassmann's Calculus of Extension' in A History of Geometrical Methods Oxford University Press, pp 252-257 This brief treatment of Grassmann's work is characterized by its lack of clarity. The author variously describes an exterior product as 'essentially a matrix' and as 'a vector perpendicular to the factors' (p 254). And confusion arises between Grassmann's matrix and the division of two exterior products (p 256).

Cox H 1882 'On the application of Quaternions and Grassmann's Ausdehnungslehre to different kinds of Uniform Space' Cambridge Philosophical Transactions, XIII part II, pp 69-143. The author shows that the exterior product is the multiplication required to describe nonmetric geometry, for 'it involves no ideas of distance' (p 115). He then discusses exterior, regressive and interior products, applying them to geometry, systems of forces, and linear complexes, using the notation of 1844. In other papers Cox applies the Ausdehnungslehre to non-Euclidean geometry (1873) and to the properties of circles (1890).

Crowe M J 1967, 1985 A History of Vector Analysis Notre Dame 1967. Republished by Dover 1985. This is the most informative work available on the history of vector analysis from the discovery of the geometric representation of complex numbers to the development of the Gibbs-Heaviside system. Crowe's thesis is that the Gibbs-Heaviside system grew mostly out of quaternions rather than from the Ausdehnungslehre. His explanation of Grassmannian concepts is particularly accurate, in contradistinction to many who supply a more casual reference.

Dibag I 1974 'Factorization in Exterior Algebras' Journal of Algebra, 30, pp 259-262 The author develops necessary and sufficient conditions for an m-element to have a certain number of 1-element factors. He also shows that an (n-2)-element in an odd-dimensional space always has a 1-element factor.

Drew T B 1961 Handbook of Vector and Polyadic Analysis Reinhold, New York
Tensor analysis in invariant notation. Of particular interest here is a section (p 57) on 'polycross products', an extension of the (three-dimensional) cross product to polyads.

Efimov N V and Rozendorn E R 1975 Linear Algebra and Multi-Dimensional Geometry MIR, Moscow Contains a chapter on multivectors and exterior forms.

Fehr H 1899 Application de la Méthode Vectorielle de Grassmann à la Géométrie Infinitésimale Georges Carré, Paris Comprises an initial chapter on exterior algebra as well as standard differential geometry.

Fleming W H 1965 Functions of Several Variables Addison-Wesley Contains a chapter on exterior algebra.

Forder H G 1941 The Calculus of Extension Cambridge This text is one of the most recent devoted to an exposition of Grassmann's methods. It is an extensive work (490 pages) largely using Grassmann's notations and relying primarily on the Ausdehnungslehre of 1862. Its application is particularly to geometry, including many examples well illustrating the power of the methods, a chapter on forces, screws and linear complexes, and a treatment of the algebra of circles.

Gibbs J W 1886 'On multiple algebra' Address to the American Association for the Advancement of Science. In Collected Works, Gibbs 1928, vol 2. This paper is probably the most authoritative historical comparison of the different 'vectorial' algebras of the time. Gibbs was obviously very enthusiastic about the Ausdehnungslehre, and shows himself here to be one of Grassmann's greatest proponents.

Gibbs J W 1891 'Quaternions and the Ausdehnungslehre' Nature, 44, pp 79-82. Also in Collected Works, Gibbs 1928. Gibbs compares Hamilton's quaternions with Grassmann's Ausdehnungslehre and concludes that '... Grassmann's system is of indefinitely greater extension ...'. Here he also concludes that to Grassmann must be attributed the discovery of matrices.
Gibbs published a further three papers in Nature (also in Collected Works, Gibbs 1928) on the relationship between quaternions and vector analysis, providing an enlightening insight into the quaternion-vector analysis controversy of the time.
Gibbs J W 1928 The Collected Works of J. Willard Gibbs Ph.D. LL.D. Two volumes. Longmans, New York. In part 2 of Volume 2 is reprinted Gibbs' only personal work on vector analysis: Elements of Vector Analysis, Arranged for the Use of Students of Physics (1881-1884). This was not published elsewhere.

Grassmann H G 1844 Die lineale Ausdehnungslehre: ein neuer Zweig der Mathematik Leipzig. The full title is Die lineale Ausdehnungslehre: ein neuer Zweig der Mathematik dargestellt und durch Anwendungen auf die übrigen Zweige der Mathematik, wie auch auf die Statik, Mechanik, die Lehre vom Magnetismus und die Krystallonomie erläutert. This first book on Grassmann's new mathematics is known shortly as Die Ausdehnungslehre von 1844. It develops the theory of exterior multiplication and division and regressive exterior multiplication. It does not treat complements or interior products in the way of the Ausdehnungslehre of 1862. The original Die Ausdehnungslehre von 1844 was republished in 1878. The best source to this work is Volume 1 of Grassmann's collected works (1896), of which an English translation has been made by Lloyd C. Kannenberg (1995).

Grassmann H G 1855 'Sur les différents genres de multiplication' Crelle's Journal, 49, pp 123-141 This paper was written to claim priority over Cauchy for a method of solving linear equations.

Grassmann H G 1862 Die Ausdehnungslehre. Vollständig und in strenger Form Berlin. This is Grassmann's second attempt to publish his new discoveries in book form.
It adopts a substantially different approach to the Ausdehnungslehre of 1844, relying more on the theorem-proof approach. The work comprises two main parts: the first on the exterior algebra and the second on the theory of functions. The first part includes chapters on addition and subtraction, products in general, combinatorial products (exterior and regressive), the interior product, and applications to geometry. This is probably Grassmann's most important work. The best source is Volume 1 of Grassmann's collected works (1896), of which an English translation has been made by Lloyd C. Kannenberg (2000). In the collected works edition, the editor Friedrich Engel has appended extensive notes and comments, which Kannenberg has also translated.
Grassmann H G 1878 'Verwendung der Ausdehnungslehre für die allgemeine Theorie der Polaren und den Zusammenhang algebraischer Gebilde' Crelle's Journal, 84, pp 273-283. This is Grassmann's last paper. It contains, among other material, his most complete discussion on the notion of 'simplicity'.

Grassmann H G 1896-1911 Hermann Grassmanns Gesammelte Mathematische und Physikalische Werke Teubner, Leipzig. Volume 1 (1896), Volume 2 (1902, 1904), Volume 3 (1911). Grassmann's complete collected works appeared between 1896 and 1911 under the editorship of Friedrich Engel and with the collaboration of Jakob Lüroth, Eduard Study, Justus Grassmann, Hermann Grassmann jr. and Georg Scheffers. The following is a summary of their contents.

Volume 1: Die lineale Ausdehnungslehre: ein neuer Zweig der Mathematik (1844); Geometrische Analyse: geknüpft an die von Leibniz erfundene geometrische Charakteristik (1847); Die Ausdehnungslehre. Vollständig und in strenger Form (1862).
Volume 2: Papers on geometry, analysis, mechanics and physics.
Volume 3: Theorie der Ebbe und Flut (1840); further papers on mathematical physics.
Parts of Volume 1 (Die lineale Ausdehnungslehre (1844) and Geometrische Analyse (1847)) together with selected papers on mathematics and physics have been translated into English by Lloyd C Kannenberg and published as A New Branch of Mathematics (1995). Geometrische Analyse is Grassmann's prize-winning essay fulfilling Leibniz' search for an algebra of geometry. The remainder of Volume 1 (Die Ausdehnungslehre (1862)) has been translated into English by Lloyd C Kannenberg and published as Extension Theory (2000). Volume 2 comprises papers on geometry, analysis, analytical mechanics and mathematical physics plus two texts, one on arithmetic and the other on trigonometry. Volume 3 comprises Grassmann's earliest major work (Theorie der Ebbe und Flut (1840)) and further papers, particularly on wave theory. The Theory of Tides begins to apply Grassmann's new approach to vector analysis.

Grassmann H der Jüngere 1909, 1913, 1927 Projektive Geometrie der Ebene Teubner, Leipzig. A comprehensive treatment in three books using the methods of the elder H. Grassmann.
Greenberg M J 1976 'Element of area via exterior algebra' American Mathematical Monthly, 83, pp 274-275 The author suggests that treating elements of area by using the exterior product would be more satisfactory than the treatment normally given in calculus texts.

Greub W H 1967 Multilinear Algebra Springer-Verlag, Berlin. Contains chapters on exterior algebra.

Gurevich G B 1964 Foundations of the Theory of Algebraic Invariants Noordhoff, Groningen Contains a chapter on m-vectors (called here polyvectors) with an extensive treatment of the conditions for divisibility of an m-vector by one or more vectors (pp 354-395).

Hamilton W R 1853 Lectures on Quaternions Dublin The first English text on quaternions. The introduction briefly mentions Grassmann.

Hamilton W R 1866 Elements of Quaternions Dublin The editor Charles Joly remarks in his preface, in relation to the quaternion's associativity, that "For example, Grassmann's multiplication is sometimes associative, but sometimes it is not." The exterior product is of course associative. However, here it seems Joly may be suffering from a confusion caused by Grassmann's notation for expressions in which both exterior and regressive products appear. The second edition of 1899 was reprinted in 1969 by Chelsea Publishing Company.

Hardy A S 1895 Elements of Quaternions Ginn and Company, Boston The author claims this to be an introduction to quaternions at an elementary level.
Heath A E 1917 'Hermann Grassmann 1809-1877' The Monist, 27, pp 1-21; 'The Neglect of the Work of H. Grassmann' The Monist, 27, pp 22-35; 'The Geometric Analysis of Hermann Grassmann and its connection with Leibniz's characteristic' The Monist, 27, pp 36-56

Hestenes D 1966 Space-Time Algebra Gordon and Breach This work is a seminal exposition of Clifford algebra emphasising the geometric nature of the quantities and operations involved. The author writes "... ideas of Grassmann are used to motivate the construction of Clifford algebra and to provide a geometric interpretation of Clifford numbers. This is to be contrasted with other treatments of Clifford algebra which are for the most part formal algebra. By insisting on Grassmann's geometric viewpoint, we are led to look upon the Dirac algebra with new eyes." (p 2)

Hestenes D 1968 'Multivector Calculus' Journal of Mathematical Analysis and Applications, 24, pp 313-325 In the words of the author: "The object of this paper is to show how differential and integral calculus in many dimensions can be greatly simplified using Clifford algebra."

Hodge W V D 1952 Theory and Applications of Harmonic Integrals Cambridge The "star" operator defined by Hodge is of similar nature to Grassmann's complement operator.

Hunt K H 1970 Screw Systems in Spatial Kinematics (Screw Systems Surveyed and Applied to Jointed Rigid Bodies) Report MMERS 3, Department of Mechanical Engineering, Monash University, Clayton, Australia Although in this report the author has intentionally confined himself to well known methods of pure and analytical geometry, he is a strong proponent of the screw as the natural language for investigating spatial mechanisms.
Hyde E W 1884 'Calculus of Direction and Position' American Journal of Mathematics, VI, pp 1-13 In this paper the author compares quaternions to the methods of the Ausdehnungslehre and concludes that Grassmann's system is far preferable as a system of directed quantities.

Hyde E W 1888 'Geometric Division of Non-congruent Quantities' Annals of Mathematics, 4, pp 9-18 This paper deals with the concept of exterior division more extensively than did Grassmann.

Hyde E W 1888 'The Directional Theory of Screws' Annals of Mathematics, 4, pp 137-155 This paper is an account of the theory of screws using the Ausdehnungslehre. Hyde claims that "A screw evidently belongs thoroughly to the realm of the Directional Calculus and will not be easily or naturally treated by Cartesian methods; and Ball's treatment is throughout essentially Cartesian in its nature." Here he is referring to The Theory of Screws: A Study in the Dynamics of a Rigid Body (1876). In A Treatise on the Theory of Screws (1900) Ball comments: "Prof. Hyde proves by his [sic] calculus many of the fundamental theorems in the present theory in a very concise manner" (p 531).

Hyde E W 1890 The Directional Calculus based upon the Methods of Hermann Grassmann Ginn and Company, Boston The author discusses geometric applications in 2 and 3 dimensions including screws and complements of bound elements (for example, points, lines and planes).

Hyde E W 1906 Grassmann's Space Analysis Wiley, New York In the words of the author: "This little book is an attempt to present simply and concisely the principles of the "Extensive Analysis" as fully developed in the comprehensive treatises of Hermann Grassmann, restricting the treatment however to the geometry of two and three dimensional space."
Jahnke E 1905 Vorlesungen über die Vektorenrechnung mit Anwendungen auf Geometrie, Mechanik und Mathematische Physik Teubner, Leipzig This work is full of examples of application of the Ausdehnungslehre (notation of 1862) to geometry, mechanics and physics. Only two- and three-dimensional problems are considered.

Lasker E 1896 'An Essay on the Geometrical Calculus' Proceedings of the London Mathematical Society, XXVIII, pp 217-260 This work differs from most of the other papers of this era on the geometrical applications of the Ausdehnungslehre by concentrating on a space of arbitrary dimension rather than two or three.

Lewis G N 1910 'On four-dimensional vector calculus and its application in electrical theory' Proceedings of the American Academy of Arts and Sciences, XLVI, pp 165-181 A specialization of the Ausdehnungslehre to four dimensions and its applications to electromagnetism in Minkowskian terms. The author introduces the new concepts with the minimum of explanation: for example, the anti-symmetric properties of bivectors and trivectors are justified as conventions! (p 167)

Lotze A 1922 Die Grundgleichungen der Mechanik insbesondere Starrer Körper Teubner, Leipzig This short monograph is one of the rare works addressing mechanics using the methods of the Ausdehnungslehre. It treats the dynamics of systems of particles and the kinematics and dynamics of rigid bodies.

Macfarlane A 1904 Bibliography of Quaternions and Allied Systems of Mathematics Dublin Published for the International Association for Promoting the Study of Quaternions and Allied Systems of Mathematics, this bibliography together with supplements to 1913 contains about 2500 articles including many on the Ausdehnungslehre and vector analysis.
Marcus M 1966 'The Cauchy-Schwarz inequality in the exterior algebra' Quarterly Journal of Mathematics, 17, pp 61-63 The author shows that a classical inequality for positive definite hermitian matrices is a special case of the Cauchy-Schwarz inequality in the appropriate exterior algebra.
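The inequality referred to here is easy to check numerically for simple cases. The following is a minimal sketch (not taken from Marcus's paper) assuming an orthonormal basis, so that a decomposable bivector u⋀v is represented by its list of 2×2 minors and the induced inner product on bivectors is the ordinary dot product of those component lists:

```python
from itertools import combinations

def wedge2(u, v):
    # Components of the bivector u ^ v: all 2x2 minors u_i v_j - u_j v_i, i < j.
    return [u[i] * v[j] - u[j] * v[i] for i, j in combinations(range(len(u)), 2)]

def inner(a, b):
    # Induced inner product on bivectors (orthonormal basis assumed).
    return sum(x * y for x, y in zip(a, b))

# Two decomposable bivectors in a four-dimensional space.
u, v = [1.0, 2.0, 0.0, -1.0], [0.0, 1.0, 3.0, 2.0]
x, y = [2.0, 0.0, 1.0, 1.0], [1.0, -1.0, 0.0, 4.0]

A, B = wedge2(u, v), wedge2(x, y)
lhs = abs(inner(A, B))
rhs = (inner(A, A) * inner(B, B)) ** 0.5
assert lhs <= rhs  # Cauchy-Schwarz in the exterior algebra

# Anti-symmetry of the exterior product: swapping factors negates every component.
assert wedge2(v, u) == [-c for c in wedge2(u, v)]
```

The same check works in any dimension and for any pair of decomposable bivectors; the component lists are just the Plücker coordinates of the corresponding 2-elements.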
Marcus M 1975 Finite Dimensional Multilinear Algebra Marcel Dekker, New York
Part II contains chapters on Grassmann and Clifford algebras.
Mehmke R 1913 Vorlesungen über Punkt- und Vektorenrechnung (2 volumes) Teubner, Leipzig Volume 1 (394 pages) deals with the analysis of bound elements (points, lines and planes) and projective geometry.

Milne E A 1948 Vectorial Mechanics Methuen, London An exposition of three-dimensional vectorial (and tensorial) mechanics using 'invariant' notation. Milne is one of the rare authors who realises that physical forces and the linear momenta of particles are better modelled by line vectors (bound vectors) than by the usual vector algebra. He treats systems of line vectors (bound vectors) as vector pairs.

Moore C L E 1926 'Grassmannian Geometry in Riemannian Space' Journal of Mathematics and Physics, 5, pp 191-200 This paper treats the complement, and the exterior, regressive and interior products in a Riemannian space in tensor index notation using the alternating tensors and the generalised Kronecker symbol. This classic use of tensor notation does not enhance the readability of the exposition.

Murnaghan F D 1925 'The Generalised Kronecker Symbol and its Application to the Theory of Determinants' American Mathematical Monthly, 32, pp 233-241 The generalised Kronecker symbol is essentially the generalisation to an exterior product space of the usual Kronecker symbol.

Murnaghan F D 1925 'The Tensor Character of the Generalised Kronecker Symbol' Bulletin of the American Mathematical Society, 31, pp 323-329 The author states "It will be readily recognised that there is an intimate connection here with Grassmann's Ausdehnungslehre, and we believe, in fact, that a systematic exposition of this theory with the aid of the generalised Kronecker symbol would help to make it more widely understood".
Peano G 1895 'Essay on Geometrical Calculus' in Selected Works of Giuseppe Peano Chapter XV, pp 169-188, Allen and Unwin, London The essay is translated from 'Saggio di calcolo geometrico' Atti, Accad. Sci. Torino, 31, (1895-6) pp 952-975. Peano claims to have understood Grassmann's ideas by reconstructing them himself. His geometric ideas come through with clarity, substantiating his claim. Peano's principal exposition of Grassmann's work was in Calcolo geometrico secondo l'Ausdehnungslehre di H. Grassmann, Turin (1888), of which a translation of selected passages appears on pp 90-100 of the above Selected Works.

Peano G 1901 Formulaire de Mathématiques Gauthier-Villars, Paris A compendium of axioms and results. The last part (pp 192-209) is devoted to point and vector spaces and includes some interesting historical comments.

Pedoe D 1967 'On a geometrical theorem in exterior algebra' Canadian Journal of Mathematics, 19, pp 1187-1191 The author remarks 'This paper owes its inspiration to the remarkable book by H. G. Forder, The Calculus of Extension. Forder introduces many concepts which I find difficult to bring down to earth. But the methods developed in his book are powerful ones, and it is evident that much work can usefully be done in simplifying and interpreting some of the concepts he uses'. He does not mention Grassmann.

Saddler W 1927 'Apolar triads on a cubic curve' Proceedings of the London Mathematical Society, Series 2, 26, pp 249-256; 'Apolar tetrads on the Grassmann quartic surface' Journal of the London Mathematical Society, 2, pp 185-189 Exterior algebra applied to geometric construction.
Sain M 1976 'The Growing Algebraic Presence in Systems Engineering: An Introduction' Proceedings of the IEEE, 64, p 96 A modern algebraic discussion culminating in the definition of the exterior algebra and its relationship to the theory of determinants and some systems theoretical applications.
Schlegel V 1872, 1875 System der Raumlehre (2 volumes) Leipzig Geometry using Grassmann's methods.

Schouten J A 1951 Tensor Analysis for Physicists Oxford Contains a chapter on m-vectors from a tensor-analytic viewpoint.

Schweitzer A R 1950 Bulletin of the American Mathematical Society, 56: 'Grassmann's extensive algebra and modern number theory' (Part I, p 355; Part II, p 458); 'On the place of the algebraic equation in Grassmann's extensive algebra' (p 459); 'On the derivation of the regressive product in Grassmann's geometrical calculus' (p 463); 'A metric generalisation of Grassmann's geometric calculus' (p 464) Résumés only of these papers are printed.

Scott R F 1880 A Treatise on the Theory of Determinants Cambridge, London The author states in the preface that 'The principal novelty in the treatise lies in its systematic use of Grassmann's alternate units, by means of which the study of determinants is, I believe, much simplified.'

Shepard G C 1966 Vector Spaces of Finite Dimension Oliver and Boyd, London Chapter IV contains an introduction to exterior products via tensors and multilinear algebra.

Thomas J M 1962 Systems and Roots W Byrd Press The author uses exterior algebraic concepts in some of his network analysis.
Tonti E 1972 Accademia Nazionale dei Lincei, Serie VIII, Volume LII: 'On the Mathematical Structure of a Large Class of Physical Theories' (Fasc. 1, p 48); 'A Mathematical Model for Physical Theories' (Fasc. 2-3, pp 176, 351) These papers begin the author's investigations into the structure of physical theories, in which he considers the geometrical calculus to play an important part.

Tonti E 1975 On the Formal Structure of Physical Theories Report of the Istituto di Matematica del Politecnico di Milano In this report the author constructs a classification scheme for physical quantities and the equations of physical theories. The mathematical structures needed for this, and which are reviewed in the report, are algebraic topology, exterior algebra, exterior differential forms, and Clifford algebra. The author shows that the underlying structure of physical theories is basically capable of a geometric interpretation.

Whitehead A N 1898 A Treatise on Universal Algebra (Volume 1) Cambridge No further volumes appeared. This is probably the best and most complete exposition of Grassmann's works in English (586 pages). The author recreates many of Grassmann's results, in many cases extending and clarifying them with original contributions. Whitehead considers non-Euclidean metrics and spaces of arbitrary dimension. However, like Grassmann, he does not distinguish between n-elements and scalars.

Willmore T J 1959 Introduction to Differential Geometry Oxford Includes a brief discussion of exterior algebra and its application to differential geometry (p 189).

Wilson E B 1901 Vector Analysis New York The first formally published book entirely devoted to presenting the Gibbs-Heaviside system of vector analysis, based on Gibbs' lectures and Heaviside's papers in the Electrician in 1893. (Wilson uses the term 'bivector', but by it means a vector with real and imaginary parts.)
Wilson E B and Lewis G N 1912 'The space-time manifold of relativity. The non-Euclidean geometry of mechanics and electromagnetics' Proceedings of the American Academy of Arts and Sciences, XLVIII, pp 387-507 This treatise uses a four-dimensional vector calculus developed by Lewis (see Lewis G N) by specializing the exterior calculus to four dimensions (with the scalar products of time-like vectors negative). This is a good example of the power of the exterior calculus.

Ziwet A 1885-6 'A Brief Account of H. Grassmann's Geometrical Theories' Annals of Mathematics, 2, (1885 pp 1-11; 1886 pp 25-34) In the words of the author: 'It is the object of the present paper to give in the simplest form possible, a succinct account of Grassmann's mathematical theories and methods in their application to plane geometry'. (Follows in the main Schlegel's System der Raumlehre.)
The second most complete exposition of the Ausdehnungslehre is Henry George Forder's The Calculus of Extension (Forder 1941). Forder's interest is mainly in the geometric applications of the theory of extension.

The only other books on Grassmann in English are those by Edward Wyllys Hyde, The Directional Calculus (Hyde 1890) and Grassmann's Space Analysis (Hyde 1906). They treat the theory of extension in two- and three-dimensional geometric contexts and include some applications to statics. Several topics, such as Hyde's treatment of screws, are original contributions.

The seminal papers on Clifford algebra are by William Kingdon Clifford and can be found in his collected works Mathematical Papers (Clifford 1882), republished in a facsimile edition by Chelsea.

Fortunately for those interested in the evolution of the emerging 'geometric algebras', The International Association for Promoting the Study of Quaternions and Allied Systems of Mathematics published a bibliography (Macfarlane 1904) which, together with supplements to 1913, contains about 2500 articles. It therefore most likely contains all the works on the Ausdehnungslehre and related subjects up to 1913.

The only other recent text devoted specifically to Grassmann algebra (to the author's knowledge as of 2001) is Arno Zaddach's Grassmanns Algebra in der Geometrie, BI-Wissenschaftsverlag (Zaddach 1994).
Index
To be completed.