
Contents

Articles
Combinatorics
Index of combinatorics articles
Outline of combinatorics
3-dimensional matching
Aanderaa–Karp–Rosenberg conjecture
Algorithmic Lovász local lemma
All-pairs testing
Analytic combinatorics
Arctic circle theorem
Arrangement of hyperplanes
Baranyai's theorem
Barycentric-sum problem
Bent function
Binomial coefficient
Block walking
Bondy's theorem
Borsuk–Ulam theorem
Bruck–Ryser–Chowla theorem
Butcher group
Cameron–Erdős conjecture
Catalan's constant
Chinese monoid
Combination
Combinatorial class
Combinatorial data analysis
Combinatorial explosion
Combinatorial explosion (communication)
Combinatorial hierarchy
Combinatorial number system
Combinatorial principles
Combinatorics and dynamical systems
Combinatorics and physics
Composition (number theory)
Constraint counting
Corners theorem
Coupon collector's problem (generating function approach)
Covering problem
Cycle index
Cyclic order
De Arte Combinatoria
De Bruijn torus
Delannoy number
Dickson's lemma
Difference set
DIMACS
Dinitz conjecture
Discrepancy theory
Discrete Morse theory
Disjunct matrix
Dividing a circle into areas
Dobinski's formula
Domino tiling
Enumerations of specific permutation classes
Equiangular lines
Erdős conjecture on arithmetic progressions
Erdős–Graham problem
Erdős–Fuchs theorem
Erdős–Rado theorem
European Journal of Combinatorics
Extremal combinatorics
Factorial
Factorial number system
Finite geometry
Finite topological space
Fishburn–Shepp inequality
Free convolution
Fuzzy transportation
Generalized arithmetic progression
Geometric combinatorics
Glaisher's theorem
Graph dynamical system
Group testing
History of combinatorics
Hunt–McIlroy algorithm
Ideal ring bundle
Incidence matrix
Incidence structure
Independence system
Infinitary combinatorics
Inversion (discrete mathematics)
Inversive plane
Isolation lemma
Johnson scheme
Josephus problem
Kalmanson combinatorial conditions
Kemnitz's conjecture
Langford pairing
Laver table
Lehmer code
Lindström–Gessel–Viennot lemma
List of factorial and binomial topics
Littlewood–Offord problem
Longest common subsequence problem
Longest increasing subsequence
Longest repeated substring problem
Lottery mathematics
Lovász local lemma
Lubell–Yamamoto–Meshalkin inequality
M. Lothaire
Markov spectrum
Meander (mathematics)
Method of distinguished element
Mnëv's universality theorem
Multi-index notation
Natural density
No-three-in-line problem
Occupancy theorem
Ordered partition of a set
Oriented matroid
Partial permutation
Partition (number theory)
Partition of a set
Pascal's rule
Percolation
Percolation theory
Perfect ruler
Permanent is sharp-P-complete
Permutation
Permutation pattern
Piecewise syndetic set
Pigeonhole principle
Probabilistic method
q-analog
q-Vandermonde identity
Random permutation statistics
Road coloring problem
Rota–Baxter algebra
Rule of product
Rule of sum
Semilinear set
Separable permutation
Sequential dynamical system
Series multisection
Set packing
Sharp-SAT
Shortest common supersequence
Shuffle algebra
Sicherman dice
Sidon sequence
Sim (pencil game)
Singmaster's conjecture
Small set (combinatorics)
Sparse ruler
Sperner's lemma
Spt function
Stable roommates problem
Star of David theorem
Star product
Stars and bars (combinatorics)
Sum-free sequence
Sunflower (mathematics)
Superpattern
Symbolic combinatorics
Szemerédi's theorem
Theory of relations
Toida's conjecture
Topological combinatorics
Trace monoid
Transformation (combinatorics)
Transversal (combinatorics)
Tucker's lemma
Twelvefold way
Umbral calculus
Uniform convergence (combinatorics)
Vexillary permutation
Virtual knot
Weighing matrix
Wilf–Zeilberger pair
Zero-sum problem

References
Article Sources and Contributors
Image Sources, Licenses and Contributors

Article Licenses
License


Combinatorics
Combinatorics is a branch of mathematics concerning the study of finite or countable discrete structures. Aspects of combinatorics include counting the structures of a given kind and size (enumerative combinatorics), deciding when certain criteria can be met, and constructing and analyzing objects meeting the criteria (as in combinatorial designs and matroid theory), finding "largest", "smallest", or "optimal" objects (extremal combinatorics and combinatorial optimization), and studying combinatorial structures arising in an algebraic context, or applying algebraic techniques to combinatorial problems (algebraic combinatorics).

Combinatorial problems arise in many areas of pure mathematics, notably in algebra, probability theory, topology, and geometry,[1] and combinatorics also has many applications in optimization, computer science, ergodic theory and statistical physics. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general theoretical methods were developed, making combinatorics into an independent branch of mathematics in its own right. One of the oldest and most accessible parts of combinatorics is graph theory, which also has numerous natural connections to other areas.

Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms. A mathematician who studies combinatorics is called a combinatorialist.

History
Basic combinatorial concepts and enumerative results appeared throughout the ancient world. In the 6th century BCE, the physician Sushruta asserts in the Sushruta Samhita that 63 combinations can be made out of 6 different tastes, taken one at a time, two at a time, etc., thus computing all 2^6 − 1 possibilities. The Roman historian Plutarch discusses an argument between Chrysippus (3rd century BCE) and Hipparchus (2nd century BCE) over a rather delicate enumerative problem, which was later shown to be related to Schröder numbers.[2] [3] In the Ostomachion, Archimedes (3rd century BCE) considers a tiling puzzle.

In the Middle Ages, combinatorics continued to be studied, largely outside of the European civilization. Notably, the Indian mathematician Mahavira (c. 850) provided the general formulae for the number of permutations and combinations. The philosopher and astronomer Rabbi Abraham ibn Ezra (c. 1140) established the symmetry of binomial coefficients, while a closed formula was obtained later by the talmudist and mathematician Levi ben Gerson (better known as Gersonides), in 1321.[4] The arithmetical triangle, a graphical diagram showing relationships among the binomial coefficients, was presented by mathematicians in treatises dating as far back as the 10th century, and would eventually become known as Pascal's triangle. Later, in Medieval England, campanology provided examples of what is now known as Hamiltonian cycles in certain Cayley graphs on permutations.[5]

An example of change ringing (with six bells), one of the earliest nontrivial results in graph theory.

During the Renaissance, together with the rest of mathematics and the sciences, combinatorics enjoyed a rebirth. Works of Pascal, Newton, Jacob Bernoulli and Euler became foundational in the emerging field. In modern times, the works of J. J. Sylvester (late 19th century) and Percy MacMahon (early 20th century) laid the foundation for enumerative and algebraic combinatorics. Graph theory also enjoyed an explosion of interest at the same time, especially in connection with the four color problem.

In the second half of the 20th century, combinatorics enjoyed a rapid growth, which led to the establishment of dozens of new journals and conferences in the subject.[6] In part, the growth was spurred by new connections and applications to other fields, ranging from algebra to probability, from functional analysis to number theory, etc. These connections shed the boundaries between combinatorics and parts of mathematics and theoretical computer science, but at the same time led to a partial fragmentation of the field.

Approaches and subfields of combinatorics


Enumerative combinatorics
Enumerative combinatorics is the most classical area of combinatorics, and concentrates on counting the number of certain combinatorial objects. Although counting the number of elements in a set is a rather broad mathematical problem, many of the problems that arise in applications have a relatively simple combinatorial description. The Fibonacci numbers are a basic example of a problem in enumerative combinatorics. The twelvefold way provides a unified framework for counting permutations, combinations and partitions.

Five binary trees on three vertices, an example of Catalan numbers.
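To make the flavor of such counting problems concrete, here is a small illustrative sketch in Python (added for illustration; the function name is ad hoc, not from the article) that computes the Catalan numbers from their standard recurrence; C_3 = 5 matches the five binary trees on three vertices in the figure.

def catalan(n):
    """Count binary trees on n vertices via the recurrence
    C_0 = 1, C_m = sum_{i=0}^{m-1} C_i * C_{m-1-i}."""
    c = [1] * (n + 1)
    for m in range(1, n + 1):
        c[m] = sum(c[i] * c[m - 1 - i] for i in range(m))
    return c[n]

print([catalan(n) for n in range(6)])  # [1, 1, 2, 5, 14, 42]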

Analytic combinatorics
Analytic combinatorics concerns the enumeration of combinatorial structures using tools from complex analysis and probability theory. In contrast with enumerative combinatorics, which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae.

Partition theory
Partition theory studies various enumeration and asymptotic problems related to integer partitions, and is closely related to q-series, special functions and orthogonal polynomials. Originally a part of number theory and analysis, it is now considered a part of combinatorics or an independent field. It incorporates the bijective approach and various tools in analysis and analytic number theory, and has connections with statistical mechanics.

A plane partition.
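For readers who want to experiment, the following is a minimal illustrative sketch in Python (not part of the original article) of the standard dynamic program for the partition function p(n), the central object of partition theory.

def partition_count(n):
    """Number of ways to write n as a sum of positive integers,
    ignoring order (the partition function p(n))."""
    p = [1] + [0] * n                       # p[0] = 1: the empty partition
    for part in range(1, n + 1):            # allow parts 1, 2, ..., n
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p[n]

print([partition_count(n) for n in range(11)])
# [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]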


Graph theory
Graphs are basic objects in combinatorics. The questions range from counting (e.g., the number of graphs on n vertices with k edges) to structural (e.g., which graphs contain Hamiltonian cycles) to algebraic questions (e.g., given a graph G and two numbers x and y, does the Tutte polynomial TG(x,y) have a combinatorial interpretation?). While there are very strong connections between graph theory and combinatorics, the two are sometimes thought of as separate subjects.[7]

Petersen graph.

Design theory
Design theory is a study of combinatorial designs, which are collections of subsets with certain intersection properties. Block designs are combinatorial designs of a special type. This area is one of the oldest parts of combinatorics; an early example is Kirkman's schoolgirl problem, proposed in 1850. The solution of the problem is a special case of a Steiner system; such systems play an important role in the classification of finite simple groups. The area has further connections to coding theory and geometric combinatorics.

Finite geometry
Finite geometry is the study of geometric systems having only a finite number of points. Structures analogous to those found in continuous geometries (Euclidean plane, real projective space, etc.) but defined combinatorially are the main items studied. This area provides a rich source of examples for design theory. It should not be confused with discrete geometry (combinatorial geometry).

Order theory
Order theory is the study of partially ordered sets, both finite and infinite. Various examples of partial orders appear in algebra, geometry, number theory and throughout combinatorics and graph theory. Notable classes and examples of partial orders include lattices and Boolean algebras.

Hasse diagram of the powerset of {x,y,z} ordered by inclusion.

Matroid theory
Matroid theory abstracts part of geometry. It studies the properties of sets (usually, finite sets) of vectors in a vector space that do not depend on the particular coefficients in a linear dependence relation. Not only the structure but also enumerative properties belong to matroid theory. Matroid theory was introduced by Hassler Whitney and studied as a part of order theory. It is now an independent field of study with a number of connections with other parts of combinatorics.


Extremal combinatorics
Extremal combinatorics studies extremal questions on set systems. The types of questions addressed in this case are about the largest possible graph which satisfies certain properties. For example, the largest triangle-free graph on 2n vertices is a complete bipartite graph Kn,n. Often it is too hard even to find the extremal answer f(n) exactly and one can only give an asymptotic estimate. Ramsey theory is another part of extremal combinatorics. It states that any sufficiently large configuration will contain some sort of order. It is an advanced generalization of the pigeonhole principle.
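The statement about triangle-free graphs (Mantel's theorem) can be verified by brute force for very small vertex counts. The sketch below is an illustrative Python check with ad hoc names, confirming that the maximum number of edges in a triangle-free graph on 2n vertices equals n², the edge count of Kn,n.

from itertools import combinations

def max_triangle_free_edges(v):
    """Brute-force the largest edge count of a triangle-free graph on v vertices.
    Only feasible for very small v, since 2^(v choose 2) graphs are examined."""
    pairs = list(combinations(range(v), 2))
    best = 0
    for mask in range(1 << len(pairs)):
        edges = {pairs[i] for i in range(len(pairs)) if mask >> i & 1}
        if len(edges) <= best:
            continue
        has_triangle = any({(a, b), (a, c), (b, c)} <= edges
                           for a, b, c in combinations(range(v), 3))
        if not has_triangle:
            best = len(edges)
    return best

for n in (1, 2, 3):
    print(2 * n, max_triangle_free_edges(2 * n), n * n)  # last two columns agree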

Probabilistic combinatorics
In probabilistic combinatorics, the questions are of the following type: what is the probability of a certain property for a random discrete object, such as a random graph? For instance, what is the average number of triangles in a random graph? Probabilistic methods are also used to determine the existence of combinatorial objects with certain prescribed properties (for which explicit examples might be difficult to find), simply by observing that the probability of randomly selecting an object with those properties is greater than 0. This approach (often referred to as the probabilistic method) proved highly effective in applications to extremal combinatorics and graph theory. A closely related area is the study of finite Markov chains, especially on combinatorial objects. Here again probabilistic tools are used to estimate the mixing time.

Self-avoiding walk in a square grid graph.

Often associated with Paul Erdős, who did the pioneering work on the subject, probabilistic combinatorics was traditionally viewed as a set of tools to study problems in other parts of combinatorics. However, with the growth of applications to the analysis of algorithms in computer science, as well as classical probability, additive and probabilistic number theory, the area recently grew to become an independent field of combinatorics.
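To make the triangle question above concrete: by linearity of expectation, the expected number of triangles in a random graph G(n, p) is C(n,3)·p³. The following illustrative Python sketch (with arbitrarily chosen parameters, not from the article) compares that formula with a Monte Carlo estimate.

import random
from itertools import combinations
from math import comb

def triangle_count(n, p, rng):
    """Sample G(n, p) and count its triangles."""
    adj = {pair: rng.random() < p for pair in combinations(range(n), 2)}
    return sum(adj[a, b] and adj[a, c] and adj[b, c]
               for a, b, c in combinations(range(n), 3))

n, p, trials = 30, 0.2, 200
rng = random.Random(0)
empirical = sum(triangle_count(n, p, rng) for _ in range(trials)) / trials
print(empirical, comb(n, 3) * p ** 3)   # the two numbers should be close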

Algebraic combinatorics
Algebraic combinatorics is an area of mathematics that employs methods of abstract algebra, notably group theory and representation theory, in various combinatorial contexts and, conversely, applies combinatorial techniques to problems in algebra. Within the last decade or so, algebraic combinatorics came to be seen more expansively as the area of mathematics where the interaction of combinatorial and algebraic methods is particularly strong and significant. One of the fastest developing subfields within algebraic combinatorics is combinatorial commutative algebra.

Young diagram of a partition (5,4,1).

Combinatorics on words
Combinatorics on words deals with formal languages. It arose independently within several branches of mathematics, including number theory, group theory and probability. It has applications to enumerative combinatorics, fractal analysis, theoretical computer science, automata theory and linguistics. While many applications are new, the classical Chomsky–Schützenberger hierarchy of classes of formal grammars is perhaps the best known result in the field.

Construction of a Thue–Morse infinite word.
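As a small illustration (an added Python sketch, not part of the article), the Thue–Morse word shown in the figure can be generated by the parity-of-bit-count rule; the snippet also checks that a prefix of it contains no cube, i.e. no factor of the form xxx.

def thue_morse(n):
    """First n letters of the Thue-Morse word: letter i is the parity of the
    number of 1 bits in the binary expansion of i."""
    return "".join(str(bin(i).count("1") % 2) for i in range(n))

w = thue_morse(64)
print(w[:16])                      # 0110100110010110
has_cube = any(w[i:i + k] == w[i + k:i + 2 * k] == w[i + 2 * k:i + 3 * k]
               for k in range(1, len(w) // 3 + 1)
               for i in range(len(w) - 3 * k + 1))
print(has_cube)                    # False: no cube in this prefix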


Geometric combinatorics
Geometric combinatorics is related to convex and discrete geometry, in particular polyhedral combinatorics. It asks, for example, how many faces of each dimension a convex polytope can have. Metric properties of polytopes play an important role as well, e.g. the Cauchy theorem on the rigidity of convex polytopes. Special polytopes are also considered, such as permutohedra, associahedra and Birkhoff polytopes. Note that combinatorial geometry is an old-fashioned name for discrete geometry.

An icosahedron.

Topological combinatorics
Combinatorial analogs of concepts and methods in topology are used to study graph coloring, fair division, partitions, partially ordered sets, decision trees, necklace problems and discrete Morse theory. It should not be confused with combinatorial topology, which is an older name for algebraic topology.

Splitting a necklace with two cuts.

Arithmetic combinatorics
Arithmetic combinatorics arose out of the interplay between number theory, combinatorics, ergodic theory and harmonic analysis. It is about combinatorial estimates associated with arithmetic operations (addition, subtraction, multiplication, and division). Additive combinatorics refers to the special case when only the operations of addition and subtraction are involved. One important technique in arithmetic combinatorics is the ergodic theory of dynamical systems.
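A toy example of the quantities additive combinatorics studies is the size of a sumset A + A. The illustrative Python sketch below (with arbitrarily chosen parameters, not from the article) contrasts an arithmetic progression, whose sumset is as small as possible, with a random set of the same size.

import random

def sumset(a, b):
    """A + B = {x + y : x in A, y in B}."""
    return {x + y for x in a for y in b}

n = 50
progression = set(range(0, 3 * n, 3))           # arithmetic progression, |A| = n
rng = random.Random(1)
generic = set(rng.sample(range(10 ** 6), n))    # a random n-element set

print(len(sumset(progression, progression)))    # 2n - 1 = 99, as small as possible
print(len(sumset(generic, generic)))            # close to n(n+1)/2 = 1275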

Infinitary combinatorics
Infinitary combinatorics, or combinatorial set theory, is an extension of ideas in combinatorics to infinite sets. It is a part of set theory, an area of mathematical logic, but uses tools and ideas from both set theory and extremal combinatorics. Gian-Carlo Rota used the name continuous combinatorics[8] to describe probability and measure theory, since there are many analogies between counting and measure.


Related fields
Combinatorial optimization
Combinatorial optimization is the study of optimization on discrete and combinatorial objects. It started as a part of combinatorics and graph theory, but is now viewed as a branch of applied mathematics and computer science, related to operations research, algorithm theory and computational complexity theory.

Coding theory
Coding theory started as a part of design theory with early combinatorial constructions of error-correcting codes. The main idea of the subject is to design efficient and reliable methods of data transmission. It is now a large field of study, part of information theory.
Kissing spheres are connected to both coding theory and discrete geometry.
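As a tiny illustration of what "reliable transmission" means combinatorially, the minimum Hamming distance of a code controls how many bit errors it can detect or correct. The sketch below is illustrative Python using an ad hoc four-codeword code, not any standard code.

from itertools import combinations

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

# a small binary code: 4 codewords of length 6 (for illustration only)
code = ["000000", "000111", "111000", "111111"]
d = min(hamming(u, v) for u, v in combinations(code, 2))
print(d)                      # minimum distance 3 ...
print(d - 1, (d - 1) // 2)    # ... so 2 errors detected, 1 error corrected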

Discrete and computational geometry


Discrete geometry (also called combinatorial geometry) also began as a part of combinatorics, with early results on convex polytopes and kissing numbers. With the emergence of applications of discrete geometry to computational geometry, these two fields partially merged and became a separate field of study. There remain many connections with geometric and topological combinatorics, which themselves can be viewed as outgrowths of the early discrete geometry.

Combinatorics and dynamical systems


Combinatorial aspects of dynamical systems form another emerging field. Here dynamical systems can be defined on combinatorial objects. See for example graph dynamical system.

Combinatorics and physics


There are increasing interactions between combinatorics and physics, particularly statistical physics. Examples include an exact solution of the Ising model, and a connection between the Potts model on one hand, and the chromatic and Tutte polynomials on the other hand.

Notes
[1] Björner and Stanley, p. 2
[2] R. P. Stanley, "Hipparchus, Plutarch, Schröder, and Hough", Amer. Math. Monthly 104 (1997), no. 4, 344–350.
[3] L. Habsieger, M. Kazarian, S. Lando, "On the second number of Plutarch", Amer. Math. Monthly 105 (1998), no. 5, 446.
[4] History of Combinatorics (http://ncertbooks.prashanthellina.com/class_11.Mathematics.Mathematics/Ch-07(Permutation and Combinations FINAL 04.01.06).pdf), chapter in a textbook.
[5] Arthur T. White, "Ringing the Cosets", Amer. Math. Monthly 94 (1987), no. 8, 721–746; Arthur T. White, "Fabian Stedman: The First Group Theorist?", Amer. Math. Monthly 103 (1996), no. 9, 771–778.
[6] See Journals in Combinatorics and Related Fields (http://www.combinatorics.net/journals/) and Conferences on Combinatorics (http://www.combinatorics.net/conf/).
[7] 2-Digit MSC Comparison (http://www.math.gatech.edu/~sanders/graphtheory/writings/2-digit.html), by Daniel P. Sanders.
[8] Continuous and profinite combinatorics (http://faculty.uml.edu/dklain/cpc.pdf)


References
Björner, A. and Stanley, R. P., A Combinatorial Miscellany (http://www-math.mit.edu/~rstan/papers/comb.pdf)
Graham, R. L., Grötschel, M., and Lovász, L., eds. (1996). Handbook of Combinatorics, Volumes 1 and 2. Elsevier (North-Holland), Amsterdam, and MIT Press, Cambridge, Mass. ISBN 0-262-07169-X.
Lindner, Charles C. and Christopher A. Rodger (eds.) Design Theory, CRC-Press; 1st edition (October 31, 1997). ISBN 0-8493-3986-3.
van Lint, J. H., and Wilson, R. M. (2001). A Course in Combinatorics, 2nd Edition. Cambridge University Press. ISBN 0-521-80340-3.
Stanley, Richard P. (1997, 1999). Enumerative Combinatorics, Volumes 1 and 2 (http://www-math.mit.edu/~rstan/ec/). Cambridge University Press. ISBN 0-521-55309-1, ISBN 0-521-56069-1.
Combinatorial Analysis (http://encyclopedia.jrank.org/CLI_COM/COMBINATORIAL_ANALYSIS.html), an article in Encyclopædia Britannica Eleventh Edition.
Riordan, John (1958). An Introduction to Combinatorial Analysis, Wiley & Sons, New York (republished).

External links
Combinatorics (http://mathworld.wolfram.com/Combinatorics.html), a MathWorld article with many references.
Combinatorics (http://www.mathpages.com/home/icombina.htm), from a MathPages.com portal.
The Hyperbook of Combinatorics (http://www.combinatorics.net/hyper/), a collection of math articles links.
The Two Cultures of Mathematics (http://www.dpmms.cam.ac.uk/~wtg10/2cultures.pdf) by W. T. Gowers, an article on problem solving vs theory building.

Index of combinatorics articles


A
Abstract simplicial complex Addition chain Scholz conjecture Algebraic combinatorics Alternating sign matrix Almost disjoint sets Antichain Arrangement of hyperplanes Assignment problem

Quadratic assignment problem Audioactive decay


B
Barcode Matrix code QR Code Universal Product Code Bell polynomials Bertrand's ballot theorem Binary matrix Binomial theorem Block design

Symmetric balanced incomplete block design (SBIBD) Balanced incomplete block design (BIBD) Partially balanced incomplete block design (PBIBD) Block walking Boolean satisfiability problem 2-satisfiability 3-satisfiability Bracelet (combinatorics) Bruck–Chowla–Ryser theorem

C
Catalan number Cellular automaton Collatz conjecture Combination Combinatorial design Combinatorial number system Combinatorial optimization Combinatorial search Constraint satisfaction problem Conway's Game of Life Cycles and fixed points Cyclic order Cyclic permutation Cyclotomic identity


D
Data integrity Alternating bit protocol Checksum Cyclic redundancy check Luhn formula Error detection Error-detecting code Error-detecting system Message digest Redundancy check Summation check De Bruijn sequence Deadlock Delannoy number Dining philosophers problem Mutual exclusion Rendezvous problem Derangement Dickson's lemma Dinitz conjecture Discrete optimization Dobinski's formula

E
Eight queens puzzle Entropy coding Enumeration Algebraic enumeration Combinatorial enumeration Burnside's lemma Erdős–Ko–Rado theorem Euler number

F
Faà di Bruno's formula Factorial number system Family of sets Faulhaber's formula Fifteen puzzle Finite geometry Finite intersection property


G
Game theory Combinatorial game theory Combinatorial game theory (history) Combinatorial game theory (pedagogy) Star (game theory) Zero game, fuzzy game Dots and Boxes Impartial game Digital sum Nim Nimber Sprague–Grundy theorem Partizan game Solved board games Col game Sim (pencil game) Sprouts (game) Surreal numbers Transposition table Black Path Game Sylver coinage Generating function Golomb coding Golomb ruler Graeco-Latin square Gray code

H
Hadamard matrices Complex Hadamard matrices Butson-type Hadamard matrices Generalized Hadamard matrices Regular Hadamard matrices Hamming distance Hash function Hash collision Perfect hash function Hat problem Heilbronn triangle problem Helly family Hypergeometric function identities Hypergeometric series

Hypergraph


I
Incidence structure Integer partition Ferrers graph

K
Kakeya needle problem Kirkman's schoolgirl problem Knapsack problem Kruskal–Katona theorem

L
Lagrange inversion theorem Lagrange reversion theorem Lah number Large number Latin square Levenshtein distance Lexicographical order Littlewood–Offord problem Lubell–Yamamoto–Meshalkin inequality (known as the LYM inequality) Lucas chain

M
MacMahon Master theorem Magic square Marriage theorem Perfect matching Matroid embedding Monge array Monomial order Moreau's necklace-counting function Motzkin number Multiplicities of entries in Pascal's triangle Multiset Munkres' assignment algorithm


N
Necklace (combinatorics) Necklace problem Negligible set Almost all Almost everywhere Null set Newton's identities

O
Ordered partition of a set Orthogonal design Complex orthogonal design Quaternion orthogonal design

P
Packing problem Bin packing problem Partition of a set Noncrossing partition Permanent Permutation Permutation matrix Permutations and combinations Josephus permutation Shuffling playing cards Pochhammer symbol Polyforms Polycubes Soma cube Polyiamonds Polyominoes Hexominoes Pentominoes Tetrominoes Polysquare puzzle Projective plane Property B Prüfer sequence


Q
q-analog q-binomial theorem (see Gaussian binomial coefficient) q-derivative q-series q-theta function q-Vandermonde identity

R
Rencontres numbers Rubik's cube How to solve the Rubik's Cube Optimal solutions for Rubik's Cube Rubik's Revenge

S
Schröder number Search algorithm Binary search Interpolation search Linear search Local search String searching algorithm

Aho–Corasick string matching algorithm Fuzzy string searching grep, agrep, wildcard character Knuth–Morris–Pratt algorithm Sequences with zero autocorrelation function Series-parallel networks problem Set cover problem Shuffling puzzle Small set (combinatorics) Sparse matrix, Sparse array Sperner family Sperner's lemma Stable marriage problem Steiner system Stirling number

Stirling transform String algorithm Straddling checkerboard Subsequence Longest common subsequence problem Optimal-substructure Subset sum problem

Symmetric functions Szemerédi's theorem


T
Thue–Morse sequence Tower of Hanoi Turn number Turing tarpit

U
Union-closed sets conjecture Urn problems (probability)

V
Vandermonde's identity

W
Weighing matrices Weighted round robin Deficit round robin Wigner–d'Espagnat inequality

Y
Young tableau


Outline of combinatorics
The following outline is presented as an overview of and topical guide to combinatorics:
Combinatorics – branch of mathematics concerning the study of finite or countable discrete structures.

Essence of combinatorics
Main article: Combinatorics
Matroid Greedoid Ramsey theory Van der Waerden's theorem Hales–Jewett theorem Umbral calculus, binomial type polynomial sequences Combinatorial species

Branches of combinatorics
Algebraic combinatorics Analytic combinatorics Arithmetic combinatorics Combinatorics on words Combinatorial design theory Enumerative combinatorics Extremal combinatorics Geometric combinatorics Graph theory Infinitary combinatorics Matroid theory Order theory Partition theory Probabilistic combinatorics Topological combinatorics

Multi-disciplinary fields that include combinatorics


Coding theory Combinatorial optimization Combinatorics and dynamical systems Combinatorics and physics Discrete geometry Phylogenetics


History of combinatorics
Main article: History of combinatorics

General combinatorial principles and methods


Fundamental theorem of combinatorial enumeration Combinatorial principles Trial and error, brute force search, bogosort, British Museum algorithm Pigeonhole principle Method of distinguished element Mathematical induction Recurrence relation, telescoping series Generating functions as an application of formal power series

Schrödinger method exponential generating function Stanley's reciprocity theorem Binomial coefficients and their properties Combinatorial proof Double counting (proof technique) Bijective proof Inclusion-exclusion principle Möbius inversion formula Parity, even and odd permutations Combinatorial Nullstellensatz Incidence algebra Greedy algorithm Divide and conquer algorithm Akra-Bazzi method Dynamic programming Branch and bound Birthday attack, birthday paradox Floyd's cycle-finding algorithm Reduction to linear algebra Sparsity Weight function Minimax algorithm Alpha-beta pruning Probabilistic method Sieve methods Analytic combinatorics Symbolic combinatorics Combinatorial class Exponential formula Twelvefold way

MacMahon Master theorem


Data structure concepts


Data structure Data type Abstract data type Algebraic data type Composite type Array Associative array Deque List

Linked list Queue Priority queue Skip list Stack Tree data structure Automatic garbage collection

Problem solving as an art


Heuristic Inductive reasoning How to Solve It Creative problem solving

Living with large numbers


Names of large numbers, long scale History of large numbers Graham's number Moser's number Skewes' number Large number notations

Conway chained arrow notation Hyper4 Knuth's up-arrow notation Moser polygon notation Steinhaus polygon notation Large number effects Exponential growth Combinatorial explosion Branching factor Granularity Curse of dimensionality

Concentration of measure


Persons influential in the field of combinatorics


Noga Alon George Andrews Eric Temple Bell Claude Berge Béla Bollobás Peter Cameron Louis Comtet John Horton Conway

On Numbers and Games Winning Ways for your Mathematical Plays Persi Diaconis Ada Dietz Paul Erdős Erdős conjecture Solomon Golomb Ben Green Tim Gowers Gyula O. H. Katona Imre Leader László Lovász Luke Pebody George Pólya Gian-Carlo Rota Cecil C. Rousseau Herbert Ryser Dick Schelp Vera T. Sós Joel Spencer Emanuel Sperner Richard P. Stanley Endre Szemerédi Terence Tao Carsten Thomassen Jacques Touchard Bartel Leendert van der Waerden Richard Wilson Herbert Wilf Doron Zeilberger


Journals
Annals of Combinatorics Ars Combinatoria Australasian Journal of Combinatorics Bulletin of the Institute of Combinatorics and Its Applications Combinatorica Combinatorics, Probability and Computing Computational Complexity Designs, Codes and Cryptography Discrete and Computational Geometry Discrete Applied Mathematics Discrete Mathematics Discrete Mathematics & Theoretical Computer Science Discrete Optimization Discussiones Mathematicae Graph Theory Electronic Journal of Combinatorics European Journal of Combinatorics The Fibonacci Quarterly Finite Fields and Their Applications Geombinatorics Graphs and Combinatorics Integers, Electronic Journal of Combinatorial Number Theory Journal of Algebraic Combinatorics Journal of Automata, Languages and Combinatorics Journal of Combinatorial Designs Journal of Combinatorial Mathematics and Combinatorial Computing Journal of Combinatorial Optimization Journal of Combinatorial Theory, Series A Journal of Combinatorial Theory, Series B Journal of Complexity Journal of Cryptology Journal of Graph Algorithms and Applications Journal of Graph Theory Journal of Integer Sequences (Electronic) Journal of Mathematical Chemistry Online Journal of Analytic Combinatorics Optimization Methods and Software The Ramanujan Journal Séminaire Lotharingien de Combinatoire SIAM Journal on Discrete Mathematics


Prizes
Euler Medal

Publications
Geombinatorics

References

External links


Combinatorics (http://mathworld.wolfram.com/Combinatorics.html), a MathWorld article with many references.
Combinatorics (http://www.mathpages.com/home/icombina.htm), from a MathPages.com portal.
The Hyperbook of Combinatorics (http://www.combinatorics.net/hyper/), a collection of math articles links.
The Two Cultures of Mathematics (http://www.dpmms.cam.ac.uk/~wtg10/2cultures.pdf) by W. T. Gowers, an article on problem solving vs theory building.

3-dimensional matching
In the mathematical discipline of graph theory, a 3-dimensional matching is a generalization of bipartite matching (a.k.a. 2-dimensional matching) to 3-uniform hypergraphs. Finding a largest 3-dimensional matching is a well-known NP-hard problem in computational complexity theory.

Definition
Let X, Y, and Z be finite, disjoint sets, and let T be a subset of X × Y × Z. That is, T consists of triples (x, y, z) such that x ∈ X, y ∈ Y, and z ∈ Z. Now M ⊆ T is a 3-dimensional matching if the following holds: for any two distinct triples (x1, y1, z1) ∈ M and (x2, y2, z2) ∈ M, we have x1 ≠ x2, y1 ≠ y2, and z1 ≠ z2.
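Stated in code, the definition simply requires that no coordinate value is reused. The following illustrative Python sketch (with ad hoc names, not from the article) checks whether a given set of triples is a 3-dimensional matching.

def is_3d_matching(triples):
    """True if no two triples share a first, second, or third coordinate."""
    xs = [t[0] for t in triples]
    ys = [t[1] for t in triples]
    zs = [t[2] for t in triples]
    return all(len(set(c)) == len(c) for c in (xs, ys, zs))

print(is_3d_matching({(1, "a", "p"), (2, "b", "q")}))             # True
print(is_3d_matching({(1, "a", "p"), (2, "b", "q"), (3, "a", "r")}))  # False: "a" is reused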

Example
The figure on the right illustrates 3-dimensional matchings. The set X is marked with red dots, Y is marked with blue dots, and Z is marked with green dots. Figure (a) shows the set T (gray areas). Figure (b) shows a 3-dimensional matching M with |M| = 2, and Figure (c) shows a 3-dimensional matching M with |M| = 3. The matching M illustrated in Figure (c) is a maximum 3-dimensional matching, i.e., it maximises |M|. The matchings illustrated in Figures (b) and (c) are maximal 3-dimensional matchings, i.e., they cannot be extended by adding more elements from T.

3-dimensional matchings. (a) Input T. (b)–(c) Solutions.


Comparison with bipartite matching


A 2-dimensional matching can be defined in a completely analogous manner. Let X and Y be finite, disjoint sets, and let T be a subset of X × Y. Now M ⊆ T is a 2-dimensional matching if the following holds: for any two distinct pairs (x1, y1) ∈ M and (x2, y2) ∈ M, we have x1 ≠ x2 and y1 ≠ y2.

In the case of 2-dimensional matching, the set T can be interpreted as the set of edges in a bipartite graph G = (X, Y, T); each edge in T connects a vertex in X to a vertex in Y. A 2-dimensional matching is then a matching in the graph G, that is, a set of pairwise non-adjacent edges.

Hence 3-dimensional matchings can be interpreted as a generalization of matchings to hypergraphs: the sets X, Y, and Z contain the vertices, each element of T is a hyperedge, and the set M consists of pairwise non-adjacent edges (edges that do not have a common vertex).

Comparison with set packing


A 3-dimensional matching is a special case of a set packing: we can interpret each element (x, y, z) of T as a subset {x, y, z} of X ∪ Y ∪ Z; then a 3-dimensional matching M consists of pairwise disjoint subsets.

Decision problem
In computational complexity theory, 3-dimensional matching is also the name of the following decision problem: given a set T and an integer k, decide whether there exists a 3-dimensional matching M ⊆ T with |M| ≥ k. This decision problem is known to be NP-complete; it is one of Karp's 21 NP-complete problems.[1] The problem is NP-complete even in the special case that k = |X| = |Y| = |Z|.[1] [2] [3] In this case, a 3-dimensional matching is not only a set packing but also an exact cover: the set M covers each element of X, Y, and Z exactly once.[4]

Optimization problem
A maximum 3-dimensional matching is a largest 3-dimensional matching. In computational complexity theory, this is also the name of the following optimization problem: given a set T, find a 3-dimensional matching M ⊆ T that maximizes |M|. Since the decision problem described above is NP-complete, this optimization problem is NP-hard, and hence it seems that there is no polynomial-time algorithm for finding a maximum 3-dimensional matching. However, there are efficient polynomial-time algorithms for finding a maximum bipartite matching (maximum 2-dimensional matching), for example, the Hopcroft–Karp algorithm.

Approximation algorithms
The problem is APX-complete, that is, it is hard to approximate within some constant.[5] [6] [7] On the positive side, for any constant ε > 0 there is a polynomial-time (3/2 + ε)-approximation algorithm for 3-dimensional matching.[5] [6]

There is a very simple polynomial-time 3-approximation algorithm for 3-dimensional matching: find any maximal 3-dimensional matching.[7] Just like a maximal matching is within factor 2 of a maximum matching,[8] a maximal 3-dimensional matching is within factor 3 of a maximum 3-dimensional matching.
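The 3-approximation mentioned above is straightforward to implement: greedily keep any triple that does not clash with the triples already chosen. The sketch below is an illustrative Python version with ad hoc names.

def greedy_maximal_3d_matching(triples):
    """Maximal (not necessarily maximum) 3-dimensional matching.
    Its size is within a factor 3 of a maximum matching."""
    used_x, used_y, used_z = set(), set(), set()
    matching = []
    for x, y, z in triples:
        if x not in used_x and y not in used_y and z not in used_z:
            matching.append((x, y, z))
            used_x.add(x)
            used_y.add(y)
            used_z.add(z)
    return matching

T = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1), (3, 3, 3)]
print(greedy_maximal_3d_matching(T))   # [(1, 1, 1), (3, 3, 3)] for this processing order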


Notes
[1] Karp (1972).
[2] Garey & Johnson (1979), Section 3.1 and problem SP1 in Appendix A.3.1.
[3] Korte & Vygen (2006), Section 15.5.
[4] Papadimitriou & Steiglitz (1998), Section 15.7.
[5] Crescenzi et al. (2000).
[6] Ausiello et al. (2003), problem SP1 in Appendix B.
[7] Kann (1991).
[8] Matching (graph theory)#Properties.

References
Ausiello, Giorgio; Crescenzi, Pierluigi; Gambosi, Giorgio; Kann, Viggo; Marchetti-Spaccamela, Alberto; Protasi, Marco (2003), Complexity and Approximation: Combinatorial Optimization Problems and Their Approximability Properties, Springer.
Crescenzi, Pierluigi; Kann, Viggo; Halldórsson, Magnús; Karpinski, Marek; Woeginger, Gerhard (2000), "Maximum 3-dimensional matching" (http://www.nada.kth.se/~viggo/wwwcompendium/node143.html), A Compendium of NP Optimization Problems.
Garey, Michael R.; Johnson, David S. (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, ISBN 0-7167-1045-5.
Kann, Viggo (1991), "Maximum bounded 3-dimensional matching is MAX SNP-complete", Information Processing Letters 37 (1): 27–35, doi:10.1016/0020-0190(91)90246-E.
Karp, Richard M. (1972), "Reducibility among combinatorial problems", in Miller, Raymond E.; Thatcher, James W., Complexity of Computer Computations, Plenum, pp. 85–103.
Korte, Bernhard; Vygen, Jens (2006), Combinatorial Optimization: Theory and Algorithms (3rd ed.), Springer.
Papadimitriou, Christos H.; Steiglitz, Kenneth (1998), Combinatorial Optimization: Algorithms and Complexity, Dover Publications.


Aanderaa–Karp–Rosenberg conjecture
In theoretical computer science, the Aanderaa–Karp–Rosenberg conjecture (also known as the Aanderaa–Rosenberg conjecture or the evasiveness conjecture) is a group of related conjectures about the number of questions of the form "Is there an edge between vertex u and vertex v?" that have to be answered to determine whether or not an undirected graph has a particular property such as planarity or bipartiteness. They are named after Stål Aanderaa, Richard M. Karp, and Arnold L. Rosenberg. According to the conjecture, for a wide class of properties, it is not possible to skip any questions: an algorithm for determining whether the graph has the property must examine every pair of vertices. A property satisfying this conjecture is called evasive.

More precisely, the Aanderaa–Rosenberg conjecture states that any deterministic algorithm must test at least a constant fraction of all possible pairs of vertices, in the worst case, to determine any non-trivial monotone graph property; in this context, a property is monotone if it remains true when edges are added (so planarity is not monotone, but non-planarity is monotone). A stronger version of this conjecture, called the evasiveness conjecture or the Aanderaa–Karp–Rosenberg conjecture, states that exactly n(n−1)/2 tests are needed. Versions of the problem for randomized algorithms and quantum algorithms have also been formulated and studied.

The deterministic Aanderaa–Rosenberg conjecture was proven by Rivest & Vuillemin (1975), but the stronger Aanderaa–Karp–Rosenberg conjecture remains unproven; this problem has been open for close to 35 years. Additionally, there is a large gap between the conjectured lower bound and the best proven lower bound for randomized and quantum query complexity.

Example
The property of being non-empty (that is, having at least one edge) is monotone, because adding another edge to a non-empty graph produces another non-empty graph. There is a simple algorithm for testing whether a graph is non-empty: loop through all of the pairs of vertices, testing whether each pair is connected by an edge. If an edge is ever found in this way, break out of the loop, and report that the graph is non-empty, and if the loop completes without finding any edges, then report that the graph is empty. On some graphs (for instance the complete graphs) this algorithm will terminate quickly, without testing every pair of vertices, but on the empty graph it tests all possible pairs before terminating. Therefore, the query complexity of this algorithm is n(n−1)/2: in the worst case, the algorithm performs n(n−1)/2 tests.

The algorithm described above is not the only possible method of testing for non-emptiness, but the Aanderaa–Karp–Rosenberg conjecture implies that every deterministic algorithm for testing non-emptiness has the same query complexity, n(n−1)/2. That is, the property of being non-empty is evasive. For this property, the result is easy to prove directly: if an algorithm does not perform n(n−1)/2 tests, it cannot distinguish the empty graph from a graph that has one edge connecting one of the untested pairs of vertices, and must give an incorrect answer on one of these two graphs.
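The non-emptiness tester described above can be written directly against an edge oracle. The following illustrative Python sketch (names are ad hoc) also counts the queries made, and indeed uses all n(n−1)/2 of them on the empty graph.

from itertools import combinations

def is_nonempty(n, has_edge):
    """Test non-emptiness of an n-vertex graph, given only an edge oracle.
    Returns (answer, number_of_queries)."""
    queries = 0
    for u, v in combinations(range(n), 2):
        queries += 1
        if has_edge(u, v):
            return True, queries
    return False, queries

n = 6
print(is_nonempty(n, lambda u, v: False))   # (False, 15): empty graph, all n(n-1)/2 queries
print(is_nonempty(n, lambda u, v: True))    # (True, 1): complete graph, quick exit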


Definitions
In the context of this article, all graphs will be simple and undirected, unless stated otherwise. This means that the edges of the graph form a set (and not a multiset) and each edge is a pair of distinct vertices. Graphs are assumed to have an implicit representation in which each vertex has a unique identifier or label and in which it is possible to test the adjacency of any two vertices, but for which adjacency testing is the only allowed primitive operation.

Informally, a graph property is a property of a graph that is independent of labeling. More formally, a graph property is a mapping from the set of all graphs to {0, 1} such that isomorphic graphs are mapped to the same value. For example, the property of containing at least 1 vertex of degree 2 is a graph property, but the property that the first vertex has degree 2 is not, because it depends on the labeling of the graph (in particular, it depends on which vertex is the "first" vertex). A graph property is called non-trivial if it doesn't assign the same value to all graphs. For instance, the property of being a graph is a trivial property, since all graphs possess this property. On the other hand, the property of being empty is non-trivial, because the empty graph possesses this property, but non-empty graphs do not.

A graph property is said to be monotone if the addition of edges does not destroy the property. Alternately, if a graph possesses a monotone property, then every supergraph of this graph on the same vertex set also possesses it. For instance, the property of being nonplanar is monotone: a supergraph of a nonplanar graph is itself nonplanar. However, the property of being regular is not monotone.

The big O notation is often used for query complexity. In short, f(n) is O(g(n)) if for large enough n, f(n) ≤ c·g(n) for some positive constant c. Similarly, f(n) is Ω(g(n)) if for large enough n, f(n) ≥ c·g(n) for some positive constant c. Finally, f(n) is Θ(g(n)) if it is both O(g(n)) and Ω(g(n)).

Query complexity
The deterministic query complexity of evaluating a function on n bits (x1, x2, ..., xn) is the number of bits xi that have to be read in the worst case by a deterministic algorithm to determine the value of the function. For instance, if the function takes value 0 when all bits are 0 and takes value 1 otherwise (this is the OR function), then the deterministic query complexity is exactly n. In the worst case, the first n − 1 bits read could all be 0, and the value of the function now depends on the last bit. If an algorithm doesn't read this bit, it might output an incorrect answer. (Such arguments are known as adversary arguments.) The number of bits read are also called the number of queries made to the input. One can imagine that the algorithm asks (or queries) the input for a particular bit and the input responds to this query.

The randomized query complexity of evaluating a function is defined similarly, except the algorithm is allowed to be randomized, i.e., it can flip coins and use the outcome of these coin flips to decide which bits to query. However, the randomized algorithm must still output the correct answer for all inputs: it is not allowed to make errors. Such algorithms are called Las Vegas algorithms, which distinguishes them from Monte Carlo algorithms which are allowed to make some error. Randomized query complexity can also be defined in the Monte Carlo sense, but the Aanderaa–Karp–Rosenberg conjecture is about the Las Vegas query complexity of graph properties.

Quantum query complexity is the natural generalization of randomized query complexity, of course allowing quantum queries and responses. Quantum query complexity can also be defined with respect to Monte Carlo algorithms or Las Vegas algorithms, but it is usually taken to mean Monte Carlo quantum algorithms.

In the context of this conjecture, the function to be evaluated is the graph property, and the input is a string of size n(n−1)/2, which gives the locations of the edges on an n vertex graph, since a graph can have at most n(n−1)/2 possible edges. The query complexity of any function is upper bounded by n(n−1)/2, since the whole input is read after making n(n−1)/2 queries, thus determining the input graph completely.


Deterministic query complexity


For deterministic algorithms, Rosenberg (1973) originally conjectured that for all nontrivial graph properties on n vertices, deciding whether a graph possesses this property requires Ω(n²) queries. The non-triviality condition is clearly required because there are trivial properties like "is this a graph?" which can be answered with no queries at all.

The conjecture was disproved by Aanderaa, who exhibited a directed graph property (the property of containing a "sink") which required only O(n) queries to test. A sink, in a directed graph, is a vertex of indegree n − 1 and outdegree 0. This property can be tested with less than 3n queries (Best, van Emde Boas & Lenstra 1974). An undirected graph property which can also be tested with O(n) queries is the property of being a scorpion graph, first described in Best, van Emde Boas & Lenstra (1974).

Then Aanderaa and Rosenberg formulated a new conjecture (the Aanderaa–Rosenberg conjecture) which says that deciding whether a graph possesses a non-trivial monotone graph property requires Ω(n²) queries.[1] This conjecture was resolved by Rivest & Vuillemin (1975) by showing that at least n²/16 queries are needed to test for any nontrivial monotone graph property. The bound was further improved to n²/9 by Kleitman & Kwiatkowski (1980), and then to n²/4 + o(n²) by Kahn, Saks & Sturtevant (1983).

Richard Karp conjectured the stronger statement (which is now called the evasiveness conjecture or the Aanderaa–Karp–Rosenberg conjecture) that "every nontrivial monotone graph property for graphs on n vertices is evasive."[2] A property is called evasive if determining whether a given graph has this property requires exactly n(n−1)/2 queries.[3] This conjecture says that the best algorithm for testing any nontrivial monotone property must query all possible edges. This conjecture is still open, although several special graph properties have been shown to be evasive for all n. The conjecture has been resolved for the case where n is prime by Kahn, Saks & Sturtevant (1983) using a topological approach. The conjecture has also been resolved for all non-trivial monotone properties on bipartite graphs by Yao (1988). Minor-closed properties have also been shown to be evasive for large n (Chakrabarti, Khot & Shi 2001).
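Aanderaa's sink example can be made concrete in code. The illustrative Python sketch below (names are ad hoc; arc(u, v) is an assumed oracle answering whether the directed edge u → v is present) finds the only possible sink with n − 1 elimination queries and verifies it with at most 2(n − 1) more, i.e. fewer than 3n queries in total.

def has_sink(n, arc):
    """Decide whether an n-vertex digraph (given by an arc oracle) has a sink,
    i.e. a vertex with out-degree 0 and in-degree n - 1, using fewer than 3n queries."""
    candidate = 0
    for v in range(1, n):                 # n - 1 elimination queries
        if arc(candidate, v):             # candidate has an outgoing arc: not a sink
            candidate = v                 # ... and v might still be one
        # else: v cannot be a sink, since candidate does not point to it
    for v in range(n):                    # at most 2(n - 1) verification queries
        if v == candidate:
            continue
        if arc(candidate, v) or not arc(v, candidate):
            return False
    return True

# example: every vertex points to vertex 3, which points nowhere
arcs = {(u, 3) for u in range(5) if u != 3}
print(has_sink(5, lambda u, v: (u, v) in arcs))    # True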

Randomized query complexity


Richard Karp also conjectured that Ω(n²) queries are required for testing nontrivial monotone properties even if randomized algorithms are permitted. No nontrivial monotone property is known which requires less than n²/4 queries to test. A linear lower bound (i.e., Ω(n)) follows from a very general relationship between randomized and deterministic query complexities. The first superlinear lower bound for this problem was given by Yao (1991) who showed that Ω(n log^(1/12) n) queries are required. This was further improved by King (1988) to Ω(n^(5/4)), and then by Hajnal (1991) to Ω(n^(4/3)). This was subsequently improved to the current best known lower bound of Ω(n^(4/3) log^(1/3) n) by Chakrabarti & Khot (2001).

Some recent results give lower bounds which are determined by the critical probability p of the monotone graph property under consideration. The critical probability p is defined as the unique p such that a random graph G(n, p) possesses this property with probability equal to 1/2. A random graph G(n, p) is a graph on n vertices where each edge is chosen to be present with probability p independent of all the other edges. Friedgut, Kahn & Wigderson (2002) gave a lower bound for any monotone property in terms of its critical probability p. For the same problem, O'Donnell et al. (2005) showed a lower bound of Ω(n^(4/3)/p^(1/3)).

As in the deterministic case, there are many special properties for which an Ω(n²) lower bound is known. Moreover, better lower bounds are known for several classes of graph properties. For instance, for testing whether the graph has a subgraph isomorphic to any given graph (the so-called subgraph isomorphism problem), the best known lower bound is Ω(n^(3/2)) due to Gröger (1992).


Quantum query complexity


For bounded-error quantum query complexity, the best known lower bound is Ω(n^(2/3) log^(1/6) n), as observed by Andrew Yao.[4] It is obtained by combining the randomized lower bound with the quantum adversary method. The best possible lower bound one could hope to achieve is Ω(n), unlike the classical case, due to Grover's algorithm which gives an O(n) query algorithm for testing the monotone property of non-emptiness.

Similar to the deterministic and randomized case, there are some properties which are known to have an Ω(n) lower bound, for example non-emptiness (which follows from the optimality of Grover's algorithm) and the property of containing a triangle. More interestingly, there are some graph properties which are known to have an Ω(n^(3/2)) lower bound, and even some properties with an Ω(n²) lower bound. For example, the monotone property of nonplanarity has an Ω(n^(3/2)) lower bound (Ambainis et al. 2008) and the monotone property of containing more than half the possible number of edges (also called the majority function) has an Ω(n²) lower bound (Beals et al. 2001).

Notes
[1] Triesch (1996)
[2] Lutz (2001)
[3] Kozlov (2008, pp. 226–228)
[4] The result is unpublished, but mentioned in Magniez, Santha & Szegedy (2005).

References
Ambainis, Andris; Iwama, Kazuo; Nakanishi, Masaki; Nishimura, Harumichi; Raymond, Rudy; Tani, Seiichiro; Yamashita, Shigeru (2008), "Quantum query complexity of Boolean functions with small on-sets", Proceedings of the 19th International Symposium on Algorithms and Computation, Lecture Notes in Computer Science, 5369, Gold Coast, Australia: Springer-Verlag, pp. 907–918, doi:10.1007/978-3-540-92182-0_79, ISBN 978-3-540-92181-3.
Beals, Robert; Buhrman, Harry; Cleve, Richard; Mosca, Michele; de Wolf, Ronald (2001), "Quantum lower bounds by polynomials", Journal of the ACM 48 (4): 778–797, doi:10.1145/502090.502097.
Best, M. R.; van Emde Boas, P.; Lenstra, H. W. (1974), A sharpened version of the Aanderaa-Rosenberg conjecture, Report ZW 30/74, Mathematisch Centrum Amsterdam, hdl:1887/3792.
Chakrabarti, Amit; Khot, Subhash (2001), "Improved Lower Bounds on the Randomized Complexity of Graph Properties", Proceedings of the 28th International Colloquium on Automata, Languages and Programming, Lecture Notes in Computer Science, 2076, Springer-Verlag, pp. 285–296, doi:10.1007/3-540-48224-5_24, ISBN 978-3-540-42287-7.
Chakrabarti, Amit; Khot, Subhash; Shi, Yaoyun (2001), "Evasiveness of subgraph containment and related properties", SIAM Journal on Computing 31 (3): 866–875, doi:10.1137/S0097539700382005.
Friedgut, Ehud; Kahn, Jeff; Wigderson, Avi (2002), "Computing Graph Properties by Randomized Subcube Partitions", Randomization and Approximation Techniques in Computer Science, Lecture Notes in Computer Science, 2483, Springer-Verlag, p. 953, doi:10.1007/3-540-45726-7_9, ISBN 978-3-540-44147-2.
Gröger, Hans Dietmar (1992), "On the randomized complexity of monotone graph properties" (http://www.inf.u-szeged.hu/actacybernetica/edb/vol10n3/pdf/Groger_1992_ActaCybernetica.pdf), Acta Cybernetica 10 (3): 119–127.
Hajnal, Péter (1991), "An Ω(n^(4/3)) lower bound on the randomized complexity of graph properties", Combinatorica 11 (2): 131–143, doi:10.1007/BF01206357.
Kahn, Jeff; Saks, Michael; Sturtevant, Dean (1983), "A topological approach to evasiveness", 24th Annual Symposium on Foundations of Computer Science (sfcs 1983), Los Alamitos, CA, USA: IEEE Computer Society, pp. 31–33, doi:10.1109/SFCS.1983.4, ISBN 0-8186-0508-1.
King, Valerie (1988), "Lower bounds on the complexity of graph properties", Proc. 20th ACM Symposium on Theory of Computing, Chicago, Illinois, United States, pp. 468–476, doi:10.1145/62212.62258, ISBN 0897912640.
Kleitman, D. J.; Kwiatkowski, D. J. (1980), "Further results on the Aanderaa-Rosenberg conjecture", Journal of Combinatorial Theory, Series B 28: 85–95, doi:10.1016/0095-8956(80)90057-X.
Kozlov, Dmitry (2008), Combinatorial Algebraic Topology, Springer-Verlag, ISBN 9783540730514.
Lutz, Frank H. (2001), "Some results related to the evasiveness conjecture", Journal of Combinatorial Theory, Series B 81 (1): 110–124, doi:10.1006/jctb.2000.2000.
Magniez, Frédéric; Santha, Miklos; Szegedy, Mario (2005), "Quantum algorithms for the triangle problem" (http://portal.acm.org/citation.cfm?id=1070591), Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms, Vancouver, British Columbia: Society for Industrial and Applied Mathematics, pp. 1109–1117, arXiv:quant-ph/0310134.
O'Donnell, Ryan; Saks, Michael; Schramm, Oded; Servedio, Rocco A. (2005), "Every decision tree has an influential variable", Proc. 46th IEEE Symposium on Foundations of Computer Science, pp. 31–39, doi:10.1109/SFCS.2005.34, ISBN 0-7695-2468-0.
Rivest, Ronald L.; Vuillemin, Jean (1975), "A generalization and proof of the Aanderaa-Rosenberg conjecture", Proc. 7th ACM Symposium on Theory of Computing, Albuquerque, New Mexico, United States, pp. 6–11, doi:10.1145/800116.803747.


Rosenberg, Arnold L. (1973), "On the time required to recognize properties of graphs: a problem", SIGACT News 5 (4): 15–16, doi:10.1145/1008299.1008302.
Triesch, Eberhard (1996), "On the recognition complexity of some graph properties", Combinatorica 16 (2): 259–268, doi:10.1007/BF01844851.
Yao, Andrew Chi-Chih (1988), "Monotone bipartite graph properties are evasive", SIAM Journal on Computing 17 (3): 517–520, doi:10.1137/0217031.
Yao, Andrew Chi-Chih (1991), "Lower bounds to randomized algorithms for graph properties", Journal of Computer and System Sciences 42 (3): 267–287, doi:10.1016/0022-0000(91)90003-N.

Further reading
Lovász, László; Young, Neal E. (2002). "Lecture Notes on Evasiveness of Graph Properties". arXiv:cs/0205031v1 [cs.CC].
Chronaki, Catherine E. (1990), A survey of Evasiveness: Lower Bounds on the Decision-Tree Complexity of Boolean Functions (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.1041), retrieved 2009-10-10.
Michael Saks. "Decision Trees: Problems and Results, Old and New" (http://www.math.rutgers.edu/~saks/PUBS/dtsurvey.pdf).


Algorithmic Lovász local lemma


In theoretical computer science, the algorithmic Lovász local lemma gives an algorithmic way of constructing objects that obey a system of constraints with limited dependence.

Given a finite set of bad events {A1, ..., An} in a probability space with limited dependence amongst the Ai's and with specific bounds on their respective probabilities, the Lovász local lemma proves that with non-zero probability all of these events can be avoided. However, the lemma is non-constructive in that it does not provide any insight on how to avoid the bad events.

If the events {A1, ..., An} are determined by a finite collection of mutually independent random variables, a simple Las Vegas algorithm with expected polynomial runtime proposed by Robin Moser and Gábor Tardos[1] can compute an assignment to the random variables such that all events are avoided.

Review of Lovász local lemma


The Lovász local lemma is a powerful tool commonly used in the probabilistic method to prove the existence of certain complex mathematical objects with a set of prescribed features. A typical proof proceeds by operating on the complex object in a random manner and uses the Lovász local lemma to bound the probability that any of the features is missing. The absence of a feature is considered a bad event, and if it can be shown that all such bad events can be avoided simultaneously with non-zero probability, the existence follows. The lemma itself reads as follows:

Let 𝒜 = {A1, ..., An} be a finite set of events in the probability space Ω. For each A ∈ 𝒜, let Γ(A) denote a subset of 𝒜 such that A is independent from the collection of events 𝒜 \ ({A} ∪ Γ(A)). If there exists an assignment of reals x : 𝒜 → (0, 1) to the events such that

  for all A ∈ 𝒜: Pr[A] ≤ x(A) · ∏_{B ∈ Γ(A)} (1 − x(B)),

then the probability of avoiding all events in 𝒜 is positive; in particular

  Pr[none of the events in 𝒜 occurs] ≥ ∏_{A ∈ 𝒜} (1 − x(A)) > 0.

Algorithmic version of the Lovász local lemma


The Lovász Local Lemma is non-constructive because it only allows us to conclude the existence of structural properties or complex objects but does not indicate how these can be found or constructed efficiently in practice. Note that random sampling from the probability space $\Omega$ is likely to be inefficient, since the probability of the event of interest,

$$\Pr\bigl[\overline{A_1} \wedge \cdots \wedge \overline{A_n}\bigr] \geq \prod_{A \in \mathcal{A}} \bigl(1 - x(A)\bigr),$$

is only bounded by a product of small numbers and is therefore likely to be very small. Under the assumption that all of the events in $\mathcal{A}$ are determined by a finite collection $\mathcal{P}$ of mutually independent random variables in $\Omega$, Robin Moser and Gábor Tardos proposed an efficient randomized algorithm that computes an assignment to the random variables in $\mathcal{P}$ such that all events in $\mathcal{A}$ are avoided.

Hence, this algorithm can be used to efficiently construct witnesses of complex objects with prescribed features for most problems to which the Lovász Local Lemma applies.


History
Prior to the recent work of Moser and Tardos, earlier work had also made progress in developing algorithmic versions of the Lovász Local Lemma. József Beck in 1991 first gave proof that an algorithmic version was possible.[2] In this breakthrough result, a stricter requirement was imposed upon the problem formulation than in the original non-constructive definition: Beck's approach required that, for each $A \in \mathcal{A}$, the number of dependencies of $A$ be bounded above by roughly $2^{k/48}$ (approximately), whereas the existential version of the Local Lemma permits a larger upper bound on dependencies, roughly $2^{k}/e$ (here $k$ measures the size of the events, e.g. the number of literals per clause in a satisfiability instance).

This bound is known to be tight. Since the initial algorithm, work has been done to push algorithmic versions of the Local Lemma closer to this tight value. Moser and Tardos's recent work is the most recent in this chain, and provides an algorithm that achieves this tight bound.

Algorithm
Let us first introduce some concepts that are used in the algorithm.

For any random variable $P \in \mathcal{P}$, let $v_P$ denote the current assignment (evaluation) of $P$. An assignment (evaluation) to all random variables is denoted $(v_P)_{P \in \mathcal{P}}$. The unique minimal subset of random variables in $\mathcal{P}$ that determine the event $A$ is denoted by $\operatorname{vbl}(A)$.

If the event $A$ is true under an evaluation $(v_P)_{P \in \mathcal{P}}$, we say that $(v_P)_{P \in \mathcal{P}}$ satisfies $A$; otherwise it avoids $A$.

Given a set of bad events $\mathcal{A}$ we wish to avoid, determined by a collection of mutually independent random variables $\mathcal{P}$, the algorithm proceeds as follows:

1. $(v_P)_{P \in \mathcal{P}} \leftarrow$ a random evaluation of $\mathcal{P}$
2. while $\exists A \in \mathcal{A}$ such that $A$ is satisfied by $(v_P)_{P \in \mathcal{P}}$: pick an arbitrary satisfied event $A \in \mathcal{A}$ and, for each random variable $P \in \operatorname{vbl}(A)$, set $v_P \leftarrow$ a new random evaluation of $P$
3. return $(v_P)_{P \in \mathcal{P}}$

In the first step, the algorithm randomly initializes the current assignment $v_P$ for each random variable $P \in \mathcal{P}$. This means that an assignment $v_P$ is sampled randomly and independently according to the distribution of the random variable $P$.

The algorithm then enters the main loop, which is executed until all events in $\mathcal{A}$ are avoided, at which point the algorithm returns the current assignment. At each iteration of the main loop, the algorithm picks an arbitrary satisfied event $A$ (either randomly or deterministically) and resamples all the random variables that determine $A$.
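A minimal Python sketch of this resampling loop. The helper names sample, vbl and is_satisfied are illustrative assumptions (they are not from the Moser–Tardos paper): the caller supplies them to encode the probability space and the bad events.

import random

def moser_tardos(variables, events, sample, vbl, is_satisfied):
    """Generic resampling loop (illustrative sketch, not an official implementation).

    variables                   : list of variable names
    events                      : list of bad events (opaque objects)
    sample(v)                   : draws a fresh random value for variable v
    vbl(A)                      : the variables that determine event A
    is_satisfied(A, assignment) : True if the bad event A currently occurs
    """
    # Step 1: independent random initialisation of every variable.
    assignment = {v: sample(v) for v in variables}
    # Step 2: while some bad event occurs, resample its variables.
    while True:
        bad = [A for A in events if is_satisfied(A, assignment)]
        if not bad:
            return assignment          # Step 3: all bad events avoided
        A = random.choice(bad)         # an arbitrary satisfied event
        for v in vbl(A):
            assignment[v] = sample(v)  # fresh, independent resample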

Main theorem
Let $\mathcal{P}$ be a finite set of mutually independent random variables in the probability space $\Omega$. Let $\mathcal{A}$ be a finite set of events determined by these variables. If there exists an assignment of reals $x \colon \mathcal{A} \to (0,1)$ to the events such that

$$\forall A \in \mathcal{A} : \quad \Pr[A] \leq x(A) \prod_{B \in \Gamma(A)} \bigl(1 - x(B)\bigr),$$

then there exists an assignment of values to the variables $\mathcal{P}$ avoiding all of the events in $\mathcal{A}$.

Moreover, the randomized algorithm described above resamples an event $A \in \mathcal{A}$ at most an expected $\frac{x(A)}{1 - x(A)}$ times before it finds such an evaluation. Thus the expected total number of resampling steps, and therefore the expected runtime of the algorithm, is at most $\sum_{A \in \mathcal{A}} \frac{x(A)}{1 - x(A)}$.

The proof of this theorem can be found in the paper by Moser and Tardos.[1]


Symmetric version
The requirement of an assignment function $x$ satisfying a set of inequalities in the theorem above is complex and not intuitive. But this requirement can be replaced by three simple conditions:

- $\forall A \in \mathcal{A} : |\Gamma(A)| \leq D$, i.e. each event $A$ depends on at most $D$ other events,
- $\forall A \in \mathcal{A} : \Pr[A] \leq p$, i.e. the probability of each event $A$ is at most $p$,
- $e p (D+1) \leq 1$, where $e$ is the base of the natural logarithm.

The version of the Lovász Local Lemma with these three conditions instead of the assignment function $x$ is called the symmetric Lovász Local Lemma. We can also state the symmetric algorithmic Lovász Local Lemma:

Let $\mathcal{P}$ be a finite set of mutually independent random variables and $\mathcal{A}$ be a finite set of events determined by these variables, as before. If the above three conditions hold, then there exists an assignment of values to the variables $\mathcal{P}$ avoiding all of the events in $\mathcal{A}$.

Moreover, the randomized algorithm described above resamples an event $A \in \mathcal{A}$ at most an expected $\frac{1}{D}$ times before it finds such an evaluation. Thus the expected total number of resampling steps, and therefore the expected runtime of the algorithm, is at most $\frac{|\mathcal{A}|}{D}$.
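To see why the three conditions suffice, one can check (a short added derivation, using the standard bound $(1 - \tfrac{1}{D+1})^{D} \geq \tfrac{1}{e}$) that the constant assignment $x(A) = \tfrac{1}{D+1}$ satisfies the hypothesis of the general theorem:

$$x(A) \prod_{B \in \Gamma(A)} \bigl(1 - x(B)\bigr) \;\geq\; \frac{1}{D+1}\Bigl(1 - \frac{1}{D+1}\Bigr)^{D} \;\geq\; \frac{1}{e(D+1)} \;\geq\; p \;\geq\; \Pr[A].$$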

Example
The following example illustrates how the algorithmic version of the Lovász Local Lemma can be applied to a simple problem.

Let $\Phi$ be a CNF formula over variables $X_1, \ldots, X_n$, containing $m$ clauses, with at least $k$ literals in each clause, and with each variable appearing in at most $\frac{2^k}{ke}$ clauses. Then $\Phi$ is satisfiable.

This statement can be proven easily using the symmetric version of the algorithmic Lovász Local Lemma. Let $X_1, \ldots, X_n$ be the set of mutually independent random variables, which are sampled uniformly at random. Firstly, we truncate each clause in $\Phi$ to contain exactly $k$ literals. Since each clause is a disjunction, this does not harm satisfiability, for if we can find a satisfying assignment for the truncated formula, it can easily be extended to a satisfying assignment for the original formula by reinserting the truncated literals.

Now, define a bad event $A_j$ for each clause in $\Phi$, where $A_j$ is the event that clause $j$ in $\Phi$ is unsatisfied by the current assignment. Since each clause contains $k$ literals (and therefore $k$ variables) and since all variables are sampled uniformly at random, we can bound the probability of each bad event by $p = \Pr[A_j] = 2^{-k}$. Since each variable can appear in at most $\frac{2^k}{ke}$ clauses and there are $k$ variables in each clause, each bad event $A_j$ can depend on at most $D \leq \frac{2^k}{e} - 1$ other events. Finally, $e p (D+1) \leq e \cdot 2^{-k} \cdot \frac{2^k}{e} = 1$, so it follows by the symmetric Lovász Local Lemma that the probability of a random assignment to $X_1, \ldots, X_n$ satisfying all clauses in $\Phi$ is non-zero, and hence such an assignment must exist.

Now, the algorithmic Lovász Local Lemma actually allows us to efficiently compute such an assignment by applying the algorithm described above. The algorithm proceeds as follows: it starts with a random truth value assignment to the variables $X_1, \ldots, X_n$ sampled uniformly at random. While there exists a clause in $\Phi$ that is unsatisfied, it randomly picks an unsatisfied clause $C$ in $\Phi$ and assigns a new truth value to all variables that appear in $C$, chosen uniformly at random. Once all clauses in $\Phi$ are satisfied, the algorithm returns the current assignment. This algorithm is in fact identical to WalkSAT, which is used to solve general Boolean satisfiability problems. Hence, the algorithmic Lovász Local Lemma proves that WalkSAT has an expected runtime of at most $\frac{m}{D}$ steps on CNF formulas that satisfy the two conditions above. A stronger version of the above statement is proven by Moser.[3]
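The sketch below specialises the resampling algorithm to CNF in Python. The DIMACS-like clause encoding (a clause is a list of signed integers) is an assumption made here for illustration; the loop terminates quickly only when the formula meets the conditions above.

import random

def resample_sat(num_vars, clauses, rng=random.Random(0)):
    """Resampling algorithm for CNF (illustrative sketch).

    clauses: iterable of clauses, each a list of non-zero ints;
             literal  v means "variable v is True",
             literal -v means "variable v is False".
    Returns a dict mapping variable -> bool satisfying every clause.
    """
    assign = {v: rng.random() < 0.5 for v in range(1, num_vars + 1)}

    def unsatisfied(clause):
        return not any((lit > 0) == assign[abs(lit)] for lit in clause)

    while True:
        bad = [c for c in clauses if unsatisfied(c)]
        if not bad:
            return assign
        clause = rng.choice(bad)            # pick an unsatisfied clause
        for lit in clause:                  # resample all of its variables
            assign[abs(lit)] = rng.random() < 0.5

# Example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(resample_sat(3, [[1, 2], [-1, 3], [-2, -3]]))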


Applications
As mentioned before, the algorithmic version of the Lovász Local Lemma applies to most problems for which the general Lovász Local Lemma is used as a proof technique. Some of these problems are discussed in the following articles:

- Probabilistic proofs of non-probabilistic theorems
- Random graph

Parallel version
The algorithm described above lends itself well to parallelization, since resampling two independent events $A, B \in \mathcal{A}$, i.e. events with $\operatorname{vbl}(A) \cap \operatorname{vbl}(B) = \emptyset$, in parallel is equivalent to resampling them sequentially. Hence, at each iteration of the main loop one can determine a maximal set $S$ of independent and satisfied events and resample all events in $S$ in parallel.

Under the assumption that the assignment function $x$ satisfies the slightly stronger conditions

$$\forall A \in \mathcal{A} : \quad \Pr[A] \leq (1 - \varepsilon)\, x(A) \prod_{B \in \Gamma(A)} \bigl(1 - x(B)\bigr)$$

for some $\varepsilon > 0$, Moser and Tardos proved that the parallel algorithm achieves a better runtime complexity. In this case, the parallel version of the algorithm takes an expected

$$O\!\left(\frac{1}{\varepsilon} \log \sum_{A \in \mathcal{A}} \frac{x(A)}{1 - x(A)}\right)$$

steps before it terminates. The parallel version of the algorithm can be seen as a special case of the sequential algorithm shown above, and so this result also holds for the sequential case.
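A small Python sketch of the parallel selection step (an added illustration, reusing the hypothetical CNF encoding from the example above): it greedily collects a maximal set of currently violated clauses that share no variables, which can then all be resampled together.

def maximal_independent_violated(clauses, unsatisfied):
    """Greedily pick a maximal set of violated clauses sharing no variables."""
    chosen, used_vars = [], set()
    for c in clauses:
        if unsatisfied(c) and used_vars.isdisjoint(abs(l) for l in c):
            chosen.append(c)
            used_vars.update(abs(l) for l in c)
    return chosen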

References
[1] Moser, Robin A.; Tardos, Gábor (2009). "A constructive proof of the general Lovász Local Lemma". arXiv:0903.0544. [2] Beck, József (1991), "An algorithmic approach to the Lovász Local Lemma. I", Random Structures and Algorithms 2 (4): 343–366, doi:10.1002/rsa.3240020402. [3] Moser, Robin A. (2008). "A constructive proof of the Lovász Local Lemma". arXiv:0810.4812.


All-pairs testing
All-pairs testing or pairwise testing is a combinatorial software testing method that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by "parallelizing" the tests of parameter pairs. The number of tests is typically O(nm), where n and m are the number of possibilities for each of the two parameters with the most choices. The reasoning behind all-pairs testing is this: the simplest bugs in a program are generally triggered by a single input parameter. The next simplest category of bugs consists of those dependent on interactions between pairs of parameters, which can be caught with all-pairs testing.[1] Bugs involving interactions between three or more parameters are progressively less common,[2] while at the same time being progressively more expensive to find by exhaustive testing, which has as its limit the exhaustive testing of all possible inputs.[3] Many testing methods regard all-pairs testing of a system or subsystem as a reasonable cost-benefit compromise between often computationally infeasible higher-order combinatorial testing methods, and less exhaustive methods which fail to exercise all possible pairs of parameters. Because no testing technique can find all bugs, all-pairs testing is typically used together with other quality assurance techniques such as unit testing, symbolic execution, fuzz testing, and code review.
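A small illustration of the idea in Python (my own sketch, not a real tool): a naive greedy generator that repeatedly picks the full combination covering the most not-yet-covered parameter pairs. It brute-forces candidates, so it only makes sense for tiny parameter spaces; production tools use far more efficient constructions.

from itertools import combinations, product

def all_pairs(parameters):
    """Greedy pairwise test-case generator (illustrative, not optimal).

    parameters: dict mapping parameter name -> list of possible values.
    Returns a list of test cases (dicts) covering every value pair of every
    two parameters at least once.
    """
    names = list(parameters)
    # Every (parameter value, parameter value) pair that must be covered.
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va, vb in product(parameters[a], parameters[b])}
    tests = []
    while uncovered:
        best, best_gain = None, -1
        for values in product(*(parameters[n] for n in names)):
            case = dict(zip(names, values))
            gain = sum(((a, case[a]), (b, case[b])) in uncovered
                       for a, b in combinations(names, 2))
            if gain > best_gain:
                best, best_gain = case, gain
        tests.append(best)
        uncovered -= {((a, best[a]), (b, best[b]))
                      for a, b in combinations(names, 2)}
    return tests

# Example: 3 parameters with 3, 2 and 2 values need far fewer than 12 tests.
print(all_pairs({"os": ["linux", "mac", "win"],
                 "browser": ["ff", "chrome"],
                 "locale": ["en", "de"]}))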

Notes
[1] Black, Rex (2007). Pragmatic Software Testing: Becoming an Effective and Efficient Test Professional. New York: Wiley. p.240. ISBN978-0-470-12790-2. [2] D.R. Kuhn, D.R. Wallace, A.J. Gallo, Jr. (June 2004). "Software Fault Interactions and Implications for Software Testing" (http:/ / csrc. nist. gov/ groups/ SNS/ acts/ documents/ TSE-0172-1003-1. pdf). IEEE Trans. on Software Engineering 30 (6). . [3] Practical Combinatorial Testing. SP 800-142. (http:/ / csrc. nist. gov/ groups/ SNS/ acts/ documents/ SP800-142-101006. pdf) (Report). Natl. Inst. of Standards and Technology. 2010. .

External links
Combinatorialtesting.com; Includes clearly written introductions to pairwise and other, more thorough, methods of combinatorial testing (http://www.combinatorialtesting.com) Hexawise.com - Pairwise test case generating tool with both free and commercial versions (also provides more thorough 3-way, 4-way, 5-way, and 6-way coverage solutions) (http://hexawise.com/) Pairwise Testing Comes of Age - Review including history, examples, issues, research (http://testcover.com/ pub/background/stareast2008.ppt) Pairwise Testing: Combinatorial Test Case Generation (http://www.pairwise.org/) Pairwise testing (http://www.developsense.com/testing/PairwiseTesting.html) All-pairs testing (http://www.mcdowella.demon.co.uk/allPairs.html) Pairwise and generalized t-way combinatorial testing (http://csrc.nist.gov/acts/) TestApi - the API library for testing, providing a variation generation API (http://testapi.codeplex.com) JCombinatorial - an open-source library that facilitates all-pairs testing with JUnit (http://code.google.com/p/ jcombinatorial/) "rdExpert Software for Orthogonal Array Testing" (http://www.phadkeassociates.com/ index_rdexperttestplanning.htm). Phadke Associates, Inc.. "Commercial toolset for Orthogonal Array and PairWise Testing."


Analytic combinatorics
Analytic combinatorics is a branch of combinatorics that describes combinatorial classes using generating functions, with formal power series that often correspond to analytic functions. Given a generating function, analytic combinatorics attempts to describe the asymptotic behavior of a counting sequence using algebraic techniques. This often involves analysis of the singularities of the associated analytic function. Two types of generating functions are commonly used: ordinary and exponential generating functions. An important technique for deriving generating functions is symbolic combinatorics.
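As a small added illustration of reading asymptotics off a generating function (the Catalan numbers are my own choice of example): their ordinary generating function $C(x) = \frac{1 - \sqrt{1 - 4x}}{2x}$ has a square-root singularity at $x = 1/4$, which predicts the growth rate $C_n \sim 4^n / (\sqrt{\pi}\, n^{3/2})$. The sketch below compares exact values with that estimate.

import math

# Catalan numbers via the recurrence C_{n+1} = sum_i C_i C_{n-i}; the
# square-root singularity of their OGF at x = 1/4 gives C_n ~ 4**n / (sqrt(pi) * n**1.5).
cat = [1]
for n in range(20):
    cat.append(sum(cat[i] * cat[n - i] for i in range(n + 1)))

for n in (5, 10, 20):
    approx = 4**n / (math.sqrt(math.pi) * n**1.5)
    print(n, cat[n], round(approx), round(cat[n] / approx, 3))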

References
Herbert Wilf, Generatingfunctionology [1], Academic Press, 1990, ISBN 0127519556. Philippe Flajolet and Robert Sedgewick, Analytic Combinatorics, Cambridge University Press, 2008, ISBN 0521898064, Free online version of the book [2].

References
[1] http://www.math.upenn.edu/~wilf/DownldGF.html [2] http://algo.inria.fr/flajolet/Publications/book.pdf

Arctic circle theorem


In mathematics, the arctic circle theorem of Jockusch, Propp & Shor (1998) states that random domino tilings of a large Aztec diamond tend to be frozen outside a certain "arctic circle".

References
Jockusch, William; Propp, James; Shor, Peter (1998), Random Domino Tilings and the Arctic Circle Theorem [1]

References
[1] http://arxiv.org/abs/math/9801068


Arrangement of hyperplanes
In geometry and combinatorics, an arrangement of hyperplanes is a finite set A of hyperplanes in a linear, affine, or projective space S. Questions about a hyperplane arrangement A generally concern geometrical, topological, or other properties of the complement, M(A), which is the set that remains when the hyperplanes are removed from the whole space. One may ask how these properties are related to the arrangement and its intersection semilattice. The intersection semilattice of A, written L(A), is the set of all subspaces that are obtained by intersecting some of the hyperplanes; among these subspaces are S itself, all the individual hyperplanes, all intersections of pairs of hyperplanes, etc. (excluding, in the affine case, the empty set). These subspaces are called the flats of A. L(A) is partially ordered by reverse inclusion. If the whole space S is 2-dimensional, the hyperplanes are lines; such an arrangement is often called an arrangement of lines. Historically, real arrangements of lines were the first arrangements investigated. If S is 3-dimensional one has an arrangement of planes.

General theory
The intersection semilattice and the matroid
The intersection semilattice L(A) is a meet semilattice and more specifically is a geometric semilattice. If the arrangement is linear or projective, or if the intersection of all hyperplanes is nonempty, the intersection lattice is a geometric lattice. (This is why the semilattice must be ordered by reverse inclusionrather than by inclusion, which might seem more natural but would not yield a geometric (semi)lattice.) When L(A) is a lattice, the matroid of A, written M(A), has A for its ground set and has rank function r(S) := codim(I), where S is any subset of A and I is the intersection of the hyperplanes in S. In general, when L(A) is a semilattice, there is an analogous matroid-like structure that might be called a semimatroid, which is a generalization of a matroid (and has the same relationship to the intersection semilattice as does the matroid to the lattice in the lattice case), but is not a matroid if L(A) is not a lattice.

Polynomials
For a subset B of A, let us define f(B) := the intersection of the hyperplanes in B; this is S if B is empty. The characteristic polynomial of A, written pA(y), can be defined by

$$p_A(y) := \sum_{B} (-1)^{|B|}\, y^{\dim f(B)},$$

summed over all subsets B of A except, in the affine case, subsets whose intersection is empty. (The dimension of the empty set is defined to be −1.) This polynomial helps to solve some basic questions; see below. Another polynomial associated with A is the Whitney-number polynomial wA(x, y), defined by

$$w_A(x, y) := \sum_{B} x^{\,n - \dim f(B)} (-1)^{|B|}\, y^{\dim f(B)},$$

summed over B ⊆ A such that f(B) is nonempty. Being a geometric lattice or semilattice, L(A) has a characteristic polynomial, pL(A)(y), which has an extensive theory (see matroid). Thus it is good to know that $p_A(y) = y^i\, p_{L(A)}(y)$, where i is the smallest dimension of any flat, except that in the projective case it equals $y^{i+1} p_{L(A)}(y)$. The Whitney-number polynomial of A is similarly related to that of L(A). (The empty set is excluded from the semilattice in the affine case specifically so that these relationships will be valid.)


The Orlik–Solomon algebra


The intersection semilattice determines another combinatorial invariant of the arrangement, the Orlik–Solomon algebra. To define it, fix a commutative subring K of the base field, and form the exterior algebra E of the vector space generated by the hyperplanes. A chain complex structure is defined on E with the usual boundary operator $\partial$. The Orlik–Solomon algebra is then the quotient of E by the ideal generated by elements of the form $e_{H_1} \wedge \cdots \wedge e_{H_p}$ (where $H_1, \ldots, H_p$ have an empty intersection) and by boundaries of elements of the same form for which $H_1 \cap \cdots \cap H_p$ has codimension less than p.

Real arrangements
In real affine space, the complement is disconnected: it is made up of separate pieces called cells or regions or chambers, each of which is either a bounded region that is a convex polytope, or an unbounded region that is a convex polyhedral region which goes off to infinity. Each flat of A is also divided into pieces by the hyperplanes that do not contain the flat; these pieces are called the faces of A. The regions are faces because the whole space is a flat. The faces of codimension 1 may be called the facets of A. The face semilattice of an arrangement is the set of all faces, ordered by inclusion. Adding an extra top element to the face semilattice gives the face lattice.

In two dimensions (i.e., in the real affine plane) each region is a convex polygon (if it is bounded) or a convex polygonal region which goes off to infinity. As an example, if the arrangement consists of three parallel lines, the intersection semilattice consists of the plane and the three lines, but not the empty set. There are four regions, none of them bounded. If we add a line crossing the three parallels, then the intersection semilattice consists of the plane, the four lines, and the three points of intersection. There are eight regions, still none of them bounded. If we add one more line, parallel to the last, then there are 12 regions, of which two are bounded parallelograms.

A typical problem about an arrangement in n-dimensional real space is to say how many regions there are, or how many faces of dimension 4, or how many bounded regions. These questions can be answered just from the intersection semilattice. For instance, two basic theorems are that the number of regions of an affine arrangement equals $(-1)^n p_A(-1)$ and the number of bounded regions equals $(-1)^n p_A(1)$. Similarly, the number of k-dimensional faces or bounded faces can be read off as the coefficient of $x^{n-k}$ in $(-1)^n w_A(-x, -1)$ or $(-1)^n w_A(-x, 1)$. (A worked example for the line arrangements described above is given at the end of this section.) Meiser (1993) designed a fast algorithm to determine the face of an arrangement of hyperplanes containing an input point.

Another question about an arrangement in real space is to decide how many regions are simplices (the n-dimensional generalization of triangles and tetrahedra). This cannot be answered based solely on the intersection semilattice.

A real linear arrangement has, besides its face semilattice, a poset of regions, a different one for each region. This poset is formed by choosing an arbitrary base region, B0, and associating with each region R the set S(R) consisting of the hyperplanes that separate R from B0. The regions are partially ordered so that R1 ≥ R2 if S(R1) contains S(R2). In the special case when the hyperplanes arise from a root system, the resulting poset is the corresponding Weyl group with the weak Bruhat order. In general, the poset of regions is ranked by the number of separating hyperplanes and its Möbius function has been computed (Edelman 1984).
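Worked example (an added check of the counts quoted above). For the arrangement of three parallel lines plus one transversal line in the plane, the defining sum for $p_A(y)$ has one term for the empty subset, four terms for the single lines, and three terms for the pairs consisting of a parallel line and the transversal (all other subsets have empty intersection), so

$$p_A(y) = y^2 - 4y + 3, \qquad (-1)^2 p_A(-1) = 1 + 4 + 3 = 8 \text{ regions}, \qquad (-1)^2 p_A(1) = 1 - 4 + 3 = 0 \text{ bounded regions}.$$

Adding the fifth line, parallel to the transversal, gives $p_A(y) = y^2 - 5y + 6$, and the same formulas yield 12 regions and 2 bounded regions, matching the counts stated above.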


Complex arrangements
In complex affine space (which is hard to visualize because even the complex affine plane has four real dimensions), the complement is connected (all one piece) with holes where the hyperplanes were removed. A typical problem about an arrangement in complex space is to describe the holes.

The basic theorem about complex arrangements is that the cohomology of the complement M(A) is completely determined by the intersection semilattice. To be precise, the cohomology ring of M(A) (with integer coefficients) is isomorphic to the Orlik–Solomon algebra on Z. The isomorphism can be described rather explicitly, and gives a presentation of the cohomology in terms of generators and relations, where generators are represented (in the de Rham cohomology) as logarithmic differential forms

$$\frac{1}{2\pi i}\,\frac{d\alpha}{\alpha},$$

with $\alpha$ any linear form defining the generic hyperplane of the arrangement.

Technicalities
Sometimes it is convenient to allow the degenerate hyperplane, which is the whole space S, to belong to an arrangement. If A contains the degenerate hyperplane, then it has no regions because the complement is empty. However, it still has flats, an intersection semilattice, and faces. The preceding discussion assumes the degenerate hyperplane is not in the arrangement. Sometimes one wants to allow repeated hyperplanes in the arrangement. We did not consider this possibility in the preceding discussion, but it makes no material difference.

References
Hazewinkel, Michiel, ed. (2001), "Arrangement of hyperplanes" [1], Encyclopedia of Mathematics, Springer, ISBN978-1556080104 Edelman, Paul H. (1984), "A partial order on the regions of n dissected by hyperplanes", Transactions of the American Mathematical Society 283 (2): 617631, doi:10.2307/1999150, JSTOR1999150, MR0737888. Meiser, S. (1993), "Point location in arrangements of hyperplanes", Information and Computation 106 (2): 286303, doi:10.1006/inco.1993.1057, MR1241314. Orlik, Peter; Terao, Hiroaki (1992), Arrangements of Hyperplanes, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 300, Berlin: Springer-Verlag, MR1217488. Zaslavsky, Thomas (1975), "Facing up to arrangements: face-count formulas for partitions of space by hyperplanes", Memoirs of the American Mathematical Society (Providence, R.I.: American Mathematical Society) (No. 154), MR0357135.

References
[1] http://www.encyclopediaofmath.org/index.php?title=A/a110700


Baranyai's theorem
In combinatorial mathematics, Baranyai's theorem (proved by and named after Zsolt Baranyai) deals with the decompositions of complete hypergraphs.

A partition of a complete graph into perfect matchings, the case r=2 of Baranyai's theorem

Statement of the theorem


The statement of the result is that if $r$ and $k$ are natural numbers and $r$ divides $k$, then the complete hypergraph decomposes into 1-factors. Here the complete hypergraph is the hypergraph with $k$ vertices in which every subset of $r$ vertices forms a hyperedge; a 1-factor of this hypergraph is a set of hyperedges that touches each vertex exactly once, or equivalently a partition of the vertices into subsets of size $r$. Thus, the theorem states that the $k$ vertices of the hypergraph may be partitioned into subsets of $r$ vertices in $\binom{k-1}{r-1}$ different ways, in such a way that each $r$-element subset appears in exactly one of the partitions.

History
The r=2 case can be rephrased as stating that every complete graph with an even number of vertices has an edge coloring whose number of colors equals its degree, or equivalently that its edges may be partitioned into perfect matchings. It may be used to schedule round-robin tournaments, and its solution was already known in the 19th century. The case that k=2r is also easy. The r=3 case was established by R. Peltesohn in 1936. The general case was proved by Zsolt Baranyai in 1975.
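As a quick added illustration of the r = 2 case, the Python sketch below builds such a partition of the edges of the complete graph on k vertices (k even) with the classical circle construction used for round-robin tournaments. This is only an illustrative construction for r = 2, not Baranyai's general proof.

def one_factorization(k):
    """Partition the edges of K_k (k even) into perfect matchings
    using the circle method for round-robin scheduling."""
    assert k % 2 == 0
    players = list(range(k))
    rounds = []
    for _ in range(k - 1):
        others = players[1:]
        # Pair the fixed player 0 with the rotating head, then fold the rest.
        matching = [(players[0], others[0])]
        matching += [(others[i], others[-i]) for i in range(1, k // 2)]
        rounds.append(matching)
        players = [players[0]] + others[1:] + others[:1]  # rotate
    return rounds

for i, matching in enumerate(one_factorization(6), 1):
    print(f"round {i}: {matching}")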

References
Baranyai, Zs. (1975), "On the factorization of the complete uniform hypergraph", in Hajnal, A.; Rado, R.; Sós, V. T., Infinite and Finite Sets, Proc. Coll. Keszthely, 1973, Colloquia Math. Soc. János Bolyai, 10, North-Holland, pp. 91–107. van Lint, J. H.; Wilson, R. M. (2001), A Course in Combinatorics (2nd ed.), Cambridge University Press. Peltesohn, R. (1936), Das Turnierproblem für Spiele zu je dreien, Inaugural dissertation, Berlin.


Barycentric-sum problem
Combinatorial number theory deals with number-theoretic problems which involve combinatorial ideas in their formulations or solutions. Paul Erdős is the main founder of this branch of number theory. Typical topics include covering systems, zero-sum problems, various restricted sumsets, and arithmetic progressions in a set of integers. Algebraic or analytic methods are powerful in this field. In combinatorial number theory, the barycentric-sum problems are questions that can be answered using combinatorial techniques. The context of the barycentric-sum problems is provided by barycentric sequences.

Example
Let $\mathbb{Z}_n$ be the cyclic group of integers modulo $n$. Let $S$ be a sequence of elements of $\mathbb{Z}_n$, where the repetition of elements is allowed, and let $|S|$ be the length of $S$. A sequence $S$ with $|S| = t$ is barycentric or has a barycentric-sum if it contains one element $a_j$ such that $\sum_{a_i \in S} a_i = t\, a_j$; informally, $S$ contains one element which is the "average" of its terms. Such a sequence of length $t$ is called a $t$-barycentric sequence. Moreover, when $S$ is a set, the term barycentric set is used instead of barycentric sequence. For example, the set $\{0,1,2,3,4\}$ is 5-barycentric with barycenter 2; however, the set $\{0,2,3,4,5\}$ is not 5-barycentric.

The barycentric-sum problem consists in finding the smallest integer $t$ such that any sequence of length $t$ contains a $k$-barycentric sequence for some given $k$. The study of the existence of such $t$ related with $k$, and the study of barycentric constants, are part of the barycentric-sum problems. They have been introduced by Ordaz,[1][2][3] inspired by a theorem of Hamidoune: every sequence of length $n + k - 1$ in $\mathbb{Z}_n$ contains a $k$-barycentric sequence. Notice that a $k$-barycentric sequence in $\mathbb{Z}_n$, with $k$ a multiple of $n$, is a sequence with zero-sum. The zero-sum problem on sequences started in 1961 with the Erdős–Ginzburg–Ziv theorem: every sequence of length $2n - 1$ in an abelian group of order $n$ contains an $n$-subsequence with zero-sum.[4][5][6][7][8][9][10]

Barycentric-sum problems have been defined in general for finite abelian groups. However, most of the main results obtained up to now are in $\mathbb{Z}_n$. The barycentric constants introduced by Ordaz are:[11][12][13][14][15] the $k$-barycentric Olson constant, the $k$-barycentric Davenport constant, the barycentric Davenport constant, the generalized barycentric Davenport constant, and the constrained barycentric Davenport constant. These constants are related to the Davenport constant,[16] i.e. the smallest integer $t$ such that any $t$-sequence contains a subsequence with zero-sum. Moreover, related to the classical Ramsey numbers, the barycentric Ramsey numbers are introduced. An overview of the results computed manually or automatically is presented.[17] The implemented algorithms are written in C.[17][13][18]
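A brute-force check of the definitions above in Python (an added illustration only; the modulus n = 8 in the examples is an arbitrary choice, and the exhaustive subsequence search is exponential, meant for tiny instances):

from itertools import combinations

def is_barycentric(seq, n):
    """True if the sequence over Z_n contains an element equal to the
    'average' of its terms, i.e. some a_j with sum(seq) == len(seq)*a_j (mod n)."""
    t, total = len(seq), sum(seq) % n
    return any((t * a) % n == total for a in seq)

def has_k_barycentric_subsequence(seq, n, k):
    """Exhaustive search for a k-barycentric subsequence."""
    return any(is_barycentric(sub, n) for sub in combinations(seq, k))

print(is_barycentric([0, 1, 2, 3, 4], 8))        # True: barycenter 2
print(is_barycentric([0, 2, 3, 4, 5], 8))        # False
print(has_k_barycentric_subsequence([1, 3, 5, 7, 2, 6], 8, 3))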

References
[1] C. Delorme, S. Gonzlez, O. Ordaz and M.T. Varela. Barycentric sequences and barycentric Ramsey numbers stars, Discrete Math. 277(2004)4556. [2] C. Delorme, I. Mrquez, O. Ordaz and A. Ortuo. Existence condition for barycentric sequences, Discrete Math. 281(2004)163172. [3] Y. O. Hamidoune. On weighted sequences sums, Combinatorics, Probability and Computing 4(1995) 363367. [4] Y. Caro. Zero-sum problems: a survey. Discrete Math. 152 (1996) 93113. [5] P. Erds, A. Ginzburg and A. Ziv. Theorem in the additive number theory, Bull. Res. Council Israel 10F (1961) 4143. [6] C. Flores and O. Ordaz. On sequences with zero sum in abelian group. Volume in homage to Dr. Rodolfo A. Ricabarra (Spanish), 99-106, Vol. Homenaje, 1, Univ. Nac. del Sur, Baha Blanca, 1995. [7] W. Gao and A. Geroldinger, Zero-sum problems in finite abelian groups: A survey. Expositiones Mathematicae 24 (2006), n. 4, 337369. [8] D. J. Grynkiewicz, O. Ordaz, M.T. Varela and F. Villarroel, On the Erdos-Ginzburg-Ziv Inverse Theorems. Acta Arithmetica. 129 (2007)307318. 2 [9] Y. O. Hamidoune, O. Ordaz and A. Ortuo. On a combinatorial theorem of Erds-Ginzburg-Ziv. Combinatorics, Probability and Computing 7 (1998)403412.

[10] O. Ordaz and D. Quiroz, Representation of group elements as subsequences sums, To appear in Discrete Math. [11] S. Gonzlez, L. Gonzlez and O. Ordaz. Barycentric Ramsey numbers for small graphs, To appear in Bulletin of the Malaysan Mathematical Sciences Society. [12] L. Gonzlez, I. Mrquez, O. Ordaz and D. Quiroz, Constrained and generalized barycentric Davenport constants, Divulgaciones Matemticas 15 No. 1 (2007)1121. [13] C. Guia, F. Losavio, O. Ordaz M.T. Varela and F. Villarroel, Barycentric Davenport constants. To appear in Divulgaciones Matemticas. [14] O. Ordaz, M.T. Varela and F. Villarroel. k-barycentric Olson constant. To appear in Mathematical Reports. [15] O. Ordaz and D. Quiroz, Barycentric-sum problem: a survey. Divulgaciones Matemticas 15 No. 2 (2007)193206. [16] C. Delorme, O. Ordaz and D. Quiroz. Some remarks on Davenport constant, Discrete Math. 237(2001)119128. [17] L. Gonzlez, F. Losavio, O. Ordaz, M.T. Varela and F. Villarroel. Barycentric Integers sequences. Sumited to Expositiones Mathematicae. [18] F. Villarroel,Tesis Doctoral en Matemtica. La constante de Olson k baricntrica y un teorema inverso de Erds-Ginzburg-Ziv. Facultad de Ciencias. Universidad Central de Venezuela, (2008).


External links
Divulgaciones Matemáticas (Spanish) (http://www.emis.de/journals/DM/)

Bent function
In the mathematical field of combinatorics, a bent function is a special type of Boolean function. Defined and named in the 1960s by Oscar Rothaus in research not published until 1976, bent functions are so called because they are as different as possible from all linear and affine functions. They have been extensively studied for their applications in cryptography, but have also been applied to spread spectrum, coding theory, and combinatorial design. The definition can be extended in several ways, leading to different classes of generalized bent functions that share many of the useful properties of the original.

The 2-ary bent functions with Hamming weight 1; their nonlinearity is 1.



Walsh transform
Bent functions are defined in terms of the Walsh transform. The Walsh transform of a Boolean function $f\colon \mathbb{Z}_2^n \to \mathbb{Z}_2$ is the function $\hat{f}\colon \mathbb{Z}_2^n \to \mathbb{Z}$ given by

$$\hat{f}(a) = \sum_{x \in \mathbb{Z}_2^n} (-1)^{f(x) + a \cdot x},$$

where $a \cdot x = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n \pmod 2$ is the dot product in $\mathbb{Z}_2^n$.[1] Alternatively, let $S_0(a) = \{ x \in \mathbb{Z}_2^n : f(x) = a \cdot x \}$ and $S_1(a) = \{ x \in \mathbb{Z}_2^n : f(x) \neq a \cdot x \}$. Then $|S_0(a)| + |S_1(a)| = 2^n$ and hence

$$\hat{f}(a) = |S_0(a)| - |S_1(a)| = 2|S_0(a)| - 2^n.$$

For any Boolean function $f$ and $a \in \mathbb{Z}_2^n$ the transform lies in the range

$$-2^n \leq \hat{f}(a) \leq 2^n.$$

Moreover, the linear function $f_0(x) = a \cdot x$ and the affine function $f_1(x) = a \cdot x + 1$ correspond to the two extreme cases, since

$$\hat{f_0}(a) = 2^n, \qquad \hat{f_1}(a) = -2^n.$$

Thus, for each $a \in \mathbb{Z}_2^n$ the value of $\hat{f}(a)$ characterizes where the function $f(x)$ lies in the range from $f_0(x)$ to $f_1(x)$. Bent functions are in a sense equidistant from all of these, so they are equally hard to approximate with any affine function.
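The Python sketch below (an added illustration) computes the Walsh transform of a small Boolean function directly from the definition above and checks the bentness criterion discussed in the next section.

from itertools import product

def walsh_transform(f, n):
    """Walsh transform of a Boolean function f: {0,1}^n -> {0,1},
    given as a Python callable; returns a dict a -> f_hat(a)."""
    points = list(product((0, 1), repeat=n))
    def dot(a, x):
        return sum(ai * xi for ai, xi in zip(a, x)) % 2
    return {a: sum((-1) ** (f(x) ^ dot(a, x)) for x in points) for a in points}

def is_bent(f, n):
    """A function is bent iff every Walsh coefficient has absolute value 2^(n/2)."""
    return all(abs(v) == 2 ** (n // 2) for v in walsh_transform(f, n).values())

# The classic example f(x1,x2,x3,x4) = x1*x2 + x3*x4 (mod 2) is bent:
print(is_bent(lambda x: (x[0] & x[1]) ^ (x[2] & x[3]), 4))   # True
# An affine function is not:
print(is_bent(lambda x: x[0] ^ x[1] ^ 1, 4))                  # False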

Definition and properties


Rothaus defined a bent function as a Boolean function $f\colon \mathbb{Z}_2^n \to \mathbb{Z}_2$ whose Walsh transform has constant absolute value.[1] The simplest examples of bent functions, written in algebraic normal form, are $F(x_1,x_2) = x_1 x_2$ and $G(x_1,x_2,x_3,x_4) = x_1 x_2 + x_3 x_4$. This pattern continues: $x_1 x_2 + x_3 x_4 + \cdots + x_{n-1} x_n$ is a bent function $\mathbb{Z}_2^n \to \mathbb{Z}_2$ for every even $n$, but there is a wide variety of different types of bent functions as $n$ increases.[2]

The sequence of values $(-1)^{f(x)}$, with $x \in \mathbb{Z}_2^n$ taken in lexicographical order, is called a bent sequence; bent functions and bent sequences have equivalent properties. In this $\pm 1$ form, the Walsh transform is easily computed as $\hat{f} = W(2^n)\,(-1)^{f}$, where $W(2^n)$ is the natural-ordered Walsh matrix and the sequence $(-1)^{f}$ is treated as a column vector.[3]

Rothaus proved that bent functions exist only for even $n$, and that for a bent function $f$, $|\hat{f}(a)| = 2^{n/2}$ for all $a \in \mathbb{Z}_2^n$.[1] In fact, $\hat{f}(a) = 2^{n/2}(-1)^{g(a)}$, where $g$ is also bent. In this case, $\hat{g}(a) = 2^{n/2}(-1)^{f(a)}$, so $f$ and $g$ are considered dual functions.[3]

Every bent function has a Hamming weight (number of times it takes the value 1) of $2^{n-1} \pm 2^{n/2-1}$, and in fact agrees with any affine function at one of those two numbers of points. So the nonlinearity of $f$ (its minimum Hamming distance to any affine function) is $2^{n-1} - 2^{n/2-1}$, the maximum possible. Conversely, any Boolean function with nonlinearity $2^{n-1} - 2^{n/2-1}$ is bent.[1] The degree of $f$ in algebraic normal form (called the nonlinear order of $f$) is at most $n/2$ (for $n > 2$).[2]

Although bent functions are vanishingly rare among Boolean functions of many variables, they come in many different kinds. There has been detailed research into special classes of bent functions, such as the homogeneous ones[4] or those arising from a monomial over a finite field,[5] but so far the bent functions have defied all attempts at a complete enumeration or classification.

Applications
As early as 1982 it was discovered that maximum length sequences based on bent functions have cross-correlation and autocorrelation properties rivalling those of the Gold codes and Kasami codes for use in CDMA.[6] These sequences have several applications in spread spectrum techniques. The properties of bent functions are naturally of interest in modern digital cryptography, which seeks to obscure relationships between input and output. By 1988 Forr recognized that the Walsh transform of a function can be used to show that it satisfies the Strict Avalanche Criterion (SAC) and higher-order generalizations, and recommended this tool to select candidates for good S-boxes achieving near-perfect diffusion.[7] Indeed, the functions satisfying the SAC to the highest possible order are always bent.[8] Furthermore, the bent functions are as far as possible from having what are called linear structures, nonzero vectors a such that (x+a) + (x) is a constant. In the language of differential cryptanalysis (introduced after this property was discovered) the derivative of a bent function at every nonzero point a (that is, a(x) = (x+a) + (x)) is a balanced Boolean function, taking on each value exactly half of the time. This property is called perfect nonlinearity.[2] Given such good diffusion properties, apparently perfect resistance to differential cryptanalysis, and resistance by definition to linear cryptanalysis, bent functions might at first seem the ideal choice for secure cryptographic functions such as S-boxes. Their fatal flaw is that they fail to be balanced. In particular, an invertible S-box cannot be constructed directly from bent functions, and a stream cipher using a bent combining function is vulnerable to a correlation attack. Instead, one might start with a bent function and randomly complement appropriate values until the result is balanced. The modified function still has high nonlinearity, and as such functions are very rare the

process should be much faster than a brute-force search.[2] But functions produced in this way may lose other desirable properties, even failing to satisfy the SAC, so careful testing is necessary.[8] A number of cryptographers have worked on techniques for generating balanced functions that preserve as many of the good cryptographic qualities of bent functions as possible.[9][10][11] Some of this theoretical research has been incorporated into real cryptographic algorithms. The CAST design procedure, used by Carlisle Adams and Stafford Tavares to construct the S-boxes for the block ciphers CAST-128 and CAST-256, makes use of bent functions.[11] The cryptographic hash function HAVAL uses Boolean functions built from representatives of all four of the equivalence classes of bent functions on six variables.[12] The stream cipher Grain uses an NLFSR whose nonlinear feedback polynomial is, by design, the sum of a bent function and a linear function.[13]


Generalizations
The most common class of generalized bent functions is the mod $m$ type, functions $f\colon \mathbb{Z}_m^n \to \mathbb{Z}_m$ such that

$$\hat{f}(a) = \sum_{x \in \mathbb{Z}_m^n} e^{\frac{2\pi i}{m}\,(f(x) - a \cdot x)}$$

has constant absolute value $m^{n/2}$. Perfect nonlinear functions $f\colon \mathbb{Z}_m^n \to \mathbb{Z}_m$, those such that for all nonzero $a$, $f(x+a) - f(x)$ takes on each value $m^{n-1}$ times, are generalized bent. If $m$ is prime, the converse is true. In most cases only prime $m$ are considered. For odd prime $m$, there are generalized bent functions for every positive $n$, even and odd. They have many of the same good cryptographic properties as the binary bent functions.[14]

Semi-bent functions are an odd-order counterpart to bent functions. A semi-bent function is $f\colon \mathbb{Z}_m^n \to \mathbb{Z}_m$ with $n$ odd, such that $|\hat{f}|$ takes only the values 0 and $m^{(n+1)/2}$. They also have good cryptographic characteristics, and some of them are balanced, taking on all possible values equally often.[15]

The partially bent functions form a large class defined by a condition on the Walsh transform and autocorrelation functions. All affine and bent functions are partially bent. This is in turn a proper subclass of the plateaued functions.[16]

The idea behind the hyper-bent functions is to maximize the minimum distance to all Boolean functions coming from bijective monomials on the finite field GF($2^n$), not just the affine functions. For these functions this distance is constant, which may make them resistant to an interpolation attack.

Other related names have been given to cryptographically important classes of functions $\mathbb{Z}_2^n \to \mathbb{Z}_2^n$, such as almost bent functions and crooked functions. While not Boolean functions themselves, these are closely related to the bent functions and have good nonlinearity properties.

References
[1] C. Qu; J. Seberry, T. Xia (29 December 2001). Boolean Functions in Cryptography (http:/ / citeseer. ist. psu. edu/ old/ 700097. html). . Retrieved 14 September 2009. [2] W. Meier; O. Staffelbach (April 1989). "Nonlinearity Criteria for Cryptographic Functions". Eurocrypt '89. pp.549562. [3] C. Carlet; L.E. Danielsen, M.G. Parker, P. Sol (19 May 2008). "Self Dual Bent Functions" (http:/ / www. ii. uib. no/ ~matthew/ bfcasdb. pdf) (PDF). Fourth International Workshop on Boolean Functions: Cryptography and Applications (BFCA '08). . Retrieved 21 September 2009. [4] T. Xia; J. Seberry, J. Pieprzyk, C. Charnes (June 2004). "Homogeneous bent functions of degree n in 2n variables do not exist for n > 3" (http:/ / ro. uow. edu. au/ infopapers/ 291/ ). Discrete Applied Mathematics 142 (1-3): 127132. doi:10.1016/j.dam.2004.02.006. ISSN0166-218X. . Retrieved 21 September 2009. [5] A. Canteaut; P. Charpin, G. Kyureghyan (January 2008). "A new class of monomial bent functions" (http:/ / www-roc. inria. fr/ secret/ Anne. Canteaut/ Publications/ CanChaKuy07. pdf) (PDF). Finite Fields and Their Applications 14 (1): 221241. doi:10.1016/j.ffa.2007.02.004. ISSN1071-5797. . Retrieved 21 September 2009. [6] J. Olsen; R. Scholtz, L. Welch (November 1982). "Bent-Function Sequences" (http:/ / www. costasarrays. org/ costasrefs/ b2hd-olsen82bent-function. html). IEEE Transactions on Information Theory IT-28 (6): 858864. ISSN0018-9448. . Retrieved 24 September 2009.

Bent function
[7] R. Forr (August 1988). "The Strict Avalanche Criterion: Spectral Properties of Boolean Functions and an Extended Definition". CRYPTO '88. pp.450468. [8] C. Adams; S. Tavares (January 1990). The Use of Bent Sequences to Achieve Higher-Order Strict Avalanche Criterion in S-Box Design (http:/ / citeseerx. ist. psu. edu/ viewdoc/ summary?doi=10. 1. 1. 41. 8374). Technical Report TR 90-013. Queen's University. . Retrieved 23 September 2009. [9] K. Nyberg (April 1991). "Perfect nonlinear S-boxes". Eurocrypt '91. pp.378386. [10] J. Seberry; X. Zhang (December 1992). "Highly Nonlinear 0-1 Balanced Boolean Functions Satisfying Strict Avalanche Criterion" (http:/ / citeseerx. ist. psu. edu:80/ viewdoc/ summary?doi=10. 1. 1. 57. 4992). AUSCRYPT '92. pp.143155. . Retrieved 24 September 2009. [11] C. Adams (November 1997). "Constructing Symmetric Ciphers Using the CAST Design Procedure" (http:/ / jya. com/ cast. html). Designs, Codes, and Cryptography 12 (3): 283316. doi:10.1023/A:1008229029587. ISSN0925-1022. . Retrieved 20 September 2009. [12] Y. Zheng; J. Pieprzyk, J. Seberry (December 1992). "HAVALa one-way hashing algorithm with variable length of output" (http:/ / labs. calyptix. com/ files/ haval-paper. pdf) (PDF). AUSCRYPT '92. pp.83104. . Retrieved 24 September 2009. [13] M. Hell; T. Johansson, A. Maximov, W. Meier (PDF). A Stream Cipher Proposal: Grain-128 (http:/ / www. ecrypt. eu. org/ stream/ p2ciphers/ grain/ Grain128_p2. pdf). . Retrieved 24 September 2009. [14] K. Nyberg (May 1990). "Constructions of bent functions and difference sets". Eurocrypt '90. pp.151160. [15] K. Khoo; G. Gong, D. Stinson (February 2006). "A new characterization of bent and semi-bent functions on finite fields" (http:/ / www. cacr. math. uwaterloo. ca/ ~dstinson/ papers/ dcc-final. ps) (PostScript). Designs, Codes, and Cryptography 38 (2): 279295. doi:10.1007/s10623-005-6345-x. ISSN0925-1022. . Retrieved 24 September 2009. [16] Y. Zheng; X. Zhang (November 1999). "Plateaued Functions" (http:/ / citeseer. ist. psu. edu/ old/ 291018. html). Second International Conference on Information and Communication Security (ICICS '99). pp.284300. . Retrieved 24 September 2009.


Further reading
C. Carlet (May 1993). "Two New Classes of Bent Functions". Eurocrypt '93. pp.77101. J. Seberry; X. Zhang (March 1994). "Constructions of Bent Functions from Two Known Bent Functions" (http:// citeseerx.ist.psu.edu:80/viewdoc/summary?doi=10.1.1.55.531). Australasian Journal of Combinatorics 9: 2135. ISSN1034-4942. Retrieved 17 September 2009. T. Neumann; advisor: U. Dempwolff (May 2006). Bent Functions (http://citeseerx.ist.psu.edu/viewdoc/ summary?doi=10.1.1.85.8731). Retrieved 23 September 2009. Colbourn, Charles J.; Dinitz, Jeffrey H. (2006). Handbook of Combinatorial Designs (2nd ed.). CRC Press. pp.337339. ISBN978-1-58-488506-1.


Binomial coefficient
In mathematics, binomial coefficients are a family of positive integers that occur as coefficients in the binomial theorem. They are indexed by two nonnegative integers; the binomial coefficient indexed by $n$ and $k$ is usually written $\binom{n}{k}$, and it is the coefficient of the $x^k$ term in the polynomial expansion of the binomial power $(1+x)^n$. Arranging binomial coefficients into rows for successive values of $n$, and in which $k$ ranges from 0 to $n$, gives a triangular array called Pascal's triangle.

The binomial coefficients can be arranged to form Pascal's triangle.

This family of numbers also arises in many other areas than algebra, notably in combinatorics. For any set containing $n$ elements, the number of distinct $k$-element subsets of it that can be formed (the $k$-combinations of its elements) is given by the binomial coefficient $\binom{n}{k}$. Therefore $\binom{n}{k}$ is often read as "$n$ choose $k$". The properties of binomial coefficients have led to extending the meaning of the symbol $\binom{n}{k}$ beyond the basic case where $n$ and $k$ are nonnegative integers with $k \leq n$; such expressions are then still called binomial coefficients.

The notation $\binom{n}{k}$ was introduced by Andreas von Ettingshausen in 1826,[1] although the numbers were already known centuries before that (see Pascal's triangle). The earliest known detailed discussion of binomial coefficients is in a tenth-century commentary, due to Halayudha, on an ancient Hindu classic, Pingala's Chandaḥśāstra. In about 1150, the Hindu mathematician Bhaskaracharya gave a very clear exposition of binomial coefficients in his book Lilavati.[2]

Alternative notations include $C(n, k)$, ${}_{n}C_{k}$, ${}^{n}C_{k}$ and $C^{n}_{k}$,[3] in all of which the C stands for combinations or choices.

Definition and interpretations


For natural numbers (taken to include 0) $n$ and $k$, the binomial coefficient $\binom{n}{k}$ can be defined as the coefficient of the monomial $X^k$ in the expansion of $(1 + X)^n$. The same coefficient also occurs (if $k \leq n$) in the binomial formula

$$(x + y)^n = \sum_{k=0}^{n} \binom{n}{k}\, x^k y^{n-k}$$

(valid for any elements $x$, $y$ of a commutative ring), which explains the name "binomial coefficient".

Another occurrence of this number is in combinatorics, where it gives the number of ways, disregarding order, that $k$ objects can be chosen from among $n$ objects; more formally, the number of $k$-element subsets (or $k$-combinations) of an $n$-element set. This number can be seen as equal to the one of the first definition, independently of any of the formulas below to compute it: if in each of the $n$ factors of the power $(1 + X)^n$ one temporarily labels the term $X$ with an index $i$ (running from 1 to $n$), then each subset of $k$ indices gives after expansion a contribution $X^k$, and the coefficient of that monomial in the result will be the number of such subsets. This shows in particular that $\binom{n}{k}$ is a natural number for any natural numbers $n$ and $k$. There are many other combinatorial interpretations of binomial coefficients (counting problems for which the answer is given by a binomial coefficient expression), for instance the number of words formed of $n$ bits (digits 0 or 1) whose sum is $k$ is given by $\binom{n}{k}$, while the number of ways to write $k = a_1 + a_2 + \cdots + a_n$ where every $a_i$ is a nonnegative integer is given by $\binom{n+k-1}{n-1}$. Most of these interpretations are easily seen to be equivalent to counting $k$-combinations.

Computing the value of binomial coefficients


Several methods exist to compute the value of $\binom{n}{k}$ without actually expanding a binomial power or counting $k$-combinations.

Recursive formula
One has a recursive formula for binomial coefficients

$$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k} \quad \text{for all integers } n, k > 0,$$

with initial values

$$\binom{n}{0} = 1 \ \text{for all } n \geq 0, \qquad \binom{0}{k} = 0 \ \text{for all } k > 0.$$

The formula follows either from tracing the contributions to $X^k$ in $(1 + X)^{n-1}(1 + X)$, or by counting the $k$-combinations of $\{1, 2, \ldots, n\}$ that contain $n$ and that do not contain $n$ separately. It follows easily that $\binom{n}{k} = 0$ when $k > n$, and $\binom{n}{n} = 1$ for all $n$, so the recursion can stop when reaching such cases. This recursive formula then allows the construction of Pascal's triangle.

Multiplicative formula
A more efficient method to compute individual binomial coefficients is given by the formula

$$\binom{n}{k} = \frac{n^{\underline{k}}}{k!} = \frac{n(n-1)(n-2)\cdots(n-k+1)}{k(k-1)(k-2)\cdots 1} = \prod_{i=1}^{k} \frac{n-k+i}{i},$$

where the numerator of the first fraction is expressed as a falling factorial power. This formula is easiest to understand for the combinatorial interpretation of binomial coefficients. The numerator gives the number of ways to select a sequence of k distinct objects, retaining the order of selection, from a set of n objects. The denominator counts the number of distinct sequences that define the same k-combination when order is disregarded.
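A short Python implementation of the multiplicative formula (an added illustration):

def binomial(n, k):
    """Binomial coefficient via the multiplicative formula.

    Multiplying before dividing keeps every intermediate value an integer,
    because the partial product after step i equals C(n-k+i, i)."""
    if k < 0 or k > n:
        return 0
    k = min(k, n - k)          # use the symmetry C(n, k) = C(n, n-k)
    result = 1
    for i in range(1, k + 1):
        result = result * (n - k + i) // i
    return result

print(binomial(10, 3))   # 120
print(binomial(52, 5))   # 2598960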

Factorial formula
Finally there is a formula using factorials that is easy to remember:

$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!} \quad \text{for } 0 \leq k \leq n, \qquad (1)$$

where $n!$ denotes the factorial of $n$. This formula follows from the multiplicative formula above by multiplying numerator and denominator by $(n - k)!$; as a consequence it involves many factors common to numerator and denominator. It is less practical for explicit computation unless common factors are first canceled (in particular since factorial values grow very rapidly). The formula does exhibit a symmetry that is less evident from the multiplicative formula (though it is from the definitions):

$$\binom{n}{k} = \binom{n}{n-k}.$$


Generalization and connection to the binomial series


The multiplicative formula allows the definition of binomial coefficients to be extended[4] by replacing n by an arbitrary number $\alpha$ (negative, real, complex) or even an element of any commutative ring in which all positive integers are invertible:

$$\binom{\alpha}{k} = \frac{\alpha(\alpha-1)(\alpha-2)\cdots(\alpha-k+1)}{k!} \quad \text{for } k \in \mathbb{N} \text{ and arbitrary } \alpha.$$

With this definition one has a generalization of the binomial formula (with one of the variables set to 1), which justifies still calling the $\binom{\alpha}{k}$ binomial coefficients:

$$(1 + X)^{\alpha} = \sum_{k=0}^{\infty} \binom{\alpha}{k}\, X^k. \qquad (2)$$

This formula is valid for all complex numbers $\alpha$ and $X$ with $|X| < 1$. It can also be interpreted as an identity of formal power series in $X$, where it actually can serve as definition of arbitrary powers of series with constant coefficient equal to 1; the point is that with this definition all identities hold that one expects for exponentiation, notably

$$(1+X)^{\alpha}(1+X)^{\beta} = (1+X)^{\alpha+\beta} \quad \text{and} \quad \bigl((1+X)^{\alpha}\bigr)^{\beta} = (1+X)^{\alpha\beta}.$$

If $\alpha$ is a nonnegative integer $n$, then all terms with $k > n$ are zero, and the infinite series becomes a finite sum, thereby recovering the binomial formula. However, for other values of $\alpha$, including negative integers and rational numbers, the series is really infinite.
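A small Python check of the generalized definition (an added illustration): computing $\binom{\alpha}{k}$ for a fractional and for a negative integer exponent reproduces the familiar series coefficients of $(1+X)^{1/2}$ and $(1+X)^{-1}$.

from fractions import Fraction

def gen_binomial(alpha, k):
    """Generalized binomial coefficient: alpha*(alpha-1)*...*(alpha-k+1) / k!"""
    result = Fraction(1)
    for i in range(k):
        result *= Fraction(alpha) - i
        result /= i + 1
    return result

print([gen_binomial(Fraction(1, 2), k) for k in range(5)])
# [1, 1/2, -1/8, 1/16, -5/128]  -- coefficients of (1+X)**(1/2)
print([gen_binomial(-1, k) for k in range(5)])
# [1, -1, 1, -1, 1]             -- coefficients of (1+X)**(-1)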


Pascal's triangle
Pascal's rule is the important recurrence relation

$$\binom{n}{k} + \binom{n}{k+1} = \binom{n+1}{k+1}, \qquad (3)$$

1000th row of Pascal's triangle, arranged vertically, with grey-scale representations of decimal digits of the coefficients, right-aligned. The left boundary of the image corresponds roughly to the graph of the logarithm of the binomial coefficients, and illustrates that they form a log-concave sequence.

which can be used to prove by mathematical induction that $\binom{n}{k}$ is a natural number for all $n$ and $k$ (equivalent to the statement that $k!$ divides the product of $k$ consecutive integers), a fact that is not immediately obvious from formula (1). Pascal's rule also gives rise to Pascal's triangle:


0:                         1
1:                       1   1
2:                     1   2   1
3:                   1   3   3   1
4:                 1   4   6   4   1
5:               1   5  10  10   5   1
6:             1   6  15  20  15   6   1
7:           1   7  21  35  35  21   7   1
8:         1   8  28  56  70  56  28   8   1

Row number $n$ contains the numbers $\binom{n}{k}$ for $k = 0, \ldots, n$. It is constructed by starting with ones at the outside and then always adding two adjacent numbers and writing the sum directly underneath. This method allows the quick calculation of binomial coefficients without the need for fractions or multiplications. For instance, by looking at row number 5 of the triangle, one can quickly read off that

$$(x + y)^5 = x^5 + 5x^4y + 10x^3y^2 + 10x^2y^3 + 5xy^4 + y^5.$$

The differences between elements on other diagonals are the elements in the previous diagonal, as a consequence of the recurrence relation (3) above.
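A few lines of Python carrying out exactly this construction (an added illustration):

def pascal_rows(n_rows):
    """Build the first n_rows rows of Pascal's triangle using Pascal's rule:
    each interior entry is the sum of the two entries directly above it."""
    rows = [[1]]
    for _ in range(n_rows - 1):
        prev = rows[-1]
        rows.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
    return rows

for row in pascal_rows(6):
    print(row)
# The coefficients of (x + y)**5 can be read off the last row: 1 5 10 10 5 1.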

Combinatorics and statistics


Binomial coefficients are of importance in combinatorics, because they provide ready formulas for certain frequent counting problems:

- There are $\binom{n}{k}$ ways to choose $k$ elements from a set of $n$ elements. See Combination.
- There are $\binom{n+k-1}{k}$ ways to choose $k$ elements from a set of $n$ if repetitions are allowed. See Multiset.
- There are $\binom{n+k}{k}$ strings containing $k$ ones and $n$ zeros.
- There are $\binom{n+1}{k}$ strings consisting of $k$ ones and $n$ zeros such that no two ones are adjacent.[5]
- The Catalan numbers are $\frac{1}{n+1}\binom{2n}{n}$.
- The binomial distribution in statistics is $\binom{n}{k}\, p^k (1-p)^{n-k}$.
- The formula for a Bézier curve.

Binomial coefficients as polynomials


For any nonnegative integer $k$, the expression $\binom{t}{k}$ can be simplified and defined as a polynomial divided by $k!$:

$$\binom{t}{k} = \frac{t(t-1)(t-2)\cdots(t-k+1)}{k!}.$$

This presents a polynomial in $t$ with rational coefficients. As such, it can be evaluated at any real or complex number $t$ to define binomial coefficients with such first arguments. These "generalized binomial coefficients" appear in Newton's generalized binomial theorem.

For each $k$, the polynomial $\binom{t}{k}$ can be characterized as the unique degree $k$ polynomial $p(t)$ satisfying $p(0) = p(1) = \cdots = p(k-1) = 0$ and $p(k) = 1$. Its coefficients are expressible in terms of Stirling numbers of the first kind, by definition of the latter:

$$\binom{t}{k} = \sum_{i=0}^{k} \frac{s(k, i)}{k!}\, t^i.$$

The derivative of $\binom{t}{k}$ can be calculated by logarithmic differentiation:

$$\frac{d}{dt}\binom{t}{k} = \binom{t}{k} \sum_{i=0}^{k-1} \frac{1}{t - i}.$$

Binomial coefficients as a basis for the space of polynomials


Over any field containing Q, each polynomial $p(t)$ of degree at most $d$ is uniquely expressible as a linear combination $\sum_{k=0}^{d} a_k \binom{t}{k}$. The coefficient $a_k$ is the $k$th difference of the sequence $p(0), p(1), \ldots, p(k)$. Explicitly,[6]

$$a_k = \sum_{i=0}^{k} (-1)^{k-i} \binom{k}{i}\, p(i). \qquad (3.5)$$

Integer-valued polynomials
Each polynomial $\binom{t}{k}$ is integer-valued: it takes integer values at integer inputs. (One way to prove this is by induction on $k$, using Pascal's identity.) Therefore any integer linear combination of binomial coefficient polynomials is integer-valued too. Conversely, (3.5) shows that any integer-valued polynomial is an integer linear combination of these binomial coefficient polynomials. More generally, for any subring R of a characteristic 0 field K, a polynomial in K[t] takes values in R at all integers if and only if it is an R-linear combination of binomial coefficient polynomials.

Example
The integer-valued polynomial $3t(3t+1)/2$ can be rewritten as

$$\frac{3t(3t+1)}{2} = 9\binom{t}{2} + 6\binom{t}{1}.$$

Identities involving binomial coefficients


The factorial formula facilitates relating nearby binomial coefficients. For instance, if $k$ is a positive integer and $n$ is arbitrary, then

$$k \binom{n}{k} = n \binom{n-1}{k-1}$$

and, with a little more work,

Moreover, the following may be useful:


Series involving binomial coefficients


The formula

$$\sum_{k=0}^{n} \binom{n}{k} = 2^n \qquad (5)$$

is obtained from (2) using x = 1. This is equivalent to saying that the elements in one row of Pascal's triangle always add up to two raised to an integer power. A combinatorial interpretation of this fact involving double counting is given by counting subsets of size 0, size 1, size 2, and so on up to size n of a set S of n elements. Since we count the number of subsets of size i for 0 i n, this sum must be equal to the number of subsets of S, which is known to be 2n. That is, Equation5 is a statement that the power set for a finite set with n elements has size 2n. More explicitly, consider a bit string with n digits. This bit string can be used to represent 2n numbers. Now consider all of the bit strings with no ones in them. There is just one, or rather n choose 0. Next consider the number of bit strings with just a single one in them. There are n, or rather n choose 1. Continuing this way we can see that the equation above holds. The formulas
$$\sum_{k=0}^{n} k \binom{n}{k} = n\, 2^{n-1} \qquad (6a)$$

and

$$\sum_{k=0}^{n} k(k-1) \binom{n}{k} = n(n-1)\, 2^{n-2} \qquad (6b)$$

follow from (2), after differentiating with respect to $x$ (twice in the latter) and then substituting $x = 1$.

The Chu–Vandermonde identity, which holds for any complex values $m$ and $n$ and any non-negative integer $k$, is

$$\sum_{j=0}^{k} \binom{m}{j} \binom{n-m}{k-j} = \binom{n}{k} \qquad (7a)$$

and can be found by examination of the coefficient of $x^k$ in the expansion of $(1+x)^m (1+x)^{n-m} = (1+x)^n$ using equation (2). When $m = 1$, equation (7a) reduces to equation (3).

A similar looking formula, which applies for any integers $j$, $k$, and $n$ satisfying $0 \leq j \leq k \leq n$, is

$$\sum_{m=0}^{n} \binom{m}{j} \binom{n-m}{k-j} = \binom{n+1}{k+1} \qquad (7b)$$

and can be found by a similar examination of coefficients, using equation (2), in a suitable product of series expansions. When $j = k$, equation (7b) gives the hockey-stick identity

$$\sum_{m=k}^{n} \binom{m}{k} = \binom{n+1}{k+1}.$$

From expansion (7a) using $n = 2m$, $k = m$, and (1), one finds

$$\sum_{j=0}^{m} \binom{m}{j}^2 = \binom{2m}{m}. \qquad (8)$$

Let F(n) denote the n-th Fibonacci number. We obtain a formula about the diagonals of Pascal's triangle


$$\sum_{k=0}^{\lfloor n/2 \rfloor} \binom{n-k}{k} = F(n+1). \qquad (9)$$

This can be proved by induction using (3) or by Zeckendorf's representation (Just note that the lhs gives the number of subsets of {F(2),...,F(n)} without consecutive members, which also form all the numbers below F(n+1)). Also using (3) and induction, one can show that
(10)

Although there is no closed formula for the partial sums

$$\sum_{j=0}^{k} \binom{n}{j}$$

(unless one resorts to hypergeometric functions), one can again use (3) and induction to show that for $k = 0, \ldots, n-1$,

$$\sum_{j=0}^{k} (-1)^j \binom{n}{j} = (-1)^k \binom{n-1}{k}, \qquad (11)$$

as well as

$$\sum_{j=0}^{n} (-1)^j \binom{n}{j} = 0 \quad \text{for } n > 0, \qquad (12)$$

which is itself a special case of the result from the theory of finite differences that for any polynomial $P(x)$ of degree less than $n$,

$$\sum_{j=0}^{n} (-1)^j \binom{n}{j}\, P(j) = 0. \qquad (13a)$$

Differentiating (2) $k$ times and setting $x = -1$ yields this for $P(x) = x(x-1)\cdots(x-k+1)$, when $0 \leq k < n$, and the general case follows by taking linear combinations of these.

When $P(x)$ is of degree less than or equal to $n$,

$$\sum_{j=0}^{n} (-1)^j \binom{n}{j}\, P(j) = (-1)^n\, n!\, a_n, \qquad (13b)$$

where $a_n$ is the coefficient of degree $n$ in $P(x)$.

More generally than (13b),

$$\sum_{j=0}^{n} (-1)^j \binom{n}{j}\, P(m + dj) = (-1)^n\, d^n\, n!\, a_n, \qquad (13c)$$

where $m$ and $d$ are complex numbers. This follows immediately by applying (13b) to the polynomial $Q(x) := P(m + dx)$ instead of $P(x)$, and observing that $Q(x)$ still has degree less than or equal to $n$, and that its coefficient of degree $n$ is $d^n a_n$.

The infinite series
$$\sum_{j=k}^{\infty} \frac{1}{\binom{j}{k}} = \frac{k}{k-1} \qquad (14)$$

is convergent for $k \geq 2$. This formula is used in the analysis of the German tank problem. It is equivalent to the formula for the finite sum

$$\sum_{j=m}^{M} \frac{1}{\binom{j}{k}} = \frac{k}{k-1}\left(\frac{1}{\binom{m-1}{k-1}} - \frac{1}{\binom{M}{k-1}}\right),$$

which is proved for $M > m$ by induction on $M$. Using (8) one can derive
(15)

and
(16)

Series multisection gives the following identity for the sum of binomial coefficients taken with a step $s$ and offset $t$ as a closed-form sum of $s$ terms:

$$\binom{n}{t} + \binom{n}{t+s} + \binom{n}{t+2s} + \cdots = \frac{1}{s} \sum_{j=0}^{s-1} \left(2\cos\frac{\pi j}{s}\right)^{n} \cos\frac{\pi (n - 2t) j}{s}.$$

Identities with combinatorial proofs


Many identities involving binomial coefficients can be proved by combinatorial means. For example, the following identity for nonnegative integers (which reduces to (6) when ):
(16b)

can be given a double counting proof as follows. The left side counts the number of ways of selecting a subset of {1, 2, ..., n} with at least q elements, and marking q elements among those selected. The right side counts the same thing, because there are \binom{n}{q} ways of choosing a set of q marks, and they occur in all subsets that additionally contain some subset of the remaining n − q elements, of which there are 2^{n-q}.

The recursion formula

\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}

can also be proved this way: both sides count the number of k-element subsets of {1, 2, ..., n}, with the right hand side first grouping them into those that contain element n and those that do not. The identity (8) also has a combinatorial proof. The identity reads

\sum_{k=0}^{n} \binom{n}{k}^2 = \binom{2n}{n}.
Suppose you have 2n empty squares arranged in a row and you want to mark (select) n of them. There are

\binom{2n}{n}

ways to do this. On the other hand, you may select your n squares by selecting k squares from among the first n and n − k squares from the remaining n squares; any k from 0 to n will work. This gives

\sum_{k=0}^{n} \binom{n}{k} \binom{n}{n-k}.
Now apply (4) to get the result.

Sum of coefficients row

The number of k-combinations for all k, 0 ≤ k ≤ n, is the sum of the nth row (counting from 0) of the binomial coefficients. These combinations are enumerated by the 1 digits of the set of base 2 numbers counting from 0 to 2^n − 1, where each digit position is an item from the set of n.
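Since the displayed equations were lost in this copy, the following Python spot-check assumes the standard statements of (5), (6a), (7a), (8), (9) and (10) as reconstructed above and verifies them for small parameters; the ranges and helper names are illustrative, not from the original text, and math.comb requires Python 3.8+.

<syntaxhighlight lang="python">
from math import comb

def fib(n):
    # F(1) = F(2) = 1, F(n) = F(n-1) + F(n-2)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(1, 12):
    row = [comb(n, k) for k in range(n + 1)]
    assert sum(row) == 2 ** n                                         # (5)
    assert sum(k * c for k, c in enumerate(row)) == n * 2 ** (n - 1)  # (6a)
    assert sum(c ** 2 for c in row) == comb(2 * n, n)                 # (8)
    assert sum(comb(n - k, k) for k in range(n // 2 + 1)) == fib(n + 1)  # (9)

for n in range(9):                                   # (7a), Chu-Vandermonde with integer m
    for m in range(n + 1):
        for k in range(n + 1):
            assert sum(comb(m, j) * comb(n - m, k - j)
                       for j in range(k + 1)) == comb(n, k)

for n in range(9):                                   # (10)
    for k in range(9):
        assert sum(comb(n + j, j) for j in range(k + 1)) == comb(n + k + 1, k)

print("identities (5), (6a), (7a), (8), (9), (10) verified for small parameters")
</syntaxhighlight>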

Dixon's identity
Dixon's identity is

\sum_{k=-a}^{a} (-1)^k \binom{2a}{a+k}^3 = \frac{(3a)!}{(a!)^3}

or, more generally,

\sum_{k=-a}^{a} (-1)^k \binom{a+b}{a+k} \binom{b+c}{b+k} \binom{c+a}{c+k} = \frac{(a+b+c)!}{a!\, b!\, c!}
where a, b, and c are non-negative integers.
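The general form of Dixon's identity given above is a reconstruction; the hedged Python check below (function name and parameter ranges are ours) verifies it for small a, b, c.

<syntaxhighlight lang="python">
from math import comb, factorial

def dixon_lhs(a, b, c):
    # sum only over the k for which all three binomial coefficients can be nonzero
    total = 0
    for k in range(-min(a, b, c), min(a, b, c) + 1):
        sign = -1 if k % 2 else 1
        total += sign * comb(a + b, a + k) * comb(b + c, b + k) * comb(c + a, c + k)
    return total

for a in range(5):
    for b in range(5):
        for c in range(5):
            assert dixon_lhs(a, b, c) == factorial(a + b + c) // (
                factorial(a) * factorial(b) * factorial(c))
</syntaxhighlight>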

Continuous identities
Certain trigonometric integrals have values expressible in terms of binomial coefficients. For nonnegative integers m and n:
(17)

(18)

(19)

These can be proved by using Euler's formula to convert trigonometric functions to complex exponentials, expanding using the binomial theorem, and integrating term by term.
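The specific integrals (17)–(19) could not be recovered from this copy, but the method just described can be illustrated with a standard consequence of it: expanding (2 cos x)^n via Euler's formula gives C(n, k) = (1/2π) ∫_{−π}^{π} (2 cos x)^n cos((n − 2k)x) dx. The sketch below (our own helper, numerical quadrature chosen for illustration) checks this identity.

<syntaxhighlight lang="python">
from math import comb, cos, pi

def trig_integral(n, k, samples=4096):
    # midpoint rule over a full period; exact (up to roundoff) for trigonometric
    # polynomials whose degree is far below the number of samples
    h = 2 * pi / samples
    total = sum((2 * cos(-pi + (j + 0.5) * h)) ** n *
                cos((n - 2 * k) * (-pi + (j + 0.5) * h))
                for j in range(samples))
    return total * h / (2 * pi)

for n in range(9):
    for k in range(n + 1):
        assert abs(trig_integral(n, k) - comb(n, k)) < 1e-9
</syntaxhighlight>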

Generating functions
Ordinary generating functions
For a fixed n, the ordinary generating function of the sequence \binom{n}{0}, \binom{n}{1}, \binom{n}{2}, \ldots is:

\sum_{k \ge 0} \binom{n}{k} x^k = (1+x)^n.
For a fixed k, the ordinary generating function of the sequence \binom{0}{k}, \binom{1}{k}, \binom{2}{k}, \ldots is:

\sum_{n \ge 0} \binom{n}{k} x^n = \frac{x^k}{(1-x)^{k+1}}.
The bivariate generating function of the binomial coefficients is:

\sum_{n, k \ge 0} \binom{n}{k} x^n y^k = \frac{1}{1 - x - xy}.

Another bivariate generating function of the binomial coefficients, which is symmetric, is:

\sum_{n, k \ge 0} \binom{n+k}{k} x^n y^k = \frac{1}{1 - x - y}.
Exponential generating function


The exponential bivariate generating function of the binomial coefficients is:

\sum_{n, k \ge 0} \binom{n+k}{k} \frac{x^n y^k}{(n+k)!} = e^{x+y}.
Divisibility properties
In 1852, Kummer proved that if m and n are nonnegative integers and p is a prime number, then the largest power of p dividing \binom{m+n}{m} equals p^c, where c is the number of carries when m and n are added in base p. Equivalently, the exponent of a prime p in \binom{n}{k} equals the number of nonnegative integers j such that the fractional part of k/p^j is greater than the fractional part of n/p^j. It can be deduced from this that \binom{n}{k} is divisible by n/gcd(n, k). A somewhat surprising result by David Singmaster (1974) is that any integer divides almost all binomial coefficients. More precisely, fix an integer d and let f(N) denote the number of binomial coefficients \binom{n}{k} with n < N such that d divides \binom{n}{k}. Then

\lim_{N \to \infty} \frac{f(N)}{N(N+1)/2} = 1.
Since the number of binomial coefficients \binom{n}{k} with n < N is N(N+1)/2, this implies that the density of binomial coefficients divisible by d goes to 1.

Another fact: An integer n ≥ 2 is prime if and only if all the intermediate binomial coefficients

\binom{n}{1}, \binom{n}{2}, \ldots, \binom{n}{n-1}

are divisible by n. Proof: When p is prime, p divides \binom{p}{k} for all 0 < k < p, because \binom{p}{k} is a natural number and the numerator has a prime factor p but the denominator does not have a prime factor p. When n is composite, let p be the smallest prime factor of n and let k = n/p. Then 0 < p < n and

\binom{n}{p} = \frac{k (n-1)(n-2) \cdots (n-p+1)}{(p-1)!} \not\equiv 0 \pmod{n},

otherwise the numerator k(n − 1)(n − 2)⋯(n − p + 1) would have to be divisible by n = k·p, which can only be the case when (n − 1)(n − 2)⋯(n − p + 1) is divisible by p. But n is divisible by p, so p does not divide n − 1, n − 2, ..., n − p + 1, and because p is prime, we know that p does not divide (n − 1)(n − 2)⋯(n − p + 1), and so the numerator cannot be divisible by n.
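Kummer's carry criterion stated at the start of this section is easy to check numerically; the sketch below uses our own helper names and illustrative ranges.

<syntaxhighlight lang="python">
from math import comb

def carries(m, n, p):
    # count the carries when adding m and n in base p
    count, carry = 0, 0
    while m or n or carry:
        carry = 1 if (m % p) + (n % p) + carry >= p else 0
        count += carry
        m //= p
        n //= p
    return count

def p_adic_valuation(x, p):
    # exponent of the prime p in x (x >= 1)
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

for p in (2, 3, 5, 7):
    for m in range(1, 40):
        for n in range(1, 40):
            assert p_adic_valuation(comb(m + n, m), p) == carries(m, n, p)
</syntaxhighlight>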


Bounds and asymptotic formulas


The following bounds for \binom{n}{k} hold for 1 ≤ k ≤ n:

\left( \frac{n}{k} \right)^k \le \binom{n}{k} \le \frac{n^k}{k!} < \left( \frac{n e}{k} \right)^k.
Stirling's approximation yields the bound

\sqrt{n}\, \binom{2n}{n} \ge 2^{2n-1}

(and analogous bounds in general), as well as the approximation

\binom{2n}{n} \sim \frac{4^n}{\sqrt{\pi n}} \quad \text{as } n \to \infty.

The infinite product formula (cf. Gamma function, alternative definition), for m ≥ 2 and n ≥ 1,

yields the asymptotic formulas

as n → ∞.
This asymptotic behaviour is contained in the approximation

as well. (Here

is the k-th harmonic number and

is the Euler–Mascheroni constant).

The sum of binomial coefficients \sum_{i=0}^{k} \binom{n}{i} can be bounded by a term exponential in n and the binary entropy of the largest ratio i/n that occurs. More precisely, for n ≥ 1 and 0 ≤ k ≤ n/2, it holds that

\sum_{i=0}^{k} \binom{n}{i} \le 2^{n H(k/n)}

where

H(p) = -p \log_2 p - (1-p) \log_2 (1-p)

is the binary entropy of p.[7]
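The entropy bound as reconstructed above can be spot-checked numerically; the helper name and ranges below are our own illustrative choices.

<syntaxhighlight lang="python">
from math import comb, log2

def binary_entropy(p):
    return -p * log2(p) - (1 - p) * log2(1 - p)

for n in range(2, 101):
    for k in range(1, n // 2 + 1):
        lhs = sum(comb(n, i) for i in range(k + 1))
        # the tiny multiplicative slack only guards against floating-point rounding
        assert lhs <= 2 ** (n * binary_entropy(k / n)) * (1 + 1e-12)
</syntaxhighlight>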

A simple and rough upper bound for the sum of binomial coefficients is given by the formula below (not difficult to prove)


Generalizations
Generalization to multinomials
Binomial coefficients can be generalized to multinomial coefficients. They are defined to be the number:

\binom{n}{k_1, k_2, \ldots, k_r} = \frac{n!}{k_1!\, k_2! \cdots k_r!}

where

k_1 + k_2 + \cdots + k_r = n.

While the binomial coefficients represent the coefficients of (x + y)^n, the multinomial coefficients represent the coefficients of the polynomial

(x_1 + x_2 + \cdots + x_r)^n.

See multinomial theorem. The case r = 2 gives binomial coefficients:

\binom{n}{k_1, k_2} = \binom{n}{k_1, n - k_1} = \binom{n}{k_1} = \binom{n}{k_2}.
The combinatorial interpretation of multinomial coefficients is the distribution of n distinguishable elements over r (distinguishable) containers, each containing exactly k_i elements, where i is the index of the container. Multinomial coefficients have many properties similar to those of binomial coefficients, for example the recurrence relation:

\binom{n}{k_1, k_2, \ldots, k_r} = \binom{n-1}{k_1 - 1, k_2, \ldots, k_r} + \binom{n-1}{k_1, k_2 - 1, \ldots, k_r} + \cdots + \binom{n-1}{k_1, k_2, \ldots, k_r - 1}

and symmetry:

\binom{n}{k_1, k_2, \ldots, k_r} = \binom{n}{k_{\sigma_1}, k_{\sigma_2}, \ldots, k_{\sigma_r}}

where (σ_1, σ_2, ..., σ_r) is a permutation of (1, 2, ..., r).
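The "fill the containers one after another" reading of the combinatorial interpretation above is equivalent to an iterated product of binomial coefficients; the following hedged Python check (helper names and ranges are ours) verifies that it agrees with the factorial formula.

<syntaxhighlight lang="python">
from math import comb, factorial
from itertools import product

def multinomial(ks):
    # n! / (k_1! k_2! ... k_r!) with n = sum of the k_i
    result = factorial(sum(ks))
    for k in ks:
        result //= factorial(k)
    return result

def multinomial_by_binomials(ks):
    # choose k_1 elements of n, then k_2 of the remainder, and so on
    n, result = sum(ks), 1
    for k in ks:
        result *= comb(n, k)
        n -= k
    return result

for ks in product(range(5), repeat=3):
    assert multinomial(ks) == multinomial_by_binomials(ks)
</syntaxhighlight>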

Generalization to negative integers


If k ≥ 0, the identity

\binom{n}{k} = (-1)^k \binom{k - n - 1}{k}

extends \binom{n}{k} to all negative integers n. In the special case n = −1, this reduces to

\binom{-1}{k} = (-1)^k.
Taylor series
Using Stirling numbers of the first kind the series expansion around any arbitrarily chosen point is


Binomial coefficient with n=1/2


The definition of the binomial coefficients can be extended to the case where n is real and k is an integer. In particular, the following identity holds for any non-negative integer k:

\binom{1/2}{k} = \frac{(-1)^{k-1}}{4^k (2k-1)} \binom{2k}{k}.

This shows up when expanding \sqrt{1+x} into a power series using the Newton binomial series:

\sqrt{1+x} = \sum_{k \ge 0} \binom{1/2}{k} x^k.
Identity for the product of binomial coefficients


One can express the product of binomial coefficients as a linear combination of binomial coefficients:

\binom{z}{m} \binom{z}{n} = \sum_{k=0}^{\min(m,n)} \binom{m+n-k}{k,\, m-k,\, n-k} \binom{z}{m+n-k}

where the connection coefficients are multinomial coefficients. In terms of labelled combinatorial objects, the connection coefficients represent the number of ways to assign m + n − k labels to a pair of labelled combinatorial objects, of weight m and n respectively, that have had their first k labels identified, or glued together, to get a new labelled combinatorial object of weight m + n − k. (That is, to separate the labels into three portions to apply to the glued part, the unglued part of the first object, and the unglued part of the second object.) In this regard, binomial coefficients are to exponential generating series what falling factorials are to ordinary generating series.

Partial Fraction Decomposition


The partial fraction decomposition of the inverse is given by

\frac{1}{\binom{z}{n}} = \sum_{i=0}^{n-1} (-1)^{n-1-i} \binom{n}{i} \frac{n-i}{z-i}

and

\frac{1}{\binom{z+n}{n}} = \sum_{i=1}^{n} (-1)^{i-1} \binom{n}{i} \frac{i}{z+i}.

Newton's binomial series


Newton's binomial series, named after Sir Isaac Newton, is one of the simplest Newton series:

(1+z)^{\alpha} = \sum_{n=0}^{\infty} \binom{\alpha}{n} z^n = 1 + \binom{\alpha}{1} z + \binom{\alpha}{2} z^2 + \cdots.

The identity can be obtained by showing that both sides satisfy the differential equation (1 + z) f'(z) = α f(z). The radius of convergence of this series is 1. An alternative expression is

\frac{1}{(1-z)^{\alpha+1}} = \sum_{n=0}^{\infty} \binom{n+\alpha}{n} z^n

where the identity

\binom{n+\alpha}{n} = (-1)^n \binom{-\alpha-1}{n}
is applied.


Two real or complex valued arguments


The binomial coefficient is generalized to two real or complex valued arguments using the gamma function or beta function via

\binom{x}{y} = \frac{\Gamma(x+1)}{\Gamma(y+1)\, \Gamma(x-y+1)} = \frac{1}{(x+1)\, \mathrm{B}(y+1,\, x-y+1)}.
This definition inherits the following additional properties from the gamma function:

moreover,

The resulting function has been little-studied, apparently first being graphed in (Fowler 1996). Notably, many binomial identities fail: \binom{n}{m} = \binom{n}{n-m} holds, but \binom{-n}{m} \neq \binom{-n}{-n-m} for n positive (so −n negative). The behavior is quite complex, and markedly different in various octants (that is, with respect to the x and y axes and the line y = x), with the behavior for negative x having singularities at negative integer values and a checkerboard of positive and negative regions: in the octant 0 ≤ y ≤ x it is a smoothly interpolated form of the usual binomial, with a ridge ("Pascal's ridge"); in the octant 0 ≤ x ≤ y and in the quadrant x ≥ 0, y ≤ 0 the function is close to zero; in the quadrant x ≤ 0, y ≥ 0 the function is alternatingly very large positive and negative on parallelograms with vertices at integer lattice points; in the octant 0 > x > y the behavior is again alternatingly very large positive and negative, but on a square grid; and in the octant x < y < 0 it is close to zero, except for near the singularities.

Generalization to q-series
The binomial coefficient has a q-analog generalization known as the Gaussian binomial coefficient.

Generalization to infinite cardinals


The definition of the binomial coefficient can be generalized to infinite cardinals by defining:

\binom{\alpha}{\beta} = \left| \{ B \subseteq A : |B| = \beta \} \right|

where A is some set with cardinality α. One can show that the generalized binomial coefficient is well-defined, in the sense that no matter what set we choose to represent the cardinal number α, \binom{\alpha}{\beta} will remain the same. For finite cardinals, this definition coincides with the standard definition of the binomial coefficient. Assuming the Axiom of Choice, one can show that \binom{\alpha}{\alpha} = 2^{\alpha} for any infinite cardinal α.


Binomial coefficient in programming languages


The notation \binom{n}{k} is convenient in handwriting but inconvenient for typewriters and computer terminals. Many programming languages do not offer a standard subroutine for computing the binomial coefficient, but for example the J programming language uses the exclamation mark: k ! n. Naive implementations of the factorial formula, such as the following snippet in Python:

<syntaxhighlight lang="python">
from math import factorial

def binomialCoefficient(n, k):
    return factorial(n) // (factorial(k) * factorial(n - k))
</syntaxhighlight>

are very slow and are uselessly calculating factorials of very high numbers (in languages such as C or Java they suffer from overflow errors for this reason). A direct implementation of the multiplicative formula works well:

<syntaxhighlight lang="python">
def binomialCoefficient(n, k):
    if k > n - k:  # take advantage of symmetry
        k = n - k
    c = 1
    for i in range(k):
        c = c * (n - i)   # multiply by n, n-1, ..., n-k+1 in turn
        c = c // (i + 1)  # each intermediate c is itself a binomial coefficient C(n, i+1)
    return c
</syntaxhighlight>

(Notice that range(k) produces the values 0 to k − 1.) The example mentioned above can also be written in functional style. The following Scheme example uses the recursive definition

\binom{n}{k+1} = \binom{n}{k} \cdot \frac{n-k}{k+1}, \qquad \binom{n}{0} = 1.
Rational arithmetic can be easily avoided using integer division:

\binom{n}{k+1} = \left[ \binom{n}{k} \cdot (n-k) \right] \div (k+1).
The following implementation uses all these ideas:

<syntaxhighlight lang="scheme">
(define (binomial n k)
  ;; Helper function to compute C(n,k) via forward recursion
  (define (binomial-iter n k i prev)
    (if (>= i k)
        prev
        (binomial-iter n k (+ i 1) (/ (* (- n i) prev) (+ i 1)))))
  ;; Use symmetry property C(n,k) = C(n, n-k)
  (if (< k (- n k))
      (binomial-iter n k 0 1)
      (binomial-iter n (- n k) 0 1)))
</syntaxhighlight>

Another way to compute the binomial coefficient when using large numbers is to recognize that

\ln \binom{n}{k} = \ln \Gamma(n+1) - \ln \Gamma(k+1) - \ln \Gamma(n-k+1),


where \ln \Gamma(n) denotes the natural logarithm of the gamma function at n. It is a special function that is easily computed and is standard in some programming languages, such as log_gamma in Maxima, LogGamma in Mathematica, or gammaln in MATLAB. Roundoff error may cause the returned value to not be an integer.
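As a minimal sketch of the same idea in Python (function name ours), math.lgamma gives the log-gamma values and the result is recovered by exponentiating; for very large arguments one would keep the logarithm instead of exponentiating.

<syntaxhighlight lang="python">
from math import exp, lgamma

def binomial_via_lgamma(n, k):
    # ln C(n,k) = ln Gamma(n+1) - ln Gamma(k+1) - ln Gamma(n-k+1)
    return exp(lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1))

print(binomial_via_lgamma(52, 5))         # approximately 2598960, up to roundoff
print(round(binomial_via_lgamma(52, 5)))  # 2598960
</syntaxhighlight>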

Notes
[1] Higham (1998) [2] Lilavati Section 6, Chapter 4 (see Knuth (1997)). [3] Shilov (1977) [4] See (Graham, Knuth & Patashnik 1994), which also defines \binom{n}{k} = 0 for k < 0. Alternative generalizations, such as to two real or complex valued arguments using the Gamma function, assign nonzero values to \binom{n}{k} for k < 0, but this causes most binomial coefficient identities to fail, and thus is not widely used by the majority of definitions. One such choice of nonzero values leads to the aesthetically pleasing "Pascal windmill" in Hilton, Holton and Pedersen, Mathematical reflections: in a room with many mirrors, Springer, 1997, but causes even Pascal's identity to fail (at the origin). [5] Muir, Thomas (1902). "Note on Selected Combinations" (http://books.google.com/books/reader?id=EN8vAAAAIAAJ&output=reader&pg=GBS.PA102). Proceedings of the Royal Society of Edinburgh. [6] This can be seen as a discrete analog of Taylor's theorem. It is closely related to Newton's polynomial. Alternating sums of this form may be expressed as the Nörlund–Rice integral. [7] see e.g. Flum & Grohe (2006, p. 427)

References
Benjamin, Arthur T.; Quinn, Jennifer (2003). Proofs that Really Count: The Art of Combinatorial Proof (https:// www.maa.org/EbusPPRO/Bookstore/ProductDetail/tabid/170/Default.aspx?ProductId=675), Mathematical Association of America. Bryant, Victor (1993). Aspects of combinatorics. Cambridge University Press. ISBN0521419743. Flum, Jrg; Grohe, Martin (2006). Parameterized Complexity Theory (http://www.springer.com/east/home/ generic/search/results?SGWID=5-40109-22-141358322-0). Springer. ISBN978-3-540-29952-3. Fowler, David (January 1996). "The Binomial Coefficient Function". The American Mathematical Monthly (Mathematical Association of America) 103 (1): 117. doi:10.2307/2975209. JSTOR2975209 Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994). Concrete Mathematics (Second ed.). Addison-Wesley. pp.153256. ISBN0-201-55802-5. Higham, Nicholas J. (1998). Handbook of writing for the mathematical sciences. SIAM. p.25. ISBN0898714206. Knuth, Donald E. (1997). The Art of Computer Programming, Volume 1: Fundamental Algorithms (Third ed.). Addison-Wesley. pp.5274. ISBN0-201-89683-4. Singmaster, David (1974). "Notes on binomial coefficients. III. Any integer divides almost all binomial coefficients". J. London Math. Soc. (2) 8 (3): 555560. doi:10.1112/jlms/s2-8.3.555. Shilov, G. E. (1977). Linear algebra. Dover Publications. ISBN9780486635187.


External links
Calculation of Binomial Coefficient (http://www.stud.feec.vutbr.cz/~xvapen02/vypocty/komb.php?language=english) This article incorporates material from the following PlanetMath articles, which are licensed under the Creative Commons Attribution/Share-Alike License: Binomial Coefficient, Bounds for binomial coefficients, Proof that C(n,k) is an integer, Generalized binomial coefficients.

Block walking
In combinatorics, block walking is a method useful in thinking about sums of combinations graphically as "walks" on Pascal's triangle. As the name suggests, block walking problems involve counting the number of ways an individual can walk from one corner A of a city block to another corner B of another city block given restrictions on the number of blocks the person may walk, the directions the person may travel, the distance from A to B, et cetera.

An Example Block Walking Problem


Suppose such an individual, say "Fred", must walk exactly k blocks to get to a point B that is exactly k blocks from A. It is convenient to regard Fred's starting point A as the origin, (0, 0), of a rectangular array of lattice points and B as some lattice point (e, n), e units "East" and n units "North" of A, where e + n = k and both e and n are nonnegative.

Solution by Brute Force


A "brute force" solution to this problem may be obtained by systematically counting the number of ways Fred can reach each point where and without backtracking (i.e. only traveling North or East from one point to another) until a pattern is observed. For example, the number of ways Fred could go from to or is exactly one; to is two; to or is one; to or is three; and so on. In general, one soon discovers that the number of paths from A to any such X corresponds to an entry of Pascal's Triangle.

Combinatorial Solution
Since the problem involves counting a finite, discrete number of paths between lattice points, it is reasonable to assume a combinatorial solution exists to the problem. Towards this end, we note that for Fred to still be on a path that will take him from A to B over e + n blocks, at any point X he must travel along one of the unit vectors <1,0> and <0,1>. For the sake of clarity, call these vectors E and N. Given the coordinates of B, regardless of the path Fred travels he must walk along the vectors E and N exactly e and n times, respectively. As such, the problem reduces to finding the number of distinct rearrangements of a word consisting of e copies of E and n copies of N, which is equivalent to finding the number of ways to choose e indistinct objects from a group of e + n. Thus the total number of paths Fred could take from A to B traveling only North or East is \binom{e+n}{e}.
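A brute-force enumeration confirms the combinatorial count; the Python sketch below (function name and ranges are our own illustration) checks that the number of monotone North/East lattice paths from (0, 0) to (e, n) equals C(e + n, e).

<syntaxhighlight lang="python">
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def count_paths(e, n):
    # each path reaches (e, n) from the West or from the South
    if e == 0 or n == 0:
        return 1
    return count_paths(e - 1, n) + count_paths(e, n - 1)

for e in range(8):
    for n in range(8):
        assert count_paths(e, n) == comb(e + n, e)
</syntaxhighlight>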


Other Problems with Known, Block Walking Combinatorial Proofs


Proving that

can be done with a straightforward application of block walking.[1]

References
[1] Lehoczky, Sandor and Richard Rusczyk. The Art of Problem Solving, Volume II. Page 231.

Bondy's theorem
In mathematics, Bondy's theorem is a theorem in combinatorics that appeared in a 1972 article by John Adrian Bondy.[1] The theorem is as follows: Let X be a set with n elements and let A1, A2, ..., An be distinct subsets of X. Then there exists a subset S of X with n − 1 elements such that the sets Ai ∩ S are all distinct. In other words, if we have a 0-1 matrix with n rows and n columns such that each row is distinct, we can remove one column such that the rows of the resulting n × (n − 1) matrix are distinct.[2] [3] From the perspective of computational learning theory, this can be rephrased as follows: Let C be a concept class over a finite domain X. Then there exists a subset S of X with size at most |C| − 1 such that S is a witness set for every concept in C. This implies that every finite concept class C has its teaching dimension bounded by |C| − 1.

Example
Consider the 4 × 4 matrix

where all rows are pairwise distinct. If we delete, for example, the first column, the resulting matrix

no longer has this property: the first row is identical to the second row. Nevertheless, by Bondy's theorem we know that we can always find a column that can be deleted without introducing any identical rows. In this case, we can delete the third column: all rows of the resulting 4 × 3 matrix

are distinct. Another possibility would have been deleting the fourth column.
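Since the example matrices themselves were lost from this copy, the statement can instead be illustrated by exhaustive search; the Python sketch below (names and the tiny n are our own choices) checks Bondy's theorem for every 0-1 matrix with three distinct rows and three columns.

<syntaxhighlight lang="python">
from itertools import combinations, product

def has_deletable_column(rows):
    # return True if some column can be removed while keeping all rows distinct
    n = len(rows)
    for c in range(len(rows[0])):
        reduced = {row[:c] + row[c + 1:] for row in rows}
        if len(reduced) == n:
            return True
    return False

n = 3
for rows in combinations(product((0, 1), repeat=n), n):
    # combinations() already guarantees the n rows are distinct
    assert has_deletable_column(rows)
</syntaxhighlight>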


Notes
[1] Bondy, J. A. (1972), "Induced subsets", Journal of Combinatorial Theory, Series B 12: 201–202, doi:10.1016/0095-8956(72)90025-1, MR0319773. [2] Jukna, Stasys (2001), Extremal Combinatorics with Applications in Computer Science, Springer, ISBN 9783540663133, Section 12.1. [3] Clote, Peter; Remmel, Jeffrey B. (1995), Feasible Mathematics II, Birkhäuser, ISBN 9783764336752, Section 4.1.

Borsuk–Ulam theorem
In mathematics, the Borsuk–Ulam theorem, named after Stanisław Ulam and Karol Borsuk, states that every continuous function from an n-sphere into Euclidean n-space maps some pair of antipodal points to the same point. Here, two points on a sphere are called antipodal if they are in exactly opposite directions from the sphere's center. According to (Matoušek 2003, p. 25), the first historical mention of the statement of this theorem appears in (Lyusternik 1930). The first proof was given by (Borsuk 1933), where the formulation of the problem was attributed to Ulam. Since then, many alternate proofs have been found by various authors, as collected in (Steinlein 1985). The case n = 2 is often illustrated by saying that at any moment there is always a pair of antipodal points on the Earth's surface with equal temperatures and equal barometric pressures. This assumes that temperature and barometric pressure vary continuously. A stronger statement related to the Borsuk–Ulam theorem is that every antipode-preserving map f from S^n to itself has odd degree.

Corollaries
No subset of R^n is homeomorphic to S^n. The Lusternik–Schnirelmann theorem: If the sphere S^n is covered by n + 1 open sets, then one of these sets contains a pair (x, −x) of antipodal points. (This is equivalent to the Borsuk–Ulam theorem.) The Ham sandwich theorem: For any n compact sets in R^n we can always find a hyperplane dividing each of them into two subsets of equal measure. The Brouwer fixed point theorem (Matoušek 2003, p. 25; Su 1997).

References
Borsuk, K. (1933). "Drei Sätze über die n-dimensionale euklidische Sphäre". Fund. Math. 20: 177–190. Lyusternik, L.; Shnirel'man, S. (1930). "Topological Methods in Variational Problems". Issledowatelskii Institut Matematiki i Mechaniki pri O. M. G. U. (Moscow). Matoušek, Jiří (2003). Using the Borsuk–Ulam theorem. Berlin: Springer Verlag. doi:10.1007/978-3-540-76649-0. ISBN 3-540-00362-2. Steinlein, H. (1985). "Borsuk's antipodal theorem and its generalizations and applications: a survey. Méthodes topologiques en analyse non linéaire". Sém. Math. Supér. Montréal, Sém. Sci. OTAN (NATO Adv. Study Inst.) 95: 166–235. Su, Francis Edward (Nov. 1997). "Borsuk-Ulam Implies Brouwer: A Direct Construction" [1]. The American Mathematical Monthly 104 (9): 855–859.


References
[1] http://www.math.hmc.edu/~su/papers.dir/borsuk.pdf

Bruck–Ryser–Chowla theorem
The Bruck–Ryser–Chowla theorem is a result on the combinatorics of block designs. It states that if a (v, b, r, k, λ)-design exists with v = b (a symmetric design), then: if v is even, then k − λ is a square; if v is odd, then the following Diophantine equation has a nontrivial solution: x² − (k − λ)y² − (−1)^((v−1)/2) λ z² = 0. The theorem was proved in the case of projective planes in (Bruck & Ryser 1949). It was extended to symmetric designs in (Ryser & Chowla 1950).

Projective planes
In the special case of a symmetric design with λ = 1, that is, a projective plane, the theorem (which in this case is referred to as the Bruck–Ryser theorem) can be stated as follows: If a finite projective plane of order q exists and q is congruent to 1 or 2 (mod 4), then q must be the sum of two squares. Note that for a projective plane, the design parameters are v = b = q² + q + 1, r = k = q + 1, λ = 1. The theorem, for example, rules out the existence of projective planes of orders 6 and 14 but allows the existence of planes of orders 10 and 12. Since a projective plane of order 10 has been shown not to exist using a combination of coding theory and large-scale computer search, the condition of the theorem is evidently not sufficient for the existence of a design. However, no stronger general non-existence criterion is known.
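The Bruck–Ryser test for projective planes just stated is easy to apply mechanically; the Python sketch below (function names ours) reproduces the example orders mentioned above.

<syntaxhighlight lang="python">
from math import isqrt

def is_sum_of_two_squares(q):
    a = 0
    while a * a <= q:
        b = q - a * a
        if isqrt(b) ** 2 == b:
            return True
        a += 1
    return False

def ruled_out_by_bruck_ryser(q):
    # a plane of order q cannot exist if q = 1 or 2 (mod 4) and q is not a sum of two squares
    return q % 4 in (1, 2) and not is_sum_of_two_squares(q)

print([q for q in range(2, 15) if ruled_out_by_bruck_ryser(q)])  # prints [6, 14]
</syntaxhighlight>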

Connection with incidence matrices


The existence of a symmetric (v, b, r, k, λ)-design is equivalent to the existence of a v × v incidence matrix R with elements 0 and 1 satisfying R R^T = (k − λ)I + λJ where I is the v × v identity matrix and J is the v × v all-1 matrix. In essence, the Bruck–Ryser–Chowla theorem is a statement of the necessary conditions for the existence of a rational v × v matrix R satisfying this equation. In fact, the conditions stated in the Bruck–Ryser–Chowla theorem are not merely necessary, but also sufficient for the existence of such a rational matrix R. They can be derived from the Hasse–Minkowski theorem on the rational equivalence of quadratic forms.


References
Bruck, R.H.; Ryser, H.J. (1949), "The nonexistence of certain finite projective planes", Canadian J. Math. 1: 8893 Chowla, S.; Ryser, H.J. (1950), "Combinatorial problems", Canadian J. Math. 2: 9399 Lam, C. W. H. (1991), "The Search for a Finite Projective Plane of Order 10" [1], American Mathematical Monthly 98 (4): 305318 van Lint, J.H., and R.M. Wilson (1992), A Course in Combinatorics. Cambridge, Eng.: Cambridge University Press.

External links
Weisstein, Eric W., "Bruck–Ryser–Chowla Theorem [2]" from MathWorld.

References
[1] http://www.cecm.sfu.ca/organics/papers/lam/ [2] http://mathworld.wolfram.com/Bruck-Ryser-ChowlaTheorem.html

Butcher group
In mathematics, the Butcher group, named after the New Zealand mathematician John C. Butcher by Hairer & Wanner (1974), is an infinite-dimensional group first introduced in numerical analysis to study solutions of non-linear ordinary differential equations by the RungeKutta method. It arose from an algebraic formalism involving rooted trees that provides formal power series solutions of the differential equation modeling the flow of a vector field. It was Cayley (1857), prompted by the work of Sylvester on change of variables in differential calculus, who first noted that the derivatives of a composition of functions can be conveniently expressed in terms of rooted trees and their combinatorics. Connes & Kreimer (1999) pointed out that the Butcher group is the group of characters of the Hopf algebra of rooted trees that had arisen independently in their own work on renormalization in quantum field theory and Connes' work with Moscovici on local index theorems. This Hopf algebra, often called the Connes-Kreimer algebra, is essentially equivalent to the Butcher group, since its dual can be identified with the universal enveloping algebra of the Lie algebra of the Butcher group.[1] As they commented:

We regard Butchers work on the classification of numerical integration methods as an impressive example that concrete problem-oriented work can lead to far-reaching conceptual results.


Differentials and rooted trees


A rooted tree is a graph with a distinguished node, called the root, in which every other node is connected to the root by a unique path. If the root of a tree t is removed and the nodes connected to the original node by a single bond are taken as new roots, the tree t breaks up into rooted trees t1, t2, ... Reversing this process a new tree t = [t1, t2, ...] can be constructed by joining the roots of the trees to a new common root.

[Figure: Rooted trees with two, three and four nodes, from Cayley's original article.]

The number of nodes in a tree is denoted by |t|. A heap-ordering of a rooted tree t is an allocation of the numbers 1 through |t| to the nodes so that the numbers increase on any path going away from the root. Two heap orderings are equivalent if there is an automorphism of rooted trees mapping one of them on the other. The number of equivalence classes of heap-orderings on a particular tree is denoted by α(t) and can be computed using Butcher's formula:[2] [3]

\alpha(t) = \frac{|t|!}{t!\, \sigma(t)}

where σ(t) denotes the order of the symmetry group S_t of t and the tree factorial is defined recursively by

[t_1, \ldots, t_k]! = |[t_1, \ldots, t_k]| \cdot t_1! \cdots t_k!

with the tree factorial of an isolated root defined to be 1.

The ordinary differential equation for the flow of a vector field on an open subset U of R^N can be written

where x(s) takes values in U, f is a smooth function from U to RN and x0 is the starting point of the flow at time s = 0.

Cayley (1857) gave a method to compute the higher order derivatives x(m)(s) in terms of rooted trees. His formula can be conveniently expressed using the elementary differentials introduced by Butcher. These are defined inductively by

With this notation

giving the power series expansion

As an example when N = 1, so that x and f are real-valued functions of a single real variable, the formula yields

where the four terms correspond to the four rooted trees from left to right in the figure of rooted trees above. In a single variable this formula is the same as Faà di Bruno's formula of 1855; however in several variables it has to be written more carefully in the form

where the tree structure is crucial.


Definition using Hopf algebra of rooted trees


The Hopf algebra H of rooted trees was defined by Connes & Kreimer (1998) in connection with Kreimer's previous work on renormalization in quantum field theory. It was later discovered that the Hopf algebra was the dual of a Hopf algebra defined earlier by Grossman & Larsen (1989) in a different context. The characters of H, i.e. the homomorphisms of the underlying commutative algebra into R, form a group, called the Butcher group. It corresponds to the formal group structure discovered in numerical analysis by Butcher (1972). The Hopf algebra of rooted trees H is defined to be the polynomial ring in the variables t, where t runs through rooted trees. Its comultiplication is defined by

where the sum is over all proper rooted subtrees s of t;

is the monomial given by the product the variables ti

formed by the rooted trees that arise on erasing all the nodes of s and connected links from t. The number of such trees is denoted by n(t\s). Its counit is the homomorphism of H into R sending each variable t to zero. Its antipode S can be defined recursively by the formula

The Butcher group is defined to be the set of algebra homomorphims of H into R with group structure

The inverse in the Butcher group is given by

and the identity by the counit .

Butcher series and RungeKutta method


The non-linear ordinary differential equation

can be solved approximately by the Runge-Kutta method. This iterative scheme requires an m x m matrix

and a vector

with m components. The scheme defines vectors xn by first finding a solution X1, ... , Xm of

and then setting

Butcher (1963) showed that the solution of the corresponding ordinary differential equations


has the power series expansion

where j and are determined recursively by

and

The power series above are called B-series or Butcher series.[2] [4] The corresponding assignment is an element of the Butcher group. The homomorphism corresponding to the actual flow has

Butcher showed that the Runge-Kutta method gives an nth order approximation of the actual flow provided that and agree on all trees with n nodes or less. Moreover Butcher (1972) showed that the homomorphisms defined by the Runge-Kutta method form a dense subgroup of the Butcher group: in fact he showed that, given a homomorphism ', there is a Runge-Kutta homomorphism agreeing with ' to order n; and that if given homomorphims and ' corresponding to Runge-Kutta data (A, b) and (A' , b' ), the product homomorphism corresponds to the data

Hairer & Wanner (1974) proved that the Butcher group acts naturally on the functions f. Indeed setting

they proved that

Lie algebra
Connes & Kreimer (1998) showed that associated with the Butcher group G is an infinite-dimensional Lie algebra. The existence of this Lie algebra is predicted by a theorem of Milnor & Moore (1965): the commutativity and natural grading on H implies that the dual H* can be identified with the universal enveloping algebra of a Lie algebra . Connes and Kreimer explicitly identify with a space of derivations of H into R, i.e. linear maps such that

the formal tangent space of G at the identity . This forms a Lie algebra with Lie bracket

is generated by the derivations t defined by for each rooted tree t.


Renormalization
Connes & Kreimer (1998) provided a general context for using Hopf algebraic methods to give a simple mathematical formulation of renormalization in quantum field theory. Renormalization was interpreted as Birkhoff factorization of loops in the character group of the associated Hopf algebra. The models considered by Kreimer (1999) had Hopf algebra H and character group G, the Butcher group. Brouder (2000) has given an account of this renormalization process in terms of Runge-Kutta data. In this simplified setting, a renormalizable model has two pieces of input data:[5] a set of Feynman rules given by an algebra homomorphism of H into the algebra V of Laurent series in z with poles of finite order; a renormalization scheme given by a linear operator R on V such that R satisfies the Rota-Baxter identity

and the image of R id lies in the algebra V+ of power series in z. Note that R satisfies the Rota-Baxter identity if and only if id R does. An important example is the minimal subtraction scheme

In addition there is a projection P of H onto the augmentation ideal ker given by

To define the renormalized Feynman rules, note that the antipode S satisfies

so that

The renormalized Feynman rules are given by a homomorphism homomorphism S. The homomorphism is uniquely specified by .

of H into V obtained by twisting the

Because of the precise form of , this gives a recursive formula for

For the minimal subtraction scheme, this process can be interpreted in terms of Birkhoff factorization in the complex Butcher group. can be regarded as a map of the unit circle into the complexification GC of G (maps into C instead of R). As such it has a Birkhoff factorization

where + is holomorphic on the interior of the closed unit disk and is holomorphic on its complement in the Riemann sphere C with () = 1. The loop + corresponds to the renormalized homomorphism. The evaluation at z = 0 of + or the renormalized homomorphism gives the dimensionally regularized values for each rooted tree. In example, the Feynman rules depend on additional parameter , a "unit of mass". Connes & Kreimer (2001) showed that

so that is independent of . The complex Butcher group comes with a natural one-parameter group w of automorphisms, dual to that on H for w 0 in C. The loops and w have the same negative part and, for t real,


defines a one-parameter subgroup of the complex Butcher group GC called the renormalization group flow (RG). Its infinitesimal generator is an element of the Lie algebra of GC and is defined by It is called the beta-function of the model. In any given model, there is usually a finite-dimensional space of complex coupling constants. The complex Butcher group acts by diffeomorphims on this space. In particular the renormalization group defines a flow on the space of coupling constants, with the beta function giving the corresponding vector field. More general models in quantum field theory require rooted trees to be replaced by Feynman diagrams with vertices decorated by symbols from a finite index set. Connes and Kreimer have also defined Hopf algebras in this setting and have shown how they can be used to systematize standard computations in renormalization theory.

Example
Kreimer (2007) has given a "toy model" involving dimensional regularization for H and the algebra V. If c is a positive integer and q = q / is a dimensionless constant, Feynman rules can be defined recursively by

where z = 1 D/2 is the regularization parameter. These integrals can be computed explicitly in terms of the Gamma function using the formula

In particular

Taking the renormalization scheme R of minimal subtraction, the renormalized quantities when evaluated at z = 0.

are polynomials in

Notes
[1] [2] [3] [4] Brouder 2004 Butcher 2008 Brouder 2000 Jackson, K. R.; Kvrn, A.; Nrsett, S.P. (1994), "The use of Butcher series in the analysis of Newton-like iterations in Runge-Kutta formulas", Applied Numerical Mathematics 15 (3): 341356, doi:10.1016/0168-9274(94)00031-X (Special issue to honor professor J. C. Butcher on his sixtieth birthday) [5] Kreimer 2007

References
Bergbauer, Christoph; Kreimer, Dirk (2005), "The Hopf Algebra of Rooted Trees in Epstein-Glaser Renormalization", Ann. Henri Poincaré 6 (2): 343–367, arXiv:hep-th/0403207, doi:10.1007/s00023-005-0210-3 Boutet de Monvel, Louis (2003), "Algèbre de Hopf des diagrammes de Feynman, renormalisation et factorisation de Wiener-Hopf (d'après A. Connes et D. Kreimer). [Hopf algebra of Feynman diagrams, renormalization and Wiener-Hopf factorization (following A. Connes and D. Kreimer) (http://people.math.jussieu.fr/~boutet/renormalisation.pdf)"], Astérisque, Séminaire Bourbaki 290: 149–165

Butcher group Brouder, Christian (2000), "RungeKutta methods and renormalization", Eur.Phys.J. C12: 521534, arXiv:hep-th/9904014 Brouder, Christian (2004), "Trees, Renormalization and Differential Equations" (http://www.springerlink.com/ content/m334351x243t2412/), BIT Numerical Mathematics 44 (3): 425438, doi:10.1023/B:BITN.0000046809.66837.cc Butcher, J.C (1963), "Coefficients for the study of Runge-Kutta integration processes", J. Austral. Math. Soc. 3 (2): 185201, doi:10.1017/S1446788700027932 Butcher, J.C (1972), "An algebraic theory of integration methods", Math. Comput. 26 (117): 79106, doi:10.2307/2004720, JSTOR2004720 Butcher, John C. (2008), Numerical methods for ordinary differential equations (2nd ed.), John Wiley & Sons Ltd., ISBN978-0-470-72335-7, MR2401398 Butcher, J.C (2009), "Trees and numerical methods for ordinary differential equations" (http://www. springerlink.com/content/un0168l544n80250/), Numerical Algorithms (Springer online) Cayley, Arthur (1857), "On the theory of analytic forms called trees" (http://www.archive.org/stream/ collectedmathema03cayluoft#page/242/mode/1up), Philosophical Magazine XIII: 172176 (also in Volume 3 of the Collected Works of Cayley, pages 242246) Connes, Alain; Kreimer, Dirk (1998), "Hopf Algebras, Renormalization and Noncommutative Geometry" (http:// www.alainconnes.org/docs/ncgk.pdf), Communications in Mathematical Physics 199: 203242, doi:10.1007/s002200050499 Connes, Alain; Kreimer, Dirk (1999), "Lessons from quantum field theory: Hopf algebras and spacetime geometries", Lett. Math. Phys. 48: 8596, doi:10.1023/A:1007523409317 Connes, Alain; Kreimer, Dirk (2000), "Renormalization in quantum field theory and the Riemann-Hilbert problem. I. The Hopf algebra structure of graphs and the main theorem" (http://www.alainconnes.org/docs/ RH1.pdf), Comm. Math. Phys. 210: 249273, doi:10.1007/s002200050779 Connes, Alain; Kreimer, Dirk (2001), "Renormalization in quantum field theory and the Riemann-Hilbert problem. II. The -function, diffeomorphisms and the renormalization group" (http://www.alainconnes.org/ docs/RH2.pdf), Comm. In Math. Phys. 216: 215241, doi:10.1007/PL00005547 Gracia-Bonda, Jos; Vrilly, Joseph C.; Figueroa, Hctor (2000), Elements of noncommutative geometry, Birkhuser, ISBN0817641246, Chapter 14. Grossman, R.; Larson, R. (1989), "Hopf algebraic structures of families of trees" (http://users.lac.uic.edu/ ~grossman/papers/journal-03.pdf), Journal Algebra 26: 184210 Hairer, E.; Wanner, G. (1974), "On the Butcher group and general multi-value methods" (http://www. springerlink.com/content/e6r7327737lq3516/), Computing 13: 115, doi:10.1007/BF02268387 Kreimer, Dirk (1998), "On the Hopf algebra structure of perturbative quantum field theories", Adv. Theor. Math. Phys. 2: 303334, arXiv:q-alg/9707029 Kreimer, Dirk (1999), "Chen's iterated integral represents the operator product expansion", Adv. Theor. Math. Phys. 3: 627670, arXiv:hep-th/9901099 Kreimer, Dirk (2007), Factorization in Quantum Field Theory: An Exercise in Hopf Algebras and Local Singularities, Frontiers in Number Theory, Physics, and Geometry II, Springer, pp.715736, arXiv:hep-th/0306020 Milnor, John Willard; Moore, John C. (1965), "On the structure of Hopf algebras", Annals of Mathematics. Second Series 81 (2): 211264, doi:10.2307/1970615, JSTOR1970615, MR0174052



Cameron–Erdős conjecture
In combinatorics, the Cameron–Erdős conjecture is the statement that the number of sum-free sets contained in {1, ..., N} is O(2^{N/2}).

The sum of two odd numbers is even, so a set of odd numbers is always sum-free. There are ⌈N/2⌉ odd numbers in {1, ..., N}, and so 2^{⌈N/2⌉} subsets of odd numbers in {1, ..., N}. The Cameron–Erdős conjecture says that this counts a constant proportion of the sum-free sets. The conjecture was stated by Peter Cameron and Paul Erdős in 1988.[1] It was proved by Ben Green[2] and independently by Alexander Sapozhenko[3] [4] in 2003.

Notes
[1] Cameron, P. J.; Erdős, P. (1990), "On the number of sets of integers with various properties" (http://books.google.com/books?id=68g0Ds4FNM0C&pg=PA61&lpg=PA61), Number theory: proceedings of the First Conference of the Canadian Number Theory Association, held at the Banff Center, Banff, Alberta, April 17-27, 1988, Berlin: de Gruyter, pp. 61–79, MR1106651. [2] Green, Ben (2004), "The Cameron-Erdős conjecture", The Bulletin of the London Mathematical Society 36 (6): 769–778, arXiv:math.NT/0304058, doi:10.1112/S0024609304003650, MR2083752. [3] Sapozhenko, A. A. (2003), "The Cameron-Erdős conjecture", Doklady Akademii Nauk 393 (6): 749–752, MR2088503. [4] Sapozhenko, Alexander A. (2008), "The Cameron-Erdős conjecture", Discrete Mathematics 308 (19): 4361–4369, doi:10.1016/j.disc.2007.08.103, MR2433862.

Catalan's constant
In mathematics, Catalan's constant G, which occasionally appears in estimates in combinatorics, is defined by

G = \beta(2) = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)^2} = \frac{1}{1^2} - \frac{1}{3^2} + \frac{1}{5^2} - \frac{1}{7^2} + \cdots

where β is the Dirichlet beta function. Its numerical value [1] is approximately (sequence A006752 in OEIS) G = 0.915 965 594 177 219 015 054 603 514 932 384 110 774 It is not known whether G is irrational, let alone transcendental. Catalan's constant was named after Eugène Charles Catalan.

Integral identities
Some identities include

along with


where K(x) is a complete elliptic integral of the first kind, and .

Uses
G appears in combinatorics, as well as in values of the second polygamma function, also called the trigamma function, at fractional arguments:

Simon Plouffe gives an infinite collection of identities between the trigamma function, π² and Catalan's constant; these are expressible as paths on a graph. It also appears in connection with the hyperbolic secant distribution.

Quickly converging series


The following two formulas involve quickly converging series, and are thus appropriate for numerical computation:

and

The theoretical foundations for such series are given by Broadhurst (for the first formula)[2] and Ramanujan (for the second formula).[3] Algorithms for fast evaluation of the Catalan constant were constructed by E. Karatsuba.[4] [5]

Known digits
The number of known digits of Catalan's constant G has increased dramatically during the last decades. This is due both to the increase of performance of computers as well as to algorithmic improvements.[6]


Number of known decimal digits of Catalan's constant G

Date | Decimal digits | Computation performed by
1832 | 16 | Thomas Clausen
1858 | 19 | Carl Johan Danielsson Hill
1864 | 14 | Eugène Charles Catalan
1877 | 20 | James W. L. Glaisher
1913 | 32 | James W. L. Glaisher
1990 | 20,000 | Greg J. Fee
1996 | 50,000 | Greg J. Fee
August 14, 1996 | 100,000 | Greg J. Fee & Simon Plouffe
September 29, 1996 | 300,000 | Thomas Papanikolaou
1996 | 1,500,000 | Thomas Papanikolaou
1997 | 3,379,957 | Patrick Demichel
January 4, 1998 | 12,500,000 | Xavier Gourdon
2001 | 100,000,500 | Xavier Gourdon & Pascal Sebah
2002 | 201,000,000 | Xavier Gourdon & Pascal Sebah
October 2006 | 5,000,000,000 | Shigeru Kondo & Steve Pagliarulo [7]
August 2008 | 10,000,000,000 | Shigeru Kondo & Steve Pagliarulo [8]
January 31, 2009 | 15,510,000,000 | Alexander J. Yee & Raymond Chan [9]
April 16, 2009 | 31,026,000,000 | Alexander J. Yee & Raymond Chan [9]

Notes
[1] http://www.gutenberg.org/etext/812 [2] D.J. Broadhurst, "Polylogarithmic ladders, hypergeometric series and the ten millionth digits of ζ(3) and ζ(5) (http://arxiv.org/abs/math.CA/9803067)", (1998) arXiv math.CA/9803067 [3] B.C. Berndt, Ramanujan's Notebook, Part I., Springer Verlag (1985) [4] E.A. Karatsuba, Fast evaluation of transcendental functions, Probl. Inf. Transm. Vol. 27, No. 4, pp. 339-360 (1991) [5] E.A. Karatsuba, Fast computation of some special integrals of mathematical physics. Scientific Computing, Validated Numerics, Interval Methods, W. Krämer, J.W. von Gudenberg, eds.; pp. 29-41, (2001) [6] Gourdon, X., Sebah, P; Constants and Records of Computation (http://numbers.computation.free.fr/Constants/constants.html) [7] Shigeru Kondo's website (http://ja0hxv.calico.jp/pai/ecatalan.html) [8] Constants and Records of Computation (http://numbers.computation.free.fr/Constants/constants.html) [9] Large Computations (http://www.numberworld.org/nagisa_runs/computations.html)


References
Victor Adamchik, 33 representations for Catalan's constant (http://www-2.cs.cmu.edu/~adamchik/articles/ catalan/catalan.htm) (undated) Adamchik,, Victor (2002). "A certain series associated with Catalan's constant" (http://www-2.cs.cmu.edu/ ~adamchik/articles/csum.html). Zeitschr. f. Analysis und ihre Anwendungen (ZAA) 21 (3): 110. MR1929434. Simon Plouffe, A few identities (III) with Catalan (http://www.lacim.uqam.ca/~plouffe/IntegerRelations/ identities3a.html), (1993) (Provides over one hundred different identities). Simon Plouffe, A few identities with Catalan constant and Pi^2 (http://www.lacim.uqam.ca/~plouffe/ IntegerRelations/identities3.html), (1999) (Provides a graphical interpretation of the relations) Weisstein, Eric W., " Catalan's Constant (http://mathworld.wolfram.com/CatalansConstant.html)" from MathWorld. Catalan constant: Generalized power series (http://functions.wolfram.com/Constants/Catalan/06/01/) at the Wolfram Functions Site Greg Fee, Catalan's Constant (Ramanujan's Formula) (http://www.gutenberg.org/etext/682) (1996) (Provides the first 300,000 digits of Catalan's constant.). Bradley, David M. (1999). "A class of series acceleration formulae for Catalan's constant". The Ramanujan Journal 3 (2): 159173. doi:10.1023/A:1006945407723. MR1703281. Bradley, David M. (2007). "A class of series acceleration formulae for Catalan's constant". arXiv:0706.0356. Bradley, David M. (2001), Representations of Catalan's constant (http://citeseerx.ist.psu.edu/viewdoc/ summary?doi=10.1.1.26.1879) Carella, N. A. (2011). "Note On the Irrationality of the L-Function Constants L(s, X)". arXiv:1105.2042v1.

Chinese monoid
In mathematics, the Chinese monoid is a monoid generated by a totally ordered alphabet with the relations cba = cab = bca for every a ≤ b ≤ c. It was discovered by Duchamp & Krob (1994) during their classification of monoids with growth similar to that of the plactic monoid, and studied in detail by Cassaigne et al. (2001).

References
Cassaigne, Julien; Espie, Marc; Krob, Daniel; Novelli, Jean-Christophe; Hivert, Florent (2001), "The Chinese monoid" [1], International Journal of Algebra and Computation 11 (3): 301–334, doi:10.1142/S0218196701000425, ISSN 0218-1967, MR1847182 Duchamp, Gérard; Krob, Daniel (1994), "Plactic-growth-like monoids" [2], Words, languages and combinatorics, II (Kyoto, 1992), World Sci. Publ., River Edge, NJ, pp. 124–142, MR1351284

References
[1] http://hal.archives-ouvertes.fr/docs/00/05/79/27/PS/chimo.ps [2] http://www.liafa.jussieu.fr/web9/rapportrech/description_en.php?idrapportrech=487


Combination
In mathematics a combination is a way of selecting several things out of a larger group, where (unlike permutations) order does not matter. In smaller cases it is possible to count the number of combinations. For example given three fruit, say an apple, orange and pear, there are three combinations of two that can be drawn from this set: an apple and a pear; an apple and an orange; or a pear and an orange. More formally a k-combination of a set S is a subset of k distinct elements of S. If the set has n elements the number of k-combinations is equal to the binomial coefficient

which can be written using factorials as

\binom{n}{k} = \frac{n!}{k!\,(n-k)!}

whenever k ≤ n, and which is zero when k > n. The set of all k-combinations of a set S is sometimes denoted by \binom{S}{k}.
Combinations can refer to the combination of n things taken k at a time with or without repetition.[1] In the above example repetitions were not allowed. If however it was possible to have two of any one kind of fruit there would be 3 more combinations: one with two apples, one with two oranges, and one with two pears. With large sets, it becomes necessary to use mathematics to find the number of combinations. For example, a poker hand can be described as a 5-combination (k = 5) of cards from a 52 card deck (n = 52). The 5 cards of the hand are all distinct, and the order of cards in the hand does not matter. There are 2,598,960 such combinations, and the chance of drawing any one hand at random is 1/2,598,960.

Number of k-combinations
The number of k-combinations from a given set S of n elements is often denoted in elementary combinatorics texts by C(n, k), or by a variation such as C_k^n, {}_nC_k, or C_n^k (the latter form is standard in French, Russian, and Polish texts). The same number however occurs in many other mathematical contexts, where it is denoted by \binom{n}{k}; notably it occurs as a coefficient in the binomial formula, hence its name binomial coefficient. One can define \binom{n}{k} for all natural numbers k at once by the relation

(1+X)^n = \sum_{k \ge 0} \binom{n}{k} X^k,

[Figure: 3-element subsets of a 5-element set.]

from which it is clear that

\binom{n}{0} = \binom{n}{n} = 1

and

\binom{n}{k} = 0

for k > n. To see that these coefficients count

k-combinations from S, one can first consider a collection of n distinct variables X_s labeled by the elements s of S, and expand the product \prod_{s \in S} (1 + X_s) over all elements of S: it has 2^n distinct terms corresponding to all the subsets of S, each subset giving the product of the corresponding variables X_s. Now setting all of the X_s equal to the unlabeled variable X, so that the product becomes (1 + X)^n, the term for each k-combination from S becomes X^k, so that the coefficient of that power in the result equals the number of such k-combinations.

Binomial coefficients can be computed explicitly in various ways. To get all of them for the expansions up to (1 + X)^n, one can use (in addition to the basic cases already given) the recursion relation

\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}, \qquad 0 < k < n,

which follows from (1 + X)^n = (1 + X)^{n-1}(1 + X); this leads to the construction of Pascal's triangle. For determining an individual binomial coefficient, it is more practical to use the formula

\binom{n}{k} = \frac{n(n-1)(n-2)\cdots(n-k+1)}{k!}.

The numerator gives the number of k-permutations of n, i.e., of sequences of k distinct elements of S, while the denominator gives the number of such k-permutations that give the same k-combination when the order is ignored. When k exceeds n/2, the above formula contains factors common to the numerator and the denominator, and canceling them out gives the relation

\binom{n}{k} = \binom{n}{n-k}, \qquad 0 \le k \le n.
This expresses a symmetry that is evident from the binomial formula, and can also be understood in terms of k-combinations by taking the complement of such a combination, which is an (n − k)-combination. Finally there is a formula which exhibits this symmetry directly, and has the merit of being easy to remember:

\binom{n}{k} = \frac{n!}{k!\,(n-k)!},

where n! denotes the factorial of n. It is obtained from the previous formula by multiplying denominator and numerator by (n − k)!, so it is certainly inferior as a method of computation to that formula. The last formula can be understood directly, by considering the n! permutations of all the elements of S. Each such permutation gives a k-combination by selecting its first k elements. There are many duplicate selections: any combined permutation of the first k elements among each other, and of the final (n − k) elements among each other, produces the same combination; this explains the division in the formula. From the above formulas follow relations between adjacent numbers in Pascal's triangle in all three directions:

\binom{n}{k} = \binom{n}{k-1}\,\frac{n-k+1}{k} \quad (k > 0), \qquad \binom{n}{k} = \binom{n-1}{k}\,\frac{n}{n-k} \quad (k < n), \qquad \binom{n}{k} = \binom{n-1}{k-1}\,\frac{n}{k} \quad (k > 0).

Together with the basic cases \binom{n}{0} = 1 = \binom{n}{n}, these allow successive computation of respectively all numbers of combinations from the same set (a row in Pascal's triangle), of k-combinations of sets of growing sizes, and of combinations with a complement of fixed size n − k.


Example of counting combinations


As a concrete example, one can compute the number of five-card hands possible from a standard fifty-two card deck as:

\binom{52}{5} = \frac{52 \times 51 \times 50 \times 49 \times 48}{5 \times 4 \times 3 \times 2 \times 1} = 2,598,960.
Alternatively one may use the formula in terms of factorials and cancel the factors in the numerator against parts of the factors in the denominator, after which only multiplication of the remaining factors is required:

Another alternative computation, almost equivalent to the first, is based on writing

which gives

When evaluated in the order 52 ÷ 1 × 51 ÷ 2 × 50 ÷ 3 × 49 ÷ 4 × 48 ÷ 5, this can be computed using only integer arithmetic. The reason that all divisions are without remainder is that the intermediate results they produce are themselves binomial coefficients. Using the symmetric formula in terms of factorials without performing simplifications gives a rather extensive calculation:

Enumerating k-combinations
One can enumerate all k-combinations of a given set S of n elements in some fixed order, which establishes a bijection from an interval of integers with the set of those k-combinations. Assuming S is itself ordered, for instance S = {1,2, ...,n}, there are two natural possibilities for ordering its k-combinations: by comparing their smallest elements first (as in the illustrations above) or by comparing their largest elements first. The latter option has the advantage that adding a new largest element to S will not change the initial part of the enumeration, but just add the new k-combinations of the larger set after the previous ones. Repeating this process, the enumeration can be extended indefinitely with k-combinations of ever larger sets. If moreover the intervals of the integers are taken to start at0, then the k-combination at a given place i in the enumeration can be computed easily from i, and the bijection so obtained is known as the combinatorial number system. It is also known as "rank"/"ranking" and "unranking" in computational mathematics.[2] [3]
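As a sketch of the ranking/unranking idea just described (largest elements compared first, places counted from 0), the Python function below — our own illustration, with names of our choosing — recovers the k-combination at a given place i and checks it against the combinatorial number system's ranking formula.

<syntaxhighlight lang="python">
from math import comb

def kth_combination(i, k):
    """Return the i-th k-combination of nonnegative integers (i counted from 0),
    in the order that compares largest elements first."""
    result = []
    while k > 0:
        # find the largest c with C(c, k) <= i
        c = k - 1
        while comb(c + 1, k) <= i:
            c += 1
        result.append(c)
        i -= comb(c, k)
        k -= 1
    return result[::-1]

# round-trip check: the rank of {c_1 < ... < c_k} is C(c_k, k) + ... + C(c_1, 1)
for i in range(100):
    c = kth_combination(i, 3)
    assert sum(comb(c[j], j + 1) for j in range(3)) == i
</syntaxhighlight>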


Number of combinations with repetition


A k-combination with repetitions, or k-multicombination, or multiset of size k from a set S is given by a sequence of k not necessarily distinct elements of S, where order is not taken into account: two sequences of which one can be obtained from the other by permuting the terms define the same multiset. If S has n elements, the number of such k-multicombinations is also given by a binomial coefficient, namely by

\binom{n+k-1}{k} = \binom{n+k-1}{n-1}

[Figure: Bijection between 3-element multisets with elements from a 5-element set (on the right) and 3-element subsets of a 7-element set (on the left).]

(the case where both n and k are zero is special; the correct value 1 (for the empty 0-multicombination) is given by the left hand side \binom{n+k-1}{k}, but not by the right hand side \binom{n+k-1}{n-1}).

Example of counting multicombinations


For example, if you have ten types of donuts (n = 10) on a menu to choose from and you want three donuts (k = 3), the number of ways to choose can be calculated as

\binom{10+3-1}{3} = \binom{12}{3} = 220.
The analogy with the k-combination case can be stressed by writing the numerator as a rising power:

\binom{n+k-1}{k} = \frac{n(n+1)(n+2)\cdots(n+k-1)}{k!}.
There is an easy way to understand the above result. Label the elements of S with numbers 0, 1, ..., n − 1, and choose a k-combination from the set of numbers {1, 2, ..., n + k − 1} (so that there are n − 1 unchosen numbers). Now change this k-combination into a k-multicombination of S by replacing every (chosen) number x in the k-combination by the element of S labeled by the number of unchosen numbers less than x. This is always a number in the range of the labels, and it is easy to see that every k-multicombination of S is obtained for one choice of a k-combination.
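A brute-force check of the multicombination count, using itertools.combinations_with_replacement (the ranges are our own illustrative choices):

<syntaxhighlight lang="python">
from math import comb
from itertools import combinations_with_replacement

for n in range(1, 7):
    for k in range(0, 6):
        count = sum(1 for _ in combinations_with_replacement(range(n), k))
        assert count == comb(n + k - 1, k)
</syntaxhighlight>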

A concrete example may be helpful. Suppose there are 4 types of fruits (apple, orange, pear, banana) at a grocery store, and you want to buy 12 pieces of fruit. So n = 4 and k = 12. Use label 0 for apples, 1 for oranges, 2 for pears, and 3 for bananas. A selection of 12 fruits can be translated into a selection of 12 distinct numbers in the range 1, ..., 15 by selecting as many consecutive numbers starting from 1 as there are apples in the selection, then skip a number, continue choosing as many consecutive numbers as there are oranges selected, again skip a number, then again for pears, skip one again, and finally choose the remaining numbers (as many as there are bananas selected). For instance for 2 apples, 7 oranges, 0 pears and 3 bananas, the numbers chosen will be 1, 2, 4, 5, 6, 7, 8, 9, 10, 13, 14, 15. To recover the fruits, the numbers 1, 2 (not preceded by any unchosen numbers) are replaced by apples, the numbers 4, 5, ..., 10 (preceded by one unchosen number: 3) by oranges, and the numbers 13, 14, 15 (preceded by three unchosen numbers: 3, 11, and 12) by bananas; there are no chosen numbers preceded by exactly 2 unchosen numbers, and therefore no pears in the selection. The total number of possible selections is

\binom{4+12-1}{12} = \binom{15}{12} = \binom{15}{3} = 455.

Number of k-combinations for all k


The number of k-combinations for all k, 0 ≤ k ≤ n, is the sum of the nth row (counting from 0) of the binomial coefficients. These combinations are enumerated by the 1 digits of the set of base 2 numbers counting from 0 to 2^n − 1, where each digit position is an item from the set of n.

Probability: sampling a random combination


There are various algorithms to pick out a random combination from a given set or list. Rejection sampling is extremely slow for large sample sizes. One way to select a k-combination efficiently from a population of size n is to iterate across each element of the population, and at each step pick that element with a dynamically changing probability of (k − number of elements chosen so far)/(n − number of elements visited so far).

References
[1] Erwin Kreyszig, Advanced Engineering Mathematics, John Wiley & Sons, INC, 1999 [2] http://www.site.uottawa.ca/~lucia/courses/5165-09/GenCombObj.pdf [3] http://www.sagemath.org/doc/reference/sage/combinat/subset.html

External links
C code to generate all combinations of n elements chosen as k (http://compprog.wordpress.com/2007/10/17/generating-combinations-1/) Many Common types of permutation and combination math problems, with detailed solutions (http://mathforum.org/library/drmath/sets/high_perms_combs.html) The Unknown Formula (http://www.murderousmaths.co.uk/books/unknownform.htm) For combinations when choices can be repeated and order does NOT matter (http://www.nitte.ac.in/userfiles/file/Combinations with Repetitions.pdf) Combinations with repetitions (by: Akshatha AG and Smitha B)


Combinatorial class
In combinatorics, a combinatorial class (or simply class) is an equivalence class of sets that have the same counting sequence. Although the elements of these equivalent sets may have very different definitions and semantics, combinatorics is concerned only with the number of elements of a given size. Therefore, knowledge about one set in the class can be applied directly to other sets in the class. For example, the set of triangulations of polygons is combinatorially isomorphic to the set of general rooted plane trees. Although these sets certainly describe different things, they have the same counting sequence, namely the Catalan numbers.

Combinatorial data analysis


Combinatorial data analysis (CDA) is the study of data sets where the arrangement of objects is important. CDA can be used either to determine how well a given combinatorial construct reflects the observed data, or to search for a suitable combinatorial construct that does fit the data.

References
Lawrence J. Hubert (1987). Assignment Methods in Combinatorial Data Analysis. Marcel Dekker. ISBN 978-0824776176. Lawrence J. Hubert, Phipps Arabie, Jacqueline Meulman (2001). Combinatorial Data Analysis: Optimization by Dynamic Programming. SIAM. ISBN 978-0898714784. Michael J Brusco, Stephanie Stahl (2005). Branch-and-bound Applications in Combinatorial Data Analysis. Springer. ISBN 978-0387250373.


Combinatorial explosion
In mathematics a combinatorial explosion describes the effect of functions that grow very rapidly as a result of combinatorial considerations.[1] Examples of such functions include the factorial function and related functions. Pathological examples of combinatorial explosion include functions such as the Ackermann function.

Example in computing
Combinatorial explosion can occur in computing environments in a way analogous to communications and multi-dimensional space. Imagine a simple system with only one variable, a boolean called A. The system has two possible states, A = true or A = false. Adding another boolean variable B will give the system four possible states, A = true and B = true, A = true and B = false, A = false and B = true, A = false and B = false. A system with n booleans has 2^n possible states, while a system of n variables each with Z allowed values (rather than just the 2 (true and false) of booleans) will have Z^n possible states. The possible states can be thought of as the leaf nodes of a tree of height n, where each node has Z children. This rapid increase of leaf nodes can be useful in areas like searching, since many results can be accessed without having to descend very far. It can also be a hindrance when manipulating such structures. Consider a class hierarchy in an object-oriented language. The hierarchy can be thought of as a tree, with different types of object inheriting from their parents. If different classes need to be combined, such as in a comparison (like A < B), then the number of possible combinations which may occur explodes. If each type of comparison needs to be programmed then this soon becomes intractable for even small numbers of classes. Multiple inheritance can solve this, by allowing subclasses to have multiple parents, and thus a few parent classes can be considered rather than every child, without disrupting any existing hierarchy. For example, imagine a hierarchy where different vegetables inherit from their ancestor species. Attempting to compare the tastiness of each vegetable with the others becomes intractable since the hierarchy only contains information about genetics and makes no mention of tastiness. However, instead of having to write comparisons for carrot/carrot, carrot/potato, carrot/sprout, potato/potato, potato/sprout, sprout/sprout, they can all inherit from a separate class of tasty whilst keeping their current ancestor-based hierarchy, then all of the above can be implemented with only a tasty/tasty comparison.

Example in arithmetic
Suppose we take the factorial for n:

Then 1! = 1, 2! = 2, 3! = 6, and 4! = 24. However, we quickly get to extremely large numbers, even for relatively small n. For example, 100! = 9.33262154 × 10^157, a number so large that it cannot be displayed on most calculators.
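A quick sketch (not from the article) of how to gauge such magnitudes without big-integer arithmetic: log10(100!) can be computed from the log-gamma function, giving about 157.97, i.e. 158 decimal digits.

#include <cmath>
#include <cstdio>

int main() {
    double log10fact = std::lgamma(101.0) / std::log(10.0);   // log10(100!) = ln(Gamma(101))/ln(10)
    std::printf("100! is about 10^%.4f, i.e. %d digits\n", log10fact, (int)log10fact + 1);
    return 0;
}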

References
[1] Krippendorff, Klaus. "Combinatorial Explosion" (http://pespmc1.vub.ac.be/ASC/Combin_explo.html). Web Dictionary of Cybernetics and Systems. Principia Cybernetica Web. Retrieved 29 November 2010.


Combinatorial explosion (communication)

Using separate lines of communication, four organizations require six channels

Using an intermediary, only one channel per organization is required

In administration and computing, a combinatorial explosion is the rapidly accelerating increase in lines of communication as organizations are added in a process. (Casually described as "exponential", it is actually strictly only polynomial.) If two organizations need to communicate about a particular topic, it may be easiest to communicate directly in an ad hoc manner: only one channel of communication is required. However, if a third organization is added, three separate channels are required. Adding a fourth organization requires six channels; five, ten; six, fifteen; etc. In general, going on like that, it will take n(n − 1)/2 communication lines for n organizations.

The alternative approach is to realize when this communication will not be a one-off requirement, and produce a generic or intermediate way of passing information. The drawback is that this requires more work for the first pair, since each must convert its internal approach to the common one, rather than the superficially easier approach of just understanding the other.


Combinatorial hierarchy
Combinatorial hierarchy is a mathematical structure of bit-strings generated by an algorithm based on discrimination (exclusive-or between bits). It was originally discovered by A.F. Parker-Rhodes in the 1960s, and is interesting because of physical interpretations that relate it to quantum mechanics.[1] For example, values close to the fine structure constant and the proton-mass gravitational coupling constant appear in the generation of the Hierarchy.[1]

Notes
[1] Bastin, Ted and Kilmister, C.W. Combinatorial Physics. World Scientific, 1995, ISBN 981-02-2212-2

References
A formal development of the combinatorial hierarchy in terms of group theory appears in the appendix to "On the physical interpretation and the mathematical structure of the combinatorial hierarchy," Int. Journ. Theor. Phys. 18, 7 (1979) 445.
Theory of Indistinguishables, A.F. Parker-Rhodes, Reidel, 1981.
Journal of the Western Regional Chapter of the Alternative Natural Philosophy Association (http://www.stanford.edu/~pnoyes)

Combinatorial number system


In mathematics, and in particular in combinatorics, the combinatorial number system of degree k (for some positive integer k), also referred to as combinadics, is a correspondence between natural numbers (taken to include 0) N and k-combinations, represented as strictly decreasing sequences ck > ... > c2 > c1 ≥ 0. Since the latter are strings of numbers, one can view this as a kind of numeral system for representing N, although the main utility is representing a k-combination by N rather than the other way around. Distinct numbers correspond to distinct k-combinations, and produce them in lexicographic order; moreover the numbers less than $\binom{n}{k}$ correspond to all k-combinations of {0, 1, ..., n − 1}. The correspondence does not depend on the size n of the set that the k-combinations are taken from, so it can be interpreted as a map from N to the k-combinations taken from N; in this view the correspondence is a bijection. The number N corresponding to (ck, ..., c2, c1) is given by
$N = \binom{c_k}{k} + \cdots + \binom{c_2}{2} + \binom{c_1}{1}.$

The fact that a unique sequence so corresponds to any number N was observed by D. H. Lehmer.[1] Indeed a greedy algorithm finds the k-combination corresponding to N: take ck maximal with $\binom{c_k}{k} \le N$, then take ck−1 maximal with $\binom{c_{k-1}}{k-1} \le N - \binom{c_k}{k}$, and so forth. The originally used term "combinatorial representation of integers" was shortened to "combinatorial number system" by Knuth,[2] who also gives a much older reference;[3] the term "combinadic" is introduced by James McCaffrey[4] (without reference to previous terminology or work). Unlike the factorial number system, the combinatorial number system of degree k is not a mixed radix system: the part of the number N represented by a "digit" ci is not obtained from it by simply multiplying by a place value. The main application of the combinatorial number system is that it allows rapid computation of the k-combination that is at a given position in the lexicographic ordering, without having to explicitly list the k-combinations preceding it; this allows for instance random generation of k-combinations of a given set. Enumeration of k-combinations has many applications, among which software testing, sampling, quality control, and the analysis of

lottery games. This is also known as "rank" ("ranking" and "unranking"), and is known by that name in most CAS software and in computational mathematics.[5] [6]


Ordering combinations
A k-combination of a set S is a subset of S with k (distinct) elements. The main purpose of the combinatorial number system is to provide a representation, each by a single number, of all possible k-combinations of a set S of n elements. Choosing, for any n, {0, 1, ..., n − 1} as such a set, it can be arranged that the representation of a given k-combination C is independent of the value of n (although n must of course be sufficiently large); in other words considering C as a subset of a larger set by increasing n will not change the number that represents C. Thus for the combinatorial number system one just considers C as a k-combination of the set N of all natural numbers, without explicitly mentioning n. In order to ensure that the numbers representing the k-combinations of {0, 1, ..., n − 1} are less than those representing k-combinations not contained in {0, 1, ..., n − 1}, the k-combinations must be ordered in such a way that their largest elements are compared first. The most natural ordering that has this property is lexicographic ordering of the decreasing sequence of their elements. So comparing the 5-combinations C = {0, 3, 4, 6, 9} and C' = {0, 1, 3, 7, 9}, one has that C comes before C', since they have the same largest part 9, but the next largest part 6 of C is less than the next largest part 7 of C'; the sequences compared lexicographically are (9, 6, 4, 3, 0) and (9, 7, 3, 1, 0). Another way to describe this ordering is to view combinations as describing the k raised bits in the binary representation of a number, so that C = {c1, ..., ck} describes the number $2^{c_1} + 2^{c_2} + \cdots + 2^{c_k}$ (this associates distinct numbers to all finite sets of natural numbers); then comparison of k-combinations can be done by comparing the associated binary numbers. In the example C and C' correspond to the numbers 1001011001 (binary) = 601 (decimal) and 1010001011 (binary) = 651 (decimal), which again shows that C comes before C'. This number is not however the one one wants to represent the k-combination with, since many binary numbers have a number of raised bits different from k; one wants to find the relative position of C in the ordered list of (only) k-combinations.

Place of a combination in the ordering


The number associated in the combinatorial number system of degree k to a k-combination C is the number of k-combinations strictly less than C in the given ordering. This number can be computed from C = {ck, ..., c2, c1} with ck > ... > c2 > c1 as follows. From the definition of the ordering it follows that for each k-combination S strictly less than C, there is a unique index i such that ci is absent from S, while ck, ..., ci+1 are present in S, and no other value larger than ci is. One can therefore group those k-combinations S according to the possible values 1, 2, ..., k of i, and count each group separately. For a given value of i one must include ck, ..., ci+1 in S, and the remaining i elements of S must be chosen from the ci non-negative integers strictly less than ci; moreover any such choice will result in a k-combination S strictly less than C. The number of possible choices is $\binom{c_i}{i}$, which is therefore the number of combinations in group i; the total number of k-combinations strictly less than C then is
$\binom{c_1}{1} + \binom{c_2}{2} + \cdots + \binom{c_k}{k},$
and this is the index (starting from 0) of C in the ordered list of k-combinations. Obviously there is for every N ∈ N exactly one k-combination at index N in the list (supposing k ≥ 1, since the list is then infinite), so the above argument proves that every N can be written in exactly one way as a sum of k binomial coefficients of the given form.


Finding the k-combination for a given number


The given formula allows finding the place in the lexicographic ordering of a given k-combination immediately. The reverse process of finding the k-combination at a given place N requires somewhat more work, but is straightforward nonetheless. By the definition of the lexicographic ordering, two k-combinations that differ in their largest element ck will be ordered according to the comparison of those largest elements, from which it follows that all combinations with a fixed value of their largest element are contiguous in the list. Moreover the smallest combination with ck as largest element corresponds to the number $\binom{c_k}{k}$, and it has ci = i − 1 for all i < k (for this combination all terms in the expression except $\binom{c_k}{k}$ are zero). Therefore ck is the largest number such that $\binom{c_k}{k} \le N$. If k > 1 the remaining elements of the k-combination form the (k − 1)-combination corresponding to the number $N - \binom{c_k}{k}$ in the combinatorial number system of degree k − 1, and can therefore be found by continuing in the same way for $N - \binom{c_k}{k}$ and k − 1 instead of N and k.

Example
Suppose one wants to determine the 5-combination at position 72. The successive values of $\binom{n}{5}$ for n = 4, 5, 6, ... are 0, 1, 6, 21, 56, 126, 252, ..., of which the largest one not exceeding 72 is 56, for n = 8. Therefore c5 = 8, and the remaining elements form the 4-combination at position 72 − 56 = 16. The successive values of $\binom{n}{4}$ for n = 3, 4, 5, ... are 0, 1, 5, 15, 35, ..., of which the largest one not exceeding 16 is 15, for n = 6, so c4 = 6. Continuing similarly to search for a 3-combination at position 16 − 15 = 1 one finds c3 = 3, which uses up the final unit; this establishes $72 = \binom{8}{5} + \binom{6}{4} + \binom{3}{3}$, and the remaining values ci will be the maximal ones with $\binom{c_i}{i} = 0$, namely ci = i − 1. Thus we have found the 5-combination {8, 6, 3, 1, 0}.
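The greedy algorithm described above can be sketched as follows (an illustrative implementation, not from the article); it reproduces the worked example, unrank(72, 5) = {8, 6, 3, 1, 0}.

#include <vector>
#include <cstdint>
#include <iostream>

static uint64_t binom(uint64_t n, uint64_t k) {          // C(n,k); 0 when n < k
    if (k > n) return 0;
    uint64_t r = 1;
    for (uint64_t i = 1; i <= k; ++i) r = r * (n - k + i) / i;
    return r;
}

std::vector<uint64_t> unrank(uint64_t N, unsigned k) {
    std::vector<uint64_t> c;
    for (unsigned i = k; i >= 1; --i) {
        uint64_t ci = i - 1;                             // C(i-1, i) = 0, so this is always valid
        while (binom(ci + 1, i) <= N) ++ci;              // take ci maximal with C(ci, i) <= N
        c.push_back(ci);
        N -= binom(ci, i);                               // continue with the remainder and degree i-1
    }
    return c;
}

int main() {
    for (uint64_t ci : unrank(72, 5)) std::cout << ci << ' ';   // 8 6 3 1 0
    std::cout << '\n';
    return 0;
}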

Applications
One could use the combinatorial number system to list or traverse all k-combinations of a given finite set, but this is a very inefficient way to do that. Indeed, given some k-combination it is much easier to find the next combination in lexicographic ordering directly than to convert a number to a k-combination by the method indicated above. To find the next combination, find the smallest i ≥ 2 for which ci ≥ ci−1 + 2 (taking i = k + 1 if there is no such index); then increase ci−1 by one and set all cj with j < i − 1 to their minimal value j − 1. If the k-combination is represented as a binary value with k bits 1, then the next such value can be computed without any loop using bitwise arithmetic: the following function will advance x to that value or return false:

// find next k-combination
bool next_combination(unsigned long& x) // assume x has form x'01^a10^b in binary
{
  unsigned long u = x & -x; // extract rightmost bit 1; u = 0'00^a10^b
  unsigned long v = u + x;  // set last non-trailing bit 0, and clear to the right; v = x'10^a00^b
  if (v == 0)               // then overflow in v, or x == 0
    return false;           // signal that next k-combination cannot be represented
  x = v + (((v ^ x) / u) >> 2); // v^x = 0'11^a10^b, (v^x)/u = 0'0^b1^{a+2}, and x becomes x'100^b1^a
  return true;              // successful completion
}

This is called Gosper's hack;[7] corresponding assembly code was described as item 175 in HAKMEM.
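As a usage sketch (not part of the original article), the following driver lists all C(5, 3) = 10 three-element subsets of {0, ..., 4} as bit masks, starting from the smallest one and stopping when the advanced value no longer fits in 5 bits; the function body is repeated so the example is self-contained.

#include <cstdio>

bool next_combination(unsigned long& x) {      // Gosper's hack, exactly as in the text above
    unsigned long u = x & -x;
    unsigned long v = u + x;
    if (v == 0) return false;
    x = v + (((v ^ x) / u) >> 2);
    return true;
}

int main() {
    const int n = 5, k = 3;
    unsigned long x = (1UL << k) - 1;          // smallest k-subset: bits 0..k-1 set (00111)
    do {
        for (int i = n - 1; i >= 0; --i)       // print the mask, highest bit first
            std::putchar(((x >> i) & 1) ? '1' : '0');
        std::putchar('\n');
    } while (next_combination(x) && x < (1UL << n));
    return 0;
}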

On the other hand the possibility to directly generate the k-combination at index N has useful applications. Notably, it allows generating a random k-combination of an n-element set using a random integer N with $0 \le N < \binom{n}{k}$, simply by converting that number to the corresponding k-combination. If a computer program needs to maintain a table with information about every k-combination of a given finite set, the computation of the index N associated to a combination will allow the table to be accessed without searching.


References
[1] Applied Combinatorial Mathematics, Ed. E. F. Beckenbach (1964), pp. 27–30.
[2] Knuth, D. E. (2005), "Generating All Combinations and Partitions", The Art of Computer Programming, 4, Fascicle 3, Addison-Wesley, pp. 5–6, ISBN 0-201-85394-9.
[3] Pascal, Ernesto (1887), Giornale di Matematiche, 25, pp. 45–49.
[4] McCaffrey, James (2004), Generating the mth Lexicographical Element of a Mathematical Combination (http://msdn.microsoft.com/en-us/library/aa289166(VS.71).aspx), Microsoft Developer Network.
[5] http://www.site.uottawa.ca/~lucia/courses/5165-09/GenCombObj.pdf
[6] http://www.sagemath.org/doc/reference/sage/combinat/subset.html
[7] Knuth, D. E. (2009), "Bitwise tricks and techniques", The Art of Computer Programming, 4, Fascicle 1, Addison-Wesley, p. 54, ISBN 0-321-58050-8.

Combinatorial principles
In proving results in combinatorics several useful combinatorial rules or combinatorial principles are commonly recognized and used. The rule of sum, rule of product, and inclusion-exclusion principle are often used for enumerative purposes. Bijective proofs are utilized to demonstrate that two sets have the same number of elements. The pigeonhole principle often ascertains the existence of something or is used to determine the minimum or maximum number of something in a discrete context. Many combinatorial identities arise from double counting methods or the method of distinguished element. Generating functions and recurrence relations are powerful tools that can be used to manipulate sequences, and can describe if not resolve many combinatorial situations.

Rule of sum
The rule of sum is an intuitive principle stating that if there are a possible outcomes for an event (or ways to do something) and b possible outcomes for another event (or ways to do another thing), and the two events cannot both occur (or the two things can't both be done), then there are a + b total possible outcomes for the events (or total possible ways to do one of the things). More formally, the sum of the sizes of two disjoint sets is equal to the size of their union.

Rule of product
The rule of product is another intuitive principle stating that if there are a ways to do something and b ways to do another thing, then there are ab ways to do both things.


Inclusion-exclusion principle
The inclusion-exclusion principle relates the size of the union of multiple sets, the size of each set, and the size of each possible intersection of the sets. The smallest example is when there are two sets: the number of elements in the union of A and B is equal to the sum of the number of elements in A and B, minus the number of elements in their intersection. Generally, according to this principle, if A1, ..., An are finite sets, then
$\left|\bigcup_{i=1}^{n} A_i\right| = \sum_{i} |A_i| - \sum_{i<j} |A_i \cap A_j| + \sum_{i<j<k} |A_i \cap A_j \cap A_k| - \cdots + (-1)^{n-1}\,|A_1 \cap \cdots \cap A_n|.$

Inclusion-exclusion illustrated for three sets

Bijective proof
Bijective proofs prove that two sets have the same number of elements by finding a bijective function (one-to-one correspondence) from one set to the other.

Double counting
Double counting is a technique that equates two expressions that count the size of a set in two ways.

Pigeonhole principle
The pigeonhole principle states that if a items are each put into one of b boxes, where a > b, then one of the boxes contains more than one item. Using this one can, for example, demonstrate the existence of some element in a set with some specific properties.

Method of distinguished element


The method of distinguished element singles out a "distinguished element" of a set to prove some result.

Generating function
Generating functions can be thought of as polynomials with infinitely many terms whose coefficients correspond to terms of a sequence. This new representation of the sequence opens up new methods for finding identities and closed forms pertaining to certain sequences. The (ordinary) generating function of a sequence an is
$G(a_n; x) = \sum_{n=0}^{\infty} a_n x^n.$


Recurrence relation
A recurrence relation defines each term of a sequence in terms of the preceding terms. Recurrence relations may lead to previously unknown properties of a sequence, but generally closed-form expressions for the terms of a sequence are more desired.

References
J. H. van Lint and R. M. Wilson (2001), A Course in Combinatorics (Paperback), 2nd edition, Cambridge University Press. ISBN 0521006015

Combinatorics and dynamical systems


The mathematical disciplines of combinatorics and dynamical systems interact in a number of ways. The ergodic theory of dynamical systems has recently been used to prove combinatorial theorems about number theory which has given rise to the field of arithmetic combinatorics. Also dynamical systems theory is heavily involved in the relatively recent field of combinatorics on words. Also combinatorial aspects of dynamical systems are studied. Dynamical systems can be defined on combinatorial objects; see for example graph dynamical system.

References
Baake, Michael; Damanik, David; Putnam, Ian; Solomyak, Boris (2004), Aperiodic Order: Dynamical Systems, Combinatorics, and Operators [1], Banff International Research Station for Mathematical Innovation and Discovery.
Berthé, Valérie; Ferenczi, Sébastien; Zamboni, Luca Q. (2005), "Interactions between dynamics, arithmetics and combinatorics: the good, the bad, and the ugly", Algebraic and topological dynamics, Contemp. Math., 385, Providence, RI: Amer. Math. Soc., pp. 333–364, MR2180244.
Fauvet, F.; Mitschi, C. (2003), From combinatorics to dynamical systems: Proceedings of the Computer Algebra Conference in honor of Jean Thomann held in Strasbourg, March 22–23, 2002, IRMA Lectures in Mathematics and Theoretical Physics, 3, Berlin: Walter de Gruyter & Co., ISBN 3-11-017875-3, MR2049418.
Fogg, N. Pytheas (2002), Substitutions in dynamics, arithmetics and combinatorics, Lecture Notes in Mathematics, 1794, Berlin: Springer-Verlag, doi:10.1007/b13861, ISBN 3-540-44141-7, MR1970385.
Forman, Robin (1998), "Combinatorial vector fields and dynamical systems", Mathematische Zeitschrift 228 (4): 629–681, doi:10.1007/PL00004638, MR1644432.
Kaimanovich, V.; Lodkin, A. (2006), Representation theory, dynamical systems, and asymptotic combinatorics (Papers from the conference held in St. Petersburg, June 8–13, 2004), American Mathematical Society Translations, Series 2, 217, Providence, RI: American Mathematical Society, ISBN 978-0-8218-4208-9, MR2286117.
Latapy, Matthieu (2000), "Generalized integer partitions, tilings of zonotopes and lattices", in Krob, Daniel; Mikhalev, Alexander A., Formal Power Series and Algebraic Combinatorics: 12th International Conference, FPSAC'00, Moscow, Russia, June 2000, Proceedings, Berlin: Springer, pp. 256–267, arXiv:math/0008022, MR1798219.
Lothaire, M. (2005), Applied combinatorics on words, Encyclopedia of Mathematics and its Applications, 105, Cambridge: Cambridge University Press, ISBN 978-0-521-84802-2, MR2165687.
Mortveit, Henning S.; Reidys, Christian M. (2008), An introduction to sequential dynamical systems, Universitext, New York: Springer, ISBN 978-0-387-30654-4, MR2357144.
Nekrashevych, Volodymyr (2008), "Symbolic dynamics and self-similar groups", Holomorphic Dynamics and Renormalization: A Volume in Honour of John Milnor's 75th Birthday, Fields Inst. Commun., 53, Providence, RI:

Amer. Math. Soc., pp. 25–73, MR2477417.
Starke, Jens; Schanz, Michael (1998), "Dynamical system approaches to combinatorial optimization", Handbook of combinatorial optimization, Vol. 2, Boston, MA: Kluwer Acad. Publ., pp. 471–524, MR1665408.


External links
Combinatorics of Iterated Functions: Combinatorial Dynamics & Dynamical Combinatorics [2] Combinatorial dynamics [3] at Scholarpedia

References
[1] http://www.birs.ca/workshops/2004/04w5001/report04w5001.pdf
[2] http://www.tetration.org/Combinatorics/index.html
[3] http://www.scholarpedia.org/article/Combinatorial_dynamics

Combinatorics and physics


Combinatorial physics or physical combinatorics is the area of interaction between physics and combinatorics. "Combinatorial Physics is an emerging area which unites combinatorial and discrete mathematical techniques applied to theoretical physics, especially Quantum Theory."[1] "Physical combinatorics might be defined naively as combinatorics guided by ideas or insights from physics".[2] Combinatorics has always played an important role in quantum field theory and statistical physics.[3] However, combinatorial physics only emerged as a specific field after a seminal work by Alain Connes and Dirk Kreimer,[4] showing that the renormalization of Feynman diagrams can be described by a Hopf algebra. Combinatorial physics can be characterized by the use of algebraic concepts to interpret and solve physical problems involving combinatorics. It gives rise to a particularly harmonious collaboration between mathematicians and physicists. Among the significant physical results of combinatorial physics we may mention the reinterpretation of renormalization as a Riemann–Hilbert problem,[5] the fact that the Slavnov–Taylor identities of gauge theories generate a Hopf ideal,[6] the quantization of fields[7] and strings,[8] and a completely algebraic description of the combinatorics of quantum field theory.[9] An important example of the interplay between combinatorics and physics is the relation between the enumeration of alternating sign matrices and the ice-type model: the corresponding ice-type model is the six-vertex model with domain wall boundary conditions.

References
[1] 2007 International Conference on Combinatorial Physics (http://www.ifj.edu.pl/conf/combphys/index.html)
[2] Physical Combinatorics (http://books.google.co.uk/books?id=ItPQassTBwsC), Masaki Kashiwara, Tetsuji Miwa, Springer, 2000, ISBN 0817641750
[3] David Ruelle (1999). Statistical Mechanics, Rigorous Results. World Scientific. ISBN 978-9810238629.
[4] A. Connes, D. Kreimer, Renormalization in quantum field theory and the Riemann-Hilbert problem I (http://arxiv.org/abs/hep-th/9912092), Commun. Math. Phys. 210 (2000), 249-273
[5] A. Connes, D. Kreimer, Renormalization in quantum field theory and the Riemann-Hilbert problem II (http://arxiv.org/abs/hep-th/0003188), Commun. Math. Phys. 216 (2001), 215-241
[6] W. D. van Suijlekom, Renormalization of gauge fields: A Hopf algebra approach (http://arxiv.org/abs/hep-th/0610137), Commun. Math. Phys. 276 (2007), 773-798
[7] C. Brouder, B. Fauser, A. Frabetti, R. Oeckl, Quantum field theory and Hopf algebra cohomology (http://arxiv.org/abs/hep-th/0311253), J. Phys. A: Math. Gen. 37 (2004), 5895-5927
[8] T. Asakawa, M. Mori, S. Watamura, Hopf Algebra Symmetry and String Theory (http://arxiv.org/abs/0805.2203), Prog. Theor. Phys. 120 (2008), 659-689
[9] C. Brouder, Quantum field theory meets Hopf algebra (http://arxiv.org/abs/hep-th/0611153), Math. Nachr. 282 (2009), 1664-1690


Further reading
Some Open Problems in Combinatorial Physics (http://arxiv.org/abs/0901.2612), G. Duchamp, H. Cheballah One-parameter groups and combinatorial physics (http://arxiv.org/abs/quant-ph/0401126), G. Duchamp, K.A. Penson, A.I. Solomon, A.Horzela, P.Blasiak Combinatorial Physics, Normal Order and Model Feynman Graphs (http://arxiv.org/abs/quant-ph/0310174), A.I. Solomon, P. Blasiak, G. Duchamp, A. Horzela, K.A. Penson Hopf Algebras in General and in Combinatorial Physics: a practical introduction (http://arxiv.org/abs/0802. 0249), G. Duchamp, P. Blasiak, A. Horzela, K.A. Penson, A.I. Solomon Discrete and Combinatorial Physics (http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-4526.pdf) Bit-String Physics: a Novel "Theory of Everything" (http://www.slac.stanford.edu/pubs/slacpubs/6500/ slac-pub-6509.pdf), H. Pierre Noyes Combinatorial Physics (http://books.google.co.uk/books?id=-maWHQAACAAJ), Ted Bastin, Clive W. Kilmister, World Scientific, 1995, ISBN 9810222122 Physical Combinatorics and Quasiparticles (http://arxiv.org/abs/0903.0510), Giovanni Feverati, Paul A. Pearce, Nicholas S. Witte Physical Combinatorics of Non-Unitary Minimal Models (http://citeseerx.ist.psu.edu/viewdoc/ download?doi=10.1.1.46.4129&rep=rep1&type=pdf), Hannah Fitzgerald Paths, Crystals and Fermionic Formulae (http://arxiv.org/abs/math.QA/0102113), G.Hatayama, A.Kuniba, M.Okado, T.Takagi, Z.Tsuboi On powers of Stirling matrices (http://arxiv.org/abs/0812.4047), Istvan Mezo "On cluster expansions in graph theory and physics", N BIGGS - The Quarterly Journal of Mathematics, 1978 Oxford Univ Press Enumeration Of Rational Curves Via Torus Actions (http://arxiv.org/abs/hep-th/9405035), Maxim Kontsevich, 1995 Non-commutative Calculus and Discrete Physics (http://arxiv.org/abs/quant-ph/0303058), Louis H. Kauffman, February 1, 2008 Sequential cavity method for computing free energy and surface pressure (http://arxiv.org/abs/0807.1551), David Gamarnik, Dmitriy Katz, July 9, 2008

Combinatorics and statistical physics


"Graph Theory and Statistical Physics", J.W. Essam, Discrete Mathematics, 1, 83-112 (1971). Combinatorics In Statistical Physics (http://webserv.zib.de/Publications/Reports/SC-91-19.pdf) Hard Constraints and the Bethe Lattice: Adventures at the Interface of Combinatorics and Statistical Physics (http://arxiv.org/abs/math.CO/0304468), Graham Brightwell, Peter Winkler Graphs, Morphisms, and Statistical Physics: DIMACS Workshop Graphs, Morphisms and Statistical Physics, March 19-21, 2001, DIMACS Center (http://books.google.co.uk/books?id=4WrfAtDiJqsC), Jaroslav Neetil, Peter Winkler, AMS Bookstore, 2001, ISBN 0821835513


Conference proceedings
Proc. of Combinatorics and Physics, Los Alamos, August 1998
Physics and Combinatorics 1999: Proceedings of the Nagoya 1999 International Workshop (http://books.google.co.uk/books?id=SPa-HAAACAAJ), Anatol N. Kirillov, Akihiro Tsuchiya, Hiroshi Umemura, World Scientific, 2001, ISBN 9810245785
Physics and combinatorics 2000: proceedings of the Nagoya 2000 International Workshop (http://books.google.co.uk/books?id=D7NHaGFLPKQC), Anatol N. Kirillov, Nadejda Liskova, World Scientific, 2001, ISBN 9810246420
Asymptotic combinatorics with applications to mathematical physics: a European mathematical summer school held at the Euler Institute, St. Petersburg, Russia, July 9-20, 2001 (http://books.google.co.uk/books?id=wTF-BxM5WbcC), Anatoli Moiseevich Vershik, Springer, 2002, ISBN 3540403124
Counting Complexity: An International Workshop On Statistical Mechanics And Combinatorics (http://www.iop.org/EJ/toc/1742-6596/42/1), 10–15 July 2005, Dunk Island, Queensland, Australia
Proceedings of the Conference on Combinatorics and Physics, MPIM Bonn, March 19-23, 2007

Composition (number theory)


In mathematics, a composition of an integer n is a way of writing n as the sum of a sequence of (strictly) positive integers. Two sequences that differ in the order of their terms define different compositions of their sum, while they are considered to define the same partition of that number. Any integer has finitely many distinct compositions. Negative numbers do not have any compositions, but 0 has one composition, the empty sequence. Any positive integer n has 2^(n−1) distinct compositions. This is a power of two, because every composition of n matches a binary number with n − 1 digits.

Bijection between 3 bit binary numbers and compositions of 4

A weak composition of an integer n is similar to a composition of n, but allowing terms of the sequence to be zero: it is a way of writing n as the sum of a sequence of non-negative integers. As a consequence any positive integer admits infinitely many weak compositions (if their length is not bounded). Adding a number of terms 0 to the end of a weak composition is usually not considered to define a different weak composition, in other words weak compositions are assumed to be implicitly extended indefinitely by terms0.


Examples
The sixteen compositions of 5 are: 5 4+1 3+2 3+1+1 2+3 2+2+1 2+1+2 2+1+1+1 1+4 1+3+1 1+2+2 1+2+1+1 1+1+3 1+1+2+1 1+1+1+2 1+1+1+1+1.

Compare this with the seven partitions of 5: 5 4+1 3+2 3+1+1 2+2+1 2+1+1+1 1+1+1+1+1.

It is possible to put constraints on the parts of the compositions. For example the five compositions of 5 into distinct terms are: 5 4+1 3+2 2+3 1+4.

The 32 compositions of 6: 1+1+1+1+1+1, 2+1+1+1+1, 1+2+1+1+1, ..., 1+5, 6.

Compare this with the three partitions of 5 into distinct terms: 5 4+1 3+2.


The 11 partitions of 6: 1+1+1+1+1+1, 2+1+1+1+1, 3+1+1+1, ..., 3+3, 6.

Number of compositions
Conventionally the empty composition is counted as the sole composition of 0, and there are no compositions of negative integers. There are 2^(n−1) compositions of n ≥ 1; here is a proof: placing either a plus sign or a comma in each of the n − 1 boxes of the array
1 □ 1 □ 1 □ ... □ 1 □ 1    (n copies of 1)
produces a unique composition of n. Conversely, every composition of n determines an assignment of pluses and commas. Since there are n − 1 binary choices, the result follows. The same argument shows that the number of compositions of n into exactly k parts is given by the binomial coefficient $\binom{n-1}{k-1}$. Note that by summing over all possible numbers of parts we recover 2^(n−1) as the total number of compositions of n:
$\sum_{k=1}^{n} \binom{n-1}{k-1} = 2^{n-1}.$

For weak compositions, the number is $\binom{n+k-1}{k-1}$, since each k-composition of n + k corresponds to a weak one of n by the rule a + b + ... + c = n + k → (a − 1) + (b − 1) + ... + (c − 1) = n.
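A small sketch of the plus-or-comma argument in code (illustrative, not from the article): each of the 2^(n−1) bit patterns on the n − 1 gaps between n copies of 1 (bit set = "+", keeping the two 1s in the same part; bit clear = ",", starting a new part) yields one composition of n. For n = 5 this prints the sixteen compositions listed above.

#include <cstdio>

int main() {
    const int n = 5;                                        // example value (assumption)
    for (unsigned mask = 0; mask < (1u << (n - 1)); ++mask) {
        int part = 1;
        for (int gap = 0; gap < n - 1; ++gap) {
            if (mask & (1u << gap)) ++part;                 // "+" : extend the current part
            else { std::printf("%d+", part); part = 1; }    // "," : close the part, start a new one
        }
        std::printf("%d\n", part);                          // the last part
    }
    return 0;
}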


References
"Combinatorics of Compositions and Words", Silvia Heubach, Toufik Mansour, CRC Press, 2009, ISBN 9781420072679.

External links
Partition and composition calculator [1]

References
[1] http://www.btinternet.com/~se16/js/partitions.htm

Constraint counting
In mathematics, constraint counting is a crude but often useful way of counting the number of free functions needed to specify a solution to a partial differential equation.

Einstein strength
Everyone knows that Albert Einstein said that a physical theory should be as simple as possible, but no simpler. But not everyone knows that he had a quantitative idea in mind. Consider a second order partial differential equation in three variables, such as the two-dimensional wave equation
$u_{tt} = u_{xx} + u_{yy}.$

It is often profitable to think of such an equation as a rewrite rule allowing us to rewrite arbitrary partial derivatives of the function using fewer partials than would be needed for an arbitrary function. For example, if u satisfies the wave equation, we can rewrite
$u_{ttx} = u_{xtt} = u_{xxx} + u_{xyy},$
where in the first equality we appealed to the fact that partial derivatives commute (and in the second we applied the wave equation). Einstein asked: how much redundancy can we eliminate in this fashion, for a given partial differential equation?

Linear equations
To answer this in the important special case of a linear partial differential equation, Einstein asked: how many of the partial derivatives of a solution can be linearly independent? It is convenient to record his answer using an ordinary generating function
$f(z) = \sum_{k \ge 0} s_k z^k,$
where $s_k$ is a natural number counting the number of linearly independent partial derivatives (of order k) of an arbitrary function in the solution space of the equation in question. Einstein observed that whenever a function satisfies some partial differential equation, we can use the corresponding rewrite rule to eliminate some of them, because further mixed partials have necessarily become linearly dependent. Specifically, the power series counting the variety of arbitrary functions of three variables (no constraints) is
$f(z) = \sum_{k \ge 0} \binom{k+2}{2} z^k = \frac{1}{(1-z)^3},$
but the power series counting those in the solution space of some second order p.d.e. is
$f(z) = \frac{1-z^2}{(1-z)^3},$


which records that we can eliminate one second order partial ($u_{tt}$), three third order partials ($u_{ttt}, u_{ttx}, u_{tty}$), and so forth. More generally, the o.g.f. for an arbitrary function of n variables is
$f(z) = \sum_{k \ge 0} \binom{k+n-1}{n-1} z^k = \frac{1}{(1-z)^n},$
where the coefficients of the infinite power series of the generating function are constructed using an appropriate infinite sequence of binomial coefficients, and the power series for a function required to satisfy a linear m-th order equation is
$f(z) = \frac{1-z^m}{(1-z)^n}.$
Next,
$\frac{1-z^2}{(1-z)^3} = \frac{1+z}{(1-z)^2} = (1+z)\sum_{k \ge 0}(k+1)z^k,$

which can be interpreted to predict that a solution to a second order linear p.d.e. in three variables is expressible by two freely chosen functions of two variables, one of which is used immediately, and the second, only after taking a first derivative, in order to express the solution.
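A numerical sketch of this bookkeeping (under the generating-function reconstruction above; the helper name is illustrative): expanding (1 − z^m)/(1 − z)^n and printing the coefficients s_k for the two-dimensional wave equation (n = 3, m = 2) gives 1, 3, 5, 7, ..., i.e. 2k + 1 = (k + 1) + k, matching the prediction of two free functions of two variables, one of them used only after a first derivative.

#include <cstdio>

// coefficient of z^k in 1/(1-z)^n, namely C(k+n-1, n-1)
long long coeffInvPow(int k, int n) {
    long long r = 1;
    for (int i = 1; i <= n - 1; ++i) r = r * (k + i) / i;   // exact integer arithmetic
    return r;
}

int main() {
    const int n = 3, m = 2;                      // three variables, second order equation
    for (int k = 0; k <= 8; ++k) {
        long long s = coeffInvPow(k, n);
        if (k >= m) s -= coeffInvPow(k - m, n);  // subtract the contribution of the z^m term
        std::printf("s_%d = %lld\n", k, s);
    }
    return 0;
}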

General solution of initial value problem


To verify this prediction, recall the solution of the initial value problem

Applying the Laplace transform

gives

Applying the Fourier transform or

to the two spatial variables gives

Applying the inverse Laplace transform gives

Applying the inverse Fourier transform gives

where

Here, p, q are arbitrary (sufficiently smooth) functions of two variables, so (due to their modest time dependence) the integrals P, Q also count as "freely chosen" functions of two variables; as promised, one of them is differentiated once before adding to the other to express the general solution of the initial value problem for the two dimensional

wave equation.


Quasilinear equations
In the case of a nonlinear equation, it will only rarely be possible to obtain the general solution in closed form. However, if the equation is quasilinear (linear in the highest order derivatives), then we can still obtain approximate information similar to the above: specifying a member of the solution space will be "modulo nonlinear quibbles" equivalent to specifying a certain number of functions in a smaller number of variables. The number of these functions is the Einstein strength of the p.d.e. In the simple example above, the strength is two, although in this case we were able to obtain more precise information.

References
Siklos, S. T. C. (1996). "Counting solutions of Einstein's equation". Class. Quant. Grav. 13 (7): 1931–1948. doi:10.1088/0264-9381/13/7/021. Application of constraint counting to Riemannian geometry and to general relativity.

Corners theorem
In mathematics, the corners theorem is an important result, proved by Miklós Ajtai and Endre Szemerédi, of a statement in arithmetic combinatorics. It states that for every ε > 0 there exists N such that given at least εN² points in the N×N grid {1, ..., N} × {1, ..., N}, there exists a corner, i.e., three points of the form (x, y), (x + h, y), and (x, y + h) with h ≠ 0. Later Solymosi gave a simpler proof, based on the triangle removal lemma. The corners theorem implies Roth's theorem.

References
M. Ajtai, E. Szemerédi: Sets of lattice points that form no squares, Studia Sci. Math. Hungar., 9 (1974), 9–11.
J. Solymosi: Note on a generalization of Roth's theorem, Algorithms Combin., 25, 2003, Springer, Berlin, 825–827.

External link
Proof of the corners theorem [1] on polymath.

References
[1] http://michaelnielsen.org/polymath1/index.php?title=Ajtai-Szemer%C3%A9di%27s_proof_of_the_corners_theorem


Coupon collector's problem (generating function approach)


The coupon collector's problem can be solved in several different ways. The generating function approach is a combinatorial technique that allows one to obtain precise results. We introduce the probability generating function (PGF)
$G(z) = \sum_{q \ge 0} P(T = q)\, z^q,$
where $P(T = q)$ is the probability that we take q steps to collect the n coupons, and the expectation is given by
$E[T] = G'(1).$

We can calculate $G(z)$ explicitly. We have
$G(z) = \prod_{q=0}^{n-1} \frac{\frac{n-q}{n}\, z}{1 - \frac{q}{n} z}.$
To see what this means, note that
$\frac{1}{1 - pz} = \sum_{t \ge 0} p^t z^t,$
so that this is the PGF of an event that has probability p occurring zero or more times, with the exponent of z counting the number of times. We split the sequence of coupons into segments. A new segment begins every time a new coupon is retrieved for the first time. The PGF is the product of the PGFs of the individual segments. Applying this to $G(z)$, we see that it represents the following sequence of events:
retrieve the first coupon (no restrictions at this time)
retrieve the first coupon some number of times
retrieve the second coupon (probability (n − 1)/n)
retrieve a mix of the first and second coupons some number of times
retrieve the third coupon (probability (n − 2)/n)
retrieve a mix of the first, second, and third coupons some number of times
retrieve the fourth coupon (probability (n − 3)/n)
and so on, until we
retrieve the last coupon (probability 1/n).
In the following, $H_n = \sum_{k=1}^{n} 1/k$ and $H_n^{(2)} = \sum_{k=1}^{n} 1/k^2$ denote harmonic numbers.

The function $G(z)$ is first simplified before deriving the expectation. First:
$G(z) = \frac{n!}{n^n} \cdot \frac{z^n}{\prod_{q=1}^{n-1}\left(1 - \frac{qz}{n}\right)}.$

Use is made of the fact that
$\frac{d}{dz}\log G(z) = \frac{G'(z)}{G(z)}$
to obtain the derivative of $G(z)$. This yields
$\frac{G'(z)}{G(z)} = \frac{n}{z} + \sum_{q=1}^{n-1} \frac{q/n}{1 - qz/n}$
or
$G'(1) = G(1)\left(n + \sum_{q=1}^{n-1} \frac{q}{n-q}\right) = n + \sum_{q=1}^{n-1} \frac{q}{n-q}.$


Finally, some simplification:
$\sum_{q=1}^{n-1} \frac{q}{n-q} = \sum_{j=1}^{n-1} \frac{n-j}{j} = n H_{n-1} - (n-1) = n H_n - n,$
so that
$E[T] = G'(1) = n H_n.$

The PGF $G(z)$ makes it possible to obtain an exact value for the variance. Start with
$\mathrm{Var}[T] = E[T(T-1)] + E[T] - E[T]^2,$
which consists entirely of factorial moments that can be calculated from the PGF. We already have the value of $E[T] = G'(1) = n H_n$. For the remainder, use
$E[T(T-1)] = G''(1).$
The derivative is
$G''(z) = G(z)\left[\left(\frac{n}{z} + \sum_{q=1}^{n-1}\frac{q/n}{1-qz/n}\right)^{2} - \frac{n}{z^2} + \sum_{q=1}^{n-1}\frac{(q/n)^2}{(1-qz/n)^2}\right].$
Evaluation at $z = 1$ yields
$G''(1) = n^2 H_n^2 - n + \sum_{q=1}^{n-1}\frac{q^2}{(n-q)^2} = n^2 H_n^2 + n^2 H_n^{(2)} - 2 n H_n.$
The conclusion is that
$\mathrm{Var}[T] = G''(1) + G'(1) - G'(1)^2 = n^2 H_n^{(2)} - n H_n.$
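A cross-check of these two closed forms (a sketch, not from the article) against a Monte Carlo simulation of the coupon collector process; the simulated mean and variance should be close to n·H_n and n²·H_n^(2) − n·H_n.

#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int n = 10, trials = 200000;           // example parameters (assumptions)
    double H = 0, H2 = 0;
    for (int k = 1; k <= n; ++k) { H += 1.0 / k; H2 += 1.0 / (double(k) * k); }
    std::printf("exact:      E[T] = %.4f   Var[T] = %.4f\n", n * H, double(n) * n * H2 - n * H);

    std::mt19937 gen(12345);
    std::uniform_int_distribution<int> coupon(0, n - 1);
    double sum = 0, sumsq = 0;
    for (int t = 0; t < trials; ++t) {
        std::vector<bool> seen(n, false);
        int distinct = 0; long steps = 0;
        while (distinct < n) {                    // draw coupons until all n have appeared
            ++steps;
            int c = coupon(gen);
            if (!seen[c]) { seen[c] = true; ++distinct; }
        }
        sum += steps; sumsq += double(steps) * steps;
    }
    double mean = sum / trials;
    std::printf("simulation: E[T] = %.4f   Var[T] = %.4f\n", mean, sumsq / trials - mean * mean);
    return 0;
}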


Covering problem
In combinatorics and computer science, covering problems are computational problems that ask whether a certain combinatorial structure 'covers' another, or how large the structure has to be to do that. Covering problems are minimization problems and usually linear programs, whose dual problems are called packing problems. The most prominent examples of covering problems are the set cover problem, which is equivalent to the hitting set problem, and its special cases, the vertex cover problem and the edge cover problem.

General LP formulation
In the context of linear programming, one can think of any linear program as a covering problem if the coefficients in the constraint matrix, the objective function, and the right-hand side are nonnegative.[1] More precisely, consider the following general integer linear program:
minimize $\sum_{j=1}^{n} c_j x_j$
subject to $\sum_{j=1}^{n} a_{ij} x_j \ge b_i$ for $i = 1, \ldots, m$, and $x_j \ge 0$, $x_j$ integer, for $j = 1, \ldots, n$.
Such an integer linear program is called a covering problem if $a_{ij}, b_i, c_j \ge 0$ for all $i$ and $j$.
Intuition: Assume having $n$ types of object, where each object of type $j$ has an associated cost of $c_j$. The number $x_j$ indicates how many objects of type $j$ we buy. If the constraints $Ax \ge b$ are satisfied, it is said that $x$ is a covering (the structures that are covered depend on the combinatorial context). Finally, an optimal solution to the above integer linear program is a covering of minimal cost.
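For instance (a standard special case, not taken from this article), the vertex cover problem on a graph G = (V, E) fits this template with one variable per vertex and one constraint per edge:

minimize $\sum_{v \in V} x_v$
subject to $x_u + x_v \ge 1$ for every edge $\{u, v\} \in E$, with $x_v \ge 0$ integer.

Here every $a_{ij}$ is 0 or 1 and every $b_i = c_j = 1$, so all coefficients are nonnegative as required; a feasible $x$ picks at least one endpoint of each edge, i.e. a vertex cover.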

Other uses
For Petri nets, for example, the covering problem is defined as the question if for a given marking, there exists a run of the net, such that some larger (or equal) marking can be reached. Larger means here that all components are at least as large as the ones of the given marking and at least one is properly larger.

Notes
[1] Vazirani (2001, p.112)

References
Vazirani, Vijay V. (2001). Approximation Algorithms. Springer-Verlag. ISBN3-540-65367-8.


Cycle index
In mathematics, and in particular in the field of combinatorics, cycle indices are used in combinatorial enumeration when symmetries are to be taken into account. This is particularly important in species theory. Each permutation π of a finite set of objects partitions that set into cycles; the cycle index monomial of π is a monomial in variables a1, a2, ... that describes the type of this partition (the cycle type of π): the exponent of ai is the number of cycles of π of size i. The cycle index polynomial of a permutation group is the average of the cycle index monomials of its elements. The terms cycle index and cycle indicator are also used, both for the cycle index monomial of a permutation and for the cycle index polynomial of a group. Knowing the cycle index polynomial of a permutation group, one can enumerate equivalence classes of objects that arise when the group acts on a set of slots being filled with objects described by a generating function. This is the most common application and it uses the Pólya enumeration theorem.

Definition
The cycle index of a permutation group G is the average of
$a_1^{j_1(g)} a_2^{j_2(g)} \cdots a_n^{j_n(g)}$
over all permutations g of the group, where jk(g) is the number of cycles of length k in the disjoint cycle decomposition of g. More formally, let G be a permutation group of order m and degree n. Every permutation g in G has a unique decomposition into disjoint cycles, say c1 c2 c3 .... Let the length of a cycle c be denoted by |c|. Now let jk(g) be the number of cycles of g of length k, where
$0 \le j_k(g) \le \lfloor n/k \rfloor \quad\text{and}\quad \sum_{k=1}^{n} k\, j_k(g) = n.$
We associate to g the monomial
$\prod_{k=1}^{n} a_k^{\,j_k(g)}$
in the variables a1, a2, ..., an. Then the cycle index Z(G) of G is given by
$Z(G) = \frac{1}{|G|} \sum_{g \in G} \prod_{k=1}^{n} a_k^{\,j_k(g)}.$

The question of what the cycle structure of a random permutation looks like is an important question in the analysis of algorithms. An overview of the most important results may be found at random permutation statistics.


Examples
Basic examples of disjoint cycle decompositions may be found here. The cyclic group C3 = {e, (1 2 3), (1 3 2)} consists of the identity and two 3-cycles. Thus its cycle index is
$Z(C_3) = \frac{1}{3}\left(a_1^3 + 2 a_3\right).$

The symmetric group S3 has the elements e, (1 2), (1 3), (2 3), (1 2 3), (1 3 2), and its cycle index is
$Z(S_3) = \frac{1}{6}\left(a_1^3 + 3 a_1 a_2 + 2 a_3\right).$

The cyclic group C6 contains the six permutations
[1 2 3 4 5 6] = (1)(2)(3)(4)(5)(6)
[2 3 4 5 6 1] = (1 2 3 4 5 6)
[3 4 5 6 1 2] = (1 3 5)(2 4 6)
[4 5 6 1 2 3] = (1 4)(2 5)(3 6)
[5 6 1 2 3 4] = (1 5 3)(2 6 4)
[6 1 2 3 4 5] = (1 6 5 4 3 2)

and its cycle index is
$Z(C_6) = \frac{1}{6}\left(a_1^6 + a_2^3 + 2 a_3^2 + 2 a_6\right).$

Case study: edge permutation group of graphs on three vertices


Consider graphs on three vertices. Every permutation in the group S3 of vertex permutations induces an edge permutation and we want to compute the cycle index of the latter group. These are the permutations:
The identity. No vertices are permuted, and no edges; the contribution is $a_1^3$.
Three reflections in an axis passing through a vertex and the midpoint of the opposite edge. These fix one edge (the one not incident on the vertex) and exchange the remaining two; the contribution is $a_1 a_2$.
Two rotations, one clockwise, the other counterclockwise. These create a cycle of three edges; the contribution is $a_3$.
The cycle index of the group G of edge permutations induced by vertex permutations from S3 is
$Z(G) = \frac{1}{6}\left(a_1^3 + 3 a_1 a_2 + 2 a_3\right).$

It happens that K3 is its own dual and hence the edge permutation group induced by the vertex permutation group is the same as the vertex permutation group, namely S3, and the cycle index is Z(S3). This is not the case for graphs on more than three vertices, where the vertex permutation group has degree n and the edge permutation group has degree n(n − 1)/2. For n > 3 we have n(n − 1)/2 > n. We will see an example in the next section.


Case study: edge permutation group of graphs on four vertices


We compute the cycle index of the edge permutation group for graphs on four vertices. The process is entirely analogous to the three-vertex case. These are the vertex permutations and the edge permutations that they induce:
The identity. This permutation maps all vertices (and hence, edges) to themselves and the contribution is $a_1^6$.
Six permutations that exchange two vertices. These permutations preserve the edge that connects the two vertices as well as the edge that connects the two vertices not exchanged. The remaining edges form two two-cycles and the contribution is $a_1^2 a_2^2$.
Eight permutations that fix one vertex and produce a three-cycle for the three vertices not fixed. These permutations create two three-cycles of edges, one containing those not incident on the vertex, and another one containing those incident on the vertex; the contribution is $a_3^2$.
Three permutations that exchange two vertex pairs at the same time. These permutations preserve the two edges that connect the two pairs. The remaining edges form two two-cycles and the contribution is $a_1^2 a_2^2$.
Six permutations that rotate the vertices along a four-cycle. These permutations create a four-cycle of edges (those that lie on the cycle) and exchange the remaining two edges; the contribution is $a_2 a_4$.
Note that we may visualize the types of permutations as symmetries of a regular tetrahedron. This yields the following description of the permutation types.
The identity.
Reflection in the plane that contains one edge and the midpoint of the edge opposing it.
Rotation by 120 degrees about the axis passing through a vertex and the midpoint of the opposite face.
Rotation by 180 degrees about the axis connecting the midpoints of two opposite edges.
Six rotoreflections by 90 degrees.

We have computed the cycle index of the edge permutation group G of graphs on four vertices and it is
$Z(G) = \frac{1}{24}\left(a_1^6 + 9 a_1^2 a_2^2 + 8 a_3^2 + 6 a_2 a_4\right).$

Case study: face permutations of a cube


Consider an ordinary cube in three-space and its automorphisms under rotations, which form a group, call it C. It permutes the six faces of the cube. (We could also consider edge permutations or vertex permutations.) There are twenty-four automorphisms. We will classify them all and compute the cycle index of C.
The identity. There is one such permutation and its contribution is $a_1^6$.
Six 90-degree face rotations. We rotate about the axis passing through the centers of the face and the face opposing it. This will fix the face and the face opposing it and create a four-cycle of the faces parallel to the axis of rotation. The contribution is $a_1^2 a_4$.
Three 180-degree face rotations.

Cube with colored faces

We rotate about the same axis as in the previous case, but now there is no four-cycle of the faces parallel to the axis, but rather two two-cycles. The contribution is $a_1^2 a_2^2$.
Eight 120-degree vertex rotations. This time we rotate about the axis passing through two opposite vertices (the endpoints of a main diagonal). This creates two three-cycles of faces (the faces incident on the same vertex form a cycle). The contribution is $a_3^2$.
Six 180-degree edge rotations. These edge rotations rotate about the axis that passes through the midpoints of opposite edges not incident on the same face and parallel to each other, and exchange the two faces that are incident on the first edge, the two faces incident on the second edge, and the two faces that share two vertices but no edge with the two edges, i.e. there are three two-cycles and the contribution is $a_2^3$.
The conclusion is that the cycle index of the group C is
$Z(C) = \frac{1}{24}\left(a_1^6 + 6 a_1^2 a_4 + 3 a_1^2 a_2^2 + 8 a_3^2 + 6 a_2^3\right).$

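As a usage sketch (an application of the Pólya enumeration theorem, not spelled out in this section): substituting a_i = k for every variable in Z(C) counts the colorings of the cube's faces with k colors up to rotation. The function name below is illustrative.

#include <cstdio>

long long cubeFaceColorings(long long k) {
    long long k2 = k * k, k3 = k2 * k, k4 = k3 * k, k6 = k3 * k3;
    // a1^6 + 6 a1^2 a4 + 3 a1^2 a2^2 + 8 a3^2 + 6 a2^3, with each a_i replaced by k
    return (k6 + 6 * k3 + 3 * k4 + 8 * k2 + 6 * k3) / 24;
}

int main() {
    for (int k = 1; k <= 4; ++k)
        std::printf("k = %d: %lld colorings\n", k, cubeFaceColorings(k));   // 1, 10, 57, 240
    return 0;
}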

Cycle indices of some groups


Identity group En
This group contains one permutation that fixes every element, so its cycle index is
$Z(E_n) = a_1^n.$

Cyclic group Cn
The cyclic group Cn is the group of rotations of n elements around a circle. Its cycle index is
$Z(C_n) = \frac{1}{n} \sum_{d \mid n} \varphi(d)\, a_d^{\,n/d},$
where φ is Euler's totient function.

This formula can be verified as follows, where we use the fact that the cyclic group Cn is isomorphic to Z/nZ, i.e. to the group generated by a single rotation g, with composition being the group operation. It is thus readily apparent that the order of the element g^k is n/gcd(k, n). The possible values of the order are exactly the divisors d of n. But the number of solutions k in the interval 0 ≤ k < n to gcd(k, n) = n/d is φ(d) when d divides n and 0 otherwise, so that the number of elements of each order d is φ(d), which gives the formula from above (where we have taken into account that a permutation of order d splits the n elements into n/d cycles of length d, each cycle being an orbit of the corresponding rotation).

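A brief usage sketch (a standard Pólya-style substitution, not from this section): setting a_d = k in Z(C_n) counts k-colored necklaces of n beads up to rotation. The helper names below are illustrative.

#include <cstdio>
#include <cstdint>

uint64_t phi(uint64_t m) {                         // Euler's totient function
    uint64_t result = m;
    for (uint64_t p = 2; p * p <= m; ++p)
        if (m % p == 0) { while (m % p == 0) m /= p; result -= result / p; }
    if (m > 1) result -= result / m;
    return result;
}

uint64_t ipow(uint64_t b, uint64_t e) { uint64_t r = 1; while (e--) r *= b; return r; }

uint64_t necklaces(uint64_t n, uint64_t k) {       // (1/n) * sum over d|n of phi(d) * k^(n/d)
    uint64_t total = 0;
    for (uint64_t d = 1; d <= n; ++d)
        if (n % d == 0) total += phi(d) * ipow(k, n / d);
    return total / n;
}

int main() {
    std::printf("%llu\n", (unsigned long long)necklaces(6, 2));   // 14 binary necklaces of length 6
    return 0;
}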

Dihedral group Dn
The dihedral group Dn is like the cyclic group, but also includes reflections. Its cycle index is
$Z(D_n) = \frac{1}{2} Z(C_n) + \begin{cases} \frac{1}{2}\, a_1 a_2^{(n-1)/2}, & n \text{ odd}, \\ \frac{1}{4}\left(a_1^2 a_2^{(n-2)/2} + a_2^{n/2}\right), & n \text{ even}. \end{cases}$

The alternating group An


The alternating group An consists of the n!/2 even permutations of n elements. Its cycle index is
$Z(A_n) = \sum_{j_1 + 2 j_2 + \cdots + n j_n = n} \frac{1 + (-1)^{j_2 + j_4 + \cdots}}{\prod_{k=1}^{n} k^{j_k}\, j_k!} \prod_{k=1}^{n} a_k^{\,j_k}.$

The symmetric group Sn


The cycle index of the symmetric group Sn is given by the formula
$Z(S_n) = \sum_{j_1 + 2 j_2 + \cdots + n j_n = n} \frac{1}{\prod_{k=1}^{n} k^{j_k}\, j_k!} \prod_{k=1}^{n} a_k^{\,j_k},$
which can also be stated in terms of complete Bell polynomials:
$Z(S_n) = \frac{B_n\!\left(0!\,a_1,\; 1!\,a_2,\; 2!\,a_3,\; \ldots,\; (n-1)!\,a_n\right)}{n!}.$
This formula is obtained by counting how many times a given permutation shape can occur. There are three steps: first partition the set of n labels into subsets, where there are jk subsets of size k. Every such subset of size k generates (k − 1)! distinct cycles of length k. But we do not distinguish between subsets of the same size, i.e. they are permuted by the jk! elements of $S_{j_k}$. This yields
$\frac{n!}{\prod_{k=1}^{n} (k!)^{j_k}} \cdot \frac{\prod_{k=1}^{n} \left((k-1)!\right)^{j_k}}{\prod_{k=1}^{n} j_k!} = \frac{n!}{\prod_{k=1}^{n} k^{j_k}\, j_k!}$
permutations of the given shape, in agreement with the formula above.

There is a useful recursive formula for the cycle index of the symmetric group. Set $Z(S_0) = 1$ and consider the size l of the cycle that contains n, where $1 \le l \le n$. There are $\binom{n-1}{l-1}$ ways to choose the remaining $l - 1$ elements of the cycle, and every such choice generates $(l-1)!$ different cycles. This yields the recurrence
$Z(S_n) = \frac{1}{n!} \sum_{l=1}^{n} \binom{n-1}{l-1} (l-1)!\; a_l\, (n-l)!\, Z(S_{n-l})$
or
$Z(S_n) = \frac{1}{n} \sum_{l=1}^{n} a_l\, Z(S_{n-l}).$

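A quick numerical sketch of this recurrence (an illustrative check, not from the article): substituting a_l = k for every l turns Z(S_n) into the number of multisets of size n drawn from k types, so the recurrence p_n = (k/n)(p_0 + ... + p_{n−1}) must reproduce C(n + k − 1, n).

#include <cstdio>
#include <vector>

int main() {
    const int k = 3, N = 8;                        // example parameters (assumptions)
    std::vector<double> p(N + 1);
    p[0] = 1.0;                                    // Z(S_0) = 1
    for (int n = 1; n <= N; ++n) {
        double s = 0;
        for (int l = 1; l <= n; ++l) s += k * p[n - l];   // a_l = k in the recurrence
        p[n] = s / n;
    }
    for (int n = 0; n <= N; ++n)
        std::printf("n = %d: %.0f\n", n, p[n]);    // 1, 3, 6, 10, 15, ... = C(n+2, n)
    return 0;
}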

External links
Marko Riedel, Pólya's enumeration theorem and the symbolic method [1]. Harald Fripertinger (1997). "Cycle indices of linear, affine and projective groups" [2]. Linear Algebra and Its Applications 263: 133–156. doi:10.1016/S0024-3795(96)00530-7.

References
[1] http://www.mathematik.uni-stuttgart.de/~riedelmo/papers/collier.pdf
[2] http://www.uni-graz.at/~fripert/linaffproj.html

Cyclic order
In mathematics, a cyclic order is a way to arrange a set of objects in a circle.[nb] Unlike most structures in order theory, a cyclic order cannot be modeled as a binary relation "a < b". One does not say that east is more clockwise than west. Instead, a cyclic order is defined as a ternary relation [a, b, c], meaning "after a, one reaches b before c". For example, [June, October, February]. A ternary relation is called a cyclic order if it is cyclic, asymmetric, transitive, and total. Dropping the "total" requirement results in a partial cyclic order. A set with a cyclic order is called a cyclically ordered set or simply a cycle.[nb] Some familiar cycles are discrete, having only a finite number of elements: there are seven days of the week, four cardinal directions, twelve notes in the chromatic scale, and three plays in rock-paper-scissors. In a finite cycle, each element has a "next element" and a "previous element". There are also continuously variable cycles with infinitely many elements, such as the oriented unit circle in the plane. Cyclic orders are closely related to linear orders, which arrange objects in a line. Any linear order can be bent into a circle, and any cyclic order can be cut at a point, resulting in a line. These operations, along with the related constructions of intervals and covering maps, mean that questions about cyclic orders can often be transformed into questions about linear orders, which are more familiar. Cycles have more symmetries than linear orders, so they are often more compatible with other mathematical structures. The integers form a linearly ordered group Z, whose quotient by a number n, the finite cyclic group Z/n, is circularly orderable but not linearly orderable. The real numbers form a linearly ordered topological space R, whose one-point compactification, the real projective line RP1, is circularly orderable but not linearly orderable.

Finite cycles
A cyclic order on a set X with n elements is an arrangement of X as on a clock face, for an n-hour clock. That is, rather than an order relation on X, we define on X just functions 'element immediately before' and 'element immediately following' any given x, in such a way that taking predecessors, or successors, cycles once through the elements as x(1), x(2), ..., x(n). Another way to put it is to say that we make X into the standard directed cycle graph on n vertices, by some matching of elements to vertices. In other words, a finite cycle is the same as a cycle of a permutation, and it makes X into a Zn-torsor.[1] It can be instinctive to use cyclic orders for symmetric functions, for example as in

xy + yz + zx, where writing the final monomial as xz would distract from the pattern.
A substantial use of cyclic orders is in the determination of the conjugacy classes of free groups. Two elements g and h of the free group F on a set Y are conjugate if and only if, when they are written as products of elements y and y⁻¹ with y in Y, and then those products are put in cyclic order, the cyclic orders are equivalent under the rewriting rules that allow one to remove or add adjacent y and y⁻¹.
A cyclic order on a set X can be determined by a linear order on X, but not in a unique way. Choosing a linear order is equivalent to choosing a first element, so there are exactly n linear orders that induce a given cyclic order. Since there are n! possible linear orders, there are n! / n = (n − 1)! possible cyclic orders.


Definitions
An infinite set can also be ordered cyclically. Important examples of infinite cycles include the unit circle, S1, and the rational numbers, Q. The basic idea is the same: we arrange elements of the set around a circle. However, in the infinite case we cannot use the immediate successor relation; instead we use a ternary relation denoting that elements a, b, c occur after each other (not necessarily immediately) as we go around the circle. By currying the arguments of the ternary relation [a, b, c], one can think of a cyclic order as a one-parameter family of binary order relations, called cuts, or as a two-parameter family of subsets of K, called intervals.

The ternary relation


The general definition is as follows: a cyclic order on a set X is a relation C X3, written [a, b, c], that satisfies the following axioms:[nb] 1. 2. 3. 4. Cyclicity: If [a, b, c] then [b, c, a] Asymmetry: If [a, b, c] then not [c, b, a] Transitivity: If [a, b, c] and [a, c, d] then [a, b, d] Totality: If a, b, and c are distinct, then either [a, b, c] or [c, b, a]

The axioms are named by analogy with the asymmetry, transitivity, and totality axioms for a binary relation, which together define a strict linear order. Edward Huntington(1916, 1924) considered other possible lists of axioms, including one list meant to emphasize the similarity between a cyclic order and a betweenness relation. A ternary relation that satisfies the first three axioms, but not necessarily the axiom of totality, is a partial cyclic order.

Rolling and cuts


Given a linear order < on a set X, the cyclic order on X induced by < is defined as follows:[2]
[a, b, c] if and only if a < b < c or b < c < a or c < a < b.
Two linear orders induce the same cyclic order if they can be transformed into each other by a cyclic rearrangement, as in cutting a deck of cards.[3] One may define a cyclic order relation as a ternary relation that is induced by a strict linear order as above.[4]
Cutting a single point out of a cyclic order leaves a linear order behind. More precisely, given a cyclically ordered set (K, [ ]), each element a ∈ K defines a natural linear order <a on the remainder of the set, K − {a}, by the following rule:[5]
x <a y if and only if [a, x, y].
Moreover, <a can be extended by adjoining a as a least element; the resulting linear order on K is called the principal cut with least element a. Likewise, adjoining a as a greatest element results in a cut <a.[6]
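A tiny sketch of the induced cyclic order on Z/n (the function name is illustrative, not from the article): [a, b, c] holds exactly when, starting at a and moving clockwise, b is reached strictly before c. For distinct elements this agrees with the "a < b < c or b < c < a or c < a < b" definition above.

#include <cstdio>

bool cyclicallyOrdered(int a, int b, int c, int n) {   // a, b, c assumed distinct, in 0..n-1
    int db = ((b - a) % n + n) % n;                    // clockwise distance from a to b
    int dc = ((c - a) % n + n) % n;                    // clockwise distance from a to c
    return db < dc;
}

int main() {
    // On a 12-hour clock: [6, 10, 2] holds (compare [June, October, February]),
    // and by the cyclicity axiom so does [10, 2, 6].
    std::printf("%d %d\n", cyclicallyOrdered(6, 10, 2, 12), cyclicallyOrdered(10, 2, 6, 12));   // 1 1
    return 0;
}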



Intervals
Given two elements a ≠ b ∈ K, the open interval from a to b, written (a, b), is the set of all x ∈ K such that [a, x, b]. The system of open intervals completely defines the cyclic order and can be used as an alternate definition of a cyclic order relation.[7]

An interval (a, b) has a natural linear order given by <_a. One can define half-closed and closed intervals [a, b), (a, b], and [a, b] by adjoining a as a least element and/or b as a greatest element.[8] As a special case, the open interval (a, a) is defined as the cut K − {a}.

More generally, a proper subset S of K is called convex if it contains an interval between every pair of points: for a ≠ b ∈ S, either (a, b) or (b, a) must also be in S.[9] A convex set is linearly ordered by the cut <_x for any x not in the set; this ordering is independent of the choice of x.

Monotone functions
The "cyclic order = arranging in a circle" idea works because any subset of a cycle is itself a cycle. In order to use this idea to impose cyclic orders on sets that are not actually subsets of the unit circle in the plane, it is necessary to consider functions between sets. A function between two cyclically ordered sets, f : X Y, is called a monotonic function or a homomorphism if it pulls back the ordering on Y: whenever [f(a), f(b), f(c)], one has [a, b, c]. Equivalently, f is monotone if whenever [a, b, c] and f(a), f(b), and f(c) are all distinct, then [f(a), f(b), f(c)]. A typical example of a monotone function is the following function on the cycle with 6 elements: f(0) = f(1) = 4, f(2) = f(3) = 0, f(4) = f(5) = 1. A function is called an embedding if it is both monotone and injective.[nb] Equivalently, an embedding is a function that pushes forward the ordering on X: whenever [a, b, c], one has [f(a), f(b), f(c)]. As an important example, if X is a subset of a cyclically ordered set Y, and X is given its natural ordering, then the inclusion map i : X Y is an embedding. Generally, an injective function f from an unordered set X to a cycle Y induces a unique cyclic order on X that makes f an embedding.

Functions on finite sets


A cyclic order on a finite set X can be determined by an injection into the unit circle, X → S1. There are many possible functions that induce the same cyclic order; in fact, infinitely many. In order to quantify this redundancy, it takes a more complex combinatorial object than a simple number. Examining the configuration space of all such maps leads to the definition of an (n − 1)-dimensional polytope known as a cyclohedron. Cyclohedra were first applied to the study of knot invariants;[10] they have more recently been applied to the experimental detection of periodically expressed genes in the study of biological clocks.[11] The category of homomorphisms of the standard finite cycles is called the cyclic category; it may be used to construct Alain Connes' cyclic homology. One may define a degree of a function between cycles, analogous to the degree of a continuous mapping. For example, the natural map from the circle of fifths to the chromatic circle is a map of degree 7. One may also define a rotation number.



Completion
A cut with both a least element and a greatest element is called a jump. For example, every cut of a finite cycle Zn is a jump. A cycle with no jumps is called dense.[12] [13]

A cut with neither a least element nor a greatest element is called a gap. For example, the rational numbers Q have a gap at every irrational number. They also have a gap at infinity, i.e. under the usual ordering. A cycle with no gaps is called complete.[14] [13]

A cut with exactly one endpoint is called a principal or Dedekind cut. For example, every cut of the circle S1 is a principal cut. A cycle where every cut is principal, being both dense and complete, is called continuous.[15] [13]

The set of all cuts is cyclically ordered by the following relation: [<1, <2, <3] if and only if there exist x, y, z such that:[16]

x <1 y <1 z,
x <1 y <2 z <2 x, and
x <1 y <1 z <3 x <3 y.

A certain subset of this cycle of cuts is the Dedekind completion of the original cycle.

Further constructions
Unrolling and covers

Starting from a cyclically ordered set K, one may form a linear order by unrolling it along an infinite line. This captures the intuitive notion of keeping track of how many times one goes around the circle. Formally, one defines a linear order on the Cartesian product Z × K, where Z is the set of integers, by fixing an element a and requiring that for all i:[17]

If [a, x, y], then (a, i) < (x, i) < (y, i) < (a, i + 1).

For example, the months January 2011, May 2011, September 2011, and January 2012 occur in that order. This ordering of Z × K is called the universal cover of K.[nb] Its order type is independent of the choice of a, but the notation is not, since the integer coordinate "rolls over" at a. For example, although the cyclic order of pitch classes is compatible with the A-to-G alphabetical order, C is chosen to be the first note in each octave, so in note-octave notation, B3 is followed by C4.

The inverse construction starts with a linearly ordered set and coils it up into a cyclically ordered set. Given a linearly ordered set L and an order-preserving bijection T : L → L with unbounded orbits, the orbit space L / T is cyclically ordered by the requirement:[7] [nb]

If a < b < c < T(a), then [[a], [b], [c]].

In particular, one can recover K by defining T(x, i) = (x, i + 1) on Z × K.

There are also n-fold coverings for finite n; in this case, one cyclically ordered set covers another cyclically ordered set. For example, the 24-hour clock is a double cover of the 12-hour clock. In geometry, the pencil of rays emanating from a point in the oriented plane is a double cover of the pencil of unoriented lines passing through the same point.[18] These covering maps can be characterized by lifting them to the universal cover.[7]



Products and retracts


Given a cyclically ordered set (K, [ ]) and a linearly ordered set (L, <), the (total) lexicographic product is a cyclic order on the product set K × L, defined by [(a, x), (b, y), (c, z)] if one of the following holds:[19]

[a, b, c]
a = b ≠ c and x < y
b = c ≠ a and y < z
c = a ≠ b and z < x
a = b = c and [x, y, z]

The lexicographic product K × L globally looks like K and locally looks like L; it can be thought of as K copies of L. This construction is sometimes used to characterize cyclically ordered groups.[20]

One can also glue together different linearly ordered sets to form a circularly ordered set. For example, given two linearly ordered sets L1 and L2, one may form a circle by joining them together at positive and negative infinity. A circular order on the disjoint union L1 ∪ L2 ∪ {−∞, ∞} is defined by ∞ < L1 < −∞ < L2 < ∞, where the induced ordering on L1 is the opposite of its original ordering. For example, the set of all longitudes is circularly ordered by joining together all points west and all points east, along with the prime meridian and the 180th meridian. Kuhlmann, Marshall & Osiak (2011) use this construction while characterizing the spaces of orderings and real places of double formal Laurent series over a real closed field.[21]

Topology
The open intervals form a base for a natural topology, the cyclic order topology. The open sets in this topology are exactly those sets which are open in every compatible linear order.[22] To illustrate the difference, in the set [0, 1), the subset [0, 1/2) is a neighborhood of 0 in the linear order but not in the cyclic order. Interesting examples of cyclically ordered spaces include the conformal boundary of a simply connected Lorentz surface[23] and the leaf space of a lifted essential lamination of certain 3-manifolds.[24] Discrete dynamical systems on cyclically ordered spaces have also been studied.[25] The interval topology forgets the original orientation of the cyclic order. This orientation can be restored by enriching the intervals with their induced linear orders; then one has a set covered with an atlas of linear orders that are compatible where they overlap. In other words, a cyclically ordered set can be thought of as a locally linearly ordered space: an object like a manifold, but with order relations instead of coordinate charts. This viewpoint makes it easier to be precise about such concepts as covering maps. The generalization to a locally partially ordered space is studied in Roll (1993); see also Directed topology.



Related structures
Groups
A cyclically ordered group is a set with both a group structure and a cyclic order, such that left and right multiplication both preserve the cyclic order.

Modified axioms
A partial cyclic order is a ternary relation that generalizes a (total) cyclic order in the same way that a partial order generalizes a total order. It is cyclic, asymmetric, and transitive, but it need not be total. An order variety is a partial cyclic order that satisfies an additional spreading axiom. Replacing the asymmetry axiom with a complementary version results in the definition of a co-cyclic order. Appropriately total co-cyclic orders are related to cyclic orders in the same way that ≤ is related to <.

A cyclic order obeys a relatively strong 4-point transitivity axiom. One structure that weakens this axiom is a CC system: a ternary relation that is cyclic, asymmetric, and total, but generally not transitive. Instead, a CC system must obey a 5-point transitivity axiom and a new interiority axiom, which constrains the 4-point configurations that violate cyclic transitivity.[26]

A cyclic order is required to be symmetric under cyclic permutation, [a, b, c] ⇒ [b, c, a], and asymmetric under reversal: [a, b, c] ⇒ not [c, b, a]. A ternary relation that is asymmetric under cyclic permutation and symmetric under reversal, together with appropriate versions of the transitivity and totality axioms, is called a betweenness relation. A separation relation is a quaternary relation that can be thought of as a cyclic order without an orientation. The relationship between a circular order and a separation relation is analogous to the relationship between a linear order and a betweenness relation.[27]

Symmetries and model theory


Evans, Macpherson & Ivanov (1997) provide a model-theoretic description of the covering maps of cycles. Tararin (2001, 2002) studies groups of automorphisms of cycles with various transitivity properties. Giraudet & Holland (2002) characterize cycles whose full automorphism groups act freely and transitively. Campero-Arena & Truss (2009) characterize countable colored cycles whose automorphism groups act transitively. Truss (2009) studies the automorphism group of the unique (up to isomorphism) countable dense cycle.

Kulpeshov & Macpherson (2005) study minimality conditions on circularly ordered structures, i.e. models of first-order languages that include a cyclic order relation. These conditions are analogues of o-minimality and weak o-minimality for the case of linearly ordered structures. Kulpeshov (2006, 2009) continues with some characterizations of ℵ0-categorical structures.[28]

Cognition
Hans Freudenthal has emphasized the role of cyclic orders in cognitive development, as a contrast to Jean Piaget who addresses only linear orders. Some experiments have been performed to investigate the mental representations of cyclically ordered sets, such as the months of the year.

Notes on usage
cyclic order: The relation may be called a cyclic order (Huntington 1916, p.630), a circular order (Huntington 1916, p.630), a cyclic ordering (Kok 1973, p.6), or a circular ordering (Mosher 1996, p.109). Some authors call such an ordering a total cyclic order (Isli & Cohn 1998, p.643), a complete cyclic order (Novák 1982, p.462), a linear cyclic order (Novák 1984, p.323), or an l-cyclic order or ℓ-cyclic order (Černák 2001, p.32), to distinguish from the

broader class of partial cyclic orders, which they call simply cyclic orders. Finally, some authors may take cyclic order to mean an unoriented quaternary separation relation (Bowditch 1998, p.155).

cycle: A set with a cyclic order may be called a cycle (Novák 1982, p.462) or a circle (Giraudet & Holland 2002, p.1). The above variations also appear in adjective form: cyclically ordered set (cyklicky uspořádané množiny, Čech 1936, p.23), circularly ordered set, total cyclically ordered set, complete cyclically ordered set, linearly cyclically ordered set, l-cyclically ordered set, ℓ-cyclically ordered set. All authors agree that a cycle is totally ordered.

ternary relation: There are a few different symbols in use for a cyclic relation. Huntington (1916, p.630) uses concatenation: ABC. Čech (1936, p.23) and Novák (1982, p.462) use ordered triples and the set membership symbol: (a, b, c) ∈ C. Megiddo (1976, p.274) uses concatenation and set membership: abc ∈ C, understanding abc as a cyclically ordered triple. The literature on groups, such as Świerczkowski (1959a, p.162) and Černák & Jakubík (1987, p.157), tends to use square brackets: [a, b, c]. Giraudet & Holland (2002, p.1) use round parentheses: (a, b, c), reserving square brackets for a betweenness relation. Campero-Arena & Truss (2009, p.1) use a function-style notation: R(a, b, c). Rieger (1947, cited after Pecinová 2008, p.82) uses a "less-than" symbol as a delimiter: < x, y, z <. Some authors use infix notation: a < b < c, with the understanding that this does not carry the usual meaning of a < b and b < c for some binary relation < (Černý 1978, p.262). Weinstein (1996, p.81) emphasizes the cyclic nature by repeating an element: p r q p.

embedding: Novák (1984, p.332) calls an embedding an "isomorphic embedding".

roll: In this case, Giraudet & Holland (2002, p.2) write that K is L "rolled up".

orbit space: The map T is called archimedean by Bowditch (2004, p.33), coterminal by Campero-Arena & Truss (2009, p.582), and a translation by McMullen (2009, p.10).

universal cover: McMullen (2009, p.10) calls Z × K the "universal cover" of K. Giraudet & Holland (2002, p.3) write that K is Z × K "coiled". Freudenthal & Bauer (1974, p.10) call Z × K the "∞-times covering" of K. Often this construction is written as the anti-lexicographic order on K × Z.


References
Citations
[1] Brown 1987, p.52.
[2] Huntington 1935, p.6; Čech 1936, p.25.
[3] Calegari 2004, p.439.
[4] Courcelle 2003.
[5] Huntington 1935, p.7; Čech 1936, p.24.
[6] Novák 1984, p.323.
[7] McMullen 2009, p.10.
[8] Giraudet & Holland 2002, p.2.
[9] Kulpeshov 2009.
[10] Stasheff 1997, p.58.
[11] Morton et al. 2007.
[12] Novák 1984, p.325.
[13] Novák & Novotný 1987, p.409-410.
[14] Novák 1984, pp.325, 331.
[15] Novák 1984, p.333.
[16] Novák 1984, p.330.
[17] Roll 1993, p.469; Freudenthal & Bauer 1974, p.10.
[18] Freudenthal 1973, p.475; Freudenthal & Bauer 1974, p.10.
[19] Świerczkowski 1959a, p.161.
[20] Świerczkowski 1959a.
[21] Kuhlmann, Marshall & Osiak 2011, p.8.
[22] Viro et al. 2008, p.44.
[23] Weinstein 1996, pp.80-81.
[24] Calegari & Dunfield 2003, pp.12-13.

[25] Bass et al. 1996, p.19.
[26] Knuth 1992, p.4.
[27] Huntington 1935.
[28] Macpherson 2011.


Bibliography Bass, Hyman; Otero-Espinar, Maria Victoria; Rockmore, Daniel; Tresser, Charles (1996), Cyclic renormallzatlon and automorphism groups of rooted trees, Lecture Notes in Mathematics, 1621, Springer, doi:10.1007/BFb0096321, ISBN978-3-540-60595-9 Bowditch, Brian H. (September 1998), "Cut points and canonical splittings of hyperbolic groups" (http://www. kryakin.com/files/Acta_Mat_(2_55)/acta197_151/180/180_1.pdf), Acta Mathematica 180 (2): 145186, doi:10.1007/BF02392898, retrieved 25 April 2011 Bowditch, Brian H. (November 2004), "Planar groups and the Seifert conjecture" (http://www.warwick.ac.uk/ ~masgak/abstracts/pla.html), Journal fr die reine und angewandte Mathematik 576: 1162, doi:10.1515/crll.2004.084, retrieved 31 May 2011 Brown, Kenneth S. (February 1987), "Finiteness properties of groups" (http://www.math.cornell.edu/ ~kbrown/scan/1987.0044.0045.pdf), Journal of Pure and Applied Algebra 44 (13): 4575, doi:10.1016/0022-4049(87)90015-6, retrieved 21 May 2011 Calegari, Danny (13 December 2004), "Circular groups, planar groups, and the Euler class" (http://emis.math. ca/journals/UW/gt/ftp/main/m7/m7-15.pdf), Geometry & Topology Monographs 7: 431491, arXiv:math/0403311, doi:10.2140/gtm.2004.7.431, retrieved 30 April 2011 Calegari, Danny; Dunfield, Nathan M. (April 2003), "Laminations and groups of homeomorphisms of the circle", Inventiones mathematicae 152 (1): 149204, arXiv:math/0203192, doi:10.1007/s00222-002-0271-6 Campero-Arena, G.; Truss, John K. (April 2009), "1-transitive cyclic orderings" (http://www.maths.leeds.ac. uk/pure/staff/truss/g.pdf), Journal of Combinatorial Theory, Series A 116 (3): 581594, doi:10.1016/j.jcta.2008.08.006, retrieved 25 April 2011 ech, Eduard (1936) (in Czech), Bodov mnoiny (http://dml.cz/handle/10338.dmlcz/400435), Prague: Jednota eskoslovenskch matematik a fysik, doi:10338.dmlcz/400435, retrieved 9 May 2011 ernk, tefan (2001), "Cantor extension of a half linearly cyclically ordered group" (http://lord.uz.zgora. pl:7777/bib/bibwww.pdf?nIdA=4493), Discussiones Mathematicae - General Algebra and Applications 21 (1): 3146, retrieved 22 May 2011 ernk, tefan; Jakubk, Jn (1987), "Completion of a cyclically ordered group" (http://dspace.dml.cz/ bitstream/handle/10338.dmlcz/102144/CzechMathJ_37-1987-1_16.pdf), Czechoslovak Mathematical Journal 37 (1): 157174, MR875137, Zbl0624.06021, hdl:10338.dmlcz/102144, retrieved 25 April 2011 erny, Ilja (1978), "Cuts in simple connected regions and the cyclic ordering of the system of all boundary elements" (http://dml.cz/bitstream/handle/10338.dmlcz/117983/CasPestMat_103-1978-3_6.pdf), asopis pro pstovn matematiky 103 (3): 259281, doi:10338.dmlcz/117983, retrieved 11 May 2011 Courcelle, Bruno (21 August 2003), "2.3 Circular order" (http://www-mgi.informatik.rwth-aachen.de/FMT/ problems.pdf), in Berwanger, Dietmar; Grdel, Erich, Problems in Finite Model Theory, p.12, retrieved 15 May 2011 Evans, David M.; Macpherson, Dugald; Ivanov, Alexandre A. (1997), "Finite Covers" (http://www.amsta. leeds.ac.uk/Pure/preprints/hdm/hdm5.ps), in Evans, David M., Model theory of groups and automorphism groups: Blaubeuren, August 1995, London Mathematical Society Lecture Note Series, 244, Cambridge University Press, pp.172, ISBN0-521-58955-X, retrieved 5 May 2011 Freudenthal, Hans (1973), Mathematics as an educational task, D. Reidel, ISBN90-277-0235-7 Freudenthal, Hans; Bauer, A. (1974), "GeometryA Phenomenological Discussion", in Behnke, Heinrich; Gould, S. 
H., Fundamentals of mathematics, 2, MIT Press, pp.328, ISBN0-262-02069-6 Freudenthal, Hans (1983), Didactical phenomenology of mathematical structures, D. Reidel, ISBN90-277-1535-1

Cyclic order Giraudet, Michle; Holland, W. Charles (September 2002), "Ohkuma Structures" (http://web.mac.com/ chollan1/iWeb/Site/publications_files/55 Ohkuma structures.pdf), Order 19 (3): 223237, doi:10.1023/A:1021249901409, retrieved 28 April 2011 Huntington, Edward V. (1 November 1916), "A Set of Independent Postulates for Cyclic Order" (http://www. pnas.org/content/2/11/630.full.pdf), Proceedings of the National Academy of Sciences of the United States of America 2 (11): 630631, retrieved 8 May 2011 Huntington, Edward V. (15 February 1924), "Sets of Completely Independent Postulates for Cyclic Order" (http:/ /www.pnas.org/content/10/2/74.full.pdf), Proceedings of the National Academy of Sciences of the United States of America 10 (2): 7478, retrieved 8 May 2011 Huntington, Edward V. (July 1935), "Inter-Relations Among the Four Principal Types of Order" (http://www. ams.org/journals/tran/1935-038-01/S0002-9947-1935-1501800-1/S0002-9947-1935-1501800-1.pdf), Transactions of the American Mathematical Society 38 (1): 19, doi:10.1090/S0002-9947-1935-1501800-1, retrieved 8 May 2011 Isli, Amar; Cohn, Anthony G. (1998), "An algebra for cyclic ordering of 2D orientations" (https://www.aaai. org/Papers/AAAI/1998/AAAI98-091.pdf), AAAI '98/IAAI '98 Proceedings of the fifteenth national/tenth conference on Artificial intelligence/Innovative applications of artificial intelligence, ISBN0-262-51098-7, retrieved 23 May 2011 Knuth, Donald E. (1992), Axioms and Hulls (http://www-cs-faculty.stanford.edu/~uno/aah.html), Lecture Notes in Computer Science, 606, Heidelberg: Springer-Verlag, pp.ix+109, doi:10.1007/3-540-55611-7, ISBN3-540-55611-7, retrieved 5 May 2011 Kok, H. (1973), Connected orderable spaces, Amsterdam: Mathematisch Centrum, ISBN9061960886 Kuhlmann, Salma; Marshall, Murray; Osiak, Katarzyna (1 June 2011), "Cyclic 2-structures and spaces of orderings of power series fields in two variables" (http://math.usask.ca/~marshall/r[[x,y]]_20,11.pdf), Journal of Algebra 335 (1): 3648, doi:10.1016/j.jalgebra.2011.02.026, retrieved 11 May 2011 Kulpeshov, Beibut Sh. (December 2006), "On 0-categorical weakly circularly minimal structures", Mathematical Logic Quarterly 52 (6): 555574, doi:10.1002/malq.200610014 Kulpeshov, Beibut Sh. (March 2009), "Definable functions in the 0-categorical weakly circularly minimal structures", Siberian Mathematical Journal 50 (2): 282301, doi:10.1007/s11202-009-0034-3 Translation of Kulpeshov (2009), " 0- " (http://mi.mathnet.ru/eng/smj1965), Sibirski Matematicheski Zhurnal 50 (2): 356379, retrieved 24 May 2011 Kulpeshov, Beibut Sh.; Macpherson, H. Dugald (July 2005), "Minimality conditions on circularly ordered structures", Mathematical Logic Quarterly 51 (4): 377399, doi:10.1002/malq.200410040, MR2150368 Macpherson, H. Dugald (2011), "A survey of homogeneous structures" (http://www.amsta.leeds.ac.uk/pure/ staff/macpherson/homog_final2.pdf), Discrete Mathematics, doi:10.1016/j.disc.2011.01.024, retrieved 28 April 2011 McMullen, Curtis T. (2009), "Ribbon R-trees and holomorphic dynamics on the unit disk" (http://www.math. 
harvard.edu/~ctm/papers/home/text/papers/rtrees/rtrees.pdf), Journal of Topology 2 (1): 2376, doi:10.1112/jtopol/jtn032, retrieved 15 May 2011 Megiddo, Nimrod (March 1976), "Partial and complete cyclic orders" (http://www.ams.org/journals/bull/ 1976-82-02/S0002-9904-1976-14020-7/S0002-9904-1976-14020-7.pdf), Bulletin of the American Mathematical Society 82 (2): 274276, doi:10.1090/S0002-9904-1976-14020-7, retrieved 30 April 2011 Morton, James; Pachter, Lior; Shiu, Anne; Sturmfels, Bernd (January 2007), "The Cyclohedron Test for Finding Periodic Genes in Time Course Expression Studies", Statistical Applications in Genetics and Molecular Biology 6 (1), arXiv:q-bio/0702049, doi:10.2202/1544-6115.1286 Mosher, Lee (1996), "A user's guide to the mapping class group: once-punctured surfaces", in Baumslag, Gilbert, Geometric and computational perspectives on infinite groups, DIMACS, 25, AMS Bookstore, pp.101174,


Cyclic order arXiv:math/9409209, ISBN0-8218-0449-9 Novk, Vtzslav (1982), "Cyclically ordered sets" (http://dml.cz/bitstream/handle/10338.dmlcz/101821/ CzechMathJ_32-1982-3_12.pdf), Czechoslovak Mathematical Journal 32 (3): 460473, doi:10338.dmlcz/101821, retrieved 30 April 2011 Novk, Vtzslav (1984), "Cuts in cyclically ordered sets" (http://dml.cz/bitstream/handle/10338.dmlcz/ 101955/CzechMathJ_34-1984-2_17.pdf), Czechoslovak Mathematical Journal 34 (2): 322333, doi:10338.dmlcz/101955, retrieved 30 April 2011 Novk, Vtzslav; Novotn, Miroslav (1987), "On completion of cyclically ordered sets" (http://dspace.dml.cz/ bitstream/handle/10338.dmlcz/102168/CzechMathJ_37-1987-3_6.pdf), Czechoslovak Mathematical Journal 27 (3): 407414, doi:10338.dmlcz/102168, retrieved 25 April 2011 Pecinov, Elika (2008) (in Czech), Ladislav Svante Rieger (19161963) (http://dml.cz/handle/10338.dmlcz/ 400757), Djiny matematiky, 36, Prague: Matfyzpress, doi:10338.dmlcz/400757, ISBN978-80-7378-047-0, retrieved 9 May 2011 Rieger, L. S. (1947), " uspodanch a cyklicky uspodanch grupch II (On ordered and cyclically ordered groups II)" (in Czech), Vstnk Krlovsk esk spolecnosti nauk, Tda mathematicko-prodovdn (Journal of the Royal Czech Society of Sciences, Mathematics and natural history) (1): 133


Roll, J. Blair (1993), "Locally partially ordered groups" (http://dml.cz/bitstream/handle/10338.dmlcz/ 128411/CzechMathJ_43-1993-3_8.pdf), Czechoslovak Mathematical Journal 43 (3): 467481, hdl:10338.dmlcz/128411, retrieved 30 April 2011 Stasheff, Jim (1997), "From operads to 'physically' inspired theories" (http://www.math.unc.edu/Faculty/jds/ operadchik.ps), in Loday, Jean-Louis; Stasheff, James D.; Voronov, Alexander A., Operads: Proceedings of Reneassance Conferences, Contemporary Mathematics, 202, AMS Bookstore, pp.5382, ISBN978-0-8218-0513-8, retrieved 1 May 2011 wierczkowski, S. (1959a), "On cyclically ordered groups" (http://matwbn.icm.edu.pl/ksiazki/fm/fm47/ fm4718.pdf), Fundamenta Mathematicae 47: 161166, retrieved 2 May 2011 Tararin, Valeri Mikhailovich (2001), "On Automorphism Groups of Cyclically Ordered Sets", Siberian Mathematical Journal 42 (1): 190204, doi:10.1023/A:1004866131580. Translation of Tamarin (2001), " " (http://mi.mathnet.ru/eng/smj1484) (in Russian), Sibirskii Matematicheskii Zhurnal 42 (1): 212230, retrieved 30 April 2011 Tararin, Valeri Mikhailovich (2002), "On c-3-Transitive Automorphism Groups of Cyclically Ordered Sets", Mathematical Notes 71 (1): 110117, doi:10.1023/A:1013934509265. Translation of Tamarin (2002), " c-3- " (http://mi.mathnet.ru/ eng/mz333), Matematicheskie Zametki 71 (1): 122129, retrieved 22 May 2011 Truss, John K. (2009), "On the automorphism group of the countable dense circular order" (http://www.maths. leeds.ac.uk/Pure/staff/truss/mnew.pdf), Fundamenta Mathematicae 204 (2): 97111, doi:10.4064/fm204-2-1, retrieved 25 April 2011 Viro, Oleg; Ivanov, Oleg; Netsvetaev, Nikita; Kharlamov, Viatcheslav (2008), "8. Cyclic Orders" (http://www. ams.org/bookstore/pspdf/mbk-54-prev.pdf), Elementary topology: problem textbook (1st English ed.), AMS Bookstore, pp.4244, ISBN978-0-8218-4506-6, retrieved 25 April 2011 Weinstein, Tilla (July 1996), An introduction to Lorentz surfaces, De Gruyter Expositions in Mathematics, 22, Walter de Gruyter, ISBN978-3-11-014333-1



Further reading
Bhattacharjee, Meenaxi; Macpherson, Dugald; Mller, Rgnvaldur G.; Neumann, Peter M. (1998), Notes on Infinite Permutation Groups, Lecture Notes in Mathematics, 1698, Springer, pp.108109, doi:10.1007/BFb0092550 Bodirsky, Manuel; Pinsker, Michael (to appear), "Reducts of Ramsey Structures", Model Theoretic Methods in Finite Combinatorics, Contemporary Mathematics, AMS, arXiv:1105.6073 Cameron, Peter J. (June 1976), "Transitivity of permutation groups on unordered sets", Mathematische Zeitschrift 148 (2): 127139, doi:10.1007/BF01214702 Cameron, Peter J. (June 1977), "Cohomological aspects of two-graphs", Mathematische Zeitschrift 157 (2): 101119, doi:10.1007/BF01215145 Cameron, Peter J. (1997), Evans, David M., ed., Model theory of groups and automorphism groups: Blaubeuren, August 1995 (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.39.2321), London Mathematical Society Lecture Note Series, 244, Cambridge University Press, pp.126133, ISBN0-521-58955-X, retrieved 5 May 2011 Courcelle, Joost; Engelfriet (April 2011), Graph Structure and Monadic Second-Order Logic, a Language Theoretic Approach (http://www.labri.fr/perso/courcell/Book/TheBook.pdf), Cambridge University Press, retrieved 17 May 2011 Droste, M.; Giraudet, M.; Macpherson, D. (March 1995), "Periodic Ordered Permutation Groups and Cyclic Orderings", Journal of Combinatorial Theory, Series B 63 (2): 310321, doi:10.1006/jctb.1995.1022 Droste, M.; Giraudet, M.; Macpherson, D. (March 1997), "Set-Homogeneous Graphs and Embeddings of Total Orders" (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.9135), Order 14 (1): 920, doi:10.1023/A:1005880810385, retrieved 17 May 2011 Evans, David M. (17 November 1997), "Finite covers with finite kernels" (http://citeseerx.ist.psu.edu/ viewdoc/summary?doi=10.1.1.57.5323), Annals of Pure and Applied Logic 88 (23): 109147, doi:10.1016/S0168-0072(97)00018-3, retrieved 5 May 2011 Ivanov, A. A. (January 1999), "Finite Covers, Cohomology and Homogeneous Structures", Proceedings of the London Mathematical Society 78 (1): 128, doi:10.1112/S002461159900163X Jakubk, Jn (2006), "On monotone permutations of -cyclically ordered sets" (http://dml.cz/bitstream/handle/ 10338.dmlcz/128075/CzechMathJ_56-2006-2_10.pdf), Czechoslovak Mathematical Journal 45 (2): 403415, doi:10338.dmlcz/128075, retrieved 30 April 2011 Kennedy, Christine Cowan (August 1955), On a cyclic ternary relation ... (M.A. Thesis), Tulane University, OCLC16508645 Knya, Eszter Herendine (2006), "A mathematical and didactical analysis of the concept of orientation" (http:// tmcs.math.klte.hu/Contents/2006-Vol-IV-Issue-I/konya-abstract.pdf), Teaching Mathematics and Computer Science 4 (1): 111130, retrieved 17 May 2011 Knya, Eszter Herendine (2008), "Geometrical transformations and the concept of cyclic ordering" (http://www. cme.rzeszow.pl/pdf/part_2_3.pdf), in Maj, Boena; Pytlak, Marta; Swoboda, Ewa, Supporting Independent Thinking Through Mathematical Education, Rzeszw University Press, pp.102108, ISBN978-83-7338-420-0, retrieved 17 May 2011 Leloup, Grard (February 2011), "Existentially equivalent cyclic ultrametric spaces and cyclically valued groups" (http://math.usask.ca/fvk/leloup3.pdf), Logic Journal of the IGPL 19 (1): 144173, doi:10.1093/jigpal/jzq024, retrieved 30 April 2011 Marongiu, Gabriele (1985), "Some remarks on the 0-categoricity of circular orderings" (in Italian), Unione Matematica Italiana. Bollettino. B. 
Serie VI 4 (3): 883900, MR0831297 McCleary, Stephen; Rubin, Matatyahu (6 October 2005), Locally Moving Groups and the Reconstruction Problem for Chains and Circles, arXiv:math/0510122 Mller, G. (1974), "Lineare und zyklische Ordnung", Praxis der Mathematik 16: 261269, MR0429660

Cyclic order Rubin, M. (1996), "Locally moving groups and reconstruction problems", in Holland, W. Charles, Ordered groups and infinite permutation groups, Mathematics and Its Applications, 354, Kluwer, pp.121157, ISBN978-0-7923-3853-6 wierczkowski, S. (1956), "On cyclic ordering relations", Bulletin de l'Acadmie Polonaise des Sciences, Classe III 4: 585586 wierczkowski, S. (1959b), "On cyclically ordered intervals of integers" (http://matwbn.icm.edu.pl/ksiazki/ fm/fm47/fm4719.pdf), Fundamenta Mathematicae 47: 167172, retrieved 2 May 2011 Truss, J.K. (July 1992), "Generic Automorphisms of Homogeneous Structures", Proceedings of the London Mathematical Society s3-65 (1): 121141, doi:10.1112/plms/s3-65.1.121


External links
cyclic order (http://ncatlab.org/nlab/show/cyclic+order) at nLab

De Arte Combinatoria
The Dissertatio de arte combinatoria is an early work by Gottfried Leibniz published in 1666 in Leipzig.[1] It is an extended version of his doctoral dissertation, written before the author had seriously undertaken the study of mathematics.[2] The booklet was reissued without Leibniz' consent in 1690, which prompted him to publish a brief explanatory notice in the Acta Eruditorum.[3] During the following years he repeatedly expressed regrets about its being circulated, as he considered it immature.[4] Nevertheless it was a very original work and it provided the author the first glimpse of fame among the scholars of his time.

The main idea behind the text is that of an alphabet of human thought, which is attributed to Descartes. All concepts are nothing but combinations of a relatively small number of simple concepts, just as words are combinations of letters. All truths may be expressed as appropriate combinations of concepts, which can in turn be decomposed into simple ideas, rendering the analysis much easier. Therefore, this alphabet would provide a logic of invention, opposed to that of demonstration which was known so far. Since all sentences are composed of a subject and a predicate, one might

Find all the predicates which are appropriate to a given subject, or
Find all the subjects which are convenient to a given predicate.

For this, Leibniz was inspired by the Ars Magna of Ramon Llull, although he criticized this author because of the arbitrariness of his categories and his indexing.

Leibniz discusses in this work some combinatorial concepts. He had read Clavius' comments to the Tractatus de Sphaera of Sacrobosco, and some other contemporary works. He introduced the term variationes ordinis for the permutations, combinationes for the combinations of two elements, con3nationes (shorthand for conternationes) for those of three elements, etc. His general term for combinations was complexions. He found the formula

which he thought was original. The first examples of use of his ars combinatoria are taken from law, the musical registry of an organ, and the Aristotelian theory of generation of elements from the four primary qualities. But philosophical applications are of greater importance. He cites the idea of Hobbes that all reasoning is just a computation. The most careful example is taken from geometry, from where we shall give some definitions. He introduces the Class I concepts, which are primitive. Class I

1 point, 2 space, 3 included, [...] 9 parts, 10 total, [...] 14 number, 15 various [...]

Class II contains simple combinations.

Class II.1 Quantity is 14 τῶν 9

Where τῶν means "of the" (from Ancient Greek). Thus, "Quantity" is the number of the parts.

Class III contains the con3nationes:

Class III.1 Interval is 2.3.10

Thus, "Interval" is the space included in total. Of course, concepts deriving from former classes may also be defined.

Class IV.1 Line is 1/3 τῶν 2

Where 1/3 means the first concept of class III. Thus, a "line" is the interval of (between) points.

Leibniz compares his system to the Chinese and Egyptian languages, although he did not really understand them at this point. For him, this is a first step towards the Characteristica Universalis, the perfect language which would provide a direct representation of ideas along with a calculus for the philosophical reasoning.

As a preface, the work begins with a proof of the existence of God, cast in geometrical form, and based on the Argument from Motion.


Notes
[1] G.W. Leibniz, Dissertatio de arte combinatoria, 1666, Smtliche Schriften und Briefe (Berlin: Akademie Verlag, 1923-) A VI 1, p.163; Philosophische Schriften (Gerhardt) Bd. IV S.30; [2] Gottfried Wilhelm Leibniz: Hauptschriften zur Grundlegung der Philosophie. Zur allgemeinen Charakteristik. Philosophische Werke Band 1. page 32. Translated by Artur Buchenau. Published, reviewed and added an introduction and notes by Ernst Cassirer. Publishing company of Felix Meiner. Hamburg. 1966, p.32. [3] G.G.L. Ars Combinatoria, Acta Eruditorum, Feb., 1691, p.63 [4] Leibniz complained to various correspondents e.g. to Morell (1 October 1697) or to Meier (23 January 1699) see Akademie I.14 p.548 or I.16 p.540

References
E.J. Aiton, Leibniz: A Biography. Hilger, Bristol, 1985. ISBN 0-85274-470-6.



De Bruijn torus
In combinatorial mathematics, a De Bruijn torus, named after Nicolaas Govert de Bruijn, is an array of symbols from an alphabet (often just 0 and 1) that contains every m-by-n matrix exactly once. It is a torus because the edges are considered wraparound for the purpose of finding matrices. Its name comes from the De Bruijn sequence, which can be considered a special case where n is 1 (one dimension).

One of the main open questions regarding De Bruijn tori is whether a De Bruijn torus for a particular alphabet size can be constructed for a given m and n. It is known that these always exist when n = 1, since then we simply get the De Bruijn sequences, which always exist. It is also known that "square" tori exist whenever m = n.[1]
A De Bruijn torus. Each 2-by-2 binary matrix can be found within it exactly once.
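For the smallest square binary case, a torus like the one described in the caption above can be found by exhaustive search. The sketch below (an illustration, not the article's construction) looks for a 4-by-4 binary array whose sixteen wraparound 2-by-2 windows are all distinct.

```python
# Brute-force sketch: find a (2,2) de Bruijn torus over {0,1}, i.e. a 4x4
# binary array in which every 2x2 binary matrix appears exactly once as a
# wraparound window.

from itertools import product

def windows(grid, n=4):
    for r in range(n):
        for c in range(n):
            yield (grid[r][c], grid[r][(c + 1) % n],
                   grid[(r + 1) % n][c], grid[(r + 1) % n][(c + 1) % n])

def find_torus(n=4):
    for bits in product((0, 1), repeat=n * n):
        grid = [bits[i * n:(i + 1) * n] for i in range(n)]
        if len(set(windows(grid, n))) == 16:   # all 16 windows are distinct
            return grid
    return None

for row in find_torus():
    print(row)
```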

References
[1] Jackson, Brad; Stevens, Brett; Hurlbert, Glenn (Sept. 2009). "Research problems on Gray codes and universal cycles". Discrete Mathematics 309 (17): 53415348. doi:10.1016/j.disc.2009.04.002.

Delannoy number
In mathematics, a Delannoy number describes the number of paths from the southwest corner (0, 0) of a rectangular grid to the northeast corner (m, n), using only single steps north, northeast, or east.

For an n × n grid, the first few Delannoy numbers (starting with n = 0) are (sequence A001850 in OEIS): 1, 3, 13, 63, 321, 1683, 8989, 48639, 265729, ... The following figure illustrates the 63 Delannoy paths through a 3 × 3 grid:



The paths that do not rise above the SW-NE diagonal represent the Schröder numbers.
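The central values D(n, n) listed above can be reproduced with the standard recurrence D(m, n) = D(m−1, n) + D(m, n−1) + D(m−1, n−1), with D(m, 0) = D(0, n) = 1, which reflects the last step being an east, north, or northeast step. The short sketch below is illustrative rather than part of the article.

```python
# Delannoy numbers via the standard recurrence.
from functools import lru_cache

@lru_cache(maxsize=None)
def delannoy(m, n):
    if m == 0 or n == 0:
        return 1
    return delannoy(m - 1, n) + delannoy(m, n - 1) + delannoy(m - 1, n - 1)

print([delannoy(k, k) for k in range(7)])
# [1, 3, 13, 63, 321, 1683, 8989]
```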

References
Weisstein, Eric W., "Delannoy Number [1]" from MathWorld.

References
[1] http://mathworld.wolfram.com/DelannoyNumber.html



Dickson's lemma
In mathematics, Dickson's lemma is a finiteness statement applying to n-tuples of natural numbers. It is a simple fact from combinatorics, which has become attributed to the American algebraist L. E. Dickson. It was certainly known earlier, for example to Gordan in his researches on invariant theory.

Stating it first for clarity for N², for any pair (m, n) of natural numbers we can introduce R_{m,n}, the 'rectangle' of numbers (r, s) with r at least m and s at least n. This is semi-infinite in the north and east directions, in the usual plane representation. The lemma then states that any union of the R_{m,n} can be expressed as the union of a finite subset of those R_{m,n}. This is analogous to the conventional topological definition of compactness.

The generalization to N^k is the natural one, with k-tuples in place of pairs. The statement says something about N^k as the topological space with the product topology arising from N, where the latter has the (semi-continuity) topology in which the open sets are all the sets R_m consisting of all n with n at least m. The 'rectangles' are by definition a base for the topology; the lemma says that finite unions give all open sets.

As for the proof of the lemma, it can be derived directly, but a slick way is to show that it is a special case of Hilbert's basis theorem. In fact it is essentially the case of ideals generated by monomials.
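As a toy illustration (not from the article), the finite subset promised by the lemma can be taken to be the rectangles attached to the componentwise-minimal pairs. The sketch below extracts those minimal pairs from a finite generating family; Dickson's lemma is the statement that even an infinite family has only finitely many minimal pairs.

```python
# Componentwise-minimal generators of a family of rectangles R_{m,n}.
def minimal_pairs(pairs):
    """Pairs (m, n) not dominated componentwise by another pair in the set."""
    return [p for p in pairs
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in pairs)]

gens = [(2, 5), (3, 3), (4, 1), (6, 2), (5, 5), (7, 0)]
print(minimal_pairs(gens))   # [(2, 5), (3, 3), (4, 1), (7, 0)]
```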

References
Dickson, L. E. (1913). "Finiteness of the odd perfect and primitive abundant numbers with n distinct prime factors". Amer. Journal Math. 35 (4): 413-422. doi:10.2307/2370405. JSTOR 2370405.

Difference set
In combinatorics, a (v, k, λ) difference set is a subset D of a group G such that the order of G is v, the size of D is k, and every nonidentity element of G can be expressed as a product d1·d2⁻¹ of elements of D in exactly λ ways.

Basic facts
A simple counting argument shows that there are exactly k(k − 1) pairs of elements from D that will yield nonidentity elements, so every difference set must satisfy the equation k(k − 1) = (v − 1)λ.

If D is a difference set and g ∈ G, then gD = {gd : d ∈ D} is also a difference set, and is called a translate of D. The set of all translates of a difference set D forms a symmetric design. In such a design there are v elements (mostly called points) and v blocks. Each block of the design consists of k points, and each point is contained in k blocks. Any two blocks have exactly λ elements in common and any two points are "joined" by λ blocks. The group G then acts as an automorphism group of the design. It is sharply transitive on points and blocks.

In particular, if λ = 1, then the difference set gives rise to a projective plane. An example of a (7, 3, 1) difference set in the group Z/7Z is the subset {1, 2, 4}. The translates of this difference set give the Fano plane.

Since every difference set gives a symmetric design, the parameter set must satisfy the Bruck-Ryser-Chowla theorem. Not every symmetric design gives a difference set.
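The (7, 3, 1) example is small enough to verify directly. The following sketch (written for this rendering, with ad hoc helper names) checks the defining property in the additive group Z/7Z and also that 2 acts as a multiplier.

```python
# Verify that D = {1, 2, 4} is a (7, 3, 1) difference set in Z/7Z.
from collections import Counter

def difference_counts(D, v):
    return Counter((d1 - d2) % v for d1 in D for d2 in D if d1 != d2)

D, v = {1, 2, 4}, 7
counts = difference_counts(D, v)
print(all(counts[g] == 1 for g in range(1, v)))   # True: each nonzero element once

# 2 is a multiplier: multiplying D by 2 gives a translate of D (here D itself).
print({(2 * d) % v for d in D} == D)              # True
```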



Multipliers
It has been conjectured that if p is a prime dividing k − λ and not dividing v, then the group automorphism defined by g ↦ g^p fixes some translate of D. It is known to be true for p > λ, and this is known as the First Multiplier Theorem.

A more general known result, the Second Multiplier Theorem, first says to choose a divisor m > λ of k − λ. Then g ↦ g^t, with t and v coprime, fixes some translate of D if for every prime p dividing m there exists an integer i such that t ≡ p^i (mod v*), where v* is the exponent (the least common multiple of the orders of every element) of the group.

For example, 2 is a multiplier of the (7,3,1)-difference set mentioned above.

Parameters
Every difference set known to mankind to this day has one of the following parameters or their complements:

((q^(n+2) − 1)/(q − 1), (q^(n+1) − 1)/(q − 1), (q^n − 1)/(q − 1))-difference set for some prime power q and some positive integer n (the classical, or Singer, parameters).
(4n − 1, 2n − 1, n − 1)-difference set for some positive integer n.
(4n², 2n² − n, n² − n)-difference set for some positive integer n.
(q^(n+1)(1 + (q^(n+1) − 1)/(q − 1)), q^n (q^(n+1) − 1)/(q − 1), q^n (q^n − 1)/(q − 1))-difference set for some prime power q and some positive integer n (McFarland parameters).
(3^(n+1)(3^(n+1) − 1)/2, 3^n (3^(n+1) + 1)/2, 3^n (3^n + 1)/2)-difference set for some positive integer n (Spence parameters).
(4q^(2n)(q^(2n) − 1)/(q² − 1), q^(2n−1)(1 + 2(q^(2n) − 1)/(q + 1)), q^(2n−1)(q^(2n−1) + 1)(q − 1)/(q + 1))-difference set for some prime power q and some positive integer n (Davis-Jedwab-Chen parameters).

Known difference sets


Singer ((q^(d+2) − 1)/(q − 1), (q^(d+1) − 1)/(q − 1), (q^d − 1)/(q − 1))-difference set: Let GF(q^(d+2)) and GF(q) be Galois fields of order q^(d+2) and q respectively, and let GF(q^(d+2))* and GF(q)* be their respective multiplicative groups of non-zero elements. Then the set

D = { x ∈ GF(q^(d+2))*/GF(q)* : Tr(x) = 0 }

is a ((q^(d+2) − 1)/(q − 1), (q^(d+1) − 1)/(q − 1), (q^d − 1)/(q − 1))-difference set, where Tr : GF(q^(d+2)) → GF(q) is the trace function Tr(x) = x + x^q + ... + x^(q^(d+1)).

Application
Xia, Zhou and Giannakis found that difference sets can be used to construct a complex vector codebook that achieves the difficult Welch bound on maximum cross-correlation amplitude. The so-constructed codebook also forms the so-called Grassmannian manifold.

Generalisations
A (v, k, λ) difference family is a set of subsets B = {B1, ..., Bs} of a group G such that the order of G is v, the size of each Bi is k, and every nonidentity element of G can be expressed as a product d1·d2⁻¹ of elements of Bi for some i (i.e. both d1 and d2 come from the same Bi) in exactly λ ways. A difference set is a difference family with s = 1. The parameter equation above generalises to s(k² − k) = (v − 1)λ.[1] The development of a difference family is a 2-design. Every 2-design with a regular automorphism group is the development of some difference family.



References
[1] Beth, Thomas; Jungnickel, Dieter; Lenz, Hanfried "Design theory", Cambridge University Press, Cambridge, 1986

W. D. Wallis (1988). Combinatorial Designs. Marcel Dekker. ISBN0-8247-7942-8. Daniel Zwillinger (2003). CRC Standard Mathematical Tables and Formulae. CRC Press. p.246. ISBN1-58488-291-3. P. Xia, S. Zhou, G. B. Giannakis (2005). "Achieving the Welch Bound with Difference Sets" (http://www.engr. uconn.edu/~shengli/papers/conf05/05icassp.pdf). IEEE Transactions on Information Theory 51 (5): 19001907. doi:10.1109/TIT.2005.846411.

DIMACS
The Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) is a collaboration between Rutgers University, Princeton University, and the research firms AT&T, Bell Labs, Telcordia, and NEC. It was founded in 1989 with money from the National Science Foundation. Its offices are located on the Rutgers campus, and 250 members from the six institutions form its permanent members. DIMACS is devoted to both theoretical development and practical applications of discrete mathematics and theoretical computer science. It engages in a wide variety of evangelism including encouraging, inspiring, and facilitating researchers in these subject areas, and sponsoring conferences and workshops. Fundamental research in discrete mathematics has applications in diverse fields including Cryptology, Engineering, Networking, and Management Decision Support. The current director of DIMACS is Fred S. Roberts. Past directors were Daniel Gorenstein and Andrs Hajnal.[1]

The DIMACS Challenges


DIMACS sponsors implementation challenges to determine practical algorithm performance on problems of interest. There have been nine DIMACS challenges so far.

1990-1991: Network Flows and Matching
1992-1992: NP-Hard Problems: Max Clique, Graph Coloring, and SAT
1993-1994: Parallel Algorithms for Combinatorial Problems
1994-1995: Computational Biology: Fragment Assembly and Genome Rearrangement
1995-1996: Priority Queues, Dictionaries, and Multidimensional Point Sets
1998-1998: Near Neighbor Searches
2000-2000: Semidefinite and Related Optimization Problems
2001-2001: The Traveling Salesman Problem
2005-2005: The Shortest Path Problem

References
[1] A history of mathematics at Rutgers (http://www.math.rutgers.edu/docs/history.html), Charles Weibel.

External links
DIMACS Website (http://dimacs.rutgers.edu/)



Dinitz conjecture
In combinatorics, the Dinitz conjecture is a statement about the extension of arrays to partial Latin squares, proposed in 1979 by Jeff Dinitz, and proved in 1994 by Fred Galvin.

The Dinitz conjecture, now a theorem, is that given an n × n square array, a set of m symbols with m ≥ n, and for each cell of the array an n-element set drawn from the pool of m symbols, it is possible to choose a way of labeling each cell with one of those elements in such a way that no row or column repeats a symbol.

The Dinitz conjecture is closely related to graph theory, in which it can be succinctly stated: the list chromatic index of the complete bipartite graph K_{n,n} equals n for every natural number n. In fact, Fred Galvin proved the Dinitz conjecture as a special case of his theorem stating that the list chromatic index of any bipartite multigraph is equal to its chromatic index. Moreover, it is also a special case of the edge list coloring conjecture saying that the same holds not only for bipartite graphs, but also for any loopless multigraph.
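The smallest nontrivial case, n = 2, can be spot-checked by brute force. The sketch below is a toy illustration (not part of Galvin's proof): it only examines lists drawn from a small symbol pool, and the helper names are ad hoc.

```python
# Spot check of the n = 2 case of the Dinitz theorem: for every assignment of
# 2-element lists (from a small pool) to the cells of a 2x2 array, some choice
# avoids repeats in every row and column.

from itertools import combinations, product

symbols = range(4)                       # a small pool; lists have size n = 2
lists = list(combinations(symbols, 2))

def has_valid_labelling(L):              # L[i][j] is the list for cell (i, j)
    for choice in product(*[L[i][j] for i in range(2) for j in range(2)]):
        a, b, c, d = choice              # cells (0,0), (0,1), (1,0), (1,1)
        if a != b and c != d and a != c and b != d:
            return True
    return False

print(all(has_valid_labelling([[l00, l01], [l10, l11]])
          for l00, l01, l10, l11 in product(lists, repeat=4)))   # True
```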

References
F. Galvin (1995). "The list chromatic index of a bipartite multigraph". Journal of Combinatorial Theory. Series B 63 (1): 153158. doi:10.1006/jctb.1995.1011. Zeilberger, D. (1996). "The Method of Undetermined Generalization and Specialization Illustrated with Fred Galvin's Amazing Proof of the Dinitz Conjecture". The American mathematical monthly 103 (3): 233239. arXiv:math/9506215. Chow, T. Y. (1995). "On the Dinitz conjecture and related conjectures" [1]. Discrete Math 145: 145173. Retrieved 2009-04-15.

External links
Weisstein, Eric W., "Dinitz Problem [2]" from MathWorld.

References
[1] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.72.7005 [2] http://mathworld.wolfram.com/DinitzProblem.html



Discrepancy theory
In mathematics, discrepancy theory describes the deviation of a situation from the state one would like it to be. It is also called theory of irregularities of distribution. This refers to the theme of classical discrepancy theory, namely distributing points in some space such that they are evenly distributed with respect to some (mostly geometrically defined) subsets. The discrepancy (irregularity) measures how far a given distribution deviates from an ideal one. Discrepancy theory can be described as the study of inevitable irregularities of distributions, in measure-theoretic and combinatorial settings. Just as Ramsey theory elucidates the impossibility of total disorder, discrepancy theory studies the deviations from total uniformity.

History
The 1916 paper of Weyl on the uniform distribution of sequences in the unit interval
The theorem of van Aardenne-Ehrenfest

Classic theorems
Axis-parallel rectangles in the plane (Roth, Schmidt)
Discrepancy of half-planes (Alexander, Matoušek)
Arithmetic progressions (Roth, Sárközy, Beck, Matoušek & Spencer)
Beck-Fiala theorem
Six Standard Deviations Suffice (Spencer)[1]

Major open problems


Axis-parallel rectangles in dimensions three and higher (Folklore)
Komlós conjecture
The three permutations problem (Beck) - disproved by Newman and Nikolov.[2]
Erdős discrepancy problem - Homogeneous arithmetic progressions (Erdős, $500)

Applications
Numerical Integration: Monte Carlo methods in high dimensions.
Computational Geometry: Divide and conquer algorithms.
Image Processing: Halftoning

References
[1] Joel Spencer (June 1985). "Six Standard Deviations Suffice". Transactions of the American Mathematical Society (Transactions of the American Mathematical Society, Vol. 289, No. 2) 289 (2): 679-706. doi:10.2307/2000258. JSTOR 2000258.
[2] http://front.math.ucdavis.edu/1104.2922

Further reading
Beck, Jzsef; Chen, William W. L. (1987). Irregularities of Distribution. New York: Cambridge University Press. ISBN0521307929. Chazelle, Bernard (2000). The Discrepancy Method: Randomness and Complexity. New York: Cambridge University Press. ISBN0521770939. Matousek, Jiri (1999). Geometric Discrepancy: An Illustrated Guide. Algorithms and combinatorics. 18. Berlin: Springer. ISBN354065528X.



Discrete Morse theory


Discrete Morse theory is a combinatorial adaptation of Morse theory defined on finite CW complexes.

Notation regarding CW complexes


Let X be a CW complex, whose set of cells we also denote by X. Define the incidence function κ : X × X → Z in the following way: given two cells σ and τ in X, let κ(σ, τ) equal the degree of the attaching map from the boundary of σ to τ. The boundary operator ∂ on X is then defined by

∂(σ) = Σ_{τ ∈ X} κ(σ, τ) τ.

It is a defining property of boundary operators that ∂ ∘ ∂ = 0.

Discrete Morse functions


A real-valued function f : X → R is a discrete Morse function if it satisfies the following two properties:
1. For any cell σ, the number of cells τ in the boundary of σ which satisfy f(σ) ≤ f(τ) is at most one.
2. For any cell σ, the number of cells τ containing σ in their boundary which satisfy f(σ) ≥ f(τ) is at most one.

It can be shown[1] that both conditions cannot hold simultaneously for a fixed cell σ, provided that X is a regular CW complex. In this case, each cell σ can be paired with at most one exceptional cell: either a boundary cell with larger f value, or a co-boundary cell with smaller f value. The cells which have no pairs, i.e., whose function values are strictly higher than their boundary cells and strictly lower than their co-boundary cells, are called critical cells. Thus, a discrete Morse function partitions the CW complex into three distinct cell collections, X = A ∪ K ∪ Q, where:
1. A denotes the critical cells, which are unpaired,
2. K denotes cells which are paired with boundary cells, and
3. Q denotes cells which are paired with co-boundary cells.

By construction, there is a bijection of sets between the k-dimensional cells in K and the (k − 1)-dimensional cells in Q, which can be denoted by p^k : K^k → Q^(k−1) for each natural number k. It is an additional technical requirement that for each K ∈ K^k, the degree of the attaching map from the boundary of K to its paired cell p^k(K) ∈ Q is a unit in the underlying ring of X. For instance, over the integers Z, the only allowed values are ±1. This technical requirement is guaranteed when one assumes that X is a regular CW complex over Z.

The fundamental result of discrete Morse theory establishes that the CW complex X is isomorphic on the level of homology to a new complex A consisting of only the critical cells. The paired cells in K and Q describe gradient paths between adjacent critical cells which can be used to obtain the boundary operator on A. Some details of this construction are provided in the next section.
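As a toy illustration of the two defining conditions (an assumption-laden sketch, not from the article): take a circle built as a CW complex from three vertices and three edges, and an ad hoc function f. The code checks the conditions and lists the critical cells.

```python
# Check the discrete Morse conditions on a circle with cells v0, v1, v2 and
# edges e01, e12, e02; f is a hypothetical example function.

cells = {"v0": 0, "v1": 0, "v2": 0, "e01": 1, "e12": 1, "e02": 1}   # dimensions
boundary = {"e01": {"v0", "v1"}, "e12": {"v1", "v2"}, "e02": {"v0", "v2"},
            "v0": set(), "v1": set(), "v2": set()}
f = {"v0": 0, "v1": 2, "v2": 4, "e01": 1, "e12": 3, "e02": 5}

def coboundary(c):
    return {s for s in cells if c in boundary[s]}

def is_discrete_morse(f):
    for s in cells:
        if sum(f[t] >= f[s] for t in boundary[s]) > 1:
            return False
        if sum(f[t] <= f[s] for t in coboundary(s)) > 1:
            return False
    return True

critical = [s for s in cells
            if all(f[t] < f[s] for t in boundary[s])
            and all(f[t] > f[s] for t in coboundary(s))]

print(is_discrete_morse(f))   # True
print(sorted(critical))       # ['e02', 'v0']: one critical cell per dimension
```

The two critical cells match the homology of the circle, as the fundamental result above predicts.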

The Morse complex


A gradient path is a sequence of cell pairs

ρ = (Q_1, K_1, Q_2, K_2, ..., Q_M, K_M)

such that Q_m is the cell paired with K_m and κ(K_m, Q_{m+1}) ≠ 0 for each m. The index of this gradient path is defined to be the integer

ν(ρ) = ( ∏_{m=1}^{M−1} −κ(K_m, Q_{m+1}) ) / ( ∏_{m=1}^{M} κ(K_m, Q_m) ).

The division here makes sense because the incidence between paired cells must be ±1. Note that by construction, the values of the discrete Morse function f must decrease across ρ. The path ρ is said to connect two critical cells A, A′ ∈ A if κ(A, Q_1) ≠ 0 ≠ κ(K_M, A′). This relationship may be expressed as A →_ρ A′. The multiplicity of this connection is defined to be the integer μ(ρ) = κ(A, Q_1) · ν(ρ) · κ(K_M, A′). The boundary operator Δ on the critical cells A is defined by

Δ(A) = Σ_{A′} ( Σ_{ρ : A →_ρ A′} μ(ρ) ) A′,

where the inner sum is taken over all gradient path connections from A to A′.

References
[1] Forman, Robin: Morse Theory for Cell Complexes (https://drona.csa.iisc.ernet.in/~vijayn/courses/TopoForVis/papers/FormanDiscreteMorseTheory.pdf), Lemma 2.5

Robin Forman (2002) A User's Guide to Discrete Morse Theory (http://www.emis.de/journals/SLC/wpapers/s48forman.pdf), Séminaire Lotharingien de Combinatoire 48
Dmitry Kozlov (2007). Combinatorial Algebraic Topology. Springer. ISBN 978-3540719618.
Jakob Jonsson (2007). Simplicial Complexes of Graphs. Springer. ISBN 978-3540758587.
Peter Orlik, Volkmar Welker (2007). Algebraic Combinatorics: Lectures at a Summer School In Nordfjordeid. Springer. ISBN 978-3540683759.

Disjunct matrix
Disjunct and separable matrices play a pivotal role in the mathematical area of non-adaptive group testing. This area investigates efficient designs and procedures to identify 'needles in haystacks' by conducting tests on groups of items instead of on each item alone. The main concept is that if there are very few special items (needles) and the groups are constructed according to certain combinatorial guidelines, then one can test the groups and find all the needles. This can reduce the cost and the labor associated with large-scale experiments.

The grouping pattern can be represented by a binary matrix, where each column represents an item and each row represents a pool. The symbol '1' denotes participation in the pool and '0' absence from a pool. The d-disjunctness and the d-separability of the matrix describe sufficient conditions to identify d special items.

In a matrix that is d-separable, the Boolean sum of every d columns is unique. In a matrix that is d-disjunct, the Boolean sum of every d columns does not contain any other column in the matrix. Theoretically, for the same number of columns (items), one can construct d-separable matrices with fewer rows (tests) than d-disjunct ones. However, designs that are based on d-separability are less applicable, since the decoding time to identify the special items is exponential. In contrast, the decoding time for d-disjunct matrices is polynomial.

d-separable
Definition: A t × n matrix M is d-separable if and only if for every two distinct sets S, T ⊆ [n] with |S| ≤ d and |T| ≤ d, the unions of the corresponding columns differ: ∪_{j∈S} M_j ≠ ∪_{j∈T} M_j, where M_j denotes the j-th column of M and the union is taken componentwise (as a Boolean sum).

Decoding algorithm
First we will describe another way to look at the problem of group testing and how to decode it.

Group testing: Given a t × n matrix M and the result vector r = M ⊙ x for some unknown x ∈ {0,1}^n with at most d ones, output x. Here M ⊙ x denotes Boolean matrix-vector multiplication: r_i = ⋁_{j=1}^{n} (M_{i,j} ∧ x_j).

Define M_j to be the j-th column of M, so that

r = M ⊙ x = ∪_{j : x_j = 1} M_j,

where the union is the componentwise logical or. This formalizes the relation between r and the columns of M in a way more suitable to the thinking of d-separable and d-disjunct matrices: x can be recovered from r exactly when the union of every set of at most d columns is unique, i.e. when M is d-separable. The algorithm to decode a d-separable matrix is as follows. Given a t × n d-separable matrix M and the result vector r:
1. For each set S ⊆ [n] with |S| ≤ d, check whether ∪_{j∈S} M_j = r; if so, output the x supported on S.

This algorithm runs in time n^{O(d)}.

d-disjunct
In the literature, disjunct matrices are also called superimposed codes and d-cover-free families.

Definition: A t × n matrix M is d-disjunct if for every column M_j and every set S ⊆ [n] of at most d other columns (j ∉ S), there exists a row i such that M_{i,j} = 1 but M_{i,k} = 0 for all k ∈ S. Viewing the columns as sets of rows, M is d-disjunct if and only if no column is contained in the union of any d other columns.

Claim: If M is d-disjunct then M is d-separable.

Proof (by contradiction): Let M be a t × n d-disjunct matrix. Assume for contradiction that M is not d-separable. Then there exist distinct sets S, T ⊆ [n] with |S| ≤ d and |T| ≤ d such that ∪_{j∈S} M_j = ∪_{j∈T} M_j. Without loss of generality there is some k ∈ T with k ∉ S. This implies that M_k ⊆ ∪_{j∈T} M_j = ∪_{j∈S} M_j, so M_k is contained in the union of at most d other columns. This contradicts the fact that M is d-disjunct. Therefore M is d-separable.

Decoding algorithm
The algorithm for d-separable matrices was still polynomial of degree d in n. The following gives a nicer algorithm for d-disjunct matrices, whose running time, given our bounds for t, will be a multiple of nt instead of n raised to the power of d. The algorithm is given in the proof of the following lemma.

Lemma 1: There exists an O(nt)-time decoding algorithm for any d-disjunct t × n matrix M: given r = M ⊙ x for some x with at most d ones, it recovers x.

Observation 1: For any j with x_j = 1 it holds that M_j ⊆ r. This is the case because r is generated by taking the logical or of the columns M_k with x_k = 1.

Observation 2: For any j with x_j = 0, since M is d-disjunct and the set S = {k : x_k = 1} has size at most d, the column M_j is not contained in ∪_{k∈S} M_k = r; hence there exists some row i with M_{i,j} = 1 but r_i = 0. The opposite is also true: if x_j = 1 then, by Observation 1, no such row exists.

Proof of Lemma 1: Given r as input, use the following algorithm:
1. For each j, set x̂_j = 1.
2. For each row i with r_i = 0 and each column j with M_{i,j} = 1, set x̂_j = 0.

By Observation 2, any position j where x_j = 0 will be set to 0 by step 2 of the algorithm. By Observation 1, if x_j is supposed to be 1 then there is no row i with M_{i,j} = 1 and r_i = 0, so step 2 will never assign x̂_j the value 0, leaving it as a 1. Therefore x̂ = x. Each entry of M is examined at most once, so this takes O(nt) time overall.
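The decoder of Lemma 1 is short enough to sketch directly. The Python sketch below uses ad hoc names (not from the article) and assumes the tests are given as a 0/1 matrix M and result vector r.

```python
# Naive O(nt) decoder: eliminate every item that appears in a negative pool;
# for a d-disjunct design the survivors are exactly the positive items.

def decode(M, r):
    """M: t x n 0/1 test matrix, r: length-t 0/1 result vector."""
    t, n = len(M), len(M[0])
    x_hat = [1] * n
    for i in range(t):
        if r[i] == 0:                      # a negative pool
            for j in range(n):
                if M[i][j] == 1:           # every item in it is negative
                    x_hat[j] = 0
    return x_hat

def run_tests(M, x):
    return [int(any(Mi[j] and x[j] for j in range(len(x)))) for Mi in M]

# Tiny example: the identity matrix is trivially d-disjunct for every d < n.
M = [[1 if i == j else 0 for j in range(5)] for i in range(5)]
x = [0, 1, 0, 0, 1]
print(decode(M, run_tests(M, x)) == x)     # True
```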



Upper bounds for non-adaptive group testing


The results for these upper bounds rely mostly on the properties of d-disjunct matrices. Not only are the upper bounds nice, but from Lemma 1 we know that there is also an efficient decoding algorithm for matrices achieving these bounds. First the following lemma will be proved, since it is relied upon for both constructions.

Lemma 2: Let M be a t × n matrix such that
1. every column of M contains at least w_min ones, and
2. any two distinct columns of M share at most a_max ones,
for some integers w_min and a_max with w_min ≥ d·a_max + 1. Then M is d-disjunct.

Note: these conditions are stronger than requiring them only for some subset of the columns; they apply to every column and to every pair of columns in the matrix. Therefore no matter what column is chosen in the matrix, that column will contain at least w_min 1's, and the total number of 1's shared by any two columns is at most a_max.

Proof of Lemma 2: Fix an arbitrary column M_j and a set S of d other columns. Say that a column M_k with k ∈ S matches M_j in a row if it has a 1 in the same row position as M_j. Each column in S matches M_j in at most a_max rows, so the columns of S together match M_j in at most d·a_max ≤ w_min − 1 rows, which is a fewer number of rows than the number of ones in M_j. Therefore there must be a row with all 0s in the columns of S but a 1 in M_j, so M_j is not contained in their union. Hence M is d-disjunct.

We will now generate constructions for the bounds.

Randomized construction

This first construction uses a probabilistic argument, in particular the Chernoff bound, to show that the wanted property holds; it gives t(d, n) = O(d² log n). The following theorem gives the result needed.

Theorem 1: There exists a d-disjunct t × n matrix with t = O(d² log n) rows.

Proof of Theorem 1: Begin by building a random t × n matrix M (where t will be picked later) by choosing each entry independently to be 1 with probability 1/(d + 1) and 0 otherwise. It will be shown that, with positive probability, M satisfies the hypotheses of Lemma 2 with w_min/a_max > d, and is therefore d-disjunct. First fix a column M_j; its expected number of 1's is t/(d + 1). Using the Chernoff bound, the probability that M_j has fewer than roughly half this many 1's is at most 2^{−Ω(t/d)}; taking the union bound over all n columns, with high probability every column has at least w_min = Ω(t/d) 1's. Now fix a pair of distinct columns. The expected number of rows where both have a 1 is t/(d + 1)². Using the Chernoff bound on this quantity, the probability that the pair shares more than roughly twice this many 1's is at most 2^{−Ω(t/d²)}; by the union bound over all pairs of columns, with high probability every pair shares at most a_max = O(t/d²) 1's. This gives w_min/a_max = Ω(d), so for a suitable choice of constants M is d-disjunct with positive probability once t = c·d² log n for a large enough constant c; note that by changing c the failure probability can be made arbitrarily small. Note that in this proof t = O(d² log n), thus giving the upper bound t(d, n) = O(d² log n).

Strongly explicit construction

It is possible to prove a bound of t = O(d² log² n) using a strongly explicit code. Although this bound is worse by roughly a log n factor, it is preferable because it produces a strongly explicit construction instead of a randomized one.

Theorem 2: There exists a strongly explicit d-disjunct matrix with t = O(d² log² n) rows.

This proof will use the properties of concatenated codes along with the properties of disjunct matrices to construct a code that will satisfy the bound we are after. The construction is due to Kautz and Singleton (1964) and uses code concatenation.

Proof of Theorem 2: Let C_out be a [q, k]_q Reed–Solomon code, whose codewords are the evaluations of all polynomials of degree less than k over a field of size q. Let C_in be the code that maps a symbol i ∈ {1, …, q} to the column vector e_i ∈ {0, 1}^q with a 1 in the i-th position and 0 everywhere else. Form the concatenated code C = C_out ∘ C_in and let M be the t × n matrix whose columns are the codewords of C, so that n = q^k and t = q². Divide the rows of M into q sets of size q, one set for each position of the outer code, where within each set the row index records the symbol value. By the definition of C_in, each set of q entries in a column contains exactly one nonzero entry, so every column has exactly q nonzero entries and w_min = q. Two distinct Reed–Solomon codewords agree in at most k − 1 positions, so two distinct columns of M share at most a_max = k − 1 ones. By Lemma 2, M is d-disjunct whenever q ≥ d(k − 1) + 1. Since n = q^k we have k = log n / log q, so choosing q = O(d log n) makes the condition hold, which gives t = q² = O(d² log² n).
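To make the construction concrete, here is a small sketch of the Kautz–Singleton matrix for a prime field size q (my own illustration; the function name kautz_singleton and the choice of parameters are mine, not from the article). It builds the concatenated code directly and checks the two hypotheses of Lemma 2.

```python
# Kautz-Singleton construction sketch for prime q: outer code = Reed-Solomon
# evaluations of all degree < k polynomials over GF(q); inner code = unit vectors.
from itertools import product

def kautz_singleton(q, k):
    """Return the columns of the t x n 0/1 matrix (t = q*q, n = q**k)."""
    columns = []
    for msg in product(range(q), repeat=k):            # one codeword per message
        evals = [sum(c * pow(x, i, q) for i, c in enumerate(msg)) % q for x in range(q)]
        col = []
        for s in evals:                                 # replace each symbol by a unit vector
            col.extend(1 if r == s else 0 for r in range(q))
        columns.append(col)
    return columns

cols = kautz_singleton(q=5, k=2)                        # n = 25 columns, t = 25 rows
# Every column has exactly q ones; two distinct columns share at most k-1 = 1 ones,
# so by Lemma 2 the matrix is d-disjunct for d = (q - 1) // (k - 1) = 4.
assert all(sum(c) == 5 for c in cols)
assert max(sum(a & b for a, b in zip(c1, c2))
           for i, c1 in enumerate(cols) for c2 in cols[i + 1:]) <= 1
```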

Thus we have a strongly explicit construction for a code that can be used to form a group testing matrix, and so t(d, n) = O(d² log² n). For non-adaptive group testing we have therefore shown that (i) t(d, n) = O(d² log² n) with a strongly explicit construction, and (ii) t(d, n) = O(d² log n) with a randomized construction. In more recent work[1], Porat and Rothschild presented an explicit construction method (i.e. deterministic polynomial time, but not strongly explicit) achieving t = O(d² log n); it is not shown here. There is also a lower bound of t = Ω(d² log n / log d) for d-disjunct matrices,[2] [3] [4] which is not shown here either.

Notes
[1] Porat, E., & Rothschild, A. (2008). Explicit Non-adaptive Combinatorial Group Testing Schemes. In Proceedings of the 35th International Colloquium on Automata, Languages and Programming (ICALP) (pp. 748–759). [2] D'yachkov, A. G., & Rykov, V. V. (1982). Bounds on the length of disjunctive codes. Problemy Peredachi Informatsii [Problems of Information Transmission], 18(3), 7–13. [3] D'yachkov, A. G., Rashad, A. M., & Rykov, V. V. (1989). Superimposed distance codes. Problemy Upravlenija i Teorii Informacii [Problems of Control and Information Theory], 18(4), 237–250. [4] Zoltán Füredi, On r-Cover-free Families, Journal of Combinatorial Theory, Series A, Volume 73, Issue 1, January 1996, Pages 172–173, ISSN 0097-3165, DOI: 10.1006/jcta.1996.0012. (http://www.sciencedirect.com/science/article/B6WHS-45NJMVF-39/2/172ef8c5c4aee2d85d1ddd56b107eef3)

References
1. Atri Rudra's course on Error Correcting Codes: Combinatorics, Algorithms, and Applications (Spring 2010), Lectures 28 (http://www.cse.buffalo.edu/~atri/courses/coding-theory/spr10/lectures/lect28.pdf), 29 (http:// www.cse.buffalo.edu/~atri/courses/coding-theory/spr10/lectures/lect29.pdf).


Dividing a circle into areas


In geometry, the problem of dividing a circle into areas by means of an inscribed polygon with n sides, in such a way as to maximise the number of areas created by the edges and diagonals, has a solution by an inductive method.

Lemma
If we already have n points on the circle and add one more point, we draw n lines from the new point to previously existing points. Two cases are possible. In the first case (a), the new line passes through a point where two or more old lines (between previously existing points) cross. In the second case (b), the new line crosses each of the old lines in a different point. It will be useful to know the following fact. Lemma. We can choose the new point A so that case b occurs for each of the new lines. Proof. Notice that, for the case a, three points must be on one line: the new point A, the old point O to which we draw the line, and the point I where two of the old lines intersect. Notice that there are n old points O, and hence finitely many points I where two of the old lines intersect. For each O and I, the line OI crosses the circle in one point other than O. Since the circle has infinitely many points, it has a point A which will be on none of the lines OI. Then, for this point A and all of the old points O, case b will be true. This lemma means that, if there are k lines crossing AO, then each of them crosses AO at a different point and k+1 new areas are created by the line AO.


Solution
Inductive method
The lemma establishes an important property for solving the problem. By employing inductive proof, one can provide a formula for f(n) in terms of f(n 1). In the figure you can see the dark lines connecting points 1 through 4 dividing the circle into 8 total regions (i.e., f(4) = 8). This figure illustrates the inductive step from n = 4 to n = 5 with the dashed lines. When the fifth point is added (i.e., when computing f(5) using f(4)), this results in four (n 1) new lines (the dashed lines in the diagram) being added, numbered 1 through 4, one for each point that they connect to. The number of new regions introduced by the fifth point can therefore be determined by considering the number of regions added by each of the 4 lines. Set i to count the lines we are adding. Each new line can cross a number of existing lines, depending on which point it is to (the value of i). The new lines will never cross each other, except at the new point. The number of lines that each new line intersects can be determined by considering the number of points on the "left" of the line and the number of points on the "right" of the line. Since all existing points already have lines between them, the number of points on the left multiplied by the number of points on the right is the number of lines that will be crossing the new line. For the line to point i, there are ni1 points on the left and i 1 points on the right, so a total of (n i 1) (i 1) lines must be crossed. In this example, the lines to i = 1 and i = 4 each cross zero lines, while the lines to i = 2 and i = 3 each cross two lines (there are two points on one side and one on the other). So the recurrence can be expressed as

f(n) = f(n − 1) + Σ_{i=1}^{n−1} [ (n − 1 − i)·(i − 1) + 1 ],
which can be easily reduced to
f(n) = f(n − 1) + (n − 1) + Σ_{i=1}^{n−1} (n − 1 − i)·(i − 1).

This can be further reduced, using the formula for Σ i², to
f(n) = f(n − 1) + (n − 1) + (n − 1)(n − 2)(n − 3)/6.

It follows that the solution will be a quartic polynomial in n. Its actual coefficients can be found by using the five values of f(n) given above.


Combinatorics and topology method


The lemma asserts that the number of regions is maximal if all "inner" intersections of chords are simple (exactly two chords pass through each point of intersection in the interior). This will be the case if the points on the circle are chosen "in general position". Under this assumption of "generic intersection", the number of regions can also be determined in a non-inductive way, using the formula for the Euler characteristic of a connected planar graph (viewed here as a graph embedded in the 2-sphere S²). A planar graph determines a cell decomposition of the plane with F faces (2-dimensional cells), E edges (1-dimensional cells) and V vertices (0-dimensional cells). As the graph is connected, the Euler relation for the 2-dimensional sphere S²,
V − E + F = 2,
holds. View the diagram (the circle together with all the chords) above as a planar graph. If the general formulas for V and E can both be found, the formula for F can also be derived, which will solve the problem. Its vertices include the n points on the circle, referred to as the exterior vertices, as well as the interior vertices, the intersections of distinct chords in the interior of the circle. The "generic intersection" assumption made above guarantees that each interior vertex is the intersection of no more than two chords. Thus the main task in determining V is finding the number of interior vertices. As a consequence of the lemma, any two intersecting chords will uniquely determine an interior vertex. These chords are in turn uniquely determined by the four corresponding endpoints of the chords, which are all exterior vertices. Any four exterior vertices determine a cyclic quadrilateral, and all cyclic quadrilaterals are convex quadrilaterals, so each set of four exterior vertices has exactly one point of intersection formed by their diagonals (chords). Further, by definition all interior vertices are formed by intersecting chords. Therefore each interior vertex is uniquely determined by a combination of four exterior vertices, where the number of interior vertices is given by
C(n, 4),
and so
V = n + C(n, 4).

The edges include the n circular arcs connecting pairs of adjacent exterior vertices, as well as the chordal line segments (described below) created inside the circle by the collection of chords. Since there are two groups of vertices: exterior and interior, the chordal line segments can be further categorized into three groups: 1. Edges directly (not cut by other chords) connecting two exterior vertices. These are chords between adjacent exterior vertices, and form the perimeter of the polygon. There are n such edges. 2. Edges connecting two interior vertices. 3. Edges connecting an interior and exterior vertex. To find the number of edges in groups 2 and 3, consider each interior vertex, which is connected to exactly four edges. This yields

4·C(n, 4) edges. Since each edge is defined by two endpoint vertices, and we only enumerated the interior vertices, group 2 edges are counted twice while group 3 edges are counted once only.

Notice that every chord that is cut by another (i.e., chords not in group 1) must contain two group 3 edges, its beginning and ending chordal segments. As chords are uniquely determined by two exterior vertices, there are altogether
2·(C(n, 2) − n)
group 3 edges. This is twice the total number of chords that are not themselves members of group 1. The sum of these results divided by two gives the combined number of edges in groups 2 and 3. Adding the n edges from group 1, and the n circular arc edges, brings the total to
E = 2·C(n, 4) + C(n, 2) + n.

Substituting V and E into the Euler relation solved for F, F = E − V + 2, one then obtains
F = C(n, 4) + C(n, 2) + 2.
Since one of these faces is the exterior of the circle, the number of regions rG inside the circle is F − 1, or
rG = C(n, 4) + C(n, 2) + 1,
which is the same quartic polynomial obtained by using the inductive method.
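As a quick sanity check, the two derivations can be compared numerically. The following short sketch (my own illustration, with hypothetical function names) evaluates the inductive recurrence and the closed form C(n, 4) + C(n, 2) + 1 and confirms they agree.

```python
# Compare the inductive recurrence with the Euler-characteristic closed form.
from math import comb

def regions_recurrence(n):
    f = 1                                            # f(1) = 1: one point, no chords
    for m in range(2, n + 1):
        # adding the m-th point: the chord to point i crosses (m-1-i)(i-1) old chords
        f += sum((m - 1 - i) * (i - 1) + 1 for i in range(1, m))
    return f

def regions_closed_form(n):
    return comb(n, 4) + comb(n, 2) + 1

assert all(regions_recurrence(n) == regions_closed_form(n) for n in range(1, 12))
print([regions_closed_form(n) for n in range(1, 7)])   # 1, 2, 4, 8, 16, 31
```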

References
Conway, J. H. and Guy, R. K. "How Many Regions." In The Book of Numbers. New York: Springer-Verlag, pp.7679, 1996. Weisstein, Eric W., "Circle Division by Chords [1]" from MathWorld. http://www.arbelos.co.uk/Papers/Chords-regions.pdf

References
[1] http://mathworld.wolfram.com/CircleDivisionbyChords.html


Dobinski's formula
In combinatorial mathematics, Dobiński's formula[1] states that the number of partitions of a set of n members is
(1/e) · Σ_{k=0}^{∞} k^n / k!.
This number has come to be called the nth Bell number Bn, after Eric Temple Bell. The above formula can be seen as a particular case, for x = 0, of the more general relation:
(1/e) · Σ_{k=x}^{∞} k^n / (k − x)! = Σ_{k=0}^{n} C(n, k) · B_k · x^{n−k}.

Probabilistic content
Those familiar with probability theory will recognize the expression given by Dobinski's formula as the nth moment of the Poisson distribution with expected value 1. Today, Dobinski's formula is sometimes stated by saying the number of partitions of a set of size n equals the nth moment of that distribution.

A proof
The proof given here is an adaptation to probabilistic language of the proof given by Rota.[2] Combinatorialists use the Pochhammer symbol (x)n to denote the falling factorial x(x − 1)⋯(x − n + 1) (whereas, in the theory of special functions, the same notation denotes the rising factorial). If x and n are nonnegative integers, 0 ≤ n ≤ x, then (x)n is the number of one-to-one functions that map a size-n set into a size-x set. Let ƒ be any function from a size-n set A into a size-x set B. For any u ∈ B, let ƒ⁻¹(u) = {v ∈ A : ƒ(v) = u}. Then {ƒ⁻¹(u) : u ∈ B} is a partition of A, coming from the equivalence relation of "being in the same fiber". This equivalence relation is called the "kernel" of the function ƒ. Any function from A into B factors into one function that maps a member of A to that part of the kernel to which it belongs, and another function, which is necessarily one-to-one, that maps the kernel into B. The first of these two factors is completely determined by the partition π that is the kernel. The number of one-to-one functions from π into B is (x)|π|, where |π| is the number of parts in the partition π. Thus the total number of functions from a size-n set A into a size-x set B is

Σ_π (x)_{|π|},

the index π running through the set of all partitions of A. On the other hand, the number of functions from A into B is clearly x^n. Therefore we have

x^n = Σ_π (x)_{|π|}.

If X is a Poisson-distributed random variable with expected value 1, then we get that the nth moment of this probability distribution is

E(X^n) = Σ_π E((X)_{|π|}).

But all of the factorial moments E((X)k) of this probability distribution are equal to 1. Therefore

E(X^n) = Σ_π 1,

and this is just the number of partitions of the set A. Q.E.D.
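The formula is easy to test numerically. The sketch below (my own illustration; the helper names are mine) truncates Dobiński's series and compares it against the exact Bell numbers computed from the standard recurrence B_{m+1} = Σ_k C(m, k) B_k.

```python
# Bell numbers from a truncated Dobinski sum vs. the binomial recurrence.
from math import e, factorial, comb

def bell_dobinski(n, terms=100):
    """Approximate B_n = (1/e) * sum_{k>=0} k**n / k! by truncating the series."""
    return round(sum(k**n / factorial(k) for k in range(terms)) / e)

def bell_recurrence(n):
    b = [1]
    for m in range(n):
        b.append(sum(comb(m, k) * b[k] for k in range(m + 1)))
    return b[n]

assert all(bell_dobinski(n) == bell_recurrence(n) for n in range(10))
print([bell_recurrence(n) for n in range(8)])   # 1, 1, 2, 5, 15, 52, 203, 877
```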


Notes and references


[1] G. Dobiński, "Summirung der Reihe Σ nᵐ/n! für m = 1, 2, 3, 4, 5, …", Grunert's Archiv, volume 61, 1877, pages 333–336 (Internet Archive: (http://www.archive.org/stream/archivdermathem88unkngoog#page/n346)). [2] Gian-Carlo Rota, "The Number of Partitions of a Set" (http://www.jstor.org/stable/2312585), American Mathematical Monthly, volume 71, number 5, May 1964, pages 498–504.

Domino tiling
A domino tiling of a region in the Euclidean plane is a tessellation of the region by dominos, shapes formed by the union of two unit squares meeting edge-to-edge. Equivalently, it is a matching in the grid graph formed by placing a vertex at the center of each square of the region and connecting two vertices when they correspond to adjacent squares.

Height functions
For some classes of tilings on a regular grid in two dimensions, it is possible to define a height function associating an integer to the nodes of the grid. For instance, draw a chessboard, fix a node A₀ with height 0; then for any node there is a path from A₀ to it. On this path define the height of each node (i.e. corners of the squares) to be the height of the previous node plus one if the square on the right of the path is black, and minus one otherwise.
Domino tiling of a square

More details can be found in Kenyon & Okounkov (2005).

Thurston's height condition


William Thurston (1990) describes a test for determining whether a simply-connected region, formed as the union of unit squares in the plane, has a domino tiling. He forms an undirected graph that has as its vertices the points (x,y,z) in the three-dimensional integer lattice, where each such point is connected to four neighbors: if x+y is even, then (x,y,z) is connected to (x+1,y,z+1), (x-1,y,z+1), (x,y+1,z-1), and (x,y-1,z-1), while if x+y is odd, then (x,y,z) is connected to (x+1,y,z-1), (x-1,y,z-1), (x,y+1,z+1), and (x,y-1,z+1). The boundary of the region, viewed as a sequence of integer points in the (x,y) plane, lifts uniquely (once a starting height is chosen) to a path in this three-dimensional graph. A necessary condition for this region to be tileable is that this path must close up to form a simple closed curve in three dimensions, however, this condition is not sufficient. Using more careful analysis of the boundary path, Thurston gave a criterion for tileability of a region that was sufficient as well as necessary.


Counting tilings of regions


The number of ways to cover an m-by-n rectangle with dominoes, calculated independently by Temperley & Fisher (1961) and Kasteleyn (1961), is given by
∏_{j=1}^{⌈m/2⌉} ∏_{k=1}^{⌈n/2⌉} ( 4·cos²(πj/(m + 1)) + 4·cos²(πk/(n + 1)) ).
Domino tiling of an 8×8 square using the minimum number of long-edge-to-long-edge pairs (1 pair in the center).

The sequence of values generated by this formula for squares with m = n = 0, 2, 4, 6, 8, 10, 12, ... is 1, 2, 36, 6728, 12988816, 258584046368, 53060477521960000, ... (sequence A004003 in OEIS). These numbers can be found by writing them as the Pfaffian of an mn × mn antisymmetric matrix whose eigenvalues can be found explicitly. This technique may be applied in many mathematics-related subjects, for example, in the classical, 2-dimensional computation of the dimer-dimer correlator function in statistical mechanics. The number of tilings of a region is very sensitive to boundary conditions, and can change dramatically with apparently insignificant changes in the shape of the region. This is illustrated by the number of tilings of an Aztec diamond of order n, where the number of tilings is 2^{n(n+1)/2}. If this is replaced by the "augmented Aztec diamond" of order n with 3 long rows in the middle rather than 2, the number of tilings drops to the much smaller number D(n,n), a Delannoy number, which has only exponential rather than super-exponential growth in n. For the "reduced Aztec diamond" of order n with only one long middle row, there is only one tiling.
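The product formula above is simple to evaluate directly. Below is a short sketch (my own illustration; the function name is mine) that computes the real-valued product and rounds it to the nearest integer, reproducing the small values quoted in this section.

```python
# Temperley-Fisher / Kasteleyn count of domino tilings of an m x n rectangle.
from math import cos, pi, prod

def domino_tilings(m, n):
    value = prod(4 * cos(pi * j / (m + 1)) ** 2 + 4 * cos(pi * k / (n + 1)) ** 2
                 for j in range(1, (m + 1) // 2 + 1)
                 for k in range(1, (n + 1) // 2 + 1))
    return round(value)

print(domino_tilings(2, 2))   # 2
print(domino_tilings(3, 4))   # 11
print(domino_tilings(8, 8))   # 12988816
```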

An Aztec diamond of order 4, with 1024 domino tilings

References
Bodini, Olivier; Latapy, Matthieu (2003), "Generalized Tilings with Height Functions" [1], Morfismos 7 (1): 47–68, ISSN 1870-6525.

Kasteleyn, P. W. (1961), "The statistics of dimers on a lattice. I. The number of dimer arrangements on a quadratic lattice", Physica 27 (12): 12091225, doi:10.1016/0031-8914(61)90063-5. Kenyon, Richard (2000), "The planar dimer model with boundary: a survey", Directions in mathematical quasicrystals, CRM Monogr. Ser., 13, Providence, R.I.: American Mathematical Society, pp.307328, MR1798998 Kenyon, Richard; Okounkov, Andrei (2005), "What is a dimer?" [2], Notices of the American Mathematical Society 52 (3): 342343, ISSN0002-9920.

Propp, James (2005), "Lambda-determinants and domino-tilings", Advances in Applied Mathematics 34 (4): 871–879, arXiv:math.CO/0406301, doi:10.1016/j.aam.2004.06.005. Sellers, James A. (2002), "Domino tilings and products of Fibonacci and Pell numbers" [3], Journal of Integer Sequences 5 (Article 02.1.2). Thurston, W. P. (1990), "Conway's tiling groups", American Mathematical Monthly (Mathematical Association of America) 97 (8): 757–773, doi:10.2307/2324578, JSTOR 2324578. Wells, David (1997), The Penguin Dictionary of Curious and Interesting Numbers (revised ed.), London: Penguin, p. 182, ISBN 0-14-026149-4.


References
[1] http://www-rp.lip6.fr/%7Elatapy/Publis/morfismos03.pdf [2] http://www.ams.org/notices/200503/what-is.pdf [3] http://www.emis.de/journals/JIS/VOL5/Sellers/sellers4.pdf

Enumerations of specific permutation classes


In the study of permutation patterns, there has been considerable interest in enumerating specific permutation classes, especially those with relatively few basis elements.

Classes avoiding one pattern of length 3


There are two symmetry classes and a single Wilf class for single permutations of length three.
β | sequence enumerating Avn(β) | OEIS | type of sequence | exact enumeration reference
123, 231 | 1, 2, 5, 14, 42, 132, 429, 1430, ... (Catalan numbers) | A000108 | algebraic (nonrational) g.f. | MacMahon (1915/16), Knuth (1968)

Classes avoiding one pattern of length 4


There are seven symmetry classes and three Wilf classes for single permutations of length four.
β | sequence enumerating Avn(β) | OEIS | type of sequence | exact enumeration reference
1342, 2413 | 1, 2, 6, 23, 103, 512, 2740, 15485, ... | A022558 | algebraic (nonrational) g.f. | Bóna (1997)
1234, 1243, 1432, 2143 | 1, 2, 6, 23, 103, 513, 2761, 15767, ... | A005802 | holonomic (nonalgebraic) g.f. | Gessel (1990)
1324 | 1, 2, 6, 23, 103, 513, 2762, 15793, ... | A061552 | |

No non-recursive formula counting 1324-avoiding permutations is known. A recursive formula was given by Marinov & Radoičić (2003), and Albert et al. (2006) have provided lower bounds for the growth of this class.


Classes avoiding two patterns of length 3


There are five symmetry classes and three Wilf classes, all of which were enumerated in Simion & Schmidt (1985).
B | sequence enumerating Avn(B) | OEIS | type of sequence
123, 321 | 1, 2, 4, 4, 0, 0, 0, 0, ... | n/a | finite
123, 231 | 1, 2, 4, 7, 11, 16, 22, 29, ... | A000124 | polynomial
123, 132 / 132, 312 / 231, 312 | 1, 2, 4, 8, 16, 32, 64, 128, ... | A000079 | rational g.f.

Classes avoiding one pattern of length 3 and one of length 4


There are eighteen symmetry classes and nine Wilf classes, all of which have been enumerated. For these results, see Atkinson (1999) or West (1996).
B | sequence enumerating Avn(B) | OEIS | type of sequence
321, 1234 | 1, 2, 5, 13, 25, 25, 0, 0, ... | n/a | finite
321, 2134 | 1, 2, 5, 13, 30, 61, 112, 190, ... | A116699 | polynomial
132, 4321 | 1, 2, 5, 13, 31, 66, 127, 225, ... | A116701 | polynomial
321, 1324 | 1, 2, 5, 13, 32, 72, 148, 281, ... | A179257 | polynomial
321, 1342 | 1, 2, 5, 13, 32, 74, 163, 347, ... | A116702 | rational g.f.
321, 2143 | 1, 2, 5, 13, 33, 80, 185, 411, ... | A088921 | rational g.f.
132, 4312 | 1, 2, 5, 13, 33, 81, 193, 449, ... | A005183 | rational g.f.
132, 4231 / 132, 3214 | 1, 2, 5, 13, 33, 82, 202, 497, ... | A116703 | rational g.f.
321, 2341 / 321, 3412 / 321, 3142 / 132, 1234 / 132, 4213 / 132, 4123 / 132, 3124 / 132, 2134 / 132, 3412 | 1, 2, 5, 13, 34, 89, 233, 610, ... | A001519 | rational g.f., alternate Fibonacci numbers

Classes avoiding two patterns of length 4


There are 56 symmetry classes and 38 Wilf equivalence classes, of which 18 have been enumerated.


sequence enumerating Avn(B)

OEIS n/a

type of sequence finite

exact enumeration reference ErdsSzekeres theorem Kremer & Shiu (2003) Kremer & Shiu (2003) Kremer & Shiu (2003) Vatter (preprint)

4321, 1234 1, 2, 6, 22, 86, 306, 882, 1764, ...

4312, 1234 1, 2, 6, 22, 86, 321, 1085, 3266, ... A116705 polynomial 4321, 3124 1, 2, 6, 22, 86, 330, 1198, 4087, ... A116708 rational g.f. 4312, 2134 1, 2, 6, 22, 86, 330, 1206, 4174, ... A116706 rational g.f. 4321, 1324 1, 2, 6, 22, 86, 332, 1217, 4140, ... A165524 polynomial 4321, 2143 1, 2, 6, 22, 86, 333, 1235, 4339, ... A165525 4312, 1324 1, 2, 6, 22, 86, 335, 1266, 4598, ... A165526 4231, 2143 1, 2, 6, 22, 86, 335, 1271, 4680, ... A165527 rational g.f. 4231, 1324 1, 2, 6, 22, 86, 336, 1282, 4758, ... A165528 rational g.f. 4213, 2341 1, 2, 6, 22, 86, 336, 1290, 4870, ... A116709 rational g.f. 4312, 2143 1, 2, 6, 22, 86, 337, 1295, 4854, ... A165529 4213, 1243 1, 2, 6, 22, 86, 337, 1299, 4910, ... A116710 rational g.f. 4321, 3142 1, 2, 6, 22, 86, 338, 1314, 5046, ... A165530 rational g.f. 4213, 1342 1, 2, 6, 22, 86, 338, 1318, 5106, ... A116707 rational g.f. 4312, 2341 1, 2, 6, 22, 86, 338, 1318, 5110, ... A116704 rational g.f.

Albert, Atkinson & Brignall (preprint) Albert, Atkinson & Vatter (2009) Kremer & Shiu (2003)

Kremer & Shiu (2003) Vatter (preprint) Kremer & Shiu (2003) Kremer & Shiu (2003)

3412, 2143 1, 2, 6, 22, 86, 340, 1340, 5254, ... A029759 algebraic (nonrational) g.f. Atkinson (1998) 4321, 4123 1, 2, 6, 22, 86, 342, 1366, 5462, ... A047849 rational g.f. 4321, 3412 4123, 3142 4123, 2143 Kremer & Shiu (2003)

4123, 2341 1, 2, 6, 22, 87, 348, 1374, 5335, ... A165531 algebraic (nonrational) g.f. Atkinson, Sagan & Vatter (preprint) 4231, 3214 1, 2, 6, 22, 87, 352, 1428, 5768, ... A165532 4213, 1432 1, 2, 6, 22, 87, 352, 1434, 5861, ... A165533 4312, 3421 1, 2, 6, 22, 87, 354, 1459, 6056, ... A164651 4213, 2431 4312, 3124 1, 2, 6, 22, 88, 363, 1507, 6241, ... A165534 4231, 3124 1, 2, 6, 22, 88, 363, 1508, 6255, ... A165535 4312, 3214 1, 2, 6, 22, 88, 365, 1540, 6568, ... A165536 4231, 3412 1, 2, 6, 22, 88, 366, 1552, 6652, ... A032351 algebraic (nonrational) g.f. Bna (1998) 4231, 3142 4213, 3241 4213, 3124 4213, 2314 4213, 2143 1, 2, 6, 22, 88, 366, 1556, 6720, ... A165537 4312, 3142 1, 2, 6, 22, 88, 367, 1568, 6810, ... A165538 4213, 3421 1, 2, 6, 22, 88, 367, 1571, 6861, ... A165539 4213, 3412 1, 2, 6, 22, 88, 368, 1584, 6968, ... A109033 algebraic (nonrational) g.f. Le (2005) 4123, 3142 4321, 3214 1, 2, 6, 22, 89, 376, 1611, 6901, ... A165540 4213, 3142 1, 2, 6, 22, 89, 379, 1664, 7460, ... A165541 4231, 4123 1, 2, 6, 22, 89, 380, 1677, 7566, ... A165542 see Le (2005), but g.f. unknown


4321, 4213 1, 2, 6, 22, 89, 380, 1678, 7584, ... A165543 4123, 3412 1, 2, 6, 22, 89, 381, 1696, 7781, ... A165544 4312, 4123 1, 2, 6, 22, 89, 382, 1711, 7922, ... A165545 4321, 4312 1, 2, 6, 22, 90, 394, 1806, 8558, ... A006318 algebraic (nonrational) g.f. Kremer (2000), Kremer (2003) 4312, 4231 Schrder numbers 4312, 4213 4312, 3412 4231, 4213 4213, 4132 4213, 4123 4213, 2413 4213, 3214 3142, 2413 3412, 2413 1, 2, 6, 22, 90, 395, 1823, 8741, ... A165546 4321, 4231 1, 2, 6, 22, 90, 396, 1837, 8864, ... A053617

References
Albert, Michael H.; Elder, Murray; Rechnitzer, Andrew; Westcott, P.; Zabrocki, Mike (2006), "On the Stanley-Wilf limit of 4231-avoiding permutations and a conjecture of Arratia", Advances in Applied Mathematics 36: 96105, doi:10.1016/j.aam.2005.05.007, MR2199982. Albert, Michael H.; Atkinson, M. D.; Brignall, Robert (preprint), The enumeration of permutations avoiding 2143 and 4231 [1]. Albert, Michael H.; Atkinson, M. D.; Vatter, Vincent (2009), "Counting 1324, 4231-avoiding permutations" [2], Electron. J. Combin. 16 (1): Research article 136, 9 pp., MR2577304. Atkinson, M. D. (1998), "Permutations which are the union of an increasing and a decreasing subsequence" [3], Electron. J. Combin. 5: Research article 6, 13 pp., MR1490467. Atkinson, M. D. (1999), "Restricted permutations", Discrete Math. 195: 2738, doi:10.1016/S0012-365X(98)00162-9, MR1663866. Atkinson, M. D.; Sagan, Bruce E.; Vatter, Vincent (preprint), Counting (3+1)-avoiding permutations, arXiv:1102.5568. Bna, Mikls (1997), "Exact enumeration of 1342-avoiding permutations: a close link with labeled trees and planar maps", J. Combin. Theory Ser. A 80: 257272, doi:10.1006/jcta.1997.2800, MR1485138. Bna, Mikls (1998), "The permutation classes equinumerous to the smooth class" [4], Electron. J. Combin. 5: Research article 31, 12 pp., MR1626487. Gessel, Ira M. (1990), "Symmetric functions and P-recursiveness", J. Combin. Theory Ser. A 53: 257285, doi:10.1016/0097-3165(90)90060-A, MR1041448. Knuth, Donald E. (1968), The Art Of Computer Programming Vol. 1, Boston: Addison-Wesley, ISBN0-201-89683-4, MR0286317, OCLC155842391 Kremer, Darla (2000), "Permutations with forbidden subsequences and a generalized Schrder number", Discrete Math. 218: 121130, doi:10.1016/S0012-365X(99)00302-7, MR1754331. Kremer, Darla (2003), "Postscript: Permutations with forbidden subsequences and a generalized Schrder number", Discrete Math. 270: 333334, doi:10.1016/S0012-365X(03)00124-9, MR1997910. Kremer, Darla; Shiu, Wai Chee (2003), "Finite transition matrices for permutations avoiding pairs of length four patterns", Discrete Math. 268: 171183, doi:10.1016/S0012-365X(03)00042-6, MR1983276. Le, Ian (2005), "Wilf classes of pairs of permutations of length 4" [5], Electron. J. Combin. 12: Research article 25, 27 pp., MR2156679. MacMahon, Percy A. (1915/16), Combinatory Analysis, London: Cambridge University Press

Marinov, Darko; Radoičić, Radoš (2003), "Counting 1324-Avoiding Permutations" [6], Electron. J. Combin. 9(2): R13, MR2028282. Simion, Rodica; Schmidt, Frank W. (1985), "Restricted permutations", European J. Combin. 6: 383–406, MR0829358. Vatter, Vincent (preprint), Finding regular insertion encodings for permutation classes [7]. West, Julian (1996), "Generating trees and forbidden subsequences", Discrete Math. 157: 363–374, doi:10.1016/S0012-365X(96)83023-8, MR1417303.


References
[1] [2] [3] [4] [5] [6] [7] http:/ / users. mct. open. ac. uk/ rb8599/ papers/ 2143-4231-enum. pdf http:/ / www. combinatorics. org/ Volume_16/ Abstracts/ v16i1r136. html http:/ / www. combinatorics. org/ Volume_5/ Abstracts/ v5i1r6. html http:/ / www. combinatorics. org/ Volume_5/ Abstracts/ v5i1r31. html http:/ / www. combinatorics. org/ Volume_12/ Abstracts/ v12i1r25. html http:/ / www. combinatorics. org/ Volume_9/ Abstracts/ v9i2r13. html http:/ / www. math. ufl. edu/ ~vatter/ publications/ ins-enc/

Equiangular lines
In geometry, a set of lines in Euclidean space is called equiangular if every pair of lines makes the same angle. Equiangular lines are related to two-graphs. Given a set of equiangular lines, let c be the cosine of the common angle. We assume that the angle is not 90°, since that case is trivial (i.e., not interesting, because the lines are just coordinate axes); thus, c is nonzero. We may move the lines so they all pass through the origin of coordinates. Choose one unit vector in each line. Form the matrix M of inner products. This matrix has 1 on the diagonal and ±c everywhere else, and it is symmetric. Subtracting the identity matrix I and dividing by c, we have a symmetric matrix with zero diagonal and ±1 off the diagonal. This is the adjacency matrix of a two-graph.
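The construction in the previous paragraph is easy to carry out numerically. The following is a small sketch (my own illustration, not from the article): three equiangular lines in the plane at mutual angle 60°, whose Gram matrix yields a ±1 Seidel (two-graph adjacency) matrix after subtracting the identity and dividing by c = cos 60°.

```python
# Build the Seidel matrix of three equiangular lines in the plane.
import numpy as np

angles = [0, 60, 120]                                   # directions of the three lines
units = np.array([[np.cos(np.radians(a)), np.sin(np.radians(a))] for a in angles])

gram = units @ units.T                                  # matrix M of inner products
c = 0.5                                                 # cosine of the common angle
seidel = np.round((gram - np.eye(3)) / c).astype(int)   # zero diagonal, +/-1 elsewhere
print(seidel)
# [[ 0  1 -1]
#  [ 1  0  1]
#  [-1  1  0]]
```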

References
van Lint, J. H., and Seidel, J. J. Equilateral point sets in elliptic geometry. Proc. Koninkl. Ned. Akad. Wetenschap. Ser. A 69 (= Indagationes Mathematicae 28) (1966), 335-348. Brouwer, A.E., Cohen, A.M., and Neumaier, A. Distance-Regular Graphs. Springer-Verlag, Berlin, 1989. Section 3.8. Godsil, Chris; Royle, Gordon (2001), Algebraic Graph Theory, Graduate Texts in Mathematics, 207, New York: Springer-Verlag. (See Chapter 11.)


Erdős conjecture on arithmetic progressions


Erdős' conjecture on arithmetic progressions, often incorrectly referred to as the Erdős–Turán conjecture (see also Erdős–Turán conjecture on additive bases), is a conjecture in additive combinatorics due to Paul Erdős. It states that if the sum of the reciprocals of the members of a set A of positive integers diverges, then A contains arbitrarily long arithmetic progressions. Formally, if
Σ_{n∈A} 1/n = ∞
(i.e. A is a large set) then A contains arithmetic progressions of any given length. If true, the theorem would generalize Szemerédi's theorem. Erdős offered a prize of US$3000 for a proof of this conjecture at the time.[1] The problem is currently worth US$5000.[2] The Green–Tao theorem on arithmetic progressions in the primes is a special case of this conjecture.

Notes
[1] Bollobás, Béla (March 1988). "To Prove and Conjecture: Paul Erdős and His Mathematics". American Mathematical Monthly 105 (3): 233. [2] p. 354, Soifer, Alexander (2008); The Mathematical Coloring Book: Mathematics of Coloring and the Colorful Life of its Creators; New York: Springer. ISBN 978-0387746401

References
P. Erdős: Résultats et problèmes en théorie de nombres (http://www.renyi.hu/~p_erdos/1973-24.pdf), Séminaire Delange-Pisot-Poitou (14e année: 1972/1973), Théorie des nombres, Fasc 2., Exp. No. 24, pp. 7. P. Erdős: Problems in number theory and combinatorics, Proc. Sixth Manitoba Conf. on Num. Math., Congress Numer. XVIII (1977), 35–58. P. Erdős: On the combinatorial problems which I would most like to see solved, Combinatorica, 1 (1981), 28.


Erdős–Graham problem
In combinatorial number theory, the Erdős–Graham problem is the problem of proving that, if the set {2, 3, 4, ...} of integers greater than one is partitioned into finitely many subsets, then one of the subsets can be used to form an Egyptian fraction representation of unity. That is, for every r > 0, and every r-coloring of the integers greater than one, there is a finite monochromatic subset S of these integers such that
Σ_{n∈S} 1/n = 1.

In more detail, Paul Erdős and Ronald Graham conjectured that, for sufficiently large r, the largest member of S could be bounded by b^r for some constant b independent of r. It was known that, for this to be true, b must be at least e. Ernie Croot proved the conjecture as part of his Ph.D thesis, and later (while a post-doctoral student at UC Berkeley) published the proof in the Annals of Mathematics. The value Croot gives for b is very large: it is at most e^{167000}. Croot's result follows as a corollary of a more general theorem stating the existence of Egyptian fraction representations of unity for sets C of smooth numbers in intervals of the form [X, X^{1+δ}], where C contains sufficiently many numbers so that the sum of their reciprocals is at least six. The Erdős–Graham conjecture follows from this result by showing that one can find an interval of this form in which the sum of the reciprocals of all smooth numbers is at least 6r; therefore, if the integers are r-colored there must be a monochromatic subset C satisfying the conditions of Croot's theorem.

References
Croot, Ernest S., III (2000). Unit Fractions. Ph.D. thesis. University of Georgia, Athens. Croot, Ernest S., III (2003). "On a coloring conjecture about unit fractions". Annals of Mathematics 157 (2): 545–556. arXiv:math.NT/0311421. doi:10.4007/annals.2003.157.545. Erdős, Paul and Graham, Ronald L. (1980). "Old and new problems and results in combinatorial number theory". L'Enseignement Mathématique 28: 30–44.

External links
Ernie Croot's Webpage [1]

References
[1] http://www.math.gatech.edu/~ecroot/


Erdős–Fuchs theorem
In mathematics, in the area of combinatorial number theory, the Erdős–Fuchs theorem is a statement about the number of ways that numbers can be represented as a sum of two elements of a given set, stating that the average order of this number cannot be close to being a linear function. The theorem is named after Paul Erdős and Wolfgang Heinrich Johannes Fuchs.

Statement
Let A be a subset of the natural numbers and r(n) denote the number of ways that a natural number n can be expressed as the sum of two elements of A (taking order into account). We consider the average
R(n) = ( r(1) + r(2) + ⋯ + r(n) ) / n.
The theorem states that
R(n) = C + o( n^{−3/4} (log n)^{−1/2} )
cannot hold unless C = 0.

References
P. Erdős; W.H.J. Fuchs (1956). "On a Problem of Additive Number Theory". Journal of the London Mathematical Society 31 (1): 67–73. doi:10.1112/jlms/s1-31.1.67. Donald J. Newman (1998). Analytic number theory. GTM. 177. New York: Springer. pp. 31–38. ISBN 0-387-98308-2.


Erdős–Rado theorem
In partition calculus, part of combinatorial set theory, which is a branch of mathematics, the Erdős–Rado theorem is a basic result extending Ramsey's theorem to uncountable sets.

Statement of the theorem


If r ≥ 2 is finite and κ is an infinite cardinal, then
exp_r(κ)⁺ → (κ⁺)^{r+1}_κ,
where exp_0(κ) = κ and inductively exp_{r+1}(κ) = 2^{exp_r(κ)}. This is sharp in the sense that exp_r(κ)⁺ cannot be replaced by exp_r(κ) on the left hand side. The above partition symbol describes the following statement: if f is a coloring of the (r+1)-element subsets of a set of cardinality exp_r(κ)⁺ with κ colors, then there is a homogeneous set of cardinality κ⁺ (a set all of whose (r+1)-element subsets get the same f-value).

References
Erdős, Paul; Hajnal, András; Máté, Attila; Rado, Richard (1984), Combinatorial set theory: partition relations for cardinals, Studies in Logic and the Foundations of Mathematics, 106, Amsterdam: North-Holland Publishing Co., ISBN 0-444-86157-2, MR0795592. Erdős, P.; Rado, R. (1956), "A partition calculus in set theory" [1], Bull. Amer. Math. Soc. 62: 427–489, doi:10.1090/S0002-9904-1956-10036-0, MR0081864.

References
[1] http://www.ams.org/bull/1956-62-05/S0002-9904-1956-10036-0/


European Journal of Combinatorics


European Journal of Combinatorics
Abbreviated title (ISO): European J. Combin.
Discipline: Combinatorics
Language: English
Publisher: Elsevier (Netherlands)
Publication history: 1993 to present
ISSN: 0195-6698 [1]
Links: Journal homepage [2]

The European Journal of Combinatorics (usually abbreviated as European J. Combin.[3]) is a peer-reviewed scientific journal for combinatorics. It is an international, bimonthly journal of pure mathematics, specializing in theories arising from combinatorial problems. The journal is primarily open to papers dealing with mathematical structures within combinatorics and/or establishing direct links between combinatorics and other branches of mathematics and the theories of computing. The journal includes full-length research papers, short notes, and research problems on important topics. The journal was founded by Michel Deza, Michel Las Vergnas and Pierre Rosenstiehl. The current editors-in-chief are Patrice Ossona de Mendez and Pierre Rosenstiehl. The impact factor of the European Journal of Combinatorics in 2010 was 0.716.[4]

External links
European Journal of Combinatorics [5]

References
[1] http://www.worldcat.org/issn/0195-6698 [2] http://www.sciencedirect.com/science/journal/01956698 [3] Abbreviations of Names of Serials, Section E, Mathematics Reviews, American Mathematical Society (http://www.ams.org/mathweb/annser_f/annser_E.html) [4] Journal Citation Reports, 2011, published by Thomson Reuters. [5] http://www.elsevier.com/wps/find/journaldescription.cws_home/622824/description#description


Extremal combinatorics
Extremal combinatorics is a field of combinatorics, which is itself a part of mathematics. Extremal combinatorics studies how large or how small a collection of finite objects (numbers, graphs, vectors, sets, etc.) can be, if it has to satisfy certain restrictions. For example, how many people can we invite to a party where among each three people there are two who know each other and two who don't know each other? An easy Ramsey-type argument shows that at most five persons can attend such a party. Or, suppose we are given a finite set of nonzero integers, and are asked to mark as large a subset as possible of this set under the restriction that the sum of any two marked integers cannot be marked. It appears that (independent of what the given integers actually are!) we can always mark at least one-third of them.

References
Stasys Jukna, Extremal Combinatorics, With Applications in Computer Science (preface [1]). Springer-Verlag, 2001. ISBN 3-540-66313-4.

References
[1] http://lovelace.thi.informatik.uni-frankfurt.de/~jukna/EC_Book/preface.html

Factorial
n | n!
0 | 1
1 | 1
2 | 2
3 | 6
4 | 24
5 | 120
6 | 720
7 | 5040
8 | 40320
9 | 362880
10 | 3628800
15 | 1307674368000
20 | 2432902008176640000
25 | 1.5511210043×10^25
50 | 3.0414093202×10^64
70 | 1.1978571670×10^100
100 | 9.3326215444×10^157
170 | 7.2574156153×10^306
171 | 1.2410180702×10^309
450 | 1.7333687331×10^1000
1000 | 4.0238726008×10^2567
3249 | 6.4123376883×10^10000
10000 | 2.8462596809×10^35659
25206 | 1.2057034382×10^100000
100000 | 2.8242294080×10^456573
205023 | 2.5038989317×10^1000004
1000000 | 8.2639316883×10^5565708
1.0248383838×10^98 | 10^(1.0000000000×10^100)
10^100 | 10^(9.9565705518×10^101)
1.7976931349×10^308 | 10^(5.5336665775×10^310)

The first few and selected larger members of the sequence of factorials (sequence A000142 in OEIS). The values specified in scientific notation are rounded to the displayed precision.

In mathematics, the factorial of a non-negative integer n, denoted by n!, is the product of all positive integers less than or equal to n. For example, 5! = 5 × 4 × 3 × 2 × 1 = 120. The value of 0! is 1, according to the convention for an empty product.[1] The factorial operation is encountered in many different areas of mathematics, notably in combinatorics, algebra and mathematical analysis. Its most basic occurrence is the fact that there are n! ways to arrange n distinct objects into a sequence (i.e., permutations of the set of objects). This fact was known at least as early as the 12th century, to Indian scholars.[2] The notation n! was introduced by Christian Kramp in 1808.[3] The definition of the factorial function can also be extended to non-integer arguments, while retaining its most important properties; this involves more advanced mathematics, notably techniques from mathematical analysis.

Definition
The factorial function is formally defined by
n! = 1 × 2 × 3 × ⋯ × n,
or recursively defined by
0! = 1 and n! = (n − 1)! × n for n ≥ 1.
Both of the above definitions incorporate the instance
0! = 1,
in the first case by the convention that the product of no numbers at all is 1. This is convenient because: There is exactly one permutation of zero objects (with nothing to permute, "everything" is left in place). The recurrence relation (n + 1)! = n! (n + 1), valid for n > 0, extends to n = 0. It allows for the expression of many formulas, like the exponential function as a power series:
e^x = Σ_{n=0}^{∞} x^n / n!.


It makes many identities in combinatorics valid for all applicable sizes. The number of ways to choose 0 elements from the empty set is C(0, 0) = 0!/(0!·0!) = 1. More generally, the number of ways to choose (all) n elements among a set of n is C(n, n) = n!/(n!·0!) = 1.

The factorial function can also be defined for non-integer values using more advanced mathematics, detailed in the section below. This more generalized definition is used by advanced calculators and mathematical software such as Maple or Mathematica.

Applications
Although the factorial function has its roots in combinatorics, formulas involving factorials occur in many areas of mathematics. There are n! different ways of arranging n distinct objects into a sequence, the permutations of those objects. Often factorials appear in the denominator of a formula to account for the fact that ordering is to be ignored. A classical example is counting k-combinations (subsets of k elements) from a set with n elements. One can obtain such a combination by choosing a k-permutation: successively selecting and removing an element of the set, k times, for a total of
n·(n − 1)·(n − 2)⋯(n − k + 1)
possibilities. This however produces the k-combinations in a particular order that one wishes to ignore; since each k-combination is obtained in k! different ways, the correct number of k-combinations is
n·(n − 1)·(n − 2)⋯(n − k + 1) / k! = n! / (k!·(n − k)!).
This number is known as the binomial coefficient C(n, k), because it is also the coefficient of X^k in (1 + X)^n.

Factorials occur in algebra for various reasons, such as via the already mentioned coefficients of the binomial formula, or through averaging over permutations for symmetrization of certain operations. Factorials also turn up in calculus; for example they occur in the denominators of the terms of Taylor's formula, basically to compensate for the fact that the n-th derivative of x^n is n!. Factorials are also used extensively in probability theory. Factorials can be useful to facilitate expression manipulation. For instance the number of k-permutations of n can be written as
n·(n − 1)⋯(n − k + 1) = n! / (n − k)!;
while this is inefficient as a means to compute that number, it may serve to prove a symmetry property of binomial coefficients:
C(n, k) = n! / (k!·(n − k)!) = C(n, n − k).

Number theory
Factorials have many applications in number theory. In particular, n! is necessarily divisible by all prime numbers up to and including n. As a consequence, n > 5 is a composite number if and only if
(n − 1)! ≡ 0 (mod n).
A stronger result is Wilson's theorem, which states that
(p − 1)! ≡ −1 (mod p)
if and only if p is prime. Adrien-Marie Legendre found that the multiplicity of the prime p occurring in the prime factorization of n! can be expressed exactly as
Σ_{i=1}^{∞} ⌊n/p^i⌋.
This fact is based on counting the number of factors p of the integers from 1 to n. The number of multiples of p in the numbers 1 to n is given by ⌊n/p⌋; however, this formula counts those numbers with two factors of p only once. Hence another ⌊n/p²⌋ factors of p must be counted too. Similarly for three, four, five factors, to infinity. The sum is finite since p^i can only be less than or equal to n for finitely many values of i, and the floor function results in 0 when applied for p^i > n. The only factorial that is also a prime number is 2, but there are many primes of the form n! ± 1, called factorial primes. All factorials greater than 0! and 1! are even, as they are all multiples of 2. Also, all factorials greater than 5! are multiples of 10 (and hence have a trailing zero as their final digit), because they are multiples of 5 and 2. Also note that the reciprocals of factorials produce a convergent series (see e):
Σ_{n=0}^{∞} 1/n! = e.
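Legendre's formula is easy to verify computationally. The sketch below (my own illustration; the function names are mine) compares the formula with the exponent obtained by repeatedly dividing n! by p.

```python
# Legendre's formula for the exponent of a prime p in n!, checked by brute force.
from math import factorial

def legendre_exponent(n, p):
    """Sum of floor(n / p**i) over i >= 1."""
    total, power = 0, p
    while power <= n:
        total += n // power
        power *= p
    return total

def brute_exponent(n, p):
    value, count = factorial(n), 0
    while value % p == 0:
        value //= p
        count += 1
    return count

assert all(legendre_exponent(n, p) == brute_exponent(n, p)
           for n in range(1, 60) for p in (2, 3, 5, 7))
print(legendre_exponent(100, 5))   # 24 factors of 5 in 100!, hence 24 trailing zeros
```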

Rate of growth and approximations for large n


As n grows, the factorial n! becomes larger than all polynomials and exponential functions (but slower than double exponential functions) in n. Most approximations for n! are based on approximating its natural logarithm
log n! = Σ_{x=1}^{n} log x.

Plot of the natural logarithm of the factorial


The graph of the function f(n) = log n! is shown in the figure on the right. It looks approximately linear for all reasonable values of n, but this intuition is false. We get one of the simplest approximations for log n! by bounding the sum with an integral from above and below as follows:
∫₁^n log x dx ≤ Σ_{x=1}^{n} log x ≤ ∫₁^{n+1} log x dx,
which gives us the estimate
n·log n − n + 1 ≤ log n! ≤ (n + 1)·log(n + 1) − n.
Hence log n! is Θ(n log n). This result plays a key role in the analysis of the computational complexity of sorting algorithms (see comparison sort). From the bounds on log n! deduced above we get that
e·(n/e)^n ≤ n! ≤ (n + 1)^{n+1}·e^{−n}.
It is sometimes practical to use weaker but simpler estimates. Using the above formula it is easily shown that for all n we have (n/3)^n < n!, and for all n ≥ 6 we have n! < (n/2)^n. For large n we get a better estimate for the number n! using Stirling's approximation:
n! ≈ √(2πn)·(n/e)^n.
In fact, it can be proved that for all n we have
n! > √(2πn)·(n/e)^n.
A much better approximation for log n! was given by Srinivasa Ramanujan (Ramanujan 1988):
log n! ≈ n·log n − n + (1/6)·log( n·(1 + 4n·(1 + 2n)) ) + (1/2)·log π.
Computation
Computing factorials is trivial from an algorithmic point of view: successively multiplying a variable initialized to 1 by the integers 2 up to n (if any) will compute n!, provided the result fits in the variable. In functional languages, the recursive definition is often implemented directly to illustrate recursive functions. The main difficulty in computing factorials is the size of the result. To assure that the result will fit for all legal values of even the smallest commonly used integral type (8-bit signed integers) would require more than 700 bits, so no reasonable specification of a factorial function using fixed-size types can avoid questions of overflow. The values 12! and 20! are the largest factorials that can be stored in, respectively, the 32-bit and 64-bit integers commonly used in personal computers. Although floating point representation of the result allows going a bit further, it remains quite limited by possible overflow. The largest factorial that most calculators can handle is 69!, because 69! < 10^100 < 70!. Calculators that use 3-digit exponents can compute larger factorials, up to, for example, 253! ≈ 5.2×10^499 on HP calculators and 449! ≈ 3.9×10^997 on the TI-86. The calculator seen in Mac OS X, Microsoft Excel and Google Calculator, as well as the freeware Fox Calculator, can handle factorials up to 170!, which is the largest factorial that can be represented as a 64-bit IEEE 754 floating-point value. The scientific calculator in Windows XP is able to calculate factorials up to at least 100000!. Most software applications will compute small factorials by direct multiplication or table lookup. Larger factorial values can be approximated using Stirling's formula. Wolfram Alpha can calculate exact results for the ceiling function and floor function applied to the binary, natural and common logarithm of n! for values of n up to 249999, and up to 20,000,000! for the integers.

If very large exact factorials are needed, they can be computed using bignum arithmetic. In such computations speed may be gained by not sequentially multiplying the numbers up to (or down from) n into a single accumulator, but by partitioning the sequence so that the products for each of the two parts are approximately of the same size, computing those products recursively and then multiplying. The asymptotically best efficiency is obtained by computing n! from its prime factorization. As documented by Peter Borwein, prime factorization allows n! to be computed in time O(n·(log n·log log n)²), provided that a fast multiplication algorithm is used (for example, the Schönhage–Strassen algorithm).[4] Peter Luschny presents source code and benchmarks for several efficient factorial algorithms, with or without the use of a prime sieve.[5]
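The balanced-partition idea described above is easy to sketch. The code below (my own illustration; the names product_range and factorial_split are mine) splits the range 2..n recursively so that the two sub-products have factors of similar size, and checks the result against the straightforward loop.

```python
# Exact n! via a balanced "split product" (sketch of the idea described above).
def product_range(lo, hi):
    """Product of the integers lo..hi, splitting the range to balance factor sizes."""
    if hi - lo < 8:
        result = 1
        for k in range(lo, hi + 1):
            result *= k
        return result
    mid = (lo + hi) // 2
    return product_range(lo, mid) * product_range(mid + 1, hi)

def factorial_split(n):
    return 1 if n < 2 else product_range(2, n)

def factorial_iterative(n):
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

assert factorial_split(2000) == factorial_iterative(2000)
```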


Extension of factorial to non-integer values of argument


The Gamma and Pi functions
Besides nonnegative integers, the factorial function can also be defined for non-integer values, but this requires more advanced tools from mathematical analysis. One function that "fills in" the values of the factorial (but with a shift of 1 in the argument) is called the Gamma function, denoted Γ(z), defined for all complex numbers z except the non-positive integers, and given when the real part of z is positive by
Γ(z) = ∫₀^∞ t^{z−1} e^{−t} dt.
The factorial function, generalized to all real numbers except negative integers. For example, 0! = 1! = 1, (−0.5)! = √π, (0.5)! = √π/2.

Its relation to the factorials is that for any natural number n
n! = Γ(n + 1).
Euler's original formula for the Gamma function was
Γ(z) = lim_{n→∞} n^z·n! / ( z(z + 1)⋯(z + n) ).
It is worth mentioning that there is an alternative notation that was originally introduced by Gauss which is sometimes used. The Pi function, denoted Π(z) for real numbers z no less than 0, is defined by
Π(z) = ∫₀^∞ t^{z} e^{−t} dt.
In terms of the Gamma function it is
Π(z) = Γ(z + 1).
It truly extends the factorial in that
Π(n) = n! for every natural number n.
In addition to this, the Pi function satisfies the same recurrence as factorials do, but at every complex value z where it is defined
Π(z) = z·Π(z − 1).
In fact, this is no longer a recurrence relation but a functional equation. Expressed in terms of the Gamma function this functional equation takes the form
Γ(n + 1) = n·Γ(n).
Since the factorial is extended by the Pi function, for every complex value z where it is defined, we can write:
z! = Π(z).
The values of these functions at half-integer values is therefore determined by a single one of them; one has
Γ(1/2) = (−1/2)! = Π(−1/2) = √π,
from which it follows that for n ∈ N,
(n − 1/2)! = Γ(n + 1/2) = √π · ∏_{k=1}^{n} (2k − 1)/2 = (2n)! / (4^n·n!) · √π.
For example,
3.5! = Γ(4.5) = (105/16)·√π ≈ 11.63.
It also follows that for n ∈ N,
(−n − 1/2)! = Γ(1/2 − n) = √π · ∏_{k=1}^{n} 2/(1 − 2k) = (−4)^n·n! / (2n)! · √π.
For example,
(−2.5)! = Γ(−3/2) = (4/3)·√π ≈ 2.363.
The Pi function is certainly not the only way to extend factorials to a function defined at almost all complex values, and not even the only one that is analytic wherever it is defined. Nonetheless it is usually considered the most natural way to extend the values of the factorials to a complex function. For instance, the Bohr–Mollerup theorem states that the Gamma function is the only function that takes the value 1 at 1, satisfies the functional equation Γ(n + 1) = n·Γ(n), is meromorphic on the complex numbers, and is log-convex on the positive real axis. A similar statement holds for the Pi function as well, using the Π(n) = n·Π(n − 1) functional equation. However, there exist complex functions that are probably simpler in the sense of analytic function theory and which interpolate the factorial values. For example, Hadamard's 'Gamma'-function (Hadamard 1894) which, unlike the Gamma function, is an entire function.[6] Euler also developed a convergent product approximation for the non-integer factorials, which can be seen to be equivalent to the formula for the Gamma function above:
z! = Π(z) = ∏_{k=1}^{∞} (1 + 1/k)^{z} / (1 + z/k).
However, this formula does not provide a practical means of computing the Pi or Gamma function, as its rate of convergence is slow.
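In practice, the extension z! = Γ(z + 1) is what numerical libraries evaluate. The short sketch below (my own illustration; the function name is mine) uses the standard-library gamma function to check the half-integer values mentioned above.

```python
# Evaluate the extended factorial z! = Gamma(z + 1) at non-integer arguments.
from math import gamma, sqrt, pi, isclose

def extended_factorial(z):
    return gamma(z + 1)

assert isclose(extended_factorial(0.5), sqrt(pi) / 2)       # (1/2)! = sqrt(pi)/2
assert isclose(extended_factorial(-0.5), sqrt(pi))          # (-1/2)! = sqrt(pi)
assert isclose(extended_factorial(3.5), 105 * sqrt(pi) / 16)
assert isclose(extended_factorial(5), 120)
```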


Applications of the Gamma function


The volume of an n-dimensional hypersphere of radius R is
V_n = π^{n/2}·R^n / Γ(n/2 + 1) = π^{n/2}·R^n / (n/2)!.

Factorial at the complex plane

Amplitude and phase of factorial of complex argument.

Representation through the Gamma function allows evaluation of the factorial of a complex argument. Equilines of amplitude and phase of the factorial are shown in the figure: several levels of constant modulus (amplitude) and constant phase are drawn, and the grid has unit step. At the poles at the negative integer values of the argument, phase and amplitude are not defined, and the equilines are dense in the vicinity of these singularities. For |z| < 1, the Taylor expansion can be used:
z! = Σ_{n=0}^{∞} g_n·z^n.
The first coefficients of this expansion are


n | g_n | approximation
0 | 1 | 1
1 | −γ | −0.5772156649
2 | γ²/2 + ζ(2)/2 | 0.9890559955

where γ is the Euler constant and ζ is the Riemann zeta function. Computer algebra systems such as Sage (mathematics software) can generate many terms of this expansion.

Approximations of factorial
For large values of the argument, the factorial can be approximated through the integral of the digamma function, using the continued fraction representation. This approach is due to T. J. Stieltjes (1894). Writing z! = exp(P(z)) where
P(z) = p(z) + (1/2)·log(2π) − (z + 1) + (z + 1/2)·log(z + 1),
Stieltjes gave a continued fraction for p(z):
p(z) = a₀/(z + 1 + a₁/(z + 1 + a₂/(z + 1 + ⋯))).
The first few coefficients a_n are[7]


n | a_n
0 | 1/12
1 | 1/30
2 | 53/210
3 | 195/371
4 | 22999/22737
5 | 29944523/19773142
6 | 109535241009/48264275462

There is a common misconception that
log z! = P(z) or log Γ(z + 1) = P(z)
for any complex z ≠ 0. Indeed, the relation through the logarithm is valid only for a specific range of values of z in the vicinity of the real axis: the larger the real part of the argument is, the smaller the imaginary part should be. However, the inverse relation, z! = exp(P(z)), is valid for the whole complex plane apart from zero. The convergence is poor in the vicinity of the negative part of the real axis (it is difficult to have good convergence of any approximation in the vicinity of the singularities). As long as the argument is not too close to the negative real axis, the 6 coefficients above are sufficient for the evaluation of the factorial with complex<double> precision. For higher precision more coefficients can be computed by a rational QD-scheme (H. Rutishauser's QD algorithm).[8]


Non-extendability to negative integers


The relation n! = (n − 1)!·n allows one to compute the factorial for an integer given the factorial for a smaller integer. The relation can be inverted so that one can compute the factorial for an integer given the factorial for a larger integer:
(n − 1)! = n!/n.

Note, however, that this recursion does not permit us to compute the factorial of a negative integer; use of the formula to compute (−1)! would require a division by zero, and thus blocks us from computing a factorial value for every negative integer. (Similarly, the Gamma function is not defined for non-positive integers, though it is defined for all other complex numbers.)

Factorial-like products and functions


There are several other integer sequences similar to the factorial that are used in mathematics:

Primorial
The primorial (sequence A002110 in OEIS) is similar to the factorial, but with the product taken only over the prime numbers.

Double factorial
The product of all odd integers up to some odd positive integer n is often called the double factorial of n (even though it only involves about half the factors of the ordinary factorial, and its value is therefore closer to the square root of the factorial). It is denoted by n!!. For an odd positive integer n = 2k − 1, k ≥ 1, it is
n!! = (2k − 1)!! = ∏_{i=1}^{k} (2i − 1) = 1·3·5⋯(2k − 1).
For example, 9!! = 1·3·5·7·9 = 945. This notation creates a notational ambiguity with the composition of the factorial function with itself (which for n > 2 gives much larger numbers than the double factorial); this may be justified by the fact that composition arises very seldom in practice, and could be denoted by (n!)! to circumvent the ambiguity. The double factorial notation is not essential; it can be expressed in terms of the ordinary factorial by
(2k − 1)!! = (2k)! / (2^k·k!),
since the denominator 2^k·k! = 2·4·6⋯(2k) equals the product of the even numbers up to 2k and cancels the unwanted even factors from the numerator. The introduction of the double factorial is motivated by the fact that it occurs rather frequently in combinatorial and other settings, for instance: (2n − 1)!! is the number of permutations of 2n whose cycle type consists of n parts equal to 2; these are the involutions without fixed points. (2n − 1)!! is the number of perfect matchings in a complete graph K(2n). (2n − 5)!! is the number of unrooted binary trees with n labeled leaves. The value (n − 1/2)! given above is equal to (2n − 1)!!·√π / 2^n.

Sometimes n!! is defined for non-negative even numbers as well. One choice is a definition similar to the one for odd values

Factorial For example, with this definition, 8!! =2468 =384. However, note that this definition does not match the expression above, of the double factorial in terms of the ordinary factorial, and is also inconsistent with the extension of the definition of to complex numbers that is achieved via the Gamma function as indicated below. Also, for even numbers, the double factorial notation is hardly shorter than expressing the same value using ordinary factorials. For combinatorial interpretations (the value gives, for instance, the size of the hyperoctahedral group), the latter expression can be more informative (because the factor 2n is the order of the kernel of a projection to the symmetric group). Even though the formulas for the odd and even double factorials can be easily combined into


the only known interpretation for the sequence of all these numbers (sequence A006882 in OEIS) is somewhat artificial: the number of down-up permutations of a set of n + 1 elements for which the entries in the even positions are increasing. The sequence of double factorials for n=1,3,5,7,... (sequence A001147 in OEIS) starts as 1, 3, 15, 105, 945, 10395, 135135, .... Some identities involving double factorials are:

Alternative extension of the double factorial

Disregarding the above definition of n!! for even values of n, the double factorial for odd integers can be extended to most real and complex numbers z by noting that when z is a positive odd integer then

z!! = 2^{(z+1)/2} Γ(z/2 + 1) / √π.

The expressions obtained by taking one of the above formulas for the odd and even double factorials and expressing the occurring factorials in terms of the gamma function can both be seen (using the multiplication theorem) to be equivalent to the one given here. The expression found for z!! is defined for all complex numbers except the negative even numbers. Using it as the definition, the volume of an n-dimensional hypersphere of radius R can be expressed as

V_n = 2 (2π)^{(n−1)/2} R^n / n!!.

Multifactorials
A common related notation is to use multiple exclamation points to denote a multifactorial, the product of integers in steps of two (n!!), three (n!!!), or more. The double factorial is the most commonly used variant, but one can similarly define the triple factorial (n!!!) and so on. One can define the k-th factorial, denoted by n!^(k), recursively for non-negative integers as

n!^(k) = 1 if n = 0; n if 0 < n ≤ k; n · (n − k)!^(k) if n > k,

though see the alternative definition below. Some mathematicians have suggested an alternative notation of n!_2 for the double factorial and similarly n!_k for other multifactorials, but this has not come into general use.
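A minimal recursive implementation of the k-th factorial along these lines (the function name is ours); it reproduces the usual double- and triple-factorial values.

```python
def multifactorial(n, k):
    """n!^(k): n * (n - k)!^(k), with n!^(k) = n for 0 < n <= k and 0!^(k) = 1."""
    if n < 0:
        raise ValueError("n!^(k) is not defined for negative integers")
    if n == 0:
        return 1
    if n <= k:
        return n
    return n * multifactorial(n - k, k)

assert multifactorial(9, 2) == 945   # 9!!  = 9*7*5*3*1
assert multifactorial(8, 2) == 384   # 8!!  = 8*6*4*2
assert multifactorial(9, 3) == 162   # 9!!! = 9*6*3
```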

In the same way that n! is not defined for negative integers, and n!! is not defined for negative even integers, n!^(k) is not defined for negative integers evenly divisible by k.

Alternative extension of the multifactorial

Alternatively, the multifactorial z!^(k) can be extended to most real and complex numbers z by noting that when z is one more than a positive multiple of k then

z!^(k) = k^{(z−1)/k} Γ(z/k + 1) / Γ(1/k + 1).


This last expression is defined much more broadly than the original; with this definition, z!^(k) is defined for all complex numbers except the negative real numbers evenly divisible by k. This definition is consistent with the earlier definition only for those integers z satisfying z ≡ 1 mod k. In addition to extending z!^(k) to most complex numbers z, this definition has the feature of working for all positive real values of k. Furthermore, when k = 1, this definition is mathematically equivalent to the Π(z) function, described above. Also, when k = 2, this definition is mathematically equivalent to the alternative extension of the double factorial, described above.

Quadruple factorial
The so-called quadruple factorial, however, is not the multifactorial n!^(4); it is a much larger number given by (2n)!/n!, starting as 1, 2, 12, 120, 1680, 30240, 665280, ... (sequence A001813 in OEIS). It is also equal to

2^n · (2n − 1)!!.

Superfactorial
Neil Sloane and Simon Plouffe defined a superfactorial in The Encyclopedia of Integer Sequences (Academic Press, 1995) to be the product of the first n factorials. So the superfactorial of 4 is

sf(4) = 1! · 2! · 3! · 4! = 288.

In general

sf(n) = ∏_{k=1}^{n} k! = 1! · 2! ⋯ n!.

Equivalently, the superfactorial is given by the formula

sf(n) = ∏_{0 ≤ i < j ≤ n} (j − i),

which is the determinant of a Vandermonde matrix. The sequence of superfactorials starts (from n = 0) as

1, 1, 2, 12, 288, 34560, 24883200, ... (sequence A000178 in OEIS)
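A short sketch (function names ours) checking both expressions for the superfactorial: the product of the first n factorials and the Vandermonde-determinant form ∏_{0 ≤ i < j ≤ n} (j − i).

```python
from math import factorial

def superfactorial(n):
    """sf(n) = 1! * 2! * ... * n!"""
    result = 1
    for k in range(1, n + 1):
        result *= factorial(k)
    return result

def vandermonde_form(n):
    """Product of (j - i) over 0 <= i < j <= n, i.e. the Vandermonde determinant on 0..n."""
    result = 1
    for j in range(n + 1):
        for i in range(j):
            result *= (j - i)
    return result

print([superfactorial(n) for n in range(7)])   # [1, 1, 2, 12, 288, 34560, 24883200]
assert all(superfactorial(n) == vandermonde_form(n) for n in range(10))
```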

Alternative definition

Clifford Pickover in his 1995 book Keys to Infinity used a new notation, n$, to define the superfactorial


or as n$ = (n!)^(4)(n!), where the (4) notation denotes the hyper4 operator, or, using Knuth's up-arrow notation, n$ = (n!)↑↑(n!).

This sequence of superfactorials starts:

Here, as is usual for compound exponentiation, the grouping is understood to be from right to left:

Hyperfactorial
Occasionally the hyperfactorial of n is considered. It is written as H(n) and defined by

H(n) = ∏_{k=1}^{n} k^k = 1^1 · 2^2 · 3^3 ⋯ n^n.

For n = 1, 2, 3, 4, ... the values H(n) are 1, 4, 108, 27648, ... (sequence A002109 in OEIS). The asymptotic growth rate is

H(n) ∼ A n^{(6n² + 6n + 1)/12} e^{−n²/4},

where A = 1.2824... is the Glaisher–Kinkelin constant.[9] H(14) = 1.8474...×10^99 is already almost equal to a googol, and H(15) = 8.0896...×10^116 is almost of the same magnitude as the Shannon number, the theoretical number of possible chess games. Compared to the Pickover definition of the superfactorial, the hyperfactorial grows relatively slowly. The hyperfactorial function can be generalized to complex numbers in a similar way as the factorial function. The resulting function is called the K-function.
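A one-line check of the definition against the values quoted above (the function name is ours).

```python
def hyperfactorial(n):
    """H(n) = 1^1 * 2^2 * ... * n^n"""
    result = 1
    for k in range(1, n + 1):
        result *= k ** k
    return result

assert [hyperfactorial(n) for n in range(1, 5)] == [1, 4, 108, 27648]
print(len(str(hyperfactorial(14))))   # 100 digits, so H(14) is indeed close to a googol
```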

Notes
[1] Ronald L. Graham, Donald E. Knuth, Oren Patashnik (1988) Concrete Mathematics, Addison-Wesley, Reading MA. ISBN 0-201-14236-8, p. 111
[2] N. L. Biggs, The roots of combinatorics, Historia Math. 6 (1979) 109–136
[3] Higgins, Peter (2008), Number Story: From Counting to Cryptography, New York: Copernicus, p. 12, ISBN 978-1-84800-000-1 says Krempe though.
[4] Peter Borwein. "On the Complexity of Calculating Factorials". Journal of Algorithms 6, 376–380 (1985)
[5] Peter Luschny, Fast-Factorial-Functions: The Homepage of Factorial Algorithms (http://www.luschny.de/math/factorial/FastFactorialFunctions.htm).
[6] Peter Luschny, Hadamard versus Euler - Who found the better Gamma function? (http://www.luschny.de/math/factorial/hadamard/HadamardsGammaFunction.html).
[7] Digital Library of Mathematical Functions, http://dlmf.nist.gov/5.10
[8] Peter Luschny, On Stieltjes' Continued Fraction for the Gamma Function. (http://www.luschny.de/math/factorial/approx/continuedfraction.html).
[9] Weisstein, Eric W., "Glaisher–Kinkelin Constant" (http://mathworld.wolfram.com/Glaisher-KinkelinConstant.html) from MathWorld.


References
Hadamard, M. J. (1894), "Sur l'expression du produit 1·2·3···(n−1) par une fonction entière" (http://www.luschny.de/math/factorial/hadamard/HadamardFactorial.pdf) (in French), Œuvres de Jacques Hadamard, Centre National de la Recherche Scientifique, Paris, 1968
Ramanujan, Srinivasa (1988), The lost notebook and other unpublished papers, Springer Berlin, p. 339, ISBN 3-540-18726-X

External links
Approximation formulas (http://www.luschny.de/math/factorial/approx/SimpleCases.html)
All about factorial notation n! (http://factorielle.free.fr/index_en.html)
Weisstein, Eric W., "Factorial" (http://mathworld.wolfram.com/Factorial.html) from MathWorld.
Weisstein, Eric W., "Double factorial" (http://mathworld.wolfram.com/DoubleFactorial.html) from MathWorld.
Factorial (http://planetmath.org/encyclopedia/Factorial.html) at PlanetMath.
"Double Factorial Derivations" (http://www.docstoc.com/docs/5606124/Double-Factorials-Selected-Proofs-and-Notes)
Factorial calculators and algorithms:
Factorial Calculator (http://web.ics.purdue.edu/~chen165/Math.htm): instantly finds factorials up to 10^14!
Animated Factorial Calculator (http://www.gfredericks.com/main/sandbox/arith/factorial): shows factorials calculated as if by hand using common elementary school algorithms
"Factorial" (http://demonstrations.wolfram.com/Factorial/) by Ed Pegg, Jr. and Rob Morris, Wolfram Demonstrations Project, 2007.
Fast Factorial Functions (with source code in Java, C#, C++, Scala and Go) (http://www.luschny.de/math/factorial/FastFactorialFunctions.htm)


Factorial number system



In combinatorics, the factorial number system, also called factoradic, is a mixed radix numeral system adapted to numbering permutations. It is also called factorial base, although factorials do not function as the base, but as the place values of digits. By converting a number less than n! to factorial representation, one obtains a sequence of n digits that can be converted to a permutation of n in a straightforward way, either using them as Lehmer code or as inversion table[1] representation; in the former case the resulting map from integers to permutations of n lists them in lexicographical order. General mixed radix systems were studied by Georg Cantor.[2] The term "factorial number system" is used by Knuth,[3] while the French equivalent "numération factorielle" was first used in 1888.[4] The term "factoradic", which is a portmanteau of factorial and mixed radix, appears to be of more recent date.[5]


Definition
The factorial number system is a mixed radix numeral system: the i-th digit from the right has base i, which means that the digit must be strictly less than i, and that (taking into account the bases of the less significant digits) its value is to be multiplied by (i − 1)! (its place value).
Radix:                   8     7    6    5   4   3   2   1
Place value:             7!    6!   5!   4!  3!  2!  1!  0!
Place value in decimal:  5040  720  120  24  6   2   1   1
Highest digit allowed:   7     6    5    4   3   2   1   0

From this it follows that the rightmost digit is always 0, the second can be 0 or 1, the third 0, 1 or 2, and so on. The factorial number system is sometimes defined with the rightmost digit omitted, because it is always zero (sequence A007623 in OEIS). In this article a factorial number representation will be flagged by a subscript "!", so for instance 341010! stands for the digit string 3 4 1 0 1 0 (with radices 6, 5, 4, 3, 2, 1), whose value is ((((3·5 + 4)·4 + 1)·3 + 0)·2 + 1)·1 + 0 = 463 in decimal. General properties of mixed radix number systems apply to the factorial number system as well. For instance, one can convert a number into factorial representation producing digits from right to left, by repeatedly dividing the number by the radix values (1, 2, 3, ...), taking the remainder as digits, and continuing with the integer quotient, until this quotient becomes 0. One could in principle extend the system to deal with fractional numbers by choosing base values for the positions after the "decimal" point, but the natural extension by values 0, 1, 2, ... is not an option. The symmetric choice of base values 1, 2, 3, ... after the point would be possible, with corresponding place values 1/n!, but it is not distinguished by any particular mathematical properties (except that the number e takes the form 10.011111...).
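The right-to-left conversion just described is easy to express in code; this sketch (function names ours) recovers the digits 3, 4, 1, 0, 1, 0 for 463 and converts them back.

```python
from math import factorial

def to_factoradic(n):
    """Return the factorial-base digits of n, most significant first."""
    digits = []
    radix = 1
    while True:
        n, remainder = divmod(n, radix)   # remainder is the next digit (right to left)
        digits.append(remainder)
        if n == 0:
            break
        radix += 1
    return digits[::-1]

def from_factoradic(digits):
    """Evaluate factorial-base digits (most significant first) back to an integer."""
    value = 0
    for i, d in enumerate(reversed(digits)):   # d is the digit with place value i!
        value += d * factorial(i)
    return value

print(to_factoradic(463))                      # [3, 4, 1, 0, 1, 0]
assert from_factoradic([3, 4, 1, 0, 1, 0]) == 463
```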

Examples
Here are the first twenty-four numbers, counting from zero. The table on the left shows permutations, and inversion vectors[6] (which are reflected factorial numbers) below them. Another column shows the inversion sets. The digit sums of the inversion vectors (or factorial numbers) and the cardinalities of the inversion sets are equal (and have the same parity as the permutation). They form the sequence A034968.


Permutohedron graph showing permutations and their inversion vectors (compare the version with factorial numbers). The arrows indicate the bitwise less-or-equal relation.


decimal:    0    1     2     3     4     5     6      7      8      9      10     11
factorial:  0!   10!   100!  110!  200!  210!  1000!  1010!  1100!  1110!  1200!  1210!

decimal:    12     13     14     15     16     17     18     19     20     21     22     23
factorial:  2000!  2010!  2100!  2110!  2200!  2210!  3000!  3010!  3100!  3110!  3200!  3210!

For another example, the greatest number that could be represented with six digits would be 543210! which equals 719 in decimal: 5·5! + 4·4! + 3·3! + 2·2! + 1·1! + 0·0!. Clearly the next factorial number representation after 543210! is 1000000! which designates 6! = 720 in decimal, the place value for the radix-7 digit. So the former number, and its summed out expression above, is equal to 6! − 1. The factorial number system provides a unique representation for each natural number, with the given restriction on the "digits" used. No number can be represented in more than one way because the sum of consecutive factorials multiplied by their index is always the next factorial minus one:

∑_{i=0}^{n} i · i! = (n + 1)! − 1.

This can be easily proved with mathematical induction. However, when using Arabic numerals to write the digits (and not including the subscripts as in the above examples), their simple concatenation becomes ambiguous for numbers having a "digit" greater than 9. The smallest such example is the number 10 · 10! = 36288000 in decimal, which may be written A0000000000!, but not 100000000000!

which denotes 11! = 39916800 in decimal. Thus using letters A–Z to denote digits 10, ..., 35 as in other base-N systems makes the largest representable number 36! − 1 = 371993326789901217467999448150835199999999 in decimal. For arbitrarily greater numbers one has to choose a base for representing individual digits, say decimal, and provide a separating mark between them (for instance by subscripting each digit by its base, also given in decimal). In fact the factorial number system itself is not truly a numeral system in the sense of providing a representation for all natural numbers using only a finite alphabet of symbols.


Permutations
There is a natural mapping between the integers 0, ..., n! − 1 (or equivalently the numbers with n digits in factorial representation) and permutations of n elements in lexicographical order, when the integers are expressed in factoradic form. This mapping has been termed the Lehmer code (or inversion table). For example, with n = 3, such a mapping is
decimal  factorial  permutation
0        000!       (0,1,2)
1        010!       (0,2,1)
2        100!       (1,0,2)
3        110!       (1,2,0)
4        200!       (2,0,1)
5        210!       (2,1,0)

The leftmost factoradic digit 0, 1, or 2 is chosen as the first permutation digit from the ordered list (0,1,2) and is removed from the list. Think of this new list as zero indexed and each successive digit dictates which of the remaining elements is to be chosen. If the second factoradic digit is "0" then the first element of the list is selected for the second permutation digit and is then removed from the list. Similarly if the second factoradic digit is "1", the second is selected and then removed. The final factoradic digit is always "0", and since the list now contains only one element it is selected as the last permutation digit. The process may become clearer with a longer example. For example, here is how the digits in the factoradic 4041000! (equal to 2982 in decimal) pick out the digits in (4,0,6,2,1,3,5), the 2982nd permutation of the numbers 0 through 6.
4041000!  ->  (4,0,6,2,1,3,5)

factoradic digits:   4               | 0             | 4           | 1         | 0       | 0     | 0
remaining elements:  (0,1,2,3,4,5,6) -> (0,1,2,3,5,6) -> (1,2,3,5,6) -> (1,2,3,5) -> (1,3,5) -> (3,5) -> (5)
permutation:         (4,              0,              6,            2,          1,        3,      5)
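The decoding step is a few lines of code; this sketch (function name ours) reproduces the example, where 4·6! + 4·4! + 1·3! = 2982.

```python
def permutation_from_factoradic(digits):
    """Decode factoradic digits (most significant first) into a permutation of 0..n-1."""
    remaining = list(range(len(digits)))
    permutation = []
    for d in digits:
        permutation.append(remaining.pop(d))   # take the d-th element of the shrinking list
    return permutation

assert permutation_from_factoradic([4, 0, 4, 1, 0, 0, 0]) == [4, 0, 6, 2, 1, 3, 5]
```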

A natural index for the group direct product of two permutation groups is the concatenation of two factoradic numbers, with two subscript "!"s.

decimal  concatenated factoradics  permutation pair
0        000!000!                  ((0,1,2),(0,1,2))
1        000!010!                  ((0,1,2),(0,2,1))
...
5        000!210!                  ((0,1,2),(2,1,0))
6        010!000!                  ((0,2,1),(0,1,2))
7        010!010!                  ((0,2,1),(0,2,1))
...
22       110!200!                  ((1,2,0),(2,0,1))
...
34       210!200!                  ((2,1,0),(2,0,1))
35       210!210!                  ((2,1,0),(2,1,0))

References
[1] Knuth, D. E. (1973), "Volume 3: Sorting and Searching", The Art of Computer Programming, Addison-Wesley, p. 12, ISBN 0-201-89685-0
[2] Cantor, G. (1869), Zeitschrift für Mathematik und Physik, 14.
[3] Knuth, D. E. (1997), "Volume 2: Seminumerical Algorithms", The Art of Computer Programming (3rd ed.), Addison-Wesley, p. 192, ISBN 0-201-89684-2.
[4] Laisant, Charles-Ange (1888), "Sur la numération factorielle, application aux permutations" (http://www.numdam.org/item?id=BSMF_1888__16__176_0) (in French), Bulletin de la Société Mathématique de France 16: 176–183.
[5] The term "factoradic" is apparently introduced in McCaffrey, James (2003), Using Permutations in .NET for Improved Systems Security (http://msdn2.microsoft.com/en-us/library/aa302371.aspx), Microsoft Developer Network.
[6] Weisstein, Eric W., "Inversion Vector" (http://mathworld.wolfram.com/InversionVector.html) from MathWorld.

External links
Mantaci, Roberto; Rakotondrajao, Fanja (2001), "A permutation representation that knows what Eulerian means" (http://www.dmtcs.org/volumes/abstracts/pdfpapers/dm040203.pdf) (PDF), Discrete Mathematics and Theoretical Computer Science 4: 101–108.
Arndt, Jörg (March 5, 2009). Algorithms for Programmers: Ideas and source code (draft) (http://www.jjj.de/fxt/#fxtbook). pp. 224–234.

A Lehmer code calculator (http://www-ang.kfunigraz.ac.at/~fripert/fga/k1lehm.html) Note that their permutation digits start from 1, so mentally reduce all permutation digits by one to get results equivalent to those on this page


Finite geometry
A finite geometry is any geometric system that has only a finite number of points. Euclidean geometry, for example, is not finite, because a Euclidean line contains infinitely many points, in fact as many points as there are real numbers. A finite geometry can have any (finite) number of dimensions. Finite geometries may be constructed via linear algebra, as vector spaces over a finite field, in which case they are called Galois geometries, or they can be defined purely combinatorially. Many, but not all, finite geometries are Galois geometries. For example, any finite projective space of dimension three or greater is isomorphic to a projective space over a finite field (the projectivization of a vector space over a finite field), so in this case there is no distinction; but in dimension two there are combinatorially defined projective planes which are not isomorphic to projective spaces over finite fields, namely the non-Desarguesian planes, so in this case there is a distinction.

Finite planes
The following remarks apply only to finite planes. There are two kinds of finite plane geometry: affine and projective. In an affine geometry, the normal sense of parallel lines applies. In a projective plane, by contrast, any two lines intersect at a unique point, and so parallel lines do not exist. Both finite affine plane geometry and finite projective plane geometry may be described by fairly simple axioms. An affine plane geometry is a nonempty set X (whose elements are called "points"), along with a nonempty collection L of subsets of X (whose elements are called "lines"), such that:

1. Given any two distinct points, there is exactly one line that contains both points.
2. The parallel postulate: Given a line ℓ and a point p not on ℓ, there exists exactly one line ℓ′ containing p such that ℓ ∩ ℓ′ = ∅.
3. There exists a set of four points, no three of which belong to the same line.

The last axiom ensures that the geometry is not trivial (either empty or too simple to be of interest, such as a single line with an arbitrary number of points on it), while the first two specify the nature of the geometry. The simplest affine plane contains only four points; it is called the affine plane of order 2. Since no three are collinear, any pair of points determines a unique line, and so this plane contains six lines. It corresponds to a tetrahedron where non-intersecting edges are considered "parallel", or a square where not only opposite sides, but also diagonals are considered "parallel". More generally, a finite affine plane of order n has n² points and n² + n lines; each line contains n points, and each point is on n + 1 lines.

Finite affine plane of order 2, containing 4 points and 6 lines. Lines of the same color are "parallel".

A projective plane geometry is a nonempty set X (whose elements are called "points"), along with a nonempty collection L of subsets of X (whose elements are called "lines"), such that:

1. Given any two distinct points, there is exactly one line that contains both points.
2. The intersection of any two distinct lines contains exactly one point.
3. There exists a set of four points, no three of which belong to the same line.

Finite affine plane of order 3, containing 9 points and 12 lines.

An examination of the first two axioms shows that they are nearly identical, except that the roles of points and lines have been interchanged. This suggests the principle of duality for projective plane geometries, meaning that any true statement valid in all these geometries remains true if we exchange points for lines and lines for points. The smallest geometry satisfying all three axioms contains seven points. In this simplest of the projective planes, there are also seven lines; each point is on three lines, and each line contains three points. This particular projective plane is sometimes called the Fano plane. If any of the lines is removed from the plane, along with the points on that line, the resulting geometry is the affine plane of order 2. The Fano plane is called the projective plane of order 2 because it is unique (up to isomorphism). In general, the projective plane of order n has n² + n + 1 points and the same number of lines; each line contains n + 1 points, and each point is on n + 1 lines. A permutation of the Fano plane's seven points that carries collinear points (points on the same line) to collinear points is called a collineation of the plane. The full collineation group is of order 168 and is isomorphic to the group PSL(2,7) = PSL(3,2), and to the general linear group GL(3,2).

Duality in the Fano plane: Each point corresponds to a line and vice versa.
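One concrete way to obtain the Fano plane (our construction, not part of the article) is to take the seven nonzero vectors of GF(2)^3 as points and the triples of points summing to zero as lines; the sketch below checks the incidence counts quoted above.

```python
from itertools import combinations, product

# points = the 7 nonzero vectors of GF(2)^3
points = [p for p in product((0, 1), repeat=3) if any(p)]

# lines = triples of points whose coordinatewise XOR is zero (the 2-dimensional subspaces)
lines = [frozenset(t) for t in combinations(points, 3)
         if all((a ^ b ^ c) == 0 for a, b, c in zip(*t))]

assert len(points) == 7 and len(lines) == 7
# every pair of points lies on exactly one line, and every point lies on exactly 3 lines
assert all(sum(p in l and q in l for l in lines) == 1 for p, q in combinations(points, 2))
assert all(sum(p in l for l in lines) == 3 for p in points)
```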


Order of planes
A finite plane of order n is one such that each line has n points (for an affine plane), or such that each line has n + 1 points (for a projective plane). One major open question in finite geometry is: Is the order of a finite plane always a prime power? This is conjectured to be true, but has not been proven. Affine and projective planes of order n exist whenever n is a prime power (a prime number raised to a positive integer exponent), by using affine and projective planes over the finite field with n elements. Planes not derived from finite fields also exist, but all known examples have order a prime power. The best general result to date is the Bruck–Ryser theorem of 1949, which states: If n is a positive integer of the form 4k + 1 or 4k + 2 and n is not equal to the sum of two integer squares, then n does not occur as the order of a finite plane. The smallest integer that is not a prime power and not covered by the Bruck–Ryser theorem is 10; 10 is of the form 4k + 2, but it is equal to the sum of squares 1² + 3². The non-existence of a finite plane of order 10 was proven in a computer-assisted proof that finished in 1989; see (Lam 1991) for details. The next smallest number to consider is 12, for which neither a positive nor a negative result has been proved.

Finite spaces of 3 or more dimensions


For some important differences between finite plane geometry and the geometry of higher-dimensional finite spaces, see axiomatic projective space. For a discussion of higher-dimensional finite spaces in general, see, for instance, the works of J.W.P. Hirschfeld.

Finite three-spaces
Associated with every field K is a (3-dimensional) projective space whose points, lines, and planes can be identified with the 1-, 2-, and 3-dimensional subspaces of the 4-dimensional vector space over the field K. There is a set of axioms for projective spaces. The smallest projective space over the field Z2 has 15 points, 35 lines, and 15 planes. Each of the 15 planes contains 7 points and 7 lines. As geometries, they are isomorphic to the Fano plane. Every point is contained in 7 lines and every line contains three points. In addition, two distinct points are contained in exactly one line and two planes intersect in exactly one line. In 1892, Gino Fano was the first to consider such a finite geometry: a three-dimensional geometry containing 15 points, 35 lines, and 15 planes, with each plane containing 7 points and 7 lines. In synthetic projective geometry the undefined elements are taken as points and lines. A plane and a three-space may then be defined using the postulates of incidence and existence.

Postulates of Incidence
P-1: If A and B are distinct points, there is at least one line on both A and B.
P-2: If A and B are distinct points, there is not more than one line on both A and B.
P-3: If A, B, and C are points not all on the same line, and D and E are distinct points such that B, C, and D are on a line and C, A, and E are on a line, there is a point F such that A, B, and F are on a line and also D, E, and F are on a line.


Postulates of Existence
P-4: There exists at least one line.
P-5: There are at least three distinct points on every line.
P-6: Not all points are on the same line.
P-7: Not all points are on the same plane.
P-8: If S3 is a three-space, every point is on S3.
Figure 1

In particular Postulates P-1 through P-8 are satisfied by the points, lines, and planes of the three-space whose points are indicated in Figure 1. This three-space contains exactly 15 points. There are also many very different finite projective three-spaces on which these Postulates are defined.

Finite n-spaces
In general, for any positive integer n, a geometry on an n-space is called an n-dimensional geometry. A four-dimensional projective geometry may be obtained by replacing P-8 by P-8′: Not all points are on the same three-space, and by a postulate of closure, P-8″: If S4 is a four-space, every point is on S4. In general, an n-dimensional projective geometry (n = 4, 5, ...) may be obtained by replacing P-8 by postulates stating that: (i) Not all points are on the same S3, S4, ..., Sn−1, (ii) If Sn is an n-space, every point is on Sn. The study of these higher-dimensional spaces (n > 3) has many important applications in advanced mathematical theories.

Kirkman's Schoolgirl Problem


This 3-space can be modeled by Kirkman's schoolgirl problem, which states: Fifteen schoolgirls walk each day in five groups of three. Arrange the girls' walks for a week so that in that time, each pair of girls walks together in a group just once. (See the answer in the link.) There are 35 different combinations for the girls to walk together. There are also 7 days of the week, and 3 girls in each group. The diagram of this problem provides a visual representation of the Fano space. Diagrams for this problem can be found at [1]. Each color represents a day of the week (seven colors: blue, green, yellow, purple, red, black, and orange). The definition of a Fano space says that each line contains three points, and the figure reflects this by showing 3 points on every line. This is the basis for the answer to the schoolgirl problem. This figure is then rotated 7 times. There are 5 different lines for each day, multiplied by 7 (days), and the result is 35. Then, there are 15 points, and there are also 7 starting lines on each point. This then gives a representation of the Fano space.

References
[1] http://home.wlu.edu/~mcraea/finite_geometry/Applications/Prob31SchoolGirl/problem31.html

Lynn Margaret Batten: Combinatorics of Finite Geometries. Cambridge University Press.
Dembowski: Finite Geometries.
Lam, C. W. H. (1991), "The Search for a Finite Projective Plane of Order 10" (http://www.cecm.sfu.ca/organics/papers/lam/), American Mathematical Monthly 98 (4): 305–318.
Eves, Howard. A Survey of Geometry: Volume One. Boston: Allyn and Bacon Inc., 1963.
Meserve, Bruce E. Fundamental Concepts of Geometry. New York: Dover Publications, 1983.
Polster, Burkard. Yea Why Try Her Raw Wet Hat: A Tour of the Smallest Projective Space. Volume 21, Number 2, 1999. (http://www.springerlink.com/content/cr23u8j8128g77tw/fulltext.pdf)
Problem 31: Kirkman's schoolgirl problem (http://home.wlu.edu/~mcraea/finite_geometry/Applications/Prob31SchoolGirl/problem31.html)


External links
Weisstein, Eric W., " finite geometry (http://mathworld.wolfram.com/FiniteGeometry.html)" from MathWorld. Essay on Finite Geometry by Michael Greenberg (http://www.cims.nyu.edu/vigrenew/ug_research/ MichaelGreenberg.pdf) Finite geometry (Script) (http://www.math.mtu.edu/~jbierbra/HOMEZEUGS/finitegeom04.ps) Finite Geometry Resources (http://cage.ugent.be/geometry/links.php) J. W. P. Hirschfeld (http://www.maths.sussex.ac.uk/Staff/JWPH/), researcher on finite geometries Books by Hirschfeld on finite geometry (http://www.maths.sussex.ac.uk/Staff/JWPH/RESEARCH/ index.html) AMS Column: Finite Geometries? (http://www.ams.org/featurecolumn/archive/finitegeometries.html) Galois Geometry and Generalized Polygons (http://cage.ugent.be/~fdc/intensivecourse/intensivecourse_final. html), intensive course in 1998 Carnahan, Scott (2007-10-27), "Small finite sets" (http://sbseminar.wordpress.com/2007/10/27/ small-finite-sets/), Secret Blogging Seminar (http://sbseminar.wordpress.com/), notes on a talk by Jean-Pierre Serre on canonical geometric properties of small finite sets.

Finite topological space


In mathematics, a finite topological space is a topological space for which the underlying point set is finite. That is, it is a topological space for which there are only finitely many points. While topology is mostly interesting only for infinite spaces, finite topological spaces are often used to provide examples of interesting phenomena or counterexamples to plausible sounding conjectures. William Thurston has called the study of finite topologies in this sense "an oddball topic that can lend good insight to a variety of questions."[1]

Topologies on a finite set


As a bounded sublattice
A topology on a set X is defined as a subset of P(X), the power set of X, which includes both ∅ and X and is closed under finite intersections and arbitrary unions. Since the power set of a finite set is finite there can be only finitely many open sets (and only finitely many closed sets). Therefore one only need check that the union of a finite number of open sets is open. This leads to a simpler description of topologies on a finite set. Let X be a finite set. A topology τ on X is a subset τ of P(X) such that

1. ∅ ∈ τ and X ∈ τ
2. if U and V are in τ then U ∪ V ∈ τ
3. if U and V are in τ then U ∩ V ∈ τ

A topology on a finite set is therefore nothing more than a sublattice of (P(X), ⊆) which includes both the bottom element (∅) and the top element (X). Every finite bounded lattice is complete since the meet or join of any family of elements can always be reduced to a meet or join of two elements. It follows that in a finite topological space the union or intersection of an arbitrary family of open sets (resp. closed sets) is open (resp. closed).
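Because only finitely many families of subsets exist, this description can be checked exhaustively; the brute-force sketch below (ours) recovers the count of 29 topologies on a three-point set that appears later in the article.

```python
from itertools import chain, combinations

X = frozenset({0, 1, 2})
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]

def is_topology(family):
    """Check the finite-set criterion: contains empty set and X, closed under pairwise union/intersection."""
    if frozenset() not in family or X not in family:
        return False
    return all(u | v in family and u & v in family for u in family for v in family)

families = chain.from_iterable(combinations(subsets, r) for r in range(len(subsets) + 1))
print(sum(1 for f in families if is_topology(frozenset(f))))   # 29
```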


Specialization preorder
Topologies on a finite set X are in one-to-one correspondence with preorders on X. Recall that a preorder on X is a binary relation on X which is reflexive and transitive. Given a (not necessarily finite) topological space X we can define a preorder ≤ on X by

x ≤ y if and only if x ∈ cl{y}

where cl{y} denotes the closure of the singleton set {y}. This preorder is called the specialization preorder on X. Every open set U of X will be an upper set with respect to ≤ (i.e. if x ∈ U and x ≤ y then y ∈ U). Now if X is finite, the converse is also true: every upper set is open in X. So for finite spaces, the topology on X is uniquely determined by ≤. Going in the other direction, suppose (X, ≤) is a preordered set. Define a topology τ on X by taking the open sets to be the upper sets with respect to ≤. Then the relation ≤ will be the specialization preorder of (X, τ). The topology defined in this way is called the Alexandrov topology determined by ≤. The equivalence between preorders and finite topologies can be interpreted as a version of Birkhoff's representation theorem, an equivalence between finite distributive lattices (the lattice of open sets of the topology) and partial orders (the partial order of equivalence classes of the preorder). This correspondence also works for a larger class of spaces called finitely generated spaces. Finitely generated spaces can be characterized as the spaces in which an arbitrary intersection of open sets is open. Finite topological spaces are a special class of finitely generated spaces.
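The correspondence is easy to make computational: given a preorder, the open sets of the Alexandrov topology are exactly the upper sets. A small sketch (ours), using the preorder a ≤ b on two points, recovers the Sierpiński topology discussed below.

```python
from itertools import combinations

def alexandrov_topology(points, leq):
    """Open sets = upper sets of the preorder leq(x, y), read as x <= y."""
    opens = []
    for r in range(len(points) + 1):
        for subset in combinations(points, r):
            u = set(subset)
            # u is an upper set if x in u and x <= y imply y in u
            if all(y in u for x in u for y in points if leq(x, y)):
                opens.append(frozenset(u))
    return opens

points = ['a', 'b']
leq = lambda x, y: x == y or (x, y) == ('a', 'b')   # a <= b
print(sorted(alexandrov_topology(points, leq), key=len))
# [frozenset(), frozenset({'b'}), frozenset({'a', 'b'})] -- the Sierpinski space with {b} open
```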

Examples
0 or 1 points
There is a unique topology on the empty set ∅. The only open set is the empty one. Indeed, this is the only subset of ∅. Likewise, there is a unique topology on a singleton set {a}. Here the open sets are ∅ and {a}. This topology is both discrete and trivial, although in some ways it is better to think of it as a discrete space since it shares more properties with the family of finite discrete spaces. For any topological space X there is a unique continuous function from ∅ to X, namely the empty function. There is also a unique continuous function from X to the singleton space {a}, namely the constant function to a. In the language of category theory the empty space serves as an initial object in the category of topological spaces while the singleton space serves as a terminal object.

2 points
Let X = {a,b} be a set with 2 elements. There are four distinct topologies on X:
1. {∅, {a,b}} (the trivial topology)
2. {∅, {a}, {a,b}}
3. {∅, {b}, {a,b}}
4. {∅, {a}, {b}, {a,b}} (the discrete topology)

The second and third topologies above are easily seen to be homeomorphic. The function from X to itself which swaps a and b is a homeomorphism. A topological space homeomorphic to one of these is called a Sierpiński space. So, in fact, there are only three inequivalent topologies on a two point set: the trivial one, the discrete one, and the Sierpiński topology. The specialization preorder on the Sierpiński space {a,b} with {b} open is given by: a ≤ a, b ≤ b, and a ≤ b.


3 points
Let X = {a,b,c} be a set with 3 elements. There are 29 distinct topologies on X but only 9 inequivalent topologies:
1. {∅, {a,b,c}}
2. {∅, {c}, {a,b,c}}
3. {∅, {a,b}, {a,b,c}}
4. {∅, {c}, {a,b}, {a,b,c}}
5. {∅, {c}, {b,c}, {a,b,c}}
6. {∅, {c}, {a,c}, {b,c}, {a,b,c}}
7. {∅, {a}, {b}, {a,b}, {a,b,c}}
8. {∅, {b}, {c}, {a,b}, {b,c}, {a,b,c}}
9. {∅, {a}, {b}, {c}, {a,b}, {a,c}, {b,c}, {a,b,c}}

The last 5 of these are all T0. The first one is trivial, while in 2, 3, and 4 the points a and b are topologically indistinguishable.

Properties
Compactness and countability
Every finite topological space is compact since any open cover must already be finite. Indeed, compact spaces are often thought of as a generalization of finite spaces since they share many of the same properties. Every finite topological space is also second-countable (there are only finitely many open sets) and separable (since the space itself is countable).

Separation axioms
If a finite topological space is T1 (in particular, if it is Hausdorff) then it must, in fact, be discrete. This is because the complement of a point is a finite union of closed points and therefore closed. It follows that each point must be open. Therefore, any finite topological space which is not discrete cannot be T1, Hausdorff, or anything stronger. However, it is possible for a non-discrete finite space to be T0. In general, two points x and y are topologically indistinguishable if and only if x ≤ y and y ≤ x, where ≤ is the specialization preorder on X. It follows that a space X is T0 if and only if the specialization preorder ≤ on X is a partial order. There are numerous partial orders on a finite set. Each defines a unique T0 topology. Similarly, a space is R0 if and only if the specialization preorder is an equivalence relation. Given any equivalence relation on a finite set X the associated topology is the partition topology on X. The equivalence classes will be the classes of topologically indistinguishable points. Since the partition topology is pseudometrizable, a finite space is R0 if and only if it is completely regular. Non-discrete finite spaces can also be normal. The excluded point topology on any finite set is a completely normal T0 space which is non-discrete.


Connectivity
Connectivity in a finite space X is best understood by considering the specialization preorder ≤ on X. We can associate to any preordered set X a directed graph Γ by taking the points of X as vertices and drawing an edge x → y whenever x ≤ y. The connectivity of a finite space X can be understood by considering the connectivity of the associated graph Γ. In any topological space, if x ≤ y then there is a path from x to y. One can simply take f(0) = x and f(t) = y for t > 0. It is easy to verify that f is continuous. It follows that the path components of a finite topological space are precisely the (weakly) connected components of the associated graph Γ. That is, there is a topological path from x to y if and only if there is an undirected path between the corresponding vertices of Γ. Every finite space is locally path-connected since the set

U_x = ⋂ {U open : x ∈ U}

is a path-connected open neighborhood of x that is contained in every other neighborhood. In other words, this single set forms a local base at x. Therefore, a finite space is connected if and only if it is path-connected. The connected components are precisely the path components. Each such component is both closed and open in X. Finite spaces may have stronger connectivity properties. A finite space X is:
hyperconnected if and only if there is a greatest element with respect to the specialization preorder. This is an element whose closure is the whole space X.
ultraconnected if and only if there is a least element with respect to the specialization preorder. This is an element whose only neighborhood is the whole space X.
For example, the particular point topology on a finite space is hyperconnected while the excluded point topology is ultraconnected. The Sierpiński space is both.

Additional structure
A finite topological space is pseudometrizable if and only if it is R0. In this case, one possible pseudometric is given by

d(x, y) = 0 if x ≡ y, and d(x, y) = 1 otherwise,

where x ≡ y means x and y are topologically indistinguishable. A finite topological space is metrizable if and only if it is discrete. Likewise, a topological space is uniformizable if and only if it is R0. The uniform structure will be the pseudometric uniformity induced by the above pseudometric.

Algebraic topology
Perhaps surprisingly, there are finite topological spaces with nontrivial fundamental groups. A simple example is the pseudocircle, which is a space X with four points, two of which are open and two of which are closed. There is a continuous map from the unit circle S1 to X which is a weak homotopy equivalence (i.e. it induces an isomorphism of homotopy groups). It follows that the fundamental group of the pseudocircle is infinite cyclic. More generally it has been shown that for any finite abstract simplicial complex K, there is a finite topological space XK and a weak homotopy equivalence f : |K| → XK where |K| is the geometric realization of K. It follows that the homotopy groups of |K| and XK are isomorphic.


Number of topologies on a finite set


As discussed above, topologies on a finite set are in one-to-one correspondence with preorders on the set, and T0 topologies are in one-to-one correspondence with partial orders. Therefore the number of topologies on a finite set is equal to the number of preorders and the number of T0 topologies is equal to the number of partial orders. The table below lists the number of distinct (T0) topologies on a set with n elements. It also lists the number of inequivalent (i.e. nonhomeomorphic) topologies.

Number of topologies on a set with n points


n     Distinct topologies   Distinct T0 topologies   Inequivalent topologies   Inequivalent T0 topologies
0     1                     1                        1                         1
1     1                     1                        1                         1
2     4                     3                        3                         2
3     29                    19                       9                         5
4     355                   219                      33                        16
5     6942                  4231                     139                       63
6     209527                130023                   718                       318
7     9535241               6129859                  4535                      2045
8     642779354             431723379                35979                     16999
9     63260289423           44511042511              363083                    183231
10    8977053873043         6611065248783            4717687                   2567284
OEIS  A000798               A001035                  A001930                   A000112

Let T(n) denote the number of distinct topologies on a set with n points. There is no known simple formula to compute T(n) for arbitrary n. The Online Encyclopedia of Integer Sequences presently lists T(n) for n ≤ 18. The number of distinct T0 topologies on a set with n points, denoted T0(n), is related to T(n) by the formula

T(n) = ∑_{k=0}^{n} S(n,k) T0(k)

where S(n,k) denotes the Stirling number of the second kind.
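The formula can be checked numerically against the table above; the sketch below (ours) computes T(n) from the known values of T0(n).

```python
def stirling2(n, k):
    """Stirling numbers of the second kind, S(n, k)."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

T0 = [1, 1, 3, 19, 219, 4231, 130023, 6129859, 431723379]   # distinct T0 topologies, n = 0..8
T = [sum(stirling2(n, k) * T0[k] for k in range(n + 1)) for n in range(len(T0))]
print(T)   # [1, 1, 4, 29, 355, 6942, 209527, 9535241, 642779354]
```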

References
[1] Thurston, William P. (April 1994), "On Proof and Progress in Mathematics", Bulletin of the American Mathematical Society 30 (2): 161–177, arXiv:math/9404236, doi:10.1090/S0273-0979-1994-00502-6.

Finite topological spaces (http://www.maths.ed.ac.uk/~aar/papers/stong2.pdf), RE Stong - Trans. Amer. Math. Soc, 1966 Singular homology groups and homotopy groups of finite topological spaces, Michael C. McCord, Duke Math. J. Volume 33, Number 3 (1966), 465-474. Barmak, Jonathan (2011), Algebraic Topology of Finite Topological Spaces and Applications, Springer, ISBN9783642220029. Merrifield, Richard; Simmons, Howard (1989), Topological methods in chemistry, Wiley, ISBN9780471838173.


External links
Notes and reading materials on finite topological spaces (http://math.uchicago.edu/~may/finite.html), J.P. MAY

Fishburn–Shepp inequality
In combinatorial mathematics, the Fishburn–Shepp inequality is an inequality for the number of extensions of partial orders to linear orders, found by Fishburn (1984) and Shepp (1982). It states that if x, y, and z are incomparable elements of a finite poset, then

P(x < y) P(x < z) < P(x < y and x < z),

where P(*) is the probability that a linear order < extending the partial order has the property *. In other words the probability that x < z strictly increases if one adds the condition that x < y. In the language of conditional probability,

P(x < z | x < y) > P(x < z).

The proof uses the Ahlswede–Daykin inequality.
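For a small poset the inequality can be verified by brute force over all linear extensions. In the sketch below (ours) the poset has elements {w, x, y, z} with the single relation w < y, so x, y, z are pairwise incomparable as the theorem requires.

```python
from itertools import permutations

poset = {('w', 'y')}                      # the only strict relation: w < y
elements = ['w', 'x', 'y', 'z']           # x, y and z are pairwise incomparable

def respects(order):
    return all(order.index(a) < order.index(b) for a, b in poset)

extensions = [order for order in permutations(elements) if respects(order)]

def prob(*pairs):
    """Probability, over uniform linear extensions, that a < b for every given pair."""
    hits = [e for e in extensions if all(e.index(a) < e.index(b) for a, b in pairs)]
    return len(hits) / len(extensions)

lhs = prob(('x', 'y')) * prob(('x', 'z'))
rhs = prob(('x', 'y'), ('x', 'z'))
print(lhs, rhs)        # 0.333... < 0.4166..., as the inequality predicts
assert lhs < rhs
```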

References
Fishburn, Peter C. (1984), "A correlational inequality for linear extensions of a poset", Order 1 (2): 127–137, doi:10.1007/BF00565648, ISSN 0167-8094, MR 764320
Fishburn, P. C.; Shepp, L. A. (2001), "Fishburn–Shepp inequality" [1], in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1556080104
Shepp, L. A. (1982), "The XYZ conjecture and the FKG inequality", The Annals of Probability (Institute of Mathematical Statistics) 10 (3): 824–827, doi:10.1214/aop/1176993791, ISSN 0091-1798, JSTOR 2243391, MR 659563

References
[1] http://www.encyclopediaofmath.org/index.php?title=f/f110080


Free convolution
Free convolution is the free probability analog of the classical notion of convolution of probability measures. Due to the non-commutative nature of free probability theory, one has to talk separately about additive and multiplicative free convolution, which arise from addition and multiplication of free random variables (see below; in the classical case, what would be the analog of free multiplicative convolution can be reduced to additive convolution by passing to logarithms of random variables). The notion of free convolution was introduced by Voiculescu in the early 1980s in the papers [1] and [2].

Free additive convolution


Let μ and ν be two probability measures on the real line, and assume that X is a random variable with law μ and Y is a random variable with law ν. Assume finally that X and Y are freely independent. Then the free additive convolution μ ⊞ ν is the law of X + Y.

In many cases, it is possible to compute the probability measure μ ⊞ ν explicitly by using complex-analytic techniques and the R-transform of the measures μ and ν.

Free multiplicative convolution


Let μ and ν be two probability measures on the interval [0, ∞), and assume that X is a random variable with law μ and Y is a random variable with law ν. Assume finally that X and Y are freely independent. Then the free multiplicative convolution μ ⊠ ν is the law of X^{1/2} Y X^{1/2} (or, equivalently, the law of Y^{1/2} X Y^{1/2}). A similar definition can be made in the case of laws supported on the unit circle.

Explicit computations of multiplicative free convolution can be carried out using complex-analytic techniques and the S-transform.

Applications of free convolution


Free convolution can be used to give a proof of the free central limit theorem. Free convolution can be used to compute the laws and spectra of sums or products of random variables which are free. Such examples include: random walk operators on free groups (Kesten measures); and asymptotic distribution of eigenvalues of sums or products of independent random matrices. Through its applications to random matrices, free convolution has some strong connections with other works on G-estimation of Girko. The applications in wireless communications, finance and biology have provided a useful framework when the number of observations is of the same order as the dimensions of the system.
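The random-matrix connection can be illustrated numerically: for large independent Wigner-type matrices, the eigenvalue distribution of the sum approximates the free additive convolution of the two individual limiting spectra (here, semicircle ⊞ semicircle, another semicircle with the variances added). The NumPy sketch below is ours, not part of the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

def wigner(size):
    """Symmetric Gaussian (GOE-type) matrix normalized so its spectrum approaches the semicircle on [-2, 2]."""
    a = rng.normal(size=(size, size))
    return (a + a.T) / np.sqrt(2 * size)

A, B = wigner(n), wigner(n)
eigs_sum = np.linalg.eigvalsh(A + B)

# Asymptotic freeness of A and B means the spectrum of A + B approximates
# semicircle(variance 1) boxplus semicircle(variance 1) = semicircle(variance 2),
# which is supported on [-2*sqrt(2), 2*sqrt(2)] ~ [-2.83, 2.83].
print(eigs_sum.min(), eigs_sum.max())
```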


References
[1] Voiculescu, D., Addition of certain non-commuting random variables, J. Funct. Anal. 66 (1986), 323–346
[2] Voiculescu, D., Multiplication of certain non-commuting random variables, J. Operator Theory 18 (1987), 223–235.

"Free Deconvolution for Signal Processing Applications", O. Ryan and M. Debbah, ISIT 2007, pp. 18461850

External links
Alcatel Lucent Chair on Flexible Radio (http://www.supelec.fr/d2ri/flexibleradio/Welcome.html) http://www.cmapx.polytechnique.fr/~benaych (http://www.cmapx.polytechnique.fr/~benaych) http://folk.uio.no/oyvindry (http://folk.uio.no/oyvindry)

Fuzzy transportation
The aim of fuzzy transportation is to find the least transportation cost of some commodities through a capacitated network when the supply and demand of nodes and the capacity and cost of edges are represented as fuzzy numbers. This problem is a new branch of combinatorial optimization and network flow problems. Combinatorial algorithms can be provided to solve the fuzzy transportation problem and find the fuzzy optimal flow(s). Such methods are capable of handling the decision maker's attitude toward risk, and some applications of this standpoint have been presented in industry. Liu and Kao pursued this line of work to find better solutions for this problem (Network flow problems with fuzzy arc lengths, IEEE Transactions on Systems, Man and Cybernetics Part B: Cybernetics, 34 (2004) 765-769). It is interesting to check which methods of traditional fuzzy optimization can be extended to combinatorial optimization problems, e.g., transformations that maintain the nice structure of the problem. Valuable algorithms can then be proposed for fuzzy combinatorial optimization that take the uncertainty of real problems into account. Fuzzy transportation is also a reasonable approach for finding special solutions for hazardous material transportation, because optimistic and pessimistic standpoints can be taken into account.


Generalized arithmetic progression


In mathematics, a multiple arithmetic progression, generalized arithmetic progression, or k-dimensional arithmetic progression, is a set of integers constructed as an arithmetic progression is, but allowing several possible differences. So, for example, we start at 17 and may add a multiple of 3 or of 5, repeatedly. In algebraic terms we look at integers

a + mb + nc + ...

where a, b, c and so on are fixed, and m, n and so on are confined to some ranges, 0 ≤ m ≤ M and so on, for a finite progression. The number k, that is the number of permissible differences, is called the dimension of the generalized progression.

More generally, let L(C; P) be the set of all elements x in N^n of the form

x = c + k_1 p_1 + k_2 p_2 + ... + k_m p_m,

with c in C, k_1, ..., k_m in N, and p_1, ..., p_m in P. L(C; P) is said to be a linear set if C consists of exactly one element and P is finite. A subset of N^n is said to be semilinear if it is a finite union of linear sets.

Geometric combinatorics
Geometric combinatorics is a branch of mathematics in general and combinatorics in particular. It includes a number of subareas such as polyhedral combinatorics (the study of faces of convex polyhedra), convex geometry (the study of convex sets, in particular combinatorics of their intersections), and discrete geometry, which in turn has many applications to computational geometry. Other important areas include metric geometry of polyhedra, such as the Cauchy theorem on rigidity of convex polytopes. The study of regular polytopes, Archimedean solids, and kissing numbers is also a part of geometric combinatorics. Special polytopes are also considered, such as the permutohedron, associahedron and Birkhoff polytope. Also studied are finite geometries.

Further reading
Topics in Geometric Combinatorics [1] Geometric Combinatorics [2], Edited by: Ezra Miller and Victor Reiner Combinatorics of Finite Geometries [3]

References
[1] http://www.cis.upenn.edu/~cis610/topics.pdf
[2] http://www.ams.org/bookstore?fn=20&arg1=geotopo&item=PCMS-13
[3] http://scholar.google.co.uk/scholar?q=%22Combinatorics+of+Finite+Geometries%22


Glaisher's theorem
In number theory, Glaisher's theorem is an identity useful to the study of integer partitions. It is named for James Whitbread Lee Glaisher. It states that the number of partitions of an integer n into parts not divisible by d is equal to the number of partitions of the form

n = λ1 + λ2 + ... + λr,

where λ1 ≥ λ2 ≥ ... ≥ λr and λi ≥ λ(i+d−1) + 1,

that is, partitions in which no part is repeated d or more times.
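The identity is easy to check by brute force for small n and d; the helper below (ours) enumerates partitions directly.

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def glaisher_check(n, d):
    no_multiples = sum(1 for p in partitions(n) if all(part % d != 0 for part in p))
    low_repeats = sum(1 for p in partitions(n) if all(p.count(v) < d for v in set(p)))
    return no_multiples == low_repeats

assert all(glaisher_check(n, d) for n in range(1, 21) for d in range(2, 6))
```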

References
D. H. Lehmer (1946). "Two nonexistence theorems on partitions" [1]. Bull. Amer. Math. Soc. 52 (6): 538–544. doi:10.1090/S0002-9904-1946-08605-X.

References
[1] http://projecteuclid.org/euclid.bams/1183509416

Graph dynamical system


In mathematics, the concept of graph dynamical systems can be used to capture a wide range of processes taking place on graphs or networks. A major theme in the mathematical and computational analysis of GDSs is to relate their structural properties (e.g. the network connectivity) and the global dynamics that result. The work on GDSs considers finite graphs and finite state spaces. As such, the research typically involves techniques from, e.g., graph theory, combinatorics, algebra, and dynamical systems rather than differential geometry. In principle, one could define and study GDSs over an infinite graph (e.g. cellular automata over the integer lattice, or interacting particle systems), as well as GDSs with infinite state space (e.g. as in coupled map lattices); see, e.g., Wu.[1] In the following everything is implicitly assumed to be finite unless stated otherwise.

Formal definition
A graph dynamical system is constructed from the following components: A finite graph Y with vertex set v[Y] = {1,2, ... , n}. Depending on the context the graph can be directed or undirected. A state xv for each vertex v of Y taken from a finite set K. The system state is the n-tuple x = (x1, x2, ... , xn), and x[v] is the tuple consisting of the states associated to the vertices in the 1-neighborhood of v in Y (in some fixed order). A vertex function fv for each vertex v. The vertex function maps the state of vertex v at time t to the vertex state at time t+1 based on the states associated to the 1-neighborhood of v in Y. An update scheme specifying the mechanism by which the mapping of individual vertex states is carried out so as to induce a discrete dynamical system with map F: Kn Kn.

The phase space associated to a dynamical system with map F: K^n → K^n is the finite directed graph with vertex set K^n and directed edges (x, F(x)). The structure of the phase space is governed by the properties of the graph Y, the vertex functions (fi)i, and the update scheme. The research in this area seeks to infer phase space properties based on the structure of the system constituents. The analysis has a local-to-global character.


Generalized cellular automata (GCA)


If, for example, the update scheme consists of applying the vertex functions synchronously one obtains the class of generalized cellular automata (CA). In this case, the global map F: K^n → K^n is given by

F(x)_v = f_v(x[v]) for each vertex v.

This class is referred to as generalized cellular automata since the classical or standard cellular automata are typically defined and studied over regular graphs or grids, and the vertex functions are typically assumed to be identical. Example: Let Y be the circle graph on vertices {1,2,3,4} with edges {1,2}, {2,3}, {3,4} and {1,4}, denoted Circ4. Let K = {0,1} be the state space for each vertex and use the function nor3 : K^3 → K defined by nor3(x,y,z) = (1+x)(1+y)(1+z) with arithmetic modulo 2 for all vertex functions. Then for example the system state (0,1,0,0) is mapped to (0,0,0,1) using a synchronous update. All the transitions are shown in the phase space below.
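The synchronous nor-update on Circ4 is small enough to compute directly; the sketch below (ours, with vertices relabeled 0..3) reproduces the transition (0,1,0,0) → (0,0,0,1) and builds the whole phase space.

```python
from itertools import product

def nor3(x, y, z):
    return (1 + x) * (1 + y) * (1 + z) % 2

# closed 1-neighborhoods on Circ4 (vertices 1..4 relabeled as 0..3)
neighbors = {0: (3, 0, 1), 1: (0, 1, 2), 2: (1, 2, 3), 3: (2, 3, 0)}

def synchronous_step(state):
    return tuple(nor3(*(state[j] for j in neighbors[v])) for v in range(4))

assert synchronous_step((0, 1, 0, 0)) == (0, 0, 0, 1)
phase_space = {x: synchronous_step(x) for x in product((0, 1), repeat=4)}
```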

Sequential dynamical systems (SDS)


If the vertex functions are applied asynchronously in the sequence specified by a word w = (w1, w2, ..., wm) or permutation π = (π1, ..., πn) of v[Y] one obtains the class of sequential dynamical systems (SDS).[2] In this case it is convenient to introduce the Y-local maps Fi constructed from the vertex functions by

F_i(x) = (x_1, ..., x_{i−1}, f_i(x[i]), x_{i+1}, ..., x_n).

The SDS map F = [FY, w] : K^n → K^n is the function composition

F = F_{w(m)} ∘ F_{w(m−1)} ∘ ... ∘ F_{w(2)} ∘ F_{w(1)}.

If the update sequence is a permutation one frequently speaks of a permutation SDS to emphasize this point. Example: Let Y be the circle graph on vertices {1,2,3,4} with edges {1,2}, {2,3}, {3,4} and {1,4}, denoted Circ4. Let K = {0,1} be the state space for each vertex and use the function nor3 : K^3 → K defined by nor3(x,y,z) = (1+x)(1+y)(1+z) with arithmetic modulo 2 for all vertex functions. Using the update sequence (1,2,3,4) the system state (0,1,0,0) is mapped to (0,0,1,0). All the system state transitions for this sequential dynamical system are shown in the phase space below.
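The sequential version differs only in that the vertices are updated one at a time in the given order; this sketch (ours, again with vertices relabeled 0..3) reproduces the transition (0,1,0,0) → (0,0,1,0) under the update sequence (1,2,3,4).

```python
def nor3(x, y, z):
    return (1 + x) * (1 + y) * (1 + z) % 2

neighbors = {0: (3, 0, 1), 1: (0, 1, 2), 2: (1, 2, 3), 3: (2, 3, 0)}   # Circ4, vertices 1..4 as 0..3

def sds_step(state, order):
    state = list(state)
    for v in order:                                   # apply the Y-local maps in sequence
        state[v] = nor3(*(state[j] for j in neighbors[v]))
    return tuple(state)

assert sds_step((0, 1, 0, 0), order=(0, 1, 2, 3)) == (0, 0, 1, 0)
```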


Stochastic graph dynamical systems


From, e.g., the point of view of applications it is interesting to consider the case where one or more of the components of a GDS contains stochastic elements. Motivating applications could include processes that are not fully understood (e.g. dynamics within a cell) and where certain aspects for all practical purposes seem to behave according to some probability distribution. There are also applications governed by deterministic principles whose description is so complex or unwieldy that it makes sense to consider probabilistic approximations. Every element of a graph dynamical system can be made stochastic in several ways. For example, in a sequential dynamical system the update sequence can be made stochastic. At each iteration step one may choose the update sequence w at random from a given distribution of update sequences with corresponding probabilities. The matching probability space of update sequences induces a probability space of SDS maps. A natural object to study in this regard is the Markov chain on state space induced by this collection of SDS maps. This case is referred to as update sequence stochastic GDS and is motivated by, e.g., processes where "events" occur at random according to certain rates (e.g. chemical reactions), synchronization in parallel computation/discrete event simulations, and in computational paradigms described later. This specific example with stochastic update sequence illustrates two general facts for such systems: when passing to a stochastic graph dynamical system one is generally led to (1) a study of Markov chains (with specific structure governed by the constituents of the GDS), and (2) the resulting Markov chains tend to be large having an exponential number of states. A central goal in the study of stochastic GDS is to be able to derive reduced models. One may also consider the case where the vertex functions are stochastic, i.e., function stochastic GDS. For example, Random Boolean networks are examples of function stochastic GDS using a synchronous update scheme and where the state space is K = {0,1}. Finite probabilistic cellular automata (PCA) is another example of function stochastic GDS. In principle the class of Interacting particle systems (IPS) covers finite and infinite PCA, but in practice the work on IPS is largely concerned with the infinite case since this allows one to introduce more interesting topologies on state space.


Applications
Graph dynamical systems constitute a natural framework for capturing distributed systems such as biological networks and epidemics over social networks, many of which are frequently referred to as complex systems.

References
[1] Wu, Chai Wah (2005). "Synchronization in networks of nonlinear dynamical systems coupled via a directed graph". Nonlinearity 18 (3): 1057–1064. doi:10.1088/0951-7715/18/3/007.
[2] Mortveit, Henning S.; Reidys, Christian M. (2007). An introduction to sequential dynamical systems. Universitext. New York: Springer Verlag. ISBN 978-0-387-30654-4.

External links
Graph Dynamical Systems A Mathematical Framework for Interaction-Based Systems, Their Analysis and Simulations by Henning Mortveit (http://legacy.samsi.info/200809/algebraic/presentations/discrete/friday/ samsi-05-dec-08.pdf)

Further reading
Macauley, Matthew; Mortveit, Henning S. (2009). "Cycle equivalence of graph dynamical systems". Nonlinearity 22 (2): 421–436. doi:10.1088/0951-7715/22/2/010.
Golubitsky, Martin; Stewart, Ian (2003). The Symmetry Perspective. Basel: Birkhauser. ISBN 0817621717.

Group testing
In combinatorial mathematics, group testing is a set of problems with the objective of reducing the cost of identifying certain elements of a set.

Background
Robert Dorfman's paper in 1943 introduced the field of (combinatorial) group testing. The motivation arose during the Second World War when the United States Public Health Service and the Selective Service embarked upon a large-scale project. The objective was to weed out all syphilitic men called up for induction. However, syphilis testing back then was expensive, and testing every soldier individually would have been very costly and inefficient. A basic breakdown of a test is:
Draw a sample from a given individual
Perform the required tests
Determine the presence or absence of syphilis
Say we have N soldiers; then this method of testing leads to N tests. If a large fraction (say 70-75%) of the people were infected, then the method of individual testing would be reasonable. Our goal, however, is to achieve effective testing in the more likely scenario where it does not make sense to test 100,000 people to get (say) 10 positives. The feasibility of a more effective testing scheme hinges on the following property: we can combine blood samples and test a combined sample together to check whether at least one soldier has syphilis.


Formalization of the problem


We now formalize the group testing problem abstractly. Let N be the total number of soldiers and d an upper bound on the number of infected soldiers, where d ≤ N. The (unknown) information about which soldier is infected is described as a vector x = (x_1, ..., x_N) in {0,1}^N, where x_i = 1 if the i-th item is infected and x_i = 0 otherwise. The Hamming weight of x is defined as the number of 1s in x; hence wt(x) ≤ d. The vector x is an implicit input, since we do not know the positions of the 1s in the input. The only way to find them out is to run the tests.

Formal notion of a test
A query/test S is a subset of {1, ..., N}. The answer to the query S on the input x is defined as follows: the test returns 1 if x_i = 1 for at least one i in S, and 0 otherwise. Note that the addition operation used when summing the x_i over S is the logical OR.

Goal
Compute x while minimizing the number of tests required to determine it, i.e. identify the positions of all the 1s in x with as few tests as possible.

The question boils down to one of combinatorial searching. Combinatorial searching in general can be explained as follows: say you have a set of variables and each of these can take on a number of possible values; finding the solutions that match a certain constraint is then a problem of combinatorial searching. The major problem with such questions is that the number of candidate solutions can grow exponentially in the size of the input. Here, we have no direct questions or answers; any piece of information can only be obtained using an indirect query.

Definition
Given a set of N items with at most d defects, the minimum number of (non-adaptive) tests that one would have to make to detect all the defective items is defined as t(d, N).

Consider the case when only one person in the group will test positive. If we tested in the naive way, in the best case we would at least have to test the first person to find out whether he or she is infected; in the worst case, however, one might end up testing the entire group, and only the last person tested turns out to be the one who was infected. Hence individual testing needs up to N tests, so t(1, N) ≤ N.

Testing Methods
There are two basic principles via which the testing may be carried out:
1. Adaptive group testing is where we test a given subset of items, get the answer of the test, and then base the next test on the outcome of the current test.
2. Non-adaptive group testing, on the other hand, is when all the tests to be performed are decided a priori.

Definition
Given a set of N items with at most d defects, t^a(d, N) is defined as the minimum number of adaptive tests that one would have to make to detect all the defective items.

One should note that in the case of group testing for the syphilis problem, non-adaptive group testing is crucial. This is because the soldiers might be spread out geographically, and adaptive group testing would need a lot of co-ordination.


Mathematical representation of the set of non-adaptive tests


A non-adaptive testing scheme with t tests is represented by a t × N test matrix M, where M_{i,j} = 1 if the j-th item is included in the i-th test and M_{i,j} = 0 otherwise; in other words, the i-th test is the subset S_i = { j : M_{i,j} = 1 }. For a given input x (the input vector, written as a column), the resultant of the tests is the vector r = M ⊙ x, where the matrix multiplication is carried out with logical AND in place of multiplication and logical OR in place of addition. Thus r_i = 1 if and only if the i-th test contains at least one defective item. To think of this in terms of testing, it is helpful to visualize the matrix multiplication: the product M_{i,j} ∧ x_j is 1 if and only if there is a 1 in that position in both M and x, i.e. if that person was tested with that particular group and if he tested out to be positive.
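As a concrete illustration, this boolean matrix-vector product can be simulated directly. The following sketch (Python; the small test matrix and input vector below are illustrative assumptions, not data from the article) computes r_i as the OR over j of M_{i,j} AND x_j:

```python
def group_test_results(M, x):
    """Result of non-adaptive group testing: OR of ANDs, row by row.

    M is a t x N 0/1 test matrix (M[i][j] = 1 if item j is in test i),
    x is a 0/1 vector of length N marking the defective items.
    """
    return [int(any(m and xj for m, xj in zip(row, x))) for row in M]

# Illustrative example: 3 hypothetical tests on 5 items.
M = [[1, 1, 0, 0, 0],
     [0, 0, 1, 1, 0],
     [1, 0, 1, 0, 1]]
x = [0, 1, 0, 0, 0]                 # item with 0-based index 1 is defective
print(group_test_results(M, x))     # [1, 0, 0]: only the test containing that item fires
```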

Bounds on t(d, N) and t^a(d, N)

The reason for t^a(d, N) ≤ t(d, N) is the fact that any non-adaptive scheme can be performed by an adaptive test, by running all of its tests in the first step of the adaptive test. Adaptive tests can be more efficient than non-adaptive tests, since a test can be changed after certain things are discovered.

Lower bound on t^a(d, N)
Fix a valid group testing scheme with t tests. Now, for two distinct vectors x ≠ y, where wt(x) ≤ d and wt(y) ≤ d, the resulting vectors will not be the same, i.e. r(x) ≠ r(y). Here r(x) is the resultant vector when the input is x. This is because two valid inputs will never give us the same result; if this ever happened, then we would always have an error in finding at least one of x and y. This gives us that the total number of distinct results is at least the number of valid inputs, which is the volume of a Hamming ball of radius d centered about the all-zeros vector, i.e. Vol(d, N) = C(N, 0) + C(N, 1) + ... + C(N, d) ≥ (N/d)^d. However, for t tests the result is a vector of t bits, so the total number of possible distinct result vectors is 2^t. Hence 2^t ≥ (N/d)^d. Taking the logarithm on both sides gives us t ≥ d log(N/d). Therefore, we will end up having to perform a minimum of d log(N/d) tests. Thus we have proved t(d, N) ≥ t^a(d, N) ≥ d log(N/d).

Upper bound on t^a(d, N)
We have t^a(d, N) ≤ O(d log N). Since we know that the upper bound on the number of positives is d, we run a binary search at most d times, or until there are no more positives to be found. To simplify the problem, we first give a testing scheme that uses O(log N) adaptive tests to figure out a single position j such that x_j = 1. The problem is solved by splitting the set of positions into two halves, querying to find a half where the query returns a 1, and then proceeding recursively to find the exact position in that half. This takes about log N tests, or, if the first query is performed on the whole set, at most ⌈log N⌉ + 1 tests to find one coordinate. Once a 1 is found, the search is then repeated after removing the coordinate that was found; this is done at most d times. This justifies the running time of O(d log N) tests. For a full proof and an algorithm for the problem refer to: CSE545 at the University at Buffalo.[1]
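A minimal sketch of the adaptive strategy just described, assuming a pooled-test oracle is_positive(S) that reports whether the subset S contains at least one defective (the oracle and all function and variable names below are illustrative):

```python
def find_defectives(N, is_positive, d):
    """Adaptive group testing by repeated binary search.

    is_positive(S) must return True iff the subset S (of item indices)
    contains at least one defective.  At most d rounds are run, each
    using about log2(N) tests, for O(d log N) tests in total.
    """
    remaining = list(range(N))          # items not yet identified as defective
    defectives = []
    for _ in range(d):
        if not is_positive(set(remaining)):
            break                       # no defective left among the remaining items
        lo, hi = 0, len(remaining)      # invariant: remaining[lo:hi] holds a defective
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if is_positive(set(remaining[lo:mid])):
                hi = mid                # a defective lies in the left half
            else:
                lo = mid                # otherwise it lies in the right half
        defectives.append(remaining.pop(lo))
    return defectives

# Illustrative usage with a simulated pooled test:
truly_defective = {3, 11}
oracle = lambda S: any(i in truly_defective for i in S)
print(sorted(find_defectives(16, oracle, d=2)))   # [3, 11]
```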

Upper bound on t(1, N)
This upper bound is for the special case where d = 1, i.e. there is a maximum of 1 positive. In this case, the matrix multiplication gets simplified: if we take the test matrix whose j-th column is the binary representation of j, the resultant of the tests is exactly the binary representation of the position of the positive item. Note that decoding becomes trivial, because the binary representation gives us the location directly. The group test matrix here is just the parity check matrix of a Hamming code, and it has ⌈log(N+1)⌉ rows, matching the lower bound of ⌈log(N+1)⌉ given by the counting argument for d = 1. Thus, as the upper and lower bounds are the same, we have a tight bound for t(1, N). Such tight bounds are not known for general d.
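A small sketch of this d = 1 scheme (the function names below are illustrative): the j-th column of the test matrix is the binary representation of j, so the result vector spells out the index of the defective item.

```python
from math import ceil, log2

def bit_test_matrix(N):
    """t x N matrix whose j-th column is the binary representation of j (1-based items)."""
    t = ceil(log2(N + 1))
    return [[(j >> i) & 1 for j in range(1, N + 1)] for i in range(t)]

def decode_single_defective(results):
    """With at most one defective, the result vector is its index written in binary."""
    return sum(bit << i for i, bit in enumerate(results))

N = 10
M = bit_test_matrix(N)                      # ceil(log2(11)) = 4 tests
x = [0] * N
x[6] = 1                                    # the item with 1-based index 7 is defective
results = [int(any(M[i][j] and x[j] for j in range(N))) for i in range(len(M))]
print(decode_single_defective(results))     # 7
```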

Upper Bounds for Non-Adaptive Group Testing
For non-adaptive group testing upper bounds we shift focus toward disjunct matrices, which have been used for many of the bounds because of their nice properties. Through the use of different constructions of d-disjunct matrices it has been shown that (i) t(d, N) = O(d^2 log N) (explicit construction) and (ii) t(d, N) = O(d^2 log^2 N) (strongly explicit construction). It is good to note that the currently known lower bound for t(d, N), which is Ω((d^2 / log d) log N), is already a factor of about d / log d larger than the upper bound O(d log N) for t^a(d, N). Another thing to note is that the smallest upper bound and the biggest lower bound for t(d, N) are only off by a factor of O(log d), which is fairly small.

See Also
Disjunct matrix
Robert Dorfman
Concatenated error correction codes
Hamming weight
Hamming code

References
1. Atri Rudra's course on Error Correcting Codes: Combinatorics, Algorithms, and Applications (Spring 2007), Lecture 7 [2].
2. Atri Rudra's course on Error Correcting Codes: Combinatorics, Algorithms, and Applications (Spring 2010), Lectures 10 [3], 11 [1], 28 [4], 29 [5].
3. Dorfman, R. "The Detection of Defective Members of Large Populations". The Annals of Mathematical Statistics, 14(4), 436-440. Retrieved from [6].
4. Du, D., & Hwang, F. (2006). Pooling Designs and Nonadaptive Group Testing. Boston: Twayne Publishers.
5. Ely Porat, Amir Rothschild: Explicit Non-adaptive Combinatorial Group Testing Schemes. ICALP (1) 2008: 748-759.


References
[1] http://www.cse.buffalo.edu/~atri/courses/coding-theory/spr10/lectures/lect11.pdf
[2] http://www.cse.buffalo.edu/~atri/courses/coding-theory/lectures/lect7.pdf
[3] http://www.cse.buffalo.edu/~atri/courses/coding-theory/spr10/lectures/lect10.pdf
[4] http://www.cse.buffalo.edu/~atri/courses/coding-theory/spr10/lectures/lect28.pdf
[5] http://www.cse.buffalo.edu/~atri/courses/coding-theory/spr10/lectures/lect29.pdf
[6] http://www.jstor.org/pss/2235930

History of combinatorics
The history of combinatorics is an area of study within the history of mathematics, dedicated to the history of combinatorics and its variations, from antiquity to modern times.

Earliest uses
The earliest books about combinatorics are from India.[1] A Jain text, the Bhagabati Sutra, had the first mention of a combinatorics problem; it asked how many ways one could take six tastes one, two, or three tastes at a time. The Bhagabati Sutra was written around 300 BC and was the first book to mention the choose function.[2] The next ideas of combinatorics came from Pingala, who was interested in prosody. Specifically, he wanted to know how many ways a six-syllable meter could be made from short and long notes. He wrote this problem in the Chanda sutra (also Chandahsutra) in the second century BC.[3][4] In addition, he also found the number of meters that had n long notes and k short notes, which is equivalent to finding the binomial coefficients.

[Figure: The appearance of the Fibonacci number five in prosody: there is one way to arrange one beat, two for two, three for three, and five for four beats.]

The ideas of the Bhagabati were generalised by the Indian mathematician Mahavira in 850 AD, and Pingala's work on prosody was expanded by Bhaskara[2][5] and Hemacandra in 1100 AD. Bhaskara was the first known person to find the generalised choice function, although Brahmagupta may have known earlier.[6] Hemacandra asked how many meters existed of a certain length if a long note was considered to be twice as long as a short note, which is equivalent to finding the Fibonacci numbers.[3]

While India was the first nation to publish results on combinatorics, there were discoveries by other nations on similar topics. The earliest known connection to combinatorics comes from the Rhind papyrus, problem 79, for the implementation of a geometric series. The next milestone is held by the I Ching. The book is about what different hexagrams mean, and to do this one needed to know how many possible hexagrams there were. Since each hexagram is a permutation with repetitions of six lines, where each line can be one of two states, solid or dashed, combinatorics yields the result that there are 2^6 = 64 hexagrams.

[Figure: A hexagram]

A monk also may have counted the number of configurations of a game similar to Go around 700 AD.[7] Although China had relatively few advancements in enumerative combinatorics, they solved a combinatorial design problem, the magic square, around 100 AD.[6]

In Greece, Plutarch wrote that Xenocrates discovered the number of different syllables possible in the Greek language. This, however, is unlikely, because this is one of the few mentions of combinatorics in Greece. The number they found, 1.002 × 10^12, also seems too round to be more than a guess.[7][8]

Magic squares remained an interest of China, and they began to generalise their original 3 × 3 square between 900 and 1300 AD. China corresponded with the Middle East about this problem in the 13th century.[6] The Middle East also learned about binomial coefficients from Indian work, and found the connection to polynomial expansion.[9] The philosopher and astronomer Rabbi Abraham ibn Ezra (c. 1140) established the symmetry of binomial coefficients, while a closed formula was obtained later by the talmudist and mathematician Levi ben Gerson (better known as Gersonides), in 1321.[10] The arithmetical triangle, a graphical diagram showing relationships among the binomial coefficients, was presented by mathematicians in treatises dating as far back as the 10th century, and would eventually become known as Pascal's triangle. Later, in Medieval England, campanology provided examples of what is now known as Hamiltonian cycles in certain Cayley graphs on permutations.[11]

Combinatorics in the West


Combinatorics came to Europe in the 13th century through two mathematicians, Leonardo Fibonacci and Jordanus de Nemore. Fibonacci's Liber Abaci introduced many of the Arabian and Indian ideas to Europe, including that of the Fibonacci numbers.[12][13] Jordanus was the first person to arrange the binomial coefficients in a triangle, as he did in proposition 70 of De Arithmetica. This was also done in the Middle East in 1265, and in China around 1300.[6] Today, this triangle is known as Pascal's triangle.

Pascal's contribution to the triangle that bears his name comes from his work on formal proofs about it, in addition to his connection between it and probability.[6] Together with Leibniz and his ideas about partitions in the 17th century,[14] they are considered the founders of modern combinatorics.[15] Both Pascal and Leibniz understood that algebra and combinatorics corresponded, i.e. that binomial expansion was equivalent to the choice function. This was expanded by De Moivre, who found the expansion of a multinomial.[16] De Moivre also found the formula for derangements using the principle of inclusion-exclusion, a method different from that of Nikolaus Bernoulli, who had found them previously.[6] He managed to approximate the binomial coefficients and the factorial. Finally, he found a closed form for the Fibonacci numbers by inventing generating functions.[17][18]

In the 18th century, Euler worked on problems of combinatorics. In addition to working on several problems of probability which link to combinatorics, he worked on the knight's tour, Graeco-Latin squares, Eulerian numbers, and others. He also invented graph theory by solving the Seven Bridges of Königsberg problem, which also led to the formation of topology. Finally, he broke ground with partitions by the use of generating functions.[19]

Notes
[1] (http:/ / ncertbooks. prashanthellina. com/ class_11. Mathematics. Mathematics/ Ch-07(Permutation and Combinations FINAL 04. 01. 06). pdf) [2] "India" (http:/ / binomial. csueastbay. edu/ India. html). . Retrieved 2008-03-05. [3] Hall, Rachel (2005-02-16) (PDF). Math for Poets and Drummers-The Mathematics of Meter (http:/ / www. sju. edu/ ~rhall/ Rhythms/ poets. pdf). . Retrieved 2008-03-05. [4] Kulkarni, Amba. Recursion and Combinatorial Mathematics in Chandashstra. arXiv:math/0703658. [5] Bhaskara. "The Lilavati of Bhaskara" (http:/ / web. archive. org/ web/ 20080325004552/ http:/ / www. brown. edu/ Departments/ History_Mathematics/ lilavati. html). Brown University. Archived from the original (http:/ / www. brown. edu/ Departments/ History_Mathematics/ lilavati. html) on 2008-03-25. . Retrieved 2008-03-06. [6] Biggs, Norman; Keith Lloyd, Robin Wilson (1995). "44" (http:/ / books. google. com/ ?id=kfiv_-l2KyQC). In Ronald Grahm, Martin Grtschel, Lszl Lovsz (Google book). Handbook of Combinatorics. MIT Press. pp.21632188. ISBN0262571722. . Retrieved 2008-03-08. [7] Dieudonn, J.. "The Rhind/Ahmes Papyrus - Mathematics and the Liberal Arts" (http:/ / mtcs. truman. edu/ ~thammond/ history/ RhindPapyrus. html). Historia Math. Truman State University. . Retrieved 2008-03-06. [8] Gow, James (1968). A Short History of Greek Mathematics (http:/ / books. google. com/ ?id=68sYLQa9FuQC& printsec=frontcover). AMS Bookstore. pp.71. ISBN0-828-40218-3. . [9] "Middle East" (http:/ / binomial. csueastbay. edu/ MidEast. html). . Retrieved 2008-03-08.

[10] History of Combinatorics (http:/ / ncertbooks. prashanthellina. com/ class_11. Mathematics. Mathematics/ Ch-07(Permutation and Combinations FINAL 04. 01. 06). pdf), chapter in a textbook. [11] Arthur T. White, Ringing the Cosets, Amer. Math. Monthly 94 (1987), no. 8, 721-746; Arthur T. White, Fabian Stedman: The First Group Theorist?, Amer. Math. Monthly 103 (1996), no. 9, 771-778. [12] Devlin, Keith (10 2002). "The 800th birthday of the book that brought numbers to the west" (http:/ / www. maa. org/ devlin/ devlin_10_02. html). Devlin's Angle. . Retrieved 2008-03-08. [13] "Fibonacci Sequence- History" (http:/ / science. jrank. org/ pages/ 2705/ Fibonacci-Sequence-History. html). Net Industries. 2008. . Retrieved 2008-03-08. [14] Leibniz habilitation thesis De Arte Combinatoria was published as a book in 1666 and reprinted later [15] Dickson, Leonard (2005) [1919]. "Chapter III". Diophantine Analysis. History of the Theory of Numbers. Mineola, New York: Dover Publications, Inc.. pp.101. ISBN0-486-44233-0. [16] Hodgson, James; William Derham; Richard Mead (1708) (Google book). Miscellanea Curiosa (http:/ / books. google. com/ ?id=sr04AAAAMAAJ& printsec=titlepage). Volume II. pp.183191. . Retrieved 2008-03-08. [17] O'Connor, John; Edmund Robertson (06 2004). "Abraham de Moivre" (http:/ / www-history. mcs. st-andrews. ac. uk/ Biographies/ De_Moivre. html). The MacTutor History of Mathematics archive. . Retrieved 2008-03-09. [18] Pang, Jong-Shi; Olvi Mangasarian (1999). "10.6 Generating Function" (http:/ / books. google. com/ ?id=kJa15IMxAoIC& printsec=frontcover#PPA5,M1). In Jong-Shi Pang (Google book). Computational Optimisation. Volume 1. Netherlands: Kluwer Academic Publishers. pp.182183. ISBN0-792-38480-6. . Retrieved 2008-03-09. [19] "Combinatorics and probability" (http:/ / math. dartmouth. edu/ ~euler/ ). . Retrieved 2008-03-08.


References
Katz, Victor J. (1998). A History of Mathematics: An Introduction, 2nd Edition. Addison-Wesley Education Publishers. ISBN 0-321-01618-1.
O'Connor, John J. and Robertson, Edmund F. (1999-2004). MacTutor History of Mathematics archive. St Andrews University.
Rashed, R. (1994). The development of Arabic mathematics: between arithmetic and algebra. London.

HuntMcIlroy algorithm
In computer science, the HuntMcIlroy algorithm is a solution to the longest common subsequence problem. It was one of the first non-heuristic algorithms used in diff. To this day, variations of this algorithm are found in incremental version control systems, wiki engines, and molecular phylogenetics research software. The research accompanying the final version of Unix diff, written by Douglas McIlroy, was published in the 1976 paper "An Algorithm for Differential File Comparison", co-written with James W. Hunt, who developed an initial prototype of diff.[1]

References
[1] James W. Hunt and M. Douglas McIlroy (June 1976). "An Algorithm for Differential File Comparison" (http://cm.bell-labs.com/cm/cs/cstr/41.pdf). Computing Science Technical Report, Bell Laboratories 41.


Ideal ring bundle


An ideal ring bundle (IRB) is a mathematical term for an n-stage cyclic sequence of terms, e.g. integers, for which the set of all circular sums enumerates an initial segment of the natural numbers, each value occurring a fixed number of times. A circular sum is a sum of consecutive terms of the n-term cyclic sequence, starting at any term and involving any number of terms from 1 to n−1.

Examples
For example, the cyclic sequence (1,3,2,7) is an ideal ring bundle, because its four (n = 4) terms enumerate all natural numbers from 1 to n(n−1) = 12, each obtained as a circular sum (starting at any term and involving any number of consecutive terms) in exactly one (R = 1) way:
1, 2, 3, 4 = 1 + 3, 5 = 3 + 2, 6 = 1 + 3 + 2, 7, 8 = 7 + 1, 9 = 2 + 7, 10 = 2 + 7 + 1, 11 = 7 + 1 + 3, 12 = 3 + 2 + 7, 13 = 1 + 3 + 2 + 7.
The cyclic sequence (1,1,2,3) is also an ideal ring bundle, because its four (n = 4) terms enumerate all numbers of the natural row from 1 to n(n−1)/R = 6, each obtained as a circular sum in exactly two (R = 2) ways:
1, 1
2, 2 = 1 + 1
3, 3 = 2 + 1
4 = 3 + 1, 4 = 1 + 1 + 2
5 = 2 + 3, 5 = 3 + 1 + 1
6 = 1 + 2 + 3, 6 = 2 + 3 + 1
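The defining property can be checked mechanically. A short sketch (Python, illustrative; it considers circular sums of 1 to n−1 consecutive terms, so the full-circle sum such as 13 above is not counted):

```python
from collections import Counter

def circular_sums(seq):
    """All sums of 1..n-1 consecutive terms of the cyclic sequence seq."""
    n = len(seq)
    doubled = tuple(seq) + tuple(seq)      # unroll the cycle once
    return [sum(doubled[start:start + length])
            for start in range(n) for length in range(1, n)]

def is_ideal_ring_bundle(seq, R=1):
    """True if the circular sums cover 1..n(n-1)/R exactly R times each."""
    n = len(seq)
    counts = Counter(circular_sums(seq))
    top = n * (n - 1) // R
    return all(counts[v] == R for v in range(1, top + 1))

print(is_ideal_ring_bundle((1, 3, 2, 7), R=1))   # True: sums cover 1..12 once each
print(is_ideal_ring_bundle((1, 1, 2, 3), R=2))   # True: sums cover 1..6 twice each
```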

References
Multi-dimensional Systems Based on Perfect Combinatorial Models, IEEE, Multidimensional Systems: Problems and Solutions, #225, London, 1998.


Incidence matrix
In mathematics, an incidence matrix is a matrix that shows the relationship between two classes of objects. If the first class is X and the second is Y, the matrix has one row for each element of X and one column for each element of Y. The entry in row x and column y is 1 if x and y are related (called incident in this context) and 0 if they are not. There are variations; see below.

Graph theory
Incidence matrices are mostly used in graph theory.

Undirected and directed graphs


In graph theory an undirected graph G has two kinds of incidence matrices: unoriented and oriented. The incidence matrix (or unoriented incidence matrix) of G is a p × q matrix B, where p and q are the numbers of vertices and edges respectively, such that B_{i,j} = 1 if the vertex v_i and edge e_j are incident and 0 otherwise. For example, the incidence matrix of the undirected graph shown in the figure is a matrix consisting of 4 rows (corresponding to the four vertices, 1-4) and 4 columns (corresponding to the four edges, e1-e4).

[Figure: An undirected graph]
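Since the graph in the original figure did not survive extraction, here is a generic sketch (Python) of how the unoriented and oriented incidence matrices are filled in; the 4-vertex, 4-edge cycle used below is an illustrative assumption, not necessarily the graph from the figure:

```python
def unoriented_incidence(num_vertices, edges):
    """B[i][j] = 1 if vertex i+1 is an endpoint of edge j, else 0 (1-based vertices)."""
    B = [[0] * len(edges) for _ in range(num_vertices)]
    for j, (u, v) in enumerate(edges):
        B[u - 1][j] = 1
        B[v - 1][j] = 1
    return B

def oriented_incidence(num_vertices, edges):
    """Each edge (u, v) is oriented from u to v: -1 at u, +1 at v."""
    B = [[0] * len(edges) for _ in range(num_vertices)]
    for j, (u, v) in enumerate(edges):
        B[u - 1][j] = -1
        B[v - 1][j] = 1
    return B

# Hypothetical 4-vertex graph with edges e1..e4 forming a cycle:
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
for row in unoriented_incidence(4, edges):
    print(row)
# [1, 0, 0, 1]
# [1, 1, 0, 0]
# [0, 1, 1, 0]
# [0, 0, 1, 1]
```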

The incidence matrix of a directed graph D is a p × q matrix B, where p and q are the number of vertices and edges respectively, such that B_{i,j} = −1 if the edge e_j leaves vertex v_i, B_{i,j} = 1 if it enters vertex v_i, and 0 otherwise. (Note that many authors use the opposite sign convention.)

An oriented incidence matrix of an undirected graph G is the incidence matrix, in the sense of directed graphs, of any orientation of G. That is, in the column of edge e, there is one +1 in the row corresponding to one vertex of e and one −1 in the row corresponding to the other vertex of e, and all other rows have 0. All oriented incidence matrices of G differ only by negating some set of columns. In many uses, this is an insignificant difference, so one can speak of the oriented incidence matrix, even though that is technically incorrect.

The oriented or unoriented incidence matrix of a graph G is related to the adjacency matrix of its line graph L(G) by the following theorem:

A(L(G)) = B(G)^T B(G) − 2 I_q,

where A(L(G)) is the adjacency matrix of the line graph of G, B(G) is the incidence matrix, and I_q is the identity matrix of dimension q.

The Kirchhoff matrix is obtained from the oriented incidence matrix M(G) by the formula

K(G) = M(G) M(G)^T.

The integral cycle space of a graph is equal to the null space of its oriented incidence matrix, viewed as a matrix over the integers or real or complex numbers. The binary cycle space is the null space of its oriented or unoriented incidence matrix, viewed as a matrix over the two-element field.

Signed and bidirected graphs


The incidence matrix of a signed graph is a generalization of the oriented incidence matrix. It is the incidence matrix of any bidirected graph that orients the given signed graph. The column of a positive edge has a +1 in the row corresponding to one endpoint and a −1 in the row corresponding to the other endpoint, just like an edge in an ordinary (unsigned) graph. The column of a negative edge has either a +1 or a −1 in both rows. The line graph and Kirchhoff matrix properties generalize to signed graphs.

Multigraphs
The definitions of incidence matrix apply to graphs with loops and multiple edges. The column of an oriented incidence matrix that corresponds to a loop is all zero, unless the graph is signed and the loop is negative; then the column is all zero except for ±2 in the row of its incident vertex.

Hypergraphs
Because the edges of ordinary graphs can only have two vertices (one at each end), the column of an incidence matrix for graphs can only have two non-zero entries. By contrast, a hypergraph can have multiple vertices assigned to one edge; thus, the general case describes a hypergraph.

Incidence structures
The incidence matrix of an incidence structure C is a p × q matrix B, where p and q are the number of points and lines respectively, such that B_{i,j} = 1 if the point p_i and line l_j are incident and 0 otherwise. In this case the incidence matrix is also a biadjacency matrix of the Levi graph of the structure. As there is a hypergraph for every Levi graph, and vice versa, the incidence matrix of an incidence structure describes a hypergraph.

Finite geometries
An important example is a finite geometry. For instance, in a finite plane, X is the set of points and Y is the set of lines. In a finite geometry of higher dimension, X could be the set of points and Y could be the set of subspaces of dimension one less than the dimension of the whole space; or X could be the set of all subspaces of one dimension d and Y the set of all subspaces of another dimension e.

Incidence matrix

195

Block designs
Another example is a block design. Here X is a finite set of "points" and Y is a class of subsets of X, called "blocks", subject to rules that depend on the type of design. The incidence matrix is an important tool in the theory of block designs. For instance, it is used to prove the fundamental theorem of symmetric 2-designs, that the number of blocks equals the number of points.

References
Diestel, Reinhard (2005), Graph Theory, Graduate Texts in Mathematics, 173 (3rd ed.), Springer-Verlag, ISBN 3-540-26183-4.
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, ISBN 0-486-61480-8 (Section 9.2 Incidence matrices, pp. 166-171).
Jonathan L. Gross, Jay Yellen, Graph Theory and its Applications, second edition, 2006 (p. 97, incidence matrices for undirected graphs; p. 98, incidence matrices for digraphs).
Weisstein, Eric W., "Incidence matrix [1]" from MathWorld.

References
[1] http://mathworld.wolfram.com/IncidenceMatrix.html

Incidence structure
In mathematics, an incidence structure is a triple C = (P, L, I) where P is a set of "points", L is a set of "lines" and I ⊆ P × L is the incidence relation. The elements of I are called flags. If (p, l) is in I, we say that point p "lies on" line l. One may concretely have L be a set of subsets of P, and have incidence I be containment ((p, l) ∈ I if and only if p ∈ l), but one may also work more abstractly.

Incidence structures generalize planes (such as affine, projective, and Möbius planes) in their axiomatic definitions, as the terminology indicates. The higher-dimensional analog is called an incidence geometry.

Comparison with other structures


A figure may look like a graph, but in a graph an edge has just two ends (beyond a vertex a new edge starts), while a line in an incidence structure can be incident to more points. An incidence structure has no concept of a point being in between two other points; the order of points on a line is undefined. Compare with ordered geometry, which does have a notion of betweenness.

Dual structure
If we interchange the roles of "points" and "lines" in C = (P, L, I), the dual structure C* = (L, P, I*) is obtained, where I* is the inverse relation of I. Clearly C** = C. This is an abstract version of projective duality. A structure C that is isomorphic to its dual C* is called self-dual.

Correspondence with hypergraphs


Each hypergraph or set system can be regarded as an incidence structure in which the universal set plays the role of "points", the corresponding family of sets plays the role of "lines" and the incidence relation is set membership "∈". Conversely, every incidence structure can be viewed as a hypergraph.

Example: Fano plane


In particular, let P={1,2,3,4,5,6,7}, L={{1,2,3},{1,4,5},{1,6,7},{2,4,6},{2,5,7},{3,4,7},{3,5,6}}. The corresponding incidence structure is called the Fano plane. The lines are precisely the subsets of the points that consist of three points whose labels add up to zero using nim addition.
Seven points are elements of seven lines in the Fano plane
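The nim-addition description of the lines can be verified directly; a brief sketch (Python) checking that each line's labels XOR to zero and that these are exactly the zero-XOR triples:

```python
from itertools import combinations

P = {1, 2, 3, 4, 5, 6, 7}
L = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

# Every listed line nim-adds (XORs) to zero ...
assert all(a ^ b ^ c == 0 for a, b, c in (sorted(line) for line in L))

# ... and the lines are exactly the 3-element subsets of P whose XOR is zero.
zero_xor_triples = [set(t) for t in combinations(P, 3) if t[0] ^ t[1] ^ t[2] == 0]
assert sorted(map(sorted, zero_xor_triples)) == sorted(map(sorted, L))
print("Fano plane lines verified")
```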

Geometric representation
Incidence structures can be modelled by points and curves in the Euclidean plane with usual geometric incidence. Some incidence structures admit representation by points and lines. The Fano plane is not one of them since it needs at least one curve.

Levi graph of an incidence structure


Each incidence structure C corresponds to a bipartite graph called Levi graph or incidence graph with a given black and white vertex coloring where black vertices correspond to points and white vertices correspond to lines of C and the edges correspond to flags.

Example: Heawood graph


For instance, the Levi graph of the Fano plane is the Heawood graph. Since the Heawood graph is connected and vertex-transitive, there exists an automorphism (such as the one defined by a reflection about the vertical axis in the above figure) interchanging black and white vertices. This, in turn, implies that the Fano plane is self-dual.

Heawood graph with labeling


References
CRC Press (2000). Handbook of discrete and combinatorial mathematics, (Chapter 12.2), ISBN 0-8493-0149-1

Independence system
In combinatorial mathematics, an independence system S is a pair (E, I), where E is a finite set and I is a collection of subsets of E (called the independent sets) with the following properties:
1. The empty set is independent, i.e., ∅ ∈ I. (Alternatively, at least one subset of E is independent, i.e., I ≠ ∅.)
2. Every subset of an independent set is independent, i.e., for each E' ⊆ E'' ⊆ E, E'' ∈ I implies E' ∈ I. This is sometimes called the hereditary property.
Adding the augmentation property or the independent set exchange property yields a matroid.
For a more general description, see abstract simplicial complex.

Infinitary combinatorics
In mathematics, infinitary combinatorics, or combinatorial set theory, is an extension of ideas in combinatorics to infinite sets. Some of the things studied include continuous graphs and trees, extensions of Ramsey's theorem, and Martin's axiom. Recent developments concern combinatorics of the continuum[1] and combinatorics on successors of singular cardinals.[2]

Ramsey theory for infinite sets


Write κ, λ for ordinals, m for a cardinal number and n for a natural number. Erdős & Rado (1956) introduced the notation

κ → (λ)^n_m

as a shorthand way of saying that every partition of the set [κ]^n of n-element subsets of κ into m pieces has a homogeneous set of order type λ. A homogeneous set is in this case a subset of κ such that every n-element subset is in the same element of the partition. When m is 2 it is often omitted. There are no ordinals κ with κ → (ω)^ω, so n is usually taken to be finite. An extension where n is almost allowed to be infinite is the notation

κ → (λ)^{<ω}_m

which is a shorthand way of saying that every partition of the set of finite subsets of κ into m pieces has a subset of order type λ such that for any finite n, all subsets of size n are in the same element of the partition. When m is 2 it is often omitted.

Another variation is the notation

κ → (λ, μ)^n

which is a shorthand way of saying that every coloring of the set [κ]^n of n-element subsets of κ with 2 colors has a subset of order type λ such that all of its n-element subsets have the first color, or a subset of order type μ such that all of its n-element subsets have the second color.

Some properties of this include (in what follows κ is a cardinal):

ω → (ω)^n_k for all finite n and k (Ramsey's theorem).
(2^κ)^+ → (κ^+)^2_κ (Erdős–Rado theorem).
2^κ does not satisfy 2^κ → (κ^+)^2 (Sierpiński theorem).
κ → (κ, ω)^2 (Erdős–Dushnik–Miller theorem).

Large cardinals
Several large cardinal properties can be defined using this notation. In particular:
Weakly compact cardinals κ are those that satisfy κ → (κ)^2_2.
α-Erdős cardinals are the smallest κ that satisfy κ → (α)^{<ω}.
Ramsey cardinals κ are those that satisfy κ → (κ)^{<ω}.

References
Dushnik, Ben; Miller, E. W. (1941), "Partially ordered sets", American Journal of Mathematics 63 (3): 600-610, doi:10.2307/2371374, ISSN 0002-9327, JSTOR 2371374, MR 0004862.
Erdős, Paul; Hajnal, András (1971), "Unsolved problems in set theory", Axiomatic Set Theory (Univ. California, Los Angeles, Calif., 1967), Proc. Sympos. Pure Math, XIII Part I, Providence, R.I.: Amer. Math. Soc., pp. 17-48, MR 0280381.
Erdős, Paul; Hajnal, András; Máté, Attila; Rado, Richard (1984), Combinatorial set theory: partition relations for cardinals, Studies in Logic and the Foundations of Mathematics, 106, Amsterdam: North-Holland Publishing Co., ISBN 0-444-86157-2, MR 0795592.
Erdős, P.; Rado, R. (1956), "A partition calculus in set theory" [1], Bull. Amer. Math. Soc. 62 (5): 427-489, doi:10.1090/S0002-9904-1956-10036-0, MR 0081864.
Kunen, Kenneth (1980), Set Theory: An Introduction to Independence Proofs, Amsterdam: North-Holland, ISBN 978-0-444-85401-8.

Notes
[1] Andreas Blass, Combinatorial Cardinal Characteristics of the Continuum, Chapter 6 in Handbook of Set Theory, edited by Matthew Foreman and Akihiro Kanamori, Springer, 2010 [2] Todd Eisworth, Successors of Singular Cardinals Chapter 15 in Handbook of Set Theory, edited by Matthew Foreman and Akihiro Kanamori, Springer, 2010


Inversion (discrete mathematics)


In computer science and discrete mathematics, an inversion in a sequence of numbers is a pair of numbers in the sequence that are "out of order" with respect to an ascending or descending order.

Definitions
Formally, let (a_1, a_2, ..., a_n) be a sequence of n distinct numbers. If i < j and a_i > a_j, then the pair (i, j) is called an inversion of the sequence.[1][2]

The inversion number of a sequence is one common measure of its sortedness.[3][2] Formally, the inversion number is defined to be the number of inversions, that is, the number of pairs (i, j) with i < j and a_i > a_j.[3] Other measures of (pre-)sortedness include the minimum number of elements that can be deleted from the sequence to yield a fully sorted sequence, the number and lengths of sorted "runs" within the sequence, and the smallest number of exchanges needed to sort the sequence.[4] Standard comparison sorting algorithms can be adapted to compute the inversion number in time O(n log n).

The inversion vector V of the sequence is defined for i = 2, ..., n as V(i) = |{ k : k < i and a_k > a_i }|. In other words, each element is the number of elements preceding the element in the original sequence that are greater than it. Note that the inversion vector of a sequence has one less element than the sequence, because of course the number of preceding elements that are greater than the first is always zero. Each permutation of a sequence has a unique inversion vector, and it is possible to construct any given permutation of a (fully sorted) sequence from that sequence and the permutation's inversion vector.[5]
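A direct (quadratic-time) computation of the inversion set, inversion number and inversion vector might look as follows (a sketch; for long sequences one would use the O(n log n) approach mentioned above):

```python
def inversions(a):
    """All pairs (i, j), 1-based, with i < j and a[i] > a[j]."""
    n = len(a)
    return [(i + 1, j + 1) for i in range(n) for j in range(i + 1, n) if a[i] > a[j]]

def inversion_vector(a):
    """V(i) = number of elements preceding a[i] that are greater than it, for i = 2..n."""
    return [sum(1 for k in range(i) if a[k] > a[i]) for i in range(1, len(a))]

a = [3, 1, 4, 2]
print(len(inversions(a)))      # 3  (the inversion number)
print(inversions(a))           # [(1, 2), (1, 4), (3, 4)]
print(inversion_vector(a))     # [1, 0, 2]
```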

Weak order of permutations


The set of permutations on n items can be given the structure of a partial order, called the weak order of permutations, which forms a lattice. To define this order, consider the items being permuted to be the integers from 1 to n, and let Inv(u) denote the set of inversions of a permutation u for the natural ordering on these items. That is, Inv(u) is the set of ordered pairs (i, j) such that 1 ≤ i < j ≤ n and u(i) > u(j). Then, in the weak order, we define u ≤ v whenever Inv(u) ⊆ Inv(v). The edges of the Hasse diagram of the weak order are given by permutations u and v such that u < v and such that v is obtained from u by interchanging two consecutive values of u. These edges form a Cayley graph for the group of permutations that is isomorphic to the skeleton of a permutohedron. The identity permutation is the minimum element of the weak order, and the permutation formed by reversing the identity is the maximum element.

Example
The following figures show the permutations of 4 elements, their inversion vectors and their sets of up to six inversions. When a pair (i, j) is marked in red as an element of the inversion set, it means that the elements on places i and j are out of their natural order in the permutation. For example, the inversion set of permutation number 1 contains only the pair (1,2), so only the elements on places 1 and 2 are out of their natural order. The pairs (1,2), (1,3), (1,4), (2,3), (2,4) and (3,4) are the six possible pairs.


[Figures: List of 4-element permutations showing 4-place inversion vectors and sets of up to 6 inversions; permutations ordered by the bitwise less-or-equal relation between their inversion vectors; permutations ordered by the subset relation between their inversion sets.]

References
[1] Cormen et al. 2001, pp. 39.
[2] Vitter & Flajolet 1990, pp. 459.
[3] Barth & Mutzel 2004, pp. 183.
[4] Mahmoud 2000, pp. 284.
[5] Pemmaraju & Skiena 2003, pp. 69.

Source bibliography
Barth, Wilhelm; Mutzel, Petra (2004). "Simple and Efficient Bilayer Cross Counting". Journal of Graph Algorithms and Applications 8 (2): 179194. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. ISBN0-262-53196-8.

Mahmoud, Hosam Mahmoud (2000). "Sorting Nonrandom Data". Sorting: a distribution theory. Wiley-Interscience series in discrete mathematics and optimization. 54. Wiley-IEEE. ISBN 9780471327103.
Pemmaraju, Sriram V.; Skiena, Steven S. (2003). "Permutations and combinations". Computational discrete mathematics: combinatorics and graph theory with Mathematica. Cambridge University Press. ISBN 9780521806862.
Vitter, J.S.; Flajolet, Ph. (1990). "Average-Case Analysis of Algorithms and Data Structures". In van Leeuwen, Jan. Algorithms and Complexity. 1 (2nd ed.). Elsevier. ISBN 9780444880710.

Further reading
Margolius, Barbara H. (2001). "Permutations with Inversions". Journal of Integer Sequences 4.

Presortedness measures
Mannila, Heikki (1984). "Measures of presortedness and optimal sorting algorithms". Lecture Notes in Computer Science 172: 324336. doi:10.1007/3-540-13345-3_29. Estivill-Castro, Vladimir; Wood, Derick (1989). "A new measure of presortedness". Information and Computation 83 (1): 111119. Skiena, Steven S. (1988). "Encroaching lists as a measure of presortedness". BIT 28 (4): 755784.

Inversive plane
An inversive plane is a class of incidence structure in mathematics. It may be axiomatised by taking two classes, "points" and "circles" (or "blocks"), with the properties:
any three points lie on exactly one circle;
if P and Q are points and c a circle with P on c and Q not, then there is exactly one circle e containing P and Q and intersecting c only in P;
there are four points not all on the same circle.
The finite inversive planes are precisely the 3-(n^2+1, n+1, 1) designs. Such a design is always a Steiner system.

Ovoids
When one takes as points the points of an ovoid in PG(3,q), with q a prime power, and as blocks the planes that are not tangent to the ovoid, one finds a 3-(q^2+1, q+1, 1) design. Inversive planes that arise in this way are said to be egglike. Dembowski proved that when n is even, every inversive plane is egglike (and thus n is a power of 2). It is not known whether this is true when n is odd.

Derived designs and extensions


Inversive planes are precisely the (one-point) extensions of the 2-(n^2, n, 1) designs, that is, of the affine planes of order n.


References
E.F. Assmus Jr and J.D. Key, Designs and their codes, Cambridge University Press, ISBN 0-521-45839-0. p.309-312. P. Dembowski, Finite geometries, Springer Verlag, 1968, repr.1996, ISBN 3540617868. D.R. Hughes and F.C. Piper, Design theory, Cambridge University Press, ISBN 0-521-35872-8. p.133-136.

Isolation lemma
The isolation lemma, also sometimes known as the isolating lemma, is a lemma in probability theory with several applications in computer science. It was introduced in a paper by Mulmuley, Vazirani and Vazirani, who used it to give a randomized parallel algorithm for the maximum matching problem.[1]

Statement
Let F be any family of subsets of a set with n elements. Suppose each element x of the set is independently assigned a weight w(x) uniformly from {1, ..., N}, and the weight of a set S in F is defined as

w(S) = Σ_{x ∈ S} w(x).

Then, the probability that there is a unique set in F of minimum weight is at least 1 − n/N.

The remarkable thing about the lemma is that it assumes nothing about the nature of the family F: for instance F may include all 2^n subsets, and (as the weight of each set in F is in {1, ..., nN}) there would then be an average of 2^n/(nN) sets with the same weight. Still, with high probability, there is a unique set of minimum weight.
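A quick Monte Carlo check of the bound is easy to run (a sketch; the family used below, all nonempty subsets of a small ground set, is an illustrative choice):

```python
import random
from itertools import chain, combinations

def unique_min(family, weights):
    """True iff exactly one set in the family attains the minimum total weight."""
    totals = [sum(weights[x] for x in S) for S in family]
    return totals.count(min(totals)) == 1

n, N, trials = 6, 30, 2000
ground = range(n)
family = [set(S) for S in chain.from_iterable(combinations(ground, r) for r in range(1, n + 1))]

failures = sum(
    not unique_min(family, {x: random.randint(1, N) for x in ground})
    for _ in range(trials)
)
print(failures / trials, "vs. bound", n / N)   # observed failure rate is typically well below n/N
```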

Proof
Proof 1
(The original proof from the paper.) Suppose we have fixed the weights of all elements except an element x. Then x has a threshold weight α, such that if the weight w(x) of x is greater than α, then x is not contained in any minimum-weight subset, and if w(x) < α, then it is contained in some set of minimum weight. Further, observe that if w(x) < α, then every minimum-weight subset must contain x (since, when we decrease w(x) from α, sets that do not contain x do not decrease in weight, while those that contain x do). Thus, ambiguity about whether a minimum-weight subset contains x or not can happen only when the weight of x is exactly equal to its threshold; in this case we will call x "singular". Now, as the threshold of x was defined only in terms of the weights of the other elements, it is independent of w(x); therefore, as w(x) is chosen uniformly from {1, ..., N}, the probability that w(x) = α is at most 1/N, and the probability that some x is singular is at most n/N. As there is a unique minimum-weight subset iff no element is singular, the lemma follows.


Proof 2
This is a restatement of the above proof, due to Joel Spencer (1995).[2] For any element x in the set, define

α(x) = min_{S ∈ F, x ∉ S} w(S) − min_{S ∈ F, x ∈ S} w(S \ {x}).

Observe that α(x) depends only on the weights of elements other than x, and not on w(x) itself. So whatever the value of α(x), as w(x) is chosen uniformly from {1, ..., N}, the probability that it is equal to α(x) is at most 1/N. Thus the probability that w(x) = α(x) for some x is at most n/N.

Now if there are two sets A and B in F with minimum weight, then, taking any x in A \ B, we have

α(x) = w(B) − (w(A) − w(x)) = w(x),

and as we have seen, this event happens with probability at most n/N.

Examples/applications
The original application was to minimum-weight (or maximum-weight) perfect matchings in a graph. Each edge is assigned a random weight in {1, ..., 2m}, where m is the number of edges, and F is the set of perfect matchings, so that with probability at least 1/2 there exists a unique minimum-weight perfect matching. When each indeterminate in the Tutte matrix of the graph is replaced with a power of two whose exponent is the random weight of the corresponding edge, we can show that the determinant of the matrix is nonzero, and further use this to find the matching.

More generally, the paper also observed that any search problem of the form "Given a set system, find a set in F" could be reduced to a decision problem of the form "Is there a set in F with total weight at most k?". For instance, it showed how to solve the following problem posed by Papadimitriou and Yannakakis, for which (as of the time the paper was written) no deterministic polynomial-time algorithm is known: given a graph and a subset of the edges marked as "red", find a perfect matching with exactly k red edges.

The Valiant–Vazirani theorem, concerning unique solutions to NP-complete problems, has a simpler proof using the isolation lemma. This is proved by giving a randomized reduction from CLIQUE to UNIQUE-CLIQUE.

Avi Wigderson used the isolation lemma in 1994 to give a randomized reduction from NL to UL, and thereby prove that NL/poly ⊆ ⊕L/poly.[3] Reinhardt and Allender later used the isolation lemma again to prove that NL/poly = UL/poly.[4]

The book by Hemaspaandra and Ogihara has a chapter on the isolation technique, including generalizations.[5]

The isolation lemma has been proposed as the basis of a scheme for digital watermarking.[6]

There is ongoing work on derandomizing the isolation lemma in specific cases[7] and on using it for identity testing.[8]


References
[1] Mulmuley, K.; U. V Vazirani, V. V Vazirani (1987). "Matching is as easy as matrix inversion" (http:/ / www. springerlink. com/ content/ r4rw2x4l46476708/ ). Combinatorica 7 (1): 105113. doi:10.1007/BF02579206. . STOC 1987 version: doi:10.1145/28395.383347 [2] Jukna, Stasys (2001), Extremal combinatorics: with applications in computer science (http:/ / lovelace. thi. informatik. uni-frankfurt. de/ ~jukna/ EC_Book/ index. html), Springer, pp.147150, ISBN9783540663133, (http:/ / lovelace. thi. informatik. uni-frankfurt. de/ ~jukna/ EC_Book/ sample/ 147-150. pdf) [3] Wigderson, Avi (July 1994). "NL/poly L/poly" (http:/ / www. math. ias. edu/ ~avi/ PUBLICATIONS/ MYPAPERS/ W94/ proc. pdf). Proceedings of the 9th Structures in Complexity Conference. pp.5962. . [4] Reinhardt, K.; E. Allender (2000). "Making Nondeterminism Unambiguous" (http:/ / chosei. informatik. uni-tuebingen. de/ ~reinhard/ nlul. pdf). SIAM Journal on Computing 29: 1118. . [5] Hemaspaandra, Lane A.; Ogihara, Mitsunori (2002), "Chapter 4. The Isolation Technique" (http:/ / www. cs. rochester. edu/ ~lane/ =companion/ isolation. pdf), The complexity theory companion, Springer, ISBN9783540674191, (http:/ / www. cs. rochester. edu/ ~lane/ =companion/ ) [6] Majumdar, Rupak; Jennifer L. Wong (2001). "Watermarking of SAT using combinatorial isolation lemmas" (http:/ / citeseerx. ist. psu. edu/ viewdoc/ summary?doi=10. 1. 1. 16. 9300). Proceedings of the 38th annual Design Automation Conference. Las Vegas, Nevada, United States: ACM. pp.480485. doi:10.1145/378239.378566. ISBN1-58113-297-2. . Retrieved 2010-05-10. [7] Arvind, V.; Partha Mukhopadhyay (2008). "Derandomizing the Isolation Lemma and Lower Bounds for Circuit Size" (http:/ / portal. acm. org/ citation. cfm?id=1429791. 1429816). Proceedings of the 11th international workshop, APPROX 2008, and 12th international workshop, RANDOM 2008 on Approximation, Randomization and Combinatorial Optimization: Algorithms and Techniques. Boston, MA, USA: Springer-Verlag. pp.276289. ISBN978-3-540-85362-6. . Retrieved 2010-05-10. arXiv:0804.0957 [8] Arvind, V.; Partha Mukhopadhyay, Srikanth Srinivasan (2008). "New Results on Noncommutative and Commutative Polynomial Identity Testing" (http:/ / portal. acm. org/ citation. cfm?id=1380843. 1380966). Proceedings of the 2008 IEEE 23rd Annual Conference on Computational Complexity. IEEE Computer Society. pp.268279. ISBN978-0-7695-3169-4. . Retrieved 2010-05-10. arXiv:0801.0514

External links
Favorite Theorems: Unique Witnesses (http://blog.computationalcomplexity.org/2006/09/ favorite-theorems-unique-witnesses.html) by Lance Fortnow The Isolation Lemma and Beyond (http://rjlipton.wordpress.com/2009/07/01/ the-isolation-lemma-and-beyond/) by Richard J. Lipton


Johnson scheme
In mathematics, the Johnson scheme, named after Selmer M. Johnson, is also known as the triangular association scheme. It consists of the set of all binary vectors X of length ℓ and weight n.[1][2][3]

Two vectors x, y ∈ X are called i-th associates if dist(x, y) = 2i for i = 0, 1, ..., n. The eigenvalues of the scheme are given in terms of the Eberlein polynomials E_k(x).

References
[1] P. Delsarte and V. I. Levenshtein, Association schemes and coding theory, IEEE Trans. Inform. Theory, vol. 44, no. 6, pp. 24772504, 1998. [2] P. Camion, "Codes and Association Schemes: Basic Properties of Association Schemes Relevant to Coding," in Handbook of Coding Theory, V. S. Pless and W. C. Huffman, Eds., Elsevier, The Netherlands, 1998. [3] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, Elsevier, New York, 1978.


Josephus problem
In computer science and mathematics, the Josephus problem (or Josephus permutation) is a theoretical problem related to a certain counting-out game. There are n people standing in a circle waiting to be executed. After the first person is executed, a certain number of people are skipped and the next person is executed. Then, again, people are skipped and a person is executed. The elimination proceeds around the circle (which is becoming smaller and smaller as the executed people are removed), until only the last person remains, who is given freedom. The task is to choose the place in the initial circle so that you are the last one remaining and so survive.

History
The problem is named after Flavius Josephus, a Jewish historian living in the 1st century. According to Josephus' account of the siege of Yodfat, he and his 40 comrade soldiers were trapped in a cave, the exit of which was blocked by Romans. They chose suicide over capture and decided that they would form a circle and start killing themselves using a step of three. Josephus states that by luck or maybe by the hand of God (modern scholars point out that Josephus was a well educated scholar and predicted the outcome), he and another man remained the last and gave up to the Romans. The reference comes from Book 3, Chapter 8, par 7 of Josephus' The Jewish War (writing of himself in the third person): However, in this extreme distress, he was not destitute of his usual sagacity; but trusting himself to the providence of God, he put his life into hazard [in the manner following]: "And now," said he, "since it is resolved among you that you will die, come on, let us commit our mutual deaths to determination by lot. He whom the lot falls to first, let him be killed by him that hath the second lot, and thus fortune shall make its progress through us all; nor shall any of us perish by his own right hand, for it would be unfair if, when the rest are gone, somebody should repent and save himself." This proposal appeared to them to be very just; and when he had prevailed with them to determine this matter by lots, he drew one of the lots for himself also. He who had the first lot laid his neck bare to him that had the next, as supposing that the general would die among them immediately; for they thought death, if Josephus might but die with them, was sweeter than life; yet was he with another left to the last, whether we must say it happened so by chance, or whether by the providence of God. And as he was very desirous neither to be condemned by the lot, nor, if he had been left to the last, to imbrue his right hand in the blood of his countrymen, he persuaded him to trust his fidelity to him, and to live as well as himself.[1]

Solution
k=2
We explicitly solve the problem when every 2nd person will be killed, i.e. k = 2. (For the more general case we outline a solution below.) We express the solution recursively. Let f(n) denote the position of the survivor when there are initially n people (and k = 2). The first time around the circle, all of the even-numbered people die. The second time around the circle, the new 2nd person dies, then the new 4th person, etc.; it's as though there were no first time around the circle.

If the initial number of people was even, then the person in position x during the second time around the circle was originally in position 2x − 1 (for every choice of x). So the person in position f(n) among the n remaining people was originally in position 2f(n) − 1 in the circle of 2n people. This gives us the recurrence

f(2n) = 2f(n) − 1.

If the initial number of people was odd, then we think of person 1 as dying at the end of the first time around the circle. Again, during the second time around the circle, the new 2nd person dies, then the new 4th person, etc. In this case, the person in position x was originally in position 2x + 1. This gives us the recurrence

f(2n + 1) = 2f(n) + 1.

When we tabulate the values of n and f(n) we see a pattern:

n:    1  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16
f(n): 1  1  3  1  3  5  7  1  3   5   7   9  11  13  15   1

This suggests that f(n) is an increasing odd sequence that restarts with f(n) = 1 whenever the index n is a power of 2. Therefore, if we choose m and l so that n = 2^m + l and 0 ≤ l < 2^m, then f(n) = 2l + 1. It is clear that the values in the table satisfy this equation. Or we can think of it this way: after l people are dead, there are only 2^m people left and the count continues from the (2l + 1)-th person; he must be the survivor, so f(n) = 2l + 1. Below, we give a proof by induction.

Theorem: If n = 2^m + l and 0 ≤ l < 2^m, then f(n) = 2l + 1.

Proof: We use strong induction on n. The base case n = 1 is true. We consider separately the cases when n is even and when n is odd.

If n is even, then choose m_1 and l_1 such that n/2 = 2^{m_1} + l_1 and 0 ≤ l_1 < 2^{m_1}. Note that l_1 = l/2. We have f(n) = 2f(n/2) − 1 = 2(2l_1 + 1) − 1 = 2l + 1, where the second equality follows from the induction hypothesis.

If n is odd, then choose m_1 and l_1 such that (n − 1)/2 = 2^{m_1} + l_1 and 0 ≤ l_1 < 2^{m_1}. Note that l_1 = (l − 1)/2. We have f(n) = 2f((n − 1)/2) + 1 = 2(2l_1 + 1) + 1 = 2l + 1, where the second equality follows from the induction hypothesis. This completes the proof.

The most elegant form of the answer involves the binary representation of n: f(n) can be obtained by a one-bit left cyclic shift of n itself. If we represent n in binary as n = 1 b_{m−1} b_{m−2} ... b_1 b_0, then the solution is given by f(n) = b_{m−1} b_{m−2} ... b_1 b_0 1. The proof of this follows from the representation of n as 2^m + l.
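Both the recurrences and the closed form are easy to verify by brute force; a short sketch (Python):

```python
def josephus_simulate(n, k=2):
    """Position (1-based) of the survivor by direct simulation."""
    people = list(range(1, n + 1))
    i = 0
    while len(people) > 1:
        i = (i + k - 1) % len(people)   # step k places around the circle and eliminate
        people.pop(i)
    return people[0]

def josephus_closed_form(n):
    """f(n) = 2l + 1 where n = 2**m + l, i.e. a one-bit left cyclic shift of n."""
    m = n.bit_length() - 1              # highest power of two not exceeding n
    l = n - (1 << m)
    return 2 * l + 1

assert all(josephus_simulate(n) == josephus_closed_form(n) for n in range(1, 200))
print([josephus_closed_form(n) for n in range(1, 17)])
# [1, 1, 3, 1, 3, 5, 7, 1, 3, 5, 7, 9, 11, 13, 15, 1]
```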

The general case


The easiest way to solve this problem in the general case is to use dynamic programming. This approach gives us the recurrence:

This can be seen from the following arguments: When starting from position positions After the first round of the thus has the label 1 in the This approach has running time 2k-th, ..., instead of 0, the number of the last remaining person also shifts by person problem one eliminates the person on position (in the problem. We must therefore shift the outcome of the persons. and large there is another approach. The second . It is based on considering killing k-th, , but for small and is left with an counts and problem by

person problem. The person

person case) is the first of the next

positions to get the answer for the case with approach also uses dynamic programming but has running time

-th people as one step, then changing the numbering.
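The O(n) dynamic-programming solution is a single loop over this recurrence; a sketch (Python, 1-based positions):

```python
def josephus(n, k):
    """Survivor's 1-based position via f(i, k) = ((f(i-1, k) + k - 1) mod i) + 1."""
    f = 1                               # f(1, k) = 1
    for i in range(2, n + 1):
        f = (f + k - 1) % i + 1         # shift the smaller problem's answer by k positions
    return f

print(josephus(41, 3))   # 31: survivor's position for n = 41, k = 3 (the parameters of Josephus' account)
```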


Variants and generalizations


According to Concrete Mathematics, section 1.3, Josephus had an accomplice; the problem was then to find the places of the two last remaining survivors (whose conspiracy would ensure their survival).

Extended Josephus problem


Problem definition: There are n persons, numbered 1 to n, around a circle. We eliminate the second of every two remaining persons until one person remains. Given n, determine the number of the x-th person who is eliminated.[2]

Notes
[1] Flavius Josephus, Wars of the Jews III. 8. 7 (http://www.gutenberg.org/dirs/2/8/5/2850/2850-h/book3.htm#2HCH0008) (translation by William Whiston).
[2] Armin Shams-Baragh, Formulating the Extended Josephus Problem (http://www.cs.manchester.ac.uk/~shamsbaa/Josephus.pdf).

References
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein (2001). "Chapter 14: Augmenting Data Structures". Introduction to Algorithms (Second Edition ed.). MIT Press and McGraw-Hill. p.318. ISBN0-262-03293-7.

External links
Josephus Flavius game (http://www.cut-the-knot.org/recurrence/flavius.shtml) (Java Applet) at cut-the-knot allowing selection of every nth out of 50 (maximum). Josephus Problem at the MathWorld encyclopedia (http://mathworld.wolfram.com/JosephusProblem.html) Josephus Problem at Shippensburg University (http://mathdl.maa.org/mathDL/3/?pa=content& sa=viewDocument&nodeId=322)


Kalmanson combinatorial conditions


In mathematics, the Kalmanson combinatorial conditions are a set of conditions on the distance matrix used in determining the solvability of the traveling salesman problem. These conditions apply to a special kind of cost matrix, the Kalmanson matrix.

References
Klinz, B.; Woeginger, G. J. (1999). "The Steiner tree problem in Kalmanson matrices and in circulant matrices". Journal of Combinatorial Optimization 3 (1): 51-58. ISSN 1382-6905.
Deineko, V. G.; van der Veen, J. A.; Rudolf, R.; Woeginger, G. J. (1997). "Three easy special cases of the euclidean travelling salesman problem". RAIRO. Recherche opérationnelle 31 (4): 343-362. ISSN 0399-0559.
Okamoto, Y. "Traveling salesman games with the Monge property". Discrete Applied Mathematics 138 (3): 349-369.
Cela, Eranda (1998). The Quadratic Assignment Problem: Theory and Algorithms. Springer Publishing. ISBN 0792348788.

Kemnitz's conjecture
In mathematics, Kemnitz's conjecture states that the centroid of a certain subset of lattice points in the plane is also a lattice point. It was proved independently in the autumn of 2003 by Christian Reiher and Carlos di Fiore. The exact formulation of this conjecture is as follows:
Let n be a natural number and S a set of 4n − 3 lattice points in the plane. Then there exists a subset S_1 of S with n points such that the centroid of all points from S_1 is also a lattice point.

Kemnitz's conjecture was formulated in 1983 by Arnfried Kemnitz as a generalization of the Erdős–Ginzburg–Ziv theorem. In 2000, Lajos Rónyai proved the conjecture for sets with 4n − 2 lattice points. Then, in 2003, Christian Reiher proved the full conjecture using the Chevalley–Warning theorem.

References
Erdős, P.; Ginzburg, A.; Ziv, A. (1961). "Theorem in additive number theory". Bull. Research Council Israel 10F: 41-43.
Kemnitz, A. (1983). "On a lattice point problem". Ars Combinatoria 16b: 151-160.
Rónyai, L. (April 2000). "On a conjecture of Kemnitz". Combinatorica 20 (4): 569-573. doi:10.1007/s004930070008.
Reiher, Ch. (June 2007). "On Kemnitz' conjecture concerning lattice-points in the plane". The Ramanujan Journal 13: 333-337. doi:10.1007/s11139-006-0256-y.
Gao, W. D.; Thangadurai, R. (July 2004). "A variant of Kemnitz Conjecture". Journal of Combinatorial Theory, Series A 107 (1): 69-86. doi:10.1016/j.jcta.2004.03.009.
Savchev, S.; Chen, F. (July 2005). "Kemnitz' conjecture revisited". Discrete Mathematics 297 (1-3): 196-201. doi:10.1016/j.disc.2005.02.018.

Langford pairing

In combinatorial mathematics, a Langford pairing, also called a Langford sequence, is a permutation of the sequence of 2n numbers 1, 1, 2, 2, ..., n, n in which the two ones are one unit apart, the two twos are two units apart, and more generally the two copies of each number k are k units apart. Langford pairings are named after C. Dudley Langford, who posed the problem of constructing them in 1958. Langford's problem is the task of finding Langford pairings for a given value of n.[1] The closely related concept of a Skolem sequence[2] is defined in the same way, but instead permutes the sequence 0, 0, 1, 1, ..., n − 1, n − 1.

A Langford pairing for n = 4.

Example
For example, a Langford pairing for n = 3 is given by the sequence 2,3,1,2,1,3.

Properties
Langford pairings exist only when n is congruent to 0 or 3 modulo 4; for instance, there is no Langford pairing when n = 1, 2, or 5. The numbers of different Langford pairings for n = 1, 2, …, counting any sequence as being the same as its reversal, are 0, 0, 1, 1, 0, 0, 26, 150, 0, 0, 17792, 108144, … (sequence A014552 in OEIS). As Knuth (2008) describes, the problem of listing all Langford pairings for a given n can be solved as an instance of the exact cover problem, but for large n the number of solutions can be calculated more efficiently by algebraic methods.
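For small n, the pairings can also be enumerated by a plain backtracking search, without the exact-cover or algebraic machinery just mentioned. The following is a minimal Python sketch (the function name langford is illustrative, not from any library); it places the two copies of each number k exactly k + 1 positions apart.

    def langford(n):
        """Yield Langford pairings of 1..n as lists of length 2n (brute-force backtracking)."""
        seq = [0] * (2 * n)

        def place(k):
            if k == 0:
                yield list(seq)
                return
            # the two copies of k sit at positions i and i + k + 1 (k units apart)
            for i in range(2 * n - k - 1):
                if seq[i] == 0 and seq[i + k + 1] == 0:
                    seq[i] = seq[i + k + 1] = k
                    yield from place(k - 1)
                    seq[i] = seq[i + k + 1] = 0

        yield from place(n)

    for s in langford(3):
        print(s)   # prints the two mirror-image pairings for n = 3, e.g. [2, 3, 1, 2, 1, 3]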

Applications
Skolem (1957) used Skolem sequences to construct Steiner triple systems. In the 1960s, E. J. Groth used Langford pairings to construct circuits for integer multiplication.[3]

Notes
[1] Knuth (2008); Gardner (1978).
[2] Nordh (2008).
[3] Knuth (2008).

References
Gardner, Martin (1978), "Langford's problem", Mathematical Magic Show, Vintage, p. 70.
Knuth, Donald E. (2008), The Art of Computer Programming, Vol. IV, Fascicle 0: Introduction to Combinatorial Algorithms and Boolean Functions, Addison-Wesley, ISBN 978-0-321-53496-5.
Langford, C. Dudley (1958), "Problem", Mathematical Gazette 42: 228.

Nordh, Gustav (2008), "Perfect Skolem sets", Discrete Mathematics 308 (9): 1653–1664, arXiv:math/0506155, doi:10.1016/j.disc.2006.12.003, MR2392605.
Skolem, Thoralf (1957), "On certain distributions of integers in pairs with given differences", Mathematica Scandinavica 5: 57–68, MR0092797.


External links
John E. Miller, Langford's Problem (http://www.lclark.edu/~miller/langford.html), 2006, with an extensive bibliography (http://www.lclark.edu/~miller/langford/langford-biblio.html).
Weisstein, Eric W., "Langford's Problem" (http://mathworld.wolfram.com/LangfordsProblem.html) from MathWorld.

Laver table
In mathematics, Laver tables (named after Richard Laver, who discovered them towards the end of the 1980s in connection with his works on set theory) are tables of numbers that have certain properties.

Definition
For a given natural number n, one can define the n-th Laver table (with 2^n rows and columns) by setting

L_n(p, q) := p ⋆ q,

where p denotes the row and q denotes the column of the entry. Define

p ⋆ 1 := p + 1 mod 2^n (taking representatives in 1, …, 2^n),

and then calculate the remaining entries, working through the rows from the 2^n-th down to the first, using the equation

p ⋆ (q + 1) := (p ⋆ q) ⋆ (p ⋆ 1).

The resulting table is then called the n-th Laver table; for example, for n = 2, we have:

        1  2  3  4
    1 | 2  4  2  4
    2 | 3  4  3  4
    3 | 4  4  4  4
    4 | 1  2  3  4

There is no known closed-form expression to calculate the entries of a Laver table directly, and it is in fact suspected that such a formula does not exist.
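Although no closed form is known, the table itself is easy to compute directly from the recursive definition above. A minimal Python sketch (the helper name laver_table is illustrative) that reproduces the n = 2 table shown above:

    def laver_table(n):
        """Return the n-th Laver table as a dict T[(p, q)] = p * q for 1 <= p, q <= 2**n."""
        m = 2 ** n
        T = {}
        for p in range(1, m + 1):      # column 1: p * 1 = p + 1 (mod 2**n, values 1..2**n)
            T[(p, 1)] = p % m + 1
        for p in range(m, 0, -1):      # fill the rows from the last one down to the first
            for q in range(1, m):      # p * (q + 1) = (p * q) * (p * 1)
                T[(p, q + 1)] = T[(T[(p, q)], T[(p, 1)])]
        return T

    T = laver_table(2)
    for p in range(1, 5):
        print([T[(p, q)] for q in range(1, 5)])
    # [2, 4, 2, 4]
    # [3, 4, 3, 4]
    # [4, 4, 4, 4]
    # [1, 2, 3, 4]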

Periodicity
When looking at the first row of entries in a Laver table, it can be seen that the entries repeat with a certain periodicity m. This periodicity is always a power of 2; the first few periodicities are 1, 1, 2, 4, 4, 8, 8, 8, 8, 16, 16, ... (sequence A098820 in OEIS). The sequence is increasing, and it was proved in 1995 by Richard Laver that under the assumption that there exists a rank-into-rank (a large cardinal), it actually increases without bound. Nevertheless, it grows extremely slowly; Randall Dougherty showed that the first n for which the table entries' period can possibly be 32 is A(9,A(8,A(8,255))), where A denotes the Ackermann function.


References
Patrick Dehornoy, "Das Unendliche als Quelle der Erkenntnis", in: Spektrum der Wissenschaft Spezial 1/2001, pp. 86–90.

Further reading
R. Laver, "On the Algebra of Elementary Embeddings of a Rank into Itself", Advances in Mathematics 110, p. 334, 1995, arXiv:math.LO/9204204.
R. Dougherty, "Critical Points in an Algebra of Elementary Embeddings", Annals of Pure and Applied Logic 65, p. 211, 1993, arXiv:math.LO/9205202.
Patrick Dehornoy, "Diagrams colourings and applications", Proceedings of the East Asian School of Knots, Links and Related Topics, 2004 (online [1]).

References
[1] http://knot.kaist.ac.kr/2004/proceedings/DEHORNOY.pdf

Lehmer code
In mathematics and in particular in combinatorics, the terms inversion table and Lehmer code refer to ways to encode each permutation of n objects by a sequence of n numbers in a manner that makes evident the fact that there are

n! = 1 × 2 × … × n

such permutations. If a permutation σ is specified by the sequence (σ1, …, σn) of its images of 1, …, n, then it is encoded by a sequence of n numbers, but not all such sequences are valid since every number must be used only once. By contrast the encodings considered here choose the first number from a set of n values, the next number from a fixed set of n − 1 values, and so forth, decreasing the number of possibilities until the last number, for which only a single fixed value is allowed; every sequence of numbers chosen from these sets encodes a single permutation. While several encodings can be defined, the Lehmer code has several additional useful properties; it is the sequence

L(σ) = (L(σ)1, …, L(σ)n), where L(σ)i = #{ j > i : σj < σi },

in other words the term L(σ)i counts the number of terms in (σ1, …, σn) to the right of σi that are smaller than it, a number between 0 and n − i, allowing for n + 1 − i different values. A pair of indices (i, j) with i < j and σi > σj is called an inversion of σ, and L(σ)i counts the number of inversions (i, j) with i fixed and varying j. It follows that L(σ)1 + L(σ)2 + … + L(σ)n is the total number of inversions of σ, which is also the number of adjacent transpositions that are needed to transform the permutation into the identity permutation. Other properties of the Lehmer code include: the lexicographic order of the encodings of two permutations is the same as that of their sequences (σ1, …, σn); any value 0 in the code represents a right-to-left minimum in the permutation (i.e., a σi smaller than any σj to its right), and a value n − i at position i similarly signifies a right-to-left maximum; and the Lehmer code of σ coincides with the combinatorial number system representation of its position in the list of permutations of n in lexicographic order (numbering the positions starting from 0). Variations of this encoding can be obtained by counting inversions (i, j) for fixed j rather than fixed i, by counting inversions with a fixed smaller value σj rather than a fixed smaller index i, or by counting non-inversions rather than inversions; while this does not produce a fundamentally different type of encoding, some properties of the encoding will change correspondingly. In particular counting inversions with a fixed smaller value σj gives the inversion table of σ, which can be seen to be the Lehmer code of the inverse permutation. The Lehmer code is named in reference to Derrick Henry Lehmer,[1] but the code had been known since 1888 at least.[2]


Encoding and decoding


The usual way to prove that there are n! different permutations of n objects is to observe that the first object can be chosen in n different ways, the next object in n − 1 different ways (because choosing the same number as the first is forbidden), the next in n − 2 different ways (because there are now 2 forbidden values), and so forth. Translating this freedom of choice at each step into a number, one obtains an encoding algorithm, one that finds the Lehmer code of a given permutation. One need not suppose the objects permuted to be numbers, but one needs a total ordering of the set of objects. Since the code numbers are to start from 0, the appropriate number to encode each object σi by is the number of objects that were available at that point (so they do not occur before position i), but which are smaller than the object σi actually chosen. (Inevitably such objects must appear at some position j > i, and (i, j) will be an inversion, which shows that this number is indeed L(σ)i.) This number to encode each object can be found by direct counting, in several ways (directly counting inversions, or correcting the total number of objects smaller than a given one, which is its sequence number starting from 0 in the set, by those that are unavailable at its position). Another method which is in-place, but not really more efficient, is to start with the permutation of {0, 1, …, n − 1} obtained by representing each object by its mentioned sequence number, and then for each entry x, in order from left to right, correct the items to its right by subtracting 1 from all entries (still) greater than x (to reflect the fact that the object corresponding to x is no longer available). Concretely, the Lehmer code of the permutation B,F,A,G,D,E,C of letters, ordered alphabetically, is obtained by first writing the list of sequence numbers 1,5,0,6,3,4,2, which this procedure successively transforms into the Lehmer code 1,4,0,3,1,1,0 (at each step the entry just processed is subtracted from the larger entries to its right to form the next list).

For decoding a Lehmer code into a permutation of a given set, the latter procedure may be reversed: for each entry x, in order from right to left, correct the items to its right by adding 1 to all those (currently) greater than or equal to x; finally interpret the resulting permutation of {0, 1, …, n − 1} as sequence numbers (which amounts to adding 1 to each entry if a permutation of {1, 2, …, n} is sought). Alternatively the entries of the Lehmer code can be processed from left to right, and interpreted as a number determining the next choice of an element as indicated above; this requires maintaining a list of available elements, from which each chosen element is removed. In the example this would mean choosing element 1 from {A,B,C,D,E,F,G} (which is B), then element 4 from {A,C,D,E,F,G} (which is F), then element 0 from {A,C,D,E,G} (giving A) and so on, reconstructing the sequence B,F,A,G,D,E,C.
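As a concrete illustration of the counting and the decoding just described, here is a small Python sketch (the function names are illustrative); it reproduces the B,F,A,G,D,E,C example:

    def lehmer_code(seq):
        """Lehmer code of a sequence of distinct comparable items:
        entry i counts the items to the right of position i that are smaller."""
        return [sum(1 for right in seq[i + 1:] if right < x) for i, x in enumerate(seq)]

    def decode(code, items):
        """Rebuild the permutation of `items` from its Lehmer code (items given in sorted order)."""
        pool = sorted(items)
        return [pool.pop(c) for c in code]

    word = list("BFAGDEC")
    code = lehmer_code(word)
    print(code)                     # [1, 4, 0, 3, 1, 1, 0]
    print(decode(code, "ABCDEFG"))  # ['B', 'F', 'A', 'G', 'D', 'E', 'C']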

Applications to combinatorics and probabilities


Independence of relative ranks
This application stems from an immediate property of the Lehmer code L(σ) seen as a sequence of integers. Under the uniform law on the symmetric group Sn, the sequence L(σ) consists of independent uniform variables; the component L(σ)i follows the uniform law on {0, 1, …, n − i}. In other words, if we draw a permutation σ at random in Sn with equiprobability (each permutation has a probability of 1/n! of being chosen), then its Lehmer code L(σ) is a sequence of independent, uniformly distributed random variables. The independence of the components of L results from a general principle concerning uniform variables on a Cartesian product.


Number of right-to-left minima and maxima


Definition: In a sequence u = (uk), 1 ≤ k ≤ n, there is a right-to-left minimum (resp. maximum) at rank k if uk is strictly smaller (resp. strictly bigger) than each element ui with i > k, i.e., to its right. Let B(k) (resp. H(k)) be the event "there is a right-to-left minimum (resp. maximum) at rank k", i.e. B(k) is the set of permutations which exhibit a right-to-left minimum (resp. maximum) at rank k. We clearly have

Thus the number Nb(σ) (resp. Nh(σ)) of right-to-left minima (resp. maxima) of the permutation σ can be written as a sum of independent Bernoulli random variables, the k-th having parameter 1/k:

Indeed, since L(σ)k follows the uniform law on {0, 1, …, n − k},

The generating function for the Bernoulli random variable

is

therefore the generating function of Nb is

which allows us to recover the product form for the generating series of the (unsigned) Stirling numbers of the first kind.

The secretary problem


This is an optimal stopping problem, a classic in decision theory, statistics and applied probability, in which a random permutation σ is gradually revealed through the first elements of its Lehmer code, and where the goal is to stop exactly at the element k such that σ(k) = n, whereas the only available information (the first k values of the Lehmer code) is not sufficient to compute σ(k). In less mathematical words: a series of n applicants are interviewed one after the other. The interviewer must hire the best applicant, but must make the decision ("hire" or "do not hire") on the spot, without interviewing the next applicant (and a fortiori without interviewing all applicants). The interviewer thus knows the relative rank of the kth applicant; therefore, at the moment of making the kth decision, the interviewer knows only the first k elements of the Lehmer code, whereas he would need to know all of them to make a well informed decision. To determine the optimal strategies (i.e. the strategy maximizing the probability of a win), the statistical properties of the Lehmer code are crucial. Allegedly, Johannes Kepler clearly exposed this secretary problem to a friend of his at a time when he was trying to make up his mind and choose one out of eleven prospective brides as his second wife. His first marriage had been an unhappy one, having been arranged without himself being consulted, and he was thus very concerned that he could reach the right decision.[3]
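The classical answer to this problem is a threshold rule: reject roughly the first n/e applicants and then hire the first applicant who beats everyone seen so far, which succeeds with probability about 1/e. The following Python sketch simulates that classical rule on random permutations; it illustrates the problem itself, not the Lehmer-code analysis above, and the function name simulate is illustrative.

    import math
    import random

    def simulate(n, trials=100_000):
        """Estimate the success probability of: skip the first round(n/e) applicants,
        then hire the first applicant better than all of those already seen."""
        k = round(n / math.e)
        wins = 0
        for _ in range(trials):
            ranks = random.sample(range(n), n)       # a uniformly random permutation of 0..n-1
            best_seen = max(ranks[:k], default=-1)
            chosen = next((r for r in ranks[k:] if r > best_seen), ranks[-1])
            wins += (chosen == n - 1)                # success = the best applicant was hired
        return wins / trials

    print(simulate(20))   # roughly 0.38, close to the limiting value 1/e ~ 0.368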


References
[1] Lehmer, D. H. (1960), "Teaching combinatorial tricks to a computer", Proc. Sympos. Appl. Math. Combinatorial Analysis, Amer. Math. Soc. 10: 179–193.
[2] Laisant, Charles-Ange (1888), "Sur la numération factorielle, application aux permutations" (http://www.numdam.org/item?id=BSMF_1888__16__176_0) (in French), Bulletin de la Société Mathématique de France 16: 176–183.
[3] Ferguson, Thomas S. (1989), "Who solved the secretary problem?" (http://www.jstor.org/stable/2245639), Statistical Science 4 (3): 282–289.

Bibliography
Mantaci, Roberto (2001), "A permutation representation that knows what "Eulerian" means" (http://www.dmtcs.org/volumes/abstracts/pspapers/dm040203.ps), Discrete Mathematics and Theoretical Computer Science (4): 101–108.
Knuth, Donald E. (1981), The Art of Computer Programming, vol. 3, Reading: Addison-Wesley, pp. 12–13.

Lindström–Gessel–Viennot lemma
In mathematics, the Lindström–Gessel–Viennot lemma provides a way to count the number of tuples of non-intersecting lattice paths.

Statement
Let G be a locally finite directed acyclic graph. This means that each vertex has finite degree, and that G contains no directed cycles. Consider base vertices A = {a1, …, an} and destination vertices B = {b1, …, bn}, and also assign a weight we to each directed edge e. For each directed path P between two vertices, let w(P) be the formal product of the weights of the edges of the path. For any two vertices a and b, write e(a, b) for the formal sum over all paths from a to b,

e(a, b) = Σ over paths P from a to b of w(P).

In particular, if between any two points there are only finitely many paths, one can assign the weight 1 to each edge, and then e(a, b) counts the number of paths from a to b. With this setup, write

M = ( e(ai, bj) ), the n × n matrix with rows indexed by A and columns indexed by B.

The Lindström–Gessel–Viennot lemma then states that the determinant of M is the signed sum over all n-tuples P = (P1, ..., Pn) of non-intersecting paths from A to B:

det(M) = Σ over (σ, P) of sign(σ) · w(P1) ⋯ w(Pn),

where the sum runs over permutations σ of {1, …, n} and n-tuples of pairwise non-intersecting paths with Pi going from ai to bσ(i). That is, the determinant of M counts the weights of all n-tuples of non-intersecting paths starting at A and ending at B, each affected with the sign of the corresponding permutation of (1, …, n), given by taking ai to bσ(i). In particular, if we can take the weights to be 1 and the only permutation possible is the identity (i.e., every n-tuple of non-intersecting paths from A to B takes ai to bi for each i), then det(M) is exactly the number of non-intersecting n-tuples of paths starting at A and ending at B.
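As a small sanity check of the statement, the following Python sketch works in the grid Z² with unit steps to the right and up and all edge weights equal to 1, so that e(a, b) is a binomial coefficient; the determinant then counts tuples of vertex-disjoint paths. The helper names are illustrative, and the determinant is computed by brute-force permutation expansion, which is only suitable for small matrices.

    from math import comb
    from itertools import permutations

    def paths(a, b):
        # number of monotone lattice paths from a to b using unit steps right and up
        dx, dy = b[0] - a[0], b[1] - a[1]
        return comb(dx + dy, dx) if dx >= 0 and dy >= 0 else 0

    def det(M):
        # brute-force permutation expansion of the determinant
        n, total = len(M), 0
        for perm in permutations(range(n)):
            sign = 1
            for i in range(n):
                for j in range(i + 1, n):
                    if perm[i] > perm[j]:
                        sign = -sign
            prod = 1
            for i, p in enumerate(perm):
                prod *= M[i][p]
            total += sign * prod
        return total

    A = [(0, 0), (1, 0)]          # base vertices a1, a2
    B = [(1, 2), (2, 2)]          # destination vertices b1, b2
    M = [[paths(a, b) for b in B] for a in A]
    print(M)        # [[3, 6], [1, 3]]
    print(det(M))   # 3 = number of vertex-disjoint pairs of paths (a1 -> b1, a2 -> b2)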


Proof
To prove the Lindström–Gessel–Viennot lemma, one uses an involution whose fixed points are precisely the tuples of non-intersecting paths, and which preserves the weights. To define this involution f on the set of all n-tuples of paths from A to B, we define f to fix any tuple of non-intersecting paths, and for a tuple which contains an intersection, we want to switch the tails of the paths so as to give the negative sign in the above formula. To make sure this is well defined, it is necessary to define an ordering on the paths that intersect. Let i be the smallest index such that the path starting at ai contains an intersection, and then let j be the largest index such that a path intersecting the previous one starts at aj. Then define f to switch the tails of these two paths. This is a well defined involution on the set of all n-tuples of paths from A to B, whose fixed points are exactly the non-intersecting tuples.

Applications
Schur polynomials
The Lindström–Gessel–Viennot lemma can be used to prove the equivalence of the following two different definitions of Schur polynomials. Given a partition λ = (λ1, …, λr) of n, the Schur polynomial can be defined as

sλ(x1, …, xk) = Σ over T of x^T,

where the sum is over all semistandard Young tableaux T of shape λ, and the weight x^T of a tableau is the corresponding monomial, obtained by taking the product of the xi indexed by the entries i of T. For instance, a tableau of shape (3,2,2,1) with rows (1,1,3), (2,3), (3,4) and (4) has weight x1²x2x3³x4². Alternatively, the Schur polynomial can be defined by the determinantal (Jacobi–Trudi) formula

sλ = det( h(λi + j − i) ), taken over 1 ≤ i, j ≤ r,

where the hi are the complete homogeneous symmetric polynomials (with h0 = 1 and hi = 0 for i < 0). For instance, for the partition (3,2,2,1), the corresponding determinant is

    | h3  h4  h5  h6 |
    | h1  h2  h3  h4 |
    |  1  h1  h2  h3 |
    |  0   0   1  h1 |

To prove the equivalence, given any partition λ as above with conjugate partition λ′, one considers r starting points and r ending points in the lattice Z², determined by λ. The lattice acquires the structure of a directed graph by asserting that the only allowed directions are going one step to the right or one step up; the weight associated to any horizontal edge at height i is xi, and the weight associated to a vertical edge is 1. With this definition, r-tuples of non-intersecting paths from A to B correspond exactly to semistandard Young tableaux of shape λ, and the weight of such an r-tuple is the corresponding summand in the first definition of the Schur polynomials. For instance, the tableau above corresponds to a 4-tuple of non-intersecting lattice paths.


On the other hand, the matrix M is exactly the matrix of complete homogeneous symmetric polynomials appearing in the determinant above. This shows the required equivalence.

The Cauchy–Binet formula


One can also use the Lindström–Gessel–Viennot lemma to prove the Cauchy–Binet formula, and in particular the multiplicativity of the determinant.

References
Bruce E. Sagan. The Symmetric Group. Springer, 2001.

List of factorial and binomial topics


This is a list of factorial and binomial topics in mathematics, by Wikipedia page. See also binomial (disambiguation).
Abel's binomial theorem
Alternating factorial
Antichain
Beta function
Binomial coefficient
Binomial distribution
Binomial proportion confidence interval
Binomial-QMF (Daubechies wavelet filters)
Binomial series
Binomial theorem
Pascal's triangle
Binomial transform
Binomial type
Carlson's theorem
Catalan number
Central binomial coefficient
Combination
De Polignac's formula
Difference operator
Difference polynomials
Digamma function
Erdős–Ko–Rado theorem
Euler–Mascheroni constant
Faà di Bruno's formula
Factorial
Factorial moment
Factorial prime
Gamma distribution
Gamma function
Gaussian binomial coefficient
Hyperfactorial
Hypergeometric distribution
Hypergeometric function identities
Hypergeometric series
Incomplete beta function
Incomplete gamma function
Lah number
Lanczos approximation
Lozanić's triangle
Mahler's theorem
Multinomial distribution
Multinomial coefficient, Multinomial formula, Multinomial theorem
Multiplicities of entries in Pascal's triangle
Multiset
Multivariate gamma function
Narayana numbers
Negative binomial distribution
Nörlund–Rice integral
Pascal matrix
Pascal's pyramid
Pascal's simplex
Pascal's triangle
Permutation
List of permutation topics
Pochhammer symbol (also falling, lower, rising, upper factorials)
Poisson distribution
Polygamma function
Primorial
Proof of Bertrand's postulate
Sierpinski triangle
Star of David theorem
Stirling number
Stirling transform
Stirling's approximation
Subfactorial
Table of Newtonian series
Taylor series
Trinomial expansion
Vandermonde's identity
Wilson prime
Wilson's theorem
Wolstenholme prime

Littlewood–Offord problem
In mathematics, the Littlewood–Offord problem is the combinatorial question in geometry of bounding from above the number of the 2^n subsums Σ over i in S of vi (one for each subset S of {1, …, n}) made out of vectors v1, …, vn in R^d that fall into a fixed convex set.

The first result (for d = 1, 2) was given in a paper from 1938[1] by John Edensor Littlewood and A. Cyril Offord, on random polynomials. This Littlewood–Offord lemma states that, for real or complex numbers vi of absolute value at least one and any disc of radius one, not more than O(2^n log n / √n) of the 2^n sums of the vi fall into the disc. In 1945 Paul Erdős improved the upper bound for d = 1 to the central binomial coefficient C(n, ⌊n/2⌋), which is of order 2^n/√n,

using Sperner's theorem. This bound is sharp; equality is attained when all vi are equal. Then Kleitman in 1966 showed that the same bound held for complex numbers. He extended this (1970) to vi in a normed space. See [2] for the proofs of these results. The semi-sum m = ½ Σ vi can be subtracted from all the subsums. That is, by change of origin and then scaling by a factor of 2, we may as well consider sums Σ εi vi in which εi takes the value 1 or −1. This makes the problem into a probabilistic one, in which the question is of the distribution of these random vectors, and what can be said knowing nothing more about the vi.

References
[1] Littlewood, J. E.; Offord, A. C. (1943). "On the number of real roots of a random algebraic equation (III)". Rec. Math. [Mat. Sbornik] N.S. 12 (54) (3): 277–286.
[2] Bollobás, Béla (1986). Combinatorics. Cambridge University Press. ISBN 0-521-33059-9; 0-521-33703-8.

Longest common subsequence problem



The longest common subsequence (LCS) problem is to find the longest subsequence common to all sequences in a set of sequences (often just two). Note that a subsequence is different from a substring; see substring vs. subsequence. It is a classic computer science problem, the basis of file comparison programs such as diff, and has applications in bioinformatics.

Complexity
For the general case of an arbitrary number of input sequences, the problem is NP-hard.[1] When the number of sequences is constant, the problem is solvable in polynomial time by dynamic programming (see Solution below). Assume you have N sequences of lengths n1, …, nN. A naive search would test each of the 2^n1 subsequences of the first sequence to determine whether they are also subsequences of the remaining sequences; each subsequence may be tested in time linear in the lengths of the remaining sequences, so the time for this algorithm would be

O(2^n1 × (n2 + … + nN)).

For the case of two sequences of n and m elements, the running time of the dynamic programming approach is O(n × m). For an arbitrary number of input sequences, the dynamic programming approach gives a solution in

O(N × n1 × n2 × … × nN).

There exist methods with lower complexity,[2] which often depend on the length of the LCS, the size of the alphabet, or both. Notice that the LCS is not necessarily unique; for example the LCS of "ABC" and "ACB" is both "AB" and "AC". Indeed the LCS problem is often defined to be finding all common subsequences of a maximum length. This problem inherently has higher complexity, as the number of such subsequences is exponential in the worst case,[3] even for only two input strings. It should not be confused with the longest common substring problem (substrings are necessarily contiguous).

Solution for two sequences


The LCS problem has an optimal substructure: the problem can be broken down into smaller, simple "subproblems", which can be broken down into yet simpler subproblems, and so on, until, finally, the solution becomes trivial. The LCS problem also has overlapping subproblems: the solution to a higher subproblem depends on the solutions to several of the lower subproblems. Problems with these two properties (optimal substructure and overlapping subproblems) can be approached by a problem-solving technique called dynamic programming, in which the solution is built up starting with the simplest subproblems. The procedure requires memoization: saving the solutions to one level of subproblem in a table so that the solutions are available to the next level of subproblems. This method is illustrated here.

Prefixes
The subproblems become simpler as the sequences become shorter. Shorter sequences are conveniently described using the term prefix. A prefix of a sequence is the sequence with the end cut off. Let S be the sequence (AGCA). Then, a prefix of S is the sequence (AG). Prefixes are denoted with the name of the sequence, followed by a subscript to indicate how many characters the prefix contains.[4] The prefix (AG) is denoted S2, since it contains the first 2 elements of S. The possible prefixes of S are

S1 = (A)
S2 = (AG)
S3 = (AGC)
S4 = (AGCA).

The solution to the LCS problem for two arbitrary sequences, X and Y, amounts to constructing some function, LCS(X, Y), that gives the longest subsequences common to X and Y. That function relies on the following two properties.

First property
Suppose that two sequences both end in the same element. To find their LCS, shorten each sequence by removing the last element, find the LCS of the shortened sequences, and to that LCS append the removed element. For example, here are two sequences having the same last element: (BANANA) and (ATANA). Remove the same last element. Repeat the procedure until you find no common last element. The removed sequence will be (ANA). The sequences now under consideration: (BAN) and (AT) The LCS of these last two sequences is, by inspection, (A). Append the removed element, (ANA), giving (AANA), which, by inspection, is the LCS of the original sequences. In terms of prefixes, LCS(Xn, Ym) = (LCS( Xn-1, Ym-1), xn) where the comma indicates that the following element, xn, is appended to the sequence. Note that the LCS for Xn and Ym involves determining the LCS of the shorter sequences, Xn-1 and Ym-1.

Second property
Suppose that the two sequences X and Y do not end in the same symbol. Then the LCS of X and Y is the longer of the two sequences LCS(Xn, Ym−1) and LCS(Xn−1, Ym). To understand this property, consider the two following sequences:

sequence X: ABCDEFG (n elements)
sequence Y: BCDGK (m elements)

The last character of the LCS of these two sequences either is a G (the last element of sequence X) or is not.

Case 1: the LCS ends with a G. Then it cannot end with a K. Thus it does not hurt to remove the K from sequence Y: if K were in the LCS, it would be its last character; as a consequence K is not in the LCS. We can then write: LCS(Xn, Ym) = LCS(Xn, Ym−1).

Case 2: the LCS does not end with a G. Then it does not hurt to remove the G from the sequence X (for the same reason as above), and we can write: LCS(Xn, Ym) = LCS(Xn−1, Ym).

In any case, the LCS we are looking for is one of LCS(Xn, Ym−1) or LCS(Xn−1, Ym). Those two last LCS are both common subsequences of X and Y, and LCS(X, Y) is the longer of the two.


LCS function defined


Let two sequences be defined as follows: X = (x1, x2, …, xm) and Y = (y1, y2, …, yn). The prefixes of X are X0, X1, X2, …, Xm; the prefixes of Y are Y0, Y1, Y2, …, Yn. Let LCS(Xi, Yj) represent the set of longest common subsequences of the prefixes Xi and Yj. This set of sequences is given by the following recursion:

LCS(Xi, Yj) = Ø                                      if i = 0 or j = 0
LCS(Xi, Yj) = LCS(Xi−1, Yj−1) extended by xi         if xi = yj
LCS(Xi, Yj) = longest(LCS(Xi, Yj−1), LCS(Xi−1, Yj))  if xi ≠ yj

To find the longest subsequences common to Xi and Yj, compare the elements xi and yj. If they are equal, then the sequence LCS(Xi−1, Yj−1) is extended by that element, xi. If they are not equal, then the longer of the two sequences, LCS(Xi, Yj−1), and LCS(Xi−1, Yj), is retained. (If they are both the same length, but not identical, then both are retained.) Notice that the subscripts are reduced by 1 in these formulas. That can result in a subscript of 0. Since the sequence elements are defined to start at 1, it was necessary to add the requirement that the LCS is empty when a subscript is zero.

Worked example
The longest subsequence common to C = (AGCAT) and R = (GAC) will be found. Because the LCS function uses a "zeroth" element, it is convenient to define zero prefixes that are empty for these sequences: C0 = Ø and R0 = Ø. All the prefixes are placed in a table with C in the first row (making it a column header) and R in the first column (making it a row header).

LCS Strings
      Ø   A   G   C   A   T
  Ø   Ø   Ø   Ø   Ø   Ø   Ø
  G   Ø
  A   Ø
  C   Ø

This table is used to store the LCS sequence for each step of the calculation. The second column and second row have been filled in with Ø, because when an empty sequence is compared with a non-empty sequence, the longest common subsequence is always an empty sequence. LCS(R1, C1) is determined by comparing the first elements in each sequence. G and A are not the same, so this LCS gets (using the "second property") the longest of the two sequences, LCS(R1, C0) and LCS(R0, C1). According to the table, both of these are empty, so LCS(R1, C1) is also empty, as shown in the table below. The arrows indicate that the sequence comes from both the cell above, LCS(R0, C1), and the cell on the left, LCS(R1, C0). LCS(R1, C2) is determined by comparing G and G. They match, so G is appended to the upper left sequence, LCS(R0, C1), which is (Ø), giving (ØG), which is (G). For LCS(R1, C3), G and C do not match. The sequence above is empty; the one to the left contains one element, G. Selecting the longest of these, LCS(R1, C3) is (G). The arrow points to the left, since that is the longest of the two sequences. LCS(R1, C4), likewise, is (G).


"G" Row Completed


      Ø   A    G     C     A     T
  Ø   Ø   Ø    Ø     Ø     Ø     Ø
  G   Ø   Ø   (G)   (G)   (G)   (G)
  A   Ø
  C   Ø

For LCS(R2, C1), A is compared with A. The two elements match, so A is appended to Ø, giving (A). For LCS(R2, C2), A and G do not match, so the longest of LCS(R1, C2), which is (G), and LCS(R2, C1), which is (A), is used. In this case, they each contain one element, so this LCS is given two subsequences: (A) and (G). For LCS(R2, C3), A does not match C. LCS(R2, C2) contains sequences (A) and (G); LCS(R1, C3) is (G), which is already contained in LCS(R2, C2). The result is that LCS(R2, C3) also contains the two subsequences, (A) and (G). For LCS(R2, C4), A matches A, which is appended to the upper left cell, giving (GA). For LCS(R2, C5), A does not match T. Comparing the two sequences, (GA) and (G), the longest is (GA), so LCS(R2, C5) is (GA).

"G" & "A" Rows Completed


      Ø   A      G           C           A      T
  Ø   Ø   Ø      Ø           Ø           Ø      Ø
  G   Ø   Ø      (G)         (G)         (G)    (G)
  A   Ø   (A)    (A) & (G)   (A) & (G)   (GA)   (GA)
  C   Ø

For LCS(R3, C1), C and A do not match, so LCS(R3, C1) gets the longest of the two sequences, (A). For LCS(R3, C2), C and G do not match. Both LCS(R3, C1) and LCS(R2, C2) have one element. The result is that LCS(R3, C2) contains the two subsequences, (A) and (G). For LCS(R3, C3), C and C match, so C is appended to LCS(R2, C2), which contains the two subsequences, (A) and (G), giving (AC) and (GC). For LCS(R3, C4), C and A do not match. Combining LCS(R3, C3), which contains (AC) and (GC), and LCS(R2, C4), which contains (GA), gives a total of three sequences: (AC), (GC), and (GA). Finally, for LCS(R3, C5), C and T do not match. The result is that LCS(R3, C5) also contains the three sequences, (AC), (GC), and (GA).


Completed LCS Table


      Ø   A      G           C             A                    T
  Ø   Ø   Ø      Ø           Ø             Ø                    Ø
  G   Ø   Ø      (G)         (G)           (G)                  (G)
  A   Ø   (A)    (A) & (G)   (A) & (G)     (GA)                 (GA)
  C   Ø   (A)    (A) & (G)   (AC) & (GC)   (AC) & (GC) & (GA)   (AC) & (GC) & (GA)

The final result is that the last cell contains all the longest subsequences common to (AGCAT) and (GAC); these are (AC), (GC), and (GA). The table also shows the longest common subsequences for every possible pair of prefixes. For example, for (AGC) and (GA), the longest common subsequences are (A) and (G).

Traceback approach
Calculating the LCS of a row of the LCS table requires only the solutions to the current row and the previous row. Still, for long sequences, these sequences can get numerous and long, requiring a lot of storage space. Storage space can be saved by saving not the actual subsequences, but the length of the subsequence and the direction of the arrows, as in the table below.

Storing length, rather than sequences


      Ø   A   G   C   A   T
  Ø   0   0   0   0   0   0
  G   0   0   1   1   1   1
  A   0   1   1   1   2   2
  C   0   1   1   2   2   2

The actual subsequences are deduced in a "traceback" procedure that follows the arrows backwards, starting from the last cell in the table. When the length decreases, the sequences must have had a common element. Several paths are possible when two arrows are shown in a cell. Below is the table for such an analysis, with numbers colored in cells where the length is about to decrease. The bold numbers trace out the sequence, (GA).[5]

Traceback example
      Ø   A   G   C   A   T
  Ø   0   0   0   0   0   0
  G   0   0   1   1   1   1
  A   0   1   1   1   2   2
  C   0   1   1   2   2   2


Relation to other problems


For two strings X1..m and Y1..n, the length of the shortest common supersequence is related to the length of the LCS by[2]

|SCS(X, Y)| = m + n − |LCS(X, Y)|.

The edit distance when only insertion and deletion are allowed (no substitution), or when the cost of a substitution is double the cost of an insertion or deletion, is:

d(X, Y) = m + n − 2 |LCS(X, Y)|.

Code for the dynamic programming solution


Computing the length of the LCS
The function below takes as input sequences X[1..m] and Y[1..n], computes the LCS between X[1..i] and Y[1..j] for all 1 ≤ i ≤ m and 1 ≤ j ≤ n, and stores it in C[i,j]. C[m,n] will contain the length of the LCS of X and Y.

function LCSLength(X[1..m], Y[1..n])
    C = array(0..m, 0..n)
    for i := 0..m
        C[i,0] = 0
    for j := 0..n
        C[0,j] = 0
    for i := 1..m
        for j := 1..n
            if X[i] = Y[j]
                C[i,j] := C[i-1,j-1] + 1
            else
                C[i,j] := max(C[i,j-1], C[i-1,j])
    return C[m,n]

Alternatively, memoization could be used.

Reading out an LCS


The following function backtracks the choices taken when computing the C table. If the last characters in the prefixes are equal, they must be in an LCS. If not, check what gave the largest LCS of keeping xi and yj, and make the same choice. Just choose one if they were equally long. Call the function with i = m and j = n.

function backtrack(C[0..m,0..n], X[1..m], Y[1..n], i, j)
    if i = 0 or j = 0
        return ""
    else if X[i] = Y[j]
        return backtrack(C, X, Y, i-1, j-1) + X[i]
    else if C[i,j-1] > C[i-1,j]
        return backtrack(C, X, Y, i, j-1)
    else
        return backtrack(C, X, Y, i-1, j)
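For reference, here is a direct Python transcription of LCSLength and backtrack above; it is a sketch rather than an optimized implementation, and the length function here returns the whole table C so that backtrack can use it. It reproduces the worked example of the next section.

    def lcs_length(X, Y):
        """Dynamic-programming table C, where C[i][j] = LCS length of X[:i] and Y[:j]."""
        m, n = len(X), len(Y)
        C = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if X[i - 1] == Y[j - 1]:
                    C[i][j] = C[i - 1][j - 1] + 1
                else:
                    C[i][j] = max(C[i][j - 1], C[i - 1][j])
        return C

    def backtrack(C, X, Y, i, j):
        """Read one LCS out of the table (on ties this moves up, matching the pseudocode)."""
        if i == 0 or j == 0:
            return ""
        if X[i - 1] == Y[j - 1]:
            return backtrack(C, X, Y, i - 1, j - 1) + X[i - 1]
        if C[i][j - 1] > C[i - 1][j]:
            return backtrack(C, X, Y, i, j - 1)
        return backtrack(C, X, Y, i - 1, j)

    X, Y = "XMJYAUZ", "MZJAWXU"
    C = lcs_length(X, Y)
    print(C[len(X)][len(Y)])                    # 4
    print(backtrack(C, X, Y, len(X), len(Y)))   # MJAU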


Reading out all LCSs


If choosing xi and yj would give an equally long result, read out both resulting subsequences. This is returned as a set by this function. Notice that this function is not polynomial, as it might branch in almost every step if the strings are similar.

function backtrackAll(C[0..m,0..n], X[1..m], Y[1..n], i, j)
    if i = 0 or j = 0
        return {""}
    else if X[i] = Y[j]
        return {Z + X[i] for all Z in backtrackAll(C, X, Y, i-1, j-1)}
    else
        R := {}
        if C[i,j-1] >= C[i-1,j]
            R := backtrackAll(C, X, Y, i, j-1)
        if C[i-1,j] >= C[i,j-1]
            R := R ∪ backtrackAll(C, X, Y, i-1, j)
        return R

Print the diff


This function will backtrack through the C matrix, and print the diff between the two sequences. Notice that you will get a different answer if you exchange >= and <, with > and <=, below.

function printDiff(C[0..m,0..n], X[1..m], Y[1..n], i, j)
    if i > 0 and j > 0 and X[i] = Y[j]
        printDiff(C, X, Y, i-1, j-1)
        print " " + X[i]
    else if j > 0 and (i = 0 or C[i,j-1] >= C[i-1,j])
        printDiff(C, X, Y, i, j-1)
        print "+ " + Y[j]
    else if i > 0 and (j = 0 or C[i,j-1] < C[i-1,j])
        printDiff(C, X, Y, i-1, j)
        print "- " + X[i]
    else
        print ""

Example
Let be "XMJYAUZ" and be "MZJAWXU". The longest common subsequence between and is "MJAU". The table C shown below, which is generated by the function LCSlength, shows the lengths of the longest common subsequences between prefixes of and . The th row and th column shows the length of the LCS between and .

     |  0  1  2  3  4  5  6  7
     |     M  Z  J  A  W  X  U
-----|------------------------
 0   |  0  0  0  0  0  0  0  0
 1 X |  0  0  0  0  0  0  1  1
 2 M |  0  1  1  1  1  1  1  1
 3 J |  0  1  1  2  2  2  2  2
 4 Y |  0  1  1  2  2  2  2  2
 5 A |  0  1  1  2  3  3  3  3
 6 U |  0  1  1  2  3  3  3  4
 7 Z |  0  1  2  2  3  3  3  4

The underlined numbers show the path the function backtrack would follow from the bottom right to the top left corner, when reading out an LCS. If the current symbols in X and Y are equal, they are part of the LCS, and we go both up and left. If not, we go up or left, depending on which cell has a higher number. This corresponds to either taking the LCS between X1..i−1 and Y1..j, or between X1..i and Y1..j−1.

Code optimization
Several optimizations can be made to the algorithm above to speed it up for real-world cases.

Reduce the problem set


The C matrix in the naive algorithm grows quadratically with the lengths of the sequences. For two 100-item sequences, a 10,000-item matrix would be needed, and 10,000 comparisons would need to be done. In most real-world cases, especially source code diffs and patches, the beginnings and ends of files rarely change, and almost certainly not both at the same time. If only a few items have changed in the middle of the sequence, the beginning and end can be eliminated. This reduces not only the memory requirements for the matrix, but also the number of comparisons that must be done.

function LCS(X[1..m], Y[1..n])
    start := 1
    m_end := m
    n_end := n
    trim off the matching items at the beginning
    while start <= m_end and start <= n_end and X[start] = Y[start]
        start := start + 1
    trim off the matching items at the end
    while start <= m_end and start <= n_end and X[m_end] = Y[n_end]
        m_end := m_end - 1
        n_end := n_end - 1
    C = array(start-1..m_end, start-1..n_end)
    only loop over the items that have changed
    for i := start..m_end
        for j := start..n_end
            the algorithm continues as before ...

In the best case scenario, a sequence with no changes, this optimization would completely eliminate the need for the C matrix. In the worst case scenario, a change to the very first and last items in the sequence, only two additional comparisons are performed.


Reduce the comparison time


Most of the time taken by the naive algorithm is spent performing comparisons between items in the sequences. For textual sequences such as source code, you want to view lines as the sequence elements instead of single characters. This can mean comparisons of relatively long strings for each step in the algorithm. Two optimizations can be made that can help to reduce the time these comparisons consume.

Reduce strings to hashes


A hash function or checksum can be used to reduce the size of the strings in the sequences. That is, for source code where the average line is 60 or more characters long, the hash or checksum for that line might be only 8 to 40 characters long. Additionally, the randomized nature of hashes and checksums would guarantee that comparisons would short-circuit faster, as lines of source code will rarely be changed at the beginning. There are three primary drawbacks to this optimization. First, an amount of time needs to be spent beforehand to precompute the hashes for the two sequences. Second, additional memory needs to be allocated for the new hashed sequences. However, in comparison to the naive algorithm used here, both of these drawbacks are relatively minimal. The third drawback is that of collisions. Since the checksum or hash is not guaranteed to be unique, there is a small chance that two different items could be reduced to the same hash. This is unlikely in source code, but it is possible. A cryptographic hash would therefore be far better suited for this optimization, as its entropy is going to be significantly greater than that of a simple checksum. However, the setup and computational requirements of a cryptographic hash may not be worth it for small sequence lengths.

Reduce the required space


If only the length of the LCS is required, the matrix can be reduced to a 2 × min(n, m) matrix with ease, or (with a little more care) to a vector of length min(n, m) + 1, as the dynamic programming approach only needs the current and previous columns of the matrix. Hirschberg's algorithm allows the construction of the optimal sequence itself in the same quadratic time and linear space bounds.[6]
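A minimal Python sketch of the two-row reduction just described (length only; recovering an actual LCS in linear space requires Hirschberg's algorithm):

    def lcs_length_linear_space(X, Y):
        """Length of an LCS keeping only the previous and current rows of the DP table."""
        prev = [0] * (len(Y) + 1)
        for x in X:
            curr = [0] * (len(Y) + 1)
            for j, y in enumerate(Y, start=1):
                curr[j] = prev[j - 1] + 1 if x == y else max(curr[j - 1], prev[j])
            prev = curr
        return prev[-1]

    print(lcs_length_linear_space("XMJYAUZ", "MZJAWXU"))   # 4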

References
[1] David Maier (1978). "The Complexity of Some Problems on Subsequences and Supersequences". J. ACM (ACM Press) 25 (2): 322–336. doi:10.1145/322063.322075.
[2] L. Bergroth, H. Hakonen and T. Raita (2000). "A Survey of Longest Common Subsequence Algorithms". SPIRE (IEEE Computer Society) 00: 39–48. doi:10.1109/SPIRE.2000.878178. ISBN 0-7695-0746-8.
[3] Ronald I. Greenberg (2003-08-06). "Bounds on the Number of Longest Common Subsequences". arXiv:cs.DM/0301030.
[4] Xia, Xuhua (2007). Bioinformatics and the Cell: Modern Computational Approaches in Genomics, Proteomics and Transcriptomics. New York: Springer. p. 24. ISBN 0387713360.
[5] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein (2001). "15.4". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 350–355. ISBN 0-262-53196-8.
[6] Hirschberg, D. S. (1975). "A linear space algorithm for computing maximal common subsequences". Communications of the ACM 18 (6): 341–343. doi:10.1145/360825.360861.


External links
Dictionary of Algorithms and Data Structures: longest common subsequence (http://nist.gov/dads/HTML/longestCommonSubsequence.html)
GPL implementation of the LCS-Delta algorithm (http://www.markusstengel.de/text/en/i_4_1_5_3.html)
LCS lecture notes (http://www.ics.uci.edu/~eppstein/161/960229.html)
Dynamic programming and sequence alignment (http://www.ibm.com/developerworks/java/library/j-seqalign/index.html?ca=dgr-jw17dynamicjava&S_TACT=105AGX59&S_CMP=GR)

Longest increasing subsequence


In computer science, the longest increasing subsequence problem is to find a subsequence of a given sequence in which the subsequence elements are in sorted order, lowest to highest, and in which the subsequence is as long as possible. This subsequence is not necessarily contiguous, or unique. Longest increasing subsequences are studied in the context of various disciplines related to mathematics, including algorithmics, random matrix theory, representation theory, and physics.[1] The longest increasing subsequence problem is solvable in time O(n log n), where n denotes the length of the input sequence.[2]

Example
In the binary Van der Corput sequence 0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15, a longest increasing subsequence is 0, 2, 6, 9, 13, 15. This subsequence has length six; the input sequence has no seven-member increasing subsequences. The longest increasing subsequence in this example is not unique: for instance, 0, 4, 6, 9, 11, 15 is another increasing subsequence of equal length in the same input sequence.

Relations to other algorithmic problems


The longest increasing subsequence problem is closely related to the longest common subsequence problem, which has a quadratic time dynamic programming solution: the longest increasing subsequence of a sequence S is the longest common subsequence of S and T, where T is the result of sorting S. However, for the special case in which the input is a permutation of the integers 1, 2, ..., n, this approach can be made much more efficient, leading to time bounds of the form O(n log log n).[3] The largest clique in a permutation graph is defined by the longest decreasing subsequence of the permutation that defines the graph; the longest decreasing subsequence is equivalent in computational complexity, by negation of all numbers, to the longest increasing subsequence. Therefore, longest increasing subsequence algorithms can be used to solve the clique problem efficiently in permutation graphs.[4]


Efficient algorithms
The algorithm outlined below solves the longest increasing subsequence problem efficiently, using only arrays and binary searching. It processes the sequence elements in order, maintaining the longest increasing subsequence found so far. Denote the sequence values as X[1], X[2], etc. Then, after processing X[i], the algorithm will have stored values in two arrays:

M[j] stores the position k of the smallest value X[k] such that there is an increasing subsequence of length j ending at X[k] on the range k ≤ i (note we have j ≤ k ≤ i here).
P[k] stores the position of the predecessor of X[k] in the longest increasing subsequence ending at X[k].

In addition the algorithm stores a variable L representing the length of the longest increasing subsequence found so far. Note that, at any point in the algorithm, the sequence X[M[1]], X[M[2]], ..., X[M[L]] is nondecreasing. For, if there is an increasing subsequence of length i ending at X[M[i]], then there is also a subsequence of length i − 1 ending at a smaller value: namely the one ending at X[P[M[i]]]. Thus, we may do binary searches in this sequence in logarithmic time. The algorithm, then, proceeds as follows.

L = 0
for i = 1, 2, ... n:
    binary search for the largest positive j <= L such that X[M[j]] < X[i]
      (or set j = 0 if no such value exists)
    P[i] = M[j]
    if j == L or X[i] < X[M[j+1]]:
        M[j+1] = i
        L = max(L, j+1)

The result of this is the length of the longest sequence in L. The actual longest sequence can be found by backtracking through the P array: the last item of the longest sequence is in X[M[L]], the second-to-last item is in X[P[M[L]]], etc. Thus, the sequence has the form ..., X[P[P[M[L]]]], X[P[M[L]]], X[M[L]]. Because the algorithm performs a single binary search per sequence element, its total time can be expressed using Big O notation as O(n log n). Fredman (1975) discusses a variant of this algorithm, which he credits to Donald Knuth; in the variant that he studies, the algorithm tests whether each value X[i] can be used to extend the current longest increasing sequence, in constant time, prior to doing the binary search. With this modification, the algorithm uses at most n log2 n − n log2 log2 n + O(n) comparisons in the worst case, which is optimal for a comparison-based algorithm up to the constant factor in the O(n) term.[5]
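A Python transcription of the algorithm above, using the standard bisect module for the binary search (variable names differ slightly from the pseudocode); run on the Van der Corput example from the previous section it returns a subsequence of length six.

    from bisect import bisect_left

    def longest_increasing_subsequence(X):
        """Return one longest strictly increasing subsequence of X in O(n log n) time."""
        tail_vals = []            # smallest value ending an increasing run of each length
        tail_idx = []             # index in X of that value (plays the role of M)
        prev = [None] * len(X)    # predecessor links (plays the role of P)
        for i, x in enumerate(X):
            j = bisect_left(tail_vals, x)          # which run length does x extend?
            prev[i] = tail_idx[j - 1] if j > 0 else None
            if j == len(tail_vals):
                tail_vals.append(x)
                tail_idx.append(i)
            else:
                tail_vals[j] = x
                tail_idx[j] = i
        # walk the predecessor links back from the end of the longest run
        out, k = [], tail_idx[-1] if tail_idx else None
        while k is not None:
            out.append(X[k])
            k = prev[k]
        return out[::-1]

    seq = [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]
    lis = longest_increasing_subsequence(seq)
    print(len(lis), lis)   # 6 [0, 2, 6, 9, 11, 15]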

Length bounds
According to the Erdős–Szekeres theorem, any sequence of n² + 1 distinct integers has either an increasing or a decreasing subsequence of length n + 1.[6][7] For inputs in which each permutation of the input is equally likely, the expected length of the longest increasing subsequence is approximately 2√n.[8] In the limit as n approaches infinity, the length of the longest increasing subsequence of a randomly permuted sequence of n items has a distribution approaching the Tracy–Widom distribution, the distribution of the largest eigenvalue of a random matrix in the Gaussian unitary ensemble.[9]


Online algorithms
The longest increasing subsequence has also been studied in the setting of online algorithms, in which the elements of a permutation are presented one at a time to an algorithm that must decide whether to include or exclude each element, without knowledge of the ordering of later elements. In this variant of the problem, it is possible to devise a selection procedure that, when given a random permutation as input, will generate an increasing sequence with expected length approximately √(2n).[10] More precise results (including the variance) are known for the corresponding problem in the setting of a Poisson arrival process.[11]

References
[1] Aldous, David; Diaconis, Persi (1999), "Longest increasing subsequences: from patience sorting to the Baik–Deift–Johansson theorem", Bulletin of the American Mathematical Society 36 (4): 413–432, doi:10.1090/S0273-0979-99-00796-X.
[2] Schensted, C. (1961), "Longest increasing and decreasing subsequences" (http://books.google.com/?id=G3sZ2zG8AiMC&pg=PA179), Canadian Journal of Mathematics 13: 179–191, doi:10.4153/CJM-1961-015-3, ISSN 0008-414X, MR0121305.
[3] Hunt, J.; Szymanski, T. (1977), "A fast algorithm for computing longest common subsequences", Communications of the ACM 20 (5): 350–353, doi:10.1145/359581.359603.
[4] Golumbic, M. C. (1980), Algorithmic Graph Theory and Perfect Graphs, Computer Science and Applied Mathematics, Academic Press, p. 159.
[5] Fredman, Michael L. (1975), "On computing the length of longest increasing subsequences", Discrete Mathematics 11 (1): 29–35, doi:10.1016/0012-365X(75)90103-X.
[6] Erdős, Paul; Szekeres, George (1935), "A combinatorial problem in geometry" (http://www.numdam.org/item?id=CM_1935__2__463_0), Compositio Mathematica 2: 463–470.
[7] Steele, J. Michael (1995), "Variations on the monotone subsequence theme of Erdős and Szekeres" (http://www-stat.wharton.upenn.edu/~steele/Publications/PDF/VOTMSTOEAS.pdf), in Aldous, David; Diaconis, Persi; Spencer, Joel et al., Discrete Probability and Algorithms, IMA Volumes in Mathematics and its Applications, 72, Springer-Verlag, pp. 111–131.
[8] Vershik, A. M.; Kerov, C. V. (1977), "Asymptotics of the Plancherel measure of the symmetric group and a limiting form for Young tableaux", Dokl. Akad. Nauk USSR 233: 1024–1027.
[9] Baik, Jinho; Deift, Percy; Johansson, Kurt (1999), "On the distribution of the length of the longest increasing subsequence of random permutations", Journal of the American Mathematical Society 12 (4): 1119–1178, arXiv:math/9810105, doi:10.1090/S0894-0347-99-00307-0.
[10] Samuels, Stephen M.; Steele, J. Michael (1981), "Optimal Sequential Selection of a Monotone Sequence From a Random Sample", Ann. Probab. 9 (6): 937–947, doi:10.1214/aop/1176994265.
[11] Bruss, F. Thomas; Delbaen, Freddy (2004), "A central limit theorem for the optimal selection process for monotone subsequences of maximum expected length", Stochastic Processes and their Applications 114 (2): 287–311, doi:10.1016/j.spa.2004.09.002.

External links
Algorithmist's Longest Increasing Subsequence (http://www.algorithmist.com/index.php/ Longest_Increasing_Subsequence)

Longest repeated substring problem



In computer science, the longest repeated substring problem is the problem of finding the longest substring of a string that occurs at least twice. This problem can be solved in linear time and space by building a suffix tree for the string, and finding the deepest internal node in the tree. The string spelled by the edges from the root to such a node is a longest repeated substring. The problem of finding the longest substring with at least k occurrences can be solved by first preprocessing the tree to count the number of leaf descendants for each internal node, and then finding the deepest node with at least k leaf descendants.
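For small inputs the same answer can be obtained much more simply, at quadratic cost, by sorting the suffixes and comparing neighbours; the following Python sketch illustrates the problem but is not the linear-time suffix-tree method described above.

    def longest_repeated_substring(s):
        """Longest substring occurring at least twice, via sorted suffixes (quadratic worst case)."""
        suffixes = sorted(s[i:] for i in range(len(s)))
        best = ""
        for a, b in zip(suffixes, suffixes[1:]):
            # length of the common prefix of two adjacent sorted suffixes
            k = 0
            while k < min(len(a), len(b)) and a[k] == b[k]:
                k += 1
            if k > len(best):
                best = a[:k]
        return best

    print(longest_repeated_substring("banana"))   # "ana"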

External links
Allison, L. "Suffix Trees" [1]. Retrieved 2008-10-14.

References
[1] http://www.allisons.org/ll/AlgDS/Tree/Suffix/

Lottery mathematics
Lottery mathematics is used here to mean the calculation of the probabilities in a lottery game. The lottery game used in the examples below is one in which one selects 6 numbers from 49, and hopes that as many of those 6 as possible match the 6 that are randomly selected from the same pool of 49 numbers in the "draw".

Calculation explained in choosing 6 from 49


In a typical 6/49 game, six numbers are drawn from a range of 49 and if the six numbers on a ticket match the numbers drawn, the ticket holder is a jackpot winner; this is true no matter in which order the numbers appear. The probability of this happening is 1 in 13,983,816. This small chance of winning can be demonstrated as follows: Starting with a bag of 49 differently-numbered lottery balls, there are 49 different but equally likely ways of choosing the number of the first ball selected from the bag, and so there is a 1 in 49 chance of predicting the number correctly. When the draw comes to the second number, there are now only 48 balls left in the bag (because the balls already drawn are not returned to the bag) so there is now a 1 in 48 chance of predicting this number. Thus for each of the 49 ways of choosing the first number there are 48 different ways of choosing the second. This means that the probability of correctly predicting 2 numbers drawn from 49 in the correct order is calculated as 1 in 49 × 48. On drawing the third number there are only 47 ways of choosing the number; but of course we could have got to this point in any of 49 × 48 ways, so the chances of correctly predicting 3 numbers drawn from 49, again in the correct order, is 1 in 49 × 48 × 47. This continues until the sixth number has been drawn, giving the final calculation, 49 × 48 × 47 × 46 × 45 × 44, which can also be written as 49!/(49 − 6)!. This works out to a very large number, 10,068,347,520, which is much bigger than the 14 million stated above. The last step is to understand that the order of the 6 numbers is not significant. That is, if a ticket has the numbers 1, 2, 3, 4, 5, and 6, it wins as long as all the numbers 1 through 6 are drawn, no matter what order they come out in. Accordingly, given any set of 6 numbers, there are 6 × 5 × 4 × 3 × 2 × 1 = 6! or 720 orders in which they could be drawn. Dividing 10,068,347,520 by 720 gives 13,983,816, also written as 49!/(6! × (49 − 6)!), or more generally as the binomial coefficient C(n, k) = n!/(k! × (n − k)!).


This function is called the combination function; in a popular spreadsheet computer program, this function is implemented as COMBIN(n, k). For example, COMBIN(49, 6) (the calculation shown above) would return 13,983,816. For the rest of this article, we will use the notation C(n, k). "Combination" means the group of numbers selected, irrespective of the order in which they are drawn. The range of possible combinations for a given lottery can be referred to as the "number space". "Coverage" is the percentage of a lottery's number space that is in play for a given drawing.
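These counts are easy to check in Python (math.comb requires Python 3.8 or later):

    from math import comb, factorial

    print(factorial(49) // factorial(43))   # 10068347520, ordered ways to draw 6 of 49
    print(comb(49, 6))                      # 13983816 = 10068347520 / 720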

Odds of getting other possibilities in choosing 6 from 49


One must divide the number of combinations producing the given result by the total number of possible combinations (for example, C(49, 6) = 13,983,816, as explained in the section above). The numerator equates to the number of ways one can select the winning numbers multiplied by the number of ways one can select the losing numbers. For a score of n (for example, if 3 of your numbers match the 6 balls drawn, then n = 3), there are C(6, n) ways of selecting n winning numbers from the 6 winning numbers. This means that there are 6 − n losing numbers, which are chosen from the 43 losing numbers in C(43, 6 − n) ways. The total number of combinations giving that result is, as stated above, the first number multiplied by the second. The expression is therefore

C(6, n) × C(43, 6 − n) / C(49, 6).

This can be written in a general form for all lotteries as

C(K, B) × C(N − K, K − B) / C(N, K),

where N is the number of balls in the lottery, K is the number of balls in a single ticket, and B is the number of matching balls for a winning ticket. The generalisation of this formula is called the hypergeometric distribution (the HYPGEOMDIST() function in most popular spreadsheets). This gives the following results:
Score | Calculation                 | Exact probability | Approximate decimal probability | Approximate 1/probability
  0   | C(6,0) × C(43,6) / C(49,6)  | 435,461/998,844   | 0.436                           | 2.2938
  1   | C(6,1) × C(43,5) / C(49,6)  | 68,757/166,474    | 0.413                           | 2.4212
  2   | C(6,2) × C(43,4) / C(49,6)  | 44,075/332,948    | 0.132                           | 7.5541
  3   | C(6,3) × C(43,3) / C(49,6)  | 8,815/499,422     | 0.0177                          | 56.66
  4   | C(6,4) × C(43,2) / C(49,6)  | 645/665,896       | 0.000969                        | 1,032.4
  5   | C(6,5) × C(43,1) / C(49,6)  | 43/2,330,636      | 0.0000184                       | 54,200.8
  6   | C(6,6) × C(43,0) / C(49,6)  | 1/13,983,816      | 0.0000000715                    | 13,983,816
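The table above can be reproduced with a few lines of Python applying the hypergeometric formula; a sketch:

    from math import comb

    N, K = 49, 6                     # balls in the lottery, numbers on a ticket
    total = comb(N, K)
    for B in range(K + 1):           # B = number of matching balls
        ways = comb(K, B) * comb(N - K, K - B)
        # probability of scoring exactly B, and the corresponding "1 in ..." odds
        print(B, ways / total, total / ways)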

Powerballs And Bonus Balls


Many lotteries have a powerball (or "bonus ball"). If the powerball is drawn from a pool of numbers different from the main lottery, then simply multiply the odds by the number of powerballs. For example, in the 6 from 49 lottery, if there were 10 powerball numbers, then the odds of getting a score of 3 and the powerball would be 1 in 56.66 × 10, or 1 in 566.6 (the probability would be divided by 10, to give an exact value of 8,815/4,994,220). Another example of such a game is Mega Millions, albeit with different jackpot odds. Where more than 1 powerball is drawn from a separate pool of balls to the main lottery (for example, in the Euromillions game), the odds of the different possible powerball matching scores should be calculated using the method shown in the "other scores" section above (in other words, treat the powerballs like a mini-lottery in their own right), and then multiplied by the odds of achieving the required main-lottery score. If the powerball is drawn from the same pool of numbers as the main lottery, then, for a given target score, one must calculate the number of winning combinations, including the powerball. For games based on the Canadian lottery (such as the United Kingdom's lottery), after the 6 main balls are drawn, an extra ball is drawn from the same pool of balls, and this becomes the powerball (or "bonus ball"); there is an extra prize for matching 5 balls and the bonus ball. As described in the "other scores" section above, the number of ways one can obtain a score of 5 from a single ticket is $\binom{6}{5}\binom{43}{1}$, or 258. Since the number of remaining balls is 43, and the ticket has 1 unmatched number remaining, 1/43 of these 258 combinations will match the next ball drawn (the powerball). So, there are 258/43 = 6 ways of achieving it. Therefore, the odds of getting a score of 5 and the powerball are 6 in 13,983,816, that is, 1 in 2,330,636.

Of the 258 combinations that match 5 of the main 6 balls, in 42/43 of them the remaining number will not match the powerball, giving odds of 252/13,983,816 = 3/166,474 (approximately 1 in 55,491.33) for obtaining a score of 5 without matching the powerball.

Using the same principle, to calculate the odds of getting a score of 2 and the powerball, calculate the number of ways to get a score of 2 as $\binom{6}{2}\binom{43}{4} = 1{,}851{,}150$, then multiply this by the probability of one of the remaining four numbers matching the bonus ball, which is 4/43. Since 1,851,150 × (4/43) = 172,200, the probability of obtaining a score of 2 and the bonus ball is 172,200/13,983,816 = 1025/83,237. This gives approximate decimal odds of 81.2.

The general formula for matching B balls in an N choose K lottery with one bonus ball drawn from the same pool of N balls is:

$$\frac{K-B}{N-K}\cdot\frac{\binom{K}{B}\binom{N-K}{K-B}}{\binom{N}{K}}.$$

The general formula for matching B balls in an N choose K lottery with zero bonus balls from the same pool of N balls is:

$$\frac{N-K-(K-B)}{N-K}\cdot\frac{\binom{K}{B}\binom{N-K}{K-B}}{\binom{N}{K}}.$$

The general formula for matching B balls in an N choose K lottery with one bonus ball from a separate pool of P balls is:

$$\frac{1}{P}\cdot\frac{\binom{K}{B}\binom{N-K}{K-B}}{\binom{N}{K}}.$$

The general formula for matching B balls in an N choose K lottery with no bonus ball from a separate pool of P balls is:

$$\frac{P-1}{P}\cdot\frac{\binom{K}{B}\binom{N-K}{K-B}}{\binom{N}{K}}.$$
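A minimal Python check of the "same pool" bonus-ball case derived above, assuming a 6-from-49 game whose bonus ball is drawn from the 43 remaining balls:

    from math import comb
    from fractions import Fraction

    N, K = 49, 6

    def p_with_bonus(b):
        # probability of matching b main balls and the bonus ball
        base = Fraction(comb(K, b) * comb(N - K, K - b), comb(N, K))
        return base * Fraction(K - b, N - K)

    print(p_with_bonus(5))  # 1/2330636, as derived above
    print(p_with_bonus(2))  # 1025/83237, i.e. roughly 1 in 81.2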

Minimum number of tickets for a match


It is a hard, and in most cases open, mathematical problem to calculate the minimum number of tickets one needs to purchase to guarantee that at least one of these tickets matches at least 2 numbers. In the 5-from-90 lotto, the minimum number of tickets that can guarantee a ticket with at least 2 matches is 100.[1]

References
[1] Z. Füredi, G. J. Székely, and Z. Zubor (1996). "On the lottery problem". Journal of Combinatorial Designs (Wiley) 4 (1): 5–10. (http://onlinelibrary.wiley.com/doi/10.1002/(SICI)1520-6610(1996)4:1<5::AID-JCD2>3.0.CO;2-J/abstract)

External links
Lottery odds calculator (http://www.easysurf.cc/lotodd.htm)
Euler's Analysis of the Genoese Lottery (http://mathdl.maa.org/convergence/1/?pa=content&sa=viewDocument&nodeId=217&bodyId=146) at Convergence (http://mathdl.maa.org/convergence/1/)
Lottery Mathematics (http://probability.infarom.ro/lottery.html)


Lovász local lemma


In probability theory, if a large number of events are all independent of one another and each has probability less than 1, then there is a positive (possibly small) probability that none of the events will occur. The Lovász local lemma (a weaker version was proved in 1975 by László Lovász and Paul Erdős in the article Problems and results on 3-chromatic hypergraphs and some related questions) allows one to relax the independence condition slightly: As long as the events are "mostly" independent from one another and aren't individually too likely, then there will still be a positive probability that none of them occurs. It is most commonly used in the probabilistic method, in particular to give existence proofs. There are several different versions of the lemma. The simplest and most frequently used is the symmetric version given below. For other versions, see (Alon & Spencer 2000).

Statements of the Lemma (symmetric version)


Let A1, A2, ..., Ak be a series of events such that each event occurs with probability at most p and such that each event is independent of all the other events except for at most d of them. Lovász and Erdős showed in 1973 (published 1975) that if

$$4pd \le 1$$

then there is a nonzero probability that none of the events occurs. In 1977 Joel Spencer published[1] an improvement to this result, which he attributed to Lovász: there is a nonzero probability that none of the events occurs already if

$$e\,p\,(d+1) \le 1$$

(where e = 2.718... is the base of natural logarithms). This version today is usually referred to as the Lovász local lemma. Then Shearer[2] showed that this again can be slightly strengthened by weakening the hypothesis to $p \le \frac{(d-1)^{d-1}}{d^{d}}$ (except if d = 1, in which case the condition becomes p < 1/2); this is the optimal threshold, and it implies that the bound $e\,p\,d \le 1$ is also sufficient.

Asymmetric Lovász local lemma


A statement of the asymmetric version (which allows for events with different probability bounds) is as follows:

Let $\mathcal{A} = \{A_1, \ldots, A_n\}$ be a finite set of events in the probability space $\Omega$. For $A \in \mathcal{A}$, let $\Gamma(A)$ denote a subset of $\mathcal{A}$ such that $A$ is independent from the collection of events $\mathcal{A} \setminus (\{A\} \cup \Gamma(A))$. If there exists an assignment of reals $x : \mathcal{A} \to (0,1)$ to the events such that

$$\forall A \in \mathcal{A} : \quad \Pr(A) \le x(A) \prod_{B \in \Gamma(A)} \bigl(1 - x(B)\bigr),$$

then the probability of avoiding all events in $\mathcal{A}$ is positive; in particular,

$$\Pr\bigl(\overline{A_1} \wedge \cdots \wedge \overline{A_n}\bigr) \ge \prod_{A \in \mathcal{A}} \bigl(1 - x(A)\bigr).$$

The symmetric version follows immediately from the asymmetric version by setting $x(A) = \frac{1}{d+1}$ for all $A$ to get the sufficient condition

$$\Pr(A) \le \frac{1}{d+1}\left(1 - \frac{1}{d+1}\right)^{d},$$

since $\left(1 - \frac{1}{d+1}\right)^{d} \ge \frac{1}{e}$.

Constructive versus non-constructive


Note that, as is often the case with probabilistic arguments, this theorem is nonconstructive and gives no method of determining an explicit element of the probability space in which no event occurs. However, algorithmic versions of the local lemma with stronger preconditions are also known (Beck 1991; Czumaj and Scheideler 2000). More recently, a constructive version of the local lemma was given by Robin Moser and Gábor Tardos requiring no stronger preconditions.

Example
Suppose 11n points are placed around a circle and colored with n different colors, 11 points colored with each color. Then there must be a set of n points containing one point of each color but not containing any pair of adjacent points. To see this, imagine picking a point of each color randomly, with all points equally likely (i.e., having probability 1/11) to be chosen. The 11n different events we want to avoid correspond to the 11n pairs of adjacent points on the circle. For each pair our odds of picking both points in that pair is at most 1/121 (exactly 1/121 if the two points are of different colors, otherwise 0). This will be our p. Whether a given pair (a,b) of points is chosen depends only on what happens in the colors of a and b, and not at all on whether any other collection of points in the other n − 2 colors are chosen. This implies the event "a and b are both chosen" is dependent only on those pairs of adjacent points which share a color either with a or with b. There are 11 points on the circle sharing a color with a (including a itself), each of which is involved with 2 pairs. This means there are 21 pairs other than (a,b) which include the same color as a, and the same holds true for b. The worst that can happen is that these two sets are disjoint, so we can take d = 42 in the lemma. This gives

$$e\,p\,(d+1) = e \cdot \frac{1}{121} \cdot 43 \approx 0.966 \le 1.$$

By the local lemma, there is a positive probability that none of the bad events occur, meaning that our set contains no pair of adjacent points. This implies that a set satisfying our conditions must exist.
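A small numerical sketch of this example in Python (the colouring used, point i gets colour i mod n, is an arbitrary choice for illustration):

    import math
    import random

    # Check of the symmetric condition e * p * (d + 1) <= 1 with p = 1/121, d = 42.
    print(math.e * (1 / 121) * 43)  # about 0.966, so the lemma applies

    # Random search for a valid selection when n = 5 (55 points on the circle).
    n = 5
    points = list(range(11 * n))
    groups = [[pt for pt in points if pt % n == c] for c in range(n)]  # 11 points per colour

    while True:
        chosen = {random.choice(g) for g in groups}
        # no two chosen points adjacent on the circle
        if all((pt + 1) % (11 * n) not in chosen for pt in chosen):
            print(sorted(chosen))
            break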

Notes
[1] J. Spencer. Asymptotic lower bounds for Ramsey functions. Discrete Mathematics, 20:69–76, 1977.
[2] Shearer, J. On a problem of Spencer. Combinatorica 5(3):241–245, 1985, http://dx.doi.org/10.1007/BF02579368.

References
Alon, Noga; Spencer, Joel H. (2000). The probabilistic method (2nd ed.). New York: Wiley-Interscience. ISBN 0-471-37046-0.
Beck, J. (1991). "An algorithmic approach to the Lovász local lemma, I". Random Structures and Algorithms 2 (4): 343–365. doi:10.1002/rsa.3240020402.
Czumaj, Artur; Scheideler, Christian (2000). "Coloring nonuniform hypergraphs: A new algorithmic approach to the general Lovász local lemma". Random Structures & Algorithms 17 (3–4): 213–237. doi:10.1002/1098-2418(200010/12)17:3/4<213::AID-RSA3>3.0.CO;2-Y.
Erdős, Paul; Lovász, László (1975). "Problems and results on 3-chromatic hypergraphs and some related questions" (http://www.cs.elte.hu/~lovasz/scans/LocalLem.pdf). In A. Hajnal, R. Rado, and V. T. Sós, eds. Infinite and Finite Sets (to Paul Erdős on his 60th birthday). II. North-Holland. pp. 609–627.
Moser, Robin A. (2008). "A constructive proof of the Lovász Local Lemma". arXiv:0810.4812 [cs.DS].


Lubell–Yamamoto–Meshalkin inequality
In combinatorial mathematics, the Lubell–Yamamoto–Meshalkin inequality, more commonly known as the LYM inequality, is an inequality on the sizes of sets in a Sperner family, proved by Bollobás (1965), Lubell (1966), Meshalkin (1963), and Yamamoto (1954). It is named for the initials of three of its discoverers. This inequality has many applications in combinatorics; in particular, it can be used to prove Sperner's theorem. The term is also used for similar inequalities.

Statement of the theorem


Let U be an n-element set, let A be a family of subsets of U such that no set in A is a subset of another set in A, and let a_k denote the number of sets of size k in A. Then

$$\sum_{k=0}^{n} \frac{a_k}{\binom{n}{k}} \le 1.$$

Lubell's proof
Lubell (1966) proves the Lubell–Yamamoto–Meshalkin inequality by a double counting argument in which he counts the permutations of U in two different ways. First, by counting all permutations of U directly, one finds that there are n! of them. But secondly, one can generate a permutation of U by selecting a set S in A and concatenating a permutation of the elements of S with a permutation of the nonmembers. If |S| = k, it will be associated in this way with k!(n − k)! permutations. Each permutation can only be associated with a single set in A, for if two prefixes of a permutation both formed sets in A then one would be a subset of the other. Therefore, the number of permutations that can be generated by this procedure is

$$\sum_{S \in A} |S|!\,(n - |S|)! = \sum_{k=0}^{n} a_k\, k!\,(n-k)!.$$

Since this number is at most the total number of all permutations,

$$\sum_{k=0}^{n} a_k\, k!\,(n-k)! \le n!.$$
Finally dividing the above inequality by n! leads to the result.
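An illustrative Python check of the inequality for one antichain on a 4-element set (all 2-element subsets), where the bound is attained with equality:

    from math import comb
    from fractions import Fraction
    from itertools import combinations

    n = 4
    family = [frozenset(s) for s in combinations(range(n), 2)]

    # the family is an antichain: no member is a proper subset of another
    assert all(not (a < b) for a in family for b in family)

    lym_sum = sum(Fraction(1, comb(n, len(s))) for s in family)
    print(lym_sum)  # 1, so the LYM inequality holds with equality here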

References
Bollobás, B. (1965), "On generalized graphs", Acta Mathematica Academiae Scientiarum Hungaricae 16 (3–4): 447–452, doi:10.1007/BF01904851, MR0183653.
Lubell, D. (1966), "A short proof of Sperner's lemma", Journal of Combinatorial Theory 1 (2): 299, doi:10.1016/S0021-9800(66)80035-2, MR0194348.
Meshalkin, L. D. (1963), "Generalization of Sperner's theorem on the number of subsets of a finite set", Theory of Probability and its Applications 8 (2): 203–204, doi:10.1137/1108023, MR0150049.
Yamamoto, Koichi (1954), "Logarithmic order of free distributive lattice", Journal of the Mathematical Society of Japan 6: 343–353, MR0067086.


M. Lothaire
M. Lothaire is the pseudonym of a group of mathematicians, many of whom were students of Marcel-Paul Schützenberger. The name is used as the author of several of their joint books about combinatorics on words. He is named for Lothair I. Mathematicians in the group have included Jean Berstel, Véronique Bruyère, Julien Cassaigne, Christian Choffrut, Robert Cori, Jacques Désarménien, Volker Diekert, Dominique Foata, Christiane Frougny, Guo-Niu Han, Tero Harju, Juhani Karhumäki, Alain Lascoux, Bernard Leclerc, Aldo De Luca, Filippo Mignosi, Dominique Perrin, Jean-Éric Pin, Giuseppe Pirillo, Wojciech Plandowski, Antonio Restivo, Christophe Reutenauer, Jacques Sakarovitch, Marcel-Paul Schützenberger, Patrice Séébold, Imre Simon, Jean-Yves Thibon, and Stefano Varricchio.

Publications
Lothaire, M. (1983), Combinatorics on words [1], Encyclopedia of Mathematics and its Applications, 17, Addison-Wesley Publishing Co., Reading, Mass., ISBN978-0-201-13516-9, MR675953 Lothaire, M. (2002), Algebraic combinatorics on words [2], Encyclopedia of Mathematics and its Applications, 90, Cambridge University Press, ISBN978-0-521-81220-7, MR1905123 Lothaire, M. (2005), Applied combinatorics on words [3], Encyclopedia of Mathematics and its Applications, 105, Cambridge University Press, ISBN978-0-521-84802-2; 978-0-521-84802-2, MR2165687

External links
Lothaire's books [4]

References
[1] http://books.google.com/books?id=UV9_3plEr8wC
[2] http://www-igm.univ-mlv.fr/~berstel/Lothaire/AlgCWContents.html
[3] http://www-igm.univ-mlv.fr/~berstel/Lothaire/AppliedCW/AppCWContents.html
[4] http://www-igm.univ-mlv.fr/~berstel/Lothaire/


Markov spectrum
In mathematics, the Markov spectrum devised by Andrey Markov is a complicated set of real numbers arising in the theory of certain quadratic forms, and containing all the real numbers larger than Freiman's constant.[1]

Context
Starting from Hurwitz's theorem on Diophantine approximation, that any real number ξ has a sequence of rational approximations m/n tending to it with

$$\left| \xi - \frac{m}{n} \right| < \frac{1}{\sqrt{5}\, n^{2}},$$

it is possible to ask, for each value of 1/c with 1/c ≥ √5, about the existence of some ξ for which

$$\left| \xi - \frac{m}{n} \right| < \frac{c}{n^{2}}$$

for such a sequence, for which c is the best possible (maximal) value. Such 1/c make up the Lagrange spectrum, a set of real numbers at least √5 (which is the smallest value of the spectrum). The formulation with the reciprocal is awkward, but the traditional definition invites it; looking at the set of c instead allows a definition by means of an inferior limit. For that, consider

$$\liminf_{n \to \infty} n^{2} \left| \xi - \frac{m}{n} \right|,$$

where m is chosen as an integer function of n to make the difference minimal. This is a function of ξ, and the reciprocal of the Lagrange spectrum is the range of values it takes on irrational numbers.

The initial part of the Lagrange spectrum, namely the part lying in the interval [√5, 3), is associated with some binary quadratic forms that are indefinite (so factoring into two real linear forms). The Markov spectrum deals directly with the phenomena associated to those quadratic forms. Freiman's constant is the name given to the end of the last gap in the Lagrange spectrum, namely

$$F = \frac{2221564096 + 283748\sqrt{462}}{491993569} \approx 4.5278295661\ldots$$

Real numbers greater than F are also members of the Markov spectrum.[2]

References
[1] Markov Spectrum (http://mathworld.wolfram.com/MarkovSpectrum.html), Weisstein, Eric W., from MathWorld, A Wolfram Web Resource, accessed 26 Aug 2008
[2] Freiman's Constant (http://mathworld.wolfram.com/FreimansConstant.html), Weisstein, Eric W., from MathWorld, A Wolfram Web Resource, accessed 26 Aug 2008

Further reading
Conway, J. H. and Guy, R. K. The Book of Numbers. New York: Springer-Verlag, pp. 188–189, 1996.
Cusick, T. W. and Flahive, M. E. The Markov and Lagrange Spectra. Providence, RI: Amer. Math. Soc., 1989.


External links
Hazewinkel, Michiel, ed. (2001), "Markov spectrum problem" (http://www.encyclopediaofmath.org/index.php?title=m/m062540), Encyclopedia of Mathematics, Springer, ISBN 978-1556080104

Meander (mathematics)
In mathematics, a meander or closed meander is a self-avoiding closed curve which intersects a line a number of times. Intuitively, a meander can be viewed as a road crossing a river through a number of bridges.

Meander
Given a fixed oriented line L in the Euclidean plane R2, a meander of order n is a non-self-intersecting closed curve in R2 which transversally intersects the line at 2n points for some positive integer n. Two meanders are said to be equivalent if they are homeomorphic in the plane.

Examples
The meander of order 1 intersects the line twice:

The meanders of order 2 intersect the line four times:

Meandric numbers
The number of distinct meanders of order n is the meandric number Mn. The first fifteen meandric numbers are given below (sequence A005315 in OEIS). M1 = 1 M2 = 2 M3 = 8 M4 = 42 M5 = 262 M6 = 1828 M7 = 13820 M8 = 110954 M9 = 933458 M10 = 8152860 M11 = 73424650 M12 = 678390116 M13 = 6405031050 M14 = 61606881612 M15 = 602188541928


Open meander
Given a fixed oriented line L in the Euclidean plane R2, an open meander of order n is a non-self-intersecting oriented curve in R2 which transversally intersects the line at n points for some positive integer n. Two open meanders are said to be equivalent if they are homeomorphic in the plane.

Examples
The open meander of order 1 intersects the line once:

The open meander of order 2 intersects the line twice:

Open meandric numbers


The number of distinct open meanders of order n is the open meandric number mn. The first fifteen open meandric numbers are given below (sequence A005316 in OEIS). m1 = 1 m2 = 1 m3 = 2 m4 = 3 m5 = 8 m6 = 14 m7 = 42 m8 = 81 m9 = 262 m10 = 538 m11 = 1828 m12 = 3926 m13 = 13820 m14 = 30694 m15 = 110954


Semi-meander
Given a fixed oriented ray R in the Euclidean plane R2, a semi-meander of order n is a non-self-intersecting closed curve in R2 which transversally intersects the ray at n points for some positive integer n. Two semi-meanders are said to be equivalent if they are homeomorphic in the plane.

Examples
The semi-meander of order 1 intersects the ray once:

The semi-meander of order 2 intersects the ray twice:

Semi-meandric numbers
The number of distinct semi-meanders of order n is the semi-meandric number Mn (usually denoted with an overline instead of an underline). The first fifteen semi-meandric numbers are given below (sequence A000682 in OEIS). M1 = 1 M2 = 1 M3 = 2 M4 = 4 M5 = 10 M6 = 24 M7 = 66 M8 = 174 M9 = 504 M10 = 1406 M11 = 4210 M12 = 12198 M13 = 37378 M14 = 111278 M15 = 346846


Properties of meandric numbers


There is an injective function from meandric to open meandric numbers: M_n = m_{2n−1}.
Each meandric number can be bounded by semi-meandric numbers: $\overline{M}_n \le M_n \le \overline{M}_{2n}$.
For n > 1, meandric numbers are even: M_n ≡ 0 (mod 2).

External links
"Approaches to the Enumerative Theory of Meanders" by Michael La Croix [1]

References
[1] http://www.math.uwaterloo.ca/~malacroi/Latex/Meanders.pdf

Method of distinguished element


In enumerative combinatorial mathematics, identities are sometimes established by arguments that rely on singling out one "distinguished element" of a set.

Examples
The binomial coefficient $\binom{n}{k}$ is the number of size-k subsets of a size-n set. A basic identity, one of whose consequences is that these are precisely the numbers appearing in Pascal's triangle, states that:

$$\binom{n+1}{k} = \binom{n}{k-1} + \binom{n}{k}.$$

Proof: In a size-(n+1) set, choose one distinguished element. The set of all size-k subsets contains: (1) all size-k subsets that do contain the distinguished element, and (2) all size-k subsets that do not contain the distinguished element. If a size-k subset of a size-(n+1) set does contain the distinguished element, then its other k − 1 elements are chosen from among the other n elements of our size-(n+1) set. The number of ways to choose those is therefore $\binom{n}{k-1}$. If a size-k subset does not contain the distinguished element, then all of its k members are chosen from among the other n "non-distinguished" elements. The number of ways to choose those is therefore $\binom{n}{k}$.

The number of subsets of any size-n set is 2^n. Proof: We use mathematical induction. The basis for induction is the truth of this proposition in case n = 0. The empty set has 0 members and 1 subset, and 2^0 = 1. The induction hypothesis is the proposition in case n; we use it to prove case n + 1. In a size-(n+1) set, choose a distinguished element. Each subset either contains the distinguished element or does not. If a subset contains the distinguished element, then its remaining elements are chosen from among the other n elements. By the induction hypothesis, the number of ways to do that is 2^n. If a subset does not contain the distinguished element, then it is a subset of the set of all non-distinguished elements. By the induction hypothesis, the number of such subsets is 2^n. Finally, the whole

list of subsets of our size-(n+1) set contains 2^n + 2^n = 2^(n+1) elements.

Let B_n be the nth Bell number, i.e., the number of partitions of a set of n members. Let C_n be the total number of "parts" (or "blocks", as combinatorialists often call them) among all partitions of that set. For example, the partitions of the size-3 set {a, b, c} may be written thus:

{a} {b} {c}
{a, b} {c}
{a, c} {b}
{b, c} {a}
{a, b, c}

We see 5 partitions, containing 10 blocks, so B_3 = 5 and C_3 = 10. An identity states:

$$B_{n+1} = B_n + C_n.$$

Proof: In a size-(n+1) set, choose a distinguished element. In each partition of our size-(n+1) set, either the distinguished element is a "singleton", i.e., the set containing only the distinguished element is one of the blocks, or the distinguished element belongs to a larger block. If the distinguished element is a singleton, then deletion of the distinguished element leaves a partition of the set containing the n non-distinguished elements. There are B_n ways to do that. If the distinguished element belongs to a larger block, then its deletion leaves a block in a partition of the set containing the n non-distinguished elements. There are C_n such blocks.
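A brute-force Python check of this identity for small n (the helper names are ours):

    # B(n) = number of set partitions of an n-set; C(n) = total number of blocks
    # over all such partitions.  The identity asserts B(n+1) = B(n) + C(n).
    def partitions(elements):
        if not elements:
            yield []
            return
        first, rest = elements[0], elements[1:]
        for smaller in partitions(rest):
            yield [[first]] + smaller                 # `first` as a singleton block
            for i in range(len(smaller)):             # or inserted into an existing block
                yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]

    def B(n):
        return sum(1 for _ in partitions(list(range(n))))

    def C(n):
        return sum(len(p) for p in partitions(list(range(n))))

    for n in range(1, 7):
        assert B(n + 1) == B(n) + C(n)
    print([B(n) for n in range(1, 7)])  # [1, 2, 5, 15, 52, 203]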

Mnev's universality theorem


In algebraic geometry, Mnev's universality theorem is a result which can be used to represent algebraic (or semialgebraic) varieties as realizations of oriented matroids, a notion of combinatorics.

Oriented matroids
For the purposes of Mnev's universality, an oriented matroid of a finite subset $S \subset \mathbb{R}^n$ is a list of all partitions of points in S induced by hyperplanes in $\mathbb{R}^n$. In particular, the structure of an oriented matroid contains full information on the incidence relations in S, inducing on S a matroid structure. The realization space of an oriented matroid is the space of all configurations of points $S \subset \mathbb{R}^n$ inducing the same oriented matroid structure on S.

Stable equivalence of semialgebraic sets


For the purposes of Mnev's universality, the stable equivalence of semialgebraic sets is defined as follows.

Let U, V be semialgebraic sets, obtained as a disconnected union of connected semialgebraic sets,

$$U = U_1 \sqcup \cdots \sqcup U_k, \qquad V = V_1 \sqcup \cdots \sqcup V_k.$$

We say that U and V are rationally equivalent if there exist homeomorphisms $U_i \to V_i$ defined by rational maps.

Let $U \subset \mathbb{R}^{n+d}$ and $V \subset \mathbb{R}^{n}$ be semialgebraic sets,

$$U = U_1 \sqcup \cdots \sqcup U_k, \qquad V = V_1 \sqcup \cdots \sqcup V_k,$$

with $U_i$ mapping to $V_i$ under the natural projection $\pi$ deleting the last d coordinates. We say that $\pi : U \to V$ is a stable projection if there exist integer polynomial maps $\varphi_1, \ldots, \varphi_\ell,\ \psi_1, \ldots, \psi_m : \mathbb{R}^n \to (\mathbb{R}^d)^{*}$ such that

$$U_i = \bigl\{ (v, v') \in \mathbb{R}^{n+d} : v \in V_i,\ \langle \varphi_a(v), v' \rangle > 0 \text{ and } \langle \psi_b(v), v' \rangle = 0 \text{ for all } a, b \bigr\}.$$

The stable equivalence is an equivalence relation on semialgebraic subsets generated by stable projections and rational equivalence.

Mnev's Universality theorem


THEOREM (Mnev's universality theorem). Let V be a semialgebraic subset in $\mathbb{R}^n$ defined over the integers. Then V is stably equivalent to the realization space of a certain oriented matroid.

History
Mnev's universality theorem was discovered by Nikolai Mnev in his Ph. D. thesis. It has numerous applications in algebraic geometry, due to Laurent Lafforgue, Ravi Vakil and others, allowing one to construct moduli spaces with arbitrarily bad behaviour.

Notes
Universality Theorem [1], a lecture of Nikolai Mnev (in Russian). N. E. Mnev, The universality theorems on the classification problem of configuration varieties and convex polytopes varieties (pp.527543), in "Topology and geometry: Rohlin Seminar." Edited by O. Ya. Viro. Lecture Notes in Mathematics, 1346. Springer-Verlag, Berlin, 1988. R. Vakil "Murphy's Law in algebraic geometry: Badly-behaved deformation spaces" [2], Invent. math. 164, 569-590 (2006). J. Richter-Gebert, Mnev's Universality Theorem Revisited [3], Seminaire Lotharingien de Combinatoire, B34h (1995), 15pp. J. Richter-Gebert The universality theorems for oriented matroids and polytopes [4], Contemporary Mathematics 223, 269-292 (1999).

References
[1] http://club.pdmi.ras.ru/~panina/9.pdf
[2] http://arxiv.org/abs/math/0411469
[3] http://www.emis.de/journals/SLC/wpapers/s34berlin.html
[4] http://www-m10.mathematik.tu-muenchen.de/~richter/Papers/PDF/23_UniversalityTheorems.pdf


Multi-index notation
The mathematical notation of multi-indices simplifies formulae used in multivariable calculus, partial differential equations and the theory of distributions, by generalising the concept of an integer index to an ordered tuple of indices.

Multi-index notation
An n-dimensional multi-index is an n-tuple

$$\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)$$

of non-negative integers. For multi-indices $\alpha, \beta \in \mathbb{N}_0^n$ and $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$ one defines:

Componentwise sum and difference
$$\alpha \pm \beta = (\alpha_1 \pm \beta_1, \alpha_2 \pm \beta_2, \ldots, \alpha_n \pm \beta_n)$$

Partial order
$$\alpha \le \beta \quad\Longleftrightarrow\quad \alpha_i \le \beta_i \ \text{ for all } i \in \{1, \ldots, n\}$$

Sum of components (absolute value)
$$|\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_n$$

Factorial
$$\alpha! = \alpha_1! \cdot \alpha_2! \cdots \alpha_n!$$

Binomial coefficient
$$\binom{\alpha}{\beta} = \binom{\alpha_1}{\beta_1}\binom{\alpha_2}{\beta_2}\cdots\binom{\alpha_n}{\beta_n}$$

Multinomial coefficient
$$\binom{k}{\alpha} = \frac{k!}{\alpha_1!\,\alpha_2!\cdots\alpha_n!} = \frac{k!}{\alpha!}, \quad \text{where } k = |\alpha|$$

Power
$$x^{\alpha} = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}$$

Higher-order partial derivative
$$\partial^{\alpha} = \partial_1^{\alpha_1} \partial_2^{\alpha_2} \cdots \partial_n^{\alpha_n}, \quad \text{where } \partial_i^{\alpha_i} = \frac{\partial^{\alpha_i}}{\partial x_i^{\alpha_i}}$$
Some applications
The multi-index notation allows the extension of many formulae from elementary calculus to the corresponding multi-variable case. Below are some examples. In all the following, (or ), , and (or ).

Multinomial theorem

$$\left( \sum_{i=1}^{n} x_i \right)^{k} = \sum_{|\alpha| = k} \binom{k}{\alpha} x^{\alpha}$$

Multi-binomial theorem

$$(x + y)^{\alpha} = \sum_{\nu \le \alpha} \binom{\alpha}{\nu} x^{\nu} y^{\alpha - \nu}$$
Leibniz formula
For smooth functions f and g,

$$\partial^{\alpha}(fg) = \sum_{\nu \le \alpha} \binom{\alpha}{\nu} (\partial^{\nu} f)\,(\partial^{\alpha - \nu} g).$$
Taylor series
For an analytic function f in n variables one has

$$f(x+h) = \sum_{\alpha \in \mathbb{N}_0^n} \frac{\partial^{\alpha} f(x)}{\alpha!}\, h^{\alpha}.$$

In fact, for a smooth enough function, we have the similar Taylor expansion

$$f(x+h) = \sum_{|\alpha| \le N} \frac{\partial^{\alpha} f(x)}{\alpha!}\, h^{\alpha} + R_{N}(x, h),$$

where the last term (the remainder) depends on the exact version of Taylor's formula. For instance, for the Cauchy formula (with integral remainder), one gets

$$R_{N}(x, h) = (N+1) \sum_{|\alpha| = N+1} \frac{h^{\alpha}}{\alpha!} \int_0^1 (1 - t)^{N}\, \partial^{\alpha} f(x + t h)\, dt.$$
General partial differential operator


A formal N-th order partial differential operator in n variables is written as

$$P(\partial) = \sum_{|\alpha| \le N} a_{\alpha}(x)\, \partial^{\alpha}.$$
Integration by parts
For smooth functions with compact support in a bounded domain $\Omega \subset \mathbb{R}^n$ one has

$$\int_{\Omega} u\, (\partial^{\alpha} v)\, dx = (-1)^{|\alpha|} \int_{\Omega} (\partial^{\alpha} u)\, v\, dx.$$
This formula is used for the definition of distributions and weak derivatives.


An example theorem
If $\alpha, \beta \in \mathbb{N}_0^n$ are multi-indices and $x = (x_1, \ldots, x_n)$, then

$$\partial^{\alpha} x^{\beta} = \begin{cases} \dfrac{\beta!}{(\beta - \alpha)!}\, x^{\beta - \alpha} & \text{if } \alpha \le \beta, \\ 0 & \text{otherwise.} \end{cases}$$

Proof
The proof follows from the power rule for the ordinary derivative; if $\alpha$ and $\beta$ are in {0, 1, 2, ...}, then

$$\frac{d^{\alpha}}{dx^{\alpha}} x^{\beta} = \begin{cases} \dfrac{\beta!}{(\beta - \alpha)!}\, x^{\beta - \alpha} & \text{if } \alpha \le \beta, \\ 0 & \text{otherwise.} \end{cases} \qquad (1)$$

Suppose $\alpha = (\alpha_1, \ldots, \alpha_n)$, $\beta = (\beta_1, \ldots, \beta_n)$, and $x = (x_1, \ldots, x_n)$. Then we have that

$$\partial^{\alpha} x^{\beta} = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}\, x_1^{\beta_1} \cdots x_n^{\beta_n} = \frac{\partial^{\alpha_1}}{\partial x_1^{\alpha_1}} x_1^{\beta_1} \cdots \frac{\partial^{\alpha_n}}{\partial x_n^{\alpha_n}} x_n^{\beta_n}.$$

For each i in {1, ..., n}, the function $x_i^{\beta_i}$ only depends on $x_i$. In the above, each partial differentiation $\partial/\partial x_i$ therefore reduces to the corresponding ordinary differentiation $d/dx_i$. Hence, from equation (1), it follows that $\partial^{\alpha} x^{\beta}$ vanishes if $\alpha_i > \beta_i$ for at least one i in {1, ..., n}. If this is not the case, i.e., if $\alpha \le \beta$ as multi-indices, then

$$\frac{d^{\alpha_i}}{dx_i^{\alpha_i}} x_i^{\beta_i} = \frac{\beta_i!}{(\beta_i - \alpha_i)!}\, x_i^{\beta_i - \alpha_i}$$

for each i, and the theorem follows.
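The theorem can be confirmed symbolically for a concrete pair of multi-indices; a sketch assuming sympy is available:

    import sympy as sp

    x1, x2 = sp.symbols('x1 x2')
    alpha, beta = (1, 2), (2, 3)

    # left-hand side: the multi-index derivative of x^beta
    lhs = sp.diff(x1**beta[0] * x2**beta[1], x1, alpha[0], x2, alpha[1])
    # right-hand side: (beta!/(beta-alpha)!) x^(beta-alpha)
    coeff = (sp.factorial(beta[0]) / sp.factorial(beta[0] - alpha[0])
             * sp.factorial(beta[1]) / sp.factorial(beta[1] - alpha[1]))
    rhs = coeff * x1**(beta[0] - alpha[0]) * x2**(beta[1] - alpha[1])
    print(sp.simplify(lhs - rhs))  # 0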

References
Saint Raymond, Xavier (1991). Elementary Introduction to the Theory of Pseudodifferential Operators. Chap 1.1 . CRC Press. ISBN 0-8493-7158-9 This article incorporates material from multi-index derivative of a power on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.


Natural density
In number theory, asymptotic density (or natural density or arithmetic density) is one of the possibilities to measure how large a subset of the set of natural numbers is. Intuitively, we think that there are "more" positive integers than perfect squares, since every perfect square is already positive, and many other positive integers exist besides. However, the set of positive integers is not in fact "bigger" than the set of perfect squares: both sets are infinite and countable and can therefore be put in one-to-one correspondence. Clearly, we need a better way to formalize our intuitive notion. If we pick an integer at random from the set [1, n], then the probability that it belongs to A is the ratio of the number of elements of A in [1, n] to the total number of elements in [1, n]. If this probability tends to some limit as n tends to infinity, then we call this limit the asymptotic density of A. We see that this notion can be understood as a kind of probability of choosing a number from the set A. Indeed, the asymptotic density (as well as some other types of densities) is studied in probabilistic number theory. Asymptotic density contrasts, for example, with the Schnirelmann density. A drawback of this approach is that the asymptotic density is not defined for all subsets of ℕ.

Definition
A subset A of positive integers has natural density (or asymptotic density) α, where 0 ≤ α ≤ 1, if the proportion of elements of A among all natural numbers from 1 to n is asymptotic to α as n tends to infinity. More explicitly, if one defines for any natural number n the counting function a(n) as the number of elements of A less than or equal to n, then the natural density of A being exactly α means that a(n)/n → α as n → +∞.

Upper and lower asymptotic density


Let A be a subset of the set of natural numbers ℕ = {1, 2, ...}. For any n ∈ ℕ, put A(n) = {1, 2, ..., n} ∩ A and a(n) = |A(n)|. Define the upper asymptotic density of A by

$$\overline{d}(A) = \limsup_{n \to \infty} \frac{a(n)}{n},$$

where lim sup is the limit superior. $\overline{d}(A)$ is also known simply as the upper density of A. Similarly, we define $\underline{d}(A)$, the lower asymptotic density of A, by

$$\underline{d}(A) = \liminf_{n \to \infty} \frac{a(n)}{n}.$$

One may say A has asymptotic density d(A) if $\underline{d}(A) = \overline{d}(A)$, in which case we put $d(A) = \overline{d}(A)$.

This definition can be restated in the following way:

$$d(A) = \lim_{n \to \infty} \frac{a(n)}{n}$$

if the limit exists. A somewhat weaker notion of density is upper Banach density; given a set A ⊆ ℕ, define d*(A) as

$$d^{*}(A) = \limsup_{N - M \to \infty} \frac{|A \cap \{M, M+1, \ldots, N\}|}{N - M + 1}.$$

If one were to write a subset of ℕ as an increasing sequence

$$A = \{a_1 < a_2 < a_3 < \cdots\},$$

then

$$\underline{d}(A) = \liminf_{n \to \infty} \frac{n}{a_n}, \qquad \overline{d}(A) = \limsup_{n \to \infty} \frac{n}{a_n},$$

and

$$d(A) = \lim_{n \to \infty} \frac{n}{a_n}$$

if the limit exists.
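The definition suggests a direct numerical illustration; a Python sketch estimating a(n)/n for two familiar sets:

    # a(n)/n for the even numbers (density 1/2) and the perfect squares (density 0)
    def ratio(is_member, n):
        return sum(1 for k in range(1, n + 1) if is_member(k)) / n

    for n in (10**3, 10**5):
        print(n,
              ratio(lambda k: k % 2 == 0, n),
              ratio(lambda k: round(k**0.5) ** 2 == k, n))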

Examples
If d(A) exists for some set A, then for the complement set we have d(A^c) = 1 − d(A).
Obviously, d(ℕ) = 1.
For any finite set F of positive integers, d(F) = 0.
If A is the set of all squares, then d(A) = 0.
If A is the set of all even numbers, then d(A) = 1/2. Similarly, for any arithmetical progression A = {an + b : n ∈ ℕ} we get d(A) = 1/a.
For the set P of all primes we get from the prime number theorem d(P) = 0.
The set of all square-free integers has density 6/π².
The density of the set of abundant numbers is known to be between 0.2474 and 0.2480.
The set of numbers whose binary expansion contains an odd number of digits is an example of a set which does not have an asymptotic density, since the upper density of this set is 2/3 whereas its lower density is 1/3.
Consider an equidistributed sequence $\{\alpha_n\}$ in [0, 1] and define a monotone family $\{A_x\}$ of sets by $A_x = \{n \in \mathbb{N} : \alpha_n < x\}$. Then, by definition, d(A_x) = x for all x.

References
H. H. Ostmann (1956) (in German). Additive Zahlentheorie I. Berlin-Göttingen-Heidelberg: Springer-Verlag.
Steuding, Jörn. "Probabilistic number theory" [1]. Retrieved 2005-10-06.
G. Tenenbaum (1995). Introduction to analytic and probabilistic number theory. Cambridge: Cambridge Univ. Press.
This article incorporates material from Asymptotic density on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.


References
[1] http://www.math.uni-frankfurt.de/~steuding/steuding/prob.pdf

No-three-in-line problem
In mathematics, in the area of discrete geometry, the no-three-in-line-problem, introduced by Henry Dudeney in 1917, asks for the maximum number of points that can be placed in the n n grid so that no three points are collinear. This number is at most 2n, since if 2n + 1 points are placed in the grid some row will contain three points.

Lower bounds
Paul Erdős (in Roth, 1951) observed that, when n is a prime number, the set of n grid points (i, i² mod n), for 0 ≤ i < n, contains no three collinear points. When n is not prime, one can perform this construction for a p × p grid contained in the n × n grid, where p is the largest prime that is at most n. As a consequence, for any ε > 0 and any sufficiently large n, one can place (1 − ε)n points in the n × n grid with no three points collinear. Erdős' bound has been improved subsequently: Hall et al. (1975) show that, when n/2 is prime, one can obtain a solution with 3(n − 2)/2 points by placing points on the hyperbola xy ≡ k (mod n/2) for a suitable k. Again, for arbitrary n one can perform this construction for a prime near n/2 to obtain a solution with (3/2 − ε)n points.

A set of 20 points in a 10 10 grid, with no three points in a line.
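The prime-grid construction is easy to verify directly; a Python sketch for p = 13:

    from itertools import combinations

    def collinear(a, b, c):
        # zero cross product <=> the three points lie on a common line
        return (b[0] - a[0]) * (c[1] - a[1]) == (b[1] - a[1]) * (c[0] - a[0])

    p = 13
    points = [(i, (i * i) % p) for i in range(p)]
    assert not any(collinear(a, b, c) for a, b, c in combinations(points, 3))
    print(len(points), "points in the", p, "x", p, "grid, no three collinear")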

Conjectured upper bounds


Guy and Kelly (1968) conjectured that one cannot do better, for large n, than cn with

$$c = \sqrt[3]{\frac{2\pi^{2}}{3}} \approx 1.874.$$

In March 2004, Gabor Ellmann noted an error in the original paper of Guy and Kelly's heuristic reasoning, which, if corrected, results in

$$c = \frac{\pi}{\sqrt{3}} \approx 1.814.$$

Applications
The Heilbronn triangle problem asks for the placement of n points in a unit square that maximizes the area of the smallest triangle formed by three of the points. By applying Erdős' construction of a set of grid points with no three collinear points, one can find a placement in which the smallest triangle has area $\Omega\!\left(\frac{1}{n^{2}}\right)$.

Generalizations
A noncollinear placement of n points can also be interpreted as a graph drawing of the complete graph $K_n$ in such a way that, although edges cross, no edge passes through a vertex. Erdős' construction above can be generalized to show that every n-vertex k-colorable graph has such a drawing in an O(n) × O(k) grid (Wood 2005). Non-collinear sets of points in the three-dimensional grid were considered by Pór and Wood (2007). They proved that the maximum number of points in the n × n × n grid with no three points collinear is $\Theta(n^{2})$. Similarly to Erdős's 2D construction, this can be accomplished by using points (x, y, (x² + y²) mod p), where p is a prime congruent to 3 mod 4. One can also consider graph drawings in the three-dimensional grid. Here the non-collinearity condition means that a vertex should not lie on a non-adjacent edge, but it is normal to work with the stronger requirement that no two edges cross (Pach et al. 1998; Dujmović et al. 2005; Di Giacomo 2005).

Small values of n
For n ≤ 40, it is known that 2n points may be placed with no three in a line. The numbers of solutions (not counting reflections and rotations as distinct) for small n = 2, 3, ..., are 1, 1, 4, 5, 11, 22, 57, 51, 156, 158, 566, 499, 1366, ... (sequence A000769 in OEIS).

References
Dudeney, Henry (1917). Amusements in Mathematics. Edinburgh: Nelson.
Emilio Di Giacomo, Giuseppe Liotta, and Henk Meijer (2005). "Computing Straight-line 3D Grid Drawings of Graphs in Linear Volume". Computational Geometry: Theory and Applications 32 (1): 26–58. doi:10.1016/j.comgeo.2004.11.003.
Vida Dujmović, Pat Morin, and David R. Wood (2005). "Layout of Graphs with Bounded Tree-Width". SIAM Journal on Computing 34 (3): 553–579. doi:10.1137/S0097539702416141.
Stefan Felsner, Giuseppe Liotta, and Stephen K. Wismath (2003). "Straight-Line Drawings on Restricted Integer Grids in Two and Three Dimensions" [1]. Journal of Graph Algorithms and Applications 7 (4): 363–398.
Flammenkamp, Achim (1992). "Progress in the no-three-in-line problem". Journal of Combinatorial Theory. Series A 60 (2): 305–311. doi:10.1016/0097-3165(92)90012-J.
Flammenkamp, Achim (1998). "Progress in the no-three-in-line problem, II". Journal of Combinatorial Theory. Series A 81 (1): 108–113. doi:10.1006/jcta.1997.2829.
Guy, R. K.; Kelly, P. A. (1968). "The no-three-in-line problem". Canadian Mathematical Bulletin 11: 527–531. doi:10.4153/CMB-1968-062-3. MR0238765.
Hall, R. R.; Jackson, T. H.; Sudbery, A.; Wild, K. (1975). "Some advances in the no-three-in-line problem". Journal of Combinatorial Theory. Series A 18 (3): 336–341. doi:10.1016/0097-3165(75)90043-6.
Lefmann, Hanno (2008). "No ℓ Grid-Points in spaces of small dimension". Algorithmic Aspects in Information and Management, 4th International Conference, AAIM 2008, Shanghai, China, June 23-25, 2008, Proceedings. Lecture Notes in Computer Science. 5034. Springer-Verlag. pp. 259–270. doi:10.1007/978-3-540-68880-8_25.
Pach, János; Thiele, Torsten; Tóth, Géza (1998). "Three-dimensional grid drawings of graphs". Graph Drawing, 5th Int. Symp., GD '97. Lecture Notes in Computer Science. 1353. Springer-Verlag. pp. 47–51. doi:10.1007/3-540-63938-1_49.
Attila Pór and David R. Wood (2007). "No-three-in-line-in-3D". Algorithmica 47 (4): 481. doi:10.1007/s00453-006-0158-9.
Roth, K. F. (1951). "On a problem of Heilbronn". Journal of the London Mathematical Society 26 (3): 198–204. doi:10.1112/jlms/s1-26.3.198.
David R. Wood (2005). "Grid drawings of k-colourable graphs". Computational Geometry: Theory and Applications 30 (1): 25–28. doi:10.1016/j.comgeo.2004.06.001.


External links
Flammenkamp, Achim. "The No-Three-in-Line Problem" [2].
Weisstein, Eric W., "No-Three-in-a-Line-Problem" [3] from MathWorld.

References
[1] http://jgaa.info/accepted/2003/Felsner+2003.7.4.pdf
[2] http://wwwhomes.uni-bielefeld.de/achim/no3in/readme.html
[3] http://mathworld.wolfram.com/No-Three-in-a-Line-Problem.html

Occupancy theorem
In combinatorial mathematics, the occupancy theorem states that the number of ways of putting r indistinguishable balls into n buckets is

$$\binom{n + r - 1}{r}.$$

Furthermore, the number of ways of putting r indistinguishable balls into n buckets, leaving none empty, is

$$\binom{r - 1}{n - 1}.$$
Applications
This has many applications in areas where the problem can be reduced to the problem stated above. For example: take 12 red and 3 yellow cards, shuffle them and deal them in such a way that all the red cards before the first yellow card go to player 1, the red cards between the 1st and 2nd yellow cards go to player 2, and so on.

Q: Find Pr(everyone has at least 1 card).

A: The number of allocations of 12 balls (red cards) to 4 buckets (players) is $\binom{12 + 4 - 1}{12} = \binom{15}{12} = 455$. The number of allocations where each player gets at least one card is $\binom{12 - 1}{4 - 1} = \binom{11}{3} = 165$, so the probability is 165/455 = 33/91 ≈ 0.363.
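The counts in this example can be checked directly; a minimal Python sketch:

    from math import comb
    from fractions import Fraction

    r, n = 12, 4                        # 12 red cards, 4 players
    total = comb(n + r - 1, r)          # all allocations: C(15, 12) = 455
    none_empty = comb(r - 1, n - 1)     # every player gets a card: C(11, 3) = 165
    print(total, none_empty, Fraction(none_empty, total))  # 455 165 33/91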


Ordered partition of a set


In combinatorial mathematics, an ordered partition O of a set S is a sequence A1, A2, A3, ..., An of subsets of S which are non-empty, pairwise disjoint, and whose union is S. This differs from a partition of a set in that the order of the Ai matters. For example, one ordered partition of { 1, 2, 3, 4, 5 } is {1, 2} {3, 4} {5}, which is equivalent to {1, 2} {4, 3} {5} but distinct from {3, 4} {1, 2} {5}. The number of ordered partitions Tn of { 1, 2, ..., n } can be found recursively by the formula

$$T_n = \sum_{k=1}^{n} \binom{n}{k}\, T_{n-k}, \qquad T_0 = 1.$$

Furthermore, the exponential generating function is

$$\sum_{n=0}^{\infty} T_n \frac{x^n}{n!} = \frac{1}{2 - e^{x}}.$$

An ordered partition of "type $(k_1, k_2, \ldots, k_m)$" is one in which the ith part has $k_i$ members, for i = 1, ..., m. The number of such partitions is given by the multinomial coefficient

$$\binom{n}{k_1, k_2, \ldots, k_m} = \frac{n!}{k_1!\, k_2! \cdots k_m!}.$$

For example, for n = 3:
type 1+1+1: 6
type 2+1: 3
type 1+2: 3
type 3: 1

Together this is the ordered Bell number 13.
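The recursion above is easy to run; a short Python sketch computing the first few ordered Bell (Fubini) numbers:

    from math import comb
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        if n == 0:
            return 1
        return sum(comb(n, k) * T(n - k) for k in range(1, n + 1))

    print([T(n) for n in range(6)])  # [1, 1, 3, 13, 75, 541]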


Oriented matroid
An oriented matroid is a mathematical structure that abstracts the properties of directed graphs and of arrangements of vectors in a vector space over an ordered field (particularly for partially ordered vector spaces).[1] In comparison, an ordinary (i.e., non-oriented) matroid abstracts the dependence properties that are common both to graphs, which are not necessarily directed, and to arrangements of vectors over fields, which are not necessarily ordered.[2] [3]

All oriented matroids have an underlying matroid. Thus, results on ordinary matroids can be applied to oriented matroids. However, the converse is false; some matroids cannot become an oriented matroid by orienting an underlying structure (e.g., circuits or independent sets).[4] The distinction between matroids and oriented matroids is discussed further below. Matroids are often useful in areas such as dimension theory and algorithms. Because of an oriented matroid's inclusion of additional details about the oriented nature of a structure, its usefulness extends further into several areas including geometry and optimization.

Oriented-matroid theory allows a combinatorial approach to the max-flow min-cut theorem. A network with the value of flow equal to the capacity of an s-t cut

Axiomatizations
Like ordinary matroids, several equivalent systems of axioms exist. (Such structures that possess multiple equivalent axiomatizations are called cryptomorphic.)

Circuit axioms
Signed sets

Before we list the circuit axioms a few terms must be defined. A signed set, X, combines a set of objects, |X|, with a partition of that set into two subsets: X⁺ and X⁻.

The members of X⁺ are called the positive elements; members of X⁻ are the negative elements.
The set |X| = X⁺ ∪ X⁻ is called the support of X.
The empty signed set, ∅, is defined in the obvious way.
The signed set Y is the opposite of X, i.e., Y = −X, if and only if Y⁺ = X⁻ and Y⁻ = X⁺.

The concept of signed sets is key to distinguishing oriented from ordinary matroids.

Axioms

Let E be any set. We refer to E as the ground set. Let C be a collection of signed sets, each of which is supported by a subset of E. If the following axioms hold for C, then equivalently C is the set of signed circuits for an oriented matroid on E.

(C0) ∅ ∉ C
(C1) (symmetric) C = −C
(C2) (incomparable) for all X, Y ∈ C, if |X| ⊆ |Y|, then X = Y or X = −Y
(C3) (weak elimination) for all X, Y ∈ C with X ≠ −Y and an element e ∈ X⁺ ∩ Y⁻, there is a Z ∈ C such that Z⁺ ⊆ (X⁺ ∪ Y⁺) \ {e} and Z⁻ ⊆ (X⁻ ∪ Y⁻) \ {e}


Examples
Oriented matroids are often introduced (e.g., Bachem and Kern) as an abstraction for directed graphs or systems of linear inequalities. Thus, it may be helpful to first be knowledgeable of the structures for which oriented matroids are an abstraction. Several of these topics are listed below.

A simple directed acyclic graph

Linear optimization
The theory of oriented matroids was initiated by R.Tyrrell Rockafellar to describe the sign patterns of the matrices arising through the pivoting operations of Dantzig's simplex algorithm; Rockafellar was inspired by Albert W. Tucker studies of such sign patterns in "Tucker tableaux".[5] Much of the theory of oriented matroids (OMs) was developed to study the combinatorial invariants of linear-optimization, particularly those visible in the basis-exchange pivoting of the simplex algorithm.[6]

Convex polytope
For example, Ziegler introduces oriented matroids via convex polytopes.
A 3-dimensional convex polytope

Results
Algebra: duality and polarity
Oriented matroids have a satisfying theory of duality.[7]

Geometry


The theory of oriented matroids has influenced the development of combinatorial geometry, especially the theory of convex polytopes, zonotopes, and of configurations of vectors (arrangements of hyperplanes).[8] Many results (Carathéodory's theorem, Helly's theorem, Radon's theorem, the Hahn–Banach theorem, the Krein–Milman theorem, the lemma of Farkas) can be formulated using appropriate oriented matroids.[9] Rank 3 oriented matroids are equivalent to arrangements of pseudolines.[10]

A zonotope, which is a Minkowski sum of line segments, is a fundamental model for oriented matroids. The sixteen dark-red points (on the right) form the Minkowski sum of the four non-convex sets (on the left), each of which consists of a pair of red points. Their convex hulls (shaded pink) contain plus-signs (+): The right plus-sign is the sum of the left plus-signs.

Similarly, matroid theory is useful for developing combinatorial notions of dimension, rank, etc. In combinatorial convexity, the notion of an antimatroid is also useful.

Optimization
The theory of oriented matroids (OM) has led to breakthroughs in combinatorial optimization. In linear programming, OM theory was the language in which Bland formulated his pivoting rule, by which the simplex algorithm avoids cycles; similarly, OM theory was used by Terlaky and Zhang to prove that their criss-cross algorithms have finite termination for linear programming problems. Similar results were made in convex quadratic programming by Todd and Terlaky.[11] The criss-cross algorithm is often studied using the theory of oriented matroids (OMs), which is a combinatorial abstraction of linear-optimization theory.[6] [12]

Historically, an OM algorithm for quadratic-programming problems and linear-complementarity problems was published by Michael J. Todd, before Terlaky and Wang published their criss-cross algorithms.[6] [13] However, Todd's pivoting rule cycles on nonrealizable oriented matroids (and only on nonrealizable oriented matroids). Such cycling does not stump the OM criss-cross algorithms of Terlaky and Wang, however.[6] There are oriented-matroid variants of the criss-cross algorithm for linear programming, for quadratic programming, and for the linear-complementarity problem.[6] [14] Oriented matroid theory is used in many areas of optimization, besides linear programming. OM theory has been applied to linear-fractional programming,[15] quadratic-programming problems, and linear complementarity problems.[14] [16] [17] Outside of combinatorial optimization, OM theory also appears in convex minimization in Rockafellar's theory of "monotropic programming" and related notions of "fortified descent".[18] Similarly, matroid theory has influenced the development of combinatorial algorithms, particularly the greedy algorithm.[19] More generally, a greedoid is useful for studying the finite termination of algorithms.

In convex geometry, the simplex algorithm for linear programming is interpreted as tracing a path along the vertices of a convex polyhedron. Oriented matroid theory studies the combinatorial invariants that are revealed in the sign-patterns of the matrices that appear as pivoting algorithms exchange bases.


References
[1] Rockafellar 1969. Bjrner et alia, Chapters 1-3. Bokowski, Chapter 1. Ziegler, Chapter 7. [2] Bjrner et alia, Chapters 1-3. Bokowski, Chapters 1-4. [3] Because matroids and oriented matroids are abstractions of other mathematical abstractions, nearly all the relevant books are written for mathematical scientists rather than for the general public. For learning about oriented matroids, a good preparation is to study the textbook on linear optimization by Nering and Tucker, which is infused with oriented-matroid ideas, and then to proceed to Ziegler's lectures on polytopes. [4] Bjrner et alia, Chapter 7.9. [5] (Rockafellar 1969):

Rockafellar, R.T. (1969). "The elementary vectors of a subspace of

(1967)" (http:/ / www. math. washington.

edu/ ~rtr/ papers/ rtr-ElemVectors. pdf). In R. C. Bose and T.A. Dowling. Combinatorial Mathematics and its Applications. The University of North Carolina Monograph Series in Probability and Statistics. Chapel Hill, North Carolina: University of North Carolina Press.. pp.104127. MR278972. . .
[6] Bjrner, Anders; LasVergnas, Michel; Sturmfels, Bernd; White, Neil; Ziegler, Gnter (1999). "10 Linear programming" (http:/ / ebooks. cambridge. org/ ebook. jsf?bid=CBO9780511586507). Oriented Matroids. Cambridge University Press. pp.417479. doi:10.1017/CBO9780511586507. ISBN9780521777506. MR1744046. . [7] In oriented-matroid theory, duality differs from polarity; see Bachem and Kern, Chapters 5.11, 6, 7.2. [8] Bachem and Kern, Chapters 12 and 49. Bjrner et alia, Chapters 18. Ziegler, Chapter 78. Bokowski, Chapters 710. [9] Bachem and Wanka, Chapters 12, 5, 79. Bjrner et alia, Chapter 8. [10] Bjrner et alia, Chapter 6. [11] Bjrner et alia, Chapters 8-9. Fukuda and Terlaky. Compare Ziegler. [12] The theory of oriented matroids was initiated by R.Tyrrell Rockafellar to describe the sign patterns of the matrices arising through the pivoting operations of Dantzig's simplex algorithm; Rockafellar was inspired by Albert W. Tucker studies of such sign patterns in "Tucker tableaux". (Rockafellar 1969):

Rockafellar, R.T. (1969). "The elementary vectors of a subspace of

(1967)" (http:/ / www. math. washington.

edu/ ~rtr/ papers/ rtr-ElemVectors. pdf). In R. C. Bose and T.A. Dowling. Combinatorial Mathematics and its Applications. The University of North Carolina Monograph Series in Probability and Statistics. Chapel Hill, North Carolina: University of North Carolina Press.. pp.104127. MR278972. . .
[13] Todd, MichaelJ. (1985). "Linear and quadratic programming in oriented matroids". Journal of Combinatorial Theory. SeriesB 39 (2): 105133. doi:10.1016/0095-8956(85)90042-5. MR811116. [14] Fukuda & Terlaky (1997) [15] Ills, Szirmai & Terlaky (1999) [16] Fukuda & Terlaky (1997, p.385) [17] Fukuda & Namiki (1994, p.367) [18] Rockafellar 1984 and 1998. [19] Lawler. Rockafellar 1984 and 1998.

Further reading
Books
A. Bachem and W. Kern. Linear Programming Duality: An Introduction to Oriented Matroids. Universitext. Springer-Verlag, 1992. Bjrner, Anders; Las Vergnas, Michel; Sturmfels, Bernd; White, Neil; Ziegler, Gnter (1999). Oriented Matroids. Cambridge University Press. ISBN9780521777506. Bokowski, Jrgen (2006). Computational oriented matroids. Cambridge University Press. ISBN9780521849302. Eugene Lawler (2001). Combinatorial Optimization: Networks and Matroids. Dover. ISBN0486414531. Evar D. Nering and Albert W. Tucker, 1993, Linear Programs and Related Problems, Academic Press. (elementary) R. T. Rockafellar. Network Flows and Monotropic Optimization, Wiley-Interscience, 1984 (610 pages); republished by Athena Scientific of Dimitri Bertsekas, 1998. Ziegler, Gnter M., Lectures on Polytopes, Springer-Verlag, New York, 1994.

Richter-Gebert, J. and G. Ziegler, Oriented Matroids, in Handbook of Discrete and Computational Geometry, J. Goodman and J. O'Rourke (eds.), CRC Press, Boca Raton, 1997, pp. 111–132.


Articles
A. Bachem, A. Wanka, Separation theorems for oriented matroids, Discrete Math. 70 (1988) 303310. R.G. Bland, New finite pivoting rules for the simplex method, Math. Oper. Res. 2 (1977) 103107. Jon Folkman and James Lawrence, Oriented Matroids, J. Combin. Theory Ser. B 25 (1978) 199236. Fukuda, Komei; Terlaky, Tams (1997). "Criss-cross methods: A fresh view on pivot algorithms" (http://www. cas.mcmaster.ca/~terlaky/files/crisscross.ps). Mathematical Programming: SeriesB (Amsterdam: North-Holland PublishingCo.) 79 (Papers from the16th International Symposium on Mathematical Programming held in Lausanne,1997): pp.369395. doi:10.1016/S0025-5610(97)00062-2. MR1464775. Fukuda, Komei; Namiki, Makoto (March 1994). "On extremal behaviors of Murty's least index method". Mathematical Programming 64 (1): 365370. doi:10.1007/BF01582581. MR1286455. Ills, Tibor; Szirmai, kos; Terlaky, Tams (1999). "The finite criss-cross method for hyperbolic programming" (http://www.sciencedirect.com/science/article/B6VCT-3W3DFHB-M/2/ 4b0e2fcfc2a71e8c14c61640b32e805a). European Journal of Operational Research 114 (1): 198214. doi:10.1016/S0377-2217(98)00049-6. ISSN0377-2217. PDF preprint (http://www.cas.mcmaster.ca/~terlaky/ files/dut-twi-96-103.ps.gz). Fukuda, Komei; Terlaky, Tams (1997). ThomasM. Liebling and Dominique deWerra. ed. "Criss-cross methods: A fresh view on pivot algorithms". Mathematical Programming: SeriesB (Amsterdam: North-Holland PublishingCo.) 79 (Papers from the16th International Symposium on Mathematical Programming held in Lausanne,1997): 369395. doi:10.1016/S0025-5610(97)00062-2. MR1464775. Postscript preprint (http://www. cas.mcmaster.ca/~terlaky/files/crisscross.ps). R. T. Rockafellar. The elementary vectors of a subspace of , in Combinatorial Mathematics and its Applications, R. C. Bose and T. A. Dowling (eds.), Univ. of North Carolina Press, 1969, 104-127. Roos, C. (1990). "An exponential example for Terlaky's pivoting rule for the criss-cross simplex method". Mathematical Programming. SeriesA 46 (1): 79. doi:10.1007/BF01585729. MR1045573. Terlaky, T. (1985). "A convergent criss-cross method". Optimization: A Journal of Mathematical Programming and Operations Research 16 (5): 683690. doi:10.1080/02331938508843067. ISSN0233-1934. MR798939. Terlaky, Tams (1987). "A finite crisscross method for oriented matroids". Journal of Combinatorial Theory. SeriesB 42 (3): 319327. doi:10.1016/0095-8956(87)90049-9. ISSN0095-8956. MR888684. Terlaky, Tams; Zhang, ShuZhong (1993). "Pivot rules for linear programming: A Survey on recent theoretical developments". Annals of Operations Research (Springer Netherlands) 4647 (Degeneracy in optimization problems): 203233. doi:10.1007/BF02096264. ISSN0254-5330. MR1260019. PDF file of (1991) preprint (http:/ /citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.36.7658&rep=rep1&type=pdf). Michael J. Todd, Linear and quadratic programming in oriented matroids, J. Combin. Theory Ser. B 39 (1985) 105133. Wang, ZheMin (1987). "A finite conformal-elimination free algorithm over orientedmatroid programming". Chinese Annals of Mathematics (Shuxue NiankanBJi). SeriesB 8 (1): 120125. ISSN0252-9599. MR886756.


On the web
Ziegler, Gnter (1998). "Oriented Matroids Today" (http://www.combinatorics.org/Surveys/ds4.pdf). The Electronic Journal of Combinatorics. Malkevitch, Joseph. "Oriented Matroids: The Power of Unification" (http://www.ams.org/featurecolumn/ archive/oriented1.html). Feature Column. American Mathematical Society. Retrieved 2009-09-14.

External links
Komei Fukuda (ETH Zentrum, Zurich) (http://www.ifor.math.ethz.ch/~fukuda/) with publications (http:// www.ifor.math.ethz.ch/~fukuda/publ/publ.html) including Oriented matroid programming (1982 Ph.D. thesis) (ftp://ftp.ifor.math.ethz.ch/pub/fukuda/reports/fukuda1982thesis.pdf) Tams Terlaky (Lehigh University) (http://coral.ie.lehigh.edu/~terlaky/) with publications (http://coral.ie. lehigh.edu/~terlaky/publications)

Partial permutation
In combinatorial mathematics, given a set S and two subsets U and V, a bijection from U to V is a partial permutation of S. Thus any permutation is a partial permutation with U = V. Another way of looking at it is that a partial permutation on S is a partial function on S which can be extended to a permutation of S.

Partition (number theory)


In number theory and combinatorics, a partition of a positive integer n, also called an integer partition, is a way of writing n as a sum of positive integers. Two sums that differ only in the order of their summands are considered to be the same partition; if order matters then the sum becomes a composition. For example, 4 can be partitioned in five distinct ways: 4, 3 + 1, 2 + 2, 2 + 1 + 1, 1 + 1 + 1 + 1. The order-dependent composition 1 + 3 is the same partition as 3 + 1, while 1 + 2 + 1 and 1 + 1 + 2 are the same partition as 2 + 1 + 1. A summand in a partition is also called a part. The number of partitions of n is given by the partition function p(n). So p(4) = 5. The notation λ ⊢ n means that λ is a partition of n.

Young diagrams associated to the partitions of the positive integers 1 through 8. They are so arranged that images under the reflection about the main diagonal of the square are conjugate partitions.

Partitions can be graphically visualized with Young diagrams or Ferrers diagrams. They occur in a number of branches of mathematics and physics, including the study of symmetric polynomials, the symmetric group and in group representation theory in general.


Examples
The partitions of 4 are:
1. 4
2. 3 + 1
3. 2 + 2
4. 2 + 1 + 1
5. 1 + 1 + 1 + 1

In some sources partitions are treated as the sequence of summands, rather than as an expression with plus signs. For example, the partition 2+1+1 might instead be written as the tuple (2, 1, 1) or in the even more compact form (2, 12) where the superscript indicates the number of repetitions of a term.

Partition function
In number theory, the partition function p(n) represents the number of possible partitions of a natural number n, which is to say the number of distinct (and order independent) ways of representing n as a sum of natural numbers. By convention p(0) = 1, p(n) = 0 for n negative.

Values
The number of partitions of the numbers from 1 to 10 are 1, 2, 3, 5, 7, 11, 15, 22, 30, 42 (sequence A000041 in OEIS). There are 190,569,292 partitions of the number 100, and approximately 2.4 × 10^31 partitions of the number 1000. As of September 2011, the largest known prime number that counts a number of partitions is p(60016427), with 8622 decimal digits, found by Bernardo Boncompagni.[1]

Intermediate function
One way of getting a handle on the partition function involves an intermediate function p(k, n), which represents the number of partitions of n using only natural numbers at least as large as k. For any given value of k, partitions counted by p(k, n) fit into exactly one of the following categories:
1. smallest addend is k
2. smallest addend is strictly greater than k.
The number of partitions meeting the first condition is p(k, n − k). To see this, imagine a list of all the partitions of the number n − k into numbers of size at least k, then imagine appending "+ k" to each partition in the list. Now what is it a list of? As a side note, one can use this to define a sort of recursion relation for the partition function in terms of the intermediate function, namely

$$p(n) = 1 + \sum_{k=1}^{\lfloor n/2 \rfloor} p(k, n - k),$$

where $\lfloor \cdot \rfloor$ is the floor function.

The number of partitions meeting the second condition is p(k + 1, n), since a partition into parts of at least k that contains no parts of exactly k must have all parts at least k + 1.

Since the two conditions are mutually exclusive, the number of partitions meeting either condition is p(k + 1, n) + p(k, n − k). The recursively defined function is thus:

p(k, n) = 0 if k > n
p(k, n) = 1 if k = n
p(k, n) = p(k + 1, n) + p(k, n − k) otherwise.

This function tends to exhibit deceptive behavior:

p(1, 4) = 5
p(2, 8) = 7
p(3, 12) = 9
p(4, 16) = 11
p(5, 20) = 13
p(6, 24) = 16

Our original function p(n) is just p(1, n). The values of this function:
 n \ k |  1   2   3   4   5   6   7   8   9  10
 ------+-----------------------------------------
   1   |  1   0   0   0   0   0   0   0   0   0
   2   |  2   1   0   0   0   0   0   0   0   0
   3   |  3   1   1   0   0   0   0   0   0   0
   4   |  5   2   1   1   0   0   0   0   0   0
   5   |  7   2   1   1   1   0   0   0   0   0
   6   | 11   4   2   1   1   1   0   0   0   0
   7   | 15   4   2   1   1   1   1   0   0   0
   8   | 22   7   3   2   1   1   1   1   0   0
   9   | 30   8   4   2   1   1   1   1   1   0
  10   | 42  12   5   3   2   1   1   1   1   1
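The recursive definition of the intermediate function translates directly into code; a Python sketch with memoization:

    from functools import lru_cache

    # p_int(k, n): number of partitions of n into parts of size at least k
    @lru_cache(maxsize=None)
    def p_int(k, n):
        if k > n:
            return 0
        if k == n:
            return 1
        return p_int(k + 1, n) + p_int(k, n - k)

    def p(n):
        return p_int(1, n) if n > 0 else 1

    print([p(n) for n in range(1, 11)])  # [1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
    print(p(100))                        # 190569292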

Generating function
A generating function for p(n) is given by the reciprocal of Euler's function:

$$\sum_{n=0}^{\infty} p(n)\, x^{n} = \prod_{k=1}^{\infty} \frac{1}{1 - x^{k}}.$$

Expanding each term on the right-hand side as a geometric series, we can rewrite it as (1 + x + x^2 + x^3 + ...)(1 + x^2 + x^4 + x^6 + ...)(1 + x^3 + x^6 + x^9 + ...) .... The x^n term in this product counts the number of ways to write n = a_1 + 2a_2 + 3a_3 + ... = (1 + 1 + ... + 1) + (2 + 2 + ... + 2) + (3 + 3 + ... + 3) + ..., where each number i appears a_i times. This is precisely the definition of a partition of n, so our product is the desired generating function. More generally, the generating function for the partitions of n into numbers from a set A can be found by taking only those terms in the product where k is an element of A. This result is due to Euler. The formulation of Euler's generating function is a special case of a q-Pochhammer symbol and is similar to the product formulation of many modular forms, and specifically the Dedekind eta function. It can also be used in

Partition (number theory) conjunction with the pentagonal number theorem to derive a recurrence for the partition function stating that: p(k) = p(k 1) + p(k 2) p(k 5) p(k 7) + p(k 12) + p(k 15) p(k 22) ... where p(0) is taken to equal 1, p(k) is zero for negative k, and the sum is taken over all generalized pentagonal numbers of the form n(3n 1), for n running over positive and negative integers: successively taking n = 1, 1, 2, 2, 3, 3, 4, 4 ..., generates the values 1, 2, 5, 7, 12, 15, 22, 26, 35, 40, 51, .... The signs in the summation continue to alternate+,+,,,+,+,...
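The pentagonal-number recurrence gives a fast way to tabulate p(n). A minimal Python sketch of it (illustrative code added here; the function name is chosen for this example) follows:

def partition_numbers(N):
    # p(0), ..., p(N) via Euler's pentagonal number recurrence
    p = [1] + [0] * N
    for k in range(1, N + 1):
        total = 0
        j = 1
        while j * (3 * j - 1) // 2 <= k:
            sign = -1 if j % 2 == 0 else 1
            total += sign * p[k - j * (3 * j - 1) // 2]
            if j * (3 * j + 1) // 2 <= k:
                total += sign * p[k - j * (3 * j + 1) // 2]
            j += 1
        p[k] = total
    return p

print(partition_numbers(10))        # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
print(partition_numbers(100)[100])  # 190569292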


Congruences
Srinivasa Ramanujan is credited with discovering that "congruences" in the number of partitions exist for integers ending in 4 and 9:

    p(5k + 4) ≡ 0 (mod 5).

For instance, the number of partitions for the integer 4 is 5. For the integer 9, the number of partitions is 30; for 14 there are 135 partitions. This is implied by an identity, also by Ramanujan,[2]

    Σ_{k=0}^{∞} p(5k + 4) x^k = 5 Π_{k=1}^{∞} (1 − x^{5k})^5 / (1 − x^k)^6.

He also discovered congruences related to 7 and 11:

    p(7k + 5) ≡ 0 (mod 7),
    p(11k + 6) ≡ 0 (mod 11).

Since 5, 7, and 11 are consecutive primes, one might think that there would be such a congruence p(13k + a) ≡ 0 (mod 13) for the next prime 13, for some a. This is, however, false. It can also be shown that there is no congruence of the form p(bk + a) ≡ 0 (mod b) for any prime b other than 5, 7, or 11. In the 1960s, A. O. L. Atkin of the University of Illinois at Chicago discovered additional congruences for small prime moduli. For example:

    p(11^3 · 13 · k + 237) ≡ 0 (mod 13).
In 2000, Ken Ono of the University of Wisconsin-Madison proved that there are such congruences for every prime modulus. A few years later Ono, together with Scott Ahlgren of the University of Illinois, proved that there are partition congruences modulo every integer coprime to 6.[3]

Partition function formulas


An asymptotic expression for p(n) is given by

    p(n) ~ exp(π √(2n/3)) / (4n√3)   as n → ∞.

This asymptotic formula was first obtained by G. H. Hardy and Ramanujan in 1918 and independently by J. V. Uspensky in 1920. Considering p(1000), the asymptotic formula gives about 2.4402×10^31, reasonably close to the exact answer given above (1.415% larger than the true value). In 1937, Hans Rademacher was able to improve on Hardy and Ramanujan's results by providing a convergent series expression for p(n). It is

    p(n) = (1/(π√2)) Σ_{k=1}^{∞} A_k(n) √k · d/dn [ (1/√(n − 1/24)) sinh( (π/k) √( (2/3)(n − 1/24) ) ) ]

where

    A_k(n) = Σ_{0 ≤ m < k, (m,k)=1} e^{πi ( s(m,k) − 2nm/k )}.

It can be shown that the derivative part of the sum can be simplified.[4] Here, the notation (m,k) = 1 implies that the sum should occur only over the values of m that are relatively prime to k. The function s(m,k) is a Dedekind sum. The proof of Rademacher's formula involves Ford circles, Farey sequences, modular symmetry and the Dedekind eta function in a central way. In January 2011, it was announced that Ono and Jan Hendrik Bruinier, of the Technische Universität Darmstadt, had developed a finite, algebraic formula determining the value of p(n) for any positive integer n.[5] [6]
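To see how sharp the leading asymptotic term already is, one can evaluate it numerically; the small Python check below (an illustration, not part of the article) reproduces the roughly 2.44×10^31 estimate for p(1000) quoted above:

import math

def hardy_ramanujan_estimate(n):
    # leading-order asymptotic: p(n) ~ exp(pi*sqrt(2n/3)) / (4*n*sqrt(3))
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))

print(hardy_ramanujan_estimate(1000))  # roughly 2.44e31, about 1.4% above the exact value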


Restricted partitions
Among the 22 partitions for the number 8, 6 contain only odd parts:

    7 + 1
    5 + 3
    5 + 1 + 1 + 1
    3 + 3 + 1 + 1
    3 + 1 + 1 + 1 + 1 + 1
    1 + 1 + 1 + 1 + 1 + 1 + 1 + 1

If we count the partitions of 8 with distinct parts, we also obtain the number 6:

    8
    7 + 1
    6 + 2
    5 + 3
    5 + 2 + 1
    4 + 3 + 1

It is true for all positive numbers that the number of partitions with odd parts always equals the number of partitions with distinct parts. This result was proved by Leonhard Euler in 1748.[7] Some similar results about restricted partitions can be obtained by the aid of a visual tool, a Ferrers graph (also called Ferrers diagram, since it is not a graph in the graph-theoretical sense, or sometimes Young diagram, alluding to the Young tableau).
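Euler's theorem is easy to confirm for small n by brute force. The Python sketch below (illustrative code; the helper names are chosen here, not from the article) counts both kinds of restricted partitions and checks that they agree:

def count_odd_part_partitions(n):
    # partitions of n into odd parts, generated with parts in non-increasing order
    def count(remaining, max_part):
        if remaining == 0:
            return 1
        return sum(count(remaining - part, part)
                   for part in range(min(remaining, max_part), 0, -1)
                   if part % 2 == 1)
    return count(n, n)

def count_distinct_part_partitions(n):
    # partitions of n into pairwise distinct parts
    def count(remaining, max_part):
        if remaining == 0:
            return 1
        return sum(count(remaining - part, part - 1)
                   for part in range(min(remaining, max_part), 0, -1))
    return count(n, n)

for n in range(1, 13):
    assert count_odd_part_partitions(n) == count_distinct_part_partitions(n)
print(count_odd_part_partitions(8), count_distinct_part_partitions(8))  # 6 6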

Ferrers diagram
The partition 6+4+3+1 of the positive number 14 can be represented by the following diagram; these diagrams are named in honor of Norman Macleod Ferrers:

6+4+3+1

The 14 circles are lined up in 4 columns, each having the size of a part of the partition. The diagrams for the 5 partitions of the number 4 are listed below:

4 = 3+1 = 2+2 = 2+1+1 = 1+1+1+1

If we now flip the diagram of the partition 6 + 4 + 3 + 1 along its main diagonal, we obtain another partition of 14:

6+4+3+1

4+3+3+2+1+1

By turning the rows into columns, we obtain the partition 4+3+3+2+1+1 of the number 14. Such partitions are said to be conjugate of one another. In the case of the number 4, partitions 4 and 1+1+1+1 are conjugate pairs, and partitions 3+1 and 2+1+1 are conjugate of each other. Of particular interest is the partition 2+2, which has itself as conjugate. Such a partition is said to be self-conjugate. Claim: The number of self-conjugate partitions is the same as the number of partitions with distinct odd parts. Proof (outline): The crucial observation is that every odd part can be "folded" in the middle to form a self-conjugate diagram:

One can then obtain a bijection between the set of partitions with distinct odd parts and the set of self-conjugate partitions, as illustrated by the following example: the partition 9 + 7 + 3 into distinct odd parts corresponds to the self-conjugate partition 5 + 5 + 4 + 3 + 2.
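Conjugation itself is a one-line computation once a partition is stored as a list of parts. The following Python sketch (added here for illustration) transposes a partition and confirms the examples used above:

def conjugate(partition):
    # conjugate (transpose) of a partition given as a non-increasing list of parts
    return [sum(1 for part in partition if part > i) for i in range(max(partition))]

print(conjugate([6, 4, 3, 1]))      # [4, 3, 3, 2, 1, 1]
print(conjugate([2, 2]))            # [2, 2]  (self-conjugate)
print(conjugate([5, 5, 4, 3, 2]))   # [5, 5, 4, 3, 2]  (self-conjugate)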

Similar techniques can be employed to establish, for example, the following equalities: The number of partitions of n into no more than k parts is the same as the number of partitions of n into parts no larger than k. The number of partitions of n into no more than k parts is the same as the number of partitions of n + k into exactly k parts.


Young diagrams
An alternative visual representation of an integer partition is its Young diagram, named after the British mathematician Alfred Young. Rather than representing a partition with dots, as in the Ferrers diagram, the Young diagram uses boxes. Thus, the Young diagram for the partition 5 + 4 + 1 is

while the Ferrers diagram for the same partition is

While this seemingly trivial variation doesn't appear worthy of separate mention, Young diagrams turn out to be extremely useful in the study of symmetric functions and group representation theory: in particular, filling the boxes of Young diagrams with numbers (or sometimes more complicated objects) obeying various rules leads to a family of objects called Young tableaux, and these tableaux have combinatorial and representation-theoretic significance.

Notes
[1] http:/ / primes. utm. edu/ top20/ page. php?id=54 [2] Berndt and Ono, "Ramanujan's Unpublished Manuscript on the Partition and Tau Functions with Proofs and Commentary" (http:/ / www. math. wisc. edu/ ~ono/ reprints/ 044. pdf) [3] Ono, Ken; Ahlgren, Scott (2001). "Congruence properties for the partition function" (http:/ / www. math. wisc. edu/ ~ono/ reprints/ 061. pdf). Proceedings of the National Academy of Sciences 98 (23): 12,88212,884. doi:10.1073/pnas.191488598. . [4] "WolframAlpha" (http:/ / www. wolframalpha. com/ input/ ?i=d/ dn+ ((1/ sqrt(n-(1/ 24)))*sinh(pi/ k*sqrt((2/ 3)*(n-(1/ 24)))))). . [5] http:/ / www. aimath. org/ news/ partition/ [6] Bruinier and Ono, "Algebraic formulas for the coefficients of half-integral weight harmonic weak Maass forms" (http:/ / arxiv. org/ abs/ 1104. 1182v1/ ) [7] Andrews, George E. Number Theory. W. B. Saunders Company, Philadelphia, 1971. Dover edition, page 149150.

References
George E. Andrews, The Theory of Partitions (1976), Cambridge University Press. ISBN 0-521-63766-X . Tom M. Apostol, Modular functions and Dirichlet Series in Number Theory (1990), Springer-Verlag, New York. ISBN 0-387-97127-0 (See chapter 5 for a modern pedagogical intro to Rademacher's formula). Sautoy, Marcus Du. The Music of the Primes. New York: Perennial-HarperCollins, 2003. Lehmer, D. H. (1939). "On the remainder and convergence of the series for the partition function". Trans. Amer. Math. Soc. 46: 362373. doi:10.1090/S0002-9947-1939-0000410-9. MR0000410. Provides the main formula (no derivatives), remainder, and older form for Ak(n).) Gupta, Gwyther, Miller, Roy. Soc. Math. Tables, vol 4, Tables of partitions, (1962) (Has text, nearly complete bibliography, but they (and Abramowitz) missed the Selberg formula for Ak(n), which is in Whiteman.) Ian G. Macdonald, Symmetric functions and Hall polynomials, Oxford University Press, 1979, ISBN 0-19-853530-9 (See section I.1) Ken Ono, Distribution of the partition function modulo m, Annals of Mathematics 151 (2000) pp 293307. (This paper proves congruences modulo every prime greater than 3) Richard P. Stanley, Enumerative Combinatorics, Volumes 1 and 2 (http://www-math.mit.edu/~rstan/ec/). Cambridge University Press, 1999 ISBN 0-521-56069-1

Partition (number theory) A. L. Whiteman, A sum connected with the series for the partition function (http://projecteuclid.org/Dienst/UI/ 1.0/Summarize/euclid.pjm/1103044252), Pacific Journal of Math. 6:1 (1956) 159176. (Provides the Selberg formula. The older form is the finite Fourier expansion of Selberg.) Hans Rademacher, Collected Papers of Hans Rademacher, (1974) MIT Press; v II, p 100107, 108122, 460475. Mikls Bna (2002). A Walk Through Combinatorics: An Introduction to Enumeration and Graph Theory. World Scientific Publishing. ISBN981-02-4900-4. (qn elementary introduction to the topic of integer partition, including a discussion of Ferrers graphs) George E. Andrews, Kimmo Eriksson (2004). Integer Partitions. Cambridge University Press. ISBN0-521-60090-1. 'A Disappearing Number', devised piece by Complicite, mention Ramanujan's work on the Partition Function, 2007


External links
Partition and composition calculator (http://www.btinternet.com/~se16/js/partitions.htm) First 4096 values of the partition function (http://www.numericana.com/data/partition.htm) An algorithm to compute the partition function (http://www.numericana.com/answer/numbers. htm#partitions) Weisstein, Eric W., " Partition (http://mathworld.wolfram.com/Partition.html)" from MathWorld. Weisstein, Eric W., " Partition Function P (http://mathworld.wolfram.com/PartitionFunctionP.html)" from MathWorld. Pieces of Number (http://www.sciencenews.org/articles/20050618/bob9.asp) from Science News Online Lectures on Integer Partitions (http://www.math.upenn.edu/~wilf/PIMS/PIMSLectures.pdf) by Herbert S. Wilf Counting with partitions (http://www.luschny.de/math/seq/CountingWithPartitions.html) with reference tables to the On-Line Encyclopedia of Integer Sequences Integer::Partition Perl module (http://search.cpan.org/perldoc?Integer::Partition) from CPAN Fast Algorithms For Generating Integer Partitions (http://www.site.uottawa.ca/~ivan/F49-int-part.pdf) Generating All Partitions: A Comparison Of Two Encodings (http://arxiv.org/abs/0909.2331) Amanda Folsom, Zachary A. Kent, and Ken Ono, l-adic properties of the partition function (http://www.aimath. org/news/partition/folsom-kent-ono.pdf). In press. Jan Hendrik Bruinier and Ken Ono, An algebraic formula for the partition function (http://www.aimath.org/ news/partition/brunier-ono.pdf). In press.


Partition of a set
In mathematics, a partition of a set X is a division of X into non-overlapping and non-empty "parts" or "blocks" or "cells" that cover all of X. More formally, these "cells" are both collectively exhaustive and mutually exclusive with respect to the set being partitioned.

Definition
A partition of a set X is a set of nonempty subsets of X such that every element x in X is in exactly one of these subsets. Equivalently, a set P is a partition of X if, and only if, it does not contain the empty set and:
1. The union of the elements of P is equal to X. (The elements of P are said to cover X.)
2. The intersection of any two distinct elements of P is empty. (We say the elements of P are pairwise disjoint.)
In mathematical notation, these two conditions can be represented as
1. ∪_{A ∈ P} A = X, and
2. A ∩ B = ∅ whenever A, B ∈ P and A ≠ B,
where ∅ is the empty set. The elements of P are called the blocks, parts or cells of the partition.[1]

A partition of a set into 6 parts: an Euler diagram representation

Examples
Every singleton set {x} has exactly one partition, namely { {x} }.
For any nonempty set X, P = {X} is a partition of X.
For any non-empty proper subset A of a set U, this A together with its complement is a partition of U.
The set { 1, 2, 3 } has these five partitions:

    { {1}, {2}, {3} }, or 1/2/3.
    { {1, 2}, {3} }, or 12/3.
    { {1, 3}, {2} }, or 13/2.
    { {1}, {2, 3} }, or 1/23.
    { {1, 2, 3} }, or 123 (in contexts where there will be no confusion with the number).

The following are not partitions of { 1, 2, 3 }:

    { {}, {1, 3}, {2} } is not a partition because one of its elements is the empty set.
    { {1, 2}, {2, 3} } is not a partition (of any set) because the element 2 is contained in more than one distinct subset.
    { {1}, {2} } is not a partition of {1, 2, 3} because none of its blocks contains 3; however, it is a partition of {1, 2}.


Partitions and equivalence relations


For any equivalence relation on a set X, the set of its equivalence classes is a partition of X. Conversely, from any partition P of X, we can define an equivalence relation on X by setting x ~ y precisely when x and y are in the same part in P. Thus the notions of equivalence relation and partition are essentially equivalent.[2]
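The correspondence between equivalence relations and partitions can be made concrete in a few lines of code. The Python sketch below (illustrative; the function names are assumptions of this example) builds the partition of equivalence classes from a "same key" relation and recovers the relation from the partition:

def partition_from_key(X, key):
    # equivalence classes of the relation "x ~ y iff key(x) == key(y)"
    blocks = {}
    for x in X:
        blocks.setdefault(key(x), set()).add(x)
    return list(blocks.values())

def same_block(partition):
    # the equivalence relation induced by a partition: x ~ y iff x and y share a block
    return lambda x, y: any(x in block and y in block for block in partition)

P = partition_from_key(range(10), key=lambda x: x % 3)
print(P)                              # [{0, 3, 6, 9}, {1, 4, 7}, {2, 5, 8}]
related = same_block(P)
print(related(3, 9), related(3, 4))   # True False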

Refinement of partitions
A partition α of a set X is a refinement of a partition ρ of X (and we say that α is finer than ρ and that ρ is coarser than α) if every element of α is a subset of some element of ρ. Informally, this means that α is a further fragmentation of ρ. In that case, it is written that α ≤ ρ. This finer-than relation on the set of partitions of X is a partial order (so the notation "≤" is appropriate); it is a complete lattice. For the simple example of X = {1, 2, 3, 4}, the partition lattice has 15 elements and is depicted in the corresponding Hasse diagram.
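Checking whether one partition refines another is a direct translation of the definition. A minimal Python sketch (illustrative only) is:

def is_refinement(alpha, rho):
    # alpha and rho are partitions of the same set, given as lists of sets;
    # alpha is finer than rho iff every block of alpha lies inside some block of rho
    return all(any(a <= r for r in rho) for a in alpha)

singletons = [{1}, {2}, {3}, {4}]
pairs      = [{1, 2}, {3, 4}]
other      = [{1, 3}, {2, 4}]
print(is_refinement(singletons, pairs), is_refinement(pairs, other))  # True False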

Another example illustrates the refining of partitions from the perspective of equivalence relations. If D is the set of cards in a standard 52-card deck, the same-color-as relation on D which can be denoted ~C has two equivalence classes: the sets {red cards} and {black cards}. The 2-part partition corresponding to ~C has a refinement that yields the same-suit-as relation ~S, which has the four equivalence classes {spades}, {diamonds}, {hearts}, and {clubs}.


Noncrossing partitions
A partition of the set N = {1, 2, ..., n} with corresponding equivalence relation ~ is noncrossing provided that there are no distinct numbers a, b, c, and d in N with a < b < c < d for which a ~ c and b ~ d. The lattice of noncrossing partitions of a finite set has recently taken on importance because of its role in free probability theory. These form a subset of the lattice of all partitions, but not a sublattice, since the join operations of the two lattices do not agree.
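The defining condition for a noncrossing partition can be tested directly. The following Python sketch (an illustration; it runs in quartic time, which is fine for small sets) checks it:

from itertools import combinations

def is_noncrossing(partition):
    # a partition of {1,...,n}, given as a list of sets, is noncrossing if there are
    # no a < b < c < d with a, c in one block and b, d in a different block
    for B, C in combinations(partition, 2):
        for a, c in combinations(sorted(B), 2):
            for b, d in combinations(sorted(C), 2):
                if a < b < c < d or b < a < d < c:
                    return False
    return True

print(is_noncrossing([{1, 2, 3}, {4}]))   # True
print(is_noncrossing([{1, 3}, {2, 4}]))   # False (1 < 2 < 3 < 4 crosses)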

Counting partitions
The total number of partitions of an n-element set is the Bell number Bn. The first several Bell numbers are B0 = 1, B1 = 1, B2 = 2, B3 = 5, B4 = 15, B5 = 52, and B6 = 203. Bell numbers satisfy the recursion

    B_{n+1} = Σ_{k=0}^{n} (n choose k) B_k

and have the exponential generating function

    Σ_{n=0}^{∞} B_n x^n / n! = e^{e^x − 1}.
The number of partitions of an n-element set into exactly k nonempty parts is the Stirling number of the second kind S(n, k). The number of noncrossing partitions of an n-element set is the Catalan number Cn, given by

    C_n = (1/(n + 1)) (2n choose n).
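These counting sequences are easy to generate from their recursions. The Python sketch below (illustrative; the function names are chosen here, and math.comb requires Python 3.8 or later) computes Bell numbers, Stirling numbers of the second kind, and Catalan numbers, and checks the consistency B_n = Σ_k S(n, k):

from math import comb

def bell(n):
    # Bell numbers via the recursion B_{n+1} = sum_k C(n, k) * B_k
    B = [1]
    for m in range(n):
        B.append(sum(comb(m, k) * B[k] for k in range(m + 1)))
    return B[n]

def stirling2(n, k):
    # Stirling numbers of the second kind: S(n, k) = k*S(n-1, k) + S(n-1, k-1)
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def catalan(n):
    return comb(2 * n, n) // (n + 1)

print([bell(n) for n in range(7)])             # [1, 1, 2, 5, 15, 52, 203]
print(sum(stirling2(4, k) for k in range(5)))  # 15, which agrees with bell(4)
print([catalan(n) for n in range(6)])          # [1, 1, 2, 5, 14, 42]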

Notes
[1] Brualdi, pp. 44-45 [2] Schechter, p. 54

References
Brualdi, Richard A. (2004). Introductory Combinatorics (4th edition ed.). Pearson Prentice Hall. ISBN0131001191. Schechter, Eric (1997). Handbook of Analysis and Its Foundations. Academic Press. ISBN0126227608.


Pascal's rule
In mathematics, Pascal's rule is a combinatorial identity about binomial coefficients. It states that for any natural number n we have

    (n − 1 choose k) + (n − 1 choose k − 1) = (n choose k)    for 1 ≤ k ≤ n,

where (n choose k) is a binomial coefficient. This is also commonly written

    (n choose k) + (n choose k − 1) = (n + 1 choose k).
Combinatorial proof
Pascal's rule has an intuitive combinatorial meaning. Recall that (a choose b) counts in how many ways we can pick a subset with b elements out from a set with a elements. Therefore, the right side of the identity, (n choose k), is counting how many ways we can get a k-subset out from a set with n elements. Now, suppose you distinguish a particular element 'X' from the set with n elements. Thus, every time you choose k elements to form a subset there are two possibilities: X belongs to the chosen subset or not. If X is in the subset, you only really need to choose k − 1 more objects (since it is known that X will be in the subset) out from the remaining n − 1 objects. This can be accomplished in (n − 1 choose k − 1) ways. When X is not in the subset, you need to choose all the k elements in the subset from the n − 1 objects that are not X. This can be done in (n − 1 choose k) ways. We conclude that the number of ways to get a k-subset from the n-set, which we know is (n choose k), is also the number (n − 1 choose k − 1) + (n − 1 choose k). See also Bijective proof.
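The case analysis above can be verified mechanically for any particular n and k. A small Python check (illustrative; the distinguished element X is simply taken to be n here) is:

from itertools import combinations
from math import comb

n, k = 7, 3
S = range(1, n + 1)
X = n  # the distinguished element of the combinatorial argument

subsets_with_X = [c for c in combinations(S, k) if X in c]
subsets_without_X = [c for c in combinations(S, k) if X not in c]

assert len(subsets_with_X) == comb(n - 1, k - 1)   # choose the remaining k-1 from n-1 elements
assert len(subsets_without_X) == comb(n - 1, k)    # choose all k from the n-1 elements other than X
assert comb(n, k) == comb(n - 1, k) + comb(n - 1, k - 1)
print(comb(7, 3), comb(6, 3) + comb(6, 2))         # 35 35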

Algebraic proof
We need to show

    (n − 1 choose k) + (n − 1 choose k − 1) = (n choose k).

Let us begin by writing the left-hand side as

    (n − 1)! / (k! (n − 1 − k)!) + (n − 1)! / ((k − 1)! (n − k)!).

Getting a common denominator and simplifying, we have

    (n − 1)! (n − k) / (k! (n − k)!) + (n − 1)! k / (k! (n − k)!) = n! / (k! (n − k)!) = (n choose k).


Generalization
Let n, k_1, k_2, ..., k_p be nonnegative integers with n = k_1 + k_2 + ... + k_p. Then the analogous identity for multinomial coefficients is

    (n choose k_1, k_2, ..., k_p) = (n − 1 choose k_1 − 1, k_2, ..., k_p) + (n − 1 choose k_1, k_2 − 1, ..., k_p) + ... + (n − 1 choose k_1, k_2, ..., k_p − 1).

Sources
This article incorporates material from Pascal's rule on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. This article incorporates material from Pascal's rule proof on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. Merris, Russell. [1]. John Wiley & Sons. 2003 ISBN 978-0-471-26296-1


External links
Central binomial coefficient [2] on PlanetMath Binomial coefficient [3] on PlanetMath Pascal's triangle [4] on PlanetMath

References
[1] http://media.wiley.com/product_data/excerpt/6X/04712629/047126296X.pdf (Combinatorics)
[2] http://planetmath.org/?op=getobj&from=objects&id=5936
[3] http://planetmath.org/?op=getobj&from=objects&id=273
[4] http://planetmath.org/?op=getobj&from=objects&id=4248

Percolation
In physics, chemistry and materials science, percolation (from Lat. percolare, to filter or trickle through) concerns the movement and filtering of fluids through porous materials (for more details see percolation theory). During the last five decades, percolation theory, an extensive mathematical model of percolation, has brought new understanding and techniques to a broad range of topics in physics, materials science, complex networks, epidemiology as well as in geography. Percolation typically exhibits universality. Statistical physics concepts such as scaling theory, renormalization, phase transition, critical phenomena and fractals are useful to characterize percolation properties. Combinatorics is commonly employed to study percolation thresholds. Applications / specific examples include:

    coffee percolation, where the solvent is water, the permeable substance is the coffee grounds, and the soluble constituents are the chemical compounds that give coffee its color, taste, and aroma
    movement of weathered material down on a slope under the earth's surface
    the act of 'upwards' claiming, whereby a claimed subject who is claimed by another entity is funneled to their claimer
    cracking of trees with the presence of two conditions, sunlight and under the influence of pressure
    robustness of networks to random and targeted attacks
    transport in porous media
    epidemic spreading
    surface roughening
    network robustness

References
Harry Kesten, What is percolation? [1] Notices of the AMS, May 2006. Muhammad Sahimi. Applications of Percolation Theory. Taylor & Francis, 1994. ISBN 0-7484-0075-3 (cloth), ISBN 0-7484-0076-1 (paper) Geoffrey Grimmett. Percolation (2. ed). [2] Springer Verlag, 1999. D.Stauffer and A.Aharony. Introduction to Percolation Theory A. Bunde, S. Havlin (Editors) Fractals and Disordered Systems [3], Springer, 1996 S. Kirkpatrick Percolation and conduction Rev. Mod. Phys. 45, 574, 1973 D. Ben-Avraham, S. Havlin Diffusion and Reactions in Fractals and Disordered Systems [4], Cambridge University Press, 2000


External links
Introduction to Percolation Theory: short course by Shlomo Havlin [5]

References
[1] http://www.ams.org/notices/200605/what-is-kesten.pdf
[2] http://www.statslab.cam.ac.uk/~grg/papers/perc/perc.html
[3] http://havlin.biu.ac.il/Shlomo%20Havlin%20books_fds.php
[4] http://havlin.biu.ac.il/Shlomo%20Havlin%20books_d_r.php
[5] http://havlin.biu.ac.il/course3.php

Percolation theory
In mathematics, percolation theory describes the behavior of connected clusters in a random graph. The applications of percolation theory to materials science and other domains are discussed in the article percolation.

Introduction
A representative question (and the source of the name) is as follows. Assume that some liquid is poured on top of some porous material. Will the liquid be able to make its way from hole to hole and reach the bottom? This physical question is modelled mathematically as a three-dimensional network of n × n × n points (or vertices/sites); the connections (or edges/bonds) between each two neighbors may be open (allowing the liquid through) with probability p, or closed with probability 1 − p, and they are assumed to be independent. Therefore, for a given p, what is the probability that an open path exists from the top to the bottom? The behavior for large n is of primary interest. This problem, called now bond-percolation, was introduced in the mathematics literature by Broadbent & Hammersley (1957), and has been studied intensively by mathematicians and physicists since. When a site is occupied with probability p or empty (its edges are also removed) with probability 1 − p, the problem is called "site percolation". The question is the same: for a given p, what is the probability that a path exists between top and bottom? Of course the same questions can be asked for any lattice dimension. As is quite typical, it is actually easier to examine infinite networks than just large ones. In this case the corresponding question is: does an infinite open cluster exist? That is, is there a path of connected points of infinite length "through" the network? By Kolmogorov's zero-one law, for any given p, the probability that an infinite cluster exists is either zero or one. Since this probability is an increasing function of p (proof via coupling argument), there must be a critical p (denoted by pc) below which the probability is always 0 and above which the probability is always 1. In practice, this criticality is very easy to observe. Even for n as small as 100, the probability of an open path from the top to the bottom increases sharply from very close to zero to very close to one in a short span of values of p.
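The sharp transition described here is easy to observe in simulation. The Python sketch below (an illustration, not from the article) uses a two-dimensional site-percolation version of the question for simplicity: each site of an n-by-n grid is open with probability p, and a breadth-first search tests whether an open path joins the top row to the bottom row. The crossing frequency jumps from near 0 to near 1 around the known site-percolation threshold of roughly 0.593:

import random
from collections import deque

def percolates(n, p, seed=None):
    # 2D site percolation on an n-by-n grid: each site is open with probability p;
    # returns True if an open nearest-neighbour path joins the top row to the bottom row
    rng = random.Random(seed)
    open_site = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    queue = deque((0, j) for j in range(n) if open_site[0][j])
    seen = set(queue)
    while queue:
        i, j = queue.popleft()
        if i == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and open_site[ni][nj] and (ni, nj) not in seen:
                seen.add((ni, nj))
                queue.append((ni, nj))
    return False

for p in (0.4, 0.55, 0.7):
    hits = sum(percolates(100, p) for _ in range(20))
    print(p, hits / 20)   # crossing frequency rises sharply near the threshold (~0.593)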


In some cases pc may be calculated explicitly. For example, for the square lattice Z^2 in two dimensions, pc = 1/2, a fact which was an open question for more than 20 years and was finally resolved by Harry Kesten in the early 1980s, see Kesten (1982). A limit case for lattices in many dimensions is given by the Bethe lattice, whose threshold is at pc = 1/(z − 1) for a coordination number z. For most infinite lattice graphs, pc cannot be calculated exactly. For example, pc is not known for bond-percolation on the hypercubic lattice in more than two dimensions.

Universality
The universality principle states that the value of pc is connected to the local structure of the graph, while the behavior of clusters below, at, and above pc is invariant with respect to the local structure, and therefore, in some sense, these are more natural quantities to consider. This universality also means that for the same dimension, independent of the type of the lattice or type of percolation (e.g., bond or site), the fractal dimension of the clusters at pc is the same.

Detail of a bond percolation on the square lattice in two dimensions with percolation probability p = 0.51

Phases
Subcritical and supercritical
The main fact in the subcritical phase is "exponential decay". That is, when p < pc, the probability that a specific point (for example, the origin) is contained in an open cluster of size r decays to zero exponentially in r. This was proved for percolation in three and more dimensions by Menshikov (1986) and independently by Aizenman & Barsky (1987). In two dimensions, it formed part of Kesten's proof that pc = 1/2. The dual graph of the square lattice Z^2 is also the square lattice. It follows that, in two dimensions, the supercritical phase is dual to a subcritical percolation process. This provides essentially full information about the supercritical model with d = 2. The main result for the supercritical phase in three and more dimensions is that, for sufficiently large N, there is an infinite open cluster in the two-dimensional slab Z^2 × [0, N]^(d−2). This was proved by Grimmett & Marstrand (1990). In two dimensions with p < 1/2, there is with probability one a unique infinite closed cluster. Thus the subcritical phase may be described as finite open islands in an infinite closed ocean. When p > 1/2 just the opposite occurs, with finite closed islands in an infinite open ocean. The picture is more complicated when d ≥ 3, since pc < 1/2, and there is coexistence of infinite open and closed clusters for p between pc and 1 − pc.

Critical
The model has a singularity at the critical point p = pc believed to be of power-law type. Scaling theory predicts the existence of critical exponents, depending on the number d of dimensions, that determine the class of the singularity. When d = 2 these predictions are backed up by arguments from quantum field theory and quantum gravitation, and include predicted numerical values for the exponents. Most of these predictions are conjectural except when the number d of dimensions satisfies either d = 2 or d ≥ 19. They include:

    There are no infinite clusters (open or closed).
    The probability that there is an open path from some fixed point (say the origin) to a distance of r decreases polynomially, i.e. is on the order of r^η for some exponent η. The exponent η does not depend on the particular lattice chosen, or on other local parameters. It depends only on the value of the dimension d (this is an instance of the universality principle). η_d decreases from d = 2 until d = 6 and then stays fixed; η_6 = −1 and η_2 = −5/48.
    The shape of a large cluster in two dimensions is conformally invariant.

See Grimmett (1999). In dimension ≥ 19, these facts are largely proved using a technique known as the lace expansion. It is believed that a version of the lace expansion should be valid for 7 or more dimensions, perhaps with implications also for the threshold case of 6 dimensions. The connection of percolation to the lace expansion is found in Hara & Slade (1990). In dimension 2, the first fact ("no percolation in the critical phase") is proved for many lattices, using duality. Substantial progress has been made on two-dimensional percolation through the conjecture of Oded Schramm that the scaling limit of a large cluster may be described in terms of a Schramm–Loewner evolution. This conjecture was proved by Smirnov (2001) in the special case of site percolation on the triangular lattice.


Different models
The first model studied was Bernoulli percolation. In this model all bonds are independent. This model is called bond percolation by physicists.

    A generalization was next introduced as the Fortuin–Kasteleyn random cluster model, which has many connections with the Ising model and other Potts models.
    Bernoulli (bond) percolation on complete graphs is an example of a random graph. The critical probability is p = 1/N.
    Directed percolation, which has connections with the contact process.
    First passage percolation.
    Invasion percolation.
    Percolation with dependency links was introduced by Parshani et al.[1]
    Opinion model.

References
[1] http://havlin.biu.ac.il/Publications.php?keyword=bashan+%26+parshani&year=2011&match=any

Aizenman, Michael; Barsky, David (1987), "Sharpness of the phase transition in percolation models", Communications in Mathematical Physics 108 (3): 489526, Bibcode1987CMaPh.108..489A, doi:10.1007/BF01212322 Bollobas, Bela; Riordan, Oliver (2006), Percolation (http://www.cambridge.org/catalogue/catalogue. asp?isbn=0521872324), Cambridge University Press, doi:10.2277/ Broadbent, Simon; Hammersley, John (1957), "Percolation processes I. Crystals and mazes", Proceedings of the Cambridge Philosophical Society 53: 629641, Bibcode1957PCPS...53..629B, doi:10.1017/S0305004100032680 Bunde A. and Havlin S. (1996), Fractals and Disordered Systems (http://havlin.biu.ac.il/Shlomo Havlin books_fds.php), Springer Cohen R. and Havlin S. (2010), Complex Networks: Structure, Robustness and Function (http://havlin.biu.ac. il/Shlomo Havlin books_com_net.php), Cambridge University Press Grimmett, Geoffrey (1999), Percolation (http://www.statslab.cam.ac.uk/~grg/papers/perc/perc.html), Springer Grimmett, Geoffrey; Marstrand, John (1990), "The supercritical phase of percolation is well behaved", Proceedings of the Royal Society (London), Series A 430 (1879): 439457, Bibcode1990RSPSA.430..439G, doi:10.1098/rspa.1990.0100

Percolation theory Hara, Takashi; Slade, Gordon (1990), "Mean-field critical behaviour for percolation in high dimensions", Communications in Mathematical Physics 128 (2): 333391, Bibcode1990CMaPh.128..333H, doi:10.1007/BF02108785 Kesten, Harry (1982), Percolation theory for mathematicians, Birkhauser Menshikov, Mikhail (1986), "Coincidence of critical points in percolation problems", Soviet Mathematics Doklady 33: 856859 Smirnov, Stanislav (2001), "Critical percolation in the plane: conformal invariance, Cardy's formula, scaling limits", Comptes Rendus de l'Academie des Sciences 333 (3): 239244, Bibcode2001CRASM.333..239S, doi:10.1016/S0764-4442(01)01991-7 Stauffer, Dietrich; Aharony, Anthony (1994), Introduction to Percolation Theory (2nd ed.), CRC Press, ISBN978-0748402533


External links
PercoVIS: OSX program to visualize percolation on networks in real time (http://amath.colorado.edu/student/ larremore/PercoVIS.html) Interactive Percolation (http://ibiblio.org/e-notes/Perc/contents.htm) Kesten, Harry (May 2006), "What Is ... Percolation?" (http://www.ams.org/notices/200605/what-is-kesten. pdf) (PDF), Notices of the American Mathematical Society (Providence, RI: American Mathematical Society) 53 (5): 572573, ISSN1088-9477 Austin, David (July 2008), Percolation: Slipping through the Cracks (http://www.ams.org/featurecolumn/ archive/percolation.html), American Mathematical Society Online course on Percolation Theory (http://nanohub.org/resources/5660) Introduction to Percolation Theory: short course by Shlomo Havlin (http://havlin.biu.ac.il/course3.php)


Perfect ruler
A perfect ruler of length ℓ is a ruler with a subset of the integer markings {0, a_2, ..., a_{n−1}, ℓ} that appear on a regular ruler. The defining criterion of this subset is that there exists an m such that any positive integer k ≤ m can be expressed uniquely as a difference k = a_i − a_j for some a_i, a_j in the subset. This is referred to as an m-perfect ruler.

A 4-perfect ruler of length 7 is given by (0, 1, 3, 7). To verify this, we need to show that every number from 1 to 4 can be expressed uniquely as a difference of two numbers in the above set:

    1 = 1 − 0
    2 = 3 − 1
    3 = 3 − 0
    4 = 7 − 3

An optimal perfect ruler is one where, for a fixed value of n, the length ℓ is minimized.
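Under the definition as reconstructed above, checking that a given set of marks is m-perfect is a matter of listing all pairwise differences. A minimal Python sketch (illustrative only):

from itertools import combinations

def is_m_perfect(marks, m):
    # marks: the integer markings of the ruler; checks that every integer 1..m arises
    # exactly once as a difference of two marks
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return all(diffs.count(k) == 1 for k in range(1, m + 1))

print(is_m_perfect([0, 1, 3, 7], 4))   # True: 1 = 1-0, 2 = 3-1, 3 = 3-0, 4 = 7-3
print(is_m_perfect([0, 1, 2, 3], 3))   # False: the difference 1 arises three times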

Permanent is sharp-P-complete
In a 1979 paper Leslie Valiant proved[1] that the problem of computing the permanent of a matrix is #P-hard, and remains #P-complete even if the matrix is restricted to have entries that are all 0 or 1. This result is sometimes known as Valiant's theorem[2] and is considered a seminal result in computational complexity theory.[3] [4] Valiant's 1979 paper also introduced #P as a complexity class.[5]

Significance
One reason for interest in the computational complexity of the permanent is that it provides an example of a problem where constructing a single solution can be done efficiently but where counting all solutions is hard.[6] As Papadimitriou writes in his book Computational Complexity:

The most impressive and interesting #P-complete problems are those for which the corresponding search problem can be solved in polynomial time. The PERMANENT problem for 01 matrices, which is equivalent to the problem of counting perfect matchings in a bipartite graph [...] is the classic example here.[2]

Specifically, computing the permanent (shown to be difficult by Valiant's results) is closely connected with finding a perfect matching in a bipartite graph, which is solvable in polynomial time by the HopcroftKarp algorithm.[7] [8] For a bipartite graph with 2n vertices partitioned into two parts with n vertices each, the number of perfect matchings equals the permanent of its biadjacency matrix and the square of the number of perfect matchings is equal to the permanent of its adjacency matrix.[9] Since any 01 matrix is the biadjacency matrix of some bipartite graph, Valiant's theorem implies[9] that the problem of counting the number of perfect matchings in a bipartite graph is #P-complete, and in conjunction with Toda's theorem this implies that it is hard for the entire polynomial hierarchy.[10] [11] The computational complexity of the permanent also has some significance in other aspects of complexity theory: it is not known whether NC equals P (informally, whether every polynomially-solvable problem can be solved by a polylogarithmic-time parallel algorithm) and Ketan Mulmuley has suggested an approach to resolving this question that relies on writing the permanent as the determinant of a matrix. Hartmann [12] proved a generalization of Valiant's theorem concerning the complexity of computing immanants of matrices that generalize both the determinant and the permanent.
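The connection between permanents and perfect matchings can be illustrated with a tiny example. The Python sketch below (an illustration added here; it uses the naive expansion over all permutations, so it runs in exponential time, consistent with the hardness discussed above, and Ryser's formula would be faster but still exponential) computes the permanent of a biadjacency matrix and thereby counts perfect matchings:

from itertools import permutations
from math import prod

def permanent(A):
    # permanent by direct expansion over all permutations of the column indices
    n = len(A)
    return sum(prod(A[i][sigma[i]] for i in range(n)) for sigma in permutations(range(n)))

# biadjacency matrix of a small bipartite graph; its permanent counts the perfect matchings
B = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
print(permanent(B))  # 2: the two perfect matchings of this bipartite graph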


Ben-Dor and Halevi's proof


Below, the proof that computing the permanent of a 01-matrix is #P-complete is described. It mainly follows the proof by Ben-Dor & Halevi (1993).[13]

Overview
Any square matrix A can be viewed as the adjacency matrix of a directed graph, with the entry in row i and column j representing the weight of the edge from vertex i to vertex j. Then, the permanent of A is equal to the sum of the weights of all cycle-covers of the graph; this is a graph-theoretic interpretation of the permanent. #SAT, a function problem related to the Boolean satisfiability problem, is the problem of counting the number of satisfying assignments of a given Boolean formula. It is a #P-complete problem (by definition), as any NP machine can be encoded into a Boolean formula by a process similar to that in Cook's theorem, such that the number of satisfying assignments of the Boolean formula is equal to the number of accepting paths of the NP machine. Any formula in SAT can be rewritten as a formula in 3-CNF form preserving the number of satisfying assignments, and so #SAT and #3SAT are equivalent and #3SAT is #P-complete as well. In order to prove that 01-Permanent is #P-hard, it is therefore sufficient to show that the number of satisfying assignments for a 3-CNF formula can be expressed succinctly as a function of the permanent of a matrix that contains only the values 0 and 1. This is usually accomplished in two steps:
1. Given a 3-CNF formula φ, construct a directed integer-weighted graph G, such that the sum of the weights of cycle covers of G (or equivalently, the permanent of its adjacency matrix) is equal to the number of satisfying assignments of φ. This establishes that Permanent is #P-hard.
2. Through a series of reductions, reduce Permanent to 01-Permanent, the problem of computing the permanent of a matrix with all entries 0 or 1. This establishes that 01-Permanent is #P-hard as well.

Constructing the integer graph


Given a 3CNF-formula that 1. each satisfying assignment for will have a corresponding set of cycle covers in where the sum of the weights of cycle covers in this set will be ; and 2. all other cycle covers in will have weights summing to 0. Thus if is the number of satisfying assignments for , the permanent of this graph will be whose permanent is . with m clauses and n variables, one can construct a weighted, directed graph such

(Valiant's original proof constructs a graph with entries in

where is "twice the number of occurrences of literals in " .) The graph construction makes use of a component that is treated as a "black box." To keep the explanation simple, the properties of this component are given without actually defining the structure of the component. To specify G, one first constructs a variable node in G for each of the n variables in . Additionally, for each of the m clauses in , one constructs a clause component Cj in G that functions as a sort of "black box." All that needs to be noted about Cj is that it has three input edges and three output edges. The input edges come either from variable nodes or from previous clause components (e.g., Co for some o<j) and the output edges go either to variable nodes or to later clause components (e.g., Co for some ). The first input and output edges correspond with the first variable of the clause j, and so on. Thus far, all of the nodes that will appear in the graph G have been specified. Next, one would consider the edges. For each variable (F-cycle) in component corresponding to of , one makes a true cycle (T-cycle) and a false cycle and draw an edge to the clause appears. If is the first variable in the clause of . To create the T-cycle, one starts at the variable node for that corresponds to the first clause in which , this edge will be the first input edge of in which

, and so on. Thence, draw an edge to the next appears, connecting it from the appropriate

clause component corresponding to the next clause of

output edge of


to the appropriate input edge of the next clause component, and so on. After the last clause in which 's variable node. Of

appears, we connect the appropriate output edge of the corresponding clause component back to

course, this completes the cycle. To create the F-cycle, one would follow the same procedure, but connect 's variable node to those clause components in which ~ appears, and finally back to 's variable node. All of these edges outside the clause components are termed external edges, all of which have weight 1. Inside the clause components, the edges are termed internal edges. Every external edge is part of a T-cycle or an F-cycle (but not boththat would force inconsistency). Note that the graph is of size linear in , so the construction can be done in polytime (assuming that the clause components do not cause trouble). Notable properties of the graph A useful property of is that its cycle covers correspond to variable assignments for . For a cycle cover Z of

, one can say that Z induces an assignment of values for the variables in

just in case Z contains all of the

external edges in 's T-cycle and none of the external edges in 's F-cycle for all variables that the assignment makes true, and vice versa for all variables that the assignment makes false. Although any given cycle cover Z need not induce an assignment for , any one that does induces exactly one assignment, and the same assignment induced depends only on the external edges of Z. The term Z is considered an incomplete cycle cover at this stage, because one talks only about its external edges, M. In the section below, one considers M-completions to show that one has a set of cycle covers corresponding to each M that have the necessary properties. The sort of Z's that don't induce assignments are the ones with cycles that "jump" inside the clause components. That is, if for every , at least one of 's input edges is in Z, and every output edge of the clause components is in Z when the corresponding input edge is in Z, then Z is proper with respect to each clause component, and Z will produce a satisfying assignment for . This is because proper Z's contain either the complete T-cycle or the complete F-cycle of every variable in as well as each including edges going into and coming out of each and ensure that each clause , and any other Z's have clause component. Thus, these Z's assign either true or false (but never both) to each is satisfied. Further, the sets of cycle covers corresponding to all such Z's have weight weight

. The reasons for this depend on the construction of the clause components, and are outlined below.

The clause component To understand the relevant properties of the clause components cover of , one needs the notion of an M-completion. A

cycle cover Z induces a satisfying assignment just in case its external edges satisfy certain properties. For any cycle , consider only its external edges, the subset M. Let M be a set of external edges. A set of internal is a cycle cover of . Further, denote the set of all , the cycle edges L is an M-completion just in case

M-completions by and the set of all resulting cycle covers of by . Recall that construction of was such that each external edge had weight 1, so the weight of

covers resulting from any M, depends only on the internal edges involved. We add here the premise that the construction of the clause components is such that the sum over possible M-completions of the weight of the internal edges in each clause component, where M is proper relative to the clause component, is 12. Otherwise the weight of the internal edges is 0. Since there are m clause components, and the selection of sets of internal edges, L, within each clause component is independent of the selection of sets of internal edges in other clause components, so one can multiply everything to get the weight of . So, the weight of each , where M induces a satisfying assignment, is . Further, where M does not induce a satisfying assignment, M is not proper with respect to some , so the product of the weights of internal edges in will be . The clause component is a weighted, directed graph with 7 nodes with edges weighted and nodes arranged to yield the properties specified above, and is given in Appendix A of Ben-Dor and Halevi (1993). Note that the internal

Permanent is sharp-P-complete edges here have weights drawn from the set ; not all edges have 01 weights.

282

Finally, since the sum of weights of all the sets of cycle covers inducing any particular satisfying assignment is 12m, and the sum of weights of all other sets of cycle covers is 0, one has Perm(G)=12m(#). The following section reduces computing Perm( ) to the permanent of a 01 matrix.

01-Matrix
The above section has shown that Permanent is #P-hard. Through a series of reductions, any permanent can be reduced to the permanent of a matrix with entries only 0 or 1. This will prove that 01-Permanent is #P-hard as well. Reduction to a non-negative matrix Using modular arithmetic, convert an integer matrix A into an equivalent non-negative matrix permanent of Let be an can be computed easily from the permanent of , as follows: . integer matrix where no entry has a magnitude larger than . The choice of Q is due to the fact that so that the

Compute Compute Compute If The transformation of into

then Perm(A) = P. Otherwise is polynomial in and , since the number of bits required to represent

is polynomial in and An example of the transformation and why it works is given below.

. Here, , , and , so . Thus

Note how the elements are non-negative because of the modular arithmetic. It is simple to compute the permanent

so

. Then

, so

Reduction to powers of 2

Note that any number can be decomposed into a sum of powers of 2; for example, 13 = 8 + 4 + 1 = 2^3 + 2^2 + 2^0.


This fact is used to convert a non-negative matrix into an equivalent matrix whose entries are all powers of 2. The reduction can be expressed in terms of graphs equivalent to the matrices. Let be a -node weighted directed graph with non-negative weights, where largest weight is is converted into an equivalent edge with weights in powers of 2 as follows: , This can be seen graphically in the Figure 1. The subgraph that replaces the existing edge contains edges. To prove that this produces an equivalent graph correspondence between the cycle covers of Consider some cycle-cover If an edge is not in in . and match. , there must be a path from and that has the same permanent as the original, one must show the . nodes and . Every edge with weight

, then to cover all the nodes in the new sub graph, one must use the self-loops. Since all to , where u and

self-loops have a weight of 1, the weight of cycle-covers in If is in , then in all the corresponding cycle-covers in

v are the nodes of edge e. From the construction, one can see that there are different paths and sum of all these paths equal to the weight of the edge in the original graph . So the weight of corresponding cycle-covers in and match. is polynomial in and . Note that the size of

Permanent is sharp-P-complete Reduction to 01 The objective here is to reduce a matrix whose entries are powers of 2 into an equivalent matrix containing only zeros and ones (i.e. a directed graph where each edge has a weight of 1). Let G be a -node directed graph where all the weights on edges are powers of two. Construct a graph, , where the weight of each edge is 1 and Perm(G) = Perm(G'). The size of this new graph, G', is polynomial in and where the maximal weight of any edge in graph G is . This reduction is done locally at each edge in G that has a weight larger than 1. Let be an edge in G with a weight . It is replaced by a subgraph Each edge in Consider some cycle-cover If an original edge in . , one cannot create a path through the new subgraph . The in such a case is for each node in the subgraph to take its self-loop. As that is made up of nodes and edges as seen in Figure 2. has a weight of 1. Thus, the resulting graph G' contains only edges with a weight of 1. from graph G is not in


only way to form a cycle cover over

each edge has a weight of one, the weight of the resulting cycle cover is equal to that of the original cycle cover. However, if the edge in G is a part of the cycle cover then in any cycle cover of there must be a path from to in the subgraph. At each step down the subgraph there are two choices one can make to form such a path. One must make this choice times, resulting in possible paths from to . Thus, there are possible cycle covers and since each path has a weight of 1, the sum of the weights of all these cycle covers equals the weight of the original cycle cover.

Aaronson's proof
Aaronson[14] has proved #P-completeness of permanent using quantum methods.

References
[1] Leslie G. Valiant (1979). "The Complexity of Computing the Permanent". Theoretical Computer Science (Elsevier) 8 (2): 189201. doi:10.1016/0304-3975(79)90044-6. [2] Christos H. Papadimitriou. Computational Complexity. Addison-Wesley, 1994. ISBN 0-201-53082-1. Page 443 [3] Allen Kent, James G. Williams, Rosalind Kent and Carolyn M. Hall (editors). Encyclopedia of microcomputers. (http:/ / books. google. com/ books?id=uDegDR4ikTQC& pg=PA34& dq="permanent+ of+ a+ matrix"+ valiant& lr=& as_brr=3& ei=M8NKSa7LH4G4M8mVyNoI)Marcel Dekker, 1999. ISBN 978-0-8247-2722-2; p. 34 [4] Jin-Yi Cai, A. Pavan and D. Sivakumar, On the Hardness of Permanent. (http:/ / books. google. com/ books?id=fIAMFv4doooC& pg=PA90& dq="permanent+ of+ a+ matrix"+ valiant& as_brr=3& ei=h8BKScClJYOUMtTP6LEO) In: STACS, '99: 16th Annual Symposium on Theoretical Aspects of Computer Science, Trier, Germany, March 46, 1999 Proceedings. pp. 9099. Springer-Verlag, New York, LLC Pub. Date: October 2007. ISBN 978-3-540-65691-3; p. 90. [5] Lance Fortnow. My Favorite Ten Complexity Theorems of the Past Decade. (http:/ / books. google. com/ books?id=JWOA9M9CcX8C& pg=PA265& dq=permanent+ Valiant) Foundations of Software Technology and Theoretical Computer Science: Proceedings of the 14th Conference, Madras, India, December 1517, 1994. P. S. Thiagarajan (editor), pp. 256275, Springer-Verlag, New York, 2007. ISBN 978-3-540-58715-6; p. 265 [6] Peter Burgisser. Completeness and Reduction in Algebraic Complexity Theory. (http:/ / books. google. com/ books?id=XBlnjSW1VekC& pg=PA2& dq=permanent+ Valiant#PPA2,M1) Springer-Verlag, New York, 2000. ISBN 978-3-540-66752-0; p. 2 [7] John E. Hopcroft, Richard M. Karp: An Algorithm for Maximum Matchings in Bipartite Graphs. SIAM J. Comput. 2(4), 225231

(1973) [8] Cormen, Thomas H.; Leiserson, Charles E., Rivest, Ronald L., Stein, Clifford (2001) [1990]. "26.5: The relabel-to-front algorithm". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp.pp. 696697. ISBN0-262-03293-7. [9] Dexter Kozen. The Design and Analysis of Algorithms. (http:/ / books. google. com/ books?id=L_AMnf9UF9QC& pg=PA141& dq="permanent+ of+ a+ matrix"+ valiant& as_brr=3& ei=h8BKScClJYOUMtTP6LEO#PPA142,M1) Springer-Verlag, New York, 1991.

ISBN 978-0-387-97687-7; pp. 141142 [10] Seinosuke Toda. PP is as Hard as the Polynomial-Time Hierarchy. (http:/ / siamdl. aip. org/ getabs/ servlet/ GetabsServlet?prog=normal& id=SMJCAT000020000005000865000001& idtype=cvips& gifs=Yes) SIAM Journal on Computing, Volume 20 (1991), Issue 5, pp. 865877. [11] 1998 Gdel Prize. Seinosuke Toda (http:/ / sigact. acm. org/ prizes/ godel/ 1998. html) [12] W. Hartmann. On the complexity of immanants. (http:/ / www. informaworld. com/ smpp/ content~content=a778402740~db=all) Linear and Multilinear Algebra 18 (1985), no. 2, pp. 127140. [13] Ben-Dor, Amir; Halevi, Shai (1993). "Proceedings of the 2nd Israel Symposium on the Theory and Computing Systems" (http:/ / citeseer. ist. psu. edu/ ben-dor95zeroone. html). pp. 108117. .. [14] S. Aaronson, A Linear-Optical Proof that the Permanent is #P-Hard (http:/ / eccc. hpi-web. de/ report/ 2011/ 043/ download)


Permutation
In mathematics, the notion of permutation is used with several slightly different meanings, all related to the act of permuting (rearranging) objects or values. Informally, a permutation of a set of objects is an arrangement of those objects into a particular order. For example, there are six permutations of the set {1,2,3}, namely (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2), and (3,2,1). One might define an anagram of a word as a permutation of its letters. The study of permutations in this sense generally belongs to the field of combinatorics. The number of permutations of n distinct objects is n(n − 1)(n − 2)···2·1, which number is called "n factorial" and written "n!". Permutations occur, in more or less prominent ways, in almost every domain of mathematics. They often arise when different orderings on certain finite sets are considered, possibly only because one wants to ignore such orderings and needs to know how many configurations are thus identified. For similar reasons permutations arise in the study of sorting algorithms in computer science. In algebra and particularly in group theory, a permutation of a set S is defined as a bijection from S to itself (i.e., a map S → S for which every element of S occurs exactly once as image value). This is related to the rearrangement of S in which each element s takes the place of the corresponding f(s). The collection of such permutations form a symmetric group. The key to its structure is the possibility to compose permutations: performing two given rearrangements in succession defines a third rearrangement, the composition. Permutations may act on composite objects by rearranging their components, or by certain replacements (substitutions) of symbols. In elementary combinatorics, the name "permutations and combinations" refers to two related problems, both counting possibilities to select k distinct elements from a set of n elements, where for k-permutations the order of selection is taken into account, but for k-combinations it is ignored. However k-permutations do not correspond to permutations as discussed in this article (unless k = n).

The 6 permutations of 3 balls


History
The rule to determine the number of permutations of n objects was known in Hindu culture at least as early as around 1150: the Lilavati by the Indian mathematician Bhaskara II contains a passage that translates to "The product of multiplication of the arithmetical series beginning and increasing by unity and continued to the number of places, will be the variations of number with specific figures."[1] A first case in which seemingly unrelated mathematical questions were studied with the help of permutations occurred around 1770, when Joseph Louis Lagrange, in the study of polynomial equations, observed that properties of the permutations of the roots of an equation are related to the possibilities to solve it. This line of work ultimately resulted, through the work of Évariste Galois, in Galois theory, which gives a complete description of what is possible and impossible with respect to solving polynomial equations (in one unknown) by radicals. In modern mathematics there are many similar situations in which understanding a problem requires studying certain permutations related to it.

Generalities
The notion of permutation is used in the following contexts.

In group theory
In group theory and related areas, one considers permutations of arbitrary sets, even infinite ones. A permutation of a set S is a bijection from S to itself. This allows for permutations to be composed, which allows the definition of groups of permutations. If S is a finite set of n elements, then there are n! permutations of S.

In combinatorics
In combinatorics, a permutation is usually understood to be a sequence containing each element from a finite set once, and only once. The concept of sequence is distinct from that of a set, in that the elements of a sequence appear in some order: the sequence has a first element (unless it is empty), a second element (unless its length is less than 2), and so on. In contrast, the elements in a set have no order; {1, 2, 3} and {3, 2, 1} are different ways to denote the same set. In this sense a permutation of a finite set S of n elements is equivalent to a bijection from {1, 2, ... , n} to S (in which any i is mapped to the i-th element of the sequence), or to a choice of a total ordering on S (for which x < y if x comes before y in the sequence). In this sense there are also n! permutations of S. There is also a weaker meaning of the term "permutation" that is sometimes used in elementary combinatorics texts, designating those sequences in which no element occurs more than once, but without the requirement to use all elements from a given set. Indeed this use often involves considering sequences of a fixed length k of elements taken from a given set of size n. These objects are also known as sequences without repetition, a term that avoids confusion with the other, more common, meanings of "permutation". The number of such k-permutations of n is denoted variously by such symbols as nPk, Pn,k, or P(n,k), and its value is given by the product

    n(n − 1)(n − 2) ··· (n − k + 1),

which is 0 when k > n, and otherwise is equal to

    n! / (n − k)!.

The product is well defined without the assumption that n is a non-negative integer, and is of importance outside combinatorics as well; it is known as the Pochhammer symbol (n)_k or as the k-th falling factorial power of n.

Permutations of multisets

If M is a finite multiset, then a multiset permutation is a sequence of elements of M in which each element appears exactly as often as is its multiplicity in M. If the multiplicities of the elements of M (taken in some order) are m_1, m_2, ..., m_l and their sum (i.e., the size of M) is n, then the number of multiset permutations of M is given by the multinomial coefficient

    n! / (m_1! m_2! ··· m_l!).
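Both counting formulas are easy to evaluate and to cross-check by brute force for small cases. The following Python sketch (illustrative; the function names are chosen here) does so:

from math import factorial, prod
from itertools import permutations

def k_permutations(n, k):
    # number of sequences of k distinct elements taken from an n-element set: n!/(n-k)!
    return 0 if k > n else factorial(n) // factorial(n - k)

def multiset_permutations(multiplicities):
    # number of distinct orderings of a multiset with the given multiplicities
    n = sum(multiplicities)
    return factorial(n) // prod(factorial(m) for m in multiplicities)

print(k_permutations(5, 2))             # 20
print(multiset_permutations((2, 2)))    # 6 = 4!/(2! 2!)
print(len(set(permutations("AABB"))))   # 6, agreeing with the formula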

Permutations in group theory


In group theory, the term permutation of a set means a bijective map, or bijection, from that set onto itself. The set of all permutations of any given set S forms a group, with composition of maps as product and the identity as neutral element. This is the symmetric group of S. Up to isomorphism, this symmetric group only depends on the cardinality of the set, so the nature of elements of S is irrelevant for the structure of the group. Symmetric groups have been studied most in the case of finite sets, in which case one can assume without loss of generality that S = {1, 2, ..., n} for some natural number n, which defines the symmetric group of degree n, written Sn. Any subgroup of a symmetric group is called a permutation group. In fact by Cayley's theorem any group is isomorphic to some permutation group, and every finite group to a subgroup of some finite symmetric group. However, permutation groups have more structure than abstract groups, allowing for instance to define the cycle type of an element of a permutation group; different realizations of a group as a permutation group need not be equivalent for this additional structure. For instance S3 is naturally a permutation group, in which any transposition has cycle type (2,1), but the proof of Cayley's theorem realizes S3 as a subgroup of S6 (namely the permutations of the 6 elements of S3 itself), in which permutation group transpositions get cycle type (2,2,2). So in spite of Cayley's theorem, the study of permutation groups differs from the study of abstract groups.

Notation
There are three main notations for permutations of a finite set S. In two-line notation, one lists the elements of S in the first row, and for each one its image under the permutation below it in the second row. For instance, a particular permutation of the set {1,2,3,4,5} can be written as:
  σ =
    1 2 3 4 5
    2 5 4 3 1
this means that σ satisfies σ(1)=2, σ(2)=5, σ(3)=4, σ(4)=3, and σ(5)=1. In one-line notation, one gives only the second row of this array, so the one-line notation for the permutation above is 25431. (It is typical to use commas to separate these entries only if some have two or more digits.) Cycle notation, the third method of notation, focuses on the effect of successively applying the permutation. It expresses the permutation as a product of cycles corresponding to the orbits (with at least two elements) of the permutation; since distinct orbits are disjoint, this is loosely referred to as "the decomposition into disjoint cycles" of the permutation. It works as follows: starting from some element x of S with σ(x) ≠ x, one writes the sequence (x σ(x) σ(σ(x)) ...) of successive images under σ, until the image would be x, at which point one instead closes the parenthesis. The set of values written down forms the orbit (under σ) of x, and the parenthesized expression gives the

corresponding cycle of σ. One then continues choosing an element y of S that is not in the orbit already written down, and such that σ(y) ≠ y, and writes down the corresponding cycle, and so on until all elements of S either belong to a cycle written down or are fixed points of σ. Since for every new cycle the starting point can be chosen in different ways, there are in general many different cycle notations for the same permutation; for the example above one has for instance

  σ = (1 2 5)(3 4) = (3 4)(1 2 5) = (3 4)(5 1 2).


Each cycle (x1 x2 ... xl) of σ denotes a permutation in its own right, namely the one that takes the same values as σ on this orbit (so it maps xi to xi+1 for i < l, and xl to x1), while mapping all other elements of S to themselves. The size l of the orbit is called the length of the cycle. Distinct orbits of σ are by definition disjoint, so the corresponding cycles are easily seen to commute, and σ is the product of its cycles (taken in any order). Therefore the concatenation of cycles in the cycle notation can be interpreted as denoting composition of permutations, whence the name "decomposition" of the permutation. This decomposition is essentially unique: apart from reordering the cycles in the product, there are no other ways to write σ as a product of cycles (possibly unrelated to the cycles of σ) that have disjoint orbits. The cycle notation is less unique, since each individual cycle can be written in different ways, as in the example above where (5 1 2) denotes the same cycle as (1 2 5) (but (5 2 1) would denote a different permutation). An orbit of size 1 (a fixed point x in S) has no corresponding cycle, since that permutation would fix x as well as every other element of S; in other words it would be the identity, independently of x. It is possible to include (x) in the cycle notation for σ to stress that σ fixes x (and this is even standard in combinatorics, as described in cycles and fixed points), but this does not correspond to a factor in the (group theoretic) decomposition of σ. If the notion of "cycle" were taken to include the identity permutation, then this would spoil the uniqueness (up to order) of the decomposition of a permutation into disjoint cycles. The decomposition into disjoint cycles of the identity permutation is an empty product; its cycle notation would be empty, so some other notation like e is usually used instead. Cycles of length two are called transpositions; such permutations merely exchange the place of two elements.
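A small Python sketch of this decomposition (not from the original article; the function name is illustrative) computes the cycles of length at least 2 of a permutation given in one-line notation.

    def cycle_decomposition(perm):
        """Cycles (length >= 2) of a permutation given in one-line notation on 1..n."""
        n = len(perm)
        seen = [False] * (n + 1)
        cycles = []
        for start in range(1, n + 1):
            if seen[start] or perm[start - 1] == start:
                seen[start] = True          # fixed points get no cycle
                continue
            cycle, x = [], start
            while not seen[x]:
                seen[x] = True
                cycle.append(x)
                x = perm[x - 1]             # follow x -> sigma(x)
            cycles.append(tuple(cycle))
        return cycles

    # The example permutation 2 5 4 3 1 decomposes as (1 2 5)(3 4).
    print(cycle_decomposition([2, 5, 4, 3, 1]))   # [(1, 2, 5), (3, 4)]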

Product and inverse


The product στ of two permutations σ and τ is defined as their composition as functions; in other words στ is the function that maps any element x of the set to σ(τ(x)). Note that the rightmost permutation is applied to the argument first, because of the way function application is written. Some authors prefer the leftmost factor acting first, but to that end permutations must be written to the right of their argument, for instance as an exponent, where σ acting on x is written xσ; then the product is defined by x(στ) = (xσ)τ. However this gives a different rule for multiplying permutations; this article uses the definition where the rightmost permutation is applied first. Since the composition of two bijections always gives another bijection, the product of two permutations is again a permutation. Since function composition is associative, so is the product operation on permutations: (στ)ρ = σ(τρ). Therefore, products of more than two permutations are usually written without adding parentheses to express grouping; they are also usually written without a dot or other sign to indicate multiplication. The identity permutation, which maps every element of the set to itself, is the neutral element for this product. In two-line notation, the identity is

  1 2 3 ... n
  1 2 3 ... n

Since bijections have inverses, so do permutations, and the inverse σ−1 of σ is again a permutation. Explicitly, whenever σ(x)=y one also has σ−1(y)=x. In two-line notation the inverse can be obtained by interchanging the two lines (and sorting the columns if one wishes the first line to be in a given order). For instance, the inverse of the example permutation 25431 above is 51432.


In cycle notation one can reverse the order of the elements in each cycle to obtain a cycle notation for its inverse. Having an associative product, a neutral element, and inverses for all its elements makes the set of all permutations of S into a group, called the symmetric group of S. Every permutation of a finite set can be expressed as a product of transpositions. Moreover, although many such expressions for a given permutation may exist, there can never be among them both expressions with an even number and expressions with an odd number of transpositions. All permutations are then classified as even or odd, according to the parity of the number of transpositions in any such expression. Multiplying permutations written in cycle notation follows no easily described pattern, and the cycles of the product can be entirely different from those of the permutations being composed. However the cycle structure is preserved in the special case of conjugating a permutation σ by another permutation π, which means forming the product πσπ−1. Here the cycle notation of the result can be obtained by taking the cycle notation for σ and applying π to all the entries in it.[2] One can represent a permutation of {1, 2, ..., n} as an n×n matrix. There are two natural ways to do so, but only one for which multiplication of matrices corresponds to multiplication of permutations in the same order: this is the one that associates to σ the matrix M whose entry Mi,j is 1 if i = σ(j), and 0 otherwise. The resulting matrix has exactly one entry 1 in each column and in each row, and is called a permutation matrix; composition of permutations then corresponds to multiplication of permutation matrices. A list of these matrices for the permutations of 4 elements is given at [3].
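A minimal Python sketch of these operations (not from the original article; function names are illustrative), using one-line notation on {1, ..., n} and the convention that the rightmost factor acts first:

    def compose(sigma, pi):
        """Product sigma·pi: apply pi first, then sigma."""
        return [sigma[pi[i] - 1] for i in range(len(pi))]

    def inverse(sigma):
        inv = [0] * len(sigma)
        for i, v in enumerate(sigma, start=1):
            inv[v - 1] = i                  # sigma(i) = v  means  sigma^{-1}(v) = i
        return inv

    def permutation_matrix(sigma):
        """Matrix with a 1 in row i, column j exactly when i = sigma(j) (0-indexed)."""
        n = len(sigma)
        return [[1 if sigma[j] - 1 == i else 0 for j in range(n)] for i in range(n)]

    sigma = [2, 5, 4, 3, 1]
    print(inverse(sigma))                   # [5, 1, 4, 3, 2], i.e. 51432
    print(compose(sigma, inverse(sigma)))   # [1, 2, 3, 4, 5], the identity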

Permutations in combinatorics
In combinatorics a permutation of a set S with n elements is a listing of the elements of S in some order (each element occurring exactly once). This can be defined formally as a bijection from the set {1, 2, ..., n} to S. Note that if S equals {1, 2, ..., n}, then this definition coincides with the definition in group theory. More generally one could use instead of {1, 2, ..., n} any set equipped with a total ordering of its elements. One combinatorial property that is related to the group theoretic interpretation of permutations, and can be defined without using a total ordering of S, is the cycle structure of a permutation σ. It is the partition of n describing the lengths of the cycles of σ. Here there is a part "1" in the partition for every fixed point of σ. A permutation that has no fixed point is called a derangement. Other combinatorial properties however are directly related to the ordering of S, and to the way the permutation relates to it. Here are a number of such properties.


Ascents, descents and runs


An ascent of a permutation σ of n is any position i < n where the following value is bigger than the current one. That is, if σ = σ1σ2...σn, then i is an ascent if σi < σi+1. For example, the permutation 3452167 has ascents (at positions) 1, 2, 5, 6. Similarly, a descent is a position i < n with σi > σi+1, so every i with 1 ≤ i < n either is an ascent or is a descent of σ. The number of permutations of n with k ascents is the Eulerian number A(n, k); this is also the number of permutations of n with k descents.[4] An ascending run of a permutation is a nonempty increasing contiguous subsequence of the permutation that cannot be extended at either end; it corresponds to a maximal sequence of successive ascents (the latter may be empty: between two successive descents there is still an ascending run of length 1). By contrast an increasing subsequence of a permutation is not necessarily contiguous: it is an increasing sequence of elements obtained from the permutation by omitting the values at some positions. For example, the permutation 2453167 has the ascending runs 245, 3, and 167, while it has an increasing subsequence 2367. If a permutation has k − 1 descents, then it must be the union of k ascending runs. Hence, the number of permutations of n with k ascending runs is the same as the number of permutations with k − 1 descents.[5]
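These statistics are easy to compute directly; the following Python sketch (not from the original article) returns the ascent positions, descent positions, and maximal ascending runs of a permutation.

    def ascents_descents_runs(perm):
        """Ascent/descent positions and the maximal ascending runs of a permutation."""
        ascents = [i + 1 for i in range(len(perm) - 1) if perm[i] < perm[i + 1]]
        descents = [i + 1 for i in range(len(perm) - 1) if perm[i] > perm[i + 1]]
        runs, run = [], [perm[0]]
        for prev, cur in zip(perm, perm[1:]):
            if cur > prev:
                run.append(cur)             # extend the current ascending run
            else:
                runs.append(run)            # a descent ends the run
                run = [cur]
        runs.append(run)
        return ascents, descents, runs

    print(ascents_descents_runs([3, 4, 5, 2, 1, 6, 7]))
    # ([1, 2, 5, 6], [3, 4], [[3, 4, 5], [2], [1, 6, 7]]): 2 descents, 3 ascending runs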

Inversions
An inversion of a permutation σ is a pair (i, j) of positions where the entries of the permutation are in the opposite order: i < j but σi > σj.[6] So a descent is just an inversion at two adjacent positions. For example, the permutation σ = 23154 has three inversions: (1,3), (2,3), (4,5), for the pairs of entries (2,1), (3,1), (5,4). Sometimes an inversion is defined as the pair of values (σi, σj) itself whose order is reversed; this makes no difference for the number of inversions, and this pair (reversed) is also an inversion in the above sense for the inverse permutation σ−1. The number of inversions is an important measure for the degree to which the entries of a permutation are out of order; it is the same for σ and for σ−1. To bring a permutation with k inversions into order (i.e., transform it into the identity permutation), by successively applying (right-multiplication by) adjacent transpositions, is always possible and requires a sequence of k such operations. Moreover any reasonable choice for the adjacent transpositions will work: it suffices to choose at each step a transposition of i and i + 1 where i is a descent of the permutation as modified so far (so that the transposition will remove this particular descent, although it might create other descents). This is so because applying such a transposition reduces the number of inversions by 1; also note that as long as this number is not zero, the permutation is not the identity, so it has at least one descent. Bubble sort and insertion sort can be interpreted as particular instances of this procedure to put a sequence into order. Incidentally this procedure proves that any permutation σ can be written as a product of adjacent transpositions; for this one may simply reverse any sequence of such transpositions that transforms σ into the identity. In fact, by enumerating all sequences of adjacent transpositions that would transform σ into the identity, one obtains (after reversal) a complete list of all expressions of minimal length writing σ as a product of adjacent transpositions. The number of permutations of n with k inversions is expressed by a Mahonian number;[7] it is the coefficient of X^k in the expansion of the product

  (1 + X)(1 + X + X^2) ··· (1 + X + X^2 + ··· + X^(n−1)),

which is also known (with q substituted for X) as the q-factorial [n]q!.
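A short Python sketch (not from the original article) lists the inversions of a permutation and expands the product above to obtain the Mahonian numbers for a given n.

    def inversions(perm):
        """All inversions (i, j): positions i < j with perm[i] > perm[j]."""
        n = len(perm)
        return [(i + 1, j + 1) for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j]]

    def mahonian_numbers(n):
        """Coefficients of the q-factorial [n]_q!; entry k counts permutations of n with k inversions."""
        coeffs = [1]
        for i in range(2, n + 1):
            new = [0] * (len(coeffs) + i - 1)       # multiply by 1 + q + ... + q^(i-1)
            for k, c in enumerate(coeffs):
                for shift in range(i):
                    new[k + shift] += c
            coeffs = new
        return coeffs

    print(inversions([2, 3, 1, 5, 4]))   # [(1, 3), (2, 3), (4, 5)], as in the example
    print(mahonian_numbers(3))           # [1, 2, 2, 1]; the coefficients sum to 3! = 6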


Counting sequences without repetition


In this section, a k-permutation of a set S is an ordered sequence of k distinct elements of S. For example, given the set of letters {C, E, G, I, N, R}, the sequence ICE is a 3-permutation, RING and RICE are 4-permutations, NICER and REIGN are 5-permutations, and CRINGE is a 6-permutation; since the latter uses all letters, it is a permutation of the given set in the ordinary combinatorial sense. ENGINE on the other hand is not a permutation, because of the repetitions: it uses the elements E and N twice. Let n be the size of S, the number of elements available for selection. In constructing a k-permutation, there are n possible choices for the first element of the sequence, and this is the number of 1-permutations. Once it has been chosen, there are n − 1 elements of S left to choose from, so a second element can be chosen in n − 1 ways, giving a total of n·(n − 1) possible 2-permutations. For each successive element of the sequence, the number of possibilities decreases by 1, which leads to n·(n − 1)·(n − 2)···(n − k + 1) possible k-permutations. This gives in particular the number of n-permutations (which contain all elements of S once, and are therefore simply permutations of S): n·(n − 1)·(n − 2)···2·1, a number that occurs so frequently in mathematics that it is given a compact notation "n!", and is called "n factorial". These n-permutations are the longest sequences without repetition of elements of S, which is reflected by the fact that the above formula for the number of k-permutations gives zero whenever k > n. The number of k-permutations of a set of n elements is sometimes denoted by P(n,k) or a similar notation (usually accompanied by a notation for the number of k-combinations of a set of n elements in which the "P" is replaced by "C"). That notation is rarely used in other contexts than that of counting k-permutations, but the expression for the number does arise in many other situations. Being a product of k factors starting at n and decreasing by unit steps, it is called the k-th falling factorial power of n:

though many other names and notations are in use, as detailed at Pochhammer symbol. When k ≤ n the falling factorial power can be completed by additional factors: n·(n − 1)···(n − k + 1)·(n − k)! = n!, which allows writing

  P(n, k) = n! / (n − k)!.

The right hand side is often given as the expression for the number of k-permutations, but its main merit is the use of the compact factorial notation. Expressing a product of k factors as a quotient of two potentially much larger products, where all factors in the denominator are also explicitly present in the numerator, is not particularly efficient; as a method of computation there is the additional danger of overflow or rounding errors. Note also that the expression is undefined when k > n, whereas in those cases the number of k-permutations is just 0.
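A Python sketch of the direct product computation (not from the original article; the function name is illustrative), which avoids the large intermediate factorials and returns 0 automatically when k > n:

    def falling_factorial(n, k):
        """Number of k-permutations of an n-element set: n·(n−1)···(n−k+1)."""
        result = 1
        for i in range(k):
            result *= (n - i)      # a factor hits 0 as soon as k > n, giving 0 overall
        return result

    print(falling_factorial(6, 3))   # 120: the 3-permutations of {C, E, G, I, N, R}
    print(falling_factorial(3, 5))   # 0: no 5-permutations of a 3-element set

    # Python 3.8+ also provides this directly as math.perm(n, k).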

Permutations in computing
Numbering permutations
One way to represent permutations of n is by an integer N with 0 ≤ N < n!, provided convenient methods are given to convert between the number and the usual representation of a permutation as a sequence. This gives the most compact representation of arbitrary permutations, and in computing is particularly attractive when n is small enough that N can be held in a machine word; for 32-bit words this means n ≤ 12, and for 64-bit words this means n ≤ 20. The conversion can be done via the intermediate form of a sequence of numbers dn, dn−1, ..., d2, d1, where di is a non-negative integer less than i (one may omit d1, as it is always 0, but its presence makes the subsequent conversion to a permutation easier to describe). The first step then is simply expression of N in the factorial number system, which is just a particular mixed radix representation, where for numbers up to n! the bases for successive digits are n,

n − 1, ..., 2, 1. The second step interprets this sequence as a Lehmer code or (almost equivalently) as an inversion table.


Rothe diagram for the permutation σ = (6, 3, 8, 1, 4, 9, 7, 2, 5) (diagram omitted); its Lehmer code and inversion table are:

  i                1  2  3  4  5  6  7  8  9
  σi               6  3  8  1  4  9  7  2  5
  Lehmer code      5  2  5  0  1  3  2  0  0   (the digits d9, d8, ..., d1)
  inversion table  3  6  1  2  4  0  2  0  0

In the Lehmer code for a permutation σ, the number dn represents the choice made for the first term σ1, the number dn−1 represents the choice made for the second term σ2 among the remaining n − 1 elements of the set, and so forth. More precisely, each dn+1−i gives the number of remaining elements strictly less than the term σi. Since those remaining elements are bound to turn up as some later term σj, the digit dn+1−i counts the inversions (i, j) involving i as the smaller index (the number of values j for which i < j and σi > σj). The inversion table for σ is quite similar, but here dn+1−k counts the number of inversions (i, j) where k = σj occurs as the smaller of the two values appearing in inverted order.[8] Both encodings can be visualized by an n-by-n Rothe diagram[9] (named after Heinrich August Rothe) in which dots at (i, σi) mark the entries of the permutation, and a cross at (i, σj) marks the inversion (i, j); by the definition of inversions a cross appears in any square that comes both before the dot (j, σj) in its column, and before the dot (i, σi) in its row. The Lehmer code lists the numbers of crosses in successive rows, while the inversion table lists the numbers of crosses in successive columns; the inversion table is just the Lehmer code for the inverse permutation, and vice versa. To effectively convert a Lehmer code dn, dn−1, ..., d2, d1 into a permutation of an ordered set S, one can start with a list of the elements of S in increasing order, and for i increasing from 1 to n set σi to the element in the list that is preceded by dn+1−i other ones, and remove that element from the list. To convert an inversion table dn, dn−1, ..., d2, d1 into the corresponding permutation, one can traverse the numbers from d1 to dn while inserting the elements of S from largest to smallest into an initially empty sequence; at the step using the number d from the inversion table, the element from S is inserted into the sequence at the point where it is preceded by d elements already present. Alternatively one could process the numbers from the inversion table and the elements of S both in the opposite order, starting with a row of n empty slots, and at each step place the element from S into the empty slot that is preceded by d other empty slots. Converting successive natural numbers to the factorial number system produces those sequences in lexicographic order (as is the case with any mixed radix number system), and further converting them to permutations preserves the lexicographic ordering, provided the Lehmer code interpretation is used (using inversion tables, one gets a different ordering, where one starts by comparing permutations by the place of their entries 1 rather than by the value of their first entries). The sum of the numbers in the factorial number system representation gives the number of inversions of the permutation, and the parity of that sum gives the signature of the permutation. Moreover the positions of the zeroes in the inversion table give the values of the left-to-right maxima of the permutation (in the

example 6, 8, 9), while the positions of the zeroes in the Lehmer code are the positions of the right-to-left minima (in the example, the positions 4, 8, 9 of the values 1, 2, 5); this allows computing the distribution of such extrema among all permutations. A permutation with Lehmer code dn, dn−1, ..., d2, d1 has an ascent n − i if and only if di ≥ di+1.
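The conversion between a Lehmer code and a permutation can be sketched in a few lines of Python (not from the original article; function names are illustrative, and the simple list deletion makes this the straightforward quadratic method discussed below):

    def lehmer_to_permutation(code, elements):
        """Convert a Lehmer code d_n, ..., d_1 into a permutation of the sorted elements."""
        remaining = sorted(elements)
        return [remaining.pop(d) for d in code]   # d_{n+1-i} picks the (d+1)-th smallest left

    def permutation_to_lehmer(perm):
        """Lehmer code: d_{n+1-i} counts later entries smaller than the i-th entry."""
        return [sum(1 for later in perm[i + 1:] if later < x) for i, x in enumerate(perm)]

    code = [5, 2, 5, 0, 1, 3, 2, 0, 0]            # the d_9, ..., d_1 of the example above
    perm = lehmer_to_permutation(code, range(1, 10))
    print(perm)                                   # [6, 3, 8, 1, 4, 9, 7, 2, 5]
    print(permutation_to_lehmer(perm) == code)    # True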


Algorithms to generate permutations


In computing it may be required to generate permutations of a given sequence of values. The methods best adapted to do this depend on whether one wants some randomly chosen permutations, or all permutations, and in the latter case whether a specific ordering is required. Another question is whether possible equality among entries in the given sequence is to be taken into account; if so, one should only generate distinct multiset permutations of the sequence. An obvious way to generate permutations of n is to generate values for the Lehmer code (possibly using the factorial number system representation of integers up to n!), and convert those into the corresponding permutations. However the latter step, while straightforward, is hard to implement efficiently, because it requires n operations each of selection from a sequence and deletion from it, at an arbitrary position; of the obvious representations of the sequence as an array or a linked list, both require (for different reasons) about n²/4 operations to perform the conversion. With n likely to be rather small (especially if generation of all permutations is needed) that is not too much of a problem, but it turns out that both for random and for systematic generation there are simple alternatives that do considerably better. For this reason it does not seem useful, although certainly possible, to employ a special data structure that would allow performing the conversion from Lehmer code to permutation in O(n log n) time.

Random generation of permutations

For generating random permutations of a given sequence of n values, it makes no difference whether one means to apply a randomly selected permutation of n to the sequence, or to choose a random element from the set of distinct (multiset) permutations of the sequence. This is because, even though in case of repeated values there can be many distinct permutations of n that result in the same permuted sequence, the number of such permutations is the same for each possible result. Unlike for systematic generation, which becomes unfeasible for large n due to the growth of the number n!, there is no reason to assume that n will be small for random generation. The basic idea to generate a random permutation is to generate at random one of the n! sequences of integers d1, d2, ..., dn satisfying 0 ≤ di < i (since d1 is always zero it may be omitted) and to convert it to a permutation through a bijective correspondence. For the latter correspondence one could interpret the (reverse) sequence as a Lehmer code, and this gives a generation method first published in 1938 by Ronald A. Fisher and Frank Yates.[10] While at the time computer implementation was not an issue, this method suffers from the difficulty sketched above of converting from Lehmer code to permutation efficiently. This can be remedied by using a different bijective correspondence: after using di to select an element among the i remaining elements of the sequence (for decreasing values of i), rather than removing the element and compacting the sequence by shifting down further elements one place, one swaps the element with the final remaining element. Thus the elements remaining for selection form a consecutive range at each point in time, even though they may not occur in the same order as they did in the original sequence. The mapping from sequences of integers to permutations is somewhat complicated, but it can be seen to produce each permutation in exactly one way, by an immediate induction.
When the selected element happens to be the final remaining element, the swap operation can be omitted. This does not occur sufficiently often to warrant testing for the condition, but the final element must be included among the candidates of the selection, to guarantee that all permutations can be generated. The resulting algorithm for generating a random permutation of a[0], a[1], ..., a[n − 1] can be described as follows in pseudocode:

  for i from n downto 2 do
      di ← random element of { 0, ..., i − 1 }
      swap a[di] and a[i − 1]
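A direct Python transcription of this pseudocode, the Fisher–Yates shuffle, is given below as a sketch using the standard library's random module (the combined-initialization variant follows after it):

    import random

    def random_permutation(a):
        """Fisher–Yates shuffle: permutes the list a in place, uniformly at random."""
        for i in range(len(a), 1, -1):            # i = n, n-1, ..., 2
            d = random.randrange(i)               # d in {0, ..., i-1}
            a[d], a[i - 1] = a[i - 1], a[d]       # swap the chosen element into position i-1
        return a

    print(random_permutation(list(range(10))))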

The pseudocode above can be combined with the initialization of the array a[i] = i as follows:

  for i from 0 to n − 1 do
      di+1 ← random element of { 0, ..., i }
      a[i] ← a[di+1]
      a[di+1] ← i

If di+1 = i, the first assignment will copy an uninitialized value, but the second will overwrite it with the correct value i.

Generation in lexicographic order

There are many ways to systematically generate all permutations of a given sequence. One classical algorithm, which is both simple and flexible, is based on finding the next permutation in lexicographic ordering, if it exists. It can handle repeated values, in which case it generates each distinct multiset permutation once. Even for ordinary permutations it is significantly more efficient than generating values for the Lehmer code in lexicographic order (possibly using the factorial number system) and converting those to permutations. To use it, one starts by sorting the sequence in (weakly) increasing order (which gives its lexicographically minimal permutation), and then repeats advancing to the next permutation as long as one is found. The method goes back to Narayana Pandita in 14th century India, and has been frequently rediscovered ever since.[11] The following algorithm generates the next permutation lexicographically after a given permutation. It changes the given permutation in-place.

1. Find the largest index k such that a[k] < a[k + 1]. If no such index exists, the permutation is the last permutation.
2. Find the largest index l such that a[k] < a[l]. Since k + 1 is such an index, l is well defined and satisfies k < l.
3. Swap a[k] with a[l].
4. Reverse the sequence from a[k + 1] up to and including the final element a[n].
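A Python sketch of these four steps (not from the original article; why they produce the lexicographic successor is explained below):

    def next_permutation(a):
        """Advance list a in place to its lexicographic successor; False at the last one."""
        # Step 1: largest k with a[k] < a[k+1].
        k = len(a) - 2
        while k >= 0 and a[k] >= a[k + 1]:
            k -= 1
        if k < 0:
            return False                          # a is weakly decreasing: last permutation
        # Step 2: largest l with a[k] < a[l].
        l = len(a) - 1
        while a[k] >= a[l]:
            l -= 1
        # Steps 3 and 4: swap, then reverse the tail.
        a[k], a[l] = a[l], a[k]
        a[k + 1:] = reversed(a[k + 1:])
        return True

    # Generate all distinct permutations of a multiset in lexicographic order.
    a = sorted([1, 1, 2, 3])
    count = 1
    while next_permutation(a):
        count += 1
    print(count)                                  # 12 = 4!/2!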


After step 1, one knows that all of the elements strictly after position k form a weakly decreasing sequence, so no permutation of these elements will make it advance in lexicographic order; to advance one must increase a[k]. Step 2 finds the smallest value a[l] to replace a[k] by, and swapping them in step 3 leaves the sequence after position k in weakly decreasing order. Reversing this sequence in step 4 then produces its lexicographically minimal permutation, and the lexicographic successor of the initial state for the whole sequence.

Generation with minimal changes

An alternative to the above algorithm, the Steinhaus–Johnson–Trotter algorithm, generates an ordering on all the permutations of a given sequence with the property that any two consecutive permutations in its output differ by swapping two adjacent values. This ordering on the permutations was known to 17th-century English bell ringers, among whom it was known as "plain changes". One advantage of this method is that the small amount of change from one permutation to the next allows the method to be implemented in constant time per permutation. The same method can also easily generate the subset of even permutations, again in constant time per permutation, by skipping every other output permutation.[11]


Software implementations
Calculator functions

Many scientific calculators and computing software have a built-in function for calculating the number of k-permutations of n.

  Casio and TI calculators: nPr
  HP calculators: PERM[12]
  Mathematica: FallingFactorial

Spreadsheet functions

Most spreadsheet software also provides a built-in function for calculating the number of k-permutations of n, called PERMUT in many popular spreadsheets. Apple's Numbers '08 software notably did not include such a function,[13] but this was rectified in Apple's Numbers '09 software package.

Applications
Permutations are used in the interleaver component of error detection and correction algorithms such as turbo codes; for example, the 3GPP Long Term Evolution mobile telecommunication standard uses these ideas (see 3GPP technical specification 36.212 [14]). Such applications raise the question of fast generation of permutations satisfying certain desirable properties. One of the methods is based on permutation polynomials.

Notes
[1] N. L. Biggs, The roots of combinatorics, Historia Math. 6 (1979) 109–136
[2] Humphreys (1996), p. 84 (http://books.google.com/books?id=2jBqvVb0Q-AC&pg=PA84&dq="conjugate+permutations+have+the+same+cycle+type")
[3] http://upload.wikimedia.org/wikipedia/commons/thumb/6/6d/Symmetric_group_4%3B_permutation_list_with_matrices.svg/1000px-Symmetric_group_4%3B_permutation_list_with_matrices.svg.png
[4] Combinatorics of Permutations, ISBN 1584884347, M. Bóna, 2004, p. 3
[5] Combinatorics of Permutations, ISBN 1584884347, M. Bóna, 2004, p. 4f
[6] Combinatorics of Permutations, ISBN 1584884347, M. Bóna, 2004, p. 43
[7] Combinatorics of Permutations, ISBN 1584884347, M. Bóna, 2004, p. 43ff
[8] D. E. Knuth, The Art of Computer Programming, Vol. 3, Sorting and Searching, Addison-Wesley (1973), p. 12. This book mentions the Lehmer code (without using that name) as a variant C1,...,Cn of inversion tables in exercise 5.1.1–7 (p. 19), together with two other variants.
[9] H. A. Rothe, Sammlung combinatorisch-analytischer Abhandlungen 2 (Leipzig, 1800), 263–305. Cited in [8], p. 14.
[10] Fisher, R.A.; Yates, F. (1948) [1938]. Statistical tables for biological, agricultural and medical research (3rd ed.). London: Oliver & Boyd. pp. 26–27. OCLC 14222135.
[11] Knuth, D. E. (2005). "Generating All Tuples and Permutations". The Art of Computer Programming. 4, Fascicle 2. Addison-Wesley. pp. 126. ISBN 0-201-85393-0.
[12] http://h20331.www2.hp.com/Hpsub/downloads/50gProbability-Rearranging_items.pdf
[13] Curmi, Jamie (2009-01-10). "Summary of Functions in Excel and Numbers" (http://curmi.com/blog/wp-content/uploads/2009/01/functions-in-excel-and-numbers-09.pdf) (PDF). Retrieved 2009-01-26.
[14] 3GPP TS 36.212 (http://www.3gpp.org/ftp/Specs/html-info/36212.htm)


References
Miklós Bóna. "Combinatorics of Permutations", Chapman Hall-CRC, 2004. ISBN 1-58488-434-7.
Donald Knuth. The Art of Computer Programming, Volume 4: Generating All Tuples and Permutations, Fascicle 2, first printing. Addison-Wesley, 2005. ISBN 0-201-85393-0.
Donald Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching, Second Edition. Addison-Wesley, 1998. ISBN 0-201-89685-0. Section 5.1: Combinatorial Properties of Permutations, pp. 11–72.
Humphreys, J. F. A course in group theory. Oxford University Press, 1996. ISBN 9780198534594.

Permutation pattern
In combinatorial mathematics and theoretical computer science, a permutation pattern is a sub-permutation of a longer permutation. The permutation π, written as a word in one-line notation (i.e., in two-line notation with the first line omitted), is said to contain the permutation σ if there exists a subsequence of entries of π that has the same relative order as σ, and in this case σ is said to be a pattern of π, written σ ≤ π. Otherwise, π is said to avoid the permutation σ. The subsequence need not consist of consecutive entries. For example, the permutation π = 391867452 (written in one-line notation) contains the pattern σ = 51342, as can be seen by considering the subsequence 91672. Such a subsequence is called a copy or instance of σ.
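Pattern containment can be tested directly from the definition; the Python sketch below (not from the original article, and deliberately brute-force) checks every subsequence of the required length for the same relative order.

    from itertools import combinations

    def contains_pattern(perm, pattern):
        """True if some subsequence of perm has the same relative order as pattern."""
        k = len(pattern)
        order = sorted(range(k), key=lambda i: pattern[i])    # pattern's positions, by value
        for positions in combinations(range(len(perm)), k):
            values = [perm[p] for p in positions]
            if sorted(range(k), key=lambda i: values[i]) == order:
                return True
        return False

    print(contains_pattern([3, 9, 1, 8, 6, 7, 4, 5, 2], [5, 1, 3, 4, 2]))   # True, via 9 1 6 7 2
    print(contains_pattern([1, 2, 3, 4], [2, 1]))                           # False: 1234 avoids 21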

Prehistory
A case can be made that Percy MacMahon was the first to prove a result in the field with his study of "lattice permutations" in Volume I, Section III, Chapter V of his 1915 book Combinatory Analysis. In particular, in Items 97 and 98, MacMahon shows that the permutations which can be divided into two decreasing subsequences (i.e., the 123-avoiding permutations) are counted by the Catalan numbers. Another early landmark result in the field is the Erdős–Szekeres theorem; in permutation pattern language, the theorem states that for any positive integers a and b every permutation of length at least ab + 1 must contain either the pattern 1, 2, 3, ..., a + 1 or the pattern b + 1, b, ..., 2, 1.

Computer science origins


The study of permutation patterns began in earnest with Donald Knuth's consideration of stack-sorting in (1968). Knuth showed that the permutation π can be sorted by a stack if and only if π avoids 231, and that these permutations are enumerated by the Catalan numbers (1968, Section 2.2.1, Exercises 4 and 5). Knuth also raised questions about sorting with deques. In particular, (1968, Section 2.2.1, Exercise 13), which asks how many permutations of n elements are obtainable with the use of a deque, remains an open question (it is rated M49 in the first printing, and M48 in the second). Shortly thereafter, Robert Tarjan investigated sorting networks in (1972), while Vaughan Pratt showed in (1973) that the permutation π can be sorted by a deque if and only if, for all k, π avoids 5,2,7,4,...,4k+1,4k−2,3,4k,1 and 5,2,7,4,...,4k+3,4k,1,4k+2,3, and every permutation that can be obtained from either of these by interchanging the last two elements or the 1 and the 2. Because this collection of permutations is infinite (in fact, it is the first published example of an infinite antichain of permutations), it is not immediately clear how long it takes to decide if a permutation can be sorted by a deque. Rosenstiehl & Tarjan (1984) later presented a linear (in the length of π) time algorithm which determines if π can be sorted by a deque. In his paper, Pratt remarked that this permutation pattern order "seems to be the only partial order on permutations that arises in a simple and natural way" and concluded by noting that, from an abstract point of view, the permutation pattern order "is even more interesting than the networks we were characterizing".


Enumerative origins
Another major influence on the early development of the study of permutation patterns came from enumerative combinatorics, and focused on finding formulas for the number of permutations which avoid a fixed (and typically short) permutation. Let Avn(β) denote the set of permutations of length n which avoid β. As already noted, MacMahon and Knuth showed that |Avn(123)| = |Avn(231)| = Cn, the nth Catalan number. Thus these two patterns are equally difficult to avoid. Simion & Schmidt (1985) was the first paper to focus solely on enumeration. Among other results, Simion and Schmidt counted even and odd permutations avoiding a pattern of length three, counted permutations avoiding two patterns of length three, and gave the first bijective proof that 123- and 231-avoiding permutations are equinumerous. Since their paper, many other bijections have been given; see Claesson & Kitaev (2008) for a survey. In general, if |Avn(β)| = |Avn(σ)| for all n, then β and σ are said to be Wilf-equivalent. Many Wilf-equivalences stem from the trivial fact that |Avn(β)| = |Avn(β−1)| = |Avn(βrev)| for all n, where β−1 denotes the inverse of β and βrev denotes the reverse of β. (These two operations generate the dihedral group D8 with a natural action on permutation matrices.) However, there are also numerous examples of nontrivial Wilf-equivalences: Stankova (1994) proved that the permutations 1342 and 2413 are Wilf-equivalent. Backelin, West & Xin (2007) proved that for any permutation β and any positive integer m, the permutations 12...m ⊕ β and m...21 ⊕ β are Wilf-equivalent, where ⊕ denotes the direct sum operation. From these two Wilf-equivalences and the inverse and reverse symmetries, it follows that there are three different sequences |Avn(β)| where β is of length four:
  β      sequence enumerating Avn(β)                               OEIS      exact enumeration reference
  1342   1, 2, 6, 23, 103, 512, 2740, 15485, 91245, 555662, ...    A022558   Bóna (1997)
  1234   1, 2, 6, 23, 103, 513, 2761, 15767, 94359, 586590, ...    A005802   Gessel (1990)
  1324   1, 2, 6, 23, 103, 513, 2762, 15793, 94776, 591950, ...    A061552   unknown
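For small n these counts can be verified by brute force, reusing the contains_pattern sketch above (a check of the table, not an efficient enumeration method):

    from itertools import permutations

    def avoiders(n, pattern):
        """Count permutations of length n that avoid the given pattern (brute force)."""
        return sum(1 for p in permutations(range(1, n + 1))
                   if not contains_pattern(list(p), pattern))

    # First entries of |Av_n(1342)|, |Av_n(1234)| and |Av_n(1324)| for n = 1..6.
    for pattern in ([1, 3, 4, 2], [1, 2, 3, 4], [1, 3, 2, 4]):
        print([avoiders(n, pattern) for n in range(1, 7)])
    # [1, 2, 6, 23, 103, 512], [1, 2, 6, 23, 103, 513], [1, 2, 6, 23, 103, 513]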

In the late 1980s, Richard P. Stanley and Herbert Wilf conjectured that for every permutation β, there is some constant K such that |Avn(β)| < K^n. This was known as the Stanley–Wilf conjecture until it was proved in (2004) by Adam Marcus and Gábor Tardos.

Closed classes
A closed class, also known as a pattern class, permutation class, or simply class of permutations is a downset in the permutation pattern order. Every class can be defined by the minimal permutations which do not lie inside it, its basis. Thus the basis for the stack-sortable permutations is {231}, while the basis for the deque-sortable permutations is infinite. The generating function for a class is the sum of x^|π| over all permutations π in the class. Given a class of permutations, there are numerous questions that one may seek to answer, such as:

  What is the enumeration of the class?
  Does the class have a rational/algebraic/holonomic generating function?
  What is the growth rate of the class? (Or, if this does not exist, the upper or lower growth rate.)
  Is the basis of the class finite or infinite?
  Is the class partially well-ordered?
  Does the class satisfy the joint embedding property? (Classes which satisfy this are often called atomic.)
  How quickly can the membership problem for this class be decided? I.e., given a permutation π of length n, how long does it take to determine if π lies in the class?

General techniques to answer these questions are few and far between.


Packing densities
The permutation π is said to be β-optimal if no permutation of the same length as π has more copies of β. In his address to the SIAM meeting on Discrete Mathematics in 1992, Wilf defined the packing density of the permutation β of length k as the limit, as n → ∞, of the maximum number of copies of β in a permutation of length n divided by C(n, k).

An unpublished argument of Fred Galvin shows that the quantity inside this limit is decreasing for n ≥ k, and so the limit exists. When β is monotone, its packing density is clearly 1, and packing densities are invariant under the group of symmetries generated by inverse and reverse, so for permutations of length three, there is only one nontrivial packing density. Walter Stromquist (unpublished) settled this case by showing that the packing density of 132 is 2√3 − 3, approximately 0.46410. For permutations of length four, there are (due to symmetries) seven cases to consider:
  β      packing density                                reference
  1234   1                                              trivial
  1432   root of x³ − 12x² + 156x − 64, ≈ 0.42357       Price (1997)
  2143   3/8 = 0.375                                    Price (1997)
  1243   3/8 = 0.375                                    Albert et al. (2002)
  1324   conjectured to be ≈ 0.244
  1342   conjectured to be ≈ 0.19658
  2413   conjectured to be ≈ 0.10474

For the three unknown permutations, there are bounds and conjectures. Price (1997) used an approximation algorithm which suggests that the packing density of 1324 is around 0.244. Birjan Batkeyev (unpublished) constructed a family of permutations showing that the packing density of 1342 is at least the product of the packing densities of 132 and 1432, 0.19658. This is conjectured to be the precise packing density of 1342. Recently, Presutti & Stromquist (2010) have provided a lower bound on the packing density of 2413. This lower bound, which can be expressed in terms of an integral, is approximately 0.10474, and conjectured to be the true packing density.

Generalizations
There are several ways in which this notion of permutation patterns may be generalized. For example, a vincular pattern is a permutation containing dashes indicating the entries that need not occur consecutively (in the normal pattern definition, no entries need to occur consecutively). For example, the permutation 314265 has two copies of the dashed pattern 2-31-4, given by the entries 3426 and 3425. For a dashed pattern β and any permutation π, we write β(π) for the number of copies of β in π. Thus the number of inversions in π is 2-1(π), while the number of descents is 21(π). Going further, the number of valleys in π is 213(π) + 312(π), while the number of peaks is 231(π) + 132(π). These patterns were introduced by Babson & Steingrímsson (2000), who showed that almost all known Mahonian statistics could be expressed in terms of vincular permutations. For example, the major index of π is equal to 1-32(π) + 2-31(π) + 3-21(π) + 21(π). Another generalization is that of a barred pattern, in which some of the entries are barred. For π to avoid the barred pattern β means that every set of entries of π which form a copy of the nonbarred entries of β can be extended to

form a copy of all entries of β. West (1993) introduced these types of patterns in his study of permutations which could be sorted by passing them twice through a stack. (Note that West's definition of sorting twice through a stack is not the same as sorting with two stacks in series.) Another example of barred patterns occurs in the work of Bousquet-Mélou & Butler (2007), who showed that the Schubert variety corresponding to π is locally factorial if and only if π avoids 1324 and the barred pattern 21354 (in which the 1 is barred).


References
Albert, Michael H.; Atkinson, M. D.; Handley, C. C.; Holton, D. A.; Stromquist, W. (2002), "On packing densities of permutations" [1], Electron. J. Combin. 9: Research article 5, 20 pp., MR1887086.
Babson, Erik; Steingrímsson, Einar (2000), "Generalized permutation patterns and a classification of the Mahonian statistics" [2], Sém. Lothar. Combin. 44: Research article B44b, 18 pp., MR1758852.
Backelin, Jörgen; West, Julian; Xin, Guoce (2007), "Wilf-equivalence for singleton classes", Adv. in Appl. Math. 38 (2): 133–149, doi:10.1016/j.aam.2004.11.006, MR2290807.
Bóna, Miklós (1997), "Exact enumeration of 1342-avoiding permutations: a close link with labeled trees and planar maps", J. Combin. Theory Ser. A 80 (2): 257–272, doi:10.1006/jcta.1997.2800, MR1485138.
Bousquet-Mélou, Mireille; Butler, Steve (2007), "Forest-like permutations", Ann. Comb. 11: 335–354, doi:10.1007/s00026-007-0322-1, MR2376109.
Claesson, Anders; Kitaev, Sergey (2008), "Classification of bijections between 321- and 132-avoiding permutations", Sém. Lothar. Combin. 60: Article B60d, 30 pp., MR2465405.
Gessel, Ira M. (1990), "Symmetric functions and P-recursiveness", J. Combin. Theory Ser. A 53 (2): 257–285, doi:10.1016/0097-3165(90)90060-A, MR1041448.
Knuth, Donald E. (1968), The Art of Computer Programming Vol. 1, Boston: Addison-Wesley, ISBN 0-201-89683-4, OCLC 155842391, MR0286317.
MacMahon, Percy A. (1915/16), Combinatory Analysis, London: Cambridge University Press.
Marcus, Adam; Tardos, Gábor (2004), "Excluded permutation matrices and the Stanley–Wilf conjecture", Journal of Combinatorial Theory, Series A 107 (1): 153–160, doi:10.1016/j.jcta.2004.04.002, MR2063960.
Pratt, Vaughan R. (1973), "Computing permutations with double-ended queues. Parallel stacks and parallel queues", Fifth Annual ACM Symposium on Theory of Computing (Austin, Tex., 1973): 268–277, doi:10.1145/800125.804058, MR0489115.
Presutti, Cathleen Battiste; Stromquist, Walter (2010), "Packing rates of measures and a conjecture for the packing density of 2413", in Linton, Steve; Ruškuc, Nik; Vatter, Vincent, Permutation Patterns, London Math. Soc. Lecture Notes, 376, Cambridge University Press, pp. 287–316.
Price, Alkes (1997), Packing densities of layered patterns, Ph.D. thesis, University of Pennsylvania.
Rosenstiehl, Pierre; Tarjan, Robert (1984), "Gauss codes, planar Hamiltonian graphs, and stack-sortable permutations", J. Algorithms 5 (3): 375–390, doi:10.1016/0196-6774(84)90018-X, MR756164.
Simion, Rodica; Schmidt, Frank W. (1985), "Restricted permutations", European J. Combin. 6: 383–406, MR0829358.
Stankova, Zvezdelina (1994), "Forbidden subsequences", Discrete Math. 132 (1–3): 291–316, doi:10.1016/0012-365X(94)90242-9, MR1297387.
Tarjan, Robert (1972), "Sorting using networks of queues and stacks", J. Assoc. Comput. Mach. 19 (2): 341–346, doi:10.1145/321694.321704, MR0298803.
West, Julian (1993), "Sorting twice through a stack", Theoret. Comput. Sci. 117 (1–2): 303–313, doi:10.1016/0304-3975(93)90321-J, MR1235186.


External links
Permutation Patterns 2011 [3], the ninth in a series of conferences devoted to permutation patterns. June 20–24, 2011.
Permutation Patterns 2010 [4], the eighth in a series of conferences devoted to permutation patterns. August 9–13, 2010.
Database of Permutation Pattern Avoidance [5], maintained by Bridget Tenner.

References
[1] http://www.combinatorics.org/Volume_9/Abstracts/v9i1r5.html
[2] http://www.emis.de/journals/SLC/wpapers/s44stein.html
[3] http://math.calpoly.edu/PP2011/
[4] http://math.dartmouth.edu/~pp2010/
[5] http://math.depaul.edu/~bridget/patterns.html

Piecewise syndetic set


In mathematics, piecewise syndeticity is a notion of largeness of subsets of the natural numbers. Let Pf(N) denote the set of finite subsets of N. Then a set S ⊆ N is called piecewise syndetic if there exists G ∈ Pf(N) such that for every F ∈ Pf(N) there exists an x ∈ N such that

  x + F ⊆ ⋃ over n ∈ G of (S − n),

where S − n = {m ∈ N : m + n ∈ S}. Informally, S is piecewise syndetic if S contains arbitrarily long intervals with gaps bounded by some fixed bound b.

Properties
A set is piecewise syndetic if and only if it is the intersection of a syndetic set and a thick set.
If S is piecewise syndetic then S contains arbitrarily long arithmetic progressions.
A set S is piecewise syndetic if and only if there exists some ultrafilter U which contains S and U is in the smallest two-sided ideal of βN, the Stone–Čech compactification of the natural numbers.
Partition regularity: if S is piecewise syndetic and S = C1 ∪ C2 ∪ ... ∪ Cn, then for some i ≤ n, Ci contains a piecewise syndetic set. (Brown, 1968)
If A and B are subsets of N with positive upper Banach density, then A + B = {a + b : a ∈ A, b ∈ B} is piecewise syndetic.[1]

Other Notions of Largeness


There are many alternative definitions of largeness that also usefully distinguish subsets of natural numbers:

  cofinite
  positive upper density
  syndetic
  thick
  member of a nonprincipal ultrafilter
  IP set


Notes
[1] R. Jin, Nonstandard Methods For Upper Banach Density Problems (http://math.cofc.edu/faculty/jin/research/banach.pdf), Journal of Number Theory 91 (2001), 20–38.

References
J. McLeod, "Some Notions of Size in Partial Semigroups" (http://www.mtholyoke.edu/~jmcleod/somenotionsofsize.pdf), Topology Proceedings 25 (2000), 317–332.
Vitaly Bergelson, "Minimal Idempotents and Ergodic Ramsey Theory" (http://www.math.ohio-state.edu/~vitaly/vbkatsiveli20march03.pdf), Topics in Dynamics and Ergodic Theory 8–39, London Math. Soc. Lecture Note Series 310, Cambridge Univ. Press, Cambridge (2003).
Vitaly Bergelson, N. Hindman, "Partition regular structures contained in large sets are abundant" (http://members.aol.com/nhfiles2/pdf/large.pdf), J. Comb. Theory (Series A) 93 (2001), 18–36.
T. Brown, "An interesting combinatorial method in the theory of locally finite semigroups" (http://projecteuclid.org/Dienst/UI/1.0/Summarize/euclid.pjm/1102971066), Pacific J. Math. 36, no. 2 (1971), 285–289.

Pigeonhole principle
In mathematics and computer science, the pigeonhole principle states that if n items are put into m pigeonholes with n > m, then at least one pigeonhole must contain more than one item. This theorem is exemplified in real-life by truisms like "there must be at least two left gloves or two right gloves in a group of three gloves". It is an example of a counting argument, and despite seeming intuitive it can be used to demonstrate possibly unexpected results; for example, that two people in London have the same number of hairs on their heads (see below). The first formalization of the idea is believed to have been made by Johann Dirichlet in 1834 under the name Schubfachprinzip ("drawer principle" or "shelf principle"). For this reason it is also commonly called Dirichlet's box principle, Dirichlet's drawer principle or simply "Dirichlet principle"a name that could also refer to the minimum principle for harmonic functions. The original "drawer" name is still in use in French ("principe des tiroirs"), Italian ("principio dei cassetti") and German ("Schubfachprinzip").

A photograph of pigeons in holes. Here there are n = 10 pigeons in m = 9 holes, so by the pigeonhole principle, at least one hole has more than one pigeon: in this case, both of the top corner holes contain two pigeons. The principle says nothing about which holes are empty: for n = 10 pigeons in m = 9 holes, it simply says that at least one hole here will be over-full; in this case, the bottom-left hole is empty.

Though the most straightforward application is to finite sets (such as pigeons and boxes), it is also used with infinite sets that cannot be put into one-to-one correspondence. To do so requires the formal statement of the pigeonhole principle, which is "there does not exist an injective function on finite sets whose codomain is smaller than its domain". Advanced mathematical proofs like Siegel's lemma build upon this more general concept.


Examples
Softball team
Imagine five people who want to play softball (n = 5 items), with a limitation of only four softball teams (m = 4 holes) to choose from. A further limitation is imposed in the form of each of the five refusing to play on a team with any of the other four players. It is impossible to divide five people among four teams without putting two of the people on the same team, and since they refuse to play on the same team, the pigeonhole principle shows that at most four of the five possible players will be able to play.

Sock-picking
Assume that a box contains 10 black socks and 12 blue socks, and that one wants the maximum number of socks that must be drawn from the box before a pair of the same color is guaranteed. Using the pigeonhole principle with one pigeonhole per color (m = 2 holes), only three socks are needed (n = 3 items): if the first and second socks drawn are not of the same color, the very next sock drawn completes at least one same-color pair.

Hand-shaking
If there are n people who can shake hands with one another (where n > 1), the pigeonhole principle shows that there is always a pair of people who will shake hands with the same number of people. Here the 'holes', or m, correspond to the number of hands shaken, and each person can shake hands with anybody from 0 to n − 1 other people; this creates n − 1 possible holes, because either the '0' hole or the 'n − 1' hole must be empty (if one person shakes hands with everybody, it is not possible to have another person who shakes hands with nobody; likewise, if one person shakes hands with no one, there cannot be a person who shakes hands with everybody). This leaves n people to be placed in at most n − 1 non-empty holes, guaranteeing duplication.

Hair-counting
We can demonstrate there must be at least two people in London with the same number of hairs on their heads as follows. A typical human head has around 150,000 hairs; therefore it is reasonable to assume that no one has more than 1,000,000 hairs on their head (m = 1 million holes). Since there are more than 1,000,000 people in London (n is bigger than 1 million items), if we assign a pigeonhole to each number of hairs on a person's head, and assign people to pigeonholes according to the number of hairs on each person's head, there must be at least two people with the same number of hairs on their heads (that is, n > m). In the case with the fewest overlaps, there will be at least one person assigned to every pigeonhole, at which point the 1,000,001st person is assigned to the same pigeonhole as someone else. Of course, there may be empty pigeonholes, in which case this "collision" happens before we reach the 1,000,001st person. The principle just proves the existence of an overlap; it says nothing of the number of overlaps.

The birthday problem


The birthday problem asks, for a set of n randomly chosen people, what the probability is that some pair of them will have the same birthday. If there are 367 people in the room, there is certainly at least one pair who share the same birthday, as there are only 366 possible birthdays to choose from.

Uses and applications


The pigeonhole principle arises in computer science. For example, collisions are inevitable in a hash table because the number of possible keys exceeds the number of indices in the array. A hashing algorithm, no matter how clever, cannot avoid these collisions. The principle can also be used to prove that any lossless compression algorithm,

provided it makes some inputs smaller (as the name compression suggests), will also make some other inputs larger. Otherwise, the set of all input sequences up to a given length l could be mapped to the (much) smaller set of all sequences of length less than l without collisions (because the compression is lossless), a possibility which the pigeonhole principle excludes.

A notable problem in mathematical analysis is, for a fixed irrational number a, to show that the set {[na] : n is an integer} of fractional parts is dense in [0, 1]. After a moment's thought, one finds that it is not easy to explicitly find integers n, m such that |na − m| < e, where e > 0 is a small positive number and a is some arbitrary irrational number. But if one takes M such that 1/M < e, by the pigeonhole principle there must be n1, n2 ∈ {1, 2, ..., M + 1} such that n1a and n2a are in the same integer subdivision of size 1/M (there are only M such subdivisions between consecutive integers). In particular, we can find n1, n2 such that n1a is in (p + k/M, p + (k + 1)/M) and n2a is in (q + k/M, q + (k + 1)/M), for some integers p, q and some k in {0, 1, ..., M − 1}. We can then easily verify that (n2 − n1)a is in (q − p − 1/M, q − p + 1/M). This implies that [na] < 1/M < e, where n = n2 − n1 or n = n1 − n2. This shows that 0 is a limit point of {[na]}. We can then use this fact to prove the case for p in (0, 1]: find n such that [na] < 1/M < e; then if p ∈ (0, 1/M], we are done. Otherwise p ∈ (j/M, (j + 1)/M], and by setting k = sup{r ∈ N : r[na] < j/M}, one obtains |[(k + 1)na] − p| < 1/M < e.
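The pigeonhole step of this argument is easy to carry out numerically; the Python sketch below (not from the original article; the function name and the choice of a = √2 are illustrative) finds a multiple of an irrational number whose fractional part is within e of 0 or 1.

    import math

    def small_fractional_multiple(alpha, eps):
        """Find n >= 1 with frac(n*alpha) within eps of 0 or 1, via the pigeonhole argument."""
        M = int(1 / eps) + 1
        seen = {}                                  # subdivision index -> earliest multiple
        for n in range(1, M + 2):                  # M + 1 multiples, only M subdivisions
            k = int(math.modf(n * alpha)[0] * M)   # which subdivision of width 1/M
            if k in seen:
                return n - seen[k]                 # frac((n2 - n1)*alpha) is within 1/M of 0 or 1
            seen[k] = n
        return None                                # unreachable, by the pigeonhole principle

    n = small_fractional_multiple(math.sqrt(2), 0.01)
    print(n, math.modf(n * math.sqrt(2))[0])       # fractional part within 0.01 of 0 or 1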


Generalizations of the pigeonhole principle


A generalized version of this principle states that, if n discrete objects are to be allocated to m containers, then at least one container must hold no fewer than ⌈n/m⌉ objects, where ⌈x⌉ is the ceiling function, denoting the smallest integer larger than or equal to x. Similarly, at least one container must hold no more than ⌊n/m⌋ objects, where ⌊x⌋ is the floor function, denoting the largest integer smaller than or equal to x. A probabilistic generalization of the pigeonhole principle states that if n pigeons are randomly put into m pigeonholes with uniform probability 1/m, then at least one pigeonhole will hold more than one pigeon with probability

  1 − (m)n / m^n,

where (m)n is the falling factorial m(m − 1)(m − 2)...(m − n + 1). For n = 0 and for n = 1 (and m > 0), that probability is zero; in other words, if there is just one pigeon, there cannot be a conflict. For n > m (more pigeons than pigeonholes) it is one, in which case it coincides with the ordinary pigeonhole principle. But even if the number of pigeons does not exceed the number of pigeonholes (n ≤ m), due to the random nature of the assignment of pigeons to pigeonholes there is often a substantial chance that clashes will occur. For example, if 2 pigeons are randomly assigned to 4 pigeonholes, there is a 25% chance that at least one pigeonhole will hold more than one pigeon; for 5 pigeons and 10 holes, that probability is 69.76%; and for 10 pigeons and 20 holes it is about 93.45%. If the number of holes stays fixed, there is always a greater probability of a pair when you add more pigeons. This problem is treated at much greater length at birthday paradox. A further probabilistic generalisation is that when a real-valued random variable X has a finite mean E(X), then the probability is nonzero that X is greater than or equal to E(X), and similarly the probability is nonzero that X is less than or equal to E(X). To see that this implies the standard pigeonhole principle, take any fixed arrangement of n pigeons into m holes and let X be the number of pigeons in a hole chosen uniformly at random. The mean of X is n/m, so if there are more pigeons than holes the mean is greater than one. Therefore, X is sometimes at least 2.
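The quoted percentages can be checked directly from the formula above; the short Python sketch below (not from the original article; the function name is illustrative) evaluates 1 − (m)n / m^n.

    def collision_probability(n, m):
        """Probability that n pigeons dropped uniformly into m holes share a hole."""
        prob_all_distinct = 1.0
        for i in range(n):
            prob_all_distinct *= (m - i) / m      # builds (m)_n / m^n term by term
        return 1 - prob_all_distinct

    print(round(collision_probability(2, 4), 4))     # 0.25
    print(round(collision_probability(5, 10), 4))    # 0.6976
    print(round(collision_probability(10, 20), 4))   # 0.9345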


Infinite sets
The pigeonhole principle can be extended to infinite sets by phrasing it in terms of cardinal numbers: if the cardinality of set A is greater than the cardinality of set B, then there is no injection from A to B. However in this form the principle is tautological, since the meaning of the statement that the cardinality of set A is greater than the cardinality of set B is exactly that there is no injective map from A to B. What makes the situation of finite sets particular is that adding at least one element to a set is sufficient to ensure that the cardinality increases.

References
Grimaldi, Ralph P. Discrete and Combinatorial Mathematics: An Applied Introduction. 4th edn. 1998. ISBN 0-201-19912-2. pp. 244–248.
Jeff Miller, Peter Flor, Gunnar Berg, and Julio González Cabillón. "Pigeonhole principle [1]". In Jeff Miller (ed.) Earliest Known Uses of Some of the Words of Mathematics [2]. Electronic document, retrieved November 11, 2006.

External links
"The strange case of The Pigeon-hole Principle [3]"; Edsger Dijkstra investigates interpretations and reformulations of the principle. "The Pigeon Hole Principle [4]"; Elementary examples of the principle in use by Larry Cusick. "Pigeonhole Principle from Interactive Mathematics Miscellany and Puzzles [5]"; basic Pigeonhole Principle analysis and examples by Alexander Bogomolny. "The Puzzlers' Pigeonhole [6]"; Alexander Bogomolny on the importance of the principle in the field of puzzle solving and analysis.

References
[1] http://jeff560.tripod.com/p.html
[2] http://jeff560.tripod.com/mathword.html
[3] http://www.cs.utexas.edu/users/EWD/transcriptions/EWD09xx/EWD980.html
[4] http://zimmer.csufresno.edu/~larryc/proofs/proofs.pigeonhole.html
[5] http://www.cut-the-knot.org/do_you_know/pigeon.shtml
[6] http://www.maa.org/editorial/knot/pigeonhole.html


Probabilistic method
This article is not about interactive proof systems which use probability to convince a verifier that a proof is correct, nor about probabilistic algorithms, which give the right answer with high probability but not with certainty, nor about Monte Carlo methods, which are simulations relying on pseudo-randomness. The probabilistic method is a nonconstructive method, primarily used in combinatorics and pioneered by Paul Erdős, for proving the existence of a prescribed kind of mathematical object. It works by showing that if one randomly chooses objects from a specified class, the probability that the result is of the prescribed kind is more than zero. Although the proof uses probability, the final conclusion is determined for certain, without any possible error. This method has now been applied to other areas of mathematics such as number theory, linear algebra, and real analysis, as well as in computer science (e.g. randomized rounding).

Introduction
If every object in a collection of objects fails to have a certain property, then the probability that a random object chosen from the collection has that property is zero. Turning this around, if the probability that the random object has the property is greater than zero, then this proves the existence of at least one object in the collection that has the property. It doesn't matter if the probability is vanishingly small; any positive probability will do. Similarly, showing that the probability is (strictly) less than 1 can be used to prove the existence of an object that does not satisfy the prescribed properties. Another way to use the probabilistic method is by calculating the expected value of some random variable. If it can be shown that the random variable can take on a value less than the expected value, this proves that the random variable can also take on some value greater than the expected value. Common tools used in the probabilistic method include Markov's inequality, the Chernoff bound, and the Lovász local lemma.

Two examples due to Erds


Although others before him proved theorems via the probabilistic method (for example, Szele's 1943 result that there exist tournaments containing a large number of Hamiltonian cycles), many of the most well known proofs using this method are due to Erdős. The first example below describes one such result from 1947 that gives a proof of a lower bound for the Ramsey number R(r, r).

First example
Suppose we have a complete graph on n vertices. We wish to show (for small enough values of n) that it is possible to color the edges of the graph in two colors (say red and blue) so that there is no complete subgraph on r vertices which is monochromatic (every edge colored the same color). To do so, we color the graph randomly. Color each edge independently with probability 1/2 of being red and 1/2 of being blue. We calculate the expected number of monochromatic subgraphs on r vertices as follows: For any set S of r vertices from our graph, define the variable X(S) to be 1 if every edge amongst the r vertices is the same color, and 0 otherwise. Note that the number of monochromatic r-subgraphs is the sum of X(S) over all possible subsets. For any S, the expected value of X(S) is simply the probability that all of the C(r, 2) edges in S are the same color,

E[X(S)] = 2 · 2^(−C(r, 2)) = 2^(1 − C(r, 2))

(the factor of 2 comes because there are two possible colors). This holds true for any of the C(n, r) possible subsets we could have chosen, so we have that the sum of E[X(S)] over all S is

C(n, r) · 2^(1 − C(r, 2)).

The expectation of a sum is the sum of the expectations (regardless of whether the variables are independent), so this quantity is also the expected number of monochromatic r-subgraphs.

Consider what happens if this value is less than 1. The number of monochromatic r-subgraphs in our random coloring will always be a non-negative integer, so at least one coloring must have fewer monochromatic r-subgraphs than the expected value; the only non-negative integer less than a number smaller than 1 is 0. Thus if

C(n, r) · 2^(1 − C(r, 2)) < 1,

some coloring fits our desired criterion, so by definition R(r, r) must be bigger than n. In particular, R(r, r) must grow at least exponentially with r. A peculiarity of this argument is that it is entirely nonconstructive. Even though it proves (for example) that almost every coloring of the complete graph on (1.1)^r vertices contains no monochromatic r-subgraph, it gives no explicit example of such a coloring. The problem of finding such a coloring has been open for more than 50 years.
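The threshold can be explored numerically. The sketch below (function names are ours) computes the expectation C(n, r)·2^(1 − C(r, 2)) and reports the largest n for which it is still below 1, giving a concrete lower bound on R(r, r).

```python
from math import comb

def coloring_bound(n, r):
    """Expected number of monochromatic r-subgraphs in a random
    2-coloring of the complete graph on n vertices."""
    return comb(n, r) * 2 ** (1 - comb(r, 2))

def ramsey_lower_bound(r):
    """Largest n for which the expectation is < 1, so R(r, r) > n."""
    n = r
    while coloring_bound(n + 1, r) < 1:
        n += 1
    return n

print(ramsey_lower_bound(5))   # 11, i.e. R(5, 5) > 11 by this argument
```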

Second example
A 1959 paper of Erdős (see reference cited below) addressed the following problem in graph theory: given positive integers g and k, does there exist a graph G containing only cycles of length at least g, such that the chromatic number of G is at least k?

It can be shown that such a graph exists for any g and k, and the proof is reasonably simple. Let n be very large and consider a random graph G on n vertices, where every edge in G exists with probability p = n^(1/g − 1). It can be shown that with positive probability, the following two properties hold:

G contains at most n/2 cycles of length less than g.

Proof. Let X be the number of cycles of length less than g. The number of cycles of length i in the complete graph on n vertices is n!/(2i·(n − i)!) ≤ n^i/2, and each of them is present in G with probability p^i, so E[X] is at most Σ_{i=3}^{g−1} n^i p^i / 2 = o(n). Hence by Markov's inequality we have Pr(X > n/2) ≤ 2E[X]/n = o(1).

G contains no independent set of size ⌈n/2k⌉.

Proof. Let Y be the size of the largest independent set in G. For y = ⌈n/2k⌉ we have Pr(Y ≥ y) ≤ C(n, y)·(1 − p)^(C(y, 2)) ≤ n^y · e^(−p·y(y − 1)/2), which tends to zero because p(y − 1)/2 grows much faster than ln n.

Here comes the trick: since G has these two properties with positive probability, we can pick such a G and remove at most n/2 vertices from it (one from each short cycle) to obtain a new graph G' on n' ≥ n/2 vertices that contains only cycles of length at least g. We can see that this new graph has no independent set of size ⌈n'/k⌉, since any independent set of G' is also an independent set of G. Hence G' has chromatic number at least k, as chromatic number is lower bounded by 'number of vertices/size of largest independent set'.

This result gives a hint as to why the computation of the chromatic number of a graph is so difficult: even when there are no local reasons (such as small cycles) for a graph to require many colors, the chromatic number can still be arbitrarily large.


References
Alon, Noga; Spencer, Joel H. (2000). The Probabilistic Method (2nd ed.). New York: Wiley-Interscience. ISBN 0-471-37046-0.
Erdős, P. (1959). "Graph theory and probability" [1]. Canad. J. Math. 11: 34–38. doi:10.4153/CJM-1959-003-9. MR0102081.
Erdős, P. (1961). "Graph theory and probability, II" [2]. Canad. J. Math. 13: 346–352. doi:10.4153/CJM-1961-029-9. MR0120168.
J. Matoušek, J. Vondrák. The Probabilistic Method [3]. Lecture notes.
Alon, N. and Krivelevich, M. (2006). Extremal and Probabilistic Combinatorics [4]

References
[1] http://www.math-inst.hu/~p_erdos/1959-06.pdf
[2] http://www.math-inst.hu/~p_erdos/1961-06.pdf
[3] http://kam.mff.cuni.cz/~matousek/prob-ln-2pp.ps.gz
[4] http://www.math.tau.ac.il/~nogaa/PDFS/epc7.pdf

q-analog
Roughly speaking, in mathematics, specifically in the areas of combinatorics and special functions, a q-analog of a theorem, identity or expression is a generalization involving a new parameter q that returns the original theorem, identity or expression in the limit as q → 1. Typically, mathematicians are interested in q-analogues that arise naturally, rather than in arbitrarily contriving q-analogues of known results. The earliest q-analog studied in detail is the basic hypergeometric series, which was introduced in the 19th century. q-analogs find applications in a number of areas, including the study of fractals and multi-fractal measures, and expressions for the entropy of chaotic dynamical systems. The relationship to fractals and dynamical systems results from the fact that many fractal patterns have the symmetries of Fuchsian groups in general (see, for example, Indra's Pearls and the Apollonian gasket) and the modular group in particular. The connection passes through hyperbolic geometry and ergodic theory, where the elliptic integrals and modular forms play a prominent role; the q-series themselves are closely related to elliptic integrals. q-analogs also appear in the study of quantum groups and in q-deformed superalgebras. The connection here is similar, in that much of string theory is set in the language of Riemann surfaces, resulting in connections to elliptic curves, which in turn relate to q-series. There are two main groups of q-analogs: the "classical" q-analogs, with beginnings in the work of Leonhard Euler and extended by F. H. Jackson and others,[1] and a second group of q-analogs developed by Constantino Tsallis and others.[2]


"Classical" q-theory
Classical q-theory begins with the q-analogs of the nonnegative integers.[1] The equality

lim_{q → 1} (1 − q^n)/(1 − q) = n

suggests that we define the q-analog of n, also known as the q-bracket or q-number of n, to be

[n]_q = (1 − q^n)/(1 − q) = 1 + q + q^2 + ... + q^(n−1).

By itself, the choice of this particular q-analog among the many possible options is unmotivated. However, it appears naturally in several contexts. For example, having decided to use [n]_q as the q-analog of n, one may define the q-analog of the factorial, known as the q-factorial, by

[n]_q! = [1]_q [2]_q ... [n]_q = 1 · (1 + q) · (1 + q + q^2) ... (1 + q + ... + q^(n−1)).

This q-analog appears naturally in several contexts. Notably, while n! counts the number of permutations of length n, [n]_q! counts permutations while keeping track of the number of inversions. That is, if inv(w) denotes the number of inversions of the permutation w and S_n denotes the set of permutations of length n, we have

Σ_{w ∈ S_n} q^(inv(w)) = [n]_q!.

In particular, one recovers the usual factorial by taking the limit as q → 1.

The q-factorial also has a concise definition in terms of the q-Pochhammer symbol, a basic building-block of all q-theories:

[n]_q! = (q; q)_n / (1 − q)^n.

From the q-factorials, one can move on to define the q-binomial coefficients, also known as Gaussian coefficients, Gaussian polynomials, or Gaussian binomial coefficients:

[n choose k]_q = [n]_q! / ( [k]_q! [n − k]_q! ),

which allows q-addition to be defined:

and subtraction:

The q-exponential is defined as

e_q(x) = Σ_{n ≥ 0} x^n / [n]_q!.

Q-trigonometric functions, along with a q-Fourier transform have been defined in this context[2]
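The inversion-counting interpretation of the q-factorial given above is easy to check for small n. The sketch below (helper names are ours) multiplies out [n]_q! as a polynomial in q and compares it with the distribution of inversions over S_n.

```python
from itertools import permutations
from collections import Counter

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = power of q)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def q_factorial(n):
    """[n]_q! = (1)(1+q)(1+q+q^2)...(1+q+...+q^(n-1)) as a coefficient list."""
    result = [1]
    for k in range(1, n + 1):
        result = poly_mul(result, [1] * k)   # [k]_q = 1 + q + ... + q^(k-1)
    return result

def inversion_distribution(n):
    """Coefficient list of sum over w in S_n of q^inv(w)."""
    counts = Counter(
        sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])
        for w in permutations(range(n))
    )
    return [counts[i] for i in range(max(counts) + 1)]

n = 4
print(q_factorial(n))            # [1, 3, 5, 6, 5, 3, 1]
print(inversion_distribution(n)) # the same list
```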


Combinatorial q-analogs
The Gaussian coefficients count subspaces of a finite vector space. Let q be the number of elements in a finite field. (The number q is then a power of a prime number, q = p^e, so using the letter q is especially appropriate.) Then the number of k-dimensional subspaces of the n-dimensional vector space over the q-element field equals the Gaussian binomial coefficient

[n choose k]_q.

Letting q approach 1, we get the ordinary binomial coefficient C(n, k), or in other words, the number of k-element subsets of an n-element set. Thus, one can regard a finite vector space as a q-generalization of a set, and the subspaces as the q-generalization of the subsets of the set. This has been a fruitful point of view in finding interesting new theorems. For example, there are q-analogs of Sperner's theorem and Ramsey theory.
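The subspace-counting interpretation can be verified directly for small parameters. In the sketch below (helper names are ours), the Gaussian binomial is evaluated from q-factorials and compared with the ordered-basis count of k-dimensional subspaces of the n-dimensional space over a q-element field.

```python
def q_int(n, q):
    return sum(q**i for i in range(n))          # [n]_q = 1 + q + ... + q^(n-1)

def q_factorial(n, q):
    f = 1
    for k in range(1, n + 1):
        f *= q_int(k, q)
    return f

def gaussian_binomial(n, k, q):
    return q_factorial(n, q) // (q_factorial(k, q) * q_factorial(n - k, q))

def subspace_count(n, k, q):
    """Count k-dimensional subspaces of F_q^n: the number of linearly
    independent k-tuples divided by the number of ordered bases per subspace."""
    num = den = 1
    for i in range(k):
        num *= q**n - q**i
        den *= q**k - q**i
    return num // den

print(gaussian_binomial(4, 2, 3), subspace_count(4, 2, 3))  # both 130
```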

Tsallis q-theory
In Tsallis q-theory, one begins by defining the q-addition of two real numbers:[2]

x ⊕_q y = x + y + (1 − q)xy.

Q-addition is commutative, associative, has 0 as the identity element, and for q = 1 becomes the usual sum of x and y. By inversion, q-subtraction is defined:

x ⊖_q y = (x − y) / (1 + (1 − q)y).
The q-product is defined as

x ⊗_q y = [x^(1−q) + y^(1−q) − 1]^(1/(1−q)),

where the bracketed quantity is defined to mean x^(1−q) + y^(1−q) − 1 when it is positive, and 0 when it is not. Note that for q ≠ 1, q-division by zero is allowed. By inversion, q-division is defined as

x ⊘_q y = [x^(1−q) − y^(1−q) + 1]^(1/(1−q)).

Many q-functions may be defined using the above building blocks, such as the Tsallis q-exponential and its inverse, the Tsallis q-logarithm, which are defined as

e_q(x) = [1 + (1 − q)x]^(1/(1−q))   and   ln_q(x) = (x^(1−q) − 1) / (1 − q).


q → 1
Conversely to letting q vary and seeing q-analogs as deformations, one can consider the combinatorial case of q = 1 as a limit of q-analogs as q → 1 (often one cannot simply let q = 1 in the formulae, hence the need to take a limit). This can be formalized in the field with one element, which recovers combinatorics as linear algebra over the field with one element: for example, Weyl groups are simple algebraic groups over the field with one element.

References
[1] Ernst, Thomas (2003). "A Method for q-calculus" (http://www.solnaschack.org/ernst/ernst_10-4cor.pdf). Journal of Nonlinear Mathematical Physics 10 (4): 487–525. Retrieved 2011-07-27.
[2] Umarov, Sabir; Tsallis, Constantino and Steinberg, Stanly (2008). "On a q-Central Limit Theorem Consistent with Nonextensive Statistical Mechanics" (http://www.cbpf.br/GrupPesq/StatisticalPhys/pdftheo/UmarovTsallisSteinberg2008.pdf). Milan J. Math. (Birkhäuser Verlag) 76: 307–328. doi:10.1007/s00032-008-0087-y. Retrieved 2011-07-27.

q-analog (http://mathworld.wolfram.com/q-Analog.html) from MathWorld q-bracket (http://mathworld.wolfram.com/q-Bracket.html) from MathWorld q-factorial (http://mathworld.wolfram.com/q-Factorial.html) from MathWorld q-binomial coefficient (http://mathworld.wolfram.com/q-BinomialCoefficient.html) from MathWorld

External links
Hazewinkel, Michiel, ed. (2001), "Umbral calculus" (http://www.encyclopediaofmath.org/index.php?title=/ U/u095050), Encyclopedia of Mathematics, Springer, ISBN978-1556080104

q-Vandermonde identity
In mathematics, in the field of combinatorics, the q-Vandermonde identity is a q-analogue of the Chu–Vandermonde identity. Using standard notation for q-binomial coefficients, the identity states that

[m + n choose k]_q = Σ_j [m choose k − j]_q [n choose j]_q q^(j(m − k + j)).

(The nonzero contributions to this sum come from values of j such that the q-binomial coefficients on the right side are nonzero, that is, max(0, k − m) ≤ j ≤ min(n, k).)
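The identity as stated above can be sanity-checked numerically by evaluating both sides at a fixed rational value of q. The sketch below (helper names are ours) uses the product formula for the q-binomial coefficient and exhausts all small m, n, k.

```python
from fractions import Fraction

def q_binomial(n, k, q):
    """Gaussian binomial coefficient [n choose k]_q at a numeric q."""
    if k < 0 or k > n:
        return Fraction(0)
    num = den = Fraction(1)
    for i in range(1, k + 1):
        num *= 1 - q**(n - k + i)
        den *= 1 - q**i
    return num / den

def q_vandermonde_rhs(m, n, k, q):
    return sum(q_binomial(m, k - j, q) * q_binomial(n, j, q) * q**(j * (m - k + j))
               for j in range(0, k + 1))

q = Fraction(3)   # any value with q != 1 works for a numeric check
for m in range(5):
    for n in range(5):
        for k in range(m + n + 1):
            assert q_binomial(m + n, k, q) == q_vandermonde_rhs(m, n, k, q)
print("q-Vandermonde identity verified for small m, n, k at q =", q)
```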

Other conventions
As is typical for q-analogues, the q-Vandermonde identity can be rewritten in a number of ways. In the conventions common in applications to quantum groups, a different q-binomial coefficient is used; it is the unique shift of the "usual" q-binomial coefficient by a power of q such that the result is symmetric in q and q^(−1). Using this q-binomial coefficient, the q-Vandermonde identity can be written in a corresponding symmetric form.


Proofs of the identity


As with the (non-q) Chu–Vandermonde identity, there are several possible proofs of the q-Vandermonde identity. We give one proof here, using the q-binomial theorem.

One standard proof of the Chu–Vandermonde identity is to expand the product (1 + x)^m (1 + x)^n in two different ways. Following Stanley,[1] we can tweak this proof to prove the q-Vandermonde identity as well. First, observe that one side of the identity arises from expanding the corresponding product by the q-binomial theorem. Less obviously, we can write the product as a product of two subproducts, and we may expand both subproducts separately using the q-binomial theorem. Multiplying this latter product out, combining like terms, and finally equating powers of x between the two expressions yields the desired result.

This argument may also be phrased in terms of expanding a product of the form (A + B)^m (A + B)^n in two different ways, where A and B are operators (for example, a pair of matrices) that "q-commute," that is, that satisfy BA = qAB.

Notes
[1] Stanley (2011), Solution to exercise 1.100, p. 188.

References
Richard P. Stanley (2011). Enumerative Combinatorics, Volume 1 (http://www-math.mit.edu/~rstan/ec/ec1.pdf) (2nd ed.). Retrieved August 2, 2011.
Gaurav Bhatnagar (2011). "In Praise of an Elementary Identity of Euler". Electronic J. Combinatorics 18 (2), P13, 44 pp. arXiv:1102.0659.
Victor J. W. Guo (2008). "Bijective Proofs of Gould's and Rothe's Identities". Discrete Mathematics 308 (9): 1756. arXiv:1005.4256. doi:10.1016/j.disc.2007.04.020.
Sylvie Corteel; Carla Savage (2003). "Lecture Hall Theorems, q-series and Truncated Objects". arXiv:math/0309108 [math.CO].


Random permutation statistics


The statistics of random permutations, such as the cycle structure of a random permutation are of fundamental importance in the analysis of algorithms, especially of sorting algorithms, which operate on random permutations. Suppose, for example, that we are using quickselect (a cousin of quicksort) to select a random element of a random permutation. Quickselect will perform a partial sort on the array, as it partitions the array according to the pivot. Hence a permutation will be less disordered after quickselect has been performed. The amount of disorder that remains may be analysed with generating functions. These generating functions depend in a fundamental way on the generating functions of random permutation statistics. Hence it is of vital importance to compute these generating functions. The article on random permutations contains an introduction to random permutations.

The fundamental relation


Permutations are sets of labelled cycles. Using the labelled case of the fundamental theorem of combinatorial enumeration and writing for the set of permutations and for the singleton set, we have

Translating into exponential generating functions (EGFs), we have

where we have used the fact that the EGF of the set of permutations (there are n! permutations of n elements) is

This one equation will allow us to derive a surprising number of permutation statistics. Firstly, by dropping terms from , i.e. exp, we may constrain the number of cycles that a permutation contains, e.g. by restricting the EGF to we obtain permutations containing two cycles. Secondly, note that the EGF of labelled cycles, i.e. of , is

because there are k!/k labelled cycles. This means that by dropping terms from this generating function, we may constrain the size of the cycles that occur in a permutation and obtain an EGF of the permutations containing only cycles of a given size. Now instead of dropping, let's put different weights on different size cycles. If depends only on the size k of the cycle and for brevity we write is a weight function that

the value of b for a permutation to be the sum of its values on the cycles, then we may mark cycles of length k with ub(k) and obtain a bivariate generating function g(z,u) that describes the parameter, i.e.

This is a mixed generating function which is exponential in the permutation size and ordinary in the secondary parameter u. Differentiating and evaluating at u=1, we have

i.e. the EGF of the sum of b over all permutations, or alternatively the OGF, or more precisely the PGF (probability generating function), of the expectation of b. This article uses the coefficient extraction operator [z^n], documented on the page for formal power series.


Number of permutations that are involutions


An involution is a permutation σ such that σ² = 1 under permutation composition. It follows that σ may only contain cycles of length one or two, i.e. the EGF g(z) of these permutations is

g(z) = exp(z + z²/2).

This gives the explicit formula for the total number I(n) of involutions among the permutations of S_n:

I(n) = n! Σ_{k=0}^{⌊n/2⌋} 1 / ((n − 2k)! · 2^k · k!).

Dividing by n! yields the probability that a random permutation is an involution.
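The explicit formula can be checked against brute-force enumeration for small n, as in the following sketch (helper names are ours).

```python
from itertools import permutations
from math import factorial

def involutions_formula(n):
    """I(n) = n! * sum_k 1 / ((n-2k)! * 2^k * k!)."""
    return sum(factorial(n) // (factorial(n - 2*k) * 2**k * factorial(k))
               for k in range(n // 2 + 1))

def involutions_brute(n):
    return sum(1 for w in permutations(range(n))
               if all(w[w[i]] == i for i in range(n)))

for n in range(7):
    assert involutions_formula(n) == involutions_brute(n)
print([involutions_formula(n) for n in range(8)])  # 1, 1, 2, 4, 10, 26, 76, 232
```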

Number of permutations that are mth roots of unity


This generalizes the concept of an involution. An mth root of unity is a permutation σ such that σ^m = 1 under permutation composition. Now every time we apply σ we move one step in parallel along all of its cycles. A cycle of length d applied d times produces the identity permutation on d elements (d fixed points), and d is the smallest value to do so. Hence m must be a multiple of all cycle sizes d, i.e. the only possible cycles are those whose length d is a divisor of m. It follows that the EGF g(z) of these permutations is

g(z) = exp( Σ_{d | m} z^d / d ).

When m = p, where p is prime, this simplifies to

g(z) = exp( z + z^p / p ).

Number of permutations that are derangements


Suppose there are n people at a party, each of whom brought an umbrella. At the end of the party everyone picks an umbrella out of the stack of umbrellas and leaves. What is the probability that no one left with his/her own umbrella? This problem is equivalent to counting permutations with no fixed points, and hence the EGF (subtract out fixed points by removing the term z from the EGF of the cycles) g(z) is

g(z) = exp(−z) / (1 − z).

Now multiplication by 1/(1 − z) just sums coefficients, so that we have the following formula for D(n), the total number of derangements:

D(n) = n! Σ_{k=0}^{n} (−1)^k / k!.

Hence there are about n!/e derangements, and the probability that a random permutation is a derangement is about 1/e.

This result may also be proved by inclusion–exclusion. Using the sets A_p, where 1 ≤ p ≤ n, to denote the set of permutations that fix p, we have

|A_1 ∪ A_2 ∪ ... ∪ A_n| = Σ_p |A_p| − Σ_{p<q} |A_p ∩ A_q| + ... + (−1)^(n−1) |A_1 ∩ ... ∩ A_n|.


This formula counts the number of permutations that have at least one fixed point. The cardinalities are as follows: the intersection of any k of the sets A_p consists of the permutations fixing those k points, so it contains (n − k)! permutations.

Hence the number of permutations with no fixed point is

n! − C(n, 1)(n − 1)! + C(n, 2)(n − 2)! − ... + (−1)^n C(n, n)(n − n)!,

or

D(n) = Σ_{k=0}^{n} (−1)^k C(n, k)(n − k)! = n! Σ_{k=0}^{n} (−1)^k / k!,

and we have the claim.

There is a generalization of these numbers, known as the rencontres numbers: the number D(n, m) of permutations of [n] containing exactly m fixed points. The corresponding EGF is obtained by marking cycles of size one with the variable u, i.e. choosing b(k) equal to one for k = 1 and zero otherwise, which yields the generating function g(z, u) of the set of permutations by the number of fixed points:

It follows that

and hence

This immediately implies that D(n, m)/n! ≈ e^(−1)/m! for n large, m fixed.
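Both the derangement numbers and the rencontres numbers are easy to cross-check by brute force for small n; the following sketch (helper names are ours) uses the explicit inclusion–exclusion formula.

```python
from itertools import permutations
from math import factorial

def rencontres(n, m):
    """Number of permutations of n elements with exactly m fixed points:
    choose the m fixed points, then derange the remaining n - m elements."""
    derange = sum((-1)**k * factorial(n - m) // factorial(k) for k in range(n - m + 1))
    return factorial(n) // (factorial(m) * factorial(n - m)) * derange

def brute(n, m):
    return sum(1 for w in permutations(range(n))
               if sum(w[i] == i for i in range(n)) == m)

for n in range(7):
    for m in range(n + 1):
        assert rencontres(n, m) == brute(n, m)
print([rencontres(n, 0) for n in range(8)])  # derangements: 1, 0, 1, 2, 9, 44, 265, 1854
```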

One hundred prisoners


A prison warden wants to make room in his prison and is considering liberating one hundred prisoners, thereby freeing one hundred cells. He therefore assembles one hundred prisoners and asks them to play the following game: he lines up one hundred urns in a row, each containing the name of one prisoner, where every prisoner's name occurs exactly once. The game is played as follows: every prisoner is allowed to look inside fifty urns. If he or she does not find his or her name in one of the fifty urns, all prisoners will immediately be executed, otherwise the game continues. The prisoners have a few moments to decide on a strategy, knowing that once the game has begun, they will not be able to communicate with each other, mark the urns in any way or move the urns or the names inside them. Choosing urns at random, their chances of survival are almost zero, but there is a strategy giving them a 30% chance of survival, assuming that the names are assigned to urns randomly what is it? First of all, the survival probability using random choices is

(1/2)^100 ≈ 8 × 10^(−31), so this is definitely not a practical strategy.

The 30% survival strategy is to consider the contents of the urns to be a permutation of the prisoners, and traverse cycles. To keep the notation simple, assign a number to each prisoner, for example by sorting their names alphabetically. The urns may thereafter be considered to contain numbers rather than names. Now clearly the contents of the urns define a permutation. The first prisoner opens the first urn. If he finds his name, he has finished and survives. Otherwise he opens the urn with the number he found in the first urn. The process repeats: the prisoner opens an urn and survives if he finds his name, otherwise he opens the urn with the number just retrieved, up to a limit of fifty urns. The second prisoner starts with urn number two, the third with urn number three, and so on.

This strategy is precisely equivalent to a traversal of the cycles of the permutation represented by the urns. Every prisoner starts with the urn bearing his number and keeps on traversing his cycle up to a limit of fifty urns. The number of the urn that contains his number is the pre-image of that number under the permutation. Hence the prisoners survive if all cycles of the permutation contain at most fifty elements. We have to show that this probability is at least 30%. Note that this assumes that the warden chooses the permutation randomly; if the warden anticipates this strategy, he can simply choose a permutation with a cycle of length 51. To overcome this, the prisoners may agree in advance on a random permutation of their names.

We consider the general case of 2n prisoners and n urns being opened. We first calculate the complementary probability, i.e. that there is a cycle of more than n elements. With this in mind, we introduce


or

so that the desired probability is

because the cycle of more than find that

elements will necessarily be unique. Using the fact that

, we

which yields

Finally, using an integral estimate such as Euler–Maclaurin summation, or the asymptotic expansion of the nth harmonic number, we obtain

so that

or at least 30%, as claimed. A related result is that asymptotically, the expected length of the longest cycle is λn, where λ is the Golomb–Dickman constant, approximately 0.62. This example is due to Anna Gál and Peter Bro Miltersen; consult the paper by Peter Winkler for more information, and see the discussion on Les-Mathematiques.net. Consult the references on 100 prisoners for links to these references.
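The strategy is also easy to simulate. The Monte Carlo sketch below (helper names are ours) shuffles the urns, follows the cycles, and estimates the survival probability, which comes out close to 1 − ln 2.

```python
import random

def cycle_lengths(perm):
    seen = [False] * len(perm)
    for start in range(len(perm)):
        if not seen[start]:
            length, j = 0, start
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            yield length

def prisoners_survive(n=100, limit=50, trials=20000, seed=1):
    """Estimate the survival probability of the cycle-following strategy:
    everyone survives iff the random permutation has no cycle longer than limit."""
    random.seed(seed)
    wins = 0
    for _ in range(trials):
        urns = list(range(n))
        random.shuffle(urns)
        if all(length <= limit for length in cycle_lengths(urns)):
            wins += 1
    return wins / trials

print(prisoners_survive())   # roughly 0.31 (1 - ln 2 is about 0.3069 in the limit)
```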

The above computation may be performed in a simpler and more direct way, as follows: first note that a permutation of 2n elements contains at most one cycle of length strictly greater than n. Thus, if we denote by p_k the probability that a random permutation contains a cycle of length exactly k, then the probability of some cycle of length greater than n is simply the sum of the p_k over k > n, because such a cycle is necessarily unique.

For k > n, the number of permutations of 2n elements that contain a cycle of length exactly k is

C(2n, k) · (k − 1)! · (2n − k)! = (2n)!/k.

Explanation: C(2n, k) is the number of ways of choosing the k items in the cycle; (k − 1)! is the number of ways of arranging the k elements that constitute the cycle; and (2n − k)! is the number of ways to permute the remaining elements. Thus p_k = 1/k for k > n.

We conclude that the probability of a cycle of length greater than n is

Σ_{k=n+1}^{2n} 1/k = H_{2n} − H_n → ln 2,

so the survival probability tends to 1 − ln 2 ≈ 0.31.

Number of permutations containing m cycles

Applying the fundamental theorem of combinatorial enumeration, i.e. the labelled enumeration theorem with , to the set

we obtain the generating function

The term

yields the Stirling numbers of the first kind, i.e.

is the EGF of the unsigned Stirling numbers of the first kind.

We can compute the OGF of these numbers for n fixed, i.e.

Start with

which yields

Summing this, we obtain


Using the formula for

on the left, the definition of

on the right, and the binomial theorem, we obtain

Comparing the coefficients of

, and using the definition of the binomial coefficient, we finally have

a falling factorial.

Expected number of cycles of a given size m


In this problem we use a bivariate generating function g(z,u) as described in the introduction. The value of b for a cycle not of size m is zero, and one for a cycle of size m. We have

or

This means that the expected number of cycles of size m in a permutation of length n is zero when n is less than m (obviously). A random permutation of length at least m contains on average 1/m cycles of length m. In particular, a random permutation contains about one fixed point. The OGF of the expected number of cycles of length less than or equal to m is therefore

where H_m is the mth harmonic number. Hence the expected number of cycles of length at most m in a random permutation is about ln m.

Moments of fixed points


The mixed GF of the set of permutations by the number of fixed points is

Let the random variable X be the number of fixed points of a random permutation. Using Stirling numbers of the second kind, we have the following formula for the mth moment of X:

where

is a falling factorial. Using

, we have

which is zero when

, and one otherwise. Hence only terms with

contribute to the sum. This yields


Expected number of cycles of any length of a random permutation


We construct the bivariate generating function contributes one to the total number of cycles). Note that has the closed form using , where is one for all cycles (every cycle

and generates the unsigned Stirling numbers of the first kind. We have

Hence the expected number of cycles is H_n, the nth harmonic number, or about ln n.

Expected number of transpositions of a random permutation


We can use the disjoint cycle decomposition of a permutation to factorize it as a product of transpositions by replacing a cycle of length k by transpositions. E.g. the cycle factors as . The function for cycles is equal to and we obtain

and

Hence the expected number of transpositions

is

We could also have obtained this formula by noting that the number of transpositions is obtained by adding the lengths of all cycles (which gives n) and subtracting one for every cycle (which gives by the previous section). Note that precisely, we have again generates the unsigned Stirling numbers of the first kind, but in reverse order. More

To see this, note that the above is equivalent to

and that

which we saw to be the EGF of the unsigned Stirling numbers of the first kind in the section on permutations consisting of precisely m cycles.


Expected cycle size of a random element


We select a random element q of a random permutation and ask about the expected size of the cycle that contains q. Here the function is equal to , because a cycle of length k contributes k elements that are on cycles of length k. Note that unlike the previous computations, we need to average out this parameter after we extract it from the generating function (divide by n). We have

Hence the expected length of the cycle that contains q is

Probability that a random element lies on a cycle of size m


This average parameter represents the probability that if we again select a random element of permutation, the element lies on a cycle of size m. The function is equal to for of a random and zero

otherwise, because only cycles of length m contribute, namely m elements that lie on a cycle of length m. We have

It follows that the probability that a random element lies on a cycle of length m is

Probability that a random subset of [n] lies on the same cycle


Select a random subset Q of [n] containing m elements and a random permutation, and ask about the probability that all elements of Q lie on the same cycle. This is another average parameter. The function b(k) is equal to , because a cycle of length k contributes subsets of size m, where for k<m. This yields

Averaging out we obtain that the probability of the elements of Q being on the same cycle is

or

In particular, the probability that two elements p<q are on the same cycle is 1/2.


Number of permutations containing an even number of even cycles


We may use the Fundamental Theorem of Combinatorial Enumeration directly and compute more advanced permutation statistics. (Check that page for an explanation of how the operators we will use are computed.) For example, the set of permutations containing an even number of even cycles is given by

Translating to exponential generating functions (EGFs), we obtain

or

This simplifies to

or

This says that there is one permutation of size zero containing an even number of even cycles (the empty permutation, which contains zero cycles of even length), one such permutation of size one (the fixed point, which also contains zero cycles of even length), and that for , there are such permutations.

Permutations that are squares


Consider what happens when we square a permutation. Fixed points are mapped to fixed points. Odd cycles are mapped to odd cycles in a one-to-one correspondence, e.g. turns into . Even cycles split in two and produce a pair of cycles of half the size of the original cycle, e.g. cycles of size two, an even number of cycles of size four etc., and are given by which yields the EGF turns into . Hence permutations that are squares may contain any number of odd cycles, and an even number of

Odd cycle invariants


The types of permutations presented in the preceding two sections, i.e. permutations containing an even number of even cycles and permutations that are squares, are examples of so-called odd cycle invariants, studied by Sung and Zhang (see external links). The term odd cycle invariant simply means that membership in the respective combinatorial class is independent of the size and number of odd cycles occurring in the permutation. In fact we can prove that all odd cycle invariants obey a simple recurrence, which we will derive. First, here are some more examples of odd cycle invariants.


Permutations where the sum of the lengths of the even cycles is six
This class has the specification

and the generating function

The first few values are

Permutations where all even cycles have the same length


This class has the specification

and the generating function

There is a semantic nuance here. We could consider permutations containing no even cycles as belonging to this class, since zero is even. The first few values are

Permutations where the maximum length of an even cycle is four


This class has the specification

and the generating function

The first few values are

The recurrence
Observe carefully how the specifications of the even cycle component are constructed. It is best to think of them in terms of parse trees. These trees have three levels. The nodes at the lowest level represent sums of products of even-length cycles of the singleton . The nodes at the middle level represent restrictions of the set operator. Finally the node at the top level sums products of contributions from the middle level. Note that restrictions of the set operator, when applied to a generating function that is even, will preserve this feature, i.e. produce another even generating function. But all the inputs to the set operators are even since they arise from even-length cycles. The result is that all generating functions involved have the form

where

is an even function. This means that

is even, too, and hence


Letting

and extracting coefficients, we find that

which yields the recurrence

A problem from the 2005 Putnam competition


A link to the Putnam competition website appears in the section External links. The problem asks for a proof that

where the sum is over all if Now the sign of is odd, and is given by

permutations of

is the sign of .

, i.e.

if

is even and

is the number of fixed points of

where the product is over all cycles c of Hence we consider the combinatorial class

, as explained e.g. on the page on even and odd permutations.

where

marks one minus the length of a contributing cycle, and

marks fixed points. Translating to generating

functions, we obtain

or

Now we have

and hence the desired quantity is given by

Doing the computation, we obtain

or

Extracting coefficients, we find that the coefficient of the linear term is zero. The constant is one, which does not agree with the formula (it should be zero). For n positive, however, we obtain


or

which is the desired result. As an interesting aside, we observe that matrix: may be used to evaluate the following determinant of an

where

. Recall the formula for the determinant:

Now the value of the product on the right for a permutation . Hence

is

, where f is the number of fixed points of

which yields

and finally

External links
Alois Panholzer, Helmut Prodinger, Marko Riedel, Measuring post-quickselect disorder. [1] Putnam Competition Archive, William Lowell Putnam Competition Archive [2] Philip Sung, Yan Zhang, Recurring Recurrences in Counting Permutations [3]

100 prisoners
Anna Gál, Peter Bro Miltersen, The cell probe complexity of succinct data structures [4]
Peter Winkler, Seven puzzles you think you must not have heard correctly [5]
Various authors, Les-Mathematiques.net [6]. Cent prisonniers [7] (French)


References
[1] http://www.mathematik.uni-stuttgart.de/~riedelmo/papers/qsdis-jalc.pdf
[2] http://www.unl.edu/amc/a-activities/a7-problems/putnamindex.shtml
[3] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.1088&rep=rep1&type=pdf
[4] http://www.daimi.au.dk/~bromille/Papers/succinct.pdf
[5] http://www.math.dartmouth.edu/~pw/solutions.pdf
[6] http://les-mathematiques.net
[7] http://les-mathematiques.u-strasbg.fr/phorum5/read.php?12,341672

Road coloring problem


In graph theory the road coloring theorem, known until recently as the road coloring conjecture, deals with synchronized instructions. The issue involves whether by using such instructions, one can reach or locate an object or destination from any other point within a network (which might be a representation of city streets or a maze).[1] In the real world, this phenomenon would be as if you called a friend to ask for directions to his house, and he gave you a set of directions that worked no matter where you started from. This theorem also has implications in symbolic dynamics. The theorem was first conjectured by Roy Adler and Benjamin Weiss (1970). It was proved by Avraham Trahtman (2009).

Example and intuition


The image to the right shows a directed graph on eight vertices in which each vertex has out-degree 2. (Each vertex in this case also has in-degree 2, but that is not necessary for a synchronizing coloring to exist.) The edges of this graph have been colored red and blue to create a synchronizing coloring. For example, consider the vertex marked in yellow. No matter where in the graph you start, if you traverse all nine edges in the walk "blue-red-red blue-red-red blue-red-red", you will end up at the yellow vertex. Similarly, if you traverse all nine edges in the walk "blue-blue-red blue-blue-red blue-blue-red", you will always end up at the vertex marked in green, no matter where you started. The road coloring theorem states that for a certain category of directed graphs, it is always possible to create such a coloring.
A directed graph with a synchronizing coloring

Mathematical description
Let G be a finite directed graph where all the vertices have the same out-degree k. Let A be the alphabet containing the letters 1, ..., k. A synchronizing coloring (also known as a collapsible coloring) in G is a labeling of the edges in G with letters from A such that (1) each vertex has exactly one outgoing edge with a given label and (2) for every vertex v in the graph, there exists a word w over A such that all paths in G corresponding to w terminate at v. The terminology synchronizing coloring is due to the relation between this notion and that of a synchronizing word in finite automata theory. For such a coloring to exist at all, it is necessary that G be both strongly connected and aperiodic.[2] The road coloring theorem states that these two conditions are also sufficient for such a coloring to exist. Therefore, the road coloring problem can be stated briefly as:

Every finite strongly connected aperiodic directed graph of uniform out-degree has a synchronizing coloring.
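Whether a particular coloring is synchronizing can be tested mechanically: run the colored graph as a deterministic automaton on subsets of vertices and look for a word that collapses the full vertex set to a single vertex. The sketch below does this by breadth-first search over subsets; the four-vertex graph used here is a small hypothetical example of uniform out-degree 2, not the eight-vertex graph from the figure.

```python
from collections import deque

def is_synchronizing(delta):
    """delta[v][c] = vertex reached from v along the edge with color c.
    The coloring is synchronizing iff some word maps the full vertex set
    onto a single vertex (BFS over subsets of vertices)."""
    vertices = frozenset(delta)
    colors = list(next(iter(delta.values())))
    seen = {vertices}
    queue = deque([vertices])
    while queue:
        s = queue.popleft()
        if len(s) == 1:
            return True
        for c in colors:
            t = frozenset(delta[v][c] for v in s)
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False

# Hypothetical strongly connected, aperiodic graph with out-degree 2 at every vertex.
delta = {
    0: {'r': 1, 'b': 0},
    1: {'r': 2, 'b': 0},
    2: {'r': 3, 'b': 0},
    3: {'r': 0, 'b': 0},
}
print(is_synchronizing(delta))  # True: the single letter "b" already sends every vertex to 0
```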


Previous partial results


Previous partial or special-case results include the following:

If G is a finite strongly connected aperiodic directed graph with no multiple edges, and G contains a simple cycle of prime length which is a proper subset of G, then G has a synchronizing coloring. (O'Brien 1981)

If G is a finite strongly connected aperiodic directed graph (multiple edges allowed) and every vertex has the same in-degree and out-degree k, then G has a synchronizing coloring. (Kari 2003)

Notes
[1] Seigel-Itzkovich, Judy (2008-02-08). "Russian immigrant solves math puzzle" (http://www.jpost.com/Home/Article.aspx?id=91431). The Jerusalem Post. Retrieved 2010-09-13.
[2] Hegde & Jain (2005).

References
Adler, R.L.; Weiss, B. (1970), Similarity of automorphisms of the torus, Memoires of the American Mathematical Society, 98. Hegde, Rajneesh; Jain, Kamal (2005), "A min-max theorem about the road coloring conjecture" (http://www. emis.de/journals/DMTCS/pdfpapers/dmAE0155.pdf), Proc. EuroComb 2005, Discrete Mathematics & Theoretical Computer Science, pp.279284. Kari, Jarkko (2003), "Synchronizing finite automata on Eulerian digraphs", Theoretical Computer Science 295 (1-3): 223232, doi:10.1016/S0304-3975(02)00405-X. O'Brien, G. L. (1981), "The road-colouring problem", Israel Journal of Mathematics 39 (12): 145154, doi:10.1007/BF02762860. Trahtman, Avraham N. (2009), "The road coloring problem", Israel Journal of Mathematics 172 (1): 5160, arXiv:0709.0099, doi:10.1007/s11856-009-0062-5.


Rota–Baxter algebra
In mathematics, a Rota–Baxter algebra is an algebra, usually over a field k, together with a particular k-linear map R which satisfies the Rota–Baxter identity of a given weight. It appeared first in the work of the American mathematician Glen Baxter[1] in the realm of probability theory. Baxter's work was further explored from different angles by Gian-Carlo Rota,[2] [3] [4] Pierre Cartier,[5] and Frederic V. Atkinson,[6] among others. Baxter's derivation of this identity that later bore his name emanated from some of the fundamental results of the famous probabilist Frank Spitzer in random walk theory.[7] [8]

Definition and first properties


Let A be a k-algebra with a k-linear map R on A, and let λ be a fixed parameter in k. We call A a Rota–Baxter k-algebra and R a Rota–Baxter operator of weight λ if the operator R satisfies the following Rota–Baxter relation of weight λ:

R(x)R(y) = R(R(x)y) + R(xR(y)) + λR(xy).

The operator R̃ := −λ·id − R also satisfies the Rota–Baxter relation of weight λ.

Examples
Integration by parts

Integration by parts is an example of a Rota–Baxter algebra of weight 0. Let A be the algebra of continuous functions from the real line to the real line, and let f in A be a continuous function. Define integration as the Rota–Baxter operator

I(f)(x) = ∫_0^x f(t) dt.

Let G(x) = I(g)(x) and F(x) = I(f)(x). Then the formula for integration by parts can be written in terms of these variables as

∫_0^x F(t) g(t) dt + ∫_0^x f(t) G(t) dt = F(x) G(x).

In other words,

I(I(f) g) + I(f I(g)) = I(f) I(g),

which shows that I is a Rota–Baxter operator of weight 0.
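The weight-0 relation for the integration operator can be checked numerically. The sketch below (helper names are ours) discretizes I as a cumulative Riemann sum and compares I(f)I(g) with I(I(f)g) + I(f I(g)) for f = sin and g = exp.

```python
import math

def cumulative_integral(values, dt):
    """I(h) on a grid via left Riemann sums: out[k] approximates the integral of h over [0, k*dt]."""
    out = [0.0]
    for v in values[:-1]:
        out.append(out[-1] + v * dt)
    return out

n, x_max = 100_000, 1.0
dt = x_max / n
grid = [i * dt for i in range(n + 1)]
f = [math.sin(t) for t in grid]
g = [math.exp(t) for t in grid]

F = cumulative_integral(f, dt)          # I(f)
G = cumulative_integral(g, dt)          # I(g)
lhs = F[-1] * G[-1]                     # I(f)(x) * I(g)(x)
rhs = cumulative_integral([a * b for a, b in zip(F, g)], dt)[-1] \
    + cumulative_integral([a * b for a, b in zip(f, G)], dt)[-1]  # I(I(f)g) + I(f I(g))
print(lhs, rhs)   # the two values agree to several decimal places
```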

Spitzer identity
The Spitzer identity is named after the American mathematician Frank Spitzer. It is regarded as a remarkable stepping stone in the theory of sums of independent random variables in fluctuation theory of probability. It can naturally be understood in terms of Rota–Baxter operators.

Notes
[1] Baxter, G. (1960). "An analytic problem whose solution follows from a simple algebraic identity". Pacific J. Math. 10: 731–742.
[2] Rota, G.-C. (1969). "Baxter algebras and combinatorial identities, I, II". Bull. Amer. Math. Soc. 75: 325–329; ibid. 75, 330–334, (1969). Reprinted in: Gian-Carlo Rota on Combinatorics: Introductory papers and commentaries, J.P.S. Kung Ed., Contemp. Mathematicians, Birkhäuser Boston, Boston, MA, 1995.
[3] G.-C. Rota, Baxter operators, an introduction, In: Gian-Carlo Rota on Combinatorics, Introductory papers and commentaries, J.P.S. Kung Ed., Contemp. Mathematicians, Birkhäuser Boston, Boston, MA, 1995.
[4] G.-C. Rota and D. Smith, Fluctuation theory and Baxter algebras, Istituto Nazionale di Alta Matematica, IX, 179–201, (1972). Reprinted in: Gian-Carlo Rota on Combinatorics: Introductory papers and commentaries, J.P.S. Kung Ed., Contemp. Mathematicians, Birkhäuser Boston, Boston, MA, 1995.
[5] Cartier, P. (1972). "On the structure of free Baxter algebras". Advances in Math. 9: 253–265.
[6] Atkinson, F. V. (1963). "Some aspects of Baxter's functional equation". J. Math. Anal. Appl. 7: 1–30.
[7] Spitzer, F. (1956). "A combinatorial lemma and its application to probability theory". Trans. Amer. Math. Soc. 82: 323–339.
[8] Spitzer, F. (1976). Principles of random walks. Graduate Texts in Mathematics. 34 (Second ed.). New York, Heidelberg: Springer-Verlag.

External links
Li Guo. WHAT IS...a Rota-Baxter Algebra? (http://www.ams.org/notices/200911/rtx091101436p.pdf) Notices of the AMS, December 2009, Volume 56 Issue 11

Rule of product
In combinatorics, the rule of product or multiplication principle is a basic counting principle (a.k.a. the fundamental principle of counting). Stated simply, it is the idea that if we have a ways of doing something and b ways of doing another thing, then there are ab ways of performing both actions.[1]

For example, choosing one element of {A, B, C} and one element of {X, Y}, the rule says: multiply 3 by 2, getting 6. The sets {A, B, C} and {X, Y} in this example are disjoint, but that is not necessary. The number of ways to choose a member of {A, B, C}, and then to do so again, in effect choosing an ordered pair each of whose components is in {A, B, C}, is 3 × 3 = 9.

In set theory, this multiplication principle is often taken to be the definition of the product of cardinal numbers. We have

|S_1| · |S_2| · ... · |S_n| = |S_1 × S_2 × ... × S_n|,

where × is the Cartesian product operator. These sets need not be finite, nor is it necessary to have only finitely many factors in the product; see cardinal number.

Simple example
When you decide to order pizza, you must first choose the type of crust: thin or deep dish (2 choices). Next, you choose the topping: cheese, pepperoni, or sausage (3 choices). Using the rule of product, you know that there are 2 × 3 = 6 possible combinations of ordering a pizza.

References
[1] http://www.wtamu.edu/academic/anns/mps/math/mathlab/col_algebra/col_alg_tut55_count.htm


Rule of sum
In combinatorics, the rule of sum or addition principle is a basic counting principle. Stated simply, it is the idea that if we have a ways of doing something and b ways of doing another thing, and we cannot do both at the same time, then there are a + b ways to choose one of the actions. More formally, the rule of sum is a fact about set theory. It states that the sum of the sizes of a finite collection of pairwise disjoint sets is the size of the union of these sets. That is, if A_1, A_2, ..., A_n are pairwise disjoint sets, then we have:

|A_1| + |A_2| + ... + |A_n| = |A_1 ∪ A_2 ∪ ... ∪ A_n|.

Simple example
A woman has decided to shop at one store today, either in the north part of town or the south part of town. If she visits the north part of town, she will either shop at a mall, a furniture store, or a jewelry store (3 ways). If she visits the south part of town then she will either shop at a clothing store or a shoe store (2 ways). Thus there are 3+2=5 possible shops the woman could end up shopping at today.

Inclusion-exclusion principle
The inclusion-exclusion principle can be thought of as a generalization of the rule of sum in that it too enumerates the number of elements in the union of some sets (but does not require the sets to be disjoint). It states that if A_1, ..., A_n are finite sets, then

|A_1 ∪ ... ∪ A_n| = Σ_i |A_i| − Σ_{i<j} |A_i ∩ A_j| + Σ_{i<j<k} |A_i ∩ A_j ∩ A_k| − ... + (−1)^(n−1) |A_1 ∩ ... ∩ A_n|.

Semilinear set
In mathematics, a multiple arithmetic progression, generalized arithmetic progression, or k-dimensional arithmetic progression, is a set of integers constructed as an arithmetic progression is, but allowing several possible differences. So, for example, we start at 17 and may add a multiple of 3 or of 5, repeatedly. In algebraic terms we look at integers

a + mb + nc + ...

where a, b, c and so on are fixed, and m, n and so on are confined to some ranges (0 ≤ m ≤ M, and so on), for a finite progression. The number k, that is the number of permissible differences, is called the dimension of the generalized progression.

More generally, let L(C; P) be the set of all elements x in N^n of the form

x = c + k_1 p_1 + ... + k_m p_m,

with c in C, the k_i in N, and the p_i in P. The set L(C; P) is said to be a linear set if C consists of exactly one element and P is finite. A subset of N^n is said to be semilinear if it is a finite union of linear sets.

Separable permutation
In combinatorial mathematics, a separable permutation is a permutation that can be obtained from the trivial permutation 1 by direct sums and skew sums. Separable permutations can also be characterized as the permutations that contain neither 2413 nor 3142. They are enumerated by the Schröder numbers. Separable permutations first arose in the work of Avis & Newborn (1981), who showed that they are precisely the permutations which can be sorted by an arbitrary number of stacks in series. Shapiro & Stephens (1991) showed that the permutation matrix of a permutation π fills up under bootstrap percolation if and only if π is separable. The term "separable permutation" was introduced later by Bose, Buss & Lubiw (1998). Separable permutations are the permutation analogues of complement-reducible graphs and series-parallel partial orders.

References
Avis, David; Newborn, Monroe (1981), "On pop-stacks in series", Utilitas Mathematica 19: 129–140, MR0624050.
Bose, Prosenjit; Buss, Jonathan; Lubiw, Anna (1998), "Pattern matching for permutations", Information Processing Letters 65: 277–283, doi:10.1016/S0020-0190(97)00209-3, MR1620935.
Shapiro, Louis; Stephens, Arthur B. (1991), "Bootstrap percolation, the Schröder numbers, and the N-kings problem", SIAM Journal on Discrete Mathematics 4: 275–280, doi:10.1137/0404025, MR1093199.


Sequential dynamical system


Sequential dynamical systems (SDSs) are a class of graph dynamical systems. They are discrete dynamical systems which generalize many aspects of, for example, classical cellular automata, and they provide a framework for studying asynchronous processes over graphs. The analysis of SDSs uses techniques from combinatorics, abstract algebra, graph theory, dynamical systems and probability theory.

Definition
An SDS is constructed from the following components:

A finite graph Y with vertex set v[Y] = {1, 2, ..., n}. Depending on the context the graph can be directed or undirected.
A state x_i for each vertex i of Y, taken from a finite set K. The system state is the n-tuple x = (x_1, x_2, ..., x_n), and x[i] is the tuple consisting of the states associated to the vertices in the 1-neighborhood of i in Y (in some fixed order).
A vertex function f_i for each vertex i. The vertex function maps the state of vertex i at time t to the vertex state at time t + 1, based on the states associated to the 1-neighborhood of i in Y.
A word w = (w_1, w_2, ..., w_m) over v[Y].

It is convenient to introduce the Y-local maps F_i constructed from the vertex functions by

F_i(x) = (x_1, x_2, ..., x_{i−1}, f_i(x[i]), x_{i+1}, ..., x_n).

The word w specifies the sequence in which the Y-local maps are composed to derive the sequential dynamical system map F: K^n → K^n as

[F_Y, w] = F_{w_m} ∘ F_{w_{m−1}} ∘ ... ∘ F_{w_2} ∘ F_{w_1}.

If the update sequence is a permutation, one frequently speaks of a permutation SDS to emphasize this point. The phase space associated to a sequential dynamical system with map F: K^n → K^n is the finite directed graph with vertex set K^n and directed edges (x, F(x)). The structure of the phase space is governed by the properties of the graph Y, the vertex functions (f_i)_i, and the update sequence w. A large part of SDS research seeks to infer phase space properties based on the structure of the system constituents.

Example
Consider the case where Y is the graph with vertex set {1,2,3} and undirected edges {1,2}, {1,3} and {2,3} (a triangle or 3-circle) with vertex states from K = {0,1}. For vertex functions use the symmetric, boolean function nor: K^3 → K defined by nor(x, y, z) = (1 + x)(1 + y)(1 + z) with boolean arithmetic. Thus, the only case in which the function nor returns the value 1 is when all the arguments are 0. Pick w = (1,2,3) as update sequence. Starting from the initial system state (0,0,0) at time t = 0, one computes the state of vertex 1 at time t = 1 as nor(0,0,0) = 1. The state of vertex 2 at time t = 1 is nor(1,0,0) = 0. Note that the state of vertex 1 at time t = 1 is used immediately. Next one obtains the state of vertex 3 at time t = 1 as nor(1,0,0) = 0. This completes the update sequence, and one concludes that the Nor-SDS map sends the system state (0,0,0) to (1,0,0). The system state (1,0,0) is in turn mapped to (0,1,0) by an application of the SDS map.
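The Nor-SDS above is small enough to enumerate completely. The sketch below (with 0-based vertex labels; the names are ours) applies the local maps in the order given by the word and reproduces the transitions (0,0,0) → (1,0,0) → (0,1,0), then prints the whole phase-space map.

```python
from itertools import product

def nor(args):
    """Returns 1 exactly when all arguments are 0."""
    return int(not any(args))

# Closed 1-neighborhoods in the triangle: each vertex sees all three states.
neighbors = {0: (0, 1, 2), 1: (0, 1, 2), 2: (0, 1, 2)}
word = (0, 1, 2)   # the update sequence (1, 2, 3), relabelled from 0

def sds_map(state):
    x = list(state)
    for i in word:
        x[i] = nor(x[j] for j in neighbors[i])   # updated states are used immediately
    return tuple(x)

print(sds_map((0, 0, 0)))   # (1, 0, 0)
print(sds_map((1, 0, 0)))   # (0, 1, 0)
for s in product((0, 1), repeat=3):              # full phase-space map
    print(s, "->", sds_map(s))
```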


References
Henning S. Mortveit, Christian M. Reidys (2008). An Introduction to Sequential Dynamical Systems. Springer. ISBN 0387306544. Predecessor and Permutation Existence Problems for Sequential Dynamical Systems [1] Genetic Sequential Dynamical Systems [2]

References
[1] http://www.emis.de/journals/DMTCS/pdfpapers/dmAB0106.pdf
[2] http://arxiv.org/pdf/math.DS/0603370

Series multisection
In mathematics, a multisection of a power series is a new power series composed of equally-spaced terms extracted unaltered from the original. Formally, if one is given a power series

Σ_{n ≥ 0} a_n x^n,

then a multisection is a power series of the form

Σ_{m ≥ 0} a_{cm+d} x^(cm+d),

where c, d are integers, with 0 ≤ d < c.

Multisection of analytic functions


A multisection of the series of an analytic function

f(x) = Σ_{n ≥ 0} a_n x^n

has a closed-form expression in terms of the function f:

Σ_{m ≥ 0} a_{cm+d} x^(cm+d) = (1/c) Σ_{j=0}^{c−1} ω^(−dj) f(ω^j x),

where ω = exp(2πi/c) is a primitive c-th root of unity.

Example
Multisection of the binomial expansion

(1 + x)^n = Σ_k C(n, k) x^k

at x = 1 gives the following identity for the sum of binomial coefficients with step c:

C(n, d) + C(n, c + d) + C(n, 2c + d) + ... = (2^n / c) Σ_{j=0}^{c−1} cos^n(jπ/c) · cos(j(n − 2d)π/c).
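The root-of-unity filter behind this identity is easy to verify numerically; the sketch below (helper names are ours) compares the filtered value of (1 + x)^n at x = 1 with the directly computed sum of binomial coefficients in one residue class.

```python
import cmath
from math import comb

def multisection_sum(n, c, d):
    """Sum of C(n, k) over k congruent to d (mod c), via the root-of-unity
    filter applied to f(x) = (1 + x)^n at x = 1."""
    omega = cmath.exp(2j * cmath.pi / c)
    total = sum(omega**(-d * j) * (1 + omega**j)**n for j in range(c))
    return round((total / c).real)

n, c, d = 20, 3, 1
print(multisection_sum(n, c, d))
print(sum(comb(n, k) for k in range(d, n + 1, c)))   # same value
```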


References
Weisstein, Eric W., "Series Multisection [1]" from MathWorld. Somos, M. A Multisection of q-Series [2], 2006. John Riordan (1968). Combinatorial identities. New York: John Wiley and Sons.

References
[1] http://mathworld.wolfram.com/SeriesMultisection.html
[2] http://cis.csuohio.edu/~somos/multiq.html

Set packing
Set packing is a classical NP-complete problem in computational complexity theory and combinatorics, and was one of Karp's 21 NP-complete problems. Suppose we have a finite set S and a list of subsets of S. Then, the set packing problem asks if some k subsets in the list are pairwise disjoint (in other words, no two of them intersect).

More formally, given a universe U and a family F of subsets of U, a packing is a subfamily C ⊆ F of sets such that all sets in C are pairwise disjoint. In the set packing decision problem, the input is a pair (U, F) and an integer k; the question is whether there is a set packing of size k or more. In the set packing optimization problem, the input is a pair (U, F), and the task is to find a set packing that uses the most sets.

The problem is clearly in NP since, given k subsets, we can easily verify that they are pairwise disjoint. The optimization version of the problem, maximum set packing, asks for the maximum number of pairwise disjoint sets in the list. It is a maximization problem that can be formulated naturally as an integer linear program, belongs to the class of packing problems, and its dual linear program is the set cover problem.[1]

Integer linear program formulation


The maximum set packing problem can be formulated as the following integer linear program.

maximize      Σ_{S ∈ F} x_S                                 (maximize the total value)
subject to    Σ_{S : e ∈ S} x_S ≤ 1    for all e ∈ U        (selected sets have to be pairwise disjoint)
              x_S ∈ {0, 1}             for all S ∈ F        (every set is either in the set packing or not)

Example
As a simple example, suppose you're at a convention of foreign ambassadors, each of which speaks English and also various other languages. You want to make an announcement to a group of them, but because you don't trust them, you don't want them to be able to speak among themselves without you being able to understand them. To ensure this, you will choose a group such that no two ambassadors speak the same language, other than English. On the other hand you also want to give your announcement to as many ambassadors as possible. In this case, the elements of the set are languages other than English, and the subsets are the sets of languages spoken by a particular ambassador. If two sets are disjoint, those two ambassadors share no languages other than English. A maximum set packing will choose the largest possible number of ambassadors under the desired constraint. Although this problem is hard to solve in general, in this example a good heuristic is to choose ambassadors who only speak unusual languages first, so that not too many others are disqualified.


Heuristics and related problems


Set packing is one among a family of problems related to covering or partitioning the elements of a set. One closely related problem is the set cover problem. Here, we are also given a set S and a list of sets, but the goal is to determine whether we can choose k sets that together contain every element of S. These sets may overlap. The optimization version finds the minimum number of such sets. The maximum set packing need not cover every possible element.

One advantage of the set packing problem is that even if it's hard for some k, it's not hard to find a k for which it is easy on a particular input. For example, we can use a greedy algorithm where we look for the set which intersects the smallest number of other sets, add it to our solution, and remove the sets it intersects. We continually do this until no sets are left, and we have a set packing of some size, although it may not be the maximum set packing. Although no algorithm can always produce results close to the maximum (see next section), on many practical inputs these heuristics do so.

The NP-complete exact cover problem, on the other hand, requires every element to be contained in exactly one of the subsets. Finding such an exact cover at all, regardless of size, is an NP-complete problem. However, if we create a singleton set for each element of S and add these to the list, the resulting problem is about as easy as set packing. Karp originally showed set packing NP-complete via a reduction from the clique problem.

There is a weighted version of the set packing problem in which each subset is assigned a real weight and it is this weight we wish to maximize. In our example above, we might weight the ambassadors according to the populations of their countries, so that our announcement will reach the most people possible. This seems to make the problem harder, but as we explain below, most known results for the general problem apply to the weighted problem as well.
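The greedy heuristic described above is only a few lines of code. The sketch below (helper name ours, and assuming all input sets are non-empty) repeatedly selects the set with the fewest conflicts and discards everything it intersects.

```python
def greedy_set_packing(sets):
    """Greedy heuristic: repeatedly pick the set that intersects the fewest
    remaining sets, then remove every set that intersects it (including itself)."""
    remaining = [frozenset(s) for s in sets]   # assumes every set is non-empty
    packing = []
    while remaining:
        best = min(remaining,
                   key=lambda s: sum(1 for t in remaining if t is not s and s & t))
        packing.append(best)
        remaining = [t for t in remaining if not (t & best)]
    return packing

sets = [{1, 2}, {2, 3}, {3, 4}, {4, 5}, {5, 1}, {6}]
print(greedy_set_packing(sets))   # e.g. [frozenset({6}), frozenset({1, 2}), frozenset({3, 4})]
```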

Complexity
The set packing problem is not only NP-complete, but its optimization version (the general maximum set packing problem) has been proven as difficult to approximate as the maximum clique problem; in particular, it cannot be approximated within any constant factor. The best known algorithm approximates it within a factor of O(√|U|), where U is the universe. The weighted variant can also be approximated this well. However, the problem does have a variant which is more tractable: if we assume no subset exceeds k ≥ 3 elements, the answer can be approximated within a factor of k/2 + ε for any ε > 0; in particular, the problem with 3-element sets can be approximated within about 50%. In another more tractable variant, if no element occurs in more than k of the subsets, the answer can be approximated within a factor of k. This is also true for the weighted version.

Notes
[1] Vazirani (2001)

References
" set packing (http://www.nist.gov/dads/HTML/setpacking.html)". Dictionary of Algorithms and Data Structures, editor Paul E. Black, National Institute of Standards and Technology. Note that the definition here is somewhat different. Steven S. Skiena. " Set Packing (http://www2.toki.or.id/book/AlgDesignManual/BOOK/BOOK5/ NODE202.HTM)". The Algorithm Design Manual. Last modified June 2, 1997. Pierluigi Crescenzi, Viggo Kann, Magns Halldrsson, Marek Karpinski and Gerhard Woeginger. " Maximum Set Packing (http://www.nada.kth.se/~viggo/wwwcompendium/node144.html)". A compendium of NP optimization problems (http://www.nada.kth.se/~viggo/wwwcompendium/). Last modified March 20, 2000. Michael R. Garey and David S. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman. ISBN0-7167-1045-5. A3.1: SP3, pg.221.

Vazirani, Vijay V. (2001). Approximation Algorithms. Springer-Verlag. ISBN 3-540-65367-8.


External links
A Pascal program for solving the problem (http://www.cs.sunysb.edu/~algorith/implement/syslo/implement.shtml), from Discrete Optimization Algorithms with Pascal Programs by Maciej M. Sysło, ISBN 0-13-215509-5.
Benchmarks with Hidden Optimum Solutions for Set Covering, Set Packing and Winner Determination (http://www.nlsde.buaa.edu.cn/~kexu/benchmarks/set-benchmarks.htm)
Solving packaging problem in PHP (http://www.phpqa.in/2010/10/solving-packaging-problem-in-php.html)

Sharp-SAT
In computational complexity theory, #SAT, or Sharp-SAT, is a function problem related to the Boolean satisfiability problem: it asks for the number of satisfying assignments of a given Boolean formula. It is a well-known example of the class of counting problems, which are of special interest in computational complexity theory. It is a #P-complete problem, as any NP machine can be encoded into a Boolean formula by a process similar to that in Cook's theorem, such that the number of satisfying assignments of the Boolean formula is equal to the number of accepting paths of the NP machine. Any formula in SAT can be rewritten as a formula in 3-CNF form preserving the number of satisfying assignments, and so #SAT and #3SAT are equivalent and #3SAT is #P-complete as well.
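For very small formulas the counting problem can be illustrated by brute force. The sketch below assumes a CNF formula given as a list of clauses of signed integers (a common DIMACS-style convention); it only illustrates what #SAT counts and is not a practical solver:

from itertools import product

def count_sat(clauses, n_vars):
    """Count satisfying assignments of a CNF formula.
    Clauses are lists of nonzero ints: +i means variable i, -i its negation."""
    count = 0
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x3): 4 of the 8 assignments satisfy it.
print(count_sat([[1, 2], [-1, 3]], 3))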

Shortest common supersequence


In computer science, the shortest common supersequence problem is a problem closely related to the longest common subsequence problem. Given two sequences X = < x1,...,xm > and Y = < y1,...,yn >, a sequence U = < u1,...,uk > is a common supersequence of X and Y if U is a supersequence of both X and Y. In other words, a common supersequence of strings x and y is a string z such that both x and y are subsequences of z. The shortest common supersequence (scs) is a common supersequence of minimal length. In the shortest common supersequence problem, the two sequences X and Y are given and the task is to find a shortest possible common supersequence of these sequences. In general, the scs is not unique.

For two input sequences, an scs can be formed from an lcs easily: by inserting the non-lcs symbols into the lcs while preserving the symbol order, we get an scs, and its length satisfies |scs| = m + n − |lcs| for two input sequences. However, for three or more input sequences this does not hold. Note also that the lcs and the scs problems are not dual problems.
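A shortest common supersequence of two strings can be computed by dynamic programming over prefixes, mirroring the LCS recurrence. The following sketch (function name and traceback details are illustrative, not from the article) returns one scs:

def shortest_common_supersequence(x, y):
    """table[i][j] = length of an scs of x[:i] and y[:j]."""
    m, n = len(x), len(y)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        for j in range(n + 1):
            if i == 0:
                table[i][j] = j
            elif j == 0:
                table[i][j] = i
            elif x[i - 1] == y[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = 1 + min(table[i - 1][j], table[i][j - 1])
    # trace back to recover one scs
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1]); i -= 1; j -= 1
        elif table[i - 1][j] < table[i][j - 1]:
            out.append(x[i - 1]); i -= 1
        else:
            out.append(y[j - 1]); j -= 1
    out.extend(reversed(x[:i])); out.extend(reversed(y[:j]))
    return "".join(reversed(out))

u = shortest_common_supersequence("abcbdab", "bdcaba")
print(u, len(u))   # the length equals m + n minus the LCS length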


References
Michael R. Garey and David S. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman. ISBN0-7167-1045-5. A4.2: SR8, pg.228.

External links
Dictionary of Algorithms and Data Structures: shortest common supersequence [1]

References
[1] http://nist.gov/dads/HTML/shortestCommonSuperseq.html

Shuffle algebra
In mathematics, a shuffle algebra is a Hopf algebra with a basis corresponding to words on some set, whose product is given by the shuffle product X ш Y of two words X, Y: the sum of all ways of interlacing them. The shuffle product was introduced by Eilenberg & Mac Lane (1953). The name "shuffle product" refers to the fact that the product can be thought of as a sum over all ways of riffle shuffling two words together. The shuffle algebra on a finite set is the graded dual of the universal enveloping algebra of the free Lie algebra on the set. Over the rational numbers, the shuffle algebra is isomorphic to the polynomial algebra in the Lyndon words. The shuffle product of two words in some alphabet is often denoted by ш (a Cyrillic sha) or by the Unicode character SHUFFLE PRODUCT, U+29E2 (⧢).

Definition
The shuffle product of words of lengths m and n is a sum over the (m+n)!/(m!n!) ways of interleaving the two words, as shown in the following examples:

ab ш xy = abxy + axby + xaby + axyb + xayb + xyab
aaa ш aa = 10 aaaaa
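A direct recursive sketch of this definition, returning each interleaving together with its coefficient (the function name is illustrative):

from collections import Counter

def shuffle(u, v):
    """Shuffle product of words u and v as a Counter mapping each
    interleaving to its coefficient."""
    if not u:
        return Counter({v: 1})
    if not v:
        return Counter({u: 1})
    result = Counter()
    for w, c in shuffle(u[1:], v).items():
        result[u[0] + w] += c
    for w, c in shuffle(u, v[1:]).items():
        result[v[0] + w] += c
    return result

print(shuffle("ab", "xy"))    # six interleavings, each with coefficient 1
print(shuffle("aaa", "aa"))   # {'aaaaa': 10}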

References
Eilenberg, Samuel; Mac Lane, Saunders (1953), "On the groups of H(Π, n). I", Annals of Mathematics. Second Series 58: 55–106, ISSN 0003-486X, JSTOR 1969820, MR 0056295
Green, J. A. (1995), Shuffle algebras, Lie algebras and quantum groups [1], Textos de Matemática. Série B, 9, Coimbra: Universidade de Coimbra, Departamento de Matemática, MR 1399082
Hazewinkel, M. (2001), "Shuffle algebra" [2], in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1556080104
Reutenauer, Christophe (1993), Free Lie algebras [3], London Mathematical Society Monographs. New Series, 7, The Clarendon Press, Oxford University Press, ISBN 978-0-19-853679-6, MR 1231799


External links
Shuffle product symbol [4]

References
[1] http://books.google.com/books?id=jxvvAAAAMAAJ
[2] http://www.encyclopediaofmath.org/index.php?title=S/s110110
[3] http://books.google.com/books?id=cBvvAAAAMAAJ
[4] http://mirror.ctan.org/fonts/shuffle/shuffle.pdf

Sicherman dice
Sicherman dice (/ˈsɪkərmən/) are the only pair of 6-sided dice that are not normal dice, bear only positive integers, and have the same probability distribution for the sum as normal dice. The faces on the dice are numbered 1, 2, 2, 3, 3, 4 and 1, 3, 4, 5, 6, 8.

Mathematics
Crazy dice is a standard mathematical problem or puzzle in elementary combinatorics, involving a re-labeling of the faces of a pair of six-sided dice to reproduce the same frequency of sums as the standard labeling. It is a standard exercise in elementary combinatorics to calculate the number of ways of rolling any given value with a pair of fair six-sided dice (by taking the sum of the two rolls). The table shows the number of such ways of rolling a given value n:
n          2  3  4  5  6  7  8  9  10  11  12
# of ways  1  2  3  4  5  6  5  4   3   2   1

A question arises whether there are other ways of re-labeling the faces of the dice with positive integers that generate these sums with the same frequencies. The surprising answer to this question is that there does indeed exist such a way. These are the Sicherman dice. The table below lists all possible totals of dice rolls with standard dice and Sicherman dice. In each Sicherman sum, the first summand comes from the 1–2–2–3–3–4 die and the second from the 1–3–4–5–6–8 die.
Total   Standard dice                    Sicherman dice
2       1+1                              1+1
3       1+2, 2+1                         2+1, 2+1
4       1+3, 2+2, 3+1                    1+3, 3+1, 3+1
5       1+4, 2+3, 3+2, 4+1               1+4, 2+3, 2+3, 4+1
6       1+5, 2+4, 3+3, 4+2, 5+1          1+5, 2+4, 2+4, 3+3, 3+3
7       1+6, 2+5, 3+4, 4+3, 5+2, 6+1     1+6, 2+5, 2+5, 3+4, 3+4, 4+3
8       2+6, 3+5, 4+4, 5+3, 6+2          2+6, 2+6, 3+5, 3+5, 4+4
9       3+6, 4+5, 5+4, 6+3               1+8, 3+6, 3+6, 4+5
10      4+6, 5+5, 6+4                    2+8, 2+8, 4+6
11      5+6, 6+5                         3+8, 3+8
12      6+6                              4+8


History
These dice were discovered by Colonel George Sicherman, of Buffalo, New York and were originally reported by Martin Gardner in a 1978 article in Scientific American. The numbers can be arranged so that all pairs of numbers on opposing sides sum to equal numbers, 5 for the first and 9 for the second. Later, in a letter to Sicherman, Gardner mentioned that a magician he knew had anticipated Sicherman's discovery. For generalizations of the Sicherman dice to more than two dice and noncubical dice, see Broline (1979), Gallian and Rusin (1979), Brunson and Swift (1997/1998), and Fowler and Swift (1999).

Mathematical justification
Let a canonical n-sided die be an n-hedron whose faces are marked with the integers [1, n] such that the probability of throwing each number is 1/n. Consider the canonical cubical (six-sided) die. The generating function for the throws of such a die is

x + x^2 + x^3 + x^4 + x^5 + x^6.

The product of this polynomial with itself yields the generating function for the throws of a pair of dice:

(x + x^2 + x^3 + x^4 + x^5 + x^6)^2.

From the theory of cyclotomic polynomials, we know that

x^n − 1 = ∏_{d | n} Φ_d(x),

where d ranges over the divisors of n and Φ_d(x) is the d-th cyclotomic polynomial. We note also that

(x^n − 1)/(x − 1) = 1 + x + x^2 + ⋯ + x^{n−1}.

We therefore derive the generating function of a single n-sided canonical die as being

x + x^2 + ⋯ + x^n = x · (x^n − 1)/(x − 1) = x · ∏_{d | n, d > 1} Φ_d(x),

and Φ_1(x) = x − 1 is canceled. Thus the factorization of the generating function of a six-sided canonical die is

x Φ_2(x) Φ_3(x) Φ_6(x) = x (x + 1)(x^2 + x + 1)(x^2 − x + 1).

The generating function for the throws of two dice is the product of two copies of each of these factors. How can we partition them to form two legal dice whose spots are not arranged traditionally? Here legal means that the coefficients are non-negative and sum to six, so that each die has six sides and every face has at least one spot. Only one such partition exists:

x (x + 1)(x^2 + x + 1) = x + 2x^2 + 2x^3 + x^4

and

x (x + 1)(x^2 + x + 1)(x^2 − x + 1)^2 = x + x^3 + x^4 + x^5 + x^6 + x^8.

This gives us the distribution of spots on the faces of a pair of Sicherman dice as being {1,2,2,3,3,4} and {1,3,4,5,6,8}, as above. This technique can be extended for dice with an arbitrary number of sides.
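The equality of the two sum distributions is easy to confirm numerically; the following short sketch is only a sanity check of the result above:

from collections import Counter
from itertools import product

standard = [1, 2, 3, 4, 5, 6]
sicherman_a = [1, 2, 2, 3, 3, 4]
sicherman_b = [1, 3, 4, 5, 6, 8]

dist_standard = Counter(a + b for a, b in product(standard, standard))
dist_sicherman = Counter(a + b for a, b in product(sicherman_a, sicherman_b))
print(dist_standard == dist_sicherman)   # True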


References
Broline, D. (1979), "Renumbering of the faces of dice", Mathematics Magazine 52 (5): 312–315, doi:10.2307/2689786, JSTOR 2689786
Brunson, B. W.; Swift, Randall J. (1997/8), "Equally likely sums", Mathematical Spectrum 30 (2): 34–36
Fowler, Brian C.; Swift, Randall J. (1999), "Relabeling dice", College Mathematics Journal 30 (3): 204–208, doi:10.2307/2687599, JSTOR 2687599
Gallian, J. A.; Rusin, D. J. (1979), "Cyclotomic polynomials and nonstandard dice", Discrete Mathematics 27 (3): 245–259, doi:10.1016/0012-365X(79)90161-4, MR 0541471
Gardner, Martin (1978), "Mathematical Games", Scientific American 238 (2): 19–32, doi:10.1038/scientificamerican0278-19

External links
Grand Illusion's Informational Page [1]
Mathworld's Information Page [2]
This article incorporates material from Crazy dice on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.

References
[1] http://www.grand-illusions.com/acatalog/Sicherman_Dice.html
[2] http://mathworld.wolfram.com/SichermanDice.html

Sidon sequence
In number theory, a Sidon sequence (or Sidon set), named after the Hungarian mathematician Simon Sidon, is a sequence A = {a0, a1, a2, ...} of natural numbers in which all pairwise sums ai + aj (i ≤ j) are different. Sidon introduced the concept in his investigations of Fourier series.

The main problem, posed by Sidon[1], is how many elements A can have up to some number x. Despite a large body of research[2] the question remained unsolved for almost 80 years. Recently, it was finally settled[3] by J. Cilleruelo, I. Ruzsa and C. Vinuesa.

Paul Erdős and Pál Turán proved that the number of elements of A up to x is at most √x + O(x^{1/4}), and using a construction of J. Singer they get a lower bound of (1 − o(1))·√x. If, however, we consider an infinite Sidon sequence A and let A(x) denote the number of its elements up to x, then Erdős showed that

liminf_{x→∞} A(x)·√(log x)/√x < ∞,

that is, infinite Sidon sequences are thinner than the above bound for finite sequences. For the other direction, Chowla and Mian observed that the greedy algorithm gives an infinite Sidon sequence with A(x) > c·x^{1/3} for every x. Ajtai, Komlós, and Szemerédi improved this with a construction[4] of a Sidon sequence with

A(x) > c·(x log x)^{1/3}.

The best lower bound to date was given by Imre Z. Ruzsa, who proved[5] that a Sidon sequence with

A(x) > x^{√2 − 1 + o(1)}

exists. Erdős conjectured that an infinite Sidon set A exists for which A(x) > x^{1/2 − ε} holds for every ε > 0. He and Rényi[6] showed the existence of such a sequence a0, a1, ... with the weaker property that for every natural number n there are at most c solutions of the equation ai + aj = n for some constant c.

Erdős further conjectured that there exists a nonconstant integer-coefficient polynomial whose values at the natural numbers form a Sidon sequence. Specifically, he asked if the set of fifth powers is a Sidon set. Ruzsa came close to this by showing that there is a real number 0 < c < 1 such that the range of the function f(x) = x^5 + [c·x^4] is a Sidon sequence, where [.] denotes integer part. As c is irrational, f(x) is not a polynomial.
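The greedy construction mentioned above (its output is often called the Mian–Chowla sequence) can be sketched as follows; the function name is illustrative:

def mian_chowla(n_terms):
    """Greedily build a Sidon sequence: take the smallest integer whose
    pairwise sums with the chosen terms (including itself) are all new."""
    seq, sums = [], set()
    candidate = 1
    while len(seq) < n_terms:
        new_sums = {candidate + a for a in seq} | {2 * candidate}
        if not (new_sums & sums):
            seq.append(candidate)
            sums |= new_sums
        candidate += 1
    return seq

print(mian_chowla(10))   # [1, 2, 4, 8, 13, 21, 31, 45, 66, 81]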

Relationship to Golomb rulers


All finite Sidon sets are Golomb rulers, and vice versa. To see this, suppose for a contradiction that S is a Sidon set and not a Golomb ruler. Since it is not a Golomb ruler, there must be four members such that a_i − a_j = a_k − a_l with {i, j} ≠ {k, l}. It follows that a_i + a_l = a_k + a_j, which contradicts the proposition that S is a Sidon set. Therefore all Sidon sets must be Golomb rulers. By a similar argument, all Golomb rulers must be Sidon sets.

References
[1] Erdős, P.; Turán, P. (1941), "On a problem of Sidon in additive number theory and on some related problems" (http://www.renyi.hu/~p_erdos/1941-01.pdf), J. London Math. Soc. 16: 212–215, doi:10.1112/jlms/s1-16.4.212. Addendum (http://www.math-inst.hu/~p_erdos/1944-02.pdf), 19 (1944), 208.
[2] O'Bryant, K. (2004), "A complete annotated bibliography of work related to Sidon sequences" (http://www.emis.ams.org/journals/EJC/Surveys/ds11.pdf), Electronic Journal of Combinatorics 11: 39.
[3] Cilleruelo, J.; Ruzsa, I.; Vinuesa, C. (2010), "Generalized Sidon sets", Advances in Mathematics 225: 2786–2807.
[4] Ajtai, M.; Komlós, J.; Szemerédi, E. (1981), "A dense infinite Sidon sequence", European Journal of Combinatorics 2 (1): 1–11, MR 0611925.
[5] Ruzsa, I. Z. (1998), "An infinite Sidon sequence", Journal of Number Theory 68: 63–71, doi:10.1006/jnth.1997.2192, MR 1492889.
[6] Erdős, P.; Rényi, A. (1960), "Additive properties of random sequences of positive integers" (http://www.renyi.hu/~p_erdos/1960-02.pdf), Acta Arithmetica 6: 83–110, MR 0120213.


Sim (pencil game)


The game of Sim is played by two players on a board consisting of six dots ('vertices'). Each dot is connected to every other dot by a line. Two players take turns coloring any uncolored lines. One player colors in one color, and the other colors in another color, with each player trying to avoid the creation of a triangle made solely of their color; the player who completes such a triangle loses immediately.

Ramsey theory shows that no game of Sim can end in a tie. Specifically, since the Ramsey number R(3,3) = 6, any two-coloring of the complete graph on 6 vertices (K6) must contain a monochromatic triangle, and therefore is not a tied position. This will also apply to any super-graph of K6.

Computer search has verified that the second player can win Sim with perfect play, but finding a perfect strategy that humans can easily memorize is an open problem. A Java applet is available[1] for online play against a computer program. A technical report[2] by Wolfgang Slany is also available online, with many references to literature on Sim, going back to the game's introduction by Gustavus Simmons in 1969.

This game of Sim is one example of a Ramsey game. Other Ramsey games are possible. For instance, according to Ramsey theory any three-coloring of the complete graph on 17 vertices must contain a monochromatic triangle. A corresponding Ramsey game uses pencils of three colors. One approach can have three players compete, while another would allow two players to alternately select any of the three colors to paint an edge of the graph, until a player loses by completing a monochromatic triangle. It is unknown whether this latter game is a first or a second player win.
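The fact that R(3,3) = 6, i.e. that no 2-coloring of K6 avoids a monochromatic triangle, can be confirmed by exhaustive search over all 2^15 edge colorings. The following sketch is an illustration, not part of the original article:

from itertools import combinations, product

vertices = range(6)
edges = list(combinations(vertices, 2))       # the 15 edges of K6
triangles = list(combinations(vertices, 3))

def has_mono_triangle(coloring):
    color = dict(zip(edges, coloring))
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in triangles)

# Check all 2^15 = 32768 colorings: every one contains a monochromatic triangle.
print(all(has_mono_triangle(c) for c in product([0, 1], repeat=len(edges))))  # True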

External links
[1] Java applet page (http://www.dbai.tuwien.ac.at/proj/ramsey/) on the website of the Vienna University of Technology
[2] Graph Ramsey Games by Wolfgang Slany (http://arxiv.org/abs/cs/9911004) at arXiv


Singmaster's conjecture
In combinatorial number theory, Singmaster's conjecture, named after David Singmaster, says there is a finite upper bound on the multiplicities of entries in Pascal's triangle (other than the number 1, which appears infinitely many times). It is clear that the only number that appears infinitely many times in Pascal's triangle is 1, because any other number x can appear only within the first x + 1 rows of the triangle. Paul Erdős said that Singmaster's conjecture is probably true but he suspected it would be very hard to prove. Let N(a) be the number of times the number a > 1 appears in Pascal's triangle. In big O notation, the conjecture is:

N(a) = O(1).

Known results
Singmaster (1971) showed that

N(a) = O(log a).

Abbott, Erdős, and Hanson (see References) refined the estimate. The best currently known (unconditional) bound is

N(a) = O( (log a)(log log log a) / (log log a)^3 ),

and is due to Kane (2007). Abbott, Erdős, and Hanson note that, conditional on Cramér's conjecture on gaps between consecutive primes,

N(a) = O( (log a)^{2/3 + ε} )

holds for any ε > 0.

Singmaster (1975) showed that the Diophantine equation

C(n + 1, k + 1) = C(n, k + 2)

has infinitely many solutions for the two variables n, k. It follows that there are infinitely many entries of multiplicity at least 6. The solutions are given by

n = F_{2i+2} F_{2i+3} − 1,   k = F_{2i} F_{2i+3} − 1,

where F_n is the nth Fibonacci number (indexed according to the convention that F_1 = F_2 = 1).

Numerical examples
Computation tells us that
2 appears just once; all larger positive integers appear more than once;
3, 4, 5 each appear 2 times;
6 appears 3 times;
many numbers appear 4 times.
Each of the following appears 6 times: 120, 210, 1540, 7140, 11628, 24310.


The smallest number to appear 8 times is 3003, which is also the first member of Singmaster's infinite family of numbers with multiplicity at least 6:

3003 = C(3003, 1) = C(78, 2) = C(15, 5) = C(14, 6).

The next number in Singmaster's infinite family, and the next smallest number known to occur six or more times, is 61218182743304701891431482520. It is not known whether any number appears more than eight times, nor whether any number besides 3003 appears that many times. The conjectured finite upper bound could be as small as 8, but Singmaster thought it might be 10 or 12.

Do any numbers appear exactly five or seven times?


It would appear from a related entry, (sequence A003015 in OEIS) in the On-Line Encyclopedia of Integer Sequences, that no one knows whether the equation N(a) = 5 can be solved for a. Nor is it known whether any number appears seven times.
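For small values, N(a) can be computed directly by scanning the first a + 1 rows of Pascal's triangle; the following brute-force sketch (illustrative only) reproduces the multiplicities quoted above:

from math import comb

def multiplicity(a):
    """N(a) for a > 1: count appearances of a among binomial coefficients.
    Any a > 1 can appear only within its first a + 1 rows; within a row,
    entries increase up to the middle, so we can stop early."""
    count = 0
    for n in range(a + 1):
        for k in range(n // 2 + 1):
            c = comb(n, k)
            if c > a and k >= 1:
                break
            if c == a:
                count += 1 if 2 * k == n else 2   # include the symmetric entry
    return count

print([(a, multiplicity(a)) for a in (2, 3, 6, 120, 3003)])
# multiplicity(6) == 3 and multiplicity(3003) == 8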

References
Singmaster, D. (1971), "Research Problems: How often does an integer occur as a binomial coefficient?", American Mathematical Monthly 78 (4): 385–386, doi:10.2307/2316907, JSTOR 2316907, MR 1536288.
Singmaster, D. (1975), "Repeated binomial coefficients and Fibonacci numbers" [1], Fibonacci Quarterly 13 (4): 295–298, MR 0412095.
Abbott, H. L.; Erdős, P.; Hanson, D. (1974), "On the number of times an integer occurs as a binomial coefficient", American Mathematical Monthly 81 (3): 256–261, doi:10.2307/2319526, JSTOR 2319526, MR 0335283.
Kane, Daniel M. (2007), "Improved bounds on the number of ways of expressing t as a binomial coefficient" [2], Integers: Electronic Journal of Combinatorial Number Theory 7: #A53, MR 2373115.

External links
(sequence A003016 in OEIS) (OEIS = Online Encyclopedia of Integer Sequences)

References
[1] http://www.fq.math.ca/Scanned/13-4/singmaster.pdf
[2] http://www.emis.de/journals/INTEGERS/papers/h53/h53.pdf


Small set (combinatorics)


In combinatorial mathematics, a small set of positive integers S = {s1, s2, s3, ...} is one such that the infinite sum of reciprocals

1/s1 + 1/s2 + 1/s3 + ⋯

converges. A large set is any other set of positive integers (i.e. one whose sum of reciprocals diverges).

Examples
The set of all positive integers is known to be a large set (see Harmonic series), and so is the set obtained from any arithmetic sequence (i.e. the set of all an + b with a ≥ 0, b ≥ 1 and n = 0, 1, 2, 3, ...), where a = 0, b = 1 gives the multiset {1, 1, 1, ...} and a = 1, b = 1 gives {1, 2, 3, 4, ...}.

The set of square numbers is small (see Basel problem). So is the set of cube numbers, the set of 4th powers, and so on. And so is the set of values of any polynomial of degree k ≥ 2 (i.e. numbers of the form a_k n^k + ⋯ + a_1 n + a_0 with a_i ≥ 0 for all i, a_i > 0 for at least one i ≥ 2, and n = 0, 1, 2, 3, ...). Polynomials of degree k < 2 give an arithmetic sequence (which forms a large set); polynomials of degree 2 give a quadratic sequence, which forms a small set; polynomials of degree 3 give a cubic sequence, which also forms a small set; and so on.

The set {1, 2, 4, 8, ...} of powers of 2 is known to be a small set, and so is the set given by any geometric sequence (i.e. numbers of the form a·b^n with a ≥ 1, b ≥ 2 and n = 0, 1, 2, 3, ...).

The set of prime numbers has been proven to be large. The set of twin primes has been proven to be small (see Brun's constant). A property of prime powers used frequently in analytic number theory is that the set of prime powers which are not prime (i.e. all p^n with n ≥ 2) is a small set although the primes are a large set. The set of perfect powers is small.

The set of numbers whose decimal representations exclude 7 (or any digit one prefers) is small; that is, for example, the set

{1, 2, 3, 4, 5, 6, 8, 9, 10, 11, ...}

of integers containing no digit 7 is small. (This has been generalized to other bases as well.) See Kempner series.

Properties
A union of finitely many small sets is small, as the sum of two convergent series is a convergent series. A union of infinitely many small sets is either a small set (e.g. the union over all primes p of the sets {p^2, p^3, p^4, ...}) or a large set (e.g. the singletons {k} for k > 0, whose union is the set of all positive integers). Also, a large set minus a small set is still large. A large set minus a large set is either a small set (e.g. the set of all prime powers p^n with n ≥ 1 minus the set of all primes) or a large set (e.g. the set of all positive integers minus the set of all positive even numbers). In set-theoretic terminology, the small sets form an ideal.

The Müntz–Szász theorem states that a set S = {s1, s2, s3, ...} is large if and only if the linear span of the monomials

{1, x^{s1}, x^{s2}, x^{s3}, ...}

is dense in the uniform norm topology of continuous functions on a closed interval. This is a generalization of the Stone–Weierstrass theorem.


Open problems
There are many sets about which it is not known whether they are large or small. Paul Erdős famously asked the question of whether any set that does not contain arbitrarily long arithmetic progressions must necessarily be small. He offered a prize of $3000 for the solution to this problem, more than for any of his other conjectures, and joked that this prize offer violated the minimum wage law.[1] This question is still open.

Notes
[1] Carl Pomerance, "Paul Erdos, Number Theorist Extraordinaire". Notices of the AMS. January, 1998.

References
A. D. Wadhwa (1975). An interesting subseries of the harmonic series. American Mathematical Monthly 82 (9): 931–933.

Sparse ruler
A sparse ruler is a ruler in which some of the distance marks are missing, yet which allows one to measure any integer distance up to its full length. More abstractly, a sparse ruler of length L with m marks is a sequence of integers a_1, a_2, ..., a_m where 0 = a_1 < a_2 < ... < a_m = L. The marks a_1 and a_m correspond to the ends of the ruler. In order to measure the distance d, with 0 ≤ d ≤ L, there must be marks a_i and a_j such that a_j − a_i = d.

A sparse ruler is called minimal if there is no sparse ruler of length L with m − 1 marks; in other words, if any of the marks is removed one can no longer measure all of the distances. A sparse ruler is called maximal if there is no sparse ruler of length L + 1 with m marks. A sparse ruler is called optimal if it is both minimal and maximal.

Since the number of distinct pairs of marks is m(m − 1)/2, this is an upper bound on the length of any maximal sparse ruler with m marks. This upper bound can be achieved only for 2, 3 or 4 marks. For larger numbers of marks, the difference between the optimal length and the bound grows gradually, and unevenly. For example, for 6 marks the upper bound is 15, but the maximal length is 13. There are 3 different configurations of sparse rulers of length 13 with 6 marks. One is {0, 1, 2, 6, 10, 13}. To measure a length of 7, say, with this ruler you would take the distance between the marks at 6 and 13.

Sparse rulers are closely related to, but different from, Golomb rulers, because Golomb rulers require that all of the differences be distinct. In general, a Golomb ruler with m marks will be considerably longer than an optimal sparse ruler with m marks, since m(m − 1)/2 is a lower bound for the length of a Golomb ruler. A long Golomb ruler will have gaps, that is, it will have distances which it cannot measure. For example, the optimal Golomb ruler {0, 1, 4, 10, 12, 17} has length 17, but cannot measure lengths of 14 or 15.

Wichmann rulers
Many optimal rulers are of the form W(r,s) = 1^r, r+1, (2r+1)^r, (4r+3)^s, (2r+2)^(r+1), 1^r, where a^b represents b segments of length a. Thus, if r = 1 and s = 2, then W(1,2) has (in order): 1 segment of length 1, 1 segment of length 2, 1 segment of length 3, 2 segments of length 7, 2 segments of length 4, and 1 segment of length 1. That gives the ruler {0, 1, 3, 6, 13, 20, 24, 28, 29}. The length of a Wichmann ruler is 4r(r+s+2)+3(s+1) and the number of marks is 4r+s+3. Note that not all Wichmann rulers are optimal and not all optimal rulers can be generated this way. None of the optimal rulers of length 1, 13, 17, 23 and 58 follow this pattern, but no optimal rulers with length greater than 68 are known that are not Wichmann rulers.
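The segment description of W(r, s) can be expanded and checked mechanically; the sketch below (function names are illustrative) rebuilds W(1,2) and verifies that it measures every distance up to its length:

def wichmann_ruler(r, s):
    """Expand W(r,s) = 1^r, r+1, (2r+1)^r, (4r+3)^s, (2r+2)^(r+1), 1^r into marks."""
    segments = ([1] * r + [r + 1] + [2 * r + 1] * r +
                [4 * r + 3] * s + [2 * r + 2] * (r + 1) + [1] * r)
    marks, pos = [0], 0
    for seg in segments:
        pos += seg
        marks.append(pos)
    return marks

def is_sparse_ruler(marks):
    """Check that every integer distance 1..L is a difference of two marks."""
    diffs = {b - a for a in marks for b in marks if b > a}
    return all(d in diffs for d in range(1, marks[-1] + 1))

w12 = wichmann_ruler(1, 2)
print(w12)                   # [0, 1, 3, 6, 13, 20, 24, 28, 29]
print(is_sparse_ruler(w12))  # True
print(len(w12), w12[-1])     # 4r+s+3 = 9 marks, length 4r(r+s+2)+3(s+1) = 29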

Examples
The following are examples of minimal sparse rulers. Optimal rulers are highlighted. When there are too many to list, not all are included. Mirror images are not shown.
Examples Length Marks Number List Form Wichmann

1 2 3 4

2 3 3 4

1 1 1 2

II III II.I III.I II.II III..I II.I.I II..I.I IIII...I III.I..I III..I.I II.I.I.I II.I..II II..II.I III..I..I II.I...II II..I.I.I II...II.I III...I..I II..I..I.I IIII..I...I IIII...I...I IIII....I...I III...I..I..I II.I.I.....II II.I...I...II II..II....I.I II..I..I..I.I II.....II.I.I III...I...I..I II..II.....I.I II....I..I.I.I IIIII....I....I

{0, 1} {0, 1, 2} {0, 1, 3} {0, 1, 2, 4} {0, 1, 3, 4} {0, 1, 2, 5} {0, 1, 3, 5} {0, 1, 4, 6} {0, 1, 2, 3, 7} {0, 1, 2, 4, 7} {0, 1, 2, 5, 7} {0, 1, 3, 5, 7} {0, 1, 3, 6, 7} {0, 1, 4, 5, 7} {0, 1, 2, 5, 8} {0, 1, 3, 7, 8} {0, 1, 4, 6, 8} {0, 1, 5, 6, 8} {0, 1, 2, 6, 9} {0, 1, 4, 7, 9} {0, 1, 2, 3, 6, 10} {0, 1, 2, 3, 7, 11} {0, 1, 2, 3, 8, 12} {0, 1, 2, 6, 9, 12} {0, 1, 3, 5, 11, 12} {0, 1, 3, 7, 11, 12} {0, 1, 4, 5, 10, 12} {0, 1, 4, 7, 10, 12} {0, 1, 7, 8, 10, 12} {0, 1, 2, 6, 10, 13} {0, 1, 4, 5, 11, 13} {0, 1, 6, 9, 11, 13} {0, 1, 2, 3, 4, 9, 14} W(0,3) W(0,2) W(0,1) W(0,0)

6 7

4 5

1 6

10 11 12

6 6 6

19 15 7

13

14

65

II.I..I...I...II II..I..I..I..I.I IIII....I...I...I IIII....I....I...I III...I...I...I..I III.....I...I.I..I III.....I...I..I.I II..I.....I.I..I.I II......I..I.I.I.I II..I..I..I..I..I.I IIIII....I....I....I IIIII.....I....I....I IIIII.....I.....I....I IIII....I....I....I...I III.......I....I..I..II II.I.I........II.....II II.I..I......I...I...II II.I.....I.....I...II.I II..II......I.I.....I.I II....II..I.......I.I.I II....I..I......I.I.I.I II.....II........II.I.I III........I...I..I..I.I II..I.....I.....I.I..I.I IIIIII......I.....I.....I IIIIII......I......I.....I IIIII.....I....I.....I....I IIIII.....I.....I.....I....I III..........I....I..I..I..II II.I.I.I..........II.......II II.I..I..I......I......I...II II.I.....I.....I.....I...II.I II.....I...I........I..I.II.I II.......II..........II.I.I.I III...........I...I..I..I..I.I II.I..I......I......I...I...II II..I.....I.....I.....I.I..I.I III..............I...I..I..I..I..I.I II.I..I..I......I......I......I...II II.I..I..I.........I...I......I...II II..II..........I.I......I.I.....I.I II..I.....I.....I.....I.....I.I..I.I II.I..I......I......I......I...I...II II.I..I......I......I......I......I...I...II III..I....I....I..........I.....I.....I.....III IIII...................I....I...I...I...I...I..I..I II.I..I.....I......I......I......I......I...I...II III..I....I....I..........I..........I.....I.....I.....III II.I..I......I......I......I......I......I......I...I...II {0, 1, 3, 6, 10, 14, 15} {0, 1, 4, 7, 10, 13, 15} {0, 1, 2, 3, 8, 12, 16} {0, 1, 2, 3, 8, 13, 17} {0, 1, 2, 6, 10, 14, 17} {0, 1, 2, 8, 12, 14, 17} {0, 1, 2, 8, 12, 15, 17} {0, 1, 4, 10, 12, 15, 17} {0, 1, 8, 11, 13, 15, 17} {0, 1, 4, 7, 10, 13, 16, 18} {0, 1, 2, 3, 4, 9, 14, 19} {0, 1, 2, 3, 4, 10, 15, 20} {0, 1, 2, 3, 4, 10, 16, 21} {0, 1, 2, 3, 8, 13, 18, 22} {0, 1, 2, 10, 15, 18, 21, 22} {0, 1, 3, 5, 14, 15, 21, 22} {0, 1, 3, 6, 13, 17, 21, 22} {0, 1, 3, 9, 15, 19, 20, 22} {0, 1, 4, 5, 12, 14, 20, 22} {0, 1, 6, 7, 10, 18, 20, 22} {0, 1, 6, 9, 16, 18, 20, 22} {0, 1, 7, 8, 17, 18, 20, 22} {0, 1, 2, 11, 15, 18, 21, 23} {0, 1, 4, 10, 16, 18, 21, 23} {0, 1, 2, 3, 4, 5, 12, 18, 24} {0, 1, 2, 3, 4, 5, 12, 19, 25} {0, 1, 2, 3, 4, 10, 15, 21, 26} {0, 1, 2, 3, 4, 10, 16, 22, 27} {0, 1, 2, 13, 18, 21, 24, 27, 28} {0, 1, 3, 5, 7, 18, 19, 27, 28} {0, 1, 3, 6, 9, 16, 23, 27, 28} {0, 1, 3, 9, 15, 21, 25, 26, 28} {0, 1, 7, 11, 20, 23, 25, 26, 28} {0, 1, 9, 10, 21, 22, 24, 26, 28} {0, 1, 2, 14, 18, 21, 24, 27, 29} {0, 1, 3, 6, 13, 20, 24, 28, 29} {0, 1, 4, 10, 16, 22, 24, 27, 29} {0, 1, 2, 17, 21, 24, 27, 30, 33, 35} {0, 1, 3, 6, 9, 16, 23, 30, 34, 35} {0, 1, 3, 6, 9, 19, 23, 30, 34, 35} {0, 1, 4, 5, 16, 18, 25, 27, 33, 35} {0, 1, 4, 10, 16, 22, 28, 30, 33, 35} {0, 1, 3, 6, 13, 20, 27, 31, 35, 36} {0, 1, 3, 6, 13, 20, 27, 34, 38, 42, 43} {0, 1, 2, 5, 10, 15, 26, 32, 38, 44, 45, 46} {0, 1, 2, 3, 23, 28, 32, 36, 40, 44, 47, 50} {0, 1, 3, 6, 13, 20, 27, 34, 41, 45, 49, 50} {0, 1, 2, 5, 10, 15, 26, 37, 43, 49, 55, 56, 57} {0, 1, 3, 6, 13, 20, 27, 34, 41, 48, 52, 56, 57} W(1,3) W(1,4) W(2,1) W(1,5) W(2,2) W(1,6) W(1,2) W(1,1) W(0,5) W(1,0) W(0,4)

15

40

16 17

7 7

16 6

18 19 20 21 22

8 8 8 8 8

250 163 75 33 9

23

24 25 26 27 28

9 9 9 9 9

472 230 83 28 6

29

35

10

36 43 46 50

10 11 12 12

1 1 342 2

57

13

12

IIII.......................I....I...I...I...I...I...I..I..I III...I.I........I........I........I........I..I......I..II III.....I......II.........I.........I.........I..I...I.I..I II.I..I..........I..I......I.......I.........I...I...I...II II.I..I..........I......I..I..........I......I...I...I...II II...I..I...I........I........I........I........I....II.I.I III..I....I....I..........I..........I..........I.....I.....I.....III III.....I......II.........I.........I.........I.........I..I...I.I..I III..I....I....I..........I..........I..........I..........I.....I.....I.....III III..I....I....I..........I..........I..........I..........I..........I.....I.....I.....III {0, 1, 2, 3, 27, 32, 36, 40, 44, 48, 52, 55, 58} {0, 1, 2, 6, 8, 17, 26, 35, 44, 47, 54, 57, 58} {0, 1, 2, 8, 15, 16, 26, 36, 46, 49, 53, 55, 58} {0, 1, 3, 6, 17, 20, 27, 35, 45, 49, 53, 57, 58} {0, 1, 3, 6, 17, 24, 27, 38, 45, 49, 53, 57, 58} {0, 1, 5, 8, 12, 21, 30, 39, 48, 53, 54, 56, 58} {0, 1, 2, 5, 10, 15, 26, 37, 48, 54, 60, 66, 67, 68} {0, 1, 2, 8, 15, 16, 26, 36, 46, 56, 59, 63, 65, 68} {0, 1, 2, 5, 10, 15, 26, 37, 48, 59, 65, 71, 77, 78, 79} {0, 1, 2, 5, 10, 15, 26, 37, 48, 59, 70, 76, 82, 88, 89, 90} W(2,3) W(2,4) W(2,5) W(2,6) W(2,7) W(3,4) W(2,8)

58

13

68

14

79 90 101 112 123

15 16 17 18 19

1 1 1 1 2

III..I....I....I..........I..........I..........I..........I..........I..........I.....I.....I.....III {0,1,2,5,10,15,26,37,48,59,70,81,87,93,99,100,101} {0,1,2,5,10,15,26,37,48,59,70,81,92,98,104,110,111,112} {0,1,2,3,7,14,21,28,43,58,73,88,96,104,112,120,121,122,123} {0,1,2,5,10,15,26,37,48,59,70,81,92,103,109,115,121,122,123}

138

20

{0,1,2,3,7,14,21,28,43,58,73,88,103,111,119,127,135,136,137,138} W(3,5)

References
http://www.luschny.de/math/rulers/prulers.html http://www.iwriteiam.nl/Ha_sparse_rulers.html http://www.maa.org/editorial/mathgames/mathgames_11_15_04.html http://www.contestcen.com/scale.htm http://members.cox.net/wnmyers/sparse_rulers.txt

Sperner's lemma
You may be looking for Sperner's theorem on set families.

In mathematics, Sperner's lemma is a combinatorial analog of the Brouwer fixed point theorem, which follows from it. Sperner's lemma states that every Sperner coloring (described below) of a triangulation of an n-dimensional simplex contains a cell colored with a complete set of colors. The initial result of this kind was proved by Emanuel Sperner, in connection with proofs of invariance of domain. Sperner colorings have been used for effective computation of fixed points, in root-finding algorithms, and are applied in fair division (cake cutting) algorithms. According to the Soviet Mathematical Encyclopaedia (ed. I. M. Vinogradov), a related 1929 theorem (of Knaster, Borsuk and Mazurkiewicz) has also become known as the Sperner lemma; this point is discussed in the English translation (ed. M. Hazewinkel). It is now commonly known as the Knaster–Kuratowski–Mazurkiewicz lemma.


One-dimensional case
In one dimension, Sperner's Lemma can be regarded as a discrete version of the Intermediate Value Theorem. In this case, it essentially says that if a discrete function takes only the values 0 and 1, begins at the value 0 and ends at the value 1, then it must switch values an odd number of times.
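This one-dimensional statement is easy to check directly; the following tiny sketch (illustrative only) verifies the parity claim on random 0/1 labellings with fixed endpoints:

import random

def sign_changes(values):
    """Count the positions where consecutive entries differ."""
    return sum(1 for a, b in zip(values, values[1:]) if a != b)

# Random 0/1 labellings that begin at 0 and end at 1.
for _ in range(5):
    labels = [0] + [random.randint(0, 1) for _ in range(10)] + [1]
    assert sign_changes(labels) % 2 == 1
print("odd number of switches in every trial")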

Two-dimensional case
The two-dimensional case is the one referred to most frequently. It is stated as follows:

Given a triangle ABC, and a triangulation T of the triangle, the set S of vertices of T is colored with three colors in such a way that
1. A, B and C are colored 1, 2 and 3 respectively;
2. each vertex on an edge of ABC is to be colored only with one of the two colors of the ends of its edge. For example, each vertex on AC must have a color either 1 or 3.
Then there exists a triangle from T whose vertices are colored with the three different colors. More generally, there must be an odd number of such triangles.

Two dimensional case

Multidimensional case
In the general case the lemma refers to an n-dimensional simplex with vertices A_1, A_2, ..., A_{n+1}. We consider a triangulation T which is a disjoint division of the simplex into smaller n-dimensional simplices. Denote the coloring function as f : S → {1, 2, 3, ..., n, n+1}, where S is again the set of vertices of T. The rules of coloring are:
1. The vertices of the large simplex are colored with different colors, i.e. f(A_i) = i for 1 ≤ i ≤ n+1.
2. Vertices of T located on any given k-dimensional subface A_{i_1} A_{i_2} ⋯ A_{i_{k+1}} are colored only with the colors i_1, i_2, ..., i_{k+1}.
Then there exists an odd number of simplices from T whose vertices are colored with all n+1 colors. In particular, there must be at least one.


Proof
We shall first address the two-dimensional case. Consider a graph G built from the triangulation T as follows: The vertices of G are the members of T plus the area outside the triangle. Two vertices are connected with an edge if their corresponding areas share a common border with one endpoint colored 1 and the other colored 2. Note that on the interval AB there is an odd number of borders colored 1-2 (simply because A is colored 1, B is colored 2; and as we move along AB, there must be an odd number of color changes in order to get different colors at the beginning and at the end). Therefore the vertex of G corresponding to the outer area has an odd degree. But it is known (the handshaking lemma) that in a finite graph there is an even number of vertices with odd degree. Therefore the remaining graph, excluding the outer area, has an odd number of vertices with odd degree corresponding to members of T. It can be easily seen that the only possible degree of a triangle from T is 0, 1 or 2, and that the degree 1 corresponds to a triangle colored with the three colors 1, 2 and 3. Thus we have obtained a slightly stronger conclusion, which says that in a triangulation T there is an odd number (and at least one) of full-colored triangles. A multidimensional case can be proved by induction on the dimension of a simplex. We apply the same reasoning, as in the 2-dimensional case, to conclude that in a n-dimensional triangulation there is an odd number of full-colored simplices.

Generalizations
Sperner's lemma has been generalized to colorings of polytopes with n vertices. In that case, there are at least n − k fully labeled simplices, where k is the dimension of the polytope and "fully labeled" indicates that every label on the simplex has a different color. For example, if a polygon with n vertices is triangulated and colored according to the Sperner criterion, then there are at least n − 2 fully labeled triangles. The general statement was conjectured by Atanassov in 1996, who proved it for the case k = 2.[1] The proof of the general case was first given by de Loera, Peterson, and Su in 2002.[2]

Applications
Sperner colorings have been used for effective computation of fixed points. A Sperner coloring can be constructed such that fully labeled simplices correspond to fixed points of a given function. By making a triangulation smaller and smaller, one can show that the limit of the fully labeled simplices is exactly the fixed point. Hence, the technique provides a way to approximate fixed points. For this reason, Sperner's lemma can also be used in root-finding algorithms and fair division algorithms. E. Sperner has presented the development, influence and applications of his combinatorial lemma in [3].

References
[1] K. T. Atanassov (1996). "On Sperner's Lemma". Stud. Sci. Math. Hungarica 32: 71–74.
[2] Jesus de Loera, Elisha Peterson, and Francis Su (2002). "A polytopal generalization of Sperner's Lemma". Journal of Combinatorial Theory, Series A 100 (1): 1–26. doi:10.1006/jcta.2002.3274.

3.^ E. Sperner, Fifty years of further development of a combinatorial lemma, Part A, p.183-197, Part B, p.199-214, in: Numerical solutions of highly nonlinear problems, Fixed Point Algorithms and Complementarity Problems, W. Forster (Ed.), North-Holland, Amsterdam, 1980.


External links
Proof of Sperner's Lemma (http://www.cut-the-knot.org/Curriculum/Geometry/SpernerLemma.shtml) at cut-the-knot

Spt function
The spt function (smallest parts function) is a function in number theory that counts the sum of the number of smallest parts in each partition of a positive integer. It is related to the partition function. For example, there are five partitions of 4: (1,1,1,1), (1,1,2), (1,3), (2,2) and (4). These partitions have 4, 2, 1, 2 and 1 smallest parts, respectively. So spt(4) = 4 + 2 + 1 + 2 + 1 = 10.

The first few values of spt(n) are:
1, 3, 5, 10, 14, 26, 35, 57, 80, 119, 161, 238, 315, 440, 589, ... (sequence A092269 in OEIS)

Like many functions in mathematics, spt(n) has a generating function. There are connections to Maass forms, and under certain conditions the generating function is an eigenform for some Hecke operators.[1] While a closed formula is not known for spt(n), there are Ramanujan-like congruences, including

spt(5n + 4) ≡ 0 (mod 5),
spt(7n + 5) ≡ 0 (mod 7),
spt(13n + 6) ≡ 0 (mod 13).[2]
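The definition can be evaluated directly for small n by listing partitions; the sketch below (illustrative, and far from efficient) reproduces the values above and spot-checks the first congruence:

def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None or max_part > n:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(max_part, 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def spt(n):
    return sum(p.count(min(p)) for p in partitions(n))

print([spt(n) for n in range(1, 11)])                     # 1, 3, 5, 10, 14, 26, 35, 57, 80, 119
print(all(spt(5 * n + 4) % 5 == 0 for n in range(4)))     # True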

References
[1] Frank Garvan, "Congruences for Andrews' spt-function modulo 32760 and extension of Atkin's Hecke-type partition congruences".
[2] George Andrews, "The number of smallest parts in the partitions of n".


Stable roommates problem


In mathematics, especially in the fields of game theory and combinatorics, the stable roommates problem (SRP) is the problem of finding a stable matching: a matching in which there is no pair of elements, each from a different matched pair, where each member of the pair prefers the other to their current match. This is different from the stable marriage problem in that the stable roommates problem does not require that the participants be divided into male and female subsets; any person can prefer anyone in the same set. It is commonly stated as:

In a given instance of the Stable Roommates problem (SRP), each of 2n participants ranks the others in strict order of preference. A matching is a set of n disjoint (unordered) pairs of participants. A matching M in an instance of SRP is stable if there are no two participants x and y, each of whom prefers the other to his partner in M. Such a pair is said to block M, or to be a blocking pair with respect to M.

Solution
Unlike the stable marriage problem, the stable roommates may not, in general, have a solution. For a minimal counterexample, consider 4 people A, B, C and D where all prefer each other to D, and A prefers B over C, B prefers C over A, and C prefers A over B (so each of A,B,C is the most favorite of someone). In any solution, one of A,B,C must be paired with D and the other 2 with each other, yet D's partner and the one for whom D's partner is most favorite would each prefer to be with each other.

Algorithm
An efficient algorithm was given in (Irving 1985). The algorithm will determine, for any instance of the problem, whether a stable matching exists, and if so, will find such a matching. Irving's algorithm has O(n^2) complexity, provided suitable data structures are used to facilitate manipulation of the preference lists and identification of rotations (see below). The algorithm consists of two phases.

In the first phase, participants propose to each other, in a manner similar to that of the Gale–Shapley algorithm for the stable marriage problem. Participants propose to each person on their preference list, in order, continuing to the next person if and when their current proposal is rejected. A participant rejects a proposal if he already holds, or subsequently receives, a proposal from someone he prefers. In this first phase, one participant might be rejected by all of the others, an indicator that no stable matching is possible. Otherwise, Phase 1 ends with each person holding a proposal from one of the others; this situation can be represented as a set S of ordered pairs of the form (p,q), where q holds a proposal from p; we say that q is p's current favorite. In the case that this set represents a matching, i.e., (q,p) is in S whenever (p,q) is, the algorithm terminates with this matching, which is bound to be stable.

Otherwise the algorithm enters Phase 2, in which the set S is repeatedly changed by the application of so-called rotations. Suppose that (p,q) is in the set S but (q,p) is not. For each such p we identify his current second favorite to be the first successor of q in p's preference list who would reject the proposal that he holds in favor of p. A rotation relative to S is a sequence (p0,q0), (p1,q1), ..., (pk−1,qk−1) such that (pi,qi) is in S for each i, and qi+1 is pi's current second favorite (where i + 1 is taken modulo k). If such a rotation (p0,q0), ..., (p2k,q2k), of odd length, is found such that pi = qi+k+1 for all i (where i + k + 1 is taken modulo 2k + 1), this is what is referred to as an odd party, which is also an indicator that there is no stable matching. Otherwise, application of the rotation involves replacing the pairs (pi,qi) in S by the pairs (pi,qi+1) (where i + 1 is again taken modulo k). Phase 2 of the algorithm can now be summarized as follows:

S = output of Phase 1;
while (true) {
    identify a rotation r in S;
    if (r is an odd party)
        return null;   (there is no stable matching)
    else
        apply r to S;
    if (S is a matching)
        return S;   (guaranteed to be stable)
}

Example

The following are the preference lists for a Stable Roommates instance involving 6 participants p1, p2, p3, p4, p5, p6.

p1 : p3 p4 p2 p6 p5
p2 : p6 p5 p4 p1 p3
p3 : p2 p4 p5 p1 p6
p4 : p5 p2 p3 p6 p1
p5 : p3 p1 p2 p4 p6
p6 : p5 p1 p3 p4 p2

A possible execution of Phase 1 consists of the following sequence of proposals and rejections, where → represents "proposes to":

p1 → p3; p2 → p6; p3 → p2; p4 → p5; p5 → p3 (p3 rejects p1); p1 → p4; p6 → p5 (p5 rejects p6); p6 → p1.

So Phase 1 ends with the set S = {(p1,p4), (p2,p6), (p3,p2), (p4,p5), (p5,p3), (p6,p1)}.

In Phase 2, the rotation r1 = (p1,p4), (p3,p2) is first identified. This is because p2 is p1's second favorite, and p4 is the second favorite of p3. Applying r1 gives the new set S = {(p1,p2), (p2,p6), (p3,p4), (p4,p5), (p5,p3), (p6,p1)}. Next, the rotation r2 = (p1,p2), (p2,p6), (p4,p5) is identified, and application of r2 gives S = {(p1,p6), (p2,p5), (p3,p4), (p4,p2), (p5,p3), (p6,p1)}. Finally, the rotation r3 = (p2,p5), (p3,p4) is identified, application of which gives S = {(p1,p6), (p2,p4), (p3,p5), (p4,p2), (p5,p3), (p6,p1)}. This is a matching, and is guaranteed to be stable.
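The stability of the matching found in the example ({p1, p6}, {p2, p4}, {p3, p5}) can be verified by searching for a blocking pair. The following sketch only checks that final answer (participants are encoded as the integers 1–6 for convenience); it is not an implementation of Irving's algorithm:

prefs = {
    1: [3, 4, 2, 6, 5],
    2: [6, 5, 4, 1, 3],
    3: [2, 4, 5, 1, 6],
    4: [5, 2, 3, 6, 1],
    5: [3, 1, 2, 4, 6],
    6: [5, 1, 3, 4, 2],
}
partner = {1: 6, 6: 1, 2: 4, 4: 2, 3: 5, 5: 3}

def prefers(p, a, b):
    """True if participant p ranks a above b."""
    return prefs[p].index(a) < prefs[p].index(b)

blocking = [(x, y) for x in prefs for y in prefs
            if x < y and partner[x] != y
            and prefers(x, y, partner[x]) and prefers(y, x, partner[y])]
print(blocking)   # [] -- no blocking pair, so the matching is stable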


References
Irving, Robert W. (1985), "An efficient algorithm for the "stable roommates" problem", Journal of Algorithms 6 (4): 577–595, doi:10.1016/0196-6774(85)90033-1
Irving, Robert W.; Manlove, David F. (2002), "The Stable Roommates Problem with Ties" [1], Journal of Algorithms 43 (1): 85–105, doi:10.1006/jagm.2002.1219

References
[1] http://eprints.gla.ac.uk/11/01/SRT.pdf

Star of David theorem


The Star of David theorem is a mathematical result on arithmetic properties of binomial coefficients. It was discovered by H.W. Gould in 1972.

Statement
The greatest common divisors of the binomial coefficients forming each of the two triangles of the Star of David shape in Pascal's triangle are equal:

gcd( C(n−1, k−1), C(n, k+1), C(n+1, k) ) = gcd( C(n−1, k), C(n, k−1), C(n+1, k+1) ).

The Star of David theorem (the rows of the Pascal triangle are shown as columns here).
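The identity is easy to test numerically; the sketch below (illustrative only) checks it for all interior entries of the first rows of Pascal's triangle:

from math import comb, gcd

def star_of_david_gcds(n, k):
    left = gcd(gcd(comb(n - 1, k - 1), comb(n, k + 1)), comb(n + 1, k))
    right = gcd(gcd(comb(n - 1, k), comb(n, k - 1)), comb(n + 1, k + 1))
    return left, right

# Check the identity for all interior entries of the first 30 rows.
print(all(a == b
          for n in range(2, 30)
          for k in range(1, n)
          for a, b in [star_of_david_gcds(n, k)]))   # True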


References
H.W. Gould, "A New Greatest Common Divisor Property of The Binomial Coefficients", Fibonacci Quarterly 10 (1972), 579–584.
Star of David theorem [1], from MathForum.
Star of David theorem [2], blog post.

External links
Star of David theorem [3], from MathWorld. Demonstration of the Star of David theorem [4], in Mathematica.

References
[1] http://mathforum.org/wagon/fall07/p1088.html
[2] http://threesixty360.wordpress.com/2008/12/21/star-of-david-theorem/
[3] http://mathworld.wolfram.com/StarofDavidTheorem.html
[4] http://demonstrations.wolfram.com/StarOfDavidTheorem/

Star product
In mathematics, the star product of two graded posets (P, ≤_P) and (Q, ≤_Q), where P has a unique maximal element 1_P and Q has a unique minimal element 0_Q, is the poset P * Q on the set (P \ {1_P}) ∪ (Q \ {0_Q}). We define the partial order ≤_{P*Q} by x ≤ y if and only if:
1. x, y ∈ P \ {1_P} and x ≤_P y;
2. x, y ∈ Q \ {0_Q} and x ≤_Q y; or
3. x ∈ P \ {1_P} and y ∈ Q \ {0_Q}.
In other words, we pluck out the top of P and the bottom of Q, and require that everything in P be smaller than everything in Q.

For example, suppose P and Q are the Boolean algebra on two elements. Then P * Q is the poset with the Hasse diagram below.


The star product of Eulerian posets is Eulerian.

Bibliography
1. Stanley, R., Flag f-vectors and the cd-index, Math. Z. 216 (1994), 483–499.
This article incorporates material from star product on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.

Stars and bars (combinatorics)


In the context of combinatorial mathematics, stars and bars is a graphical aid for deriving certain combinatorial theorems. It was popularized by William Feller in his classic book on probability.[1]

Statements of theorems
Theorem one
For any pair of positive integers n and k, the number of distinct n-tuples of positive integers whose sum is k is given by the binomial coefficient C(k − 1, n − 1).

Theorem two
For any pair of natural numbers n and k, the number of distinct n-tuples of non-negative integers whose sum is k is given by the binomial coefficient C(n + k − 1, k), or equivalently by the multiset coefficient which counts the multisets of cardinality k with elements taken from a set of n elements (for n = k = 0 both these numbers are defined to be 1).

Proofs
Theorem one
Suppose one has k objects (to be represented as stars; in the example below k = 7) to be placed into n bins (in the example n = 3), such that all bins contain at least one object; one distinguishes the bins (say they are numbered 1 to n) but one does not wish to distinguish the k stars (so configurations are only distinguished by the number of stars present in each bin; in fact a configuration is represented by a n-tuple of positive integers as in the statement of the theorem). Instead of starting to place stars into bins, one starts by placing the stars on a line:

★ ★ ★ ★ ★ ★ ★
Fig. 1: seven objects represented by stars

where the stars for the first bin will be taken from the left, followed by the stars for the second bin, and so forth. Thus the configuration will be determined once one knows what is the first star going to the second bin, and the first star going to the third bin, and so on. One can indicate this by placing n − 1 separating bars at some places between two stars; since no bin is allowed to be empty, there can be at most one bar between a given pair of stars:

★ ★ ★ ★ | ★ | ★ ★

Fig. 2: two bars give rise to three bins containing 4, 1, and 2 objects

Thus one views the k stars as fixed objects defining k − 1 gaps, in each of which there may or may not be one bar (bin partition). One has to choose n − 1 of them to actually contain a bar; therefore there are C(k − 1, n − 1) possible configurations (see combination).

Theorem two
If n > 0, one can apply theorem one to n + k objects, giving C(n + k − 1, n − 1) configurations with at least one object in each bin, and then remove one object from each bin to obtain a distribution of k objects into n bins, in which some bins may be empty. For example, placing 10 objects in 5 bins, allowing for bins to be empty, is equivalent to placing 15 objects in 5 bins and forcing something to be in each bin.

An alternative way to arrive directly at the binomial coefficient: in a sequence of n + k − 1 symbols, one has to choose k of them to be stars and the remaining n − 1 to be bars (which can now be next to each other).

The case n = 0 (no bins at all) allows 0 configurations, unless k = 0 as well (no objects to place), in which case there is one configuration (since an empty sum is defined to be 0). The binomial coefficient C(n + k − 1, k) takes these values for n = 0, unlike C(n + k − 1, n − 1), which by convention takes the value 0 when n = 0 and k = 0; this is why the former expression is the one used in the statement of the theorem.

Example
For example, if k = 5 and n = 4, and the set of size n is {a, b, c, d}, then ★|★★★||★ would represent the multiset {a, b, b, b, d} or the 4-tuple (1, 3, 0, 1). The representation of any multiset for this example would use 5 stars (k) and 3 bars (n − 1).

Seven indistinguishable one dollar coins are to be distributed among Amber, Ben and Curtis so that each of them receives at least one dollar. Thus stars and bars applies with k = 7 objects and n = 3 bins; hence there are C(6, 2) = 15 ways to distribute the coins.
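The coin example can be confirmed by brute force; the following sketch (illustrative only) compares the direct enumeration with the binomial coefficient from theorem one:

from itertools import product
from math import comb

def count_positive_tuples(n, k):
    """Number of n-tuples of positive integers summing to k (brute force)."""
    return sum(1 for t in product(range(1, k + 1), repeat=n) if sum(t) == k)

n, k = 3, 7   # three people, seven coins, everyone gets at least one
print(count_positive_tuples(n, k), comb(k - 1, n - 1))   # 15 15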

References
[1] Feller, W. (1950). An Introduction to Probability Theory and Its Applications, Wiley, Vol 1, 2nd ed.

Other Sources
Pitman, Jim (1993). Probability. Berlin: Springer-Verlag. ISBN0-387-97974-3.


Sum-free sequence
In mathematics, a sum-free sequence is an increasing sequence of positive integers a1 < a2 < a3 < ⋯ such that no term an is the sum of any subset of the preceding elements of the same sequence.

This definition is due to Paul Erdős, who was interested in finding sum-free sequences with a large sum of reciprocals. An easy example of such a sequence is 1, 2, 4, 8, 16, ... (the powers of 2). The definition of a sum-free sequence is different from that of a sum-free set.
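The defining property can be tested directly for short sequences; the following sketch (illustrative only) checks whether every term avoids being a subset sum of the earlier terms:

from itertools import combinations

def is_sum_free_sequence(seq):
    """True if no term equals the sum of any (nonempty) subset of the preceding terms."""
    for i, a in enumerate(seq):
        earlier = seq[:i]
        for r in range(1, len(earlier) + 1):
            if any(sum(c) == a for c in combinations(earlier, r)):
                return False
    return True

print(is_sum_free_sequence([1, 2, 4, 8, 16, 32]))   # True
print(is_sum_free_sequence([1, 2, 3, 5]))           # False: 3 = 1 + 2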

Sunflower (mathematics)
In mathematics, a sunflower or Δ-system is a collection of sets whose pairwise intersection is constant; this constant intersection is called the kernel. The Δ-lemma, sunflower lemma, and sunflower conjecture give various conditions that imply the existence of a large sunflower in a given collection of sets. The original term for this concept was "Δ-system". More recently the term "sunflower", possibly introduced by Deza & Frankl (1981), has been gradually replacing it.

Δ-lemma

The Δ-lemma is a combinatorial set-theoretic tool used in proofs to impose an upper bound on the size of a collection of pairwise incompatible elements in a forcing poset. It may for example be used as one of the ingredients in a proof showing that it is consistent with ZFC that the continuum hypothesis does not hold.
A mathematical sunflower can be pictured as a flower. The kernel of the sunflower is the brown part in the middle, and each set of the sunflower is the union of a petal and the kernel

A Δ-system W is a collection of sets whose pairwise intersection is constant. That is, there exists a fixed S called the kernel (possibly empty) such that for all A, B ∈ W with A ≠ B, A ∩ B = S. The Δ-lemma states that every uncountable collection of finite sets contains an uncountable Δ-system.


Sunflower lemma and conjecture


Erdős & Rado (1960, p. 86) proved the sunflower lemma, stating that if a and b are positive integers then a collection of b!·a^{b+1} sets of cardinality at most b contains a sunflower with more than a sets. The sunflower conjecture is one of several variations of the conjecture of Erdős & Rado (1960, p. 86) that the factor of b! can be replaced by C^b for some constant C.
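For small families the existence of a sunflower can be checked by brute force; the sketch below (exponential search, illustrative only, assuming the requested size is at least 2) looks for one:

from itertools import combinations

def find_sunflower(sets, size):
    """Return `size` sets whose pairwise intersections are all equal (a sunflower),
    or None if no such subfamily exists. Assumes size >= 2."""
    sets = [frozenset(s) for s in sets]
    for group in combinations(sets, size):
        kernel = group[0] & group[1]
        if all(a & b == kernel for a, b in combinations(group, 2)):
            return group
    return None

family = [{1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {3, 4, 5}]
print(find_sunflower(family, 3))   # the three sets with kernel {1, 2}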

References
Deza, M.; Frankl, P. (1981), "Every large set of equidistant (0, +1, −1)-vectors forms a sunflower" [1], Combinatorica. An International Journal of the János Bolyai Mathematical Society 1 (3): 225–231, doi:10.1007/BF02579328, ISSN 0209-9683, MR 637827
Erdős, Paul; Rado, R. (1960), "Intersection theorems for systems of sets", Journal of the London Mathematical Society. Second Series 35 (1): 85–90, doi:10.1112/jlms/s1-35.1.85, ISSN 0024-6107, MR 0111692
Jech, Thomas (2003). Set Theory. Springer.
Kunen, Kenneth (1980). Set Theory: An Introduction to Independence Proofs. North-Holland. ISBN 0-444-85401-0.

References
[1] http://dx.doi.org/10.1007/BF02579328

Superpattern
In mathematics (and specifically in combinatorics), a k-superpattern is a permutation of minimal length that contains all permutation patterns of length k.

Examples
Consider the permutation 25314, which has length 5. It has 10 subsequences of length 3, including 253, 251, 234, 531, 534 and 314. (That is, we take 3 entries at a time, keeping the order preserved.) In each subsequence, we replace the smallest entry with 1, the next largest entry with 2, and the largest with 3. Thus, 253 gets renumbered to 132, 251 to 231, and so on. (If we had chosen subsequences of longer length, we would replace the third-smallest with 3, the fourth-smallest with 4, etc. That is, we apply an order-isomorphism to each subsequence to get a permutation.) Applying this operation to the six selected subsequences gives 132, 231, 123, 321, 312 and 213. In fact, this list contains all the possible 3-subpatterns, that is, all 6 (3 factorial) permutations of length 3. From this, we can see that 25314 contains all possible 3-subpatterns. This is one of the two conditions to be a 3-superpattern. The other condition is that there should be no shorter permutation with the same property. Consider also that in order to have 123 and 321 as subpatterns in the same permutation, by the inclusion-exclusion principle we need a minimum of 3 + 3 - 1 = 5 numbers (since only one number can be in both the largest increasing sequence and the largest decreasing sequence). This implies that any permutation which contains both 123 and 321 as subpatterns must be at least 5 digits long. Since 25314 has length 5 it is therefore a 3-superpattern of length 5.
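The check described in this example can be automated; the following sketch (function names are illustrative) standardizes every length-k subsequence and tests whether all k! patterns appear:

from itertools import combinations
from math import factorial

def standardize(seq):
    """Replace the i-th smallest entry by i + 1 (order isomorphism)."""
    order = {v: i + 1 for i, v in enumerate(sorted(seq))}
    return tuple(order[v] for v in seq)

def contains_all_patterns(perm, k):
    found = {standardize(sub) for sub in combinations(perm, k)}
    return len(found) == factorial(k)

print(contains_all_patterns((2, 5, 3, 1, 4), 3))   # True: 25314 contains all 6 patterns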


Bounds on the length of superpatterns


Let L(k) denote the length of a k-superpattern. The only known lower bound for L(k) is asymptotically k^2/e^2. This bound follows from the observation that if a permutation of length n contains all permutations of length k as patterns, then C(n, k) ≥ k!. Applying standard estimates for binomial coefficients and factorials (e.g., related to Stirling's approximation) yields the result. An upper bound of order k^2 can be shown by probabilistic means. It is believed that L(k) approaches k^2/2 in the limit as k → ∞.

References
Bóna, Miklós (2002). A walk through combinatorics: an introduction to enumeration and graph theory. London: World Scientific. ISBN 9810249012.
Packing patterns into words [1]

References
[1] http://www.combinatorics.org/Volume_9/PDF/v9i2r20.pdf

Symbolic combinatorics
In mathematics, symbolic combinatorics is one of the many techniques of counting combinatorial objects. It uses the internal structure of the objects to derive formulas for their generating functions. This particular theory is due to Philippe Flajolet and Robert Sedgewick. This article describes the technique. For the underlying mathematics see fundamental theorem of combinatorial enumeration.

Procedure
Typically, one starts with the neutral class 𝓔, containing a single object of size 0 (the neutral object, often denoted by ε), and one or more atomic classes 𝓩, each containing a single object of size 1. Next, set-theoretic relations involving various simple operations, such as disjoint unions, products, sets, sequences, and multisets define more complex classes in terms of the already defined classes. These relations may be recursive. The elegance of symbolic combinatorics lies in that the set theoretic, or symbolic, relations translate directly into algebraic relations involving the generating functions.

In this article, we will follow the convention of using script uppercase letters to denote combinatorial classes and the corresponding plain letters for the generating functions (so the class 𝓐 has generating function A(z)). There are two types of generating functions commonly used in symbolic combinatorics: ordinary generating functions, used for combinatorial classes of unlabelled objects, and exponential generating functions, used for classes of labelled objects.

It is trivial to show that the generating functions (either ordinary or exponential) for 𝓔 and 𝓩 are E(z) = 1 and Z(z) = z, respectively. The disjoint union is also simple: for disjoint sets 𝓑 and 𝓒, 𝓐 = 𝓑 ∪ 𝓒 implies A(z) = B(z) + C(z). The relations corresponding to other operations depend on whether we are talking about labelled or unlabelled structures (and ordinary or exponential generating functions).


Combinatorial sum
The restriction of unions to disjoint unions is an important one; however, in the formal specification of symbolic combinatorics, it is too much trouble to keep track of which sets are disjoint. Instead, we make use of a construction that guarantees there is no intersection (be careful, however; this affects the semantics of the operation as well). In defining the combinatorial sum of two sets \mathcal{A} and \mathcal{B}, we mark members of each set with a distinct marker, for example \circ for members of \mathcal{A} and \bullet for members of \mathcal{B}. The combinatorial sum is then:

\mathcal{A} + \mathcal{B} = (\mathcal{A} \times \{\circ\}) \cup (\mathcal{B} \times \{\bullet\})

This is the operation that formally corresponds to addition.

Unlabelled structures
With unlabelled structures, an ordinary generating function (OGF) is used. The OGF of a sequence A_0, A_1, A_2, \ldots is defined as

A(x) = \sum_{n=0}^{\infty} A_n x^n

Product
The product of two combinatorial classes \mathcal{A} and \mathcal{B} is specified by defining the size of an ordered pair as the sum of the sizes of the elements in the pair. Thus we have, for a \in \mathcal{A} and b \in \mathcal{B}, |(a,b)| = |a| + |b|. This should be a fairly intuitive definition. We now note that the number of elements in \mathcal{A} \times \mathcal{B} of size n is

\sum_{k=0}^{n} A_k B_{n-k}.

Using the definition of the OGF and some elementary algebra, we can show that \mathcal{A} = \mathcal{B} \times \mathcal{C} implies

A(z) = B(z) \cdot C(z).

Sequence
The sequence construction, denoted by \mathcal{A} = \mathfrak{G}\{\mathcal{B}\}, is defined as

\mathcal{A} = \mathcal{E} + \mathcal{B} + (\mathcal{B} \times \mathcal{B}) + (\mathcal{B} \times \mathcal{B} \times \mathcal{B}) + \cdots

In other words, a sequence is the neutral element, or an element of \mathcal{B}, or an ordered pair, ordered triple, etc. This leads to the relation

A(z) = 1 + B(z) + B(z)^2 + B(z)^3 + \cdots = \frac{1}{1 - B(z)}

Set
The set (or powerset) construction, denoted by \mathcal{A} = \mathfrak{P}\{\mathcal{B}\}, is defined as

\mathcal{A} = \prod_{\beta \in \mathcal{B}} (\mathcal{E} + \{\beta\}),

which leads to the relation

A(z) = \prod_{n \ge 1} (1 + z^n)^{B_n}
     = \exp\left( \sum_{n \ge 1} B_n \ln(1 + z^n) \right)
     = \exp\left( \sum_{n \ge 1} B_n \sum_{k \ge 1} \frac{(-1)^{k-1} z^{nk}}{k} \right)
     = \exp\left( \sum_{k \ge 1} \frac{(-1)^{k-1} B(z^k)}{k} \right),

where the expansion

\ln(1 + u) = \sum_{k \ge 1} \frac{(-1)^{k-1} u^k}{k}

was used to go from the second line to the third.

Multiset
The multiset construction, denoted \mathcal{A} = \mathfrak{M}\{\mathcal{B}\}, is a generalization of the set construction. In the set construction, each element can occur zero or one times. In a multiset, each element can appear an arbitrary number of times. Therefore,

\mathcal{A} = \prod_{\beta \in \mathcal{B}} \mathfrak{G}\{\beta\}.

This leads to the relation

A(z) = \prod_{n \ge 1} (1 - z^n)^{-B_n} = \exp\left( \sum_{k \ge 1} \frac{B(z^k)}{k} \right),

where, similar to the above set construction, we expand \ln(1 - z^n), swap the sums, and substitute for the OGF of \mathcal{B}.

Other elementary constructions


Other important elementary constructions are:
the cycle construction (\mathfrak{C}\{\mathcal{B}\}), like sequences except that cyclic rotations are not considered distinct;
pointing (\Theta\mathcal{B}), in which each member of \mathcal{B} is augmented by a neutral (zero size) pointer to one of its atoms;
substitution (\mathcal{B} \circ \mathcal{C}), in which each atom in a member of \mathcal{B} is replaced by a member of \mathcal{C}.
The derivations for these constructions are too complicated to show here. Here are the results:

Construction: \mathcal{A} = \mathfrak{C}\{\mathcal{B}\}   Generating function: A(z) = \sum_{k \ge 1} \frac{\varphi(k)}{k} \ln \frac{1}{1 - B(z^k)}   (where \varphi is the Euler totient function)
Construction: \mathcal{A} = \Theta\mathcal{B}   Generating function: A(z) = z \frac{d}{dz} B(z)
Construction: \mathcal{A} = \mathcal{B} \circ \mathcal{C}   Generating function: A(z) = B(C(z))

Examples
Many combinatorial classes can be built using these elementary constructions. For example, the class \mathcal{G} of plane trees (that is, trees embedded in the plane, so that the order of the subtrees matters) is specified by the recursive relation

\mathcal{G} = \mathcal{Z} \times \mathfrak{G}\{\mathcal{G}\}.

In other words, a tree is a root node of size 1 and a sequence of subtrees. This gives

G(z) = \frac{z}{1 - G(z)},

or

G(z) - G(z)^2 = z.
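As a quick check of this functional equation, the counting sequence can be computed numerically. The sketch below (not from the source; the helper names are ours) iterates G <- z/(1 - G) on truncated power series; the resulting coefficients 1, 1, 2, 5, 14, 42, ... are the Catalan numbers, which count plane trees by number of nodes.

def series_mul(a, b, n):
    # product of two truncated power series, keeping n coefficients
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and i + j < n:
                c[i + j] += ai * bj
    return c

def plane_tree_counts(n):
    g = [0] * n                          # coefficients of G(z), truncated to order n
    for _ in range(n):                   # each iteration fixes one more coefficient
        inv = [1] + [0] * (n - 1)        # accumulate 1/(1 - G) = 1 + G + G^2 + ...
        power = [1] + [0] * (n - 1)
        for _ in range(1, n):
            power = series_mul(power, g, n)
            inv = [x + y for x, y in zip(inv, power)]
        g = [0] + inv[:n - 1]            # multiply by z (shift by one)
    return g

print(plane_tree_counts(8))  # [0, 1, 1, 2, 5, 14, 42, 132]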

Another example (and a classic combinatorics problem) is integer partitions. First, define the class of positive integers \mathcal{I}, where the size of each integer is its value:

\mathcal{I} = \mathcal{Z} \times \mathfrak{G}\{\mathcal{Z}\}.

The OGF of \mathcal{I} is then

I(z) = \frac{z}{1 - z}.


Now, define the set of partitions \mathcal{P} as

\mathcal{P} = \mathfrak{M}\{\mathcal{I}\}.

The OGF of \mathcal{P} is

P(z) = \exp\left( I(z) + \frac{1}{2} I(z^2) + \frac{1}{3} I(z^3) + \cdots \right) = \prod_{n \ge 1} \frac{1}{1 - z^n}.

Unfortunately, there is no closed form for P(z); however, the OGF can be used to derive a recurrence relation, or, using more advanced methods of analytic combinatorics, to calculate the asymptotic behaviour of the counting sequence.
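The product form of P(z) gives an immediate way to compute the counting sequence. The sketch below (not from the source; the name is ours) multiplies a truncated series by 1/(1 - z^k) for k = 1, ..., N; the coefficient update p[m] += p[m - k] is exactly that multiplication.

def partition_counts(N):
    p = [1] + [0] * N              # the series 1
    for k in range(1, N + 1):      # multiply by 1/(1 - z^k)
        for m in range(k, N + 1):
            p[m] += p[m - k]
    return p

print(partition_counts(10))  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]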

Labelled structures
An object is weakly labelled if each of its atoms has a nonnegative integer label, and each of these labels is distinct. An object is (strongly or well) labelled if, furthermore, these labels comprise the consecutive integers 1, 2, \ldots, n. Note: some combinatorial classes are best specified as labelled structures or unlabelled structures, but some readily admit both specifications. A good example of labelled structures is the class of labelled graphs. With labelled structures, an exponential generating function (EGF) is used. The EGF of a sequence A_0, A_1, A_2, \ldots is defined as

A(x) = \sum_{n=0}^{\infty} A_n \frac{x^n}{n!}

Product
For labelled structures, we must use a different definition for product than for unlabelled structures. In fact, if we simply used the cartesian product, the resulting structures would not even be well labelled. Instead, we use the so-called labelled product, denoted \mathcal{A} \star \mathcal{B}.

For a pair \alpha \in \mathcal{A} and \beta \in \mathcal{B}, we wish to combine the two structures into a single structure. In order for the result to be well labelled, this requires some relabelling of the atoms in \alpha and \beta. We will restrict our attention to relabellings that are consistent with the order of the original labels. Note that there are still multiple ways to do the relabelling; thus, each pair of members determines not a single member in the product, but a set of new members. The details of this construction are found on the page of the labelled enumeration theorem.

To aid this development, let us define a function, \rho, that takes as its argument a (possibly weakly) labelled object \alpha and relabels its atoms in an order-consistent way so that \rho(\alpha) is well labelled. We then define the labelled product for two objects \alpha and \beta as

\alpha \star \beta = \{ (\alpha', \beta') : (\alpha', \beta') \text{ is well labelled, } \rho(\alpha') = \rho(\alpha), \rho(\beta') = \rho(\beta) \}.

Finally, the labelled product of two classes \mathcal{A} and \mathcal{B} is

\mathcal{A} \star \mathcal{B} = \bigcup_{\alpha \in \mathcal{A},\, \beta \in \mathcal{B}} (\alpha \star \beta).

The EGF can be derived by noting that for objects of size k and n - k, there are \binom{n}{k} ways to do the relabelling. Therefore, the total number of objects of size n is

\sum_{k=0}^{n} \binom{n}{k} A_k B_{n-k}.

This binomial convolution relation for the terms is equivalent to multiplying the EGFs,

A(z) \cdot B(z).
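The binomial convolution can be checked directly on counting sequences. The following minimal sketch (not from the source; the names are ours) computes the counts of the labelled product from the counts of the factors; as a toy check, if both classes contain exactly one object of every size, a pair amounts to an ordered splitting of the label set {1, ..., n} into two subsets, so the answer should be 2^n.

from math import comb

def labelled_product_counts(a, b):
    n = min(len(a), len(b))
    return [sum(comb(m, k) * a[k] * b[m - k] for k in range(m + 1)) for m in range(n)]

a = b = [1] * 8                       # one object of each size 0, 1, ..., 7
print(labelled_product_counts(a, b))  # [1, 2, 4, 8, 16, 32, 64, 128]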


Sequence
The sequence construction \mathcal{A} = \mathfrak{G}\{\mathcal{B}\} is defined similarly to the unlabelled case:

\mathcal{A} = \mathcal{E} + \mathcal{B} + (\mathcal{B} \star \mathcal{B}) + (\mathcal{B} \star \mathcal{B} \star \mathcal{B}) + \cdots

and again, as above,

A(z) = \frac{1}{1 - B(z)}

Set
In labelled structures, a set of k elements corresponds to exactly k! sequences. This is different from the unlabelled case, where some of the permutations may coincide. Thus for \mathcal{A} = \mathfrak{P}\{\mathcal{B}\}, we have

A(z) = \sum_{k \ge 0} \frac{B(z)^k}{k!} = \exp(B(z))

Cycle
Cycles are also easier than in the unlabelled case. A cycle of length k corresponds to k distinct sequences. Thus for \mathcal{A} = \mathfrak{C}\{\mathcal{B}\}, we have

A(z) = \sum_{k \ge 1} \frac{B(z)^k}{k} = \ln\left( \frac{1}{1 - B(z)} \right)

Other elementary constructions


The operators \mathfrak{C}_{\mathrm{even}}, \mathfrak{C}_{\mathrm{odd}}, \mathfrak{P}_{\mathrm{even}} and \mathfrak{P}_{\mathrm{odd}} represent cycles of even and odd length, and sets of even and odd cardinality. Their EGFs are, respectively,

\frac{1}{2} \ln \frac{1}{1 - B(z)^2}, \quad \frac{1}{2} \ln \frac{1 + B(z)}{1 - B(z)}, \quad \cosh(B(z)), \quad \sinh(B(z)).

Examples
Stirling numbers of the second kind may be derived and analyzed using the structural decomposition

\mathfrak{P}\{\mathfrak{P}_{\ge 1}\{\mathcal{Z}\}\},

that is, a set partition is a set of non-empty sets of atoms. The decomposition

\mathfrak{P}\{\mathfrak{C}\{\mathcal{Z}\}\}

(a permutation as a set of cycles) is used to study unsigned Stirling numbers of the first kind, and in the derivation of the statistics of random permutations. A detailed examination of the exponential generating functions associated to Stirling numbers may be found on the page on Stirling numbers and exponential generating functions.
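For instance, the outer set construction applied to non-empty sets of atoms gives the EGF exp(e^z - 1) for set partitions, whose coefficients (times n!) are the Bell numbers, the row sums of the Stirling numbers of the second kind. The sketch below (not from the source; it uses exact rational arithmetic and the helper name is ours) expands that EGF using the relation A' = B'A for A = exp(B).

from fractions import Fraction
from math import factorial

def egf_exp(b, N):
    # coefficients of exp(B(z)) truncated at order N, assuming b[0] == 0
    a = [Fraction(0)] * (N + 1)
    a[0] = Fraction(1)
    for n in range(N):                 # (n+1) a_{n+1} = sum_k (k+1) b_{k+1} a_{n-k}
        s = sum((k + 1) * b[k + 1] * a[n - k] for k in range(n + 1))
        a[n + 1] = s / (n + 1)
    return a

N = 8
b = [Fraction(0)] + [Fraction(1, factorial(k)) for k in range(1, N + 1)]   # e^z - 1
a = egf_exp(b, N)
print([int(a[n] * factorial(n)) for n in range(N + 1)])
# Bell numbers: [1, 1, 2, 5, 15, 52, 203, 877, 4140]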

External links
Philippe Flajolet and Robert Sedgewick, Analytic Combinatorics: Symbolic Combinatorics. [1]

References
[1] http://algo.inria.fr/flajolet/Publications/FlSe02.ps.gz


Szemerédi's theorem
In number theory, Szemerédi's theorem is a result that was formerly the Erdős–Turán conjecture (not to be confused with the Erdős–Turán conjecture on additive bases). In 1936 Erdős and Turán conjectured[1] that for every value d, called the density, with 0 < d < 1, and every integer k, there is a number N(d, k) such that every subset A of {1, ..., N} of cardinality at least dN contains a length-k arithmetic progression, provided N > N(d, k). This generalizes the statement of van der Waerden's theorem. The cases k = 1 and k = 2 are trivial. The case k = 3 was established in 1953 by Klaus Roth[2] by an adaptation of the Hardy–Littlewood circle method. The case k = 4 was established in 1969 by Endre Szemerédi[3] by a combinatorial method. Using an approach similar to the one he used for the case k = 3, Roth[4] gave a second proof for this in 1972. Finally, the case of general k was settled in 1975, also by Szemerédi,[5] by an ingenious and complicated extension of the previous combinatorial argument ("a masterpiece of combinatorial reasoning", R. L. Graham). Several further proofs are now known, the most important amongst them being those by Hillel Furstenberg[6] in 1977, using ergodic theory, and by Timothy Gowers[7] in 2001, using both Fourier analysis and combinatorics.

Let k be a positive integer and let 0 < \delta \le 1/2. A finitary version of the theorem states that there exists a positive integer N = N(k, \delta) such that every subset of {1, 2, ..., N} of size at least \delta N contains an arithmetic progression of length k. The best known general bounds for N(k, \delta) are

C^{\log(1/\delta)^{k-1}} \le N(k,\delta) \le 2^{2^{\delta^{-2^{2^{k+9}}}}}

with C > 1. The lower bound is due to Behrend[8] (for k = 3) and Rankin,[9] and the upper bound is due to Gowers.[7] When k = 3 better upper bounds are known; the best known is due to Bourgain.[10]
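The combinatorial property in the theorem is easy to test by brute force on small sets. The sketch below (not from the source; the names are ours) checks whether a subset of {1, ..., N} contains an arithmetic progression of length k.

def has_k_term_ap(A, k):
    S = set(A)
    for a in A:
        for b in A:
            d = b - a
            if d > 0 and all(a + i * d in S for i in range(k)):
                return True
    return False

# {1, 2, 4, 5, 10, 11, 13, 14} is a classic example of a 3-AP-free set.
print(has_k_term_ap([1, 2, 4, 5, 10, 11, 13, 14], 3))  # False
print(has_k_term_ap(list(range(1, 10)), 3))            # True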

Notes
[1] Erdős, Paul; Turán, Paul (1936), "On some sequences of integers" (http://www.renyi.hu/~p_erdos/1936-05.pdf), Journal of the London Mathematical Society 11 (4): 261–264, doi:10.1112/jlms/s1-11.4.261. [2] Roth, Klaus Friedrich (1953), "On certain sets of integers, I", Journal of the London Mathematical Society 28: 104–109, doi:10.1112/jlms/s1-28.1.104, Zbl 0050.04002, MR0051853. [3] Szemerédi, Endre (1969), "On sets of integers containing no four elements in arithmetic progression", Acta Math. Acad. Sci. Hung. 20: 89–104, doi:10.1007/BF01894569, Zbl 0175.04301, MR0245555. [4] Roth, Klaus Friedrich (1972), "Irregularities of sequences relative to arithmetic progressions, IV", Periodica Math. Hungar. 2: 301–326, doi:10.1007/BF02018670, MR0369311. [5] Szemerédi, Endre (1975), "On sets of integers containing no k elements in arithmetic progression" (http://matwbn.icm.edu.pl/ksiazki/aa/aa27/aa27132.pdf), Acta Arithmetica 27: 199–245, Zbl 0303.10056, MR0369312. [6] Furstenberg, Hillel (1977), "Ergodic behavior of diagonal measures and a theorem of Szemerédi on arithmetic progressions", J. d'Analyse Math. 31: 204–256, doi:10.1007/BF02813304, MR0498471. [7] Gowers, Timothy (2001), "A new proof of Szemerédi's theorem" (http://www.dpmms.cam.ac.uk/~wtg10/sz898.dvi), Geom. Funct. Anal. 11 (3): 465–588, doi:10.1007/s00039-001-0332-9, MR1844079. [8] Behrend, Felix A. (1946), "On the sets of integers which contain no three in arithmetic progression", Proceedings of the National Academy of Sciences 32 (12): 331–332, doi:10.1073/pnas.32.12.331, Zbl 0060.10302. [9] Rankin, Robert A. (1962), "Sets of integers containing not more than a given number of terms in arithmetical progression", Proc. Roy. Soc. Edinburgh Sect. A 65: 332–344, Zbl 0104.03705, MR0142526. [10] Bourgain, Jean (1999), "On triples in arithmetic progression", Geom. Func. Anal. 9 (5): 968–984, doi:10.1007/s000390050105, MR1726234.


External links
PlanetMath source for initial version of this page (http://planetmath.org/encyclopedia/SzemeredisTheorem.html)
Announcement by Ben Green and Terence Tao (http://www.math.ucla.edu/~tao/whatsnew.html); the preprint is available at math.NT/0404188 (http://front.math.ucdavis.edu/math.NT/0404188)
Discussion of Szemerédi's theorem (part 1 of 5) (http://in-theory.blogspot.com/2006/06/szemeredis-theorem.html)
Ben Green and Terence Tao: Szemerédi's theorem (http://www.scholarpedia.org/article/Szemeredi's_Theorem) on Scholarpedia
Weisstein, Eric W., "Szemerédi's Theorem" (http://mathworld.wolfram.com/SzemeredisTheorem.html) from MathWorld.

Theory of relations
The theory of relations treats the subject matter of relations in its combinatorial aspect, as distinguished from, though related to, its more properly logical study on one side and its more generally mathematical study on another. A relation, as conceived in the combinatorial theory of relations, is a mathematical object that in general can have a very complex type, the complexity of which is best approached in several stages, as indicated next. In order to approach the combinatorial definition of a relation, it helps to introduce a few preliminary notions that can serve as stepping stones to the general idea. A relation in mathematics is defined as an object that has its existence as such within a definite context or setting. It is literally the case that to change this setting is to change the relation that is being defined. The particular type of context that is needed here is formalized as a collection of elements from which are chosen the elements of the relation in question. This larger collection of elementary relations or tuples is constructed by means of the set-theoretic product commonly known as the cartesian product.

Preliminaries
A relation L is defined by specifying two mathematical objects as its constituent parts: The first part is called the figure of L, notated as figure(L) or F(L). The second part is called the ground of L, notated as ground(L) or G(L). In the special case of a finitary relation, for concreteness a k-place relation, the concepts of figure and ground are defined as follows: The ground of L is a sequence of k nonempty sets, X1, …, Xk, called the domains of the relation L. The figure of L is a subset of the cartesian product taken over the domains of L, that is, F(L) ⊆ G(L) = X1 × … × Xk. Strictly speaking, then, the relation L consists of a couple of things, L = (F(L), G(L)), but it is customary in loose speech to use the single name L in a systematically equivocal fashion, taking it to denote either the couple L = (F(L), G(L)) or the figure F(L). There is usually no confusion about this so long as the ground of the relation can be gathered from context.


Definition
The formal definition of a finitary relation, specifically, a k-ary relation, can now be stated. Definition. A k-ary relation L over the nonempty sets X1, …, Xk is a (1+k)-tuple L = (F(L), X1, …, Xk) where F(L) is a subset of the cartesian product X1 × … × Xk. If all of the Xj for j = 1 to k are the same set X, then L is more simply called a k-ary relation over X. The set F(L) is called the figure of L and, providing that the sequence of sets X1, …, Xk is fixed throughout a given discussion or determinate in context, one may regard the relation L as being determined by its figure F(L). The formal definition simply repeats more concisely what was said above, merely unwrapping the conceptual packaging of the relation's ground to define the relation in 1+k parts, as L = (F(L), X1, …, Xk), rather than just the two, as L = (F(L), G(L)). A k-ary predicate is a boolean-valued function on k variables.

Local incidence properties


A local incidence property (LIP) of a relation L is a property that depends in turn on the properties of special subsets of L that are known as its local flags. The local flags of a relation are defined in the following way: Let L be a k-place relation L ⊆ X1 × … × Xk. Select a relational domain Xj and one of its elements x. Then Lx.j is a subset of L that is referred to as the flag of L with x at j, or the x.j-flag of L, an object which has the following definition: Lx.j = { (x1, …, xj, …, xk) ∈ L : xj = x }. Any property C of the local flag Lx.j ⊆ L is said to be a local incidence property of L with respect to the locus x at j. A k-adic relation L ⊆ X1 × … × Xk is said to be C-regular at j if and only if every flag of L with x at j has the property C, where x is taken to vary over the fixed domain Xj. Expressed in symbols, L is C-regular at j if and only if C(Lx.j) is true for all x in Xj.

Numerical incidence properties


A numerical incidence property (NIP) of a relation is a local incidence property that depends on the cardinalities of its local flags. For example, L is said to be c-regular at j if and only if the cardinality of the local flag Lx.j is c for all x in Xj, or, to write it in symbols, if and only if |Lx.j| = c for all x in Xj. In a similar fashion, one can define the NIPs (< c)-regular at j, (> c)-regular at j, and so on. For ease of reference, a few of these definitions are recorded here:

L is c-regular at j if and only if |Lx.j| = c for all x in Xj.
L is (< c)-regular at j if and only if |Lx.j| < c for all x in Xj.
L is (> c)-regular at j if and only if |Lx.j| > c for all x in Xj.

The definition of a local flag can be broadened from a point x in Xj to a subset M of Xj, arriving at the definition of a regional flag in the following way: Suppose that L ⊆ X1 × … × Xk, and choose a subset M ⊆ Xj. Then LM.j is a subset of L that is said to be the flag of L with M at j, or the M.j-flag of L, an object which has the following definition:

LM.j = { (x1, …, xj, …, xk) ∈ L : xj ∈ M }.

Returning to 2-adic relations, it is useful to describe some familiar classes of objects in terms of their local and their numerical incidence properties. Let L ⊆ S × T be an arbitrary 2-adic relation. The following properties of L can be defined:

L is total at S if and only if L is (≥1)-regular at S.
L is total at T if and only if L is (≥1)-regular at T.
L is tubular at S if and only if L is (≤1)-regular at S.
L is tubular at T if and only if L is (≤1)-regular at T.

If L ⊆ S × T is tubular at S, then L is called a partial function or a prefunction from S to T, sometimes indicated by giving L an alternate name, say, "p", and writing L = p : S → T. Just by way of formalizing the definition:

L = p : S → T if and only if L is tubular at S.

If L is a prefunction p : S → T that happens to be total at S, then L is called a function from S to T, indicated by writing L = f : S → T. To say that a relation L ⊆ S × T is totally tubular at S is to say that it is 1-regular at S. Thus, we may formalize the following definition:

L = f : S → T if and only if L is 1-regular at S.

In the case of a function f : S → T, one has the following additional definitions:

f is surjective if and only if f is total at T.
f is injective if and only if f is tubular at T.
f is bijective if and only if f is 1-regular at T.
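These definitions translate directly into a small computation. The sketch below (not from the source; the function names are ours) builds the local flags of a 2-adic relation L of S x T and tests the properties just defined.

def flag(L, x, j):
    # the x.j-flag: all tuples of L whose j-th coordinate equals x (j is 0-based here)
    return {t for t in L if t[j] == x}

def regular(L, X, j, test):
    return all(test(len(flag(L, x, j))) for x in X)

S, T = {1, 2, 3}, {'a', 'b'}
L = {(1, 'a'), (2, 'a'), (3, 'b')}

total_at_S   = regular(L, S, 0, lambda c: c >= 1)
tubular_at_S = regular(L, S, 0, lambda c: c <= 1)
is_function  = regular(L, S, 0, lambda c: c == 1)   # totally tubular = 1-regular at S
surjective   = regular(L, T, 1, lambda c: c >= 1)   # total at T
injective    = regular(L, T, 1, lambda c: c <= 1)   # tubular at T

print(total_at_S, tubular_at_S, is_function, surjective, injective)
# True True True True False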

Variations
Because the concept of a relation has been developed quite literally from the beginnings of logic and mathematics, and because it has incorporated contributions from a diversity of thinkers from many different times and intellectual climates, there is a wide variety of terminology that the reader may run across in connection with the subject. One dimension of variation is reflected in the names that are given to k-place relations, with some writers using medadic, monadic, dyadic, triadic, k-adic where other writers use nullary, unary, binary, ternary, k-ary. One finds a relation on a finite number of domains described as either a finitary relation or a polyadic relation. If the number of domains is finite, say equal to k, then the parameter k may be referred to as the arity, the adicity, or the dimension of the relation. In these cases, the relation may be described as a k-ary relation, a k-adic relation, or a k-dimensional relation, respectively. A more conceptual than nominal variation depends on whether one uses terms like 'predicate', 'relation', and even 'term' to refer to the formal object proper or else to the allied syntactic items that are used to denote them. Compounded with this variation is still another, frequently associated with philosophical differences over the status in reality accorded formal objects. Among those who speak of numbers, functions, properties, relations, and sets as being real, that is to say, as having objective properties, there are divergences as to whether some things are more real than others, especially whether particulars or properties are equally real or else one derivative of the other. Historically speaking, just about every combination of modalities has been used by one school of thought or another, but it suffices here merely to indicate how the options are generated.


Examples
See the articles on relations, relation composition, relation reduction, sign relations, and triadic relations for concrete examples of relations. Many relations of the greatest interest in mathematics are triadic relations, but this fact is somewhat disguised by the circumstance that many of them are referred to as binary operations, and because the most familiar of these have very specific properties that are dictated by their axioms. This makes it practical to study these operations for quite some time by focusing on their dyadic aspects before being forced to consider their proper characters as triadic relations.

References
Peirce, C.S., "Description of a Notation for the Logic of Relatives, Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic", Memoirs of the American Academy of Arts and Sciences 9, 317–378, 1870, also separately as an extract, Google Books Eprint [1]. Reprinted, Collected Papers of Charles Sanders Peirce v. 3, paragraphs 45–149; Writings of Charles S. Peirce v. 2, pp. 359–429.
Ulam, S.M. and Bednarek, A.R., "On the Theory of Relational Structures and Schemata for Parallel Computation", pp. 477–508 in A.R. Bednarek and Françoise Ulam (eds.), Analogies Between Analogies: The Mathematical Reports of S.M. Ulam and His Los Alamos Collaborators, University of California Press, Berkeley, CA, 1990.


References
[1] http://books.google.com/books?id=fFnWmf5oLaoC&printsec=titlepage


Toida's conjecture
In combinatorial mathematics, Toida's conjecture, due to Shunichi Toida in 1977,[1] is a refinement of the disproven Ádám's conjecture of 1967. Toida's conjecture states formally: if S is a subset of Z_n \ {0} and every element of S is a unit modulo n (that is, gcd(s, n) = 1 for each s in S), then the Cayley digraph Cay(Z_n, S) is a CI-digraph.

Proofs
The conjecture was proven in the special case where n is a prime power by Klin and Pöschel in 1978,[2] and by Golfand, Najmark, and Pöschel in 1984.[3] The conjecture was then fully proven by Muzychuk, Klin, and Pöschel in 2001 by using Schur rings,[4] and simultaneously by Dobson and Morris in 2002 by using the classification of finite simple groups.[5]

Notes
[1] S. Toida: "A note on Adam's conjecture", J. of Combinatorial Theory (B), pp. 239–246, October–December 1977
[2] Klin, M.H. and R. Pöschel: The König problem, the isomorphism problem for cyclic graphs and the method of Schur rings, Algebraic methods in graph theory, Vol. I, II., Szeged, 1978, pp. 405–434.
[3] Golfand, J.J., N.L. Najmark and R. Pöschel: The structure of S-rings over Z_{2^m}, preprint (1984).
[4] Klin, M.H., M. Muzychuk and R. Pöschel: The isomorphism problem for circulant graphs via Schur ring theory, Codes and Association Schemes, American Math. Society, 2001.
[5] E. Dobson, J. Morris: Toida's conjecture is true, PhD Thesis, 2002.


Topological combinatorics
The discipline of combinatorial topology used combinatorial concepts in topology, and in the early 20th century this gradually turned into the field of algebraic topology. In 1978 the situation was reversed: methods from algebraic topology were used to solve a problem in combinatorics when László Lovász proved the Kneser conjecture, thus beginning the new study of topological combinatorics. Lovász's proof used the Borsuk–Ulam theorem, and this theorem retains a prominent role in this new field. It has many equivalent versions and analogs and has been used in the study of fair division problems. In another application of homological methods to graph theory, Lovász proved both the undirected and directed versions of a conjecture of Frank: Given a k-connected graph G, k points v1, ..., vk ∈ V(G), and k positive integers n1, n2, ..., nk that sum up to |V(G)|, there exists a partition {V1, ..., Vk} of V(G) such that vi ∈ Vi, |Vi| = ni and Vi spans a connected subgraph. The most notable application of topological combinatorics has been to graph coloring problems. In 1987 the necklace splitting problem was solved by Noga Alon. Topological methods have also been used to study complexity problems in linear decision tree algorithms and the Aanderaa–Karp–Rosenberg conjecture. Other areas include the topology of partially ordered sets and Bruhat orders. Methods from differential topology now also have a combinatorial analog in discrete Morse theory.

References
de Longueville, Mark (2004). "25 years proof of the Kneser conjecture - The advent of topological combinatorics" [1] (PDF). EMS Newsletter. Southampton, Hampshire: European Mathematical Society. pp. 16–19. Retrieved 2008-07-29.

Further reading
Björner, Anders (1995), "Topological Methods" [2], in Graham, Ronald L.; Grötschel, Martin; Lovász, László, Handbook of Combinatorics, 2, The MIT Press, ISBN 978-0262071710.
Kozlov, Dmitry (2005), Trends in topological combinatorics, arXiv:math.AT/0507390.
Kozlov, Dmitry (2007), Combinatorial Algebraic Topology, Springer, ISBN 978-3540719618.
Lange, Carsten (2005), Combinatorial Curvatures, Group Actions, and Colourings: Aspects of Topological Combinatorics [3], Ph.D. thesis, Berlin Institute of Technology.
Matoušek, Jiří (2003), Using the Borsuk–Ulam Theorem: Lectures on Topological Methods in Combinatorics and Geometry, Springer, ISBN 978-3540003625.
Barmak, Jonathan (2011), Algebraic Topology of Finite Topological Spaces and Applications, Springer, ISBN 9783642220029.
de Longueville, Mark (2011), A Course in Topological Combinatorics, Springer, ISBN 9781441979094.


References
[1] http://www.emis.de/newsletter/current/current9.pdf
[2] http://www.math.kth.se/~bjorner/files/TopMeth.pdf
[3] http://edocs.tu-berlin.de/diss/2004/lange_carsten.pdf

Trace monoid
In mathematics and computer science, a trace is a set of strings, wherein certain letters in the string are allowed to commute, but others are not. It generalizes the concept of a string, by not forcing the letters to always be in a fixed order, but allowing certain reshufflings to take place. Traces are used in theories of concurrent computation, where commuting letters stand for portions of a job that can execute independently of one-another, while non-commuting letters stand for locks, synchronization points or thread joins. The trace monoid or free partially commutative monoid is a monoid of traces. In a nutshell, it is constructed as follows: sets of commuting letters are given by an independency relation. These induce an equivalence relation of equivalent strings; the elements of the equivalence classes are the traces. The equivalence relation then partitions up the free monoid (the set of all strings of finite length) into a set of equivalence classes; the result is still a monoid; it is a quotient monoid and is called the trace monoid. The trace monoid is universal, in that all homomorphic monoids are in fact isomorphic. Trace monoids are commonly used to model concurrent computation, forming the foundation for process calculi. They are the object of study in trace theory. The utility of trace monoids comes from the fact that they are isomorphic to the monoid of dependency graphs; thus allowing algebraic techniques to be applied to graphs, and vice-versa. They are also isomorphic to history monoids, which model the history of computation of individual processes in the context of all scheduled processes on one or more computers.

Trace
Let Σ* denote the free monoid, that is, the set of all strings written in the alphabet Σ. Here, the asterisk denotes, as usual, the Kleene star. An independency relation I on Σ then induces a binary relation ∼ on Σ*, where u ∼ v if and only if there exist x, y ∈ Σ* and a pair (a, b) ∈ I such that u = xaby and v = xbay. Here, u, v, x and y are understood to be strings (elements of Σ*), while a and b are letters (elements of Σ).

The trace is defined as the symmetric, reflexive and transitive closure of ∼. The trace is thus an equivalence relation on Σ*, and is denoted by ≡_D. The subscript D on the equivalence simply denotes that the equivalence is obtained from the independency I induced by the dependency D. Clearly, different dependencies will give different equivalence relations. The transitive closure simply implies that u ≡_D v if and only if there exists a sequence of strings (w_0, w_1, ..., w_n) such that u ∼ w_0, v ∼ w_n, and w_i ∼ w_{i+1} for all 0 ≤ i < n.


Trace monoid
The trace monoid, commonly denoted as M(D), is defined as the quotient monoid

M(D) = Σ* / ≡_D.

The homomorphism φ_D : Σ* → M(D), which sends a string to its equivalence class, is commonly referred to as the natural homomorphism or canonical homomorphism. That the terms natural or canonical are deserved follows from the fact that this morphism embodies a universal property, as discussed in a later section.

Examples
Consider the alphabet Σ = {a, b, c}. A possible dependency relation is

D = {a, b} × {a, b} ∪ {a, c} × {a, c} = {(a, a), (a, b), (b, a), (b, b), (a, c), (c, a), (c, c)}.

The corresponding independency is

I_D = {(b, c), (c, b)}.

Therefore, the letters b and c commute. Thus, for example, a trace equivalence class for the string abbca would be

[abbca]_D = {abbca, abcba, acbba}.

The equivalence class [abbca]_D is an element of the trace monoid.
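For a finite word, the whole equivalence class can be computed by closing under adjacent swaps of independent letters. The following sketch (not from the source; the names are ours) does this for the example above.

def trace_class(word, independency):
    seen, stack = {word}, [word]
    while stack:
        w = stack.pop()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in independency:          # adjacent independent letters
                swapped = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if swapped not in seen:
                    seen.add(swapped)
                    stack.append(swapped)
    return seen

I = {('b', 'c'), ('c', 'b')}
print(sorted(trace_class("abbca", I)))   # ['abbca', 'abcba', 'acbba']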

Properties
The cancellation property states that equivalence is maintained under right cancellation. That is, if w ≡ v, then (w ÷ a) ≡ (v ÷ a). Here, the notation w ÷ a denotes right cancellation, the removal of the first occurrence of the letter a from the string w, starting from the right-hand side. Equivalence is also maintained by left-cancellation. Several corollaries follow:

Embedding: u ≡ v if and only if xuy ≡ xvy for strings x and y. Thus, the trace monoid is a syntactic monoid.

Independence: if ua ≡ vb and a ≠ b, then a is independent of b. That is, (a, b) ∈ I_D. Furthermore, there exists a string w such that u ≡ wb and v ≡ wa.

Projection rule: equivalence is maintained under string projection, so that if u ≡ v, then π_Σ′(u) ≡ π_Σ′(v).

A strong form of Levi's lemma holds for traces. Specifically, if uv ≡ xy for strings u, v, x, y, then there exist strings z_1, z_2, z_3 and z_4 such that u ≡ z_1 z_2, v ≡ z_3 z_4, x ≡ z_1 z_3 and y ≡ z_2 z_4, and such that (a, b) ∈ I_D for all letters a and b such that a occurs in z_2 and b occurs in z_3.[1]


Universal property
A dependency morphism (with respect to a dependency D) is a morphism ψ : Σ* → M to some monoid M such that the "usual" trace properties hold, namely four implication-style conditions guaranteeing that ψ respects the dependency D. In particular, the natural homomorphism is a dependency morphism.

Dependency morphisms are universal, in the sense that for a given, fixed dependency D, if ψ : Σ* → M is a dependency morphism to a monoid M, then M is isomorphic to the trace monoid M(D).

Normal forms
There are two well-known normal forms for words in trace monoids. One is the lexicographic normal form, due to Anatolij V. Anisimov and Donald Knuth, and the other is the Foata normal form due to Pierre Cartier and Dominique Foata who studied the trace monoid for its combinatorics in the 1960s.

Trace languages
Just as a formal language can be regarded as a subset of Σ*, the set of all possible strings, so a trace language is defined as a subset of M(D), the set of all possible traces. A language L ⊆ Σ* is a trace language, or is said to be consistent with the dependency D, if it is a union of trace equivalence classes, that is, if

L = ⋃_{w ∈ L} [w]_D,

where [w]_D is the trace (equivalence class) of the string w; equivalently, L is the set of strings underlying some set of traces.

Notes
[1] Proposition 2.2, Diekert and Métivier 1997.

References
General references
Diekert, Volker; Métivier, Yves (1997), "Partial Commutation and Traces" (http://citeseer.ist.psu.edu/diekert97partial.html), in Rozenberg, G.; Salomaa, A., Handbook of Formal Languages Vol. 3; Beyond Words, Springer-Verlag, Berlin, pp. 457–534, ISBN 3540606491
Antoni Mazurkiewicz, "Introduction to Trace Theory", pp. 3–41, in The Book of Traces, V. Diekert, G. Rozenberg, eds. (1995) World Scientific, Singapore, ISBN 9810220588
Volker Diekert, Combinatorics on traces, LNCS 454, Springer, 1990, ISBN 3540530312, pp. 9–29
Seminal publications

Pierre Cartier and Dominique Foata, Problèmes combinatoires de commutation et réarrangements, Lecture Notes in Mathematics 85, Springer-Verlag, Berlin, 1969; free 2006 reprint (http://www.emis.de/journals/SLC/books/cartfoa.html) with new appendixes
Antoni Mazurkiewicz, Concurrent program schemes and their interpretations, DAIMI Report PB 78, Aarhus University, 1977


Transformation (combinatorics)
In combinatorial mathematics, the notion of transformation is used with several slightly different meanings. Informally, a transformation of a set of N values is an arrangement of those values into a particular order, where values may be repeated, but the ordered list is N elements in length. Thus, there are 27 transformations of the set {1,2,3}, namely [1,1,1], [1,1,2], [1,1,3], [1,2,1], [1,2,2], [1,2,3], [1,3,1], [1,3,2], [1,3,3], [2,1,1], [2,1,2], [2,1,3], [2,2,1], [2,2,2], [2,2,3], [2,3,1], [2,3,2], [2,3,3], [3,1,1], [3,1,2], [3,1,3], [3,2,1], [3,2,2], [3,2,3], [3,3,1], [3,3,2], and [3,3,3]. In general there are N^N transformations for a set of N elements. Analogous to a permutation group having elements that are permutations, a transformation semigroup has elements that are transformations. For N > 1, the set of permutations on N values is a proper subset of the set of transformations on N values.
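The sketch below (not from the source) reproduces this enumeration for the set {1, 2, 3}.

from itertools import product

values = [1, 2, 3]
transformations = [list(t) for t in product(values, repeat=len(values))]
print(len(transformations))   # 27 = 3^3
print(transformations[:4])    # [[1, 1, 1], [1, 1, 2], [1, 1, 3], [1, 2, 1]]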

References
J. Dénes, Some combinatorial properties of transformations and their connections with the theory of graphs, Journal of Combinatorial Theory, Volume 9, Issue 2, September 1970, Pages 108–116

Transversal (combinatorics)
In combinatorial mathematics, given a collection C of sets, a transversal is a set containing exactly one element from each member of the collection. In the language of category theory, it is a section of the quotient map induced by the collection. If the original sets are not disjoint, there are several different definitions. One variation is that there is a bijection f from the transversal to C such that x is an element of f(x) for each x in the transversal. Then the set is also called a system of distinct representatives. A less restrictive definition requires that the transversal just has a non-empty intersection with each member of C. A partial transversal is a set containing at most one element from each member of the collection, or a set with an injection from the set to C. The transversals of a finite collection C of finite sets form the basis sets of a matroid, the "transversal matroid" of C. The independent sets of the transversal matroid are the partial transversals of C.[1]

Examples
As an example of the disjoint-sets meaning of transversal, in group theory, given a subgroup H of a group G, a right (respectively left) transversal is a set containing exactly one element from each right (respectively left) coset of H. Given a direct product of groups G = H × K, H is a transversal for the cosets of K.

Hall's marriage theorem gives necessary and sufficient conditions for possibly overlapping subsets to have a transversal.
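For small collections, a system of distinct representatives can be found by brute force. The sketch below (not from the source; the function name is ours) tries every choice of one element per set and keeps the first choice whose elements are pairwise distinct; by Hall's theorem the search fails exactly when some subfamily has a union that is too small.

from itertools import product

def find_sdr(collection):
    for choice in product(*collection):
        if len(set(choice)) == len(collection):   # representatives pairwise distinct
            return list(choice)
    return None

print(find_sdr([{1, 2}, {2, 3}, {1, 3}]))   # e.g. [1, 2, 3]
print(find_sdr([{1}, {2}, {1, 2}]))         # None: the three sets only cover {1, 2}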


Notes
[1] Oxley, James G. (2006), Matroid Theory, Oxford graduate texts in mathematics, 3, Oxford University Press, p. 48, ISBN 9780199202508.

References
Mirsky, Leon (1971). Transversal Theory: An account of some aspects of combinatorial mathematics. Academic Press. ISBN 0-12-498550-5. Lawler, E. L. Combinatorial Optimization: Networks and Matroids. 1976.

Tucker's lemma
In mathematics, Tucker's lemma is a combinatorial analog of the Borsuk–Ulam theorem. Let T be a triangulation of the closed n-dimensional ball B^n. Assume T is antipodally symmetric on the boundary sphere S^{n-1}. That means that the subset of simplices of T which are in S^{n-1} provides a triangulation of S^{n-1} where, if σ is a simplex, then so is −σ. Let L : V(T) → {+1, −1, +2, −2, ..., +n, −n} be a labelling of the vertices of T which satisfies L(−v) = −L(v) for all vertices v in S^{n-1}. Then Tucker's lemma states that there exists a 1-simplex in T whose vertices are labelled by the same number but with opposite signs.

(Figure: for n = 2, an antipodally symmetric triangulation of the disk with such a labelling; one 1-simplex, drawn in red, has its two vertices labelled by the same number with opposite signs. Tucker's lemma states that for such a triangulation at least one such 1-simplex must exist.)


References
Jiří Matoušek (2003). Using the Borsuk–Ulam Theorem. Springer-Verlag. p. 34. ISBN 3-540-00362-2.

Twelvefold way
In combinatorics, the twelvefold way is a name given to a systematic classification of 12 related enumerative problems concerning two finite sets, which include the classical problems of counting permutations, combinations, multisets, and partitions either of a set or of a number. The idea of the classification is due to Gian-Carlo Rota, and the name was suggested by Joel Spencer.[1]

Let N and X be finite sets. Let n = |N| and x = |X|. Thus N is an n-set, and X is an x-set. The general problem we consider is the enumeration of functions f : N → X. There are three different conditions which may be imposed upon f:
1. No condition: each element a of N may be sent by f to any element b of X, and each b can occur multiple times as an image.
2. f is injective: each a of N must be sent to a different b of X, so each b can occur at most once as an image.
3. f is surjective: for each b in X there must be at least one a in N with f(a) = b, so each b will occur at least once as an image.
A possible fourth condition of f being bijective is not included, since the set of such functions will be empty unless n = x, in which case the condition is equivalent both to f being injective and to f being surjective; therefore considering this condition would not add any interesting problems.

There are four different equivalence relations which may be defined on the set of functions f from N to X:
1. equality;
2. equality up to a permutation of N;
3. equality up to a permutation of X;
4. equality up to permutations of N and X.

Formally, the last three cases mean that the problem is taken to be counting the orbits of the natural action of, respectively, the symmetric group of N, the symmetric group of X, and of the product of the two groups, on the appropriate sets of functions. These criteria can be paired in 3 × 4 = 12 ways. These 12 types of problems do not involve the same difficulties, and there is not one systematic method for solving them. Indeed two of the problems are trivial (since all injective functions N → X, if any, are equivalent under permutations of X), some problems allow a solution expressed by a multiplicative formula in terms of n and x, while for the remaining problems the solution can only be expressed in terms of combinatorial functions adapted to the problem, notably Stirling numbers and functions counting partitions of numbers with a given number of parts.

The incorporation of classical enumeration problems into this setting is as follows.
Counting n-permutations (i.e., sequences without repetition) of X is equivalent to counting injective functions N → X.
Counting n-combinations of X is equivalent to counting injective functions N → X up to permutations of N.
Counting permutations of the set X is equivalent to counting injective functions N → X when n = x, and also to counting surjective functions N → X when n = x.
Counting multisets of size n (also known as n-combinations with repetitions) of elements in X is equivalent to counting all functions N → X up to permutations of N.
Counting partitions of the set N into x subsets is equivalent to counting all surjective functions N → X up to permutations of X.
Counting compositions of the number n into x parts is equivalent to counting all surjective functions N → X up to permutations of N.

Counting partitions of the number n into x parts is equivalent to counting all surjective functions N → X up to permutations of both N and X.


Interpretations
The various problems in the twelvefold way may be considered from different points of view.

Balls and boxes


Traditionally many of the problems in the twelvefold way have been formulated in terms of placing balls in boxes (or some similar visualization) instead of defining functions. The set N can be identified with a set of balls, and X with a set of boxes; the function f : N → X then describes a way to distribute the balls into the boxes, namely by putting each ball b into box f(b). Thus the property that a function ascribes a unique image to each value in its domain is reflected by the property that any ball can go into only one box (together with the requirement that no ball should remain outside of the boxes), whereas any box can accommodate (in principle) an arbitrary number of balls. Requiring in addition f to be injective means forbidding to put more than one ball in any one box, while requiring f to be surjective means insisting that every box contain at least one ball. Counting modulo permutations of N and/or of X is reflected by calling the balls, respectively the boxes, "indistinguishable". This is an imprecise formulation (in practice individual balls and boxes can always be distinguished by their location, and one could not assign different balls to different boxes without distinguishing them), intended to indicate that different configurations are not to be counted separately if one can be transformed into the other by some interchange of balls, respectively of boxes; this is what the action by permutations of N and/or of X formalizes. In fact the case of indistinguishable boxes is somewhat harder to visualize than that of indistinguishable balls, since a configuration is inevitably presented with some ordering of the boxes; permuting the boxes will then appear as a permutation of their contents.

Selection, labelling, grouping


A function f : N → X can be considered from the perspective of X or of N. This leads to different views:
the function f labels each element of N by an element of X;
the function f selects (chooses) an element of the set X for each element of N, a total of n choices;
the function f groups the elements of N together that are mapped to the same element of X.
These points of view are not equally suited to all cases. The labelling and selection points of view are not well compatible with permutation of the elements of X, since this changes the labels or the selection; on the other hand the grouping point of view does not give complete information about the configuration unless the elements of X may be freely permuted. The labelling and selection points of view are more or less equivalent when N is not permuted, but when it is, the selection point of view is more suited. The selection can then be viewed as an unordered selection: a single choice of a (multi-)set of n elements from X is made.

Labelling and selection with or without repetition


When viewing f as a labelling of the elements of N, the latter may be thought of as arranged in a sequence, and the labels as being successively assigned to them. A requirement that f be injective means that no label can be used a second time; the result is a sequence of labels without repetition. In the absence of such a requirement, the terminology "sequences with repetition" is used, meaning that labels may be used more than once (although sequences that happen to be without repetition are also allowed). For an unordered selection the same kind of distinction applies. If f must be injective, then the selection must involve n distinct elements of X, so it is a subset of X of size n, also called an n-combination. Without the requirement, a same element of X may occur multiple times in the selection, and the result is a multiset of size n of elements from X,

also called an n-multicombination or n-combination with repetition. In these cases the requirement of a surjective f means that every label is to be used at least once, respectively that every element of X be included in the selection at least once. Such a requirement is less natural to handle mathematically, and indeed the former case is more easily viewed first as a grouping of elements of N, with in addition a labelling of the groups by the elements of X.


Partitions of sets and numbers


When viewing f as a grouping of the elements of N (which assumes one identifies functions under permutations of X), requiring f to be surjective means the number of groups must be exactly x. Without this requirement the number of groups can be at most x. The requirement of injective f means each element of N must be a group in itself, which leaves at most one valid grouping and therefore gives a rather uninteresting counting problem. When in addition one identifies under permutations of N, this amounts to forgetting the groups themselves but retaining only their sizes. These sizes moreover do not come in any definite order, while the same size may occur more than once; one may choose to arrange them into a weakly decreasing list of numbers, whose sum is the number n. This gives the combinatorial notion of a partition of the number n, into exactly x (for surjective f) or at most x (for arbitrary f) parts.

Formulas
Enumeration formulas for the twelvefold way
                        Any f                     Injective f               Surjective f
Distinct                x^n                       x(x−1)⋯(x−n+1)            x!·S(n,x)
S_n orbits              C(x+n−1, n)               C(x, n)                   C(n−1, n−x)
S_x orbits              Σ_{k=0}^{x} S(n,k)        [n ≤ x]                   S(n,x)
S_n × S_x orbits        p_x(n+x)                  [n ≤ x]                   p_x(n)

Formulas for the different cases of the twelvefold way are summarized in the table above; each table entry is explained in a subsection below. The particular notations used are the following:
the falling factorial power x(x−1)⋯(x−n+1), a product of n factors;
the factorial x!;
the Stirling number of the second kind S(n,k), denoting the number of ways to partition a set of n elements into k subsets;
the binomial coefficient C(x, n);
the Iverson bracket [...], encoding a truth value as 0 or 1;
the number p_k(n) of partitions of n into k parts.
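For small n and x the whole table can be verified by brute force. The following sketch (not from the source; the function names are ours) enumerates all functions N → X, identifies them under the chosen permutations by picking a canonical representative of each orbit, and compares a few entries with the closed formulas above.

from itertools import product, permutations
from math import comb

def twelvefold_bruteforce(n, x):
    N, X = range(n), range(x)
    funcs = list(product(X, repeat=n))        # f encoded as its tuple of images
    conds = {'any': lambda f: True,
             'inj': lambda f: len(set(f)) == n,
             'sur': lambda f: set(f) == set(X)}
    def canon(f, perm_N, perm_X):
        best = None                           # lexicographically least element of the orbit
        for p in (permutations(N) if perm_N else [tuple(N)]):
            for q in (permutations(X) if perm_X else [tuple(X)]):
                g = tuple(q[f[p[i]]] for i in N)
                best = g if best is None or g < best else best
        return best
    table = {}
    for eq, (pN, pX) in {'distinct': (0, 0), 'Sn': (1, 0), 'Sx': (0, 1), 'SnSx': (1, 1)}.items():
        for name, ok in conds.items():
            table[(eq, name)] = len({canon(f, pN, pX) for f in funcs if ok(f)})
    return table

n, x = 3, 2
t = twelvefold_bruteforce(n, x)
print(t[('distinct', 'any')], x ** n)           # 8 8
print(t[('distinct', 'inj')], 0)                # 0 0   (n > x)
print(t[('Sn', 'any')], comb(x + n - 1, n))     # 4 4
print(t[('Sn', 'sur')], comb(n - 1, n - x))     # 2 2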


Details of the different cases


The cases below are ordered in such a way as to group those cases for which the arguments used in counting are related, which is not the ordering in the table given.

Functions from N to X

This case is equivalent to counting sequences of n elements of X with no restriction: a function f : N → X is determined by the n images of the elements of N, which can each be independently chosen among the elements of X. This gives a total of x^n possibilities. Example: for n = 2 and X = {a, b, c} there are 3^2 = 9 such functions, corresponding to the sequences aa, ab, ac, ba, bb, bc, ca, cb, cc.

Injective functions from N to X

This case is equivalent to counting sequences of n distinct elements of X, also called n-permutations of X, or sequences without repetitions; again this sequence is formed by the n images of the elements of N. This case differs from the one of unrestricted sequences in that there is one choice less for the second element, two less for the third element, and so on. Therefore, instead of by an ordinary power of x, the value is given by a falling factorial power of x, in which each successive factor is one less than the previous one. The formula is

x(x−1)(x−2)⋯(x−n+1).

Note that if n > x then one obtains a factor zero, so in this case there are no injective functions N → X at all; this is just a restatement of the pigeonhole principle. Example: for n = 2 and X = {a, b, c} there are 3·2 = 6 such functions, corresponding to the sequences ab, ac, ba, bc, ca, cb.

Injective functions from N to X, up to a permutation of N

This case is equivalent to counting subsets of n elements of X, also called n-combinations of X: among the sequences of n distinct elements of X, those that differ only in the order of their terms are identified by permutations of N. Since in all cases this groups together exactly n! different sequences, we can divide the number of such sequences by n! to get the number of n-combinations of X. This number is known as the binomial coefficient C(x, n), which is therefore given by

C(x, n) = x(x−1)⋯(x−n+1) / n! = x! / (n! (x−n)!).

Functions from N to X, up to a permutation of N

This case is equivalent to counting multisets with n elements (also called n-multicombinations) of elements from X. The reason is that for each element of X it is determined how many elements of N are mapped to it by f, while two functions that give the same such "multiplicities" to each element of X can always be transformed into another by a permutation of N. The formula counting all functions N → X is not useful here, because the number of them grouped together by permutations of N varies from one function to another. Rather, as explained under combinations, the number of n-multicombinations from a set with x elements can be seen to be the same as the number of n-combinations from a set with x + n − 1 elements. This reduces the problem to another one in the twelvefold way, and gives as result

C(x + n − 1, n).

Surjective functions from N to X, up to a permutation of N

This case is equivalent to counting multisets with n elements from X, for which each element of X occurs at least once. This is also equivalent to counting the compositions of n with x (non-zero) parts, by listing the multiplicities of the elements of X in order. The correspondence between functions and multisets is the same as in the previous case, and the surjectivity requirement means that all multiplicities are at least one. By decreasing all multiplicities by 1, this reduces to the previous case; since the change decreases the value of n by x, the result is

C(n − 1, n − x).


Note that when n < x there are no surjective functions N → X at all (a kind of "empty pigeonhole" principle); this is taken into account in the formula, by the convention that binomial coefficients are always 0 if the lower index is negative. The form of the result suggests looking for a manner to associate a class of surjective functions N → X directly to a subset of n − x elements chosen from a total of n − 1, which can be done as follows. First choose a total ordering of the sets N and X, and note that by applying a suitable permutation of N, every surjective function N → X can be transformed into a unique weakly increasing (and of course still surjective) function. If one connects the elements of N in order by n − 1 arcs into a linear graph, then choosing any subset of n − x arcs and removing the rest, one obtains a graph with x connected components, and by sending these to the successive elements of X, one obtains a weakly increasing surjective function N → X; also the sizes of the connected components give a composition of n into x parts. This argument is basically the one given at stars and bars, except that there the complementary choice of x − 1 "separations" is made.

Injective functions from N to X, up to a permutation of X

In this case we consider sequences of n distinct elements from X, but identify those obtained from one another by applying to each element a permutation of X. It is easy to see that two different such sequences can always be identified: the permutation must map term i of the first sequence to term i of the second sequence, and since no value occurs twice in either sequence these requirements do not contradict each other; it remains to map the elements not occurring in the first sequence bijectively to those not occurring in the second sequence in an arbitrary way. The only fact that makes the result depend on n and x at all is that the existence of any such sequences to begin with requires n ≤ x, by the pigeonhole principle. The number is therefore expressed as [n ≤ x], using the Iverson bracket.

Injective functions from N to X, up to permutations of N and X

This case is reduced to the previous one: since all sequences of n distinct elements from X can already be transformed into each other by applying a permutation of X to each of their terms, also allowing reordering of the terms does not give any new identifications; the number remains [n ≤ x].

Surjective functions from N to X, up to a permutation of X

This case is equivalent to counting partitions of N into x (non-empty) subsets, or counting equivalence relations on N with exactly x classes. Indeed for any surjective function f : N → X, the relation of having the same image under f is such an equivalence relation, and it does not change when a permutation of X is subsequently applied; conversely one can turn such an equivalence relation into a surjective function by assigning the elements of X in some manner to the x equivalence classes. The number of such partitions or equivalence relations is by definition the Stirling number of the second kind S(n, x) (sometimes written with a curly-brace notation analogous to binomial coefficients). Its value can be described using a recursion relation or using generating functions, but unlike binomial coefficients there is no closed formula for these numbers that does not involve a summation.
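The recursion relation mentioned here is S(n, x) = x·S(n − 1, x) + S(n − 1, x − 1), according to whether the last element joins one of the existing x classes or forms a class of its own. A minimal sketch (not from the source):

from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, x):
    if n == 0:
        return 1 if x == 0 else 0
    if x == 0:
        return 0
    return x * stirling2(n - 1, x) + stirling2(n - 1, x - 1)

print([stirling2(4, x) for x in range(5)])   # [0, 1, 7, 6, 1]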

Surjective functions from N to X

For each surjective function f : N → X, its orbit under permutations of X has x! elements, since composition (on the left) with two distinct permutations of X never gives the same function on N (the permutations must differ at some element of X, which can always be written as f(i) for some i ∈ N, and the compositions will then differ at i). It follows that the number for this case is x! times the number for the previous case, that is x!·S(n, x). Example: for n = 3 and x = 2 there are 2!·S(3, 2) = 2·3 = 6 surjective functions.


Functions from N to X, up to a permutation of X

This case is like the corresponding one for surjective functions, but some elements of X might not correspond to any equivalence class at all (since one considers functions up to a permutation of X, it does not matter which elements are concerned, just how many). As a consequence one is counting equivalence relations on N with at most x classes, and the result is obtained from the mentioned case by summation over values up to x, giving Σ_{k=0}^{x} S(n, k).

Surjective functions from N to X, up to permutations of N and X

This case is equivalent to counting partitions of the number n into x non-zero parts. Compared to the case of counting surjective functions up to permutations of X only (which gives S(n, x)), one only retains the sizes of the equivalence classes that the function partitions N into (including the multiplicity of each size), since two equivalence relations can be transformed into one another by a permutation of N if and only if the sizes of their classes match. This is precisely what distinguishes the notion of partition of n from that of partition of N, so as a result one gets by definition the number p_x(n) of partitions of n into x non-zero parts.

Functions from N to X, up to permutations of N and X

This case is equivalent to counting partitions of the number n into x parts with parts 0 allowed. The association is the same as for the previous case, except that for each element of X not in the image of the function one associates a part 0 to the partition. Each partition of n into at most x non-zero parts can be extended to such a partition by adding the required number of zeroes, and this accounts for all possibilities, so the result is Σ_{k=0}^{x} p_k(n). By adding a unit to each of the x parts one obtains a partition of n + x into x nonzero parts, and this correspondence is bijective; hence the expression (but not its computation) can be simplified by writing it as p_x(n + x).

Extremal cases
The above formulas give the proper values for all finite sets N and X. In some cases there are alternative formulas which are almost equivalent, but which do not give the correct result in some extremal cases, such as when N or X are empty. The following considerations apply to such cases. For every set X there is exactly one function from the empty set to X (there are no values of this function to specify), which is always injective, but never surjective unless X is (also) empty. For every non-empty set N there are no functions from N to the empty set (there is at least one value of the function that must be specified, but it cannot be). When n > x there are no injective functions N → X, and if n < x there are no surjective functions N → X. The expressions used in the formulas have as particular values

$x^0 = x^{\underline{0}} = \binom{x}{0} = \binom{-1}{0} = 1$

(the first three are instances of an empty product, and the value $\binom{-1}{0} = 1$ is given by the conventional extension of binomial coefficients to arbitrary values of the upper index), while

$0^n = 0^{\underline{n}} = \binom{0}{n} = 0$ whenever n > 0.

In particular in the case of counting multisets with n elements taken from X, the given expression $\binom{x+n-1}{n}$ is equivalent in most cases to $\binom{x+n-1}{x-1}$, but the latter expression would give 0 for the case n = x = 0 (by the usual convention that binomial coefficients with a negative lower index are always 0). Similarly, for the case of counting compositions of n with x non-zero parts, the given expression $\binom{n-1}{n-x}$ is almost equivalent to the expression $\binom{n-1}{x-1}$ given by the stars and bars argument, but the latter gives incorrect values for n = 0 and all values of x. For the cases where the result involves a summation, namely those of counting partitions of N into at most x non-empty subsets or partitions of n into at most x non-zero parts, the summation index is taken to start at 0; although the corresponding term is zero whenever n > 0, it is the unique non-zero term when n = 0, and the result would be wrong for those cases if the summation were taken to start at 1.

Generalizations
We can generalize further by allowing other groups of permutations to act on N and X. If G is a group of permutations of N, and H is a group of permutations of X, then we count equivalence classes of functions f : N → X. Two functions f and F are considered equivalent if, and only if, there exist g ∈ G and h ∈ H so that F = h ∘ f ∘ g. This extension leads to notions such as cyclic and dihedral permutations, as well as cyclic and dihedral partitions of numbers and sets.
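As an illustration of this generalization (the tiny example and helper names are mine), the sketch below counts equivalence classes of functions f : N → X under chosen permutation groups G and H by brute-force orbit construction; taking G and H to be the full symmetric groups reproduces the "both N and X unordered" entry of the twelvefold way:

from itertools import permutations, product

def count_classes(n, x, G, H):
    """Count equivalence classes of functions f: {0..n-1} -> {0..x-1},
    where f ~ F iff F = h o f o g for some g in G and h in H."""
    functions = set(product(range(x), repeat=n))
    seen = set()
    classes = 0
    for f in functions:
        if f in seen:
            continue
        classes += 1
        # Build the orbit of f under the action (g, h) . f = h o f o g.
        for g in G:
            for h in H:
                seen.add(tuple(h[f[g[i]]] for i in range(n)))
    return classes

n, x = 3, 3
S_N = list(permutations(range(n)))   # all permutations of N
S_X = list(permutations(range(x)))   # all permutations of X
identity_N = [tuple(range(n))]
identity_X = [tuple(range(x))]

print(count_classes(n, x, identity_N, identity_X))  # 27 = x**n, no identification
print(count_classes(n, x, S_N, S_X))                # 3 = partitions of 3 into at most 3 parts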

References
[1] Richard P. Stanley (1997). Enumerative Combinatorics, Volume I (http://www-math.mit.edu/~rstan/ec/). Cambridge University Press. ISBN 0-521-66351-2. p. 41

Umbral calculus
In mathematics before the 1970s, the term umbral calculus referred to the surprising similarity between seemingly unrelated polynomial equations and certain shadowy techniques used to 'prove' them. These techniques were introduced by John Blissard (1861) and are sometimes called Blissard's symbolic method. They are often attributed to Édouard Lucas (or James Joseph Sylvester), who used the technique extensively.[1] In the 1930s and 1940s, Eric Temple Bell attempted to set the umbral calculus on a rigorous footing. In the 1970s, Steven Roman, Gian-Carlo Rota, and others developed the umbral calculus by means of linear functionals on spaces of polynomials. Currently, umbral calculus refers to the study of Sheffer sequences, including polynomial sequences of binomial type and Appell sequences.

The 19th-century umbral calculus


That method is a notational device for deriving identities involving indexed sequences of numbers by pretending that the indices are exponents. Construed literally, it is absurd, and yet it is successful: identities derived via the umbral calculus can also be derived by more complicated methods that can be taken literally without logical difficulty. An example involves the Bernoulli polynomials. Consider, for example, the ordinary binomial expansion

$(y + x)^n = \sum_{k=0}^{n} \binom{n}{k} y^{n-k} x^k$

and the remarkably similar-looking relation on the Bernoulli polynomials:

$B_n(y + x) = \sum_{k=0}^{n} \binom{n}{k} B_{n-k}(y)\, x^k.$

Compare also the ordinary derivative

$\frac{d}{dx}\, x^n = n x^{n-1}$

to a very similar-looking relation on the Bernoulli polynomials:

$\frac{d}{dx}\, B_n(x) = n B_{n-1}(x).$

These similarities allow one to construct umbral proofs, which, on the surface, cannot be correct, but seem to work anyway. Thus, for example, by pretending that the subscript n − k is an exponent:

$B_n(x) = \sum_{k=0}^{n} \binom{n}{k} b^{n-k} x^k = (b + x)^n,$

and then differentiating, one gets the desired result:

$B_n'(x) = n (b + x)^{n-1} = n B_{n-1}(x).$

In the above, the variable b is an "umbra" (Latin for shadow). See also Faulhaber's formula.
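The two displayed Bernoulli relations can be checked symbolically; the following short SymPy sketch (purely illustrative) verifies them for n = 5:

from sympy import symbols, bernoulli, binomial, expand, diff

x, y = symbols('x y')
n = 5

# Binomial-style expansion: B_n(x + y) = sum_k C(n, k) B_{n-k}(y) x^k
lhs = bernoulli(n, x + y)
rhs = sum(binomial(n, k) * bernoulli(n - k, y) * x**k for k in range(n + 1))
assert expand(lhs - rhs) == 0

# Derivative relation: B_n'(x) = n B_{n-1}(x)
assert expand(diff(bernoulli(n, x), x) - n * bernoulli(n - 1, x)) == 0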

Umbral Taylor series


Similar relationships were also observed in the theory of finite differences. The umbral version of the Taylor series is given by a similar expression involving the k-th forward differences of a polynomial function f,

$f(x) = \sum_{k=0}^{\infty} \frac{\Delta^k f(a)}{k!} (x - a)_k$

where

$(x - a)_k = (x - a)(x - a - 1)(x - a - 2) \cdots (x - a - k + 1)$

is the Pochhammer symbol used here for the falling sequential product. A similar relationship holds for the backward differences and rising factorial. This series is also known as the Newton series or Newton's forward difference expansion. The analogy to Taylor's expansion is utilized in the calculus of finite differences.
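A quick numerical check of Newton's forward difference expansion for a cubic polynomial (the helper names here are mine, chosen for the illustration):

from math import factorial

def falling(t, k):
    """Falling factorial (t)_k = t (t-1) ... (t-k+1)."""
    out = 1
    for i in range(k):
        out *= t - i
    return out

def newton_series(f, degree, a, t):
    """Evaluate the Newton forward-difference expansion of f around a at the point t."""
    # Forward differences Delta^k f(a) computed from f(a), f(a+1), ..., f(a+degree).
    row = [f(a + i) for i in range(degree + 1)]
    diffs = []
    for _ in range(degree + 1):
        diffs.append(row[0])
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    return sum(diffs[k] * falling(t - a, k) / factorial(k) for k in range(degree + 1))

f = lambda u: 2 * u**3 - 5 * u + 7   # a cubic polynomial
for t in [0, 1, 2.5, -3, 10]:
    assert abs(newton_series(f, 3, 0, t) - f(t)) < 1e-9

For a polynomial, the expansion terminates after degree + 1 terms and reproduces the polynomial exactly, which is what the assertions confirm.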

Bell and Riordan


In the 1930s and 1940s, Eric Temple Bell tried unsuccessfully to make this kind of argument logically rigorous. The combinatorialist John Riordan, in his book Combinatorial Identities published in the 1960s, used techniques of this sort extensively.

The modern umbral calculus


Another combinatorialist, Gian-Carlo Rota, pointed out that the mystery vanishes if one considers the linear functional L on polynomials in y defined by

$L(y^n) = B_n(0) = B_n.$

Then one can write

$B_n(x) = \sum_{k=0}^{n} \binom{n}{k} B_{n-k}(0)\, x^k = \sum_{k=0}^{n} \binom{n}{k} L(y^{n-k})\, x^k = L\!\left( \sum_{k=0}^{n} \binom{n}{k} y^{n-k} x^k \right) = L((y + x)^n),$

etc. Rota later stated that much confusion resulted from the failure to distinguish between three equivalence relations that occur frequently in this topic, all of which were denoted by "=". In a paper published in 1964, Rota used umbral methods to establish the recursion formula satisfied by the Bell numbers, which enumerate partitions of finite sets.
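Rota's functional is easy to imitate in a few lines of SymPy (an illustrative sketch; the helper L and its argument conventions are my own): applying it to (y + x)^n recovers the Bernoulli polynomial B_n(x).

from sympy import symbols, bernoulli, expand

x, y = symbols('x y')

def L(expr, deg):
    # Rota's linear functional on polynomials in y: L(y^k) = B_k(0), the k-th Bernoulli number.
    expr = expand(expr)
    return sum(expr.coeff(y, k) * bernoulli(k, 0) for k in range(deg + 1))

for n in range(7):
    assert expand(L((y + x)**n, n) - bernoulli(n, x)) == 0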

In the paper of Roman and Rota cited below, the umbral calculus is characterized as the study of the umbral algebra, defined as the algebra of linear functionals on the vector space of polynomials in a variable x, with a product L1L2 of linear functionals defined by

$\langle L_1 L_2 \mid x^n \rangle = \sum_{k=0}^{n} \binom{n}{k} \langle L_1 \mid x^k \rangle\, \langle L_2 \mid x^{n-k} \rangle.$

When polynomial sequences replace sequences of numbers as images of y^n under the linear mapping L, then the umbral method is seen to be an essential component of Rota's general theory of special polynomials, and that theory is the umbral calculus by some more modern definitions of the term. A small sample of that theory can be found in the article on polynomial sequences of binomial type. Another is the article titled Sheffer sequence.

Notes
[1] E. T. Bell, "The History of Blissard's Symbolic Method, with a Sketch of its Inventor's Life", The American Mathematical Monthly 45:7 (1938), pp. 414–421.

References
Bell, E. T. (1938), "The History of Blissard's Symbolic Method, with a Sketch of its Inventor's Life", The American Mathematical Monthly (Mathematical Association of America) 45 (7): 414–421, ISSN 0002-9890, JSTOR 2304144
Blissard, John (1861), "Theory of generic equations" (http://resolver.sub.uni-goettingen.de/purl?PPN600494829_0004), The Quarterly Journal of Pure and Applied Mathematics 4: 279–305
Roman, Steven M.; Rota, Gian-Carlo (1978), "The umbral calculus", Advances in Mathematics 27 (2): 95–188, doi:10.1016/0001-8708(78)90087-7, ISSN 0001-8708, MR 0485417
G.-C. Rota, D. Kahaner, and A. Odlyzko, "Finite Operator Calculus", Journal of Mathematical Analysis and its Applications, vol. 42, no. 3, June 1973. Reprinted in the book with the same title, Academic Press, New York, 1975.
Roman, Steven (1984), The Umbral Calculus (http://books.google.com/books?id=JpHjkhFLfpgC), Pure and Applied Mathematics, 111, London: Academic Press Inc. [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-594380-2, MR 741185. Reprinted by Dover, 2005.
Roman, S. (2001), "Umbral calculus" (http://www.encyclopediaofmath.org/index.php?title=U/u095050), in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1556080104

External links
Weisstein, Eric W., "Umbral Calculus" (http://mathworld.wolfram.com/UmbralCalculus.html) from MathWorld.
A. Di Bucchianico, D. Loeb (2000). "A Selected Survey of Umbral Calculus" (http://www.combinatorics.org/Surveys/ds3.pdf). Electronic Journal of Combinatorics. Dynamic Surveys DS3.

Uniform convergence (combinatorics)



For a class of predicates H defined on a set X and a set of samples $x = (x_1, x_2, \ldots, x_m)$, where $x_i \in X$, the empirical frequency of $h \in H$ on x is $\widehat{Q}_x(h) = \frac{1}{m}\,|\{ i : 1 \le i \le m,\ h(x_i) = 1 \}|$. The Uniform Convergence Theorem states, roughly, that if H is "simple" and we draw samples independently (with replacement) from X according to a distribution P, then with high probability every empirical frequency will be close to its expectation, where the expectation is given by $Q_P(h) = P\{ y \in X : h(y) = 1 \}$. Here "simple" means that the Vapnik–Chervonenkis dimension of the class H is small relative to the size of the sample.

In other words, a sufficiently simple collection of functions behaves roughly the same on a small random sample as it does on the distribution as a whole.
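A small simulation makes this concrete (illustrative only; the class of threshold predicates, the uniform distribution, and the sample sizes are choices made for this sketch): the largest deviation between empirical and true frequencies over the whole class shrinks as the sample grows.

import random

random.seed(0)

# Class H of threshold predicates h_t(y) = [y <= t] on X = {0, 1, ..., 99},
# with P the uniform distribution on X.  The VC dimension of H is 1.
thresholds = list(range(100))

def true_freq(t):
    return (t + 1) / 100.0          # Q_P(h_t) = P{ y : y <= t }

def empirical_freq(t, sample):
    return sum(1 for y in sample if y <= t) / len(sample)

for m in [50, 200, 1000, 5000]:
    worst = 0.0
    for _ in range(20):             # a few independent samples of size m
        sample = [random.randrange(100) for _ in range(m)]
        dev = max(abs(true_freq(t) - empirical_freq(t, sample)) for t in thresholds)
        worst = max(worst, dev)
    print(m, round(worst, 3))       # the worst deviation over the class shrinks as m grows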

Uniform convergence theorem statement[1]


If H is a set of {0,1}-valued functions defined on a set X and P is a probability distribution on X, then for ε > 0 and m a positive integer, we have

$P^m\{\, |Q_P(h) - \widehat{Q}_x(h)| \ge \varepsilon \text{ for some } h \in H \,\} \le 4\,\Pi_H(2m)\, e^{-\varepsilon^2 m/8},$

where, for any $x \in X^m$, $Q_P(h) = P\{ y \in X : h(y) = 1 \}$, $\widehat{Q}_x(h) = \frac{1}{m}\,|\{ i : 1 \le i \le m,\ h(x_i) = 1 \}|$, and $P^m$ indicates that the probability is taken over x consisting of m i.i.d. draws from the distribution P. For any set H of {0,1}-valued functions over X and any finite subset D of X, the restriction of H to D is $\Pi_H(D) = \{ h \cap D : h \in H \}$ (identifying each h with the subset of X on which it takes the value 1), and for any natural number m the shattering number is defined as $\Pi_H(m) = \max\{\, |\Pi_H(D)| : D \subseteq X,\ |D| = m \,\}$.

From the point of view of Learning Theory one can consider H to be the Concept/Hypothesis class defined over the instance set X. Before getting into the details of the proof of the theorem we will state Sauer's Lemma, which we will need in our proof.

Sauer's lemma:[2] [3]


Sauer's Lemma relates the shattering number $\Pi_H(m)$ to the VC dimension.

Lemma: $\Pi_H(m) \le \sum_{i=0}^{d} \binom{m}{i}$, where d is the VC dimension of the concept class H.

Corollary: $\Pi_H(m) = O(m^d)$; in particular, $\Pi_H(m) \le (em/d)^d$ for m ≥ d ≥ 1, so the shattering number grows only polynomially in m when the VC dimension is finite.
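For a concrete class, the growth function can be computed by brute force and compared against Sauer's bound; the helpers below are illustrative and use the same threshold predicates as in the earlier simulation (VC dimension 1).

from itertools import combinations
from math import comb

def growth(points, hypotheses):
    """Largest number of distinct labelings induced on any m-point subset, for each m."""
    out = []
    for m in range(1, len(points) + 1):
        best = 0
        for D in combinations(points, m):
            labelings = {tuple(h(y) for y in D) for h in hypotheses}
            best = max(best, len(labelings))
        out.append(best)
    return out

domain = list(range(8))
H = [(lambda y, t=t: int(y <= t)) for t in range(-1, 8)]   # thresholds, including the all-zero predicate
d = 1                                                      # VC dimension of the threshold class

for m, pi in enumerate(growth(domain, H), start=1):
    sauer = sum(comb(m, i) for i in range(d + 1))          # Sauer: Pi_H(m) <= sum_{i<=d} C(m, i)
    assert pi <= sauer
    print(m, pi, sauer)                                    # here Pi_H(m) = m + 1, meeting the bound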

Proof of uniform convergence theorem [1]


Before we get into the details of the proof of the Uniform Convergence Theorem we will present a high level overview of the proof.

1. Symmetrization: We transform the problem of analyzing $|Q_P(h) - \widehat{Q}_r(h)| \ge \varepsilon$ into the problem of analyzing $|\widehat{Q}_r(h) - \widehat{Q}_s(h)| \ge \varepsilon/2$, where r and s are i.i.d. samples of size m drawn according to the distribution P. One can view r as the original randomly drawn sample of length m, while s may be thought of as the testing sample which is used to estimate $Q_P(h)$.

2. Permutation: Since r and s are picked identically and independently, swapping elements between them will not change the probability distribution on r and s. So, we will try to bound the probability of $|\widehat{Q}_r(h) - \widehat{Q}_s(h)| \ge \varepsilon/2$ for some h ∈ H by considering the effect of a specific collection of permutations of the joint sample x = r||s. Specifically, we consider permutations σ of x which swap $x_i$ and $x_{m+i}$ in some subset of {1, 2, …, m}. The symbol r||s means the concatenation of r and s.

3. Reduction to a finite class: We can now restrict the function class H to the fixed joint sample and hence, if H has finite VC dimension, the problem reduces to one involving a finite function class.

We present the technical details of the proof.

Symmetrization

Lemma: Let $V = \{ r \in X^m : |Q_P(h) - \widehat{Q}_r(h)| \ge \varepsilon \text{ for some } h \in H \}$ and $R = \{ (r, s) \in X^m \times X^m : |\widehat{Q}_r(h) - \widehat{Q}_s(h)| \ge \varepsilon/2 \text{ for some } h \in H \}$. Then for $m \ge 2/\varepsilon^2$, $P^m(V) \le 2\,P^{2m}(R)$.

Proof: By the triangle inequality, if $|Q_P(h) - \widehat{Q}_r(h)| \ge \varepsilon$ and $|Q_P(h) - \widehat{Q}_s(h)| \le \varepsilon/2$ then $|\widehat{Q}_r(h) - \widehat{Q}_s(h)| \ge \varepsilon/2$. Therefore,

$P^{2m}(R) \ge P^{2m}\{\, |Q_P(h) - \widehat{Q}_r(h)| \ge \varepsilon \text{ and } |Q_P(h) - \widehat{Q}_s(h)| \le \varepsilon/2 \text{ for some } h \in H \,\}$
$= \int_V P^m\{\, s : |Q_P(h) - \widehat{Q}_s(h)| \le \varepsilon/2 \,\}\, dP^m(r)$   [since r and s are independent].

Now for r ∈ V fix an h such that $|Q_P(h) - \widehat{Q}_r(h)| \ge \varepsilon$. For this h, we shall show that

$P^m\{\, |Q_P(h) - \widehat{Q}_s(h)| \le \varepsilon/2 \,\} \ge 1/2.$

Notice that $m\,\widehat{Q}_s(h)$ is a binomial random variable with expectation $m\,Q_P(h)$ and variance $m\,Q_P(h)(1 - Q_P(h)) \le m/4$. By Chebyshev's inequality we get

$P^m\{\, |Q_P(h) - \widehat{Q}_s(h)| > \varepsilon/2 \,\} \le \frac{m/4}{(\varepsilon m/2)^2} = \frac{1}{\varepsilon^2 m} \le \frac{1}{2}$

for the mentioned bound on m. Here we use the fact that $x(1 - x) \le 1/4$ for $x \in [0, 1]$. Thus $P^{2m}(R) \ge \tfrac{1}{2} P^m(V)$, and hence we perform the first step of our high level idea.

Permutations

Let $\Gamma_m$ be the set of all permutations of {1, 2, …, 2m} that swap i and m + i in some subset of {1, 2, …, m}.

Lemma: Let R be any subset of $X^{2m}$ and P any probability distribution on X. Then

$P^{2m}(R) = E\big[\Pr[\sigma(x) \in R]\big] \le \max_{x \in X^{2m}} \Pr[\sigma(x) \in R],$

where the expectation is over x chosen according to $P^{2m}$, and the probability is over σ chosen uniformly from $\Gamma_m$.

Proof: For any $\sigma \in \Gamma_m$,

$P^{2m}\{ x : x \in R \} = P^{2m}\{ x : \sigma(x) \in R \}$

[since coordinate permutations preserve the product distribution $P^{2m}$]. Therefore,

$P^{2m}(R) = \frac{1}{|\Gamma_m|} \sum_{\sigma \in \Gamma_m} P^{2m}\{ x : \sigma(x) \in R \} = E\big[\Pr[\sigma(x) \in R]\big] \le \max_{x \in X^{2m}} \Pr[\sigma(x) \in R]$

[because $\Gamma_m$ is finite]. The maximum is guaranteed to exist since there is only a finite set of values that the probability under a random permutation can take.

Reduction to a finite class

Lemma: Basing on the previous lemma,

$\max_{x \in X^{2m}} \Pr[\sigma(x) \in R] \le 2\,\Pi_H(2m)\, e^{-\varepsilon^2 m/8}.$

Proof: Fix $x = (x_1, x_2, \ldots, x_{2m})$ and let t be the number of distinct restrictions of functions in H to x, which is at most $\Pi_H(2m)$. This means there are functions $h_1, h_2, \ldots, h_t$ in H such that for any h ∈ H there is an i between 1 and t with $h_i(x_k) = h(x_k)$ for $1 \le k \le 2m$. Hence if we define $w^i_j = 1$ if $h_i(x_j) = 1$ and $w^i_j = 0$ otherwise, for $1 \le j \le 2m$, we have that $\sigma(x) \in R$ iff for some i in {1, …, t} the permuted sample satisfies

$\left| \frac{1}{m} \sum_{k=1}^{m} \bigl( w^i_{\sigma(k)} - w^i_{\sigma(m+k)} \bigr) \right| \ge \frac{\varepsilon}{2}.$

By the union bound we get

$\Pr[\sigma(x) \in R] \le t \cdot \max_i \Pr\!\left[ \left| \frac{1}{m} \sum_{k=1}^{m} \bigl( w^i_{\sigma(k)} - w^i_{\sigma(m+k)} \bigr) \right| \ge \frac{\varepsilon}{2} \right].$

Since the distribution over the permutations σ is uniform, for each k the difference $w^i_{\sigma(k)} - w^i_{\sigma(m+k)}$ equals $\pm\,|w^i_k - w^i_{m+k}|$, and both possibilities are equally likely, independently over k. By Hoeffding's inequality, the probability on the right is at most $2 e^{-\varepsilon^2 m/8}$, so $\Pr[\sigma(x) \in R] \le 2\,\Pi_H(2m)\, e^{-\varepsilon^2 m/8}$.

Finally, combining all the three parts of the proof we get the Uniform Convergence Theorem: $P^m(V) \le 2\,P^{2m}(R) \le 2 \max_{x} \Pr[\sigma(x) \in R] \le 4\,\Pi_H(2m)\, e^{-\varepsilon^2 m/8}$.


References
[1] Martin Anthony, Peter L. Bartlett. Neural Network Learning: Theoretical Foundations, pp. 46–50. First Edition, 1999. Cambridge University Press, ISBN 0-521-57353-X (http://books.google.com/books?id=OiSJYwp4lzYC&dq=neural+network+learning+theoretical+foundations&printsec=frontcover&source=bn&hl=en&ei=kF32SZDNG8TgtgeXxpSfDw&sa=X&oi=book_result&ct=result&resnum=4)
[2] Sham Kakade and Ambuj Tewari, CMSC 35900 (Spring 2008) Learning Theory, Lecture 11 (http://ttic.uchicago.edu/~tewari/lectures/lecture11.pdf)
[3] Léon Bottou (Dec 2010), Vapnik and Chervonenkis proved their lemma long before Sauer (http://leon.bottou.org/news/vapnik-chervonenkis_proved_their_lemma_long_before_sauer)

Vexillary permutation
In mathematics, a vexillary permutation is a permutation μ of the positive integers containing no subpermutation isomorphic to the permutation (2143); in other words, there do not exist four numbers i < j < k < l with μ(j) < μ(i) < μ(l) < μ(k). They were introduced by Lascoux and Schützenberger (1982, 1985). The word "vexillary" means flag-like, and comes from the fact that vexillary permutations are related to flags of modules. Guibert, Pergola & Pinzani (2001) showed that vexillary involutions are enumerated by Motzkin numbers.
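Both the pattern condition and the Motzkin-number enumeration can be checked by brute force for small n; the helpers below are illustrative, with the Motzkin numbers computed from their standard recurrence.

from itertools import combinations, permutations
from functools import lru_cache

def is_vexillary(p):
    """True if the permutation p (a tuple of values 1..n) avoids the pattern 2143."""
    for i, j, k, l in combinations(range(len(p)), 4):
        if p[j] < p[i] < p[l] < p[k]:
            return False
    return True

@lru_cache(maxsize=None)
def motzkin(n):
    # M_0 = M_1 = 1,  M_n = M_{n-1} + sum_{i=0}^{n-2} M_i M_{n-2-i}
    if n <= 1:
        return 1
    return motzkin(n - 1) + sum(motzkin(i) * motzkin(n - 2 - i) for i in range(n - 1))

for n in range(1, 8):
    involutions = [p for p in permutations(range(1, n + 1))
                   if all(p[p[i] - 1] == i + 1 for i in range(n))]
    count = sum(1 for p in involutions if is_vexillary(p))
    assert count == motzkin(n)
    print(n, count)   # 1, 2, 4, 9, 21, 51, 127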

References
Guibert, O.; Pergola, E.; Pinzani, R. (2001), "Vexillary involutions are enumerated by Motzkin numbers", Annals of Combinatorics 5 (2): 153–174, doi:10.1007/PL00001297, ISSN 0218-0006, MR 1904383
Lascoux, Alain; Schützenberger, Marcel-Paul (1982), "Polynômes de Schubert", Comptes Rendus des Séances de l'Académie des Sciences. Série I. Mathématique 294 (13): 447–450, ISSN 0249-6291, MR 660739
Lascoux, Alain; Schützenberger, Marcel-Paul (1985), "Schubert polynomials and the Littlewood–Richardson rule", Letters in Mathematical Physics. A Journal for the Rapid Dissemination of Short Contributions in the Field of Mathematical Physics 10 (2): 111–124, doi:10.1007/BF00398147, ISSN 0377-9017, MR 815233
Macdonald, I.G. (1991b), Notes on Schubert polynomials [1], Publications du Laboratoire de combinatoire et d'informatique mathématique, 6, Laboratoire de combinatoire et d'informatique mathématique (LACIM), Université du Québec à Montréal, ISBN 978-2-89276-086-6

References
[1] http://books.google.com/books?id=BvLuAAAAMAAJ

Virtual knot

In knot theory, a virtual knot is a generalization of the classical idea of knots in several ways that are all equivalent, introduced by Kauffman (1999).

Overview
In the theory of classical knots, knots can be considered equivalence classes of knot diagrams under the Reidemeister moves. Likewise a virtual knot can be considered an equivalence class of virtual knot diagrams that are equivalent under generalized Reidemeister moves. A virtual knot diagram is a 4-valent planar graph, but each vertex is now allowed to be a classical crossing or a new type called virtual. The generalized moves show how to manipulate such diagrams to obtain an equivalent diagram; one move called the semi-virtual move involves both classical and virtual crossings, but all the other moves involve only one variety of crossing.

A classical knot can also be considered an equivalence class of Gauss diagrams under certain moves coming from the Reidemeister moves. Not all Gauss diagrams are realizable as knot diagrams, but by considering all equivalence classes of Gauss diagrams we obtain virtual knots.

A classical knot can be considered an ambient isotopy class of embeddings of the circle into a thickened 2-sphere. This can be generalized by considering such classes of embeddings into thickened higher-genus surfaces. This is not quite what we want since adding a handle to a (thick) surface will create a higher-genus embedding of the original knot. The adding of a handle is called stabilization and the reverse process destabilization. Thus a virtual knot can be considered an ambient isotopy class of embeddings of the circle into thickened surfaces with the equivalence given by (de)stabilization.

Some basic theorems relating classical and virtual knots:
- If two classical knots are equivalent as virtual knots, they are equivalent as classical knots.
- There is an algorithm to determine if a virtual knot is classical.
- There is an algorithm to determine if two virtual knots are equivalent.

References
Kauffman, Louis H. (1999). "Virtual knot theory" [1]. European Journal of Combinatorics 20 (7): 663–690. doi:10.1006/eujc.1999.0314. ISSN 0195-6698. MR 1721925
Louis Kauffman; Vassily Olegovich Manturov (2005). "Virtual Knots and Links". arXiv:math.GT/0502014 [math.GT].
Manturov, Vassily; Knot Theory; CRC Press, 2004; ISBN 0415310016, 9780415310017; 400 pages [2]

External links
A Table of Virtual Knots [3] Virtual Knots and Infinite-Dimensional Lie Algebras [4] Elementary explanation with diagrams [5]

References
[1] http://www.math.uic.edu/~kauffman/VKT.pdf
[2] http://books.google.com/books?id=juWWdGztaUEC&pg=PA308&lpg=PA308&dq=Virtual+knots&source=bl&ots=gdy-M2Nwmc&sig=7HXjWh-mqSDnM6-oUnulCySWc1c&hl=en&ei=nolNSvXnCZqstgf7_PGuBA&sa=X&oi=book_result&ct=result&resnum=1
[3] http://www.math.toronto.edu/~drorbn/Students/GreenJ/
[4] http://www.springerlink.com/content/l6532661624rn43r/
[5] http://www.esotericka.org/cmc/vknots.html

Weighing matrix

In mathematics, a weighing matrix W of order n with weight w is an n × n (0, 1, −1)-matrix such that $W W^T = w I$. A weighing matrix is also called a weighing design. For convenience, a weighing matrix of order n and weight w is often denoted by W(n, w). A W(n, n − 1) is equivalent to a conference matrix and a W(n, n) is an Hadamard matrix.

Some properties are immediate from the definition:
- The rows are pairwise orthogonal.
- Each row and each column has exactly w non-zero elements.
- $W^T W = w I$, since the definition means that $W^{-1} = w^{-1} W^T$ (assuming the weight is not 0).

Example of W(2, 2):

$\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$
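The defining condition is easy to verify mechanically; the sketch below checks it for the W(2, 2) above and for one illustrative, hand-picked W(4, 3) (the matrices and helper are my own choices for this example).

def is_weighing_matrix(W, w):
    """Check that W is a square (0, 1, -1)-matrix with W W^T = w I."""
    n = len(W)
    if any(len(row) != n or any(v not in (-1, 0, 1) for v in row) for row in W):
        return False
    for i in range(n):
        for j in range(n):
            dot = sum(W[i][k] * W[j][k] for k in range(n))
            if dot != (w if i == j else 0):
                return False
    return True

W22 = [[1, 1],
       [1, -1]]                 # W(2, 2), a Hadamard matrix
W43 = [[0, 1, 1, 1],
       [-1, 0, 1, -1],
       [-1, -1, 0, 1],
       [-1, 1, -1, 0]]          # an example of a W(4, 3)

assert is_weighing_matrix(W22, 2)
assert is_weighing_matrix(W43, 3)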

The main question about weighing matrices is their existence: for which values of n and w does there exist a W(n,w)? A great deal about this is unknown. An equally important but often overlooked question about weighing matrices is their enumeration: for a given n and w, how many W(n,w)'s are there? More deeply, one may ask for a classification in terms of structure, but this is far beyond our power at present, even for Hadamard or conference matrices.

External links
On Hotelling's Weighing Problem [1], Alexander M. Mood, Ann. Math. Statist. Volume 17, Number 4 (1946), 432-446.

References
[1] http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aoms/1177730883

Wilf–Zeilberger pair

In mathematics, specifically combinatorics, a Wilf–Zeilberger pair, or WZ pair, is a pair of functions that can be used to certify certain combinatorial identities. In particular, WZ pairs are instrumental in the evaluation of many sums involving binomial coefficients, factorials, and in general any hypergeometric series. A function's WZ counterpart may be used to find an equivalent, and much simpler, sum. Although finding WZ pairs by hand is impractical in most cases, Gosper's algorithm provides a sure method to find a function's WZ counterpart, and can be implemented in a symbolic manipulation program.

Definition
Two functions, F and G, form a pair if and only if the following two conditions hold:

$F(n+1, k) - F(n, k) = G(n, k+1) - G(n, k)$,   and   $\lim_{k \to \pm\infty} G(n, k) = 0$.

Together, these conditions ensure that the sum $\sum_k F(n, k)$ does not depend on n, because the function G telescopes:

$\sum_k \bigl( F(n+1, k) - F(n, k) \bigr) = \sum_k \bigl( G(n, k+1) - G(n, k) \bigr) = \lim_{k \to \infty} G(n, k) - \lim_{k \to -\infty} G(n, k) = 0.$

Example
A Wilf–Zeilberger pair can be used to verify the identity

using the proof certificate

Define the following functions:

Now F and G will form a Wilf–Zeilberger pair:
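As a concrete illustration of the definition, the following sketch checks a simple, well-known WZ pair, the one certifying $\sum_k \binom{n}{k}/2^n = 1$ (chosen here purely as an example; it is not necessarily the pair discussed above):

from math import comb

def F(n, k):
    # F(n, k) = C(n, k) / 2^n, with the convention C(n, k) = 0 outside 0 <= k <= n.
    return comb(n, k) / 2**n if 0 <= k <= n else 0.0

def G(n, k):
    # G(n, k) = -C(n, k-1) / 2^(n+1), again zero outside the support of the binomial.
    return -comb(n, k - 1) / 2**(n + 1) if 1 <= k <= n + 1 else 0.0

for n in range(12):
    # WZ condition: F(n+1, k) - F(n, k) = G(n, k+1) - G(n, k) for every k.
    for k in range(-2, n + 4):
        assert abs((F(n + 1, k) - F(n, k)) - (G(n, k + 1) - G(n, k))) < 1e-12
    # Consequently the sum telescopes and is constant in n (here equal to 1).
    assert abs(sum(F(n, k) for k in range(n + 1)) - 1.0) < 1e-12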


References
Marko Petkovšek, Herbert Wilf and Doron Zeilberger (1996). A = B [1]. AK Peters. ISBN 1568810636.
Tefera, Akalu (2010), "What Is . . . a Wilf–Zeilberger Pair?" [2], AMS Notices 57 (04): 508–509.

External links
Gosper's algorithm [3] gives a method for generating WZ pairs when they exist. Generatingfunctionology [4] provides details on the WZ method of identity certification.

References
[1] http://www.math.upenn.edu/~wilf/AeqB.html
[2] http://www.ams.org/notices/201004/rtx100400508p.pdf
[3] http://www.pnas.org/cgi/reprint/75/1/40.pdf
[4] http://www.math.upenn.edu/~wilf/gfology2.pdf

Zero-sum problem
In number theory, zero-sum problems are a certain class of combinatorial questions. In general, a finite abelian group G is considered. The zero-sum problem for the integer n is the following: Find the smallest integer k such that every sequence of elements of G with length k contains n terms that sum to 0. In 1961 Paul Erdős, Abraham Ginzburg, and Abraham Ziv proved the general result for G = ℤ/nℤ (the integers mod n) that k = 2n − 1. Explicitly, this says that any multiset of 2n − 1 integers has a subset of size n the sum of whose elements is a multiple of n. This result is generally known as the EGZ theorem after its discoverers. More general results than this theorem exist, such as Olson's theorem, Kemnitz's conjecture (proved by Christian Reiher in 2003[1]), and the weighted EGZ theorem (proved by David J. Grynkiewicz in 2005[2]).
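The EGZ theorem, and the fact that 2n − 1 cannot be lowered, can be confirmed exhaustively for small n (a brute-force illustration; checking multisets of residues suffices, since the property depends only on residues and their multiplicities):

from itertools import combinations, combinations_with_replacement

def has_zero_sum_subset(seq, n):
    """Does the multiset seq contain n terms whose sum is divisible by n?"""
    return any(sum(c) % n == 0 for c in combinations(seq, n))

for n in range(2, 6):
    # EGZ: every multiset of 2n-1 residues mod n has such a subset ...
    assert all(has_zero_sum_subset(s, n)
               for s in combinations_with_replacement(range(n), 2 * n - 1))
    # ... and 2n-1 is optimal: n-1 zeros and n-1 ones form a counterexample of size 2n-2.
    assert not has_zero_sum_subset((0,) * (n - 1) + (1,) * (n - 1), n)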

References
[1] Reiher, Christian (2007), "On Kemnitz' conjecture concerning lattice-points in the plane", The Ramanujan Journal 13 (1–3): 333–337, doi:10.1007/s11139-006-0256-y.
[2] Grynkiewicz, D. J. (2006), "A Weighted Erdős–Ginzburg–Ziv Theorem", Combinatorica 26 (4): 445–453, doi:10.1007/s00493-006-0025-y.

External links
PlanetMath: Erdős, Ginzburg, Ziv Theorem (http://planetmath.org/encyclopedia/ErdHosGinzburgZivTheorem.html)
Sun, Zhi-Wei, "Covering Systems, Restricted Sumsets, Zero-sum Problems and their Unification" (http://math.nju.edu.cn/~zwsun/csz.htm)

Article Sources and Contributors

395

Article Sources and Contributors


Combinatorics Source: http://en.wikipedia.org/w/index.php?oldid=462576498 Contributors: -Ozone-, 128.163.251.xxx, 16@r, APH, Aajaja, AceMyth, Agatecat2700, Ahughes6, Alberto da Calvairate, Alexisastupidnoob, Alkarex, Almit39, Anclation, AndrewBuck, Aniu, Anonymous Dissident, Antandrus, Arcades, Arcfrk, Aresch, Arid Zkwelty, AxelBoldt, Bbukh, Bethnim, Betterusername, Bobblewik, Bobo192, BoomerAB, Bracton, Brad7777, Brianv, Btyner, CRGreathouse, Camipco, Capitalist, Carter, Causa sui, Centrx, Chadernook, Charles Matthews, Charvest, Chas zzz brown, Chochopk, Ciphers, CombAuc, Conversion script, Cowsandmilk, CrniBombarder!!!, CyborgTosser, DRLB, David Eppstein, Davidhand, Dbenbenn, Dbmag9, Deeptrivia, Delaszk, Devanshuhpandey, Dicklyon, DirkOliverTheis, Dkostic, DocWatson42, Doug Bell, DrBozzball, Dratman, Dryguy, Dysprosia, ELApro, EZio, Ejrh, Eran, Erodium, EuPhyte, Fedayee, Finell, Fparnon, FreplySpang, FurnaldHall, Furrykef, G716, Gatinha, Giftlite, GoCooL, Goatasaur, Goochelaar, GrEp, Graham87, Gryspnik, Gunnar Larsson, Gwernol, Haham hanuka, Hairy Dude, Hermel, Heyzeuss, Hyacinth, Ianb, Igorpak, Ijgt, Indeed123, Infinity0, JRavn, Jagged 85, Japanese Searobin, Jersey Devil, JesseW, JohnBlackburne, Jumbuck, KSmrq, Kaicarver, Kensor, King Bee, Kitarak, Kku, Kmhkmh, Koko90, Kusma, Kvng, Lantonov, LarryLACa, Leonxlin, Lesonyrra, Lethe, Ling.Nut, LokiClock, Lowellian, M759, MER-C, Macarion, Magister Mathematicae, Malhonen, Malsaqer, Marc Venot, Marc van Leeuwen, Masnevets, MathMartin, Matthew Yeager, Mblumber, McKay, Mcarling, Mcld, Memodude, Mhym, Michael Hardy, Michael Slone, Mickster810, Mike2vil, Mondhir, Mormegil, Msh210, Mxn, Naddy, Nandesuka, Nassrat, Nathan11g, Nehalem, Nneonneo, Nonforma, Noumenon, Ntsimp, Nwbeeson, Oakwood, Oliphaunt, OnePt618, Paul August, Paul Richter, PaulTanenbaum, Pcb21, Peregrine981, Peter Alan McAllister, Peterven, Phoenixthebird, PrimeHunter, Ptrillian, R. S. Shaw, RMFan1, Rajathsbhat, Ralph Corderoy, Red Director, Rjwilmsi, Rks22, Robin klein, Rocastelo, RodC, Rodney Topor, Ron B. Thomson, RyanEberhart, S2000magician, Sam Derbyshire, SchuminWeb, Shahab, ShelfSkewed, Sisodia, Some P. Erson, Sourabh Katagade, Spoon!, Stack, Stevertigo, Sticky Parkin, Symane, Taxipom, Teutanic, Thegeneralguy, Thehotelambush, Thingg, Thore Husfeldt, Timir2, Tiptoety, Tjdw, Tocharianne, Tom harrison, TomJF, Tomthecool, Traroth, Ttzz, Ultra two, Ultramarine, Urdutext, Vadimvadim, Viz, Vyznev Xnebara, Wafulz, Wantnot, Wcherowi, Wfaulk, Will Orrick, Willking1979, Woohookitty, Zahlentheorie, Zarvok, Zaslav, Zudu29, Zundark, , 253 anonymous edits Index of combinatorics articles Source: http://en.wikipedia.org/w/index.php?oldid=461171081 Contributors: Alan Liefting, Marc van Leeuwen, Michael Hardy, Minnecologies, Nbarth, Quantling, Sole Soul, Tobias Bergemann, 2 anonymous edits Outline of combinatorics Source: http://en.wikipedia.org/w/index.php?oldid=456630286 Contributors: Arch dude, AxelBoldt, Charles Matthews, CombFan, CyborgTosser, D6, David Eppstein, Dbenbenn, Dcoetzee, Dirac1933, Dominus, Elkman, Fplay, Garyzx, Herbee, Hermel, Icairns, Ixfd64, JYOuyang, JamesAM, JdH, Jeff3000, Jennifer seberry, Jks, Jon Awbrey, Jowa fan, Kevyn, Kope, Magicus69, Mhym, Michael Hardy, Minnecologies, Neparis, Nils Grimsmo, Niteowlneils, NuclearWarfare, PWilkinson, Paul August, Peterkwok, Quiddity, Qxz, Rajsekar, Stebulus, Steinsky, Svick, The Transhumanist, Thivierr, TimBentley, Trovatore, Ttzz, Vatter, Wile E. 
Heresiarch, Will Orrick, ZeroOne, 8 anonymous edits 3-dimensional matching Source: http://en.wikipedia.org/w/index.php?oldid=450554506 Contributors: Altenmann, Andreas Kaufmann, Bender2k14, Miym, RJFJR, Sbjesse, Yewang315, 14 anonymous edits AanderaaKarpRosenberg conjecture Source: http://en.wikipedia.org/w/index.php?oldid=458087669 Contributors: Bender235, Coppertwig, David Eppstein, Dekart, Eeekster, Giftlite, GirasoleDE, Michael Hardy, RobinK, Thore Husfeldt, Tobias Bergemann, Yewang315, 1 anonymous edits Algorithmic Lovsz local lemma Source: http://en.wikipedia.org/w/index.php?oldid=448839191 Contributors: 3mta3, Andreas Kaufmann, Bkell, David Eppstein, Delsenor, Doradus, Giftlite, Gregbard, Headbomb, Know2info, Michael Hardy, RobinK, 5 anonymous edits All-pairs testing Source: http://en.wikipedia.org/w/index.php?oldid=461958596 Contributors: Ash, Ashwin palaparthi, Bookworm271, Brandon, Capricorn42, Chris Pickett, Cmdrjameson, Erkan Yilmaz, Jeremy Reeder, Kjtobo, LuisCavalheiro, MER-C, Melcombe, MrOllie, Pinecar, Qwfp, Raghu1234, Rajushalem, Regancy42, Rexrange, Rstens, RussBlau, SteveLoughran, Tassedethe, 51 anonymous edits Analytic combinatorics Source: http://en.wikipedia.org/w/index.php?oldid=459074753 Contributors: (:Julien:), Brad7777, Charvest, CyborgTosser, FractalFusion, JohnBlackburne, Lantonov, Maksim-e, Marc van Leeuwen, Michael Hardy, Michael Slone, Oleg Alexandrov, Shreevatsa, Silverfish, Vanish2, 2 anonymous edits Arctic circle theorem Source: http://en.wikipedia.org/w/index.php?oldid=457670669 Contributors: Gregbard, R.e.b., Sodin Arrangement of hyperplanes Source: http://en.wikipedia.org/w/index.php?oldid=447023647 Contributors: Arcfrk, ArnoldReinhold, Charles Matthews, David Eppstein, Edward, Joriki, Josephorourke, Justin W Smith, Kiefer.Wolfowitz, Linas, Masnevets, Michael Hardy, Michael Slone, Passargea, Rgrimson, Tizio, Tosha, Ttzz, Waltpohl, Zaslav, Zhw, 5 anonymous edits Baranyai's theorem Source: http://en.wikipedia.org/w/index.php?oldid=455867079 Contributors: David Eppstein, Giftlite, Kope, Michael Hardy, Sodin, TheAMmollusc Barycentric-sum problem Source: http://en.wikipedia.org/w/index.php?oldid=385545643 Contributors: Againme, Bender2k14, CiTrusD, Michael Hardy, Oordaz, Sprachpfleger Bent function Source: http://en.wikipedia.org/w/index.php?oldid=451501275 Contributors: Anonymous Dissident, David Eppstein, Gzorg, Lipedia, Marozols, Ntsimp, Rich Farmbrough, Rjwilmsi, Will Orrick Binomial coefficient Source: http://en.wikipedia.org/w/index.php?oldid=464043346 Contributors: .:Ajvol:., 137.112.129.xxx, 3ICE, A. 
Pichler, Altenmann, Amit man, Anonymous Dissident, Atlastawake, Av, AxelBoldt, Basploeger, Bo Jacoby, Boaex, BosRedSox13, Bsskchaitanya, Btyner, CRGreathouse, Calculuslover, CarloWood, Catapult, Cdang, Charles Matthews, Cherkash, Classicalecon, Cometstyles, CommandoGuard, Conversion script, Cornince, Cryptography project, DAGwyn, DVD R W, Danski14, David Eppstein, Dcoetzee, Dejan Jovanovi, DejanSpasov, Denelson83, Devoutb3nji, Don4of4, Doug Bell, DrBob, Duoduoduo, Dysprosia, E rulez, Ebony Jackson, Edemaine, Eleuther, Emul0c, Endlessoblivion, Eric119, Excelsiorfireblade, Ferkel, Fredrik, Fresheneesz, Fropuff, Gauge, Ghazer, Giftlite, Graham87, Gzorg, Hashar, Hawthorn, Hede2000, Hierakares, Himynameisbrak, Indianaed, Isilando2, Jim Sukwutput, Jim.belk, Jitse Niesen, Jmabel, Jobu0101, Joey96, Josh3580, Jrvz, Jusdafax, Jkullmar, KSmrq, Kaimbridge, Karl Stroetmann, Kestrelsummer, Keta, King Bee, Kneufeld, Knightry, LakeHMM, Lambiam, Lantonov, Law, Lclem, Linas, Ling.Nut, Llama320, Llamabr, Luke Gustafson, Mabuhelwa, Macrakis, Madmath789, Magioladitis, Magister Mathematicae, Mangojuice, Marc van Leeuwen, Maxal, Maxwell helper, Mboverload, Mcld, Mcmlxxxi, Mhym, Michael Hardy, Michael Slone, Mikeblas, Mormegil, Nbarth, Ninly, Nk, Oleg Alexandrov, Oliverknill, Ondra.pelech, Orangutan, Ott2, PMajer, Paolo.dL, Patrick, Paul August, PaulTanenbaum, Pfunk42, Pgc512, PhotoBox, Postxian, PrimeHunter, Quantling, Ragzouken, Rahence, Rhebus, Rich Farmbrough, Rponamgi, Sander123, Sanjaymjoshi, Scharan, Schmock, Sciyoshi, Shelandy, Shelbymoore3, Simoneau, Sjorford, Small potato, Spacepotato, Sreyan, Stebulus, Stellmach, Stpasha, TakuyaMurata, Tetracube, Thiago R Ramos, Timwi, Tomi, Twas Now, Uncle uncle uncle, Uncompetence, Vonkje, Voyagerfan5761, WardenWalk, Wavelength, Wellithy, XJamRastafire, Xanthoxyl, Ylloh, Zahlentheorie, Zalethon, Ziyuang, , 267 anonymous edits Block walking Source: http://en.wikipedia.org/w/index.php?oldid=309217825 Contributors: Mathemajor, Mboverload, Michael Hardy, Oleg Alexandrov, Paul A, TheHorse'sMouth, 3 anonymous edits Bondy's theorem Source: http://en.wikipedia.org/w/index.php?oldid=455732299 Contributors: Btyner, Giftlite, Headbomb, Jitse Niesen, Melcombe, Michael Hardy, Miym, RDBury, Retired username, RobinK, 2 anonymous edits BorsukUlam theorem Source: http://en.wikipedia.org/w/index.php?oldid=455582771 Contributors: Algebraist, AxelBoldt, BeteNoir, Charles Matthews, Charvest, Chrisahn, Conversion script, Crasshopper, Delaszk, Edemaine, Franp9am, Giftlite, Igorpak, Jhausauer, Makotoy, Matikkapoika, Michael Hardy, Nobi, PV=nRT, Paul August, Pred, Psychonaut, Rich Farmbrough, Rjwilmsi, Robin S, Robinh, Slinger, Sodin, Stebulus, TakuyaMurata, Zundark, 10 anonymous edits BruckRyserChowla theorem Source: http://en.wikipedia.org/w/index.php?oldid=457062077 Contributors: 4pq1injbok, Algebraist, Arthur Rubin, BeteNoir, Btyner, CBM, Charles Matthews, Charleswallingford, Giftlite, Jafeluv, Joe Decker, Lantonov, Maxal, Michael Slone, Nbarth, Psychonaut, RDBury, Silverfish, Waltpohl, Wcherowi, Will Orrick, Zaslav, 6 anonymous edits Butcher group Source: http://en.wikipedia.org/w/index.php?oldid=456599276 Contributors: A.K.Nole, ChildofMidnight, Giftlite, Headbomb, Jitse Niesen, Mathsci, Michael Hardy, Omnipaedista, PhGustaf, R.e.b., Roger wilco, Sawomir Biay, TotientDragooned, 2 anonymous edits CameronErds conjecture Source: http://en.wikipedia.org/w/index.php?oldid=456975650 Contributors: Alex Dainiak, Arthur Rubin, CRGreathouse, David Eppstein, Geometry guy, Giftlite, Grafen, 
Haseldon, Igorpak, Lantonov, Michael Hardy, Michael Slone, PaulHanson, Vanish2, Watson Ladd, 2 anonymous edits Catalan's constant Source: http://en.wikipedia.org/w/index.php?oldid=462909751 Contributors: AxelBoldt, Banus, CRGreathouse, Dicklyon, Dogah, Dysprosia, Ejrh, Filu, Fnielsen, Fredrik, Fropuff, Gaius Cornelius, Garion96, Giftlite, Haelix, Headbomb, Julian Birdbath, Lagerfeuer, Lantonov, Linas, Michael Hardy, Oleg Alexandrov, Oli Filth, PV=nRT, Pinazza, Plastikspork, Prumpf, Pt, R. J. Mathar, Rbk, Robo37, Scythe33, Slawekb, Sligocki, Tema, ThreePointOneFour, XJamRastafire, Zundark, 34 anonymous edits Chinese monoid Source: http://en.wikipedia.org/w/index.php?oldid=450911870 Contributors: Giftlite, Headbomb, Michael Hardy, R.e.b.

Article Sources and Contributors


Combination Source: http://en.wikipedia.org/w/index.php?oldid=464334083 Contributors: -Marcus-, Ann arbor street, Ashcatash5, Ashok567, AxelBoldt, Axiomsofchoice, Beatnick, Bizso, Bkumartvm, BlckKnght, Bloodsucker01, Bo Jacoby, Bongwarrior, Borgx, Burn, CarlosRC, Cherkash, Classicalecon, Closedmouth, Conversion script, CraftMaster2190, Cybercobra, Czhangrice, Dav2008, DerHexer, Dialectric, Dickguertin, Dirac1933, Doniago, Doug Bell, Druiffic, Eboomer, EdC, Emote, Evil Eccentric, Excirial, FF2010, Faisal.akeel, Fintler, Fresheneesz, Gary King, Geekygator, Giftlite, Glmory, Gogo Dodo, GregorB, Gkhan, Haklo, Hariva, Heero Kirashami, Hsore9, Iaen, Im.a.lumberjack, Iridescent, J.delanoy, Jfmantis, JohnBlackburne, Jojojlj, Jowa fan, Jwmcleod, Knakts, Koyao, Lambiam, Lantonov, LarryLACa, Liao, Lipedia, Lourakis, MER-C, Mabuhelwa, MacMed, Marc van Leeuwen, Mavaddat, Maxal, Messy, Michael Hardy, Michael Slone, Mike Rosoft, Mindmatrix, Mm4221, Myanw, Nbarth, Nicolaennio, Ninjagecko, NipplesMeCool, Nobar, Nuno Tavares, Obradovic Goran, Oleg Alexandrov, Oxymoron83, Paolo.dL, Patrick, Philip Trueman, Phoe6, Pmontu, Preth, Pysar, Radagast83, Rich Farmbrough, RoToRa, Romanm, Salix alba, Shelbymoore3, Shenme, Slimdee, Snoyes, Sonjaaa, Svetofor, Tawagoto, Technochocolate, Tehdiplomat, The Anome, TheBendster, Tiggerjay, TobiasBengtsson, Tocharianne, Trusilver, Urdutext, Veinor, Vinne2, Wikidrone, Wj32, Xavier75, YordanGeorgiev, Zacharie Grossen, Zhou Yu, , , , 193 anonymous edits Combinatorial class Source: http://en.wikipedia.org/w/index.php?oldid=360346137 Contributors: Charles Matthews, CyborgTosser, Michael Hardy, Sopoforic, Stebulus Combinatorial data analysis Source: http://en.wikipedia.org/w/index.php?oldid=361758419 Contributors: Bestiasonica, Delaszk Combinatorial explosion Source: http://en.wikipedia.org/w/index.php?oldid=420575372 Contributors: CBM, CharlesGillingham, Chillum, Curtdbz, Devourer09, Hairy Dude, Jossi, Ll0l00l, MarSch, Spiritia, The Anome, Vadmium, Vinney, WikHead, Xnn, 9 anonymous edits Combinatorial explosion (communication) Source: http://en.wikipedia.org/w/index.php?oldid=362301497 Contributors: Bestiasonica, Booyabazooka, CBM, Charles Matthews, Crenner, Dawynn, GTBacchus, Greenrd, J.delanoy, Kku, Michael Hardy, Midgley, Oleg Alexandrov, Robertvan1, The Anome, 3 anonymous edits Combinatorial hierarchy Source: http://en.wikipedia.org/w/index.php?oldid=451614594 Contributors: Amichael123, Charvest, Fabrictramp, Harper3505, Keithbowden, Meredyth, Michael Hardy, Ospalh, RunningClam, Wikifarzin, 3 anonymous edits Combinatorial number system Source: http://en.wikipedia.org/w/index.php?oldid=451161050 Contributors: Billmcn, Eraoul, Giftlite, JamesDmccaffrey, Jwmcleod, Lantonov, Lasloo, LeeHunter, Marc van Leeuwen, Michael Hardy, Ninjagecko, Oleg Alexandrov, Salix alba, Semifinalist, The Anome, Txen, Whpq, Zaslav, 12 anonymous edits Combinatorial principles Source: http://en.wikipedia.org/w/index.php?oldid=448845859 Contributors: Btyner, CBM2, David Eppstein, ED, Grutness, JRSpriggs, Lantonov, Leon math, Lord Bruyanovich Bruyanov, Michael Hardy, Paul August, Robertd, Silverfish, Stebulus, 1 anonymous edits Combinatorics and dynamical systems Source: http://en.wikipedia.org/w/index.php?oldid=436613618 Contributors: Charvest, David Eppstein, Michael Hardy, Tiptoety Combinatorics and physics Source: http://en.wikipedia.org/w/index.php?oldid=460458763 Contributors: Charvest, David Eppstein, Grubisch, Harper3505, Johnsopc, Korepin, Michael Hardy, R'n'B, 1 anonymous edits Composition (number 
theory) Source: http://en.wikipedia.org/w/index.php?oldid=447204660 Contributors: Akriasas, Armend, CRGreathouse, Charvest, Giftlite, HelenCowie, Henrygb, JustAGal, KSmrq, Lipedia, Marc van Leeuwen, Michael Hardy, Oleg Alexandrov, Paddles, Rheun, Shonk, Vince Vatter, Wolfling, Yrodro, 8 anonymous edits Constraint counting Source: http://en.wikipedia.org/w/index.php?oldid=453884289 Contributors: Hillman, Lantonov, Michael Slone, Oleg Alexandrov, Ryan Reich, 2 anonymous edits Corners theorem Source: http://en.wikipedia.org/w/index.php?oldid=456978668 Contributors: Geometry guy, Giftlite, Jowa fan, Kope, Michael Hardy Coupon collector's problem (generating function approach) Source: http://en.wikipedia.org/w/index.php?oldid=443946411 Contributors: Adriantam, Bearian, Dmitry123456, Haruth, Igorpak, Melcombe, Michael Hardy, Zahlentheorie, 2 anonymous edits Covering problem Source: http://en.wikipedia.org/w/index.php?oldid=424370294 Contributors: Charles Matthews, Ericbodden, Gaius Cornelius, Giftlite, Hermel, Jeremykemp, JonHarder, Michael Hardy, Mild Bill Hiccup, PV=nRT, RobinK, The Thing That Should Not Be, Ylloh Cycle index Source: http://en.wikipedia.org/w/index.php?oldid=462905789 Contributors: Abhask, Charles Matthews, Chris the speller, Dadre-ann, DrunkSquirrel, Edward, Frakturfreund, Greg Kuperberg, Lantonov, MFH, Marc van Leeuwen, Maxal, Mhym, Michael Hardy, Salix alba, SchfiftyThree, The Anome, Tyler, Zahlentheorie, 8 anonymous edits Cyclic order Source: http://en.wikipedia.org/w/index.php?oldid=460337461 Contributors: Charles Matthews, EmilJ, Headbomb, InverseHypercube, Melchoir, Michael Hardy, Msh210, Oleg Alexandrov, R'n'B, Siddhant, 1 anonymous edits De Arte Combinatoria Source: http://en.wikipedia.org/w/index.php?oldid=459200240 Contributors: Abiyoyo, Altenmann, Anarchia, AnonMoos, Banno, Cacophony, Charles Matthews, Erkan Yilmaz, Good Olfactory, Javirl, Khazar, MarkBuckles, MathMartin, Omnipaedista, Renamed user 4, Viriditas, Wallpaperit, 6 anonymous edits De Bruijn torus Source: http://en.wikipedia.org/w/index.php?oldid=405260397 Contributors: Darguz Parsilvan, Dranorter, Giftlite, Gigs, InverseHypercube, Jitse Niesen, Justin W Smith, Lambiam, Michael Hardy, Muhandes Delannoy number Source: http://en.wikipedia.org/w/index.php?oldid=395251455 Contributors: JocK, Lantonov, Michael Hardy, Robertd, Simon04 Dickson's lemma Source: http://en.wikipedia.org/w/index.php?oldid=449612488 Contributors: Baarslag, Charles Matthews, Dcoetzee, FF2010, Giftlite, Julien Tuerlinckx, Lantonov, Oleg Alexandrov, PhS, Phil Boswell, TELL ME that, 3 anonymous edits Difference set Source: http://en.wikipedia.org/w/index.php?oldid=434219566 Contributors: ErSLa, Guerongi, Lantonov, Masnevets, Maxal, Nbarth, Oleg Alexandrov, Pcnss01, Stdjmax, Timesuptim, 18 anonymous edits DIMACS Source: http://en.wikipedia.org/w/index.php?oldid=376243406 Contributors: David Eppstein, Hermitian, Jitse Niesen, Lantonov, Radak, Wikid77, 1 anonymous edits Dinitz conjecture Source: http://en.wikipedia.org/w/index.php?oldid=455866320 Contributors: Brenny, Btyner, Charles Matthews, Giftlite, Headbomb, Lantonov, MarSch, Maxal, Ntsimp, Pengo, Personman, Singularity, Sirmob, Sodin, Tomaxer, Vanish2, 3 anonymous edits Discrepancy theory Source: http://en.wikipedia.org/w/index.php?oldid=448731717 Contributors: Bender235, Brighterorange, Charles Matthews, Charvest, Chutzpan, David Eppstein, Don Braffitt, Eggwadi, Ewulp, Giftlite, GregorB, Justin W Smith, Khalid hassani, Michael Hardy, Nbarth, Omnipaedista, Sophus Bie, Vanish2, 
Vcelloho, Whitepaw, Wk muriithi, 14 , anonymous edits Discrete Morse theory Source: http://en.wikipedia.org/w/index.php?oldid=457074550 Contributors: Charvest, Delaszk, Gigacephalus, Jason Quinn, Michael Hardy, Sgt Pikachu5, Woood, 9 anonymous edits Disjunct matrix Source: http://en.wikipedia.org/w/index.php?oldid=455898299 Contributors: Ahughes6, Andreas Kaufmann, Bobmath, Gregbard, IgorCarron, Michael Hardy, Topbanana, 3 anonymous edits Dividing a circle into areas Source: http://en.wikipedia.org/w/index.php?oldid=460766807 Contributors: Andris, Antandrus, Avocado, BigrTex, Brad7777, Charles Matthews, Dearsubir, Eric Ng, Escape Orbit, Galexander, Lambiam, Lantonov, Lemontea, Linas, Melchoir, Michael Hardy, Pt, Quadell, Robinh, Siddhant, SkyWalker, Slashme, 26 anonymous edits Dobinski's formula Source: http://en.wikipedia.org/w/index.php?oldid=428703157 Contributors: Avraham, Borneq, CRGreathouse, Fermion, Infrangible, Lantonov, MarkSweep, Melcombe, Michael Hardy, Michael Slone, Pinazza, Pmanderson, Skittleys, The wub, Wumbojr144, Zaslav, 10 anonymous edits Domino tiling Source: http://en.wikipedia.org/w/index.php?oldid=450914570 Contributors: Arthur Rubin, Boffob, CBM, Containment, David Eppstein, Dino, Dominus, GaborPete, Giftlite, Headbomb, Jitse Niesen, John Reaves, Joseph Myers, Kilom691, Lantonov, Lunch, Mikeblas, Mikhail Dvorkin, Miym, Nonenmac, PrimeHunter, Propaniac, Qwfp, R.e.b., The Anome, Voorlandt, X7q, ZICO, 13 anonymous edits Enumerations of specific permutation classes Source: http://en.wikipedia.org/w/index.php?oldid=464969492 Contributors: Headbomb, Joel B. Lewis, RDBury, Rettetast, Rjwilmsi, Vince Vatter, 3 anonymous edits Equiangular lines Source: http://en.wikipedia.org/w/index.php?oldid=314959625 Contributors: Charles Matthews, J04n, Linas, Natalya, Ntsimp, RobinK, Skippy le Grand Gourou, Ttzz, 2 anonymous edits

396

Article Sources and Contributors


Erds conjecture on arithmetic progressions Source: http://en.wikipedia.org/w/index.php?oldid=442393374 Contributors: Arthur Rubin, Blmpxcvd, Bobamnertiopsis, CRGreathouse, Charles Matthews, Crisfilax, David Eppstein, Dominus, Gdr, Gene Nygaard, Giftlite, Jitse Niesen, Kirriemuir, Kope, Lantonov, Michael Hardy, Obli, PRHammond2001, Pmanderson, StevenDH, Syxiao, Tdoune, Yill577, 9 anonymous edits ErdsGraham problem Source: http://en.wikipedia.org/w/index.php?oldid=456981371 Contributors: Cewvero, Charles Matthews, Connelly, David Eppstein, Delirium, Dominus, Francos, Gene Nygaard, Geometry guy, Giftlite, Headbomb, Michael Hardy, Northeasternbeast, PL290, Peruvianllama, 2 anonymous edits ErdsFuchs theorem Source: http://en.wikipedia.org/w/index.php?oldid=271271702 Contributors: Giftlite, Michael Hardy, Vanish2 ErdsRado theorem Source: http://en.wikipedia.org/w/index.php?oldid=456981840 Contributors: Geometry guy, Headbomb, Kope, Michael Hardy, Tobias Bergemann, 3 anonymous edits European Journal of Combinatorics Source: http://en.wikipedia.org/w/index.php?oldid=453426452 Contributors: Guillaume2303, Kajervi, Lantonov, Michael Hardy, Taxipom, 2 anonymous edits Extremal combinatorics Source: http://en.wikipedia.org/w/index.php?oldid=459074970 Contributors: Bobathon71, Brad7777, Fubar Obfusco, Giftlite, Hermel, Jitse Niesen, Lantonov, Salix alba, Silverfish, 6 anonymous edits Factorial Source: http://en.wikipedia.org/w/index.php?oldid=463264167 Contributors: A. Pichler, Acer, Acimarol, Agreatnate, Agricola44, Ahoerstemeier, Alexandre Vassalotti, Alparmarta, Altenmann, Andres, Anonymous Dissident, Anton Mravcek, Arabic Pilot, ArglebargleIV, Arneth, ArnoldReinhold, Arphibagon, Aruton, Astronautics, Audiovideo, Autopilot, AxelBoldt, Bdesham, Ben pcc, Bender2k14, Blahm, Boltsman, Booyabazooka, BruceHodge, Bryan H Bell, Bubba73, Burgercat, CRGreathouse, Capitalist, Carlosguitar, CheekierMonkey, Chrispringle, Christian List, Ckatz, Colonies Chris, Connelly, Conversion script, Curlytop999, Daniel5Ko, Darkmeerkat, David Eppstein, Db099221, Dcluett, Dcoetzee, Deathphoenix, Denelson83, Dhp1080, Dirac66, Dmcq, Dogah, Dominus, Domitori, Dragohunter, Dude1818, Dysprosia, Ec-, EdC, Emperorbma, Eras-mus, Eric119, Eridani, Ernest lk lam, Eumeme, Eutactic, Evil Monkey, Excirial, FallenAngel, FatalError, Fibonacci, Fishcorn, Fredericksgary, Fredrik, Frencheigh, Fritzpoll, Furrykef, Garoto burns, GeneralCheese, Gesslein, Giftlite, Glenn L, Godden46, Goldencako, Google Child, Gruntler, H3llbringer, Haein45, Hairy Dude, Hannes Eder, Happy-melon, Hassan210360, Henrygb, Herbee, Hyacinth, Iamunknown, Icek, Idbelange, ImperatorExercitus, Indeed123, IronGargoyle, Isopropyl, Ixfd64, JB82, JWSchmidt, JabberWok, Jagun, JebeddiahSpringfield, Jezzabr, Jgoulden, Jiel.B, Jim.belk, Jleedev, JohnBlackburne, Jonathunder, Jonik, Jordaan12, Jumbuck, Justin W Smith, Jwmcleod, Kaimiddleton, Karl Palmen, Kbrose, Keith111, Keithcc, Kevin Baas, Kier07, King Bee, Knutux, Koolman2, Korax1214, Lambiam, Lantonov, Lhf, MSGJ, Magioladitis, Marc van Leeuwen, MarkSweep, Marquez, Mathacw, MattGiuca, Mattbuck, Matusz, Maximaximax, Mboverload, McKay, Mech Aaron, MegaSloth, Melchoir, Mets501, Michael Hardy, Michael Slone, Minesweeper, Misof, Mktos532, MoleculeUpload, Mon4, MrOllie, Mradam2008, Mrbowtie, Ms2ger, Nabla, Namaxwell, Necrid Master, NeonMerlin, NerdyScienceDude, Nicolas.Wu, Nikai, Nishantsah, Nitrolicious, Nitrxgen, Nniigeell, Nomet, Num Ref, Obradovic Goran, Octahedron80, Oldjackson, Oleg Alexandrov, PAR, Patrick, Paul August, Paul 
Niquette, Pde, Pek the Penguin, Piano non troppo, Pilover819, Pleasantville, Poor Yorick, Prari, PrimeFan, PrimeHunter, Qonnec, Quantling, Quaoar, Quest for Truth, Qwerty mac13, RaitisMath, Rhythm, Rich Farmbrough, Rob.derosa, Robo37, SGBailey, Sabbut, Salix alba, Saraghav, Schneelocke, Seb-Gibbs, Shawnhath, Shimgray, Slady, Spyswimmer33, Stevenj, Stevey7788, Stuart Morrow, Super Spider, Super-Magician, Sushi Tax, Sverdrup, Svick, Swpb, THEN WHO WAS PHONE?, TXiKi, TakuyaMurata, Taxman, Tels, Tgeairn, Thatoneguy, The Perfection, The Thing That Should Not Be, TheArcher, Thenewmathemagician, Thewhyman, Thingg, ThinkEnemies, Tim1988, TomViza, Tommy2010, Tomruen, Trusilver, Usien6, Vashistha.avinash, Vinicius Metal, Vorratt, Vvargoal, WFPM, Waggers, Wholmestu, Whywhenwhohow, Wik, Wikipelli, Wile E. Heresiarch, Wirkstoff, Wperdue, WriterHound, Wrs1864, XJamRastafire, Xario, Xenoglossophobe, Yamamoto Ichiro, Yarvin, Youandme, ZICO, Zaslav, Zundark, Zzedar, Zzyzx11, var Arnfjr Bjarmason, 487 , anonymous edits Factorial number system Source: http://en.wikipedia.org/w/index.php?oldid=457783999 Contributors: Asteron, Beland, Bosmon, CRGreathouse, Charles Matthews, Cybercobra, David Eppstein, Endpoint, Goochelaar, Grr82, Grutness, Henrygb, IvanLanin, JamesBWatson, JamesDmccaffrey, Jan Winnicki, Jim Mahoney, Jwmcleod, Lantonov, Lipedia, Ljrljr, MFH, Marc van Leeuwen, Michael Hardy, Michael Slone, Niteowlneils, Noe, Numerao, One Harsh, Rjwilmsi, Robo37, Siskus, The Anome, Tobias Bergemann, Tobycat, Torc2, Txen, Winnow, 35 anonymous edits Finite geometry Source: http://en.wikipedia.org/w/index.php?oldid=464681432 Contributors: 24soccer, Algebran, Asdfdsa, Brad7777, Charles Matthews, Charvest, CommonsDelinker, Cullinane, Dreadstar, Dysprosia, Elminster Aumar, Giftlite, GoingBatty, Greenfernglade, Hiperfelix, Jim.belk, JohnBlackburne, Joshuabowman, Kmhkmh, Kprateek88, Lipedia, Longhair, MarcelB612, Michael Hardy, Michael Slone, Msh210, Nbarth, Papa November, Patrick, Tatufan, Tedernst, Topbanana, Tosha, Uscitizenjason, Waltpohl, Wcherowi, 10 anonymous edits Finite topological space Source: http://en.wikipedia.org/w/index.php?oldid=448869981 Contributors: Bethnim, Brusegadi, CRGreathouse, David Eppstein, Fropuff, Headbomb, Mathematrucker, Mistory, Pawel8605, Tobias Bergemann, 1 anonymous edits FishburnShepp inequality Source: http://en.wikipedia.org/w/index.php?oldid=449623771 Contributors: Giftlite, Melcombe, Michael Hardy, R.e.b., Ser Amantio di Nicolao Free convolution Source: http://en.wikipedia.org/w/index.php?oldid=402491743 Contributors: Cayden Ryan, Debbah, Fabrictramp, Koppas, LilHelpa, M-le-mot-dit, Michael Hardy, Nikkimaria, Realkyhick, Shlyakht, 11 anonymous edits Fuzzy transportation Source: http://en.wikipedia.org/w/index.php?oldid=389037374 Contributors: Abductive, Ghatee, KathrynLybarger, Michael Hardy, NawlinWiki, Random Fixer Of Things, Sarah, Solidkuzma, 3 anonymous edits Generalized arithmetic progression Source: http://en.wikipedia.org/w/index.php?oldid=408956663 Contributors: Charles Matthews, Eternalblisss, Mets501, Michael Hardy, Nbarth, 2 anonymous edits Geometric combinatorics Source: http://en.wikipedia.org/w/index.php?oldid=459074863 Contributors: Brad7777, Charvest, Igorpak, JohnBlackburne, Michael Slone, 1 anonymous edits Glaisher's theorem Source: http://en.wikipedia.org/w/index.php?oldid=455766066 Contributors: Giftlite, Lantonov, Linas, Melchoir, Newprogressive, Robinh, Sodin, Vanish2, 1 anonymous edits Graph dynamical system Source: 
http://en.wikipedia.org/w/index.php?oldid=418085477 Contributors: Charvest, Docu, Eslip17, Harryboyles, Henning.Mortveit, Jim.belk, Jyoshimi, Mhym, Michael Hardy, 3 anonymous edits Group testing Source: http://en.wikipedia.org/w/index.php?oldid=455918227 Contributors: Banus, Bearcat, Decstop, Devanshuhpandey, Foobarnix, John of Reading, Katharineamy, Michael Hardy, Philip Trueman, S Marshall, Zeebtron, 8 anonymous edits History of combinatorics Source: http://en.wikipedia.org/w/index.php?oldid=459594923 Contributors: Colonies Chris, DarwinPeacock, Deeptrivia, Fratrep, Generalboss3, Headbomb, Igorpak, Indeed123, Joaquin008, KConWiki, Kww, Lenoxus, Mhym, PigFlu Oink, Qniemiec, 2 anonymous edits HuntMcIlroy algorithm Source: http://en.wikipedia.org/w/index.php?oldid=464130304 Contributors: 10nitro, Agathman, Bearcat, C. A. Russell, Daniel5Ko, Digwuren, Gonzonoir, Michael Hardy, Rich Farmbrough, Rjwilmsi, Ruud Koot, The Anome, Uzume, 3 anonymous edits Ideal ring bundle Source: http://en.wikipedia.org/w/index.php?oldid=453369581 Contributors: Bill william compton, Fabrictramp, Hans Adler, Michael Hardy, Mindmatrix, RJFJR, Twri, Incidence matrix Source: http://en.wikipedia.org/w/index.php?oldid=445103059 Contributors: Adrianwn, Aebdht, AugPi, BBB, Bobo192, Calle, Darktemplar, David Eppstein, EconoPhysicist, FelipeVargasRigo, Giftlite, Intangir, Kku, Kwvanderlinde, LOL, LachlanA, Lantonov, Linas, Lockeownzj00, Longhair, MathMartin, Michael Hardy, Michael Slone, Miym, Natalya, Oli Filth, Paul August, RJFJR, Rludlow, Sam Hocevar, Shahab, TakuyaMurata, Tamfang, Thesilverbail, Tomo, Tomruen, User A1, X7q, Zaslav, 33 anonymous edits Incidence structure Source: http://en.wikipedia.org/w/index.php?oldid=452144704 Contributors: Charles Matthews, Cobaltcigs, Cullinane, Davepape, David Eppstein, Giftlite, Gubbubu, Jon Awbrey, Koko90, Lambiam, Lantonov, Lipedia, Longhair, McKay, Michael Hardy, Michael Kinyon, Michael Slone, Nbarth, Papa November, Patrick, Paul August, Pgan002, Pkledgrape, Tomo, Tremlin, Twri, Wcherowi, Wdmatthews, Zaslav, 13 anonymous edits Independence system Source: http://en.wikipedia.org/w/index.php?oldid=392752123 Contributors: Bearcat, David Eppstein, Evilphoenix, Grumpfel, Malcolma, Michael Hardy, Oleg Alexandrov, Silverfish, 1 anonymous edits Infinitary combinatorics Source: http://en.wikipedia.org/w/index.php?oldid=459074833 Contributors: Bethnim, Brad7777, CBM, Charvest, David Eppstein, Differentiablef, Eastlaw, Giftlite, Harsimaja, Headbomb, Igorpak, Kope, Myasuda, Ntsimp, Pavel Jelinek, Popopp, R.e.b., Set theorist, Tobias Bergemann, Zundark, 3 anonymous edits Inversion (discrete mathematics) Source: http://en.wikipedia.org/w/index.php?oldid=455833919 Contributors: David Eppstein, Drbreznjev, Jim.belk, Joel B. Lewis, Kostmo, Lipedia, Mandsford, Maxal, Michael Hardy, Pnm, Uncle G, 5 anonymous edits

397

Article Sources and Contributors


Inversive plane Source: http://en.wikipedia.org/w/index.php?oldid=460992029 Contributors: Brad7777, David Eppstein, Evilbu, Michael Hardy, Shevello, Siddhant, Tosha, Undead warrior, Vanish2 Isolation lemma Source: http://en.wikipedia.org/w/index.php?oldid=448874347 Contributors: Headbomb, Michael Hardy, Shreevatsa Johnson scheme Source: http://en.wikipedia.org/w/index.php?oldid=453169454 Contributors: David Eppstein, Krptnght, Malcolma, Michael Hardy Josephus problem Source: http://en.wikipedia.org/w/index.php?oldid=461058516 Contributors: Adashiel, Amadaeus1010011010, Andre.holzner, Angel ivanov angelov, AnonMoos, Ashwinkumar b v, Begoon, Bluebusy, Bocianski, Cantons-de-l'Est, Chiok, Courtgoing, Craig Schamp, Crazytales, Crisfilax, Crowborough, Cybercobra, Eloy, Favonian, Fuzzyllama, Giftlite, Gleannnangealt, GregorB, Hairhorn, Hariva, Hebrides, Helopticor, Jogloran, John Shaffer, KOTEHOK, Lantonov, LcawteHuggle, MDMCP, Masterzora, Matthias Kupfer, Maxal, Michael Hardy, Mooncow, Mormegil, Napx, PL290, PV=nRT, Phil Boswell, Philip Trueman, Poliocretes, Ptrillian, Rachelskit, Radagast83, RobinK, Robinh, Sintau.tayua, Sycthos, Toddst1, Trovatore, Ty683g542, 77 anonymous edits Kalmanson combinatorial conditions Source: http://en.wikipedia.org/w/index.php?oldid=161180377 Contributors: Charles Matthews, David Eppstein, DavidCBryant, LordHowdy, Strangerer, 2 anonymous edits Kemnitz's conjecture Source: http://en.wikipedia.org/w/index.php?oldid=457010282 Contributors: Charles Matthews, David Eppstein, Geometry guy, Giftlite, Michael Hardy, Tomaxer Langford pairing Source: http://en.wikipedia.org/w/index.php?oldid=432017519 Contributors: David Eppstein, Epsilon0, Maxal, Michael Hardy Laver table Source: http://en.wikipedia.org/w/index.php?oldid=402421782 Contributors: CRGreathouse, Charles Matthews, Schneelocke, 1 anonymous edits Lehmer code Source: http://en.wikipedia.org/w/index.php?oldid=464588165 Contributors: 1exec1, EoGuy, Herix, Marc van Leeuwen, Michael Hardy, Tom Morris, 19 anonymous edits LindstrmGesselViennot lemma Source: http://en.wikipedia.org/w/index.php?oldid=431943435 Contributors: Bearcat, Gregbard, Michael Hardy, Sam Derbyshire, Stuartyeates List of factorial and binomial topics Source: http://en.wikipedia.org/w/index.php?oldid=448328201 Contributors: Cdang, Charles Matthews, D6, Dbenbenn, Fplay, Fredrik, Linas, Marc van Leeuwen, Michael Hardy, Paul August, Thibbs, ZeroOne, 3 anonymous edits LittlewoodOfford problem Source: http://en.wikipedia.org/w/index.php?oldid=455721689 Contributors: Charles Matthews, David Haslam, Dgies, Discospinster, Lantonov, Michael Hardy, Mon4, Oleg Alexandrov, PV=nRT, Sodin, Urhixidur, 6 anonymous edits Longest common subsequence problem Source: http://en.wikipedia.org/w/index.php?oldid=464153273 Contributors: Abednigo, Abhay Parvate, Adityasinghhhh, Altenmann, BRPXQZME, Bk314159, Booyabazooka, Breaddawson, CaliforniaRuby, CarlManaster, Cchantep, Charles Matthews, Colm Rice, Daf, David Eppstein, Dcoetzee, Decrease789, Dmcq, Dorftrottel, Drpaule, Flszen, Gerbrant, Giftlite, Glane23, Gracenotes, Hao2lian, Headbomb, Heavyrain2408, Jef41341, Jonathanmotes, Joshi1983, Kentare, Kirankjoseph, LOL, Lantonov, Ltickett, Luqui, M, Macrakis, McSly, Mceder, Michael Hardy, Michael Snow, MisterSheik, Nchcky19, Nils Grimsmo, Nils.grimsmo, Nkojuharov, Notpattison, Pearle, Pgimeno, Phicem, Poor Yorick, Popnose, PyroPi, RickOsborne, Rjwilmsi, Royalguard11, Ruud Koot, Svick, Tommy2010, Vikreykja, WatchAndObserve, Wolfkeeper, X7q, Yaris678, Zero0000, 111 anonymous edits 
Longest increasing subsequence Source: http://en.wikipedia.org/w/index.php?oldid=464153737 Contributors: BACbKA, Bender2k14, Casbah, Creidieki, David Eppstein, Dstary, Giftlite, Heysan, Istassiy, Justin W Smith, Lantonov, Mlhetland, Nikaggar, R.e.b., Ronz, Ruud Koot, September5th, Shmomuffin, Shreevatsa, Weizhang, Xdxter, 42 anonymous edits
Longest repeated substring problem Source: http://en.wikipedia.org/w/index.php?oldid=464154093 Contributors: Colonies Chris, Malcolma, Nils Grimsmo, Ruud Koot, Svick, TheMandarin, Tinucherian, 4 anonymous edits
Lottery mathematics Source: http://en.wikipedia.org/w/index.php?oldid=463099543 Contributors: Aaron Lehmann, AlexWangombe, Arcturus, BHC, Bluerasberry, ChaosR, Dafydd, Denisarona, Ehrenkater, Gap9551, Gjshisha, Hevamb, Infarom, Ishikawa Minoru, Jlcooke, Keycard, Kku, Kotjze, LottsoLuck, MachineFist, Melcombe, Mindmatrix, Mufka, New Thought, Nic bor, Novacatz, Oddbodz, Pearle, Peperamiarmy, Run54, Slakr, Tetzcatlipoca, Triponi, Wikipromedia, Woohookitty, 56 anonymous edits
Lovász local lemma Source: http://en.wikipedia.org/w/index.php?oldid=463143850 Contributors: 3mta3, Bkell, Charles Matthews, David Eppstein, Delsenor, EmilJ, FF2010, G.perarnau, Gauge, Giftlite, Glacialfox, Headbomb, Jamie King, Jocme, Justin W Smith, Kevinatilusa, Know2info, Michael Hardy, Oleg Alexandrov, Omaranto, Ott2, Thesilverbail, WikHead, 16 anonymous edits
Lubell–Yamamoto–Meshalkin inequality Source: http://en.wikipedia.org/w/index.php?oldid=447615895 Contributors: 3mta3, Charles Matthews, Creando, David Eppstein, Giftlite, Headbomb, MarkSweep, Michael Hardy, Peter Kwok, 10 anonymous edits
M. Lothaire Source: http://en.wikipedia.org/w/index.php?oldid=447618357 Contributors: Fudo, Headbomb, R.e.b.
Markov spectrum Source: http://en.wikipedia.org/w/index.php?oldid=456999046 Contributors: Charles Matthews, Fatal!ty, Giftlite, Michael Hardy, Quuxplusone, Sodin, WikiGrrrl, 3 anonymous edits
Meander (mathematics) Source: http://en.wikipedia.org/w/index.php?oldid=465048609 Contributors: Eequor, Fredrik, Henrygb, Jaredwf, Linas, Martín Oregón, Michael Hardy, PhotoBox, RDBury, Schneelocke, Tawker, 5 anonymous edits
Method of distinguished element Source: http://en.wikipedia.org/w/index.php?oldid=332110876 Contributors: CRGreathouse, Leon math, McKay, Melchoir, Michael Hardy, Oleg Alexandrov
Mnev's universality theorem Source: http://en.wikipedia.org/w/index.php?oldid=457003034 Contributors: Charles Matthews, LJosil, Michael Hardy, Peterwshor, Sodin, Tiphareth
Multi-index notation Source: http://en.wikipedia.org/w/index.php?oldid=441162366 Contributors: CESSMASTER, Cayamara, Charles Matthews, Deville, Dv82matt, F3et, Giftlite, Kbolino, Kusma, Lantonov, ObsessiveMathsFreak, Oleg Alexandrov, Peko (usurped), Quietbritishjim, Rich Farmbrough, Schmock, Sillybanana, Template namespace initialisation script, Vanka5, Zombiejesus, 8 anonymous edits
Natural density Source: http://en.wikipedia.org/w/index.php?oldid=447031022 Contributors: Bubba73, Burn, CRGreathouse, CaptainDalit, Charles Matthews, David Eppstein, Dominus, G man yo, Gandalf61, Giftlite, Haham hanuka, Jesusonfire, JumpDiscont, Kompik, Ladislav the Posthumous, Lambiam, Lantonov, Loadmaster, MarkC77, Matchups, Michael Hardy, Mon4, Oleg Alexandrov, PMajer, Phoenixrod, The Thing That Should Not Be, Vince elion fox, Wroscel, 24 anonymous edits
No-three-in-line problem Source: http://en.wikipedia.org/w/index.php?oldid=431187404 Contributors: Achim1999, Charles Matthews, DYLAN LENNON, David Eppstein, Eumolpo, Gdr, Lambiam, Lantonov, Matt Yeager, Mhym, N Shar, PV=nRT, Portalian, R. J. Mathar, Rich Farmbrough, Yangshuai, Yugsdrawkcabeht, 14 anonymous edits
Occupancy theorem Source: http://en.wikipedia.org/w/index.php?oldid=431677224 Contributors: Darksun, Derek dreery, Jitse Niesen, Matt Crypto, Michael Hardy, MidgleyC, Pontus, 4 anonymous edits
Ordered partition of a set Source: http://en.wikipedia.org/w/index.php?oldid=446411709 Contributors: Charles Matthews, Michael Hardy, Michael Slone, Nickj, Oleg Alexandrov, Patrick, 2 anonymous edits
Oriented matroid Source: http://en.wikipedia.org/w/index.php?oldid=460702182 Contributors: CBM, Charvest, Cybercobra, David Eppstein, Difu Wu, Dylan Thurston, Headbomb, Hypercube, Justin W Smith, Kiefer.Wolfowitz, LOL, Michael Hardy, Rjwilmsi, 2 anonymous edits
Partial permutation Source: http://en.wikipedia.org/w/index.php?oldid=369928280 Contributors: Arthur Rubin, Dranorter, EWikist, Michael Hardy
Partition (number theory) Source: http://en.wikipedia.org/w/index.php?oldid=465030059 Contributors: 4pq1injbok, Almit39, Anomalocaris, Arch dude, Archimedes100, Bender2k14, Bluebusy, Bruguiea, Burn, CRGreathouse, Charles Matthews, Chinasaur, CommonsDelinker, Daniel5Ko, David Eppstein, DependableSkeleton, EGetzler, Ebehn, El C, Eliasen, Fivemack, FractalFusion, GTBacchus, Gandalf61, Giftlite, Googleisdik, GromXXVII, Hannes Eder, Henrygb, Hillman, HorsePunchKid, Ilmari Karonen, Ixfd64, JNLII, JRSpriggs, Jason Quinn, Jed 20012, Jivecat, JoaquinFerrero, Joel B. Lewis, Joriki, Justin W Smith, Jwmcleod, Karun3kumar, Khunglongcon, KiruJiwak, Kku, Krishnachandranvn, Lambiam, Lantonov, Linas, Lipedia, Loopology, Macrakis, Marc van Leeuwen, Mathman99, Maxal, Merovingian, Mhym, Miaers, Michael Hardy, Michael Slone, Mild Bill Hiccup, Milogardner, Nonenmac, Oleg Alexandrov, Partedit, Philip Trueman, Phys, Pwlfong, R. J. Mathar, RDBury, Redgolpe, Richard L. Peterson, Robinh, Robo37, Sam Derbyshire, Shoeofdeath, Simetrical, Stevertigo, Tarquin, Tetracube, Thehebrewhammer, Timothy Clemans, Timrem, Vladimirdx, Wang ty87916, Wshun, Ylloh, Zdaugherty, 91 anonymous edits

Partition of a set Source: http://en.wikipedia.org/w/index.php?oldid=451498398 Contributors: Adam majewski, Anonymous Dissident, Armend, Arved, AxelBoldt, Bobrayner, Bunnyhop11, CRGreathouse, Calle, Capitalist, Charles Matthews, Corvi42, David Eppstein, El C, Elenseel, Elwikipedista, Fropuff, Gaius Cornelius, Giftlite, Gubbubu, Jamelan, Jiejunkong, Jon Awbrey, Kelly Martin, Kku, Lantonov, Laurentius, Lipedia, MFH, MathMartin, Mcld, Mennucc, Mhss, Mhym, Michael Hardy, Oleg Alexandrov, Patrick, Paul August, PaulTanenbaum, PhilHibbs, Pred, Revolver, Rob Zako, Ruakh, Salix alba, Sam Hocevar, Skippydo, Sopoforic, Stemonitis, Stereospan, StevenL, TedPavlic, That Guy, From That Show!, Tomo, Tsemii, Wshun, Zero0000, 33 anonymous edits
Pascal's rule Source: http://en.wikipedia.org/w/index.php?oldid=464496154 Contributors: Alex Nisnevich, Algbrico, Auntof6, Burn, Erodium, Giftlite, Histrion, Infinity0, Isilanes, Kbdank71, Linas, Loniousmonk, Magister Mathematicae, Michael Hardy, Michael Slone, Pacdude9, Paul August, PetaRZ, Rich Farmbrough, The wub, TutterMouse, 14 anonymous edits
Percolation Source: http://en.wikipedia.org/w/index.php?oldid=464765977 Contributors: 2over0, Antaeus Feldspar, Anton Petrov, Bakashi10, BazookaJoe, Berberisb, Biscuittin, Bjordan555, Bonadea, Brian0918, CedricVonck, DavidLevinson, Delirium, Dgrant, Docu, Doradus, Dr. Perfessor, E mraedarab, Edchi, Erianna, F.Shelley, Felix0411, Giftlite, Guy Harris, Heracles31, Iridescent, Jacopo Werther, Kingpin13, Lights, Linas, Mdd, Melaen, Mhym, Michael Hardy, Mikael Häggström, Mmru, Mwtoews, Neelix, Nunquam Dormio, Oda Mari, Oleg Alexandrov, PetaRZ, Pgreenfinch, Pinethicket, Possum, R.J.Oosterbaan, RTC, RadioFan, Rafti Institute, Rziff, SCriBu, Seba5618, Shell Kinney, Signalhead, Sitic, Snoyes, Steinsky, Stepa, THEMONKEYZIP, Tanvir Ahmmed, TexasAndroid, The Big Man, Uffish, Vonkje, Wikeepedian, Windchaser, 100 anonymous edits
Percolation theory Source: http://en.wikipedia.org/w/index.php?oldid=457106675 Contributors: Akriasas, Andy17null, Arretado, Beatnik8983, Bereziny, BillC, CRGreathouse, Charles Matthews, ChristophE, Cmdrjameson, Daniel.Larremore, David Eppstein, Dgrant, Dicklyon, Drilnoth, E mraedarab, Erzbischof, Felix0411, Gadykozma, Garion96, Giftlite, Hatch68, Ingolfson, Krasnoludek, LachlanA, Lantonov, Linas, Magmi, MathMartin, Megadeff, Mentifisto, Mhym, Michael Hardy, Midgley, Myasuda, O process, Pgreenfinch, Pionshivu, Pra1998, RUClimate, Rjwilmsi, Rziff, Stasyuha, Uffish, Wilsonthongwk, Yobmod, Zunaid, 62 anonymous edits
Perfect ruler Source: http://en.wikipedia.org/w/index.php?oldid=247786175 Contributors: GregorB, Ipso2, Lantonov, Rich Farmbrough, Ryan Reich, Tribaal, 3 anonymous edits
Permanent is sharp-P-complete Source: http://en.wikipedia.org/w/index.php?oldid=462410589 Contributors: Altenmann, Arthur Rubin, CRGreathouse, Charles Matthews, Cobaltcigs, David Eppstein, Deepmath, ForgeGod, Fuhghettaboutit, Giftlite, Hardmath, Jayron32, Joachim Selke, Karmastan, Kusma, Lantonov, Laudak, Macwhiz, Michael Hardy, Mukadderat, Nsk92, Omnieiunium, Ppadala, Shreevatsa, T. Canens, Thore Husfeldt, Torceval, Twri, Tznkai, Ylloh, 32 anonymous edits
Permutation Source: http://en.wikipedia.org/w/index.php?oldid=463598215 Contributors: .:Ajvol:., A. B., A. Pichler, Aaronchall, Abhishekbh, Alansohn, Albert0168, Alexander Chervov, AlphaPyro, Altenmann, Andre Engels, Angela, Anna1609, Anonymous Dissident, Archelon, Armend, Arved, AxelBoldt, AzaToth, B-Con, BRG, Beland, BenRG, Bender235, Bhatele, Bhound89, Bigblackdad, Bkumartvm, Blokhead, Bongwarrior, Boothy443, Borgx, Burn, CBM, CRGreathouse, Charles Matthews, ChevyC, Chris the speller, Citizen Premier, Ck lostsword, Classicalecon, Constructive editor, Conversion script, CountingPine, Courcelles, Cullinane, Curmi, Damian Yerrick, David Eppstein, Dbergan, Dcoetzee, Delldot, DerHexer, Desertsky85451, Dickguertin, Dicklyon, Dkasak, Dmcq, Dominus, DoubleAW, DoubleBlue, Dratman, Dreadstar, Dreftymac, Dubhe.sk, Dysprosia, Eco2009, Ed g2s, Edaelon, Elwikipedista, Emperorbma, Eric119, Eusebio42, Evercat, Extransit, FF2010, Fabiform, Faisal.akeel, Fresheneesz, Galoisprotege, Garde, Geekygator, Gerbrant, Giftlite, Goochelaar, Graeme Bartlett, Graham87, Haham hanuka, Haipa Doragon, Happy-melon, Hariva, Hedgehog83, Helder.wiki, Hfastedge, Ht686rg90, Hyacinth, Hydrogen Iodide, Iainscott, Ideyal, Ih8evilstuff, ImperatorExercitus, Insanity Incarnate, Ionescuac, J.delanoy, JEzratty, JackSchmidt, Jessemv, Jlaire, Jlau521, Joel B. Lewis, John Chamberlain, John wesley, Johnteslade, JonathLee, Jonpro, Jonverve, Joshk, K0rq, KSmrq, Kruusamgi, Kubigula, LOL, Lambiam, Lantonov, LilHelpa, Linas, Lipedia, Lunkwill, MFH, MSGJ, Mabuhelwa, Manik762007, Manscher, Marc van Leeuwen, Masterflex, Materialscientist, MaxEnt, Memming, Memodude, Michael Hardy, Michael Slone, Michael.Pohoreski, Mikez302, Mindmatrix, Minimac, Mmccoo, Moondyne, Mxn, NOrbeck, NYKevin, Nbarth, Nightkey, Nikai, Nishantsah, Nissanskyline923, Nk, Nolaiz, Nonette, NuclearWarfare, Obradovic Goran, Octahedron80, Oleg Alexandrov, Omnipaedista, Oobopshark, Patrick, Paul August, Pecunia, Pfortuny, Phil.a, Philip Trueman, Pikiwyn, Poor Yorick, Preslethe, Profvk, Ravibodake, ResearchRave, Revolver, Romanm, Saippuakauppias, Samikc, Sdornan, Shadowjams, Simetrical, Slady, Solarapex, Stevenj, Stone Pastor, Svick, Technochocolate, Tenth Plague, Terrek, The Anome, TheBendster, Tide rolls, Timwi, Tjclutten, Tlroche, Tocharianne, Tom harrison, Tom jgg, Tosha, Trusilver, Tsujigiri, Upendedappender, Vince Vatter, Wikidrone, Wikimachine, Wshun, Wzwz, Zero0000, 334 anonymous edits
Permutation pattern Source: http://en.wikipedia.org/w/index.php?oldid=464969745 Contributors: Bridgeplayer, Charles Matthews, Michael Albert, Michael Hardy, RDBury, TropicalCoder, Vince Vatter, 11 anonymous edits
Piecewise syndetic set Source: http://en.wikipedia.org/w/index.php?oldid=431648972 Contributors: (aeropagitica), BD2412, CRGreathouse, Charles Matthews, Crystallina, Darnedfrenchman, Delaszk, Giftlite, Landonproctor, Lantonov, Linas, Oleg Alexandrov, 3 anonymous edits
Pigeonhole principle Source: http://en.wikipedia.org/w/index.php?oldid=464356557 Contributors: .mau., Aaron Rotenberg, Afa86, Andeggs, Andris, Anonymous Dissident, Antidrugue, Army1987, B.d.mills, BL5965, Bandanna, BenFrantzDale, Bh3u4m, Btyner, Calair, Cargowiki, Ccwelt, Chaosthingy, Charles Matthews, Cmichael, Comperr, Constructive editor, Crasshopper, Cybercobra, Daverocks, Dcoetzee, Demonkey36, Discospinster, Dmmaus, Dookama, Doug, Dr.K., Dysprosia, Enderminh, Fbd, Furrykef, FvdP, Giftlite, Gonzonoir, Greenrd, Greentower, Grutness, Hippopha, Hotpanda, Hu12, Hydrostatics, Icairns, Idunno271828, Ipi31415, JJL, Jaapkroe, Jackcsk, Jeff560, Jibbley, Jleedev, Jmundo, JohnnieDanger, Johnuniq, Jonathan Grynspan, Kle0012, Lantonov, Leon math, Lord Bruyanovich Bruyanov, Loupeter, Manway, Marc van Leeuwen, MarkSweep, Maurice Carbonaro, McKay, Memorygap, Metaeducation, Michael Hardy, MichaelMaggs, Mister B., Monty845, Netan'el, Nevsan, NickW557, Nihiltres, Noosphere, Peak, Plastikspork, Pokemonaofei, Praefectorian, Pred, Protonk, Pwlfong, Que, RDBury, Reywas92, Rich Farmbrough, Rudinreader, Ryan Reich, Sabbut, Senu, Shoeofdeath, Shreck, Superm401, Szalakta, Thehotelambush, Therosecrusher, Thumperward, Tide rolls, Tom Herbert, Tom harrison, Tomaxer, Tomo, Tosha, Trout Ice Cream, TwoOneTwo, Unara, Urdutext, Wang ty87916, Wellington, Wereon, WojciechSwiderski, WolfgangFaber, Xlr8jj, Zaslav, Zro, Zundark, 124 anonymous edits
Probabilistic method Source: http://en.wikipedia.org/w/index.php?oldid=447956041 Contributors: Adking80, Agrinshp, Andris, Buenasdiaz, Charles Matthews, Charvest, Dominus, Dyaa, Giftlite, Headbomb, Kevinatilusa, Lantonov, M4gnum0n, Maxal, Melcombe, Michael Hardy, Michael Slone, Miym, Nagy, Nealeyoung, Ott2, Paul August, Peter Kwok, Pierre de Lyon, Psdey1, Qutezuce, Ryan Reich, Shell Kinney, Viz, 19 anonymous edits
q-analog Source: http://en.wikipedia.org/w/index.php?oldid=456040100 Contributors: Alksentrs, Centrx, Charles Matthews, Cooperh, Joel B. Lewis, Konradek, Linas, Marc van Leeuwen, Melchoir, Michael Hardy, Michael Slone, Nbarth, Ntsimp, PAR, Viriditas, Zaslav, 3 anonymous edits
q-Vandermonde identity Source: http://en.wikipedia.org/w/index.php?oldid=460235784 Contributors: CBM, Cajb, Charles Matthews, Joel B. Lewis, Marc van Leeuwen, Melcombe, Michael Hardy, Rich Farmbrough, Semorrison, Sodin, TechnoSymbiosis, 1 anonymous edits
Random permutation statistics Source: http://en.wikipedia.org/w/index.php?oldid=457619120 Contributors: Djozwebo, Edward, Lantonov, Melchoir, Melcombe, Mhym, Michael Hardy, Nbarth, Rjwilmsi, Shira.kritchman, Skysmith, Stasyuha, Sucharit, The Anome, Zahlentheorie, 11 anonymous edits
Road coloring problem Source: http://en.wikipedia.org/w/index.php?oldid=455768245 Contributors: Aawood, Altenmann, Andreas Kaufmann, Bkkbrad, Bryan Derksen, Buck Mulligan, CBM, CRGreathouse, Charles Matthews, Chris the speller, David Eppstein, Dcoetzee, Derek farn, Dreish, Elfred, Falkonry, Giftlite, Headbomb, Hermel, J04n, Jeffq, Justin W Smith, Kguirnela, Lantonov, Mapsax, Michael Hardy, Newone, Nimnar, Novwik, PHaze, Papna, Poromenos, Quuxplusone, Ravedave, Reetep, SEWilco, Simetrical, Sodin, Thissomeguy, Tjmayerinsf, Uzi V., Wilsone9, 43 anonymous edits
Rota–Baxter algebra Source: http://en.wikipedia.org/w/index.php?oldid=413721370 Contributors: Alu042, Bender235, Charvest, Giftlite, J04n, Kurusch, Mathsci, Michael Hardy, Mithu poddar
Rule of product Source: http://en.wikipedia.org/w/index.php?oldid=465387917 Contributors: ApocryphalAuthor, Arthur Rubin, CBM, Devourer09, DixonD, Heyzeuss, Lantonov, Leon math, Michael Hardy, PV=nRT, Silverfish, Yms, Zarvok, 8 anonymous edits
Rule of sum Source: http://en.wikipedia.org/w/index.php?oldid=448845807 Contributors: David Eppstein, Heyzeuss, Leon math, Luís Felipe Braga, Magister Mathematicae (usurped), Michael Hardy, Silverfish, Yms, Zarvok, 3 anonymous edits
Semilinear set Source: http://en.wikipedia.org/w/index.php?oldid=323214900 Contributors: Charles Matthews, Eternalblisss, Mets501, Michael Hardy, Nbarth, 2 anonymous edits
Separable permutation Source: http://en.wikipedia.org/w/index.php?oldid=399086872 Contributors: David Eppstein, Michael Hardy, Vince Vatter
Sequential dynamical system Source: http://en.wikipedia.org/w/index.php?oldid=372568536 Contributors: CBM, Charvest, Delaszk, Giftlite, Michael Hardy, Oconnor663, Wikimathman, 3 anonymous edits
Series multisection Source: http://en.wikipedia.org/w/index.php?oldid=442896698 Contributors: Gurt Posh, Maxal, Michael Hardy, Xanthoxyl
Set packing Source: http://en.wikipedia.org/w/index.php?oldid=463502858 Contributors: Alro, Bender05, Bender2k14, Dcoetzee, Lantonov, Miym, Passarel, Paul August, That Guy, From That Show!, Ylloh, 12 anonymous edits

Sharp-SAT Source: http://en.wikipedia.org/w/index.php?oldid=414196724 Contributors: Bender2k14, Dekart, Michael Hardy, RedZiz, RobinK, Sadads
Shortest common supersequence Source: http://en.wikipedia.org/w/index.php?oldid=464153914 Contributors: Ashu on wiki, Dcoetzee, Javed Ahamed, Jitse Niesen, MisterSheik, Nils Grimsmo, Nkojuharov, Ruud Koot, That Guy, From That Show!, 9 anonymous edits
Shuffle algebra Source: http://en.wikipedia.org/w/index.php?oldid=447970855 Contributors: Headbomb, Michael Hardy, R.e.b.
Sicherman dice Source: http://en.wikipedia.org/w/index.php?oldid=462697099 Contributors: AgentPeppermint, Altzinn, Bryanclair, David Eppstein, Deflective, DreamGuy, EamonnPKeane, Igorpak, JMyrleFuller, Jcgarcow, Jim.belk, Kattfisk, Kwamikagami, Lantonov, Lenoxus, Loscha, Michael Hardy, Noe, Penguin, Rjwilmsi, Sicherman, Tamfang, Yekrats, 17 anonymous edits
Sidon sequence Source: http://en.wikipedia.org/w/index.php?oldid=452377009 Contributors: AchedDamiman, Bender235, David Eppstein, Filip13041982, Headbomb, Justin W Smith, Kope, Michael Hardy, Rumping, RupertMillard, THF, Tirabo, 2 anonymous edits
Sim (pencil game) Source: http://en.wikipedia.org/w/index.php?oldid=461293546 Contributors: Bender2k14, Dan Hoey, Dbenbenn, Dominus, Groganus, Headbomb, Ikanreed, JRHorse, JocK, Jokes Free4Me, Jugander, KeithTyler, Lantonov, Lawilkin, Lukefan3, Lupin, Mateo SA, MrOllie, Oleg Alexandrov, PamD, Ptrillian, Rich Farmbrough, Serge31416, Sk8rboi1720, 25 anonymous edits
Singmaster's conjecture Source: http://en.wikipedia.org/w/index.php?oldid=454446234 Contributors: Akriasas, CRGreathouse, David Eppstein, Giftlite, Headbomb, Lantonov, Leahcim nai, Michael Hardy, Michael Slone, Schmock, Tomaxer, Xic667, 5 anonymous edits
Small set (combinatorics) Source: http://en.wikipedia.org/w/index.php?oldid=451133675 Contributors: CRGreathouse, Catapult, CompuChip, Dominus, Giftlite, Henrygb, Jafet, Joriki, Lantonov, Mackold, Michael Hardy, Msh210, Trovatore, VladimirReshetnikov, 15 anonymous edits
Sparse ruler Source: http://en.wikipedia.org/w/index.php?oldid=413616143 Contributors: Contestcen, David Eppstein, Tobias Bergemann, Wnmyers, Zariane, 1 anonymous edits
Sperner's lemma Source: http://en.wikipedia.org/w/index.php?oldid=464308338 Contributors: Adking80, Arch dude, Arthur Rubin, AxelBoldt, Charles Matthews, Charleswallingford, Chris the speller, Cretog8, David Eppstein, Delaszk, Dmcq, Enochlau, FF2010, Giftlite, Herbee, Iorsh, Julien Tuerlinckx, Lantonov, Leonard G., Marechal Ney, Michael Hardy, Minesweeper, Pokipsy76, Robinh, Tosha, Triathematician, Twri, Zvika, 22 anonymous edits
Spt function Source: http://en.wikipedia.org/w/index.php?oldid=444640435 Contributors: CRGreathouse, Enfcer, Gandalf61, GromXXVII, Michael Hardy, Random User 937494, 1 anonymous edits
Stable roommates problem Source: http://en.wikipedia.org/w/index.php?oldid=456838320 Contributors: David Eppstein, GregorB, Headbomb, JackSchmidt, Mcherm, Mindspin311, Miym, Pip2andahalf, RobWIrving, Skier Dude, Squeaky201, TakuyaMurata, 11 anonymous edits
Star of David theorem Source: http://en.wikipedia.org/w/index.php?oldid=457034119 Contributors: Geometry guy, Giftlite, Mhym, Michael Hardy, Ser Amantio di Nicolao
Star product Source: http://en.wikipedia.org/w/index.php?oldid=168918151 Contributors: Lantonov, Michael Hardy, Michael Slone, Oleg Alexandrov, Rich Farmbrough, Trovatore
Stars and bars (combinatorics) Source: http://en.wikipedia.org/w/index.php?oldid=456947318 Contributors: Agricola44, AndrewWTaylor, Anonymous Dissident, Btyner, Giftlite, Marc van Leeuwen, Materialscientist, Melcombe, Michael Hardy, Sisodia, 16 anonymous edits
Sum-free sequence Source: http://en.wikipedia.org/w/index.php?oldid=332299004 Contributors: CRGreathouse, Charles Matthews, Giftlite, Gmelfi, Jonnabuz, NTmath, Oleg Alexandrov, Paul August
Sunflower (mathematics) Source: http://en.wikipedia.org/w/index.php?oldid=464041575 Contributors: Charles Matthews, David Haslam, Hans Adler, Jason22, Michael Hardy, R.e.b., Set theorist, 2 anonymous edits
Superpattern Source: http://en.wikipedia.org/w/index.php?oldid=438783577 Contributors: Akuchlous, Albert Antony, Bender235, Joel B. Lewis, Kubigula, Lantonov, Loganberry, Masterflex, Mbhunter, Mlpkr, Oleg Alexandrov, Pavel Vozenilek, Pearle, Ywong137, 34 anonymous edits
Symbolic combinatorics Source: http://en.wikipedia.org/w/index.php?oldid=431276229 Contributors: CRGreathouse, CyborgTosser, David Eppstein, Lantonov, Michael Hardy, Paul August, Phil Boswell, Pol098, Quantling, Quuxplusone, Tomo, Waggers, Zahlentheorie, Zaslav, 5 anonymous edits
Szemerédi's theorem Source: http://en.wikipedia.org/w/index.php?oldid=455730031 Contributors: Altenmann, Arthur Rubin, Bender235, BeteNoir, Bunder, Charles Matthews, Cjwiki, David Eppstein, Delaszk, Gareth Jones, Giftlite, GreekHouse, JamesAM, JdH, JoshuaZ, Kope, Lantonov, Lupin, Michael Hardy, Mon4, Msh210, OlEnglish, Paul August, R.e.b., RDBury, Ryan Reich, Schneelocke, Sheynhertz-Unbayg, Shishir332, Sylvhem, Syxiao, Sławomir Biały, Teorth, Tsoeto, Yill577, Zsuzsajudit, 18 anonymous edits
Theory of relations Source: http://en.wikipedia.org/w/index.php?oldid=431112410 Contributors: (jarbarf), Ars Tottle, Autocratic Uzbek, Charles Matthews, Coredesat, Delaware Valley Girl, DenisHowe, Drunken Pirate, Flower Mound Belle, Hairy Dude, Hans Adler, Howard McCay, Islaammaged126, Jon Awbrey, JzG, King Bee, Lambiam, Lantonov, Majorly, Mrs. Lovett's Meat Puppets, Navy Pierre, PL290, Pluto Car, Poke Salat Annie, Salix alba, Snigbrook, Tassedethe, Tbhotch, The Tetrast, Tobias Bergemann, Trainshift, Unco Guid, Unknown Justin, Viva La Information Revolution!, West Goshen Guy, 12 anonymous edits
Toida's conjecture Source: http://en.wikipedia.org/w/index.php?oldid=429641586 Contributors: BarkingMoon, Crondeemon, David Eppstein, Jim.belk, Michael Hardy, The Founders Intent
Topological combinatorics Source: http://en.wikipedia.org/w/index.php?oldid=459075623 Contributors: Akriasas, Arid Zkwelty, Brad7777, Charvest, David Eppstein, Delaszk, Ebe123, Erzbischof, JohnBlackburne, Kope, Mistory, RobinK, 4 anonymous edits
Trace monoid Source: http://en.wikipedia.org/w/index.php?oldid=437344383 Contributors: Aboutmovies, Denverjeffrey, Dougher, Hans Adler, Linas, Michael Hardy, Mukake, Oleg Alexandrov, Pcap, Plasticup, Spacepotato, Valfontis, 1 anonymous edits
Transformation (combinatorics) Source: http://en.wikipedia.org/w/index.php?oldid=416318377 Contributors: CuriousEric, GrEp, Malcolma, Michael Hardy, SMasters
Transversal (combinatorics) Source: http://en.wikipedia.org/w/index.php?oldid=464985425 Contributors: Altenmann, Ark'ay, BertSeghers, Booyabazooka, Charles Matthews, Cullinane, David Eppstein, Discospinster, Flegmon, Ghazer, Hillman, Imperial Monarch, J.delanoy, JackSchmidt, Jim.belk, Jleedev, Lantonov, Michael Hardy, Michael Slone, NHJG, Nbarth, Ott2, RDBury, Salix alba, Scottfisher, Superninja, Xlgr2007, Zaslav, Zero0000, 29 anonymous edits
Tucker's lemma Source: http://en.wikipedia.org/w/index.php?oldid=427934692 Contributors: Charvest, Delaszk, Giftlite, Michael Hardy, Vanish2, 4 anonymous edits
Twelvefold way Source: http://en.wikipedia.org/w/index.php?oldid=447478251 Contributors: ArnoldReinhold, BD2412, CRGreathouse, Charles Matthews, Classicalecon, David Radcliffe, Dreadstar, Firowkp, Giftlite, Huku-chan, Joriki, Lantonov, Marc van Leeuwen, Michael Hardy, Mitch Ames, PeggyCummins, Salix alba, Zundark, 10 anonymous edits
Umbral calculus Source: http://en.wikipedia.org/w/index.php?oldid=458304388 Contributors: Alberto da Calvairate, CBM, CRGreathouse, Charles Matthews, Charvest, Cuzkatzimhut, Daniele.tampieri, Dickdock, Dmcq, Gco, Giftlite, Headbomb, Jpbowen, Lantonov, Linas, Maxal, Michael Hardy, R.e.b., Ron asquith, Rparle, Tiandechengshi, Vanish2, 17 anonymous edits
Uniform convergence (combinatorics) Source: http://en.wikipedia.org/w/index.php?oldid=448182088 Contributors: Abhradt, Bender235, Charvest, Mblumber, Michael Hardy, 4 anonymous edits
Vexillary permutation Source: http://en.wikipedia.org/w/index.php?oldid=447979190 Contributors: Headbomb, Joel B. Lewis, Michael Hardy, R.e.b.
Virtual knot Source: http://en.wikipedia.org/w/index.php?oldid=460225832 Contributors: Charles Matthews, David Eppstein, DavidHobby, Headbomb, Henry Delforn, Horoball, Kier07, Lantonov, M0RD00R, R.e.b., Rjwilmsi, 5 anonymous edits
Weighing matrix Source: http://en.wikipedia.org/w/index.php?oldid=413880244 Contributors: Charles Matthews, Colonies Chris, David Eppstein, Filll, Kinthelt, Lantonov, Maksim-e, Michael Hardy, Stifle, Vanish2, Zaslav, 3 anonymous edits

Wilf–Zeilberger pair Source: http://en.wikipedia.org/w/index.php?oldid=452080011 Contributors: Bo Jacoby, GTBacchus, JackSchmidt, Jhdwg, Jim.belk, Maxal, Michael Hardy, Woodshed, 3 anonymous edits
Zero-sum problem Source: http://en.wikipedia.org/w/index.php?oldid=442226477 Contributors: BenFrantzDale, Bender235, Charles Matthews, Cmdrjameson, CombFan, Francos, Giftlite, Gubbubu, Jcobb, Jowa fan, Lantonov, Maksim-e, Michael Hardy, Orbst, PV=nRT, PaulHanson, Rich Farmbrough, Rjwilmsi, Tomaxer, Wasawa, 11 anonymous edits

Image Sources, Licenses and Contributors

Image:Plain-bob-minor 2.png Source: http://en.wikipedia.org/w/index.php?title=File:Plain-bob-minor_2.png License: Public Domain Contributors: Andrew Hyde, Kelly Martin, Man vyi, Oosoom, Sopoforic Image:Catalan 4 leaves binary tree example.svg Source: http://en.wikipedia.org/w/index.php?title=File:Catalan_4_leaves_binary_tree_example.svg License: Public Domain Contributors: Original uploader was Jasampler at es.wikipedia Image:Partition3D.svg Source: http://en.wikipedia.org/w/index.php?title=File:Partition3D.svg License: Creative Commons Attribution-Sharealike 3.0,2.5,2.0,1.0 Contributors: Kilom691 Image:Petersen1 tiny.svg Source: http://en.wikipedia.org/w/index.php?title=File:Petersen1_tiny.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Leshabirukov Image:Hasse diagram of powerset of 3.svg Source: http://en.wikipedia.org/w/index.php?title=File:Hasse_diagram_of_powerset_of_3.svg License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: KSmrq Image:Self avoiding walk.svg Source: http://en.wikipedia.org/w/index.php?title=File:Self_avoiding_walk.svg License: Creative Commons Attribution 3.0 Contributors: Claudio Rocchini Image:Young diagram for 541 partition.svg Source: http://en.wikipedia.org/w/index.php?title=File:Young_diagram_for_541_partition.svg License: GNU Free Documentation License Contributors: Arichnad, Darapti, Kilom691 Image:Morse-Thue sequence.gif Source: http://en.wikipedia.org/w/index.php?title=File:Morse-Thue_sequence.gif License: Public Domain Contributors: Axa2 Image:Icosahedron.svg Source: http://en.wikipedia.org/w/index.php?title=File:Icosahedron.svg License: GNU Free Documentation License Contributors: User:DTR Image:Necklace cropped.png Source: http://en.wikipedia.org/w/index.php?title=File:Necklace_cropped.png License: Public Domain Contributors: Robinhankin Image:Kissing-3d.png Source: http://en.wikipedia.org/w/index.php?title=File:Kissing-3d.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Original uploader was Robertwb at en.wikipedia Image:3-dimensional-matching.svg Source: http://en.wikipedia.org/w/index.php?title=File:3-dimensional-matching.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Miym File:Complete-edge-coloring.svg Source: http://en.wikipedia.org/w/index.php?title=File:Complete-edge-coloring.svg License: Creative Commons Zero Contributors: David Eppstein File:Boolean functions like 1000 nonlinearity.svg Source: http://en.wikipedia.org/w/index.php?title=File:Boolean_functions_like_1000_nonlinearity.svg License: Public Domain Contributors: Lipedia File:0001 0001 0001 1110 nonlinearity.svg Source: http://en.wikipedia.org/w/index.php?title=File:0001_0001_0001_1110_nonlinearity.svg License: Public Domain Contributors: Lipedia Image:Pascal's triangle 5.svg Source: http://en.wikipedia.org/w/index.php?title=File:Pascal's_triangle_5.svg License: GNU Free Documentation License Contributors: User:Conrad.Irwin originally User:Drini Image:Pascal's triangle - 1000th row.png Source: http://en.wikipedia.org/w/index.php?title=File:Pascal's_triangle_-_1000th_row.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Endlessoblivion File:Caylrich-first-trees.png Source: http://en.wikipedia.org/w/index.php?title=File:Caylrich-first-trees.png License: Public Domain Contributors: Arthur Cayley File:Combinations without repetition; 5 choose 3.svg Source: http://en.wikipedia.org/w/index.php?title=File:Combinations_without_repetition;_5_choose_3.svg License: Public Domain 
Contributors: Lipedia File:Combinations with repetition; 5 multichoose 3.svg Source: http://en.wikipedia.org/w/index.php?title=File:Combinations_with_repetition;_5_multichoose_3.svg License: GNU Free Documentation License Contributors: Lipedia Image:4x2.svg Source: http://en.wikipedia.org/w/index.php?title=File:4x2.svg License: Public Domain Contributors: user:Booyabazooka Image:4xn.svg Source: http://en.wikipedia.org/w/index.php?title=File:4xn.svg License: Public Domain Contributors: Booyabazooka, Dcoetzee, 1 anonymous edits Image:Inclusion-exclusion.svg Source: http://en.wikipedia.org/w/index.php?title=File:Inclusion-exclusion.svg License: GNU Free Documentation License Contributors: Booyabazooka, Darapti File:Binary and compositions 4.svg Source: http://en.wikipedia.org/w/index.php?title=File:Binary_and_compositions_4.svg License: Public Domain Contributors: Lipedia File:Compositions of 6.svg Source: http://en.wikipedia.org/w/index.php?title=File:Compositions_of_6.svg License: Public Domain Contributors: Lipedia File:Partitions of 6.svg Source: http://en.wikipedia.org/w/index.php?title=File:Partitions_of_6.svg License: Public Domain Contributors: Lipedia Image:Face colored cube.png Source: http://en.wikipedia.org/w/index.php?title=File:Face_colored_cube.png License: unknown Contributors: The Anome (talk). Original uploader was The Anome at en.wikipedia Image:DC8.png Source: http://en.wikipedia.org/w/index.php?title=File:DC8.png License: Public Domain Contributors: Dcoetzee, Grafite, Maksim, Man vyi Image:CyclicOrderingOfCuts.svg Source: http://en.wikipedia.org/w/index.php?title=File:CyclicOrderingOfCuts.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Melchoir Image:CyclicLinearProductLabels.svg Source: http://en.wikipedia.org/w/index.php?title=File:CyclicLinearProductLabels.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Melchoir image:2-2-4-4-de-Bruijn-torus.svg Source: http://en.wikipedia.org/w/index.php?title=File:2-2-4-4-de-Bruijn-torus.svg License: Public Domain Contributors: Dranorter, Multichill, Ww2censor Image:Delannoy3x3.svg Source: http://en.wikipedia.org/w/index.php?title=File:Delannoy3x3.svg License: Creative Commons Attribution-Sharealike 3.0,2.5,2.0,1.0 Contributors: Infovarius, Kilom691 Image:DividingACircleIntoAreas-Box.png Source: http://en.wikipedia.org/w/index.php?title=File:DividingACircleIntoAreas-Box.png License: GNU Free Documentation License Contributors: Maksim, 1 anonymous edits Image:DividingACircleIntoAreas.png Source: http://en.wikipedia.org/w/index.php?title=File:DividingACircleIntoAreas.png License: GNU Free Documentation License Contributors: Darapti, Maksim, 1 anonymous edits Image:Pavage domino.svg Source: http://en.wikipedia.org/w/index.php?title=File:Pavage_domino.svg License: Creative Commons Attribution-Sharealike 3.0,2.5,2.0,1.0 Contributors: Kilom691 Image:Dominoes tiling 8x8.svg Source: http://en.wikipedia.org/w/index.php?title=File:Dominoes_tiling_8x8.svg License: Creative Commons Attribution-Share Alike Contributors: R. A. 
Nonenmacher File:Diamant azteque.svg Source: http://en.wikipedia.org/w/index.php?title=File:Diamant_azteque.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Kilom691 Image:Log-factorial.svg Source: http://en.wikipedia.org/w/index.php?title=File:Log-factorial.svg License: Public Domain Contributors: Original uploader was Ec- at en.wikipedia Image:Generalized factorial function.svg Source: http://en.wikipedia.org/w/index.php?title=File:Generalized_factorial_function.svg License: GNU Free Documentation License Contributors: Self Image:Factorial05.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Factorial05.jpg License: Attribution Contributors: Domitori (Dmitrii Kouznetsov) File:OEISicon light.svg Source: http://en.wikipedia.org/w/index.php?title=File:OEISicon_light.svg License: Public Domain Contributors: Lipedia File:Loupe light.svg Source: http://en.wikipedia.org/w/index.php?title=File:Loupe_light.svg License: Public Domain Contributors: Lipedia File:Symmetric group 4; permutohedron; permutations and inversion vectors.svg Source: http://en.wikipedia.org/w/index.php?title=File:Symmetric_group_4;_permutohedron;_permutations_and_inversion_vectors.svg License: GNU Free Documentation License Contributors: User:Lipedia File:Symmetric group 4; permutation list with matrices.svg Source: http://en.wikipedia.org/w/index.php?title=File:Symmetric_group_4;_permutation_list_with_matrices.svg License: Public Domain Contributors: User:Lipedia Image:Order 2 affine plane.svg Source: http://en.wikipedia.org/w/index.php?title=File:Order_2_affine_plane.svg License: Public Domain Contributors: Original uploader was Uscitizenjason at en.wikipedia Image:Order 3 affine plane.svg Source: http://en.wikipedia.org/w/index.php?title=File:Order_3_affine_plane.svg License: Public Domain Contributors: Original uploader was Uscitizenjason at en.wikipedia File:Fano plane Hasse diagram.svg Source: http://en.wikipedia.org/w/index.php?title=File:Fano_plane_Hasse_diagram.svg License: Creative Commons Attribution 3.0 Contributors: Lipedia File:fano3space.png Source: http://en.wikipedia.org/w/index.php?title=File:Fano3space.png License: Public Domain Contributors: Howard Eves

Image:circ-4-nor.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Circ-4-nor.jpg License: Creative Commons Attribution 3.0 Contributors: Henning.Mortveit Image:circ-4-nor-1234.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Circ-4-nor-1234.jpg License: Creative Commons Attribution 3.0 Contributors: Henning.Mortveit Image:Four beat fibonacci animation.gif Source: http://en.wikipedia.org/w/index.php?title=File:Four_beat_fibonacci_animation.gif License: Creative Commons Attribution 3.0 Contributors: Indeed123 Image:Iching-hexagram-37.svg Source: http://en.wikipedia.org/w/index.php?title=File:Iching-hexagram-37.svg License: Attribution Contributors: Ben Finney Image:Labeled_undirected_graph.svg Source: http://en.wikipedia.org/w/index.php?title=File:Labeled_undirected_graph.svg License: Public Domain Contributors: Adrianwn File:Fano plane with nimber labels.svg Source: http://en.wikipedia.org/w/index.php?title=File:Fano_plane_with_nimber_labels.svg License: Public Domain Contributors: Fano_plane.svg: de:User:Gunther derivative work: --Lambiam Image:Fano plane-Levi graph.svg Source: http://en.wikipedia.org/w/index.php?title=File:Fano_plane-Levi_graph.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Tremlin File:2-element subsets of 4 elements.svg Source: http://en.wikipedia.org/w/index.php?title=File:2-element_subsets_of_4_elements.svg License: Public Domain Contributors: Lipedia File:Symmetric group 4; permutohedron 3D; permutations and inversion vectors.svg Source: http://en.wikipedia.org/w/index.php?title=File:Symmetric_group_4;_permutohedron_3D;_permutations_and_inversion_vectors.svg License: GNU Free Documentation License Contributors: User:Lipedia File:Symmetric group 4; permutohedron 3D; inversions.svg Source: http://en.wikipedia.org/w/index.php?title=File:Symmetric_group_4;_permutohedron_3D;_inversions.svg License: Creative Commons Attribution 3.0 Contributors: Lipedia Image:Langford pairing.svg Source: http://en.wikipedia.org/w/index.php?title=File:Langford_pairing.svg License: Public Domain Contributors: David Eppstein Image:RSK example result.svg Source: http://en.wikipedia.org/w/index.php?title=File:RSK_example_result.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Sam Derbyshire Image:Schur lattice paths.svg Source: http://en.wikipedia.org/w/index.php?title=File:Schur_lattice_paths.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Sam Derbyshire Image:Meander M1 jaredwf.png Source: http://en.wikipedia.org/w/index.php?title=File:Meander_M1_jaredwf.png License: Public Domain Contributors: Calliopejen1, Jaredwf, Korg, Pearle, Vyznev Xnebara Image:OpenMeanderM1.svg Source: http://en.wikipedia.org/w/index.php?title=File:OpenMeanderM1.svg License: Public Domain Contributors: Original image by Jaredwf on w:Wikipedia. SVG version created by FirefoxRocks. 
Image:Open Meander M2 jaredwf.png Source: http://en.wikipedia.org/w/index.php?title=File:Open_Meander_M2_jaredwf.png License: Public Domain Contributors: Original uploader was Jaredwf at en.wikipedia Image:Semi-meander M1 jaredwf.png Source: http://en.wikipedia.org/w/index.php?title=File:Semi-meander_M1_jaredwf.png License: Public Domain Contributors: Calliopejen1, Jaredwf, Pearle Image:No-three-in-line.svg Source: http://en.wikipedia.org/w/index.php?title=File:No-three-in-line.svg License: Public Domain Contributors: David Eppstein File:max-flow min-cut example.svg Source: http://en.wikipedia.org/w/index.php?title=File:Max-flow_min-cut_example.svg License: Public Domain Contributors: Chin Ho Lee Image:Directed acyclic graph 2.svg Source: http://en.wikipedia.org/w/index.php?title=File:Directed_acyclic_graph_2.svg License: Public Domain Contributors: Johannes Rssel (talk) Image:3dpoly.svg Source: http://en.wikipedia.org/w/index.php?title=File:3dpoly.svg License: Public Domain Contributors: Xorx77 File:ShapleyFolkman lemma.svg Source: http://en.wikipedia.org/w/index.php?title=File:ShapleyFolkman_lemma.svg License: Public Domain Contributors: David Eppstein Image:Simplex description.png Source: http://en.wikipedia.org/w/index.php?title=File:Simplex_description.png License: Public Domain Contributors: Czupirek, Kocur, Maksim, Martynas Patasius, 1 anonymous edits File:Ferrer partitioning diagrams.svg Source: http://en.wikipedia.org/w/index.php?title=File:Ferrer_partitioning_diagrams.svg License: Creative Commons Attribution-Share Alike Contributors: R. A. Nonenmacher File:GrayDot.svg Source: http://en.wikipedia.org/w/index.php?title=File:GrayDot.svg License: Public Domain Contributors: Ilmari Karonen File:RedDot.svg Source: http://en.wikipedia.org/w/index.php?title=File:RedDot.svg License: Public Domain Contributors: Ilmari Karonen, Pfctdayelise, 1 anonymous edits File:BlackDot.svg Source: http://en.wikipedia.org/w/index.php?title=File:BlackDot.svg License: Public Domain Contributors: Ilmari Karonen, Matthias M. Image:Set partition.svg Source: http://en.wikipedia.org/w/index.php?title=File:Set_partition.svg License: Free Art License Contributors: Original uploader was Wshun at en.wikipedia Later versions were uploaded by Andrew pmk at en.wikipedia. 
Converted to SVG by Oleg Alexandrov 03:28, 28 July 2007 (UTC) File:PartitionLattice.svg Source: http://en.wikipedia.org/w/index.php?title=File:PartitionLattice.svg License: Public Domain Contributors: Sopoforic Image:Bond percolation p 51.png Source: http://en.wikipedia.org/w/index.php?title=File:Bond_percolation_p_51.png License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: de:Benutzer:Erzbischof Image:Permanent-Nonneg2Powers.png Source: http://en.wikipedia.org/w/index.php?title=File:Permanent-Nonneg2Powers.png License: Public Domain Contributors: Shreevatsa Image:Permanent-2powers01.png Source: http://en.wikipedia.org/w/index.php?title=File:Permanent-2powers01.png License: Public Domain Contributors: Shreevatsa File:Permutations RGB.svg Source: http://en.wikipedia.org/w/index.php?title=File:Permutations_RGB.svg License: Public Domain Contributors: Lipedia File:Permutations with repetition.svg Source: http://en.wikipedia.org/w/index.php?title=File:Permutations_with_repetition.svg License: Public Domain Contributors: Lipedia File:Symmetric group 3; Cayley table; matrices.svg Source: http://en.wikipedia.org/w/index.php?title=File:Symmetric_group_3;_Cayley_table;_matrices.svg License: Public Domain Contributors: User:Lipedia Image:TooManyPigeons.jpg Source: http://en.wikipedia.org/w/index.php?title=File:TooManyPigeons.jpg License: GNU Free Documentation License Contributors: Pigeons-in-holes.jpg by en:User:BenFrantzDale; this image by en:User:McKay Image:Road coloring conjecture.svg Source: http://en.wikipedia.org/w/index.php?title=File:Road_coloring_conjecture.svg License: Public Domain Contributors: Ravedave File:Loudspeaker.svg Source: http://en.wikipedia.org/w/index.php?title=File:Loudspeaker.svg License: Public Domain Contributors: Bayo, Gmaxwell, Husky, Iamunknown, Mirithing, Myself488, Nethac DIU, Omegatron, Rocket000, The Evil IP address, Wouterhagens, 16 anonymous edits Image:Complete graph K6.svg Source: http://en.wikipedia.org/w/index.php?title=File:Complete_graph_K6.svg License: Public Domain Contributors: David Benbennick wrote this file. Image:Sperner1d.svg Source: http://en.wikipedia.org/w/index.php?title=File:Sperner1d.svg License: GNU Free Documentation License Contributors: Akinom, Anarkman, Dmcq, Pokipsy76, 1 anonymous edits Image:Sperner2d.svg Source: http://en.wikipedia.org/w/index.php?title=File:Sperner2d.svg License: GNU Free Documentation License Contributors: Akinom, Darapti, Dmcq, Pokipsy76 File:Star-of-david-thm.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Star-of-david-thm.jpg License: Public Domain Contributors: Mhym Image:star product 1.png Source: http://en.wikipedia.org/w/index.php?title=File:Star_product_1.png License: GNU Free Documentation License Contributors: en:User:Rich Farmbrough Image:star product 3.png Source: http://en.wikipedia.org/w/index.php?title=File:Star_product_3.png License: GNU Free Documentation License Contributors: Maksim, Sopoforic File:Sonnenblume_02_KMJ.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Sonnenblume_02_KMJ.jpg License: GNU Free Documentation License Contributors: Original uploader was KMJ at de.wikipedia File:TuckerLemmaDiagram.png Source: http://en.wikipedia.org/w/index.php?title=File:TuckerLemmaDiagram.png License: Creative Commons Zero Contributors: David Miller

License

Creative Commons Attribution-Share Alike 3.0 Unported //creativecommons.org/licenses/by-sa/3.0/
