
Related Properties of Matchings and Covers

Eric Emer December 22, 2012

Abstract

This paper explains concepts related to matchings and covers. The paper demonstrates the relationship between properties of matchings and covers. In the process, this paper proves König's Theorem.

1 Introduction

As a business leader, one's objective is often to fill jobs at one's company with as many capable individuals as possible. This number is clearly limited by the number of qualified individuals available for each position. It is also limited by the number of jobs available, as one cannot assign more individuals to jobs than there are open positions. To fill as many jobs as we can with competent candidates, we wish to maximize the number of matchings of candidates to jobs. Because we can only assign one individual to each job, the number of matchings we make equals the number of positions we have filled; the largest value achievable this way is the maximum matching number. A closely related quantity, the size of the smallest set of candidates and jobs that touches every possible assignment, is the minimum cover number. Finding matchings and coverings are very common, real-world applicable, optimization problems.

In this paper, we will go over some of the important ideas related to the maximum matching and minimum covering problem. We will connect the maximum matching number and the minimum cover number by showing that they are equal; this relationship is exactly what König's Theorem states, and we will then prove König's Theorem. This entails explaining in detail matchings and vertex coverings as they relate to bipartite graphs. We will express the equality between the maximum matching number and the minimum cover number by remodeling the problem with a linear program representation and then proving a series of lemmas. The main result of this paper is to show that, for every bipartite graph, the maximum matching number is equal to the minimum cover number.

Collaborated with Rachel Ah Chuen, Jared Wong

2 Defining Matching/Covering through an Example


Imagine once again a scenario where a boss must match candidates from a pool of job applicants to positions. Each candidate is qualified for only a specific subset of positions. This matching problem may involve finding a maximum matching, M, where the boss wishes to make as many assignments as possible, without assigning any individual more than once, and without assigning any position to more than one individual. We may also wish to investigate the minimum cover, C: the smallest set of candidates and jobs that includes, for every possible assignment, either the candidate or the job involved. Consider a vertex set partitioned into two sets, Candidates and Jobs. Figure 1 below displays the possible matchings of candidates to jobs. A graph whose vertex set is partitioned in this way, with every edge running between the two parts, is called a bipartite graph.

[Figure 1 shows candidate vertices Amanda, Darrell, Cassandra, and Eric on one side, and job vertices CEO, CFO, CTO, Secretary, Programmer, and VP on the other.]

Figure 1: A bipartite graph showing the possible assignments of candidates to positions. I have included arrows to show the assignments, although in general bipartite graphs are undirected graphs.
Looking at Figure 1, the size of a maximum matching of candidates to jobs is clearly equal to the minimum cover number. In this example, the vertex coverings of the smallest order are of order 4. For instance, {Amanda, CFO, Cassandra, Eric} or {Amanda, Darrell, Cassandra, Eric} would both be valid minimum coverings. The important consideration when finding a minimum cover is that at least one endpoint of every possible assignment must be included, ideally only one vertex per matched edge. In this case, there are more jobs than candidates, but there are enough possible matchings for each candidate to have a unique job. Thus, the maximum matching number, |M|, will be 4 as well, giving |M| = |C|. A valid maximum matching would be M = {Amanda → CEO, Darrell → CFO, Cassandra → Programmer, Eric → VP}. In this example, it is trivial to show that the maximum number of matched edges is 4: we can match each candidate to a unique job.
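To make the example concrete, here is a short Python sketch (not part of the original paper) that computes a maximum matching by repeatedly searching for augmenting assignments. The qualification lists below are hypothetical, since the exact edges of Figure 1 are not reproduced in the text; the point is only that the matcher returns an assignment of size 4, the same as the cover size discussed above.

    # A minimal sketch: maximum matching via augmenting paths (Kuhn's algorithm).
    # The qualification lists are illustrative, not taken from Figure 1.

    qualified = {
        "Amanda":    ["CEO", "CFO"],
        "Darrell":   ["CFO", "Secretary"],
        "Cassandra": ["CTO", "Programmer"],
        "Eric":      ["Programmer", "VP"],
    }

    def max_matching(adj):
        match = {}  # job -> candidate currently assigned to it

        def try_assign(cand, seen):
            # Try to place `cand`, possibly re-assigning earlier candidates.
            for job in adj[cand]:
                if job in seen:
                    continue
                seen.add(job)
                if job not in match or try_assign(match[job], seen):
                    match[job] = cand
                    return True
            return False

        for cand in adj:
            try_assign(cand, set())
        return match

    assignment = max_matching(qualified)
    print(len(assignment), assignment)  # size 4, e.g. {'CEO': 'Amanda', ...}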

3 Relationship between Matching and Cover Number


In this section, we are going to explain König's Theorem, a theorem relating the matching and vertex cover of a bipartite graph. However, in order to do this, we will need to define these concepts in greater detail.

3.1 Preliminary Definitions

In order to understand and prove König's Theorem, we need to define its terms and objects not just through verbal examples, but also in mathematical terms.

Definition 1. Bipartite Graph G = (V, E). A bipartite graph, G = (V, E) = (A, B, E), is a graph whose vertex set V is partitioned into A and B, together with an edge set E, where E ⊆ A × B.

Definition 2. Matching M. A matching, M, is an edge set with M ⊆ E such that every vertex of V is contained in at most one edge of M.

Definition 3. Vertex Cover C. A vertex cover, C, is a set of vertices with C ⊆ V such that for every edge e = (a, b) ∈ E, either a ∈ C or b ∈ C, or both.

Definition 4. Matching Number ν. ν = max{|M|}, where M ranges over matchings in G.

Definition 5. Covering Number τ. τ = min{|C|}, where C ranges over vertex covers in G.
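As a sanity check on Definitions 2 and 3, the following small Python sketch (not part of the original paper; the tiny graph is illustrative only) restates them as executable predicates.

    # A minimal sketch restating Definitions 2 and 3 as checks.

    def is_matching(M):
        """Every vertex appears in at most one edge of M (Definition 2)."""
        used = set()
        for a, b in M:
            if a in used or b in used:
                return False
            used.update((a, b))
        return True

    def is_vertex_cover(C, E):
        """Every edge has at least one endpoint in C (Definition 3)."""
        return all(a in C or b in C for a, b in E)

    E = [("a1", "b1"), ("a1", "b2"), ("a2", "b2")]
    print(is_matching([("a1", "b1"), ("a2", "b2")]))  # True
    print(is_vertex_cover({"a1", "b2"}, E))           # True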

3.2 König's Theorem

König's Theorem (1931) relates ν and τ from the above definitions, as they pertain to a bipartite graph. König's Theorem informs us that:

Theorem 1. König's Theorem [1931]. For every bipartite graph G, we have ν = τ. This yields the obvious lemma that if ν = τ, then also ν ≤ τ. We will prove König's Theorem in a later section.

3.3 Modeling Maximum Matching and Minimum Cover as Linear Programs


In this section we will model maximum matching and minimum cover as linear programs. We will then bring to light that the two linear programs are duals of each other. We can model the maximum matching problem as a linear program (LP). Let G be a bipartite graph G = (A, B, E), with vertex sets A and B, and edge set E. Let x_e be the weight assigned to an edge e = (a, b). In addition, let δ(a) be the set of all edges in E which contain vertex a as an endpoint, and let δ(b) be the set of all edges in E which contain vertex b as an endpoint. From this we produce the following LP:

ν_LP = max Σ_{e ∈ E} x_e

subject to:

Σ_{e ∈ δ(a)} x_e ≤ 1, for all a ∈ A
Σ_{e ∈ δ(b)} x_e ≤ 1, for all b ∈ B
x_e ≥ 0, for all e ∈ E

The above linear program is a maximization program for ν_LP. The program maximizes the sum of all the edge weights x_e over all the edges in E. It abides by the following constraints: the sum of the edge weights x_e of edges that contain vertex a as an endpoint must be less than or equal to 1; the sum of the edge weights x_e of edges that contain vertex b as an endpoint must be less than or equal to 1; and all of the edge weights must be non-negative. This constitutes our primal LP.

We model our dual LP for ν_LP below. In this case, it is a minimum covering, so we look at the minimum total weight of vertices we can select from partition y and partition z. Let y_a be the weight of a vertex a selected from partition y, and let z_b be the weight of a vertex b selected from partition z. We obtain the following dual LP:

τ_LP = min Σ_{a ∈ A} y_a + Σ_{b ∈ B} z_b

subject to:

y_a + z_b ≥ 1, for all (a, b) ∈ E
y_a ≥ 0, for all a ∈ A
z_b ≥ 0, for all b ∈ B
In the above dual LP, the two vertex sets which make the graph bipartite correspond to y and z: the variables y_a range over A and the variables z_b range over B. If a vertex in y is in the covering, we assign it a value of 1, and if it is not, we assign it a value of 0. Similarly, if a vertex in z is in the covering, we assign it a value of 1, and if it is not, we assign it a value of 0. This interpretation is natural because, for a cover, each vertex is either in the covering or it is not. The above linear program is a minimization for τ_LP. The program minimizes the sum of all the vertex weights y_a and z_b. It abides by the following constraints: for an edge e = (a, b), the summed weights y_a + z_b must be greater than or equal to 1; all vertex weights in A must be non-negative; and all vertex weights in B must be non-negative.

The guidelines for a pair of primal and dual linear programs are that if we model our primal LP as

max c^T x, subject to Ax ≤ b and x ≥ 0,

then the dual LP of this primal is modeled as

min y^T b, subject to A^T y ≥ c and y ≥ 0.

Thus, we see from the rules of duality that from a primal LP we can obtain its dual LP. In this case, the primal LP for ν_LP has, as its dual, exactly the LP set up for τ_LP. By inspection, we see that the primal LP and dual LP shown in this section correspond to the general rules for a pair of primal and dual linear programs outlined above. Thus, by strong duality, since the LPs are duals of each other, we have that ν_LP = τ_LP.
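As a numerical illustration of this duality (not from the paper, and assuming SciPy's linprog is available), the primal matching LP and the dual covering LP can both be solved for a small, hypothetical edge list; their optimal values coincide, as strong duality promises.

    # A minimal sketch, assuming scipy is installed: solve nu_LP and tau_LP for
    # an illustrative bipartite graph and observe that the optima agree.
    from scipy.optimize import linprog

    A_nodes, B_nodes = ["a1", "a2"], ["b1", "b2"]
    verts = A_nodes + B_nodes
    edges = [("a1", "b1"), ("a1", "b2"), ("a2", "b2")]

    # Primal: maximize sum x_e subject to sum of x_e over delta(v) <= 1, x_e >= 0.
    # linprog minimizes, so the objective is negated.
    A_primal = [[1 if v in e else 0 for e in edges] for v in verts]
    primal = linprog(c=[-1] * len(edges), A_ub=A_primal, b_ub=[1] * len(verts),
                     bounds=[(0, None)] * len(edges), method="highs")

    # Dual: minimize sum y_a + sum z_b subject to y_a + z_b >= 1 on every edge,
    # written as -(y_a + z_b) <= -1 for linprog's A_ub form.
    A_dual = [[-1 if v in e else 0 for v in verts] for e in edges]
    dual = linprog(c=[1] * len(verts), A_ub=A_dual, b_ub=[-1] * len(edges),
                   bounds=[(0, None)] * len(verts), method="highs")

    print(-primal.fun, dual.fun)  # both 2.0: nu_LP = tau_LP on this instance

Here the common optimum 2 is also the size of the maximum matching {a1-b1, a2-b2} and of the vertex cover {a1, b2} in this small instance.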

3.4 Proof of König's Theorem

Once again, König's Theorem states: for every bipartite graph G = (A, B, E), we have ν = τ. We wish to prove König's Theorem using the results of our LPs. We will do this by proving the following lemmas:

Lemma 1. ν ≤ ν_LP = τ_LP ≤ τ.
Lemma 2. ν ≥ ν_LP.
Lemma 3. τ ≤ τ_LP.

We will prove Lemma 1, Lemma 2, and Lemma 3, and this will be sufficient to show that ν = τ, and that König's Theorem is true. We will do this by breaking Lemma 1 down into successive, easier lemmas, and by separately proving Lemma 2 and Lemma 3.

3.4.1 Proof of Lemma 1

We begin by breaking down Lemma 1 into Lemma 4, Lemma 5, and Lemma 6, and proving these smaller lemmas.

Lemma 4. ν_LP = τ_LP. Because our dual LP and our primal LP are duals of each other, as shown in Section 3.3, we are certain that ν_LP = τ_LP, since a pair of dual linear programs (both feasible and bounded here) yields the same optimal value.

Next, we examine the relationship between the linear program result ν_LP and the result ν. This is also a smaller lemma contained in Lemma 1.

Lemma 5. ν ≤ ν_LP. Every matching M corresponds to a feasible 0-1 solution of the primal LP with value |M|, so the LP optimum is at least as large as any matching. By the structure of the Simplex Method, we are certain that the LP will maximize ν_LP with respect to the given constraints; there is no valid ν that is greater than the ν_LP found by performing the Simplex Method on the LP. Thus, ν ≤ ν_LP.

Next, we look at the relationship between the linear program result τ_LP and the result τ. This too is a smaller lemma contained in Lemma 1.

Lemma 6. τ_LP ≤ τ. Every vertex cover C corresponds to a feasible 0-1 solution of the dual LP with value |C|, so the dual optimum is at most as large as any cover. This is true for the same reasons that Lemma 5 is true, except in this case the LP will minimize τ_LP with respect to the given constraints. Thus, τ_LP ≤ τ.

Reassembling Lemma 4, Lemma 5, and Lemma 6, we see that together they give ν ≤ ν_LP = τ_LP ≤ τ, which is exactly Lemma 1. Therefore Lemma 1 is true. If we can also prove Lemma 2 and Lemma 3, then we can prove equality.

3.4.2 Proof of Lemma 2

Let us now prove Lemma 2, which states that ν ≥ ν_LP. Within the primal LP, we take an optimum solution, x*. If x*_e ∈ {0, 1} for all e ∈ E, then the edges with x*_e = 1 form a matching M with |M| = ν_LP, and we have shown that ν ≥ ν_LP, because ν_LP is simply the number of edges in this matching. The other case is that x*_e ∉ {0, 1} for some edge e = (i, j), but rather x*_e ∈ (0, 1). We will call such edges fractional, and edges where x*_e ∈ {0, 1} we will call binary. Once an edge is converted from fractional to binary it never changes again, and remains binary. In the case of fractional edges, we can find either a path or a cycle P and modify x* along P while maintaining feasibility and optimality, and decrease the number of edges with fractional values.

If no cycle exists, but we do have fractional edges, then we resort to Algorithm 1, defined below. If a cycle exists, then we resort to Algorithm 2, also defined below. Note that a cycle among the fractional edges can arise only when some vertex is joined by fractional edges to more than one vertex in the opposite partition of the bipartite graph. We repeat either Algorithm 1 or Algorithm 2 until we get a 0-1 solution, determining which algorithm to use at each iteration by checking whether a cycle exists. We are certain that this will only require a finite number of iterations, because each iteration reduces the number of fractional edges, and that number is a non-negative integer bounded by |E|. An "iteration" is defined below by Algorithm 1 and Algorithm 2.

Algorithm 1. Suppose F ⊆ E is the current set of fractional edges. If F = ∅ we are done. Otherwise, we have a vertex a that is joined by a fractional edge to only one other vertex b, and the edge weight x_e corresponding to edge e = (a, b) is a non-integer value. In this case, we can assign x_e corresponding to e = (a, b) a weight equal to 1, and we assign all other edges in δ(b) a weight of 0. Now there are no fractional edges in δ(b) and we have reduced the size of F. If still F ≠ ∅, then this step can be repeated for the remaining edges with non-integer edge weights until F = ∅.

Algorithm 2. Suppose F ⊆ E is the current set of fractional edges. If F = ∅ we are done. If not, then we find a cycle C in the subgraph consisting of only the edges of F. We can partition the edge set of the cycle C into two matchings M1 and M2. We know that such a partition exists since the full graph (A, B, E) is a bipartite graph partitioned into vertex sets A and B. We define rounding values:
α = min{γ > 0 : (∃e ∈ M1 : x_e + γ = 1) or (∃e ∈ M2 : x_e − γ = 0)}
β = min{γ > 0 : (∃e ∈ M1 : x_e − γ = 0) or (∃e ∈ M2 : x_e + γ = 1)}

The edges of M1 ∪ M2 are fractional, so positive, real values of α and β must exist. The next step in the iteration performs rounding on the particular fractional edges of x*.

With probability β/(α + β), we adjust x_e to be x_e + α for all e ∈ M1, and we adjust x_e to be x_e − α for all e ∈ M2. Of course, we also have that with probability α/(α + β), we adjust x_e to be x_e − β for all e ∈ M1, and we adjust x_e to be x_e + β for all e ∈ M2. [1]

We use this dependent, probability-based rounding scheme because it keeps small the number of iterations needed to adjust all fractional edges into binary edges, so a maximum matching can be recovered efficiently. Because bipartite graphs cannot have odd-length cycles, we know that any cycle in G will be even. Thus, we are certain that C can be split into M1 and M2, distinct matchings in which each vertex of the cycle meets exactly one edge of each, and we can be certain that Algorithm 2 can be applied to any cycle C. We notice also that on an even-length cycle with 2k edges, each rounding step either increases the weight of the k edges of M1 by α and decreases the weight of the k edges of M2 by α, or decreases the weight of the edges of M1 by β and increases the weight of the edges of M2 by β. In both cases, the net adjustment to the total weight of the cycle is 0.
[1] Algorithm 2 comes from "Dependent Rounding in Bipartite Graphs" ("The Dependent Rounding Scheme") by Rajiv Gandhi, Department of Computer Science, University of Maryland, College Park.

Figure 2: (a) unweighted bipartite graph, (b) weighted adjacency matrix, (c) maximal matching output. The method of solving for bipartite graphs with fractional edges is the same as the method of solving bipartite graphs with weighted edges. Image credit: sciencedirect.com
Since the x_e can take fractional values, the LP optimum need not immediately correspond to a matching; however, the rounding steps above never decrease the total weight, so the resulting 0-1 solution defines a matching M with |M| ≥ ν_LP. Thereby, we conclude that in this case as well ν ≥ |M| ≥ ν_LP, which proves Lemma 2.
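To illustrate Algorithm 2, here is a minimal Python sketch (not the paper's code) of a single dependent-rounding step on an even cycle whose edges have been split into the alternating matchings M1 and M2. The fractional values and the cycle are illustrative; note that the total weight is unchanged by the step, in line with the argument above.

    # A minimal sketch of one dependent-rounding step on an even cycle.
    import random

    def round_cycle_step(x, M1, M2):
        """x maps edges to fractional values; M1, M2 are the alternating halves."""
        alpha = min([1 - x[e] for e in M1] + [x[e] for e in M2])
        beta  = min([x[e] for e in M1] + [1 - x[e] for e in M2])
        if random.random() < beta / (alpha + beta):
            for e in M1: x[e] += alpha
            for e in M2: x[e] -= alpha
        else:
            for e in M1: x[e] -= beta
            for e in M2: x[e] += beta
        return x

    # Example: a 4-cycle with fractional weights summing to 2.
    x = {("a1", "b1"): 0.5, ("b1", "a2"): 0.5, ("a2", "b2"): 0.5, ("b2", "a1"): 0.5}
    M1 = [("a1", "b1"), ("a2", "b2")]
    M2 = [("b1", "a2"), ("b2", "a1")]
    x = round_cycle_step(x, M1, M2)
    print(x, sum(x.values()))  # at least one edge is now 0 or 1; the total is still 2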

3.4.3 Proof of Lemma 3

Now, to complete the proof, we must verify Lemma 3, that τ ≤ τ_LP. Within the dual LP, we take an optimum solution (y*, z*). Because a minimum vertex cover should pick one vertex of an edge to cover, rather than two (where possible), we will choose a value α uniformly at random (continuously) in [0, 1] to help select which vertices to include. Let:

C_α = {a ∈ A : y*_a ≥ α} ∪ {b ∈ B : z*_b ≥ 1 − α}

C_α is a cover object; that is, C_α represents a particular vertex covering of the graph for the alpha value α. (For any edge (a, b) we have y*_a + z*_b ≥ 1, so if y*_a < α then z*_b > 1 − α and b ∈ C_α; hence C_α really is a cover.) We are interested in the minimum vertex cover, so we wish to reduce redundancies among the vertices which are chosen: if two vertices are connected by an edge, including one of those vertices in the covering is sufficient to cover that edge. To find the expected size of the vertex covering, we compute the expectation by summing probabilities:

E[|C_α|] = Σ_{a ∈ A} P(a ∈ C_α) + Σ_{b ∈ B} P(b ∈ C_α)

A vertex a will be included in the covering when y*_a ≥ α, and a vertex b will be included in the covering when z*_b ≥ 1 − α. Thus, we have two indicator variables, [α ≤ y*_a] and [α ≥ 1 − z*_b]. Each of these indicator variables equals 1 when its condition is true, and 0 when it is false. Since α is uniform on [0, 1], we can thus simplify the expression for E[|C_α|]:

E[|C_α|] = Σ_{a ∈ A} P(α ≤ y*_a) + Σ_{b ∈ B} P(α ≥ 1 − z*_b)
E[|C_α|] = E[ Σ_{a ∈ A} [α ≤ y*_a] ] + E[ Σ_{b ∈ B} [α ≥ 1 − z*_b] ]
E[|C_α|] = Σ_{a ∈ A} y*_a + Σ_{b ∈ B} z*_b
E[|C_α|] = τ_LP

The above expression is equivalent to the optimum value of the dual LP, so we assert that E[|C_α|] = τ_LP. We clearly see that, according to the above expansion, τ_LP is an average of the sizes of the possible covers C_α. In contrast, τ is the minimum possible value of |C| over all covers; in particular, some choice of α gives |C_α| ≤ E[|C_α|]. Therefore, τ ≤ τ_LP.
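The expectation computed above can also be checked empirically. The following Python sketch (not from the paper) draws α uniformly at random, builds C_α from a small, hand-picked feasible dual solution (the values are illustrative, not an optimum for any particular graph), verifies that C_α covers every edge, and observes that the average cover size approaches the dual objective Σ y_a + Σ z_b.

    # A minimal sketch of the threshold rounding used in the proof of Lemma 3.
    import random

    edges = [("a1", "b1"), ("a1", "b2"), ("a2", "b2")]
    y = {"a1": 0.6, "a2": 0.0}   # weights on the A side
    z = {"b1": 0.4, "b2": 1.0}   # weights on the B side (y_a + z_b >= 1 on every edge)

    def cover_for(alpha):
        return {a for a in y if y[a] >= alpha} | {b for b in z if z[b] >= 1 - alpha}

    sizes = []
    for _ in range(100_000):
        alpha = random.random()
        C = cover_for(alpha)
        assert all(a in C or b in C for a, b in edges)  # C_alpha really is a cover
        sizes.append(len(C))

    print(sum(sizes) / len(sizes))  # ~2.0 = sum(y.values()) + sum(z.values())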

3.4.4 Combination of Proof by Parts

Thus, we have shown the following:

ν ≤ ν_LP = τ_LP ≤ τ (Lemma 1)
ν ≥ ν_LP (Lemma 2)
τ ≤ τ_LP (Lemma 3)

Together these prove the chain of equalities ν = ν_LP = τ_LP = τ, proving König's Theorem: for every bipartite graph G, we have ν = τ.

4 Concluding Remarks

The concepts explored in this paper open the door to other problems. One related offshoot of König's Theorem is the Max-Flow Min-Cut Theorem. In a flow network in which every edge has capacity 1, the maximum flow from the source node to the sink node equals the size of a maximum matching, where we view the flow network as built from the bipartite graph. In addition, the minimum cut corresponds to a minimum vertex cover, so the minimum cut capacity has a value equal to the size of the minimum vertex cover. Thus, König's Theorem and the Max-Flow Min-Cut Theorem lead to the same conclusion here; they are two different ways of expressing the same underlying concept. Figure 3 demonstrates a simple transformation between a bipartite graph and a unit-capacity flow network.

Figure 3: An example showing a bipartite graph as a unit-capacity flow network. Image credit: uni-halle.de
This leads to an interesting relationship between flow networks and bipartite graphs, which I highly recommend that interested individuals explore further.
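For readers who want to experiment with this correspondence, the following Python sketch (not from the paper, and assuming the networkx library is available) builds a unit-capacity flow network from a hypothetical candidate-to-job edge list and confirms that the maximum flow value and minimum cut capacity agree with the matching number.

    # A minimal sketch, assuming networkx is installed: the bipartite instance is
    # turned into a unit-capacity flow network with a source s feeding every
    # candidate and every job feeding a sink t.  The edge list is illustrative.
    import networkx as nx

    qualified = [("Amanda", "CEO"), ("Darrell", "CFO"),
                 ("Cassandra", "Programmer"), ("Eric", "VP")]

    G = nx.DiGraph()
    for cand, job in qualified:
        G.add_edge("s", cand, capacity=1)
        G.add_edge(cand, job, capacity=1)
        G.add_edge(job, "t", capacity=1)

    flow_value, _ = nx.maximum_flow(G, "s", "t")
    cut_value, _ = nx.minimum_cut(G, "s", "t")
    print(flow_value, cut_value)  # both equal 4, the matching/cover number here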

