
Kruskal's algorithm

From Wikipedia, the free encyclopedia

Kruskal's algorithm is an algorithm in graph theory that finds a minimum spanning tree for a connected weighted graph. This means it finds a subset of the edges that forms a tree that includes every vertex, where the total weight of all the edges in the tree is minimized. If the graph is not connected, then it finds a minimum spanning forest (a minimum spanning tree for each connected component). Kruskal's algorithm is an example of a greedy algorithm.

This algorithm first appeared in Proceedings of the American Mathematical Society, pp. 48-50, in 1956, and was written by Joseph Kruskal. Other algorithms for this problem include Prim's algorithm, the reverse-delete algorithm, and Borůvka's algorithm.
Contents

1 Description
2 Performance
3 Pseudocode
4 Example
5 Proof of correctness
    5.1 Spanning tree
    5.2 Minimality
6 See also
7 References
8 External links

Description

create a forest F (a set of trees), where each vertex in the graph is a separate tree
create a set S containing all the edges in the graph
while S is nonempty and F is not yet spanning:
    remove an edge with minimum weight from S
    if that edge connects two different trees, then add it to the forest, combining two trees into a single tree
    otherwise discard that edge

At the termination of the algorithm, the forest forms a minimum spanning forest of the graph. If the graph is connected, the forest has a single component and forms a minimum spanning tree of the graph.
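The steps above can be sketched in Python (a minimal illustration, not from the original article; the toy graph and its weights below are made up for demonstration):

```python
def kruskal(vertices, edges):
    """Return a minimum spanning forest as a list of (u, v, weight) edges.

    vertices: iterable of hashable vertex names
    edges: list of (u, v, weight) tuples
    """
    # Each vertex starts as its own tree; parent pointers implement
    # a simple disjoint-set forest.
    parent = {v: v for v in vertices}

    def find(v):
        # Walk up to the root of v's tree, compressing the path as we go.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    forest = []
    # Examine edges in order of increasing weight.
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:                 # edge connects two different trees
            parent[ru] = rv          # merge them into one tree
            forest.append((u, v, w))
        # otherwise discard the edge: it would create a cycle
    return forest

# Toy graph (weights chosen arbitrarily for the demo).
edges = [("a", "b", 3), ("b", "c", 1), ("a", "c", 2), ("c", "d", 5)]
mst = kruskal("abcd", edges)
print(mst)                          # three edges spanning four vertices
print(sum(w for _, _, w in mst))    # total weight
```

For a connected input like this one, the loop terminates with exactly |V| - 1 accepted edges.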

Performance
Where E is the number of edges in the graph and V is the number of vertices, Kruskal's algorithm can be shown to run in O(E log E) time, or equivalently, O(E log V) time, all with simple data structures. These running times are equivalent because:

E is at most V² and log V² = 2 log V is O(log V).

If we ignore isolated vertices, which will each be their own component of the minimum spanning forest, V ≤ E + 1, so log V is O(log E).

We can achieve this bound as follows: first sort the edges by weight using a comparison sort in O(E log E) time; this allows the step "remove an edge with minimum weight from S" to operate in constant time. Next, we use a disjoint-set data structure (union-find) to keep track of which vertices are in which components. We need to perform O(E) operations: two 'find' operations and possibly one union for each edge. Even a simple disjoint-set data structure such as a disjoint-set forest with union by rank can perform O(E) operations in O(E log V) time. Thus the total time is O(E log E) = O(E log V). Provided that the edges are either already sorted or can be sorted in linear time (for example with counting sort or radix sort), the algorithm can use a more sophisticated disjoint-set data structure to run in O(E α(V)) time, where α is the extremely slowly growing inverse of the single-valued Ackermann function.

Pseudocode

function Kruskal(G = <N, A>: graph; length: A → R+): set of edges
    Define an elementary cluster C(v) ← {v} for each vertex v
    Initialize a priority queue Q to contain all edges in G, using the weights as keys
    Define a forest T ← Ø        // T will ultimately contain the edges of the MST
    // n is the total number of vertices
    while T has fewer than n - 1 edges do
        // edge (u,v) is the minimum-weight edge remaining in Q
        (u,v) ← Q.removeMin()
        // prevent cycles in T: add (u,v) only if T does not already
        // contain a path between u and v
        Let C(v) be the cluster containing v, and let C(u) be the cluster containing u
        if C(v) ≠ C(u) then
            Add edge (v,u) to T
            Merge C(v) and C(u) into one cluster, that is, union C(v) and C(u)
    return tree T

Example


This is our original graph. The numbers near the arcs indicate their weight. None of the arcs are highlighted.

AD and CE are the shortest arcs, with length 5, and AD has been arbitrarily chosen, so it is highlighted.

CE is now the shortest arc that does not form a cycle, with length 5, so it is highlighted as the second arc.

The next arc, DF with length 6, is highlighted using much the same method.

The next-shortest arcs are AB and BE, both with length 7. AB is chosen arbitrarily, and is highlighted. The arc BD has been highlighted in red, because there already exists a path (in green) between B and D, so it would form a cycle (ABD) if it were chosen.

The process continues to highlight the next-smallest arc, BE with length 7. Many more arcs are highlighted in red at this stage: BC because it would form the loop BCE, DE because it would form the loop DEBA, and FE because it would form the loop FEBAD.

Finally, the process finishes with the arc EG of length 9, and the minimum spanning tree is found.
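The walkthrough above can be replayed in Python. The text only states the weights of the accepted arcs (AD 5, CE 5, DF 6, AB 7, BE 7, EG 9); the weights of the rejected arcs below (BC, BD, DE, EF, FG) are assumptions taken from the commonly reproduced version of this figure and may differ from the original image:

```python
# Edge weights for vertices A-G. Edges marked "assumed" are not stated
# in the surrounding text; their weights are assumptions.
edges = [
    ("A", "B", 7), ("A", "D", 5),
    ("B", "C", 8),                  # assumed
    ("B", "D", 9),                  # assumed
    ("B", "E", 7), ("C", "E", 5),
    ("D", "E", 15),                 # assumed
    ("D", "F", 6),
    ("E", "F", 8),                  # assumed
    ("E", "G", 9),
    ("F", "G", 11),                 # assumed
]

parent = {v: v for v in "ABCDEFG"}

def find(v):
    # Root of v's tree in the disjoint-set forest, with path compression.
    while parent[v] != v:
        parent[v] = parent[parent[v]]
        v = parent[v]
    return v

mst = []
for u, v, w in sorted(edges, key=lambda e: e[2]):
    if find(u) != find(v):          # accept: connects two trees
        parent[find(u)] = find(v)
        mst.append((u, v))

print(mst)   # accepted arcs, in the same order as the walkthrough
```

With these weights the accepted arcs come out as AD, CE, DF, AB, BE, EG, matching the order in the description above.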

Proof of correctness

The proof consists of two parts. First, it is proved that the algorithm produces a spanning tree. Second, it is proved that the constructed spanning tree is of minimal weight.

Spanning tree

Let P be a connected, weighted graph and let Y be the subgraph of P produced by the algorithm. Y cannot have a cycle, since the last edge added to that cycle would have been within one subtree and not between two different trees. Y cannot be disconnected, since the first encountered edge that joins two components of Y would have been added by the algorithm. Thus, Y is a spanning tree of P.

Minimality
We show that the following proposition P is true by induction: If F is the set of edges chosen at any stage of the algorithm, then there is some minimum spanning tree that contains F.

Clearly P is true at the beginning, when F is empty: any minimum spanning tree will do, and there exists one because a weighted connected graph always has a minimum spanning tree.

Now assume P is true for some non-final edge set F and let T be a minimum spanning tree that contains F. If the next chosen edge e is also in T, then P is true for F + e. Otherwise, T + e has a cycle C, and there is another edge f that is in C but not in F. (If there were no such edge f, then e could not have been added to F, since doing so would have created the cycle C.) Then T - f + e is a tree, and it has the same weight as T, since T has minimum weight and the weight of f cannot be less than the weight of e; otherwise the algorithm would have chosen f instead of e. So T - f + e is a minimum spanning tree containing F + e, and again P holds.

Therefore, by the principle of induction, P holds when F has become a spanning tree, which is only possible if F is a minimum spanning tree itself.

Kruskal's Algorithm

This minimum spanning tree algorithm was first described by Kruskal in 1956, in the same paper where he rediscovered Jarník's algorithm. The algorithm was also rediscovered in 1957 by Loberman and Weinberger, but somehow avoided being renamed after them. The basic idea of Kruskal's algorithm is as follows: scan all edges in increasing weight order; if an edge is safe, keep it (i.e., add it to the set A).

Overall Strategy
Kruskal's Algorithm, as described in CLRS, is directly based on the generic MST algorithm. It builds the MST as a forest. Initially, each vertex is in its own tree in the forest. The algorithm then considers each edge in turn, ordered by increasing weight. If an edge (u, v) connects two different trees, then (u, v) is added to the set of edges of the MST, and the two trees connected by the edge (u, v) are merged into a single tree. On the other hand, if an edge (u, v) connects two vertices in the same tree, then the edge (u, v) is discarded. A little more formally, we are given a connected, undirected, weighted graph with a weight function w : E → R. The algorithm:

Starts with each vertex being its own component. Repeatedly merges two components into one by choosing the light edge that connects them (i.e., the light edge crossing the cut between them).

Scans the set of edges in monotonically increasing order by weight. Uses a disjoint-set data structure to determine whether an edge connects vertices in different components.

Data Structure
Before formalizing the above idea, let's quickly review the disjoint-set data structure from Chapter 21.

MAKE-SET(v): Create a new set whose only member is pointed to by v. Note that for this operation v must not already be in another set.
FIND-SET(v): Returns a pointer to the representative of the set containing v.
UNION(u, v): Unites the dynamic sets that contain u and v into a new set that is the union of these two sets.
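These three operations might be implemented in Python as follows (an illustrative sketch using union by rank and path compression; the method names mirror the CLRS operations above):

```python
class DisjointSet:
    """Disjoint-set forest with union by rank and path compression."""

    def __init__(self):
        self.parent = {}
        self.rank = {}

    def make_set(self, v):
        # v must not already belong to a set.
        self.parent[v] = v
        self.rank[v] = 0

    def find_set(self, v):
        # Path compression: point v (and its ancestors) directly at the root.
        if self.parent[v] != v:
            self.parent[v] = self.find_set(self.parent[v])
        return self.parent[v]

    def union(self, u, v):
        # Union by rank: attach the shorter tree under the taller one.
        ru, rv = self.find_set(u), self.find_set(v)
        if ru == rv:
            return
        if self.rank[ru] < self.rank[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru
        if self.rank[ru] == self.rank[rv]:
            self.rank[ru] += 1

ds = DisjointSet()
for v in "abc":
    ds.make_set(v)
ds.union("a", "b")
print(ds.find_set("a") == ds.find_set("b"))   # True: same component
print(ds.find_set("a") == ds.find_set("c"))   # False: different components
```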

Algorithm
Start with an empty set A, and select at every stage the shortest edge that has not been chosen or rejected, regardless of where this edge is situated in the graph.

KRUSKAL(V, E, w)
    A ← Ø        // set A will ultimately contain the edges of the MST
    for each vertex v in V do
        MAKE-SET(v)
    sort E into nondecreasing order by weight w
    for each (u, v) taken from the sorted list do
        if FIND-SET(u) ≠ FIND-SET(v) then
            A ← A ∪ {(u, v)}
            UNION(u, v)
    return A

Illustrative Examples
Let's run through the following graph quickly to see how Kruskal's algorithm works on it:

We get the shaded edges shown in the above figure.

Edge (c, f): safe
Edge (g, i): safe
Edge (e, f): safe
Edge (c, e): reject
Edge (d, h): safe
Edge (f, h): safe
Edge (e, d): reject
Edge (b, d): safe
Edge (d, g): safe
Edge (b, c): reject
Edge (g, h): reject
Edge (a, b): safe

At this point, we have only one component, so all other edges will be rejected. [We could add a test to the main loop of KRUSKAL to stop once |V| - 1 edges have been added to A.]
Note carefully: Suppose we had examined (c, e) before (e, f). Then we would have found (c, e) safe and would have rejected (e, f).

Example (CLRS): Step-by-Step Operation of Kruskal's Algorithm

Step 1. In the graph, edge (g, h) is the shortest. Either vertex g or vertex h could be the representative. Let's choose vertex g arbitrarily.

Step 2. The edge (c, i) creates the second tree. Choose vertex c as the representative for the second tree.

Step 3. Edge (g, f) is the next shortest edge. Add this edge and choose vertex g as representative.

Step 4. Edge (a, b) creates a third tree.

Step 5. Add edge (c, f) and merge two trees. Vertex c is chosen as the representative.

Step 6. Edge (g, i) is the next cheapest, but adding this edge would create a cycle: vertex c is the representative of both endpoints' trees.

Step 7. Instead, add edge (c, d).

Step 8. Similarly, adding edge (h, i) would create a cycle.

Step 9. Instead of adding edge (h, i), add edge (a, h).

Step 10. Again, if we add edge (b, c), it would create a cycle. Add edge (d, e) instead to complete the spanning tree. In this spanning tree, all trees are joined and vertex c is the sole representative.

Analysis

Initialize the set A:   O(1)
First for loop:         |V| MAKE-SETs
Sort E:                 O(E lg E)
Second for loop:        O(E) FIND-SETs and UNIONs

Assuming the implementation of the disjoint-set data structure already seen in Chapter 21, which uses union by rank and path compression, this takes O((V + E) α(V)) + O(E lg E) time. Since G is connected, |E| ≥ |V| - 1, so this is O(E α(V)) + O(E lg E). Also, α(|V|) = O(lg V) = O(lg E), so the total time is O(E lg E). Since |E| ≤ |V|², lg |E| = O(lg V²) = O(lg V). Therefore, the running time is O(E lg V). (If the edges are already sorted, the running time is O(E α(V)), which is almost linear.)

II. Kruskal's Algorithm Implemented with a Priority Queue Data Structure

MST_KRUSKAL(G)
    for each vertex v in V[G] do
        define set S(v) ← {v}
    Initialize priority queue Q that contains all edges of G, using the weights as keys
    A ← Ø        // A will ultimately contain the edges of the MST
    while A has fewer than n - 1 edges do
        (u, v) ← Q.removeMin()
        Let S(v) be the set containing v, and S(u) the set containing u
        if S(v) ≠ S(u) then
            Add edge (u, v) to A
            Merge S(v) and S(u) into one set, i.e., union them
    return A
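The priority-queue formulation can be rendered in Python with the standard-library heapq module (a sketch; the toy graph at the bottom is made up for demonstration):

```python
import heapq

def mst_kruskal(vertices, edges):
    """Kruskal's algorithm driven by a binary heap instead of a pre-sort.

    vertices: iterable of vertex names
    edges: list of (u, v, weight) tuples
    """
    vertices = list(vertices)
    # Q holds (weight, u, v) tuples so the heap orders edges by weight.
    q = [(w, u, v) for u, v, w in edges]
    heapq.heapify(q)                      # O(E) heap construction

    parent = {v: v for v in vertices}     # one singleton set per vertex

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    a = []
    while q and len(a) < len(vertices) - 1:
        w, u, v = heapq.heappop(q)        # removeMin: O(log E)
        if find(u) != find(v):            # endpoints in different sets?
            parent[find(u)] = find(v)     # merge the two sets
            a.append((u, v, w))
    return a

edges = [("a", "b", 4), ("b", "c", 2), ("a", "c", 3), ("c", "d", 1)]
tree = mst_kruskal("abcd", edges)
print(sorted(tree))
```

Note that the loop stops as soon as n - 1 edges have been accepted, so edges still in the heap at that point are never examined.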

Analysis
The edge weights can be compared in constant time. Initialization of the priority queue takes O(E lg E) time by repeated insertion. At each iteration of the while loop, the minimum edge can be removed in O(log E) time, which is O(log V) since the graph is simple. The total running time is O((V + E) log V), which is O(E lg V) since the graph is simple and connected.

Kruskal Algorithm
The input is a connected weighted graph G with n vertices.

Step 1. Arrange the edges of G in order of increasing weights.
Step 2. Starting only with the vertices of G and proceeding sequentially, add each edge which does not result in a cycle until n - 1 edges are added.
Step 3. Exit.

The weight of a minimal spanning tree is unique, but the minimal spanning tree itself is not. Different minimal spanning trees can occur when two or more edges have the same weight. In such a case, the arrangement of the edges in Step 1 of Algorithms 1.8A or 1.8B is not unique and hence may result in different minimal spanning trees, as illustrated in the following example.

Example 1.1. Find a minimal spanning tree of the weighted graph Q in Figure (a). Note that Q has six vertices, so a minimal spanning tree will have five edges.

(a) Here we apply Algorithm 1.8A. First we order the edges by decreasing weights, and then we successively delete edges, without disconnecting Q, until five edges remain. This yields the following data:

Edges   Weight   Delete?
BC      8        Yes
AF      7        Yes
AC      7        Yes
BE      7        No
CE      6        No
BF      5        Yes
AE      4
DF      4
BD      3

Thus the minimal spanning tree of Q which is obtained contains the edges BE, CE, AE, DF, BD. The spanning tree has weight 24, and it is shown in Figure (b).

(b) Here we apply Kruskal's Algorithm. First we order the edges by increasing weights, and then we successively add edges without forming any cycles until five edges are included. This yields the following data:

Edges   Weight   Add?
BD      3        Yes
AE      4        Yes
DF      4        Yes
BF      5        No
CE      6        Yes
AC      7        No
AF      7        Yes
BE      7
EC      8

Thus the minimal spanning tree of Q which is obtained contains the edges BD, AE, DF, CE, AF.
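Part (b) can be checked quickly in Python (an illustrative sketch; note that tie-breaking among equal-weight edges follows the order in which the edges are listed, so a different listing order could yield a different, equally minimal tree):

```python
# Edges of graph Q with their weights, as given in the example.
edges = [("B", "C", 8), ("A", "F", 7), ("A", "C", 7), ("B", "E", 7),
         ("C", "E", 6), ("B", "F", 5), ("A", "E", 4), ("D", "F", 4),
         ("B", "D", 3)]

parent = {v: v for v in "ABCDEF"}

def find(v):
    # Root of v's component, with path compression.
    while parent[v] != v:
        parent[v] = parent[parent[v]]
        v = parent[v]
    return v

tree = []
for u, v, w in sorted(edges, key=lambda e: e[2]):   # increasing weight
    if find(u) != find(v):                          # no cycle: add the edge
        parent[find(u)] = find(v)
        tree.append((u, v, w))

print([(u, v) for u, v, _ in tree])   # edges of a minimal spanning tree
print(sum(w for _, _, w in tree))     # total weight: 24
```

With this listing order the accepted edges are BD, AE, DF, CE, AF with total weight 24, matching the table above.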

The spanning tree appears in Figure (c). Observe that this spanning tree is not the same as the one obtained using Algorithm 1.8A.

Remark: The above algorithms are easily executed when the graph G is relatively small, as in Figure (a). Suppose G has dozens of vertices and hundreds of edges which, say, are given by a list of vertices. Then even deciding whether G is connected is not obvious; it may require some type of depth-first search (DFS) or breadth-first search (BFS) graph algorithm. Later sections will discuss ways of representing graphs G in memory and will discuss various graph algorithms.
