
HARVINDER SINGH (MCA 4th Semester) Roll Number - 511025273

http://www.scribd.com/Harvinder_chauhan

ManipalU

Book ID: B0891




July 2011
Master of Computer Application (MCA) Semester 4
MC0080 Analysis and Design of Algorithms
4 Credits (Book ID: B0891)
Assignment Set 1

1. Describe the following: Fibonacci Heaps

Ans: A Fibonacci heap is a collection of trees satisfying the minimum-heap property: the key of a child is always greater than or equal to the key of its parent. This implies that the minimum key is always at the root of one of the trees. Compared with binomial heaps, the structure of a Fibonacci heap is more flexible. The trees do not have a prescribed shape, and in the extreme case the heap can have every element in a separate tree. This flexibility allows some operations to be executed in a "lazy" manner, postponing the work for later operations. For example, merging heaps is done simply by concatenating the two lists of trees, and the decrease-key operation sometimes cuts a node from its parent and forms a new tree.

However, at some point some order needs to be introduced to the heap to achieve the desired running time. In particular, degrees of nodes (here degree means the number of children) are kept quite low: every node has degree at most O(log n), and the size of a subtree rooted in a node of degree k is at least F(k+2), where F(k) is the k-th Fibonacci number. This is achieved by the rule that we can cut at most one child of each non-root node. When a second child is cut, the node itself needs to be cut from its parent and becomes the root of a new tree (see the proof of the degree bounds). The number of trees is decreased in the delete-minimum operation, where trees are linked together.

As a result of the relaxed structure, some operations can take a long time while others are done very quickly. In the amortized running-time analysis we pretend that very fast operations take a little bit longer than they actually do. This additional time is then later subtracted from the actual running time of slow operations. The amount of time saved for later use is measured at any given moment by a potential function. The potential of a Fibonacci heap is given by

Potential = t + 2m

where t is the number of trees in the Fibonacci heap and m is the number of marked nodes. A node is marked if at least one of its children was cut since this node was made a child of another node (all roots are unmarked). Thus, the root of each tree in a heap has one unit of time stored. This unit of time can be used later to link this tree with another tree at amortized time 0. Also, each marked node has two units of time stored. One can be used to cut the node from its parent. If this happens, the node becomes a root and the second unit of time will remain stored in it, as in any other root.
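As an illustration of the lazy merge and the potential function above, the following is a minimal Python sketch (not a full Fibonacci heap: decrease-key and delete-minimum are omitted, and the class and method names are hypothetical):

```python
# Sketch only: illustrates the lazy merge (root-list concatenation)
# and the potential function Phi = t + 2m described above.
class Node:
    def __init__(self, key):
        self.key = key
        self.children = []   # child nodes
        self.marked = False  # True once this node has lost a child

class FibHeapSketch:
    def __init__(self):
        self.roots = []      # list of tree roots

    def insert(self, key):
        # A one-node tree simply joins the root list.
        self.roots.append(Node(key))

    def merge(self, other):
        # "Lazy" merge: concatenate the two root lists in O(1).
        self.roots.extend(other.roots)
        other.roots = []

    def find_min(self):
        # The minimum key is always at the root of one of the trees.
        return min(r.key for r in self.roots)

    def potential(self):
        # Phi = t + 2m: t trees in the heap, m marked nodes.
        def marked_count(node):
            return int(node.marked) + sum(marked_count(c) for c in node.children)
        t = len(self.roots)
        m = sum(marked_count(r) for r in self.roots)
        return t + 2 * m

h1, h2 = FibHeapSketch(), FibHeapSketch()
for k in (7, 3, 9):
    h1.insert(k)
for k in (5, 1):
    h2.insert(k)
h1.merge(h2)
print(h1.find_min())   # 1
print(h1.potential())  # 5 (five one-node trees, no marked nodes)
```

After the merge the heap holds five separate one-node trees, so its potential is t + 2m = 5 + 0; real operations such as delete-minimum would later link these trees and spend the stored units.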


Binomial Heaps

Ans: A binomial heap H is a set of binomial trees that satisfies the following binomial-heap properties.

1. Each binomial tree in H obeys the min-heap property: the key of a node is greater than or equal to the key of its parent. We say that each such tree is min-heap-ordered.
2. For any nonnegative integer k, there is at most one binomial tree in H whose root has degree k.

The first property tells us that the root of a min-heap-ordered tree contains the smallest key in the tree. The second property implies that an n-node binomial heap H consists of at most ⌊lg n⌋ + 1 binomial trees. To see why, observe that the binary representation of n has ⌊lg n⌋ + 1 bits, say ⟨b(⌊lg n⌋), b(⌊lg n⌋ − 1), ..., b(0)⟩, so that n = Σ b(i)·2^i. By property 1 of 4.4.2, therefore, binomial tree B(i) appears in H if and only if bit b(i) = 1. Thus, binomial heap H contains at most ⌊lg n⌋ + 1 binomial trees.

2. If f(n) = n³/2 and g(n) = 37n² + 120n + 17, then show that g = O(f) and f ≠ O(g).

Ans:

(i) For g = O(f), we need g(n) ≤ C·f(n) for all n ≥ k. With C = 53 and k = 3, the inequality 37n² + 120n + 17 ≤ 53·(n³/2) holds for all n ≥ 3. Therefore g = O(f).

(ii) Suppose, to the contrary, that f = O(g). Then there exist constants C and k such that n³/2 ≤ C·(37n² + 120n + 17) for all n ≥ k. Since 37n² + 120n + 17 ≤ 174n² for all n ≥ 1, this would give n³/2 ≤ 174·C·n², i.e. n ≤ 348·C, for all n ≥ k. But this is false for n = max{348C + 1, k}. This contradiction shows that f ≠ O(g).
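The two claims can be spot-checked numerically; here is a small Python sketch (the constant C = 53 with k = 3 for part (i), and the bound 174n² with C = 100 as a sample constant for part (ii), follow the argument above):

```python
# Numeric spot-check of the asymptotic claims.
f = lambda n: n**3 / 2
g = lambda n: 37 * n**2 + 120 * n + 17

# (i) g(n) <= 53 * f(n) for all n >= 3 (checked over a range).
assert all(g(n) <= 53 * f(n) for n in range(3, 10001))

# (ii) For any fixed C, f(n) <= C * g(n) would force n <= 348*C,
# so the inequality fails once n is large enough. Sample C = 100:
C = 100
n = 348 * C + 1
assert f(n) > C * g(n)
print("both checks pass")
```

A finite check cannot prove an asymptotic statement, of course; it only confirms the constants used in the proof are consistent.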


3. Explain the concept of bubble sort and also write the algorithm for bubble sort.

Ans: Bubble sort is a simple and well-known sorting algorithm. It is rarely used in practice; its main role is as an introduction to sorting algorithms. Bubble sort belongs to the O(n²) sorting algorithms, which makes it quite inefficient for sorting large data volumes. Bubble sort is stable and adaptive.

Algorithm
1. Compare each pair of adjacent elements from the beginning of the array and, if they are in reversed order, swap them.
2. If at least one swap has been done, repeat step 1.

You can imagine that on every step big bubbles float to the surface and stay there; at the step when no bubble moves, sorting stops. In other words, bubble sort works by repeatedly stepping through the list to be sorted, comparing each pair of adjacent items and swapping them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted.

Step-by-step example: let us take the array of numbers "5 1 4 2 8" and sort it from the lowest to the greatest number using the bubble sort algorithm. In each step, the pair of elements being compared is the one that changes (or stays) between the two arrays shown.

First Pass:
(5 1 4 2 8) → (1 5 4 2 8), the algorithm compares the first two elements and swaps them since 5 > 1.
(1 5 4 2 8) → (1 4 5 2 8), swap since 5 > 4.
(1 4 5 2 8) → (1 4 2 5 8), swap since 5 > 2.
(1 4 2 5 8) → (1 4 2 5 8), these elements are already in order (8 > 5), so the algorithm does not swap them.

Second Pass:
(1 4 2 5 8) → (1 4 2 5 8)
(1 4 2 5 8) → (1 2 4 5 8), swap since 4 > 2.
(1 2 4 5 8) → (1 2 4 5 8)
(1 2 4 5 8) → (1 2 4 5 8)


Now the array is already sorted, but the algorithm does not know whether it is complete. The algorithm needs one whole pass without any swap to know the array is sorted.

Third Pass:
(1 2 4 5 8) → (1 2 4 5 8)
(1 2 4 5 8) → (1 2 4 5 8)
(1 2 4 5 8) → (1 2 4 5 8)
(1 2 4 5 8) → (1 2 4 5 8)

Finally, the array is sorted, and the algorithm can terminate.

4. Prove that if n ≥ 1, then for any n-key B-tree T of height h and minimum degree t ≥ 2, h ≤ log_t((n + 1)/2).
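The bubble sort procedure described above can be sketched in a few lines of Python (the function name is illustrative), using the early-exit rule that sorting stops after a pass with no swaps:

```python
def bubble_sort(a):
    """Repeatedly swap adjacent out-of-order pairs; stop after a clean pass."""
    a = list(a)                      # work on a copy
    n = len(a)
    swapped = True
    while swapped:
        swapped = False
        for i in range(n - 1):
            if a[i] > a[i + 1]:      # adjacent pair in reversed order
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        n -= 1                       # the largest element has bubbled to the end
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

Shrinking n after each pass is a small optimization: each pass is guaranteed to place the largest remaining element at the end, so later passes need not revisit it.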

Ans: In computer science, a B-tree is a tree data structure that keeps data sorted and allows searches, sequential access, insertions, and deletions in logarithmic time. The B-tree is a generalization of a binary search tree in that more than two paths may diverge from a single node. Unlike self-balancing binary search trees, the B-tree is optimized for systems that read and write large blocks of data. It is commonly used in databases and file systems.

[Figure: a B-tree with keys 16, 12, 18, 21. A B-tree of order 2 (Bayer & McCreight 1972) or order 5 (Knuth 1997).]

Proof: If a B-tree has height h, the root contains at least one key and every other node contains at least t − 1 keys. Thus, there are at least 2 nodes at depth 1, at least 2t nodes at depth 2, at least 2t² nodes at depth 3, and so on, until at depth h there are at least 2t^(h−1) nodes. Therefore

n ≥ 1 + (t − 1) · Σ_{i=1}^{h} 2t^(i−1) = 1 + 2(t − 1) · (t^h − 1)/(t − 1) = 2t^h − 1.

By simple algebra, t^h ≤ (n + 1)/2. Taking base-t logarithms of both sides gives h ≤ log_t((n + 1)/2), which proves the theorem.
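The bound can be sanity-checked numerically: a minimally filled B-tree of height h and minimum degree t has exactly n = 2t^h − 1 keys, for which the bound holds with equality. A small Python check:

```python
import math

# For the minimum-size B-tree of height h and minimum degree t,
# n = 2*t**h - 1 keys, and h <= log_t((n + 1)/2) holds with equality.
for t in (2, 3, 5, 10):
    for h in (1, 2, 3, 4):
        n = 2 * t**h - 1
        assert t**h == (n + 1) // 2                    # t^h = (n+1)/2 exactly
        assert h <= math.log((n + 1) / 2, t) + 1e-9    # the theorem's bound
print("bound verified for sample (t, h) pairs")
```

Any B-tree with the same t and h has at least this many keys, so the logarithm on the right can only grow, and the inequality is preserved.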


5. Explain briefly the concept of breadth-first search (BFS).

Ans: Breadth-first search, as the name suggests, first discovers all vertices adjacent to a given vertex before moving to vertices farther away in the search graph. If G(V, E) is a graph having vertex set V and edge set E, and s is a particular source vertex, breadth-first search finds or discovers every vertex that is reachable from s. First it discovers every vertex adjacent to s, then systematically, for each of those vertices, it finds all the vertices adjacent to them, and so on. In doing so, it computes the distance and the shortest path, in terms of the fewest number of edges, from the source vertex s to each reachable vertex. Breadth-first search also produces a breadth-first tree, rooted at s, in the process of searching or traversing the graph.

To record the status of each vertex (whether it is still unknown, whether it has been discovered or found, and whether all of its adjacent vertices have also been discovered), the vertices are termed unknown, discovered, and visited, respectively. So if (u, v) ∈ E and u is visited, then v will be either discovered or visited, i.e., either v has just been discovered or the vertices adjacent to v have also been found or visited. As breadth-first search forms a breadth-first tree, if for the edge (u, v) vertex v is discovered in the adjacency list of an already discovered vertex u, then we say that u is the parent or predecessor vertex of v. Each vertex is discovered only once.

The data structure we use in this algorithm is a queue to hold vertices, and we assume that the graph is represented using the adjacency-list representation. front[Q] is used to represent the element at the front of the queue Q. The procedure empty() returns true if the queue is empty and false otherwise. The procedures enqueue() and dequeue() are used to insert and delete an element from the queue, respectively. The array status[ ] is used to store the status of each vertex as unknown, discovered, or visited.
Algorithm of Breadth-First Search
1. for each vertex u ∈ V − {s}
2.   status[u] = unknown
3. status[s] = discovered
4. enqueue(Q, s)
5. while (empty(Q) != true)
6.   u = front[Q]
7.   for each vertex v adjacent to u
8.     if status[v] = unknown
9.       status[v] = discovered
10.      parent[v] = u
11.      enqueue(Q, v)
12.  end for


13.  dequeue(Q)
14.  status[u] = visited
15.  print "u is visited"
16. end while

The algorithm works as follows. Lines 1-2 initialize each vertex to unknown. Because we have to start searching from vertex s, line 3 gives the status discovered to vertex s. Line 4 inserts the initial vertex s into the queue. The while loop contains the statements from line 5 to the end of the algorithm, and it runs as long as there remain discovered vertices in the queue; note that the queue only ever contains discovered vertices. Line 6 takes the element u at the front of the queue, and in lines 7 to 11 the adjacency list of u is traversed: each unknown vertex v in the adjacency list of u has its status marked as discovered and its parent set to u, and is then inserted into the queue. In line 13, vertex u is removed from the queue. In lines 14-15, when there are no more unprocessed elements in the adjacency list of u, the status of u is changed to visited and u is printed as visited.

The algorithm given above can be improved by storing the distance of each vertex u from the source vertex s in an array distance[ ], and by permanently recording the predecessor or parent of each discovered vertex in the array parent[ ]. In fact, the distance of each reachable vertex from the source vertex as calculated by BFS is the shortest distance in terms of the number of edges traversed. So next we present the modified algorithm for breadth-first search.

Modified Algorithm

Program BFS (G, s)
1. for each vertex u ∈ V − {s}
2.   status[u] = unknown
3.   parent[u] = NULL
4.   distance[u] = infinity
5. status[s] = discovered
6. distance[s] = 0
7. parent[s] = NULL
8. enqueue(Q, s)
9. while (empty(Q) != true)
10.  u = front[Q]
11.  for each vertex v adjacent to u
12.    if status[v] = unknown
13.      status[v] = discovered
14.      parent[v] = u
15.      distance[v] = distance[u] + 1


16.      enqueue(Q, v)
17.  dequeue(Q)
18.  status[u] = visited
19.  print "u is visited"

In the above algorithm, the newly inserted line 3 initializes the parent of each vertex to NULL, line 4 initializes the distance of each vertex from the source vertex to infinity, line 6 initializes the distance of the source vertex s to 0, line 7 initializes the parent of s to NULL, line 14 records the parent of v as u, and line 15 calculates the shortest distance of v from the source vertex s as the distance of u plus 1.

6. Explain Kruskal's Algorithm
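The modified BFS algorithm above can be sketched in Python as follows (the graph at the end is a hypothetical example; the names status, parent, and distance follow the text):

```python
from collections import deque

def bfs(adj, s):
    """Breadth-first search from s over an adjacency-list graph.
    Returns (parent, distance); distance is the fewest-edges distance."""
    status = {u: "unknown" for u in adj}
    parent = {u: None for u in adj}
    distance = {u: float("inf") for u in adj}
    status[s] = "discovered"
    distance[s] = 0
    Q = deque([s])
    while Q:                           # while the queue is not empty
        u = Q[0]                       # front of the queue
        for v in adj[u]:
            if status[v] == "unknown":
                status[v] = "discovered"
                parent[v] = u
                distance[v] = distance[u] + 1
                Q.append(v)            # enqueue the newly discovered vertex
        Q.popleft()                    # dequeue u
        status[u] = "visited"
    return parent, distance

graph = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
parent, distance = bfs(graph, 1)
print(distance[5])  # 3  (one shortest path is 1 -> 2 -> 4 -> 5)
```

Each vertex is enqueued at most once (only while unknown), so the running time is O(V + E), matching the analysis implied by the algorithm above.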

Ans: Let G = (V, E) be the given graph, with |V| = n.

{
  Start with a graph T = (V, ∅) consisting of only the vertices of G and no edges;
  /* This can be viewed as n connected components, each vertex being one connected component */
  Arrange E in the order of increasing costs;
  for (i = 1; i ≤ n − 1; i++)
  {
    Select the next smallest cost edge;
    if (the edge connects two different connected components)
      add the edge to T;
  }
}

At the end of the algorithm, we will be left with a single component that comprises all the vertices, and this component will be an MST for G.

Proof of Correctness of Kruskal's Algorithm

Theorem: Kruskal's algorithm finds a minimum spanning tree.

Proof: Let G = (V, E) be a weighted, connected graph. Let T be the edge set that is grown in Kruskal's algorithm. The proof is by mathematical induction on the number of edges in T. We show that if T is promising at any stage of the algorithm, then it is still promising when a new edge is added to it in Kruskal's algorithm.


When the algorithm terminates, T gives a solution to the problem and hence an MST.

Basis: T = ∅ is promising, since a weighted connected graph always has at least one MST.

Induction Step: Let T be promising just before adding a new edge e = (u, v). The edges of T divide the nodes of G into one or more connected components, and u and v will be in two different components. Let U be the set of nodes in the component that includes u. Note that:
1. U is a strict subset of V;
2. T is a promising set of edges such that no edge in T leaves U (since an edge of T either has both ends in U or has neither end in U);
3. e is a least-cost edge that leaves U (since Kruskal's algorithm, being greedy, would have chosen e only after examining edges shorter than e).

The above three conditions are precisely those of the MST Lemma, and hence we can conclude that T ∪ {e} is also promising. When the algorithm stops, T gives not merely a spanning tree but a minimal spanning tree, since it is promising.

Program

void kruskal (vertex-set V; edge-set E; edge-set T)
  int ncomp;            /* current number of components */
  priority-queue edges; /* partially ordered tree */
  mfset components;     /* merge-find set data structure */
  vertex u, v;
  edge e;
  int nextcomp;         /* name for new component */
  int ucomp, vcomp;     /* component names */
{
  makenull(T);
  makenull(edges);
  nextcomp = 0;
  ncomp = n;
  for (v ∈ V)           /* initialize each component to have one vertex of V */
  {
    nextcomp++;
    initial(nextcomp, v, components);
  }
  for (e ∈ E)
    insert(e, edges);   /* initialize priority queue of edges */
  while (ncomp > 1)


  {
    e = deletemin(edges);
    let e = (u, v);
    ucomp = find(u, components);
    vcomp = find(v, components);
    if (ucomp != vcomp)
    {
      merge(ucomp, vcomp, components);
      ncomp = ncomp − 1;
    }
  }
}

Implementation

Choose a partially ordered tree for representing the sorted set of edges. To represent connected components and interconnect them, we need to implement:

1. MERGE (A, B, C) . . . merges components A and B in C and calls the result A or B, arbitrarily.
2. FIND (v, C) . . . returns the name of the component of C of which vertex v is a member. This operation is used to determine whether the two vertices of an edge are in the same or in different components.
3. INITIAL (A, v, C) . . . makes A the name of the component in C containing only one vertex, namely v.

The above data structure is called an MFSET.

Running Time of Kruskal's Algorithm

Creation of the priority queue:
* If there are e edges, it is easy to see that it takes O(e log e) time to insert the edges into a partially ordered tree.
* O(e) algorithms are possible for this problem.

Each deletemin operation takes O(log e) time in the worst case. Thus finding and deleting the least-cost edges, over all the while-loop iterations, contributes O(e log e) in the worst case. The total time for performing all the merge and find operations depends on the method used:

* O(e log e) without path compression;


* O(e·α(e)) with path compression, where α(e) is the inverse of an Ackermann function.

Example: for the figure below, E = {(1,3), (4,6), (2,5), (3,6), (3,4), (1,4), (2,3), (1,2), (3,5), (5,6)}, listed in the order of increasing costs.
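Putting the pieces together, here is a minimal Python sketch of Kruskal's algorithm using a merge-find set with path compression. The edge costs are hypothetical, since the figure's costs are not reproduced in the text; they are assigned 1 through 10 following the listed increasing order:

```python
def kruskal(n, edges):
    """edges: list of (cost, u, v) with vertices 1..n. Returns MST edges."""
    parent = list(range(n + 1))        # merge-find set, one component per vertex

    def find(x):                       # FIND with path compression (halving)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for cost, u, v in sorted(edges):   # edges in order of increasing cost
        ru, rv = find(u), find(v)
        if ru != rv:                   # different components: accept the edge
            parent[ru] = rv            # MERGE the two components
            mst.append((cost, u, v))
    return mst

# Edge list from the example, with hypothetical costs 1..10 in the given order.
pairs = [(1, 3), (4, 6), (2, 5), (3, 6), (3, 4), (1, 4), (2, 3), (1, 2), (3, 5), (5, 6)]
edges = [(c, u, v) for c, (u, v) in enumerate(pairs, start=1)]
mst = kruskal(6, edges)
print(len(mst), sum(c for c, _, _ in mst))  # 5 17
```

With these assumed costs the algorithm accepts edges (1,3), (4,6), (2,5), (3,6), and (2,3), and rejects (3,4) and (1,4) because they would connect vertices already in the same component.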
