
ANALYSIS AND DESIGN OF ALGORITHMS

An algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output. We can also view an algorithm as a tool for solving a well-specified computational problem. Example:

get a positive integer n from input
if n > 10
    print "This might take a while..."
for i = 1 to n
    for j = 1 to i
        print i * j
print "Done!"

Analyzing an algorithm means predicting the resources that the algorithm requires. Occasionally, resources such as memory, communication bandwidth, or computer hardware are of primary concern, but most often it is computational time (or running time) that we want to measure. The efficiency of an algorithm is stated as a function relating the input length (generally denoted by n) to the number of steps (time complexity, generally denoted by T(n)) or to the number of storage locations (space complexity, generally denoted by S(n)). Example: let T(n) denote the time complexity of an algorithm, where n is the input size. T(n) = n^2 + 5n + 100 = O(n^2) means that the time complexity of the algorithm is of the order of n^2; its running time increases more quickly than that of an algorithm whose complexity is O(n). For a polynomial, the complexity is determined by the highest-degree term. Example: for T(n) = n^4 + 1000n^3 + 1 the complexity is determined by n^4, since 4 is the largest power of n. Note: O(n^2) and O(n) are sets of functions that share the same behavior at large input values: 5n^2 + n = O(n^2) and also 1000n^2 = O(n^2), since both exhibit the same growth at large values of n.

Notation for complexity


O-notation (Big O): O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }.

g(n) is an asymptotic upper bound for f (n).

If f(n) ∈ O(g(n)), we write f(n) = O(g(n)). Example: 2n^2 = O(n^3), with c = 1 and n0 = 2. Examples of functions in O(n^2): n^2, n^2 + n, n^2 + 1000n, 1000n^2 + 1000n; also n, n/1000, n^1.999, n^2/lg lg lg n.

Ω-notation (Big Omega): Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }.

g(n) is an asymptotic lower bound for f(n). Example: √n = Ω(lg n), with c = 1 and n0 = 16. Examples of functions in Ω(n^2): n^2, n^2 + n, n^2 - n, 1000n^2 + 1000n, 1000n^2 - 1000n; also n^3, n^2.00001, n^2 lg lg lg n, 2^(2n).

Θ-notation (Theta): Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }.

g(n) is an asymptotically tight bound for f(n). Example: n^2/2 - 2n = Θ(n^2), with c1 = 1/4, c2 = 1/2, and n0 = 8.

Theorem: f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).

Asymptotic notation in equations. When on the right-hand side, Θ(n) stands for some anonymous function in the set Θ(n): 2n^2 + 3n + 1 = 2n^2 + Θ(n) means 2n^2 + 3n + 1 = 2n^2 + f(n) for some f(n) ∈ Θ(n); in particular, f(n) = 3n + 1. When on the left-hand side: no matter how the anonymous functions are chosen on the left-hand side, there is a way to choose the anonymous functions on the right-hand side to make the equation valid. Interpret 2n^2 + Θ(n) = Θ(n^2) as meaning: for all functions f(n) ∈ Θ(n), there exists a function g(n) ∈ Θ(n^2) such that 2n^2 + f(n) = g(n). Interpretation of the chain 2n^2 + 3n + 1 = 2n^2 + Θ(n) = Θ(n^2): first equation: there exists f(n) ∈ Θ(n) such that 2n^2 + 3n + 1 = 2n^2 + f(n); second equation: for all g(n) ∈ Θ(n) (such as the f(n) used to make the first equation hold), there exists h(n) ∈ Θ(n^2) such that 2n^2 + g(n) = h(n).

Small o-notation: o(g(n)) = { f(n) : for all constants c > 0, there exists a constant n0 > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n0 }. Another view, probably easier to use: f(n) = o(g(n)) when lim_{n→∞} f(n)/g(n) = 0.

n^1.9999 = o(n^2); n^2/lg n = o(n^2); n^2 ≠ o(n^2) (just as 2 is not < 2); n^2/1000 ≠ o(n^2).

ω-notation (small omega): ω(g(n)) = { f(n) : for all constants c > 0, there exists a constant n0 > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n0 }. Another view, again probably easier to use: f(n) = ω(g(n)) when lim_{n→∞} f(n)/g(n) = ∞.

n^2.0001 = ω(n^2); n^2 lg n = ω(n^2); n^2 ≠ ω(n^2).

Comparisons of functions
Relational properties. Transitivity: f(n) = Θ(g(n)) and g(n) = Θ(h(n)) imply f(n) = Θ(h(n)); the same holds for O, Ω, o, and ω. Reflexivity: f(n) = Θ(f(n)); the same holds for O and Ω. Symmetry: f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)). Transpose symmetry: f(n) = O(g(n)) if and only if g(n) = Ω(f(n)).

f(n) = o(g(n)) if and only if g(n) = ω(f(n)). Comparisons: f(n) is asymptotically smaller than g(n) if f(n) = o(g(n)); f(n) is asymptotically larger than g(n) if f(n) = ω(g(n)). A way to compare "sizes" of functions: O is like ≤, Ω is like ≥, Θ is like =, o is like <, ω is like >. But unlike real numbers, where exactly one of a < b, a = b, or a > b holds, some pairs of functions cannot be compared at all. Example: n^(1+sin n) and n are incomparable, since 1 + sin n oscillates between 0 and 2.

Standard notations and common functions


Monotonicity: f(n) is monotonically increasing if m ≤ n implies f(m) ≤ f(n); monotonically decreasing if m ≤ n implies f(m) ≥ f(n); strictly increasing if m < n implies f(m) < f(n); strictly decreasing if m < n implies f(m) > f(n).

Exponentials. Useful identities: a^(-1) = 1/a, (a^m)^n = a^(mn), a^m · a^n = a^(m+n). We can relate the growth rates of polynomials and exponentials: for all real constants a and b such that a > 1, lim_{n→∞} n^b / a^n = 0,

which implies that n^b = o(a^n). A surprisingly useful inequality: for all real x, e^x ≥ 1 + x. As x gets closer to 0, e^x gets closer to 1 + x.

Logarithms. Notation: lg n = log2 n (binary logarithm), ln n = log_e n (natural logarithm), lg^k n = (lg n)^k (exponentiation), lg lg n = lg(lg n) (composition). Logarithm functions apply only to the next term in the formula, so lg n + k means (lg n) + k, and not lg(n + k). In the expression log_b a: if we hold b constant, the expression is strictly increasing as a increases; if we hold a constant, the expression is strictly decreasing as b increases. Useful identities, for all real a > 0, b > 0, c > 0, and n, where the logarithm bases are not 1: a = b^(log_b a), log_c(ab) = log_c a + log_c b, log_b a^n = n·log_b a, log_b a = log_c a / log_c b, log_b(1/a) = -log_b a,

log_b a = 1/(log_a b), a^(log_b c) = c^(log_b a). Changing the base of a logarithm from one constant to another changes the value only by a constant factor, so we usually don't worry about logarithm bases in asymptotic notation. The convention is to use lg within asymptotic notation, unless the base actually matters. Just as polynomials grow more slowly than exponentials, logarithms grow more slowly than polynomials. In the limit lim_{n→∞} n^b / a^n = 0 above, substitute lg n for n and 2^a for a: lim_{n→∞} lg^b n / n^a = 0,

implying that lg^b n = o(n^a).

Factorials: n! = 1 · 2 · 3 ⋯ n, with the special case 0! = 1. We can use Stirling's approximation, n! = √(2πn) · (n/e)^n · (1 + Θ(1/n)),

to derive that lg(n!) = Θ(n lg n).

Q 1. Consider the following three claims:


I. (n + k)^m = Θ(n^m), where k is a constant
II. 2^(n+1) = O(2^n)
III. 2^(2n+2) = O(2^n)

Which of these claims are correct? (A) I and II (B) I and III (C) II and III (D) I, II and III (CS 2003)

Ans. A
Explanation: Constants can be ignored in addition and multiplication, but not in exponents, for large input values. In I, (n + k)^m behaves like n^m for large n, so the first claim is correct. Similarly, 2^(n+1) = 2·2^n = O(2^n), so the second is also correct. But 2^(2n+2) = 4·4^n, and 4^n is much larger than c·2^n for large values of n, so the third is incorrect.

Q2. Consider the following functions:
f(n) = 2^n, g(n) = n!, h(n) = n^(log n)
Which of the following statements about the asymptotic behavior of f(n), g(n), and h(n) is true?
(A) f(n) = O(g(n)); g(n) = O(h(n))  (B) f(n) = Ω(g(n)); g(n) = O(h(n))  (C) g(n) = O(f(n)); h(n) = O(f(n))  (D) h(n) = O(f(n)); g(n) = Ω(f(n))  (CS 2008)

Ans. D
Explanation: To solve this type of problem, first arrange the functions in increasing order of growth. Trick: take the log of each function. log(f(n)) = n log 2 = Θ(n); log(g(n)) = log(n!) = Θ(n log n);

log(h(n)) = log n · log n = Θ(log^2 n). The order of the logs is Θ(log^2 n) < Θ(n) < Θ(n log n), so h(n) < f(n) < g(n); from this relation only option D is correct.

Q3. Arrange the following functions in increasing order of growth:
f1(n) = 2^n, f2(n) = n, f3(n) = n(log n)^3, f4(n) = n^(log n), f5(n) = n^n, f6(n) = 2^(2^n)
Ans. f2(n) < f3(n) < f4(n) < f1(n) < f5(n) < f6(n)

Note: there are three methods to find which of two functions is larger asymptotically:
1. Substitute large values of n.
2. Take the log of both functions.
3. Find the limit of f(n)/g(n) as n tends to infinity.

If the limit is 0 then g(n) grows faster than f(n), and if it is infinite then f(n) grows faster than g(n).

Complexity of Iterative Algorithm


To find the complexity of an iterative algorithm, carefully analyze the behavior of its loops.

Q 4. Find the time complexity of the following iterative algorithm:
for(i=1; i<=n; i=i+1) { print n }
Ans. Θ(n)
Explanation: the for loop executes n times, and the print statement and loop test each take Θ(1) time, so n repetitions of Θ(1) work give Θ(n).

Q 5. Find the time complexity of the following iterative algorithm:
for(i=2; i<=n; i=i*2) { print n }
Ans. Θ(log2 n)
Explanation: here the value of i grows exponentially, which reduces the number of times the loop executes. At the first step i is 2, then it is doubled to 4, then 8, 16, 32, and so on. After k iterations i = 2^(k+1), so the loop runs while 2^k ≤ n, i.e. while k ≤ log2 n; the loop therefore executes Θ(log2 n) times.

Q 6. Find the time complexity of the following iterative algorithm:
for(i=1; i<=n; i++) { for(j=1;j<=n; j++) { print n } }
Ans. Θ(n^2)

Explanation: for every value of i the inner loop is executed. The time complexity of the inner loop is Θ(n), and n repetitions of Θ(n) give Θ(n^2).

Q 7. Find the time complexity of the following iterative algorithm:
for(i=1; i<=n; i++) { for(j=1;j<=n; j=j*2) { print n } }
Ans. Θ(n log2 n)
Explanation: for every value of i the inner loop executes log2 n times. The time complexity of the inner loop is Θ(log2 n), and n repetitions of Θ(log2 n) give Θ(n log2 n).

Q 8. Find the time complexity of the following iterative algorithm:
for(i=1; i<=n; i++) { for(j=1;j<=i; j++) { print n } }
Ans. Θ(n^2)
Explanation: here the inner loop depends on the value of i. For i=1 the inner loop executes 1 time, for i=2 it executes 2 times, for i=3 it executes 3 times, ..., and for i=n it executes n times. The total number of inner-loop executions is 1 + 2 + 3 + ... + n = n(n+1)/2 = n^2/2 + n/2 = Θ(n^2).

Q 9. Find the time complexity of the following iterative algorithm:
for(i=1; i<=n; i++) { for(j=1;j<=i; j=j*2) { print n } }
Ans. Θ(n log2 n)
Explanation: here the inner loop depends on the value of i, executing log2 i times for each i. For i=1 the inner loop executes log2 1 times, for i=2 it executes log2 2 times, for i=3 it executes log2 3 times, ...

..., and for i=n it executes log2 n times. The total number of inner-loop executions is log2 1 + log2 2 + log2 3 + ... + log2 n = log2(1·2·3⋯n) = log2(n!) = Θ(n log2 n).

Q 10. Consider the following C program fragment in which i, j and n are integer variables.
for(i=n, j=0; i>0; i/=2, j+=i);
Let val(j) denote the value stored in the variable j after termination of the for loop. Which one of the following is true? (A) val(j) = Θ(log n) (B) val(j) = Θ(√n) (C) val(j) = Θ(n) (D) val(j) = Θ(n log n) (CS 2006)
Ans. C
Explanation: execute the loop step by step.
For i=n, j=0.
For i=n/2, j=0+n/2.
For i=n/4, j=n/2+n/4.
For i=n/8, j=n/2+n/4+n/8.
...
For i=1, j=n/2+n/4+n/8+...+1+0.
To evaluate this sum let n = 2^k: j = 2^(k-1) + 2^(k-2) + 2^(k-3) + ... + 2 + 1 = 2^k - 1 ≈ n = Θ(n).

Complexity of recurrence relation


A recurrence is a function defined in terms of one or more base cases and of itself with smaller arguments. Example: T(n) = T(n-1) + 1 with T(1) = 1, whose solution is T(n) = n.

There are three methods to find the complexity of a recurrence relation:
1. Iterative method
2. Master method
3. Recursion tree method

Iterative method: repeatedly expand the recurrence until it reduces to a non-recursive expression.

Q 11. Find the complexity of the following recurrence relation: T(n) = T(n-1) + 1, where T(1) = 1.
Ans. T(n) = Θ(n)
Explanation: T(n) = T(n-1) + 1 ...(1)

Putting the value of T(n-1) into eq. (1):
T(n) = (T(n-2) + 1) + 1 = T(n-2) + 2 ...(2)
where, using eq. (1), we derive T(n-1) = T(n-2) + 1. Now reduce eq. (2) further:
T(n) = (T(n-3) + 1) + 2 = T(n-3) + 3
...
T(n) = T(n-k) + k
T(n) = T(n-(n-1)) + n - 1 = T(1) + n - 1 = 1 + n - 1 = n = Θ(n)

Q 12. Find the complexity of the following recurrence relation: T(n) = T(n-1) + n, where T(1) = 1.
Ans. T(n) = Θ(n^2)
Explanation:
T(n) = T(n-1) + n
T(n) = (T(n-2) + n-1) + n = T(n-2) + 2n - 1
T(n) = (T(n-3) + n-2) + 2n - 1 = T(n-3) + 3n - 3
...
T(n) = T(n-k) + kn - k(k-1)/2
T(n) = T(n-(n-1)) + (n-1)n - (n-1)(n-2)/2 = Θ(n^2)

Q 13. Find the complexity of the following recurrence relation: T(n) = 2T(n-1) + 1, where T(1) = 1.


Ans. T(n) = Θ(2^n)
Explanation:
T(n) = 2T(n-1) + 1
T(n) = 2(2T(n-2) + 1) + 1 = 2^2·T(n-2) + 1 + 2
T(n) = 2^2·(2T(n-3) + 1) + 1 + 2 = 2^3·T(n-3) + 1 + 2 + 2^2
...
T(n) = 2^k·T(n-k) + 1 + 2 + 2^2 + 2^3 + ... + 2^(k-1)
T(n) = 2^(n-1)·T(n-(n-1)) + 1 + 2 + 2^2 + ... + 2^(n-2) = 2^(n-1)·T(1) + 2^(n-1) - 1 = 2^n - 1 = Θ(2^n)

Master method
Used for many divide-and-conquer recurrences of the form T(n) = aT(n/b) + f(n), where a ≥ 1, b > 1, and f(n) > 0. Compare n^(log_b a) with f(n):

Case 1: f(n) = O(n^(log_b a - ε)) for some constant ε > 0 (f(n) is polynomially smaller than n^(log_b a)). Solution: T(n) = Θ(n^(log_b a)).

Case 2: f(n) = Θ(n^(log_b a) · lg^k n), where k ≥ 0 (f(n) is within a polylog factor of n^(log_b a), but not smaller). Solution: T(n) = Θ(n^(log_b a) · lg^(k+1) n). Simple case: k = 0, so f(n) = Θ(n^(log_b a)) and T(n) = Θ(n^(log_b a) · log2 n).

Case 3: f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and f(n) satisfies the regularity condition a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n (f(n) is polynomially greater than n^(log_b a)).

Solution: T(n) = Θ(f(n)).
Note: if f(n) and n^(log_b a) are equal, or dividing f(n) by n^(log_b a) leaves a factor lg^k n with k ≥ 0, then apply case 2. If case 2 is not applicable, then check which of n^(log_b a) and f(n) is polynomially greater; the greater one decides the complexity of the recurrence relation. Examples:

Q 14. Find the complexity of the following recurrence using the master method: T(n) = T(n/2) + 1.
Ans. Here a=1, b=2 and f(n)=1. n^(log_b a) = n^(log_2 1) = n^0 = 1, so n^(log_b a) = f(n), and by case 2 of the master method T(n) = Θ(n^(log_b a) · log2 n) = Θ(log2 n).

Q 15. Find the complexity of the following recurrence using the master method: T(n) = 2T(n/2) + n.
Ans. Here a=2, b=2 and f(n)=n. n^(log_b a) = n^(log_2 2) = n^1 = n, so n^(log_b a) = f(n), and by case 2 of the master method T(n) = Θ(n^(log_b a) · log2 n) = Θ(n log2 n).

Q 16. Find the complexity of the following recurrence using the master method: T(n) = 2T(n/2) + 1.
Ans. Here a=2, b=2 and f(n)=1. n^(log_b a) = n^(log_2 2) = n^1 = n, and f(n) is polynomially smaller than n^(log_b a), so by case 1 of the master method T(n) = Θ(n^(log_b a)) = Θ(n).

Q 17. Find the complexity of the following recurrence using the master method: T(n) = 4T(n/2) + 1.
Ans. Here a=4, b=2 and f(n)=1. n^(log_b a) = n^(log_2 4) = n^2, and f(n) = 1 is polynomially smaller than n^2, so by case 1 of the master method T(n) = Θ(n^(log_b a)) = Θ(n^2).

Q 18. Consider the following recurrence: T(n) = 2T(⌈√n⌉) + 1, T(1) = 1. Which one of the following is true?
(A) T(n) = Θ(log log n) (B) T(n) = Θ(log n) (C) T(n) = Θ(√n) (D) T(n) = Θ(n)

CS2006

Ans. B
Explanation: This problem cannot be solved directly using the master method, since it is not in the form T(n) = aT(n/b) + f(n); first we have to reduce it to this form. Substitute n = 2^k. The relation becomes T(2^k) = 2T(2^(k/2)) + 1. Now write S(k) for T(2^k), a function of k instead of 2^k; the new relation is S(k) = 2S(k/2) + 1. Here a=2, b=2 and f(k)=1, so k^(log_b a) = k^(log_2 2) = k, which is polynomially greater than f(k). By case 1 of the master method S(k) = Θ(k), and since k = log n, T(n) = Θ(log n).

Recursion Tree Method
In a recursion tree, each node represents the cost of a single sub-problem somewhere in the set of recursive function invocations. We sum the costs within each level of the tree to obtain a set of per-level costs, and then we sum all the per-level costs to determine the total cost of all levels of the recursion. Recursion trees are particularly useful when the recurrence describes the running time of a divide-and-conquer algorithm. Example: T(n) = T(n/3) + T(2n/3) + Θ(n). Summing across each level, the recursion tree shows the cost at each level of recursion.

There are log3 n full levels, and after log3/2 n levels the problem size is down to 1. Each level contributes at most cn, so summing over the log3/2 n levels gives a running time of c·n·log3/2 n, which equals Θ(n lg n).

How to write the recurrence relation for a recursive program
To determine the complexity of a recursive program, first identify its recurrence relation; then apply any of the above methods to find the complexity. To do this, analyze the program, focusing mainly on:
1. if-else conditions,
2. loops,
3. the return statement and the arguments of the returned (recursive) call,
4. the terminating condition.

Q 19. Find the recurrence relation for the following function:
void p(int n) { if(n<=1) return 1; else return p(n-1); }
Ans. T(n) = T(n-1) + Θ(1)
Explanation: the statement return 1; executes once, while return p(n-1); causes a recursive call to the function itself with a decreased value of n.

Q 20. Find the recurrence relation for the following function:
int i;
void p(int n)
{
if(n<=1) return 1;

else
{
for(i=1;i<=n;i++) { sum = sum + i; }
return p(n-1);
}
}
Ans. T(n) = T(n-1) + n
Explanation: the statement return 1; executes once, but the for loop executes each time the function is called, and its running time depends on the argument n. The statement return p(n-1); causes a recursive call with a decreased value of n.

Q 21. The time complexity of the following C function is (assume n > 0):
int recursive(int n) { if(n == 1) return 1; else return(recursive(n-1) + recursive(n-1)); }
(a) O(n) (b) O(n log n) (c) O(n^2) (d) O(2^n) (CS 2004)
Ans. d
Explanation: the recurrence relation for the above program is T(n) = 2T(n-1) + 1, which by the iterative method solves to O(2^n).

Q 22. The recurrence equation T(1) = 1, T(n) = 2T(n-1) + n for n > 1 evaluates to
(a) 2^(n+1) - n - 2
(b) 2^n - n


(c) 2^(n+1) - 2n - 2

(d) 2^n + n

CS2004

Ans. a
Explanation: using the iterative method:
T(n) = 2T(n-1) + n
T(n) = 2(2T(n-2) + n-1) + n = 2^2·T(n-2) + n + 2(n-1)
T(n) = 2^2·(2T(n-3) + n-2) + n + 2(n-1) = 2^3·T(n-3) + n + 2(n-1) + 2^2(n-2)
...
T(n) = 2^k·T(n-k) + n + 2(n-1) + 2^2(n-2) + ... + 2^(k-1)(n-(k-1))
T(n) = 2^(n-1)·T(1) + n + 2(n-1) + 2^2(n-2) + ... + 2^(n-2)(n-(n-2))
Solving this series gives T(n) = 2^(n+1) - n - 2.
Note: in this type of question you can also use the hit-and-try method: take n = 2, then T(2) = 2T(1) + 2 = 4, and only option (a) evaluates to 4 at n = 2.

SORTING ALGORITHMS
BUBBLE SORT
Bubble sort is a simple sorting algorithm. It works by repeatedly stepping through the list to be sorted, comparing each pair of adjacent items and swapping them if they are in the wrong order. Let us take the array of numbers "5 1 4 2 8" and sort it from lowest to greatest using the bubble sort algorithm. In each step the algorithm compares one adjacent pair:
First Pass:
(5 1 4 2 8) → (1 5 4 2 8): the algorithm compares the first two elements and swaps them.
(1 5 4 2 8) → (1 4 5 2 8): swap since 5 > 4.
(1 4 5 2 8) → (1 4 2 5 8): swap since 5 > 2.
(1 4 2 5 8) → (1 4 2 5 8): since these elements are already in order (8 > 5), the algorithm does not swap them.
Second Pass:
(1 4 2 5 8) → (1 4 2 5 8)
(1 4 2 5 8) → (1 2 4 5 8): swap since 4 > 2.
(1 2 4 5 8) → (1 2 4 5 8)
(1 2 4 5 8) → (1 2 4 5 8)
Now the array is already sorted, but the algorithm does not know whether it is complete; it needs one whole pass without any swap to know it is sorted.
Third Pass:
(1 2 4 5 8) → (1 2 4 5 8): no swaps in any comparison.
Finally, the array is sorted, and the algorithm can terminate.
Data structure: array. Worst case performance: O(n^2). Best case performance: O(n). Average case performance: O(n^2). Worst case space complexity: O(1) auxiliary.

SELECTION SORT
The algorithm works as follows:
1. Find the minimum value in the list.
2. Swap it with the value in the first position.
3. Repeat the steps above for the remainder of the list (starting at the second position and advancing each time).
Here is an example of this sort algorithm sorting five elements:
64 25 12 22 11
11 25 12 22 64
11 12 25 22 64
11 12 22 25 64
11 12 22 25 64
Note: in bubble sort the number of comparisons is O(n^2) and the number of swaps is also O(n^2), but in selection sort the number of comparisons is O(n^2) while the number of swaps is only O(n). Selection sort therefore takes less time in comparison to bubble sort.
Data structure: array. Worst case performance: Θ(n^2). Best case performance: Θ(n^2). Average case performance: Θ(n^2). Worst case space complexity: O(n) total, O(1) auxiliary.

INSERTION SORT
To perform an insertion sort, begin at the left-most element of the array and insert each element encountered into its correct position. The ordered sequence into which the element is inserted is stored at the beginning of the array, in the set of indices already examined. Each insertion overwrites a single value: the value being inserted. Here is an example of this sort algorithm sorting five elements:
64 25 12 22 11
First Pass, insert 25 (64 is already in place):
25 64 12 22 11 (25 is compared with 64 and swapped)
Second Pass, insert 12:
25 12 64 22 11 (12 is compared with 64 and swapped)
12 25 64 22 11 (12 is compared with 25 and swapped)
Third Pass, insert 22:
12 25 22 64 11 (22 is compared with 64 and swapped)
12 22 25 64 11 (22 is compared with 25 and swapped)
12 22 25 64 11 (22 is compared with 12; no swap)
Fourth Pass, insert 11:
12 22 25 11 64 (11 is compared with 64 and swapped)
12 22 11 25 64 (11 is compared with 25 and swapped)
12 11 22 25 64 (11 is compared with 22 and swapped)
11 12 22 25 64 (11 is compared with 12 and swapped)
Data structure: array. Worst case performance: Θ(n^2). Best case performance: O(n). Average case performance: Θ(n^2). Worst case space complexity: O(n) total, O(1) auxiliary.

MERGE SORT
Conceptually, a merge sort works as follows:
1. If the list is of length 0 or 1, then it is already sorted. Otherwise:
2. Divide the unsorted list into two sublists of about half the size.
3. Sort each sublist recursively by re-applying merge sort.
4. Merge the two sublists back into one sorted list.

Data structure: array. Worst case performance: Θ(n log n). Best case performance: Θ(n log n).

Average case performance: Θ(n log n). Worst case space complexity: O(n) auxiliary.

QUICK SORT
Quicksort sorts by employing a divide-and-conquer strategy to divide a list into two sub-lists. The steps are:
1. Pick an element, called a pivot, from the list.
2. Reorder the list so that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation.
3. Recursively sort the sub-list of lesser elements and the sub-list of greater elements.
The base cases of the recursion are lists of size zero or one, which are always sorted.

Data structure: array. Worst case performance: Θ(n^2), which occurs when the pivot always ends up at one extreme of the partition (e.g. first/last-element pivot on already-sorted input). Best case performance: Θ(n log n). Average case performance: Θ(n log n). Worst case space complexity: O(n) auxiliary.

Q 23. Consider the Quicksort algorithm. Suppose there is a procedure for finding a pivot element which splits the list into two sub-lists each of which contains at least one-fifth of the elements. Let T(n) be the number of comparisons required to sort n elements. Then
(A) T(n) ≤ 2T(n/5) + n
(B) T(n) ≤ T(n/5) + T(4n/5) + n
(C) T(n) ≤ 2T(4n/5) + n
(D) T(n) ≤ 2T(n/2) + n
(CS 2008)
Ans. B
Explanation: In the worst case the pivot partitions the array into pieces of size n/5 and 4n/5, so the recurrence relation for quicksort is

T(n) = T(n/5) + T(4n/5) + n, so option B is correct.
Note: this type of recurrence relation generally results in complexity O(n log n). Examples: T(n) = T(n/9) + T(8n/9) + n = O(n log n); T(n) = T(n/4) + T(3n/4) + n = O(n log n).

Q 24. In quicksort, for sorting n elements, the (n/4)th smallest element is selected as pivot using an O(n) time algorithm. What is the worst case time complexity of the quicksort?
(A) Θ(n) (B) Θ(n log n) (C) Θ(n^2) (D) Θ(n^2 log n)
(CS 2009)
Ans. B
Explanation: the recurrence is T(n) = T(n/4) + T(3n/4) + O(n), which, as noted above, solves to Θ(n log n).

HEAP SORT
The heap data structure is an array object which can easily be visualized as a complete binary tree. There is a one-to-one correspondence between elements of the array and nodes of the tree. The tree is completely filled on all levels except possibly the lowest, which is filled from the left up to a point. All nodes of a max-heap also satisfy the property that the key value at each node is at least as large as the values at its children.
Step I: The user inputs the size of the heap (within a specified limit); the program generates a corresponding binary tree with nodes having randomly generated key values.
Step II: Build-heap operation. Let n be the number of nodes in the tree and i the index of a node. The program uses the operation Heapify. When Heapify is called, both the left and right subtrees of i are already heaps. The function of Heapify is to let the key at i settle down to a position (by swapping itself with the larger of its children whenever the heap property is not satisfied) until the heap property is satisfied in the tree rooted at i. This operation calls itself recursively on the subtree the key moves into.
Step III: Remove maximum element. The program removes the largest element of the heap (the root) by swapping it with the last element.
Step IV: The program executes Heapify on the new root so that the resulting tree satisfies the heap property.
Step V: Go to step III until the heap is empty.
How to create a heap: let us take the array of elements 10 6 12 15 8.

In the array implementation of heap sort, if p is the index of a parent then the indices of its children are 2p+1 and 2p+2 (when array indexing starts from 0). Similarly, if c is the index of a child then its parent is at floor((c-1)/2), where the floor function returns the next lower integer. The complexity of finding the largest element in a max-heap is O(1); the complexity of deleting the largest element (remove the root, then heapify) is O(log n).
Sorting using the heap

For sorting 5 elements, deletemax is called 5 times, and after each deletion a heapify operation is performed. A heapify operation takes O(log n) time, so calling heapify n times gives heap sort a complexity of O(n log n).
Data structure: array. Worst case performance: Θ(n log n). Best case performance: Θ(n log n). Average case performance: Θ(n log n). Worst case space complexity: Θ(n) total, Θ(1) auxiliary.

Statement for Linked Answer Questions 25 & 26: Consider a binary max-heap implemented using an array.
Q 25. Which one of the following arrays represents a binary max-heap?
(A) {25,12,16,13,10,8,14}
(B) {25,14,13,16,10,8,12}
(C) {25,14,16,13,10,8,12}
(D) {25,14,12,13,10,8,16}

CS2009

Ans. C
Explanation: with 0-based indexing, the children of the element at index i are at 2i+1 and 2i+2, and each parent must be at least as large as its children.
(A) {25,12,16,13,10,8,14}: 12 (index 1) has child 13 (index 3), violating the heap condition.
(B) {25,14,13,16,10,8,12}: 14 (index 1) has child 16 (index 3), violating the heap condition.
(C) {25,14,16,13,10,8,12}: 25 ≥ 14, 16; 14 ≥ 13, 10; 16 ≥ 8, 12: the heap condition holds everywhere.
(D) {25,14,12,13,10,8,16}: 12 (index 2) has child 16 (index 6), violating the heap condition.
Only the array in option C is a valid max-heap.

Q26. What is the content of the array after two delete operations on the correct answer to the previous question?
(A) {14,13,12,10,8} (B) {14,12,13,8,10} (C) {14,13,8,12,10} (D) {14,13,12,8,10}
Ans. D
Explanation: starting from the heap in option C of the previous question, perform the delete operations as follows.

First deletemax: remove the root 25 and move the last element, 12, to the root: {12,14,16,13,10,8}. Heapify: 12 is smaller than its larger child 16, so swap them: {16,14,12,13,10,8}.
Second deletemax: remove the root 16 and move the last element, 8, to the root: {8,14,12,13,10}. Heapify: swap 8 with its larger child 14: {14,8,12,13,10}; then swap 8 with its larger child 13: {14,13,12,8,10}.
The resulting array is {14,13,12,8,10}, which is option D.
RADIX SORT
The algorithm for (LSD) radix sort is as follows:
1. Take the least significant digit (or group of bits, both being examples of radices) of each key.
2. Group the keys based on that digit, but otherwise keep the original order of keys. (This is what makes the LSD radix sort a stable sort.)
3. Repeat the grouping process with each more significant digit.
Note: all keys in this sort should have the same number of digits (shorter keys are padded with leading zeros), and among keys with the same digit the relative order is maintained.
Example: for the unordered list 170, 045, 075, 090, 002, 024, 802, 066
First pass, sort according to the units digit: 170, 090, 002, 802, 024, 045, 075, 066
Second pass, sort according to the tens digit: 002, 802, 024, 045, 066, 170, 075, 090
Third pass, sort according to the hundreds digit: 002, 024, 045, 066, 075, 090, 170, 802
Complexity: O(n·k), where k is the number of digits.

Q27. If we use radix sort to sort n integers in the range (n^(k/2), n^k], for some k > 0 which is independent of n, the time taken would be
(a) Θ(n) (b) Θ(kn) (c) Θ(n log n) (d) Θ(n^2)
Ans. b
Explanation: written in base n, each integer in this range has at most k digits, so radix sort makes Θ(k) passes of Θ(n) work each, for a total of Θ(kn).

CLASSIFICATION OF SORTING ALGORITHMS
1. Comparison based and counting based. Comparison based: bubble, selection, insertion, merge, quick and heap sort are all comparison-based sorting algorithms. Counting based: counting, radix and bucket sort.
2. In-place and not in-place. A sorting algorithm which requires extra memory to perform sorting is called not in-place. Bubble, selection, insertion, heap and quick sort are in-place; merge and radix sort are not in-place.
3. Stable and non-stable. When a sorting technique maintains the relative order of repeated (equal) keys, it is called a stable sorting technique. Bubble, insertion and merge sort are stable. The stability of selection sort depends on its implementation (how conditions are handled), and quicksort can be implemented as a stable sort depending on how the pivot is handled. Heap sort is not stable.
4.
Internal and external sorting. If a sorting algorithm is able to sort data residing on secondary storage, it is called an external sorting algorithm. Among the comparison-based algorithms discussed here, only merge sort is suitable for external sorting; all the other comparison-based algorithms are used for sorting data held in main memory.

DATA STRUCTURES
A data structure is a particular way of storing and organizing data in a computer so that it can be used efficiently. Data structures are generally based on the ability of a computer to fetch and store data at any place in its memory, specified by an address: a bit string that can itself be stored in memory and manipulated by the program.
Classification of data structures:
1. Primitive / Non-primitive: primitive data structures are basic data structures operated on directly by machine instructions, e.g., integer, character. Non-primitive data structures are derived from primitive ones, e.g., structure, union, array.
2. Homogeneous / Heterogeneous: in homogeneous data structures all elements are of the same type, e.g., an array. In heterogeneous data structures elements are of different types, e.g., a structure.
3. Static / Dynamic: in static data structures the size cannot be changed after the initial allocation, as with matrices. In dynamic data structures, like lists, the size can change dynamically.
4. Linear / Non-linear: linear data structures maintain a linear relationship between their elements, e.g., an array. Non-linear data structures do not maintain any linear relationship between their elements, e.g., a tree.

STACK
A stack is a last in, first out (LIFO) abstract data type and data structure. A stack can have any abstract data type as an element, but is characterized by only two fundamental operations: push and pop. The push operation adds an item to the top of the list, hiding any items already on the stack, or initializing the stack if it is empty. The pop operation removes an item from the top of the list and returns this value to the caller. A pop either reveals previously concealed items or results in an empty list.

The array implementation keeps the stack in an array whose first element (usually at the zero offset) is the bottom: array[0] is the first element pushed onto the stack and the last element popped off. The program must keep track of the size (length) of the stack, so the stack can effectively be implemented as a two-element structure in C: the array plus a top index. The complexity of insertion (push) and deletion (pop) in a stack is O(1).

Q28. A single array A[1..MAXSIZE] is used to implement two stacks. The two stacks grow from opposite ends of the array. Variables top1 and top2 (top1 < top2) point to the location of the topmost element in each of the stacks. If the space is to be used efficiently, the condition for "stack full" is
(a) (top1 = MAXSIZE/2) and (top2 = MAXSIZE/2+1)
(b) top1 + top2 = MAXSIZE
(c) (top1 = MAXSIZE/2) or (top2 = MAXSIZE)
(d) top1 = top2 - 1
(CS 2004)
Ans. d
Explanation: two stacks in a single array are implemented as follows.

top1 top2 As element inserted top1 and top2 moves inward when stack is full then top1 =top2 -1 @ @ @ # # # # # #

APPLICATIONS OF STACK
1. In recursion: Consider the program:
void p(int n)
{
    if (n == 1) {
        printf("1");
        return;
    } else {
        p(n - 1);
        printf("%d", n);
    }
}
void main()
{
    p(5);
}
This program runs using a stack. The call p(5) pushes p(4), which pushes p(3), then p(2), then p(1), so the pending calls pile up on the stack:
p(1) p(2) p(3) p(4) p(5)   (p(1) on top)
p(1) prints 1 and returns; then p(2) prints 2, p(3) prints 3, p(4) prints 4 and p(5) prints 5 as the stack unwinds. The output is 1 2 3 4 5.
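The recursion example above can be made runnable. As a sketch, p() appends each number to a buffer instead of printing directly, so the order in which the stacked calls finish is easy to inspect; the buffer `out` is an illustrative convention, not part of the original program.

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

static char out[64];   /* collects the output for inspection */

void p(int n)
{
    char digit[16];
    if (n == 1) {
        strcat(out, "1 ");        /* deepest call produces output first */
        return;
    }
    p(n - 1);                     /* pending call waits on the stack */
    sprintf(digit, "%d ", n);
    strcat(out, digit);           /* produced while the stack unwinds */
}
```

Usage: after calling p(5), out holds "1 2 3 4 5 ", matching the trace above.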

Statement for Linked Answer Questions 29 & 30:
Consider the following C function:
double foo(int n)
{
    int i;
    double sum;
    if (n == 0) return 1.0;
    else {
        sum = 0.0;
        for (i = 0; i < n; i++)
            sum += foo(i);
        return sum;
    }
}
Q29. The space complexity of the above function is:
(a) O(1)
(b) O(n)
(c) O(n!)
(d) O(n2)

CS2005

Ans. b
Explanation: Recursive calls are implemented using a stack. To check the maximum space needed by the above program, run it for n = 3. The deepest chain of recursive calls is foo(3) -> foo(2) -> foo(1) -> foo(0), so the maximum stack needed has size 4, with contents:
foo(0)
foo(1)
foo(2)
foo(3)
Similarly, for input n the space complexity of the above program is O(n).
Q30. Suppose we modify the above function foo() and store the values of foo(i), 0 <= i < n, as and when they are computed. With this modification, the time complexity for function foo() is significantly reduced. The space complexity of the modified function would be:
(a) O(1)
(b) O(n)
(c) O(n2)
(d) O(n!)
CS2005
Ans. b
Explanation: when we store the values of foo(i), we have to store n values, so the space complexity remains O(n).
Note: Storing the values of foo(i) reduces the time complexity, because we need not call the function recursively; instead we can look up the stored values.
2. Expression Conversion
An algebraic expression is a legal combination of operands and operators. An operand is the quantity (unit of data) on which a mathematical operation is performed; an operand may be a variable like x, y, z or a constant like 5, 4, 0, 9, 1 etc. An operator is a symbol which signifies a mathematical or logical operation between the operands; examples of familiar operators include +, -, *, /, ^ etc. An algebraic expression can be represented using three different notations:
INFIX: the operator comes between the operands, e.g. x+y, 6*3 etc.
PREFIX: the operator comes before the operands, e.g. +xy, *+xyz etc.
POSTFIX: the operator comes after the operands, e.g. xy+, xyz+* etc.
Note: INFIX notations are not as simple as they seem, especially while evaluating them. To evaluate an infix expression we need to consider operators' priority and associativity. For example, will the expression 3+5*4 evaluate to 32, i.e. (3+5)*4, or to 23, i.e. 3+(5*4)? To solve this problem, precedence (priority) of the operators was defined. Operator precedence governs evaluation order: an operator with higher precedence is applied before an operator with lower precedence.

As we know the precedence of the operators, we can evaluate the expression 3+5*4 as 23. But it doesn't end here: what about the expression 6/3*2? Will this expression evaluate to 4, i.e. (6/3)*2, or to 1, i.e. 6/(3*2)? As both * and / have the same priority, to resolve this conflict we now need the associativity of the operators. Operator associativity governs the evaluation order of operators of the same priority: for a left-associative operator, evaluation is from left to right, and for a right-associative operator, evaluation is from right to left. The operators *, /, +, - are left-associative, so the expression evaluates to 4 and not 1.
Note: We use associativity of the operators only to resolve conflicts between operators of the same priority.
Due to the above-mentioned problem of considering operators' priority and associativity while evaluating an expression in infix notation, we use prefix and postfix notations. Both prefix and postfix notations have the advantage over infix that, while evaluating an expression

in prefix or postfix form, we need not consider the priority and associativity of the operators. E.g. x/y*z becomes */xyz in prefix and xy/z* in postfix.
Q31. The following postfix expression, containing single digit operands and arithmetic operators + and *, is evaluated using a stack.
5 2 * 3 4 + 5 2 * * +
Show the contents of the stack
(i) After evaluating 5 2 * 3 4 +
(ii) After evaluating 5 2 * 3 4 + 5 2
(iii) At the end of evaluation.
CS2000
Ans. 5 2 * pushes 5 and 2 and replaces them by 10; 3 4 + pushes 3 and 4 and replaces them by 7. Stack contents are listed top first:
(i) 7 10
(ii) 2 5 7 10
(iii) the remaining * * + compute 5*2 = 10 (stack: 10 7 10), then 7*10 = 70 (stack: 70 10), then 10+70 = 80, leaving 80
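The stack-based evaluation used in Q31 can be sketched in C. The function name and the fixed stack size are illustrative assumptions; it handles single-digit operands and the + and * operators only.

```c
#include <assert.h>
#include <ctype.h>

int eval_postfix(const char *expr)
{
    int stack[64], top = -1;             /* top == -1 means empty */
    for (const char *p = expr; *p; p++) {
        if (isdigit((unsigned char)*p)) {
            stack[++top] = *p - '0';     /* operand: push its value */
        } else if (*p == '+' || *p == '*') {
            int b = stack[top--];        /* pop the two topmost operands */
            int a = stack[top--];
            stack[++top] = (*p == '+') ? a + b : a * b;
        }                                /* other characters are skipped */
    }
    return stack[top];                   /* final result is on top */
}
```

Usage: eval_postfix("52*34+52**+") returns 80, the value found in part (iii).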

Infix to Postfix Conversion
In normal algebra we use the infix notation like a+b*c. The corresponding postfix notation is abc*+. The algorithm for the conversion is as follows:
1) Scan the infix string from left to right. Initialize an empty stack.
2) If the scanned character is an operand, add it to the postfix string.
3) If the scanned character is an operator and the stack is empty, push the character to the stack.
4) If the scanned character is an operator and the stack is not empty, compare the precedence of the character with the element on top of the stack (topStack). If topStack has higher precedence than the scanned character, pop the stack and add topStack to the postfix string; else push the scanned character to the stack. Repeat this step as long as the stack is not empty and topStack has precedence over the character.
5) Repeat steps 2-4 till all the characters are scanned.
6) After all characters are scanned, pop any remaining elements from the stack and add them to the postfix string: if the stack is not empty, add topStack to the postfix string and pop the stack; repeat as long as the stack is not empty.
7) Return the postfix string.
Example: Let us see how the above algorithm works using an example.
Infix string: a+b*c-d
Initially the stack is empty and our postfix string has no characters. The first character scanned is 'a', which is added to the postfix string. The next character scanned is '+'. It being an operator, it is pushed to the stack.

Stack: +        Postfix string: a
The next character scanned is 'b', which is placed in the postfix string. The next character is '*', an operator. The top element of the stack is '+', which has lower precedence than '*', so '*' is pushed to the stack.
Stack: + *      Postfix string: ab

The next character is 'c', which is placed in the postfix string. The next character scanned is '-'. The topmost character in the stack is '*', which has higher precedence than '-', so '*' is popped from the stack and added to the postfix string. The stack is still not empty; now the topmost element is '+', which has equal priority to '-' (and is left-associative), so '+' is popped from the stack and added to the postfix string. Then '-' is pushed to the stack.
Stack: -        Postfix string: abc*+

The next character is 'd', which is added to the postfix string. Now all characters have been scanned, so we must pop the remaining elements from the stack and add them to the postfix string. At this stage we have only a '-' in the stack; it is popped out and added to the postfix string. So, after all characters are scanned, the stack is empty and the postfix string is abc*+d-.
End result:
Infix string: a+b*c-d
Postfix string: abc*+d-
Q32. Assume that the operators +, -, x are left associative and ^ is right associative. The order of precedence (from highest to lowest) is ^, x, +, -. The postfix expression corresponding to the infix expression a + b x c - d ^ e ^ f is
(a) a b c x + d e f ^ ^ -
(b) a b c x + d e ^ f ^ -
(c) a b + c x d e ^ f ^
(d) - + a x b c ^ d e f
CS2004
Ans. a
Explanation: the postfix expression can be built using a stack.
Step 1. The first element is a, an operand; add it to the output list. The second character is the + operator; push it to the stack.
Stack: +        Postfix string: a

Step 2. The next character is b; add it to the list. The next character is the x operator. At the top of the stack there is the + operator, which has lower precedence than x, so push x to the stack.
Stack: + x      Postfix string: ab
Step 3. Similarly, add c to the list. The next character is the - operator. At the top of the stack there is x, which has higher precedence than -, so pop x from the stack and add it to the list. Now at the top of the stack there is +, which has equal precedence to - and is left associative, so pop + from the stack and add it to the list, then push - to the stack.
Stack: -        Postfix string: abcx+
Step 4. Add d to the list. The next character is the ^ operator. At the top of the stack there is the - operator, which has lower precedence than ^, so push ^ to the stack.
Stack: - ^      Postfix string: abcx+d
Step 5. Add e to the list. The next character is the ^ operator. At the top of the stack there is the ^ operator; here we check the associativity of ^. Since ^ is right associative, push ^ to the stack. Finally, add f to the list.
Stack: - ^ ^    Postfix string: abcx+def
Now pop all the stack contents and add them to the list. The postfix string is: a b c x + d e f ^ ^ -
Note: If the top of the stack and the incoming operator have the same precedence and are left associative, we first pop the operator, add it to the list, and then push the new incoming operator.
Converting an Expression from Infix to Prefix using a STACK
In this algorithm we first reverse the input expression, so that a+b*c becomes c*b+a, then do the conversion, and then reverse the output string. Doing this has the advantage that, except for some minor modifications, the algorithm for Infix->Prefix remains almost the same as the one for Infix->Postfix.
Algorithm
1) Reverse the input string.
2) Examine the next element in the input.
3) If it is an operand, add it to the output string.
4) If it is a closing parenthesis, push it on the stack.
5) If it is an operator, then
   i) If the stack is empty, push the operator on the stack.
   ii) If the top of the stack is a closing parenthesis, push the operator on the stack.
   iii) If it has the same or higher priority than the top of the stack, push the operator on the stack.
   iv) Else pop the operator from the stack and add it to the output string; repeat step 5.
6) If it is an opening parenthesis, pop operators from the stack and add them to the output string until a closing parenthesis is encountered; pop and discard the closing parenthesis.
7) If there is more input, go to step 2.
8) If there is no more input, pop the remaining operators and add them to the output string.
9) Reverse the output string.
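The infix-to-postfix procedure described earlier (without parenthesis handling) can be sketched in C. The helper `prec` and the buffer sizes are illustrative assumptions; operands are single letters or digits, + - * / are treated as left-associative and ^ as right-associative.

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Illustrative precedence helper (not from the original notes). */
static int prec(char op)
{
    switch (op) {
    case '^':           return 3;
    case '*': case '/': return 2;
    case '+': case '-': return 1;
    default:            return 0;
    }
}

void infix_to_postfix(const char *infix, char *postfix)
{
    char stack[64];
    int top = -1, k = 0;
    for (const char *p = infix; *p; p++) {
        if (isalnum((unsigned char)*p)) {
            postfix[k++] = *p;           /* operand: straight to output */
        } else {
            /* pop while topStack has higher precedence, or equal
               precedence with a left-associative operator */
            while (top >= 0 &&
                   (prec(stack[top]) > prec(*p) ||
                    (prec(stack[top]) == prec(*p) && *p != '^')))
                postfix[k++] = stack[top--];
            stack[++top] = *p;           /* then push the new operator */
        }
    }
    while (top >= 0)                     /* flush remaining operators */
        postfix[k++] = stack[top--];
    postfix[k] = '\0';
}
```

Usage: for "a+b*c-d" this produces "abc*+d-", and for the Q32 expression written with * in place of x, "a+b*c-d^e^f", it produces "abc*+def^^-".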

Example: Suppose we want to convert 2*3/(2-1)+5*(4-1) into prefix.
Reversed string: )1-4(*5+)1-2(/3*2

Char scanned   Stack contents (top on right)   Output (prefix, read right to left)
)              )
1              )                               1
-              )-                              1
4              )-                              14
(              (empty)                         14-
*              *                               14-
5              *                               14-5
+              +                               14-5*
)              +)                              14-5*
1              +)                              14-5*1
-              +)-                             14-5*1
2              +)-                             14-5*12
(              +                               14-5*12-
/              +/                              14-5*12-
3              +/                              14-5*12-3
*              +/*                             14-5*12-3
2              +/*                             14-5*12-32
(end)          (empty)                         14-5*12-32*/+

Reverse the output string: +/*23-21*5-41. So, the final prefix expression is +/*23-21*5-41.
All the remaining conversions can easily be done using a binary expression tree.
Binary Expression Tree
An expression tree is a strictly binary tree in which leaf nodes contain operands and non-leaf nodes contain operators; the root node contains the operator that is applied to the results of the left and right subtrees. Once we obtain the expression tree for a particular expression, its conversion into the different notations (infix, prefix, and postfix) and its evaluation become a matter of traversing the expression tree. The following figure shows an expression tree for the above expression 2*3/(2-1)+5*(4-1).

Note: A binary expression tree does not contain parentheses; the reason for this is that when evaluating an expression using an expression tree, the structure of the tree itself decides the order of the operations.
When we run a pre-order traversal (visit root, left child and then right child) on the binary expression tree we get the prefix notation of the expression; similarly, a post-order traversal (visit left child, right child and then root) will yield the postfix notation. What will we get from an in-order traversal (visit left child, root and then right child)? Well, for expressions which do not contain parentheses, in-order traversal will give the infix notation of the expression, but expressions whose infix form requires parentheses to override the conventional precedence of operators cannot be retrieved by a simple in-order traversal.
Doing the Conversions with Expression Trees
Prefix -> Infix
The following algorithm works for expressions whose infix form does not require parentheses to override conventional precedence of operators.
1) Create the expression tree from the prefix expression.
2) Run an in-order traversal on the tree.
Prefix -> Postfix
1) Create the expression tree from the prefix expression.
2) Run a post-order traversal on the tree.
Q33. Write down the postfix and prefix expressions for the infix expression a*((b-c)/d)+k
Ans: postfix: a b c - d / * k +
Prefix: + * a / - b c d k
Explanation: to solve this type of question, first create the binary expression tree.
The tree has + at the root, with left child * and right child k; * has left child a and right child /; / has left child - and right child d; and - has children b and c.
Now the prefix and postfix forms can easily be found using pre-order and post-order traversals of the tree respectively.
QUEUE
A queue is a list in which items are added at one end and removed from the other. In this respect, a queue is like the line of customers waiting to be served by a bank teller. As customers arrive, they join the end of the queue while the teller serves the customer at the head of the queue. As a result, a queue is used when a sequence of activities must be done on a first-come, first-served basis.
A deque extends the notion of a queue: in a deque, items can be added to or removed from either end. In a sense, a deque is the more general abstraction of which the stack and the queue are just special cases.
A queue has two basic operations:
enqueue(): add an element at the rear end of the queue.
dequeue(): remove an element from the front end of the queue.
Example:

A bounded queue is a queue limited to a fixed number of items and is implemented using an array.
Priority Queue
One can imagine a priority queue as a modified queue: when one takes the next element off the queue, the highest-priority element is retrieved first. Stacks and queues may be modeled as particular kinds of priority queues. In a stack, the priority of each inserted element is monotonically increasing; thus, the last element inserted is always the first retrieved. In a queue, the priority of each inserted element is monotonically decreasing; thus, the first element inserted is always the first retrieved.
There are a variety of simple, usually inefficient, ways to implement a priority queue. They provide an analogy to help one understand what a priority queue is:
Sorted list implementation: Like a checkout line at the supermarket, but where important people get to "cut" in front of less important people. (With arrays, insertion takes O(n): binary search finds the insertion position in O(log n), but shifting elements takes O(n). With linked lists, insertion is also O(n). Get-next is O(1).)
Unsorted list implementation: Keep a list of elements as the queue. To add an element, append it to the end. To get the next element, search through all elements for the one with the highest priority. (O(1) insertion time, O(n) get-next due to the search.)
These implementations are usually inefficient. To get better performance, priority queues typically use a heap as their backbone, giving O(log n) performance for inserts and removals.
Q34. What is the minimum number of stacks of size n required to implement a queue of size n?
(a) One (b) Two (c) Three (d) Four
CS2001
Ans. b
Explanation: a queue can be implemented using a minimum of two stacks. Since a stack has only one accessible end, inserting at one end and deleting at the other is not possible with a single stack. Let the two stacks be s1 and s2. To insert a new element, push it onto s1. To delete an element, if s2 is empty, first pop all elements from s1 and push them onto s2; then pop an element from s2 (this is the first element in the queue).
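The two-stack queue of Q34 can be sketched in C. The fixed capacity N and all names are illustrative; elements are enqueued on s1, and dequeue pops from s2, refilling it from s1 only when s2 is empty.

```c
#include <assert.h>

#define N 64
typedef struct { int a[N]; int top; } Stack;       /* top == -1: empty */

static void push(Stack *s, int x) { s->a[++s->top] = x; }
static int  pop(Stack *s)         { return s->a[s->top--]; }

typedef struct { Stack s1, s2; } Queue;

void enqueue(Queue *q, int x)
{
    push(&q->s1, x);                   /* O(1) insertion at the rear */
}

int dequeue(Queue *q)                  /* assumes a non-empty queue */
{
    if (q->s2.top < 0)                 /* s2 empty: move everything over */
        while (q->s1.top >= 0)
            push(&q->s2, pop(&q->s1)); /* reverses order: oldest on top */
    return pop(&q->s2);
}
```

Usage: initialize both tops to -1; enqueuing 1, 2, 3 and then dequeuing yields 1, 2, 3 in FIFO order.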

LINKED LIST
A linked list is a data structure that consists of a sequence of data records such that in each record there is a field that contains a reference (i.e., a link) to the next record in the sequence.
Example: a linked list whose nodes contain two fields: an integer value and a link to the next node.

Linked lists are among the simplest and most common data structures; they provide an easy implementation for several important abstract data structures, including stacks and queues. The principal benefit of a linked list over a conventional array is that the order of the linked items may be different from the order in which the data items are stored in memory or on disk. For that reason, linked lists allow insertion and removal of nodes at any point in the list with a constant number of operations. On the other hand, linked lists by themselves do not allow random access to the data or any form of efficient indexing. Thus, many basic operations, such as obtaining the last node of the list, finding a node that contains a given datum, or locating the place where a new node should be inserted, may require scanning most of the list elements.
In a doubly-linked list, each node contains, besides the next-node link, a second link field pointing to the previous node in the sequence. The two links may be called forward(s) and backward(s), or next and previous.
Example: a doubly-linked list whose nodes contain three fields: an integer value, the link forward to the next node, and the link backward to the previous node.

Linear and circular lists


In the last node of a list, the link field often contains a null reference, a special value that is interpreted by programs as meaning "there is no such node". A less common convention is to make it point to the first node of the list; in that case the list is said to be circular or circularly linked; otherwise it is said to be open or linear.

Linked list implementation of stack
The linked-list implementation is equally simple and straightforward. In fact, a linked-list stack is much simpler than most linked-list implementations: it requires a linked list where only the head node or element can be removed (popped), and a node can only be inserted by becoming the new head node.
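A minimal sketch of this linked-list stack in C (names are illustrative): push and pop both work only at the head node, so each is O(1).

```c
#include <assert.h>
#include <stdlib.h>

struct node {
    int value;
    struct node *next;
};

void push(struct node **head, int value)
{
    struct node *n = malloc(sizeof *n);
    n->value = value;
    n->next = *head;        /* new node points at the old head ... */
    *head = n;              /* ... and becomes the new head */
}

int pop(struct node **head) /* assumes a non-empty stack */
{
    struct node *n = *head;
    int value = n->value;
    *head = n->next;        /* unlink the old head */
    free(n);
    return value;
}
```

Usage: starting from struct node *s = NULL, pushing 1, 2, 3 and then popping yields 3, 2, 1.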

Linked list operations


1. Inserting a node: a node cannot be inserted before an existing one directly; instead, you have to locate the existing node while keeping track of the previous node, and link the new node in after that previous node.

2. Deleting a node: To find and remove a particular node, one must again keep track of the previous element.
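The two operations above can be sketched in C; `insert_after` and `delete_value` are hypothetical helpers written for illustration, and both walk the list keeping track of the node before the one of interest.

```c
#include <assert.h>
#include <stdlib.h>

struct node {
    int data;
    struct node *next;
};

/* Insert a new node after the node holding 'after' (assumed present). */
void insert_after(struct node *list, int after, int value)
{
    while (list->data != after)          /* locate the node */
        list = list->next;
    struct node *n = malloc(sizeof *n);
    n->data = value;
    n->next = list->next;
    list->next = n;
}

/* Delete the first node holding 'value', tracking the previous node. */
void delete_value(struct node **head, int value)
{
    struct node *prev = NULL, *p = *head;
    while (p && p->data != value) {
        prev = p;
        p = p->next;
    }
    if (!p) return;                      /* not found */
    if (prev) prev->next = p->next;      /* unlink a middle/last node */
    else      *head = p->next;           /* deleting the head node */
    free(p);
}
```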

Q35. In the worst case, the number of comparisons needed to search a singly linked list of length n for a given element is
(a) log n
(b) 1
(c) log n - 1
(d) n
CS2002

Ans. d
Explanation: in a singly linked list, searching proceeds from the first element to the last, since each element can be accessed only through the one before it.
Note: We cannot apply binary search to a linked list because we cannot find the middle of a linked list in constant time.
Q36. Consider the function f defined below.
struct item {
    int data;
    struct item *next;
};
int f(struct item *p)
{
    return ((p == NULL) || (p->next == NULL) ||
            ((p->data <= p->next->data) && f(p->next)));
}
For a given linked list p, the function f returns 1 if and only if
(A) the list is empty or has exactly one element
(B) the elements in the list are sorted in non-decreasing order of data value
(C) the elements in the list are sorted in non-increasing order of data value
(D) not all elements in the list have the same data value

CS2003

Ans. B
Explanation: the return statement in f either returns 1 when the end of the linked list is reached (p or p->next is NULL), or checks whether the data at the current node is at most the data at the next node and, if so, calls f recursively for the next node. If somewhere in the middle the current node's data is greater than the next node's data, the && short-circuits and f returns 0.
Note: in C the && operator evaluates the left operand first; if it is false, the right operand is not evaluated.
Q37. A circularly linked list is used to represent a Queue. A single variable p is used to access the Queue. To which node should p point such that both the operations enQueue and deQueue can be performed in constant time?
(a) rear node
(b) front node
(c) not possible with a single pointer
(d) node next to front
CS2004
Ans. (a)
Explanation: consider a circular linked list accessed through p.

If the circular linked list is used to represent a queue and we make p point to the rear node, then we can easily reach both the last and the first element of the queue: p points to the last element, and the node next to it is the first element. But if we make p point to the front node, we cannot find the last element in constant time, since we cannot traverse a singly linked list in reverse order.
Q38. The following C function takes a singly-linked list of integers as a parameter and rearranges the elements of the list. The function is called with the list containing the integers 1, 2, 3, 4, 5, 6, 7 in the given order. What will be the contents of the list after the function completes execution?
struct node {
    int value;
    struct node *next;
};
void rearrange(struct node *list)
{
    struct node *p, *q;
    int temp;
    if (!list || !list->next) return;
    p = list;
    q = list->next;
    while (q) {
        temp = p->value;
        p->value = q->value;
        q->value = temp;
        p = q->next;
        q = p ? p->next : 0;
    }
}
(A) 1,2,3,4,5,6,7
(B) 2,1,4,3,6,5,7
(C) 1,3,2,5,4,7,6
(D) 2,3,4,5,6,7,1
CS2008, IT2005
Ans. B
Explanation: the function swaps the values of successive pairs of nodes. Trace for the list 1 2 3 4 5 6 7:

Step 1: the list is not null, so assign p = list (node 1) and q = list->next (node 2).
Step 2: q is not 0, so enter the while loop. After the first iteration the list is 2 1 3 4 5 6 7, with p at node 3 and q at node 4.
Step 3: since q is not 0, after the second iteration the list is 2 1 4 3 5 6 7, with p at node 5 and q at node 6.
Step 4: since q is not 0, after the third iteration the list is 2 1 4 3 6 5 7, with p at node 7. Now p->next is NULL, so q becomes null and the loop terminates.
The final values in the list are 2 1 4 3 6 5 7.

TREES
A tree is a widely-used data structure that emulates a hierarchical tree structure with a set of linked nodes. (A simple unordered tree; in the accompanying diagram, the node labeled 7 has two children, labeled 2 and 6, and one parent, labeled 2. The root node, at the top, has no parent.) Mathematically, it is an acyclic connected graph in which each node has zero or more child nodes and at most one parent node. Furthermore, the children of each node may have a specific order.
A node is a structure which may contain a value, a condition, or represent a separate data structure (which could be a tree of its own). Each node in a tree has zero or more child nodes. Nodes that do not have any children are called leaf nodes; they are also referred to as terminal nodes.
The height of a node is the length of the longest downward path from that node to a leaf. The height of the root is the height of the tree. The depth of a node is the length of the path from that node to the root.
The topmost node in a tree is called the root node. The root node has no parent. It is the node at which operations on the tree commonly begin; all other nodes can be reached from it by following edges or links. An internal node or inner node is any node of a tree that has child nodes and is thus not a leaf node.
A subtree of a tree T is a tree consisting of a node in T and all of its descendants in T. The subtree corresponding to the root node is the entire tree; the subtree corresponding to any other node is called a proper subtree.
Trees store data in a hierarchical manner.
Note: there is always a unique path from the root to each node.
Q39. In a complete k-ary tree, every internal node has exactly k children. The number of leaves in such a tree with n internal nodes is:
(a) nk
(b) (n-1)k + 1
(c) n(k-1) + 1
(d) n(k-1)
CS2005
Ans. c
Explanation: take any complete k-ary tree as an example. Let us take a 4-ary tree with 3 internal nodes.

In this tree the number of internal nodes is 3 and the number of leaves is 10. Only option (c) satisfies these values: 3(4-1) + 1 = 10.

Binary Trees
The simplest form of tree is a binary tree. A binary tree consists of
1. a node (called the root node), and
2. left and right sub-trees,
3. where both sub-trees are themselves binary trees.
In other words, a binary tree is either:
- an empty tree, or
- a node, called the root, with zero, one, or two children (left and right), each of which is itself a binary tree.
This recursive definition uses the term "empty tree" as the base case; every non-empty node has two children, either of which may be empty.

Q40. Consider the following nested representation of binary trees: (X Y Z) indicates Y and Z are the left and right subtrees, respectively, of node X. Note that Y and Z may be NULL, or further nested. Which of the following represents a valid binary tree?
(a) (1 2 (4 5 6 7))
(b) (1 (2 3 4) 5 6) 7)
(c) (1 (2 3 4) (5 6 7))
(d) (1 (2 3 NULL) (4 5))
CS2000
Ans. c
Explanation: as per the definition in the question, a binary tree should be a triplet (X Y Z). In option (a), (4 5 6 7) contains four elements. In option (b) there is a missing parenthesis. In option (d), (4 5) contains only two elements. So only option (c) is correct.
Representation of a node in C:
struct node {
    int item;
    struct node *left;
    struct node *right;
};
Traversals in a binary tree
1. Inorder traversal: left --> root --> right
Example: for the binary tree shown above, inorder traversal generates: 1 3 4 6 7 8 10 13 14
2. Preorder traversal: root --> left --> right
Example: for the binary tree shown above, preorder traversal generates: 8 3 1 6 4 7 10 14 13
3. Postorder traversal: left --> right --> root
Example: for the binary tree shown above, postorder traversal generates: 1 4 7 6 3 13 14 10 8
Q41. Draw all binary trees having exactly three nodes labeled A, B and C on which preorder traversal gives the sequence C, B, A.
CS2002
Ans. As we know, preorder traversal is root, left, right. In the given sequence C is first, so C is the root. There are five possibilities:
1. C with only a left child B, where B has a left child A.
2. C with only a left child B, where B has a right child A.
3. C with left child B and right child A.
4. C with only a right child B, where B has a left child A.
5. C with only a right child B, where B has a right child A.
Note: You cannot draw a unique binary tree from only a preorder sequence, only a postorder sequence, or even both together. An inorder sequence must be given in addition to either of them.
Q42. A binary search tree contains the numbers 1, 2, 3, 4, 5, 6, 7, 8. When the tree is traversed in pre-order and the values in each node printed out, the sequence of values obtained is 5, 3, 1, 2, 4, 6, 8, 7. If the tree is traversed in post-order, the sequence obtained would be
(A) 8, 7, 6, 5, 4, 3, 2, 1
(B) 1, 2, 3, 4, 8, 7, 6, 5
(C) 2, 1, 4, 3, 6, 7, 8, 5
(D) 2, 1, 4, 3, 7, 8, 6, 5
IT2005
Ans. D
Explanation: the pre-order sequence is given, and the inorder sequence of a binary search tree (BST) is its sorted data. From these two sequences we can create the unique binary search tree.
Pre-order: 5 3 1 2 4 6 8 7
Inorder: 1 2 3 4 5 6 7 8
Step 1: in the preorder sequence the root is visited first, so 5 is the root of the BST. In the inorder sequence the left subtree is visited first; 1 2 3 4 appear to the left of 5, so 1 2 3 4 form the left subtree of root 5. Similarly, 6 7 8 form the right subtree of 5.
Step 2: now draw the left subtree of root 5. Its preorder sequence is 3 1 2 4 and its inorder sequence is 1 2 3 4. From the preorder sequence the root of this subtree is 3, and from the inorder sequence 1 2 form its left subtree while 4 is its right subtree; then 1 gets 2 as its right child.
Similarly, solve for the right subtree 6 7 8 (preorder 6 8 7): 6 is its root, 8 is its right child, and 8 has 7 as its left child.
The complete tree is: root 5; left child 3 with left child 1 (whose right child is 2) and right child 4; right child 6 with right child 8 (whose left child is 7). Its post-order traversal is 2, 1, 4, 3, 7, 8, 6, 5.
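The three traversals defined earlier can be sketched in C; writing the visited keys into an output array via the index pointer k is an illustrative convention, not part of the original notes.

```c
#include <assert.h>
#include <stdlib.h>

struct node {
    int item;
    struct node *left;
    struct node *right;
};

void inorder(struct node *t, int *out, int *k)
{
    if (!t) return;
    inorder(t->left, out, k);      /* left */
    out[(*k)++] = t->item;         /* root */
    inorder(t->right, out, k);     /* right */
}

void preorder(struct node *t, int *out, int *k)
{
    if (!t) return;
    out[(*k)++] = t->item;         /* root */
    preorder(t->left, out, k);     /* left */
    preorder(t->right, out, k);    /* right */
}

void postorder(struct node *t, int *out, int *k)
{
    if (!t) return;
    postorder(t->left, out, k);    /* left */
    postorder(t->right, out, k);   /* right */
    out[(*k)++] = t->item;         /* root */
}
```

Usage: on the BST of Q42, preorder yields 5 3 1 2 4 6 8 7, inorder yields 1 through 8, and postorder yields 2 1 4 3 7 8 6 5.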

APPLICATIONS OF BINARY TREES 1. Binary Expression Tree We have discussed binary expression tree earlier under the topic expression conversion.

2. Binary Search Tree
It is an ordered binary tree in which:
1. the keys of all the nodes in the left sub-tree are less than the key of the root,
2. the keys of all the nodes in the right sub-tree are greater than the key of the root,
3. the left and right sub-trees are themselves ordered binary trees.
Note: each node (item in the tree) has a distinct key.
Searching a binary search tree for a specific value can be a recursive or iterative process. We begin by examining the root node. If the tree is null, the value we are searching for does not exist in the tree. Otherwise, if the value equals the root, the search is successful. If the value is less than the root, we search the left subtree; similarly, if it is greater than the root, we search the right subtree. This process is repeated until the value is found or the indicated subtree is null. If the searched value is not found before a null subtree is reached, the item is not present in the tree. Worst-case searching in a BST is O(n), which occurs when every node has a single child.
Note: inorder traversal of a binary search tree generates sorted data.
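The search procedure just described, as a recursive C sketch (illustrative, assuming the node structure given earlier):

```c
#include <assert.h>
#include <stdlib.h>

struct node {
    int item;
    struct node *left;
    struct node *right;
};

/* Returns the node holding key, or NULL when a null subtree is reached. */
struct node *search(struct node *t, int key)
{
    if (t == NULL) return NULL;        /* key is not present */
    if (key == t->item) return t;      /* found */
    if (key < t->item)
        return search(t->left, key);   /* smaller keys are on the left */
    return search(t->right, key);      /* larger keys are on the right */
}
```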

Insertion in binary search tree


Insertion begins as a search would begin: if the root's key is not equal to the value, we search the left or right subtree as before. Eventually we reach an external (null) position and add the value there as a new leaf, as the left or right child depending on the last node's value. In other words, we examine the root and recursively insert the new node into the left subtree if the new value is less than the root, or into the right subtree if the new value is greater than or equal to the root. This operation requires time proportional to the height of the tree, which is O(log n) in the average case over all trees, but O(n) in the worst case.
Deletion in binary search tree
There are three possible cases to consider:
Deleting a leaf (node with no children): deleting a leaf is easy, as we can simply remove it from the tree.
Deleting a node with one child: delete it and replace it with its child.
Deleting a node with two children: call the node to be deleted "N". Do not delete N. Instead, choose either its in-order successor node or its in-order predecessor node, "R". Replace the value of N with the value of R, then delete R.
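BST insertion as described above can be sketched recursively in C (illustrative; keys greater than or equal to the current node go right, matching the text).

```c
#include <assert.h>
#include <stdlib.h>

struct node {
    int item;
    struct node *left;
    struct node *right;
};

/* Walk down as in a search and attach the key where a null subtree
   is reached; returns the (possibly new) subtree root. */
struct node *insert(struct node *t, int key)
{
    if (t == NULL) {                       /* found the empty spot */
        struct node *n = malloc(sizeof *n);
        n->item = key;
        n->left = n->right = NULL;
        return n;
    }
    if (key < t->item)
        t->left = insert(t->left, key);    /* smaller keys go left */
    else
        t->right = insert(t->right, key);  /* larger or equal go right */
    return t;
}
```

Usage: inserting the Q43 sequence 10, 1, 3, 5, 15, 12, 16 into an empty tree reproduces the tree of height 3 discussed there.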

Q43. The following numbers are inserted into an empty binary search tree in the given order: 10, 1, 3, 5, 15, 12, 16. What is the height of the binary search tree (the height is the maximum distance of a leaf node from the root)?
(a) 2
(b) 3
(c) 4
(d) 6
CS2004
Ans. b
Explanation: create the binary search tree for the sequence: 10 is the root; 1 becomes its left child; 3 becomes the right child of 1; 5 the right child of 3; 15 the right child of 10; 12 the left child of 15; and 16 the right child of 15. The longest path is 10 -> 1 -> 3 -> 5, so the height of the tree is 3.
Q44. suppose the numbers 7, 5, 1, 8, 3, 6, 0, 9, 4, 2 are inserted in that order into an initially empty binary search tree. The binary search tree uses the usual ordering on natural numbers. What is the in-order traversal sequence of the resultant tree?

(A) 7 5 1 0 3 2 4 6 8 9
(B) 0 2 4 3 1 6 5 9 8 7
(C) 0 1 2 3 4 5 6 7 8 9
(D) 9 8 6 4 2 3 0 1 5 7

CS2003

Ans. C
Explanation: inorder traversal of a binary search tree always gives sorted data.
3. Huffman Code
Huffman coding is based on the frequency of occurrence of a data item (e.g., a pixel in images). The principle is to use a smaller number of bits to encode the data that occurs more frequently.
1. Initialization: put all nodes in an OPEN list, kept sorted at all times (e.g., ABCDE).
2. Repeat until the OPEN list has only one node left:
(a) From OPEN pick the two nodes having the lowest frequencies/probabilities, and create a parent node for them.
(b) Assign the sum of the children's frequencies/probabilities to the parent node and insert it into OPEN.
(c) Assign codes 0 and 1 to the two branches of the tree, and delete the children from OPEN.

Q45. Create the Huffman code for a 6-symbol alphabet with the following symbol frequencies: A = 1, B = 2, C = 4, D = 8, E = 16, F = 32.
Ans: create OPEN (in increasing order of frequency) = {A B C D E F}.
Step 1: in OPEN, A and B have the lowest frequencies. Remove them from the list and create a new parent node p1 with frequency 1 + 2 = 3, with A as its left child and B as its right child. Insert p1 into the list and sort it; OPEN becomes {p1 C D E F}, and the Huffman tree so far is p1 with children A and B.
Step 2: now remove p1 and C from OPEN and create a new node p2 with frequency 3 + 4 = 7, with p1 as its left child and C as its right child. Insert p2 into the list and sort it; OPEN becomes {p2 D E F}.
Repeat similar steps until one element remains in OPEN: p3 combines p2 and D (frequency 15), p4 combines p3 and E (frequency 31), and the final root p5 combines p4 and F (frequency 63).
huffmann code can be found by traversing node from root A=00000 B=00001 C=0001 D=001 E=01 F=1 Note: in above example, no element is prefix of other element. So huffmann code is also called as huffmann prefix code. 4 . AVL Trees an AVL tree is a self-balancing binary search tree. In an AVL tree, the heights of the two child subtrees of any node differ by at most one; therefore, it is also said to be heightbalanced. Search, insertion, and deletion all take O(log n) time in both the average and worst cases, where n is the number of nodes in the tree prior to the operation. Insertions and deletions may require the tree to be rebalanced by one or more tree rotations. The balance factor of a node is the height of its left subtree minus the height of its right subtree, and a node with balance factor 1, 0, or 1 is considered balanced. A node with any other balance factor is considered unbalanced and requires rebalancing the tree. The balance factor is either stored directly at each node or computed from the heights of the subtrees. Searching: Searching in AVL tree is performed exactly as in an unbalanced binary search tree. Insertion: After inserting a node, it is necessary to check each of the node's ancestors for consistency with the rules of AVL. For each node checked, if the balance factor remains 1, 0, or +1 then no rotations are necessary. However, if the balance factor becomes 2 then the subtree rooted at this node is unbalanced. If insertions are performed serially, after each insertion, at most two tree rotations are needed to restore the entire tree to the rules of AVL. There are four cases which need to be considered, of which two are symmetric to the other two. Let P be the root of the unbalanced subtree. Let R be the right child of P. Let L be the left child of P. 1. 
Right-Right case and Right-Left case: If the balance factor of P is -2, then the right subtree outweighs the left subtree of the given node, and the balance factor of the right child (R) must be checked. If the balance factor of R is -1 or 0, a single left rotation with P as the root is needed. If the balance factor of R is +1, a double left rotation is needed. The first

rotation is a right rotation with R as the root. The second is a left rotation with P as the root. 2. Left-Left case and Left-Right case: If the balance factor of P is +2, then the left subtree outweighs the right subtree of the given node, and the balance factor of the left child (L) must be checked. If the balance factor of L is +1 or 0, a single right rotation with P as the root is needed. If the balance factor of L is -1, a double right rotation is needed. The first rotation is a left rotation with L as the root; the second is a right rotation with P as the root. The figure below describes the rotations in the four cases. In the figure, the root is the node with balance factor +2 or -2 that violates the AVL condition; the pivot is the node that becomes the root after the rotation.
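As a concrete illustration (not part of the original text), here is a minimal C sketch of the single rotation used in the Left-Left case: a right rotation promotes the pivot (the left child) over the old root. Names are illustrative.

```c
#include <stdlib.h>

/* Minimal BST node for illustrating an AVL rotation. */
struct Node {
    int key;
    struct Node *left, *right;
};

struct Node *new_node(int key) {
    struct Node *n = malloc(sizeof *n);
    n->key = key;
    n->left = n->right = NULL;
    return n;
}

/* Right rotation with p as root: the pivot (p's left child) is promoted,
   and the pivot's old right subtree becomes p's new left subtree. */
struct Node *rotate_right(struct Node *p) {
    struct Node *pivot = p->left;
    p->left = pivot->right;
    pivot->right = p;
    return pivot;             /* pivot is the new root of this subtree */
}
```

Rotating the left-leaning chain 30 - 20 - 10 produces a balanced subtree rooted at 20, with 10 and 30 as its children.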

Deletion: If the node is a leaf, remove it. If the node is not a leaf, replace it with either the largest key in its left subtree (its in-order predecessor) or the smallest key in its right subtree (its in-order successor), and remove that node. The node used as the replacement has at most one subtree. After deletion, retrace the path from the parent of the replacement back up to the root, adjusting the balance factors as needed. Q46. What is the maximum height of any AVL-tree with 7 nodes? Assume that the height of a tree with a single node is 0. (A) 2 (B) 3 (C) 4 (D) 5 CS2009 Ans. B Explanation: Try to build an AVL tree of maximum height from 7 nodes.

The maximum height of an AVL tree with n nodes is about 1.44 log2 n. Q47. A weight-balanced tree is a binary tree in which, for each node, the number of nodes in the left sub tree is at least half and at most twice the number of nodes in the right sub tree. The maximum possible height (number of nodes on the path from the root to the furthest leaf) of such a tree on n nodes is best described by which of the following? (a) log2 n (b) log4/3 n (c) log3 n (d) log3/2 n CS2001 Ans. d Explanation: The smaller subtree of any node holds at least one third of that node's nodes, so each step down the longest path discards at least one third of the remaining nodes; the height is therefore at most log3/2 n.
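Q46 can also be checked numerically. The fewest nodes an AVL tree of height h can contain satisfies N(h) = N(h-1) + N(h-2) + 1 (a root plus one subtree of each permitted height). This small C sketch, added here for illustration, confirms that 7 nodes allow height 3 but not height 4:

```c
/* Minimum number of nodes in an AVL tree of height h
   (height of a single node taken as 0, as in Q46). */
int min_avl_nodes(int h) {
    if (h == 0) return 1;
    if (h == 1) return 2;
    return min_avl_nodes(h - 1) + min_avl_nodes(h - 2) + 1;
}
```

min_avl_nodes(3) = 7 while min_avl_nodes(4) = 12, so with only 7 nodes the height can be at most 3.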

B TREES
A B-tree is a specialized multiway tree designed especially for use on disk (secondary storage). In a B-tree each node may contain a large number of keys, so the number of subtrees of each node may also be large. The B-tree is a generalization of a binary search tree in that more than two paths diverge from a single node. B-trees are optimized for systems that read and write large blocks of data, and are commonly used in databases and file systems. A B-tree of order m (the maximum number of children for each node) is a tree which satisfies the following properties: 1. Every node has at most m children. 2. Every node (except the root) has at least ceil(m/2) children. 3. The root has at least two children if it is not a leaf node. 4. All leaves appear on the same level, and carry information. 5. A non-leaf node with k children contains k-1 keys. Example B-Tree: The following is an example of a B-tree of order 5. This means that (except for the root) all internal nodes have at least ceil(5/2) = 3 children and hence at least 2 keys; the maximum number of children a node can have is 5 (so 4 is the maximum number of keys). Each leaf node likewise holds at least 2 keys. In practice B-trees usually have orders a lot bigger than 5.
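The key-count bounds implied by properties 1-5 can be written down directly; the helper names below are illustrative, not from the original:

```c
/* For a B-tree of order m (at most m children per node): a non-root node
   has at least ceil(m/2) children and therefore at least ceil(m/2) - 1
   keys; every node has at most m - 1 keys. */
int min_keys(int m) { return (m + 1) / 2 - 1; }  /* (m+1)/2 == ceil(m/2) for ints */
int max_keys(int m) { return m - 1; }
```

For the order-5 example above this gives a minimum of 2 and a maximum of 4 keys per non-root node; for an order-4 tree (a 2-3-4 tree) the minimum is 1 key.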

Inserting a New Item When inserting an item, first do a search for it in the B-tree. If the item is not already in the B-tree, this unsuccessful search will end at a leaf. If there is room in this leaf, just insert the new item here. Note that this may require that some existing keys be moved one to the right to make room for the new item. If instead this leaf node is full so that there is no room to add the new item, then the node must be "split" with about half of the keys going into a new node to the right of this one. The median (middle) key is moved up into the parent node. (Of course, if that node has no room, then it may have to be split as well.) Note that when adding to an internal node, not only might we have to move some keys one position to the right, but

the associated pointers have to be moved right as well. If the root node is ever split, the median key moves up into a new root node, thus causing the tree to increase in height by one. Let's work our way through an example similar to that given by Kruse. Insert the following letters into what is originally an empty B-tree of order 5: C N G A H E K Q M F W L T Z D P R X Y S Order 5 means that a node can have a maximum of 5 children and 4 keys. All nodes other than the root must have a minimum of 2 keys. The first 4 letters get inserted into the same node, resulting in this picture

When we try to insert the H, we find no room in this node, so we split it into 2 nodes, moving the median item G up into a new root node. Note that in practice we just leave the A and C in the current node and place the H and N into a new node to the right of the old one.
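The split just described — five keys, median moved up, the two halves becoming siblings — can be sketched in C. This is a toy order-5 node holding single-character keys; all names are illustrative:

```c
#include <string.h>

#define ORDER 5               /* a node holds at most ORDER - 1 = 4 keys */

struct SplitResult {
    char median;              /* key that moves up into the parent */
    char left[3], right[3];   /* the two resulting sibling nodes   */
};

/* keys[] holds 4 sorted keys; incoming is the 5th key that overflows it. */
struct SplitResult split_leaf(const char keys[ORDER - 1], char incoming) {
    char all[ORDER];
    int i = ORDER - 1;
    memcpy(all, keys, ORDER - 1);
    while (i > 0 && all[i - 1] > incoming) { all[i] = all[i - 1]; i--; }
    all[i] = incoming;        /* insert into sorted position */
    struct SplitResult s;
    s.median = all[2];        /* middle of the 5 keys */
    s.left[0] = all[0]; s.left[1] = all[1]; s.left[2] = '\0';
    s.right[0] = all[3]; s.right[1] = all[4]; s.right[2] = '\0';
    return s;
}
```

Splitting the node A C G N on the arrival of H yields median G with siblings A C and H N, exactly as the text describes.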

Inserting E, K, and Q proceeds without requiring any splits:

Inserting M requires a split. Note that M happens to be the median key and so is moved up into the parent node.

The letters F, W, L, and T are then added without needing any split.

When Z is added, the rightmost leaf must be split. The median item T is moved up into the parent node. Note that by moving up the median key, the tree is kept fairly balanced, with 2 keys in each of the resulting nodes.

The insertion of D causes the leftmost leaf to be split. D happens to be the median key and so is the one moved up into the parent node. The letters P, R, X, and Y are then added without any need of splitting:

Finally, when S is added, the node with N, P, Q, and R splits, sending the median Q up to the parent. However, the parent node is full, so it splits, sending the median M up to form a new root node. Note how the 3 pointers from the old parent node stay in the revised node that contains D and G.

Deleting an Item In the B-tree as we left it at the end of the last section, delete H. Of course, we first do a lookup to find H. Since H is in a leaf and the leaf has more than the minimum number of keys, this is easy. We move the K over where the H had been and the L over where the K had been. This gives:

Next, delete the T. Since T is not in a leaf, we find its successor (the next item in ascending order), which happens to be W, and move W up to replace the T. That way, what we really have to do is to delete W from the leaf, which we already know how to do, since this leaf has extra keys. In ALL cases we reduce deletion to a deletion in a leaf, by using this method.

Next, delete R. Although R is in a leaf, this leaf does not have an extra key; the deletion results in a node with only one key, which is not acceptable for a B-tree of order 5. If the sibling node to the immediate left or right has an extra key, we can then borrow a key from the parent and move a key up from this sibling. In our specific case, the sibling to the right has an extra key. So, the successor W of S (the last key in the node where the deletion occurred), is moved down from the parent, and the X is moved up. (Of course, the S is moved over so that the W can be inserted in its proper place.)

Finally, let's delete E. This one causes lots of problems. Although E is in a leaf, the leaf has no extra keys, nor do the siblings to the immediate right or left. In such a case the leaf has to be combined with one of these two siblings. This includes moving down the parent's key that was between those of these two leaves. In our example, let's combine the leaf containing F with the leaf containing A C. We also move down the D.

Of course, you immediately see that the parent node now contains only one key, G. This is not acceptable. If this problem node had a sibling to its immediate left or right that had a spare key, then we would again "borrow" a key. Suppose for the moment that the right sibling (the node with Q X) had one more key in it somewhere to the right of Q. We would then move M down to the node with too few keys and move the Q up where the M had been. However, the old left subtree of Q would then have to become the right subtree of M. In other words, the N P node would be attached via the pointer field to the right of M's new location. Since in our example we have no way to borrow a key from a sibling, we must again combine with the sibling, and move down the M from the parent. In this case, the tree shrinks in height by one.

Another Example Here is a different B-tree of order 5. Let's try to delete C from it

We begin by finding the immediate successor, which would be D, and move the D up to replace the C. However, this leaves us with a node with too few keys.

Since neither the sibling to the left or right of the node containing E has an extra key, we must combine the node with one of these two siblings. Let's consolidate with the A B node.

But now the node containing F does not have enough keys. However, its sibling has an extra key. Thus we borrow the M from the sibling, move it up to the parent, and bring the J down to join the F. Note that the K L node gets reattached to the right of the J.

Q48. B-trees are preferred to binary trees in databases because (a) Disk capacities are greater than memory capacities (b) Disk access is much slower than memory access (c) Disk data transfer rates are much less than memory data transfer rates (d) Disks are more reliable than memory

CS2000

Ans. a, b Explanation: A self-balancing binary search tree can be used for searching in main memory, but on disk O(log2 n) levels are not acceptable: disk capacity is much larger than main memory, larger capacity means many more records and hence larger n, and disk access is much slower than memory access. In processing a query we traverse a path from the root to a leaf node. If there are K search-key values in the file, this path is no longer than ceil(log_ceil(n/2) K), where n is the number of links possible in any given node. This means that the path is short even in large files. For a 4 KB disk block with a search-key size of 12 bytes and a disk pointer of 8 bytes, n is around 200. Even with n = 100, a lookup among 1 million search-key values takes only ceil(log50 1,000,000) = 4 node accesses. Since the root is usually in the buffer, typically only 3 or fewer disk reads are needed. B-tree of even order: Consider a B-tree of degree 4: maximum keys per node = 3 and minimum children = 4/2 = 2. First create a B-tree for 15 6 21 27 12 18. 15, 6, 21 can be inserted without any splitting, but inserting 27 requires a split, and here the problem is deciding whether 15 or 21 out of 6 15 21 27 is moved up. Right biasing: the right child keeps more keys than the left child; in this case 15 is moved up. Left biasing: the left child keeps more keys than the right child; in this case 21 is moved up. Left Biasing
Left biasing:       [21]          Right biasing:       [15]
                   /    \                             /    \
              [6 15]    [27]                       [6]    [21 27]
Inserting 12 does not require any split: with left biasing the leftmost node becomes 6 12 15 (with right biasing it becomes 6 12). Inserting 18 then requires splitting the leftmost node in the case of left biasing, but in the case of right biasing no split is required, since 18 goes into the node 21 27.

Left biasing:      [15 21]             Right biasing:       [15]
                  /   |   \                                /    \
            [6 12]  [18]  [27]                       [6 12]    [18 21 27]
Q49. Consider the following 2-3-4 tree (i.e. a tree with a minimum degree of two) in which each data item is a letter. The usual alphabetical ordering of letters is used in constructing the tree.

What is the result of inserting G in the above tree? (Options (A), (B), and (C) were given as tree diagrams.)

(D) None of These.

CS2003

Ans. B Explanation: A 2-3-4 tree is a tree whose internal nodes (except possibly the root) have minimum degree 2 and maximum degree 4. G is inserted into the node B H I, and the insertion requires two splits. We can use either left biasing or right biasing, but whichever biasing is chosen should be used consistently for the whole tree. The figures below show the tree after insertion of G under left biasing and under right biasing.
(Figures omitted: the resulting trees after inserting G, drawn once with left biasing and once with right biasing.)

B+ Trees
This is an advanced variant of the B-tree in which all data resides in the leaf nodes. When a leaf node splits, the key that moves up into the parent is also retained (copied) in the successor leaf; when an internal node splits, it behaves exactly as in a B-tree. Example: Create a B+ tree of order 5 for C N G A H E K Q M F W L T Z D P R X. Minimum number of keys = (n-1)/2 = 2. Maximum number of keys = n-1 = 4. Step 1: C N G A can be inserted without splitting.

Step 2: inserting H requires a split; G is moved up and also copied to the successor leaf (if right biasing is used).

Step 3: E and K can be inserted without any split.

Step 4: inserting Q causes a split in the rightmost node; K is moved up and copied to the successor leaf node.

Step 5: insert M F (no split)

Step 6: insert W requires a split, N is moved up.

Step 7: insert L T (no split)

Step 8: inserting Z requires a split in the rightmost node.

Step 9: inserting D requires two splits. The first split is in the leftmost node: D is moved up and copied to the successor leaf node.

The second split is in an internal node. When an internal node splits, the key is not copied to a successor node, because the key is already present at the leaf level.

K is moved up.

Step 10: insert P R X (no split).
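The B+-tree leaf split used throughout these steps (median moved up and also copied into the successor leaf) can be sketched as follows; the names are illustrative:

```c
#include <string.h>

/* B+-tree leaf split (order 5, right biasing): on overflow the median key
   goes up into the parent but is ALSO retained in the right (successor)
   leaf, because in a B+ tree all data lives at the leaf level. */
struct BPlusSplit {
    char up;                 /* separator copied into the parent */
    char left[3];            /* keys kept in the old leaf        */
    char right[4];           /* successor leaf; starts with `up` */
};

struct BPlusSplit bplus_split_leaf(const char sorted5[5]) {
    struct BPlusSplit s;
    s.up = sorted5[2];
    memcpy(s.left, sorted5, 2);      s.left[2] = '\0';
    memcpy(s.right, sorted5 + 2, 3); s.right[3] = '\0';
    return s;
}
```

For the overflowing leaf A C G H N of Step 2 this yields G moved up, with leaves A C and G H N — note that G appears both in the parent and in the successor leaf.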

Q. The following key values are inserted into a B+ - tree in which order of the internal nodes
is 3, and that of the leaf nodes is 2, in the sequence given below. The order of internal nodes is the maximum number of tree pointers in each node, and the order of leaf nodes is the maximum number of data items that can be stored in it. The B+ - tree is initially empty. 10, 3, 6, 8, 4, 2, 1 The maximum number of times leaf nodes would get split up as a result of these insertions is (A) 2 (B) 3 (C) 4 (D) 5 CS2009 Ans. C Explanation: The order of the leaf nodes is 2, so at most 2 data items fit in a leaf node; the order of the internal nodes is 3, so at most 2 keys fit in an internal node.
(Figures omitted: the original traced these insertions one by one, marking each overflowing leaf as an "invalid state" requiring a split and numbering the leaf splits 1st through 4th.)
Miscellaneous Questions
Q50. Let LASTPOST, LASTIN and LASTPRE denote the last vertex visited in a postorder, inorder and preorder traversal, respectively, of a complete binary tree. Which of the following is always true? (a) LASTIN = LASTPOST (b) LASTIN = LASTPRE (c) LASTPRE = LASTPOST (d) None of the above CS2000 Ans. b Explanation: For a complete binary tree (here taken to mean all leaves at the same depth):

Traversal    First node visited    Last node visited
Inorder      leftmost node         rightmost node
Postorder    leftmost node         root node
Preorder     root node             rightmost node
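The table can be verified by running the three traversals on a small perfect tree stored in an array (children of index i at 2i+1 and 2i+2). This check is illustrative, with node labels equal to their indices:

```c
#define NODES 7            /* a perfect binary tree on indices 0..6 */

static int last_visited;   /* records the last node each traversal touches */

void inorder(int i)   { if (i >= NODES) return; inorder(2*i + 1); last_visited = i; inorder(2*i + 2); }
void preorder(int i)  { if (i >= NODES) return; last_visited = i; preorder(2*i + 1); preorder(2*i + 2); }
void postorder(int i) { if (i >= NODES) return; postorder(2*i + 1); postorder(2*i + 2); last_visited = i; }
```

Inorder and preorder both end at node 6, the rightmost node, while postorder ends at the root 0 — so LASTIN = LASTPRE.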

Note: to solve this type of question, first draw an example tree, then try to prove each option incorrect. Q51. Consider the following functions f(n) = 3 n^sqrt(n), g(n) = 2^(sqrt(n) log2 n), h(n) = n! Which of the following is true? (a) h(n) is O(f(n)) (b) h(n) is O(g(n)) (c) g(n) is not O(f(n)) (d) f(n) is O(g(n)) Ans. d Explanation: n^sqrt(n) = 2^(sqrt(n) log2 n), so g(n) and f(n) are of the same order, and h(n) = n! is larger than both. Q52. Randomized quicksort is an extension of quicksort where the pivot is chosen randomly. What is the worst case complexity of sorting n numbers using randomized quicksort? (a) O(n) (b) O(n log n) (c) O(n^2) (d) O(n!) CS2001 Ans. c Explanation: Randomization makes the expected running time O(n log n), but in the worst case the random pivots can still be consistently bad, so the worst case complexity remains O(n^2). Q53. Let f(n) = n^2 log n and g(n) = n (log n)^10 be two positive functions of n. Which of the following statements is correct? (a) g(n) = O(f(n)) and f(n) = O(g(n)) (b) g(n) = O(f(n)) and f(n) != O(g(n)) (c) f(n) = O(g(n)) and g(n) != O(f(n)) (d) f(n) = O(g(n)) and g(n) = O(f(n)) CS2001 Ans. b Explanation: f(n)/g(n) = n/(log n)^9 tends to infinity, so n^2 log n asymptotically dominates n (log n)^10.

CS2000

Q54. The number of leaf nodes in a rooted tree of n nodes, with each node having 0 or 3 children, is: (a) (n-1)*2/3 + 1 (b) 3n-1 (c) 2n (d) 2n-1 CS2002 Ans. a Explanation: General formula: the number of leaf nodes in a rooted k-ary tree of n nodes, with each node having 0 or k children, is (n-1)(k-1)/k + 1. Note: take an example of one or two such trees and put the values into the options. A ternary tree with 10 nodes in which each node has 0 or 3 children has 7 leaf nodes; only option (a) gives this result. Q55. The running time of the following algorithm Procedure A(n)

If n<=2 return(1) else return(A(n/2)); is best described by (a) O(n) (b) O(log n) (c) O(log log n) (d) O(1) Ans. b Explanation: The recurrence relation for the above algorithm is T(n) = T(n/2) + 1. By the master method, n^(log_b a) = n^(log_2 1) = n^0 = 1, and f(n) = Theta(1) = Theta(n^(log_b a)), so by case 2 of the master theorem the complexity of the algorithm is O(log n).

CS2002
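Q55's halving recursion can also be counted directly; this illustrative helper returns the number of times A is invoked on input n:

```c
/* Number of invocations of A(n), where A(n) returns 1 for n <= 2
   and otherwise calls A(n/2): the recursion depth is about log2(n). */
int calls(int n) {
    if (n <= 2) return 1;
    return 1 + calls(n / 2);
}
```

calls(1024) = 10 = log2(1024), matching the O(log n) bound.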

Q56. Let A be a sequence of 8 distinct integers sorted in ascending order. How many distinct pairs of sequences, B and C, are there such that (i) each is sorted in ascending order, (ii) B has 5 and C has 3 elements, and (iii) the result of merging B and C gives A? (A) 2 (B) 30 (C) 56 (D) 256 CS2003 Ans. C Explanation: This question basically uses permutation and combination. Given 8 distinct integers sorted in increasing order, any five of them you select are already in ascending order, and the remaining three are also in sorted order; merging two sorted sequences then reproduces A. So all we have to do is choose 5 elements out of the 8, and there are 8C5 = 8C3 ways to do that. 8C3 = 8!/(3! * 5!) = (8*7*6*5!)/(3!*5!) = 56. Q57. The usual O(n^2) implementation of insertion sort to sort an array uses linear search to identify the position where an element is to be inserted into the already sorted part of the array. If instead we use binary search to identify the position, the worst case running time will (A) remain O(n^2) (B) become O(n(log n)^2) (C) become O(n log n) (D) become O(n) CS2003 Ans. A Explanation: Binary search reduces the number of comparisons: over all passes they total log 1 + log 2 + ... + log n = log n! = O(n log n). But after the position is found, up to O(n) elements must still be shifted in each pass, so the worst case running time remains O(n^2). Q58. In a heap with n elements with the smallest element at the root, the 7th smallest element can be found in time (A) O(n log n) (B) O(n) (C) O(log n) (D) O(1) CS2003 Ans. C Explanation: To find the 7th smallest element we can perform 7 delete-min operations on the heap, each of which costs O(log n); O(7 log n) = O(log n). Q59. A data structure is required for storing a set of integers such that each of the following operations can be done in O(log n) time, where n is the number of elements in the set: I.
Deletion of the smallest element II. Insertion of an element if it is not already present in the set. Which of the following data structures can be used for this purpose?

(A) (B) (C) (D)

A heap can be used but not a balanced binary search tree A balanced binary search tree can be used but not a heap Both balanced binary search tree and heap can be used Neither balanced binary search tree nor heap can be used

CS2003

Ans. B Explanation: In a min-heap, deleting the smallest element takes O(log n) and inserting an element takes O(log n), but checking whether an element is already present cannot be done in O(log n) time; a search may have to examine the whole heap. In a balanced binary search tree, deleting the smallest element requires walking to the leftmost node, O(log n), and removing it, also O(log n), for O(log n) in total; similarly, inserting an element only if it is not already present takes O(log n), because the search and the insert each cost O(log n). Q60. Let S be a stack of size n >= 1. Starting with the empty stack, suppose we push the first n natural numbers in sequence, and then perform n pop operations. Assume that push and pop operations take X seconds each, and Y seconds elapse between the end of one such stack operation and the start of the next operation. For m >= 1, define the stack-life of m as the time elapsed from the end of push(m) to the start of the pop operation that removes m from S. The average stack-life of an element of this stack is (A) n(X+Y) (B) 3Y + 2X (C) n(X+Y) - X (D) Y + 2X CS2003 Ans. C Explanation: To solve this question take a stack of size five (n = 5) and elements A B C D E. Each push and pop takes X seconds, and Y seconds elapse between consecutive operations. Push all five elements, then pop them, and compute the end-of-push and start-of-pop time of each element:

Element    End of push    Start of pop
A          X              9X+9Y
B          2X+Y           8X+8Y
C          3X+2Y          7X+7Y
D          4X+3Y          6X+6Y
E          5X+4Y          5X+5Y

Stack-life of E = (5X+5Y) - (5X+4Y) = Y. Stack-life of D = (6X+6Y) - (4X+3Y) = 2X + 3Y. Stack-life of C = (7X+7Y) - (3X+2Y) = 4X + 5Y. Stack-life of B = (8X+8Y) - (2X+Y) = 6X + 7Y. Stack-life of A = (9X+9Y) - X = 8X + 9Y. The average stack-life is (Y + (2X+3Y) + (4X+5Y) + (6X+7Y) + (8X+9Y)) / 5 = (20X + 25Y)/5 = 4X + 5Y. Only option C, n(X+Y) - X = 5(X+Y) - X = 4X + 5Y, matches. Q61. The best data structure to check whether an arithmetic expression has balanced parentheses is a (a) queue (b) stack (c) tree (d) list CS2004 Ans. B Q62. Consider the following C program segment: struct CellNode

{ struct CellNode *leftChild; int element; struct CellNode *rightChild; }; int DoSomething(struct CellNode *ptr) { int value = 0; if (ptr != NULL) { if (ptr->leftChild != NULL) value = 1 + DoSomething(ptr->leftChild); if (ptr->rightChild != NULL) value = max(value, 1 + DoSomething(ptr->rightChild)); } return value; } The value returned by the function DoSomething when a pointer to the root of a non-empty tree is passed as argument is (a) The number of leaf nodes in the tree (b) The number of nodes in the tree (c) The number of internal nodes in the tree (d) The height of the tree

CS2004

Ans. d Explanation: The statement value = 1 + DoSomething(ptr->leftChild) adds one each time we descend to a left child, recursively computing the height of the left subtree plus one. The statement value = max(value, 1 + DoSomething(ptr->rightChild)) compares that with the height of the right subtree plus one and keeps the larger, so the function returns the height of the tree. A simpler way to solve this question is to take an example tree and run the algorithm on it. Q63. Postorder traversal of a given binary search tree T produces the following sequence of keys: 10, 9, 23, 22, 27, 25, 15, 50, 95, 60, 40, 29. Which one of the following sequences of keys can be the result of an in-order traversal of the tree T? (a) 9, 10, 15, 22, 23, 25, 27, 29, 40, 50, 60, 95 (b) 9, 10, 15, 22, 40, 50, 60, 95, 23, 25, 27, 29 (c) 29, 15, 9, 10, 25, 22, 23, 27, 40, 60, 50, 95 (d) 95, 50, 60, 40, 27, 23, 22, 25, 10, 9, 15, 29 CS2005 Ans. a Explanation: an inorder traversal of a binary search tree always produces the keys in sorted order. Q64. How many distinct binary search trees can be created out of 4 distinct keys? (a) 5 (b) 14 (c) 24 (d) 42

CS2005

Ans. b. Explanation: The number of distinct binary search trees on k distinct keys is the Catalan number (2k choose k)/(k+1); for k = 4 this is 70/5 = 14. Alternatively, take 4 distinct keys and enumerate all the possible trees.
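Q64's count can also be reproduced with the standard dynamic program over the choice of root (an illustrative check, not part of the original):

```c
/* dp[i] = number of distinct BSTs on i distinct keys: choosing the r-th
   smallest key as root leaves dp[r-1] * dp[i-r] possibilities. */
long bst_count(int k) {
    if (k <= 1) return 1;
    long dp[k + 1];            /* C99 variable-length array */
    dp[0] = dp[1] = 1;
    for (int i = 2; i <= k; i++) {
        dp[i] = 0;
        for (int r = 1; r <= i; r++)
            dp[i] += dp[r - 1] * dp[i - r];
    }
    return dp[k];
}
```

bst_count(3) = 5 and bst_count(4) = 14, matching answer (b).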

Q65. A priority queue is implemented as a max-heap. Initially, it has 5 elements. The level-order traversal of the heap is given below: 10, 8, 5, 3, 2. Two new elements, 1 and 7, are inserted into the heap in that order. The level-order traversal of the heap after the insertion of the elements is: (a) 10, 8, 7, 5, 3, 2, 1 (b) 10, 8, 7, 2, 3, 1, 5 (c) 10, 8, 7, 1, 2, 3, 5 (d) 10, 8, 7, 3, 2, 1, 5 CS2005 Ans. d Explanation: Start from the given heap (level order 10, 8, 5, 3, 2). Inserting 1 places it as a child of 5; no sift-up is needed, giving 10, 8, 5, 3, 2, 1. Inserting 7 places it as the other child of 5; since 7 > 5, heapifying swaps 7 with 5.

level order traversal gives : 10 8 7 3 2 1 5
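Q65's insertions can be replayed with a standard array-based sift-up (root at index 0, parent of index i at (i-1)/2); the code below is an illustrative sketch:

```c
#define CAP 16
static int heap[CAP];
static int heap_size = 0;

/* Append x at the end of the array, then bubble it up
   while it exceeds its parent (max-heap property). */
void heap_insert(int x) {
    int i = heap_size++;
    heap[i] = x;
    while (i > 0 && heap[(i - 1) / 2] < heap[i]) {
        int t = heap[i];
        heap[i] = heap[(i - 1) / 2];
        heap[(i - 1) / 2] = t;
        i = (i - 1) / 2;
    }
}
```

Inserting 10, 8, 5, 3, 2, then 1, then 7 leaves the array as 10 8 7 3 2 1 5 — the level order of option (d).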


Q66. Suppose T(n) = 2T(n/2) + n, T(0) = T(1) = 1. Which one of the following is FALSE? (a) T(n) = O(n^2) (b) T(n) = Theta(n log n) (c) T(n) = Omega(n^2) (d) T(n) = O(n log n) CS2005 Ans. c Explanation: By case 2 of the master theorem, T(n) = Theta(n log n). Hence T(n) = O(n^2), T(n) = Theta(n log n) and T(n) = O(n log n) are all true, while T(n) = Omega(n^2) is false.

Q67. In a binary max heap containing n numbers, the smallest element can be found in time (A) O(n) (B) O(log n) (C) O(log log n) (D) O(1) CS2006

Ans. A Explanation: In a max-heap the smallest element can be anywhere among the leaves, so we cannot find it by walking from the root toward a leaf; a linear scan over the elements is required. Q68. A scheme for storing binary trees in an array X is as follows. Indexing of X starts at 1 instead of 0. The root is stored at X[1]. For a node stored at X[i], the left child, if any, is stored in X[2i] and the right child, if any, in X[2i+1]. To be able to store any binary tree on n vertices the minimum size of X should be (A) log2 n (B) n (C) 2n+1 (D) 2^n - 1 CS2006 Ans. D Explanation: The worst case is a tree forming a chain of right children: a chain of n nodes occupies indices 1, 3, 7, ..., 2^n - 1, so the array must have 2^n - 1 cells. For example, a right chain of 3 nodes requires 7 cells, all but 3 of which are empty.

Q69. Which one of the following in-place sorting algorithms needs the minimum number of swaps? (A) Quick sort (B) Insertion sort (C) Selection sort (D) Heap sort CS2006 Ans. C Explanation: Selection sort performs at most n-1 swaps (one per pass), the minimum among these algorithms. Q70. An element in an array X is called a leader if it is greater than all elements to the right of it in X. The best algorithm to find all leaders in an array (A) Solves it in linear time using a left to right pass of the array (B) Solves it in linear time using a right to left pass of the array (C) Solves it using divide and conquer in time Theta(n log n) (D) Solves it in time Theta(n^2) CS2006 Ans. B Q71. An implementation of a queue Q, using two stacks S1 and S2, is given below: void insert(Q, x) { push(S1, x); } void delete(Q) { if (stackempty(S2)) then { if (stackempty(S1)) then { print("Q is empty"); return; } else while (!(stackempty(S1)))

{ x = pop(S1); push(S2, x); } x = pop(S2); } } Let n insert and m (<= n) delete operations be performed in an arbitrary order on an empty queue. Let x and y be the number of push and pop operations performed respectively in the process. Which one of the following is true for all m and n? (A) n+m <= x < 2n and 2m <= y <= n+m (B) n+m <= x < 2n and 2m <= y <= 2n (C) 2m <= x < 2n and 2m <= y <= n+m (D) 2m <= x < 2n and 2m <= y <= 2n CS2006 Ans. A Explanation: Case 1: perform all n insertions first, then the m deletions. The n insertions push all n elements onto S1. The first delete pops all n elements from S1 and pushes them onto S2, and each of the m deletes pops once from S2. The total number of pushes is 2n and the total number of pops is n+m. Case 2: alternate one insert with one delete m times, then perform the remaining n-m inserts. Each delete then performs one push and two pops, so the m deletes cost m pushes and 2m pops; the total number of pushes is n+m and the total number of pops is 2m. Q72. The median of n elements can be found in O(n) time. Which one of the following is correct about the complexity of quick sort, in which the median is selected as pivot? (A) Theta(n) (B) Theta(n log n) (C) Theta(n^2) (D) Theta(n^3) CS2006 Ans. B Explanation: If the median can be found in O(n) time and is chosen as the pivot, the partition operation breaks the array into two equal parts, so the recurrence relation for quicksort is T(n) = 2T(n/2) + O(n). By the master method, T(n) = Theta(n log n). Statement for Linked Answer Questions 73 & 74: A 3-ary max heap is like a binary max heap, but instead of 2 children, nodes have 3 children. A 3-ary heap can be represented by an array as follows: the root is stored in the first location, a[0]; the nodes at the next level, from left to right, are stored from a[1] to a[3]; the nodes at the second level of the tree, from left to right, are stored from a[4] onward.
An item x can be inserted into a 3-ary heap containing n items by placing x in the location a[n] and pushing it up the tree to satisfy the heap property. Q73. Which one of the following is a valid sequence of elements in an array representing 3ary max heap? (A) 1, 3, 5, 6, 8, 9 (B) 9, 6, 3, 1, 8, 5 (C) 9, 3, 6, 8, 5, 1 (D) 9, 5, 6, 8, 3, 1 CS2006 Ans. D

Explanation: For an array a[0..5] to encode a 3-ary max heap, the root a[0] must be at least each of its children a[1], a[2], a[3], and a[1] must be at least its children a[4] and a[5]. In option (A) the root 1 is smaller than its children; in (B) a[1] = 6 is smaller than its child a[4] = 8; in (C) a[1] = 3 is smaller than its child a[4] = 5. Only option (D), 9, 5, 6, 8, 3, 1, satisfies the max-heap condition at every node. Q74. Suppose the elements 7, 2, 10 and 4 are inserted, in that order, into the valid 3-ary max heap found in the above question. Which one of the following is the sequence of items in the array representing the resultant heap? (A) 10, 7, 9, 8, 3, 1, 5, 2, 6, 4 (B) 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 (C) 10, 9, 4, 5, 7, 6, 8, 2, 1, 3 (D) 10, 8, 6, 9, 7, 2, 3, 4, 1, 5 CS2006 Ans. A Explanation: Insert 7 at a[6] (a child of a[1] = 5): 7 > 5, so they swap, giving 9, 7, 6, 8, 3, 1, 5. Insert 2 at a[7] (a child of a[2] = 6): no swap. Insert 10 at a[8] (a child of a[2] = 6): 10 > 6 swap, then 10 > 9 swap, giving 10, 7, 9, 8, 3, 1, 5, 2, 6. Insert 4 at a[9] (a child of a[2] = 9): no swap. The final array is 10, 7, 9, 8, 3, 1, 5, 2, 6, 4.
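Q73 can also be settled mechanically: in the array encoding, the children of a[i] are a[3i+1], a[3i+2], a[3i+3], so a validity check (illustrative, not part of the original) is a two-line loop:

```c
/* Returns 1 iff a[0..n-1] encodes a 3-ary max heap
   (children of a[i] at indices 3i+1, 3i+2, 3i+3). */
int is_3ary_max_heap(const int a[], int n) {
    for (int i = 0; i < n; i++)
        for (int c = 3 * i + 1; c <= 3 * i + 3 && c < n; c++)
            if (a[c] > a[i]) return 0;   /* child exceeds parent */
    return 1;
}
```

Option (D) passes the check; option (C) fails because a[4] = 5 exceeds its parent a[1] = 3.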

Q75. The height of a binary tree is the maximum number of edges in any root to leaf path. The maximum number of nodes in a binary tree of height h is: (A) 2^h - 1 (B) 2^(h-1) - 1 (C) 2^(h+1) - 1 (D) 2^(h+1) CS2007 Ans. C

Q76. The maximum number of binary trees that can be formed with three unlabeled nodes is: (A) 1 (B) 5 (C) 4 (D) 3 CS2007 Ans. B Q77. Which of the following sorting algorithms has the lowest worst-case complexity? (A) Merge sort (B) Bubble sort (C) Quick sort (D) Selection sort CS2007 Ans. A Q78. Consider the following segment of C-code: int j, n; j = 1; while (j <= n) j = j*2; The number of comparisons made in the execution of the loop for any n > 0 is: (A) floor(log2 n) + 1 (B) n (C) 2 * ceil(log2 n) (D) 2 * ceil(log2 n) + 1 Ans. D Explanation: solve the question by taking an example. Q79. A complete n-ary tree is a tree in which each node has n children or no children. Let I be the number of internal nodes and L be the number of leaves in a complete n-ary tree. If L = 41 and I = 10, what is the value of n? (A) 3 (B) 4 (C) 5 (D) 6 CS2007 Ans. C Explanation: In such a tree L = (n-1)I + 1, so n = (L-1)/I + 1 = (41-1)/10 + 1 = 5. Q80. What is the time complexity of the following recursive function: int DoSomething(int n) { if (n <= 2) return 1; else return (DoSomething(floor(sqrt(n))) + n); } (A) Theta(n^2) (B) Theta(n log n) (C) Theta(log n) (D) Theta(log log n) Ans. D Explanation: The recurrence relation for the above algorithm is T(n) = T(sqrt(n)) + 1; substituting n = 2^m gives T(2^m) = T(2^(m/2)) + 1, which by the master method solves to Theta(log m) = Theta(log log n). Q81. Consider the following C program segment where CellNode represents a node in a binary tree: struct CellNode { struct CellNode *leftChild; int element; struct CellNode *rightChild; }; int GetValue(struct CellNode *ptr) { int value = 0; if (ptr != NULL) { if ((ptr->leftChild == NULL) && (ptr->rightChild == NULL))

CS2007

CS2007

value = 1; else value = value + GetValue(ptr->leftChild) + GetValue(ptr->rightChild); } return value; } The value returned by GetValue when a pointer to the root of a binary tree is passed as its argument is: (A) the number of nodes in the tree (B) the number of internal nodes in the tree (C) the number of leaf nodes in the tree (D) the height of the tree Ans. C Q82. Consider the following C code segment: int IsPrime(int n) { int i; for (i = 2; i <= sqrt(n); i++) if (n % i == 0) { printf("Not Prime\n"); return 0; } return 1; } Let T(n) denote the number of times the for loop is executed by the program on input n. Which of the following is TRUE?

CS2007

(A) T(n) = O(sqrt(n)) and T(n) = Omega(sqrt(n)) (B) T(n) = O(sqrt(n)) and T(n) = Omega(1) (C) T(n) = O(n) and T(n) = Omega(sqrt(n)) (D) None of the above
CS2007

Ans. B Explanation: For a prime n the loop runs about sqrt(n) times, so T(n) = O(sqrt(n)); but for any even n it returns after the first iteration, so only T(n) = Omega(1) holds for all n. Q83. The minimum number of comparisons required to determine if an integer appears more than n/2 times in a sorted array of n integers is (A) Theta(n) (B) Theta(log n) (C) Theta(log* n) (D) Theta(1) CS2008 Ans. B Q84. You are given the postorder traversal, P, of a binary search tree on the n elements 1, 2, ..., n. You have to determine the unique binary search tree that has P as its postorder traversal. What is the time complexity of the most efficient algorithm for doing this? (A) Theta(log n) (B) Theta(n) (C) Theta(n log n) (D) None of the above, as the tree cannot be uniquely determined CS2008 Ans. B Explanation: The tree is unique: the inorder traversal of a binary search tree is the sorted sequence 1, 2, ..., n, and a binary tree is determined by its inorder and postorder traversals. The generic inorder-plus-postorder reconstruction takes O(n log n), but the BST property allows a Theta(n) algorithm that scans P from right to left, placing each key with the help of min/max range limits. Q85. We have a binary heap on n elements and wish to insert n more elements (not necessarily one after another) into this heap. The total time required for this is (A) Theta(log n) (B) Theta(n) (C) Theta(n log n) (D) Theta(n^2) CS2008

Ans. B
Explanation: since the insertions need not be done one after another, append all n new elements at the end of the heap array and run the bottom-up build-heap procedure on the resulting 2n elements, which takes Θ(2n) = Θ(n) time. (Inserting them one at a time would cost log(n+1) + log(n+2) + ... + log(2n) = Θ(log((2n)!/n!)) = Θ(n log n), but that is not the best method.)

Statement for Linked Answer Questions: 86 & 87
Consider the following C program that attempts to locate an element x in an array Y[] using binary search. The program is erroneous.
1. f(int Y[10], int x)
2. {   int i, j, k;
3.     i = 0; j = 9;
4.     do {
5.         k = (i + j) / 2;
6.         if (Y[k] < x) i = k; else j = k;
7.     } while (Y[k] != x && i < j);
8.     if (Y[k] == x) printf("x is in the array");
9.     else printf("x is not in the array");
10. }

Q86. On which of the following contents of Y and x does the program fail? (A) Y is {1 2 3 4 5 6 7 8 9 10} and x < 10 (B) Y is {1 3 5 7 9 11 13 15 17 19} and x < 1 (C) Y is {2 2 2 2 2 2 2 2 2 2} and x > 2 (D) Y is {2 4 6 8 10 12 14 16 18 20} and 2 < x < 20 and x is even

CS2008

Ans. C
Explanation: if you run this program on the values in option C, the while loop never terminates. Take Y = {2 2 2 2 2 2 2 2 2 2} and x = 3. Initially i = 0 and j = 9, so k = 4; Y[4] < 3 is true and i becomes 4. Next k = (4+9)/2 = 6, Y[6] < 3 is true, i becomes 6. Next k = (6+9)/2 = 7, Y[7] < 3 is true, i becomes 7. Next k = (7+9)/2 = 8, Y[8] < 3 is true, i becomes 8. From then on k = (8+9)/2 = 8 on every iteration, Y[8] != x and i < j remain true, and the loop never terminates.

Q87. The correction needed in the program to make it work properly is (A) Change line 6 to: if (Y[k] < x) i = k + 1; else j = k - 1; (B) Change line 6 to: if (Y[k] < x) i = k - 1; else j = k + 1; (C) Change line 6 to: if (Y[k] <= x) i = k; else j = k; (D) Change line 7 to: } while ((Y[k] == x) && (i < j)); CS2008 Ans. A

Explanation: as the previous question shows, line 6 can set i = k or j = k without changing them, so the interval [i, j] may stop shrinking and the loop never terminates. With i = k + 1 and j = k - 1 the interval shrinks strictly on every iteration, so option A is the correct fix.

Q88. What is the number of swaps required to sort n elements using selection sort, in the worst case? (A) Θ(n) (B) Θ(n log n) (C) Θ(n2) (D) Θ(n2 log n) CS2009 Ans. A
Explanation: selection sort performs at most one swap per pass, hence at most n - 1 swaps in total; its comparisons are Θ(n2), but its swaps are only Θ(n).

Q89. In a binary tree with n nodes, every node has an odd number of descendants. Every node is considered to be its own descendant. What is the number of nodes in the tree that have exactly one child? (A) 0 (B) 1 (C) (n - 1)/2 (D) n - 1 CS2010 Ans. A
Explanation: suppose some node had exactly one child. Its number of descendants would be 1 (itself) plus the size of the child's subtree; the latter is odd by assumption, so the total would be even, contradicting the premise. Hence no node has exactly one child.

Q90. Let P be a singly linked list. Let Q be the pointer to an intermediate node x in the list. What is the worst-case time complexity of the best known algorithm to delete the node x from the list? (A) O(n) (B) O(log2 n) (C) O(log n) (D) O(1) IT2004 Ans. D
Explanation: in a singly linked list we cannot traverse backwards, but since x is an intermediate node it has a successor. Copy the data of x->next into x and delete x->next; this removes x's value from the list in O(1) time without ever visiting x's predecessor.

Q91. An array of integers of size n can be converted into a heap by adjusting the heaps rooted at each internal node of the complete binary tree starting at the node [(n-1)/2], and doing this adjustment up to the root node (root node is at index 0) in order [(n-1)/2], [(n-3)/2], ..., 0. The time required to construct a heap in this manner is (A) O(log n) (B) O(n) (C) O(n log log n) (D) O(n log n) IT2004 Ans. B
Explanation: heapify is called once at each of the roughly n/2 internal nodes; a node at height h costs O(h), and summing O(h) over the at most n/2^(h+1) nodes at each height h gives O(n) in total.

Q92. Which one of the following binary trees has its in-order and preorder traversals as BCAD and ABCD, respectively?
(The four options are binary-tree diagrams, not reproduced here.)
IT2004 Ans. C
Explanation: the first node of the preorder, A, is the root. In the in-order BCAD, B and C come before A, so they form the left subtree, and D comes after A, so it is the right subtree. Within the left subtree, both traversals list B before C, so C is the right child of B. The answer is therefore the tree with root A, left child B (whose right child is C), and right child D.

Q93. Let f(n), g(n) and h(n) be functions defined for positive integers such that f(n) = O(g(n)), g(n) ≠ O(f(n)), g(n) = O(h(n)), and h(n) = O(g(n)). Which one of the following statements is FALSE? (A) f(n) + g(n) = O(h(n) + h(n)) (B) f(n) = O(h(n)) (C) h(n) = O(f(n)) (D) f(n)h(n) = O(g(n)h(n)) Ans. C
Explanation: the hypotheses say that f grows strictly more slowly than g, while g and h grow at the same rate. If h(n) = O(f(n)) held, then g(n) = O(h(n)) = O(f(n)), contradicting g(n) ≠ O(f(n)); so option C is FALSE.

Q94. The numbers 1, 2, ..., n are inserted in a binary search tree in some order. In the resulting tree, the right subtree of the root contains p nodes. The first number to be inserted in the tree must be (A) p (B) p + 1 (C) n - p (D) n - p + 1 IT2005 Ans. C
Explanation: the first number inserted becomes the root and never moves. In a binary search tree the right subtree holds the elements larger than the root; there are p of them, so the left subtree holds the n - p - 1 smaller elements and the root itself is n - p.
