
List of Important Questions

UNIT 1

INTRODUCTION

PART A

1. Design an algorithm to compute the area and circumference of a circle. [N/D 16]
2. Define recurrence relation. [N/D 16]

3. Give the Euclid’s algorithm for computing gcd (m, n). (M/J 2016)

4. Compare the order of growth n(n-1)/2 and n^2. (M/J 2016)

5. Write down the properties of asymptotic notations. [A/M 15]


6. Write down an algorithm to find the number of binary digits in the binary representation
of a positive decimal integer. [A/M 15]
7. Define Big 'Oh' Notation. [M/J 12, M/J 13]
8. What is meant by Linear Search? [M/J 12]
9. What do you mean by an algorithm? [M/J 06,07, N/D 08, M/J 13]
10. What is average case analysis? [M/J 14]
11. Define program proving and program verification. [M/J 14]
12. Define little Oh and Omega notations. [N/D 13]
13. How is an algorithm's time efficiency measured? [N/D 12,05]
14. Give the recurrence relation for divide and conquer. [N/D 12]
15. What are the algorithm design techniques? [N/D 05,06]
16. Mention any four classes of algorithm efficiency? [N/D 08]
17. What is stepwise refinement? [N/D 07]
18. Enumerate some important types of problems. [A/M 08]
19. What is recursive call? [M/J 07]
20. What is an activation frame?
21. What are the three different algorithms used to find the gcd of two numbers?
22. What are the fundamental steps involved in algorithmic problem solving?

PART B

1. If you have to solve the searching problem for a list of n numbers, how can you take
advantage of the fact that the list is known to be sorted?
Give separate answers for i) Lists represented as arrays ii) Lists represented as linked
lists. Compare the time complexities involved in the analysis of both the algorithms. [A/M 15]
2. i) Derive the worst case analysis of merge sort using suitable illustrations. [A/M 15] (8)
3. Discuss in detail all the asymptotic notations with examples. [M/J 12] (or)
Elaborate on asymptotic notations with examples. [A/M 10,11] [9] [or]
Give the definition and Graphical representation of Big O-notation. [16] [M/J 2016]
4. a) Briefly Explain time complexity and space complexity estimation. [M/J 13](6)
[or] What is space complexity? With an example explain components of fixed and variable
part in space complexity. [M/J 14] (16)
b) Write the linear search algorithm and analyse its time complexity. [M/J 13] (10)
[N/D 2016] [8m]
5. a) Explain Towers of Hanoi problem and solve it using recursion. [M/J 14, N/D 13]
6. What are the features of an efficient algorithm? Explain. [A/M 14]
7. Explain important problem types.
8. Give the algorithm to check whether all the elements in a given array of n elements are
distinct. [M/J 2015] [8m]
9. Give the recursive algorithm for finding the number of binary digits in n's binary
representation, where n is a positive decimal integer. Find the recurrence relation and
complexity. [M/J 2015] [16m]

NOTES
UNIT 1

INTRODUCTION

PART A

1. Design an algorithm to compute the area and circumference of a circle. [N/D 16]
1. Start
2. Read the value of the radius r of the circle
3. Set Pi = 3.14
4. Calculate area of circle: A = Pi x r x r
5. Calculate circumference of circle: C = 2 x Pi x r
   (Equivalently, since A/r = Pi x r and A/r = C/2, we get C = 2 x (A/r).)
6. Print area and circumference
7. End.
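
A minimal C++ sketch of the same steps (the 3.14 approximation of Pi is kept from the algorithm above):

#include <iostream>

int main() {
    const double PI = 3.14;
    double r;
    std::cout << "Enter radius: ";
    std::cin >> r;                          // Step 2: read the radius
    double area = PI * r * r;               // Step 4: A = Pi x r x r
    double circumference = 2 * PI * r;      // Step 5: C = 2 x Pi x r
    std::cout << "Area = " << area << "\n"; // Step 6: print results
    std::cout << "Circumference = " << circumference << "\n";
    return 0;
}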

2. Define recurrence relation. [N/D 16]

Many algorithms, particularly divide and conquer algorithms, have time complexities
which are naturally modeled by recurrence relations. A recurrence relation is an equation
which is defined in terms of itself.

3. Give the Euclid’s algorithm for computing gcd (m, n). [M/J 2016]

ALGORITHM Euclid_gcd (m, n)
// Computes gcd (m, n) by Euclid's algorithm
// Input: Two non-negative, not-both-zero integers m and n
// Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m

Example: gcd (60, 24) = gcd (24, 12) = gcd (12, 0) = 12.
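
The same algorithm rendered as a small C++ function, as a quick sketch:

// Computes gcd(m, n) by Euclid's algorithm.
// Assumes m and n are non-negative and not both zero.
unsigned int gcd(unsigned int m, unsigned int n) {
    while (n != 0) {
        unsigned int r = m % n;  // r <- m mod n
        m = n;
        n = r;
    }
    return m;
}
// Example: gcd(60, 24) returns 12, matching the trace above.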

4. Compare the order of growth n(n-1)/2 and n^2. [M/J 2016]

    n    | n(n-1)/2 | n^2
    1    | 0        | 1
    2    | 1        | 4
    4    | 6        | 16
    8    | 28       | 64
    10   | 45       | 100
    100  | 4950     | 10000

Both functions are quadratic, but for every n the value of n(n-1)/2 is less than half of n^2,
so n(n-1)/2 has the lower complexity and the slower growth.

5. Write down the properties of asymptotic notations. [A/M 15]

Let f(n) and g(n) be two non-negative functions. Let n0 and c be two constants such that
n0 denotes some value of the input size and c > 0. Then we can write:

1. If f(n) <= c*g(n) for all n >= n0, then f(n) is said to be big oh of g(n),
i.e. f(n) = O(g(n)).

2. If f(n) >= c*g(n) for all n >= n0, then f(n) is said to be big omega of g(n),
i.e. f(n) = Ω(g(n)).

3. If there exist two positive constants c1 and c2 such that
c1*g(n) <= f(n) <= c2*g(n) for all n >= n0, then f(n) = Θ(g(n)).

6. Write down algorithm to find the number of binary digits in the binary
representation of a positive decimal integer. [A/M 15]

Algorithm CountBinDigits(int dec)
{
    int bin[32];
    int i, zeros, ones, total_digits;
    zeros = ones = i = 0;
    Write("Enter the decimal number to find its binary number");
    Read(dec);
    while (dec > 0)
    {
        bin[i] = dec % 2;
        i++;
        dec = dec / 2;
    }
    Write("Binary number is");
    for (int j = i-1; j >= 0; j--)
    {
        Write(bin[j]);
        if (bin[j] == 0)
            zeros++;
        else
            ones++;
    }
    Write("Number of zeros are", zeros);
    Write("Number of ones are", ones);
    total_digits = zeros + ones;
    Write("Total number of digits is", total_digits);
}

7. Define Big 'Oh' Notation. [M/J 12, M/J 13]

f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤
cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not
depend on n.
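
As a quick worked example, 3n + 2 = O(n): taking c = 4 and k = 2, we have
3n + 2 ≤ 4n for all n ≥ 2.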

8. What is meant by Linear Search? [M/J 12]


 Linear search or sequential search is a method for finding a particular value in a
list that checks each element in sequence until the desired element is found or the
list is exhausted. The list need not be ordered.

 Linear search is the simplest search algorithm; it is a special case of brute-force


search. Its worst case cost is proportional to the number of elements in the list. Its
expected cost is also proportional to the number of elements if all elements are
equally likely to be searched.

9. What do you meant by algorithm? [M/J 06,07,N/D 08,M/J 13]

An algorithm is a sequence of unambiguous instructions for solving a problem, i.e.,


for obtaining a required output for any legitimate input in a finite amount of time.

10. What is average case analysis? [M/J 14]

The average case analysis of an algorithm is its efficiency for an average case input
of size n. It provides information about an algorithm's behavior on a "typical" or "random"
input.

11. Define program proving and program verification. [M/J 14]


 Program proving means proving each and every instruction of the program with the
help of mathematical theorems.
 Program verification means checking the correctness of the program.

12. Define little Oh and Omega notations. [N/D 13]

o-notation (little oh):
A function t(n) is said to be in o(g(n)), denoted by t(n) ɛ o(g(n)), if for every positive
constant c there exists some nonnegative integer n0 such that
t(n) < cg(n) for all n ≥ n0,
that is, t(n) grows strictly more slowly than every constant multiple of g(n).

Ω-notation:
A function t(n) is said to be in Ω(g(n)), denoted by t(n) ɛ Ω(g(n)), if t(n) is bounded
below by some constant multiple of g(n) for all large n, i.e., if there exist some positive
constant c and some nonnegative integer n0 such that
t(n) ≥ cg(n) for all n ≥ n0.

13. How an algorithm's time efficiency measured? [N/D 12,05]

Algorithmic efficiency refers to the properties of an algorithm which relate to the amount
of computational resources used by the algorithm. Time efficiency is typically measured by
counting the number of times the algorithm's basic operation is executed as a function of
the input size n. An algorithm must be analysed to determine its resource usage;
algorithmic efficiency can be thought of as analogous to engineering productivity for a
repeating or continuous process.

14. Give the recurrence relation for divide and conquer. [N/D 12]
A divide-and-conquer algorithm consists of three steps:
• dividing a problem into smaller subproblems
• solving (recursively) each subproblem
• then combining solutions to subproblems to get solution to original problem
We use recurrences to analyze the running time of such algorithms. Suppose T(n) is
the number of steps in the worst case needed to solve the problem of size n. Let us split a
problem into a >= 1 subproblems, each of which is of input size n/b, where b > 1.
Observe that the number of subproblems a is not necessarily equal to b. The total
number of steps T(n) is obtained by all steps needed to solve the smaller subproblems,
T(n/b), plus the number f(n) needed to combine the solutions into a final one. The
following equation is called the divide-and-conquer recurrence relation:

    T(n) = a·T(n/b) + f(n)
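
For instance, in merge sort a problem is split into a = 2 subproblems of size n/2
(so b = 2) and the solutions are combined in linear time, giving
T(n) = 2T(n/2) + cn, whose solution is Θ(n log n).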

15. What are the algorithm design techniques? [N/D 05,06]


 Brute force
 Divide and Conquer
 Decrease and Conquer
 Transform and Conquer
 Greedy technique
 Dynamic programming
 Backtracking
 Branch and bound
16. Mention any four classes of algorithm efficiency? [N/D 08]

The various basic efficiency classes are

 Constant : 1
 Logarithmic : log n
 Linear : n
 N-log-n : n log n
 Quadratic : n2
 Cubic : n3
 Exponential : 2n
 Factorial : n!

17. What is stepwise refinement? [N/D 07]
In top-down design methodology the problem is solved in sequence (i.e. step by
step). This method is known as stepwise refinement.
16. How is efficiency of an algorithm defined? [N/D 07]
There are two types of efficiency of an algorithm. They are:
Time efficiency:
This indicates how fast the algorithm in question runs.
Space efficiency:
This deals with the extra space the algorithm requires.

18. Enumerate some important types of problems.[A/M 08]


 Sorting
 Searching
 String processing
 Graph problems
 Combinatorial problems
 Geometric problems
 Numerical problems

19. What is recursive call?[M/J 07]


A recursive algorithm makes one or more calls to itself; each such call is known as a
recursive call.
20. What is an activation frame?
An activation frame is the storage area for an invocation of a recursive program
(parameters, local variables, return address, etc.). Activation frames are allocated from the
frame stack, pointed to by the frame pointer.

21. What are the three different algorithms used to find the gcd of two numbers?
The three algorithms used to find the gcd of two numbers are
 Euclid’s algorithm
 Consecutive integer checking algorithm
 Middle school procedure

22. What are the fundamental steps involved in algorithmic problem solving?
The fundamental steps are
 Understanding the problem
 Ascertain the capabilities of computational device
 Choose between exact and approximate problem solving
 Decide on appropriate data structures
 Algorithm design techniques
 Methods for specifying the algorithm
 Proving an algorithm's correctness
 Analyzing an algorithm
 Coding an algorithm

PART B

1. If you have to solve the searching problem for a list of n numbers, how can you
take advantage of the fact that the list is known to be sorted?
Give separate answers for
i) Lists represented as arrays
ii) Lists represented as linked lists. Compare the time complexities involved in the
analysis of both the algorithms. [A/M 15]
i) Lists represented as arrays:

The idea behind the Binary Search is to split this array in half multiple times to "zero-in"
on the value we're looking for. Assume we are looking for the value 44 in the array, and we
want to know the index of the element that this value is located in, if, in fact, it is in the
array at all (remember that we always have to prepare for the case where the element is
not found at all). In this example, the value 44 is located in the 6th element of the array. In
the linear search shown above, it would require six comparisons between array elements
and the search key to find out that the 6th element of the array contained the value that we
are looking for. Let's see how the binary search works now.

In the series of figures below, a sequence of passes is shown for the binary search.
Let's go over them step-by-step. The first step is to look at the array in its initial state. We
are going to have to keep three "pointers" into the array for this algorithm - three integer
variables that contain the indices of three different places that we are concerned with in
the array: the low index that we are still looking at, the high index that we are still looking
at, and the midpoint index between the low and high. The low and high indices are the
first and last element indices of the array, and the midpoint is (low+high)/2. Note that we
need to do integer division to find the midpoint. That way, if the number of elements in the
array is even, and thus the "midpoint" is actually not an element, we will set the mid
pointer to one less than what floating point division would give us. So if there are 8
elements, then (0+7)/2 would be 3.5 with floating point division, but will return 3 with
integer division.

So, the mid pointer in the example below will start out pointing to element 4, which
contains the value 38. Here comes the key to the Binary Search - pay attention! First, you
compare the value that mid points to to see if it is the value we are looking for (44). It is
not in our case. So, you then ask the question: is the value that mid points to higher or
lower than our search value? In this example, it is lower. Now, since the array is sorted,
we KNOW that the value we are searching for MUST be in the UPPER HALF of the array,
since it is larger than the midpoint element value! So in one comparison, we have
discarded the lower half of the array as elements that we need to search! This is a
powerful tool for searching large arrays! But we still haven't found where the value really
is, so let's continue with the next figure.

So now we take the 2nd pass in the Binary Search algorithm. We know that we just
need to search the upper half of the array. We know that the value that mid pointed to
above is NOT the value that we are looking for. So, we now reset our "low" pointer to point
to the index of the element that is one higher than our previous midpoint. This actually
points to our search value, but we don't know that yet. So now low points to index 5, and
high still points to the last index in the array. We recalculate the midpoint, and using
integer division, (5+8)/2 will give 6 as the midpoint index to use. So now we repeat the
process. Does the element at the mid index contain our value? No. Is the value we are
searching for higher or lower than the value of the element at our midpoint index? In this
case, it is lower (44 is less than 77). So now we are going to reset our pointers and do a
third pass. See the next figure.

For our third pass, we reset the HIGH pointer since our search value was lower than
the value of the element at the midpoint. In the figure below, you can see that we reset the
high pointer to point to one less than the previous mid pointer (since we already knew that
the mid pointer did NOT point to our value). We leave the low pointer alone. Note that
now, low and high both point to element 5, and so (5+5)/2 = 5, and now the mid pointer
will point to 5 as well. So now we see if the element in the array that mid is pointing to

contains the value that we are searching for. And it does! We have successfully searched
for and found our value in three comparison steps! As noted at the beginning of this
section, the linear search would have taken six comparisons to find the same value.
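
The search just described can be written as a compact iterative C++ sketch (the array is
assumed sorted in ascending order):

// Returns the index of key in arr[0..n-1], or -1 if it is not present.
int binarySearch(const int arr[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = (low + high) / 2;   // integer division, as discussed above
        if (arr[mid] == key)
            return mid;               // found the search value
        else if (arr[mid] < key)
            low = mid + 1;            // discard the lower half
        else
            high = mid - 1;           // discard the upper half
    }
    return -1;                        // value not in the array
}

Since each pass halves the remaining range, at most about log2(n) + 1 comparisons are
needed, i.e. O(log n), versus O(n) for the linear search.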

ii) Lists represented as linked lists:

Linked Lists

Introduction
One disadvantage of using arrays to store data is that arrays are static structures and
therefore cannot be easily extended or reduced to fit the data set. Arrays also make
insertions and deletions expensive. In this chapter we consider another data structure
called Linked Lists that addresses some of the limitations of arrays.
A linked list is a linear data structure where each element is a separate object.

Each element (we will call it a node) of a list comprises two items - the data and
a reference to the next node. The last node has a reference to null. The entry point into a
linked list is called the head of the list. It should be noted that head is not a separate node,
but the reference to the first node. If the list is empty then the head is a null reference.
A linked list is a dynamic data structure. The number of nodes in a list is not fixed and can
grow and shrink on demand. Any application which has to deal with an unknown number of
objects will need to use a linked list.
One disadvantage of a linked list against an array is that it does not allow direct access to
the individual elements. If you want to access a particular item then you have to start at the
head and follow the references until you get to that item.

Another disadvantage is that a linked list uses more memory compared with an array - an
extra 4 bytes (on a 32-bit CPU) are needed to store a reference to the next node.

Types of Linked Lists

A singly linked list is described above


A doubly linked list is a list in which each node has two references, one to the next node
and another to the previous node.

Search an element in a Linked List (Iterative and Recursive)


Write a C function that searches a given key ‘x’ in a given singly linked list. The function
should return true if x is present in linked list and false otherwise.
bool search(Node *head, int x)
For example, if the key to be searched is 15 and linked list is 14->21->11->30->10, then
function should return false. If key to be searched is 14, then the function should return
true.
Iterative Solution
1) Initialize a node pointer, current = head.
2) Do the following while current is not NULL:
a) If current->key is equal to the key being searched, return true.
b) current = current->next
3) Return false.

Recursive Solution
bool search(head, x)
1) If head is NULL, return false.
2) If head's key is same as x, return true.
3) Else return search(head->next, x).
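
A self-contained C++ sketch of both solutions (the Node layout is an assumption matching
the pseudocode above):

struct Node {
    int key;
    Node* next;
};

// Iterative: walk the list until the key is found or the list ends.
bool search(Node* head, int x) {
    for (Node* current = head; current != nullptr; current = current->next)
        if (current->key == x)
            return true;
    return false;
}

// Recursive: an empty list fails; otherwise check the head, then recurse.
bool searchRec(Node* head, int x) {
    if (head == nullptr) return false;
    if (head->key == x) return true;
    return searchRec(head->next, x);
}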
Time and space complexity of common list structures:

Data Structure     | Average: Access | Search | Insertion | Deletion | Worst: Access | Search | Insertion | Deletion | Space (Worst)
Array              | Θ(1)            | Θ(n)   | Θ(n)      | Θ(n)     | O(1)          | O(n)   | O(n)      | O(n)     | O(n)
Stack              | Θ(n)            | Θ(n)   | Θ(1)      | Θ(1)     | O(n)          | O(n)   | O(1)      | O(1)     | O(n)
Queue              | Θ(n)            | Θ(n)   | Θ(1)      | Θ(1)     | O(n)          | O(n)   | O(1)      | O(1)     | O(n)
Singly-Linked List | Θ(n)            | Θ(n)   | Θ(1)      | Θ(1)     | O(n)          | O(n)   | O(1)      | O(1)     | O(n)

2. i) Derive the worst case analysis of merge sort using suitable illustrations. [A/M 15] (8)
The worst case scenario for Merge Sort is when, during every merge step, exactly
one value remains in the opposing list; in other words, no comparisons were skipped. This
situation occurs when the two largest values in a merge step are contained in opposing
lists. When this situation occurs, Merge Sort must continue comparing list elements in
each of the opposing lists until the two largest values are compared.

The complexity of worst-case Merge Sort is O(N log N), derived as follows.

Eq. 1 is the recurrence relation for Merge Sort:

    T(N) = 2T(N/2) + N - 1, with T(1) = 0            (Eq. 1)

T(N) refers to the total number of comparisons between list elements in Merge Sort
when we are sorting the entire list of N elements. The divide stage performs Merge Sort on
two halves of the list, which is what 2*T(N/2) refers to. The final part of the equation, N-1,
refers to the total comparisons in the merge step that returns the sorted list of N elements.
Eq. 1 describes the number of comparisons that occur in a merge sort, which is a recursive
procedure. Since the method is recursive, we will not be able to count every comparison
that occurs by only looking at a single call to Merge. Instead, we need to unroll the
recursion and count the total number of comparisons.

Equations 2-4 perform this action of unrolling the recursion by performing substitution. We
know what the value of T(N) is from Eq. 1, and by substitution we know the value of T(N/2):

    T(N) = 2[2T(N/4) + N/2 - 1] + N - 1              (Eq. 2)
         = 4T(N/4) + N - 2 + N - 1                   (Eq. 3)
         = 4T(N/4) + 2N - 3                          (Eq. 4)

At this point we are in the third recursive call of Merge Sort, and a pattern has become
clear enough to produce Eq. 5 below:

    T(N) = 2^k T(N/2^k) + kN - (2^k - 1)             (Eq. 5)

In Eq. 5, a new variable called "k" is introduced. This variable represents the depth of the
recursion. When the sort is recursively dividing the input list, the recursion stops when the
list contains a single element; a single element list is already sorted. This gives:

    N/2^k = 1            (Eq. 6)
    k = lg N             (Eq. 7)
    T(1) = 0             (Eq. 8)

By substituting Eq. 6 through Eq. 8 into Eq. 5, we eliminate the k term and reduce
the recurrence relation to produce the complexity for merge sort, Eq. 9. Thus we've shown
that Merge Sort has O(N log N) complexity with the worst possible input:

    T(N) = N lg N - N + 1 = O(N lg N)                (Eq. 9)

Here is another way to compute the asymptotic complexity: guess the answer (In
this case, O(n lg n)), and plug it directly into the recurrence relation. By looking at what
happens we can see whether the guess was correct or whether it needs to be increased to
a higher order of growth (or can be decreased to a lower order). This works as long as the
recurrence equations are monotonic in n, which is usually the case. By monotonic, we
mean that increasing n does not cause the right-hand side of any recurrence equation to
decrease.

Substitution method:

For example, consider our recurrence relation for merge sort. To show T(n) is O(n lg
n), we need to show that T(n) ≤ kn lg n for large n and some choice of k. Define F(n) = n
lg n, so we are trying to show that T(n) ≤ kF(n). This turns out to be true if we can plug
kF(n) into the recurrence relation for T(n) and show that the recurrence equations hold as
"≥" inequalities. Here, we plug the expression kn lg n into the merge-sort recurrence
relation:

kn lg n ≥ 2k(n/2) lg (n/2) + c4n

= kn lg (n/2) + c4n

= kn (lg n −1) + c4n

= kn lg n − kn + c4n

= kn lg n + (c4−k)n

Can we pick a k that makes this inequality come true for sufficiently large n? Certainly; it
holds if k≥c4. Therefore this function is O(n lg n). In fact, we can make the two sides
exactly equal by choosing k=c4, which tells us that it is Θ(n lg n) as well.

More generally, if we want to show that a recurrence relation solution is O(F(n)), we show
that we can choose k so that for each recurrence equation with kF(n) substituted for T(n),
LHS ≥ RHS for all sufficiently large n. If we want to show that a recurrence relation is
Θ(F(n)), we need to show that there is also a k such that LHS ≤ RHS for all sufficiently
large n. In the case above, it happens that we can choose the same k.

Why does this work? It's really another use of strong induction where the proposition to be
proved is that T(n) ≤ kF(n) for all sufficiently large n. We ignore the base case because we
can always choose a large enough k to make the inequality work for small n. Now we
proceed to the inductive step. We want to show that T(n+1) ≤ kF(n+1) assuming that for all
m≤n we have T(m) ≤ kF(m). We have

T(n+1) = 2T((n+1)/2) + c4n ≤ 2kF((n+1)/2) + c4n ≤ kF(n+1)

so by transitivity T(n+1) ≤ kF(n+1). The middle inequality follows from the induction
hypothesis T((n+1)/2) ≤ kF((n+1)/2) and from the monotonicity of the recurrence equation.
The last step is what we showed by plugging kF(n) into the recurrence and checking that it
holds for any sufficiently large n.

To see another example, we know that any function that is O(n lg n) is also O(n^2) though
not Θ(n^2). If we hadn't done the iterative analysis above, we could still verify that merge
sort is at least as good as insertion sort (asymptotically) by plugging kn^2 into the
recurrence and showing that the inequality holds for it as well:

kn^2 ≥ 2k(n/2)^2 + c4n

= ½kn^2 + c4n

For sufficiently large n, this inequality holds for any k. Therefore, the algorithm is O(n^2).

Because it holds for any k, the algorithm is in fact o(n^2). Thus, we can use recurrences to
show upper bounds that are not tight as well as upper bounds that are tight.

On the other hand, suppose we had tried to plug in kn instead of kn^2. Then we'd have:

kn ≥ 2k(n/2) + c4n

= kn + c4n
Because c4 is positive, the inequality doesn't hold for any k; therefore, the algorithm is not
O(n). In fact, we see that the inequality always holds in the opposite direction (<); therefore
kn is a strict lower bound on the running time of the algorithm; its running time is more than
linear.

Thus, reasonable guesses about the complexity of an algorithm can be plugged into a
recurrence and used not only to find the complexity, but also to obtain information about its
solution.

3. Discuss in detail all the asymptotic notations with examples. [M/J 12] (or)


Elaborate on asymptotic notations with examples. [A/M 10,11] [9] [or]
Give the definition and Graphical representation of Big O-notation. [16] [M/J 2016]

The main idea of asymptotic analysis is to have a measure of efficiency of


algorithms that doesn’t depend on machine specific constants, and doesn’t require
algorithms to be implemented and time taken by programs to be compared. Asymptotic
notations are mathematical tools to represent time complexity of algorithms for asymptotic
analysis. The following 3 asymptotic notations are mostly used to represent time
complexity of algorithms.

1) Θ Notation:

The theta notation bounds a function from above and below, so it defines exact
asymptotic behavior. A simple way to get the Theta notation of an expression is to drop low
order terms and ignore leading constants. For example, consider the following expression:

3n^3 + 6n^2 + 6000 = Θ(n^3)

Dropping lower order terms is always fine because there will always be an n0 after which
Θ(n^3) beats Θ(n^2) irrespective of the constants involved.

For a given function g(n), we denote by Θ(g(n)) the following set of functions:

Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that
0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}

The above definition means, if f(n) is theta of g(n), then the value f(n) is always between
c1*g(n) and c2*g(n) for large values of n (n >= n0). The definition of theta also requires
that f(n) must be non-negative for values of n greater than n0.

Big O Notation:

The Big O notation defines an upper bound of an algorithm; it bounds a function
only from above. For example, consider the case of Insertion Sort. It takes linear time in
the best case and quadratic time in the worst case. We can safely say that the time
complexity of Insertion sort is O(n^2). Note that O(n^2) also covers linear time.

If we use Θ notation to represent time complexity of Insertion sort, we have to use


two statements for best and worst cases:
1. The worst case time complexity of Insertion Sort is Θ(n^2).
2. The best case time complexity of Insertion Sort is Θ(n).

The Big O notation is useful when we only have upper bound on time complexity of an
algorithm. Many times we easily find an upper bound by simply looking at the algorithm.

O(g(n)) = {f(n): there exist positive constants c and n0 such that
0 <= f(n) <= cg(n) for all n >= n0}

Ω Notation:

Just as Big O notation provides an asymptotic upper bound on a function, Ω notation


provides an asymptotic lower bound.

Ω notation can be useful when we have a lower bound on the time complexity of an
algorithm. As discussed earlier, the best case performance of an algorithm is generally not
useful, so the Omega notation is the least used notation among all three.

For a given function g(n), we denote by Ω(g(n)) the set of functions.

Ω (g(n)) = {f(n): there exist positive constants c and n0 such that


0 <= cg(n) <= f(n) for all n >= n0}.

Let us consider the same Insertion sort example here. The time complexity of Insertion
Sort can be written as Ω(n), but it is not a very useful information about insertion sort, as
we are generally interested in worst case and sometimes in average case.

4. a)Briefly Explain time complexity and space complexity estimation.[M/J 13](6)


or
What is space complexity? With an example explain components of fixed and
variable part in space complexity. [M/J 14] (16)

 The space complexity of an algorithm is the amount of memory it needs to run to


completion.

 The time complexity of an algorithm is the amount of computer time it needs to run
to completion.

Performance evaluation can be loosely divided into two major phases:

(1) a priori estimates, and

(2) a posteriori testing. We refer to these as performance analysis and performance
measurement, respectively.

1. Space complexity

The better the time complexity of an algorithm is, the faster the algorithm will carry out
its work in practice. Apart from time complexity, its space complexity is also important:
this is essentially the number of memory cells which an algorithm needs. A good algorithm
keeps this number as small as possible, too.

There is often a time-space tradeoff involved in a problem, that is, it cannot be solved
with both little computing time and low memory consumption. One then has to make a
compromise and trade computing time for memory consumption or vice versa,
depending on which algorithm one chooses and how one parameterizes it.

(1) A fixed part that is independent of the characteristics (e.g., number, size) of the inputs
and outputs. This part typically includes the instruction space (i.e. space for the code),
space for simple variables and fixed-size component variables (also called aggregate),
space for constants, and so on.

(2) A variable part that consists of the space needed by component variables whose size
is dependent on the particular problem instance being solved, the space needed by
referenced variables (to the extent that it depends on instance characteristics), and the
recursion stack space (insofar as this space depends on the instance characteristics). The
space requirement S(P) of an algorithm P may therefore be written as
S(P) = c + Sp(instance characteristics), where c is a constant. When analyzing the space
complexity of an algorithm, we concentrate solely on estimating Sp(instance
characteristics). For any given problem, we need first to determine which instance
characteristics to use to measure the space requirement. Generally speaking, our choices
are related to the number and magnitude of the inputs to and outputs from the algorithm;
at times, more complex measures of the interrelationships among the data items are used.

Time complexity
The time complexity of an algorithm quantifies the amount of time taken by an
algorithm to run as a function of the length of the string representing the input. The time
complexity of an algorithm is commonly expressed using big O notation, which excludes
coefficients and lower order terms. When expressed this way, the time complexity is said
to be described asymptotically, i.e., as the input size goes to infinity. For example, if the
time required by an algorithm on all inputs of size n is at most 5n^3 + 3n for any n (bigger
than some n0), the asymptotic time complexity is O(n^3).

The time required to analyze the given problem of a particular size is known as the time
complexity. It depends on two components:

 Fixed Part – Compile time


 Variable Part – Run time, dependent on the problem instance. Run time is considered
usually and compile time is ignored.

Ways to measure time complexity

 Use a stop watch and time is obtained in seconds or milliseconds.


 Step Count - Count no of program steps.
 Comments are not evaluated so they are not considered as program step.
 In while loop steps are equal to the number of times loop gets executed.
 In for loop,steps are equal to number of times an expression is checked for
condition.
 A single expression is considered as a single step. Example: a+f+r+w+w/q-d-f
is one step.

 Rate of Growth(Asymptotic Notations)

Example of step count

Algorithm sum(a[], n)
{
    sum = 0;
    for (i = 0; i < n; i++)
    {
        sum = sum + a[i];
    }
    return sum;
}
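
As a worked count for this sketch under the conventions above (one step per condition
check and per expression): the assignment sum = 0 is 1 step, the loop condition is checked
n+1 times, the loop body executes n steps, and the return is 1 step, so T(n) = 2n + 3,
which grows linearly in n.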

Step count is a difficult approach if we need to compare results. For example, if we used two
techniques to solve one problem and T(1) is the running time of the first technique and T(2) is
the running time of the second technique, say T(1) = (n + 1) and T(2) = (n^2 + 1), we cannot
decide which one is the better solution; so for comparisons, 'rate of growth', i.e. asymptotic
notations of time/space complexity functions, are much more convenient to use.

4. b) Write the linear search algorithm and analyse its time complexity. [M/J 13]
(10) [N/D 2016] [8m]

Linear search algorithm is one of the most basic algorithms in computer science to
find a particular element in a list of elements.
Pseudocode:-
# Input: Array D, integer key
# Output: first index of key in D, or -1 if not found

For i = 0 to last index of D:


if D[i] equals key:
return i
return -1
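
A minimal C++ rendering of this pseudocode:

// Returns the first index of key in arr[0..n-1], or -1 if not found.
int linearSearch(const int arr[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;   // report the first matching index
    return -1;          // key is not present in the array
}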

Worst Case Analysis (Usually Done)

In the worst case analysis, we calculate an upper bound on the running time of an
algorithm. We must know the case that causes the maximum number of operations to be
executed. For Linear Search, the worst case happens when the element to be searched (x
in the above code) is not present in the array. When x is not present, the search() function
compares it with all the elements of arr[] one by one. Therefore, the worst case time
complexity of linear search would be Θ(n).

Average Case Analysis (Sometimes done)

In average case analysis, we take all possible inputs and calculate computing time
for all of the inputs. Sum all the calculated values and divide the sum by total number of
inputs. We must know (or predict) distribution of cases. For the linear search problem, let
us assume that all cases are uniformly distributed (including the case of x not being
present in array). So we sum all the cases and divide the sum by (n+1). Following is the
value of average case time complexity.

Average Case Time = (θ(1) + θ(2) + ... + θ(n) + θ(n+1)) / (n+1)

                  = θ((n+1)(n+2)/2) / (n+1)

                  = Θ(n)

Best Case Analysis (Bogus)


In the best case analysis, we calculate lower bound on running time of an algorithm.
We must know the case that causes minimum number of operations to be executed. In the
linear search problem, the best case occurs when x is present at the first location. The
number of operations in the best case is constant (not dependent on n). So time
complexity in the best case would be Θ(1).
5. a) Explain Towers of Hanoi problem and solve it using recursion. [M/J 14, N/D 13]
The Tower of Hanoi puzzle was invented by the French mathematician Edouard
Lucas in 1883. He was inspired by a legend that tells of a Hindu temple where the puzzle
was presented to young priests. At the beginning of time, the priests were given three
poles and a stack of 64 gold disks, each disk a little smaller than the one beneath it. Their
assignment was to transfer all 64 disks from one of the three poles to another, with two
important constraints. They could only move one disk at a time, and they could never
place a larger disk on top of a smaller one. The priests worked very efficiently, day and
night, moving one disk every second. When they finished their work, the legend said, the
temple would crumble into dust and the world would vanish.

Although the legend is interesting, you need not worry about the world ending any
time soon. The number of moves required to correctly move a tower of 64 disks is
2^64 - 1 = 18,446,744,073,709,551,615. At a rate of one move per second, that is
584,942,417,355 years! Clearly there is more to this puzzle than meets the eye.

Figure shows an example of a configuration of disks in the middle of a move from the first
peg to the third. Notice that, as the rules specify, the disks on each peg are stacked so that
smaller disks are always on top of the larger disks. If you have not tried to solve this puzzle
before, you should try it now. You do not need fancy disks and poles - a pile of books or
pieces of paper will work. Suppose you have a tower of five disks, originally on peg one. If
you already knew how to move a tower of four disks to peg two, you could then easily move
the bottom disk to peg three, and then move the tower of four from peg two to peg three.
But what if you do not know how to move a tower of height four? Suppose that you knew
how to move a tower of height three to peg three; then it would be easy to move the fourth
disk to peg two and move the three from peg three on top of it. But what if you do not know
how to move a tower of three? How about moving a tower of two disks to peg two and then
moving the third disk to peg three, and then moving the tower of height two on top of it?
But what if you still do not know how to do this? Surely you would agree that moving a
single disk to peg three is easy enough, trivial you might even say. This sounds like a base
case in the making. Here is a high-level outline of how to move a tower from the starting
pole to the goal pole, using an intermediate pole:

1. Move a tower of height-1 to an intermediate pole, using the final pole.


2. Move the remaining disk to the final pole.
3. Move the tower of height-1 from the intermediate pole to the final pole using the
original pole.

As long as we always obey the rule that the larger disks remain on the bottom of the stack,
we can use the three steps above recursively, treating any larger disks as though they
were not even there. The only thing missing from the outline above is the identification of a
base case. The simplest Tower of Hanoi problem is a tower of one disk. In this case, we
need move only a single disk to its final destination. A tower of one disk will be our base
case. In addition, the steps outlined above move us toward the base case by reducing the
height of the tower in steps 1 and 3.

procedure Hanoi(n: integer; source, dest, by: char);
begin
    if (n = 1) then
        writeln('Move the plate from ', source, ' to ', dest)
    else begin
        Hanoi(n-1, source, by, dest);
        writeln('Move the plate from ', source, ' to ', dest);
        Hanoi(n-1, by, dest, source);
    end;
end;
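
The number of moves made by this procedure satisfies the recurrence
M(n) = 2M(n-1) + 1 with M(1) = 1 (two recursive transfers of the n-1 smaller disks plus
one direct move), which solves to M(n) = 2^n - 1; for n = 64 this gives the 2^64 - 1 moves
quoted in the legend above.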

6. What are the features of an efficient algorithm? Explain. [A/M 14]

Fundamentals of Algorithmic problem solving

• Understanding the problem


• Ascertain the capabilities of the computational device
• Exact /approximate soln.
• Decide on the appropriate data structure
• Algorithm design techniques
• Methods of specifying an algorithm
• Proving an algorithms correctness
• Analysing an algorithm

Understanding the problem:

The problem given should be understood completely. Check if it is similar to some
standard problems & if a known algorithm exists; otherwise a new algorithm has to be
devised. Creating an algorithm is an art which may never be fully automated. An important
step in the design is to specify an instance of the problem.

Ascertain the capabilities of the computational device:

Once a problem is understood we need to know the capabilities of the computing
device. This can be done by knowing the type of the architecture, speed & memory
availability.

Exact /approximate soln.:

Once an algorithm is devised, it is necessary to show that it computes the answer for all
the possible legal inputs. The solution is stated in two forms: exact solution or approximate
solution. Examples of problems where an exact solution cannot be obtained are:

i) Finding the square root of a number.

ii) Solutions of non-linear equations.

Decide on the appropriate data structure:

Some algorithms do not demand any ingenuity in representing their inputs. Some
others are in fact predicated on ingenious data structures. A data type is a well-defined
collection of data with a well-defined set of operations on it. A data structure is an actual
implementation of a particular abstract data type. The elementary data structures are:

Arrays: These let you access lots of data fast (good). You can have arrays of any other
data type (good). However, you cannot make arrays bigger if your program decides it
needs more space (bad).

Records: These let you organize non-homogeneous data into logical packages to keep
everything together (good). These packages do not include operations, just data fields
(bad, which is why we need objects). Records do not help you process distinct items in
loops (bad, which is why arrays of records are used).

Sets: These let you represent subsets of a set with such operations as intersection, union,
and equivalence (good). Built-in sets are limited to a certain small size (bad, but we can
build our own set data type out of arrays to solve this problem if necessary).

Algorithm design techniques:

Creating an algorithm is an art which may never be fully automated. By mastering
these design strategies, it will become easier for you to devise new and useful algorithms.
Dynamic programming is one such technique. Some of the techniques are especially
useful in fields other than computer science, such as operations research and electrical
engineering. Some important design techniques are linear, non-linear and integer
programming.

Methods of specifying an algorithm:

There are mainly two options for specifying an algorithm: use of natural language or
pseudocode & flowcharts. A pseudocode is a mixture of natural language & programming
language-like constructs. A flowchart is a method of expressing an algorithm by a
collection of connected geometric shapes.

Proving an algorithms correctness:

Once an algorithm is devised, it is necessary to show that it computes the answer for all
the possible legal inputs. We refer to this process as algorithm validation. The process of
validation is to assure us that this algorithm will work correctly independent of issues
concerning the programming language it will be written in. A proof of correctness requires
that the solution be stated in two forms. One form is usually as a program which is
annotated by a set of assertions about the input and output variables of the program.
These assertions are often expressed in the predicate calculus. The second form is called
a specification, and this may also be expressed in the predicate calculus. A proof consists
of showing that these two forms are equivalent in that for every given legal input, they
describe the same output. A complete proof of program correctness requires that each
statement of the programming language be precisely defined and all basic operations be
proved correct. All these details may cause the proof to be very much longer than the
program.

Analyzing algorithms:

As an algorithm is executed, it uses the computer's central processing
unit to perform operations and its memory (both immediate and auxiliary) to hold the
program and data. Analysis of algorithms or performance analysis refers to the task of
determining how much computing time and storage an algorithm requires. This is a
challenging area which sometimes requires great mathematical skill. An important result
of this study is that it allows you to make quantitative judgments about the value of one
algorithm over another.

Another result is that it allows you to predict whether the software will meet any efficiency
constraints that exist.

7. Explain Important problem types.

Important Problem Types

• Sorting
• Searching
• String processing
• Graph problems
• Combinatorial problems
• Geometric problems
• Numerical problems

Sorting:
Sorting algorithm is an algorithm that puts elements of a list in a certain order. The
most used orders are numerical order and lexicographical order. Efficient sorting is
important to optimizing the use of other algorithms (such as search and merge algorithms)
that require sorted lists to work correctly; it is also often useful for canonicalizing data and
for producing human-readable output. More formally, the output must satisfy two
conditions:
1. The output is in nondecreasing order (each element is no smaller than the previous
element according to the desired total order);
2. The output is a permutation, or reordering, of the input.

Since the dawn of computing, the sorting problem has attracted a great deal of research,
perhaps due to the complexity of solving it efficiently despite its simple, familiar statement.
For example, bubble sort was analyzed as early as 1956.[1] Although many consider it a
solved problem, useful new sorting algorithms are still being invented (for example, library
sort was first published in 2004). The sorting problem provides a gentle introduction to a
variety of core algorithm concepts, such as big O notation, divide and conquer algorithms,
data structures, randomized algorithms, best, worst and average case analysis, time-space
tradeoffs, and lower bounds.

Searching :

In computer science, a search algorithm, broadly speaking, is an algorithm for
finding an item with specified properties among a collection of items. The items may be
stored individually as records in a database; or may be elements of a search space
defined by a mathematical formula or procedure, such as the roots of an equation with
integer variables; or a combination of the two, such as the Hamiltonian circuits of a
graph. Searching algorithms are closely related to the concept of dictionaries. Dictionaries
are data structures that support search, insert, and delete operations. One of the most
effective representations is a hash table. Typically, a simple function is applied to the key
to determine its place in the dictionary. Another efficient search algorithm on sorted tables
is binary search.

String processing:

String searching algorithms are important in all sorts of applications that we meet
every day. In text editors, we might want to search through a very large document
(say, a million characters) for the occurrence of a given string (maybe dozens of
characters). In text retrieval tools, we might potentially want to search through thousands
of such documents (though normally these files would be indexed, making this
unnecessary). Other applications might require string matching algorithms as part of a
more complex algorithm (e.g., the Unix program ``diff'' that works out the differences
between two similar text files). Sometimes we might want to search in binary strings (i.e.,
sequences of 0s and 1s). For example, the ``pbm'' graphics format is based on sequences
of 1s and 0s. We could express a task like ``find a wide white stripe in the image'' as a
string searching problem.

Graph problems:

Graph algorithms are one of the oldest classes of algorithms

There are two large classes of graphs:


• directed graphs (digraphs )
• undirected graphs

Some algorithms differ by the class. Moreover, the sets of problems for digraphs and
undirected graphs are different. There are special cases of digraphs and graphs that have
their own sets of problems. One example for digraphs is program graphs. Program
graphs are important in compiler construction and were studied in detail after the invention
of computers. Graphs are made up of vertices and edges. The simplest property of a
vertex is its degree, the number of edges incident upon it. The sum of the vertex degrees
in any undirected graph is twice the number of edges, since every edge contributes one to
the degree of both adjacent vertices. Trees are undirected graphs which contain no cycles.
Vertex degrees are important in the analysis of many graph algorithms.
Among classic algorithms/problems on digraphs we can note the following:


• Reachability.
• Shortest path (min-cost path). Find the path from B to A with the minimum cost
(determined as some simple function of the edges traversed in the path) (Dijkstra's and
Floyd's algorithms)
• Visit all nodes. Traversal. Depth- and breadth-first traversals
• Transitive closure. Determine all pairs of nodes that can reach each other (Floyd's
algorithm)
• Dominators. A node d dominates a node n if every path from the start node to n must
go through d. Notationally, this is written as d dom n. By definition, every node dominates
itself.

There are a number of related concepts:
o immediate dominator
o pre-dominator
o post-dominator.
o dominator tree
• Minimum spanning tree. A spanning tree is a set of edges such that every node is
reachable from every other node, and the removal of any edge from the tree eliminates
the reachability property. A minimum spanning tree is the smallest such tree. (Prim's
and Kruskal's algorithms)

Combinatorial problems:

From a more abstract perspective, the traveling salesman problem and the graph
coloring problem are examples of combinatorial problems: problems that ask to find a
combinatorial object - such as a permutation, a combination, or a subset - that satisfies
certain constraints and has some desired property. Generally speaking, combinatorial
problems are the most difficult problems in computing, from both the theoretical and
practical standpoints. Their difficulty stems from the following facts. First, the number of
combinatorial objects typically grows extremely fast with problem size, reaching
unimaginable magnitudes even for moderate-sized instances. Second, there are no known
algorithms for solving most such problems exactly in an acceptable amount of time.
Moreover, most computer scientists believe such algorithms do not exist. This conjecture
has been neither proved nor disproved, and it remains the most important unresolved
issue in theoretical computer science. Some combinatorial problems can be solved by
efficient algorithms, but they should be considered fortunate exceptions to the rule. The
shortest-path problem mentioned earlier is among such exceptions.

Geometric Problems

Geometric algorithms deal with geometric objects such as points, lines, and
polygons. Ancient Greeks were very much interested in developing procedures for solving
a variety of geometric problems, including problems of constructing simple geometric
shapes - triangles, circles and so on - with an unmarked ruler and a compass. Then, for
about 2000 years, intense interest in geometry disappeared, to be resurrected in the age
of computers - no more rulers and compasses, just bits, bytes, and good old human
ingenuity. Of course, today people are interested in geometric algorithms with quite
different applications in mind, such as computer graphics, robotics, and tomography.
We will discuss algorithms for only two classic problems of computational geometry: the
closest-pair problem and the convex-hull problem. The closest-pair problem is self-
explanatory: given n points in the plane, find the closest pair among them. The convex-hull
problem is to find the smallest convex polygon that would include all points of a given set.

Numerical Problems

Numerical problems, another large area of applications, are problems that involve
mathematical objects of continuous nature: solving equations and systems of equations,
computing definite integrals, evaluating functions and so on. The majority of such
mathematical problems can be solved only approximately. Another principal difficulty
stems from the fact that such problems typically require manipulating real numbers, which
can be represented in a computer only approximately. Moreover, a large number of
arithmetic operations performed on approximately represented numbers can lead to an
accumulation of round-off error to a point where it can drastically distort an output
produced by a seemingly sound algorithm. Many sophisticated algorithms have been
developed over the years in this area, and they continue to play a critical role in many
scientific and engineering applications.

8. Give the algorithm to check whether all the elements in a given array of n
elements are distinct. [M/J 2015] [8m]

Print All Distinct Elements of a given integer array

Given an integer array, print all distinct elements in the array. The given array may contain
duplicates and the output should print every element only once. The given array is not
sorted.

Examples:

Input: arr[] = {12, 10, 9, 45, 2, 10, 10, 45}


Output: 12, 10, 9, 45, 2

Input: arr[] = {1, 2, 3, 4, 5}


Output: 1, 2, 3, 4, 5

Input: arr[] = {1, 1, 1, 1, 1}

Output: 1

A Simple Solution is to use two nested loops. The outer loop picks an element one by one
starting from the leftmost element. The inner loop checks if the element is present on the left side
of it. If present, then it ignores the element, else it prints the element. Following is a C++
implementation of the simple algorithm.

// C++ program to print all distinct elements in a given array
#include <iostream>
#include <algorithm>
using namespace std;

void printDistinct(int arr[], int n)


{
// Pick all elements one by one
for (int i=0; i<n; i++)
{
// Check if the picked element is already printed
int j;
for (j=0; j<i; j++)
if (arr[i] == arr[j])
break;

// If not printed earlier, then print it


if (i == j)
cout << arr[i] << " ";
}
}

// Driver program to test above function


int main()
{
int arr[] = {6, 10, 5, 4, 9, 120, 4, 6, 10};
int n = sizeof(arr)/sizeof(arr[0]);
printDistinct(arr, n);

return 0;
}
Output:

6 10 5 4 9 120

Time Complexity of the above solution is O(n^2). We can use sorting to solve the problem
in O(n log n) time. The idea is simple: first sort the array so that all occurrences of every element
become consecutive. Once the occurrences become consecutive, we can traverse the sorted array
and print distinct elements in O(n) time. Following is a C++ implementation of the idea.

// C++ program to print all distinct elements in a given array


#include <iostream>
#include <algorithm>
using namespace std;

void printDistinct(int arr[], int n)


{
    // First sort the array so that all occurrences become consecutive
sort(arr, arr + n);

// Traverse the sorted array


for (int i=0; i<n; i++)
{
// Move the index ahead while there are duplicates
while (i < n-1 && arr[i] == arr[i+1])
i++;

// print last occurrence of the current element


cout << arr[i] << " ";
}
}

// Driver program to test above function


int main()
{
int arr[] = {6, 10, 5, 4, 9, 120, 4, 6, 10};
int n = sizeof(arr)/sizeof(arr[0]);
printDistinct(arr, n);
return 0;
}
Output:

4 5 6 9 10 120

We can use hashing to solve this in O(n) time on average. The idea is to traverse the given array
from left to right and keep track of visited elements in a hash table. Following is a Java
implementation of the idea.

/* Java program to print all distinct elements of a given array */
import java.util.*;

class Main
{
// This function prints all distinct elements
static void printDistinct(int arr[])
{
// Creates an empty hashset
HashSet<Integer> set = new HashSet<>();

// Traverse the input array


for (int i=0; i<arr.length; i++)
{
// If not present, then put it in hashtable and print it
if (!set.contains(arr[i]))
{
set.add(arr[i]);
System.out.print(arr[i] + " ");
}
}
}

// Driver method to test above method
public static void main (String[] args)
{
int arr[] = {10, 5, 3, 4, 3, 5, 6};
printDistinct(arr);
}
}
Output:

10 5 3 4 6

9. Give the recursive algorithm for finding the number of binary digits in n's binary
representation, where n is a positive decimal integer. Find the recurrence
relation and complexity. [M/J 2015] [16m]

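A brief sketch of the standard solution:

ALGORITHM BinRec(n)
// Input: A positive decimal integer n
// Output: The number of binary digits in n's binary representation
if n = 1 return 1
else return BinRec(floor(n/2)) + 1

The number of additions A(n) made by the algorithm is given by the recurrence

    A(n) = A(floor(n/2)) + 1 for n > 1, with A(1) = 0.

For n = 2^k the recurrence becomes A(2^k) = A(2^(k-1)) + 1 with A(2^0) = 0, which by
backward substitution gives A(2^k) = k. Hence A(n) = floor(log2 n) and the complexity
is Θ(log n).

A minimal C++ rendering of the same recursion:

// Counts the binary digits of a positive integer n recursively.
int binDigits(int n) {
    if (n == 1) return 1;          // base case: 1 is "1" in binary
    return binDigits(n / 2) + 1;   // integer division implements floor(n/2)
}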
