
DESIGN AND ANALYSIS OF ALGORITHMS

UNIT I INTRODUCTION
Objectives
Ø Explain what an algorithm is and what it is used for
Ø Describe the fundamentals of algorithmic problem solving
Ø Understand how to calculate the complexity of an algorithm
Ø Describe the use of the divide and conquer method
Ø Explain the analysis of various sorting techniques

1. Algorithm
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining
a required output for any legitimate input in a finite amount of time.

2. Fundamentals of algorithmic problem solving



2.1. Understanding the Problem:
• Read the problem statement carefully
• Clarify any doubts you have
• Analyse what input is required
2.2. Decision making on
• Capabilities of computational devices: the algorithm should suit the target machine. For example,
if your system executes instructions in parallel, then you have to write a parallel
algorithm; if you have a RAM machine (a system that executes instructions
sequentially), then you have to write a sequential algorithm.
• Exact or approximate output: the algorithm you write depends on the output needed. For example,
an approximate answer is enough for finding a square root, but for finding the shortest route
between cities we need an exact result.
• Data structure: decide in which format the input is going to be given. Example data structures are
stack, queue, list, etc.
• Algorithm technique: decide which technique is used to write the algorithm. For example, you
can divide a big problem into a number of smaller units and solve each unit; this technique is
called divide and conquer. Depending on the problem you can select any one of the
techniques. Some techniques are decrease and conquer, brute force, divide and conquer, etc.
2.3. Specification of an Algorithm:
There are three ways to specify (represent) an algorithm:
• Natural language: the instructions are written as ordinary sentences.
• Pseudocode: identifiers, keywords and symbols are used to represent the instructions, for example
c = a + b.
• Flowchart: another way of representing the instructions, using diagram symbols linked in
sequence by lines; separate symbols are used for the different kinds of instruction.
2.4. Algorithm Verification:
To verify the algorithm, apply all possible inputs and check whether the algorithm produces the
required output or not.
2.5. Analysis of Algorithm:
Compute the efficiency of the algorithm. The efficiency depends on the following factors:
• Time: the amount of time the algorithm takes to execute. If the algorithm takes less
time to execute, then it is the better one.



• Space: The amount of memory required to store the algorithm and the amount of memory
required to store the input for that algorithm.
• Simplicity: The algorithm should not contain any complex instructions; such
instructions should be broken down into a number of simpler ones.
• Generality: The algorithm should be written in general terms, so that it can be implemented in any
language and is not tied to one specific set of inputs.
2.6. Implementation:
After all the above factors are satisfied, code the algorithm in any language you know and execute
the program.

3. Properties of the algorithm


1) Finiteness: an algorithm terminates after a finite number of steps.
2) Definiteness: each step of the algorithm is unambiguous. This means that the action specified by
a step cannot be interpreted in multiple ways and can be performed
without any confusion.
3) Input: an algorithm accepts zero or more inputs.
4) Output: it produces at least one output.
5) Effectiveness: it consists of basic instructions that are realizable. This means that the
instructions can be carried out using the given inputs in a finite amount of time.
6) Non-ambiguity: the algorithm should not contain any conflicting or ambiguous statements.
7) Range of input: before designing an algorithm, decide what type of input is going to be
given and what the required output is.
8) Multiplicity: different algorithms can be written to solve the same problem.
9) Speed: apply suitable ideas to speed up the execution.

4. Analysis of algorithm
4.1. Analysis Framework:
Analysis means computing the efficiency of an algorithm. When computing the efficiency of an
algorithm, consider the following two factors:
• Space complexity
• Time complexity



(i) Space Complexity: The amount of memory required to store the algorithm and the amount
of memory required to store the inputs for this algorithm.
S(p) = C + Sp, where
C – constant (the amount of memory required to store the algorithm itself)
Sp – the amount of memory required to store the inputs; each input is stored in one unit.
Example: Write an algorithm to find the summation of n numbers and analyse the space
complexity of that algorithm.
Algorithm Summation(X, n)
// Input: X, an array of n elements
// Output: the summation of the n numbers
sum = 0
for i = 1 to n
sum = sum + X[i]
return sum
The space complexity of the above algorithm:
S(p) = C + Sp
1. One unit for each element in the array; the array has n elements, so it requires n units.
2. One unit for the variable n, one unit for the variable i and one unit for the variable sum.
3. Add up all the above units to find the space complexity.
S(p) = C + (n + 1 + 1 + 1)
S(p) = C + (n + 3)
(ii) Time Complexity
The amount of time required to run the algorithm. The execution time depends on the following
factors:
• System load
• Number of other programs running
• Speed of hardware
How to measure the running time?
• Find the basic operation of the algorithm. (The inner most loop or operation is called basic
operation)
• Compute the time required to execute the basic operation.



• Compute how many times the basic operation is executed. The time complexity is calculated by
the following formula:
T(n) = Cop * C(n) Where,
Cop – Constant (The amount of time required to execute the basic operation)
C(n) – How many times the basic operation is executed.
Example : The Time complexity of the above algorithm
(Summation of n Numbers)
1. The Basic operation is: addition
2. To compute how many times the Basic operation is executed: n
3. So, T(n) = Cop * n
4. Remove the constant or assume Cop = 1
5. The Time Complexity is T(n) = n.
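As an added illustration (not part of the original text), the summation algorithm can be written in C with a counter for the basic operation; running it confirms that the addition inside the loop executes exactly n times:

#include <stdio.h>

/* Sum the n elements of X and count how many times the basic operation (the addition) runs. */
int summation(const int X[], int n, int *opCount)
{
    int sum = 0, i;
    *opCount = 0;
    for (i = 0; i < n; i++)
    {
        sum = sum + X[i];   /* the basic operation */
        (*opCount)++;
    }
    return sum;
}

int main(void)
{
    int X[] = {4, 8, 15, 16, 23, 42};
    int count;
    int sum = summation(X, 6, &count);
    printf("sum = %d, basic operation executed %d times\n", sum, count);   /* 108, 6 times */
    return 0;
}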
4.2 Order of Growth
The relation between an algorithm's performance and its input size n is called the order of growth.
For example, if the function is 2n, then for the input n = 1 the value is 2, for n = 2 it is 4, and so on.
Best case, Worst case and Average case
For some algorithms the time complexity falls into three categories:
• Best case
• Worst case
• Average case
• In the best case the basic operation of the algorithm executes the fewest number of times
compared to the other cases.
• In the worst case the basic operation of the algorithm executes the greatest number of times
compared to the other cases.
• In the average case the number of executions of the basic operation lies between the best case
and the worst case.
Example: Write an algorithm to search for a key in a given set of elements.
Algorithm Seq_Search(X, key, n)
// Input: the array X of n elements and a search key.
// Output: whether the search key is present in the list or not.
for i = 1 to n
if ( X[i] == key )



return true
return false
In the above algorithm the best case is one comparison, i.e. T(n) = 1.
The worst case arises in two situations:
• If the search key is located at the end of the list.
• If the search key is not present in the list.
Here the basic operation executes n times, so the time complexity of this algorithm
is n, i.e. T(n) = n.
For the average case, let us assume:
P – the probability of a successful search.
1-P – the probability of an unsuccessful search.
P/n – the probability that the first match occurs at the i-th element.
Cavg(n) = [1·P/n + 2·P/n + ... + n·P/n] + n(1-P)
= P/n·[1 + 2 + ... + n] + n(1-P)
= P/n·(n(n+1)/2) + n(1-P)
= P(n+1)/2 + n(1-P)
In the above,
take P = 0 if the key is not in the list:
Cavg(n) = 0·(n+1)/2 + n(1-0)
= n
take P = 1 if the key is always present in the list:
Cavg(n) = 1·(n+1)/2 + n(1-1)
= (n+1)/2.
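For illustration (not from the original text), here is a C version of Seq_Search using 0-based indexing; it returns 1 if the key is found and 0 otherwise:

/* Return 1 if key is present among the n elements of X, 0 otherwise. */
int seqSearch(const int X[], int key, int n)
{
    int i;
    for (i = 0; i < n; i++)
        if (X[i] == key)      /* basic operation: the key comparison */
            return 1;
    return 0;
}

In the best case the comparison runs once; in the worst case (key at the end or absent) it runs n times, matching the analysis above.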
4.3 Asymptotic Notations
(i) Big–O
(ii) Big–Omega
(iii) Big–Theta
(i) Big–O (or) O( ) Notation
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some
constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some
nonnegative integer n0 such that
t(n) ≤ c·g(n) for all n ≥ n0.



(ii) Big–Omega (or) Ω( ) Notation
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some
constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some
nonnegative integer n0 such that
t(n) ≥ c·g(n) for all n ≥ n0.

(iii) Big–Theta (or) θ( ) Notation


A function t(n) is said to be in θ(g(n)), denoted t(n) ∈ θ(g(n)), if t(n) is bounded both above and
below by some constant multiples of g(n) for all large n, i.e., if there exist some positive constants
c1 and c2 and some nonnegative integer n0 such that
c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0.



Examples
Here are a few examples that show how the definitions should be applied.
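For instance (illustrative examples added here; the constants c and n0 shown are one possible choice, not the only one):
• 100n + 5 ∈ O(n^2), since 100n + 5 ≤ 100n + 5n = 105n ≤ 105n^2 for all n ≥ 1; take c = 105 and n0 = 1.
• n^3 ∈ Ω(n^2), since n^3 ≥ n^2 for all n ≥ 1; take c = 1 and n0 = 1.
• (1/2)n(n-1) ∈ θ(n^2): for the upper bound, (1/2)n(n-1) ≤ (1/2)n^2 for all n ≥ 0; for the lower bound, (1/2)n(n-1) = (1/2)n^2 - (1/2)n ≥ (1/4)n^2 for all n ≥ 2; take c1 = 1/2, c2 = 1/4 and n0 = 2.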

Summary
• An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for
obtaining a required output for any legitimate input in a finite amount of time.



• There are two kinds of efficiency: time efficiency and space efficiency. Time efficiency
indicates how fast the algorithm runs; space efficiency deals with the extra space it
requires. The efficiency depends on factors such as time, space, simplicity and
generality.
• An algorithm’s time efficiency is principally measured as a function of its input size by
counting the number of times its basic operation is executed. A basic operation is the
operation that contributes most toward running time.
• Natural Language, Pseudo Code and Flowchart are the three ways to represent an
algorithm
• The Space Complexity of an algorithm is calculated using the formula S(p) = C + Sp
• The Time Complexity of an algorithm is calculated using the formula T(n) = Cop * C(n)
• Big–O, Big–Omega and Big–Theta notations are used to indicate and compare the
asymptotic orders of growth of functions expressing algorithm efficiencies

Key Terms
>> Algorithm >> Order of Growth >> time efficiency
>> Space efficiency >> Best case >> Worst case
>> Average case >> Big–O >> Big–Omega
>> Big–Theta

Key Term Quiz


1. A sequence of unambiguous instructions for solving a problem is called ---------------
2. ------------- efficiency indicates how fast the algorithm runs.
3. ------------- efficiency deals with the extra space it requires.
4. A function t(n) is said to be in g(n), if t(n) is bounded above by some constant multiple of g(n)
for all large n, i.e., if there exists some positive constant c and some nonnegative integer n0 is
called ------------ notation.
5. A function t(n) is said to be in g(n), if t(n) is bounded below by some constant multiple of g(n)
for all large n, i.e., if there exists some positive constant c and some nonnegative integer n0 is
called ------------ notation.



6. A function t(n) is said to be in g(n),if t(n) is bounded both above and below by some constant
multiple of g(n) for all large n, i.e., if there exists some positive constant c1 and c2 and some
nonnegative integer n0 is called ------------ notation.
Multiple Choice Questions
1. Define an algorithm?
(a)Algorithm is a programming language that can be used to write a program
(b)Consists of sequence of instructions that can be executed in any order
(c) Sequence of ambiguous instructions that has to be executed one by one.
(d) a sequence of unambiguous instructions for solving a problem
2. What is a pseudo code?
(a) Both (b) and (c).
(b) It is a mixture of a natural language and programming language-like constructs.
(c) Is defined as the procedural language without programming constructs
(d) Is defined as the program that can be executed in the machine.
3. Define the efficiency of algorithms?
(a)The algorithm should occupy more memory space with lesser running time
(b)Efficiency is defined as the algorithm should occupy less memory space and should run fast
(c)Efficiency can be said as the algorithm occupies lesser memory space and more running
time
(d)all the above
4. What are the three cases of efficiencies?
(a)Best Case (b)Worst Case (c)Average Case (d)All the above
5. Which of the following parameter is used to measure the efficiencies of an algorithm?
(a)Algorithm's output
(b)They are measured as the function of algorithms input size
(c)It is measured based on the programming language it is implemented
(d)None of the above
6. Time efficiency is measured by
(a)Number of times the procedures executed in the algorithm
(b)Counting the number of times the functions executed
(c)Counting the number of times the algorithm's basic operation is executed
(d) Counting the number of times for a single loop execution.



8. Space efficiency is calculated by
(a)Counting the number of extra memory units consumed by the algorithm
(b)Counting the number of memory units consumed by the algorithm
(c)Both 1 and 2 are correct
(d)None of the above
9. Define worst case efficiency?
(a)Very short time of input size n
(b)The medium time consumption for the input size n
(c)The smallest possible time for input n
(d)The longest possible time it takes to run for the size of n inputs
10. What is the best case efficiency?
(a)Fastest running time for a particular input among all the inputs for the algorithm
(b)Slowest running time for a particular input among all the inputs
(c)Very slower time among all the inputs
(d)All the above

Review Questions
Two mark Questions
1. Define algorithm
2. Define Big-Oh notation.
3. Mention any four classes of algorithm efficiency.
4. Define order of an algorithm.
5. How will you measure the input size of an algorithm?
6. Compare the orders of growth of n! and 2^n.
7. What is an algorithm design technique?
8. How is the efficiency of an algorithm defined?

Big Questions
1. Define the asymptotic notations used for best case average case and worst case analysis of
algorithms.
2. Explain various criteria for analyzing an algorithm.
3. Discuss briefly the sequence of steps for designing and analyzing an algorithm.
5. Divide and Conquer
The divide-and-conquer strategy solves a problem by:
1. Breaking it into sub problems that are themselves smaller instances of the same type of
problem
2. Recursively solving these sub problems
3. Appropriately combining their answers
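As a small added illustration (not part of the original text), the C sketch below applies these three steps to summing an array: the range is divided in half, each half is solved recursively, and the two partial sums are combined by a single addition.

/* Sum A[lo..hi] by divide and conquer. */
int dcSum(const int A[], int lo, int hi)
{
    int mid;
    if (lo == hi)                 /* base case: a single element */
        return A[lo];
    mid = (lo + hi) / 2;          /* 1. divide into two smaller instances */
    return dcSum(A, lo, mid)      /* 2. solve the subproblems recursively */
         + dcSum(A, mid + 1, hi); /* 3. combine the answers */
}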

6. Binary Search

Generally, to find a value in an unsorted array, we have to look through the elements of the array one
by one until the searched value is found. If the searched value is absent from the array, we go through
all the elements. On average, the complexity of such an algorithm is proportional to the length of the
array. The situation changes significantly when the array is sorted and binary search can be used.

Algorithm

The algorithm is quite simple. It can be implemented either recursively or iteratively:

1. get the middle element;

2. if the middle element equals the searched value, the algorithm stops;
3. otherwise, two cases are possible:
o the searched value is less than the middle element: in this case, go to step 1 for
the part of the array before the middle element;
o the searched value is greater than the middle element: in this case, go to step 1
for the part of the array after the middle element.

Now we should define when the iterations stop. The first case is when the searched element is
found. The second is when the subarray has no elements; in this case, we can conclude that the
searched value is not present in the array.

Examples

Example 1. Find 6 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.
Step 1 (middle element is 19 > 6): -1 5 6 18 19 25 46 78 102 114
Step 2 (middle element is 5 < 6): -1 5 6 18 19 25 46 78 102 114
Step 3 (middle element is 6 == 6): -1 5 6 18 19 25 46 78 102 114
Example 2. Find 103 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.
Step 1 (middle element is 19 < 103): -1 5 6 18 19 25 46 78 102 114
Step 2 (middle element is 78 < 103): -1 5 6 18 19 25 46 78 102 114
Step 3 (middle element is 102 < 103): -1 5 6 18 19 25 46 78 102 114
Step 4 (middle element is 114 > 103): -1 5 6 18 19 25 46 78 102 114
Step 5 (searched value is absent): -1 5 6 18 19 25 46 78 102 114
int binarySearch(int arr[], int value, int left, int right)
{
    while (left <= right)
    {
        int middle = (left + right) / 2;
        if (arr[middle] == value)
            return middle;
        else if (arr[middle] > value)
            right = middle - 1;
        else
            left = middle + 1;
    }
    return -1;
}
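A short usage sketch, added for illustration (the main function and sample values are not from the original text); left and right are 0-based index bounds, so a search over the whole array of Example 1 looks like this:

#include <stdio.h>

/* assumes binarySearch from above is in scope */
int main(void)
{
    int arr[] = {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114};
    int pos = binarySearch(arr, 6, 0, 9);
    if (pos != -1)
        printf("found 6 at index %d\n", pos);    /* prints index 2 */
    else
        printf("6 not found\n");
    return 0;
}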

Analysis:

The recurrence relation for the number of comparisons made by binary search is

C(n) = C(n/2) + 1 for n > 1, C(1) = 1

Substituting n = 2^k we get

C(2^k) = k + 1 = log2 n + 1

so C(n) = ⌊log2 n⌋ + 1 = ⌈log2 (n+1)⌉.

If the input size is doubled, i.e. n = 2i, then

C(2i) = log2 (2i) + 1 = log2 2 + log2 i + 1
= 1 + log2 i + 1 = log2 i + 2

so doubling the size of a sorted array adds only one more comparison.

7. Finding Maximum and Minimum element

A natural approach is to try a divide and conquer algorithm. Split the list into two sub lists of
equal size. (Assume that the initial list size is a power of two.) Find the maxima and minima of
the sub lists. Two more comparisons then suffice to find the maximum and minimum of the list.
The steps are,
Ø Divide the given array into two equal halves
Ø Repeat step 1 until each part contains a single element (or two elements)
Ø Combine the two parts and select the minimum and maximum of the combined part
Ø Repeat step 3 until the final solution is found

procedure maxmin(A[1...n] of numbers) -> (min, max)


begin
if (n == 1)
return (A[1], A[1])
else if (n == 2)
if( A[1] < A[2])
return (A[1], A[2])
else
return (A[2], A[1])
else
(min_left, max_left) = maxmin(A[1...(n/2)])
(min_right, max_right) = maxmin(A[(n/2 + 1)...n])
if (max_left < max_right)
max = max_right
else
max = max_left
if (min_left < min_right)
min = min_left
else
min = min_right
return (min, max)
end



Hence, if T(n) is the number of comparisons, then T(n) = 2T(n/2) +2. (The 2T(n/2) term comes
from conquering the two problems into which we divide the original; the 2 term comes from
combining these solutions.) Also, clearly T(2) = 1. By induction we find T(n) = (3n/2)−2, for n a
power of 2.
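For reference, here is a minimal C sketch of the recursive procedure above, added for illustration; it assumes 0-based indexing, and the MinMax struct and function names are illustrative rather than part of the original text.

#include <stdio.h>

typedef struct { int min; int max; } MinMax;

/* Recursively find the minimum and maximum of A[lo..hi] (inclusive). */
MinMax maxmin(const int A[], int lo, int hi)
{
    MinMax result, left, right;
    if (lo == hi) {                      /* one element */
        result.min = result.max = A[lo];
    } else if (hi == lo + 1) {           /* two elements: one comparison */
        if (A[lo] < A[hi]) { result.min = A[lo]; result.max = A[hi]; }
        else               { result.min = A[hi]; result.max = A[lo]; }
    } else {                             /* split, solve the halves, combine with two comparisons */
        int mid = (lo + hi) / 2;
        left  = maxmin(A, lo, mid);
        right = maxmin(A, mid + 1, hi);
        result.min = (left.min < right.min) ? left.min : right.min;
        result.max = (left.max > right.max) ? left.max : right.max;
    }
    return result;
}

int main(void)
{
    int A[] = {5, 1, 12, -5, 16, 2, 12, 14};
    MinMax r = maxmin(A, 0, 7);
    printf("min = %d, max = %d\n", r.min, r.max);   /* min = -5, max = 16 */
    return 0;
}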

8. Analysis of Merge sort

The mergesort algorithm is based on the classical divide-and-conquer paradigm.

DIVIDE: Partition the n-element sequence to be sorted into two subsequences of n/2 elements
each.
CONQUER: Sort the two subsequences recursively using the mergesort.
COMBINE: Merge the two sorted subsequences of size n/2 each to produce the sorted
sequence consisting of n elements.
Note that recursion "bottoms out" when the sequence to be sorted is of unit length. Since every
sequence of length 1 is in sorted order, no further recursive call is necessary. The key operation
of the mergesort algorithm is the merging of the two sorted sequences in the "combine" step. To
perform the merging, we use an auxiliary procedure Merge(A, p, q, r), where A is an array and p, q
and r are indices numbering elements of the array such that p ≤ q < r. The procedure assumes that
the subarrays A[p..q] and A[q+1..r] are in sorted order. It merges them to form a single sorted
subarray that replaces the current subarray A[p..r]. Thus finally, we obtain the sorted array A[1..n],
which is the solution.

Design and analysis of algorithms- P.Vasantha Kumari 16


Algorithm
void mergeSort(int numbers[], int temp[], int array_size)
{
m_sort(numbers, temp, 0, array_size - 1);
}
void m_sort(int numbers[], int temp[], int left, int right)
{
int mid;
if (right > left)
{
mid = (right + left) / 2;
m_sort(numbers, temp, left, mid);
m_sort(numbers, temp, mid+1, right);
merge(numbers, temp, left, mid+1, right);
}
}
void merge(int numbers[], int temp[], int left, int mid, int right)
{
int i, left_end, num_elements, tmp_pos;
left_end = mid - 1;
tmp_pos = left;
num_elements = right - left + 1;
while ((left <= left_end) && (mid <= right))
{
if (numbers[left] <= numbers[mid])
{
temp[tmp_pos] = numbers[left];
tmp_pos = tmp_pos + 1;
left = left +1;
}
else
{
temp[tmp_pos] = numbers[mid];
tmp_pos = tmp_pos + 1;
mid = mid + 1;
}
}
while (left <= left_end)
{
temp[tmp_pos] = numbers[left];
left = left + 1;
tmp_pos = tmp_pos + 1;
}
while (mid <= right)
{
temp[tmp_pos] = numbers[mid];
mid = mid + 1;
tmp_pos = tmp_pos + 1;
}
for (i = 0; i < num_elements; i++)
{
numbers[right] = temp[right];
right = right - 1;
}
}
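A small usage sketch, added for illustration (the main function and sample values are assumptions, not part of the original text); note that the caller must supply a temporary buffer of the same size as the input array:

#include <stdio.h>

/* assumes mergeSort from above is in scope */
int main(void)
{
    int numbers[] = {38, 27, 43, 3, 9, 82, 10};
    int temp[7];
    int n = 7, i;
    mergeSort(numbers, temp, n);
    for (i = 0; i < n; i++)
        printf("%d ", numbers[i]);   /* prints 3 9 10 27 38 43 82 */
    printf("\n");
    return 0;
}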

Analysis

Mergesort goes through the same steps independent of the data.

Best-case = Worst-case = Average-case.

T(n) = running time on a list of n elements.

T(1) = 1
T(n)= 2 times running time on a list of n/2 elements + linear merge

T(n)= 2T(n/2)+ n

Brute force method:

T(n) = 2T(n/2) + n = 2[2T(n/4) + n/2] + n = 4T(n/4) + 2n

T(n) = 4[2T(n/8) + n/4] + 2n = 8T(n/8) + 3n

T(n) = 2^k T(n/2^k) + k·n

and there are k = log2 n substitutions to get to T(1):

T(n) = n·T(1) + n·log2 n = n·log2 n + n

9. Analysis of Quick sort


The divide-and-conquer strategy is used in quicksort. Below the recursion step is described:

1. Choose a pivot value. We take the value of the middle element as the pivot value, but it can
be any value that is in the range of the values being sorted, even if it is not present in the array.
2. Partition. Rearrange the elements in such a way that all elements less than the pivot go to the
left part of the array and all elements greater than the pivot go to the right
part of the array. Values equal to the pivot can stay in either part. Notice that the
array may be divided into parts of unequal size.
3. Sort both parts. Apply quicksort algorithm recursively to the left and the right parts.

Partition algorithm in detail

There are two indices i and j; at the very beginning of the partition algorithm i points to the
first element in the array and j points to the last one. The algorithm then moves i forward until an
element with a value greater than or equal to the pivot is found. Index j is moved backward until an
element with a value less than or equal to the pivot is found. If i ≤ j, the two elements are swapped, i
steps to the next position (i + 1) and j steps to the previous one (j - 1). The algorithm stops when i
becomes greater than j.



After partitioning, all values before the i-th element are less than or equal to the pivot and all values
after the j-th element are greater than or equal to the pivot.

Example. Sort {1, 12, 5, 26, 7, 14, 3, 7, 2} using quicksort.


Algorithm
int partition(int arr[], int left, int right)
{
int i = left, j = right;
int tmp;
int pivot = arr[(left + right) / 2];
while (i <= j)
{
while (arr[i] < pivot)
i++;
while (arr[j] > pivot)
j--;
if (i <= j)
{
tmp = arr[i];
arr[i] = arr[j];
arr[j] = tmp;
i++;
j--;
}
}
return i;
}
void quickSort(int arr[], int left, int right)
{
    int index = partition(arr, left, right);
    if (left < index - 1)
        quickSort(arr, left, index - 1);
    if (index < right)
        quickSort(arr, index, right);
}
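A usage sketch for the example above, added for illustration (the main function is an assumption, not part of the original text):

#include <stdio.h>

/* assumes partition and quickSort from above are in scope */
int main(void)
{
    int arr[] = {1, 12, 5, 26, 7, 14, 3, 7, 2};
    int n = 9, i;
    quickSort(arr, 0, n - 1);
    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);   /* prints 1 2 3 5 7 7 12 14 26 */
    printf("\n");
    return 0;
}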

Analysis of Quick Sort


The recurrence relation for the best case performance is:
Cbest(n) = 2·Cbest(n/2) + n for n > 1, Cbest(1) = 0
To solve the recurrence, substitute n = 2^k; then we get
Cbest(2^k) = 2·Cbest(2^(k-1)) + 2^k
= 2[2·Cbest(2^(k-2)) + 2^(k-1)] + 2^k
= 2^2·Cbest(2^(k-2)) + 2^k + 2^k
= 2^2[2·Cbest(2^(k-3)) + 2^(k-2)] + 2^k + 2^k
= 2^3·Cbest(2^(k-3)) + 3·2^k
.
.
.
After i substitutions,
Cbest(2^k) = 2^i·Cbest(2^(k-i)) + i·2^k
Replacing i = k we get
Cbest(2^k) = k·2^k
Substituting back n = 2^k,
Cbest(n) = n·log2 n ∈ O(n log n)
In the worst case the number of key comparisons is:
Cworst(n) = (n+1) + n + ... + 3 = (n+1)(n+2)/2 - 3 ∈ O(n^2)
In the average case the number of key comparisons is approximately:
Cavg(n) ≈ 2n·ln n ≈ 1.38n·log2 n

10. Analysis of Selection sort

Selection sort is one of the O(n^2) sorting algorithms, which makes it quite inefficient for sorting
large data volumes. Selection sort is notable for its programming simplicity and it can outperform
other sorts in certain situations (see the complexity analysis for more details).

The idea of the algorithm is quite simple. The array is imaginarily divided into two parts: a sorted one
and an unsorted one. At the beginning the sorted part is empty, while the unsorted part contains the
whole array. At every step the algorithm finds the minimal element in the unsorted part and adds it to
the end of the sorted part. When the unsorted part becomes empty, the algorithm stops.

When the algorithm sorts an array, it swaps the first element of the unsorted part with the minimal
element and then includes it in the sorted part. This implementation of selection sort is not stable. If
a linked list is sorted instead and, rather than swapping, the minimal element is unlinked and appended
to the sorted part, selection sort is stable.

Example. Sort {5, 1, 12, -5, 16, 2, 12, 14} using selection sort.

procedure SELECTION_SORT (A)

for i ← 1 to n-1 do
    min j ← i
    min x ← A[i]
    for j ← i + 1 to n do
        if A[j] < min x then
            min j ← j
            min x ← A[j]
    A[min j] ← A[i]
    A[i] ← min x
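For comparison, here is a C version of the same procedure, added for illustration; it assumes 0-based indexing and the function name is illustrative:

void selectionSort(int a[], int n)
{
    int i, j, minIndex, tmp;
    for (i = 0; i < n - 1; i++)
    {
        minIndex = i;
        for (j = i + 1; j < n; j++)        /* find the minimum of the unsorted part */
            if (a[j] < a[minIndex])
                minIndex = j;
        if (minIndex != i)                 /* swap it to the end of the sorted part */
        {
            tmp = a[i];
            a[i] = a[minIndex];
            a[minIndex] = tmp;
        }
    }
}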

Analysis

Selection sort stops when the unsorted part becomes empty. As we know, at every step the number of
unsorted elements decreases by one. Therefore, selection sort makes n - 1 steps of the outer loop
(where n is the number of elements in the array) before it stops. Every step of the outer loop requires
finding the minimum in the unsorted part. Summing up, (n - 1) + (n - 2) + ... + 1 results in O(n^2)
comparisons. The number of swaps may vary from zero (for an already sorted array) to n - 1 (when the
array was sorted in reverse order), which results in O(n) swaps. The overall algorithm complexity is
O(n^2). The fact that selection sort requires at most n - 1 swaps makes it very efficient in situations
where a write operation is significantly more expensive than a read operation.

11. Analysis of Heap sort

The heap data structure is an array object which can be easily visualized as a complete binary
tree. There is a one to one correspondence between elements of the array and nodes of the tree.
The tree is completely filled on all levels except possibly the lowest, which is filled from the left
up to a point. All nodes of heap also satisfy the relation that the key value at each node is at least
as large as the value at its children.
Step I: The user inputs the size of the heap (within a specified limit). The program generates a
corresponding binary tree with nodes having randomly generated key values.
Step II: Build heap operation: let n be the number of nodes in the tree and i be a node of the
tree. For this, the program uses the operation Heapify. When Heapify is called, both the left and
right subtrees of i are already heaps. The function of Heapify is to let i settle down to a position
(by swapping itself with the larger of its children whenever the heap property is not satisfied) till
the heap property is satisfied in the tree that was rooted at i. This operation is applied
recursively down the affected subtree.
Step III: Remove maximum element: the program removes the largest element of the heap (the
root) by swapping it with the last element.
Step IV: The program executes Heapify(new root) so that the resulting tree satisfies the heap
property.
Step V: Go to Step III till the heap is empty.
procedure downheap (v)
Input: semi-heap with root v
Output: heap (by rearranging the vertex labels)
Method: while v does not have the heap property do
choose direct descendant w with maximum label a(w)
exchange a(v) and a(w)
set v := w

Procedure downheap can be used to build a heap from an arbitrarily labelled tree. By proceeding
bottom-up, downheap is called for all subtrees rooted at inner vertices. Since leaves are already
heaps they may be omitted.

procedure buildheap
Input: almost complete binary tree T of depth d(T) with vertex labelling a
Output: heap (by rearranging the vertex labels)
Method: for i := d(T) – 1 downto 0 do
for all inner vertices v of depth d(v) = i do
downheap (v)

A call of buildheap is the first step of procedure heapsort, which can now be written down as
follows:

procedure heapsort
Input: almost complete binary tree with root r and vertex labelling a
Output: vertex labels in descending order
Method: buildheap
while r is not a leaf do
output a(r)
choose leaf b of maximum depth
write label a(b) to r
delete leaf b
downheap (r)
output a(r)
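For comparison with the tree-based pseudocode above, here is a compact array-based C sketch, added for illustration; it assumes the heap is stored in an array with the children of index i at positions 2i+1 and 2i+2, and the function names simply mirror the pseudocode:

/* Sift the element at index root down until the heap property holds in a[0..n-1]. */
static void downheap(int a[], int n, int root)
{
    int largest = root;
    for (;;)
    {
        int left = 2 * root + 1, right = 2 * root + 2, tmp;
        if (left < n && a[left] > a[largest])
            largest = left;
        if (right < n && a[right] > a[largest])
            largest = right;
        if (largest == root)
            break;
        tmp = a[root]; a[root] = a[largest]; a[largest] = tmp;
        root = largest;
    }
}

void heapSort(int a[], int n)
{
    int i, tmp;
    for (i = n / 2 - 1; i >= 0; i--)      /* build the heap bottom-up */
        downheap(a, n, i);
    for (i = n - 1; i > 0; i--)           /* repeatedly move the maximum to the end */
    {
        tmp = a[0]; a[0] = a[i]; a[i] = tmp;
        downheap(a, i, 0);
    }
}

Calling heapSort(a, n) sorts a in ascending order; the first loop corresponds to buildheap and the second to the repeated remove-maximum steps.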


Analysis

An almost complete binary tree with n vertices has a depth of at most log(n). Therefore, procedure
downheap requires at most log(n) steps. Procedure buildheap calls downheap for each vertex,
therefore it requires at most n·log(n) steps. Heapsort calls buildheap once; then it calls downheap for
each vertex, together it requires at most 2·n·log(n) steps.



Thus, the time complexity of heapsort is T(n) ∈ O(n·log(n)). The algorithm is optimal, since the
lower bound of the sorting problem is attained.

Summary

Ø Divide and Conquer is a general algorithm design technique that solves a problem
instance by dividing it into several smaller instances of equal size, solving them
recursively, and then combining their solutions to get a solution to the original instance of
the problem.
Ø Time efficiency of divide and conquer satisfies the equation T(n)=aT(n/b)+f(n).
Ø Merge sort is a divide and conquer algorithm. It works by dividing an input array into
two halves, sorting them recursively and then merging the two sorted halves to get the
original array sorted.
Ø Quick sort is a divide and conquer algorithm that works by partitioning its input elements
according to their values relative to some pre selected element.
Ø Binary search is an O(log n) algorithm for searching in sorted arrays.

Key Terms

>> binary search >>quick sort >> merge sort

>> divide and conquer >>heap sort >> selection sort

Key Term Quiz

1. ---------------------- is a general algorithm design technique that solves a problem instance by


dividing it into several smaller instances of equal size, solving them recursively, and then
combining their solutions to get a solution to the original instance of the problem.

2. ---------------- algorithm works by dividing an input array into two halves, sorting them
recursively and then merging the two sorted halves to get the original array sorted.

3. ------------------ is a divide and conquer algorithm that works by partitioning its input elements
according to their values relative to some pre selected element.

4. The running time of binary search is -------------------



5. The tree is completely filled on all levels except possibly the lowest, which is filled from the left
to the right is called -------------------

Multiple Choice Questions

1. The ---------------- algorithm works by dividing an input array into two halves, sorting them
recursively and then merging the two sorted halves to get the original array sorted.

a) heap sort b) quick sort c) merge sort d) selection sort

2. ------------------- is a divide and conquer algorithm that works by partitioning its input
elements according to their values relative to some pre selected element.

a) heap sort b) quick sort c) merge sort d) selection sort

3. The selection of pivot element is used in -----------------

a) heap sort b) quick sort c) merge sort d) selection sort

4. Which one of the following is not a divide and conquer?

a) heap sort b) quick sort c) merge sort d) selection sort

5. The running time of binary search is

a) O(logn) b) O(n) c) O(nlogn) d)O(n/2)

6. The time complexity of heapsort is

a) O(logn) b) O(n) c) O(nlogn) d)O(n/2)

7. The running time of quick sort is

a) O(logn) b) O(n) c) O(nlogn) d)O(n/2)

8. The time complexity of selection sort is

a) O(logn) b) O(n) c) O(nlogn) d)O(n2)

Review Questions
Two mark Questions

1. What is the average case complexity of the linear search algorithm?


2. Write the procedure for selection sort.



3. Find the number of comparisons made by the sequential search in the worst and best case.
4. State the time complexity of merge sort algorithm.
5. State the time complexity of quick sort algorithm.
6. State the time complexity of selection sort algorithm.
7. State the time complexity of heap sort algorithm.
8. List out any two drawbacks of binary search algorithm.
9. What are the objectives of sorting algorithm?

Big Questions

1. Write an algorithm for finding the maximum element of an array, and perform best, worst and
average case complexity analysis with appropriate order notations.
2. Explain in detail quick sorting method. Provide a complete analysis of quick sort.
3. Explain in detail merge sort. Illustrate the algorithm with a numeric example. Provide complete
analysis of the same.
4. Write an algorithm for performing binary search for any element in an array, and find the
complexity of binary search.
5. Define heap. Explain the properties of heap. With a simple example explain the concept of heap
algorithm.
6. Write an algorithm to sort n numbers using selection sort. Trace the algorithm for the numbers
11,21,56,43,87,67,54,2,99
7. Sort the following set of elements using quick sort 11,21,56,43,87,67,54,2,99,5,78,94

Lesson Labs

1. Compare quick sort, merge sort, heap sort and selection sort
2. Compare linear search and binary search.
3. Write an algorithm for the Fibonacci series using divide and conquer, and find the complexity of the
same.

------------ END OF FIRST UNIT ------------------

