UNIT I INTRODUCTION
Objectives
Ø Explain what the use of algorithm is
Ø Describe the fundamentals of algorithmic problem solving
Ø Understand how to calculate the complexity of an algorithm
Ø Describe the use of the divide-and-conquer method
Ø Explain the analysis of various sorting techniques
1. Algorithm
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining
a required output for any legitimate input in a finite amount of time.
4. Analysis of algorithm
4.1. Analysis Framework
Analysis: computing the efficiency of an algorithm. When computing the efficiency of an
algorithm, consider the following two factors:
Ø Space Complexity
Ø Time Complexity
Summary
• An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for
obtaining a required output for any legitimate input in a finite amount of time.
Key Terms
>> Algorithm >> Order of Growth >> time efficiency
>> Space efficiency >> Best case >> Worst case
>> Average case >> Big–O >> Big–Omega
>> Big–Theta
Review Questions
Two mark Questions
1. Define algorithm
2. Define Big-Oh notation.
3. Mention any four classes of algorithm efficiency.
4. Define order of an algorithm.
5. How will you measure the input size of an algorithm?
6. Compare the orders of growth of n! and 2^n.
7. What is an algorithm design technique?
8. How is an efficiency of an algorithm defined?
Big Questions
1. Define the asymptotic notations used for best case average case and worst case analysis of
algorithms.
2. Explain various criteria for analyzing an algorithm.
3. Discuss briefly the sequence of steps for designing and analyzing an algorithm.
Design and analysis of algorithms- P.Vasantha Kumari 11
5. Divide and Conquer
The divide-and-conquer strategy solves a problem by:
1. Breaking it into subproblems that are themselves smaller instances of the same type of
problem
2. Recursively solving these subproblems
3. Appropriately combining their answers
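As a minimal illustration of these three steps, here is a hypothetical sketch (in Python; the function `dc_sum` is an assumption for illustration, not part of the original notes) that sums an array by divide and conquer:

```python
def dc_sum(a, lo, hi):
    """Sum a[lo..hi] by divide and conquer."""
    if lo == hi:                      # base case: a single element
        return a[lo]
    mid = (lo + hi) // 2
    left = dc_sum(a, lo, mid)         # 1-2. solve the left subproblem
    right = dc_sum(a, mid + 1, hi)    # 1-2. solve the right subproblem
    return left + right               # 3. combine the answers
```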
6. Binary Search
Generally, to find a value in an unsorted array, we must look through the elements of the array one
by one until the searched value is found. If the searched value is absent from the array, we go
through all elements. On average, the complexity of such an algorithm is proportional to the length
of the array.
Algorithm
Now we should define when the iterations stop. The first case is when the searched element is
found. The second is when the subarray has no elements; in this case we can conclude that the
searched value is not present in the array.
Examples
Example 1. Find 6 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.
Step 1 (middle element is 19 > 6): search the left part {-1, 5, 6, 18}
Step 2 (middle element is 5 < 6): search the right part {6, 18}
Step 3 (middle element is 6 == 6): value found at index 2
Example 2. Find 103 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.
Step 1 (middle element is 19 < 103): search the right part {25, 46, 78, 102, 114}
Step 2 (middle element is 78 < 103): search the right part {102, 114}
Step 3 (middle element is 102 < 103): search the right part {114}
Step 4 (middle element is 114 > 103): the subarray becomes empty
Step 5 (searched value is absent)
procedure:
int binarySearch(int arr[], int value, int left, int right)
{
    while (left <= right) {
        int middle = left + (right - left) / 2;  /* avoids overflow in left + right */
        if (arr[middle] == value)
            return middle;
        else if (arr[middle] > value)
            right = middle - 1;
        else
            left = middle + 1;
    }
    return -1;  /* searched value is absent */
}
Analysis:
If n = 2^k, then the worst-case number of comparisons is C(n) = C(2^k) = k + 1 = log2 n + 1.
Finding the Maximum and Minimum
A natural approach is to try a divide-and-conquer algorithm. Split the list into two sublists of
equal size. (Assume that the initial list size is a power of two.) Find the maxima and minima of
the sublists. Two more comparisons then suffice to find the maximum and minimum of the whole list.
The steps are,
Ø Divide the given array into two equal halves
Ø Repeat step 1 until each subarray contains a single element
Ø Combine the two subarrays, selecting the minimum and maximum element
Ø Repeat step 3 until the final solution is found
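The steps above can be sketched as follows; this Python function `max_min` is an illustrative assumption, not pseudocode from the original notes:

```python
def max_min(a, lo, hi):
    """Return (minimum, maximum) of a[lo..hi] by divide and conquer."""
    if lo == hi:                       # one element: it is both min and max
        return a[lo], a[lo]
    if hi == lo + 1:                   # two elements: one comparison
        return (a[lo], a[hi]) if a[lo] < a[hi] else (a[hi], a[lo])
    mid = (lo + hi) // 2
    lmin, lmax = max_min(a, lo, mid)       # solve left half
    rmin, rmax = max_min(a, mid + 1, hi)   # solve right half
    return min(lmin, rmin), max(lmax, rmax)  # two more comparisons combine them
```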
Merge Sort
DIVIDE: Partition the n-element sequence to be sorted into two subsequences of n/2 elements
each.
CONQUER: Sort the two subsequences recursively using mergesort.
COMBINE: Merge the two sorted subsequences of size n/2 each to produce the sorted
sequence consisting of n elements.
Note that the recursion "bottoms out" when the sequence to be sorted is of unit length. Since every
sequence of length 1 is in sorted order, no further recursive call is necessary. The key operation
of the mergesort algorithm is the merging of the two sorted sequences in the "combine" step. To
perform the merging, we use an auxiliary procedure Merge(A,p,q,r), where A is an array and p, q
and r are indices numbering elements of the array. The procedure assumes that the subarrays
A[p..q] and A[q+1..r] are in sorted order. It merges them to form a single sorted subarray that
replaces the current subarray A[p..r]. Thus finally we obtain the sorted array A[1..n], which is the
solution.
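A sketch of mergesort with the Merge(A,p,q,r) procedure described above might look like this in Python (using 0-based indices rather than the A[1..n] convention; the function names are assumptions):

```python
def merge(A, p, q, r):
    """Merge sorted subarrays A[p..q] and A[q+1..r] in place."""
    left, right = A[p:q + 1], A[q + 1:r + 1]
    i = j = 0
    for k in range(p, r + 1):
        # take from left while it has elements and its head is not larger
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            A[k] = left[i]; i += 1
        else:
            A[k] = right[j]; j += 1

def merge_sort(A, p, r):
    if p < r:                  # recursion bottoms out at unit length
        q = (p + r) // 2
        merge_sort(A, p, q)
        merge_sort(A, q + 1, r)
        merge(A, p, q, r)
```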
Analysis
T(1) = 1
T(n) = 2 × (running time on a list of n/2 elements) + linear merge
T(n) = 2T(n/2) + n
T(n) = 2T(n/2) + n = 2[2T(n/4) + n/2] + n = 4T(n/4) + 2n
T(n) = 4[2T(n/8) + n/4] + 2n = 8T(n/8) + 3n
...
T(n) = 2^k T(n/2^k) + kn
Setting n = 2^k (so k = log2 n) gives T(n) = nT(1) + n log2 n = n + n log2 n, i.e. T(n) = O(n log n).
Quick Sort
1. Choose a pivot value. We take the value of the middle element as the pivot value, but it can
be any value in the range of the sorted values, even if it is not present in the array.
2. Partition. Rearrange the elements in such a way that all elements less than the pivot go to
the left part of the array and all elements greater than the pivot go to the right part. Values
equal to the pivot can stay in either part. Notice that the array may be divided into
non-equal parts.
3. Sort both parts. Apply the quicksort algorithm recursively to the left and the right parts.
There are two indices, i and j. At the very beginning of the partition algorithm i points to the
first element in the array and j points to the last one. The algorithm then moves i forward until
an element with value greater than or equal to the pivot is found. Index j is moved backward until
an element with value less than or equal to the pivot is found. If i ≤ j they are swapped, i steps
to the next position (i + 1) and j steps to the previous one (j - 1). The algorithm stops when i
becomes greater than j.
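The partition scheme just described can be rendered as follows (a Python sketch under the stated rules; `quicksort` is an assumed name, not the notes' own code):

```python
def quicksort(a, left, right):
    """Quicksort using the middle element as pivot, as described above."""
    i, j = left, right
    pivot = a[(left + right) // 2]
    while i <= j:
        while a[i] < pivot:        # move i forward past smaller elements
            i += 1
        while a[j] > pivot:        # move j backward past larger elements
            j -= 1
        if i <= j:                 # swap and step both indices
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    if left < j:                   # recursively sort the left part
        quicksort(a, left, j)
    if i < right:                  # recursively sort the right part
        quicksort(a, i, right)
```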
Selection Sort
Selection sort is one of the O(n^2) sorting algorithms, which makes it quite inefficient for sorting
large data volumes. Selection sort is notable for its programming simplicity, and it can outperform
other sorts in certain situations (see the complexity analysis for more details).
The idea of the algorithm is quite simple. The array is conceptually divided into two parts: a
sorted one and an unsorted one. At the beginning the sorted part is empty, while the unsorted part
contains the whole array. At every step the algorithm finds the minimal element in the unsorted
part and adds it to the end of the sorted one. When the unsorted part becomes empty, the algorithm
stops.
When the algorithm sorts an array, it swaps the first element of the unsorted part with the minimal
element, which is then included in the sorted part. This implementation of selection sort is not stable.
Example. Sort {5, 1, 12, -5, 16, 2, 12, 14} using selection sort.
for i ← 1 to n-1 do
    minj ← i
    minx ← A[i]
    for j ← i + 1 to n do
        if A[j] < minx then
            minj ← j
            minx ← A[j]
    A[minj] ← A[i]
    A[i] ← minx
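A runnable rendering of this pseudocode (a Python sketch, assuming 0-based indexing) might be:

```python
def selection_sort(a):
    """In-place selection sort following the pseudocode above."""
    n = len(a)
    for i in range(n - 1):
        min_j = i                         # index of the current minimum
        for j in range(i + 1, n):
            if a[j] < a[min_j]:
                min_j = j
        a[i], a[min_j] = a[min_j], a[i]   # swap the minimum into place
    return a
```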
Analysis
Selection sort stops when the unsorted part becomes empty. As we know, at every step the number of
unsorted elements decreases by one. Therefore selection sort makes n - 1 steps of the outer loop
(where n is the number of elements in the array) before stopping. Every step of the outer loop
requires finding the minimum in the unsorted part. Summing up, (n - 1) + (n - 2) + ... + 1 =
n(n - 1)/2 results in O(n^2) comparisons. The number of swaps may vary from zero (for an already
sorted array) to n - 1 (when the array was sorted in reverse order), which results in O(n) swaps.
The overall algorithm complexity is O(n^2). The fact that selection sort requires at most n - 1
swaps makes it very efficient in situations where a write operation is significantly more expensive
than a read operation.
Heap Sort
The heap data structure is an array object which can easily be visualized as a complete binary
tree. There is a one-to-one correspondence between elements of the array and nodes of the tree.
The tree is completely filled on all levels except possibly the lowest, which is filled from the left
up to a point. All nodes of a heap also satisfy the relation that the key value at each node is at
least as large as the values at its children.
Step I: The user inputs the size of the heap (within a specified limit). The program generates a
corresponding binary tree with nodes having randomly generated key values.
Step II: Build-heap operation: Let n be the number of nodes in the tree and i be the key of a
tree node. For this, the program uses the operation Heapify. When Heapify is called, both the left
and right subtrees of i are heaps. The function of Heapify is to let i settle down to a position (by
swapping itself with the larger of its children whenever the heap property is not satisfied) till the
heap property is satisfied in the tree rooted at i. This operation is applied recursively.
Step III: Remove maximum element: The program removes the largest element of the heap (the
root) by swapping it with the last element.
Step IV: The program executes Heapify(new root) so that the resulting tree satisfies the heap
property.
Step V: Go to Step III till the heap is empty.
procedure downheap (v)
Input: semi-heap with root v
Output: heap (by rearranging the vertex labels)
Method: while v does not have the heap property do
choose direct descendant w with maximum label a(w)
exchange a(v) and a(w)
set v := w
Procedure downheap can be used to build a heap from an arbitrarily labelled tree. By proceeding
bottom-up, downheap is called for all subtrees rooted at inner vertices. Since leaves are already
heaps they may be omitted.
procedure buildheap
Input: almost complete binary tree T of depth d(T) with vertex labelling a
Output: heap (by rearranging the vertex labels)
Method: for i := d(T) – 1 downto 0 do
for all inner vertices v of depth d(v) = i do
downheap (v)
A call of buildheap is the first step of procedure heapsort, which can now be written down as
follows:
procedure heapsort
Input: almost complete binary tree with root r and vertex labelling a
Output: vertex labels in descending order
Method: buildheap
while r is not a leaf do
output a(r)
choose leaf b of maximum depth
write label a(b) to r
delete leaf b
downheap (r)
output a(r)
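The procedures downheap, buildheap and heapsort above might be rendered in Python as follows (an array-based heap with 0-based indices, an assumption that differs from the original's tree-and-leaf formulation):

```python
def downheap(a, v, n):
    """Sift a[v] down within a[0..n-1] until the heap property holds."""
    while 2 * v + 1 < n:
        w = 2 * v + 1                       # left child
        if w + 1 < n and a[w + 1] > a[w]:   # choose the child with maximum label
            w += 1
        if a[v] >= a[w]:                    # heap property already holds
            break
        a[v], a[w] = a[w], a[v]             # exchange labels and descend
        v = w

def heapsort(a):
    n = len(a)
    for v in range(n // 2 - 1, -1, -1):     # buildheap: bottom-up over inner vertices
        downheap(a, v, n)
    for end in range(n - 1, 0, -1):         # repeatedly remove the maximum (root)
        a[0], a[end] = a[end], a[0]         # swap root with the last element
        downheap(a, 0, end)                 # restore the heap on the shrunk prefix
    return a
```

This variant leaves the array sorted in ascending order in place, rather than outputting labels in descending order as the pseudocode does; the two are equivalent up to the direction of output.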
Analysis
An almost complete binary tree with n vertices has a depth of at most log(n). Therefore, procedure
downheap requires at most log(n) steps. Procedure buildheap calls downheap for each vertex,
therefore it requires at most n·log(n) steps. Heapsort calls buildheap once; then it calls downheap for
each vertex, together it requires at most 2·n·log(n) steps.
Thus the time complexity of heapsort is T(n) = O(n·log(n)). The algorithm is optimal, since the lower
bound of the sorting problem is attained.
Summary
Ø Divide and Conquer is a general algorithm design technique that solves a problem
instance by dividing it into several smaller instances of equal size, solving them
recursively, and then combining their solutions to get a solution to the original instance of
the problem.
Ø Time efficiency of divide and conquer satisfies the equation T(n)=aT(n/b)+f(n).
Ø Merge sort is a divide and conquer algorithm. It works by dividing an input array into
two halves, sorting them recursively and then merging the two sorted halves to get the
original array sorted.
Ø Quick sort is a divide and conquer algorithm that works by partitioning its input elements
according to their values relative to some preselected element.
Ø Binary search is an O(log n) algorithm for searching in sorted arrays.
Key Terms
1. ---------------- algorithm works by dividing an input array into two halves, sorting them
recursively and then merging the two sorted halves to get the original array sorted.
2. ------------------ is a divide and conquer algorithm that works by partitioning its input elements
according to their values relative to some preselected element.
Review Questions
Two mark Questions
Big Questions
1. Write an algorithm for finding the maximum element of an array; derive its best, worst and
average case complexity with appropriate order notations.
2. Explain the quick sort method in detail. Provide a complete analysis of quick sort.
3. Explain in detail merge sort. Illustrate the algorithm with a numeric example. Provide complete
analysis of the same.
4. Write an algorithm for performing binary search for any element in an array, and find the
complexity of binary search.
5. Define heap. Explain the properties of heap. With a simple example explain the concept of heap
algorithm.
6. Write an algorithm to sort n numbers using selection sort. Trace the algorithm for the numbers
11,21,56,43,87,67,54,2,99
7. Sort the following set of elements using quick sort 11,21,56,43,87,67,54,2,99,5,78,94
Lesson Labs
1. Compare quick sort, merge sort, heap sort and selection sort
2. Compare linear search and binary search.
3. Write an algorithm for the Fibonacci series using divide and conquer, and find the complexity
of the same.