Solutions to HW Assignment 1

28th July, 2004

Problem 1:

For each function f (n) and time t in the following table, determine the largest size n of a problem that
can be solved in time t, assuming that the algorithm to solve the problem takes f (n) microseconds.

Solution to Problem 1:

The procedure for filling the first entry in the table is shown below. Other entries are to be filled
similarly.

For the function log2(n), solving a problem of size n takes log2(n) microseconds. We calculate the value of n for which log2(n) equals 10^6.

    log2(n) microseconds = 1 second
or, log2(n) × 10^(-6) = 1
or, log2(n) = 10^6
or, n = 2^(10^6)

f(n)       1 second    1 hour          1 day            1 century
log2(n)    2^(10^6)    2^(3.6×10^9)    2^(8.64×10^10)   2^(3.15×10^15)
n          10^6        3.6×10^9        8.64×10^10       3.15×10^15
2^n        19          31              36               51
n!         9           12              13               17
n^2        10^3        6×10^4          2.94×10^5        5.61×10^7
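
For the rows where f(n) grows fast enough to enumerate, the entries of the table can be checked numerically. Below is a minimal Python sketch (not part of the original solution); the time budgets, the helper largest_n, and the doubling-plus-binary-search strategy are illustrative choices of mine. The log2(n) and n rows are omitted because their answers (2^t and t for a budget of t microseconds) follow directly in closed form.

import math

# Time budgets in microseconds (1 second, 1 hour, 1 day, 1 century of 365-day years).
BUDGETS = {
    "1 second":  10**6,
    "1 hour":    3600 * 10**6,
    "1 day":     86400 * 10**6,
    "1 century": 100 * 365 * 86400 * 10**6,
}

def largest_n(f, budget):
    # Largest integer n with f(n) <= budget, found by doubling then binary search.
    if f(1) > budget:
        return 0
    hi = 1
    while f(hi) <= budget:          # grow an upper bound exponentially
        hi *= 2
    lo = hi // 2                    # invariant: f(lo) <= budget < f(hi)
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if f(mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

functions = {
    "n^2": lambda n: n * n,
    "2^n": lambda n: 2 ** n,
    "n!":  lambda n: math.factorial(n),
}

for name, f in functions.items():
    print(name, [largest_n(f, b) for b in BUDGETS.values()])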

Problem 2:

Insertion sort can be expressed as a recursive procedure as follows. In order to sort A[1..n] we
recursively sort A[1..n − 1] and then insert A[n] into the sorted array A[1..n-1]. Write a recurrence
for the running time of this recursive version of insertion sort and find the solution.

Solution to Problem 2:

Let T(n) be the time taken to sort an array of n elements with this recursive insertion sort, where T is a function on the non-negative integers. For n = 1 we have T(1) = 0, because a one-element array is already sorted. Sorting the first n − 1 elements takes T(n − 1) time, and inserting the nth element into the sorted array A[1..n − 1] requires j comparisons and the shifting of n − j elements (including the element being inserted) one position to the right, where j is the position of the nth element in the sorted array. Assuming that each comparison and each shift takes 1 unit of time, we obtain the following recurrence for T(n):
T(n) = T(n − 1) + j + (n − j),    1 ≤ j ≤ n
     = T(n − 1) + n
     = T(n − 2) + (n − 1) + n
     ...
     = T(1) + 2 + 3 + ... + (n − 1) + n
     = n(n + 1)/2 − 1

Thus, the recursive version of insertion sort runs in O(n^2) time.
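
As a sanity check, the recursion and the closed form can be exercised in code. The following is a minimal Python sketch, not part of the original solution; the operation counter deliberately charges exactly n units per insertion, mirroring the accounting used in the recurrence rather than counting actual data moves.

import random

def insertion_sort(A, n=None):
    # Recursively sort A[0..n-1] in place and return the number of time
    # units charged, using the same accounting as the recurrence above.
    if n is None:
        n = len(A)
    if n <= 1:                      # T(1) = 0: a single element is already sorted
        return 0
    ops = insertion_sort(A, n - 1)  # T(n-1): sort the first n-1 elements
    key = A[n - 1]                  # insert A[n-1] into the sorted prefix
    j = n - 2
    while j >= 0 and A[j] > key:
        A[j + 1] = A[j]             # shift larger elements one position right
        j -= 1
    A[j + 1] = key
    return ops + n                  # charge n units for this insertion

data = [random.randint(0, 99) for _ in range(10)]
n = len(data)
ops = insertion_sort(data)
print(data == sorted(data), ops == n * (n + 1) // 2 - 1)   # True True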
Problem 3:

Suppose that each row of an n × n array A consists of 1’s and 0’s such that, in any row of A, all the
1’s come before any 0’s in that row. Assume that the array has already been created and filled. Write
the pseudocode for an algorithm which runs in O(n) time and finds the row of A that contains the
most 1’s.

Solution to Problem 3:

Think of the 1’s as concrete bricks and the 0’s as air. The matrix can then be viewed as a structure of horizontal slabs of varying lengths jutting out from a vertical column (refer to the figure below). The idea is to walk along the first (topmost) slab until you reach its end, then jump vertically down from that point. You fall through the rows that have 0’s in that column and finally land on a concrete slab again. Walk to the end of that slab and repeat the procedure. When you reach the ground (the last row), recall the slab from which you jumped last. That is the longest one, and hence it corresponds to the row with the largest number of 1’s.

1→ 1→ 0↓ 0 0
1 1 1→ 0↓ 0
1 0 0 0↓ 0
1 1 1 1*→ 0↓
1 1 0 0 0↓

The row with the last horizontal arrow (marked *) is the one with the largest number of ones.

Pseudo code of the algorithm:

Procedure FindMaxRow
Input:  A[1..n][1..n] - an n × n binary array
Output: rowmax - the index of the row with the maximum number of ones

    rowmax := 1
    row := 1, col := 1
    while row < n+1 and col < n+1
    begin
        if A[row][col] = 1 then col := col+1, rowmax := row
        else row := row+1
    end while
    return rowmax
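
For concreteness, here is a runnable Python sketch of the same walk. It is not part of the original solution; the function name find_max_row and the use of 0-based indexing are my own choices, and the example matrix is the one from the figure above.

def find_max_row(A):
    # Return the index of the row of A containing the most 1's, assuming
    # every row has all of its 1's before any 0's. Each loop iteration
    # moves one step right or one step down, so the running time is O(n).
    n = len(A)
    rowmax = 0
    row, col = 0, 0
    while row < n and col < n:
        if A[row][col] == 1:
            col += 1        # still on a slab: walk right
            rowmax = row    # remember the row that carried us this far
        else:
            row += 1        # fell off the slab: drop to the next row
    return rowmax

A = [
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [1, 1, 0, 0, 0],
]
print(find_max_row(A))      # prints 3, i.e. the fourth row of the figure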

Analysis

The main part of the algorithm is the while loop, and each iteration of the loop performs a fixed number of operations. Therefore, to analyze the time complexity of the algorithm, it suffices to count the number of times the loop body is executed. In every iteration, either the variable row or the variable col is incremented. Initially both row and col are 1, and neither can exceed n + 1. In the worst case one of them is incremented up to n and the other up to n + 1, so the loop is executed n + (n + 1) − (1 + 1) = 2n − 1 times. Similarly, in the best case one of them must reach n + 1, so the loop is executed at least (n + 1) + 1 − (1 + 1) = n times. Therefore the algorithm runs in O(n) time.
