
Solutions to HW Assignment 1, 28th July 2004

Problem 1:

For each function f(n) and time t in the following table, determine the largest size n of a problem that can be solved in time t, assuming that the algorithm to solve the problem takes f(n) microseconds.

Solution to Problem 1 :

The procedure for ﬁlling the ﬁrst entry in the table is shown below. Other entries are to be ﬁlled similarly.

For the function log2(n), the time taken to solve a problem of size n is log2(n) microseconds. We calculate the value of n such that log2(n) microseconds equals 1 second:

    log2(n) microseconds = 1 second
    or, log2(n) × 10^(-6) = 1
    or, log2(n) = 10^6
    or, n = 2^(10^6)

The completed table:

    f(n)      1 second    1 hour          1 day            1 century
    log2(n)   2^(10^6)    2^(3.6×10^9)    2^(8.64×10^10)   2^(3.15×10^15)
    n         10^6        3.6×10^9        8.64×10^10       3.15×10^15
    2^n       19          31              36               51
    n!        9           12              13               17
    n^2       10^3        6×10^4          2.94×10^5        5.61×10^7
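Entries like these can be checked mechanically. The sketch below (the function name is ours, not part of the assignment) finds the largest n with f(n) ≤ budget, assuming f is non-decreasing, by doubling an upper bound and then binary searching:

```python
import math

def largest_n(f, budget_us):
    """Largest n with f(n) <= budget_us (f assumed non-decreasing).
    Grow an upper bound by doubling, then binary-search the threshold."""
    hi = 1
    while f(hi) <= budget_us:
        hi *= 2
    lo = hi // 2                      # invariant: f(lo) <= budget_us < f(hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if f(mid) <= budget_us:
            lo = mid
        else:
            hi = mid
    return lo

SECOND = 10**6   # one second in microseconds
print(largest_n(lambda n: 2**n, SECOND))   # 19
print(largest_n(math.factorial, SECOND))   # 9
print(largest_n(lambda n: n * n, SECOND))  # 1000
```

The log2(n) row is the one entry this cannot reproduce directly, since the answer 2^(10^6) is far too large to enumerate; it is obtained algebraically as shown above.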

Problem 2:

Insertion sort can be expressed as a recursive procedure as follows. In order to sort A[1..n], we recursively sort A[1..n-1] and then insert A[n] into the sorted array A[1..n-1]. Write a recurrence for the running time of this recursive version of insertion sort and find the solution.

Solution to Problem 2 :

Let T(n) denote the time taken to sort an array of n elements using insertion sort, where T is a function on the non-negative integers. For n = 1, T(n) = T(1) = 0, because a one-element array is already sorted. Sorting the first n - 1 elements takes T(n-1) time, and inserting the nth element into the sorted array A[1..n-1] requires j comparisons and the shifting of n - j elements (including the element being inserted) to the right by one array position, where j is the final position of the nth element in the sorted array. Assuming that each comparison and each shift takes 1 unit of time, we get the following recurrence for T(n):

    T(n) = T(n-1) + j + (n - j),   where 1 <= j <= n
         = T(n-1) + n
         = T(n-2) + (n - 1) + n
         ...
         = T(1) + 2 + 3 + ... + (n - 1) + n
         = n(n + 1)/2 - 1

since T(1) = 0 and 2 + 3 + ... + n = n(n + 1)/2 - 1.

Thus, the recursive version of insertion sort runs in O(n^2) time.
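As a sanity check, both the recursive procedure and the recurrence can be sketched in Python (a sketch under the cost model above; the function names are ours):

```python
def insertion_sort_rec(a, n=None):
    """Recursively sort a[0..n-1]: sort the first n-1 elements,
    then insert a[n-1] into its place among them."""
    if n is None:
        n = len(a)
    if n <= 1:
        return a                       # a prefix of length <= 1 is already sorted
    insertion_sort_rec(a, n - 1)
    key, j = a[n - 1], n - 2
    while j >= 0 and a[j] > key:       # shift larger elements one slot right
        a[j + 1] = a[j]
        j -= 1
    a[j + 1] = key
    return a

def T(n):
    """The recurrence T(1) = 0, T(n) = T(n-1) + n."""
    return 0 if n == 1 else T(n - 1) + n

print(insertion_sort_rec([5, 2, 4, 6, 1, 3]))                    # [1, 2, 3, 4, 5, 6]
print(all(T(n) == n * (n + 1) // 2 - 1 for n in range(1, 30)))   # True
```

The second check confirms the closed form n(n + 1)/2 - 1 against the recurrence for small n.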

Problem 3:

Suppose that each row of an n × n array A consists of 1’s and 0’s such that, in any row of A, all the 1’s come before any 0’s in that row. Assume that the array has already been created and ﬁlled. Write the pseudocode for an algorithm which runs in O(n ) time and ﬁnds the row of A that contains the most 1’s.

Solution to Problem 3 :

Think of 1’s as concrete bricks and 0’s as air. Therefore the matrix can be thought of as a brick structure with continuous slabs of horizontal walls that are hanging out from a vertical column. (Refer to the ﬁgure below). The idea is to walk over the ﬁrst (topmost) slab till you reach the end of the wall. Jump from that point vertically down. You will fall through the rows that have 0’s (in that column) and ﬁnally land on a concrete slab again. Walk till the end of the slab and repeat the above procedure. When you reach the ground (the last row), recall the slab from which you jumped last. That is the longest one (and hence, corresponds to the row with the largest number of 1’s).

    1→  1→  0↓  0   0
    1   1   1→  0↓  0
    1   0   0   0↓  0
    1   1   1   1*→ 0↓
    1   1   0   0   0↓

The row with the last horizontal arrow (marked *) is the one with the largest number of ones.

Pseudocode of the algorithm:

Procedure FindMaxRow
Input:  A[n][n] - binary array
Output: rowmax - the row with the maximum number of ones

    rowmax := 1
    row := 1, col := 1
    while row < n+1 and col < n+1
    begin
        if A[row][col] = 1 then
            col := col+1, rowmax := row
        else
            row := row+1
    end while
    return rowmax
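A direct Python translation of the pseudocode (0-indexed, as is conventional in Python; the iteration counter is our addition, useful for checking the loop-count analysis):

```python
def find_max_row(A):
    """Return the (0-indexed) row of A with the most leading 1's,
    together with the number of loop iterations performed."""
    n = len(A)
    rowmax, row, col = 0, 0, 0
    iters = 0
    while row < n and col < n:
        iters += 1
        if A[row][col] == 1:
            col += 1          # walk right along the concrete slab
            rowmax = row      # remember the last row we walked along
        else:
            row += 1          # fall through the air (a 0) to the next row
    return rowmax, iters

# the matrix from the figure above
A = [[1, 1, 0, 0, 0],
     [1, 1, 1, 0, 0],
     [1, 0, 0, 0, 0],
     [1, 1, 1, 1, 0],
     [1, 1, 0, 0, 0]]
print(find_max_row(A))   # (3, 9): the fourth row (marked *), in 2n - 1 = 9 iterations
```

On this input the walk takes exactly 2n - 1 = 9 iterations, the worst case derived in the analysis below.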

Analysis

The main part of the algorithm is the while loop, and each execution of the loop performs a fixed number of operations. Therefore, to analyze the time complexity of the algorithm, it suffices to count the number of times the loop is executed. Every time the loop body is executed, either the variable row or the variable col is incremented. Initially, both row and col are 1, and neither can exceed n + 1. Therefore, in the worst case, one of them is incremented to n and the other to n + 1, so the loop is executed n + (n + 1) - (1 + 1) = 2n - 1 times. Similarly, in the best case, one of them must still reach n + 1, so the loop is executed at least (n + 1) + 1 - (1 + 1) = n times. Therefore the algorithm runs in O(n) time.