
Algorithm Analysis:

Running Time, Big O, and Omega (Ω)

Dr. Ahmad R. Hadaegh
National University

Introduction
An algorithm is a step-by-step procedure for accomplishing a task, and a program is an implementation of an algorithm. In order to learn about an algorithm, we need to analyze it. This means we need to study the specification of the algorithm and draw conclusions about how the implementation of that algorithm (the program) will perform in general.


The issues that should be considered in analyzing an algorithm are:
- The running time of the program as a function of its inputs
- The total or maximum memory space needed for program data
- The total size of the program code
- Whether the program correctly computes the desired result
- The complexity of the program: for example, how easy it is to read, understand, and modify
- The robustness of the program: for example, how well it deals with unexpected or erroneous inputs


In this course, we consider the running time of the algorithm.

The main factors that affect the running time are the algorithm itself, the input data, the computer system, etc.

The performance of a computer is determined by:
- The hardware
- The programming language used
- The operating system

To calculate the running time of a general C++ program, we first need to define a few rules. In these rules, we assume that the running time of our C++ program is independent of the particular hardware and software system used, so that each primitive operation takes a constant amount of time.


Rule 1:
The time required to fetch an integer from memory is a constant, t(fetch), and the time required to store an integer in memory is also a constant, t(store).

For example, the running time of x = y is t(fetch) + t(store), because we need to fetch y from memory and store it into x. Similarly, the running time of x = 1 is also t(fetch) + t(store), because typically any constant is stored in memory before it is fetched.

Rule 2:
The times required to perform elementary operations on integers, such as addition t(+), subtraction t(-), multiplication t(*), division t(/), and comparison t(cmp), are all constants.

For example, the running time of y = x + 1 is 2t(fetch) + t(store) + t(+), because you need to:
- fetch x and 1:             2t(fetch)
- add them together:         t(+)
- place the result into y:   t(store)

Rule 3:
The time required to call a function is a constant, t(call), and the time required to return from a function is a constant, t(return).

Rule 4:
The time required to pass an integer argument to a function or procedure is the same as the time required to store an integer in memory, t(store).


For example, the running time of y = f(x) is t(fetch) + 2t(store) + t(call) + t(f(x)), because you need to:
- fetch the value of x:                                    t(fetch)
- pass x to the function and store it into the parameter:  t(store)
- call the function f:                                     t(call)
- run the function:                                        t(f(x))
- store the returned result into y:                        t(store)

Rule 5:
The time required for the address calculation implied by an array subscripting operation like a[i] is a constant, t([]). This time does not include the time to compute the subscript expression, nor does it include the time to access (fetch or store) the array element.

For example, the running time of y = a[i] is 3t(fetch) + t([]) + t(store), because you need to:
- fetch the value of i:            t(fetch)
- fetch the value of a:            t(fetch)
- find the address of a[i]:        t([])
- fetch the value of a[i]:         t(fetch)
- store the value of a[i] into y:  t(store)

Rule 6:
The time required to allocate a fixed amount of storage from the heap using operator new is a constant, t(new). This time does not include any time required for initialization of the storage (calling a constructor). Similarly, the time required to return a fixed amount of storage to the heap using operator delete is a constant, t(delete). This time does not include any time spent cleaning up the storage before it is returned to the heap (calling a destructor).

For example, the running time of int* ptr = new int; is t(new) + t(store), because you need to:
- allocate the memory:         t(new)
- store its address into ptr:  t(store)

Similarly, the running time of delete ptr; is t(fetch) + t(delete), because you need to:
- fetch the address from ptr:         t(fetch)
- release the location it points to:  t(delete)
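Putting Rules 1 through 6 together, the cost of a short fragment can be read off line by line. The following sketch of our own annotates each statement with its cost under the rules (the variables are illustrative only):

#include <iostream>

int main()
{
    int y = 5;            // t(fetch) + t(store)            (Rule 1)
    int x = y + 1;        // 2t(fetch) + t(+) + t(store)    (Rules 1, 2)
    int a[10] = {0};
    a[2] = x;             // 3t(fetch) + t([]) + t(store)   (Rules 1, 5)
    int* ptr = new int;   // t(new) + t(store)              (Rule 6)
    *ptr = a[2];          // fetches and a store, per Rules 1 and 5
    delete ptr;           // t(fetch) + t(delete)           (Rule 6)
    std::cout << x << '\n';
    return 0;
}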

1. int Sum (int n)
2. {
3.    int result = 0;
4.    for (int i = 1; i <= n; i = i + 1)
5.       result = result + i;
6.    return result;
7. }

Statement   Code                  Time
3           result = 0            t(fetch) + t(store)
4a          i = 1                 t(fetch) + t(store)
4b          i <= n                (2t(fetch) + t(cmp)) * (n+1)
4c          i = i + 1             (2t(fetch) + t(+) + t(store)) * n
5           result = result + i   (2t(fetch) + t(+) + t(store)) * n
6           return result         t(fetch) + t(return)

Total: [6t(fetch) + 2t(store) + t(cmp) + 2t(+)] * n + [5t(fetch) + 2t(store) + t(cmp) + t(return)]
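If, anticipating the simplification introduced below, every constant t(.) is taken as one unit of time, the total reduces to T(n) = 11n + 9. The following sketch of our own instruments Sum with a unit-cost counter charged exactly where the table charges each row, and checks the count against 11n + 9 (the steps counter and the driver are illustrative additions, not part of the slides):

#include <cassert>
#include <iostream>

long long steps = 0;   // global unit-cost counter: every t(.) counts as 1

int Sum(int n)
{
    steps += 2;                        // 3:  result = 0  -> t(fetch) + t(store)
    int result = 0;
    steps += 2;                        // 4a: i = 1       -> t(fetch) + t(store)
    for (int i = 1; ; i = i + 1)
    {
        steps += 3;                    // 4b: i <= n      -> 2t(fetch) + t(cmp)
        if (!(i <= n)) break;
        steps += 4;                    // 5:  result = result + i
        result = result + i;
        steps += 4;                    // 4c: i = i + 1
    }
    steps += 2;                        // 6:  return result -> t(fetch) + t(return)
    return result;
}

int main()
{
    for (int n = 0; n <= 100; ++n)
    {
        steps = 0;
        Sum(n);
        assert(steps == 11LL * n + 9); // matches the table with all constants = 1
    }
    std::cout << "T(n) = 11n + 9 verified for n = 0..100\n";
    return 0;
}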

1. int func (int a[ ], int n, int x)
2. {
3.    int result = a[n];
4.    for (int i = n-1; i >= 0; i = i - 1)
5.       result = result * x + a[i];
6.    return result;
7. }

Statement   Code                       Time
3           result = a[n]              3t(fetch) + t([]) + t(store)
4a          i = n - 1                  2t(fetch) + t(-) + t(store)
4b          i >= 0                     (2t(fetch) + t(cmp)) * (n+1)
4c          i = i - 1                  (2t(fetch) + t(-) + t(store)) * n
5           result = result*x + a[i]   (5t(fetch) + t([]) + t(+) + t(*) + t(store)) * n
6           return result              t(fetch) + t(return)

Total: [9t(fetch) + 2t(store) + t(cmp) + t([]) + t(+) + t(*) + t(-)] * n + [8t(fetch) + 2t(store) + t([]) + t(-) + t(cmp) + t(return)]

Using constant times such as t(fetch), t(store), t(new), t(delete), t(+), etc. makes our running time expressions accurate. However, to keep things simple, we can take the approximate running time of every constant to be the same unit time, t(1). For example, the running time of y = x + 1 is then 3, because it consists of two fetches and one store, all of which are constants.

For a loop there are two cases:
- If we know the exact number of iterations in advance, the loop contributes a constant, t(1).
- If we do not know the exact number of iterations, the loop contributes t(n), where n is the number of iterations.
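A brief illustration of the simplified model (a sketch of our own; the loop bounds are arbitrary):

#include <iostream>

// Illustrates the simplified unit-cost model described above.
int Example(int n)
{
    int x = 5;
    int y = x + 1;                // counts as 3: two fetches and one store

    for (int k = 0; k < 8; ++k)   // iteration count known in advance:
        y = y + 1;                //   the whole loop counts as a constant, t(1)

    for (int k = 0; k < n; ++k)   // iteration count depends on the input:
        y = y + 1;                //   the loop counts as t(n)

    return y;
}

int main()
{
    std::cout << Example(10) << '\n';
    return 0;
}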

1. int Geometric (int x, int n)
2. {
3.    int sum = 0;
4.    for (int i = 0; i <= n; ++i)
5.    {
6.       int prod = 1;
7.       for (int j = 0; j < i; ++j)
8.          prod = prod * x;
9.       sum = sum + prod;
10.   }
11.   return sum;
12. }

Statement   Code               Time
3           sum = 0            2
4a          i = 0              2
4b          i <= n             3(n+2)
4c          ++i                4(n+1)
6           prod = 1           2(n+1)
7a          j = 0              2(n+1)
7b          j < i              3 * sum_{i=0..n} (i+1)
7c          ++j                4 * sum_{i=0..n} i
8           prod = prod * x    4 * sum_{i=0..n} i
9           sum = sum + prod   4(n+1)
11          return sum         2

Total: (11/2)n^2 + (47/2)n + 27
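The closed form can be cross-checked by adding up the table rows directly, using sum_{i=0..n} i = n(n+1)/2 and sum_{i=0..n} (i+1) = (n+1)(n+2)/2. A small verification sketch of our own:

#include <cassert>
#include <iostream>

// Sum of the per-statement costs in the table for a given n.
long long TableTotal(long long n)
{
    long long sumI  = n * (n + 1) / 2;        // sum_{i=0..n} i
    long long sumI1 = (n + 1) * (n + 2) / 2;  // sum_{i=0..n} (i+1)
    return 2 + 2                  // statements 3 and 4a
         + 3 * (n + 2)            // 4b
         + 4 * (n + 1)            // 4c
         + 2 * (n + 1)            // 6
         + 2 * (n + 1)            // 7a
         + 3 * sumI1              // 7b
         + 4 * sumI               // 7c
         + 4 * sumI               // 8
         + 4 * (n + 1)            // 9
         + 2;                     // 11
}

int main()
{
    // Compare 2 * TableTotal(n) with 11n^2 + 47n + 54 to stay in integers.
    for (long long n = 0; n <= 1000; ++n)
        assert(2 * TableTotal(n) == 11 * n * n + 47 * n + 54);
    std::cout << "(11/2)n^2 + (47/2)n + 27 verified for n = 0..1000\n";
    return 0;
}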


1. int Power (int x, int n)
2. {
3.    if (n == 0)
4.       return 1;
5.    else if (n % 2 == 0)              // n is even
6.       return Power (x*x, n/2);
7.    else                              // n is odd
8.       return x * Power (x*x, n/2);
9. }

Statement   n = 0   n > 0, n even   n > 0, n odd
3           3       3               3
4           2       -               -
5           -       5               5
6           -       10 + T(n/2)     -
8           -       -               12 + T(n/2)
Total       5       18 + T(n/2)     20 + T(n/2)


T(n) =  5              for n = 0
        18 + T(n/2)    for n > 0 and n even
        20 + T(n/2)    for n > 0 and n odd

(where n/2 denotes integer division)

Suppose n = 2^k for some k > 0. Since 2^k is an even number, we get:

T(2^k) = 18 + T(2^(k-1))
       = 18 + (18 + T(2^(k-2)))
       = 18 + 18 + (18 + T(2^(k-3)))
       = ...
       = 18k + T(2^(k-k))
       = 18k + T(2^0)
       = 18k + T(1)


Since 1 is an odd number, the running time of T(1) is: T(1) = 20 + T(0) = 20 + 5 = 25. Therefore, T(2^k) = 18k + 25. If n = 2^k, then k = log2 n. Therefore, T(n) = 18 log n + 25.
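A short sketch of our own that evaluates the recurrence directly and confirms the closed form T(2^k) = 18k + 25:

#include <cassert>
#include <iostream>

// T(n) exactly as defined by the recurrence above (n/2 is integer division).
long long T(long long n)
{
    if (n == 0)     return 5;
    if (n % 2 == 0) return 18 + T(n / 2);
    return 20 + T(n / 2);   // n is odd
}

int main()
{
    for (long long k = 0; k <= 30; ++k)
        assert(T(1LL << k) == 18 * k + 25);  // T(2^k) = 18k + 25
    std::cout << "T(2^k) = 18k + 25 verified for k = 0..30\n";
    return 0;
}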


Asymptotic Notation
Suppose the running times of two algorithms A and B are TA(n) and TB(n), respectively, where n is the size of the problem.

How can we determine whether TA(n) is better than TB(n)?

One way is to fix the problem size ahead of time at some n = n0. Then we may be able to say that algorithm A performs better than algorithm B for n = n0. But this is only the special case n = n0. What about n = n1, or n = n2? Is A better than B in those cases too?

Unfortunately, there is no easy answer. We cannot expect the size n to be known ahead of time. But we may be able to say that, under certain conditions, TA(n) is better than TB(n) for all n >= n1.

To understand the running times of algorithms, we need to make some definitions:

Definition: Consider a function f(n) which is non-negative for all integers n >= 0. We say that f(n) is big oh of g(n), written f(n) is O(g(n)), if there exist an integer n0 and a constant c > 0 such that for all integers n >= n0, f(n) <= c g(n).

Example: Show that f(n) = 8n + 128 is O(n^2).

We need c and n0 such that 8n + 128 <= c n^2 for all n >= n0. Let c = 1:

    8n + 128 <= n^2
    0 <= n^2 - 8n - 128
    0 <= (n - 16)(n + 8)

Since (n - 16)(n + 8) >= 0 for all n >= 16, we can take c = 1 and n0 = 16, and conclude that f(n) is O(n^2).

[Figure: f(n) = 8n + 128 plotted against g1(n) = 4n^2, g2(n) = 2n^2, and g3(n) = n^2; f(n) drops below n^2 at n = 16.]
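The algebra above is the proof; the pair (c, n0) = (1, 16) is the witness. A quick spot-check sketch of our own over a finite range (it illustrates, but of course cannot replace, the proof):

#include <cassert>
#include <iostream>

int main()
{
    const long long c = 1, n0 = 16;
    for (long long n = n0; n <= 100000; ++n)
        assert(8 * n + 128 <= c * n * n);   // f(n) <= c*g(n) for all tested n >= n0
    std::cout << "8n + 128 <= n^2 spot-checked for n = 16..100000\n";
    return 0;
}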


Theorem: If f1(n) is O(g1(n)) and f2(n) is O(g2(n)), then f1(n) + f2(n) is O(max(g1(n), g2(n))).

Proof:
If f1(n) is O(g1(n)), then f1(n) <= c1 g1(n) for some c1 and all n >= n1.
If f2(n) is O(g2(n)), then f2(n) <= c2 g2(n) for some c2 and all n >= n2.
Let n0 = max(n1, n2) and c0 = 2 max(c1, c2), and consider the sum f1(n) + f2(n) for some n >= n0:

    f1(n) + f2(n) <= c1 g1(n) + c2 g2(n)
                  <= c0 (g1(n) + g2(n)) / 2
                  <= c0 max(g1(n), g2(n))

Therefore, f1(n) + f2(n) is O(max(g1(n), g2(n))).



Theorem: If f1(n) is O(g1(n)) and f2(n) is O(g2(n)), then f1(n) * f2(n) is O(g1(n) * g2(n)).

Proof:
If f1(n) is O(g1(n)), then f1(n) <= c1 g1(n) for some c1 and all n >= n1.
If f2(n) is O(g2(n)), then f2(n) <= c2 g2(n) for some c2 and all n >= n2.
Let n0 = max(n1, n2) and c0 = c1 * c2, and consider the product f1(n) * f2(n) for some n >= n0:

    f1(n) * f2(n) <= c1 g1(n) * c2 g2(n)
                  <= c0 (g1(n) * g2(n))

Therefore, f1(n) * f2(n) is O(g1(n) * g2(n)).



Theorem: If f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) is O(h(n)).

Proof:
If f(n) is O(g(n)), then f(n) <= c1 g(n) for some c1 and all n >= n1.
If g(n) is O(h(n)), then g(n) <= c2 h(n) for some c2 and all n >= n2.
Let n0 = max(n1, n2) and c0 = c1 * c2. Then for all n >= n0:

    f(n) <= c1 g(n)
         <= c1 c2 h(n)
         =  c0 h(n)

Therefore, f(n) is O(h(n)).


The names of common big O expressions

Expression    Name
O(1)          constant
O(log n)      logarithmic
O(log^2 n)    log squared
O(n)          linear
O(n log n)    n log n
O(n^2)        quadratic
O(n^3)        cubic
O(2^n)        exponential


Conventions for Writing Big Oh Expressions

Certain conventions have evolved concerning how big oh expressions are normally written:

First, it is common practice to drop all but the most significant terms. Thus, instead of O(n^2 + n log n + n), we simply write O(n^2).

Second, it is common practice to drop constant coefficients. Thus, instead of O(3n^2), we write O(n^2). As a special case of this rule, if the function is a constant, instead of, say, O(1024), we simply write O(1).


Asymptotic Lower Bound (Ω)

Definition: Consider a function f(n) which is non-negative for all integers n >= 0. We say that f(n) is omega of g(n), written f(n) is Ω(g(n)), if there exist an integer n0 and a constant c > 0 such that for all integers n >= n0, f(n) >= c g(n).

Example: Show that f(n) = 5n^2 - 64n + 256 is Ω(n^2).

We need c and n0 such that 5n^2 - 64n + 256 >= c n^2 for all n >= n0. Let c = 1:

    5n^2 - 64n + 256 >= n^2
    4n^2 - 64n + 256 >= 0
    4(n - 8)^2 >= 0

The last inequality holds for all n, so in particular for n >= n0 = 8. Thus, for c = 1 and n >= 8, f(n) is Ω(n^2).

[Figure: f(n) = 5n^2 - 64n + 256 plotted against n^2 and 2n^2; f(n) stays above n^2 for n >= 8.]
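Again, the algebra above supplies the witness pair (c, n0) = (1, 8). A quick spot-check sketch of our own over a finite range:

#include <cassert>
#include <iostream>

int main()
{
    const long long c = 1, n0 = 8;
    for (long long n = n0; n <= 100000; ++n)
        assert(5 * n * n - 64 * n + 256 >= c * n * n);  // f(n) >= c*g(n)
    std::cout << "5n^2 - 64n + 256 >= n^2 spot-checked for n = 8..100000\n";
    return 0;
}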


Other definitions
Definition: Consider a function f(n) which is non-negative for all integers n >= 0. We say that f(n) is theta of g(n), written f(n) is Θ(g(n)), if and only if f(n) is O(g(n)) and f(n) is Ω(g(n)).

Definition: Consider a function f(n) which is non-negative for all integers n >= 0. We say that f(n) is little o of g(n), written f(n) is o(g(n)), if and only if f(n) is O(g(n)) but f(n) is not Ω(g(n)).
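To see the Θ definition in action, note that f(n) = 5n^2 - 64n + 256 from the Ω example is also O(n^2): taking c = 5 and n0 = 4, we have 5n^2 - 64n + 256 <= 5n^2 whenever 64n >= 256. Hence f(n) is Θ(n^2). The following spot-check sketch of our own tests both witnesses on a finite range (the bounds c = 5, n0 = 4 are our choice, not from the slides):

#include <cassert>
#include <iostream>

int main()
{
    for (long long n = 8; n <= 100000; ++n)
    {
        long long f = 5 * n * n - 64 * n + 256;
        assert(f >= 1 * n * n);   // Omega(n^2) witness: c = 1, n0 = 8
        assert(f <= 5 * n * n);   // O(n^2) witness:     c = 5, n0 = 4
    }
    std::cout << "5n^2 - 64n + 256 is Theta(n^2) on the tested range\n";
    return 0;
}

Now let's consider some of the previous examples in terms of big O notation: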

1. int func (int a[ ], int n, int x)
2. {
3.    int result = a[n];
4.    for (int i = n-1; i >= 0; --i)
5.       result = result * x + a[i];
6.    return result;
7. }

Statement   Simple time model   Big O
3           5                   O(1)
4a          4                   O(1)
4b          3n + 3              O(n)
4c          4n                  O(n)
5           9n                  O(n)
6           2                   O(1)
Total       16n + 14            O(n)

The total running time is:
O(16n + 14) = O(max(16n, 14)) = O(16n) = O(n)


1. void PrefixSums (int a[ ], int n)
2. {
3.    for (int j = n-1; j >= 0; --j)
4.    {
5.       int sum = 0;
6.       for (int i = 0; i <= j; ++i)
7.          sum = sum + a[i];
8.       a[j] = sum;
9.    }
10. }

Statement   Big O
3a          O(1)
3b          O(1) * O(n)
3c          O(1) * O(n)
5           O(1) * O(n)
6a          O(1) * O(n)
6b          O(1) * O(n^2)
6c          O(1) * O(n^2)
7           O(1) * O(n^2)
8           O(1) * O(n)
Total       O(n^2)
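For reference, here is a self-contained runnable version of PrefixSums with a small driver (the driver and sample data are our additions):

#include <iostream>

// In-place prefix sums: a[j] becomes a[0] + a[1] + ... + a[j].
// The outer loop runs j from n-1 down to 0, so each inner sum reads
// only elements that have not yet been overwritten.
void PrefixSums(int a[], int n)
{
    for (int j = n - 1; j >= 0; --j)
    {
        int sum = 0;
        for (int i = 0; i <= j; ++i)   // O(j+1) work per outer iteration
            sum = sum + a[i];
        a[j] = sum;
    }
}

int main()
{
    int a[] = {1, 2, 3, 4};
    PrefixSums(a, 4);
    for (int v : a)
        std::cout << v << ' ';          // prints: 1 3 6 10
    std::cout << '\n';
    return 0;
}

The inner loops perform 1 + 2 + ... + n = n(n+1)/2 additions in total, which is consistent with the O(n^2) total in the table.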
