
Time Complexity

Dr. Jicheng Fu

Department of Computer Science
University of Central Oklahoma

Objectives (Section 7.6)
The concepts of space complexity and time
complexity
Use the step count to derive a function of the
time complexity of a program
Asymptotics and orders of magnitude
The big-O and related notations
Time complexity of recursive algorithms
Motivation
[Figure: a sorted array arr of n elements: 3 5 10 16 20 22 28 36 60]
Evaluate An Algorithm
Two important measures to evaluate an
algorithm
Space complexity
Time complexity
Space complexity
The maximum storage space needed for an
algorithm
Expressed as a function of the problem size
Relatively easy to evaluate
Time complexity
Determining the number of steps (operations)
needed as a function of the problem size
Our focus

Step Count
Count the exact number of steps needed for
an algorithm as a function of the problem size
Each atomic operation is counted as one step:
Arithmetic operations
Comparison operations
Other operations, such as assignment and
return

Step counts for Algorithm 1 (code below):
sum = 0: 1
outer for loop: 2n
inner for loop: sum_{i=1}^{n} 2(n - i + 1)
sum++: sum_{i=1}^{n} (n - i + 1)
return sum: 1

The running time is
2 + 2n + 3 sum_{i=1}^{n} (n - i + 1) = (3/2)n^2 + (7/2)n + 2
Algorithm 1
int count_1(int n)
{
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        for (int j = i; j <= n; j++) {
            sum++;
        }
    }
    return sum;
}

Note: sum_{i=1}^{n} (n - i + 1) = sum_{i=1}^{n} i = n(n + 1)/2
int count_2(int n)
{
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        sum += n + 1 - i;
    }
    return sum;
}
Step counts: sum = 0: 1; for loop: 2n; sum += n + 1 - i: 3n; return sum: 1

The running time is 5n + 2
Algorithm 2
int count_3(int n)
{
    int sum = n * (n + 1) / 2;
    return sum;
}
Step counts: sum = n(n + 1)/2: 4; return sum: 1

The running time is 5 time units
Algorithm 3
Asymptotics
An exact step count is usually unnecessary
It is too dependent on the programming language and the programmer's style
These details make little difference in whether the algorithm is feasible or not
A change in fundamental method can make a
vital difference
If the number of operations is proportional to n, then doubling n will double the running time
If the number of operations is proportional to 2^n, then doubling n will square the number of operations
Example:
Assume that a computation that takes 1 second involves 10^6 operations
Also assume that doubling the problem size will require 10^12 operations
This increases the running time from 1 second to about 11.5 days:
10^12 operations / 10^6 operations per second = 10^6 seconds ~ 11.5 days
Instead of an exact step count, we want a
notation that
accurately reflects the increase of computation
time with the size, but
ignores details that have little effect on the total
Asymptotics: the study of functions of a
parameter n, as n becomes larger and larger
without bound
Orders of Magnitude
The idea:
Suppose function f(n) measures the amount of
work done by an algorithm on a problem of size n
Compare f(n) for large values of n, with some
well-known function g(n) whose behavior we
already understand
To compare f(n) against g(n):
take the quotient f(n) / g(n), and
take the limit of the quotient as n increases
without bound
Definition
If lim_{n→∞} f(n)/g(n) = 0, then:
f(n) has strictly smaller order of magnitude than g(n).
If lim_{n→∞} f(n)/g(n) is finite and nonzero, then:
f(n) has the same order of magnitude as g(n).
If lim_{n→∞} f(n)/g(n) = ∞, then:
f(n) has strictly greater order of magnitude than g(n).
Common choices for g(n):
g(n) = 1        Constant function
g(n) = log n    Logarithmic function
g(n) = n        Linear function
g(n) = n^2      Quadratic function
g(n) = n^3      Cubic function
g(n) = 2^n      Exponential function
Notes:
The second case, when f(n) and g(n) have the same order of magnitude, includes all values of the limit except 0 and ∞
Changing the running time of an algorithm by any nonzero constant factor will not affect its order of magnitude
Polynomials
If f(n) is a polynomial in n with degree r, then f(n) has the same order of magnitude as n^r
If r < s, then n^r has strictly smaller order of magnitude than n^s
Example 1:
f(n) = 3n^2 - 100n - 25, g(n) = n^3

lim_{n→∞} f(n)/g(n) = lim_{n→∞} (3n^2 - 100n - 25)/n^3 = 0

3n^2 - 100n - 25 has strictly smaller order than n^3
Example 2:
f(n) = 3n^2 - 100n - 25, g(n) = n

lim_{n→∞} f(n)/g(n) = lim_{n→∞} (3n^2 - 100n - 25)/n = ∞

3n^2 - 100n - 25 has strictly greater order than n

Example 3:
f(n) = 3n^2 - 100n - 25, g(n) = n^2

lim_{n→∞} f(n)/g(n) = lim_{n→∞} (3n^2 - 100n - 25)/n^2 = 3

3n^2 - 100n - 25 has the same order as n^2
Logarithms
The order of magnitude of a logarithm does not depend on the base of the logarithm
Let log_a n and log_b n be logarithms to two different bases a > 1 and b > 1
Since log_b n = (log_b a)(log_a n):

lim_{n→∞} (log_a n)/(log_b n) = lim_{n→∞} (log_a n)/((log_b a)(log_a n)) = 1/(log_b a) = log_a b

Since the base for logarithms makes no difference to the order of magnitude, we generally write log without a base
Compare the order of magnitude of a logarithm log n with a power of n, say n^r (r > 0)
It is difficult to calculate the quotient log n / n^r directly
We need a mathematical tool
L'Hôpital's Rule
Suppose that f(n) and g(n) are differentiable functions for all sufficiently large n, with derivatives f'(n) and g'(n), respectively,

lim_{n→∞} f(n) = ∞ and lim_{n→∞} g(n) = ∞,

and lim_{n→∞} f'(n)/g'(n) exists.

Then lim_{n→∞} f(n)/g(n) exists and

lim_{n→∞} f(n)/g(n) = lim_{n→∞} f'(n)/g'(n)
Use L'Hôpital's Rule
Let f(n) = ln n and g(n) = n^r, r > 0:

lim_{n→∞} f(n)/g(n) = lim_{n→∞} (ln n)/n^r = lim_{n→∞} f'(n)/g'(n) = lim_{n→∞} (1/n)/(r n^(r-1)) = lim_{n→∞} 1/(r n^r) = 0

Conclusion
log n has strictly smaller order of magnitude than any positive power n^r of n, r > 0.
Exponential Functions
Compare the order of magnitude of an exponential function a^n with a power of n, say n^r (r > 0)
Use L'Hôpital's Rule again (pp. 308)
Conclusion:
Any exponential function a^n for any real number a > 1 has strictly greater order of magnitude than any power n^r of n, for any positive integer r
Compare the order of magnitude of two exponential functions with different bases, a^n and b^n
Assume 0 <= a < b:

lim_{n→∞} a^n / b^n = lim_{n→∞} (a/b)^n = 0

Conclusion:
If 0 <= a < b, then a^n has strictly smaller order of magnitude than b^n
Common Orders
For most algorithm analyses, only a short list of functions is needed:
1 (constant), log n (logarithmic), n (linear), n^2 (quadratic), n^3 (cubic), 2^n (exponential)
They are in strictly increasing order of magnitude
One more important function: n log n (see pp. 309)
The order of some advanced sorting algorithms
n log n has strictly greater order of magnitude than n
n log n has strictly smaller order of magnitude than any power n^r for any r > 1
Growth Rate of Common Functions
The Big-O and Related Notations
If f(n) has strictly smaller order of magnitude than g(n), we write f(n) is o(g(n))
If f(n) has strictly smaller or the same order of magnitude as g(n), we write f(n) is O(g(n))
If f(n) has the same order of magnitude as g(n), we write f(n) is Θ(g(n))
If f(n) has strictly greater or the same order of magnitude as g(n), we write f(n) is Ω(g(n))
These notations are pronounced "little oh", "Big Oh", "Big Theta", and "Big Omega", respectively.
Examples
On a list of length n, sequential search has running time O(n)
On an ordered list of length n, binary search has running time O(log n)
Retrieval from a contiguous list of length n has running time O(1)
Retrieval from a linked list of length n has running time O(n)
Any algorithm that uses comparisons of keys to search a list of length n must make at least lg n comparisons of keys, i.e., Ω(log n)
If f(n) is a polynomial in n of degree r, then f(n) is O(n^r)
If r < s, then n^r is o(n^s)
If a > 1 and b > 1, then log_a(n) is O(log_b(n))
log n is o(n^r) for any r > 0
For any real number a > 1 and any positive integer r, n^r is o(a^n)
If 0 <= a < b, then a^n is o(b^n)
int count_0(int n)
{
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= n; j++) {
            if (i <= j)
                sum++;
        }
    }
    return sum;
}

Per-line bounds: sum = 0: O(1); outer for loop: O(n); inner for loop: O(n^2); if comparison: O(n^2); sum++: O(n^2); return sum: O(1)

The running time is O(n^2)
Algorithm 4
Summary of Running Times

Algorithm    Running Time             Order of Running Time
Algorithm 1  (3/2)n^2 + (7/2)n + 2    n^2
Algorithm 2  5n + 2                   n
Algorithm 3  5                        Constant
Asymptotic Running Times

Algorithm    Running Time             Asymptotic Bound
Algorithm 1  (3/2)n^2 + (7/2)n + 2    O(n^2)
Algorithm 2  5n + 2                   O(n)
Algorithm 3  5                        O(1)
Algorithm 4  -                        O(n^2)
More Examples
1)
int x = 0;
for (int i = 0; i < 100; i++)
x += i;

2)
int x = 0;
for (int i = 0; i < n*n; i++)
x += i;

* Assume that the value of n is the size of the problem
3)
int x = 0;
for (int i = 1; i < n; i *= 2)
x += i;

4)
int x = 0;
for (int i = 1; i < n; i++)
for (int j = 1; j < i; j++)
x += i + j;


5)
int x = 0;
for (int i = 1; i < n; i++)
for (int j = i; j < 100; j++)
x += i + j;

6)
int x = 0;
for (int i = 1; i < n; i++)
for (int j = n; j > i; j /= 3)
x += i + j;

7)
int x = 0;
for (int i = 1; i < n*n; i++)
for (int j = 1; j < i; j++)
x += i + j;

Review: Arithmetic Sequences/Progressions
An arithmetic sequence is a sequence of numbers such that the difference of any two successive members of the sequence is a constant
If the first term of an arithmetic sequence is a_1 and the common difference of successive members is d, then the nth term a_n of the sequence is:

a_n = a_1 + (n - 1)d
Analyzing Recursive Algorithms
Often a recurrence equation is used as the starting
point to analyze a recursive algorithm
In the recurrence equation, T(n) denotes the running time
of the recursive algorithm for an input of size n
We will try to convert the recurrence equation into a
closed form equation to have a better understanding
of the time complexity
Closed Form: No reference to T(n) on the right side of the
equation
Conversions to the closed form solution can be very
challenging
Example: Factorial

int factorial(int n)
/* Pre:  n is an integer no less than 0
   Post: The factorial of n (n!) is returned
   Uses: The function factorial recursively */
{
    if (n == 0)
        return 1;
    else
        return n * factorial(n - 1);
}
Per-statement costs: the comparison n == 0 costs 1; return 1 costs 1; return n * factorial(n - 1) costs T(n - 1) + 3

The time complexity of factorial(n) is:

T(n) = 2               if n = 0
T(n) = T(n - 1) + 4    if n > 0

(4 = 3 + 1: the comparison is included)

T(n) is an arithmetic sequence with the common difference 4 of successive members and T(0) = 2:

T(n) = T(0) + nd = 2 + 4n

The time complexity of factorial is O(n)
Recurrence Equations Examples
Divide and conquer: Recursive merge sorting

template <class Record>
void Sortable_list<Record> :: recursive_merge_sort(int low, int high)
/* Post: The entries of the sortable list between index low and high have been
         rearranged so that their keys are sorted into nondecreasing order.
   Uses: The contiguous List */
{
    if (high > low) {
        recursive_merge_sort(low, (high + low) / 2);
        recursive_merge_sort((high + low) / 2 + 1, high);
        merge(low, high);
    }
}
The time complexity of recursive_merge_sort is:

T(n) = 1                        if n = 1
T(n) = T(n/2) + T(n/2) + cn     if n > 1

To obtain a closed form equation for T(n), we assume n is a power of 2:

T(n) = 2T(n/2) + cn
     = 2(2T(n/2^2) + cn/2) + cn = 2^2 T(n/2^2) + 2cn
     = 2^3 T(n/2^3) + 3cn
     = ... = 2^i T(n/2^i) + icn

When i = log_2 n, we have:

T(n) = 2^(log n) T(n/2^(log n)) + (log n)cn = n T(1) + cn log n = n + cn log n

The time complexity is O(n log n)
Fibonacci numbers

int fibonacci(int n)
/* fibonacci : recursive version */
{
    if (n <= 0) return 0;
    else if (n == 1) return 1;
    else return fibonacci(n - 1) + fibonacci(n - 2);
}
The time complexity of fibonacci is:

T(n) = 2                           if n = 0
T(n) = 3                           if n = 1
T(n) = T(n - 1) + T(n - 2) + 6     if n > 1

Theorem (in Section A.4): If F(n) is defined by a Fibonacci sequence, then F(n) is O(g^n), where

g = (1 + sqrt(5))/2

The time complexity is exponential: O(g^n)