
Algorithm and Complexity

Sheng Zhong
Computer Sci and Engr Dept
SUNY Buffalo

1
Algorithm
• The core of computer science is the algorithm.
• An algorithm is a finite sequence of
precise instructions for performing a
computation or for solving a problem.
• Example:
– Algorithm: go to NSC building; find room 210;
find the 2nd row; sit in the 4th seat from the left;
calculate x=5+3.

2
Example non-algorithm
• An algorithm can’t be vague: go to NSC building;
find room 210; find the 2nd row; sit in the 4th seat;
calculate x=5+3.
– The 4th seat from the left or from the right?
• Carrying out a step of an algorithm can’t require
creativity or non-trivial thinking: go to NSC
building; find room 210; find the 2nd row; sit in
the 4th seat from the left; determine whether
P=NP and give a proof.
– It requires creativity to solve the P vs. NP problem.
3
Example of non-algorithm
• An algorithm can’t be infinite: go to NSC
building; find room 210; find the 2nd row; sit
in the 4th seat from the left; calculate
x=5+3, calculate x=5+4, calculate
x=5+5, ... calculate x=5+n, …
– Note that, if the above sequence of
instructions stops at calculating
x=5+1,000,000,000, it is still an algorithm
because it is finite.
4
Algorithm vs. program
• A computer program is actually an algorithm
written in a specific programming language.
– The advantage is that it can be “understood” and
executed by a computer.
– The disadvantage is that it is hard for human beings
to understand the algorithm.
• So usually algorithms are written in pseudo-code
or technical English, which is easier to
understand.
– Translating an algorithm to a program is called
“implementation” of the algorithm in the
corresponding programming language.

5
How to write an algorithm
• Below is an algorithm for finding the
maximum element in a finite sequence.
procedure max(a1, a2, …, an: integers)   {the input of the algorithm}
max := a1
for i := 2 to n   {an algorithm can have iterations}
    if max < ai then max := ai   {… and branches}
{max is the largest element: the output of the algorithm}
6
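As a concrete illustration, the max procedure above might be transcribed into Python like this (a sketch; the name find_max and the list-based interface are my own choices, not from the slides):

```python
def find_max(a):
    """Return the largest element of a non-empty list; mirrors the pseudocode."""
    largest = a[0]        # max := a1
    for x in a[1:]:       # for i := 2 to n
        if largest < x:   # if max < ai then max := ai
            largest = x
    return largest        # max is the largest element
```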
Example: binary search
• To demonstrate how algorithms are
designed, now we consider a problem of
searching: given a sequence of integers a1,
a2, …, an , where a1< a2 <…< an , how can
you find the location of a specific integer x?
– Clearly you can use linear search, i.e.,
examine these integers one by one, until you
hit x.
– But is there a faster algorithm to find it?
7
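For comparison, the linear search mentioned above could be sketched in Python as follows (the name linear_search and the 1-based return convention are assumptions, chosen to match the searching slides):

```python
def linear_search(x, a):
    """Examine the integers one by one; return the 1-based location of x, or 0."""
    for i, v in enumerate(a, start=1):
        if v == x:
            return i      # hit x at position i
    return 0              # x never appeared
```

In the worst case this examines all n elements, which is what motivates looking for a faster algorithm.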
The idea
• What if we start our search from the middle of
the sequence?
– Let’s say we compare ai with x.
– We’ll be very happy if we are lucky enough to find ai
=x, in which case we are done.
– If we find ai < x, we know that all integers to the
left of ai are less than x, so we only need to look
at those to the right of ai.
– If we find ai > x, we know that all integers to the
right of ai are greater than x, so we only need to
look at those to the left of ai.
8
Example of binary search
• Let’s say we are searching for 13 in
1, 2, 3, 5, 6, 7, 8, 10, 12, 13, 15, 16, 18, 19,
20, 22.
• First, we compare 13 with the number in
the middle, 10.
1, 2, 3, 5, 6, 7, 8, 10, 12, 13, 15, 16, 18,
19, 20, 22.
We find that 10<13, and so we only need to
look at the numbers to the right.
9
Example of binary search
• Now our remaining sequence is
12, 13, 15, 16, 18, 19, 20, 22.
• Again we compare 13 with the number in
the middle, 16.
12, 13, 15, 16, 18, 19, 20, 22.
This time we find that 16>13 and hence we
only need to look at the numbers to the left.

10
Example of binary search
• Now our remaining sequence is
12, 13, 15
• We compare 13 with the integer in the
middle, 13, and find that 13=13. So we are
done! This is exactly the one we need. It is
the 10th in the original sequence.

11
Binary search algorithm
• Formally we can write the algorithm as follows:
procedure binary search(x: integer, a1, a2, …, an: increasing integers)
i := 1   {i is the left endpoint of the search interval}
j := n   {j is the right endpoint of the search interval}
location := 0
while i ≤ j
begin
    m := floor((i+j)/2)
    if x > am then i := m+1
    else if x < am then j := m-1
    else begin location := m; break end
end
{location is the subscript of the term equal to x, or 0 if x is not in the sequence}

12
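A Python transcription of binary search might look like this (a sketch following the slides' interval-halving idea; the 0 return value for "not found" follows the slides, while the list interface is my choice):

```python
def binary_search(x, a):
    """Return the 1-based subscript of x in the increasing list a, or 0 if absent."""
    i, j = 1, len(a)              # left and right endpoints of the search interval
    while i <= j:
        m = (i + j) // 2          # m := floor((i + j) / 2)
        if x > a[m - 1]:          # Python lists are 0-indexed; the slides are 1-indexed
            i = m + 1             # discard everything to the left of am
        elif x < a[m - 1]:
            j = m - 1             # discard everything to the right of am
        else:
            return m              # found x at position m
    return 0                      # interval is empty: x is not in the sequence
```

Running it on the example sequence from the slides locates 13 at position 10 after comparing against 10, 16, and 13, exactly as in the walkthrough.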
Other algorithm examples
• The textbook has a few more examples of
algorithms like bubble sort, greedy change
making, etc.
– You should read them and understand how
they work.

13
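As one example, bubble sort (covered in the textbook) can be sketched in Python roughly as follows; this is a generic version for illustration, not the textbook's exact pseudocode:

```python
def bubble_sort(a):
    """Sort by repeatedly swapping adjacent out-of-order pairs.

    After the k-th pass, the k largest elements are in their final places.
    """
    a = list(a)                       # work on a copy
    n = len(a)
    for k in range(n - 1):            # n - 1 passes suffice
        for i in range(n - 1 - k):    # the tail is already sorted
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```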
Can algorithms solve all problems?
• Computer algorithms are extremely useful. They
have applications in almost all aspects of
today’s society.
– But can they solve all the problems for us?
Are they omnipotent?
• The answer is NO.
– It suffices to observe that the set of algorithms
is countable, but the set of problems to solve
is uncountable. We simply don’t have enough
algorithms to solve all the problems.
14
Example unsolvable problem
• A famous problem that can’t be solved by algorithms is the
Turing halting problem.
– The problem takes the description of a Turing machine (for now, you
can think of it as a computer program) as input.
– It asks whether this Turing machine will halt. (That is, whether
the computer program will terminate after running for some time,
or run forever.)
– Unfortunately, there is no algorithm that can solve this problem
for us.
• WARNING: The proof in the textbook is NOT rigorous.
You should skip it.
– You should be able to see a rigorous proof in your theory class
(CSE 396).

15
Evaluation of algorithm
• Suppose you are given an algorithm for solving a
certain problem. How can you evaluate it?
– You can evaluate its running time (i.e., time
complexity).
– You can evaluate the space it requires (i.e., space
complexity).
– You can evaluate the amount of communication it
needs (i.e., communication complexity), the number of
random bits it needs, …
• For each of the above metrics, we can evaluate
the algorithm in the average case and in the
worst case.
– In computer science, people are normally more
interested in the worst case.
16
How to measure complexity
• How do we measure complexities? For example,
consider the worst-case time complexity of an
algorithm.
– Ideally, we should give the number of seconds the
algorithm requires in the worst case.
– Unfortunately, this number can be significantly
affected by the kind of computer you use. A
supercomputer can be millions of times faster than an
embedded computer.
– So, we count the number of basic operations (e.g.,
addition, multiplication, comparison, etc.) instead.

17
Example complexity
• What is the worst case time complexity of the
algorithm below?
procedure factorial (x:integer)
r:=1;
for i:=1 to x
r:=r * i
{r is the output}
• We can easily see that it requires x multiplications
(and also some other operations). Note that this
count varies with the value of the input.
– But we don’t want the complexity of an algorithm to
be a variable.
18
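One way to see that the operation count varies with the input is to instrument the factorial loop; the counter and the name factorial_with_count below are my additions for illustration:

```python
def factorial_with_count(x):
    """The factorial loop from the slide, instrumented to count multiplications."""
    r, mults = 1, 0
    for i in range(1, x + 1):   # for i := 1 to x
        r = r * i               # r := r * i
        mults += 1
    return r, mults             # x multiplications in total
```

factorial_with_count(5) performs 5 multiplications while factorial_with_count(100) performs 100, so "number of multiplications" is not a single fixed number.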
Complexity as a function
• In general, the running time (and space,
amount of communication, …) of an
algorithm varies greatly with the input.
– Specifically, with larger inputs, we may need
more running time, more space…
– So, instead of using a variable to describe the
complexity, we use a function whose
argument is the length of input.

19
Example complexity
• Let’s look at the worst case time complexity of
the algorithm below again.
procedure factorial (x:integer)
r:=1;
for i:=1 to x
r:=r * i
{r is the output}
• Suppose the input has n bits. Then we need
about 2^n multiplications in the worst case.
– Here 2^n is a function of n that we use to describe the
complexity.
20
Growth of function
• Suppose we have three algorithms, and their worst-case
time complexities are n+10, n^2, and n^2-1, respectively.
How do they compare with each other?
– When n is small (say, n=2), n+10 can be larger than n^2 and
n^2-1, but we don’t care, because for small inputs we always have
sufficient time to run the algorithm.
– When n is large, n+10 is much smaller than n^2 and n^2-1, and
there is no significant difference between n^2 and n^2-1. This is
what we really care about.
• In general, when we consider complexities, we often look
at how fast a function grows, rather than at the
function itself.
– This is called the asymptotic complexity of the function.

21
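The comparison above is easy to check numerically; this short sketch (the name growths is mine) evaluates the three example complexities at a small and a large input size:

```python
def growths(n):
    """Evaluate the three example worst-case complexities at input size n."""
    return n + 10, n**2, n**2 - 1

# At n = 2 the linear function is actually the largest;
# at n = 1000 it is far smaller, and n^2 vs n^2 - 1 is negligible.
small, large = growths(2), growths(1000)
```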
Big O notation
• To specify the asymptotic complexity, we often use the
big O notation:
Let f and g be functions from R+ to R. We say
f(x)=O(g(x)) if there are constants C and k such that,
for all x>k,
|f(x)| ≤ C|g(x)|.
• In the above inequality, the absolute value signs can
often be dropped, because we usually restrict our
attention to positive-valued functions.
– Even if a function is not positive everywhere, it is typically
positive except for a few small values of x.
• Here we use x as the name of argument variable. Note
that it is just n (the length of input) when we discuss
complexities.
22
Example big O
• Suppose f(x)=2x^3+8x. Show that f(x)=O(x^3).
Proof: For each x>2, we have that
|f(x)| = 2x^3+8x
= 4(½x^3+2x)
< 4x^3
= 4|x^3|.
So f(x)=O(x^3).

23
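The proof's constants can be sanity-checked numerically; the helper below (my own construction) tests whether given witnesses C and k satisfy the big-O inequality for f(x) = 2x^3 + 8x against g(x) = x^3 over a range of integers:

```python
def f(x):
    """f(x) = 2x^3 + 8x, from the example."""
    return 2 * x**3 + 8 * x

def witnesses_hold(C, k, upto=1000):
    """Check |f(x)| <= C * |x^3| for every integer x with k < x <= upto."""
    return all(abs(f(x)) <= C * abs(x**3) for x in range(k + 1, upto + 1))
```

witnesses_hold(4, 2) holds, matching the proof, while witnesses_hold(1, 2) fails already at x = 3 (78 > 27).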
Equality symbol in big O
• Note that f(x)=O(g(x)) is NOT a real
equality.
– For example, you can’t rewrite it as
O(g(x))=f(x).
– It does NOT mean f(x) is equal to a function
called O(g(x)).
– It is written this way just for easiness.
– You must be careful with this notation.

24
Big O of polynomial function
Theorem: Suppose f(x) is a degree-n polynomial.
Then f(x)=O(x^n).
Proof: Since f(x) is a degree-n polynomial, we can write
it as f(x) = an x^n + an-1 x^(n-1) + … + a1 x + a0.
So, for x>1 we have
|f(x)| = |an x^n + an-1 x^(n-1) + … + a1 x + a0|
≤ |an| x^n + |an-1| x^(n-1) + … + |a1| x + |a0|
= x^n (|an| + |an-1|/x + … + |a1|/x^(n-1) + |a0|/x^n)
≤ x^n (|an| + |an-1| + … + |a1| + |a0|).
Hence, f(x)=O(x^n).
25
Example polynomial big O
• Suppose f(n)=1+2+…+n. Show that
f(n)=O(n^2).
Proof. First, we note that f(n) is the sum of
an arithmetic sequence. Hence,
f(n)=n(n+1)/2.
Next, applying the theorem we just studied,
we get that f(n)=O(n^2).

26
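Both the closed form and the O(n^2) bound can be checked directly; a small sketch (the function name is mine, and C = 1, k = 0 are one choice of witnesses, since n(n+1)/2 ≤ n^2 whenever n ≥ 1):

```python
def triangular(n):
    """1 + 2 + … + n, summed term by term."""
    return sum(range(1, n + 1))

# Closed form n(n+1)/2, and the O(n^2) bound with witnesses C = 1, k = 0:
checks = all(
    triangular(n) == n * (n + 1) // 2 and triangular(n) <= n**2
    for n in range(1, 200)
)
```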
Big O of logarithm
• It is easy to show the following facts:
For k ≥ 0, x^k log x = O(x^(k+1)).
For k ≥ 0, x^k = O(x^k log x).
• In particular, we have
log x = O(x).
1 = O(log x).

27
Big O of sum
Theorem: Suppose f1(x)=O(g1(x)) and f2(x)=O(g2(x)).
Then
f1(x)+f2(x)=O(max(|g1(x)|, |g2(x)|)).
Proof: By the definition of big O, we know there exist constants
k1, C1, k2, C2 such that
for all x>k1, |f1(x)| ≤ C1|g1(x)|;
for all x>k2, |f2(x)| ≤ C2|g2(x)|.
When x > max(k1, k2),
|f1(x)+f2(x)| ≤ |f1(x)|+|f2(x)| ≤ C1|g1(x)| + C2|g2(x)|
≤ C1 max(|g1(x)|,|g2(x)|) + C2 max(|g1(x)|,|g2(x)|)
= (C1+C2) max(|g1(x)|,|g2(x)|),
which means f1(x)+f2(x)=O(max(|g1(x)|,|g2(x)|)).

28
Example big O of sum
• What is the big O of f(x)=x^2+x^2 log x?
– Using the rule we just learned, f(x)=O(max(x^2,
x^2 log x))=O(x^2 log x).

• What is the big O of f(x)=x^3+x^2 log x?
– Using the rule we just learned, f(x)=O(max(x^3,
x^2 log x))=O(x^3).

29
Big O of product
• We can also easily prove the following
theorem:
Suppose f1(x)=O(g1(x)) and f2(x)=O(g2(x)). Then
f1(x)f2(x)=O(g1(x)g2(x)).
• Example: f1(x)=x^2+x+1, f2(x)=x^3+2x+1.
So we have f1(x)=O(x^2) and f2(x)=O(x^3).
Then,
f1(x)f2(x)=O(x^5).
30
Example big O
• Use big O to estimate
f(x)=(x log x + x^2)(x + log x).
First, using the rule for sums, we have
x log x + x^2 = O(max(x log x, x^2)) = O(x^2).
Second, using the rule for sums, we have
x + log x = O(max(x, log x)) = O(x).
Finally, using the rule for products, we have
f(x)=O(x^3).
31
Big Omega notation
• Intuitively, f(x)=O(g(x)) means f(x) grows
no faster than g(x) asymptotically.
– But what if we want to say f(x) grows no
slower than g(x) asymptotically?
– We use the so-called big Omega notation.
• We say f(x)=Ω(g(x)) if g(x)=O(f(x)).
– Example: Since log x = O(x), we get
that x = Ω(log x).
32
Big Theta notation
• Now we have ways to say f(x) grows no faster or
no slower than g(x) asymptotically.
– What if we want to say f(x) grows exactly as fast as
g(x) asymptotically?
– We use the so-called big Theta notation.
• We say f(x)=Θ(g(x)) if f(x)=O(g(x)) and
g(x)=O(f(x)).
– Example: Since log x = O(log(x^10)) and
log(x^10) = O(log x), we get that log x = Θ(log(x^10)).
33
Symmetry of big Theta
• By the definition of big Theta, clearly we
have f(x)=Θ(g(x)) if and only if g(x)=Θ(f(x)).
• We also have that f(x)=Θ(g(x)) if and
only if f(x)=Ω(g(x)) and g(x)=Ω(f(x)).

34
Frequently used results
• In the textbook, you can find a number of
frequently used results on big O, big
Omega, and big Theta notations.
– You should memorize these results, so that
you can use them to solve problems.

35
Homework 6
• Rosen 3.2: Questions 10, 20, 30.
• Rosen 3.3: Questions 10, 26.

36
