Big-O Cheat Sheet
Know Thy Complexities!

Hi there! This webpage covers the space and time Big-O complexities of common algorithms used in Computer Science. When preparing for technical interviews in the past, I found myself spending hours crawling the internet putting together the best, average, and worst case complexities for search and sorting algorithms so that I wouldn't be stumped when asked about them. Over the last few years, I've interviewed at several Silicon Valley startups, and also some bigger companies, like Yahoo, eBay, LinkedIn, and Google, and each time that I prepared for an interview, I thought to myself "Why oh why hasn't someone created a nice Big-O cheat sheet?". So, to save all of you fine folks a ton of time, I went ahead and created one. Enjoy!

Legend: Good | Fair | Poor

Searching

Algorithm                                  | Data Structure                   | Time (Average)       | Time (Worst)         | Space (Worst)
-------------------------------------------|----------------------------------|----------------------|----------------------|--------------
Depth First Search (DFS)                   | Graph of |V| vertices, |E| edges | -                    | O(|E|+|V|)           | O(|V|)
Breadth First Search (BFS)                 | Graph of |V| vertices, |E| edges | -                    | O(|E|+|V|)           | O(|V|)
Binary search                              | Sorted array of n elements       | O(log(n))            | O(log(n))            | O(1)
Linear (brute-force) search                | Array                            | O(n)                 | O(n)                 | O(1)
Shortest path: Dijkstra, min-heap as queue | Graph of |V| vertices, |E| edges | O((|V|+|E|) log |V|) | O((|V|+|E|) log |V|) | O(|V|)
Shortest path: Dijkstra, array as queue    | Graph of |V| vertices, |E| edges | O(|V|^2)             | O(|V|^2)             | O(|V|)
Shortest path: Bellman-Ford                | Graph of |V| vertices, |E| edges | O(|V| |E|)           | O(|V| |E|)           | O(|V|)
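To make the O(log(n)) row concrete, here is a minimal binary-search sketch (Python, added for illustration; the sheet itself contains no code): each iteration halves the remaining window, so an n-element sorted array is exhausted after roughly log2(n) probes, using O(1) extra space.

```python
def binary_search(sorted_arr, target):
    """Return an index of target in sorted_arr, or -1 if absent.

    Each loop iteration halves the search window, so the loop
    runs O(log(n)) times and uses O(1) extra space.
    """
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        elif sorted_arr[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # -> 4
```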

Sorting

Algorithm      | Data Structure | Time (Best) | Time (Average) | Time (Worst) | Worst-Case Auxiliary Space
---------------|----------------|-------------|----------------|--------------|---------------------------
Quicksort      | Array          | O(n log(n)) | O(n log(n))    | O(n^2)       | O(n)
Mergesort      | Array          | O(n log(n)) | O(n log(n))    | O(n log(n))  | O(n)
Heapsort       | Array          | O(n log(n)) | O(n log(n))    | O(n log(n))  | O(1)
Bubble Sort    | Array          | O(n)        | O(n^2)         | O(n^2)       | O(1)
Insertion Sort | Array          | O(n)        | O(n^2)         | O(n^2)       | O(1)
Select Sort    | Array          | O(n^2)      | O(n^2)         | O(n^2)       | O(1)
Bucket Sort    | Array          | O(n+k)      | O(n+k)         | O(n^2)       | O(nk)
Radix Sort     | Array          | O(nk)       | O(nk)          | O(nk)        | O(n+k)
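As an illustration of the O(n log(n)) time / O(n) auxiliary space row, here is a minimal mergesort sketch (Python, added for this page; not part of the original tables): the array is halved O(log(n)) times, and each level performs O(n) merging work.

```python
def mergesort(arr):
    """Sort a list in O(n log(n)) time using O(n) auxiliary space."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = mergesort(arr[:mid])    # recursion depth: O(log(n))
    right = mergesort(arr[mid:])
    # Merge two sorted halves in O(n) time per level.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(mergesort([5, 2, 4, 6, 1, 3]))  # -> [1, 2, 3, 4, 5, 6]
```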

Data Structures

Data Structure     | Indexing (Avg) | Search (Avg) | Insertion (Avg) | Deletion (Avg) | Indexing (Worst) | Search (Worst) | Insertion (Worst) | Deletion (Worst) | Space (Worst)
-------------------|----------------|--------------|-----------------|----------------|------------------|----------------|-------------------|------------------|--------------
Basic Array        | O(1)           | O(n)         | -               | -              | O(1)             | O(n)           | -                 | -                | O(n)
Dynamic Array      | O(1)           | O(n)         | O(n)            | O(n)           | O(1)             | O(n)           | O(n)              | O(n)             | O(n)
Singly-Linked List | O(n)           | O(n)         | O(1)            | O(1)           | O(n)             | O(n)           | O(1)              | O(1)             | O(n)
Doubly-Linked List | O(n)           | O(n)         | O(1)            | O(1)           | O(n)             | O(n)           | O(1)              | O(1)             | O(n)
Skip List          | O(log(n))      | O(log(n))    | O(log(n))       | O(log(n))      | O(n)             | O(n)           | O(n)              | O(n)             | O(n log(n))
Hash Table         | -              | O(1)         | O(1)            | O(1)           | -                | O(n)           | O(n)              | O(n)             | O(n)
Binary Search Tree | O(log(n))      | O(log(n))    | O(log(n))       | O(log(n))      | O(n)             | O(n)           | O(n)              | O(n)             | O(n)
Cartesian Tree     | -              | O(log(n))    | O(log(n))       | O(log(n))      | -                | O(n)           | O(n)              | O(n)             | O(n)
B-Tree             | O(log(n))      | O(log(n))    | O(log(n))       | O(log(n))      | O(log(n))        | O(log(n))      | O(log(n))         | O(log(n))        | O(n)
Red-Black Tree     | O(log(n))      | O(log(n))    | O(log(n))       | O(log(n))      | O(log(n))        | O(log(n))      | O(log(n))         | O(log(n))        | O(n)
Splay Tree         | -              | O(log(n))    | O(log(n))       | O(log(n))      | -                | O(log(n))      | O(log(n))         | O(log(n))        | O(n)
AVL Tree           | O(log(n))      | O(log(n))    | O(log(n))       | O(log(n))      | O(log(n))        | O(log(n))      | O(log(n))         | O(log(n))        | O(n)
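The tree rows above hinge on balance: a plain binary search tree averages O(log(n)) per operation but degrades to O(n) when insertions arrive in sorted order and the tree becomes a chain. A minimal unbalanced-BST sketch (Python, illustrative only, not from the original page) makes the contrast visible.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Insert key; O(height) time: O(log(n)) average, O(n) worst."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    """Walk one branch per level; cost equals the tree height."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in [8, 3, 10, 1, 6]:      # mixed order -> roughly balanced tree
    root = insert(root, k)
print(search(root, 6))           # True, found in O(log(n)) steps

chain = None
for k in range(5):               # sorted order -> degenerate chain,
    chain = insert(chain, k)     # so every operation degrades to O(n)
```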

Heaps

Heaps                  | Heapify | Find Max  | Extract Max | Increase Key | Insert    | Delete     | Merge
-----------------------|---------|-----------|-------------|--------------|-----------|------------|----------
Linked List (sorted)   | -       | O(1)      | O(1)        | O(n)         | O(n)      | O(1)       | O(m+n)
Linked List (unsorted) | -       | O(n)      | O(n)        | O(1)         | O(1)      | O(1)       | O(1)
Binary Heap            | O(n)    | O(1)      | O(log(n))   | O(log(n))    | O(log(n)) | O(log(n))  | O(m+n)
Binomial Heap          | -       | O(log(n)) | O(log(n))   | O(log(n))    | O(log(n)) | O(log(n))  | O(log(n))
Fibonacci Heap         | -       | O(1)      | O(log(n))*  | O(1)*        | O(1)      | O(log(n))* | O(1)

(* amortized)
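For the Binary Heap row, Python's standard heapq module (a min-heap; the max-heap costs in the table are symmetric) demonstrates the O(n) heapify and O(log(n)) insert/extract bounds. A small usage sketch, added here for illustration:

```python
import heapq

data = [9, 4, 7, 1, 3]
heapq.heapify(data)             # build a valid heap in O(n), in place

heapq.heappush(data, 2)         # sift up: O(log(n))
smallest = heapq.heappop(data)  # extract min: O(log(n))
print(smallest, data[0])        # data[0] is always the min: find-min in O(1)
```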
al·go·rithm: A process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.

Graphs

Node / Edge Management | Storage    | Add Vertex | Add Edge   | Remove Vertex | Remove Edge | Query
-----------------------|------------|------------|------------|---------------|-------------|-------
Adjacency list         | O(|V|+|E|) | O(1)       | O(1)       | O(|V|+|E|)    | O(|E|)      | O(|V|)
Incidence list         | O(|V|+|E|) | O(1)       | O(1)       | O(|E|)        | O(|E|)      | O(|E|)
Adjacency matrix       | O(|V|^2)   | O(|V|^2)   | O(1)       | O(|V|^2)      | O(1)        | O(1)
Incidence matrix       | O(|V|⋅|E|) | O(|V|⋅|E|) | O(|V|⋅|E|) | O(|V|⋅|E|)    | O(|V|⋅|E|)  | O(|E|)
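A sketch of the trade-off in the first and third rows (illustrative Python, not part of the original tables): an adjacency list stores only existing edges, so adding an edge is O(1) but answering "is (u, v) an edge?" scans u's neighbor list, while an adjacency matrix answers that query in O(1) at the cost of O(|V|^2) storage.

```python
from collections import defaultdict

# Adjacency list: O(|V|+|E|) storage, O(1) add-edge, O(|V|) edge query.
adj = defaultdict(list)

def add_edge(u, v):
    adj[u].append(v)        # O(1) append
    adj[v].append(u)        # undirected graph: store both directions

add_edge("a", "b")
add_edge("b", "c")
print("c" in adj["b"])      # scans b's neighbor list: O(|V|) worst case

# Adjacency matrix: O(|V|^2) storage, O(1) add-edge, O(1) edge query.
n = 3                       # vertices 0..n-1
matrix = [[False] * n for _ in range(n)]
matrix[0][1] = matrix[1][0] = True
print(matrix[0][1])         # O(1) query
```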


Notation for asymptotic growth

letter          | bound                      | growth
----------------|----------------------------|-----------------------
Θ (theta)       | upper and lower, tight [1] | equal [2]
O (big-oh)      | upper, tightness unknown   | less than or equal [3]
o (small-oh)    | upper, not tight           | less than
Ω (big omega)   | lower, tightness unknown   | greater than or equal
ω (small omega) | lower, not tight           | greater than

[1] Big-O is the upper bound, while Omega is the lower bound. Theta requires both Big-O and Omega, which is why it's referred to as a tight bound (it must be both the upper and lower bound). For example, an algorithm taking Omega(n log n) takes at least n log n time but has no stated upper limit. An algorithm taking Theta(n log n) is far preferable, since it takes at least n log n (Omega(n log n)) and no more than n log n (O(n log n)).

[2] f(n) = Θ(g(n)) means f (the running time of the algorithm) grows exactly like g when n (the input size) gets large. In other words, the growth rate of f(n) is asymptotically proportional to g(n).

[3] Same idea, but here the growth rate is no faster than g(n). Big-O is the most useful notation because it represents worst-case behavior.
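For reference, these are the standard formal definitions that the informal readings above abbreviate (added here for precision; the original page does not print them):

```latex
\[ f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c\,g(n) \ \text{for all } n \ge n_0 \]
\[ f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c\,g(n) \ \text{for all } n \ge n_0 \]
\[ f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and} \ f(n) = \Omega(g(n)) \]
```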

In short, if an algorithm is __(n), then its performance is __ n:

o(n)   < n
O(n)   ≤ n
Θ(n)   = n
Ω(n)   ≥ n
ω(n)   > n

Big-O Complexity Chart

This interactive chart, created by our friends over at MeteorCharts, shows the number of operations (y axis) required to obtain a result as the number of elements (x axis) increases. O(n!) is the worst complexity, requiring 720 operations for just 6 elements, while O(1) is the best, requiring only a constant number of operations for any number of elements.
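The interactive chart does not survive in print, but its message is easy to reproduce with a small Python sketch (added here; note that 6! = 720, matching the figure quoted above):

```python
import math

# Tabulate how many "operations" each growth rate implies
# as the number of elements n grows.
complexities = [
    ("O(1)",       lambda n: 1),
    ("O(log n)",   lambda n: max(1, round(math.log2(n)))),
    ("O(n)",       lambda n: n),
    ("O(n log n)", lambda n: round(n * math.log2(n)) if n > 1 else 1),
    ("O(n^2)",     lambda n: n ** 2),
    ("O(2^n)",     lambda n: 2 ** n),
    ("O(n!)",      lambda n: math.factorial(n)),
]

for name, f in complexities:
    print(f"{name:<10}", [f(n) for n in (1, 2, 4, 6)])
# O(n!) prints 720 at n = 6, matching the chart's description.
```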

Contributors

Edit these tables! Contributors: 1. Eric Rowell, 2. Quentin Pleplé, …, 14. Thomas Dybdahl Ahle.

185 Comments

Michael Mitchell • a year ago
This is great. Maybe you could include some resources (links to Khan Academy, MOOCs, etc.) that would explain each of these concepts for people trying to learn them.

Amanda Harlin → Michael Mitchell • a year ago
Yes! Please & thank you

Cam Tyler → Michael Mitchell • 11 months ago
This explanation in 'plain English' helps: http://stackoverflow.com/quest

Arjan Nieuwenhuizen → Michael Mitchell • 10 months ago
Here are the links that I know of:
http://ocw.mit.edu/courses/ele (probably as good as or maybe better than #2, but I have not had a chance to look at it)
p.s. https://www.coursera.org/cours (this course has just begun on Coursera, dated 1 July 2013, and looks very good)
Sincerely, Arjan

fireheron → Arjan Nieuwenhuizen • 7 months ago
Thank you Arjan. Especially the coursera.org one ;-)

Adam Heinermann • 11 months ago
Is there a printer-friendly version?
ericdrowell (Mod) → Adam Heinermann • 4 months ago
not yet, but that's a great idea!

Gokce Toykuyu • a year ago
Could we add some tree algorithms and complexities? Thanks. I really like the Red-Black trees ;)
ericdrowell (Mod) → Gokce Toykuyu • a year ago
Excellent idea. I'll add a section that compares insertion, deletion, and search complexities for specific data structures.

Jon Renner • a year ago
This is god's work.

 
Blake Jennings • 11 months ago
i'm literally crying

 
Darren Le Redgatr • a year ago
I came here from an idle twitter click. I have no idea what it's talking about, or any of the comments. But I love the fact there are people out there this clever. Makes me think that one day humanity will come good. Cheers.

Valentin Stanciu • a year ago
1. Deletion/insertion in a singly linked list is implementation-dependent. For the question "Here's a pointer to an element; how much does it take to delete it?", singly-linked lists take O(n), since you have to search for the element that points to the element being deleted. Doubly-linked lists solve this problem (see the sketch below).
2. Hashes come in a million varieties. However, with a good distribution function they are O(log n) worst case. Using a double-hashing algorithm, you end up with a worst case of O(log log n).
3. For trees, the table should probably also contain heaps and the complexities for the operation "Get Minimum".
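To illustrate point 1 (an editorial sketch in Python, not part of the original comment): with a doubly-linked node you can unlink in O(1) given the node itself, whereas a singly-linked list must first find the predecessor in O(n).

```python
class DNode:
    """Doubly-linked list node: knows both neighbors."""
    def __init__(self, value):
        self.value, self.prev, self.next = value, None, None

def delete(node):
    """Unlink node in O(1): no search needed, just re-wire neighbors."""
    if node.prev is not None:
        node.prev.next = node.next
    if node.next is not None:
        node.next.prev = node.prev
    node.prev = node.next = None

# Build a <-> b <-> c, then delete b given only a pointer to it.
a, b, c = DNode(1), DNode(2), DNode(3)
a.next, b.prev = b, a
b.next, c.prev = c, b
delete(b)
print(a.next.value)  # -> 3; a singly-linked list would need an O(n) scan first
```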

qwertykeyboard • a year ago
It would be very helpful to have export to PDF. Thx

Gene → qwertykeyboard • 6 months ago
You could convert the document yourself using Pandoc: http://johnmacfarlane.net/pand It might take you a long time to get it working, but Pandoc is an amazing one-stop shop for file conversion, and cross-platform compatible. If I understand big-oh notation correctly, I might say "I estimate your learning rate for learning Pandoc will be O(1)."

Ashutosh → Gene • 2 months ago
proved O(n), n = number of format conversions to learn :)

Juan Carlos Alvarez → Gene • 2 months ago
big oh. haha funny.

tempire • a year ago
This chart seems misleading. Big O is worst case, not average case; ~ is average case. O(...) shouldn't be used in the average case columns.

guest → tempire • 11 months ago
I think big O is just an upper bound. It could be used for all (best, worst and average) cases. Am I wrong?
Luis → guest • 11 months ago
You are right.

Oleksandr → Luis • 6 months ago
@Luis That is WRONG. @tempire is correct. Big O cannot be used for lower, average, and upper bounds. Big O (Omicron) is the worst-case scenario: it is the upper bound for the algorithm. For instance, in a linear search algorithm the worst case is when the list is completely out of order, i.e. the list is sorted but backwards. Omega is the lower bound. This is almost pointless to have; you would rather have Big O than Omega, because it is like saying "it will take more than five dollars to get to N.Y." vs. "it will always take, at most, 135 dollars to get to New York." The first bit of information, from Omega, is essentially useless; the latter, however, gives you the constraint. Theta is the upper and lower bound together. This is the most beneficial piece of information to have about an algorithm, but unfortunately it is usually very hard to find. You can usually find the average for an algorithm's efficiency by testing it in average and worst cases together; simply put, this is a computational exercise to extract empirical data. There is another problem I do not like: the color scheme is sometimes wrong. O(n) is better than O(log(n))? In what way? 1024 vs. 10 increments that a sort algorithm has to perform, for instance? All in all this is good information, but in its current state, to the novice, it honestly needs to be taken with a grain of salt and fact-checked against a good algorithm book. This is just MHO, so if I'm off base or incorrect then feel free to flame me :)

Luis → Oleksandr • 6 months ago
@Oleksandr You are confused. Your example about the dollars states specific amounts (e.g. "at most 135 dollars"), but big O and related concepts are used to bound the order (linear, exponential, etc.) of a function that describes how an algorithm grows (in space, time, etc.) with problem size. To be more appropriate, your example should be modified to say something like "it takes at most $2 per mile" (linear). With this in mind, you can understand how big O can be used both for, say, the best and the worst case. Take your linear search. As the size of the problem grows (the array to be searched grows in size), the best case still has an upper time bound of O(1) (it takes constant time to find an element in index 0, or another fixed position), while the worst case (the object is in the last index where we look) has an upper time bound of O(n) (it takes a number of steps of order equal to the problem size, n, until we find the object in the last index where we look).

Omega is useless unless it is a tight bound, i.e., it represents real minimal cases that are interesting (when you have options like <= or >= in the definition of a bound, you should at least get close to the = case; otherwise you might as well use the strictly < or > cases, and even there you should try to find bounds that are reasonably close to the = case). For example, strictly speaking, quicksort is Omega(1), but Omega(n log n) is more informative because it tells you its real best case.

(fixed: wrong autocomplete of who I replied to)
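An editorial aside (Python sketch, not part of the thread): Luis's point is easy to demonstrate by counting steps in a linear search, whose best case is bounded by O(1) and whose worst case is bounded by O(n).

```python
def linear_search_steps(arr, target):
    """Return (found, steps): how many elements were examined."""
    for steps, value in enumerate(arr, start=1):
        if value == target:
            return True, steps
    return False, len(arr)

arr = list(range(1000))
print(linear_search_steps(arr, 0))    # (True, 1): best case, O(1)
print(linear_search_steps(arr, 999))  # (True, 1000): worst case, O(n)
```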

Philip Machanick → Oleksandr • 3 months ago
In any case, you do not normally use Omega, Theta, etc. for differentiating average, best and worst case. These are bounds on any of these cases. For quicksort, the worst-case analysis is n^2, and this is both the upper and lower bound on the worst case. You use Omega, Theta, etc. when the analysis for a particular case is not clear and you have to say it is no better than, or no worse than, a particular analysis.

Bob Foster → Oleksandr • 3 months ago
I believe you are correct. O is worst case.

Oleksandr1 → Luis • 6 months ago
You make a very poor assumption: that because a specific value is given, it must be a linear function. It is in fact any polynomial function of my choice, given its parameters and any number of Lagrange constants, which will produce a value of 135, or any such number I specify to be used in the example. The point is that Big O is the upper bound of a function; in fact there are an infinite number of Big O's for any elementary function. Big O cannot be used for the best-case scenario; this is a complete misunderstanding of Omega vs. Omicron. You should read up on this, because it is very important. As for the example, $135 was given as an upper bound and $5 was the Omega value. I'm not sure why you don't understand a very clear analogy, but for you I'll change the situation and values: given an unknown function, it will run more than five iterations (Omega), BUT it will never run more than 135 iterations, 135 being the Omicron value. On the linear search algorithm, forgive me, I meant to say linear sort algorithm, which has its worst-case scenario when a list is fed to said algorithm in order, but backwards. I agree with what you said about the linear search algorithm.

Yavuz Yetim → Oleksandr1 • 6 months ago
@Oleksandr @Luis IMHO, there are three different statements in this argument that lead to the eventual misunderstanding. I agree with Luis that the table is correct and not useless, but I also agree with Oleksandr that it's not complete (though I disagree that it is incomplete because of the mismatch between best/average case and big-O; see Statement iii and Example (a) at the end).

The main confusion is between the terms "case" and "bound". These are orthogonal terms and do not have any relation with each other. For example, you can have a lower bound for the average case, or an upper bound for the best case (in total, 9 different, correct combinations, each useful for a different use case, and none of them has useless/meaningless information).

Here are the statements in this argument:

Statement i) "The table is wrong in using Big-O notation for all columns." This statement is false, because the table is correct. Big-O notation does not have anything to do with the worst case, average case or best case: it is only a representation for a function. Let's say the best-case run time of an algorithm for a given input of size n is exactly (3*n + 1). One correct representation for this function is O(n). Therefore, writing O(n) for a best-case entry is correct.

Statement ii) "The best-case and average-case columns are correct in definition but useless/meaningless." This statement is also false. While learning the "average case" (3*n + 1) as O(n) …

ericdrowell (Mod) → tempire • a year ago
I'll try to clarify that. Thanks!

Guest • a year ago
Finals are already over. This should have been shared a week ago! Would have saved me like 45 minutes of using Wikipedia.

Stéphane Duguay • a year ago
Hi, I'd like to use a French version of this page in class. Should I translate it on another website, or can you support localisation while I do the data entry for French? I'm interested!

Marten Czech → Stéphane Duguay • a year ago
learn English!
Marcus → Marten Czech • 7 months ago
Maybe he means he wants to deliver it to French students. He is offering to do the data entry from French, but clearly speaks English (from his comment). Don't be ignorant; there is no reason that everything should be in English.

Jon Renner → Marten Czech • 9 months ago
IT world ticks in English; the sooner the French realize that, the faster we can go together.

Guest • 9 months ago
Anyway I can get a PDF version without taking screenshots myself?

sigmaalgebra • a year ago
You omitted an in-place, guaranteed O(n log(n)) array sort, e.g., heap sort. You omitted radix sort, which can be faster than any of the algorithms you mentioned. Might mention SAT and related problems in NP-complete, where the best known algorithm for a problem of size n is O(2^n). Might include an actual, precise definition of O().

Antoine Grondin • a year ago
I think DFS and BFS, under Search, would be more appropriately listed as Graph instead of Tree.

ericdrowell (Mod) → Antoine Grondin • a year ago
Fixed! Thanks

Quentin Pleplé → Antoine Grondin • a year ago
Agreed

Ankush Gupta • 11 months ago
Awesome resource! You should add Dijkstra using a Fibonacci Heap!

Anonimancio Cobardoso • a year ago
You could include a chart with logarithmic scale. Looks nicer IMHO.

Gábor Nádai • a year ago
Nice.

maxw3st • a year ago
This gives me some excellent homework to do of a variety I'm not getting in classes. Thank you.

AmitK • a year ago
It's pretty handy!

IvanKuckir • a year ago
Do you really find this useful? When talking about complexity, you must talk about some specific algorithm. But when you know the algorithm, you already know the complexity, am I wrong? Does anybody just learn the pairs algorithm_name : complexity, without any idea how the algorithm works? OMG

ericdrowell (Mod) → IvanKuckir • a year ago
have you never had a technical interview before?

IvanKuckir → ericdrowell • a year ago

No, I am still a student. And I think that if an employer wants you to know just algorithm complexity, but not the whole algorithm, there is something wrong with that company.

Guest → IvanKuckir • a year ago
That's too strong. There are simply too many algorithms. Also, just because certain companies ask algorithms, this fact does not imply other companies have a lower expectation. Most of the top companies I know of don't even go with Red-Black trees; they are interested in basic tree/graph and sorting algorithms, and give you one or two puzzles that don't really help in real life. Half of the Google interview questions are good, but the other half are puzzles that I (and certainly a lot of people) find less helpful. One I find useful is fitting GBs of data into 1M of memory, if I remember correctly.

mrtvb → IvanKuckir • a year ago
Also, not everyone will remember the complexity. Certain people will never use algorithms above tree search or sorting. They might not even need streaming algorithms.

Page styling via Bootstrap • Comments via Disqus • Algorithm detail via Wikipedia • Big-O complexity chart via MeteorCharts • Table source hosted on GitHub • Mashup via @ericdrowell