
Functions in asymptotic notation

When we use asymptotic notation to express the rate of growth of an algorithm's running time in terms of the input size n, it's good to bear a few things in mind.
Let's start with something easy. Suppose that an algorithm took a constant amount of time, regardless of the input size. For example, if you were given an array that is already sorted into increasing order and you had to find the minimum element, it would take constant time, since the minimum element must be at index 0. Since we like to use a function of n in asymptotic notation, you could say that this algorithm runs in Θ(n^0) time. Why? Because n^0 = 1, and the algorithm's running time is within some constant factor of 1. In practice, we don't write Θ(n^0), however; we write Θ(1).
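For instance, here is a minimal sketch in Python (the function name is ours, and it assumes the input really is sorted into increasing order); it touches only index 0, so it does the same amount of work whether the array has four elements or a million:

def min_of_sorted(arr):
    # The array is sorted in increasing order, so the minimum sits at index 0.
    # Only one element is inspected: Θ(1) time, independent of len(arr).
    return arr[0]

print(min_of_sorted([3, 7, 9, 12]))        # 3
print(min_of_sorted(list(range(10**6))))   # 0 -- same single step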
Now suppose an algorithm took Θ(log_10 n) time. You could also say that it took Θ(log_2 n) time. Whenever the base of the logarithm is a constant, it doesn't matter what base we use in asymptotic notation. Why not? Because there's a mathematical formula that says
log_a n = (log_b n) / (log_b a)
for all positive numbers a, b, and n. Therefore, if a and b are constants, then log_a n and log_b n differ only by a factor of log_b a, and that's a constant factor which we can ignore in asymptotic notation.
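You can check that constant factor numerically (a small Python sketch using the standard math module; the sample values of n are arbitrary):

import math

# log_10(n) / log_2(n) equals log_10(2) ≈ 0.30103 for every n,
# so changing the base only rescales the logarithm by a constant factor.
for n in (10, 1_000, 1_000_000, 10**12):
    print(n, round(math.log10(n) / math.log2(n), 5))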
Therefore, we can say that the worst-case running time of binary search is Θ(log_a n) for any positive constant a. Why? The number of guesses is at most log_2 n + 1, generating and testing each guess takes constant time, and setting up and returning take constant time. However, as a matter of practice, we often write that binary search takes Θ(log_2 n) time because computer scientists like to think in powers of 2.
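As a sketch (a minimal Python binary search; the guess counter and the names are our additions for illustration), the number of guesses never exceeds about log_2 n + 1:

def binary_search(arr, target):
    # Search a sorted array; return (index or -1, number of guesses made).
    low, high = 0, len(arr) - 1
    guesses = 0
    while low <= high:
        guesses += 1
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid, guesses
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, guesses

# 1,000,000 elements: at most about log_2(1,000,000) + 1 ≈ 21 guesses.
print(binary_search(list(range(1_000_000)), 999_999))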
There is an order to the functions that we often see when we analyze algorithms using asymptotic notation. If a and b are constants and a < b, then a running time of Θ(n^a) grows more slowly than a running time of Θ(n^b). For example, a running time of Θ(n), which is Θ(n^1), grows more slowly than a running time of Θ(n^2). The exponents don't have to be integers, either. For example, a running time of Θ(n^2) grows more slowly than a running time of Θ(n^2 √n), which is Θ(n^2.5).
The following graph compares the growth of n, n^2, and n^2.5:
[Graph comparing n, n^2, and n^2.5]
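If you would rather see numbers than a graph (a tiny Python sketch; the sample values of n are arbitrary), the gap between the three functions widens quickly:

# n, n^2, and n^2.5 for a few values of n.
for n in (10, 100, 1_000, 10_000):
    print(n, n ** 2, round(n ** 2.5))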
Logarithms grow more slowly than polynomials. That is, Θ(log_2 n) grows more slowly than Θ(n^a) for any positive constant a. But since the value of log_2 n increases as n increases, Θ(log_2 n) grows faster than Θ(1).
The following graph compares the growth of 1, n, and log_2 n:
[Graph comparing 1, log_2 n, and n]
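Again in numbers (a minimal Python sketch; math.log2 is from the standard library), log_2 n keeps increasing, but far more slowly than n:

import math

# 1, log_2 n, and n for a few values of n.
for n in (10, 1_000, 1_000_000, 10**9):
    print(1, round(math.log2(n), 1), n)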
Here's a list of functions in asymptotic notation that we often encounter when analyzing algorithms, ordered by slowest to fastest growing:
  1. Θ(1)
  2. Θ(log_2 n)
  3. Θ(n)
  4. Θ(n log_2 n)
  5. Θ(n^2)
  6. Θ(n^2 log_2 n)
  7. Θ(n^3)
  8. Θ(2^n)
  9. Θ(n!)
Note that an exponential function a^n, where a > 1, grows faster than any polynomial function n^b, where b is any constant.
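To make that concrete (a small Python sketch; the exponent 10 and the sample values of n are just choices for illustration), even n^10 is eventually overtaken by 2^n:

# The polynomial n^10 wins at first, but 2^n overtakes it somewhere between n = 50 and n = 100.
for n in (10, 50, 100, 200):
    print(n, n ** 10, 2 ** n, 2 ** n > n ** 10)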
The list above is not exhaustive; there are many more running-time functions that are not listed there. You'll hopefully run into a few of those in your computer science journey.

This content is a collaboration of Dartmouth Computer Science professors Thomas Cormen and Devin Balkcom, plus the Khan Academy computing curriculum team. The content is licensed CC-BY-NC-SA.

Want to join the conversation?

  • Chase Eaton
    It doesn't mention how the base on a log affects its growth. What does changing the base of a log do to its growth rate? The part about the change-of-base formula says that all logs with constant bases are equal but on the next quiz it is seen that they are not. Why is that so?
    (17 votes)
    • winju mishra
      There are some pretty good answers to this already. One more way to think about it:
      When we write log_2(100), we are asking how many times we can divide 100 into halves (as in the binary search algorithm).
      When we write log_10(100), we are asking how many times we can divide 100 into tenths.
      We can halve a number more times than we can divide it by ten, so log_2(100) > log_10(100).
      But we still say Θ(log_2 n) = Θ(log_10 n), because log_2 n and log_10 n differ only by the constant factor log_2(10) ≈ 3.32 for every n, and asymptotic notation ignores constant factors.
      (33 votes)
  • Leon Gower
    I'm following the content, but I'm unsure as to why the content is heading down this path. What is the reasoning behind learning or even reviewing this? Shouldn't lessons about the various speeds of algorithms come after we've learned the various algorithms? We were given a snippet of information regarding 2 simple algorithms and then sent into the woods with a handful of bread crumbs... or did I miss something?
    (26 votes)
    • Cameron
      They use the asymptotic analysis when they discuss each algorithm, but if you wanted to you could:
      -skip the asymptotic analysis section
      -read the parts that discuss how each algorithm works (ignoring any asymptotic notation)
      -skip the analysis of each algorithm
      -after learning how each algorithm works, review the asymptotic notation section
      -review the analysis section for each algorithm

      The asymptotic notation is useful in telling the computer science part of the story. It tells you why one algorithm is better than another algorithm. It tells you why you would even bother learning merge sort and quick sort after learning about other sorting algorithms.
      (21 votes)
  • Andrei Shindyapin
    Θ(n!) would grow faster than Θ(2^n), correct?
    (15 votes)
    • Cameron
      To demonstrate that n! grows faster than 2^n, write them out like this:
      n!    = 1 * 2 * 3 * 4 * 5 * 6 * 7 * 8 * ... * n
      2^n = 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * ... * 2


      Here's how we can show n! grows faster than 2^n:
      With n! and 2^n written out as above, the ratio of the corresponding terms beyond the second is > 1 and increases with larger n.
      That is:
      (3 * 4 * 5 * ... * n)/(2 * 2 * 2 * ... * 2) > 1 and increases with n.
      Since this ratio keeps increasing, it will eventually become larger than the ratio of the first two terms of 2^n to the first two terms of n!.
      That is:
      (3 * 4 * 5 * ... * n)/(2 * 2 * 2 * ... * 2) will eventually become greater than (2 * 2)/(1 * 2).
      Once that occurs, the overall ratio n!/2^n will be > 1.
      Not only will that ratio be > 1, but it will also keep increasing as n increases.
      This demonstrates that n! grows faster than 2^n.
      (Note that the same logic can be used for n! and c^n, where c is any constant.)

      Another approach is to use an approximation for n!.
      Stirling's Approximation says:
      n! ~= sqrt(2 * Pi * n) * (n/e)^n
      (https://en.wikipedia.org/wiki/Stirling%27s_approximation)
      From inspection we can see that even the (n/e)^n factor alone grows faster than 2^n (it does as soon as n/e > 2).
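
      A quick numeric check (a small Python sketch; math.factorial comes from the standard library) shows the ratio n!/2^n taking off:

      import math

      # n!/2^n starts below 1, crosses 1 at n = 4, then grows without bound.
      for n in (1, 2, 3, 4, 5, 10, 20):
          print(n, math.factorial(n) / 2 ** n)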

      Hope this makes sense
      (29 votes)
  • nguyenhaitran97
    What counts as constant, linear, polynomial, or exponential growth? What is the rule for it? I haven't learned logs yet.
    (5 votes)
  • William
    The article mentions that "computer scientists like to think in powers of 2." I'm guessing this has something to do with the binary nature of computers, but is there a more significant meaning to this? As someone who is new to computer science, thinking in powers of 10 seems easier to me.
    (4 votes)
    • Cameron
      Modern computers work in binary, but there is a deeper meaning to it. Computer science is, to a large extent, built around the concept of Turing machines, which have a binary alphabet. ( https://en.wikipedia.org/wiki/Turing_machine ). The Church-Turing thesis (not proven) roughly says that if you can compute something, then it is computable on a Turing machine (see https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis), which is why it is used as the foundation of computer science.

      Notably, when you see n used in functions for running times, the value of n represents the initial problem size as it would be represented on a Turing machine i.e. the size of the input data in binary. (This helps to make sure we are always comparing apples to apples)

      Hope this makes sense
      (11 votes)
  • lishaleo3
    I'm sooooo confused
    (7 votes)
  • LuminatorBU
    What is the difference between lg and log notation? Do they mean the same thing or are they different?
    (4 votes)
    • Cameron
      Here they are using:
      - lg means log base 2
      - log_a is log base a

      The bad news:
      - If you see log or lg in other places, it may mean something different.
      e.g. log is often used to represent: log base 2, log base 10, log base e
      e.g. lg is often used to represent: log base 2, log base 10, log base e
      - Often, in computer science, log or lg will be assumed to be log base 2 without anyone mentioning it.

      The good news:
      - If you see ln, then you can be fairly sure it means the natural log
      - If you see log_a or lg_a, you can be fairly sure it means log base a
      - In asymptotic analysis, the base of our logarithms usually doesn't matter
      (because we can convert from one base to another by multiplying by a constant, and we can typically ignore these constants)
      (6 votes)
  • justinj1776
    Could you explain why a^n grows faster than n^b, where a > 1 and b are constants? It might be possible for certain values of a and b but not for all. For example, consider the case where:
    1. a^n = 100^n, where a = 100
    Plot is here: https://www.wolframalpha.com/input/?i=100^n+from+n%3D0+to+100
    2. n^b = n^100, where b = 100
    Plot is here: https://www.wolframalpha.com/input/?i=n^100+from+n%3D0+to+100

    Looking at the plots, I think curve 2 increases faster than curve 1, or am I wrong?
    (2 votes)
  • Nada
    What does it mean to say "Θ(n) grows more slowly than a running time of Θ(n^2)"? How slowly? I didn't get it.
    (2 votes)
    • SP
      Let's give n 5 values to demonstrate: 1, 2, 3, 4, and 5. Let's compare how n and n^2 grow.

      When n is 1, n is 1 and n^2 is 1.
      When n is 2, n is 2 and n^2 is 4.
      When n is 3, n is 3 and n^2 is 9.
      When n is 4, n is 4 and n^2 is 16.
      When n is 5, n is 5 and n^2 is 25.

      You can see how n^2 is increasing much faster than n.
      (8 votes)
  • Salem
    Is there a negative correlation between an algorithm's rate of growth and its efficiency?

    If my expectation about the negative correlation is correct, then the following statements are correct:
    1. A fast rate of growth means a slow algorithm, and therefore a less efficient algorithm.
    2. A slow rate of growth means a fast algorithm, and therefore a more efficient algorithm.

    For example, linear search has a rate of growth of Θ(n), and binary search has a rate of growth of Θ(lg n). In this example, linear search has a faster rate of growth than binary search, which means that linear search is less efficient than binary search.

    Is my expectation about the negative correlation correct?
    (3 votes)
    • Cameron
      Yes.
      If you take less time to complete something, then you must have worked faster.
      If you take more time to complete something, then you must have worked slower.

      Just remember that asymptotic complexity only applies to large values of n. Simpler algorithms with worse asymptotic complexity often outperform more complex algorithms with better asymptotic complexity when the values of n are small.
      e.g.
      Insertion sort is O(n^2), and merge sort is O(n log n).
      For large values of n (> 100000) merge sort is faster than insertion sort.
      For smaller values of n (<100) insertion sort is faster than merge sort.
      This does not contradict anything that asymptotic complexity says, because asymptotic complexity only talks about what happens when we have large values of n.
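
      A rough way to see this for yourself (a Python sketch comparing two pure-Python implementations; the exact crossover point depends on the machine and the implementations, so treat the thresholds above as ballpark figures, and we use smaller sizes here because a pure-Python insertion sort on 100,000 items would take a very long time):

      import random, time

      def insertion_sort(a):
          # O(n^2): shift each element left until it is in place.
          a = a[:]
          for i in range(1, len(a)):
              key, j = a[i], i - 1
              while j >= 0 and a[j] > key:
                  a[j + 1] = a[j]
                  j -= 1
              a[j + 1] = key
          return a

      def merge_sort(a):
          # O(n log n): sort each half recursively, then merge.
          if len(a) <= 1:
              return a
          mid = len(a) // 2
          left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
          merged, i, j = [], 0, 0
          while i < len(left) and j < len(right):
              if left[i] <= right[j]:
                  merged.append(left[i])
                  i += 1
              else:
                  merged.append(right[j])
                  j += 1
          return merged + left[i:] + right[j:]

      for n in (50, 5_000):
          data = [random.random() for _ in range(n)]
          for sort in (insertion_sort, merge_sort):
              start = time.perf_counter()
              sort(data)
              print(n, sort.__name__, round(time.perf_counter() - start, 5), "seconds")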
      (5 votes)