How can I find the time complexity of an algorithm?

I have gone through Google and Stack Overflow search, but nowhere was I able to find a clear and straightforward explanation of how to calculate time complexity.

What do I know already?

Say for code as simple as the one below:
    char h = 'y'; // This will be executed 1 time
    int abc = 0;  // This will be executed 1 time
Say for a loop like the one below:
    for (int i = 0; i < N; i++) {
        Console.Write('Hello World !');
    }

int i = 0; this will be executed only once. The time is actually calculated for i = 0, and not the declaration.

i < N; this will be executed N+1 times.

i++; this will be executed N times.

So the number of operations required by this loop are {1 + (N+1) + N} = 2N + 2. (But this still may be wrong, as I am not confident about my understanding.)

OK, so these small basic calculations I think I know, but in most cases I have seen the time complexity as O(N), O(n^2), O(log n), O(n!), and many others. Can anyone help me understand how you calculate the time complexity of an algorithm?

Bonus for those interested: The Big O Cheat Sheet (bigocheatsheet.com)

Why is Console.Write('Hello World !'); not a machine instruction?

Related / maybe duplicate: Big O, how do you calculate/approximate it?

@Chetan If you mean that you should consider Console.Write when calculating the complexity, that's true, but also somewhat irrelevant in this case, as that only changes a constant factor, which big-O ignores (see the answers), so the end result is still a complexity of O(N).


10 Answers

How to find time complexity of an algorithm

You add up how many machine instructions it will execute as a function of the size of its input, then simplify the expression to the largest term (the one that dominates when N is very large) and drop any constant factor.

For example, let's see how we simplify 2N + 2 machine instructions to describe this as just O(N).

Why do we remove the two 2s?

We are interested in the performance of the algorithm as N becomes large.

Consider the two terms 2N and 2.

What is the relative influence of these two terms as N becomes large? Suppose N is a million.

Then the first term is 2 million and the second term is only 2.

For this reason, we drop all but the largest terms for large N.

So, now we have gone from 2N + 2 to 2N.

Traditionally, we are only interested in performance up to constant factors.

This means that we don't really care if there is some constant multiple of difference in performance when N is large. The unit of 2N is not well-defined in the first place anyway. So we can multiply or divide by a constant factor to get to the simplest expression.

So 2N becomes just N.
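To make the counting concrete, here is a minimal C++ sketch of the tally described above (my illustration, not part of the original answer; the "operations" are simplifications rather than real machine-instruction counts):

    #include <iostream>

    int main() {
        const long N = 1000;
        long ops = 0;

        ops += 1;                    // the initialisation i = 0 runs once
        for (long i = 0; i < N; i++) {
            ops += 1;                // one i < N comparison per iteration (N times)
            ops += 1;                // one i++ per iteration (N times)
        }
        ops += 1;                    // the final, failing i < N comparison

        // counted = 1 + 2N + 1, i.e. exactly 2N + 2
        std::cout << "counted " << ops << " operations; 2N + 2 = " << 2 * N + 2 << "\n";
        return 0;
    }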


Hey, thanks for letting me know "why O(2N+2) to O(N)", very nicely explained, but this was only a part of the bigger question. I wanted someone to point out a link to a hidden resource, or in general I wanted to know how you end up with time complexities like O(N), O(n^2), O(log n), O(n!), etc. I know I may be asking a lot, but still I can try :<)


Well the complexity in the brackets is just how long the algorithm takes, simplified using the method I have explained. We work out how long the algorithm takes by simply adding up the number of machine instructions it will execute. We can simplify by only looking at the busiest loops and dividing by constant factors as I have explained.


Giving an in-answer example would have helped a lot for future readers. Just handing over a link that I have to sign up for really doesn't help me when I just want to read through some nicely explained text.


I would suggest watching Dr. Naveen Garg's (IIT Delhi Prof.) videos if you want good knowledge of DS and time complexity. Check the link: nptel.ac.in/courses/106102064


(cont.) This hierarchy would have a height on the order of log N. As for O(N!), my analogies likely won't cut it, but permutations are on that order - it's prohibitively steep, more so than any polynomial or exponential. There are exactly 10! seconds in six weeks, but the universe is less than 20! seconds old.


The below answer is copied from above (in case the excellent link goes bust)

The most common metric for calculating time complexity is Big O notation. This removes all constant factors so that the running time can be estimated in relation to N as N approaches infinity. In general you can think of it like this:

statement; 

Is constant. The running time of the statement will not change in relation to N.

for ( i = 0; i < N; i++ ) statement; 

Is linear. The running time of the loop is directly proportional to N. When N doubles, so does the running time.

    for ( i = 0; i < N; i++ ) {
        for ( j = 0; j < N; j++ )
            statement;
    }

Is quadratic. The running time of the two loops is proportional to the square of N. When N doubles, the running time quadruples.

    while ( low <= high ) {
        mid = ( low + high ) / 2;
        if ( target < list[mid] )
            high = mid - 1;
        else if ( target > list[mid] )
            low = mid + 1;
        else
            break;
    }

Is logarithmic. The running time of the algorithm is proportional to the number of times N can be divided by 2. This is because the algorithm divides the working area in half with each iteration.

    void quicksort ( int list[], int left, int right )
    {
        int pivot = partition ( list, left, right );  // rearranges around a pivot in linear time
        quicksort ( list, left, pivot - 1 );
        quicksort ( list, pivot + 1, right );
    }

Is N * log (N). The running time consists of N loops (iterative or recursive) that are logarithmic, thus the algorithm is a combination of linear and logarithmic.

In general, doing something with every item in one dimension is linear, doing something with every item in two dimensions is quadratic, and dividing the working area in half is logarithmic. There are other Big O measures such as cubic, exponential, and square root, but they're not nearly as common. Big O notation is described as O(f(N)), where f(N) is the measure. The quicksort algorithm would be described as O(N * log(N)).

Note that none of this has taken into account best, average, and worst case measures. Each would have its own Big O notation. Also note that this is a VERY simplistic explanation. Big O is the most common, but it's also more complex than I've shown. There are also other notations such as big omega, little o, and big theta. You probably won't encounter them outside of an algorithm analysis course. ;)

The quicksort algorithm in the worst case has a running time of O(N^2), though this behaviour is rare.

IIRC, little o and big omega are used for best and average case complexity (with big O being worst case), so "best, average, and worst case measures. Each would have its own Big O notation." would be incorrect. There are even more symbols with more specific meanings, and CS isn't always using the most appropriate symbol. I came to learn all of these by the name Landau symbols btw. +1 anyways b/c best answer.


@hiergiltdiestfu Big-O, Big-Omega, etc. can be applied to any of the best, average or worst case running times of an algorithm. How do O and Ω relate to worst and best case?


Also, if anyone is looking for how to calculate big O for any method: stackoverflow.com/a/60354355/4260691

One of the best explanations.

1. Introduction

In computer science, the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input.

2. Big O notation

The time complexity of an algorithm is commonly expressed using big O notation, which excludes coefficients and lower order terms. When expressed this way, the time complexity is said to be described asymptotically, i.e., as the input size goes to infinity.

For example, if the time required by an algorithm on all inputs of size n is at most 5n^3 + 3n, the asymptotic time complexity is O(n^3). More on that later.
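To see why that simplification is justified: for all n >= 1 we have 5n^3 + 3n <= 5n^3 + 3n^3 = 8n^3, so 5n^3 + 3n is bounded by a constant multiple of n^3, which is exactly what O(n^3) asserts.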

A few more examples:

3. O(1) constant time:

An algorithm is said to run in constant time if it requires the same amount of time regardless of the input size.
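For example (a small C++ sketch of my own; any single array access would do):

    #include <vector>

    // O(1): one array access costs the same whether the vector
    // holds ten elements or ten million (assumes a non-empty vector).
    int firstElement(const std::vector<int>& v) {
        return v[0];
    }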

4. O(n) linear time

An algorithm is said to run in linear time if its execution time is directly proportional to the input size, i.e. time grows linearly as input size increases.

Consider the following examples. Below I am linearly searching for an element, and this has a time complexity of O(n).

    int find = 66;
    var numbers = new int[] { 33, 435, 36, 37, 43, 45, 66, 656, 2232 };
    for (int i = 0; i < numbers.Length; i++)
    {
        if (find == numbers[i])
        {
            return;
        }
    }

5. O(log n) logarithmic time:

An algorithm is said to run in logarithmic time if its execution time is proportional to the logarithm of the input size.

Recall the "twenty questions" game - the task is to guess the value of a hidden number in an interval. Each time you make a guess, you are told whether your guess is too high or too low. Twenty questions game implies a strategy that uses your guess number to halve the interval size. This is an example of the general problem-solving method known as binary search.

6. O(n^2) quadratic time

An algorithm is said to run in quadratic time if its execution time is proportional to the square of the input size.
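For example (an illustrative C++ sketch, not from the original answer), the classic compare-every-pair pattern:

    #include <vector>

    // O(n^2): the inner loop runs about n^2/2 times in total,
    // because every pair (i, j) with i < j is examined once.
    bool hasDuplicate(const std::vector<int>& v) {
        for (std::size_t i = 0; i < v.size(); i++)
            for (std::size_t j = i + 1; j < v.size(); j++)
                if (v[i] == v[j]) return true;
        return false;
    }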



Several examples of loops:

    // O(n): the loop variable is incremented/decremented by a constant amount.
    // Here c is a positive integer constant
    for (int i = 1; i <= n; i += c) {
        // some O(1) expressions
    }
    for (int i = n; i > 0; i -= c) {
        // some O(1) expressions
    }

    // O(n^2): nested loops, each running a linear number of times.
    for (int i = 1; i <= n; i += c) {
        for (int j = 1; j <= n; j += c) {
            // some O(1) expressions
        }
    }
    for (int i = n; i > 0; i -= c) {
        for (int j = i + 1; j <= n; j += c) {
            // some O(1) expressions
        }
    }

    // O(log n): the loop variable is multiplied/divided by a constant amount.
    for (int i = 1; i <= n; i *= c) {
        // some O(1) expressions
    }
    for (int i = n; i > 0; i /= c) {
        // some O(1) expressions
    }

    // O(log log n): the loop variable changes exponentially.
    // Here c is a constant greater than 1
    for (int i = 2; i <= n; i = pow(i, c)) {
        // some O(1) expressions
    }
    // Here fun is sqrt or cuberoot or any other constant root
    for (int i = n; i > 1; i = fun(i)) {
        // some O(1) expressions
    }

One example of time complexity analysis

    int fun(int n)
    {
        for (int i = 1; i <= n; i++)
        {
            for (int j = 1; j <= n; j += i)
            {
                // Some O(1) task
            }
        }
    }

Analysis:

    For i = 1, the inner loop is executed n times.
    For i = 2, the inner loop is executed approximately n/2 times.
    For i = 3, the inner loop is executed approximately n/3 times.
    For i = 4, the inner loop is executed approximately n/4 times.
    …
    For i = n, the inner loop is executed approximately n/n times.

So the total time complexity of the above algorithm is (n + n/2 + n/3 + … + n/n), which becomes n * (1/1 + 1/2 + 1/3 + … + 1/n).

The important thing about the series (1/1 + 1/2 + 1/3 + … + 1/n) is that it is approximately log n. So the time complexity of the above code is O(n·log n).
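If you want to check this empirically, here is a short C++ sketch (my addition) that counts the inner-loop iterations and compares the total with n * H(n), where H(n) is the harmonic sum:

    #include <iostream>

    int main() {
        const int n = 100000;

        long long iterations = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j += i)   // runs roughly n/i times
                iterations++;

        double harmonic = 0.0;                 // H(n) = 1/1 + 1/2 + ... + 1/n
        for (int i = 1; i <= n; i++)
            harmonic += 1.0 / i;

        // The two numbers come out close, supporting the O(n log n) claim.
        std::cout << "iterations: " << iterations
                  << ", n * H(n): " << static_cast<long long>(n * harmonic) << "\n";
        return 0;
    }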


@zangw Awesome explanation! Could you change "the series (1/1 + 1/2 + 1/3 + … + 1/n) is equal to O(log n)" to "the series (1/1 + 1/2 + 1/3 + … + 1/n) is approximately log n" to make it more precise? Also, the c in the pseudocode for the O(n^c) loops may mislead people into thinking the two cs are the same.


Time complexity with examples

1 - Basic operations (arithmetic, comparisons, accessing array elements, assignment): the running time is always constant, O(1)

    read(x)                          // O(1)
    a = 10;                          // O(1)
    a = 1,000,000,000,000,000,000    // O(1)

2 - If-then-else statement: take only the maximum running time from the two or more possible branches.

    age = read(x)                              // (1+1) = 2
    if age < 17 then begin                     // 1
        status = "Not allowed!";               // 1
    end else begin
        status = "Welcome! Please come in";    // 1
        visitors = visitors + 1;               // 1+1 = 2
    end;

So, the complexity of the above pseudo code is T(n) = 2 + 1 + max(1, 1+2) = 6. Thus, its big oh is still constant T(n) = O(1).

3 - Looping (for, while, repeat): the running time for this statement is the number of iterations multiplied by the number of operations inside the loop.

    total = 0;                  // 1
    for i = 1 to n do begin     // (1+1)*n = 2n
        total = total + i;      // (1+1)*n = 2n
    end;
    writeln(total);             // 1

So, its complexity is T(n) = 1+4n+1 = 4n + 2. Thus, T(n) = O(n).

4 - Nested loop (looping inside looping): since there is at least one loop inside the main loop, the running time becomes O(n^2) or O(n^3).

    for i = 1 to n do begin           // (1+1)*n = 2n
        for j = 1 to n do begin       // (1+1)*n*n = 2n^2
            x = x + 1;                // (1+1)*n*n = 2n^2
            print(x);                 // (n*n) = n^2
        end;
    end;

Common running time

There are some common running times when analyzing an algorithm:

  1. O(1) – Constant time: the running time is constant; it is not affected by the input size.
  2. O(n) – Linear time: when an algorithm accepts n input items, it performs on the order of n operations as well.
  3. O(log n) – Logarithmic time: an algorithm with running time O(log n) is faster than O(n). Commonly, the algorithm divides the problem into subproblems of the same size. Examples: binary search algorithm, binary conversion algorithm.
  4. O(n log n) – Linearithmic time: this running time is often found in "divide & conquer" algorithms, which divide the problem into subproblems recursively and then merge them in n time. Example: the Merge Sort algorithm.
  5. O(n^2) – Quadratic time: look at the Bubble Sort algorithm!
  6. O(n^3) – Cubic time: the same principle as O(n^2).
  7. O(2^n) – Exponential time: very slow as the input gets larger; if n = 1,000,000, T(n) would be 2^1,000,000. A brute-force algorithm over all subsets has this running time; see the sketch after this list.
  8. O(n!) – Factorial time: the slowest. Example: the Travelling Salesman Problem (TSP).
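To make item 7 concrete, here is a small brute-force C++ sketch (my own illustration, not from the linked article) that enumerates all subsets of an n-element set:

    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> items = {1, 2, 3};
        int n = static_cast<int>(items.size());

        // 2^n iterations, each doing O(n) work: exponential growth.
        // Adding one more element to items doubles the number of subsets.
        for (int mask = 0; mask < (1 << n); mask++) {
            std::cout << "{ ";
            for (int i = 0; i < n; i++)
                if (mask & (1 << i))
                    std::cout << items[i] << ' ';
            std::cout << "}\n";
        }
        return 0;
    }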

It is taken from this article. It is very well explained and you should give it a read.


In your 2nd example, you wrote visitors = visitors + 1 is 1 + 1 = 2 . Could you please explain to me why you did that?


@Sajib Acharya Look at it from right to left. First step: calculate visitors + 1. Second step: assign the value from the first step to visitors. So the above expression is formed of two statements; first step + second step => 1 + 1 = 2.

@nbro Why is it 1+1 in age = read(x) // (1+1) = 2?

@Humty Check the beginning of this answer: read(x) // O(1), a = 10; // O(1). The first is a function call => O(1). The second is an assignment, as nbro said, but 10 is a constant, so the second is also => O(1).


Loosely speaking, time complexity is a way of summarising how the number of operations or run-time of an algorithm grows as the input size increases.

Like most things in life, a cocktail party can help us understand.

O(N)

When you arrive at the party, you have to shake everyone's hand (do an operation on every item). As the number of attendees N increases, the time/work it will take you to shake everyone's hand increases as O(N) .

Why O(N) and not cN ?

There's variation in the amount of time it takes to shake hands with people. You could average this out and capture it in a constant c. But the fundamental operation here, shaking hands with everyone, would always be proportional to N, no matter what c was. When debating whether we should go to a cocktail party, we're often more interested in the fact that we'll have to meet everyone than in the minute details of what those meetings look like.

O(N^2)

The host of the cocktail party wants you to play a silly game where everyone meets everyone else. Therefore, you must meet N-1 other people and, because the next person has already met you, they must meet N-2 people, and so on. The sum of this series is N^2/2 - N/2. As the number of attendees grows, the N^2 term gets big fast, so we just drop everything else.

O(N^3)

You have to meet everyone else and, during each meeting, you must talk about everyone else in the room.

O(1)

The host wants to announce something. They ding a wineglass and speak loudly. Everyone hears them. It turns out it doesn't matter how many attendees there are, this operation always takes the same amount of time.

O(log N)

The host has laid everyone out at the table in alphabetical order. Where is Dan? You reason that he must be somewhere between Adam and Mandy (certainly not between Mandy and Zach!). Given that, is he between George and Mandy? No. He must be between Adam and Fred, and then between Cindy and Fred. And so on. We can efficiently locate Dan by looking at half the set and then half of that set. Ultimately, we look at O(log_2 N) individuals.

O(N log N)

You could find where to sit down at the table using the algorithm above. If a large number of people came to the table, one at a time, and all did this, that would take O(N log N) time. This turns out to be how long it takes to sort any collection of items when they must be compared.

Best/Worst Case

You arrive at the party and need to find Inigo - how long will it take? It depends on when you arrive. If everyone is milling around you've hit the worst-case: it will take O(N) time. However, if everyone is sitting down at the table, it will take only O(log N) time. Or maybe you can leverage the host's wineglass-shouting power and it will take only O(1) time.

Assuming the host is unavailable, we can say that the Inigo-finding algorithm has a lower bound of Ω(log N) and an upper bound of O(N), depending on the state of the party when you arrive.

Space & Communication

The same ideas can be applied to understanding how algorithms use space or communication.

Knuth has written a nice paper about the former entitled "The Complexity of Songs".

Theorem 2: There exist arbitrarily long songs of complexity O(1).

PROOF: (due to Casey and the Sunshine Band). Consider the songs Sk defined by (15), but with

    V_k = 'That's the way,' U 'I like it, ' U
    U   = 'uh huh,' 'uh huh'

You nailed it, Now whenever I go to a cocktail party I will subconsciously try finding Time Complexity of any fun events. Thanks for such a humorous example.


When you're analyzing code, you have to analyse it line by line, counting every operation / recognizing the time complexity. In the end, you have to sum it all up to get the whole picture.

For example, you can have one simple loop with linear complexity, but later in that same program you can have a triple loop that has cubic complexity, so your program will have cubic complexity. Function order of growth comes into play right here.

Let's look at the possibilities for the time complexity of an algorithm; here you can see the orders of growth I mentioned above:

    // Linear: O(N)
    int p = 0;
    for (int i = 1; i < N; i++)
        p = p + 2;
    // Cubic: O(N^3)
    int x = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++)
                x = x + 2;
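Putting the two snippets above into one program shows the summing idea (a sketch of mine, not the answerer's): the costs add up, and the largest term wins.

    // T(N) = c1*N (first loop) + c2*N^3 (triple loop).
    // The N^3 term dominates, so the whole function is O(N^3).
    int work(int N) {
        int p = 0;
        for (int i = 1; i < N; i++)          // O(N)
            p = p + 2;

        int x = 0;
        for (int i = 0; i < N; i++)          // O(N^3)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++)
                    x = x + 2;

        return p + x;
    }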

If this was the case, what would be the complexity? for (int i = 0; i < N; i++) for (int j = i+1; j < N; j++) for (int k = j+1; k < N; k++) x = x + 2


For the mathematically-minded people: The master theorem is another useful thing to know when studying complexity.
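For example (a standard application, not specific to this answer): merge sort satisfies the recurrence T(n) = 2T(n/2) + cn, i.e. a = 2, b = 2, f(n) = cn. Since n^(log_b a) = n^1 = n and f(n) = Θ(n), the second case of the master theorem gives T(n) = Θ(n log n).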


O(n) is the big O notation used for writing the time complexity of an algorithm. When you add up the number of executions in an algorithm, you'll get an expression like 2N + 2. In this expression, N is the dominating term (the term having the largest effect on the expression if its value increases or decreases). Now O(N) is the time complexity, while N is the dominating term.

Example

    For i = 1 to n;
        j = 0;
        while(j <= n);
            j = j + 1;

Here the total number of executions for the inner loop is n+1 and the total number of executions for the outer loop is n(n+1)/2, so the total number of executions for the whole algorithm is n + 1 + n(n+1)/2 = (n^2 + 3n + 2)/2. Here n^2 is the dominating term, so the time complexity for this algorithm is O(n^2).


Re For i = 1 to n;: In what programming language do for and while end prematurely with a semicolon? Some pseudocode?


Other answers concentrate on the big-O-notation and practical examples. I want to answer the question by emphasizing the theoretical view. The explanation below is necessarily lacking in details; an excellent source to learn computational complexity theory is Introduction to the Theory of Computation by Michael Sipser.

Turing Machines

The most widespread model to investigate any question about computation is a Turing machine. A Turing machine has a one-dimensional tape of symbols which is used as a memory device. It has a tapehead which is used to write to and read from the tape. It has a transition table determining the machine's behaviour, a fixed hardware component that is decided when the machine is created. A Turing machine works in discrete time steps, doing the following:

It reads the symbol under the tapehead. Depending on the symbol and its internal state, which can only take finitely many values, it reads three values s, σ, and X from its transition table, where s is an internal state, σ is a symbol, and X is either Right or Left.

It changes its internal state to s.

It changes the symbol it has read to σ.

It moves the tapehead one step according to the direction in X.

Turing machines are powerful models of computation. They can do everything that your digital computer can do. They were introduced before the advent of modern digital computers by the mathematician and father of theoretical computer science, Alan Turing.

Time Complexity

It is hard to define the time complexity of a single problem like "Does white have a winning strategy in chess?" because there is a machine which runs for a single step giving the correct answer: either the machine which says directly 'No' or the one which says directly 'Yes'. To make it work, we instead define the time complexity of a family of problems L, each of which has a size, usually the length of the problem description. Then we take a Turing machine M which correctly solves every problem in that family. When M is given a problem of this family of size n, it solves it in finitely many steps. Let us call f(n) the longest possible time it takes M to solve problems of size n. Then we say that the time complexity of L is O(f(n)), which means that there is a Turing machine which will solve an instance of it of size n in at most C·f(n) time, where C is a constant independent of n.

Isn't it dependent on the machines? Can digital computers do it faster?

Yes! Some problems can be solved faster by other models of computation; for example, two-tape Turing machines solve some problems faster than those with a single tape. This is why theoreticians prefer to use robust complexity classes such as NL, P, NP, PSPACE, EXPTIME, etc. For example, P is the class of decision problems whose time complexity is O(p(n)), where p is a polynomial. The class P does not change even if you add ten thousand tapes to your Turing machine, or use other types of theoretical models such as random-access machines.

A Difference in Theory and Practice

It is usually assumed that the time complexity of integer addition is O(1). This assumption makes sense in practice because computers use a fixed number of bits to store numbers for many applications. There is no reason to assume such a thing in theory, so the time complexity of addition is O(k), where k is the number of bits needed to express the integer.
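A concrete C++ sketch of that O(k) view (my illustration): add two k-digit numbers stored as digit arrays, least significant digit first.

    #include <vector>

    // Grade-school addition: one pass over the k digits, so O(k).
    std::vector<int> addDigits(const std::vector<int>& a,
                               const std::vector<int>& b) {
        std::vector<int> sum;
        int carry = 0;
        for (std::size_t i = 0; i < a.size() || i < b.size() || carry != 0; i++) {
            int d = carry;
            if (i < a.size()) d += a[i];
            if (i < b.size()) d += b[i];
            sum.push_back(d % 10);   // keep one digit
            carry = d / 10;          // propagate the rest
        }
        return sum;
    }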

Finding The Time Complexity of a Class of Problems

The straightforward way to show that the time complexity of a problem is O(f(n)) is to construct a Turing machine which solves it in O(f(n)) time. Creating Turing machines for complex problems is not trivial; one needs some familiarity with them. A transition table for a Turing machine is rarely given; instead, the machine is described at a high level. It becomes easier to see how long a machine will take to halt as one becomes more familiar with them.

Showing that a problem is not O(f(n)) time complexity is another story. Even though there are some results like the time hierarchy theorem, there are many open problems here. For example, whether problems in NP are in P, i.e. solvable in polynomial time, is one of the seven Millennium Prize Problems in mathematics, whose solver will be awarded 1 million dollars.