Big O Notation

Big O: How to Determine the Efficiency of Some Function

To determine the efficiency of some function, first we have to clarify what we mean by efficient.

Do we mean:

- Faster?
- Less memory-intensive?
- More readable?

Let's break these three concepts down in a little more detail.

Why Faster is Not the Optimal Measure of Efficiency

So How Do We Measure an Algorithm's Efficiency?

By the number of simple operations it needs to perform.

Take a look at this code:

function doSomeMath(n) {
  // computes the sum of the integers from 1 to n directly
  return n * (n + 1) / 2;
}

Here we have three simple operations: one multiplication, one addition, and one division.

This is the case regardless of n's size.
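To see that the shortcut really does the same work as adding the numbers one by one, we can compare it against a loop-based sum (a quick sketch for illustration; the sumByLoop helper is not part of the original):

```javascript
function doSomeMath(n) {
  return n * (n + 1) / 2;
}

// Does the same job with n additions instead of 3 fixed operations.
function sumByLoop(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

console.log(doSomeMath(100)); // 5050
console.log(sumByLoop(100));  // 5050
```

Both return the same answer, but only doSomeMath does it in a fixed number of operations.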

Take a look at this code:

function plsIterateOverMe(n) {
  let count = 0;
  for (let i = 1; i <= n; i++) {
    count += 1;
  }
  return count;
}

How many operations are there here?

We have:

- let count = 0: 1 assignment
- let i = 1: 1 assignment
- count += 1: n additions and n assignments
- i <= n: n comparisons
- i++: n additions and n assignments

Depending on which operations we count, the total can be anywhere from 2n up to 5n + 2.

Regardless of the exact count, the number of operations grows in proportion to n.

If n doubles the number of operations will too.
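We can make that proportionality visible by instrumenting the loop with an operation counter (a hypothetical countedIterate helper, added here for illustration; it only counts the additions, for simplicity):

```javascript
function countedIterate(n) {
  let operations = 0;
  let count = 0;
  for (let i = 1; i <= n; i++) {
    count += 1;
    operations++; // tally one operation per loop iteration
  }
  return operations;
}

console.log(countedIterate(10)); // 10
console.log(countedIterate(20)); // 20 — doubling n doubles the work
```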

This is the essence of Big O Notation: a way of describing how the runtime of an algorithm grows as its input grows.

It doesn't matter what the exact inputs are; only the trend matters when we're assessing an algorithm's efficiency.

Technical Definition

We say that an algorithm is O(f(n)) if the number of simple operations the computer must do is eventually less than a constant times f(n) as n increases.
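For instance, an algorithm that performs 5n + 2 operations is O(n), because 5n + 2 ≤ 6n once n ≥ 2. Here's a quick numeric check of that bound (a sketch added for illustration, not part of the original):

```javascript
// Verify that 5n + 2 <= 6 * n holds for every n from 2 up to a limit.
function boundHolds(limit) {
  for (let n = 2; n <= limit; n++) {
    if (5 * n + 2 > 6 * n) {
      return false;
    }
  }
  return true;
}

console.log(boundHolds(1000)); // true
```

The constant (6 here) and the starting point (n ≥ 2) don't matter; what matters is that some constant multiple of f(n) eventually dominates.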

Let's refer back to our earlier doSomeMath function.

function doSomeMath(n) {
  return n * (n + 1) / 2;
}

This is always just 3 operations, regardless of n. This is O(1), or constant time.

And our plsIterateOverMe function:

function plsIterateOverMe(n) {
  let count = 0;
  for (let i = 1; i <= n; i++) {
    count += 1;
  }
  return count;
}

The number of operations is eventually bounded by a multiple of n (maybe 8n; the exact constant doesn't matter). This is linear time, or O(n).

Here is an example of a slower, less efficient algorithm:

function addAllPairs(n) {
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      console.log(i + j);
    }
  }
}

Here we have an O(n) operation nested inside another O(n) operation, giving us O(n^2).
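The same counting trick from before shows the quadratic growth (countPairOperations is a hypothetical helper that tallies inner-loop iterations instead of logging):

```javascript
function countPairOperations(n) {
  let operations = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      operations++; // one console.log would happen here per inner iteration
    }
  }
  return operations;
}

console.log(countPairOperations(10)); // 100
console.log(countPairOperations(20)); // 400 — doubling n quadruples the work
```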

The important thing is that constants don't matter here, and neither do smaller terms: O(n - 3) is just O(n), and O(n^2 + 12n + 6) is just O(n^2).

Things to Remember

Conclusion

Big O has some interesting math behind it but at the end of the day, recognizing different patterns in your algorithm, such as nested loops, should give you a head start on optimizing an algorithm for efficiency.

© tiff.RSS