
Demystifying the Big O notation


DigitalOnUs - November 27, 2020

The programmer’s job is to solve problems

At the core of every software development job lies the same objective: problem-solving. No matter the language or platform, every programmer must be great at it. Programming produces solutions on demand for a given set of problems (usually provided by the users), which we like to call requirements. Programmers take those requirements, break them down into smaller pieces, and create software that can go beyond the user's expectations.

In fact, programmers often solve problems the user didn't even know they had. They need to come up with creative ways to manipulate the provided data to get the expected output, and the recipes they come up with are what we call algorithms. Algorithms are, loosely speaking, a finite list of instructions that must be followed in order to achieve a goal. Every programmer's mind works differently, which is why the algorithm one programmer develops to solve a specific problem can be entirely different from the one developed by another.

If we have multiple options to achieve something, the obvious question pops up: which one is better? Well, that depends on your definition of 'better', and it's entirely dependent on the context. The general consensus among programmers is that a good algorithm should be fast and efficient with the resources at its disposal: it should achieve the desired output in as few steps as possible and use as little computer memory as possible.

You might think all this information isn't really necessary, but I can assure you it changes the way you code and deliver solutions to your client. In my current job at DigitalOnUs, one of the projects required updating the information of some entities inside a large list, and some of those entities were also part of other lists of data. Given that the information was presented to the user in real time, it was necessary to create an efficient algorithm that could search for the entities to be updated. Had I not applied the knowledge posted here, the result would have been a bad user experience. So let's get right to it!

Complexity

Complexity is a measurement of how fast and efficiently an algorithm performs, based on a number of factors. Mainly, there are two factors that affect how well an algorithm performs:

  1. Input size
  2. Situation (scenario) in which the algorithm has to run

By comparing the complexity of two or more algorithms, we can clearly see which one is better (that is, faster and more efficient) at solving the same problem.

There are two types of complexity:

  1. Time complexity
  2. Space complexity

As much as it sounds like science fiction, time and space complexity are simpler than you think! Trust me and continue reading.

Time complexity

Time complexity (or running cost) is the measurement of complexity that gives us the approximate number of operations an algorithm takes when processing an input of a certain size. Operations are nothing more than the necessary steps the algorithm performs when running.

In simpler terms, time complexity lets us know how many more operations the algorithm takes whenever the input increases in size. The relationship between the number of operations and the size of the input can take many forms, the most common being linear (`T(n)`), quadratic (`T(n^2)`), and constant (`T(1)`).
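To make these three growth rates concrete, here is a minimal sketch (the function names are hypothetical, just for illustration) of a constant-time, a linear-time, and a quadratic-time operation over the same array:

```js
// T(1): constant, one step no matter how large the array is
function firstElement(arr) {
  return arr[0];
}

// T(n): linear, one step per element
function sumAll(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i];
  }
  return total;
}

// T(n^2): quadratic, one step per pair of elements
function countPairs(arr) {
  let pairs = 0;
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      pairs++;
    }
  }
  return pairs;
}
```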

Essentially, complexity is nothing more than a mathematical formula, with `n` representing the input size. For the sake of example, let’s assume we have an algorithm that has a time complexity of `T(n^2)`. What this means is that to determine the number of operations, we only need to substitute `n` with the size of the input we have.

If we had an input size of 10 elements:

```
T(n^2) = T(10^2) = 100 operations.
```

If we increase the size to 100 elements:

```
T(n^2) = T(100^2) = 10,000 operations.
```

Now, let’s compare both results to determine how many times the operations increased from one input size to another.

```
T(100^2)/T(10^2) = 10,000/100 = 100
```

100 times more operations just by increasing the input size 10 times!

As you can probably tell, this isn’t a very efficient algorithm, because it would need to do far more steps as the input increases. This type of algorithm can usually be optimized with some trade-offs, but we’ll get to that later.

One important thing to note here is that the variable that denotes the input size doesn’t necessarily have to be `n`. It can be whatever you like, but the recommendation is to keep it meaningful in relation to what you are actually using as input. Read on for an example that demonstrates this.

Let’s assume we have a square plot of land and we want to know how much time it would take to plow the entire area. Obviously, this depends entirely on the area of the land, so that would be our input.

```
T(a), where "a" is the area of the plot of land
```

 

But what if instead we decided to express it the following way:

 

```
T(s^2), where "s" is the length of one side of the plot
```

We know that the area of a square is (surprise!) the square of one of its sides. So, `T(s^2)` and `T(a)` are equivalent, but their variables represent different things. That’s why you need to be careful to use a meaningful term as input when measuring the time complexity of an algorithm, or you might get unexpected results.

How to obtain an algorithm’s time complexity

Now that you know what time complexity entails and how to interpret its results, you need to figure out how to get the formula in the first place. To do this, you have to count the operations the algorithm performs. Again, this is easier to understand if you visualize it by way of a few examples.

Example 1

Let’s assume we have the following algorithm (in JavaScript) and we want to obtain its time complexity:

```js
function addUpTo(number) {
  let total = 0;
  for (let i = 0; i <= number; i++) {
    total += i;
  }
  return total;
}
```

Its purpose is to obtain the sum of all the numbers that lead up to the input number. Now, let’s count all of its operations.

`let total = 0` => 1 assignment operation.

`let i = 0` => 1 assignment operation.

`i <= number` => n comparison operations.

`i++` => n addition and assignment operations.

`total += i` => n addition and assignment operations.

Why do we assume "n" of some operations? Because the number of times those steps are taken is directly proportional to the input size: the loop doesn't stop until it reaches the number it has as input.

Now, we just add up the operations. Remember your algebra classes: like terms combine. And lo and behold, the time complexity of this algorithm: `T(3n + 2)`. This algorithm has linear complexity.

If we were to graph the formula, it would look like this: **(plot-1 image goes here)**

We can see that as the input size increases, the number of operations necessary also increases in a linear fashion.

```
T(3(10)+2) = 32 operations

T(3(100)+2) = 302 operations

T(3(10,000)+2) = 30,002 operations
```

Since you already know that the nature of this time complexity is linear and don't need a precise operations count, you can disregard the constants and end up with just `T(n)`. More on the rules for simplifying complexity formulas later.
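A quick way to see why constants can be dropped: whether you use the exact formula `T(3n + 2)` or the simplified `T(n)`, doubling the input size roughly doubles the number of operations either way. A small sketch using the formula from the example above:

```js
// Operations predicted by the exact formula T(3n + 2)
const exactOps = (n) => 3 * n + 2;

// How much the operation count grows when the input size doubles
const growthWhenDoubled = (n) => exactOps(2 * n) / exactOps(n);

console.log(growthWhenDoubled(10));     // 1.9375
console.log(growthWhenDoubled(100000)); // ~1.999993, approaching 2
```

The bigger the input, the less the constants matter: the growth factor converges to exactly the value predicted by the simplified `T(n)`.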

Example 2

We can take a different approach to the same problem and obtain the same result. Thanks to a young Gauss, the sum of all the numbers up to a certain number can be obtained with the formula `sum = (n * (n + 1)) / 2`:

```js
function addUpTo(number) {
  return (number * (number + 1)) / 2;
}
```

How many operations can you count in this algorithm?

`n + 1` => 1 addition

`n * (n + 1)` => 1 multiplication

`(n * (n + 1)) / 2` => 1 division

Therefore, this algorithm has a time complexity of `T(3)`. If we graph it, it would look this way:

**(plot-2 image goes here)**

This shows us a constant nature, meaning that no matter the input size, we always perform just three operations. And remember, we only care about the nature in which the algorithm grows, not the exact number of operations, so we could simplify this complexity to just `T(1)`. This is known as constant complexity.

Isn’t this way better? Even if the input size tends to grow up to infinity, the algorithm always performs the same number of operations. This complexity is the peak in algorithmic optimization in terms of time.
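Both versions of `addUpTo` should agree on every input, which is easy to sanity-check: the loop does `T(3n + 2)` work while the formula does `T(3)`, yet the answers are identical:

```js
// Linear version: one loop iteration per number
function addUpToLoop(number) {
  let total = 0;
  for (let i = 0; i <= number; i++) {
    total += i;
  }
  return total;
}

// Constant version: three operations, always
function addUpToFormula(number) {
  return (number * (number + 1)) / 2;
}

console.log(addUpToLoop(100));    // 5050
console.log(addUpToFormula(100)); // 5050
```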

But sadly, it is really hard, or even impossible, to optimize our algorithms to have this time complexity. The best known algorithms for the famous NP-complete problems, for example, have time complexities like `T(2^n)` or even `T(n!)`!

Actually, there's a million-dollar prize (the P vs NP Millennium Prize) for the brilliant soul who finds an efficient, polynomial-time algorithm for any of those problems. Or you could prove that no such algorithm exists, in which case the money is yours as well.

Complexity rules

Let’s now take a look at the rules we can use to simplify the expressions that describe the complexity of our algorithms. The reason for these rules is that there’s no need to be so incredibly precise when counting the operations or computer memory our algorithms are consuming. Once we start seeing a pattern, we know right away if we are being efficient or if some improvements can be made.

The rules are:

  1. Assume the worst: if you have a loop in your algorithm that depends on the input, assume it will always run for the maximum input size.
  2. Drop constants: constant factors don't affect the overall behavior of the complexity. `2n` and `665n` both represent linear growth, so the constants are insignificant.
  3. Different terms for different inputs: if your algorithm has two inputs (`n` and `m`) and you perform a loop for each of them, add them up: `T(n + m)`.
  4. Drop non-dominant terms: an equation always has a dominant term that determines how it behaves in the long run. Usually it is the variable with the greatest exponent, so you can neglect the other terms.

In general, these rules tell us that `T(78n^2 + n/3 + 540)` is actually just `T(n^2)`.
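Rule 3 is the one that trips people up most often, so here is a minimal sketch (a hypothetical function, not from this article) with two independent inputs. The loops run one after the other, so their counts add up to `T(n + m)`; had they been nested, the counts would multiply to `T(n * m)` instead:

```js
// Two independent inputs: n = people.length, m = cities.length
function greetAndList(people, cities) {
  const lines = [];
  for (const person of people) { // runs n times
    lines.push(`Hello, ${person}`);
  }
  for (const city of cities) { // runs m times
    lines.push(`Visit ${city}`);
  }
  return lines;
}
```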

So one only needs to be aware of the input size, right?

Well, not quite. There’s one more factor that also determines the complexity of an algorithm: the context of the problem. In other words, the situation of the problem at hand.

Let's imagine that we want to sort a deck of playing cards. We know that a deck always has 52 cards, so the input size in this problem is static. Nevertheless, the order the deck is in at the moment of sorting (that is, the context) plays a huge role!

Sorting a deck that is already in order is not the same as sorting one that has been shuffled, right? That's why we can identify at least three possible scenarios:

  1. Best case scenario: the minimum number of operations for an input of a certain size. For example, the deck is already sorted.
  2. Average case scenario: as the name suggests, the average number of operations for an input of a certain size; this is typically the scenario most likely to occur. For example, the deck is randomly shuffled.
  3. Worst case scenario: the maximum number of operations for an input of a certain size. For example, the deck is in reverse order.
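The classic illustration of these scenarios is a linear search (a sketch, not from this article): in the best case the target is the first element, and in the worst case it is the last one or missing entirely:

```js
function indexOfCard(deck, target) {
  for (let i = 0; i < deck.length; i++) {
    if (deck[i] === target) {
      return i; // best case: found on the very first comparison
    }
  }
  return -1; // worst case: scanned the entire deck
}
```

With a 52-card deck, the best case takes 1 comparison, the average case about 26, and the worst case all 52.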

As a rule of thumb, if nothing is said about the situation, assume the worst. Programmers may seem pessimistic in this matter, but if you always expect the problem to be in the worst possible condition, you will save yourself a lot of headaches. Make this your mantra: hope for the best and prepare for the worst. Pretty stoic, right?

Space complexity

Like we said earlier, there is one more type of complexity you need to be aware of: space complexity. It measures how much computer memory an algorithm takes. This is the type of complexity most developers overlook, given the huge resources modern computers have; that's also why it's less commonly asked about in tech interviews. It is still important to keep an eye on it, though.

Back in the day, when computer memory was truly a luxury, programmers had to be extremely careful that their algorithms didn't hoard all of it, or risk crashing mid-execution. Today it's not as much of an issue, but a good developer always considers both types of complexity.

Now, what things use computer memory in an algorithm? Not many, but a few in particular can lead to those dreadful `Out of memory` errors:

– Variables: One of the most used features in any programming language.

– Data structures: Arrays, dictionaries (a.k.a. maps), linked lists, stacks, queues, graphs, trees, etc.

– Function calls: Yes, calling a function takes memory when running an algorithm. This is because the runtime environment needs to keep track of the variables and operations being performed inside that function. Be careful with this one, especially when doing recursion.

– Allocations: explicit memory allocations, which are more common in low-level languages like C.
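The function-call point is easiest to see with recursion. In this sketch (hypothetical functions, just for illustration), the recursive version keeps up to `n` frames on the call stack, `S(n)`, while the iterative version reuses a single counter, `S(1)`:

```js
// Recursive: each pending call holds a stack frame => S(n) space
function countDownRecursive(n) {
  if (n <= 0) return 0;
  return 1 + countDownRecursive(n - 1);
}

// Iterative: one counter variable, reused => S(1) space
function countDownIterative(n) {
  let steps = 0;
  for (let i = n; i > 0; i--) {
    steps++;
  }
  return steps;
}
```

Both return the same result, but a large enough `n` will crash the recursive version with a stack overflow long before the iterative one has any trouble.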

**Important:** By convention, the input itself is not counted toward space complexity; we measure only the auxiliary memory the algorithm allocates on top of it.

Space and time complexity can rarely both be fully optimized in the same algorithm; there's usually a trade-off between them. Rule of thumb: if you want to improve one, you often need to sacrifice the other.

For example, let’s say we want to know which letter is present more than once in a word composed of only lowercase letters. One way we could do it would be using this algorithm:

```js
function getRepeatedLetter(word) {
  for (let i = 0; i < word.length; i++) {
    for (let j = i + 1; j < word.length; j++) {
      if (word[i] === word[j]) {
        return word[i];
      }
    }
  }
  return 'There is no repeated letter';
}
```

This algorithm’s complexities can be measured like this:

**Time**

`let i = 0` => 1 assignment

`i < word.length` => n comparisons

`word.length` => n accesses

`i++` => n additions and assignments

`let j = i + 1` => 1 assignment and 1 addition

`j < word.length` => n comparisons

`word.length` => n accesses

`j++` => n additions and assignments

`word[i]` => n accesses

`word[j]` => n accesses

**Space**

`i` => 1 variable

`j` => 1 variable

Nested loops that depend on the input are not added up, but multiplied. Simplified, the time complexity is `T(n^2)` and the space complexity is `S(1)`.

Now let’s look at another algorithm for the same problem that aims to improve its time complexity:

```js
function getRepeatedLetter(word) {
  const letterMap = {};
  for (let i = 0; i < word.length; i++) {
    // if the letter was already seen, return it
    if (letterMap[word[i]]) {
      return word[i];
    }
    letterMap[word[i]] = true;
  }
  return 'There is no repeated letter';
}
```

We improved the time complexity from `T(n^2)` to `T(n)`, which is great. But we had to sacrifice some memory, because we made use of a new variable that, in the worst case, could end up holding every letter in the word (if the repeated letter is the last one, or there is none). This means the space complexity is now `S(n)`. See the trade-off?

Big O Notation

What does the content of this big wall of text have to do with Big O notation then? Everything! Big O is just the elegant notation developers use to describe the time/space complexity of algorithms in the worst possible case. That’s it.

So instead of writing `T(n^2)` or `S(n)`, you can just write `O(n^2)` or `O(n)`, which can be read as “oh of n squared” or “oh of n”.

Like we said, Big O is reserved for describing the complexity of the worst case, and it is the most commonly used of the three existing notations. This is because it is better to assume our algorithms will run under the most hostile conditions possible. Write them tough.

The other notations are:

– Big Omega – Ω(n), for best cases (a lower bound)

– Big Theta – Θ(n), for average cases (a tight bound, used when the upper and lower bounds coincide)

And that’s it! That’s all you need to know about Big O notation and what it truly means. Hopefully, this article has been able to shed some light for you on one of the mysterious terms developers like to use to sound smart.

Use this link to help you visually understand Big O and the complexities of the most common algorithms and data structures.

Happy coding!


Medium profile: https://medium.com/@byomead

By Hector Soto, Software Engineer
