Big O notation is a tool used in computer science to describe the time or space complexity of algorithms. It’s frequently used to compare the efficiency of various algorithms and analyze their best- and worst-case performances. In interviews, you may be asked for the time and space complexity of your algorithm at the end, and your interviewer may ask you to optimize your algorithm if necessary. Time complexity answers the question: given an input of size n, what is the runtime of the algorithm? Space complexity, on the other hand, describes how much additional memory the algorithm needs.
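For instance, here is a minimal sketch contrasting the two kinds of space complexity (the helper functions are hypothetical, written only for illustration):
def double_in_place(nums):
    # O(1) extra space: only a loop index is allocated
    for i in range(len(nums)):
        nums[i] *= 2

def double_into_new_list(nums):
    # O(n) extra space: builds a second list as large as the input
    return [num * 2 for num in nums]
Both helpers take O(n) time, but only the second needs memory that grows with the input.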
Linear time complexity means that the runtime of the algorithm grows proportionally to the size of the input.
Think about it in the context of a linear function, y = ax + b. When you graph it, it’s a straight line, with the output increasing by the same increment each time, which is to say, linearly.
Consider a scenario where we are given an array and want to find the sum of all its elements. The number of elements we have to traverse grows in direct proportion to the size of the array.
Below is some Python code for your reference.
def find_sum(nums):
    total = 0
    for num in nums:
        total += num
    return total
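A quick check: find_sum([1, 2, 3, 4]) returns 10, and doubling the length of the list doubles the number of additions performed.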
An algorithm of O(n) time complexity is usually associated with a single pass through an iterative data structure, such as an array.
Logarithmic time complexity means that the runtime of an algorithm is proportional to the logarithm of the input size.
For example, say you’re given a number that is a power of 2, and you want to find the log of that number with a base of 2 WITHOUT using any built-in math functions in the respective programming language.
Thus, we may divide the number by 2 each time until the number becomes 1, incrementing the exponent with each division.
Let’s take a look at the Python code below.
def find_log(num):
    exponent = 0
    # Halve the number until it reaches 1, counting the divisions
    while num != 1:
        num = num // 2
        exponent += 1
    return exponent
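For instance, find_log(8) halves the input from 8 to 4 to 2 to 1 and returns 3, since 2^3 = 8.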
Another popular example is the binary search algorithm, which finds a target value in a sorted array of elements. Below is the Python code.
def binary_search(nums, target):
    left, right = 0, len(nums) - 1
    while left <= right:
        middle = (left + right) // 2
        if nums[middle] == target:
            return middle
        elif nums[middle] > target:
            # The target can only be in the left half
            right = middle - 1
        else:
            # The target can only be in the right half
            left = middle + 1
    # The target is not in the array
    return -1
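For example, binary_search([1, 3, 5, 7, 9], 7) checks index 2 and then index 3 before returning 3, halving the search range on each iteration.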
So, with the two examples in mind, what is one common pattern we see with algorithms of O(log n) time complexity? You probably noticed that we’re dividing the size of the input by a certain number each time. Dividing n by 2 repeatedly reaches 1 after about log2(n) steps, since n / 2^k = 1 exactly when k = log2(n). Thus, when you see the size of an input being repeatedly divided by a certain number, there’s a good chance that the time complexity is O(log n).
Let’s think back to the O(n) time complexity we covered earlier, which usually involves a single pass through an iterative data structure, such as an array.
If O(n) is associated with a single pass, what about O(n^2)?
Well, you probably guessed correctly! O(n^2) is usually associated with nested loops over an iterative data structure: for each of the n elements visited by the outer loop, the inner loop does another n units of work, so the runtime is proportional to the square of the input size.
One such example: we’re given a square 2D array of size n × n and want to find the sum of all elements in the matrix. Below is the Python code.
def find_matrix_sum(matrix):
    total = 0
    # Nested loops: n rows times n columns yields n^2 additions
    for row in range(len(matrix)):
        for col in range(len(matrix[0])):
            total += matrix[row][col]
    return total
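For instance, a 100 × 100 matrix requires 10,000 additions, and doubling the side length to 200 quadruples that to 40,000.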
Exponential time complexity occurs when the number of operations doubles each time the input size increases by 1.
In many cases, this is achieved through recursion, with the Fibonacci sequence being a good example. Below is the Python code.
def fibonacci(n):
    # Base cases: fibonacci(0) = 0 and fibonacci(1) = 1
    if n < 2:
        return n
    # Each call spawns two further calls
    return fibonacci(n - 1) + fibonacci(n - 2)
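For instance, fibonacci(10) returns 55 but makes 177 recursive calls along the way, and fibonacci(11) makes 287.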
As you can see from the code above, each call to the function makes 2 additional calls, so the total number of calls roughly doubles every time n grows by 1, yielding the O(2^n) time complexity.
Usually, algorithms of exponential time complexity should be avoided, and in many cases this can be done through a technique called memoization, which will be covered later.
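As a quick preview (a minimal sketch, not the full treatment to come), caching each result with Python’s built-in functools.lru_cache brings the Fibonacci function above down to O(n) time, since each value of n is computed only once; the name fibonacci_memo is just for illustration:
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci_memo(n):
    # Each distinct n is computed once, then served from the cache
    if n < 2:
        return n
    return fibonacci_memo(n - 1) + fibonacci_memo(n - 2)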