# Big O Notation
Big O describes how an algorithm’s runtime or memory usage grows as the input size grows. It strips away constants and lower-order terms to focus on the dominant factor.
## Why It Matters
Without Big O, you can’t compare algorithms or predict whether code will be fast enough for production data. An O(n^2) solution might work for 100 items but choke on 100,000.
## Common Complexities
| Big O | Name | Example |
|---|---|---|
| O(1) | Constant | Hash table lookup |
| O(log n) | Logarithmic | Binary search |
| O(n) | Linear | Scanning an array |
| O(n log n) | Linearithmic | Merge sort |
| O(n^2) | Quadratic | Nested loops (bubble sort) |
| O(2^n) | Exponential | Brute-force subsets |
| O(n!) | Factorial | Brute-force permutations |
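To make the logarithmic row concrete, here is a minimal binary search sketch (the function name `binary_search` is ours, not from the table). Each iteration halves the remaining search range, which is why the number of steps grows with log n rather than n:

```python
def binary_search(sorted_nums: list[int], target: int) -> int:
    """Return the index of target in sorted_nums, or -1 if absent.

    Each loop iteration halves the search range, so the running
    time is O(log n).
    """
    lo, hi = 0, len(sorted_nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_nums[mid] == target:
            return mid
        if sorted_nums[mid] < target:
            lo = mid + 1   # target is in the upper half
        else:
            hi = mid - 1   # target is in the lower half
    return -1
```

For a million sorted items, this takes at most about 20 comparisons, versus up to a million for a linear scan.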
## Key Rules
- Drop constants — O(2n) is just O(n)
- Drop lower terms — O(n^2 + n) is O(n^2)
- Worst case by default — unless stated otherwise, Big O means the worst case
- Multiply nested loops — a loop inside a loop over the same input is O(n^2)
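A small sketch contrasting the first and last rules (the function names here are ours): two sequential loops add up to O(2n), which simplifies to O(n), while a loop inside a loop over the same input multiplies to O(n^2):

```python
def two_passes(nums: list[int]) -> int:
    """Two sequential loops: O(n) + O(n) = O(2n), i.e. O(n)."""
    total = 0
    for x in nums:      # first pass: n steps
        total += x
    count = 0
    for _ in nums:      # second pass: another n steps
        count += 1
    return total + count

def nested(nums: list[int]) -> int:
    """A loop inside a loop over the same input: n * n = O(n^2) steps."""
    pairs = 0
    for _ in nums:
        for _ in nums:
            pairs += 1
    return pairs
```

For a 3-element list, `two_passes` does 6 loop steps while `nested` does 9; at n = 1,000 that gap becomes 2,000 versus 1,000,000.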
## Code Example
```python
# O(n) — linear scan
def find_max(nums: list[int]) -> int:
    best = nums[0]  # assumes nums is non-empty
    for x in nums:  # runs n times
        if x > best:
            best = x
    return best
```
```python
# O(n^2) — nested loops
def has_duplicate_pair(nums: list[int]) -> bool:
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):  # inner loop depends on outer
            if nums[i] == nums[j]:
                return True
    return False
```
```python
# O(n) using a hash set — much better
def has_duplicate(nums: list[int]) -> bool:
    seen = set()
    for x in nums:
        if x in seen:
            return True
        seen.add(x)
    return False
```

## Space Complexity
Big O also applies to memory. `has_duplicate` above uses O(n) extra space for the set, while the nested-loop `has_duplicate_pair` uses O(1) extra space. Often there's a time-space tradeoff: the faster version buys its speed with extra memory.
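Recursion is another common source of hidden space cost, since every pending call occupies a stack frame. A minimal sketch (the function names are ours, not from the text): a recursive sum holds O(n) frames at its deepest point, while the iterative version needs only O(1) extra space.

```python
def sum_recursive(nums: list[int], i: int = 0) -> int:
    """O(n) extra space: one stack frame per element at peak depth."""
    if i == len(nums):
        return 0
    return nums[i] + sum_recursive(nums, i + 1)

def sum_iterative(nums: list[int]) -> int:
    """O(1) extra space: a single accumulator, no growing call stack."""
    total = 0
    for x in nums:
        total += x
    return total
```

Both run in O(n) time and return the same result; only their memory behavior differs.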
## Related
- Binary and Hexadecimal — understanding data sizes
- Hash Tables — O(1) average lookup
- Sorting Algorithms — classic complexity comparisons
- Recursion — recursive call stacks affect space complexity