Big O Notation
Big O notation is a mathematical notation that describes how an algorithm's runtime or space requirements scale as the input size grows. It gives an upper bound on the growth rate (conventionally the worst case), ignoring constants and lower-order terms.
Explanation
Big O answers the question: as input doubles, how does the work grow?

O(1) — constant: doubling the input doesn't change the work.
O(log n) — the work grows by one step when the input doubles (binary search: 1 billion items takes ~30 steps).
O(n) — work doubles when the input doubles (linear scan).
O(n log n) — work slightly more than doubles (merge sort, quicksort on average).
O(n²) — work quadruples when the input doubles (nested loops).
O(2^n) — work doubles for each additional input element (naive recursion without memoization).

The "O" stands for "order of". Formally, an algorithm's actual work T(n) is O(f(n)) if there exist constants c and n₀ such that T(n) ≤ c·f(n) for all n ≥ n₀. The constant factor is irrelevant to the asymptotic class: an algorithm doing 1000n operations is still O(n), even if it is slower in practice than an O(n) algorithm doing 5n operations.

Big O analysis focuses on worst-case behavior by default, though average and best cases are also analyzed. For a hash table, the worst case is O(n) (all keys collide) but the average case is O(1) — the distinction matters. Amortized O(1) means an occasional expensive operation (like array doubling on push) is averaged over many cheap operations, giving an effective O(1) per operation.

Common complexities ranked from fastest to slowest: O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2^n) < O(n!).
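The amortized O(1) claim can be made concrete with a toy dynamic array that doubles its backing storage when full. This is a sketch: the `DynArray` class and its `copies` counter are illustrative, not a standard API.

```javascript
// Amortized O(1) push: any single push may copy every element
// (when the backing array is full), but the total copy work over
// n pushes stays below 2n, so the average cost per push is O(1).
class DynArray {
  constructor() {
    this.buf = new Array(1); // capacity starts at 1, doubles on demand
    this.length = 0;
    this.copies = 0; // counts element moves caused by resizing
  }
  push(x) {
    if (this.length === this.buf.length) {
      // full: allocate double the capacity and copy everything over
      const bigger = new Array(this.buf.length * 2);
      for (let i = 0; i < this.length; i++) bigger[i] = this.buf[i];
      this.copies += this.length;
      this.buf = bigger;
    }
    this.buf[this.length++] = x;
  }
}

const a = new DynArray();
for (let i = 0; i < 1000; i++) a.push(i);
// resizes happen at sizes 1, 2, 4, ..., 512, so total copies are
// 1 + 2 + 4 + ... + 512 = 1023 — less than 2 × 1000 pushes
```

Doubling (rather than growing by a fixed amount) is what makes the amortized cost constant: each resize pays for the cheap pushes that preceded it.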
Code Example
```javascript
// Big O examples in JavaScript

// O(1) — constant time: work doesn't grow with input
const getFirst = arr => arr[0];
const setLookup = (set, val) => set.has(val);

// O(log n) — binary search (arr must be sorted)
function binarySearch(arr, target) {
  let lo = 0, hi = arr.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1; // integer midpoint
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}

// O(n) — single pass
const sum = arr => arr.reduce((a, b) => a + b, 0);

// O(n log n) — sorting
const sorted = arr => [...arr].sort((a, b) => a - b);

// O(n²) — nested loops (avoid for large n; a Set makes this O(n))
function hasDuplicate(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) return true;
    }
  }
  return false;
}

// O(2^n) — naive Fibonacci (exponential, avoid!)
const fib = n => n <= 1 ? n : fib(n - 1) + fib(n - 2);
```
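The two slow examples above both have standard linear-time fixes. These are sketches; the function names `hasDuplicateFast` and `fibMemo` are my own.

```javascript
// O(n) duplicate check — a Set gives average O(1) membership tests,
// replacing the O(n²) nested-loop scan
function hasDuplicateFast(arr) {
  const seen = new Set();
  for (const x of arr) {
    if (seen.has(x)) return true;
    seen.add(x);
  }
  return false;
}

// O(n) Fibonacci — memoization caches each subproblem so it is
// computed once, collapsing the O(2^n) recursion tree
function fibMemo(n, memo = new Map()) {
  if (n <= 1) return n;
  if (memo.has(n)) return memo.get(n);
  const result = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  memo.set(n, result);
  return result;
}
```

The pattern is the same in both cases: spend O(n) memory (a Set, a Map) to avoid redundant work, trading space for time.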
Why It Matters for Engineers
Big O notation is the universal language for discussing algorithm performance. Without it, you can't have a precise conversation about whether your solution is fast enough, why a senior engineer suggested a different approach, or what the bottleneck is in a slow API. It's the vocabulary of algorithmic reasoning. More practically: knowing that an O(n²) algorithm on 100,000 items does 10 billion operations (probably too slow) vs. an O(n log n) algorithm doing 1.7 million operations (fast) lets you make engineering decisions before writing a single line of code. That judgment separates engineers who write code that scales from those who discover the performance problem after shipping.
Related Terms
Time Complexity · Space Complexity · Binary Search · Hash Table
Learn This In Practice
Go deeper with the full module on Beyond Vibe Code.
Algorithms Fundamentals →