Dynamic Programming
Dynamic programming (DP) is an optimization technique that solves problems by breaking them into overlapping subproblems, solving each subproblem once, and storing the result to avoid redundant recomputation.
Explanation
Dynamic programming applies when a problem has two properties: optimal substructure (the optimal solution can be built from optimal solutions to subproblems) and overlapping subproblems (the same subproblems recur multiple times). Without overlapping subproblems, you just have recursion or divide-and-conquer; it's the overlap that DP optimizes.

There are two DP implementation styles:

- Top-down (memoization) starts from the full problem and recursively solves subproblems, caching results in a memo table so each subproblem is solved only once. It's natural to write (add a cache to recursive code) but carries function call overhead.
- Bottom-up (tabulation) starts from the smallest subproblems and iteratively builds up to the full solution, filling a dp table. It's often more space-efficient (you frequently need only the previous row) but requires thinking about the computation order.

Classic DP problems: Fibonacci (trivial DP: cache fib(n-1) and fib(n-2)), 0/1 knapsack (include or exclude each item), longest common subsequence, longest increasing subsequence, coin change (minimum coins to make a sum), edit distance (minimum edits to transform one string into another), and climbing stairs (how many ways to reach step n). These are patterns; recognizing which DP template a problem fits is the key skill.

The hardest part of DP is defining the dp state: what exactly dp[i] (or dp[i][j]) represents. Writing this definition precisely before coding is the difference between a correct DP solution and a vaguely similar approach that passes half the test cases.
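Both styles can be sketched minimally with Fibonacci, mentioned above. This is an illustrative sketch; the names fibMemo and fibTab are ours, not from any library:

```javascript
// Top-down: recursion plus a memo table, so each n is computed once.
function fibMemo(n, memo = new Map()) {
  if (n <= 1) return n;
  if (memo.has(n)) return memo.get(n);
  const result = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  memo.set(n, result);
  return result;
}

// Bottom-up: iterate from the base cases, keeping only the last two values
// (the "previous row" space optimization in its simplest form).
function fibTab(n) {
  if (n <= 1) return n;
  let prev = 0, curr = 1;
  for (let i = 2; i <= n; i++) {
    [prev, curr] = [curr, prev + curr];
  }
  return curr;
}

console.log(fibMemo(50), fibTab(50)); // 12586269025 12586269025
```

Either version runs in O(n) time; the naive recursion without the cache is exponential.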
Code Example
```javascript
// DP classic: coin change (minimum coins to make amount)

// Top-down (memoization)
function coinChangeMemo(coins, amount) {
  const memo = new Map();
  function dp(remaining) {
    if (remaining === 0) return 0;
    if (remaining < 0) return Infinity;
    if (memo.has(remaining)) return memo.get(remaining);
    let min = Infinity;
    for (const coin of coins) {
      min = Math.min(min, 1 + dp(remaining - coin));
    }
    memo.set(remaining, min);
    return min;
  }
  const result = dp(amount);
  return result === Infinity ? -1 : result;
}

// Bottom-up (tabulation): often more space-efficient
function coinChange(coins, amount) {
  // dp[i] = min coins to make amount i
  const dp = new Array(amount + 1).fill(Infinity);
  dp[0] = 0;
  for (let i = 1; i <= amount; i++) {
    for (const coin of coins) {
      if (coin <= i) dp[i] = Math.min(dp[i], 1 + dp[i - coin]);
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount];
}

console.log(coinChange([1, 5, 10, 25], 41)); // 4 (25+10+5+1)
```
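The same tabulation template fits climbing stairs, one of the classics listed earlier. A minimal sketch (assuming steps of 1 or 2, the usual formulation): dp[i] = dp[i-1] + dp[i-2], and since only the last two values are needed, the table collapses to two variables:

```javascript
// Climbing stairs: number of distinct ways to reach step n,
// taking 1 or 2 steps at a time. dp[i] = dp[i-1] + dp[i-2].
function climbStairs(n) {
  if (n <= 2) return n;
  let twoBack = 1, oneBack = 2; // ways to reach steps 1 and 2
  for (let i = 3; i <= n; i++) {
    [twoBack, oneBack] = [oneBack, twoBack + oneBack];
  }
  return oneBack;
}

console.log(climbStairs(5)); // 8
```

Note how similar the state definition is to Fibonacci: recognizing that resemblance is exactly the pattern-matching skill the section describes.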
Why It Matters for Engineers
Dynamic programming is one of the highest-difficulty, highest-value algorithm topics. DP problems appear in senior and staff engineering interviews at virtually every top tech company. More importantly, DP thinking (identifying overlapping subproblems and caching intermediate results) applies in production: query result caching, incremental recomputation in reactive systems, and memoized selectors in Redux all apply the same principle. Learning DP also teaches you to recognize when a naive recursive solution is exponential (computing fib(50) naively takes tens of billions of recursive calls) and how to make it polynomial (fib(50) with memoization takes about 50 calls). This is one of the most dramatic algorithmic improvements possible.
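The production caching principle mentioned above can be sketched as a generic memoize helper. This is an illustrative sketch of the idea, not Redux's or any library's actual API:

```javascript
// Wrap a pure function so repeated calls with the same arguments
// hit a cache instead of recomputing. Keys are the stringified args,
// so this sketch only suits JSON-serializable arguments.
function memoize(fn) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key);
  };
}

// Usage: the underlying function runs once per distinct input.
let calls = 0;
const slowSquare = (x) => { calls++; return x * x; };
const fastSquare = memoize(slowSquare);
fastSquare(7); fastSquare(7); fastSquare(7);
console.log(fastSquare(7), calls); // 49 1
```

This is memoization stripped of the recursion: the same cache-before-recompute move that turns exponential fib into linear fib.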
Learn This In Practice
Go deeper with the full module on Beyond Vibe Code.
Algorithms Fundamentals →