Dynamic Programming (DP) is a powerful problem-solving technique for optimization problems that exhibit overlapping subproblems and optimal substructure. It is one of the most important topics in DSA and interviews because it can turn slow exponential-time solutions into efficient polynomial-time algorithms.
Introduction to Dynamic Programming and Its Principles
What Is Dynamic Programming?
Dynamic Programming solves a problem by breaking it into simpler subproblems, solving each subproblem only once, and storing the results so they can be reused. It is applicable when a problem satisfies both of the following properties:
- Overlapping Subproblems: the same subproblems appear multiple times during computation (for example, a naive Fibonacci recursion recomputes fib(3) inside both fib(5) and fib(4)).
- Optimal Substructure: the optimal solution to the problem can be constructed from optimal solutions of its subproblems.
Why Dynamic Programming Improves Performance
A naive recursive solution often recomputes the same values many times.
- Without DP: Exponential time (very slow)
- With DP: Linear or polynomial time (efficient)
DP avoids recomputation by storing results and reusing them.
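To see how much work is repeated, you can count the calls made by a naive Fibonacci recursion. This is a small illustrative sketch; the fib_naive function and the calls counter are just demonstration names, not part of any library:

```python
calls = 0  # global counter of how many times the function runs

def fib_naive(n):
    global calls
    calls += 1
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

fib_naive(20)
print(calls)  # 21891 calls just to compute fib(20)
```

Storing each result the first time it is computed collapses this to one computation per distinct subproblem, i.e. 21 distinct values of n for fib(20).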
DP vs Recursion
| Technique | Description |
|---|---|
| Recursion | Solves by calling itself repeatedly; may repeat work |
| Dynamic Programming | Recursion + memory (memoization) or iterative table (tabulation) |
Memoization and Tabulation
Dynamic Programming is implemented using two main approaches.
Memoization (Top-Down DP)
Memoization is recursion with caching.
Steps
- Write a recursive solution
- Store computed results in a dictionary or array
- Return cached results when available
Advantages
- Easy to write
- Natural extension of recursion
Disadvantages
- Uses recursion stack
- Slight overhead due to function calls
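In Python, the standard-library functools.lru_cache decorator can supply the cache for you, so you do not have to manage a dictionary by hand. A minimal sketch of the same idea (fib_cached is an illustrative name):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache: every computed value is kept
def fib_cached(n):
    if n <= 1:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

print(fib_cached(10))  # 55
```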
Tabulation (Bottom-Up DP)
Tabulation builds the solution iteratively from base cases.
Steps
- Identify base cases
- Build the DP table iteratively
- The final cell contains the answer
Advantages
- Faster in practice
- No recursion overhead
General Rule
- Memoization → easier to code
- Tabulation → better performance
Classic Dynamic Programming Problems
Fibonacci Sequence
Fibonacci is the simplest example of overlapping subproblems.
Naive Recursive Solution (Inefficient)
```python
def fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)
```
Time Complexity: O(2ⁿ) → too slow
Fibonacci with Memoization (Top-Down)
```python
def fib_memo(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib_memo(n-1, memo) + fib_memo(n-2, memo)
    return memo[n]

print(fib_memo(10))  # 55
```
Complexity
- Time: O(n)
- Space: O(n) (memo + recursion stack)
Fibonacci with Tabulation (Bottom-Up)
```python
def fib_tab(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

print(fib_tab(10))  # 55
```
Space-Optimized Fibonacci
```python
def fib_optimized(n):
    if n <= 1:
        return n
    prev2, prev1 = 0, 1
    for _ in range(2, n + 1):
        prev2, prev1 = prev1, prev1 + prev2
    return prev1
```
Space: O(1)
0/1 Knapsack Problem
Problem Statement
You are given:
- an array weights[] of item weights
- an array values[] of item values
- a knapsack capacity W
Each item can be chosen at most once.
Goal: maximize total value without exceeding the capacity.
DP Idea
dp[i][w] = maximum value using first i items with capacity w.
Tabulation Implementation
```python
def knapsack(weights, values, W):
    n = len(weights)
    # dp[i][w] = maximum value using the first i items with capacity w
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i-1] <= w:
                # either skip item i, or take it and add its value
                dp[i][w] = max(
                    dp[i-1][w],
                    values[i-1] + dp[i-1][w - weights[i-1]]
                )
            else:
                dp[i][w] = dp[i-1][w]
    return dp[n][W]

print(knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))  # 7
```
Complexity
- Time: O(n × W)
- Space: O(n × W)
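Because row i of the table only depends on row i-1, the space can usually be reduced to a single 1-D array of size W + 1 by iterating capacities from right to left, so each item is still counted at most once. A sketch of this common optimization on the same inputs as above (knapsack_1d is an illustrative name):

```python
def knapsack_1d(weights, values, W):
    dp = [0] * (W + 1)  # dp[w] = best value achievable with capacity w
    for i in range(len(weights)):
        # go right-to-left so dp[w - weights[i]] still refers to the previous row
        for w in range(W, weights[i] - 1, -1):
            dp[w] = max(dp[w], values[i] + dp[w - weights[i]])
    return dp[W]

print(knapsack_1d([2, 3, 4, 5], [3, 4, 5, 6], 5))  # 7
```

This keeps the O(n × W) time but cuts the space to O(W).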
Longest Common Subsequence (LCS)
Problem Statement
Given two strings, find the length of the longest subsequence common to both.
Example
s1 = "abcde"
s2 = "ace"
LCS = "ace" → length = 3
DP Idea
dp[i][j] = LCS length for s1[:i] and s2[:j]
Implementation
```python
def lcs(s1, s2):
    n, m = len(s1), len(s2)
    # dp[i][j] = LCS length for s1[:i] and s2[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s1[i-1] == s2[j-1]:
                # characters match: extend the LCS of both prefixes
                dp[i][j] = 1 + dp[i-1][j-1]
            else:
                # otherwise drop one character from either string
                dp[i][j] = max(dp[i-1][j], dp[i][j-1])
    return dp[n][m]

print(lcs("abcde", "ace"))  # 3
```
Complexity
- Time: O(n × m)
- Space: O(n × m)
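If you also need the subsequence itself (not just its length), the finished table can be walked backwards from dp[n][m]. A sketch built on the same recurrence (lcs_string is an illustrative name):

```python
def lcs_string(s1, s2):
    n, m = len(s1), len(s2)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s1[i-1] == s2[j-1]:
                dp[i][j] = 1 + dp[i-1][j-1]
            else:
                dp[i][j] = max(dp[i-1][j], dp[i][j-1])
    # walk back from dp[n][m] to rebuild one longest common subsequence
    result = []
    i, j = n, m
    while i > 0 and j > 0:
        if s1[i-1] == s2[j-1]:
            result.append(s1[i-1])
            i -= 1
            j -= 1
        elif dp[i-1][j] >= dp[i][j-1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(result))

print(lcs_string("abcde", "ace"))  # "ace"
```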
Coin Change Problem (Minimum Coins)
Problem Statement
Given coin denominations and a target amount, find the minimum number of coins required.
Example
coins = [1, 2, 5]
amount = 11
Output = 3 (5 + 5 + 1)
DP Idea
dp[x] = minimum coins needed to make amount x
Implementation
```python
def coin_change(coins, amount):
    # dp[x] = minimum coins needed to make amount x
    dp = [float("inf")] * (amount + 1)
    dp[0] = 0
    for x in range(1, amount + 1):
        for coin in coins:
            if coin <= x:
                dp[x] = min(dp[x], 1 + dp[x - coin])
    return dp[amount] if dp[amount] != float("inf") else -1

print(coin_change([1, 2, 5], 11))  # 3
```
Complexity
- Time: O(amount × number_of_coins)
- Space: O(amount)
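The same recurrence can also be written top-down with memoization, which some people find easier to derive before converting it into a table. A minimal sketch (coin_change_memo and best are illustrative names):

```python
from functools import lru_cache

def coin_change_memo(coins, amount):
    @lru_cache(maxsize=None)
    def best(x):
        if x == 0:
            return 0
        if x < 0:
            return float("inf")
        # try every coin as the last coin used to reach amount x
        return min(1 + best(x - coin) for coin in coins)
    result = best(amount)
    return result if result != float("inf") else -1

print(coin_change_memo([1, 2, 5], 11))  # 3
```

For large amounts the recursion depth may exceed Python's default limit, which is one practical reason the tabulated version above is usually preferred.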
Summary
Dynamic Programming improves efficiency by storing results of overlapping subproblems.
- Memoization (Top-Down): recursion + cache
- Tabulation (Bottom-Up): iterative DP table
- Classic DP problems like Fibonacci, Knapsack, LCS, and Coin Change teach core DP patterns used in interviews and real-world optimization systems.
Mastering DP requires:
- Identifying states
- Defining transitions
- Handling base cases
- Optimizing space when possible