Module II · Article III

Dynamic Programming in Discrete Optimization

Principle of Optimality: Decomposing the Problem into Subproblems

Dynamic Programming (DP) is one of the most elegant methods in discrete optimization. The idea is both simple and profound: the optimal solution to a large problem is constructed from optimal solutions to smaller subproblems. This is the Bellman principle of optimality. DP “remembers” the results of subproblems (memoization) and avoids repeated calculations. Because of this, DP works where a brute-force approach is impossible.

Bellman Principle of Optimality

Formulation: the optimal solution contains optimal solutions to all its subproblems.

When is the principle satisfied? When subproblems are independent (the solution to one does not interfere with another) and the structure of the optimal solution allows a recursive decomposition.

When is it NOT satisfied? When global constraints must be considered (for example, the traveling salesman problem—the subproblems are connected through the constraint “visit each city exactly once”). Nevertheless, TSP can also be solved by DP in O(2ⁿ n²)!

DP Structure:

  1. Define the state (what describes a subproblem)
  2. Write the recurrence (how the state depends on smaller states)
  3. Determine the calculation order (from smaller to larger states)
  4. Reconstruct the answer (traceback)

The Knapsack Problem: Classic DP

Statement: n items with values cᵢ and weights wᵢ, knapsack capacity W.

State: V[i][w] = maximum value using the first i items and capacity w.

Recurrence:

  • Do not take item i: V[i][w] = V[i−1][w]
  • Take item i (possible only if w ≥ wᵢ): V[i][w] = V[i−1][w−wᵢ] + cᵢ
  • Combined: V[i][w] = max(V[i−1][w], V[i−1][w−wᵢ] + cᵢ) if w ≥ wᵢ, otherwise V[i][w] = V[i−1][w]

Initial conditions: V[0][w] = 0 for all w (no items—no value).

Complexity: O(nW) in time and memory. This is pseudo-polynomial, not polynomial: W enters the running time as a number, but occupies only ⌈log₂ W⌉ bits of input, so W can be exponentially large relative to the input size.

Full breakdown (n=3, W=5):

Items: (c₁,w₁)=(4,3), (c₂,w₂)=(3,2), (c₃,w₃)=(2,1).

  i\w   0   1   2   3   4   5
   0    0   0   0   0   0   0
   1    0   0   0   4   4   4
   2    0   0   3   4   4   7
   3    0   2   3   5   6   7

V[3][5] = 7. Traceback: V[2][4] + c₃ = 4 + 2 = 6 ≠ 7, so item 3 is not taken and V[3][5] = V[2][5] = 7. Next, V[1][3] + c₂ = 4 + 3 = 7 = V[2][5], so item 2 is taken. Finally, V[1][3] = V[0][0] + c₁ = 4, so item 1 is taken. Answer: items {1, 2}, total value 7, total weight 3 + 2 = 5. ✓
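
The table filling and traceback above can be written out directly. A minimal sketch, with the items from the worked example hard-coded in the call at the bottom:

```python
# A minimal sketch of the table filling and traceback described above.
def knapsack(values, weights, W):
    n = len(values)
    # V[i][w] = maximum value using the first i items with capacity w
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        c, wt = values[i - 1], weights[i - 1]
        for w in range(W + 1):
            V[i][w] = V[i - 1][w]                 # do not take item i
            if w >= wt:                           # take item i, if it fits
                V[i][w] = max(V[i][w], V[i - 1][w - wt] + c)
    # Traceback: item i was taken exactly when its row improved on row i-1.
    taken, w = [], W
    for i in range(n, 0, -1):
        if V[i][w] != V[i - 1][w]:
            taken.append(i)
            w -= weights[i - 1]
    return V[n][W], sorted(taken)

print(knapsack([4, 3, 2], [3, 2, 1], 5))  # → (7, [1, 2])
```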

Shortest Paths: DP on a Graph

Bellman-Ford: d[v] = min_{(u,v)∈E} {d[u] + w(u,v)}.

State: d[k][v] = shortest path from s to v using at most k edges.

Recurrence: d[k][v] = min{d[k−1][v], min_{(u,v)∈E}(d[k−1][u] + w(u,v))}.

Perform n−1 iterations (a shortest path that repeats no vertex uses at most n−1 edges): d[n−1][v] is the shortest-path distance.

Negative cycle detection: if d[n][v] < d[n−1][v] for some v — negative cycle!
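
A sketch of this edge-list formulation, including the extra n-th round for negative-cycle detection (the vertex numbering and (u, v, w) edge format are illustrative choices):

```python
# Bellman-Ford sketch: vertices are 0..n-1, edges is a list of
# (u, v, w) triples, s is the source vertex.
def bellman_ford(n, edges, s):
    INF = float("inf")
    d = [INF] * n
    d[s] = 0
    for _ in range(n - 1):          # n-1 relaxation rounds suffice
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    # Round n: any further improvement means a reachable negative cycle.
    for u, v, w in edges:
        if d[u] + w < d[v]:
            return None
    return d
```

This uses a single rolling array instead of the full d[k][v] table; the relaxations may converge faster than the formal recurrence, but never to a wrong value.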

Floyd-Warshall: all pairs shortest paths.

State: d[k][i][j] = shortest path from i to j using intermediate vertices from {1,...,k}.

Recurrence: d[k][i][j] = min{d[k−1][i][j], d[k−1][i][k] + d[k−1][k][j]}.

Complexity O(V³). Simple implementation in 3 lines of code.
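
The "3 lines" are the triple loop below. A sketch, assuming an adjacency matrix with float("inf") for missing edges and zeros on the diagonal; updating d in place over k is the standard trick that drops the k index from the d[k][i][j] state:

```python
# Floyd-Warshall sketch: d is an n×n weight matrix,
# float("inf") where there is no edge, 0 on the diagonal.
def floyd_warshall(d):
    n = len(d)
    for k in range(n):
        for i in range(n):
            for j in range(n):      # the core is this triple loop
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d
```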

Longest Common Subsequence (LCS)

Problem: given strings s₁ and s₂. Find the longest common subsequence (not necessarily contiguous).

State: LCS[i][j] = length of LCS of strings s₁[1..i] and s₂[1..j].

Recurrence:

  • s₁[i] = s₂[j]: LCS[i][j] = LCS[i−1][j−1] + 1 (take the match)
  • s₁[i] ≠ s₂[j]: LCS[i][j] = max(LCS[i−1][j], LCS[i][j−1]) (skip one character)
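
The two cases above translate into a short table-filling routine (a sketch; Python strings are 0-based, so s1[i−1] is the i-th character of the 1-based formulation):

```python
# LCS length via the recurrence above.
def lcs_length(s1, s2):
    m, n = len(s1), len(s2)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1            # take the match
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])  # skip a character
    return L[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # → 4 (e.g. "BCBA")
```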

Applications: bioinformatics—DNA sequence alignment (the BLAST program). Systems programming—diff algorithm for comparing files in git. Data compression—LZW uses common substrings.

TSP via Held-Karp DP

State: dp[S][v] = length of the shortest path starting at vertex 1, visiting all vertices from S ⊆ {2,...,n} exactly once, and ending at v ∈ S.

Recurrence: dp[S][v] = min_{u ∈ S∖{v}} {dp[S∖{v}][u] + d(u,v)}.

Answer: min_{v ≠ 1} {dp[{2,...,n}][v] + d(v,1)}.

Complexity: O(2ⁿ · n²)—exponential, but much better than O(n!) with brute-force. For n=20: 2²⁰ · 400 ≈ 4 · 10⁸—feasible.
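
A minimal sketch of this recurrence using bitmask states (an assumption of this sketch: vertices are renumbered 0..n−1 with 0 as the start city, and bit v−1 of the mask S means "vertex v has been visited"):

```python
# Held-Karp sketch over bitmask subsets of {1, ..., n-1}.
def held_karp(dist):
    n = len(dist)
    FULL = 1 << (n - 1)                 # number of subsets of {1, ..., n-1}
    INF = float("inf")
    dp = [[INF] * n for _ in range(FULL)]
    for v in range(1, n):               # base case: go straight from 0 to v
        dp[1 << (v - 1)][v] = dist[0][v]
    for S in range(FULL):               # extend each reachable state by one city
        for v in range(1, n):
            if not (S >> (v - 1)) & 1 or dp[S][v] == INF:
                continue
            for u in range(1, n):
                if (S >> (u - 1)) & 1:
                    continue            # u already visited
                T = S | (1 << (u - 1))
                dp[T][u] = min(dp[T][u], dp[S][v] + dist[v][u])
    # Close the tour: visit all of {1, ..., n-1}, then return to 0.
    return min(dp[FULL - 1][v] + dist[v][0] for v in range(1, n))

print(held_karp([[0, 10, 15, 20],
                 [10, 0, 35, 25],
                 [15, 35, 0, 30],
                 [20, 25, 30, 0]]))  # → 80
```

Iterating masks in increasing numeric order is enough: adding a city always sets a new bit, so every state is finalized before it is extended.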
