Module III·Article III·~4 min read

PTAS and FPTAS: Precise Approximation Schemes

Approximation Algorithms


A Family of Algorithms Instead of One

A conventional approximation algorithm provides a fixed guarantee, for example a 2-approximation. But what if we need more precision, say a 1.01-approximation? A PTAS (Polynomial-Time Approximation Scheme) is not a single algorithm but an entire family of algorithms $A_\varepsilon$, parameterized by the precision $\varepsilon > 0$: $A_\varepsilon$ finds a $(1+\varepsilon)$-approximation in polynomial time. You can request any precision; the price is a running time that grows as $\varepsilon$ decreases.

PTAS: Definition and Examples

Definition: An algorithm $A_\varepsilon$ is a PTAS if:

  • For any $\varepsilon > 0$, algorithm $A_\varepsilon$ finds a $(1+\varepsilon)$-approximation
  • The running time is polynomial in $n$ (the input size) for fixed $\varepsilon$

Important: the dependence on $\varepsilon$ can be arbitrary. For instance, $O(n^{1/\varepsilon})$ is a PTAS, but $O(2^{1/\varepsilon}\cdot n)$ is also a PTAS.

Example: Knapsack Problem via PTAS

Idea: scale the item values.

  1. Find the maximum value $c_{\max} = \max c_i$
  2. Scale: $\tilde{c}_i = \left\lfloor c_i \cdot n/(\varepsilon \cdot c_{\max}) \right\rfloor$ (round down)
  3. Solve DP with scaled values: time $O\left(n\cdot \sum \tilde{c}_i\right) = O\left(n\cdot n^2/\varepsilon\right) = O(n^3/\varepsilon)$
  4. Return the found set of items

Theorem: This is a $(1+\varepsilon)$-approximation. Proof sketch: rounding loses at most $\varepsilon \cdot c_{\max}/n \leq \varepsilon \cdot OPT/n$ per item (since $OPT \geq c_{\max}$), hence at most $\varepsilon \cdot OPT$ in total.
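The four steps above can be sketched in Python. This is a minimal illustration; the function name, DP layout, and reconstruction bookkeeping are my own choices, not a library API:

```python
def knapsack_fptas(values, weights, capacity, eps):
    """(1 - eps)-approximate 0/1 knapsack via value scaling + DP by value."""
    n = len(values)
    c_max = max(values)
    K = eps * c_max / n                      # scaling factor
    scaled = [int(v // K) for v in values]   # floor(c_i / K), each at most n/eps

    V = sum(scaled)
    INF = float("inf")
    dp = [0.0] + [INF] * V        # dp[v] = minimum weight achieving scaled value v
    choice = [[False] * (V + 1) for _ in range(n)]  # for reconstructing the set
    for i in range(n):
        for v in range(V, scaled[i] - 1, -1):
            if dp[v - scaled[i]] + weights[i] < dp[v]:
                dp[v] = dp[v - scaled[i]] + weights[i]
                choice[i][v] = True

    # Best scaled value reachable within the capacity, then walk back.
    best_v = max(v for v in range(V + 1) if dp[v] <= capacity)
    items, v = [], best_v
    for i in reversed(range(n)):
        if choice[i][v]:
            items.append(i)
            v -= scaled[i]
    return items
```

The DP is indexed by (scaled) value rather than weight, which is exactly what makes the running time depend on $n/\varepsilon$ instead of on the magnitudes of the original values.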

FPTAS: Polynomial in $1/\varepsilon$

Definition: An algorithm is an FPTAS (Fully Polynomial-Time Approximation Scheme) if the time is polynomial in $n$ and in $1/\varepsilon$.

This is much stronger than a PTAS: for $\varepsilon = 0.01$ (1% precision), an FPTAS runs in time $poly(n, 100)$, while a PTAS is only guaranteed to be polynomial for each fixed $\varepsilon$, with running times like $O(2^{1/\varepsilon} \cdot n) = O(2^{100} \cdot n)$ or $O(n^{1/\varepsilon}) = O(n^{100})$.

Knapsack Problem—FPTAS: Scaling gives time $O(n^3/\varepsilon) = O(n^2 \cdot n/\varepsilon)$—polynomial in $n$ and $1/\varepsilon$!

Scheduling Problem $Pm||C_{max}$ (fixed number of machines $m$): there is an FPTAS via analogous scaling of the processing times. When $m$ is part of the input ($P||C_{max}$), the problem is strongly NP-hard, so no FPTAS exists there.

Not every PTAS is an FPTAS. Euclidean TSP has a PTAS (Arora's algorithm), but no FPTAS is known: the problem is NP-hard even in the plane. Metric TSP in general is harder still: it is APX-hard, so it admits no PTAS at all.

Why Does TSP on Arbitrary Metrics Not Have a PTAS?

For metric TSP, there is a $3/2$-approximation (Christofides' algorithm). But there is no PTAS: metric TSP is APX-hard (Papadimitriou and Yannakakis), so a $(1+\varepsilon_0)$-approximation is impossible for some fixed $\varepsilon_0 > 0$ unless $P = NP$.

Theorem: If $P \neq NP$, metric TSP admits no FPTAS. (In contrast, Euclidean TSP does admit a PTAS: Arora's algorithm, 1998.)

Proof: If metric TSP had an FPTAS, the Hamiltonian Cycle problem would be solvable in polynomial time. Hamiltonian Cycle is an NP-complete problem.

Construction: given a graph $G = (V, E)$ on $n$ vertices, build a complete graph with edge weights $d_{ij} = 1$ if $(i, j) \in E$, otherwise $d_{ij} = 2$ (such weights satisfy the triangle inequality). If $G$ has a Hamiltonian cycle $\rightarrow$ the TSP optimum $= n$. Otherwise every tour uses a weight-2 edge $\rightarrow$ the optimum $\geq n+1 = (1+1/n)\cdot n$. For $\varepsilon < 1/n$, a $(1+\varepsilon)$-approximation would distinguish the two cases; an FPTAS runs in time polynomial in $n$ and $1/\varepsilon$, so taking $\varepsilon = 1/(n+1)$ keeps it polynomial $\rightarrow$ Hamiltonian Cycle is solvable. Contradiction! (Ruling out a PTAS, where $\varepsilon$ is fixed, requires the stronger APX-hardness result.)
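The gap construction is easy to check on toy graphs. In this sketch `non_edge_weight` is a parameter: 2 keeps the instance metric, while larger values such as $\rho \cdot n + 1$ widen the gap enough to defeat any $\rho$-approximation. The brute-force optimum is only for verifying the gap on tiny inputs:

```python
from itertools import permutations

def tsp_instance_from_graph(n, edges, non_edge_weight=2):
    """Gap reduction from Hamiltonian Cycle: original edges cost 1,
    non-edges cost `non_edge_weight`. G has a Hamiltonian cycle
    iff the TSP optimum equals n."""
    E = {frozenset(e) for e in edges}
    d = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d[i][j] = d[j][i] = 1 if frozenset((i, j)) in E else non_edge_weight
    return d

def tsp_opt(d):
    """Brute-force TSP optimum (exponential; for checking the gap only)."""
    n = len(d)
    best = float("inf")
    for perm in permutations(range(1, n)):  # fix vertex 0 to kill symmetry
        tour = (0,) + perm
        best = min(best, sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n)))
    return best
```

A 4-cycle (which has a Hamiltonian cycle) yields optimum exactly 4, while a star on 4 vertices (which has none) yields optimum at least 5.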

Arora’s Algorithm for Euclidean TSP

Theorem (Arora, 1998): For $n$ points in the Euclidean plane, TSP has a PTAS with running time $O(n\log^{O(1/\varepsilon)} n)$.

Key Idea: Randomized recursive partitioning of the plane into squares (a quadtree), with a small set of “portals” on each square boundary. Restricting tours to cross square boundaries only at portals costs at most a $(1+\varepsilon)$ factor in expectation, and dynamic programming over the quadtree solves the portal-respecting TSP exactly.

PTAS for Scheduling Problems

$P||C_{max}$: $n$ jobs on $m$ machines. Minimize the makespan (the time when the last job finishes).

LPT (Longest Processing Time first): a $4/3$-approximation (precisely, $4/3 - 1/(3m)$, Graham 1969).

PTAS (Hochbaum-Shmoys, 1987; sketched here in simplified form): split the jobs into “long” (processing time $> \varepsilon \cdot OPT$) and “short” ones.

Since the total processing time is at most $m \cdot OPT$, there are at most $m/\varepsilon$ long jobs $\rightarrow$ exhaustive search over their assignments takes $O(m^{m/\varepsilon})$ time (exponential in $m/\varepsilon$, but independent of $n$, hence polynomial in $n$ for fixed $\varepsilon$ and $m$). The short jobs are then added greedily to the least-loaded machine, and each exceeds the optimum by at most $\varepsilon \cdot OPT$. Result: a $(1+\varepsilon)$-approximation in $O(n + m^{m/\varepsilon})$ time.
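The long/short scheme can be sketched as follows. Since $OPT$ is unknown, this sketch thresholds on the standard lower bound $LB = \max(p_{\max}, \sum p_j / m) \leq OPT$, which preserves both the count bound ($\leq m/\varepsilon$ long jobs) and the guarantee; everything else is an illustrative choice, not a fixed API:

```python
import heapq
from itertools import product

def ptas_makespan(jobs, m, eps):
    """(1+eps)-approximate P||C_max via the long/short-job scheme."""
    lb = max(max(jobs), sum(jobs) / m)        # LB <= OPT
    long_jobs = [p for p in jobs if p > eps * lb]
    short_jobs = [p for p in jobs if p <= eps * lb]

    best = float("inf")
    # Exhaustive search: at most m^(m/eps) assignments of the long jobs.
    for assign in product(range(m), repeat=len(long_jobs)):
        loads = [0.0] * m
        for p, machine in zip(long_jobs, assign):
            loads[machine] += p
        # Greedy list scheduling of short jobs onto the least-loaded machine.
        heap = loads[:]
        heapq.heapify(heap)
        for p in short_jobs:
            heapq.heapreplace(heap, heap[0] + p)  # pop min load, add job, push
        best = min(best, max(heap))
    return best
```

For very small $\varepsilon$ every job counts as long and the enumeration degenerates into exact (exponential) search, which mirrors the blow-up discussed below.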

Practical significance: for $m = 3$, $\varepsilon = 0.2$ the enumeration term is $3^{15} \approx 1.4 \cdot 10^7$: seconds. For $m = 5$, $\varepsilon = 0.1$ it is $5^{50} \approx 10^{35}$: unrealistic. And there is no FPTAS for $P||C_{max}$ with $m$ part of the input, since the problem is strongly NP-hard.

Idea of PTAS on the Knapsack Example

The PTAS for knapsack is based on rounding and dynamic programming:

  1. Round the values: $c_i' = \left\lfloor c_i / K \right\rfloor$ with suitable $K = \varepsilon \cdot c_{max}/n$
  2. Solve the rounded problem via DP by value (not by weight): each $c_i' \leq n/\varepsilon$, so the time is $O\left(n \cdot \sum c_i'\right) = O(n^3/\varepsilon)$
  3. Guarantee: $ALG \geq (1-\varepsilon) \cdot OPT$

This is an FPTAS: polynomial in both $n$ and $1/\varepsilon$.

Metric TSP and Christofides' Algorithm

Christofides’ Algorithm (1976) for metric TSP gives a 1.5-approximation:

  1. Build a minimum spanning tree $T$ (MST)
  2. Find a minimum perfect matching $M$ on vertices of odd degree in $T$
  3. Join $T \cup M$ into an Eulerian cycle
  4. Shortcut to a Hamiltonian cycle, skipping repeated vertices
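The four steps can be sketched end to end with the standard library. This is a compact illustration for small instances: the minimum perfect matching is done by brute-force recursion (a real implementation would use a polynomial matching algorithm), and a symmetric metric distance matrix is assumed:

```python
import heapq

def christofides(d):
    """Christofides' 1.5-approximation for metric TSP (distance matrix d)."""
    n = len(d)

    # 1. Minimum spanning tree (Prim with a heap; stale entries are skipped).
    in_tree = [False] * n
    in_tree[0] = True
    mst = []
    pq = [(d[0][j], 0, j) for j in range(1, n)]
    heapq.heapify(pq)
    while len(mst) < n - 1:
        w, i, j = heapq.heappop(pq)
        if in_tree[j]:
            continue
        in_tree[j] = True
        mst.append((i, j))
        for k in range(n):
            if not in_tree[k]:
                heapq.heappush(pq, (d[j][k], j, k))

    # 2. Minimum-weight perfect matching on odd-degree vertices (brute force).
    deg = [0] * n
    for i, j in mst:
        deg[i] += 1
        deg[j] += 1
    odd = [v for v in range(n) if deg[v] % 2 == 1]

    def min_matching(verts):
        if not verts:
            return 0.0, []
        v, rest = verts[0], verts[1:]
        best_w, best_m = float("inf"), None
        for idx, u in enumerate(rest):
            w, m = min_matching(rest[:idx] + rest[idx + 1:])
            if d[v][u] + w < best_w:
                best_w, best_m = d[v][u] + w, [(v, u)] + m
        return best_w, best_m

    _, matching = min_matching(odd)

    # 3. Euler circuit of the multigraph MST + matching (Hierholzer).
    adj = {v: [] for v in range(n)}
    for i, j in mst + matching:
        adj[i].append(j)
        adj[j].append(i)
    stack, tour = [0], []
    while stack:
        v = stack[-1]
        if adj[v]:
            u = adj[v].pop()
            adj[u].remove(v)
            stack.append(u)
        else:
            tour.append(stack.pop())

    # 4. Shortcut repeated vertices to a Hamiltonian cycle.
    seen, cycle = set(), []
    for v in tour:
        if v not in seen:
            seen.add(v)
            cycle.append(v)
    return cycle
```

The shortcutting in step 4 is exactly where the triangle inequality is used: skipping an already-visited vertex never makes the tour longer.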

For over four decades this was the best known guarantee. In 2020, Karlin, Klein, and Oveis Gharan improved it to $3/2 - 10^{-36}$: a tiny but landmark improvement.

Impossibility for General TSP

For non-metric TSP (without the triangle inequality), no constant-factor approximation exists if $P \neq NP$. Proof: the recognition problem “is there a Hamiltonian cycle” is NP-complete; setting edge weights to 1 and non-edge weights to $\rho \cdot n + 1$, any $\rho$-approximation algorithm would return a tour of cost $\leq \rho \cdot n$ exactly when a Hamiltonian cycle exists, and thus would solve it.

Applications

PTAS techniques are used in network design ($k$-MST, Steiner tree), scheduling (job shop scheduling), packing (Bin Packing has an APTAS, due to Fernandez de la Vega and Lueker), and geometric problems (Euclidean TSP via Arora's PTAS).
