Complexity Theory: P vs NP
Motivation: Not Only Computable, but Also Fast
The halting problem is undecidable altogether. But most practical problems are computable—the question is, how fast? Complexity theory studies the resources (time, memory) required by algorithms. The question P = NP? is the most important open question in computer science: do the problems whose solutions are found quickly coincide with those whose solutions are quickly verified?
Complexity Classes
P (Polynomial Time): L ∈ P if there exists a deterministic Turing machine M and a polynomial p such that: M decides L in time O(p(|w|)) for any input w.
Examples in P: array sorting (O(n log n)), shortest-path search (Dijkstra's algorithm, O(n²) with a simple array), primality testing (AKS algorithm, 2002, Õ((log n)^{12})).
NP (Nondeterministic Polynomial Time): L ∈ NP if there exists a polynomial-time deterministic Turing machine V (a "verifier") such that: w ∈ L ⟺ there exists a "certificate" c with |c| ≤ poly(|w|) for which V(w, c) = "accept".
Examples in NP: SAT (certificate = satisfying assignment), traveling salesman problem (certificate = route), graph coloring in k colors (certificate = coloring).
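The SAT case above can be made concrete. Below is a minimal sketch of a polynomial-time verifier for SAT (the encoding and names are our own, not from the article): a CNF formula is a list of clauses, each clause a list of nonzero integers, where k stands for variable k and -k for its negation; the certificate is an assignment of booleans to variables.

```python
def verify_sat(cnf, assignment):
    """Check in time linear in the formula size that `assignment`
    (a dict: variable number -> bool) satisfies every clause of `cnf`."""
    for clause in cnf:
        # a clause is satisfied iff at least one of its literals is true
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False
    return True

# (x1 or not x2) and (x2 or x3)
cnf = [[1, -2], [2, 3]]
certificate = {1: True, 2: True, 3: False}
print(verify_sat(cnf, certificate))  # True
```

Note that the verifier never searches for an assignment; it only checks one, which is exactly what membership in NP requires.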
P ⊆ NP: if a problem can be solved quickly, it can also be verified quickly (certificate = the solution itself; verifier = the solving algorithm). Whether the converse holds is unknown; that is exactly the question P = NP.
NP-Completeness
NP-hard problem L: any L' ∈ NP reduces to L in polynomial time (L' ≤ₚ L).
NP-complete problem: L ∈ NP and L is NP-hard.
Cook–Levin Theorem (Cook 1971, Levin 1973): SAT is NP-complete; this was the first problem proven NP-complete. Idea: for any nondeterministic TM M running in polynomial time and any input w, construct a CNF formula φ_{M,w} such that φ_{M,w} is satisfiable ⟺ M accepts w.
Through reductions from SAT, NP-completeness has been proven for: 3-SAT, CLIQUE, independent set, Hamiltonian path, traveling salesman problem, integer programming, ...
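The reductions listed above go from SAT outward. As an illustration of how mechanically such translations work, here is a sketch of the reverse reduction 3-COLOR ≤ₚ SAT (the variable numbering and encoding are our own choices, not a canonical construction): "graph G is 3-colorable" becomes a CNF formula of size polynomial in |G|.

```python
def three_color_to_sat(n_vertices, edges):
    """Encode 3-colorability of a graph as a CNF formula (list of clauses).
    Variable number 3*v + c + 1 means "vertex v receives color c"."""
    def var(v, c):
        return 3 * v + c + 1

    cnf = []
    for v in range(n_vertices):
        cnf.append([var(v, 0), var(v, 1), var(v, 2)])   # v gets some color
        for c1 in range(3):                             # ...and at most one
            for c2 in range(c1 + 1, 3):
                cnf.append([-var(v, c1), -var(v, c2)])
    for u, v in edges:                                  # endpoints differ
        for c in range(3):
            cnf.append([-var(u, c), -var(v, c)])
    return cnf

# Triangle on vertices 0, 1, 2: 3*(1 + 3) vertex clauses + 3*3 edge clauses
cnf = three_color_to_sat(3, [(0, 1), (1, 2), (0, 2)])
print(len(cnf))  # 21
```

The formula size is O(n + m), so the reduction runs in polynomial time, which is the defining property of ≤ₚ.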
The P ≠ NP Problem
Community consensus: P ≠ NP, but there is no proof. If P = NP:
- RSA and elliptic-curve cryptography are broken (the factoring problem, in decision form, lies in NP).
- Automatic theorem proving: finding a proof becomes no harder (up to a polynomial) than checking one.
- Optimization: thousands of NP problems are solvable in polynomial time.
Numerical Example
Problem: Verify that 3-COLORABILITY (graph coloring in 3 colors) ∈ NP, and find a coloring for the specific graph K₄ (or explain why none exists).
Step 1. K₄ is the complete graph on 4 vertices (all 4·3/2 = 6 edges). Chromatic number χ(K₄) = 4 (each pair is adjacent—4 different colors are needed). K₄ cannot be colored in 3 colors.
Step 2. The graph K₄ − e: remove the edge (1,4) from K₄. Can it be colored in 3 colors? Vertices 1 and 2 are adjacent: different colors, say 1→A, 2→B. Vertex 3 is adjacent to 1 (A) and to 2 (B): 3→C. Vertex 4 is adjacent to 2 (B) and 3 (C), but not to 1 (A): 4→A. Coloring: 1→A, 2→B, 3→C, 4→A. 3-colorable ✓.
Step 3. Verification—verifier V: given graph G and coloring c (certificate). Check that for each edge (u,v): c(u) ≠ c(v). In O(m) steps (m is the number of edges) → V works in polynomial time → 3-COLORABILITY ∈ NP ✓.
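The whole worked example can be checked mechanically. Below is a sketch in Python (graph encodings and names are ours, not from the article): the verifier V from Step 3, plus an exponential brute-force search that confirms Step 1's claim that K₄ itself admits no 3-coloring while K₄ − e does.

```python
from itertools import product

def verify_3_coloring(edges, coloring):
    """The verifier V: check in O(m) steps that no edge joins two
    vertices of the same color and that at most 3 colors are used."""
    if len(set(coloring.values())) > 3:
        return False
    return all(coloring[u] != coloring[v] for u, v in edges)

def has_3_coloring(edges, n):
    """Brute force over all 3^n assignments (exponential, unlike V)."""
    return any(
        verify_3_coloring(edges, dict(zip(range(1, n + 1), colors)))
        for colors in product("ABC", repeat=n)
    )

k4 = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
k4_minus_e = [e for e in k4 if e != (1, 4)]  # remove the edge (1, 4)

print(has_3_coloring(k4, 4))  # False: chi(K4) = 4, as in Step 1
print(verify_3_coloring(k4_minus_e, {1: "A", 2: "B", 3: "C", 4: "A"}))  # True
```

The contrast between the two functions is the point of the section: verifying a certificate is cheap, while the only obvious way to find one examines exponentially many candidates.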
Real-World Application
Exam scheduling: building a conflict-free schedule (two exams conflict when some student must sit both) is exactly a coloring of the conflict graph. The problem is NP-complete; in practice, heuristics are used (genetic algorithms, simulated annealing).
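To make the scheduling picture concrete, here is a sketch of one simple heuristic, greedy coloring by decreasing degree (this is our illustrative choice, simpler than the genetic and annealing methods the article names, and with no optimality guarantee, as expected for an NP-complete problem). The exam names and conflicts are hypothetical.

```python
def greedy_coloring(adjacency):
    """adjacency: dict vertex -> set of conflicting vertices.
    Process vertices by decreasing degree; give each the smallest
    color (time slot) not used by an already-colored neighbor."""
    order = sorted(adjacency, key=lambda v: len(adjacency[v]), reverse=True)
    color = {}
    for v in order:
        used = {color[u] for u in adjacency[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Hypothetical conflict graph: exams sharing a student need different slots
conflicts = {
    "math": {"physics", "cs"},
    "physics": {"math", "chemistry"},
    "cs": {"math"},
    "chemistry": {"physics"},
}
slots = greedy_coloring(conflicts)
# every conflicting pair lands in different slots
print(all(slots[u] != slots[v] for u in conflicts for v in conflicts[u]))  # True
```

Greedy runs in polynomial time but can use far more colors than the chromatic number on adversarial graphs, which is exactly the trade-off the paragraph above describes.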
Additional Aspects
The P ≠ NP hypothesis is the central open problem of theoretical computer science and one of the seven Millennium Prize Problems, with a $1 million reward. Polynomial algorithms exist for many basic problems (sorting, shortest paths, minimum spanning tree, linear programming). NP-complete problems (3-SAT, 3-COLOR, Hamiltonian Cycle, TSP in decision form) are polynomially reducible to one another, so an efficient solution to any one would give an efficient solution to all. In practice, P ≠ NP is assumed, and approximation schemes (PTAS, FPTAS) or heuristics (metaheuristics, ML-driven solvers) are used. Public-key cryptography (RSA, ECDSA) relies on the assumption that certain problems (factoring, discrete logarithm) are hard; if P = NP, modern digital security would collapse.
Connection with Other Areas of Mathematics
The classes P and NP intersect naturally with algebra and combinatorics: for example, the Grötschel–Lovász–Schrijver results give polynomial algorithms for coloring and maximum clique on perfect graphs, a special class where these otherwise NP-hard problems become tractable. Algebraic geometry enters through the study of the complexity of solving systems of polynomial equations over various fields; NP-complete problems can be encoded as such systems, and geometric invariants reflect the "hardness" of the problem.
Probability theory appears in randomized algorithms and the classes BPP, RP, and ZPP, introduced in the work of Gill; the foundations of algorithmic randomness go back to Kolmogorov and Levin. The Solovay–Strassen and Miller–Rabin probabilistic primality tests anticipated the deterministic AKS algorithm. The modern theory of pseudorandomness and hash functions relies on hardness hypotheses related to P ≠ NP, linking computational complexity with probabilistic inequalities and concentration of measure.
Differential equations and numerical methods connect through the complexity analysis of solving ODEs and PDEs: the information-based complexity program of Traub and Woźniakowski showed that even approximate solution of equations can have cost exponential in the input size. In optimization, the boundary between efficient (P) and potentially hard (NP-complete) problems runs between linear programming, for which Khachiyan's ellipsoid algorithm and Karmarkar's interior-point method give polynomial time, and integer or general nonlinear programming, which the optimization-complexity literature describes as fundamentally linked with NP-completeness.
Historical Reference and Development of the Idea
The roots of modern complexity theory go back to the works of Turing (1936) and Post, who formulated the notion of algorithmic computability. Hartmanis and Stearns, in a 1965 paper in Transactions of the AMS, introduced complexity classes by time and memory and proved the first hierarchy theorems. Stephen Cook, in his 1971 paper "The complexity of theorem-proving procedures" at the ACM Symposium on Theory of Computing, introduced the class NP and proved the completeness theorem for SAT; Leonid Levin independently published similar ideas in the USSR in 1973. In 1972, Richard Karp's famous article listed 21 NP-complete problems, systematizing the technique of polynomial reductions. Garey and Johnson's fundamental handbook "Computers and Intractability" (1979) became the standard reference. By the turn of the 1980s and 1990s, the P vs NP problem had become central in theoretical computer science. The work of Arora, Safra, Feige, and others led to the PCP theorem, linking NP-completeness with hardness of approximation. In 2000, the Clay Mathematics Institute included P vs NP on the list of Millennium Problems.