Löwenheim–Skolem Theorem and Its Paradoxes
Model Theory
Löwenheim–Skolem Theorem
Motivation: FOL Cannot “Fix” Cardinality
The Löwenheim–Skolem theorem is the first profound result of model theory. It shows that a first-order theory does not control the cardinality of its infinite models: a theory (in a countable language) with an infinite model has both countable and uncountable models. Hence the “Skolem paradox” — a countable model of ZFC contains sets that are “uncountable” from the model’s own point of view.
The Löwenheim–Skolem Theorems
Downward (Löwenheim 1915, Skolem 1920): If T is a theory in a countable language and has an infinite model, then T has a countable model.
Upward (Tarski–Vaught): If T has an infinite model of cardinality $\kappa$, then T has a model of cardinality $\lambda$ for any $\lambda \geq \max(\kappa, |\sigma|, \aleph_0)$.
Combined version: If T has a model of cardinality $\kappa \geq |T|$, then T has a model of cardinality $\lambda$ for any $\lambda \geq |T|$.
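The upward direction follows from compactness together with the downward theorem. A standard proof sketch, written out as a derivation (the constants $c_\alpha$ are fresh symbols introduced for the argument):

```latex
% Sketch: upward Löwenheim–Skolem from compactness + downward LS.
% Goal: given an infinite model of T, produce a model of cardinality \lambda.
%
% 1. Add \lambda fresh constant symbols and distinctness axioms:
%      T' = T \cup \{\, c_\alpha \neq c_\beta \mid \alpha < \beta < \lambda \,\}
%
% 2. Any finite subset of T' mentions only finitely many c_\alpha;
%    the infinite model of T interprets them by distinct elements,
%    so T' is finitely satisfiable.
%
% 3. By compactness, T' has a model M; the c_\alpha force |M| \geq \lambda.
%
% 4. Apply the downward theorem to M to cut it to cardinality exactly \lambda.
```

Step 2 is where the infinitude of the original model is used: a finite model could not interpret arbitrarily many constants distinctly.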
The Skolem Paradox
ZFC is an axiomatic set theory. In ZFC, the existence of uncountable sets is provable (Cantor’s theorem: $|P(\mathbb{N})| > |\mathbb{N}|$). Yet, by the Löwenheim–Skolem theorem, if ZFC is consistent it has a countable model $M$!
Paradox: $M$ is countable, but does $M$ “contain” uncountable sets?
Resolution: Uncountability is a relative concept. Within $M$, there is no bijection (inside $M$) between an “uncountable” set $X_M$ and $\omega_M$. But an external observer sees that $M$ itself is countable. “Uncountability of $X_M$” is the absence of a bijection within the model $M$, not the absence of a bijection in the real world.
This illustrates the limitation of FOL: “uncountability” is not absolute, but relative to the model.
Isomorphism and Elementary Equivalence
Isomorphism $M \cong N$: there exists a bijection $f: M \to N$ preserving all predicates and functions. Isomorphic structures are indistinguishable.
Elementary equivalence $M \equiv N$: $M$ and $N$ satisfy the same FOL sentences.
Isomorphism $\to$ elementary equivalence. The converse is not true: $(\mathbb{Q}, <) \equiv (\mathbb{R}, <)$ (both are dense linear orders without endpoints), but $(\mathbb{Q}, <) \not\cong (\mathbb{R}, <)$ (different cardinalities).
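Elementary equivalence of dense linear orders can be made concrete via Ehrenfeucht–Fraïssé games: Duplicator wins every round by answering in the “same” interval, and density plus the absence of endpoints guarantee an answer always exists. A minimal sketch of Duplicator’s strategy, assuming both orders are modeled inside $\mathbb{Q}$ with `Fraction` (the function names are illustrative, not from any library):

```python
from fractions import Fraction
import random

def respond(pairs, x):
    """Duplicator's strategy in the EF game on two dense linear orders
    without endpoints. pairs: already-played (a, b) pairs; x: Spoiler's
    new point in structure A. Returns y in B keeping the map order-preserving."""
    for a, b in pairs:
        if a == x:                        # repeated point: repeat the answer
            return b
    below = [b for a, b in pairs if a < x]
    above = [b for a, b in pairs if a > x]
    if not below and not above:
        return Fraction(0)
    if not below:
        return min(above) - 1             # "no endpoints": something smaller exists
    if not above:
        return max(below) + 1             # "no endpoints": something larger exists
    return (max(below) + min(above)) / 2  # "density": something in between exists

# Play a random 10-round game: Spoiler plays arbitrary rationals in A,
# Duplicator answers in B; the partial map stays order-preserving.
random.seed(1)
pairs = []
for _ in range(10):
    x = Fraction(random.randint(-100, 100), random.randint(1, 10))
    pairs.append((x, respond(pairs, x)))
ok = all((a1 < a2) == (b1 < b2) for a1, b1 in pairs for a2, b2 in pairs)
print(ok)  # True: the map is order-preserving after every round
```

Since the strategy never fails regardless of Spoiler’s moves, Duplicator wins the $n$-round game for every $n$, which is exactly elementary equivalence for these structures.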
Worked Example
Problem: Show that $(\mathbb{Q}, <)$ and $(\mathbb{R}, <)$ are elementarily equivalent, and exhibit an explicit FOL sentence true in both.
Step 1. The theory of dense linear order without endpoints (DLO): axioms
- Linearity: $\forall x \forall y\ (x < y \vee x = y \vee y < x)$
- Transitivity: $\forall x \forall y \forall z\ (x < y \wedge y < z \rightarrow x < z)$
- Density: $\forall x \forall y\ (x < y \rightarrow \exists z\ (x < z \wedge z < y))$
- No endpoints: $\forall x\ (\exists y\ (y < x) \wedge \exists y\ (x < y))$
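The density and no-endpoints axioms hold in $\mathbb{Q}$ constructively: explicit Skolem functions (a small illustration, not a formal proof) produce the required witnesses $z$ and $y$:

```python
from fractions import Fraction

# Explicit Skolem functions witnessing the DLO axioms in (Q, <):
def between(x, y):
    """Density witness: a point z with x < z < y."""
    assert x < y
    return (x + y) / 2

def smaller(x):
    """No left endpoint: a point y with y < x."""
    return x - 1

def larger(x):
    """No right endpoint: a point y with x < y."""
    return x + 1

x, y = Fraction(1, 3), Fraction(1, 2)
z = between(x, y)
print(x < z < y, smaller(x) < x, x < larger(x))  # True True True
```

Replacing existential quantifiers by such witness functions is exactly Skolemization, the technique behind Skolem’s proof of the downward theorem.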
Step 2. Both $\mathbb{Q}$ and $\mathbb{R}$ are models of DLO. Theorem (Cantor’s back-and-forth argument plus the Łoś–Vaught test): DLO is $\aleph_0$-categorical and has no finite models, hence complete: every FOL sentence is either derivable from DLO or its negation is. Therefore, all models of DLO are elementarily equivalent: $\mathbb{Q} \equiv \mathbb{R}$ ✓.
Step 3. Example of a sentence: $\forall x \forall y\ (x < y \rightarrow \exists z(x < z \wedge z < y))$ — “density.” True in both $\mathbb{Q}$ and $\mathbb{R}$ ✓.
Step 4. A sentence distinguishing $\mathbb{Q}$ from $\mathbb{R}$ is second-order: “every nonempty bounded set has a least upper bound” (the completeness of $\mathbb{R}$). This property quantifies over arbitrary sets $\rightarrow$ cannot be expressed in FOL.
Conclusion: $\mathbb{Q}$ and $\mathbb{R}$ are not isomorphic ($|\mathbb{Q}| = \aleph_0$, $|\mathbb{R}| = 2^{\aleph_0}$) but are elementarily equivalent (first-order logic does not distinguish them).
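The cardinality gap in this conclusion rests on Cantor’s diagonal argument. A short sketch (names illustrative): any alleged enumeration of all 0/1-sequences misses the diagonal sequence.

```python
def diagonal(enum):
    """Given enum: n -> (k -> bit), an alleged enumeration of all 0/1
    sequences, return a sequence that differs from the n-th one at index n."""
    return lambda k: 1 - enum(k)(k)

# Example enumeration: the n-th sequence is the binary expansion of n.
enum = lambda n: (lambda k: (n >> k) & 1)
d = diagonal(enum)

# d differs from every listed sequence at the diagonal position:
print(all(d(n) != enum(n)(n) for n in range(1000)))  # True
```

Since $P(\mathbb{N})$ corresponds to the set of all 0/1-sequences, no enumeration lists them all: $|P(\mathbb{N})| > |\mathbb{N}|$, hence $|\mathbb{R}| > |\mathbb{Q}|$.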
Real-World Application
Database theory: elementary equivalence means that two different data stores (e.g., different physical implementations of the same schema) answer all queries expressible in first-order relational algebra in the same way. This underlies query optimization via “logically equivalent” execution plans.
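A toy illustration of this point (the relation and all names are hypothetical): the same binary relation stored in two different physical layouts answers a first-order query identically.

```python
# One relation Edge(x, y), two physical representations.
edges = [(1, 2), (2, 3), (3, 1), (2, 4)]
as_list = list(edges)          # store 1: a flat list of tuples
as_adj = {}                    # store 2: an adjacency dictionary
for x, y in edges:
    as_adj.setdefault(x, set()).add(y)

# FO query: { (x, y) | exists z. Edge(x, z) and Edge(z, y) }
def query_list(rel):
    return {(x, y) for (x, z1) in rel for (z2, y) in rel if z1 == z2}

def query_adj(adj):
    return {(x, y) for x, zs in adj.items() for z in zs for y in adj.get(z, ())}

print(query_list(as_list) == query_adj(as_adj))  # True: same answers
```

The query plans differ (nested loop join vs. index lookup), but because both stores represent elementarily equivalent structures for this language, every first-order query returns the same result.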
Additional Aspects
Model theory studies the relationship between syntax (formal theories) and semantics (structures in which these theories are true). Basic theorems: compactness (a theory is satisfiable if and only if every finite subset of it is satisfiable), Löwenheim–Skolem (existence of models of all infinite cardinalities), and results on elementary equivalence. A major later development was Shelah’s work on the classification of theories and stability theory. O-minimal theories have found application in real algebraic geometry and number theory (the proof of the André–Oort conjecture for arithmetic varieties via o-minimality). In computer science, model theory underlies database query languages and SMT solvers, where the task is to find a model of a formula in a suitable theory (arithmetic, arrays, bit vectors).
Model theory unites logic, algebra, and geometry: quantifier elimination in algebraically closed fields yields algorithmic algebraic geometry, o-minimal structures lead to proofs of deep arithmetic conjectures, and compactness equips SMT checkers with reasoning tools for infinite domains.
Connection with Other Areas of Mathematics
The Löwenheim–Skolem spectrum of cardinalities is closely related to algebra via universal algebra and field theory. The existence of elementary substructures (proven using the downward Löwenheim–Skolem theorem), together with the ultraproduct construction and Łoś’s theorem on the elementarity of ultraproducts, provides the central tools. The latter, in turn, is used in proving the Ax–Kochen theorem comparing p-adic fields with fields of formal power series, and in non-Archimedean geometry.
In field theory, the idea of “transfer” between models of different cardinalities appears in Tarski’s work on the decidability of the theory of real closed fields and in the development of o-minimal structures (Pila, Wilkie), where solutions to differential and Diophantine problems are interpreted in suitable models. Here, Löwenheim–Skolem guarantees the existence of models of the required size, in which one can control types of solutions.
In topology, model-theoretic tools based on the Löwenheim–Skolem and compactness theorems are used in Robinson’s nonstandard analysis: to each structure one associates an elementary extension with hyperpoints, and the notions of limit and continuity are expressed via the transfer principle. Similar ideas are behind model-theoretic approaches to probabilistic structures: for example, in the theory of random graphs (work of Shelah and Spencer), model-theoretic constructions allow the building of countable and uncountable random objects with identical elementary properties.
In numerical methods, the results of Löwenheim–Skolem appear indirectly through the soundness of logic-oriented algorithms: SMT solvers use the fact that the existence of an infinite model of a certain size can be “compressed” to a more manageable, often countable, model, where the satisfiability of a system of constraints is checked.
Historical Note and Development of the Idea
The first form of the downward theorem was published by Leopold Löwenheim in 1915 in Mathematische Annalen, working in the Peirce–Schröder calculus of relatives and seeking to understand which theories admit countable interpretations. Thoralf Skolem simplified and extended the proof in the 1920s, placed emphasis on the application to Zermelo set theory, and formulated the paradoxical situation of a countable model containing “uncountable” sets. The upward variant, connecting the existence of models of all larger cardinalities, was systematized by Alfred Tarski and his student Robert Vaught in the 1940s–1950s in the context of the general theory of models. Together with the compactness theorem (proved by Kurt Gödel in 1930 and extended to uncountable languages by Maltsev), these results became the starting point for Shelah’s classification theory in the 1960s–1970s.