Module II·Article I·~5 min read

Picard's Theorem: Existence and Uniqueness


Fundamental Question

Before solving a differential equation, one must ensure that a solution actually exists. And if it exists—is it unique? These questions are not merely academic: the answer determines whether it is even possible to predict the behavior of a system.

Let us consider a physical analogy. You describe the trajectory of a particle with the equation $x'' = F(x)/m$. If the velocity $x'$ at a given point in space is specified—is the future trajectory unique? If yes, then "Laplace determinism" holds: knowing the state of the system at time $t_0$, one can uniquely reconstruct its past and predict its future. If not—the system is fundamentally unpredictable, even aside from quantum uncertainty.

Lipschitz Condition

Ordinary continuity of $f(x, y)$ turns out to be insufficient for uniqueness. A stronger condition is needed.

A function $f(x, y)$ satisfies the Lipschitz condition in $y$ in a domain $D$ with constant $L$ if for all $(x, y_1)$, $(x, y_2) \in D$:

$|f(x, y_1) - f(x, y_2)| \leq L|y_1 - y_2|.$

Meaning: no matter how close the two values of $y$ are, the difference in the values of $f$ is "controlled" by a linear function of the distance between them. This forbids the function from changing too quickly under small changes in $y$.

How to check: if $\partial f/\partial y$ is bounded in $D$ ($|\partial f/\partial y| \leq L$), then the Lipschitz condition is met—this follows from the Mean Value Theorem (Lagrange's theorem).
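For instance (a sketch, with $f(x, y) = \sin y$ chosen purely for illustration), the bound $|\partial f/\partial y| = |\cos y| \leq 1$ gives $L = 1$, and random sampling confirms the inequality:

```python
import math
import random

# f(x, y) = sin(y): |df/dy| = |cos(y)| <= 1 everywhere, so L = 1
# should be a Lipschitz constant in y on any domain
L = 1.0
random.seed(0)
for _ in range(10_000):
    y1 = random.uniform(-10, 10)
    y2 = random.uniform(-10, 10)
    # small epsilon absorbs floating-point rounding
    assert abs(math.sin(y1) - math.sin(y2)) <= L * abs(y1 - y2) + 1e-12
print("Lipschitz bound with L = 1 holds on all sampled pairs")
```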

Why $y' = y^{1/3}$ violates the condition: $\partial f/\partial y = (1/3)y^{-2/3} \to \infty$ as $y \to 0$. Indeed, infinitely many solutions pass through the point $(0, 0)$: $y = 0$, $y = \pm(2x/3)^{3/2}$ for $x \geq 0$, and any hybrid that stays at zero up to some point $c \geq 0$ and then follows the branch $y = \pm(2(x - c)/3)^{3/2}$.
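As a quick numerical check (a sketch; the names `rhs` and `y_branch` and the finite-difference step are illustrative), the nonzero branch really does satisfy the equation through the origin, alongside the trivial solution $y = 0$:

```python
def rhs(y):
    """f(y) = y^(1/3), extended as an odd function for negative y."""
    return y ** (1 / 3) if y >= 0 else -((-y) ** (1 / 3))

def y_branch(x):
    """The nonzero solution y = (2x/3)^(3/2) through (0, 0), for x >= 0."""
    return (2 * x / 3) ** 1.5

h = 1e-6
for x in (0.5, 1.0, 2.0):
    # central-difference derivative of the branch vs. the right-hand side
    dydx = (y_branch(x + h) - y_branch(x - h)) / (2 * h)
    assert abs(dydx - rhs(y_branch(x))) < 1e-5
# y = 0 is also a solution: its derivative 0 equals rhs(0) = 0
print("y = 0 and y = (2x/3)^(3/2) both solve y' = y^(1/3) through (0, 0)")
```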

Picard–Lindelöf Theorem

Statement: Let $f(x, y)$ be continuous in the rectangle $R = \{|x - x_0| \leq a,\ |y - y_0| \leq b\}$ and satisfy the Lipschitz condition in $y$ with constant $L$; let $M = \max_R |f|$. Then the Cauchy problem $y' = f(x, y)$, $y(x_0) = y_0$ has a unique solution on the interval $|x - x_0| \leq h$, where $h = \min(a, b/M)$.

The restriction $h = \min(a, b/M)$ has a simple geometric meaning: the solution should not "leave" the bounds of $R$ before we reach the end of the interval in $x$.

Picard's Method of Successive Approximations

The constructive proof provides an algorithm for finding the solution. We build a sequence of functions:

$y_0(x) = y_0$ (constant—the initial value),

$y_n(x) = y_0 + \int_{x_0}^{x} f(t, y_{n-1}(t)) dt$.

This is an iterative process: substitute the current approximation $y_{n-1}$ into the right-hand side of the equation, integrate—to get the next approximation $y_n$. Each iteration "accounts for" the system's dynamics more precisely than the previous one.

Expanded example: $y' = y$, $y(0) = 1$.

$y_0(x) = 1$.

$y_1(x) = 1 + \int_0^x y_0(t) dt = 1 + \int_0^x 1 dt = 1 + x$.

$y_2(x) = 1 + \int_0^x y_1(t) dt = 1 + \int_0^x (1 + t) dt = 1 + x + x^2/2$.

$y_3(x) = 1 + \int_0^x y_2(t) dt = 1 + x + x^2/2 + x^3/6$.

It is evident that $y_n(x) = \sum_{k=0}^n x^k/k!$—the partial sum of the Taylor series for $e^x$. In the limit: $y(x) = e^x$—the exact solution. ✓
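The iterates above can be computed exactly in rational arithmetic by representing each $y_n$ as a list of polynomial coefficients (a sketch specialized to $y' = y$, $y(0) = 1$; the function name `picard_step` and the coefficient-list representation are illustrative):

```python
from fractions import Fraction

def picard_step(coeffs):
    """One Picard iteration for y' = y, y(0) = 1:
    y_n(x) = 1 + integral_0^x y_{n-1}(t) dt.
    coeffs[k] is the coefficient of x^k in y_{n-1}."""
    # term-by-term integration: c_k * t^k -> c_k * t^{k+1} / (k + 1),
    # then the initial value 1 becomes the constant term
    return [Fraction(1)] + [c / (k + 1) for k, c in enumerate(coeffs)]

y = [Fraction(1)]          # y_0(x) = 1
for _ in range(3):
    y = picard_step(y)

# y_3(x) = 1 + x + x^2/2 + x^3/6, the partial sum of the series for e^x
print(y)  # → [Fraction(1, 1), Fraction(1, 1), Fraction(1, 2), Fraction(1, 6)]
```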

Estimate of convergence rate: $|y_n - y| \leq ML^n h^{n+1}/(n+1)! \to 0$. Thanks to the factorial in the denominator, the convergence is faster than any geometric progression, so the method converges quickly.

Theorem on the Maximal Interval of Existence

A solution to the Cauchy problem exists on some maximal interval $(\alpha, \beta)$. As $x \to \beta^{-}$ (or $x \to \alpha^+$), one of two things happens: either the point $(x, y(x))$ approaches the boundary of the domain of $f$, or $|y(x)| \to \infty$ (blow-up in finite time).

Example of blow-up: $y' = y^2$, $y(0) = 1$. Solution: $y = 1/(1 - x)$. As $x \to 1^{-}$ the solution tends to $+\infty$. Maximal interval: $(-\infty, 1)$. Beyond $x = 1$ the solution does not exist.
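A minimal numerical sketch (explicit Euler; the function `euler` and the step counts are illustrative choices) shows the solution tracking $1/(1 - x)$ and exploding as $x$ approaches 1:

```python
def euler(f, y0, x_end, n):
    """Explicit Euler for y' = f(y), y(0) = y0, on [0, x_end] with n steps."""
    h = x_end / n
    y = y0
    for _ in range(n):
        y += h * f(y)
    return y

# y' = y^2, y(0) = 1; exact solution y = 1/(1 - x), blow-up at x = 1
for x_end in (0.5, 0.9, 0.99):
    approx = euler(lambda y: y * y, 1.0, x_end, 200_000)
    print(f"x = {x_end}: euler ~ {approx:.2f}, exact = {1 / (1 - x_end):.2f}")
```

Past $x = 1$ no amount of step-size refinement helps: the exact solution simply does not exist there.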

Physically, this means: quadratic nonlinearity can lead to "catastrophic" growth in finite time. This is fundamentally different from linear equations, for which the solution is always global.

Example of global existence: Linear equation $y' = p(x)y + q(x)$ with continuous $p, q$—the solution exists on the whole real line.

Application: Predictability of Physical Systems

Picard's theorem is the mathematical foundation for determinism in classical mechanics. If the equations of motion satisfy the Lipschitz condition, then the initial conditions uniquely determine the entire trajectory. This is precisely why celestial mechanics successfully predicts the motion of planets thousands of years ahead—Newton's equations for gravity satisfy the Lipschitz condition in regions far from collisions.

Question for thought: Picard's method gives a convergent sequence of approximations, but to find each approximation one needs to compute an integral. How often is this practically feasible? In what cases is Picard's method preferable to numerical methods?

Picard's Theorem and the Size of the Domain of Existence

The theorem guarantees a unique solution on the interval $|x - x_0| \leq h$, where $h = \min(a, b/M)$, with $a$ the half-width of the rectangle, $b$ its half-height, and $M = \max_R |f|$. This is a local guarantee.

Global existence: if $|f(x, y)| \leq A + B|y|$ (linear growth in $y$), the solution exists on the entire interval, a consequence of Gronwall's lemma. If $|f|$ grows faster than linearly, the solution can escape to infinity in finite time ("blow-up"), as in the example $y' = y^2$, $y(0) = 1$ with solution $y = 1/(1 - x)$ and blow-up at $x = 1$.
