Peano's Theorem and the Non-Lipschitz Case
Existence Without Uniqueness
Picard's theorem requires the Lipschitz condition. What if this requirement is violated? Peano's theorem answers the question of existence in a more general case.
Peano's Theorem: If $f(x, y)$ is continuous in the rectangle $R = \{|x − x_0| \leq a,\ |y − y_0| \leq b\}$, then the Cauchy problem $y' = f(x, y),\ y(x_0) = y_0$ has at least one solution on $|x − x_0| \leq h = \min(a, b/M)$, where $M = \max_R |f(x, y)|$.
The key difference from Picard's theorem: only existence is asserted, but not uniqueness. Without the Lipschitz condition, several solutions may pass through a single point.
The proof uses the Arzelà–Ascoli theorem: the Euler polygonal approximations form a uniformly bounded, equicontinuous family, so a uniformly convergent subsequence can be extracted (compactness in the space of continuous functions), and its limit solves the equation.
Examples of Non-uniqueness
Example 1: $y' = 2\sqrt{|y|},\ y(0) = 0$.
The right-hand side $f = 2\sqrt{|y|}$ is continuous, but $\partial f / \partial y = 1/\sqrt{|y|} \to \infty$ as $y \to 0$—the Lipschitz condition is violated.
Solutions to the Cauchy problem with $y(0) = 0$:
- $y \equiv 0$ (the zero solution),
- $y = (x − a)^2$ for $x > a$ and $y = 0$ for $x \leq a$ for any $a \geq 0$ (infinitely many solutions!).
Thus, the origin hosts a whole one-parameter family of different solutions. The initial condition does not determine the future trajectory.
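A quick numerical sketch (illustrative function names and step size, not from the text) makes the non-uniqueness tangible: both the zero solution and a shifted parabola satisfy the equation, and forward Euler started exactly at $y(0) = 0$ silently picks the zero branch:

```python
import math

def f(y):
    # Right-hand side of y' = 2*sqrt(|y|)
    return 2.0 * math.sqrt(abs(y))

# Two exact solutions through (0, 0); the shift a >= 0 is a free parameter:
def y_zero(x):
    return 0.0

def y_shifted(x, a=1.0):
    return (x - a) ** 2 if x > a else 0.0

# Both satisfy the ODE (checked via a centered finite-difference derivative):
eps = 1e-6
for x in (0.5, 1.5, 2.0, 3.0):
    for sol in (y_zero, y_shifted):
        dy = (sol(x + eps) - sol(x - eps)) / (2 * eps)
        assert abs(dy - f(sol(x))) < 1e-4

# Forward Euler started exactly at y(0) = 0 stays on the zero solution:
y, h = 0.0, 0.01
for _ in range(300):
    y += h * f(y)
print(y)  # 0.0: the method silently picks one of infinitely many solutions
```

Which of the infinitely many solutions a numerical method "chooses" here is an artifact of the discretization, not a property of the problem.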
Example 2 (Finite Time Blowup): $y' = y^2,\ y(0) = 1$.
Solution: $y(t) = 1 / (1 - t)$. As $t \to 1^-$ the solution tends to $+\infty$: a "finite-time blowup", typical behavior for quadratically nonlinear equations. Note that here the right-hand side is locally Lipschitz, so the solution is unique; what fails instead is global existence: no solution extends past $t = 1$.
Physical meaning: An equation of the form $\dot{y} = y^2$ models a self-accelerating process. For example, if the speed of fire spread is proportional to the square of the burned area, it will "explode" in finite time. This is important when modeling chain reactions, avalanche processes, plasmas in thermonuclear reactors.
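A minimal forward-Euler sketch (the step size is an illustrative choice) shows the blowup numerically: the computed solution tracks $1/(1-t)$ and grows rapidly as $t \to 1$:

```python
# Forward Euler for y' = y^2, y(0) = 1. The exact solution
# y(t) = 1/(1 - t) blows up at t = 1; the numerical solution
# grows without bound as well.
def exact(t):
    return 1.0 / (1.0 - t)

h, t, y = 1e-4, 0.0, 1.0
for _ in range(9900):          # integrate up to t = 0.99
    y += h * y * y
    t += h
print(y, exact(t))  # both large and still growing near the singularity
```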
Chaplygin's Comparison Theorem
A powerful tool for obtaining bounds on solutions without an explicit formula.
Statement: Let $y' \leq f(x, y)$ and $z' = f(x, z)$, with $y(x_0) \leq z(x_0)$. Then $y(x) \leq z(x)$ for all $x \geq x_0$ (assuming $f$ is monotonically increasing in $y$).
Meaning: "If you start below, you stay below." If we can solve the "bounding" equation $z' = f(x, z)$, then we can bound the solution of the inequality $y' \leq f(x, y)$ from above.
Application: For the quadratic equation $y' = y^2 + 1$ we can bound the blowup time. Since $y^2 + 1 \geq y^2$, the solution dominates, from below, the solution of the comparison equation $z' = z^2$ with the same initial data $z(x_0) = y_0 > 0$ (the symmetric version of the theorem: if you start above and grow faster, you stay above). Solving the comparison equation gives $y(x) \geq z(x) = 1/(1/y_0 − (x − x_0))$, and $z$ blows up at $x = x_0 + 1/y_0$, so $y$ blows up no later. This provides an upper bound for the blowup time.
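The comparison argument can be checked numerically; a sketch (with assumed initial data $y_0 = 2$, chosen for illustration) integrates both equations with Euler steps and verifies $y \geq z$ along the way:

```python
import math

# y' = y^2 + 1 dominates z' = z^2 for the same initial data
# y(0) = z(0) = 2 (an illustrative choice). Chaplygin's comparison
# then gives y(t) >= z(t) = 1/(1/2 - t) while both exist.
h, t, y, z = 1e-4, 0.0, 2.0, 2.0
while t < 0.45:
    y += h * (y * y + 1.0)
    z += h * z * z
    t += h
    assert y >= z  # the comparison bound holds along the whole trajectory

# z blows up at t = 1/2, so y must blow up no later; in fact the exact
# solution y(t) = tan(t + atan 2) blows up at pi/2 - atan 2:
print(z, math.pi / 2 - math.atan(2))
```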
The Grönwall–Bellman Lemma
One of the most frequently used lemmas in ODE theory, allowing exponential estimates.
Statement: If $u(x) \geq 0$ and $u(x) \leq \alpha + \beta \int_{x_0}^x u(t)\,dt$ for all $x \geq x_0$ (with constants $\alpha, \beta \geq 0$), then $u(x) \leq \alpha \cdot e^{\beta(x − x_0)}$.
Proof: Let $V(x) = \alpha + \beta \int_{x_0}^x u(t)\,dt$. Then $V' = \beta u \leq \beta V$ and $V(x_0) = \alpha$. Multiplying by the integrating factor $e^{-\beta(x − x_0)}$ (or comparing with the linear equation $W' = \beta W$, $W(x_0) = \alpha$) gives $V \leq \alpha e^{\beta(x−x_0)}$, whence $u \leq V \leq \alpha e^{\beta(x−x_0)}$.
Standard application: estimating the dependence on initial data. If $\delta(x) = |y(x) − z(x)|$ is the difference of two solutions, the Lipschitz condition yields the integral inequality $\delta(x) \leq \delta(x_0) + L \int_{x_0}^x \delta(t)\,dt$, so by the lemma $\delta(x) \leq \delta(x_0) \cdot e^{L|x−x_0|}$. Both the uniqueness part of Picard's theorem ($\delta(x_0) = 0$ forces $\delta \equiv 0$) and continuous dependence on initial data follow from the Grönwall lemma.
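A numerical sketch of this estimate, using the test equation $y' = \sin y$ (an assumption for illustration; its Lipschitz constant is $L = 1$): two nearby trajectories stay inside the Grönwall envelope at every step:

```python
import math

# Two nearby solutions of y' = sin(y); |sin(u) - sin(v)| <= |u - v|,
# so the Lipschitz constant is L = 1 and Gronwall predicts
# |y(t) - z(t)| <= |y(0) - z(0)| * exp(L * t).
L, h, delta0 = 1.0, 1e-3, 1e-4
y, z, t = 0.5, 0.5 + delta0, 0.0
for _ in range(5000):          # integrate up to t = 5
    y += h * math.sin(y)
    z += h * math.sin(z)
    t += h
    assert abs(y - z) <= delta0 * math.exp(L * t)
print(abs(y - z), delta0 * math.exp(L * 5.0))
```

Note that the exponential is only an upper bound: on this particular equation the trajectories actually converge toward $y = \pi$, so the true separation ends up far below the envelope.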
Numerical Methods and Approximation Errors
Peano's theorem and the Grönwall lemma are the theoretical foundation for the analysis of numerical methods. Euler's method: $y_{n+1} = y_n + h \cdot f(x_n, y_n)$. The local (per-step) error is $O(h^2)$, and over $N = T/h$ steps it accumulates to $O(h)$: global first-order accuracy. The Grönwall lemma bounds the growth of the accumulated error by the same exponential factors.
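The first-order convergence is easy to verify empirically. A sketch on the test problem $y' = y$, $y(0) = 1$ (chosen here because its exact solution $e^x$ is known) halves the step and checks that the error at $x = 1$ roughly halves:

```python
import math

# Forward Euler on y' = y, y(0) = 1, integrated to x = 1 (exact answer: e).
# Global error is O(h), so halving h should roughly halve the error.
def euler_error(h):
    y, n = 1.0, round(1.0 / h)
    for _ in range(n):
        y += h * y
    return abs(y - math.e)

e1, e2 = euler_error(0.01), euler_error(0.005)
print(e1 / e2)  # close to 2: first-order convergence
```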
Stiff ODE Systems and Implicit Methods
A stiff equation (stiff ODE) is a system in which rapidly and slowly varying solution components are present simultaneously: the eigenvalues satisfy $|\lambda_1| \ll |\lambda_2|$ (both with negative real part). Explicit methods (forward Euler, explicit Runge–Kutta) then require a step $h < 2 / |\lambda_2|$ for stability, which is catastrophically small relative to the time scale of interest. The implicit (backward) Euler method, $y_{n+1} = y_n + h \cdot f(x_{n+1}, y_{n+1})$, is unconditionally stable (A-stable): stable for any $h$. The price is solving a nonlinear equation at each step (Newton's method). The trapezoidal (Crank–Nicolson) scheme is a standard second-order A-stable choice for stiff problems. In chemical kinetics stiffness always arises: the rates of fast reactions exceed those of slow ones by $6$–$8$ orders of magnitude. Without specialized solvers (LSODE, VODE), numerical integration of such systems is impractical.
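The stability gap can be seen on the scalar test problem $y' = -1000y$, $y(0) = 1$ (a standard illustration, assumed here): with $h = 0.01$, five times the explicit stability limit, explicit Euler oscillates and diverges while implicit Euler decays for any step:

```python
# Scalar stiff test problem y' = -1000*y, y(0) = 1 (the exact solution
# decays almost instantly). Explicit Euler is stable only for
# h < 2/1000 = 0.002; we deliberately take h = 0.01, five times the limit.
lam, h, n = -1000.0, 0.01, 50

y_exp, y_imp = 1.0, 1.0
for _ in range(n):
    y_exp += h * lam * y_exp          # explicit: factor (1 + h*lam) = -9 per step
    y_imp = y_imp / (1.0 - h * lam)   # implicit: y_{n+1} = y_n + h*lam*y_{n+1}

print(abs(y_exp) > 1e10, 0 < y_imp < 1e-10)  # True True
```

For the scalar linear problem the implicit step reduces to a division; for real stiff systems this division becomes the Newton solve mentioned above.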
Question for reflection: The Grönwall lemma shows that the error grows no faster than an exponential. Does this mean that numerical integration always "works" at sufficiently small step sizes? What happens in the numerical solution of equations with positive Lyapunov exponent $\lambda > 0$ (chaotic systems), where nearby trajectories really do separate exponentially?