Dependence on Initial Conditions and Parameters

Practical Problem Statement

In real-world problems, initial conditions are never known exactly. The position of a satellite is measured with an accuracy of one meter. The initial concentration of a substance in a reaction is determined with an accuracy of one percent. The air temperature at the launch of a meteorological model is specified with an accuracy of fractions of a degree. The question: how does the error accumulate? If the initial condition changes by δ, by how much will the solution change after time T?

The answer depends on the equation. Sometimes, the errors are small and manageable. Sometimes, they grow exponentially—and long-term forecasting becomes impossible.

Theorem on Continuous Dependence on Initial Data

If $f$ satisfies a Lipschitz condition in $y$ with constant $L$, then solutions with close initial conditions remain close on any compact interval. By Grönwall's lemma:

Estimate: $|y(x; y_0) - y(x; \tilde{y}_0)| \leq |y_0 - \tilde{y}_0| \cdot e^{L|x - x_0|}$.

Interpreting the formula: the error grows no faster than an exponential with exponent $L$. If $L$ is small (function $f$ depends weakly on $y$), the error remains under control. If $L$ is large, the error can increase rapidly.

Practical example: $y' = -0.1y$, $y(0) = y_0$. Here $L = 0.1$ (a stable equation). Solution: $y(t) = y_0 e^{-0.1t}$. Error: $|y_0 - \tilde{y}_0| e^{-0.1t} \leq |y_0 - \tilde{y}_0|$. The error decreases! This is an example of a stable equation: small errors in the initial data do not accumulate.

By contrast, for $y' = +0.1y$ the error grows as $e^{0.1t}$: after 100 units of time, an initial error of 1% is amplified by a factor of $e^{10} \approx 22026$, i.e., it becomes an error of roughly $22000\%$!
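
A minimal numerical sketch of this contrast (assuming NumPy is available; for $y' = ay$ the exact solution $y = y_0 e^{at}$ lets the gap between two solutions be computed directly):

```python
import numpy as np

# The gap between solutions of y' = a*y started from y0 and y0_tilde
# is |y0 - y0_tilde| * exp(a*t): it decays for a < 0 and grows for a > 0.
def gap(t, y0, y0_tilde, a):
    return abs(y0 - y0_tilde) * np.exp(a * t)

y0, y0_tilde = 1.00, 1.01            # a 1% initial error
for a in (-0.1, +0.1):
    print(f"a = {a:+.1f}: gap at t=0 is {gap(0, y0, y0_tilde, a):.2e}, "
          f"at t=100 it is {gap(100, y0, y0_tilde, a):.2e}")
# a = -0.1 shrinks the gap by e^{-10}; a = +0.1 inflates it by e^{10},
# exactly saturating the theorem's bound with L = 0.1 (the linear case
# attains the Gronwall estimate with equality).
```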

Differentiability with Respect to Initial Data and Parameters

Suppose the solution $y(x; y_0)$ is differentiable with respect to $y_0$. Then $w = \partial y / \partial y_0$ satisfies the variational equation (the equation in variations), a linearization of the original equation along the solution:

$w' = f_y(x, y(x; y_0)) \cdot w$, $w(x_0) = 1$.

This is a linear equation that can be solved, given $y(x; y_0)$. The function $w$ shows how an infinitesimal change in the initial condition affects the solution.
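
A brief sketch of integrating the variational equation alongside the original ODE, using SciPy's solve_ivp; the right-hand side $f(x, y) = -y^2$ is an assumption chosen purely for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative right-hand side: f(x, y) = -y**2, so f_y = -2*y.
# The state is augmented: s = [y, w], with w' = f_y(x, y) * w, w(0) = 1.
def rhs(x, s):
    y, w = s
    return [-y**2, -2.0 * y * w]

y0 = 1.0
sol = solve_ivp(rhs, (0.0, 5.0), [y0, 1.0], rtol=1e-10, atol=1e-12)

# Cross-check: w should match a finite difference of the solution in y0.
eps = 1e-6
pert = solve_ivp(lambda x, s: [-s[0]**2], (0.0, 5.0), [y0 + eps],
                 rtol=1e-10, atol=1e-12, t_eval=sol.t)
fd = (pert.y[0] - sol.y[0]) / eps
print(np.max(np.abs(sol.y[1] - fd)))   # small, on the order of eps
```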

Similarly for a parameter: if the equation $y' = f(x, y, \lambda)$ depends on a parameter $\lambda$, then $z = \partial y / \partial \lambda$ satisfies:

$z' = f_y(x, y) \cdot z + f_\lambda(x, y)$, $z(x_0) = 0$.
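
A similar sketch for the parameter case; the illustrative equation $y' = \lambda y$ is an assumption, chosen because the sensitivity is known in closed form, $z = y_0\, x\, e^{\lambda x}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# For y' = lam * y we have f_y = lam and f_lam = y, so the sensitivity
# z = dy/dlam satisfies z' = lam*z + y with z(0) = 0.
lam, y0 = 0.3, 2.0

def rhs(x, s):
    y, z = s
    return [lam * y, lam * z + y]

sol = solve_ivp(rhs, (0.0, 4.0), [y0, 0.0], rtol=1e-10, atol=1e-12)
x_end = sol.t[-1]
# Numerical z versus the exact sensitivity y0 * x * exp(lam*x):
print(sol.y[1][-1], y0 * x_end * np.exp(lam * x_end))
```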

Application in control: The derivative of the system state with respect to the control parameter allows optimization of control—this is the basis of the adjoint variables method in optimal control theory.

Lyapunov Exponent and the Phenomenon of Chaos

The typical growth rate $|y(x; y_0) - y(x; \tilde{y}_0)| \sim |y_0 - \tilde{y}_0| \cdot e^{\lambda |x - x_0|}$ leads to the concept of the Lyapunov exponent $\lambda$:

$ \lambda = \lim_{t \to \infty} \frac{1}{t} \ln \left| \frac{\delta y(t)}{\delta y(0)} \right| $

  • $\lambda < 0$: small perturbations decay exponentially—the system is stable.
  • $\lambda = 0$: perturbations neither grow nor decay—neutral stability.
  • $\lambda > 0$: small perturbations grow exponentially—the system is chaotic.

The Lorenz system (1963) is the classic example of deterministic chaos:

$ \dot{x} = \sigma(y - x), \quad \dot{y} = x(\rho - z) - y, \quad \dot{z} = xy - \beta z $

With $\sigma=10$, $\rho=28$, $\beta=8/3$, the Lyapunov exponent $\lambda_1 \approx +0.9$. Trajectories diverge at the rate $e^{0.9t}$. An initial error of 1 mm after 50 “units of time” turns into an error on the scale of the entire attractor!
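
Below is a sketch that estimates $\lambda_1$ from the definition above by integrating two nearby trajectories and periodically renormalizing their separation (a standard two-trajectory scheme; the segment length and separation are illustrative choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, u):
    x, y, z = u
    return [SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z]

# Let transients die out so the reference point lies on the attractor.
u = solve_ivp(lorenz, (0, 50), [1.0, 1.0, 1.0], rtol=1e-9, atol=1e-9).y[:, -1]

d0, dt, n = 1e-8, 0.5, 400           # separation, segment length, segments
v = u + np.array([d0, 0.0, 0.0])
log_sum = 0.0
for _ in range(n):
    u = solve_ivp(lorenz, (0, dt), u, rtol=1e-9, atol=1e-9).y[:, -1]
    v = solve_ivp(lorenz, (0, dt), v, rtol=1e-9, atol=1e-9).y[:, -1]
    d = np.linalg.norm(v - u)
    log_sum += np.log(d / d0)        # accumulate the stretching per segment
    v = u + (d0 / d) * (v - u)       # pull the neighbor back to distance d0

print(log_sum / (n * dt))            # comes out near +0.9
```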

This is exactly what Edward Lorenz discovered in 1961, when he repeated a weather simulation, entering the initial data with fewer digits. A small rounding led to radically different results. Thus, the concept of the “butterfly effect” was born—a metaphor for the sensitivity of chaotic systems to initial conditions.

Predictability Horizon

Suppose the initial condition is known with accuracy $\delta_0$, and we wish to predict the solution with accuracy $\Delta$. Requiring the accumulated error to satisfy $\delta_0 e^{\lambda T} \leq \Delta$ yields:

Predictability horizon: $T \leq (1/\lambda) \ln(\Delta / \delta_0)$.

For the atmosphere (meteorology): $\lambda \approx 1/5$ day$^{-1}$, $\delta_0/\Delta \sim 10^{-6}$. Horizon $\approx 5 \times \ln(10^6) \approx 5 \times 13.8 \approx 69$ days. In practice, measurement accuracy is much worse, and the real predictability horizon for weather is about 2 weeks. This is a fundamental limitation, unrelated to imperfections in computers.
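
The same arithmetic as a small helper (the numbers are the article's illustrative values, not operational data):

```python
import math

# Predictability horizon T = (1/lambda) * ln(Delta / delta0).
def horizon(lam, ratio):
    """lam: Lyapunov exponent; ratio = Delta / delta0 (tolerated / initial error)."""
    return math.log(ratio) / lam

print(horizon(lam=1 / 5, ratio=1e6))   # ~69 days, matching the estimate above
```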

Question for reflection: If we increase the accuracy of measuring initial conditions for the weather forecast by a factor of 1000, by how much can we extend the predictability horizon? What does this say about the fundamental limitations of long-term forecasts?

Practical Conclusion

Knowledge of the Lyapunov exponent allows an engineer to assess in advance whether it is worth investing in improving measurement accuracy. For stable ($\lambda < 0$) systems—yes: errors decay, and more accurate sensors directly improve the quality of control. For chaotic ($\lambda > 0$) systems—no: the predictability horizon grows only as the logarithm of improved accuracy, and physical limitations quickly become insurmountable.

Runge–Kutta Methods and Adaptive Step Selection

Fourth-order Runge–Kutta methods (RK4) are the standard for the numerical solution of ODEs. One step reads $y_{n+1} = y_n + (k_1 + 2k_2 + 2k_3 + k_4)/6$, where $k_1 = h f(x_n, y_n)$, $k_2 = h f(x_n + h/2, y_n + k_1/2)$, $k_3 = h f(x_n + h/2, y_n + k_2/2)$, $k_4 = h f(x_n + h, y_n + k_3)$. The global error is $O(h^4)$, far better than Euler's method.

Adaptive step selection (the Dormand–Prince method behind ode45) compares two embedded estimates of orders 4 and 5: if their difference is small, the step is increased; if it is large, the step is decreased. Grönwall's lemma guarantees that, with a sufficiently small step, the accumulated error remains bounded. This is the mathematical justification for the correctness of the adaptive algorithms used in SciPy, MATLAB, and Julia's DifferentialEquations.jl.
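
A minimal sketch contrasting a hand-written RK4 step with SciPy's adaptive RK45 (Dormand–Prince) on the stable test equation $y' = -0.1y$ from earlier in the article:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(x, y)."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: -0.1 * y
x, y, h = 0.0, 1.0, 0.1
while x < 10.0 - 1e-12:              # integrate to x = 10 with a fixed step
    y = rk4_step(f, x, y, h)
    x += h

ref = solve_ivp(lambda t, s: [-0.1 * s[0]], (0, 10), [1.0],
                method="RK45", rtol=1e-10, atol=1e-12).y[0, -1]
print(y, ref, np.exp(-1.0))          # all three agree to high accuracy
```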
