Module VI · Article I

The Lyapunov Method: Applications in Control and Nonlinear Dynamics

Lyapunov Stability Theory


From Theory to Engineering

The direct Lyapunov method is not only a theoretical tool but also a practical means of design. In control theory, it enables the design of control algorithms that guarantee the stability of the closed-loop system "by construction." Unlike frequency-based methods (such as the Nyquist criterion), the Lyapunov method works directly with nonlinear systems and nonstationary regimes.

Feedback Control

Let us consider a nonlinear system: ẋ = f(x) + g(x)u, where x is the state and u is the control.

Problem: Choose u = u(x) such that x → 0 as t → ∞.

Lyapunov Control Method: We choose the desired Lyapunov function V(x) (for example, V = |x|²/2). We require V̇ < 0:

V̇ = ∇V · (f + gu) = ∇V · f + (∇V · g) u < 0.

If ∇V · g ≠ 0, choose: u = −k(x) · (∇V · f + ε|∇V · g|) / (∇V · g), where k(x) > 0, ε > 0.

Taking k = 1 for simplicity gives V̇ = ∇V · f − (∇V · f + ε|∇V · g|) = −ε|∇V · g| < 0 wherever ∇V · g ≠ 0.

This is Lyapunov-based control: stability is constructively guaranteed.
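A minimal numerical sketch of this construction, using a scalar plant of my own choosing (ẋ = x² + u, i.e. f(x) = x², g(x) = 1) with V = x²/2:

```python
import numpy as np

# Plant: dx/dt = f(x) + g(x) u, with unstable drift f(x) = x^2 and g(x) = 1.
# Lyapunov candidate V = x^2 / 2, so grad(V) = x.
def f(x): return x**2
def g(x): return 1.0

eps = 0.5  # desired decay margin: the design enforces V_dot = -eps * |x|

def u(x):
    # u = -(grad(V)*f + eps*|grad(V)*g|) / (grad(V)*g), valid when grad(V)*g != 0
    gV = x  # grad(V) = x
    if abs(gV * g(x)) < 1e-12:
        return 0.0
    return -(gV * f(x) + eps * abs(gV * g(x))) / (gV * g(x))

# Forward-Euler simulation from x(0) = 1
dt, T = 1e-3, 5.0
x = 1.0
V0 = 0.5 * x**2
for _ in range(int(T / dt)):
    x += dt * (f(x) + u(x))

print(abs(x))           # the state has been driven near the origin
print(0.5 * x**2 < V0)  # V has decreased
```

Here the controller exactly cancels the drift and adds a −ε sign(x) term, so |x| decays at constant rate ε; the discontinuity at x = 0 causes tiny chattering at the dt scale, a known artifact of this kind of sliding-style law.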

LQR Control (Linear Quadratic Regulator)

For the linear system ẋ = Ax + Bu, the optimal control problem is:

Minimize J = ∫₀^∞ (xᵀQx + uᵀRu) dt

with u = −Kx, Q ≥ 0 (state penalty), R > 0 (control penalty).

Solution: The optimal regulator is K = R⁻¹BᵀP, where P is the unique positive definite solution of the Riccati equation:

AᵀP + PA − PBR⁻¹BᵀP + Q = 0.

Connection to Lyapunov: The function V = xᵀPx is a Lyapunov function for the closed-loop system:

V̇ = xᵀ(Aᵀ P + PA − 2PBR⁻¹BᵀP)x = xᵀ(−Q − KᵀRK)x ≤ 0.

Practical example: Stabilizing an inverted pendulum. Let A = [[0,1],[-1,-0.5]], B = [[0],[1]], Q = I, R = 1. Solving the Riccati equation numerically yields P and K. The closed-loop system is stable, with the quality of the transient process specified by the Q and R weights.
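This example can be checked numerically with SciPy's continuous-time Riccati solver (a sketch; `solve_continuous_are` does the heavy lifting):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Linearized plant from the example
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state penalty
R = np.array([[1.0]])  # control penalty

# Solve A^T P + P A - P B R^-1 B^T P + Q = 0 for P > 0
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # optimal gain K = R^-1 B^T P

# The Riccati residual should vanish
residual = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q
print(np.linalg.norm(residual))  # ~0

# Closed loop A - B K is Hurwitz: all eigenvalues in the open left half-plane
eigs = np.linalg.eigvals(A - B @ K)
print(eigs.real.max() < 0)       # True
```

The same P also certifies stability directly: V = xᵀPx decreases along closed-loop trajectories, exactly as in the identity above.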

Constructive Methods for Finding Lyapunov Functions

One of the central problems of the Lyapunov method is how to find an appropriate V. For linear systems, systematic methods exist; for nonlinear systems, one relies on a collection of techniques and heuristics.

Sum-of-squares (SOS) Method: For systems with polynomial right-hand sides, we search for V as a sum of squares of polynomials. The conditions V > 0 and V̇ < 0 become a semidefinite programming (SDP) problem, which can be solved numerically in polynomial time.
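A hand-built illustration of the SOS certificate idea (the system and V are my choice, and a real SOS search would call an SDP solver rather than guess the Gram matrix): for ẋ₁ = −x₁³ + x₂, ẋ₂ = −x₁ − x₂ with V = (x₁² + x₂²)/2, we get V̇ = −x₁⁴ − x₂², and −V̇ = zᵀQz with monomials z = (x₁², x₂) and Q = I ⪰ 0.

```python
import numpy as np

# System: dx1/dt = -x1^3 + x2, dx2/dt = -x1 - x2 (chosen for illustration)
# Candidate V = (x1^2 + x2^2)/2  =>  V_dot = -x1^4 - x2^2
# SOS certificate: -V_dot = z^T Q z with z = (x1^2, x2) and Gram matrix Q.
Q = np.eye(2)

# Q positive semidefinite => -V_dot is a sum of squares => V_dot <= 0
print(np.linalg.eigvalsh(Q).min() >= 0)  # True

# Sanity-check the algebraic identity at random points
rng = np.random.default_rng(0)
for x1, x2 in rng.normal(size=(100, 2)):
    v_dot = x1 * (-x1**3 + x2) + x2 * (-x1 - x2)
    z = np.array([x1**2, x2])
    assert abs(-v_dot - z @ Q @ z) < 1e-9
print("identity verified")
```

In the general SDP formulation, the entries of Q become decision variables constrained by matching the coefficients of −V̇, and Q ⪰ 0 is the semidefinite constraint.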

Lyapunov Neural Networks: In modern machine learning, V(x) is trained as a neural network, minimizing a penalty for violations of the Lyapunov function conditions. This allows for finding Lyapunov functions for high-dimensional nonlinear systems.
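The learning idea can be sketched without a deep network by fitting a quadratic V(x) = xᵀLLᵀx and penalizing violations of V̇ < 0 on sampled states (finite-difference gradient descent stands in for backprop; the plant is the linear example above, my choice):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -0.5]])  # stable plant, f(x) = A x
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))             # sampled states

def loss(params):
    # V(x) = x^T P x with P = L L^T >= 0 built from 3 lower-triangular params
    L = np.array([[params[0], 0.0], [params[1], params[2]]])
    P = L @ L.T
    S = A.T @ P + P @ A                   # V_dot = x^T S x along trajectories
    vdot = np.einsum('ni,ij,nj->n', X, S, X)
    margin = 0.1 * np.einsum('ni,ni->n', X, X)
    # hinge penalty wherever V_dot > -margin * |x|^2
    return np.maximum(vdot + margin, 0.0).mean()

params = np.array([1.0, 0.0, 1.0])        # start from V = |x|^2
initial = loss(params)
lr, h = 0.02, 1e-5
for _ in range(500):                      # finite-difference gradient descent
    grad = np.array([(loss(params + h * e) - loss(params - h * e)) / (2 * h)
                     for e in np.eye(3)])
    params -= lr * grad
print(loss(params) < initial)             # penalty decreased: a better V was found
```

A neural-network V replaces the quadratic parametrization with a learned function (plus terms forcing V > 0), but the training objective, a penalty on Lyapunov-condition violations over sampled states, has the same shape.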

Converse Lyapunov Theorems: If x* = 0 is globally asymptotically stable, then a smooth Lyapunov function exists (theorems of Massera and Kurzweil). Such converse existence results guarantee that the "right" V always exists; the question is only how to find it.

Stability of Periodic Solutions: Floquet Theory

Let the system x' = f(x) have a periodic solution xₚ(t) = xₚ(t + T). How do we analyze its stability?

Linearize along xₚ(t): δx' = A(t) δx, where A(t) = Df(xₚ(t)) is a periodic matrix.

Floquet Theory: For a system with periodic coefficients δx' = A(t) δx (A(t+T) = A(t)), the fundamental matrix is: Φ(t) = P(t) e^{Bt}, where P(t) is a T-periodic matrix, B is constant.

Floquet multipliers ρᵢ are the eigenvalues of the monodromy matrix M = Φ(T) (over one period). The periodic solution is stable if and only if all |ρᵢ| < 1 (except for a single unit multiplier corresponding to the tangent direction).

Example: A pendulum with a vertically vibrating suspension point, ẍ + (ω₀² + ε cos 2t) sin x = 0, which linearizes to the Mathieu equation. For certain parameter ratios the Floquet multipliers cross the unit circle, producing parametric resonance (a child pumping a swing at just the right moments is precisely parametric excitation).
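The Floquet multipliers of the linearized Mathieu equation ẍ + (ω₀² + ε cos 2t)x = 0 can be computed by integrating the fundamental matrix over one period T = π (the parameter values below are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

w0sq, eps = 0.3, 0.1    # illustrative parameters
T = np.pi               # period of cos(2t)

def rhs(t, y):
    # y holds the 2x2 fundamental matrix Phi, flattened;
    # Phi' = A(t) Phi with A(t) = [[0, 1], [-(w0^2 + eps*cos 2t), 0]]
    A = np.array([[0.0, 1.0], [-(w0sq + eps * np.cos(2 * t)), 0.0]])
    return (A @ y.reshape(2, 2)).ravel()

sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
M = sol.y[:, -1].reshape(2, 2)   # monodromy matrix Phi(T)
rho = np.linalg.eigvals(M)       # Floquet multipliers

# tr A(t) = 0, so by Liouville's formula det M = rho1 * rho2 = 1:
# the multipliers come in reciprocal pairs, and instability
# means one of them leaves the unit circle.
print(abs(np.linalg.det(M) - 1.0) < 1e-6)  # True
print(np.abs(rho))
```

Sweeping ω₀² and ε and recording where max|ρᵢ| > 1 reproduces the classical tongue-shaped instability regions of the Mathieu equation.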

Question for thought: In LQR regulation, the Q and R parameters reflect the trade-off between the "quality" of stabilization and the "cost" of control. How should these matrices be chosen for a specific engineering problem?

H-infinity Control and Robustness

LQR is optimal only when the model is known exactly. Real systems carry uncertainty: parameter errors, unmodeled dynamics, external disturbances. H∞ control (Zames, 1981) minimizes the worst case: min_K ‖T_{zw}‖_∞, the norm of the transfer matrix from the disturbance w to the performance output z.

Connection to ODEs: the H∞ problem reduces to a pair of algebraic Riccati equations (instead of the single one in LQR), and a solution exists if and only if the robustness level γ exceeds γ_min.

Applications: flight control systems that remain stable under sensor failures, vibration suppression in flexible mechanical structures, and power grid control under variable load.
