Module IV·Article I·~5 min read

Systems of Linear ODEs


Systems of Linear ODEs: Matrix Approach

From an Equation to a System

A single higher-order ODE $y^{(n)} = f(x, y, y', ..., y^{(n-1)})$ is equivalent to a first-order system. We introduce $x_1 = y$, $x_2 = y'$, ..., $x_n = y^{(n-1)}$. Then $x_1' = x_2$, $x_2' = x_3$, ..., $x_n' = f(x, x_1, ..., x_n)$. This is the standard form of the system. Passing to matrix notation is not merely a convenience but a powerful tool: it brings the full arsenal of linear algebra to bear.
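This reduction can be sketched numerically. A minimal Python illustration (using NumPy): the harmonic oscillator $y'' = -y$ becomes a two-dimensional first-order system, integrated here with a hand-rolled classical RK4 step (an illustration, not production code):

```python
import numpy as np

# Reduce y'' = -y to a first-order system: with x1 = y, x2 = y',
# we get x1' = x2, x2' = -x1.
def f(x):
    return np.array([x[1], -x[0]])

# One classical RK4 step (a minimal sketch, not production code).
def rk4_step(x, h):
    k1 = f(x)
    k2 = f(x + h / 2 * k1)
    k3 = f(x + h / 2 * k2)
    k4 = f(x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.array([1.0, 0.0])   # y(0) = 1, y'(0) = 0; exact solution is y = cos t
h = 0.001
for _ in range(int(np.pi / h)):
    x = rk4_step(x, h)
# After t ≈ pi: y ≈ cos(pi) = -1, y' ≈ -sin(pi) ≈ 0
```

The same state-vector form is exactly what library integrators expect as input.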

In mechanics, systems of ODEs describe the motion of several interacting bodies. In electronics — current and voltage in complex circuits. In ecology — the joint dynamics of several populations (predator–prey model).

Matrix Notation of a System

The system: $x_1' = a_{11}x_1 + ... + a_{1n}x_n + f_1$, ..., $x_n' = a_{n1}x_1 + ... + a_{nn}x_n + f_n$.

In vector form: $\mathbf{x}' = A(t)\mathbf{x} + \mathbf{f}(t)$, where $\mathbf{x} = (x_1, ..., x_n)^\mathrm{T}$ is the state column vector, $A$ is the $n\times n$ coefficient matrix, and $\mathbf{f}$ is the inhomogeneity vector.

Cauchy problem: $\mathbf{x}(t_0) = \mathbf{x}_0$.

Fundamental Matrix and the Principle of Superposition

The fundamental matrix $\Phi(t)$ is an $n\times n$ matrix whose columns form a fundamental system of solutions ($n$ linearly independent solutions of the homogeneous system $\mathbf{x}' = A\mathbf{x}$). Its Wronskian determinant is nonzero: $\det \Phi(t) \neq 0$.

The general solution of the homogeneous system: $\mathbf{x}(t) = \Phi(t) \mathbf{c}$, where $\mathbf{c}$ is an arbitrary constant vector.
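A candidate fundamental matrix can be verified numerically. A NumPy sketch for $\mathbf{x}' = A\mathbf{x}$ with the rotation generator $A$ below, checking $\Phi' = A\Phi$ by a central difference and the Wronskian condition $\det\Phi \neq 0$:

```python
import numpy as np

# Candidate fundamental matrix for x' = A x with A = [[0, 1], [-1, 0]]:
# Phi(t) = [[cos t, sin t], [-sin t, cos t]].
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

def Phi(t):
    return np.array([[np.cos(t),  np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

# Check Phi' = A Phi by a central difference, and det Phi != 0 (Wronskian).
t, h = 0.7, 1e-6
dPhi = (Phi(t + h) - Phi(t - h)) / (2 * h)
# dPhi ≈ A @ Phi(t), and det Phi(t) = 1 for all t
```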

Method of Variation of Constants (Duhamel): For the inhomogeneous system:

$ \mathbf{x}(t) = \Phi(t)\mathbf{c} + \Phi(t)\int_{t_0}^t \Phi^{-1}(s)\mathbf{f}(s)\,ds. $

The second term expresses the Duhamel principle: the system's response to the continuous excitation $\mathbf{f}$ is the sum of its responses to instantaneous impulses.

Systems with Constant Matrix: Matrix Exponential

For $\mathbf{x}' = A\mathbf{x}$ ($A$ a constant matrix), the solution is: $\mathbf{x}(t) = e^{At} \mathbf{x}_0$, where the matrix exponential is:

$ e^{At} = I + At + (At)^2/2! + (At)^3/3! + ... = \sum_{k=0}^\infty \frac{(At)^k}{k!}. $
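The series can be evaluated directly. A minimal Python sketch (NumPy) truncating the power series, adequate for moderate $\|At\|$ but not a robust general-purpose algorithm (libraries such as `scipy.linalg.expm` use more careful methods):

```python
import numpy as np

# Truncated power series for e^{At}: fine for moderate ||At||,
# not a robust general-purpose algorithm.
def expm_series(A, t, terms=30):
    acc = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ (A * t) / k   # next series term (At)^k / k!
        acc = acc + term
    return acc

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
E = expm_series(A, np.pi / 2)
# For this A, e^{At} is the rotation matrix [[cos t, sin t], [-sin t, cos t]];
# at t = pi/2 it equals [[0, 1], [-1, 0]].
```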

Computing $e^{At}$ via spectral decomposition. If $A = PDP^{-1}$, where $D = \operatorname{diag}(\lambda_1, ..., \lambda_n)$ is a diagonal matrix of eigenvalues, $P$ is the matrix of eigenvectors, then:

$ e^{At} = P e^{Dt} P^{-1} = P \cdot \operatorname{diag}(e^{\lambda_1 t}, ..., e^{\lambda_n t}) \cdot P^{-1}. $

Meaning: in the eigenvector basis, components evolve independently, each as $e^{\lambda_i t}$.
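A sketch of the spectral computation in Python (NumPy), assuming $A$ is diagonalizable; instead of comparing to a closed form, we verify the semigroup property of the flow, $e^{A(t+s)} = e^{At}e^{As}$:

```python
import numpy as np

# e^{At} via the eigendecomposition A = P D P^{-1}; assumes A is
# diagonalizable (eigenvalues may be complex, so for a real A we drop
# the tiny imaginary rounding residue at the end).
def expm_spectral(A, t):
    lam, P = np.linalg.eig(A)
    return (P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)).real

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
E1 = expm_spectral(A, 1.0)
E2 = expm_spectral(A, 2.0)
# Semigroup property: e^{A*2} = (e^{A*1})^2; also e^{A*0} = I.
```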

Full Example: Predator–Prey System (Linearized)

The Lotka–Volterra model is linearized near the equilibrium point. Let $A = \begin{bmatrix}0 & 1 \\ -1 & 0\end{bmatrix}$ (the small-oscillation matrix).

Eigenvalues: $\det(A - \lambda I) = \lambda^2 + 1 = 0 \Rightarrow \lambda = \pm i$.

Eigenvectors: for $\lambda = i$: $(A - iI)v = 0$, i.e. $\begin{bmatrix}-i & 1 \\ -1 & -i\end{bmatrix}v = 0 \Rightarrow v_1 = (1, i)^\mathrm{T}$.
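A quick numerical confirmation (NumPy) that the eigenvalues of this matrix are indeed $\pm i$:

```python
import numpy as np

# The small-oscillation matrix; its eigenvalues should be +-i.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
lam, V = np.linalg.eig(A)   # lam holds the (complex) eigenvalues
```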

Matrix exponential: $e^{At} = \begin{bmatrix}\cos t & \sin t \\ -\sin t & \cos t\end{bmatrix}$ (a rotation matrix!).

Solution: $(x(t), y(t))^\mathrm{T} = \begin{bmatrix}\cos t & \sin t \\ -\sin t & \cos t\end{bmatrix} (x_0, y_0)^\mathrm{T}$. These are closed orbits around the equilibrium (circles in these scaled coordinates, ellipses in the original population variables): the populations oscillate periodically around the equilibrium point, just as the Lotka–Volterra model predicts for the lossless case.
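A short NumPy check that the orbits are closed: the flow is the rotation matrix, so the distance to the equilibrium is conserved along any trajectory (the initial deviation below is an arbitrary illustrative choice):

```python
import numpy as np

# The flow of the linearized system is a rotation, so |x(t)| is
# conserved: every orbit is closed.
def flow(t, x0):
    R = np.array([[np.cos(t),  np.sin(t)],
                  [-np.sin(t), np.cos(t)]])
    return R @ x0

x0 = np.array([2.0, 0.5])   # arbitrary initial deviation from equilibrium
radii = [np.linalg.norm(flow(t, x0))
         for t in np.linspace(0.0, 2 * np.pi, 100)]
# All radii equal |x0|: the trajectory stays on a circle.
```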

Jordan Case: Multiple Eigenvalues

If $A$ is not diagonalizable (multiple eigenvalues), we use the Jordan form $A = PJP^{-1}$.

For a Jordan block $J(\lambda, k)$ of size $k$:

$ e^{J(\lambda, k)t} = e^{\lambda t} \begin{bmatrix} 1 & t & t^2/2! & \ldots & t^{k-1}/(k-1)! \\ 0 & 1 & t & \ldots & t^{k-2}/(k-2)! \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \ldots & & 1 \end{bmatrix}. $

Physically: a multiple root produces terms of the form $t^m e^{\lambda t}$ with $m \le k-1$, which is precisely the condition for resonance in the matrix context.

Numerical example: $A = \begin{bmatrix}1 & 1 \\ 0 & 1\end{bmatrix}$, $\lambda = 1$ (algebraic multiplicity 2).

$e^{At} = e^{t} \begin{bmatrix}1 & t \\ 0 & 1\end{bmatrix}$. If $x(0) = (1, 0)^\mathrm{T}$: $x(t) = (e^{t}, 0)^\mathrm{T}$. If $x(0) = (0, 1)^\mathrm{T}$: $x(t) = (t e^{t}, e^{t})^\mathrm{T}$. The factor $t$ in the first component is polynomial growth on top of the exponential, the signature of resonance.
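The closed form can be checked against a truncated power series (the series function here is an ad hoc sketch, adequate for small $t$):

```python
import numpy as np

# Check the closed form e^{At} = e^t [[1, t], [0, 1]] for the
# non-diagonalizable A = [[1, 1], [0, 1]] against a truncated series.
A = np.array([[1.0, 1.0], [0.0, 1.0]])

def expm_series(A, t, terms=40):
    acc, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ (A * t) / k   # (At)^k / k!
        acc = acc + term
    return acc

t = 1.5
closed = np.exp(t) * np.array([[1.0, t], [0.0, 1.0]])
approx = expm_series(A, t)
```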

Question for reflection: Why do eigenvalues with negative real part correspond to decay, and those with zero real part to neutral stability? What happens when $\lambda = 0$?

Practical Conclusions for the Engineer

The matrix approach allows one to immediately assess the behavior of the entire system by the sign of the real parts of the eigenvalues, without solving the system explicitly. This is the main tool of the control engineer: before tuning a regulator, one must compute the eigenvalues of the closed-loop system matrix and make sure they all lie in the left half-plane ($\operatorname{Re} \lambda < 0$). That is why knowledge of linear algebra is an essential component of control theory.
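A minimal sketch of this check in Python (NumPy); the two matrices below are illustrative examples, not taken from any particular plant:

```python
import numpy as np

# The control engineer's first check: do all eigenvalues of the
# closed-loop matrix lie in the open left half-plane?
def is_asymptotically_stable(A):
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_stable = np.array([[0.0, 1.0], [-2.0, -3.0]])    # eigenvalues -1, -2
A_unstable = np.array([[0.0, 1.0], [2.0, -1.0]])   # eigenvalues 1, -2
```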

Method of Variation of Parameters for Systems

For the inhomogeneous system $\mathbf{x}' = A\mathbf{x} + \mathbf{g}(t)$, the method of variation of parameters gives: $\mathbf{x}(t) = \Phi(t)\mathbf{c} + \Phi(t)\int_{t_0}^t \Phi^{-1}(s)\mathbf{g}(s)\,ds$, where $\Phi(t)$ is the fundamental matrix. This is the matrix Duhamel formula. Transfer matrix: $G(t,s) = \Phi(t)\Phi^{-1}(s)$ describes the system's response to an impulse input at moment $s$. In control theory, this is the matrix of impulse response functions. Stability and $G$: the system is asymptotically stable if and only if $\int_0^\infty \|G(t,0)\|\,dt < \infty$, an integral characteristic of the decay of transients, important for assessing regulator quality.
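The Duhamel formula can be sanity-checked numerically. A minimal $1\times 1$ sketch in Python (NumPy), where the matrix exponential reduces to the scalar exponential and the integral is a trapezoid sum:

```python
import numpy as np

# Sanity check of the Duhamel formula on the 1x1 "system" x' = -x + 1,
# x(0) = 0 (for a 1x1 matrix, e^{At} is just the scalar exponential).
A = np.array([[-1.0]])
x0 = np.array([0.0])

def g(s):
    return np.array([1.0])   # constant forcing

def duhamel(t, n=2000):
    s = np.linspace(0.0, t, n)
    vals = np.array([(np.exp(A * (t - si)) @ g(si))[0] for si in s])
    h = s[1] - s[0]
    integral = h * (vals[0] / 2 + vals[1:-1].sum() + vals[-1] / 2)
    return (np.exp(A * t) @ x0)[0] + integral

approx = duhamel(2.0)
# Exact solution: x(t) = 1 - e^{-t}
```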
