Module IV · Article I
Linear Systems: Controllability and Observability
Linear Control and Stability
Before it is possible to “optimally” control a system, two fundamental questions must be answered: can the system, in principle, be brought to the desired state? And is it possible to determine the system's state from the available measurements? These issues—controllability and observability—are resolved by the classical Kalman criteria, which initiated linear systems theory in the 1960s. Without understanding these concepts, it is impossible to design a Luenberger observer, an LQR regulator, or a Kalman filter.
Linear Time-Invariant Systems
Standard form (state-space representation): ẋ = A·x + B·u, y = C·x + D·u.
Here x ∈ ℝⁿ is the state vector (position, velocity, temperature, currents), u ∈ ℝᵐ is the input (control), y ∈ ℝᵖ is the output (measurements). The matrices A (n×n), B (n×m), C (p×n), D (p×m) describe the system's physics. Often D = 0.
Solution: x(t) = e^{At}·x(0) + ∫₀^t e^{A(t−s)}·B·u(s) ds.
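The solution formula is easy to sanity-check numerically in the scalar case, where the convolution integral has a closed form. A minimal sketch (the numbers a = −2, b = 3, u = 1 are illustrative, not from the text):

```python
import numpy as np
from scipy.integrate import quad

# Scalar case of the solution formula: x' = a*x + b*u with constant u.
# The convolution integral from 0 to t of e^{a(t-s)}*b*u ds has the
# closed form (b*u/a)*(e^{a*t} - 1), so x(t) is easy to verify.
a, b, u, x0, t = -2.0, 3.0, 1.0, 1.0, 1.5

x_closed = np.exp(a * t) * x0 + (b * u / a) * (np.exp(a * t) - 1)

# Evaluate the convolution integral by quadrature and compare
integral, _ = quad(lambda s: np.exp(a * (t - s)) * b * u, 0.0, t)
x_quad = np.exp(a * t) * x0 + integral

assert np.isclose(x_closed, x_quad)
```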
Matrix exponential: e^{At} = Σ_{k=0}^∞ (At)^k/k! — the state-transition (fundamental) matrix. When A is diagonalizable, the eigen-decomposition A = V·Λ·V⁻¹ gives e^{At} = V·diag(e^{λ_i·t})·V⁻¹.
Example. Harmonic oscillator: A = [0, 1; −ω², 0]. Eigenvalues ±iω, e^{At} = [cos ωt, sin ωt/ω; −ω·sin ωt, cos ωt] — rotation in phase space.
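Both routes to e^{At} — the eigen-decomposition and the closed form above — can be cross-checked numerically. A minimal sketch with NumPy/SciPy (ω = 2 and t = 0.7 are arbitrary illustrative values):

```python
import numpy as np
from scipy.linalg import expm

omega, t = 2.0, 0.7            # illustrative values
A = np.array([[0.0, 1.0], [-omega**2, 0.0]])

# Route 1: eigen-decomposition A = V*Lambda*V^{-1} (eigenvalues +-i*omega)
lam, V = np.linalg.eig(A)
eAt_eig = (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)).real

# Route 2: the closed form from the text
eAt_closed = np.array([
    [np.cos(omega * t),          np.sin(omega * t) / omega],
    [-omega * np.sin(omega * t), np.cos(omega * t)],
])

assert np.allclose(eAt_eig, eAt_closed)
assert np.allclose(expm(A * t), eAt_closed)   # SciPy's expm agrees
```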
Controllability
Definition. The system (A, B) is controllable if for any initial state x₀ and target state x₁ there exists a control u(t) on some finite [0, T] that transfers x(0) = x₀ to x(T) = x₁.
Controllability matrix: 𝓒 = [B | A·B | A²·B | ... | A^{n−1}·B] ∈ ℝ^{n × n·m}.
Kalman criterion: The system is controllable ⟺ rank(𝓒) = n.
Geometric intuition. rank(𝓒) is the dimension of the subspace of states reachable from the origin. If rank < n, there are directions in the state space that the control cannot “reach”.
Numerical Example: Double Integrator
ẋ₁ = x₂, ẋ₂ = u. A = [0, 1; 0, 0], B = [0; 1]. n = 2. 𝓒 = [B | A·B] = [0, 1; 1, 0]. rank(𝓒) = 2 = n → the system is controllable.
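The rank computation above takes a few lines of NumPy; a minimal sketch:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
n = A.shape[0]

# Controllability matrix [B | A*B | ... | A^{n-1}*B]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
rank = np.linalg.matrix_rank(ctrb)

assert rank == n   # rank 2 = n -> controllable
```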
This means: from any position with any velocity, we can reach any other position and velocity in finite time by choosing an appropriate acceleration u(t).
Counterexample: Uncontrollable System
A = [1, 0; 0, 2], B = [1; 0]. The state x₂ evolves according to ẋ₂ = 2·x₂, and is unaffected by u. 𝓒 = [1, 1; 0, 0], rank = 1 < 2. Not controllable.
Physically: x₂ is an independent variable, not affected by our control.
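The same rank test exposes the unreachable direction; a short sketch of the counterexample:

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0], [0.0]])

ctrb = np.hstack([B, A @ B])          # [[1, 1], [0, 0]]
rank = np.linalg.matrix_rank(ctrb)
assert rank == 1                      # rank 1 < n = 2 -> not controllable

# The unreachable direction is the x2 axis: every column of the
# controllability matrix has zero second component.
assert np.allclose(ctrb[1, :], 0.0)
```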
Observability
Definition. The system (A, C) is observable if the initial state x(0) can be uniquely recovered from the output y(t), t ∈ [0, T], for a known input u(t) (without loss of generality, u ≡ 0).
Observability matrix: 𝒪 = [C; C·A; C·A²; ...; C·A^{n−1}] ∈ ℝ^{p·n × n}.
Criterion: The system is observable ⟺ rank(𝒪) = n.
Kalman duality: (A, B) is controllable ⟺ (Aᵀ, Bᵀ) is observable. This means: control and estimation problems are “mirror” images, and algorithms for one automatically provide algorithms for the other.
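The duality is concrete: the observability matrix of (Aᵀ, Bᵀ) is exactly the transpose of the controllability matrix of (A, B), so the two rank tests coincide. A minimal sketch, using the double integrator as an example:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B | A*B | ... | A^{n-1}*B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    """Observability matrix [C; C*A; ...; C*A^{n-1}]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Kalman duality: obsv(A^T, B^T) is the transpose of ctrb(A, B),
# hence the ranks are equal.
assert np.allclose(obsv(A.T, B.T), ctrb(A, B).T)
```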
Canonical Forms
With full controllability, a linear transformation z = T·x can bring the system to the controllable canonical form — A_c = [0, 1, 0, ..., 0; 0, 0, 1, ..., 0; ...; −a_0, −a_1, ..., −a_{n−1}], B_c = [0; 0; ...; 1]. This simplifies regulator synthesis.
For full observability, there is a corresponding observable canonical form.
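For a single-input controllable pair, the transformation can be built directly from controllability matrices: since 𝓒_z = T·𝓒_x under z = T·x, one takes T = 𝓒_c·𝓒⁻¹, where 𝓒_c is the controllability matrix of the target companion pair. A sketch on the harmonic oscillator with B = [1; 0] (an illustrative choice that is not already in companion form):

```python
import numpy as np

omega = 2.0
A = np.array([[0.0, 1.0], [-omega**2, 0.0]])
B = np.array([[1.0], [0.0]])               # not in companion form

Cx = np.hstack([B, A @ B])                  # controllability matrix of (A, B)

# Target companion pair for the characteristic polynomial s^2 + omega^2
# (a0 = omega^2, a1 = 0)
Ac = np.array([[0.0, 1.0], [-omega**2, 0.0]])
Bc = np.array([[0.0], [1.0]])
Cc = np.hstack([Bc, Ac @ Bc])

# Controllability matrices transform as C_z = T * C_x, so T = Cc * Cx^{-1}
T = Cc @ np.linalg.inv(Cx)

assert np.allclose(T @ A @ np.linalg.inv(T), Ac)
assert np.allclose(T @ B, Bc)
```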
Luenberger Observer
If the state x(t) is not available for direct measurement, it can be estimated via an observer:
x̂̇ = A·x̂ + B·u + L·(y − C·x̂).
Here L is the observer gain matrix. The estimation error e = x − x̂ satisfies ė = (A − L·C)·e. Choosing L so that all eigenvalues of (A − L·C) lie in the open left half-plane guarantees exponential convergence x̂ → x; placing them further left gives a larger stability margin and faster convergence.
Separation principle: In linear systems, the control problem (choosing K in u = −K·x̂) and the estimation problem (choosing L) can be solved independently, and the result—a combination of regulator and observer—ensures the target closed-loop performance.
Numerical Example: Observer for Double Integrator
System: ẋ₁ = x₂, ẋ₂ = u, y = x₁ (we measure only position). C = [1, 0]. Observability matrix 𝒪 = [1, 0; 0, 1] — rank 2, observable.
Desired observer eigenvalues: λ = −5, −5 (rapid convergence). L = [l₁; l₂] such that det(s·I − A + L·C) = (s + 5)² → l₁ = 10, l₂ = 25. The observer converges in ~1 second—sufficient for most applications.
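The pole placement and the convergence claim can both be verified numerically; a minimal sketch:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

# Matching det(s*I - A + L*C) = (s + 5)^2 = s^2 + 10*s + 25 gives the gains
L = np.array([[10.0], [25.0]])
Aerr = A - L @ C

# Characteristic polynomial of the error dynamics is s^2 + 10s + 25,
# i.e. both eigenvalues sit at -5
assert np.allclose(np.poly(Aerr), [1.0, 10.0, 25.0])

# Estimation error e(t) = e^{(A-LC)t} * e(0) decays exponentially:
# after 2 s it is below 1% of the initial error for e(0) = (1, 0)
e0 = np.array([1.0, 0.0])
e2 = expm(Aerr * 2.0) @ e0
assert np.linalg.norm(e2) < 1e-2
```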
Real-World Applications
- GPS receivers. State x = (position, velocity, clock error), measurements are pseudoranges to satellites. The observer (extended Kalman filter) reconstructs position with 5–10 meter accuracy.
- Power systems. State estimation in SCADA: measurements of voltages and currents in network nodes → estimate the state of the whole network (thousands of variables) → dispatch control.
- Automotive electronics. Estimation of battery state of charge (SoC) in an electric car by current and voltage—Luenberger observer or Kalman filter.
- Biomedical devices. Continuous glucose monitors estimate “true” blood glucose concentration from readings of a subcutaneous sensor—an observability problem.
Assignment. Double integrator: ẋ₁ = x₂, ẋ₂ = u, y = x₁. (a) Check controllability and observability via the ranks of 𝓒 and 𝒪. (b) Find u = −K·x (state feedback) so that the eigenvalues of A − B·K are {−2 + 2i, −2 − 2i}. (c) Construct observer L with spectrum A − L·C = {−5, −5}. (d) Simulate the closed-loop system with observer for x(0) = (1, 0), x̂(0) = (0, 0). Plot x(t), x̂(t), u(t).