Optimal Control
Optimal control: Pontryagin’s maximum principle, LQR, MPC, stochastic control, and reinforcement learning
5 Modules · 15 Articles · ~2 h Reading · IV CLOs
§ 01 — Curriculum
5 modules.
Each module is a small unit. They are best read in sequence, but a determined reader can begin anywhere.
- M I · Calculus of Variations: The Lagrange problem, the Euler–Lagrange equation, and classical problems. 3 articles · 18 min
- M II · Pontryagin’s Maximum Principle: Optimal control in continuous time, the Hamiltonian, and adjoint variables. 3 articles · 18 min
- M III · Bellman’s Dynamic Programming: The principle of optimality, the Bellman equation, and the value function. 3 articles · 18 min
- M IV · Linear Control and Stability: Linear systems, controllability, observability, and PID controllers. 3 articles · 18 min
- M V · Stochastic Optimal Control: Stochastic systems, the Kalman filter, and stochastic dynamic programming. 3 articles · 18 min
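To give a flavour of Module III before diving in, here is a minimal sketch (not course code; the shortest-path problem and all numbers are made up for illustration) of value iteration, the fixed-point scheme behind the Bellman equation V(x) = min_u [cost(x,u) + V(f(x,u))]:

```python
import numpy as np

# Illustrative toy problem: states 0..4 on a line, controls move left
# or right, state 4 is the goal (zero terminal cost), each step costs 1.
n_states = 5
goal = 4

V = np.zeros(n_states)            # value function estimate
for _ in range(50):               # iterate toward the fixed point
    V_new = np.empty(n_states)
    for x in range(n_states):
        if x == goal:
            V_new[x] = 0.0        # terminal state costs nothing
            continue
        successors = [max(x - 1, 0), min(x + 1, n_states - 1)]
        V_new[x] = min(1.0 + V[s] for s in successors)
    V = V_new

print(V)  # converges to the distance-to-goal: [4. 3. 2. 1. 0.]
```

The same backward-induction idea, applied to continuous states and a quadratic cost, is what produces the Riccati equation met in Module IV.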
§ 02 — Learning outcomes
4 outcomes.
CLO I
Maximum Principle
Apply Pontryagin’s maximum principle to optimal control problems.
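The conditions behind CLO I can be summarized as follows (a standard statement, written here in the minimum convention common in control texts; sign conventions vary by book):

```latex
% Minimize J = \phi(x(T)) + \int_0^T L(x,u)\,dt subject to \dot{x} = f(x,u):
\begin{aligned}
H(x,u,\lambda) &= L(x,u) + \lambda^{\top} f(x,u)
  && \text{(Hamiltonian)} \\
\dot{x}^{*} &= \partial H/\partial \lambda = f(x^{*},u^{*})
  && \text{(state equation)} \\
\dot{\lambda} &= -\,\partial H/\partial x, \qquad
  \lambda(T) = \partial \phi/\partial x \big|_{t=T}
  && \text{(adjoint equation)} \\
u^{*}(t) &= \operatorname*{arg\,min}_{u}\; H\!\left(x^{*}(t),\,u,\,\lambda(t)\right)
  && \text{(minimum condition)}
\end{aligned}
```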
CLO II
LQR and MPC
Design linear–quadratic regulators and model predictive controllers.
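As a taste of CLO II, the finite-horizon discrete-time LQR gain can be computed by a backward Riccati recursion. This is an illustrative sketch only; the system matrices and weights below are made-up toy values, not taken from the course:

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator-like dynamics
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                            # state cost weight
R = np.array([[0.1]])                    # control cost weight
N = 50                                   # horizon length

P = Q.copy()                             # terminal cost-to-go P_N = Q
gains = []
for _ in range(N):                       # sweep backward, k = N-1 .. 0
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal gain
    P = Q + A.T @ P @ (A - B @ K)        # Riccati update
    gains.append(K)
gains.reverse()                          # gains[k] is the gain at step k

# Closed-loop rollout from x0 under u_k = -K_k x_k
x = np.array([[1.0], [0.0]])
for K in gains:
    x = A @ x - B @ (K @ x)
print(np.linalg.norm(x))                 # state is driven toward the origin
```

MPC reuses exactly this finite-horizon computation, but re-solves it at every step from the current state and applies only the first control.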
CLO III
Stochastic Control
Solve stochastic control problems using the Hamilton–Jacobi–Bellman equation.
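The estimation side of Module V can be previewed with a scalar Kalman filter. A minimal sketch, assuming a constant hidden state and made-up noise variances (none of these numbers come from the course):

```python
import numpy as np

rng = np.random.default_rng(0)

x_true = 1.0          # constant hidden state to be estimated
q, r = 0.0, 0.25      # process and measurement noise variances (toy values)

x_hat, P = 0.0, 1.0   # initial estimate and its error variance
for _ in range(100):
    y = x_true + rng.normal(0.0, np.sqrt(r))   # noisy measurement
    P = P + q                                  # predict (state is constant)
    K = P / (P + r)                            # Kalman gain
    x_hat = x_hat + K * (y - x_hat)            # measurement update
    P = (1.0 - K) * P                          # posterior variance shrinks

print(x_hat)  # close to the true value 1.0
```

In the linear-quadratic-Gaussian setting, this estimator and the LQR controller can be designed separately and composed, which is the separation principle the module builds toward.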
CLO IV
Reinforcement Learning
Relate optimal control to reinforcement learning methods.
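The link in CLO IV is concrete: tabular Q-learning is a sampled, model-free version of the Bellman optimality update. A minimal sketch on the same toy shortest-path problem (states 0..4 on a line, goal at 4, step reward -1; all constants are illustrative choices, not course material):

```python
import random

random.seed(0)

n, goal = 5, 4
actions = [-1, +1]                            # move left / move right
Q = [[0.0, 0.0] for _ in range(n)]            # Q[state][action]
alpha, gamma, eps = 0.5, 1.0, 0.2             # step size, discount, exploration

for _ in range(2000):                         # episodes from random starts
    x = random.randrange(n - 1)               # any non-goal state
    while x != goal:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[x][i])
        x2 = min(max(x + actions[a], 0), n - 1)
        # sampled Bellman update: no model of the dynamics is used
        target = -1.0 + gamma * (0.0 if x2 == goal else max(Q[x2]))
        Q[x][a] += alpha * (target - Q[x][a])
        x = x2

# The greedy policy recovers "move right" in every non-goal state
print([max(range(2), key=lambda i: Q[s][i]) for s in range(goal)])
```

Where dynamic programming needs the transition model f(x,u), Q-learning estimates the same value function from sampled transitions alone; that substitution is the bridge from optimal control to reinforcement learning.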
§ 03 — Practices