Module III · Article III

Taylor Formula and Applications

Derivative and Differential


The Idea of Polynomial Approximation

Polynomials are the simplest functions: their computation requires only addition and multiplication. Taylor's idea is to approximate an arbitrary function with a polynomial that coincides with the function at a given point as closely as possible.

"Coincide" means: the polynomial and the function match at point $a$ in value, in the first derivative, in the second, ..., up to the $n$-th derivative. This polynomial is unique and is called the Taylor polynomial.

Taylor Formula

$P_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!} \cdot (x-a)^k = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \ldots + \frac{f^{(n)}(a)}{n!}(x-a)^n$

Remainder term in Lagrange form: $R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!} (x-a)^{n+1}$ for some $c$ between $a$ and $x$.
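As a sketch in Python (the function and variable names are illustrative), the partial sum $P_n$ can be evaluated directly from a list of derivative values at $a$:

```python
from math import factorial

def taylor_poly(derivs, a, x):
    """Evaluate P_n(x) = sum of derivs[k]/k! * (x - a)^k.

    derivs[k] is the k-th derivative of f at a (derivs[0] = f(a)).
    """
    return sum(d / factorial(k) * (x - a) ** k for k, d in enumerate(derivs))

# Example: f = e^x at a = 0, where every derivative equals 1.
p3 = taylor_poly([1.0, 1.0, 1.0, 1.0], a=0.0, x=1.0)
print(p3)  # 2.666..., already close to e = 2.71828...
```

Adding more derivative values tightens the approximation, exactly as the remainder formula predicts.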

Series Expansions of Standard Functions ($a = 0$, Maclaurin series)

$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \ldots + \frac{x^n}{n!} + O(x^{n+1})$

$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \ldots + (-1)^{n-1} \frac{x^{2n-1}}{(2n-1)!} + O(x^{2n+1})$

$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \ldots + (-1)^{n-1} \frac{x^{2n-2}}{(2n-2)!} + O(x^{2n})$

$\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \ldots + O(x^{n+1}),\ -1 < x \le 1$

$(1+x)^\alpha = 1 + \alpha x + \frac{\alpha(\alpha-1)}{2!} x^2 + \ldots$ (binomial series)
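These expansions are easy to check numerically; a minimal sketch (helper names are illustrative):

```python
import math

def maclaurin_exp(x, n):
    """Partial sum of the e^x series through the x^n term."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

def maclaurin_sin(x, terms):
    """Partial sum x - x^3/3! + ... with the given number of nonzero terms."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

print(maclaurin_exp(1.0, 10) - math.e)        # tiny residual
print(maclaurin_sin(0.5, 4) - math.sin(0.5))  # tiny residual
```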

Applications of the Taylor Formula

Limit calculation: $\lim_{x\to 0} (e^x - 1 - x)/x^2 = \lim_{x\to 0} \left( \frac{x^2}{2} + O(x^3) \right)/x^2 = 1/2$.

Approximate computations with error estimation: $\sin(0.1) \approx 0.1 - (0.1)^3/6 = 0.1 - 0.000167 = 0.099833$. The error $\leq (0.1)^5/120 \approx 8.3 \cdot 10^{-8}$.
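The estimate can be verified directly with a quick numeric check:

```python
import math

approx = 0.1 - 0.1 ** 3 / 6   # two-term Taylor polynomial for sin(0.1)
bound = 0.1 ** 5 / 120        # Lagrange bound on the remainder
actual = abs(math.sin(0.1) - approx)
print(approx, actual, bound)  # the actual error stays under the bound
```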

Analysis of extrema when $f'(a) = f''(a) = 0$: Look at the first nonzero term in the expansion.

Applications in Physics and Engineering

The Taylor formula is a tool for linearization. When a physicist writes "small oscillations", they expand the potential energy in a Taylor series and keep the quadratic term. When an engineer analyzes the stability of a system, they linearize the equation in the neighborhood of equilibrium.
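As a quick numeric illustration of the "small oscillations" idea: for a pendulum the potential is proportional to $1 - \cos\theta$, and for small angles the quadratic Taylor term $\theta^2/2$ captures almost all of it.

```python
import math

theta = 0.05                 # a small angle in radians
quadratic = theta ** 2 / 2   # the term kept under "small oscillations"
exact = 1 - math.cos(theta)  # the full potential shape
print(exact, quadratic)      # agree to about theta^4 / 24
```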

The formula $e^{ix} = \cos x + i\sin x$ (Euler's formula) is derived from the expansions of $e^x$, $\sin x$, and $\cos x$ in Taylor series—this is one of the most beautiful formulas in mathematics, linking the exponential function with trigonometric functions.
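A quick check of Euler's formula through the series: summing the exponential series at $ix$ reproduces $\cos x + i\sin x$ (the number of terms here is arbitrary but ample):

```python
import math

x = 0.7
# Partial sum of e^(ix) = sum over k of (ix)^k / k!
s = sum((1j * x) ** k / math.factorial(k) for k in range(20))
print(s)
print(complex(math.cos(x), math.sin(x)))  # matches to machine precision
```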

Approximation Error and Practice

Estimating the remainder term via the Lagrange formula $|R_n(x)| \le \frac{M_{n+1}}{(n+1)!} |x-a|^{n+1}$ is critically important in computational practice. Consider the task: compute $e$ with accuracy $10^{-6}$. We need $n$ such that $|R_n(1)| = \frac{e^c}{(n+1)!} \le \frac{3}{(n+1)!} < 10^{-6}$, using $e^c < 3$ for $0 < c < 1$. Checking values: $3/9! \approx 8.3 \cdot 10^{-6}$ (too large), $3/10! \approx 8.3 \cdot 10^{-7} < 10^{-6}$. So $n = 9$: ten terms ($k = 0$ to $9$) suffice.
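The search for $n$ can be automated; a small sketch that finds the smallest $n$ with $3/(n+1)! < 10^{-6}$ (using $e^c < 3$ on $(0, 1)$):

```python
from math import e, factorial

# Smallest n whose Lagrange remainder bound at x = 1 drops below 1e-6.
n = 0
while 3 / factorial(n + 1) >= 1e-6:
    n += 1
print(n)  # 9, i.e. ten terms k = 0..9

approx = sum(1 / factorial(k) for k in range(n + 1))
print(abs(e - approx))  # comfortably below 1e-6
```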

Taylor Method in Numerical Analysis

The Taylor formula is the theoretical foundation of numerical integration and differentiation. The classical fourth-order Runge–Kutta method for ODEs matches the Taylor expansion of the solution through the $h^4$ term, giving a local error of $O(h^5)$ at each step. Numerical differentiation: the central difference $(f(x+h) - f(x-h))/(2h) = f'(x) + O(h^2)$ has second-order error, because the even-degree terms cancel when the Taylor expansions of $f(x+h)$ and $f(x-h)$ are subtracted.
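The second-order behavior of the central difference is easy to observe: halving $h$ should cut the error roughly fourfold (a sketch with illustrative names):

```python
import math

def central_diff(f, x, h):
    """Central difference approximation to f'(x), error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.0
e1 = abs(central_diff(math.sin, x, 1e-2) - math.cos(x))
e2 = abs(central_diff(math.sin, x, 5e-3) - math.cos(x))
print(e1 / e2)  # close to 4: halving h divides the error by about 2^2
```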

Difficulty: for small $h$, rounding error in computer computations becomes significant. For the forward difference the optimal step is $h \approx \sqrt{\varepsilon}$ ($\varepsilon$ is machine precision), about $10^{-8}$ in 64-bit arithmetic; for the central difference it is $h \approx \varepsilon^{1/3} \approx 6 \cdot 10^{-6}$.
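This trade-off shows up immediately in an experiment: shrinking $h$ first improves the central-difference error, then ruins it (a sketch; the specific $h$ values are arbitrary):

```python
import math

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

# Truncation error falls as h^2, while rounding error grows as eps / h:
errors = {h: abs(central_diff(math.exp, 1.0, h) - math.e)
          for h in (1e-2, 1e-5, 1e-12)}
for h, err in errors.items():
    print(h, err)  # the middle step wins; h = 1e-12 is far worse
```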

Multivariate Taylor Series and Hessian Matrix

For the function $f(x_1, \ldots, x_n)$, Taylor series around point $a$ to second order:

$f(a + h) \approx f(a) + \nabla f(a)^T h + \frac{1}{2} h^T H(a) h,$

where $H$ is the Hessian matrix (matrix of second derivatives). This quadratic form determines whether point $a$ is a minimum ($H$ positive definite), maximum ($H$ negative definite), or saddle point ($H$ indefinite). This fact underlies second-order optimization methods (Newton's method in the multivariate case).
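For a quadratic function the second-order model is exact, so a single Newton step lands on the minimum; a minimal sketch with a hand-written gradient and Hessian (the function here is an arbitrary example):

```python
def newton_step(x, y):
    """One Newton step for f(x, y) = (x - 1)^2 + 2*(y + 3)^2."""
    gx, gy = 2 * (x - 1), 4 * (y + 3)  # gradient
    hxx, hyy = 2.0, 4.0                # Hessian is diagonal and constant here
    return x - gx / hxx, y - gy / hyy  # the step solves H * step = -grad

print(newton_step(5.0, 5.0))  # (1.0, -3.0): the exact minimizer, in one step
```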

Remainder Estimation and Practical Accuracy

The Lagrange remainder $R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!} (x-a)^{n+1}$ allows estimation of the approximation's accuracy. For $\sin x$: $|R_n(x)| \le |x|^{n+1}/(n+1)!$—at $x = 0.1$ and $n = 3$ the error does not exceed $0.1^4/24 \approx 4 \cdot 10^{-6}$. This makes Taylor polynomials a practical tool of computational mathematics: numerical libraries implement trigonometric and exponential functions via argument reduction and polynomial approximations (Taylor-type or minimax polynomials) with controlled error.

Question for reflection: Why does the Taylor expansion of $e^x$ converge for all real $x$, while the expansion of $1/(1+x^2)$ converges only for $|x| < 1$? How is this connected to the poles in the complex plane?

Calculating Limits Using Taylor Expansion

The Taylor expansion is a powerful tool for limit calculation.

Example 1: $\lim_{x \to 0} (e^x - 1 - x)/x^2 = \lim (1 + x + x^2/2 + \ldots - 1 - x)/x^2 = \lim (x^2/2 + O(x^3))/x^2 = 1/2$.

Example 2: $\lim_{x \to 0} (\sin x - x)/x^3 = \lim (x - x^3/6 + \ldots - x)/x^3 = -1/6$.

Example 3: $\lim_{x \to 0} (1 - \cos x)/x^2 = \lim (x^2/2 - x^4/24 + \ldots)/x^2 = 1/2$.

In all cases, the method is the same: expand the numerator to the required order and cancel with the denominator. This approach is often far more efficient than repeated application of L'Hospital's rule.
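These limits can be sanity-checked numerically by evaluating the ratios at a small $x$ (pure-stdlib sketch; the sample points are arbitrary):

```python
import math

def r1(x): return (math.exp(x) - 1 - x) / x ** 2  # -> 1/2
def r2(x): return (math.sin(x) - x) / x ** 3      # -> -1/6
def r3(x): return (1 - math.cos(x)) / x ** 2      # -> 1/2

print(r1(1e-4), r2(1e-3), r3(1e-4))
```

Taking $x$ too small would invite the same cancellation problem discussed for numerical differentiation, so moderate values are used.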
