§ LINEAR ALGEBRA · 14 MIN READ · Updated 2026-05-13

Eigenvalues and Eigenvectors, Intuitively

The most important concept in linear algebra after matrix multiplication — and the one most students never see clearly.

"Eigenvalues are the secret of a matrix."
Gilbert Strang, MIT 18.06 lecture

When a matrix $A$ acts on a vector $v$, the result $Av$ is generally a different vector — different direction, different length. But for some special vectors, $Av$ points in the same direction as $v$, only stretched or compressed by a scalar factor. Those vectors are eigenvectors, and the scalar factor is the eigenvalue.

This sounds technical, and the definition is. But the intuition is simple: an eigenvector is a direction that the matrix preserves. When the matrix acts, the eigenvector stays on its own line — it just gets longer or shorter (or possibly flipped to the other side).

This article covers what eigenvalues and eigenvectors mean intuitively, the formal definition, how to compute them, diagonalization (the magic), and the four major applications: principal component analysis, Markov chains, dynamical systems, and Google PageRank.

The intuition: directions that don't change

Consider a 2×2 matrix and its action on the plane. A typical matrix rotates and scales most vectors — a vector $v$ becomes $Av$, pointing in a generally different direction.

For most matrices, there are exactly two special directions (in 2D) along which the matrix only stretches or compresses, without rotating. These are the eigenvector directions. A vector pointing along an eigenvector direction stays along that direction after the matrix acts.

Example 1: Consider the matrix $A = \begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix}$.

The vector $v_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ satisfies $Av_1 = \begin{pmatrix} 3 \\ 0 \end{pmatrix} = 3v_1$. So $v_1$ is an eigenvector with eigenvalue 3.

The vector $v_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$ satisfies $Av_2 = \begin{pmatrix} 0 \\ 2 \end{pmatrix} = 2v_2$. So $v_2$ is an eigenvector with eigenvalue 2.

The matrix stretches the x-axis by 3 and the y-axis by 2. Those two axes are the eigenvector directions.

For general matrices (not diagonal like this example), the eigenvectors are not aligned with the standard axes — they could point in any direction. But the same idea holds: there are special directions that the matrix only stretches.
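To see this numerically, here is a minimal NumPy sketch using the matrix from Example 1 (the vectors `v` and `w` are illustrative; `v` lies along an eigenvector direction, `w` does not):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [0.0, 2.0]])

v = np.array([1.0, 0.0])   # along an eigenvector direction (the x-axis)
w = np.array([1.0, 1.0])   # a generic vector, not along either axis

print(A @ v)   # [3. 0.] -- still along the x-axis, stretched by 3
print(A @ w)   # [3. 2.] -- no longer along the (1, 1) line
```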

Formal definition

An eigenvector of $A$ is a nonzero vector $v$ such that

$Av = \lambda v$

for some scalar $\lambda$. The scalar $\lambda$ is the eigenvalue corresponding to $v$.

Two technicalities:

  1. The zero vector is excluded — it trivially satisfies $A0 = \lambda \cdot 0$ for any $\lambda$, so calling it an eigenvector would be uninteresting.
  2. The eigenvalue can be zero. An eigenvector with eigenvalue zero is a nonzero vector that the matrix maps to zero: $Av = 0$.

How to compute eigenvalues

Rewrite the equation $Av = \lambda v$ as $Av - \lambda v = 0$, or $(A - \lambda I)v = 0$.

For this equation to have a nonzero solution $v$, the matrix $A - \lambda I$ must be singular — it must have determinant zero.

Characteristic equation: $\det(A - \lambda I) = 0$

This is a polynomial equation of degree $n$ in $\lambda$ (for an $n \times n$ matrix). Solve it to find the eigenvalues.

Example 2: Find the eigenvalues of $A = \begin{pmatrix} 4 & 1 \\ 2 & 3 \end{pmatrix}$.

Solve: $\det(A - \lambda I) = (4 - \lambda)(3 - \lambda) - 2 = \lambda^2 - 7\lambda + 10 = (\lambda - 5)(\lambda - 2) = 0$, so $\lambda = 5$ or $\lambda = 2$.

The eigenvalues are 5 and 2.
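As a quick machine check (a NumPy sketch; `np.poly` returns the characteristic-polynomial coefficients of a square matrix, and `np.roots` finds the roots):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Coefficients of det(A - lambda I) = lambda^2 - 7 lambda + 10.
coeffs = np.poly(A)
print(coeffs)            # [ 1. -7. 10.]
print(np.roots(coeffs))  # the eigenvalues, 5 and 2
```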

How to compute eigenvectors

For each eigenvalue $\lambda$, solve $(A - \lambda I)v = 0$ for $v$.

Example 3 (continued from Example 2): Find the eigenvectors of $A = \begin{pmatrix} 4 & 1 \\ 2 & 3 \end{pmatrix}$.

For $\lambda = 5$:

Solve $(A - 5I)v = \begin{pmatrix} -1 & 1 \\ 2 & -2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = 0$.

Both rows give $x = y$. So the eigenvector is $v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ (or any scalar multiple).

For $\lambda = 2$: $(A - 2I) = \begin{pmatrix} 2 & 1 \\ 2 & 1 \end{pmatrix}$.

Both rows give $2x + y = 0$, so $y = -2x$. The eigenvector is $v_2 = \begin{pmatrix} 1 \\ -2 \end{pmatrix}$.

So the eigenvectors are $v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ (eigenvalue 5) and $v_2 = \begin{pmatrix} 1 \\ -2 \end{pmatrix}$ (eigenvalue 2). You can verify: $Av_1 = \begin{pmatrix} 5 \\ 5 \end{pmatrix} = 5v_1$ and $Av_2 = \begin{pmatrix} 2 \\ -4 \end{pmatrix} = 2v_2$. ✓
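In practice you rarely solve this by hand — `np.linalg.eig` does both steps at once. A sketch with the same matrix (NumPy normalizes eigenvectors to unit length, so the columns come back as scalar multiples of $v_1$ and $v_2$):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # [5. 2.]
print(eigenvectors)   # columns proportional to (1, 1) and (1, -2)

# Verify the defining equation A v = lambda v for each pair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```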

Diagonalization

An $n \times n$ matrix is diagonalizable if it has a complete set of eigenvectors — $n$ linearly independent ones, enough to form a basis.

For a diagonalizable matrix $A$:

$A = PDP^{-1}$

where $D$ is a diagonal matrix with the eigenvalues on the diagonal, and $P$ is a matrix whose columns are the corresponding eigenvectors.

Why this is useful. Diagonal matrices are trivial to work with. Diagonalization expresses the action of a general matrix in a basis where the action is just scaling along each direction.

In particular, $A^k = (PDP^{-1})(PDP^{-1})\cdots(PDP^{-1}) = PD^kP^{-1}$, because the inner $P^{-1}P$ factors cancel — and computing $D^k$ is easy: just raise each diagonal entry to the $k$th power.

Example 4 (continued): For $A = \begin{pmatrix} 4 & 1 \\ 2 & 3 \end{pmatrix}$, take $P = \begin{pmatrix} 1 & 1 \\ 1 & -2 \end{pmatrix}$ (eigenvector columns) and $D = \begin{pmatrix} 5 & 0 \\ 0 & 2 \end{pmatrix}$.

You can verify $A = PDP^{-1}$. And to compute $A^k$, you compute $D^k = \begin{pmatrix} 5^k & 0 \\ 0 & 2^k \end{pmatrix}$, then $A^k = PD^kP^{-1}$.
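The same computation as a NumPy sketch (here `P` comes straight from `np.linalg.eig`, so its columns are unit-length multiples of the eigenvectors above):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(eigenvalues)

# Check the factorization A = P D P^{-1}.
assert np.allclose(A, P @ D @ np.linalg.inv(P))

# A^10 via diagonalization: raise only the diagonal entries to the 10th power.
A10 = P @ np.diag(eigenvalues ** 10) @ np.linalg.inv(P)
assert np.allclose(A10, np.linalg.matrix_power(A, 10))
```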

Not every matrix is diagonalizable. Matrices with fewer independent eigenvectors than dimensions (some eigenvalue's geometric multiplicity is less than its algebraic multiplicity) cannot be diagonalized — they require a more general decomposition called the Jordan canonical form. The classic example is the shear $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$: its only eigenvalue is 1, but its eigenvectors span just a single line.

Application 1: Principal component analysis (PCA)

PCA is the standard dimensionality-reduction technique in data analysis and machine learning.

Given a dataset, compute the covariance matrix — a symmetric matrix whose $(i, j)$ entry is the covariance between feature $i$ and feature $j$. The eigenvectors of this matrix are the principal components — the directions of maximum variance in the data. The corresponding eigenvalues measure how much variance each direction captures.

By projecting the data onto the top few principal components (those with the largest eigenvalues), you reduce dimension while preserving most of the structure. This is how PCA reduces a 1000-feature dataset to 50 features while keeping the meaningful patterns.
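A minimal sketch of this pipeline on synthetic data (the toy dataset and variable names are illustrative; `np.linalg.eigh` is the symmetric-matrix eigensolver, which returns eigenvalues in ascending order):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))          # toy data: 500 samples, 10 features

# Center the data, then form the covariance matrix.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)          # 10 x 10, symmetric

# Eigenvectors of the covariance matrix are the principal components.
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]   # sort by variance captured, descending

k = 2                                   # keep the top-2 components
components = eigenvectors[:, order[:k]]
X_reduced = Xc @ components             # projected data: 500 x 2
```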

Application 2: Markov chains

A Markov chain is a stochastic process that transitions between states with probabilities encoded in a transition matrix $P$. The long-term behavior of the chain is governed by the eigenvalues and eigenvectors of $P$.

The largest eigenvalue is always 1 (for a probability-preserving transition matrix). The corresponding eigenvector is the stationary distribution $\pi$, with $P\pi = \pi$ — the long-run proportions of time spent in each state.

If you simulate a Markov chain for many steps, the distribution converges to the stationary eigenvector. The other eigenvalues control how fast this convergence happens: the closer the second-largest eigenvalue magnitude is to 1, the slower the mixing.
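A sketch with a hypothetical two-state chain (column-stochastic convention: entry $(i, j)$ is the probability of moving from state $j$ to state $i$):

```python
import numpy as np

# Transition matrix: columns sum to 1.
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])

eigenvalues, eigenvectors = np.linalg.eig(P)

# Take the eigenvector for eigenvalue 1 and normalize it to sum to 1.
idx = np.argmin(np.abs(eigenvalues - 1.0))
pi = np.real(eigenvectors[:, idx])
pi = pi / pi.sum()
print(pi)   # stationary distribution: [2/3, 1/3]

# Sanity check: running the chain for many steps converges to pi.
assert np.allclose(np.linalg.matrix_power(P, 100) @ np.array([1.0, 0.0]), pi)
```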

Application 3: Dynamical systems

A linear dynamical system has the form $x_{k+1} = Ax_k$. Iterating: $x_k = A^k x_0$.

The long-term behavior of this system depends on the eigenvalues of $A$. Writing $x_0$ in the eigenvector basis (assuming $A$ is diagonalizable), $x_0 = c_1 v_1 + \cdots + c_n v_n$, gives $x_k = c_1 \lambda_1^k v_1 + \cdots + c_n \lambda_n^k v_n$, so each component grows or decays like $\lambda_i^k$:

  • If all eigenvalues have magnitude less than 1, $x_k \to 0$. The system is stable.
  • If any eigenvalue has magnitude greater than 1, $x_k$ grows without bound along that eigenvector direction. The system is unstable.
  • If the largest eigenvalue has magnitude exactly 1, the system can settle into a nonzero steady state along that eigenvector (a Markov chain is exactly this case).

Continuous-time dynamical systems ($\dot{x} = Ax$) have a similar analysis using the real parts of the eigenvalues.
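A numerical sketch of the stable case (the matrix is illustrative; its spectral radius, the largest eigenvalue magnitude, is 0.9):

```python
import numpy as np

A = np.array([[0.5, 0.4],
              [0.3, 0.6]])

# Stability test: all eigenvalue magnitudes below 1 means x_k -> 0.
spectral_radius = max(abs(np.linalg.eigvals(A)))
print(spectral_radius)   # 0.9

x = np.array([1.0, 1.0])
for _ in range(200):
    x = A @ x
print(x)                 # effectively zero: the system is stable
```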

Application 4: Google PageRank

The original PageRank algorithm, which made Google possible, is an eigenvector computation.

Model the web as a graph where pages are nodes and links are directed edges. Define a transition matrix $M$ where $M_{ij}$ is the probability of moving from page $j$ to page $i$ (uniform across $j$'s outgoing links). The PageRank vector — the importance score for each page — is the dominant eigenvector of $M$ (with eigenvalue 1).

In essence: a page is important if other important pages link to it. This recursive definition resolves to an eigenvector equation, which is solvable.

The original Google paper (Brin and Page, 1998) is essentially a paper about computing the dominant eigenvector of a very large sparse matrix.
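A minimal power-iteration sketch on a hypothetical four-page web (the link structure, damping factor, and iteration count are illustrative; repeated multiplication converges to the dominant eigenvector):

```python
import numpy as np

# Tiny web: links[j] lists the pages that page j links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = 4

# Column-stochastic link matrix: M[i, j] = 1/outdegree(j) if j links to i.
M = np.zeros((n, n))
for j, outs in links.items():
    for i in outs:
        M[i, j] = 1.0 / len(outs)

# Damped matrix, following Brin and Page (1998): mostly follow links,
# occasionally jump to a random page.
d = 0.85
G = d * M + (1 - d) / n * np.ones((n, n))

# Power iteration: repeated multiplication pulls out the dominant eigenvector.
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = G @ rank
print(rank)   # importance scores, summing to 1
```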


Frequently asked

Why are eigenvalues called "eigen"?
From the German word *eigen*, meaning "own" or "characteristic." The eigenvalue is the matrix's "own" value — the scaling factor along its preserved directions. The term was popularized by David Hilbert in 1904.
Can a matrix have no real eigenvalues?
Real matrices can have complex eigenvalues. The matrix $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ (a 90° rotation) has eigenvalues $i$ and $-i$. Geometrically: a pure rotation has no fixed direction (in the real plane), so it has no real eigenvectors.
Do all matrices have eigenvalues?
Square matrices always have eigenvalues — possibly complex. Non-square matrices don't, but they have *singular values* (computed from the SVD, a more general decomposition).
What is the geometric multiplicity of an eigenvalue?
The dimension of the eigenspace (the space of eigenvectors with that eigenvalue). It can be less than the algebraic multiplicity (how many times the eigenvalue appears as a root of the characteristic polynomial). When they're equal for every eigenvalue, the matrix is diagonalizable.
Can an eigenvalue be zero?
Yes. A zero eigenvalue means the matrix maps the corresponding eigenvector to zero. Equivalently, the matrix is singular (not invertible).
Why is PCA computed using eigenvectors of the covariance matrix?
Because the covariance matrix encodes the linear structure of the data, and its eigenvectors are the directions of maximum variance. The largest eigenvalue corresponds to the direction along which the data varies most.


Cited works & further reading

  • Strang, G. (2016). Introduction to Linear Algebra, 5th edition. Wellesley-Cambridge Press. Chapter 6.
  • Brin, S. and Page, L. (1998). "The Anatomy of a Large-Scale Hypertextual Web Search Engine."
  • Axler, S. (2024). Linear Algebra Done Right, 4th edition. Springer.


About the author

Tim Sheludyakov writes the Stoa library.

By Tim Sheludyakov · Edited 2026-05-13
