
The general constant coefficient system of differential equations has the form dX/dt = CX, where C is a constant n × n matrix of coefficients.

In Section ??, we plotted the phase space picture of the planar system of differential equations

Thus, using matrix multiplication, we are able to prove analytically that there are solutions to (??) of exactly the type suggested by our MATLAB experiments. However, even more is true, and this extension is based on the principle of superposition that was introduced for algebraic equations in Section ??.

Superposition in Linear Differential Equations

Consider a general linear differential equation of the form

Initial Value Problems

Thus we can solve our prescribed initial value problem if we can solve the system of linear equations

Eigenvectors and Eigenvalues

Note that nonzero scalar multiples of eigenvectors are also eigenvectors. More precisely: if v is an eigenvector of A with eigenvalue λ and c ≠ 0, then A(cv) = cAv = cλv = λ(cv), so cv is again an eigenvector of A with the same eigenvalue λ.

We have proved the following theorem.

An Example of a Matrix with No Real Eigenvalues

In Exercises ?? – ??, use MAP to find an (approximate) eigenvector for the given matrix. Hint: choose a vector in MAP and repeatedly click the button Map until the vector maps to a multiple of itself. You may wish to use the Rescale feature in the MAP Options; then the length of the vector is rescaled to one after each use of the command Map. In this way you can avoid overflows in the computations while still being able to see the directions in which the vectors are moved by the matrix mapping. The coordinates of the new vector obtained by applying Map can be viewed in the Vector input window.
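Each click of Map is one step of power iteration; with Rescale on, the vector is renormalized after every step. A minimal Wolfram Language sketch of the same procedure, with a made-up matrix m and starting vector v (neither is taken from the exercises):

m = {{2., 1.}, {1., 3.}};        (* hypothetical example matrix *)
v = Normalize[{1., 0.}];         (* starting vector *)
Do[v = Normalize[m . v], {50}];  (* 50 "Map" clicks with Rescale switched on *)
v                                (* approximate eigenvector *)
Norm[m . v]                      (* approximate |eigenvalue|, since v has length 1 *)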

  • Compare the results of the two plots.

Eigenvalues


Eigenvalues[m]

gives a list of the eigenvalues of the square matrix m.

Eigenvalues[{m, a}]

gives the generalized eigenvalues of m with respect to a.

Eigenvalues[m, k]

gives the first k eigenvalues of m.

Eigenvalues[{m, a}, k]

gives the first k generalized eigenvalues.
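A minimal illustration of these call forms (the small matrices here are made-up examples, not taken from this documentation):

m = {{4, 1}, {2, 3}};
Eigenvalues[m]         (* {5, 2} *)
Eigenvalues[m, 1]      (* {5}, the largest in absolute value *)
a = {{1, 0}, {0, 2}};
Eigenvalues[{m, a}]    (* generalized eigenvalues of m with respect to a *)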

Details and Options


  • Eigenvalues finds numerical eigenvalues if m contains approximate real or complex numbers.
  • Repeated eigenvalues appear with their appropriate multiplicity.


  • If they are numeric, eigenvalues are sorted in order of decreasing absolute value.


  • Ordinary eigenvalues are always finite; generalized eigenvalues can be infinite.


  • For numeric eigenvalues, Eigenvalues[m, k] gives the k that are largest in absolute value.
  • Eigenvalues[m, -k] gives the k that are smallest in absolute value (see the sketch after this list).
  • Eigenvalues[m, spec] is always equivalent to Take[Eigenvalues[m], spec].
  • Eigenvalues[m, UpTo[k]] gives k eigenvalues, or as many as are available.
  • SparseArray objects and structured arrays can be used in Eigenvalues .
  • Eigenvalues has the following options and settings:
  • Explicit Method settings for approximate numeric matrices include:
  • The "Arnoldi" method is also known as a Lanczos method when applied to symmetric or Hermitian matrices.
  • The "Arnoldi" and "FEAST" methods take suboptions Method -> { " name " , opt 1 -> val 1 , … } , which can be found in the Method subsection.

Basic Examples     (4)

Machine-precision numerical eigenvalues:

Eigenvalues of an arbitrary-precision matrix:

Eigenvalues of an exact matrix:

Symbolic eigenvalues:

Scope     (19)

Basic uses     (6)

Find the eigenvalues of a machine-precision matrix:

Approximate 20-digit precision eigenvalues:

Eigenvalues of a complex matrix:

Exact eigenvalues:

The eigenvalues of large numerical matrices are computed efficiently:

Eigenvalues of a CenteredInterval matrix:

Find the eigenvalues for a random representative mrep of m:

Verify that, after reordering, vals contains rvals:

Subsets of Eigenvalues     (5)

The largest five eigenvalues:

Three smallest eigenvalues:

Find the four largest eigenvalues, or as many as there are if fewer:

Repeated eigenvalues are listed multiple times:

Repeats are considered when extracting a subset of the eigenvalues:

Generalized Eigenvalues     (4)

Generalized machine-precision eigenvalues:

Generalized exact eigenvalues:

Compute the result at finite precision:

Find the generalized eigenvalues of symbolic matrices:

Find the two smallest generalized eigenvalues:

Special Matrices     (4)

Eigenvalues of sparse matrices:

Eigenvalues of structured matrices:

IdentityMatrix always has all-one eigenvalues:

Eigenvalues of HilbertMatrix:

Options     (10)

Cubics     (1)

Eigenvalues uses Root to compute exact eigenvalues:

Explicitly use the cubic formula to get the result in terms of radicals:

Method     (8)

"arnoldi"     (5).

The Arnoldi method can be used for machine- and arbitrary-precision matrices. The implementation of the Arnoldi method is based on the "ARPACK" library. It is most useful for large sparse matrices.
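For example, a minimal sketch (the sparse tridiagonal test matrix here is a made-up example):

m = SparseArray[{Band[{1, 1}] -> 2., Band[{2, 1}] -> -1., Band[{1, 2}] -> -1.}, {1000, 1000}];
Eigenvalues[m, 4, Method -> "Arnoldi"]   (* four largest-magnitude eigenvalues *)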

The following options can be specified for the method "Arnoldi":

Possible settings for "Criteria" include:


By default, "Criteria"->"Magnitude" selects a largest-magnitude eigenvalue:

Find the largest real-part eigenvalues:

Find the largest imaginary-part eigenvalue:

Find two eigenvalues from both ends of the matrix spectrum:

Use "StartingVector" to avoid randomness:

Different starting vectors may converge to different eigenvalues:


Manually shift the matrix and adjust the resulting eigenvalue:

Automatically shift and adjust the eigenvalue:

"Banded"     (1)

The banded method can be used for real symmetric or complex Hermitian machine-precision matrices. The method is most useful for finding all eigenvalues.

Compute the two largest eigenvalues for a banded matrix:

"FEAST"     (2)

The FEAST method can be used for real symmetric or complex Hermitian machine-precision matrices. The method is most useful for finding eigenvalues in a given interval.
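The exact call shape below is an assumption pieced together from the "Interval" suboption named in this section; the tridiagonal test matrix and the interval {0, 1} are illustrative only (recall from Possible Issues that the endpoints are excluded):

m = SparseArray[{Band[{1, 1}] -> 2., Band[{2, 1}] -> -1., Band[{1, 2}] -> -1.}, {200, 200}];
Eigenvalues[m, Method -> {"FEAST", "Interval" -> {0, 1}}]   (* eigenvalues strictly inside the interval *)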

The following suboptions can be specified for the method "FEAST":


Use "Interval" to specify the interval:


Quartics     (1)

A 4 × 4 matrix:

In general, for a 4 × 4 matrix, the result will be given in terms of Root objects:

You can get the result in terms of radicals using the Cubics and Quartics options:

Applications     (15)

The geometry of eigenvalues     (3)

Eigenvectors with positive eigenvalues point in the same direction when acted on by the matrix:

Eigenvectors with negative eigenvalues point in the opposite direction when acted on by the matrix:


The sign of the eigenvalue corresponds to the sign of the right-hand side of the hyperbola equation:

Here is a positive-definite quadratic form in three dimensions:


Get the symmetric matrix for the quadratic form, using CoefficientArrays :

Numerically compute its eigenvalues and eigenvectors:

Show the principal axes of the ellipsoid:

Diagonalization     (4)

m = p.d.Inverse[p]

Note that this is simply the diagonal matrix whose entries are the eigenvalues:

s = o.d.Transpose[o]

Computing the eigenvalues, they are real, as expected:


For an orthogonal matrix, it is necessary to normalize the eigenvectors before placing them in columns:

s = o.d.Transpose[o]

Show that the following matrix is normal, then diagonalize it:

Confirm using NormalMatrixQ :

The eigenvalues of a real normal matrix that is not also symmetric are complex valued:

Construct a diagonal matrix from the eigenvalues:

Compute the eigenvectors:

Normalizing the eigenvectors and putting them in columns gives a unitary matrix:

n = u.d.ConjugateTranspose[u]

Differential Equations and Dynamical Systems     (4)


Find the eigenvalues and eigenvectors:


Construct the matrix whose columns are the corresponding eigenvectors:

p.d.Inverse[p].{C[1], C[2], C[3]}

Verify the solution using DSolveValue :


Construct the appropriate linear combination of the eigenvectors:


Find the eigenvalues and eigenvectors, using Chop to discard small numerical errors:


The Lorenz equations:

Find the Jacobian for the right-hand side of the equations:

Find the equilibrium points:

Find the eigenvalues and eigenvectors of the Jacobian at the one in the first octant:

A function that integrates backward from a small perturbation of pt in the direction dir :

Show the stable curve for the equilibrium point on the right:

Find the stable curve for the equilibrium point on the left:

Show the stable curves along with a solution of the Lorenz equations:

Physics     (4)


Find the eigenvectors and normalize them in order to compute proper projections:


Find and normalize the eigenvectors:


The moment of inertia is a real symmetric matrix that describes the resistance of a rigid body to rotating in different directions. The eigenvalues of this matrix are called the principal moments of inertia, and the corresponding eigenvectors (which are necessarily orthogonal) the principal axes. Find the principal moments of inertia and principal axes for the following tetrahedron:

First compute the moment of inertia:


Verify that the axes are orthogonal:

The center of mass of the tetrahedron is at the origin:

Visualize the tetrahedron and its principal axes:

A generalized eigensystem can be used to find normal modes of coupled oscillations that decouple the terms. Consider the system shown in the diagram:


The shapes of the modes are derived from the generalized eigenvectors:

Construct the normal mode solutions as a generalized eigenvector times the corresponding exponential:

Verify that both satisfy the differential equation for the system:

Properties & Relations     (15)

Eigenvalues[m] is effectively the first element of the pair returned by Eigensystem:

If both eigenvectors and eigenvalues are needed, it is generally more efficient to just call Eigensystem :

The eigenvalues are the roots of the characteristic polynomial:

Compute the polynomial with CharacteristicPolynomial :

Verify the equality:
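For example, with an arbitrary 2 × 2 matrix:

m = {{1, 2}, {3, 4}};
CharacteristicPolynomial[m, x]                   (* -2 - 5 x + x^2 *)
Solve[CharacteristicPolynomial[m, x] == 0, x]    (* x -> (5 ± Sqrt[33])/2 *)
Eigenvalues[m]                                   (* the same two roots *)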

Det[a - b λ]

The generalized characteristic polynomial defines the finite eigenvalues only:


The product of the eigenvalues of m equals Det[m]:

The sum of the eigenvalues of m equals Tr[m]:
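A quick confirmation of both identities on a small example matrix:

m = {{1, 2}, {3, 4}};
vals = Eigenvalues[m];
Simplify[Times @@ vals == Det[m]]   (* True *)
Simplify[Total[vals] == Tr[m]]      (* True *)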


The converse is false:


Because Eigenvalues sorts by absolute value, this gives the same values but in the opposite order:


The eigenvalues of a real symmetric matrix are real:

So are the eigenvalues of any Hermitian matrix:

The eigenvalues of a real antisymmetric matrix are imaginary:

So are the eigenvalues of any antihermitian matrix:

The eigenvalues of an orthogonal matrix lie on the unit circle:

So do the eigenvalues of any unitary matrix:

ConjugateTranspose[m].m

The t matrix is diagonal with the eigenvalues as its entries, possibly in a different order from Eigensystem:


Possible Issues     (5)

Eigenvalues and Eigenvectors are not absolutely guaranteed to give results in corresponding order:

The sixth and seventh eigenvalues are essentially equal and opposite:

In this particular case, the seventh eigenvector does not correspond to the seventh eigenvalue:

Instead it corresponds to the sixth eigenvalue:

Use Eigensystem[mat] to ensure corresponding results always match:
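A sketch of the safer pattern, using a small example matrix:

m = {{0, 1}, {-2, -3}};
{vals, vecs} = Eigensystem[m];
(* vecs[[i]] is guaranteed to be an eigenvector for vals[[i]] *)
Table[m . vecs[[i]] == vals[[i]] vecs[[i]], {i, Length[vals]}]   (* {True, True} *)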

The general symbolic case very quickly gets very complicated:

The expression sizes increase faster than exponentially:

Here is a 20 × 20 Hilbert matrix:

Compute the smallest eigenvalue exactly and give its numerical value:

Compute the smallest eigenvalue with machine-number arithmetic:

The smallest eigenvalue is not significant compared to the largest:

Use sufficient precision for the numerical computation:

When eigenvalues are closely grouped, the iterative method for sparse matrices may not converge:

The iteration has not converged well after 1000 iterations:

You can give the algorithm a shift near the expected value to speed up convergence:

The endpoints given to an interval as specified for the FEAST method are not included. Set up a matrix with eigenvalues at 3 and 9:


Eigenvectors   Eigensystem   NDEigenvalues   DEigenvalues   SingularValueList   CharacteristicPolynomial   Det   Tr   PositiveDefiniteMatrixQ

  • Vectors and Matrices
  • Eigenvalues and Eigenvectors

Related Guides

  • Matrix Operations
  • Matrices and Linear Algebra
  • Graph Programming

Related Links

  • NKS|Online  ( A New Kind of Science )

Introduced in 1988 (1.0) | Updated in 2003 (5.0) ▪ 2014 (10.0) ▪ 2015 (10.3) ▪ 2023 (14.0)



Problems in Mathematics

  • Eigenvectors and Eigenspaces

Let $A$ be an $n\times n$ matrix.

  • The eigenspace corresponding to an eigenvalue $\lambda$ of $A$ is defined to be $E_{\lambda}=\{\mathbf{x}\in \C^n \mid A\mathbf{x}=\lambda \mathbf{x}\}$.
  • The eigenspace $E_{\lambda}$ consists of all eigenvectors corresponding to $\lambda$ and the zero vector.
  • $A$ is singular if and only if $0$ is an eigenvalue of $A$.
  • The nullity of $A$ is the geometric multiplicity of $\lambda=0$ if $\lambda=0$ is an eigenvalue.
  • Let $A$ be a $3\times 3$ matrix. Suppose that $A$ has eigenvalues $2$ and $-1$, and suppose that $\mathbf{u}$ and $\mathbf{v}$ are eigenvectors corresponding to $2$ and $-1$, respectively, where \[\mathbf{u}=\begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} \text{ and } \mathbf{v}=\begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}.\] Then compute $A^5\mathbf{w}$, where $\mathbf{w}=\begin{bmatrix} 7 \\ 2 \\ -3 \end{bmatrix}$.
  • Let $A=\begin{bmatrix} 1 & 2 & 1 \\ -1 &4 &1 \\ 2 & -4 & 0 \end{bmatrix}$. The matrix $A$ has an eigenvalue $2$. Find a basis of the eigenspace $E_2$ corresponding to the eigenvalue $2$. ( The Ohio State University )
  • Let \[A=\begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 &1 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \end{bmatrix}.\] One of the eigenvalues of the matrix $A$ is $\lambda=0$. Find the geometric multiplicity of the eigenvalue $\lambda=0$. See (b)
  • Let $A$ and $B$ be $n\times n$ matrices. Suppose that these matrices have a common eigenvector $\mathbf{x}$. Show that $\det(AB-BA)=0$.
  • Suppose that $A$ is a diagonalizable matrix with characteristic polynomial \[f_A(\lambda)=\lambda^2(\lambda-3)(\lambda+2)^3(\lambda-4)^3.\] (a)  Find the size of the matrix $A$. (b)  Find the dimension of $E_4$, the eigenspace corresponding to the eigenvalue $\lambda=4$. (c)  Find the dimension of the nullspace of $A$. ( Stanford University )
  • Let $A$ be a square matrix and its characteristic polynomial is given by \[p(t)=(t-1)^3(t-2)^2(t-3)^4(t-4).\] Find the rank of $A$. ( The Ohio State University )
  • (a) Let \[A=\begin{bmatrix} a_{11} & a_{12}\\ a_{21}& a_{22} \end{bmatrix}\] be a matrix such that $a_{11}+a_{12}=1$ and $a_{21}+a_{22}=1$. Namely, the sum of the entries in each row is $1$. (Such a matrix is called (right)  stochastic matrix .) Then prove that the matrix $A$ has an eigenvalue $1$. (b) Find all the eigenvalues of the matrix \[B=\begin{bmatrix} 0.3 & 0.7\\ 0.6& 0.4 \end{bmatrix}.\] (c) For each eigenvalue of $B$, find the corresponding eigenvectors.
  • Let $A$ be an $n\times n$ matrix. Suppose that $\lambda_1, \lambda_2$ are distinct eigenvalues of the matrix $A$ and let $\mathbf{v}_1, \mathbf{v}_2$ be eigenvectors corresponding to $\lambda_1, \lambda_2$, respectively. Show that the vectors $\mathbf{v}_1, \mathbf{v}_2$ are linearly independent.
  • Let $A$ and $B$ be an $n \times n$ matrices. Suppose that all the eigenvalues of $A$ are distinct and the matrices $A$ and $B$ commute, that is $AB=BA$. Then prove that each eigenvector of $A$ is an eigenvector of $B$.
  • Let $H$ and $E$ be $n \times n$ matrices satisfying the relation $HE-EH=2E$. Let $\lambda$ be an eigenvalue of the matrix $H$ such that the real part of $\lambda$ is the largest among the eigenvalues of $H$. Let $\mathbf{x}$ be an eigenvector corresponding to $\lambda$. Then prove that $E\mathbf{x}=\mathbf{0}$.
  • Let \[ A=\begin{bmatrix} 5 & 2 & -1 \\ 2 &2 &2 \\ -1 & 2 & 5 \end{bmatrix}.\] Pick your favorite number $a$. Find the dimension of the null space of the matrix $A-aI$, where $I$ is the $3\times 3$ identity matrix. Your score of this problem is equal to that dimension times five. ( The Ohio State University )
  • Let $A=\begin{bmatrix} 1 & -14 & 4 \\ -1 &6 &-2 \\ -2 & 24 & -7 \end{bmatrix}$ and $\quad \mathbf{v}=\begin{bmatrix} 4 \\ -1 \\ -7 \end{bmatrix}$. Find $A^{10}\mathbf{v}$. You may use the following information without proving it. The eigenvalues of $A$ are $-1, 0, 1$. The eigenspaces are given by \[E_{-1}=\Span\left\{\, \begin{bmatrix} 3 \\ -1 \\ -5 \end{bmatrix} \,\right\}, \quad E_{0}=\Span\left\{\, \begin{bmatrix} -2 \\ 1 \\ 4 \end{bmatrix} \,\right\}, \quad E_{1}=\Span\left\{\, \begin{bmatrix} -4 \\ 2 \\ 7 \end{bmatrix} \,\right\}.\] ( The Ohio State University )

Graphs of characteristic polynomials

  • Let $C$ be a $4 \times 4$ matrix whose eigenvalues are $\lambda=2, -1$ and whose eigenspaces are \[E_2=\Span\left \{\quad \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \quad\right \} \text{ and } E_{-1}=\Span\left \{ \quad\begin{bmatrix} 1 \\ 2 \\ 1 \\ 1 \end{bmatrix},\quad \begin{bmatrix} 1 \\ 1 \\ 1 \\ 2 \end{bmatrix} \quad\right\}.\] Calculate $C^4 \mathbf{u}$ for $\mathbf{u}=\begin{bmatrix} 6 \\ 8 \\ 6 \\ 9 \end{bmatrix}$ if possible. Explain why if it is not possible! ( The Ohio State University )
  • Let $A$ be an $n \times n$ matrix and let $c$ be a complex number. (a) For each eigenvalue $\lambda$ of $A$, prove that $\lambda+c$ is an eigenvalue of the matrix $A+cI$, where $I$ is the identity matrix. What can you say about the eigenvectors corresponding to $\lambda+c$? (b) Prove that the algebraic multiplicity of the eigenvalue $\lambda$ of $A$ and the algebraic multiplicity of the eigenvalue $\lambda+c$ of $A+cI$ are equal. (c) How about geometric multiplicities?
  • Find all the eigenvalues and eigenvectors of the matrix \[A=\begin{bmatrix} 3 & 9 & 9 & 9 \\ 9 &3 & 9 & 9 \\ 9 & 9 & 3 & 9 \\ 9 & 9 & 9 & 3 \end{bmatrix}.\] ( Harvard University )
  • Find the determinant of the following matrix \[A=\begin{bmatrix} 6 & 2 & 2 & 2 &2 \\ 2 & 6 & 2 & 2 & 2 \\ 2 & 2 & 6 & 2 & 2 \\ 2 & 2 & 2 & 6 & 2 \\ 2 & 2 & 2 & 2 & 6 \end{bmatrix}.\] ( Harvard University )
  • Find all eigenvalues of the matrix \[A=\begin{bmatrix} 0 & i & i & i \\ i &0 & i & i \\ i & i & 0 & i \\ i & i & i & 0 \end{bmatrix},\] where $i=\sqrt{-1}$. For each eigenvalue of $A$, determine its algebraic multiplicity and geometric multiplicity.
  • Find all the eigenvalues and eigenvectors of the matrix \[A=\begin{bmatrix} 10001 & 3 & 5 & 7 &9 & 11 \\ 1 & 10003 & 5 & 7 & 9 & 11 \\ 1 & 3 & 10005 & 7 & 9 & 11 \\ 1 & 3 & 5 & 10007 & 9 & 11 \\ 1 &3 & 5 & 7 & 10009 & 11 \\ 1 &3 & 5 & 7 & 9 & 10011 \end{bmatrix}.\] ( MIT )
  • Consider the matrix \[A=\begin{bmatrix} 3/2 & 2\\ -1& -3/2 \end{bmatrix} \in M_{2\times 2}(\R).\] (a) Find the eigenvalues and corresponding eigenvectors of $A$. (b) Show that for $\mathbf{v}=\begin{bmatrix} 1 \\ 0 \end{bmatrix}\in \R^2$, we can choose $n$ large enough so that the length $\|A^n\mathbf{v}\|$ is as small as we like. ( University of California, Berkeley )
  • Let $F$ and $H$ be $n\times n$ matrices satisfying the relation $HF-FH=-2F$. (a) Find the trace of the matrix $F$. (b)  Let $\lambda$ be an eigenvalue of $H$ and let $\mathbf{v}$ be an eigenvector corresponding to $\lambda$. Show that there exists a positive integer $N$ such that $F^N\mathbf{v}=\mathbf{0}$.
  • Let $A$ and $B$ be $n\times n$ matrices and assume that they commute: $AB=BA$. Then prove that the matrices $A$ and $B$ share at least one common eigenvector.
  • Let $a$ and $b$ be two distinct positive real numbers. Define matrices \[A:=\begin{bmatrix} 0 & a\\ a & 0 \end{bmatrix}, \,\, B:=\begin{bmatrix} 0 & b\\ b& 0 \end{bmatrix}.\] Find all the pairs $(\lambda, X)$, where $\lambda$ is a real number and $X$ is a non-zero real matrix satisfying the relation \[AX+XB=\lambda X.\] ( The University of Tokyo )

Eigenvalues and Eigenvectors Problems and Solutions

Introduction to eigenvalues and eigenvectors.

A rectangular arrangement of numbers in the form of rows and columns is known as a matrix. In this article, we will discuss Eigenvalues and Eigenvectors Problems and Solutions.


Consider an n × n square matrix A. If X is a non-trivial column vector solution of the matrix equation AX = λX, where λ is a scalar, then X is an eigenvector of the matrix A, and the corresponding value of λ is an eigenvalue of matrix A.

Suppose the matrix equation is written as AX – λX = 0. Let I be the n × n identity matrix.

Substituting IX for X in the equation above, we obtain

AX – λIX = 0.

The equation is rewritten as (A – λI)X = 0.

This equation has non-trivial solutions if and only if the determinant of the matrix A – λI is 0. The characteristic equation of A is det(A – λI) = 0. Since A is an n × n matrix, expanding det(A – λI) gives the characteristic polynomial of A, a polynomial of degree n.
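For instance, for the (illustrative) matrix \(A=\begin{pmatrix}2&1\\ 1&2\end{pmatrix}\), the characteristic equation is

\(\det(A-\lambda I)=\det\begin{pmatrix}2-\lambda&1\\ 1&2-\lambda\end{pmatrix}=(2-\lambda)^2-1=\lambda^2-4\lambda+3=(\lambda-1)(\lambda-3)=0,\)

so the eigenvalues are \(\lambda=1\) and \(\lambda=3\).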

Properties of Eigenvalues

Let A be a matrix with eigenvalues \(\lambda_1, \lambda_2, \ldots, \lambda_n\).

The following are the properties of eigenvalues.

(1) The trace of A, defined as the sum of its diagonal elements, equals the sum of all eigenvalues:

\(\operatorname{tr}(A)=\sum_{i=1}^{n}a_{ii}=\sum_{i=1}^{n}\lambda_{i}=\lambda_{1}+\lambda_{2}+\cdots+\lambda_{n}.\)

(2) The determinant of A is the product of all its eigenvalues: \(\det(A)=\prod_{i=1}^{n}\lambda_{i}=\lambda_{1}\lambda_{2}\cdots\lambda_{n}.\)

(3) The eigenvalues of the kth power of A, that is, the eigenvalues of \(A^{k}\) for any positive integer k, are \(\lambda_{1}^{k},\ldots,\lambda_{n}^{k}\).

(4) The matrix A is invertible if and only if every eigenvalue is nonzero.

(5) If A is invertible, then the eigenvalues of \(A^{-1}\) are \(\frac{1}{\lambda_{1}},\ldots,\frac{1}{\lambda_{n}}\), and each eigenvalue's geometric multiplicity coincides with that of its reciprocal. Since the characteristic polynomial of the inverse is the reciprocal polynomial of the original, the eigenvalues also share the same algebraic multiplicity.

(6) If A is equal to its conjugate transpose, or equivalently if A is Hermitian, then every eigenvalue is real. The same is true of any real symmetric matrix.

(7) If A is not only Hermitian but also positive-definite, positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively.

(8) If A is unitary, every eigenvalue has absolute value \(|\lambda_{i}|=1\).

(9) If A is an n×n matrix and \(\{\lambda_{1},\lambda_{2},\ldots,\lambda_{k}\}\) are its eigenvalues, then the eigenvalues of the matrix I + A (where I is the identity matrix) are \(\{\lambda_{1}+1,\lambda_{2}+1,\ldots,\lambda_{k}+1\}\).
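As a quick sanity check of properties (1), (2), and (9), here is a small made-up example in Wolfram Language (any computer algebra system would do):

a = {{2, 1}, {1, 2}};
vals = Eigenvalues[a]                 (* {3, 1} *)
Total[vals] == Tr[a]                  (* property (1): True *)
Times @@ vals == Det[a]               (* property (2): True *)
Eigenvalues[a + IdentityMatrix[2]]    (* property (9): {4, 2} *)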


Eigenvalues and Eigenvectors Solved Problems

Example 1: Find the eigenvalues and eigenvectors of the following matrix.

Example 2: Find all eigenvalues and corresponding eigenvectors for the matrix A if

\(A=\begin{pmatrix}2&-3&0\\ 2&-5&0\\ 0&0&3\end{pmatrix}\)

Solution:  

The characteristic polynomial is

\(\det(A-\lambda I)=\det\begin{pmatrix}2-\lambda&-3&0\\ 2&-5-\lambda&0\\ 0&0&3-\lambda\end{pmatrix}.\)

Expanding along the first row,

\(\det(A-\lambda I)=(2-\lambda)\det\begin{pmatrix}-5-\lambda&0\\ 0&3-\lambda\end{pmatrix}+3\det\begin{pmatrix}2&0\\ 0&3-\lambda\end{pmatrix}=(2-\lambda)(\lambda^2+2\lambda-15)+6(3-\lambda)=-\lambda^3+13\lambda-12.\)

Setting this to zero,

\(-\lambda^3+13\lambda-12=-(\lambda-1)(\lambda-3)(\lambda+4)=0,\)

so the eigenvalues are \(\lambda=1,\ \lambda=3,\ \lambda=-4\).

Eigenvectors for \(\lambda=1\): row reducing

\(A-1\cdot I=\begin{pmatrix}1&-3&0\\ 2&-6&0\\ 0&0&2\end{pmatrix}\quad\Longrightarrow\quad\begin{pmatrix}1&-3&0\\ 0&0&1\\ 0&0&0\end{pmatrix}\begin{pmatrix}x\\ y\\ z\end{pmatrix}=\begin{pmatrix}0\\ 0\\ 0\end{pmatrix}\)

gives \(x-3y=0\) and \(z=0\), i.e. \(x=3y,\ z=0\). Taking \(y=1\) yields the eigenvector \(\begin{pmatrix}3\\ 1\\ 0\end{pmatrix}\).

Similarly, the eigenvectors for \(\lambda=3\) and \(\lambda=-4\) are \(\begin{pmatrix}0\\ 0\\ 1\end{pmatrix}\) and \(\begin{pmatrix}1\\ 2\\ 0\end{pmatrix}\), respectively.

Thus the eigenvectors of \(\begin{pmatrix}2&-3&0\\ 2&-5&0\\ 0&0&3\end{pmatrix}\) are \(\begin{pmatrix}3\\ 1\\ 0\end{pmatrix},\ \begin{pmatrix}0\\ 0\\ 1\end{pmatrix},\ \begin{pmatrix}1\\ 2\\ 0\end{pmatrix}.\)

Example 3: Consider the matrix


for some variable ‘a’. Find all values of ‘a’, which will prove that A has eigenvalues 0, 3, and −3.

Solution: Let p(t) be the characteristic polynomial of A, i.e., p(t) = det(A − tI). Expanding along the second column of A − tI, we obtain


\(\begin{aligned} p(t) &= (3-t)(2+t+2t+t^{2}-4)+2(-2a-ta+5)\\ &= (3-t)(t^{2}+3t-2)+(-4a-2ta+10)\\ &= 3t^{2}+9t-6-t^{3}-3t^{2}+2t-4a-2ta+10\\ &= -t^{3}+11t-2ta+4-4a\\ &= -t^{3}+(11-2a)t+4-4a. \end{aligned}\)

For the eigenvalues of A to be 0, 3, and −3, the characteristic polynomial p(t) must have roots at t = 0, 3, −3. This implies \(p(t)=-t(t-3)(t+3)=-t(t^{2}-9)=-t^{3}+9t\).

Therefore, \(-t^{3}+(11-2a)t+4-4a=-t^{3}+9t\).

For this equation to hold, the coefficients on both sides must match. Equating the constant terms gives 4 − 4a = 0, so a = 1; the coefficient of t then agrees as well, since 11 − 2(1) = 9.

Hence, A has eigenvalues 0, 3, and −3 precisely when a = 1.

Example 4: Find the eigenvalues and eigenvectors of \(\begin{pmatrix}2&0&0\\ 0&3&4\\ 0&4&9\end{pmatrix}\)

Frequently Asked Questions

What do you mean by eigenvalues?

Eigenvalues are the special scalar values λ for which the matrix equation AX = λX has a non-trivial solution X.

Can the eigenvalue be zero?

Yes, the eigenvalue can be zero.

Can a singular matrix have eigenvalues?

Every singular matrix has a 0 eigenvalue.

How to find the eigenvalues of a square matrix A?

Use the equation det(A-λI) = 0 and solve for λ. Determine all the possible values of λ, which are the required eigenvalues of matrix A.



Mathematics LibreTexts

5.5: Complex Eigenvalues


  • Dan Margalit & Joseph Rabinoff
  • Georgia Institute of Technology
  • Learn to find complex eigenvalues and eigenvectors of a matrix.
  • Learn to recognize a rotation-scaling matrix, and compute by how much the matrix rotates and scales.
  • Understand the geometry of \(2\times 2\) and \(3\times 3\) matrices with a complex eigenvalue.
  • Recipes:  a \(2\times 2\) matrix with a complex eigenvalue is similar to a rotation-scaling matrix, the eigenvector trick for \(2\times 2\) matrices.
  • Pictures:  the geometry of matrices with a complex eigenvalue.
  • Theorems:  the rotation-scaling theorem, the block diagonalization theorem.
  • Vocabulary word:   rotation-scaling matrix .

In Section 5.4 , we saw that an \(n \times n\) matrix whose characteristic polynomial has \(n\) distinct real roots is diagonalizable : it is similar to a diagonal matrix, which is much simpler to analyze. The other possibility is that a matrix has complex roots, and that is the focus of this section. It turns out that such a matrix is similar (in the \(2\times 2\) case) to a rotation-scaling matrix , which is also relatively easy to understand.

In a certain sense, this entire section is analogous to Section 5.4 , with rotation-scaling matrices playing the role of diagonal matrices. See Section 7.1  for a review of the complex numbers.

Matrices with Complex Eigenvalues

As a consequence of the fundamental theorem of algebra, Theorem 7.1.1 in Section 7.1 , as applied to the characteristic polynomial, we see that:

Note \(\PageIndex{1}\)

Every \(n \times n\) matrix has exactly \(n\) complex eigenvalues, counted with multiplicity.

We can compute a corresponding (complex) eigenvector in exactly the same way as before: by row reducing the matrix \(A - \lambda I_n\). Now, however, we have to do arithmetic with complex numbers.

Example \(\PageIndex{1}\): A \(2\times 2\) matrix

Find the complex eigenvalues and eigenvectors of the matrix

\[ A = \left(\begin{array}{cc}1&-1\\1&1\end{array}\right). \nonumber \]

The characteristic polynomial of \(A\) is

\[ f(\lambda) = \lambda^2 - \text{Tr}(A)\lambda + \det(A) = \lambda^2 - 2\lambda + 2. \nonumber \]

The roots of this polynomial are

\[ \lambda = \frac{2\pm\sqrt{4-8}}2 = 1\pm i. \nonumber \]

First we compute an eigenvector for \(\lambda = 1+i\). We have

\[ A-(1+i) I_2 = \left(\begin{array}{cc}1-(1+i)&-1 \\ 1&1-(1+i) \end{array}\right) = \left(\begin{array}{cc}-i&-1\\1&-i\end{array}\right). \nonumber \]

Now we row reduce, noting that the second row is \(i\) times the first:

\[ \left(\begin{array}{cc}-i&-1\\1&-i\end{array}\right) \;\xrightarrow{R_2=R_2-iR_1}\; \left(\begin{array}{cc}-i&-1\\0&0\end{array}\right) \;\xrightarrow{R_1=R_1\div -i}\; \left(\begin{array}{cc}1&-i\\0&0\end{array}\right). \nonumber \]

The parametric form is \(x = iy\text{,}\) so that an eigenvector is \(v_1={i\choose 1}\). Next we compute an eigenvector for \(\lambda=1-i\). We have

\[ A-(1-i) I_2 = \left(\begin{array}{cc}1-(1-i)&-1\\1&1-(1-i)\end{array}\right) = \left(\begin{array}{cc}i&-1\\1&i\end{array}\right). \nonumber \]

Now we row reduce, noting that the second row is \(-i\) times the first:

\[ \left(\begin{array}{cc}i&-1\\1&i\end{array}\right) \;\xrightarrow{R_2=R_2+iR_1}\; \left(\begin{array}{cc}i&-1\\0&0\end{array}\right) \;\xrightarrow{R_1=R_1\div i}\; \left(\begin{array}{cc}1&i\\0&0\end{array}\right). \nonumber \]

The parametric form is \(x = -iy\text{,}\) so that an eigenvector is \(v_2 = {-i\choose 1}\).

We can verify our answers:

\[\begin{aligned}\left(\begin{array}{cc}1&-1\\1&1\end{array}\right)\left(\begin{array}{c}i\\1\end{array}\right)&=\left(\begin{array}{c}i-1\\i+1\end{array}\right)=(1+i)\left(\begin{array}{c}i\\1\end{array}\right) \\ \left(\begin{array}{cc}1&-1\\1&1\end{array}\right)\left(\begin{array}{c}-i\\1\end{array}\right)&=\left(\begin{array}{c}-i-1\\-i+1\end{array}\right)=(1-i)\left(\begin{array}{c}-i\\1\end{array}\right).\end{aligned}\]

Example \(\PageIndex{2}\): A \(3\times 3\) matrix

Find the eigenvalues and eigenvectors, real and complex, of the matrix

\[A=\left(\begin{array}{ccc}4/5&-3/5&0 \\ 3/5&4/5&0\\1&2&2\end{array}\right).\nonumber\]

We compute the characteristic polynomial by expanding cofactors along the third row:

\[ f(\lambda) = \det\left(\begin{array}{ccc}4/5-\lambda &-3/5&0 \\ 3/5&4/5-\lambda &0 \\ 1&2&2-\lambda\end{array}\right) = (2-\lambda)\left(\lambda^2-\frac 85\lambda+1\right). \nonumber \]

This polynomial has one real root at \(2\text{,}\) and two complex roots at

\[ \lambda = \frac{8/5\pm\sqrt{64/25-4}}2 = \frac{4\pm 3i}5. \nonumber \]

Therefore, the eigenvalues are

\[ \lambda = 2,\quad \frac{4+3i}5,\quad \frac{4-3i}5. \nonumber \]

We eyeball that \(v_1 = e_3\) is an eigenvector with eigenvalue \(2\text{,}\) since the third column is \(2e_3\).

Next we find an eigenvector with eigenvalue \((4+3i)/5\). We have

\[ A-\frac{4+3i}5I_3 = \left(\begin{array}{ccc}-3i/5&-3/5&0\\3/5&-3i/5&0\\ 1&2&2-(4+3i)/5\end{array}\right) \;\xrightarrow[R_2=R_2\times5/3]{R_1=R_1\times -5/3}\; \left(\begin{array}{ccc}i&1&0\\1&-i&0\\1&2&\frac{6-3i}{5}\end{array}\right). \nonumber \]

We row reduce, noting that the second row is \(-i\) times the first:

\[\begin{aligned}\left(\begin{array}{ccc}i&1&0\\1&-i&0\\1&2&\frac{6-3i}{5}\end{array}\right)\xrightarrow{R_2=R_2+iR_1}\quad &\left(\begin{array}{ccc}i&1&0\\0&0&0\\1&2&\frac{6-3i}{5}\end{array}\right) \\ {}\xrightarrow{R_3=R_3+iR_1}\quad &\left(\begin{array}{ccc}i&1&0\\0&0&0\\0&2+i&\frac{6-3i}{5}\end{array}\right) \\ {}\xrightarrow{R_2\longleftrightarrow R_3}\quad &\left(\begin{array}{ccc}i&1&0\\0&2+i&\frac{6-3i}{5}\\0&0&0\end{array}\right) \\ {}\xrightarrow[R_2=R_2\div(2+i)]{R_1=R_1\div i}\quad &\left(\begin{array}{ccc}1&-i&0\\0&1&\frac{9-12i}{25}\\0&0&0\end{array}\right) \\ {}\xrightarrow{R_1=R_1+iR_2}\quad &\left(\begin{array}{ccc}1&0&\frac{12+9i}{25}\\0&1&\frac{9-12i}{25}\\0&0&0\end{array}\right).\end{aligned}\]

The free variable is \(z\text{;}\) the parametric form of the solution is

\[\left\{\begin{array}{rrr}x &=& -\dfrac{12+9i}{25}z \\ y &=& -\dfrac{9-12i}{25}z.\end{array}\right.\nonumber\]

Taking \(z=25\) gives the eigenvector

\[ v_2 = \left(\begin{array}{c}-12-9i\\-9+12i\\25\end{array}\right). \nonumber \]

A similar calculation (replacing all occurrences of \(i\) by \(-i\)) shows that an eigenvector with eigenvalue \((4-3i)/5\) is

\[ v_3 = \left(\begin{array}{c}-12+9i\\-9-12i\\25\end{array}\right). \nonumber \]

We can verify our calculations:

\[\begin{aligned}\left(\begin{array}{ccc}4/5&-3/5&0\\3/5&4/5&0\\1&2&2\end{array}\right)\left(\begin{array}{c}-12+9i\\-9-12i\\25\end{array}\right)&=\left(\begin{array}{c}-21/5+72i/5 \\ -72/5-21i/5\\20-15i\end{array}\right)=\frac{4+3i}{5}\left(\begin{array}{c}-12+9i\\-9-12i\\25\end{array}\right) \\ \left(\begin{array}{ccc}4/5&-3/5&0\\3/5&4/5&0\\1&2&2\end{array}\right)\left(\begin{array}{c}-12-9i\\-9+12i\\25\end{array}\right)&=\left(\begin{array}{c}-21/5-72i/5\\-72/5+21i/5\\20+15i\end{array}\right)=\frac{4-3i}{5}\left(\begin{array}{c}-12-9i\\-9+12i\\25\end{array}\right).\end{aligned}\]

If \(A\) is a matrix with real entries, then its characteristic polynomial has real coefficients, so Note 7.1.3  in Section 7.1  implies that its complex eigenvalues come in conjugate pairs. In the first example, we notice that

\[ \begin{split} 1+i \text{ has an eigenvector } \amp v_1 =\left(\begin{array}{c}i\\1\end{array}\right) \\ 1-i \text{ has an eigenvector } \amp v_2 = \left(\begin{array}{c}-i\\1\end{array}\right). \end{split} \nonumber \]

In the second example,

\[ \begin{split} \frac{4+3i}5 \text{ has an eigenvector } \amp v_1 = \left(\begin{array}{c}-12-9i\\-9+12i\\25\end{array}\right) \\ \frac{4-3i}5 \text{ has an eigenvector } \amp v_2 =\left(\begin{array}{c}-12+9i\\-9-12i\\25\end{array}\right) \end{split} \nonumber \]

In these cases, an eigenvector for the conjugate eigenvalue is simply the conjugate eigenvector (the eigenvector obtained by conjugating each entry of the first eigenvector). This is always true. Indeed, if \(Av=\lambda v\) then

\[ A \bar v = \bar{Av} = \bar{\lambda v} = \bar \lambda \bar v, \nonumber \]

which exactly says that \(\bar v\) is an eigenvector of \(A\) with eigenvalue \(\bar \lambda\).

Note \(\PageIndex{2}\)

Let \(A\) be a matrix with real entries. If

\[ \begin{split} \lambda \text{ is a complex eigenvalue with eigenvector } \amp v, \\ \text{then } \bar\lambda \text{ is a complex eigenvalue with eigenvector }\amp\bar v. \end{split} \nonumber \]

In other words, both eigenvalues and eigenvectors come in conjugate pairs.

Since it can be tedious to divide by complex numbers while row reducing, it is useful to learn the following trick, which works equally well for matrices with real entries.

Note \(\PageIndex{3}\): Eigenvector Trick for \(2\times 2\) Matrices

Let \(A\) be a \(2\times 2\) matrix, and let \(\lambda\) be a (real or complex) eigenvalue. Then

\[ A - \lambda I_2 = \left(\begin{array}{cc}z&w\\ \star&\star\end{array}\right) \quad\implies\quad \left(\begin{array}{c}-w\\z\end{array}\right) \text{ is an eigenvector with eigenvalue } \lambda, \nonumber \]

assuming the first row of \(A-\lambda I_2\) is nonzero.

Indeed, since \(\lambda\) is an eigenvalue, we know that \(A-\lambda I_2\) is not an invertible matrix. It follows that the rows are collinear (otherwise the determinant is nonzero), so that the second row is automatically a (complex) multiple of the first:

\[\left(\begin{array}{cc}z&w\\ \star&\star\end{array}\right)=\left(\begin{array}{cc}z&w\\cz&cw\end{array}\right).\nonumber\]

It is obvious that \({-w\choose z}\) is in the null space of this matrix, as is \({w\choose -z}\text{,}\) for that matter. Note that we never had to compute the second row of \(A-\lambda I_2\text{,}\) let alone row reduce!

Example \(\PageIndex{3}\): A \(2\times 2\) matrix, the easy way

Let \(A = \left(\begin{array}{cc}1&-1\\1&1\end{array}\right)\) be the matrix from Example \(\PageIndex{1}\). Since the characteristic polynomial of a \(2\times 2\) matrix \(A\) is \(f(\lambda) = \lambda^2-\text{Tr}(A)\lambda + \det(A)\text{,}\) its roots are

\[ \lambda = \frac{\text{Tr}(A)\pm\sqrt{\text{Tr}(A)^2-4\det(A)}}2 = \frac{2\pm\sqrt{4-8}}2 = 1\pm i. \nonumber \]

To find an eigenvector with eigenvalue \(1+i\text{,}\) we compute

\[ A - (1+i)I_2 = \left(\begin{array}{cc}-i&-1\\ \star&\star\end{array}\right) \;\xrightarrow{\text{eigenvector}}\; v_1 = \left(\begin{array}{c}1\\-i\end{array}\right). \nonumber \]

The eigenvector for the conjugate eigenvalue is the complex conjugate:

\[ v_2 = \bar v_1 = \left(\begin{array}{c}1\\i\end{array}\right). \nonumber \]

In Example \(\PageIndex{1}\) we found the eigenvectors \({i\choose 1}\) and \({-i\choose 1}\) for the eigenvalues \(1+i\) and \(1-i\text{,}\) respectively, but in Example \(\PageIndex{3}\) we found the eigenvectors \({1\choose -i}\) and \({1\choose i}\) for the same eigenvalues of the same matrix. These vectors do not look like multiples of each other at first—but since we now have complex numbers at our disposal, we can see that they actually are multiples:

\[ -i\left(\begin{array}{c}i\\1\end{array}\right) = \left(\begin{array}{c}1\\-i\end{array}\right) \qquad i\left(\begin{array}{c}-i\\1\end{array}\right) = \left(\begin{array}{c}1\\i\end{array}\right). \nonumber \]

Rotation-Scaling Matrices

The most important examples of matrices with complex eigenvalues are rotation-scaling matrices, i.e., scalar multiples of rotation matrices.

Definition \(\PageIndex{1}\): Rotation-Scaling matrix

A rotation-scaling matrix is a \(2\times 2\) matrix of the form

\[ \left(\begin{array}{cc}a&-b\\b&a\end{array}\right), \nonumber \]

where \(a\) and \(b\) are real numbers, not both equal to zero.

The following proposition justifies the name.

Proposition \(\PageIndex{1}\)

Let \[ A = \left(\begin{array}{cc}a&-b\\b&a\end{array}\right) \nonumber \] be a rotation-scaling matrix. Then:

  • \(A\) is a product of a rotation matrix \[\left(\begin{array}{cc}\cos\theta&-\sin\theta \\ \sin\theta&\cos\theta\end{array}\right)\quad\text{with a scaling matrix}\quad\left(\begin{array}{cc}r&0\\0&r\end{array}\right).\nonumber\]
  • The scaling factor \(r\) is \[ r = \sqrt{\det(A)} = \sqrt{a^2+b^2}. \nonumber \]
  • The rotation angle \(\theta\) is the counterclockwise angle from the positive \(x\)-axis to the vector \({a\choose b}\text{:}\) 


Figure \(\PageIndex{1}\)

The eigenvalues of \(A\) are \(\lambda = a \pm bi.\)

Set \(r = \sqrt{\det(A)} = \sqrt{a^2+b^2}\). The point \((a/r, b/r)\) has the property that

\[ \left(\frac ar\right)^2 + \left(\frac br\right)^2 = \frac{a^2+b^2}{r^2} = 1. \nonumber \]

In other words \((a/r,b/r)\) lies on the unit circle. Therefore, it has the form \((\cos\theta,\sin\theta)\text{,}\) where \(\theta\) is the counterclockwise angle from the positive \(x\)-axis to the vector \({a/r\choose b/r}\text{,}\) or since it is on the same line, to \({a\choose b}\text{:}\)


Figure \(\PageIndex{2}\)

It follows that

\[ A = r\left(\begin{array}{cc}a/r&-b/r \\ b/r&a/r\end{array}\right) =\left(\begin{array}{cc}r&0\\0&r\end{array}\right) \left(\begin{array}{cc}\cos\theta&-\sin\theta \\ \sin\theta&\cos\theta\end{array}\right), \nonumber \]

as desired.

For the last statement, we compute the eigenvalues of \(A\) as the roots of the characteristic polynomial:

\[ \lambda = \frac{\text{Tr}(A)\pm\sqrt{\text{Tr}(A)^2-4\det(A)}}2 = \frac{2a\pm\sqrt{4a^2-4(a^2+b^2)}}2 = a\pm bi. \nonumber \]

Geometrically, a rotation-scaling matrix does exactly what the name says: it rotates and scales (in either order).

Example \(\PageIndex{4}\): A rotation-scaling matrix

What does the matrix

\[ A = \left(\begin{array}{cc}1&-1\\1&1\end{array}\right) \nonumber \]

do geometrically?

This is a rotation-scaling matrix with \(a=b=1\). Therefore, it scales by a factor of \(\sqrt{\det(A)} = \sqrt 2\) and rotates counterclockwise by \(45^\circ\text{:}\)


Figure \(\PageIndex{3}\)

Here is a picture of \(A\text{:}\)


Figure \(\PageIndex{4}\)


Example \(\PageIndex{5}\): A rotation-scaling matrix

\[ A = \left(\begin{array}{cc}-\sqrt{3}&-1\\1&-\sqrt{3}\end{array}\right) \nonumber \]

This is a rotation-scaling matrix with \(a=-\sqrt3\) and \(b=1\). Therefore, it scales by a factor of \(\sqrt{\det(A)}=\sqrt{3+1}=2\) and rotates counterclockwise by the angle \(\theta\) in the picture:


Figure \(\PageIndex{6}\)

To compute this angle, we do a bit of trigonometry:


Figure \(\PageIndex{7}\)

Therefore, \(A\) rotates counterclockwise by \(5\pi/6\) and scales by a factor of \(2\).


Figure \(\PageIndex{8}\)


The matrix in the second example has second column \({-\sqrt 3\choose 1}\text{,}\) which is rotated counterclockwise from the positive \(x\)-axis by an angle of \(5\pi/6\). This rotation angle is not equal to \(\tan^{-1}\bigl(1/(-\sqrt3)\bigr) = -\frac\pi 6.\) The problem is that arctan always outputs values between \(-\pi/2\) and \(\pi/2\text{:}\) it does not account for points in the second or third quadrants. This is why we drew a triangle and used its (positive) edge lengths to compute the angle \(\varphi\text{:}\)


Figure \(\PageIndex{10}\)

Alternatively, we could have observed that \({-\sqrt 3\choose 1}\) lies in the second quadrant, so that the angle \(\theta\) in question is

\[ \theta = \tan^{-1}\left(\frac1{-\sqrt3}\right) + \pi. \nonumber \]

Note \(\PageIndex{4}\)

When finding the rotation angle of a vector \({a\choose b}\text{,}\) do not blindly compute \(\tan^{-1}(b/a)\text{,}\) since this will give the wrong answer when \({a\choose b}\) is in the second or third quadrant. Instead, draw a picture.
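In software one can often sidestep this with a two-argument arctangent. In Wolfram Language, for instance, ArcTan[a, b] gives the angle of the point \((a,b)\) with the quadrant taken into account:

ArcTan[-Sqrt[3], 1]    (* 5 Pi/6, the correct counterclockwise angle *)
ArcTan[1/(-Sqrt[3])]   (* -Pi/6: the one-argument form loses the quadrant *)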

Geometry of \(2 \times 2\) Matrices with a Complex Eigenvalue

Let \(A\) be a \(2\times 2\) matrix with a complex, non-real eigenvalue \(\lambda\). Then \(A\) also has the eigenvalue \(\bar\lambda\neq\lambda\). In particular, \(A\) has distinct eigenvalues, so it is diagonalizable using the complex numbers. We often like to think of our matrices as describing transformations of \(\mathbb{R}^n \) (as opposed to \(\mathbb{C}^n\)). Because of this, the following construction is useful. It gives something like a diagonalization, except that all matrices involved have real entries.

Theorem \(\PageIndex{1}\): Rotation-Scaling Theorem

Let \(A\) be a \(2\times 2\) real matrix with a complex (non-real) eigenvalue \(\lambda\text{,}\) and let \(v\) be an eigenvector. Then \(A = CBC^{-1}\) for

\[ C = \left(\begin{array}{cc}|&| \\ \Re (v)&\Im(v) \\ |&|\end{array}\right)\quad\text{and}\quad B = \left(\begin{array}{cc}\Re(\lambda)&\Im(\lambda) \\ -\Im(\lambda)&\Re(\lambda)\end{array}\right). \nonumber \]

In particular, \(A\) is similar to a rotation-scaling matrix that scales by a factor of \(|\lambda| = \sqrt{\det(B)}.\)

First we need to show that \(\Re(v)\) and \(\Im(v)\) are linearly independent, since otherwise \(C\) is not invertible. If not, then there exist real numbers \(x,y,\) not both equal to zero, such that \(x\Re(v) + y\Im(v) = 0\). Then

\[ \begin{split} (y+ix)v \amp= (y+ix)\bigl(\Re(v)+i\Im(v)\bigr) \\ \amp= y\Re(v) - x\Im(v) + \left(x\Re(v) + y\Im(v)\right)i \\ \amp= y\Re(v) - x\Im(v). \end{split} \nonumber \]

Now, \((y+ix)v\) is also an eigenvector of \(A\) with eigenvalue \(\lambda\text{,}\) as it is a scalar multiple of \(v\). But we just showed that \((y+ix)v\) is a vector with real entries, and any real eigenvector of a real matrix has a real eigenvalue. Therefore, \(\Re(v)\) and \(\Im(v)\) must be linearly independent after all.

Let \(\lambda = a+bi\) and \(v = {x+yi\choose z+wi}\). We observe that

\[ \begin{split} Av = \lambda v \amp= (a+bi)\left(\begin{array}{c}x+yi\\z+wi\end{array}\right) \\ \amp= \left(\begin{array}{c}(ax-by)+(ay+bx)i \\ (az-bw)+(aw+bz)i\end{array}\right) \\ \amp= \left(\begin{array}{c}ax-by\\az-bw\end{array}\right) + i\left(\begin{array}{c}ay+bx \\ aw+bz\end{array}\right). \end{split} \nonumber \]

On the other hand, we have

\[ A\left(\left(\begin{array}{c}x\\z\end{array}\right) + i\left(\begin{array}{c}y\\w\end{array}\right)\right) = A\left(\begin{array}{c}x\\z\end{array}\right) + iA\left(\begin{array}{c}y\\w\end{array}\right) = A\Re(v) + iA\Im(v). \nonumber \]

Matching real and imaginary parts gives

\[ A\Re(v) = \left(\begin{array}{c}ax-by\\az-bw\end{array}\right) \qquad A\Im(v) = \left(\begin{array}{c}ay+bx\\aw+bz\end{array}\right). \nonumber \]

Now we compute \(CBC^{-1}\Re(v)\) and \(CBC^{-1}\Im(v)\). Since \(Ce_1 = \Re(v)\) and \(Ce_2 = \Im(v)\text{,}\) we have \(C^{-1}\Re(v) = e_1\) and \(C^{-1}\Im(v)=e_2\text{,}\) so

\[ \begin{split} CBC^{-1}\Re(v) \amp= CBe_1 = C\left(\begin{array}{c}a\\-b\end{array}\right) = a\Re(v)-b\Im(v) \\ \amp= a\left(\begin{array}{c}x\\z\end{array}\right) - b\left(\begin{array}{c}y\\w\end{array}\right) =\left(\begin{array}{c}ax-by\\az-bw\end{array}\right) = A\Re(v) \\ CBC^{-1}\Im(v) \amp= CBe_2 = C\left(\begin{array}{c}b\\a\end{array}\right) = b\Re(v)+a\Im(v) \\ \amp= b\left(\begin{array}{c}x\\z\end{array}\right) + a\left(\begin{array}{c}y\\w\end{array}\right) = \left(\begin{array}{c}ay+bx\\aw+bz\end{array}\right) = A\Im(v). \end{split} \nonumber \]

Therefore, \(A\Re(v) = CBC^{-1}\Re(v)\) and \(A\Im(v) = CBC^{-1}\Im(v)\).

Since \(\Re(v)\) and \(\Im(v)\) are linearly independent, they form a basis for \(\mathbb{R}^2 \). Let \(w\) be any vector in \(\mathbb{R}^2 \text{,}\) and write \(w = c\Re(v) + d\Im(v)\). Then

\[ \begin{split} Aw \amp= A\bigl(c\Re(v) + d\Im(v)\bigr) \\ \amp= cA\Re(v) + dA\Im(v) \\ \amp= cCBC^{-1}\Re(v) + dCBC^{-1}\Im(v) \\ \amp= CBC^{-1}\bigl(c\Re(v) + d\Im(v)\bigr) \\ \amp= CBC^{-1} w. \end{split} \nonumber \]

This proves that \(A = CBC^{-1}\).

Here \(\Re\) and \(\Im\) denote the real and imaginary parts, respectively:

\[ \Re(a+bi) = a \quad \Im(a+bi) = b \quad \Re\left(\begin{array}{c}x+yi\\z+wi\end{array}\right) = \left(\begin{array}{c}x\\z\end{array}\right) \quad \Im\left(\begin{array}{c}x+yi\\z+wi\end{array}\right) = \left(\begin{array}{c}y\\w\end{array}\right). \nonumber \]

The rotation-scaling matrix in question is the matrix

\[ B = \left(\begin{array}{cc}a&-b\\b&a\end{array}\right)\quad\text{with}\quad a = \Re(\lambda),\; b = -\Im(\lambda). \nonumber \]

Geometrically, the rotation-scaling theorem says that a \(2\times 2\) matrix with a complex eigenvalue behaves similarly to a rotation-scaling matrix. See Note 5.3.3  in Section 5.3 .

One should regard Theorem \(\PageIndex{1}\) as a close analogue of Theorem 5.4.1 in Section 5.4 , with a rotation-scaling matrix playing the role of a diagonal matrix. Before continuing, we restate the theorem as a recipe:

Recipe: A \(2\times 2\) Matrix with a Complex Eigenvalue

Let \(A\) be a \(2\times 2\) real matrix.

  • Compute the characteristic polynomial \[ f(\lambda) = \lambda^2 - \text{Tr}(A)\lambda + \det(A), \nonumber \] then compute its roots using the quadratic formula.
  • If the eigenvalues are complex, choose one of them, and call it \(\lambda\).
  • Find a corresponding (complex) eigenvector \(v\) using the trick in Note \(\PageIndex{3}\).
  • Then \(A=CBC^{-1}\) for \[ C = \left(\begin{array}{cc}|&| \\ \Re(v)&\Im(v) \\ |&|\end{array}\right)\quad\text{and}\quad B = \left(\begin{array}{cc}\Re(\lambda)&\Im(\lambda) \\ -\Im(\lambda)&\Re(\lambda)\end{array}\right). \nonumber \] This scales by a factor of \(|\lambda|\).

Example \(\PageIndex{6}\)

\[ A = \left(\begin{array}{cc}2&-1\\2&0\end{array}\right) \nonumber \]

The eigenvalues of \(A\) are

\[ \lambda = \frac{\text{Tr}(A) \pm \sqrt{\text{Tr}(A)^2-4\det(A)}}2 = \frac{2\pm\sqrt{4-8}}2 = 1\pm i. \nonumber \]

We choose the eigenvalue \(\lambda = 1-i\) and find a corresponding eigenvector, using the trick, note \(\PageIndex{3}\):

\[ A - (1-i)I_2 = \left(\begin{array}{cc}1+i&-1 \\ \star&\star\end{array}\right) \;\xrightarrow{\text{eigenvector}}\; v = \left(\begin{array}{c}1\\1+i\end{array}\right). \nonumber \]

According to Theorem \(\PageIndex{1}\), we have \(A=CBC^{-1}\) for

\[ \begin{split} C \amp= \left(\begin{array}{cc}\Re\left(\begin{array}{c}1\\1+i\end{array}\right)&\Im\left(\begin{array}{c}1\\1+i\end{array}\right)\end{array}\right) = \left(\begin{array}{cc}1&0\\1&1\end{array}\right) \\ B \amp= \left(\begin{array}{cc}\Re(\lambda)&\Im(\lambda) \\ -\Im(\lambda)&\Re(\lambda)\end{array}\right) = \left(\begin{array}{cc}1&-1\\1&1\end{array}\right). \end{split} \nonumber \]

The matrix \(B\) is the rotation-scaling matrix in above Example \(\PageIndex{4}\): it rotates counterclockwise by an angle of \(45^\circ\) and scales by a factor of \(\sqrt 2\). The matrix \(A\) does the same thing, but with respect to the \(\Re(v),\Im(v)\)-coordinate system:


Figure \(\PageIndex{11}\)

To summarize:

  • \(B\) rotates around the circle centered at the origin and passing through \(e_1\) and \(e_2\text{,}\) in the direction from \(e_1\) to \(e_2\text{,}\) then scales by \(\sqrt 2\).
  • \(A\) rotates around the ellipse centered at the origin and passing through \(\Re(v)\) and \(\Im(v)\text{,}\) in the direction from \(\Re(v)\) to \(\Im(v)\text{,}\) then scales by \(\sqrt 2\).

The reader might want to refer back to Example 5.3.7 in Section 5.3 .


If instead we had chosen \(\bar\lambda = 1+i\) as our eigenvalue, then we would have found the eigenvector \(\bar v = {1\choose 1-i}\). In this case we would have \(A=C'B'(C')^{-1}\text{,}\) where

\[ \begin{split} C' &= \left(\begin{array}{cc}\Re\left(\begin{array}{c}1\\1-i\end{array}\right)&\Im\left(\begin{array}{c}1\\1-i\end{array}\right)\end{array}\right) = \left(\begin{array}{cc}1&0\\1&-1\end{array}\right) \\ B' &= \left(\begin{array}{cc}\Re(\overline{\lambda})&\Im(\overline{\lambda}) \\ -\Im(\overline{\lambda})&\Re(\overline{\lambda})\end{array}\right) = \left(\begin{array}{cc}1&1\\-1&1\end{array}\right). \end{split} \nonumber \]

So, \(A\) is also similar to a clockwise rotation by \(45^\circ\text{,}\) followed by scaling by a factor of \(\sqrt 2\).

Example \(\PageIndex{7}\)

\[ A = \left(\begin{array}{cc}-\sqrt{3}+1&-2\\1&-\sqrt{3}-1\end{array}\right) \nonumber \]

\[ \lambda = \frac{\text{Tr}(A) \pm \sqrt{\text{Tr}(A)^2-4\det(A)}}2 = \frac{-2\sqrt 3\pm\sqrt{12-16}}2 = -\sqrt3\pm i. \nonumber \]

We choose the eigenvalue \(\lambda = -\sqrt3-i\) and find a corresponding eigenvector, using the trick in Note \(\PageIndex{3}\):

\[ A - (-\sqrt3-i)I_2 = \left(\begin{array}{cc}1+i&-2\\ \star&\star\end{array}\right) \;\xrightarrow{\text{eigenvector}}\; v = \left(\begin{array}{c}2\\1+i\end{array}\right). \nonumber \]

\[ \begin{split} C &= \left(\begin{array}{cc}\Re\left(\begin{array}{c}2\\1+i\end{array}\right)&\Im\left(\begin{array}{c}2\\1+i\end{array}\right)\end{array}\right) = \left(\begin{array}{cc}2&0\\1&1\end{array}\right) \\ B &= \left(\begin{array}{cc}\Re(\lambda)&\Im(\lambda) \\ -\Im(\lambda)&\Re(\lambda)\end{array}\right) = \left(\begin{array}{cc}-\sqrt{3}&-1\\1&-\sqrt{3}\end{array}\right). \end{split} \nonumber \]

The matrix \(B\) is the rotation-scaling matrix of Example \(\PageIndex{5}\) above: it rotates counterclockwise by an angle of \(5\pi/6\) and scales by a factor of \(2\). The matrix \(A\) does the same thing, but with respect to the \(\Re(v),\Im(v)\)-coordinate system:


Figure \(\PageIndex{13}\)

  • \(B\) rotates around the circle centered at the origin and passing through \(e_1\) and \(e_2\text{,}\) in the direction from \(e_1\) to \(e_2\text{,}\) then scales by \(2\).
  • \(A\) rotates around the ellipse centered at the origin and passing through \(\Re(v)\) and \(\Im(v)\text{,}\) in the direction from \(\Re(v)\) to \(\Im(v)\text{,}\) then scales by \(2\).


If instead we had chosen \(\bar\lambda = -\sqrt3+i\) as our eigenvalue, then we would have found the eigenvector \(\bar v = {2\choose 1-i}\). In this case we would have \(A=C'B'(C')^{-1}\text{,}\) where

\[ \begin{split} C' &= \left(\begin{array}{cc}\Re\left(\begin{array}{c}2\\1-i\end{array}\right)&\Im\left(\begin{array}{c}2\\1-i\end{array}\right)\end{array}\right) = \left(\begin{array}{cc}2&0\\1&-1\end{array}\right) \\ B' &= \left(\begin{array}{cc}\Re(\overline{\lambda})&\Im(\overline{\lambda}) \\ -\Im(\overline{\lambda})&\Re(\overline{\lambda})\end{array}\right) = \left(\begin{array}{cc}-\sqrt{3}&1\\-1&-\sqrt{3}\end{array}\right). \end{split} \nonumber \]

So, \(A\) is also similar to a clockwise rotation by \(5\pi/6\text{,}\) followed by scaling by a factor of \(2\).
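
As a quick numerical sanity check (a sketch of ours, assuming NumPy), both decompositions in this example can be verified directly:

    import numpy as np

    s = np.sqrt(3)
    A  = np.array([[-s + 1, -2.0], [1.0, -s - 1]])

    # Decomposition built from lambda = -sqrt(3) - i ...
    C  = np.array([[2.0, 0.0], [1.0,  1.0]])
    B  = np.array([[ -s, -1.0], [1.0,  -s]])
    # ... and from the conjugate eigenvalue -sqrt(3) + i.
    Cp = np.array([[2.0, 0.0], [1.0, -1.0]])
    Bp = np.array([[ -s,  1.0], [-1.0, -s]])

    print(np.allclose(A, C  @ B  @ np.linalg.inv(C)))    # True
    print(np.allclose(A, Cp @ Bp @ np.linalg.inv(Cp)))   # True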

We saw in the above examples that Theorem \(\PageIndex{1}\) can be applied in two different ways to any given matrix: one has to choose one of the two conjugate eigenvalues to work with. Replacing \(\lambda\) by \(\bar\lambda\) has the effect of replacing \(v\) by \(\bar v\text{,}\) which just negates all imaginary parts, so we also have \(A=C'B'(C')^{-1}\) for

\[ C' = \left(\begin{array}{cc}|&| \\ \Re(v)&-\Im(v) \\ |&|\end{array}\right)\quad\text{and}\quad B' = \left(\begin{array}{cc}\Re(\lambda)&-\Im(\lambda) \\ \Im(\lambda)&\Re(\lambda)\end{array}\right). \nonumber \]

The matrices \(B\) and \(B'\) are similar to each other. The only difference between them is the direction of rotation, since \({\Re(\lambda)\choose -\Im(\lambda)}\) and \({\Re(\lambda)\choose \Im(\lambda)}\) are mirror images of each other over the \(x\)-axis:


Figure \(\PageIndex{15}\)
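
In fact, the similarity between \(B\) and \(B'\) can be written down explicitly: conjugating by the reflection over the \(x\)-axis reverses the direction of rotation. A one-line check (our own verification, not from the text):

\[ \left(\begin{array}{cc}1&0\\0&-1\end{array}\right) \left(\begin{array}{cc}\Re(\lambda)&\Im(\lambda) \\ -\Im(\lambda)&\Re(\lambda)\end{array}\right) \left(\begin{array}{cc}1&0\\0&-1\end{array}\right) = \left(\begin{array}{cc}\Re(\lambda)&-\Im(\lambda) \\ \Im(\lambda)&\Re(\lambda)\end{array}\right) = B', \nonumber \]

and the reflection matrix is its own inverse, so \(B' = SBS^{-1}\) with \(S = \left(\begin{array}{cc}1&0\\0&-1\end{array}\right)\).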

The discussion that follows is closely analogous to the exposition in subsection The Geometry of Diagonalizable Matrices in Section 5.4 , in which we studied the dynamics of diagonalizable \(2\times 2\) matrices.

Note \(\PageIndex{5}\): Dynamics of a \(2\times 2\) Matrix with a Complex Eigenvalue

Let \(A\) be a \(2\times 2\) matrix with a complex (non-real) eigenvalue \(\lambda\). By Theorem \(\PageIndex{1}\), the matrix \(A\) is similar to a matrix that rotates by some amount and scales by \(|\lambda|\). Hence, \(A\) rotates around an ellipse and scales by \(|\lambda|\). There are three different cases.

\(\color{Red}|\lambda| > 1\text{:}\) when the scaling factor is greater than \(1\text{,}\) then vectors tend to get longer, i.e., farther from the origin. In this case, repeatedly multiplying a vector by \(A\) makes the vector “spiral out”. For example,

\[ A = \frac 1{\sqrt 2}\left(\begin{array}{cc}\sqrt{3}+1&-2\\1&\sqrt{3}-1\end{array}\right) \qquad \lambda = \frac{\sqrt3-i}{\sqrt 2} \qquad |\lambda| = \sqrt 2 > 1 \nonumber \]

gives rise to the following picture:


Figure \(\PageIndex{16}\)

\(\color{Red}|\lambda| = 1\text{:}\) when the scaling factor is equal to \(1\text{,}\) then vectors do not tend to get longer or shorter. In this case, repeatedly multiplying a vector by \(A\) simply “rotates around an ellipse”. For example,

\[ A = \frac 12\left(\begin{array}{cc}\sqrt{3}+1&-2\\1&\sqrt{3}-1\end{array}\right) \qquad \lambda = \frac{\sqrt3-i}2 \qquad |\lambda| = 1 \nonumber \]


Figure \(\PageIndex{17}\)

\(\color{Red}|\lambda| \lt 1\text{:}\) when the scaling factor is less than \(1\text{,}\) then vectors tend to get shorter, i.e., closer to the origin. In this case, repeatedly multiplying a vector by \(A\) makes the vector “spiral in”. For example,

\[ A = \frac 1{2\sqrt 2}\left(\begin{array}{cc}\sqrt{3}+1&-2\\1&\sqrt{3}-1\end{array}\right) \qquad \lambda = \frac{\sqrt3-i}{2\sqrt 2} \qquad |\lambda| = \frac 1{\sqrt 2} \lt 1 \nonumber \]

Figure \(\PageIndex{18}\)
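
These three behaviors are easy to observe numerically. The following sketch (ours, assuming NumPy) iterates \(x \mapsto Ax\) for the three matrices above and prints the successive distances from the origin, which grow, stay bounded, or shrink by roughly a factor of \(|\lambda|\) per step:

    import numpy as np

    M = np.array([[np.sqrt(3) + 1, -2.0],
                  [1.0, np.sqrt(3) - 1]])

    cases = [(1/np.sqrt(2),     "|lambda| > 1, spirals out"),
             (1/2,              "|lambda| = 1, rotates around an ellipse"),
             (1/(2*np.sqrt(2)), "|lambda| < 1, spirals in")]

    for scale, label in cases:
        A = scale * M          # |lambda| = sqrt(2), 1, 1/sqrt(2) respectively
        x = np.array([1.0, 0.0])
        norms = []
        for _ in range(6):
            x = A @ x
            norms.append(np.linalg.norm(x))
        print(label, np.round(norms, 3))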

Example \(\PageIndex{8}\): Interactive: \(|\lambda|>1\)

\[ A = \frac 1{\sqrt 2}\left(\begin{array}{cc}\sqrt{3}+1&-2\\1&\sqrt{3}-1\end{array}\right) \qquad B = \frac 1{\sqrt 2}\left(\begin{array}{cc}\sqrt{3}&-1\\1&\sqrt{3}\end{array}\right) \qquad C = \left(\begin{array}{cc}2&0\\1&1\end{array}\right) \nonumber \]

\[ \lambda = \frac{\sqrt3-i}{\sqrt 2} \qquad |\lambda| = \sqrt 2 > 1 \nonumber \]


Example \(\PageIndex{9}\): Interactive: \(|\lambda|=1\)

\[ A = \frac 12\left(\begin{array}{cc}\sqrt{3}+1&-2\\1&\sqrt{3}-1\end{array}\right) \qquad B = \frac 12\left(\begin{array}{cc}\sqrt{3}&-1\\1&\sqrt{3}\end{array}\right) \qquad C = \left(\begin{array}{cc}2&0\\1&1\end{array}\right) \nonumber \]

\[ \lambda = \frac{\sqrt3-i}2 \qquad |\lambda| = 1 \nonumber \]


Example \(\PageIndex{10}\): Interactive: \(|\lambda|\lt1\)

\[ A = \frac 1{2\sqrt 2}\left(\begin{array}{cc}\sqrt{3}+1&-2\\1&\sqrt{3}-1\end{array}\right) \qquad B = \frac 1{2\sqrt 2}\left(\begin{array}{cc}\sqrt{3}&-1\\1&\sqrt{3}\end{array}\right) \qquad C = \left(\begin{array}{cc}2&0\\1&1\end{array}\right) \nonumber \]

\[ \lambda = \frac{\sqrt3-i}{2\sqrt 2} \qquad |\lambda| = \frac 1{\sqrt 2} \lt 1 \nonumber \]


Remark: Classification of \(2\times 2\) matrices up to similarity

At this point, we can write down the “simplest” possible matrix which is similar to any given \(2\times 2\) matrix \(A\). There are four cases:

  • \(A\) has two distinct real eigenvalues \(\lambda_1\neq\lambda_2\). In this case, \(A\) is diagonalizable, so \(A\) is similar to the matrix \[ \left(\begin{array}{cc}\lambda_1&0\\0&\lambda_2\end{array}\right). \nonumber \] This representation is unique up to reordering the eigenvalues.
  • \(A\) has one real eigenvalue \(\lambda\) of geometric multiplicity \(2\). In this case, we saw in Example 5.4.20 in Section 5.4  that \(A\) is equal to the matrix \[ \left(\begin{array}{cc}\lambda&0\\0&\lambda\end{array}\right). \nonumber \]
  • \(A\) has one real eigenvalue \(\lambda\) of geometric multiplicity \(1\). In this case, \(A\) is not diagonalizable, and we saw in Remark: Non-diagonalizable \(2\times 2\) matrices with an eigenvalue in Section 5.4  that \(A\) is similar to the matrix \[ \left(\begin{array}{cc}\lambda&1\\0&\lambda\end{array}\right). \nonumber \]
  • \(A\) has no real eigenvalues. In this case, \(A\) has a complex eigenvalue \(\lambda\text{,}\) and \(A\) is similar to the rotation-scaling matrix \[ \left(\begin{array}{cc}\Re(\lambda)&\Im(\lambda) \\ -\Im(\lambda)&\Re(\lambda)\end{array}\right) \nonumber \] by Theorem \(\PageIndex{1}\). By Proposition \(\PageIndex{1}\), the eigenvalues of a rotation-scaling matrix \(\left(\begin{array}{cc}a&-b\\b&a\end{array}\right)\) are \(a\pm bi\text{,}\) so that two rotation-scaling matrices \(\left(\begin{array}{cc}a&-b\\b&a\end{array}\right)\) and \(\left(\begin{array}{cc}c&-d\\d&c\end{array}\right)\) are similar if and only if \(a=c\) and \(b=\pm d\).
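
The four cases can be told apart from the trace and determinant alone. Here is a rough classifier (our own sketch, assuming NumPy) implementing the case analysis above:

    import numpy as np

    def classify(A, tol=1e-12):
        tr, det = np.trace(A), np.linalg.det(A)
        disc = tr**2 - 4*det
        if disc > tol:
            return "two distinct real eigenvalues: diagonalizable"
        if disc < -tol:
            return "complex eigenvalues: similar to a rotation-scaling matrix"
        lam = tr / 2                       # one repeated real eigenvalue
        if np.allclose(A, lam * np.eye(2)):
            return "geometric multiplicity 2: A = lambda*I"
        return "geometric multiplicity 1: similar to [[lambda,1],[0,lambda]]"

    print(classify(np.array([[2.0, -1.0], [2.0, 0.0]])))
    # complex eigenvalues: similar to a rotation-scaling matrix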

Block Diagonalization

For matrices larger than \(2\times 2\text{,}\) there is a theorem that combines Theorem 5.4.1 in Section 5.4  and Theorem \(\PageIndex{1}\). It says essentially that a matrix is similar to a matrix with parts that look like a diagonal matrix, and parts that look like a rotation-scaling matrix.

Theorem \(\PageIndex{2}\): Block Diagonalization Theorem

Let \(A\) be a real \(n\times n\) matrix. Suppose that for each (real or complex) eigenvalue, the algebraic multiplicity equals the geometric multiplicity. Then \(A = CBC^{-1}\text{,}\) where \(B\) and \(C\) are as follows:

  • The matrix \(B\) is block diagonal , where the blocks are \(1\times 1\) blocks containing the real eigenvalues (with their multiplicities), or \(2\times 2\) blocks containing the matrices \[ \left(\begin{array}{cc}\Re(\lambda)&\Im(\lambda) \\ -\Im(\lambda)&\Re(\lambda)\end{array}\right) \nonumber \] for each non-real eigenvalue \(\lambda\) (with multiplicity).
  • The columns of \(C\) form bases for the eigenspaces of the real eigenvalues, or come in pairs \(\bigl(\,\Re(v)\;\Im(v)\,\bigr)\) for the non-real eigenvalues.

Theorem \(\PageIndex{2}\) is proved in the same way as Theorem 5.4.1 in Section 5.4 and Theorem \(\PageIndex{1}\). It is best understood in the case of \(3\times 3\) matrices.

Note \(\PageIndex{6}\): Block Diagonalization of a \(3\times 3\) Matrix with a Complex Eigenvalue

Let \(A\) be a \(3\times 3\) matrix with a complex (non-real) eigenvalue \(\lambda_1\). Then \(\bar\lambda_1\) is also an eigenvalue, and there is one real eigenvalue \(\lambda_2\). Since there are three distinct eigenvalues, each has algebraic and geometric multiplicity one, so Theorem \(\PageIndex{2}\) applies to \(A\).

Let \(v_1\) be a (complex) eigenvector with eigenvalue \(\lambda_1\text{,}\) and let \(v_2\) be a (real) eigenvector with eigenvalue \(\lambda_2\). Then Theorem \(\PageIndex{2}\) says that \(A = CBC^{-1}\) for

\[ C = \left(\begin{array}{ccc}|&|&| \\ \Re(v_1)&\Im(v_1)&v_2 \\ |&|&|\end{array}\right) \quad\text{and}\quad B = \left(\begin{array}{ccc}\Re(\lambda_1)&\Im(\lambda_1)&0 \\ -\Im(\lambda_1)&\Re(\lambda_1)&0 \\ 0&0&\lambda_2\end{array}\right). \nonumber \]

Example \(\PageIndex{11}\): Geometry of a \(3\times 3\) matrix with a complex eigenvalue

\[ A = \frac 1{29}\left(\begin{array}{ccc}33&-23&9\\22&33&-23\\19&14&50\end{array}\right) \nonumber \]

First we find the (real and complex) eigenvalues of \(A\). We compute the characteristic polynomial using whatever method we like:

\[ f(\lambda) = \det(A-\lambda I_3) = -\lambda^3 + 4\lambda^2 - 6\lambda + 4. \nonumber \]

We search for a real root using the rational root theorem. The possible rational roots are \(\pm 1,\pm 2,\pm 4\text{;}\) we find \(f(2) = 0\text{,}\) so that \(\lambda-2\) divides \(f(\lambda)\). Performing polynomial long division gives

\[ f(\lambda) = -(\lambda-2)\bigl(\lambda^2-2\lambda+2\bigr). \nonumber \]

The quadratic term has roots

\[ \lambda = \frac{2\pm\sqrt{4-8}}2 = 1\pm i, \nonumber \]

so that the complete list of eigenvalues is \(\lambda_1 = 1-i\text{,}\) \(\bar\lambda_1 = 1+i\text{,}\) and \(\lambda_2 = 2\).

Now we compute some eigenvectors, starting with \(\lambda_1=1-i\). We row reduce (probably with the aid of a computer):

\[ A-(1-i)I_3 = \frac 1{29}\left(\begin{array}{ccc}4+29i&-23&9\\22&4+29i&-23\\19&14&21+29i\end{array}\right) \;\xrightarrow{\text{RREF}}\; \left(\begin{array}{ccc}1&0&7/5+i/5 \\ 0&1&-2/5+9i/5 \\ 0&0&0\end{array}\right). \nonumber \]

The free variable is \(z\text{,}\) and the parametric form is

\[\left\{\begin{array}{ccc}x &=& -\left(\dfrac 75+\dfrac 15i\right)z\\ y &=& \left(\dfrac 25-\dfrac 95i\right)z\end{array}\right. \quad\xrightarrow[\text{eigenvector}]{z=5}\quad v_1=\left(\begin{array}{c}-7-i\\2-9i\\5\end{array}\right).\nonumber\]

For \(\lambda_2=2\text{,}\) we have

\[ A - 2I_3 = \frac 1{29}\left(\begin{array}{ccc}-25&-23&9\\22&-25&-23\\19&14&-8\end{array}\right) \;\xrightarrow{\text{RREF}}\; \left(\begin{array}{ccc}1&0&-2/3 \\ 0&1&1/3 \\ 0&0&0\end{array}\right). \nonumber \]

\[\left\{\begin{array}{rrr}x &=& \dfrac 23z \\ y &=& -\dfrac 13z \end{array}\right. \quad\xrightarrow[\text{eigenvector}]{z=3}\quad v_2=\left(\begin{array}{c}2\\-1\\3\end{array}\right).\nonumber\]

According to Theorem \(\PageIndex{2}\), we have \(A=CBC^{-1}\) for

\[\begin{aligned}C&=\left(\begin{array}{ccc}|&|&| \\ \Re(v_1)&\Im(v_1)&v_2 \\ |&|&|\end{array}\right)=\left(\begin{array}{ccc}-7&-1&2\\2&-9&-1\\5&0&3\end{array}\right) \\ B&=\left(\begin{array}{ccc}\Re(\lambda_1)&\Im(\lambda_1)&0 \\ -\Im(\lambda_1)&\Re(\lambda_1)&0 \\ 0&0&2\end{array}\right)=\left(\begin{array}{ccc}1&-1&0\\1&1&0\\0&0&2\end{array}\right).\end{aligned}\]

The matrix \(B\) is a combination of the rotation-scaling matrix \(\left(\begin{array}{cc}1&-1\\1&1\end{array}\right)\) from Example \(\PageIndex{4}\), and a diagonal matrix. More specifically, \(B\) acts on the \(xy\)-coordinates by rotating counterclockwise by \(45^\circ\) and scaling by \(\sqrt2\text{,}\) and it scales the \(z\)-coordinate by \(2\). This means that points above the \(xy\)-plane spiral out away from the \(z\)-axis and move up, and points below the \(xy\)-plane spiral out away from the \(z\)-axis and move down.

The matrix \(A\) does the same thing as \(B\text{,}\) but with respect to the \(\{\Re(v_1),\Im(v_1),v_2\}\)-coordinate system. That is, \(A\) acts on the \(\Re(v_1),\Im(v_1)\)-plane by spiraling out, and \(A\) acts on the \(v_2\)-coordinate by scaling by a factor of \(2\). See the demo below.
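
As with the \(2\times 2\) examples, the block diagonalization can be confirmed numerically. A minimal check (ours, assuming NumPy):

    import numpy as np

    A = np.array([[33, -23,   9],
                  [22,  33, -23],
                  [19,  14,  50]]) / 29

    C = np.array([[-7, -1,  2],
                  [ 2, -9, -1],
                  [ 5,  0,  3]], dtype=float)

    B = np.array([[1, -1, 0],
                  [1,  1, 0],
                  [0,  0, 2]], dtype=float)

    print(np.allclose(A, C @ B @ np.linalg.inv(C)))   # True
    print(np.round(np.linalg.eigvals(A), 6))          # 2 and 1 +/- 1j, in some order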

