The product of the eigenvalues can be found by multiplying the two values expressed in (**) above.
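Assuming that (**) gives, via the quadratic formula, the two roots of the characteristic equation λ² − (a + d)λ + (ad − bc) = 0 of a general 2 by 2 matrix with rows (a, b) and (c, d), the product collapses by difference of squares:

$$\lambda_1 \lambda_2 = \frac{(a+d) + \sqrt{(a+d)^2 - 4(ad-bc)}}{2} \cdot \frac{(a+d) - \sqrt{(a+d)^2 - 4(ad-bc)}}{2} = \frac{(a+d)^2 - \left[(a+d)^2 - 4(ad-bc)\right]}{4} = ad - bc = \det A$$

That is, the product of the eigenvalues of a 2 by 2 matrix equals its determinant.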
If 0 is an eigenvalue of a matrix A, then the equation Ax = λx = 0x = 0 must have nonzero solutions, which are the eigenvectors associated with λ = 0. But if A is square and Ax = 0 has nonzero solutions, then A must be singular; that is, det A must be 0. This observation establishes the following fact: Zero is an eigenvalue of a matrix if and only if the matrix is singular.
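For instance, taking an illustrative singular matrix (not one of the worked examples):

$$A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}, \qquad \det A = 0, \qquad A \begin{pmatrix} 2 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} = 0 \begin{pmatrix} 2 \\ -1 \end{pmatrix}$$

so λ = 0 is an eigenvalue, with the nonzero solutions of Ax = 0 as its eigenvectors.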
Example 3: Determine the eigenvalues and eigenvectors of the identity matrix I without first calculating its characteristic equation.
The equation Ax = λx characterizes the eigenvalues and associated eigenvectors of any matrix A. If A = I, this equation becomes x = λx. Since x ≠ 0, this equation implies λ = 1; then, from x = 1x, every (nonzero) vector is an eigenvector of I. Remember the definition: x is an eigenvector of a matrix A if Ax is a scalar multiple of x and x ≠ 0. Since multiplication by I leaves x unchanged, every (nonzero) vector must be an eigenvector of I, and the only possible scalar multiple, that is, the only eigenvalue, is 1.
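In symbols, since x ≠ 0:

$$Ix = \lambda x \;\Longrightarrow\; x = \lambda x \;\Longrightarrow\; (\lambda - 1)x = 0 \;\Longrightarrow\; \lambda = 1$$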
Example 4: The Cayley‐Hamilton Theorem states that any square matrix satisfies its own characteristic equation; that is, if A has characteristic polynomial p(λ), then p(A) = 0. To illustrate, consider the matrix from Example 1. Since its characteristic polynomial is p(λ) = λ² + 3λ + 2, the Cayley‐Hamilton Theorem states that p(A) should equal the zero matrix, 0. This is verified as follows:
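For illustration, let A be a hypothetical matrix with this characteristic polynomial, say the one with rows (1, −2) and (3, −4); any matrix with p(λ) = λ² + 3λ + 2 produces the same cancellation:

$$p(A) = A^2 + 3A + 2I = \begin{pmatrix} -5 & 6 \\ -9 & 10 \end{pmatrix} + \begin{pmatrix} 3 & -6 \\ 9 & -12 \end{pmatrix} + \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$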
If A is an n by n matrix, then its characteristic polynomial has degree n. The Cayley‐Hamilton Theorem then provides a way to express every integer power Aᵏ in terms of a polynomial in A of degree less than n. For example, for the 2 by 2 matrix above, the fact that A² + 3A + 2I = 0 implies A² = −3A − 2I. Thus, A² is expressed in terms of a polynomial of degree 1 in A. Now, by repeated applications, every positive integer power of this 2 by 2 matrix A can be expressed as a polynomial of degree less than 2. To illustrate, note the following calculation for expressing A⁵ in terms of a linear polynomial in A; the key is to consistently replace A² by −3A − 2I and simplify:
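$$\begin{aligned} A^3 &= A \cdot A^2 = A(-3A - 2I) = -3A^2 - 2A = -3(-3A - 2I) - 2A = 7A + 6I \\ A^4 &= A \cdot A^3 = A(7A + 6I) = 7A^2 + 6A = 7(-3A - 2I) + 6A = -15A - 14I \\ A^5 &= A \cdot A^4 = A(-15A - 14I) = -15A^2 - 14A = -15(-3A - 2I) - 14A = 31A + 30I \end{aligned}$$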
This result yields an explicit matrix for A⁵ as soon as the entries of A are substituted into 31A + 30I, a calculation which you are welcome to verify by performing the repeated multiplications A·A·A·A·A.
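As a quick check, here is a minimal sketch in Python; the matrix A below is the hypothetical stand‐in used above, whose characteristic polynomial is λ² + 3λ + 2, and the identity A⁵ = 31A + 30I holds for any matrix with that polynomial:

    import numpy as np

    # Hypothetical 2 by 2 matrix with characteristic polynomial λ² + 3λ + 2.
    A = np.array([[1, -2],
                  [3, -4]])
    I = np.eye(2, dtype=int)

    A5_direct = np.linalg.matrix_power(A, 5)   # the repeated multiplications A·A·A·A·A
    A5_poly = 31 * A + 30 * I                  # the linear polynomial in A

    print(A5_direct)                           # [[ 61 -62]
                                               #  [ 93 -94]]
    print(np.array_equal(A5_direct, A5_poly))  # True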
The Cayley‐Hamilton Theorem can also be used to express the inverse of an invertible matrix A as a polynomial in A. For example, for the 2 by 2 matrix A above,
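the characteristic equation gives A² + 3A + 2I = 0, which can be solved for the identity matrix:

$$A^2 + 3A = -2I \;\Longrightarrow\; A(A + 3I) = -2I \;\Longrightarrow\; A\left[-\tfrac{1}{2}(A + 3I)\right] = I \;\Longrightarrow\; A^{-1} = -\tfrac{1}{2}(A + 3I) \qquad (*)$$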
This result can be easily verified. The inverse of an invertible 2 by 2 matrix is found by first interchanging the entries on the diagonal, then taking the opposite of each off‐diagonal entry, and, finally, dividing by the determinant of A. Since det A = 2, this recipe must produce exactly the same matrix as the polynomial −½(A + 3I),
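as it does for the hypothetical matrix used above:

$$A^{-1} = \frac{1}{2}\begin{pmatrix} -4 & 2 \\ -3 & 1 \end{pmatrix} = \begin{pmatrix} -2 & 1 \\ -\tfrac{3}{2} & \tfrac{1}{2} \end{pmatrix} = -\tfrac{1}{2}\left[\begin{pmatrix} 1 & -2 \\ 3 & -4 \end{pmatrix} + \begin{pmatrix} 3 & 0 \\ 0 & 3 \end{pmatrix}\right] = -\tfrac{1}{2}(A + 3I)$$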
validating the expression in (*) for A⁻¹. The same ideas used to express any positive integer power of an n by n matrix A in terms of a polynomial of degree less than n can also be used to express any negative integer power of (an invertible matrix) A in terms of such a polynomial.
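For example, squaring both sides of (*) and once again replacing A² by −3A − 2I expresses A⁻² as a linear polynomial in A:

$$A^{-2} = \tfrac{1}{4}(A + 3I)^2 = \tfrac{1}{4}\left(A^2 + 6A + 9I\right) = \tfrac{1}{4}\left[(-3A - 2I) + 6A + 9I\right] = \tfrac{1}{4}(3A + 7I)$$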
Example 5: Let A be a square matrix. How do the eigenvalues and associated eigenvectors of A² compare with those of A? Assuming that A is invertible, how do the eigenvalues and associated eigenvectors of A⁻¹ compare with those of A?
Let λ be an eigenvalue of the matrix A, and let x be a corresponding eigenvector. Then Ax = λx, and it follows from this equation that
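$$A^2 x = A(Ax) = A(\lambda x) = \lambda(Ax) = \lambda(\lambda x) = \lambda^2 x$$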
Therefore, λ² is an eigenvalue of A², and x is the corresponding eigenvector. Now, if A is invertible, then A has no zero eigenvalues, and the following calculations are justified:
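$$x = Ix = (A^{-1}A)x = A^{-1}(Ax) = A^{-1}(\lambda x) = \lambda(A^{-1}x) \;\Longrightarrow\; A^{-1}x = \tfrac{1}{\lambda}x = \lambda^{-1}x$$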
so λ⁻¹ is an eigenvalue of A⁻¹ with corresponding eigenvector x.
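These relationships are easy to check numerically; the following minimal sketch again uses the hypothetical matrix with eigenvalues −1 and −2:

    import numpy as np

    # Hypothetical matrix with eigenvalues −1 and −2
    # (characteristic polynomial λ² + 3λ + 2).
    A = np.array([[1.0, -2.0],
                  [3.0, -4.0]])

    print(np.sort(np.linalg.eigvals(A)))                 # [-2. -1.]
    print(np.sort(np.linalg.eigvals(A @ A)))             # [1. 4.], the squares
    print(np.sort(np.linalg.eigvals(np.linalg.inv(A))))  # [-1. -0.5], the reciprocals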