Today we cover Section 4.1. Read Section 4.2 for next class. Work through suggested exercises.
Homework 8 is on WeBWorK and is due Friday at 11:55pm.
It will be at least a week before the midterm grades are available.
Math Help Centre: M-F 12:30-5:30 in PAB48/49 and online 6pm-8pm.
My next office hour is today 3:30-4:20 in MC130.
Flu/covid vaccination clinic: Mon-Thurs this week, 8:30-4, Thames Hall Atrium. Drop in. Free. Students who have never been to Student Health Services need to call to register.
A Markov chain has states $1, 2, \ldots, n$ and transition probabilities $P_{ij}$, the probability of moving from state $j$ to state $i$ at each time step. These probabilities are constant over time and depend only on the current state.
$P$ is a stochastic matrix, which means that it is square, has non-negative entries, and the columns each sum to $1$.
If $\vx_k$ is the state vector after $k$ time steps, then $\vx_{k+1} = P \vx_k$.
A vector $\vx$ with $P \vx = \vx$ is called a steady state vector. Rearranging, $\vx - P\vx = \vec 0$, or $(I - P) \vx = \vec 0$. So to find a non-trivial steady state vector for a Markov chain, we solve the homogeneous system with coefficient matrix $I-P$.
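As a numerical sketch (not part of the course material), here is how one could check the stochastic-matrix conditions and find a steady state vector using numpy. The $2 \times 2$ matrix $P$ below is a made-up example; since $P\vx = \vx$ means $\vx$ is an eigenvector of $P$ for the eigenvalue $1$, we can extract it from numpy's eigenvector routine.

```python
import numpy as np

# A hypothetical 2-state stochastic matrix: square, non-negative
# entries, each column summing to 1.
P = np.array([[0.9, 0.3],
              [0.1, 0.7]])
print((P >= 0).all() and np.allclose(P.sum(axis=0), 1.0))  # True

# A steady state vector satisfies P x = x, i.e. x is an
# eigenvector of P for the eigenvalue 1.
w, V = np.linalg.eig(P)
x = V[:, np.isclose(w, 1.0)].ravel().real
x = x / x.sum()          # scale so the entries sum to 1

print(np.round(x, 4))    # the steady state vector
print(np.allclose(P @ x, x))  # True: P x = x
```

For this $P$, solving $(I-P)\vx = \vec 0$ by hand gives $x_1 = 3 x_2$, so the normalized steady state vector is $(0.75, 0.25)$, matching the output.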
Theorem: Every Markov chain has a non-trivial steady state vector.
This appears in the book as Theorem 4.30 in Section 4.6, but I proved it last time.
More generally, a central problem in linear algebra is to find $\vx$ such that $A \vx$ is a scalar multiple of $\vx$.
Definition: Let $A$ be an $\red{n} \times \red{n}$ matrix. A scalar $\lambda$ (lambda) is called an eigenvalue of $A$ if there is a nonzero vector $\vx$ such that $A \vx = \lambda \vx$. Such a vector $\vx$ is called an eigenvector of $A$ corresponding to $\lambda$.
We showed that $\lambda = 1$ is an eigenvalue of every stochastic matrix $P$.
Example 25-1: Since $$ \bmat{rr} 1 & 2 \\ 2 & -2 \emat \coll 2 1 = \coll 4 2 = 2 \coll 2 1 , $$ we see that $2$ is an eigenvalue of $\bmat{rr} 1 & 2 \\ 2 & -2 \emat$ with eigenvector $\coll 2 1$.
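The verification in Example 25-1 is just a matrix-vector multiplication, so it can be checked numerically (a quick sketch using numpy, which is not assumed by the course):

```python
import numpy as np

A = np.array([[1, 2],
              [2, -2]])
v = np.array([2, 1])

# A v should equal 2 v, confirming that 2 is an eigenvalue of A
# with eigenvector v.
print(A @ v)                         # [4 2]
print(np.array_equal(A @ v, 2 * v))  # True
```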
Example 4.2: Show that $5$ is an eigenvalue of $A = \bmat{rr} 1 & 2 \\ 4 & 3 \emat$ and determine all eigenvectors corresponding to this eigenvalue.
Solution: We are looking for nonzero solutions to $A \vx = 5 \vx$. This is the same as $(A - 5I) \vx = \vec 0$, so we compute the coefficient matrix: $$ \kern-7ex A - 5I = \bmat{rr} 1 & 2 \\ 4 & 3 \emat - \bmat{rr} 5 & 0 \\ 0 & 5 \emat = \bmat{rr} -4 & 2 \\ 4 & -2 \emat $$ The columns are linearly dependent, so the null space of $A-5I$ is nonzero. So $A \vx = 5 \vx$ has a nontrivial solution, which is what it means for $5$ to be an eigenvalue.
To find the eigenvectors, we compute the null space of $A - 5I$: $$ \kern-7ex [\, A-5I \mid \vec 0\,] = \bmat{rr|r} -4 & 2 & 0 \\ 4 & -2 & 0 \emat \lra{} \bmat{rr|r} 1 & -1/2 & 0 \\ 0 & 0 & 0 \emat $$ The solutions are of the form $\ccoll {t/2} t = t \ccoll {1/2} 1$. So the eigenvectors for the eigenvalue $5$ are the nonzero multiples of ${\ccoll {1/2} 1}^{\strut}$.
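The null space computation above can also be done exactly with sympy (a symbolic algebra library, used here only as an illustration, not as course material). Its `nullspace` method row reduces $A - 5I$ just as we did by hand:

```python
from sympy import Matrix, eye

A = Matrix([[1, 2],
            [4, 3]])

# The null space of A - 5I consists of the eigenvectors for
# lambda = 5, together with the zero vector.
basis = (A - 5 * eye(2)).nullspace()
print(basis)   # one basis vector: (1/2, 1), as found above
```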
Illustrate with the linear transformation applet.
Definition: Let $A$ be an $n \times n$ matrix and let $\lambda$ be an eigenvalue of $A$. The collection of all eigenvectors corresponding to $\lambda$, together with the zero vector, is a subspace called the eigenspace of $\lambda$ and is denoted $E_\lambda$. In other words, $$ E_\lambda = \null(A - \lambda I) . $$
In Example 4.2 above, $E_5 = \span\left\{ \ccoll {1/2} 1 \right\}$.
Example: Give an eigenvalue of the matrix $A = \bmat{rr} 2 & 0 \\ 0 & 2 \emat$ and compute its eigenspace.
Example: If $0$ is an eigenvalue of $A$, what is another name for $E_0$?
Given a specific number $\lambda$, we now know how to check whether $\lambda$ is an eigenvalue: we check whether $A - \lambda I$ has a nontrivial null space. And we can find the eigenvectors by finding the null space.
We also have a geometric way to find all eigenvalues $\lambda$, at least in the $2 \times 2$ case. Is there an algebraic way to check all $\lambda$ at once?
By the fundamental theorem of invertible matrices, $A - \lambda I$ has a nontrivial null space if and only if it is not invertible. For $2 \times 2$ matrices, we can check invertibility using the determinant!
Example 25-1 (cont): Find all eigenvalues of $A = \bmat{rr} 1 & 2 \\ 2 & -2 \emat$.
Solution: We need to find all $\lambda$ such that $\det(A-\lambda I) = 0$. $$ \kern-6ex \begin{aligned} \det(A-\lambda I) &= \det \bmat{cc} 1-\lambda & 2 \\ 2 & -2-\lambda \emat \\ &= (1-\lambda)(-2-\lambda)-4 = \lambda^2 + \lambda - 6 , \end{aligned} $$ so we need to solve the quadratic equation $\lambda^2 + \lambda - 6 = 0$. This can be factored as $(\lambda - 2)(\lambda + 3) = 0$, and so $\lambda = 2$ or $\lambda = -3$, the same as we saw above and with the applet.
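The algebra above can be cross-checked numerically (a sketch using numpy, not part of the course): the roots of the characteristic polynomial $\lambda^2 + \lambda - 6$ should agree with numpy's built-in eigenvalue routine.

```python
import numpy as np

# Coefficients of lambda^2 + lambda - 6, the characteristic
# polynomial of A computed above.
coeffs = [1, 1, -6]
print(np.sort(np.roots(coeffs)))   # the roots -3 and 2

# Cross-check against numpy's eigenvalue routine.
A = np.array([[1, 2],
              [2, -2]])
print(np.sort(np.linalg.eigvals(A)))   # the eigenvalues -3 and 2
```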
We proceed to find the eigenvectors for these eigenvalues, by solving $(A - 2I) \vx = \vec 0$ and $(A + 3I) \vx = \vec 0$. On board.
Appendix D provides review of polynomials and their solutions. Look it over now. We'll discuss it in a couple of lectures.
So now we know how to handle the $2 \times 2$ case. See also Example 4.5 in text.
Example 4.3: Show that $\lambda = 6$ is an eigenvalue of $A = \bmat{rrr} 7 & 1 & -2 \\ -3 & 3 & 6 \\ 2 & 2 & 2 \emat$ and find a basis for the eigenspace $E_6$.
Solution: We need to compute the null space of $A - 6I$. We row reduce: \[ A - 6I = \bmat{rrr} 1 & 1 & -2 \\ -3 & -3 & 6 \\ 2 & 2 & -4 \emat \longrightarrow \bmat{rrr} 1 & 1 & -2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \emat. \] Since there are two free variables, the null space is nonzero, so $6$ is an eigenvalue of $A$. $x_2 = s$ and $x_3 = t$ are free variables, and $x_1= -x_2 + 2x_3$, so \[ E_6 = \left\{ \colll {-s + 2 t} {s \phantom{\,\,\, + 2t}} t \right\} = \span\left\{ \colll {-1} 1 0 ,\, \colll 2 0 1 \right\} \] On board: show how to check that these vectors are eigenvectors.
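The on-board check that the two basis vectors of $E_6$ really are eigenvectors amounts to two matrix-vector multiplications, which can be sketched in numpy (again, just an illustration):

```python
import numpy as np

A = np.array([[ 7, 1, -2],
              [-3, 3,  6],
              [ 2, 2,  2]])

# Check the two basis vectors of E_6 found above: A v = 6 v.
for v in (np.array([-1, 1, 0]), np.array([2, 0, 1])):
    print(np.array_equal(A @ v, 6 * v))  # True both times
```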
This works fine if we are told the prospective eigenvalue, but to find the eigenvalues we need to learn about determinants of $n \times n$ matrices, which is Section 4.2.
Note: We won't discuss eigenvectors and eigenvalues for matrices over $\Z_m$. We will discuss complex numbers $\C$ in a later lecture.