Math 1600 Lecture 22, Section 2, 27 Oct 2014

$ \newcommand{\bdmat}[1]{\left|\begin{array}{#1}} \newcommand{\edmat}{\end{array}\right|} \newcommand{\bmat}[1]{\left[\begin{array}{#1}} \newcommand{\emat}{\end{array}\right]} \newcommand{\coll}[2]{\bmat{r} #1 \\ #2 \emat} \newcommand{\ccoll}[2]{\bmat{c} #1 \\ #2 \emat} \newcommand{\colll}[3]{\bmat{r} #1 \\ #2 \\ #3 \emat} \newcommand{\ccolll}[3]{\bmat{c} #1 \\ #2 \\ #3 \emat} \newcommand{\collll}[4]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\ccollll}[4]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\colllll}[5]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\ccolllll}[5]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\red}[1]{{\color{red}#1}} \newcommand{\lra}[1]{\mbox{$\xrightarrow{#1}$}} \newcommand{\rank}{\textrm{rank}} \newcommand{\row}{\textrm{row}} \newcommand{\col}{\textrm{col}} \newcommand{\null}{\textrm{null}} \newcommand{\nullity}{\textrm{nullity}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \renewcommand{\Arg}{\operatorname{Arg}} \renewcommand{\arg}{\operatorname{arg}} \newcommand{\adj}{\textrm{adj}} \newcommand{\mystack}[2]{\genfrac{}{}{0}{0}{#1}{#2}} \newcommand{\mystackthree}[3]{\mystack{\mystack{#1}{#2}}{#3}} \newcommand{\qimplies}{\quad\implies\quad} \newcommand{\qtext}[1]{\quad\text{#1}\quad} \newcommand{\qqtext}[1]{\qquad\text{#1}\qquad} \newcommand{\smalltext}[1]{{\small\text{#1}}} \newcommand{\svec}[1]{\,\vec{#1}} \newcommand{\querytext}[1]{\toggle{\text{?}\vphantom{\text{#1}}}{\text{#1}}\endtoggle} \newcommand{\query}[1]{\toggle{\text{?}\vphantom{#1}}{#1}\endtoggle} \newcommand{\smallquery}[1]{\toggle{\text{?}}{#1}\endtoggle} \newcommand{\bv}{\mathbf{v}} %\require{AMScd} $

Announcements:

Read Sections 4.0 and 4.1 for next class. Work through recommended homework questions.

Midterm results: Grades will be posted on OWL over the next few days. Midterms will be returned at the tutorials next week.

Tutorials: No tutorials this week! Fall break Thurs/Fri.

Office hour: today, 3:00-3:30, MC103B.

Help Centers: Monday-Friday 2:30-6:30 in MC 106, except this Thurs/Fri (fall break).

Review of last lecture: Section 3.6: Linear Transformations

Given an $m \times n$ matrix $A$, we can use $A$ to transform a column vector in $\R^n$ into a column vector in $\R^m$. We write: $$ T_A(\vx) = A \vx \quad\text{for $\vx$ in $\R^n$} $$

Any rule $T$ that assigns to each $\vx$ in $\R^n$ a unique vector $T(\vx)$ in $\R^m$ is called a transformation from $\R^n$ to $\R^m$ and is written $T : \R^n \to \R^m$.

Definition: A transformation $T : \R^n \to \R^m$ is called a linear transformation if:
1. $T(\vu + \vv) = T(\vu) + T(\vv)$ for all $\vu$ and $\vv$ in $\R^n$, and
2. $T(c \vu) = c \, T(\vu)$ for all $\vu$ in $\R^n$ and all scalars $c$.

Theorem 3.30: Let $A$ be an $m \times n$ matrix. Then $T_A : \R^n \to \R^m$ is a linear transformation.

Theorem 3.31: Let $T : \R^n \to \R^m$ be a linear transformation. Then $T = T_A$, where $$ A = [\, T(\ve_1) \mid T(\ve_2) \mid \cdots \mid T(\ve_n) \,] $$

The matrix $A$ is called the standard matrix of $T$ and is written $[T]$.
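As an aside (not from the text), Theorem 3.31 translates directly into a computation: apply $T$ to each standard basis vector and use the results as the columns of $[T]$. Here is a minimal NumPy sketch; the helper name `standard_matrix` and the sample map $T$ are my own, chosen only for illustration.

```python
import numpy as np

def standard_matrix(T, n):
    """Assemble [T] column by column: column i is T(e_i), as in Theorem 3.31."""
    return np.column_stack([T(np.eye(n)[:, i]) for i in range(n)])

# Hypothetical example: T(x1, x2) = (x1 + 2*x2, 3*x2) is linear from R^2 to R^2.
T = lambda x: np.array([x[0] + 2 * x[1], 3 * x[1]])
print(standard_matrix(T, 2))
# [[1. 2.]
#  [0. 3.]]
```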

Example 3.58: Let $R_\theta : \R^2 \to \R^2$ be rotation by an angle $\theta$ counterclockwise about the origin. Show that $R_\theta$ is linear and find its standard matrix.

Solution: A geometric argument shows that $R_\theta$ is linear.

To find the standard matrix, we note that $$ \kern-8ex R_\theta \left( \coll 1 0 \right) = \coll {\cos \theta} {\sin \theta} \qqtext{and} R_\theta \left( \coll 0 1 \right) = \coll {-\sin \theta} {\cos \theta} $$ Therefore, the standard matrix of $R_\theta$ is $\bmat{rr} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \emat$.

New linear transformations from old

If $T : \R^m \to \R^\red{n}$ and $S : \R^\red{n} \to \R^p$, then $S(T(\vx))$ makes sense for $\vx$ in $\R^m$. The composition of $S$ and $T$ is the transformation $S \circ T : \R^m \to \R^p$ defined by $$ (S \circ T)(\vx) = S(T(\vx)) . $$ If $S$ and $T$ are linear, it is easy to check that this new transformation $S \circ T$ is automatically linear.

Theorem 3.32: $[S \circ T] = [S][T]$.

We saw an applet illustrating linear transformations.

New material

Example: It is geometrically clear that $R_\theta \circ R_\phi = R_{\theta+\phi}$. This tells us that $$ \kern-9ex \bmat{rr} \cos(\theta+\phi) & \!-\sin(\theta+\phi) \\ \sin(\theta+\phi) & \cos(\theta+\phi) \emat = \bmat{rr} \cos(\theta) & \!-\sin(\theta) \\ \sin(\theta) & \cos(\theta) \emat \bmat{rr} \cos(\phi) & \!-\sin(\phi) \\ \sin(\phi) & \cos(\phi) \emat $$ This implies some trigonometric identities. For example, looking at the top-left entry, we find that $$ \cos(\theta+\phi) = \cos(\theta) \cos(\phi) - \sin(\theta) \sin(\phi) $$ Other trig identities also follow.
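A quick numerical sanity check of this matrix identity (my own illustration, not part of the lecture): build $[R_\theta]$ in NumPy and compare the product $[R_\theta][R_\phi]$ with $[R_{\theta+\phi}]$ for arbitrary angles.

```python
import numpy as np

def R(theta):
    """Standard matrix of rotation by theta radians (Example 3.58)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta, phi = 0.7, 1.1                                   # arbitrary angles
print(np.allclose(R(theta) @ R(phi), R(theta + phi)))   # True
```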

Note that $R_0$ is rotation by zero degrees, so $R_0(\vx) = \vx$. We say that $R_0$ is the identity transformation, which we write $I : \R^2 \to \R^2$. Similarly, $R_{360} = I$.

Since $R_{120} \circ R_{120} \circ R_{120} = R_{360} = I$, we must have $[R_{120}]^3 = [I] = I$. This is how I came up with the answer $[R_{120}] = \bmat{rr} -1/2 & -\sqrt{3}/2 \\ \sqrt{3}/2 & -1/2 \emat$ to the challenge problem in Lecture 15.
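As a check (again just an illustration, not from the text), cubing this matrix really does give the identity, matching $R_{120} \circ R_{120} \circ R_{120} = I$:

```python
import numpy as np

R120 = np.array([[-1/2,          -np.sqrt(3)/2],
                 [ np.sqrt(3)/2, -1/2         ]])
print(np.allclose(np.linalg.matrix_power(R120, 3), np.eye(2)))  # True
```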

Our new point of view about matrix multiplication gives us a geometrical way to understand it!

Inverses of Linear Transformations

Since $R_{60} \circ R_{-60} = R_0 = I$, it follows that $[R_{60}] [R_{-60}] = I$. So $[R_{-60}] = [R_{60}]^{-1}$. See Example 3.62 for details.

Definition: Let $S$ and $T$ be linear transformations from $\R^\red{n}$ to $\R^\red{n}$. Then $S$ and $T$ are inverse transformations if $S \circ T = I$ and $T \circ S = I$. When this is the case, we say that $S$ and $T$ are invertible and are inverses.

The same argument as for matrices shows that an inverse is unique when it exists, so we write $S = T^{-1}$ and $T = S^{-1}$.

Theorem 3.33: Let $T : \R^n \to \R^n$ be a linear transformation. Then $T$ is invertible if and only if $[T]$ is an invertible matrix. In this case, $[T^{-1}] = [T]^{-1}$.

The argument is easy and is essentially what we did for $R_{60}$.
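A small numerical illustration of Theorem 3.33 (my own, with angles in radians): the matrix inverse of $[R_{60}]$ agrees with $[R_{-60}]$, and their product is the identity.

```python
import numpy as np

def R(theta):
    """Standard matrix of rotation by theta radians."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

sixty = np.radians(60)
print(np.allclose(np.linalg.inv(R(sixty)), R(-sixty)))   # True
print(np.allclose(R(sixty) @ R(-sixty), np.eye(2)))      # True
```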

Question: Is projection onto the $x$-axis invertible?

Question: Is reflection in the $x$-axis invertible?

Question: Is translation a linear transformation?


Section 3.7: Markov Chains

Example 3.64: 200 people are testing two brands of toothpaste, Brand A and Brand B. Each month they are allowed to switch brands. The research firm observes that, each month, 70% of those using Brand A stay with Brand A while 30% switch to Brand B, and 80% of those using Brand B stay with Brand B while 20% switch to Brand A.

This is called a Markov chain. There are definite states, and from each state there is a transition probability for moving to another state at each time step. These probabilities are constant and depend only on the current state.

Suppose at the start that 120 people use Brand A and 80 people use Brand B. Then, in the next month, $$ 0.70(120) + 0.20(80) = 100 \qtext{will use Brand A} $$ and $$ 0.30(120) + 0.80(80) = 100 \qtext{will use Brand B} $$ This is a matrix equation: $$ \bmat{ll} 0.70 & 0.20 \\ 0.30 & 0.80 \emat \coll {120} {80} = \coll {100} {100} $$ Write $P$ for the transition matrix and $\vx_k$ for the state vector after $k$ months have gone by. Then $\vx_{k+1} = P \vx_k$. So $$ \vx_2 = P \vx_1 = \bmat{ll} 0.70 & 0.20 \\ 0.30 & 0.80 \emat \coll {100} {100} = \coll {90} {110} $$ and we see that there are 90 people using Brand A and 110 using Brand B after 2 months.
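If you want to reproduce these numbers, here is a short NumPy sketch of my own (not from the book); `P` is the transition matrix of Example 3.64 and `x0` the initial state vector.

```python
import numpy as np

P  = np.array([[0.70, 0.20],
               [0.30, 0.80]])    # transition matrix from Example 3.64
x0 = np.array([120, 80])         # 120 use Brand A, 80 use Brand B

x1 = P @ x0    # state after one month
x2 = P @ x1    # state after two months
print(x1)      # [100. 100.]  (up to rounding)
print(x2)      # [ 90. 110.]
```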

We can also work with the percentage of people using each brand. Then $\vx_0 = \coll {120/200} {80/200} = \coll {0.60} {0.40}$ and $P\vx_0 = \coll {0.50} {0.50}$. Vectors with non-negative components that sum to 1 are called probability vectors.

Note that $P$ is a stochastic matrix: this means that it is square and that each column is a probability vector.

The column indices of $P$ correspond to the current state and the row indices correspond to the next state. The entry $P_{ij}$ is the probability that you transition from state $j$ to state $i$ in one time step, where we now label the states with numbers.

Multiple steps: Can we compute the probability that we go from state $j$ to state $i$ in two steps? Well, $\vx_{k+2} = P \vx_{k+1} = P^2 \vx_k$, so the matrix $P^2$ computes this transition: $$ \kern-9.5ex P^2 = \bmat{ll} 0.7 & 0.2 \\ 0.3 & 0.8 \emat \bmat{ll} 0.7 & 0.2 \\ 0.3 & 0.8 \emat = \bmat{ll} 0.55 & 0.30 \\ 0.45 & 0.70 \emat $$

So the probability of going from Brand A to Brand B after two steps is $(P^2)_{21} = 0.45 = 0.21+0.24$: the path A $\to$ A $\to$ B contributes $(0.70)(0.30) = 0.21$, and the path A $\to$ B $\to$ B contributes $(0.30)(0.80) = 0.24$.
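Numerically (again a sketch of my own), squaring $P$ reproduces the two-step transition probabilities above. Note that Python indexes from 0, so the $(2,1)$ entry is `P2[1, 0]`.

```python
import numpy as np

P  = np.array([[0.70, 0.20],
               [0.30, 0.80]])
P2 = P @ P
print(P2)         # [[0.55 0.3 ]
                  #  [0.45 0.7 ]]
print(P2[1, 0])   # 0.45 (up to floating-point rounding): A -> B in two steps
```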

More generally, $(P^k)_{ij}$ is the probability of going from state $\red{j}$ to state $\red{i}$ in $k$ steps.

Long-term behaviour: By repeatedly multiplying by $P$, you can show that the state evolves as follows: $$ \kern-5ex \begin{aligned} &\coll {0.60} {0.40}, \coll {0.50} {0.50}, \coll {0.45} {0.55}, \coll {0.425} {0.575}, \coll {0.412} {0.588}, \coll {0.406} {0.594},\\ &\coll {0.403} {0.597}, \coll {0.402} {0.598}, \coll {0.401} {0.599}, \coll {0.400} {0.600}, \coll {0.400} {0.600}, \dots \end{aligned} $$ with 40% of the people using Brand A in the long run. Since $$ \bmat{ll} 0.70 & 0.20 \\ 0.30 & 0.80 \emat \coll {0.4} {0.6} = \coll {0.4} {0.6} , $$ once we reach this state, we don't leave. A state $\vx$ with $P \vx = \vx$ is called a steady state vector. We'll prove below that every Markov chain has a steady state vector!
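The iteration itself is easy to reproduce (my own sketch): start from the probability vector $\coll {0.60} {0.40}$ and repeatedly multiply by $P$.

```python
import numpy as np

P = np.array([[0.70, 0.20],
              [0.30, 0.80]])
x = np.array([0.60, 0.40])   # initial probability vector

for _ in range(10):
    x = P @ x
    print(x)                 # approaches [0.4 0.6], the steady state
```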

Here's how to find it. We want $P\vx = \vx$, which is the same as $(I - P)\vx = \vec 0$. The augmented system is $$ [I - P \mid \vec 0\,] = \bmat{rr|r} 0.30 & -0.20 & 0 \\ -0.30 & 0.20 & 0 \emat $$ which row reduces to $$ \bmat{rr|r} 1 & -2/3 & 0 \\ 0 & 0 & 0 \emat $$ The solution is $$ x_1 = \frac{2}{3} t,\quad x_2 = t $$ We'd like a probability vector, so $\frac{2}{3} t + t = 1$, which means that $t = 3/5$. This gives $\vx = \coll {0.4} {0.6}$ as we found above.
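The lecture finds the steady state by row reducing $[\,I - P \mid \vec 0\,]$. As a cross-check (an alternative of my own, not the method used above), one can ask NumPy for an eigenvector of $P$ with eigenvalue $1$ and rescale it so its entries sum to 1:

```python
import numpy as np

P = np.array([[0.70, 0.20],
              [0.30, 0.80]])

vals, vecs = np.linalg.eig(P)
v = vecs[:, np.argmin(np.abs(vals - 1))]  # eigenvector for the eigenvalue closest to 1
steady = v / v.sum()                      # rescale into a probability vector
print(steady)                             # approximately [0.4 0.6]
```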

Theorem: Every Markov chain has a non-trivial steady state vector.

This appears in the book as Theorem 4.30 in Section 4.6.

Proof: Let $P$ be the transition matrix. We want to find a non-trivial solution to $(I - P)\vx = \vec 0$. By the fundamental theorem of invertible matrices and the fact that $\rank (I - P) = \rank ((I - P)^T)$, this is equivalent to $(I - P)^T \vx = \vec 0$ having a non-trivial solution. That is, it is equivalent to finding a non-trivial $\vx$ such that $$ P^T \vx = \vx \qtext{(since $I^T = I$).} $$ But since $P$ is a stochastic matrix, we always have $$ P^T \ccolll 1 {\vdots} 1 = \ccolll 1 {\vdots} 1 $$ Therefore $P \vx = \vx$ also has a (possibly different) non-trivial solution.$\qquad\Box$
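The key step in the proof, that $P^T$ fixes the all-ones vector because each column of $P$ sums to $1$, is easy to verify numerically for our example (illustration only):

```python
import numpy as np

P = np.array([[0.70, 0.20],
              [0.30, 0.80]])
ones = np.ones(2)
print(P.T @ ones)   # [1. 1.], since each column of P sums to 1
```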

Example 3.65: A Markov chain can have more than two states. A rat is in a maze with three rooms, and always chooses to go through one of the doors with equal probability. Draw the state diagram, determine the transition matrix $P$ and describe how to find a steady-state vector.

We'll probably study Markov chains again in Section 4.6.