Read Section 3.5 for Friday. This is core material. We aren't covering 3.4. Work through recommended homework questions.
Quiz 5 will focus on 3.1, 3.2 and the first half of 3.3 (up to and including Example 3.26).
Midterm: It is Saturday, March 1, 6:30pm-9:30pm, one week after reading week. If you have a conflict, you must let me know this week. It will cover the material up to and including the lecture on Monday, Feb 24. Note that Friday, Feb 14 and Monday, Feb 24 cover core material for the course.
No tutorials during reading week.
Office hour: None during reading week.
Help Centers: Monday-Friday 2:30-6:30 in MC 106, but not during reading week.
Definition: An inverse of an $n \times n$ matrix $A$ is an $n \times n$ matrix $A'$ such that $$ A A' = I \qtext{and} A' A = I . $$ If such an $A'$ exists, we say that $A$ is invertible.
Theorem 3.6: If $A$ is an invertible matrix, then its inverse is unique.
Because of this, we write $A^{-1}$ for the inverse of $A$, when $A$ is invertible. We do not write $\frac{1}{A}$.
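The proof of Theorem 3.6 is a one-line computation worth recording: if $A'$ and $A''$ are both inverses of $A$, then associativity gives

```latex
$$
A'' = A'' I = A'' (A A') = (A'' A) A' = I A' = A' .
$$
```

This is the uniqueness argument that will reappear later in the notes.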
Example: If $A = \bmat{rr} 1 & 2 \\ 3 & 7 \emat$, then $A$ is invertible, with inverse $A^{-1} = \bmat{rr} 7 & -2 \\ -3 & 1 \emat$.
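As always with a claimed inverse, this can be verified by direct multiplication:

```latex
$$
\bmat{rr} 1 & 2 \\ 3 & 7 \emat \bmat{rr} 7 & -2 \\ -3 & 1 \emat
= \bmat{rr} 7 - 6 & -2 + 2 \\ 21 - 21 & -6 + 7 \emat
= \bmat{rr} 1 & 0 \\ 0 & 1 \emat ,
$$
```

and the product in the other order is also $I$.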
But the zero matrix and the matrix $B = \bmat{rr} -1 & 3 \\ 2 & -6 \emat$ are not invertible.
Theorem 3.7: If $A$ is an invertible $n \times n$ matrix, then the system $A \vx = \vb$ has the unique solution $\vx = A^{-1} \vb$ for any $\vb$ in $\R^n$.
Remark: This is not in general an efficient way to solve a system.
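The proof of Theorem 3.7 is short: multiplying both sides on the left by $A^{-1}$ shows that any solution must equal $A^{-1} \vb$, and substituting back shows that it is indeed a solution:

```latex
$$
A \vx = \vb \qimplies A^{-1} A \vx = A^{-1} \vb \qimplies \vx = A^{-1} \vb ,
\qtext{and} A (A^{-1} \vb) = (A A^{-1}) \vb = \vb .
$$
```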
Theorem 3.8: The matrix $A = \bmat{cc} a & b \\ c & d \emat$ is invertible if and only if $ad - bc \neq 0$. When this is the case, $$ A^{-1} = \frac{1}{ad-bc} \, \bmat{rr} \red{d} & \red{-}b \\ \red{-}c & \red{a} \emat . $$
We call $ad-bc$ the determinant of $A$, and write it $\det A$. It determines whether or not $A$ is invertible, and also shows up in the formula for $A^{-1}$.
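To check the formula in Theorem 3.8, compute directly:

```latex
$$
\bmat{cc} a & b \\ c & d \emat \bmat{rr} d & -b \\ -c & a \emat
= \bmat{cc} ad - bc & -ab + ba \\ cd - dc & ad - bc \emat
= (ad - bc) \, I ,
$$
```

so dividing by $ad - bc$ (which is possible exactly when $ad - bc \neq 0$) gives $I$; the product in the other order works out the same way.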
To verify these properties, in each case you just check that the matrix shown satisfies the definition of an inverse.
Remark: Property (c) is the most important, and generalizes to more than two matrices, e.g. $(ABC)^{-1} = C^{-1} B^{-1} A^{-1}$.
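The verification of property (c) follows this pattern: check directly that $B^{-1} A^{-1}$ is an inverse of $AB$:

```latex
$$
(AB)(B^{-1} A^{-1}) = A (B B^{-1}) A^{-1} = A I A^{-1} = A A^{-1} = I ,
$$
```

and similarly $(B^{-1} A^{-1})(AB) = I$, so by uniqueness $(AB)^{-1} = B^{-1} A^{-1}$.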
Remark: For $n$ a positive integer, we define $A^{-n}$ to be $(A^{-1})^n = (A^n)^{-1}$. Then $A^n A^{-n} = I = A^0$, and more generally $A^r A^s = A^{r+s}$ for all integers $r$ and $s$.
Remark: There is no formula for $(A+B)^{-1}$. In fact, $A+B$ might not be invertible, even if $A$ and $B$ are.
We can use these properties to solve a matrix equation for an unknown matrix.
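For example (an illustrative equation, not one from the notes): if $A$ and $B$ are invertible and $AXB = C$, we can solve for the unknown matrix $X$ by multiplying on the left by $A^{-1}$ and on the right by $B^{-1}$:

```latex
$$
AXB = C \qimplies A^{-1} (AXB) B^{-1} = A^{-1} C B^{-1} \qimplies X = A^{-1} C B^{-1} .
$$
```

Note that the order matters: since matrix multiplication is not commutative, $A^{-1} C B^{-1}$ is in general different from $A^{-1} B^{-1} C$.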
But it's not possible to have $A' A = I_3$ with the given sizes. To see why, suppose we did have $A' A = I_3$ with $A$ a $2 \times 3$ matrix.
Consider the homogeneous system
$$A \colll x y z = \coll 0 0 . $$
Since $\rank\, A \leq 2$ and there are three variables, this system must
have infinitely many solutions.
But
$$
\kern-5ex
A \vx = \vec 0 \qimplies A' A \vx = A' \svec 0 \qimplies \vx = \vec 0 , $$
so there is only one solution. This is a contradiction.
More generally, unless $A$ is square, you can't find a matrix $A'$ that makes both $AA' = I$ and $A'A = I$ true.
Theorem 3.12:
Let $A$ be an $n \times n$ matrix. The following are equivalent:
a. $A$ is invertible.
b. $A \vx = \vb$ has a unique solution for every $\vb \in \R^n$.
c. $A \vx = \vec 0$ has only the trivial (zero) solution.
d. The reduced row echelon form of $A$ is $I_n$.
We'll use our past work on solving systems to show that (b) $\implies$ (c) $\implies$ (d) $\implies$ (b), which will prove that (b), (c) and (d) are equivalent.
We will only partially explain why (b) implies (a).
(b) $\implies$ (c): If $A \vx = \vb$ has a unique solution for every $\vb$, then it's true when $\vb$ happens to be the zero vector.
(c) $\implies$ (d):
Suppose that $A \vx = \vec 0$ has only the trivial solution.
That means that there are no free variables, so the rank of $A$ must be $n$.
So in reduced row echelon form, every row must have a leading $1$.
The only $n \times n$ matrix in reduced row echelon form with
$n$ leading $1$'s is the identity matrix.
(d) $\implies$ (b): If the reduced row echelon form of $A$ is $I_n$, then the augmented matrix $[A \mid \vb\,]$ row reduces to $[I_n \mid \vc\,]$, from which you can read off the unique solution $\vx = \vc$.
(b) $\implies$ (a) (partly):
Assume $A \vx = \vb$ has a solution for every $\vb$.
That means we can find $\vx_1, \ldots, \vx_n$ such that
$A \vx_i = \ve_i$ for each $i$.
If we let $B = [ \vx_1 \mid \cdots \mid \vx_n\,]$ be the matrix with the
$\vx_i$'s as columns, then
$$
\kern-8ex
AB = A \, [ \vx_1 \mid \cdots \mid \vx_n\,] = [ A \vx_1 \mid \cdots \mid A \vx_n\,]
= [ \ve_1 \mid \cdots \mid \ve_n \,] = I_n .
$$
So we have found a right inverse for $A$.
It turns out that $BA= I_n$ as well, but this is harder to see.
$\qquad\Box$
Note: We have omitted (e) from the theorem, since we aren't covering elementary matrices. They are used in the text to prove the other half of (b) $\implies$ (a).
We will see many important applications of Theorem 3.12. For now, we illustrate one theoretical application and one computational application.
Theorem 3.13: Let $A$ be a square matrix. If $B$ is a square matrix such that either $AB=I$ or $BA=I$, then $A$ is invertible and $B = A^{-1}$.
Proof: If $BA = I$, then the system $A \vx = \vec 0$ has only the trivial solution, as we saw in the challenge problem. So (c) is true. Therefore (a) is true, i.e. $A$ is invertible. Then: $$ \kern-6ex B = BI = BAA^{-1} = IA^{-1} = A^{-1} . $$ (The uniqueness argument again!)$\quad\Box$
This is very useful! It means you only need to check multiplication in one order to know you have an inverse.
Theorem 3.14: Let $A$ be a square matrix. If a sequence of row operations reduces $A$ to $I$, then the same sequence of row operations transforms $I$ into $A^{-1}$.
Why does this work? It's the combination of our arguments that (d) $\implies$ (b) and (b) $\implies$ (a). If we row reduce $[ A \mid \ve_i\,]$ to $[ I \mid \vc_i \,]$, then $A \vc_i = \ve_i$. So if $B$ is the matrix whose columns are the $\vc_i$'s, then $AB = I$. So, by Theorem 3.13, $B = A^{-1}$.
The trick is to notice that we can solve all of the systems $A \vx = \ve_i$ at once by row reducing $[A \mid I\,]$. The matrix on the right will be exactly $B$!
Example on whiteboard: Find the inverse of $A = \bmat{rr} 1 & 2 \\ 3 & 7 \emat$.
Illustrate proof of Theorem 3.14.
Example on whiteboard: Find the inverse of $A = \bmat{rrr} 1 & 0 & 2 \\ 2 & 1 & 3 \\ 1 & -2 & 5 \emat$. Illustrate proof of Theorem 3.14.
Example on whiteboard: Find the inverse of $B = \bmat{rr} -1 & 3 \\ 2 & -6 \emat$.
So now we have a general purpose method for determining whether a matrix $A$ is invertible, and finding the inverse:
1. Form the augmented matrix $[A \mid I\,]$.
2. Use row operations to get it into reduced row echelon form.
3. If a zero row appears in the left-hand portion, then $A$ is not invertible.
4. Otherwise, $A$ will turn into $I$, and the right hand portion is $A^{-1}$.
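This procedure is mechanical enough to sketch in code. Here is a minimal Python sketch (the function name and representation are my own, not from the course or the text), using exact rational arithmetic so no rounding occurs:

```python
from fractions import Fraction

def inverse(A):
    """Row reduce [A | I]; return A^{-1} as a list of rows,
    or None if A is not invertible."""
    n = len(A)
    # Step 1: form the augmented matrix [A | I] with exact entries.
    M = [[Fraction(x) for x in row] + [Fraction(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    # Step 2: Gauss-Jordan elimination, one pivot column at a time.
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            # Step 3: no pivot available => rank < n => not invertible.
            return None
        M[col], M[pivot] = M[pivot], M[col]      # swap the pivot row up
        p = M[col][col]
        M[col] = [x / p for x in M[col]]         # scale to a leading 1
        for r in range(n):
            if r != col and M[r][col] != 0:      # clear the rest of the column
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # Step 4: the left half is now I; the right half is A^{-1}.
    return [row[n:] for row in M]

# The 2x2 example from the notes:
assert inverse([[1, 2], [3, 7]]) == [[7, -2], [-3, 1]]
# The non-invertible example:
assert inverse([[-1, 3], [2, -6]]) is None
```

This mirrors the hand computation exactly: the assertions check it against the worked $2 \times 2$ examples above.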
The trend continues: when given a problem to solve in linear algebra, we usually find a way to solve it using row reduction!
We aren't covering inverse matrices over $\Z_m$.
Question: Let $A$ be a $4 \times 4$ matrix with rank $3$. Is $A$ invertible? What if the rank is $4$?
True/false: If $A$ is a square matrix, and the column vectors of $A$ are linearly independent, then $A$ is invertible.
True/false: If $A$ and $B$ are square matrices such that $AB$ is not invertible, then at least one of $A$ and $B$ is not invertible.
True/false: If $A$ and $B$ are matrices such that $AB = I$, then $BA = I$.
Question: Find invertible matrices $A$ and $B$ such that $A+B$ is not invertible.