Math 1600 Lecture 17, Section 002, 23 Oct 2024

$ \newcommand{\bdmat}[1]{\left|\begin{array}{#1}} \newcommand{\edmat}{\end{array}\right|} \newcommand{\bmat}[1]{\left[\begin{array}{#1}} \newcommand{\emat}{\end{array}\right]} \newcommand{\coll}[2]{\bmat{r} #1 \\ #2 \emat} \newcommand{\ccoll}[2]{\bmat{c} #1 \\ #2 \emat} \newcommand{\colll}[3]{\bmat{r} #1 \\ #2 \\ #3 \emat} \newcommand{\ccolll}[3]{\bmat{c} #1 \\ #2 \\ #3 \emat} \newcommand{\collll}[4]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\ccollll}[4]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\colllll}[5]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\ccolllll}[5]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\red}[1]{{\color{red}#1}} \newcommand{\blue}[1]{{\color{blue}#1}} \newcommand{\lra}[1]{\mbox{$\xrightarrow{#1}$}} \newcommand{\rank}{\textrm{rank}} \newcommand{\row}{\textrm{row}} \newcommand{\col}{\textrm{col}} \newcommand{\null}{\textrm{null}} \newcommand{\nullity}{\textrm{nullity}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \renewcommand{\Arg}{\operatorname{Arg}} \renewcommand{\arg}{\operatorname{arg}} \newcommand{\adj}{\textrm{adj}} \newcommand{\mystack}[2]{\genfrac{}{}{0}{0}{#1}{#2}} \newcommand{\mystackthree}[3]{\mystack{\mystack{#1}{#2}}{#3}} \newcommand{\qimplies}{\quad\implies\quad} \newcommand{\qtext}[1]{\quad\text{#1}\quad} \newcommand{\qqtext}[1]{\qquad\text{#1}\qquad} \newcommand{\smalltext}[1]{{\small\text{#1}}} \newcommand{\svec}[1]{\,\vec{#1}} \newcommand{\querytext}[1]{\toggle{\blue{\text{?}}\vphantom{\text{#1}}}{\text{#1}}\endtoggle} \newcommand{\query}[1]{\toggle{\blue{\text{?}}\vphantom{#1}}{#1}\endtoggle} \newcommand{\smallquery}[1]{\toggle{\blue{\text{?}}}{#1}\endtoggle} \newcommand{\bv}{\mathbf{v}} \newcommand{\cyc}[2]{\cssId{#1}{\style{visibility:hidden}{#2}}} $

Announcements:

Today we continue Section 3.3. Continue reading Section 3.3 and also read Section 3.5. We aren't covering 3.4. Work through suggested exercises.

Homework 6 is on Gradescope and is due on Friday, October 25.

Math Help Centre: M-F 12:30-5:30 in PAB48/49 and online 6pm-8pm.

My next office hour is Friday 2:30-3:20 in MC130.

Partial review of Lecture 16 (Section 3.3):

Definition: An inverse of an $n \times n$ matrix $A$ is an $n \times n$ matrix $A'$ such that $$ A A' = I \qtext{and} A' A = I . $$ If such an $A'$ exists, we say that $A$ is invertible.

Theorem 3.6: If $A$ is an invertible matrix, then its inverse is unique.

Because of this, we write $A^{-1}$ for the inverse of $A$, when $A$ is invertible. We do not write $\frac{1}{A}$.

Example: If $A = \bmat{rr} 1 & 2 \\ 3 & 7 \emat$, then $A^{-1} = \bmat{rr} 7 & -2 \\ -3 & 1 \emat$.
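
One can check directly that \[ A A^{-1} = \bmat{rr} 1 & 2 \\ 3 & 7 \emat \bmat{rr} 7 & -2 \\ -3 & 1 \emat = \bmat{rr} 1 & 0 \\ 0 & 1 \emat = I , \] and similarly that $A^{-1} A = I$.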

But the zero matrix and the matrix $B = \bmat{rr} -1 & 3 \\ 2 & -6 \emat$ are not invertible.

Theorem 3.7: If $A$ is an invertible $n \times n$ matrix, then the system $A \vx = \vb$ has the unique solution $\vx = A^{-1} \vb$ for any $\vb$ in $\R^n$.

Remark: This is not in general an efficient way to solve a system.
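
For example, with $A$ as above and $\vb = \coll{1}{2}$ (an illustrative choice), the unique solution of $A \vx = \vb$ is \[ \vx = A^{-1} \vb = \bmat{rr} 7 & -2 \\ -3 & 1 \emat \coll{1}{2} = \coll{3}{-1} . \]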

Theorem 3.8: The matrix $A = \bmat{cc} a & b \\ c & d \emat$ is invertible if and only if $ad - bc \neq 0$. When this is the case, $$ A^{-1} = \frac{1}{ad-bc} \, \bmat{rr} \red{d} & \red{-}b \\ \red{-}c & \red{a} \emat . $$

We call $ad-bc$ the determinant of $A$, and write it $\det A$. It determines whether or not $A$ is invertible, and also shows up in the formula for $A^{-1}$.
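
For example, for the matrix $A$ above, $\det A = 1 \cdot 7 - 2 \cdot 3 = 1 \neq 0$, and the formula gives $A^{-1} = \bmat{rr} 7 & -2 \\ -3 & 1 \emat$, as before. On the other hand, $\det B = (-1)(-6) - (3)(2) = 0$, which confirms that $B$ is not invertible.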

Properties of Invertible Matrices

Theorem 3.9: Assume $A$ and $B$ are invertible matrices of the same size. Then:
  (a) $A^{-1}$ is invertible and $(A^{-1})^{-1} = \query{A}$
  (b) If $c$ is a non-zero scalar, then $cA$ is invertible and $(cA)^{-1} = \query{\frac{1}{c} A^{-1}}$
  (c) $AB$ is invertible and $(AB)^{-1} = \query{B^{-1} A^{-1}}$ (socks and shoes rule)
  (d) $A^T$ is invertible and $(A^T)^{-1} = \query{(A^{-1})^T}$
  (e) $A^n$ is invertible for all $n \geq 0$ and $(A^n)^{-1} = \query{(A^{-1})^n}$

To verify these, in every case you just check that the matrix shown is an inverse.
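
For example, for (c), we compute \[ (AB)(B^{-1} A^{-1}) = A (B B^{-1}) A^{-1} = A \, I \, A^{-1} = A A^{-1} = I , \] and similarly $(B^{-1} A^{-1})(AB) = I$, so $B^{-1} A^{-1}$ is the inverse of $AB$.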

Remark: Property (c) is the most important, and generalizes to more than two matrices, e.g. $(ABC)^{-1} = C^{-1} B^{-1} A^{-1}$.

Remark: For $n$ a positive integer, we define $A^{-n}$ to be $(A^{-1})^n = (A^n)^{-1}$. Then $A^n A^{-n} = I = A^0$, and more generally $A^r A^s = A^{r+s}$ for all integers $r$ and $s$.
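
For example, when $n = 2$: \[ A^2 A^{-2} = A A \, A^{-1} A^{-1} = A (A A^{-1}) A^{-1} = A \, I \, A^{-1} = A A^{-1} = I . \]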

Remark: There is no formula for $(A+B)^{-1}$. In fact, $A+B$ might not be invertible, even if $A$ and $B$ are. For example, $I$ and $-I$ are both invertible, but their sum is the zero matrix, which is not.

We can use these properties to solve a matrix equation for an unknown matrix.
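
For example, suppose $A$ and $B$ are invertible and $AXB = C$ (an illustrative equation, not one from the text). Multiplying on the left by $A^{-1}$ and on the right by $B^{-1}$ gives \[ X = A^{-1}(AXB)B^{-1} = A^{-1} C B^{-1} . \] Note that the sides matter: since matrix multiplication is not commutative, $A^{-1} C B^{-1}$ is not in general equal to $B^{-1} C A^{-1}$.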

New material

True/false: If $A$ is symmetric and invertible, then $A^{-1}$ is symmetric.

True/false: If $AB = AC$, then $B = C$.

True/false: If $AB = AC$ and $A$ is invertible, then $B = C$.

Challenge problem: Can you find a $2 \times 3$ matrix $A$ and a $3 \times 2$ matrix $A'$ such that $A A' = I_2$ and $A' A = I_3$?

Elementary Matrices

We'll use matrix multiplication to understand row operations.

Example: If $E = \bmat{rr} 1 & 0 \\ 3 & 1 \emat$ and $A = \bmat{rrr} 1 & 2 & 3 \\ 4 & 5 & 6 \emat$, then \[ EA = \bmat{rr} 1 & 0 \\ 3 & 1 \emat \bmat{rrr} 1 & 2 & 3 \\ 4 & 5 & 6 \emat = \bmat{rrr} 1 & 2 & 3 \\ 7 & 11 & 15 \emat . \] $E$ is what you get by applying the row operation $R_2 \leftarrow R_2 + 3 R_1$ to the identity matrix, and $EA$ is the result of applying that operation to $A$.

Definition: An elementary matrix is any matrix that can be obtained by performing an elementary row operation to an identity matrix.

Examples: \[ \kern-9ex E_1 = \bmat{rrr} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \emat,\, E_2 = \bmat{rrr} 1 & 0 & 0 \\ 0 & 6 & 0 \\ 0 & 0 & 1 \emat,\, E_3 = \bmat{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -2 & 0 & 1 \emat. \] These come from $I_3$ via the row operations $R_2 \leftrightarrow R_3$, $R_2 \leftarrow 6 R_2$ and $R_3 \leftarrow R_3 - 2 R_1$, respectively.

Theorem 3.10: If $E$ is an elementary matrix obtained by performing a row operation on $I_n$ and $A$ is an $n \times r$ matrix, then $EA$ is the matrix obtained by performing the same row operation on $A$.

Examples: \[ \kern-9ex \bmat{rrr} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \emat \bmat{rr} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \emat = \bmat{rr} a_{11} & a_{12} \\ a_{31} & a_{32} \\ a_{21} & a_{22} \emat . \] \[ \kern-9ex \bmat{rrr} 1 & 0 & 0 \\ 0 & 6 & 0 \\ 0 & 0 & 1 \emat \bmat{rr} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \emat = \bmat{rr} a_{11} & a_{12} \\ 6 a_{21} & 6 a_{22} \\ a_{31} & a_{32} \emat . \] \[ \kern-9ex \bmat{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -2 & 0 & 1 \emat \bmat{rr} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \emat = \bmat{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \\ -2 a_{11} + a_{31} & -2 a_{12} + a_{32} \emat . \]

Note: This is not an efficient way to perform row operations! It will be used for theoretical purposes later today.

Since row operations can be "undone", we see that elementary matrices are invertible.

For example, if we do $R_3 \leftarrow R_3 + 2 R_1$ to $E_3$ above, we get the identity. It follows that: \[ \bmat{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 1 \emat \bmat{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -2 & 0 & 1 \emat = \bmat{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \emat \] and the same for the other order. Similarly: \[ \bmat{rrr} 1 & 0 & 0 \\ 0 & 1/6 & 0 \\ 0 & 0 & 1 \emat \bmat{rrr} 1 & 0 & 0 \\ 0 & 6 & 0 \\ 0 & 0 & 1 \emat = \bmat{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \emat \] and \[ \bmat{rrr} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \emat \bmat{rrr} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \emat = \bmat{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \emat \] (so $E_1$ is its own inverse).

Theorem 3.11: Each elementary matrix is invertible, and its inverse is an elementary matrix of the same type.
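
In general, the elementary matrix for $R_i \leftrightarrow R_j$ is its own inverse; the inverse of the one for $R_i \leftarrow c R_i$ (with $c \neq 0$) is the one for $R_i \leftarrow \frac{1}{c} R_i$; and the inverse of the one for $R_i \leftarrow R_i + c R_j$ is the one for $R_i \leftarrow R_i - c R_j$.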

The fundamental theorem of invertible matrices

Very important! Will be used repeatedly, and expanded later.

Theorem 3.12: Let $A$ be an $n \times n$ matrix. The following are equivalent:
(a) $A$ is invertible.
(b) $A \vx = \vb$ has a unique solution for every $\vb \in \R^n$.
(c) $A \vx = \vec 0$ has only the trivial (zero) solution.
(d) The reduced row echelon form of $A$ is $I_n$.
(e) $A$ is a product of elementary matrices.

Proof: We'll show that (a) $\implies$ (b) $\implies$ (c) $\implies$ (d) $\implies$ (e) $\implies$ (a), which will prove all of the statements are equivalent.

(a) $\implies$ (b): This is Theorem 3.7 above.

(b) $\implies$ (c): If $A \vx = \vb$ has a unique solution for every $\vb$, then in particular $A \vx = \vec 0$ has a unique solution. Since $\vx = \vec 0$ is always a solution, it must be the only one.

(c) $\implies$ (d): Suppose that $A \vx = \vec 0$ has only the trivial solution.
That means that there are no free variables, so the rank of $A$ must be $n$.
So in reduced row echelon form, every row must have a leading $1$.
The only $n \times n$ matrix in reduced row echelon form with $n$ leading $1$'s is the identity matrix.

(d) $\implies$ (e): If $A$ can be reduced to $I_n$ by elementary row operations, then \[ E_k \cdots E_2 E_1 A = I_n \] for some elementary matrices $E_1, \ldots, E_k$. Since these are invertible, we have \[ A = (E_k \cdots E_2 E_1)^{-1} = E_1^{-1} \cdots E_k^{-1} , \] which is a product of elementary matrices by Theorem 3.11.

(e) $\implies$ (a): If $A$ is a product of elementary matrices, then it is invertible, using 3.11 and 3.9(c). $\qquad\Box$
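
For example, the matrix $A = \bmat{rr} 1 & 2 \\ 3 & 7 \emat$ from earlier is reduced to $I$ by $R_2 \leftarrow R_2 - 3 R_1$ followed by $R_1 \leftarrow R_1 - 2 R_2$, so \[ \bmat{rr} 1 & -2 \\ 0 & 1 \emat \bmat{rr} 1 & 0 \\ -3 & 1 \emat A = I \qimplies A = \bmat{rr} 1 & 0 \\ 3 & 1 \emat \bmat{rr} 1 & 2 \\ 0 & 1 \emat , \] a product of elementary matrices.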

We will see many important applications of Theorem 3.12. For now, we illustrate one theoretical application and one computational application.

Theorem 3.13: Let $A$ be a square matrix. If $B$ is a square matrix such that either $AB=I$ or $BA=I$, then $A$ is invertible and $B = A^{-1}$.

Proof: If $BA = I$, then the system $A \vx = \vec 0$ has only the trivial solution, as we saw in the challenge problem. So (c) is true. Therefore (a) is true, i.e. $A$ is invertible. Then: $$ \kern-6ex B = BI = BAA^{-1} = IA^{-1} = A^{-1} . $$ (The uniqueness argument again!) The other case is similar. $\quad\Box$

This is very useful! It means you only need to check multiplication in one order to know you have an inverse.

Gauss-Jordan method for computing the inverse

Theorem 3.14: Let $A$ be a square matrix. If a sequence of row operations reduces $A$ to $I$, then the same sequence of row operations transforms $I$ into $A^{-1}$.

Why does this work? It comes from our proof that (d) $\implies$ (e). If we represent the row operations as elementary matrices, we saw that \[ E_k \cdots E_2 E_1 A = I . \] Therefore, by Theorem 3.13, $E_k \cdots E_1$ is the inverse of $A$. And since $E_k \cdots E_1 = E_k \cdots E_1 I$, we can view this as the result of applying those row operations to $I$.

This gives the Gauss-Jordan method for determining whether a matrix $A$ is invertible, and finding the inverse:

1. Form the $n \times 2n$ matrix $[A \mid I\,]$.

2. Use row operations to get it into reduced row echelon form.

3. If a zero row appears in the left-hand portion, then $A$ is not invertible.

4. Otherwise, $A$ will turn into $I$, and the right hand portion is $A^{-1}$.
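
For example, for $A = \bmat{rr} 1 & 2 \\ 3 & 7 \emat$ from earlier: \[ \kern-6ex \bmat{rr|rr} 1 & 2 & 1 & 0 \\ 3 & 7 & 0 & 1 \emat \lra{R_2 \leftarrow R_2 - 3R_1} \bmat{rr|rr} 1 & 2 & 1 & 0 \\ 0 & 1 & -3 & 1 \emat \lra{R_1 \leftarrow R_1 - 2R_2} \bmat{rr|rr} 1 & 0 & 7 & -2 \\ 0 & 1 & -3 & 1 \emat , \] so $A^{-1} = \bmat{rr} 7 & -2 \\ -3 & 1 \emat$, as we found before.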

The trend continues: when given a problem to solve in linear algebra, we usually find a way to solve it using row reduction!

Note that finding $A^{-1}$ (reducing the $n \times 2n$ matrix $[A \mid I\,]$) is more work than solving a single system $A \vx = \vb$ (reducing the $n \times (n+1)$ matrix $[A \mid \vb\,]$).

We aren't covering inverse matrices over $\Z_m$ (Example 3.32).