Today we finish Section 3.3 and start 3.5. Continue reading Section 3.5. We aren't covering 3.4. Work through suggested exercises.
Homework 6 is on Gradescope and is due today.
Homework 7 is on WeBWorK and will be available tomorrow morning.
Math Help Centre: M-F 12:30-5:30 in PAB48/49 and online 6pm-8pm.
My next office hour is today 2:30-3:20 in MC130.
The midterm is on Saturday, November 9, 2-4pm. It will cover what we get to on Friday, November 1, probably until the end of 3.5.
After today, we are halfway through the course!
Definition: An inverse of an $n \times n$ matrix $A$ is an $n \times n$ matrix $A'$ such that $$ A A' = I \qtext{and} A' A = I . $$ If such an $A'$ exists, we say that $A$ is invertible.
Theorem 3.6: If $A$ is an invertible matrix, then its inverse is unique.
We write $A^{-1}$ for the inverse of $A$, when $A$ is invertible.
Theorem 3.8: The matrix $A = \bmat{cc} a & b \\ c & d \emat$ is invertible if and only if $ad - bc \neq 0$. When this is the case, $$ A^{-1} = \frac{1}{ad-bc} \, \bmat{rr} \red{d} & \red{-}b \\ \red{-}c & \red{a} \emat . $$
We call $ad-bc$ the determinant of $A$, and write it $\det A$.
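The formula in Theorem 3.8 is easy to check numerically. Here is a minimal Python sketch (the function names and the sample matrix are mine, chosen for illustration, not taken from the lecture):

```python
def det_2x2(a, b, c, d):
    """The determinant ad - bc of [[a, b], [c, d]]."""
    return a * d - b * c

def inverse_2x2(a, b, c, d):
    """Theorem 3.8: (1/(ad - bc)) * [[d, -b], [-c, a]], or None when ad - bc = 0."""
    det = det_2x2(a, b, c, d)
    if det == 0:
        return None  # ad - bc = 0 means the matrix is not invertible
    return [[d / det, -b / det], [-c / det, a / det]]

# A hypothetical example: det = 1*4 - 2*3 = -2, so the matrix is invertible.
A = [[1, 2], [3, 4]]
Ainv = inverse_2x2(1, 2, 3, 4)
print(Ainv)  # [[-2.0, 1.0], [1.5, -0.5]]

# Check that A A^{-1} = I:
product = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
print(product)  # [[1.0, 0.0], [0.0, 1.0]]
```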
Remark: There is no formula for $(A+B)^{-1}$. In fact, $A+B$ might not be invertible, even if $A$ and $B$ are.
Theorem 3.12:
Let $A$ be an $n \times n$ matrix. The following are equivalent:
(a) $A$ is invertible.
(b) $A \vx = \vb$ has a unique solution for every $\vb \in \R^n$.
(c) $A \vx = \vec 0$ has only the trivial (zero) solution.
(d) The reduced row echelon form of $A$ is $I_n$.
(e) $A$ is a product of elementary matrices.
Theorem 3.13: Let $A$ be a square matrix. If $B$ is a square matrix such that either $AB=I$ or $BA=I$, then $A$ is invertible and $B = A^{-1}$.
Theorem 3.14: Let $A$ be a square matrix. If a sequence of row operations reduces $A$ to $I$, then the same sequence of row operations transforms $I$ into $A^{-1}$.
This gives the Gauss-Jordan method for determining whether a matrix $A$ is invertible, and finding the inverse:
1. Form the $n \times 2n$ matrix $[A \mid I\,]$.
2. Use row operations to get it into reduced row echelon form.
3. If a zero row appears in the left-hand portion, then $A$ is not invertible.
4. Otherwise, $A$ will turn into $I$, and the right hand portion is $A^{-1}$.
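Steps 1-4 can be sketched in code. Below is one possible Python implementation (names and the sample matrix are mine); it uses exact `Fraction` arithmetic so the row reduction behaves like hand computation, with no rounding:

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    """Row reduce [A | I] and read off A^{-1}, or return None if A is not invertible."""
    n = len(A)
    # Step 1: form the n x 2n matrix [A | I], using exact rational arithmetic.
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    # Step 2: reduce to reduced row echelon form, one pivot column at a time.
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None  # Step 3: the left half cannot reduce to I, so A is not invertible
        M[col], M[pivot] = M[pivot], M[col]   # swap the pivot row into place
        p = M[col][col]
        M[col] = [x / p for x in M[col]]      # scale so the pivot entry is 1
        for r in range(n):
            if r != col:                      # clear the rest of the pivot column
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # Step 4: the left half is now I, so the right half is A^{-1}.
    return [row[n:] for row in M]

# A hypothetical 2x2 example with det = 2*3 - 1*5 = 1:
inv = gauss_jordan_inverse([[2, 1], [5, 3]])
print([[float(x) for x in row] for row in inv])  # [[3.0, -1.0], [-5.0, 2.0]]
```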
The trend continues: when given a problem to solve in linear algebra, we usually find a way to solve it using row reduction!
Note that finding $A^{-1}$ is more work than solving a system $A \vx = \vb$.
We aren't covering inverse matrices over $\Z_m$ (Example 3.32).
Example 18-1: (Board.) Find the inverse of $A = \bmat{rr} 1 & 2 \\ 3 & 7 \emat$.
Example 18-2: (Board.) Find the inverse of $A = \bmat{rrr} 1 & 0 & 2 \\ 2 & 1 & 3 \\ 1 & -2 & 5 \emat$.
Example 18-3: (Board.) Find the inverse of $B = \bmat{rr} -1 & 3 \\ 2 & -6 \emat$.
Question: Let $A$ be a $4 \times 4$ matrix with rank $3$. Is $A$ invertible? What if the rank is $4$?
True/false: If $A$ is a square matrix, and the column vectors of $A$ are linearly independent, then $A$ is invertible.
True/false: If $A$ and $B$ are square matrices such that $AB$ is not invertible, then at least one of $A$ and $B$ is not invertible.
True/false: If $A$ and $B$ are matrices such that $AB = I$, then $BA = I$.
Question: Find invertible matrices $A$ and $B$ such that $A+B$ is not invertible.
Definition: A subspace of $\R^n$ is any collection $S$ of
vectors in $\R^n$ such that:
1. The zero vector $\vec 0$ is in $S$.
2. $S$ is closed under addition:
If $\vu$ and $\vv$ are in $S$, then $\vu + \vv$ is in $S$.
3. $S$ is closed under scalar multiplication:
If $\vu$ is in $S$ and $c$ is any scalar, then $c \vu$ is in $S$.
Conditions (2) and (3) together are the same as saying that $S$ is closed under linear combinations.
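Conditions (1)-(3) can be spot-checked numerically for a candidate set. A minimal Python sketch, using as an assumed example the plane $x + y + z = 0$ through the origin in $\R^3$ (random sampling can refute the conditions, never prove them):

```python
import random

def in_S(v):
    """Membership test for the candidate set: the plane x + y + z = 0 in R^3."""
    x, y, z = v
    return abs(x + y + z) < 1e-9  # small tolerance for floating-point error

def random_point_of_S():
    """A random point of S, via the parametrization (s, t, -s - t)."""
    s, t = random.uniform(-10, 10), random.uniform(-10, 10)
    return [s, t, -s - t]

def add(u, v):
    return [ui + vi for ui, vi in zip(u, v)]

def scale(c, u):
    return [c * ui for ui in u]

# (1) The zero vector is in S.
assert in_S([0.0, 0.0, 0.0])
# (2) and (3): spot-check closure under addition and scalar multiplication.
for _ in range(100):
    u, v, c = random_point_of_S(), random_point_of_S(), random.uniform(-5, 5)
    assert in_S(add(u, v))    # closure under addition
    assert in_S(scale(c, u))  # closure under scalar multiplication
print("all closure spot-checks passed")
```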
Example: $\R^n$ is a subspace of $\R^n$. Also, $S = \{ \vec 0 \}$ is a subspace of $\R^n$.
Example: A plane $\cP$ through the origin in $\R^3$ is a subspace. Applet.
Here's an algebraic argument.
Suppose $\vv_1$ and $\vv_2$ are direction vectors for $\cP$,
so $\cP = \span(\vv_1, \vv_2)$.
(1) $\vec 0$ is in $\cP$, since $\vec 0 = 0 \vv_1 + 0 \vv_2$.
(2) If $\vu = c_1 \vv_1 + c_2 \vv_2$ and $\vv = d_1 \vv_1 + d_2 \vv_2$,
then
$$
\begin{aligned}
\vu + \vv\ &= (c_1 \vv_1 + c_2 \vv_2) + (d_1 \vv_1 + d_2 \vv_2) \\
&= (c_1 + d_1) \vv_1 + (c_2 + d_2) \vv_2
\end{aligned}
$$
which is in $\span(\vv_1, \vv_2)$ as well.
(3) For any scalar $c$,
$$
c \vu = c (c_1 \vv_1 + c_2 \vv_2) = (c c_1) \vv_1 + (c c_2) \vv_2
$$
which is also in $\span(\vv_1, \vv_2)$.
On the other hand, a plane not through the origin is not a subspace. It of course fails (1), but the other conditions fail as well, as shown in the applet.
As another example, a line through the origin in $\R^3$ is also a subspace.
The same method as used above proves:
Theorem 3.19: Let $\vv_1, \vv_2, \ldots, \vv_k$ be vectors in $\R^n$. Then $\span(\vv_1, \ldots, \vv_k)$ is a subspace of $\R^n$.
See text. We call $\span(\vv_1, \ldots, \vv_k)$ the subspace spanned by $\vv_1, \ldots, \vv_k$. This generalizes the idea of a line or a plane through the origin.
Example: Is the set of vectors $\colll x y z$ with $x = y + z$ a subspace of $\R^3$?
See Example 3.38 in the text for a similar question.
Example: Is the set of vectors $\colll x y z$ with $x = y + z + 1$ a subspace of $\R^3$?
Example: Is the set of vectors $\coll x y $ with $y = \sin(x)$ a subspace of $\R^2$?
Theorem 3.21: Let $A$ be an $m \times n$ matrix and let $N$ be the set of solutions of the homogeneous system $A \vx = \vec 0$. Then $N$ is a subspace of $\R^n$.
Proof:
(1) Since $A \, \vec 0_n = \vec 0_m$, the zero vector $\vec 0_n$ is in $N$.
(2) Let $\vu$ and $\vv$ be in $N$, so $A \vu = \vec 0$ and $A \vv = \vec 0$.
Then
$$ A (\vu + \vv) = A \vu + A \vv = \vec 0 + \vec 0 = \vec 0 $$
so $\vu + \vv$ is in $N$.
(3) If $c$ is a scalar and $\vu$ is in $N$, then
$$ A (c \vu) = c A \vu = c \, \vec 0 = \vec 0 $$
so $c \vu$ is in $N$. $\qquad \Box$
Aside: At this point, the book states Theorem 3.22, which says that every linear system has no solution, one solution, or infinitely many solutions, and gives a proof of this. We already know this is true, using Theorem 2.2 from Section 2.2 (see Lecture 9). The proof given here is in a sense better, since it doesn't rely on knowing anything about row echelon form, but I won't use class time to cover it.
Spans and null spaces are the two main sources of subspaces.
Definition: Let $A$ be an $m \times n$ matrix.
1. The row space of $A$ is the subspace $\row(A)$ of $\R^n$ spanned
by the rows of $A$.
2. The column space of $A$ is the subspace $\col(A)$ of $\R^m$ spanned
by the columns of $A$.
3. The null space of $A$ is the subspace $\null(A)$ of $\R^n$
consisting of the solutions to the system $A \vx = \vec 0$.
Example: Let $A = \bmat{rr} 1 & 2 \\ -2 & -4 \emat$.
The column space is $\span(\coll 1 {-2}, \coll 2 {-4})$. Since these vectors are parallel, this is the same as $\span(\coll 1 {-2})$.
Similarly, the row space is $\span([1,\, 2])$.
To find the null space, we need to solve the system $A \vx = \vec 0$, so we row reduce $\bmat{rr|r} 1 & 2 & 0 \\ -2 & -4 & 0 \emat$ to $\bmat{rr|r} 1 & 2 & 0 \\ 0 & 0 & 0 \emat$. Then $y = t$ and $x = -2t$ are the solutions, so \[ \null(A) = \left\{ \coll {-2t} t \right\} = \span(\coll {-2} 1). \]
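The null space computation above can be confirmed directly: every vector of the form $(-2t,\, t)$ should satisfy $A\vx = \vec 0$. A quick Python check (plain lists, no libraries; the helper name is mine):

```python
def matvec(A, x):
    """Matrix-vector product, with A given as a list of rows."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2], [-2, -4]]

# The spanning vector (-2, 1) of null(A) really solves A x = 0:
print(matvec(A, [-2, 1]))  # [0, 0]

# Every solution (-2t, t) works, for any sample value of t:
for t in [1, -3, 2.5]:
    assert matvec(A, [-2 * t, t]) == [0, 0]
```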