Math 1600 Lecture 18, Section 2, 17 Oct 2014

$ \newcommand{\bdmat}[1]{\left|\begin{array}{#1}} \newcommand{\edmat}{\end{array}\right|} \newcommand{\bmat}[1]{\left[\begin{array}{#1}} \newcommand{\emat}{\end{array}\right]} \newcommand{\coll}[2]{\bmat{r} #1 \\ #2 \emat} \newcommand{\ccoll}[2]{\bmat{c} #1 \\ #2 \emat} \newcommand{\colll}[3]{\bmat{r} #1 \\ #2 \\ #3 \emat} \newcommand{\ccolll}[3]{\bmat{c} #1 \\ #2 \\ #3 \emat} \newcommand{\collll}[4]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\ccollll}[4]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\colllll}[5]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\ccolllll}[5]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\red}[1]{{\color{red}#1}} \newcommand{\lra}[1]{\mbox{$\xrightarrow{#1}$}} \newcommand{\rank}{\textrm{rank}} \newcommand{\row}{\textrm{row}} \newcommand{\col}{\textrm{col}} \newcommand{\null}{\textrm{null}} \newcommand{\nullity}{\textrm{nullity}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \renewcommand{\Arg}{\operatorname{Arg}} \renewcommand{\arg}{\operatorname{arg}} \newcommand{\adj}{\textrm{adj}} \newcommand{\mystack}[2]{\genfrac{}{}{0}{0}{#1}{#2}} \newcommand{\mystackthree}[3]{\mystack{\mystack{#1}{#2}}{#3}} \newcommand{\qimplies}{\quad\implies\quad} \newcommand{\qtext}[1]{\quad\text{#1}\quad} \newcommand{\qqtext}[1]{\qquad\text{#1}\qquad} \newcommand{\smalltext}[1]{{\small\text{#1}}} \newcommand{\svec}[1]{\,\vec{#1}} \newcommand{\querytext}[1]{\toggle{\text{?}\vphantom{\text{#1}}}{\text{#1}}\endtoggle} \newcommand{\query}[1]{\toggle{\text{?}\vphantom{#1}}{#1}\endtoggle} \newcommand{\smallquery}[1]{\toggle{\text{?}}{#1}\endtoggle} \newcommand{\bv}{\mathbf{v}} \newcommand{\R}{\mathbb{R}} \newcommand{\cP}{\mathcal{P}} \newcommand{\vu}{\mathbf{u}} \newcommand{\vv}{\mathbf{v}} \newcommand{\vx}{\mathbf{x}} \newcommand{\vb}{\mathbf{b}} \newcommand{\ve}{\mathbf{e}} \newcommand{\span}{\operatorname{span}} %\require{AMScd} $

Announcements:

Continue reading Section 3.5. We aren't covering 3.4. Work through recommended homework questions.

Five practice midterms have been posted on the course web page.

Next office hour: Monday, 3:00-3:30, MC103B.

Help Centers: Monday-Friday 2:30-6:30 in MC 106, but not during reading week.

After today, we are halfway through the course!

Partial review of Section 3.3, Lectures 16 and 17:

Definition: An inverse of an $n \times n$ matrix $A$ is an $n \times n$ matrix $A'$ such that $$ A A' = I \qtext{and} A' A = I . $$ If such an $A'$ exists, we say that $A$ is invertible.

Theorem 3.6: If $A$ is an invertible matrix, then its inverse is unique.

We write $A^{-1}$ for the inverse of $A$, when $A$ is invertible.

Theorem 3.8: The matrix $A = \bmat{cc} a & b \\ c & d \emat$ is invertible if and only if $ad - bc \neq 0$. When this is the case, $$ A^{-1} = \frac{1}{ad-bc} \, \bmat{rr} \red{d} & \red{-}b \\ \red{-}c & \red{a} \emat . $$

We call $ad-bc$ the determinant of $A$, and write it $\det A$.
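For example, for $A = \bmat{rr} 1 & 2 \\ 3 & 4 \emat$ we get $\det A = (1)(4) - (2)(3) = -2 \neq 0$, so $A$ is invertible and $$ A^{-1} = \frac{1}{-2} \bmat{rr} 4 & -2 \\ -3 & 1 \emat = \bmat{rr} -2 & 1 \\ 3/2 & -1/2 \emat . $$ You can check directly that $A A^{-1} = I$.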

Properties of Invertible Matrices

Theorem 3.9: Assume $A$ and $B$ are invertible matrices of the same size. Then:
  1. $A^{-1}$ is invertible and $(A^{-1})^{-1} = {A}$
  2. If $c$ is a non-zero scalar, then $cA$ is invertible and $(cA)^{-1} = {\frac{1}{c} A^{-1}}$
  3. $AB$ is invertible and $(AB)^{-1} = {B^{-1} A^{-1}}$ (socks and shoes rule; a quick check is given after this list)
  4. $A^T$ is invertible and $(A^T)^{-1} = {(A^{-1})^T}$
  5. $A^n$ is invertible for all $n \geq 0$ and $(A^n)^{-1} = {(A^{-1})^n}$
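As a quick check of the socks-and-shoes rule (3): $$ (AB)(B^{-1} A^{-1}) = A (B B^{-1}) A^{-1} = A I A^{-1} = A A^{-1} = I , $$ and similarly $(B^{-1} A^{-1})(AB) = I$, so by Theorem 3.6, $(AB)^{-1} = B^{-1} A^{-1}$.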

Remark: There is no formula for $(A+B)^{-1}$. In fact, $A+B$ might not be invertible, even if $A$ and $B$ are.
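For example, if $A = I$ and $B = -I$, then both are invertible, but $A + B = O$ is the zero matrix, which is not invertible.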

The fundamental theorem of invertible matrices:

Very important! Will be used repeatedly, and expanded later.

Theorem 3.12: Let $A$ be an $n \times n$ matrix. The following are equivalent:
a. $A$ is invertible.
b. $A \vx = \vb$ has a unique solution for every $\vb \in \R^n$.
c. $A \vx = \vec 0$ has only the trivial (zero) solution.
d. The reduced row echelon form of $A$ is $I_n$.

Theorem 3.13: Let $A$ be a square matrix. If $B$ is a square matrix such that either $AB=I$ or $BA=I$, then $A$ is invertible and $B = A^{-1}$.

Gauss-Jordan method for computing the inverse

Theorem 3.14: Let $A$ be a square matrix. If a sequence of row operations reduces $A$ to $I$, then the same sequence of row operations transforms $I$ into $A^{-1}$.

This gives a general-purpose method for determining whether a matrix $A$ is invertible and, if so, for finding its inverse:

1. Form the $n \times 2n$ matrix $[A \mid I\,]$.

2. Use row operations to get it into reduced row echelon form.

3. If a zero row appears in the left-hand portion, then $A$ is not invertible.

4. Otherwise, the left-hand portion turns into $I$, and the right-hand portion is $A^{-1}$.
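For example, for $A = \bmat{rr} 1 & 2 \\ 3 & 4 \emat$ as above: $$ \bmat{rr|rr} 1 & 2 & 1 & 0 \\ 3 & 4 & 0 & 1 \emat \lra{R_2 - 3R_1} \bmat{rr|rr} 1 & 2 & 1 & 0 \\ 0 & -2 & -3 & 1 \emat \lra{-\frac{1}{2} R_2} \bmat{rr|rr} 1 & 2 & 1 & 0 \\ 0 & 1 & 3/2 & -1/2 \emat \lra{R_1 - 2R_2} \bmat{rr|rr} 1 & 0 & -2 & 1 \\ 0 & 1 & 3/2 & -1/2 \emat , $$ so $A^{-1} = \bmat{rr} -2 & 1 \\ 3/2 & -1/2 \emat$, agreeing with the formula from Theorem 3.8.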

New material: Section 3.5: Subspaces, basis, dimension and rank

This section contains some of the most important concepts of the course.

Subspaces

A generalization of lines and planes through the origin.

Definition: A subspace of $\R^n$ is any collection $S$ of vectors in $\R^n$ such that:
1. The zero vector $\vec 0$ is in $S$.
2. $S$ is closed under addition: If $\vu$ and $\vv$ are in $S$, then $\vu + \vv$ is in $S$.
3. $S$ is closed under scalar multiplication: If $\vu$ is in $S$ and $c$ is any scalar, then $c \vu$ is in $S$.

Conditions (2) and (3) together are the same as saying that $S$ is closed under linear combinations.

Example: $\R^n$ is a subspace of $\R^n$. Also, $S = \{ \vec 0 \}$ is a subspace of $\R^n$.

Example: A plane $\cP$ through the origin in $\R^3$ is a subspace. Applet.

Here's an algebraic argument. Suppose $\vv_1$ and $\vv_2$ are direction vectors for $\cP$, so $\cP = \span(\vv_1, \vv_2)$.
(1) $\vec 0$ is in $\cP$, since $\vec 0 = 0 \vv_1 + 0 \vv_2$.
(2) If $\vu = c_1 \vv_1 + c_2 \vv_2$ and $\vv = d_1 \vv_1 + d_2 \vv_2$, then $$ \begin{aligned} \vu + \vv\ &= (c_1 \vv_1 + c_2 \vv_2) + (d_1 \vv_1 + d_2 \vv_2) \\ &= (c_1 + d_1) \vv_1 + (c_2 + d_2) \vv_2 \end{aligned} $$ which is in $\span(\vv_1, \vv_2)$ as well.
(3) For any scalar $c$, $$ c \vu = c (c_1 \vv_1 + c_2 \vv_2) = (c c_1) \vv_1 + (c c_2) \vv_2 $$ which is also in $\span(\vv_1, \vv_2)$.

On the other hand, a plane not through the origin is not a subspace. It of course fails (1), but the other conditions fail as well, as shown in the applet.

As another example, a line through the origin in $\R^3$ is also a subspace: it is the span of any direction vector for the line.

The same argument as above proves:

Theorem 3.19: Let $\vv_1, \vv_2, \ldots, \vv_k$ be vectors in $\R^n$. Then $\span(\vv_1, \ldots, \vv_k)$ is a subspace of $\R^n$.

See text. We call $\span(\vv_1, \ldots, \vv_k)$ the subspace spanned by $\vv_1, \ldots, \vv_k$. This generalizes the idea of a line or a plane through the origin.

Example: Is the set of vectors $\ccolll x y z$ with $x = y + z$ a subspace of $\R^3$?

See Example 3.38 in the text for a similar question.
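Sketch of the answer: every such vector can be written $$ \ccolll {y+z} y z = y \ccolll 1 1 0 + z \ccolll 1 0 1 , $$ so the set is $\span\left(\ccolll 1 1 0, \ccolll 1 0 1\right)$, which is a subspace by Theorem 3.19.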

Example: Is the set of vectors $\ccolll x y z$ with $x = y + z + 1$ a subspace of $\R^3$?
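Sketch of the answer: no. The zero vector is not in this set, since $0 \neq 0 + 0 + 1$, so condition (1) already fails.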

Example: Is the set of vectors $\ccoll x y $ with $y = \sin(x)$ a subspace of $\R^2$?
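Sketch of the answer: no. The zero vector is in this set, since $\sin(0) = 0$, but closure under scalar multiplication fails: $\ccoll {\pi/2} 1$ is in the set since $\sin(\pi/2) = 1$, while $2 \ccoll {\pi/2} 1 = \ccoll {\pi} 2$ is not, since $\sin(\pi) = 0 \neq 2$.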

Subspaces associated with matrices

Theorem 3.21: Let $A$ be an $m \times n$ matrix and let $N$ be the set of solutions of the homogeneous system $A \vx = \vec 0$. Then $N$ is a subspace of $\R^n$.

Proof: (1) Since $A \, \vec 0_n = \vec 0_m$, the zero vector $\vec 0_n$ is in $N$.
(2) Let $\vu$ and $\vv$ be in $N$, so $A \vu = \vec 0$ and $A \vv = \vec 0$. Then $$ A (\vu + \vv) = A \vu + A \vv = \vec 0 + \vec 0 = \vec 0 $$ so $\vu + \vv$ is in $N$.
(3) If $c$ is a scalar and $\vu$ is in $N$, then $$ A (c \vu) = c A \vu = c \, \vec 0 = \vec 0 $$ so $c \vu$ is in $N$. $\qquad \Box$

Aside: At this point, the book states Theorem 3.22, which says that every linear system has no solution, one solution or infinitely many solutions, and gives a proof of this. We already know this is true, using Theorem 2.2 from Section 2.2 (see Lecture 9). The proof given here is in a sense better, since it doesn't rely on knowing anything about row echelon form, but I won't use class time to cover it.

Spans and null spaces are the two main sources of subspaces.

Definition: Let $A$ be an $m \times n$ matrix.

1. The row space of $A$ is the subspace $\row(A)$ of $\R^n$ spanned by the rows of $A$.
2. The column space of $A$ is the subspace $\col(A)$ of $\R^m$ spanned by the columns of $A$.
3. The null space of $A$ is the subspace $\null(A)$ of $\R^n$ consisting of the solutions to the system $A \vx = \vec 0$.

Example: The column space of $A = \bmat{rr} 1 & 2 \\ 3 & 4 \emat$ is $\span(\coll 1 3, \coll 2 4)$. A vector $\vb$ is a linear combination of these columns if and only if the system $A \vx = \vb$ has a solution. But since $A$ is invertible (its determinant is $4 - 6 = -2 \neq 0$), every such system has a (unique) solution. So $\col(A) = \R^2$.

The row space of $A$ is the same as the column space of $A^T$, so by a similar argument, this is all of $\R^2$ as well.

The null space of $A$ consists of the vectors $\coll x y$ such that $A \coll x y = \vec 0$. That is, \[ x \coll 1 3 + y \coll 2 4 = \vec 0 . \] Since those columns are linearly independent, $\null(A) = \{ \vec 0 \}$.

Example: The column space of $A = \bmat{rr} 1 & 2 \\ 3 & 4 \\ 5 & 6 \emat$ is the span of the two columns, which is a subspace of $\R^3$. Since the columns are linearly independent, this is a plane through the origin in $\R^3$.

Determine whether $\colll 2 0 1$ and $\colll 2 0 {-2}$ are in $\col(A)$. (On board.)
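Here is a sketch of the computation done on the board: $\colll 2 0 1$ is in $\col(A)$ exactly when $A \vx = \colll 2 0 1$ is consistent. Row reducing $$ \bmat{rr|r} 1 & 2 & 2 \\ 3 & 4 & 0 \\ 5 & 6 & 1 \emat \qqtext{to} \bmat{rr|r} 1 & 2 & 2 \\ 0 & -2 & -6 \\ 0 & 0 & 3 \emat $$ produces the inconsistent equation $0 = 3$, so $\colll 2 0 1$ is not in $\col(A)$. The same steps with $\colll 2 0 {-2}$ give a consistent system with $y = 3$ and $x = -4$, so $\colll 2 0 {-2} = -4 \colll 1 3 5 + 3 \colll 2 4 6$ is in $\col(A)$.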

The row space of $A$ is the span of the three rows. But we already saw that the span of the first two rows is $\R^2$, so the span of all three rows is also $\R^2$. So $\row(A) = \R^2$.

Again, since the columns are linearly independent, $\null(A) = \{ \vec 0 \}$.

Example: Find the null space of $A = \bmat{rr} 1 & 2 \\ -2 & -4 \emat$.

We want to solve the system $A \vx = \vec 0$, so we row reduce $\bmat{rr|r} 1 & 2 & 0 \\ -2 & -4 & 0 \emat$ to $\bmat{rr|r} 1 & 2 & 0 \\ 0 & 0 & 0 \emat$. So $y = t$ is free and $x = -2t$, giving \[ \null(A) = \left\{ \coll {-2t} t : t \in \R \right\} = \span\left( \coll {-2} 1 \right) , \] a line through the origin in $\R^2$.

Next we will explain the best way to describe a subspace.

Basis

We know that to describe a plane $\cP$ through the origin, we can give two direction vectors $\vu$ and $\vv$ which are linearly independent. Then $\cP = \span(\vu, \vv)$. We know that two vectors are always enough, and that one vector will not work.

Definition: A basis for a subspace $S$ of $\R^n$ is a set of vectors $\vv_1, \ldots, \vv_k$ such that:
1. $S = \span(\vv_1, \ldots, \vv_k)$, and
2. $\vv_1, \ldots, \vv_k$ are linearly independent.

Condition (2) ensures that none of the vectors is redundant, so we aren't being wasteful. Giving a basis for a subspace is a good way to "describe" it.

Example 3.42: The standard unit vectors $\ve_1, \ldots, \ve_n$ in $\R^n$ are linearly independent and span $\R^n$, so they form a basis of $\R^n$ called the standard basis.

Example: We saw above that $\coll 1 3$ and $\coll 2 4$ span $\R^2$. They are also linearly independent, so they are a basis for $\R^2$.

Note that $\coll 1 0$ and $\coll 0 1$ are another basis for $\R^2$. A subspace will in general have many bases, but we'll see soon that they all have the same number of vectors! (Grammar: one basis, two bases.)

Next class we will continue talking about bases and will discuss systematic methods for finding the three subspaces associated to a matrix $A$.