Math 1600 Lecture 14, Section 002, 9 Oct 2024

$ \newcommand{\bdmat}[1]{\left|\begin{array}{#1}} \newcommand{\edmat}{\end{array}\right|} \newcommand{\bmat}[1]{\left[\begin{array}{#1}} \newcommand{\emat}{\end{array}\right]} \newcommand{\coll}[2]{\bmat{r} #1 \\ #2 \emat} \newcommand{\ccoll}[2]{\bmat{c} #1 \\ #2 \emat} \newcommand{\colll}[3]{\bmat{r} #1 \\ #2 \\ #3 \emat} \newcommand{\ccolll}[3]{\bmat{c} #1 \\ #2 \\ #3 \emat} \newcommand{\collll}[4]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\ccollll}[4]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\colllll}[5]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\ccolllll}[5]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\red}[1]{{\color{red}#1}} \newcommand{\blue}[1]{{\color{blue}#1}} \newcommand{\lra}[1]{\mbox{$\xrightarrow{#1}$}} \newcommand{\rank}{\textrm{rank}} \newcommand{\row}{\textrm{row}} \newcommand{\col}{\textrm{col}} \newcommand{\null}{\textrm{null}} \newcommand{\nullity}{\textrm{nullity}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \renewcommand{\Arg}{\operatorname{Arg}} \renewcommand{\arg}{\operatorname{arg}} \newcommand{\adj}{\textrm{adj}} \newcommand{\mystack}[2]{\genfrac{}{}{0}{0}{#1}{#2}} \newcommand{\mystackthree}[3]{\mystack{\mystack{#1}{#2}}{#3}} \newcommand{\qimplies}{\quad\implies\quad} \newcommand{\qtext}[1]{\quad\text{#1}\quad} \newcommand{\qqtext}[1]{\qquad\text{#1}\qquad} \newcommand{\smalltext}[1]{{\small\text{#1}}} \newcommand{\svec}[1]{\,\vec{#1}} \newcommand{\querytext}[1]{\toggle{\blue{\text{?}}\vphantom{\text{#1}}}{\text{#1}}\endtoggle} \newcommand{\query}[1]{\toggle{\blue{\text{?}}\vphantom{#1}}{#1}\endtoggle} \newcommand{\smallquery}[1]{\toggle{\blue{\text{?}}}{#1}\endtoggle} \newcommand{\bv}{\mathbf{v}} \newcommand{\cyc}[2]{\cssId{#1}{\style{visibility:hidden}{#2}}} $

Announcements:

Today we cover part of Section 3.2. Continue reading Section 3.1 (partitioned matrices) and Section 3.2 for next class. Work through suggested exercises.

Homework 5 is on WeBWorK and is due on Friday.

Math Help Centre: M-F 12:30-5:30 in PAB48/49 and online 6pm-8pm.

My next office hour is Friday 2:30-3:20 in MC130.

Partial review of Lecture 13:

Section 3.1: Matrix Operations

Definition: An $m \times n$ matrix $A$ is a rectangular array of numbers called the entries, with $m$ rows and $n$ columns. $A$ is called square if $m = n$.

The entry in the $i$th row and $j$th column of $A$ is usually written $a_{ij}$ or sometimes $A_{ij}$.

The diagonal entries are $a_{11}, a_{22}, \ldots$

If $A$ is square and the nondiagonal entries are all zero, then $A$ is called a diagonal matrix.

$$ % The Rules create some space below the matrices: \kern-8ex \small \mystack{ \bmat{ccc} 1 & -3/2 & \pi \\ \sqrt{2} & 2.3 & 0 \emat \Rule{0pt}{0pt}{20pt} }{\text{not square or diagonal}} \qquad \mystack{ \bmat{rr} 1 & 2 \\ 3 & 4 \emat \Rule{0pt}{0pt}{20pt} }{\text{square}} \qquad \mystack{ \bmat{rr} 1 & 0 \\ 0 & 4 \emat \Rule{0pt}{0pt}{20pt} }{\text{diagonal}} \qquad \mystack{ \bmat{rr} 1 & 0 \\ 0 & 0 \emat \Rule{0pt}{0pt}{20pt} }{\text{diagonal}} $$

Definition: A diagonal matrix with all diagonal entries equal is called a scalar matrix. A scalar matrix with diagonal entries all equal to $1$ is an identity matrix.

All of these are scalar matrices: $$ \kern-8ex % The Rules create some space below the matrices: \mystack{ I_3 = \bmat{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \emat \Rule{0pt}{0pt}{18pt} }{\text{identity matrix}} \quad \mystack{ \bmat{rrr} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \emat \Rule{0pt}{0pt}{18pt} }{\text{scalar}} \quad \mystack{ O = \bmat{rrr} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \emat \Rule{0pt}{0pt}{18pt} }{\text{square zero matrix}} $$ Note: Identity $\implies$ scalar $\implies$ diagonal $\implies$ square.

Matrix addition and scalar multiplication

Our first two operations are just like for vectors:

Definition: If $A$ and $B$ are both $m \times n$ matrices, then their sum $A + B$ is the $m \times n$ matrix obtained by adding the corresponding entries of $A$ and $B$:   $A + B = [a_{ij} + b_{ij}]$.

Definition: If $A$ is an $m \times n$ matrix and $c$ is a scalar, then the scalar multiple $cA$ is the $m \times n$ matrix obtained by multiplying each entry by $c$:   $cA = [c \, a_{ij}]$.

New material: Section 3.2: Matrix Algebra

Addition and scalar multiplication for matrices behave exactly like addition and scalar multiplication for vectors, with the entries just written in a rectangle instead of in a row or column.

Theorem 3.2: Let $A$, $B$ and $C$ be matrices of the same size, and let $c$ and $d$ be scalars. Then:

(a) $A + B = B + A$ (comm.) (b) $(A + B) + C = A + (B + C)$ (assoc.)
(c) $A + O = A$ (d) $A + (-A) = O$
(e) $c(A+B) = cA + cB$ (dist.)  (f) $(c+d)A = cA + dA$ (dist.)
(g) $c(dA) = (cd)A$ (h) $1A = A$

Compare to Theorem 1.1.

This means that all of the concepts for vectors transfer to matrices.

E.g., manipulating matrix equations: $$ \kern-8ex 2(A+B) - 3(2B - A) = 2A + 2B -6B +3A = 5A - 4B . $$

We define a linear combination of matrices $A_1, A_2, \ldots, A_k$ of the same size to be a matrix of the form $$ c_1 A_1 + c_2 A_2 + \cdots + c_k A_k ,$$ where the $c_i$ are scalars.

And we can define the span of a set of matrices to be the set of all their linear combinations.

And we can say that the matrices $A_1, A_2, \ldots, A_k$ are linearly independent if $$ c_1 A_1 + c_2 A_2 + \cdots + c_k A_k = O$$ has only the trivial solution $c_1 = \cdots = c_k = 0$, and are linearly dependent otherwise.

Our techniques for vectors also apply to answer questions such as:

Example 3.16 (a): Suppose $$ \kern-8ex \small A_1 = \bmat{rr} 0 & 1 \\ -1 & 0 \emat, \ A_2 = \bmat{rr} 1 & 0 \\ 0 & 1 \emat, \ A_3 = \bmat{rr} 1 & 1 \\ 1 & 1 \emat, \ B = \bmat{rr} 1 & 4 \\ 2 & 1 \emat $$ Is $B$ a linear combination of $A_1$, $A_2$ and $A_3$?

That is, are there scalars $c_1$, $c_2$ and $c_3$ such that $$ \kern-6ex c_1 \bmat{rr} 0 & 1 \\ -1 & 0 \emat + c_2 \bmat{rr} 1 & 0 \\ 0 & 1 \emat + c_3 \bmat{rr} 1 & 1 \\ 1 & 1 \emat = \bmat{rr} 1 & 4 \\ 2 & 1 \emat ? $$ Rewriting the left-hand side gives $$ \bmat{rr} c_2+c_3 & c_1+c_3 \\ -c_1+c_3 & c_2+c_3 \emat = \bmat{rr} 1 & 4 \\ 2 & 1 \emat $$ and this is equivalent to the system $$ \begin{aligned} \phantom{-c_1 + {}} c_2 + c_3\ &= 1 \\ \phantom{-} c_1 \phantom{{}+c_2} + c_3\ &= 4 \\ -c_1 \phantom{{}+c_2} + c_3\ &= 2 \\ \phantom{-c_1 + {}} c_2 + c_3\ &= 1 \\ \end{aligned} $$ and we can use row reduction to determine that there is a solution, and to find it if desired: $c_1 = 1, c_2 = -2, c_3 = 3$, so $A_1 - 2A_2 + 3A_3 = B$.
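For reference, here is one way to organize that row reduction (this augmented matrix is just the system above, written in matrix form): $$ \kern-6ex \bmat{rrr|r} 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 4 \\ -1 & 0 & 1 & 2 \\ 0 & 1 & 1 & 1 \emat \lra{\smalltext{RREF}} \bmat{rrr|r} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & -2 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 \emat $$ which gives exactly $c_1 = 1$, $c_2 = -2$, $c_3 = 3$.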

This works exactly as if we had written the matrices as column vectors and asked the same question.

See also Examples 3.16(b), 3.17 and 3.18 in text.

More review of Lecture 13:

Matrix multiplication

Definition: If $A$ is $m \times \red{n}$ and $B$ is $\red{n} \times r$, then the product $C = AB$ is the $m \times r$ matrix whose $i,j$ entry is $$ \kern-6ex c_{ij} = a_{i\red{1}} b_{\red{1}j} + a_{i\red{2}} b_{\red{2}j} + \cdots + a_{i\red{n}} b_{\red{n}j} = \sum_{\red{k}=1}^{n} a_{i\red{k}} b_{\red{k}j} . $$ This is the dot product of the $i$th row of $A$ with the $j$th column of $B$.

$$ \mystack{A}{m \times n} \ \ \mystack{B}{n \times r} \mystack{=}{\strut} \mystack{AB}{m \times r} $$
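For example, here is a quick illustration of the row-times-column rule, with entries I've made up: $$ \kern-6ex \bmat{rr} 1 & 2 \\ 3 & 4 \emat \bmat{rr} 5 & 0 \\ 6 & -1 \emat = \bmat{cc} 1 \cdot 5 + 2 \cdot 6 & 1 \cdot 0 + 2 \cdot (-1) \\ 3 \cdot 5 + 4 \cdot 6 & 3 \cdot 0 + 4 \cdot (-1) \emat = \bmat{rr} 17 & -2 \\ 39 & -4 \emat $$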

Powers

In general, $A^2 = AA$ doesn't make sense: if $A$ is $m \times n$, the product $AA$ is only defined when $n = m$. But if $A$ is $n \times n$ (square), then it makes sense to define the power $$A^k = AA\cdots A \quad\text{with $k$ factors}.$$

We write $A^1 = A$ and $A^0 = I_n$.

We will see in a moment that $(AB)C = A(BC)$, so the expression for $A^k$ is unambiguous. And it follows that $$ A^r A^s = A^{r+s} \qquad\text{and}\qquad (A^r)^s = A^{rs} $$ for all nonnegative integers $r$ and $s$.

New material: Section 3.2: Matrix Algebra (continued)

Properties of Matrix Multiplication and Powers

This is new ground, as there is no comparable way to multiply two vectors and get another vector (the dot product produces a scalar).

For the most part, matrix multiplication behaves like multiplication of real numbers, but there are several differences:

Example 3.13: (Board.) Powers of $$ B = \bmat{rr} 0 & -1 \\ 1 & 0 \emat $$
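For reference, a sketch of how the board computation goes: $$ \kern-6ex B^2 = \bmat{rr} 0 & -1 \\ 1 & 0 \emat \bmat{rr} 0 & -1 \\ 1 & 0 \emat = \bmat{rr} -1 & 0 \\ 0 & -1 \emat = -I_2 , $$ so $B^3 = B^2 B = -B$ and $B^4 = (B^2)^2 = I_2$: the powers of $B$ repeat with period $4$. Note that $B^2 = -I_2$, while no real number has a negative square.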

Question: Is there a nonzero matrix $A$ such that $A^2 = O$?
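Yes. One standard example (sketched here, since the notes leave the question for the board): $$ A = \bmat{rr} 0 & 1 \\ 0 & 0 \emat \qimplies A^2 = \bmat{rr} 0 & 0 \\ 0 & 0 \emat = O , $$ so, unlike for real numbers, $A^2 = O$ does not force $A = O$.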

Example 14-1: Tell me the entries of two $2 \times 2$ matrices $A$ and $B$, and let's compute $AB$ and $BA$. (Board.)
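One possible outcome, with entries chosen just for illustration: $$ \kern-8ex A = \bmat{rr} 1 & 2 \\ 3 & 4 \emat , \quad B = \bmat{rr} 0 & 1 \\ 1 & 0 \emat \qimplies AB = \bmat{rr} 2 & 1 \\ 4 & 3 \emat \neq \bmat{rr} 3 & 4 \\ 1 & 2 \emat = BA . $$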

So we've seen: $AB \neq BA$ in general, and a nonzero matrix can have $A^2 = O$, so some familiar rules for real numbers fail for matrices.

But most expected properties do hold:

Theorem 3.3: Let $A$, $B$ and $C$ be matrices of the appropriate sizes, and let $k$ be a scalar. Then:

(a) $A(BC) = (AB)C$ (associativity)
(b) $A(B + C) = AB + AC$ (left distributivity)
(c) $(A+B)C = AC + BC$ (right distributivity)
(d) $k(AB) = (kA)B = A(kB)$ (no cool name)
(e) $I_m A = A = A I_n$ if $A$ is $m \times n$ (identity)

The text proves (b) and half of (e); the proofs of (c) and the other half of (e) are the same, with left and right reversed.

Proof of (d): (Using $A_{ij}$ notation for matrix entries.) $$ \kern-8ex \begin{aligned} (k(AB))_{ij}\ &= k (AB)_{ij} = k (\row_i(A) \cdot \col_j(B)) \\ &= (k \, \row_i(A)) \cdot \col_j(B) = \row_i(kA) \cdot \col_j(B) \\ &= ((kA)B)_{ij} \end{aligned} $$ so $k(AB) = (kA)B$. The other part of (d) is similar.$\quad\Box$

Proof of (a): $$ \kern-8ex %\small \begin{aligned} ((AB)C)_{ij}\ &= \sum_k (AB)_{ik} C_{kj} = \sum_k \sum_l A_{il} B_{lk} C_{kj} \\ &= \sum_l \sum_k A_{il} B_{lk} C_{kj} = \sum_l A_{il} (BC)_{lj} = (A(BC))_{ij} \end{aligned} $$ so $A(BC) = (AB)C$.$\quad\Box$

Example 14-2: $A I = A$. (Board.)
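For reference, a sketch of the entrywise check, in the same style as the proof of (d): if $A$ is $m \times n$, then $$ (A I_n)_{ij} = \sum_{k=1}^{n} A_{ik} (I_n)_{kj} = A_{ij} , $$ since $(I_n)_{kj}$ is $1$ when $k = j$ and $0$ otherwise, so only the $k = j$ term survives. The proof that $I_m A = A$ is the same on the other side.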

Example 14-3: Solve $$2 ( X - A) + (A + B)(B + I) = O$$ for $X$ in terms of $A$ and $B$. (Board.)
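For reference, a sketch of the algebra, using Theorem 3.3 and being careful not to commute $A$ and $B$: $$ \kern-8ex 2(X - A) + (A+B)(B+I) = 2X - 2A + AB + A + B^2 + B = O , $$ so $2X = A - AB - B^2 - B$ and $X = \tfrac{1}{2}\left(A - AB - B^2 - B\right)$.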

Example 3.20: If $A$ and $B$ are square matrices of the same size, is $(A+B)^2 = A^2 + 2 AB + B^2$?
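The answer is no in general. Expanding with left and right distributivity, but without assuming $AB = BA$: $$ (A+B)^2 = (A+B)(A+B) = A^2 + AB + BA + B^2 , $$ and this equals $A^2 + 2AB + B^2$ if and only if $AB = BA$.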

Note: Theorem 3.3 shows that a scalar matrix $kI_n$ commutes with every $n \times n$ matrix $A$. So $$ \kern-8ex (A + kI_n)^2 = A^2 + 2 A (k I_n) + (k I_n)^2 = \query{A^2 + 2 k A + k^2 I_n} $$ ($I_n$ is like the number $1$.)

True/false: If $A$ is $2 \times 3$ and $B$ is $3 \times 2$, then we always have $AB \neq BA$.

True/false: Every $1 \times 1$ matrix is a scalar matrix.
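For reference: both statements are true. $AB$ is $2 \times 2$ while $BA$ is $3 \times 3$, so the two products are never even the same size; and a $1 \times 1$ matrix $[a]$ is diagonal with all (one) of its diagonal entries equal, so it is a scalar matrix.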

Note: The non-commutativity of matrices is directly related to quantum mechanics. Observables in quantum mechanics are described by matrices, and if the matrices don't commute, then you can't know both quantities at the same time!

Next class: more from Sections 3.1 and 3.2: Transpose, symmetric matrices, partitioned matrices.