Math 1600 Lecture 26, Section 002, 13 Nov 2024

$ \newcommand{\bdmat}[1]{\left|\begin{array}{#1}} \newcommand{\edmat}{\end{array}\right|} \newcommand{\bmat}[1]{\left[\begin{array}{#1}} \newcommand{\emat}{\end{array}\right]} \newcommand{\coll}[2]{\bmat{r} #1 \\ #2 \emat} \newcommand{\ccoll}[2]{\bmat{c} #1 \\ #2 \emat} \newcommand{\colll}[3]{\bmat{r} #1 \\ #2 \\ #3 \emat} \newcommand{\ccolll}[3]{\bmat{c} #1 \\ #2 \\ #3 \emat} \newcommand{\collll}[4]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\ccollll}[4]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\colllll}[5]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\ccolllll}[5]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\red}[1]{{\color{red}#1}} \newcommand{\blue}[1]{{\color{blue}#1}} \newcommand{\green}[1]{{\color{green}#1}} \newcommand{\lra}[1]{\mbox{$\xrightarrow{#1}$}} \newcommand{\rank}{\textrm{rank}} \newcommand{\row}{\textrm{row}} \newcommand{\col}{\textrm{col}} \newcommand{\null}{\textrm{null}} \newcommand{\nullity}{\textrm{nullity}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \renewcommand{\Arg}{\operatorname{Arg}} \renewcommand{\arg}{\operatorname{arg}} \newcommand{\adj}{\textrm{adj}} \newcommand{\mystack}[2]{\genfrac{}{}{0}{0}{#1}{#2}} \newcommand{\mystackthree}[3]{\mystack{\mystack{#1}{#2}}{#3}} \newcommand{\qimplies}{\quad\implies\quad} \newcommand{\qtext}[1]{\quad\text{#1}\quad} \newcommand{\qqtext}[1]{\qquad\text{#1}\qquad} \newcommand{\smalltext}[1]{{\small\text{#1}}} \newcommand{\svec}[1]{\,\vec{#1}} \newcommand{\querytext}[1]{\toggle{\blue{\text{?}}\vphantom{\text{#1}}}{\text{#1}}\endtoggle} \newcommand{\query}[1]{\toggle{\blue{\text{?}}\vphantom{#1}}{#1}\endtoggle} \newcommand{\smallquery}[1]{\toggle{\blue{\text{?}}}{#1}\endtoggle} \newcommand{\bv}{\mathbf{v}} \newcommand{\cyc}[2]{\cssId{#1}{\style{visibility:hidden}{#2}}} $

Announcements:

Today we start Section 4.2. Continue reading Section 4.2 for next class. Work through suggested exercises.

Homework 8 is on WeBWorK and is due Friday at 11:55pm.

Math Help Centre: M-F 12:30-5:30 in PAB48/49 and online 6pm-8pm.

My next office hour is Friday 2:30-3:20 in MC130.

Partial summary of Section 4.1: Eigenvalues and eigenvectors

Definition: Let $A$ be an $\red{n} \times \red{n}$ matrix. A scalar $\lambda$ (lambda) is called an eigenvalue of $A$ if there is a nonzero vector $\vx$ such that $A \vx = \lambda \vx$. Such a vector $\vx$ is called an eigenvector of $A$ corresponding to $\lambda$.

Question: Why do we only consider square matrices here?

Example 25-1: Since $$ \bmat{rr} 1 & 2 \\ 2 & -2 \emat \coll 2 1 = \coll 4 2 = 2 \coll 2 1 , $$ we see that $2$ is an eigenvalue of $\bmat{rr} 1 & 2 \\ 2 & -2 \emat$ with eigenvector $\coll 2 1$.

In general, the eigenvectors for a given eigenvalue $\lambda$ are the nonzero solutions to $(A - \lambda I) \vx = \vec 0$.

Definition: Let $A$ be an $n \times n$ matrix and let $\lambda$ be an eigenvalue of $A$. The collection of all eigenvectors corresponding to $\lambda$, together with the zero vector, is a subspace called the eigenspace of $\lambda$ and is denoted $E_\lambda$. In other words, $$ E_\lambda = \null(A - \lambda I) . $$
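For instance, for the matrix in Example 25-1, here is a quick sketch (not the board work) of finding the eigenspace for $\lambda = 2$: $$ \kern-6ex A - 2I = \bmat{rr} -1 & 2 \\ 2 & -4 \emat \lra{R_2 + 2 R_1} \bmat{rr} -1 & 2 \\ 0 & 0 \emat , \qtext{so} E_2 = \null(A - 2I) = \textrm{span}\left( \coll 2 1 \right) . $$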

We worked out many examples, and used an applet to understand the geometry.

Finding eigenvalues

Given a specific number $\lambda$, we know how to check whether $\lambda$ is an eigenvalue: we check whether $A - \lambda I$ has a nontrivial null space. (And we can find the eigenvectors by finding the null space.)

By the fundamental theorem of invertible matrices, $A - \lambda I$ has a nontrivial null space if and only if it is not invertible. For $2 \times 2$ matrices, we can check invertibility using the determinant!

Example: Find all eigenvalues of $A = \bmat{rr} 1 & 2 \\ 2 & -2 \emat$.

Solution: We need to find all $\lambda$ such that $\det(A-\lambda I) = 0$. $$ \kern-6ex \begin{aligned} \det(A-\lambda I) &= \det \bmat{cc} 1-\lambda & 2 \\ 2 & -2-\lambda \emat \\ &= (1-\lambda)(-2-\lambda)-4 = \lambda^2 + \lambda - 6 , \end{aligned} $$ so we need to solve the quadratic equation $\lambda^2 + \lambda - 6 = 0$. This can be factored as $(\lambda - 2)(\lambda + 3) = 0$, and so $\lambda = 2$ or $\lambda = -3$ are the eigenvalues.
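As a quick check (a sketch, not part of the solution above), the eigenvectors for $\lambda = -3$ are the nonzero vectors in $\null(A + 3I)$: $$ \kern-6ex A + 3I = \bmat{rr} 4 & 2 \\ 2 & 1 \emat , \qquad 4 x_1 + 2 x_2 = 0 \qimplies \vx = t \coll 1 {-2} , \qtext{and indeed} \bmat{rr} 1 & 2 \\ 2 & -2 \emat \coll 1 {-2} = \coll {-3} 6 = -3 \coll 1 {-2} . $$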

To handle larger matrices, we need to learn about their determinants.

New material: Section 4.2: Determinants

Recall that we defined the determinant of a $2 \times 2$ matrix $A = \bmat{rr} a & b \\ c& d \emat$ by $ \det A = a d - b c $. We also write this as $$ \kern-6ex \det A = |A| = \bdmat{rr} a & b \\ c& d \edmat = a d - b c . $$

For a $3 \times 3$ matrix $A$, we define $$ \kern-8.5ex \det A = \bdmat{rrr} \red{a_{11}} & \red{a_{12}} & \red{a_{13}} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \edmat = \red{a_{11}} \bdmat{rr} a_{22} & a_{23} \\ a_{32} & a_{33} \edmat \boldsymbol{\green{-}}\ \red{a_{12}} \bdmat{rr} a_{21} & a_{23} \\ a_{31} & a_{33} \edmat + \red{a_{13}} \bdmat{rr} a_{21} & a_{22} \\ a_{31} & a_{32} \edmat $$ If we write $A_{ij}$ for the matrix obtained from $A$ by deleting the $i$th row and the $j$th column, then this can be written $$ \kern-8.5ex \det A = a_{11} \det A_{11} - a_{12} \det A_{12} + a_{13} \det A_{13} = \sum_{j=1}^{3} (-1)^{1+j} \, a_{1j} \det A_{1j} $$ We call $\det A_{ij}$ the $(i,j)$-minor of $A$.
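For example (a small numeric instance, not Example 26-1 from the board): $$ \kern-8.5ex \bdmat{rrr} 2 & 0 & 1 \\ 3 & -1 & 2 \\ 0 & 4 & 1 \edmat = 2 \bdmat{rr} -1 & 2 \\ 4 & 1 \edmat - 0 \bdmat{rr} 3 & 2 \\ 0 & 1 \edmat + 1 \bdmat{rr} 3 & -1 \\ 0 & 4 \edmat = 2(-1-8) - 0 + 1(12-0) = -6 . $$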

Example 26-1: On board.

Example 4.9 in the book shows another method that doesn't generalize to larger matrices.

Determinants of $n \times n$ matrices

Definition: Let $A = [a_{ij}]$ be an $n \times n$ matrix. Then the determinant of $A$ is the scalar $$ \kern-8.5ex \begin{aligned} \det A = |A|\ &= a_{11} \det A_{11} - a_{12} \det A_{12} + \cdots + (-1)^{1+n} a_{1n} \det A_{1n} \\ &= \sum_{j=1}^{n} (-1)^{1+j} \, a_{1j} \det A_{1j} . \end{aligned} $$

This is a recursive definition!

Example 26-2: $A = \bdmat{rrrr} 2 & 0 & 1 & 0 \\ 3 & 1 & 0 & 0 \\ 1 & 0 & 2 & 3 \\ 2 & 0 & 4 & 5 \edmat$, on board.
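For reference, one way to organize the expansion along the first row (a sketch; the board computation may be arranged differently): $$ \kern-8.5ex \begin{aligned} \det A &= 2 \bdmat{rrr} 1 & 0 & 0 \\ 0 & 2 & 3 \\ 0 & 4 & 5 \edmat - 0 \det A_{12} + 1 \bdmat{rrr} 3 & 1 & 0 \\ 1 & 0 & 3 \\ 2 & 0 & 5 \edmat - 0 \det A_{14} \\ &= 2 \bigl( 1 (10 - 12) \bigr) + 1 \bigl( 3(0) - 1(5 - 6) + 0 \bigr) = 2(-2) + 1(1) = -3 . \end{aligned} $$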

The computation can be very long if there aren't many zeros! We'll learn some better methods.

Note that if we define the determinant of a $1 \times 1$ matrix $A = [a]$ to be $a$, then the general definition works in the $2 \times 2$ case as well.

It will make the notation simpler if we define the $(i,j)$-cofactor of $A$ to be $$ C_{ij} = (-1)^{i+j} \det A_{ij} . $$ Then the definition above says $$ \det A = \sum_{j=1}^{n} \, a_{1j} C_{1j} . $$ This is called the cofactor expansion along the first row. It turns out that any row or column works!

Theorem 4.1 (The Laplace Expansion Theorem): Let $A$ be any $n \times n$ matrix. Then for each $i$ we have $$ \kern-6ex \det A = a_{i1} C_{i1} + a_{i2} C_{i2} + \cdots + a_{in} C_{in} = \sum_{j=1}^{n} \, a_{ij} C_{ij} $$ (cofactor expansion along the $i$th row). And for each $j$ we have $$ \kern-6ex \det A = a_{1j} C_{1j} + a_{2j} C_{2j} + \cdots + a_{nj} C_{nj} = \sum_{i=1}^{n} \, a_{ij} C_{ij} $$ (cofactor expansion along the $j$th column).

The book proves this result at the end of this section, but we won't cover the proof.

The signs in the cofactor expansion form a checkerboard pattern: $$ \bmat{ccccc} + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \emat $$

Example 26-3: Redo the previous $4 \times 4$ example, saving work by expanding along the second column. On board. Note that the $+/-$ pattern used inside the resulting $3 \times 3$ determinant starts over; it is not inherited from the entries' positions in the original matrix.
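For comparison, here is a sketch of the second-column expansion: only the $a_{22}$ term survives, so $$ \kern-8.5ex \det A = -\,0 \det A_{12} + 1 \det A_{22} - 0 \det A_{32} + 0 \det A_{42} = \bdmat{rrr} 2 & 1 & 0 \\ 1 & 2 & 3 \\ 2 & 4 & 5 \edmat = 2(10 - 12) - 1(5 - 6) + 0 = -3 , $$ agreeing with the first-row expansion.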

Example 26-4: A $4 \times 4$ triangular matrix, on board.

A triangular matrix is a square matrix whose entries below the diagonal are all zero (upper triangular) or whose entries above the diagonal are all zero (lower triangular).

Theorem 4.2: If $A$ is triangular, then $\det A$ is the product of the diagonal entries.
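A quick sketch of why this works for an upper triangular matrix: the only nonzero entry in the first column is $a_{11}$, so expanding along the first column peels off one diagonal entry at a time. For example, $$ \kern-6ex \bdmat{rrr} 2 & 7 & 3 \\ 0 & -1 & 5 \\ 0 & 0 & 4 \edmat = 2 \bdmat{rr} -1 & 5 \\ 0 & 4 \edmat = 2 \bigl( (-1)(4) - (5)(0) \bigr) = (2)(-1)(4) = -8 . $$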

Better methods

Laplace Expansion is convenient when there are appropriately placed zeros in the matrix, but it is not good in general. It produces $n!$ different terms: expanding along a row gives $n$ determinants of size $(n-1) \times (n-1)$, each of those gives $n-1$ determinants of size $(n-2) \times (n-2)$, and so on, for a total of $n(n-1)(n-2)\cdots(2)(1) = n!$ products. A supercomputer would require roughly $10^{30}$ times the age of the universe just to compute a $50 \times 50$ determinant in this way. And that's a puny determinant for real-world applications.

So how do we do better? Like always, we turn to row reduction! These properties will be what we need:

Theorem 4.3: Let $A$ be a square matrix.

a. If $A$ has a zero row, then $\det A = 0$.
b. If $B$ is obtained from $A$ by interchanging two rows, then $\det B = - \det A$.
c. If $A$ has two identical rows, then $\det A = 0$.
d. If $B$ is obtained from $A$ by multiplying a row of $A$ by $k$, then $\det B = k \det A$.
e. If $A$, $B$ and $C$ are identical in all rows except the $i$th row, and the $i$th row of $C$ is the sum of the $i$th rows of $A$ and $B$, then $\det C = \det A + \det B$.
f. If $C$ is obtained from $A$ by adding a multiple of one row to another, then $\det C = \det A$.

All of the above statements are true with rows replaced by columns.

Explain verbally, making use of: $$ \kern-6ex \det A = a_{i1} C_{i1} + a_{i2} C_{i2} + \cdots + a_{in} C_{in} = \sum_{j=1}^{n} \, a_{ij} C_{ij} $$ The following will help explain how (f) follows from (c), (d) and (e): $$ \kern-8ex A = \collll {\vr_1}{\vr_2}{\vr_3}{\vr_4},\quad B = \collll {\vr_1}{5 \vr_4}{\vr_3}{\vr_4},\quad B' = \collll {\vr_1}{\vr_4}{\vr_3}{\vr_4},\quad C = \ccollll {\vr_1}{\vr_2 + 5 \vr_4}{\vr_3}{\vr_4} $$ $$ \kern-8.5ex \det C = \det A + \det B = \det A + 5 \det B' = \det A + 5 (0) = \det A $$
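A quick $2 \times 2$ check of (b) and (d) (a sketch, not from the lecture): $$ \kern-6ex \bdmat{rr} c & d \\ a & b \edmat = cb - da = -(ad - bc) = - \bdmat{rr} a & b \\ c & d \edmat , \qquad \bdmat{rr} ka & kb \\ c & d \edmat = kad - kbc = k \bdmat{rr} a & b \\ c & d \edmat . $$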

Statements (b), (d) and (f) are the ones that are useful for understanding how the three row operations change the determinant.

Example: Use row operations to compute $\det A$ by reducing to triangular form, where $A = \bmat{rrrr} 2 & 4 & 6 & 8 \\ 1 & 4 & 1 & 2 \\ 2 & 2 & 12 & 8 \\ 1 & 2 & 3 & 9 \emat^\strut$.

Solution: $$ \kern-8ex \begin{aligned} \bmat{rrrr} 2 & \phm 4 & \phm 6 & \phm 8 \\ 1 & 4 & 1 & 2 \\ 2 & 2 & 12 & 8 \\ 1 & 2 & 3 & 9 \emat &\lra{(1/2)R_1} \bmat{rrrr} 1 & \phm 2 & \phm 3 & \phm 4 \\ 1 & 4 & 1 & 2 \\ 2 & 2 & 12 & 8 \\ 1 & 2 & 3 & 9 \emat \\ \lra{\mystackthree{\scriptstyle R_2 - R_1}{\scriptstyle R_3 - 2R_1}{\scriptstyle R_4 - R_1}} \bmat{rrrr} 1 & 2 & 3 & 4 \\ 0 & 2 & -2 & -2 \\ 0 & -2 & 6 & 0 \\ 0 & 0 & 0 & 5 \emat &\lra{\,R_3 + R_2\,} \bmat{rrrr} 1 & \phm 2 & 3 & 4 \\ 0 & 2 & -2 & -2 \\ 0 & 0 & 4 & -2 \\ 0 & 0 & 0 & 5 \emat \end{aligned} $$ The determinant of the last matrix is $(1)(2)(4)(5) = 40$. The only step that changed the determinant was scaling $R_1$ by $1/2$, so the determinant of the original matrix is $\query{2 \cdot 40 = 80}$.