lecture_15
Theorem: row/column operations and determinants

Let $A$ be an $n\times n$ matrix, let $c$ be a scalar and let $i\ne j$.

Notation: $A_{Ri\to x}$ means $A$ with row $i$ replaced by $x$, and $A_{Ri\leftrightarrow Rj}$ means $A$ with rows $i$ and $j$ swapped.

  1. $\det(A_{Ri\leftrightarrow Rj})=-\det(A)$ (swapping two rows changes the sign of the determinant).
  2. $\det(A_{Ri\to c Ri}) = c\det(A)$ (scaling one row by $c$ scales $\det(A)$ by the same factor).
  3. $\det(A_{Ri\to Ri + c Rj}) = \det(A)$ (adding a multiple of one row to another row doesn't change $\det(A)$).
  • All of these properties also hold with “row” replaced by “column” throughout.
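These identities can be checked numerically. The following Python/NumPy sketch does so for one arbitrary choice of $A$, $c$, $i$ and $j$ (the matrix and the variable names are illustrative choices, not from the lecture; NumPy numbers rows from 0):

```python
import numpy as np

# Arbitrary illustrative choices of A, c, i, j (rows are 0-indexed in NumPy).
A = np.array([[1., 3., 1.],
              [4., 8., 0.],
              [0., 1., 3.]])
c, i, j = 5.0, 0, 2

# 1. Swapping rows i and j changes the sign of the determinant.
A_swap = A.copy()
A_swap[[i, j]] = A_swap[[j, i]]
print(np.isclose(np.linalg.det(A_swap), -np.linalg.det(A)))      # True

# 2. Scaling row i by c scales the determinant by c.
A_scale = A.copy()
A_scale[i] *= c
print(np.isclose(np.linalg.det(A_scale), c * np.linalg.det(A)))  # True

# 3. Adding c times row j to row i leaves the determinant unchanged.
A_add = A.copy()
A_add[i] += c * A_add[j]
print(np.isclose(np.linalg.det(A_add), np.linalg.det(A)))        # True
```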

Corollary

If an $n\times n$ matrix $A$ has two equal rows (or columns), then $\det(A)=0$, and $A$ is not invertible.

Proof

If $A$ has two equal rows, say row $i$ and row $j$ with $i\ne j$, then $A=A_{Ri\leftrightarrow Rj}$. So $\det(A)=\det(A_{Ri\leftrightarrow Rj}) = -\det(A)$, hence $2\det(A)=0$, so $\det(A)=0$.

If $A$ has two equal columns, then $A^T$ has two equal rows, so $\det(A)=\det(A^T)=0$.

In either case, $\det(A)=0$. Since a square matrix is invertible if and only if its determinant is non-zero, $A$ is not invertible.■
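As a quick numerical illustration of the corollary, here is a short Python/NumPy sketch (the matrix is an arbitrary example) in which the first and third rows are equal; the determinant is zero and NumPy refuses to invert the matrix:

```python
import numpy as np

# Arbitrary example: row 0 equals row 2.
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [1., 2., 3.]])

print(np.isclose(np.linalg.det(A), 0.0))   # True
try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print("not invertible:", err)          # Singular matrix
```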

Examples

  • Swapping two rows changes the sign, so $\def\vm#1{\begin{vmatrix}#1\end{vmatrix}}\vm{0&0&2\\0&3&0\\4&0&0} = -\vm{4&0&0\\0&3&0\\0&0&2}=-4\cdot 3\cdot 2 = -24$.
  • Multiplying a row or a column by a constant multiplies the determinant by that constant, so \begin{align*}\vm{ 2&4&6&10\\5&0&0&-10\\9&0&81&99\\1&2&3&4} &= 2\vm{ 1&2&3&5\\5&0&0&-10\\9&0&81&99\\1&2&3&4} \\&= 2\cdot 5\vm{ 1&2&3&5\\1&0&0&-2\\9&0&81&99\\1&2&3&4}\\&= 2\cdot 5\cdot 9 \vm{ 1&2&3&5\\1&0&0&-2\\1&0&9&11\\1&2&3&4}\\&=2\cdot 5\cdot 9\cdot 2 \vm{ 1&1&3&5\\1&0&0&-2\\1&0&9&11\\1&1&3&4}\\&=2\cdot 5\cdot 9\cdot 2\cdot 3 \vm{ 1&1&1&5\\1&0&0&-2\\1&0&3&11\\1&1&1&4}.\end{align*}
  • $\det(A_{R1\to R1-R4})=\det(A)$, so \begin{align*}\vm{ 1&1&1&5\\1&0&0&-2\\1&0&3&11\\1&1&1&4} &=\vm{ 0&0&0&1\\1&0&0&-2\\1&0&3&11\\1&1&1&4}=-1\vm{1&0&0\\1&0&3\\1&1&1}+0\\&=-\vm{0&3\\1&1} = -(-3)=3.\end{align*}
  • Hence \begin{align*}\vm{ 2&4&6&10\\5&0&0&-10\\9&0&81&99\\1&2&3&4} &= 2\cdot 5\cdot 9\cdot 2\cdot 3 \vm{ 1&1&1&5\\1&0&0&-2\\1&0&3&11\\1&1&1&4} \\&= 2\cdot 5\cdot 9\cdot 2\cdot 3 \cdot 3 = 1620.\end{align*}
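The two values computed above can be double-checked numerically; the following Python/NumPy sketch (illustrative only) confirms $-24$ and $1620$:

```python
import numpy as np

B = np.array([[0, 0, 2],
              [0, 3, 0],
              [4, 0, 0]], dtype=float)
print(round(np.linalg.det(B)))    # -24

M = np.array([[2, 4,  6,  10],
              [5, 0,  0, -10],
              [9, 0, 81,  99],
              [1, 2,  3,   4]], dtype=float)
print(round(np.linalg.det(M)))    # 1620
```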

Corollary

If $\def\row{\text{row}}\row_j(A)=c\cdot \row_i(A)$ for some $i\ne j$ and some $c\in \mathbb{R}$, then $\det(A)=0$.

Proof

Note that $\row_j(A)-c \cdot\row_i(A)=0$. So $A_{Rj\to Rj-c\,Ri}$ has a zero row, and by Laplace expansion along this row we obtain $\det(A_{Rj\to Rj-c\,Ri})=0$. So $\det(A)=\det(A_{Rj\to Rj-c\,Ri})=0$.■
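The same idea can be seen numerically. In the Python/NumPy sketch below (an arbitrary example, with rows 0-indexed as in NumPy), the third row is $3$ times the first, the type-3 ERO produces a zero row, and the determinant is zero:

```python
import numpy as np

# Arbitrary example: row 2 is 3 times row 0 (0-indexed rows).
A = np.array([[1., 2., -1.],
              [0., 4.,  5.],
              [3., 6., -3.]])
c, i, j = 3.0, 0, 2

B = A.copy()
B[j] -= c * B[i]                            # the ERO Rj -> Rj - c*Ri gives a zero row
print(B[j])                                 # [0. 0. 0.]
print(np.isclose(np.linalg.det(A), 0.0))    # True
```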

The effect of EROs on the determinant

We have now seen the effect of each of the three types of ERO on the determinant of a matrix:

  1. swapping two rows of the matrix multiplies the determinant by $-1$. By swapping rows repeatedly, we are able to shuffle the rows in an arbitrary fashion, and the determinant will either remain unchanged (if we used an even number of swaps) or be multiplied by $-1$ (if we used an odd number of swaps).
  2. multiplying one of the rows of the matrix by $c\in \mathbb{R}$ multiplies the determinant by $c$; and
  3. replacing row $j$ by “row $j$ ${}+{}$ $c\times {}$ (row $i$)”, where $c$ is a non-zero real number and $i\ne j$, does not change the determinant.

Moreover, since $\det(A)=\det(A^T)$, this all applies equally to columns instead of rows.

We can use EROs to put a matrix into upper triangular form, and then finding the determinant is easy: just multiply the diagonal entries together. We just have to keep track of how the determinant is changed by the EROs of types 1 and 2.
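This procedure is easy to turn into an algorithm. The Python/NumPy function below is a minimal sketch (the name det_by_eros and its structure are illustrative choices, not from the lecture): it uses only row swaps and type-3 EROs, tracks the sign changes caused by the swaps, and multiplies the diagonal entries of the resulting upper triangular matrix.

```python
import numpy as np

def det_by_eros(A):
    """Determinant via row reduction to upper triangular form (sketch)."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        # Find a non-zero pivot in column k at or below row k.
        pivot = next((r for r in range(k, n) if U[r, k] != 0), None)
        if pivot is None:
            return 0.0                          # no pivot: determinant is 0
        if pivot != k:
            U[[k, pivot]] = U[[pivot, k]]       # type-1 ERO: swap rows...
            sign = -sign                        # ...which flips the sign
        for r in range(k + 1, n):
            U[r] -= (U[r, k] / U[k, k]) * U[k]  # type-3 ERO: det unchanged
    return sign * np.prod(np.diag(U))           # product of diagonal entries

# Check against the first example above: one swap, then diagonal 4, 3, 2.
print(det_by_eros([[0, 0, 2],
                   [0, 3, 0],
                   [4, 0, 0]]))                 # -24.0
```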

Example: using EROs to find the determinant

\begin{align*}\def\vm#1{\begin{vmatrix}#1\end{vmatrix}} \vm{1&3&1&3\\\color{red}4&\color{red}8&\color{red}0&\color{red}{12}\\0&1&3&6\\2&2&1&6}&= \color{red}{4}\vm{1&3&1&\color{blue}3\\1&2&0&\color{blue}3\\0&1&3&\color{blue}6\\2&2&1&\color{blue}6}\\&=4\cdot \color{blue}3\vm{\color{green}1&3&1&1\\\color{red}1&2&0&1\\\color{red}0&1&3&2\\\color{red}2&2&1&2} \\&=12\vm{1&3&1&1\\\color{blue}0&\color{blue}{-1}&\color{blue}{-1}&\color{blue}{0}\\\color{blue}0&\color{blue}1&\color{blue}3&\color{blue}2\\0&-4&-1&0} \\&=\color{blue}{-}12\vm{1&3&1&1\\0&\color{green}1&3&2\\0&\color{red}{-1}&{-1}&{0}\\0&\color{red}{-4}&-1&0} \\&=-12\vm{1&3&1&1\\0&1&3&2\\0&0&\color{green}2&2\\0&0&\color{red}{11}&8} \\&=-12\vm{1&3&1&1\\0&1&3&2\\0&0&2&2\\0&0&0&-3} \\&=-12(1)(1)(2)(-3)=72. \end{align*}
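A quick numerical cross-check of this computation (illustrative only):

```python
import numpy as np

A = np.array([[1, 3, 1,  3],
              [4, 8, 0, 12],
              [0, 1, 3,  6],
              [2, 2, 1,  6]], dtype=float)
print(round(np.linalg.det(A)))   # 72
```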

Finding the inverse of an invertible $n\times n$ matrix

Definition: the adjoint of a square matrix

Let $A$ be an $n\times n$ matrix. Recall that $C_{ij}$ is the $(i,j)$ cofactor of $A$. The matrix of cofactors of $A$ is the $n\times n$ matrix $C$ whose $(i,j)$ entry is $C_{ij}$.

The adjoint of $A$ is the $n\times n$ matrix $J=C^T$, the transpose of the matrix of cofactors.

Example: $n=2$

If $A=\def\mat#1{\begin{bmatrix}#1\end{bmatrix}}\def\vm#1{\begin{vmatrix}#1\end{vmatrix}}\mat{1&2\\3&4}$, then $C_{11}=+4$, $C_{12}=-3$, $C_{21}=-2$, $C_{22}=+1$. So the matrix of cofactors is $C=\mat{4&-3\\-2&1}$, so the adjoint of $A$ is $J=C^T=\mat{4&-2\\-3&1}$.
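For illustration, here is a short Python/NumPy sketch that computes the matrix of cofactors and the adjoint for this example (the helper name cofactor_matrix is a hypothetical choice; NumPy indexes from 0, but the sign $(-1)^{i+j}$ is unaffected since the parity of $i+j$ is unchanged):

```python
import numpy as np

def cofactor_matrix(A):
    """Matrix of cofactors C with C[i, j] = (-1)**(i+j) * det(minor(i, j))."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

A = np.array([[1., 2.],
              [3., 4.]])
C = cofactor_matrix(A)   # [[ 4. -3.]  [-2.  1.]]
J = C.T                  # adjoint:  [[ 4. -2.]  [-3.  1.]]
print(C)
print(J)
```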
