Chapter 2: The algebra of matrices
Definition
An $n\times m$ matrix is a grid of numbers with $n$ rows and $m$ columns: \[ A=\begin{bmatrix}a_{11}&a_{12}&\dots&a_{1m}\\a_{21}&a_{22}&\dots&a_{2m}\\\vdots&\vdots&&\vdots\\a_{n1}&a_{n2}&\dots&a_{nm}\end{bmatrix}\]
The $(i,j)$ entry of a matrix $A$ is $a_{ij}$, the number in row $i$ and column $j$ of $A$.
Examples
- If $B=\begin{bmatrix} 99&3&5\\7&-20&14\end{bmatrix}$, then $B$ is a $2\times 3$ matrix, and the $(1,1)$ entry of $B$ is $b_{11}=99$, the $(1,3)$ entry of $B$ is $b_{13}=5$, the $(2,1)$ entry is $b_{21}=7$, etc.
- $\begin{bmatrix}3\\2\\4\\0\\-1\end{bmatrix}$ is a $5\times 1$ matrix. A matrix like this with one column is called a column vector.
- $\begin{bmatrix}3&2&4&0&-1\end{bmatrix}$ is a $1\times 5$ matrix. A matrix like this with one row is called a row vector.
Even though the row matrix and the column matrix above have the same entries, they have a different “shape”, or “size”, so we must think of them as being different matrices. Let's give the definitions to make this precise.
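As an aside, if you'd like to experiment on a computer, here is a short NumPy sketch of these ideas (note that NumPy indexes entries from $0$, while our $(i,j)$ notation starts at $1$).

```python
import numpy as np

B = np.array([[99, 3, 5],
              [7, -20, 14]])

print(B.shape)   # (2, 3): B has 2 rows and 3 columns
print(B[0, 0])   # 99: NumPy is 0-based, so B[0, 0] is the (1,1) entry b_11
print(B[0, 2])   # 5: the (1,3) entry b_13

row = np.array([[3, 2, 4, 0, -1]])   # a 1x5 row vector
col = row.T                          # a 5x1 column vector
print(row.shape, col.shape)          # (1, 5) (5, 1): same entries, different sizes
```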
Definition
Two matrices $A$ and $B$ have the same size if they have the same number of rows, and they have the same number of columns.
If two matrices do not have the same size, we say they have different sizes.
Definition
Two matrices $A$ and $B$ are said to be equal if both of the following conditions hold:
- $A$ and $B$ have the same size; and
- every entry of $A$ is equal to the corresponding entry of $B$; in other words, for every $(i,j)$ so that $A$ and $B$ have an $(i,j)$ entry, we have $a_{ij}=b_{ij}$.
When $A$ and $B$ are equal matrices, we write $A=B$. Otherwise, we write $A\ne B$.
Examples
- $\begin{bmatrix}3\\2\\4\\0\\-1\end{bmatrix}\ne \begin{bmatrix}3&2&4&0&-1\end{bmatrix}$, since these matrices have different sizes: the first is $5\times 1$ but the second is $1\times 5$.
- $\begin{bmatrix}1\\2\end{bmatrix}\ne\begin{bmatrix}1 &0\\2&0\end{bmatrix}$ since these matrices are not the same size.
- $\begin{bmatrix}1&0\\0&1\end{bmatrix}\ne \begin{bmatrix}1&0\\1&0\end{bmatrix}$ because even though they have the same size, the $(2,1)$ entries are different.
- If $\begin{bmatrix}3x&7y+2\\8z-3&w^2\end{bmatrix}=\begin{bmatrix}1&2z\\\sqrt2&9\end{bmatrix}$ then we know that all the corresponding entries are equal, so we get four equations:\begin{align*}3x&=1\\7y+2&=2z\\8z-3&=\sqrt2\\w^2&=9\end{align*}
Definition of matrix multiplication
If $A$ is an $n\times m$ matrix and $B$ is an $m\times k$ matrix, then the product $AB$ is the $n\times k$ matrix whose $(i,j)$ entry is the row-column product of the $i$th row of $A$ with the $j$th column of $B$. That is: \[ (AB)_{i,j} = \text{row}_i(A)\cdot \text{col}_j(B).\]
If we want to emphasize that we are multiplying matrices in this way, we might sometimes write $A\cdot B$ instead of $AB$.
If $A$ is an $n\times m$ matrix and $B$ is an $\ell\times k$ matrix with $m\ne \ell$, then the matrix product $AB$ is undefined.
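If you'd like to see this definition in action on a computer, here is a NumPy sketch: it builds each entry of $AB$ as a row-column product, exactly as in the definition, and checks the result against NumPy's built-in product (in practice you would simply write `A @ B`).

```python
import numpy as np

A = np.array([[1, 0, 5],
              [2, -1, 3]])   # 2x3
B = np.array([[1, 2],
              [3, 4],
              [5, 6]])       # 3x2

n, m = A.shape
m2, k = B.shape
assert m == m2, "AB is undefined unless the inner sizes match"

AB = np.zeros((n, k))
for i in range(n):
    for j in range(k):
        AB[i, j] = A[i, :] @ B[:, j]   # row_i(A) . col_j(B)

print(AB)      # [[26. 32.] [14. 18.]]
print(A @ B)   # NumPy's built-in product gives the same entries
```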
Examples
- If $\newcommand{\mat}[1]{\begin{bmatrix}#1\end{bmatrix}} A=\mat{1&0&5\\2&-1&3}$ and $B=\mat{1&2\\3&4\\5&6}$, then $AB=\mat{26&32\\14&18}$ and $BA=\mat{5&-2&11\\11&-4&27\\17&-6&43}$. Note that $AB$ and $BA$ are both defined, but $AB\ne BA$ since $AB$ and $BA$ don't even have the same size.
- If $A=\mat{1&2\\3&4\\5&6}$, $B=\mat{2&1&1\\1&2&0\\1&0&2\\2&2&1}$ and $C=\mat{1&3&0&7\\0&4&6&8}$, then $A$ is $3\times 2$, $B$ is $4\times 3$ and $C$ is $2\times 4$, so
- $AB$, $CA$ and $BC$ don't exist (i.e., they are undefined);
- $AC$ exists and is $3\times 4$;
- $BA$ exists and is $4\times 2$; and
- $CB$ exists and is $2\times 2$.
- In particular, $AB\ne BA$ and $AC\ne CA$ and $BC\ne CB$, since in each case one of the matrices doesn't exist.
- If $A=\mat{0&1\\0&0}$ and $B=\mat{0&0\\1&0}$, then $AB=\mat{1&0\\0&0}$ and $BA=\mat{0&0\\0&1}$. So $AB$ and $BA$ are both defined and have the same size, but they are not equal matrices: $AB\ne BA$.
- If $A=\mat{1&0\\0&0}$ and $B=\mat{0&0\\0&1}$, then $AB=\mat{0&0\\0&0}$ and $BA=\mat{0&0\\0&0}$. So $AB=BA$ in this case.
- If $A=0_{n\times n}$ is the $n\times n$ zero matrix and $B$ is any $n\times n$ matrix, then $AB=0_{n\times n}$ and $BA=0_{n\times n}$. So in this case, we do have $AB=BA$.
- If $A=\mat{1&2\\3&4}$ and $B=\mat{7&10\\15&22}$, then $AB=\mat{37&54\\81&118}=BA$, so $AB=BA$ for these particular matrices $A$ and $B$.
- If $A=\mat{1&2\\3&4}$ and $B=\mat{6&10\\15&22}$, then $AB=\mat{36&54\\78&118}$ and $BA= \mat{36&52\\81&118}$, so $AB\ne BA$.
Commuting matrices
We say that matrices $A$ and $B$ commute if $AB=BA$.
Which matrices commute? Suppose $A$ is an $n\times m$ matrix and $B$ is an $\ell\times k$ matrix, and $A$ and $B$ commute, i.e., $AB=BA$.
- $AB$ must be defined, so $m=\ell$
- $BA$ must be defined, so $k=n$
- $AB$ is an $n\times k$ matrix and $BA$ is an $\ell\times m$ matrix. Since $AB$ has the same size as $BA$, we must have $n=\ell$ and $k=m$.
Putting this together: we see that if $A$ and $B$ commute, then $A$ and $B$ must both be $n\times n$ matrices for some number $n$. In other words, they must be square matrices of the same size.
Examples 4, 5 and 6 above show that some square matrices $A$ and $B$ of the same size do commute. On the other hand, examples 3 and 7 show that square matrices of the same size need not commute.
Because it's not true in general that $AB=BA$, we say that matrix multiplication is not commutative.
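Here is a small computational sketch of these facts; the helper `commute` is our own name (not a library function) and packages the size checks above.

```python
import numpy as np

def commute(A, B):
    """True if AB and BA are both defined, have the same size, and are equal."""
    if A.shape[1] != B.shape[0] or B.shape[1] != A.shape[0]:
        return False   # one of AB, BA is undefined
    AB, BA = A @ B, B @ A
    return AB.shape == BA.shape and np.array_equal(AB, BA)

A = np.array([[1, 2], [3, 4]])
print(commute(A, A @ A))                    # True: A commutes with A^2 (example 6)
print(commute(np.array([[0, 1], [0, 0]]),
              np.array([[0, 0], [1, 0]])))  # False: example 3
```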
Definition of the $n\times n$ identity matrix
The $n\times n$ identity matrix is the $n\times n$ matrix $I_n$ with $1$s in every diagonal entry (that is, in the $(i,i)$ entry for every $i$ between $1$ and $n$), and $0$s in every other entry. So \[ I_n=\begin{bmatrix} 1&0&0&\dots&0\\0&1&0&\dots&0\\0&0&1&\dots&0\\\vdots & & &\ddots & \vdots\\0&0&0&\dots&1\end{bmatrix}.\]
Examples
- $I_1=[1]$
- $I_2=\mat{1&0\\0&1}$
- $I_3=\mat{1&0&0\\0&1&0\\0&0&1}$
- $I_4=\mat{1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1}$, and so on!
Proposition: properties of $I_n$
- $I_nA=A$ for any $n\times m$ matrix $A$;
- $AI_m=A$ for any $n\times m$ matrix $A$; and
- $I_nB=B=BI_n$ for any $n\times n$ matrix $B$. In particular, $I_n$ commutes with every other square $n\times n$ matrix $B$.
Proof of the proposition
1. We want to show that $I_nA=A$ for any $n\times m$ matrix $A$. These matrices have the same size: $I_n$ has size $n\times n$ and $A$ has size $n\times m$, so $I_n A$ has size $n\times m$ by the definition of matrix multiplication, which is the same as the size of $A$.
Note that $\text{row}_i(I_n)=[0~0~\dots~0~1~0~\dots~0]$, where the $1$ is in the $i$th place, by definition of the identity matrix $I_n$; and the $j$th column of $A$ is $\begin{bmatrix}a_{1j}\\a_{2j}\\\vdots\\a_{nj}\end{bmatrix}$. The $(i,j)$ entry of $I_nA$ is $\text{row}_i(I_n)\cdot \text{col}_j(A)$, by the definition of matrix multiplication, which is therefore \begin{align*} [0~0~\dots~0~1~0~\dots~0]\begin{bmatrix}a_{1j}\\a_{2j}\\\vdots\\a_{nj}\end{bmatrix} &= 0a_{1j}+0a_{2j}+\dots+0a_{i-1,j}+1a_{ij}+0a_{i+1,j}+\dots+0a_{nj} \\&= a_{ij}.\end{align*} So the matrices $I_nA$ and $A$ have the same size, and the same $(i,j)$ entries, for any $(i,j)$. So $I_nA=A$.
Proof of the proposition, continued
2. To show that $AI_m=A$ for any $n\times m$ matrix $A$ is similar to the first part of the proof; the details are left as an exercise.
3. If $B$ is any $n\times n$ matrix, then $I_nB=B$ by part 1 and $BI_n=B$ by part 2, so $I_nB=B=BI_n$. In particular, $I_nB=BI_n$ so $I_n$ commutes with $B$, for every square $n\times n$ matrix $B$. ■
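A quick numerical check of the proposition, using NumPy's `np.eye(n)` for $I_n$ (the particular $A$ is arbitrary):

```python
import numpy as np

A = np.array([[99, 3, 5],
              [7, -20, 14]])     # any 2x3 matrix
I2, I3 = np.eye(2), np.eye(3)

print(np.array_equal(I2 @ A, A))   # True: I_n A = A
print(np.array_equal(A @ I3, A))   # True: A I_m = A
```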
Algebraic properties of matrix multiplication
The associative law
Proposition: associativity of matrix multiplication
Matrix multiplication is associative. This means that $(AB)C=A(BC)$ whenever $A,B,C$ are matrices which can be multiplied together in this order.
We omit the proof, but this is not terribly difficult; it is a calculation in which you write down two formulae for the $(i,j)$ entries of $(AB)C$ and $A(BC)$, and carefully check they are equal using the fact that if $a,b,c$ are real numbers, then $(ab)c=a(bc)$.
Example
We saw above that $\newcommand{\m}[1]{\begin{bmatrix}#1\end{bmatrix}}A=\m{1&2\\3&4}$ commutes with $B=\m{7&10\\15&22}$. We can explain why this is so using associativity. You can check that $B=AA$ (which we usually write as $B=A^2$). Hence, using associativity at $\stackrel*=$, \[ AB=A(AA)\stackrel*=(AA)A=BA.\] The same argument for any square matrix $A$ gives a proof of:
Proposition
If $A$ is any square matrix, then $A$ commutes with $A^2$.■
The powers of a square matrix $A$ are defined by $A^1=A$, and $A^{k+1}=A(A^k)$ for $k\in \mathbb{N}$. Using mathematical induction, you can prove the following more general proposition.
Proposition: a square matrix commutes with its powers
If $A$ is any square matrix and $k\in\mathbb{N}$, then $A$ commutes with $A^k$.■
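You can check this numerically for small values of $k$; here is a sketch using NumPy's `matrix_power` (the particular $A$ is arbitrary):

```python
import numpy as np
from numpy.linalg import matrix_power

A = np.array([[1, 2], [3, 4]])
for k in range(1, 6):
    Ak = matrix_power(A, k)
    assert np.array_equal(A @ Ak, Ak @ A)   # A commutes with A^k
print("A commutes with A^1, ..., A^5")
```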
The distributive laws
Lemma: the distributive laws for row-column multiplication
- If $a$ is a $1\times m$ row vector and $b$ and $c$ are $m\times 1$ column vectors, then $a\cdot (b+c)=a\cdot b+a\cdot c$.
- If $b$ and $c$ are $1\times m$ row vectors and $a$ is an $m\times 1$ column vector, then $(b+c)\cdot a=b\cdot a+c\cdot a$.
The proof is an exercise (see tutorial worksheet 5).
Proposition: the distributive laws for matrix multiplication
If $A$ is an $n\times m$ matrix and $k\in\mathbb{N}$, then:
- $A(B+C)=AB+AC$ for any $m\times k$ matrices $B$ and $C$; and
- $(B+C)A=BA+CA$ for any $k\times n$ matrices $B$ and $C$.
In other words, $A(B+C)=AB+AC$ whenever the matrix products make sense, and similarly $(B+C)A=BA+CA$ whenever this makes sense.
Proof
1. First note that
- $B$ and $C$ are both $m\times k$, so $B+C$ is $m\times k$ by the definition of matrix addition;
- $A$ is $n\times m$ and $B+C$ is $m\times k$, so $A(B+C)$ is $n\times k$ by the definition of matrix multiplication;
- $AB$ and $AC$ are both $n\times k$ by the definition of matrix multiplication
- so $AB+AC$ is $n\times k$ by the definition of matrix addition.
So we have (rather long-windedly) checked that $A(B+C)$ and $AB+AC$ have the same size.
By the Lemma above, the row-column product has the property that \[a\cdot (b+c)=a\cdot b+a\cdot c.\] So the $(i,j)$ entry of $A(B+C)$ is \begin{align*}\def\row{\text{row}}\def\col{\text{col}} \text{row}_i(A)\cdot \col_j(B+C) &= \text{row}_i(A)\cdot \big(\col_j(B)+\col_j(C)\big) \\ &= \text{row}_i(A)\cdot \col_j(B)+\row_i(A)\cdot\col_j(C).\end{align*} On the other hand,
- the $(i,j)$ entry of $AB$ is $\text{row}_i(A)\cdot \col_j(B)$; and
- the $(i,j)$ entry of $AC$ is $\row_i(A)\cdot\col_j(C)$;
- so the $(i,j)$ entry of $AB+AC$ is also $\text{row}_i(A)\cdot \col_j(B)+\row_i(A)\cdot\col_j(C)$.
So the entries of $A(B+C)$ and $AB+AC$ are all equal, so $A(B+C)=AB+AC$.
2. The proof is similar, and is left as an exercise.■
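A numerical spot-check of the first distributive law, with randomly chosen integer matrices of compatible sizes (a sanity check on one example, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (2, 3))   # n x m
B = rng.integers(-5, 5, (3, 4))   # m x k
C = rng.integers(-5, 5, (3, 4))   # m x k

print(np.array_equal(A @ (B + C), A @ B + A @ C))   # True
```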
Matrix equations
We've seen that a single linear equation can be written using row-column multiplication. For example, \[ 2x-3y+z=8\] can be written as \[ \def\m#1{\begin{bmatrix}#1\end{bmatrix}}\m{2&-3&1}\m{x\\y\\z}=8\] or \[ a\vec x=8\] where $a=\m{2&-3&1}$ and $\vec x=\m{x\\y\\z}$.
We can write a whole system of linear equations in a similar way, as a matrix equation using matrix multiplication. For example we can rewrite the linear system \begin{align*} 2x-3y+z&=8\\ y-z&=4\\x+y+z&=0\end{align*} as \[ \m{2&-3&1\\0&1&-1\\1&1&1}\m{x\\y\\z}=\m{8\\4\\0},\] or \[ A\vec x=\vec b\] where $A=\m{2&-3&1\\0&1&-1\\1&1&1}$, $\vec x=\m{x\\y\\z}$ and $\vec b=\m{8\\4\\0}$. (We write the little arrow above column vectors so that we don't confuse $\vec x$, a column vector of variables, with $x$, a single variable.)
More generally, any linear system \begin{align*} a_{11}x_1+a_{12}x_2+\dots+a_{1m}x_m&=b_1\\ a_{21}x_1+a_{22}x_2+\dots+a_{2m}x_m&=b_2\\ \hphantom{a_{11}}\vdots \hphantom{x_1+a_{22}}\vdots\hphantom{x_2+\dots+{}a_{nn}} \vdots\ & \hphantom{{}={}\!} \vdots\\ a_{n1}x_1+a_{n2}x_2+\dots+a_{nm}x_m&=b_n \end{align*} can be written in the form \[ A\vec x=\vec b\] where $A$ is the $n\times m $ matrix, called the coefficient matrix of the linear system, whose $(i,j)$ entry is $a_{ij}$ (the number in front of $x_j$ in the $i$th equation of the system) and $\vec x=\m{x_1\\x_2\\\vdots\\x_m}$, and $\vec b=\m{b_1\\b_2\\\vdots\\b_n}$.
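For instance, the $3\times 3$ system above can be set up and solved on a computer. The sketch below uses NumPy's `np.linalg.solve`; as we will see shortly, this routine requires the coefficient matrix to be invertible.

```python
import numpy as np

# The system 2x - 3y + z = 8, y - z = 4, x + y + z = 0, written as A x = b.
A = np.array([[2, -3, 1],
              [0, 1, -1],
              [1, 1, 1]])
b = np.array([8, 4, 0])

x = np.linalg.solve(A, b)      # the column vector of unknowns
print(x)
print(np.allclose(A @ x, b))   # True: A x really equals b
```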
More generally still, we might want to solve a matrix equation like \[AX=B\] where $A$, $X$ and $B$ are matrices of any size, with $A$ and $B$ fixed matrices and $X$ a matrix of unknown variables. Because of the definition of matrix multiplication, if $A$ is $n\times m$, we need $B$ to be $n\times k$ for some $k$, and then $X$ must be $m\times k$, so we know the size of any solution $X$. But which $m\times k$ matrices $X$ are solutions?
Example
If $A=\def\m#1{\begin{bmatrix}#1\end{bmatrix}}\m{1&0\\0&0}$ and $B=0_{2\times 3}$, then any solution $X$ to $AX=B$ must be $2\times 3$.
One solution is $X=0_{2\times 3}$, since in this case we have $AX=A0_{2\times 3}=0_{2\times 3}$.
However, this is not the only solution. For example, $X=\m{0&0&0\\1&2&3}$ is another solution, since in this case \[AX=\m{1&0\\0&0}\m{0&0&0\\1&2&3}=\m{0&0&0\\0&0&0}=0_{2\times 3}.\]
So from this example, we see that a matrix equation can have many solutions.
Invertibility
We've seen that solving matrix equations $AX=B$ is useful, since they generalise systems of linear equations.
How can we solve them?
Example
Take $A=\def\mat#1{\begin{bmatrix}#1\end{bmatrix}}\mat{2&4\\0&1}$ and $B=\mat{3&4\\5&6}$, so we want to find all matrices $X$ so that $AX=B$, or \[ \mat{2&4\\0&1}X=\mat{3&4\\5&6}.\] Note that $X$ must be a $2\times 2$ matrix for this to work, by the definition of matrix multiplication. So one way to solve this is to write $X=\mat{x_{11}&x_{12}\\x_{21}&x_{22}}$ and plug it in: \[\mat{2&4\\0&1}\mat{x_{11}&x_{12}\\x_{21}&x_{22}}=\mat{3&4\\5&6}\iff \mat{2x_{11}+4x_{21}&2 x_{12}+4x_{22}\\x_{21}&x_{22}}=\mat{3&4\\5&6}\] and then equate entries to get four linear equations: \begin{align*}2x_{11}+4x_{21}&=3\\2 x_{12}+4x_{22}&=4\\x_{21}&=5\\x_{22}&=6\end{align*} which we can solve in the usual way.
But this is a bit tedious! We will develop a slicker method by first thinking about solving ordinary equations $ax=b$ where $a,x,b$ are all numbers, or if you like, $1\times 1$ matrices.
Solving $ax=b$ and $AX=B$
If $a\ne0$, then solving $ax=b$ where $a,b,x$ are numbers is easy. We just divide both sides by $a$, or equivalently, we multiply both sides by $\tfrac1a$, to get the solution: $x=\tfrac1a\cdot b$.
Why does this work? If $x=\tfrac1a\cdot b$, then \begin{align*} ax&=a(\tfrac1a\cdot b)\\&=(a\cdot \tfrac1a)b\\&=1b\\&=b\end{align*} so $ax$ really is equal to $b$, and we do have a solution to $ax=b$.
What is special about $\tfrac1a$ which made this all work?
- we have $a\cdot \tfrac1a = 1$,
- and $1b = b$.
Now for an $n\times k$ matrix $B$, we know that the identity matrix $I_n$ does the same sort of thing as $1$ is doing in the relation $1b=b$: we have $I_nB=B$ for any $n\times k$ matrix $B$. So instead of $\tfrac1a$, we want to find a matrix $C$ with the property: $AC=I_n$. In fact, because matrix multiplication is not commutative, we also require that $CA=I_n$. It's then easy to argue that $X=C\cdot B$ is a solution to $AX=B$, since \begin{align*} AX&=A(CB)\\&=(AC)B\\&=I_nB\\&=B.\end{align*}
Example revisited
If $A=\mat{2&4\\0&1}$, then the matrix $C=\mat{\tfrac12&-2\\0&1}$ does have the property that \[ A C=I_2=C A.\] (You should check this!). So a solution to $AX=B$ where $B=\mat{3&4\\5&6}$ is $X=CB=\mat{\tfrac12&-2\\0&1}\mat{3&4\\5&6} = \mat{-8.5&-10\\5&6}$.
Notice that having found the matrix $C$, we can solve $AX=D$ easily for any $2\times 2$ matrix $D$: the answer is $X=CD$. This is quicker than having to solve four new linear equations using our more tedious method above.
Definition: invertible
An $n\times n$ matrix $A$ is invertible if there exists an $n\times n$ matrix $C$ so that \[ AC=I_n=C A.\] The matrix $C$ is called an inverse of $A$.
Examples
- $A=\mat{2&4\\0&1}$ is invertible, and the matrix $C=\mat{\tfrac12&-2\\0&1}$ is an inverse of $A$
- a $1\times 1$ matrix $A=[a]$ is invertible if and only if $a\ne0$, and if $a\ne0$ then an inverse of $A=[a]$ is $C=[\tfrac1a]$.
- $I_n$ is invertible for any $n$, since $I_n\cdot I_n=I_n=I_n\cdot I_n$, so an inverse of $I_n$ is $I_n$.
- $0_{n\times n}$ is not invertible for any $n$, since $0_{n\times n}\cdot C=0_{n\times n}$ for any $n\times n$ matrix $C$, so $0_{n\times n}\cdot C\ne I_n$.
- $A=\mat{1&0\\0&0}$ is not invertible, since for any $2\times 2$ matrix $C=\mat{a&b\\c&d}$ we have $AC=\mat{a&b\\0&0}$ which is not equal to $I_2=\mat{1&0\\0&1}$ since the $(2,2)$ entries are not equal.
- $A=\mat{1&2\\-3&-6}$ is not invertible. We'll see why later!
Proposition: uniqueness of the inverse
If $A$ is an invertible $n\times n$ matrix, then $A$ has a unique inverse.
Proof
Suppose $C$ and $C'$ are both inverses of $A$. Then $AC=I_n=CA$ and $AC'=I_n=C'A$. So \begin{align*} C&=CI_n\quad\text{by the properties of $I_n$}\\&=C(AC')\quad\mbox{because }AC'=I_n\\&=(CA)C'\quad\mbox{because matrix multiplication is associative}\\&=I_nC'\quad\mbox{because }CA=I_n\\&=C'\quad\text{by the properties of $I_n$}.\end{align*} So $C=C'$, whenever $C$ and $C'$ are inverses of $A$. So $A$ has a unique inverse. ■
Definition/notation: $A^{-1}$
If $A$ is an invertible $n\times n$ matrix, then the unique $n\times n$ matrix $C$ with $AC=I_n=CA$ is called the inverse of $A$. If $A$ is invertible, then we write $A^{-1}$ to mean the (unique) inverse of $A$.
If a matrix $A$ is not invertible, then $A^{-1}$ does not exist.
Warning
If $A$ is a matrix then $\frac 1A$ doesn't make sense! You should never write this down. In particular, $A^{-1}$ definitely doesn't mean $\frac 1A$.
Similarly, you should never write down $\frac AB$ where $A$ and $B$ are matrices. This doesn't make sense either!
Examples revisited
- $A=\mat{2&4\\0&1}$ has $A^{-1}=\mat{\tfrac12&-2\\0&1}$. In other words, $\mat{2&4\\0&1}^{-1}=\mat{\tfrac12&-2\\0&1}$.
- a $1\times 1$ matrix $A=[a]$ with $a\ne 0$ has $[a]^{-1}=[\tfrac1a]$.
- $I_n^{-1}=I_n$.
- $0_{n\times n}^{-1}$ does not exist
- $\mat{1&0\\0&0}^{-1}$ does not exist
- $\mat{1&2\\-3&-6}^{-1}$ does not exist
Proposition: solving $AX=B$ when $A$ is invertible
If $A$ is an invertible $n\times n$ matrix and $B$ is an $n\times k$ matrix, then the matrix equation \[ AX=B\] has a unique solution: $X=A^{-1}B$.
Proof
First we check that $X=A^{-1}B$ really is a solution to $AX=B$. To see this, note that if $X=A^{-1}B$, then \begin{align*} AX&=A(A^{-1}B)\\&=(AA^{-1})B\\&=I_n B \\&= B. \end{align*} Now we check that the solution is unique. If $X$ and $Y$ are both solutions, then $AX=B$ and $AY=B$, so \[AX=AY.\] Multiplying both sides on the left by $A^{-1}$, we get \[ A^{-1}AX=A^{-1}AY\implies I_nX=I_nY\implies X=Y.\] So any two solutions are equal, so $AX=B$ has a unique solution. ■
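In code, the proposition says that (when $A$ is invertible) solving $AX=B$ amounts to one inversion and one matrix product. Here is a sketch for the earlier example, using `np.linalg.inv`:

```python
import numpy as np

A = np.array([[2, 4],
              [0, 1]])
B = np.array([[3, 4],
              [5, 6]])

X = np.linalg.inv(A) @ B      # X = A^{-1} B, the unique solution of AX = B
print(X)                      # [[-8.5 -10. ] [ 5.   6. ]]
print(np.allclose(A @ X, B))  # True
```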
Corollary
If $A$ is an $n\times n$ matrix and there is a non-zero $n\times m$ matrix $K$ so that $AK=0_{n\times m}$, then $A$ is not invertible.
Proof
Since $A0_{n\times m}=0_{n\times m}$ and $AK=0_{n\times m}$, the equation $AX=0_{n\times m}$ has (at least) two solutions: $X=0_{n\times m}$ and $X=K$. Since $K$ is non-zero, these two solutions are different.
So there is not a unique solution to $AX=B$, for $B$ the zero matrix. If $A$ were invertible, this would contradict the uniqueness statement of the last Proposition. So $A$ cannot be invertible. ■
Examples
- We can now see why the matrix $\def\mat#1{\left[\begin{smallmatrix}#1\end{smallmatrix}\right]}A=\mat{1&2\\-3&-6}$ is not invertible. If $X=\mat{-2\\1}$ and $K=\mat{2\\-1}$, then $K$ is non-zero, but $AK=0_{2\times 1}$. So $A$ is not invertible, by the Corollary.
- $A=\mat{1&4&5\\2&5&7\\3&6&9}$ is not invertible, since $K=\mat{1\\1\\-1}$ is non-zero and $AK=0_{3\times 1}$.
$2\times 2$ matrices: determinants and invertibility
Question
Which $2\times 2$ matrices are invertible? For the invertible matrices, can we find their inverse?
Lemma
If $A=\mat{a&b\\c&d}$ and $J=\mat{d&-b\\-c&a}$, then we have \[ AJ=\delta I_2=JA\] where $\delta=ad-bc$.
Proof
This is a calculation (done in the lectures; you should also check it yourself). ■
Definition: the determinant of a $2\times 2$ matrix
The number $ad-bc$ is called the determinant of the $2\times 2$ matrix $A=\mat{a&b\\c&d}$. We write $\det(A)=ad-bc$ for this number.
Theorem: the determinant determines the invertibility (and inverse) of a $2\times 2$ matrix
Let $A=\mat{a&b\\c&d}$ be a $2\times 2$ matrix.
- $A$ is invertible if and only if $\det(A)\ne0$.
- If $A$ is invertible, then $A^{-1}=\frac{1}{\det(A)}\mat{d&-b\\-c&a}$.
Proof
If $A=0_{2\times 2}$, then $\det(A)=0$ and $A$ is not invertible. So the statement is true in this special case.
Now assume that $A\ne0_{2\times 2}$ and let $J=\mat{d&-b\\-c&a}$.
By the previous lemma, we have \[AJ=(\det(A))I_2=JA.\]
If $\det(A)\ne0$, then multiplying this equation through by the scalar $\frac1{\det(A)}$, we get \[ A\left(\frac1{\det(A)}J\right)=I_2=\left(\frac1{\det(A)}J\right) A,\] so if we write $B=\frac1{\det(A)}J$ to make this look simpler, then we obtain \[ AB=I_2=BA,\] so in this case $A$ is invertible with inverse $B=\frac1{\det(A)}J=\frac1{\det(A)}\mat{d&-b\\-c&a}$.
If $\det(A)=0$, then $AJ=0_{2\times 2}$ and $J\ne 0_{2\times2}$ (since $A\ne0_{2\times2}$, and $J$ is obtained from $A$ by swapping two entries and multiplying the others by $-1$). Hence by the previous corollary, $A$ is not invertible in this case. ■
Example
Let's solve the matrix equation $\def\mat#1{\begin{bmatrix}#1\end{bmatrix}}\mat{1&5\\3&-2}X=\mat{4&1&0\\0&2&1}$ for $X$.
Write $A=\mat{1&5\\3&-2}$. Then $\det(A)=1(-2)-5(3)=-2-15=-17$ which isn't zero, so $A$ is invertible. And $A^{-1}=\frac1{-17}\mat{-2&-5\\-3&1}=\frac1{17}\mat{2&5\\3&-1}$.
Hence the solution is $X=A^{-1}\mat{4&1&0\\0&2&1}=\frac1{17}\mat{2&5\\3&-1}\mat{4&1&0\\0&2&1}=\frac1{17}\mat{8&12&5\\12&1&-1}$.
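The $2\times 2$ inverse formula is easy to implement. The sketch below (with our own hypothetical helper `inv2x2`) redoes this example:

```python
import numpy as np

def inv2x2(A):
    """Inverse of a 2x2 matrix via the theorem above; raises if det(A) = 0."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("det(A) = 0, so A is not invertible")
    return (1 / det) * np.array([[d, -b], [-c, a]])

A = np.array([[1, 5], [3, -2]])
B = np.array([[4, 1, 0], [0, 2, 1]])
X = inv2x2(A) @ B
print(np.round(17 * X))   # [[ 8. 12.  5.] [12.  1. -1.]], i.e. X = (1/17)[[8,12,5],[12,1,-1]]
```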
The transpose of a matrix
We defined this in tutorial sheet 4:
The transpose of an $n\times m$ matrix $A$ is the $m\times n$ matrix $A^T$ whose $(i,j)$ entry is the $(j,i)$ entry of $A$. In other words, to get $A^T$ from $A$, you write the rows of $A$ as columns, and vice versa; equivalently, you reflect $A$ in its main diagonal.
For example, $\def\mat#1{\begin{bmatrix}#1\end{bmatrix}}\mat{a&b\\c&d}^T=\mat{a&c\\b&d}$ and $\mat{1&2&3\\4&5&6}^T=\mat{1&4\\2&5\\3&6}$.
Exercise: simple properties of the transpose
Prove that for any matrix $A$:
- $(A^T)^T=A$; and
- $(A+B)^T=A^T+B^T$ if $A$ and $B$ are matrices of the same size; and
- $(cA)^T=c(A^T)$ for any scalar $c$.
In tutorial sheet 4, we proved:
Lemma: transposes and row-column multiplication
If $a$ is a $1\times m$ row vector and $b$ is an $m\times 1$ column vector, then \[ ab=b^Ta^T.\]
Observation: the transpose swaps rows with columns
Formally, for any matrix $A$ and any $i,j$, we have \begin{align*}\def\col#1{\text{col}_{#1}}\def\row#1{\text{row}_{#1}} \row i(A^T)&=\col i(A)^T\\\col j(A^T)&=\row j(A)^T .\end{align*}
Theorem: the transpose reverses the order of matrix multiplication
If $A$ and $B$ are matrices and the matrix product $AB$ is defined, then $B^TA^T$ is also defined. Moreover, in this case we have \[ (AB)^T=B^TA^T.\]
Proof
If $AB$ is defined, then $A$ is $n\times m$ and $B$ is $m\times k$ for some $n,m,k$, so $B^T$ is $k\times m$ and $A^T$ is $m\times n$, so $B^TA^T$ is defined. Moreover, in this case $B^TA^T$ is an $k\times n$ matrix, and $AB$ is an $n\times k$ matrix, so $(AB)^T$ is a $k\times n$ matrix. Hence $B^TA^T$ has the same size as $(AB)^T$. To show that they are equal, we calculate, using the fact that the transpose swaps rows with columns: \begin{align*} \text{the }(i,j)\text{ entry of }(AB)^T&= \text{the }(j,i)\text{ entry of }AB \\&= \row j(A)\cdot\col i(B) \\&=\col i(B)^T\cdot \row j(A)^T \quad\text{by the previous Lemma} \\&=\row i(B^T)\cdot \col j(A^T) \quad\text{by the Observation} \\&=\text{the }(i,j)\text{ entry of }B^TA^T \end{align*} Hence $(AB)^T=B^TA^T$. ■
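A one-line numerical check of the theorem, reusing the matrices from our first multiplication example:

```python
import numpy as np

A = np.array([[1, 0, 5],
              [2, -1, 3]])   # 2x3
B = np.array([[1, 2],
              [3, 4],
              [5, 6]])       # 3x2

print(np.array_equal((A @ B).T, B.T @ A.T))   # True: (AB)^T = B^T A^T
```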
Determinants of $n\times n$ matrices
Given any $n\times n$ matrix $A$, it is possible to define a number $\det(A)$ (as a formula using the entries of $A$) so that \[ A\text{ is invertible} \iff \det(A)\ne0.\]
- If $A$ is a $1\times 1$ matrix, say $A=[a]$, then we just define $\det[a]=a$.
- If $A$ is a $2\times 2$ matrix, say $A=\def\mat#1{\begin{bmatrix}#1\end{bmatrix}}\mat{a&b\\c&d}$, then we've seen that $\det(A)=ad-bc$.
- If $A$ is a $3\times 3$ matrix, say $A=\mat{a&b&c\\d&e&f\\g&h&i}$, then it turns out that $\det(A)=aei-afh+bfg-bdi+cdh-ceg$.
- If $A$ is a $4\times 4$ matrix, then the formula for $\det(A)$ is more complicated still, with $24$ terms.
- If $A$ is a $5\times 5$ matrix, then the formula for $\det(A)$ has $120$ terms.
Trying to memorise a formula in every case (or even in the $3\times 3$ case!) isn't practical unless we understand it somehow. We will approach this in several steps.
Step 1: minors
Definition
If $A$ is an $n\times n$ matrix, then the $(i,j)$ minor of $A$ is defined to be the determinant of the $(n-1)\times (n-1)$ matrix formed by removing row $i$ and column $j$ from $A$. We will write this number as $M_{ij}$.
Examples
- If $A=\mat{3&5\\-4&7}$, then $M_{11}=\det[7]=7$, $M_{12}=\det[-4]=-4$, $M_{21}=5$, and $M_{22}=3$.
- If $A=\mat{1&2&3\\7&8&9\\11&12&13}$, then $M_{23}=\det\mat{1&2\\11&12}=1\cdot 12-2\cdot 11=-10$ and $M_{32}=\det\mat{1&3\\7&9}=-12$.
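Minors are easy to compute with NumPy's `np.delete`; in this sketch, the helper `minor` is our own name and takes the notes' 1-based $i,j$.

```python
import numpy as np

def minor(A, i, j):
    """The (i,j) minor of A: delete row i and column j (1-based, as in the
    notes), then take the determinant of what is left."""
    sub = np.delete(np.delete(A, i - 1, axis=0), j - 1, axis=1)
    return np.linalg.det(sub)

A = np.array([[1, 2, 3],
              [7, 8, 9],
              [11, 12, 13]])
print(round(minor(A, 2, 3)))   # -10
print(round(minor(A, 3, 2)))   # -12
```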
Step 2: cofactors
Definition
The $(i,j)$ cofactor of an $n\times n$ matrix $A$ is $(-1)^{i+j}M_{ij}$, where $M_{ij}$ is the $(i,j)$ minor of $A$.
Note that $(-1)^{i+j}$ is $+1$ or $-1$, and can be looked up in the matrix of signs: $\mat{+&-&+&-&\dots\\-&+&-&+&\dots\\+&-&+&-&\dots\\\vdots&\vdots&\vdots&\vdots&\ddots}$. This matrix starts with a $+$ in the $(1,1)$ entry (corresponding to $(-1)^{1+1}=(-1)^2=+1$) and the signs then alternate.
Examples
- If $A=\mat{3&5\\-4&7}$, then $C_{11}=+M_{11}=\det[7]=7$, $C_{12}=-M_{12}=-\det[-4]=4$, $C_{21}=-5$, and $C_{22}=3$.
- If $A=\mat{1&2&3\\7&8&9\\11&12&13}$, then $C_{23}=-M_{23}=-(-10)=10$ and $C_{33}=+M_{33}=\det\mat{1&2\\7&8}=-6$.
Step 3: the determinant of a $3\times 3$ matrix using Laplace expansion along the first row
Definition
If $\def\mat#1{\begin{bmatrix}#1\end{bmatrix}}A=\mat{a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}}$ is a $3\times 3$ matrix, then \[\det A=a_{11}C_{11}+a_{12}C_{12}+a_{13}C_{13}.\] Here $C_{ij}$ are the cofactors of $A$.
This formula is called the Laplace expansion of $\det A$ along the first row, since $a_{11}$, $a_{12}$ and $a_{13}$ make up the first row of $A$.
Example
\begin{align*}\def\mat#1{\left[\begin{smallmatrix}#1\end{smallmatrix}\right]}\det\mat{1&2&3\\7&8&9\\11&12&13} &= 1\cdot C_{11} + 2 C_{12} + 3 C_{13}\\ &= 1 \cdot (+M_{11}) + 2 \cdot (-M_{12}) + 3 \cdot(+M_{13})\\ &= M_{11}-2M_{12}+3M_{13}\\ &= \det\mat{8&9\\12&13} -2\det\mat{7&9\\11&13} + 3\det\mat{7&8\\11&12}\\ &= (8\cdot 13-9\cdot 12) -2(7\cdot 13-9\cdot 11)+3(7\cdot 12-8\cdot 11)\\ &=-4 -2(-8)+3(-4)\\ &=-4+16-12\\ &=0.\end{align*}
From this, we can conclude that $\mat{1&2&3\\7&8&9\\11&12&13}$ is not invertible.
Notation
To save having to write $\det$ all the time, we sometimes write the entries of a matrix inside vertical bars $|\ |$ to mean the determinant of that matrix. Using this notation (and doing a few steps in our heads), we can rewrite the previous example as:
\begin{align*}\def\vm#1{\begin{vmatrix}#1\end{vmatrix}}\vm{1&2&3\\7&8&9\\11&12&13} &= 1\vm{8&9\\12&13} -2\vm{7&9\\11&13} + 3\vm{7&8\\11&12}\\ &=-4 -2(-8)+3(-4)\\ &=0.\end{align*}
Step 4: the determinant of an $n\times n$ matrix
Definition
If $\def\mat#1{\begin{bmatrix}#1\end{bmatrix}}A=\mat{a_{11}&a_{12}&\dots&a_{1n}\\\vdots&&&\vdots\\a_{n1}&a_{n2}&\dots&a_{nn}}$ is an $n\times n$ matrix, then \[\det A=a_{11}C_{11}+a_{12}C_{12}+\dots+a_{1n}C_{1n}.\] Here $C_{ij}$ are the cofactors of $A$.
This formula is called the Laplace expansion of $\det A$ along the first row, since $a_{11}, a_{12},\dots,a_{1n}$ make up the first row of $A$.
Example
\begin{align*} \def\vm#1{\begin{vmatrix}#1\end{vmatrix}} \vm{\color{red}1&\color{red}0&\color{red}2&\color{red}3\\0&2&1&-1\\2&0&0&1\\3&0&4&2} &= \color{red}1\vm{\color{blue}2&\color{blue}1&\color{blue}-1\\0&0&1\\0&4&2}-\color{red}0\vm{0&1&-1\\2&0&1\\3&4&2}+\color{red}2\vm{\color{orange}0&\color{orange}2&\color{orange}{-1}\\2&0&1\\3&0&2}-\color{red}3\vm{\color{purple}0&\color{purple}2&\color{purple}1\\2&0&0\\3&0&4}\\ &= 1\left(\color{blue}2\vm{0&1\\4&2}-\color{blue}1\vm{0&1\\0&2}\color{blue}{-1}\vm{0&0\\0&4}\right)-0+2\left(\color{orange}0-\color{orange}{2}\vm{2&1\\3&2}\color{orange}{-1}\vm{2&0\\3&0}\right)-3\left(\color{purple}0-\color{purple}2\vm{2&0\\3&4}+\color{purple}1\vm{2&0\\3&0}\right)\\ &=1(2(-4)-0-0)+2(-2(1)-0)-3(-2(8)+0)\\ &=-8-4+48\\ &=36. \end{align*}
Theorem: Laplace expansion along any row or column gives the determinant
- For any fixed $i$: $\det(A)=a_{i1}C_{i1}+a_{i2}C_{i2}+\dots+a_{in}C_{in}$ (Laplace expansion along row $i$)
- For any fixed $j$: $\det(A)=a_{1j}C_{1j}+a_{2j}C_{2j}+\dots+a_{nj}C_{nj}$ (Laplace expansion along column $j$)
Example
We can make life easier by choosing expansion rows or columns with lots of zeros, if possible. Let's redo the previous example with this in mind:
\begin{align*} \def\vm#1{\begin{vmatrix}#1\end{vmatrix}} \vm{1&\color{red}0&2&3\\0&\color{red}2&1&-1\\2&\color{red}0&0&1\\3&\color{red}0&4&2} &= -\color{red}0+\color{red}2\vm{1&2&3\\\color{purple}2&\color{purple}0&\color{purple}1\\3&4&2}-\color{red}0+\color{red}0\\ &=2\left(-\color{purple}2\vm{2&3\\4&2}+\color{purple}0-\color{purple}1\vm{1&2\\3&4}\right)\\ &=2(-2(-8)-(-2))\\ &=36. \end{align*}
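The Laplace expansion translates directly into a recursive function. The sketch below (our own helper `det_laplace`, expanding along the first row) is a faithful but very slow transcription of the definition: it performs on the order of $n!$ operations, so it is for illustration only.

```python
import numpy as np

def det_laplace(A):
    """Determinant by Laplace expansion along the first row."""
    A = np.asarray(A)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0
    for j in range(n):
        minor = np.delete(A[1:, :], j, axis=1)      # remove row 1 and column j+1
        cofactor = (-1) ** j * det_laplace(minor)   # sign (-1)^(1+(j+1)) = (-1)^j
        total += A[0, j] * cofactor
    return total

A = np.array([[1, 0, 2, 3],
              [0, 2, 1, -1],
              [2, 0, 0, 1],
              [3, 0, 4, 2]])
print(det_laplace(A))           # 36, as in the example above
print(round(np.linalg.det(A)))  # 36, for comparison
```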
Definition: upper triangular matrices
An $n\times n$ matrix $A$ is upper triangular if all the entries below the main diagonal are zero.
Definition: diagonal matrices
An $n\times n$ matrix $A$ is diagonal if the only non-zero entries are on its main diagonal.
Corollary: the determinant of upper triangular matrices and diagonal matrices
- The determinant of an upper triangular $n\times n$ matrix is the product of its diagonal entries: $\det(A)=a_{11}a_{22}\dots a_{nn}$.
- The determinant of an $n\times n$ diagonal matrix is the product of its diagonal entries: $\det(A)=a_{11}a_{22}\dots a_{nn}$.
Proof
- This is true for $n=1$, trivially. For $n>1$, assume inductively that it is true for $(n-1)\times (n-1)$ matrices and use the Laplace expansion of an upper triangular $n\times n$ matrix $A$ along the first column of $A$ to see that $\det(A)=a_{11}C_{11}+0+\dots+0=a_{11}C_{11}$. Now $C_{11}$ is the determinant of the $(n-1)\times (n-1)$ matrix formed by removing the first row and column of $A$, and this matrix is upper triangular with diagonal entries $a_{22},a_{33},\dots,a_{nn}$. By our inductive assumption, we have $C_{11}=a_{22}a_{33}\dots a_{nn}$. So $\det(A)=a_{11}C_{11}=a_{11}a_{22}a_{33}\dots a_{nn}$ as desired.
- Any diagonal matrix is upper triangular, so this is a special case of statement 1. ■
Examples
- For any $n$, we have $\det(I_n)=1\cdot 1\cdots 1 = 1$.
- For any $n$, we have $\det(5I_n)=5^n$.
- $\def\vm#1{\begin{vmatrix}#1\end{vmatrix}}\vm{1&9&43&23434&4&132\\0&3&43&2&-1423&-12\\0&0&7&19&23&132\\0&0&0&2&0&0\\0&0&0&0&-1&-903\\0&0&0&0&0&6}=1\cdot3\cdot7\cdot2\cdot(-1)\cdot6 = -252$.
Theorem: important properties of the determinant
Let $A$ be an $n\times n$ matrix.
- $A$ is invertible if and only if $\det(A)\ne0$.
- $\det(A^T)=\det(A)$
- If $B$ is another $n\times n$ matrix, then $\det(AB)=\det(A)\det(B)$
Corollary on invertibility
- $A^T$ is invertible if and only if $A$ is invertible
- $AB$ is invertible if and only if both $A$ and $B$ are invertible
Proof
- We have $\det(A^T)=\det(A)$. So $A^T$ is invertible $\iff$ $\det(A^T)\ne0$ $\iff$ $\det(A)\ne 0$ $\iff$ $A$ is invertible.
- We have $\det(AB)=\det(A)\det(B)$. So $AB$ is invertible $\iff$ $\det(AB)\ne 0$ $\iff$ $\det(A)\det(B)\ne0$ $\iff$ $\det(A)\ne0$ and $\det(B)\ne 0$ $ \iff$ $A$ is invertible and $B$ is invertible. ■
Theorem: row/column operations and determinants
Let $A$ be an $n\times n$ matrix, let $c$ be a scalar and let $i\ne j$.
$A_{Ri\to x}$ means $A$ but with row $i$ replaced by $x$.
- If $i\ne j$, then $\det(A_{Ri\leftrightarrow Rj})=-\det(A)$ (swapping two rows changes the sign of det).
- $\det(A_{Ri\to c Ri}) = c\det(A)$ (scaling one row scales $\det(A)$ in the same way)
- $\det(A_{Ri\to Ri + c Rj}) = \det(A)$ (adding a multiple of one row to another row doesn't change $\det(A)$)
- Also, these properties all hold if you change “row” into “column” throughout.
Corollary
If an $n\times n$ matrix $A$ has two equal rows (or columns), then $\det(A)=0$, and $A$ is not invertible.
Proof
If $A$ has two equal rows, row $i$ and row $j$, then $A=A_{Ri\leftrightarrow Rj}$. So $\det(A)=\det(A_{Ri\leftrightarrow Rj}) = -\det(A)$, so $2\det(A)=0$, so $\det(A)=0$.
If $A$ has two equal columns, then $A^T$ has two equal rows, so $\det(A)=\det(A^T)=0$.
In either case, $\det(A)=0$. So $A$ is not invertible.■
Examples
- Swapping two rows changes the sign, so $\def\vm#1{\begin{vmatrix}#1\end{vmatrix}}\vm{0&0&2\\0&3&0\\4&0&0} = -\vm{4&0&0\\0&3&0\\0&0&2}=-4\cdot 3\cdot 2 = -24$.
- Multiplying a row or a column by a constant multiplies the determinant by that constant, so \begin{align*}\vm{ 2&4&6&10\\5&0&0&-10\\9&0&81&99\\1&2&3&4} &= 2\vm{ 1&2&3&5\\5&0&0&-10\\9&0&81&99\\1&2&3&4} \\&= 2\cdot 5\vm{ 1&2&3&5\\1&0&0&-2\\9&0&81&99\\1&2&3&4}\\&= 2\cdot 5\cdot 9 \vm{ 1&2&3&5\\1&0&0&-2\\1&0&9&11\\1&2&3&4}\\&=2\cdot 5\cdot 9\cdot 2 \vm{ 1&1&3&5\\1&0&0&-2\\1&0&9&11\\1&1&3&4}\\&=2\cdot 5\cdot 9\cdot 2\cdot 3 \vm{ 1&1&1&5\\1&0&0&-2\\1&0&3&11\\1&1&1&4}.\end{align*}
- $\det(A_{R1\to R1-R4})=\det(A)$, so \begin{align*}\vm{ 1&1&1&5\\1&0&0&-2\\1&0&3&11\\1&1&1&4} &=\vm{ 0&0&0&1\\1&0&0&-2\\1&0&3&11\\1&1&1&4}=-1\vm{1&0&0\\1&0&3\\1&1&1}+0\\&=-\vm{0&3\\1&1} = -(-3)=3.\end{align*}
- Hence \begin{align*}\vm{ 2&4&6&10\\5&0&0&-10\\9&0&81&99\\1&2&3&4} &= 2\cdot 5\cdot 9\cdot 2\cdot 3 \vm{ 1&1&1&5\\1&0&0&-2\\1&0&3&11\\1&1&1&4} \\&= 2\cdot 5\cdot 9\cdot 2\cdot 3 \cdot 3 = 1620.\end{align*}
Corollary
If $\def\row{\text{row}}\row_j(A)=c\cdot \row_i(A)$ for some $i\ne j$ and some $c\in \mathbb{R}$, then $\det(A)=0$.
Proof
Note that $\row_j(A)-c \cdot\row_i(A)=0$. So $A_{Rj\to Rj-c\,Ri}$ has a zero row, and by Laplace expansion along this row we obtain $\det(A_{Rj\to Rj-c\,Ri})=0$. So $\det(A)=\det(A_{Rj\to Rj-c\,Ri})=0$.■
The effect of EROs on the determinant
We have now seen the effect of each of the three types of ERO on the determinant of a matrix:
- swapping two rows of the matrix multiplies the determinant by $-1$. By swapping rows repeatedly, we are able to shuffle the rows in an arbitrary fashion, and the determinant will either remain unchanged (if we used an even number of swaps) or be multiplied by $-1$ (if we used an odd number of swaps).
- multiplying one of the rows of the matrix by $c\in \mathbb{R}$ multiplies the determinant by $c$; and
- replacing row $j$ by “row $j$ ${}+{}$ $c\times {}$ (row $i$)”, where $c$ is a non-zero real number and $i\ne j$ does not change the determinant.
Moreover, since $\det(A)=\det(A^T)$, this all applies equally to columns instead of rows.
We can use EROs to put a matrix into upper triangular form, and then finding the determinant is easy: just multiply the diagonal entries together. We just have to keep track of how the determinant is changed by the EROs of types 1 and 2.
Example: using EROs to find the determinant
\begin{align*}\def\vm#1{\begin{vmatrix}#1\end{vmatrix}} \vm{1&3&1&3\\\color{red}4&\color{red}8&\color{red}0&\color{red}{12}\\0&1&3&6\\2&2&1&6}&= \color{red}{4}\vm{1&3&1&\color{blue}3\\1&2&0&\color{blue}3\\0&1&3&\color{blue}6\\2&2&1&\color{blue}6}\\&=4\cdot \color{blue}3\vm{\color{green}1&3&1&1\\\color{red}1&2&0&1\\\color{red}0&1&3&2\\\color{red}2&2&1&2} \\&=12\vm{1&3&1&1\\\color{blue}0&\color{blue}{-1}&\color{blue}{-1}&\color{blue}{0}\\\color{blue}0&\color{blue}1&\color{blue}3&\color{blue}2\\0&-4&-1&0} \\&=\color{blue}{-}12\vm{1&3&1&1\\0&\color{green}1&3&2\\0&\color{red}{-1}&{-1}&{0}\\0&\color{red}{-4}&-1&0} \\&=-12\vm{1&3&1&1\\0&1&3&2\\0&0&\color{green}2&2\\0&0&\color{red}{11}&8} \\&=-12\vm{1&3&1&1\\0&1&3&2\\0&0&2&2\\0&0&0&-3} \\&=-12(1)(1)(2)(-3)=72. \end{align*}
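This is essentially how determinants are computed in practice. Here is a sketch (`det_by_elimination` is our own helper): it reduces the matrix to upper triangular form using only row swaps and type-3 row operations, tracks the sign changes from the swaps, and then multiplies the diagonal entries.

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via reduction to upper triangular form by EROs."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for j in range(n):
        pivot = next((i for i in range(j, n) if U[i, j] != 0), None)
        if pivot is None:
            return 0.0                      # no pivot: the determinant is zero
        if pivot != j:
            U[[j, pivot]] = U[[pivot, j]]   # row swap: det changes sign
            sign = -sign
        for i in range(j + 1, n):
            U[i] -= (U[i, j] / U[j, j]) * U[j]   # Ri -> Ri + c Rj: det unchanged
    return sign * np.prod(np.diag(U))

A = np.array([[1, 3, 1, 3],
              [4, 8, 0, 12],
              [0, 1, 3, 6],
              [2, 2, 1, 6]])
print(det_by_elimination(A))   # 72.0, as in the example above
```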
Finding the inverse of an invertible $n\times n$ matrix
Definition: the adjoint of a square matrix
Let $A$ be an $n\times n$ matrix. Recall that $C_{ij}$ is the $(i,j)$ cofactor of $A$. The matrix of cofactors of $A$ is the $n\times n$ matrix $C$ whose $(i,j)$ entry is $C_{ij}$.
The adjoint of $A$ is the $n\times n$ matrix $J=C^T$, the transpose of the matrix of cofactors.
Example: $n=2$
If $A=\def\mat#1{\begin{bmatrix}#1\end{bmatrix}}\def\vm#1{\begin{vmatrix}#1\end{vmatrix}}\mat{1&2\\3&4}$, then $C_{11}=+4$, $C_{12}=-3$, $C_{21}=-2$, $C_{22}=+1$. So the matrix of cofactors is $C=\mat{4&-3\\-2&1}$, so the adjoint of $A$ is $J=C^T=\mat{4&-2\\-3&1}$.
Example: $n=2$, general case
If $A=\def\mat#1{\begin{bmatrix}#1\end{bmatrix}}\def\vm#1{\begin{vmatrix}#1\end{vmatrix}}\mat{a&b\\c&d}$, then $C=\mat{d&-c\\-b&a}$, so the adjoint of $A$ is $J=C^T=\mat{d&-b\\-c&a}$.
Recall that $AJ=(\det A)I_2=JA$; we calculated this earlier when we looked at the inverse of a $2\times 2$ matrix. Hence for a $2\times 2$ matrix $A$, if $\det A\ne0$, then $A^{-1}=\frac1{\det A}J$.
Example: $n=3$
If $\def\mat#1{\begin{bmatrix}#1\end{bmatrix}}A=\mat{3&1&0\\-2&-4&3\\5&4&-2}$, then the matrix of signs is $\mat{+&-&+\\-&+&-\\+&-&+}$, so \[\def\vm#1{\begin{vmatrix}#1\end{vmatrix}} C=\mat{ \vm{-4&3\\4&-2}&-\vm{-2&3\\5&-2}&\vm{-2&-4\\5&4}\\ -\vm{1&0\\4&-2}&\vm{3&0\\5&-2}&-\vm{3&1\\5&4}\\ \vm{1&0\\-4&3}&-\vm{3&0\\-2&3}&\vm{3&1\\-2&-4}} = \mat{-4&11&12\\2&-6&-7\\3&-9&-10}\] so the adjoint of $A$ is \[ J=C^T=\mat{-4&2&3\\11&-6&-9\\12&-7&-10}.\]
Observe that $AJ=\mat{3&1&0\\-2&-4&3\\5&4&-2}\mat{-4&2&3\\11&-6&-9\\12&-7&-10}=\mat{-1&0&0\\0&-1&0\\0&0&-1}=-1\cdot I_3$, and $JA=\mat{-4&2&3\\11&-6&-9\\12&-7&-10}\mat{3&1&0\\-2&-4&3\\5&4&-2}=\mat{-1&0&0\\0&-1&0\\0&0&-1}=-1\cdot I_3$; and $\det(A)=-1$.
This is an illustration of the following theorem, whose proof is omitted:
Theorem: key property of the adjoint of a square matrix
If $A$ is any $n\times n$ matrix and $J$ is its adjoint, then $AJ=(\det A)I_n=JA$.
Corollary: a formula for the inverse of a square matrix
If $A$ is any $n\times n$ matrix with $\det(A)\ne 0$, then $A$ is invertible and \[A^{-1}=\frac1{\det A}J\] where $J$ is the adjoint of $A$.
Proof
Divide the equation $AJ=(\det A)I_n=JA$ by $\det A$. ■
Example
If again we take $A=\mat{3&1&0\\-2&-4&3\\5&4&-2}$, then $J=\mat{-4&2&3\\11&-6&-9\\12&-7&-10}$ and $\det(A)=-1$, so $A$ is invertible and $A^{-1}=\frac1{-1}J=-J=\mat{4&-2&-3\\-11&6&9\\-12&7&10}$.
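The adjoint formula is straightforward to implement; the sketch below (our own helper `adjoint`, using `np.linalg.det` for the cofactors) reproduces this $3\times 3$ example. For large matrices this approach is far slower than the row-reduction method described below.

```python
import numpy as np

def adjoint(A):
    """The adjoint of A: the transpose of the matrix of cofactors."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):          # 0-based i, j here, so the sign is (-1)^(i+j)
        for j in range(n):
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)
    return C.T

A = np.array([[3, 1, 0],
              [-2, -4, 3],
              [5, 4, -2]])
J = adjoint(A)
d = round(np.linalg.det(A))   # -1
print(np.round(J))            # [[ -4   2   3] [ 11  -6  -9] [ 12  -7 -10]]
print(np.round(J / d))        # A^{-1} = (1/det A) J
```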
Example ($n=4$)
Let $A=\mat{1&0&0&0\\1&2&0&0\\1&2&3&0\\1&2&3&4}$.
Recall that a matrix with a repeated row or a zero row has determinant zero. We have \[C=\mat{+\vm{2&0&0\\2&3&0\\2&3&4}&-\vm{1&0&0\\1&3&0\\1&3&4}&+0&-0\\-0&+\vm{1&0&0\\1&3&0\\1&3&4}&-\vm{1&0&0\\1&2&0\\1&2&4}&+0\\+0&-0&+\vm{1&0&0\\1&2&0\\1&2&4}&-\vm{1&0&0\\1&2&0\\1&2&3}\\-0&+0&-0&+\vm{1&0&0\\1&2&0\\1&2&3}}=\mat{24&-12&0&0\\0&12&-8&0\\0&0&8&-6\\0&0&0&6}\] so \[J=C^T=\mat{24&0&0&0\\-12&12&0&0\\0&-8&8&0\\0&0&-6&6}.\] Since $A$ is lower triangular, its determinant is given by multiplying together its diagonal entries: $\det(A)=1\times 2\times 3\times 4=24$. (Note that even if $A$ were not triangular, $\det A$ could easily be found from the matrix of cofactors $C$: by Laplace expansion, summing the entries of any row or column of $A$ multiplied by the corresponding entries of $C$ gives $\det A$.)
So \[A^{-1}=\frac1{\det A}J = \frac1{24}\mat{24&0&0&0\\-12&12&0&0\\0&-8&8&0\\0&0&-6&6}=\mat{1&0&0&0\\-1/2&1/2&0&0\\0&-1/3&1/3&0\\0&0&-1/4&1/4}.\] You should check that this really is the inverse, by checking that $AA^{-1}=I_4=A^{-1}A$.
A more efficient way to find $A^{-1}$
Given an $n\times n$ matrix $A$, form the $n\times 2n$ matrix \[ \def\m#1{\left[\begin{array}{c|c}#1\end{array}\right]}\m{A&I_n}\] and use EROs to put this matrix into RREF. One of two things can happen:
- Either you get a row of the form $[0~0~\dots~0~|~*~*~\dots~*]$ which starts with $n$ zeros. You can then conclude that $A$ is not invertible.
- Or you end up with a matrix of the form $\m{I_n&B}$ for some $n\times n$ matrix $B$. You can then conclude that $A$ is invertible, and $A^{-1}=B$.
Examples
- Consider $A=\def\mat#1{\begin{matrix}#1\end{matrix}}\left[\mat{1&3\\2&6}\right]$. \begin{align*}\m{A&I_2}&=\m{\mat{1&3\\2&6}&\mat{1&0\\0&1}} \def\go#1#2{\m{\mat{#1}&\mat{#2}}} \def\ar#1{\\[6pt]\xrightarrow{#1}&} \ar{R2\to R2-2R1}\go{1&3\\0&0}{1&0\\-2&1} \end{align*} Conclusion: $A$ is not invertible.
- Consider $A=\left[\mat{1&3\\2&7}\right]$.\begin{align*}\m{A&I_2}&=\m{\mat{1&3\\2&7}&\mat{1&0\\0&1}} \ar{R2\to R2-2R1}\go{1&3\\0&1}{1&0\\-2&1} \ar{R1\to R1-3R2}\go{1&0\\0&1}{7&-3\\-2&1} \end{align*} Conclusion: $A$ is invertible and $A^{-1}=\left[\mat{7&-3\\-2&1}\right]$.
- Consider $A=\left[\mat{3&1&0\\-2&-4&3\\5&4&-2}\right]$.\begin{align*}\m{A&I_3}&=\go{3&1&0\\-2&-4&3\\5&4&-2}{1&0&0\\0&1&0\\0&0&1} \ar{R1\to R1+R2} \go{1&-3&3\\-2&-4&3\\5&4&-2}{1&1&0\\0&1&0\\0&0&1} \ar{R2\to R2+2R1,\ R3\to R3-5R1} \go{1&-3&3\\0&-10&9\\0&19&-17}{1&1&0\\2&3&0\\-5&-5&1} \ar{R3\leftrightarrow R2} \go{1&-3&3\\0&19&-17\\0&-10&9}{1&1&0\\-5&-5&1\\2&3&0} \ar{R2\to R2+2R3} \go{1&-3&3\\0&-1&1\\0&-10&9}{1&1&0\\-1&1&1\\2&3&0} \ar{R1\to R1-3R2,\ R3\to R3-10R2} \go{1&0&0\\0&-1&1\\0&0&-1}{4&-2&-3\\-1&1&1\\12&-7&-10} \ar{R2\to R2+R3} \go{1&0&0\\0&-1&0\\0&0&-1}{4&-2&-3\\11&-6&-9\\12&-7&-10} \ar{R2\to -R2,\ R3\to -R3} \go{1&0&0\\0&1&0\\0&0&1}{4&-2&-3\\-11&6&9\\-12&7&10} \end{align*} Conclusion: $A$ is invertible, and $A^{-1}=\left[\mat{4&-2&-3\\-11&6&9\\-12&7&10}\right]$.
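Finally, here is a sketch of this row-reduction method in code; `inverse_by_rref` is our own name (in practice one would call a library routine such as `np.linalg.inv`).

```python
import numpy as np

def inverse_by_rref(A):
    """Invert A by row-reducing [A | I_n]; returns None when the left block
    cannot be reduced to I_n, i.e. when A is not invertible."""
    n = A.shape[0]
    M = np.hstack([np.array(A, dtype=float), np.eye(n)])
    for j in range(n):
        pivot = next((i for i in range(j, n) if M[i, j] != 0), None)
        if pivot is None:
            return None                 # a zero column: A is not invertible
        M[[j, pivot]] = M[[pivot, j]]   # swap a nonzero pivot into place
        M[j] /= M[j, j]                 # scale the pivot row
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]  # clear the rest of column j
    return M[:, n:]                     # the right block is now A^{-1}

A = np.array([[3, 1, 0], [-2, -4, 3], [5, 4, -2]])
print(np.round(inverse_by_rref(A)))                  # [[4 -2 -3] [-11 6 9] [-12 7 10]]
print(inverse_by_rref(np.array([[1, 3], [2, 6]])))   # None: not invertible
```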