\documentclass{article}
\usepackage[left=2.5cm,right=2.5cm, top=2cm, bottom=2cm]{geometry}
\usepackage{enumerate,amsmath,amssymb,amsthm,graphicx}
\newenvironment{amatrix}[1]{%
\left(\begin{array}{@{}*{#1}{c}|c@{}}
}{%
\end{array}\right)
}
\newif\ifsinglecharacter
\def\lengthone#1{\isSingleCharacterHelper(#1\empty)!!end!!\ifsinglecharacter}
\def\isSingleCharacterHelper(#1#2)!!end!!{\ifx#2\empty\singlecharactertrue\else\singlecharacterfalse\fi}
\def\splitcommainto(#1#2)!!end!!#3{\ifx#1,#3\else #1\fi\ifx#2\empty\else\splitcommainto(#2){#3}\fi}
\def\det(#1){\operatorname{det}(#1)}
\def\trans#1{\lengthone{#1} #1^T\else\left(#1\right)^T\fi}
\def\inv#1{\lengthone{#1} #1^{-1}\else\left(#1\right)^{-1}\fi}
\def\dotprod#1#2{\vec{#1}\cdot\vec{#2}}
\def\vec#1{\mathbf{#1}}
\def\vecsub #1_#2{\mathbf{#1_{#2}}}
\def\norm#1{\left|\!\left|#1\right|\!\right|}
\def\R{\mathbb{R}}
\def\spn{\operatorname{span}}
\def\rank{\operatorname{rank}}
\def\set#1{\left\{\,#1\,\right\}}
\def\s#1{\left\{#1\right\}}
\def\st{\text{\huge{.}}}
\def\colvec#1{\begin{pmatrix}\splitcommainto(#1\empty)!!end!!{\\}\end{pmatrix}}
\def\mlist#1_#2{#1_1, \ldots, #1_{#2}}
\def\veclist#1_#2{\mlist\vec{#1}_{#2}}
\def\lincomb#1#2_#3{#1_1\vec{#2_1} + \cdots + #1_{#3}\vec{#2_{#3}}}
\def\df#1{\textbf{#1}}
\def\theorem{\par\noindent{\bf \underline{Theorem}} }
\def\lemma{\par\noindent{\bf \underline{Lemma}} }
\def\proof{\par\noindent{\sl Proof.} }
\def\example{\par\noindent{\bf \underline{Example}} }
\def\soln{\par\noindent{\sl Solution.} }
\def\remark{\par\noindent{\bf \underline{Remark}} }
\title{Summary of Day 8}
\author{William Gunther}
\date{May 29, 2014}
\begin{document}
\maketitle
\section{Objectives}
\begin{itemize}
\item Explore the algebraic structure of inverses.
\end{itemize}
\section{Summary}
\begin{itemize}
\item Today we will be focusing on the following algebraic structure attached to a matrix: an inverse. Let $A$ be an $n\times n$ matrix. $B$ is an \df{inverse} of $A$ if: \[ BA = AB = I \] If such a $B$ exists for $A$, we call $A$ \df{invertible}.
\example The identity matrix $I$ is its own inverse, since $II = I$. Slightly less trivially, $\begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$ has inverse $\begin{pmatrix} 1/2 & 0 \\ 0 & 1/3 \end{pmatrix}$, as a quick multiplication confirms.
\item It's a relatively simple exercise to come up with a matrix without an inverse. You can use, for example, the first row to ensure that there cannot be a $1$ in the $(1,1)$ entry after right multiplication by any matrix.
\example The easiest example is $O_{n\times n}$. You can check that $OA = AO = O\neq I$ for any $A$, so $O$ cannot have an inverse. Can you think of a less trivial example? How about less trivial than less trivial?
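A concrete less trivial example (my own, not from the text): take $A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$. For any $2\times 2$ matrix $B$, each column of $AB$ is $A$ applied to a column of $B$, and
\[
A\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x + 2y \\ 2(x+2y) \end{pmatrix} = \begin{pmatrix} t \\ 2t \end{pmatrix} \quad\text{where } t = x + 2y,
\]
so every column of $AB$ has second entry twice its first. Since the first column of $I$ is $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $0 \neq 2\cdot 1$, we can never have $AB = I$.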
\remark The fact that not every matrix is invertible makes sense if you follow the analogy that matrices are like functions, where multiplication is function composition; do you know a necessary and sufficient condition for a function to have an inverse?
\item The following is a theorem that one usually proves whenever inverses are around. As such, the proof is simple and carries over to most algebraic settings (you can actually keep track of exactly which properties of matrix multiplication we use, to see exactly where the argument applies).
\theorem Inverses, if they exist, are unique.
\proof Let $B$ and $C$ be $n\times n$ matrices which are inverses for $A$. We want to show that $B=C$. Consider $BAC$; then $BAC = (BA)C = IC = C$. Moreover, $BAC = B(AC) = BI = B$. Therefore, $B=C$.\qed
Because of the above theorem it makes sense to talk about \emph{the} inverse of a matrix. We define an operation for this: if $A$ is an invertible matrix then we write $\inv{A}$ for the inverse of $A$. \emph{Never} write $1/A$. When you want to divide by a matrix, multiply by its inverse on the appropriate side instead.
\item Recall that a system of $n$ equations with $n$ unknowns can be realized by the following matrix equations:
\[
A\vec x = \vec b
\]
where $A$ is an $n\times n$ matrix. Imagine $A$ were invertible. Then we could apply $\inv{A}$ on the left of both sides to get $\inv{A}A\vec x = \inv{A}\vec b$, or $\vec x = \inv{A}\vec b$. Let's state this as a theorem and actually prove that this is a valid way to solve the equation.
\theorem Let $A$ be an $n\times n$ invertible matrix; then, \emph{for any $\vec b$} the system of linear equations represented by $A\vec x = \vec b$ has a unique solution: $\inv{A}\vec b$.
\proof There are a few obligations to discharge here. We have to prove that there is a solution no matter what $\vec b$ is, which means the claim depends purely on $A$. Moreover, we have to show that the solution has the stated form and is unique.
To handle the ``any $\vec b$'' part, let us fix an arbitrary $\vec b\in \R^n$; that is, $\vec b$ is a stand-in for any vector. If we rely on no property of $\vec b$ other than membership in $\R^n$ and prove the result, then we have proven it for all of $\R^n$.
Our first obligation is to show that $\inv{A}\vec b$ is a solution. This is essentially the calculation we made above:
\[
A(\inv{A}\vec b) = (A\inv{A})\vec b = I\vec b = \vec b
\]
Thus $\inv{A}\vec b$ is a solution. Now we need to show it's unique. Suppose that $\vec a$ is also a solution; we want to show that $\vec a = \inv A\vec b$, as that would show we really don't have two solutions but just the one. So $A\vec a = \vec b$. Apply $\inv{A}$ on the left of both sides:
\begin{align*}
\inv{A}A\vec a &= \inv A\vec b \\
\vec a &= \inv A\vec b
\end{align*}
\qed
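\example As an illustration of the theorem (a small example of my own): let
\[
A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}, \qquad \vec b = \begin{pmatrix} 3 \\ 2 \end{pmatrix}.
\]
One can check that $\inv{A} = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}$, so the unique solution is
\[
\vec x = \inv{A}\vec b = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}\begin{pmatrix} 3 \\ 2 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix},
\]
and indeed $A\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 2 \end{pmatrix}$.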
\item Now we want to explore when matrices have inverses so we can then use the results like the above. We'll come back to this later today in full generality, but the $2\times 2$ case is relatively easy to describe, so let's do that first.
\theorem Let $A$ be a $2\times 2$ matrix: \[ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \] $A$ is invertible if and only if $ad - bc \neq 0$. Moreover, the inverse is given by:
\[
\frac{1}{ad-bc}
\begin{pmatrix}
d & -b \\ -c & a
\end{pmatrix}
\]
(This magical quantity $ad-bc$ is called the \df{determinant}, and it is denoted $\det(A)$.)
\proof For one direction of the if and only if, it clearly suffices to check that the given matrix is an inverse. This is left as an exercise.
For the other direction, we need to show that if the matrix has an inverse then $\det(A) \neq 0$. Instead we will show that if $\det(A) = 0$ then $A$ does not have an inverse. This is a bit trickier. As $\det(A) = 0$ we have that $ad = bc$.
Case 1: $a \neq 0$.
Then $d = \frac{bc}{a} = \frac{c}{a}b$ and $c = \frac{c}{a}a$, so we can write the matrix as:
\[ \begin{pmatrix} a & b \\ \frac{c}{a} a & \frac{c}{a} b \end{pmatrix} \]
Note that the second row is a multiple of the first. By the previous theorem, if $A$ is invertible then there is a unique solution to $A\vec x = \vec b$ for any $\vec b$. But that fails here: for any $\vec b$, the augmented matrix $(A\mid\vec b)$ is either inconsistent or has a free variable.
Case 2: $a = 0$.
Then either $b$ or $c$ must be $0$ as $ad = bc$. But then $A$ has either a $\vec 0$ row or a $\vec 0$ column which means, as above, the augmented matrix $(A\mid \vec b)$ cannot have a unique solution as the rank is less than the number of columns, so it is either inconsistent or has free variables. \qed
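\example A quick check of the formula (my own numbers): for $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ we have $\det(A) = 1\cdot 4 - 2\cdot 3 = -2 \neq 0$, so
\[
\inv{A} = \frac{1}{-2}\begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix} = \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix},
\]
and multiplying out confirms $A\inv{A} = I$.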
\remark The above should indicate that our knowledge of systems is very helpful when dealing with matrices.
\item Now let's discuss some algebraic properties of inverses before revisiting the question about which matrices have inverses.
\theorem
\begin{enumerate}[a.]
\item If $A$ is an invertible matrix then $\inv{A}$ is invertible. Further:
\[
\inv{\inv{A}} = A
\]
\item If $A$ is invertible then $cA$ is invertible for any $c\neq 0$. Further:
\[
\inv{cA} = \frac{1}{c}\inv{A}
\]
\item If $A$ and $B$ are invertible matrices of the same size then $AB$ is invertible. Further:
\[
\inv{AB} = \inv{B}\inv{A}
\]
\item If $A$ is invertible then $A^r$ is invertible. Further:
\[
\inv{A^r} = (\inv{A})^r
\]
\end{enumerate}
\proof These are all proved in the text. This is Theorem 3.9. A few will be proved during lecture.\qed
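As a sample of these proofs, part (c) is just associativity:
\[
(AB)(\inv{B}\inv{A}) = A(B\inv{B})\inv{A} = AI\inv{A} = A\inv{A} = I,
\]
and symmetrically $(\inv{B}\inv{A})(AB) = I$; by uniqueness of inverses, $\inv{AB} = \inv{B}\inv{A}$.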
\item We will now change gears and go towards the question of determining when a matrix has an inverse. We call a matrix an \df{elementary matrix} if it can be obtained by doing a row operation on the identity matrix.
The following big theorem connects everything we did with row reductions to matrix multiplication.
\theorem Let $A$ be a matrix, and $A'$ be obtained by doing an elementary row operations on $A$. Then there is an elementary matrix $E$ (of the proper size) such that $EA = A'$. Moreover, $E$ can be obtained by performing the elementary row operation on the identity matrix that was performed on $A$ to get $A'$.
\proof Obvious; consider what these elementary matrices do, and what they do when applied on the left of a matrix $A$. \qed
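For example (a $2\times 2$ illustration of my own), the row operation $R_2 \mapsto R_2 - 2R_1$ applied to $I$ gives $E = \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix}$, and indeed
\[
EA = \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} a & b \\ c - 2a & d - 2b \end{pmatrix},
\]
which is exactly $A$ after that same row operation.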
\item Elementary matrices are also nice.
\lemma Elementary matrices are invertible.
\proof The homework tells us that every elementary row operation can be undone by another elementary row operation. It is then an easy exercise to check that the elementary matrix of the undoing operation is the inverse of the elementary matrix of the original operation.\qed
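For instance, with $E = \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix}$ (the operation $R_2 \mapsto R_2 - 2R_1$), the undoing operation $R_2 \mapsto R_2 + 2R_1$ gives $\inv{E} = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}$; multiplying the two in either order gives $I$.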
\item The following is a big theorem, and will allow us to calculate inverses:
\theorem $A$ is invertible if and only if $A$ is a product of elementary matrices.
We will prove this as part of a much bigger theorem that the book calls the \df{Fundamental Theorem on Invertible Matrices} which connects many of the notions we've seen.
\theorem (Fundamental Theorem on Invertible Matrices Version 1: Theorem 3.12 of book) Let $A$ be a square, $n\times n$ matrix. Then the following are equivalent:
\begin{enumerate}[a. ]
\item $A$ is invertible.
\item $A\vec x = \vec b$ has a unique solution for every $\vec b\in \R^n$.
\item $A\vec x = \vec 0$ has only the trivial solution.
\item The reduced row echelon form of $A$ is $I$.
\item $A$ is a product of elementary matrices.
\end{enumerate}
\proof This is proved in the book. We will prove it in class as well.\qed
\item We can use this to calculate inverses as well as decide when they exist.
\example I will go through example 3.30 in the book: to calculate the inverse of
\[
\begin{pmatrix}
1 & 2 & -1\\
2 & 2 & 4\\
1 & 3 & 3
\end{pmatrix}
\]
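The method: augment $A$ with the identity and row reduce. If the left block reaches $I$, the right block is $\inv{A}$ (and if the left block acquires a zero row, $A$ is not invertible). Sketching the computation (my own row reduction; check it against the book's worked example):
\[
\left(\begin{array}{ccc|ccc}
1 & 2 & -1 & 1 & 0 & 0\\
2 & 2 & 4 & 0 & 1 & 0\\
1 & 3 & 3 & 0 & 0 & 1
\end{array}\right)
\longrightarrow \cdots \longrightarrow
\left(\begin{array}{ccc|ccc}
1 & 0 & 0 & 3/7 & 9/14 & -5/7\\
0 & 1 & 0 & 1/7 & -2/7 & 3/7\\
0 & 0 & 1 & -2/7 & 1/14 & 1/7
\end{array}\right)
\]
so $\inv{A}$ is the right-hand block; one can verify directly that multiplying it by $A$ gives $I$.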
\end{itemize}
\end{document}