\documentclass{article}
\usepackage{enumitem,amsmath,amssymb,amsthm}
\usepackage{graphicx}
\usepackage[left=2.5cm,right=2.5cm, top=2cm, bottom=2cm,nohead,nofoot]{geometry}
\newenvironment{amatrix}[1]{%
\left(\begin{array}{@{}*{#1}{c}|c@{}}
}{%
\end{array}\right)
}
\def\dotprod#1#2{\vec{#1}\cdot\vec{#2}}
\def\vec#1{\mathbf{#1}}
\def\norm#1{\left|\!\left|#1\right|\!\right|}
\def\R{\mathbb{R}}
\def\spn{\operatorname{span}}
\def\rank{\operatorname{rank}}
\def\set#1{\left\{\,#1\,\right\}}
\def\s#1{\left\{#1\right\}}
\def\st{\text{\huge{.}}}
\def\eovec{!!}
\def\colvec#1{\begin{pmatrix}\splitspaceintonewlines(#1\eovec)\end{pmatrix}}
\def\splitspaceintonewlines(#1#2){%
\ifx#1,\\\else #1\fi%
\ifx#2\eovec\else \splitspaceintonewlines(#2)\fi%
}
\def\df#1{\textbf{#1}}
\def\theorem{\par\noindent{\bf \underline{Theorem}} }
\def\proof{\par\noindent{\sl Proof.} }
\def\example{\par\noindent{\bf \underline{Example}} }
\def\soln{\par\noindent{\sl Solution.} }
\def\remark{\par\noindent{\bf \underline{Remark}} }
\title{Summary of Day 5}
\author{William Gunther}
\date{May 23, 2014}
\begin{document}
\maketitle
\section{Objectives}
\begin{itemize}
\item Solidify understanding of the idea of span and linear combinations.
\item Define the notion of linear independence and linear dependence.
\item Prove some theorems involving linear dependence/independence.
\end{itemize}
\section{Summary}
\begin{itemize}
\item Let's begin with an example:
\example Calculate the span of $[0,1]$ and $[1,1]$.
\soln The span is the set of all linear combinations; thus:
\[
\spn(\set{[0,1], [1,1]}) = \set{ \vec{v}\in\R^2 \mid \exists c_1, c_2\in\R \st \vec v = c_1[0,1] + c_2[1,1]}
\]
Thus, we need to figure out for which vectors $\vec v$ we can find a $c_1$ and $c_2$ such that the following vector equation holds:
\[
c_1 \colvec{0,1} + c_2\colvec{1,1} = \vec v := \colvec{v_1,v_2}
\]
We can realize this as a system of linear equations:
\begin{alignat*}{3}
&&c_2 &=& v_1 \\
c_1 &+& c_2 &=&v_2
\end{alignat*}
which, in turn, we can realize as an augmented matrix and solve using Gauss-Jordan elimination:
\[
\begin{amatrix}{2}
0&1&v_1\\
1&1&v_2
\end{amatrix}
\xrightarrow{R_2 - R_1}
\begin{amatrix}{2}
0&1&v_1\\
1&0&v_2-v_1
\end{amatrix}
\xrightarrow{R_1\leftrightarrow R_2}
\begin{amatrix}{2}
1&0&v_2-v_1\\
0&1&v_1
\end{amatrix}
\]
We can see that this system has a solution no matter what $v_1$ and $v_2$ are. Thus, every $\vec v$ can be written as a linear combination of the vectors $[0,1]$ and $[1,1]$. We can even see what the constants are: if you want to figure out which linear combination of these vectors hits $[v_1, v_2]$, it is:
\[
(v_2 - v_1)\colvec{0,1} + (v_1)\colvec{1,1} = \colvec{v_1,v_2}
\]
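For instance, choosing the concrete target $[3,5]$ (a vector of our own picking, just to illustrate the formula), we get $c_1 = v_2 - v_1 = 2$ and $c_2 = v_1 = 3$, and indeed:
\[
2\colvec{0,1} + 3\colvec{1,1} = \colvec{3,5}
\]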
We can visualize this example as follows:
\begin{center}\emph{[Picture: the vectors $[0,1]$ and $[1,1]$ drawn in the plane, forming their own coordinate grid.]}\end{center}
You can see that the vectors $[0,1]$ and $[1,1]$ kind of make their own coordinate system in $\R^2$; saying the span of these vectors is all of $\R^2$ is saying that \emph{any} vector can be written in this coordinate system. In this case when the span of a set is the whole space we say that the set \df{spans} the space, and we call the set a \df{spanning set}.
\item The most commonly used spanning set of $\R^2$ consists of the vectors $[1,0]$ and $[0,1]$, which give us our usual coordinate system. These vectors are called the \df{standard unit vectors}, or the \df{standard basis}. We will return to the word `basis' soon; the `unit' part of the name just means that they have length $1$.
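As a quick illustration of why this coordinate system is so convenient: decomposing an arbitrary vector into the standard basis requires no computation at all, since
\[
\colvec{v_1,v_2} = v_1\colvec{1,0} + v_2\colvec{0,1}
\]
whereas finding the coordinates with respect to $[0,1]$ and $[1,1]$ took a round of Gauss-Jordan.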
\item Let's do another example
\example Calculate the span of $\colvec{1,1,1}$ and $\colvec{1,2,1}$ and describe it geometrically.
\soln As before, we can transform this question into one of a system of linear equations represented by this augmented matrix:
\[
\begin{amatrix}{2}
1&1&a\\
1&2&b\\
1&1&c
\end{amatrix}
\]
Doing row reductions to rref, we get the following matrix:
\[
\begin{amatrix}{2}
1&0&2a-b\\
0&1&b-a\\
0&0&c-a
\end{amatrix}
\]
This system is consistent if and only if $c-a = 0$. For any vector satisfying that condition, the system has a unique solution. Therefore, the span consists exactly of the vectors for which $c = a$, i.e.
\[
\spn(\s{[1,1,1],[1,2,1]}) = \set{ \vec{v}\in\R^3 \mid \vec{v} = [v_1, v_2, v_3] \text{ where } v_1 = v_3 }
\]
Another way to view it is that the span is the set of linear combinations: if $\vec{x}$ is a linear combination of $[1,1,1]$ and $[1,2,1]$, then for some $s,t\in\R$:
\[
\vec x = s \colvec{1,1,1} + t\colvec{1,2,1}
\]
This is a plane in $\R^3$; it's easy to see that the first and last coordinates must be the same. What's not easy to see is that this is the only restriction, but our work above shows that it is. So geometrically, the span is the plane in $\R^3$ given by the above parametric equation, or by the normal equation $x = z$.
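\remark As a sanity check (the vector here is our own choice), $\vec v = [2,3,2]$ satisfies $v_1 = v_3$, and indeed $s = t = 1$ realizes it:
\[
\colvec{1,1,1} + \colvec{1,2,1} = \colvec{2,3,2}
\]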
\item The above examples allow us to state the following theorem, which is an obvious corollary of the observations we have made.
\theorem A system of linear equations with augmented matrix $(A \mid \vec{b})$ is consistent if and only if $\vec{b}$ is a linear combination of the columns of $A$.
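For instance, taking $A$ to be the matrix with columns $[1,1,1]$ and $[1,2,1]$ from the example above, and picking the right-hand side $\vec b = [1,0,1]$ (our own choice), the first and last entries of $\vec b$ agree, so the system $(A \mid \vec b)$ is consistent; correspondingly, $\vec b$ is a linear combination of the columns:
\[
2\colvec{1,1,1} - \colvec{1,2,1} = \colvec{1,0,1}
\]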
\item Let's now define two terms which generalize the notion of two vectors being parallel, or of three vectors being coplanar: linear dependence and linear independence.
Let $S = \s{\vec{v_1}, \ldots, \vec{v_m}}\subseteq \R^n$. The set $S$ is said to be \df{linearly dependent} if there are scalars $c_1, \ldots, c_m$, \emph{not all zero}, such that
\[
c_1\vec{v_1} + \cdots + c_m \vec{v_m} = \vec{0}
\]
otherwise, $S$ is said to be \df{linearly independent}.
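\example (The vectors here are our own, chosen to illustrate the definition.) The set $\s{[1,2],[2,4]}$ is linearly dependent, since
\[
2\colvec{1,2} - \colvec{2,4} = \vec 0
\]
and the coefficients $2$ and $-1$ are not both zero. By contrast, $\s{[0,1],[1,1]}$ is linearly independent: if $c_1[0,1] + c_2[1,1] = \vec 0$, the first coordinates force $c_2 = 0$, and then the second coordinates force $c_1 = 0$.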
\remark Why the requirement that the coefficients be \emph{not all zero}? If we dropped it, the condition would be trivial: taking every scalar to be $0$, any set of vectors would be linearly dependent. The assertion that not all the coefficients are zero is what makes a nontrivial statement about the vectors.
\example One can check the following:
\begin{itemize}
\item In $\R^2$, two vectors are linearly dependent if and only if they are parallel (i.e. one is a scalar multiple of the other).
\item In $\R^3$, three vectors are linearly dependent if and only if two of them are parallel (i.e. one is a scalar multiple of another) or one lies on the plane that the other two make.
\item Any set of vectors containing the zero vector is linearly dependent.
\end{itemize}
\item The following theorem is how we usually think about linear dependence:
\theorem A set of vectors $S = \s{\vec{v_1}, \vec{v_2}, \ldots, \vec{v_m}}$ is linearly dependent if and only if one of the vectors can be written as a linear combination of the rest.
\proof There are two things to prove since we have asserted an `if and only if.'
Let us assume that $S$ is linearly dependent. We want to show that one of the vectors can be written as a linear combination of the rest. Well, since $S$ is linearly dependent we know that there are $c_1, c_2, \ldots, c_m$ such that
\[
c_1 \vec{v_1} + \cdots + c_m \vec{v_m} = \vec{0}
\]
Further, we know that not all the $c_j$ are zero; let $i$ be such that $c_i \neq 0$. Then, if we subtract $c_i\vec{v_i}$ from both sides, we get:
\[
\sum_{\substack{j=1\\j\neq i}}^m c_j\vec{v_j} = -c_i \vec{v_i}
\]
Since $c_i\neq 0$, we can divide both sides by $-c_i$ to get:
\[
\sum_{\substack{j=1\\j\neq i}}^m -\frac{c_j}{c_i}\vec{v_j} = \vec{v_i}
\]
This is exactly $\vec{v_i}$ written as a linear combination of the rest, as we wanted.
Now we need to show the converse. So, let us assume that we can write one of the elements of $S$, say $\vec{v_i}$, as a linear combination of the rest. Then, for some scalars $c_j$:
\[
\sum_{\substack{j=1\\j\neq i}}^m c_j\vec{v_j} = \vec{v_i}
\]
Let $c_i := -1$; subtracting $\vec{v_i}$ from both sides we get
\[
\sum_{j=1}^m c_j\vec{v_j} = \vec{0}
\]
This shows that the set is linearly dependent as not all the constants are $0$ (in particular $c_i = -1\neq 0$). \qed
\item The question of whether a set of vectors is linearly dependent, being a question of whether some vector is a linear combination of the others, can be rephrased as a question about systems of linear equations. We call a system in which all the constant terms are $0$ a \df{homogeneous system}.
\theorem A homogeneous system either has $\vec 0$ as its unique solution, or has infinitely many solutions.
\proof Recall that any system of linear equations has either no solutions, exactly one solution, or infinitely many solutions. Clearly, $\vec 0$ is a solution of a homogeneous system. Therefore, either it is the unique solution, or there are infinitely many. \qed
\theorem Consider a homogeneous system with $m$ equations and $n$ variables where $n > m$. Then there are infinitely many solutions.
\proof Clearly, the system is consistent, since $\vec 0$ is a solution. The Rank Theorem tells us the number of free variables equals $n-\rank(A)$, where $A$ is the coefficient matrix. Clearly $\rank(A) \leq m$, since the rank is the number of leading entries, and there can't be more leading entries than there are rows. Therefore $n-\rank(A) \geq n - m > 0$. So there is at least one free variable, which means there are infinitely many solutions.\qed
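\example (A system of our own choosing.) The single equation $x + y = 0$ is a homogeneous system with $m = 1$ equation and $n = 2$ variables. Here $\rank(A) = 1$, so there is $n - \rank(A) = 1$ free variable, and the solution set is the line
\[
\set{ t\colvec{-1,1} \mid t\in\R }
\]
infinitely many solutions, as the theorem promises.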
\remark The converse is not true; as an exercise, come up with a homogeneous system with at least as many equations as unknowns that nevertheless has infinitely many solutions.
\item Because every homogeneous system has $\vec 0$ as a solution, we call that the \df{trivial solution}. Thus the question of whether a set of vectors is linearly dependent can be phrased as such:
\theorem Let $S = \s{\vec{v_1},\ldots, \vec{v_m}}\subseteq \R^n$, and let $A$ be the matrix whose columns are $\vec{v_1}, \ldots, \vec{v_m}$. Then $S$ is linearly dependent if and only if the homogeneous system represented by the augmented matrix $(A\mid\vec 0)$ has a nontrivial solution.
\proof Obvious from what we know about translating systems to and from linear combinations of vectors.\qed
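For instance (revisiting the vectors from the first example), to test whether $\s{[0,1],[1,1]}$ is linearly dependent we row reduce the corresponding homogeneous system:
\[
\begin{amatrix}{2}
0&1&0\\
1&1&0
\end{amatrix}
\longrightarrow
\begin{amatrix}{2}
1&0&0\\
0&1&0
\end{amatrix}
\]
There are no free variables, so the trivial solution is the only solution, and the set is linearly independent.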
\end{itemize}
\end{document}