\documentclass{article}
\usepackage{enumitem,amsmath,amssymb}
\usepackage{graphicx}
\usepackage[left=2.5cm,right=2.5cm, top=2cm, bottom=2cm,nohead,nofoot]{geometry}
\newenvironment{amatrix}[1]{%
\left(\begin{array}{@{}*{#1}{r}|r@{}}
}{%
\end{array}\right)
}
\def\vec#1{\mathbf{#1}}
\def\R{\mathbb{R}}
\def\df#1{\textbf{#1}}
\def\theorem{\par\noindent{\bf \underline{Theorem}} }
\def\proof{\par\noindent{\sl Proof.} }
\def\example{\par\noindent{\bf \underline{Example}} }
\def\remark{\par\noindent{\bf \underline{Remark}} }
\title{Summary of Day 3}
\author{William Gunther}
\date{May 21, 2014}
\begin{document}
\maketitle
\section{Objectives}
\begin{itemize}
\item Use Gauss-Jordan elimination to solve a system of linear equations.
\item Restate the problem of solving a system of linear equations as a problem about vector equations.
\item Explore the geometric and algebraic properties of vectors in $\R^n$ (particularly $\R^2$ and $\R^3$, since those are easy to visualize).
\end{itemize}
\section{Summary}
\begin{itemize}
\item The reduced row echelon form of a matrix is a strengthening of the row echelon form of a matrix. It has the following advantage:
\theorem Every matrix is row equivalent to a unique matrix in reduced row echelon form.
\item With regard to solving systems of linear equations, reduced row echelon form offers another advantage: a system whose augmented matrix is in rref can be solved without the final step of back substitution. The resulting algorithm is called \df{Gauss-Jordan Elimination} and is very similar to Gaussian Elimination. These are the steps:
\begin{enumerate}
\item Express the system of equations as an augmented matrix.
\item Use elementary row operations to find a row equivalent matrix in reduced row echelon form.
\item Solve for the variables in the columns with leading entries in terms of the free variables.
\end{enumerate}
\example Consider the following system of equations:
\begin{alignat*}{6}
-x_1&+&3x_2&+&2x_3&+&4x_4&=&2\\
2x_1&+&6x_2&+&x_3&-&2x_4&=&-1\\
x_1&-&3x_2&+&4x_3&+&8x_4&=&-4
\end{alignat*}
\begin{alignat*}{4}
&&\begin{amatrix}{4}
-1 & 3& 2& 4 & 2 \\
2 & 6 & 1 & -2 & -1\\
1 & -3 & 4& 8 & -4
\end{amatrix}
&\xrightarrow[R_2 + 2R_1]{R_3 + R_1}&
\begin{amatrix}{4}
-1 & 3& 2& 4 & 2 \\
0 & 12 & 5 & 6 & 3\\
0 & 0 & 6& 12 & -2
\end{amatrix}
\\
&&&\xrightarrow[-R_1\qquad 1/12R_2]{R_1 - 1/4R_2}&
\begin{amatrix}{4}
1 & 0& -3/4 & -5/2 & -5/4 \\
0 & 1 & 5/12 & 1/2 & 1/4\\
0 & 0 & 6& 12 & -2
\end{amatrix}
\\
&&&\xrightarrow[R_2 - 5/72R_3\qquad 1/6R_3]{R_1 + 1/8R_3}&
\begin{amatrix}{4}
1 & 0& 0 & -1 & -3/2 \\
0 & 1 & 0 & -1/3 & 7/18\\
0 & 0 & 1& 2 & -1/3
\end{amatrix}
\end{alignat*}
This is in reduced row echelon form. Every column without a leading entry still corresponds to a free variable (after all, the matrix is in row echelon form as well), so we set $t := x_4$. But there is no back substitution: solving for the leading variables already expresses each of them in terms of the free variables:
\begin{align*}
x_1 &= -3/2 + t\\
x_2 &= 7/18 + 1/3 t\\
x_3 &= -1/3 - 2t\\
x_4 &= t
\end{align*}
or, in vector notation:
\[
\begin{pmatrix}
x_1 \\ x_2 \\ x_3 \\ x_4
\end{pmatrix}
=
\begin{pmatrix}
-3/2 + t\\ 7/18 + 1/3 t\\-1/3 - 2t\\ t
\end{pmatrix}
=
\begin{pmatrix}
-3/2 \\ 7/18 \\-1/3 \\ 0
\end{pmatrix}
+
t\begin{pmatrix}
1\\ 1/3\\-2\\ 1
\end{pmatrix}
\]
\remark When you're asked to solve a system, it doesn't really matter which algorithm you use. Personally, I prefer Gauss-Jordan if I want to find the solutions. If you just want to analyze the solutions (whether there are any; if so, whether there are infinitely many; and if so, how many free variables there are), then doing Gaussian Elimination to get the matrix in row echelon form is quicker.
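The hand computation in the example above can be verified with a computer algebra system. Here is a small sketch using SymPy's \texttt{Matrix.rref} (the choice of SymPy is mine, not part of these notes):

```python
from sympy import Matrix, Rational

# Augmented matrix of the example system; SymPy's rref() serves as an
# independent check on the row reduction done by hand above.
A = Matrix([
    [-1, 3, 2, 4, 2],
    [2, 6, 1, -2, -1],
    [1, -3, 4, 8, -4],
])

R, pivots = A.rref()

# The reduced row echelon form computed by hand:
expected = Matrix([
    [1, 0, 0, -1, Rational(-3, 2)],
    [0, 1, 0, Rational(-1, 3), Rational(7, 18)],
    [0, 0, 1, 2, Rational(-1, 3)],
])
print(R == expected)  # True
print(pivots)         # (0, 1, 2): leading entries in columns 1, 2, 3
```

Since SymPy works with exact rationals, the output matches the fractions above exactly rather than up to rounding.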
\item We will now rephrase the task of solving a system of linear equations as a vector problem. But first we need to know a bit about vectors and vector operations.
\item In $\R^2$, vectors can be represented on the plane as directed line segments, and in $\R^3$ they can be represented similarly in $3$-space. By convention, we usually draw a vector with one end at the origin, pointing out to the point represented by the $n$-tuple, but we view all directed line segments with the same length and direction as the same vector, regardless of starting and ending points. When we draw a vector starting at the origin, we say it is in \df{standard position}.
\item There is a special vector which we denote $\vec{0}$ (or on the chalkboard $\bar{0}$) which is called the \df{zero vector}. It points in no direction, has no length, and has other algebraic properties which we shall soon discuss.
\item Thinking of applications of vectors in other fields: in physics, vectors are used to represent various physical quantities which are not easily expressed as single real numbers (such as force, velocity, and displacement).
\item Algebraically, we like to think of a vector simply as an $n$-tuple of real numbers. This makes the following definitions pretty easy to swallow:
\item We define an operation called \df{vector addition} between vectors $\vec{v}$ and $\vec{w}$ in $\R^n$. This operation is defined by {\sl componentwise addition}; i.e., if $\vec{v} = [v_1, \ldots, v_n]$ and $\vec{w} = [w_1, \ldots, w_n]$, then:
\[
\vec{v} + \vec{w} = [v_1 + w_1, \ldots, v_n + w_n ]
\]
Geometrically, this operation corresponds to the \df{parallelogram rule}: place the two vectors tail to tail, complete the parallelogram they determine, and draw the diagonal from the common tail; this diagonal is the sum of the two vectors.
\remark For a hint on why this is useful, think of vectors as displacement.
\item We define an operation between real numbers (which in this context we call \df{scalars}) and vectors in $\R^n$, called \df{scalar multiplication}. This operation is defined by {\sl componentwise multiplication}; i.e., if $\vec{v} = [v_1, \ldots, v_n]$ and $c\in \R$, then:
\[
c\vec{v} = [cv_1, \ldots, cv_n]
\]
\remark $\R^n$ is what is called a \df{vector space}; we'll talk more about this in the future. $\R$ is an algebraic structure called a \df{field}. Every vector space comes equipped with a field, which is where the scalars live. In this context, you might hear $\R^n$ called a vector space \df{over} the field $\R$.
Geometrically, scalar multiplication stretches or shrinks the vector (and reverses its direction when the scalar is negative).
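The two componentwise definitions translate directly into code. A minimal sketch in Python (the function names are mine, chosen for illustration):

```python
# Componentwise vector addition and scalar multiplication in R^n,
# written out directly from the definitions above.
def vec_add(v, w):
    # [v1 + w1, ..., vn + wn]
    return [vi + wi for vi, wi in zip(v, w)]

def scalar_mul(c, v):
    # [c*v1, ..., c*vn]
    return [c * vi for vi in v]

v = [1, 2, 3]
w = [4, 5, 6]
print(vec_add(v, w))     # [5, 7, 9]
print(scalar_mul(2, v))  # [2, 4, 6]
```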
\item Let's now investigate some properties of these two operations on vectors and how they interact with each other. For these statements, let $\vec{u}, \vec{v}, \vec{w}\in\R^n$ be vectors and let $c,d\in \R$ be scalars; note that $0$ and $\vec{0}$ are different things.
\theorem
\begin{enumerate}
\item $\vec{u} + \vec{v} = \vec{v} + \vec{u}$
\item $(\vec{u}+\vec{v}) + \vec{w} = \vec{u} + (\vec{v} + \vec{w})$
\item $\vec{u} + \vec{0} = \vec{u}$
\item $\vec{u}+(-1)\vec{u} = \vec{0}$
\item $c(\vec{u} + \vec{v}) = c\vec{u} + c\vec{v}$
\item $(c+d)\vec{u} = c\vec{u} + d\vec{u}$
\item $c(d\vec{u}) = (cd)\vec{u}$
\item $1\vec{u} = \vec{u}$
\end{enumerate}
\proof We'll just prove the 6th one. Let $\vec u = [u_1, \ldots, u_n]$. Then:
\begin{align*}
(c+d)\vec u &= [(c+d)u_1, \ldots, (c+d)u_n]\\
&= [cu_1 + du_1, \ldots, cu_n + du_n]\\
&= [cu_1, \ldots, cu_n] + [du_1, \ldots, du_n]\\
&= c\vec u + d\vec u
\end{align*}
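The componentwise computation in this proof can also be spot-checked numerically; a small sketch (the particular vector and scalars are arbitrary choices of mine):

```python
# Numeric spot-check of property 6, (c + d)u = cu + du, computed
# componentwise exactly as in the proof above.
u = [1.0, -2.0, 3.0]
c, d = 2.0, 5.0

lhs = [(c + d) * ui for ui in u]     # (c + d)u
rhs = [c * ui + d * ui for ui in u]  # cu + du
print(lhs == rhs)  # True
```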
\item A vector $\vec{v}$ is a \df{linear combination} of vectors $\vec{v_1}, \ldots, \vec{v_m}$ if there are scalars $c_1, \ldots, c_m$ such that:
\[
c_1 \vec{v_1} + \cdots + c_m \vec{v_m} = \vec{v}
\]
\example $[1,2]$ is a linear combination of $[1, 3]$ and $[1,1]$:
\[
1/2[1,3] + 1/2[1,1] = [1,2]
\]
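Finding such scalars amounts to solving a small linear system whose columns are the given vectors, which a numerical solver handles directly. A sketch in Python (assuming NumPy is available):

```python
import numpy as np

# Find scalars c1, c2 with c1*[1,3] + c2*[1,1] = [1,2]: solve the 2x2
# system whose columns are the two given vectors.
A = np.array([[1.0, 1.0],
              [3.0, 1.0]])
b = np.array([1.0, 2.0])

c = np.linalg.solve(A, b)
print(c)  # [0.5 0.5], i.e. c1 = c2 = 1/2 as in the example
```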
\item We can now rephrase the question of finding solutions to a system of equations to finding out if a vector is a linear combination of some other vectors.
Consider the system:
\begin{alignat*}{3}
a_{11}x_1&+&\cdots&+&a_{1n}x_n = c_1\\
&&\vdots&&\\
a_{m1}x_1&+&\cdots&+&a_{mn}x_n = c_m\\
\end{alignat*}
Any solution to this system will also satisfy (and conversely):
\[
x_1 \begin{pmatrix} a_{11}\\\vdots\\a_{m1}\end{pmatrix}
+
\cdots
+
x_n \begin{pmatrix} a_{1n}\\\vdots\\a_{mn}\end{pmatrix}
=
\begin{pmatrix} c_{1}\\\vdots\\c_{m}\end{pmatrix}
\]
Therefore, determining whether a system has a solution (and finding its solutions) is the same as determining whether a vector is a linear combination of other vectors (and writing it as such).
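This correspondence can be checked numerically on the example solved earlier, using the columns of its augmented matrix. A sketch in Python (assuming NumPy; the value of the free variable is an arbitrary choice):

```python
import numpy as np

# Substituting a solution into x1*a1 + x2*a2 + x3*a3 + x4*a4, where the
# ai are the columns of the coefficient matrix from the earlier example,
# should reproduce the right-hand side vector.
cols = [np.array([-1.0, 2.0, 1.0]),
        np.array([3.0, 6.0, -3.0]),
        np.array([2.0, 1.0, 4.0]),
        np.array([4.0, -2.0, 8.0])]
rhs = np.array([2.0, -1.0, -4.0])

t = 5.0  # an arbitrary value of the free variable
x = [-3/2 + t, 7/18 + t/3, -1/3 - 2*t, t]
combo = sum(xi * ai for xi, ai in zip(x, cols))
print(np.allclose(combo, rhs))  # True
```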
\end{itemize}
\end{document}