Section A.1 Linear Algebra Review
This is an extremely brief review of linear algebra. It is understood that linear algebra is a prerequisite for this course. However, everyone needs a refresher or a reference for specifics from time to time.
If a more thorough treatment is needed, there are numerous linear algebra texts, many of which are OERs like this one.
“Understanding Linear Algebra” by David Austin is an excellent text with a focus on developing geometric intuition rather than formal proofs. For a more theory-oriented text,
“Linear Algebra” by Jim Hefferon is an excellent choice.
Definition A.1.1.
A real-valued matrix is a rectangular array of the form
\begin{equation*}
A=\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1m}\\
a_{21} & a_{22} & \cdots & a_{2m}\\
\vdots & \vdots & \ddots & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nm}\\
\end{bmatrix}.
\end{equation*}
We also write \(A=[a_{ij}]_{n\times m}\text{,}\) an \(n\times m\) matrix, meaning that \(A\) has \(n\) rows and \(m\) columns. Here \(a_{ij}\) denotes the entry of \(A\) in row \(i\text{,}\) column \(j\text{.}\)
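As a concrete aside (not part of the definition), matrices are often represented on a computer as two-dimensional arrays. Here is a minimal sketch using Python's NumPy library; the only wrinkle is that NumPy indexes rows and columns from \(0\) rather than \(1\text{.}\)
\begin{verbatim}
import numpy as np

# A 2 x 3 matrix: 2 rows, 3 columns
A = np.array([[1, -2, 3],
              [0,  5, 6]])

print(A.shape)   # (2, 3), i.e. n = 2 rows and m = 3 columns
print(A[0, 2])   # the entry a_{13} = 3 (NumPy indices start at 0)
\end{verbatim}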
Definition A.1.2.
An \(n\times 1\) matrix is also referred to as a vector:
\begin{equation*}
\x = \begin{bmatrix}x_1 \\ x_2 \\ \vdots \\ x_n\end{bmatrix}.
\end{equation*}
This is the convention we use, with vectors being column matrices. Some texts default to row vectors.
Definition A.1.3.
Given an \(n\times m\) matrix \(A\text{,}\) we define the transpose of \(A\text{,}\) denoted \(A^\top\text{,}\) as \(A^\top=[a_{ij}]_{n\times m}^\top = [a_{ji}]_{m\times n}\text{,}\) or
\begin{equation*}
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1m}\\
a_{21} & a_{22} & \cdots & a_{2m}\\
\vdots & \vdots & \ddots & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nm}\\
\end{bmatrix}^\top =
\begin{bmatrix}
a_{11} & a_{21} & \cdots & a_{n1}\\
a_{12} & a_{22} & \cdots & a_{n2}\\
\vdots & \vdots & \ddots & \vdots\\
a_{1m} & a_{2m} & \cdots & a_{nm}\\
\end{bmatrix}.
\end{equation*}
Example A.1.4.
\begin{equation*}
\begin{bmatrix}
1 & -2 & 3\\
0 & 5 & 6
\end{bmatrix}^\top
=
\begin{bmatrix}
1 & 0\\
-2 & 5\\
3 & 6
\end{bmatrix}.
\end{equation*}
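For readers following along on a computer, this transpose can be spot-checked with NumPy (a sketch, assuming NumPy is available); the attribute .T returns the transpose.
\begin{verbatim}
import numpy as np

A = np.array([[1, -2, 3],
              [0,  5, 6]])
print(A.T)
# [[ 1  0]
#  [-2  5]
#  [ 3  6]]
\end{verbatim}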
Definition A.1.5.
Given two matrices of the same dimensions \(A=[a_{ij}]_{n\times m}, B=[b_{ij}]_{n\times m}\text{,}\) we define their sum entrywise, that is: \(A+B=[a_{ij}+b_{ij}]_{n\times m}\text{.}\)
Example A.1.6.
\begin{align*}
\amp
\begin{bmatrix}
1 & -2 & 3\\
0 & 5 & 6
\end{bmatrix}
+
\begin{bmatrix}
4 & 7 & 0\\
-8 & 2 & -4
\end{bmatrix}\\
\amp =
\begin{bmatrix}
1+4 & -2+7 & 3+0\\
0+(-8) & 5+2 & 6+(-4)
\end{bmatrix}\\
\amp =
\begin{bmatrix}
5 & 5 & 3\\
-8 & 7 & 2
\end{bmatrix}.
\end{align*}
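The same sum can be checked in NumPy (again just a sketch), where + on arrays of equal shape is entrywise:
\begin{verbatim}
import numpy as np

A = np.array([[1, -2, 3], [0, 5, 6]])
B = np.array([[4, 7, 0], [-8, 2, -4]])
print(A + B)
# [[ 5  5  3]
#  [-8  7  2]]
\end{verbatim}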
Definition A.1.7.
Given matrices \(A=[a_{ij}]_{n\times m}, B=[b_{ij}]_{m\times \ell}\text{,}\) we define their product to be \(AB =[c_{ij}]_{n\times \ell}= [\sum_{k=1}^m a_{ik}b_{kj}]_{n\times \ell}\text{.}\)
Example A.1.8.
\begin{align*}
\amp
\begin{bmatrix}
4 & 7 & 0\\
-8 & 2 & -4
\end{bmatrix}
\begin{bmatrix}
1 & 0\\
-2 & 5\\
3 & 6
\end{bmatrix}\\
\amp =
\begin{bmatrix}
4(1)+7(-2)+0(3) & 4(0)+7(5)+0(6)\\
-8(1)+2(-2)+(-4)(3) & -8(0)+2(5)+(-4)(6)
\end{bmatrix}\\
\amp =
\begin{bmatrix}
-10 & 35\\
-24 & -14
\end{bmatrix}.
\end{align*}
\begin{align*}
\amp
\begin{bmatrix}
1 & 0\\
-2 & 5\\
3 & 6
\end{bmatrix}
\begin{bmatrix}
4 & 7 & 0\\
-8 & 2 & -4
\end{bmatrix} \\
\amp =
\begin{bmatrix}
1(4)+0(-8) & 1(7)+0(2) & 1(0)+0(-4)\\
(-2)(4)+5(-8) & (-2)(7)+5(2) & (-2)(0)+5(-4)\\
3(4)+6(-8) & 3(7)+6(2) & 3(0)+6(-4)
\end{bmatrix}\\
\amp =
\begin{bmatrix}
4 & 7 & 0 \\
-48 & -4 & -20 \\
-36 & 33 & -24
\end{bmatrix}.
\end{align*}
Note that this dry and technical presentation fails to capture even an iota of the beautiful and deep theory this operation is meant to encapsulate. Nor is it meant to. Please see the aforementioned texts for a deeper and richer discussion.
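For readers who want to check such products by machine, here is a NumPy sketch of both computations above; the @ operator performs matrix multiplication, and note that the two products even have different shapes, so in general \(AB\neq BA\text{.}\)
\begin{verbatim}
import numpy as np

A = np.array([[4, 7, 0], [-8, 2, -4]])   # 2 x 3
B = np.array([[1, 0], [-2, 5], [3, 6]])  # 3 x 2

print(A @ B)   # 2 x 2 product
# [[-10  35]
#  [-24 -14]]

print(B @ A)   # 3 x 3 product
# [[  4   7   0]
#  [-48  -4 -20]
#  [-36  33 -24]]
\end{verbatim}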
Definition A.1.9.
Given a matrix \(A=[a_{ij}]_{n\times m}\) and real number \(c\text{,}\) we define the scalar product to be \(cA = [ca_{ij}]_{n\times m}\text{.}\)
Example A.1.10.
\begin{align*}
3\begin{bmatrix}
1 & 0\\
-2 & 5\\
3 & 6
\end{bmatrix}
\amp =
\begin{bmatrix}
3(1) & 3(0)\\
3(-2) & 3(5)\\
3(3) & 3(6)
\end{bmatrix}\\
\amp =
\begin{bmatrix}
3 & 0\\
-6 & 15\\
9 & 18
\end{bmatrix}.
\end{align*}
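Scalar multiplication can likewise be spot-checked in NumPy (a sketch), using the ordinary * operator:
\begin{verbatim}
import numpy as np

B = np.array([[1, 0], [-2, 5], [3, 6]])
print(3 * B)
# [[ 3  0]
#  [-6 15]
#  [ 9 18]]
\end{verbatim}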
Definition A.1.11.
We denote the zero matrix as \(\mathbf{0}_{n\times m}:=[0]_{n\times m}\) or \(\mathbf{0}\) if the dimensions are clear from context.
Theorem A.1.12.
For matrices \(A, B, C\) and scalars \(c,d\text{,}\) assuming appropriate dimensions, the following hold; a numerical spot-check of a few of these identities follows the list.
\(A+B=B+A\text{.}\)
\((A+B)+C=A+(B+C)\text{.}\)
\(c(A+B)=cA+cB\text{.}\)
\((c+d)A = cA + dA\text{.}\)
\((cd)A=c(dA)\text{.}\)
\(1A=A\text{.}\)
\(0A = \mathbf{0}\text{.}\)
\(A+\mathbf{0} = \mathbf{0}+A = A\text{.}\)
\(A+(-A)=(-A)+A=\mathbf{0}\text{.}\)
\((AB)C=A(BC)\text{.}\)
\(A(B+C)=AB+AC\text{.}\)
\((A+B)C=AC+BC\text{.}\)
\(c(AB)=A(cB)=(cA)B\text{.}\)
\(\mathbf{0}A=\mathbf{0}\) and \(A\mathbf{0}=\mathbf{0}\text{.}\)
\((A^\top)^\top=A\text{.}\)
\((A+B)^\top = A^\top +B^\top\text{.}\)
\((cA)^\top =c(A^\top)\text{.}\)
\((AB)^\top = B^\top A^\top\text{.}\)
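These identities can be proven directly from the definitions. As an illustration only (a NumPy sketch, not a proof), one can spot-check a few of them numerically on random matrices:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
C = rng.standard_normal((4, 2))

# A(B + C) = AB + AC
print(np.allclose(A @ (B + C), A @ B + A @ C))   # True

# (AB)^T = B^T A^T
print(np.allclose((A @ B).T, B.T @ A.T))         # True
\end{verbatim}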
Definition A.1.13.
A matrix \(A=[a_{ij}]_{n\times n}\) with the same number of rows and columns is a square matrix. The entries where \(i=j\) form the diagonal of \(A\text{.}\) If \(a_{ij}=0\) whenever \(i\neq j\text{,}\) then \(A\) is a diagonal matrix.
Definition A.1.14.
The identity matrix \(I_n\) is the \(n\times n\) diagonal matrix whose diagonal entries are all \(1\text{.}\)
Theorem A.1.15.
For \(A\) an \(n\times n\) matrix, \(A I_n=I_nA=A\text{.}\)
Definition A.1.16.
For \(A\) an \(n\times n\) matrix, we say \(A\) is invertible if there exists an \(n\times n\) matrix \(B\) such that \(AB=BA=I_n\text{.}\) We usually call \(B\) the inverse of \(A\) and denote it \(A^{-1}\text{.}\)
Theorem A.1.17.
If \(A\) is an invertible square matrix, then \(A^{-1}\) is unique.
Example A.1.18.
If \(A=\begin{bmatrix} 3 & -1 \\ -2 & 4\end{bmatrix}\text{,}\) then \(A^{-1} = \begin{bmatrix} \frac{2}{5} & \frac{1}{10} \\ \frac{1}{5} & \frac{3}{10}\end{bmatrix}\text{,}\) as one can check by computing \(AA^{-1}\) and \(A^{-1}A\text{.}\)
Note that not every matrix is invertible. For example, \(\begin{bmatrix} 3 & -1 \\ -6 & 2\end{bmatrix}\) is not invertible, since its second row is \(-2\) times its first.
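A quick numerical check (a NumPy sketch) confirms both claims: the first matrix times its stated inverse gives the identity, while the second matrix has determinant \(0\) and so cannot have an inverse.
\begin{verbatim}
import numpy as np

A = np.array([[3, -1], [-2, 4]])
A_inv = np.array([[2/5, 1/10], [1/5, 3/10]])
print(A @ A_inv)          # the 2 x 2 identity matrix (up to rounding)

B = np.array([[3, -1], [-6, 2]])
print(np.linalg.det(B))   # 0 (up to rounding), so B has no inverse
\end{verbatim}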
Definition A.1.19.
Let a set \(V\) be equipped with an addition operation \(+\) and a scalar product. Let \(\x, \y, \z \in V\) and \(a,b\) be scalars. Then \(V\) is a vector space if it satisfies the following axioms:
Associativity of vector addition: \((\x + \y)+\z=\x+(\y+\z)\text{.}\)
Commutativity of vector addition: \(\x + \y = \y+\x\text{.}\)
Identity element of vector addition: there exists a vector \(\mathbf{0}\) called the zero vector such that \(\mathbf{0} + \x = \x + \mathbf{0} = \x\text{.}\)
Inverse elements of vector addition: for each vector \(\x\text{,}\) there exists a vector \(-\x\) called the additive inverse of \(\x\) such that \(-\x + \x = \x + (-\x) = \mathbf{0}\text{.}\)
Compatibility of scalar multiplication with real multiplication: \((ab)\x = a(b\x)\text{.}\)
Identity element of scalar multiplication: \(1\x = \x\text{.}\)
Distributivity of scalar multiplication with respect to vector addition: \(a(\x+\y)=a\x+a\y\text{.}\)
Distributivity of scalar multiplication with respect to field addition: \((a+b)\x=a\x+b\x\text{.}\)
There are a wide variety of interesting vector spaces spanning across all subfields of math. However, for our purposes, we will stick to boring ol’ \(\mathbb{R}^n\text{.}\)
Definition A.1.20.
Let \(V\) be a vector space, then \(W\subseteq V\) is a subspace of \(V\) if it is also a vector space, with the same operations.
Theorem A.1.21.
Let \(V\) be a vector space. Then \(W\subseteq V\) is a subspace of \(V\) if \(W\) is non-empty and, for any \(\p, \q\in W\) and scalars \(a,b\text{,}\) we have \(a\p+b\q\in W\text{.}\)
Example A.1.22.
In the vector space \(\mathbb{R}^4\text{,}\) the set
\begin{equation*}
W = \left\{ \begin{bmatrix}
u \\ 0 \\ w \\ 0 \end{bmatrix}: u,w\in \mathbb{R}\right\}
\end{equation*}
is a subspace of \(\mathbb{R}^4\text{,}\) since any linear combination \(a\p+b\q\) of vectors in \(W\) again has zeros in the second and fourth entries.
Definition A.1.23.
Let \(V\) be a vector space and \(S\subseteq V\text{.}\) Then a linear combination of vectors in \(S\) is a sum:
\begin{equation*}
\sum a_i\vs_i, \text{ where } a_i\in \mathbb{R}, \vs_i\in S.
\end{equation*}
Definition A.1.24.
Let \(V\) be a vector space and \(S\subseteq V\text{.}\) Then the span of \(S\) is defined as
\begin{equation*}
\mathrm{span}(S)=\left\{ \sum a_i\vs_i: a_i\in \mathbb{R}, \vs_i\in S \right\}.
\end{equation*}
If \(\mathrm{span}(S)=V\) we say that \(S\) spans \(V\text{.}\)
Example A.1.25.
The set
\begin{equation*}
S = \left\{ \begin{bmatrix}
1 \\ 2 \\ 3 \end{bmatrix}, \begin{bmatrix}
2 \\ 5 \\ -4 \end{bmatrix},
\begin{bmatrix}
3 \\ 1 \\ 2 \end{bmatrix},
\begin{bmatrix}
2 \\ -3 \\ 19 \end{bmatrix}
\right\}
\end{equation*}
spans \(\mathbb{R}^3\text{.}\)
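One way to verify this claim (a NumPy sketch, not part of the text) is to form the matrix whose columns are the vectors of \(S\) and check that its rank is \(3\text{,}\) so the columns span all of \(\mathbb{R}^3\text{:}\)
\begin{verbatim}
import numpy as np

# Columns are the four vectors of S
M = np.array([[1,  2, 3,  2],
              [2,  5, 1, -3],
              [3, -4, 2, 19]])
print(np.linalg.matrix_rank(M))   # 3, so the columns span R^3
\end{verbatim}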
Definition A.1.26.
Let \(V\) be a vector space and \(S\subseteq V\text{.}\) Then \(S\) is linearly independent if the equation
\begin{equation*}
\sum a_i\vs_i=\mathbf{0}, \text{ where } \vs_i\in S,
\end{equation*}
holds if and only if each \(a_i=0\text{.}\)
Otherwise, \(S\) is dependent.
Example A.1.27.
Since
\begin{equation*}
3\begin{bmatrix}
1 \\ 2 \\ 3 \end{bmatrix}+(-2)\begin{bmatrix}
2 \\ 5 \\ -4 \end{bmatrix} +(1)
\begin{bmatrix}
3 \\ 1 \\ 2 \end{bmatrix}+(-1)
\begin{bmatrix}
2 \\ -3 \\ 19 \end{bmatrix}=
\begin{bmatrix}
0 \\ 0 \\ 0 \end{bmatrix}
\end{equation*}
the set \(\left\{ \begin{bmatrix}
1 \\ 2 \\ 3 \end{bmatrix}, \begin{bmatrix}
2 \\ 5 \\ -4 \end{bmatrix},
\begin{bmatrix}
3 \\ 1 \\ 2 \end{bmatrix},
\begin{bmatrix}
2 \\ -3 \\ 19 \end{bmatrix}
\right\}\) is dependent.
However, the set \(\left\{ \begin{bmatrix}
1 \\ 2 \\ 3 \end{bmatrix}, \begin{bmatrix}
2 \\ 5 \\ -4 \end{bmatrix},
\begin{bmatrix}
3 \\ 1 \\ 2 \end{bmatrix}
\right\}\) is linearly independent.
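Both claims can be spot-checked numerically (a NumPy sketch): the matrix of all four vectors has rank \(3\text{,}\) less than its number of columns, so those vectors are dependent, while the matrix of the first three vectors has rank \(3\text{,}\) equal to its number of columns, so they are independent.
\begin{verbatim}
import numpy as np

M4 = np.array([[1,  2, 3,  2],
               [2,  5, 1, -3],
               [3, -4, 2, 19]])
M3 = M4[:, :3]   # just the first three vectors

print(np.linalg.matrix_rank(M4))   # 3 < 4 columns: dependent
print(np.linalg.matrix_rank(M3))   # 3 = 3 columns: independent
\end{verbatim}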
Theorem A.1.28.
Let \(V\) be a vector space.
Any superset of a spanning set of \(V\) is also a spanning set.
Any subset of a linearly independent set of vectors in \(V\) is also linearly independent.
Definition A.1.29.
Let \(V\) be a vector space and \(B\subseteq V\text{.}\) Then \(B\) is a basis of \(V\) if, for any \(\x\in V\text{,}\) the equation
\begin{equation*}
\sum a_i\vb_i=\x, \text{ where } \vb_i\in B,
\end{equation*}
always has a unique solution in the coefficients \(a_i\text{.}\)
Theorem A.1.30.
Let \(V\) be a vector space. A spanning, linearly independent subset of \(V\) is a basis of \(V\text{.}\)
Example A.1.31.
\begin{equation*}
\left\{ \begin{bmatrix}
1 \\ 2 \\ 3 \end{bmatrix}, \begin{bmatrix}
2 \\ 5 \\ -4 \end{bmatrix},
\begin{bmatrix}
3 \\ 1 \\ 2 \end{bmatrix}
\right\}
\end{equation*}
is a basis for \(\mathbb{R}^3\text{.}\)
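Finally, a numerical check of this claim (a NumPy sketch): a square matrix whose columns form a basis of \(\mathbb{R}^3\) has nonzero determinant, and the unique coefficients expressing any given \(\x\) in this basis can be found by solving a linear system.
\begin{verbatim}
import numpy as np

B = np.array([[1,  2, 3],
              [2,  5, 1],
              [3, -4, 2]])
print(np.linalg.det(B))         # about -57, nonzero, so the columns form a basis

x = np.array([1.0, 2.0, 3.0])   # an arbitrary vector in R^3
a = np.linalg.solve(B, x)       # the unique coefficients with B a = x
print(np.allclose(B @ a, x))    # True
\end{verbatim}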