This is an extremely brief review of linear algebra. It is understood that linear algebra is a prerequisite for this course. However, everyone needs a refresher or a reference for specifics from time to time.
If a more thorough treatment is needed, there are numerous linear algebra texts, many of which are open educational resources (OERs) like this text. “Understanding Linear Algebra” by David Austin is an excellent text with a focus on developing geometric intuition, and less so on formal proofs. For a more theory-oriented text, “Linear Algebra” by Jim Hefferon is an excellent choice.
Also denoted \(A=[a_{ij}]_{n\times m}\text{,}\) this is an \(n\times m\) matrix, denoting that \(A\) has \(n\) rows and \(m\) columns. We note that \(a_{ij}\) is the entry of \(A\) in row \(i\text{,}\) column \(j\text{.}\)
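For example, the \(2\times 3\) matrix
\[
A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}
\]
has two rows and three columns, with entries such as \(a_{12}=2\) and \(a_{23}=6\text{.}\)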
Given an \(n\times m\) matrix \(A\text{,}\) we define the transpose of \(A\text{,}\) denoted \(A^\top\text{,}\) as \(A^\top=[a_{ij}]_{n\times m}^\top = [a_{ji}]_{m\times n}\) or
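For example, transposing turns rows into columns:
\[
\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}^\top
= \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}\text{.}
\]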
Given two matrices of the same dimensions \(A=[a_{ij}]_{n\times m}, B=[b_{ij}]_{n\times m}\text{,}\) we define their sum entrywise, that is: \(A+B=[a_{ij}+b_{ij}]_{n\times m}\text{.}\)
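For example,
\[
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
+ \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}
= \begin{bmatrix} 1+5 & 2+6 \\ 3+7 & 4+8 \end{bmatrix}
= \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix}\text{.}
\]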
Given matrices \(A=[a_{ij}]_{n\times m}, B=[b_{ij}]_{m\times \ell}\text{,}\) we define their product to be \(AB =[c_{ij}]_{n\times \ell}= [\sum_{k=1}^m a_{ik}b_{kj}]_{n\times \ell}\text{.}\)
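For example, each entry of the product is the sum of products of a row of the first matrix with a column of the second:
\[
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
\begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}
= \begin{bmatrix} 1\cdot 5 + 2\cdot 7 & 1\cdot 6 + 2\cdot 8 \\ 3\cdot 5 + 4\cdot 7 & 3\cdot 6 + 4\cdot 8 \end{bmatrix}
= \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}\text{.}
\]
Note that the number of columns of the first matrix must equal the number of rows of the second.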
Note that this dry and technical presentation fails to capture even an iota of the beautiful and deep theory this operation is meant to encapsulate. Nor is it meant to. Please see the aforementioned texts for a deeper and richer discussion.
\(A=[a_{ij}]_{n\times n}\) is a square matrix. The entries where \(i=j\) form the diagonal of \(A\text{.}\) If \(a_{ij}=0\) whenever \(i\neq j\text{,}\) then \(A\) is a diagonal matrix.
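For example,
\[
D = \begin{bmatrix} 2 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 5 \end{bmatrix}
\]
is a diagonal matrix with diagonal entries \(2, -1, 5\text{.}\)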
For \(A\) an \(n\times n\) matrix, we say \(A\) is invertible if there exists an \(n\times n\) matrix \(B\) such that \(AB=BA=I_n\text{.}\) We usually call \(B\) the inverse of \(A\) and denote it \(A^{-1}\text{.}\)
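For example, if
\[
A = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix},
\qquad
A^{-1} = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix},
\]
then one may check directly that
\[
AA^{-1} = \begin{bmatrix} 2\cdot 1 + 1\cdot(-1) & 2\cdot(-1) + 1\cdot 2 \\ 1\cdot 1 + 1\cdot(-1) & 1\cdot(-1) + 1\cdot 2 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I_2,
\]
and similarly \(A^{-1}A=I_2\text{.}\) Not every square matrix is invertible; for instance, any matrix with a row of zeros has no inverse.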
Let a set \(V\) be equipped with an operation \(+\) (vector addition) and a scalar multiplication. Let \(\x, \y, \z \in V\) and \(a,b\) be scalars. Then \(V\) is a vector space if it satisfies the following axioms:
Identity element of vector addition: there exists a vector \(\mathbf{0}\) called the zero vector such that \(\mathbf{0} + \x = \x + \mathbf{0} = \x\text{.}\)
Inverse elements of vector addition: for each vector \(\x\text{,}\) there exists a vector \(-\x\) called the additive inverse of \(\x\) such that \(-\x + \x = \x + (-\x) = \mathbf{0}\text{.}\)
There are a wide variety of interesting vector spaces spanning all subfields of math. However, for our purposes, we will stick to boring ol’ \(\mathbb{R}^n\text{.}\)
Let \(V\) be a vector space, then \(W\subseteq V\) is a subspace of \(V\) if \(W\) is nonempty, and if for any \(\p, \q\in W\) and scalars \(a,b\text{,}\) we have that \(a\p+b\q\in W\text{.}\)
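For example, the line \(W = \{(t, 2t) : t\in\mathbb{R}\}\) is a subspace of \(\mathbb{R}^2\text{:}\) it is nonempty since \((0,0)\in W\text{,}\) and for any \((t_1, 2t_1), (t_2, 2t_2)\in W\) and scalars \(a,b\text{,}\)
\[
a(t_1, 2t_1) + b(t_2, 2t_2) = \big(at_1+bt_2,\; 2(at_1+bt_2)\big) \in W\text{.}
\]
By contrast, a line that does not pass through the origin is not a subspace, since it fails to contain the zero vector.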