The midterm will take place in class on Thursday, February 28, at 8:30, in the usual room.
The structure of this document will follow the organisation of the textbook, not of the lectures (though the two should be similar).
Chapter 1: Vector spaces
1.1 Complex numbers
Nothing new
1.2 Definition of a vector space
A set $V$ along with the operations of addition and scalar multiplication such that the following properties hold:
- Commutativity of addition
- Associativity of addition and scalar multiplication
- Additive identity
- Additive inverse
- Multiplicative identity (scalar)
- Distributivity
1.3 Properties of vector spaces
- The additive identity is unique
- Additive inverses are unique for each element
- $0v = 0$ for $v \in V$
- $a0 = 0$ for $a \in \mathbb F$
- $(-1)v = -v$ for $v \in V$
1.4 Subspaces
A subset $U$ of $V$ that:
- Contains the zero vector $0 \in V$
- Is closed under addition
- Is closed under scalar multiplication
1.5 Sums and direct sums
1.5.1 Sums
$U_1 + \ldots + U_m$ is the set of all elements $u$ such that $u = u_1 + \ldots + u_m$ where $u_i \in U_i$.
If $U_1, \ldots, U_m$ are subspaces of $V$, their sum is also a subspace of $V$.
Also, the sum is the smallest subspace of $V$ containing all of $U_1, \ldots, U_m$.
If $V$ is the sum of $U_1, \ldots, U_m$, then any element of $V$ can be written as $u_1 + \ldots + u_m$ with $u_i \in U_i$ (not necessarily uniquely).
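For concreteness, a small example (mine, not from the book): in $\mathbb F^3$, take $U_1 = \{(x, 0, 0) : x \in \mathbb F\}$ and $U_2 = \{(0, y, 0) : y \in \mathbb F\}$. Then
$$U_1 + U_2 = \{(x, y, 0) : x, y \in \mathbb F\},$$
which is indeed the smallest subspace of $\mathbb F^3$ containing both.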
1.5.2 Direct sums
If each element of $V$ can be written uniquely as a sum $u_1 + \ldots + u_m$ with $u_i \in U_i$, then $V$ is the direct sum $U_1 \oplus \ldots \oplus U_m$.
$V$ is the direct sum of $U_1, \ldots, U_m$ if and only if $V$ is their sum and the only way to write the zero vector as $u_1 + \ldots + u_m$ with $u_i \in U_i$ is to take every $u_i = 0$. The proof for $\Rightarrow$ is trivial; the proof for $\Leftarrow$ is pretty simple too: subtract two representations of the same vector to get a representation of 0, so each term of the difference must be zero and the two representations agree.
$V = U \oplus W$ if and only if $V = U + W$ and $U \cap W = \{0\}$. Proof for $\Rightarrow$: the first part is just the definition; for the second, assume that $v \in U \cap W$. Then $-v \in U$ and $-v \in W$ (closure under scalar multiplication by $-1$). So $v + (-v) = 0$ is a sum of a vector in $U$ and a vector in $W$. We also know that $0 + 0 = 0$. Since this representation is unique (because $V$ is the direct sum), $v = 0$. Thus 0 is the only element in the intersection. For $\Leftarrow$, we need to show that the zero vector can only be written one way. Suppose that $0 = u + w$ where $u \in U$ and $w \in W$. Then $w = -u$, so $w \in U$, and likewise $u = -w$, so $u \in W$. Thus $u$ and $w$ are both in the intersection. But the only such element is 0, so $u = w = 0$, and there is a unique way of writing the zero vector as a sum of elements of $U$ and $W$. $\blacksquare$
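A quick example of the intersection condition (my own): in $\mathbb R^3$, let $U = \{(x, y, 0)\}$ and $W = \{(0, y, z)\}$. Then $U + W = \mathbb R^3$, but
$$U \cap W = \{(0, y, 0) : y \in \mathbb R\} \neq \{0\},$$
so the sum is not direct ($(0,1,0)$ has many representations). Replacing $W$ with $W' = \{(0, 0, z)\}$ gives $\mathbb R^3 = U \oplus W'$.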
1.6 Exercises
(Interesting results encountered in the exercises.)
- If $V = W \oplus U_1 = W \oplus U_2$, $U_1$ and $U_2$ can be different.
Chapter 2: Finite-dimensional vector spaces
2.1 Span and linear independence
- Span
- set of all linear combinations; always a subspace of whatever the ambient space is
- Finite-dimensional vector space
- There is a list of vectors (recall that lists must be finite) that spans the vector space
- Linear independence for $v_1, \ldots, v_m$
- $a_1v_1 + \ldots + a_mv_m = 0$ only happens when all the a's are 0
- Linear dependence lemma
- If the vectors are linearly dependent, then one vector in the list can be written as a linear combination of the others (proof: at least one coefficient is non-zero, so divide by it). Also, we can remove that vector without affecting the span (proof: use the previous bit and just replace that vector with the linear combination of the others). A small example follows this list.
- Spanning sets
- The length of any independent list $\leq$ the length of any spanning set
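A small example of the linear dependence lemma (not from the book): in $\mathbb R^2$, the list $((1,0), (0,1), (1,1))$ is linearly dependent since
$$(1,1) = (1,0) + (0,1),$$
so $(1,1)$ can be written as a combination of the others, and removing it leaves the span unchanged (still all of $\mathbb R^2$).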
2.2 Bases
- Basis for $V$
- Linearly independent spanning set. Any vector in $V$ can be written uniquely as a linear combination of basis elements. We can reduce a spanning list to a basis by removing linearly dependent elements, and we can extend a linearly independent list to a basis by adding elements from a spanning list that are not in the span of the lin ind list.
For finite-dimensional $V$: if $U$ is a subspace of $V$, then there exists a subspace $W$ of $V$ such that $V = U \oplus W$. Proof: $U$ has a basis because subspaces of a finite-dimensional space are themselves finite-dimensional. We can extend the basis of $U$ to a basis of $V$ by adding some vectors $w_1,\ldots, w_n$, which we let be the basis for $W$. Then we just have to prove that $V$ is their sum and their intersection is $\{0\}$.
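A concrete instance (my own): take $V = \mathbb R^3$ and $U = \{(x, y, 0)\}$ with basis $((1,0,0), (0,1,0))$. Extend it to a basis of $V$ by adding $(0,0,1)$ and let $W = \text{span}((0,0,1))$; then
$$\mathbb R^3 = U \oplus W.$$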
2.3 Dimension
Any two bases of a finite-dimensional vector space have the same length. The dimension of a subspace never exceeds the dimension of the whole space (a basis of the subspace can be extended to a basis of the whole space). Any spanning list or linearly independent list with length $\dim V$ is a basis for $V$.
2.3.1 Lunch in Chinatown
$$\dim(U_1 + U_2) = \dim U_1 + \dim U_2 - \dim(U_1 \cap U_2)$$
Proof: construct a basis for everything, look at the number of vectors in each.
If $V = U_1 + \ldots + U_m$ and $\dim V = \dim U_1 + \ldots + \dim U_m$, then $V$ is the direct sum of the $U_i$. Proof: concatenate bases of the $U_i$; by the dimension count this is a basis of $V$, which gives uniqueness of representations.
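To sanity-check the dimension formula (example of mine): in $\mathbb R^3$, let $U_1 = \{(x, y, 0)\}$ and $U_2 = \{(0, y, z)\}$, two planes meeting in a line. Then
$$\dim(U_1 + U_2) = 3 = 2 + 2 - 1 = \dim U_1 + \dim U_2 - \dim(U_1 \cap U_2).$$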
2.4 Exercises
- If $(v_1, \ldots, v_n)$ is linearly independent but $(v_1+w, \ldots, v_n+w)$ is linearly dependent, then $w \in \text{span}(v_1, \ldots, v_n)$. Proof: write out the definition of linear dependence, rearrange terms, divide by the coefficient of $w$ (non-zero, since the $v_i$ are independent).
Chapter 3: Linear maps
3.1 Definitions and examples
Linear map from $V$ to $W$: function $T: V \to W$ that satisfies additivity ($T(u+v) = Tu + Tv$) and homogeneity ($T(av) = a(Tv)$, aka it preserves scalar multiplication). Also, $\mathcal L(V, W)$ is a vector space. We add a "product" operation, which is really just composition, in the expected order.
3.2 Nullspaces and ranges
Nullspace: $\{ v \in V: Tv = 0\}$. Always a subspace. Easy to prove. $T$ is injective if and only if its nullspace is $\{0\}$.
Range: $\{Tv: v \in V\}$. Also a subspace.
3.2.1 Dimension of the range and nullspace
$$\dim V = \dim \text{null} T +\dim \text{range} T$$
Proof: take a basis of the nullspace, extend it to a basis of $V$, and apply $T$; the nullspace part disappears because it goes to 0, and the images of the added vectors form a basis of the range.
Corollary: If $\dim V > \dim W$, there are no injective maps from $V$ to $W$, and there are no surjective maps from $W$ to $V$.
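A quick check of the formula (my example): let $T: \mathbb R^3 \to \mathbb R^2$ be $T(x, y, z) = (x, y)$. Then $\text{null}\, T = \{(0, 0, z)\}$ and $\text{range}\, T = \mathbb R^2$, so
$$\dim \mathbb R^3 = 3 = 1 + 2 = \dim \text{null}\, T + \dim \text{range}\, T.$$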
3.3 The matrix of a linear map
To find the matrix: apply $T$ to each vector of the given basis of $V$, express the result in terms of the basis of $W$, and make those coefficients a column.
The matrix of a vector with respect to some basis of length $n$ is just the $n \times 1$ column vector of the coefficients for the basis vectors. Then, $M(Tv) = M(T)M(v)$.
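A worked example (mine, with respect to the standard bases): let $T: \mathbb R^2 \to \mathbb R^2$ be $T(x, y) = (x + 2y, 3y)$. Then $Te_1 = (1, 0)$ and $Te_2 = (2, 3)$, so
$$M(T) = \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}, \qquad M(T)M(v) = \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x + 2y \\ 3y \end{pmatrix} = M(Tv).$$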
3.4 Invertibility
$T \in \mathcal L(V, W)$ is invertible if there exists $S \in \mathcal L(W, V)$ such that $ST = TS = I$ (obviously unique).
Invertibility $\Leftrightarrow$ injectivity and surjectivity.
Isomorphic vector spaces: there is an invertible linear map between them. Thus there is a bijection between them, and they have the same dimension (from the nullspace-range formula).
$$\dim \mathcal L(V, W) = \dim V \cdot \dim W$$
3.5 Exercises
- We can always extend a linear map defined on a subspace to a linear map on the ambient space. The simplest extension is generally not injective: in terms of a basis, we just send to 0 the added basis vectors, i.e. the part of any vector we don't have a rule for.
- If $T \in \mathcal L(V, \mathbb F)$, and $u \notin \text{null}T$, then $V$ is the direct sum of the nullspace and $\{au: a\in \mathbb F\}$. Proof: show that their intersection is zero, and that their sum is $V$ (take one vector from each such that their sum is an arbitrary $v$).
- If $T \in \mathcal L(V, W)$ is injective, then $T$ applied to each element of a lin ind list gives a lin ind list. Proof: if $a_1Tv_1 + \ldots + a_nTv_n = 0$, then by linearity $T(a_1v_1 + \ldots + a_nv_n) = 0$; by injectivity the nullspace is $\{0\}$, so $a_1v_1 + \ldots + a_nv_n = 0$, and the independence of the original list forces all $a_i = 0$.
- A product of injective maps is injective. Proof: apply argument inductively, starting from the last.
- $ST$ is invertible $\Leftrightarrow$ $S$ and $T$ are both invertible. Proof of $\Rightarrow$: $T$ is injective (its nullspace contains only 0) and $S$ is surjective (its range is $V$); since these are operators on a finite-dimensional space, each of these implies invertibility. In the other direction, just multiply by $T^{-1}S^{-1}$ to get $I$.
Chapter 4: Polynomials
4.1 Degree
$p$ has degree $m \geq 1$. Then $\lambda$ is a root $\Leftrightarrow$ there exists a $q$ with degree $m-1$ such that
$$p(z) = (z-\lambda)q(z)$$
Proof of the non-trivial direction: $p(z) = p(z) - p(\lambda)$ since $p(\lambda) = 0$; writing this out term by term, we can factor $(z-\lambda)$ out of each $z^k - \lambda^k$, which gives the required $q$.
Corollary: $p$ has at most $m$ distinct roots.
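A small example (my own): $p(z) = z^2 - 3z + 2$ has degree $m = 2$ and $\lambda = 1$ as a root, and indeed
$$p(z) = (z - 1)(z - 2),$$
with $q(z) = z - 2$ of degree $m - 1 = 1$; the distinct roots are $1$ and $2$, at most $m = 2$ of them.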
4.2 Complex coefficients
4.2.1 Fundamental theorem of algebra
Every nonconstant polynomial over $\mathbb C$ has a root $\in \mathbb C$.
Corollary: $p$ has a unique factorisation of the form $p(z) = c(z-\lambda_1)\cdots(z-\lambda_m)$ where the $\lambda_j$ are the roots (with multiplicity).
4.3 Real coefficients
Complex conjugate definition: $\overline{a + bi} = a - bi$ (negate the imaginary part).
If $\lambda$ is a root of a polynomial with real coefficients, so is $\overline{\lambda}$.
$p$ has a unique factorisation into a constant times linear factors and irreducible quadratic factors (all with real coefficients).
Chapter 5: Eigenvalues and eigenvectors
5.1 Invariant subspaces
A subspace $U$ is invariant under $T$ if $u \in U$ implies $Tu \in U$.
For one-dimensional $V$, the only invariant subspaces are the trivial ones (zero and the whole space; indeed, these are the only subspaces at all in dimension 1).
For a self-map (operator), both the nullspace and range are invariant.
Eigenvalue $\lambda$: there exists a nonzero $v \in V$ such that $Tv = \lambda v$. So, $T$ has a one-dimensional invariant subspace $\Leftrightarrow$ $T$ has an eigenvalue. Also, $\lambda$ is an eigenvalue if and only if $T - \lambda I$ is not injective (equivalently, not invertible, equivalently not surjective). $v$ is an eigenvector (eigenvectors are found by looking at the nullspace of $T-\lambda I$).
Eigenvectors corresponding to distinct eigenvalues are linearly independent. Proof: otherwise, take the first eigenvector $v_k$ that lies in the span of the previous ones and write it as a linear combination of them; apply $T$ to that equation, and also multiply the original equation by $\lambda_k$. Subtracting, and since the eigenvalues are all distinct, every coefficient must be 0, so $v_k = 0$, a contradiction.
The maximum number of distinct eigenvalues is $\dim V$, since the corresponding eigenvectors form a linearly independent list, which can have length at most $\dim V$.
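A worked example (not from the book): let $T(x, y) = (y, x)$ on $\mathbb R^2$. Then
$$T(1, 1) = (1, 1), \qquad T(1, -1) = (-1, 1) = -(1, -1),$$
so $1$ and $-1$ are eigenvalues with eigenvectors $(1, 1)$ and $(1, -1)$, which are indeed linearly independent; since $\dim \mathbb R^2 = 2$, there can be no further eigenvalues.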
5.2 Polynomials applied to operators
$T^m$ is $T$ applied $m$ times; more generally, for $p(z) = a_0 + a_1 z + \ldots + a_m z^m$, we define $p(T) = a_0 I + a_1 T + \ldots + a_m T^m$.
5.3 Upper-triangular matrices
Every operator on a fd, non-zero complex vector space has an eigenvalue. Proof: pick any non-zero $v$ and consider the vectors $(v, Tv, T^2v, \ldots, T^nv)$. This list of $n+1$ vectors cannot be linearly independent when $\dim V = n$. So we can write 0 as a linear combination with not-all-zero complex coefficients. Take the largest index with a nonzero coefficient, and make the coefficients up to there the coefficients of a polynomial. We can factor that over the complex numbers! Replace $z$ with $T$ and we see that $T-\lambda_j I$ is not injective for at least one $j$, since the product sends the non-zero vector $v$ to 0.
Upper-triangular matrix: every entry below the diagonal is 0.
If $T \in \mathcal L(V)$ and $(v_1, \ldots, v_n)$ is a basis of $V$, then TFAE: (1) the matrix of $T$ with respect to this basis is upper triangular; (2) $Tv_k \in \text{span}(v_1, \ldots, v_k)$ for each $k$; (3) $\text{span}(v_1, \ldots, v_k)$ is invariant under $T$ for each $k$.
Any operator on a finite-dimensional complex vector space has an upper-triangular matrix with respect to some basis. Proof: by induction on the dimension.
$T$ is invertible $\Leftrightarrow$ its upper-triangular matrix has no zeros on the diagonal (the diagonal entries are exactly the eigenvalues of $T$). Proof: if one eigenvalue is 0, then $Tv = 0$ for some nonzero $v$, so $T$ is not injective and hence not invertible. Conversely, if $T$ is not invertible, it's not injective, so there's some nonzero $v$ such that $Tv = 0$, which means 0 is an eigenvalue.
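A quick example (mine): with respect to the standard basis,
$$M(T) = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}$$
is upper triangular with no zeros on the diagonal, so $T$ is invertible and its eigenvalues are $2$ and $3$; whereas $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ has a $0$ on the diagonal and indeed sends $e_1$ to $0$, so it is not invertible.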
5.4 Diagonal matrices
An operator on $V$ has a diagonal matrix with respect to some basis if it has $\dim V$ distinct eigenvalues (the corresponding eigenvectors then form a linearly independent list of length $\dim V$, and hence a basis). (An operator can also have a diagonal matrix even when its eigenvalues are not distinct; the identity is the obvious example.)
TFAE (where $\lambda_1, \ldots, \lambda_m$ are the distinct eigenvalues of $T$; a small example follows this list):
- $T$ has a diagonal matrix with respect to some basis of $V$
- The eigenvectors of $T$ include a basis of $V$
- $V$ is the sum of one-dimensional invariant subspaces
- $V = \text{null}(T-\lambda_1 I) + \ldots + \text{null}(T-\lambda_m I)$
- $\dim V = \dim\text{null}(T-\lambda_1 I) + \ldots + \dim\text{null}(T-\lambda_m I)$
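A non-example (my own) showing how the conditions fail: the operator on $\mathbb C^2$ with matrix
$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$
has $0$ as its only eigenvalue, and $\dim\text{null}(T - 0I) = 1 < 2 = \dim V$, so the last condition fails and $T$ has no diagonal matrix with respect to any basis.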
5.5 Invariant subspaces on real vector spaces
Every operator on a non-zero, finite-dimensional real vector space has an invariant subspace of dimension 1 or 2. Proof: the same polynomial argument as before, except that the factorisation over $\mathbb R$ gives linear and quadratic factors.
Every operator on an odd-dimensional real vector space has an eigenvalue.
5.6 Exercises
- $T$ has at most $(\dim \text{range}\, T) + 1$ distinct eigenvalues. Proof: eigenvectors for non-zero eigenvalues lie in the range of $T$ (if $Tv = \lambda v$ with $\lambda \neq 0$, then $v = T(v/\lambda)$), and eigenvectors for distinct eigenvalues are linearly independent, so there are at most $\dim \text{range}\, T$ non-zero eigenvalues, plus possibly 0.
- If $T$ is invertible and $\lambda$ is an eigenvalue of $T$, then $1/\lambda$ is an eigenvalue of $T^{-1}$.
- $ST$, $TS$ have the same eigenvalues.
- If every vector is an eigenvector, then $T$ is a scalar multiple of the identity operator. The same conclusion holds if every subspace of dimension $\dim V - 1$ is invariant.
- If $P^2=P$, then the ambient space is the direct sum of the nullspace and range of $P$. Proof: for any $v$, write $v = (v - Pv) + Pv$; $v-Pv$ is in the nullspace and $Pv$ is in the range, and the intersection is $\{0\}$. (A small example follows.)
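As a concrete instance (mine): $P(x, y) = (x, 0)$ on $\mathbb R^2$ satisfies $P^2 = P$, with $\text{null}\, P = \{(0, y)\}$ and $\text{range}\, P = \{(x, 0)\}$, and indeed
$$\mathbb R^2 = \text{null}\, P \oplus \text{range}\, P.$$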