Tuesday, January 15, 2013 CC-BY-NC
Bases

Maintainer: admin

In this class, we fleshed out the proof for Theorem 2.6, and started our discussion of bases.

1 Proof for Theorem 2.6

Let $(u_1, \ldots, u_m)$ be a linearly independent set, and let $(v_1, \ldots, v_n)$ be a set that spans all of $V$, as before. The idea: for each $u$ in the independent set, we add it to the spanning set and delete a $v$ such that the list still spans $V$. For $u_1$, since $u_1 \in V$, we can write it as a linear combination of vectors in the spanning set: $u_1 = a_1v_1 + \ldots + a_nv_n$, where the coefficients come from the field and at least one $a_k \neq 0$ (if they were all zero, we would just get the zero vector, which can't be part of a linearly independent set). Then we swap $u_1$ into the list in place of $v_k$, solving for $v_k$ by dividing both sides by $a_k$. Without loss of generality, we can re-order the vectors and coefficients such that $k=1$, and then rename the new coefficients as $b$'s, so that we have $v_1 = b_1u_1 + b_2v_2 + \ldots + b_nv_n$. I'll finish this later.
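To spell out the swap-and-divide step: after re-ordering so that $a_1 \neq 0$, we can rearrange $u_1 = a_1v_1 + \ldots + a_nv_n$ into

$$v_1 = \frac{1}{a_1}u_1 - \frac{a_2}{a_1}v_2 - \ldots - \frac{a_n}{a_1}v_n$$

so the $b$'s above are just $b_1 = 1/a_1$ and $b_j = -a_j/a_1$ for $j \geq 2$.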

1.1 Alternative proof

Here's an alternative proof, which uses proof-by-contradiction techniques. As before, $(u_1, \ldots, u_m)$ is a linearly independent set of vectors in $V$, and $(v_1, \ldots, v_n)$ is a set that spans all of $V$. Assume that $m > n$. Then we can write our linearly independent set as $(u_1, \ldots, u_n, \ldots, u_m)$. Let's replace each vector $u$ by a linear combination of $v$'s, since we know that we can write any $v \in V$ as such a linear combination. So our set becomes $((a_{1,1}v_1 + \ldots + a_{1,n}v_n), \ldots, (a_{n, 1}v_1 + \ldots + a_{n, n}v_n), \ldots, (a_{m, 1}v_1 + \ldots + a_{m, n}v_n))$. We also know that for each of the $m$ linear combinations, at least one of the coefficients is non-zero (because otherwise the zero vector would be in our linearly independent set, which is not possible from the definition of linear independence). Pick one such non-zero coefficient for each combination; since there are $m > n$ combinations but only $n$ vectors in the spanning set, by the pigeonhole principle there is at least one vector $v_k$ that gets picked twice - that is, two of the linear combinations have a non-zero coefficient on $v_k$.

Actually, never mind, I don't like this proof anymore.

1.2 Proposition 2.7

Every subspace of a finite-dimensional vector space is finite-dimensional.

Proof: left as an exercise.

2 Bases

A basis is a linearly independent list of vectors in $V$ whose span is $V$. The standard basis for $\mathbb F^3$, for instance, is $((1, 0, 0), (0, 1, 0), (0, 0, 1))$ (also known as $i, j, k$); for $P_n(\mathbb F)$, the standard basis is $(1, z, \ldots, z^n)$.
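As a computational aside (not from the lecture): over $\mathbb F = \mathbb R$ or $\mathbb C$, checking that a list is a basis of $\mathbb F^n$ amounts to a rank computation. Here's a minimal sketch, assuming numpy is available; `is_basis` is a hypothetical helper name:

```python
import numpy as np

def is_basis(vectors):
    """Check whether `vectors` (a list of 1-D arrays) is a basis of F^n.

    A list of n vectors in F^n is a basis iff the matrix with those
    vectors as columns has full rank n: full rank means linearly
    independent, and n independent vectors in F^n automatically span.
    """
    A = np.column_stack(vectors)
    n = A.shape[0]
    return len(vectors) == n and np.linalg.matrix_rank(A) == n

# The standard basis of F^3 from above:
print(is_basis([np.array([1, 0, 0]),
                np.array([0, 1, 0]),
                np.array([0, 0, 1])]))  # True
```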

2.1 Proposition 2.8

$(v_1, \ldots, v_n)$ is a basis for $V$ if and only if every $v \in V$ can be written uniquely as a linear combination of those vectors.

Proof: ($\rightarrow$) Assume that $(v_1, \ldots, v_n)$ is a basis. By definition, it is a spanning set, and so every $v \in V$ can be written as a linear combination of vectors in the basis. We just need to show that there is a unique linear combination for each $v$. Assume that there are two ways of writing $v$: $v = a_1v_1 + \ldots + a_nv_n$ and $v = b_1v_1 + \ldots + b_nv_n$. Then $0 = v - v = (a_1v_1 + \ldots + a_nv_n) - (b_1v_1 + \ldots + b_nv_n) = (a_1-b_1)v_1 + \ldots + (a_n-b_n)v_n$. But we know that $(v_1, \ldots, v_n)$ is linearly independent, which means that the coefficients must all be 0 if the linear combination results in the zero vector. So we have $a_1 - b_1 = 0$, $\ldots$, $a_n-b_n = 0$, which tells us that $a_1 = b_1$, $\ldots$, $a_n=b_n$. So there is exactly one way of writing each vector $v$ as a linear combination of the vectors in the basis.

($\leftarrow$) Assume that every $v \in V$ can be written uniquely as a linear combination of $(v_1, \ldots, v_n)$. In particular, the zero vector can be written uniquely as a linear combination: $0 = a_1v_1 + \ldots + a_nv_n$. We know that setting $a_1 = \ldots = a_n = 0$ gives a valid solution; by the uniqueness property, it must be the only one. This guarantees that $(v_1, \ldots, v_n)$ is linearly independent. Also, since every vector $v \in V$ can be written as a linear combination of $(v_1, \ldots, v_n)$, we have $\text{span}(v_1, \ldots, v_n) = V$. This concludes the proof. $\blacksquare$
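Numerically, Proposition 2.8 says the coordinates of $v$ are the unique solution of a linear system. A small illustrative sketch (my own example, assuming numpy; the basis used here is the one found in the next section):

```python
import numpy as np

# Basis vectors of F^3 as the columns of a matrix B.
B = np.column_stack([np.array([1, 0, 6]),
                     np.array([1, 0, 0]),
                     np.array([0, 1, 0])])

v = np.array([2, 3, 6])

# The coefficients a in v = a_1*v_1 + a_2*v_2 + a_3*v_3 satisfy
# B @ a = v; since the columns of B form a basis, B is invertible
# and the solution is unique.
a = np.linalg.solve(B, v)
print(a)                      # [1. 1. 3.]
print(np.allclose(B @ a, v))  # True
```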

2.2 Theorem 2.10

Given a list of vectors that we know spans some vector space $V$, we can reduce this list to a basis simply by removing the vectors that make it linearly dependent (while preserving the property that it spans $V$). Here's an example. Let's say we have the following vectors, whose span is $\mathbb F^3$ (here $\mathbb F = \mathbb C$, since $i$ shows up):

$$((1, 0, 6), (i/2, 0, 3i), (1, 0, 0), (0, 1, 0)) = (v_1, v_2, v_3, v_4)$$

So let's go through the vectors in the list:

  • Since $v_1$ is not the zero vector, keep it.
  • Is $v_2 \in \text{span}(v_1)$? Yes - if $a_1 = i/2$, $v_2 = a_1v_1$. So we discard it.
  • Is $v_3 \in \text{span}(v_1)$? Clearly not. (Proof left as an exercise for the reader.)
  • Is $v_4 \in \text{span}(v_1, v_3)$? Again, clearly not. (Proof left again as an exercise.)

The resulting basis would be $((1, 0, 6), (1, 0, 0), (0, 1, 0))$, which is indeed a linearly independent spanning set.
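Here's a hedged sketch of the pruning procedure itself, assuming numpy; the rank test plays the role of the "is this vector in the span of the ones we've kept?" check, and `reduce_to_basis` is a made-up name:

```python
import numpy as np

def reduce_to_basis(spanning_list):
    """Prune a spanning list down to a basis: keep each vector only
    if it is NOT in the span of the vectors kept so far."""
    kept = []
    for v in spanning_list:
        candidate = kept + [v]
        # v lies outside span(kept) iff appending it keeps the list
        # linearly independent, i.e. the rank equals the list length.
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            kept.append(v)
    return kept

# The walkthrough above:
vs = [np.array([1, 0, 6], dtype=complex),
      np.array([1j / 2, 0, 3j], dtype=complex),
      np.array([1, 0, 0], dtype=complex),
      np.array([0, 1, 0], dtype=complex)]
print(reduce_to_basis(vs))  # keeps v_1, v_3, v_4, as above
```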

The proof of the theorem is left as an exercise for the reader.

2.2.1 Corollary

Every finite-dimensional vector space has a basis.

The proof is pretty trivial - a finite-dimensional vector space is one that can be spanned by a finite list of vectors. If we simply remove the vectors that make the list linearly dependent (if any), using the above algorithm, then the result is a linearly independent spanning set - i.e., a basis.

2.3 Theorem 2.12

Every linearly independent list of vectors can be extended to a basis of a vector space $V$, if we have a spanning set for $V$ as a reference. This is sort of the "opposite" of the previous theorem. Example: let $((1, 1, 1))$ be our linearly independent list of vectors, and let the spanning set be as in the previous section. This uses the same sort of algorithm as in the previous section - go through the spanning set and add each vector to the list unless it would result in linear dependence. The proof for this is again left as an exercise.
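To mirror the previous sketch (same assumptions: numpy, made-up function name), the extension procedure might look like this:

```python
import numpy as np

def extend_to_basis(independent, spanning):
    """Extend a linearly independent list to a basis by appending
    vectors from a spanning list, skipping any vector that would
    make the list linearly dependent (same rank test as before)."""
    result = list(independent)
    for v in spanning:
        candidate = result + [v]
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            result.append(v)
    return result

# The example: extend ((1, 1, 1)) using the spanning set above.
indep = [np.array([1, 1, 1], dtype=complex)]
vs = [np.array([1, 0, 6], dtype=complex),
      np.array([1j / 2, 0, 3j], dtype=complex),
      np.array([1, 0, 0], dtype=complex),
      np.array([0, 1, 0], dtype=complex)]
print(extend_to_basis(indep, vs))  # a basis of C^3 containing (1, 1, 1)
```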