Monday, January 21, 2013 CC-BY-NC
Introduction to linear maps

Maintainer: admin

Material covered: proposition 2.19 (the end of chapter 2) and the start of chapter 3 (linear maps).

1 Proposition 2.19

Suppose $V$ is a finite-dimensional vector space. Let $U_1, \ldots, U_m$ be subspaces of $V$, such that the following hold true:

$$\begin{cases} V = U_1 + \ldots + U_m & (1) \\ \dim V = \dim U_1 + \ldots + \dim U_m & (2) \end{cases}$$

Then, $V = U_1 \oplus \ldots \oplus U_m$.

Proof: Let $B_i = (v_1^i, \ldots, v_{n_i}^i)$ be a basis for $U_i$. (The superscripts are not exponentiation or anything like that; they're just notation for differentiating the bases from each other.) Now, let's take all the basis vectors and combine them into one list, $B = (B_1, \ldots, B_m)$ (concatenating the lists, as opposed to nesting them). Clearly, $B$ spans $V$, by (1). Also, the length of $B$ is $\dim(U_1) + \ldots + \dim(U_m) = \dim(V)$, by (2). Hence, $B$ is a basis for $V$, by proposition 2.16.

Now we need to show that the sum is direct, i.e., that if $0 = u_1 + \ldots + u_m$[1], where $u_i \in U_i$, then $u_1 = \ldots = u_m = 0$. We can write each vector $u_i$ as a linear combination of the relevant basis vectors:

$$0 = \sum_{j=1}^{n_1} a_j^1 v_j^1 + \sum_{j=1}^{n_2} a_j^2 v_j^2 + \ldots + \sum_{j=1}^{n_m} a_j^m v_j^m$$

But $B$ is a basis, hence linearly independent, so all the coefficients are zero: $a_j^i = 0$ for every $i$ and $j$. Thus, $u_1 = \ldots = u_m = 0$, which is exactly what it means for the sum to be direct. I feel like there may be a more elegant way of doing this, but I can't find it. Good enough for now $\blacksquare$
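To make the criterion concrete, here's a small numpy sketch (the choice of $V = \mathbb R^3$, $U_1 = \operatorname{span}\{e_1, e_2\}$, $U_2 = \operatorname{span}\{(1,1,1)\}$ is just an illustrative assumption, not from the lecture). Stacking the combined list $B$ into a matrix and checking its rank confirms that $B$ is a basis, which is exactly what drives the proof:

```python
import numpy as np

# Illustrative subspaces of R^3: U1 = span{e1, e2}, U2 = span{(1,1,1)}.
B1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])   # basis of U1 (one vector per row)
B2 = np.array([[1.0, 1.0, 1.0]])   # basis of U2

B = np.vstack([B1, B2])            # the combined list B from the proof

# dim U1 + dim U2 = 2 + 1 = 3 = dim V, and B has rank 3, so B is a
# basis of R^3; hence R^3 = U1 (+) U2 is a direct sum.
assert np.linalg.matrix_rank(B) == B1.shape[0] + B2.shape[0] == 3
```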

2 Chapter 3: Linear maps

A linear map (or linear transformation; when $V = W$, also called a linear operator) $T: V \to W$, where $V$ and $W$ are vector spaces over $\mathbb F$, has the property that for all $u, v \in V$ and $\alpha \in \mathbb F$:

$$T(\alpha \cdot u + v) = \alpha \cdot T(u) + T(v)$$
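This single equation packs together additivity ($T(u+v) = T(u) + T(v)$) and homogeneity ($T(\alpha u) = \alpha T(u)$). A quick way to build intuition is to spot-check the property numerically; the map $T$ below is a made-up example, not one from the lecture:

```python
import numpy as np

def T(v):
    """A hypothetical linear map R^2 -> R^2, used only to illustrate the test."""
    return np.array([2.0 * v[0] + v[1], -v[1]])

# Spot-check T(a*u + v) == a*T(u) + T(v) on random inputs.
rng = np.random.default_rng(0)
for _ in range(100):
    u, v = rng.normal(size=2), rng.normal(size=2)
    a = rng.normal()
    assert np.allclose(T(a * u + v), a * T(u) + T(v))
```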

2.1 The underlying vector space structure

Let $\mathcal L(V, W) = \{T: V \to W \,|\, T \text{ is linear}\}$ be the set of all linear maps from $V$ to $W$ (for some specified $V$ and $W$). We claim that $\mathcal L(V, W)$, together with the operations of addition and scalar multiplication (defined in the most natural way), is a vector space.

Here are the definitions of addition and scalar multiplication:

  • Addition: $(S+T)(v) = S(v) + T(v)$, where $S, T \in \mathcal L(V, W)$, $v \in V$
  • Scalar multiplication: $(\alpha \cdot T)(v) = \alpha \cdot T(v)$, where $T \in \mathcal L(V, W)$, $v \in V$ and $\alpha \in \mathbb F$

Let's check that this proposed vector space is closed under addition:

$\begin{align} (S+T)(\alpha v + u) & = S(\alpha v + u) + T(\alpha v + u) \tag{by the definition of addition} \\ & = \alpha S(v) + S(u) + \alpha T(v) + T(u) \tag{since $S$ and $T$ are linear} \\ & = \alpha(S(v) + T(v)) + (S(u) + T(u)) \tag{rearranging} \\ & = \alpha(S+T)(v) + (S+T)(u) \tag{by the definition of addition again} \\ \therefore \; & S+T \text{ is linear, and } \mathcal L(V, W) \text{ is closed under addition.} \, \blacksquare \end{align}$

Verifying closure under scalar multiplication (as well as the rest of the properties) is left as an exercise for the reader.
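For the computationally inclined, here's a minimal numpy sketch of these two operations, assuming $S$ and $T$ are given by arbitrarily chosen matrices; the assertions spot-check that $S+T$ and $\alpha T$ again satisfy the defining property:

```python
import numpy as np

# S and T are linear maps R^2 -> R^2, given here by arbitrary matrices
# (an assumption for the sketch; any linear S and T would do).
S = lambda v: np.array([[1.0, 2.0], [0.0, 1.0]]) @ v
T = lambda v: np.array([[0.0, -1.0], [3.0, 0.0]]) @ v

add = lambda s, t: (lambda v: s(v) + t(v))    # (S+T)(v) = S(v) + T(v)
scale = lambda c, t: (lambda v: c * t(v))     # (c*T)(v) = c*T(v)

rng = np.random.default_rng(1)
u, v, a = rng.normal(size=2), rng.normal(size=2), rng.normal()

SplusT = add(S, T)                 # closed under addition...
assert np.allclose(SplusT(a * u + v), a * SplusT(u) + SplusT(v))

cT = scale(2.5, T)                 # ...and under scalar multiplication
assert np.allclose(cT(a * u + v), a * cT(u) + cT(v))
```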

The fact that the set of all linear maps is itself a vector space is quite an interesting result. Vector spaces, all the way down.

2.2 Examples

  1. The zero map. $0: V \to W, v \mapsto 0$
  2. The identity/self map. $I: V \to V, v \mapsto v$
  3. The differentiation map, defined on real polynomials[2]. $\displaystyle D: P(\mathbb R) \to P(\mathbb R), p(x) \mapsto \frac{dp}{dx}$. Proving that this is a linear operator requires a basic fact from calculus: $\displaystyle \frac{d(\alpha p(x) + q(x))}{dx} = \alpha\frac{dp}{dx} + \frac{dq}{dx}$. As $D$ is a linear map, we can say that $D \in \mathcal L(P(\mathbb R), P(\mathbb R))$. (A numerical sketch of this map, and of example 5, follows this list.)
  4. The integration map, defined on real polynomials over a particular interval[3]. $\displaystyle S: P(\mathbb R) \to \mathbb R, p(x) \mapsto \int_0^1 p(x) \,dx$. Proving that this is a linear map again requires a basic fact from calculus (omitted); we just have to show that it satisfies the defining property of a linear map (stated at the beginning of this chapter).
  5. $T: \mathbb R^3 \to \mathbb R^2, (x, y, z) \mapsto (3x-z, y)$. $T \in \mathcal L(\mathbb R^3, \mathbb R^2)$. Proof left as an exercise.
  6. The general linear map between the spaces $\mathbb F^n$ and $\mathbb F^m$: $T: \mathbb F^n \to \mathbb F^m, (x_1, \ldots, x_n) \mapsto (a_{1,1}x_1 + \ldots + a_{1,n}x_n, \; \ldots, \; a_{m,1}x_1 + \ldots + a_{m,n}x_n)$.
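As mentioned in example 3, here's a small numpy sketch of examples 3 and 5: polynomials are stored as coefficient vectors (lowest degree first), and $T$ is written as a matrix; the specific polynomials and inputs are arbitrary choices:

```python
import numpy as np

def D(coeffs):
    """Differentiation map on polynomials stored as coefficient
    vectors [a0, a1, a2, ...] (lowest degree first)."""
    return coeffs[1:] * np.arange(1, len(coeffs))

p = np.array([5.0, 0.0, 3.0])   # p(x) = 5 + 3x^2
q = np.array([1.0, 2.0, 0.0])   # q(x) = 1 + 2x
a = 4.0

# D(a*p + q) == a*D(p) + D(q): the linearity of differentiation.
assert np.allclose(D(a * p + q), a * D(p) + D(q))

# Example 5 as a matrix: T(x, y, z) = (3x - z, y).
A = np.array([[3.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
print(A @ np.array([1.0, 2.0, 3.0]))   # [0. 2.]
```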

2.3 Product structures

We can define a "product structure" (i.e., composition) on linear maps, where $T \in \mathcal L(U, V)$ and $S \in \mathcal L(V, W)$ (note that the codomain of $T$ must match the domain of $S$):

$$(S \circ T)(v) = (ST)(v) = S(T(v))$$

From this, as we shall see, matrices will naturally arise. We will continue this next lesson.
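As a preview: if $T$ and $S$ act on coordinate vectors through matrices $B$ and $A$ (the particular matrices below are arbitrary illustrations), then $S \circ T$ acts through the matrix product $AB$:

```python
import numpy as np

B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])   # T : R^3 -> R^2
A = np.array([[2.0, 1.0],
              [1.0, 0.0],
              [0.0, 3.0]])         # S : R^2 -> R^3

# (S o T)(v) = S(T(v)) corresponds to the single matrix A @ B.
v = np.array([1.0, 2.0, 3.0])
assert np.allclose(A @ (B @ v), (A @ B) @ v)
```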

  1. Note the absence of coefficients here (usually needed to prove linear independence). They're not needed in this case because the $u_i$ are just arbitrary vectors from the various subspaces. 

  2. This is of course not the general differentiation map; it is merely a specific example of one. 

  3. Same as above.