# Thursday, February 7, 2013 Polynomials, introduction to eigenvalues

## 1 Polynomials

### 1.1 Proposition 4.10: Complex conjugate roots

Suppose $p$ is a polynomial with real coefficients. If $\lambda \in \mathbb C$ is a root of $p$, then so is $\overline \lambda$ (the complex conjugate).

Proof: write $p$ as $p(x) = a_0 + a_1x + \ldots + a_mx^m$, where $a_i \in \mathbb R$. Then, $\lambda \in \mathbb C$ being a root of $p$ means that $p(\lambda) = a_0 + a_1 \lambda + \ldots + a_m\lambda^m = 0$. Let's take the conjugate of both sides of that:

$$\overline{p(\lambda)} = \overline 0 = 0 \quad \therefore \overline{(a_0 + a_1\lambda + \ldots + a_m\lambda^m)} = 0$$

Now, we know that $\overline{a_i} = a_i$ for each $a_i$ since they're all real. Also, by the properties of the conjugate operator, we know that $\overline{u+v} = \overline u + \overline v$ and $\overline{u \cdot v} = \overline u \cdot \overline v$. Thus, we have that

$$a_0 + a_1\overline \lambda + \ldots + a_m\overline \lambda^m = 0 = p(\overline \lambda)$$

which tells us that $\overline \lambda$ is a root of $p$. $\blacksquare$
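A quick numerical sanity check of the proposition (not from the lecture; the polynomial and helper function here are my own choices): $p(x) = x^2 - 2x + 5$ has real coefficients and the non-real root $1 + 2i$, so its conjugate $1 - 2i$ must also be a root.

```python
# Check Proposition 4.10 on a sample polynomial with real coefficients.
# p(x) = x^2 - 2x + 5 has the non-real root 1 + 2j, so 1 - 2j is also a root.

def p(x, coeffs):
    """Evaluate a_0 + a_1 x + ... + a_m x^m at x, coefficients given low-to-high."""
    return sum(a * x**k for k, a in enumerate(coeffs))

coeffs = [5, -2, 1]                   # a_0, a_1, a_2 -- all real
root = 1 + 2j

print(p(root, coeffs))                # 0j
print(p(root.conjugate(), coeffs))    # 0j -- the conjugate is also a root
```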

### 1.2 Factoring quadratic polynomials

Let $\alpha, \beta \in \mathbb R$. We can factor $x^2 + \alpha x + \beta$ as $(x-\lambda_1)(x-\lambda_2)$ with $\lambda_1, \lambda_2 \in \mathbb R$ if and only if $\alpha^2 \geq 4\beta$. (This is really just an application of the quadratic formula to a less general form.)

Proof: ($\rightarrow$) Complete the square: $x^2 + \alpha x + \beta = (x+\alpha/2)^2 + (\beta - \alpha^2/4)$. Suppose that $\alpha^2 < 4\beta$. Then $\beta - \alpha^2/4 > 0$, and since $(x + \alpha/2)^2 \geq 0$, the whole expression is always positive. Thus the function always lies above the $x$-axis, and so there are no real roots. (This was a proof using the contrapositive.)

($\leftarrow$) Assume that $\alpha^2 \geq 4\beta$. Then

$$x^2 + \alpha x + \beta = (x+\alpha/2)^2 - \underbrace{(\alpha^2/4-\beta)}_{= c^2}$$

(We can set the latter term to $c^2$ for some $c \in \mathbb R$, since $\alpha^2/4 - \beta \geq 0$.) Then $x^2 + \alpha x + \beta = (x + \alpha/2)^2 - c^2 = (x + \alpha/2 - c)(x + \alpha/2+c)$ and so $\lambda_1 = -\alpha/2 + c$, $\lambda_2 = -\alpha/2 - c$. $\blacksquare$
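The construction in the ($\leftarrow$) direction translates directly into code. A minimal sketch (the function name `real_roots` is mine, not the lecture's): given $\alpha^2 \geq 4\beta$, compute $c = \sqrt{\alpha^2/4 - \beta}$ and read off the two real roots.

```python
import math

# Given real alpha, beta with alpha^2 >= 4*beta, produce the real roots of
# x^2 + alpha*x + beta by completing the square, as in the proof above.

def real_roots(alpha, beta):
    if alpha**2 < 4 * beta:
        raise ValueError("no real factorization: alpha^2 < 4*beta")
    c = math.sqrt(alpha**2 / 4 - beta)
    return (-alpha / 2 + c, -alpha / 2 - c)

l1, l2 = real_roots(-5.0, 6.0)   # x^2 - 5x + 6 = (x - 3)(x - 2)
print(l1, l2)                    # 3.0 2.0
```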

### 1.3 Theorem 4.14: Unique factorization

If $p \in P(\mathbb R)$ is a non-constant polynomial, then $p$ has a unique factorization (ignoring differences in order) of the form

$$p(x) = c(x-\lambda_1)\ldots(x-\lambda_m)(x^2+\alpha_1x+\beta_1) \ldots (x^2+\alpha_nx+\beta_n)$$

where $\lambda_i \in \mathbb R$, $c \in \mathbb R$, and $\alpha_i^2 < 4\beta_i$ for each $i$ (so the quadratic factors have no real roots).

Thus, we can always factor any polynomial into its irreducible linear/quadratic factors.

Also, if we know that $(x-\lambda)$ is a factor for some non-real $\lambda \in \mathbb C$, then we also know that $(x-\overline \lambda)$ is a factor. So $(x-\lambda)(x -\overline \lambda) = x^2 - 2(\operatorname{Re}\lambda)x + |\lambda|^2$ is a factor, which happens to be a quadratic with real coefficients. So it will show up as one of the quadratic factors.
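Expanding $(x-\lambda)(x-\overline\lambda)$ for a concrete non-real $\lambda$ illustrates this (a small sketch of my own, not from the lecture): the coefficients that come out are $-2\operatorname{Re}\lambda$ and $|\lambda|^2$, both real, and they automatically satisfy $\alpha^2 < 4\beta$.

```python
# (x - lam)(x - conj(lam)) = x^2 - 2*Re(lam)*x + |lam|^2 for non-real lam.

lam = 1 + 2j
alpha = -2 * lam.real                 # coefficient of x
beta = lam.real**2 + lam.imag**2      # constant term |lam|^2 = 1 + 4 = 5

print(alpha, beta)                    # -2.0 5.0 -- both real
print(alpha**2 < 4 * beta)            # True: the quadratic has no real roots
```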

Proof: in the textbook. QED $\blacksquare$ $\checkmark$ $\dagger$ $\heartsuit$ $\sharp$ $\Im$ $\leadsto$

## 2 Eigenvalues and invariant subspaces

Let $V$ be a finite-dimensional, non-trivial vector space. Recall that $\mathcal L(V) = \mathcal L(V, V)$ (the set of linear operators - that is, linear maps from a vector space to itself).

### 2.1 Invariant subspaces

Let $T \in \mathcal L(V)$, and let $U$ be a subspace of $V$. We say that $U$ is invariant under $T$ if for every $u \in U$, $Tu \in U$.

#### 2.1.1 Examples

$T: P_7(\mathbb R) \to P_7(\mathbb R), p(x) \mapsto \frac{dp(x)}{dx}$ (that is, it maps a polynomial to its derivative). Let $U = P_5(\mathbb R)$. Then $U$ is invariant under $T$, since differentiating can only lower the degree. (Any subspace of the form $P_m(\mathbb R)$ would be.) If the linear map were integration rather than differentiation, though, no subspace $P_m(\mathbb R)$ would be invariant under it, since integration raises the degree.
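The differentiation example can be made concrete by storing a polynomial as its coefficient list (representation and helper names are my own sketch, not from the lecture): if $p \in P_5(\mathbb R)$, then $Tp$ has degree at most $4$, so it stays inside $P_5(\mathbb R)$.

```python
# The derivative map on polynomials stored as coefficient lists [a_0, ..., a_m].

def derivative(coeffs):
    """Map a_0 + a_1 x + a_2 x^2 + ... to a_1 + 2 a_2 x + ..."""
    return [k * a for k, a in enumerate(coeffs)][1:]

def degree(coeffs):
    nonzero = [k for k, a in enumerate(coeffs) if a != 0]
    return max(nonzero) if nonzero else -1   # treating deg(0) as -1 here

p = [1, 0, 3, 0, 0, 7]        # 1 + 3x^2 + 7x^5, an element of P_5(R)
print(degree(derivative(p)))  # 4 -- still in P_5, consistent with invariance
```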

For any $T \in \mathcal L(V)$, are its nullspace and range invariant? Yes, obviously - if $u \in \text{null}(T)$, then $Tu = 0$ and so $T(Tu) = T(0) = 0$. Similarly for the range: if $v \in \text{range}(T)$, then $Tv$ is also in the range (since $v \in V$, the domain).
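The nullspace argument can be checked on a concrete singular operator on $\mathbb R^2$ (my own example matrix, not from the lecture): $T$ sends $(1,-1)$ to $0$, and applying $T$ again keeps us at $0$, i.e., inside $\text{null}(T)$.

```python
# null(T) is invariant: if Tu = 0, then T(Tu) = T(0) = 0.

def apply(T, v):
    """Multiply the matrix T by the vector v."""
    return [sum(T[i][j] * v[j] for j in range(len(v))) for i in range(len(T))]

T = [[1, 1], [1, 1]]          # singular: null(T) is spanned by (1, -1)
u = [1, -1]

print(apply(T, u))            # [0, 0] -- u is in null(T)
print(apply(T, apply(T, u)))  # [0, 0] -- T(Tu) = 0, still in null(T)
```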

### 2.2 Eigenvalues

$\lambda \in \mathbb F$ is an eigenvalue of $T \in \mathcal L(V)$ if there exists a non-zero vector $v \in V$ such that $Tv = \lambda v$.

If $T$ has an eigenvalue, then $T$ has a one-dimensional invariant subspace, and vice versa. Proof: suppose $Tu = \lambda u$ for some $u \neq 0$, and let $U = \{au \mid a \in \mathbb F \}$, a one-dimensional subspace of $V$. Then $T(au) = a(Tu) = a\lambda u \in U$, so $U$ is invariant. Conversely, if $U$ is a one-dimensional invariant subspace spanned by some $u \neq 0$, then $Tu \in U$, so $Tu = au$ for some $a \in \mathbb F$. Like, $a = \lambda$. Thus, eigenvalue. $\heartsuit$

Notice that the equation $Tu = \lambda u$ is equivalent to $(T - \lambda I)u = 0$ (where $I$ is the identity operator, not matrix). Thus $\lambda$ is an eigenvalue of $T$ if and only if $T-\lambda I$ is not injective, i.e., its nullspace contains a non-zero vector. Since $V$ is finite-dimensional, $T-\lambda I$ then cannot be surjective or invertible either. Linear algebra is pretty cool.
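A small numerical illustration of the equivalence (example matrix mine, not from the lecture): for $T = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}$, $\lambda = 2$ is an eigenvalue, so $T - 2I$ is singular and kills the non-zero vector $(1, 0)$.

```python
# lambda = 2 is an eigenvalue of T = [[2, 1], [0, 3]], so T - 2I is not
# injective: its determinant vanishes and it sends (1, 0) to zero.

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

T = [[2, 1], [0, 3]]
lam = 2
M = [[T[0][0] - lam, T[0][1]],
     [T[1][0], T[1][1] - lam]]       # T - lam*I

print(det2(M))                       # 0 -- singular, hence not injective
v = [1, 0]
print([sum(M[i][j] * v[j] for j in range(2)) for i in range(2)])  # [0, 0]
```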

(I got the last paragraph from the textbook. I vaguely remember something along those lines being written on the board, but at that point I had already put away my notebook and to bring it back again just to write down a few lines felt defeatist. If you have notes for that please let me know.)