Tuesday, March 26, 2013 CC-BY-NC

Maintainer: admin

1 Matrices as Vectors

$M_{n \times n}(\mathbb{F})$ is the set of $n \times n$ matrices with entries from $\mathbb{F}$

e.g.

$$\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \leftrightarrow \begin{pmatrix} a_{11}\\ a_{12}\\ a_{21}\\ a_{22} \end{pmatrix} \qquad 2 \times 2 \text{ matrix } \leftrightarrow \text{ vector in } \mathbb{F}^4$$

$$\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} + \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} = \begin{pmatrix} a_{11}+b_{11} & a_{12}+b_{12} \\ a_{21}+b_{21} & a_{22}+b_{22} \end{pmatrix} \leftrightarrow \begin{pmatrix} a_{11}+b_{11}\\ a_{12}+b_{12}\\ a_{21}+b_{21}\\ a_{22}+b_{22} \end{pmatrix} = \begin{pmatrix} a_{11}\\ a_{12}\\ a_{21}\\ a_{22} \end{pmatrix} + \begin{pmatrix} b_{11}\\ b_{12}\\ b_{21}\\ b_{22} \end{pmatrix}$$

$$\alpha\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = \begin{pmatrix} \alpha a_{11} & \alpha a_{12} \\ \alpha a_{21} & \alpha a_{22} \end{pmatrix}, \text{ etc.}$$

$\dim_{\mathbb{F}} M_{n \times n}(\mathbb{F}) = n^2$

Definition: a basis is given by the $n^2$ matrices each having exactly one entry equal to $1$ and all other entries equal to $0$
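
As a quick illustration (a NumPy sketch; the setup is mine, not from the lecture), flattening a $2 \times 2$ matrix to a vector in $\mathbb{F}^4$ respects addition and scalar multiplication, and the matrices with a single entry equal to $1$ give the $n^2$ basis elements:

```python
import numpy as np

# A 2x2 matrix corresponds to a vector in F^4 by listing its entries row by row.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

def vec(M):
    """Flatten a matrix into the corresponding vector in F^(n^2)."""
    return M.flatten()

# Matrix addition and scalar multiplication match vector addition and scaling.
assert np.allclose(vec(A + B), vec(A) + vec(B))
assert np.allclose(vec(2.5 * A), 2.5 * vec(A))

# The basis: matrices with exactly one entry equal to 1, the rest 0.
n = 2
basis = []
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = 1.0
        basis.append(E)
print(len(basis))  # 4 = n^2, so dim M_{n x n}(F) = n^2
```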

Hence for $A \in M_{n \times n}(\mathbb{F})$ the set $\{I, A, A^{2},\ldots,A^{n^2}\}$ of $n^{2}+1$ matrices in $M_{n \times n}(\mathbb{F})$ is linearly dependent

Hence there are coefficients $\alpha_{0}, \ldots, \alpha_{n^2} \in \mathbb{F}$ with

$\alpha_0 I + \alpha_1 A + \ldots + \alpha_{n^2} A^{n^2} = 0$ matrix, for some $\alpha_{j} \neq 0$

So we can reinterpret this as a polynomial in $x$:

Define $f(x) = \alpha_0 + \alpha_1 x + \ldots + \alpha_{n^2} x^{n^2}$; then $f(A) = 0$ matrix (note $x^0 \to I$, $x \to A$)

$A$ satisfies some nonzero polynomial of degree $\leq n^{2}$
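
A sketch of this dependence argument in NumPy (the names and the test matrix are mine, for illustration): stack the flattened powers $I, A, \ldots, A^{n^2}$ as columns and read a null-space vector as coefficients of a polynomial annihilating $A$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
n = A.shape[0]

# n^2 + 1 flattened powers I, A, ..., A^(n^2) living in F^(n^2): necessarily dependent.
powers = [np.linalg.matrix_power(A, k) for k in range(n * n + 1)]
P = np.column_stack([M.flatten() for M in powers])

# A vector in the null space of P gives coefficients alpha_0, ..., alpha_{n^2}.
_, _, Vt = np.linalg.svd(P)
alphas = Vt[-1]  # right singular vector for the smallest singular value

# f(A) = alpha_0 I + alpha_1 A + ... + alpha_{n^2} A^{n^2} is the zero matrix.
fA = sum(a * M for a, M in zip(alphas, powers))
print(np.allclose(fA, 0))  # True
```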

2 Cayley-Hamilton Theorem (first full proof given by Frobenius)

Define $X_{A}(\lambda) = \det(\lambda{I} - A) = \lambda^{n} - (\operatorname{trace} A)\lambda^{n-1} + \ldots + (-1)^{n}\det A$

Theorem: $X_{A}(A) = \mathbf{0}$, the zero matrix

Note: $X_{A}(\lambda)$ has degree $n$
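
A quick numerical check of these coefficients (a NumPy sketch; `np.poly` builds the characteristic polynomial from the eigenvalues, coefficients listed highest power first):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
n = A.shape[0]

# Coefficients of det(lambda*I - A): [1, -trace(A), ..., (-1)^n det(A)].
coeffs = np.poly(A)
print(coeffs)  # [ 1. -4.  3.] for this A, i.e. lambda^2 - 4*lambda + 3

assert np.isclose(coeffs[1], -np.trace(A))
assert np.isclose(coeffs[-1], (-1) ** n * np.linalg.det(A))
```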

Example:

$A = \begin{pmatrix}2 & 1 \\ 1 & 2 \end{pmatrix}$

$$\det(\lambda{I}-A) = \det\begin{pmatrix} \lambda-2 & -1 \\ -1 & \lambda-2 \end{pmatrix} = (\lambda-2)^2-(-1)^2$$

$X_A(\lambda) = \lambda^2-4\lambda+3$

Test $I = \begin{pmatrix}1&0\\0&1\end{pmatrix} \qquad A = \begin{pmatrix}2& 1 \\ 1 & 2\end{pmatrix} \qquad A^2=\begin{pmatrix}5 & 4 \\ 4 & 5\end{pmatrix}$

$$A^2 -4A + 3I = \begin{pmatrix}5 & 4 \\ 4 & 5\end{pmatrix} - 4 \begin{pmatrix}2& 1 \\ 1 & 2\end{pmatrix} + 3 \begin{pmatrix}1& 0 \\ 0 & 1\end{pmatrix} = \begin{pmatrix}5-8+3& 4-4 \\ 4-4 & 5-8+3\end{pmatrix} = \begin{pmatrix}0 & 0 \\ 0 & 0\end{pmatrix}$$

So $A(A-4I) = -3I$, hence $A\left[(A-4I)\left(\tfrac{1}{-3}\right)\right] = I$, thus $(A-4I)\left(\tfrac{1}{-3}\right) = A^{-1}$

$$\begin{aligned} A\left[(A-4I)\left(\tfrac{1}{-3}\right)\right] &= \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} \frac{2}{3} & \frac{-1}{3} \\ \frac{-1}{3} & \frac{2}{3} \end{pmatrix} \\ &= \begin{pmatrix} \frac{3}{3} & 0 \\ 0 & \frac{3}{3} \end{pmatrix} \\ &= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \end{aligned}$$
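
The same verification in NumPy (a sketch; nothing here beyond the computation above): $A^2 - 4A + 3I$ is the zero matrix, and $-\tfrac{1}{3}(A - 4I)$ really is $A^{-1}$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
I = np.eye(2)

# Cayley-Hamilton for this A: X_A(A) = A^2 - 4A + 3I = 0.
print(A @ A - 4 * A + 3 * I)  # zero matrix

# Rearranging A(A - 4I) = -3I gives an explicit inverse.
A_inv = (A - 4 * I) / (-3)
print(np.allclose(A @ A_inv, I))             # True
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```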

Example

$$\begin{aligned} A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix} \qquad X_{A}(\lambda) &= det(\lambda{I}-A) \\ &= \text{det}\begin{pmatrix} \lambda-1 & -1 & -1 \\ -1 & \lambda-1 & -1 \\ -1 & -1 & \lambda-1 \end{pmatrix} \\ &= (\lambda -1) \begin{vmatrix} \lambda -1 & -1 \\ -1 & \lambda -1 \end{vmatrix} - (-1) \begin{vmatrix} -1 & -1 \\ -1 & \lambda -1 \end{vmatrix} +(-1) \begin{vmatrix} -1 & \lambda-1 \\ -1 & -1 \end{vmatrix} \\ &= (\lambda-1)^{3} - 3(\lambda-1)-2 = \lambda^2(\lambda-3)\end{aligned}$$

Note: $\lambda = 0$ gives $(-1)^3 + 3 - 2 = 0$; $\lambda = 3$ gives $2^3 - 6 - 2 = 0$

test $A(A-3I) = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} -2 & 1 & 1 \\ 1 & -2 & 1 \\ 1 & 1 & -2 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} $

$m_{A}(\lambda) = \lambda(\lambda-3)$ is the minimal polynomial for this $A$
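
A quick check (NumPy sketch) that $\lambda(\lambda-3)$ annihilates this $A$ while neither linear factor does on its own, which is what makes it minimal:

```python
import numpy as np

A = np.ones((3, 3))
I = np.eye(3)

print(np.allclose(A @ (A - 3 * I), 0))  # True:  A(A - 3I) = 0
print(np.allclose(A, 0))                # False: lambda alone does not annihilate A
print(np.allclose(A - 3 * I, 0))        # False: (lambda - 3) alone does not either
```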

e.g. $A = \begin{pmatrix} 4 & 1 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 2 \end{pmatrix} \qquad X_A(\lambda) = \det(\lambda{I}-A) = \det \begin{pmatrix} \lambda-4 & -1 & 0 \\ 0 & \lambda-4 & 0 \\ 0 & 0 & \lambda-2 \end{pmatrix} =(\lambda-4)^{2}(\lambda-2)$

In this case $m_{A}(\lambda) = X_{A}(\lambda)$

e.g. $B = \begin{pmatrix} 4 & 1 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{pmatrix} \qquad X_B(\lambda) = \det(\lambda{I}-B) = \det \begin{pmatrix} \lambda-4 & -1 & 0 \\ 0 & \lambda-4 & 0 \\ 0 & 0 & \lambda-4 \end{pmatrix} =(\lambda-4)^{3}$

Test $B - 4I = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} $

$$(B-4I)^2 = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$

So $m_B(\lambda) = (\lambda -4)^2$

Return to $A = \begin{pmatrix} 4 & 1 & 0\\ 0 & 4 & 0\\ 0 & 0 & 2 \end{pmatrix}$

$X_{A}(\lambda) = (\lambda-4)^{2}(\lambda-2)$

$$A-4I = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -2 \end{pmatrix} \qquad A-2I = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$

Test $(A-4I)(A-2I) = \begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 0\\ 0 & 0 & -2 \end{pmatrix} \begin{pmatrix} 2 & 1 & 0\\ 0 & 2 & 0\\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 2 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix} \neq 0$; so in this case $m_A(\lambda) = X_A(\lambda)$ (verify)

so $m_{A}(\lambda) \neq (\lambda-4)(\lambda-2)$

The minimal polynomial is always a factor of the characteristic polynomial.

Conversely, the base where you throw away the multiplicities (the product of the distinct linear factors of $X_A$) must be a factor of the minimal polynomial.
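
One way to exploit this (a NumPy sketch under an added assumption, namely that the eigenvalues and their multiplicities in $X_A$ are known exactly; the function name is mine): search over exponent patterns for the distinct roots, keeping every root at least once, and take the lowest-degree product of $(A - \lambda_i I)$ factors that gives the zero matrix. Tested on the two $3 \times 3$ examples above:

```python
import numpy as np
from itertools import product

def minimal_poly_exponents(A, roots, mults):
    """Smallest exponents (e_1, ..., e_k) with prod_i (A - r_i I)^(e_i) = 0.

    roots: distinct eigenvalues of A; mults: their multiplicities in X_A.
    Every root appears (e_i >= 1) and e_i never exceeds its multiplicity,
    since m_A divides X_A and the two share the same roots.
    """
    I = np.eye(A.shape[0])
    for exps in sorted(product(*[range(1, m + 1) for m in mults]), key=sum):
        M = I.copy()
        for r, e in zip(roots, exps):
            M = M @ np.linalg.matrix_power(A - r * I, e)
        if np.allclose(M, 0):
            return exps
    return None

A = np.array([[4.0, 1, 0], [0, 4, 0], [0, 0, 2]])
B = np.array([[4.0, 1, 0], [0, 4, 0], [0, 0, 4]])

print(minimal_poly_exponents(A, [4, 2], [2, 1]))  # (2, 1): m_A = (x-4)^2 (x-2) = X_A
print(minimal_poly_exponents(B, [4], [3]))        # (2,):   m_B = (x-4)^2
```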

3 Theory of Polynomials over a field $\mathbb{F}$

[Note: $\mathbb{C}$ is special because over $\mathbb{C}$ all polynomials factor completely into linear factors]

$P_{F}[x] =$ set of all polynomials in $x$ with coefficients in $\mathbb{F}$

$f(x) \in P_{F}[x] \to$ there is an integer $k \geq 0$ with

$f(x) = a_kx^{k} + \ldots + a_{1}x + a_0$ (finitely many powers of $x$ involved)

Theorem (division algorithm): given $f(x), g(x) \in P_{F}[x]$ with $\deg g \leq \deg f$, there are polynomials $q(x), r(x)$ with $\deg r < \deg g$ and $f(x) = g(x)q(x)+r(x)$

Theorem: if $\alpha$ is a root common to $f(x)$ and $g(x)$, then $r(\alpha) = 0$

Proof: $0 = f(\alpha) = q(\alpha)g(\alpha) + r(\alpha) = 0 + r(\alpha)$, since $g(\alpha) = 0$

In particular, if $\alpha$ is a root of $f(x)$, then $(x-\alpha)$ is a factor of $f(x)$:

Take $g(x) = x - \alpha$; then $\deg r \leq 0$, so $r$ is a constant, and $\alpha$ being a root of both $f$ and $g$ gives $r = 0$
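
Both facts can be seen numerically with NumPy's `polydiv` (a sketch; coefficients are listed highest power first, and the polynomial used is the $X_A$ from the earlier $2 \times 2$ example):

```python
import numpy as np

# f(x) = x^2 - 4x + 3, the characteristic polynomial of the earlier 2x2 example.
f = [1, -4, 3]

# Division algorithm: f = g*q + r with deg r < deg g.
q, r = np.polydiv(f, [1, 1])  # divide by g(x) = x + 1
print(q, r)                   # quotient x - 5, remainder 8
print(np.allclose(np.polyadd(np.polymul([1, 1], q), r), f))  # True

# Dividing by (x - alpha) leaves the constant remainder f(alpha),
# which vanishes exactly when alpha is a root, making (x - alpha) a factor.
q, r = np.polydiv(f, [1, -1])  # alpha = 1 is a root of f
print(r)                       # [ 0.]
q, r = np.polydiv(f, [1, -2])  # alpha = 2 is not a root
print(r, np.polyval(f, 2))     # [-1.]  -1
```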

4 Collections of Polynomials

Given a subset $\emptyset \neq S \subseteq P_F[x]$:

If $p(x),q(x) \in S \to p(x) + q(x) \in S$ and $l(x) \in P_{F}[x],\ p(x) \in S \to l(x)p(x) \in S$, then, if $\emptyset \neq S \neq P_{F}[x]$:

Conclude the existence of a polynomial $f(x) \in S$ with $S = \{ f(x)l(x)\ |\ l(x) \in P_{F}[x] \}$

Proof: to each polynomial $p(x) \in S$ assign $\deg p$ (excluding the $0$ polynomial), so that $\{\deg p \ | \ p(x) \in S \}$ is a set of non-negative integers

Thus (induction axiom for non-negative integers), there is a least integer in this set.

Take $f(x) \in S$ with $\deg f = k$ (it must exist) and any $h(x) \in S$; then

$$\text{deg } f \leq \text{deg } h \qquad h(x) = f(x)l(x) + r(x) \qquad \text{deg } r < \text{deg } f$$

but $r(x) = h(x) - l(x)f(x) \in S$ and $\deg r < \deg f$ would contradict the minimality of $k = \deg f$, forcing $r(x) = 0$ and $h(x) = f(x)l(x)$, as claimed

Define $H = \{h(x)\ |\ h(A) = \underline{\underline{0}} \text{ matrix}\}$. This satisfies the hypothesis on $S$ $\to$ there is a minimum-degree member $f(x)$, and $h(x) = f(x)l(x)$ for any $h \in H$

Note $X_{A}(x)$ satisfies $X_{A}(A) = 0$ matrix $\therefore f(x)$ divides $X_A(x)$
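
A closing numerical check (NumPy sketch) of this divisibility for the matrix $B$ above, where the minimal polynomial $(\lambda - 4)^2$ is a proper factor of $X_B(\lambda) = (\lambda - 4)^3$:

```python
import numpy as np

B = np.array([[4.0, 1, 0], [0, 4, 0], [0, 0, 4]])

chi = np.poly(B)                  # X_B(x) = (x - 4)^3, coefficients highest power first
m = np.polymul([1, -4], [1, -4])  # m_B(x) = (x - 4)^2, found above

q, r = np.polydiv(chi, m)         # X_B = m_B * q + r
print(q)                          # [ 1. -4.]  i.e. q(x) = x - 4
print(np.allclose(r, 0))          # True: the minimal polynomial divides X_B
```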