# Fall 2010 Final

Student-created solutions for the Fall 2010 final exam for MATH 223. You can find the original exam through WebCT, or possibly on docuum if anyone has uploaded it there in the meantime, but all the questions will be stated on this page.

## Question 1

Find a basis for each of the following spaces: the row space, column space and null space of the following matrix over the complex numbers.

$$\begin{pmatrix}1 & 1+i & 1-i & 2i \\ 1-i & 2 & 1-2i & -1-i \\ 1+2i & -1+i & 3+i & 0 \\ 3+i & 2+2i & 5-2i & -1+i \end{pmatrix}$$

### Solution

Row-reducing the matrix yields:

$$\begin{pmatrix}1 & 1+i & 1-i & 2i \\ 1-i & 2 & 1-2i & -1-i \\ 1+2i & -1+i & 3+i & 0 \\ 3+i & 2+2i & 5-2i & -1+i \end{pmatrix} \mapsto \begin{pmatrix}1 & 0 & 0 & 7-i \\ 0 & 1 & 0 & 1+2i \\ 0 & 0 & 1 & -3-3i \\ 0 & 0 & 0 & 0 \end{pmatrix}$$

From the row-reduced matrix, take the non-zero rows to obtain the basis for the row space:

$$\{(1, 0, 0, 7-i), \, (0, 1, 0, 1+2i), \, (0, 0, 1, -3-3i)\}$$

Where the columns in the row-reduced matrix have leading ones, the corresponding columns in the original matrix make up a basis for the column space:

$$\left \{ \begin{pmatrix}1 \\ 1-i \\ 1+2i \\ 3+i\end{pmatrix}, \begin{pmatrix}1+i \\ 2 \\ -1+i \\ 2+2i\end{pmatrix}, \begin{pmatrix}1-i \\ 1-2i \\ 3+i \\ 5-2i\end{pmatrix} \right \}$$

Now we solve the homogeneous system for the row-reduced matrix to obtain the basis for the null space. Let $x_4$ be a free variable $t$:

$$\begin{pmatrix}x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} (-7+i)t \\ (-1-2i)t \\ (3+3i)t \\ t \end{pmatrix} = t\begin{pmatrix}-7+i \\ -1-2i \\ 3+3i \\ 1 \end{pmatrix}$$

So the null space has basis $\left \{ \begin{pmatrix}-7+i & -1-2i & 3+3i & 1 \end{pmatrix}^T \right \}$.
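As a sanity check (not part of the exam, and assuming sympy is available), we can have sympy row-reduce the matrix and test the claimed null-space vector:

```python
from sympy import Matrix, I, zeros, simplify

# the matrix from the question, entered exactly
A = Matrix([
    [1,     1+I,   1-I,   2*I ],
    [1-I,   2,     1-2*I, -1-I],
    [1+2*I, -1+I,  3+I,   0   ],
    [3+I,   2+2*I, 5-2*I, -1+I],
])

R, pivots = A.rref()  # reduced row echelon form + pivot column indices
v = Matrix([-7+I, -1-2*I, 3+3*I, 1])  # claimed basis vector for the null space

print(pivots)              # columns of A at these indices give a column-space basis
print((A * v).expand())    # should be the zero vector
```

The pivot columns come out as the first three, matching the column-space basis above.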

### Accuracy and discussion

Confirmed by Wolfram Alpha, according to @clarle.

## Question 2

Let $V = P_3(x)$ be the real vector space of polynomials of degree at most 3 and for $p(x) \in V$ define:

$$Tp(x) = (x^2 + 2x + 1) p''(x) + (-4x - 4)p'(x) + 6p(x)$$

(a) Verify that $T$ is a linear operator on $V$.
(b) Find a basis for each of $\ker(T)$ and $\text{im}(T)$. (If you like, you can use some matrix representing $T$ to help you with this. Or you can do it straight from the definitions.)

### Solution

For part (a):

Condition I: Vector Addition

Let $p(x)$ and $q(x) \in V$. Then:

\begin{align} T(p(x) + q(x)) & = (x^2 + 2x + 1)(p + q)''(x) + (-4x - 4)(p + q)'(x) + 6(p + q)(x) \\ & = (x^2 + 2x + 1)( p''(x) + q''(x) ) + (-4x - 4)( p'(x) + q'(x) ) + 6( p(x) + q(x) ) \\ & = (x^2 + 2x + 1)p''(x) + (-4x - 4)p'(x) + 6p(x) + (x^2 + 2x + 1)q''(x) + (-4x - 4)q'(x) + 6q(x) \\ & = T(p(x)) + T(q(x)) \end{align}

Condition II: Scalar Multiplication

Let $p(x) \in V$ and some constant $r \in \mathbb{R}$:

\begin{align} T(rp(x)) & = (x^2 + 2x + 1)(rp)''(x) + (-4x - 4)(rp)'(x) + 6(rp)(x) \\ & = r(x^2 + 2x + 1)p''(x) + r(-4x - 4)p'(x) + 6rp(x) \\ & = r[(x^2 + 2x + 1)p''(x) + (-4x - 4)p'(x) + 6p(x)] \\ & = rT(p(x)) \end{align}

As both conditions are satisfied, $T$ is a linear operator on $V$.

For part (b):

Now we find the 4 by 4 matrix $[T]_B$ representing $T$ with respect to the standard basis $B = \{1, x, x^2, x^3\}$. To do this, we evaluate $T$ at each basis vector and express the answer as a coordinate column vector relative to $B$:

\begin{align}T(1) & = 0 + 0 + 6(1) = 6 \mapsto \begin{bmatrix} 6 & 0 & 0 & 0 \end{bmatrix}^T \\ T(x) & = (x^2 + 2x + 1)(0) + (-4x-4)(1) + 6(x) = 2x - 4 \mapsto \begin{bmatrix} -4 & 2 & 0 & 0 \end{bmatrix}^T \\ T(x^2) & = (x^2 + 2x + 1)(2) + (-4x-4)(2x) + 6(x^2) = -4x + 2 \mapsto \begin{bmatrix} 2 & -4 & 0 & 0 \end{bmatrix}^T \\ T(x^3) & = (x^2 + 2x + 1)(6x) + (-4x-4)(3x^2) + 6(x^3) = 6x \mapsto \begin{bmatrix} 0 & 6 & 0 & 0 \end{bmatrix}^T \end{align}

These correspond to the columns of the matrix $[T]_B$, from left to right. So

$$[T]_B = \begin{bmatrix} 6 & -4 & 2 & 0 \\ 0 & 2 & -4 & 6 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

To find the kernel and image space we use the matrix we found above. First we find the nullspace, using the same method we always use - the matrix row-reduces to

$$\begin{bmatrix} 3 & -2 & 1 & 0 \\ 0 & 1 & -2 & 3 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & -1 & 2 \\ 0 & 1 & -2 & 3 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

so the nullspace is simply $\left \{ \begin{pmatrix} 1 & 2 & 1 & 0 \end{pmatrix}^T, \begin{pmatrix}-2 & -3 & 0 & 1 \end{pmatrix}^T \right \}$, which corresponds to a kernel basis of $\{ x^2 + 2x + 1, x^3-3x-2\}$. Note that the nullspace and the kernel look different here only because the kernel is expressed in terms of the vector space being used (polynomials) rather than coordinate vectors.

To find a basis for the column space, we just take the columns with the leading ones from the RREF matrix and get the corresponding columns from the original matrix:

$$\{ \begin{pmatrix} 6 & 0 & 0 & 0 \end{pmatrix}^T, \begin{pmatrix} -4 & 2 & 0 & 0 \end{pmatrix}^T \}$$

which corresponds to an image basis of $\{6, 2x-4\}$. Note that the same distinction exists between the image and the column space as above.
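As a quick check (not part of the exam, assuming sympy is available), we can apply the operator symbolically to the standard basis and to the kernel basis found above:

```python
import sympy as sp

x = sp.symbols('x')

def T(p):
    # the operator from the question: (x^2+2x+1)p'' + (-4x-4)p' + 6p
    return sp.expand((x**2 + 2*x + 1) * sp.diff(p, x, 2)
                     + (-4*x - 4) * sp.diff(p, x)
                     + 6 * p)

# images of the standard basis {1, x, x^2, x^3} -- these give the columns of [T]_B
print([T(p) for p in (sp.Integer(1), x, x**2, x**3)])

# the kernel basis found above should be annihilated by T
print(T(x**2 + 2*x + 1), T(x**3 - 3*x - 2))
```

Both kernel candidates map to the zero polynomial, confirming the kernel basis.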

### Accuracy and discussion

• Answer is wrong, according to Wolfram Alpha the correct Nullspace is {{1,2,1,0},{-2,-3,0,1}}, the correct Im(T) is {6, 2x - 4} and the correct Ker(T) is {$x^2$ + 2x + 1, $x^3$ - 3x -2} - @Paul

## Question 3

Let $A = \begin{pmatrix} -1 & 2 \\ 0 & 2 \end{pmatrix}$

Show that $A$ is diagonalizable but there is no orthogonal matrix $P$ such that $P^T AP$ is diagonal.

### Solution

Finding the characteristic polynomial of the matrix, $\det(A-\lambda I)$:

$$\det(A-\lambda I) = \det\begin{pmatrix}-1-\lambda & 2 \\ 0 & 2-\lambda \end{pmatrix}$$

So the characteristic polynomial is:

$$(-1 - \lambda)(2 - \lambda)$$

$A$ has two distinct eigenvalues, and via a proof in Assignment 6, A must be diagonalizable.

If we let $\lambda = -1$:

$A + I = \begin{pmatrix}0 & 2 \\ 0 & 3 \end{pmatrix}$ yields eigenvector of $\begin{pmatrix}1 \\ 0\end{pmatrix}$

If we let $\lambda = 2$

$A - 2I = \begin{pmatrix}-3 & 2 \\ 0 & 0 \end{pmatrix}$ yields eigenvector of $\begin{pmatrix}2 \\ 3\end{pmatrix}$

If $P^TAP$ were diagonal for an orthogonal matrix $P$, the columns of $P$ would have to form an orthonormal set of eigenvectors of $A$. But each eigenvalue of $A$ has a one-dimensional eigenspace, so the columns of $P$ would have to be scalar multiples of the two eigenvectors found above. Taking their inner product:

$$\left \langle \begin{pmatrix}1 \\ 0\end{pmatrix}, \begin{pmatrix}2 \\ 3\end{pmatrix} \right \rangle = 2 \neq 0$$

The eigenvectors are not orthogonal, and since the eigenspaces are one-dimensional, no other choice of eigenvectors can fix this. Thus there is no ''orthogonal'' matrix $P$ such that $P^T AP$ is diagonal. This completes the proof.

Alternative solution to the second part of the question: Proof by contradiction

If $D = P^TAP$ where $D$ is a diagonal matrix, then $A=PDP^T$ since $PDP^T = P(P^TAP)P^T = (PP^T)A(PP^T) = (PP^{-1})A(PP^{-1}) = A$ where $P$ is orthogonal. $A=PDP^T$ implies A is orthogonally diagonalizable.

By the spectral theorem, $A$ is orthogonally diagonalizable if and only if it is symmetric. However, $A$ is not symmetric and this leads to a contradiction. Therefore, there is no orthogonal matrix P such that $D = P^TAP$.
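A quick numerical illustration (not part of the exam, assuming numpy is available): the eigenvalues are distinct, the matrix is not symmetric, and the eigenvectors are not orthogonal:

```python
import numpy as np

A = np.array([[-1.0, 2.0],
              [0.0, 2.0]])

evals, evecs = np.linalg.eig(A)   # columns of evecs are (unit) eigenvectors

print(sorted(evals))              # two distinct eigenvalues -> diagonalizable
print(np.allclose(A, A.T))        # A is not symmetric

# the inner product of the two eigenvectors is nonzero
v1, v2 = evecs[:, 0], evecs[:, 1]
print(np.dot(v1, v2))
```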

### Accuracy and discussion

Solution provided by @clarle.

• @clarle's solution works but in case the direct argument proved hard to see, I added another simple way to explain why there is no orthogonal matrix $P$ such that $P^TAP$ is diagonal. -- @eleyine

## Question 4

### Question

Give an explicit, nonrecursive formula for $x_n$, where $x_n$ is defined recursively by:

$$x_0 = x_1 = 2, x_{n+2} = 2x_{n+1}+ 4x_n, \, n \geq 0$$

### Solution

We first find a matrix to represent this problem, then find the eigenvalues of that matrix. Let the system representing this problem be given by

$$\begin{pmatrix}x_{n+2} \\ x_{n+1} \end{pmatrix} = \begin{pmatrix} 2 & 4 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} x_{n+1} \\ x_n \end{pmatrix}$$

where we used the fact that $x_{n+2} = 2x_{n+1} + 4x_n$ and $x_{n+1} = x_{n+1}$ (yes) to find the values of the matrix, which we'll call A. We then find the characteristic polynomial:

$$\det(\lambda I - A) = \lambda^2 - 2\lambda - 4 = 0 \\ \lambda^2 - 2\lambda + 1 = 5 \\ (\lambda - 1)^2 = 5 \\ \lambda -1 = \pm\sqrt{5}$$

which gives us the eigenvalues $\lambda_1 = 1 + \sqrt{5},\,\lambda_2 = 1 - \sqrt{5}$.

Since $A$ is diagonalizable with distinct eigenvalues $\lambda_1$ and $\lambda_2$, the general solution of the recurrence has the form $\alpha \lambda_1^n + \beta \lambda_2^n = \alpha (1+\sqrt{5})^n + \beta (1-\sqrt{5})^n$ where $\alpha$ and $\beta$ are scalars. We can solve for $\alpha$ and $\beta$ using $x_0$ and $x_1$:

$$x_0 = \alpha + \beta = 2 \quad x_1 = \alpha(1+\sqrt{5}) + \beta(1-\sqrt{5}) = 2$$

Combining both equations:

$$\alpha(1+\sqrt{5}) + \beta(1-\sqrt{5}) = \alpha + \beta \\ \alpha\sqrt{5} = \beta\sqrt{5} \\ \alpha = \beta \\ 2\alpha = 2 \\ \alpha = \beta = 1$$

Now we have the explicit formula for $x_n$:

$$x_n = (1+\sqrt{5})^n + (1-\sqrt{5})^n$$
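A quick check (not part of the exam): the closed form should reproduce the same values as iterating the recurrence directly:

```python
from math import sqrt

def x_recursive(n):
    # directly from the recurrence x_{n+2} = 2 x_{n+1} + 4 x_n
    a, b = 2, 2          # x_0, x_1
    for _ in range(n):
        a, b = b, 2*b + 4*a
    return a

def x_closed(n):
    # the closed form derived above; round() clears floating-point error
    return round((1 + sqrt(5))**n + (1 - sqrt(5))**n)

print([x_recursive(n) for n in range(8)])
print([x_closed(n) for n in range(8)])
```

The two lists agree term by term.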

### Accuracy and discussion

Solution provided by @clarle.

## Question 5

Suppose that $T$ is a linear transformation from the real vector space $V$ to the real vector space $W$ and $ker(T)$ is trivial. If $\{\vec{v_1},...,\vec{v_k}\}$ is a basis for $V$, prove that $\{T\vec{v_1},..., T\vec{v_k}\}$ is a basis for the image $im(T)$.

### Solution

To show that $\{T\vec{v_1},..., T\vec{v_k}\}$ is a basis for the image $im(T)$, we need to show the following:

(1) $\operatorname{span} \left \{T\vec{v_1},...,T\vec{v_k} \right \} = im(T)$ (i.e. the proposed basis vectors span the image of $T$)

(2) $a_1T\vec{v_1}+a_2T\vec{v_2}+...+a_kT\vec{v_k} = 0$ if and only if $a_1 = ... = a_k = 0$ (i.e. the transformed vectors are linearly independent)

First condition: spanning the image

We start by proving the first condition. Any vector in $im(T)$ has the form $T\vec{v}$ for some $\vec{v} \in V$, and any vector in $V$ can be expressed as a linear combination of basis vectors, say $\vec{v} = a_1\vec{v_1} + a_2\vec{v_2}+ ... + a_k\vec{v_k}$

Then:

$$T\vec{v} = T( a_1\vec{v_1} + a_2\vec{v_2} + ... + a_k\vec{v_k} ) = a_1T\vec{v_1} + a_2T\vec{v_2} + ... + a_kT\vec{v_k}$$

And hence:

$\{T\vec{v_1},...,T\vec{v_k}\}$ spans the image of $T$.

Second condition: linear independence

To prove the second condition, suppose that

$$a_1T\vec{v_1} + a_2T\vec{v_2} + ... + a_kT\vec{v_k} = T(a_1\vec{v_1} + ... + a_k\vec{v_k}) = \vec{0}$$

The kernel of $T$ is trivial, so the only vector that $T$ sends to $\vec{0}$ is $\vec{0}$ itself.

Thus:

$a_1\vec{v_1} + ... + a_k\vec{v_k} = \vec{0}$, and since the $\vec{v_i}$ form a basis they are linearly independent, so $a_1 = ... = a_k = 0$, completing the proof.

### Accuracy and discussion

Solution provided by @clarle. Layout and math typesetting modified by @dellsystem, who also tried to add explanations to the proof but didn't really succeed. Anyone else want to try?

## Question 6

Find explicitly, $A^{15}$, where $A$ is the matrix below. [Note: $2^{15} = 32768$]

$$A = \begin{pmatrix}3 & 0 & -2 \\ 0 & -1 & 0 \\ 1 & 0 & 0 \end{pmatrix}$$

### Solution

First, let's find the eigenvalues from the characteristic polynomial by expanding along the first row:

$$\det(A - \lambda I) = \det \begin{pmatrix} 3 - \lambda & 0 & -2 \\ 0 & -1 - \lambda & 0 \\ 1 & 0 & -\lambda \end{pmatrix}= (3-\lambda)(-1-\lambda)(-\lambda) + 2(-1-\lambda) = (-\lambda-1)(1-\lambda)(2-\lambda)$$

We get eigenvalues of $\lambda_1 = 2, \, \lambda_2 = 1, \, \lambda_3 = -1$. Let's find the associated eigenvectors:

$$\lambda_1 : \begin{pmatrix} 1 & 0 & -2 \\ 0 & -3 & 0 \\ 1 & 0 & -2 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & -2 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad \therefore \vec v_1 = \begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix}$$
$$\lambda_2 : \begin{pmatrix} 2 & 0 & -2 \\ 0 & -2 & 0 \\ 1 & 0 & -1 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad \therefore \vec v_2 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$$
$$\lambda_3: \begin{pmatrix} 4 & 0 & -2 \\ 0 & 0 & 0 \\ 1 & 0 & 1 \end{pmatrix} \mapsto \begin{pmatrix} 2 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 1 \end{pmatrix} \quad \therefore \vec v_3 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$$

The matrix P is thus:
$$\begin{pmatrix} \vec v_1 & \vec v_2 & \vec v_3 \end{pmatrix} = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}$$

We now need to solve for the inverse of P. We can do it through row-reducing an augmented matrix, with the identity matrix on the right. I'm a bit too lazy to type it all out at the moment, so please enjoy this inverted matrix courtesy of Wolfram|Alpha:

$$P^{-1} = \begin{pmatrix} 1 & 0 & -1 \\ -1 & 0 & 2 \\ 0 & 1 & 0 \end{pmatrix}$$

As $P^{-1}AP = D$, where $D$ is the diagonal matrix whose diagonal entries are the eigenvalues of $A$ (in order), we can rearrange the equation a bit:

$$PP^{-1}APP^{-1} = A = PDP^{-1}$$

\begin{align} \therefore A^{15} & = PD^{15}P^{-1} = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}^{15} \begin{pmatrix} 1 & 0 & -1 \\ -1 & 0 & 2 \\ 0 & 1 & 0 \end{pmatrix} \\ & = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} \begin{pmatrix} 2^{15} & 0 & 0 \\ 0 & 1^{15} & 0 \\ 0 & 0 & (-1)^{15} \end{pmatrix} \begin{pmatrix} 1 & 0 & -1 \\ -1 & 0 & 2 \\ 0 & 1 & 0 \end{pmatrix} \\ & = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} \begin{pmatrix} 32768 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} 1 & 0 & -1 \\ -1 & 0 & 2 \\ 0 & 1 & 0 \end{pmatrix} \\ & = \begin{pmatrix} 65535 & 0 & -65534 \\ 0 & -1 & 0 \\ 32767 & 0 & -32766 \end{pmatrix} \end{align}
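Since copying/pasting errors are a concern here, the final answer is easy to verify numerically (assuming numpy is available):

```python
import numpy as np

A = np.array([[3, 0, -2],
              [0, -1, 0],
              [1, 0, 0]])

# integer matrix power -- exact for values this small
A15 = np.linalg.matrix_power(A, 15)
print(A15)
```

The result matches the matrix computed above.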

### Accuracy and discussion

Solution provided by @clarle, using the same layout as that used for question 7 in the Fall 2007 final, so there may be copying/pasting errors.

## Question 7

Find an orthonormal basis for $W$, and an orthonormal basis for $W^{\perp}$, where $W$ is the subspace of $\mathbb{C}^4$ spanned by

$$\left \{ \begin{pmatrix} 1+i \\ 1 \\ 1-i \\ 1 \end{pmatrix}, \begin{pmatrix} 2+2i \\ -1 \\ 2-2i \\ -1 \end{pmatrix} \right \}$$

### Solution

Let:

$$\vec{v_1} = \begin{pmatrix} 1+i \\ 1 \\ 1-i \\ 1 \end{pmatrix} \quad \vec{v_2} = \begin{pmatrix} 2+2i \\ -1 \\ 2-2i \\ -1 \end{pmatrix}$$

A quick inspection of the vectors lets us see that subtracting $\vec{v_1}$ from $\vec{v_2}$ will yield a third vector that is orthogonal to $\vec{v_1}$. If you didn't see this (it's okay, we didn't either), you could have just done Gram-Schmidt and obtained the same result but with more work.

Now that we have a set of orthogonal vectors, we just need to normalize them to obtain our orthonormal basis for $W$:

$$\left \{ \frac{1}{\sqrt{6}} \begin{pmatrix}1+i \\ 1 \\ 1-i \\ 1 \end{pmatrix}, \frac{1}{\sqrt{12}}\begin{pmatrix}1+i \\ -2 \\ 1-i \\ -2 \end{pmatrix} \right \}$$

We can continue with Gram-Schmidt to get the orthonormal basis for $W^{\perp}$ but it may be faster to just solve the system $A\vec{x} = \vec{0}$ where:

$$A = \begin{pmatrix}1+i & 1 & 1-i & 1 \\ 1+i & -2 & 1-i & -2 \end{pmatrix}$$

Solving for the null space of this matrix we get:

$$\left \{ \begin{pmatrix}0 \\ -1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix}i \\ 0 \\ 1 \\ 0 \end{pmatrix} \right \}$$

However, we need to take the conjugate of each of the entries because we are working with the complex inner product1, so doing that and normalizing:

$$\left \{ \frac{1}{\sqrt{2}} \begin{pmatrix}-i \\ 0 \\ 1 \\ 0 \end{pmatrix}, \frac{1}{\sqrt{2}} \begin{pmatrix}0 \\ -1 \\ 0 \\ 1 \end{pmatrix} \right \}$$

This provides the orthonormal basis for $W^{\perp}$, as required.
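As a check (not part of the exam, assuming numpy is available), we can verify that all four vectors are unit vectors and mutually orthogonal under the complex inner product:

```python
import numpy as np

# claimed orthonormal basis for W
w1 = np.array([1+1j, 1, 1-1j, 1]) / np.sqrt(6)
w2 = np.array([1+1j, -2, 1-1j, -2]) / np.sqrt(12)
# claimed orthonormal basis for the orthogonal complement
u1 = np.array([-1j, 0, 1, 0]) / np.sqrt(2)
u2 = np.array([0, -1, 0, 1]) / np.sqrt(2)

# complex inner product <a, b> = sum of a_i * conj(b_i)
herm = lambda a, b: np.vdot(b, a)

for a in (w1, w2, u1, u2):
    print(abs(herm(a, a)))          # each norm-squared should be 1
print(abs(herm(w1, w2)), abs(herm(w1, u1)), abs(herm(w1, u2)),
      abs(herm(w2, u1)), abs(herm(w2, u2)), abs(herm(u1, u2)))  # each ~0
```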

### Accuracy and discussion

Initial solution created by @clarle, but it turns out his solution was wrong because of Wolfram not doing the complex inner product the way we would want it to. Fix attempted by @dellsystem, and it should be mostly accurate now.

## Question 8

Find a unitary matrix $P$ such that $P^T AP$ is diagonal, for:

$$A = \begin{pmatrix} 1 & 0 & 1-i & 0 \\ 0 & 1 & 0 & 1+2i \\ 1+i & 0 & 2 & 0 \\ 0 & 1-2i & 0 & 5 \end{pmatrix}$$

where 0 is an eigenvalue of $A$.

### Solution

First, obtain the characteristic polynomial of the matrix by finding $\det(A - \lambda I)$:

$$A - \lambda I = \begin{pmatrix} 1 - \lambda & 0 & 1-i & 0 \\ 0 & 1 - \lambda & 0 & 1+2i \\ 1+i & 0 & 2 - \lambda & 0 \\ 0 & 1-2i & 0 & 5 - \lambda \end{pmatrix}$$

We can do this problem much more quickly by simplifying the matrix out to upper triangular form, so by row-reducing we obtain:

$$A - \lambda I \mapsto \begin{pmatrix} 1 - \lambda & 0 & 1-i & 0 \\ 0 & 1 - \lambda & 0 & 1+2i \\ 0 & 0 & \frac{\lambda^2 - 3\lambda}{1-\lambda} & 0 \\ 0 & 0 & 0 & \frac{\lambda^2 - 6\lambda}{1-\lambda} \end{pmatrix}$$

Now everything is amazing, and we can just multiply the diagonal to get our characteristic polynomial:

$$\det(A - \lambda I) = (1 - \lambda)(1 - \lambda)\left(\frac{\lambda^2 - 3\lambda}{1-\lambda}\right)\left(\frac{\lambda^2 - 6\lambda}{1-\lambda}\right) = (\lambda^2 - 3\lambda)(\lambda^2 - 6\lambda) = (\lambda)(\lambda)(\lambda - 3)(\lambda - 6)$$

So our eigenvalues are easily seen to be 0 (with algebraic multiplicity 2), 6, and 3.

For an eigenvalue of 0:

$$A - 0I = \begin{pmatrix} 1 & 0 & 1-i & 0 \\ 0 & 1 & 0 & 1+2i \\ 1+i & 0 & 2 & 0 \\ 0 & 1-2i & 0 & 5 \end{pmatrix} \mapsto \begin{pmatrix}1 & 0 & 1-i & 0 \\ 0 & 1 & 0 & 1+2i \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$

So we obtain two eigenvectors, $\begin{pmatrix} 1-i \\ 0 \\ -1 \\ 0 \end{pmatrix}$ and $\begin{pmatrix}0 \\ 1+2i \\ 0 \\ -1\end{pmatrix}$

For an eigenvalue of 3:

$$A - 3I = \begin{pmatrix} -2 & 0 & 1-i & 0 \\ 0 & -2 & 0 & 1+2i \\ 1+i & 0 & -1 & 0 \\ 0 & 1-2i & 0 & 2 \end{pmatrix} \mapsto \begin{pmatrix}1-i & 0 & i & 0 \\ 0 & 1-i & 0 & 0 \\ 0 & 0 & 0 & 1-i \\ 0 & 0 & 0 & 0 \end{pmatrix}$$

So we obtain one eigenvector, $\begin{pmatrix} 1-i \\ 0 \\ 2 \\ 0 \end{pmatrix}$

For an eigenvalue of 6:

$$A - 6I = \begin{pmatrix} -5 & 0 & 1-i & 0 \\ 0 & -5 & 0 & 1+2i \\ 1+i & 0 & -4 & 0 \\ 0 & 1-2i & 0 & -1 \end{pmatrix} \mapsto \begin{pmatrix}1-2i & 0 & 0 & 0 \\ 0 & 1-2i & 0 & -1 \\ 0 & 0 & 1-2i & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$

So we obtain one eigenvector, $\begin{pmatrix} 0 \\ 1+2i \\ 0 \\ 5 \end{pmatrix}$

Everything is already orthogonalized, YESSS. (This is possibly the longest problem thus far.) Put everything as the columns of a matrix and normalize, and you have your unitary matrix, as required.

As an aside, notice that the original matrix $A$ is a Hermitian matrix. Eigenvectors corresponding to distinct eigenvalues of a Hermitian matrix are always orthogonal (and within a repeated eigenvalue's eigenspace we can always choose an orthogonal basis). The same is true over the reals with symmetric matrices.

$$P = \begin{pmatrix} \frac{1-i}{\sqrt{3}} & 0 & \frac{1-i}{\sqrt{6}} & 0 \\ 0 & \frac{1+2i}{\sqrt{6}} & 0 & \frac{1+2i}{\sqrt{30}} \\ \frac{-1}{\sqrt{3}} & 0 & \frac{2}{\sqrt{6}} & 0 \\ 0 & \frac{-1}{\sqrt{6}} & 0 & \frac{5}{\sqrt{30}} \end{pmatrix}$$
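Since the discussion below worries that "the numbers may be very wrong", here is a numerical check (not part of the exam, assuming numpy is available). Because $P$ is unitary with complex entries, the check uses the conjugate transpose $P^*$ (for a real orthogonal matrix this would reduce to $P^T$):

```python
import numpy as np

A = np.array([
    [1,    0,     1-1j, 0   ],
    [0,    1,     0,    1+2j],
    [1+1j, 0,     2,    0   ],
    [0,    1-2j,  0,    5   ],
])

s3, s6, s30 = np.sqrt(3), np.sqrt(6), np.sqrt(30)
P = np.array([
    [(1-1j)/s3, 0,          (1-1j)/s6, 0          ],
    [0,         (1+2j)/s6,  0,         (1+2j)/s30 ],
    [-1/s3,     0,          2/s6,      0          ],
    [0,         -1/s6,      0,         5/s30      ],
])

PH = P.conj().T                        # conjugate transpose of P
print(np.round(PH @ P, 10))            # identity -> P is unitary
print(np.round(PH @ A @ P, 10).real)   # diag(0, 0, 3, 6)
```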

### Accuracy and discussion

Written by @clarle, layout updated by @dellsystem. The numbers may be very wrong though, please add your concerns to this section (or just correct any errors).

## Question 9

Let $V = C[0, 1]$ be the real vector space of continuous real-valued functions defined on the closed interval $[0, 1]$.

(a) If we define

$$\langle f, g\rangle = \int_0^{1} x^3 f(x) g(x) dx$$

show that this gives an inner product on $V$.

(b) Show that, for any $f \in V$, we have

$$\left [\int_0^1 x^5 f(x) dx \right ]^2 \le \frac{1}{8} \int_0^1 x^3 f(x)^2 dx$$

Identify all functions $f$ such that equality holds.

### Solution

(a) To verify that this defines an inner product, we have to show that it respects linearity in the first argument, conjugate symmetry, and positive definiteness.

Linearity in the first argument:

Prove that it respects vector addition:

$$\langle f_1 + f_2, g \rangle = \int_0^{1} x^3 (f_1 + f_2)(x) g(x) dx = \int_0^{1} x^3 [f_1(x) + f_2(x)] g(x) dx = \int_0^{1} x^3 f_1(x) g(x) dx + \int_0^{1} x^3 f_2(x) g(x) dx = \langle f_1, g \rangle + \langle f_2, g \rangle$$

Prove that it preserves scalar multiplication:

$$\langle \alpha f, g\rangle = \int_0^{1} x^3 (\alpha f)(x) g(x) dx = \alpha \int_0^{1} x^3 f(x) g(x) dx = \alpha \langle f, g \rangle$$

Conjugate symmetry:

$$\langle g, f \rangle = \int_0^{1} x^3 g(x) f(x) dx = \int_0^{1} x^3 f(x) g(x) dx = \langle f, g \rangle \text{ for all } f, g\in V$$

Since the vector space is real, this symmetry is all that conjugate symmetry requires.

Positive definiteness:

For any $f \in V, \, f\neq 0$:

$$\langle f, f \rangle = \int_0^{1} x^3 f(x)^2 dx$$

Since $x^3 > 0$ for $x \in (0, 1]$ and $[f(x)]^2 \geq 0$, the integrand is nonnegative over the given interval. Moreover, because $f$ is continuous and not identically zero, $[f(x)]^2$ is strictly positive on some subinterval, so the integral must be strictly positive. If $f = 0$, the integral evaluates to 0, as the integrand is just the zero function.

(b) As we have verified that this defines an inner product, the Cauchy-Schwarz inequality applies. Therefore:

$$|\langle f, x^2 \rangle|^2 = \left ( \int_0^1 x^5 f(x) \,dx \right )^2 \le \langle f, f\rangle \langle x^2, x^2 \rangle \\ \therefore \left ( \int_0^1 x^5 f(x) \,dx \right )^2 \le \int_0^1 x^3 [f(x)]^2\,dx \int_0^1 x^7 \,dx$$

Now we evaluate the integral:

$$\int_0^1 x^7 \,dx = \left [ \frac{1}{8}x^8 \right ]_0^1 = \frac{1}{8} - 0 = \frac{1}{8}$$

Therefore, by Cauchy-Schwarz and integration, we have $\displaystyle \left [\int_0^1 x^5 f(x) dx \right ]^2 \le \frac{1}{8} \int_0^1 x^3 f(x)^2 dx$, as required.

By the equality condition of Cauchy-Schwarz, equality holds exactly when $f$ and $x^2$ are linearly dependent. So equality occurs precisely when $f$ is of the form $cx^2$, where $c$ is a constant.
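As a check (not part of the exam, assuming sympy is available), we can evaluate both sides of the inequality symbolically for a function of the form $cx^2$ and confirm equality:

```python
import sympy as sp

x = sp.symbols('x')
# the inner product from the question
inner = lambda f, g: sp.integrate(x**3 * f * g, (x, 0, 1))

print(inner(x**2, x**2))   # <x^2, x^2> = integral of x^7 = 1/8

# equality case: f = c*x^2 should make both sides equal
c = 7  # an arbitrary constant, chosen for illustration
f = c * x**2
lhs = sp.integrate(x**5 * f, (x, 0, 1))**2
rhs = sp.Rational(1, 8) * sp.integrate(x**3 * f**2, (x, 0, 1))
print(lhs, rhs)
```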

### Accuracy and discussion

Written by @clarle and maybe @dellsystem a bit, I can't remember.

## Question 10

Suppose that $U$ and $W$ are subspaces of the inner product space $V$.

(a) Show that, if $U \subseteq W$, then $W^{\perp} \subseteq U^{\perp}$.

(b) Show that, if $V$ is finite-dimensional and $W^{\perp} \subseteq U^{\perp}$, then $U \subseteq W$.

### Solution

(a) Suppose $U \subseteq W$, so that every vector in $U$ is also a vector in $W$. Recall the definition of the orthogonal complement:

$$W^{\perp} = \{\vec{v} \in V : \langle \vec{v}, \vec{w} \rangle = 0 \text{ for all } \vec{w} \in W \}$$

and similarly for $U^{\perp}$. Now let $\vec{v} \in W^{\perp}$. Then $\langle \vec{v}, \vec{w} \rangle = 0$ for all $\vec{w} \in W$; in particular, since $U \subseteq W$, we have $\langle \vec{v}, \vec{u} \rangle = 0$ for all $\vec{u} \in U$, so $\vec{v} \in U^{\perp}$. As $\vec{v}$ was an arbitrary vector of $W^{\perp}$:

$$W^{\perp} \subseteq U^{\perp}$$

And the proof of part (a) is complete.
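Part (b) is not covered above. One possible argument, using part (a) together with the standard finite-dimensional fact that $(X^{\perp})^{\perp} = X$ for any subspace $X$:

```latex
\text{Assume } V \text{ is finite-dimensional and } W^{\perp} \subseteq U^{\perp}.
\text{ Applying part (a) to the subspaces } W^{\perp} \subseteq U^{\perp} \text{ gives }
(U^{\perp})^{\perp} \subseteq (W^{\perp})^{\perp}.
\text{ Since } V \text{ is finite-dimensional, } (U^{\perp})^{\perp} = U
\text{ and } (W^{\perp})^{\perp} = W, \text{ so } U \subseteq W. \qquad \blacksquare
```

Note that finite-dimensionality is needed precisely for the fact $(X^{\perp})^{\perp} = X$, which can fail in infinite dimensions.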

### Accuracy and discussion

Solution provided by @clarle, who thinks it makes sense from a logical perspective.

1. Because the complex inner product conjugates the second argument: if $A\vec{x} = \vec{0}$, then for each spanning vector $\vec{v}$ we have $\sum_j v_j x_j = 0$, which says exactly that $\langle \vec{v}, \overline{\vec{x}} \rangle = 0$. So it is the conjugates $\overline{\vec{x}}$ of the null space vectors that lie in $W^{\perp}$.