Fall 2007 Final CC-BY-NC

Maintainer: admin

Student-created solutions for the Fall 2007 final exam for MATH 223. You can find the original exam through the McGill library, or on docuum if anyone has uploaded it there in the meantime, but all the questions will be stated on this page.

1 Question 1

Find a basis for the row, column and null space of the following matrix over the complex numbers:

$$\begin{pmatrix} 1 & 1-2i & 1+i \\ i & 2+i & -1+i \\ 2-i & -5i & 4+i \\ 3 & 3-6i & 4+3i \end{pmatrix}$$

1.1 Solution

Row-reduce the matrix:

$$\begin{pmatrix} 1 & 1-2i & 1+i \\ i & 2+i & -1+i \\ 2-i & -5i & 4+i \\ 3 & 3-6i & 4+3i \end{pmatrix} \mapsto \begin{pmatrix} 1 & 1-2i & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$

Row space (take the independent rows in the RREF matrix):

$$\{\begin{pmatrix} 1 & 1-2i & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 & 1 \end{pmatrix}\}$$

Column space (take the columns from the original matrix corresponding to the independent columns in the RREF matrix):

$$\left \{ \begin{pmatrix} 1 \\ i \\ 2-i \\ 3 \end{pmatrix}, \begin{pmatrix} 1+i \\ -1+i \\ 4+i \\ 4+3i \end{pmatrix} \right \}$$

Null space (solve the homogeneous system):

$$\left \{ \begin{pmatrix} -1+2i \\ 1 \\ 0 \end{pmatrix} \right \}$$
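If you don't have Wolfram handy, a quick SymPy sketch (assuming SymPy is installed) confirms the row reduction, pivot columns, and null space:

```python
from sympy import I, Matrix

A = Matrix([
    [1,     1 - 2*I, 1 + I],
    [I,     2 + I,  -1 + I],
    [2 - I, -5*I,    4 + I],
    [3,     3 - 6*I, 4 + 3*I],
])

rref, pivots = A.rref()
print(rref)           # rows (1, 1-2i, 0), (0, 0, 1), then zero rows
print(pivots)         # (0, 2): columns 1 and 3 are the pivot columns
print(A.nullspace())  # [Matrix([-1 + 2*I, 1, 0])]
```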

1.2 Accuracy and discussion

The row-reduction was checked using Wolfram. I couldn't find any existing solutions for this exam, so this should be accurate, but there are no guarantees. - @dellsystem (19:21, 18 April 2011)

2 Question 2

Let V be the real vector space of $3 \times 3$ matrices with real entries. Identify which of the following subsets of V are subspaces of V. Justify your answers.

(a) $\{X \in V | tr(X) = 0 \}$ (recall that tr(X) is the trace of X, i.e. the sum of the diagonal entries of X)
(b) $\{X \in V | X \begin{pmatrix}1 \\ 2 \\ 3 \end{pmatrix} = X^T \begin{pmatrix} 1\\ 2\\ 3 \end{pmatrix} \}$
(c) $\{ X \in V | det(X) = 0 \}$

2.1 Solution

(a) This is a subspace - it is closed under scalar multiplication and vector addition, and contains the "zero vector" (the 3 by 3 matrix whose entries are all 0).

Closure under scalar multiplication:

Let A be any matrix $\in V$ in this subset and let $\alpha$ be a scalar $\in \mathbb{R}$. Let the elements along the main diagonal of A be $a_1, a_2, a_3$. As the trace is 0, we know that $tr(A) = a_1 + a_2+ a_3 = 0$. Then, $tr(\alpha A) = \alpha a_1 + \alpha a_2 + \alpha a_3 = \alpha (a_1 + a_2 + a_3) = \alpha \times 0 = 0$ and so $\alpha A$ also has a trace of 0, and thus is part of this subset. So the subset is closed under scalar multiplication.

Closure under vector addition:

Let A and B be any two matrices $\in V$ in this subset. Let $a_1, a_2, a_3$ and $b_1, b_2, b_3$ be the elements along the diagonal of A and B respectively. Since the trace of both matrices is 0, we know that $a_1 + a_2 + a_3 = b_1+b_2+b_3 = 0$. The sum of A and B would have, along its diagonal, the elements $a_1+b_1, a_2 + b_2, a_3+b_3$, so the trace would be $(a_1 + b_1) + (a_2+b_2) +(a_3+b_3) = (a_1+a_2+a_3) + (b_1+b_2+b_3) = 0$ (by the commutativity of addition of real numbers) and so the sum of any two matrices in the subset would also have a trace of 0 and thus would also be in the subset. So the subset is closed under vector addition.

Contains the zero vector:

The "zero vector" clearly has a trace of 0. So this subset is a subspace.

(b) This is a subspace. It turns out X doesn't have to be symmetric, as one might suspect at first glance, but that doesn't matter. Clearly the zero matrix is in this subset, and it is closed under scalar multiplication since $(\alpha X)^T = \alpha X^T$. It turns out to be closed under vector addition as well: if we let A, B be matrices in this subset and write $\vec v = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$, then $(A+B)\vec v = A\vec v + B \vec v = A^T \vec v + B^T \vec v = (A^T + B^T) \vec v = (A+B)^T \vec v$, so their sum is in the subset as well. So it is a subspace.

(c) This is definitely not a subspace - it's not closed under vector addition. For example:

$$A = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix} \quad B = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad A + B = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}$$

A and B both clearly have a determinant of 0, and are thus in this subset, but their sum (A+B) has a non-zero determinant and is thus not in this subset.
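A quick NumPy check of the counterexample (purely a sanity check):

```python
import numpy as np

A = np.diag([0.0, 1.0, 2.0])
B = np.diag([4.0, 0.0, 0.0])

print(np.linalg.det(A), np.linalg.det(B))  # 0.0 0.0 -> both are in the subset
print(np.linalg.det(A + B))                # 8.0 (non-zero) -> A + B is not
```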

2.2 Accuracy and discussion

Reasonably certain of (a) and (c). Not sure about (b) - it looks like it's not limited to symmetric matrices (possibly), but what bearing does that have on whether or not it's a subspace? To be continued. - @dellsystem (20:14, 18 April 2011)

Finished (b) ... should be right? - @dellsystem (18:20, 19 April 2011)

3 Question 3

(a) Find an invertible matrix P such that $P^{-1}AP$ is diagonal, where

$$A = \begin{pmatrix} 5 & 0 & -6 \\ 0 & 1 & 0 \\ 2 & 0 & -3 \end{pmatrix}$$

(b) Find (explicitly) $A^{10}$ where A is from part (a) of this problem. Note that $3^{10} = 59049$.

3.1 Solution

(a) We just need to diagonalise this matrix. First let's find the eigenvalues from the characteristic polynomial, by expanding along the second row:

$$\det(\lambda I - A) = \det \begin{pmatrix} \lambda - 5 & 0 & 6 \\ 0 & \lambda - 1 & 0 \\ -2 & 0 & \lambda + 3 \end{pmatrix}= (\lambda-1)(\lambda+3)(\lambda-5) + 12(\lambda-1) = (\lambda-3)(\lambda+1)(\lambda-1)$$

We get eigenvalues of $\lambda_1 = 3, \, \lambda_2 = -1, \, \lambda_3 = 1$. Let's find the associated eigenvectors:

$$\begin{align} & \lambda_1: \begin{pmatrix} 2 & 0 & -6 \\ 0 & -2 & 0 \\ 2 & 0 & -6 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & -3 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad \therefore \vec v_1 = \begin{pmatrix} 3 \\ 0 \\ 1 \end{pmatrix} \\ & \lambda_2 : \begin{pmatrix} 6 & 0 & -6 \\ 0 & 2 & 0 \\ 2 & 0 & -2 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad \therefore \vec v_2 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \\ & \lambda_3 : \begin{pmatrix} 4 & 0 & -6 \\ 0 & 0 & 0 \\ 2 & 0 & -4 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \quad \therefore \vec v_3 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \end{align}$$

The matrix P is thus:

$$\begin{pmatrix} \vec v_1 & \vec v_2 & \vec v_3 \end{pmatrix} = \begin{pmatrix} 3 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}$$

(b) We first need to solve for the inverse of P. We can do this by row-reducing an augmented matrix, with the identity matrix on the right:

$$\begin{pmatrix} 3 & 1 & 0 & | & 1 & 0 & 0 \\ 0 & 0 & 1 & | & 0 & 1 & 0 \\ 1 & 1 & 0 & | & 0 & 0 & 1 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & 0 & | & 0.5 & 0 & -0.5 \\ 1 & 1 & 0 & | & 0 & 0 & 1 \\ 0 & 0 & 1 & | & 0 & 1 & 0 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & 0 & | & 0.5 & 0 & -0.5 \\ 0 & 1 & 0 & | & -0.5 & 0 & 1.5 \\ 0 & 0 & 1 & | & 0 & 1 & 0 \end{pmatrix}$$

$$\therefore P^{-1} = \begin{pmatrix} 0.5 & 0 & -0.5 \\ -0.5 & 0 & 1.5 \\ 0 & 1 & 0 \end{pmatrix}$$
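A quick NumPy sanity check of the inverse and the diagonalisation (not part of the exam answer):

```python
import numpy as np

A = np.array([[5.0, 0.0, -6.0],
              [0.0, 1.0, 0.0],
              [2.0, 0.0, -3.0]])
P = np.array([[3.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

P_inv = np.linalg.inv(P)
print(P_inv)                        # matches the matrix above
print(np.round(P_inv @ A @ P, 10))  # diag(3, -1, 1)
```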

As $P^{-1}AP = D$, where D is the diagonal matrix whose diagonal elements are the eigenvalues of A (in the order corresponding to the columns of P), we can rearrange the equation a bit:

$$\begin{align} PP^{-1}APP^{-1} = A = PDP^{-1} \quad \therefore A^{10} & = PD^{10}P^{-1} = \begin{pmatrix} 3 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} \begin{pmatrix} 3 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}^{10} \begin{pmatrix} 0.5 & 0 & -0.5 \\ -0.5 & 0 & 1.5 \\ 0 & 1 & 0 \end{pmatrix} \\ & = \begin{pmatrix} 3 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} \begin{pmatrix} 3^{10} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0.5 & 0 & -0.5 \\ -0.5 & 0 & 1.5 \\ 0 & 1 & 0 \end{pmatrix} \\ & = \begin{pmatrix} 3^{11} & 1 & 0 \\ 0 & 0 & 1 \\ 3^{10} & 1 & 0 \end{pmatrix} \begin{pmatrix} 0.5 & 0 & -0.5 \\ -0.5 & 0 & 1.5 \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} \frac{3^{11} - 1}{2} & 0 & -\frac{3^{11}-3}{2} \\ 0 & 1 & 0 \\ \frac{3^{10}-1}{2} & 0 & -\frac{3^{10}-3}{2} \end{pmatrix} \\ & = \begin{pmatrix} 88573 & 0 & -88572 \\ 0 & 1 & 0 \\ 29524 & 0 & -29523 \end{pmatrix} \end{align}$$
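The final matrix can also be confirmed directly (this is essentially the Wolfram check mentioned below):

```python
import numpy as np

A = np.array([[5, 0, -6],
              [0, 1, 0],
              [2, 0, -3]])

print(np.linalg.matrix_power(A, 10))
# [[ 88573      0 -88572]
#  [     0      1      0]
#  [ 29524      0 -29523]]
```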

3.2 Accuracy and discussion

Confirmed the value of $A^{10}$ using Wolfram, so the method and numbers should be correct, barring typos etc - @dellsystem (20:55, 18 April 2011 )

4 Question 4

Let $P_3(t)$ be the real vector space of polynomials of degree at most 3, and let $V$ be the subspace of $P_3(t)$ consisting of those polynomials $p(t)$ such that $p(0) = p(1)$. Define the function $L : V \to V$ by

$$L(p(t)) = t(t-1)p''(t)$$

where $p''(t)$ denotes the second derivative of $p(t)$ with respect to $t$.

(a) Show that $L$ is a linear operator on $V$.
(b) Find the matrix $[L]_B$, where $B$ is the basis of V given by

$$B = \{ 1, t^2-t, t^3-t^2 \}$$

(c) Find bases for $\ker(L)$ and $\text{im}(L)$.
(d) Find a basis $B'$ of $V$ such that $[L]_{B'} = D$ is diagonal, and find $D$.

4.1 Solution

(a) First, note that $L$ actually maps $V$ into $V$: if $p \in V$, then $p''$ has degree at most 1, so $t(t-1)p''(t)$ has degree at most 3, and $L(p)(0) = 0 = L(p)(1)$, so $L(p) \in V$. Next, we show that L respects vector addition and scalar multiplication.

Vector addition

Let $p_1(t), p_2(t)$ be polynomials in the vector space. Then $L((p_1 + p_2)(t)) = t(t-1)(p_1+p_2)''(t) = t(t-1)p_1''(t) + t(t-1)p_2''(t) = L(p_1(t)) + L(p_2(t))$, since differentiation is linear and multiplication distributes over addition.

Scalar multiplication

Let $\alpha \in \mathbb{R}$. Then $L(\alpha p(t)) = t(t-1)(\alpha p)''(t) = \alpha (t(t-1)p''(t)) = \alpha L(p(t))$, since $(\alpha p)'' = \alpha p''$ by linearity of differentiation.

(b) Let's apply the transformation $L$ to each vector in the basis:

$$\begin{align}L(1) & = t(t-1)0 = 0 = 0(1) + 0(t^2-t) + 0(t^3-t^2) \mapsto \begin{pmatrix} 0 & 0 & 0 \end{pmatrix}^T \\ L(t^2-t) & = t(t-1)(2) = 2t^2-2t = 0(1) + 2(t^2-t) + 0(t^3-t^2) \mapsto \begin{pmatrix} 0 & 2 & 0 \end{pmatrix}^T \\ L(t^3-t^2) & = t(t-1)(6t-2) = (t^2-t)(6t-2) = 6t^3 -8t^2 + 2t = 0(1) -2(t^2-t) +6(t^3-t^2) \mapsto \begin{pmatrix} 0 & -2 & 6 \end{pmatrix}^T \\ \therefore [L]_B & = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 2 & -2 \\ 0 & 0 & 6 \end{pmatrix}\end{align}$$
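A quick SymPy sketch (assuming SymPy is available) to double-check these images:

```python
import sympy as sp

t = sp.symbols('t')
basis = [sp.Integer(1), t**2 - t, t**3 - t**2]

for p in basis:
    Lp = sp.expand(t*(t - 1)*sp.diff(p, t, 2))
    print(p, '->', Lp)
# 1 -> 0
# t**2 - t -> 2*t**2 - 2*t
# t**3 - t**2 -> 6*t**3 - 8*t**2 + 2*t
```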

(c) Let's first row-reduce the matrix, then find the null space and column space:

$$\begin{pmatrix} 0 & 0 & 0 \\ 0 & 2 & -2 \\ 0 & 0 & 6 \end{pmatrix} \mapsto \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$$

Null space

$\left \{ \begin{pmatrix} 1 & 0 & 0 \end{pmatrix}^T \right \}$ which corresponds to the kernel basis $\{ 1 \}$.

Column space

$\left \{ \begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ -2 \\ 6 \end{pmatrix} \right \}$ which corresponds to the image basis $\{ 2(t^2-t), -2(t^2-t) + 6(t^3-t^2) \}$, which has the same span as $\{ t^2-t, t^3-t^2 \}$.

(d) We want a basis $B' = \{ a(t), b(t), c(t) \}$ of $V$ consisting of eigenvectors of $L$, i.e. such that

$$L(a(t)) = \alpha a(t) \quad L(b(t)) = \beta b(t) \quad L(c(t)) = \gamma c(t)$$

So the linear operator maps each basis vector to a scalar multiple of itself. Equivalently, we can find the eigenvectors of $[L]_B$. Since $[L]_B$ is upper triangular, its eigenvalues are the diagonal entries 0, 2 and 6, and row-reducing $[L]_B - \lambda I$ for each gives the eigenvectors:

$$\lambda_1 = 0: \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \quad \lambda_2 = 2: \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \quad \lambda_3 = 6: \begin{pmatrix} 0 \\ -1 \\ 2 \end{pmatrix}$$

(For $\lambda_3 = 6$, the equations are $-6x_1 = 0$ and $-4x_2 - 2x_3 = 0$, so take $x_2 = -1, \, x_3 = 2$.) Translating these coordinate vectors back into polynomials via the basis $B$, we get $1$, $t^2-t$ and $-(t^2-t) + 2(t^3-t^2) = 2t^3 - 3t^2 + t$. As a check, $2t^3-3t^2+t$ vanishes at both 0 and 1 (so it's in $V$), and $L(2t^3-3t^2+t) = t(t-1)(12t-6) = 12t^3 - 18t^2 + 6t = 6(2t^3-3t^2+t)$.

So the basis is $B' = \{ 1, \, t^2-t, \, 2t^3-3t^2+t \}$ and $D = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 6 \end{pmatrix}$

(Note that something like $t$ can't be used as a basis vector here: even though $L$ would send it to 0, the polynomial $t$ isn't in $V$, since it takes the value 0 at $t=0$ but 1 at $t=1$.)

4.2 Accuracy and discussion

Needs someone to check over the calculations and method. - @dellsystem (21:28, 18 April 2011)

Part (d) was wrong as originally posted - the first attempt used $t$ as a basis vector, but $t$ isn't in $V$, since it takes different values at $t=0$ and $t=1$. The solution above now diagonalises $[L]_B$ instead. - @dellsystem (16:57, 20 April 2011)

5 Question 5

Suppose that $A$ is an invertible matrix and that $\lambda$ is an eigenvalue of A. Show that $\lambda^{-1}$ is an eigenvalue of $A^{-1}$.

5.1 Solution

An eigenvalue of A satisfies the equation $A \vec v = \lambda \vec v$, where $\vec v$ is a non-zero vector. We know that $A^{-1}A = AA^{-1} = I$. If we multiply the first equation by $A^{-1}$ on the left, we get $A^{-1} A \vec v = A^{-1} \lambda \vec v$, i.e. $\vec v = \lambda A^{-1} \vec v$, by associativity of scalar multiplication (as the eigenvalue is a scalar). Now, we know that all of the eigenvalues of A are non-zero, because A is invertible: if A had zero as an eigenvalue, then $\det(A - 0I) = \det(A) = 0$, which would mean that A is singular and thus not invertible. So, as A is invertible, none of its eigenvalues are 0, and we can divide by the eigenvalue, resulting in $\frac{\vec v}{\lambda} = \frac{\lambda}{\lambda} A^{-1} \vec v$, which can also be written as $A^{-1}\vec v = \lambda^{-1} \vec v$. Since $\vec v \neq \vec 0$, this proves that $\lambda^{-1}$ is an eigenvalue of $A^{-1}$ (with the same eigenvector $\vec v$).
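As a numeric illustration (not a substitute for the proof; the matrix here is an arbitrary invertible example):

```python
import numpy as np

# Arbitrary invertible example matrix, purely to illustrate the claim
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

print(np.linalg.eigvals(A))                 # 2 and 3 (in some order)
print(np.linalg.eigvals(np.linalg.inv(A)))  # 0.5 and 1/3: the reciprocals
```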

5.2 Accuracy and discussion

Not sure if this is a sufficient proof, but I can't think of anything else - @dellsystem (21:35, 18 April 2011)

Added the fact that you can divide by the eigenvalue because it can't be equal to 0, as A is invertible. - @dellsystem (22:07, 18 April 2011 )
Seems legit to me - @tahnok (00:45, 21 April 2011)

6 Question 6

Suppose the matrices $A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & j \end{pmatrix}$ and $B = \begin{pmatrix} a & b & c \\ d* & e* & f* \\ g & h & j \end{pmatrix}$ have complex entries, $\det(A) = 1 + i$ and $\det(B) = 3-2i$. Find the determinant of

$$\begin{pmatrix} (1+2i)a & 2id + (1-i)d* & g + (-6+3i)a \\ (1+2i)b & 2ie + (1-i)e* & h+(-6+3i)b \\ (1+2i)c & 2if+(1-i)f* & j+(-6+3i)c \end{pmatrix}$$

Justify your answer.

6.1 Solution

Let $C = \begin{pmatrix} a & b & c \\ (1-i)d* & (1-i)e* & (1-i)f* \\ g & h & j \end{pmatrix}$. Since C resulted from multiplying one row of B by (1-i), we can find its determinant easily: $\det(C) = (1-i)\det(B) = (1-i)(3-2i) = 3 - 2i - 3i -2 = 1 -5i$.

Let $D = \begin{pmatrix} a & b & c \\ (2i)d & (2i)e & (2i)f \\ g & h & j \end{pmatrix}$. Since D resulted from multiplying one row of A by (2i), we can find its determinant easily as well: $\det(D) = 2i \det(A) = 2i(1+i) = -2 + 2i$

Let $E = \begin{pmatrix} a & b & c \\ 2id + (1-i)d* & 2ie + (1-i)e* & 2if + (1-i)f* \\ g & h & j \end{pmatrix}$. The first and third rows are the same as those of C and D, while the second row is the sum of the corresponding rows in C and D. Since C and D differ in only one row, and that row in E is the sum of those two rows, we have that $\det(E) = \det(C) + \det(D)$ (the determinant is linear in each row separately). So $\det(E) = (1-5i) + (-2+2i) = -1 -3i$.

Now let's perform a simple row operation on E - add a scalar multiple of the first row to the third row. This doesn't change the determinant, so we have $F = \begin{pmatrix} a & b & c \\ 2id + (1-i)d* & 2ie + (1-i)e* & 2if + (1-i)f* \\ g +(-6+3i)a & h +(-6+3i)b & j+(-6+3i)c \end{pmatrix}$ and $\det(F) = \det(E) = -1-3i$.

Now let's take the transpose of F. Since transposing doesn't change the determinant either, we have that $G = F^T = \begin{pmatrix} a & 2id + (1-i)d* & g +(-6+3i)a \\ b & 2ie + (1-i)e* & h +(-6+3i)b \\ c & 2if + (1-i)f* & j+(-6+3i)c \end{pmatrix}$ and $\det(G) = \det(F) = -1-3i$.

This is starting to look a lot like the matrix we want to find the determinant of. If we let $H = \begin{pmatrix} (1+2i)a & 2id + (1-i)d* & g +(-6+3i)a \\ (1+2i)b & 2ie + (1-i)e* & h +(-6+3i)b \\ (1+2i)c & 2if + (1-i)f* & j+(-6+3i)c \end{pmatrix}$, where H is obtained from G by multiplying the first column by the scalar $(1+2i)$, then we have that $\det(H) = (1+2i)\det(G) = (1+2i)(-1-3i) = -1 -3i -2i +6 = 5 - 5i$. Since H is just the matrix we're trying to find the determinant of, the determinant is just $5-5i$.
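The identity this argument establishes, $\det = (1+2i)\left(2i\det(A) + (1-i)\det(B)\right)$, can be sanity-checked in SymPy with random integer entries (an illustration sketch; the specific values are arbitrary):

```python
import sympy as sp

I = sp.I
# Random integer test entries - arbitrary values, just to check the identity
a, b, c, d, e, f, g, h, j = sp.randMatrix(3, 3, -5, 5)
ds, es, fs = sp.randMatrix(1, 3, -5, 5)  # the starred entries d*, e*, f*

A = sp.Matrix([[a, b, c], [d, e, f], [g, h, j]])
B = sp.Matrix([[a, b, c], [ds, es, fs], [g, h, j]])
M = sp.Matrix([
    [(1 + 2*I)*a, 2*I*d + (1 - I)*ds, g + (-6 + 3*I)*a],
    [(1 + 2*I)*b, 2*I*e + (1 - I)*es, h + (-6 + 3*I)*b],
    [(1 + 2*I)*c, 2*I*f + (1 - I)*fs, j + (-6 + 3*I)*c],
])

# det(M) should equal (1+2i)(2i det(A) + (1-i) det(B));
# plugging in det(A) = 1+i, det(B) = 3-2i gives (1+2i)(-1-3i) = 5-5i
print(sp.expand(M.det() - (1 + 2*I)*(2*I*A.det() + (1 - I)*B.det())))  # 0
```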

6.2 Accuracy and discussion

Method should be correct (see Question 7 from the winter 2010 final, for example, which we were given solutions for), numbers could use some checking. The formatting also sucks, go ahead and fix it if you're so inclined. - @dellsystem (22:01, 18 April 2011)

Formatting is slightly better now - @dellsystem (14:00, 16 December 2012)

7 Question 7

Let V be the real vector space of continuous real-valued functions on the interval $[-1, 1]$, and for $f, g \in V$ let

$$\langle f, g \rangle = \int_{-1}^1 x^4 f(x)g(x)\,dx.$$

(a) Verify that this defines an inner product on V.
(b) Show that, for any $f \in V$,

$$\left ( \int_{-1}^1 x^5 f(x) \,dx \right )^2 \le \frac{2}{7} \int_{-1}^1 x^4 [f(x)]^2 \,dx.$$

For which f does equality hold?

7.1 Solution

(a) This part of the question is actually identical to question 3 from the fall 2009 final, so I'm just going to copy and paste. To verify that this defines an inner product, we have to show that it respects linearity in the first argument, (conjugate) symmetry, and positive definiteness.

Linearity in the first argument

Preserves vector addition: Let $f_1, f_2 \in V$. Then:

$$\langle f_1 + f_2, g \rangle = \int_{-1}^1 x^4 (f_1+f_2)(x)g(x)\,dx = \int_{-1}^1 x^4f_1(x)g(x)\,dx + \int_{-1}^1 x^4 f_2(x)g(x)\,dx = \langle f_1,g \rangle + \langle f_2, g \rangle$$

Preserves scalar multiplication: Let $\alpha \in \mathbb{R}$ (the field of scalars here). Then:

$$\langle \alpha f, g \rangle = \int_{-1}^1 x^4 (\alpha f(x))g(x)\,dx = \alpha \int_{-1}^1 x^4 f(x)g(x)\,dx = \alpha \langle f, g \rangle$$

(Conjugate) symmetry

$\langle g, f \rangle = \int_{-1}^1 x^4 g(x)f(x)\,dx = \int_{-1}^1 x^4 f(x)g(x) \,dx = \langle f, g \rangle$, as multiplication of real numbers is commutative (and everything is real-valued, so no conjugation is needed).

Positive definiteness

For any $f \in V, \, f \neq 0$:

$$\langle f, f \rangle = \int_{-1}^1 x^4 [f(x)]^2 \,dx$$

Since $x^4$ and $[f(x)]^2$ are both non-negative, the integrand is non-negative, so $\langle f, f \rangle \ge 0$. Moreover, if $f \neq 0$, then by continuity $f$ is non-zero on some subinterval of $[-1, 1]$, and on that subinterval the integrand is strictly positive except possibly at the single point $x = 0$; so the integral is strictly positive. So we have positive definiteness.

(b) This differs slightly from question 3 of the fall 2009 final, but the template is the same:

As this defines an inner product, we can of course use the Cauchy-Schwarz inequality, taking the second vector to be $g(x) = x$. Therefore:

$$| \langle f, x \rangle |^2 = \left ( \int_{-1}^1 x^5f(x)\,dx \right )^2 \le \langle f, f \rangle \langle x, x \rangle$$

$$\therefore \left ( \int_{-1}^1 x^5f(x)\,dx \right )^2 \le \int_{-1}^1 x^4 [f(x)]^2 \,dx \int_{-1}^1 x^6 \,dx$$

Evaluate the integral: $\int_{-1}^1 x^6 \,dx = \left [ \frac{x^7}{7} \right ]_{-1}^{1} = \frac{1}{7} - \frac{-1}{7} = \frac{2}{7}$, which is exactly the constant we want.

So by the Cauchy-Schwarz (don't misspell this) inequality we have $\left ( \int_{-1}^1 x^5f(x)\,dx \right )^2 \le \frac{2}{7} \int_{-1}^1 x^4[f(x)]^2 \,dx$ QED.

By Cauchy-Schwarz, equality holds precisely when $f$ and $g(x) = x$ are linearly dependent, i.e. when $f(x) = cx$ for some constant $c$. So there you go.
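A short SymPy sketch to see the inequality (and the equality case) in action; the test functions are arbitrary choices, not from the exam:

```python
import sympy as sp

x = sp.symbols('x')

def check(f):
    lhs = sp.integrate(x**5 * f, (x, -1, 1))**2
    rhs = sp.Rational(2, 7) * sp.integrate(x**4 * f**2, (x, -1, 1))
    return lhs, rhs

print(check(x))         # (4/49, 4/49): equality, since f(x) = x
print(check(x**2 + 1))  # (0, 752/2205): strict (the left side vanishes by symmetry)
```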

7.2 Accuracy and discussion

Should be right, unless I made a mistake in copying and pasting or typo'ed or something. - @dellsystem (22:16, 18 April 2011)

8 Question 8

Let W be the subspace of $\mathbb{R}^4$ spanned by $\begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}$ and $\begin{pmatrix} 4 \\ 3 \\ 4 \\ 3 \end{pmatrix}$.

(a) Find an orthonormal basis for each of $W$ and $W^{\perp}$.
(b) Find the orthogonal projections $Proj_w(v)$ and $Proj_{w^{\perp}}(v)$, where

$$v = \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix}.$$

8.1 Solution

(a) Let $\vec v_1 = \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}$, $\vec v_2 = \begin{pmatrix} 4 \\ 3 \\ 4 \\ 3 \end{pmatrix}$. We set $\vec w_1 = \vec v_1,\, \vec w_2 = \vec v_2 - Proj_{w_1} v_2 =\vec v_2 - \frac{\langle v_2, w_1 \rangle}{\langle w_1, w_1 \rangle}\vec w_1$.

$$\langle v_2, w_1 \rangle = 1(4) + 1(4) = 8, \quad \langle w_1, w_1 \rangle = 2$$

$$\therefore \vec w_2 = \vec v_2 - \frac{8}{2} \vec w_1 = \begin{pmatrix} 4 \\ 3 \\ 4 \\ 3 \end{pmatrix} - \begin{pmatrix} 4 \\ 0 \\ 4 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 3 \\ 0 \\ 3 \end{pmatrix}$$

This is orthogonal to $\vec w_1$. If we normalise both the vectors, we get the following orthonormal basis for W:

$$\left \{ \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \frac{1}{\sqrt{18}} \begin{pmatrix} 0 \\ 3 \\ 0 \\ 3 \end{pmatrix} \right \}$$

To find an orthonormal basis for $W^{\perp}$, we solve the system Ax = 0, where $A = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 4 & 3 & 4 & 3 \end{pmatrix}$, which row-reduces to $\begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{pmatrix}$. A basis for the solution set is $\begin{pmatrix} -1 \\ 0 \\ 1 \\ 0\end{pmatrix}, \begin{pmatrix} 0 \\ -1 \\ 0 \\ 1 \end{pmatrix}$. These are already orthogonal to each other, but we need to normalise them - this gives us the following orthonormal basis for $W^{\perp}$:

$$\left \{ \frac{1}{\sqrt{2}} \begin{pmatrix} -1 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ -1 \\ 0 \\ 1 \end{pmatrix} \right \}$$

(b) Let's use an orthogonal basis for W, rescaling the one we obtained in part (a) for convenience: $\{ \vec u_1, \vec u_2 \} = \left \{ \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \\ 1 \end{pmatrix} \right \}$. Then $Proj_{W}\vec v = \frac{\langle v, u_1 \rangle}{\langle u_1, u_1 \rangle} \vec u_1 + \frac{\langle v, u_2 \rangle}{\langle u_2, u_2 \rangle} \vec u_2$.

$$\langle v, u_1 \rangle = 4,\, \langle u_1, u_1 \rangle = 2, \, \langle v, u_2 \rangle = 6 ,\, \langle u_2, u_2 \rangle = 2$$

$$\therefore Proj_{W}\vec v = \frac{4}{2} \vec u_1 + \frac{6}{2} \vec u_2 = 2 \vec u_1 + 3 \vec u_2 = \begin{pmatrix} 2 \\ 0 \\ 2 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 3 \\ 0 \\ 3 \end{pmatrix} = \begin{pmatrix}2 \\ 3 \\ 2 \\ 3 \end{pmatrix}$$

$$Proj_{W^{\perp}} \vec v = \vec v - Proj_{W}\vec v = \begin{pmatrix} -1 \\ -1 \\ 1 \\ 1 \end{pmatrix}$$
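A NumPy cross-check (np.linalg.qr does the Gram-Schmidt work for us; this is just a sanity check):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0, 0.0])
v2 = np.array([4.0, 3.0, 4.0, 3.0])
v = np.array([1.0, 2.0, 3.0, 4.0])

# QR factorisation gives orthonormal columns spanning W (like Gram-Schmidt)
Q, _ = np.linalg.qr(np.column_stack([v1, v2]))

proj_W = Q @ (Q.T @ v)
print(np.round(proj_W, 10))      # [2. 3. 2. 3.]
print(np.round(v - proj_W, 10))  # [-1. -1.  1.  1.]
```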

8.2 Accuracy and discussion

Need to learn this shit first, will come back to it - @dellsystem (22:19, 18 April 2011)

Learned it - definitely needs someone to look over it though - @dellsystem (19:23, 19 April 2011)

Just looked over it, but it's been a year and a half so I have no idea what's going on. Cleaned up the formatting though. Past me is a lot smarter than present me. - @dellsystem (13:59, 16 December 2012)

9 Question 9

Find a unitary matrix U such that $\bar U^{T} H U$ is diagonal, where H is the following Hermitian matrix:

$$H = \begin{pmatrix} -3 & i & 1 \\ -i & -3 & -i \\ 1 & i & -3 \end{pmatrix}.$$

[Hint: -4 is an eigenvalue of H.]

9.1 Solution

We assume that we're being told one of the eigenvalues so that we don't have to find them all from the characteristic polynomial (the remaining one can be deduced below). So let's first find the eigenvectors associated with the eigenvalue we're given:

$H + 4I = \begin{pmatrix} 1 & i & 1 \\ -i & 1 & -i \\ 1 & i & 1 \end{pmatrix} \mapsto \begin{pmatrix} 1 & i & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$ (multiply the second row by i; every row is then a copy of the first)

This gives us the following eigenvectors: $\begin{pmatrix}-i \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}$

Since this eigenvalue has a geometric multiplicity of two, we can infer that it also has an algebraic multiplicity of two, otherwise the matrix would not be diagonalisable (and Hermitian matrices are always diagonalisable). Therefore there is only one other distinct eigenvalue, and we can find it from the trace of this matrix (as the sum of the eigenvalues is equal to the trace). The trace of this matrix is $(-3) + (-3) + (-3) = -9$, so the remaining eigenvalue is $-9 - (-4) - (-4) = -1$.

Let's find the eigenvector for this eigenvalue:

$$H + I = \begin{pmatrix} -2 & i & 1 \\ -i & -2 & -i \\ 1 & i & -2 \end{pmatrix} \mapsto \begin{pmatrix}1 & i & -2 \\ 1 & -2i & 1 \\ -2 & i & 1 \end{pmatrix} \mapsto \begin{pmatrix} 1 & i & -2 \\ 0 & -3i & 3 \\ 0 & 3i & -3\end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & i \\ 0 & 0 & 0 \end{pmatrix}$$

which gives us the eigenvector $\begin{pmatrix}1 & -i & 1 \end{pmatrix}^T$

Now that we have the eigenvectors, let's orthogonalise them via Gram-Schmidt. Let $\vec v_1 = \begin{pmatrix} -i \\ 1 \\ 0 \end{pmatrix},\, \vec v_2 = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix},\,\vec v_3 = \begin{pmatrix}1 \\ -i \\ 1 \end{pmatrix}$. We note that the third eigenvector is perpendicular to the first two (eigenvectors of a Hermitian matrix from distinct eigenvalues are automatically orthogonal), so let's take that one as our first:

Let $\vec w_1 = \vec v_3$

Let $\vec w_2 = \vec v_2$ (as it's already perpendicular to the first)

Let $\vec w_3 = \vec v_1 - \frac{\langle \vec v_1, \vec w_1 \rangle}{\langle \vec w_1, \vec w_1 \rangle} \vec w_1 - \frac{\langle \vec v_1, \vec w_2 \rangle}{\langle \vec w_2, \vec w_2 \rangle} \vec w_2$

Let's first calculate all the inner products (note that calculating an inner product between two complex vectors requires taking the conjugate of the second vector): $\langle \vec v_1, \vec w_1 \rangle = 0,\, \langle \vec w_1, \vec w_1 \rangle = 3,\,\langle \vec v_1, \vec w_2 \rangle = i,\,\langle \vec w_2, \vec w_2 \rangle = 2$

So $\vec w_3 = \begin{pmatrix} -i \\ 1 \\ 0 \end{pmatrix} -\frac{i}{2} \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} -i \\ 2 \\ -i \end{pmatrix}$

Now we just need to normalise them:

$$\vec w_1 \mapsto \frac{1}{\sqrt{3}} \begin{pmatrix} 1 \\ -i \\ 1 \end{pmatrix} \quad \vec w_2 \mapsto \frac{1}{\sqrt{2}} \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} \quad \vec w_3 \mapsto \frac{1}{\sqrt{6}} \begin{pmatrix} -i \\ 2 \\ -i \end{pmatrix}$$

So the unitary matrix is $U = \begin{pmatrix} \frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{2}} & -\frac{i}{\sqrt{6}} \\ -\frac{i}{\sqrt{3}} & 0 & \frac{2}{\sqrt{6}} \\ \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}} & -\frac{i}{\sqrt{6}} \end{pmatrix}$, which gives $\bar U^T H U = \operatorname{diag}(-1, -4, -4)$.
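A NumPy check that this U is unitary and actually diagonalises H (the eigenvalue order matches the column order):

```python
import numpy as np

H = np.array([[-3,  1j,  1],
              [-1j, -3, -1j],
              [1,   1j, -3]])

U = np.column_stack([
    np.array([1, -1j, 1]) / np.sqrt(3),    # eigenvalue -1
    np.array([-1, 0, 1]) / np.sqrt(2),     # eigenvalue -4
    np.array([-1j, 2, -1j]) / np.sqrt(6),  # eigenvalue -4
])

print(np.round(U.conj().T @ U, 10))      # identity, so U is unitary
print(np.round(U.conj().T @ H @ U, 10))  # diag(-1, -4, -4)
```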

9.2 Accuracy and discussion

lol - @dellsystem (22:20, 18 April 2011)

Learned it. Checked eigenvectors with Wolfram but the numbers in the rest might have errors. Also, temporarily broke the site (accidentally dropped a table lol) but it's okay now, and I will stop trying to free up disk space. Not really relevant to this but I wanted to preserve it here so I will forever (assuming I don't accidentally drop any other tables anyway) have a record of my idiocy - @dellsystem (02:18, 20 April 2011)

The calculations of the inner products are wrong, if i'm not mistaken eigenvectors from distinct eigenvalues should be already orthogonal. You should only need to do gram schmidt on your v2 - anonymous

Should be fixed now, thanks. Needs proofreading. - @dellsystem (16:18, 20 April 2011)

I'm pretty sure that eigen-vectors found from different eigenvalues are naturally orthogonal to each other. You shouldn't need to apply G-S on the third vector (the one found with e-value = -1) - anonymous

Yep, I know. I only applied Gram-Schmidt to one of the eigenvectors found from the first eigenvalue, because although it is perpendicular to the eigenvector from the other eigenvalue, it's not perpendicular to the other one. Check the numbers, it should be right. - @dellsystem (16:43, 20 April 2011)

10 Question 10

Suppose that V is a real inner product space. Prove the following version of the Pythagorean theorem.

If $v, w \in V$ are orthogonal, then

$$\left \|v + w \right \|^2 = \left \| v \right \|^2 + \left \| w \right \|^2$$

10.1 Solution

If $v$ and $w$ are orthogonal, then $\langle v,w \rangle = 0$. So:

$$\left \| v+w \right \|^2 = \langle v+w, v+w \rangle = \langle v,v \rangle + \langle v,w \rangle + \langle w,v \rangle + \langle w,w \rangle$$

Since $\langle v,w \rangle = \langle w,v \rangle$ (as it's a real inner product space):

$$\left \| v+w \right \|^2 = \langle v,v \rangle + 2 \langle v,w \rangle + \langle w,w \rangle$$

Since $\langle v, w \rangle = 0$, $\left \| v + w \right \|^2 = \langle v, v \rangle + \langle w, w \rangle$.

And as $\langle v, v \rangle = \left \| v \right \|^2,\, \langle w, w \rangle = \left \| w \right \|^2$, $\therefore \left \| v + w \right \|^2 = \left \| v \right \|^2 + \left \| w \right \|^2$ QED
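A tiny NumPy illustration in $\mathbb{R}^3$ (an arbitrary orthogonal pair, just to see the identity numerically):

```python
import numpy as np

v = np.array([1.0, 2.0, 0.0])
w = np.array([-2.0, 1.0, 3.0])

print(v @ w)  # 0.0 -> v and w are orthogonal
print(np.linalg.norm(v + w)**2)                     # 19.0 (up to rounding)
print(np.linalg.norm(v)**2 + np.linalg.norm(w)**2)  # 19.0
```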

10.2 Accuracy and discussion

Looks cool, will attempt it later - @dellsystem (22:23, 18 April 2011)

Proved it, I'll be glad if someone will take their time to convert it to LaTeX though. - Emir (04:01, 20 April 2011)

Brilliant, makes perfect sense. Just converted it to LaTeX. You should consider signing up for an account! - @dellsystem (13:00, 20 April 2011)