# Winter 2010 Final

Student-created solutions (with some questions solved in-class as well) for the Winter 2010 final exam for MATH 223. You can find the original exam through WebCT, or possibly through docuum if anyone has uploaded it there in the meantime, but all the questions are stated on this page.

## Question 1

Find a basis for each of the following spaces: the row space, column space, and null space of the following matrix over the complex numbers.

$$\begin{pmatrix}1 & 2-i & 0 & 2i & 3-i \\ 3+i & 7-i & 1+i & 0 & 10 \\ 1+2i & 4+3i & 2+2i & -10i & 5+5i \end{pmatrix}$$

### Solution

First we row-reduce the matrix, as follows:

$$\begin{pmatrix}1 & 2-i & 0 & 2i & 3-i \\ 3+i & 7-i & 1+i & 0 & 10 \\ 1+2i & 4+3i & 2+2i & -10i & 5+5i \end{pmatrix} \to \begin{pmatrix}1 & 2-i & 0 & 2i & 3-i \\ 0 & 0 & 1+i & 2-6i & 0 \\ 0 & 0 & 2+2i & 4-12i & 0 \end{pmatrix} \to \begin{pmatrix}1 & 2-i & 0 & 2i & 3-i \\ 0 & 0 & 1 & -2-4i & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$$

In the first step, we subtracted (3+i) times row 1 from row 2, and (1+2i) times row 1 from row 3. In the second step, we subtracted twice the second row from the third, then divided the second row by (1+i) (carrying out the division by multiplying numerator and denominator by the complex conjugate 1-i).

We can then use the row-reduced matrix to find a basis for the row space, which simply consists of the two non-zero rows in the RREF matrix:

$$\left\{ \begin{pmatrix} 1 & 2-i & 0 & 2i & 3-i \end{pmatrix}, \begin{pmatrix} 0 & 0 & 1 & -2-4i & 0 \end{pmatrix} \right\}$$

To find a basis for the column space, we take the two columns in the RREF matrix that contain leading ones, identify the corresponding columns in the original matrix, and use those:

$$\left\{ \begin{pmatrix} 1 \\ 3+i \\ 1+2i \end{pmatrix}, \begin{pmatrix} 0 \\ 1+i \\ 2+2i \end{pmatrix} \right\}$$

The nullspace can be found by solving the homogeneous equation. Let each column in the matrix correspond to $x_1 \ldots x_5$ and parametrise $x_2 =t,\,x_4=s,\,x_5=r$:

$$x_1 + (2-i)t + (2i)s + (3-i)r = 0 \quad \therefore x_1 = (i-2)t - (2i)s + (i-3)r$$
$$x_3 + (-2-4i)s = 0 \quad \therefore x_3 = (2+4i)s$$

So the basis for the nullspace is

$$\left\{ \begin{pmatrix} i -2 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} -2i \\ 0 \\ 2+4i \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} i-3 \\ 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} \right\}$$
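As a quick sanity check (not part of the exam solution), each claimed basis vector of the nullspace should be sent to the zero vector by the original matrix. A sketch in Python, using the built-in complex type (`1j` is $i$):

```python
# Spot-check: each claimed null-space basis vector should be annihilated
# by the original matrix (every entry of A @ v equal to zero).

A = [
    [1, 2 - 1j, 0, 2j, 3 - 1j],
    [3 + 1j, 7 - 1j, 1 + 1j, 0, 10],
    [1 + 2j, 4 + 3j, 2 + 2j, -10j, 5 + 5j],
]

null_basis = [
    [1j - 2, 1, 0, 0, 0],
    [-2j, 0, 2 + 4j, 1, 0],
    [1j - 3, 0, 0, 0, 1],
]

def matvec(M, v):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

residuals = [matvec(A, v) for v in null_basis]
ok = all(abs(entry) < 1e-12 for r in residuals for entry in r)
```

Since the entries are exact Gaussian integers, the residuals come out exactly zero; the tolerance is just a formality.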

### Accuracy and discussion

Corresponds with solutions given in-class. Also, Wolfram alpha agrees. So, most likely accurate. - @dellsystem

I've also done it by hand and it matches. I'd call it accurate - @tahnok

## Question 2

Here $V = P_3(x)$, the vector space of polynomials with real coefficients and degree at most 3. Let $T \,:\, V \to V$ be defined by

$$T(p(x)) = p(x+2) - 2p(x+1)+p(x)$$

(a) (2 marks) Show that T is a linear operator on V.
(b) (4 marks) With $B = (1,x,x^2,x^3)$, find $[T]_B$.
(c) (4 marks) Find a basis for each of $ker(T)$ and $im(T)$.

### Solution

(a) To show that T is a linear operator we simply need to show that it preserves/respects vector addition and scalar multiplication.

For vector addition: Let $p(x),\,q(x)$ be any two polynomials $\in V$. Then:

$$T(p(x)+q(x)) = T((p+q)(x)) = (p+q)(x+2) - 2(p+q)(x+1) + (p+q)(x) = p(x+2) + q(x+2) - 2p(x+1) -2q(x+1) + p(x) + q(x) = T(p(x)) + T(q(x))$$

For scalar multiplication: Let $\alpha \in \mathbb{R}$ (a scalar):

$$T(\alpha p(x)) = \alpha p(x+2) -2 \alpha p(x+1) + \alpha p(x) = \alpha (p(x+2) - 2 p(x+1) +p(x)) = \alpha T(p(x))$$

(b) Now we must find the 4 by 4 matrix representing T with respect to the basis B. To do this, we apply T to each polynomial in the basis, then express the result as a coordinate (column) vector in terms of the basis:

\begin{align}T(1) & = 1 - 2 + 1 = 0 \mapsto \begin{bmatrix} 0 & 0 & 0 & 0 \end{bmatrix}^T \\ T(x) & = (x+2) - 2(x+1) +x = 0 \mapsto \begin{bmatrix} 0 & 0 & 0 & 0 \end{bmatrix}^T \\ T(x^2) & = (x+2)^2 - 2(x+1)^2 + x^2 = x^2 + 4x + 4 - 2x^2 -4x -2 + x^2 = 2 \mapsto \begin{bmatrix} 2 & 0 & 0 & 0 \end{bmatrix}^T \\ T(x^3) & = (x+2)^3 - 2(x+1)^3 + x^3 = 6x+6 \mapsto \begin{bmatrix} 6 & 6 & 0 & 0 \end{bmatrix}^T \end{align}

These correspond to the columns of the matrix $[T]_B$, from left to right. So

$$[T]_B = \begin{bmatrix} 0 & 0 & 2 & 6 \\ 0 & 0 & 0 & 6 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
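The columns of $[T]_B$ can be checked mechanically. Below is a quick sketch (my own, not from the exam): polynomials are coefficient lists $[c_0, c_1, c_2, c_3]$, and `shift` computes the coefficients of $p(x+a)$ via the binomial theorem:

```python
from math import comb

def shift(p, a):
    """Coefficients of p(x + a) for p given as [c0, c1, c2, c3]."""
    out = [0] * len(p)
    for n, cn in enumerate(p):
        # cn * (x + a)^n = cn * sum over k of C(n, k) * a^(n-k) * x^k
        for k in range(n + 1):
            out[k] += cn * comb(n, k) * a ** (n - k)
    return out

def T(p):
    """T(p(x)) = p(x+2) - 2 p(x+1) + p(x), computed coefficient-wise."""
    s2, s1 = shift(p, 2), shift(p, 1)
    return [u - 2 * v + w for u, v, w in zip(s2, s1, p)]

basis = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
columns = [T(p) for p in basis]  # images of 1, x, x^2, x^3
# Assemble [T]_B with those images as columns, left to right.
T_B = [[columns[j][i] for j in range(4)] for i in range(4)]
```

Running this reproduces the matrix above, with the first two columns zero.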

(c) To find the kernel and image space we use the matrix we found above. First we find the nullspace, using the same method we always use - the matrix row-reduces to

$$\begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

so the nullspace is simply $\operatorname{span} \{ \begin{pmatrix} 1 & 0 & 0 & 0 \end{pmatrix}^T, \begin{pmatrix} 0 & 1 & 0 & 0 \end{pmatrix}^T \}$, which corresponds to a kernel of $\operatorname{span} \{ 1, x \}$. Note that the nullspace and the kernel look different here because the nullspace consists of coordinate vectors, while the kernel is expressed in terms of the vector space being used (polynomials of degree at most three in this case).

To find a basis for the column space, we take the columns of the RREF matrix that contain leading ones and use the corresponding columns of the original matrix:

$$\{ \begin{pmatrix} 2 & 0 & 0 & 0 \end{pmatrix}^T, \begin{pmatrix} 6 & 6 & 0 & 0 \end{pmatrix}^T \}$$

which corresponds to an image of $\operatorname{span} \{2, 6x+6 \}$, which is incidentally equal to $\operatorname{span} \{ 1, x \}$. Note that the same distinction exists between the image and the column space as above.

### Accuracy and discussion

Corresponds with solutions arrived at in-class. - @dellsystem

## Question 3

Give the general solution to the following system of equations:

\begin{align} y_1' & = y_1 - 2y_2 \\ y_2' & = y_1 + 4y_2 \end{align}

### Solution

For problems like this, we need to diagonalise the matrix. First we find the coefficient matrix A:

$$A = \begin{pmatrix} 1 & -2 \\ 1 & 4 \end{pmatrix}$$

The characteristic polynomial is:

$$\det(A-\lambda I) = (\lambda -1)(\lambda-4) +2 = \lambda^2 - 5\lambda + 6 = (\lambda-2)(\lambda-3)$$

giving us the eigenvalues $\lambda_1 = 2,\, \lambda_2 = 3$. We then find each associated eigenvector:

• For $\lambda_1:\, A- 2I = \begin{pmatrix} -1 & -2 \\ 1 & 2 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix}$, which gives us the eigenvector $\begin{pmatrix} -2 \\ 1 \end{pmatrix}$.
• For $\lambda_2: \, A-3I = \begin{pmatrix} -2 & -2 \\ 1 & 1 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}$, which gives us the eigenvector $\begin{pmatrix} -1 \\ 1 \end{pmatrix}$.

From the eigenvectors we get the invertible matrix P, and from the eigenvalues, the diagonal matrix D:

$$P = \begin{pmatrix} -2 & -1 \\ 1 & 1 \end{pmatrix} \quad D = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$$

We then make the substitution $\vec z = P^{-1} \vec y$, so that the system becomes $\vec z\,' = D \vec z = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} \vec z$, which gives us $z_1' = 2z_1, \, z_2' = 3z_2$, so $z_1 = c_1e^{2x}, \, z_2 = c_2 e^{3x}$ where $c_1, c_2$ are arbitrary constants. Now that we have solved for $\vec z$, we can multiply it by P on the left to get $\vec y$:

$$\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = P\vec z = \begin{pmatrix}-2 & -1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} c_1 e^{2x} \\ c_2 e^{3x} \end{pmatrix} = \begin{pmatrix} -2c_1 e^{2x} - c_2 e^{3x} \\ c_1 e^{2x} + c_2 e^{3x} \end{pmatrix}$$

So the solution to this system of differential equations is $y_1 = -2c_1 e^{2x} - c_2 e^{3x}$ and $y_2 = c_1 e^{2x} + c_2 e^{3x}$.
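As a numerical check (assuming nothing beyond the work above), we can verify the claimed eigenpairs and confirm that $y_1, y_2$ satisfy the original system at a few sample points, for an arbitrary choice of the constants:

```python
from math import exp

A = [[1, -2], [1, 4]]

# Claimed eigenpairs from the diagonalisation above.
pairs = [(2, [-2, 1]), (3, [-1, 1])]
eig_ok = all(
    abs(sum(a * x for a, x in zip(row, v)) - lam * vi) < 1e-12
    for lam, v in pairs
    for row, vi in zip(A, v)
)

# Check the general solution against the ODE at sample points,
# for sample constants c1 = 1, c2 = -2 (arbitrary choices).
c1, c2 = 1.0, -2.0
y1 = lambda x: -2 * c1 * exp(2 * x) - c2 * exp(3 * x)
y2 = lambda x: c1 * exp(2 * x) + c2 * exp(3 * x)
y1p = lambda x: -4 * c1 * exp(2 * x) - 3 * c2 * exp(3 * x)  # y1'
y2p = lambda x: 2 * c1 * exp(2 * x) + 3 * c2 * exp(3 * x)   # y2'
ode_ok = all(
    abs(y1p(x) - (y1(x) - 2 * y2(x))) < 1e-6
    and abs(y2p(x) - (y1(x) + 4 * y2(x))) < 1e-6
    for x in (0.0, 0.3, 1.0)
)
```

The derivatives `y1p`, `y2p` were taken by hand from the formulas for `y1`, `y2`; the check confirms they match $y_1 - 2y_2$ and $y_1 + 4y_2$.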

### Accuracy and discussion

Corresponds to solutions given in-class. Followed methods from assignment 7. - @dellsystem

Question: Where did you get the $e^{2x}$ and $e^{3x}$ in the solution? - anonymous

From the diagonal matrix. Check the answers to question 3 from assignment 7 for a better explanation of the method. - @dellsystem

## Question 4

Give an explicit, nonrecursive formula for $x_n$, where $x_n$ is defined recursively by

$$x_0 = 0,\,x_1 = 2,\,x_{n+2} = 2x_{n+1} - 2x_n, \quad n \ge 0$$

### Solution

We first find a matrix to represent this problem, then find the eigenvalues of that matrix. Let the system representing this problem be given by

$$\begin{pmatrix}x_{n+2} \\ x_{n+1} \end{pmatrix} = \begin{pmatrix} 2 & -2 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} x_{n+1} \\ x_n \end{pmatrix}$$

where we used the fact that $x_{n+2} = 2x_{n+1} - 2x_n$ and $x_{n+1} = x_{n+1}$ (yes) to find the values of the matrix, which we'll call A. We then find the characteristic polynomial:

$$\det(\lambda I - A) = \lambda^2 - 2\lambda + 2 = (\lambda-1)^2 +1 = 0 \quad \therefore \lambda - 1 = \pm\sqrt{-1} = \pm i$$

which gives us the complex eigenvalues $\lambda_1 = 1 + i,\,\lambda_2 = 1 - i$.

The formula for $x_n$ can somehow magically (and by magic we mean the proof found in assignment 7) be given by $\alpha \lambda_1^n + \beta \lambda_2^n = \alpha (1+i)^n + \beta (1-i)^n$ where alpha and beta are scalars. We can solve for alpha and beta using $x_0$ and $x_1$:

$$x_0 = \alpha + \beta = 0 \quad \therefore \alpha = -\beta$$
$$x_1 = \alpha(1+i) + \beta(1-i) = \alpha (1+i) - \alpha (1-i) = 2i\alpha = 2 \quad \therefore \alpha = -i, \, \beta = i$$

Now we have the explicit formula for $x_n$:

$$x_n = -i(1+i)^n + i(1-i)^n$$
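It's easy to check the closed form against the recursion directly; the sketch below (not part of the exam solution) compares the two for the first twenty terms:

```python
def x_recursive(n):
    """x_n computed from the recursion x_{n+2} = 2 x_{n+1} - 2 x_n."""
    a, b = 0, 2  # x_0, x_1
    for _ in range(n):
        a, b = b, 2 * b - 2 * a
    return a

def x_explicit(n):
    """The closed form derived above: -i(1+i)^n + i(1-i)^n."""
    return -1j * (1 + 1j) ** n + 1j * (1 - 1j) ** n

match = all(abs(x_explicit(n) - x_recursive(n)) < 1e-6 for n in range(20))
```

The closed form returns a complex number whose imaginary part is zero up to rounding, so the comparison is done with a small tolerance.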

### Accuracy and discussion

Corresponds with the solutions given in-class, and with the method described in assignment 7, question 1. - @dellsystem

## Question 5

Suppose that A is a square matrix with minimal polynomial

$$min_A(x) = x^k + a_{k-1}x^{k-1} + \ldots + a_1 x + a_0$$

with $a_0 \neq 0$. Prove that $A^{-1}$ exists and that there is a polynomial $q(x)$ such that $q(A) = A^{-1}$.

(Recall that $min_A(x)$ is the nonzero monic polynomial p(x) of smallest possible degree such that $p(A) = 0$.)

### Solution

We have $A^k + a_{k-1}A^{k-1} + \ldots + a_1 A + a_0 I = 0$. Since we know $a_0 \neq 0$, we can move that term to the other side of the equation and divide by $-a_0$:

$$-\frac{1}{a_0}A^k - \frac{a_{k-1}}{a_0}A^{k-1} - \ldots - \frac{a_1}{a_0}A = I$$

We can then factor out an A:

$$A\left (-\frac{1}{a_0}A^{k-1} - \frac{a_{k-1}}{a_0}A^{k-2} - \ldots - \frac{a_1}{a_0}I \right ) = I$$

So we have $A^{-1} = -\frac{1}{a_0}A^{k-1} - \frac{a_{k-1}}{a_0}A^{k-2} - \ldots - \frac{a_1}{a_0}I$. This shows that A has an inverse, and also gives us the polynomial q(x):

$$q(x) = -\frac{1}{a_0}x^{k-1} - \frac{a_{k-1}}{a_0}x^{k-2} - \ldots - \frac{a_1}{a_0}$$
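As a concrete sanity check on a made-up example (the matrix below is my own choice, not from the exam): $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ has eigenvalues 1 and 3, so its minimal polynomial is $x^2 - 4x + 3$, giving $k = 2$, $a_1 = -4$, $a_0 = 3$, and the formula yields $q(x) = -\frac{1}{3}x + \frac{4}{3}$:

```python
A = [[2.0, 1.0], [1.0, 2.0]]  # min poly: x^2 - 4x + 3  (a1 = -4, a0 = 3)

def matmul(M, N):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1.0, 0.0], [0.0, 1.0]]
# q(A) = -(1/a0) A - (a1/a0) I = -(1/3) A + (4/3) I
qA = [[-A[i][j] / 3 + (4 / 3) * I[i][j] for j in range(2)] for i in range(2)]

product = matmul(A, qA)  # should be the identity if q(A) = A^{-1}
is_inverse = all(
    abs(product[i][j] - I[i][j]) < 1e-12 for i in range(2) for j in range(2)
)
```

Multiplying out confirms $A \, q(A) = I$, i.e. $q(A) = A^{-1}$ for this example.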

### Accuracy and discussion

Corresponds to solutions given in-class, and actually sort of makes sense in general. - @dellsystem

## Question 6

Let $W = span \left \{ \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix} , \begin{pmatrix} 1 \\ 1 \\ 1 \\ 2 \end{pmatrix} \right \}$ be a subspace of $\mathbb{R}^4$. Find an orthonormal basis for each of $W$ and $W^{\perp}$.

### Solution

Let $\vec{v_1} = \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}$ and $\vec{v_2} = \begin{pmatrix} 1 \\ 1 \\ 1 \\ 2 \end{pmatrix}$.

Normally we would use Gram-Schmidt to make the vectors orthogonal to each other, but a quick inspection allows us to realize we can simply subtract $\vec{v_1}$ from $\vec{v_2}$: the resulting vector is nonzero only in the positions where $\vec{v_1}$ is zero. Since the nonzero entries never line up, every term in the inner product of the two vectors is 0, so the two vectors are orthogonal to one another.

We'll assign the vectors like this for clarity:

$$\vec{w_1} = \vec{v_1} = \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \quad \vec{w_2} = \vec{v_2} - \vec{v_1} = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 2 \end{pmatrix}$$

Since these vectors are orthogonal, all we need to do is normalize them, so an orthonormal basis for W is:

$$\left \{ \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \frac{1}{\sqrt{5}} \begin{pmatrix} 0 \\ 1 \\ 0 \\ 2 \end{pmatrix} \right \}$$

For $W^{\perp}$, we could extend our basis and continue on with Gram-Schmidt, but we can also simply solve the system $A\vec{x} = \vec{0}$, where the rows of A are our orthogonal basis vectors for W, and obtain a basis for $W^{\perp}$ that way.

Let $A = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 2 \end{pmatrix}$

Finding the null space of A, we obtain a basis for $W^{\perp}$ whose vectors happen to be orthogonal to each other as well:

$$\left \{ \begin{pmatrix} -1 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ -2 \\ 0 \\ 1 \end{pmatrix} \right \}$$

Finally, normalizing this result, we get an orthonormal basis for $W^{\perp}$:

$$\left \{ \frac{1}{\sqrt{2}} \begin{pmatrix} -1 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \frac{1}{\sqrt{5}} \begin{pmatrix} 0 \\ -2 \\ 0 \\ 1 \end{pmatrix} \right \}$$
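A quick numerical check (not part of the exam solution): the four vectors above should together form an orthonormal set, which simultaneously verifies both bases and the fact that $W \perp W^{\perp}$:

```python
from math import sqrt

w_basis = [
    [1 / sqrt(2), 0, 1 / sqrt(2), 0],
    [0, 1 / sqrt(5), 0, 2 / sqrt(5)],
]
w_perp_basis = [
    [-1 / sqrt(2), 0, 1 / sqrt(2), 0],
    [0, -2 / sqrt(5), 0, 1 / sqrt(5)],
]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

vectors = w_basis + w_perp_basis
# Unit length on the diagonal, pairwise orthogonality off it.
orthonormal = all(
    abs(dot(vectors[i], vectors[j]) - (1 if i == j else 0)) < 1e-12
    for i in range(4)
    for j in range(4)
)
```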

### Accuracy and discussion

Pretty accurate, follows class solution - @clarle

## Question 7

Let $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, $B = \begin{pmatrix} \hat a & \hat b \\ c & d \end{pmatrix}$ and $C = \begin{pmatrix} (1+i)a + 3\hat a + 2i c & (1+i)b + 3\hat b + 2i d \\ (1-2i)c & (1-2i)d \end{pmatrix}$ be matrices with complex entries. If we know that $det(A) = 2i$ and $det(B) = 1+ i$, what is $det(-3C)$?

### Solution

Forgive the editorialising, but this is actually a pretty cool problem, one that I only realised how to do when I was halfway through it. We will use the theorem that $det(C) = det(A) + det(B)$ IF A and B differ in only one row, and C agrees with them in the other rows while that one row of C is the sum of the corresponding rows of A and B. See theorem 2 of Paul's notes on determinant properties for a better explanation. So let's make up a few matrices and figure out their determinants:

$$D = \begin{pmatrix}(1+i)a & (1+i)b \\ c & d \end{pmatrix}$$

which has a determinant of $(1+i)det(A) = (1+i)(2i) = -2+2i$ since it's just the result of multiplying one row of A by $(1+i)$.

$$E = \begin{pmatrix} 3\hat a + 2i c & 3\hat b + 2i d \\ c & d \end{pmatrix}$$

which has a determinant of $3det(B) = 3(1+i) = 3+3i$ since it's just multiplying the first row by three and then performing an elementary row operation that does not change the determinant (adding two rows).

$$F = \begin{pmatrix}(1+i)a + 3\hat a + 2i c & (1+i)b + 3\hat b + 2i d \\ c & d \end{pmatrix}$$

which has a determinant of $det(D) + det(E) = (-2 + 2i) + (3+3i) = 1 + 5i$ due to the theorem mentioned before (as D and E only differ by one row etc).

If we then multiply the second row of F by $(1-2i)$, we get the matrix C. The matrix C has a determinant equal to $(1-2i)det(F) = (1-2i)(1+5i) = 1 +3i +10 = 11 + 3i$

To find the determinant of $-3C$, we simply square $-3$ and multiply it by the determinant of C, as C is a 2x2 matrix (recall that multiplying a [square] matrix by a constant a multiplies the determinant by $a^n$, where n is the number of rows and columns). So $det(-3C) = (-3)^2det(C) = 9(11+3i) = 99+27i$. And there you go.
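The argument is easy to sanity-check with concrete numbers. The entries below are arbitrary choices of mine satisfying $det(A) = 2i$ and $det(B) = 1+i$; any such choice should give $det(-3C) = 99 + 27i$:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Sample entries chosen so that det(A) = 2i and det(B) = 1 + i.
a, b, c, d = 0, -2j, 1, 0          # det(A) = ad - bc = 2i
a_hat, b_hat = 5, -(1 + 1j)        # det(B) = a_hat*d - b_hat*c = 1 + i

A = [[a, b], [c, d]]
B = [[a_hat, b_hat], [c, d]]
C = [
    [(1 + 1j) * a + 3 * a_hat + 2j * c, (1 + 1j) * b + 3 * b_hat + 2j * d],
    [(1 - 2j) * c, (1 - 2j) * d],
]

minus_3C = [[-3 * entry for entry in row] for row in C]
answer = det2(minus_3C)  # expect 99 + 27i
```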

### Accuracy and discussion

As always, corresponds to the solutions given in-class. Plus I actually understand it this time. So it should be fairly accurate. - @dellsystem

## Question 8

Let A be the following real symmetric matrix. Find an orthogonal matrix P such that $P^T AP$ is diagonal, and find the diagonal matrix.

$$A = \begin{pmatrix} 0 & 2 & 3 \\ 2 & 3 & 6 \\ 3 & 6 & 8 \end{pmatrix}$$

Note that -1 is an eigenvalue of A.

### Solution

Recall that an orthogonal matrix is one whose columns form an orthonormal set. To solve this one, we first find the characteristic polynomial of the matrix by computing $\det(A - \lambda I)$:

$$\det(A - \lambda I) = -\lambda^3 + 11\lambda^2 + 25\lambda + 13$$

Since we know that -1 is an eigenvalue, we can factor the characteristic polynomial to find the other eigenvalues easily:

$$\det(A - \lambda I) = -(\lambda + 1)^2(\lambda - 13)$$

Now, finding the eigenvectors associated with the given eigenvalue of -1:

$$A-(-1)I = A+I = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 3 & 6 & 9 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$

Because this eigenvalue has multiplicity two (and its eigenspace turns out to be two-dimensional), we get two independent eigenvectors, $\begin{pmatrix}-2 \\ 1 \\ 0 \end{pmatrix}$ and $\begin{pmatrix} -3 \\ 0 \\ 1 \end{pmatrix}$.

Now, finding the eigenvector associated with the remaining eigenvalue of 13:

$$A-(13)I = A-13I = \begin{pmatrix} -13 & 2 & 3 \\ 2 & -10 & 6 \\ 3 & 6 & -5 \end{pmatrix} \mapsto \frac{1}{3}\begin{pmatrix} 3 & 0 & -1 \\ 0 & 3 & -2 \\ 0 & 0 & 0 \end{pmatrix}$$

From this eigenvalue, we get one eigenvector, $\begin{pmatrix}1 \\ 2 \\ 3 \end{pmatrix}$.

Also note that we could have found the last eigenvalue using the trace of the matrix, as the trace is equal to the sum of the eigenvalues. Since -1 occurs as an eigenvalue twice, the remaining eigenvalue must be $11 - (-1) - (-1) = 13$. Just a shortcut (one that Loveys kind of prompts you to use, with his mention of only one eigenvalue) but not necessary to solve the problem or anything.

Now that we have all of the eigenvectors, we can orthogonalize them via Gram-Schmidt:

Let:

$\vec{v_1} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}, \vec{v_2} = \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix}, \vec{v_3} = \begin{pmatrix} -3 \\ 0 \\ 1 \end{pmatrix}$

Then:

$$\vec{w_1} = \vec{v_1} = \begin{pmatrix}1 \\ 2 \\ 3 \end{pmatrix}$$

$$\vec{w_2} = \vec{v_2} - \frac{\langle \vec{v_2}, \vec{w_1} \rangle}{\langle \vec{w_1}, \vec{w_1} \rangle} \vec w_1 = \begin{pmatrix}-2 \\ 1 \\ 0 \end{pmatrix} - \frac{0}{14}\begin{pmatrix}1 \\ 2 \\ 3 \end{pmatrix} = \begin{pmatrix}-2 \\ 1 \\ 0 \end{pmatrix}$$

$$\vec{w_3} = \vec{v_3} - \frac{\langle \vec{v_3}, \vec{w_1} \rangle}{\langle \vec{w_1}, \vec{w_1} \rangle} \vec w_1 - \frac{\langle \vec{v_3}, \vec{w_2} \rangle}{\langle \vec{w_2}, \vec{w_2} \rangle} \vec w_2 = \begin{pmatrix}-3 \\ 0 \\ 1 \end{pmatrix} - \frac{0}{14}\begin{pmatrix}1 \\ 2 \\ 3 \end{pmatrix} - \frac{6}{5}\begin{pmatrix}-2 \\ 1 \\ 0 \end{pmatrix} = \frac{1}{5}\begin{pmatrix}-3 \\ -6 \\ 5 \end{pmatrix}$$

Now that we have our set of orthogonal vectors, all we need to do now is normalize them. So our matrix $P$ is:

$$P = \begin{pmatrix}\frac{1}{\sqrt{14}} & \frac{-2}{\sqrt{5}} & \frac{-3}{\sqrt{70}} \\ \frac{2}{\sqrt{14}} & \frac{1}{\sqrt{5}} & \frac{-6}{\sqrt{70}} \\ \frac{3}{\sqrt{14}} & 0 & \frac{5}{\sqrt{70}}\end{pmatrix}$$

With the columns of P in this order (the eigenvector for 13 first), the diagonal matrix is

$$P^TAP = D = \begin{pmatrix} 13 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}$$
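As a numerical check (not part of the exam solution), we can verify that this P is orthogonal and that $P^TAP$ comes out diagonal, with 13 and the double eigenvalue -1 on the diagonal:

```python
from math import sqrt

A = [[0, 2, 3], [2, 3, 6], [3, 6, 8]]
P = [
    [1 / sqrt(14), -2 / sqrt(5), -3 / sqrt(70)],
    [2 / sqrt(14), 1 / sqrt(5), -6 / sqrt(70)],
    [3 / sqrt(14), 0, 5 / sqrt(70)],
]

def matmul(M, N):
    """Multiply two 3x3 matrices given as lists of rows."""
    return [[sum(M[i][k] * N[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Pt = [[P[j][i] for j in range(3)] for i in range(3)]  # transpose of P
PtP = matmul(Pt, P)           # should be the identity (P orthogonal)
D = matmul(Pt, matmul(A, P))  # should be diag(13, -1, -1)

expected_D = [[13, 0, 0], [0, -1, 0], [0, 0, -1]]
diag_ok = all(
    abs(D[i][j] - expected_D[i][j]) < 1e-9 for i in range(3) for j in range(3)
)
```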

### Accuracy and discussion

Correct, but there may be a faster way to solve this problem (without having to find the characteristic polynomial). Will check back again if I figure it out. - @clarle

Added the diagonal matrix, and a note to the effect that we don't actually need to find the characteristic polynomial (could just use trace etc). Also added some missing things in the Gram-Schmidt orthogonalisation process. Not sure if your numbers are right or not, but now the formulas should be. - @dellsystem

Two eigenvectors for the eigenvalue of -1 not 0. - @dellsystem

## Question 9

Suppose that V is the vector space of continuous functions on the closed interval $[0, \ln (3)]$. For $f, g \in V$, we define

$$\langle f, g\rangle = \int_0^{\ln 3} e^x f(x) g(x) \,dx$$

(a) (2 marks) Verify that this defines an inner product on V.
(b) (6 marks) Show that, for any $f \in V$,

$$\left ( \int_0^{\ln 3} e^x f(x) \,dx \right )^2 \le 2 \int_0^{\ln 3} e^x [ f(x) ]^2 \,dx$$

(c) (2 marks) Identify those functions in V for which equality holds in part (b).

### Solution

(a) To verify that this defines an inner product, we have to show that it respects linearity in the first argument, conjugate symmetry, and positive definiteness.

Linearity in the first argument:

Prove that it respects vector addition:

$$\langle f_1 + f_2, g \rangle = \int_0^{\ln 3} e^x (f_1+f_2)(x) g(x) \,dx = \int_0^{\ln 3} e^x f_1(x) g(x)\,dx + \int_0^{\ln 3} e^x f_2(x) g(x) \,dx = \langle f_1, g\rangle + \langle f_2, g\rangle$$

Prove that it preserves scalar multiplication:

$$\langle \alpha f, g\rangle = \int_0^{\ln 3} e^x (\alpha f(x)) g(x) \, dx = \alpha \int_0^{\ln 3} e^x f(x)g(x)\,dx = \alpha \langle f, g \rangle$$

[Conjugate] symmetry:

$$\langle g, f \rangle = \int_0^{\ln 3} e^x g(x) f(x) \,dx = \int_0^{\ln 3} e^x f(x) g(x) \,dx = \langle f, g \rangle \text{ for all } f, g\in V$$

Positive definiteness:

For any $f \in V, \, f\neq 0$:

$$\langle f, f \rangle = \int_0^{\ln 3} e^x [f(x)]^2 \,dx$$

Since $e^x$ is always positive and $[f(x)]^2 \geq 0$, the integrand is non-negative over the given interval; and since f is continuous and not identically zero, $[f(x)]^2$ is strictly positive on some subinterval, so the integral must be positive as well. If $f = 0$, the integral evaluates to 0, as the integrand is just the zero function.

(b) As we have verified that this defines an inner product, the Cauchy-Schwarz inequality applies. Therefore:

$$|\langle f, 1 \rangle|^2 = \left ( \int_0^{\ln 3} e^x f(x) \,dx \right )^2 \le \langle f, f\rangle \langle 1, 1 \rangle$$

$$\therefore \left ( \int_0^{\ln 3} e^x f(x) \,dx \right )^2 \le \int_0^{\ln 3} e^x [f(x)]^2\,dx \int_0^{\ln 3} e^x \,dx$$

Evaluate the integral: $\int_0^{\ln 3} e^x \,dx = \left [ e^x \right ]_0^{\ln 3} = 3 - e^0 = 2$

Therefore, by Cauchy-Schwarz and evaluation of integrals, we have $\left ( \int_0^{\ln 3} e^x f(x) \,dx \right )^2 \le 2 \int_0^{\ln 3} e^x [f(x)]^2 \,dx$
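A numerical sanity check (my own, not from the exam): approximate the integrals with a midpoint rule, confirm that the weight integral is 2, and confirm the inequality holds for a sample f (here $f(x) = x$, an arbitrary non-constant choice):

```python
from math import exp, log

def integrate(g, a, b, n=20_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

ln3 = log(3)

# The weight integral: [e^x] from 0 to ln 3 = 3 - 1 = 2.
weight = integrate(exp, 0.0, ln3)

# Both sides of the inequality for the sample f(x) = x.
f = lambda x: x
lhs = integrate(lambda x: exp(x) * f(x), 0.0, ln3) ** 2
rhs = 2 * integrate(lambda x: exp(x) * f(x) ** 2, 0.0, ln3)
```

For a non-constant f like this one the inequality is strict, consistent with the equality case in part (c).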

(c) When is $\left ( \int_0^{\ln 3} e^x f(x) \,dx \right )^2 = 2 \int_0^{\ln 3} e^x [f(x)]^2 \,dx$?

By the equality case of Cauchy-Schwarz, equality holds exactly when f and 1 are linearly dependent, i.e. when f is a constant multiple of 1. So equality occurs precisely when f is a constant function.

### Accuracy and discussion

The solution for (a) corresponds to the answer given in-class. The answers for (b) and (c) were either not given or the person taking down the answers fell asleep or something, but they make sense and they are similar to the solutions provided for assignment 9, question 1, so it should be all good. - @dellsystem

## Question 10

Suppose that V is a finite-dimensional inner product space and $W_1$ and $W_2$ are subspaces of V such that $V = W_1 \oplus W_2$. Prove that $V = W_1^{\perp} \oplus W_2^{\perp}$.

### Solution

Suppose $\vec{v} \in W_1^{\perp} \cap W_2^{\perp}$. Then:

$\langle \vec{v}, \vec{w_1} \rangle = 0$ for all $\vec{w_1} \in W_1$

$\langle \vec{v}, \vec{w_2} \rangle = 0$ for all $\vec{w_2} \in W_2$

Any element of V is $\vec{w_1} + \vec{w_2}$ for some $\vec{w_1} \in W_1$ and $\vec{w_2} \in W_2$. So:

\begin{align} \langle \vec{v}, \vec{w_1} + \vec{w_2} \rangle &= \langle \vec{v}, \vec{w_1} \rangle + \langle \vec{v}, \vec{w_2} \rangle \\ & = 0 + 0 = 0 \end{align}

So:

$$\vec{v} \in V^{\perp} = \{\vec{0}\}, \quad W_1^{\perp} \cap W_2^{\perp} = \{\vec{0}\}$$

Suppose $\dim V = n, \dim W_1 = m$; then $\dim W_2 = n - m$. Then also:

$$\dim W_1^{\perp} = n - m, \quad \dim W_2^{\perp} = m$$

So $\dim (W_1^{\perp} + W_2^{\perp}) = \dim W_1^{\perp} + \dim W_2^{\perp} - \dim (W_1^{\perp} \cap W_2^{\perp}) = (n-m) + m - 0 = n$, and an n-dimensional subspace of V must be all of V, thus:

$$W_1^{\perp} + W_2^{\perp} = V$$

Since we have proven that $W_1^{\perp} \cap W_2^{\perp} = \{\vec{0}\}$ and $W_1^{\perp} + W_2^{\perp} = V$:

$$V = W_1^{\perp} \oplus W_2^{\perp}$$

And thus the proof is complete.

### Accuracy and discussion

Follows the solution given in class. - @clarle