
Thursday, February 21, 2013 CC-BY-NC
Introduction to inner product spaces

Maintainer: admin

In this lecture, we started chapter 6 (inner product spaces). This topic will not be covered on the midterm, but it will be a big part of the final, so don't neglect it.

1 Inner product spaces

Definition: an inner product on $V$ is a function

$$\langle \cdot, \cdot \rangle : V \times V \to F, \quad (u, v) \mapsto \langle u, v \rangle$$

such that the following conditions are satisfied:

  1. Positive definiteness: $\langle v, v \rangle \geq 0$ for all $v \in V$, with equality if and only if $v = 0$
  2. Linearity in the first argument: $\langle au + v, w \rangle = a\langle u, w \rangle + \langle v, w \rangle$
  3. Conjugate symmetry: $\langle v, w \rangle = \overline{\langle w, v \rangle}$

1.1 Examples

The Euclidean inner product: $V = F^n$. Then $\langle (v_1, \ldots, v_n), (w_1, \ldots, w_n) \rangle = v_1\overline{w_1} + \cdots + v_n\overline{w_n}$, which is basically the dot product of the two vectors, but, like, conjugate. (If $F = \mathbb{R}$ then this is just the standard dot product operation that we all know and love.)
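To make this concrete, here is a small sketch (names are mine, not from the lecture) of the Euclidean inner product on $\mathbb{C}^n$ using Python's built-in complex numbers, with spot-checks of two of the defining conditions:

```python
# Illustrative sketch: the Euclidean inner product on F^n = C^n.

def euclid_inner(v, w):
    """<v, w> = v1*conj(w1) + ... + vn*conj(wn)."""
    return sum(vi * wi.conjugate() for vi, wi in zip(v, w))

v = [1 + 2j, 3 - 1j]
w = [2 + 0j, 1 + 1j]

# Conjugate symmetry: <v, w> = conjugate(<w, v>)
assert euclid_inner(v, w) == euclid_inner(w, v).conjugate()

# Positive definiteness: <v, v> is real and >= 0
assert euclid_inner(v, v).imag == 0 and euclid_inner(v, v).real > 0
```

Note the conjugate on the second argument: without it, $\langle v, v \rangle$ could be complex (or negative) for complex vectors, and positive definiteness would fail.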

Integration over a particular region: $V = P_m(F)$. Then

$$\langle p, q \rangle = \int_0^1 p(x)\overline{q(x)}\,dx$$

As an exercise, verify that this satisfies the three conditions above.
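As a sanity check on this integral inner product (a sketch, with helper names of my own choosing; real polynomials, so the conjugate drops out), we can integrate products of polynomials exactly from their coefficient lists:

```python
# Sketch: <p, q> = ∫₀¹ p(x) q(x) dx on real polynomials, computed exactly
# from coefficient lists [a0, a1, ...] representing a0 + a1·x + ...

from fractions import Fraction

def poly_mul(p, q):
    """Coefficient list of the product polynomial."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += Fraction(a) * Fraction(b)
    return out

def poly_inner(p, q):
    """<p, q> = ∫₀¹ p(x) q(x) dx, using ∫₀¹ x^k dx = 1/(k+1)."""
    return sum(c / (k + 1) for k, c in enumerate(poly_mul(p, q)))

p = [1, 0, 1]   # p(x) = 1 + x²
q = [0, 1]      # q(x) = x

# Symmetry (conjugate symmetry is plain symmetry over R)
assert poly_inner(p, q) == poly_inner(q, p)
# Positive definiteness: <p, p> > 0 for p ≠ 0
assert poly_inner(p, p) > 0
```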

1.2 Inner products as linear functions

For a fixed $w \in V$, we can define the function

$$f_w : V \to F, \quad v \mapsto \langle v, w \rangle$$

Let's show that this function is linear:

$$f_w(au + v) = \langle au + v, w \rangle = a\langle u, w \rangle + \langle v, w \rangle = af_w(u) + f_w(v)$$

As an exercise, prove the following properties:

  1. $\langle v, 0 \rangle = \langle 0, v \rangle = 0$
  2. $\langle u, av + w \rangle = \bar{a}\langle u, v \rangle + \langle u, w \rangle$
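Property 2 (conjugate linearity in the second argument) can be spot-checked numerically. An illustrative sketch, not a proof, using the Euclidean inner product on $\mathbb{C}^2$:

```python
# Numeric spot-check of <u, a·v + w> = ā<u, v> + <u, w> in C².

def inner(u, v):
    return sum(ui * vi.conjugate() for ui, vi in zip(u, v))

u = [1 + 1j, 2 - 1j]
v = [0 + 1j, 1 + 0j]
w = [3 - 2j, 1 + 1j]
a = 2 - 3j

lhs = inner(u, [a * vi + wi for vi, wi in zip(v, w)])
rhs = a.conjugate() * inner(u, v) + inner(u, w)
assert abs(lhs - rhs) < 1e-12  # equal up to floating-point error
```

The conjugate on $a$ comes from pulling the scalar out through conjugate symmetry: $\langle u, av \rangle = \overline{\langle av, u \rangle} = \overline{a \langle v, u \rangle} = \bar{a}\langle u, v \rangle$.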

From now on, we assume that $V$ denotes an inner product space, i.e., a vector space equipped with some inner product.

1.3 Norms

For $v \in V$, the norm of $v$, denoted by $\|v\|$, is defined as

$$\|v\| = \sqrt{\langle v, v \rangle}$$

Note that the radicand is always $\geq 0$ due to positive definiteness.

1.3.1 Examples

$V = F^n$, with the Euclidean inner product as defined before:

$$\|(z_1, \ldots, z_n)\| = \sqrt{|z_1|^2 + \cdots + |z_n|^2}$$

$V = P_3(\mathbb{R})$, $p(x) = x^2 + 1$:

$$\|p(x)\| = \sqrt{\langle x^2 + 1, x^2 + 1 \rangle} = \sqrt{\int_0^1 (x^2 + 1)^2\,dx}$$
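The integral here can be finished by hand; a quick sketch (exact arithmetic via fractions, names mine):

```python
# ‖x² + 1‖ in P₃(ℝ): (x² + 1)² = x⁴ + 2x² + 1, so the radicand is
# ∫₀¹ (x⁴ + 2x² + 1) dx = 1/5 + 2/3 + 1.
from fractions import Fraction
from math import sqrt

radicand = Fraction(1, 5) + Fraction(2, 3) + Fraction(1)
assert radicand == Fraction(28, 15)

norm_p = sqrt(radicand)  # ‖x² + 1‖ = √(28/15)
```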

1.3.2 Properties

Just one for now. Proof: left as an exercise.

For any $a \in F$, $v \in V$:

$$\|av\| = |a|\,\|v\|$$
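A quick numeric illustration (not a proof) of this homogeneity property, using the Euclidean norm on $\mathbb{C}^2$:

```python
# Check ‖a·v‖ = |a|·‖v‖ for a sample scalar and vector in C².
from math import isclose, sqrt

def norm(v):
    return sqrt(sum(abs(x) ** 2 for x in v))

a = 3 - 4j            # |a| = 5
v = [1 + 2j, 2 - 1j]
assert isclose(norm([a * x for x in v]), abs(a) * norm(v))
```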

1.3.3 Orthogonality

$u, v \in V$ are orthogonal if $\langle u, v \rangle = 0$. (Note that this is a symmetric relation: by conjugate symmetry, $\langle u, v \rangle = 0$ implies $\langle v, u \rangle = 0$.)

1.3.4 The Pythagorean theorem

If $\langle u, v \rangle = 0$, then:

$$\|u + v\|^2 = \|u\|^2 + \|v\|^2$$

Proof:

$$\|u + v\|^2 = \langle u + v, u + v \rangle = \|u\|^2 + \|v\|^2 + \underbrace{\langle u, v \rangle}_{=0} + \underbrace{\langle v, u \rangle}_{\text{also } 0} = \|u\|^2 + \|v\|^2$$

which was a nice one-liner.

(To see that $\langle u, v \rangle = \langle v, u \rangle = 0$, just use conjugate symmetry. Proof left as an exercise for the astute reader.)

If we are working in $\mathbb{R}^n$, then the converse of the theorem holds true as well: $\|u + v\|^2 = \|u\|^2 + \|v\|^2$ implies that $\langle u, v \rangle = \langle v, u \rangle = 0$.
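The theorem is easy to check on a concrete pair of orthogonal vectors. A small sketch in $\mathbb{R}^3$ (vectors chosen by me for illustration):

```python
# Pythagorean theorem check: for orthogonal u, v, ‖u+v‖² = ‖u‖² + ‖v‖².
from math import isclose

def inner(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

u = [1.0, 2.0, 2.0]
v = [2.0, 1.0, -2.0]   # chosen so <u, v> = 2 + 2 - 4 = 0
assert inner(u, v) == 0

s = [ui + vi for ui, vi in zip(u, v)]
assert isclose(inner(s, s), inner(u, u) + inner(v, v))
```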

1.3.5 The Cauchy-Schwarz inequality

$$|\langle u, v \rangle| \leq \|u\|\,\|v\|$$

This feels familiar.

Equality holds when $u = \lambda v$ for some $\lambda \in F$ (i.e., when the two vectors are scalar multiples of each other).

Proof: let $u, v \in V$. If $v = 0$, then we have 0 on both sides of the inequality and we're done; equality holds trivially. So let's assume that $v \neq 0$. Consider

$$u = \frac{\langle u, v \rangle}{\|v\|^2}v + w$$

where $w = u - \frac{\langle u, v \rangle}{\|v\|^2}v$ is orthogonal to $v$ (a quick computation using linearity shows that $\langle w, v \rangle = 0$). By the Pythagorean theorem,

$$\|u\|^2 = \left\|\frac{\langle u, v \rangle}{\|v\|^2}v + w\right\|^2 = \frac{|\langle u, v \rangle|^2}{\|v\|^2} + \underbrace{\|w\|^2}_{\geq 0} \geq \frac{|\langle u, v \rangle|^2}{\|v\|^2}$$

Consequently, by multiplying both sides by the denominator, we get

$$|\langle u, v \rangle|^2 \leq \|u\|^2\,\|v\|^2$$

and if we take the square root of both sides then we get the Cauchy-Schwarz inequality.
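Finally, a numeric illustration (not a proof) of the inequality and its equality case, using the Euclidean inner product on $\mathbb{C}^2$ with sample vectors of my own choosing:

```python
# Cauchy-Schwarz spot-check in C², including the equality case u = λv.
from math import isclose, sqrt

def inner(u, v):
    return sum(ui * vi.conjugate() for ui, vi in zip(u, v))

def norm(v):
    return sqrt(inner(v, v).real)

u = [1 + 1j, 2 - 1j]
v = [3 + 0j, 1 + 2j]
assert abs(inner(u, v)) <= norm(u) * norm(v)

lam = 2 - 5j
u2 = [lam * vi for vi in v]   # u2 = λv, so equality should hold
assert isclose(abs(inner(u2, v)), norm(u2) * norm(v))
```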