Proof strategies and definitions (CC-BY-NC)

Maintainer: admin

Definitions of important concepts and proof strategies for important results. These are things that you should make an effort to learn. Presented in vocabulary list format. $\DeclareMathOperator{\n}{null} \DeclareMathOperator{\r}{range}$

View flashcards on StudyBlue

Chapter 1

Vector space
A set, with addition (comm, assoc, identity, inverse, dist) and scalar mult (assoc, identity, dist)
Subspace
Subset. Contains 0, closed under addition, closed under scalar mult.
Subspaces of $\mathbb R^2$
Zero, lines passing through origin, all of $\mathbb R^2$
Sum of subspaces
Subspace, consisting of elements formed as a sum of vectors from each subspace. This is the smallest subspace containing all of the components.
Direct sum
Unique representation. Must be sum, and only 1 way to write 0 (for two subspaces: intersection contains only zero).
Intersection of subspaces
always a subspace
Union of subspaces
only a subspace if one is contained within the other
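Quick example (not from the book): in $\mathbb R^2$, let $U_1$ be the $x$-axis and $U_2$ the $y$-axis. Then $(1,0)+(0,1) = (1,1)$ lies in neither, so $U_1 \cup U_2$ is not a subspace; but $U_1 \cap U_2 = \{0\}$ and $U_1 + U_2 = \mathbb R^2$, so $U_1 \oplus U_2 = \mathbb R^2$.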
Addition on vector spaces
comm, assoc, identity is $\{0\}$

Chapter 2

Span
subspace. set of all linear combos
Finite-dimensional vector space
spanned by a finite list of vectors
Linear independence
A linear combo is 0 only when all the coefficients are 0
If the list is linearly dependent, one vector can be written as a linear combo of the previous ones
and that vector can be removed without affecting the span
Length of spanning sets and linearly independent lists
Spanning lists are never shorter than linearly independent lists. Proof: swap vectors from the spanning list out for vectors from the lin ind list one at a time; the list still spans at every step, so it can't run out early.
Basis
linearly independent spanning list
any vector can be written as a unique combination of basis vectors
any lin ind list can be reduced to a basis
any spanning set can be extended to a basis
Existence of a direct sum
Given a subspace $U$ of $V$, there is a $W \subseteq V$ such that $U \oplus W = V$. Use the basis vectors, show $V$ is sum, intersection is 0 (from lin ind).
Dimension of a vector space
Number of vectors in a basis (any)
Lunch in chinatown
$\dim(U_1 + U_2) = \dim(U_1) + \dim(U_2) - \dim(U_1 \cap U_2)$; proof via bases (only holds for 2 subspaces)
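Worked check (not from the book): two distinct planes $U_1, U_2$ through the origin in $\mathbb R^3$ intersect in a line, so $\dim(U_1+U_2) = 2 + 2 - 1 = 3$, i.e. the two planes together fill all of $\mathbb R^3$.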

Chapter 3

Linear map
Function $T: V \to W$, satisfies additivity and homogeneity
Assoc, dist, identity I; not comm
Product of linear maps
Composition, in the same order ($(ST)(v) = S(Tv)$)
Nullspace
Things in the domain that go to 0
Always a subspace
Injective $\Leftrightarrow$ nullspace only contains 0
Range
Everything that vectors in the domain map to, i.e. $\{Tv : v \in V\}$
Always a subspace
Surjective $\Leftrightarrow$ range is $W$
Rank-nullity theorem
$\dim V = \dim \n T + \dim \r T$
Proof: extend a basis of the nullspace to a basis of $V$ and apply $T$; the nullspace part goes to 0, so the images of the added vectors span the range. For lin ind, a combo of those images being 0 puts the corresponding combo in the nullspace; then use lin ind of the full basis of $V$
Corollary: no injective maps to a smaller VS, no surjective maps to a larger VS
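Quick example (not from the book): differentiation $D$ on $\mathcal P_3(\mathbb R)$ (dim 4) has $\n D$ = the constants (dim 1) and $\r D = \mathcal P_2(\mathbb R)$ (dim 3), and indeed $4 = 1 + 3$; in particular $D$ is neither injective nor surjective as an op on $\mathcal P_3(\mathbb R)$.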
Matrix of a linear map
Each column is $T$ applied to a basis vector of $V$, written in terms of the basis of $W$ (row $j$ holds the coefficient of the $j$th basis vector of $W$)
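Example (same $D$ as above, now on $\mathcal P_2(\mathbb R)$ with basis $(1, x, x^2)$): $D1 = 0$, $Dx = 1$, $Dx^2 = 2x$, so the matrix is $\displaystyle \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{pmatrix}$; each column lists the coefficients of $Dv_k$ in the basis.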
Invertibility of operators
$T$ is invertible if $\exists \, S$ (unique) with $ST = TS = I$
Invertibility $\Leftrightarrow$ bijectivity. Proof: show injectivity, surjectivity for one direction, and prove linearity of the inverse for the other
Isomorphic vector spaces
There is an invertible linear map (and thus bijection) between them
Same dimension
Dimension of the vector space of linear operators
$\dim \mathcal L(V, W) = \dim V \cdot \dim W$
Operator
Linear map from $V$ to $V$
Proving direct sum
To show that $U \oplus W = V$, show that their intersection is 0, and that you can make an arbitrary vector from $V$ by taking one from each
Product of injective maps
Also injective. Proof: apply argument inductively, starting from the end
Injective maps and linear independence
Applying an injective map to each element of a lin ind list gives a lin ind list. Proof: if a linear combo of the images is 0, it equals $T$ of the same combo of the originals, which is then in the nullspace, so that combo is 0, so all coeffs are 0
Invertibility of a product
$ST$ is invertible $\Leftrightarrow$ $S, T$ both invertible. Proof: $T$ is inj, $S$ is surj; multiply to get $I$ in the other direction

Chapter 4

Roots of polynomials
$p(\lambda) = 0$
$p(z) = (z-\lambda)q(z)$. Proof: $p(z) - p(\lambda) = p(z) -0 $ and when you write it all out you can factor out $(z-\lambda)$ from each.
At most $\deg p$ distinct roots
Division algorithm for polynomials
$q = sp + r$ where $\deg r < \deg p$
Proof of uniqueness: assume two different representations, then $(s-s')p = r'-r$; look at degrees.
Fundamental theorem of algebra
Every nonconstant polynomial over $\mathbb C$ has a root; hence a unique factorisation into linear factors
Unique factorisation of polynomials over the reals
Linear and quadratic terms, all irreducible
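Example: $z^4 - 1 = (z-1)(z+1)(z^2+1)$ over $\mathbb R$, with the quadratic $z^2+1$ irreducible; over $\mathbb C$ it splits completely as $(z-1)(z+1)(z-i)(z+i)$.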

Chapter 5

Invariant subspace
If $u \in U$, then $Tu \in U$
Means the range of $T|_U$ is a subspace of $U$ (so $T|_U$ is an operator on $U$)
Both range and nullspace are invariant for ops
Invariant subspaces of a one-dimensional space
Only the trivial ones (zero, the whole space)
Invariant subspaces of a two-dimensional space
Over $\mathbb R$, some operators (e.g. rotation) have only the trivial ones; over $\mathbb C$ every operator has others (spans of eigenvectors)
Eigenvalue
$\lambda$ such that $Tv = \lambda v$ for some $v \neq 0$
$T$ has a one-dimensional invariant subspace
$T-\lambda I$ is not injective/invertible/surjective (so it sends some nonzero vector to 0)
Eigenvectors
In the nullspace of $T-\lambda I$ for an eigenvalue $\lambda$
If every vector is an evector, then $T$ is $aI$ ($a$ is the only evalue)
Linear independence of eigenvectors from distinct eigenvalues
Proof: take the first $v_k$ in the span of the previous ones, write it as a linear combo of them and apply $T$ to get $\lambda_i$ as the coeff on each term; multiply the original combo by $\lambda_k$ and subtract; since $\lambda_k \neq \lambda_i$ and the previous vectors are lin ind, the coefficients must all be 0, so $v_k = 0$, contradiction
Number of eigenvalues
Max: $\dim V$ since the eigenvectors are linearly independent
Rotation
No real eigenvalues (for rotation by $\theta \neq 0, \pi$ in $\mathbb R^2$); over $\mathbb C$ the eigenvalues are $e^{\pm i\theta}$
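Worked example (not from the book): rotation by $\theta$ in $\mathbb R^2$ has matrix $\displaystyle \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$, char poly $z^2 - 2\cos\theta\, z + 1$, and roots $\cos\theta \pm i\sin\theta = e^{\pm i\theta}$, which are non-real unless $\theta$ is a multiple of $\pi$.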
Eigenvalues over a complex vector space
Every operator has at least one. Proof: $(v, Tv, \ldots, T^nv)$ is not lin ind, so write 0 as a combo, turn this into a poly, factor over $\mathbb C$; then some factor $(T - \lambda I)$ is not injective
Upper triangular matrix
Everything below the diagonal is 0
Every op has one wrt some basis
Means that $Tv_k$ is in the span of $(v_1, \ldots, v_k)$ for each basis vector $v_k$
Means that the span of the first $k$ basis vectors is invariant under $T$
Zeroes and upper triangular matrices
$T$ is invertible $\Leftrightarrow$ no zeroes on the diagonal of a UT matrix. Proof: if some diagonal entry is 0, $T$ maps the span of the first $k$ basis vectors into the span of the first $k-1$, so it's not injective, hence not invertible. Conversely, if $T$ is not invertible, write $Tv = 0$ for some $v \neq 0$ as a combo of basis vectors with highest nonzero coeff on $v_k$; comparing $v_k$ coefficients shows the diagonal entry $\lambda_k = 0$
Diagonal elements of upper triangular matrices
These are eigenvalues. Proof: $T - \lambda I$ is not invertible iff there is a zero on the diagonal, which happens if $\lambda = \lambda_i$ for some $i$
Diagonal matrices (TFAE)
An operator has a diagonal matrix
Evectors form a basis
Sum of one-dimensional invariant subspaces is the whole space
The sum of nullspaces of $T - \lambda_k I$ is the whole space
Invariant subspaces on real vector spaces
There is always one of dim 1 or 2 (proof by unique factorisation)
Eigenvalues on odd-dimensional spaces
Every op has one
Invariance under every operator
Then $U$ must be trivial (0 or $V$). Proof: Suppose there are a nonzero $v \in U$ and a $w \notin U$. Extend $v$ to a basis and define an op with $Tv = w$ (send the rest of the basis anywhere); then $Tv \notin U$, contradiction
Eigenspace invariance if $ST = TS$
Any eigenspace (or nullspace of $T-\lambda I$) of $T$ is invariant under $S$. Proof: If $v$ is in the eigenspace, then $(T-\lambda I)v = 0$. Apply to $Sv$, by distributivity we get 0, so $Sv$ is in eigenspace too, so the eigenspace is invariant
Number of distinct eigenvalues
At most $\dim \r T + 1$. Proof: an eigenvector for a nonzero eigenvalue lies in the range (since $v = T(v/\lambda)$), and eigenvectors for distinct eigenvalues are lin ind; the eigenvalue 0 can add at most one more
Eigenvalues of the inverse
$1/\lambda$. Proof is easy
Eigenvalues of $ST$ and $TS$
The same
Nullspace and range of $P^2=P$
The direct sum is $V$. Proof: $v-Pv$ is in the nullspace, $Pv$ is in the range

Chapter 6

Inner product
function that takes in $u, v \in V$, outputs something from the field
Positive definiteness ($\langle v, v \rangle \geq 0$, with equality only for $v = 0$), linearity in the first arg, conjugate symmetry; conjugate homogeneity in the second arg follows
Standard inner products
Euclidean (dot product with conjugate of second vector) on $\mathbb F^n$
Integration of the product ($\langle p, q \rangle = \int p\overline{q}$ over a fixed interval) for polynomial/function spaces
Inner products and linear maps
$f: v \to \langle v, w\rangle$ for fixed $w$ is a linear map (by the way IPs are defined) thus $\langle w, 0 \rangle = \langle 0, w \rangle = 0$
Norm
$\|v \| = \sqrt{\langle v, v \rangle}$
Inherits positive definiteness (only 0 if $v$ is)
$\|av\|^2 = |a|^2\|v\|^2$
Orthogonal
$\langle u, v \rangle = 0$
0 is orthogonal to everything, and is the only vector orthogonal to itself
Pythagorean theorem
If $u$ and $v$ are orthogonal, then $\|u+v\|^2 = \|u\|^2 + \|v\|^2$
Proof: $\|u+v\|^2 = \langle u+v, u+v \rangle = \|u\|^2 + \|v\|^2 + \langle u, v \rangle + \langle v, u \rangle$ but the last two terms are 0 since they're orthogonal
Orthogonal decomposition
$v = au + w$ where $w$ is ortho to $u$. Set $w = v - au$, choose $a = \langle v, u \rangle / \|u\|^2$. Derivation: from $\langle v-au, u \rangle = 0$, rewrite in terms of norms
Cauchy-Schwarz inequality
$|\langle u, v \rangle | \leq \|u\|\|v\|$, equality if scalar multiple
Proof: Assume $u \neq 0$; write the ortho decomp $v = \frac{\langle v, u \rangle}{\|u\|^2}u + w$; Pythagorean gives $\|v\|^2 = \frac{|\langle v, u \rangle|^2}{\|u\|^2} + \|w\|^2$; drop $\|w\|^2$ to get an inequality and multiply both sides by $\|u\|^2$; equality only if $w=0$
Triangle inequality
$\|u+v\| \leq \|u\| + \|v\|$
Proof: Write it out, use Cauchy-Schwarz, equality if non-negative scalar mult
Parallelogram equality
$\|u+v\|^2 + \|u-v\|^2 = 2(\|u\|^2+\|v\|^2)$
Proof: from inner products
Orthonormal list
any two vectors are ortho, each has a norm of 1
$\| a_1e_1 + \ldots + a_ne_n\|^2 = |a_1|^2 + \ldots + |a_n|^2$ by Pythagorean theorem
Always linearly independent (use the previous thing, try to make 0)
Linear combinations of orthonormal bases
$v = \langle v, e_1\rangle e_1 + \ldots + \langle v, e_n \rangle e_n$
Proof: $v = a_1e_1 + \ldots$, take inner product of $v$ with each $e_j$, get $a_j$
Also, $\|v\|^2 = |\langle v, e_1 \rangle |^2 + \ldots$
Gram-Schmidt
For creating orthonormal bases (for the same VS) out of lin ind lists
$e_1 = v_1 / \|v_1\|$; $e_j = v_j-\langle v_j, e_1\rangle e_1 - \cdots$ then divide by the norm (not zero since the $v$s are lin ind)
Proof: Norms are clearly 1. For orthogonality, take the inner product $\langle e_j, e_k \rangle$ with $k < j$; most terms disappear because of pairwise ortho, leaving $\langle v_j, e_k \rangle - \langle v_j, e_k \rangle = 0$. Also, each $v_j$ is in the span (just rearrange the formulas), so they span the same space.
Corollary: any FDIPS has an ortho basis
If the list is linearly dependent, we get a division by 0
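Worked example (not from the book): orthonormalise $v_1 = (1,1,0)$, $v_2 = (1,0,1)$ in $\mathbb R^3$. $e_1 = (1,1,0)/\sqrt 2$; then $v_2 - \langle v_2, e_1\rangle e_1 = (1,0,1) - \tfrac12(1,1,0) = (\tfrac12, -\tfrac12, 1)$, with norm $\sqrt{3/2}$, so $e_2 = (1,-1,2)/\sqrt 6$. Check: $\langle e_1, e_2\rangle = (1 - 1 + 0)/\sqrt{12} = 0$.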
Upper triangular matrices and orthonormal bases
If an op has a UT matrix wrt some basis, it has one wrt an ortho one (so in a complex VS, this is every op)
Proof: span of $(v_1, \ldots, v_j)$ is invariant under $T$ for each $j$, apply Gram-Schmidt to get an ortho basis
Orthogonal complement
$U^{\perp}$ = set of all vectors orthogonal to every vector in $U$
Is a subspace
$(U^{\perp})^{\perp} = U$ (uses the direct sum $U \oplus U^{\perp} = V$, so finite dimension)
Direct sum of orthogonal complement
$U \oplus U^{\perp} = V$
Proof: use an ortho basis for $U$. $v = (\langle v, e_1\rangle + \ldots) + (v -\langle v, e_1\rangle - \ldots) = u + w$. Clearly $u \in U$, and $w\in U^{\perp}$ because $\langle w, e_j\rangle = 0$ (so $w$ is ortho to every basis vector of $U$). Intersection is obviously only 0, by positive definiteness
Orthogonal projections
$P_Uv$ maps $v$ to the part of its ortho decomp that is in $U$
Range is $U$, nullspace is $U^{\perp}$
$v-P_Uv$ is in the nullspace
$P_U^2 = P_U$
$\|P_Uv \| \leq \|v \|$
Minimisation problems
Find $u \in U$ to minimise $\|v-u\|$ for fixed $v$. Answer: $u = P_Uv$!
Proof: $\|v-P_Uv \|^2 \leq \|v-P_Uv\|^2 + \|P_Uv-u\|^2 = \|v-u\|^2$ by Pythagorean theorem (applicable since vectors are ortho; middle terms cancel out), equality only when $u = P_Uv$
First, find an ortho basis for the subspace we're interested in (e.g., polynomials), take inner product of $v$ with each basis vector and use as coefficient
Linear functional
$\varphi: V \to \mathbb F$ linear; every such $\varphi$ sends $v$ to $\langle v, u\rangle$ for some fixed $u$ (Riesz representation)
Existence of $u$: write $v$ in terms of an ortho basis, use additivity/homogeneity of $\varphi$ to get $u = \overline{\varphi(e_1)}e_1 + \ldots + \overline{\varphi(e_n)}e_n$
Uniqueness of $u$: assume $\langle v, u_1 \rangle = \langle v, u_2 \rangle$, so $0 = \langle v, u_1-u_2\rangle$ for any $v$, including $v = u_1 - u_2$, thus $u_1-u_2 = 0$ QED
Adjoint
If $T \in \mathcal L(V, W)$, $T^* \in \mathcal L(W, V)$ such that $\langle Tv, w \rangle = \langle v, T^*w \rangle$
Always a linear map
Eigenvalues are conjugates of $T$'s eigenvalues (proof: if $\lambda$ is not an eigenvalue of $T$, then $T-\lambda I$ is invertible; take adjoints, so $T^* - \overline{\lambda} I$ is invertible too, thus $\overline{\lambda}$ is not an eigenvalue of $T^*$; same argument in reverse)
If injective, original is surjective, etc (all 4 possibilities)
The adjoint operator
The operator analogue of [conjugate] matrix transposition
Additivity ($(S+T)^* = S^* + T^*$), conjugate homogeneity ($(aT)^* = \overline{a}T^*$)
$(T^*)^* = T$, $I^* = I$
$(ST)^* = T^*S^*$
Nullspace of $T^*$ is the orthogonal complement of the range of $T$, and its range is the orthogonal complement of the nullspace of $T$ (and the same with $T$ and $T^*$ swapped)
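Quick example (not from the book): $T \in \mathcal L(\mathbb R^2, \mathbb R^3)$ with $T(x,y) = (x+y, 2x, y)$. From $\langle T(x,y), (a,b,c)\rangle = x(a+2b) + y(a+c)$ we read off $T^*(a,b,c) = (a+2b, a+c)$; wrt the standard bases, the matrix of $T^*$ is the transpose of the matrix of $T$, as expected.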

Chapter 7

Self-adjoint operator (Hermitian)
$T^* = T$, matrix is equal to conjugate transpose (ONLY WRT AN ORTHO BASIS)
Preserved under addition, real scalar mult
All eigenvalues are real (proof: definitions and conj symmetry; $\lambda \|v\|^2 = \overline{\lambda}\|v\|^2$)
$\langle Tv, v \rangle \in \mathbb R$ (proof: subtract conjugate, use conj symmetry, zero operator thing below)
Product of two self-adjoint ops only self-adjoint if the multiplication is commutative
Set of all self-adjoint ops is a subspace only in a real IPS, not a complex IPS
Orthogonal projections are self-adjoint
Zero operators on complex inner product spaces
If $\langle Tv, v \rangle = 0$ for all $v$ then $T = 0$
Normal operator
$T^*T = TT^*$
May not be self-adjoint
$\lVert Tv\rVert = \lVert T^*v\rVert$. Proof: $\langle (TT^*-T^*T)v, v \rangle = 0$ so $\langle TT^*v, v \rangle = \langle T^*Tv , v \rangle$ then use adjoint def to get $\langle T^*v, T^*v \rangle = \langle Tv, Tv \rangle$
If $Tv = \lambda v$ then $T^*v = \overline{\lambda}v$ (same eigenvector). Proof: $(T-\lambda I)$ is normal and kills $v$, use the norm relation above to get $\|(T-\lambda I)^*v \| =0$, then distribute the adjoint
Set of all normal ops on a VS with $\dim >1$ is not a subspace (additivity not satisfied)
$T^k, T$ have the same range and nullspace
Every normal op on a complex IPS has a square root (and $k$th roots): diagonalise via the spectral theorem and take roots of the eigenvalues
Orthogonality of eigenvectors of normal operators
Evectors for distinct evalues are ortho. Proof: $(\lambda_1 - \lambda_2)\langle v_1, v_2\rangle = \langle Tv_1, v_2 \rangle - \langle v_1, T^*v_2 \rangle = 0$ (using $T^*v_2 = \overline{\lambda_2}v_2$); the evalues are distinct, so the inner product must be 0.
Spectral theorem
For which ops can evectors form an ortho basis (or have diagonal matrices wrt an ortho basis)
Complex spectral theorem
Normal $\Leftrightarrow$ eigenvectors form an ortho basis
Proof: A diagonal matrix wrt an ortho basis has a diagonal conjugate transpose, so they commute, so $T$ is normal. Other direction: $T$ has a UT matrix wrt some ortho basis; compare the sum of squares in the $j$th row versus the $j$th column, using $\|Te_j \| = \|T^*e_j\|$ and induction, to show the matrix is actually diagonal
Real spectral theorem
Self-adjoint $\Leftrightarrow$ eigenvectors form an orthonormal basis
Lemma: self-adjoint op has an eigenvalue in a real IPS. Proof: consider $(v, Tv, \ldots, T^nv)$, not lin ind, write 0, factor over the reals, none of the quadratics is 0, so one of the linears is 0
Proof: Induction. $T$ has at least one evalue and evector $u$, which is the basis for a 1-d subspace $U$. $U^{\perp}$ is invariant under $T$ (reduces to $\lambda \langle u, v \rangle = 0$). The restriction $T|_{U^{\perp}}$ is self-adjoint, so apply the IH and join the resulting basis with $u$
Normal operators on two-dimensional spaces (TFAE)
$T$ is normal but not self-adjoint
Matrix wrt any ortho basis looks like $\displaystyle \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$ but not diagonal. Proof: $\|Te_1 \|^2 = \|T^*e_1\|^2$ to find $c$, use the fact that $T$ is normal to do matrix mult and find $d$
For some ortho basis, $b > 0$ (in the matrix above). Proof: if $b < 0$ wrt $(e_1, e_2)$, use $(e_1, -e_2)$ instead, which flips the sign of $b$
Block matrix
When an entry of a matrix is itself a matrix
Invariant subspaces and normal operators
$U^{\perp}$ is also invariant under $T$. Proof: extend an ortho basis of $U$ to one of $V$; since $U$ is invariant, the matrix has zeros below the first block of columns; summing $\|Te_j\|^2 = \|T^*e_j\|^2$ over the basis of $U$ forces the entries to the right of that block to be zero too, so $T$ maps $U^{\perp}$ into itself
$U$ is invariant under $T^*$ (take transpose of above matrix)
Block diagonal matrix
Square matrix, diagonal consists of block matrices (of any form), all others are 0 (includes all square matrices really)
Product of two block diagonal matrices: multiply the matrices together along the diagonal, stick the result where you expect
Block diagonal matrices of normal operators
Normal $\Leftrightarrow$ has a block diagonal matrix wrt some ortho basis, each block is 1x1 or 2x2 of the form $\displaystyle \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$ with $b > 0$. Proof by strong induction, eigenvalues, invariant subspaces to form ortho bases, if dim is 2 then $T|_U$ is normal but not self-adjoint, apply IH to $U^{\perp}$ and join it with the ortho basis of $U$
Generalised eigenvector
$(T-\lambda I)^jv =0$ for some $j > 0$
Used to write $V$ as the decomp of nullspaces (generalised eigenspaces basically), where $j = \dim V$
Nullspaces of powers $T^0, T^1, T^2, \ldots$
The nullspaces are nested increasing: $\n T^0 \subseteq \n T^1 \subseteq \n T^2 \subseteq \ldots$
If two consecutive nullspaces are equal, all subsequent ones are equal too (proof: take $v$ in the nullspace of $T^{m+k+1}$; then $T^kv$ is in the nullspace of $T^{m+1}$, hence of $T^{m}$, so $v$ is in the nullspace of $T^{m+k}$)
$T^{\dim V}$ and $T^{\dim V+1}$ have the same nullspace etc which is how we get $\dim V$ for $j$ above
Nilpotent operators
$N^k = 0$ for some $k$
We can take $k \leq \dim V$ (i.e. $N^{\dim V} = 0$): every $v$ is a generalised eigenvector for $\lambda = 0$, and the nullspaces stabilise by the power $\dim V$ (previous statements)
$(v, Nv, N^2v, \ldots, N^mv)$ is lin ind if $N^mv \neq 0$ (suppose a combo is 0; apply $N^m$ to show the first coeff is 0, then $N^{m-1}$, and so on)
If $ST$ is nilpotent, so is $TS$ (proof: $(TS)^{n+1} = 0$; regroup etc)
Only $N=0$ is self-adjoint and nilpotent
If 0 is the only evalue on a CVS, $N$ is nilpotent, because every vector is a generalised evector
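Example (not from the book): $N$ on $\mathbb R^3$ with $Ne_1 = 0$, $Ne_2 = e_1$, $Ne_3 = e_2$, i.e. matrix $\displaystyle \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$. Then $N^2 \neq 0$ but $N^3 = 0$, matching $k \leq \dim V = 3$, and for $v = e_3$ the list $(v, Nv, N^2v) = (e_3, e_2, e_1)$ is lin ind.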
Ranges of powers
The ranges are nested decreasing: $\r T^0 \supseteq \r T^1 \supseteq \r T^2 \supseteq \ldots$
$T^{\dim V}$ and $T^{\dim V+1}$ have the same range
Multiplicity
Means the dimension of the generalised eigenspace $\n (T - \lambda I)^{\dim V}$ (algebraic multiplicity); the dimension of the eigenspace itself is the geometric multiplicity
Controls the number of times an eigenvalue shows up on the diagonal
Sum of multiplicities is $\dim V$ on a complex VS
Characteristic polynomial
Product of factors $(z - \lambda)$, one per eigenvalue, each repeated acc. to its (algebraic) multiplicity. Degree is $\dim V$. Roots are eigenvalues.
Cayley-Hamilton
Char poly is $q(z)$, then $q(T) = 0$
Proof: Show $q(T)v_i = 0$ for every vector $v_i$ of a basis giving a UT matrix. Strong induction on $i$. For $i=1$, the first factor already kills $v_1$. For general $i$, the UT matrix tells us $(T - \lambda_i I)v_i$ is in the span of the previous $v$s, and by the IH the product of the previous factors kills that span (the factors commute)
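Worked check (not from the book): $T = \displaystyle \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}$ has char poly $q(z) = (z-1)(z-3) = z^2 - 4z + 3$, and indeed $T^2 - 4T + 3I = \displaystyle \begin{pmatrix} 1 & 8 \\ 0 & 9 \end{pmatrix} - \begin{pmatrix} 4 & 8 \\ 0 & 12 \end{pmatrix} + \begin{pmatrix} 3 & 0 \\ 0 & 3 \end{pmatrix} = 0$.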
Nullspaces of polynomials of $T$
Invariant under $T$: $T$ commutes with $p(T)$, so if $p(T)v = 0$ then $p(T)(Tv) = T(p(T)v) = 0$
Decomposition into nilpotent operators
$V$ is the direct sum of the generalised eigenspaces, and on each of them the restriction of $T - \lambda_j I$ is nilpotent
Bases from generalised eigenvectors
Always enough to form a basis for a complex VS
Matrices of nilpotent operators
There's always a UT matrix with 0's along the diagonal wrt some basis (choose bases from the nullspace of $N$, then $N^2$, put them all together in one giant basis)
Upper-triangular block matrices
If $T$ has $m$ distinct eigenvalues, $T$ has a block diagonal matrix where each block is UT with an eigenvalue repeated along the diagonal, the number of times equal to its multiplicity (dim of the generalised eigenspace)
Square roots of operators
If $N$ is nilpotent, $I+N$ has a square root (and any other root); proof by the Taylor/binomial expansion of $\sqrt{1+x}$, which is a finite sum because $N^m = 0$ for all sufficiently large $m$
Any invertible op on a complex VS has a sqrt: on each generalised eigenspace, $T$ acts as $\lambda I + N$ with $N$ nilpotent; since $T$ is invertible, none of the eigenvalues are 0, so write $\lambda I + N = \lambda(I + N/\lambda)$, which has a square root by the above; then combine the square roots across the direct sum
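Sketch of the Taylor idea (my own illustration): if $N^2 = 0$, then $(I + \tfrac12 N)^2 = I + N + \tfrac14 N^2 = I + N$, so $\sqrt{I+N} = I + \tfrac12 N$; in general $\sqrt{I+N} = I + \tfrac12 N - \tfrac18 N^2 + \cdots$, which terminates because high powers of $N$ vanish.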
Minimal polynomial
The unique monic poly $p$ of smallest degree so that $p(T) = 0$
Existence proof: $(I, T, T^2, \ldots T^{n^2})$ is lin dep (since $\dim \mathcal L(V) = n^2$) so make 0 using some choice of coeffs, not all zero
Unique if we limit to the smallest degree (so instead of $n^2$ use the smallest $m$ that makes it dep)
$q(T) = 0$ $\Leftrightarrow$ min poly divides $q$ (for any $q$, including the char poly); proof by division algo
Roots are eigenvalues, proof by non-zeroness of eigenvectors
Every eigenvalue is a root of the min poly (and the min poly divides the char poly); all the exponents in the min poly are 1 iff the geometric and algebraic multiplicities agree (so eigenvectors form a basis)
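Example (not from the book): $I$ on $\mathbb R^2$ has char poly $(z-1)^2$ but min poly $z-1$ (evectors form a basis); the Jordan block $\displaystyle \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ has char poly and min poly both $(z-1)^2$ (not diagonalisable).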
Nilpotent operators and bases
We can make a basis of the form $(v_1, Nv_1, \ldots, N^{a_1}v_1, v_2, \ldots)$ (each $a_j$ is the highest power with $N^{a_j}v_j \neq 0$)
Proof: strong induction on dim, apply IH on range since nullspace is not just zero (cuz nilpotent), make a basis out of range vectors, then consider complement of nullspace within range
Jordan form
block-diagonal matrix; each block has the evalue along the diagonal and 1 in the line just above it; the number of blocks for an evalue is its geometric multiplicity (dim of the eigenspace)
Existence proof for $T$ on a CVS: for a nilpotent op, the basis $(N^{a_1}v_1, \ldots, Nv_1, v_1, N^{a_2}v_2, \ldots)$ gives blocks with 0 along the diagonal and 1s just above it; in general, decompose $V$ into generalised eigenspaces, on each of which $T - \lambda I$ is nilpotent
The matrix of $(v_n, \ldots, v_1)$ has each block transposed, so the diagonal is flipped along the / axis (still a diagonal, in reverse order) - so the 1s are under the diagonal

Chapter 9

Omitted (not covered)

Chapter 10

Change-of-basis matrix
To convert coordinates wrt one basis into coordinates wrt another; it's the matrix of the identity map taken wrt the two bases (work it out from first principles)
Equivalent to the matrix for the op that maps each basis vector to the corresponding one
Inverse matrix gives the opposite direction
Trace
Independent of the basis
On a CVS, equal to the sum of the eigenvalues (repeated acc. to multiplicity, i.e. dim of the generalised eigenspace)
On a RVS, sum of eigenvalues minus sum of first coordinates of eigenpairs (again with multiplicity)
sum along the diagonal for a UT matrix (same as sum of evalues), or even a non-UT matrix
$\operatorname{trace}(AB) = \operatorname{trace}(BA)$ (square matrices of the same size)
No operators such that $ST - TS = I$: the left side has trace 0 but $\operatorname{trace} I = \dim V$
Is a non-negative integer if $P^2=P$ (it equals $\dim \r P$)
$\operatorname{trace}(T^*T) = \|Te_1\|^2 + \ldots + \|Te_n\|^2$ wrt an orthonormal basis (the $j$th diagonal entry of the matrix of $T^*T$ is $\langle T^*T e_j, e_j \rangle = \|Te_j\|^2$)
Determinant
$(-1)^{\dim V}$ times the constant term in the char poly
CVS: product of eigenvalues (repeated acc. to multiplicity)
RVS: product of eigenvalues and second coordinates of eigenpairs (with multiplicity)
Invertible op $\Leftrightarrow$ non-zero determinant. Proof: det is zero iff an eigenvalue is 0, in which case, not invertible
Char poly is $\det(zI-T)$
For a diagonal matrix, just take the product of the diagonal elements
For block UT matrices, the det is the prod of the det of the block matrices along the diagonal
Swapping two columns flips the sign (permutation theory)
If a column is a scalar mult of another, the det is 0
$\det(AB) = \det(BA) = \det(A)\det(B)$
Dets of ops are independent of the basis used for the matrix
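Worked check (not from the book): $T = \displaystyle \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ has eigenvalues 1 and 3 (eigenvectors $(1,-1)$ and $(1,1)$, orthogonal since $T$ is self-adjoint). Trace $= 4 = 1+3$, $\det = 3 = 1 \cdot 3$, and $\det(zI - T) = (z-2)^2 - 1 = (z-1)(z-3)$, the char poly.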