A linear map $S \in \mathcal{L}(V, W)$ is an isometry if $\|Sv\| = \|v\|$ for every $v \in V$. In words, $S$ preserves the norm of $v$, or equivalently distances in $V$ remain the same in $W$ through $S$.
Let $S \in \mathcal{L}(V, W)$, $e_1, \dots, e_n$ an orthonormal basis for $V$, and $f_1, \dots, f_m$ an orthonormal basis for $W$. Then the following are equivalent: a) $S$ is an isometry. b) $S^*S = I$. c) $\langle Su, Sv \rangle = \langle u, v \rangle$ for every $u, v \in V$. d) $Se_1, \dots, Se_n$ is an orthonormal list in $W$. e) The columns of $\mathcal{M}(S, (e_1, \dots, e_n), (f_1, \dots, f_m))$ form an orthonormal list of vectors in $\mathbb{F}^m$.
Proof:
a) $\Rightarrow$ b): We will use the fact that $S^*S - I$ is self-adjoint to show that it is the zero operator as follows: for every $v \in V$,
$$\langle (S^*S - I)v, v \rangle = \langle S^*Sv, v \rangle - \langle v, v \rangle = \langle Sv, Sv \rangle - \langle v, v \rangle = \|Sv\|^2 - \|v\|^2 = 0.$$
Therefore, $S^*S - I$ must be the zero operator, i.e. $S^*S = I$. b) $\Rightarrow$ c): For all $u, v \in V$,
$$\langle Su, Sv \rangle = \langle S^*Su, v \rangle = \langle u, v \rangle.$$
c) $\Rightarrow$ d): Because the inner product does not change after applying $S$, the list $Se_1, \dots, Se_n$ must also be orthonormal. d) $\Rightarrow$ e): This follows from the last result because the coordinates of the columns are taken with respect to the orthonormal basis $f_1, \dots, f_m$, so the inner product of two columns equals the inner product of the corresponding vectors $Se_j$ and $Se_k$, and it becomes the standard inner product on $\mathbb{F}^m$. e) $\Rightarrow$ a): Because we are working with an orthonormal basis $f_1, \dots, f_m$ for $W$, the fact that the columns are orthonormal means the images $Se_1, \dots, Se_n$ of the basis vectors form an orthonormal list in $W$. Now, for any $v = a_1 e_1 + \dots + a_n e_n$ we have $\|Sv\|^2 = \|a_1 Se_1 + \dots + a_n Se_n\|^2 = |a_1|^2 + \dots + |a_n|^2 = \|v\|^2$, since both lists are orthonormal, so $S$ is an isometry and we're done.
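As a quick numerical sanity check of these equivalences, here is a small NumPy sketch (the setup is my own choice, not from the text: a random isometry from $\mathbb{C}^2$ into $\mathbb{C}^4$ built by orthonormalizing random columns):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2, 4
# A matrix with orthonormal columns represents an isometry; one way to get one
# is to orthonormalize a random m-by-n complex matrix via QR.
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
S, _ = np.linalg.qr(A)              # S is m-by-n with orthonormal columns

u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

print(np.isclose(np.linalg.norm(S @ v), np.linalg.norm(v)))   # a) norms are preserved
print(np.allclose(S.conj().T @ S, np.eye(n)))                  # b) S*S = I
print(np.isclose(np.vdot(S @ u, S @ v), np.vdot(u, v)))        # c) inner products are preserved
# d)/e) the columns of S form an orthonormal list in C^m; this is the same
# computation as b), since the (j,k) entry of S*S is the inner product of columns j and k.
```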
An operator $S \in \mathcal{L}(V)$ is unitary if it is an invertible isometry.
Suppose $S \in \mathcal{L}(V)$. Suppose $e_1, \dots, e_n$ is an orthonormal basis of $V$. Then the following are equivalent: a) $S$ is a unitary operator. b) $S^*S = SS^* = I$. c) $S$ is invertible with inverse $S^{-1} = S^*$. d) $Se_1, \dots, Se_n$ is an orthonormal basis for $V$. e) The rows of $\mathcal{M}(S, (e_1, \dots, e_n))$ form an orthonormal basis for $\mathbb{F}^n$ with respect to the standard inner product on $\mathbb{F}^n$. f) $S^*$ is a unitary operator.
Proof:
a) $\Rightarrow$ b): Since $S$ is unitary, it is invertible and an isometry (and hence $S^*S = I$). So
$$SS^* = S(S^*S)S^{-1} = SIS^{-1} = I.$$
Therefore $S^*S = SS^* = I$. b) $\Rightarrow$ c): Because $S^*S = I$, $S$ must be injective (if $Sv = 0$ then $v = S^*Sv = 0$). And because it is an operator on a finite-dimensional space, $S$ must therefore be invertible. Furthermore,
$$S^{-1} = (S^*S)S^{-1} = S^*(SS^{-1}) = S^*.$$
c) $\Rightarrow$ d): We know from the last result in the previous section that $S^*S = I$ is equivalent to $Se_1, \dots, Se_n$ being an orthonormal list (parts b and d in particular), and an orthonormal list of $n$ vectors in an $n$-dimensional space is an orthonormal basis. So this just follows from that equivalence. d) $\Rightarrow$ e): The main idea behind this proof is that the matrix of the adjoint is the conjugate transpose, so the rows of $\mathcal{M}(S)$ are the conjugates of the columns of $\mathcal{M}(S^*)$, and conjugating an orthonormal basis of $\mathbb{F}^n$ gives another orthonormal basis. The full proof is longer: Suppose $Se_1, \dots, Se_n$ is an orthonormal basis for $V$. Then from the last result of the previous section, this is equivalent to $S$ being an isometry. Because $S$ is an injective operator on a finite-dimensional vector space, $S$ must be invertible, so $S$ is a unitary operator (an invertible isometry). Then $S^*$ is also an isometry, since $(S^*)^*S^* = SS^* = I$. Now, from part e) of the last result in the previous section, this means that the columns of $\mathcal{M}(S^*)$ with respect to this basis must form an orthonormal basis for $\mathbb{F}^n$. This finally means that the rows of $\mathcal{M}(S)$ also form an orthonormal basis, since they are just the conjugates of the columns of $\mathcal{M}(S^*)$, and conjugation preserves orthonormality with respect to the standard inner product. e) $\Rightarrow$ f): Suppose the rows of $\mathcal{M}(S)$ form an orthonormal basis for $\mathbb{F}^n$. Then the columns of $\mathcal{M}(S^*)$, being their conjugates, also form an orthonormal basis for $\mathbb{F}^n$, and thus $S^*$ is an isometry. As an injective operator on a finite-dimensional space, $S^*$ is invertible, so $S^*$ is a unitary operator. f) $\Rightarrow$ a): Suppose $S^*$ is unitary and apply all of the previous implications to $S^*$, showing that $(S^*)^*$ is also unitary. And since $(S^*)^* = S$, we are done.
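The same kind of numerical check works in the unitary case; below is a sketch (again with a randomly generated matrix, my own setup) verifying parts b), c), e), and f):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S, _ = np.linalg.qr(A)        # square matrix with orthonormal columns, hence unitary
Sstar = S.conj().T            # the adjoint (conjugate transpose)
I = np.eye(n)

print(np.allclose(Sstar @ S, I) and np.allclose(S @ Sstar, I))  # b) S*S = SS* = I
print(np.allclose(np.linalg.inv(S), Sstar))                     # c) S^{-1} = S*
print(np.allclose(S @ Sstar, I))        # e) the rows of S are orthonormal (exactly SS* = I)
print(np.allclose(Sstar.conj().T @ Sstar, I))                   # f) S* is itself unitary
```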
Suppose $\lambda$ is an eigenvalue of a unitary operator. Then $|\lambda| = 1$.
Proof:
This follows from the preservation of the inner product, and hence the norm: If $v \neq 0$ is an eigenvector with eigenvalue $\lambda$, then
$$\|v\| = \|Sv\| = \|\lambda v\| = |\lambda| \, \|v\|,$$
and dividing by $\|v\| \neq 0$ gives $|\lambda| = 1$.
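A quick numerical illustration (my own example, not from the text): the eigenvalues of a randomly generated unitary matrix all have absolute value 1.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
S, _ = np.linalg.qr(A)               # a random 4-by-4 unitary matrix

print(np.abs(np.linalg.eigvals(S)))  # every entry is 1 (up to floating-point error)
```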
Suppose $\mathbb{F} = \mathbb{C}$ and $S \in \mathcal{L}(V)$. Then the following are equivalent: a) $S$ is a unitary operator. b) There is an orthonormal basis of $V$ consisting of eigenvectors of $S$ whose corresponding eigenvalues all have absolute value 1.
Proof:
a) $\Rightarrow$ b): If $S$ is unitary then it is normal. So by the complex spectral theorem, it is diagonalizable with respect to an orthonormal basis of eigenvectors. We also know that its eigenvalues must have an absolute value of 1, so we're done. b) $\Rightarrow$ a): Let $e_1, \dots, e_n$ be an orthonormal basis of eigenvectors of $S$ with eigenvalues $\lambda_1, \dots, \lambda_n$, all with absolute value 1. Then the $S$-images of these vectors are also orthonormal, as
$$\langle Se_j, Se_k \rangle = \langle \lambda_j e_j, \lambda_k e_k \rangle = \lambda_j \overline{\lambda_k} \langle e_j, e_k \rangle = \begin{cases} |\lambda_j|^2 = 1 & \text{if } j = k, \\ 0 & \text{if } j \neq k. \end{cases}$$
Now, this means that $S$ is unitary from the first result of this chapter: $S$ maps an orthonormal basis to an orthonormal list, so it is an isometry, and an injective operator on a finite-dimensional space is invertible.
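Here is a sketch of the b) $\Rightarrow$ a) direction in code (the construction is my own choice): pick an orthonormal basis as the columns of a unitary matrix $U$ and eigenvalues on the unit circle, assemble $S = U \,\mathrm{diag}(\lambda)\, U^*$, and check that $S$ is unitary.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(A)                      # columns of U = an orthonormal basis of C^4
lam = np.exp(1j * rng.uniform(0, 2 * np.pi, size=n))   # eigenvalues with |lambda_j| = 1

S = U @ np.diag(lam) @ U.conj().T           # S has eigenvector U[:, j] with eigenvalue lam[j]
print(np.allclose(S.conj().T @ S, np.eye(n)))        # S is unitary
print(np.allclose(S @ U[:, 0], lam[0] * U[:, 0]))    # sanity check: U[:, 0] really is an eigenvector
```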
A square matrix is unitary if its columns form an orthonormal basis of $\mathbb{F}^n$.
Suppose $Q$ is an $n$-by-$n$ matrix. Then the following are equivalent. a) $Q$ is a unitary matrix. b) The rows of $Q$ form an orthonormal list in $\mathbb{F}^n$. c) $\|Qv\| = \|v\|$ for every $v \in \mathbb{F}^n$. d) $Q^*Q = QQ^* = I$, where $I$ represents the $n$-by-$n$ identity matrix.
Proof:
(Proof idea) The matrix $Q$ is the matrix of a linear operator from $\mathbb{F}^n$ to $\mathbb{F}^n$ with respect to the standard basis. All of these conditions have been proven to be equivalent for operators.
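For a concrete example (chosen here for illustration, not from the text), the matrix $\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}$ is unitary; the sketch below checks conditions b), c), and d) numerically.

```python
import numpy as np

Q = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)
v = np.array([2.0, -1.0 + 3.0j])

print(np.allclose(Q.conj().T @ Q, np.eye(2)))                 # d) Q*Q = I
print(np.allclose(Q @ Q.conj().T, np.eye(2)))                 # b)/d) QQ* = I, i.e. the rows are orthonormal
print(np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v)))   # c) the norm of v is preserved
```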
Suppose $A$ is a square matrix with linearly independent columns. Then there exist unique matrices $Q$ and $R$, where $Q$ is unitary, $R$ is upper triangular with only positive numbers on its diagonal, and $A = QR$.
Proof:
Apply the Gram-Schmidt procedure to the columns $a_1, \dots, a_n$ of $A$, producing orthonormal vectors $q_1, \dots, q_n$; $Q$'s columns consist of these orthonormal basis vectors. Now, $R$ is the inverse of the upper-triangular matrix of coefficients that the Gram-Schmidt procedure applies to the columns of $A$ to produce $Q$; equivalently, $R$ records how to rebuild each $a_k$ from $q_1, \dots, q_k$. Specifically, each entry of $R$ can be found as $R_{j,k} = \langle a_k, q_j \rangle$, where $a_k$ is the $k$th column of $A$ and $q_j$ is the $j$th column of $Q$, so that $a_k = \sum_{j \le k} R_{j,k}\, q_j$ and hence $A = QR$. Then $Q$ is unitary because its columns are orthonormal. Furthermore, $R$ is upper triangular because each step of the Gram-Schmidt procedure only subtracts scalar multiples of the previously produced vectors $q_1, \dots, q_{k-1}$, so $a_k$ involves no $q_j$ with $j > k$, making all values below the diagonal zero. Furthermore, the diagonal entries are positive because the last step of each Gram-Schmidt iteration divides by the norm of the remaining vector, and that norm is exactly $R_{k,k} > 0$.
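A minimal implementation sketch of this construction (the function name and setup are my own, assuming NumPy): orthonormalize the columns of $A$, reading off $R_{j,k} = \langle a_k, q_j \rangle$ along the way.

```python
import numpy as np

def gram_schmidt_qr(A):
    """QR factorization of a square matrix with linearly independent columns,
    with R upper triangular and positive on its diagonal."""
    m, n = A.shape
    Q = np.zeros((m, n), dtype=complex)
    R = np.zeros((n, n), dtype=complex)
    for k in range(n):
        w = A[:, k].astype(complex)
        for j in range(k):
            R[j, k] = np.vdot(Q[:, j], A[:, k])   # <a_k, q_j>  (vdot conjugates q_j)
            w = w - R[j, k] * Q[:, j]             # subtract the component along q_j
        R[k, k] = np.linalg.norm(w)               # the final division by this norm ...
        Q[:, k] = w / R[k, k]                     # ... makes the diagonal entry positive
    return Q, R

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, R = gram_schmidt_qr(A)
print(np.allclose(Q @ R, A))                      # A = QR
print(np.allclose(Q.conj().T @ Q, np.eye(4)))     # Q is unitary
print(np.allclose(R, np.triu(R)), np.all(np.diag(R).real > 0))  # R upper triangular, positive diagonal
```

Note that library routines such as `numpy.linalg.qr` also return a factorization $A = QR$, but they do not enforce a positive diagonal on $R$, so their factors can differ from the unique ones above by unit-modulus scalings of the columns of $Q$ and the rows of $R$.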