There is now only one more possible doubt that the reader might (or, at any rate, should) have. Many of our preceding results were consequences of such reflexivity relations as \(A^{**}=A\); do these remain valid after the brackets-to-parentheses revolution? More to the point is the following way of asking the question. Everything we say about a unitary space \(\mathcal{V}\) must also be true about the unitary space \(\mathcal{V}^{*}\); in particular, \(\mathcal{V}^{*}\) also stands in a natural conjugate isomorphic relation to its own dual space \(\mathcal{V}^{**}\). If now to every vector in \(\mathcal{V}\) we make correspond a vector in \(\mathcal{V}^{**}\), by first applying the natural conjugate isomorphism from \(\mathcal{V}\) to \(\mathcal{V}^{*}\) and then going the same way from \(\mathcal{V}^{*}\) to \(\mathcal{V}^{**}\), then this mapping is a rival for the title of natural mapping from \(\mathcal{V}\) to \(\mathcal{V}^{**}\), a title already awarded in Chapter I to a seemingly different correspondence. What is the relation between the two natural correspondences? Our statements about the coincidence, except for trivial modifications, of the parenthesis and bracket theories are really justified by the fact, which we shall now prove, that the two mappings are the same. (It should not be surprising, since \(\bar{\bar{\alpha}}=\alpha\), that after two applications the bothersome conjugation disappears.) The proof is shorter than the introduction to it.
Let \(y_{0}\) be any element of \(\mathcal{V}\); to it there corresponds the linear functional \(y_{0}^{*}\) in \(\mathcal{V}^{*}\), defined by \(y_{0}^{*}(x)=(x, y_{0})\), and to \(y_{0}^{*}\), in turn, there corresponds the linear functional \(y_{0}^{**}\) in \(\mathcal{V}^{**}\), defined by \(y_{0}^{**}(y^{*})=(y^{*}, y_{0}^{*})\). Both of these correspondences are given by the mapping introduced in this chapter. Earlier (see Section: Reflexivity) the correspondent \(y_{0}^{**}\) in \(\mathcal{V}^{**}\) of \(y_{0}\) in \(\mathcal{V}\) was defined by \(y_{0}^{**}(y^{*})=y^{*}(y_{0})\) for all \(y^{*}\) in \(\mathcal{V}^{*}\); we must show that \(y_{0}^{**}\), as we have defined it here, satisfies this identity. Let \(y^{*}\) be any linear functional on \(\mathcal{V}\) (that is, any element of \(\mathcal{V}^{*}\)), say the one corresponding to the vector \(y\), so that \(y^{*}(x)=(x, y)\); we have \[y_{0}^{**}(y^{*})=(y^{*}, y_{0}^{*})=(y_{0}, y)=y^{*}(y_{0}).\] (The middle equality comes from the definition of the inner product in \(\mathcal{V}^{*}\).) This settles all our problems.
EXERCISES
Exercise 1. If \(\mathcal{M}\) and \(\mathcal{N}\) are subspaces of a finite-dimensional inner product space, then \[(\mathcal{M} + \mathcal{N})^{\perp}=\mathcal{M}^{\perp} \cap \mathcal{N}^{\perp}\] and \[(\mathcal{M} \cap \mathcal{N})^{\perp}=\mathcal{M}^{\perp}+\mathcal{N}^{\perp}.\]
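The two identities of Exercise 1 can be spot-checked numerically. The sketch below (a sanity check, not a proof; the helper `proj` and the choice of \(\mathbb{C}^{5}\) are illustrative assumptions) builds two subspaces sharing a common direction, so that both the sum and the intersection are nontrivial, and compares orthogonal projections onto the two sides of each identity.

```python
import numpy as np

# Sanity check (not a proof) of the complement identities in C^5,
# using subspaces M and N that share one direction.
rng = np.random.default_rng(0)
n = 5

def proj(X):
    """Orthogonal projection onto the column space of X (via SVD)."""
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    r = int(np.sum(s > 1e-10))
    return U[:, :r] @ U[:, :r].conj().T

b1, b2, b3 = (rng.standard_normal((n, 1)) + 1j * rng.standard_normal((n, 1))
              for _ in range(3))
X = np.hstack([b1, b2])            # basis of M
Y = np.hstack([b2, b3])            # basis of N (shares the direction b2)
I = np.eye(n)
PM, PN = proj(X), proj(Y)
QM, QN = I - PM, I - PN            # projections onto M-perp, N-perp

# (M + N)-perp versus M-perp ∩ N-perp:
P_sum_perp = I - proj(np.hstack([X, Y]))
# v lies in M-perp ∩ N-perp exactly when (PM + PN) v = 0
P_cap_perp = I - proj(PM + PN)
assert np.allclose(P_sum_perp, P_cap_perp)

# (M ∩ N)-perp versus M-perp + N-perp:
P_cap = I - proj(QM + QN)          # projection onto M ∩ N
assert np.allclose(I - P_cap, proj(np.hstack([QM, QN])))
print("both identities hold numerically")
```

The check exploits the fact that for orthogonal projections \(P_{\mathcal{M}}\) and \(P_{\mathcal{N}}\), a vector is annihilated by \(P_{\mathcal{M}}+P_{\mathcal{N}}\) exactly when it lies in \(\mathcal{M}^{\perp} \cap \mathcal{N}^{\perp}\).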
Exercise 2. If \(y^{\prime}(x)=\frac{1}{3}(\xi_{1}+\xi_{2}+\xi_{3})\) for each \(x=(\xi_{1}, \xi_{2}, \xi_{3})\) in \(\mathbb{C}^{3}\) , find a vector \(y\) in \(\mathbb{C}^3\) such that \(y^{\prime}(x)=(x, y)\) .
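A quick numerical check (not a solution write-up) suggests trying the candidate \(y=(\tfrac{1}{3}, \tfrac{1}{3}, \tfrac{1}{3})\), since with the standard inner product \((x, y)=\sum_{i} \xi_{i} \bar{\eta}_{i}\) on \(\mathbb{C}^{3}\) the conjugates do not change the real coefficients \(\tfrac{1}{3}\):

```python
import numpy as np

# Check that y = (1/3, 1/3, 1/3) represents the functional
# y'(x) = (xi1 + xi2 + xi3)/3 with respect to (x, y) = sum_i xi_i * conj(eta_i).
rng = np.random.default_rng(1)
y = np.full(3, 1/3, dtype=complex)

for _ in range(5):
    x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
    y_prime = x.sum() / 3                 # the given functional
    inner = np.sum(x * np.conj(y))        # (x, y)
    assert np.isclose(y_prime, inner)
print("y = (1/3, 1/3, 1/3) represents y'")
```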
Exercise 3. If \(y\) is a vector in an inner product space, if \(A\) is a linear transformation on that space, and if \(f(x)=\overline{(y, A x)}\) for every vector \(x\) , then \(f\) is a linear functional; find a vector \(y^{*}\) such that \(f(x)=(x, y^{*})\) for every \(x\) .
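Since \(\overline{(y, A x)}=(A x, y)=(x, A^{*} y)\), the candidate \(y^{*}=A^{*} y\) suggests itself; the sketch below checks this numerically on \(\mathbb{C}^{4}\) (the dimension and random matrices are illustrative assumptions):

```python
import numpy as np

# Check that f(x) = conj((y, Ax)) equals (x, A*y) for random data in C^4.
rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def ip(u, v):
    """(u, v) = sum_i u_i * conj(v_i)."""
    return np.sum(u * np.conj(v))

y_star = A.conj().T @ y          # the candidate y* = A*y
for _ in range(5):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    assert np.isclose(np.conj(ip(y, A @ x)), ip(x, y_star))
print("f(x) = (x, A*y) for all tested x")
```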
Exercise 4.
- If \(A\) is a linear transformation on a finite-dimensional inner product space, then \(\operatorname{tr}(A^{*} A) \geq 0\) ; a necessary and sufficient condition that \(\operatorname{tr}(A^{*} A)=0\) is that \(A=0\) . (Hint: look at matrices.) This property of traces can often be used to obtain otherwise elusive algebraic facts about products of transformations and their adjoints.
- Prove by a trace argument, and also directly, that if \(A_{1}, \ldots, A_{k}\) are linear transformations on a finite-dimensional inner product space and if \(\sum_{j=1}^{k} A_{j}^{*} A_{j}=0\) , then \(A_{1}=\cdots=A_{k}=0\) .
- If \(A^{*} A=B^{*} B-B B^{*}\) , then \(A=0\) .
- If \(A^{*}\) commutes with \(A\) and if \(A\) commutes with \(B\), then \(A^{*}\) commutes with \(B\). (Hint: if \(C=A^{*} B-B A^{*}\) and \(D=A B-B A\), then \[\operatorname{tr}(C^{*} C)=\operatorname{tr}(D^{*} D)+\operatorname{tr}\big[(A^{*} A-A A^{*})(B^{*} B-B B^{*})\big].\])
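The trace facts of this exercise lend themselves to a numerical spot check (again a sanity check, not a proof; the dimension and random complex matrices are illustrative):

```python
import numpy as np

# Spot-check tr(A*A) >= 0 and the trace identity from the hint.
rng = np.random.default_rng(3)
n = 4
def rand():
    return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

A, B = rand(), rand()
star = lambda M: M.conj().T

# tr(A*A) is real and nonnegative, zero only for A = 0
t = np.trace(star(A) @ A)
assert np.isclose(t.imag, 0) and t.real > 0
Z = np.zeros((n, n))
assert np.isclose(np.trace(star(Z) @ Z), 0)

# tr(C*C) = tr(D*D) + tr[(A*A - AA*)(B*B - BB*)]
C = star(A) @ B - B @ star(A)
D = A @ B - B @ A
lhs = np.trace(star(C) @ C)
rhs = (np.trace(star(D) @ D)
       + np.trace((star(A) @ A - A @ star(A)) @ (star(B) @ B - B @ star(B))))
assert np.isclose(lhs, rhs)
print("trace checks pass")
```

Note how the hint then delivers the result: if \(A\) is normal and commutes with \(B\), the last term vanishes and \(D=0\), forcing \(\operatorname{tr}(C^{*}C)=0\) and hence \(C=0\).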
Exercise 5.
- Suppose that \(\mathcal{H}\) is a unitary space, and form the set of all ordered pairs \(\langle x, y\rangle\) with \(x\) and \(y\) in \(\mathcal{H}\) (that is, the direct sum of \(\mathcal{H}\) with itself). Prove that the equation \[\big(\langle x_{1}, y_{1}\rangle,\langle x_{2}, y_{2}\rangle\big)=(x_{1}, x_{2})+(y_{1}, y_{2})\] defines an inner product in the direct sum \(\mathcal{H} \oplus \mathcal{H}\) .
- If \(U\) is defined by \(U\langle x, y\rangle=\langle y,-x\rangle\) , then \(U^{*} U=1\) .
- The graph of a linear transformation \(A\) on \(\mathcal{H}\) is the set of all those elements \(\langle x, y\rangle\) of \(\mathcal{H} \oplus \mathcal{H}\) for which \(y=A x\) . Prove that the graph of every linear transformation on \(\mathcal{H}\) is a subspace of \(\mathcal{H} \oplus \mathcal{H}\) .
- If \(A\) is a linear transformation on \(\mathcal{H}\) with graph \(\mathcal{G}\) , then the graph of \(A^{*}\) is the orthogonal complement (in \(\mathcal{H} \oplus \mathcal{H}\) ) of the image under \(U\) (the transformation defined in the second part of this exercise) of the graph of \(A\) .
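The last two parts can be checked numerically: with \(U\langle x, y\rangle=\langle y,-x\rangle\), a pair \(\langle u, v\rangle\) is orthogonal to every \(U\langle x, A x\rangle=\langle A x,-x\rangle\) exactly when \((u, A x)=(v, x)\) for all \(x\), that is, when \(v=A^{*} u\). A sketch for \(\mathcal{H}=\mathbb{C}^{3}\) (an illustrative assumption):

```python
import numpy as np

# Check that U*U = 1 and that graph(A*) is orthogonal to U(graph(A)).
rng = np.random.default_rng(4)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# U<x, y> = <y, -x> as a block matrix on H ⊕ H ≅ C^(2n)
U = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])
assert np.allclose(U.conj().T @ U, np.eye(2 * n))

def ip(p, q):
    """Inner product on H ⊕ H; pairs stored as (x, y) tuples."""
    return np.sum(p[0] * np.conj(q[0])) + np.sum(p[1] * np.conj(q[1]))

for _ in range(5):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    graph_point = (A @ x, -x)               # U applied to <x, Ax>
    adjoint_point = (u, A.conj().T @ u)     # a point of the graph of A*
    assert np.isclose(ip(adjoint_point, graph_point), 0)
print("graph(A*) is orthogonal to U(graph(A))")
```

A dimension count (each graph has dimension \(n\) in the \(2n\)-dimensional space \(\mathcal{H} \oplus \mathcal{H}\)) then upgrades orthogonality to equality with the full orthogonal complement.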
Exercise 6.
- If \(N(A)=\sqrt{\operatorname{tr}(A^{*} A)}\) for every linear transformation \(A\) on a finite-dimensional inner product space, then \(N\) is a norm (on the space of all linear transformations).
- Is the norm \(N\) induced by an inner product?
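The norm axioms, and the parallelogram law (a necessary and sufficient condition for a norm to be induced by an inner product, which bears on the second part), can be sampled numerically; the sketch below is a spot check on random complex matrices, not a proof:

```python
import numpy as np

# Sample the norm axioms and the parallelogram law for N(A) = sqrt(tr(A*A)).
rng = np.random.default_rng(5)
n = 4
def rand():
    return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def N(A):
    return np.sqrt(np.trace(A.conj().T @ A).real)

A, B = rand(), rand()
assert N(A) > 0 and np.isclose(N(np.zeros((n, n))), 0)
assert np.isclose(N((2 - 1j) * A), abs(2 - 1j) * N(A))   # homogeneity
assert N(A + B) <= N(A) + N(B) + 1e-12                   # triangle inequality
# parallelogram law: N(A+B)^2 + N(A-B)^2 = 2 N(A)^2 + 2 N(B)^2
assert np.isclose(N(A + B)**2 + N(A - B)**2, 2 * N(A)**2 + 2 * N(B)**2)
print("norm axioms and parallelogram law hold on samples")
```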
Exercise 7.
- Two linear transformations \(A\) and \(B\) on an inner product space are called congruent if there exists an invertible linear transformation \(P\) such that \(B=P^{*} A P\) . (The concept is frequently defined for the "quadratic forms" associated with linear transformations and not for the linear transformations themselves; this is largely a matter of taste. Note that if \(\alpha(x)=(A x, x)\) and \(\beta(x)=(B x, x)\) , then \(B=P^{*} A P\) implies that \(\beta(x)=\alpha(P x)\) .) Prove that congruence is an equivalence relation.
- If \(A\) and \(B\) are congruent, then so also are \(A^{*}\) and \(B^{*}\) .
- Does there exist a linear transformation \(A\) such that \(A\) is congruent to a scalar \(\alpha\) , but \(A \neq \alpha\) ?
- Do there exist linear transformations \(A\) and \(B\) such that \(A\) and \(B\) are congruent, but \(A^{2}\) and \(B^{2}\) are not?
- If two invertible transformations are congruent, then so are their inverses.
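The adjoint and inverse parts of this exercise can be checked numerically: if \(B=P^{*} A P\), then \(B^{*}=P^{*} A^{*} P\), and, when everything is invertible, \(B^{-1}=Q^{*} A^{-1} Q\) with \(Q=(P^{-1})^{*}\). A sketch on \(\mathbb{C}^{4}\) (the dimension, the random matrices, and the shift making \(P\) invertible are illustrative assumptions):

```python
import numpy as np

# Check congruence of adjoints and of inverses for B = P*AP.
rng = np.random.default_rng(6)
n = 4
def rand():
    return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

A = rand()                         # generically invertible
P = rand() + 5 * np.eye(n)         # shifted to make invertibility likely
star = lambda M: M.conj().T
B = star(P) @ A @ P

# adjoints: B* = P* A* P
assert np.allclose(star(B), star(P) @ star(A) @ P)

# inverses: B^{-1} = Q* A^{-1} Q with Q = (P^{-1})*
Q = star(np.linalg.inv(P))
assert np.allclose(np.linalg.inv(B), star(Q) @ np.linalg.inv(A) @ Q)
print("congruence of adjoints and inverses verified on a sample")
```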