It becomes necessary now to straighten out the relation between general vector spaces and inner product spaces. The theorem of the preceding section shows that, as long as we are careful about complex conjugation, \((x, y)\) can completely take the place of \([x, y]\). It might seem that it would have been desirable to develop the entire subject of general vector spaces in such a way that the concept of orthogonality in a unitary space becomes not merely an analogue but a special case of some previously studied general relation between vectors and functionals. One way, for example, of avoiding the unpleasantness of conjugation (or, rather, of shifting it to a less conspicuous position) would have been to define the dual space of a complex vector space as the set of conjugate linear functionals, that is, the set of numerically valued functions \(y\) for which \[y(\alpha_{1} x_{1}+\alpha_{2} x_{2})=\bar{\alpha}_{1} y(x_{1})+\bar{\alpha}_{2} y(x_{2}).\] Because it seemed pointless (and contrary to common usage) to introduce this complication into the general theory, we chose instead the roundabout way that we just traveled. Since from now on we shall deal with inner product spaces only, we ask the reader mentally to revise all the preceding work by replacing, throughout, the bracket \([x, y]\) by the parenthesis \((x, y)\). Let us examine the effect of this change on the theorems and definitions of the first two chapters.
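The defining identity of a conjugate linear functional can be checked numerically. The following is an illustrative sketch (not part of the text): in \(\mathbb{C}^2\) with the standard inner product, the map \(x \mapsto (c, x)\) for a fixed vector \(c\) is conjugate linear in \(x\); the particular vectors and scalars below are made-up example data.

```python
import numpy as np

# A fixed vector c in C^2 (hypothetical example data).
c = np.array([1 + 2j, 3 - 1j])

# y(x) = (c, x) with the standard inner product (u, v) = sum_k u_k * conj(v_k);
# as a function of x this is conjugate linear.
def y(x):
    return np.sum(c * x.conj())

a1, a2 = 2 + 1j, -1j
x1 = np.array([1j, 2.0])
x2 = np.array([0.5, 1 + 1j])

# y(a1*x1 + a2*x2) = conj(a1)*y(x1) + conj(a2)*y(x2)
assert np.isclose(y(a1 * x1 + a2 * x2),
                  np.conj(a1) * y(x1) + np.conj(a2) * y(x2))
```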
The replacement of \(\mathcal{V}^{\prime}\) by \(\mathcal{V}^{*}\) is merely a change of notation; the new symbol is supposed to remind us that something new (namely, an inner product) has been added to \(\mathcal{V}^{\prime}\). Of a little more interest is the (conjugate) isomorphism between \(\mathcal{V}\) and \(\mathcal{V}^{*}\); by means of it the theorems of the section on dual bases, asserting the existence of linear functionals with various properties, may now be interpreted as asserting the existence of certain vectors in \(\mathcal{V}\) itself. Thus, for example, the existence of a dual basis to any given basis \(\mathcal{X} = \{x_{1}, \ldots, x_{n}\}\) implies now the existence of a basis \(\mathcal{Y} = \{y_{1}, \ldots, y_{n}\}\) (of \(\mathcal{V}\)) with the property that \((x_{i}, y_{j})=\delta_{i j}\).
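The dual basis just described can be computed concretely. As a minimal NumPy sketch (not from the text), take \(\mathbb{C}^2\) with the standard inner product, linear in the first argument and conjugate linear in the second; the particular basis below is made-up example data. Writing the \(x_i\) as the rows of a matrix \(X\), the condition \((x_i, y_j) = \delta_{ij}\) becomes a matrix equation whose solution gives the rows \(y_j\).

```python
import numpy as np

# A basis of C^2: rows of X are x_1, x_2 (hypothetical example data).
X = np.array([[1.0, 1j],
              [0.0, 2.0]], dtype=complex)

# With (u, v) = sum_k u_k * conj(v_k), the condition (x_i, y_j) = delta_ij
# reads X @ Y.conj().T = I, so Y = inv(X).conj().T (rows of Y are y_1, y_2).
Y = np.linalg.inv(X).conj().T

# Verify (x_i, y_j) = delta_ij.
assert np.allclose(X @ Y.conj().T, np.eye(2))
```

Since the rows of \(X\) form a basis, \(X\) is invertible and the dual basis always exists and is unique, mirroring the existence theorem quoted above.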
More exciting still is the implied replacement of the annihilator \(\mathcal{M}^{0}\) of a subspace \(\mathcal{M}\) (\(\mathcal{M}^0\) lying in \(\mathcal{V}^{\prime}\) or \(\mathcal{V}^{*}\)) by the orthogonal complement \(\mathcal{M}^{\perp}\) (lying, along with \(\mathcal{M}\), in \(\mathcal{V}\)). The most radical new development, however, concerns the adjoint of a linear transformation. Thus we may write the analogue of formula (1) of the section on adjoints, and corresponding to every linear transformation \(A\) on \(\mathcal{V}\) we may define a linear transformation \(A^{*}\) by writing \[(A x, y)=(x, A^{*} y)\] for every \(x\) and \(y\). It follows from this definition that \(A^{*}\) is again a linear transformation defined on the same vector space \(\mathcal{V}\), but, because of the Hermitian symmetry of \((x, y)\), the relation between \(A\) and \(A^{*}\) is not quite the same as the relation between \(A\) and \(A^{\prime}\). The most notable difference is that (in a unitary space) \((\alpha A)^{*}=\bar{\alpha} A^{*}\) (and not \((\alpha A)^{*}=\alpha A^{*}\)). Associated with this phenomenon is the fact that if the matrix of \(A\), with respect to some fixed basis, is \((\alpha_{i j})\), then the matrix of \(A^{*}\), with respect to the dual basis, is not \((\alpha_{j i})\) but \((\bar{\alpha}_{j i})\). For determinants we do not have \(\operatorname{det} A^{*}=\operatorname{det} A\) but \(\operatorname{det} A^{*}=\overline{\operatorname{det} A}\), and, consequently, the proper values of \(A^{*}\) are not the same as those of \(A\), but rather their conjugates. Here, however, the differences stop. All the other results of the section on adjoints concerning the anti-isomorphic nature of the correspondence \(A \rightleftarrows A^{*}\) are valid; the identity \(A=A^{**}\) is strictly true and does not need the help of an isomorphism to interpret it.
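The facts just listed about \(A^{*}\) can all be verified numerically. The following is a hedged NumPy sketch, not part of the text: it identifies a linear transformation on \(\mathbb{C}^3\) with its matrix in an orthonormal basis (so that the matrix of \(A^{*}\) is the conjugate transpose), uses a randomly generated matrix as example data, and checks the defining identity, the rule \((\alpha A)^{*}=\bar{\alpha} A^{*}\), the determinant formula, and the conjugation of proper values.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A_star = A.conj().T  # matrix of A* in an orthonormal basis: conjugate transpose

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

def inner(u, v):
    # (u, v): linear in u, conjugate linear in v
    return np.sum(u * v.conj())

# The defining identity (Ax, y) = (x, A*y).
assert np.isclose(inner(A @ x, y), inner(x, A_star @ y))

# (alpha A)* = conj(alpha) A*, not alpha A*.
alpha = 2 - 3j
assert np.allclose((alpha * A).conj().T, np.conj(alpha) * A_star)

# det A* = conj(det A).
assert np.isclose(np.linalg.det(A_star), np.conj(np.linalg.det(A)))

# The proper values of A* are the conjugates of those of A.
assert np.allclose(np.sort_complex(np.linalg.eigvals(A_star)),
                   np.sort_complex(np.conj(np.linalg.eigvals(A))))

# A** = A, strictly.
assert np.allclose(A_star.conj().T, A)
```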
Presently we shall discuss linear transformations on inner product spaces and we shall see that the principal new feature that differentiates their study from the discussion of Chapter II is the possibility of comparing \(A\) and \(A^{*}\) as linear transformations on the same space, and of investigating those classes of linear transformations that bear a particularly simple relation to their adjoints.