There is one important case in which multiplication does not get turned around, that is, in which \((A B)^{\prime}=A^{\prime} B^{\prime}\): namely, the case in which \(A\) and \(B\) commute. Indeed, if \(A B=B A\), then \((A B)^{\prime}=(B A)^{\prime}=A^{\prime} B^{\prime}\). We have, in particular, \((A^{n})^{\prime}=(A^{\prime})^{n}\), and, more generally, \((p(A))^{\prime}=p(A^{\prime})\) for every polynomial \(p\). It follows from this that if \(E\) is a projection, then so is \(E^{\prime}\). The question arises: with what direct sum decomposition is \(E^{\prime}\) associated?
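
To spell out the last assertion: a linear transformation is a projection if and only if it is idempotent, and idempotence is preserved by the formation of adjoints, since \(E\) commutes with itself; in one line, whenever \(E^{2}=E\), \[(E^{\prime})^{2}=(E^{2})^{\prime}=E^{\prime}.\]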

Theorem 1. If \(E\) is the projection on \(\mathcal{M}\) along \(\mathcal{N}\), then \(E^{\prime}\) is the projection on \(\mathcal{N}^{0}\) along \(\mathcal{M}^{0}\).

Proof. We know already that \((E^{\prime})^{2}=E^{\prime}\) and that \(\mathcal{V}^{\prime}=\mathcal{N}^{0} \oplus \mathcal{M}^{0}\) (cf. Section: Dual of a direct sum). It is necessary only to find the subspaces consisting of the solutions of \(E^{\prime} y=0\) and of \(E^{\prime} y=y\). This we do in four steps.

  1. If \(y\) is in \(\mathcal{M}^{0}\), then, for all \(x\), \[[x, E^{\prime} y]=[E x, y]=0,\] so that \(E^{\prime} y=0\).
  2. If \(E^{\prime} y=0\), then, for all \(x\) in \(\mathcal{M}\), \[[x, y]=[E x, y]=[x, E^{\prime} y]=0,\] so that \(y\) is in \(\mathcal{M}^{0}\).
  3. If \(y\) is in \(\mathcal{N}^{0}\), then, for all \(x\), \begin{align} [x, y] &= [E x, y]+[(1-E) x, y]\\ &= [E x, y]\\ &= [x, E^{\prime} y] \end{align} (the second summand vanishes because \((1-E) x\) is in \(\mathcal{N}\)), so that \(E^{\prime} y=y\).
  4. If \(E^{\prime} y=y\), then, for all \(x\) in \(\mathcal{N}\), \[[x, y]=[x, E^{\prime} y]=[E x, y]=0,\] so that \(y\) is in \(\mathcal{N}^{0}\).

Steps 1 and 2 together show that the set of solutions of \(E^{\prime} y=0\) is precisely \(\mathcal{M}^{0}\); steps 3 and 4 together show that the set of solutions of \(E^{\prime} y=y\) is precisely \(\mathcal{N}^{0}\). This concludes the proof of the theorem. ◻
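
It is worth recording a consequence for later use: since \(1-E\) is the projection on \(\mathcal{N}\) along \(\mathcal{M}\), the theorem, applied to \(1-E\), shows that \((1-E)^{\prime}=1-E^{\prime}\) is the projection on \(\mathcal{M}^{0}\) along \(\mathcal{N}^{0}\); this is the form in which the result is used in the proof of Theorem 2 below.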

Theorem 2. If \(\mathcal{M}\) is invariant under \(A\), then \(\mathcal{M}^{0}\) is invariant under \(A^{\prime}\); if \(A\) is reduced by \((\mathcal{M}, \mathcal{N})\), then \(A^{\prime}\) is reduced by \((\mathcal{M}^{0}, \mathcal{N}^{0})\).

Proof. We shall prove only the first statement; the second one clearly follows from it. We first observe the following identity, valid for any three linear transformations \(E\), \(F\), and \(A\), subject to the relation \(F=1-E\): \[F A F-F A=E A E-A E. \tag{1}\] (Compare this with the proof of Section: Projections and invariance, Theorem 2.) Let \(E\) be any projection on \(\mathcal{M}\); by Section: Projections and invariance, Theorem 1, the right member of (1) vanishes, and, therefore, so does the left member. By taking adjoints, we obtain \(F^{\prime} A^{\prime} F^{\prime}=A^{\prime} F^{\prime}\); since, by Theorem 1 of the present section, \(F^{\prime}=1-E^{\prime}\) is a projection on \(\mathcal{M}^{0}\), another application of Section: Projections and invariance, Theorem 1 shows that \(\mathcal{M}^{0}\) is invariant under \(A^{\prime}\), and the proof of Theorem 2 is complete. (Here is an alternative proof of the first statement of Theorem 2, a proof that does not make use of the fact that \(\mathcal{V}\) is the direct sum of \(\mathcal{M}\) and some other subspace. If \(y\) is in \(\mathcal{M}^{0}\), then \([x, A^{\prime} y]=[A x, y]=0\) for all \(x\) in \(\mathcal{M}\), and therefore \(A^{\prime} y\) is in \(\mathcal{M}^{0}\). The only advantage of the algebraic proof given above over this simple geometric proof is that the former prepares the ground for future work with projections.) ◻
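
For the reader who wishes to check the identity (1), it is enough to substitute \(F=1-E\) and expand: \[F A F-F A=F A(F-1)=-F A E=-(1-E) A E=E A E-A E.\]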

We conclude our treatment of adjoints by discussing their matrices; this discussion is intended to illuminate the entire theory and to enable the reader to construct many examples.

We shall need the following fact: if \(\mathcal{X}=\{x_{1}, \ldots, x_{n}\}\) is any basis in the \(n\)-dimensional vector space \(\mathcal{V}\), if \(\mathcal{X}^{\prime}=\{y_{1}, \ldots, y_{n}\}\) is the dual basis in \(\mathcal{V}^{\prime}\), and if the matrix of the linear transformation \(A\) in the coordinate system \(\mathcal{X}\) is \((\alpha_{i j})\), then \[\alpha_{i j}=[A x_{j}, y_{i}]. \tag{2}\] This follows from the definition of the matrix of a linear transformation; since \(A x_{j}=\sum_{k} \alpha_{k j} x_{k}\), and since \([x_{k}, y_{i}]\) is \(1\) or \(0\) according as \(k=i\) or \(k \neq i\), we have \[[A x_{j}, y_{i}]=\sum_{k} \alpha_{k j}[x_{k}, y_{i}]=\alpha_{i j}.\] To keep things straight in the applications, we rephrase formula (2) verbally, thus: to find the \((i, j)\) element of \([A]\) in the basis \(\mathcal{X}\), apply \(A\) to the \(j\)-th element of \(\mathcal{X}\) and then take the value of the \(i\)-th linear functional (in \(\mathcal{X}^{\prime}\)) at the vector so obtained.
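
As an illustration of this recipe (the coefficients below are chosen arbitrarily, purely for the sake of example), suppose that \(n=2\) and that \[A x_{1}=2 x_{1}+3 x_{2}, \qquad A x_{2}=x_{1}-x_{2};\] then \(\alpha_{21}=[A x_{1}, y_{2}]=3\), \(\alpha_{12}=[A x_{2}, y_{1}]=1\), and altogether \[[A]=\begin{pmatrix} 2 & 1 \\ 3 & -1 \end{pmatrix}.\]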

It is now very easy to find the matrix \((\alpha^{\prime}_{i j})=[A^{\prime}]\) in the coordinate system \(\mathcal{X}^{\prime}\); we merely follow the recipe just given. In other words, we consider \(A^{\prime} y_{j}\), and take the value of the \(i\)-th linear functional in \(\mathcal{X}^{\prime \prime}\) (that is, of \(x_{i}\) considered as a linear functional on \(\mathcal{V}^{\prime}\)) at this vector; the result is that \[\alpha_{i j}^{\prime}=[x_{i}, A^{\prime} y_{j}].\] Since \([x_{i}, A^{\prime} y_{j}]=[A x_{i}, y_{j}]=\alpha_{j i}\), we have \(\alpha_{i j}^{\prime}=\alpha_{j i}\); the matrix \([A^{\prime}]\) is called the transpose of \([A]\).
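
In the numerical illustration above (the same hypothetical coefficients as before), this yields \[[A^{\prime}]=\begin{pmatrix} 2 & 3 \\ 1 & -1 \end{pmatrix},\] which is indeed the transpose of \([A]\): for instance, \(\alpha^{\prime}_{12}=\alpha_{21}=3\).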

Observe that our results on the relation between \(E\) and \(E^{\prime}\) (where \(E\) is a projection) could also have been derived by using the facts about the matricial representation of a projection together with the present result on the matrices of adjoint transformations.
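
As a concrete illustration of this remark (the subspaces here are chosen merely for the sake of example): let \(\mathcal{V}\) be a two-dimensional space with basis \(\{x_{1}, x_{2}\}\), let \(\mathcal{M}\) be the subspace spanned by \(x_{1}\), and let \(\mathcal{N}\) be the subspace spanned by \(x_{1}+x_{2}\). If \(E\) is the projection on \(\mathcal{M}\) along \(\mathcal{N}\), then \(E x_{1}=x_{1}\) and \(E(x_{1}+x_{2})=0\), so that \(E x_{2}=-x_{1}\), and therefore \[[E]=\begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix}, \qquad [E^{\prime}]=\begin{pmatrix} 1 & 0 \\ -1 & 0 \end{pmatrix}.\] The transposed matrix says that \(E^{\prime} y_{1}=y_{1}-y_{2}\) and \(E^{\prime} y_{2}=0\); since \(\mathcal{M}^{0}\) is spanned by \(y_{2}\) and \(\mathcal{N}^{0}\) is spanned by \(y_{1}-y_{2}\), this exhibits \(E^{\prime}\) as the projection on \(\mathcal{N}^{0}\) along \(\mathcal{M}^{0}\), in agreement with Theorem 1.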