We have already seen (Section: Transformations of rank one, Theorem 2) that every linear transformation \(A\) of rank \(\rho\) is the sum of \(\rho\) linear transformations of rank one. It is easy to see (using the spectral theorem) that if \(A\) is self-adjoint, or positive, then the summands may also be taken self-adjoint, or positive, respectively. We know (Section: Transformations of rank one, Theorem 1) what the matrix of a transformation of rank one has to be; what more can we say if the transformation is self-adjoint or positive?
Theorem 1. If \(A\) has rank one and is self-adjoint (or positive), then in every orthonormal coordinate system the matrix \((\alpha_{i j})\) of \(A\) is given by \(\alpha_{i j}=\kappa \beta_{i} \bar{\beta}_{j}\) with a real \(\kappa\) (or by \(\alpha_{i j}=\gamma_{i} \bar{\gamma}_{j}\) ). If, conversely, \([A]\) has this form in some orthonormal coordinate system, then \(A\) has rank one and is self-adjoint (or positive).
Proof. We know that the matrix \((\alpha_{i j})\) of a transformation \(A\) of rank one, in any orthonormal coordinate system \(\mathcal{X}=\{x_{1}, \ldots, x_{n}\}\), is given by \(\alpha_{i j}=\beta_{i} \gamma_{j}\). If \(A\) is self-adjoint, we must also have \(\alpha_{i j}=\bar{\alpha}_{j i}\), whence \(\beta_{i} \gamma_{j}=\overline{\beta_{j} \gamma_{i}}\). If \(\beta_{i}=0\) and \(\gamma_{i} \neq 0\) for some \(i\), then \(\bar{\beta}_{j}=\beta_{i} \gamma_{j} / \bar{\gamma}_{i}=0\) for all \(j\), whence \(A=0\). Since we assumed that the rank of \(A\) is one (and not zero), this is impossible. Similarly \(\beta_{i} \neq 0\) and \(\gamma_{i}=0\) is impossible; that is, we can find an \(i\) for which \(\beta_{i} \gamma_{i} \neq 0\). Using this \(i\), we have \[\bar{\beta}_{j}=(\beta_{i} / \bar{\gamma}_{i}) \gamma_{j}=\kappa_{0} \gamma_{j}\] with a non-zero constant \(\kappa_{0}\), independent of \(j\); equivalently \(\gamma_{j}=\bar{\beta}_{j} / \kappa_{0}\), so that \(\alpha_{i j}=\beta_{i} \gamma_{j}=(1 / \kappa_{0}) \beta_{i} \bar{\beta}_{j}\). Since the diagonal elements \(\alpha_{j j}=(A x_{j}, x_{j})=\beta_{j} \gamma_{j}\) of a self-adjoint matrix are real, and \(\alpha_{i i}=\beta_{i} \gamma_{i} \neq 0\), the factor \(\kappa=1 / \kappa_{0}\) is real, and we conclude that \(\alpha_{i j}=\kappa \beta_{i} \bar{\beta}_{j}\) with a real (non-zero) \(\kappa\).
If, moreover, \(A\) is positive, then we even know that \(\kappa \beta_{j} \bar{\beta}_{j}=\alpha_{j j}=(A x_{j}, x_{j}) \geq 0\) for all \(j\); taking \(j=i\) (so that \(\beta_{i} \neq 0\)) and recalling that \(\kappa \neq 0\), we see that \(\kappa\) is positive. In this case we write \(\lambda=\sqrt{\kappa}\) and \(\gamma_{i}=\lambda \beta_{i}\); since \(\lambda\) is real, the relation \(\kappa \beta_{i} \bar{\beta}_{j}=(\lambda \beta_{i}) \overline{(\lambda \beta_{j})}=\gamma_{i} \bar{\gamma}_{j}\) shows that \(\alpha_{i j}\) is given by \(\alpha_{i j}=\gamma_{i} \bar{\gamma}_{j}\).
It is easy to see that these necessary conditions are also sufficient. If \(\alpha_{i j}=\kappa \beta_{i} \bar{\beta}_{j}\) with a real \(\kappa\), then \(\bar{\alpha}_{j i}=\kappa \bar{\beta}_{j} \beta_{i}=\alpha_{i j}\), so that \(A\) is self-adjoint; since every column of \((\alpha_{i j})\) is a scalar multiple of the vector with coordinates \(\beta_{1}, \ldots, \beta_{n}\), the rank of \(A\) is one (provided, of course, that \(A \neq 0\)). If \(\alpha_{i j}=\gamma_{i} \bar{\gamma}_{j}\), and \(x=\sum_{i} \xi_{i} x_{i}\), then \begin{align} (A x, x) &= \sum_{i} \sum_{j} \alpha_{i j} \bar{\xi}_{i} \xi_{j}\\ &= \sum_{i} \sum_{j} \gamma_{i} \bar{\gamma}_{j} \bar{\xi}_{i} \xi_{j} \\ &= \Big(\sum_{i} \gamma_{i} \bar{\xi}_{i}\Big) \overline{\Big(\sum_{j} \gamma_{j} \bar{\xi}_{j}\Big)}\\ &= \Big|\sum_{i} \gamma_{i} \bar{\xi}_{i}\Big|^{2}\\ &\geq 0, \end{align} so that \(A\) is positive. ◻
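In a numerical setting, both directions of Theorem 1 can be checked directly; the following sketch (using NumPy, with arbitrarily chosen test vectors \(\beta\), \(\gamma\), and a real \(\kappa\)) verifies the self-adjointness, rank, and positivity claims.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-one self-adjoint matrix: alpha_ij = kappa * beta_i * conj(beta_j),
# with kappa real.  beta and kappa here are arbitrary test data.
beta = rng.standard_normal(4) + 1j * rng.standard_normal(4)
kappa = -2.5                                # any non-zero real number
A = kappa * np.outer(beta, beta.conj())

assert np.allclose(A, A.conj().T)           # A is self-adjoint
assert np.linalg.matrix_rank(A) == 1        # A has rank one

# Rank-one positive matrix: alpha_ij = gamma_i * conj(gamma_j).
gamma = rng.standard_normal(4) + 1j * rng.standard_normal(4)
P = np.outer(gamma, gamma.conj())

# (Px, x) = |sum_i gamma_i * conj(xi_i)|^2 >= 0 for every vector x.
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
quad = np.vdot(x, P @ x).real               # the quadratic form (Px, x)
assert np.allclose(quad, abs(np.vdot(x, gamma)) ** 2)
assert quad >= 0
```

The identity checked in the last two lines is exactly the computation in the sufficiency proof: the double sum collapses to the square of the absolute value of \(\sum_{i} \gamma_{i} \bar{\xi}_{i}\).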
As a consequence of Theorem 1 it is very easy to prove a remarkable theorem on positive matrices.
Theorem 2. If \(A\) and \(B\) are positive linear transformations whose matrices in some orthonormal coordinate system are \((\alpha_{i j})\) and \((\beta_{i j})\) respectively, then the linear transformation \(C\) , whose matrix \((\gamma_{i j})\) in the same coordinate system is given by \(\gamma_{i j}=\alpha_{i j} \beta_{i j}\) for all \(i\) and \(j\) , is also positive.
Proof. Since we may write both \(A\) and \(B\) as sums of positive transformations of rank one, so that \[\alpha_{i j}=\sum_{p} \alpha_{i}^{p} \bar{\alpha}_{j}^{p}\] and \[\beta_{i j}=\sum_{q} \beta_{i}^{q} \bar{\beta}_{j}^{q},\] it follows that \[\gamma_{i j}=\sum_{p} \sum_{q} \alpha_{i}^{p} \beta_{i}^{q} \overline{(\alpha_{j}^{p} \beta_{j}^{q})}.\] (The superscripts here are not exponents.) Since a sum of positive matrices is positive, it will be sufficient to prove that, for each fixed \(p\) and \(q\) , the matrix \(((\alpha_{i}^{p} \beta_{i}^{q}) \overline{(\alpha_{j}^{p} \beta_{j}^{q})})\) is positive, and this follows from Theorem 1. ◻
The proof shows, by the way, that Theorem 2 remains valid if we replace "positive" by "self-adjoint" in both hypothesis and conclusion; in most applications, however, it is only the actually stated version that is useful. The matrix \((\gamma_{i j})\) described in Theorem 2 is called the Hadamard product of \((\alpha_{i j})\) and \((\beta_{i j})\) .
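Theorem 2 (often known as the Schur product theorem) is also easy to test numerically. In the sketch below, `random_positive` is an illustrative helper that manufactures a positive matrix as \(M M^{*}\); the Hadamard product of two such matrices is then confirmed to have non-negative eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_positive(n):
    """Illustrative helper: M M* is positive for any complex matrix M."""
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return M @ M.conj().T

A = random_positive(5)
B = random_positive(5)

# Hadamard (entrywise) product: gamma_ij = alpha_ij * beta_ij.
C = A * B

# C is Hermitian, and by Theorem 2 it is positive: no negative eigenvalues
# (up to floating-point tolerance).
assert np.allclose(C, C.conj().T)
assert np.all(np.linalg.eigvalsh(C) >= -1e-8)
```

Note that `A * B` in NumPy is precisely the entrywise product of Theorem 2, not the ordinary (composition) product `A @ B`.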
EXERCISES
Exercise 1. Suppose that \(\mathcal{U}\) and \(\mathcal{V}\) are finite-dimensional inner product spaces (both real or both complex).
- There is a unique inner product on the vector space of all bilinear forms on \(\mathcal{U} \oplus \mathcal{V}\) such that if \(w_{1}(x, y)=(x, x_{1})(y, y_{1})\) and \(w_{2}(x, y)=(x, x_{2})(y, y_{2})\) , then \((w_{1}, w_{2})=(x_{2}, x_{1})(y_{2}, y_{1})\) .
- There is a unique inner product on the tensor product \(\mathcal{U} \otimes \mathcal{V}\) such that if \(z_{1}=x_{1} \otimes y_{1}\) and \(z_{2}=x_{2} \otimes y_{2}\) , then \((z_{1}, z_{2})=(x_{1}, x_{2})(y_{1}, y_{2})\) .
- If \(\{x_{i}\}\) and \(\{y_{p}\}\) are orthonormal bases in \(\mathcal{U}\) and \(\mathcal{V}\) , respectively, then the vectors \(x_{i} \otimes y_{p}\) form an orthonormal basis in \(\mathcal{U} \otimes \mathcal{V}\) .
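The last assertion of Exercise 1 can be illustrated concretely by modeling \(\mathcal{U} \otimes \mathcal{V}\) with Kronecker products. In the sketch below (the helper `random_orthonormal_basis` is only for illustration), orthonormal bases of \(\mathbb{C}^{2}\) and \(\mathbb{C}^{3}\) are tensored together, and the \(2 \cdot 3 = 6\) resulting vectors have the identity as their Gram matrix.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_orthonormal_basis(n):
    """Illustrative helper: rows are an orthonormal basis of C^n (via QR)."""
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, _ = np.linalg.qr(M)
    return Q.T                       # rows are the columns of a unitary Q

xs = random_orthonormal_basis(2)     # basis {x_i} of U = C^2
ys = random_orthonormal_basis(3)     # basis {y_p} of V = C^3

# Model x_i (x) y_p as Kronecker products; the inner product rule
# (x1 (x) y1, x2 (x) y2) = (x1, x2)(y1, y2) makes these 6 vectors orthonormal.
basis = [np.kron(x, y) for x in xs for y in ys]
G = np.array([[np.vdot(z2, z1) for z2 in basis] for z1 in basis])

assert np.allclose(G, np.eye(6))     # Gram matrix is the identity
```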
Exercise 2. Is the tensor product of two Hermitian transformations necessarily Hermitian? What about unitary transformations? What about normal transformations?