Glancing back at the last section, the reader will observe that we did not give any non-trivial examples of alternating \(k\)-linear forms, and we did not even indirectly hint at any existence theorem concerning them. In fact they do not always exist; Section: Alternating forms, Theorem 2 implies, for instance, that if \(k > n\), then \(0\) is the only alternating \(k\)-linear form. (See Section: Dimension, Theorem 2.) For the applications we have in mind, we need only one existence theorem; we proceed to prove a rather sharp form of it.
Theorem 1. The vector space of alternating \(n\)-linear forms on an \(n\)-dimensional vector space is one-dimensional.
Proof. We show first that if \(1 \leq k \leq n\), then there exists at least one non-zero alternating \(k\)-linear form; the proof goes by induction on \(k\). If \(k = 1\), the desired result follows from the existence of non-trivial linear functionals (see Section: Dual bases, Theorem 3). If \(1 \leq k < n\), we assume that \(v\) is a non-zero alternating \(k\)-linear form; using \(v\) we shall construct a non-zero alternating \((k + 1)\)-linear form \(w\). Since \(v \neq 0\), we can find vectors \(x_1^0, \ldots, x_k^0\) such that \(v(x_1^0, \ldots, x_k^0) \neq 0\) (the superscripts are just indices here). Since \(k < n\), we can find a vector \(x_{k+1}^0\) that does not belong to the subspace spanned by \(x_1^0, \ldots, x_k^0\), and then (see Section: Annihilators, Theorem 1) we can find a linear functional \(u\) such that \(u(x_1^0) = \cdots = u(x_k^0) = 0\) and \(u(x_{k+1}^0) \neq 0\).
The promised \((k + 1)\)-linear form \(w\) is obtained from the linear functional \(u\) and the \(k\)-linear form \(v\) by writing \[w(x_1, \ldots, x_k, x_{k+1}) = \sum_{i=1}^{k} v(x_1, \ldots, x_{i-1}, x_{k+1}, x_{i+1}, \ldots, x_k)\, u(x_i) - v(x_1, \ldots, x_k)\, u(x_{k+1}). \tag{1}\] Thus, for instance, if \(k = 3\), then \begin{align*} w(x_1, x_2, x_3, x_4) &= v(x_4, x_2, x_3) u(x_1) + v(x_1, x_4, x_3) u(x_2) \\ & \quad + v(x_1, x_2, x_4) u(x_3) - v(x_1, x_2, x_3) u(x_4). \end{align*}
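In the lowest case, \(k = 1\), both \(v\) and \(u\) are linear functionals, and (1) reduces to \[w(x_1, x_2) = v(x_2) u(x_1) - v(x_1) u(x_2).\]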
It follows from the elementary discussion in Section: Multilinear forms that \(w\) is indeed a \((k + 1)\)-linear form; we are to prove that it is non-zero and alternating.
The fact that \(w\) is not identically zero is easy to prove. Indeed, since \(u(x_i^0) = 0\) for \(i = 1, \ldots, k\), it follows that if we replace each \(x_i\) by \(x_i^0\) in (1), \(i = 1, \ldots, k + 1\), then the first \(k\) terms on the right side of (1) all vanish, and, consequently, \[w(x_1^0, \ldots, x_k^0, x_{k+1}^0) = -v(x_1^0, \ldots, x_k^0) u(x_{k+1}^0) \neq 0. \tag{2}\]
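In the case \(k = 3\), for instance, the computation behind (2) reads \begin{align*} w(x_1^0, x_2^0, x_3^0, x_4^0) &= v(x_4^0, x_2^0, x_3^0) u(x_1^0) + v(x_1^0, x_4^0, x_3^0) u(x_2^0) \\ & \quad + v(x_1^0, x_2^0, x_4^0) u(x_3^0) - v(x_1^0, x_2^0, x_3^0) u(x_4^0) \\ &= -v(x_1^0, x_2^0, x_3^0) u(x_4^0) \neq 0, \end{align*} since \(u(x_1^0) = u(x_2^0) = u(x_3^0) = 0\), while \(v(x_1^0, x_2^0, x_3^0) \neq 0\) and \(u(x_4^0) \neq 0\).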
Suppose now that \(x_1, \ldots, x_k, x_{k+1}\) are vectors and \(i\) and \(j\) are integers such that \(1 \leq i < j \leq k + 1\) and \(x_i = x_j\). We are to prove that, under these circumstances, \(w(x_1, \ldots, x_k, x_{k+1}) = 0\). We note that both \(x_i\) and \(x_j\) occur in the argument of \(v\) in all but two of the \(k + 1\) terms on the right side of (1); since \(v\) is alternating, all of those terms vanish.
The remainder of the proof splits naturally into two cases. If \(j = k + 1\), then all that is left is \[v(x_1, \ldots, x_{i-1}, x_{k+1}, x_{i+1}, \ldots, x_k) u(x_i) - v(x_1, \ldots, x_k) u(x_{k+1}),\] and, since \(x_i = x_{k+1}\), this is clearly equal to \(0\). If \(j \leq k\), then each of the two possibly non-vanishing terms that are still left can be obtained from the other by an application of the transposition \((i, j)\) to the argument of \(v\) (the scalar factors \(u(x_i)\) and \(u(x_j)\) being equal). Since \(v\), being alternating, is skew-symmetric, those terms differ in sign only, and hence their sum is zero. This proves that \(w\) is alternating. The induction is now complete; in particular (take \(k = n\)), there exists a non-zero alternating \(n\)-linear form, and therefore the dimension of the space of alternating \(n\)-linear forms is not less than \(1\).
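For concreteness, the two cases just treated look as follows in the \(k = 3\) example above. If \(x_4 = x_1\) (the case \(j = k + 1\)), then \begin{align*} w(x_1, x_2, x_3, x_1) &= v(x_1, x_2, x_3) u(x_1) + v(x_1, x_1, x_3) u(x_2) \\ & \quad + v(x_1, x_2, x_1) u(x_3) - v(x_1, x_2, x_3) u(x_1); \end{align*} the two middle terms vanish because the argument of \(v\) contains a repeated vector, and the first and last terms cancel. If \(x_2 = x_1\) (the case \(j \leq k\)), then \begin{align*} w(x_1, x_1, x_3, x_4) &= v(x_4, x_1, x_3) u(x_1) + v(x_1, x_4, x_3) u(x_1) \\ & \quad + v(x_1, x_1, x_4) u(x_3) - v(x_1, x_1, x_3) u(x_4); \end{align*} the last two terms vanish for the same reason, and, since \(v\) is skew-symmetric, \(v(x_4, x_1, x_3) = -v(x_1, x_4, x_3)\), so that the first two terms cancel as well.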
The fact that the dimension of the space of alternating \(n\)-linear forms is not more than \(1\) is an immediate consequence of Section: Alternating forms, Theorem 4. ◻
This concludes our discussion of multilinear algebra. The reader might well charge that the discussion was not very strongly motivated. The complete motivation cannot be contained in this book; the justification for studying multilinear algebra is the wide applicability of the subject. The only application that we shall make is to the theory of determinants (which, to be sure, could be treated by more direct but less elegant methods, involving much greater dependence on arbitrary choices of bases); that application belongs to the next chapter.
EXERCISES
Exercise 1. If \(w\) is a \(k\)-linear form, and if the characteristic of the underlying field of scalars is different from \(2\) (that is, if \(1 + 1 \neq 0\)), then \(w\) is the sum of a symmetric \(k\)-linear form and a skew-symmetric one. What if the characteristic is \(2\)?
Exercise 2. Give an example of a skew-symmetric multilinear form that is not alternating. (Recall that, in view of the discussion in Section: Alternating forms, the field of scalars must have characteristic \(2\).)
Exercise 3. Give an example of a non-zero alternating \(k\)-linear form \(w\) on an \(n\)-dimensional space (\(k < n\)) such that \(w(x_1, \ldots, x_k) = 0\) for some set of linearly independent vectors \(x_1, \ldots, x_k\).
Exercise 4. What is the dimension of the space of all symmetric \(k\)-linear forms? What about the skew-symmetric ones? What about the alternating ones?