A \(k\)-linear form \(w\) is skew-symmetric if \(\pi w = -w\) for every odd permutation \(\pi\) in \(\mathcal{S}_k\). Equivalently, \(w\) is skew-symmetric if \(\pi w = (\operatorname{sgn} \pi) w\) for every permutation \(\pi\) in \(\mathcal{S}_k\). (If \(\pi w = (\operatorname{sgn} \pi) w\) for all \(\pi\), then, in particular, \(\pi w = -w\) whenever \(\pi\) is odd. If, conversely, \(\pi w = -w\) for all odd \(\pi\), then, given an arbitrary \(\pi\), factor it into transpositions, say \(\pi = \tau_1 \ldots \tau_q\), observe that \(\operatorname{sgn} \pi = (-1)^q\), apply the hypothesis \(q\) times (once for each transposition) to get \(\pi w = (-1)^q w\), and conclude that \(\pi w = (\operatorname{sgn} \pi) w\), as asserted. This proof makes tacit use of the unproved but easily available fact that if \(\sigma\) and \(\tau\) are permutations in \(\mathcal{S}_k\), then \((\sigma \tau) w = \sigma (\tau w)\).)

The set of all skew-symmetric \(k\)-linear forms is a subspace of the space of all \(k\)-linear forms. To get a non-trivial example of a skew-symmetric bilinear form \(w\), let \(y_1\) and \(y_2\) be linear functionals and write \[w(x_1, x_2) = y_1(x_1) y_2(x_2) - y_1(x_2) y_2(x_1).\] More generally, if \(w\) is an arbitrary \(k\)-linear form, a skew-symmetric \(k\)-linear form can be obtained from \(w\) by forming \(\sum (\operatorname{sgn} \pi)\, \pi w\), where the summation is extended over all permutations \(\pi\) in \(\mathcal{S}_k\).
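This last construction can be carried out quite mechanically. The following sketch (in Python; the coordinate space, the sample bilinear form, and the helper names `sign` and `antisymmetrize` are illustrative choices, not anything fixed by the text) forms the sum \(\sum (\operatorname{sgn} \pi)\, \pi w\) for a bilinear form on coordinate 2-space and exhibits the resulting skew-symmetry.

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation, given as a tuple of indices, computed by counting inversions."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def antisymmetrize(w, k):
    """Return the k-linear form  sum over pi in S_k of (sgn pi) (pi w)."""
    def skew(*xs):
        # (pi w)(x_1, ..., x_k) = w(x_{pi(1)}, ..., x_{pi(k)})
        return sum(sign(p) * w(*(xs[i] for i in p)) for p in permutations(range(k)))
    return skew

# An arbitrary bilinear form on coordinate 2-space, not itself skew-symmetric.
w = lambda x, y: x[0] * y[0] + 2 * x[0] * y[1]
w_skew = antisymmetrize(w, 2)

x, y = (1.0, 2.0), (3.0, -1.0)
print(w_skew(x, y), w_skew(y, x))   # two values, each the negative of the other
```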

A \(k\)-linear form \(w\) is called alternating if \(w(x_1, \ldots, x_k) = 0\) whenever two of the \(x\)’s are equal. (Note that if \(k = 1\), then this condition is vacuously satisfied.) The set of all alternating \(k\)-linear forms is a subspace of the space of all \(k\)-linear forms. There is an important relation between alternating and skew-symmetric forms.

Theorem 1. Every alternating multilinear form is skew-symmetric.

Proof. Suppose that \(w\) is an alternating \(k\)-linear form, and that \(i\) and \(j\) are integers, \(1 \leq i < j \leq k\). If \(x_1, \ldots, x_k\) are vectors, we write \[w_0(x_i, x_j) = w(x_1, \ldots, x_k);\] if the \(x\)’s other than \(x_i\) and \(x_j\) are held fixed (temporarily), then \(w_0\) is an alternating bilinear form in its two arguments. Since, by bilinearity, \[w_0(x_i + x_j, x_i + x_j) = w_0(x_i, x_i) + w_0(x_i, x_j) + w_0(x_j, x_i) + w_0(x_j, x_j),\] and since, by the alternating character of \(w_0\), the left side and the two extreme terms of the right side of this equation all vanish, we see that \(w_0(x_j, x_i) = -w_0(x_i, x_j)\). This, however, says that \[(i, j) \,w(x_1, \ldots, x_k) = -w(x_1, \ldots, x_k),\] or, since the \(x\)’s are arbitrary, that \((i, j) \,w = -w\). Since every odd permutation \(\pi\) is the product of an odd number of transpositions such as \((i, j)\), and since each such transposition reverses the sign of \(w\), it follows that \(\pi w = -w\) for every odd \(\pi\), and the proof of the theorem is complete. ◻
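A small numerical check of the theorem may be reassuring. In the sketch below (Python; the 3-by-3 determinant form `det3` and the random integer vectors are illustrative choices) transposing any two arguments of an alternating trilinear form reverses the sign, exactly as the proof predicts.

```python
import random

def det3(x1, x2, x3):
    """The 3-by-3 determinant: an alternating trilinear form on coordinate 3-space."""
    (a, b, c), (d, e, f), (g, h, i) = x1, x2, x3
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Transposing any two arguments reverses the sign, as Theorem 1 asserts.
x1, x2, x3 = [tuple(random.randint(-5, 5) for _ in range(3)) for _ in range(3)]
assert det3(x2, x1, x3) == -det3(x1, x2, x3)   # the transposition (1, 2)
assert det3(x3, x2, x1) == -det3(x1, x2, x3)   # the transposition (1, 3)
assert det3(x1, x3, x2) == -det3(x1, x2, x3)   # the transposition (2, 3)
```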

The connection between alternating forms and skew-symmetric ones involves one subtle point. Consider the following “proof” of the converse of Theorem 1: if \(w\) is a skew-symmetric \(k\)-linear form, if \(1 \leq i < j \leq k\), and if \(x_1, \ldots, x_k\) are vectors such that \(x_i = x_j\), then \[(i, j)\, w(x_1, \ldots, x_k) = w(x_1, \ldots, x_k)\] since \(x_i = x_j\), and at the same time, \[(i, j)\, w(x_1, \ldots, x_k) = -w(x_1, \ldots, x_k)\] since \(w\) is skew-symmetric; consequently \(w(x_1, \ldots, x_k) = -w(x_1, \ldots, x_k)\), so that \(w\) is alternating. This argument is wrong; the trouble is in the inference “if \(w = -w\), then \(w = 0\).” If we examine that inference in more detail, we find that it is based on the following reasoning: if \(w = -w\), then \(w + w = 0\), so that \((1 + 1) w = 0\). This is correct. The trouble is that in certain fields \(1 + 1 = 0\), and therefore the inference from \((1 + 1) w = 0\) to \(w = 0\) is not justified; the converse of Theorem 1 is, in fact, false for vector spaces over such fields.
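A concrete counterexample over such a field is easy to exhibit. In the sketch below (Python, with the two-element field modeled by arithmetic mod 2, and the particular bilinear form chosen only for illustration) a symmetric, hence skew-symmetric, form fails to be alternating.

```python
# Over the field with two elements (arithmetic mod 2) we have -1 = 1, so a bilinear
# form is skew-symmetric exactly when it is symmetric.  The symmetric form below is
# therefore skew-symmetric, yet it fails to be alternating.
def w(x, y):
    return (x[0] * y[0]) % 2                 # product of the first coordinates, mod 2

vectors = [(a, b) for a in (0, 1) for b in (0, 1)]
for x in vectors:
    for y in vectors:
        assert (w(x, y) + w(y, x)) % 2 == 0  # w(y, x) = -w(x, y) in this field

print(w((1, 0), (1, 0)))                     # prints 1: w(x, x) need not vanish
```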

Theorem 2. If \(x_1, \ldots, x_k\) are linearly dependent vectors and if \(w\) is an alternating \(k\)-linear form, then \(w(x_1, \ldots, x_k) = 0\).

Proof. If \(x_i = 0\) for some \(i\), the conclusion is trivial. If all the \(x_i\) are different from \(0\), we apply the theorem of Section: Linear combinations to find an \(x_h\), \(2 \leq h \leq k\), that is a linear combination of the preceding ones. If, say, \(x_h = \sum_{i=1}^{h-1} \alpha_i x_i\), we replace \(x_h\) in \(w(x_1, \ldots, x_k)\) by this expansion and use the linearity of \(w\) in its \(h\)-th argument; the result is a sum of terms \(\alpha_i\, w(x_1, \ldots, x_{h-1}, x_i, x_{h+1}, \ldots, x_k)\) with \(1 \leq i \leq h-1\), and each of these terms vanishes, since in each of them the vector \(x_i\) occurs twice (in the \(i\)-th place and in the \(h\)-th place). ◻
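The expansion used in the proof can be imitated numerically. In the sketch below (Python; the determinant form and the particular vectors are illustrative choices) an alternating trilinear form vanishes on a linearly dependent triple, term by term.

```python
def det3(x1, x2, x3):
    """The 3-by-3 determinant, an alternating trilinear form on coordinate 3-space."""
    (a, b, c), (d, e, f), (g, h, i) = x1, x2, x3
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

x1 = (1, 2, 0)
x2 = (0, 1, 1)
x3 = tuple(2 * u + 3 * v for u, v in zip(x1, x2))    # x3 = 2 x1 + 3 x2, a dependent triple

# Replacing x3 by its expansion: w(x1, x2, x3) = 2 w(x1, x2, x1) + 3 w(x1, x2, x2),
# and each term on the right has a repeated argument, hence vanishes.
print(det3(x1, x2, x3))                              # 0
print(2 * det3(x1, x2, x1) + 3 * det3(x1, x2, x2))   # 0, term by term
```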

In one extreme case (namely, when \(k = n\), the dimension of the space) a sort of converse of Theorem 2 is true.

Theorem 3. If \(w\) is a non-zero alternating \(n\)-linear form, and if \(x_1, \ldots, x_n\) are linearly independent vectors, then \(w(x_1, \ldots, x_n) \neq 0\).

Proof. Since (Section: Dimension, Theorem 2) the vectors \(x_1, \ldots, x_n\) form a basis, we may, given an arbitrary set of \(n\) vectors \(y_1, \ldots, y_n\), write each \(y\) as a linear combination of the \(x\)’s. If we replace each \(y\) in \(w(y_1, \ldots, y_n)\) by the corresponding linear combination of \(x\)’s and expand the result by multilinearity, we obtain a long linear combination of terms such as \(w(z_1, \ldots, z_n)\), where each \(z\) is one of the \(x\)’s. If, in such a term, two of the \(z\)’s coincide, then, since \(w\) is alternating, that term must vanish. If, on the other hand, all the \(z\)’s are distinct, then \(w(z_1, \ldots, z_n) = \pi w(x_1, \ldots, x_n)\) for some permutation \(\pi\). Since (Theorem 1) \(w\) is skew-symmetric, it follows that \(w(z_1, \ldots, z_n) = (\operatorname{sgn} \pi) w(x_1, \ldots, x_n)\). If \(w(x_1, \ldots, x_n)\) were \(0\), it would follow that \(w(z_1, \ldots, z_n) = 0\) in every case, and hence that \(w(y_1, \ldots, y_n) = 0\) for all \(y_1, \ldots, y_n\), contradicting the assumption that \(w \neq 0\). ◻
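The expansion that drives the proof can also be written out explicitly. In the sketch below (Python; the basis, the coefficients, and the determinant form are illustrative choices) the value of an alternating trilinear form at arbitrary vectors is recovered from its 27-term expansion over a basis, and its value at the independent basis vectors is not zero.

```python
from itertools import product

def det3(x1, x2, x3):
    """The 3-by-3 determinant, a non-zero alternating trilinear form."""
    (a, b, c), (d, e, f), (g, h, i) = x1, x2, x3
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# A basis x1, x2, x3 of coordinate 3-space, and coefficients expressing y1, y2, y3
# in terms of it:  y_r = sum over s of coeffs[r][s] * xs[s].
xs = [(1, 1, 0), (0, 1, 1), (1, 0, 1)]
coeffs = [(2, 0, 1), (1, 3, 0), (0, 1, 1)]
ys = [tuple(sum(c[s] * xs[s][t] for s in range(3)) for t in range(3)) for c in coeffs]

# Expand w(y1, y2, y3) by multilinearity into 3**3 terms w(z1, z2, z3), each z one
# of the x's; terms with a repeated x vanish, the rest are multiples of w(x1, x2, x3).
expansion = sum(
    coeffs[0][i] * coeffs[1][j] * coeffs[2][k] * det3(xs[i], xs[j], xs[k])
    for i, j, k in product(range(3), repeat=3)
)
print(det3(*ys) == expansion)   # True: the expansion reproduces the value exactly
print(det3(*xs))                # 2, not zero, since the x's are independent
```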

The proof (not the statement) of this result yields a valuable corollary.

Theorem 4. Any two alternating \(n\)-linear forms are linearly dependent.

Proof. Suppose that \(w_1\) and \(w_2\) are alternating \(n\)-linear forms and that \(\{x_1, \ldots, x_n\}\) is a basis. Given any \(n\) vectors \(y_1, \ldots, y_n\), write each of them as a linear combination of the \(x\)’s, and, just as above, replace each of them, in both \(w_1(y_1, \ldots, y_n)\) and \(w_2(y_1, \ldots, y_n)\), by the corresponding linear combination. It follows that each of \(w_1(y_1, \ldots, y_n)\) and \(w_2(y_1, \ldots, y_n)\) is a linear combination (the same linear combination) of terms such as \(w_1(z_1, \ldots, z_n)\) and \(w_2(z_1, \ldots, z_n)\), where each \(z\) is one of the \(x\)’s. Since \(w_1(x_1, \ldots, x_n)\) and \(w_2(x_1, \ldots, x_n)\) are scalars, they are linearly dependent, so that there exist scalars \(\alpha_1\) and \(\alpha_2\), not both zero, such that \(\alpha_1 w_1(x_1, \ldots, x_n) + \alpha_2 w_2(x_1, \ldots, x_n) = 0\). Since each term \(w_i(z_1, \ldots, z_n)\) is either \(0\) (when two of the \(z\)’s coincide) or \((\operatorname{sgn} \pi) w_i(x_1, \ldots, x_n)\) for a permutation \(\pi\) that is the same for \(i = 1\) and \(i = 2\), it follows that \(\alpha_1 w_1(y_1, \ldots, y_n) + \alpha_2 w_2(y_1, \ldots, y_n)\) is a linear combination of multiples of \(\alpha_1 w_1(x_1, \ldots, x_n) + \alpha_2 w_2(x_1, \ldots, x_n)\), and hence vanishes; that is, \(\alpha_1 w_1 + \alpha_2 w_2 = 0\), as asserted. ◻
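For \(n = 2\) the theorem can be seen quite concretely: every alternating bilinear form on a 2-dimensional coordinate space is a scalar multiple of the determinant form. The sketch below (Python; the coefficients 5 and \(-3\) are arbitrary illustrative choices) takes \(\alpha_1\) and \(\alpha_2\) from the values on a basis, much as in the proof, and verifies the relation \(\alpha_1 w_1 + \alpha_2 w_2 = 0\).

```python
import random

# Two alternating bilinear forms on coordinate 2-space; the coefficients 5 and -3
# are arbitrary illustrative choices.  Each form is a multiple of the determinant form.
w1 = lambda x, y: 5 * (x[0] * y[1] - x[1] * y[0])
w2 = lambda x, y: -3 * (x[0] * y[1] - x[1] * y[0])

# Take alpha1 and alpha2 (here not both zero) from the values on a basis {e1, e2}.
e1, e2 = (1, 0), (0, 1)
alpha1, alpha2 = w2(e1, e2), -w1(e1, e2)

for _ in range(100):
    x = (random.randint(-9, 9), random.randint(-9, 9))
    y = (random.randint(-9, 9), random.randint(-9, 9))
    assert alpha1 * w1(x, y) + alpha2 * w2(x, y) == 0   # alpha1 w1 + alpha2 w2 = 0
```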