Now that we have described the spaces we shall work with, we must specify the relations among the elements of those spaces that will be of interest to us.
We begin with a few words about the summation notation. If corresponding to each of a set of indices \(i\) there is given a vector \(x_{i}\), and if it is not necessary or not convenient to specify the set of indices exactly, we shall simply speak of a set \(\{x_{i}\}\) of vectors. (We admit the possibility that the same vector corresponds to two distinct indices. In all honesty, therefore, it should be stated that what is important is not which vectors appear in \(\{x_{i}\}\), but how they appear.) If the index-set under consideration is finite, we shall denote the sum of the corresponding vectors by \(\sum_{i} x_{i}\) (or, when desirable, by a more explicit symbol such as \(\sum_{i=1}^{n} x_{i}\)). In order to avoid frequent and fussy case distinctions, it is a good idea to admit into the general theory sums such as \(\sum_{i} x_{i}\) even when there are no indices \(i\) to be summed over, or, more precisely, even when the index-set under consideration is empty. (In that case, of course, there are no vectors to sum, or, more precisely, the set \(\{x_{i}\}\) is also empty.) The value of such an "empty sum" is defined, naturally enough, to be the vector \(0\).
Definition 1. A finite set \(\{x_{i}\}\) of vectors is linearly dependent if there exists a corresponding set \(\{\alpha_{i}\}\) of scalars, not all zero, such that \[\sum_{i} \alpha_{i} x_{i}=0.\]
If, on the other hand, \(\sum_{i} \alpha_{i} x_{i}=0\) implies that \(\alpha_{i}=0\) for each \(i\), the set \(\{x_{i}\}\) is linearly independent.
The wording of this definition is intended to cover the case of the empty set; the result in that case, though possibly paradoxical, dovetails very satisfactorily with the rest of the theory. The result is that the empty set of vectors is linearly independent. Indeed, if there are no indices \(i\), then it is not possible to pick out some of them and to assign to the selected ones a non-zero scalar so as to make a certain sum vanish. The trouble is not in avoiding the assignment of zero; it is in finding an index to which something can be assigned. Note that this argument shows that the empty set is not linearly dependent; for the reader not acquainted with arguing by "vacuous implication," the equivalence of the definition of linear independence with the straightforward negation of the definition of linear dependence needs a little additional intuitive justification. The easiest way to feel comfortable about the assertion "\(\sum_{i} \alpha_{i} x_{i}=0\) implies that \(\alpha_{i}=0\) for each \(i\)" in case there are no indices \(i\), is to rephrase it this way: "if \(\sum_{i} \alpha_{i} x_{i}=0\), then there is no index \(i\) for which \(\alpha_{i} \neq 0\)." This version is obviously true if there is no index \(i\) at all.
Linear dependence and independence are properties of sets of vectors; it is customary, however, to apply the adjectives to vectors themselves, and thus we shall sometimes say "a set of linearly independent vectors" instead of "a linearly independent set of vectors." It will be convenient also to speak of the linear dependence and independence of a not necessarily finite set, \(\mathcal{X}\), of vectors. We shall say that \(\mathcal{X}\) is linearly independent if every finite subset of \(\mathcal{X}\) is such; otherwise \(\mathcal{X}\) is linearly dependent.
To gain insight into the meaning of linear dependence, let us study the examples of vector spaces that we already have.
Example 1. If \(x\) and \(y\) are any two vectors in \(\mathbb{C}^{1}\), then \(x\) and \(y\) form a linearly dependent set. If \(x=y=0\), this is trivial; if not, then we have, for example, the relation \(y x+(-x) y=0\). Since it is clear that every set containing a linearly dependent subset is itself linearly dependent, this shows that in \(\mathbb{C}^{1}\) every set containing more than one element is a linearly dependent set.
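To make the relation above concrete, here is a quick check with numbers chosen purely for illustration: take \(x = 1+i\) and \(y = 2\). Then \[y x + (-x) y = 2(1+i) + \bigl(-(1+i)\bigr)\cdot 2 = (2+2i) - (2+2i) = 0,\] and the coefficients \(y = 2\) and \(-x = -(1+i)\) are not both zero, so the set \(\{x, y\}\) is linearly dependent.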
Example 2. More interesting is the situation in the space \(\mathcal{P}\). The vectors \(x\), \(y\), and \(z\), defined by \begin{align} x(t) &= 1-t, \\ y(t) &= t(1-t), \\ z(t) &= 1-t^{2}, \end{align} are, for example, linearly dependent, since \(x+y-z=0\). However, the infinite set of vectors \(x_{0}, x_{1}, x_{2}, \ldots\), defined by \[x_{0}(t)=1, \quad x_{1}(t)=t, \quad x_{2}(t)=t^{2}, \quad \ldots,\] is a linearly independent set, for if we had any relation of the form \[\alpha_{0} x_{0}+\alpha_{1} x_{1}+\cdots+\alpha_{n} x_{n}=0,\] then we should have a polynomial identity \[\alpha_{0}+\alpha_{1} t+\cdots+\alpha_{n} t^{n}=0,\] whence \[\alpha_{0}=\alpha_{1}=\cdots=\alpha_{n}=0.\]
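For readers who wish to see the first assertion verified explicitly, the cancellation is immediate: \[x(t)+y(t)-z(t) = (1-t) + t(1-t) - (1-t^{2}) = (1-t)(1+t) - (1-t^{2}) = 0\] for every \(t\), so that \(x+y-z\) is the zero polynomial, while the coefficients \(1\), \(1\), \(-1\) are not all zero.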
Example 3. As we mentioned before, the spaces \(\mathbb{C}^{n}\) are the prototype of what we want to study; let us examine, for example, the case \(n=3\). To those familiar with higher-dimensional geometry, the notion of linear dependence in this space (or, more properly speaking, in its real analogue \(\mathbb{R}^{3}\)) has a concrete geometric meaning, which we shall only mention. In geometrical language, two vectors are linearly dependent if and only if they are collinear with the origin, and three vectors are linearly dependent if and only if they are coplanar with the origin. (If one thinks of a vector not as a point in a space but as an arrow pointing from the origin to some given point, the preceding sentence should be modified by crossing out the phrase "with the origin" both times that it occurs.) We shall presently introduce the notion of linear manifolds (or vector subspaces) in a vector space, and, in that connection, we shall occasionally use the language suggested by such geometrical considerations.
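By way of a small illustration (the particular vectors are chosen here only for the sake of example), the vectors \(x = (1, 2, 3)\) and \(y = (2, 4, 6)\) in \(\mathbb{R}^{3}\) are collinear with the origin, and correspondingly \[2x - y = (2, 4, 6) - (2, 4, 6) = 0;\] the three vectors \((1, 0, 0)\), \((0, 1, 0)\), and \((1, 1, 0)\) all lie in the plane through the origin determined by the first two coordinate axes, and correspondingly \[(1, 0, 0) + (0, 1, 0) - (1, 1, 0) = 0.\] The three vectors \((1, 0, 0)\), \((0, 1, 0)\), and \((0, 0, 1)\), on the other hand, are not coplanar with the origin, and they form a linearly independent set.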