A possible relation between subspaces \(\mathcal{M}\) of a vector space and linear transformations \(A\) on that space is invariance. We say that \(\mathcal{M}\) is invariant under \(A\) if \(x\) in \(\mathcal{M}\) implies that \(Ax\) is in \(\mathcal{M}\). (Observe that the implication is required in one direction only; we do not assume that every \(y\) in \(\mathcal{M}\) can be written in the form \(y = Ax\) with \(x\) in \(\mathcal{M}\), and we do not even assume that \(Ax\) in \(\mathcal{M}\) implies \(x\) in \(\mathcal{M}\). Presently we shall see examples in which the conditions we did not assume definitely fail to hold.) Since a subspace of a vector space is itself a vector space, if we know that \(\mathcal{M}\) is invariant under \(A\), we may ignore the fact that \(A\) is defined outside \(\mathcal{M}\) and consider \(A\) as a linear transformation defined on the vector space \(\mathcal{M}\). Invariance is often considered for sets of linear transformations, as well as for a single one; \(\mathcal{M}\) is invariant under a set if it is invariant under each member of the set.
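A small numeric sketch (my own illustration, not from the text) makes the one-directional nature of the definition concrete. In \(\mathbb{R}^2\), take \(\mathcal{M}\) to be the span of \(e_1\) and let \(A\) be the nilpotent shift; then \(\mathcal{M}\) is invariant under \(A\), yet neither of the unassumed conditions holds:

```python
import numpy as np

# Illustration in R^2: M = span(e1), A the nilpotent shift.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

def in_M(v):
    # Membership in M = span(e1): the second coordinate vanishes.
    return np.isclose(v[1], 0.0)

# M is invariant: A maps every (a, 0) to (0, 0), which lies in M.
assert in_M(A @ e1)

# But A(M) = {0} is a proper subset of M, so e1, though in M, is not
# of the form A x with x in M; and A x in M does not imply x in M:
# A e2 = e1 lies in M, while e2 does not.
assert in_M(A @ e2) and not in_M(e2)
```

One matrix thus exhibits both failures that the definition deliberately does not rule out.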

What can be said about the matrix of a linear transformation \(A\) on an \(n\)-dimensional vector space \(\mathcal{V}\) if we know that some subspace \(\mathcal{M}\) (of dimension \(m\), say) is invariant under \(A\)? In other words: is there a clever way of selecting a basis \(\mathcal{X} = \{x_{1}, \ldots, x_{n}\}\) in \(\mathcal{V}\) so that \([A] = [A; \mathcal{X}]\) will have some particularly simple form? The answer is given by Theorem 2 of the section on the dimension of a subspace: we may choose \(\mathcal{X}\) so that \(x_{1}, \ldots, x_{m}\) are in \(\mathcal{M}\) and \(x_{m+1}, \ldots, x_{n}\) are not. Let us express \(Ax_{j}\) in terms of \(x_{1}, \ldots, x_{n}\). For \(m+1 \leq j \leq n\) there is not much we can say: \(Ax_{j} = \sum_{i} \alpha_{ij} x_{i}\). For \(1 \leq j \leq m\), however, \(x_{j}\) is in \(\mathcal{M}\), and therefore (since \(\mathcal{M}\) is invariant under \(A\)) \(Ax_{j}\) is in \(\mathcal{M}\). Consequently, in this case \(Ax_{j}\) is a linear combination of \(x_{1}, \ldots, x_{m}\) alone; the \(\alpha_{ij}\) with \(m+1 \leq i \leq n\) are zero. Hence the matrix \([A]\) of \(A\), in this coordinate system, will have the form \[[A] = \begin{bmatrix} [A_{1}] & [B_{0}] \\ [0] & [A_{2}] \end{bmatrix},\] where \([A_{1}]\) is the (\(m\)-rowed) matrix of \(A\) considered as a linear transformation on the space \(\mathcal{M}\) (with respect to the coordinate system \(\{x_{1}, \ldots, x_{m}\}\)), \([A_{2}]\) and \([B_{0}]\) are arrays of scalars (of size \((n-m)\) by \((n-m)\) and \(m\) by \((n-m)\) respectively), and \([0]\) denotes the rectangular (\((n-m)\) by \(m\)) array consisting of zeros only. (It is important to observe the unpleasant fact that \([B_{0}]\) need not be zero.)