The rank–nullity theorem is a theorem in linear algebra, which asserts:
- the number of columns of a matrix M is the sum of the rank of M and the nullity of M; and
- the dimension of the domain of a linear transformation f is the sum of the rank of f (the dimension of the image of f) and the nullity of f (the dimension of the kernel of f).
It follows that for linear transformations of vector spaces of equal finite dimension, either injectivity or surjectivity implies bijectivity.
Let T : V → W be a linear transformation between two vector spaces, where T's domain V is finite-dimensional. Then

Rank(T) + Nullity(T) = dim V,

where Rank(T) is the dimension of the image of T and Nullity(T) is the dimension of the kernel of T.
Linear maps can be represented with matrices. More precisely, an m × n matrix M represents a linear map f : F^n → F^m, where F is the underlying field. [5] So, the dimension of the domain of f is n, the number of columns of M, and the rank–nullity theorem for an m × n matrix M is

Rank(M) + Nullity(M) = n.
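The matrix form of the theorem can be checked numerically. The following is a minimal sketch; the particular 3 × 4 matrix and the SVD-based nullity computation are illustrative assumptions, not part of the theorem's statement:

```python
import numpy as np

# Illustrative 3 x 4 matrix (an assumption for this sketch): the second
# row is twice the first, so the rank drops below 3.
M = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],
              [0.0, 1.0, 1.0, 0.0]])

m, n = M.shape
rank = np.linalg.matrix_rank(M)

# Nullity = n minus the number of (numerically) nonzero singular values.
s = np.linalg.svd(M, compute_uv=False)
nullity = n - np.count_nonzero(s > 1e-10)

assert rank + nullity == n  # Rank(M) + Nullity(M) = n, the number of columns
```

Here the rank and nullity are both 2, and their sum matches n = 4, the number of columns.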
Here we provide two proofs. The first [2] operates in the general case, using linear maps. The second proof [6] looks at the homogeneous system Ax = 0, where A is an m × n matrix with rank r, and shows explicitly that there exists a set of n − r linearly independent solutions that span the null space of A.
While the theorem requires that the domain of the linear map be finite-dimensional, there is no such assumption on the codomain. This means that there are linear maps not given by matrices for which the theorem applies. Despite this, the first proof is not actually more general than the second: since the image of the linear map is finite-dimensional, we can represent the map from its domain to its image by a matrix, prove the theorem for that matrix, then compose with the inclusion of the image into the full codomain.
Let V and W be vector spaces over some field F, and let T : V → W be defined as in the statement of the theorem, with dim V = n.
As Ker T ⊆ V is a subspace, there exists a basis for it. Suppose dim Ker T = k and let K := {v_1, …, v_k} ⊆ Ker T be such a basis.
We may now, by the Steinitz exchange lemma, extend K with n − k linearly independent vectors w_1, …, w_{n−k} to form a full basis of V.
Let

S := {w_1, …, w_{n−k}} ⊆ V ∖ Ker T,

so that B := K ∪ S = {v_1, …, v_k, w_1, …, w_{n−k}} is a basis for V. From this, we know that

Im T = Span T(B) = Span {T(v_1), …, T(v_k), T(w_1), …, T(w_{n−k})} = Span {T(w_1), …, T(w_{n−k})} = Span T(S),

since T(v_i) = 0_W for each v_i ∈ K.
We now claim that T(S) is a basis for Im T. The above equality already states that T(S) is a generating set for Im T; it remains to be shown that it is also linearly independent to conclude that it is a basis.
Suppose T(S) is not linearly independent, and let

α_1 T(w_1) + ⋯ + α_{n−k} T(w_{n−k}) = 0_W

for some scalars α_j ∈ F.
Thus, owing to the linearity of T, it follows that

T(α_1 w_1 + ⋯ + α_{n−k} w_{n−k}) = 0_W,

so α_1 w_1 + ⋯ + α_{n−k} w_{n−k} ∈ Ker T = Span K. If some α_j were nonzero, this would express a nontrivial linear combination of the w_j as a linear combination of the v_i, contradicting the linear independence of B. Hence all α_j are equal to zero, T(S) is linearly independent, and it is therefore a basis for Im T.
To summarize, we have K, a basis for Ker T, and T(S), a basis for Im T.
Finally we may state that

Rank(T) + Nullity(T) = dim Im T + dim Ker T = |T(S)| + |K| = (n − k) + k = n = dim V.
This concludes our proof.
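The steps of this proof can be illustrated numerically for a concrete map. In the sketch below, the matrix A standing in for T, the SVD-based kernel computation, and the greedy basis extension are assumptions chosen for the example:

```python
import numpy as np

# A stands in for a hypothetical T : R^4 -> R^3 (an assumption for this
# sketch); its third row is the sum of the first two, so Rank(T) = 2.
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0, 1.0]])

# A basis K of Ker T: the rows of Vt beyond the rank span the null space.
_, s, Vt = np.linalg.svd(A)
K = Vt[np.count_nonzero(s > 1e-10):].T   # 4 x k matrix, k = dim Ker T
k = K.shape[1]

# Extend K to a basis B of R^4 by greedily adding standard basis vectors
# that raise the rank (the Steinitz exchange step of the proof).
B = K
for e in np.eye(4):
    if np.linalg.matrix_rank(np.column_stack([B, e])) > np.linalg.matrix_rank(B):
        B = np.column_stack([B, e])
S = B[:, k:]          # the n - k vectors added beyond the kernel basis

# T(S) is linearly independent and spans Im T: a basis of the image.
TS = A @ S
assert np.linalg.matrix_rank(TS) == TS.shape[1] == np.linalg.matrix_rank(A)
```

The final assertion checks exactly the claim of the proof: the images of the n − k added vectors form a basis of the image, so the rank and nullity sum to n = 4.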
Let A be an m × n matrix with r linearly independent columns (i.e. Rank(A) = r). We will show that:
(i) there exists a set of n − r linearly independent solutions to the homogeneous system Ax = 0, and
(ii) every other solution is a linear combination of these n − r solutions.
To do this, we will produce an n × (n − r) matrix X whose columns form a basis of the null space of A.
Without loss of generality, assume that the first r columns of A are linearly independent. So, we can write

A = [A_1  A_2],

where A_1 is an m × r matrix whose r columns are linearly independent and A_2 is an m × (n − r) matrix, each of whose n − r columns is a linear combination of the columns of A_1.
This means that A_2 = A_1 B for some r × (n − r) matrix B (see rank factorization) and, hence,

A = [A_1  A_1 B].
Let X be the n × (n − r) matrix obtained by stacking −B on top of the (n − r) × (n − r) identity matrix I_{n−r}. Then

AX = [A_1  A_1 B] X = A_1 (−B) + (A_1 B) I_{n−r} = −A_1 B + A_1 B = 0.
Therefore, each of the n − r columns of X is a particular solution of Ax = 0.
Furthermore, the n − r columns of X are linearly independent, because Xu = 0 implies u = 0 for u ∈ F^{n−r}: the bottom n − r entries of Xu are exactly I_{n−r} u = u, so Xu = 0 forces u = 0.
We next prove that any solution of Ax = 0 must be a linear combination of the columns of X.
For this, let u be any vector such that Au = 0, and write u as the stack of u_1 ∈ F^r on top of u_2 ∈ F^{n−r}. Since the columns of A_1 are linearly independent, A_1 x = 0 implies x = 0. Therefore,

Au = 0 ⟹ A_1 u_1 + A_1 B u_2 = 0 ⟹ A_1 (u_1 + B u_2) = 0 ⟹ u_1 + B u_2 = 0 ⟹ u_1 = −B u_2,

so that u is the stack of −B u_2 on top of u_2, that is,

u = X u_2.
This proves that any vector u that is a solution of Ax = 0 must be a linear combination of the n − r special solutions given by the columns of X. And we have already seen that the columns of X are linearly independent. Hence, the columns of X constitute a basis for the null space of A. Therefore, the nullity of A is n − r. Since r equals the rank of A, it follows that Rank(A) + Nullity(A) = n. This concludes our proof.
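The construction in this proof can be sketched on a small concrete example; the particular blocks A1 and B below are illustrative assumptions:

```python
import numpy as np

# Illustrative blocks (assumptions for this sketch): A1 has r = 2
# linearly independent columns; the remaining columns of A are A1 @ B.
A1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0]])                 # m x r
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])                  # r x (n - r)
A = np.column_stack([A1, A1 @ B])           # m x n, here 3 x 4, rank 2

n = A.shape[1]
r = np.linalg.matrix_rank(A)

# X stacks -B on top of the identity; its columns solve A x = 0.
X = np.vstack([-B, np.eye(n - r)])
assert np.allclose(A @ X, 0)
# The identity block makes the columns of X linearly independent,
# so they form a basis of the null space: nullity = n - r.
assert np.linalg.matrix_rank(X) == n - r
```

Note the role of the identity block: it guarantees independence of the columns of X regardless of B, which is exactly the argument used in the proof.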
When T : V → W is a linear transformation between two finite-dimensional vector spaces, with n = dim V and m = dim W (so T can be represented by an m × n matrix M), the rank–nullity theorem asserts that if T has rank r, then n − r is the dimension of the null space of M, which represents the kernel of T. In some texts, a third fundamental subspace associated to T is considered alongside its image and kernel: the cokernel of T is the quotient space W / Im(T), and its dimension is m − r. This dimension formula (which might also be rendered as dim Im(T) + dim Coker(T) = dim W) together with the rank–nullity theorem is sometimes called the fundamental theorem of linear algebra. [7] [8]
This theorem is a statement of the first isomorphism theorem of algebra for the case of vector spaces; it generalizes to the splitting lemma.
In more modern language, the theorem can also be phrased as saying that each short exact sequence of vector spaces splits. Explicitly, given that

0 → U → V → R → 0

is a short exact sequence of vector spaces, then U ⊕ R ≅ V, hence

dim U + dim R = dim V.

Here R plays the role of Im T and U that of Ker T.
In the finite-dimensional case, this formulation is susceptible to a generalization: if

0 → V_1 → V_2 → ⋯ → V_r → 0

is an exact sequence of finite-dimensional vector spaces, then the alternating sum of their dimensions vanishes:

dim V_1 − dim V_2 + dim V_3 − ⋯ + (−1)^{r+1} dim V_r = 0.
The rank–nullity theorem for finite-dimensional vector spaces may also be formulated in terms of the index of a linear map T : V → W, defined by

index T = dim Ker T − dim Coker T.

Intuitively, dim Ker T is the number of independent solutions x of the equation Tx = 0, and dim Coker T is the number of independent restrictions that have to be put on y to make Tx = y solvable. The rank–nullity theorem for finite-dimensional vector spaces is equivalent to the statement

index T = dim V − dim W.
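The index formula can be checked on a concrete example. The random 3 × 4 matrix below is an illustrative assumption (a generic matrix of that shape has full rank 3):

```python
import numpy as np

# A generic 3 x 4 matrix (an illustrative assumption) represents a
# surjective map T : R^4 -> R^3 of full rank 3.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))

r = np.linalg.matrix_rank(A)
dim_ker = A.shape[1] - r     # nullity, by the rank-nullity theorem
dim_coker = A.shape[0] - r   # dimension of the cokernel R^3 / Im T

index = dim_ker - dim_coker
# The index depends only on the dimensions of the two spaces:
assert index == A.shape[1] - A.shape[0]
```

Changing A (while keeping its shape) changes the kernel and cokernel individually, but not their difference, which is the point of the index formulation.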
We see that we can easily read off the index of the linear map T from the involved spaces, without any need to analyze T in detail. This effect also occurs in a much deeper result: the Atiyah–Singer index theorem states that the index of certain differential operators can be read off the geometry of the involved spaces.