User:Jim.belk/Draft:Rank (linear algebra)

In linear algebra, the rank of a matrix is the maximum number of linearly independent rows (or columns) that can be chosen from the matrix. If we row reduce the matrix, the rank is equal to the number of pivots in the resulting row echelon form. Geometrically, the rank of a matrix is the dimension of the image of the associated linear transformation.

Definition

The rank of a matrix is the maximum number of linearly independent rows that can be chosen from the matrix:

  The matrix $\begin{pmatrix} 0 & 0 & 0 \end{pmatrix}$ has rank zero because the single vector (0, 0, 0) is considered linearly dependent.
  The matrix $\begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 3 & 6 & 9 \end{pmatrix}$ has rank one because any two of its rows are linearly dependent.
  The matrix $\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 1 & 0 \end{pmatrix}$ has rank two. Any two rows of this matrix are linearly independent, but the three rows r1, r2, r3 satisfy the equation r1 + r2 – r3 = 0.
  The matrix $\begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 0 & 0 & 1 \end{pmatrix}$ also has rank two. The first and third rows are linearly independent (as are the second and third), but the first two rows satisfy the equation 2r1 – r2 = 0.
  The matrix $\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ has rank three; its three rows are linearly independent.

The rank of a matrix is the same as the dimension of the row space (the subspace of R^n spanned by the rows of the matrix).

Sometimes the quantity above is called the row rank of the matrix, and the column rank is defined to be the maximum number of linearly independent columns that can be chosen from the matrix (i.e. the dimension of the column space). It is an important theorem of linear algebra that the row rank and column rank of a matrix are equal.

The column space of a matrix is the same as the range (or image) of the associated linear transformation, so the rank is the same as the dimension of the range.
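As a quick numerical illustration of this fact, the following minimal sketch (assuming NumPy is available; the 3 × 3 matrix is an arbitrary choice whose third row is the sum of the first two) computes the same rank from a matrix and from its transpose:

```python
import numpy as np

# Arbitrary example matrix: the third row is the sum of the first two, so the rank is 2.
A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 2]])

# The rank of A equals the rank of its transpose, reflecting row rank = column rank.
print(np.linalg.matrix_rank(A))    # 2
print(np.linalg.matrix_rank(A.T))  # 2
```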

Alternative descriptions

Rank as a dimension

The span of the row vectors of a matrix A is called the row space of A, and the span of the column vectors is the column space of A. The row and column spaces of A always have the same dimension, and this is equal to the rank of A. This definition of rank is essentially equivalent to the one given above.

The column space of A is the same as the image of the corresponding linear transformation. The dimension of this image is equal to the rank of A.

Submatrices

A submatrix of a matrix A is any matrix formed by deleting certain rows and columns from A. For example, deleting the second row and the third column of the 3 × 3 matrix below leaves the 2 × 2 submatrix on the right:

$\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & 2 \\ 7 & 8 \end{pmatrix}$

The rank of A is greater than or equal to the rank of any submatrix of A.

A square submatrix is nonsingular if it has an inverse (i.e. if it has nonzero determinant). The rank of A is equal to the size of the largest nonsingular square submatrix of A. For example, the matrix

$\begin{pmatrix} 1 & 2 & 1 \\ -2 & -3 & 1 \\ 3 & 5 & 0 \end{pmatrix}$

has rank two because the 2 × 2 submatrix $\begin{pmatrix} 1 & 2 \\ -2 & -3 \end{pmatrix}$ is nonsingular (its determinant is 1), while the matrix itself has determinant zero.

The determinant of a k × k submatrix of A is sometimes called a minor of A. Using this terminology, the rank of A is the largest k for which A has a nonzero k × k minor.
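This characterization can be turned into a brute-force computation: search all square submatrices for the largest one with a nonzero determinant. The sketch below (using NumPy; the function name rank_via_minors and the tolerance are illustrative choices, and the approach is only practical for small matrices) applies it to the rank-two matrix from the example above:

```python
import numpy as np
from itertools import combinations

def rank_via_minors(A, tol=1e-12):
    """Return the largest k for which A has a k x k submatrix with nonzero determinant."""
    m, n = A.shape
    for k in range(min(m, n), 0, -1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                # A[np.ix_(rows, cols)] is the submatrix keeping these rows and columns.
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    return k
    return 0

A = np.array([[ 1.0,  2.0, 1.0],
              [-2.0, -3.0, 1.0],
              [ 3.0,  5.0, 0.0]])
print(rank_via_minors(A))        # 2
print(np.linalg.matrix_rank(A))  # 2, agrees
```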

Rank-nullity theorem

The nullity of a matrix A is the dimension of its null space. Because the row space and null space of A are orthogonal complements, the rank and nullity are related by the equation

rank(A) + nullity(A) = n

where n is the number of columns of A. This is known as the rank-nullity theorem.
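A quick numerical check of this relation, assuming NumPy and SciPy are available (scipy.linalg.null_space returns an orthonormal basis of the null space); the 3 × 4 example matrix is an arbitrary choice whose third row is the sum of the first two:

```python
import numpy as np
from scipy.linalg import null_space

# rank = 2 and n = 4 columns, so the nullity should be 4 - 2 = 2.
A = np.array([[1.0, 0.0, 2.0, 3.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 1.0, 3.0, 4.0]])

rank = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]  # number of basis vectors of the null space

print(rank, nullity, rank + nullity == A.shape[1])  # 2 2 True
```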

Alternative definitions

The maximal number of linearly independent columns of the m-by-n matrix A with entries in the field F is equal to the dimension of the column space of A (the column space being the subspace of F^m generated by the columns of A). Since the column rank and the row rank are the same, we can also define the rank of A as the dimension of the row space of A.

If one considers the matrix A as a linear map

f : F^n → F^m

with the rule

f(x) = Ax

then the rank of A can also be defined as the dimension of the image of f (see linear map for a discussion of image and kernel). This definition has the advantage that it can be applied to any linear map without the need for a specific matrix. The rank can also be defined as n minus the dimension of the kernel of f; the rank-nullity theorem states that this is the same as the dimension of the image of f.

Another equivalent definition of the rank of a matrix is the order of the greatest non-vanishing minor in the matrix.

Properties

We assume that A is an m-by-n matrix over the field F and describes a linear map f as above.

  • only the zero matrix has rank 0
  • the rank of A is at most min(m,n)
  • f is injective if and only if A has rank n (in this case, we say that A has full column rank).
  • f is surjective if and only if A has rank m (in this case, we say that A has full row rank).
  • If A is a square matrix (i.e., m = n), then A is invertible if and only if A has rank n (that is, A has full rank).
  • If B is any n-by-k matrix, then the rank of AB is at most the minimum of the rank of A and the rank of B.
As an example of the "<" case, consider the product
$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$
Both factors have rank 1, but the product has rank 0 (see also the numerical check after this list).
  • If B is an n-by-k matrix with rank n, then AB has the same rank as A.
  • If C is an l-by-m matrix with rank m, then CA has the same rank as A.
  • The rank of A is equal to r if and only if there exist an invertible m-by-m matrix X and an invertible n-by-n matrix Y such that
$XAY = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}$
where I_r denotes the r-by-r identity matrix and the 0's denote zero blocks of the appropriate sizes.
  • The rank of a matrix plus the nullity of the matrix equals the number of columns of the matrix (this is the "rank theorem" or the "rank-nullity theorem").
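The product-rank inequality above can be checked numerically; the sketch below (assuming NumPy, and reusing the two rank-1 matrices from the example in the list, plus a few arbitrary random pairs) verifies rank(AB) ≤ min(rank(A), rank(B)):

```python
import numpy as np

rank = np.linalg.matrix_rank

# The two rank-1 matrices from the example above: their product is the zero matrix.
A = np.array([[0, 1],
              [0, 0]])
B = np.array([[0, 1],
              [0, 0]])
print(rank(A), rank(B), rank(A @ B))  # 1 1 0

# The inequality rank(XY) <= min(rank(X), rank(Y)) holds for arbitrary pairs.
rng = np.random.default_rng(0)
for _ in range(3):
    X = rng.integers(-2, 3, size=(4, 3))
    Y = rng.integers(-2, 3, size=(3, 5))
    assert rank(X @ Y) <= min(rank(X), rank(Y))
```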

Computation

The easiest way to compute the rank of a matrix A is to use Gaussian elimination. The row echelon form of A produced by the elimination has the same rank as A, and its rank can be read off as the number of non-zero rows.

Consider for example the 4-by-4 matrix

$A = \begin{pmatrix} 1 & 2 & 0 & 1 \\ 2 & 4 & 1 & 3 \\ 3 & 6 & 1 & 4 \\ 0 & 0 & 1 & 1 \end{pmatrix}$

We see that the second column is twice the first column, and that the fourth column equals the sum of the first and the third. The first and the third columns are linearly independent, so the rank of A is two. This can be confirmed with the Gauss algorithm, which produces the following row echelon form of A:

$\begin{pmatrix} 1 & 2 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$

which has two non-zero rows.
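The elimination can also be carried out in a few lines of code. The sketch below (assuming NumPy; rank_by_elimination is an illustrative name, and the fixed tolerance is a simplification) counts the pivots found while reducing the matrix A above and agrees with numpy.linalg.matrix_rank:

```python
import numpy as np

def rank_by_elimination(A, tol=1e-12):
    """Count pivots found by Gaussian elimination with partial pivoting."""
    R = np.array(A, dtype=float)
    m, n = R.shape
    rank = 0
    row = 0
    for col in range(n):
        if row >= m:
            break
        # Partial pivoting: pick the entry of largest absolute value in this column.
        pivot = row + np.argmax(np.abs(R[row:, col]))
        if abs(R[pivot, col]) <= tol:
            continue  # no usable pivot in this column
        R[[row, pivot]] = R[[pivot, row]]  # swap the pivot row into place
        # Eliminate the entries below the pivot.
        R[row + 1:] -= np.outer(R[row + 1:, col] / R[row, col], R[row])
        row += 1
        rank += 1
    return rank

# The 4-by-4 example matrix from above.
A = np.array([[1, 2, 0, 1],
              [2, 4, 1, 3],
              [3, 6, 1, 4],
              [0, 0, 1, 1]])
print(rank_by_elimination(A))    # 2
print(np.linalg.matrix_rank(A))  # 2
```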

When applied to floating point computations on computers, basic Gaussian elimination (LU decomposition) can be unreliable, and a rank revealing decomposition should be used instead. An effective alternative is the singular value decomposition (SVD), but there are other less expensive choices, such as QR decomposition with pivoting, which are still more numerically robust than Gaussian elimination. Numerical determination of rank requires a criterion for deciding when a value, such as a singular value from the SVD, should be treated as zero, a practical choice which depends on both the matrix and the application.
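The sketch below (assuming NumPy; the perturbation size 1e-10 is an arbitrary illustrative choice) shows why the threshold matters: the matrix is an exactly rank-2 matrix plus a tiny perturbation, and the reported rank depends on which singular values are treated as zero:

```python
import numpy as np

# Start from a matrix of exact rank 2 (its first two rows are equal),
# then perturb the second row by a vector of size about 1e-10.
A = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, 3.0],
              [0.0, 1.0, 0.0]])
A[1] += 1e-10 * np.array([0.3, -0.7, 0.5])

s = np.linalg.svd(A, compute_uv=False)
print(s)  # the smallest singular value is tiny (around 1e-11) but not exactly zero

print(np.linalg.matrix_rank(A))            # 3 with NumPy's default tolerance
print(np.linalg.matrix_rank(A, tol=1e-8))  # 2 with an application-chosen tolerance
```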

Applications

One useful application of calculating the rank of a matrix is the computation of the number of solutions of a system of linear equations. The system has at least one solution if and only if the rank of the coefficient matrix equals the rank of the augmented matrix; otherwise the system is inconsistent. When the two ranks are equal, the system has exactly one solution if the rank equals the number of variables; if the rank is less than the number of variables, the general solution has k free parameters, where k is the difference between the number of variables and the rank.
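These conditions are easy to check numerically. A minimal sketch assuming NumPy (the helper classify_system and the example systems are illustrative choices):

```python
import numpy as np

def classify_system(A, b):
    """Classify the linear system Ax = b by comparing the ranks of the
    coefficient matrix and the augmented matrix."""
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    n = A.shape[1]  # number of variables
    if r_aug > r:
        return "inconsistent (no solutions)"
    if r == n:
        return "exactly one solution"
    return f"infinitely many solutions, {n - r} free parameter(s)"

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(classify_system(A, np.array([3.0, 6.0])))  # infinitely many solutions, 1 free parameter(s)
print(classify_system(A, np.array([3.0, 7.0])))  # inconsistent (no solutions)
```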

In control theory, the rank of a matrix can be used to determine whether a linear system is controllable, or observable.
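For example, for a continuous-time system x′ = Ax + Bu, the Kalman rank condition states that the system is controllable exactly when the controllability matrix [B, AB, …, A^(n−1)B] has rank n. A minimal NumPy sketch (the double-integrator matrices A and B below are an illustrative choice):

```python
import numpy as np

def is_controllable(A, B):
    """Kalman rank test: x' = Ax + Bu is controllable iff
    the controllability matrix [B, AB, ..., A^(n-1) B] has full rank n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

# Double integrator: position/velocity state driven by a single force input.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
print(is_controllable(A, B))  # True
```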

Generalization

There are various generalizations of the concept of rank to matrices over arbitrary rings. In those generalizations, the column rank, row rank, dimension of the column space, and dimension of the row space of a matrix may differ from one another, or may not exist.

There is a notion of rank for smooth maps between smooth manifolds. It is equal to the linear rank of the derivative.
