Elementary Matrix Operations and Systems of Linear Equations
Elementary matrix operations and elementary matrices
Let $A$ be an $m \times n$ matrix. Any one of the following three operations on the rows [columns] of $A$ is called an elementary row [column] operation:

- interchanging any two rows [columns] of $A$;
- multiplying any row [column] of $A$ by a nonzero scalar;
- adding any scalar multiple of a row [column] of $A$ to another row [column].
An $n \times n$ elementary matrix is a matrix obtained by performing an elementary operation on $I_n$. The elementary matrix is said to be of type 1, 2, or 3 according to whether the elementary operation performed on $I_n$ is a type 1, 2, or 3 operation, respectively.
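The definitions above can be checked numerically. The following sketch (the sizes, scalars, and row indices are illustrative, not from the text) builds one elementary matrix of each type by applying the corresponding row operation to the identity, then verifies that left-multiplication performs the same operation:

```python
import numpy as np

n = 3
I = np.eye(n)

# Type 1: interchange rows 0 and 2 of I.
E1 = I.copy()
E1[[0, 2]] = E1[[2, 0]]

# Type 2: multiply row 1 of I by the nonzero scalar 5.
E2 = I.copy()
E2[1] *= 5

# Type 3: add 4 times row 0 to row 2 of I.
E3 = I.copy()
E3[2] += 4 * E3[0]

A = np.arange(9.0).reshape(3, 3)

# Left-multiplying A by each E performs the same row operation on A.
assert np.array_equal(E1 @ A, A[[2, 1, 0]])
assert np.allclose(E2 @ A, np.vstack([A[0], 5 * A[1], A[2]]))
assert np.allclose(E3 @ A, np.vstack([A[0], A[1], A[2] + 4 * A[0]]))
```

Right-multiplying by an elementary matrix performs the analogous column operation instead.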
Theorems
- Let $A \in M_{m \times n}(F)$, and suppose that $B$ is obtained from $A$ by performing an elementary row [column] operation. Then there exists an $m \times m$ [$n \times n$] elementary matrix $E$ such that $B = EA$ [$B = AE$]. In fact, $E$ is obtained from $I_m$ [$I_n$] by performing the same elementary row [column] operation as that which was performed on $A$ to obtain $B$.
- Elementary matrices are invertible, and the inverse of an elementary matrix is an elementary matrix of the same type.
The rank of a matrix and matrix inverses
If $A \in M_{m \times n}(F)$, we define the rank of $A$, denoted $\operatorname{rank}(A)$, to be the rank of the linear transformation $L_A : F^n \to F^m$.
Let $A$ and $B$ be $m \times n$ and $m \times p$ matrices, respectively. By the augmented matrix $(A \mid B)$, we mean the $m \times (n + p)$ matrix $(A\;\;B)$, that is, the matrix whose first $n$ columns are the columns of $A$, and whose last $p$ columns are the columns of $B$.
Theorems
- Let $T : V \to W$ be a linear transformation between finite-dimensional vector spaces, and let $\beta$ and $\gamma$ be ordered bases for $V$ and $W$, respectively. Then $\operatorname{rank}(T) = \operatorname{rank}\left([T]_\beta^\gamma\right)$.
- Let $A$ be an $m \times n$ matrix. If $P$ and $Q$ are invertible $m \times m$ and $n \times n$ matrices, respectively, then

  (a) $\operatorname{rank}(AQ) = \operatorname{rank}(A)$,

  (b) $\operatorname{rank}(PA) = \operatorname{rank}(A)$, and therefore,

  (c) $\operatorname{rank}(PAQ) = \operatorname{rank}(A)$.
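The invariance of rank under multiplication by invertible matrices can be illustrated numerically. In this sketch the specific matrices are illustrative: $A$ has rank 1, while $P$ and $Q$ are invertible (triangular with nonzero diagonal):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1: the second row is twice the first
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # invertible 2x2
Q = np.array([[2.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])   # invertible 3x3

r = np.linalg.matrix_rank
assert r(A) == 1
# Multiplying by invertible matrices on either side preserves the rank.
assert r(P @ A) == r(A @ Q) == r(P @ A @ Q) == r(A)
```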
- The rank of any matrix equals the maximum number of its linearly independent columns; that is, the rank of a matrix is the dimension of the subspace generated by its columns.
- Let $A$ be an $m \times n$ matrix of rank $r$. Then $r \le m$, $r \le n$, and, by means of a finite number of elementary row and column operations, $A$ can be transformed into the matrix

  $$D = \begin{pmatrix} I_r & O_1 \\ O_2 & O_3 \end{pmatrix},$$

  where $O_1$, $O_2$, and $O_3$ are zero matrices. Thus $D_{ii} = 1$ for $i \le r$ and $D_{ij} = 0$ otherwise.
- Let $A$ be an $m \times n$ matrix of rank $r$. Then there exist invertible matrices $B$ and $C$ of sizes $m \times m$ and $n \times n$, respectively, such that $D = BAC$, where

  $$D = \begin{pmatrix} I_r & O_1 \\ O_2 & O_3 \end{pmatrix}$$

  is the $m \times n$ matrix in which $O_1$, $O_2$, and $O_3$ are zero matrices.
- (a) $\operatorname{rank}(A^t) = \operatorname{rank}(A)$.

  (b) The rank of any matrix equals the maximum number of its linearly independent rows; that is, the rank of a matrix is the dimension of the subspace generated by its rows.

  (c) The rows and columns of any matrix generate subspaces of the same dimension, numerically equal to the rank of the matrix.
- Let $T : V \to W$ and $U : W \to Z$ be linear transformations on finite-dimensional vector spaces $V$, $W$, and $Z$, and let $A$ and $B$ be matrices such that the product $AB$ is defined. Then

  (a) $\operatorname{rank}(UT) \le \operatorname{rank}(U)$.

  (b) $\operatorname{rank}(UT) \le \operatorname{rank}(T)$.

  (c) $\operatorname{rank}(AB) \le \operatorname{rank}(A)$.

  (d) $\operatorname{rank}(AB) \le \operatorname{rank}(B)$.
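The bound $\operatorname{rank}(AB) \le \min(\operatorname{rank}(A), \operatorname{rank}(B))$ can be strict, as this small illustrative example shows: two rank-1 matrices whose product is the zero matrix.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])   # rank 1
B = np.array([[0.0, 0.0],
              [0.0, 1.0]])   # rank 1

r = np.linalg.matrix_rank
assert r(A) == r(B) == 1
assert r(A @ B) == 0                       # AB is the zero matrix
assert r(A @ B) <= min(r(A), r(B))         # the theorem's inequality
```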
Systems of linear equations
The system of equations

$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\;\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m,
\end{aligned} \tag{S}$$

where $a_{ij}$ and $b_i$ ($1 \le i \le m$ and $1 \le j \le n$) are scalars in a field $F$ and $x_1, x_2, \ldots, x_n$ are $n$ variables taking values in $F$, is called a system of $m$ linear equations in $n$ unknowns over the field $F$.
The $m \times n$ matrix

$$A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}$$

is called the coefficient matrix of the system $(S)$.
If we let

$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
\quad \text{and} \quad
b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix},$$

then the system $(S)$ may be rewritten as a single matrix equation $Ax = b$.
To exploit the results that we have developed, we often consider a system of linear equations as a single matrix equation. A solution to the system $Ax = b$ is an $n$-tuple

$$s = \begin{pmatrix} s_1 \\ s_2 \\ \vdots \\ s_n \end{pmatrix} \in F^n$$

such that $As = b$. The set of all solutions to the system is called the solution set of the system. System $Ax = b$ is called consistent if its solution set is nonempty; otherwise it is called inconsistent.
A system $Ax = b$ of $m$ linear equations in $n$ unknowns is said to be homogeneous if $b = 0$. Otherwise the system is said to be nonhomogeneous.
Any homogeneous system has at least one solution, namely, the zero vector.
Theorems
- Let $K$ be the solution set of a system of linear equations $Ax = b$, and let $K_H$ be the solution set of the corresponding homogeneous system $Ax = 0$. Then for any solution $s$ to $Ax = b$,

  $$K = \{s\} + K_H = \{\, s + k : k \in K_H \,\}.$$
- Let $Ax = 0$ be a homogeneous system of $m$ linear equations in $n$ unknowns over a field $F$. Let $K$ denote the set of all solutions to $Ax = 0$. Then $K = N(L_A)$; hence $K$ is a subspace of $F^n$ of dimension $n - \operatorname{rank}(L_A) = n - \operatorname{rank}(A)$.
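The dimension count $\dim K = n - \operatorname{rank}(A)$ can be checked numerically. In this illustrative sketch, an orthonormal basis for the null space is read off from the trailing rows of $V^t$ in the singular value decomposition (a standard numerical device, not part of the text's development):

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0, 1.0]])   # 2 x 4, rows independent, so rank 2
n = A.shape[1]
rank = np.linalg.matrix_rank(A)
dim_K = n - rank                       # dimension of the solution space of Ax = 0
assert dim_K == 2

# The last (n - rank) rows of Vt span the null space of A.
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[rank:]
assert null_basis.shape == (dim_K, n)
assert np.allclose(A @ null_basis.T, 0.0)   # every basis vector solves Ax = 0
```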
- Let $Ax = b$ be a system of $n$ linear equations in $n$ unknowns. If $A$ is invertible, then the system has exactly one solution, namely $A^{-1}b$. Conversely, if the system has exactly one solution, then $A$ is invertible.
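A small numerical illustration of the invertible case (the matrix and right-hand side are arbitrary examples): the unique solution is $A^{-1}b$, though in practice one solves the system directly rather than forming the inverse.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])     # invertible: det(A) = 5 != 0
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)                     # the unique solution
assert np.allclose(x, np.linalg.inv(A) @ b)   # agrees with A^{-1} b
assert np.allclose(A @ x, b)                  # and satisfies the system
```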
- Let $Ax = b$ be a system of linear equations. Then the system is consistent if and only if $\operatorname{rank}(A) = \operatorname{rank}(A \mid b)$.
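The rank criterion for consistency is easy to test numerically. In this illustrative sketch, one right-hand side lies in the column space of a rank-1 matrix and the other does not:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])            # rank 1
b_good = np.array([[3.0], [6.0]])     # 3 * (first column): in the column space
b_bad  = np.array([[3.0], [7.0]])     # not a multiple of the first column

r = np.linalg.matrix_rank
assert r(np.hstack([A, b_good])) == r(A)       # ranks agree: consistent
assert r(np.hstack([A, b_bad])) == r(A) + 1    # rank jumps: inconsistent
```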
Gaussian elimination
A matrix is said to be in reduced row echelon form if the following three conditions are satisfied:
(a) Any row containing a nonzero entry precedes any row in which all the entries are zero (if any).
(b) The first nonzero entry in each row is the only nonzero entry in its column.
(c) The first nonzero entry in each row is $1$ and it occurs in a column to the right of the first nonzero entry in the preceding row.
The reduced row echelon form of a matrix is unique.
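The reduction itself uses only the three elementary row operations. The following from-scratch sketch (not a library routine; the tolerance and pivoting choice are implementation details added here for numerical stability) computes the reduced row echelon form:

```python
import numpy as np

def rref(M, tol=1e-12):
    """Reduce M to reduced row echelon form using elementary row operations."""
    A = M.astype(float).copy()
    rows, cols = A.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # Pick the largest entry in this column as pivot (for stability).
        p = pivot_row + np.argmax(np.abs(A[pivot_row:, col]))
        if abs(A[p, col]) < tol:
            continue                              # no pivot in this column
        A[[pivot_row, p]] = A[[p, pivot_row]]     # type 1: interchange rows
        A[pivot_row] /= A[pivot_row, col]         # type 2: make leading entry 1
        for r in range(rows):                     # type 3: clear the column
            if r != pivot_row:
                A[r] -= A[r, col] * A[pivot_row]
        pivot_row += 1
    return A

M = np.array([[ 2.0,  4.0, -2.0],
              [ 4.0,  9.0, -3.0],
              [-2.0, -3.0,  7.0]])
R = rref(M)
assert np.allclose(R, np.eye(3))   # M is invertible, so its RREF is I
```

For a rank-deficient matrix the zero rows appear at the bottom, as conditions (a)-(c) require.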
Let $Ax = b$ be a system of $r$ nonzero equations in $n$ unknowns. Suppose that $\operatorname{rank}(A) = \operatorname{rank}(A \mid b)$ and that $(A \mid b)$ is in reduced row echelon form. Then

(a) $\operatorname{rank}(A) = r$.

(b) If the general solution obtained by the procedure above is of the form

$$s = s_0 + t_1 u_1 + t_2 u_2 + \cdots + t_{n-r} u_{n-r},$$

then $\{u_1, u_2, \ldots, u_{n-r}\}$ is a basis for the solution set of the corresponding homogeneous system, and $s_0$ is a solution to the original system.
LU decomposition
Let $A$ be an $n \times n$ matrix. Suppose that $A$ admits an LU decomposition of the form

$$A = LU,$$

where

- $L$ is an $n \times n$ lower triangular matrix with $1$'s on the diagonal, and
- $U$ is an $n \times n$ upper triangular matrix.

We consider the system

$$Ax = b.$$

Because $A = LU$, the system becomes

$$LUx = b.$$

Let

$$y = Ux.$$
Then the solution process consists of the following two steps.
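The two triangular solves can be sketched directly. This example assumes a known LU pair (the matrices and right-hand side are illustrative): forward substitution handles $Ly = b$, then back substitution handles $Ux = y$.

```python
import numpy as np

def forward_sub(L, b):
    """Solve Ly = b for lower triangular L with 1's on the diagonal."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    return y

def back_sub(U, y):
    """Solve Ux = y for upper triangular U with nonzero diagonal."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

L = np.array([[ 1.0, 0.0, 0.0],
              [ 2.0, 1.0, 0.0],
              [-1.0, 1.0, 1.0]])
U = np.array([[2.0, 4.0, -2.0],
              [0.0, 1.0,  1.0],
              [0.0, 0.0,  4.0]])
A = L @ U
b = np.array([2.0, 3.0, 7.0])

y = forward_sub(L, b)   # step 1: solve Ly = b
x = back_sub(U, y)      # step 2: solve Ux = y
assert np.allclose(A @ x, b)
```

Each triangular solve costs $O(n^2)$ operations, which is why the decomposition pays off when the same $A$ is used with many right-hand sides.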