Let V and W be vector spaces over a field F. A function T:V→W is called a linear transformation from V to W if, for all x,y∈V and c∈F, the following conditions hold:
(a) T(x+y)=T(x)+T(y);
(b) T(cx)=cT(x).
We define the null space (or kernel) of T, denoted N(T), to be the set of all vectors x∈V such that T(x)=0; that is, N(T)={x∈V:T(x)=0}.
We define the range (or image) of T, denoted R(T), to be the subset of W consisting of all images T(x) of vectors x∈V. That is,
R(T)={T(x):x∈V}.
Let V and W be vector spaces, and let T:V→W be linear. If N(T) and R(T) are finite-dimensional, then we define the nullity of T, denoted nullity(T), and the rank of T, denoted rank(T), to be the dimensions of N(T) and R(T), respectively.
Let V and W be vector spaces and T:V→W be linear. Then N(T) and R(T) are subspaces of V and W, respectively.
Let V and W be vector spaces, and let T:V→W be linear. If β={v1,v2,…,vn} is a basis for V, then
R(T)=span(T(β))=span({T(v1),T(v2),…,T(vn)}).
(Dimension Theorem) Let V and W be vector spaces, and let T:V→W be linear. If V is finite-dimensional, then
nullity(T)+rank(T)=dim(V).
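To make the count concrete, here is a small numerical sketch (not from the text) that checks the dimension theorem for a left-multiplication map on R^4; the matrix A below is a hypothetical example chosen so that its rows are dependent.

```python
import numpy as np

# Check nullity(T) + rank(T) = dim(V) for T = L_A : R^4 -> R^3.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])      # third row = first row + second row

n = A.shape[1]                        # dim(V) = 4
rank = np.linalg.matrix_rank(A)       # dim R(L_A)
nullity = n - rank                    # dim N(L_A), as the theorem forces

# Cross-check the nullity directly from the singular values of A.
s = np.linalg.svd(A, compute_uv=False)
nullity_direct = n - int(np.sum(s > 1e-10))

assert rank + nullity == n and nullity == nullity_direct
print(rank, nullity)                  # 2 2
```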
Let V and W be vector spaces, and let T:V→W be linear. Then T is one-to-one if and only if N(T)={0}.
Let V and W be vector spaces of equal (finite) dimension, and let T:V→W be linear. Then the following are equivalent:
T is one-to-one.
T is onto.
rank(T)=dim(V).
Let V and W be vector spaces over F, and suppose that {v1,v2,…,vn} is a basis for V. For w1,w2,…,wn in W, there exists exactly one linear transformation T:V→W such that T(vi)=wi for i=1,2,…,n.
Corollary: Let V and W be vector spaces, and suppose that V has a finite basis {v1,v2,…,vn}. If U,T:V→W are linear and U(vi)=T(vi) for i=1,2,…,n, then U=T.
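As an illustration of the theorem (a sketch with hypothetical vectors, not taken from the text): if {v1,v2} is a basis of R^2 and w1,w2 are prescribed images in R^3, the unique linear T with T(vi)=wi has standard matrix W V^(-1), where V and W have the vi and wi as columns.

```python
import numpy as np

# Hypothetical basis {v1, v2} of R^2 and prescribed images w1, w2 in R^3.
v1, v2 = np.array([1., 1.]), np.array([1., -1.])
w1, w2 = np.array([2., 0., 1.]), np.array([0., 4., 3.])

V = np.column_stack([v1, v2])   # invertible, since {v1, v2} is a basis
W = np.column_stack([w1, w2])

# The unique linear T with T(vi) = wi has standard matrix M = W V^{-1}:
# M vi = W V^{-1} vi = W ei = wi.
M = W @ np.linalg.inv(V)

assert np.allclose(M @ v1, w1) and np.allclose(M @ v2, w2)
```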
The matrix representation of a linear transformation
Let V be a finite-dimensional vector space. An ordered basis for V is a basis for V endowed with a specific order; that is, an ordered basis for V is a finite sequence of linearly independent vectors in V that generates V.
Let β={u1,u2,…,un} be an ordered basis for a finite-dimensional vector space V. For x∈V, let a1,a2,…,an be the unique scalars such that
x = a1u1 + a2u2 + ⋯ + anun.
We define the coordinate vector of x relative to β, denoted [x]β, by
[x]β = (a1, a2, …, an)t, the n×1 column vector whose ith entry is ai.
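A quick computational sketch, with a hypothetical basis of R^2: finding [x]β amounts to solving the linear system whose coefficient columns are the basis vectors.

```python
import numpy as np

# Coordinates of x relative to a hypothetical ordered basis beta = {u1, u2} of R^2:
# solve a1*u1 + a2*u2 = x for (a1, a2).
u1, u2 = np.array([1., 1.]), np.array([1., -1.])
x = np.array([3., 1.])

B = np.column_stack([u1, u2])       # columns are the basis vectors
coords = np.linalg.solve(B, x)      # [x]_beta = (a1, a2)^t

assert np.allclose(coords[0] * u1 + coords[1] * u2, x)
print(coords)                        # [2. 1.], since x = 2*u1 + 1*u2
```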
Let us now proceed with the promised matrix representation of a linear transformation.
Suppose that V and W are finite-dimensional vector spaces with ordered bases β={v1,v2,…,vn} and γ={w1,w2,…,wm}, respectively. Let T:V→W be linear. Then for each j, 1≤j≤n, there exist unique scalars aij∈F, 1≤i≤m, such that
T(vj) = a1jw1 + a2jw2 + ⋯ + amjwm   for 1≤j≤n.
Using the notation above, we call the m×n matrix A defined by Aij=aij the matrix representation of T in the ordered bases β and γ and write
A=[T]βγ.
If V=W and β=γ, then we write simply
A=[T]β.
Notice that the jth column of A is simply [T(vj)]γ. Also observe that if U:V→W is a linear transformation such that [U]βγ=[T]βγ, then U=T.
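The following sketch (a hypothetical T and hypothetical bases, not from the text) builds [T]βγ column by column exactly as described: the jth column is the γ-coordinate vector of T(vj).

```python
import numpy as np

# A hypothetical linear map T : R^2 -> R^3, T(x, y) = (x + y, x - y, 2x),
# with nonstandard ordered bases beta and gamma.
def T(v):
    x, y = v
    return np.array([x + y, x - y, 2 * x])

beta = [np.array([1., 0.]), np.array([1., 1.])]
gamma = [np.array([1., 0., 0.]), np.array([1., 1., 0.]), np.array([1., 1., 1.])]

C = np.column_stack(gamma)                      # gamma vectors as columns
# jth column of A is [T(vj)]_gamma, the gamma-coordinates of T(vj).
A = np.column_stack([np.linalg.solve(C, T(v)) for v in beta])

# Sanity check: for any v, C @ (A @ [v]_beta) should reproduce T(v).
B = np.column_stack(beta)
v = np.array([3., -2.])
assert np.allclose(C @ (A @ np.linalg.solve(B, v)), T(v))
```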
Let T,U:V→W be arbitrary functions, where V and W are vector spaces over F, and let a∈F. We define T+U:V→W by
(T+U)(x)=T(x)+U(x) for all x∈V,
and aT:V→W by
(aT)(x)=aT(x) for all x∈V.
Let V and W be vector spaces over F. We denote the vector space of all linear transformations from V into W by L(V,W). In the case that V=W, we write L(V) instead of L(V,W).
Let T:V→W and U:W→Z be linear transformations, and let A=[U]βγ and B=[T]αβ, where α={v1,v2,…,vn}, β={w1,w2,…,wm}, and γ={z1,z2,…,zp} are ordered bases for V, W, and Z, respectively.
We would like to define the product AB of two matrices so that AB=[UT]αγ. This requirement forces the familiar entrywise rule (AB)ij = Ai1B1j + Ai2B2j + ⋯ + AimBmj for 1≤i≤p and 1≤j≤n, which we take as the definition of the matrix product.
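A small check of this entrywise rule against numpy's built-in product; the matrices A and B below are arbitrary hypothetical examples.

```python
import numpy as np

# Matrix product implemented directly from (AB)_ij = sum_k A_ik B_kj,
# compared against numpy's @ operator.
def matmul_from_definition(A, B):
    m, n, inner = A.shape[0], B.shape[1], A.shape[1]
    C = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(inner))
    return C

A = np.array([[1., 0., 2.],
              [0., 1., -1.]])
B = np.array([[1., 2.],
              [0., 1.],
              [3., 0.]])
assert np.allclose(matmul_from_definition(A, B), A @ B)
```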
Let A be an m×n matrix with entries from a field F. We denote by LA the mapping LA:Fn→Fm defined by LA(x)=Ax (the matrix product of A and x) for each column vector x∈Fn. We call LA a left-multiplication transformation.
Let A be an m×n matrix with entries from F. Then the left-multiplication transformation LA:Fn→Fm is linear. Furthermore, if B is any other m×n matrix (with entries from F) and β and γ are the standard ordered bases for Fn and Fm, respectively, then we have the following properties.
[LA]βγ=A.
LA=LB if and only if A=B.
LA+B=LA+LB and LaA=aLA for all a∈F.
If T:Fn→Fm is linear, then there exists a unique m×n matrix C such that T=LC. In fact, C=[T]βγ.
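Two of these properties can be checked numerically; the matrices A and B below are hypothetical 2×3 examples.

```python
import numpy as np

# (1) The jth column of A is L_A(e_j), so [L_A] = A with respect to the
#     standard bases.  (2) L_{A+B}(x) = L_A(x) + L_B(x) for any x.
A = np.array([[1., 2., 0.],
              [0., -1., 3.]])
B = np.array([[2., 0., 1.],
              [1., 1., 0.]])

e = np.eye(3)                                   # standard ordered basis of R^3
assert np.allclose(np.column_stack([A @ e[:, j] for j in range(3)]), A)

x = np.array([1., 2., -1.])
assert np.allclose((A + B) @ x, A @ x + B @ x)
```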
Let V and W be vector spaces, and let T:V→W be linear. A function U:W→V is said to be an inverse of T if TU=IW and UT=IV. If T has an inverse, then T is said to be invertible. As noted in Appendix B, if T is invertible, then the inverse of T is unique and is denoted by T−1.
We often use the fact that a function is invertible if and only if it is both one-to-one and onto.
Let V and W be vector spaces, and let T:V→W be linear and invertible. Then T−1:W→V is linear.
Let A be an n×n matrix. Then A is invertible if there exists an n×n matrix B such that AB=BA=I.
Let T be an invertible linear transformation from V to W. Then V is finite-dimensional if and only if W is finite-dimensional. In this case, dim(V)=dim(W).
Let V and W be finite-dimensional vector spaces with ordered bases β and γ, respectively. Let T:V→W be linear. Then T is invertible if and only if [T]βγ is invertible. Furthermore, [T−1]γβ=([T]βγ)−1.
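A numerical sketch of the last equality in the simplest setting, standard bases on R^3, with a hypothetical invertible matrix A playing the role of [T]βγ.

```python
import numpy as np

# For an invertible T = L_A with standard bases, [T^{-1}] = A^{-1}; applying it
# to T(x) recovers x.
A = np.array([[2., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
A_inv = np.linalg.inv(A)

x = np.array([1., -2., 3.])
assert np.allclose(A_inv @ (A @ x), x)          # T^{-1}(T(x)) = x
assert np.allclose(A @ A_inv, np.eye(3))        # [T][T^{-1}] = [I]
```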
Let V and W be vector spaces. We say that V is isomorphic to W if there exists a linear transformation T:V→W that is invertible. Such a linear transformation is called an isomorphism from V onto W.
Let β and β′ be two ordered bases for a finite-dimensional vector space V, and let Q=[IV]ββ′. Then
(a) Q is invertible.
(b) For any v∈V, [v]β=Q[v]β′.
The matrix Q=[IV]ββ′ defined in the preceding theorem is called a change of coordinate matrix.
Let T be a linear operator on a finite-dimensional vector space V, and let β and β′ be ordered bases for V. Suppose that Q is the change of coordinate matrix that changes β′-coordinates into β-coordinates. Then
[T]β′=Q−1[T]βQ.
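A concrete instance of this formula in R^2 (hypothetical operator and basis β′, with β the standard basis): the columns of Q are the β′ vectors written in β-coordinates.

```python
import numpy as np

# [T]_{beta'} = Q^{-1} [T]_beta Q, where Q changes beta'-coordinates into
# beta-coordinates (its columns are the beta' vectors in beta-coordinates).
T_beta = np.array([[3., 1.],
                   [0., 2.]])                   # [T]_beta

b1, b2 = np.array([1., 1.]), np.array([1., -1.])
Q = np.column_stack([b1, b2])                   # Q = [I_V] from beta' to beta

T_beta_prime = np.linalg.inv(Q) @ T_beta @ Q    # [T]_{beta'}

# Check on one vector: compute the beta'-coordinates of T(v) two ways.
v_prime = np.array([2., -1.])                   # [v]_{beta'}
v = Q @ v_prime                                 # [v]_beta
assert np.allclose(T_beta_prime @ v_prime, np.linalg.inv(Q) @ (T_beta @ v))
```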
Let A and B be matrices in Mn×n(F). We say that B is similar to A if there exists an invertible matrix Q such that B=Q−1AQ.
For a vector space V over F, we define the dual space of V to be the vector space L(V,F), denoted by V∗. We also define the double dual V∗∗ of V to be the dual of V∗.
Suppose that V is a finite-dimensional vector space with the ordered basis β={x1,x2,…,xn}. Let fi (1≤i≤n) be the ith coordinate function with respect to β, defined by fi(x)=ai, where ai is the ith coordinate of [x]β, and let β∗={f1,f2,…,fn}. Then β∗ is an ordered basis for V∗, and, for any f∈V∗, we have
f = f(x1)f1 + f(x2)f2 + ⋯ + f(xn)fn.
We call the ordered basis β∗={f1,f2,…,fn} of V∗ that satisfies fi(xj)=δij (1≤i,j≤n) the dual basis of β.
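Computationally, if the vectors of a hypothetical basis β of F^n are stacked as the columns of a matrix X and each functional is represented by a row vector, the condition fi(xj)=δij says exactly that the rows of X^(-1) represent the dual basis; the expansion f = Σ f(xi)fi can then be checked directly.

```python
import numpy as np

# Dual basis of a hypothetical ordered basis beta = {x1, x2, x3} of R^3.
x1, x2, x3 = np.array([1., 0., 0.]), np.array([1., 1., 0.]), np.array([1., 1., 1.])
X = np.column_stack([x1, x2, x3])
F = np.linalg.inv(X)                    # ith row represents the functional f_i

assert np.allclose(F @ X, np.eye(3))    # f_i(x_j) = delta_ij

# Any functional f (a row vector) expands as f = sum_i f(x_i) f_i.
f = np.array([2., -1., 5.])             # hypothetical f in (R^3)*
coeffs = np.array([f @ x1, f @ x2, f @ x3])     # the scalars f(x_i)
assert np.allclose(coeffs @ F, f)
```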
Let V and W be finite-dimensional vector spaces over F with ordered bases β and γ, respectively. For any linear transformation T:V→W, the mapping Tt:W∗→V∗ defined by Tt(g)=gT for all g∈W∗ is a linear transformation with the property that
[Tt]γ∗β∗=([T]βγ)t.
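In the special case of standard bases and their dual bases, [T]βγ is just a matrix A and the equality says that the matrix of Tt is At. A short numerical check with a hypothetical A and functional g, identifying functionals on F^n with row vectors:

```python
import numpy as np

# T^t(g) = g o T.  If g in W* is the row vector g, then g o T is the row g A,
# and its coordinate column in the dual basis is A^t g^t.
A = np.array([[1., 2., 0.],
              [0., 1., 1.]])             # [T] for a hypothetical T : R^3 -> R^2
g = np.array([3., -1.])                  # g in (R^2)*, as a row vector

x = np.array([1., 4., -2.])
assert np.allclose((g @ A) @ x, g @ (A @ x))     # (g o T)(x) = g(T(x))
assert np.allclose(A.T @ g, g @ A)               # [T^t] acts as A^t on [g]
```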
For a vector x∈V, we define x̂:V∗→F by x̂(f)=f(x) for every f∈V∗. It is easy to verify that x̂ is a linear functional on V∗, so x̂∈V∗∗.
Let V be a finite-dimensional vector space, and let x∈V. If x̂(f)=0 for all f∈V∗, then x=0.