LINEAR ALGEBRA

W W L CHEN

(c) W W L Chen, 1997, 2008. This chapter is available free to all individuals, on the understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied, with or without permission from the author. However, this document may not be kept on any information storage and retrieval system without permission from the author, unless such system is not accessible to any individuals other than its owners.

Chapter 8

LINEAR TRANSFORMATIONS

8.1. Euclidean Linear Transformations

By a transformation from $\mathbb{R}^n$ into $\mathbb{R}^m$, we mean a function of the type $T : \mathbb{R}^n \to \mathbb{R}^m$, with domain $\mathbb{R}^n$ and codomain $\mathbb{R}^m$. For every vector $x \in \mathbb{R}^n$, the vector $T(x) \in \mathbb{R}^m$ is called the image of $x$ under the transformation $T$, and the set
$$R(T) = \{T(x) : x \in \mathbb{R}^n\}$$
of all images under $T$ is called the range of the transformation $T$.

Remark. For our convenience later, we have chosen to use $R(T)$ instead of the usual $T(\mathbb{R}^n)$ to denote the range of the transformation $T$.

For every $x = (x_1, \dots, x_n) \in \mathbb{R}^n$, we can write
$$T(x) = T(x_1, \dots, x_n) = (y_1, \dots, y_m).$$
Here, for every $i = 1, \dots, m$, we have
$$y_i = T_i(x_1, \dots, x_n), \qquad (1)$$
where $T_i : \mathbb{R}^n \to \mathbb{R}$ is a real valued function.

Definition. A transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ is called a linear transformation if there exists a real matrix
$$A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix}$$
such that for every $x = (x_1, \dots, x_n) \in \mathbb{R}^n$, we have $T(x_1, \dots, x_n) = (y_1, \dots, y_m)$, where
$$y_1 = a_{11}x_1 + \dots + a_{1n}x_n, \quad \dots, \quad y_m = a_{m1}x_1 + \dots + a_{mn}x_n,$$
or, in matrix notation,
$$\begin{pmatrix} y_1 \\ \vdots \\ y_m \end{pmatrix} = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}. \qquad (2)$$
The matrix $A$ is called the standard matrix for the linear transformation $T$.

Remarks. (1) In other words, a transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ is linear if the equation (1) is linear for every $i = 1, \dots, m$.
(2) If we write $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^m$ as column matrices, then (2) can be written in the form $y = Ax$, and so the linear transformation $T$ can be interpreted as multiplication of $x \in \mathbb{R}^n$ by the standard matrix $A$.

Definition. A linear transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ is said to be a linear operator if $n = m$. In this case, we say that $T$ is a linear operator on $\mathbb{R}^n$.

Example 8.1.1. The linear transformation $T : \mathbb{R}^5 \to \mathbb{R}^3$, defined by the equations
$$y_1 = 2x_1 + 3x_2 + 5x_3 + 7x_4 - 9x_5, \quad y_2 = 3x_2 + 4x_3 + 2x_5, \quad y_3 = x_1 + 3x_3 - 2x_4,$$
can be expressed in matrix form as
$$\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} 2 & 3 & 5 & 7 & -9 \\ 0 & 3 & 4 & 0 & 2 \\ 1 & 0 & 3 & -2 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix}.$$
If $(x_1, x_2, x_3, x_4, x_5) = (1, 0, 1, 0, 1)$, then
$$\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} 2 & 3 & 5 & 7 & -9 \\ 0 & 3 & 4 & 0 & 2 \\ 1 & 0 & 3 & -2 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -2 \\ 6 \\ 4 \end{pmatrix},$$
so that $T(1, 0, 1, 0, 1) = (-2, 6, 4)$.

Example 8.1.2. Suppose that $A$ is the zero $m \times n$ matrix. The linear transformation $T : \mathbb{R}^n \to \mathbb{R}^m$, where $T(x) = Ax$ for every $x \in \mathbb{R}^n$, is the zero transformation from $\mathbb{R}^n$ into $\mathbb{R}^m$. Clearly $T(x) = 0$ for every $x \in \mathbb{R}^n$.

Example 8.1.3. Suppose that $I$ is the identity $n \times n$ matrix. The linear operator $T : \mathbb{R}^n \to \mathbb{R}^n$, where $T(x) = Ix$ for every $x \in \mathbb{R}^n$, is the identity operator on $\mathbb{R}^n$. Clearly $T(x) = x$ for every $x \in \mathbb{R}^n$.

PROPOSITION 8A. Suppose that $T : \mathbb{R}^n \to \mathbb{R}^m$ is a linear transformation, and that $\{e_1, \dots, e_n\}$ is the standard basis for $\mathbb{R}^n$. Then the standard matrix for $T$ is given by
$$A = (\, T(e_1) \ \dots \ T(e_n) \,),$$
where $T(e_j)$ is a column matrix for every $j = 1, \dots, n$.

Proof. This follows immediately from (2).

8.2. Linear Operators on $\mathbb{R}^2$

In this section, we consider the special case when $n = m = 2$, and study linear operators on $\mathbb{R}^2$. For every $x \in \mathbb{R}^2$, we shall write $x = (x_1, x_2)$.

Example 8.2.1. Consider reflection across the $x_2$-axis, so that $T(x_1, x_2) = (-x_1, x_2)$.
Clearly we have
$$T(e_1) = \begin{pmatrix} -1 \\ 0 \end{pmatrix} \quad \text{and} \quad T(e_2) = \begin{pmatrix} 0 \\ 1 \end{pmatrix},$$
and so it follows from Proposition 8A that the standard matrix is given by
$$A = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}.$$
It is not difficult to see that the standard matrices for reflection across the $x_1$-axis and across the line $x_1 = x_2$ are given respectively by
$$A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \quad \text{and} \quad A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$
Also, the standard matrix for reflection across the origin is given by
$$A = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}.$$
We give a summary in the table below:

Linear operator | Equations | Standard matrix
Reflection across $x_2$-axis | $y_1 = -x_1$, $y_2 = x_2$ | $\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}$
Reflection across $x_1$-axis | $y_1 = x_1$, $y_2 = -x_2$ | $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$
Reflection across $x_1 = x_2$ | $y_1 = x_2$, $y_2 = x_1$ | $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$
Reflection across origin | $y_1 = -x_1$, $y_2 = -x_2$ | $\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}$

Example 8.2.2. For orthogonal projection onto the $x_1$-axis, we have $T(x_1, x_2) = (x_1, 0)$, with standard matrix
$$A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.$$
Similarly, the standard matrix for orthogonal projection onto the $x_2$-axis is given by
$$A = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.$$
We give a summary in the table below:

Linear operator | Equations | Standard matrix
Orthogonal projection onto $x_1$-axis | $y_1 = x_1$, $y_2 = 0$ | $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$
Orthogonal projection onto $x_2$-axis | $y_1 = 0$, $y_2 = x_2$ | $\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$

Example 8.2.3. For anticlockwise rotation by an angle $\theta$, we have $T(x_1, x_2) = (y_1, y_2)$, where
$$y_1 + \mathrm{i}y_2 = (x_1 + \mathrm{i}x_2)(\cos\theta + \mathrm{i}\sin\theta),$$
and so
$$\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$$
It follows that the standard matrix is given by
$$A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$
We give a summary in the table below:

Linear operator | Equations | Standard matrix
Anticlockwise rotation by angle $\theta$ | $y_1 = x_1\cos\theta - x_2\sin\theta$, $y_2 = x_1\sin\theta + x_2\cos\theta$ | $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$

Example 8.2.4.
For contraction or dilation by a non-negative scalar $k$, we have $T(x_1, x_2) = (kx_1, kx_2)$, with standard matrix
$$A = \begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix}.$$
The operator is called a contraction if $0 \le k \le 1$ and a dilation if $k \ge 1$, and can be extended to negative values of $k$ by noting that for $k < 0$, we have
$$\begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} -k & 0 \\ 0 & -k \end{pmatrix}.$$
This describes contraction or dilation by the non-negative scalar $-k$ followed by reflection across the origin. We give a summary in the table below:

Linear operator | Equations | Standard matrix
Contraction or dilation by factor $k$ | $y_1 = kx_1$, $y_2 = kx_2$ | $\begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix}$

Example 8.2.5. For expansion or compression in the $x_1$-direction by a positive factor $k$, we have $T(x_1, x_2) = (kx_1, x_2)$, with standard matrix
$$A = \begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix}.$$
This can be extended to negative values of $k$ by noting that for $k < 0$, we have
$$\begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} -k & 0 \\ 0 & 1 \end{pmatrix}.$$
This describes expansion or compression in the $x_1$-direction by the positive factor $-k$ followed by reflection across the $x_2$-axis.
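These factorizations for negative $k$ are easy to verify numerically. The following is a small numpy sketch of ours (not part of the original text), taking $k = -2$ as an example:

```python
import numpy as np

k = -2.0  # an example negative factor

# Contraction/dilation matrix for k < 0, factored as scaling by the
# non-negative factor -k followed by reflection across the origin.
scale_k = np.array([[k, 0.0], [0.0, k]])
reflect_origin = np.array([[-1.0, 0.0], [0.0, -1.0]])
scale_neg_k = np.array([[-k, 0.0], [0.0, -k]])
assert np.array_equal(scale_k, reflect_origin @ scale_neg_k)

# Expansion/compression in the x1-direction for k < 0, factored as
# scaling by -k in the x1-direction followed by reflection across the x2-axis.
expand_k = np.array([[k, 0.0], [0.0, 1.0]])
reflect_x2_axis = np.array([[-1.0, 0.0], [0.0, 1.0]])
expand_neg_k = np.array([[-k, 0.0], [0.0, 1.0]])
assert np.array_equal(expand_k, reflect_x2_axis @ expand_neg_k)
```

Both assertions pass because matrix multiplication composes the two operations, with the rightmost factor applied first.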
Similarly, for expansion or compression in the $x_2$-direction by a non-zero factor $k$, we have the standard matrix
$$A = \begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}.$$
We give a summary in the table below:

Linear operator | Equations | Standard matrix
Expansion or compression in $x_1$-direction | $y_1 = kx_1$, $y_2 = x_2$ | $\begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix}$
Expansion or compression in $x_2$-direction | $y_1 = x_1$, $y_2 = kx_2$ | $\begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}$

Example 8.2.6. For shears in the $x_1$-direction with factor $k$, we have $T(x_1, x_2) = (x_1 + kx_2, x_2)$, with standard matrix
$$A = \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}.$$
For the case $k = 1$, we have the following.

[Diagram: the shear $T$ with $k = 1$ applied to a grid of points.]

For the case $k = -1$, we have the following.
[Diagram: the shear $T$ with $k = -1$ applied to a grid of points.]

Similarly, for shears in the $x_2$-direction with factor $k$, we have standard matrix
$$A = \begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix}.$$
We give a summary in the table below:

Linear operator | Equations | Standard matrix
Shear in $x_1$-direction | $y_1 = x_1 + kx_2$, $y_2 = x_2$ | $\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}$
Shear in $x_2$-direction | $y_1 = x_1$, $y_2 = kx_1 + x_2$ | $\begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix}$

Example 8.2.7. Consider a linear operator $T : \mathbb{R}^2 \to \mathbb{R}^2$ which consists of a reflection across the $x_2$-axis, followed by a shear in the $x_1$-direction with factor 3 and then reflection across the $x_2$-axis. To find the standard matrix, consider the effect of $T$ on a standard basis $\{e_1, e_2\}$ of $\mathbb{R}^2$. Note that
$$e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \mapsto \begin{pmatrix} -1 \\ 0 \end{pmatrix} \mapsto \begin{pmatrix} -1 \\ 0 \end{pmatrix} \mapsto \begin{pmatrix} 1 \\ 0 \end{pmatrix} = T(e_1),$$
$$e_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \mapsto \begin{pmatrix} 0 \\ 1 \end{pmatrix} \mapsto \begin{pmatrix} 3 \\ 1 \end{pmatrix} \mapsto \begin{pmatrix} -3 \\ 1 \end{pmatrix} = T(e_2),$$
so it follows from Proposition 8A that the standard matrix for $T$ is
$$A = \begin{pmatrix} 1 & -3 \\ 0 & 1 \end{pmatrix}.$$
Let us summarize the above and consider a few special cases. We have the following table of invertible linear operators with $k \ne 0$. Clearly, if $A$ is the standard matrix for an invertible linear operator $T$, then the inverse matrix $A^{-1}$ is the standard matrix for the inverse linear operator $T^{-1}$.

Linear operator $T$ | Standard matrix $A$ | Inverse matrix $A^{-1}$ | Linear operator $T^{-1}$
Reflection across line $x_1 = x_2$ | $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ | Reflection across line $x_1 = x_2$
Expansion or compression in $x_1$-direction | $\begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix}$ | $\begin{pmatrix} k^{-1} & 0 \\ 0 & 1 \end{pmatrix}$ | Expansion or compression in $x_1$-direction
Expansion or compression in $x_2$-direction | $\begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}$ | $\begin{pmatrix} 1 & 0 \\ 0 & k^{-1} \end{pmatrix}$ | Expansion or compression in $x_2$-direction
Shear in $x_1$-direction | $\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}$ | $\begin{pmatrix} 1 & -k \\ 0 & 1 \end{pmatrix}$ | Shear in $x_1$-direction
Shear in $x_2$-direction | $\begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix}$ | $\begin{pmatrix} 1 & 0 \\ -k & 1 \end{pmatrix}$ | Shear in $x_2$-direction

Next, let us consider the question of elementary row operations on $2 \times 2$ matrices.
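The fact that a row operation acts as left multiplication by a suitable matrix is easy to check numerically. The following is a small numpy sketch of ours, not part of the original text; the matrix $A$ is an arbitrary example:

```python
import numpy as np

# An arbitrary 2x2 matrix to operate on.
A = np.array([[3.0, 5.0],
              [1.0, 2.0]])

# Interchanging the two rows, realised as left multiplication:
E_swap = np.array([[0.0, 1.0],
                   [1.0, 0.0]])
assert np.array_equal(E_swap @ A, A[::-1])  # same as swapping the rows directly

# Adding k times row 2 to row 1, here with k = -3:
E_add = np.array([[1.0, -3.0],
                  [0.0, 1.0]])
B = E_add @ A  # first row becomes (3 - 3*1, 5 - 3*2) = (0, -1)
```

Note that the candidate matrices `E_swap` and `E_add` are exactly the reflection-across-$x_1{=}x_2$ and shear matrices from the table above.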
It is not difficult to see that an elementary row operation performed on a $2 \times 2$ matrix $A$ has the effect of multiplying the matrix $A$ by some elementary matrix $E$ to give the product $EA$. We have the following table.

Elementary row operation | Elementary matrix $E$
Interchanging the two rows | $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$
Multiplying row 1 by non-zero factor $k$ | $\begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix}$
Multiplying row 2 by non-zero factor $k$ | $\begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}$
Adding $k$ times row 2 to row 1 | $\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}$
Adding $k$ times row 1 to row 2 | $\begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix}$

Now, we know that any invertible matrix $A$ can be reduced to the identity matrix by a finite number of elementary row operations. In other words, there exist a finite number of elementary matrices $E_1, \dots, E_s$ of the types above with various non-zero values of $k$ such that
$$E_s \cdots E_1 A = I,$$
so that
$$A = E_1^{-1} \cdots E_s^{-1}.$$
We have proved the following result.

PROPOSITION 8B. Suppose that the linear operator $T : \mathbb{R}^2 \to \mathbb{R}^2$ has standard matrix $A$, where $A$ is invertible. Then $T$ is the product of a succession of finitely many reflections, expansions, compressions and shears.

In fact, we can prove the following result concerning images of straight lines.

PROPOSITION 8C. Suppose that the linear operator $T : \mathbb{R}^2 \to \mathbb{R}^2$ has standard matrix $A$, where $A$ is invertible. Then
(a) the image under $T$ of a straight line is a straight line;
(b) the image under $T$ of a straight line through the origin is a straight line through the origin; and
(c) the images under $T$ of parallel straight lines are parallel straight lines.

Proof. Suppose that $T(x_1, x_2) = (y_1, y_2)$.
Since $A$ is invertible, we have $x = A^{-1}y$, where
$$x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \quad \text{and} \quad y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}.$$
The equation of a straight line is given by $\alpha x_1 + \beta x_2 = \gamma$ or, in matrix form, by
$$(\alpha \ \ \beta) \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = (\gamma).$$
Hence
$$(\alpha \ \ \beta) A^{-1} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = (\gamma).$$
Let
$$(\alpha' \ \ \beta') = (\alpha \ \ \beta) A^{-1}.$$
Then
$$(\alpha' \ \ \beta') \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = (\gamma).$$
In other words, the image under $T$ of the straight line $\alpha x_1 + \beta x_2 = \gamma$ is $\alpha' y_1 + \beta' y_2 = \gamma$, clearly another straight line. This proves (a). To prove (b), note that straight lines through the origin correspond to $\gamma = 0$. To prove (c), note that parallel straight lines correspond to different values of $\gamma$ for the same values of $\alpha$ and $\beta$.

8.3. Elementary Properties of Euclidean Linear Transformations

In this section, we establish a number of simple properties of euclidean linear transformations.

PROPOSITION 8D. Suppose that $T_1 : \mathbb{R}^n \to \mathbb{R}^m$ and $T_2 : \mathbb{R}^m \to \mathbb{R}^k$ are linear transformations. Then $T = T_2 \circ T_1 : \mathbb{R}^n \to \mathbb{R}^k$ is also a linear transformation.

Proof. Since $T_1$ and $T_2$ are linear transformations, they have standard matrices $A_1$ and $A_2$ respectively. In other words, we have $T_1(x) = A_1 x$ for every $x \in \mathbb{R}^n$ and $T_2(y) = A_2 y$ for every $y \in \mathbb{R}^m$. It follows that $T(x) = T_2(T_1(x)) = A_2 A_1 x$ for every $x \in \mathbb{R}^n$, so that $T$ has standard matrix $A_2 A_1$.

Example 8.3.1. Suppose that $T_1 : \mathbb{R}^2 \to \mathbb{R}^2$ is anticlockwise rotation by $\pi/2$ and $T_2 : \mathbb{R}^2 \to \mathbb{R}^2$ is orthogonal projection onto the $x_1$-axis. Then the respective standard matrices are
$$A_1 = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \quad \text{and} \quad A_2 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.$$
It follows that the standard matrices for $T_2 \circ T_1$ and $T_1 \circ T_2$ are respectively
$$A_2 A_1 = \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad A_1 A_2 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.$$
Hence $T_2 \circ T_1$ and $T_1 \circ T_2$ are not equal.

Example 8.3.2. Suppose that $T_1 : \mathbb{R}^2 \to \mathbb{R}^2$ is anticlockwise rotation by $\theta$ and $T_2 : \mathbb{R}^2 \to \mathbb{R}^2$ is anticlockwise rotation by $\phi$.
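The products in Example 8.3.1 take one line each in code. A minimal numpy sketch of ours (not part of the original text):

```python
import numpy as np

# Example 8.3.1: A1 rotates anticlockwise by pi/2, A2 projects onto the x1-axis.
A1 = np.array([[0, -1],
               [1,  0]])
A2 = np.array([[1, 0],
               [0, 0]])

# Standard matrices of the two compositions; the rightmost factor acts first.
B = A2 @ A1  # standard matrix of T2 o T1
C = A1 @ A2  # standard matrix of T1 o T2
assert not np.array_equal(B, C)  # the two compositions differ
```

The assertion passes: projecting after rotating is not the same map as rotating after projecting, which is the non-commutativity the example illustrates.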
Then the respective standard matrices are
$$A_1 = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \quad \text{and} \quad A_2 = \begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix}.$$
It follows that the standard matrix for $T_2 \circ T_1$ is
$$A_2 A_1 = \begin{pmatrix} \cos\phi\cos\theta - \sin\phi\sin\theta & -\cos\phi\sin\theta - \sin\phi\cos\theta \\ \sin\phi\cos\theta + \cos\phi\sin\theta & \cos\phi\cos\theta - \sin\phi\sin\theta \end{pmatrix} = \begin{pmatrix} \cos(\theta+\phi) & -\sin(\theta+\phi) \\ \sin(\theta+\phi) & \cos(\theta+\phi) \end{pmatrix}.$$
Hence $T_2 \circ T_1$ is anticlockwise rotation by $\theta + \phi$.

Example 8.3.3. The reader should check that in $\mathbb{R}^2$, reflection across the $x_1$-axis followed by reflection across the $x_2$-axis gives reflection across the origin.

Linear transformations that map distinct vectors to distinct vectors are of special importance.

Definition. A linear transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ is said to be one-to-one if for every $x', x'' \in \mathbb{R}^n$, we have $x' = x''$ whenever $T(x') = T(x'')$.

Example 8.3.4. If we consider linear operators $T : \mathbb{R}^2 \to \mathbb{R}^2$, then $T$ is one-to-one precisely when the standard matrix $A$ is invertible. To see this, suppose first of all that $A$ is invertible. If $T(x') = T(x'')$, then $Ax' = Ax''$. Multiplying on the left by $A^{-1}$, we obtain $x' = x''$. Suppose next that $A$ is not invertible. Then there exists $x \in \mathbb{R}^2$ such that $x \ne 0$ and $Ax = 0$. On the other hand, we clearly have $A0 = 0$. It follows that $T(x) = T(0)$, so that $T$ is not one-to-one.

PROPOSITION 8E. Suppose that the linear operator $T : \mathbb{R}^n \to \mathbb{R}^n$ has standard matrix $A$. Then the following statements are equivalent:
(a) The matrix $A$ is invertible.
(b) The linear operator $T$ is one-to-one.
(c) The range of $T$ is $\mathbb{R}^n$; in other words, $R(T) = \mathbb{R}^n$.

Proof. ((a)⇒(b)) Suppose that $T(x') = T(x'')$. Then $Ax' = Ax''$. Multiplying on the left by $A^{-1}$ gives $x' = x''$.

((b)⇒(a)) Suppose that $T$ is one-to-one. Then the system $Ax = 0$ has unique solution $x = 0$ in $\mathbb{R}^n$. It follows that $A$ can be reduced by elementary row operations to the identity matrix $I$, and is therefore invertible.

((a)⇒(c)) For any $y \in \mathbb{R}^n$, clearly $x = A^{-1}y$ satisfies $Ax = y$, so that $T(x) = y$.
((c)⇒(a)) Suppose that $\{e_1, \dots, e_n\}$ is the standard basis for $\mathbb{R}^n$. Let $x_1, \dots, x_n \in \mathbb{R}^n$ be chosen to satisfy $T(x_j) = e_j$, so that $Ax_j = e_j$, for every $j = 1, \dots, n$. Write
$$C = (\, x_1 \ \dots \ x_n \,).$$
Then $AC = I$, so that $A$ is invertible.

Definition. Suppose that the linear operator $T : \mathbb{R}^n \to \mathbb{R}^n$ has standard matrix $A$, where $A$ is invertible. Then the linear operator $T^{-1} : \mathbb{R}^n \to \mathbb{R}^n$, defined by $T^{-1}(x) = A^{-1}x$ for every $x \in \mathbb{R}^n$, is called the inverse of the linear operator $T$.

Remark. Clearly $T^{-1}(T(x)) = x$ and $T(T^{-1}(x)) = x$ for every $x \in \mathbb{R}^n$.

Example 8.3.5. Consider the linear operator $T : \mathbb{R}^2 \to \mathbb{R}^2$, defined by $T(x) = Ax$ for every $x \in \mathbb{R}^2$, where
$$A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}.$$
Clearly $A$ is invertible, and
$$A^{-1} = \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix}.$$
Hence the inverse linear operator is $T^{-1} : \mathbb{R}^2 \to \mathbb{R}^2$, defined by $T^{-1}(x) = A^{-1}x$ for every $x \in \mathbb{R}^2$.

Example 8.3.6. Suppose that $T : \mathbb{R}^2 \to \mathbb{R}^2$ is anticlockwise rotation by angle $\theta$. The reader should check that $T^{-1} : \mathbb{R}^2 \to \mathbb{R}^2$ is anticlockwise rotation by angle $-\theta$.

Next, we study the linearity properties of euclidean linear transformations which we shall use later to discuss linear transformations in arbitrary real vector spaces.

PROPOSITION 8F. A transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ is linear if and only if the following two conditions are satisfied:
(a) For every $u, v \in \mathbb{R}^n$, we have $T(u + v) = T(u) + T(v)$.
(b) For every $u \in \mathbb{R}^n$ and $c \in \mathbb{R}$, we have $T(cu) = cT(u)$.

Proof. Suppose first of all that $T : \mathbb{R}^n \to \mathbb{R}^m$ is a linear transformation. Let $A$ be the standard matrix for $T$. Then for every $u, v \in \mathbb{R}^n$ and $c \in \mathbb{R}$, we have
$$T(u + v) = A(u + v) = Au + Av = T(u) + T(v)$$
and
$$T(cu) = A(cu) = c(Au) = cT(u).$$
Suppose now that (a) and (b) hold. To show that $T$ is linear, we need to find a matrix $A$ such that $T(x) = Ax$ for every $x \in \mathbb{R}^n$. Suppose that $\{e_1, \dots, e_n\}$ is the standard basis for $\mathbb{R}^n$. As suggested by Proposition 8A, we write
$$A = (\, T(e_1) \ \dots \ T(e_n) \,),$$
where $T(e_j)$ is a column matrix for every $j = 1, \dots, n$. For any vector
$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$$
in $\mathbb{R}^n$, we have
$$Ax = (\, T(e_1) \ \dots \ T(e_n) \,) \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = x_1 T(e_1) + \dots + x_n T(e_n).$$
Using (b) on each summand and then using (a) inductively, we obtain
$$Ax = T(x_1 e_1) + \dots + T(x_n e_n) = T(x_1 e_1 + \dots + x_n e_n) = T(x)$$
as required.

To conclude our study of euclidean linear transformations, we briefly mention the problem of eigenvalues and eigenvectors of euclidean linear operators.

Definition. Suppose that $T : \mathbb{R}^n \to \mathbb{R}^n$ is a linear operator. Then any real number $\lambda \in \mathbb{R}$ is called an eigenvalue of $T$ if there exists a non-zero vector $x \in \mathbb{R}^n$ such that $T(x) = \lambda x$. This non-zero vector $x \in \mathbb{R}^n$ is called an eigenvector of $T$ corresponding to the eigenvalue $\lambda$.

Remark. Note that the equation $T(x) = \lambda x$ is equivalent to the equation $Ax = \lambda x$. It follows that there is no distinction between eigenvalues and eigenvectors of $T$ and those of the standard matrix $A$. We therefore do not need to discuss this problem any further.

8.4. General Linear Transformations

Suppose that $V$ and $W$ are real vector spaces. To define a linear transformation from $V$ into $W$, we are motivated by Proposition 8F which describes the linearity properties of euclidean linear transformations. By a transformation from $V$ into $W$, we mean a function of the type $T : V \to W$, with domain $V$ and codomain $W$. For every vector $u \in V$, the vector $T(u) \in W$ is called the image of $u$ under the transformation $T$.

Definition. A transformation $T : V \to W$ from a real vector space $V$ into a real vector space $W$ is called a linear transformation if the following two conditions are satisfied:
(LT1) For every $u, v \in V$, we have $T(u + v) = T(u) + T(v)$.
(LT2) For every $u \in V$ and $c \in \mathbb{R}$, we have $T(cu) = cT(u)$.

Definition. A linear transformation $T : V \to V$ from a real vector space $V$ into itself is called a linear operator on $V$.

Example 8.4.1. Suppose that $V$ and $W$ are two real vector spaces.
The transformation $T : V \to W$, where $T(u) = 0$ for every $u \in V$, is clearly linear, and is called the zero transformation from $V$ to $W$.

Example 8.4.2. Suppose that $V$ is a real vector space. The transformation $I : V \to V$, where $I(u) = u$ for every $u \in V$, is clearly linear, and is called the identity operator on $V$.

Example 8.4.3. Suppose that $V$ is a real vector space, and that $k \in \mathbb{R}$ is fixed. The transformation $T : V \to V$, where $T(u) = ku$ for every $u \in V$, is clearly linear. This operator is called a dilation if $k \ge 1$ and a contraction if $0 \le k \le 1$.

Example 8.4.4. Suppose that $V$ is a finite dimensional vector space, with basis $\{w_1, \dots, w_n\}$. Define a transformation $T : V \to \mathbb{R}^n$ as follows. For every $u \in V$, there exists a unique vector $(\beta_1, \dots, \beta_n) \in \mathbb{R}^n$ such that $u = \beta_1 w_1 + \dots + \beta_n w_n$. We let $T(u) = (\beta_1, \dots, \beta_n)$. In other words, the transformation $T$ gives the coordinates of any vector $u \in V$ with respect to the given basis $\{w_1, \dots, w_n\}$. Suppose now that $v = \gamma_1 w_1 + \dots + \gamma_n w_n$ is another vector in $V$. Then $u + v = (\beta_1 + \gamma_1)w_1 + \dots + (\beta_n + \gamma_n)w_n$, so that
$$T(u + v) = (\beta_1 + \gamma_1, \dots, \beta_n + \gamma_n) = (\beta_1, \dots, \beta_n) + (\gamma_1, \dots, \gamma_n) = T(u) + T(v).$$
Also, if $c \in \mathbb{R}$, then $cu = c\beta_1 w_1 + \dots + c\beta_n w_n$, so that
$$T(cu) = (c\beta_1, \dots, c\beta_n) = c(\beta_1, \dots, \beta_n) = cT(u).$$
Hence $T$ is a linear transformation. We shall return to this in greater detail in the next section.

Example 8.4.5. Suppose that $P_n$ denotes the vector space of all polynomials with real coefficients and degree at most $n$. Define a transformation $T : P_n \to P_n$ as follows. For every polynomial
$$p = p_0 + p_1 x + \dots + p_n x^n$$
in $P_n$, we let
$$T(p) = p_n + p_{n-1} x + \dots + p_0 x^n.$$
Suppose now that $q = q_0 + q_1 x + \dots + q_n x^n$ is another polynomial in $P_n$.
Then
$$p + q = (p_0 + q_0) + (p_1 + q_1)x + \dots + (p_n + q_n)x^n,$$
so that
$$T(p + q) = (p_n + q_n) + (p_{n-1} + q_{n-1})x + \dots + (p_0 + q_0)x^n = (p_n + p_{n-1}x + \dots + p_0 x^n) + (q_n + q_{n-1}x + \dots + q_0 x^n) = T(p) + T(q).$$
Also, for any $c \in \mathbb{R}$, we have $cp = cp_0 + cp_1 x + \dots + cp_n x^n$, so that
$$T(cp) = cp_n + cp_{n-1}x + \dots + cp_0 x^n = c(p_n + p_{n-1}x + \dots + p_0 x^n) = cT(p).$$
Hence $T$ is a linear transformation.

Example 8.4.6. Let $V$ denote the vector space of all real valued functions differentiable everywhere in $\mathbb{R}$, and let $W$ denote the vector space of all real valued functions defined on $\mathbb{R}$. Consider the transformation $T : V \to W$, where $T(f) = f'$ for every $f \in V$. It is easy to check from properties of derivatives that $T$ is a linear transformation.

Example 8.4.7. Let $V$ denote the vector space of all real valued functions that are Riemann integrable over the interval $[0, 1]$. Consider the transformation $T : V \to \mathbb{R}$, where
$$T(f) = \int_0^1 f(x)\,\mathrm{d}x$$
for every $f \in V$. It is easy to check from properties of the Riemann integral that $T$ is a linear transformation.

Consider a linear transformation $T : V \to W$ from a finite dimensional real vector space $V$ into a real vector space $W$. Suppose that $\{v_1, \dots, v_n\}$ is a basis of $V$. Then every $u \in V$ can be written uniquely in the form $u = \beta_1 v_1 + \dots + \beta_n v_n$, where $\beta_1, \dots, \beta_n \in \mathbb{R}$. It follows that
$$T(u) = T(\beta_1 v_1 + \dots + \beta_n v_n) = T(\beta_1 v_1) + \dots + T(\beta_n v_n) = \beta_1 T(v_1) + \dots + \beta_n T(v_n).$$
We have therefore proved the following generalization of Proposition 8A.

PROPOSITION 8G. Suppose that $T : V \to W$ is a linear transformation from a finite dimensional real vector space $V$ into a real vector space $W$. Suppose further that $\{v_1, \dots, v_n\}$ is a basis of $V$. Then $T$ is completely determined by $T(v_1), \dots, T(v_n)$.

Example 8.4.8. Consider a linear transformation $T : P_2 \to \mathbb{R}$, where $T(1) = 1$, $T(x) = 2$ and $T(x^2) = 3$. Since $\{1, x, x^2\}$ is a basis of $P_2$, this linear transformation is completely determined.
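In coordinates, this determination is just a linear combination: if $p = c_0 + c_1 x + c_2 x^2$, then $T(p) = c_0 T(1) + c_1 T(x) + c_2 T(x^2)$. A small Python sketch of ours for the transformation of Example 8.4.8 (the names `T_basis` and `T` are our own, not from the text):

```python
# Values of T on the basis {1, x, x^2} of P_2, as given in Example 8.4.8.
T_basis = [1.0, 2.0, 3.0]  # T(1), T(x), T(x^2)

def T(coeffs):
    """Apply T to the polynomial c0 + c1*x + c2*x^2 given as [c0, c1, c2].

    By linearity (Proposition 8G), T(p) = c0*T(1) + c1*T(x) + c2*T(x^2).
    """
    return sum(c * t for c, t in zip(coeffs, T_basis))

value = T([1.0, 1.0, 1.0])  # the polynomial 1 + x + x^2
```

Here `value` is $1 \cdot 1 + 1 \cdot 2 + 1 \cdot 3 = 6$: knowing $T$ on a basis determines it everywhere.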
In particular, we have, for example,
$$T(5 - 3x + 2x^2) = 5T(1) - 3T(x) + 2T(x^2) = 5.$$

Example 8.4.9. Consider a linear transformation $T : \mathbb{R}^4 \to \mathbb{R}$, where $T(1,0,0,0) = 1$, $T(1,1,0,0) = 2$, $T(1,1,1,0) = 3$ and $T(1,1,1,1) = 4$. Since $\{(1,0,0,0), (1,1,0,0), (1,1,1,0), (1,1,1,1)\}$ is a basis of $\mathbb{R}^4$, this linear transformation is completely determined. In particular, we have, for example,
$$T(6,4,3,1) = T(2(1,0,0,0) + (1,1,0,0) + 2(1,1,1,0) + (1,1,1,1)) = 2T(1,0,0,0) + T(1,1,0,0) + 2T(1,1,1,0) + T(1,1,1,1) = 14.$$

We also have the following generalization of Proposition 8D.

PROPOSITION 8H. Suppose that $V, W, U$ are real vector spaces. Suppose further that $T_1 : V \to W$ and $T_2 : W \to U$ are linear transformations. Then $T = T_2 \circ T_1 : V \to U$ is also a linear transformation.

Proof. Suppose that $u, v \in V$. Then
$$T(u + v) = T_2(T_1(u + v)) = T_2(T_1(u) + T_1(v)) = T_2(T_1(u)) + T_2(T_1(v)) = T(u) + T(v).$$
Also, if $c \in \mathbb{R}$, then
$$T(cu) = T_2(T_1(cu)) = T_2(cT_1(u)) = cT_2(T_1(u)) = cT(u).$$
Hence $T$ is a linear transformation.

8.5. Change of Basis

Suppose that $V$ is a real vector space, with basis $\mathcal{B} = \{u_1, \dots, u_n\}$. Then every vector $u \in V$ can be written uniquely as a linear combination
$$u = \beta_1 u_1 + \dots + \beta_n u_n, \quad \text{where } \beta_1, \dots, \beta_n \in \mathbb{R}. \qquad (3)$$
It follows that the vector $u$ can be identified with the vector $(\beta_1, \dots, \beta_n) \in \mathbb{R}^n$.

Definition. Suppose that $u \in V$ and (3) holds. Then the matrix
$$[u]_{\mathcal{B}} = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_n \end{pmatrix}$$
is called the coordinate matrix of $u$ relative to the basis $\mathcal{B} = \{u_1, \dots, u_n\}$.

Example 8.5.1. The vectors
$$u_1 = (1, 2, 1, 0), \quad u_2 = (3, 3, 3, 0), \quad u_3 = (2, -10, 0, 0), \quad u_4 = (-2, 1, -6, 2)$$
are linearly independent in $\mathbb{R}^4$, and so $\mathcal{B} = \{u_1, u_2, u_3, u_4\}$ is a basis of $\mathbb{R}^4$.
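That the four vectors of Example 8.5.1 are indeed linearly independent can be checked by computing the determinant of the matrix having them as columns. A small numpy sketch of ours, not part of the original text:

```python
import numpy as np

# Columns are u1, u2, u3, u4 from Example 8.5.1.
M = np.array([[1, 3,   2, -2],
              [2, 3, -10,  1],
              [1, 3,   0, -6],
              [0, 0,   0,  2]], dtype=float)

d = np.linalg.det(M)
assert abs(d) > 1e-9  # non-zero determinant => the columns are linearly independent
```

The determinant works out to 12, so the columns span $\mathbb{R}^4$ and form a basis.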
It follows that for any $u = (x, y, z, w) \in \mathbb{R}^4$, we can write
$$u = \beta_1 u_1 + \beta_2 u_2 + \beta_3 u_3 + \beta_4 u_4.$$
In matrix notation, this becomes
$$\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} = \begin{pmatrix} 1 & 3 & 2 & -2 \\ 2 & 3 & -10 & 1 \\ 1 & 3 & 0 & -6 \\ 0 & 0 & 0 & 2 \end{pmatrix} \begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \\ \beta_4 \end{pmatrix},$$
so that
$$[u]_{\mathcal{B}} = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \\ \beta_4 \end{pmatrix} = \begin{pmatrix} 1 & 3 & 2 & -2 \\ 2 & 3 & -10 & 1 \\ 1 & 3 & 0 & -6 \\ 0 & 0 & 0 & 2 \end{pmatrix}^{-1} \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}.$$

Remark. Consider a function $\phi : V \to \mathbb{R}^n$, where $\phi(u) = [u]_{\mathcal{B}}$ for every $u \in V$. It is not difficult to see that this function gives rise to a one-to-one correspondence between the elements of $V$ and the elements of $\mathbb{R}^n$. Furthermore, note that
$$[u + v]_{\mathcal{B}} = [u]_{\mathcal{B}} + [v]_{\mathcal{B}} \quad \text{and} \quad [cu]_{\mathcal{B}} = c[u]_{\mathcal{B}},$$
so that $\phi(u + v) = \phi(u) + \phi(v)$ and $\phi(cu) = c\phi(u)$ for every $u, v \in V$ and $c \in \mathbb{R}$. Thus $\phi$ is a linear transformation, and preserves much of the structure of $V$. We also say that $V$ is isomorphic to $\mathbb{R}^n$. In practice, once we have made this identification between vectors and their coordinate matrices, then we can basically forget about the basis $\mathcal{B}$ and imagine that we are working in $\mathbb{R}^n$ with the standard basis.

Clearly, if we change from one basis $\mathcal{B} = \{u_1, \dots, u_n\}$ to another basis $\mathcal{C} = \{v_1, \dots, v_n\}$ of $V$, then we also need to find a way of calculating $[u]_{\mathcal{C}}$ in terms of $[u]_{\mathcal{B}}$ for every vector $u \in V$. To do this, note that each of the vectors $v_1, \dots, v_n$ can be written uniquely as a linear combination of the vectors $u_1, \dots, u_n$. Suppose that for $i = 1, \dots, n$, we have
$$v_i = a_{1i} u_1 + \dots + a_{ni} u_n, \quad \text{where } a_{1i}, \dots, a_{ni} \in \mathbb{R},$$
so that
$$[v_i]_{\mathcal{B}} = \begin{pmatrix} a_{1i} \\ \vdots \\ a_{ni} \end{pmatrix}.$$
For every $u \in V$, we can write
$$u = \beta_1 u_1 + \dots + \beta_n u_n = \gamma_1 v_1 + \dots + \gamma_n v_n, \quad \text{where } \beta_1, \dots, \beta_n, \gamma_1, \dots, \gamma_n \in \mathbb{R},$$
so that
$$[u]_{\mathcal{B}} = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_n \end{pmatrix} \quad \text{and} \quad [u]_{\mathcal{C}} = \begin{pmatrix} \gamma_1 \\ \vdots \\ \gamma_n \end{pmatrix}.$$
Clearly
$$u = \gamma_1 v_1 + \dots + \gamma_n v_n = \gamma_1(a_{11} u_1 + \dots + a_{n1} u_n) + \dots + \gamma_n(a_{1n} u_1 + \dots + a_{nn} u_n) = (\gamma_1 a_{11} + \dots + \gamma_n a_{1n})u_1 + \dots + (\gamma_1 a_{n1} + \dots + \gamma_n a_{nn})u_n = \beta_1 u_1 + \dots + \beta_n u_n.$$
Hence
$$\beta_1 = a_{11}\gamma_1 + \dots + a_{1n}\gamma_n, \quad \dots,$$
$$\beta_n = a_{n1}\gamma_1 + \dots + a_{nn}\gamma_n.$$
Written in matrix notation, we have
$$\begin{pmatrix} \beta_1 \\ \vdots \\ \beta_n \end{pmatrix} = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix} \begin{pmatrix} \gamma_1 \\ \vdots \\ \gamma_n \end{pmatrix}.$$
We have proved the following result.

PROPOSITION 8J. Suppose that $\mathcal{B} = \{u_1, \dots, u_n\}$ and $\mathcal{C} = \{v_1, \dots, v_n\}$ are two bases of a real vector space $V$. Then for every $u \in V$, we have
$$[u]_{\mathcal{B}} = P[u]_{\mathcal{C}},$$
where the columns of the matrix
$$P = (\, [v_1]_{\mathcal{B}} \ \dots \ [v_n]_{\mathcal{B}} \,)$$
are precisely the coordinate matrices of the elements of $\mathcal{C}$ relative to the basis $\mathcal{B}$.

Remark. Strictly speaking, Proposition 8J gives $[u]_{\mathcal{B}}$ in terms of $[u]_{\mathcal{C}}$. However, note that the matrix $P$ is invertible (why?), so that $[u]_{\mathcal{C}} = P^{-1}[u]_{\mathcal{B}}$.

Definition. The matrix $P$ in Proposition 8J is sometimes called the transition matrix from the basis $\mathcal{C}$ to the basis $\mathcal{B}$.

Example 8.5.2. We know that with
$$u_1 = (1, 2, 1, 0), \quad u_2 = (3, 3, 3, 0), \quad u_3 = (2, -10, 0, 0), \quad u_4 = (-2, 1, -6, 2),$$
and with
$$v_1 = (1, 2, 1, 0), \quad v_2 = (1, -1, 1, 0), \quad v_3 = (1, 0, -1, 0), \quad v_4 = (0, 0, 0, 2),$$
both $\mathcal{B} = \{u_1, u_2, u_3, u_4\}$ and $\mathcal{C} = \{v_1, v_2, v_3, v_4\}$ are bases of $\mathbb{R}^4$. It is easy to check that
$$v_1 = u_1, \quad v_2 = -2u_1 + u_2, \quad v_3 = 11u_1 - 4u_2 + u_3, \quad v_4 = -27u_1 + 11u_2 - 2u_3 + u_4,$$
so that
$$P = (\, [v_1]_{\mathcal{B}} \ [v_2]_{\mathcal{B}} \ [v_3]_{\mathcal{B}} \ [v_4]_{\mathcal{B}} \,) = \begin{pmatrix} 1 & -2 & 11 & -27 \\ 0 & 1 & -4 & 11 \\ 0 & 0 & 1 & -2 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Hence $[u]_{\mathcal{B}} = P[u]_{\mathcal{C}}$ for every $u \in \mathbb{R}^4$. It is also easy to check that
$$u_1 = v_1, \quad u_2 = 2v_1 + v_2, \quad u_3 = -3v_1 + 4v_2 + v_3, \quad u_4 = -v_1 - 3v_2 + 2v_3 + v_4,$$
so that
$$Q = (\, [u_1]_{\mathcal{C}} \ [u_2]_{\mathcal{C}} \ [u_3]_{\mathcal{C}} \ [u_4]_{\mathcal{C}} \,) = \begin{pmatrix} 1 & 2 & -3 & -1 \\ 0 & 1 & 4 & -3 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Hence $[u]_{\mathcal{C}} = Q[u]_{\mathcal{B}}$ for every $u \in \mathbb{R}^4$. Note that $PQ = I$. Now let $u = (6, -1, 2, 2)$. We can check that $u = v_1 + 3v_2 + 2v_3 + v_4$, so that
$$[u]_{\mathcal{C}} = \begin{pmatrix} 1 \\ 3 \\ 2 \\ 1 \end{pmatrix}.$$
Then
$$[u]_{\mathcal{B}} = \begin{pmatrix} 1 & -2 & 11 & -27 \\ 0 & 1 & -4 & 11 \\ 0 & 0 & 1 & -2 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 3 \\ 2 \\ 1 \end{pmatrix} = \begin{pmatrix} -10 \\ 6 \\ 0 \\ 1 \end{pmatrix}.$$
Check that $u = -10u_1 + 6u_2 + u_4$.

Example 8.5.3. Consider the vector space $P_2$.
It is not too difficult to check that
$$u_1 = 1 + x, \quad u_2 = 1 + x^2, \quad u_3 = x + x^2$$
form a basis of $P_2$. Let $u = 1 + 4x - x^2$. Then $u = \beta_1 u_1 + \beta_2 u_2 + \beta_3 u_3$, where
$$1 + 4x - x^2 = \beta_1(1 + x) + \beta_2(1 + x^2) + \beta_3(x + x^2) = (\beta_1 + \beta_2) + (\beta_1 + \beta_3)x + (\beta_2 + \beta_3)x^2,$$
so that $\beta_1 + \beta_2 = 1$, $\beta_1 + \beta_3 = 4$ and $\beta_2 + \beta_3 = -1$. Hence $(\beta_1, \beta_2, \beta_3) = (3, -2, 1)$. If we write $\mathcal{B} = \{u_1, u_2, u_3\}$, then
$$[u]_{\mathcal{B}} = \begin{pmatrix} 3 \\ -2 \\ 1 \end{pmatrix}.$$
On the other hand, it is also not too difficult to check that
$$v_1 = 1, \quad v_2 = 1 + x, \quad v_3 = 1 + x + x^2$$
form a basis of $P_2$. Also $u = \gamma_1 v_1 + \gamma_2 v_2 + \gamma_3 v_3$, where
$$1 + 4x - x^2 = \gamma_1 + \gamma_2(1 + x) + \gamma_3(1 + x + x^2) = (\gamma_1 + \gamma_2 + \gamma_3) + (\gamma_2 + \gamma_3)x + \gamma_3 x^2,$$
so that $\gamma_1 + \gamma_2 + \gamma_3 = 1$, $\gamma_2 + \gamma_3 = 4$ and $\gamma_3 = -1$. Hence $(\gamma_1, \gamma_2, \gamma_3) = (-3, 5, -1)$. If we write $\mathcal{C} = \{v_1, v_2, v_3\}$, then
$$[u]_{\mathcal{C}} = \begin{pmatrix} -3 \\ 5 \\ -1 \end{pmatrix}.$$
Next, note that
$$v_1 = \tfrac{1}{2}u_1 + \tfrac{1}{2}u_2 - \tfrac{1}{2}u_3, \quad v_2 = u_1, \quad v_3 = \tfrac{1}{2}u_1 + \tfrac{1}{2}u_2 + \tfrac{1}{2}u_3.$$
Hence
$$P = (\, [v_1]_{\mathcal{B}} \ [v_2]_{\mathcal{B}} \ [v_3]_{\mathcal{B}} \,) = \begin{pmatrix} 1/2 & 1 & 1/2 \\ 1/2 & 0 & 1/2 \\ -1/2 & 0 & 1/2 \end{pmatrix}.$$
To verify that $[u]_{\mathcal{B}} = P[u]_{\mathcal{C}}$, note that
$$\begin{pmatrix} 3 \\ -2 \\ 1 \end{pmatrix} = \begin{pmatrix} 1/2 & 1 & 1/2 \\ 1/2 & 0 & 1/2 \\ -1/2 & 0 & 1/2 \end{pmatrix} \begin{pmatrix} -3 \\ 5 \\ -1 \end{pmatrix}.$$

8.6. Kernel and Range

Consider first of all a euclidean linear transformation $T : \mathbb{R}^n \to \mathbb{R}^m$. Suppose that $A$ is the standard matrix for $T$. Then the range of the transformation $T$ is given by
$$R(T) = \{T(x) : x \in \mathbb{R}^n\} = \{Ax : x \in \mathbb{R}^n\}.$$
It follows that $R(T)$ is the set of all linear combinations of the columns of the matrix $A$, and is therefore the column space of $A$. On the other hand, the set
$$\{x \in \mathbb{R}^n : Ax = 0\}$$
is the nullspace of $A$. Recall that the sum of the dimension of the nullspace of $A$ and the dimension of the column space of $A$ is equal to the number of columns of $A$. This is known as the Rank-nullity theorem. The purpose of this section is to extend this result to the setting of linear transformations. To do this, we need the following generalization of the idea of the nullspace and the column space.

Definition.
Suppose that $T : V \to W$ is a linear transformation from a real vector space $V$ into a real vector space $W$. Then the set
$$\ker(T) = \{u \in V : T(u) = 0\}$$
is called the kernel of $T$, and the set
$$R(T) = \{T(u) : u \in V\}$$
is called the range of $T$.

Example 8.6.1. For a euclidean linear transformation $T$ with standard matrix $A$, we have shown that $\ker(T)$ is the nullspace of $A$, while $R(T)$ is the column space of $A$.

Example 8.6.2. Suppose that $T : V \to W$ is the zero transformation. Clearly we have $\ker(T) = V$ and $R(T) = \{0\}$.

Example 8.6.3. Suppose that $T : V \to V$ is the identity operator on $V$. Clearly we have $\ker(T) = \{0\}$ and $R(T) = V$.

Example 8.6.4. Suppose that $T : \mathbb{R}^2 \to \mathbb{R}^2$ is orthogonal projection onto the $x_1$-axis. Then $\ker(T)$ is the $x_2$-axis, while $R(T)$ is the $x_1$-axis.

Example 8.6.5. Suppose that $T : \mathbb{R}^n \to \mathbb{R}^n$ is one-to-one. Then $\ker(T) = \{0\}$ and $R(T) = \mathbb{R}^n$, in view of Proposition 8E.

Example 8.6.6. Consider the linear transformation $T : V \to W$, where $V$ denotes the vector space of all real valued functions differentiable everywhere in $\mathbb{R}$, where $W$ denotes the space of all real valued functions defined in $\mathbb{R}$, and where $T(f) = f'$ for every $f \in V$. Then $\ker(T)$ is the set of all differentiable functions with derivative 0, and so is the set of all constant functions in $\mathbb{R}$.

Example 8.6.7. Consider the linear transformation $T : V \to \mathbb{R}$, where $V$ denotes the vector space of all real valued functions Riemann integrable over the interval $[0, 1]$, and where
$$T(f) = \int_0^1 f(x)\,\mathrm{d}x$$
for every $f \in V$. Then $\ker(T)$ is the set of all Riemann integrable functions in $[0, 1]$ with zero mean, while $R(T) = \mathbb{R}$.

PROPOSITION 8K. Suppose that $T : V \to W$ is a linear transformation from a real vector space $V$ into a real vector space $W$. Then $\ker(T)$ is a subspace of $V$, while $R(T)$ is a subspace of $W$.

Proof. Since $T(0) = 0$, it follows that $0 \in \ker(T) \subseteq V$ and $0 \in R(T) \subseteq W$. For any $u, v \in \ker(T)$, we have
$$T(u + v) = T(u) + T(v) = 0 + 0 = 0,$$
so that $u + v \in \ker(T)$.
Suppose further that $c \in \mathbb{R}$. Then
\[T(cu) = cT(u) = c0 = 0,\]
so that $cu \in \ker(T)$. Hence $\ker(T)$ is a subspace of $V$. Suppose next that $w, z \in R(T)$. Then there exist $u, v \in V$ such that $T(u) = w$ and $T(v) = z$. Hence
\[T(u + v) = T(u) + T(v) = w + z,\]
so that $w + z \in R(T)$. Suppose further that $c \in \mathbb{R}$. Then
\[T(cu) = cT(u) = cw,\]
so that $cw \in R(T)$. Hence $R(T)$ is a subspace of $W$. $\square$

To complete this section, we prove the following generalization of the Rank-nullity theorem.

PROPOSITION 8L. Suppose that $T : V \to W$ is a linear transformation from an $n$-dimensional real vector space $V$ into a real vector space $W$. Then
\[\dim \ker(T) + \dim R(T) = n.\]

Proof. Suppose first of all that $\dim \ker(T) = n$. Then $\ker(T) = V$, and so $R(T) = \{0\}$, and the result follows immediately. Suppose next that $\dim \ker(T) = 0$, so that $\ker(T) = \{0\}$. If $\{v_1, \ldots, v_n\}$ is a basis of $V$, then it follows that $T(v_1), \ldots, T(v_n)$ are linearly independent in $W$, for otherwise there exist $c_1, \ldots, c_n \in \mathbb{R}$, not all zero, such that
\[c_1 T(v_1) + \cdots + c_n T(v_n) = 0,\]
so that $T(c_1 v_1 + \cdots + c_n v_n) = 0$, a contradiction since $c_1 v_1 + \cdots + c_n v_n \neq 0$. On the other hand, elements of $R(T)$ are linear combinations of $T(v_1), \ldots, T(v_n)$. Hence $\dim R(T) = n$, and the result again follows immediately. We may therefore assume that $\dim \ker(T) = r$, where $1 \leq r < n$. Let $\{v_1, \ldots, v_r\}$ be a basis of $\ker(T)$. This basis can be extended to a basis $\{v_1, \ldots, v_r, v_{r+1}, \ldots, v_n\}$ of $V$. It suffices to show that
\[\{T(v_{r+1}), \ldots, T(v_n)\} \tag{4}\]
is a basis of $R(T)$. Suppose that $u \in V$. Then there exist $\beta_1, \ldots, \beta_n \in \mathbb{R}$ such that
\[u = \beta_1 v_1 + \cdots + \beta_r v_r + \beta_{r+1} v_{r+1} + \cdots + \beta_n v_n,\]
so that
\begin{align*}
T(u) &= \beta_1 T(v_1) + \cdots + \beta_r T(v_r) + \beta_{r+1} T(v_{r+1}) + \cdots + \beta_n T(v_n) \\
&= \beta_{r+1} T(v_{r+1}) + \cdots + \beta_n T(v_n).
\end{align*}
It follows that (4) spans $R(T)$. It remains to prove that its elements are linearly independent.
Suppose that $c_{r+1}, \ldots, c_n \in \mathbb{R}$ and
\[c_{r+1} T(v_{r+1}) + \cdots + c_n T(v_n) = 0. \tag{5}\]
We need to show that
\[c_{r+1} = \cdots = c_n = 0. \tag{6}\]
By linearity, it follows from (5) that $T(c_{r+1} v_{r+1} + \cdots + c_n v_n) = 0$, so that
\[c_{r+1} v_{r+1} + \cdots + c_n v_n \in \ker(T).\]
Hence there exist $c_1, \ldots, c_r \in \mathbb{R}$ such that
\[c_{r+1} v_{r+1} + \cdots + c_n v_n = c_1 v_1 + \cdots + c_r v_r,\]
so that
\[c_1 v_1 + \cdots + c_r v_r - c_{r+1} v_{r+1} - \cdots - c_n v_n = 0.\]
Since $\{v_1, \ldots, v_n\}$ is a basis of $V$, it follows that $c_1 = \cdots = c_r = c_{r+1} = \cdots = c_n = 0$, so that (6) holds. This completes the proof. $\square$

Remark. We sometimes say that $\dim R(T)$ and $\dim \ker(T)$ are respectively the rank and the nullity of the linear transformation $T$.

8.7. Inverse Linear Transformations

In this section, we generalize some of the ideas first discussed in Section 8.3.

Definition. A linear transformation $T : V \to W$ from a real vector space $V$ into a real vector space $W$ is said to be one-to-one if for every $u', u'' \in V$, we have $u' = u''$ whenever $T(u') = T(u'')$.

The result below follows immediately from our definition.

PROPOSITION 8M. Suppose that $T : V \to W$ is a linear transformation from a real vector space $V$ into a real vector space $W$. Then $T$ is one-to-one if and only if $\ker(T) = \{0\}$.

Proof. ($\Rightarrow$) Clearly $0 \in \ker(T)$. Suppose that $\ker(T) \neq \{0\}$. Then there exists a non-zero $v \in \ker(T)$. It follows that $T(v) = T(0)$, and so $T$ is not one-to-one.

($\Leftarrow$) Suppose that $\ker(T) = \{0\}$. Given any $u', u'' \in V$, we have
\[T(u') - T(u'') = T(u' - u'') = 0\]
if and only if $u' - u'' = 0$; in other words, if and only if $u' = u''$. $\square$

We have the following generalization of Proposition 8E.

PROPOSITION 8N. Suppose that $T : V \to V$ is a linear operator on a finite-dimensional real vector space $V$. Then the following statements are equivalent:
(a) The linear operator $T$ is one-to-one.
(b) We have $\ker(T) = \{0\}$.
(c) The range of $T$ is $V$; in other words, $R(T) = V$.

Proof.
The equivalence of (a) and (b) is established by Proposition 8M. The equivalence of (b) and (c) follows from Proposition 8L. $\square$

Suppose that $T : V \to W$ is a one-to-one linear transformation from a real vector space $V$ into a real vector space $W$. Then for every $w \in R(T)$, there exists exactly one $u \in V$ such that $T(u) = w$. We can therefore define a transformation $T^{-1} : R(T) \to V$ by writing $T^{-1}(w) = u$, where $u \in V$ is the unique vector satisfying $T(u) = w$.

PROPOSITION 8P. Suppose that $T : V \to W$ is a one-to-one linear transformation from a real vector space $V$ into a real vector space $W$. Then $T^{-1} : R(T) \to V$ is a linear transformation.

Proof. Suppose that $w, z \in R(T)$. Then there exist $u, v \in V$ such that $T^{-1}(w) = u$ and $T^{-1}(z) = v$. It follows that $T(u) = w$ and $T(v) = z$, so that $T(u + v) = T(u) + T(v) = w + z$, whence
\[T^{-1}(w + z) = u + v = T^{-1}(w) + T^{-1}(z).\]
Suppose further that $c \in \mathbb{R}$. Then $T(cu) = cw$, so that
\[T^{-1}(cw) = cu = cT^{-1}(w).\]
This completes the proof. $\square$

We also have the following result concerning compositions of linear transformations, which requires no further proof, in view of our knowledge concerning inverse functions.

PROPOSITION 8Q. Suppose that $V, W, U$ are real vector spaces. Suppose further that $T_1 : V \to W$ and $T_2 : W \to U$ are one-to-one linear transformations. Then
(a) the linear transformation $T_2 \circ T_1 : V \to U$ is one-to-one; and
(b) $(T_2 \circ T_1)^{-1} = T_1^{-1} \circ T_2^{-1}$.

8.8. Matrices of General Linear Transformations

Suppose that $T : V \to W$ is a linear transformation from a real vector space $V$ to a real vector space $W$. Suppose further that the vector spaces $V$ and $W$ are finite dimensional, with $\dim V = n$ and $\dim W = m$. We shall show that if we make use of a basis $\mathcal{B}$ of $V$ and a basis $\mathcal{C}$ of $W$, then it is possible to describe $T$ indirectly in terms of some matrix $A$. The main idea is to make use of coordinate matrices relative to the bases $\mathcal{B}$ and $\mathcal{C}$.

Let us recall some discussion in Section 8.5. Suppose that $\mathcal{B} = \{v_1, \ldots, v_n\}$ is a basis of $V$.
Then every vector $v \in V$ can be written uniquely as a linear combination
\[v = \beta_1 v_1 + \cdots + \beta_n v_n, \qquad \text{where } \beta_1, \ldots, \beta_n \in \mathbb{R}. \tag{7}\]
The matrix
\[[v]_{\mathcal{B}} = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_n \end{pmatrix} \tag{8}\]
is the coordinate matrix of $v$ relative to the basis $\mathcal{B}$.

Consider now a transformation $\phi : V \to \mathbb{R}^n$, where $\phi(v) = [v]_{\mathcal{B}}$ for every $v \in V$. The proof of the following result is straightforward.

PROPOSITION 8R. Suppose that the real vector space $V$ has basis $\mathcal{B} = \{v_1, \ldots, v_n\}$. Then the transformation $\phi : V \to \mathbb{R}^n$, where $\phi(v) = [v]_{\mathcal{B}}$ satisfies (7) and (8) for every $v \in V$, is a one-to-one linear transformation, with range $R(\phi) = \mathbb{R}^n$. Furthermore, the inverse linear transformation $\phi^{-1} : \mathbb{R}^n \to V$ is also one-to-one, with range $R(\phi^{-1}) = V$.
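The coordinate map of Proposition 8R can be written down explicitly in a simple case. The sketch below is ours, not the text's: it takes the illustrative basis $\{(1, 1), (1, -1)\}$ of $\mathbb{R}^2$, solves $v = \beta_1(1, 1) + \beta_2(1, -1)$ in closed form, and checks that the map is invertible, as the proposition guarantees.

```python
from fractions import Fraction as F

def phi(v):
    """Coordinate map relative to the basis B = {(1, 1), (1, -1)} of R^2.

    Solving v = b1*(1, 1) + b2*(1, -1) gives b1 = (v1 + v2)/2 and
    b2 = (v1 - v2)/2, computed exactly with rationals.
    """
    v1, v2 = F(v[0]), F(v[1])
    return ((v1 + v2) / 2, (v1 - v2) / 2)

def phi_inv(b):
    """Inverse coordinate map: rebuild v from its coordinates."""
    b1, b2 = b
    return (b1 + b2, b1 - b2)

v = (3, -7)
print(phi(v))           # the coordinate matrix [v]_B
print(phi_inv(phi(v)))  # recovers v, so phi is one-to-one onto R^2
```

Linearity of $\phi$ is immediate from the closed form, since both components are linear in $v_1$ and $v_2$.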

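As a closing check, for a euclidean linear transformation Proposition 8L specializes to the matrix Rank-nullity theorem recalled at the start of Section 8.6, and this can be spot-checked numerically. Below is a minimal sketch, using exact Gaussian elimination over the rationals; the matrix A is our own illustrative choice, not taken from the text.

```python
from fractions import Fraction as F

def rank(M):
    """Rank of a matrix, via Gaussian elimination with exact rationals."""
    M = [[F(x) for x in row] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        # Find a pivot at or below row r in column c.
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        # Eliminate column c from every other row.
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 3],
     [2, 4, 6],
     [1, 0, 1]]        # illustrative standard matrix of some T : R^3 -> R^3
n = 3
r_A = rank(A)          # dim R(T), the rank
nullity = n - r_A      # dim ker(T), the nullity
print(r_A, nullity)    # the two dimensions sum to n = 3
```

Here the second row of A is twice the first, so the column space is 2-dimensional, the nullspace is 1-dimensional, and the two dimensions sum to the number of columns, in agreement with Proposition 8L.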