Lecture Notes for Linear Algebra

James S. Cook
Liberty University, Department of Mathematics and Physics
Fall 2009

introduction and motivations for these notes

These notes should cover what is said in lecture at a minimum. However, I'm still learning, so I may have a new thought or two in the middle of the semester, especially if there are interesting questions. We will have a number of short quizzes, and the solutions and/or commentary about those is not in these notes for the most part. It is thus important that you never miss class.

The text is good, but it does not quite fit the lecture plan this semester. I probably would have changed it if I had realized earlier, but alas it is too late. Fortunately, these notes are for all intents and purposes a text for the course. But these notes lack exercises, hence the required text. The text does have a good assortment of exercises, but please bear in mind that the ordering of the exercises assigned matches my lecture plan for the course and not the text's. Finally, there are a couple of topics missing from the text which we will cover, and I will write up some standard homework problems for those sections.

As usual, there are many things in lecture which you will not really understand until later. I will regularly give quizzes on the material we covered in previous lectures. I expect you to keep up with the course. Procrastinating until the test to study will not work in this course. The difficulty and predictability of upcoming quizzes will be a function of how I perceive the class is following along. Doing the homework is doing the course. I cannot overemphasize the importance of thinking through the homework.

I would be happy if you left this course with a working knowledge of:

- how to solve a system of linear equations
- Gaussian elimination and how to interpret rref(A)
- concrete and abstract matrix calculations
- determinants
- real vector spaces, both abstract and concrete
- subspaces of a vector space
- how to test for linear independence
- how to prove a set spans a space
- coordinates and bases
- column, row and null spaces for a matrix
- basis of an abstract vector space
- linear transformations
- matrix of a linear transformation
- change of basis on a vector space
- geometry of Euclidean space
- orthogonal bases and the Gram-Schmidt algorithm
- least squares fitting of experimental data
- best fit trigonometric polynomials (Fourier analysis)
- eigenvalues and eigenvectors
- diagonalization
- how to solve a system of linear differential equations
- principal axis theorems for conic sections and quadric surfaces

I hope that I have struck a fair balance between pure theory and application. The treatment of systems of differential equations is somewhat unusual for a first course in linear algebra. No apologies though; I love the example because it shows nontrivial applications of a large swath of the theory in the course while at the same time treating problems that are simply impossible to solve without theory. Generally speaking, I tried to spread out the applications so that if you hate the theoretical part then there is still something fun in every chapter. If you don't like the applications then you'll just have to deal (as my little sister says).

Before we begin, I should warn you that I assume quite a few things from the reader. These notes are intended for someone who has already grappled with the problem of constructing proofs. I assume you know the difference between $\Rightarrow$ and $\Leftrightarrow$. I assume the phrase "iff" is known to you.
I assume you are ready and willing to do a proof by induction, strong or weak. I assume you know what $\mathbb{R}$, $\mathbb{C}$, $\mathbb{Q}$, $\mathbb{N}$ and $\mathbb{Z}$ denote. I assume you know what a subset of a set is. I assume you know how to prove two sets are equal. I assume you are familiar with basic set operations such as union and intersection (although we don't use those much). More importantly, I assume you have started to appreciate that mathematics is more than just calculations. Calculations without context, without theory, are doomed to failure. At a minimum, theory and proper mathematics allow you to communicate analytical concepts to other like-educated individuals.

Some of the most seemingly basic objects in mathematics are insidiously complex. We've been taught they're simple since our childhood, but as adults, mathematical adults, we find the actual definitions of such objects as $\mathbb{R}$ or $\mathbb{C}$ are rather involved. I will not attempt to provide foundational arguments to build numbers from basic set theory. I believe it is possible, and I think it is well-thought-out mathematics, but we take the existence of the real numbers as an axiom for these notes. We assume that $\mathbb{R}$ exists and that the real numbers possess all their usual properties. In fact, I assume $\mathbb{R}$, $\mathbb{C}$, $\mathbb{Q}$, $\mathbb{N}$ and $\mathbb{Z}$ all exist complete with their standard properties. In short, I assume we have numbers to work with. We leave the rigorization of numbers to a different course.

Contents

1 Gauss-Jordan elimination
  1.1 systems of linear equations
  1.2 Gauss-Jordan algorithm
  1.3 classification of solutions
  1.4 applications to curve fitting and circuits
  1.5 conclusions

2 matrix arithmetic
  2.1 basic terminology and notation
  2.2 addition and multiplication by scalars
  2.3 matrix multiplication
  2.4 elementary matrices
  2.5 invertible matrices
  2.6 systems of linear equations revisited
    2.6.1 concatenation for solving many systems at once
  2.7 how to calculate the inverse of a matrix
  2.8 all your base are belong to us ($e_i$ and $E_{ij}$ that is)
  2.9 matrices with names
    2.9.1 symmetric and antisymmetric matrices
    2.9.2 exponent laws for matrices
    2.9.3 diagonal and triangular matrices
  2.10 applications
  2.11 conclusions

3 determinants
  3.1 determinants and geometry
  3.2 cofactor expansion for the determinant
  3.3 adjoint matrix
  3.4 Cramer's Rule
  3.5 properties of determinants
  3.6 examples of determinants
  3.7 applications
  3.8 conclusions

4 linear algebra
  4.1 definition and examples
  4.2 subspaces
  4.3 spanning sets and subspaces
  4.4 linear independence
  4.5 bases and dimension
    4.5.1 how to calculate a basis for a span of row or column vectors
    4.5.2 calculating basis of a solution set
    4.5.3 what is dimension?
  4.6 general theory of linear systems
  4.7 applications
  4.8 conclusions

5 linear transformations
  5.1 examples of linear transformations
  5.2 matrix of a linear operator
  5.3 composition of linear transformations
  5.4 isomorphism of vector spaces
  5.5 change we can believe in (no really, no joke)
    5.5.1 change of basis for linear transformations on $\mathbb{R}^n$
    5.5.2 similar matrices and invariant quantities for abstract linear operators
  5.6 applications
  5.7 conclusions

6 linear geometry
  6.1 Euclidean geometry of $\mathbb{R}^n$
  6.2 orthogonality in $\mathbb{R}^n$
  6.3 orthogonal complements and projections
  6.4 the closest vector problem
  6.5 inconsistent equations
  6.6 least squares analysis
    6.6.1 linear least squares problem
    6.6.2 nonlinear least squares
  6.7 orthogonal transformations
  6.8 inner products
    6.8.1 examples of inner-products
    6.8.2 Fourier analysis

7 eigenvalues and eigenvectors
  7.1 geometry of linear transformations
  7.2 definition and properties of eigenvalues
    7.2.1 eigenwarning
  7.3 eigenvalue properties
  7.4 real eigenvector examples
  7.5 complex eigenvector examples
  7.6 linear independence of real eigenvectors
  7.7 linear independence of complex eigenvectors
  7.8 diagonalization
  7.9 calculus of matrices
  7.10 introduction to systems of linear differential equations
  7.11 the matrix exponential
    7.11.1 analysis for matrices
    7.11.2 formulas for the matrix exponential
  7.12 solutions for systems of DEqns with real eigenvalues
  7.13 solutions for systems of DEqns with complex eigenvalues
  7.14 geometry and difference equations revisited
    7.14.1 difference equations vs. differential equations
  7.15 conic sections and quadric surfaces
    7.15.1 quadratic forms and their matrix
  7.16 inertia tensor, an application of quadratic forms

I sections for future courses or bonus work

8 Vector Spaces over C
  8.1 complex matrix calculations
  8.2 inner products
  8.3 Hermitian matrices

9 Additional Topics
  9.1 minimal polynomial
  9.2 quotient spaces
  9.3 tensor products and blocks
  9.4 Cayley-Hamilton Theorem
  9.5 singular value decomposition
  9.6 spectral decomposition
  9.7 QR-factorization
10 Multilinear Algebra
  10.1 dual space
  10.2 double dual space
  10.3 multilinearity
  10.4 tensor product
  10.5 forms
  10.6 determinants done right

11 Infinite Dimensional Linear Algebra
  11.1 norms in infinite dimensions
  11.2 examples
  11.3 differentiation in finite dimensions
  11.4 differentiation in infinite dimensions

12 appendix on finite sums

Chapter 1 Gauss-Jordan elimination

Gauss-Jordan elimination is an optimal method for solving a system of linear equations. Logically it may be equivalent to methods you are already familiar with, but the matrix notation is by far the most efficient method. This is important since throughout this course we will be faced with the problem of solving linear equations. Additionally, the Gauss-Jordan algorithm produces the reduced row echelon form (rref) of the matrix. Given a particular matrix, the rref is unique. This is of particular use in theoretical applications.

1.1 systems of linear equations

Let me begin with a few examples before I state the general definition.

Example 1.1.1. Consider the following system of 2 equations and 2 unknowns,
$$x + y = 2, \qquad x - y = 0.$$
Adding the equations reveals $2x = 2$, hence $x = 1$. Then substitute that into either equation to deduce $y = 1$. Hence the solution $(1, 1)$ is unique.

Example 1.1.2. Consider the following system of 2 equations and 2 unknowns,
$$x + y = 2, \qquad 3x + 3y = 6.$$
We can multiply the second equation by $1/3$ to see that it is equivalent to $x + y = 2$; thus our two equations are in fact the same equation. There are infinitely many solutions of the form $(x, y)$ where $x + y = 2$. In other words, the solutions are $(x, 2 - x)$ for all $x \in \mathbb{R}$.

Both of the examples thus far were consistent.

Example 1.1.3. Consider the following system of 2 equations and 2 unknowns,
$$x + y = 2, \qquad x + y = 3.$$
These equations are inconsistent. Notice that subtracting the first equation from the second yields $0 = 1$. This system has no solutions; it is inconsistent.

It is remarkable that these three simple examples reveal the general structure of solutions to linear systems. Either we get a unique solution, infinitely many solutions, or no solution at all. For our examples above, these cases correspond to the possible graphs for a pair of lines in the plane. A pair of lines may intersect at a point (unique solution), be the same line (infinitely many solutions) or be parallel (inconsistent).

Remark 1.1.4. It is understood in this course that $i, j, k, l, m, n, p, q, r, s \in \mathbb{N}$. I will not belabor this point. Please ask if in doubt.
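As an aside, these three behaviors are easy to reproduce with software. Here is a minimal sketch in Python using the sympy library (my choice of tool for these asides; it is not required for the course). The linsolve function accepts the augmented matrix of a system:

    from sympy import Matrix, linsolve, symbols

    x, y = symbols('x y')

    # Example 1.1.1: a unique solution
    print(linsolve(Matrix([[1, 1, 2], [1, -1, 0]]), x, y))   # {(1, 1)}

    # Example 1.1.2: infinitely many solutions, parametrized by y
    print(linsolve(Matrix([[1, 1, 2], [3, 3, 6]]), x, y))    # {(2 - y, y)}

    # Example 1.1.3: inconsistent, the solution set is empty
    print(linsolve(Matrix([[1, 1, 2], [1, 1, 3]]), x, y))    # EmptySet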
Definition 1.1.5. (system of m linear equations in n unknowns)
Let $x_1, x_2, \dots, x_n$ be $n$ variables and suppose $b_i, A_{ij} \in \mathbb{R}$ for $1 \le i \le m$ and $1 \le j \le n$; then
$$\begin{array}{ccc} A_{11}x_1 + A_{12}x_2 + \cdots + A_{1n}x_n &=& b_1 \\ A_{21}x_1 + A_{22}x_2 + \cdots + A_{2n}x_n &=& b_2 \\ \vdots & & \vdots \\ A_{m1}x_1 + A_{m2}x_2 + \cdots + A_{mn}x_n &=& b_m \end{array}$$
is called a system of linear equations. If $b_i = 0$ for $1 \le i \le m$ then we say the system is homogeneous. The solution set is the set of all $(x_1, x_2, \dots, x_n) \in \mathbb{R}^n$ which satisfy all the equations in the system simultaneously.

Remark 1.1.6. We use variables $x_1, x_2, \dots, x_n$ mainly for general theoretical statements. In particular problems, and especially for applications, we tend to defer to the notation $x, y, z$ etc.

Definition 1.1.7. The augmented coefficient matrix is an array of numbers which provides an abbreviated notation for a system of linear equations. The system in Definition 1.1.5 is abbreviated by
$$\left[\begin{array}{cccc|c} A_{11} & A_{12} & \cdots & A_{1n} & b_1 \\ A_{21} & A_{22} & \cdots & A_{2n} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ A_{m1} & A_{m2} & \cdots & A_{mn} & b_m \end{array}\right].$$

The vertical bar is optional; I include it to draw attention to the distinction between the matrix of coefficients $A_{ij}$ and the nonhomogeneous terms $b_i$. Let's revisit my three simple examples in this new notation. I illustrate the Gauss-Jordan method for each.

Example 1.1.8. The system $x + y = 2$ and $x - y = 0$ has augmented coefficient matrix:
$$\left[\begin{array}{cc|c} 1 & 1 & 2 \\ 1 & -1 & 0 \end{array}\right] \xrightarrow{r_2 - r_1 \to r_2} \left[\begin{array}{cc|c} 1 & 1 & 2 \\ 0 & -2 & -2 \end{array}\right] \xrightarrow{\frac{-1}{2} r_2 \to r_2} \left[\begin{array}{cc|c} 1 & 1 & 2 \\ 0 & 1 & 1 \end{array}\right] \xrightarrow{r_1 - r_2 \to r_1} \left[\begin{array}{cc|c} 1 & 0 & 1 \\ 0 & 1 & 1 \end{array}\right]$$
The last augmented matrix represents the equations $x = 1$ and $y = 1$. Rather than adding and subtracting equations, we added and subtracted rows in the matrix. Incidentally, the last step is called the backward pass whereas the first couple of steps are called the forward pass. Gauss is credited with figuring out the forward pass, then Jordan added the backward pass. Calculators can accomplish these via the commands ref (Gauss' row echelon form) and rref (Jordan's reduced row echelon form). In particular,
$$\text{ref}\left[\begin{array}{cc|c} 1 & 1 & 2 \\ 1 & -1 & 0 \end{array}\right] = \left[\begin{array}{cc|c} 1 & 1 & 2 \\ 0 & 1 & 1 \end{array}\right] \qquad \text{rref}\left[\begin{array}{cc|c} 1 & 1 & 2 \\ 1 & -1 & 0 \end{array}\right] = \left[\begin{array}{cc|c} 1 & 0 & 1 \\ 0 & 1 & 1 \end{array}\right]$$

Example 1.1.9. The system $x + y = 2$ and $3x + 3y = 6$ has augmented coefficient matrix:
$$\left[\begin{array}{cc|c} 1 & 1 & 2 \\ 3 & 3 & 6 \end{array}\right] \xrightarrow{r_2 - 3r_1 \to r_2} \left[\begin{array}{cc|c} 1 & 1 & 2 \\ 0 & 0 & 0 \end{array}\right]$$
The nonzero row in the last augmented matrix represents the equation $x + y = 2$. In this case we cannot make a backwards pass, so the ref and rref are the same.

Example 1.1.10. The system $x + y = 3$ and $x + y = 2$ has augmented coefficient matrix:
$$\left[\begin{array}{cc|c} 1 & 1 & 3 \\ 1 & 1 & 2 \end{array}\right] \xrightarrow{r_1 - r_2 \to r_2} \left[\begin{array}{cc|c} 1 & 1 & 3 \\ 0 & 0 & 1 \end{array}\right]$$
The last row indicates that $0x + 0y = 1$, which means that there is no solution since $0 \neq 1$. Generally, when the bottom row of $\text{rref}[A|b]$ is all zeros except for a 1 in the far right column, the system $Ax = b$ is inconsistent because there is no solution to that equation.
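The calculator commands ref and rref mentioned above have counterparts in most computer algebra systems. For instance, sympy's rref method (continuing the earlier aside; none of this is from the required text) returns the reduced matrix together with the indices of the pivot columns, and a pivot in the rightmost column is exactly the signal of inconsistency seen in Example 1.1.10:

    from sympy import Matrix

    # Example 1.1.8: rref recovers x = 1 and y = 1
    R, pivots = Matrix([[1, 1, 2], [1, -1, 0]]).rref()
    print(R, pivots)    # Matrix([[1, 0, 1], [0, 1, 1]]), pivot columns (0, 1)

    # Example 1.1.10: a pivot in the augmented column means 0x + 0y = 1
    R, pivots = Matrix([[1, 1, 3], [1, 1, 2]]).rref()
    print(R, pivots)    # Matrix([[1, 1, 0], [0, 0, 1]]), pivot columns (0, 2)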
1.2 Gauss-Jordan algorithm

To begin we need to identify three basic operations we do when solving systems of equations. I'll define them for a system of 3 equations and 3 unknowns, but it should be obvious this generalizes to $m$ equations and $n$ unknowns without much thought. The following operations are called Elementary Row Operations.

(1.) scaling row 1 by a nonzero constant $c$:
$$\left[\begin{array}{ccc|c} A_{11} & A_{12} & A_{13} & b_1 \\ A_{21} & A_{22} & A_{23} & b_2 \\ A_{31} & A_{32} & A_{33} & b_3 \end{array}\right] \xrightarrow{cr_1 \to r_1} \left[\begin{array}{ccc|c} cA_{11} & cA_{12} & cA_{13} & cb_1 \\ A_{21} & A_{22} & A_{23} & b_2 \\ A_{31} & A_{32} & A_{33} & b_3 \end{array}\right]$$

(2.) replace row 1 with the sum of row 1 and row 2:
$$\left[\begin{array}{ccc|c} A_{11} & A_{12} & A_{13} & b_1 \\ A_{21} & A_{22} & A_{23} & b_2 \\ A_{31} & A_{32} & A_{33} & b_3 \end{array}\right] \xrightarrow{r_1 + r_2 \to r_1} \left[\begin{array}{ccc|c} A_{11}+A_{21} & A_{12}+A_{22} & A_{13}+A_{23} & b_1+b_2 \\ A_{21} & A_{22} & A_{23} & b_2 \\ A_{31} & A_{32} & A_{33} & b_3 \end{array}\right]$$

(3.) swap rows 1 and 2:
$$\left[\begin{array}{ccc|c} A_{11} & A_{12} & A_{13} & b_1 \\ A_{21} & A_{22} & A_{23} & b_2 \\ A_{31} & A_{32} & A_{33} & b_3 \end{array}\right] \xrightarrow{r_1 \leftrightarrow r_2} \left[\begin{array}{ccc|c} A_{21} & A_{22} & A_{23} & b_2 \\ A_{11} & A_{12} & A_{13} & b_1 \\ A_{31} & A_{32} & A_{33} & b_3 \end{array}\right]$$

Each of the operations above corresponds to an allowed operation on a system of linear equations. When we make these operations we will not change the solution set. Notice the notation tells us what we did and also where the result goes. I do expect you to use the same notation. I also expect you can figure out what is meant by $cr_2 \to r_2$ or $r_2 - 3r_1 \to r_2$. We are only allowed to make a finite number of the operations (1.), (2.) and (3.).

The Gauss-Jordan algorithm tells us in which order to make these operations in order to reduce the matrix to a particularly simple format called the "reduced row echelon form" (I abbreviate this rref most places). The following definition is borrowed from the text Elementary Linear Algebra: A Matrix Approach, 2nd ed. by Spence, Insel and Friedberg; however, you can probably find nearly the same algorithm in dozens of other texts.

Definition 1.2.1. Gauss-Jordan Algorithm.
Given an $m$ by $n$ matrix $A$, the following sequence of steps is called the Gauss-Jordan algorithm or Gaussian elimination. I define terms such as pivot column and pivot position as they arise in the algorithm below.

Step 1: Determine the leftmost nonzero column. This is a pivot column and the topmost position in this column is a pivot position.

Step 2: Perform a row swap to bring a nonzero entry of the pivot column below the pivot row to the top position in the pivot column (in the first step there are no rows above the pivot position; however, in future iterations there may be rows above the pivot position, see Step 4).

Step 3: Add multiples of the pivot row to create zeros below the pivot position. This is called "clearing out the entries below the pivot position".

Step 4: If there is a nonzero row below the pivot row from (3.), then find the next pivot position by looking for the next nonzero column to the right of the previous pivot column. Then perform Steps 1-3 on the new pivot column. When no more nonzero rows below the pivot row are found, go on to Step 5.

Step 5: The leftmost nonzero entry in each nonzero row is called the leading entry. Scale the bottommost nonzero row to make the leading entry 1 and use row additions to clear out any remaining nonzero entries above the leading entries.

Step 6: If Step 5 was performed on the top row then stop; otherwise apply Step 5 to the next row up the matrix.

Steps (1.)-(4.) are called the forward pass. A matrix produced by a forward pass is called a row echelon form of the matrix and it is denoted ref(A). Steps (5.) and (6.) are called the backwards pass. The matrix produced by completing Steps (1.)-(6.) is called the reduced row echelon form of $A$ and it is denoted rref(A). The ref(A) is not unique because there may be multiple choices for how Step 2 is executed. On the other hand, it turns out that rref(A) is unique. The proof of uniqueness can be found in Appendix E of your text. The backwards pass takes the ambiguity out of the algorithm.
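For the curious, Definition 1.2.1 translates almost line for line into a short program. The following is my own sketch in Python (the helper name gauss_jordan is just for illustration), using exact fractions so that no rounding creeps into the arithmetic; a serious numerical implementation would choose pivots more carefully than we do here. As a check, it reproduces the rref computed by hand in Example 1.2.2 below.

    from fractions import Fraction

    def gauss_jordan(rows):
        """Return rref of a matrix, following Steps 1-6 of Definition 1.2.1."""
        A = [[Fraction(entry) for entry in row] for row in rows]
        m, n = len(A), len(A[0])
        pivots = []
        r = 0
        # forward pass (Steps 1-4): locate pivot columns, clear below each pivot
        for c in range(n):
            nonzero = next((i for i in range(r, m) if A[i][c] != 0), None)
            if nonzero is None:
                continue                          # no pivot in this column
            A[r], A[nonzero] = A[nonzero], A[r]   # Step 2: row swap
            for i in range(r + 1, m):             # Step 3: clear below pivot
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
            pivots.append((r, c))
            r += 1
        # backward pass (Steps 5-6): scale leading entries to 1, clear above
        for r, c in reversed(pivots):
            A[r] = [a / A[r][c] for a in A[r]]
            for i in range(r):
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        return A

    # the matrix of Example 1.2.2:
    print(gauss_jordan([[1, 2, -3, 1], [2, 4, 0, 7], [-1, 3, 2, 0]]))
    # leading 3x3 block is the identity; the last column is (83/30, 11/30, 5/6)

Notice the forward loop leaves the leading entries unscaled, just as our ref does; all the scaling happens in the backward pass.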
Notice the forward pass goes down the matrix while the backwards pass goes up the matrix.

Example 1.2.2. Given $A = \begin{bmatrix} 1 & 2 & -3 & 1 \\ 2 & 4 & 0 & 7 \\ -1 & 3 & 2 & 0 \end{bmatrix}$, calculate rref(A).
$$A \xrightarrow{r_2 - 2r_1 \to r_2} \begin{bmatrix} 1 & 2 & -3 & 1 \\ 0 & 0 & 6 & 5 \\ -1 & 3 & 2 & 0 \end{bmatrix} \xrightarrow{r_3 + r_1 \to r_3} \begin{bmatrix} 1 & 2 & -3 & 1 \\ 0 & 0 & 6 & 5 \\ 0 & 5 & -1 & 1 \end{bmatrix} \xrightarrow{r_2 \leftrightarrow r_3} \begin{bmatrix} 1 & 2 & -3 & 1 \\ 0 & 5 & -1 & 1 \\ 0 & 0 & 6 & 5 \end{bmatrix} = \text{ref}(A)$$
That completes the forward pass. We begin the backwards pass,
$$\text{ref}(A) \xrightarrow{\frac{1}{6} r_3 \to r_3} \begin{bmatrix} 1 & 2 & -3 & 1 \\ 0 & 5 & -1 & 1 \\ 0 & 0 & 1 & 5/6 \end{bmatrix} \xrightarrow{r_2 + r_3 \to r_2} \begin{bmatrix} 1 & 2 & -3 & 1 \\ 0 & 5 & 0 & 11/6 \\ 0 & 0 & 1 & 5/6 \end{bmatrix} \xrightarrow{r_1 + 3r_3 \to r_1} \begin{bmatrix} 1 & 2 & 0 & 21/6 \\ 0 & 5 & 0 & 11/6 \\ 0 & 0 & 1 & 5/6 \end{bmatrix}$$
$$\xrightarrow{\frac{1}{5} r_2 \to r_2} \begin{bmatrix} 1 & 2 & 0 & 21/6 \\ 0 & 1 & 0 & 11/30 \\ 0 & 0 & 1 & 5/6 \end{bmatrix} \xrightarrow{r_1 - 2r_2 \to r_1} \begin{bmatrix} 1 & 0 & 0 & 83/30 \\ 0 & 1 & 0 & 11/30 \\ 0 & 0 & 1 & 5/6 \end{bmatrix} = \text{rref}(A)$$

Example 1.2.3. Given $A = \begin{bmatrix} 1 & -1 & 1 \\ 3 & -3 & 0 \\ 2 & -2 & -3 \end{bmatrix}$, calculate rref(A).
$$A \xrightarrow{r_2 - 3r_1 \to r_2} \begin{bmatrix} 1 & -1 & 1 \\ 0 & 0 & -3 \\ 2 & -2 & -3 \end{bmatrix} \xrightarrow{r_3 - 2r_1 \to r_3} \begin{bmatrix} 1 & -1 & 1 \\ 0 & 0 & -3 \\ 0 & 0 & -5 \end{bmatrix} \xrightarrow[\;-5r_2 \to r_2\;]{-3r_3 \to r_3} \begin{bmatrix} 1 & -1 & 1 \\ 0 & 0 & 15 \\ 0 & 0 & 15 \end{bmatrix}$$
$$\xrightarrow[\;\frac{1}{15} r_2 \to r_2\;]{r_3 - r_2 \to r_3} \begin{bmatrix} 1 & -1 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \xrightarrow{r_1 - r_2 \to r_1} \begin{bmatrix} 1 & -1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} = \text{rref}(A)$$
Note it is customary to read multiple row operations from top to bottom if more than one is listed between two of the matrices. The multiple arrow notation should be used with caution as it has great potential to confuse. Also, you might notice that I did not strictly speaking follow Gauss-Jordan in the operations $-3r_3 \to r_3$ and $-5r_2 \to r_2$. It is sometimes convenient to modify the algorithm slightly in order to avoid fractions.

Example 1.2.4. Easy examples are sometimes disquieting; let $r \in \mathbb{R}$,
$$v = \begin{bmatrix} 2 & 4 & 2r \end{bmatrix} \xrightarrow{\frac{1}{2} r_1 \to r_1} \begin{bmatrix} 1 & 2 & r \end{bmatrix} = \text{rref}(v)$$

Example 1.2.5. Here's another next-to-useless example,
$$v = \begin{bmatrix} 0 \\ 1 \\ 3 \end{bmatrix} \xrightarrow{r_1 \leftrightarrow r_2} \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix} \xrightarrow{r_3 - 3r_1 \to r_3} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \text{rref}(v)$$

Example 1.2.6. Find the rref of the matrix $A$ given below:
$$A = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & 0 & 1 \\ -1 & 0 & 1 & 1 & 1 \end{bmatrix} \xrightarrow[\;r_3 + r_1 \to r_3\;]{r_2 - r_1 \to r_2} \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 0 & -2 & 0 & -1 & 0 \\ 0 & 1 & 2 & 2 & 2 \end{bmatrix} \xrightarrow{r_2 \leftrightarrow r_3} \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 2 & 2 \\ 0 & -2 & 0 & -1 & 0 \end{bmatrix}$$
$$\xrightarrow{r_3 + 2r_2 \to r_3} \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 2 & 2 \\ 0 & 0 & 4 & 3 & 4 \end{bmatrix} \xrightarrow[\;2r_2 \to r_2\;]{4r_1 \to r_1} \begin{bmatrix} 4 & 4 & 4 & 4 & 4 \\ 0 & 2 & 4 & 4 & 4 \\ 0 & 0 & 4 & 3 & 4 \end{bmatrix} \xrightarrow[\;r_2 - r_3 \to r_2\;]{r_1 - r_3 \to r_1} \begin{bmatrix} 4 & 4 & 0 & 1 & 0 \\ 0 & 2 & 0 & 1 & 0 \\ 0 & 0 & 4 & 3 & 4 \end{bmatrix}$$
$$\xrightarrow{r_1 - 2r_2 \to r_1} \begin{bmatrix} 4 & 0 & 0 & -1 & 0 \\ 0 & 2 & 0 & 1 & 0 \\ 0 & 0 & 4 & 3 & 4 \end{bmatrix} \xrightarrow[\;\frac{1}{2} r_2 \to r_2, \; \frac{1}{4} r_3 \to r_3\;]{\frac{1}{4} r_1 \to r_1} \begin{bmatrix} 1 & 0 & 0 & -1/4 & 0 \\ 0 & 1 & 0 & 1/2 & 0 \\ 0 & 0 & 1 & 3/4 & 1 \end{bmatrix} = \text{rref}(A)$$

Example 1.2.7.
$$[A|I] = \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & 0 & 0 \\ 2 & 2 & 0 & 0 & 1 & 0 \\ 4 & 4 & 4 & 0 & 0 & 1 \end{array}\right] \xrightarrow[\;r_3 - 4r_1 \to r_3\;]{r_2 - 2r_1 \to r_2} \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 2 & 0 & -2 & 1 & 0 \\ 0 & 4 & 4 & -4 & 0 & 1 \end{array}\right]$$
$$\xrightarrow{r_3 - 2r_2 \to r_3} \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 2 & 0 & -2 & 1 & 0 \\ 0 & 0 & 4 & 0 & -2 & 1 \end{array}\right] \xrightarrow[\;\frac{1}{4} r_3 \to r_3\;]{\frac{1}{2} r_2 \to r_2} \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & -1 & 1/2 & 0 \\ 0 & 0 & 1 & 0 & -1/2 & 1/4 \end{array}\right] = \text{rref}[A|I]$$

Example 1.2.8.
$$A = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 3 & 1 \\ 3 & 2 & 0 & 0 \end{bmatrix} \xrightarrow{r_4 - 3r_1 \to r_4} \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 3 & 1 \\ 0 & 2 & -3 & 0 \end{bmatrix} \xrightarrow{r_4 - r_2 \to r_4} \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 3 & 1 \\ 0 & 0 & -3 & 0 \end{bmatrix} \xrightarrow{r_4 + r_3 \to r_4} \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 3 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
$$\xrightarrow{r_3 - r_4 \to r_3} \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \xrightarrow[\;\frac{1}{3} r_3 \to r_3, \; r_1 - r_3 \to r_1\;]{\frac{1}{2} r_2 \to r_2} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} = \text{rref}(A)$$
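Example 1.2.7 quietly previews a technique we meet again in chapter 2: row reducing $[A|I]$ produces $[I|A^{-1}]$ when $A$ is invertible. A quick cross-check of that example in sympy (my aside again, not from the required text):

    from sympy import Matrix, eye

    A = Matrix([[1, 0, 0], [2, 2, 0], [4, 4, 4]])
    M = A.row_join(eye(3))          # build the augmented matrix [A | I]
    R, pivots = M.rref()
    print(R[:, 3:])                 # Matrix([[1, 0, 0], [-1, 1/2, 0], [0, -1/2, 1/4]])
    print(R[:, 3:] == A.inv())      # True: the right block is the inverse of A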
Proposition 1.2.9.
If a particular column of a matrix is all zeros then it will be unchanged by Gaussian elimination. Additionally, if we know $\text{rref}(A) = B$, then $\text{rref}[A|0] = [B|0]$, where $0$ denotes one or more columns of zeros.

Proof: adding nonzero multiples of one row to another will result in adding zero to zero in the column. Likewise, if we multiply a row by a nonzero scalar then the zero column is unaffected. Finally, if we swap rows then this just interchanges two zeros. Gauss-Jordan elimination is just a finite sequence of these three basic row operations, thus the column of zeros will remain zero as claimed.

Example 1.2.10. Use Example 1.2.8 and Proposition 1.2.9 to calculate,
$$\text{rref}\begin{bmatrix} 1 & 0 & 1 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 3 & 1 & 0 \\ 3 & 2 & 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{bmatrix}$$
Similarly, use Example 1.2.5 and Proposition 1.2.9 to calculate:
$$\text{rref}\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

I hope these examples suffice. One last piece of advice: you should think of the Gauss-Jordan algorithm as a sort of road map. It's ok to take detours to avoid fractions and such, but the end goal should remain in sight. If you lose sight of that, it's easy to go in circles. Incidentally, I would strongly recommend you find a way to check your calculations with technology. Mathematica will do any matrix calculation we learn. The TI-85 and higher will do much of what we do, with a few exceptions here and there. There are even websites which will do row operations; I provide a link on the course website. All of this said, I would remind you that I expect you to be able to perform Gaussian elimination correctly and quickly on the tests and quizzes without the aid of a graphing calculator for the remainder of the course. The arithmetic matters. Unless I state otherwise, it is expected that you show the details of the Gauss-Jordan elimination in any system you solve in this course.

Theorem 1.2.11.
Let $A \in \mathbb{R}^{m \times n}$. If $R_1$ and $R_2$ are both Gauss-Jordan eliminations of $A$, then $R_1 = R_2$. In other words, the reduced row echelon form of a matrix of real numbers is unique.

Proof: see Appendix E in your text for details. This theorem is at the heart of most calculations we make in this course.

1.3 classification of solutions

Surprisingly, Examples 1.1.8, 1.1.9 and 1.1.10 illustrate all the possible types of solutions for a linear system. In this section I interpret the calculations of the last section as they correspond to solving systems of equations.

Example 1.3.1. Solve the following system of linear equations if possible,
$$x + 2y - 3z = 1, \qquad 2x + 4y = 7, \qquad -x + 3y + 2z = 0.$$
We solve by doing Gaussian elimination on the augmented coefficient matrix (see Example 1.2.2 for the details of the Gaussian elimination):
$$\text{rref}\left[\begin{array}{ccc|c} 1 & 2 & -3 & 1 \\ 2 & 4 & 0 & 7 \\ -1 & 3 & 2 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & 0 & 83/30 \\ 0 & 1 & 0 & 11/30 \\ 0 & 0 & 1 & 5/6 \end{array}\right] \quad \Rightarrow \quad x = 83/30, \;\; y = 11/30, \;\; z = 5/6.$$

Remark 1.3.2. The geometric interpretation of the last example is interesting. The equation of a plane with normal vector $(a, b, c)$ is $ax + by + cz = d$. Each of the equations in the system of the last example has a solution set which is in one-one correspondence with a particular plane in $\mathbb{R}^3$. The intersection of those three planes is the single point $(83/30, 11/30, 5/6)$.

Example 1.3.3. Solve the following system of linear equations if possible,
$$x - y = 1, \qquad 3x - 3y = 0, \qquad 2x - 2y = -3.$$
Gaussian elimination on the augmented coefficient matrix reveals (see Example 1.2.3 for details of the Gaussian elimination)
$$\text{rref}\left[\begin{array}{cc|c} 1 & -1 & 1 \\ 3 & -3 & 0 \\ 2 & -2 & -3 \end{array}\right] = \left[\begin{array}{cc|c} 1 & -1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array}\right]$$
which shows the system has no solutions. The given equations are inconsistent.

Remark 1.3.4. The geometric interpretation of the last example is also interesting. The equation of a line in the $xy$-plane is $ax + by = c$, hence the solution set of a particular equation corresponds to a line. To have a solution to all three equations at once would mean that there is an intersection point which lies on all three lines. In the preceding example there is no such point.
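There is a tidy way to summarize all three behaviors at once: $Ax = b$ is consistent exactly when $A$ and $[A|b]$ have the same number of pivot columns, and the solution is unique exactly when every column of $A$ is a pivot column. The sketch below expresses this in Python with sympy's rank method, which counts pivot columns (rank is studied properly later in the course, and the helper name classify is just for illustration):

    from sympy import Matrix

    def classify(A, b):
        """Classify Ax = b as 'unique', 'infinitely many' or 'inconsistent'."""
        aug = A.row_join(b)
        if A.rank() < aug.rank():        # a pivot appears in the b-column
            return 'inconsistent'
        if A.rank() == A.cols:           # a pivot in every unknown's column
            return 'unique'
        return 'infinitely many'         # free variables remain

    # Example 1.3.1: unique solution
    print(classify(Matrix([[1, 2, -3], [2, 4, 0], [-1, 3, 2]]), Matrix([1, 7, 0])))
    # Example 1.3.3: inconsistent
    print(classify(Matrix([[1, -1], [3, -3], [2, -2]]), Matrix([1, 0, -3])))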
Example 1.3.5. Solve the following system of linear equations if possible,
$$x - y + z = 0, \qquad 3x - 3y = 0, \qquad 2x - 2y - 3z = 0.$$
Gaussian elimination on the augmented coefficient matrix reveals (see Example 1.2.3 and Proposition 1.2.9 for details of the Gaussian elimination)
$$\text{rref}\left[\begin{array}{ccc|c} 1 & -1 & 1 & 0 \\ 3 & -3 & 0 & 0 \\ 2 & -2 & -3 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right] \quad \Rightarrow \quad x - y = 0, \;\; z = 0.$$
The row of zeros indicates that we will not find a unique solution. We have a choice to make: either $x$ or $y$ can be stated as a function of the other. Typically in linear algebra we will solve for the variables that correspond to the pivot columns in terms of the non-pivot column variables. In this problem the pivot columns are the first column, which corresponds to the variable $x$, and the third column, which corresponds to the variable $z$. The variables $x, z$ are called basic variables while $y$ is called a free variable. The solution set is $\{(y, y, 0) \mid y \in \mathbb{R}\}$; in other words, $x = y$, $y = y$ and $z = 0$ for all $y \in \mathbb{R}$.

You might object to the last example. You might ask why $y$ is the free variable and not $x$. This is roughly equivalent to asking why $y$ is the dependent variable and $x$ the independent variable in the usual calculus, except that here the roles are reversed: in the preceding example the variable $x$ depends on $y$. Physically there may be a reason to distinguish the roles of one variable over another. There may be a clear cause-effect relationship which the mathematics fails to capture. For example, the velocity of a ball in flight depends on time, but does time depend on the ball's velocity? I'm guessing no. So time would seem to play the role of independent variable. However, when we write equations such as $v = v_o - gt$ we can just as well write $t = \frac{v_o - v}{g}$; the algebra alone does not reveal which variable should be taken as "independent". Hence, a choice must be made. In the case of infinitely many solutions, we customarily choose the pivot variables as the "dependent" or "basic" variables and the non-pivot variables as the "free" variables. Sometimes the word parameter is used instead of variable; it is synonymous.

Example 1.3.6. Solve the following (silly) system of linear equations if possible,
$$x = 0, \qquad 0x + 0y + 0z = 0, \qquad 3x = 0.$$
Gaussian elimination on the augmented coefficient matrix reveals (see Example 1.2.10 for details of the Gaussian elimination)
$$\text{rref}\left[\begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$$
We find the solution set is $\{(0, y, z) \mid y, z \in \mathbb{R}\}$. No restriction is placed on the free variables $y$ and $z$.
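sympy's linsolve (from the earlier asides) also handles free variables gracefully: it reports the solution sets of the last two examples parametrized exactly as we wrote them above. One final sketch for the curious:

    from sympy import Matrix, linsolve, symbols

    x, y, z = symbols('x y z')

    # Example 1.3.5: one free variable, y
    print(linsolve(Matrix([[1, -1, 1, 0], [3, -3, 0, 0], [2, -2, -3, 0]]), x, y, z))
    # {(y, y, 0)}

    # Example 1.3.6: two free variables, y and z
    print(linsolve(Matrix([[1, 0, 0, 0], [0, 0, 0, 0], [3, 0, 0, 0]]), x, y, z))
    # {(0, y, z)}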
