Lecture notes for Math 115A (linear algebra), Fall 2002
Terence Tao, UCLA
http://www.math.ucla.edu/tao/resource/general/115a.3.02f/

The textbook used was Linear Algebra, S.H. Friedberg, A.J. Insel, L.E. Spence, Third Edition, Prentice Hall, 1999. Thanks to Radhakrishna Bettadapura, Yu Cao, Cristian Gonzales, Hannah Kim, Michael Smith, Wilson Sov, Luqing Ye, and Shijia Yu for corrections.

Math 115A - Week 1
Textbook sections: 1.1-1.6

Topics covered:

• What is Linear algebra?
• Overview of course
• What is a vector? What is a vector space?
• Examples of vector spaces
• Vector subspaces
• Span, linear dependence, linear independence
• Systems of linear equations
• Bases

What is Linear algebra?

• This course is an introduction to Linear algebra. Linear algebra is the study of linear transformations and their algebraic properties.

• A transformation is any operation that transforms an input to an output. A transformation is linear if (a) every amplification of the input causes a corresponding amplification of the output (e.g. doubling of the input causes a doubling of the output), and (b) adding inputs together leads to adding of their respective outputs. We'll be more precise about this much later in the course.

• A simple example of a linear transformation is the map y := 3x, where the input x is a real number, and the output y is also a real number. Thus, for instance, in this example an input of 5 units causes an output of 15 units. Note that a doubling of the input causes a doubling of the output, and if one adds two inputs together (e.g. add a 3-unit input to a 5-unit input to form an 8-unit input) then the respective outputs (9-unit and 15-unit outputs, in this example) also add together (to form a 24-unit output). Note also that the graph of this linear transformation is a straight line (which is where the term linear comes from).

• (Footnote: I use the symbol := to mean "is defined as", as opposed to the symbol =, which means "is equal to".
(It's similar to the distinction between the symbols = and == in computer languages such as C++, or the distinction between causation and correlation). In many texts one does not make this distinction, and uses the symbol = to denote both. In practice, the distinction is too fine to be really important, so you can ignore the colons and read := as = if you want.)

• An example of a non-linear transformation is the map y := x^2; note now that doubling the input leads to quadrupling the output. Also, if one adds two inputs together, their outputs do not add (e.g. a 3-unit input has a 9-unit output, and a 5-unit input has a 25-unit output, but a combined 3 + 5-unit input does not have a 9 + 25 = 34-unit output, but rather a 64-unit output). Note the graph of this transformation is very much non-linear.

• In real life, most transformations are non-linear; however, they can often be approximated accurately by a linear transformation. (Indeed, this is the whole point of differential calculus - one takes a non-linear function and approximates it by a tangent line, which is a linear function). This is advantageous because linear transformations are much easier to study than non-linear transformations.

• In the examples given above, both the input and output were scalar quantities - they were described by a single number. However, in many situations the input or the output (or both) is not described by a single number, but rather by several numbers; in which case the input (or output) is not a scalar, but instead a vector. (This is a slight oversimplification - more exotic examples of input and output are also possible when the transformation is non-linear.)

• A simple example of a vector-valued linear transformation is given by Newton's second law F = ma, or equivalently a = F/m. One can view this law as a statement that a force F applied to an object of mass m causes an acceleration a, equal to a := F/m; thus F can be viewed as an input and a as an output.
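The scaling and additivity tests described above can be spot-checked numerically. Here is a minimal sketch (the function names are my own, not from the notes) comparing the linear map y := 3x with the non-linear map y := x^2:

```python
def T_linear(x):
    return 3 * x          # the linear map y := 3x

def T_nonlinear(x):
    return x ** 2         # the non-linear map y := x^2

def is_linear_on_samples(T, samples):
    """Check scaling and additivity on a few sample inputs.

    This is a necessary test, not a proof of linearity: it only checks
    T(2x) = 2 T(x) and T(x + y) = T(x) + T(y) on the samples given."""
    for x in samples:
        for y in samples:
            if T(2 * x) != 2 * T(x) or T(x + y) != T(x) + T(y):
                return False
    return True

samples = [0, 1, 3, 5, -2]
print(is_linear_on_samples(T_linear, samples))      # True
print(is_linear_on_samples(T_nonlinear, samples))   # False: e.g. T(3 + 5) = 64, not 9 + 25
```

As in the notes, the 3-unit and 5-unit inputs expose the failure of additivity for y := x^2.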
Both F and a are vectors; if for instance F is equal to 15 Newtons in the East direction plus 6 Newtons in the North direction (i.e. F := (15, 6) N), and the object has mass m := 3 kg, then the resulting acceleration is the vector a = (5, 2) m/s^2 (i.e. 5 m/s^2 in the East direction plus 2 m/s^2 in the North direction).

• Observe that even though the inputs and outputs are now vectors in this example, this transformation is still linear (as long as the mass stays constant); doubling the input force still causes a doubling of the output acceleration, and adding two forces together results in adding the two respective accelerations together.

• One can write Newton's second law in co-ordinates. If we are in three dimensions, so that F := (F_x, F_y, F_z) and a := (a_x, a_y, a_z), then the law can be written as

F_x = m a_x + 0 a_y + 0 a_z
F_y = 0 a_x + m a_y + 0 a_z
F_z = 0 a_x + 0 a_y + m a_z.

This linear transformation is associated to the matrix

[ m 0 0 ]
[ 0 m 0 ]
[ 0 0 m ].

• Here is another example of a linear transformation with vector inputs and vector outputs:

y_1 = 3x_1 + 5x_2 + 7x_3
y_2 = 2x_1 + 4x_2 + 6x_3;

this linear transformation corresponds to the matrix

[ 3 5 7 ]
[ 2 4 6 ].

• As it turns out, every linear transformation corresponds to a matrix, although if one wants to split hairs the two concepts are not quite the same thing. Linear transformations are to matrices as concepts are to words; different languages can encode the same concept using different words. We'll discuss linear transformations and matrices much later in the course.

• Linear algebra is the study of the algebraic properties of linear transformations (and matrices). Algebra is concerned with how to manipulate symbolic combinations of objects, and how to equate one such combination with another; e.g. how to simplify an expression such as (x - 3)(x + 5). In linear algebra we shall manipulate not just scalars, but also vectors, vector spaces, matrices, and linear transformations.
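A transformation given by a matrix can be computed directly by taking, for each row, the sum of the entries times the corresponding input co-ordinates. A small sketch (helper name is my own) applying the 2×3 matrix from the example above:

```python
def apply_matrix(A, x):
    """Apply a matrix (given as a list of rows) to a vector
    (given as a list of scalars): each output entry is a row-times-vector sum."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# the 2x3 matrix of the transformation y_1 = 3x_1 + 5x_2 + 7x_3,
#                                      y_2 = 2x_1 + 4x_2 + 6x_3
A = [[3, 5, 7],
     [2, 4, 6]]

print(apply_matrix(A, [1, 0, 0]))  # [3, 2]  (the first column of A)
print(apply_matrix(A, [1, 1, 1]))  # [15, 12] (the row sums)
```

Feeding in the standard inputs (1,0,0), (0,1,0), (0,0,1) recovers the columns of the matrix, which is one way to see how the matrix encodes the transformation.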
These manipulations will include familiar operations such as addition, multiplication, and reciprocal (multiplicative inverse), but also new operations such as span, dimension, transpose, determinant, trace, eigenvalue, eigenvector, and characteristic polynomial. Algebra is distinct from other branches of mathematics such as combinatorics (which is more concerned with counting objects than equating them) or analysis (which is more concerned with estimating and approximating objects, and obtaining qualitative rather than quantitative properties).

Overview of course

• Linear transformations and matrices are the focus of this course. However, before we study them, we first must study the more basic concepts of vectors and vector spaces; this is what the first two weeks will cover. (You will have had some exposure to vectors in 32AB and 33A, but we will need to review this material in more depth - in particular we concentrate much more on concepts, theory and proofs than on computation). One of our main goals here is to understand how a small set of vectors (called a basis) can be used to describe all other vectors in a vector space (thus giving rise to a co-ordinate system for that vector space).

• In weeks 3-5, we will study linear transformations and their co-ordinate representation in terms of matrices. We will study how to multiply two transformations (or matrices), as well as the more difficult question of how to invert a transformation (or matrix). The material from weeks 1-5 will then be tested in the midterm for the course.

• After the midterm, we will focus on matrices. A general matrix or linear transformation is difficult to visualize directly; however one can understand them much better if they can be diagonalized. This will force us to understand various statistics associated with a matrix, such as determinant, trace, characteristic polynomial, eigenvalues, and eigenvectors; this will occupy weeks 6-8.
• In the last three weeks we will study inner product spaces, which are a fancier version of vector spaces. (Vector spaces allow you to add and scalar multiply vectors; inner product spaces also allow you to compute lengths, angles, and inner products). We then review the earlier material on bases using inner products, and begin the study of how linear transformations behave on inner product spaces. (This study will be continued in 115B).

• Much of the early material may seem familiar to you from previous courses, but I definitely recommend that you still review it carefully, as this will make the more difficult later material much easier to handle.

What is a vector? What is a vector space?

• We now review what a vector is, and what a vector space is. First let us recall what a scalar is.

• Informally, a scalar is any quantity which can be described by a single number. An example is mass: an object has a mass of m kg for some real number m. Other examples of scalar quantities from physics include charge, density, speed, length, time, energy, temperature, volume, and pressure. In finance, scalars would include money, interest rates, prices, and volume. (You can think up examples of scalars in chemistry, EE, mathematical biology, or many other fields).

• The set of all scalars is referred to as the field of scalars; it is usually just R, the field of real numbers, but occasionally one likes to work with other fields such as C, the field of complex numbers, or Q, the field of rational numbers. However in this course the field of scalars will almost always be R. (In the textbook the scalar field is often denoted F, just to keep aside the possibility that it might not be the reals R; but I will not bother trying to make this distinction.)

• Any two scalars can be added, subtracted, or multiplied together to form another scalar. Scalars obey various rules of algebra, for instance x + y is always equal to y + x, and x(y + z) is equal to xy + xz.

• Now we turn to vectors and vector spaces.
• Informally, a vector is any member of a vector space; a vector space is any class of objects which can be added together, or multiplied with scalars. (A more popular, but less mathematically accurate, definition of a vector is any quantity with both direction and magnitude. This is true for some common kinds of vectors - most notably physical vectors - but is misleading or false for other kinds). As with scalars, vectors must obey certain rules of algebra.

• Before we give the formal definition, let us first recall some familiar examples.

• The vector space R^2 is the space of all vectors of the form (x, y), where x and y are real numbers. (In other words, R^2 := {(x, y) : x, y ∈ R}). For instance, (4, 3.5) is a vector in R^2. One can add two vectors in R^2 by adding their components separately, thus for instance (1, 2) + (3, 4) = (4, 6). One can multiply a vector in R^2 by a scalar by multiplying each component separately, thus for instance 3 · (1, 2) = (3, 6). Among all the vectors in R^2 is the zero vector (0, 0). Vectors in R^2 are used for many physical quantities in two dimensions; they can be represented graphically by arrows in a plane, with addition represented by the parallelogram law and scalar multiplication by dilation.

• The vector space R^3 is the space of all vectors of the form (x, y, z), where x, y, z are real numbers: R^3 := {(x, y, z) : x, y, z ∈ R}. Addition and scalar multiplication proceed similarly to R^2: (1, 2, 3) + (4, 5, 6) = (5, 7, 9), and 4 · (1, 2, 3) = (4, 8, 12). However, addition of a vector in R^2 to a vector in R^3 is undefined; (1, 2) + (3, 4, 5) doesn't make sense. Among all the vectors in R^3 is the zero vector (0, 0, 0). Vectors in R^3 are used for many physical quantities in three dimensions, such as velocity, momentum, current, electric and magnetic fields, force, acceleration, and displacement; they can be represented by arrows in space.

• One can similarly define the vector spaces R^4, R^5, etc.
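The componentwise rules above are easy to sketch in code; the helper names below are my own, and tuples stand in for vectors:

```python
def vec_add(v, w):
    """Add two vectors componentwise; mixed dimensions are undefined."""
    if len(v) != len(w):               # e.g. (1, 2) + (3, 4, 5) makes no sense
        raise ValueError("cannot add vectors of different dimensions")
    return tuple(vi + wi for vi, wi in zip(v, w))

def scalar_mul(c, v):
    """Multiply each component of v by the scalar c."""
    return tuple(c * vi for vi in v)

print(vec_add((1, 2), (3, 4)))        # (4, 6)
print(scalar_mul(3, (1, 2)))          # (3, 6)
print(vec_add((1, 2, 3), (4, 5, 6)))  # (5, 7, 9)
```

Note the explicit dimension check: just as in the notes, the sum of an R^2 vector and an R^3 vector is left undefined rather than guessed at.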
Vectors in these spaces are not often used to represent physical quantities, and are more difficult to represent graphically, but are useful for describing populations in biology, portfolios in finance, or many other types of quantities which need several numbers to describe them completely.

Definition of a vector space

• Definition. A vector space is any collection V of objects (called vectors) for which two operations can be performed:

• Vector addition, which takes two vectors v and w in V and returns another vector v + w in V. (Thus V must be closed under addition).

• Scalar multiplication, which takes a scalar c in R and a vector v in V, and returns another vector cv in V. (Thus V must be closed under scalar multiplication).

• Furthermore, for V to be a vector space, the following properties must be satisfied:

• (I. Addition is commutative) For all v, w ∈ V, v + w = w + v.

• (II. Addition is associative) For all u, v, w ∈ V, u + (v + w) = (u + v) + w.

• (III. Additive identity) There is a vector 0 ∈ V, called the zero vector, such that 0 + v = v for all v ∈ V.

• (IV. Additive inverse) For each vector v ∈ V, there is a vector -v ∈ V, called the additive inverse of v, such that -v + v = 0.

• (V. Multiplicative identity) The scalar 1 has the property that 1v = v for all v ∈ V.

• (VI. Multiplication is associative) For any scalars a, b ∈ R and any vector v ∈ V, we have a(bv) = (ab)v.

• (VII. Multiplication is linear) For any scalar a ∈ R and any vectors v, w ∈ V, we have a(v + w) = av + aw.

• (VIII. Multiplication distributes over addition) For any scalars a, b ∈ R and any vector v ∈ V, we have (a + b)v = av + bv.

(Not very important) remarks

• The number of properties listed is long, but they can be summarized briefly as: the laws of algebra work. They are all eminently reasonable; one would not want to work with vectors for which v + w ≠ w + v, for instance. Verifying all the vector space axioms seems rather tedious, but later we will see that in most cases we don't need to verify all of them.
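Individual axioms can be spot-checked numerically on concrete vectors; here is a minimal sketch (names my own) testing commutativity (axiom I) and distributivity (axiom VIII) for sample vectors in R^2:

```python
def add(v, w):
    """Componentwise vector addition in R^n."""
    return tuple(vi + wi for vi, wi in zip(v, w))

def smul(c, v):
    """Scalar multiplication in R^n."""
    return tuple(c * vi for vi in v)

v, w = (1.0, 2.0), (3.0, -4.0)
a, b = 2.0, 5.0

assert add(v, w) == add(w, v)                         # axiom I on these samples
assert smul(a + b, v) == add(smul(a, v), smul(b, v))  # axiom VIII on these samples
print("axioms I and VIII hold on these samples")
```

Of course, a few samples prove nothing: the axioms quantify over all vectors and scalars, so an actual verification is a short symbolic computation (one is carried out for axiom VIII below), not a finite check.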
• Because addition is associative (axiom II), we will often write expressions such as u + v + w without worrying about which order the vectors are added in. Similarly from axiom VI we can write things like abv. We also write v - w as shorthand for v + (-w).

• A philosophical point: we never say exactly what vectors are, only what vectors do. This is an example of abstraction, which appears everywhere in mathematics (but especially in algebra): the exact substance of an object is not important, only its properties and functions. (For instance, when using the number "three" in mathematics, it is unimportant whether we refer to three rocks, three sheep, or whatever; what is important is how to add, multiply, and otherwise manipulate these numbers, and what properties these operations have). This is tremendously powerful: it means that we can use a single theory (linear algebra) to deal with many very different subjects (physical vectors, population vectors in biology, portfolio vectors in finance, probability distributions in probability, functions in analysis, etc.). A similar philosophy underlies "object-oriented programming" in computer science. Of course, even though vector spaces can be abstract, it is often very helpful to keep concrete examples of vector spaces such as R^2 and R^3 handy, as they are of course much easier to visualize. For instance, even when dealing with an abstract vector space we shall often still just draw arrows in R^2 or R^3, mainly because our blackboards don't have all that many dimensions.

• Because we chose our field of scalars to be the field of real numbers R, these vector spaces are known as real vector spaces, or vector spaces over R. Occasionally people use other fields, such as complex numbers C, to define the scalars, thus creating complex vector spaces (or vector spaces over C), etc.
Another interesting choice is to use functions instead of numbers as scalars (for instance, one could have an indeterminate x, and let things like 4x^3 + 2x^2 + 5 be scalars, and (4x^3 + 2x^2 + 5, x^4 - 4) be vectors). We will stick almost exclusively with the real scalar field in this course, but because of the abstract nature of this theory, almost everything we say in this course works equally well for other scalar fields.

• A pedantic point: The zero vector is often denoted 0, but technically it is not the same as the zero scalar 0. But in practice there is no harm in confusing the two objects: zero of one thing is pretty much the same as zero of any other thing.

Examples of vector spaces

• n-tuples as vectors. For any integer n ≥ 1, the vector space R^n is defined to be the space of all n-tuples of reals (x_1, x_2, ..., x_n). These are ordered n-tuples, so for instance (3, 4) is not the same as (4, 3); two vectors (x_1, x_2, ..., x_n) and (y_1, y_2, ..., y_n) are only equal if x_1 = y_1, x_2 = y_2, ..., and x_n = y_n. Addition of vectors is defined by

(x_1, x_2, ..., x_n) + (y_1, y_2, ..., y_n) := (x_1 + y_1, x_2 + y_2, ..., x_n + y_n)

and scalar multiplication by

c(x_1, x_2, ..., x_n) := (cx_1, cx_2, ..., cx_n).

The zero vector is

0 := (0, 0, ..., 0)

and the additive inverse is given by

-(x_1, x_2, ..., x_n) := (-x_1, -x_2, ..., -x_n).

• A typical use of such a vector is to count several types of objects. For instance, a simple ecosystem consisting of X units of plankton, Y units of fish, and Z whales might be represented by the vector (X, Y, Z). Combining two ecosystems together would then correspond to adding the two vectors; natural population growth might correspond to multiplying the vector by some scalar corresponding to the growth rate. (More complicated operations, dealing with how one species impacts another, would probably be dealt with via matrix operations, which we will come to later).
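The ecosystem example is just R^3 arithmetic; a short sketch (the population numbers are made up for illustration, and the helper names are my own):

```python
def add(v, w):
    """Componentwise addition in R^n."""
    return tuple(vi + wi for vi, wi in zip(v, w))

def smul(c, v):
    """Scalar multiplication in R^n."""
    return tuple(c * vi for vi in v)

pond_a = (100, 20, 1)   # (plankton, fish, whales)
pond_b = (50, 5, 0)

print(add(pond_a, pond_b))   # combining the ecosystems: (150, 25, 1)
print(smul(2, pond_b))       # every population doubles: (100, 10, 0)
```

Species interactions, by contrast, would need one population's change to depend on another's, which is exactly the kind of operation a matrix encodes.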
As one can see, there is no reason for n to be restricted to two or three dimensions.

• The vector space axioms can be verified for R^n, but it is tedious to do so. We shall just verify one axiom here, axiom VIII: (a + b)v = av + bv. We can write the vector v in the form v := (x_1, x_2, ..., x_n). The left-hand side is then

(a + b)v = (a + b)(x_1, x_2, ..., x_n) = ((a + b)x_1, (a + b)x_2, ..., (a + b)x_n)

while the right-hand side is

av + bv = a(x_1, x_2, ..., x_n) + b(x_1, x_2, ..., x_n)
        = (ax_1, ax_2, ..., ax_n) + (bx_1, bx_2, ..., bx_n)
        = (ax_1 + bx_1, ax_2 + bx_2, ..., ax_n + bx_n)

and the two sides match since (a + b)x_j = ax_j + bx_j for each j = 1, 2, ..., n.

• There are of course other things we can do with R^n, such as taking dot products, lengths, angles, etc., but those operations are not common to all vector spaces and so we do not discuss them here.

• Scalars as vectors. The scalar field R can itself be thought of as a vector space - after all, it has addition and scalar multiplication. It is essentially the same space as R^1. However, this is a rather boring vector space and it is often confusing (though technically correct) to refer to scalars as a type of vector. Just as R^2 represents vectors in a plane and R^3 represents vectors in space, R^1 represents vectors in a line.

• The zero vector space. Actually, there is an even more boring vector space than R^1 - the zero vector space R^0 (also called {0}), consisting solely of a single vector 0, the zero vector, which is also sometimes denoted () in this context. Addition and multiplication are trivial: 0 + 0 = 0 and c0 = 0. The space R^0 represents vectors in a point. Although this space is utterly uninteresting, it is necessary to include it in the pantheon of vector spaces, just as the number zero is required to complete the set of integers.

• Complex numbers as vectors.
The space C of complex numbers can be viewed as a vector space over the reals; one can certainly add two complex numbers together, or multiply a complex number by a (real) scalar, with all the laws of arithmetic holding. Thus, for instance, 3 + 2i would be a vector, and an example of scalar multiplication would be 5(3 + 2i) = 15 + 10i. This space is very similar to R^2, although complex numbers enjoy certain operations, such as complex multiplication and complex conjugate, which are not available to vectors in R^2.

• Polynomials as vectors I. For any n ≥ 0, let P_n(R) denote the vector space of all polynomials of one indeterminate variable x whose degree is at most n. Thus for instance P_3(R) contains the "vectors"

x^3 + 2x^2 + 4;  x^2 - 4;  -1.5x^3 + 2.5x + π;  0

but not

x^4 + x^3 + 1;  √x;  sin(x) + e^x;  x + x^(-3).

Addition, scalar multiplication, and additive inverse are defined in the standard manner, thus for instance

(x^3 + 2x^2 + 4) + (-x^3 + x^2 + 4) = 3x^2 + 8                    (0.1)

and

3(x^3 + 2x^2 + 4) = 3x^3 + 6x^2 + 12.

The zero vector is just 0.

• Notice in this example it does not really matter what x is. The space P_n(R) is very similar to the vector space R^(n+1); indeed one can match one to the other by the pairing

a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0  <->  (a_n, a_(n-1), ..., a_1, a_0);

thus for instance in P_3(R), the polynomial x^3 + 2x^2 + 4 would be associated with the 4-tuple (1, 2, 0, 4). The more precise statement here is that P_n(R) and R^(n+1) are isomorphic vector spaces; more on this later. However, the two spaces are still different; for instance we can do certain operations in P_n(R), such as differentiate with respect to x, which do not make much sense for R^(n+1).

• Notice that we allow the polynomials to have degree less than n; if we only allowed polynomials of degree exactly n, then we would not have a vector space, because the sum of two vectors would not necessarily be a vector (see (0.1)). (In other words, such a space would not be closed under addition).
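The pairing with R^(n+1) suggests storing a polynomial as its coefficient tuple (a_n, ..., a_1, a_0), under which polynomial addition and scalar multiplication become the componentwise operations of R^(n+1). A sketch with my own helper names, reproducing the computations in P_3(R) above:

```python
def poly_add(p, q):
    """Add two polynomials stored as equal-length coefficient tuples."""
    return tuple(pi + qi for pi, qi in zip(p, q))

def poly_smul(c, p):
    """Multiply a polynomial (coefficient tuple) by the scalar c."""
    return tuple(c * pi for pi in p)

p = (1, 2, 0, 4)    # x^3 + 2x^2 + 4
q = (-1, 1, 0, 4)   # -x^3 + x^2 + 4

print(poly_add(p, q))    # (0, 3, 0, 8), i.e. 3x^2 + 8
print(poly_smul(3, p))   # (3, 6, 0, 12), i.e. 3x^3 + 6x^2 + 12
```

Note how the first sum has leading coefficient 0: two degree-3 polynomials added to give a degree-2 one, which is exactly why "polynomials of degree exactly n" would not be closed under addition.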
• Polynomials as vectors II. Let P(R) denote the vector space of all polynomials of one indeterminate variable x - regardless of degree. (In other words, P(R) := ∪_(n=0)^∞ P_n(R), the union of all the P_n(R)). Thus this space in particular contains the monomials

1, x, x^2, x^3, x^4, ...

though of course it contains many other vectors as well.

• This space is much larger than any of the P_n(R), and is not isomorphic to any of the standard vector spaces R^n. Indeed, it is an infinite dimensional space - there are infinitely many "independent" vectors in this space. (More on this later).

• Functions as vectors I. Why stick to polynomials? Let C(R) denote the vector space of all continuous functions of one real variable x - thus this space includes as vectors such objects as

x^4 + x^3 + 1;  sin(x) + e^x;  x + sin(x);  |x|.

One still has addition and scalar multiplication:

(sin(x) + e^x) + (x^3 + sin(x)) = x^3 + e^x + 2 sin(x)
5(sin(x) + e^x) = 5 sin(x) + 5e^x,

and all the laws of vector spaces still hold. This space is substantially larger than P(R), and is another example of an infinite dimensional vector space.

• Functions as vectors II. In the previous example the real variable x could range over all the real line R. However, we could instead restrict the real variable to some smaller set, such as the interval [0, 1], and just consider the vector space C([0, 1]) of continuous functions on [0, 1]. This would include such vectors as

x^4 + x^3 + 1;  sin(x) + e^x;  x + sin(x);  |x|.

This looks very similar to C(R), but this space is a bit smaller because more functions are equal. For instance, the functions x and |x| are the same vector in C([0, 1]), even though they are different vectors in C(R).

• Functions as vectors III. Why stick to continuous functions? Let F(R, R) denote the space of all functions of one real variable, regardless of whether they are continuous or not.
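Addition and scalar multiplication of functions are defined pointwise: (f + g)(x) := f(x) + g(x) and (cf)(x) := c f(x). A minimal sketch (helper names my own) using the functions from the example above:

```python
import math

def fn_add(f, g):
    """Pointwise sum of two functions: (f + g)(x) := f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def fn_smul(c, f):
    """Pointwise scalar multiple: (c f)(x) := c * f(x)."""
    return lambda x: c * f(x)

f = lambda x: math.sin(x) + math.exp(x)   # sin(x) + e^x
g = lambda x: x ** 3 + math.sin(x)        # x^3 + sin(x)

h = fn_add(f, g)            # x^3 + e^x + 2 sin(x)
print(h(0.0))               # 0 + e^0 + 2*0 = 1.0
print(fn_smul(5, f)(0.0))   # 5*(sin 0 + e^0) = 5.0
```

The "vectors" here are the functions themselves; evaluating at a point is only how we inspect them.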
In addition to all the vectors in C(R), the space F(R, R) contains many strange objects, such as the function

f(x) := { 1 if x ∈ Q
        { 0 if x ∉ Q.

This space is much, much larger than C(R); it is also infinite dimensional, but it is in some sense "more infinite" than C(R). (More precisely, the dimension of C(R) is countably infinite, but the dimension of F(R, R) is uncountably infinite. Further discussion is beyond the scope of this course, but see Math 112).

• Functions as vectors IV. Just as the vector space C(R) of continuous functions can be restricted to smaller sets, the space F(R, R) can also be restricted. For any subset S of the real line, let F(S, R) denote the vector space of all functions from S to R; thus a vector in this space is a function f which assigns a real number f(x) to each x in S. Two vectors f, g would be considered equal if f(x) = g(x) for each x in S. For instance, if S is the two-element set S := {0, 1}, then the two functions f(x) := x and g(x) := x^2 would be considered the same vector in F({0, 1}, R), because they equal the same value at 0 and 1. Indeed, to specify any vector f in F({0, 1}, R), one just needs to specify f(0) and f(1). As such, this space is very similar to R^2.

• Sequences as vectors. An infinite sequence is a sequence of real numbers

(a_1, a_2, a_3, a_4, ...);

for instance, a typical sequence is

(2, 4, 6, 8, 10, 12, ...).

Let R^∞ denote the vector space of all infinite sequences. These sequences are added together by the rule

(a_1, a_2, ...) + (b_1, b_2, ...) := (a_1 + b_1, a_2 + b_2, ...)

and scalar multiplied by the rule

c(a_1, a_2, ...) := (ca_1, ca_2, ...).

This vector space is very much like the finite-dimensional vector spaces R^2, R^3, ..., except that these sequences do not terminate.

• Matrices as vectors. Given any integers m, n ≥ 1, we let M_(m×n)(R) be the space of all m×n matrices (i.e.
m rows and n columns) with real entries; thus for instance M_(2×3)(R) contains such "vectors" as

[ 1 2 3 ]      [  0 -1 -2 ]
[ 4 5 6 ],     [ -3 -4 -5 ].

Two matrices are equal if and only if all of their individual components match up; rearranging the entries of a matrix will produce a different matrix. Matrix addition and scalar multiplication are defined similarly to vectors:

[ 1 2 3 ]   [  0 -1 -2 ]   [ 1 1 1 ]
[ 4 5 6 ] + [ -3 -4 -5 ] = [ 1 1 1 ]

and

   [ 1 2 3 ]   [ 10 20 30 ]
10 [ 4 5 6 ] = [ 40 50 60 ].

Matrices are useful for many things, notably for solving linear equations and for encoding linear transformations; more on these later in the course.

• As you can see, there are (infinitely) many examples of vector spaces, some of which look very different from the familiar examples of R^2 and R^3. Nevertheless, much of the theory we do here will cover all of these examples simultaneously. When we depict these vector spaces on the blackboard, we will draw them as if they were R^2 or R^3, but they are often much larger, and each point we draw in the vector space, which represents a vector, could in reality stand for a very complicated object such as a polynomial, matrix, or function. So some of the pictures we draw should be interpreted more as analogies or metaphors than as a literal depiction of the situation.

Non-vector spaces

• Now for some examples of things which are not vector spaces.

• Latitude and longitude. The location of any point on the earth can be described by two numbers, e.g. Los Angeles is 34° N, 118° W. This may look a lot like a two-dimensional vector in R^2, but the space of all latitude-longitude pairs is not a vector space, because there is no reasonable way of adding or scalar multiplying such pairs. For instance, how could you multiply Los Angeles by 10? 340° N, 1180° W does not make sense.

• Unit vectors. In R^3, a unit vector is any vector with unit length; for instance (0, 0, 1), (0, -1, 0), and (3/5, 0, 4/5) are all unit vectors.
However, the space of all unit vectors (sometimes denoted S^2, for two-dimensional sphere) is not a vector space, as it is not closed under addition (or under scalar multiplication).

• The positive real axis. The space R^+ of positive real numbers is closed under addition, and obeys most of the rules of vector spaces, but is not a vector space, because one cannot multiply by negative scalars. (Also, it does not contain a zero vector).

• Monomials. The space of monomials 1, x, x^2, x^3, ... does not form a vector space - it is not closed under addition or scalar multiplication.

Vector arithmetic

• The vector space axioms I-VIII can be used to deduce all the other familiar laws of vector arithmetic. For instance, we have

• Vector cancellation law. If u, v, w are vectors such that u + v = u + w, then v = w.

• Proof: Since u is a vector, we have an additive inverse -u such that -u + u = 0, by axiom IV. Now we add -u to both sides of u + v = u + w:

-u + (u + v) = -u + (u + w).

Now use axiom II:

(-u + u) + v = (-u + u) + w

then axiom IV:

0 + v = 0 + w

then axiom III:

v = w.

• As you can see, these algebraic manipulations are rather trivial. After the first week we usually won't do these computations in such painful detail.

• Some other simple algebraic facts, which you can amuse yourself with by deriving them from the axioms:

0v = 0;  (-1)v = -v;  -(v + w) = (-v) + (-w);  a0 = 0;  a(-x) = (-a)x = -(ax).

Vector subspaces

• Many vector spaces are subspaces of another. A vector space W is a subspace of a vector space V if W ⊆ V (i.e. every vector in W is also a vector in V), and the laws of vector addition and scalar multiplication are consistent (i.e. if v_1 and v_2 are in W, and hence in V, the rule that W gives for adding v_1 and v_2 gives the same answer as the rule that V gives for adding v_1 and v_2).

• For instance, the space P_2(R) - the vector space of polynomials of degree at most 2 - is a subspace of P_3(R). Both are subspaces of P(R), the vector space of polynomials of arbitrary degree.
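Failures of closure, like the positive real axis above, can be exhibited concretely. A sketch (names my own) that checks closure on a handful of samples - a necessary test only, since real closure quantifies over all elements:

```python
def closed_under_addition(member, samples):
    """Check that x + y stays in the set for every sample pair."""
    return all(member(x + y) for x in samples for y in samples)

def closed_under_scalar_mul(member, samples, scalars):
    """Check that c * x stays in the set for every sample scalar and element."""
    return all(member(c * x) for c in scalars for x in samples)

is_positive = lambda x: x > 0   # membership test for R^+

print(closed_under_addition(is_positive, [1.0, 2.5, 0.1]))        # True
print(closed_under_scalar_mul(is_positive, [1.0, 2.5], [2, -1]))  # False: -1 * 1.0 < 0
```

This matches the text: sums of positive reals stay positive, but multiplying by the negative scalar -1 leaves the set, so R^+ is not a vector space.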
C([0, 1]), the space of continuous functions on [0, 1], is a subspace of F([0, 1], R). And so forth. (Technically, R^2 is not a subspace of R^3, because a two-dimensional vector is not a three-dimensional vector. However, R^3 does contain subspaces which are almost identical to R^2. More on this later).

• If V is a vector space, and W is a subset of V (i.e. W ⊆ V), then of course we can add and scalar multiply vectors in W, since they are automatically vectors in V. On the other hand, W is not necessarily a subspace, because it may not be a vector space. (For instance, the set S^2 of unit vectors in R^3 is a subset of R^3, but is not a subspace). However, it is easy to check when a subset is a subspace:

• Lemma. Let V be a vector space, and let W be a subset of V. Then W is a subspace of V if and only if the following two properties are satisfied:

• (W is closed under addition) If w_1 and w_2 are in W, then w_1 + w_2 is also in W.

• (W is closed under scalar multiplication) If w is in W and c is a scalar, then cw is also in W.

• Proof. First suppose that W is a subspace of V. Then W will be closed under addition and multiplication directly from the definition of vector space. This proves the "only if" part.

• Now we prove the harder "if" part. In other words, we assume that W is a subset of V which is closed under addition and scalar multiplication, and we have to prove that W is a vector space. In other words, we have to verify the axioms I-VIII.

• Most of these axioms follow immediately because W is a subset of V, and V already obeys the axioms I-VIII. For instance, since vectors v_1, v_2 in V obey the commutativity property v_1 + v_2 = v_2 + v_1, it automatically follows that vectors in W also obey the property w_1 + w_2 = w_2 + w_1, since all vectors in W are also vectors in V. This reasoning easily gives us axioms I, II, V, VI, VII, VIII.

• There is a potential problem with III though, because the zero vector 0 of V might not lie in W.
Similarly with IV, there is a potential problem that if w lies in W, then -w might not lie in W. But both problems cannot occur, because 0 = 0w and -w = (-1)w (Exercise: prove this from the axioms), and W is closed under scalar multiplication. □

• This Lemma makes it quite easy to generate a large number of vector spaces, simply by taking a big vector space and passing to a subset which is closed under addition and scalar multiplication. Some examples:

• (Horizontal vectors) Recall that R^3 is the vector space of all vectors (x, y, z) with x, y, z real. Let V be the subset of R^3 consisting of all vectors with zero z co-ordinate, i.e. V := {(x, y, 0) : x, y ∈ R}. This is a subset of R^3, but moreover it is also a subspace of R^3. To see this, we use the Lemma. It suffices to show that V is closed under vector addition and scalar multiplication. Let's check the vector addition. If we have two vectors in V, say (x_1, y_1, 0) and (x_2, y_2, 0), we need to verify that the sum of these two vectors is still in V. But the sum is just (x_1 + x_2, y_1 + y_2, 0), and this is in V because the z co-ordinate is zero. Thus V is closed under vector addition. A similar argument shows that V is closed under scalar multiplication, and so V is indeed a subspace of R^3. (Indeed, V is very similar to - though technically not the same thing as - R^2). Note that if we considered instead the space of all vectors with z co-ordinate 1, i.e. {(x, y, 1) : x, y ∈ R}, then this would be a subset but not a subspace, because it is not closed under vector addition (or under scalar multiplication, for that matter).

• Another example of a subspace of R^3 is the plane {(x, y, z) ∈ R^3 : x + 2y + 3z = 0}. A third example of a subspace of R^3 is the line {(t, 2t, 3t) : t ∈ R}. (Exercise: verify that these are indeed subspaces). Notice how subspaces tend to be very flat objects which go through the origin; this is consistent with them being closed under vector addition and scalar multiplication.
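The two closure checks of the Lemma can be sketched on concrete points: the horizontal plane V = {(x, y, 0)} passes both, while the shifted plane {(x, y, 1)} already fails closure under addition (the helper names below are my own):

```python
def vec_add(v, w):
    """Componentwise addition in R^3."""
    return tuple(vi + wi for vi, wi in zip(v, w))

def smul(c, v):
    """Scalar multiplication in R^3."""
    return tuple(c * vi for vi in v)

in_V = lambda v: v[2] == 0        # membership in {(x, y, 0)}
in_shifted = lambda v: v[2] == 1  # membership in {(x, y, 1)}

u, w = (1, 2, 0), (3, -4, 0)
print(in_V(vec_add(u, w)), in_V(smul(5, u)))   # True True: closed both ways

p, q = (1, 2, 1), (3, -4, 1)
print(in_shifted(vec_add(p, q)))               # False: the z co-ordinate becomes 2
```

Note that passing on a few sample points is only evidence, not a proof; the general argument in the text (the z co-ordinate of a sum of vectors in V is 0 + 0 = 0) is what actually verifies closure.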
• In R^3, the only subspaces are lines through the origin, planes through the origin, the whole space R^3, and the zero vector space {0}. In R^2, the only subspaces are lines through the origin, the whole space R^2, and the zero vector space {0}. (This is another clue as to why this subject is called linear algebra).

• (Even polynomials) Recall that P(R) is the vector space of all polynomials f(x). Call a polynomial even if f(-x) = f(x); for instance, f(x) = x^4 + 2x^2 + 3 is even, but f(x) = x^3 + 1 is not. Let P_even(R) denote the set of all even polynomials, thus P_even(R) is a subset of P(R). Now we show that P_even(R) is not just a subset; it is a subspace of P(R). Again, it suffices to show that P_even(R) is closed under vector addition and scalar multiplication. Let's show it's closed under vector addition - i.e. if f and g are even polynomials, we have to show that f + g is also even. In other words, we have to show that f(-x) + g(-x) = f(x) + g(x). But this is clear since f(-x) = f(x) and g(-x) = g(x). A similar argument shows why even polynomials are closed under scalar multiplication.

• (Diagonal matrices) Let n ≥ 1 be an integer. Recall that M_(n×n)(R) is the vector space of n×n real matrices. Call a matrix diagonal if all the entries away from the main diagonal (from top left to bottom right) are equal to zero.
