Lecture notes Mathematics

Notes on Mathematics - 102
Peeyush Chandra, A. K. Lal, V. Raghavendra, G. Santhanam
Supported by a grant from MHRD

Contents

Part I  Linear Algebra

1 Matrices
  1.1 Definition of a Matrix
    1.1.1 Special Matrices
  1.2 Operations on Matrices
    1.2.1 Multiplication of Matrices
  1.3 Some More Special Matrices
    1.3.1 Submatrix of a Matrix
    1.3.2 Block Matrices
  1.4 Matrices over Complex Numbers

2 Linear System of Equations
  2.1 Introduction
  2.2 Definition and a Solution Method
    2.2.1 A Solution Method
  2.3 Row Operations and Equivalent Systems
    2.3.1 Gauss Elimination Method
  2.4 Row Reduced Echelon Form of a Matrix
    2.4.1 Gauss-Jordan Elimination
    2.4.2 Elementary Matrices
  2.5 Rank of a Matrix
  2.6 Existence of Solution of Ax = b
    2.6.1 Example
    2.6.2 Main Theorem
    2.6.3 Exercises
  2.7 Invertible Matrices
    2.7.1 Inverse of a Matrix
    2.7.2 Equivalent Conditions for Invertibility
    2.7.3 Inverse and Gauss-Jordan Method
  2.8 Determinant
    2.8.1 Adjoint of a Matrix
    2.8.2 Cramer's Rule
  2.9 Miscellaneous Exercises

3 Finite Dimensional Vector Spaces
  3.1 Vector Spaces
    3.1.1 Definition
    3.1.2 Examples
    3.1.3 Subspaces
    3.1.4 Linear Combinations
  3.2 Linear Independence
  3.3 Bases
    3.3.1 Important Results
  3.4 Ordered Bases

4 Linear Transformations
  4.1 Definitions and Basic Properties
  4.2 Matrix of a Linear Transformation
  4.3 Rank-Nullity Theorem
  4.4 Similarity of Matrices

5 Inner Product Spaces
  5.1 Definition and Basic Properties
  5.2 Gram-Schmidt Orthogonalisation Process
  5.3 Orthogonal Projections and Applications
    5.3.1 Matrix of the Orthogonal Projection

6 Eigenvalues, Eigenvectors and Diagonalization
  6.1 Introduction and Definitions
  6.2 Diagonalization
  6.3 Diagonalizable Matrices
  6.4 Sylvester's Law of Inertia and Applications

Part II  Ordinary Differential Equations

7 Differential Equations
  7.1 Introduction and Preliminaries
  7.2 Separable Equations
    7.2.1 Equations Reducible to Separable Form
  7.3 Exact Equations
    7.3.1 Integrating Factors
  7.4 Linear Equations
  7.5 Miscellaneous Remarks
  7.6 Initial Value Problems
    7.6.1 Orthogonal Trajectories
  7.7 Numerical Methods

8 Second Order and Higher Order Equations
  8.1 Introduction
  8.2 More on Second Order Equations
    8.2.1 Wronskian
    8.2.2 Method of Reduction of Order
  8.3 Second Order Equations with Constant Coefficients
  8.4 Non-Homogeneous Equations
  8.5 Variation of Parameters
  8.6 Higher Order Equations with Constant Coefficients
  8.7 Method of Undetermined Coefficients

9 Solutions Based on Power Series
  9.1 Introduction
    9.1.1 Properties of Power Series
  9.2 Solutions in Terms of Power Series
  9.3 Statement of Frobenius Theorem for Regular (Ordinary) Point
  9.4 Legendre Equations and Legendre Polynomials
    9.4.1 Introduction
    9.4.2 Legendre Polynomials

Part III  Laplace Transform

10 Laplace Transform
  10.1 Introduction
  10.2 Definitions and Examples
    10.2.1 Examples
  10.3 Properties of Laplace Transform
    10.3.1 Inverse Transforms of Rational Functions
    10.3.2 Transform of Unit Step Function
  10.4 Some Useful Results
    10.4.1 Limiting Theorems
  10.5 Application to Differential Equations
  10.6 Transform of the Unit-Impulse Function

Part IV  Numerical Applications

11 Newton's Interpolation Formulae
  11.1 Introduction
  11.2 Difference Operator
    11.2.1 Forward Difference Operator
    11.2.2 Backward Difference Operator
    11.2.3 Central Difference Operator
    11.2.4 Shift Operator
    11.2.5 Averaging Operator
  11.3 Relations between Difference Operators
  11.4 Newton's Interpolation Formulae

12 Lagrange's Interpolation Formula
  12.1 Introduction
  12.2 Divided Differences
  12.3 Lagrange's Interpolation Formula
  12.4 Gauss's and Stirling's Formulas

13 Numerical Differentiation and Integration
  13.1 Introduction
  13.2 Numerical Differentiation
  13.3 Numerical Integration
    13.3.1 A General Quadrature Formula
    13.3.2 Trapezoidal Rule
    13.3.3 Simpson's Rule

14 Appendix
  14.1 System of Linear Equations
  14.2 Determinant
  14.3 Properties of Determinant
  14.4 Dimension of M + N
  14.5 Proof of Rank-Nullity Theorem
  14.6 Condition for Exactness


Part I  Linear Algebra

Chapter 1  Matrices

1.1 Definition of a Matrix

Definition 1.1.1 (Matrix) A rectangular array of numbers is called a matrix. We shall mostly be concerned with matrices having real numbers as entries.

The horizontal arrays of a matrix are called its rows and the vertical arrays are called its columns. A matrix having m rows and n columns is said to have order m × n. A matrix A of order m × n can be represented in the following form:

    A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix},

where a_{ij} is the entry at the intersection of the i-th row and the j-th column. In a more concise manner, we also denote the matrix A by [a_{ij}], suppressing its order.

Remark 1.1.2 Some books also use parentheses, (a_{ij}), to represent a matrix.

Let A = \begin{pmatrix} 1 & 3 & 7 \\ 4 & 5 & 6 \end{pmatrix}. Then a_{11} = 1, a_{12} = 3, a_{13} = 7, a_{21} = 4, a_{22} = 5 and a_{23} = 6.

A matrix having only one column is called a column vector, and a matrix with only one row is called a row vector. Whenever a vector is used, it should be understood from the context whether it is a row vector or a column vector.

Definition 1.1.3 (Equality of two Matrices) Two matrices A = [a_{ij}] and B = [b_{ij}] having the same order m × n are equal if a_{ij} = b_{ij} for each i = 1, 2, ..., m and j = 1, 2, ..., n.

In other words, two matrices are said to be equal if they have the same order and their corresponding entries are equal.

Example 1.1.4 The linear system of equations 2x + 3y = 5 and 3x + 2y = 5 can be identified with the matrix \begin{pmatrix} 2 & 3 & : & 5 \\ 3 & 2 & : & 5 \end{pmatrix}.
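The entry convention a_{ij} is easy to try out numerically. The short sketch below is an addition to these notes, using the NumPy library; note that NumPy indices start at 0, so the entry a_{ij} is accessed as A[i-1, j-1].

```python
import numpy as np

# The 2 x 3 matrix A from the example above.
A = np.array([[1, 3, 7],
              [4, 5, 6]])

m, n = A.shape              # order of the matrix: m = 2 rows, n = 3 columns
print(m, n)                 # -> 2 3

# NumPy indexing is zero-based, so entry a_ij lives at A[i-1, j-1].
print(A[0, 0], A[0, 1], A[1, 2])   # -> 1 3 6, i.e. a_11, a_12, a_23
```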
1.1.1 Special Matrices

Definition 1.1.5
1. A matrix in which each entry is zero is called a zero-matrix, denoted by 0. For example,
   0_{2×2} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} and 0_{2×3} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
2. A matrix having the number of rows equal to the number of columns is called a square matrix. Thus, its order is m × m (for some m) and is represented by m only.
3. In a square matrix A = [a_{ij}] of order n, the entries a_{11}, a_{22}, ..., a_{nn} are called the diagonal entries and form the principal diagonal of A.
4. A square matrix A = [a_{ij}] is said to be a diagonal matrix if a_{ij} = 0 for i ≠ j. In other words, the non-zero entries appear only on the principal diagonal. For example, the zero matrix 0_n and \begin{pmatrix} 4 & 0 \\ 0 & 1 \end{pmatrix} are a few diagonal matrices.
   A diagonal matrix D of order n with diagonal entries d_1, d_2, ..., d_n is denoted by D = diag(d_1, ..., d_n). If d_i = d for all i = 1, 2, ..., n, then the diagonal matrix D is called a scalar matrix.
5. A square matrix A = [a_{ij}] with
   a_{ij} = 1 if i = j, and a_{ij} = 0 if i ≠ j,
   is called the identity matrix, denoted by I_n. For example,
   I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} and I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
   The subscript n is suppressed when the order is clear from the context or no confusion arises.
6. A square matrix A = [a_{ij}] is said to be an upper triangular matrix if a_{ij} = 0 for i > j, and a lower triangular matrix if a_{ij} = 0 for i < j. A square matrix A is said to be triangular if it is an upper or a lower triangular matrix. For example,
   \begin{pmatrix} 2 & 1 & 4 \\ 0 & 3 & -1 \\ 0 & 0 & -2 \end{pmatrix}
   is an upper triangular matrix. An upper triangular matrix will be represented by
   \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}.

1.2 Operations on Matrices

Definition 1.2.1 (Transpose of a Matrix) The transpose of an m × n matrix A = [a_{ij}] is defined as the n × m matrix B = [b_{ij}] with b_{ij} = a_{ji} for 1 ≤ i ≤ n and 1 ≤ j ≤ m. The transpose of A is denoted by A^t.

That is, by the transpose of an m × n matrix A, we mean a matrix of order n × m having the rows of A as its columns and the columns of A as its rows. For example, if
   A = \begin{pmatrix} 1 & 4 & 5 \\ 0 & 1 & 2 \end{pmatrix} then A^t = \begin{pmatrix} 1 & 0 \\ 4 & 1 \\ 5 & 2 \end{pmatrix}.
Thus, the transpose of a row vector is a column vector and vice-versa.

Theorem 1.2.2 For any matrix A, we have (A^t)^t = A.

Proof. Let A = [a_{ij}], A^t = [b_{ij}] and (A^t)^t = [c_{ij}]. Then the definition of transpose gives
   c_{ij} = b_{ji} = a_{ij} for all i, j,
and the result follows.

Definition 1.2.3 (Addition of Matrices) Let A = [a_{ij}] and B = [b_{ij}] be two m × n matrices. Then the sum A + B is defined to be the matrix C = [c_{ij}] with c_{ij} = a_{ij} + b_{ij}.

Note that we define the sum of two matrices only when they have the same order.

Definition 1.2.4 (Multiplying a Scalar to a Matrix) Let A = [a_{ij}] be an m × n matrix. Then for any element k ∈ R, we define kA = [k a_{ij}]. For example, if
   A = \begin{pmatrix} 1 & 4 & 5 \\ 0 & 1 & 2 \end{pmatrix} and k = 5, then 5A = \begin{pmatrix} 5 & 20 & 25 \\ 0 & 5 & 10 \end{pmatrix}.

Theorem 1.2.5 Let A, B and C be matrices of order m × n, and let k, ℓ ∈ R. Then
1. A + B = B + A (commutativity).
2. (A + B) + C = A + (B + C) (associativity).
3. k(ℓA) = (kℓ)A.
4. (k + ℓ)A = kA + ℓA.

Proof. Part 1. Let A = [a_{ij}] and B = [b_{ij}]. Then
   A + B = [a_{ij}] + [b_{ij}] = [a_{ij} + b_{ij}] = [b_{ij} + a_{ij}] = [b_{ij}] + [a_{ij}] = B + A,
as real numbers commute. The reader is required to prove the other parts, as all the results follow from the properties of real numbers.

Exercise 1.2.6
1. Suppose A + B = A. Then show that B = 0.
2. Suppose A + B = 0. Then show that B = (-1)A = [-a_{ij}].

Definition 1.2.7 (Additive Inverse) Let A be an m × n matrix.
1. Then there exists a matrix B with A + B = 0. This matrix B is called the additive inverse of A and is denoted by -A = (-1)A.
2. Also, for the matrix 0_{m×n}, A + 0 = 0 + A = A. Hence, the matrix 0_{m×n} is called the additive identity.
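The operations defined so far are easy to experiment with. The following sketch is an addition to these notes (it assumes the NumPy library); it checks Theorem 1.2.2 and part 1 of Theorem 1.2.5 on concrete matrices.

```python
import numpy as np

A = np.array([[1, 4, 5],
              [0, 1, 2]])

# Transpose: rows of A become columns of A^t, and (A^t)^t = A (Theorem 1.2.2).
print(A.T)
print(np.array_equal(A.T.T, A))         # -> True

# Scalar multiplication (Definition 1.2.4): every entry is multiplied by k.
print(5 * A)                            # -> [[5 20 25], [0 5 10]]

# Addition is commutative (Theorem 1.2.5, part 1); B is an arbitrary example.
B = np.array([[2, 0, 1],
              [3, 3, 3]])
print(np.array_equal(A + B, B + A))     # -> True
```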
1.2.1 Multiplication of Matrices

Definition 1.2.8 (Matrix Multiplication / Product) Let A = [a_{ij}] be an m × n matrix and B = [b_{ij}] be an n × r matrix. The product AB is the matrix C = [c_{ij}] of order m × r, with
   c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj} = a_{i1} b_{1j} + a_{i2} b_{2j} + \cdots + a_{in} b_{nj}.

Observe that the product AB is defined if and only if the number of columns of A equals the number of rows of B. For example, if
   A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 1 \end{pmatrix} and B = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 3 \\ 1 & 0 & 4 \end{pmatrix}, then
   AB = \begin{pmatrix} 1+0+3 & 2+0+0 & 1+6+12 \\ 2+0+1 & 4+0+0 & 2+12+4 \end{pmatrix} = \begin{pmatrix} 4 & 2 & 19 \\ 3 & 4 & 18 \end{pmatrix}.

Note that in this example, while AB is defined, the product BA is not defined. However, for square matrices A and B of the same order, both the products AB and BA are defined.

Definition 1.2.9 Two square matrices A and B are said to commute if AB = BA.

Remark 1.2.10
1. Note that if A is a square matrix of order n then A I_n = I_n A. Also, for any d ∈ R, the matrix d I_n commutes with every square matrix of order n. The matrices d I_n for d ∈ R are called scalar matrices.
2. In general, the matrix product is not commutative. For example, consider the two matrices
   A = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} and B = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}.
   Then check that
   AB = \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix} ≠ \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} = BA.

Theorem 1.2.11 Suppose that the matrices A, B and C are so chosen that the matrix multiplications are defined.
1. Then (AB)C = A(BC). That is, matrix multiplication is associative.
2. For any k ∈ R, (kA)B = k(AB) = A(kB).
3. Then A(B + C) = AB + AC. That is, multiplication distributes over addition.
4. If A is an n × n matrix then A I_n = I_n A = A.
5. For any square matrix A of order n and D = diag(d_1, d_2, ..., d_n), we have
   - the first row of DA is d_1 times the first row of A;
   - for 1 ≤ i ≤ n, the i-th row of DA is d_i times the i-th row of A.
   A similar statement holds for the columns of A when A is multiplied on the right by D.

Proof. Part 1. Let A = [a_{ij}]_{m×n}, B = [b_{ij}]_{n×p} and C = [c_{ij}]_{p×q}. Then
   (BC)_{kj} = \sum_{\ell=1}^{p} b_{k\ell} c_{\ell j} and (AB)_{i\ell} = \sum_{k=1}^{n} a_{ik} b_{k\ell}.
Therefore,
   (A(BC))_{ij} = \sum_{k=1}^{n} a_{ik} (BC)_{kj} = \sum_{k=1}^{n} a_{ik} \Big( \sum_{\ell=1}^{p} b_{k\ell} c_{\ell j} \Big)
                = \sum_{k=1}^{n} \sum_{\ell=1}^{p} a_{ik} b_{k\ell} c_{\ell j}
                = \sum_{\ell=1}^{p} \Big( \sum_{k=1}^{n} a_{ik} b_{k\ell} \Big) c_{\ell j}
                = \sum_{\ell=1}^{p} (AB)_{i\ell} c_{\ell j} = ((AB)C)_{ij}.

Part 5. For all j = 1, 2, ..., n, we have
   (DA)_{ij} = \sum_{k=1}^{n} d_{ik} a_{kj} = d_i a_{ij},
as d_{ik} = 0 whenever i ≠ k. Hence the required result follows. The reader is required to prove the other parts.

Exercise 1.2.12
1. Let A and B be two matrices. If the matrix addition A + B is defined, then prove that (A + B)^t = A^t + B^t. Also, if the matrix product AB is defined, then prove that (AB)^t = B^t A^t.
2. Let A = [a_1, a_2, ..., a_n] be a row vector and B = (b_1, b_2, ..., b_n)^t a column vector. Compute the matrix products AB and BA.
3. Let n be a positive integer. Compute A^n for the following matrices:
   \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}.
   Can you guess a formula for A^n and prove it by induction?
4. Find examples for the following statements.
   (a) Suppose that the matrix product AB is defined. Then the product BA need not be defined.
   (b) Suppose that the matrix products AB and BA are defined. Then the matrices AB and BA can have different orders.
   (c) Suppose that the matrices A and B are square matrices of order n. Then AB and BA may or may not be equal.
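As a quick numerical companion to the definition and to Remark 1.2.10, the sketch below (an addition to these notes, using NumPy, where @ denotes the matrix product) reproduces the worked example AB and the non-commuting pair above.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 1]])
B = np.array([[1, 2, 1],
              [0, 0, 3],
              [1, 0, 4]])

# c_ij = sum_k a_ik b_kj; this matches the worked example: [[4, 2, 19], [3, 4, 18]].
print(A @ B)

# Remark 1.2.10: the product is not commutative in general.
C = np.array([[1, 1],
              [0, 0]])
D = np.array([[1, 0],
              [1, 0]])
print(C @ D)   # -> [[2, 0], [0, 0]]
print(D @ C)   # -> [[1, 1], [1, 1]], so CD != DC
```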
1.3 Some More Special Matrices

Definition 1.3.1
1. A matrix A over R is called symmetric if A^t = A, and skew-symmetric if A^t = -A.
2. A matrix A is said to be orthogonal if A A^t = A^t A = I.

Example 1.3.2
1. Let
   A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & -1 \\ 3 & -1 & 4 \end{pmatrix} and B = \begin{pmatrix} 0 & 1 & 2 \\ -1 & 0 & -3 \\ -2 & 3 & 0 \end{pmatrix}.
   Then A is a symmetric matrix and B is a skew-symmetric matrix.
2. Let
   A = \begin{pmatrix} 1/\sqrt{3} & 1/\sqrt{3} & 1/\sqrt{3} \\ 1/\sqrt{2} & -1/\sqrt{2} & 0 \\ 1/\sqrt{6} & 1/\sqrt{6} & -2/\sqrt{6} \end{pmatrix}.
   Then A is an orthogonal matrix.
3. Let A = [a_{ij}] be an n × n matrix with a_{ij} = 1 if i = j + 1 and a_{ij} = 0 otherwise. Then A^n = 0 and A^ℓ ≠ 0 for 1 ≤ ℓ ≤ n - 1. The matrices A for which a positive integer k exists such that A^k = 0 are called nilpotent matrices. The least positive integer k for which A^k = 0 is called the order of nilpotency.
4. Let A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}. Then A^2 = A. The matrices that satisfy the condition A^2 = A are called idempotent matrices.

Exercise 1.3.3
1. Show that for any square matrix A, S = (A + A^t)/2 is symmetric, T = (A - A^t)/2 is skew-symmetric, and A = S + T.
2. Show that the product of two lower triangular matrices is a lower triangular matrix. A similar statement holds for upper triangular matrices.
3. Let A and B be symmetric matrices. Show that AB is symmetric if and only if AB = BA.
4. Show that the diagonal entries of a skew-symmetric matrix are zero.
5. Let A, B be skew-symmetric matrices with AB = BA. Is the matrix AB symmetric or skew-symmetric?
6. Let A be a symmetric matrix of order n with A^2 = 0. Is it necessarily true that A = 0?
7. Let A be a nilpotent matrix. Show that there exists a matrix B such that B(I + A) = I = (I + A)B.

1.3.1 Submatrix of a Matrix

Definition 1.3.4 A matrix obtained by deleting some of the rows and/or columns of a matrix is said to be a submatrix of the given matrix.

For example, if A = \begin{pmatrix} 1 & 4 & 5 \\ 0 & 1 & 2 \end{pmatrix}, a few submatrices of A are
   [1], [2], \begin{pmatrix} 1 \\ 0 \end{pmatrix}, [1 \; 5], \begin{pmatrix} 1 & 5 \\ 0 & 2 \end{pmatrix}, A.
But the matrices \begin{pmatrix} 1 & 4 \\ 1 & 0 \end{pmatrix} and \begin{pmatrix} 1 & 4 \\ 0 & 2 \end{pmatrix} are not submatrices of A. (The reader is advised to give reasons.)

Miscellaneous Exercises

Exercise 1.3.5
1. Complete the proofs of Theorems 1.2.5 and 1.2.11.
2. Let x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}, A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} and B = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}. Geometrically interpret y = Ax and y = Bx.
3. Consider the two coordinate transformations
   x_1 = a_{11} y_1 + a_{12} y_2,  x_2 = a_{21} y_1 + a_{22} y_2
   and
   y_1 = b_{11} z_1 + b_{12} z_2,  y_2 = b_{21} z_1 + b_{22} z_2.
   (a) Compose the two transformations to express x_1, x_2 in terms of z_1, z_2.
   (b) If x = (x_1, x_2)^t, y = (y_1, y_2)^t and z = (z_1, z_2)^t, then find matrices A, B and C such that x = Ay, y = Bz and x = Cz.
   (c) Is C = AB?
4. For a square matrix A of order n, we define the trace of A, denoted by tr(A), as
   tr(A) = a_{11} + a_{22} + \cdots + a_{nn}.
   Then for two square matrices A and B of the same order, show the following:
   (a) tr(A + B) = tr(A) + tr(B).
   (b) tr(AB) = tr(BA).
5. Show that there do not exist matrices A and B such that AB - BA = cI_n for any c ≠ 0.
6. Let A and B be two m × n matrices and let x be an n × 1 column vector.
   (a) Prove that if Ax = 0 for all x, then A is the zero matrix.
   (b) Prove that if Ax = Bx for all x, then A = B.
7. Let A be an n × n matrix such that AB = BA for all n × n matrices B. Show that A = αI for some α ∈ R.
8. Let A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \\ 3 & 1 \end{pmatrix}. Show that there exist infinitely many matrices B such that BA = I_2. Also, show that there does not exist any matrix C such that AC = I_3.
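The properties in Example 1.3.2 and Exercise 1.3.3 can be verified numerically. The sketch below is an addition to these notes (using NumPy); it checks the orthogonality of the matrix in Example 1.3.2.2 and the symmetric/skew-symmetric decomposition of Exercise 1.3.3.1 on an arbitrarily chosen matrix M.

```python
import numpy as np

# Example 1.3.2.2: check numerically that A is orthogonal, i.e. A A^t = I.
A = np.array([[1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)],
              [1/np.sqrt(2), -1/np.sqrt(2),  0.0],
              [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)]])
print(np.allclose(A @ A.T, np.eye(3)))   # -> True

# Exercise 1.3.3.1: every square matrix splits as M = S + T with
# S symmetric and T skew-symmetric.
M = np.array([[1.0, 2.0],
              [5.0, 3.0]])
S = (M + M.T) / 2
T = (M - M.T) / 2
print(np.allclose(S, S.T), np.allclose(T, -T.T), np.allclose(S + T, M))
# -> True True True
```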
1.3.2 Block Matrices

Let A be an n × m matrix and B be an m × p matrix. Suppose r < m. Then we can decompose the matrices A and B as A = [P \; Q] and B = \begin{pmatrix} H \\ K \end{pmatrix}, where P has order n × r and H has order r × p. That is, the matrices P and Q are submatrices of A, with P consisting of the first r columns of A and Q consisting of the last m - r columns of A. Similarly, H and K are submatrices of B, with H consisting of the first r rows of B and K consisting of the last m - r rows of B. We now prove the following important theorem.

Theorem 1.3.6 Let A = [a_{ij}] = [P \; Q] and B = [b_{ij}] = \begin{pmatrix} H \\ K \end{pmatrix} be defined as above. Then
   AB = PH + QK.

Proof. First note that the matrices PH and QK are each of order n × p. The matrix products PH and QK are valid, as the orders of the matrices P, H, Q and K are, respectively, n × r, r × p, n × (m - r) and (m - r) × p. Let P = [P_{ij}], Q = [Q_{ij}], H = [H_{ij}] and K = [K_{ij}]. Then, for 1 ≤ i ≤ n and 1 ≤ j ≤ p, we have
   (AB)_{ij} = \sum_{k=1}^{m} a_{ik} b_{kj} = \sum_{k=1}^{r} a_{ik} b_{kj} + \sum_{k=r+1}^{m} a_{ik} b_{kj}
             = \sum_{k=1}^{r} P_{ik} H_{kj} + \sum_{k=r+1}^{m} Q_{ik} K_{kj}
             = (PH)_{ij} + (QK)_{ij} = (PH + QK)_{ij}.

Theorem 1.3.6 is very useful for the following reasons:
1. The orders of the matrices P, Q, H and K are smaller than those of A or B.
2. It may be possible to block the matrix in such a way that a few blocks are either identity matrices or zero matrices. In this case, it may be easy to handle the matrix product using the block form.
3. When we want to prove results using induction, we may assume the result for r × r submatrices and then look at (r+1) × (r+1) submatrices, and so on.

For example, if A = \begin{pmatrix} 1 & 2 & 0 \\ 2 & 5 & 0 \end{pmatrix} and B = \begin{pmatrix} a & b \\ c & d \\ e & f \end{pmatrix}, then
   AB = \begin{pmatrix} 1 & 2 \\ 2 & 5 \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \end{pmatrix} \begin{pmatrix} e & f \end{pmatrix} = \begin{pmatrix} a + 2c & b + 2d \\ 2a + 5c & 2b + 5d \end{pmatrix}.

If A = \begin{pmatrix} 0 & -1 & 2 \\ 3 & 1 & 4 \\ -2 & 5 & -3 \end{pmatrix}, then A can be decomposed into blocks in several ways: for instance, with P consisting of the first two columns and Q of the last column, or with P the first column and Q the last two columns, and so on.

Suppose A = \begin{pmatrix} P & Q \\ R & S \end{pmatrix} and B = \begin{pmatrix} E & F \\ G & H \end{pmatrix}, where the rows of A are partitioned into groups of sizes n_1 and n_2, the columns of A into groups of sizes m_1 and m_2, the rows of B into groups of sizes r_1 and r_2, and the columns of B into groups of sizes s_1 and s_2. Then the matrices P, Q, R, S and E, F, G, H are called the blocks of the matrices A and B, respectively.

Even if A + B is defined, the orders of P and E may not be the same, and hence we may not be able to add A and B in the block form. But if A + B and P + E are defined, then
   A + B = \begin{pmatrix} P + E & Q + F \\ R + G & S + H \end{pmatrix}.
Similarly, if the product AB is defined, the product PE need not be defined. We can therefore speak of the matrix product AB as a block product of matrices only if both the products AB and PE are defined, and in this case we have
   AB = \begin{pmatrix} PE + QG & PF + QH \\ RE + SG & RF + SH \end{pmatrix}.
That is, once a partition of A is fixed, the partition of B has to be properly chosen for purposes of block addition or multiplication.

Exercise 1.3.7
1. Compute the matrix product AB using block matrix multiplication for the matrices
   A = \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{pmatrix} and B = \begin{pmatrix} 1 & 2 & 2 & 1 \\ 1 & 1 & 2 & 1 \\ 1 & 1 & 1 & 1 \\ -1 & 1 & -1 & 1 \end{pmatrix}.
2. Let A = \begin{pmatrix} P & Q \\ R & S \end{pmatrix}. If P, Q, R and S are symmetric, what can you say about A? Are P, Q, R and S symmetric when A is symmetric?
3. Let A = [a_{ij}] and B = [b_{ij}] be two matrices. Suppose a_1, a_2, ..., a_n are the rows of A and b_1, b_2, ..., b_p are the columns of B. If the product AB is defined, then show that
   AB = [A b_1, A b_2, ..., A b_p] = \begin{pmatrix} a_1 B \\ a_2 B \\ \vdots \\ a_n B \end{pmatrix}.
   That is, left multiplication by A is the same as multiplying each column of B by A; similarly, right multiplication by B is the same as multiplying each row of A by B.
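A small numerical check of Theorem 1.3.6 follows; this is an addition to these notes (the entries of B below are illustrative, and NumPy is assumed).

```python
import numpy as np

# Theorem 1.3.6: split A column-wise and B row-wise at the same index r,
# then AB = PH + QK.
A = np.array([[1, 2, 0],
              [2, 5, 0]])
B = np.array([[1, 2],
              [3, 4],
              [5, 6]])
r = 2
P, Q = A[:, :r], A[:, r:]      # first r columns of A, last m - r columns
H, K = B[:r, :], B[r:, :]      # first r rows of B, last m - r rows
print(np.array_equal(A @ B, P @ H + Q @ K))   # -> True
```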
1.4 Matrices over Complex Numbers

Here the entries of the matrix are complex numbers. All the definitions still hold; one just needs to look at the following additional definitions.

Definition 1.4.1 (Conjugate Transpose of a Matrix)
1. Let A = [a_{ij}] be an m × n matrix over C. The conjugate of A, denoted by \bar{A}, is the matrix B = [b_{ij}] with b_{ij} = \bar{a}_{ij}. For example, if
   A = \begin{pmatrix} 1 & 4 + 3i & i \\ 0 & 1 & i - 2 \end{pmatrix} then \bar{A} = \begin{pmatrix} 1 & 4 - 3i & -i \\ 0 & 1 & -i - 2 \end{pmatrix}.
2. Let A = [a_{ij}] be an m × n matrix over C. The conjugate transpose of A, denoted by A^*, is the matrix B = [b_{ij}] with b_{ij} = \bar{a}_{ji}. For the matrix A above,
   A^* = \begin{pmatrix} 1 & 0 \\ 4 - 3i & 1 \\ -i & -i - 2 \end{pmatrix}.
3. A square matrix A over C is called Hermitian if A^* = A.
4. A square matrix A over C is called skew-Hermitian if A^* = -A.
5. A square matrix A over C is called unitary if A^* A = A A^* = I.
6. A square matrix A over C is called normal if A A^* = A^* A.

Remark 1.4.2 If A = [a_{ij}] with a_{ij} ∈ R, then A^* = A^t.

Exercise 1.4.3
1. Give examples of Hermitian, skew-Hermitian and unitary matrices that have entries with non-zero imaginary parts.
2. Restate the results on transpose in terms of conjugate transpose.
3. Show that for any square matrix A, S = (A + A^*)/2 is Hermitian, T = (A - A^*)/2 is skew-Hermitian, and A = S + T.
4. Show that if A is a complex triangular matrix and AA^* = A^*A, then A is a diagonal matrix.


Chapter 2  Linear System of Equations

2.1 Introduction

Let us look at some examples of linear systems.

1. Suppose a, b ∈ R. Consider the system ax = b.
   (a) If a ≠ 0, then the system has a unique solution x = b/a.
   (b) If a = 0 and
       i. b ≠ 0, then the system has no solution;
       ii. b = 0, then the system has an infinite number of solutions, namely all x ∈ R.

2. We now consider a system with 2 equations in 2 unknowns. Consider the equation ax + by = c. If at least one of the coefficients a, b is non-zero, then this linear equation represents a line in R^2. Thus for the system
   a_1 x + b_1 y = c_1 and a_2 x + b_2 y = c_2,
   the set of solutions is given by the points of intersection of the two lines. There are three cases to be considered; each case is illustrated by an example.
   (a) Unique solution: x + 2y = 1 and x + 3y = 1. The unique solution is (x, y)^t = (1, 0)^t. Observe that in this case a_1 b_2 - a_2 b_1 ≠ 0.
   (b) Infinite number of solutions: x + 2y = 1 and 2x + 4y = 2. The set of solutions is (x, y)^t = (1 - 2y, y)^t = (1, 0)^t + y(-2, 1)^t with y arbitrary. In other words, both equations represent the same line. Observe that in this case a_1 b_2 - a_2 b_1 = 0, a_1 c_2 - a_2 c_1 = 0 and b_1 c_2 - b_2 c_1 = 0.
   (c) No solution: x + 2y = 1 and 2x + 4y = 3. The equations represent a pair of parallel lines, and hence there is no point of intersection. Observe that in this case a_1 b_2 - a_2 b_1 = 0 but a_1 c_2 - a_2 c_1 ≠ 0.

3. As a last example, consider 3 equations in 3 unknowns. A linear equation ax + by + cz = d represents a plane in R^3 provided (a, b, c) ≠ (0, 0, 0). As in the case of 2 equations in 2 unknowns, we have to look at the points of intersection of the given three planes. Here again we have three cases, illustrated by examples.
   (a) Unique solution: consider the system x + y + z = 3, x + 4y + 2z = 7 and 4x + 10y - z = 13. The unique solution to this system is (x, y, z)^t = (1, 1, 1)^t; i.e. the three planes intersect at a point.
   (b) Infinite number of solutions: consider the system x + y + z = 3, x + 2y + 2z = 5 and 3x + 4y + 4z = 11. The set of solutions to this system is (x, y, z)^t = (1, 2 - z, z)^t = (1, 2, 0)^t + z(0, -1, 1)^t, with z arbitrary: the three planes intersect on a line.
   (c) No solution: the system x + y + z = 3, x + 2y + 2z = 5 and 3x + 4y + 4z = 13 has no solution. In this case, we get three parallel lines as intersections of the above planes taken two at a time. The readers are advised to supply the proof.
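Readers may like to check the three cases of the last example numerically. The sketch below is an addition to these notes; it uses NumPy and peeks ahead at the rank criterion for consistency developed in Sections 2.5 and 2.6.

```python
import numpy as np

# Case (a): the nonsingular 3 x 3 system has a unique solution.
A1 = np.array([[1, 1, 1], [1, 4, 2], [4, 10, -1]], dtype=float)
b1 = np.array([3, 7, 13], dtype=float)
print(np.linalg.solve(A1, b1))           # -> [1. 1. 1.]

# Cases (b) and (c) share a singular coefficient matrix, so np.linalg.solve
# would fail; comparing ranks of A and [A b] distinguishes the two cases.
A2 = np.array([[1, 1, 1], [1, 2, 2], [3, 4, 4]], dtype=float)
b2 = np.array([3, 5, 11], dtype=float)   # infinitely many solutions
b3 = np.array([3, 5, 13], dtype=float)   # no solution

for b in (b2, b3):
    rA = np.linalg.matrix_rank(A2)
    rAb = np.linalg.matrix_rank(np.column_stack([A2, b]))
    print(rA, rAb, "consistent" if rA == rAb else "inconsistent")
# -> 2 2 consistent
# -> 2 3 inconsistent
```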
2.2 Definition and a Solution Method

Definition 2.2.1 (Linear System) A linear system of m equations in n unknowns x_1, x_2, ..., x_n is a set of equations of the form

   a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1
   a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2
      \vdots                                              (2.2.1)
   a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n = b_m

where a_{ij}, b_i ∈ R for 1 ≤ i ≤ m and 1 ≤ j ≤ n. The linear system (2.2.1) is called homogeneous if b_1 = b_2 = \cdots = b_m = 0 and non-homogeneous otherwise.

We rewrite the above equations in the form Ax = b, where

   A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}, \quad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \quad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}.

The matrix A is called the coefficient matrix, and the block matrix [A \; b] is called the augmented matrix of the linear system (2.2.1).

Remark 2.2.2 Observe that the i-th row of the augmented matrix [A \; b] represents the i-th equation, and the j-th column of the coefficient matrix A corresponds to the coefficients of the j-th variable x_j. That is, for 1 ≤ i ≤ m and 1 ≤ j ≤ n, the entry a_{ij} of the coefficient matrix A corresponds to the i-th equation and the j-th variable x_j.

For a system of linear equations Ax = b, the system Ax = 0 is called the associated homogeneous system.

Definition 2.2.3 (Solution of a Linear System) A solution of the linear system Ax = b is a column vector y with entries y_1, y_2, ..., y_n such that the linear system (2.2.1) is satisfied by substituting y_i in place of x_i. That is, if y = (y_1, y_2, ..., y_n)^t, then Ay = b holds.

Note: the zero n-tuple x = 0 is always a solution of the system Ax = 0 and is called the trivial solution. A non-zero n-tuple x satisfying Ax = 0 is called a non-trivial solution.
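The sketch below (an addition to these notes, using NumPy) builds the coefficient and augmented matrices for the first system of Section 2.1 and verifies a solution in the sense of Definition 2.2.3.

```python
import numpy as np

# Coefficient matrix, right-hand side and augmented matrix for
# x + y + z = 3, x + 4y + 2z = 7, 4x + 10y - z = 13.
A = np.array([[1, 1, 1],
              [1, 4, 2],
              [4, 10, -1]])
b = np.array([3, 7, 13])
augmented = np.column_stack([A, b])      # the block matrix [A  b]
print(augmented)

# Definition 2.2.3: y is a solution exactly when A y = b.
y = np.array([1, 1, 1])
print(np.array_equal(A @ y, b))          # -> True

# The zero vector is always a (trivial) solution of Ax = 0.
z = np.zeros(3, dtype=int)
print(np.array_equal(A @ z, z))          # -> True
```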
