Applications of linear algebra ppt

Dr. Shiv Jindal, India, Teacher
Published: 19-07-2017
Linear Algebra for Communications: A Gentle Introduction
Shivkumar Kalyanaraman, Rensselaer Polytechnic Institute ("shiv rpi")

Outline
- What is linear algebra, really? Vector? Matrix? Why care?
- Basis, projections, orthonormal basis
- Algebra operations: addition, scaling, multiplication, inverse
- Matrices: translation, rotation, reflection, shear, projection, etc.
- Symmetric/Hermitian and positive-definite matrices
- Decompositions:
  - Eigen-decomposition: eigenvectors/eigenvalues, invariants
  - Singular Value Decomposition (SVD)
- Sneak peeks: how these concepts relate to communications ideas: Fourier transform, least squares, transfer functions, matched filter, solving differential equations, etc.

What is "Linear" & "Algebra"?
- Properties satisfied by a line through the origin (the "one-dimensional" case):
  - A directed arrow from the origin (v) on the line, when scaled by a constant (c), remains on the line (cv).
  - Two directed arrows (u and v) on the line can be "added" to create a longer directed arrow (u + v) on the same line.
- Wait a minute: this is nothing but arithmetic with symbols!
- "Algebra": generalization and extension of arithmetic.
- "Linear" operations: addition and scaling.
- Abstract and generalize:
  - "Line" ↔ vector space having N dimensions
  - "Point" ↔ vector with N components, one in each of the N dimensions (basis vectors)
- Vectors have a "length" and a "direction".
- Basis vectors "span", i.e. define, the space and its dimensionality.
- A linear function transforming vectors ↔ a matrix.
  - The function acts on each vector component and scales it; adding up the resulting scaled components gives a new vector.
  - In general: f(cu + dv) = c f(u) + d f(v)

What is a Vector?
- Think of a vector as a directed line segment in N dimensions: it has a "length" and a "direction".
- Basic idea: convert geometry in higher dimensions into algebra.
- Once you define a "nice" basis along each dimension (x-, y-, z-axes, ...), a vector becomes an N x 1 column matrix: v = [a b c]^T
- Geometry starts to become linear algebra on vectors like v.

Examples of Geometry Becoming Algebra
- Lines are vectors through the origin, scaled and translated: mx + c.
- Intersections of lines can be modeled as addition of vectors: the solution of linear equations.
- Linear transformations of vectors can be associated with a matrix A, whose columns represent how each basis vector is transformed.
- Ellipses and conic sections: ax^2 + 2bxy + cy^2 = d
  - Let x = [x y]^T and let A be the symmetric matrix with rows [a b] and [b c].
  - Then x^T A x = d is the quadratic-form equation for the ellipse.
  - This becomes convenient at higher dimensions.
  - Note how a symmetric matrix A naturally arises from such a homogeneous multivariate equation.

Scalar vs Matrix Equations
- Line equation: y = mx + c
- Matrix equation: y = Mx + c
- Second-order equations: x^T M x = c, or y = (x^T M x)u + Mx
- ... these involve quadratic forms like x^T M x.

Vector Addition: A + B
v + w = (x1, x2) + (y1, y2) = (x1 + y1, x2 + y2)
A + B = C (use the head-to-tail method to combine vectors)

Scalar Product: av
av = a(x1, x2) = (ax1, ax2)
Scaling changes only the length, but keeps the direction fixed.
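The addition and scaling rules above can be checked numerically; a minimal sketch using numpy, with arbitrarily chosen vectors:

```python
import numpy as np

v = np.array([3.0, 4.0])   # (x1, x2)
w = np.array([1.0, 2.0])   # (y1, y2)

# Vector addition is component-wise (the head-to-tail method)
s = v + w                  # (x1 + y1, x2 + y2)

# Scalar product: scales the length, keeps the direction
a = 2.0
av = a * v

print(s)    # [4. 6.]
print(av)   # [6. 8.]

# Scaling multiplies the length (2-norm) by |a| and leaves direction unchanged
print(np.linalg.norm(av) / np.linalg.norm(v))   # 2.0
```

Note that `av / ||av||` equals `v / ||v||`: the unit vector (pure direction) is untouched by scaling.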
Sneak peek: a matrix operation (Av) can change the length, the direction, and even the dimensionality of v.

Vectors: Magnitude (Length) and Phase (Direction)
v = (x1, x2, ..., xn)^T
||v|| = sqrt( x1^2 + x2^2 + ... + xn^2 )   (magnitude, or "2-norm")
If ||v|| = 1, v is a unit vector (a unit vector is pure direction).
Alternate representations:
- Polar coordinates: (||v||, θ)
- Complex numbers: ||v|| e^{jθ}, where θ is the "phase"

Inner (Dot) Product: v.w or w^T v
v.w = (x1, x2).(y1, y2) = x1 y1 + x2 y2
The inner product is a SCALAR: v.w = ||v|| ||w|| cos(θ)
v.w = 0  ↔  v is orthogonal to w
If the vectors v, w are "columns", then the dot product is w^T v.

Inner Products, Norms: Signal Space
- Signals are modeled as vectors in a vector space: "signal space".
- To form a signal space, we first need the inner product between two signals (functions), generalized from the vector case:
  <x(t), y(t)> = integral from -inf to +inf of x(t) y*(t) dt
  = the cross-correlation between x(t) and y(t)
- Properties of the inner product:
  <a x(t), y(t)> = a <x(t), y(t)>
  <x(t), a y(t)> = a* <x(t), y(t)>
  <x(t) + y(t), z(t)> = <x(t), z(t)> + <y(t), z(t)>

Signal Space ...
- The distance in signal space is measured by calculating a norm.
- What is a norm? The norm of a signal (a generalization of "length"):
  ||x(t)|| = sqrt( <x(t), x(t)> ) = sqrt( integral of |x(t)|^2 dt ) = sqrt(E_x) = "length" of x(t)
  ||a x(t)|| = |a| ||x(t)||
- Norm between two signals: d_{x,y} = ||x(t) - y(t)||
- We refer to the norm between two signals as the Euclidean distance between them.
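The signal-space inner product, norm, and distance can be approximated by sampling the signals and replacing the integral with a Riemann sum. A sketch (sample count and test signals chosen arbitrarily), using the classic fact that cos and sin over one period are orthogonal:

```python
import numpy as np

# Approximate <x, y> = integral of x(t) y*(t) dt by a Riemann sum
# over one period; N is an arbitrary sample count.
N = 1000
t = np.arange(N) / N              # one period of duration 1, dt = 1/N
dt = 1.0 / N

x = np.cos(2 * np.pi * t)
y = np.sin(2 * np.pi * t)

inner = np.sum(x * np.conj(y)) * dt        # ≈ 0: cos and sin are orthogonal
energy = np.sum(np.abs(x) ** 2) * dt       # E_x = ||x||^2 ≈ 0.5
norm_x = np.sqrt(energy)                   # "length" of x(t) ≈ 0.707

# Euclidean distance between the two signals: d = ||x - y||
dist = np.sqrt(np.sum(np.abs(x - y) ** 2) * dt)   # ≈ 1.0
```

The distance follows the vector identity ||x - y||^2 = ||x||^2 + ||y||^2 - 2<x, y> = 0.5 + 0.5 - 0 = 1, which is why orthogonal unit-energy-scaled signals land at a predictable separation.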
Example of Distances in Signal Space
[Figure: signal vectors s1 = (a11, a12), s2 = (a21, a22), s3 = (a31, a32) and received vector z = (z1, z2) in the plane spanned by the orthonormal basis functions ψ1(t) and ψ2(t), with distances d_{s1,z}, d_{s2,z}, d_{s3,z}, and signal energies E1, E3 marked.]
The Euclidean distance between signals z(t) and s_i(t):
  d_{s_i,z} = ||s_i(t) - z(t)|| = sqrt( (a_i1 - z1)^2 + (a_i2 - z2)^2 ),  i = 1, 2, 3
Detection in AWGN noise: pick the "closest" signal vector.

Bases & Orthonormal Bases
- Basis (or axes): a frame of reference.
- A space is totally defined by a set of basis vectors: any point is a linear combination of the basis.
- Ortho-normal: orthogonal + normal, e.g.
  x = [1 0 0]^T, y = [0 1 0]^T, z = [0 0 1]^T
  x^T y = 0, x^T z = 0, y^T z = 0
- Orthogonal: the dot product is zero. Normal: the magnitude is one.

Projections w/ Orthogonal Basis
- Get the component of the vector on each axis: dot product with the unit vector on each axis.
- Sneak peek: this is what the Fourier transform does. It projects a function onto an infinite number of orthonormal basis functions (e^{jω} or e^{j2πn}) and adds up the results (to get an equivalent "representation" in the "frequency" domain).
- CDMA codes are "orthogonal", and projecting the composite received signal onto each code helps extract the symbol transmitted on that code.

Orthogonal Projections: CDMA, Spread Spectrum
[Figure: base-band spectrum spread into the radio spectrum; symbols on codes A, B, C multiplexed over time from sender to receiver.]
Each "code" is an orthogonal basis vector, so the signals sent are orthogonal.

What is a Matrix?
- A matrix is a set of elements, organized into rows and columns:
  [a b]
  [c d]

What is a Matrix?
(Geometrically)
- A matrix represents a linear function acting on vectors.
- Linearity (a.k.a. superposition): f(au + bv) = a f(u) + b f(v)
- f transforms the unit x-axis basis vector i = [1 0]^T to [a c]^T.
- f transforms the unit y-axis basis vector j = [0 1]^T to [b d]^T.
- f can therefore be represented by the matrix with [a c]^T and [b d]^T as columns:
  A = [a b]
      [c d]
- Why? f(w) = f(mi + nj) = m f(i) + n f(j) = A [m n]^T
- Column viewpoint: focus on the columns of the matrix. Linear functions f rotate and/or stretch/shrink the basis vectors.

Matrix Operating on Vectors
- A matrix is like a function that transforms the vectors on a plane.
- A matrix operating on a general point transforms its x- and y-components:
  x' = ax + by
  y' = cx + dy,  i.e.  [x' y']^T = A [x y]^T
- System of linear equations: the matrix is just the bunch of coefficients.
- Vector (column) viewpoint: the new basis vector [a c]^T is scaled by x and added to the new basis vector [b d]^T scaled by y, i.e. a linear combination of the columns of A gives [x' y']^T.
- For larger dimensions this "column" (vector-addition) viewpoint is better than the "row" viewpoint involving hyperplanes (which intersect to give the solution of a set of linear equations).

Vector Spaces, Dimension, Span
- Another way to view Ax = b: a solution exists for exactly those vectors b that lie in the "column space" of A, i.e. b is a linear combination of the basis vectors represented by the columns of A.
- The columns of A "span" the column space.
- The dimension of the column space is the column rank (or simply rank) of the matrix A.
- In general, any bunch of vectors spans a vector space (subject to some "algebraic" considerations such as closure, a zero element, etc.).
- The dimension of the space is maximal only when the vectors are linearly independent of one another.
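The column viewpoint — Av as a linear combination of the columns of A, which is also what membership in the column space means — can be checked directly; a small sketch with an arbitrary 2 x 2 matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # columns = images of the basis vectors i and j
v = np.array([4.0, 5.0])     # a general point (x, y)

# Row viewpoint: x' = ax + by, y' = cx + dy
Av = A @ v

# Column viewpoint: x * (first column) + y * (second column)
combo = v[0] * A[:, 0] + v[1] * A[:, 1]

print(Av)   # [13. 19.]
```

Both viewpoints produce the same vector; the column form is the one that scales cleanly to higher dimensions.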
- Subspaces are vector spaces of lower dimension that are subsets of the original space.
- Sneak peek: linear channel codes (e.g. Hamming, Reed-Solomon, BCH) can be viewed as k-dimensional vector subspaces of a larger N-dimensional space. k data bits can therefore be protected with N - k parity bits.
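The channel-code sneak peek can be made concrete with the classic (7,4) Hamming code: its 16 codewords form a 4-dimensional subspace of the 7-dimensional binary vector space. A sketch; the systematic generator matrix below is one standard choice, not the only one:

```python
import numpy as np
from itertools import product

# Systematic generator matrix G = [I_4 | P] for a (7,4) Hamming code:
# 4 data bits protected by 7 - 4 = 3 parity bits.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])

# Encode every 4-bit data word: the set of codewords is the row space of G.
codewords = {tuple((np.array(d) @ G) % 2) for d in product([0, 1], repeat=4)}

# Subspace check: closed under (mod-2) vector addition, i.e. XOR.
closed = all(tuple(np.array(c1) ^ np.array(c2)) in codewords
             for c1 in codewords for c2 in codewords)

# Minimum nonzero codeword weight = minimum distance of the code.
min_weight = min(sum(c) for c in codewords if any(c))

print(len(codewords), closed, min_weight)   # 16 True 3
```

Closure under addition is exactly the subspace property; the minimum distance of 3 is what lets the code correct any single bit error.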