Lecture Notes for EE263
Stephen Boyd
Introduction to Linear Dynamical Systems
Autumn 2007-08

Copyright Stephen Boyd. Limited copying or use for educational purposes is fine, but please acknowledge source, e.g., "taken from Lecture Notes for EE263, Stephen Boyd, Stanford 2007."

Contents

Lecture 1 – Overview
Lecture 2 – Linear functions and examples
Lecture 3 – Linear algebra review
Lecture 4 – Orthonormal sets of vectors and QR factorization
Lecture 5 – Least-squares
Lecture 6 – Least-squares applications
Lecture 7 – Regularized least-squares and Gauss-Newton method
Lecture 8 – Least-norm solutions of underdetermined equations
Lecture 9 – Autonomous linear dynamical systems
Lecture 10 – Solution via Laplace transform and matrix exponential
Lecture 11 – Eigenvectors and diagonalization
Lecture 12 – Jordan canonical form
Lecture 13 – Linear dynamical systems with inputs and outputs
Lecture 14 – Example: Aircraft dynamics
Lecture 15 – Symmetric matrices, quadratic forms, matrix norm, and SVD
Lecture 16 – SVD applications
Lecture 17 – Example: Quantum mechanics
Lecture 18 – Controllability and state transfer
Lecture 19 – Observability and state estimation
Lecture 20 – Some final comments
Basic notation
Matrix primer
Crimes against matrices
Solving general linear equations using Matlab
Least-squares and least-norm solutions using Matlab
Exercises

Lecture 1 – Overview

• course mechanics
• outline & topics
• what is a linear dynamical system?
• why study linear systems?
• some examples

Course mechanics

• all class info, lectures, homeworks, announcements on class web page: www.stanford.edu/class/ee263

course requirements:
• weekly homework
• takehome midterm exam (date TBD)
• takehome final exam (date TBD)

Prerequisites

• exposure to linear algebra (e.g., Math 103)
• exposure to Laplace transform, differential equations

not needed, but might increase appreciation:
• control systems
• circuits & systems
• dynamics

Major topics & outline

• linear algebra & applications
• autonomous linear dynamical systems
• linear dynamical systems with inputs & outputs
• basic quadratic control & estimation

Linear dynamical system

a continuous-time linear dynamical system (CT LDS) has the form

    dx/dt = A(t)x(t) + B(t)u(t),    y(t) = C(t)x(t) + D(t)u(t)

where:
• t ∈ R denotes time
• x(t) ∈ R^n is the state (vector)
• u(t) ∈ R^m is the input or control
• y(t) ∈ R^p is the output
• A(t) ∈ R^(n×n) is the dynamics matrix
• B(t) ∈ R^(n×m) is the input matrix
• C(t) ∈ R^(p×n) is the output or sensor matrix
• D(t) ∈ R^(p×m) is the feedthrough matrix

for lighter appearance, equations are often written

    x˙ = Ax + Bu,    y = Cx + Du

• CT LDS is a first-order vector differential equation
• also called state equations, or 'm-input, n-state, p-output' LDS

Some LDS terminology

• most linear systems encountered are time-invariant: A, B, C, D are constant, i.e., don't depend on t
• when there is no input u (hence, no B or D) the system is called autonomous
• very often there is no feedthrough, i.e., D = 0
• when u(t) and y(t) are scalar, the system is called single-input, single-output (SISO); when input & output signal dimensions are more than one, MIMO

Discrete-time linear dynamical system

a discrete-time linear dynamical system (DT LDS) has the form

    x(t+1) = A(t)x(t) + B(t)u(t),    y(t) = C(t)x(t) + D(t)u(t)

where
• t ∈ Z = {0, ±1, ±2, ...}
• (vector) signals x, u, y are sequences

a DT LDS is a first-order vector recursion
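Since the DT LDS is just a recursion, it can be simulated directly. The following minimal NumPy sketch (not part of the original notes; the matrices below are made up for illustration) propagates x(t+1) = A x(t) + B u(t) and records y(t) = C x(t) + D u(t).

```python
import numpy as np

def simulate_dt_lds(A, B, C, D, u, x0):
    """Run the recursion x(t+1) = A x(t) + B u(t), y(t) = C x(t) + D u(t).

    u is a (T, m) array of inputs; returns the (T, p) array of outputs."""
    x = np.asarray(x0, dtype=float)
    ys = []
    for u_t in u:
        ys.append(C @ x + D @ u_t)   # output at time t
        x = A @ x + B @ u_t          # state update
    return np.array(ys)

# hypothetical 2-state, 1-input, 1-output time-invariant system
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

y = simulate_dt_lds(A, B, C, D, u=np.ones((50, 1)), x0=np.zeros(2))
print(y[-1])   # output after 50 steps of a constant (step) input
```

A CT LDS is handled the same way in practice: discretize it first (for a time-invariant system, e.g., with scipy.signal.cont2discrete) and then run the same recursion.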
Why study linear systems?

applications arise in many areas, e.g.
• automatic control systems
• signal processing
• communications
• economics, finance
• circuit analysis, simulation, design
• mechanical and civil engineering
• aeronautics
• navigation, guidance

Usefulness of LDS

• depends on availability of computing power, which is large & increasing exponentially
• used for
  – analysis & design
  – implementation, embedded in real-time systems
• like DSP, was a specialized topic & technology 30 years ago

Origins and history

• parts of LDS theory can be traced to 19th century
• builds on classical circuits & systems (1920s on) (transfer functions . . . ) but with more emphasis on linear algebra
• first engineering application: aerospace, 1960s
• transitioned from specialized topic to ubiquitous in 1980s (just like digital signal processing, information theory, . . . )

Nonlinear dynamical systems

many dynamical systems are nonlinear (a fascinating topic), so why study linear systems?
• most techniques for nonlinear systems are based on linear methods
• methods for linear systems often work unreasonably well, in practice, for nonlinear systems
• if you don't understand linear dynamical systems you certainly can't understand nonlinear dynamical systems

Examples (ideas only, no details)

• let's consider a specific system x˙ = Ax, y = Cx, with x(t) ∈ R^16, y(t) ∈ R (a '16-state single-output system')
• model of a lightly damped mechanical system, but it doesn't matter

typical output:

[plot: output y(t), shown for 0 ≤ t ≤ 350 and for 0 ≤ t ≤ 1000]

• output waveform is very complicated; looks almost random and unpredictable
• we'll see that such a solution can be decomposed into much simpler (modal) components

[plot: eight modal components of the output, each shown for 0 ≤ t ≤ 350]

(idea probably familiar from 'poles')

Input design

add two inputs, two outputs to system:

    x˙ = Ax + Bu,    y = Cx,    x(0) = 0

where B ∈ R^(16×2), C ∈ R^(2×16) (same A as before)

problem: find appropriate u : R_+ → R^2 so that y(t) → y_des = (1, −2)

simple approach: consider static conditions (u, x, y constant):

    x˙ = 0 = Ax + B u_static,    y = y_des = Cx

solve for u to get:

    u_static = −(C A^(−1) B)^(−1) y_des = (−0.63, 0.36)

let's apply u = u_static and just wait for things to settle:

[plot: inputs u_1, u_2 and outputs y_1, y_2 versus t, for −200 ≤ t ≤ 1800]

. . . takes about 1500 sec for y(t) to converge to y_des

using very clever input waveforms (EE263) we can do much better, e.g.

[plot: inputs u_1, u_2 and outputs y_1, y_2 versus t, for 0 ≤ t ≤ 60]

. . . here y converges exactly in 50 sec

in fact by using larger inputs we do still better, e.g.

[plot: inputs u_1, u_2 and outputs y_1, y_2 versus t, for −5 ≤ t ≤ 25]

. . . here we have (exact) convergence in 20 sec
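As an aside (not from the notes), the static-input formula above is a one-line computation. The sketch below uses a small made-up stable system in place of the 16-state example, so A, B, C and the printed numbers are hypothetical, but the computation u_static = −(C A^(−1) B)^(−1) y_des is the same.

```python
import numpy as np

# hypothetical stable system standing in for the 16-state example
A = np.array([[-0.1,  1.0],
              [-1.0, -0.1]])      # dynamics matrix (invertible here)
B = np.eye(2)                     # input matrix
C = np.eye(2)                     # output matrix
y_des = np.array([1.0, -2.0])

# static conditions: 0 = A x + B u_static and y_des = C x
# eliminating x gives y_des = -C A^{-1} B u_static,
# so u_static = -(C A^{-1} B)^{-1} y_des
u_static = -np.linalg.solve(C @ np.linalg.solve(A, B), y_des)
print(u_static)

# check: the resulting equilibrium state gives y = y_des
x_eq = -np.linalg.solve(A, B @ u_static)
print(C @ x_eq)   # approximately [ 1. -2.]
```

For the 16-state system in the notes the same two lines would apply unchanged, only with that example's 16×16 A, 16×2 B, and 2×16 C.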
in this course we'll study
• how to synthesize or design such inputs
• the tradeoff between size of u and convergence time

Estimation / filtering

[block diagram: input u(t) passes through a filter H(s), producing w(t); an A/D converter then produces the sampled, quantized output y(t)]

• signal u is piecewise constant (period 1 sec)
• filtered by 2nd-order system H(s), step response s(t)
• A/D runs at 10Hz, with 3-bit quantizer

[plot: the signals in the estimation example, for 0 ≤ t ≤ 10]

problem: estimate original signal u, given quantized, filtered signal y

simple approach:
• ignore quantization
• design equalizer G(s) for H(s) (i.e., GH ≈ 1)
• approximate u as G(s)y
. . . yields terrible results

formulate as estimation problem (EE263) . . .

[plot: u(t) (solid) and û(t) (dotted), for 0 ≤ t ≤ 10]

RMS error 0.03, well below quantization error

Lecture 2 – Linear functions and examples

• linear equations and functions
• engineering examples
• interpretations

Linear equations

consider a system of linear equations

    y_1 = a_11 x_1 + a_12 x_2 + ··· + a_1n x_n
    y_2 = a_21 x_1 + a_22 x_2 + ··· + a_2n x_n
    ...
    y_m = a_m1 x_1 + a_m2 x_2 + ··· + a_mn x_n

it can be written in matrix form as y = Ax, where y = (y_1, ..., y_m), x = (x_1, ..., x_n), and A is the m×n matrix with i,j entry a_ij

Linear functions

a function f : R^n → R^m is linear if
• f(x + y) = f(x) + f(y), for all x, y ∈ R^n
• f(αx) = αf(x), for all x ∈ R^n and all α ∈ R
i.e., superposition holds

[figure: vectors x, y, x+y and their images f(x), f(y), f(x+y)]

Matrix multiplication function

• consider the function f : R^n → R^m given by f(x) = Ax, where A ∈ R^(m×n)
• the matrix multiplication function f is linear
• converse is true: any linear function f : R^n → R^m can be written as f(x) = Ax for some A ∈ R^(m×n)
• representation via matrix multiplication is unique: for any linear function f there is only one matrix A for which f(x) = Ax for all x
• y = Ax is a concrete representation of a generic linear function

Interpretations of y = Ax

• y is measurement or observation; x is unknown to be determined
• x is 'input' or 'action'; y is 'output' or 'result'
• y = Ax defines a function or transformation that maps x ∈ R^n into y ∈ R^m

Interpretation of a_ij

    y_i = sum_{j=1}^n a_ij x_j

a_ij is the gain factor from the jth input (x_j) to the ith output (y_i)

thus, e.g.,
• ith row of A concerns ith output
• jth column of A concerns jth input
• a_27 = 0 means 2nd output (y_2) doesn't depend on 7th input (x_7)
• a_31 ≫ a_3j for j ≠ 1 means y_3 depends mainly on x_1
• a_52 ≫ a_i2 for i ≠ 5 means x_2 affects mainly y_5
• A is lower triangular, i.e., a_ij = 0 for i < j, means y_i only depends on x_1, ..., x_i
• A is diagonal, i.e., a_ij = 0 for i ≠ j, means ith output depends only on ith input

more generally, the sparsity pattern of A, i.e., the list of zero/nonzero entries of A, shows which x_j affect which y_i

Linear elastic structure

• x_j is external force applied at some node, in some fixed direction
• y_i is (small) deflection of some node, in some fixed direction

[figure: elastic structure with external forces x_1, x_2, x_3, x_4 applied at nodes]

(provided x, y are small) we have y ≈ Ax
• A is called the compliance matrix
• a_ij gives deflection i per unit force at j (in m/N)
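A hedged illustration (not from the notes): since any linear f has a unique matrix A with f(x) = Ax, and the jth column of A is f(e_j), the matrix can be recovered column by column by probing with unit inputs. For the elastic structure this would mean applying a unit force at each node in turn and recording the deflections. The map f below is a hypothetical stand-in for whatever linear function is being measured.

```python
import numpy as np

def matrix_of(f, n):
    """Recover the unique matrix A with f(x) = A x by evaluating f on the
    standard basis vectors: the jth column of A is f(e_j)."""
    cols = [f(np.eye(n)[:, j]) for j in range(n)]
    return np.column_stack(cols)

# hypothetical linear map, e.g. a compliance-style matrix mapping forces to deflections
A_true = np.array([[2.0, 0.5, 0.1],
                   [0.5, 1.0, 0.3]])
f = lambda x: A_true @ x

A = matrix_of(f, 3)
print(np.allclose(A, A_true))   # True: the matrix representation is unique
```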
