Introduction to the Mathematical Theory of Systems and Control

Jan Willem Polderman and Jan C. Willems

Preface

number of persons present in a room, activity in the kitchen, etc. Motors in washing machines, in dryers, and in many other household appliances are controlled to run at a fixed speed, independent of the load. Modern automobiles have dozens of devices that regulate various variables. It is, in fact, possible to view also the suspension of an automobile as a regulatory device that absorbs the irregularities of the road so as to improve the comfort and safety of the passengers. Regulation is indeed a very important aspect of modern technology. For many reasons, such as efficiency, quality control, safety, and reliability, industrial production processes require regulation in order to guarantee that certain key variables (temperatures, mixtures, pressures, etc.) be kept at appropriate values. Factors that inhibit these desired values from being achieved are external disturbances, as for example the properties of raw materials and loading levels, or changes in the properties of the plant, for example due to aging of the equipment or to failure of some devices. Regulation problems also occur in other areas, such as economics and biology.

One of the central concepts in control is feedback: the value of one variable in the plant is measured and used (fed back) in order to take appropriate action through a control variable at another point in the plant. A good example of a feedback regulator is a thermostat: it senses the room temperature, compares it with the set point (the desired temperature), and feeds back the result to the boiler, which then starts or shuts off depending on whether the temperature is too low or too high.

Man has been devising control devices ever since the beginning of civilization, as can be expected from the prevalence of regulation problems. Control historians attribute the first conscious design of a regulatory feedback mechanism in the West to the Dutch inventor Cornelis Drebbel (1572–1633).
Drebbel designed a clever contraption combining thermal and mechanical effects in order to keep the temperature of an oven at a constant temperature. Being an alchemist as well as an inventor, Drebbel believed that his oven, the Athanor, would turn lead into gold. Needless to say, he did not meet with much success in this endeavor, notwithstanding the inventiveness of his temperature control mechanism. Later in the seventeenth century, Christiaan Huygens (1629–1695) invented a flywheel device for speed control of windmills. This idea was the basis of the centrifugal fly-ball governor (see Figure P.1) used by James Watt (1736–1819), the inventor of the steam engine. The centrifugal governor regulated the speed of a steam engine. It was a very successful device used in all steam engines during the industrial revolution, and it became the first mass-produced control mechanism in existence. Many control laboratories have therefore taken Watt's fly-ball governor as their favorite icon.

[Figure P.1: Fly-ball governor.]

The control problem for steam engine speed occurred in a very natural way. During the nineteenth century, prime movers driven by steam engines were running throughout the grim factories of the industrial revolution. It was clearly important to avoid the speed changes that would naturally occur in the prime mover when there was a change in the load, which occurred, for example, when a machine was disconnected from the prime mover. Watt's fly-ball governor achieved this goal by letting more steam into the engine when the speed decreased and less steam when the speed increased, thus achieving a speed that tends to be insensitive to load variations. It was soon realized that this adjustment should be done cautiously, since by overreacting (called overcompensation), an all too enthusiastic governor could bring the steam engine into oscillatory motion. Because of the characteristic sound that accompanied it, this phenomenon was called hunting.
Nowadays, we recognize this as an instability due to high gain control. The problem of tuning centrifugal governors that achieved fast regulation but avoided hunting was propounded to James Clerk Maxwell (1831–1879) (the discoverer of the equations for electromagnetic fields), who reduced the question to one about the stability of differential equations. His paper "On Governors," published in 1868 in the Proceedings of the Royal Society of London, can be viewed as the first mathematical paper on control theory viewed from the perspective of regulation. Maxwell's problem and its solution are discussed in Chapter 7 of this book, under the heading of the Routh–Hurwitz problem.

The field of control viewed as regulation remained mainly technology driven during the first half of the twentieth century. There were two very important developments in this period, both of which had a lasting influence on the field. First, there was the invention of the Proportional–Integral–Differential (PID) controller. The PID controller produces a control signal that consists of the weighted sum of three terms (a PID controller is therefore often called a three-term controller). The P-term produces a signal that is proportional to the error between the actual and the desired value of the to-be-controlled variable. It achieves the basic feedback compensation control, leading to a control input whose purpose is to make the to-be-controlled variable increase when it is too low and decrease when it is too high. The I-term feeds back the integral of the error. This term results in a very large correction signal whenever this error does not converge to zero. For the error there hence holds: go to zero or bust. When properly tuned, this term achieves robustness, good performance not only for the nominal plant but also for plants that are close to it, since the I-term tends to force the error to zero for a wide range of the plant parameters. The D-term acts on the derivative of the error.
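As an aside from the historical account, the three-term law just described can be sketched in a few lines of code. This is a minimal illustration, not the book's development: the plant, the gains, and the step size below are invented for the example.

```python
# Minimal discrete-time PID sketch: u = Kp*e + Ki*integral(e) + Kd*de/dt.
# All numerical values are hypothetical, chosen only for illustration.

def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_error": None}

    def pid(error):
        state["integral"] += error * dt  # I-term: accumulated past error
        if state["prev_error"] is None:
            derivative = 0.0
        else:
            # D-term: reacts as soon as the error starts changing
            derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return pid

# Drive a toy first-order plant x' = -x + u toward the set point 1.0
# with forward-Euler steps of length dt = 0.01 over 30 seconds.
pid = make_pid(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(3000):
    u = pid(1.0 - x)
    x += (-x + u) * 0.01

print(round(x, 3))  # close to 1.0: the I-term removes the steady-state error
```

With the P-term alone the toy plant would settle at an offset below the set point; it is the integral term that forces the error to zero, and the derivative term that damps the approach.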
It results in a control correction signal as soon as the error starts increasing or decreasing, and it can thus be expected that this anticipatory action results in a fast response. The PID controller had, and still has, a very large technological impact, particularly in the area of chemical process control.

A second important event that stimulated the development of regulation in the first half of the twentieth century was the invention in the 1930s of the feedback amplifier by Black. The feedback amplifier (see Figure P.2) was an impressive technological development: it permitted signals to be amplified in a reliable way, insensitive to the parameter changes inherent in vacuum-tube (and also solid-state) amplifiers. (See also Exercise 9.3.)

[Figure P.2: Feedback amplifier; the voltage divider built from the resistors R_1 and R_2 realizes μ = R_1/(R_1 + R_2).]

The key idea of Black's negative feedback amplifier is subtle but simple. Assume that we have an electronic amplifier that amplifies its input voltage V to V_out = KV. Now use a voltage divider and feed back μV_out to the amplifier input, so that, when subtracted (whence the term negative feedback amplifier) from the input voltage V_in to the feedback amplifier, the input voltage to the amplifier itself equals V = V_in − μV_out. Combining these two relations yields the crucial formula

    V_out = V_in / (μ + 1/K).

This equation, simple as it may seem, carries an important message; see Exercise 9.3. What's the big deal with this formula? Well, the value of the gain K of an electronic amplifier is typically large, but also very unstable, as a consequence of sensitivity to aging, temperature, loading, etc. The voltage divider, on the other hand, can be implemented by means of passive resistors, which results in a very stable value for μ.
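The insensitivity that this formula buys can be checked with a few numbers (the value of μ and the two raw gains K below are made up for the illustration):

```python
# Closed-loop gain V_out/V_in = 1/(mu + 1/K) of Black's feedback amplifier.
# mu and the two values of K are hypothetical illustrative numbers.

def closed_loop_gain(mu, k):
    return 1.0 / (mu + 1.0 / k)

mu = 0.1  # set by passive resistors, hence very stable

g1 = closed_loop_gain(mu, 10_000)   # nominal raw amplifier gain
g2 = closed_loop_gain(mu, 100_000)  # raw gain drifted by a factor of ten

print(g1, g2)  # both close to 1/mu = 10
```

A tenfold drift in K moves the overall gain by less than a tenth of a percent: the closed-loop gain is pinned near 1/μ by the stable divider, which is exactly the robustness that mattered for repeater stations.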
Now, for large (although uncertain) K, there holds 1/(μ + 1/K) ≈ 1/μ, and so somehow Black's magic circuitry results in an amplifier with a stable amplification gain 1/μ based on an amplifier that has an inherent uncertain gain K.

The invention of the negative feedback amplifier had far-reaching applications to telephone technology and other areas of communication, since long-distance communication was very hampered by the annoying drifting of the gains of the amplifiers used in repeater stations. Pursuing the above analysis in more detail shows also that the larger the amplifier gain K, the more insensitive the overall gain 1/(μ + 1/K) of the feedback amplifier becomes. However, at high gains, the above circuit could become dynamically unstable because of dynamic effects in the amplifier. For amplifiers, this phenomenon is called singing, again because of the characteristic noise that accompanies this instability. Nyquist, a colleague of Black at Bell Laboratories, analyzed this stability issue and came up with the celebrated Nyquist stability criterion. By pursuing these ideas further, various techniques were developed for setting the gains of feedback controllers. The sum total of these design methods was termed classical control theory and comprised such things as the Nyquist stability test, Bode plots, gain and phase margins, techniques for tuning PID regulators, lead–lag compensation, and root–locus methods.

This account of the history of control brings us to the 1950s. We will now backtrack and follow the other historical root of control, trajectory optimization. The problem of trajectory transfer is the question of determining the paths of a dynamical system that transfer the system from a given initial to a prescribed terminal state. Often paths are sought that are optimal in some sense.
A beautiful example of such a problem is the brachystochrone problem that was posed by Johann Bernoulli in 1696, very soon after the discovery of differential calculus. At that time he was professor at the University of Groningen, where he taught from 1695 to 1705. The brachystochrone problem consists in finding the path between two given points A and B along which a body falling under its own weight moves in the shortest possible time. In 1696 Johann Bernoulli posed this problem as a public challenge to his contemporaries. Six eminent mathematicians (and not just any six) solved the problem: Johann himself, his elder brother Jakob, Leibniz, de l'Hôpital, Tschirnhaus, and Newton. Newton submitted his solution anonymously, but Johann Bernoulli recognized the culprit, since, as he put it, ex ungue leonem: you can tell the lion by its claws. The brachystochrone turned out to be the cycloid traced by a point on the circle that rolls without slipping on the horizontal line through A and passes through A and B. It is easy to see that this defines the cycloid uniquely (see Figures P.3 and P.4).

[Figure P.3: Brachystochrone. Figure P.4: Cycloid.]

The brachystochrone problem led to the development of the Calculus of Variations, of crucial importance in a number of areas of applied mathematics, above all in the attempts to express the laws of mechanics in terms of variational principles. Indeed, to the amazement of its discoverers, it was observed that the possible trajectories of a mechanical system are precisely those that minimize a suitable action integral. In the words of Legendre, Ours is the best of all possible worlds. Thus the calculus of variations had far-reaching applications beyond that of finding optimal paths: in certain applications, it could also tell us what paths are physically possible. Out of these developments came the Euler–Lagrange and Hamilton equations as conditions for the vanishing of the first variation.
Later, Legendre and Weierstrass added conditions for the nonpositivity of the second variation, thus obtaining conditions for trajectories to be local minima.

The problem of finding optimal trajectories in the above sense, while extremely important for the development of mathematics and mathematical physics, was not viewed as a control problem until the second half of the twentieth century. However, this changed in 1956 with the publication of Pontryagin's maximum principle. The maximum principle consists of a very general set of necessary conditions that a control input that generates an optimal path has to satisfy. This result is an important generalization of the classical problems in the calculus of variations. Not only does it allow a much larger class of problems to be tackled, but importantly, it brought forward the problem of optimal input selection (in contrast to optimal path selection) as the central issue of trajectory optimization.

Around the same time that the maximum principle appeared, it was realized that the (optimal) input could also be implemented as a function of the state. That is, rather than looking for a control input as a function of time, it is possible to choose the (optimal) input as a feedback function of the state. This idea is the basis for dynamic programming, which was formulated by Bellman in the late 1950s and which was promptly published in many of the applied mathematics journals in existence. With the insight obtained by dynamic programming, the distinction between (feedback based) regulation and the (input selection based) trajectory optimization became blurred. Of course, the distinction is more subtle than the above suggests, particularly because it may not be possible to measure the whole state accurately; but we do not enter into this issue here.

[Figure P.5: Intelligent control. A plant, driven by exogenous inputs and by control inputs applied through actuators, produces to-be-controlled outputs and, through sensors, measured outputs; a feedback controller maps the measured outputs to the control inputs.]

Out of all these developments, both in
the areas of regulation and of trajectory planning, the picture of Figure P.5 emerged as the central one in control theory. The basic aim of control as it is generally perceived is the design of the feedback processor in Figure P.5. It emphasizes feedback as the basic principle of control: the controller accepts the measured outputs of the plant as its own inputs, and from there, it computes the desired control inputs to the plant. In this setup, we consider the plant as a black box that is driven by inputs and that produces outputs. The controller functions as follows. From the sensor outputs, information is obtained about the disturbances, about the actual dynamics of the plant if these are poorly understood, about unknown parameters, and about the internal state of the plant. Based on these sensor observations, and on the control objectives, the feedback processor computes what control input to apply. Via the actuators, appropriate influence is thus exerted on the plant.

Often, the aim of the control action is to steer the to-be-controlled outputs back to their desired equilibria. This is called stabilization, and will be studied in Chapters 9 and 10 of this book. However, the goal of the controller may also be disturbance attenuation: making sure that the disturbance inputs have limited effect on the to-be-controlled outputs; or it may be tracking: making sure that the plant can follow exogenous inputs. Or the design question may be robustness: the controller should be so designed that the controlled system meets its specs (that is, that it achieves the design specifications, such as stability, tracking, or a degree of disturbance attenuation) for a wide range of plant parameters.

The mathematical techniques used to model the plant, to analyze it, and to synthesize controllers took a major shift in the late 1950s and early 1960s with the introduction of state space ideas. The classical way of viewing a system is in terms of the transfer function from inputs to outputs.
By specifying the way in which exponential inputs transform into exponential outputs, one obtains (at least for linear time-invariant systems) an insightful specification of a dynamical system. The mathematics underlying these ideas are Fourier and Laplace transforms, and these very much dominated control theory until the early 1960s. In the early sixties, the prevalent models used shifted from transfer function to state space models. Instead of viewing a system simply as a relation between inputs and outputs, state space models consider this transformation as taking place via the transformation of the internal state of the system. When state models came into vogue, differential equations became the dominant mathematical framework needed. State space models have many advantages indeed. They are more akin to the classical mathematical models used in physics, chemistry, and economics. They provide a more versatile language, especially because it is much easier to incorporate nonlinear effects. They are also more adapted to computations. Under the impetus of this new way of looking at systems, the field expanded enormously. Important new concepts were introduced, notably (among many others) those of controllability and observability, which became of central importance in control theory. These concepts are discussed in Chapter 5.

Three important theoretical developments in control, all using state space models, characterized the late 1950s: the maximum principle, dynamic programming, and the Linear–Quadratic–Gaussian (LQG) problem. As already mentioned, the maximum principle can be seen as the culmination of a long, 300-year historical development related to trajectory optimization. Dynamic programming provided algorithms for computing optimal trajectories in feedback form, and it merged the feedback control picture of Figure P.5 with the optimal path selection problems of the calculus of variations.
The LQG problem, finally, was a true feedback control result: it showed how to compute the feedback control processor of Figure P.5 in order to achieve optimal disturbance attenuation. In this result the plant is assumed to be linear, the optimality criterion involves an integral of a quadratic expression in the system variables, and the disturbances are modeled as Gaussian stochastic processes. Whence the terminology LQG problem. The LQG problem, unfortunately, falls beyond the scope of this introductory book. In addition to being impressive theoretical results in their own right, these developments had a deep and lasting influence on the mathematical outlook taken in control theory. In order to emphasize this, it is customary to refer to the state space theory as modern control theory, to distinguish it from the classical control theory described earlier.

Unfortunately, this paradigm shift had its downsides as well. Rather than aiming for a good balance between mathematics and engineering, the field of systems and control became mainly mathematics driven. In particular, mathematical modeling was not given the central place in systems theory that it deserves. Robustness, i.e., the integrity of the control action against plant variations, was not given the central place in control theory that it deserved. Fortunately, this situation changed with the recent formulation and the solution of what is called the H∞ problem. The H∞ problem gives a method for designing a feedback processor as in Figure P.5 that is optimally robust in some well-defined sense. Unfortunately, the H∞ problem also falls beyond the scope of this introductory book.

A short description of the contents of this book

Both the transfer function and the state space approaches view a system as a signal processor that accepts inputs and transforms them into outputs. In the transfer function approach, this processor is described through the way in which exponential inputs are transformed into exponential outputs.
In the state space approach, this processor involves the state as intermediate variable, but the ultimate aim remains to describe how inputs lead to outputs. This input/output point of view plays an important role in this book, particularly in the later chapters. However, our starting point is different, more general, and, we claim, more adapted to modeling and more suitable for applications.

As a paradigm for control, input/output or input/state/output models are often very suitable. Many control problems can be viewed in terms of plants that are driven by control inputs through actuators and feedback mechanisms that compute the control action on the basis of the outputs of sensors, as depicted in Figure P.5. However, as a tool for modeling dynamical systems, the input/output point of view is unnecessarily restrictive. Most physical systems do not have a preferred signal flow direction, and it is important to let the mathematical structures reflect this. This is the approach taken in this book: we view systems as defined by any relation among dynamic variables, and it is only when turning to control in Chapters 9 and 10 that we adopt the input/state/output point of view. The general model structures that we develop in the first half of the book are referred to as the behavioral approach. We now briefly explain the main underlying ideas.

We view a mathematical model as a subset of a universum of possibilities. Before we accept a mathematical model as a description of reality, all outcomes in the universum are in principle possible. After we accept the mathematical model as a convenient description of reality, we declare that only outcomes in a certain subset are possible. Thus a mathematical model is an exclusion law: it excludes all outcomes except those in a given subset. This subset is called the behavior of the mathematical model.
Proceeding from this perspective, we arrive at the notion of a dynamical system as simply a subset of time-trajectories, as a family of time signals taking on values in a suitable signal space. This will be the starting point taken in this book. Thus the input/output signal flow graph emerges in general as a construct, sometimes a purely mathematical one, not necessarily implying a physical structure.

We take the description of a dynamical system in terms of its behavior, thus in terms of the time trajectories that it permits, as the vantage point from which the concepts put forward in this book unfold. We are especially interested in linear time-invariant differential systems: "linearity" means that these systems obey the superposition principle, "time-invariance" that the laws of the system do not depend explicitly on time, and "differential" that they can be described by differential equations. Specific examples of such systems abound: linear electrical circuits, linear (or linearized) mechanical systems, linearized chemical reactions, the majority of the models used in econometrics, many examples from biology, etc.

Understanding linear time-invariant differential systems requires first of all an accurate mathematical description of the behavior, i.e., of the solution set of a system of differential equations. This issue, how one wants to define a solution of a system of differential equations, turns out to be more subtle than it may at first appear and is discussed in detail in Chapter 2.

Linear time-invariant differential systems have a very nice structure. When we have a set of variables that can be described by such a system, then there is a transparent way of describing how trajectories in the behavior are generated. Some of the variables, it turns out, are free, unconstrained. They can thus be viewed as unexplained by the model and imposed on the system by the environment. These variables are called inputs.
However, once these free variables are chosen, the remaining variables (called the outputs) are not yet completely specified. Indeed, the internal dynamics of the system generates many possible trajectories depending on the past history of the system, i.e., on the initial conditions inside the system. The formalization of these initial conditions is done by the concept of state. Discovering this structure of the behavior, with free inputs, bound outputs, and the memory, the state variables, is the program of Chapters 3, 4, and 5.

When one models an (interconnected) physical system from first principles, then unavoidably auxiliary variables, in addition to the variables modeled, will appear in the model. Those auxiliary variables are called latent variables, in order to distinguish them from the manifest variables, which are the variables whose behavior the model aims at describing. The interaction between manifest and latent variables is one of the recurring themes in this book.

We use this behavioral definition in order to study some important features of dynamical systems. Two important properties that play a central role are controllability and observability. Controllability refers to the question of whether or not one trajectory of a dynamical system can be steered towards another one. Observability refers to the question of what one can deduce from the observation of one set of system variables about the behavior of another set. Controllability and observability are classical concepts in control theory. The novel feature of the approach taken in this book is to cast these properties in the context of behaviors.

The book uses the behavioral approach in order to present a systematic view for constructing and analyzing mathematical models. The book also aims at explaining some synthesis problems, notably the design of control algorithms. We treat control from a classical, input/output point of view. It is also possible to approach control problems from a behavioral point of view.
But, while this offers some important advantages, it is still a relatively undeveloped area of research, and it is not ready for exposition in an introductory text. We will touch on these developments briefly in Section 10.8.

We now proceed to give a chapter-by-chapter overview of the topics covered in this book. In the first chapter we discuss the mathematical definition of a dynamical system that we use and the rationale underlying this concept. The basic ingredients of this definition are the behavior of a dynamical system as the central object of study and the notions of manifest and latent variables. The manifest variables are what the model aims at describing. Latent variables are introduced as auxiliary variables in the modeling process but are often also introduced for mathematical reasons, for purposes of analysis, or in order to exhibit a special property.

In the second chapter, we introduce linear time-invariant differential systems. It is this model class that we shall be mainly concerned with in this book. The crucial concept discussed is the notion of a solution, more specifically of a weak solution, of a system of differential equations. As we shall see, systems of linear time-invariant differential equations are parametrized by polynomial matrices. An important part of this chapter is devoted to the study of properties of polynomial matrices and their interplay with differential equations.

In the third chapter we study the behavior of linear differential systems in detail. We prove that the variables in such systems may be divided into two sets: one set contains the variables that are free (we call them inputs), the other set contains the variables that are bound (we call them outputs). We also study how the relation between inputs and outputs can be expressed as a convolution integral.

The fourth chapter is devoted to state models.
The state of a dynamical system parametrizes its memory, the extent to which the past influences the future. State equations, that is, the equations linking the manifest variables to the state, turn out to be first-order differential equations. The output of a system is determined only after the input and the initial conditions have been specified.

Chapter 5 deals with controllability and observability. A controllable system is one in which an arbitrary past trajectory can be steered so as to be concatenated with an arbitrary future trajectory. An observable system is one in which the latent variables can be deduced from the manifest variables. These properties play a central role in control theory.

In the sixth chapter we take another look at latent variable and state space systems. In particular, we show how to eliminate latent variables and how to introduce state variables. Thus a system of linear differential equations containing latent variables can be transformed into an equivalent system in which these latent variables have been eliminated.

Stability is the topic of Chapter 7. We give the classical stability conditions of systems of differential equations in terms of the roots of the associated polynomial or of the eigenvalue locations of the system matrix. We also discuss the Routh–Hurwitz tests, which provide conditions for polynomials to have only roots with negative real part.

Up to Chapter 7, we have treated systems in their natural, time-domain setting. However, linear time-invariant systems can also be described by the way in which they process sinusoidal or, more generally, exponential signals. The resulting frequency domain description of systems is explained in Chapter 8. In addition, we discuss some characteristic features and nomenclature for system responses related to the step response and the frequency domain properties.

The remainder of the book is concerned with control theory. Chapter 9 starts with an explanation of the difference between open-loop and feedback control.
We subsequently prove the pole placement theorem. This theorem states that for a controllable system, there exists, for any desired monic polynomial, a state feedback gain matrix such that the eigenvalues of the closed loop system are the roots of the desired polynomial. This result, called the pole placement theorem, is one of the central achievements of modern control theory.

The tenth chapter is devoted to observers: algorithms for deducing the system state from measured inputs and outputs. The design of observers is very similar to the stabilization and pole placement procedures. Observers are subsequently used in the construction of output feedback compensators. Three important cybernetic principles underpin our construction of observers and feedback compensators. The first principle is error feedback: the estimate of the state is updated through the error between the actual and the expected observations. The second is certainty equivalence. This principle suggests that when one needs the value of an unobserved variable, for example for determining the suitable control action, it is reasonable to use the estimated value of that variable, as if it were the exact value. The third cybernetic principle used is the separation principle. This implies that we will separate the design of the observer and the controller. Thus the observer is not designed with its use for control in mind, and the controller is not adapted to the fact that the observer produces only estimates of the state.

Notes and references

There are a number of books on the history of control. The origins of control, going back all the way to the Babylonians, are described in [40]. Two other history books on the subject, spanning the period from the industrial revolution to the postwar era, are [10, 11]. The second of these books has a detailed account of the invention of the PID regulator and the negative feedback amplifier.
A collection of historically important papers, including original articles by Maxwell, Hurwitz, Black, Nyquist, Bode, Pontryagin, and Bellman, among others, has been reprinted in [9]. The history of the brachystochrone problem has been recounted in most books on the history of mathematics. Its relation to the maximum principle is described in [53]. The book [19] contains the history of the calculus of variations.

There are numerous books that explain classical control. Take any textbook on control written before 1960. The state space approach to systems, and the development of the LQG problem, happened very much under the impetus of the work of Kalman. An inspiring early book that explains some of the main ideas is [15]. The special issue [5] of the IEEE Transactions on Automatic Control contains a collection of papers devoted to the Linear–Quadratic–Gaussian problem, up-to-date at the time of publication. Texts devoted to this problem are, for example, [33, 3, 4]. Classical control theory emphasizes simple, but nevertheless often very effective and robust, controllers. Optimal control à la Pontryagin and LQ control aim at trajectory transfer and at shaping the transient response; LQG techniques center on disturbance attenuation; while H∞ control emphasizes regulation against both disturbances and plant uncertainties. The latter, H∞ control, is an important recent development that originated with the ideas of Zames [66]. This theory culminated in the remarkable double-Riccati-equation paper [16]. The behavioral approach originated in [55] and was further developed, for example, in [56, 57, 58, 59, 60] and in this book. In [61] some control synthesis ideas are put forward from this vantage point.

1 Dynamical Systems

1.1 Introduction

We start this book at the very beginning, by asking ourselves the question, What is a dynamical system?
Disregarding for a moment the dynamical aspects (forgetting about time), we are immediately led to ponder the more basic issue, What is a mathematical model? What does it tell us? What is its mathematical nature? Mind you, we are not asking a philosophical question: we will not engage in an erudite discourse about the relation between reality and its mathematical description. Neither are we going to elucidate the methodology involved in actually deriving, setting up, postulating mathematical models. What we are asking is the simple question, When we accept a mathematical expression, a formula, as an adequate description of a phenomenon, what mathematical structure have we obtained?

We view a mathematical model as an exclusion law. A mathematical model expresses the opinion that some things can happen, are possible, while others cannot, are declared impossible. Thus Kepler claims that planetary orbits that do not satisfy his three famous laws are impossible. In particular, he judges nonelliptical orbits as unphysical. The second law of thermodynamics limits the transformation of heat into mechanical work. Certain combinations of heat, work, and temperature histories are declared to be impossible. Economic production functions tell us that certain amounts of raw materials, capital, and labor are needed in order to manufacture a finished product: they prohibit the creation of finished products unless the required resources are available.

We formalize these ideas by stating that a mathematical model selects a certain subset from a universum of possibilities. This subset consists of the occurrences that the model allows, that it declares possible. We call the subset in question the behavior of the mathematical model.

True, we have been trained to think of mathematical models in terms of equations. How do equations enter this picture?
Simply, an equation can be viewed as a law excluding the occurrence of certain outcomes, namely, those combinations of variables for which the equations are not satisfied. This way, equations define a behavior. We therefore speak of behavioral equations when mathematical equations are intended to model a phenomenon. It is important to emphasize already at this point that behavioral equations provide an effective, but at the same time highly nonunique, way of specifying a behavior. Different equations can define the same mathematical model. One should therefore not exaggerate the intrinsic significance of a specific set of behavioral equations.

In addition to behavioral equations and the behavior of a mathematical model, there is a third concept that enters our modeling language ab initio: latent variables. We think of the variables that we try to model as manifest variables: they are the attributes on which the modeler in principle focuses attention. However, in order to come up with a mathematical model for a phenomenon, one invariably has to consider other, auxiliary, variables. We call them latent variables. These may be introduced for no other reason than in order to express in a convenient way the laws governing a model. For example, when modeling the behavior of a complex system, it may be convenient to view it as an interconnection of component subsystems. Of course, the variables describing these subsystems are, in general, different from those describing the original system. When modeling the external terminal behavior of an electrical circuit, we usually need to introduce the currents and voltages in the internal branches as auxiliary variables. When expressing the first and second laws of thermodynamics, it has proven convenient to introduce the internal energy and entropy as latent variables. When discussing the synthesis of feedback control laws, it is often imperative to consider models that display their internal state explicitly.
We think of these internal variables as latent variables. Thus in first principles modeling, we distinguish two types of variables. The terminology first principles modeling refers to the fact that the physical laws that play a role in the system at hand are the elementary laws from physics, mechanics, electrical circuits, etc.

This triptych (behavior / behavioral equations / manifest and latent variables) is the essential structure of our modeling language. The fact that we take the behavior, and not the behavioral equations, as the central object specifying a mathematical model has the consequence that basic system properties (such as time-invariance, linearity, stability, controllability, observability) will also refer to the behavior. The subsequent problem then always arises how to deduce these properties from the behavioral equations.

1.2 Models

1.2.1 The universum and the behavior

Assume that we have a phenomenon that we want to model. To start with, we cast the situation in the language of mathematics by assuming that the phenomenon produces outcomes in a set U, which we call the universum. Often U consists of a product space, for example a finite dimensional vector space. Now, a (deterministic) mathematical model for the phenomenon (viewed purely from the black-box point of view, that is, by looking at the phenomenon only from its terminals, by looking at the model as descriptive but not explanatory) claims that certain outcomes are possible, while others are not. Hence a model recognizes a certain subset B of U. This subset is called the behavior (of the model). Formally:

Definition 1.2.1 A mathematical model is a pair (U, B) with U a set, called the universum (its elements are called outcomes) and B a subset of U, called the behavior.

Example 1.2.2 During the ice age, shortly after Prometheus stole fire from the gods, man realized that H₂O could appear, depending on the temperature, as liquid water, steam, or ice. It took a while longer before this situation was captured in a mathematical model.
The generally accepted model, with the temperature in degrees Celsius, is U = {ice, water, steam} × [−273, ∞) and B = ({ice} × [−273, 0]) ∪ ({water} × [0, 100]) ∪ ({steam} × [100, ∞)).

Example 1.2.3 Economists believe that there exists a relation between the amount P produced of a particular economic resource, the capital K invested in the necessary infrastructure, and the labor L expended towards its production. A typical model looks like U = R_+^3 and B = {(P, K, L) ∈ R_+^3 | P = F(K, L)}, where F : R_+^2 → R_+ is the production function. Typically, F : (K, L) ↦ αK^β L^γ, with α, β, γ ∈ R_+, 0 ≤ β ≤ 1, 0 ≤ γ ≤ 1, constant parameters depending on the production process, for example the type of technology used. Before we modeled the situation, we were ready to believe that every triple (P, K, L) ∈ R_+^3 could occur. After introduction of the production function, we limit these possibilities to the triples satisfying P = αK^β L^γ. The subset of R_+^3 obtained this way is the behavior in the example under consideration.

1.2.2 Behavioral equations

In applications, models are often described by equations (see Example 1.2.3). Thus the behavior consists of those elements in the universum for which "balance" equations are satisfied.

Definition 1.2.4 Let U be a universum, E a set, and f_1, f_2 : U → E. The mathematical model (U, B) with B = {u ∈ U | f_1(u) = f_2(u)} is said to be described by behavioral equations and is denoted by (U, E, f_1, f_2). The set E is called the equating space. We also call (U, E, f_1, f_2) a behavioral equation representation of (U, B).

Often, an appropriate way of looking at f_1(u) = f_2(u) is as equilibrium conditions: the behavior B consists of those outcomes for which two (sets of) quantities are in balance.

Example 1.2.5 Consider an electrical resistor. We may view this as imposing a relation between the voltage V across the resistor and the current I through it.
Ohm recognized more than a century ago that (for metal wires) the voltage is proportional to the current: V = RI, with the proportionality factor R called the resistance. This yields a mathematical model with universum U = R^2 and behavior B, induced by the behavioral equation V = RI. Here E = R, f_1 : (V, I) ↦ V, and f_2 : (V, I) ↦ RI. Thus B = {(I, V) ∈ R^2 | V = RI}.

Of course, nowadays we know many devices imposing much more complicated relations between V and I, which we nevertheless choose to call (non-Ohmic) resistors. An example is an (ideal) diode, given by the (I, V) characteristic B = {(I, V) ∈ R^2 | (V ≥ 0 and I = 0) or (V = 0 and I ≤ 0)}. Other resistors may exhibit even more complex behavior, due to hysteresis, for example.

Example 1.2.6 Three hundred years ago, Sir Isaac Newton discovered (better: deduced from Kepler's laws, since, as he put it, Hypotheses non fingo) that masses attract each other according to the inverse square law. Let us formalize what this says about the relation between the force F and the position vector q of the mass m. We assume that the other mass M is located at the origin of R^3. The universum U consists of all conceivable force/position vectors, yielding U = R^3 × R^3. After Newton told us the behavioral equation F = −k mMq/||q||^3, we knew more: B = {(F, q) ∈ R^3 × R^3 | F = −k mMq/||q||^3}, with k the gravitational constant, k = 6.67 × 10^−8 cm^3/g·sec^2. Note that B has three degrees of freedom, down three from the six degrees of freedom in U.

In many applications models are described by behavioral inequalities. It is easy to accommodate this situation in our setup. Simply take in the above definition E to be an ordered space and consider the behavioral inequality f_1(u) ≤ f_2(u). Many models in operations research (e.g., in linear programming) and in economics are of this nature. In this book we will not pursue models described by inequalities.
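Definition 1.2.4 is concrete enough to mirror directly in code. The sketch below (in Python; the class and method names are our own, purely illustrative choices) represents a behavioral equation pair (U, E, f_1, f_2) and tests membership of an outcome in the behavior B, instantiated for Ohm's law from Example 1.2.5.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class BehavioralEquation:
    """A behavioral equation representation (U, E, f_1, f_2):
    the behavior B is the set of outcomes u in U with f_1(u) == f_2(u)."""
    f1: Callable[[Any], Any]
    f2: Callable[[Any], Any]

    def in_behavior(self, u) -> bool:
        # Membership test: the outcome u belongs to B iff the equation balances.
        return self.f1(u) == self.f2(u)

# Ohm's law V = R*I as a behavioral equation on outcomes u = (I, V):
# f_1 picks out V, f_2 maps (I, V) to R*I.  R = 100.0 is an illustrative value.
R = 100.0
ohm = BehavioralEquation(f1=lambda u: u[1], f2=lambda u: R * u[0])

print(ohm.in_behavior((0.5, 50.0)))  # → True   (50.0 == 100.0 * 0.5)
print(ohm.in_behavior((0.5, 5.0)))   # → False
```

This also makes the nonuniqueness of behavioral equations tangible: composing both f_1 and f_2 with any bijection of E yields a different object describing exactly the same behavior B.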
Note further that whereas behavioral equations specify the behavior uniquely, the converse is obviously not true. Clearly, if f_1(u) = f_2(u) is a set of behavioral equations for a certain phenomenon and if f : E → E' is any bijection, then the set of behavioral equations (f ∘ f_1)(u) = (f ∘ f_2)(u) forms another set of behavioral equations yielding the same mathematical model. Since we have a tendency to think of mathematical models in terms of behavioral equations, most models being presented in this form, it is important to emphasize their ancillary role: it is the behavior, the solution set of the behavioral equations, not the behavioral equations themselves, that is the essential result of a modeling procedure.

1.2.3 Latent variables

Our view of a mathematical model as expressed in Definition 1.2.1 is as follows: identify the outcomes of the phenomenon that we want to model (specify the universum U) and identify the behavior (specify B ⊆ U). However, in most modeling exercises we need to introduce other variables in addition to the attributes in U that we try to model. We call these other, auxiliary, variables latent variables. In a bit, we will give a series of instances where latent variables appear. Let us start with two concrete examples.

Example 1.2.7 Consider a one-port resistive electrical circuit. This consists of a graph with nodes and branches. Each of the branches contains a resistor, except one, which is an external port. An example is shown in Figure 1.1. Assume that we want to model the port behavior, the relation between the voltage drop across and the current through the external port. Introduce as auxiliary variables the voltages (V_1, ..., V_5) across and the currents (I_1, ..., I_5) through the internal branches, numbered in the obvious way as indicated in Figure 1.1. The following relations must be satisfied:

• Kirchhoff's current law: the sum of the currents entering each node must be zero;
• Kirchhoff's voltage law: the sum of the voltage drops across the branches of any loop must be zero;
• the constitutive laws of the resistors in the branches.

FIGURE 1.1. Electrical circuit with resistors only.

These yield:

Constitutive laws    Kirchhoff's current laws    Kirchhoff's voltage laws
R_1 I_1 = V_1        I = I_1 + I_2               V_1 + V_4 = V
R_2 I_2 = V_2        I_1 = I_3 + I_4             V_2 + V_5 = V
R_3 I_3 = V_3        I_5 = I_2 + I_3             V_1 + V_4 = V_2 + V_5
R_4 I_4 = V_4        I = I_4 + I_5               V_1 + V_3 = V_2
R_5 I_5 = V_5                                    V_3 + V_5 = V_4

Our basic purpose is to express the relation between the voltage across and current into the external port. In the above example, this is a relation of the form V = RI (where R can be calculated from R_1, R_2, R_3, R_4, and R_5), obtained by eliminating (V_1, ..., V_5, I_1, ..., I_5) from the above equations. However, the basic model, the one obtained from first principles, involves the variables (V_1, ..., V_5, I_1, ..., I_5) in addition to the variables (V, I) whose behavior we are trying to describe. The node voltages and the currents through the internal branches (the variables (V_1, ..., V_5, I_1, ..., I_5) in the above example) are thus latent variables. The port variables (V, I) are the manifest variables. The relation between I and V is obtained by eliminating the latent variables. How to do that in a systematic way is explained in Chapter 6. See also Exercise 6.1.

Example 1.2.8 An economist is trying to figure out how much of a package of n economic goods will be produced. As a firm believer in equilibrium theory, our economist assumes that the production volumes consist of those points where, product for product, the supply equals the demand. This equilibrium set is a subset of R_+^n. It is the behavior that we are looking for. In order to specify this set, we can proceed as follows. Introduce as latent variables the price, the supply, and the demand of each of the
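The elimination of latent variables in Example 1.2.7 can also be illustrated numerically: fix the port voltage V, solve the first-principles equations for the ten latent variables (I_1, ..., I_5, V_1, ..., V_5), and read off the port resistance R = V/I. The sketch below (Python with numpy; the function name, the branch numbering, and the numerical shortcut are our assumptions, not the systematic symbolic elimination of Chapter 6) does exactly that.

```python
import numpy as np

def port_resistance(R, V=1.0):
    """Eliminate the latent variables of Example 1.2.7 numerically:
    solve the linear first-principles equations for a fixed port
    voltage V and return the effective port resistance V / I."""
    # Unknown vector x = [I1, I2, I3, I4, I5, V1, V2, V3, V4, V5].
    A = np.zeros((10, 10))
    b = np.zeros(10)
    for k, Rk in enumerate(R):          # constitutive laws: Rk*Ik = Vk
        A[k, k] = Rk
        A[k, 5 + k] = -1.0
    A[5, [0, 2, 3]] = [1, -1, -1]       # KCL: I1 = I3 + I4
    A[6, [4, 1, 2]] = [1, -1, -1]       # KCL: I5 = I2 + I3
    A[7, [5, 8]] = [1, 1]; b[7] = V     # KVL: V1 + V4 = V
    A[8, [6, 9]] = [1, 1]; b[8] = V     # KVL: V2 + V5 = V
    A[9, [5, 7, 6]] = [1, 1, -1]        # KVL: V1 + V3 = V2
    x = np.linalg.solve(A, b)
    I = x[0] + x[1]                     # port current: I = I1 + I2
    return V / I

print(port_resistance([1.0, 1.0, 1.0, 1.0, 1.0]))  # balanced bridge ≈ 1.0
```

For the balanced bridge (all resistances equal), the middle branch R_3 carries no current, so the port resistance equals the common branch resistance; this gives a quick sanity check of the equations.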
