Lecture notes on Electrical Engineering

Fundamentals of Electrical Engineering I
Don H. Johnson
Online: http://cnx.org/content/col10040/
Connexions, Rice University, Houston, Texas

Chapter 1: Introduction

1.1 Themes

From its beginnings in the late nineteenth century, electrical engineering has blossomed from focusing on electrical circuits for power, telegraphy and telephony to focusing on a much broader range of disciplines. However, the underlying themes are relevant today: power creation and transmission and information have been the underlying themes of electrical engineering for a century and a half. This course concentrates on the latter theme: the representation, manipulation, transmission, and reception of information by electrical means. This course describes what information is, how engineers quantify information, and how electrical signals represent information.

Information can take a variety of forms. When you speak to a friend, your thoughts are translated by your brain into motor commands that cause various vocal tract components–the jaw, the tongue, the lips–to move in a coordinated fashion. Information arises in your thoughts and is represented by speech, which must have a well-defined, broadly known structure so that someone else can understand what you say. Utterances convey information in sound pressure waves, which propagate to your friend's ear. There, sound energy is converted back to neural activity, and, if what you say makes sense, she understands what you say. Your words could have been recorded on a compact disc (CD), mailed to your friend and listened to by her on her stereo. Information can take the form of a text file you type into your word processor. You might send the file via e-mail to a friend, who reads it and understands it. From an information-theoretic viewpoint, all of these scenarios are equivalent, although the forms of the information representation–sound waves, plastic and computer files–are very different.
Engineers, who don't care about information content, categorize information into two different forms: analog and digital. Analog information is continuous-valued; examples are audio and video. Digital information is discrete-valued; examples are text (like what you are reading now) and DNA sequences.

The conversion of information-bearing signals from one energy form into another is known as energy conversion or transduction. All conversion systems are inefficient since some input energy is lost as heat, but this loss does not necessarily mean that the conveyed information is lost. Conceptually we could use any form of energy to represent information, but electric signals are uniquely well-suited for information representation, transmission (signals can be broadcast from antennas or sent through wires), and manipulation (circuits can be built to reduce noise and computers can be used to modify information). Thus, we will be concerned with how to

- represent all forms of information with electrical signals,
- encode information as voltages, currents, and electromagnetic waves,
- manipulate information-bearing electric signals with circuits and computers, and
- receive electric signals and convert the information expressed by electric signals back into a useful form.

Telegraphy represents the earliest electrical information system, and it dates from 1837. At that time, electrical science was largely empirical, and only those with experience and intuition could develop telegraph systems. Electrical science came of age when James Clerk Maxwell proclaimed in 1864 a set of equations that he claimed governed all electrical phenomena. These equations predicted that light was an electromagnetic wave, and that energy could propagate. Because of the complexity of Maxwell's presentation, the development of the telephone in 1876 was due largely to empirical work.

[1] This content is available online at http://cnx.org/content/m0000/2.18/.
Once Heinrich Hertz confirmed Maxwell's prediction of what we now call radio waves in about 1882, Maxwell's equations were simplified by Oliver Heaviside and others, and were widely read. This understanding of fundamentals led to a quick succession of inventions–the wireless telegraph (1899), the vacuum tube (1905), and radio broadcasting–that marked the true emergence of the communications age.

During the first part of the twentieth century, circuit theory and electromagnetic theory were all an electrical engineer needed to know to be qualified and produce first-rate designs. Consequently, circuit theory served as the foundation and the framework of all of electrical engineering education. At mid-century, three "inventions" changed the ground rules. These were the first public demonstration of the first electronic computer (1946), the invention of the transistor (1947), and the publication of A Mathematical Theory of Communication by Claude Shannon (1948). Although conceived separately, these creations gave birth to the information age, in which digital and analog communication systems interact and compete for design preferences. About twenty years later, the laser was invented, which opened even more design possibilities. Thus, the primary focus shifted from how to build communication systems (the circuit theory era) to what communications systems were intended to accomplish. Only once the intended system is specified can an implementation be selected. Today's electrical engineer must be mindful of the system's ultimate goal, and understand the tradeoffs between digital and analog alternatives, and between hardware and software configurations in designing information systems.

Note: Thanks to the translation efforts of Rice University's Disability Support Services, this collection is now available in a Braille-printable version. Please click here to download a .zip file containing all the necessary .dxb and image files.
1.2 Signals Represent Information

Whether analog or digital, information is represented by the fundamental quantity in electrical engineering: the signal. Stated in mathematical terms, a signal is merely a function. Analog signals are continuous-valued; digital signals are discrete-valued. The independent variable of the signal could be time (speech, for example), space (images), or the integers (denoting the sequencing of letters and numbers in the football score).

1.2.1 Analog Signals

Analog signals are usually signals defined over continuous independent variable(s). Speech, as described in Section 4.10, is produced by your vocal cords exciting acoustic resonances in your vocal tract. The result is pressure waves propagating in the air, and the speech signal thus corresponds to a function having independent variables of space and time and a value corresponding to air pressure: s(x, t). (Here we use vector notation x to denote spatial coordinates.) When you record someone talking, you are evaluating the speech signal at a particular spatial location, x₀ say. An example of the resulting waveform s(x₀, t) is shown in Figure 1.1.

Photographs are static, and are continuous-valued signals defined over space. Black-and-white images have only one value at each point in space, which amounts to its optical reflection properties. In Figure 1.2, an image is shown, demonstrating that it (and all other images as well) are functions of two independent spatial variables.

[Figure 1.1: A speech signal's amplitude relates to tiny air pressure variations. Shown is a recording of the vowel "e" (as in "speech").]

[2] http://www-groups.dcs.st-andrews.ac.uk/history/Biographies/Maxwell.html
[3] http://www.dss.rice.edu/
[4] http://cnx.org/content/m0000/latest/FundElecEngBraille.zip
[5] This content is available online at http://cnx.org/content/m0001/2.27/.
[Figure 1.2: On the left is the classic Lena image, which is used ubiquitously as a test image. It contains straight and curved lines, complicated texture, and a face. On the right is a perspective display of the Lena image as a signal: a function of two spatial variables. The colors merely help show what signal values are about the same size. In this image, signal values range between 0 and 255; why is that?]

Table 1.1: The ASCII translation table shows how standard keyboard characters are represented by integers. In pairs of columns, this table displays first the so-called 7-bit code (how many characters in a seven-bit code?), then the character the number represents. The numeric codes are represented in hexadecimal (base-16) notation. Mnemonic characters correspond to control characters, some of which may be familiar (like cr for carriage return) and some not (bel means a "bell").

    00 nul  10 dle  20 sp  30 0  40 @  50 P  60 `  70 p
    01 soh  11 dc1  21 !   31 1  41 A  51 Q  61 a  71 q
    02 stx  12 dc2  22 "   32 2  42 B  52 R  62 b  72 r
    03 etx  13 dc3  23 #   33 3  43 C  53 S  63 c  73 s
    04 eot  14 dc4  24 $   34 4  44 D  54 T  64 d  74 t
    05 enq  15 nak  25 %   35 5  45 E  55 U  65 e  75 u
    06 ack  16 syn  26 &   36 6  46 F  56 V  66 f  76 v
    07 bel  17 etb  27 '   37 7  47 G  57 W  67 g  77 w
    08 bs   18 can  28 (   38 8  48 H  58 X  68 h  78 x
    09 ht   19 em   29 )   39 9  49 I  59 Y  69 i  79 y
    0A nl   1A sub  2A *   3A :  4A J  5A Z  6A j  7A z
    0B vt   1B esc  2B +   3B ;  4B K  5B [  6B k  7B {
    0C np   1C fs   2C ,   3C <  4C L  5C \  6C l  7C |
    0D cr   1D gs   2D -   3D =  4D M  5D ]  6D m  7D }
    0E so   1E rs   2E .   3E >  4E N  5E ^  6E n  7E ~
    0F si   1F us   2F /   3F ?  4F O  5F _  6F o  7F del

Color images have values that express how reflectivity depends on the optical spectrum. Painters long ago found that mixing together combinations of the so-called primary colors–red, yellow and blue–can produce very realistic color images. Thus, images today are usually thought of as having three values at every point
in space, but a different set of colors is used: how much of red, green and blue is present. Mathematically, color pictures are multivalued–vector-valued–signals: s(x) = (r(x), g(x), b(x))ᵀ.

Interesting cases abound where the analog signal depends not on a continuous variable, such as time, but on a discrete variable. For example, temperature readings taken every hour have continuous–analog–values, but the signal's independent variable is (essentially) the integers.

1.2.2 Digital Signals

The word "digital" means discrete-valued and implies the signal depends on the integers rather than a continuous variable. Digital information includes numbers and symbols (characters typed on the keyboard, for example). Computers rely on the digital representation of information to manipulate and transform information. Symbols do not have a numeric value; each is typically represented by a unique number, but performing arithmetic with these representations makes no sense. The ASCII character code shown in Table 1.1 has the upper- and lowercase characters, the numbers, punctuation marks, and various other symbols represented by a seven-bit integer. For example, the ASCII code represents the letter a as the number 97 and the letter A as the number 65.

1.3 Structure of Communication Systems

The fundamental model of communications is portrayed in Figure 1.3. In this fundamental model, each message-bearing signal, exemplified by s(t), is analog and is a function of time. A system operates on zero, one, or several signals to produce more signals or to simply absorb them (Figure 1.4). In electrical engineering, we represent a system as a box, receiving input signals (usually coming from the left) and producing from them new output signals. This graphical representation is known as a block diagram. We denote input signals by lines having arrows pointing into the box, output signals by arrows pointing away.
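The ASCII examples just given are easy to check in any programming language; a minimal Python sketch (the particular characters tested are arbitrary choices):

```python
# ASCII assigns each standard keyboard character a seven-bit integer.
# Python's ord() and chr() expose this mapping directly.
assert ord("a") == 97 and ord("A") == 65   # the examples from the text
assert chr(0x20) == " "                    # hex 20 is the space character
assert 2 ** 7 == 128                       # a seven-bit code has 128 characters

# Arithmetic on symbol codes is meaningless as arithmetic on symbols,
# but the codes themselves are ordered: the digits are contiguous.
assert ord("9") - ord("0") == 9
```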
As typified by the communications model, how information flows, how it is corrupted and manipulated, and how it is ultimately received is summarized by interconnecting block diagrams: the outputs of one or more systems serve as the inputs to others.

[Figure 1.3: The Fundamental Model of Communication. The source produces the message s(t); the transmitter produces the modulated message x(t); the channel delivers the corrupted modulated message r(t) to the receiver, which passes the demodulated message to the sink.]

[Figure 1.4: A system operates on its input signal x(t) to produce an output y(t).]

In the communications model, the source produces a signal that will be absorbed by the sink. Examples of time-domain signals produced by a source are music, speech, and characters typed on a keyboard. Signals can also be functions of two variables–an image is a signal that depends on two spatial variables–or more–television pictures (video signals) are functions of two spatial variables and time. Thus, information sources produce signals. In physical systems, each signal corresponds to an electrical voltage or current. To be able to design systems, we must understand electrical science and technology. However, we first need to understand the big picture to appreciate the context in which the electrical engineer works.

In communication systems, messages–signals produced by sources–must be recast for transmission. The block diagram has the message s(t) passing through a block labeled transmitter that produces the signal x(t). In the case of a radio transmitter, it accepts an input audio signal and produces a signal that physically is an electromagnetic wave radiated by an antenna and propagating as Maxwell's equations predict. In the case of a computer network, typed characters are encapsulated in packets, attached with a destination address, and launched into the Internet.

[6] This content is available online at http://cnx.org/content/m0002/2.17/.
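The block-diagram idea, that systems map input signals to output signals and connect output to input, can be sketched as function composition. The Python below is a toy illustration (the scaling transmitter and noiseless channel are invented for the example, not a real modulation scheme):

```python
# Each block of Figure 1.3 is modeled as a function from signal to signal.
# Here a "signal" is reduced to a single sample value for simplicity.

def transmitter(s):
    """Message s(t) -> modulated signal x(t); here just a toy scaling."""
    return 2.0 * s

def channel(x, noise=0.0):
    """The channel corrupts x(t); with noise=0 it is benign."""
    return x + noise

def receiver(r):
    """The receiver tries to act as the inverse system to the transmitter."""
    return r / 2.0

# Interconnecting blocks = composing functions: source -> ... -> sink.
message = 0.7
received = receiver(channel(transmitter(message)))
assert abs(received - message) < 1e-12  # benign channel: message recovered
```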
From the communication systems "big picture" perspective, the same block diagram applies although the systems can be very different. In any case, the transmitter should not operate in such a way that the message s(t) cannot be recovered from x(t). In the mathematical sense, the inverse system must exist, else the communication system cannot be considered reliable. (It is ridiculous to transmit a signal in such a way that no one can recover the original. However, clever systems exist that transmit signals so that only the "in crowd" can recover them. Such cryptographic systems underlie secret communications.)

Transmitted signals next pass through the next stage, the evil channel. Nothing good happens to a signal in a channel: it can become corrupted by noise, distorted, and attenuated among many possibilities. The channel cannot be escaped (the real world is cruel), and transmitter design and receiver design focus on how best to jointly fend off the channel's effects on signals. The channel is another system in our block diagram, and produces r(t), the signal received by the receiver. If the channel were benign (good luck finding such a channel in the real world), the receiver would serve as the inverse system to the transmitter, and yield the message with no distortion. However, because of the channel, the receiver must do its best to produce a received message ŝ(t) that resembles s(t) as much as possible. Shannon showed in his 1948 paper that reliable–for the moment, take this word to mean error-free–digital communication was possible over arbitrarily noisy channels. It is this result that modern communications systems exploit, and why many communications systems are going "digital." Chapter 6, Information Communication, details Shannon's theory of information, and there we learn of Shannon's result and how to use it. Finally, the received message is passed to the information sink that somehow makes use of the message.
[7] http://www-gap.dcs.st-and.ac.uk/history/Biographies/Shannon.html

In the communications model, the source is a system having no input but producing an output; a sink has an input and no output. Understanding signal generation and how systems work amounts to understanding signals, the nature of the information they represent, how information is transformed between analog and digital forms, and how information can be processed by systems operating on information-bearing signals. This understanding demands two different fields of knowledge. One is electrical science: how are signals represented and manipulated electrically? The second is signal science: what is the structure of signals, no matter what their source, what is their information content, and what capabilities does this structure force upon communication systems?

1.4 The Fundamental Signal: The Sinusoid

The most ubiquitous and important signal in electrical engineering is the sinusoid.

Sine Definition:

    s(t) = A cos(2πft + φ)  or  A cos(ωt + φ)    (1.1)

A is known as the sinusoid's amplitude, and determines the sinusoid's size. The amplitude conveys the sinusoid's physical units (volts, lumens, etc.). The frequency f has units of Hz (Hertz) or s⁻¹, and determines how rapidly the sinusoid oscillates per unit time. The temporal variable t always has units of seconds, and thus the frequency determines how many oscillations/second the sinusoid has. AM radio stations have carrier frequencies of about 1 MHz (one mega-hertz or 10⁶ Hz), while FM stations have carrier frequencies of about 100 MHz. Frequency can also be expressed by the symbol ω, which has units of radians/second. Clearly, ω = 2πf. In communications, we most often express frequency in Hertz. Finally, φ is the phase, and determines the sine wave's behavior at the origin (t = 0). It has units of radians, but we can express it in degrees, realizing that in computations we must convert from degrees to radians.
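The degrees-to-radians conversion just mentioned is a one-liner in most languages; a small Python sketch (the 90-degree phase and the amplitude/frequency values are arbitrary):

```python
import math

# Phase is often quoted in degrees but must be in radians for computation.
phi = math.radians(90)              # 90 degrees = π/2 radians
assert abs(phi - math.pi / 2) < 1e-12

# Evaluating the sinusoid s(t) = A cos(2πft + φ) at the origin t = 0:
A, f = 1.0, 100.0
s0 = A * math.cos(2 * math.pi * f * 0 + phi)
assert abs(s0) < 1e-12              # cos(π/2) = 0
```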
Note that if φ = −π/2, the sinusoid corresponds to a sine function, having a zero value at the origin.

    A sin(2πft + φ) = A cos(2πft + φ − π/2)    (1.2)

Thus, the only difference between a sine and cosine signal is the phase; we term either a sinusoid. We can also define a discrete-time variant of the sinusoid: A cos(2πfn + φ). Here, the independent variable is n and represents the integers. Frequency now has no dimensions, and takes on values between 0 and 1.

Exercise 1.1 (Solution on p. 9.)
Show that cos(2πfn) = cos(2π(f + 1)n), which means that a sinusoid having a frequency larger than one corresponds to a sinusoid having a frequency less than one.

Note: We shall call either sinusoid an analog signal. Only when the discrete-time signal takes on a finite set of values can it be considered a digital signal.

Exercise 1.2 (Solution on p. 9.)
Can you think of a simple signal that has a finite number of values but is defined in continuous time? Such a signal is also an analog signal.

1.4.2 Communicating Information with Signals

The basic idea of communication engineering is to use a signal's parameters to represent either real numbers or other signals. The technical term is to modulate the carrier signal's parameters to transmit information from one place to another. To explore the notion of modulation, we can send a real number (today's temperature, for example) by changing a sinusoid's amplitude accordingly. If we wanted to send the daily temperature, we would keep the frequency constant (so the receiver would know what to expect) and change the amplitude at midnight. We could relate temperature to amplitude by the formula A = A₀(1 + kT), where A₀ and k are constants that the transmitter and receiver must both know. If we had two numbers we wanted to send at the same time, we could modulate the sinusoid's frequency as well as its amplitude.

[8] This content is available online at http://cnx.org/content/m0003/2.15/.

[Figure 1.5: The square wave sq(t); it alternates between the values +A and −A.]
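The temperature-to-amplitude mapping A = A₀(1 + kT) can be sketched in a few lines of Python. This is a toy illustration: the values of A₀, k, and the temperature are arbitrary choices that transmitter and receiver would agree on in advance:

```python
import math

A0, k = 1.0, 0.01      # constants known to both transmitter and receiver
f = 1000.0             # carrier frequency, held constant

def encode(temperature):
    """Map today's temperature to the sinusoid's amplitude: A = A0(1 + kT)."""
    return A0 * (1 + k * temperature)

def decode(amplitude):
    """Receiver inverts the amplitude mapping to recover the temperature."""
    return (amplitude / A0 - 1) / k

T = 23.0               # temperature to send
A = encode(T)
# The transmitted signal would then be s(t) = A cos(2π f t).
assert abs(decode(A) - T) < 1e-9
```

This works only because `encode` is invertible; a mapping the receiver cannot invert would make the "communication system" unreliable in exactly the sense discussed in Section 1.3.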
This modulation scheme assumes we can estimate the sinusoid's amplitude and frequency; we shall learn that this is indeed possible. Now suppose we have a sequence of parameters to send. We have exploited all of the sinusoid's two parameters. What we can do is modulate them for a limited time (say T seconds), and send two parameters every T. This simple notion corresponds to how a modem works. Here, typed characters are encoded into eight bits, and the individual bits are encoded into a sinusoid's amplitude and frequency. We'll learn how this is done in subsequent modules, and more importantly, we'll learn what the limits are on such digital communication schemes.

1.5 Introduction Problems

Problem 1.1: RMS Values
The rms (root-mean-square) value of a periodic signal is defined to be

    rms(s) = √( (1/T) ∫₀ᵀ s²(t) dt )

where T is defined to be the signal's period: the smallest positive number such that s(t) = s(t + T).
(a) What is the period of s(t) = A sin(2πf₀t + φ)?
(b) What is the rms value of this signal? How is it related to the peak value?
(c) What is the period and rms value of the depicted (Figure 1.5) square wave, generically denoted by sq(t)?
(d) By inspecting any device you plug into a wall socket, you'll see that it is labeled "110 volts AC." What is the expression for the voltage provided by a wall socket? What is its rms value?

Problem 1.2: Modems
The word "modem" is short for "modulator-demodulator." Modems are used not only for connecting computers to telephone lines, but also for connecting digital (discrete-valued) sources to generic channels. In this problem, we explore a simple kind of modem, in which binary information is represented by the presence or absence of a sinusoid (presence representing a "1" and absence a "0"). Consequently, the modem's transmitted signal that represents a single bit has the form

    x(t) = A sin(2πf₀t),  0 ≤ t ≤ T

Within each bit interval T, the amplitude is either A or zero.
[9] This content is available online at http://cnx.org/content/m10353/2.17/.

(a) What is the smallest transmission interval that makes sense with the frequency f₀?
(b) Assuming that ten cycles of the sinusoid comprise a single bit's transmission interval, what is the datarate of this transmission scheme?
(c) Now suppose instead of using "on-off" signaling, we allow one of several different values for the amplitude during any transmission interval. If N amplitude values are used, what is the resulting datarate?
(d) The classic communications block diagram applies to the modem. Discuss how the transmitter must interface with the message source since the source is producing letters of the alphabet, not bits.

Problem 1.3: Advanced Modems
To transmit symbols, such as letters of the alphabet, RU computer modems use two frequencies (1600 and 1800 Hz) and several amplitude levels. A transmission is sent for a period of time T (known as the transmission or baud interval) and equals the sum of two amplitude-weighted carriers.

    x(t) = A₁ sin(2πf₁t) + A₂ sin(2πf₂t),  0 ≤ t ≤ T

We send successive symbols by choosing an appropriate frequency and amplitude combination, and sending them one after another.
(a) What is the smallest transmission interval that makes sense to use with the frequencies given above? In other words, what should T be so that an integer number of cycles of the carrier occurs?
(b) Sketch (using Matlab) the signal that the modem produces over several transmission intervals. Make sure your axes are labeled.
(c) Using your signal transmission interval, how many amplitude levels are needed to transmit ASCII characters at a datarate of 3,200 bits/s? Assume use of the extended (8-bit) ASCII code.

Note: We use a discrete set of values for A₁ and A₂. If we have N₁ values for amplitude A₁, and N₂ values for A₂, we have N₁N₂ possible symbols that can be sent during each T-second interval.
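A few of the quantities in these problems can be checked numerically before working them analytically. The Python sketch below uses arbitrary values of A, f₀, and N (and zero phase) purely for illustration:

```python
import math

# Problem 1.1(b): the rms value of A sin(2π f0 t) over one period is A/√2.
A, f0 = 3.0, 50.0
T = 1 / f0
n = 10000
samples = [A * math.sin(2 * math.pi * f0 * (k / n) * T) for k in range(n)]
rms = math.sqrt(sum(s * s for s in samples) / n)   # discrete mean-square
assert abs(rms - A / math.sqrt(2)) < 1e-3

# Problem 1.2(b): ten cycles of the carrier per bit gives a datarate of f0/10.
datarate = f0 / 10            # bits per second

# Problem 1.2(c)/1.3 note: N amplitude values carry log2(N) bits per interval.
N = 8
bits_per_interval = math.log2(N)
assert bits_per_interval == 3
```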
To convert this number into bits (the fundamental unit of information engineers use to qualify things), compute log₂(N₁N₂).

Solutions to Exercises in Chapter 1

Solution to Exercise 1.1 (p. 6)
As cos(α + β) = cos(α)cos(β) − sin(α)sin(β), cos(2π(f + 1)n) = cos(2πfn)cos(2πn) − sin(2πfn)sin(2πn) = cos(2πfn).

Solution to Exercise 1.2 (p. 6)
A square wave takes on the values 1 and −1 alternately. See the plot in Section 2.2.6.

Chapter 2: Signals and Systems

2.1 Complex Numbers

While the fundamental signal used in electrical engineering is the sinusoid, it can be expressed mathematically in terms of an even more fundamental signal: the complex exponential. Representing sinusoids in terms of complex exponentials is not a mathematical oddity. Fluency with complex numbers and rational functions of complex variables is a critical skill all engineers master. Understanding information and power system designs and developing new systems all hinge on using complex numbers. In short, they are critical to modern electrical engineering, a realization made over a century ago.

2.1.1 Definitions

The notion of the square root of −1 originated with the quadratic formula: the solution of certain quadratic equations mathematically exists only if the so-called imaginary quantity √−1 could be defined. Euler first used i for the imaginary unit but that notation did not take hold until roughly Ampère's time. Ampère used the symbol i to denote current (intensité de courant). It wasn't until the twentieth century that the importance of complex numbers to circuit theory became evident. By then, using i for current was entrenched and electrical engineers chose j for writing complex numbers.

An imaginary number has the form jb = √(−b²). A complex number, z, consists of the ordered pair (a, b); a is the real component and b is the imaginary component (the j is suppressed because the imaginary component of the pair is always in the second position). The imaginary number jb equals (0, b).
Note that a and b are real-valued numbers.

Figure 2.1 shows that we can locate a complex number in what we call the complex plane. Here, a, the real part, is the x-coordinate and b, the imaginary part, is the y-coordinate.

[Figure 2.1: A complex number is an ordered pair (a, b) that can be regarded as coordinates in the plane. Complex numbers can also be expressed in polar coordinates as r∠θ.]

From analytic geometry, we know that locations in the plane can be expressed as the sum of vectors, with the vectors corresponding to the x and y directions. Consequently, a complex number z can be expressed as the (vector) sum z = a + jb where j indicates the y-coordinate. This representation is known as the Cartesian form of z. An imaginary number can't be numerically added to a real number; rather, this notation for a complex number represents vector addition, but it provides a convenient notation when we perform arithmetic manipulations.

Some obvious terminology. The real part of the complex number z = a + jb, written as Re(z), equals a. We consider the real part as a function that works by selecting that component of a complex number not multiplied by j. The imaginary part of z, Im(z), equals b: that part of a complex number that is multiplied by j. Again, both the real and imaginary parts of a complex number are real-valued.

The complex conjugate of z, written as z*, has the same real part as z but an imaginary part of the opposite sign.

    z  = Re(z) + j Im(z)
    z* = Re(z) − j Im(z)    (2.1)

Using Cartesian notation, the following properties easily follow.

- If we add two complex numbers, the real part of the result equals the sum of the real parts and the imaginary part equals the sum of the imaginary parts.

[1] This content is available online at http://cnx.org/content/m0081/2.27/.
[2] http://www-groups.dcs.st-and.ac.uk/history/Biographies/Euler.html
[3] http://www-groups.dcs.st-and.ac.uk/history/Biographies/Ampere.html
This property follows from the laws of vector addition.

    (a₁ + jb₁) + (a₂ + jb₂) = (a₁ + a₂) + j(b₁ + b₂)

In this way, the real and imaginary parts remain separate.

- The product of j and a real number is an imaginary number: ja. The product of j and an imaginary number is a real number: j(jb) = −b because j² = −1. Consequently, multiplying a complex number by j rotates the number's position by 90 degrees.

Exercise 2.1 (Solution on p. 30.)
Use the definition of addition to show that the real and imaginary parts can be expressed as a sum/difference of a complex number and its conjugate: Re(z) = (z + z*)/2 and Im(z) = (z − z*)/(2j).

Complex numbers can also be expressed in an alternate form, polar form, which we will find quite useful. Polar form arises from the geometric interpretation of complex numbers. The Cartesian form of a complex number can be re-written as

    a + jb = √(a² + b²) · ( a/√(a² + b²) + j · b/√(a² + b²) )

By forming a right triangle having sides a and b, we see that the real and imaginary parts correspond to the cosine and sine of the triangle's base angle. We thus obtain the polar form for complex numbers.

    z = a + jb = r∠θ
    r = |z| = √(a² + b²)
    a = r cos(θ),  b = r sin(θ)
    θ = arctan(b/a)

The quantity r is known as the magnitude of the complex number z, and is frequently written as |z|. The quantity θ is the complex number's angle. In using the arc-tangent formula to find the angle, we must take into account the quadrant in which the complex number lies.

Exercise 2.2 (Solution on p. 30.)
Convert 3 − 2j to polar form.

2.1.2 Euler's Formula

Surprisingly, the polar form of a complex number z can be expressed mathematically as

    z = r e^(jθ)    (2.2)

To show this result, we use Euler's relations that express exponentials with imaginary arguments in terms of trigonometric functions.

    e^(jθ) = cos(θ) + j sin(θ)    (2.3)

    cos(θ) = (e^(jθ) + e^(−jθ))/2,  sin(θ) = (e^(jθ) − e^(−jθ))/(2j)    (2.4)

The first of these is easily derived from the Taylor's series for the exponential.
    eˣ = 1 + x/1! + x²/2! + x³/3! + …

Substituting jθ for x, we find that

    e^(jθ) = 1 + j(θ/1!) − θ²/2! − j(θ³/3!) + …

because j² = −1, j³ = −j, and j⁴ = 1. Grouping separately the real-valued terms and the imaginary-valued ones,

    e^(jθ) = (1 − θ²/2! + …) + j(θ/1! − θ³/3! + …)

The real-valued terms correspond to the Taylor's series for cos(θ), the imaginary ones to sin(θ), and Euler's first relation results. The remaining relations are easily derived from the first. Because of the relationship r = √(a² + b²), we see that multiplying the exponential in (2.3) by a real constant corresponds to setting the radius of the complex number by the constant.

2.1.3 Calculating with Complex Numbers

Adding and subtracting complex numbers expressed in Cartesian form is quite easy: you add (subtract) the real parts and imaginary parts separately.

    z₁ ± z₂ = (a₁ ± a₂) + j(b₁ ± b₂)    (2.5)

To multiply two complex numbers in Cartesian form is not quite as easy, but follows directly from following the usual rules of arithmetic.

    z₁z₂ = (a₁ + jb₁)(a₂ + jb₂) = a₁a₂ − b₁b₂ + j(a₁b₂ + a₂b₁)    (2.6)

Note that we are, in a sense, multiplying two vectors to obtain another vector. Complex arithmetic provides a unique way of defining vector multiplication.

Exercise 2.3 (Solution on p. 30.)
What is the product of a complex number and its conjugate?

Division requires mathematical manipulation. We convert the division problem into a multiplication problem by multiplying both the numerator and denominator by the conjugate of the denominator.

    z₁/z₂ = (a₁ + jb₁)/(a₂ + jb₂)
          = (a₁ + jb₁)(a₂ − jb₂) / ((a₂ + jb₂)(a₂ − jb₂))
          = (a₁a₂ + b₁b₂ + j(a₂b₁ − a₁b₂)) / (a₂² + b₂²)    (2.7)

Because the final result is so complicated, it's best to remember how to perform division–multiplying numerator and denominator by the complex conjugate of the denominator–rather than trying to remember the final result.
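Several of these relations are easy to verify with Python's built-in complex type and cmath module; a quick sketch (the sample values are arbitrary):

```python
import cmath
import math

# Exercise 2.2: convert 3 - 2j to polar form r∠θ.
z = 3 - 2j
r, theta = cmath.polar(z)
assert abs(r - math.sqrt(13)) < 1e-12          # r = sqrt(3² + (−2)²)
assert abs(theta - math.atan2(-2, 3)) < 1e-12  # fourth-quadrant angle

# Exercise 2.3: z times its conjugate is real and equals r².
p = z * z.conjugate()
assert abs(p.imag) < 1e-12 and abs(p.real - 13) < 1e-12

# In polar form, multiplying multiplies the radii and adds the angles.
z1, z2 = cmath.rect(2.0, 0.5), cmath.rect(3.0, 1.2)
r12, th12 = cmath.polar(z1 * z2)
assert abs(r12 - 6.0) < 1e-12 and abs(th12 - 1.7) < 1e-12

# Euler's formula (2.3): e^{jθ} = cos θ + j sin θ.
t = 0.7
assert abs(cmath.exp(1j * t) - complex(math.cos(t), math.sin(t))) < 1e-12
```

Note how `atan2` resolves the quadrant ambiguity that the plain arctan(b/a) formula leaves open.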
The properties of the exponential make calculating the product and ratio of two complex numbers much simpler when the numbers are expressed in polar form.

    z₁z₂ = r₁e^(jθ₁) · r₂e^(jθ₂) = r₁r₂ e^(j(θ₁ + θ₂))
    z₁/z₂ = (r₁e^(jθ₁))/(r₂e^(jθ₂)) = (r₁/r₂) e^(j(θ₁ − θ₂))    (2.8)

To multiply, the radius equals the product of the radii and the angle the sum of the angles. To divide, the radius equals the ratio of the radii and the angle the difference of the angles. When the original complex numbers are in Cartesian form, it's usually worth translating into polar form, then performing the multiplication or division (especially in the case of the latter). Addition and subtraction of polar forms amounts to converting to Cartesian form, performing the arithmetic operation, and converting back to polar form.

Example 2.1
When we solve circuit problems, the crucial quantity, known as a transfer function, will always be expressed as the ratio of polynomials in the variable s = j2πf. What we'll need to understand the circuit's effect is the transfer function in polar form. For instance, suppose the transfer function equals

    (s + 2)/(s² + s + 1),  s = j2πf    (2.9, 2.10)

Performing the required division is most easily accomplished by first expressing the numerator and denominator each in polar form, then calculating the ratio. Thus,

    (s + 2)/(s² + s + 1) = (j2πf + 2)/(1 − 4π²f² + j2πf)    (2.11)

    = ( √(4 + 4π²f²) e^(j arctan(πf)) ) / ( √((1 − 4π²f²)² + 4π²f²) e^(j arctan(2πf/(1 − 4π²f²))) )    (2.12)

    = √( (4 + 4π²f²)/(1 − 4π²f² + 16π⁴f⁴) ) · e^(j(arctan(πf) − arctan(2πf/(1 − 4π²f²))))    (2.13)

2.2 Elemental Signals

Elemental signals are the building blocks with which we build complicated signals. By definition, elemental signals have a simple structure. Exactly what we mean by the "structure of a signal" will unfold in this section of the course. Signals are nothing more than functions defined with respect to some independent variable, which we take to be time for the most part.

[4] This content is available online at http://cnx.org/content/m0004/2.29/.
Very interesting signals are not functions solely of time; one great example is an image. For it, the independent variables are $x$ and $y$ (two-dimensional space). Video signals are functions of three variables: two spatial dimensions and time. Fortunately, most of the ideas underlying modern signal theory can be exemplified with one-dimensional signals.

4 This content is available online at http://cnx.org/content/m0004/2.29/.

2.2.1 Sinusoids

Perhaps the most common real-valued signal is the sinusoid.

$$s(t) = A\cos(2\pi f_0 t + \phi) \qquad (2.14)$$

For this signal, $A$ is its amplitude, $f_0$ its frequency, and $\phi$ its phase.

2.2.2 Complex Exponentials

The most important signal is complex-valued, the complex exponential.

$$s(t) = Ae^{j(2\pi f_0 t + \phi)} = Ae^{j\phi}\, e^{j2\pi f_0 t} \qquad (2.15)$$

Here, $j$ denotes $\sqrt{-1}$. $Ae^{j\phi}$ is known as the signal's complex amplitude. Considering the complex amplitude as a complex number in polar form, its magnitude is the amplitude $A$ and its angle the signal phase. The complex amplitude is also known as a phasor. The complex exponential cannot be further decomposed into more elemental signals, and is the most important signal in electrical engineering. Mathematical manipulations at first appear to be more difficult because complex-valued numbers are introduced. In fact, early in the twentieth century, mathematicians thought engineers would not be sufficiently sophisticated to handle complex exponentials even though they greatly simplified solving circuit problems. Steinmetz introduced complex exponentials to electrical engineering, and demonstrated that "mere" engineers could use them to good effect and even obtain right answers. See Section 2.1 for a review of complex numbers and complex arithmetic.

The complex exponential defines the notion of frequency: it is the only signal that contains only one frequency component. The sinusoid consists of two frequency components: one at the frequency $+f_0$ and the other at $-f_0$.
Euler relation: This decomposition of the sinusoid can be traced to Euler's relation.

$$\cos(2\pi ft) = \frac{e^{j2\pi ft} + e^{-j2\pi ft}}{2} \qquad (2.16)$$

$$\sin(2\pi ft) = \frac{e^{j2\pi ft} - e^{-j2\pi ft}}{2j} \qquad (2.17)$$

$$e^{j2\pi ft} = \cos(2\pi ft) + j\sin(2\pi ft) \qquad (2.18)$$

Decomposition: The complex exponential signal can thus be written in terms of its real and imaginary parts using Euler's relation. Thus, sinusoidal signals can be expressed as either the real or the imaginary part of a complex exponential signal, the choice depending on whether cosine or sine phase is needed, or as the sum of two complex exponentials. These two decompositions are mathematically equivalent to each other.

$$A\cos(2\pi ft + \phi) = \mathrm{Re}\left(Ae^{j\phi}\, e^{j2\pi ft}\right) \qquad (2.19)$$

$$A\sin(2\pi ft + \phi) = \mathrm{Im}\left(Ae^{j\phi}\, e^{j2\pi ft}\right) \qquad (2.20)$$

Using the complex plane, we can envision the complex exponential's temporal variations as seen in Figure 2.2. The magnitude of the complex exponential is $A$, and the initial value of the complex exponential at $t = 0$ has an angle of $\phi$. As time increases, the locus of points traced by the complex exponential is a circle (it has constant magnitude of $A$). The number of times per second we go around the circle equals the frequency $f$. The time taken for the complex exponential to go around the circle once is known as its period $T$, and equals $\frac{1}{f}$. The projections onto the real and imaginary axes of the rotating vector representing the complex exponential signal are the cosine and sine signals of Euler's relation (2.16).

5 http://www.edisontechcenter.org/CharlesProteusSteinmetz.html

Figure 2.2: Graphically, the complex exponential scribes a circle in the complex plane as time evolves. Its real and imaginary parts are sinusoids. The rate at which the signal goes around the circle is the frequency $f$ and the time taken to go around is the period $T$. A fundamental relationship is $T = \frac{1}{f}$.

2.2.3 Real Exponentials

As opposed to complex exponentials which oscillate, real exponentials (Figure 2.3) decay.
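Euler's relations and the decomposition (2.19) can be verified numerically. A short Python sketch, with illustrative values chosen for $A$, $f$, $t$, and $\phi$:

```python
import cmath
import math

f, t = 2.0, 0.3             # illustrative frequency and time instant
x = 2 * math.pi * f * t     # the common argument 2 pi f t

# Euler's relations (2.16)-(2.18)
assert abs(math.cos(x) - ((cmath.exp(1j*x) + cmath.exp(-1j*x)) / 2).real) < 1e-12
assert abs(math.sin(x) - ((cmath.exp(1j*x) - cmath.exp(-1j*x)) / 2j).real) < 1e-12
assert abs(cmath.exp(1j*x) - complex(math.cos(x), math.sin(x))) < 1e-12

# Decomposition (2.19): A cos(2 pi f t + phi) = Re(A e^{j phi} e^{j 2 pi f t}),
# with the phasor A e^{j phi} carrying amplitude and phase
A, phi = 1.5, 0.7
assert abs(A*math.cos(x + phi) - (A*cmath.exp(1j*phi)*cmath.exp(1j*x)).real) < 1e-12
```

The last line is the phasor idea in miniature: all the amplitude and phase information sits in the complex amplitude $Ae^{j\phi}$, while $e^{j2\pi ft}$ carries the time variation.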
$$s(t) = e^{-t/\tau} \qquad (2.21)$$

The quantity $\tau$ is known as the exponential's time constant, and corresponds to the time required for the exponential to decrease by a factor of $\frac{1}{e}$, which approximately equals 0.368. A decaying complex exponential is the product of a real and a complex exponential.

$$s(t) = Ae^{j\phi}\, e^{-t/\tau}\, e^{j2\pi ft} = Ae^{j\phi}\, e^{(-(1/\tau) + j2\pi f)t} \qquad (2.22)$$

In the complex plane, this signal corresponds to an exponential spiral. For such signals, we can define complex frequency as the quantity multiplying $t$.

Figure 2.3: The real exponential.

2.2.4 Unit Step

The unit step function (Figure 2.4) is denoted by $u(t)$, and is defined to be

$$u(t) = \begin{cases} 0, & t < 0 \\ 1, & t > 0 \end{cases} \qquad (2.23)$$

Figure 2.4: The unit step.

Origin warning: This signal is discontinuous at the origin. Its value at the origin need not be defined because the value doesn't matter in signal theory.

This kind of signal is used to describe signals that "turn on" suddenly. For example, to mathematically represent turning on an oscillator, we can write it as the product of a sinusoid and a step: $s(t) = A\sin(2\pi ft)\, u(t)$.

2.2.5 Pulse

The unit pulse (Figure 2.5) describes turning a unit-amplitude signal on for a duration of $\Delta$ seconds, then turning it off.

$$p_\Delta(t) = \begin{cases} 0, & t < 0 \\ 1, & 0 < t < \Delta \\ 0, & t > \Delta \end{cases} \qquad (2.24)$$

Figure 2.5: The pulse.

We will find that this is the second most important signal in communications.

2.2.6 Square Wave

The square wave (Figure 2.6) $\mathrm{sq}(t)$ is a periodic signal like the sinusoid. It too has an amplitude and a period, which must be specified to characterize the signal. We find subsequently that the sine wave is a simpler signal than the square wave.

Figure 2.6: The square wave.

2.3 Signal Decomposition

A signal's complexity is not related to how wiggly it is. Rather, a signal expert looks for ways of decomposing a given signal into a sum of simpler signals, which we term the signal decomposition.
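Two quick numerical checks on the exponentials above, sketched in Python with arbitrary illustrative parameters: after one time constant the real exponential has fallen to $\frac{1}{e} \approx 0.368$, and the decaying complex exponential's magnitude shrinks at exactly the real-exponential rate, so the spiral's radius at time $t$ is $Ae^{-t/\tau}$.

```python
import cmath
import math

tau, A, phi, f = 2.0, 1.0, 0.5, 3.0   # illustrative parameters

# After one time constant, e^{-t/tau} has fallen to 1/e ~ 0.368
assert abs(math.exp(-tau/tau) - 0.368) < 1e-3

# Decaying complex exponential (2.22): its magnitude is A e^{-t/tau},
# independent of the oscillation, because |e^{j theta}| = 1
t = 1.7
s = A * cmath.exp(1j*phi) * cmath.exp((-1/tau + 1j*2*math.pi*f) * t)
assert abs(abs(s) - A*math.exp(-t/tau)) < 1e-12
```

The quantity `-1/tau + 1j*2*math.pi*f` multiplying `t` in the second check is exactly the complex frequency defined in the text.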
Though we will never compute a signal's complexity, it essentially equals the number of terms in its decomposition. In writing a signal as a sum of component signals, we can change a component signal's gain by multiplying it by a constant, and we can delay it. More complicated decompositions could contain derivatives or integrals of simple signals.

Example 2.2
As an example of signal complexity, we can express the pulse $p_\Delta(t)$ as a sum of delayed unit steps.

$$p_\Delta(t) = u(t) - u(t - \Delta) \qquad (2.25)$$

Thus, the pulse is a more complex signal than the step. Be that as it may, the pulse is very useful to us.

Exercise 2.4 (Solution on p. 30.)
Express a square wave having period $T$ and amplitude $A$ as a superposition of delayed and amplitude-scaled pulses.

Because the sinusoid is a superposition of two complex exponentials, the sinusoid is more complex. We could not prevent ourselves from the pun in this statement. Clearly, the word "complex" is used in two different ways here. The complex exponential can also be written (using Euler's relation (2.16)) as a sum of a sine and a cosine. We will discover that virtually every signal can be decomposed into a sum of complex exponentials, and that this decomposition is very useful. Thus, the complex exponential is more fundamental, and Euler's relation does not adequately reveal its complexity.

2.4 Discrete-Time Signals

So far, we have treated what are known as analog signals and systems. Mathematically, analog signals are functions having continuous quantities as their independent variables, such as space and time. Discrete-time signals are functions defined on the integers; they are sequences. One of the fundamental results of signal theory details the conditions under which an analog signal can be converted into a discrete-time one and retrieved without error. This result is important because discrete-time signals can be manipulated by

6 This content is available online at http://cnx.org/content/m0008/2.12/.
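The step-and-pulse relationship (2.25), and the spirit of Exercise 2.4, can be sketched in a few lines of Python. The helper names `u`, `pulse`, and `sq` are our own illustrative choices, the origin values are picked arbitrarily (they don't matter in signal theory), and the square wave shown is one possible superposition, not the only one.

```python
def u(t):
    # unit step (2.23); the value at the origin is arbitrary (here 1)
    return 1.0 if t >= 0 else 0.0

def pulse(t, delta):
    # equation (2.25): p_Delta(t) = u(t) - u(t - Delta)
    return u(t) - u(t - delta)

delta = 2.0
assert pulse(1.0, delta) == 1.0    # inside the pulse
assert pulse(3.0, delta) == 0.0    # after it turns off
assert pulse(-1.0, delta) == 0.0   # before it turns on

# Exercise 2.4 in the same spirit: one period of a square wave of amplitude A
# and period T built from two amplitude-scaled, delayed pulses of width T/2
def sq(t, A, T):
    t = t % T   # periodic extension
    return A * pulse(t, T/2) - A * pulse(t - T/2, T/2)

A, T = 1.0, 4.0
assert sq(1.0, A, T) == A     # first half-period: +A
assert sq(3.0, A, T) == -A    # second half-period: -A
```

Note how the decomposition mirrors the text's point: the pulse is built from two steps, and the square wave from two pulses, each component only scaled and delayed.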
7 This content is available online at http://cnx.org/content/m0009/2.24/.

systems instantiated as computer programs. Subsequent modules describe how virtually all analog signal processing can be performed with software.

Discrete-time systems can act on discrete-time signals in ways similar to those found in analog signals and systems. Because of the role of software in discrete-time systems, many more different systems can be envisioned and "constructed" with programs than can be with analog signals. Consequently, discrete-time systems can be easily produced in software, with equivalent analog realizations difficult, if not impossible, to design.

As important as linking analog signals to discrete-time ones may be, discrete-time signals are more general, encompassing signals derived from analog ones and signals that aren't. For example, the characters forming a text file form a sequence, which is also a discrete-time signal. We must deal with such symbolic-valued (p. 153) signals and systems as well.

As with analog signals, we seek ways of decomposing real-valued discrete-time signals into simpler components. With this approach leading to a better understanding of signal structure, we can exploit that structure to represent information (create ways of representing information with signals) and to extract information (retrieve the information thus represented). For symbolic-valued signals, the approach is different: We develop a common representation of all symbolic-valued signals so that we can embody the information they contain in a unified way. From an information representation perspective, the most important issue becomes, for both real-valued and symbolic-valued signals, efficiency: What is the most parsimonious and compact way to represent information so that it can be extracted later?

2.4.1 Real- and Complex-valued Signals

A discrete-time signal is represented symbolically as $s(n)$, where $n \in \{\ldots, -1, 0, 1, \ldots\}$.
We usually draw discrete-time signals as stem plots to emphasize the fact they are functions defined only on the integers. We can delay a discrete-time signal by an integer just as with analog ones. A delayed unit sample has the expression $\delta(n - m)$, and equals one when $n = m$.

Figure 2.7: The discrete-time cosine signal is plotted as a stem plot. Can you find the formula for this signal?

2.4.2 Complex Exponentials

The most important signal is, of course, the complex exponential sequence.

$$s(n) = e^{j2\pi fn} \qquad (2.26)$$

2.4.3 Sinusoids

Discrete-time sinusoids have the obvious form $s(n) = A\cos(2\pi fn + \phi)$. As opposed to analog complex exponentials and sinusoids that can have their frequencies be any real value, frequencies of their discrete-time counterparts yield unique waveforms only when $f$ lies in the interval $\left(-\frac{1}{2}, \frac{1}{2}\right]$. This property can be easily understood by noting that adding an integer to the frequency of the discrete-time complex exponential has no effect on the signal's value.

$$e^{j2\pi(f+m)n} = e^{j2\pi fn}\, e^{j2\pi mn} = e^{j2\pi fn} \qquad (2.27)$$
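Equation (2.27) is easy to confirm numerically: shifting the frequency by any integer $m$ leaves every sample unchanged, because $e^{j2\pi mn} = 1$ whenever $m$ and $n$ are both integers. A small Python sketch with arbitrary $f$ and $m$:

```python
import cmath
import math

f, m = 0.2, 3            # a fractional frequency and an integer offset
for n in range(-5, 6):   # n ranges over integers, as for any discrete-time signal
    # (2.27): e^{j 2 pi (f+m) n} = e^{j 2 pi f n} e^{j 2 pi m n} = e^{j 2 pi f n}
    shifted = cmath.exp(1j * 2 * math.pi * (f + m) * n)
    original = cmath.exp(1j * 2 * math.pi * f * n)
    assert abs(shifted - original) < 1e-9
```

This is why a unit-length interval of frequencies suffices to describe every distinct discrete-time complex exponential.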
