Digital Communications Lecture Notes

ECE 5520: Digital Communications Lecture Notes
Fall 2009
Dr. Neal Patwari
University of Utah, Department of Electrical and Computer Engineering
(c) 2006

Contents

1 Class Organization
2 Introduction
   2.1 "Executive Summary"
   2.2 Why not Analog?
   2.3 Networking Stack
   2.4 Channels and Media
   2.5 Encoding / Decoding Block Diagram
   2.6 Channels
   2.7 Topic: Random Processes
   2.8 Topic: Frequency Domain Representations
   2.9 Topic: Orthogonality and Signal Spaces
   2.10 Related Classes
3 Power and Energy
   3.1 Discrete-Time Signals
   3.2 Decibel Notation
4 Time-Domain Concept Review
   4.1 Periodicity
   4.2 Impulse Functions
5 Bandwidth
   5.1 Continuous-time Frequency Transforms
      5.1.1 Fourier Transform Properties
   5.2 Linear Time Invariant (LTI) Filters
   5.3 Examples
6 Bandpass Signals
   6.1 Upconversion
   6.2 Downconversion of Bandpass Signals
7 Sampling
   7.1 Aliasing Due To Sampling
   7.2 Connection to DTFT
   7.3 Bandpass Sampling
8 Orthogonality
   8.1 Inner Product of Vectors
   8.2 Inner Product of Functions
   8.3 Definitions
   8.4 Orthogonal Sets
9 Orthonormal Signal Representations
   9.1 Orthonormal Bases
   9.2 Synthesis
   9.3 Analysis
10 Multi-Variate Distributions
   10.1 Random Vectors
   10.2 Conditional Distributions
   10.3 Simulation of Digital Communication Systems
   10.4 Mixed Discrete and Continuous Joint Variables
   10.5 Expectation
   10.6 Gaussian Random Variables
      10.6.1 Complementary CDF
      10.6.2 Error Function
   10.7 Examples
   10.8 Gaussian Random Vectors
      10.8.1 Envelope
11 Random Processes
   11.1 Autocorrelation and Power
      11.1.1 White Noise
12 Correlation and Matched-Filter Receivers
   12.1 Correlation Receiver
   12.2 Matched Filter Receiver
   12.3 Amplitude
   12.4 Review
   12.5 Correlation Receiver
13 Optimal Detection
   13.1 Overview
   13.2 Bayesian Detection
14 Binary Detection
   14.1 Decision Region
   14.2 Formula for Probability of Error
   14.3 Selecting R_0 to Minimize Probability of Error
   14.4 Log-Likelihood Ratio
   14.5 Case of a_0 = 0, a_1 = 1 in Gaussian Noise
   14.6 General Case for Arbitrary Signals
   14.7 Equi-probable Special Case
   14.8 Examples
   14.9 Review of Binary Detection
15 Pulse Amplitude Modulation (PAM)
   15.1 Baseband Signal Examples
   15.2 Average Bit Energy in M-ary PAM
16 Topics for Exam 1
17 Additional Problems
   17.1 Spectrum of Communication Signals
   17.2 Sampling and Aliasing
   17.3 Orthogonality and Signal Space
   17.4 Random Processes, PSD
   17.5 Correlation / Matched Filter Receivers
18 Probability of Error in Binary PAM
   18.1 Signal Distance
   18.2 BER Function of Distance, Noise PSD
   18.3 Binary PAM Error Probabilities
19 Detection with Multiple Symbols
20 M-ary PAM Probability of Error
   20.1 Symbol Error
      20.1.1 Symbol Error Rate and Average Bit Energy
   20.2 Bit Errors and Gray Encoding
      20.2.1 Bit Error Probabilities
21 Inter-symbol Interference
   21.1 Multipath Radio Channel
   21.2 Transmitter and Receiver Filters
22 Nyquist Filtering
   22.1 Raised Cosine Filtering
   22.2 Square-Root Raised Cosine Filtering
23 M-ary Detection Theory in N-dimensional Signal Space
24 Quadrature Amplitude Modulation (QAM)
   24.1 Showing Orthogonality
   24.2 Constellation
   24.3 Signal Constellations
   24.4 Angle and Magnitude Representation
   24.5 Average Energy in M-QAM
   24.6 Phase-Shift Keying
   24.7 Systems which use QAM
25 QAM Probability of Error
   25.1 Overview of Future Discussions on QAM
   25.2 Options for Probability of Error Expressions
   25.3 Exact Error Analysis
   25.4 Probability of Error in QPSK
   25.5 Union Bound
   25.6 Application of Union Bound
      25.6.1 General Formula for Union Bound-based Probability of Error
26 QAM Probability of Error
   26.1 Nearest-Neighbor Approximate Probability of Error
   26.2 Summary and Examples
27 Frequency Shift Keying
   27.1 Orthogonal Frequencies
   27.2 Transmission of FSK
   27.3 Reception of FSK
   27.4 Coherent Reception
   27.5 Non-coherent Reception
   27.6 Receiver Block Diagrams
   27.7 Probability of Error for Coherent Binary FSK
   27.8 Probability of Error for Noncoherent Binary FSK
   27.9 FSK Error Probabilities, Part 2
      27.9.1 M-ary Non-Coherent FSK
      27.9.2 Summary
   27.10 Bandwidth of FSK
28 Frequency Multiplexing
   28.1 Frequency Selective Fading
   28.2 Benefits of Frequency Multiplexing
   28.3 OFDM as an Extension of FSK
29 Comparison of Modulation Methods
   29.1 Differential Encoding for BPSK
      29.1.1 DPSK Transmitter
      29.1.2 DPSK Receiver
      29.1.3 Probability of Bit Error for DPSK
   29.2 Points for Comparison
   29.3 Bandwidth Efficiency
      29.3.1 PSK, PAM and QAM
      29.3.2 FSK
   29.4 Bandwidth Efficiency vs. E_b/N_0
   29.5 Fidelity (P[error]) vs. E_b/N_0
   29.6 Transmitter Complexity
      29.6.1 Linear / Non-linear Amplifiers
   29.7 Offset QPSK
   29.8 Receiver Complexity
30 Link Budgets and System Design
   30.1 Link Budgets Given C/N_0
   30.2 Power and Energy Limited Channels
   30.3 Computing Received Power
      30.3.1 Free Space
      30.3.2 Non-free-space Channels
      30.3.3 Wired Channels
   30.4 Computing Noise Energy
   30.5 Examples
31 Timing Synchronization
32 Interpolation
   32.1 Sampling Time Notation
   32.2 Seeing Interpolation as Filtering
   32.3 Approximate Interpolation Filters
   32.4 Implementations
      32.4.1 Higher Order Polynomial Interpolation Filters
33 Final Project Overview
   33.1 Review of Interpolation Filters
   33.2 Timing Error Detection
   33.3 Early-late Timing Error Detector (ELTED)
   33.4 Zero-crossing Timing Error Detector (ZCTED)
      33.4.1 QPSK Timing Error Detection
   33.5 Voltage Controlled Clock (VCC)
   33.6 Phase Locked Loops
      33.6.1 Phase Detector
      33.6.2 Loop Filter
      33.6.3 VCO
      33.6.4 Analysis
      33.6.5 Discrete-Time Implementations
34 Exam 2 Topics
35 Source Coding
   35.1 Entropy
   35.2 Joint Entropy
   35.3 Conditional Entropy
   35.4 Entropy Rate
   35.5 Source Coding Theorem
36 Review
37 Channel Coding
   37.1 R. V. L. Hartley
   37.2 C. E. Shannon
      37.2.1 Noisy Channel
      37.2.2 Introduction of Latency
      37.2.3 Introduction of Power Limitation
   37.3 Combining Two Results
      37.3.1 Returning to Hartley
      37.3.2 Final Results
   37.4 Efficiency Bound
38 Review
39 Channel Coding
   39.1 R. V. L. Hartley
   39.2 C. E. Shannon
      39.2.1 Noisy Channel
      39.2.2 Introduction of Latency
      39.2.3 Introduction of Power Limitation
   39.3 Combining Two Results
      39.3.1 Returning to Hartley
      39.3.2 Final Results
   39.4 Efficiency Bound

Lecture 1

Today: (1) Syllabus (2) Intro to Digital Communications

1 Class Organization

Textbook: Few textbooks cover solely digital communications (without analog) in an introductory communications course. But graduates today will almost always encounter, or be developing, solely digital communication systems. So half of most textbooks is useless, and the other half is sparse and needs supplemental material. For example, the past text was the "standard" text in the area for an undergraduate course, Proakis & Salehi: J. G. Proakis and M. Salehi, Communication Systems Engineering, 2nd edition, Prentice Hall, 2001. Students didn't like that I had so many supplemental readings. This year's text covers primarily digital communications, and does it in depth. Finally, I find it to be very well-written.
And, there are few options in this area. I will provide additional readings solely to provide another presentation style or fit another learning style. Unless specified, these are optional.

Lecture Notes: I type my lecture notes. I have taught ECE 5520 previously and have my lecture notes from past years. I'm constantly updating my notes, even up to the lecture time. These can be available to you at lecture, and/or after lecture online. However, you must accept these conditions:

1. Taking notes is important: I find most learning requires some writing on your part, not just watching. Please take your own notes.
2. My written notes do not and cannot reflect everything said during lecture: I answer questions and understand your perspective better after hearing your questions, and I try to tailor my approach during the lecture. If I didn't, you could just watch a recording.

2 Introduction

A digital communication system conveys discrete-time, discrete-valued information across a physical channel. Information sources might include audio, video, text, or data. They might be continuous-time (analog) signals (audio, images) and even 1-D or 2-D. Or, they may already be digital (discrete-time, discrete-valued). Our objective is to convey the signals or data to another place (or time) with as faithful a representation as possible. In this section we talk about what we'll cover in this class, and more importantly, what we won't cover.

2.1 "Executive Summary"

Here is the one-sentence version: We will study how to efficiently encode digital data on a noisy, bandwidth-limited analog medium, so that decoding the data (i.e., reception) at a receiver is simple, efficient, and high-fidelity.

The key points stuffed into that one sentence are:

1. Digital information on an analog medium: We can send waveforms, i.e., real-valued, continuous-time functions, on the channel (medium). These waveforms are from a discrete set of possible waveforms. What set of waveforms should we use? Why?
2. Decoding the data: When receiving a signal (a function) in noise, none of the original waveforms will match exactly. How do you make a decision about which waveform was sent?
3. What makes a receiver difficult to realize? What choices of waveforms make a receiver simpler to implement? What techniques are used in a receiver to compensate?
4. Efficiency, Bandwidth, and Fidelity: Fidelity is the correctness of the received data (i.e., the opposite of error rate). What is the tradeoff between energy, bandwidth, and fidelity? We all want high fidelity, and low energy consumption and bandwidth usage (the costs of our communication system).

You can look at this like an impedance matching problem from circuits. You want, for power efficiency, to have the source impedance match the destination impedance. In digital comm, this means that we want our waveform choices to match the channel and receiver to maximize the efficiency of the communication system.

2.2 Why not Analog?

The previous text used for this course, by Proakis & Salehi, has an extensive analysis and study of analog communication systems, such as radio and television broadcasting (Chapter 3). In the recent past, this course would study both analog and digital communication systems. Analog systems still exist and will continue to exist; however, development of new systems will almost certainly be of digital communication systems. Why?
• Fidelity
• Energy: transmit power, and device power consumption
• Bandwidth efficiency: due to coding gains
• Moore's Law is decreasing device costs for digital hardware
• Increasing need for digital information
• More powerful information security

2.3 Networking Stack

In this course, we study digital communications from bits to bits. That is, we study how to take ones and zeros from a transmitter, send them through a medium, and then (hopefully) correctly identify the same ones and zeros at the receiver. There's a lot more than this to the digital communication systems which you use on a daily basis (e.g., iPhone, WiFi, Bluetooth, wireless keyboard, wireless car key).

To manage complexity, we (engineers) don't try to build a system to do everything all at once. We typically start with an application, and we build a layered network to handle the application. The 7-layer OSI stack, which you would study in a CS computer networking class, is as follows:

• Application
• Presentation
• Session
• Transport
• Network
• Link Layer
• Physical (PHY) Layer

(Note that there is also a 5-layer model in which the Presentation and Session layers are considered part of the application layer.) ECE 5520 is part of the bottom layer, the physical layer. In fact, the physical layer has much more detail. It is primarily divided into:

• Multiple Access Control (MAC)
• Encoding
• Channel / Medium

We can control the MAC and the encoding chosen for a digital communication.

2.4 Channels and Media

We can choose from a few media, but we largely can't change the properties of the medium (although there are exceptions). Here are some media:

• EM Spectra: (anything above 0 Hz) Radio, Microwave, mm-wave bands, light
• Acoustic: ultrasound
• Transmission lines, waveguides, optical fiber, coaxial cable, wire pairs, ...
• Disk (data storage applications)

2.5 Encoding / Decoding Block Diagram

Figure 1: Block diagram of a single-user digital communication system, including (top) transmitter, (middle) channel, and (bottom) receiver. The transmitter chain runs information source, source encoder, channel encoder, modulator, and up-conversion; the receiver chain runs down-conversion, demodulator, channel decoder, source decoder, and information output; other components include synchronization and the analog-to-digital / digital-to-analog converters.

Notes (a minimal code sketch of this transmit-channel-receive chain follows the list below):

• Information source comes from higher networking layers. It may be continuous or packetized.
• Source encoding: Finding a compact digital representation for the data source. Includes sampling of continuous-time signals, and quantization of continuous-valued signals. Also includes compression of those sources (lossy, or lossless). What are some compression methods that you're familiar with? We present an introduction to source encoding at the end of this course.
• Channel encoding refers to redundancy added to the signal such that any bit errors can be corrected. A channel decoder, because of the redundancy, can correct some bit errors. We will not study channel encoding, but it is a topic in (ECE 6520) Coding Theory.
• Modulation refers to the digital-to-analog conversion which produces a continuous-time signal that can be sent on the physical channel. It is analogous to impedance matching: proper matching of a modulation to a channel allows optimal information transfer, like impedance matching ensures optimal power transfer. Modulation and demodulation will be the main focus of this course.
• Channels: See above for examples. Typical models are additive noise, or a linear filtering channel.
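To make the block diagram concrete, here is a minimal end-to-end sketch. It is not from the original notes, it is written in Python/NumPy rather than the MATLAB used in this course, and all function and variable names are invented for illustration. It keeps only a few blocks: random source bits, a trivial modulator mapping bits to +/-1 amplitudes, an additive-noise channel, and a threshold demodulator; source/channel coding, up/down-conversion, and synchronization are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def modulate(bits):
    """Map bits {0,1} to antipodal amplitudes {-1,+1} (binary PAM / BPSK)."""
    return 2.0 * bits - 1.0

def awgn_channel(x, noise_std):
    """Add white Gaussian noise with the given standard deviation."""
    return x + noise_std * rng.standard_normal(x.shape)

def demodulate(r):
    """Threshold at zero: decide bit 1 if the received value is positive."""
    return (r > 0).astype(int)

bits = rng.integers(0, 2, size=100_000)   # information source (higher layers)
tx = modulate(bits)                        # modulator
rx = awgn_channel(tx, noise_std=0.5)       # channel
bits_hat = demodulate(rx)                  # demodulator / detector

ber = np.mean(bits_hat != bits)
print(f"Bit error rate: {ber:.4f}")
```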
Why do we do both source encoding (which compresses the signal as much as possible) and also channel encoding (which adds redundancy to the signal)? Because of Shannon's source-channel coding separation theorem. He showed that (given enough time) we can consider them separately without additional loss. And separation, like layering, reduces complexity for the designer.

2.6 Channels

A channel can typically be modeled as a linear filter with the addition of noise. The noise comes from a variety of sources, but predominantly:

1. Thermal background noise: due to the physics of living above 0 Kelvin. Well modeled as Gaussian and white; thus it is referred to as additive white Gaussian noise (AWGN).
2. Interference from other transmitted signals. These other transmitters, whose signals we cannot completely cancel, we lump into the "interference" category. These may result in a non-Gaussian noise distribution, or a non-white noise spectral density.

The linear filtering of the channel results from the physics and EM of the medium. For example, attenuation in telephone wires varies by frequency. Narrowband wireless channels experience fading that varies quickly as a function of frequency. Wideband wireless channels display multipath, due to multiple time-delayed reflections, diffractions, and scattering of the signal off of the objects in the environment. All of these can be modeled as linear filters. The filter may be constant, or time-invariant, if the medium, the TX, and the RX do not move or change. However, for mobile radio, the channel may change very quickly over time. Even for stationary TX and RX, in real wireless channels, movement of cars, people, trees, etc. in the environment may change the channel slowly over time.

Figure 2: Linear filter and additive noise channel model. The transmitted signal passes through an LTI filter h(t), and noise is added to produce the received signal.

In this course, we will focus primarily on the AWGN channel, but we will mention what variations exist for particular channels, and how they are addressed.
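As a rough numerical illustration of Figure 2 (again a hedged Python/NumPy sketch with invented names, not part of the original notes), the channel below convolves the transmitted samples with a short two-tap impulse response, standing in for a multipath h(t), and then adds white Gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(1)

def channel(x, h, noise_std):
    """Model of Figure 2: LTI filtering by impulse response h, plus AWGN."""
    filtered = np.convolve(x, h, mode="full")              # linear filtering
    noise = noise_std * rng.standard_normal(filtered.shape)
    return filtered + noise

# Transmitted signal: a few antipodal symbol amplitudes.
x = np.array([+1.0, -1.0, -1.0, +1.0, +1.0])

# Illustrative impulse response: direct path plus a weaker delayed echo.
h = np.array([1.0, 0.4])

r = channel(x, h, noise_std=0.1)
print(r)
```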
2.7 Topic: Random Processes

Random things in a communication system:

• Noise in the channel
• Signal (bits)
• Channel filtering, attenuation, and fading
• Device frequency, phase, and timing offsets

These random signals often pass through LTI filters, and are sampled. We want to build the best receiver possible despite the impediments. Optimal receiver design is something that we study using probability theory.

We have to tolerate errors. Noise and attenuation of the channel will cause bit errors to be made by the demodulator and even the channel decoder. This may be tolerated, or a higher-layer networking protocol (e.g., TCP/IP) can determine that an error occurred and then re-request the data.

2.8 Topic: Frequency Domain Representations

To fit as many signals as possible onto a channel, we often split the signals by frequency. The concept of sharing a channel is called multiple access (MA). Separating signals by frequency band is called frequency-division multiple access (FDMA). For the wireless channel, this is controlled by the FCC (in the US) and called spectrum allocation. There is a tradeoff between frequency requirements and time requirements, which will be a major part of this course. The Fourier transform of our modulated, transmitted signal is used to show that it meets the spectrum allocation limits of the FCC.

2.9 Topic: Orthogonality and Signal Spaces

To show that signals sharing the same channel don't interfere with each other, we need to show that they are orthogonal. This means, in short, that a receiver can uniquely separate them. Signals in different frequency bands are orthogonal. We can also employ multiple orthogonal signals in a single transmitter and receiver, in order to provide multiple independent means (dimensions) on which to modulate information. We will study orthogonal signals, and learn an algorithm to take an arbitrary set of signals and output a set of orthogonal signals with which to represent them. We'll use signal spaces to show the results graphically, as in the example in Figure 3.

Figure 3: Example signal space diagram for M-ary Phase Shift Keying, for (a) M = 8 and (b) M = 16. Each point is a vector which can be used to send a 3- or 4-bit sequence.

2.10 Related Classes

1. Pre-requisites: (ECE 5510) Random Processes; (ECE 3500) Signals and Systems.
2. Signal Processing: (ECE 5530) Digital Signal Processing
3. Electromagnetics: EM Waves, (ECE 5320-5321) Microwave Engineering, (ECE 5324) Antenna Theory, (ECE 5411) Fiberoptic Systems
4. Breadth: (ECE 5325) Wireless Communications
5. Devices and Circuits: (ECE 3700) Fundamentals of Digital System Design, (ECE 5720) Analog IC Design
6. Networking: (ECE 5780) Embedded System Design, (CS 5480) Computer Networks
7. Advanced Classes: (ECE 6590) Software Radio, (ECE 6520) Information Theory and Coding, (ECE 6540) Estimation Theory

Lecture 2

Today: (1) Power, Energy, dB (2) Time-domain concepts (3) Bandwidth, Fourier Transform

Two of the biggest limitations in communications systems are (1) energy / power and (2) bandwidth. Today's lecture provides some tools to deal with power and energy, and starts the review of tools to analyze frequency content and bandwidth.

3 Power and Energy

Recall that energy is power times time. Use the units: energy is measured in Joules (J); power is measured in Watts (W), which is the same as Joules/second (J/sec). Also, recall that our standard in signals and systems is to define our signals, such as x(t), as voltage signals (V). When we want to know the power of a signal, we assume it is being dissipated in a 1 Ohm resistor, so x^2(t) is the power dissipated at time t (since power is equal to the voltage squared divided by the resistance).

A signal x(t) has energy defined as

    E = \int_{-\infty}^{\infty} x^2(t) \, dt

For some signals, E will be infinite because the signal is non-zero for an infinite duration of time (it is always on). These signals we call power signals, and we compute their power as

    P = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x^2(t) \, dt

A signal with finite energy is called an energy signal.

3.1 Discrete-Time Signals

In this book, we refer to discrete samples of the sampled signal x as x(n). You may be more familiar with the x_n notation. But Matlab uses parentheses also, so we'll follow the Rice text notation. Essentially, whenever you see a function of n (or k, l, m), it is a discrete-time function; whenever you see a function of t (or perhaps τ), it is a continuous-time function. I'm sorry this is not more obvious in the notation.

For discrete-time signals, energy and power are defined as:

    E = \sum_{n=-\infty}^{\infty} x^2(n)                                      (1)

    P = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{n=-N}^{N} x^2(n)             (2)
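As a quick numerical check of (1) and (2), here is a small Python/NumPy sketch (mine, not from the notes; the truncated sums approximate the infinite ones). A finite-duration pulse has finite energy and essentially zero average power, while an always-on sinusoid has average power near 1/2 and energy that keeps growing with N.

```python
import numpy as np

def energy(x):
    """Discrete-time energy, eq. (1), summed over the samples provided."""
    return np.sum(np.abs(x) ** 2)

def power(x):
    """Discrete-time average power, eq. (2), approximated over the 2N+1 samples given."""
    return energy(x) / len(x)

n = np.arange(-5000, 5001)                     # n = -N, ..., N with N = 5000
pulse = np.where(np.abs(n) <= 10, 1.0, 0.0)    # finite-duration (energy) signal
sinusoid = np.cos(0.1 * np.pi * n)             # always-on (power) signal

print("pulse:    E =", energy(pulse), "   P =", power(pulse))      # P near 0
print("sinusoid: E =", energy(sinusoid), "   P =", power(sinusoid))  # P near 0.5
```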
3.2 Decibel Notation

We often use a decibel (dB) scale for power. If P_{lin} is the power in Watts, then the power in dBW is

    P_{dBW} = 10 \log_{10} P_{lin}

Decibels are more general; they can apply to other unitless quantities as well, such as a gain (or loss) L(f) through a filter H(f):

    L(f)_{dB} = 10 \log_{10} |H(f)|^2                                         (3)

Note: Why is the capital B used? Either the lowercase "b" in the SI system is reserved for bits, so when the "bel" was first proposed as \log_{10}(\cdot), it was capitalized; or it referred to the name "Bell", so it was capitalized. In either case, we use the unit decibel, 10 \log_{10}(\cdot), which is then abbreviated as dB in the SI system.

Note that (3) could also be written as:

    L(f)_{dB} = 20 \log_{10} |H(f)|                                           (4)

Be careful with your use of 10 vs. 20 in the dB formula.

• Only use 20 as the multiplier if you are converting from voltage to power, i.e., taking the \log_{10} of a voltage and expecting the result to be a dB power value.

Our standard is to consider power gains and losses, not voltage gains and losses. So if we say, for example, the channel has a loss of 20 dB, this refers to a loss in power. In particular, the output of the channel has 100 times less power than the input to the channel.

Remember these two dB numbers:

• 3 dB: This means the number is double in linear terms.
• 10 dB: This means the number is ten times in linear terms.

And maybe this one:

• 1 dB: This means the number is a little over 25% more (multiply by 5/4) in linear terms.

With these three numbers, you can quickly convert losses or gains between linear and dB units without a calculator. Just convert any dB number into a sum of multiples of 10, 3, and 1.

Example: Convert dB to linear values:

1. 30 dBW
2. 33 dBm
3. -20 dB
4. 4 dB

Example: Convert linear values to dB:

1. 0.2 W
2. 40 mW

Example: Convert power relationships to dB. Convert each expression to one which involves only dB terms.

1. P_{y,lin} = 100 P_{x,lin}
2. P_{o,lin} = G_{connector,lin} L_{cable,lin}^{-d}, where P_{o,lin} is the received power in a fiber-optic link, d is the cable length (typically in units of km), G_{connector,lin} is the gain in any connectors, and L_{cable,lin} is the loss in a 1 km cable.
3. P_{r,lin} = P_{t,lin} \frac{G_{t,lin} G_{r,lin} \lambda^2}{(4\pi d)^2}, where λ is the wavelength (m), d is the path length (m), and G_{t,lin} and G_{r,lin} are the linear gains in the antennas; P_{t,lin} is the transmit power (W) and P_{r,lin} is the received power (W). This is the Friis free space path loss formula.

These last two are what we will need in Section 6.4, when we discuss link budgets. The main idea is that we have a limited amount of power which will be available at the receiver.
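The example conversions above follow mechanically from P_{dB} = 10 \log_{10} P_{lin}; here is a small Python sketch (mine, not from the notes) that checks them. One assumption worth flagging: dBm means decibels relative to one milliwatt, so the 33 dBm entry converts to milliwatts.

```python
import numpy as np

def db_to_linear(x_db):
    """Convert a dB power ratio (or a dBW/dBm value) to linear units."""
    return 10.0 ** (x_db / 10.0)

def linear_to_db(x_lin):
    """Convert a linear power ratio (or a power in W/mW) to dB."""
    return 10.0 * np.log10(x_lin)

# Example: Convert dB to linear values.
print(db_to_linear(30.0), "W")    # 30 dBW  -> 1000 W
print(db_to_linear(33.0), "mW")   # 33 dBm  -> about 2000 mW, i.e., 2 W
print(db_to_linear(-20.0))        # -20 dB  -> 0.01
print(db_to_linear(4.0))          # 4 dB    -> about 2.5

# Example: Convert linear values to dB.
print(linear_to_db(0.2), "dBW")   # 0.2 W  -> about -7 dBW
print(linear_to_db(40.0), "dBm")  # 40 mW  -> about 16 dBm
```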
4 Time-Domain Concept Review

4.1 Periodicity

Def'n: Periodic (continuous-time)
A signal x(t) is periodic if x(t) = x(t + T_0) for some constant T_0 ≠ 0, for all t ∈ R. The smallest such positive constant T_0 is the period.

If a signal is not periodic, it is aperiodic.

Periodic signals have Fourier series representations, as defined in Rice Ch. 2.

Def'n: Periodic (discrete-time)
A DT signal x(n) is periodic if x(n) = x(n + N_0) for some integer N_0 ≠ 0, for all integers n. The smallest positive integer N_0 is the period.

4.2 Impulse Functions

Def'n: Impulse Function
The (Dirac) impulse function δ(t) is the function which makes

    \int_{-\infty}^{\infty} x(t) \delta(t) \, dt = x(0)                       (5)

true for any function x(t) which is continuous at t = 0.

We are defining a function by its most important property, the "sifting property". Is there another definition which is more familiar?

Solution:

    \delta(t) = \lim_{T \to 0} \begin{cases} \frac{1}{2T}, & -T \le t \le T \\ 0, & \text{o.w.} \end{cases}

You can visualize δ(t) here as an infinitely high, infinitesimally wide pulse at the origin, with area one. This is why it "pulls out" the value of x(t) in the integral in (5).

Other properties of the impulse function:

• Time scaling,
• Symmetry,
• Sifting at arbitrary time t_0.

The continuous-time unit step function is

    u(t) = \begin{cases} 1, & t \ge 0 \\ 0, & \text{o.w.} \end{cases}

Example: Sifting Property
What is \int_{-\infty}^{\infty} \frac{\sin(\pi t)}{\pi t} \delta(1 - t) \, dt?

The discrete-time impulse function (also called the Kronecker delta, δ_K) is defined as:

    \delta(n) = \begin{cases} 1, & n = 0 \\ 0, & \text{o.w.} \end{cases}

(There is no need to get complicated with the math; this is well defined.) Also,

    u(n) = \begin{cases} 1, & n \ge 0 \\ 0, & \text{o.w.} \end{cases}

5 Bandwidth

Bandwidth is another critical resource for a digital communications system; we have various definitions to quantify it. In short, it isn't easy to describe a signal in the frequency domain with a single number. And, in the end, a system will be designed to meet a spectral mask required by the FCC or system standard.

Table 1: Frequency transforms, organized by continuous vs. discrete time and by periodicity.

• Continuous-time, general: Laplace Transform, x(t) ↔ X(s)
• Continuous-time, periodic: Fourier Series, x(t) ↔ a_k
• Continuous-time, aperiodic: Fourier Transform, x(t) ↔ X(jω), where
      X(j\omega) = \int_{t=-\infty}^{\infty} x(t) e^{-j\omega t} \, dt, \qquad x(t) = \frac{1}{2\pi} \int_{\omega=-\infty}^{\infty} X(j\omega) e^{j\omega t} \, d\omega
• Discrete-time, general: z-Transform, x(n) ↔ X(z)
• Discrete-time, periodic: Discrete Fourier Transform (DFT), x(n) ↔ X_k, where
      X_k = \sum_{n=0}^{N-1} x(n) e^{-j\frac{2\pi}{N}kn}, \qquad x(n) = \frac{1}{N} \sum_{k=0}^{N-1} X_k e^{j\frac{2\pi}{N}nk}
• Discrete-time, aperiodic: Discrete-Time Fourier Transform (DTFT), x(n) ↔ X(e^{jΩ}), where
      X(e^{j\Omega}) = \sum_{n=-\infty}^{\infty} x(n) e^{-j\Omega n}, \qquad x(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(e^{j\Omega}) e^{j\Omega n} \, d\Omega

Intuitively, bandwidth is the maximum extent of our signal's frequency-domain characterization, call it X(f). A baseband signal's absolute bandwidth is often defined as the W such that X(f) = 0 for all f except for the range −W ≤ f ≤ W. Other definitions for bandwidth are:

• 3-dB bandwidth: B_{3dB} is the value of f such that |X(f)|^2 = |X(0)|^2 / 2.
• 90% bandwidth: B_{90%} is the value which captures 90% of the energy in the signal:

      \int_{-B_{90\%}}^{B_{90\%}} |X(f)|^2 \, df = 0.90 \int_{-\infty}^{\infty} |X(f)|^2 \, df

As a motivating example, I mention the square-root raised cosine (SRRC) pulse, which has the following desirable Fourier transform:

    H_{RRC}(f) = \begin{cases}
        \sqrt{T_s}, & 0 \le |f| \le \frac{1-\alpha}{2T_s} \\
        \sqrt{\frac{T_s}{2}\left\{1 + \cos\left[\frac{\pi T_s}{\alpha}\left(|f| - \frac{1-\alpha}{2T_s}\right)\right]\right\}}, & \frac{1-\alpha}{2T_s} \le |f| \le \frac{1+\alpha}{2T_s} \\
        0, & \text{o.w.}
    \end{cases}                                                               (6)

where α is a parameter called the "rolloff factor". We can actually analyze this using the properties of the Fourier transform and many of the standard transforms you'll find in a Fourier transform table. The SRRC and other pulse shapes are discussed in Appendix A, and we will go into more detail later on. The purpose so far is to motivate practicing up on frequency transforms.
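To make the 90% bandwidth definition concrete, the following Python/NumPy sketch (mine; the pulse duration and grid spacing are invented for illustration) estimates B_90% for a rectangular pulse by sampling it finely, computing |X(f)|^2 with an FFT, and growing the band [−B, B] outward from DC until it contains 90% of the total energy.

```python
import numpy as np

# Finely sampled rectangular pulse of duration Ts = 1 (illustrative values).
Ts = 1.0
dt = 1e-3
t = np.arange(-10.0, 10.0, dt)
x = np.where(np.abs(t) <= Ts / 2, 1.0, 0.0)

# Energy spectral density |X(f)|^2 on a frequency grid.
X = np.fft.fftshift(np.fft.fft(x)) * dt
f = np.fft.fftshift(np.fft.fftfreq(len(t), d=dt))
esd = np.abs(X) ** 2

# Accumulate energy from DC outward until 90% of the total is captured.
total = np.sum(esd)
order = np.argsort(np.abs(f))
cumulative = np.cumsum(esd[order])
B90 = np.abs(f[order])[np.searchsorted(cumulative, 0.90 * total)]

# For a rect pulse this lands a bit below 1/Ts, i.e., just inside the main lobe.
print(f"Estimated 90% bandwidth: {B90:.2f} Hz")
```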
5.1 Continuous-time Frequency Transforms

Notes about continuous-time frequency transforms:

1. You are probably most familiar with the Laplace Transform. To convert it to the Fourier transform, we replace s with jω, where ω is the radial frequency, with units radians per second (rad/s).
2. You may prefer the radial frequency representation, but also feel free to use the rotational frequency f (which has units of cycles per sec, or Hz). Frequency in Hz is more standard for communications; you should use it for intuition. In this case, just substitute ω = 2πf. You could write X(j2πf) as the notation for this, but typically you'll see it abbreviated as X(f). Note that the definition of the Fourier transform in the f domain loses the \frac{1}{2\pi} in the inverse Fourier transform definition:

       X(j2\pi f) = \int_{t=-\infty}^{\infty} x(t) e^{-j2\pi f t} \, dt
       x(t) = \int_{f=-\infty}^{\infty} X(j2\pi f) e^{j2\pi f t} \, df          (7)

3. The Fourier series is limited to purely periodic signals. Both Laplace and Fourier transforms are not limited to periodic signals.
4. Note that e^{j\alpha} = \cos(\alpha) + j\sin(\alpha). See Table 2.4.4 in the Rice book.

Example: Square Wave
Given a rectangular pulse x(t) = rect(t/T_s),

    x(t) = \begin{cases} 1, & -T_s/2 < t \le T_s/2 \\ 0, & \text{o.w.} \end{cases}

what is the Fourier transform X(f)? Calculate both from the definition and from a table.

Solution: Method 1: From the definition:

    X(j\omega) = \int_{t=-T_s/2}^{T_s/2} e^{-j\omega t} \, dt
               = \frac{1}{-j\omega} e^{-j\omega t} \Big|_{t=-T_s/2}^{T_s/2}
               = \frac{1}{-j\omega} \left( e^{-j\omega T_s/2} - e^{j\omega T_s/2} \right)
               = 2 \frac{\sin(\omega T_s/2)}{\omega} = T_s \frac{\sin(\omega T_s/2)}{\omega T_s/2}

This uses the fact that \frac{1}{-2j}\left(e^{-j\alpha} - e^{j\alpha}\right) = \sin(\alpha). While it is sometimes convenient to replace \sin(\pi x)/(\pi x) with sinc(x), it is confusing because sinc(x) is sometimes defined as \sin(\pi x)/(\pi x) and sometimes defined as (\sin x)/x. No standard definition for "sinc" exists. Rather than make a mistake because of this, the Rice book always writes out the expression fully. I will try to follow suit.

Method 2: From the tables and properties:

    x(t) = g(2t)  where  g(t) = \begin{cases} 1, & |t| < T_s \\ 0, & \text{o.w.} \end{cases}        (8)

From the table, G(j\omega) = 2 T_s \frac{\sin(\omega T_s)}{\omega T_s}. From the properties, X(j\omega) = \frac{1}{2} G\left(j\frac{\omega}{2}\right). So

    X(j\omega) = T_s \frac{\sin(\omega T_s/2)}{\omega T_s/2}

See Figure 4(a).

Figure 4: (a) Fourier transform X(j2πf) of the rect pulse with duration T_s, and (b) power vs. frequency, 20\log_{10}(X(j2\pi f)/T_s).

Question: What if Y(jω) were a rect function? What would the inverse Fourier transform y(t) be?

5.1.1 Fourier Transform Properties

See Table 2.4.3 in the Rice book. Assume that F{x(t)} = X(jω). Important properties of the Fourier transform:

1. Duality property:

       x(j\omega) = F\{X(-t)\}
       x(-j\omega) = F\{X(t)\}

   (Confusing. It says that you can go backwards on the Fourier transform; just remember to flip the result around the origin.)

2. Time shift property:

       F\{x(t - t_0)\} = e^{-j\omega t_0} X(j\omega)

3. Scaling property: for any real a ≠ 0,

       F\{x(at)\} = \frac{1}{|a|} X\left(j\frac{\omega}{a}\right)

4. Convolution property: if, additionally, y(t) has Fourier transform Y(jω),

       F\{x(t) \star y(t)\} = X(j\omega) \cdot Y(j\omega)

5. Modulation property:

       F\{x(t)\cos(\omega_0 t)\} = \frac{1}{2} X(\omega - \omega_0) + \frac{1}{2} X(\omega + \omega_0)

6. Parseval's theorem: The energy calculated in the frequency domain is equal to the energy calculated in the time domain:

       \int_{t=-\infty}^{\infty} x^2(t) \, dt = \int_{f=-\infty}^{\infty} |X(f)|^2 \, df = \frac{1}{2\pi} \int_{\omega=-\infty}^{\infty} |X(j\omega)|^2 \, d\omega

   So do whichever one is easiest. Or, check your answer by doing both.

5.2 Linear Time Invariant (LTI) Filters

If a (deterministic) signal x(t) is input to an LTI filter with impulse response h(t), the output signal is

    y(t) = h(t) \star x(t)

Using the above convolution property,

    Y(j\omega) = X(j\omega) \cdot H(j\omega)

5.3 Examples

Example: Applying FT Properties
If w(t) has the Fourier transform

    W(j\omega) = \frac{j\omega}{1 + j\omega},

find X(jω) for the following waveforms:

1. x(t) = w(2t + 2)
2. x(t) = e^{-jt} w(t - 1)
3. x(t) = 2 \frac{\partial w(t)}{\partial t}
4. x(t) = w(1 - t)

Solution: To be worked out in class.

Lecture 3

Today: (1) Bandpass Signals (2) Sampling

Solution: From the Examples at the end of Lecture 2:
