Digital Transmission Lecture Notes

83050E/1 TLT-5406 Digital Transmission Lecture Notes, Spring 2006
Markku Renfors
Institute of Communications Engineering
Tampere University of Technology

Contents

Introduction                                                      2
Brief Introduction to Information Theory                          5
  Information theory; Lossless source coding; Channel capacity
Transmission Channels                                            36
Baseband Digital Transmission                                    46
  Baseband transmission techniques and line coding principles;
  Nyquist pulse shaping principles; Eye diagram
Modulation in Digital Transmission                               87
  Linear digital modulation, QAM, PSK; BER calculation and
  performance evaluation; Bit mapping principles; VSB, CAP, D-PSK
FSK-Type Modulation Methods                                     141
  FSK, MSK, CPFSK, GMSK
Basics of Detection and Estimation Theory                       158
  ML and MAP principles in BSC and AWGN cases; ML sequence
  detection; Viterbi algorithm
Signal Space Concepts                                           199
Maximum Likelihood Detection for Continuous-Time Channels       208
  Optimum receiver principles; Sufficient statistics; Correlation
  receiver; Matched filter receiver; Sampled matched filter
Channel Equalizer Structures                                    232
  Zero forcing and MSE principles; LE, FSE, DFE, MLSD/Viterbi
Adaptive Channel Equalization                                   258
  General concepts; LMS algorithm in LE and DFE
Synchronization in Digital Receivers                            284
Partial Response Signalling                                     320
Scrambling Techniques                                           329
Error Control Coding                                            334
  Basics of error control coding; Coding gain; Block codes;
  Reed-Solomon codes
Convolutional Codes                                             368
  Convolutional codes; Concatenated coding; Interleaving
Trellis Coding, Coded Modulation                                388

83050E/2 INTRODUCTION
_______________________________________

A digital transmission system may or may not include conversions between analog and digital signals (sampling, A/D and D/A conversion). The transmitter end of the transmission chain converts a digital bit-stream into an analog waveform that is sent to the physical channel, which is in practice analog. The receiving end converts the received analog waveform back to digital format.
The transmission chain includes:

• Source coding/decoding: reducing the bit-rate of the information signal by reducing its redundancy; compression.

• Channel coding/decoding: error control coding, which compensates for the bit errors that inevitably occur in a practical transmission channel. In any 'sensible' channel, it is possible to get an arbitrarily low bit-error-rate by increasing the redundancy of the transmitted signal and using error control coding. One of the central results of information theory is that source coding and channel coding can, in principle, be carried out independently of each other.

• Modulation/demodulation: converting a digital signal into an analog waveform and back.

• Channel, which distorts the transmitted signal and adds noise and interference to it.

When designing the system, the primary target is to minimize the used bandwidth and/or the transmitted signal power.

83050E/3 Block Diagram of a Digital Transmission System
__________________________________________

[Block diagram: analog input → sampling and quantization → source coding → channel coding → modulation → channel (adding noise and interference) → demodulation/detection with channel equalization and synchronization → channel decoding → source decoding → reconstruction and D/A conversion → analog output. A digital input/output connects directly to the channel coding/decoding blocks.]

In the following, the digital transmission system is considered to include the chain between (and including) channel coding and decoding. With this definition, the transmission chain can be designed and analyzed independently of the nature of the transmitted signal.

83050E/4 The Parameters of a Digital Transmission System
____________________________________________________

The properties of a transmission chain are characterized by the following parameters:

• Transmission rate, bit rate (bits/s, bps)
• Bit-error-rate
• Delay (due to propagation and signal processing)
• 'Timing jitter' in the bit-stream obtained from the channel decoder.
83050E/5 BRIEF INTRODUCTION TO INFORMATION THEORY
_____________________________________________________

In this section we study the concepts of information, entropy, and channel capacity. With this theory, it is possible to calculate the largest possible information transmission rate through a given channel. This rate is called the channel capacity. Even though it is usually not possible to achieve the channel capacity in a practical system, it is a good reference point when evaluating system performance. In fact, the Shannon-Hartley law is an important fundamental law of nature in the field of communication theory, and it is also quite useful in practical engineering work.

Source: Lee & Messerschmitt, Chapter 4. Source coding part: Benedetto & Biglieri, Chapter 3; Sklar, Chapter 3; Gitlin, Chapter 2.

83050E/6 Definition of Information
__________________________________________________

Here the message signal is modeled as a random process. We begin by considering observations of a random variable:

• Each observation gives a certain amount of information.
• Rare observations give more information than usual ones.

Example: The statement "The sun rose this morning" gives very little information (high probability). The statement "San Francisco was destroyed this morning by an earthquake" gives a lot of information (low probability).

Definition: When observing a random variable X that takes its values from the set Ω_X = {a_1, a_2, …, a_K}, the (self-)information of the observation X = a_m is

    h(a_m) = −log₂ p_X(a_m)

83050E/7 Definition of Information - Interpretation
__________________________________________________

• It is easy to see that 0 ≤ h(a_m) ≤ ∞.
• For a rare event, the probability p_X(a_m) is small and the information is large.
• For a usual event, the probability p_X(a_m) ≈ 1 and the information is small.
• Why a logarithm?
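As a quick numerical illustration of the definition (the function below is my own sketch, not part of the original notes):

```python
import math

def self_information(p: float) -> float:
    """Self-information h(a) = -log2 p of an outcome with probability p, in bits."""
    return -math.log2(p)

# Rare events carry more information than common ones:
print(self_information(0.999))  # "the sun rose": ~0.0014 bits
print(self_information(1e-6))   # an extremely rare event: ~19.9 bits

# Additivity for independent events: h(a, b) = h(a) + h(b)
pa, pb = 0.5, 0.25
assert abs(self_information(pa * pb)
           - (self_information(pa) + self_information(pb))) < 1e-12
```

The additivity check at the end previews the "why a logarithm?" question: for independent events, probabilities multiply, so their logarithms (and hence the information) add.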
In the case of two independent random variables X and Y, with Ω_Y = {b_1, b_2, …, b_N}, the information of the joint event (a_m, b_n) becomes

    h(a_m, b_n) = −log₂ p_{X,Y}(a_m, b_n) = −log₂ p_X(a_m) − log₂ p_Y(b_n)
                = h(a_m) + h(b_n)

In the case of independent events, the information is additive, which is intuitively sensible.

• Using the base-2 logarithm, the unit of information is the bit.
• (With the base-10 logarithm, the unit is the dit or hartley.)
• (With the base-e (natural) logarithm, the unit is the nat.)

83050E/8 Entropy
__________________________________________________

The average information of a random variable is

    H(X) = E[−log₂ p_X(X)] = −Σ_{x∈Ω_X} p_X(x) log₂ p_X(x)

This is called the entropy. Entropy has the following interpretations:

• Average information obtained from an observation.
• Average uncertainty about X before the observation.

Example: Binary random variable X, Ω_X = {0, 1}, p_X(1) = q.

The entropy is

    H(X) = −q log₂(q) − (1−q) log₂(1−q)

The maximum entropy is 1, and it is obtained when q = 1/2. The entropy becomes zero for q = 0 or q = 1.

83050E/9 Example: The Entropy of an English Text
__________________________________________________

Memoryless model of English text:

    Symbol  p_i      Symbol  p_i      Symbol  p_i
    Space   0.186    I       0.058    R       0.048
    A       0.064    J       0.001    S       0.051
    B       0.013    K       0.005    T       0.080
    C       0.022    L       0.032    U       0.023
    D       0.032    M       0.020    V       0.008
    E       0.103    N       0.057    W       0.017
    F       0.021    O       0.063    X       0.001
    G       0.015    P       0.015    Y       0.016
    H       0.047    Q       0.001    Z       0.001

Entropy: H(X) = −Σ_i p_i log₂ p_i = 4.03 bits

83050E/10 Source Coding Theorem
__________________________________________________

Let us consider a discrete-time, discrete-amplitude source that produces independent observations of a random variable X at a rate of r samples per second. The rate of the source is defined as R = r·H(X). Such a source can be coded with a source coder into a bit-stream whose rate is less than R + ε, for any ε > 0. It is worth noting that it is often very difficult to construct codes that provide a rate arbitrarily close to the rate R.
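The entropy formula is easy to evaluate numerically. The sketch below (my own illustration, not from the notes) computes the binary entropy and the entropy of the memoryless English-text model from the table above:

```python
import math

def entropy(probs) -> float:
    """H(X) = -sum p log2 p, in bits; terms with p = 0 contribute 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Binary entropy: maximum 1 bit at q = 1/2, zero at q = 0 or q = 1
print(entropy([0.5, 0.5]))   # 1.0
print(entropy([0.1, 0.9]))   # ~0.469
print(entropy([1.0]))        # 0.0

# Memoryless model of English text (letter probabilities from the table above,
# in the order Space, A, B, ..., Z)
english = [0.186, 0.064, 0.013, 0.022, 0.032, 0.103, 0.021, 0.015, 0.047,
           0.058, 0.001, 0.005, 0.032, 0.020, 0.057, 0.063, 0.015, 0.001,
           0.048, 0.051, 0.080, 0.023, 0.008, 0.017, 0.001, 0.016, 0.001]
print(entropy(english))      # roughly 4 bits per symbol, as quoted above
```

Note the convention 0·log₂ 0 = 0 (handled by the `if p > 0` filter), which matches the coin example on the next page.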
But often it is easy to get 'rather close' to the rate R.

83050E/11 An Example of Source Coding
__________________________________________________

Let us consider again the case of a binary random variable.

(1) q = 1/2

Now the entropy is H(X) = 1, so 1 bit per sample is needed. In this case it is best to send the samples as they are.

(2) q = 0.1

    H(X) = −0.1 log₂(0.1) − 0.9 log₂(0.9) = 0.47

On the average, less than one bit per sample is sufficient. Now it is possible to construct codes where the average number of bits per sample is in the range 0.47 … 1. One very simple but rather efficient code is obtained by coding two consecutive source bits as follows:

    samples   code word
    0,0       0
    0,1       10
    1,0       110
    1,1       111

It is also easy to decode a bit-stream constructed in this way, since no code word is a prefix of another. In this code, 0.645 bits per sample are used on the average.

83050E/12 More Examples
__________________________________________________

(1) When throwing a coin, if the result is always heads (it might be an unfair coin :-), the entropy becomes

    H(X) = −0·log₂(0) − 1·log₂(1) = 0

(2) It is easy to show that the entropy satisfies

    H(X) ≤ log₂ K

where K is the number of possible outcomes (sample values, symbols, …). Of course, log₂ K bits is always enough. It can also be shown that H(X) = log₂ K only if all possible outcomes are equally probable.

83050E/13 About Source Coding
_________________________________________________

In the following, we briefly consider binary coding methods that aim at bringing the number of used bits as close to the source entropy as possible. These methods are lossless in the sense that the decoder is able to reproduce the original source bit-stream exactly.

Many coding principles applied in this context utilize unequal code word lengths (i.e., a different number of bits for different symbols or symbol blocks). Naturally, the shortest word lengths are used for the most common symbols/symbol blocks.
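The 0.645 bits-per-sample figure for the pair code can be verified directly; a quick sketch (my own, not part of the notes):

```python
import math

# Source: i.i.d. bits with P(1) = q = 0.1; pairs of consecutive bits are
# mapped to the variable-length code from the table above.
q = 0.1
pair_prob = {(0, 0): (1 - q)**2, (0, 1): (1 - q) * q,
             (1, 0): q * (1 - q), (1, 1): q * q}
code = {(0, 0): '0', (0, 1): '10', (1, 0): '110', (1, 1): '111'}

avg_bits_per_pair = sum(pair_prob[s] * len(code[s]) for s in code)
avg_bits_per_sample = avg_bits_per_pair / 2
print(avg_bits_per_sample)  # ~0.645, between the entropy 0.469 and 1

H = -q * math.log2(q) - (1 - q) * math.log2(1 - q)
assert H < avg_bits_per_sample < 1
```

The average lands between the entropy limit (0.47) and the uncoded rate (1 bit/sample), exactly as the text claims.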
An important requirement for a code is that any coded bit-stream can be uniquely decoded. One common principle for achieving this goal is the prefix property: no codeword appears as the beginning of another codeword. This guarantees that, when scanning the bit-stream, a recognized codeword (belonging to the code) also determines the boundary between consecutive codewords.

The compression methods discussed below (entropy coding) are widely used, e.g., in telefax systems and file compression routines. They are also one ingredient in most current speech, audio, and video compression systems. However, in these applications a much better compression ratio can be achieved by using lossy coding methods, which remove some of the original source information with a small/negligible effect on the perceived quality.

83050E/14 Huffman Code
__________________________________________________

In Huffman codes, variable code word lengths are utilized together with the prefix property. Coding algorithm (from Proakis & Salehi):

1. Sort the elements in decreasing order of probability.
2. Merge the two least probable elements into one element whose probability is the sum of the two.
3. If the number of elements is greater than 2, repeat from step 1.
4. Assign 0 and 1 as the codewords of the two remaining elements.
5. Whenever an element is the result of a merger of two elements, append 0 and 1 to its codeword to obtain the codewords of the merged elements; repeat until all original symbols have codewords.

Example (from Sklar's book), with symbol probabilities a: 0.4, b: 0.2, c: 0.1, d: 0.1, e: 0.1, f: 0.1:

    Input   Codeword
    a       11
    b       00
    c       101
    d       100
    e       011
    f       010

83050E/15 Huffman Code (continued)
__________________________________________________

As an example, in symbol-by-symbol compression of English text, a compression rate of about 43 % has been achieved. However, this does not take into account the fact that certain combinations of letters (the, in, on, …) appear in text quite often.
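The merge-and-append algorithm can be sketched compactly with a priority queue. This is my own illustrative implementation, not code from the notes; the exact codewords it produces may differ from the table above (Huffman trees are not unique), but the average code length is the same:

```python
import heapq
from itertools import count

def huffman(probs: dict) -> dict:
    """Build a Huffman code (symbol -> bit-string) from symbol probabilities."""
    tick = count()  # tie-breaker so the heap never compares symbol lists
    heap = [(p, next(tick), [sym]) for sym, p in probs.items()]
    heapq.heapify(heap)
    code = {sym: '' for sym in probs}
    while len(heap) > 1:
        # Merge the two least probable elements; prepending one bit to every
        # codeword inside an element is equivalent to the "append 0 and 1
        # when unwinding the mergers" step of the algorithm.
        p0, _, syms0 = heapq.heappop(heap)
        p1, _, syms1 = heapq.heappop(heap)
        for s in syms0:
            code[s] = '0' + code[s]
        for s in syms1:
            code[s] = '1' + code[s]
        heapq.heappush(heap, (p0 + p1, next(tick), syms0 + syms1))
    return code

probs = {'a': 0.4, 'b': 0.2, 'c': 0.1, 'd': 0.1, 'e': 0.1, 'f': 0.1}
code = huffman(probs)
avg = sum(probs[s] * len(code[s]) for s in probs)
print(code)
print(avg)  # ~2.4 bits/symbol; the entropy of this source is ~2.32 bits
```

The code built this way is prefix-free by construction, since symbols end up at the leaves of a binary tree.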
For the Huffman code, it can be shown that the average number of bits needed to code a source symbol, L(X), satisfies

    H(X) ≤ L(X) ≤ H(X) + 1

When coding n symbols at a time, we obtain correspondingly

    H(X) ≤ L(X) ≤ H(X) + 1/n

So, using the Huffman code blockwise with a sufficiently long block length, it is possible to get arbitrarily close to the entropy limit. This is not a practical approach, but this development basically constitutes one proof of the source coding theorem.

One fundamental limitation of Huffman codes is that the source symbol statistics have to be known (or estimated).

83050E/16 Run-Length Codes
__________________________________________________

Many sources produce long sequences ("runs") of the same source symbol (e.g., think about the operating principle of a telefax). In such cases, it is more efficient to send, instead of the symbol sequence, one of the repeated symbols together with information about the length of the run.

83050E/17 Lempel-Ziv Codes
__________________________________________________

Principle:

• The method uses a codebook consisting of a number of source symbol sequences.
• The source symbol stream is scanned one symbol at a time until the beginning of the still-uncoded symbol sequence is no longer found in the codebook. This sequence can then be represented as the concatenation of one of the words in the codebook and one additional symbol. The new symbol sequence is added to the codebook.
• The same process is repeated, starting from the beginning of the remaining uncoded source symbol stream.
83050E/18 Lempel-Ziv Codes (continued)
__________________________________________________

Example (from Proakis & Salehi): Let us assume that we want to parse and encode the following sequence:

    0100001100001010000010100000110000010100001001001

Parsing the sequence by the rules explained above results in the following phrases:

    0, 1, 00, 001, 10, 000, 101, 0000, 01, 010, 00001, 100, 0001, 0100, 0010, 01001

All the phrases are different, and each phrase is a previous phrase concatenated with a new source output. The number of phrases is 16. This means that for each phrase we need 4 bits to address its prefix in the dictionary, plus an extra bit to represent the new source output. The sequence above is thus encoded as:

    0000 0, 0000 1, 0001 0, 0011 1, 0010 0, 0011 0, 0101 1, 0110 0,
    0001 1, 1001 0, 1000 1, 0101 0, 0110 1, 1010 0, 0100 0, 1110 1

    Dictionary   Dictionary
    location     contents     Codeword
     1 = 0001    0            0000 0
     2 = 0010    1            0000 1
     3 = 0011    00           0001 0
     4 = 0100    001          0011 1
     5 = 0101    10           0010 0
     6 = 0110    000          0011 0
     7 = 0111    101          0101 1
     8 = 1000    0000         0110 0
     9 = 1001    01           0001 1
    10 = 1010    010          1001 0
    11 = 1011    00001        1000 1
    12 = 1100    100          0101 0
    13 = 1101    0001         0110 1
    14 = 1110    0100         1010 0
    15 = 1111    0010         0100 0
    16           01001        1110 1

83050E/19 Lempel-Ziv Codes (continued)
__________________________________________________

In this example, one can hardly talk about compression. The efficiency of the method is realized only when coding considerably longer source symbol sequences. When compressing English text, a compression rate of about 55 % has been achieved.

One key parameter of the method is the size of the codebook, which could be, for example, 4096, corresponding to 12-bit addresses. At some point, the codebook becomes full, and the "oldest" (according to a proper criterion) codeword is removed from the codebook to make room for new ones.

Lempel-Ziv codes are commonly used, e.g., in file packing routines such as Zip.
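The parsing rule used in the example can be sketched in a few lines (my own illustration, not code from the notes):

```python
def lz_parse(s: str):
    """Split s into Lempel-Ziv phrases: each new phrase is the shortest
    prefix of the remaining stream that has not been seen before, i.e. an
    already-seen phrase extended by one extra symbol."""
    phrases = []
    seen = set()
    buf = ''
    for ch in s:
        buf += ch
        if buf not in seen:   # first occurrence: emit it as a new phrase
            seen.add(buf)
            phrases.append(buf)
            buf = ''
    if buf:                   # possible leftover tail at the end of the stream
        phrases.append(buf)
    return phrases

seq = '0100001100001010000010100000110000010100001001001'
print(lz_parse(seq))
# ['0', '1', '00', '001', '10', '000', '101', '0000', '01', '010',
#  '00001', '100', '0001', '0100', '0010', '01001']
```

This reproduces exactly the 16 phrases of the example; encoding each phrase as (4-bit dictionary address of its prefix, 1 new bit) then gives the codewords in the table.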
83050E/20 The Capacity of a Discrete-Time Channel
(1) Discrete-Valued Input and Output
_____________________________________________________

In the following, we consider the channel capacity, starting from the case of a discrete-time channel with discrete-valued input and output, and moving stepwise towards the case of a continuous-time channel with continuous-valued input and output.

The input is represented by the random process X_k. The output is represented by the random process Y_k. The channel is assumed to be memoryless, i.e., Y_k depends only on X_k but not on any other input symbol (neither earlier nor later ones). Such a channel is completely determined by the conditional probabilities

    p_{Y|X}(y|x),   x ∈ Ω_X, y ∈ Ω_Y

Example: Binary symmetric channel (BSC), Ω_X = Ω_Y = {0, 1}.

The conditional probabilities between input and output (transition probabilities), usually shown as a graph from x to y, are

    p_{Y|X}(0|0) = p_{Y|X}(1|1) = 1 − p
    p_{Y|X}(1|0) = p_{Y|X}(0|1) = p

where p is the bit-error probability. This is the simplest possible channel. Yet it is quite useful as a model of many practical channels, and it is used often in the continuation.
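The BSC transition probabilities are easy to simulate; the sketch below (my own illustration, not part of the notes) passes a random bit-stream through a BSC and checks that the measured bit-error-rate is close to the design value p:

```python
import random

def bsc(bits, p, rng):
    """Pass a bit sequence through a binary symmetric channel:
    each bit is flipped independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

rng = random.Random(1)     # fixed seed so the run is reproducible
p = 0.1                    # bit-error probability of the channel
tx = [rng.randrange(2) for _ in range(100_000)]
rx = bsc(tx, p, rng)

ber = sum(t != r for t, r in zip(tx, rx)) / len(tx)
print(ber)                 # close to p = 0.1
```

Such a simulated BSC is a handy test harness later in the course, e.g., for measuring how much an error control code reduces the residual error rate.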
