Question? Leave a message!




Coding Schemes, Adaptive Modulation/Coding, Hybrid ARQ

Point-to-Point Wireless Communication (Coding Schemes, Adaptive Modulation/Coding, Hybrid ARQ 44
PointtoPoint Wireless Communication (III): Coding Schemes, Adaptive Modulation/Coding, Hybrid ARQ/FEC Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 1References  David MacKay, Information Theory, Inference Learning Algorithms. PDF available online:  http://www.inference.phy.cam.ac.uk/mackay/itprnn/book.html  Chapter 1: Introduction to Information Theory  Skim Chapter 9: Communication over a noisy channel  Section 11.4: Capabilities of Practical Error Correcting Codes  Skim Chap 13: Binary codes (ideas of “distance”, “perfect”/MDS codes, concatenation)  Skim Chapter 47: LDPC codes  Chapter 48: Convolutional Turbo Codes  Optional browsing: Chap 49: Digital Fountain (erasure) codes  Article by Berlekamp: “Application of Error Control Coding to Communications” (especially the discussions on RS coding, concatenated codes, hybrid ARQ/FEC strategies) Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 2Context: Time Diversity  Time diversity can be obtained by interleaving and coding over symbols across different coherent time periods. Channel: time diversity/selectivity, but correlated across successive symbols (Repetition) Coding… w/o interleaving: a full codeword lost during fade Interleaving: of sufficient depth: ( coherence time) At most 1 symbol of codeword lost Coding alone is not sufficient Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 3What is channel coding  Transforming signals to improve communications performance by increasing the robustness against channel impairments (noise, interference, fading, ..)  It is a timediversity technique, but can be broadly thought of as techniques to make better use of the degreesoffreedom in channels (eg: spacetime codes)  Waveform coding: Transforming waveforms to better waveforms  Structured sequences: Transforming data sequences into better sequences, having structured redundancy.  “Better” in the sense of making the decision process less subject to errors.  Introduce constraints on transmitted codewords to have greater “distance” between them  Note: Channel coding was developed in the context of AWGN channels we shall study them in the same context Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 4Channel (Modified) Block Diagram Source Channel Pulse Bandpass Format encode encode modulate modulate Digital modulation Digital demodulation Source Demod. Channel Format Detect decode Sample decode Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 5Channel Coding Schemes: Block, Convolutional, Turbo Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 6Coding Gain: The Value of Coding…  Error performance vs. bandwidth  Power vs. bandwidth P B  Data rate vs. bandwidth Coded  Capacity vs. bandwidth A F Coding gain: For a given biterror probability, C B the reduction in the Eb/N0 that can be realized through the use of code: D E Uncoded  E E b b  G dB dB dB  N N  0 0 u c E / N (dB) b 0 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 7Coding Gain Potential GapfromShannonlimit: 5 BER=10 9.6 + 1.59 = 11.2 dB (about 7.8 dB if you maintain spectral efficiency) Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 8The Ultimate Shannon Limit  Goal: what is min Eb/No for any spectral efficiency (ρ→0)  Spectral efficiency ρ = B/W = log (1 + SNR) 2  where SNR = E /N where E =energy per symbol s o s ρ  Or SNR = (2 1)  Eb/No = Es/No (W/B) = SNR/ρ ρ Lets try to appreciate what Shannon‟s bound means Eb/No = (2 1)/ρ ln 2 = 1.59dB by designing some simple codes and comparing it to the Shannon bound  Fix ρ = 2 bits/Hz = (2ρ 1)/ρ = 3/2 = 1.76dB 5  Gaptocapacity BER =10 : 9.6dB + 1.59 = 11.2 dB (without regard for spectral eff.) or 9.6 – 1.76 = 7.84 dB (keeping spectral eff. constant) Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 9Binary Symmetric Channel (BSC)  Given a BER (f), we can construct a BSC with this BER… Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 10Reliable Disk Drive Application  We want to build a disk drive and write a GB/day for 10 years. 15  = desired BER: 10  Physical solution: use more reliable components, reduce noise  System solution: accept noisy channel, detect/correct errors (engineer reliability over unreliable channels) Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 11Repetition Code (R3) Majority Vote Decoding AWGN: Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 12Performance of R3 The error probability is dominated by the probability that two bits in 2 a block of three are flipped, which scales as f . For BSC with f = 0.1, the R3 code has a probability of error, after decoding, of p = 0.03 per bit or 3. b Rate penalty: need 3 noisy disks to get the loss prob down to 3. To 15 get to BER: 10 , we need 61 disks Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 13Coding: RateBER Tradeoff Repetition code R3: Lets try to design a “better” code: Hamming Code  Shannon: The perception that there is a necessary tradeoff between Rate and BER is illusory It is not true upto a critical rate, the channel capacity  You only need to design better codes to give you the coding gain… Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 14Hamming Code: Linear Block Code  A block code is a rule for converting a sequence of source bits s, of length K, say, into a transmitted sequence t of length N bits.  In a linear block code, the extra NK bits are linear functions of the original K bits; these extra bits are called paritycheck bits.  (7, 4) Hamming code: transmits N = 7 bits for every K = 4 source bits.  The first four transmitted bits, t t t t , are set equal to the four source 1 2 3 4 bits, s s s s . 1 2 3 4  The paritycheck bits t t t are set so that the parity within each circle 5 6 7 (see below) is even Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 15Hamming Code: (Contd) Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 16Hamming Code: Syndrome Decoding  If channel is BSC and all source vectors are equiprobable, then…  … the optimal decoder identifies the source vector s whose encoding t(s) differs from the received vector r in the fewest bits.  Similar to “closestdistance” decision rule seen in demodulation  Can we do it more efficiently Yes: Syndrome decoding Tx The decoding task is to find the smallest set of flipped bits that can account for these violations of the parity rules. The pattern of violations of the parity checks is called the syndrome: the syndrome above is z = (1, 1, 0), because the first two circles are `unhappy' (parity 1) and the third circle is `happy„ (parity 0). Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 17Syndrome Decoding (Contd)  Can we find a unique bit that lies inside all the `unhappy' circles and outside all the `happy' circles  If so, the flipping of that bit would account for the observed syndrome. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 18Hamming Code: Performance  A decoding error will occur whenever the noise has flipped more than one bit in a block of seven. 2  The probability scales as O(f ), as did the probability of error for the repetition code R3; but Hamming code has a greater rate, R = 4/7.  Dilbert Test: About 7 of the decoded bits are in error. The residual errors are correlated: often two or three successive decoded bits are flipped…  Generalizations of Hamming codes: called BCH codes Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 19Shannon’s Legacy: RateReliability of Codes  Noisychannel coding theorem: defines achievable rate/reliability regions  Note: you can get BER as low as desired by designing an appropriate code within the capacity region Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 20Shannon Legacy (Contd)  The maximum rate at which communication is possible with arbitrarily small p is called the capacity of the channel. b  BSC(f) capacity:  f = 0.1 has capacity C ≈ 0.53. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 21Caveats Remarks  Strictly, the above statements might not be quite right:  Shannon proved his noisychannel coding theorem by studying sequences of block codes with everincreasing block lengths, and the required block length might be bigger than a gigabyte (the size of our disk drive),  … in which case, Shannon might say `well, you can't do it with those tiny disk drives, but if you had two noisy terabyte drives, you could make a single highquality terabyte drive from them'.  Information theory addresses both the limitations and the possibilities of communication.  Reliable communication at any rate beyond the capacity is impossible, and that reliable communication at all rates up to capacity is possible. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 22Generalize: Linear Coding/Syndrome Decoding  The first four received bits, r r r r , purport to be the four source bits; and 1 2 3 4 the received bits r r r purport to be the parities of the source bits, as defined 5 6 7 by the generator matrix G.  Evaluate the three paritycheck bits for the received bits, r r r r , and see 1 2 3 4 whether they match the three received bits, r r r . 5 6 7  The differences (modulo 2) between these two triplets are called the syndrome of the received vector.  If the syndrome is zero then the received vector is a codeword, and the most probable decoding is given by reading out its first four bits.  If the syndrome is nonzero, then the noise sequence for this block was nonzero, and the syndrome is our pointer to the most probable error pattern. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 23Linear Coding/Syndrome Decoding (Contd)  Coding:  Received vector Syndome: Lets now build linear codes from ground up (first principles) The syndromedecoding problem is to find the most probable noise vector n satisfying the equation  Parity Check Matrix H: Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 24Some definitions  Binary field :  The set 0,1, under modulo 2 binary addition and multiplication forms a field. Addition Multiplication 0 0 0 00 0 01 1 01 0 1 0 1 10 0 11 0 11 1  Binary field is also called Galois field, GF(2). Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 25Definitions: Fields  Fields :  Let F be a set of objects on which two operations „+‟ and „.‟ are defined.  F is said to be a field if and only if 1. F forms a commutative group under + operation. The additive identity element is labeled “0”. a,bF ab b aF 2. F0 forms a commutative group under . operation. The multiplicative identity element is labeled “1”. a,bF ab baF 3. The operations “+” and “.” distribute: a(b c) (ab) (ac) Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 26Definitions: Vector Space over Fields  Vector space: (note: it mixes vectors and scalars)  Let V be a set of vectors and F a fields of elements called scalars. V forms a vector space over F if: u,v V  u  v  v u F 1. Commutative: 2. Closure: a F, v V  a  v  u V 3. Distributive: (a b)  v  a  v b  v and a (u  v)  a u  a  v 4. Associative: a,b F, v V  (a b)  v  a (b  v) v V, 1  v  v 5. Identity Element: Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 27Vector Spaces, Subspaces V  Examples of vector spaces n  The set of binary ntuples, denoted by V(0000),(0001),(0010),(0011),(0100),(0101),(0111), 4 (1000),(1001),(1010),(1011),(1100),(1101),(1111)  Vector subspace:  A subset S of the vector space is called a subspace if: V n Zero: The allzero vector is in S. Closure: The sum of any two vectors in S is also in S.  Example: (0000),(0101),(1010),(1111) is a subspace of V . 4 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 28Span, Bases…  Spanning set:  A collection of vectors ,  G v , v ,, v 1 2 n the linear combinations of which include all vectors in a vector space V, is said to be a spanning set for V or to span V. Example:  (1000),(0110),(1100),(0011),(1001) spans V . 4  Bases:  A spanning set for V that has minimal cardinality is called a basis for V.  Cardinality of a set is the number of objects in the set. Example: (1000),(0100),(0010),(0001) is a basis for V . 4 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 29Linear Block Codes are just Subspaces  Linear block code (n,k) k 2 CV  A set with cardinality is called a linear n block code if, and only if, it is a subspace of the vector space V . n V CV k n  Members of C are called codewords.  The allzero codeword is a codeword.  Any linear combination of codewords is a codeword. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 30Linear block codes – cont’d mapping V n V k C Bases of C Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 31Linear block codes – cont’d  The information bit stream is chopped into blocks of k bits.  Each block is encoded to a larger block of n bits.  The coded bits are modulated and sent over channel.  The reverse procedure is done at the receiver. Channel Data block Codeword encoder k bits n bits nk Redundant bits k R Code rate c n Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 32Recall: ReedSolomon RS(N,K): Linear Algebra in Action… Recover K = K of N RS(N,K) data packets received FEC (NK) Block Size Lossy Network (N) This is linear algebra in action: design a kdimensional vector subspace out of an Data = K Ndimensional vector space Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 33Linear block codes – cont’d  The Hamming weight (w) of vector U, denoted by w(U), is the number of nonzero elements in U.  The Hamming distance (d) between two vectors U and V, is the number of elements in which they differ.  The minimum distance of a block code is d(U, V) w(UV) d min d(U ,U ) min w(U ) min i j i i j i Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 34Linear block codes – cont’d  Error detection capability is given by e d1 min  Error correcting capability t of a code, which is defined as the maximum number of guaranteed correctable errors per codeword, is d1  min t  2  Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 35Linear block codes –cont’d mapping V n V k C Bases of C  A matrix G is constructed by taking as its rows the vectors on the basis, . V ,V ,,V 1 2 k v v v  11 12 1n V  1  v v v 21 22 2n   G     V   k v v v  k1 k 2 kn Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 36Linear block codes – cont’d  Encoding in (n,k) block code U mG V  1  V 2  (u ,u ,,u ) (m ,m ,,m ) 1 2 n 1 2 k    V k  (u ,u ,,u ) mV mV mV 1 2 n 1 1 2 2 2 k  The rows of G, are linearly independent. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 37Linear block codes – cont’d  Example: Block code (6,3) Message vector Codeword 000 000000 V 110100  1 100 110100  G V 011010 2 010 011010   V 101001  3 110 101110 001 101001 101 011101 011 110011 111 000111 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 38Systematic Block Codes  Systematic block code (n,k)  For a systematic code, the first (or last) k elements in the codeword are information bits. G P I k I k k identity matrix k P k(n k) matrix k U (u ,u ,...,u ) ( p , p ,..., p ,m ,m ,...,m ) 1 2 n 1 2 nk 1 2 k   parity bits message bits Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 39Linear block codes – cont’d  For any linear code we can find an matrix which its H (nk)n rows are orthogonal to rows of G T GH 0  Why H checks the parity of the received word (i.e. maps the Nbit word to a Mbit syndrome).  Codewords (=mG) should have parity of 0 (i.e. nullspace).  H is called the parity check matrix and its rows are linearly independent.  For systematic linear block codes: T H I P nk Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 40Linear block codes – cont’d Channel m U Data source Format Modulation encoding channel Channel Demodulation Data sink Format decoding Detection ˆ r m r Ue r (r ,r ,....,r ) received codeword or vector 1 2 n e (e ,e ,....,e ) error pattern or vector 1 2 n  Syndrome testing:  S is syndrome of r, corresponding to the error pattern e. T T S rH eH Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 41Linear block codes – cont’d Error pattern Syndrome 000000 000 U (101110) transmitted. 000001 101 r (001110) is received. 000010 011 The syndrome of r is computed: 000100 110 T T S rH (001110)H (100) 001000 001 010000 010 Error pattern corresponding to this syndrome is 100000 100 ˆ e (100000) 010001 111 The corrected vector is estimated ˆ ˆ U r e (001110) (100000) (101110) There is a unique mapping from Syndrome ↔ Error Pattern Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 42Standard Array: Error Patterns  Example: Standard array for the (6,3) code codewords 000000 110100 011010 101110 101001 011101 110011 000111 000001 110101 011011 101111 101000 011100 110010 000110 000010 110110 011000 101100 101011 011111 110001 000101 000100 110000 011110 101010 101101 011010 110111 000110 001000 111100 010000 100100 Coset: Error pattern + 100000 010100 codeword 010001 100101 010110 Coset leaders (error patterns) Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 43Linear block codes – cont’d  Standard array nk V 1. For row , find a vector in of minimum i 2,3,...,2 n weight which is not already listed in the array. 2. Call this error pattern e and form the row as the i : th i corresponding coset zero codeword U U U k 1 2 2 e e U e U k 2 2 2 2 2 coset  e e U e U nk nk nk k 2 2 2 2 2 coset leaders Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 44Linear block codes – cont’d  Standard array and syndrome table decoding T 1. Calculate syndrome S rH 2. Find the coset leader, , c ˆ orresponding to . e e S i ˆ ˆ ˆ m 3. Calculate a U r e nd corresponding . ˆ ˆ ˆ ˆ  Note that U re (Ue)e U (ee) ˆ e e  If , error is corrected. ˆ e e  If , undetectable decoding error occurs. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 45Hamming codes  Hamming codes  Hamming codes are a subclass of linear block codes and belong to the category of perfect codes.  Hamming codes are expressed as a function of a single integer , i.e. n and k are derived from m: m 2 m Code length : n 21 m Number of information bits : k 2 m1 Number of parity bits : nk m Error correction capability : t1  The columns of the paritycheck matrix, H, consist of all nonzero binary mtuples. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 46Hamming codes  Example: Systematic Hamming code (7,4) 1 0 0 0 1 1 1   T H 0 1 0 1 0 1 1 I P 33   0 0 1 1 1 0 1  0 1 1 1 0 0 0   1 0 1 0 1 0 0  G P I 44  1 1 0 0 0 1 0  1 1 1 0 0 0 1  Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 47Cyclic block codes  Cyclic codes are a subclass of linear block codes.  Encoding and syndrome calculation are easily performed using feedback shiftregisters.  Hence, relatively long block codes can be implemented with a reasonable complexity.  BCH and ReedSolomon codes are cyclic codes. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 48Cyclic block codes  A linear (n,k) code is called a Cyclic code if all cyclic shifts of a codeword are also a codeword. “i” cyclic shifts of U U (u ,u ,u ,...,u ) 0 1 2 n1 (i) U (u ,u ,...,u ,u ,u ,u ,...,u ) ni ni1 n1 0 1 2 ni1 Example: U (1101) (1) (2) (3) (4) U (1110) U (0111) U (1011) U (1101) U Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 49Cyclic block codes  Algebraic structure of Cyclic codes, implies expressing codewords in polynomial form 2 n1 U(X ) uu Xu X...u X degree (n1) 0 1 2 n1  Relationship between a codeword and its cyclic shifts: 2 n1 n XU(X ) u X u X...,u X u X 0 1 n2 n1 2 n1 n  u u X u X... u X u X u n1 0 1 n2 n1 n1  (1) n U ( X ) u ( X1) n1 (1) n  U (X ) u (X1) n1 (1) n Hence: U (X ) XU(X ) modulo (X1) By extension (i) i n U (X ) X U(X ) modulo (X1) Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 50Cyclic block codes  Basic properties of Cyclic codes:  Let C be a binary (n,k) linear cyclic code 1. Within the set of code polynomials in C, there is a unique monic polynomial g( X ) with minimal degree r  n. g (X ) is called the generator polynomials. r g(X ) g g X... g X 0 1 r 2. Every code polynomial U ( X ) in C, can be expressed uniquely as U(X ) m(X )g(X ) 3. The generator polynomial is a factor g(X ) n of X1 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 51Cyclic block codes 4. The orthogonality of G and H in n g(X )h(X ) X1 polynomial form is expressed as n This means h ( X ) is also a factor of X1 5. The row i, i  1 ,..., k , of generator matrix is formed by the coefficients of the "i1" cyclic shift of the generator polynomial. g g g 0  0 1 r g(X )   Toeplitz Matrix (like the circulant matrix): Efficient Linear Algebra g g g 0 1 r   Xg(X )   Operations (mult G iplica tion, inver se,  solut ion  of Ax = b) etc possible    g g g  0 1 r k1  X g(X )   0 g g g 0 1 r  Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 52Cyclic block codes  Systematic encoding algorithm for an (n,k) Cyclic code: nk 1. Multiply the message polynomial by X m(X ) 2. Divide the result of Step 1 by the generator polynomial . Let be the reminder. g(X ) p(X ) nk p(X ) U(X ) 3. Add to to form the codeword X m(X ) Remember CRC used to detect errors in packets “Cyclic” Redundancy Check: same idea Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 53Cyclic block codes  Example: For the systematic (7,4) Cyclic code with generator 3 polynomial g(X )1 X X 1. Find the codeword for the message m (1011) n 7, k 4, n k 3 2 3 m (1011) m(X )1 X X nk 3 3 2 3 3 5 6 X m(X ) X m(X ) X (1 X X ) X X X nk Divide X m(X ) by g(X) : 3 5 6 2 3 3 X X X (1 X X X )(1 X X ) 1  quotientq (X) generator g(X) remainder p( X ) Form the codeword polynomial: 3 3 5 6 U(X ) p(X ) X m(X )1 X X X U (1 0 0 1 0 1 1)  parity bits message bits Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 54Example: Encoding of systematic cyclic codes Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 55Table 16.6 Decoding cyclic codes s(x) mod r(x)/ g(x)  gx () Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 56Cyclic block codes 2. Find the generator and parity check matrices, G and H, respectively. 2 3 g(X )11 X 0 X1 X (g , g , g , g ) (1101) 0 1 2 3 1 1 0 1 0 0 0  Not in systematic form.  0 1 1 0 1 0 0 We do the following:  G  0 0 1 1 0 1 0 row(1) row(3) row(3)  0 0 0 1 1 0 1  row(1) row(2) row(4) row(4) 1 1 0 1 0 0 0  1 0 0 1 0 1 1   0 1 1 0 1 0 0   G H 0 1 0 1 1 1 0   1 1 1 0 0 1 0  0 0 1 0 1 1 1   1 0 1 0 0 0 1  T I 33 P P I 44 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 57Cyclic block codes  Syndrome decoding for Cyclic codes:  Received codeword in polynomial form is given by Error Received r(X ) U(X )e(X ) pattern codeword  The syndrome is the reminder obtained by dividing the received polynomial by the generator polynomial. r(X ) q(X )g(X )S(X ) Syndrome  With syndrome and Standard array, error is estimated. In Cyclic codes, the size of standard array is considerably reduced. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 58Example of the block codes P B 8PSK QPSK E / N dB b 0 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 59Wellknown Cyclic Codes  (n,1) Repetition codes. High coding gain, but low rate  (n,k) Hamming codes. Minimum distance always 3. Thus can detect 2 m errors and correct one error. n=2 1, k = n m, m 3  Maximumlength codes. For every integer there exists a maximum k 3 k k1 length code (n,k) with n = 2 1,d = 2 . Hamming codes are dual of min maximal codes. m m 3  BCHcodes. For every integer there exist a code with n = 2 1, and where t is the error correction capability k n mt dt  21 min  (n,k) ReedSolomon (RS) codes. Works with k symbols that consists of m bits that are encoded to yield code words of n symbols. For these codes m n 21,number of check symbols nk 2t anddt  21 min  BCH and RS are popular due to large d , large number of codes, and easy min generation Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 60ReedSolomon Codes (RS)  Group bits into Lbit symbols. Like BCH codes with symbols rather than single bits.  Can tolerate burst error better (fewer symbols in error for a given bitlevel burst event).  Shortened RScodes used in CDROMs, DVDs etc Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 61Shortened Reed Solomon Codes RS(N,K) RS(N,K) 0 0 0 z Zeros (z) 0 0 0 FEC (F =NK) K = d + z Block Size (N) Data = d d Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 62RScode performance  Longer blocks, better performance  Encoding/decoding complexity lower for higher code rates (i.e. ½ ): OK(NK) log N. 2 5  5.75.8 dB coding gain BER = 10 (similar to 5.1 dB for convolutional codes, see later) Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 63Convolutional Codes Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 64Block vs k bits n bits (n,k) encoder convolutional coding k input bits  (n,k) block codes: Encoder output of n bits depends only on the k input bits n output bits  (n,k,K) convolutional codes: input bit  each source bit influences n(K+1) encoder output bits n(K+1) is the constraint length n(K+1) output bits K is the memory depth Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 65Channel Block diagram: Convolutional Coding Information Rate 1/n Modulator source Conv. encoder U G(m) m (m ,m ,...,m ,...) 1 2 i   (U ,U ,U ,...,U ,...) Input s equence 1 2 3 i  Codeword sequence U u ,...,u ,...,u i 1i ji ni  Branch wo rd (n coded bits) Information Rate 1/n Demodulator sink Conv. decoder ˆ ˆ ˆ ˆ m (m ,m ,...,m ,...) Z (Z , Z , Z ,..., Z ,...) 1 2 i 1 2 3 i  received sequence Z z ,...,z ,...,z i 1i ji ni   Demodulator outputs n outputs per Branch word Shivkumar Kalyanaraman for Branch word i Rensselaer Polytechnic Institute : “shiv rpi” 66Convolutional codescont’d  A Convolutional code is specified by three parameters (n,k, K) (k / n, K) or where R k / n is the coding rate, determining the c number of data bits per coded bit.  In practice, usually k=1 is chosen and we assume that from now on.  K is the constraint length of the encoder a where the encoder has K1 memory elements. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 67A Rate ½ Convolutional encoder  Convolutional encoder (rate ½, K=3)  3 bit shiftregister where the first one takes the incoming data bit and the rest form the memory of the encoder. u First coded bit 1 (Branch word) Input data bits Output coded bits u ,u m 1 2 u 2 Second coded bit Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 68A Rate ½ Convolutional encoder Message sequence: m (101) Time Output Time Output (Branch word) (Branch word) u u 1 1 u u u u 1 2 1 2 t t 1 2 1 0 0 0 1 0 1 1 1 0 u u 2 2 u u 1 1 u u u u 1 2 1 2 t t 3 4 1 0 1 0 1 0 0 0 1 0 u u 2 2 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 69A Rate ½ Convolutional encoder (contd) Time Time Output Output (Branch word) (Branch word) u u 1 1 u u u u 1 2 1 2 t t 5 6 0 0 1 0 0 0 1 1 0 0 u u 2 2 Encoder m (101) U (11 10 00 10 11) n = 2, k = 1, K = 3, L = 3 input bits 10 output bits Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 70Effective code rate  Initialize the memory before encoding the first bit (allzero)  Clear out the memory after encoding the last bit (allzero)  Hence, a tail of zerobits is appended to data bits. data tail codeword Encoder  Effective code rate :  L is the number of data bits and k=1 is assumed: L R R eff c n(L K1) Encoder m (101) U (11 10 00 10 11) Example: n = 2, k = 1, K = 3, L = 3 input bits. Output = n(L + K 1) = 2(3 + 3 – 1) = 10 output bits Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 71Encoder representation  Vector representation:  We define n binary vector with K elements (one vector for each modulo2 adder).  The i:th element in each vector, is “1” if the i:th stage in the shift register is connected to the corresponding modulo2 adder, and “0” otherwise. Example: u 1 g (111) 1 m u u 1 2 g (101) 2 u 2 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 72Encoder representation: Impulse Response  Impulse response representaiton:  The response of encoder to a single “one” bit that goes through it. Example: Branch word Register u u contents 1 2 100 1 1 Input sequence : 1 0 0 010 1 0 Outputs equence : 11 10 11 001 1 1 Input m Output 1 11 10 11 0 00 00 00 1 11 10 11 Modulo2 sum: 11 10 00 10 11 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 73Encoder representation: Polynomial  Polynomial representation:  We define n generator polynomials, one for each modulo2 adder. Each polynomial is of degree K1 or less and describes the connection of the shift registers to the corresponding modulo2 adder. Example: (1) (1) (1) 2 2 g (X ) g g .X g .X1 X X 1 0 1 2 (2) (2) (2) 2 2 g (X ) g g .X g .X1 X 2 0 1 2 The output sequence is found as follows: U(X ) m(X )g (X ) interlaced with m(X )g (X ) 1 2 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 74Encoder representation –cont’d In more details: 2 2 3 4 m(X )g (X ) (1 X )(1 X X )1 X X X 1 2 2 4 m(X )g (X ) (1 X )(1 X )1 X 2 2 3 4 m(X )g (X )1 X 0.X X X 1 2 3 4 m(X )g (X )1 0.X 0.X 0.X X 2 2 3 4 U(X ) (1,1) (1,0)X (0,0)X (1,0)X (1,1)X U11 10 00 10 11 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 75State diagram  A finitestate machine only encounters a finite number of states.  State of a machine: the smallest amount of information that, together with a current input to the machine, can predict the output of the machine.  In a convolutional encoder, the state is represented by the content of the memory. K1  Hence, there are states. (grows exponentially w/ 2 constraint length) Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 76State diagram – cont’d Current input Next output 0/00 state state Output (Branch word) Input 0 00 S S 0 S 0 0 00 1 11 1/11 0/11 00 S 2 0 11 S 1/00 0 S 01 1 1 00 S S 2 1 S 10 01 2 0 10 0/10 S S 10 1 1 01 2 1/01 S S 0 01 0/01 3 3 11 S 11 1 10 1 S 3 S 1/10 3 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 77Trellis – cont’d  Trellis diagram is an extension of the state diagram that shows the passage of time.  Example of a section of trellis for the rate ½ code Branch State 0/00 S 00 0 1/11 S10 0/11 2 1/00 0/10 S 01 1/01 1 0/01 1/10 S11 3 t t Time i i1 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 78Trellis –cont’d  A trellis diagram for the example code Tail bits Input bits 1 0 1 0 0 Output bits 11 10 00 10 11 0/00 0/00 0/00 0/00 0/00 1/11 1/11 1/11 1/11 1/11 0/11 0/11 0/11 0/11 0/11 1/00 1/00 1/00 1/00 1/00 0/10 0/10 0/10 0/10 0/10 1/01 1/01 1/01 1/01 1/01 0/01 0/01 0/01 0/01 0/01 t t t t t t 1 5 6 2 3 4 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 79Trellis – cont’d Tail bits Input bits 1 0 1 0 0 Output bits 11 10 00 10 11 0/00 0/00 0/00 0/00 0/00 1/11 1/11 1/11 0/11 0/11 0/11 1/00 0/10 0/10 0/10 1/01 1/01 0/01 0/01 t t t t t t 1 5 6 2 3 4 Path through the trellis Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 80Optimum decoding  If the input sequence messages are equally likely, the optimum decoder which minimizes the probability of error is the Maximum likelihood decoder.  ML decoder, selects a codeword among all the possible (m) p(Z U ) codewords which maximizes the likelihood function (m) U where is the received sequence and is one of the Z possible codewords: L 2 codewords ML decoding rule: to search  (m ) (m ) (m) Choose U if p(Z U ) max p(Z U ) (m) over all U Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 81ML decoding for memoryless channels  Due to the independent channel statistics for memoryless channels, the likelihood function becomes  n (m) (m) (m) (m) p(Z U ) p (Z , Z ,..., Z ,...U ) p(Z U ) p(z u ) z ,z ,...,z ,... 1 2 i i i ji ji 1 2 i i1 i1 j1 and equivalently, the loglikelihood function becomes  n (m) (m) (m)  (m) log p(Z U ) log p(Z U ) log p(z u ) U i i ji ji i1 i1 j1 Bit metric Path metric Branch metric "i"  The path metric up to time index , is called the partial path metric. ML decoding rule: Choose the path with maximum metric among all the paths in the trellis. This path is the “closest” path to the transmitted sequence. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 82AWGN channels  For BPSK modulation the transmitted sequence (m) U corresponding to the codeword is denoted by (m) (m) (m) (m) (m) (m) (m) (m) where S (S ,S ,..., S ,...)and S (s ,...,s ,..., s ) 1 2 i i 1i ji ni and s E . ij c  The loglikelihood function becomes  n Inner product or correlation (m) (m)  (m) z s  Z, S between Z and S U ji ji i1 j1  Maximizing the correlation is equivalent to minimizing the Euclidean distance. ML decoding rule: Choose the path which with minimum Euclidean distance to the received sequence. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 83The Viterbi algorithm  The Viterbi algorithm performs Maximum likelihood decoding.  It find a path through trellis with the largest metric (maximum correlation or minimum distance).  It processes the demodulator outputs in an iterative manner.  At each step in the trellis, it compares the metric of all paths entering each state, and keeps only the path with the largest metric, called the survivor, together with its metric.  It proceeds in the trellis by eliminating the least likely paths. K1  It reduces the decoding complexity to L2 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 84The Viterbi algorithm cont’d  Viterbi algorithm: A. Do the following set up:  For a data block of L bits, form the trellis. The trellis has L+K1 sections or levels and starts at time and ends up at t 1 time . t LK  Label all the branches in the trellis with their corresponding branch metric. t  For each state in the trellis at the time which is denoted by i K1 S(t )0,1,...,2 S(t ),t , define a parameter (path metric) i i i B. Then, do the following: Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 85The Viterbi algorithm cont’d 1. Set and i 2. (0,t ) 0 1 2. At time , compute the partial path metrics for all the t i paths entering each state. 3. Set equal to the best partial path metric S(t ),t i i t entering each state at time . i Keep the survivor path and delete the dead paths from the trellis. i L K i 4. If , increase by 1 and return to step 2. t C. Start at state zero at time . Follow the surviving branches LK backwards through the trellis. The path thus defined is unique and correspond to the ML codeword. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 86Example of Viterbi decoding m (101) U (11 10 00 10 11) Z (11 10 11 10 01) 0/00 0/00 0/00 0/00 0/00 1/11 1/11 1/11 0/11 0/11 0/11 1/00 0/10 0/10 0/10 1/01 1/01 0/01 0/01 t t t t t t 5 6 1 3 2 4 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 87Viterbi decodingcont’d  Label al the branches with the branch metric (Hamming distance) m (101) U (11 10 00 10 11) S(t ),t Z (11 10 11 10 01) i i 0 2 1 2 1 1 0 1 0 0 1 1 2 0 1 0 1 2 2 1 1 t t t t t t 5 6 1 3 2 4 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 88Viterbi decodingcont’d  i=2 m (101) U (11 10 00 10 11) Z (11 10 11 10 01) 2 0 2 1 2 1 1 0 1 0 0 0 1 1 2 0 1 0 1 2 2 1 1 t t t t t t 5 6 1 3 2 4 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 89Viterbi decodingcont’d  i=3 m (101) U (11 10 00 10 11) Z (11 10 11 10 01) 2 3 0 2 1 2 1 1 0 1 0 0 3 0 1 1 2 0 1 0 0 1 2 2 1 2 1 t t t t t t 5 6 1 3 2 4 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 90Viterbi decodingcont’d  i=4 m (101) U (11 10 00 10 11) Z (11 10 11 10 01) 2 3 0 0 2 1 2 1 1 0 1 0 0 3 2 0 1 1 1 2 0 0 0 3 1 2 2 1 2 3 1 t t t t t t 5 6 1 3 2 4 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 91Viterbi decodingcont’d  i=5 m (101) U (11 10 00 10 11) Z (11 10 11 10 01) 2 3 1 0 0 2 1 2 1 1 0 1 0 3 2 0 0 1 1 1 2 0 0 3 2 0 1 2 2 1 2 3 1 t t t t t t 5 6 1 3 2 4 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 92Viterbi decodingcont’d  i=6 m (101) U (11 10 00 10 11) Z (11 10 11 10 01) 2 3 0 1 2 0 2 1 2 1 1 0 1 0 0 3 2 0 1 1 1 2 0 0 0 3 2 1 2 2 1 2 3 1 t t t t t t 5 6 1 3 2 4 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 93Viterbi decodingcont’d  Trace back and then: m (101) ˆ m (100) U (11 10 00 10 11) vs ˆ Z (11 10 11 10 01) U (11 10 11 00 00) 2 3 0 1 2 0 2 1 2 1 1 0 1 0 0 3 2 0 1 1 1 2 0 0 0 3 2 1 2 2 1 2 3 1 t t t t t t 5 6 1 3 2 4 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 94Soft and hard decisions  Hard decision:  The demodulator makes a firm or hard decision whether one or zero is transmitted and provides no other information reg. how reliable the decision is.  Hence, its output is only zero or one (the output is quantized only to two level) which are called “hardbits”.  Soft decision:  The demodulator provides the decoder with some side information together with the decision.  The side information provides the decoder with a measure of confidence for the decision.  The demodulator outputs which are called softbits, are quantized to more than two levels. (eg: 8levels)  Decoding based on softbits, is called the “softdecision decoding”.  On AWGN channels, 2 dB and on fading channels 6 dB gain are obtained by using softdecoding over harddecoding Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 95Performance bounds …  Basic coding gain (dB) for softdecision Viterbi decoding Uncoded Code rate 1/ 3 1/ 2 E / N b 0 (dB) P K 7 8 6 7 B 3 6.8 10 4.2 4.4 3.5 3.8 5 9.6 10 5.7 5.9 4.6 5.1 7 11.3 10 6.2 6.5 5.3 5.8 Upper bound 7.0 7.3 6.0 7.0 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 96Interleaving  Convolutional codes are suitable for memoryless channels with random error events.  Some errors have bursty nature:  Statistical dependence among successive error events (timecorrelation) due to the channel memory. Like errors in multipath fading channels in wireless communications, errors due to the switching noise, …  “Interleaving” makes the channel looks like as a memoryless channel at the decoder. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 97Interleaving …  Consider a code with t=1 and 3 coded bits.  A burst error of length 3 can not be corrected. A1 A2 A3 B1 B2 B3 C1 C2 C3 2 errors  Let us use a block interleaver 3X3 A1 A2 A3 B1 B2 B3 C1 C2 C3 A1 B1 C1 A2 B2 C2 A3 B3 C3 Interleaver Deinterleaver A1 B1 C1 A2 B2 C2 A3 B3 C3 A1 A2 A3 B1 B2 B3 C1 C2 C3 1 error 1 error 1 error Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 98Concatenated Codes Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 99Channel Concatenated codes  A concatenated code uses two levels on coding, an inner code and an outer code (higher rate).  Popular concatenated codes: Convolutional codes with Viterbi decoding as the inner code and ReedSolomon codes as the outer code  The purpose is to reduce the overall complexity, yet achieving the required error performance. Input Outer Inner Interleaver Modulate data encoder encoder Output Outer Inner Deinterleaver Demodulate data decoder decoder Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 100Concatenated Codes  Encoderchanneldecoder system C → Q → D can be viewed as defining a super channel Q‟ with a smaller probability of error, and with complex correlations among its errors.  We can create an encoder C‟ and decoder D‟ for this super channel Q‟. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 101Product/Rectangular Codes: Concatenation + Interleaving  Some concatenated codes make use of the idea of interleaving.  Blocks of size larger than the block lengths of the constituent codes C and C‟.  After encoding the data of one block using code C‟,  … the bits are reordered within the block in such a way that nearby bits are separated from each other once the block is fed to the second code C.  A simple example of an interleaver is a rectangular code or product code in which …  … the data: K x K rectangular block, and … 2 1  … encoded horizontally using an (N ,K ) linear code, 1 1  … then vertically using a (N ,K ) linear code. 2 2 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 102Product code Example  (a) A string 1011 encoded using a concatenated code w/ two Hamming codes, H(3, 1) ≡ Repetition code (R3) and H(7,4).  (b) a noise pattern that flips 5 bits.  (c) The received vector. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 103Product Codes (Contd)  (d) After decoding using the horizontal (3, 1) decoder, and  (e) after subsequently using the vertical (7; 4) decoder.  The decoded vector matches the original.  Note: Decoding in the other order (weakercode first) leads to residual error in this example: Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 104Practical example: Compact disc “Without error correcting codes, digital audio would not be technically feasible.”  Channel in a CD playback system consists of a transmitting laser, a recorded disc and a photodetector.  Sources of errors are manufacturing damages, fingerprints or scratches  Errors have bursty like nature.  Error correction and concealment is done by using a concatenated error control scheme, called crossinterleaver ReedSolomon code (CIRC).  Both the inner and outer codes are shortened RS codes Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 105Compact disc – CIRC Encoder  CIRC encoder and decoder: Encoder C C  D D 2 1 interleave encode interleave encode interleave C C  D D 2 1 deinterleave decode deinterleave decode deinterleave Decoder Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 106Adaptive Modulation and Coding Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 107Adaptive Modulation  Just vary the “M” in the MQAM constellation to the appropriate SNR  Can be used in conjunction with spatial diversity Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 108Adaptive modulation/coding: MultiUser  Exploit multiuser diversity.  Users with high SNR: use MQAM (large M) + high code rates  Users with low SNR: use BPSK + low code rates (i.e. heavy error protection)  In any WiMAX frame, different users (assigned to timefrequency slots within a frame) would be getting a different rate  i.e. be using different code/modulation combos.. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 109Basis for Adaptive Modulation/Coding (AMC)  Kuser system: the subcarrier of interest experiences i.i.d. Rayleigh fading: each user‟s channel gain is independent of the others, and is denoted by h . k Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 110Wimax: Uses Feedback Burst Profiles  Lower data rates are achieved by using a small constellation – such as QPSK – and low rate error correcting codes such as rate 1/2 convolutional or turbo codes.  The higher data rates are achieved with large constellations – such as 64QAM – and less robust error correcting codes, for example rate 3/4 convolutional, turbo, or LDPC codes.  Wimax burst profiles: 52 different possible configurations of modulation order and coding types and rates.  WiMAX systems heavily protect the feedback channel with error correction, so usually the main source of degradation is due to mobility, which causes channel estimates to rapidly become obsolete. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 111AMC Considerations  BLER and Received SINR: In adaptive modulation theory, the transmitter needs only to know the statistics and instantaneous channel SINR. From the channel SINR, it can determine the optimum coding/modulation strategy and transmit power.  In practice however, the BLER should be carefully monitored as the final word on whether the data rate should be increased (if the BLER is low) or decreased to a more robust setting.  Automatic Repeat Request (ARQ): ARQ allows rapid retransmissions, and Hybrid ARQ generally increases the ideal BLER operating point by about a factor of 10, e.g. from 1 to 10.  For delaytolerant applications, it may be possible to accept a BLER approaching even 70, if Chase combining is used in conjunction with HARQ to make use of unsuccessful packets.  Power control vs. Waterfilling: In theory, the best power control policy from a capacity standpoint is the socalled waterfilling strategy, in which more power is allocated to strong channels, and less power allocated to weak channels. In practice, the opposite may be true in some cases. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 112AMC vs Shannon Limit  Optionally turbocodes or LDPC codes can be used instead of simple block/convolutional codes in these schemes Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 113Main Points  Adaptive MQAM uses capacityachieving power and rate adaptation, with power penalty K.  Adaptive MQAM comes within 56 dB of capacity  Discretizing the constellation size results in negligible performance loss.  Constellations cannot be updated faster than 10s to 100s of symbol times: OK for most dopplers.  Estimation error and delay lead to irreducible error floors. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 114Hybrid ARQ/FEC Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 115Type I HARQ: Chase Combining  In Type I HARQ, also referred to as Chase Combining, the redundancy version of the encoded bits is not changed from one transmission to the next, i.e. the puncturing patterns remains same.  The receiver uses the current and all previous HARQ transmissions of the data block in order to decode it.  With each new transmission the reliability of the encoded bits improve thus reducing the probability of error during the decoding stage.  This process continues until either the block is decoded without error (passes the CRC check) or the maximum number of allowable HARQ transmissions is reached.  When the data block cannot be decoded without error and the maximum number of HARQ transmissions is reached, it is left up to a higher layer such as MAC or TCP/IP to retransmit the data block.  In that case all previous transmissions are cleared and the HARQ process start from the beginning.  Used in WiMAX implementations: can provide range extension (especially at celledge). Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 116Type II ARQ: Incremental Redundancy  Type II HARQ is also referred to as Incremental Redundancy  The redundancy version of the encoded bits is changed from one transmission to the next. (Ratecompatible Punctured Convolutional codes (RCPC)) used.  Thus the puncturing pattern changes from one transmission to the next.  This not only improves the log likelihood estimates (LLR) of parity bits but also reduces the code rate with each additional transmission.  Incremental redundancy leads to lower bit error rate (BER) and block error rate (BLER) compared to chase combining.  Wimax uses only Type I HARQ (Chase) and not Type II for complexity reasons Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 117Hybrid ARQ/FEC: Combining Coding w/ Feedback Packets • Sequence Numbers • CRC or Checksum • Proactive FEC Timeout • ACKs Status Reports • NAKs, • SACKs • Bitmaps Retransmissions • Packets • Reactive FEC Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 118Hybrid ARQ/FEC For TCP over Lossy Networks PROACTIVE FEC (PFEC) P = f(,) fec REACTIVE FEC (RFEC) Y = g(p,,X) Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 119LossTolerant TCP (LTTCP) vs TCPSACK Maximum Goodput Missing Goodput Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 120Tradeoffs in Hybrid ARQ/FEC 1.4Mbps goodput sacrificed Analysis : (10 Mbps, p = 50) (FEC waste) to reduce latency, residual loss Goodput = 3.61 Mbps vs 5 Mbps (max) PFEC:  +  of loss process Upfront PFEC waste (10) PFEC waste: 1.0 Mbps = 10 dominates RFEC waste RFEC waste: 0.39 Mbps = 3.9 Residual Loss can be negligible even for high loss rates (50), even Residual Loss : 0.0 with a limit of just 1 ARQ attempt. Weighted Avg Rounds: 1.13 Tradeoffs Goodput Block recovery Residual latency Loss Rate Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 121Towards the Shannon Limit LDPC, Turbo Codes, Digital Fountains Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 122Recall: Coding Gain Potential GapfromShannonlimit: 5 BER=10 9.6 + 1.59 = 11.2 dB (about 7.8 dB if you maintain spectral efficiency) 5  With convolutional code alone, BER of 10 , we require Eb/No of 4.5dB or get a gain of 5.1 dB.  With concatenated RSConvolutional code, BER curve vertical cliff at an Eb/No of about 2.52.6 dB, i.e a gain of 7.1dB.  We are still 11.2 – 7.1 = 4.1 dB away from the Shannon limit   Turbo codes and LDPC codes get us within 0.1dB of the Shannon limit  Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 123LowDensity Parity Check (LDPC) Codes Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 124LDPC Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 125Example LDPC Code  A lowdensity paritycheck matrix and the corresponding (bipartite) graph of a rate1/4 lowdensity paritycheck code with blocklength N =16, and M =12 constraints.  Each white circle represents a transmitted bit.  Each bit participates in j = 3 constraints, represented by squares.  Each constraint forces the sum of the k = 4 bits to which it is connected to be even.  This code is a (16; 4) code. Outstanding performance is obtained when the blocklength is increased to N ≈ 10,000. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 126Tanner Graph Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 127A.k.a Factor Graph Notation Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 128Factor Graphs  A factor graph shows how a function of several variables can be factored into a product of "smaller" functions.  For example, the function g defined by g(x,y)=xy+x can be factored into g(x,y)=f (x)f (y) where f (x)=x and f (y)=y+1. 1 2 1 2  The factor graph depicting this factorization:  Graph for function g(x,y,z) = f (x,y) f (y,z) f (x,z). 1 2 3  Why Factor graphs  1. Very general: variables and functions are arbitrary  2. Factorization = SumProduct Algorithm can be applied  3. Third, many efficient algorithms are special cases of the SumProduct Algorithm applied to factor graphs:  FFT (Fast Fourier Transform), Viterbi Algorithm, ForwardBackward Algorithm, Kalman Filter and Bayesian Network Belief Propagation.  Brings many good algorithms together in a common framework. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 129LDPC Coding Constructions Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 130LDPC Decoding: Iterative Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 131Regular vs Irregular LDPC Codes Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 132Irregular LDPC Codes Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 133Turbo Codes Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 134Turbo Codes Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 135Turbo Encoder  The encoder of a turbo code.  Each box C1, C2, contains a convolutional code.  The source bits are reordered using a permutation π before they are fed to C2.  The transmitted codeword is obtained by concatenating or interleaving the  outputs of the two convolutional codes.  The random permutation is chosen when the code is designed, and fixed thereafter. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 136Turbo: MAP Decoding Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 137Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 138Turbo Codes: Performance… Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 139UMTS Turbo Encoder Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 140WiMAX: Convolutional Turbo Codes (CTC) Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 141Digital Fountain Erasure Codes Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 142What is a Digital Fountain  A digital fountain is an ideal/paradigm for data transmission.  Vs. the standard (TCP) paradigm: data is an ordered finite sequence of bytes.  Instead, with a digital fountain, a k symbol file yields an infinite data stream (“fountain”); once you have received any k symbols from this stream, you can quickly reconstruct the original file. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 143How Do We Build a Digital Fountain  We can construct (approximate) digital fountains using erasure codes.  Including ReedSolomon, Tornado, LT, fountain codes.  Generally, we only come close to the ideal of the paradigm.  Streams not truly infinite; encoding or decoding times; coding overhead. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 144Forward Error Correction (FEC): Eg: ReedSolomon RS(N,K) Recover K = K of N RS(N,K) data packets received FEC (NK) Block Size Lossy Network (N) High Encode/Decode times: OK(NK) log N. 2 Hard to do very fast line rates (eg: 1Gbps+). Data = K Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 145… … Digital Fountain Codes (Eg: Raptor codes) Recover K = K+ε data packets received Rateless: No Block Size “Fountain of encoded pkts” Compute on demand Data = K Low Encode/Decode times: OK ln(K/δ) Lossy Network w/ probability 1 δ. Overhead ε 5. Can be done by software very fast (eg: 1Gbps+). Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 146Raptor/Rateless Codes  Properties: Approximately MDS  “Infinite” supply of packets possible.  Need k(1+e) symbols to decode, for some e 0.  Decoding time proportional to k ln (1/e).  On average, ln (1/e) (constant) time to produce an encoding symbol.  Key: Very fast encode/decode time compared to RS codes  Compute new check packets on demand  Bottomline: these codes can be made very efficient and deliver on the promise of the digital fountain paradigm. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 147Digital Fountain Encoder/Decoder  Encoder:  Decoder: Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 148Digital Fountain decoding (example)  Received bits: 1011  t is of degree 1, s = t = 1 1 1 1 First such code called “Tornado” code.  t t XOR‟ed w/ s = 1. Later: LTcodes; 2 3 1 Concatenated version: Remove s ‟s edges 1 “Raptor code”  s set to t = 0 degree = 1 2 4  Repeat as before; s = 1 3 Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 149Esoterics: Robust Soliton Degree Distribution Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 150Applications: Reliable Multicast  Many potential problems when multicasting to large audience.  Feedback explosion of lost packets.  Start time heterogeneity.  Loss/bandwidth heterogeneity.  A digital fountain solves these problems.  Each user gets what they can, and stops when they have enough: doesn‟t matter which packets they‟ve lost  Different paths could have diff. loss rates Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 151Applications: Downloading in Parallel  Can collect data from multiple digital fountains for the same source seamlessly.  Since each fountain has an “infinite” collection of packets, no duplicates.  Relative fountain speeds unimportant; just need to get enough.  Combined multicast/multigather possible.  Can be used for BitTorrentlike applications.  Microsoft‟s “Avalanche” product uses randomized linear codes to do “network coding”  http://research.microsoft.com/pablo/avalanche.aspx  Used to deliver patches to security flaws rapidly; Microsoft Update dissemination etc Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 152Single path: limited capacity, delay, loss… High Delay/Jitter Low Capacity Lossy Time Network paths usually have: • low e2e capacity, • high latencies and • high/variable loss rates. Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 153Idea: Aggregate Capacity, Use Route Diversity Low Perceived Loss High Perceived Capacity Scalable Performance Boost with ↑ Low Perceived Delay/Jitter Paths Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 154Multipath LTTCP (MLTCP): Structure Map pkts→paths intelligently Socket based upon Rank(p , RTT, w ) i i i Buffer Perpath congestion control Reliability aggregate, across paths (like TCP) (FEC block = weighted sum of windows, PFEC based upon weighted average loss rate) Note: these ideas can be applied to other linklevel multihoming, Networklevel virtual paths, nonTCP transport protocols (including videostreaming) Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 155Summary  Coding: allows better use of degrees of freedom  Greater reliability (BER) for a given Eb/No, or  Coding gain (power gain) for a given BER. 5  Eg: BER = 10 :  5.1 dB (Convolutional), 7.1dB (concatenated RS/Convolutional)  Near (0.11dB from) Shannon limit (LDPC, Turbo Codes)  Magic achieved through iterative decoding (belief propagation) in both LDPC/Turbo codes  Concatenation, interleaving used in turbo codes  Digital fountain erasure codes use randomized LDPC constructions as well.  Coding can be combined with modulation adaptively in response to SNR feedback  Coding can also be combined with ARQ to form Hybrid ARQ/FEC  Efficient coding schemes now possible in software/high line rates = they are influencing protocol design at higher layers also:  LTTCP, MLTCP, multicast, storage (RAID, CD/DVDs), Bittorrent, Network coding in Avalanche (Microsoft Updates) etc Shivkumar Kalyanaraman Rensselaer Polytechnic Institute : “shiv rpi” 156
Website URL
Comment