Fundamentals of Information Theory and Coding Theory (信息论与编码理论基础)
XDU, Fall 2007 Lecture Notes — December 4, 2007

Introduction to Channel Coding

I. Error control coding (overview)

Shannon showed that reliable communication can be achieved by proper coding of the information to be transmitted, provided that the rate of information transmission is below the channel capacity. Coding is achieved by adding properly designed redundancy to each message before its transmission. The added redundancy is used for error control; it may appear in the form of extra symbols (or bits), in the form of channel signal-set expansion, or as a combination of both.

Coding may be designed and performed separately from modulation, or designed in conjunction with modulation as a single entity. In the former case, redundancy appears in the form of extra symbols, normally called parity-check symbols. Coding achieved by adding extra redundant digits is known as conventional coding, in which error control (or coding gain) is achieved at the expense of bandwidth expansion or data-rate reduction. Conventional coding is therefore suitable for error control in power-limited channels, such as the deep-space channel.

When coding is designed in conjunction with modulation, the redundancy comes from channel signal-set expansion. This combination of coding and modulation is usually known as coded modulation, which allows us to achieve error control (or coding gain) without compromising bandwidth efficiency. We refer to this technique as bandwidth-efficient coding.

Historical notes:

Hamming codes (1950)
Reed-Muller codes (1954)
Convolutional codes (by Elias, 1955)
BCH codes (by Bose, Ray-Chaudhuri, and Hocquenghem, 1959)
Reed-Solomon codes (1960)
Berlekamp-Massey (BM) algorithm (1968), for decoding BCH and Reed-Solomon codes
Low-density parity-check codes (by Gallager in 1962, rediscovered in the 1990s)
Viterbi algorithm (1967)
Concatenated codes (by Forney, 1966)
Trellis-coded modulation (by Ungerboeck, 1982)
Turbo codes (by Berrou, 1993)
Space-time codes (by Vahid Tarokh, 1998)

Applications: deep space, satellite, and mobile communications, voice-band modems, data networks, etc.

Two simple examples:
Repetition codes:

    0 → 000000
    1 → 111111

an (n, 1) code (here n = 6).

Single parity-check codes:

    0000 → 0000 0
    0001 → 0001 1
    0010 → 0010 1
    0011 → 0011 0
      ⋮        ⋮

an (n, n−1) code; the appended bit is the modulo-2 sum of the message bits.

References:
[1] Shu Lin and D. J. Costello, Jr., Error Control Coding: Fundamentals and Applications, 2nd ed. Prentice-Hall, 2004.
[2] 王新梅, 肖国镇, 纠错码—原理与方法 (Error-Correcting Codes: Principles and Methods). Xi'an: Xidian University Press, 1991.
[3] D. J. Costello, J. Hagenauer, H. Imai, and S. B. Wicker, "Applications of error-control coding," IEEE Trans. Inform. Theory, vol. 44, no. 6, pp. 2531-2560, Oct. 1998.
[4] D. J. Costello and G. D. Forney, "Channel coding: The road to channel capacity," Proceedings of the IEEE, vol. 95, no. 6, pp. 1150-1177, June 2007.

II. Block coding

In block coding, the information sequence is divided into messages of k information bits (or symbols) each. Each message is mapped into a structured sequence of n bits (with n > k), called a codeword:

    (u0, u1, …, u_{k−1})  ↔  (c0, c1, …, c_{n−1})
         message                  codeword

The mapping operation is called encoding. Each encoding operation is independent of past encodings. The collection of all codewords is called an (n, k) block code, where n and k are the length and dimension of the code, respectively. In the process of encoding, n − k redundant bits are added to each message for protection against transmission errors.

For example, consider a (5, 2) binary code of size M = 2^k = 4:

    00 ↔ 10101 = c1
    01 ↔ 10010 = c2
    10 ↔ 01110 = c3
    11 ↔ 11110 = c4

    C = {c1, c2, c3, c4}

An important class of block codes is the class of linear block codes. A block code is said to be linear if the vector sum of any two codewords is also a codeword:

    c_i = (c_{i,0}, c_{i,1}, …, c_{i,n−1}) ∈ C,  c_j = (c_{j,0}, c_{j,1}, …, c_{j,n−1}) ∈ C
    ⇒  c_i ⊕ c_j = (c_{i,0} ⊕ c_{j,0}, c_{i,1} ⊕ c_{j,1}, …, c_{i,n−1} ⊕ c_{j,n−1}) ∈ C

More generally, a linear code is a subspace of GF(q)^n (closed under vector addition and scalar multiplication).
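The single parity-check construction and the closure property of linear codes can both be checked with a short script. This is an illustrative Python sketch (the function names are mine, not from the notes):

```python
# Sketch: the (5, 4) single parity-check code used as an example above,
# and a brute-force check that it is linear (closed under bitwise XOR).
from itertools import product

def spc_encode(message):
    """Append the modulo-2 sum of the message bits as a parity bit."""
    return tuple(message) + (sum(message) % 2,)

# All 2^4 codewords of the (5, 4) single parity-check code.
code = {spc_encode(m) for m in product([0, 1], repeat=4)}

def xor(a, b):
    """Componentwise modulo-2 vector sum of two codewords."""
    return tuple(x ^ y for x, y in zip(a, b))

# Closure: the sum (XOR) of any two codewords is again a codeword.
assert all(xor(a, b) in code for a in code for b in code)
print(spc_encode((0, 0, 1, 1)))  # (0, 0, 1, 1, 0), as in the table above
```

The closure check passes because the XOR of two even-weight words always has even weight, which is exactly why the single parity-check code is linear.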
Linear block codes are normally put in systematic form:

    (c0, c1, …, c_{n−1}) = (c0, c1, …, c_{n−k−1}, u0, u1, …, u_{k−1})
                            \-- parity-check part --/ \-- message part --/

Each parity-check bit is a linear sum of the message bits, i.e.,

    c_j = Σ_{i=0}^{k−1} p_{ij} u_i,   j = 0, 1, …, n−k−1,

where p_{ij} = 0 or 1. The n − k equations which give the parity-check bits are called the parity-check equations. They specify the encoding rule.

For an (n, k) block code, the ratios

    R = k/n   and   η = (n − k)/n

are called the code rate and the redundancy, respectively.

An example of a block code:

Let n = 7 and k = 4. Consider the (7, 4) linear systematic block code with

    Message:  (u0, u1, u2, u3)
    Codeword: (c0, c1, c2, c3, c4, c5, c6) = (c0, c1, c2, u0, u1, u2, u3)

Here,

    c0 = u0 + u1 + u2
    c1 = u1 + u2 + u3
    c2 = u0 + u1 + u3

In matrix form:

                           | 1 0 1 1 0 0 0 |
    c = (u0, u1, u2, u3) · | 1 1 1 0 1 0 0 |  = u·G
                           | 1 1 0 0 0 1 0 |
                           | 0 1 1 0 0 0 1 |

Encoder circuit:
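Before turning to the hardware circuit, the encoding rule c = u·G over GF(2) can be sanity-checked in software. This is an illustrative sketch, not code from the notes:

```python
# Sketch: encoding the (7, 4) systematic code above as c = u.G modulo 2.

G = [
    [1, 0, 1, 1, 0, 0, 0],
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0, 1, 0],
    [0, 1, 1, 0, 0, 0, 1],
]

def encode(u):
    """Multiply the 4-bit message u by G, with arithmetic modulo 2."""
    return [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

u = [1, 0, 1, 1]
c = encode(u)
print(c)  # [0, 0, 0, 1, 0, 1, 1]

# The parity bits agree with the parity-check equations:
assert c[0] == (u[0] + u[1] + u[2]) % 2
assert c[1] == (u[1] + u[2] + u[3]) % 2
assert c[2] == (u[0] + u[1] + u[3]) % 2
# Systematic form: the message appears unchanged in the last four positions.
assert c[3:] == u
```

The assertions confirm that the generator matrix G and the three parity-check equations describe the same encoder.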
[Figure: encoder circuit for the (7, 4) code — a message register feeds three modulo-2 adders (⊕) that form c0, c1, c2; the parity and message bits are then sent to the channel.]

III. Convolutional coding

During each unit of time, the input to a convolutional encoder is a k-bit message block and the corresponding output is an n-bit coded block, with k < n. (To avoid confusion with the block-code parameters, the notation k0, n0 may be used instead.) Each coded n-bit output block depends not only on the corresponding k-bit input message block at the same time unit but also on the m previous message blocks, so the encoder has memory.

[Figure: general convolutional encoder — the input data stream is split into data frames of k bits on inputs u(1), …, u(k); a shift register with memory and combinational modulo-2 logic form the n outputs c(1), …, c(n), one codeword frame per data frame.]

The code rate is defined as R = k/n. The parameter m is called the memory order of the code.
In the process of coding, the information sequence u is divided into data frames of length k. These subsequences of length k are applied to the k input terminals of the encoder, producing coded sequences of length n.

An example:

Let n = 2, k = 1, and m = 2. Consider a rate-1/2 (2, 1, 2) convolutional code specified by the following two generator sequences:

    g^(1) = (101),  g^(2) = (111)

[Figure: the (2, 1, 2) encoder — the input u enters a two-stage shift register; modulo-2 adders form the two outputs c^(1) and c^(2).]

Note: g^(1) and g^(2) can be regarded as the two impulse responses of the encoder, obtained from the input u = δ = (1 0 0 …). An impulse response lasts at most m + 1 time units and can be written as

    g^(1) = (g_0^(1), g_1^(1), …, g_m^(1)),   g^(2) = (g_0^(2), g_1^(2), …, g_m^(2)).

- Let u = (u0, u1, …) be the input message sequence. Then the two output sequences are given by the encoding equations (convolutions with the impulse responses):

    c^(1) = u * g^(1)
    c^(2) = u * g^(2)

- At the l-th time unit, the input is a single bit u_l. The corresponding output is a block of two bits, (c_l^(1), c_l^(2)), given by

    c_l^(j) = Σ_{i=0}^{m} u_{l−i} g_i^(j) = u_l g_0^(j) + u_{l−1} g_1^(j) + … + u_{l−m} g_m^(j)

  For this code:

    c_l^(1) = u_l + u_{l−2}
    c_l^(2) = u_l + u_{l−1} + u_{l−2}

  where the terms u_{l−1} and u_{l−2} come from the encoder memory.

- The output codeword is given by c = (c_0^(1) c_0^(2), c_1^(1) c_1^(2), …).

  For u = (1011100…), c = (11, 01, 00, 10, 01, 10, 11, …).
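The encoding equations above can be sketched directly as two modulo-2 convolutions. The following Python is an illustration of the example code, not code from the notes; bits before time 0 are taken as 0 (the shift register starts cleared):

```python
# Sketch: the rate-1/2 (2, 1, 2) convolutional encoder above,
# implemented as two modulo-2 convolutions with the generator sequences.

g1 = [1, 0, 1]   # g^(1) = (101)
g2 = [1, 1, 1]   # g^(2) = (111)

def conv_encode(u):
    """Return the output pairs (c_l^(1), c_l^(2)) for l = 0, 1, ..."""
    def out(g, l):
        # c_l = sum_{i=0}^{m} u_{l-i} g_i (mod 2), with u_t = 0 for t < 0.
        return sum(g[i] * (u[l - i] if l - i >= 0 else 0)
                   for i in range(len(g))) % 2
    return [(out(g1, l), out(g2, l)) for l in range(len(u))]

u = [1, 0, 1, 1, 1, 0, 0]
print(conv_encode(u))
# [(1, 1), (0, 1), (0, 0), (1, 0), (0, 1), (1, 0), (1, 1)]
```

Feeding in the impulse u = (1, 0, 0) reproduces the generator sequences on the two output streams, matching the impulse-response interpretation of g^(1) and g^(2).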