receiver matched filter. For BPSK transmission over the AWGN channel, a simple measure of the reliability of a received symbol $y_l$ is its magnitude $|y_l|$: the larger the magnitude, the larger the reliability of the hard-decision bit $z_l$. This reliability measure has been used in many reliability-based algorithms for decoding linear block codes.

Let $\mathbf{H}$ be the parity-check matrix of a $(d_v, d_c)$-regular LDPC code $C$ of length $n$. For $0 \le l < n$, consider the set $A_l = \{\mathbf{h}_1^{(l)}, \mathbf{h}_2^{(l)}, \ldots, \mathbf{h}_{d_v}^{(l)}\}$ of $d_v$ rows in $\mathbf{H}$ that are orthogonal on the bit position $l$. For $1 \le j \le d_v$, consider the $j$th row in $A_l$,
$$\mathbf{h}_j^{(l)} = \left(h_{j,0}^{(l)}, h_{j,1}^{(l)}, \ldots, h_{j,n-1}^{(l)}\right).$$
For simplicity of notation, we will use $\rho$ to denote $d_c$. Let $0 \le i_1, i_2, \ldots, i_\rho < n$ be the locations where $h_{j,i_1}^{(l)} = h_{j,i_2}^{(l)} = \cdots = h_{j,i_\rho}^{(l)} = 1$. Then $\mathbf{h}_j^{(l)}$ checks the code bits $c_{i_1}, c_{i_2}, \ldots, c_{i_\rho}$, and the syndrome bit computed from $\mathbf{z}$ and $\mathbf{h}_j^{(l)}$ is
$$s_j^{(l)} = \mathbf{z} \cdot \mathbf{h}_j^{(l)} = z_{i_1} h_{j,i_1}^{(l)} + z_{i_2} h_{j,i_2}^{(l)} + \cdots + z_{i_\rho} h_{j,i_\rho}^{(l)} = e_{i_1} h_{j,i_1}^{(l)} + e_{i_2} h_{j,i_2}^{(l)} + \cdots + e_{i_\rho} h_{j,i_\rho}^{(l)}.$$
We measure the reliability of the received symbol $z_{i_k}$ with $|y_{i_k}|$. For $0 \le l < n$ and $1 \le j \le d_v$, define
$$|y_j^{(l)}|_{\min} = \min\left\{ |y_{i_k}| : 1 \le k \le \rho \right\}. \qquad (4.13)$$
The number $|y_j^{(l)}|_{\min}$ is used as a measure of the reliability of the syndrome bit $s_j^{(l)}$. Define the following sum:
$$E_l = \sum_{s_j^{(l)} \in S_l} \left(2 s_j^{(l)} - 1\right) |y_j^{(l)}|_{\min}, \qquad (4.14)$$
where $S_l$ is the set of $d_v$ syndrome bits $\{s_1^{(l)}, \ldots, s_{d_v}^{(l)}\}$ computed from the rows in $A_l$. Then $E_l$ is simply a weighted check-sum that is orthogonal on the bit position $l$. The weighted check-sums can be incorporated into the BF decoding algorithm to improve the decoding performance. Such a BF decoding is called weighted BF decoding. It is carried out as follows (a sketch in code is given after the list).

1) Compute the parity-check sums based on (4.10). If all the parity-check sums are zero, stop the decoding.
2) Compute the weighted check-sums $E_l$ based on (4.14) for $0 \le l < n$.
3) Find the bit position $l$ for which $E_l$ is the largest.
4) Flip the received bit $z_l$.
5) Repeat steps 1) through 4). This process of bit flipping continues until all the parity-check sums are zero or a preset maximum number of iterations is reached.
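As a concrete illustration, steps 1) through 5) reduce to a few lines of matrix arithmetic. The following is a minimal NumPy sketch, assuming a dense 0/1 array `H`, the BPSK mapping $x_i = (-1)^{c_i}$ (so $y_i < 0$ gives hard decision $z_i = 1$), and a hypothetical function name and iteration cap:

```python
import numpy as np

def weighted_bf_decode(H, y, max_iter=50):
    """Weighted BF decoding sketch; name and interface are assumptions.

    H : (q, n) NumPy array, the 0/1 parity-check matrix.
    y : length-n received BPSK vector, with x_i = (-1)^{c_i}.
    """
    q, n = H.shape
    z = (y < 0).astype(int)            # hard decisions: y_i < 0 -> z_i = 1
    # |y_j|_min of (4.13): the reliability of each syndrome bit is the
    # smallest |y_i| among the bits that row j checks
    y_min = np.array([np.abs(y)[H[j] == 1].min() for j in range(q)])

    for _ in range(max_iter):
        s = H.dot(z) % 2               # step 1: parity-check sums
        if not s.any():                # all check-sums zero: stop
            break
        E = ((2 * s - 1) * y_min).dot(H)   # step 2: weighted sums (4.14)
        l = int(np.argmax(E))          # step 3: position with largest E_l
        z[l] ^= 1                      # step 4: flip z_l; step 5: repeat
    return z
```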
4.8.3.5 The Sum-Product Algorithm for LDPC Decoding

We now present the soft-decision version of the message-passing algorithm, beginning with the introduction of the following notation. Consider an LDPC code of length $n$ given by the null space of a sparse $q \times n$ parity-check matrix $\mathbf{H}$ with rows $\mathbf{h}_1, \mathbf{h}_2, \ldots, \mathbf{h}_q$. In the code's Tanner graph, the local constraints (check nodes) are zero-sum constraints, and the $q$ check-sums actually constitute the syndrome vector. Denote by $s_j$ the check sum of the $j$th check node (computed from the $j$th parity-check equation). Let
$$q_i(b) = P(c_i = b \mid \mathbf{y}, \text{all checks involving } c_i \text{ are satisfied}) = P\left(c_i = b \mid \mathbf{y}, \{s_j = 0,\ j \in \mathcal{N}_v(i)\}\right), \quad b \in \{0, 1\},$$
be the pseudo-posteriori probability that is used to make the decisions about the decoded bits. Let
$$q_{ij}(b) = P\left(c_i = b \mid \mathbf{y}, \{s_{j'} = 0,\ j' \in \mathcal{N}_v(i) \setminus \{j\}\}\right)$$
denote the conditional probability that the transmitted code bit $c_i$ has value $b \in \{0, 1\}$, given the check-sums computed from the parity-check equations corresponding to $\mathcal{N}_v(i) \setminus \{j\}$. This is the message to be passed from the variable node $c_i$ to the check node $s_j$.

Let $M_v(\sim i) = \{$messages from all variable nodes except the node $c_i\}$, and let
$$\sigma_{ji}(b) = P\left(\text{check-sum } s_j = 0 \mid c_i = b, \mathbf{y}, M_v(\sim i)\right) = P\left(\text{check-sum } s_j = 0 \mid c_i = b, \{q_{i'j},\ i' \in \mathcal{N}_c(j) \setminus \{i\}\}\right)$$
denote the conditional probability of the $j$th check equation being satisfied, given that $c_i = b \in \{0, 1\}$ and the other code bits in $\mathcal{N}_c(j) \setminus \{i\}$ have a separable distribution $\{q_{i'j}\}_{i' \ne i}$. This is the message to be passed from the check node $s_j$ to the variable node $c_i$.

At the beginning of iterative decoding, $q_{ij}(b)$ is initialized to the prior probabilities $P(c_i = b)$. Then from the discussion in Section 4.7 (cf. (4.17)), we have the following rule for message-passing in the $l$th iteration.

Update rule for check nodes:
$$\sigma_{ji}^{(l)}(b) = \sum_{\{c_{i'}:\ i' \in \mathcal{N}_c(j) \setminus \{i\}\}} P\left(s_j = 0 \mid c_i = b, \{c_{i'},\ i' \in \mathcal{N}_c(j) \setminus \{i\}\}\right) \prod_{i' \in \mathcal{N}_c(j) \setminus \{i\}} q_{i'j}^{(l)}(c_{i'}) \qquad (4.15)$$

The conditional probabilities in the above summation are either zero or one, depending on whether $\{c_i\} \cup \{c_{i'},\ i' \in \mathcal{N}_c(j) \setminus \{i\}\}$ constitutes a codeword of the $j$th local constraint code. Notice that in the derivation we have used the separable distribution assumption, so the rule is exact for a graph without cycles but approximate for a graph containing cycles.
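Because each local constraint here is a single parity check, the configuration sum in (4.15) need not be enumerated: by Gallager's identity, the probability that independent binary variables have even parity is $\tfrac{1}{2} + \tfrac{1}{2}\prod_{i'}(1 - 2 q_{i'j}(1))$. A minimal sketch of one check-node update along these lines (the function name and array interface are assumptions):

```python
import numpy as np

def check_node_update(q1_in):
    """Sketch of one check-node update (4.15); interface is assumed.

    q1_in : NumPy array of incoming messages q_{i'j}(1) from the
            variable nodes in N_c(j) \ {i}.
    Returns the pair (sigma_ji(0), sigma_ji(1)).
    """
    # Probability that the other bits sum to zero (even parity),
    # by Gallager's identity for independent binary variables.
    p_even = 0.5 + 0.5 * np.prod(1.0 - 2.0 * q1_in)
    return p_even, 1.0 - p_even    # c_i = 0 needs even parity; c_i = 1, odd
```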
The computed values of $\sigma_{ji}^{(l)}(b)$ are then used to update the values of $q_{ij}^{(l+1)}(b)$ as follows.

Update rule for variable nodes:
$$q_{ij}^{(l+1)}(b) = \alpha_{ij}^{(l+1)}\, P(c_i = b \mid y_i) \prod_{j' \in \mathcal{N}_v(i) \setminus \{j\}} \sigma_{j'i}^{(l)}(b) \qquad (4.16)$$
where $\alpha_{ij}^{(l+1)}$ is a scale factor which is chosen such that
$$q_{ij}^{(l+1)}(0) + q_{ij}^{(l+1)}(1) = 1.$$
The derivation of (4.16) is given in the appendix.

[Figure 4.8.7. Flow for message update for variable and check nodes: (a) variable node $c_i$; (b) check node $s_j$.]

At the $l$th iteration step, the pseudo-posteriori probabilities are given by
$$q_i^{(l)}(b) = \alpha_i^{(l)}\, P(c_i = b \mid y_i) \prod_{j \in \mathcal{N}_v(i)} \sigma_{ji}^{(l-1)}(b) \qquad (4.17)$$
where $\alpha_i^{(l)}$ is chosen such that $q_i^{(l)}(0) + q_i^{(l)}(1) = 1$. Based on these probabilities, we can make hard decisions and form the following vector as the decoded candidate,
$$\mathbf{z}^{(l)} = \left(z_0^{(l)}, z_1^{(l)}, \ldots, z_{n-1}^{(l)}\right),$$
with
$$z_i^{(l)} = \begin{cases} 1, & \text{if } q_i^{(l)}(1) > q_i^{(l)}(0), \\ 0, & \text{otherwise.} \end{cases}$$
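The variable-node rule (4.16) and the pseudo-posteriori computation (4.17) differ only in which incoming messages enter the product, so one routine can serve for both. A minimal sketch under that assumption (names and interface are illustrative):

```python
import numpy as np

def variable_node_update(p0, p1, sig0, sig1):
    """Sketch of (4.16)/(4.17); names and interface are assumed.

    p0, p1     : channel probabilities P(c_i = 0 | y_i), P(c_i = 1 | y_i).
    sig0, sig1 : NumPy arrays of incoming sigma_{j'i}(0) and sigma_{j'i}(1),
                 taken over N_v(i) \ {j} for (4.16), or over all of
                 N_v(i) for the pseudo-posteriori (4.17).
    Returns the normalized pair (q(0), q(1)).
    """
    q0 = p0 * np.prod(sig0)            # unnormalized products of messages
    q1 = p1 * np.prod(sig1)
    alpha = 1.0 / (q0 + q1)            # scale factor enforcing q(0)+q(1)=1
    return alpha * q0, alpha * q1
```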
If $\mathbf{z}^{(l)} \mathbf{H}^T = \mathbf{0}$ or the maximum iteration number is reached, then stop the decoding iteration process and output $\mathbf{z}^{(l)}$ as the decoded codeword. Moreover, if $\mathbf{z}^{(l)} \mathbf{H}^T \ne \mathbf{0}$, a decoding error is declared.

For the AWGN channel model $y_i = x_i + n_i$, where $n_i \sim \mathcal{N}(0, \sigma^2)$ and $x_i = (-1)^{c_i}$ is equally likely to be $+1$ or $-1$, it is easy to verify that
$$P(c_i \mid y_i) = P(x_i \mid y_i) = \frac{e^{y_i x_i / \sigma^2}}{e^{y_i / \sigma^2} + e^{-y_i / \sigma^2}} = \frac{1}{1 + e^{-2 y_i x_i / \sigma^2}}.$$
With the assignment $q_{ij}^{(0)}(b) = P(c_i = b \mid y_i)$, the SPA can be summarized as follows.

Initialization: Set $l = 0$, and set the maximum number of iterations to $l_{\max}$. For every $(i, j)$ such that $h_{ji} = 1$, with $1 \le j \le q$ and $0 \le i < n$, set
$$q_{ij}^{(0)}(0) = P(x_i = +1 \mid y_i) = \frac{1}{1 + e^{-2 y_i / \sigma^2}},$$
$$q_{ij}^{(0)}(1) = P(x_i = -1 \mid y_i) = \frac{1}{1 + e^{2 y_i / \sigma^2}}.$$

Do the following for all edges in parallel:
1) Compute the probabilities $\sigma_{ji}^{(l)}(0)$ and $\sigma_{ji}^{(l)}(1)$ according to (4.15).
2) Compute the values $q_{ij}^{(l+1)}(0)$ and $q_{ij}^{(l+1)}(1)$ according to (4.16) with $\alpha_{ij}^{(l+1)} = 1$. Then normalize:
$$q_{ij}^{(l+1)}(0) = \frac{q_{ij}^{(l+1)}(0)}{q_{ij}^{(l+1)}(0) + q_{ij}^{(l+1)}(1)}, \qquad q_{ij}^{(l+1)}(1) = 1 - q_{ij}^{(l+1)}(0).$$
3) Compute the values $q_i^{(l+1)}(0)$ and $q_i^{(l+1)}(1)$ for all $i$. Form the decision vector $\mathbf{z}^{(l+1)}$ and test $\mathbf{z}^{(l+1)} \mathbf{H}^T$. If $\mathbf{z}^{(l+1)} \mathbf{H}^T = \mathbf{0}$ or the maximum iteration number $l_{\max}$ is reached, go to the next step; otherwise, set $l = l + 1$ and go back to 1).

Output $\mathbf{z}^{(l+1)}$ as the decoded codeword and stop the decoding process.

An example: