3.4 Codes with Many Codewords – the Union Bhattacharyya Bound and Gallager Bound

3.4.1 Union Bhattacharyya Bound

We now consider generalizing the bound in (3.12) to a code $\mathcal{C} = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_M\}$ with many codewords. Since the complement of $D_m$ is given by $\bigcup_{m' \neq m} D_{m'}$, and noting that the $D_i$, $i = 1, 2, \ldots, M$, are disjoint decision regions, we have (that is, (3.5) can be written directly as (3.15))

$$
P_B(e \mid m) = \Pr\Bigl( \mathbf{y} \in \bigcup_{m' \neq m} D_{m'} \,\Big|\, \mathbf{x}_m \text{ sent} \Bigr)
= \sum_{m' \neq m} \Pr\bigl( \mathbf{y} \in D_{m'} \mid \mathbf{x}_m \text{ sent} \bigr)
= \sum_{\substack{m'=1 \\ m' \neq m}}^{M} \; \sum_{\mathbf{y} \in D_{m'}} P_N(\mathbf{y} \mid \mathbf{x}_m) \qquad (3.15)
$$

Note that, for ML decoding,

$$
\mathbf{y} \in D_{m'} \;\Rightarrow\; P_N(\mathbf{y} \mid \mathbf{x}_{m'}) \geq P_N(\mathbf{y} \mid \mathbf{x}_m).
$$

Since every $\mathbf{y} \in D_{m'}$ can also be put into the decision region $D_2'$ of an ML decoder for the code $\mathcal{C}' = \{\mathbf{x}_1', \mathbf{x}_2'\} = \{\mathbf{x}_m, \mathbf{x}_{m'}\}$ with only two codewords, we see that

$$
\sum_{\mathbf{y} \in D_{m'}} P_N(\mathbf{y} \mid \mathbf{x}_m)
\;\leq\; \sum_{\mathbf{y} \in D_2'} P_N(\mathbf{y} \mid \mathbf{x}_m)
\;=\; \sum_{\mathbf{y} \in D_2'} P_N(\mathbf{y} \mid \mathbf{x}_1')
\;\equiv\; P_B'(e \mid 1) \qquad (3.16)
$$

[Figure 3.2: Illustration of the observation space and decision regions. Left: the decision regions $D_1, D_2, \ldots, D_m, \ldots$ ("say $\mathbf{x}_1$", ..., "say $\mathbf{x}_m$") for $\mathcal{C} = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_M\}$. Right: the two decision regions $D_1'$ ("say $\mathbf{x}_1'$") and $D_2'$ ("say $\mathbf{x}_2'$") for $\mathcal{C}' = \{\mathbf{x}_1', \mathbf{x}_2'\} = \{\mathbf{x}_m, \mathbf{x}_{m'}\}$, assuming $m' = 1$; every $\mathbf{y} \in D_{m'}$ is put into $D_2'$.]

Invoking the Bhattacharyya bound (cf. (3.10)) in (3.16), we have
$$
\sum_{\mathbf{y} \in D_2'} P_N(\mathbf{y} \mid \mathbf{x}_m)
\;\leq\; \sum_{\mathbf{y} \in D_2'} \sqrt{P_N(\mathbf{y} \mid \mathbf{x}_m)\, P_N(\mathbf{y} \mid \mathbf{x}_{m'})}
\;\leq\; \sum_{\mathbf{y}} \sqrt{P_N(\mathbf{y} \mid \mathbf{x}_m)\, P_N(\mathbf{y} \mid \mathbf{x}_{m'})} \qquad (3.17)
$$

Substituting (3.17) into (3.15), we obtain the so-called union Bhattacharyya bound:

$$
P_B(e \mid m) \;\leq\; \sum_{\substack{m'=1 \\ m' \neq m}}^{M} \sum_{\mathbf{y}} \sqrt{P_N(\mathbf{y} \mid \mathbf{x}_m)\, P_N(\mathbf{y} \mid \mathbf{x}_{m'})}, \quad m = 1, 2, \ldots, M \qquad (3.18)
$$

In particular, for memoryless channels,

$$
P_B(e \mid m) \;\leq\; \sum_{\substack{m'=1 \\ m' \neq m}}^{M} \prod_{n=1}^{N} \sum_{y} \sqrt{P(y \mid x_{mn})\, P(y \mid x_{m'n})}, \quad m = 1, 2, \ldots, M \qquad (3.19)
$$
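For a binary symmetric channel with crossover probability $p$, the inner sum in (3.19) equals $2\sqrt{p(1-p)}$ in each position where the two codewords differ and $1$ elsewhere, so each pairwise term reduces to $\gamma^{d(\mathbf{x}_m, \mathbf{x}_{m'})}$ with $\gamma = 2\sqrt{p(1-p)}$. The following minimal Python sketch (the example code, crossover probability, and function names are illustrative assumptions, not from the notes) evaluates (3.19) this way and checks it against the exact ML error probability obtained by enumerating all $\mathbf{y}$:

```python
import itertools
import math

def bsc_likelihood(y, x, p):
    """P_N(y|x) for a memoryless BSC with crossover probability p."""
    d = sum(yi != xi for yi, xi in zip(y, x))  # Hamming distance d(y, x)
    return (p ** d) * ((1 - p) ** (len(y) - d))

def union_bhattacharyya(code, p, m):
    """Union Bhattacharyya bound (3.19) for a BSC: each pairwise term
    is gamma**d(x_m, x_m') with gamma = 2*sqrt(p*(1-p))."""
    gamma = 2.0 * math.sqrt(p * (1.0 - p))  # per-symbol Bhattacharyya coefficient
    bound = 0.0
    for mp, xmp in enumerate(code):
        if mp == m:
            continue
        d = sum(a != b for a, b in zip(code[m], xmp))
        bound += gamma ** d
    return bound

def exact_ml_error(code, p, m):
    """Exact P_B(e|m) by enumerating all y and performing ML decoding
    (ties broken in favour of the lowest message index)."""
    n = len(code[0])
    perr = 0.0
    for y in itertools.product((0, 1), repeat=n):
        likes = [bsc_likelihood(y, x, p) for x in code]
        if likes.index(max(likes)) != m:  # decoded message differs from m
            perr += likes[m]
    return perr

# Hypothetical length-3 example code with M = 4 codewords.
code = [(0, 0, 0), (1, 1, 1), (0, 1, 1), (1, 0, 0)]
p = 0.05
for m in range(len(code)):
    exact = exact_ml_error(code, p, m)
    bound = union_bhattacharyya(code, p, m)
    assert exact <= bound + 1e-12  # (3.19) upper-bounds the ML error probability
    print(f"m={m}: exact P_B(e|m)={exact:.6f} <= bound {bound:.6f}")
```

As expected, the bound is loose but easy to compute: it needs only the pairwise Hamming distances, not the decision regions.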
3.4.2 Gallager Bound

For an ML decoder, we know that

$$
\mathbf{y} \notin D_m \;\Rightarrow\; P_N(\mathbf{y} \mid \mathbf{x}_{m'}) \geq P_N(\mathbf{y} \mid \mathbf{x}_m) \;\text{ for some } m' \neq m.
$$

Then we have

$$
\sum_{\substack{m'=1 \\ m' \neq m}}^{M} \left[ \frac{P_N(\mathbf{y} \mid \mathbf{x}_{m'})}{P_N(\mathbf{y} \mid \mathbf{x}_m)} \right]^s \;\geq\; 1, \quad \text{for any } s > 0,
$$

since at least one term in the summation is itself at least one, so the whole sum is at least one. We can raise both sides to the power of a non-negative parameter $\rho \geq 0$ and preserve the inequality, i.e.,

$$
\mathbf{y} \notin D_m \;\Rightarrow\; \left\{ \sum_{\substack{m'=1 \\ m' \neq m}}^{M} \left[ \frac{P_N(\mathbf{y} \mid \mathbf{x}_{m'})}{P_N(\mathbf{y} \mid \mathbf{x}_m)} \right]^s \right\}^{\rho} \geq 1, \quad \text{for any } s > 0 \text{ and any } \rho \geq 0 \qquad (3.20)
$$

Note that (3.20) can be written as

$$
P_N(\mathbf{y} \mid \mathbf{x}_m)^{-s\rho} \left[ \sum_{\substack{m'=1 \\ m' \neq m}}^{M} P_N(\mathbf{y} \mid \mathbf{x}_{m'})^s \right]^{\rho} \geq 1, \quad \text{any } s > 0 \text{ and any } \rho \geq 0 \qquad (3.21)
$$

Since the left-hand side of (3.21) is at least one for every $\mathbf{y} \notin D_m$, multiplying each term of the summation in (3.5) by it can only increase the sum, so the error probability is upper-bounded by

$$
P_B(e \mid m) \;\leq\; \sum_{\mathbf{y} \notin D_m} P_N(\mathbf{y} \mid \mathbf{x}_m)^{1 - s\rho} \left[ \sum_{\substack{m'=1 \\ m' \neq m}}^{M} P_N(\mathbf{y} \mid \mathbf{x}_{m'})^s \right]^{\rho} \qquad (3.22)
$$

Letting $s = 1/(1+\rho)$ and extending the summation in (3.22) to all $\mathbf{y}$, we obtain the so-called Gallager bound.

Gallager Bound: When the code $\mathcal{C} = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_M\}$ of length $N$ is decoded by an ML decoder, then