Principles of Information Science
Chapter 5: Principles of Information Transferring: Communication Theory
1. Model of Communication System

Source --> [T] --> Channel --> [T^-1] --> Sink
                      ^
                    Noise

X: channel input; Y: channel output; T: transformation.
Source, sink, and channel are given beforehand; the transformation matches the source to the channel.
The Functions of Transformation
- Modulation: spectrum matching, seeking better performance/cost
- Amplification: improving the signal-to-noise ratio
- Equalization: adjusting the channel characteristic
- Source Coding: improving transmission efficiency
- Channel Coding: noise immunity
- Cryptographic Coding: security protection
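The role of channel coding (noise immunity) can be illustrated with the simplest possible scheme, a 3-fold repetition code over a binary symmetric channel. This is a minimal sketch, not from the slides; the crossover probability 0.1 and message length are illustrative assumptions.

```python
import random

def encode(bits, n=3):
    """Repetition channel code: send each bit n times for noise immunity."""
    return [b for b in bits for _ in range(n)]

def bsc(bits, p, rng):
    """Binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def decode(received, n=3):
    """Majority vote over each block of n repeated bits."""
    return [int(sum(received[i:i + n]) > n // 2)
            for i in range(0, len(received), n)]

rng = random.Random(0)
msg = [rng.randint(0, 1) for _ in range(10_000)]
raw = bsc(msg, 0.1, rng)                   # uncoded transmission
cod = decode(bsc(encode(msg), 0.1, rng))   # coded transmission
err_raw = sum(a != b for a, b in zip(msg, raw)) / len(msg)
err_cod = sum(a != b for a, b in zip(msg, cod)) / len(msg)
# Theoretically the coded bit-error rate is 3p^2 - 2p^3, well below the raw rate p.
print(err_raw, err_cod)
```

The improved error rate is bought with a threefold loss of transmission efficiency, which is exactly the trade-off that source coding works in the opposite direction of.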
2. Model Analysis

A radical feature of communication: recovering the sent waveform at the receiving end, with a certain fidelity, in the presence of noise. Ignoring the content and utility factors, the model of communication exhibits statistical properties.

Source entropy: H(X), H(X, Y)
Mutual information: I(X; Y) = H(X) - H(X|Y) = H(Y) - H(Y|X)
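The two forms of the mutual-information identity can be checked numerically on a small joint distribution. The following sketch uses an assumed joint distribution (a binary symmetric channel with crossover probability 0.1 and a uniform source; the numbers are illustrative, not from the slides) and H(X|Y) = H(X, Y) - H(Y).

```python
import math

def entropy(probs):
    """Shannon entropy in bits; zero-probability entries contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Assumed joint distribution p(x, y) over binary X and Y (illustrative values).
joint = [[0.45, 0.05],
         [0.05, 0.45]]

px = [sum(row) for row in joint]            # marginal p(x)
py = [sum(col) for col in zip(*joint)]      # marginal p(y)

H_X = entropy(px)
H_Y = entropy(py)
H_XY = entropy([p for row in joint for p in row])   # joint entropy H(X, Y)

# I(X; Y) = H(X) - H(X|Y) = H(Y) - H(Y|X), with H(X|Y) = H(X, Y) - H(Y).
I_1 = H_X - (H_XY - H_Y)   # H(X) - H(X|Y)
I_2 = H_Y - (H_XY - H_X)   # H(Y) - H(Y|X)
print(I_1, I_2)            # both equal H(X) + H(Y) - H(X, Y)
```

Both expressions reduce to H(X) + H(Y) - H(X, Y), which is why mutual information is symmetric in X and Y.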
Channel Capacity

Define channel capacity:  C = max over {p(x)} of I(X; Y)

Example (AWGN channel):

p(y|x) = (2πσ²)^(-1/2) exp[-(y - x)²/(2σ²)]

I(X; Y) = H(Y) - H(Y|X)
        = H(Y) + ∫∫ p(x) p(y|x) log p(y|x) dy dx
        = H(Y) - (1/2) log(2πσ²e)

The only way to maximize I(X; Y) is to maximize H(Y). This requires Y to be a normal variable with zero mean. Since Y = X + N, it requires X to be a normal variable with zero mean.
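Carrying the argument one step further than the slide: when X is zero-mean Gaussian with power P, Y = X + N has variance P + σ², so H(Y) = (1/2) log 2πe(P + σ²) and the capacity becomes the familiar C = (1/2) log(1 + P/σ²). The sketch below checks this numerically; the power and noise values are illustrative assumptions.

```python
import math

def awgn_capacity_bits(P, sigma2):
    """C = (1/2) log2(1 + P/sigma^2): mutual information with X ~ N(0, P)."""
    return 0.5 * math.log2(1 + P / sigma2)

# With X Gaussian, H(Y) = (1/2) log2(2*pi*e*(P + sigma^2)) bits and
# H(Y|X) = H(N) = (1/2) log2(2*pi*e*sigma^2) bits, so their difference
# collapses to (1/2) log2(1 + P/sigma^2).
P, sigma2 = 4.0, 1.0   # assumed signal power and noise variance
H_Y = 0.5 * math.log2(2 * math.pi * math.e * (P + sigma2))
H_N = 0.5 * math.log2(2 * math.pi * math.e * sigma2)
print(H_Y - H_N, awgn_capacity_bits(P, sigma2))  # both ≈ 1.161 bits
```

The 2πe terms cancel in the difference, which is why capacity depends only on the signal-to-noise ratio P/σ².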