Principles of Information Science
Chapter 3: Measures of Information
3-1 Measures of Random Syntactic Information: Shannon Theory of Information

Key points:
1. Information is something that can be used to remove uncertainty.
2. The amount of information can then be measured by the amount of uncertainty it removes (illustrated in the sketch below).
3. In the case of communications, only the waveform is of concern; meaning and value are ignored.
4. Uncertainty, and thus information, is statistical in nature, so statistical mathematics is sufficient.
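A minimal numerical sketch of key point 2, assuming a discrete source and base-2 logarithms (the function and variable names are illustrative, not from the text): the information gained by an observation is the prior uncertainty minus the remaining uncertainty.

```python
import math

def entropy(probs):
    # Shannon entropy, in bits, of a discrete distribution.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Prior: four equally likely messages -> 2 bits of uncertainty.
prior = [1/4, 1/4, 1/4, 1/4]

# After a partial observation, two messages remain equally likely.
posterior = [1/2, 1/2]

# Information gained = uncertainty removed = 2 - 1 = 1 bit.
print(entropy(prior) - entropy(posterior))  # 1.0
```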
Shannon Theorem of Random Entropy

The measure of uncertainty will take the form

    H_S(p_1, \dots, p_N) = -k \sum_{n=1}^{N} p_n \log p_n

if the conditions below are satisfied:
(1) H_S should be a continuous function of p_n for all n;
(2) H_S should be a monotonically increasing function of N when p_n = 1/N for all n;
(3) H_S should observe the rule of stepped weighting summation.
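A sketch of the measure itself, with a numerical check of condition (2), assuming k = 1 and natural logarithms (the helper name H_S is mine): for equiprobable outcomes H_S(1/N, ..., 1/N) = log N, which indeed grows with N.

```python
import math

def H_S(probs, k=1.0):
    # The uncertainty measure -k * sum(p_n * log(p_n)).
    return -k * sum(p * math.log(p) for p in probs if p > 0)

# Condition (2): with p_n = 1/N for all n, H_S increases with N.
values = [H_S([1/N] * N) for N in range(2, 7)]
print(values)  # log 2, log 3, ..., log 6
assert all(a < b for a, b in zip(values, values[1:]))
```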
The rule of stepped weighting summation:

A choice among three outcomes x_1, x_2, x_3 with probabilities p_1 = 1/2, p_2 = 1/3, p_3 = 1/6 can be decomposed into two successive choices: first between x_1 and the pair {x_2, x_3}, each with probability 1/2; then, if the pair is reached, between x_2 and x_3 with conditional probabilities 2/3 and 1/3. The uncertainty must then satisfy

    H_S(1/2, 1/3, 1/6) = H_S(1/2, 1/2) + \frac{1}{2} H_S(2/3, 1/3)

where the weight 1/2 is the probability of reaching the second choice.
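A quick numerical check of the decomposition identity above, reusing the H_S sketch (the base of the logarithm does not affect the identity):

```python
import math

def H_S(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

lhs = H_S([1/2, 1/3, 1/6])
rhs = H_S([1/2, 1/2]) + (1/2) * H_S([2/3, 1/3])
print(lhs, rhs)  # both approximately 1.0114 nats
assert math.isclose(lhs, rhs)
```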
Proof: (a) In the case of equal probabilities

Let

    H_S(1/N, \dots, 1/N) = A(N)

By use of condition (3), it is then easy to obtain

    A(MN) = H_S(1/MN, \dots, 1/MN)
          = H_S(1/M, \dots, 1/M) + \sum_{i=1}^{M} \frac{1}{M} H_S(1/N, \dots, 1/N)
          = A(M) + A(N)

Then A(N^2) = 2 A(N), and in general A(S^a) = a A(S) and A(t^b) = b A(t).

For any given b, it is always possible to find a proper a such that

    S^a \le t^b < S^{a+1}    (*)
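A numerical illustration of this step, assuming A(N) is computed as H_S of the uniform distribution (my own sketch, not part of the original proof): additivity A(MN) = A(M) + A(N) holds, and for any b an exponent a satisfying (*) can be found by taking logarithms.

```python
import math

def A(N):
    # Uncertainty of N equiprobable outcomes: H_S(1/N, ..., 1/N) = log N.
    return -sum((1 / N) * math.log(1 / N) for _ in range(N))

# Additivity: A(MN) = A(M) + A(N).
M, N = 4, 9
assert math.isclose(A(M * N), A(M) + A(N))

# The sandwich (*): with S = 2, t = 10, b = 3, the choice a = 9 works,
# since 2**9 = 512 <= 10**3 = 1000 < 1024 = 2**10.
S, t, b = 2, 10, 3
a = math.floor(b * math.log(t) / math.log(S))
assert S**a <= t**b < S**(a + 1)
```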