© 2000 by CRC Press LLC J.M. Mendel, “Tutorial on higher-order statistics (spectra) in signal processing and systems theory: Theoretical results and some applications,’’ Proc. IEEE, vol. 79, pp. 278–305, 1991. H.V. Poor, An Introduction to Signal Detection and Estimation, 2nd ed., New York: Springer-Verlag, 1994. H.V. Poor and J. B. Thomas, “Signal detection in dependent non-Gaussian noise,’’ in Advances in Statistical Signal Processing, vol. 2, Signal Detection, H.V. Poor and J.B. Thomas, Eds., Greenwich, Conn.: JAI Press, 1993. J.G. Proakis, Digital Communications, New York: McGraw-Hill, 1983. D.L. Snyder and M.I. Miller, Random Point Processes in Time and Space, New York: Springer-Verlag, 1991. J. Tsitsiklis, “Distributed detection,’’ in Advances in Statistical Signal Processing, vol. 2, Signal Detection, H.V. Poor and J.B. Thomas, Eds., Greenwich, Conn.: JAI Press, 1993. S. Verdú, “Multiuser detection,’’ in Advances in Statistical Signal Processing, vol. 2, Signal Detection, H.V. Poor and J.B. Thomas, Eds., Greenwich, Conn.: JAI Press, 1993. Further Information Except as otherwise noted in the accompanying text, further details on the topics introduced in this section can be found in the textbook: Poor, H.V. An Introduction to Signal Detection and Estimation, 2nd ed., New York: Springer-Verlag, 1994. The bimonthly journal, IEEE Transactions on Information Theory, publishes recent advances in the theory of signal detection. It is available from the Institute of Electrical and Electronics Engineers, Inc., 345 East 47th Street, New York, NY 10017. Papers describing applications of signal detection are published in a number of journals, including the monthly journals IEEE Transactions on Communications, IEEE Transactions on Signal Processing, and the Journal of the Acoustical Society of America. The IEEE journals are available from the IEEE, as above. 
The Journal of the Acoustical Society of America is available from the American Institute of Physics, 335 East 45th Street, New York, NY 10017.

73.2 Noise

Carl G. Looney

Every information signal s(t) is corrupted to some extent by the superimposition of extra-signal fluctuations that assume unpredictable values at each time instant t. Such undesirable signals came to be called noise through early measurements with sensitive audio amplifiers. Noise sources are (1) intrinsic, (2) external, or (3) process induced. Intrinsic noise in conductors comes from thermal agitation of molecularly bound ions and electrons, from microboundaries of impurities and grains with varying potential, and from transistor junction areas that become temporarily depleted of electrons/holes. External electromagnetic interference sources include airport radar, x-rays, power and telephone lines, communications transmissions, gasoline engines and electric motors, computers and other electronic devices; they also include lightning, cosmic rays, plasmas (charged particles) in space, and solar/stellar radiation (conductors act as antennas). Reflective objects and other macroboundaries cause multiple signal paths. Process-induced errors include measurement, quantization, truncation, and signal generation errors. These also corrupt the signal with noise power and loss of resolution.

Statistics of Noise

Statistics allow us to analyze the spectra of noise. We model a noise signal by a random (or stochastic) process N(t), a function whose realized value N(t) = xt at any time instant t is chosen by the outcome of the random variable Nt = N(t). N(t) has a probability distribution for the values x it can assume. Any particular trajectory {(t, xt)} of outcomes is called a realization of the noise process. The first-order statistic of N(t) is the expected value μt = E[N(t)].
The second-order statistic is the autocorrelation function RNN(t, t + τ) = E[N(t)N(t + τ)], where E[·] is the expected value operator. Autocorrelation measures the extent to which noise random variables N1 = N(t1) and N2 = N(t2) at times t1 and t2 depend on each other in an average sense.
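As a concrete illustration (a sketch added here, not part of the original text), the expectation E[N(t)N(t + τ)] can be estimated by a time average over one simulated realization, assuming the process is ws and ergodic; the sample size, seed, and σN = 2 are arbitrary choices, and NumPy is assumed to be available:

```python
import numpy as np

# Illustrative sketch: estimate the autocorrelation R_NN(tau) of a
# zero-mean noise realization from samples (arbitrary parameters).
rng = np.random.default_rng(0)
n = rng.normal(0.0, 2.0, size=200_000)  # sigma_N = 2, so R_NN(0) should be ~4

def autocorr(x, lag):
    # Time-average estimate of E[N(t) N(t + lag)] for a ws, ergodic process.
    if lag == 0:
        return np.mean(x * x)
    return np.mean(x[:-lag] * x[lag:])

print(autocorr(n, 0))   # ~ sigma_N^2 = 4 (the power at offset 0)
print(autocorr(n, 5))   # ~ 0: independent samples are uncorrelated at nonzero offset
```

Here the lag-0 estimate recovers the variance, while nonzero lags are near zero because the simulated samples are independent.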
When the first- and second-order statistics do not change over time, we call the noise a weakly (or wide-sense) stationary process. This means that: (1) E[N(t)] = μt = μ is constant for all t, and (2) RNN(t, t + τ) = E[N(t)N(t + τ)] = E[N(0)N(τ)] = RNN(τ) for all t [see Brown, 1983, p. 82; Gardner, 1990, p. 108; or Peebles, 1987, p. 153 for properties of RNN(τ)]. In this case the autocorrelation function depends only on the offset τ. We assume hereafter that μ = 0 (we can subtract μ, which does not change the autocorrelation). When τ = 0, RNN(0) = E[N(t)N(t + 0)] = E[(N(t))²] = σN², which is the fixed variance of each random variable Nt for all t. Weakly stationary (ws) processes are the most commonly encountered cases and are the ones considered here. Evolutionary processes have statistics that change over time and are difficult to analyze. Figure 73.3 shows a realization of a noise process N(t), where at any particular time t, the probability density function is shown coming out of the page in a third dimension. For a ws noise, the distributions are the same for each t. The most mathematically tractable noises are Gaussian ws processes, where at each time t the probability distribution for the random variable Nt = N(t) is Gaussian (also called normal). The first- and second-order statistics completely determine Gaussian distributions, so ws makes their statistics of all orders stationary over time also. It is well known [see Brown, 1983, p. 39] that linear transformations of Gaussian random variables are also Gaussian random variables. The probability density function for a Gaussian random variable Nt is fN(x) = [1/(2πσN²)^(1/2)] exp[–(x – μN)²/2σN²], which is the familiar bell-shaped curve centered on x = μN. The standard Gaussian probability table [Peebles, 1987, p. 314] is useful, e.g., Pr[–σN < Nt < σN] = 2Pr[0 < Nt < σN] = 0.6827 from the table.
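As a quick check on such table values (an illustrative sketch, not from the original text), standard Gaussian probabilities can be computed directly from the error function, using the identity Pr[–kσ < Nt < kσ] = erf(k/√2):

```python
import math

# Sketch: standard Gaussian probabilities via the error function,
# replacing a table lookup.
def pr_within(k_sigma):
    # Probability that a zero-mean Gaussian falls within +/- k_sigma
    # of its mean: erf(k / sqrt(2)).
    return math.erf(k_sigma / math.sqrt(2.0))

print(round(pr_within(1.0), 4))  # 0.6827  (the one-sigma value quoted above)
print(round(pr_within(2.0), 4))  # 0.9545
```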
Noise Power

The noise signal N(t) represents voltage, so the autocorrelation function at offset 0, RNN(0) = E[N(t)N(t)], represents expected power in volts squared, or watts per ohm. When R = 1 Ω, then N(t)N(t) = N(t)[N(t)/R] = N(t)I(t) volt-amperes = watts (where I(t) is the current in a 1-Ω resistor). The Fourier transform F[RNN(τ)] of the autocorrelation function RNN(τ) is the power spectrum, called the power spectral density function (psdf), SNN(w) in W/(rad/s). Then

SNN(w) = ∫_{–∞}^{∞} RNN(τ) e^(–jwτ) dτ = F[RNN(τ)]
RNN(τ) = (1/2π) ∫_{–∞}^{∞} SNN(w) e^(jwτ) dw = F^(–1)[SNN(w)]      (73.20)

FIGURE 73.3 A noise process.
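The bookkeeping in Eq. (73.20) can be sketched numerically in discrete form (an added illustration, not from the original text): if the periodogram |FFT|²/M stands in for the psdf, then its average equals the sample power, mirroring RNN(0) = (1/2π) ∫ SNN(w) dw. The sample size and seed below are arbitrary:

```python
import numpy as np

# Sketch of the Wiener-Khinchin/Parseval relation in discrete form:
# the mean of the periodogram equals the time-domain mean power.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.5, size=4096)

power_time = np.mean(x * x)              # R_NN(0) estimate from samples
psd = np.abs(np.fft.fft(x))**2 / len(x)  # periodogram (psdf estimate)
power_freq = np.mean(psd)                # discrete analogue of (1/2pi) * integral

print(np.isclose(power_time, power_freq))  # True, by Parseval's theorem
```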
The psdf at frequency f is defined to be the expected power that the voltage N(t), bandlimited to an incremental band df centered at f, would dissipate in a 1-Ω resistance, divided by df. Equations (73.20) are known as the Wiener-Khinchin relations, which establish that SNN(w) and RNN(τ) are a Fourier transform pair for ws random processes [Brown, 1983; Gardner, 1990, p. 230; Peebles, 1987]. The psdf SNN(w) has units of W/(rad/s), whereas the autocorrelation function RNN(τ) has units of watts. When τ = 0 in the second integral of Eq. (73.20), the exponential becomes e^0 = 1, so that RNN(0) (= E[N(t)²] = σN²) is the integral of the psdf SNN(w) over all radian frequencies, –∞ < w < ∞. The rms (root-mean-square) voltage is Nrms = σN (the standard deviation). The power spectrum in W/(rad/s) is a density that is summed up via an integral over the radian frequency band w1 to w2 to obtain the total power over that band.

PNN(w1, w2) = (1/2π) ∫_{w1}^{w2} SNN(w) dw   watts
PNN = σN² = E[N(t)²] = (1/2π) ∫_{–∞}^{∞} SNN(w) dw   watts      (73.21)

The variance σN² = RNN(0) is the mean instantaneous power PNN over all frequencies at any time t.

Effect of Linear Transformations on Autocorrelation and Power Spectral Density

Let h(t) be the impulse response function of a time-invariant linear system L and H(w) = F[h(t)] be its transfer function. Let an input noise signal N(t) have autocorrelation function RNN(τ) and psdf SNN(w). We denote the output noise signal by Y(t) = L[N(t)]. The Fourier transforms Y(w) ≡ F[Y(t)] and N(w) ≡ F[N(t)] do not exist, but they are not needed. The output Y(t) of a linear system is ws whenever the input N(t) is ws [see Gardner, 1990, p. 195; or Peebles, 1987, p. 215]. The output psdf SYY(w) and autocorrelation function RYY(τ) are given by, respectively,

SYY(w) = |H(w)|² SNN(w),   RYY(τ) = F^(–1)[SYY(w)]      (73.22)

[see Gardner, 1990, p. 223]. The output noise power is

PY = σY² = (1/2π) ∫_{–∞}^{∞} SYY(w) dw = (1/2π) ∫_{–∞}^{∞} |H(w)|² SNN(w) dw      (73.23)

White, Gaussian, and Pink Noise Models

White noise [see Brown, 1983; Gardner, 1990, p. 234; or Peebles, 1987] is a theoretical model W(t) of noise that is ws with zero mean. It has a constant power level no over all frequencies (analogous to white light), so its psdf is SWW(w) = no W/(rad/s), –∞ < w < ∞. The inverse Fourier transform of this is the impulse function RWW(τ) = (no)δ(τ), which is zero for all offsets except τ = 0. Therefore, white noise W(t) is a process that is uncorrelated over time, i.e., E[W(t1)W(t2)] = 0 for t1 not equal to t2. Figure 73.4(a) shows the autocorrelation and psdf for white noise, where the offset is s = τ. A Gaussian white noise is white noise such that the probability distribution of each random variable Wt = W(t) is Gaussian. When two Gaussian random variables W1 and W2 are uncorrelated, i.e., E[W1W2] = 0, they are independent [see Gardner, 1990, p. 37]. We use Gaussian models because of the central limit theorem, which states that the sum of a number of random variables is approximately Gaussian. Actual circuits attenuate signals above cut-off frequencies, and also the power must be finite. However, for white noise, PWW = RWW(0) = ∞, so we often truncate the white noise spectral density (psdf) at cut-offs –wc to wc. The result is known as pink noise, P(t), and is usually taken to be Gaussian because linear filtering of any white noise (through the effect of the central limit theorem) tends to make the noise Gaussian [see Gardner,
1990, p. 241]. Figure 73.4(b) shows the sinc function RPP(s) = F^(–1)[SPP(w)] for pink noise. Random variables P1 and P2 at times t1 and t2 are correlated only for t1 and t2 close.

Thermal Noise as Gaussian White Noise

Brown observed in 1828 that pollen and dust particles moved randomly when suspended in liquid. In 1906, Einstein analyzed such motion based on the random walk model. Perrin confirmed in 1908 that the thermal activity of molecules in a liquid caused irregular bombardment of the much larger particles. It was predicted that charges bound to thermally vibrating molecules would generate electromotive force (emf) at the open terminals of a conductor, and that this placed a limit on the sensitivity of galvanometers. Thermal noise (also called Johnson noise) was first observed by J. B. Johnson at Bell Laboratories in 1927. Figure 73.5 displays white noise as seen in the laboratory on an oscilloscope.

FIGURE 73.4 Power transform pairs for white and pink noise.
FIGURE 73.5 Thermal noise in a resistor. (Figure 73.5 not available.)
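The filtering relations of Eqs. (73.22) and (73.23) can be sketched in discrete time (an added illustration, not from the original text): for unit-variance white input, the output power of an FIR filter is Σh², the discrete counterpart of (1/2π) ∫ |H(w)|² SNN(w) dw with SNN = 1. The FIR taps and sample size below are arbitrary illustrative values:

```python
import numpy as np

# Sketch of Eq. (73.22)/(73.23) in discrete form: output power of a
# filtered white sequence equals sum(h^2) when the input has unit variance.
rng = np.random.default_rng(2)
w = rng.normal(0.0, 1.0, size=500_000)   # unit-variance white noise
h = np.array([0.5, 0.3, 0.2])            # example FIR impulse response
y = np.convolve(w, h, mode="valid")      # Y(t) = L[W(t)]

print(np.sum(h**2))   # theoretical output power: 0.38
print(np.var(y))      # measured output power, ~0.38
```

Note that the filtered samples are now correlated over lags up to the filter length, which is exactly the white-to-pink effect described above.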
The voltage N(t) generated thermally between two points in an open-circuit conductor is the sum of an extremely large number of superimposed, independent electronically and ionically induced microvoltages at all frequencies up to fc = 6,000 GHz at room temperature [see Gardner, 1990, p. 235], near infrared. The mean relaxation time of free electrons is 1/fc = 0.5 × 10^–10/T s, so at room temperature of T = 290 K, it is 0.17 ps (1 picosecond = 10^–12 s). The values of N(t) at different times are uncorrelated for time differences (offsets) greater than τc = 1/fc. The expected value of N(t) is zero. The power is fairly constant across a broad spectrum, and we cannot sample signals at picosecond periods, so we model Johnson noise N(t) with Gaussian white noise W(t). Although μ = E[W(t)] = 0, the average power is positive at temperatures above 0 K, and is σW² = RWW(0) [see the right side of Eq. (73.21)]. A disadvantage of the white noise model is its infinite power, i.e., RWW(0) = σW² = ∞, but it is valid over a limited bandwidth of B Hz, in which case its power is finite. In 1927, Nyquist [1928] theoretically derived the thermal noise power in a resistor to be

PWW(B) = 4kTRB (watts)      (73.24)

where R is resistance (ohms), B is the frequency bandwidth of measurement in Hz (all emf fluctuations outside of B are ignored), PWW(B) is the mean power over B (see Eq. 73.21), and Boltzmann's constant is k = 1.38 × 10^–23 J/K [see Ott, 1988; Gardner, 1990, p. 288; or Peebles, 1987, p. 227]. Under external emf, the thermally induced collisions are the main source of resistance in conductors (electrons pulled into motion by an external emf at 0 K meet no resistance). The rms voltage is Wrms = σW = (4kTRB)^(1/2) V over a bandwidth of B Hz. Planck's radiation law is SNN(w) = 2h|f|/[exp(h|f|/kT) – 1], where h = 6.63 × 10^–34 J·s is Planck's constant and f is the frequency [see Gardner, 1990, p. 234].
For |f| much smaller than kT/h = 6.04 × 10^12 Hz ≈ 6,000 GHz, the exponential above can be approximated by exp(h|f|/kT) ≈ 1 + h|f|/kT. The denominator of SNN(w) becomes h|f|/kT, so SNN(w) = (2h|f|)/(h|f|/kT) = 2kT W/Hz in a 1-Ω resistor. Over a resistance of R Ω and a bandwidth of B Hz (positive frequencies), this yields the total power PWW(B) = 2BRSNN(w) = 4kTRB W over the two-sided frequency spectrum. This is Nyquist's result. Thermal noise is the same in a 1000-Ω carbon resistor as it is in a 1000-Ω tantalum thin-film resistor [see Ott, 1988]. While the intrinsic noise may never be less, it may be higher because of other superimposed noise (described in later sections). We model the thermal noise in a resistor by an internal source (generator), as shown in Fig. 73.6. Capacitance cannot be ignored at high f, but pure reactance (C or L) cannot dissipate energy, and so cannot generate thermal noise. The white noise model W(t) for thermal noise N(t) has a constant psdf SWW(w) = no W/(rad/s) for –∞ < w < ∞. By Eq. 73.21, the white noise mean power over the frequency bandwidth B is

PWW(B) = (1/2π) ∫_{–2πB}^{2πB} SWW(w) dw = (1/2π)(no)(4πB) = 2noB      (73.25)

FIGURE 73.6 Thermal noise in a resistor.
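As a hedged numerical example of Eq. (73.24) (the component values below are illustrative choices, not from the text): a 1-kΩ resistor at room temperature, measured over a 10-kHz bandwidth, produces an rms thermal voltage of a few tenths of a microvolt.

```python
import math

# Sketch: Johnson (thermal) noise power from Eq. (73.24), P = 4kTRB,
# with illustrative values for R, T, and B.
k = 1.38e-23      # Boltzmann's constant, J/K
T = 290.0         # room temperature, K
R = 1_000.0       # resistance, ohms (illustrative)
B = 10_000.0      # measurement bandwidth, Hz (illustrative)

p_watts = 4 * k * T * R * B        # mean noise power, Eq. (73.24)
v_rms = math.sqrt(p_watts)         # rms voltage, (4kTRB)^(1/2)

print(f"{v_rms * 1e6:.2f} uV")     # ~0.40 uV
```

Scaling R or B by a factor of 100 scales the rms voltage by only a factor of 10, since the voltage goes as the square root of the power.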