Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books or CDROMs, visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to directcustserv@cambridge.org (outside North America).

[Figure 13.3.1 appears here: curves labeled C²(measured), N²(extrapolated), and S²(deduced), plotted on a log power scale versus frequency f.]

Figure 13.3.1. Optimal (Wiener) filtering. The power spectrum of signal plus noise shows a signal peak added to a noise tail. The tail is extrapolated back into the signal region as a "noise model." Subtracting gives the "signal model." The models need not be accurate for the method to be useful. A simple algebraic combination of the models gives the optimal filter (see text).

Don't waste your time on this line of thought. The scheme converges to a signal of S(f) = 0. Converging iterative methods do exist; this just isn't one of them.

You can use the routine four1 (§12.2) or realft (§12.3) to FFT your data when you are constructing an optimal filter. To apply the filter to your data, you can use the methods described in §13.1. The specific routine convlv is not needed for optimal filtering, since your filter is constructed in the frequency domain to begin with. If you are also deconvolving your data with a known response function, however, you can modify convlv to multiply by your optimal filter just before it takes the inverse Fourier transform.

CITED REFERENCES AND FURTHER READING:

Rabiner, L.R., and Gold, B. 1975, Theory and Application of Digital Signal Processing (Englewood Cliffs, NJ: Prentice-Hall).

Nussbaumer, H.J. 1982, Fast Fourier Transform and Convolution Algorithms (New York: Springer-Verlag).

Elliott, D.F., and Rao, K.R. 1982, Fast Transforms: Algorithms, Analyses, Applications (New York: Academic Press).

13.4 Power Spectrum Estimation Using the FFT

In the previous section we "informally" estimated the power spectral density of a function c(t) by taking the modulus-squared of the discrete Fourier transform of some
finite, sampled stretch of it. In this section we'll do roughly the same thing, but with considerably greater attention to details. Our attention will uncover some surprises.

The first detail is power spectrum (also called a power spectral density or PSD) normalization. In general there is some relation of proportionality between a measure of the squared amplitude of the function and a measure of the amplitude of the PSD. Unfortunately there are several different conventions for describing the normalization in each domain, and many opportunities for getting wrong the relationship between the two domains. Suppose that our function c(t) is sampled at N points to produce values c0 ... cN−1, and that these points span a range of time T, that is, T = (N − 1)∆, where ∆ is the sampling interval. Then here are several different descriptions of the total power:

$$\sum_{j=0}^{N-1} |c_j|^2 \equiv \text{``sum squared amplitude''} \tag{13.4.1}$$

$$\frac{1}{T}\int_0^T |c(t)|^2\,dt \approx \frac{1}{N}\sum_{j=0}^{N-1} |c_j|^2 \equiv \text{``mean squared amplitude''} \tag{13.4.2}$$

$$\int_0^T |c(t)|^2\,dt \approx \Delta \sum_{j=0}^{N-1} |c_j|^2 \equiv \text{``time-integral squared amplitude''} \tag{13.4.3}$$

PSD estimators, as we shall see, have an even greater variety. In this section, we consider a class of them that give estimates at discrete values of frequency fi, where i will range over integer values.
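In code, the three descriptions of total power differ only by constant factors. A minimal C sketch (the function names are ours, not routines from this book):

```c
/* Eq. (13.4.1): sum squared amplitude of n samples c[0..n-1]. */
double sum_squared(const double c[], int n)
{
    double s = 0.0;
    for (int j = 0; j < n; j++) s += c[j]*c[j];
    return s;
}

/* Eq. (13.4.2): mean squared amplitude, approximating (1/T) times the
   integral of |c(t)|^2 over the sampled stretch. */
double mean_squared(const double c[], int n)
{
    return sum_squared(c, n) / n;
}

/* Eq. (13.4.3): time-integral squared amplitude, approximating the
   integral of |c(t)|^2; delta is the sampling interval. */
double time_integral_squared(const double c[], int n, double delta)
{
    return delta * sum_squared(c, n);
}
```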
In the next section, we will learn about a different class of estimators that produce estimates that are continuous functions of frequency f. Even if it is agreed always to relate the PSD normalization to a particular description of the function normalization (e.g., 13.4.2), there are at least the following possibilities: The PSD is

• defined for discrete positive, zero, and negative frequencies, and its sum over these is the function mean squared amplitude
• defined for zero and discrete positive frequencies only, and its sum over these is the function mean squared amplitude
• defined in the Nyquist interval from −fc to fc, and its integral over this range is the function mean squared amplitude
• defined from 0 to fc, and its integral over this range is the function mean squared amplitude

It never makes sense to integrate the PSD of a sampled function outside the Nyquist interval between −fc and fc since, according to the sampling theorem, power there will have been aliased into the Nyquist interval.

It is hopeless to define enough notation to distinguish all possible combinations of normalizations. In what follows, we use the notation P(f) to mean any of the above PSDs, stating in each instance how the particular P(f) is normalized. Beware the inconsistent notation in the literature.

The method of power spectrum estimation used in the previous section is a simple version of an estimator called, historically, the periodogram. If we take an N-point sample of the function c(t) at equal intervals and use the FFT to compute
its discrete Fourier transform

$$C_k = \sum_{j=0}^{N-1} c_j\, e^{2\pi i jk/N} \qquad k = 0,\ldots,N-1 \tag{13.4.4}$$

then the periodogram estimate of the power spectrum is defined at N/2 + 1 frequencies as

$$
\begin{aligned}
P(0) &= P(f_0) = \frac{1}{N^2}\,|C_0|^2 \\
P(f_k) &= \frac{1}{N^2}\left[|C_k|^2 + |C_{N-k}|^2\right] \qquad k = 1, 2, \ldots, \frac{N}{2}-1 \\
P(f_c) &= P(f_{N/2}) = \frac{1}{N^2}\left|C_{N/2}\right|^2
\end{aligned} \tag{13.4.5}
$$

where fk is defined only for the zero and positive frequencies

$$f_k \equiv \frac{k}{N\Delta} = 2f_c\,\frac{k}{N} \qquad k = 0, 1, \ldots, \frac{N}{2} \tag{13.4.6}$$

By Parseval's theorem, equation (12.1.10), we see immediately that equation (13.4.5) is normalized so that the sum of the N/2 + 1 values of P is equal to the mean squared amplitude of the function cj.

We must now ask this question. In what sense is the periodogram estimate (13.4.5) a "true" estimator of the power spectrum of the underlying function c(t)? You can find the answer treated in considerable detail in the literature cited (see, e.g., [1] for an introduction). Here is a summary.

First, is the expectation value of the periodogram estimate equal to the power spectrum, i.e., is the estimator correct on average? Well, yes and no. We wouldn't really expect one of the P(fk)'s to equal the continuous P(f) at exactly fk, since fk is supposed to be representative of a whole frequency "bin" extending from halfway from the preceding discrete frequency to halfway to the next one.
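For concreteness, the estimator (13.4.5) can be sketched in C. We assume here (a convention of this sketch only, not four1's packed-array format) that the DFT coefficients Ck arrive as separate real and imaginary arrays:

```c
/* Periodogram estimate of eq. (13.4.5). n is the (even) number of
   samples; re[0..n-1] and im[0..n-1] hold the full DFT; p[0..n/2]
   receives the n/2+1 power values. */
void periodogram(const double re[], const double im[], int n, double p[])
{
    double norm = 1.0 / ((double)n * (double)n);
    p[0] = norm * (re[0]*re[0] + im[0]*im[0]);
    for (int k = 1; k < n/2; k++)
        p[k] = norm * (re[k]*re[k] + im[k]*im[k]
                     + re[n-k]*re[n-k] + im[n-k]*im[n-k]);
    p[n/2] = norm * (re[n/2]*re[n/2] + im[n/2]*im[n/2]);
}
```

As a sanity check on the normalization: for constant data cj = 1 with N = 4, the only nonzero coefficient is C0 = 4, and the n/2 + 1 power values sum to 1, the mean squared amplitude, as Parseval's theorem requires.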
We should be expecting the P(fk) to be some kind of average of P(f) over a narrow window function centered on its fk. For the periodogram estimate (13.4.6) that window function, as a function of s the frequency offset in bins, is

$$W(s) = \frac{1}{N^2}\left[\frac{\sin(\pi s)}{\sin(\pi s/N)}\right]^2 \tag{13.4.7}$$

Notice that W(s) has oscillatory lobes but, apart from these, falls off only about as W(s) ≈ (πs)⁻². This is not a very rapid fall-off, and it results in significant leakage (that is the technical term) from one frequency to another in the periodogram estimate. Notice also that W(s) happens to be zero for s equal to a nonzero integer. This means that if the function c(t) is a pure sine wave of frequency exactly equal to one of the fk's, then there will be no leakage to adjacent fk's. But this is not the characteristic case! If the frequency is, say, one-third of the way between two adjacent fk's, then the leakage will extend well beyond those two adjacent bins. The solution to the problem of leakage is called data windowing, and we will discuss it below.
Turn now to another question about the periodogram estimate. What is the variance of that estimate as N goes to infinity? In other words, as we take more sampled points from the original function (either sampling a longer stretch of data at the same sampling rate, or else by resampling the same stretch of data with a faster sampling rate), then how much more accurate do the estimates Pk become? The unpleasant answer is that the periodogram estimates do not become more accurate at all! In fact, the variance of the periodogram estimate at a frequency fk is always equal to the square of its expectation value at that frequency. In other words, the standard deviation is always 100 percent of the value, independent of N! How can this be? Where did all the information go as we added points? It all went into producing estimates at a greater number of discrete frequencies fk. If we sample a longer run of data using the same sampling rate, then the Nyquist critical frequency fc is unchanged, but we now have finer frequency resolution (more fk's) within the Nyquist frequency interval; alternatively, if we sample the same length of data with a finer sampling interval, then our frequency resolution is unchanged, but the Nyquist range now extends up to a higher frequency.
In neither case do the additional samples reduce the variance of any one particular frequency's estimated PSD.

You don't have to live with PSD estimates with 100 percent standard deviations, however. You simply have to know some techniques for reducing the variance of the estimates. Here are two techniques that are very nearly identical mathematically, though different in implementation. The first is to compute a periodogram estimate with finer discrete frequency spacing than you really need, and then to sum the periodogram estimates at K consecutive discrete frequencies to get one "smoother" estimate at the mid frequency of those K. The variance of that summed estimate will be smaller than the estimate itself by a factor of exactly 1/K, i.e., the standard deviation will be smaller than 100 percent by a factor 1/√K. Thus, to estimate the power spectrum at M + 1 discrete frequencies between 0 and fc inclusive, you begin by taking the FFT of 2MK points (which number had better be an integer power of two!). You then take the modulus square of the resulting coefficients, add positive and negative frequency pairs, and divide by (2MK)², all according to equation (13.4.5) with N = 2MK. Finally, you "bin" the results into summed (not averaged) groups of K. This procedure is very easy to program, so we will not bother to give a routine for it. The reason that you sum, rather than average, K consecutive points is so that your final PSD estimate will preserve the normalization property that the sum of its M + 1 values equals the mean square value of the function.

A second technique for estimating the PSD at M + 1 discrete frequencies in the range 0 to fc is to partition the original sampled data into K segments each of 2M consecutive sampled points. Each segment is separately FFT'd to produce a periodogram estimate (equation 13.4.5 with N = 2M). Finally, the K periodogram estimates are averaged at each frequency.
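Both variance-reduction steps amount to a few lines of C. Here is a sketch, under our own naming, of the binning of the first technique (ignoring the fiddly endpoint bins at 0 and fc) and the frequency-by-frequency averaging of the second:

```c
/* First technique, final step: sum (do not average) each group of k
   consecutive periodogram values p[0..m*k-1] into one coarser estimate
   pb[0..m-1].  Summing preserves the property that the estimates add
   up to the mean squared amplitude of the data. */
void bin_psd(const double p[], int m, int k, double pb[])
{
    for (int i = 0; i < m; i++) {
        double s = 0.0;
        for (int j = 0; j < k; j++) s += p[i*k + j];
        pb[i] = s;
    }
}

/* Second technique, final step: average k segment periodograms
   seg[0..k-1], each of length m, at every frequency. */
void avg_psd(const double *seg[], int k, int m, double pavg[])
{
    for (int i = 0; i < m; i++) {
        double s = 0.0;
        for (int j = 0; j < k; j++) s += seg[j][i];
        pavg[i] = s / k;
    }
}
```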
It is this final averaging that reduces the variance of the estimate by a factor K (standard deviation by √K). This second technique is computationally more efficient than the first technique above by a modest factor, since it is logarithmically more efficient to take many shorter FFTs than one longer one. The principal advantage of the second technique, however, is that only 2M data points are manipulated at a single time, not 2KM as in the first technique. This means that the second technique is the natural choice for processing long runs of data, as from a magnetic tape or other data record. We will give a routine later for implementing this second technique, but we need first to return to the matters of
leakage and data windowing which were brought up after equation (13.4.7) above.

Data Windowing

The purpose of data windowing is to modify equation (13.4.7), which expresses the relation between the spectral estimate Pk at a discrete frequency and the actual underlying continuous spectrum P(f) at nearby frequencies. In general, the spectral power in one "bin" k contains leakage from frequency components that are actually s bins away, where s is the independent variable in equation (13.4.7). There is, as we pointed out, quite substantial leakage even from moderately large values of s.

When we select a run of N sampled points for periodogram spectral estimation, we are in effect multiplying an infinite run of sampled data cj by a window function in time, one that is zero except during the total sampling time N∆, and is unity during that time. In other words, the data are windowed by a square window function. By the convolution theorem (12.0.9; but interchanging the roles of f and t), the Fourier transform of the product of the data with this square window function is equal to the convolution of the data's Fourier transform with the window's Fourier transform. In fact, we determined equation (13.4.7) as nothing more than the square of the discrete Fourier transform of the unity window function.
$$W(s) = \frac{1}{N^2}\left[\frac{\sin(\pi s)}{\sin(\pi s/N)}\right]^2 = \frac{1}{N^2}\left|\sum_{k=0}^{N-1} e^{2\pi i sk/N}\right|^2 \tag{13.4.8}$$

The reason for the leakage at large values of s is that the square window function turns on and off so rapidly. Its Fourier transform has substantial components at high frequencies. To remedy this situation, we can multiply the input data cj, j = 0,...,N − 1 by a window function wj that changes more gradually from zero to a maximum and then back to zero as j ranges from 0 to N. In this case, the equations for the periodogram estimator (13.4.4–13.4.5) become

$$D_k \equiv \sum_{j=0}^{N-1} c_j w_j\, e^{2\pi i jk/N} \qquad k = 0,\ldots,N-1 \tag{13.4.9}$$

$$
\begin{aligned}
P(0) &= P(f_0) = \frac{1}{W_{ss}}\,|D_0|^2 \\
P(f_k) &= \frac{1}{W_{ss}}\left[|D_k|^2 + |D_{N-k}|^2\right] \qquad k = 1, 2, \ldots, \frac{N}{2}-1 \\
P(f_c) &= P(f_{N/2}) = \frac{1}{W_{ss}}\left|D_{N/2}\right|^2
\end{aligned} \tag{13.4.10}
$$

where Wss stands for "window squared and summed,"

$$W_{ss} \equiv N \sum_{j=0}^{N-1} w_j^2 \tag{13.4.11}$$
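As a check on the windowed normalization, note that for the square window wj ≡ 1 we get Wss = N·N = N², so (13.4.10) reduces to the unwindowed estimator (13.4.5). A C sketch (the helper names are ours):

```c
/* Wss of eq. (13.4.11) for a window w[0..n-1]. */
double window_squared_summed(const double w[], int n)
{
    double s = 0.0;
    for (int j = 0; j < n; j++) s += w[j]*w[j];
    return (double)n * s;
}

/* Apply the window to the data in place, forming the products c[j]*w[j]
   that feed the FFT of eq. (13.4.9). */
void apply_window(double c[], const double w[], int n)
{
    for (int j = 0; j < n; j++) c[j] *= w[j];
}
```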