7.1.4 Parameter Estimation using the 1st-order GAM Model

In this section, we introduce the 1st-order GAM model and derive estimators of the model parameters using the standard deterministic and stochastic ML methods, as well as a novel MUSIC algorithm. An AS estimator based on these parameter estimators is also proposed.

The 1st-order GAM Model

The GAM model Asztély et al. (1997) makes use of the fact that the deviations $\tilde{\phi}_{d,\ell}$ are small with high probability. In the 1st-order GAM model, the first-order Taylor series expansion of the array response is used to approximate the effective impact of the SDSs on the received signal. We regard a distributed scatterer as an SDS when its contribution to the output signal of the Rx array is closely approximated by the 1st-order GAM model. A measure of the fit between the approximation model and the effective signal model is provided in Section 7.1.5.

We first consider a single-SDS scenario. The function $c(\bar{\phi}+\tilde{\phi}_{\ell})$ in (7.1) can be approximated by its first-order Taylor series expansion at $\bar{\phi}$. Inserting this approximation for each $c(\bar{\phi}+\tilde{\phi}_{\ell})$ in (7.1) yields the 1st-order GAM model Asztély et al. (1997)

$$ y(t) \approx y_{\mathrm{GAM}}(t) \doteq \sum_{\ell=1}^{L} a_{\ell}(t)\bigl[c(\bar{\phi}) + \tilde{\phi}_{\ell}\,c'(\bar{\phi})\bigr] + w(t) = \alpha(t)\,c(\bar{\phi}) + \beta(t)\,c'(\bar{\phi}) + w(t), \qquad (7.9) $$

where $c'(\bar{\phi}) \doteq \left.\frac{\mathrm{d}c(\phi)}{\mathrm{d}\phi}\right|_{\phi=\bar{\phi}}$. In matrix notation, (7.9) reads

$$ y_{\mathrm{GAM}}(t) = F(\bar{\phi})\,\xi(t) + w(t) \qquad (7.10) $$

with $F(\bar{\phi}) \doteq [c(\bar{\phi})\ \ c'(\bar{\phi})]$ and $\xi(t) \doteq [\alpha(t), \beta(t)]^{\mathrm{T}}$.

The autocorrelation functions of $\alpha(t)$ and $\beta(t)$ are calculated to be, respectively,

$$ R_{\alpha}(\tau) \doteq \mathrm{E}[\alpha(t)\alpha^{*}(t+\tau)] = \sum_{\ell=1}^{L} R_{a_{\ell}}(\tau) \quad\text{and}\quad R_{\beta}(\tau) \doteq \mathrm{E}[\beta(t)\beta^{*}(t+\tau)] = \sigma_{\tilde{\phi}}^{2}\, R_{\alpha}(\tau), \qquad (7.11) $$

where $\sigma_{\tilde{\phi}}^{2} \doteq \mathrm{E}[\tilde{\phi}_{\ell}^{2}]$. Note that $\mathrm{E}[\tilde{\phi}_{\ell}] = 0$ according to Assumption 1). The parameter $\sigma_{\tilde{\phi}}^{2}$ is the second central moment of the azimuth deviation. Denoting the variances of $\alpha(t)$ and $\beta(t)$ by $\sigma_{\alpha}^{2}$ and $\sigma_{\beta}^{2}$ respectively, we conclude from (7.11) that

$$ \sigma_{\beta}^{2} = \sigma_{\tilde{\phi}}^{2}\cdot\sigma_{\alpha}^{2}. \qquad (7.12) $$

This equality can also be obtained using the results given in (Shahbazpanahi et al. 2001, (49)–(51)). We refer to the parameter $\sigma_{\tilde{\phi}}$ as the AS of the SDS. Note that, as shown in Fleury (2000), the natural figure for characterizing direction dispersion is the direction spread. However, in a scenario with horizontal-only propagation and small azimuth deviations, the direction spread can be approximated by $\sigma_{\tilde{\phi}}$ expressed in radians Fleury (2000). For example, in the case where the azimuth power spectrum of an SDS is proportional to the von-Mises probability density function Mardia (1975)

$$ f_{\tilde{\phi}_{\ell}}(\phi) = \frac{1}{2\pi I_{0}(\kappa)}\exp\{\kappa\cos(\phi-\bar{\phi})\}, \qquad (7.13) $$

where $\kappa$ denotes the concentration parameter and $I_{0}(\cdot)$ represents the modified Bessel function of the first kind and order 0, the approximation is close provided $\kappa \geq 7$, i.e. $\sigma_{\tilde{\phi}} \leq 10^{\circ}$ Fleury (2000).

In a scenario with D SDSs, (7.9) extends to

$$ y(t) \approx y_{\mathrm{GAM}}(t) \doteq \sum_{d=1}^{D}\bigl[\alpha_{d}(t)\,c(\bar{\phi}_{d}) + \beta_{d}(t)\,c'(\bar{\phi}_{d})\bigr] + w(t) = B(\bar{\phi})\,\gamma(t) + w(t), \qquad (7.14) $$

where $B(\bar{\phi}) \doteq [c(\bar{\phi}_{1}),\, c'(\bar{\phi}_{1}),\, \ldots,\, c(\bar{\phi}_{D}),\, c'(\bar{\phi}_{D})]$ and $\gamma(t) \doteq [\alpha_{1}(t), \beta_{1}(t), \ldots, \alpha_{D}(t), \beta_{D}(t)]^{\mathrm{T}}$. Under Assumptions 3)–5) in Subsection 7.1.2, the elements of the vector $\gamma(t)$ are uncorrelated.
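To make the construction of the 1st-order GAM model concrete, the following Python sketch builds the pair $[c(\bar{\phi})\ c'(\bar{\phi})]$ of (7.10) and simulates snapshots according to (7.9). It is an illustration only: the half-wavelength uniform linear array, the Gaussian azimuth deviations, the numerical values and the helper names (ula_steering, gam_basis, etc.) are assumptions made for this example and are not prescribed by the text.

```python
import numpy as np

def ula_steering(phi, M, d=0.5):
    """Steering vector c(phi) of an M-element uniform linear array
    with element spacing d in wavelengths; phi is the azimuth in radians."""
    m = np.arange(M)
    return np.exp(1j * 2 * np.pi * d * m * np.sin(phi))

def ula_steering_derivative(phi, M, d=0.5):
    """First derivative c'(phi) of the ULA steering vector with respect to phi."""
    m = np.arange(M)
    return 1j * 2 * np.pi * d * m * np.cos(phi) * ula_steering(phi, M, d)

def gam_basis(phi, M, d=0.5):
    """F(phi) = [c(phi) c'(phi)] as in (7.10); an M x 2 matrix."""
    return np.column_stack([ula_steering(phi, M, d),
                            ula_steering_derivative(phi, M, d)])

# Simulate N snapshots of a single SDS according to (7.9):
#   y(t) = sum_l a_l(t) [ c(phi_bar) + phi_tilde_l * c'(phi_bar) ] + w(t)
rng = np.random.default_rng(0)
M, L, N = 8, 50, 200                                    # sensors, sub-paths, snapshots
phi_bar = np.deg2rad(20.0)                              # nominal azimuth (assumed)
sigma_phi = np.deg2rad(3.0)                             # azimuth spread (assumed)
phi_tilde = rng.normal(0.0, sigma_phi, L)               # small azimuth deviations
a = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2 * L)
c = ula_steering(phi_bar, M)
dc = ula_steering_derivative(phi_bar, M)
w = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
Y = (c[:, None] + np.outer(dc, phi_tilde)) @ a + w      # GAM-approximated array output
Sigma_y = Y @ Y.conj().T / N                            # sample covariance matrix
```

The simulated snapshots Y and the sample covariance Sigma_y generated here are reused in the estimator sketches that follow.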
NA Estimators

In this subsection, the standard deterministic and stochastic ML estimation methods, as well as a novel MUSIC algorithm, are applied to the 1st-order GAM model to derive estimators of the NAs of SDSs.

Deterministic ML (DML) NA Estimator

The DML NA estimator based on the 1st-order GAM model can be derived similarly to the SS-ML azimuth estimator (7.4). Assuming that the weight samples $\alpha_{d}(t)$ and $\beta_{d}(t)$, $t = t_{1},\ldots,t_{N}$, $d = 1,\ldots,D$, in (7.14) are deterministic, the ML estimator of $\bar{\phi}$ is calculated as Krim and Viberg (1996)

$$ \hat{\bar{\phi}}_{\mathrm{DML}} = \arg\max_{\bar{\phi}}\ \mathrm{tr}\bigl[\Pi_{B(\bar{\phi})}\hat{\Sigma}_{y}\bigr]. \qquad (7.15) $$

The parameters $\gamma(t)$, $t = t_{1},\ldots,t_{N}$, are estimated as

$$ \widehat{\gamma(t)}_{\mathrm{DML}} = B(\hat{\bar{\phi}})^{\dagger}\,y(t), \quad t = t_{1},\ldots,t_{N}. \qquad (7.16) $$

Stochastic ML (SML) NA Estimator

The SML azimuth estimator derived based on the SS model was introduced in Jaffer (1988). We obtain the SML NA estimator based on the 1st-order GAM model in a similar manner. Making use of Assumptions 1)–5) in Section 7.1.2 and invoking the central limit theorem, the weight samples $\alpha_{d}(t)$ and $\beta_{d}(t)$, $t = t_{1},\ldots,t_{N}$, $d = 1,\ldots,D$, are uncorrelated complex circularly-symmetric Gaussian random processes with variances $\sigma_{\alpha_{d}}^{2}$ and $\sigma_{\beta_{d}}^{2}$ respectively. Let $\Omega$ be the vector containing the parameters to be estimated:

$$ \Omega \doteq [\sigma_{w}^{2},\ \bar{\phi}_{d},\ \sigma_{\alpha_{d}}^{2},\ \sigma_{\beta_{d}}^{2};\ d = 1,\ldots,D]. \qquad (7.17) $$

The ML estimator of $\Omega$ is a solution to the maximization problem Krim and Viberg (1996)

$$ \hat{\Omega}_{\mathrm{SML}} = \arg\max_{\Omega}\ \bigl\{-\ln\bigl[|\Sigma_{y_{\mathrm{GAM}}}|\bigr] - \mathrm{tr}\bigl[(\Sigma_{y_{\mathrm{GAM}}})^{-1}\hat{\Sigma}_{y}\bigr]\bigr\}, \qquad (7.18) $$

where the covariance matrix $\Sigma_{y_{\mathrm{GAM}}}$ of $y_{\mathrm{GAM}}(t)$ in (7.14) reads

$$ \Sigma_{y_{\mathrm{GAM}}} = B(\bar{\phi})\,R_{\gamma}\,B(\bar{\phi})^{\mathrm{H}} + \sigma_{w}^{2} I_{M}. \qquad (7.19) $$

Here, $I_{M}$ denotes the $M \times M$ identity matrix and $R_{\gamma} = \mathrm{diag}(\sigma_{\alpha_{1}}^{2}, \sigma_{\beta_{1}}^{2}, \ldots, \sigma_{\alpha_{D}}^{2}, \sigma_{\beta_{D}}^{2})$ is the covariance matrix of $\gamma(t)$, with $\mathrm{diag}(\cdot)$ denoting a diagonal matrix with the diagonal elements listed as argument.

The maximization operations in (7.15) and (7.18) require a D-dimensional and a (3D + 1)-dimensional search, respectively. The high computational complexity of these search procedures prohibits the implementation of $\hat{\bar{\phi}}_{\mathrm{DML}}$ and $\hat{\Omega}_{\mathrm{SML}}$ in real applications. As an alternative, the SAGE algorithm Fleury et al. (1999); Yin and Fleury (2005) provides a low-complexity approximation of these ML estimators.

MUSIC NA Estimator

The standard MUSIC algorithm Schmidt (1986), derived based on the SS model (7.3), uses the pseudo-spectrum

$$ f_{\mathrm{MUSIC}}(\phi) = \frac{\|c(\phi)\|_{\mathrm{F}}^{2}}{\|c(\phi)^{\mathrm{H}} E_{w}\|_{\mathrm{F}}^{2}}. \qquad (7.20) $$

Here, $\|\cdot\|_{\mathrm{F}}$ denotes the Frobenius norm and $E_{w}$ is an orthonormal basis of the estimated noise subspace calculated from $\hat{\Sigma}_{y}$. The azimuths of the D scatterers are estimated to be the arguments of the pseudo-spectrum corresponding to its D highest peaks.
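As a minimal illustration of the DML estimator (7.15)–(7.16) in a single-SDS scenario (D = 1, so that $B(\bar{\phi}) = F(\bar{\phi})$), the sketch below scans a grid of candidate azimuths and maximizes $\mathrm{tr}[\Pi_{B(\phi)}\hat{\Sigma}_{y}]$. It reuses gam_basis, Y, Sigma_y and M from the simulation sketch above; the grid range and resolution are arbitrary choices for this example.

```python
import numpy as np

def dml_na_estimate(Sigma_y, M, phi_grid, d=0.5):
    """Single-SDS DML NA estimate (7.15): maximize tr[Pi_B(phi) Sigma_y]
    over a grid of candidate azimuths, with B(phi) = F(phi) = [c(phi) c'(phi)]."""
    costs = np.empty(len(phi_grid))
    for i, phi in enumerate(phi_grid):
        B = gam_basis(phi, M, d)                 # M x 2 basis [c(phi) c'(phi)]
        Pi = B @ np.linalg.pinv(B)               # projector onto span{c, c'}
        costs[i] = np.real(np.trace(Pi @ Sigma_y))
    return phi_grid[int(np.argmax(costs))]

phi_grid = np.deg2rad(np.linspace(-60.0, 60.0, 721))     # 1/6-degree grid (assumed)
phi_hat = dml_na_estimate(Sigma_y, M, phi_grid)
# Weight estimates via (7.16): gamma_hat(t) = B(phi_hat)^dagger y(t)
gamma_hat = np.linalg.pinv(gam_basis(phi_hat, M)) @ Y    # 2 x N: rows alpha_hat, beta_hat
```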
We propose a natural extension of the standard MUSIC algorithm for the estimation of the NAs of SDSs based on the 1st-order GAM model. The extension considers the following generalization of the pseudo-spectrum in (7.20):

$$ f_{\mathrm{MUSIC}}(\phi) = \frac{1}{\|\tilde{F}(\phi)^{\mathrm{H}} E_{w}\|_{\mathrm{F}}^{2}}. \qquad (7.21) $$

In the right-hand side of (7.21), $\tilde{F}(\phi)$ is an orthonormal basis of the space spanned by the columns of $F(\phi)$. The NAs of the D SDSs are estimated to be the arguments of the pseudo-spectrum corresponding to its D highest peaks.

Both the standard MUSIC algorithm and the proposed extension rely on the same principle, i.e. parameter estimates are obtained by minimizing the distance between the subspace spanned by the signal originating from a single scatterer and an estimate of this subspace computed from the sample covariance matrix. In the SS case, the signal subspace induced by an SS is spanned by the steering vector $c(\phi)$, while in the SDS scenario the subspace induced by an SDS is spanned by the columns of $F(\phi)$. In that sense, the latter algorithm is a natural extension of the former one. Following (Edelman et al. 1998, p. 337), the distance between the subspace spanned by the columns of $F(\phi)$ and the estimated signal subspace coincides with the Frobenius norm of the difference between the projection matrices of the two subspaces. It can be shown that this distance is proportional to the Frobenius norm of the projection of one subspace onto the null space of the other subspace, i.e. $\|\tilde{F}(\phi)^{\mathrm{H}} E_{w}\|_{\mathrm{F}}^{2}$ in our case. Thus, the inverse of the pseudo-spectrum (7.21) provides a measure of the distance between the signal subspace spanned by the columns of $F(\phi)$ and the estimated signal subspace. A thorough discussion of the relationships between this extended MUSIC algorithm and other previously published extensions of the standard MUSIC algorithm is given in Subsection 7.1.4.

Generalization of the Proposed MUSIC Algorithm and its Relation to other Extensions of the Standard MUSIC Algorithm

The proposed MUSIC algorithm, which makes use of the pseudo-spectrum (7.21), can be generalized to the scenario where the signals contributed by one component (e.g. a scatterer) span a subspace of arbitrary dimension. In this case, $\tilde{F}(\phi)$ is an orthonormal basis of the signal subspace. The argument $\phi$ of $\tilde{F}(\phi)$ may also be multi-dimensional, and it is not required that a closed-form expression exists which relates $\tilde{F}(\phi)$ to $\phi$. For instance, in the case where the azimuth dispersion of an SDS is characterized using a pdf, $\tilde{F}(\phi)$ can be obtained by the eigenvalue decomposition of the covariance matrix calculated using this pdf.

We now propose an alternative interpretation of the proposed MUSIC algorithm using the concept of principal angles between subspaces Golub and Loan (1996), which allows for a comparison with the variant of the MUSIC algorithm published in Christensen et al. (2004). As shown in (Edelman et al. 1998, p. 337), minimizing the distance between two subspaces is equivalent to minimizing the norm of $\sin(\theta)$, where $\theta$ represents the vector containing all principal angles between these two subspaces and $\sin(\cdot)$ is the operator computing the element-wise sine of $\theta$. Thus, in our case the NA estimates obtained by maximizing the pseudo-spectrum (7.21) in fact minimize $\|\sin(\theta)\|$, where the components of $\theta$ are the principal angles between the subspace spanned by the columns of $F(\phi)$ and the signal subspace estimated from the sample covariance matrix. This is a reasonable approach in the ID case, where the dimension of the signal subspace induced by an SDS is larger than 1.
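A possible implementation of the extended pseudo-spectrum (7.21) for the single-SDS case is sketched below. The orthonormal basis $\tilde{F}(\phi)$ is obtained from a QR factorization of $F(\phi)$, and the noise subspace $E_{w}$ is taken as the eigenvectors of $\hat{\Sigma}_{y}$ beyond the 2D dominant ones; these choices, and the reuse of gam_basis, Sigma_y and phi_grid from the earlier sketches, are assumptions made for this illustration.

```python
import numpy as np

def gam_music_spectrum(Sigma_y, M, phi_grid, n_sds=1, d=0.5):
    """Pseudo-spectrum (7.21): 1 / ||F_tilde(phi)^H E_w||_F^2, where F_tilde(phi)
    is an orthonormal basis of span{c(phi), c'(phi)} obtained by QR factorization."""
    # Noise subspace: eigenvectors beyond the 2*n_sds dominant ones
    # (each SDS induces an approximately two-dimensional signal subspace).
    _, eigvec = np.linalg.eigh(Sigma_y)                  # eigenvalues in ascending order
    E_w = eigvec[:, : M - 2 * n_sds]
    spectrum = np.empty(len(phi_grid))
    for i, phi in enumerate(phi_grid):
        F_tilde, _ = np.linalg.qr(gam_basis(phi, M, d))
        spectrum[i] = 1.0 / np.linalg.norm(F_tilde.conj().T @ E_w, 'fro') ** 2
    return spectrum

spectrum = gam_music_spectrum(Sigma_y, M, phi_grid)
phi_hat_music = phi_grid[int(np.argmax(spectrum))]       # single SDS: take the highest peak
```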
By contrast, the variant of the MUSIC algorithm proposed in Asztély et al. (1997) computes the NA estimates by maximizing the smallest principal angle between the 2-dimensional subspace spanned by the columns of $F(\phi)$ and the estimated signal subspace. This maximization is indeed equivalent to the maximization of the objective function $\lambda_{\min}^{-1}\bigl(F(\phi)^{\mathrm{H}} E_{w} E_{w}^{\mathrm{H}} F(\phi)\bigr)$, with $\lambda_{\min}(\cdot)$ denoting the smallest eigenvalue of the matrix given as argument, described in Asztély et al. (1997) to compute the NA estimates Drmac (2000). The resulting algorithm is applicable when the dimension of the subspace effectively induced by an SDS is equal to one, e.g. in the CD case for which the algorithm was initially designed.

The pseudo-spectrum (7.21) can be recast as

$$ f_{\mathrm{MUSIC}}(\phi) = \frac{1}{\mathrm{tr}\bigl\{E_{w}^{\mathrm{H}} F(\phi)\,W(\phi)\,F(\phi)^{\mathrm{H}} E_{w}\bigr\}}, \qquad (7.22) $$

where $W(\phi)$ is an azimuth-dependent weighting matrix defined as

$$ W(\phi) \doteq F(\phi)^{\dagger}\,\tilde{F}(\phi)\,\tilde{F}(\phi)^{\mathrm{H}}\,\bigl(F(\phi)^{\dagger}\bigr)^{\mathrm{H}}. \qquad (7.23) $$
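The recast form (7.22)–(7.23) can be checked numerically against (7.21). The short sketch below, which reuses gam_basis and gam_music_spectrum from the previous sketches together with the simulated Sigma_y, evaluates both expressions at one test azimuth; under the stated assumptions the two values agree up to numerical precision.

```python
import numpy as np

def weighted_form_spectrum(Sigma_y, M, phi, n_sds=1, d=0.5):
    """Evaluate the recast pseudo-spectrum (7.22) with the weighting matrix W(phi)
    of (7.23); the result coincides with the value of (7.21) at phi."""
    _, eigvec = np.linalg.eigh(Sigma_y)
    E_w = eigvec[:, : M - 2 * n_sds]                     # noise-subspace basis
    F = gam_basis(phi, M, d)
    F_tilde, _ = np.linalg.qr(F)                         # orthonormal basis of span F(phi)
    F_pinv = np.linalg.pinv(F)
    W = F_pinv @ F_tilde @ F_tilde.conj().T @ F_pinv.conj().T       # (7.23)
    denom = np.trace(E_w.conj().T @ F @ W @ F.conj().T @ E_w)       # denominator of (7.22)
    return 1.0 / np.real(denom)

phi_test = np.deg2rad(20.0)
print(weighted_form_spectrum(Sigma_y, M, phi_test))
print(gam_music_spectrum(Sigma_y, M, np.array([phi_test]))[0])      # same value via (7.21)
```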
At first glance the representation in (7.22) seems to be similar to the pseudo-spectrum (Krim and Viberg 1996, Eq. (37)) of the weighted MUSIC algorithm. However, the proposed MUSIC algorithm and the standard weighted MUSIC algorithm have fundamental differences. First of all, it is impossible to recast the pseudo-spectrum (7.22) in exactly the same form as the pseudo-spectrum of the weighted MUSIC algorithm. More specifically, the weighting matrix in the weighted MUSIC algorithm is inserted between $E_{w}$ and $E_{w}^{\mathrm{H}}$, while the weighting matrix is placed between $F(\phi)$ and $F(\phi)^{\mathrm{H}}$ in the proposed MUSIC algorithm. Furthermore, the criteria for the selection of the weighting matrices are fundamentally different. In the standard weighted MUSIC algorithm Krim and Viberg (1996), the weighting matrix is computed from the eigenvalues and eigenvectors of the sample covariance matrix $\hat{\Sigma}_{y}$ and is constant. By contrast, the weighting matrix in (7.22) is explicitly computed as a function of $\tilde{F}(\phi)$ and, as a consequence, depends on the parameter to be estimated.

The pseudo-spectrum (7.21) looks similar to the objective functions maximized in the pseudo-subspace fitting (PSF) method (Bengtsson 1999, Subsection 4.5.1). However, an essential difference between this method and the proposed MUSIC algorithm is that the latter computes the NA estimates by "scanning" a measure of the distance between a multi-dimensional subspace (induced by a single SDS in our case) and the estimated signal subspace, while in the PSF method the NA estimates are the values providing the best "fit" between the estimated signal subspace and the subspace spanned by all signals. Thus, a one-dimensional search is required in the proposed MUSIC algorithm, while the PSF method requires a multi-dimensional search. Only in a single-SDS scenario is the objective function maximized in the PSF method identical to the pseudo-spectrum (7.21) calculated in the proposed MUSIC algorithm.

In Christensen et al. (2004) another extension of the standard MUSIC algorithm is proposed, which considers the projection of the columns of $F(\phi)$ onto $E_{w}$. This method relies on the pseudo-spectrum

$$ \frac{\|F(\phi)\|_{\mathrm{F}}^{2}}{\|F(\phi)^{\mathrm{H}} E_{w}\|_{\mathrm{F}}^{2}}. \qquad (7.24) $$

The inverse of the pseudo-spectrum (7.24) corresponds to the distance between the multi-dimensional subspace spanned by the columns of $F(\phi)$ and the signal subspace if, and only if, the columns of $F(\phi)$ are orthonormal for any value of $\phi$. This condition is usually not satisfied in real applications. Simulation results also show that the NA estimator derived from (7.21) outperforms the estimator obtained from (7.24) in terms of lower root mean square estimation error.

To the best of our knowledge, the proposed extension of the MUSIC algorithm according to (7.21) has not been reported in any published work yet. This algorithm is indeed the natural extension of the standard MUSIC algorithm to the case where the subspace induced by each individual signal component is multi-dimensional. One application example is the ID case in the SDS scenario considered in this contribution. Another example is fundamental frequency estimation for signals with a harmonic structure Christensen et al. (2004).

AS Estimator

Identity (7.12) inspires the following estimator of the AS of an SDS:

$$ \hat{\sigma}_{\tilde{\phi}} = \sqrt{\hat{\sigma}_{\beta}^{2}\,/\,\hat{\sigma}_{\alpha}^{2}}. \qquad (7.25) $$

The estimates $\hat{\sigma}_{\beta}^{2}$ and $\hat{\sigma}_{\alpha}^{2}$ can be directly obtained for each of the D SDSs from (7.18) when the SML estimators are used, or computed as

$$ \hat{\sigma}_{\beta}^{2} = \frac{1}{N}\sum_{t=t_{1}}^{t_{N}} \bigl|\hat{\beta}(t) - \langle\hat{\beta}(t)\rangle\bigr|^{2} \quad\text{and}\quad \hat{\sigma}_{\alpha}^{2} = \frac{1}{N}\sum_{t=t_{1}}^{t_{N}} \bigl|\hat{\alpha}(t) - \langle\hat{\alpha}(t)\rangle\bigr|^{2} \qquad (7.26) $$

when DML estimation is used. In (7.26), $\hat{\beta}(t)$ and $\hat{\alpha}(t)$, $t = t_{1},\ldots,t_{N}$, are calculated from (7.16) and $\langle\cdot\rangle$ denotes averaging.
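Continuing the single-SDS example, the AS(DML) estimate follows directly from (7.25)–(7.26) applied to the weight estimates $\widehat{\gamma(t)}$ computed in the DML sketch above; the expected value (roughly the simulated 3-degree AS) is specific to that illustrative setup.

```python
import numpy as np

# AS(DML) estimate via (7.25)-(7.26), using the weight estimates from the DML sketch:
# gamma_hat row 0 holds alpha_hat(t), row 1 holds beta_hat(t).
alpha_hat, beta_hat = gamma_hat[0], gamma_hat[1]
sigma2_alpha_hat = np.mean(np.abs(alpha_hat - alpha_hat.mean()) ** 2)    # (7.26)
sigma2_beta_hat = np.mean(np.abs(beta_hat - beta_hat.mean()) ** 2)       # (7.26)
as_dml = np.sqrt(sigma2_beta_hat / sigma2_alpha_hat)                     # (7.25), in radians
print(np.rad2deg(as_dml))        # roughly the 3-degree AS used in the simulation
```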
In the case where the proposed MUSIC algorithm (7.21) is applied, $\hat{\sigma}_{\beta}^{2}$ and $\hat{\sigma}_{\alpha}^{2}$ can be obtained by applying the least-squares covariance matrix fitting method (Johnson and Dudgeon n.d., Section 7.1.2), which we briefly describe below. First we rewrite the covariance matrix $\Sigma_{y_{\mathrm{GAM}}}$ in (7.19) according to

$$ \mathrm{vec}(\Sigma_{y_{\mathrm{GAM}}}) = D(\bar{\phi})\,e \qquad (7.27) $$
with $\mathrm{vec}(\cdot)$ denoting the vectorization operation Minka (2000),

$$ D(\bar{\phi}) \doteq \bigl[c(\bar{\phi}_{1})\otimes c(\bar{\phi}_{1})^{*},\ c'(\bar{\phi}_{1})\otimes c'(\bar{\phi}_{1})^{*},\ \ldots,\ c(\bar{\phi}_{D})\otimes c(\bar{\phi}_{D})^{*},\ c'(\bar{\phi}_{D})\otimes c'(\bar{\phi}_{D})^{*},\ \mathrm{vec}(I_{M})\bigr], $$

where $\otimes$ is the Kronecker product, and $e \doteq [\sigma_{\alpha_{1}}^{2}, \sigma_{\beta_{1}}^{2}, \sigma_{\alpha_{2}}^{2}, \sigma_{\beta_{2}}^{2}, \ldots, \sigma_{w}^{2}]^{\mathrm{T}}$. In the covariance matrix fitting method, the estimate $\hat{\Omega}$ of $\Omega$ in (7.17) minimizes the Euclidean distance between $\hat{\Sigma}_{y}$ and $\Sigma_{y_{\mathrm{GAM}}}$. Thus, the identity

$$ \left.\frac{\partial \|\hat{\Sigma}_{y} - \Sigma_{y_{\mathrm{GAM}}}\|_{\mathrm{F}}^{2}}{\partial e^{\mathrm{H}}}\right|_{e=\hat{e}} = 0 \qquad (7.28) $$

holds for $\Omega = \hat{\Omega}$. Solving (7.28) yields the closed-form expression for the solution $\hat{e}$:

$$ \hat{e} = D(\hat{\bar{\phi}})^{\dagger}\,\mathrm{vec}(\hat{\Sigma}_{y}). \qquad (7.29) $$

It is worth mentioning that the AS estimator (7.25) does not require knowledge of the pdf of the azimuth deviation. In the case where some assumption is made on the pdf in the form of a parametric model, the AS estimate can be used to calculate the model parameters. In (Shahbazpanahi 2004, (39)–(43)), the AS $\sigma_{\tilde{\phi}}$ is related to the parameters controlling the spread of the truncated Gaussian, the Laplacian and the confined uniform distributions. In addition, when the von-Mises pdf Ribeiro et al. (2005) is used, the relation between the AS and the concentration parameter $\kappa$ of this pdf is approximated according to $\sigma_{\tilde{\phi}} \approx \sqrt{1 - |I_{1}(\kappa)/I_{0}(\kappa)|^{2}}$.

In the sequel, the notation "AS(F) estimator" is used to denote the AS estimator calculated using (7.25), where $\hat{\sigma}_{\beta_{d}}^{2}$ and $\hat{\sigma}_{\alpha_{d}}^{2}$ are computed from the estimates obtained using method "F". More specifically, F can be "DML", "SML" or "MUSIC".
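The covariance-matrix fitting step (7.27)–(7.29) can be sketched for the single-SDS example as follows, reusing the array helpers and the MUSIC azimuth estimate from the earlier sketches. Pairing row-major flattening with kron(c, conj(c)) mirrors the column ordering used in $D(\bar{\phi})$; this pairing, like the helper names, is an implementation choice made for the illustration.

```python
import numpy as np

def covariance_fit_variances(Sigma_y, phi_hats, M, d=0.5):
    """Least-squares covariance fitting (7.27)-(7.29): build D(phi_bar) column by
    column and solve e_hat = D(phi_hat)^dagger vec(Sigma_hat_y)."""
    cols = []
    for phi in phi_hats:
        c = ula_steering(phi, M, d)
        dc = ula_steering_derivative(phi, M, d)
        # Row-major vec(.) is used below, so that vec(c c^H) = kron(c, conj(c)).
        cols.append(np.kron(c, c.conj()))
        cols.append(np.kron(dc, dc.conj()))
    cols.append(np.eye(M).flatten())                     # vec(I_M), noise term
    D_mat = np.column_stack(cols)                        # M^2 x (2D + 1)
    e_hat = np.linalg.pinv(D_mat) @ Sigma_y.flatten()    # (7.29)
    return np.real(e_hat)                                # drop small imaginary residue

e_hat = covariance_fit_variances(Sigma_y, [phi_hat_music], M)
sigma2_alpha_hat, sigma2_beta_hat, sigma2_w_hat = e_hat
as_music = np.sqrt(sigma2_beta_hat / sigma2_alpha_hat)   # AS(MUSIC) via (7.25)
print(np.rad2deg(as_music))
```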
7.1.5 A New Definition of SDS and the Array Size Adaptation Technique

In this section, a new definition of SDS is first provided, which makes use of the ratio of the largest eigenvalue to the second largest eigenvalue of the covariance matrix of the individual signal components contributed by scatterers. Then, based on this ratio, we introduce a measure that quantitatively assesses the degree of fit between the effective signal model (7.1) and the 1st-order GAM model (7.9). Finally, we present a technique, called array size adaptation (ASA), which ensures a good fit between the two models by appropriately selecting the array size. The array size here refers to the size of the array aperture. Except when explicitly mentioned, the single-SDS scenario is considered in this section.

The Traditional Definition and a New Definition of SDS

In this subsection, we present two definitions of SDS: the traditional definition relying on the effective rank of the signal subspace, and a new definition based on the ratio of the largest signal eigenvalue to the second largest signal eigenvalue. We then discuss some issues arising when we apply the criteria induced by these definitions to decide, based on measurement data, whether a distributed scatterer is an SDS or not. The discussion reveals the advantages of the new definition.

We first briefly review the conventional definition of SDS. In a scenario with a single distributed scatterer, the signal subspace is spanned by the L vectors in the sum in (7.1). For L > M, the signal covariance matrix has rank M with probability one. However, for small to moderate values of the AS, the signal energy is concentrated in a few eigenvalues of the signal covariance matrix Meng et al. (1996); Shahbazpanahi et al. (2001); Xu (2003). The effective dimension of the signal subspace is determined by the so-called effective rank of the covariance matrix, which is defined to be the number of eigenvalues larger than twice the noise spectral height Zatman (1998). As shown in Xu (2003), the effective rank of the signal subspace induced by a distributed scatterer increases along with its AS. An SDS is a distributed scatterer which results in a signal subspace with small effective rank Bengtsson and Ottersten (2000); Bengtsson and Völcker (2001); Meng et al. (1996); Shahbazpanahi (2004); Trump and Ottersten (1996).

The new definition of SDS is based on the following experimental evidence: for distributed scatterers with small AS, the largest eigenvalue of the signal covariance matrix significantly dominates the other eigenvalues. A quantitative