
Williams, D.B. "Detection: Determining the Number of Sources," Digital Signal Processing Handbook, Ed. Vijay K. Madisetti and Douglas B. Williams, Boca Raton: CRC Press LLC, 1999. © 1999 by CRC Press LLC

67 Detection: Determining the Number of Sources

Douglas B. Williams, Georgia Institute of Technology

67.1 Formulation of the Problem
67.2 Information Theoretic Approaches: AIC and MDL • EDC
67.3 Decision Theoretic Approaches: The Sphericity Test • Multiple Hypothesis Testing
67.4 For More Information
References

The processing of signals received by sensor arrays generally can be separated into two problems: (1) detecting the number of sources and (2) isolating and analyzing the signal produced by each source. We make this distinction because many of the algorithms for separating and processing array signals assume that the number of sources is known a priori and may give misleading results if the wrong number of sources is used [3]. A good example is the errors produced by many high resolution bearing estimation algorithms (e.g., MUSIC) when the wrong number of sources is assumed. Because, in general, it is easier to determine how many signals are present than to estimate the bearings of those signals, signal detection algorithms typically can correctly determine the number of signals present even when bearing estimation algorithms cannot resolve them. In fact, the capability of an array to resolve two closely spaced sources could be said to be limited by its ability to detect that there are actually two sources present. If we have a reliable method of determining the number of sources, not only can we correctly use high resolution bearing estimation algorithms, but we can also use this knowledge to make more effective use of the information those algorithms return. If the bearing estimation algorithm gives fewer source directions than we know there are sources, then we know that there is more than one source in at least one of those directions and have thus essentially increased the resolution of the algorithm. If analysis of the information provided by the bearing estimation algorithm indicates more source directions than we know there are sources, then we can safely assume that some of the directions are false alarms and may be ignored, thus decreasing the probability of false alarm for the bearing estimation algorithms. In this section we present and discuss the more common approaches to determining the number of sources.

67.1 Formulation of the Problem

The basic problem is that of determining how many signal producing sources are being observed by an array of sensors. Although this problem addresses issues in several areas including sonar, radar, communications, and geophysics, one basic formulation can be applied to all these applications. We will give only a basic, brief description of the assumed signal structure; more detail can be found in references such as the book by Johnson and Dudgeon [3].

We will assume that an array of $M$ sensors observes signals produced by $N_s$ sources. The array is allowed to have an arbitrary geometry. For our discussion here, we will assume that the sensors are omnidirectional. However, this assumption is only for notational convenience, as the algorithms to be discussed work for more general sensor responses. The output of the $m$th sensor can be expressed as a linear combination of signals and noise:

$$ y_m(t) = \sum_{i=1}^{N_s} s_i\left(t - \Delta_i(m)\right) + n_m(t) . $$
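Before unpacking the notation (which is defined in detail just below), the following minimal Python sketch simulates this time-domain model. It is not from the chapter; the sensor count, source frequencies, delays $\Delta_i(m)$, and noise level are all hypothetical values chosen only for illustration.

```python
import numpy as np

# A minimal sketch of y_m(t) = sum_i s_i(t - Delta_i(m)) + n_m(t).
# All numbers here (M, Ns, frequencies, delays, noise level) are assumptions.
rng = np.random.default_rng(0)
M, Ns = 8, 2                                  # sensors, sources
fs, T = 8000.0, 0.1                           # sample rate (Hz), duration (s)
t = np.arange(0.0, T, 1.0 / fs)
freqs = np.array([900.0, 1100.0])             # hypothetical source frequencies
delays = rng.uniform(-1e-3, 1e-3, (Ns, M))    # hypothetical Delta_i(m), in seconds

y = np.zeros((M, t.size))
for m in range(M):
    for i in range(Ns):
        y[m] += np.cos(2.0 * np.pi * freqs[i] * (t - delays[i, m]))  # s_i(t - Delta_i(m))
    y[m] += 0.1 * rng.standard_normal(t.size)                        # n_m(t)
```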
The noise observed at the $m$th sensor is denoted by $n_m(t)$. The propagation delays, $\Delta_i(m)$, are measured with respect to an origin chosen to be at the geometric center of the array. Thus, $s_i(t)$ indicates the $i$th propagating signal observed at the origin, and $s_i(t - \Delta_i(m))$ is the same signal measured by the $m$th sensor. For a plane wave in a homogeneous medium, these delays can be found from the dot product between a unit vector in the signal's direction of propagation, $\vec{\zeta}_i^{\,o}$, and the sensor's location, $\vec{x}_m$:

$$ \Delta_i(m) = \frac{\vec{\zeta}_i^{\,o} \cdot \vec{x}_m}{c} , $$

where $c$ is the plane wave's speed of propagation. Most algorithms used to detect the number of sources incident on the array are frequency domain techniques that assume the propagating signals are narrowband about a common center frequency, $\omega_o$. Consequently, after Fourier transforming the measured signals, only one frequency is of interest and the propagation delays become phase shifts:

$$ Y_m(\omega_o) = \sum_{i=1}^{N_s} S_i(\omega_o)\, e^{-j\omega_o \Delta_i(m)} + N_m(\omega_o) . $$

The detection algorithms then exploit the form of the spatial correlation matrix, $R$, for the array. The spatial correlation matrix is the $M \times M$ matrix formed by correlating the vector of the Fourier transforms of the sensor outputs at the particular frequency of interest,

$$ \mathbf{Y} = \left[\, Y_0(\omega_o) \;\; Y_1(\omega_o) \;\; \cdots \;\; Y_{M-1}(\omega_o) \,\right]^T . $$

If the sources are assumed to be uncorrelated with the noise, then the form of $R$ is

$$ R = E\left\{ \mathbf{Y}\mathbf{Y}^{\dagger} \right\} = K_n + S C S^{\dagger} , $$

where $K_n$ is the correlation matrix of the noise, $S$ is the matrix whose columns correspond to the vector representations of the signals, $S^{\dagger}$ is the conjugate transpose of $S$, and $C$ is the matrix of the correlations between the signals. Thus, the matrix $S$ has the form

$$ S = \begin{bmatrix} e^{-j\omega_o \Delta_1(0)} & \cdots & e^{-j\omega_o \Delta_{N_s}(0)} \\ \vdots & & \vdots \\ e^{-j\omega_o \Delta_1(M-1)} & \cdots & e^{-j\omega_o \Delta_{N_s}(M-1)} \end{bmatrix} . $$

If we assume that the noise is additive, white Gaussian noise with power $\sigma_n^2$ and that none of the signals are perfectly coherent with any of the other signals, then $K_n = \sigma_n^2 I_M$, $C$ has full rank, and the form of $R$ is

$$ R = \sigma_n^2 I_M + S C S^{\dagger} . \qquad (67.1) $$

We will assume that the columns of $S$ are linearly independent when there are fewer sources than sensors, which is the case for most common array geometries and expected source locations. As $C$ is of full rank, if there are fewer sources than sensors, then the rank of $S C S^{\dagger}$ is equal to the number of signals incident on the array or, equivalently, the number of sources. If there are $N_s$ sources, then $S C S^{\dagger}$ is of rank $N_s$ and its $N_s$ eigenvalues in descending order are $\delta_1, \delta_2, \cdots, \delta_{N_s}$. The $M$ eigenvalues of $\sigma_n^2 I_M$ are all equal to $\sigma_n^2$, and the eigenvectors are any orthonormal set of length-$M$ vectors. So the eigenvectors of $R$ are the $N_s$ eigenvectors of $S C S^{\dagger}$ plus any $M - N_s$ eigenvectors that complete the orthonormal set, and the eigenvalues in descending order are $\sigma_n^2 + \delta_1, \cdots, \sigma_n^2 + \delta_{N_s}, \sigma_n^2, \cdots, \sigma_n^2$. The correlation matrix is generally divided into two parts: the signal-plus-noise subspace formed by the largest eigenvalues $(\sigma_n^2 + \delta_1, \cdots, \sigma_n^2 + \delta_{N_s})$ and their eigenvectors, and the noise subspace formed by the smallest, equal eigenvalues and their eigenvectors. The reason for these labels is obvious, as the space spanned by the signal-plus-noise subspace eigenvectors contains the signals and a portion of the noise, while the noise subspace contains only that part of the noise that is orthogonal to the signals [3].
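This eigenvalue structure is easy to verify numerically. The sketch below builds $R$ as in (67.1) for an assumed uniform linear array with half-wavelength spacing, so that the phase of source $i$ at sensor $m$ is $-\pi m \sin\theta_i$; the bearings and signal powers are illustrative assumptions, not values from the chapter.

```python
import numpy as np

# Sketch: build R = sigma_n^2 I + S C S^H (Eq. 67.1) and inspect its eigenvalues.
M, Ns, sigma2 = 8, 2, 1.0
theta = np.deg2rad([10.0, 25.0])                      # hypothetical bearings
m = np.arange(M)[:, None]
S = np.exp(-1j * np.pi * m * np.sin(theta)[None, :])  # M x Ns steering matrix
C = np.diag([4.0, 2.0]).astype(complex)               # full-rank signal correlations
R = sigma2 * np.eye(M) + S @ C @ S.conj().T

eigs = np.linalg.eigvalsh(R)[::-1]                    # descending order
print(np.round(eigs, 3))  # Ns eigenvalues exceed sigma2; the other M - Ns equal sigma2
```

For these assumed values, the two largest eigenvalues sit well above $\sigma_n^2$ while the remaining six equal $\sigma_n^2$ to machine precision, exactly the signal-plus-noise/noise subspace split described above.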
If there are fewer sources than sensors, the smallest $M - N_s$ eigenvalues of $R$ are all equal, and to determine exactly how many sources there are, we must simply determine how many of the smallest eigenvalues are equal. If there are not fewer sources than sensors ($N_s \geq M$), then none of the smallest eigenvalues are equal. The detection algorithms then assume that only the smallest eigenvalue is in the noise subspace, as it is not equal to any of the other eigenvalues. Thus, these algorithms can detect up to $M - 1$ sources, and for $N_s \geq M$ they will report $M - 1$ sources, as this is the greatest detectable number.

Unfortunately, all that is usually known is $\hat{R}$, the sample correlation matrix, which is formed by averaging $N$ samples of the correlation matrix taken from the outputs of the array sensors. As $\hat{R}$ is formed from only a finite number of samples of $R$, the smallest $M - N_s$ eigenvalues of $\hat{R}$ are subject to statistical variations and are unequal with probability one [4]. Thus, solutions to the detection problem have concentrated on statistical tests to determine how many of the eigenvalues of $R$ are equal when only the sample eigenvalues of $\hat{R}$ are available.

When performing statistical tests on the eigenvalues of the sample correlation matrix to determine the number of sources, certain assumptions must be made about the nature of the signals. In array processing, both deterministic and stochastic signal models are used depending on the application. However, for the purpose of testing the sample eigenvalues, the Fourier transforms of the signals at frequency $\omega_o$, $S_i(\omega_o)$, $i = 1, \ldots, N_s$, are assumed to be zero mean Gaussian random processes that are statistically independent of the noise and have a positive definite correlation matrix $C$. We also assume that the $N$ samples taken when forming $\hat{R}$ are statistically independent of each other. With these assumptions, the spatial correlation matrix is still of the same form as in (67.1), except that now we can more easily derive statistical tests on the eigenvalues of $\hat{R}$.
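Continuing the earlier sketch, drawing a finite number of snapshots shows why a statistical test is needed: the smallest eigenvalues of $\hat{R}$ scatter around $\sigma_n^2$ rather than being exactly equal. The snapshot count here is an arbitrary assumption.

```python
import numpy as np

# Sketch: N independent complex Gaussian snapshots with correlation R (from above);
# the smallest M - Ns eigenvalues of R_hat are unequal with probability one.
rng = np.random.default_rng(1)
N = 100                                          # assumed number of snapshots
Lc = np.linalg.cholesky(R)                       # R is Hermitian positive definite
W = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2.0)
Y = Lc @ W                                       # snapshots with E{Y Y^H} = R
R_hat = (Y @ Y.conj().T) / N                     # sample correlation matrix
l = np.linalg.eigvalsh(R_hat)[::-1]              # sample eigenvalues, descending
print(np.round(l, 3))                            # noise eigenvalues scatter near sigma2
```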
67.2 Information Theoretic Approaches

We will see that the source detection methods to be described all share common characteristics. However, we will classify them into two groups, information theoretic and decision theoretic approaches, determined by the statistical theories used to derive them. Although the decision theoretic techniques are quite a bit older, we will first present the information theoretic algorithms, as they are currently much more commonly used.

67.2.1 AIC and MDL

AIC and MDL are both information theoretic model order determination techniques that can be used to test the eigenvalues of a sample correlation matrix to determine how many of the smallest eigenvalues of the correlation matrix are equal. The AIC and MDL algorithms both consist of minimizing a criterion over the number of signals that are detectable, i.e., $N_s = 0, \ldots, M - 1$.

To construct these criteria, a family of probability densities, $f(\mathbf{Y}|\theta(N_s))$, $N_s = 0, \ldots, M - 1$, is needed, where $\theta$, which is a function of the number of sources, $N_s$, is the vector of parameters needed for the model that generated the data $\mathbf{Y}$. The criteria are composed of the negative of the log-likelihood function of the density $f(\mathbf{Y}|\hat{\theta}(N_s))$, where $\hat{\theta}(N_s)$ is the maximum likelihood estimate of $\theta$ for $N_s$ signals, plus an adjusting term for the model dimension. The adjusting term is needed because the negative log-likelihood function always achieves a minimum for the highest dimension model possible, which in this case is the largest possible number of sources. Therefore, the adjusting term will be a monotonically increasing function of $N_s$ and should be chosen so that the algorithm is able to determine the correct model order.

AIC was introduced by Akaike [1]. Originally, the "IC" stood for information criterion and the "A" designated it as the first such test, but it is now more commonly considered an acronym for the "Akaike Information Criterion." If we have $N$ independent observations of a random variable with probability density $g(\mathbf{Y})$ and a family of models in the form of probability densities $f(\mathbf{Y}|\theta)$, where $\theta$ is the vector of parameters for the models, then Akaike chose his criterion to minimize

$$ I(g; f(\cdot|\theta)) = \int g(\mathbf{Y}) \ln g(\mathbf{Y})\, d\mathbf{Y} - \int g(\mathbf{Y}) \ln f(\mathbf{Y}|\theta)\, d\mathbf{Y} , \qquad (67.2) $$

which is known as the Kullback-Leibler mean information distance. $\frac{1}{N} AIC(\theta)$ is an estimate of $-E\left\{ \int g(\mathbf{Y}) \ln f(\mathbf{Y}|\theta)\, d\mathbf{Y} \right\}$, and minimizing $AIC(\theta)$ over the allowable values of $\theta$ should minimize (67.2). The expression for $AIC(\theta)$ is

$$ AIC(\theta) = -2 \ln f\left( \mathbf{Y} \,|\, \hat{\theta}(N_s) \right) + 2\eta , $$

where $\eta$ is the number of independent parameters in $\theta$.

Following AIC, MDL was developed by Schwarz [6] using Bayesian techniques. He assumed that the a priori density of the observations comes from a suitable family of densities that possess efficient estimates [7]; they are of the form

$$ f(\mathbf{Y}|\theta) = \exp\left( \theta \cdot p(\mathbf{Y}) - b(\theta) \right) . $$

The MDL criterion was then found by choosing the model that is most probable a posteriori. This choice is equivalent to selecting the model for which

$$ MDL(\theta) = -\ln f\left( \mathbf{Y} \,|\, \hat{\theta}(N_s) \right) + \frac{1}{2} \eta \ln N $$

is minimized. This criterion was independently derived by Rissanen [5] using information theoretic techniques. Rissanen noted that each model can be perceived as encoding the observed data and that the optimum model is the one that yields the minimum code length. Hence, the name MDL comes from "Minimum Description Length."

For the purpose of using AIC and MDL to determine the number of sources, the forms of the log-likelihood function and the adjusting terms have been given by Wax [8]. For $N_s$ signals the parameters that completely parameterize the correlation matrix $R$ are $\{\sigma_n^2, \lambda_1, \cdots, \lambda_{N_s}, \mathbf{v}_1, \cdots, \mathbf{v}_{N_s}\}$, where $\lambda_i$ and $\mathbf{v}_i$, $i = 1, \ldots, N_s$, are the eigenvalues and their respective eigenvectors of the signal-plus-noise subspace of the correlation matrix. As the vector of sensor outputs is a Gaussian random vector with correlation matrix $R$ and all the samples of the sensor outputs are independent, the likelihood function $f(\mathbf{Y}|\theta)$ is

$$ f\left( \mathbf{Y} \,|\, \sigma_n^2, \lambda_1, \cdots, \lambda_{N_s}, \mathbf{v}_1, \cdots, \mathbf{v}_{N_s} \right) = \pi^{-MN} \left( \det R \right)^{-N} \exp\left( -N \operatorname{tr}\left( R^{-1} \hat{R} \right) \right) , $$

where $\operatorname{tr}(\cdot)$ denotes the trace of the matrix, $\hat{R}$ is the sample correlation matrix, and $R$ is the unique correlation matrix formed from the given parameters. The maximum likelihood estimates of the parameters are [2, 4]

$$ \hat{\mathbf{v}}_i = \mathbf{u}_i, \quad \hat{\lambda}_i = l_i, \quad i = 1, \cdots, N_s ; \qquad \hat{\sigma}_n^2 = \frac{1}{M - N_s} \sum_{i=N_s+1}^{M} l_i = \bar{l} , \qquad (67.3) $$

where $l_1, \cdots, l_M$ are the eigenvalues in descending order of $\hat{R}$ and $\mathbf{u}_i$ are the corresponding eigenvectors. Therefore, the log-likelihood function of $f(\mathbf{Y}|\hat{\theta}(N_s))$ is

$$ \ln f\left( \mathbf{Y} \,|\, \bar{l}, l_1, \cdots, l_{N_s}, \mathbf{u}_1, \cdots, \mathbf{u}_{N_s} \right) = \ln \left[ \frac{ \prod_{i=N_s+1}^{M} l_i^{1/(M-N_s)} }{ \frac{1}{M-N_s} \sum_{i=N_s+1}^{M} l_i } \right]^{(M-N_s)N} . $$
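In other words, the maximized log-likelihood depends on the sample eigenvalues only through the ratio of the geometric to the arithmetic mean of the $M - N_s$ smallest ones. The sketch below simply restates the formula above in code; it adds no theory beyond the chapter's.

```python
import numpy as np

# Sketch: ln f(Y | theta_hat(k)) = (M - k) * N * ln(geo_mean / arith_mean)
# over the M - k smallest sample eigenvalues; it is 0 iff they are all equal.
def max_log_likelihood(l, k, N):
    tail = l[k:]                               # candidate noise eigenvalues
    geo = np.exp(np.mean(np.log(tail)))        # geometric mean
    ari = np.mean(tail)                        # arithmetic mean
    return tail.size * N * np.log(geo / ari)   # always <= 0
```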
Remembering that the eigenvalues of a complex correlation matrix are real and that the eigenvectors are complex and orthonormal, the number of degrees of freedom in the parameters of the model is classically chosen to be $\eta = N_s(2M - N_s) + 1$. Noting that any constant term in the criteria that is common to the entire family of models for either AIC or MDL may be ignored, we have the criterion for AIC as

$$ AIC\left( \hat{N}_s \right) = -2N \ln\left[ \frac{ \prod_{i=\hat{N}_s+1}^{M} l_i }{ \left( \frac{1}{M-\hat{N}_s} \sum_{i=\hat{N}_s+1}^{M} l_i \right)^{M-\hat{N}_s} } \right] + 2 \hat{N}_s \left( 2M - \hat{N}_s \right) ; \qquad \hat{N}_s = 0, \ldots, M - 1 , $$

and the criterion for MDL as

$$ MDL\left( \hat{N}_s \right) = -N \ln\left[ \frac{ \prod_{i=\hat{N}_s+1}^{M} l_i }{ \left( \frac{1}{M-\hat{N}_s} \sum_{i=\hat{N}_s+1}^{M} l_i \right)^{M-\hat{N}_s} } \right] + \frac{1}{2} \hat{N}_s \left( 2M - \hat{N}_s \right) \ln N ; \qquad \hat{N}_s = 0, \ldots, M - 1 . $$

For both of these methods, the estimate of the number of sources is that value of $\hat{N}_s$ which minimizes the criterion. In [9] there is a more thorough discussion concerning determining the number of degrees of freedom and the advantages of choosing instead $\eta = N_s(2M - N_s - 1)$.

In general, MDL is considered to perform better than AIC. Schwarz [6], through his derivation of the MDL criterion, showed that if his assumptions are accepted, then AIC cannot be asymptotically optimal. He also mentioned that MDL tends toward lower-dimensional models than AIC, as the model dimension term is multiplied by $\frac{1}{2} \ln N$ in the MDL criterion. Zhao et al. [14] showed that MDL is consistent (the probability of detecting the correct number of sources, i.e., $\Pr(\hat{N}_s = N_s)$, goes to 1 as $N$ goes to infinity), but AIC is not consistent and will tend to overestimate the number of sources as $N$ goes to infinity. Thus, most people in array processing prefer to use MDL over AIC. Interestingly, many statisticians prefer AIC because many of their modeling problems have a very large penalty for underestimating the model order but a relatively mild penalty for overestimating it. Xu and Kaveh [12] have provided a thorough discussion of the asymptotic properties of AIC and MDL, including an examination of their sensitivities to modelling errors and bounds on the probability that AIC will overestimate the number of sources.

67.2.2 EDC

Clearly, the only difference between the implementations of AIC and MDL is the choice of the adjusting term that penalizes larger model orders. Several people have examined other adjusting terms that yield other criteria. In particular, statisticians at the University of Pittsburgh [13, 14] have developed the Efficient Detection Criterion (EDC) procedure, which is actually a family of criteria chosen such that they are all consistent. The general form of these criteria is

$$ EDC(\theta) = -\ln f\left( \mathbf{Y} \,|\, \hat{\theta}(N_s) \right) + \eta\, C_N , $$

where $C_N$ can be any function of $N$ such that

(1) $\lim_{N \to \infty} C_N / N = 0$
(2) $\lim_{N \to \infty} C_N / \ln(\ln(N)) = \infty$.

Thus, for the array processing source detection problem the EDC procedure chooses the value of $\hat{N}_s$ that minimizes

$$ EDC\left( \hat{N}_s \right) = -N \ln\left[ \frac{ \prod_{i=\hat{N}_s+1}^{M} l_i }{ \left( \frac{1}{M-\hat{N}_s} \sum_{i=\hat{N}_s+1}^{M} l_i \right)^{M-\hat{N}_s} } \right] + \hat{N}_s \left( 2M - \hat{N}_s \right) C_N ; \qquad \hat{N}_s = 0, \ldots, M - 1 . $$

In their analysis of the EDC procedure, Zhao et al. [14] showed that not only are all the EDC criteria consistent for the data assumptions we have made, but under certain conditions they remain consistent even when the data sample vectors used to form the estimate $\hat{R}$ are not independent or Gaussian.
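Since AIC, MDL, and the EDC family differ only in the penalty term, one implementation can serve all three. The sketch below is illustrative, using $\eta = \hat{N}_s(2M - \hat{N}_s)$ as in the criteria above; the halved AIC penalty leaves the minimizer unchanged.

```python
import numpy as np

# Sketch: AIC / MDL / EDC differ only in the penalty on the model order.
# l: sample eigenvalues in descending order; N: number of snapshots.
def detect_sources(l, N, penalty):
    M = len(l)
    crit = np.empty(M)
    for k in range(M):
        tail = l[k:]
        log_ratio = np.sum(np.log(tail)) - tail.size * np.log(np.mean(tail))
        crit[k] = -N * log_ratio + penalty(k, M, N)   # -N*log_ratio >= 0
    return int(np.argmin(crit))

aic = lambda k, M, N: k * (2 * M - k)                           # AIC / 2; same argmin
mdl = lambda k, M, N: 0.5 * k * (2 * M - k) * np.log(N)
edc = lambda k, M, N: k * (2 * M - k) * np.sqrt(N * np.log(N))  # C_N = sqrt(N ln N)

print(detect_sources(l, N, mdl))   # l, N from the earlier sketch; expect Ns = 2
```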
The choice of $C_N = \frac{1}{2} \ln N$ satisfies the restrictions on $C_N$ and, thus, produces one of the EDC procedures. This particular criterion is identical to MDL and shows that the MDL criterion is included as one of the EDC procedures. Another relatively common choice for $C_N$ is $C_N = \sqrt{N \ln N}$.

67.3 Decision Theoretic Approaches

The methods that we term decision theoretic approaches all rely on the statistical theory of hypothesis testing to determine the number of sources. The first of these that we will discuss, the sphericity test, is by far the oldest algorithm for source detection.

67.3.1 The Sphericity Test

Originally, the sphericity test was a hypothesis testing method designed to determine if the correlation (or covariance) matrix, $R$, of a length $M$ Gaussian random vector is proportional to the identity matrix, $I_M$, when only $\hat{R}$, the sample correlation matrix, is known. If $R \propto I_M$, then the contours of equal density for the Gaussian distribution form concentric spheres in $M$-dimensional space. The sphericity test derives its name from being a test of the sphericity of these contours. The original sphericity test had two possible hypotheses:

$$ H_0: R = \sigma_n^2 I_M \qquad\qquad H_1: R \neq \sigma_n^2 I_M $$

for some unknown $\sigma_n^2$. If we denote the eigenvalues of $R$ in descending order by $\lambda_1, \lambda_2, \cdots, \lambda_M$, then equivalent hypotheses are

$$ H_0: \lambda_1 = \lambda_2 = \cdots = \lambda_M \qquad\qquad H_1: \lambda_1 > \lambda_M . $$

For the appropriate statistic, $T(\hat{R})$, the test is of the form

$$ T(\hat{R}) \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma , $$

where the threshold, $\gamma$, can be set according to the Neyman-Pearson criterion [7]. That is, if the distribution of $T(\hat{R})$ is known under the null hypothesis, $H_0$, then for a given probability of false alarm, $P_F$, we can choose $\gamma$ such that $\Pr(T(\hat{R}) > \gamma \,|\, H_0) = P_F$. Using the alternate form of the hypotheses, $T(\hat{R})$ is actually $T(l_1, l_2, \cdots, l_M)$, and the eigenvalues of the sample correlation matrix are a sufficient statistic for the hypothesis test. The correct form of the sphericity test statistic is the generalized likelihood ratio [4]

$$ T(l_1, l_2, \cdots, l_M) = \ln \left[ \frac{ \left( \frac{1}{M} \sum_{i=1}^{M} l_i \right)^{M} }{ \prod_{i=1}^{M} l_i } \right] , $$

which was also a major component of the information theoretic tests.

For the source detection problem we are interested in testing a subset of the smaller eigenvalues for equality. In order to use the sphericity test, the hypotheses are generally broken down into pairs of hypotheses that can be tested in a series of hypothesis tests. For testing $M - \hat{N}_s$ eigenvalues for equality, the hypotheses are

$$ H_0: \lambda_1 \geq \cdots \geq \lambda_{\hat{N}_s} \geq \lambda_{\hat{N}_s+1} = \cdots = \lambda_M $$
$$ H_1: \lambda_1 \geq \cdots \geq \lambda_{\hat{N}_s} \geq \lambda_{\hat{N}_s+1} > \lambda_M . $$

We are interested in finding the smallest value of $\hat{N}_s$ for which $H_0$ is true, which is done by testing $\hat{N}_s = 0$, $\hat{N}_s = 1, \cdots$ until $\hat{N}_s = M - 2$ or the test does not fail. If the test fails for $\hat{N}_s = M - 2$, then we consider none of the smallest eigenvalues to be equal and say that there are $M - 1$ sources. If $\hat{N}_s$ is the smallest value for which $H_0$ is true, then we say that there are $\hat{N}_s$ sources. There is also a problem involved in setting the desired $P_F$. The Neyman-Pearson criterion is not able to determine a threshold for a given $P_F$ for the overall detection problem. The best that can be done is to set a $P_F$ for each individual test in the nested series of hypothesis tests using Neyman-Pearson methods.
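Before turning to thresholds, note that the statistic itself is simple to compute from the sample eigenvalues. This small sketch (an assumed implementation, not code from the chapter) evaluates the subset version used in the nested tests:

```python
import numpy as np

# Sketch: generalized likelihood ratio statistic for sphericity of the
# M - k smallest sample eigenvalues: ln[(arithmetic mean)^(M-k) / product].
def sphericity_statistic(l, k):
    tail = l[k:]                       # eigenvalues being tested for equality
    return tail.size * np.log(np.mean(tail)) - np.sum(np.log(tail))  # >= 0
```

The statistic is zero only when the tested eigenvalues are exactly equal; using it as a test requires its distribution under $H_0$, which is taken up next.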
Unfortunately, as the hypothesis tests are obviously not statistically independent and their statistical relationship is not very clear, how this $P_F$ for each test relates to the $P_F$ for the entire series of tests is not known.

To use the sphericity test to detect sources, we need to be able to set the threshold $\gamma$ accurately according to the desired $P_F$, which requires knowledge of the distribution of the sphericity test statistic $T(l_{\hat{N}_s+1}, \cdots, l_M)$ under the null hypothesis. The exact form of this distribution is not available in a form that is very useful, as it is generally written as an infinite series of Gaussian, chi-squared, or beta distributions [2, 4]. However, if the test statistic is multiplied by a suitable function of the eigenvalues of $\hat{R}$, then its distribution can be accurately approximated as being chi-squared [10]. Thus, the statistic

$$ 2 \left[ (N-1) - \hat{N}_s - \frac{2\left( M - \hat{N}_s \right)^2 + 1}{6\left( M - \hat{N}_s \right)} + \sum_{i=1}^{\hat{N}_s} \left( \frac{l_i}{\bar{l}} - 1 \right)^{-2} \right] \ln \left[ \frac{ \left( \frac{1}{M-\hat{N}_s} \sum_{i=\hat{N}_s+1}^{M} l_i \right)^{M-\hat{N}_s} }{ \prod_{i=\hat{N}_s+1}^{M} l_i } \right] $$

is approximately chi-squared distributed with degrees of freedom given by

$$ d = \left( M - \hat{N}_s \right)^2 - 1 , $$

where $\bar{l} = \frac{1}{M-\hat{N}_s} \sum_{i=\hat{N}_s+1}^{M} l_i$. Although the performance of the sphericity test is comparable to that of the information theoretic tests, it is not as popular because it requires selection of the $P_F$ and calculation of the test thresholds for each value of $\hat{N}_s$. However, if the received data does not match the assumed model, the ability to change the test thresholds gives the sphericity test a robustness lacking in the information theoretic methods.
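A sketch of the resulting sequential detector follows, using the chi-squared approximation above to set each per-test threshold via scipy's `chi2.ppf` inverse cdf. As noted above, `pf` here is the false-alarm rate of each individual test, not of the whole sequence.

```python
import numpy as np
from scipy.stats import chi2

# Sketch: nested sphericity tests; returns the smallest k for which equality
# of the M - k smallest eigenvalues cannot be rejected (else M - 1).
def sphericity_detect(l, N, pf=0.01):
    M = len(l)
    for k in range(M - 1):
        tail = l[k:]
        lbar = np.mean(tail)
        scale = 2.0 * ((N - 1) - k - (2.0 * (M - k) ** 2 + 1.0) / (6.0 * (M - k))
                       + np.sum((l[:k] / lbar - 1.0) ** -2))
        T = scale * sphericity_statistic(l, k)     # statistic from the sketch above
        dof = (M - k) ** 2 - 1
        if T < chi2.ppf(1.0 - pf, dof):            # equality not rejected: H0 accepted
            return k
    return M - 1

print(sphericity_detect(l, N))                     # l, N from the earlier sketch
```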
67.3.2 Multiple Hypothesis Testing

The sphericity test relies on a sequence of binary hypothesis tests to determine the number of sources. However, the optimum test for this situation would be to test all hypotheses simultaneously:

$$ H_0: \lambda_1 = \lambda_2 = \cdots = \lambda_M $$
$$ H_1: \lambda_1 > \lambda_2 = \cdots = \lambda_M $$
$$ H_2: \lambda_1 \geq \lambda_2 > \lambda_3 = \cdots = \lambda_M $$
$$ \vdots $$
$$ H_{M-1}: \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_{M-1} > \lambda_M $$

to determine how many of the smaller eigenvalues are equal. While it is not possible to generalize the sphericity test directly, it is possible to use an approximation to the probability density function (pdf) of the eigenvalues to arrive at a suitable test. Using the theory of multiple hypothesis tests, we can derive a test that is similar to AIC and MDL and is implemented in exactly the same manner, but is designed to minimize the probability of choosing the wrong number of sources.

To arrive at our statistic, we start with the joint pdf of the eigenvalues of the $M \times M$ sample covariance matrix when the $M - \hat{N}_s$ smallest eigenvalues are known to be equal. We will denote this pdf by $f_{\hat{N}_s}(l_1, \ldots, l_M \,|\, \lambda_1 \geq \cdots \geq \lambda_{\hat{N}_s+1} = \cdots = \lambda_M)$, where the $l_i$ denote the eigenvalues of the sample matrix and the $\lambda_i$ are the eigenvalues of the true covariance matrix. The asymptotic expression for $f_{\hat{N}_s}(\cdot)$ is given by Wong et al. [11] for the complex-valued data case as

$$ f_{\hat{N}_s}\left( l_1, \ldots, l_M \,|\, \lambda_1 \geq \cdots \geq \lambda_{\hat{N}_s+1} = \cdots = \lambda_M \right) \approx \frac{ n^{Mn - \frac{\hat{N}_s}{2}\left( 2M - \hat{N}_s - 1 \right)} \, \pi^{M(M-1) - \frac{\hat{N}_s}{2}\left( 2M - \hat{N}_s - 1 \right)} }{ \tilde{\Gamma}_M(n) \, \tilde{\Gamma}_{M-\hat{N}_s}\left( M - \hat{N}_s \right) } \prod_{i=1}^{M} \lambda_i^{-n} \prod_{i=1}^{M} l_i^{\,n-M} \exp\left( -n \sum_{i=1}^{M} \frac{l_i}{\lambda_i} \right) \prod_{\hat{N}_s+1 \leq i < j \leq M} \left( l_i - l_j \right)^2 \prod_{1 \leq i < j \leq \hat{N}_s} \frac{\left( l_i - l_j \right) \lambda_i \lambda_j}{\lambda_i - \lambda_j} \prod_{i=1}^{\hat{N}_s} \prod_{j=\hat{N}_s+1}^{M} \frac{\left( l_i - l_j \right) \lambda_i \lambda_j}{\lambda_i - \lambda_j} , $$

where $n = N - 1$ is one less than the number of samples and $\tilde{\Gamma}_p(\cdot)$ is the multivariate gamma function for complex-valued data [11].

We then form $M$ likelihood ratios by dividing each joint pdf by $f_{M-1}(\cdot)$:

$$ \Lambda\left( \hat{N}_s \right) = \frac{ f_{\hat{N}_s}\left( l_1, \ldots, l_M \,|\, \lambda_1 \geq \cdots \geq \lambda_{\hat{N}_s+1} = \cdots = \lambda_M \right) }{ f_{M-1}\left( l_1, \ldots, l_M \,|\, \lambda_1 \geq \cdots \geq \lambda_M \right) } , \qquad \hat{N}_s = 0, \ldots, M - 1 . $$

Assuming that each value of $\hat{N}_s$ is equally likely, multiple hypothesis testing theory tells us that the value of $\hat{N}_s$ that maximizes $\Lambda(\hat{N}_s)$ is the optimum choice in that it minimizes the probability of choosing the incorrect $\hat{N}_s$ [7]. Because $\Lambda(\hat{N}_s)$ in this form requires knowledge of the unknown parameters $\lambda_i$, we must use a generalized likelihood ratio test and independently substitute the maximum likelihood estimates of the $\lambda_i$ [see Eq. (67.3) for these expressions] into both $f_{\hat{N}_s}(\cdot)$, for which we assume $M - \hat{N}_s$ equal $\lambda_i$s, and $f_{M-1}(\cdot)$, for which we assume no equal $\lambda_i$s, to get our new statistics $\Lambda(\hat{N}_s)$. After much simplification, including dropping terms that are common to $\Lambda(\hat{N}_s)$ for every allowable value of $\hat{N}_s$ and then taking the natural logarithm of each $\Lambda(\hat{N}_s)$, we get the statistic

$$ \begin{aligned} \Lambda\left( \hat{N}_s \right) = {} & \left( n - \hat{N}_s \right) \ln \left[ \frac{ \prod_{i=\hat{N}_s+1}^{M} l_i }{ \left( \frac{1}{M-\hat{N}_s} \sum_{i=\hat{N}_s+1}^{M} l_i \right)^{M-\hat{N}_s} } \right] - \frac{1}{2} \hat{N}_s \left( 2M - \hat{N}_s - 1 \right) \ln n + \ln \left[ \frac{ \pi^{-\hat{N}_s\left( 2M - \hat{N}_s - 1 \right)/2} }{ \tilde{\Gamma}_{M-\hat{N}_s}\left( M - \hat{N}_s \right) } \right] \\ & + \sum_{i=1}^{\hat{N}_s} \sum_{j=\hat{N}_s+1}^{M} \ln \left( \frac{l_i - l_j}{l_i - \bar{l}} \right) - \sum_{i=\hat{N}_s+1}^{M} \sum_{j=i+1}^{M} 2 \ln \left( \frac{ \left( l_i l_j \right)^{1/2} }{ l_i - l_j } \right) , \end{aligned} $$

where $\bar{l} = \frac{1}{M-\hat{N}_s} \sum_{i=\hat{N}_s+1}^{M} l_i$. The terms in the first line of this equation are almost identical to the negative of the MDL criterion, especially when the degrees of freedom recommended in [9] are used. Note that the change in sign is necessary because we are finding the maximum of this criterion, not the minimum. The extra terms on the following line include both the eigenvalues being tested for equality and those not being tested. These extra terms allow this test to outperform the information theoretic techniques, since the use of all the eigenvalues for each value of $\hat{N}_s$ being tested allows this criterion to be more adaptive.
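A direct implementation is possible if the multivariate gamma function for complex data is available in log form. The sketch below assumes the standard form $\tilde{\Gamma}_p(a) = \pi^{p(p-1)/2} \prod_{i=1}^{p} \Gamma(a-i+1)$ and transcribes the statistic as reconstructed above, so it should be read as illustrative rather than a verified implementation.

```python
import numpy as np
from scipy.special import gammaln

def ln_cgamma(p, a):
    # log multivariate gamma function for complex data (assumed standard form)
    return 0.5 * p * (p - 1) * np.log(np.pi) + sum(gammaln(a - i) for i in range(p))

# Sketch of the multiple-hypothesis statistic; pick the k that maximizes it.
def mht_statistic(l, k, N):
    M, n = len(l), N - 1
    tail = l[k:]
    lbar = np.mean(tail)
    t1 = (n - k) * (np.sum(np.log(tail)) - tail.size * np.log(lbar))
    t2 = -0.5 * k * (2 * M - k - 1) * np.log(n)
    t3 = -0.5 * k * (2 * M - k - 1) * np.log(np.pi) - ln_cgamma(M - k, M - k)
    t4 = sum(np.log((l[i] - l[j]) / (l[i] - lbar))
             for i in range(k) for j in range(k, M))
    t5 = -sum(2.0 * np.log(np.sqrt(l[i] * l[j]) / (l[i] - l[j]))
              for i in range(k, M) for j in range(i + 1, M))
    return t1 + t2 + t3 + t4 + t5

k_hat = max(range(len(l)), key=lambda k: mht_statistic(l, k, N))  # l, N as before
```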
67.4 For More Information

Most of the original papers on model order determination appeared in the statistical literature in journals such as The Annals of Statistics and the Journal of Multivariate Analysis. However, almost all of the more recent developments that apply these techniques to the source detection problem have appeared in signal processing journals such as the IEEE Transactions on Signal Processing. More advanced topics that have been addressed in the signal processing literature but not discussed here include: detecting coherent (i.e., completely correlated) signals, detecting sources in unknown colored noise, and developing more robust source detection methods.

References

[1] Akaike, H., A new look at the statistical model identification, IEEE Trans. Automatic Control, AC-19, 716-723, Dec. 1974.
[2] Anderson, T.W., Asymptotic theory for principal component analysis, Ann. Math. Statist., 34, 122-148, 1963.
[3] Johnson, D.H. and Dudgeon, D.E., Array Signal Processing: Concepts and Techniques, Prentice-Hall, Englewood Cliffs, NJ, 1993.
[4] Muirhead, R.J., Aspects of Multivariate Statistical Theory, John Wiley & Sons, New York, 1982.
[5] Rissanen, J., Modeling by shortest data description, Automatica, 14, 465-471, Sept. 1978.
[6] Schwarz, G., Estimating the dimension of a model, Ann. Stat., 6, 461-464, Mar. 1978.
[7] Van Trees, H.L., Detection, Estimation, and Modulation Theory, Part I, John Wiley & Sons, New York, 1968.
[8] Wax, M. and Kailath, T., Detection of signals by information theoretic criteria, IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-33, 387-392, Apr. 1985.
[9] Williams, D.B., Counting the degrees of freedom when using AIC and MDL to detect signals, IEEE Trans. Signal Processing, 42, 3282-3284, Nov. 1994.
[10] Williams, D.B. and Johnson, D.H., Using the sphericity test for source detection with narrowband passive arrays, IEEE Trans. Acoustics, Speech, and Signal Processing, 38, 2008-2014, Nov. 1990.
[11] Wong, K.M., Zhang, Q.-T., Reilly, J.P. and Yip, P.C., On information theoretic criteria for determining the number of signals in high resolution array processing, IEEE Trans. Acoustics, Speech, and Signal Processing, 38, 1959-1971, Nov. 1990.
[12] Xu, W. and Kaveh, M., Analysis of the performance and sensitivity of eigendecomposition-based detectors, IEEE Trans. Signal Processing, 43, 1413-1426, June 1995.
[13] Yin, Y.Q. and Krishnaiah, P.R., On some nonparametric methods for detection of the number of signals, IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-35, 1533-1538, Nov. 1987.
[14] Zhao, L.C., Krishnaiah, P.R. and Bai, Z.D., On detection of the number of signals in presence of white noise, J. Multivariate Analysis, 20, 1-25, Oct. 1986.
