Tugnait, J.K. "Validation, Testing, and Noise Modeling," in Digital Signal Processing Handbook, Ed. Vijay K. Madisetti and Douglas B. Williams, Boca Raton: CRC Press LLC, 1999. © 1999 by CRC Press LLC.

16 Validation, Testing, and Noise Modeling
Jitendra K. Tugnait, Auburn University

16.1 Introduction
16.2 Gaussianity, Linearity, and Stationarity Tests: Gaussianity Tests • Linearity Tests • Stationarity Tests
16.3 Order Selection, Model Validation, and Confidence Intervals: Order Selection • Model Validation • Confidence Intervals
16.4 Noise Modeling: Generalized Gaussian Noise • Middleton Class A Noise • Stable Noise Distribution
16.5 Concluding Remarks
References

16.1 Introduction

Linear parametric models of stationary random processes, whether signal or noise, have been found useful in a wide variety of signal processing tasks such as signal detection, estimation, filtering, and classification, and in a wide variety of applications such as digital communications, automatic control, radar and sonar, and other engineering disciplines and sciences. A general representation of a linear discrete-time stationary signal x(t) is given by

    x(t) = \sum_{i=0}^{\infty} h(i)\,\epsilon(t - i)        (16.1)

where {ε(t)} is a zero-mean, i.i.d. (independent and identically distributed) random sequence with finite variance, and {h(i), i ≥ 0} is the impulse response of the linear system, with \sum_{i=0}^{\infty} h^2(i) < \infty. Much effort has been expended on developing approaches to linear model fitting given a single measurement record of the signal (or noisy signal). Parsimonious parametric models such as AR (autoregressive), MA (moving average), ARMA, or state-space models, as opposed to impulse-response modeling, have been popular, together with the assumption of Gaussianity of the data. Define

    H(q) = \sum_{i=0}^{\infty} h(i)\, q^{-i}        (16.2)

where q^{-1} is the backward shift operator (i.e., q^{-1} x(t) = x(t - 1), etc.). If q is replaced with the complex variable z, then H(z) is the Z-transform of {h(i)}, i.e., it is the system transfer function.
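The representation (16.1)–(16.2) can be made concrete with a short simulation. The sketch below is a minimal illustration, not part of the chapter: the AR(2) coefficients, the shifted-exponential driving noise, and the use of `scipy.signal.lfilter` to apply a rational H(q) are all illustrative choices.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
N = 4096

# Zero-mean i.i.d. driving sequence eps(t); a shifted exponential makes the
# resulting linear process non-Gaussian (its higher-order cumulants do not vanish)
eps = rng.exponential(1.0, N) - 1.0

# AR(2) transfer function H(q) = 1 / (1 + a1 q^-1 + a2 q^-2), cf. (16.4) below
den = [1.0, -0.5, 0.2]            # [1, a1, a2] with a1 = -0.5, a2 = 0.2
x = lfilter([1.0], den, eps)      # x(t) = H(q) eps(t)
```

Replacing the exponential draw with `rng.standard_normal(N)` yields a linear Gaussian process with the same second-order statistics, which is exactly the ambiguity the tests of Section 16.2 are designed to resolve.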
Using (16.2), (16.1) may be rewritten as

    x(t) = H(q)\,\epsilon(t).        (16.3)

Fitting linear models to the measurement record requires estimation of H(q), or equivalently of {h(i)}, without observing {ε(t)}. Typically H(q) is parameterized by a finite number of parameters, say by the parameter vector θ^(M) of dimension M. For instance, an AR model representation of order M means that

    H_{AR}(q; \theta^{(M)}) = \frac{1}{1 + \sum_{i=1}^{M} a_i q^{-i}}, \qquad \theta^{(M)} = (a_1, a_2, \dots, a_M)^T.        (16.4)

This reduces the number of estimated parameters from a "large" number to M. In this section several aspects of fitting models such as (16.1) through (16.3) to the given measurement record are considered (see also Fig. 16.1):

• Is a model of the type (16.1) appropriate to the given record? This requires testing for linearity and stationarity of the data.

• Linear Gaussian models have long been dominant for both signal and noise processes. The assumption of Gaussianity allows implementation of statistically efficient parameter estimators such as maximum likelihood estimators. A Gaussian process is completely characterized by its second-order statistics (autocorrelation function or, equivalently, power spectral density). Since the power spectrum of {x(t)} of (16.1) is given by

    S_{xx}(\omega) = \sigma^2 |H(e^{j\omega})|^2, \qquad \sigma^2 = E\{\epsilon^2(t)\},        (16.5)

one cannot determine the phase of H(e^{jω}) independently of |H(e^{jω})| from second-order statistics. Determination of the true phase characteristic is crucial in several applications, such as blind equalization of digital communications channels. Use of higher-order statistics allows one to uniquely identify nonminimum-phase parametric models. Higher-order cumulants of Gaussian processes vanish; hence, if the data are stationary Gaussian, a minimum-phase (or maximum-phase) model is the "best" that one can estimate. Therefore, another aspect considered in this section is testing for non-Gaussianity of the given record.
• If the data are Gaussian, one may fit models based solely upon the second-order statistics of the data; otherwise, use of higher-order statistics in addition to, or in lieu of, the second-order statistics is indicated, particularly if the phase of the linear system is crucial. In either case, one typically fits a model H(q; θ^(M)) by estimating the M unknown parameters through optimization of some cost function. In practice, the model order M is unknown, and its choice has a significant impact on the quality of the fitted model. Order selection is therefore another aspect of the model-fitting problem considered in this section.

• Having fitted a model H(q; \hat{\theta}^{(M)}), one would also like to know how good the estimated parameters are. Typically this is expressed in terms of error bounds or confidence intervals on the fitted parameters and on the corresponding model transfer function.

• Having fitted a model, a final step is that of model falsification: is the fitted model an appropriate representation of the underlying system? This is referred to variously as model validation, model verification, or model diagnostics.

• Finally, various models of univariate noise pdfs (probability density functions) are discussed to complete the treatment of model fitting.

FIGURE 16.1: Section outline (SOS — second-order statistics; HOS — higher-order statistics).

16.2 Gaussianity, Linearity, and Stationarity Tests

Given a zero-mean, stationary random sequence {x(t)}, its third-order cumulant function C_xxx(i, k) is given by [12]

    C_{xxx}(i, k) := E\{x(t+i)\, x(t+k)\, x(t)\}.        (16.6)

Its bispectrum B_xxx(ω_1, ω_2) is defined as [12]

    B_{xxx}(\omega_1, \omega_2) = \sum_{i=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} C_{xxx}(i, k)\, e^{-j(\omega_1 i + \omega_2 k)}.        (16.7)

Similarly, its fourth-order cumulant function C_xxxx(i, k, l) is given by [12]

    C_{xxxx}(i, k, l) := E\{x(t) x(t+i) x(t+k) x(t+l)\} - E\{x(t) x(t+i)\} E\{x(t+k) x(t+l)\}
                         - E\{x(t) x(t+k)\} E\{x(t+l) x(t+i)\} - E\{x(t) x(t+l)\} E\{x(t+k) x(t+i)\}.        (16.8)

Its trispectrum is defined as [12]

    T_{xxxx}(\omega_1, \omega_2, \omega_3) := \sum_{i=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \sum_{l=-\infty}^{\infty} C_{xxxx}(i, k, l)\, e^{-j(\omega_1 i + \omega_2 k + \omega_3 l)}.        (16.9)

If {x(t)} obeys (16.1), then [12]

    B_{xxx}(\omega_1, \omega_2) = \gamma_3\, H(e^{j\omega_1}) H(e^{j\omega_2}) H^{*}(e^{j(\omega_1 + \omega_2)})        (16.10)

and

    T_{xxxx}(\omega_1, \omega_2, \omega_3) = \gamma_4\, H(e^{j\omega_1}) H(e^{j\omega_2}) H(e^{j\omega_3}) H^{*}(e^{j(\omega_1 + \omega_2 + \omega_3)})        (16.11)

where

    \gamma_3 = C_{xxx}(0, 0) \quad \text{and} \quad \gamma_4 = C_{xxxx}(0, 0, 0).        (16.12)

For Gaussian processes, B_xxx(ω_1, ω_2) ≡ 0 and T_xxxx(ω_1, ω_2, ω_3) ≡ 0; equivalently, C_xxx(i, k) ≡ 0 and C_xxxx(i, k, l) ≡ 0. This forms a basis for testing the Gaussianity of a given measurement record. When {x(t)} is linear (i.e., it obeys (16.1)), then using (16.5) and (16.10),

    \frac{|B_{xxx}(\omega_1, \omega_2)|^2}{S_{xx}(\omega_1)\, S_{xx}(\omega_2)\, S_{xx}(\omega_1 + \omega_2)} = \frac{\gamma_3^2}{\sigma^6} = \text{constant} \quad \forall\, \omega_1, \omega_2,        (16.13)

and using (16.5) and (16.11),

    \frac{|T_{xxxx}(\omega_1, \omega_2, \omega_3)|^2}{S_{xx}(\omega_1)\, S_{xx}(\omega_2)\, S_{xx}(\omega_3)\, S_{xx}(\omega_1 + \omega_2 + \omega_3)} = \frac{\gamma_4^2}{\sigma^8} = \text{constant} \quad \forall\, \omega_1, \omega_2, \omega_3.        (16.14)

The above two relations form a basis for testing the linearity of a given measurement record. How the tests are implemented depends upon the statistics of the estimators of the higher-order cumulant spectra, as well as those of the power spectrum, of the given record.

16.2.1 Gaussianity Tests

Suppose that the given zero-mean measurement record is of length N, denoted {x(t), t = 1, 2, ..., N}. Suppose that the record is divided into K nonoverlapping segments, each of size N_B samples, so that N = K N_B. Let X^{(i)}(ω) denote the discrete Fourier transform (DFT) of the ith block {x(t + (i-1)N_B), 1 ≤ t ≤ N_B} (i = 1, 2, ..., K), given by

    X^{(i)}(\omega_m) = \sum_{l=0}^{N_B - 1} x(l + 1 + (i-1)N_B)\, e^{-j\omega_m l}        (16.15)

where

    \omega_m = \frac{2\pi}{N_B} m, \quad m = 0, 1, \dots, N_B - 1.        (16.16)

Denote the estimate of the bispectrum B_xxx(ω_m, ω_n) at the bifrequency (ω_m = 2πm/N_B, ω_n = 2πn/N_B) as \hat{B}_{xxx}(m, n), obtained by averaging over the K blocks:

    \hat{B}_{xxx}(m, n) = \frac{1}{K} \sum_{i=1}^{K} \frac{1}{N_B}\, X^{(i)}(\omega_m)\, X^{(i)}(\omega_n) \left[ X^{(i)}(\omega_m + \omega_n) \right]^{*},        (16.17)

where X* denotes the complex conjugate of X. A principal domain of \hat{B}_{xxx}(m, n) is the triangular grid

    D = \left\{ (m, n) \;\middle|\; 0 \le m \le \tfrac{N_B}{2},\; 0 \le n \le m,\; 2m + n \le N_B \right\}.        (16.18)

Values of \hat{B}_{xxx}(m, n) outside D can be inferred from those in D.

FIGURE 16.2: Coarse and fine grids in the principal domain.

Select a coarse frequency grid (m̄, n̄) in the principal domain D as follows. Let d denote the distance between two adjacent coarse frequency pairs, with d = 2r + 1 for a positive integer r. Set n_0 = 2 + r and n̄ = n_0, n_0 + d, ..., n_0 + (L_n - 1)d, where L_n = ⌊(N_B/3 - 1)/d⌋. For a given n̄, set m_{0,n} = (N_B - n̄)/2 - r and m̄ = m_{0,n}, m_{0,n} - d, ..., m_{0,n} - (L_{m,n} - 1)d, where L_{m,n} = ⌊(m_{0,n} - (n̄ + r + 1))/d⌋ + 1. Let P denote the number of points on the coarse frequency grid as defined above, so that P = \sum_{n=1}^{L_n} L_{m,n}. Given a coarse point (m̄, n̄), select a fine grid (m̄, n_{nk}) and (m_{mi}, n_{nk}) consisting of

    m_{mi} = \bar{m} + i, \;\; |i| \le r, \qquad n_{nk} = \bar{n} + k, \;\; |k| \le r,        (16.19)

for some integer r > 0 such that (2r + 1)^2 > P; see also Fig. 16.2. Order the L (= (2r + 1)^2) estimates \hat{B}_{xxx}(m_{mi}, n_{nk}) on the fine grid around the bifrequency pair (m̄, n̄) into an L-vector which, after relabeling, may be denoted ν_{ml}, l = 1, 2, ..., L, m = 1, 2, ..., P, where m indexes the coarse grid and l indexes the fine grid. Define the P-vectors

    v_i = (\nu_{1i}, \nu_{2i}, \dots, \nu_{Pi})^T \quad (i = 1, 2, \dots, L).        (16.20)

Consider the estimates

    \hat{M} = \frac{1}{L} \sum_{i=1}^{L} v_i \quad \text{and} \quad \hat{\Sigma} = \frac{1}{L} \sum_{i=1}^{L} \left( v_i - \hat{M} \right) \left( v_i - \hat{M} \right)^H.        (16.21)

Define

    F_G = \frac{2(L - P)}{2P}\, \hat{M}^H \hat{\Sigma}^{-1} \hat{M}.        (16.22)

If {x(t)} is Gaussian, then F_G is distributed as a central F (Fisher) random variable with (2P, 2(L - P)) degrees of freedom.
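The block-averaged estimator (16.15)–(16.17) can be sketched as follows. This is a minimal illustration, not the chapter's implementation: the function name is made up, `numpy.fft.fft` supplies the per-block DFTs, and the estimate is computed on the full (m, n) grid rather than restricted to the principal domain D of (16.18).

```python
import numpy as np

def bispectrum_estimate(x, nb):
    """Average X(m) X(n) X*(m+n) over K = len(x)//nb blocks, cf. (16.17).

    Returns an (nb, nb) complex array whose (m, n) entry estimates
    B_xxx(2*pi*m/nb, 2*pi*n/nb); the index (m + n) is taken modulo nb,
    using the periodicity of the DFT."""
    K = len(x) // nb
    idx = (np.arange(nb)[:, None] + np.arange(nb)[None, :]) % nb
    B = np.zeros((nb, nb), dtype=complex)
    for i in range(K):
        X = np.fft.fft(x[i * nb:(i + 1) * nb])   # X^(i)(omega_m), cf. (16.15)
        B += np.outer(X, X) * np.conj(X[idx])
    return B / (K * nb)
```

By construction the estimate is symmetric in (m, n), which is one of the redundancies that allows values outside the principal domain D to be inferred from those inside it.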
A statistical test for the Gaussianity of {x(t)} is to declare it a non-Gaussian sequence if F_G > T_α, where T_α is selected to achieve a fixed probability of false alarm α (= Pr{F_G > T_α} with F_G distributed as a central F with (2P, 2(L - P)) degrees of freedom). If F_G ≤ T_α, then either {x(t)} is Gaussian or it has zero bispectrum.

The above test is patterned after [3]. It treats the bispectral estimates on the "fine" bifrequency grid as a "data set" from a multivariable Gaussian distribution with unknown covariance matrix. Hinich [4] has simplified the test of [3] by using the known asymptotic expression for the covariance matrix involved; his test is based upon χ² distributions. Notice that F_G ≤ T_α does not necessarily imply that {x(t)} is Gaussian; it may result from the fact that {x(t)} is non-Gaussian with zero bispectrum. Therefore, a next logical step would be to test for a vanishing trispectrum of the record. This has been done in [14] using the approach of [4]; extensions of [3] are too complicated. Computationally simpler tests using the "integrated polyspectrum" of the data have been proposed in [6]. The integrated polyspectrum (bispectrum or trispectrum) is computed as a cross-power spectrum, and it is zero for Gaussian processes. Alternatively, one may test whether C_xxx(i, k) ≡ 0 and C_xxxx(i, k, l) ≡ 0; this has been done in [8]. Other tests that do not rely on higher-order cumulant spectra of the record may be found in [13].

16.2.2 Linearity Tests

Denote the estimate of the power spectral density S_xx(ω_m) of {x(t)} at frequency ω_m = 2πm/N_B as \hat{S}_{xx}(m), given by

    \hat{S}_{xx}(m) = \frac{1}{K} \sum_{i=1}^{K} \frac{1}{N_B}\, X^{(i)}(\omega_m) \left[ X^{(i)}(\omega_m) \right]^{*}.        (16.23)

Consider

    \hat{\gamma}_x(m, n) = \frac{|\hat{B}_{xxx}(m, n)|^2}{\hat{S}_{xx}(m)\, \hat{S}_{xx}(n)\, \hat{S}_{xx}(m + n)}.        (16.24)

It turns out that \hat{\gamma}_x(m, n) is a consistent estimator of the left side of (16.13), and it is asymptotically distributed as a Gaussian random variable, independent at distinct bifrequencies in the interior of D.
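The spectrum estimate (16.23) and the squared-bicoherence statistic (16.24) can be sketched as below. This is a minimal illustration under stated assumptions: the function names are made up, the statistic is evaluated on the full (m, n) grid rather than only inside D, and the bispectrum estimate B is assumed precomputed (an nb-by-nb complex array such as (16.17) produces).

```python
import numpy as np

def power_spectrum_estimate(x, nb):
    """Block-averaged power spectrum estimate, cf. (16.23)."""
    K = len(x) // nb
    S = np.zeros(nb)
    for i in range(K):
        X = np.fft.fft(x[i * nb:(i + 1) * nb])
        S += np.abs(X) ** 2
    return S / (K * nb)

def gamma_hat(B, S):
    """Squared-bicoherence statistic of (16.24) on the full (m, n) grid,
    given a bispectrum estimate B (nb x nb) and a spectrum estimate S (nb)."""
    nb = len(S)
    idx = (np.arange(nb)[:, None] + np.arange(nb)[None, :]) % nb
    return np.abs(B) ** 2 / (np.outer(S, S) * S[idx])
```

Under linearity, (16.13) says the values of this statistic should be roughly flat over the interior of the principal domain; nonconstancy is the evidence the linearity test looks for.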
These properties have been used by Subba Rao and Gabr [3] to design a test of linearity. Construct a coarse grid and a fine grid of bifrequencies in D as before. Order the L estimates \hat{\gamma}_x(m_{mi}, n_{nk}) on the fine grid around the bifrequency pair (m̄, n̄) into an L-vector which, after relabeling, may be denoted β_{ml}, l = 1, 2, ..., L, m = 1, 2, ..., P, where m indexes the coarse grid and l indexes the fine grid. Define the P-vectors

    b_i = (\beta_{1i}, \beta_{2i}, \dots, \beta_{Pi})^T \quad (i = 1, 2, \dots, L).        (16.25)

Consider the estimates

    \hat{M} = \frac{1}{L} \sum_{i=1}^{L} b_i \quad \text{and} \quad \hat{\Sigma} = \frac{1}{L} \sum_{i=1}^{L} \left( b_i - \hat{M} \right) \left( b_i - \hat{M} \right)^T.        (16.26)

Define a (P - 1) × P matrix B whose ijth element B_ij is given by B_ij = 1 if i = j; B_ij = -1 if j = i + 1; and B_ij = 0 otherwise. Define

    F_L = \frac{L - P + 1}{P - 1}\, (B\hat{M})^T \left( B \hat{\Sigma} B^T \right)^{-1} B\hat{M}.        (16.27)

If {x(t)} is linear, then F_L is distributed as a central F with (P - 1, L - P + 1) degrees of freedom. A statistical test for the linearity of {x(t)} is to declare it a nonlinear sequence if F_L > T_α, where T_α is selected to achieve a fixed probability of false alarm α (= Pr{F_L > T_α} with F_L distributed as a central F with (P - 1, L - P + 1) degrees of freedom). If F_L ≤ T_α, then either {x(t)} is linear or it has zero bispectrum.

The above test is patterned after [3]; Hinich [4] has simplified it. Notice that F_L ≤ T_α does not necessarily imply that {x(t)} is linear; it may result from the fact that {x(t)} is non-Gaussian with zero bispectrum. Therefore, a next logical step would be to test whether (16.14) holds true. This has been done in [14] using the approach of [4]; extensions of [3] are too complicated. The approaches of [3] and [4] will fail if the data are noisy; a modification of [3] for the case of additive Gaussian noise is presented in [7]. Finally, other tests that do not rely on higher-order cumulant spectra of the record may be found in [13].
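Both decision rules above reduce to comparing a statistic against an upper quantile of a central F distribution. A minimal sketch of the threshold computation, assuming `scipy.stats` for the quantile function; the grid sizes P and L and the false-alarm rate α are illustrative values, not prescriptions from the chapter:

```python
from scipy.stats import f as f_dist

alpha = 0.05          # illustrative false-alarm probability
P, L = 12, 49         # illustrative grid sizes, with L = (2r + 1)^2 > P

# Gaussianity test (16.22): F_G ~ F(2P, 2(L - P)) under the Gaussian hypothesis
T_gauss = f_dist.ppf(1.0 - alpha, 2 * P, 2 * (L - P))

# Linearity test (16.27): F_L ~ F(P - 1, L - P + 1) under the linear hypothesis
T_lin = f_dist.ppf(1.0 - alpha, P - 1, L - P + 1)

# Declare non-Gaussian if F_G > T_gauss; declare nonlinear if F_L > T_lin.
```

Because the thresholds are null-distribution quantiles, α directly controls the probability of falsely rejecting a Gaussian (respectively, linear) record.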
16.2.3 Stationarity Tests

Various methods exist for testing whether a given measurement record may be regarded as a sample sequence of a stationary random sequence. A crude yet effective way to test for stationarity is to divide the record into several (at least two) nonoverlapping segments and then test for the equivalency (or compatibility) of certain statistical properties (mean, mean-square value, power spectrum, etc.) computed from these segments. More sophisticated tests that do not require a priori segmentation of the record are also available.

Consider a record of length N divided into two nonoverlapping segments, each of length N/2. Let K N_B = N/2 and use estimators such as (16.23) to obtain the estimate \hat{S}^{(l)}_{xx}(m) of the power spectrum S^{(l)}_{xx}(\omega_m) of the lth segment (l = 1, 2), where ω_m is given by (16.16). Consider the test statistic

    Y = \left[ \frac{2}{N_B - 2} \cdot \frac{K}{2} \right]^{1/2} \sum_{m=1}^{N_B/2 - 1} \left[ \ln \hat{S}^{(1)}_{xx}(m) - \ln \hat{S}^{(2)}_{xx}(m) \right].        (16.28)

Then, asymptotically, Y is distributed as a zero-mean, unit-variance Gaussian random variable if {x(t)} is stationary. Therefore, {x(t)} is declared nonstationary if |Y| > T_α, where the threshold T_α is chosen to achieve a false-alarm probability of α (= Pr{|Y| > T_α} with Y distributed as zero-mean, unit-variance Gaussian). If |Y| ≤ T_α, then {x(t)} is declared stationary. Similar tests based upon higher-order cumulant spectra can also be devised.

The above test is patterned after [10]. More sophisticated tests involving two-model comparisons as above, but without prior segmentation of the record, are available in [11] and the references therein. A test utilizing the evolutionary power spectrum may be found in [9].

16.3 Order Selection, Model Validation, and Confidence Intervals

As noted earlier, one typically fits a model H(q; θ^(M)) to the given data by estimating the M unknown parameters through optimization of some cost function. A fundamental difficulty here is the choice of M.
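The two-segment stationarity test of Section 16.2.3 can be sketched as follows. This is a minimal illustration, not the chapter's implementation: the function name is made up and the block-averaged periodograms stand in for the estimator (16.23).

```python
import numpy as np

def stationarity_Y(x, nb):
    """Y statistic of (16.28): compare log-spectra of the two half-records.

    Each half of the record is split into K blocks of nb samples whose
    periodograms are averaged; Y is asymptotically N(0, 1) under stationarity."""
    n = len(x) // 2
    K = n // nb

    def spec(seg):
        S = np.zeros(nb)
        for i in range(K):
            S += np.abs(np.fft.fft(seg[i * nb:(i + 1) * nb])) ** 2
        return S / (K * nb)

    S1, S2 = spec(x[:n]), spec(x[n:2 * n])
    m = np.arange(1, nb // 2)                     # interior frequencies only
    d = np.log(S1[m]) - np.log(S2[m])
    return np.sqrt(K / (nb - 2.0)) * np.sum(d)    # sqrt((2/(nb-2)) * (K/2))
```

The record is declared nonstationary when |Y| exceeds a standard Gaussian quantile, e.g. 1.96 for α = 0.05.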
There are two basic philosophical approaches to this problem: one consists of an iterative process of model fitting and diagnostic checking (model validation), while the other takes the more "objective" approach of optimizing a cost with respect to M (in addition to θ^(M)).

16.3.1 Order Selection

Let f_{θ^(M)}(X) denote the probability density function of X = [x(1), x(2), ..., x(N)]^T, parameterized by the parameter vector θ^(M) of dimension M. A popular approach to model order selection in the context of linear Gaussian models is to compute the Akaike information criterion (AIC)

    AIC(M) = -2 \ln f_{\hat{\theta}^{(M)}}(X) + 2M        (16.29)

where \hat{\theta}^{(M)} maximizes f_{θ^(M)}(X) given the measurement record X. Let M̄ denote an upper bound on the true model order. Then the minimum AIC estimate (MAICE), i.e., the selected model order, is given by the minimizer of AIC(M) over M = 1, 2, ..., M̄. Clearly, one needs to solve the problem of maximizing ln f_{θ^(M)}(X) with respect to θ^(M) for each value of M = 1, 2, ..., M̄. The second term on the right side of (16.29) penalizes overparameterization. Rissanen's minimum description length (MDL) criterion is given by

    MDL(M) = -2 \ln f_{\hat{\theta}^{(M)}}(X) + M \ln N.        (16.30)

It is known that if {x(t)} is a Gaussian AR model, then AIC is an inconsistent estimator of the model order whereas MDL is consistent; i.e., MDL picks the correct model order with probability one as the data length tends to infinity, whereas there is a nonzero probability that AIC will not. Several other variations of these criteria exist [15]. Although the derivation of these order selection criteria is based upon the Gaussian distribution, they have frequently been used with success for non-Gaussian processes, provided attention is confined to the use of second-order statistics of the data. They may fail if one fits models using higher-order statistics.

16.3.2 Model Validation

Model validation involves testing to see whether the fitted model is an appropriate representation of the underlying (true) system.
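The criteria (16.29) and (16.30) can be illustrated for Gaussian AR models, where the maximized log-likelihood reduces, up to M-independent constants, to -(N/2) ln σ̂²_M with σ̂²_M the residual variance of a least-squares AR(M) fit. This is a sketch under those assumptions, with made-up helper names, not the chapter's procedure:

```python
import numpy as np

def ar_residual_var(x, M):
    """Least-squares AR(M) fit; returns the residual variance sigma^2_M."""
    N = len(x)
    # Row t predicts x(t) from x(t-1), ..., x(t-M), for t = M, ..., N-1
    A = np.column_stack([x[M - i - 1:N - i - 1] for i in range(M)])
    a, *_ = np.linalg.lstsq(A, x[M:], rcond=None)
    return float(np.mean((x[M:] - A @ a) ** 2))

def maice_mdl(x, M_max):
    """Minimum-AIC (16.29) and minimum-MDL (16.30) order estimates."""
    N = len(x)
    s2 = [ar_residual_var(x, M) for M in range(1, M_max + 1)]
    aic = [N * np.log(v) + 2 * (M + 1) for M, v in enumerate(s2)]
    mdl = [N * np.log(v) + (M + 1) * np.log(N) for M, v in enumerate(s2)]
    return 1 + int(np.argmin(aic)), 1 + int(np.argmin(mdl))
```

Since ln N > 2 for any realistic record length, MDL's per-parameter penalty is heavier, so the MDL-selected order never exceeds the MAICE order.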
It involves devising appropriate statistical tools to test the validity of the assumptions made in obtaining the fitted model. It is also known as model falsification, model verification, or diagnostic checking, and it can also be used as a tool for model order selection. It is an essential part of any model-fitting methodology.

Suppose that {x(t)} obeys (16.1) and that the fitted model corresponding to the estimated parameter vector \hat{\theta}^{(M)} is H(q; \hat{\theta}^{(M)}). Assuming that the true model H(q) is invertible, in the ideal case one should get ε(t) = H^{-1}(q) x(t), where {ε(t)} is zero-mean, i.i.d. (or at least white when using second-order statistics). Hence, if the fitted model H(q; \hat{\theta}^{(M)}) is a valid description of the underlying true system, one expects \hat{\epsilon}(t) = H^{-1}(q; \hat{\theta}^{(M)})\, x(t) to be zero-mean, i.i.d. One of the diagnostic checks, then, is to test for whiteness or independence of the inverse-filtered data (the residuals, or linear innovations, in case second-order statistics are used). If the fitted model is unable to "adequately" capture the underlying true system, one expects {\hat{\epsilon}(t)} to deviate from an i.i.d. sequence. This is one of the most widely used and useful diagnostic checks for model validation.

A test for second-order whiteness of {\hat{\epsilon}(t)} is as follows [15]. Construct the estimates of the covariance function

    \hat{r}_{\epsilon}(\tau) = N^{-1} \sum_{t=1}^{N - \tau} \hat{\epsilon}(t + \tau)\, \hat{\epsilon}(t) \quad (\tau \ge 0).        (16.31)

Consider the test statistic

    R = \frac{N}{\hat{r}_{\epsilon}^2(0)} \sum_{i=1}^{m} \hat{r}_{\epsilon}^2(i)        (16.32)

where m is some a priori choice of the maximum lag for whiteness testing. If {\hat{\epsilon}(t)} is zero-mean white, then R is distributed as χ²(m) (χ² with m degrees of freedom). A statistical test for the whiteness of {\hat{\epsilon}(t)} is to declare it a nonwhite sequence (and hence invalidate the model) if R > T_α, where T_α is selected to achieve a fixed probability of false alarm α (= Pr{R > T_α} with R distributed as χ²(m)). If R ≤ T_α, then {\hat{\epsilon}(t)} is second-order white, and hence the model is validated.
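The portmanteau check (16.31)–(16.32) can be sketched directly. A minimal illustration, assuming `scipy.stats` for the χ² quantile; the function name and the lag budget m = 10 are illustrative:

```python
import numpy as np
from scipy.stats import chi2

def whiteness_R(e, m):
    """R statistic of (16.31)-(16.32) for a residual sequence e."""
    N = len(e)
    # r[tau] = (1/N) * sum_t e(t + tau) e(t), tau = 0, ..., m, cf. (16.31)
    r = np.array([e[tau:] @ e[:N - tau] / N for tau in range(m + 1)])
    return N * np.sum(r[1:] ** 2) / r[0] ** 2

m = 10
T = chi2.ppf(0.95, m)   # invalidate the model at alpha = 0.05 if R > T
```

Strongly correlated residuals, e.g. an alternating ±1 sequence, drive R far above the χ²(m) quantile, while residuals from an adequate fit should fall below it about 100(1 - α)% of the time.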
The above procedure tests only for second-order whiteness. In order to test for higher-order whiteness, one needs to examine either the higher-order cumulant functions or the higher-order cumulant spectra (or the integrated polyspectra) of the inverse-filtered data. A statistical test using the bispectrum is available in [5]; it is particularly useful if the model fitting is carried out using higher-order statistics. If {\hat{\epsilon}(t)} is third-order white, then its bispectrum is constant for all bifrequencies. Let \hat{B}_{\epsilon}(m, n) denote the estimate of the bispectrum B_{\epsilon}(\omega_m, \omega_n), mimicking (16.17). Construct a coarse grid and a fine grid of bifrequencies in D as before. Order the L estimates \hat{B}_{\epsilon}(m_{mi}, n_{nk}) on the fine grid around the bifrequency pair (m̄, n̄) into an L-vector which, after relabeling, may be denoted μ_{ml}, l = 1, 2, ..., L, m = 1, 2, ..., P, where m indexes the coarse grid and l indexes the fine grid. Define the P-vectors

    u_i = (\mu_{1i}, \mu_{2i}, \dots, \mu_{Pi})^T \quad (i = 1, 2, \dots, L).        (16.33)

Consider the estimates

    \hat{M} = \frac{1}{L} \sum_{i=1}^{L} u_i \quad \text{and} \quad \hat{\Sigma} = \frac{1}{L} \sum_{i=1}^{L} \left( u_i - \hat{M} \right) \left( u_i - \hat{M} \right)^H.        (16.34)

Define a (P - 1) × P matrix B whose ijth element B_ij is given by B_ij = 1 if i = j; B_ij = -1 if j = i + 1; and B_ij = 0 otherwise. Define

    F_W = \frac{2(L - P + 1)}{2P - 2}\, (B\hat{M})^H \left( B \hat{\Sigma} B^T \right)^{-1} B\hat{M}.        (16.35)

If {\hat{\epsilon}(t)} is third-order white, then F_W is distributed as a central F with (2P - 2, 2(L - P + 1)) degrees of freedom. A statistical test for third-order whiteness of {\hat{\epsilon}(t)} is to declare it a nonwhite sequence if F_W > T_α, where T_α is selected to achieve a fixed probability of false alarm α (= Pr{F_W > T_α} with F_W distributed as a central F with (2P - 2, 2(L - P + 1)) degrees of freedom). If F_W ≤ T_α, then either {\hat{\epsilon}(t)} is third-order white or it has zero bispectrum.

The above model validation test can be used for model order selection. Fix an upper bound on the model order. For every admissible model order, fit a linear model and test its validity.
From among the validated models, select the "smallest" order as the correct order. It is easy to see that this procedure will work only so long as the various candidate orders are nested. Further details may be found in [5] and [15].

16.3.3 Confidence Intervals

Having settled upon a model order estimate M, let \hat{\theta}^{(M)}_N be the parameter estimator obtained by minimizing a cost function V_N(θ^(M)), given a record of length N, such that V_∞(θ) := lim_{N→∞} V_N(θ) exists. For instance, using the notation of the section on order selection, one may take V_N(θ^(M)) = -N^{-1} ln f_{θ^(M)}(X). How reliable are these estimates? An assessment of this is provided by confidence intervals. Under some general technical conditions, it usually follows that asymptotically (i.e., for large N), \sqrt{N}\,(\hat{\theta}^{(M)}_N - \theta_0) is distributed as a Gaussian random vector with zero mean and covariance matrix P, where θ_0 denotes the true value of θ^(M). A general expression for P is given by [15]

    P = \left[ V''_{\infty}(\theta_0) \right]^{-1} P_{\infty} \left[ V''_{\infty}(\theta_0) \right]^{-1}        (16.36)

where

    P_{\infty} = \lim_{N \to \infty} E\left\{ N\, \left[ V'_N(\theta_0) \right]^T V'_N(\theta_0) \right\}        (16.37)

and V' (a row vector) and V'' (a square matrix) denote the gradient and the Hessian, respectively, of V. The above result can be used to evaluate the reliability of the parameter estimator. It follows from the above results that

    \eta_N = N \left( \hat{\theta}^{(M)}_N - \theta_0 \right)^T P^{-1} \left( \hat{\theta}^{(M)}_N - \theta_0 \right)        (16.38)

is asymptotically χ²(M). Define χ²_α(M) via Pr{y > χ²_α(M)} = α, where y is distributed as χ²(M). For instance, χ²_{0.05}(4) = 9.49, so that Pr{η_N > 9.49} = 0.05 when M = 4. The ellipsoid η_N ≤ χ²_α(M) then defines [...]

[...] easy; it requires knowledge of θ_0. Typically, one replaces θ_0 with \hat{\theta}^{(M)}_N. If a closed-form expression for P is not available, it may be approximated by a sample average [16].

16.4 Noise Modeling

As for signal models, Gaussian modeling of noise processes has long been dominant. Typically the central limit theorem is invoked to justify this assumption; thermal noise is indeed Gaussian. Another reason is [...]

References

[...]
[5] Tugnait, J.K., ... higher-order statistics, IEEE Trans. Signal Process., SP-42: 1728-1736, July 1994.
[6] Tugnait, J.K., Detection of non-Gaussian signals using integrated polyspectrum, IEEE Trans. Signal Process., SP-42: 3137-3149, Nov. 1994. (Corrections in IEEE Trans. Signal Process., SP-43, Nov. 1995.)
[7] Tugnait, J.K., Testing for linearity of noisy stationary signals, IEEE Trans. Signal Process., SP-42: 2742-2748, Oct. 1994.
[8] Giannakis, G.B. and Tstatsanis, M.K., Time-domain tests for Gaussianity and time-reversibility, IEEE Trans. Signal Process., SP-42: 3460-3472, Dec. 1994.
[9] Priestley, M.B., Nonlinear and Nonstationary Time Series Analysis, Academic Press, New York, 1988.
[10] Jenkins, G.M., General considerations in the estimation ...
[...]
[16] Ljung, L., System Identification: Theory for the User, Prentice-Hall, Englewood Cliffs, NJ, 1987.
[17] Kassam, S.A., Signal Detection in Non-Gaussian Noise, Springer-Verlag, New York, 1988.
[18] Shao, M. and Nikias, C.L., Signal processing with fractional lower order moments: stable processes and their applications, Proc. IEEE, 81: 986-1010, July 1993.