
9 ICA by Maximum Likelihood Estimation

(Independent Component Analysis. Aapo Hyvärinen, Juha Karhunen, Erkki Oja. Copyright 2001 John Wiley & Sons, Inc. ISBNs: 0-471-40540-X (Hardback); 0-471-22131-7 (Electronic).)

A very popular approach for estimating the independent component analysis (ICA) model is maximum likelihood (ML) estimation. Maximum likelihood estimation is a fundamental method of statistical estimation; a short introduction was provided in Section 4.5. One interpretation of ML estimation is that we take as estimates those parameter values that give the highest probability to the observations. In this chapter, we show how to apply ML estimation to ICA. We also show its close connection to the neural network principle of maximization of information flow (infomax).

9.1 THE LIKELIHOOD OF THE ICA MODEL

9.1.1 Deriving the likelihood

It is not difficult to derive the likelihood in the noise-free ICA model. This is based on the well-known result on the density of a linear transform, given in (2.82). According to this result, the density $p_x$ of the mixture vector

$$x = As \qquad (9.1)$$

can be formulated as

$$p_x(x) = |\det B|\, p_s(s) = |\det B| \prod_i p_i(s_i) \qquad (9.2)$$

where $B = A^{-1}$, and the $p_i$ denote the densities of the independent components. This can be expressed as a function of $B = (b_1, \ldots, b_n)^T$ and $x$, giving

$$p_x(x) = |\det B| \prod_i p_i(b_i^T x) \qquad (9.3)$$

Assume that we have $T$ observations of $x$, denoted by $x(1), x(2), \ldots, x(T)$. Then the likelihood can be obtained (see Section 4.5) as the product of this density evaluated at the $T$ points. This is denoted by $L$ and considered as a function of $B$:

$$L(B) = \prod_{t=1}^{T} \prod_{i=1}^{n} p_i(b_i^T x(t))\, |\det B| \qquad (9.4)$$

Very often it is more practical to use the logarithm of the likelihood, since it is algebraically simpler. This makes no difference here, since the maximum of the logarithm is attained at the same point as the maximum of the likelihood. The log-likelihood is given by

$$\log L(B) = \sum_{t=1}^{T} \sum_{i=1}^{n} \log p_i(b_i^T x(t)) + T \log |\det B| \qquad (9.5)$$

The base of the logarithm makes no difference, though in the following the natural logarithm is used. To simplify notation, and to make it consistent with what was used in the previous chapter, we can denote the sum over the sample index $t$ by an expectation operator, and divide the likelihood by $T$ to obtain

$$\frac{1}{T} \log L(B) = E\Big\{ \sum_{i=1}^{n} \log p_i(b_i^T x) \Big\} + \log |\det B| \qquad (9.6)$$

The expectation here is not the theoretical expectation, but an average computed from the observed sample. Of course, in the algorithms the expectations are eventually replaced by sample averages, so the distinction is purely theoretical.
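For concreteness, the normalized log-likelihood (9.6) can be evaluated directly from a sample. The NumPy sketch below is illustrative only and is not code from the book; the function name, the data layout (one observation per column), and the default log cosh log-density (essentially (9.9) with its normalizing constant dropped) are assumptions made here.

```python
import numpy as np

def ica_log_likelihood(B, X, log_p=None):
    """Sample version of (1/T) log L(B) in Eq. (9.6).

    B : (n, n) candidate inverse of the mixing matrix.
    X : (n, T) data matrix, one observation x(t) per column.
    log_p : assumed log-density log p_i(s), applied component-wise;
            defaults to a log cosh model as in (9.9), whose additive
            normalizing constant only shifts the value.
    """
    if log_p is None:
        log_p = lambda s: -2.0 * np.log(np.cosh(s))
    Y = B @ X                                      # rows are b_i^T x(t)
    data_term = np.mean(np.sum(log_p(Y), axis=0))  # E{ sum_i log p_i(b_i^T x) }
    return data_term + np.log(abs(np.linalg.det(B)))

# Toy usage with two mixed Laplacian (supergaussian) sources:
rng = np.random.default_rng(0)
S = rng.laplace(size=(2, 10_000))
A = np.array([[1.0, 0.5], [0.3, 1.0]])
X = A @ S
print(ica_log_likelihood(np.linalg.inv(A), X))  # true unmixing matrix
print(ica_log_likelihood(np.eye(2), X))         # typically a lower value
```

The algorithms of Section 9.2 are different ways of maximizing this quantity with respect to $B$.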
9.1.2 Estimation of the densities

Problem of semiparametric estimation

In the preceding, we have expressed the likelihood as a function of the parameters of the model, which are the elements of the mixing matrix. For simplicity, we used the elements of the inverse $B$ of the mixing matrix. This is allowed since the mixing matrix can be directly computed from its inverse.

There is another thing to estimate in the ICA model, though: the densities of the independent components. Actually, the likelihood is a function of these densities as well. This makes the problem much more complicated, because the estimation of densities is, in general, a nonparametric problem. Nonparametric means that it cannot be reduced to the estimation of a finite parameter set. In fact, the number of parameters to be estimated is infinite, or in practice, very large. Thus the estimation of the ICA model also has a nonparametric part, which is why the estimation is sometimes called "semiparametric".

Nonparametric estimation of densities is known to be a difficult problem. Many parameters are always more difficult to estimate than just a few; since nonparametric problems have an infinite number of parameters, they are the most difficult to estimate. This is why we would like to avoid nonparametric density estimation in ICA. There are two ways to avoid it.

First, in some cases we might know the densities of the independent components in advance, using some prior knowledge on the data at hand. In this case, we could simply use these prior densities in the likelihood. Then the likelihood would really be a function of $B$ only. If reasonably small errors in the specification of these prior densities have little influence on the estimator, this procedure will give reasonable results. In fact, it will be shown below that this is the case.

A second way to solve the problem of density estimation is to approximate the densities of the independent components by a family of densities that are specified by a limited number of parameters. If the number of parameters in the density family needs to be very large, we do not gain much from this approach, since the goal was to reduce the number of parameters to be estimated. However, if it is possible to use a very simple family of densities to estimate the ICA model for any densities $p_i$, we will get a simple solution. Fortunately, this turns out to be the case. We can use an extremely simple parameterization of the $p_i$, consisting of the choice between two densities, i.e., a single binary parameter.

A simple density family

It turns out that in maximum likelihood estimation, it is enough to use just two approximations of the density of an independent component. For each independent component, we just need to determine which of the two approximations is better. This shows that, first, we can make small errors when we fix the densities of the independent components, since it is enough that we use a density that is in the same half of the space of probability densities. Second, it shows that we can estimate the independent components using very simple models of their densities, in particular, using models consisting of only two densities.

This situation can be compared with the one encountered in Section 8.3.4, where we saw that any nonlinearity can be seen to divide the space of probability distributions in half. When the distribution of an independent component is in one of the halves, the nonlinearity can be used in the gradient method to estimate that independent component. When the distribution is in the other half, the negative of the nonlinearity must be used in the gradient method. In the ML case, a nonlinearity corresponds to a density approximation.

The validity of these approaches is shown in the following theorem, whose proof can be found in the appendix. This theorem is basically a corollary of the stability theorem in Section 8.3.4.

Theorem 9.1  Denote by $\tilde{p}_i$ the assumed densities of the independent components, and

$$g_i(s_i) = \frac{\partial}{\partial s_i} \log \tilde{p}_i(s_i) = \frac{\tilde{p}_i'(s_i)}{\tilde{p}_i(s_i)} \qquad (9.7)$$

Constrain the estimates of the independent components $y_i = b_i^T x$ to be uncorrelated and to have unit variance.
Then the ML estimator is locally consistent, if the assumed densities $\tilde{p}_i$ fulfill

$$E\{ s_i g_i(s_i) - g_i'(s_i) \} > 0 \qquad (9.8)$$

for all $i$.

This theorem shows rigorously that small misspecifications in the densities $p_i$ do not affect the local consistency of the ML estimator, since sufficiently small changes do not change the sign in (9.8). Moreover, the theorem shows how to construct families consisting of only two densities, so that the condition in (9.8) is true for one of these densities. For example, consider the following log-densities:

$$\log \tilde{p}_i^{+}(s) = \alpha_1 - 2 \log \cosh(s) \qquad (9.9)$$

$$\log \tilde{p}_i^{-}(s) = \alpha_2 - [\, s^2/2 - \log \cosh(s) \,] \qquad (9.10)$$

where $\alpha_1, \alpha_2$ are positive parameters that are fixed so as to make these two functions logarithms of probability densities. Actually, these constants can be ignored in the following. The factor 2 in (9.9) is not important, but it is usually used here; also, the factor 1/2 in (9.10) could be changed.

The motivation for these functions is that $\tilde{p}_i^{+}$ is a supergaussian density, because the log cosh function is close to the absolute value that would give the Laplacian density. The density given by $\tilde{p}_i^{-}$ is subgaussian, because it is like a gaussian log-density, $-s^2/2$ plus a constant, that has been somewhat "flattened" by the log cosh function.

Simple computations show that the value of the nonpolynomial moment in (9.8) is, for $\tilde{p}_i^{+}$,

$$2 E\{ -\tanh(s_i) s_i + (1 - \tanh(s_i)^2) \} \qquad (9.11)$$

and for $\tilde{p}_i^{-}$ it is

$$E\{ \tanh(s_i) s_i - (1 - \tanh(s_i)^2) \} \qquad (9.12)$$

since the derivative of $\tanh(s)$ equals $1 - \tanh(s)^2$, and $E\{s_i^2\} = 1$ by definition. We see that the signs of these expressions are always opposite. Thus, for practically any distribution of the $s_i$, one of these functions fulfills the condition, i.e., has the desired sign, and estimation is possible. Of course, for some distribution of the $s_i$ the nonpolynomial moment in the condition could be zero, which corresponds to the case of zero kurtosis in cumulant-based estimation; such cases can be considered to be very rare.

Thus we can just compute the nonpolynomial moments for the two prior distributions in (9.9) and (9.10), and choose the one that fulfills the stability condition in (9.8). This can be done on-line during the maximization of the likelihood. This always provides a (locally) consistent estimator, and solves the problem of semiparametric estimation.

In fact, the nonpolynomial moment in question measures the shape of the density function in much the same way as kurtosis. For $g(s) = s^3$, we would actually obtain kurtosis. Thus, the choice of nonlinearity can be compared with the choice of whether to minimize or maximize kurtosis, as previously encountered in Section 8.2. That choice was based on the sign of kurtosis; here we use the sign of a nonpolynomial moment.

Indeed, the nonpolynomial moment of this chapter is the same as the one encountered in Section 8.3 when using more general measures of nongaussianity. However, it must be noted that the set of nonlinearities that we can use here is more restricted than those used in Chapter 8. This is because the nonlinearities $g_i$ used must correspond to the derivative of the logarithm of a probability density function (pdf). For example, we cannot use the function $g(s) = s^3$, because the corresponding pdf would be of the form $\exp(s^4/4)$, and this is not integrable, i.e., it is not a pdf at all.
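The opposite signs of (9.11) and (9.12) are easy to check numerically. The sketch below estimates the moment $E\{-\tanh(s)s + (1 - \tanh(s)^2)\}$ from samples of a supergaussian (Laplacian) and a subgaussian (uniform) distribution; the function name and the particular test distributions are illustrative assumptions, not part of the original text.

```python
import numpy as np

def nonpoly_moment(s):
    """Sample estimate of E{-tanh(s) s + (1 - tanh(s)^2)}, cf. (9.11) and (9.20).

    The sample is standardized first, since Theorem 9.1 assumes
    zero-mean, unit-variance estimates of the independent components.
    """
    s = (s - s.mean()) / s.std()
    return np.mean(-np.tanh(s) * s + (1.0 - np.tanh(s) ** 2))

rng = np.random.default_rng(0)
laplacian = rng.laplace(size=100_000)                    # supergaussian sample
uniform = rng.uniform(-np.sqrt(3), np.sqrt(3), 100_000)  # subgaussian sample

print(nonpoly_moment(laplacian))  # expected positive: pick the model (9.9)
print(nonpoly_moment(uniform))    # expected negative: pick the model (9.10)
```

A positive sign means the supergaussian model (9.9) satisfies the stability condition (9.8); a negative sign points to the subgaussian model (9.10).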
9.2 ALGORITHMS FOR MAXIMUM LIKELIHOOD ESTIMATION

To perform maximum likelihood estimation in practice, we need an algorithm to carry out the numerical maximization of the likelihood. In this section, we discuss different methods to this end. First, we show how to derive simple gradient algorithms, of which especially the natural gradient algorithm has been widely used. Then we show how to derive a fixed-point algorithm, a version of FastICA, that maximizes the likelihood faster and more reliably.

9.2.1 Gradient algorithms

The Bell-Sejnowski algorithm

The simplest algorithms for maximizing the likelihood are obtained by gradient methods. Using the well-known results in Chapter 3, one can easily derive the stochastic gradient of the log-likelihood in (9.6) as

$$\frac{1}{T} \frac{\partial \log L}{\partial B} = [B^T]^{-1} + E\{ g(Bx) x^T \} \qquad (9.13)$$

Here, $g(y) = (g_1(y_1), \ldots, g_n(y_n))^T$ is a component-wise vector function that consists of the so-called (negative) score functions $g_i$ of the distributions of the $s_i$, defined as

$$g_i = (\log p_i)' = \frac{p_i'}{p_i} \qquad (9.14)$$

This immediately gives the following algorithm for ML estimation:

$$\Delta B \propto [B^T]^{-1} + E\{ g(Bx) x^T \} \qquad (9.15)$$

A stochastic version of this algorithm could be used as well. This means that the expectation is omitted, and in each step of the algorithm, only one data point is used:

$$\Delta B \propto [B^T]^{-1} + g(Bx) x^T \qquad (9.16)$$

This algorithm is often called the Bell-Sejnowski algorithm. It was first derived in [36], though from a different approach using the infomax principle that is explained in Section 9.3 below.

The algorithm in Eq. (9.15) converges very slowly, however, especially due to the inversion of the matrix $B$ that is needed in every step. The convergence can be improved by whitening the data, and especially by using the natural gradient.

The natural gradient algorithm

The natural (or relative) gradient method simplifies the maximization of the likelihood considerably, and makes it better conditioned. The principle of the natural gradient is based on the geometrical structure of the parameter space, and is related to the principle of the relative gradient, which uses the Lie group structure of the ICA problem. See Chapter 3 for more details. In the case of basic ICA, both of these principles amount to multiplying the right-hand side of (9.15) by $B^T B$. Thus we obtain

$$\Delta B \propto (I + E\{ g(y) y^T \}) B \qquad (9.17)$$

Interestingly, this algorithm can be interpreted as nonlinear decorrelation. This principle will be treated in more detail in Chapter 12. The idea is that the algorithm converges when $E\{ g(y) y^T \} = -I$, which means that the $y_i$ and $g_j(y_j)$ are uncorrelated for $i \neq j$. This is a nonlinear extension of the ordinary requirement of uncorrelatedness, and, in fact, this algorithm is a special case of the nonlinear decorrelation algorithms to be introduced in Chapter 12.

In practice, one can use, for example, the two densities described in Section 9.1.2. For supergaussian independent components, the pdf defined by (9.9) is usually used. This means that the component-wise nonlinearity $g$ is the tanh function:

$$g^{+}(y) = -2 \tanh(y) \qquad (9.18)$$

For subgaussian independent components, other functions must be used. For example, one could use the pdf in (9.10), which leads to

$$g^{-}(y) = \tanh(y) - y \qquad (9.19)$$

(Another possibility is to use $g(y) = -y^3$ for subgaussian components.) These nonlinearities are illustrated in Fig. 9.1.

Fig. 9.1  The functions $g^{+}$ in Eq. (9.18) and $g^{-}$ in Eq. (9.19), given by the solid line and the dashed line, respectively.
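As an illustration of how the updates (9.15) and (9.17) differ in practice, here is a minimal batch sketch of both; the step size, the helper function names, and the assumption of centered (preferably whitened) data are illustrative choices, not prescriptions from the book.

```python
import numpy as np

def g_super(y):                 # Eq. (9.18), supergaussian model
    return -2.0 * np.tanh(y)

def g_sub(y):                   # Eq. (9.19), subgaussian model
    return np.tanh(y) - y

def bell_sejnowski_step(B, X, g, mu=0.1):
    """One batch update of Eq. (9.15): Delta B ~ [B^T]^{-1} + E{g(Bx) x^T}."""
    T = X.shape[1]
    Y = B @ X
    return B + mu * (np.linalg.inv(B.T) + (g(Y) @ X.T) / T)

def natural_gradient_step(B, X, g, mu=0.1):
    """One batch update of Eq. (9.17): Delta B ~ (I + E{g(y) y^T}) B."""
    n, T = X.shape
    Y = B @ X
    return B + mu * (np.eye(n) + (g(Y) @ Y.T) / T) @ B
```

The natural gradient form avoids the explicit inversion of $B$ required by (9.15), which is one reason it is better conditioned in practice.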
The choice between the two nonlinearities in (9.18) and (9.19) can be made by computing the nonpolynomial moment

$$E\{ -\tanh(s_i) s_i + (1 - \tanh(s_i)^2) \} \qquad (9.20)$$

using some estimates of the independent components. If this nonpolynomial moment is positive, the nonlinearity in (9.18) should be used; otherwise, the nonlinearity in (9.19) should be used. This is because of the condition in Theorem 9.1.

The choice of nonlinearity can be made while running the gradient algorithm, using the running estimates of the independent components to estimate the nature of the independent components (that is, the sign of the nonpolynomial moment). Note that the use of the nonpolynomial moment requires that the estimates of the independent components are first scaled properly, constraining them to unit variance, as in the theorem. Such normalizations are often omitted in practice, which may in some cases lead to situations in which the wrong nonlinearity is chosen.

The resulting algorithm is recapitulated in Table 9.1. In this version, whitening and the above-mentioned normalization in the estimation of the nonpolynomial moments are omitted; in practice, these may be very useful.

1. Center the data to make its mean zero.
2. Choose an initial (e.g., random) separating matrix $B$. Choose initial values of $\gamma_i$, $i = 1, \ldots, n$, either randomly or using prior information. Choose the learning rates $\mu$ and $\mu_\gamma$.
3. Compute $y = Bx$.
4. If the nonlinearities are not fixed a priori:
   (a) update $\gamma_i = (1 - \mu_\gamma)\gamma_i + \mu_\gamma E\{ -\tanh(y_i) y_i + (1 - \tanh(y_i)^2) \}$;
   (b) if $\gamma_i > 0$, define $g_i$ as in (9.18), otherwise define it as in (9.19).
5. Update the separating matrix by
   $$B \leftarrow B + \mu [\, I + g(y) y^T \,] B \qquad (9.21)$$
   where $g(y) = (g_1(y_1), \ldots, g_n(y_n))^T$.
6. If not converged, go back to step 3.

Table 9.1  The on-line stochastic natural gradient algorithm for maximum likelihood estimation. Preliminary whitening is not shown here, but in practice it is highly recommended.
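A compact sketch of the procedure in Table 9.1 might look as follows; the learning rates, the random initialization, and the epoch structure are illustrative assumptions (the table itself does not prescribe them), and, as in the table, prewhitening is omitted.

```python
import numpy as np

def online_ml_ica(X, mu=0.01, mu_gamma=0.01, n_epochs=5, seed=0):
    """On-line stochastic natural gradient for ML estimation (cf. Table 9.1).

    X : (n, T) centered data; preliminary whitening is recommended
        in practice but, as in Table 9.1, not performed here.
    """
    rng = np.random.default_rng(seed)
    n, T = X.shape
    B = np.eye(n) + 0.01 * rng.standard_normal((n, n))   # step 2
    gamma = np.zeros(n)

    for _ in range(n_epochs):
        for t in rng.permutation(T):
            y = B @ X[:, t]                               # step 3
            # step 4(a): running estimate of the nonpolynomial moment (9.20)
            gamma = (1 - mu_gamma) * gamma + mu_gamma * (
                -np.tanh(y) * y + (1 - np.tanh(y) ** 2))
            # step 4(b): choose the nonlinearity per component
            gy = np.where(gamma > 0, -2 * np.tanh(y), np.tanh(y) - y)
            # step 5: natural gradient update (9.21)
            B = B + mu * (np.eye(n) + np.outer(gy, y)) @ B
    return B
```

Step 4 implements the on-line estimate of (9.20), so the sign of each $\gamma_i$ selects the appropriate density model for that component while the algorithm runs.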
9.2.2 A fast fixed-point algorithm

The likelihood can be maximized by a fixed-point algorithm as well. The fixed-point algorithm given by FastICA is a very fast and reliable maximization method that was introduced in Chapter 8 to maximize the measures of nongaussianity used for ICA estimation. Actually, the FastICA algorithm can be directly applied to the maximization of the likelihood.

The FastICA algorithm was derived in Chapter 8 for optimization of $E\{ G(w^T z) \}$ under the constraint of unit norm of $w$. In fact, maximization of the likelihood gives us an almost identical optimization problem, if we constrain the estimates of the independent components to be white (see Chapter 7). In particular, this implies that the term $\log |\det W|$ is constant, as proven in the Appendix, and thus the likelihood basically consists of the sum of $n$ terms of the form optimized by FastICA. Thus we could use directly the same kind of derivation of the fixed-point iteration as used in Chapter 8. In Eq. (8.42) in Chapter 8 we had the following form of the FastICA algorithm (for whitened data):

$$w \leftarrow w - [\, E\{ z g(w^T z) \} + \beta w \,] / [\, E\{ g'(w^T z) \} + \beta \,] \qquad (9.22)$$

where $\beta$ can be computed from (8.40) as $\beta = -E\{ y_i g(y_i) \}$. If we write this in matrix form, we obtain

$$W \leftarrow W + \mathrm{diag}(\alpha_i) [\, \mathrm{diag}(\beta_i) + E\{ g(y) y^T \} \,] W \qquad (9.23)$$

where $\alpha_i$ is defined as $-1/(E\{ g'(w^T z) \} + \beta_i)$, and $y = Wz$.

To express this using nonwhitened data, as we have done in this chapter, it is enough to multiply both sides of (9.23) from the right by the whitening matrix. This means simply that we replace $W$ by $B$, since we have $Wz = WVx$, which implies $B = WV$. Thus, we obtain the basic iteration of FastICA as

$$B \leftarrow B + \mathrm{diag}(\alpha_i) [\, \mathrm{diag}(\beta_i) + E\{ g(y) y^T \} \,] B \qquad (9.24)$$

where $y = Bx$, $\beta_i = -E\{ y_i g(y_i) \}$, and $\alpha_i = -1/(\beta_i + E\{ g'(y_i) \})$. After every step, the matrix $B$ must be projected on the set of whitening matrices. This can be accomplished by the classic method involving matrix square roots,

$$B \leftarrow (B C B^T)^{-1/2} B \qquad (9.25)$$

where $C = E\{ x x^T \}$ is the correlation matrix of the data (see exercises). The inverse square root is obtained as in (7.20). For alternative methods, see Section 8.4 and Chapter 6, but note that those algorithms require that the data is prewhitened, since they simply orthogonalize the matrix. This version of FastICA is recapitulated in Table 9.2.

FastICA can be compared with the natural gradient method for maximizing the likelihood given in (9.17). Then we see that FastICA can be considered as a computationally optimized version of the gradient algorithm. In FastICA, convergence speed is optimized by the choice of the matrices $\mathrm{diag}(\alpha_i)$ and $\mathrm{diag}(\beta_i)$. These two matrices give an optimal step size to be used in the algorithm. Another advantage of FastICA is that it can estimate both sub- and supergaussian independent components without any additional steps: we can fix the nonlinearity to be equal to the tanh nonlinearity for all the independent components. The reason is clear from (9.24): the matrix $\mathrm{diag}(\beta_i)$ contains estimates of the nature (sub- or supergaussian) of the independent components. These estimates are used as in the gradient algorithm in the previous subsection. On the other hand, the matrix $\mathrm{diag}(\alpha_i)$ can be considered as a scaling of the nonlinearities, since we can reformulate $[\, \mathrm{diag}(\beta_i) + E\{ g(y) y^T \} \,] = \mathrm{diag}(\beta_i)[\, I + \mathrm{diag}(\beta_i^{-1}) E\{ g(y) y^T \} \,]$. Thus we can say that FastICA uses a richer parameterization of the densities than that used in Section 9.1.2: a parameterized family instead of just two densities.

Note that in FastICA, the outputs $y_i$ are decorrelated and normalized to unit variance after every step. No such operations are needed in the gradient algorithm. FastICA is not stable if these additional operations are omitted. Thus the optimization space is slightly reduced. In the version given here, no preliminary whitening is done. In practice, it is often highly recommended to do prewhitening, possibly combined with PCA dimension reduction.

1. Center the data to make its mean zero. Compute the correlation matrix $C = E\{ x x^T \}$.
2. Choose an initial (e.g., random) separating matrix $B$.
3. Compute
   $$y = Bx \qquad (9.26)$$
   $$\beta_i = -E\{ y_i g(y_i) \} \quad \text{for } i = 1, \ldots, n \qquad (9.27)$$
   $$\alpha_i = -1/(\beta_i + E\{ g'(y_i) \}) \quad \text{for } i = 1, \ldots, n \qquad (9.28)$$
4. Update the separating matrix by
   $$B \leftarrow B + \mathrm{diag}(\alpha_i) [\, \mathrm{diag}(\beta_i) + E\{ g(y) y^T \} \,] B \qquad (9.29)$$
5. Decorrelate and normalize by
   $$B \leftarrow (B C B^T)^{-1/2} B \qquad (9.30)$$
6. If not converged, go back to step 3.

Table 9.2  The FastICA algorithm for maximum likelihood estimation. This is a version without whitening; in practice, whitening combined with PCA may often be useful. The nonlinear function $g$ is typically the tanh function.
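A corresponding sketch of the FastICA iteration of Table 9.2, with illustrative choices for initialization and stopping (a fixed iteration count instead of a convergence test), and with the inverse square root of (9.30) computed by eigendecomposition in the spirit of (7.20):

```python
import numpy as np

def inv_sqrtm(M):
    """Inverse matrix square root of a symmetric positive definite matrix."""
    d, E = np.linalg.eigh(M)
    return E @ np.diag(d ** -0.5) @ E.T

def fastica_ml(X, g=np.tanh, g_prime=lambda y: 1 - np.tanh(y) ** 2,
               n_iter=100):
    """Sketch of the FastICA algorithm for ML estimation (cf. Table 9.2).

    X : (n, T) centered data (no prewhitening, as in Table 9.2).
    g, g_prime : nonlinearity and its derivative; tanh is the typical choice.
    """
    n, T = X.shape
    C = (X @ X.T) / T                       # correlation matrix C = E{x x^T}
    B = np.linalg.qr(np.random.default_rng(0).standard_normal((n, n)))[0]
    B = inv_sqrtm(B @ C @ B.T) @ B          # start from a whitening matrix

    for _ in range(n_iter):
        Y = B @ X                                             # (9.26)
        beta = -np.mean(Y * g(Y), axis=1)                     # (9.27)
        alpha = -1.0 / (beta + np.mean(g_prime(Y), axis=1))   # (9.28)
        B = B + np.diag(alpha) @ (np.diag(beta) + (g(Y) @ Y.T) / T) @ B  # (9.29)
        B = inv_sqrtm(B @ C @ B.T) @ B                        # (9.30)
    return B
```

Because $\mathrm{diag}(\beta_i)$ adapts to the sub- or supergaussian nature of each component, the tanh nonlinearity can be kept fixed for all components, as noted in the text above.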
9.3 THE INFOMAX PRINCIPLE

An estimation principle for ICA that is very closely related to maximum likelihood is the infomax principle [282, 36]. This is based on maximizing the output entropy, or information flow, of a neural network with nonlinear outputs; hence the name infomax. Assume that $x$ is the input to a neural network whose outputs are of the form

$$y_i = \phi_i(b_i^T x) + n \qquad (9.31)$$

where the $\phi_i$ are some nonlinear scalar functions, and the $b_i$ are the weight vectors of the neurons. The vector $n$ is additive gaussian white noise. One then wants to maximize the entropy of the outputs:

$$H(y) = H(\phi_1(b_1^T x), \ldots, \phi_n(b_n^T x)) \qquad (9.32)$$

This can be motivated by considering information flow in a neural network. Efficient information transmission requires that we maximize the mutual information between the inputs and the outputs of the network. [...] if the nonlinearities $\phi_i$ are chosen as the cumulative distribution functions corresponding to the densities $p_i$, i.e., $\phi_i'(\cdot) = p_i(\cdot)$, the output entropy is actually equal to the likelihood. This means that infomax is equivalent to maximum likelihood estimation.

9.4 EXAMPLES

Here we show the results of applying maximum likelihood estimation to the two mixtures introduced in Chapter 7. Here, we use whitened data. This is not strictly necessary, but the algorithms [...] this chapter. Moreover, a high-order polynomial leads to very nonrobust estimators. [...]

Fig. 9.2  Problems of convergence with the (natural) gradient method for maximum likelihood estimation. The data was two whitened mixtures of subgaussian independent components. The nonlinearity was the one in (9.18), which was not correct in this case. The resulting estimates of the columns [...] the edges of the square, as they should be.

Fig. 9.3  The same as in Fig. 9.2, but with the correct nonlinearity, given by (9.19). This time, the natural gradient algorithm gave the right result. The estimated vectors are aligned with the edges of the square.

[...] corresponding estimation for two mixtures of two supergaussian independent components. This time, the nonlinearity in (9.18) was the correct one, and gave the estimates in Fig. 9.4. This could be checked by computing the nonpolynomial moment in (9.20): it was positive. In contrast, using the nonlinearity in (9.19) gave completely wrong estimates, as seen in Fig. 9.5.

Fig. 9.4  In this experiment, the data was two whitened mixtures of supergaussian independent components. The nonlinearity [...]

Fig. 9.5  [...] gradient method for maximum likelihood estimation. The nonlinearity was the one in (9.19), which was not correct in this case.

[...] algorithm, FastICA effortlessly finds the independent components in both cases. In Fig. 9.6, the results are shown for the subgaussian data, and in Fig. 9.7, the results are shown for the supergaussian data. In both cases the algorithm converged correctly, in a couple of iterations.

Fig. 9.6  FastICA automatically estimates the nature of the independent components, and converges fast to the maximum likelihood solution. Here, the solution was found in 2 iterations for subgaussian independent components.

Fig. 9.7  FastICA this time [...] mixtures. Again, the solution was found in 2 iterations.

9.5 CONCLUDING REMARKS AND REFERENCES

Maximum likelihood estimation, perhaps the most commonly used statistical estimation principle, [...] methods, and it is automatically done in FastICA. The first approaches to using maximum likelihood estimation for ICA were in [140, 372]; see also [368, 371]. This approach became very popular after the introduction of the algorithm in (9.16) by Bell and Sejnowski, who derived it using the infomax principle [36]; see also [34]. The connection between these two approaches was later proven by [64, 322, 363]. [...]

Problems

9.1 Derive the likelihood in (9.4).

9.2 Derive (9.11) and (9.12).

9.3 Derive the gradient in (9.13).

9.4 Instead of the function in (9.19), one could use the function $g(y) = -y^3$. Show that this corresponds to a subgaussian distribution by computing the kurtosis of the distribution. Note the normalization [...]

9.5 [...] is this not correct in the maximum likelihood framework?

9.6 Take a linear $g(y) = y$. What is the interpretation of this in the ML framework? Conclude (once again) that $g$ must be nonlinear.

9.7 Assume that you use the general function family $g(y) = -2\tanh(a_1 y)$ instead of the simple tanh function in (9.18), where $a_1 \geq 1$ is a constant. What is the interpretation of $a_1$ in the likelihood framework?

9.8 Show [...]
