1. Trang chủ
  2. » Luận Văn - Báo Cáo

Báo cáo hóa học: " Speech Source Separation in Convolutive Environments Using Space-Time-Frequency Analysis" pot

11 203 0

Đang tải... (xem toàn văn)

Tài liệu hạn chế xem trước, để xem đầy đủ mời bạn chọn Tải xuống

THÔNG TIN TÀI LIỆU

Cấu trúc

  • Introduction

  • Background

  • Proposed source separation method

    • Identification of single-source TF cells

    • Spatial transfer function estimation

  • Tracking and frequency association algorithm

    • Gaussian mixture model and extended Kalman filter

    • The separation algorithm

  • Experimental results

  • Conclusions

  • Acknowledgment

  • REFERENCES

Nội dung

Hindawi Publishing Corporation EURASIP Journal on Applied Signal Processing Volume 2006, Article ID 38412, Pages 1–11 DOI 10.1155/ASP/2006/38412 Speech Source Separation in Convolutive Environments Using Space-Time-Frequency Analysis Shlomo Dubnov, 1 Joseph Tabrikian, 2 and Miki Arnon-Targan 2 1 CALIT 2, University of California, San Diego, CA 92093, USA 2 Department of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel Received 10 February 2005; Revised 28 September 2005; Accepted 4 October 2005 We propose a new method for speech source separation that is based on directionally-disjoint estimation of the transfer functions between microphones and sources at different frequencies and at multiple times. The spatial transfer functions are estimated from eigenvectors of the microphones’ correlation matrix. Smoothing and association of transfer function parameters across different frequencies are performed by simultaneous extended Kalman filtering of the amplitude and phase estimates. This approach allows transfer function estimation even if the number of sources is greater than the number of microphones, and it can operate for both wideband and narrowband sources. The performance of the proposed method was studied via simulations and the results show good performance. Copyright © 2006 Shlomo Dubnov et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. 1. INTRODUCTION Many audio communication and entertainment applications deal with acoustic signals that contain combinations of sev- eral acoustic sources in a mixture that overlaps in time and frequency. In the recent years, there has been a growing in- terest in methods that are capable of separating audio signals from microphone arrays using blind source separation (BSS) techniques [1]. In contrast to most of the research works in BSS that assume multiple microphones, the audio data in most practical situations is limited to stereo recordings. Moreover, the majority of the potential applications of BSS in the audio realm consider separation of simultaneous au- dio sources in reverberant or echo environments, such as a room or inside a vehicle. These applications deal with convo- lutive mixtures [2] that often contain long impulse responses that are difficult to estimate or invert. In this paper, we consider a simpler but still practical and largely overlooked situation of mixtures that contain a combination of source signals in weak reverberation envi- ronments, such as speech or music recorded with close mi- crophones. The main mixing effect in such a case is direct path delay and possibly a small combination of multipath delays that can be described by convolution with a relatively short impulse response. Recently, several works proposed separation of multiple signals when additional assumptions are imposed on the signals in the time-frequency (TF) do- main. In [3, 4] an assumption that each source occupies sep- arate regions in short-time Fourier transform (STFT) rep- resentation using an analysis window W(t) (so-called W- disjoint assumption) was considered. In [5] a source sepa- ration method is proposed using so-called single-source au- toterms of a spatial ambiguity function. In the W-disjoint case the amplitude and delay estimation of the mixing pa- rameters of each source is performed based on the ratio of the STFTs of signals between the two microphones. Since the disjoint assumption appears to be too strict for many real- world situations, several improvements have been reported that only allow an approximate disjoint situation [6]. The basic idea in such a case is to use some sort of a detection function that allows one to determine the TF areas where each source is present alone (we will refer to such an area as a single-source TF cell,orsingle-TF for short) and use only these areas for separation. Detection of single-source autoterms is based on detecting points that have only one nonzero diagonal entry in the spatial time-frequency distri- bution (STFD). The STFD generalizes the TF distribution for the case of vector signals. It can be shown that under a lin- ear data model, the spatial TF distribution has a structure similar to that of the spatial correlation matrix that is usually used in array signal processing. The benefits of the spatial TF methods is that they directly exploit the nonstationary 2 EURASIP Journal on Applied Signal Processing property of the signals for purposes of detecting and sepa- rating the individual sources. Recent reported results of BSS using various single-TF detection functions show excellent performance for instantaneous mixtures. In this paper, we propose a new method for source sepa- ration in the echoic or slightly reverberant case that is based on estimating and clustering the spatial signatures (trans- fer functions) between the microphones and the sources at different frequencies and at multiple times. The trans- fer functions for each source-microphone pair are derived from eigenvectors of correlation matrices between the micro- phone signals at each frequency, and are determined through a selection and clustering process that creates disjoint sets of eigenvector candidates for every frequency at multiple times. This requires solving the permutation problem [7], that is, association of transfer function values across different fre- quencies into a single transfer function. Smoothing and asso- ciation are achieved by simultaneous Kalman filtering of the noisy amplitude and phase estimates along different frequen- cies for each source. This differs from association methods that assume smoothness of spectra of the separated signals, rather than smoothness of the transfer functions. Even when notches in room response occur due to signal reflections, these are relatively rare compared to the inherent sparseness of the source signals, which is inherent in the W-disjoint as- sumption. Our approach allows estimation of the transfer functions between each source and every microphone, and is capable of operating for both wideband and narrowband sources. The proposed method can be used for approximate signal sepa- ration in undercomplete cases (more than two sources in a stereo recording) using filtering or time-frequency masking [8], in a manner similar to that of the W-disjoint situation. This paper is structured in the following manner: in the next section, we review some recent state-of-the-art algo- rithms for BSS, specifically considering the nonstationary methods of independent component analysis (ICA) and the W-disjoint approaches. Section 3 presents our model and the details of the proposed algorithm. Specifically, we will de- scribe the TF analysis and representation and its associated eigenvector analysis of the correlation matrices at different frequencies and multiple times. Then, we proceed to derive a criterion for identification of the single-source TF cells and clustering the spatial transfer functions. Details of the ex- tended Kalman filter (EKF) tracking, smoothing, and across- frequency association of the transfer function amplitudes and phases conclude this sec tion. The performance of the proposed method for source separation is demonstrated in Section 5. Finally, our conclusions are presented in Section 6. 2. BACKGROUND The problem of multiple-acoustic-source separation using multiple microphones has been intensively investigated dur- ing the last decade, m ostly based on independent compo- nent analysis (ICA) methods. These methods, largely driven by advances in machine learning research, treat the separa- tion issue broadly as a density estimation problem. A com- mon assumption in ICA-based methods is that the sources have a particular statistical behavior, such that the sources are random stationary statistically independent signals. Us- ing this assumption, ICA attempts to linearly recombine the measured sig nals so as to achieve output signals that are as independent as possible. The acoustic mixing problem can be described by the equation x(t) = As(t), (1) where s(t) ∈ R M denotes the vector of M source signals, x(t) ∈ R N denotes the vector of N microphone signals, and A stands for the mixing matrix with constant coefficients A nm describing the amplitude scaling between source m and mi- crophone n. Naturally, this formulation describes only an in- stantaneous mixture with no delays or convolution effects. In a multipath environment, each source m couples with sen- sor n through a linear time-invariant system. Using discrete time t and τ, and assuming impulse responses not exceeding length L, the microphone signals are x n (t) = M  m=1 L  τ=1 A nm (τ)s m (t − τ). (2) Note that the mixing is now a matrix convolution between the source signals and the microphones, where A nm (·)rep- resents the impulse response between source n and micro- phone m. We can rewrite this equation by applying the dis- crete Fourier transform (DFT): x(ω) =  A(ω)s(ω), (3) where denotes the DFT of the signal. This notation assumes that either the signals and the mixing impulse responses are of short duration (shorter than the DFT length), or that an overlap-add formulation of the convolution process is as- sumed, which allows infinite duration for s(t)andx(t), but requires a short duration of the A nm (·) responses. From now on we will consider the convolutive problem by assuming separate instantaneous mixing problems x(ω) =  A(ω)s(ω)at every frequency ω. The aim of the convolutive BSS is to find filters W mn (t) that when applied to x(t) result in new signals y(t) that are approximately independent. In the frequency- domain formulation we have y(ω) =  W(ω)x(ω), (4) so that y(t) corresponds to the original sources s(t), up to some allowed transformation such as permutation, that is, not knowing which source s m (t) appears in which output y m  (t), and amplitude scaling (relative volume). This problem can be reformulated in statistical terms as follows: for each frequency, given a multivariate distr ibution Shlomo Dubnov et al. 3 of vectors x = (x 1 , x 2 , , x N ) T , whose coordinates or com- ponents correspond to the signals at the N microphones, we seek to find a matrix  W and vector y = (y 1 , y 2 , , y M ) T , whose components are “as independent as possible.” Saying so, it is assumed that there exists a multivariate process with independent components s, which correspond to the actual independent acoustic sources, such as speakers or musical instruments, and a matrix  A =  W −1 that corresponds to the mixing condition (up to permutation and scaling), so that x =  As. Note that here and in the following we will at times drop the frequency parameter ω from the problem formulation. Since the problem consists of finding an inverse matrix to the model x =  As, any solution of this problem is possible only by using some prior information of  A and s.Consider- ing a pairwise independence a ssumption, the relevant crite- rion can be described by considering the following: ∀t, k, l, τ, i = j : E  s k i (t)s l j (t + τ)  = E  s k i (t)  E  s m j (t + τ)  . (5) The parameterization of different ICA approaches can be written now as different conditions on the parameters of the independence assumption. For stationary signals, the time indices are irrelevant and higher-order statistical criteria in the form of independence conditions with k, l>1mustbe considered. For stationary colored signals, it has been shown that decorrelation of multiple times t for k = l = 1allows recovery of the sources in the case of an instantaneous mix- ture, but is insufficient for the general convolutive case. For nonstationary signals, decorrelation at multiple times, t,can be used (for k = l = 1) to perform the separation. The idea behind decorrelation at multiple times t is basi- cally an extension of decorrelation at two time instances. In the case of nonmoving sources and microphones, the same linear model is assumed to be valid at different time instances with different signal statistics, with the same orthogonal sep- arating matrix W: Wx  t j , ω  = y  t j , ω  , j = 1, , J,(6) where the additional index ω of W implies that we are deal- ing with multiple separation problems for different values of ω. The same formulation can be used without ω for a time- domain problem, which gives a solution to the instantaneous mixture problem. Considering autocorrelation statistics at time instances t 1 , , t J we obtain J sets of matrix equations: R x,t j = W −1 Λ y,t j W −T , j = 1, , J,(7) where we assume that {Λ y,t j } J j =1 are diagonal since the com- ponents of y are independent. This problem can be solved using a simultaneous diagonalization of {R x,t j } J j =1 , without knowledge of the true covariances of y at different times. A crucial point in implementation of this method is that it works only when the eigenvalues of the matrices R x,t are all distinct. This case corresponds in physical reality to suf- ficiently unequal powers of signals ar riving from different directions, a situation that is likely to be violated in prac- tical scenarios. Moreover, since the covariance matrices are estimated in practice from short time frames, the averaging time needs to correspond to the stationarity time. An addi- tional difficulty occurs specifically for the TF representation: independence between two signals in a certain band around ω corresponds to independence between narrowband pro- cesses, which can be revealed at time scales that are signifi- cantly longer than the window size or the effective impulse response of the bandpass filter used for TF analysis. This in- herently limits the possibility of averaging (taking multiple frames or snapshots of the signal in one time segment) with- out exceeding the stationarity interval of the signal. In the following we will show how our method solves the eigenvalue indeterminacy problem by choosing those time segments where only one significant eigenvalue occurs. Our “segmen- tal” approach actually reduces the generalized (or multiple) eigenvalue problem into a set of disjoint eigenvalue problems that are solved separately for each source. The details of our algorithm will be described in the next section. In the fol- lowing, we will consider the “directionally-disjoint” sources case in which the local covariance matrices R x,t j have a single largeeigenvalueatsufficiently many time instances t j .The precise definition and the amount of times that are sufficient for separation will be discussed later. 3. PROPOSED SOURCE SEPARATION METHOD Consider an N-channel sensor signal x(t) that arises from M unknown scalar source signals s m (t), corrupted by zero- mean, white Gaussian additive noise. In a convolutive en- vironment, the signals are received by the array after delays and reflections. We consider the case where each one of the sources has a different spatial transfer function. Therefore, the signal at the nth microphone is given by x n (t) = M  m=1 L  l=1 a nml s m (t − τ nml )+v n (t), n = 1, , N, (8) in which τ nml and a nml are the delay and gain of the lth path between source signal m and microphone n,andv n (t)de- notes the zero-mean white Gaussian noise. The STFT of (8) gives X n (t, ω) = M  m=1 A nm (ω)S m (t, ω)+V n (t, ω), n = 1, , N, (9) where S m (t, ω)andV n (t, ω) are the STFT of s m (t)andv n (t), respectively, and the transfer function between the mth signal to the nth sensor is defined as A nm (ω) = L  l=1 a nml e − jωτ nml . (10) 4 EURASIP Journal on Applied Signal Processing In matrix notation, the model (9)canbewrittenintheform X(t, ω) = A(ω)S(t, ω)+V(t, ω). (11) Our goal here is to estimate the spatial transfer function matrix, A(ω), and the signal vector, s(t), from the measure- ment vector x(t). For estimation of the signal vector, we will assume that the number of sources, M, is not greater than the number of sensors, N. This assumption is not required for estimation of the spatial transfer function matrix, A(ω). The proposed approach seeks time-frequency cells in which only one source is present. At these cells, it is pos- sible to estimate the unstructured spatial transfer function matrix for the present source. Therefore, we will first iden- tify the single-source TF cells and calculate the spatial trans- fer functions for the sources present in those cells. In the second stage, the spatial transfer functions are clustered using a Gaussian mixture model (GMM). The frequency- permutation problem is solved by considering the spatial transfer functions as a frequency-domain Markov model and applying an EKF to track it. Finally, the sources are separated by inverse filtering of the measurements using the estimated transfer function matrices. The autocorrelation matrix at a given time-frequency cell is given by R x (t, ω) = E  X(t, ω)X H (t, ω)  = A(ω)R s (t, ω)A H (ω)+R v (t, ω), (12) where R x , R s ,andR v are the time-frequency spectra of the measurements, source signals, and sensor noises, respec- tively. We assume that the noise is stationary, and there- fore its covariance matrix is independent of time t, that is, R v (t, ω) = R v (ω). Furthermore, the noise spect rum is usually known, so (12) can be spatially prewhitened by left multiply- ing (11)byR −1/2 v (ω). Thus, we can assume R v (ω) = σ 2 v I N for all ω where I N is the identity matrix of size N. 3.1. Identification of single-source TF cells Each time-frequency window is tested in order to identify the time-frequency windows in which a s ingle signal is present. In these cells, the unstructured spatial t ransfer funct ion can be easily estimated. Consider a time segment consisting of T time cells in which the signals are stationary. Then, (12) becomes time-independent: R x (ω) = A(ω)R s (ω)A H (ω)+σ 2 v I N . (13) If only the mth source is present, (13)becomes R x m (ω) = a m (ω)a H m (ω)σ 2 s m (ω)+σ 2 v I N , (14) where a m (ω) is the mth column of the matrix A(ω)and σ 2 s m (ω) denotes the mth signal power spectr um. In this case, the rank of the (noiseless) signal covariance matrix is 1 and a m (ω) is proportional to the eigenvector of the autocorre- lation matrix R x m (ω) associated with the maximum eigen- value: λ 1,m (ω) = σ 2 s m (ω)a m (ω) 2 + σ 2 v . This propert y al lows us to derive a test for identification of the single-source seg- ments and estimate the corresponding spatial transfer func- tion a m (ω). We will denote the eigenvector corresponding to the maximum eigenvalue of the matrix R x (ω)byu(ω), disre- garding the source index m. The three hypotheses for each time-frequency cell in a stationary segment, which indicate the number of active sources in this segment, are H 0 : X(t, ω) ∼ N c  0, σ 2 v I N  , H 1 : X(t, ω) ∼ N c  0, u(ω)u H (ω)σ 2 s (ω)+σ 2 v I N  , H 2 : X(t, ω) ∼ N c  0, R x (ω)  , (15) where H 0 , H 1 , H 2 indicate noise-only, single-source, and multiple-source hypotheses, respectively, with X ∼ N c (·, ·) denoting the complex Gaussian distribution. Under hypoth- esis H 0 , the model parameters are known. Under hypoth- esis H 1 , the vector u(ω) is the normalized spatial transfer function of the present source in the segment (i.e., one of the columns of the matrix A(ω)) and σ 2 s (ω) represents the corresponding signal power spectrum. We assume that u(ω) and σ 2 s (ω) are unknown. In hypothesis H 2 , it is assumed that the data model is complex Gaussian-distributed and spatially colored with unknown covariance matrix R x (ω), which rep- resents the contribution of several mixed sources. Usually, the Gaussian distribution assumption for hypotheses H 1 and H 2 does not hold, and in fact leads to suboptimal solutions. However, this assumption enables obtaining a simple and meaningful result for source separation. In order to identify the case of a single source, two tests are performed. In the first, the hypotheses H 0 and H 1 are tested, w hile in the second, hypotheses H 1 and H 2 are tested. A time-frequency cell is considered as a single-source cell if in both tests it is decided that a single source is present. These tests are carried out between hypotheses with unknown pa- rameters, and therefore the generalized likelihood ratio test (GLRT) is employed, that is, H 1 GLRT 1 = max u,σ 2 s log f X|H 1 ;u,σ 2 s − log f X|H 0 ≷ γ 1 , H 0 H 2 GLRT 2 = max R x log f X|H 2 ;R x − max u,σ 2 s log f X|H 1 ;u,σ 2 s ≷ γ 2 , H 1 (16) where f X|H 0 , f X|H 1 ;u,σ 2 s ,and f X|H 2 ;R x denote the probability density functions (pdf’s) of each time-frequency seg ment under the three hypotheses. Now, we will derive the GLRTs for identification of single-source cells. Consider T independent samples of the data vectors X(ω)  [X(1, ω), , X(T, ω)] for which the data vector is stationary. Then, under the three hypothe- ses described above, X(t, ω) is complex Gaussian-distributed Shlomo Dubnov et al. 5 X(t, ω) ∼ N c [0, R x (ω)]. The model of R x (ω)differs between the three hypotheses. The log-likelihood of the data X(ω)un- der the joint mo del is log f X|R x =−T log   πR x (ω)   − T  t=1 X H (t, ω)R −1 x (ω)X(t, ω) =−T  log   πR x (ω)   +tr   R x (ω)R −1 x (ω)  , (17) where  R x (ω) is the sample covariance matrix  R x (ω)  1/T  T t=1 X(t, ω)X H (t, ω). For simplicity of notation, we will drop the dependence on frequency ω. Under hypothesis H 0 , R x = σ 2 v I, and therefore the log- likelihood from (17)becomes log f X|H 0 =−T  N log  πσ 2 v  + 1 σ 2 v tr   R x   . (18) Under hypothesis H 1 , R x = σ 2 s uu H + σ 2 v I N , for which the following equations are satisfied: R −1 x = 1 σ 2 v  I N − SNR 1+SNR uu H  ,   R x   = σ 2N v (1 + SNR), (19) where SNR  σ 2 s /σ 2 v . Substitution of (19) into (17) yields log f X|H 1 ,u,σ 2 s =−T  log  πσ 2 v  N (1 + SNR)  + 1 σ 2 v tr   R x  I N − SNR 1+SNR uu H  =− T  N log  πσ 2 v  + 1 σ 2 v tr   R x  +log(1+SNR) − SNR σ 2 v (1 + SNR) u H  R x u  . (20) Maximization of (20)withrespecttoσ 2 s can be replaced by maximization with respect to SNR. This operation can be performed by calculating the derivative of (20)withre- spect to SNR and equating it to zero, resulting in  SNR(u) = u H  R x u/σ 2 v − 1orσ 2 s (u) = u H  R x u − σ 2 v .Thus, max σ 2 s log f X|H 1 ,u,σ 2 s =−T  N log  πσ 2 v  + 1 σ 2 v tr   R x  +1+logη − η  , (21) where η  u H  R x u/σ 2 v . We seek to maximize (21)withre- spect to u,whereu is constrained to unity norm. Since (21) is monotonically increasing with η,forη>1, then the log- likelihood is maximized when η is maximized. Let λ 1 ≥ ···≥ λ N denote the eigenvalues of  R x . Then, max u u H  R x u = λ 1 ,and max u,σ 2 s log f X|H 1 ,u,σ 2 s =−T  N log  πσ 2 v  +1+ 1 σ 2 v tr   R x  +log λ 1 σ 2 v − λ 1 σ 2 v  =− T  N log  πσ 2 v  +1+ N  i=2 λ i σ 2 v +log λ 1 σ 2 v  . (22) Under hypothesis H 2 , the matrix R x is unstructured and assumed to be unknown. Equation (17) is maximized for R x =  R x [9]. The resulting log-likelihood under this hypoth- esis is max R x log f X|H 2 ,R x =−T  log   π  R x   + N  =− T  N log π + N  i=1 log λ i + N  . (23) Now, the two GLRTs for decision between (H 0 , H 1 )and (H 1 , H 2 ) can be derived by subtracting the corresponding log-likelihood functions: H 1 GLRT 1 = max u,σ 2 s log f X|H 1 ;u,σ 2 s − log f X|H 0 = T  λ 1 σ 2 v − log λ 1 σ 2 v − 1  ≷ γ  1 , H 0 H 2 GLRT 2 = max R x log f X|H 2 ;R x − max u,σ 2 s log f X|H 1 ;u,σ 2 s = T  N  i=2  λ i σ 2 v − log λ i σ 2 v  − N +1  ≷ γ  2 . H 1 (24) 6 EURASIP Journal on Applied Signal Processing Finally, after dropping the constants, and modifying the thresholds accordingly, the two tests can be stated as H 1 T 1 =  λ 1 σ 2 v − log λ 1 σ 2 v  ≷ γ 1 , H 0 H 2 T 2 = N  i=2  λ i σ 2 v − log λ i σ 2 v  ≷ γ 2 . H 1 (25) The thresholds γ 1 and γ 2 in the two tests should be set according to the following considerations. Large values for γ 1 and small values for γ 2 will lead to missed detections of single-source TF cells, and therefore lead to a lack of data for calculation of the spatial transfer function. On the other hand, small values for γ 1 or large values for γ 2 will lead to false detections of single-source TF cells, w hich can cause er- roneous estimation of the spatial t ransfer function. Gener- ally, larger amounts of data will enable us to increase γ 1 and decrease γ 2 . In the case of stereo signals (N = 2), both tests could be expressed for i = 1, 2 and λ 2 ≥ λ 1 ≥ σ 2 v as H i T i =  λ i σ 2 v − log λ i σ 2 v  ≷ γ i . H i−1 (26) 3.2. Spatial transfer function estimation In the TF cells that are identified to be single-source cells, the ML estimator for the normalized spatial transfer function of the present source at the given frequency ω is given by the eigenvector associated with the maximum eigenvalue of the autocorrelation matrix R x m . It is important to note that a sin- gle amplitude-delay pair is sufficient to describe the spatial transform for a sufficiently narrow frequency band represen- tation and assuming a linear spatial system. We can rewrite the model (11) for the case of two sources and two micro- phones as  X 1 (ω) X 2 (ω)  =  11 a 1 e − jωδ 1 a 2 e − jωδ 2  S 1 (ω) S 2 (ω)  (27) in which case, the mixing matrix column, corresponding to one of the sources, say source m, can be directly estimated from the eigenvector, a m (ω), associated with the maximum eigenvalue of the autocorrelation matrix R x m under hypoth- esis T 1 , that is, a single-source m is present in this TF region. Thus, a m e − jωδ m = a m,2 (ω) a m,1 (ω) , (28) where a m,i denotes the ith component of a m , or more specif- ically a m =     a m,2 (ω) a m,1 (ω)     , δ m = 1 ω   log a m,2 (ω) a m,1 (ω)  , (29) where  denotes taking the imaginary part. Having different amplitude and delay values for each source at e very frequency, we need to associate the different amplitude and delay values across frequency to their corre- sponding source. If we assume that the amplitude and de- lay are constant over different frequencies, occurring in the case of a direct path effect only, the association can be per- formed by clustering the amplitude and phase values around their mean value. In the case of multipath, the amplitude and delay values may differ across frequencies. Using smooth- ness considerations, one could try to associate the parame- ters across different frequencies by assuming proximity of pa- rameter values across frequency bins for the same source. It should be also noted that smoothness of delay values requires unwrapping of the complex logarithm before dividing by ω. This is limited by spatial aliasing for high frequencies, that is, if the spacing d between the sensors is too large, the delay d/c where c is the speed of sound, might be larger than the max- imum permissible delay 2π/ω s ,withω s denoting the sam- pling frequency. In other words, it might not be possible to uniquely solve the permutation problem if the delay between two microphones is more than one sample. Moreover, sepa- rate clustering or associating amplitude and delay parameters also looses information about the relations between the real and imaginary components of the spatial transfer function vector. In the following section, we w ill describe an optimal tracking and frequency association based on Kalman mod- eling, which addresses these problems assuming smoothness of the amplitude and phase of the spatial transfer function across frequency. 4. TRACKING AND FREQUENCY ASSOCIATION ALGORITHM A common problem in frequency-domain convolutive BSS is that the mixing parameter estimation is performed sep- arately for each frequency. In order to reconstruct the time signal, the frequency-separated channels must be combined together in a consistent manner, that is, one must insure that the different frequency components correspond to the same source. This problem is sometimes referred to as the frequency-permutation or association problem. In our method we perform the association in two steps. First, we reduce the number of points at every frequency by finding clusters of the points a m,2 (ω)/a m,1 (ω) in the complex plane at different time segments. This clustering is performed us- ing a two-dimensional GMM of the real and imaginary parts. The number of clusters is determined aprioriaccording to the number of sources. When the number of sources is un- known, additional methods for determining the number of clusters may be considered. Next, association of the mixing Shlomo Dubnov et al. 7 parameters across frequency is performed by operating sep- arate EKFs on the cluster means, one for each source. 4.1. Gaussian mixture model and extended Kalman filter The GMM assumes that the observations z are distributed according to the following density function p z (z) = M  m=1 π m N  z | Θ m  , (30) where π m are the weights of the Gaussian distribution N(·| Θ m ), and Θ m ={μ m , Σ m } are its mean and covariance matrix parameters, respectively. In our case, the observations, z,are estimates of the real and imaginary parts of the transfer func- tion over frequency (see previous section). The parameters of the GMM are obtained using an expectation-maximization (EM) procedure. The estimated mean and covariance matrix at each frequency are used for tracking the spatial transfer function. An EKF is used for t racking and association of the trans- fer functions, whose mean and variance are estimated by the EM algorithm. T he idea here is that the spatial transfer func- tion between each source and microphone is smooth over frequency. Notches that occur in the transfer func tion due to signal reflections will be smoothed by the EKF, causing errors in the estimation (29), which color the signal but do not in- terfere with the association process since one of the sources in this case has small or zero amplitude. Therefore, the spa- tial transfer functions are modeled as first-order Markov se- quences. It is natur al to use the magnitude and phase of each spatial transfer func tion for the state vector, because in sim- ple scenarios with no multipath, the absolute value of the transfer function is constant over frequency, while its phase linearly varies with frequency. Thus, the state vector of each EKF includes the magnitude (ρ), phase (α), and phase rate ( ˙ α) of the transfer function. The presence of multipath causes deviation from this model, which can be represented by a noise model. Thus, the state vector dynamics across neigh- boring frequencies (frequency smoothness constraint) are modeled as φ k = ⎛ ⎜ ⎝ ρ k α k ˙ α k ⎞ ⎟ ⎠ = ⎛ ⎜ ⎝ 100 011 001 ⎞ ⎟ ⎠ ⎛ ⎜ ⎝ ρ k−1 α k−1 ˙ α k−1 ⎞ ⎟ ⎠ + n φk , μ k =    a m  ω k    a m  ω k   =  ρ k cos α k ρ k sin α k  + n μk , (31) in w h ich the noise covariance of n μk is taken from the above- mentioned clustering algorithm, and the model noise covari- ance of n φk is set according to the expected dynamics of the spatial transfer function. For tracking the M transfer functions, M independent EKFs are implemented in parallel. At each frequency step, the data is associated with the EKFs according to the criterion of minimum-norm distance between the clustering estimates and the M Kalman predictions. 4.2. The separation algorithm The various steps of the algorithm can be summarized as fol- lows. (i) Given a two-channel recording, perform a separate STFT analysis for every channel, resulting in the sig- nal model (11). (ii) Perform an eigenvalue analysis of the cross-channel correlation matrix at each frequency, as described in Section 3, where (12)and(26) determine the transfer function. (iii) At each frequency, determine the cluster centers of the set of amplitude ratio measurements using the GMM. (iv) Perform EKF tracking of the cluster means across fre- quency for each source to obtain an estimate of the mixing matrix as a function of frequency. (v) If the mixing matrix is invertible, recover the signals by multiplying the STFT channels at each frequency by the inverse of the estimated mixing matrix. In case of more microphones than sources, the pseudoinverse of the mixing matrix should be used. In case of more sources than microphones, source separation can be approximately performed using time-frequency mask- ing method of [8]. (vi) Perform an inverse STFT using the associated frequen- cies for each of the sources. Since the mixing matrix can be determined only up to a scaling factor, we assume a unit relative magnitude for one of the sources and use the amplitude ratios to determine the mixing parameters of the remaining source. This problem of scale invariance may cause a “coloration” of the recovered signal (over frequency) and is one of the possible sources of error, being common to most convolutional source separa- tion methods. Another typical problem is that the narrow- band processing corresponds to circular convolution rather than the desired linear convolution. This effectively restricts the length of the impulse response between the microphones to be less than half of the analysis window length, or in fre- quency it restricts the spectral smoothness to that of the DFT length.Sincespeechsourcesaresparseinfrequency(atleast for the voiced segments), it is assumed that spectral peaks of speech harmonics would not be seriously influenced by spec- tral detail smaller than one FFT bin. 5. EXPERIMENTAL RESULTS Separation experiments were carried out for simulated mix- ing conditions. We tested the proposed algorithm under dif- ferent conditions, such as relative amplitudes of the sources, angles and amplitudes of the multipath reflections, and dif- ferent types of sound sources. In the first experiment, two female speakers were recorded by two microphones with 4.5 cm spacing. Figure 1 shows the measured versus smoothed spatial transfer func- tions for this difficult case of two female speaker sources of 20-second length, sampled at a rate of 8 kHz, with nearly equal amplitude mixing conditions. The separation 8 EURASIP Journal on Applied Signal Processing 3500300025002000150010005000 Frequency (Hz) 0 0.2 0.4 0.6 0.8 1 1.2 1.4 a 2 /a 1 Measured values Smoothed transfer function values (a) 3500300025002000150010005000 Frequency (Hz) 1 0.5 0 0.5 1 1.5 2 2.5 3 ∠a 2 /a 1 Measured values Smoothed transfer function values (b) Figure 1: Amplitude and phase of two female speaker sources with nearly equal amplitude mixing conditions. is possible due to the different phase behavior of the sig- nals, which is properly detected using the EKF tracking. The EKF parameters were set as follows. The system noise covariance matrix was set according to standard deviation (STD) of 0.1/sample in the transfer function amplitude and 0.1 rad/sample for phase. The measurement covariance ma- trices were set based on the results of the EM algorithm for GMM parameters estimation. The measurement STDs are in fact the widths of the Gaussians. The EKF parameters were also fixed in the following examples. In Figure 2 the SNR improvement for different relative positions of the sources with different relative amplitudes is presented. The SNR improvement was calculated according to the method described in [10]. The separation quality of the mth source is evaluated by the ratio of the maximal en- ergy output signal and sum of energies of the remaining out- put signals when only source m is present at the input. One of the sources was fixed at 0 ◦ while the other source was shifted from −40 ◦ to 40 ◦ . The amplitude ratio of the sources at the microphones varied from 0.8 to equal amplitude ratios. The multipath reflections occurred at constant angles of 60 ◦ and −40 ◦ with relative amplitudes of a few percent of the orig- inal. For equal amplitudes, we achieve up to 10 dB of im- provement when the sources are 40 ◦ apart. The angle sensi- tivity disappears when sufficient amplitude difference exists 40302010010203040 DOA of source 2(deg) 0 5 10 15 20 25 30 Improvement for source 1 SNR improvement (dB) Amp. ratio = 0.8 Amp. ratio = 0.9 Amp. ratio = 0.95 Amp. ratio = 1 (a) 40302010010203040 DOA of source 2(deg) 5 0 5 10 15 20 25 Improvement for source 2 SNR improvement (dB) Amp. ratio = 0.8 Amp. ratio = 0.9 Amp. ratio = 0.95 Amp. ratio = 1 (b) Figure 2: Improvement in SNR as a function of source angle for different relative amplitudes under weak multipath conditions. between the sources. For an amplitude ratio of 0.8(i.e.,each microphone receives its main source at amplitude 1 and the interfering source at amplitude 0.8), we achieved 20–30 dB improvement. One should note that the above results con- tain weak multipath components. Even better improvement (50 dB or more) can be achieved for cases when no multipath is present. The performance of the proposed method was tested also under strong multipath conditions. In this test, the two microphones measured signals from two sources. Each source signal arrives at the microphones through six differ- ent paths. The paths of the first source are from 0 ◦ , −5 ◦ , −10 ◦ , −20 ◦ , −30 ◦ , −40 ◦ , with strengths 0, −6, −7.5, −9, −11, and −13.5 dB. The paths of the second source are from 60 ◦ ,50 ◦ ,40 ◦ ,30 ◦ ,20 ◦ , with strengths −7.5, −9, −11, −13.5, and −17 dB, where the main path was at 0 dB with vary- ing direction. The relative amplitude of the received paths at the microphones was randomly chosen between 0.67–0.86. Figure 3 shows the SNR improvement for both sources as a function of the main path direction for different relative am- plitudes. The proposed method was also tested for separation of three sources (female speakers) using three microphones. Figure 4 shows the SNR improvement results with differ- ent relative amplitudes as a function of the third source Shlomo Dubnov et al. 9 40302010010203040 DOA of source 2(deg) 0 5 10 15 20 Improvement for source 1 SNR improvement (dB) Amp. ratio = 0.8 Amp. ratio = 0.9 Amp. ratio = 0.95 Amp. ratio = 1 (a) 40302010010203040 DOA of source 2(deg) 0 5 10 15 20 Improvement for source 2 SNR improvement (dB) Amp. ratio = 0.8 Amp. ratio = 0.9 Amp. ratio = 0.95 Amp. ratio = 1 (b) Figure 3: Improvement in SNR under strong multipath conditions as a function of source angle for different relative amplitudes. direction. The microphones were positioned within a linear, equally spaced (LES) array with 4.5 cm intersensor spacing. The performance in this case is slightly lower than the case of two microphones versus two sources, mainly because there are fewer TF cells in which a single source is present. Ob- viously, longer data can significantly improve the results in cases of multiple sources and multiple microphones. As mentioned above, the proposed method is able to esti- mate the spatial transfer function in the case of more sources than sensors. Figure 5 shows the magnitude and phase of the true and estimated channel transfer functions of the three sources where only two microphones were used. The sources were located at −40 ◦ , −10 ◦ ,and30 ◦ with relative amplitudes of the different sources of 4, 2, and 0.5 between the micro- phones. Figure 6 shows the amplitude of the spatial transfer func- tion obtained by the inverse mixing matrix over frequency for the case of two sources located at 0 ◦ and 60 ◦ , without multipath. One can observe that the spatial pattern gen- erated by the inverse of the estimated mixing matrix in- troduces a null in the direction of the interfering source. Figure 6(a) shows the null generated a round 60 ◦ for recov- ering the source at 0 ◦ , while Figure 6(b) shows the null gen- erated around 0 ◦ for recovering the source at 60 ◦ . 40302010010203040 DOA of source 3(deg) 0 10 20 30 40 Improvement for source 1 SNR improvement (dB) Amp. ratio = 0.8 Amp. ratio = 0.9 Amp. ratio = 0.95 Amp. ratio = 1 (a) 40302010010203040 DOA of source 3(deg) 0 10 20 30 40 Improvement for source 2 SNR improvement (dB) Amp. ratio = 0.8 Amp. ratio = 0.9 Amp. ratio = 0.95 Amp. ratio = 1 (b) 40302010010203040 DOA of source 3(deg) 0 10 20 30 40 Improvement for source 3 SNR improvement (dB) Amp. ratio = 0.8 Amp. ratio = 0.9 Amp. ratio = 0.95 Amp. ratio = 1 (c) Figure 4: Improvement in SNR for the case of three microphones and three sources as a function of the third source angle for different relative amplitudes. The proposed method for estimating the spatial transfer functions using the correlation matrix of the TF representa- tion can be compared to the method for estimation of mixing and delay parameters from the STFT, as reported in [3, 8]. The basic assumption of that approach is the orthogonality of the “W-disjoint,” which requires that part of TF the cells in the TF representation of the sources do not overlap. The derivation of the relative amplitude and delay parameters as- sociated with source m being active at (t, ω) is done using  a m , δ m  =      X 2 (t, ω) X 1 (t, ω)     , 1 ω ∠ X 2 (t, ω) X 1 (t, ω)  . (32) Note that unlike the proposed method, in this case the mix- ing parameters are estimated directly from the STFT rep- resentation without taking into account the additive noise, which affects both amplitude and phase estimates. Using spa- tial correlation, it is possible to recover the relative amplitude 10 EURASIP Journal on Applied Signal Processing 40003500300025002000150010005000 Frequency (Hz) 0 1 2 3 4 5 The measured versus smoothed spatial transfer functions a 2 /a 1 Estimated for source 1 Estimated for source 2 Estimated for source 3 Original for source 1 Original for source 2 Original for source 3 (a) 40003500300025002000150010005000 Frequency (Hz) 1.5 1 0.5 0 0.5 1 1.5 2 ∠a 2 /a 1 Estimated for source 1 Estimated for source 2 Estimated for source 3 Original for source 1 Original for source 2 Original for source 3 (b) Figure 5: Channel transfer function estimation for three sources using two microphones. and phase of the spatial transfer function for a single-source TF cell containing additive white noise. A central step in the W-disjoint approach is the clustering of the parameters in amplitude and delay space so as to identify separate sources in the mixtures. Usually this clustering step is performed un- der the assumption of constant amplitude and delay over fre- quency and is possible for speech signals when the sources are distinctly localized both in amplitude and delay. It should be noted that these methods can not handle multipath, that is, when more than one peak in the amplitude and delay space corresponds to a single source. Figure 7 shows the dist ribu- tion of the ratio of spatial transfer function values a 2 /a 1 in the complex plane for two real sources over different fre- quencies at TF points that have been detected as single-TFs. It can be seen from the figure that these values have signifi- cant overlap in amplitude and phase. It is evident that simple clustering can not separ a te these sources and more sophisti- cated methods are required. 6. CONCLUSIONS In this paper, we presented a new method for speech source separation based on directionally-disjoint estimation of the 80604020020406080 DOA (deg) 0 5 10 15 20 25 30 35 10 2 Frequency (Hz) dB 60 40 20 0 20 (a) 80604020020406080 DOA (deg) 0 5 10 15 20 25 30 35 10 2 Frequency (Hz) dB 60 40 20 0 20 (b) Figure 6: Spatial pattern obtained by the inverse of the mixing ma- trix for each frequency in the case of two sources at 0 ◦ and 60 ◦ . 1.510.500.511.5 Real (a 2 /a 1 ) 2 1.5 1 0.5 0 0.5 1 1.5 Image (a 2 /a 1 ) Figure 7: Distribution of the ratio of spatial transfer function values a 2 /a 1 in the complex plane for two real sources (indicated by circles and asterisks) over different frequencies at TF points that have been detected as single-TFs. transfer functions between microphones and sources at dif- ferent frequencies and at multiple t imes. We assume that the mixed signals contain a combination of source signals in a reverberant environment, such a s speech or music recorded with close microphones, where the mixing effect is a direct path delay in addition to a combination of weak multipath delays. The proposed algorithm detects the transfer functions in the frequency domain using eigenvector analysis of the [...]... methods,” in Proceedings of 4th International Workshop on Independent Component Analysis and Blind Signal Separation (ICA ’03), pp 1059–1064, Nara, Japan, April 2003 [7] M Z Ikram and D R Morgan, “Permutation inconsistency in blind speech separation: investigation and solutions,” IEEE Transactions on Speech and Audio Processing, vol 13, no 1, pp 1–13, 2005 [8] O Yilmaz and S Rickard, “Blind separation of speech. .. Parra and C Spence, Convolutive blind separation of nonstationary sources,” IEEE Transactions on Speech and Audio Processing, vol 8, no 3, pp 320–327, 2000 [3] A Jourjine, S Rickard, and O Yilmaz, “Blind separation of disjoint orthogonal signals: demixing N sources from 2 mixtures,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’00), vol 5, pp 2985–2988,... problems in a principled and optimal manner ACKNOWLEDGMENT This work was partially supported by the Israeli Science Foundation (ISF) REFERENCES [1] K Torkkola, “Blind separation for audio signals—are we there yet?” in Proceedings of 1st International Workshop on Independent Component Analysis and Blind Signal Separation (ICA ’99), pp 239–244, Aussois, France, January 1999 [2] L Parra and C Spence, Convolutive. .. masking,” IEEE Transactions on Signal Processing, vol 52, no 7, pp 1830–1847, 2004 [9] A Steinhardt, “Adaptive multisensor detection and estimation,” in Adaptive Radar Detection and Estimation, S Haykin and A Steinhardt, Eds., pp 91–160, John Wiley & Sons, New York, NY, USA, 1992 [10] D W E Schobben, K Torkkola, and P Smaragdis, “Evaluation of blind signal separation methods,” in Proceedings of 1st International... Israel, in 1986, 1992, and 1997, respectively During 1996–1998, he was with the Department of Electrical and Computer Engineering, Duke University, Durham, NC, as an Assistant Research Professor He is now a Faculty Member in the Department of Electrical and Computer Engineering, BenGurion University of the Negev, Beer-Sheva, Israel His research interests include statistical signal processing, source. .. Brown, Speech segregation based on sound localization,” The Journal of the Acoustical Society of America, vol 114, no 4, pp 2236–2252, 2003 [5] C Fevotte and C Doncarli, “Two contributions to blind source separation using time-frequency distributions,” IEEE Signal Processing Letters, vol 11, no 3, pp 386–389, 2004 [6] Y Deville, “Temporal and time-frequency correlation-based blind source separation. .. localization, and speech and audio processing He served as an Associate Editor of the IEEE Transactions on Signal Processing during 2001–2004 Miki Arnon-Targan received the B.S degree from the Department of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel, in 1998, where he is now studying towards the M.S degree He currently works as an Electrical Engineer in the Israel... Pompidou, Paris During 1998–2003, he was a Senior Lecturer in the Department of Communication Systems Engineering at Ben-Gurion University of the Negev, BeerSheva, Israel He is now an Associate Professor in the Department of Music and a Researcher at New Media Arts, CALIT2, University of California, San Diego Joseph Tabrikian received the B.S., M.S., and Ph.D degrees in electrical engineering from the Tel-Aviv... International Workshop on Independent Component Analysis and Blind Signal Separation (ICA ’99), Aussois, France, January 1999 11 Shlomo Dubnov received the Ph.D degree in computer science from Hebrew University of Jerusalem, Jerusalem, Israel He holds also a B.A degree in music composition from the Rubin Academy of Music and Dance, Jerusalem From 1996 to 1998, he worked as Invited Researcher at IRCAM,... at single-TF instances The advantage of our approach is that it allows transfer function estimation even in difficult conditions where the amplitudes of the mixed signals are approximately equal, and it can operate for both wideband and narrowband sources The current work successfully extends common BSS methods that use a single-TF detection criterion to the convolutive case The paper formulates single-TF . two sources in a stereo recording) using filtering or time-frequency masking [8], in a manner similar to that of the W-disjoint situation. This paper is structured in the following manner: in the next. and meaningful result for source separation. In order to identify the case of a single source, two tests are performed. In the first, the hypotheses H 0 and H 1 are tested, w hile in the second, hypotheses. In case of more microphones than sources, the pseudoinverse of the mixing matrix should be used. In case of more sources than microphones, source separation can be approximately performed using

Ngày đăng: 22/06/2014, 23:20

TÀI LIỆU CÙNG NGƯỜI DÙNG

TÀI LIỆU LIÊN QUAN