Signal Processing - Part 1


Signal Processing

Edited by Sebastian Miron

Published by In-Teh, Olajnica 19/2, 32000 Vukovar, Croatia

Abstracting and non-profit use of the material is permitted with credit to the source. Statements and opinions expressed in the chapters are those of the individual contributors and not necessarily those of the editors or publisher. No responsibility is accepted for the accuracy of information contained in the published articles. The publisher assumes no responsibility or liability for any damage or injury to persons or property arising out of the use of any materials, instructions, methods or ideas contained herein. After this work has been published by In-Teh, authors have the right to republish it, in whole or in part, in any publication of which they are an author or editor, and to make other personal use of the work.

© 2010 In-Teh, www.intechweb.org. Additional copies can be obtained from: publication@intechweb.org

First published March 2010. Printed in India.
Technical Editor: Maja Jakobovic. Cover designed by Dino Smrekar.
Signal Processing, edited by Sebastian Miron. ISBN 978-953-7619-91-6

Preface

The exponential development of sensor technology and computer power over the last few decades has transformed signal processing into an essential tool for a wide range of domains, such as telecommunications, medicine and chemistry. Signal processing nowadays plays a key role in the progress of knowledge, from discoveries about the underlying structure of the universe to recent breakthroughs in the understanding of the subatomic structure of matter. The Internet, GSM, GPS and HDTV technologies are likewise indebted to the accelerated evolution of signal processing methods. Today, a major challenge in this domain is the development of fast and efficient algorithms capable of dealing with the huge amount of data provided by modern sensor technology.
This book intends to provide highlights of current research in the signal processing area and to offer a snapshot of recent advances in this field. The work is mainly intended for researchers in signal processing related areas, but it is also accessible to anyone with a scientific background who desires an up-to-date overview of the domain. The twenty-five chapters present methodological advances and recent applications of signal processing algorithms in various domains, such as telecommunications, array processing, biology, cryptography, and image and speech processing. The methodologies illustrated in this book, such as sparse signal recovery, are currently hot topics in the signal processing community. The editor would like to thank all the authors for their excellent contributions in the different areas of signal processing, and hopes that this book will be of valuable help to the readers.

January 2010
Sebastian Miron, Editor
Centre de Recherche en Automatique de Nancy, Nancy-Université, CNRS

Contents

Preface, p. V
1. New Adaptive Algorithms for the Rapid Identification of Sparse Impulse Responses (Mariane R. Petraglia), p. 1
2. Vector sensor array processing for polarized sources using a quadrilinear representation of the data covariance (Sebastian Miron, Xijing Guo and David Brie), p. 19
3. New Trends in Biologically-Inspired Audio Coding (Ramin Pichevar, Hossein Najaf-Zadeh, Louis Thibault and Hassan Lahdili), p. 37
4. Constructing wavelet frames and orthogonal wavelet bases on the sphere (Daniela Roşca and Jean-Pierre Antoine), p. 59
5. MIMO Channel Modelling (Faisal Darbari, Robert W. Stewart and Ian A. Glover), p. 77
6. Finite-context models for DNA coding (Armando J. Pinho, António J. R. Neves, Daniel A. Martins, Carlos A. C. Bastos and Paulo J. S. G. Ferreira), p. 117
7. Space-filling Curves in Generating Equidistributed Sequences and Their Properties in Sampling of Images (Ewa Skubalska-Rafajłowicz and Ewaryst Rafajłowicz), p. 131
8. Sparse signal decomposition for periodic signal mixtures (Makoto Nakashizuka), p. 151
9. Wavelet-based techniques in MRS (A. Suvichakorn, H. Ratiney, S. Cavassila and J.-P. Antoine), p. 167
10. Recent Fingerprinting Techniques with Cryptographic Protocol (Minoru Kuribayashi), p. 197
11. Semiparametric curve alignment and shift density estimation: ECG data processing revisited (T. Trigano, U. Isserles, T. Montagu and Y. Ritov), p. 217
12. Spatial prediction in the H.264/AVC FRExt coder and its optimization (Simone Milani), p. 241
13. Detection of Signals in Nonstationary Noise via Kalman Filter-Based Stationarization Approach (Hiroshi Ijima and Akira Ohsumi), p. 263
14. Direct Design of Infinite Impulse Response Filters based on Allpole Filters (Alfonso Fernandez-Vazquez and Gordana Jovanovic Dolecek), p. 275
15. Robust Unsupervised Speaker Segmentation for Audio Diarization (Hachem Kadri, Manuel Davy and Noureddine Ellouze), p. 307
16. New directions in lattice based lossy compression (Adriana Vasilache), p. 321
17. Segmented Online Neural Filtering System Based On Independent Components Of Pre-Processed Information (Rodrigo Torres, Eduardo Simas Filho, Danilo de Lima and José de Seixas), p. 337
18. Practical Source Coding with Side Information (Lorenzo Cappellari), p. 359
19. Crystal-like Symmetric Sensor Arrangements for Blind Decorrelation of Isotropic Wavefield (Nobutaka Ono and Shigeki Sagayama), p. 385
20. Phase Scrambling for Image Matching in the Scrambled Domain (Hitoshi Kiya and Izumi Ito), p. 397
21. Fast Algorithms for Inventory Based Speech Enhancement (Robert M. Nickel, Tomohiro Sugimoto and Xiaoqiang Xiao), p. 415
22. Compression of microarray images (António J. R. Neves and Armando J. Pinho), p. 429
23. Roundoff Noise Minimization for State-Estimate Feedback Digital Controllers Using Joint Optimization of Error Feedback and Realization (Takao Hinamoto, Keijiro Kawai, Masayoshi Nakamoto and Wu-Sheng Lu), p. 449
24. Signal processing for non-invasive brain biomarkers of sensorimotor performance and brain monitoring (Rodolphe J. Gentili, Hyuk Oh, Trent J. Bradberry, Bradley D. Hatfield and José L. Contreras-Vidal), p. 461
25. The use of low-frequency ultrasonics in speech processing (Farzaneh Ahmadi and Ian McLoughlin), p. 503

New Adaptive Algorithms for the Rapid Identification of Sparse Impulse Responses

Mariane R. Petraglia
Federal University of Rio de Janeiro, Brazil

1. Introduction

It is well known that the convergence of adaptive filtering algorithms becomes slow when the number of coefficients is very large. However, in many applications, such as digital network and acoustic echo cancellers, the system being modeled has a sparse impulse response; that is, most of its coefficients have small magnitudes. The classical adaptation approaches, such as the least-mean-square (LMS) and recursive least squares (RLS) algorithms, do not take into account the sparseness characteristics of such systems.

In order to improve the convergence for these applications, several algorithms have been proposed recently which employ individual step-sizes for the updating of the different coefficients. The adaptation step-sizes are made larger for the coefficients with larger magnitudes, resulting in faster convergence for the most significant coefficients. This idea was first introduced in (Duttweiler, 2000), resulting in the so-called proportionate normalized least mean square (PNLMS) algorithm. However, the performance of the PNLMS algorithm for the identification of non-sparse impulse responses can be very poor, even slower than that of the conventional LMS algorithm. An improved version of this algorithm, which employs an extra parameter to control the amount of proportionality in the step-size normalization, was proposed in (Benesty & Gay, 2002).
An observed characteristic of the PNLMS algorithm is a rapid initial convergence, due to the fast adaptation of the large-value coefficients, followed by a significant performance degradation, owing to the slow adaptation of the small-value coefficients. Such behavior is more pronounced in the modeling of not very sparse impulse responses. In order to reduce this problem, the application of a non-linear function to the coefficients in the step-size normalization was proposed in (Deng & Doroslovacki, 2006).

The well-known slow convergence of gradient algorithms for colored input signals is also observed in the proportionate-type NLMS algorithms. Implementations that combine the ideas of the PNLMS and transform-domain adaptive algorithms were proposed in (Deng & Doroslovacki, 2007) and (Petraglia & Barboza, 2008) to accelerate the convergence for colored input signals.

In this chapter, we give an overview of the most important adaptive algorithms developed for the fast identification of systems with sparse impulse responses. The convergence of these algorithms is compared through computer simulations for the identification of the channel impulse responses in a digital network echo cancellation application.

2. Sparse Impulse Response Systems

Sparse impulse responses are encountered in several applications, such as in acoustic and digital network echo cancellers. The adaptive filters employed in the modeling of the unknown system in such applications have a small number of coefficients with significant magnitude. Figure 1 illustrates the modeling of an unknown system w_o, which is assumed to be linear, time-invariant and of finite impulse response length N, by an adaptive filter. The vector containing the adaptive filter coefficients is denoted as w(n) = [w_0(n) w_1(n) ··· w_{N−1}(n)]^T and its input vector as x(n) = [x(n) x(n−1) ··· x(n−N+1)]^T.
The adaptive filter output is denoted as y(n), the desired response as d(n) and the estimation error as e(n). One of the most widely used adaptation techniques is the normalized least mean-square (NLMS) algorithm, shown in Table 1, where β is a fixed step-size factor and δ is a small constant needed to avoid division by zero. As in Table 1 for the NLMS algorithm, typical values of the initialization parameters are given for all algorithms studied in this chapter.

Fig. 1. System identification through adaptive filtering.

Table 1. NLMS Algorithm
  Initialization (typical values): δ = 0.01, β = 0.25
    w(0) = [w_0(0) w_1(0) ··· w_{N−1}(0)]^T = 0
  Processing and Adaptation: for n = 0, 1, 2, ...
    x(n) = [x(n) x(n−1) ··· x(n−N+1)]^T
    y(n) = x^T(n) w(n)
    e(n) = d(n) − y(n)
    w(n+1) = w(n) + β x(n) e(n) / (x^T(n) x(n) + δ)

Adaptive algorithms that take into account the sparseness of the unknown system impulse response, described in the next sections, have been developed recently. The convergence behavior of such algorithms depends on how sparse the modeled impulse response is. A sparseness measure of an N-length impulse response w was proposed in (Hoyer, 2004) as

  ξ_w = (N / (N − √N)) · (1 − ||w||_1 / (√N ||w||_2))    (1)

where ||w||_l is the l-norm of the vector w. It should be observed that 0 ≤ ξ_w ≤ 1, that ξ_w = 0 when all elements of w are equal in magnitude (non-sparse impulse response), and that ξ_w = 1 when only one element of w is non-zero (the sparsest impulse response).

In the simulations presented throughout this chapter, the identification of the digital network channels of ITU-T Recommendation G.168 (G.168, 2004) by an adaptive filter with N = 512 coefficients will be considered. Figures 2(a) and 2(b) show the impulse responses of the most and least sparse digital network channel models (gm1 and gm4, respectively) described in (G.168, 2004).
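The recursion of Table 1 and the measure of Eq. (1) translate directly into code. The sketch below is a plain NumPy transcription under the chapter's notation; the function names are illustrative, not from the text.

```python
import numpy as np

def nlms(x, d, N, beta=0.25, delta=0.01):
    """NLMS adaptation of Table 1: w(n+1) = w(n) + beta*x(n)*e(n)/(x^T(n)x(n) + delta)."""
    w = np.zeros(N)
    e = np.zeros(len(x))
    for n in range(len(x)):
        xn = np.zeros(N)                  # x(n) = [x(n) x(n-1) ... x(n-N+1)]^T
        m = min(n + 1, N)
        xn[:m] = x[n::-1][:m]
        y = xn @ w                        # y(n) = x^T(n) w(n)
        e[n] = d[n] - y                   # e(n) = d(n) - y(n)
        w = w + beta * xn * e[n] / (xn @ xn + delta)
    return w, e

def sparseness(w):
    """Hoyer sparseness measure of Eq. (1): 0 for equal magnitudes, 1 for one non-zero tap."""
    w = np.asarray(w, dtype=float)
    N = len(w)
    l1 = np.abs(w).sum()
    l2 = np.sqrt((w * w).sum())
    return (N / (N - np.sqrt(N))) * (1.0 - l1 / (np.sqrt(N) * l2))
```

For instance, identifying a short sparse response from noiseless input/output data recovers its taps, and the measure behaves exactly as stated below Eq. (1) for the constant-magnitude and single-spike extremes.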
Figure 2(c) presents the gm4 channel impulse response with white noise (uniformly distributed in [−0.05, 0.05]) added to it, so as to simulate a non-sparse system. The corresponding sparseness measures are ξ_w = 0.8970 for the gm1 channel, ξ_w = 0.7253 for the gm4 channel and ξ_w = 0.2153 for the gm4-plus-noise channel.

Fig. 2. Channel impulse responses: (a) gm1, (b) gm4 and (c) gm4+noise.

3. Proportionate-type NLMS Algorithms

The proportionate-type NLMS algorithms employ a different step-size for each coefficient, such that larger adjustments are applied to the larger coefficients (or active coefficients). [...]

[The remaining pages of the chapter are only partially visible in this preview; the recoverable figure captions and fragments follow.]

Fig. 3. MSE evolution for the PNLMS and NLMS algorithms for white noise input and channels (a) gm1, (b) gm4 and (c) gm4+noise.
Fig. 4. MSE evolution for the IPNLMS and NLMS algorithms for white noise input and channels (a) gm1, (b) gm4 and (c) gm4+noise.
Fig. 5. MSE evolution for the MPNLMS and NLMS algorithms for white noise input and channels (a) gm1, (b) gm4 and (c) gm4+noise.
Fig. 6. MSE evolution for the SPNLMS and NLMS algorithms and channels (a) gm1, (b) gm4 and (c) gm4+noise.
Fig. 7. MSE evolution for the IMPNLMS and NLMS algorithms and channels (a) gm1, (b) gm4 and (c) gm4+noise.
Fig. 8. MSE evolution for the MPNLMS, IMPNLMS and NLMS algorithms and channels (a) gm1, (b) gm4 and (c) gm4+noise.
Fig. 9. MSE evolution for the WMPNLMS-TD with Haar, Db2 and Db4 wavelets for colored noise input and channels (a) gm1, (b) gm4 and (c) gm4+noise.
Fig. (number cut off in preview). MSE evolution for the WMPNLMS-SF with Haar, Db2, Db4 and Bior4.4 wavelets and channels (a) gm1, (b) gm4 and (c) gm4+noise.
Fig. 10. Adaptive subband structure composed of a wavelet transform and sparse subfilters.

From the visible fragments, the IMPNLMS algorithm (Table 7 of the original pages, where x_k(n) is the input signal of ...) estimates the channel sparseness online from the current coefficient vector, applying Eq. (1) to w(n) and smoothing the result:

  ξ_w(n) = (N / (N − √N)) · (1 − Σ_{j=0}^{N−1} |w_j(n)| / (√N · √(Σ_{j=0}^{N−1} |w_j(n)|²)))
  ξ(n) = (1 − λ) ξ(n − 1) + λ ξ_w(n)

with typical initialization δ = 0.01, β = 0.25, λ = 0.1 and ξ(−1) = 0.96. An initialization fragment for the subband (WMPNLMS-SF) algorithm is also legible: δ_p = δ = 0.01, β = 0.25, ρ = 0.01, with w_k(0) = 0 for k = 0, 1, ..., M − 1. The colored input signal is generated with the transfer function of Eq. (5), legible as H(z) = 0.25√3 / (1 − 1.5z⁻¹ − 0.25z⁻²).
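The MSE comparisons summarized in the captions of Figs. 3-8 can be mimicked in outline with a toy experiment. This is not the chapter's setup (which uses the G.168 channels and N = 512): the sketch below uses a short synthetic sparse path, white input, and the standard PNLMS gain rule rather than any specific variant from the omitted tables.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
w_o = np.zeros(N)
w_o[5], w_o[20] = 1.0, -0.3                     # sparse "echo path"
x = rng.standard_normal(20000)
d = np.convolve(x, w_o)[:len(x)]                # noiseless desired signal

def run(proportionate, beta=0.25, delta=0.01, rho=0.01):
    """Identify w_o with plain NLMS (proportionate=False) or PNLMS (True)."""
    w = np.zeros(N)
    sq_err = []
    for n in range(N, len(x)):
        xn = x[n:n - N:-1]                      # [x(n) ... x(n-N+1)]
        e = d[n] - xn @ w
        if proportionate:
            gamma = np.maximum(rho * max(delta, np.abs(w).max()), np.abs(w))
            g = gamma / gamma.mean()            # magnitude-proportional gains
        else:
            g = np.ones(N)
        w = w + beta * g * xn * e / ((g * xn) @ xn + delta)
        sq_err.append(e * e)
    return w, np.array(sq_err)

w_nlms, err_nlms = run(False)
w_pnlms, err_pnlms = run(True)
```

On such a sparse path the proportionate run reaches a low error in far fewer samples, which is the rapid initial convergence discussed in Section 1; on a dense path the ranking can reverse, as the chapter notes for PNLMS.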
