
Signal Processing - Part 16




DOCUMENT INFORMATION

Basic information

Format
Number of pages: 30
File size: 829.09 KB

Content

SignalProcessing444 (a) BP = 14 (b) BP = 13 ? ? ? ? (c) BP = 12 ? (d) BP = 11 (e) BP = 10 Fig. 10. Context configuration obtained by the proposed method in five different bitplanes of the image “1230c1G”: (a) when encoding bitplane 14 (seven bits of context); (b) when encoding bitplane 13 (11 bits of context); (c) when encoding bitplane 12 (13 bits of context); (d) when encoding bitplane 11 (17 bits of context); (e) when encoding bitplane 10 (20 bits of context). Context positions falling outside the image at the image borders are considered as having zero value. approximately 21 million pixels) required about 220 minutes to compress when the whole image was used to performed the search. When we used a region of 256 × 256 pixels, it required approximately 6 minutes to compress the MicroZip test set (about 2 minutes more than the image-independent approach). These three images have sizes of 1916 ×1872, 5496 × 1956 and 3625 ×1929 pixels. Decoding is faster, because the decoder does not have to search for the best context: that information is embedded in the bitstream. 6. Experimental results Table 4 shows the average compression results, in bits per pixel, for the three sets of images described previously (see Section 3). In this table, we present experimental results of both the image-independent and the image-dependent approaches. We also include results obtained with SPIHT (Said and Pearlman, 1996) 4 and EIDAC (Yoo et al., 1998). Comparing with the results presented in Table 1, we can see that the fast version of the image- dependent method (indicated as “256 ×256” in the table) is 6.3% better than JBIG, 4.7% bet- ter than JPEG-LS and 8.6% better than lossless JPEG2000. It is important to remember that JPEG-LS does not provide progressive decoding, a characteristic that is intrinsic to the image- dependent multi-bitplane finite-context method and also to JPEG2000 and JBIG. From the re- sults presented in Table 4, it can also be seen that using an area of 256 ×256 pixels in the center of the image for finding the context, instead of the whole image, leads to a small degradation in the performance (about 0.3%), showing the appropriateness of this approach. 4 SPIHT codec from http://www.cipr.rpi.edu/research/SPIHT/ (version 8.01). Image set SPIHT EIDAC Image Image-dependent independent 256 ×256 Full APO_AI 10.812 10.543 10.280 10.225 10.194 ISREC 11.098 10.446 10.199 10.198 10.158 MicroZip 9.198 8.837 8.840 8.667 8.619 Average 10.378 10.005 9.826 9.741 9.708 Table 4. Average compression results, in bits per pixel, using SPIHT, EIDAC, the image- independent and the image-dependent methods. The “256 × 256” column indicates results obtained with a context model adjusted using only a square of 256 ×256 pixels at the center of the microarray image, whereas “Full” indicates that the search was performed in the whole image. The average results presented take into account the different sizes of the images, i.e., they correspond to the total number of bits divided by the total number of image pixels. Table 5 confirms the performance of the image-dependent method relatively to two recent specialized methods for compressing microarray images: MicroZip (Lonardi and Luo, 2004) and Zhang’s method (Adjeroh et al., 2006; Zhang et al., 2005). As can be observed, the image- dependent multi-bitplane finite-context method provides compression gains of 9.1% relatively to MicroZip and 6.2% in relation to Zhang’s method, on a set of test images that has been used by all these methods. 
Table 5 confirms the performance of the image-dependent method relative to two recent specialized methods for compressing microarray images: MicroZip (Lonardi and Luo, 2004) and Zhang’s method (Adjeroh et al., 2006; Zhang et al., 2005). As can be observed, the image-dependent multi-bitplane finite-context method provides compression gains of 9.1% relative to MicroZip and 6.2% in relation to Zhang’s method, on a set of test images that has been used by all these methods.

Images    MicroZip   Zhang    Image-independent   Image-dependent (256 × 256)   Image-dependent (Full)
array1    11.490     11.380   11.105              11.120                        11.056
array2     9.570      9.260    8.628               8.470                         8.423
array3     8.470      8.120    7.962               7.717                         7.669
Average    9.532      9.243    8.840               8.667                         8.619

Table 5. Compression results, in bits per pixel, using two specialized methods, MicroZip and Zhang’s method, the image-independent method and the image-dependent method. The “256 × 256” column indicates results obtained with a context model adjusted using only a square of 256 × 256 pixels at the center of the microarray image, whereas “Full” indicates that the search was performed in the whole image.

Figure 11 shows, for three different images, the average number of bits per pixel needed for representing each bitplane. As expected, this value generally increases when going from the most significant bitplanes to the least significant bitplanes. For the images “Def661Cy3” and “1230c1G”, it can be seen that the average number of bits per pixel required by the eight least significant bitplanes is close to one, as pointed out by Jörnsten et al. (2003). However, image “array3” shows a different behavior. Because this image is less noisy, the compression algorithm is able to exploit redundancies even in the lower bitplanes. This is done without compromising the compression efficiency of noisy images, due to the mechanism that monitors and controls the average number of bits per pixel required for encoding each bitplane.

The maximum number of context bits that we allowed for building the contexts was limited to 20. Since the coding alphabet is binary, this implies, at most, 2 × 2^20 = 2 097 152 counters, which can be stored in approximately 8 MBytes of computer memory.
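The memory estimate above follows from the counter bookkeeping of a binary context model: at most 20 context bits index at most 2^20 contexts, each holding two counters. The Python sketch below, which assumes 16-bit pixels and 32-bit counters (assumptions, not stated in the text), splits a toy image into bitplanes and sizes such a counter table; it illustrates the bookkeeping only, not the authors' implementation.

```python
import numpy as np

# Bitplane decomposition of a 16-bit image and sizing of a binary
# finite-context counter table (two counters per context).

MAX_CONTEXT_BITS = 20          # limit used in the chapter
COUNTER_BYTES = 4              # assumption: 32-bit counters

def bitplanes(img16):
    """Return bitplanes 15 (MSB) down to 0 (LSB) as 0/1 arrays."""
    return [(img16 >> bp) & 1 for bp in range(15, -1, -1)]

def counter_table_bytes(context_bits):
    """Memory for two counters per context of `context_bits` binary pixels."""
    return 2 * (1 << context_bits) * COUNTER_BYTES

# Toy 16-bit "image" with mostly small values, so the most significant
# bitplanes are almost entirely zero (cf. the trend discussed for Fig. 11).
rng = np.random.default_rng(0)
img = np.clip(rng.gamma(shape=1.0, scale=2000.0, size=(64, 64)), 0, 65535).astype(np.uint16)

fractions = [round(float(p.mean()), 3) for p in bitplanes(img)]
print("fraction of 1s per bitplane (MSB..LSB):", fractions)

print("counters for 20-bit contexts:", 2 * (1 << MAX_CONTEXT_BITS))       # 2 097 152
print("approx. memory: %.1f MiB" % (counter_table_bytes(MAX_CONTEXT_BITS) / 2**20))
```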
In a 2 GHz Pentium 4 computer with 512 MBytes of memory, the image-dependent algorithm required about six minutes to compress the MicroZip test set (note that this compression time is only indicative, because the code has not been optimized for speed). Decoding is faster, because the decoder does not have to search for the best context. Just for comparison, the codecs of the compression standards took approximately one minute to encode the same set of images.

Fig. 11. Average number of bits per pixel required for encoding each bitplane of three different microarray images (one from each test set): “Def661Cy3”, “1230c1G” and “array3”.

7. Conclusions

The use of microarray expression data in state-of-the-art biology is well established. The widespread adoption of this technology, coupled with the significant volume of data generated per experiment in the form of images, has led to significant challenges in storage and query-retrieval. In this work, we have studied the problem of coding this type of image. We presented a set of comprehensive results regarding the lossless compression of microarray images by state-of-the-art image coding standards, namely lossless JPEG2000, JBIG and JPEG-LS. From the experimental results obtained, we conclude that JPEG-LS gives the best lossless compression performance. However, it lacks lossy-to-lossless capability, which may be a decisive functionality if remote transmission over possibly slow links is a requirement. JBIG and lossless JPEG2000 do comply with this requirement, lossless JPEG2000 being the best in terms of rate-distortion in the sense of the L2-norm and JBIG the most efficient when considering the L∞-norm. Moreover, JBIG is consistently better than lossless JPEG2000 regarding lossless compression ratios.

Motivated by these findings, we have developed efficient methods for lossless compression of microarray images, allowing progressive, lossy-to-lossless decoding. These methods are based on bitplane compression using image-independent or image-dependent finite-context models and arithmetic coding. They do not require gridding and/or segmentation, as most of the specialized methods that have been proposed do. This may be an advantage if only compression is sought, since it reduces the complexity of the method. Moreover, since they do not require gridding, they are robust, for example, against layout changes in spot placement. The results obtained by the multi-bitplane context-based methods have been compared with the three image coding standards and with two recent specialized methods: MicroZip and Zhang’s method. The results show that these new methods have better compression performance on all image test sets used.

8. References

Adjeroh, D., Y. Zhang, and R. Parthe (2006, February). On denoising and compression of DNA microarray images. Pattern Recognition 39, 2478–2493.
Bell, T. C., J. G. Cleary, and I. H. Witten (1990). Text compression. Prentice Hall.
Faramarzpour, N. and S. Shirani (2004, March). Lossless and lossy compression of DNA microarray images. In Proc. of the Data Compression Conf., DCC-2004, Snowbird, Utah, p. 538.
Faramarzpour, N., S. Shirani, and J. Bondy (2003, November). Lossless DNA microarray image compression. In Proc. of the 37th Asilomar Conf. on Signals, Systems, and Computers, 2003, Volume 2, pp. 1501–1504.
Hampel, H., R. B. Arps, C. Chamzas, D. Dellert, D. L. Duttweiler, T. Endoh, W. Equitz, F. Ono, R. Pasco, I. Sebestyen, C. J. Starkey, S. J. Urban, Y. Yamazaki, and T. Yoshida (1992, April). Technical features of the JBIG standard for progressive bi-level image compression. Signal Processing: Image Communication 4(2), 103–111.
Hegde, P., R. Qi, K. Abernathy, C. Gay, S. Dharap, R. Gaspard, J. Earle-Hughes, E. Snesrud, N. Lee, and J. Q. (2000, September). A concise guide to cDNA microarray analysis. Biotechniques 29(3), 548–562.
Hua, J., Z. Liu, Z. Xiong, Q. Wu, and K. Castleman (2003, September). Microarray BASICA: background adjustment, segmentation, image compression and analysis of microarray images. In Proc. of the IEEE Int. Conf. on Image Processing, ICIP-2003, Volume 1, Barcelona, Spain, pp. 585–588.
Hua, J., Z. Xiong, Q. Wu, and K. Castleman (2002, October). Fast segmentation and lossy-to-lossless compression of DNA microarray images. In Proc. of the Workshop on Genomic Signal Processing and Statistics, GENSIPS, Raleigh, NC.
ISO/IEC (1993, March). Information technology - Coded representation of picture and audio information - progressive bi-level image compression. International Standard ISO/IEC 11544 and ITU-T Recommendation T.82.
ISO/IEC (1999). Information technology - Lossless and near-lossless compression of continuous-tone still images. ISO/IEC 14495-1 and ITU-T Recommendation T.87.
ISO/IEC (2000a). Information technology - JPEG 2000 image coding system. ISO/IEC International Standard 15444-1, ITU-T Recommendation T.800.
ISO/IEC (2000b). JBIG2 bi-level image compression standard. International Standard ISO/IEC 14492 and ITU-T Recommendation T.88.
Jörnsten, R., W. Wang, B. Yu, and K. Ramchandran (2003). Microarray image compression: SLOCO and the effect of information loss. Signal Processing 83, 859–869.
Jörnsten, R. and B. Yu (2000, March). Comprestimation: microarray images in abundance. In Proc. of the Conf. on Information Sciences, Princeton, NJ.
Jörnsten, R. and B. Yu (2002, July). Compression of cDNA microarray images. In Proc. of the IEEE Int. Symposium on Biomedical Imaging, ISBI-2002, Washington, DC, pp. 38–41.
Jörnsten, R., B. Yu, W. Wang, and K. Ramchandran (2002a, September). Compression of cDNA and inkjet microarray images. In Proc. of the IEEE Int. Conf. on Image Processing, ICIP-2002, Volume 3, Rochester, NY, pp. 961–964.
Jörnsten, R., B. Yu, W. Wang, and K. Ramchandran (2002b, October). Microarray image compression and the effect of compression loss. In Proc. of the Workshop on Genomic Signal Processing and Statistics, GENSIPS, Raleigh, NC.
Kothapalli, R., S. J. Yoder, S. Mane, and T. P. L. Jr (2002). Microarray results: how accurate are they? BMC Bioinformatics 3.
Leung, Y. F. and D. Cavalieri (2003, November). Fundamentals of cDNA microarray data analysis. Trends in Genetics 19(11), 649–659.
Lonardi, S. and Y. Luo (2004, August). Gridding and compression of microarray images. In Proc. of the IEEE Computational Systems Bioinformatics Conference, CSB-2004, Stanford, CA.
Moore, S. K. (2001, March). Making chips to probe genes. IEEE Spectrum 38(3), 54–60.
Netravali, A. N. and B. G. Haskell (1995). Digital pictures: representation, compression and standards (2nd ed.). New York: Plenum.
Neves, A. J. R. and A. J. Pinho (2006, October). Lossless compression of microarray images. In Proc. of the IEEE Int. Conf. on Image Processing, ICIP-2006, Atlanta, GA, pp. 2505–2508.
Neves, A. J. R. and A. J. Pinho (2009, February). Lossless compression of microarray images using image-dependent finite-context models. IEEE Trans. on Medical Imaging 28(2), 194–201.
Pinho, A. J. and A. J. R. Neves (2006, October). Lossy-to-lossless compression of images based on binary tree decomposition. In Proc. of the IEEE Int. Conf. on Image Processing, ICIP-2006, Atlanta, GA, pp. 2257–2260.
Rissanen, J. (1983, September). A universal data compression system. IEEE Trans. on Information Theory 29(5), 656–664.
Rissanen, J. and G. G. Langdon, Jr. (1981, January). Universal modeling and coding. IEEE Trans. on Information Theory 27(1), 12–23.
Said, A. and W. A. Pearlman (1996, June). A new, fast, and efficient image codec based on set partitioning in hierarchical trees. IEEE Trans. on Circuits and Systems for Video Technology 6(3), 243–250.
Salomon, D. (2000). Data compression - The complete reference (2nd ed.). Springer.
Sasik, R., C. H. Woelk, and J. Corbeil (2004, August). Microarray truths and consequences. Journal of Molecular Endocrinology 33(1), 1–9.
Sayood, K. (2000). Introduction to data compression (2nd ed.). Morgan Kaufmann.
Skodras, A., C. Christopoulos, and T. Ebrahimi (2001, September). The JPEG 2000 still image compression standard. IEEE Signal Processing Magazine 18(5), 36–58.
Taubman, D. S. and M. W. Marcellin (2002). JPEG 2000: image compression fundamentals, standards and practice. Kluwer Academic Publishers.
Weinberger, M. J., G. Seroussi, and G. Sapiro (2000, August). The LOCO-I lossless image compression algorithm: principles and standardization into JPEG-LS. IEEE Trans. on Image Processing 9(8), 1309–1324.
Yoo, Y., Y. G. Kwon, and A. Ortega (1998, November). Embedded image-domain adaptive compression of simple images. In Proc. of the 32nd Asilomar Conf. on Signals, Systems, and Computers, Volume 2, Pacific Grove, CA, pp. 1256–1260.
Zhang, Y., R. Parthe, and D. Adjeroh (2005, August). Lossless compression of DNA microarray images. In Proc. of the IEEE Computational Systems Bioinformatics Conference, CSB-2005, Stanford, CA.

Roundoff Noise Minimization for State-Estimate Feedback Digital Controllers Using Joint Optimization of Error Feedback and Realization

Takao Hinamoto, Keijiro Kawai, Masayoshi Nakamoto and Wu-Sheng Lu

1. INTRODUCTION

Due to the finite-precision nature of computer arithmetic, output roundoff noise usually arises in a fixed-point IIR digital filter. This noise depends critically on the internal structure of the filter [1],[2]. Error feedback (EF) is known as an effective technique for reducing the output roundoff noise in an IIR digital filter [3]-[5]. Williamson [6] reduced the output roundoff noise more effectively by choosing the filter structure and applying EF to the filter. Lu and Hinamoto [7] developed a jointly optimized technique of EF and realization to minimize the effects of roundoff noise at the filter output subject to l2-norm dynamic-range scaling constraints. Li and Gevers [8] analyzed the output roundoff noise of the closed-loop system with a state-estimate feedback controller, and presented an algorithm for realizing the state-estimate feedback controller with minimum output roundoff noise under l2-norm dynamic-range scaling constraints. Hinamoto and Yamamoto [9] proposed a method for applying EF to a given closed-loop system with a state-estimate feedback controller.

This paper investigates the problem of jointly optimizing EF and realization for the closed-loop system with a state-estimate feedback controller so as to minimize the output roundoff noise subject to l2-norm dynamic-range scaling constraints. To this end, the problem at hand is converted into an unconstrained optimization problem by using linear-algebraic techniques, and then an iterative technique that relies on a quasi-Newton algorithm [10] is developed. With a closed-form formula for gradient evaluation and an efficient quasi-Newton solver, the unconstrained optimization problem can be solved efficiently. Our computer simulation results demonstrate the validity and effectiveness of the proposed technique.
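The introduction above reduces the design to an unconstrained problem solved with a quasi-Newton method and a closed-form gradient. Purely as a generic illustration of that pattern, the sketch below minimizes a stand-in objective (not the chapter's noise-gain measure) with SciPy's BFGS solver and an analytic gradient.

```python
import numpy as np
from scipy.optimize import minimize

# Generic quasi-Newton (BFGS) illustration with an analytic gradient.
# The objective is a placeholder, not the chapter's noise-gain measure;
# it only mimics the pattern "unconstrained problem + closed-form gradient".

def objective(x):
    return 0.5 * np.dot(x, x) + 0.25 * np.sum(x ** 4)

def gradient(x):
    # closed-form gradient of the placeholder objective
    return x + x ** 3

x0 = np.array([2.0, -1.5, 0.5])
result = minimize(objective, x0, jac=gradient, method="BFGS",
                  options={"gtol": 1e-8})

print("minimizer :", result.x)     # expected: close to the origin
print("iterations:", result.nit)
```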
Throughout the paper, $I_n$ stands for the identity matrix of dimension $n \times n$, the transpose (conjugate transpose) of a matrix $A$ is indicated by $A^T$ ($A^*$), and the trace and $i$th diagonal element of a square matrix $A$ are denoted by $\mathrm{tr}[A]$ and $(A)_{ii}$, respectively.

2. ROUNDOFF NOISE ANALYSIS

Consider a stable, controllable and observable linear discrete-time system described by

$$x(k+1) = A_o x(k) + b_o u(k), \qquad y(k) = c_o x(k) \qquad (1)$$

where $x(k)$ is an $n \times 1$ state-variable vector, $u(k)$ is a scalar input, $y(k)$ is a scalar output, and $A_o$, $b_o$ and $c_o$ are $n \times n$, $n \times 1$ and $1 \times n$ real constant matrices, respectively. The transfer function of the linear system in (1) is given by

$$H_o(z) = c_o (zI_n - A_o)^{-1} b_o. \qquad (2)$$

If a regulator is designed by using the full-order state observer, we obtain a state-estimate feedback controller as

$$\tilde{x}(k+1) = F_o \tilde{x}(k) + b_o u(k) + g_o y(k) = R_o \tilde{x}(k) + b_o r(k) + g_o y(k), \qquad u(k) = -k_o \tilde{x}(k) + r(k) \qquad (3)$$

where $\tilde{x}(k)$ is an $n \times 1$ state-variable vector in the full-order state observer, $g_o$ is an $n \times 1$ gain vector chosen so that all the eigenvalues of $F_o = A_o - g_o c_o$ are inside the unit circle in the complex plane, $k_o$ is a $1 \times n$ state-feedback gain vector chosen so that each of the eigenvalues of $A_o - b_o k_o$ is at a desirable location within the unit circle, $r(k)$ is a scalar reference signal, and $R_o = F_o - b_o k_o$. The closed-loop control system consisting of the linear system in (1) and the state-estimate feedback controller in (3) is illustrated in Fig. 1.

Fig. 1. The closed-loop control system with a state-estimate feedback controller.

When performing quantization before matrix-vector multiplication, we can express the finite-word-length (FWL) implementation of (3) with error feedback as

$$\hat{x}(k+1) = R\,Q[\hat{x}(k)] + b\,r(k) + g\,y(k) + D\,e(k), \qquad u(k) = -k\,Q[\hat{x}(k)] + r(k) \qquad (4)$$

where $e(k) = \hat{x}(k) - Q[\hat{x}(k)]$ is an $n \times 1$ roundoff error vector and $D$ is an $n \times n$ error-feedback matrix. All coefficient matrices $R$, $b$, $g$ and $k$ are assumed to have an exact fractional $B_c$-bit representation. The FWL state-variable vector $\hat{x}(k)$ and the signal $u(k)$ have a $B$-bit fractional representation, while the reference input $r(k)$ is a $(B - B_c)$-bit fraction. The vector quantizer $Q[\cdot]$ in (4) rounds the $B$-bit fraction $\hat{x}(k)$ to $(B - B_c)$ bits after completing the multiplications and additions, where the sign bit is not counted. It is assumed that the roundoff error vector $e(k)$ can be modeled as a zero-mean noise process with covariance $\sigma^2 I_n$, where $\sigma^2 = \frac{1}{12}\,2^{-2(B - B_c)}$. If the $i$th element of the roundoff error vector $e(k)$ is denoted by $e_i(k)$ for $i = 1, 2, \ldots, n$, then $e_i(k)$ can be approximated by a white-noise sequence, uniformly distributed with the probability density function

$$p(e_i(k)) = \begin{cases} 2^{B - B_c} & \text{for } -\tfrac{1}{2}\,2^{-(B - B_c)} \le e_i(k) \le \tfrac{1}{2}\,2^{-(B - B_c)} \\ 0 & \text{otherwise.} \end{cases}$$

Fig. 2. A state-estimate feedback controller with error feedback.
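To make the quantizer model above concrete, the sketch below rounds a vector to (B − B_c) fractional bits and compares the empirical roundoff-error variance with σ² = 2^{−2(B−B_c)}/12; the word lengths B = 16 and B_c = 4 are arbitrary example values, not taken from the chapter.

```python
import numpy as np

# Rounding quantizer Q[.]: keep (B - Bc) fractional bits (sign bit not counted).
B, Bc = 16, 4                      # example word lengths, not from the chapter
frac_bits = B - Bc
step = 2.0 ** (-frac_bits)

def Q(x):
    """Round each component of x to (B - Bc) fractional bits."""
    return np.round(x / step) * step

rng = np.random.default_rng(1)
x_hat = rng.uniform(-1.0, 1.0, size=100_000)    # stand-in for state-vector samples

e = x_hat - Q(x_hat)                            # roundoff error e(k) = x^(k) - Q[x^(k)]

sigma2_model = (step ** 2) / 12.0               # sigma^2 = (1/12) * 2^(-2(B - Bc))
print("empirical error variance:", e.var())
print("model variance          :", sigma2_model)
print("|e| bounded by step/2   :", bool(np.max(np.abs(e)) <= step / 2 + 1e-15))
```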
The closed-loop system consisting of the linear system in (1) and the state-estimate feedback controller with error feedback in (4) is shown in Fig. 2 and is described by

$$\begin{bmatrix} x(k+1) \\ \hat{x}(k+1) \end{bmatrix} = A \begin{bmatrix} x(k) \\ \hat{x}(k) \end{bmatrix} + b\,r(k) + B\,e(k), \qquad y(k) = c \begin{bmatrix} x(k) \\ \hat{x}(k) \end{bmatrix} \qquad (5)$$

where

$$A = \begin{bmatrix} A_o & -b_o k \\ g c_o & R \end{bmatrix}, \quad b = \begin{bmatrix} b_o \\ b \end{bmatrix}, \quad B = \begin{bmatrix} b_o k \\ D - R \end{bmatrix}, \quad c = \begin{bmatrix} c_o & 0 \end{bmatrix}.$$

From (5), the transfer function from the roundoff error vector $e(k)$ to the output $y(k)$ is given by

$$G_D(z) = c\,(zI_{2n} - A)^{-1} B. \qquad (6)$$
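A small numerical check of (5)-(6): the sketch below assembles the augmented matrices A, b, B and c from example plant and controller data and evaluates the error-to-output transfer function G_D(z) on the unit circle. All numerical values (A_o, b_o, c_o, k_o, g_o and D = 0) are placeholders, not the chapter's design example.

```python
import numpy as np

# Assemble the augmented closed-loop matrices of (5) and evaluate G_D(z) of (6).
# A_o, b_o, c_o, k_o, g_o, D below are arbitrary small examples, not the
# chapter's design; R = F_o - b_o k_o with F_o = A_o - g_o c_o as in the text.

n = 2
A_o = np.array([[0.5, 0.2],
                [0.0, 0.3]])
b_o = np.array([[1.0], [0.5]])
c_o = np.array([[1.0, 0.0]])
k_o = np.array([[0.4, 0.1]])      # state-feedback gain (1 x n)
g_o = np.array([[0.3], [0.2]])    # observer gain (n x 1)
D   = np.zeros((n, n))            # no error feedback in this example

F_o = A_o - g_o @ c_o
R   = F_o - b_o @ k_o

A = np.block([[A_o,        -b_o @ k_o],
              [g_o @ c_o,   R        ]])
B = np.vstack([b_o @ k_o, D - R])
c = np.hstack([c_o, np.zeros((1, n))])

def G_D(z):
    """Error-to-output transfer function G_D(z) = c (zI - A)^{-1} B."""
    return c @ np.linalg.solve(z * np.eye(2 * n) - A, B)

print("closed-loop poles:", np.linalg.eigvals(A))       # should lie inside |z| < 1
print("|G_D| at z = e^{j*0.1}:", np.abs(G_D(np.exp(1j * 0.1))))
```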
The output noise gain $J(D) = \sigma_{\mathrm{out}}^2/\sigma^2$ is then computed as

$$J(D) = \mathrm{tr}[W_D] \qquad (7)$$

with

$$W_D = \frac{1}{2\pi j} \oint_{|z|=1} G_D^*(z)\, G_D(z)\, \frac{dz}{z} \qquad (8)$$

where $\sigma_{\mathrm{out}}^2$ stands for the noise variance at the output. For tractability, we evaluate $J(D)$ in (7) by replacing $R$, $b$, $g$ and $k$ by $R_o$, $b_o$, $g_o$ and $k_o$, respectively. Defining

$$S = \begin{bmatrix} I_n & 0 \\ I_n & -I_n \end{bmatrix}, \qquad (9)$$

the transfer function in (6) can be expressed as

$$G_D(z) = cS(zI_{2n} - S^{-1} A S)^{-1} S^{-1} B = c(zI_{2n} - \Phi)^{-1} \begin{bmatrix} b_o k_o \\ F_o - D \end{bmatrix} = c_o (zI_n - A_o + b_o k_o)^{-1} b_o k_o (zI_n - F_o)^{-1} (zI_n - D) = c(zI_{2n} - \Phi)^{-1} U (zI_n - D) \qquad (10)$$

where

$$\Phi = \begin{bmatrix} A_o - b_o k_o & b_o k_o \\ 0 & F_o \end{bmatrix}, \qquad U = \begin{bmatrix} 0 \\ I_n \end{bmatrix}.$$

It is noted that the stability of the closed-loop control system is determined by the eigenvalues of matrix $A$ in (5) or, equivalently, those of matrix $\Phi$ in (10). This means that neither the roundoff error vector $e(k)$ nor the error-feedback matrix $D$ affects the stability. Substituting (10) into matrix $W_D$ in (8) gives

$$W_D = (b_o k_o)^T W_1 b_o k_o + (b_o k_o)^T W_2 (F_o - D) + (F_o - D)^T W_3 b_o k_o + (F_o - D)^T W_4 (F_o - D) \qquad (11)$$

where

$$W = \Phi^T W \Phi + c^T c, \qquad W = \begin{bmatrix} W_1 & W_2 \\ W_3 & W_4 \end{bmatrix}.$$

Since $W$ is positive semidefinite, it can be shown that there exists an $n \times n$ matrix $P$ such that $W_3 = W_4 P$. In addition, by virtue of $W_2 = W_3^T$, (11) can be written as

$$W_D = (F_o + P b_o k_o - D)^T W_4 (F_o + P b_o k_o - D) + (b_o k_o)^T (W_1 - P^T W_4 P)\, b_o k_o. \qquad (12)$$

Alternatively, applying the z-transform to the first equation in (5) under the assumption that $e(k) = 0$, we obtain

$$\begin{bmatrix} X(z) \\ \hat{X}(z) \end{bmatrix} = (zI - A)^{-1} b\, R(z) \qquad (13)$$

where $X(z)$, $\hat{X}(z)$ and $R(z)$ represent the z-transforms of $x(k)$, $\hat{x}(k)$ and $r(k)$, respectively. Replacing $R$, $b$, $k$ and $g$ by $R_o$, $b_o$, $k_o$ and $g_o$, respectively, and then using

$$S^{-1}\begin{bmatrix} X(z) \\ \hat{X}(z) \end{bmatrix} = (zI_{2n} - S^{-1} A S)^{-1} S^{-1} b\, R(z)$$

yields

$$\hat{X}(z) = X(z) = F(z) R(z) \qquad (14)$$

where $F(z) = [zI_n - (A_o - b_o k_o)]^{-1} b_o$. The controllability Gramian $K$ defined by

$$K = \frac{1}{2\pi j} \oint_{|z|=1} F(z) F^*(z)\, \frac{dz}{z} \qquad (15)$$

can be obtained by solving the following Lyapunov equation:

$$K = (A_o - b_o k_o) K (A_o - b_o k_o)^T + b_o b_o^T. \qquad (16)$$

3. ROUNDOFF NOISE MINIMIZATION

Consider the system in (4) with $D = 0$ and denote it by $(R, b, g, k)_n$. By applying a coordinate transformation $\tilde{x}(k) = T^{-1} \hat{x}(k)$ to the above system $(R, b, g, k)_n$, we obtain a new realization characterized by $(\tilde{R}, \tilde{b}, \tilde{g}, \tilde{k})_n$ where

$$\tilde{R} = T^{-1} R T, \quad \tilde{b} = T^{-1} b, \quad \tilde{g} = T^{-1} g, \quad \tilde{k} = k T. \qquad (17)$$

For the system described by (17), the counterparts of $W_i$ for $i = 1, 2, 3, 4$ are given by

$$\tilde{W}_i = T^T W_i T \qquad (18)$$

[...] summarize these results and suggest future research directions.

2. Signal Processing Methods

The aim of this second section is not to provide an exhaustive presentation of all the existing processing methods for EEG and MEG signals, but rather to introduce some signal processing approaches for EEG and MEG signals to, first, pre-process the signal to remove artifacts and, then, to derive non-invasive functional [...]

[...] $\hat{r}_h(k)$, respectively) and then subtracted from the EEG signal under the assumption that the desired ocular-artifact-cleaned EEG signal is a zero-mean stationary random signal that is uncorrelated with the ocular artifacts and the two reference signals. Thus, the desired output produced by the whole system is the EEG signal without ocular artifacts. Hence the whole system can [...]
[...] used for linear prediction in order to model the signal of interest, here an EEG or MEG signal. Namely, the real EEG/MEG signal can be considered as the sum of the signal modeled by the AR filter and an error term. Thus, by taking the difference between the real EEG/MEG signal and the one filtered by the AR model, the prediction error can be determined (Fig. 3, a block diagram with input $x(k)$, predictor $\sum_{r=1}^{p} a_r x(k-r)$ and error output $e(k)$); a code sketch of this prediction-error computation is given further below. [...]

[...] Circuits Syst. (ISCAS'02), May 2002, vol. 1, pp. 289–292.
R. Fletcher, Practical Methods of Optimization, 2nd ed. New York: Wiley, 1987.

24. Signal processing for non-invasive brain biomarkers of sensorimotor performance and brain monitoring

Rodolphe J. Gentili, Hyuk Oh, Trent J. Bradberry, [...]

[...] in humans.

2.1 Pre-processing

During recording, EEG/MEG signals are generally corrupted with some undesirable artifacts such as body movements, muscular artifacts, eye movements, eye blinks, environmental noise or heart beat. These artifacts produce possible biases in the detection and interpretation of the brain biomarkers that will later be derived from the EEG/MEG signals. Constraints [...]

[...] setting cannot be realistically expected in an ecological situation. Therefore, in order to remove such artifacts, pre-processing of the EEG/MEG signals may be a necessary and critical step (Georgopoulos et al., 2007). Although several signal processing methods are available, such a pre-processing stage can be performed using methods such as Independent Component Analysis (ICA) and adaptive filtering [...]

[...] given as a set of mixed signals with noise. For example, in the same way that conversations are recorded by a number of microphones at a crowded party, brain signals containing artifacts are measured through multiple EEG/MEG sensors. The information in each of the original signals can be analyzed as long as it is possible to identify the system corresponding to the source that emits these signals captured by [...]

[...] sensors. In this regard, blind source separation is a relevant method to approximately recover the original source signals from a set of observed mixed signals without any a priori knowledge about either the source signals or the mixing system. Regarding applications in biomedical signal processing, ICA is currently considered one of the most sophisticated statistical approaches for solving the general [...]

[...] 1.220068, −0.792487, −1.717225, 0.546599, −0.854316, 2.295944 (entries of the coordinate-transformation matrix $T$), and the minimized noise gain was found to be $\hat{J}(D, T) = 16.2006$. Next, the EF matrix $D = \alpha I_3$ was rounded to a power-of-two representation with 3 bits after the binary point as well as [...]

Fig. 3. Profiles of iterative noise [...] (plot of the noise gain $J$ versus iteration $k$ for the scalar, diagonal and general EF schemes).

[...] 28.6187

EF scheme               Infinite precision   3-bit quantization   Integer quantization
Scalar Separate [9]          20.1235              20.1810              26.0527
Scalar Joint                 16.2006              16.2370              18.2063
Diagonal Separate [9]        16.4104              16.4547              17.4039
Diagonal Joint               12.7097              12.7722              13.7535
General Separate [9]         11.6352              11.7054              16.5814
General Joint                 4.8823              23.4873             293.0187

Table 1. Noise gain $\hat{J}(D, T)$ for different EF schemes, for different accuracies of $D$ (infinite precision, 3-bit quantization, integer quantization).

More reduction of the noise gain might be possible by re-designing the [...]
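Referring back to the autoregressive-prediction excerpt above, the sketch below fits AR coefficients to a synthetic signal by least squares and forms the prediction error with the usual sign convention, e(k) = x(k) − Σ_{r=1}^{p} a_r x(k−r). The test signal, the model order p = 4 and the least-squares fitting procedure are illustrative choices, not taken from the excerpted chapter.

```python
import numpy as np

# AR(p) linear prediction and prediction error, as in the excerpt above:
# e(k) = x(k) - sum_{r=1..p} a_r x(k-r).
# The signal below is synthetic and the order p = 4 is an arbitrary choice.

rng = np.random.default_rng(2)
n, p = 2000, 4
t = np.arange(n)
x = np.sin(2 * np.pi * 0.01 * t) + 0.2 * rng.standard_normal(n)   # toy "EEG-like" signal

# Least-squares fit of the AR coefficients a_1..a_p.
X = np.column_stack([x[p - r:n - r] for r in range(1, p + 1)])    # lagged samples
y = x[p:]
a, *_ = np.linalg.lstsq(X, y, rcond=None)

# Prediction error: actual signal minus AR-model output.
x_pred = X @ a
e = y - x_pred

print("AR coefficients   :", np.round(a, 3))
print("signal variance   :", round(float(x[p:].var()), 4))
print("residual variance :", round(float(e.var()), 4))   # should be much smaller
```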

Date posted: 21/06/2014, 11:20