Nixon, M. S. (2002) Feature Extraction and Image Processing

DOCUMENT INFORMATION

Pages: 360
Size: 4.2 MB

Content

Feature Extraction and Image Processing

Dedication
We would like to dedicate this book to our parents. To Gloria and to Joaquin Aguado, and to Brenda and the late Ian Nixon.

Feature Extraction and Image Processing
Mark S. Nixon
Alberto S. Aguado

Newnes
OXFORD  AUCKLAND  BOSTON  JOHANNESBURG  MELBOURNE  NEW DELHI

Newnes, an imprint of Butterworth-Heinemann
Linacre House, Jordan Hill, Oxford OX2 8DP
225 Wildwood Avenue, Woburn, MA 01801-2041
A division of Reed Educational and Professional Publishing Ltd
A member of the Reed Elsevier plc group

First edition 2002
© Mark S. Nixon and Alberto S. Aguado 2002

All rights reserved. No part of this publication may be reproduced in any material form (including photocopying or storing in any medium by electronic means and whether or not transiently or incidentally to some other use of this publication) without the written permission of the copyright holder except in accordance with the provisions of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London, England W1P 0LP. Applications for the copyright holder's written permission to reproduce any part of this publication should be addressed to the publishers.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
ISBN 0 7506 5078 8

Typeset at Replika Press Pvt Ltd, Delhi 110 040, India
Printed and bound in Great Britain

Contents

Preface  ix
  Why did we write this book?  ix
  The book and its support  x
  In gratitude  xii
  Final message  xii

1 Introduction  1
  1.1 Overview  1
  1.2 Human and computer vision  1
  1.3 The human vision system  3
  1.4 Computer vision systems  10
  1.5 Mathematical systems  15
  1.6 Associated literature  24
  1.7 References  28

2 Images, sampling and frequency domain processing  31
  2.1 Overview  31
  2.2 Image formation  31
  2.3 The Fourier transform  35
  2.4 The sampling criterion  40
  2.5 The discrete Fourier transform (DFT)  45
  2.6 Other properties of the Fourier transform  53
  2.7 Transforms other than Fourier  57
  2.8 Applications using frequency domain properties  63
  2.9 Further reading  65
  2.10 References  65

3 Basic image processing operations  67
  3.1 Overview  67
  3.2 Histograms  67
  3.3 Point operators  69
  3.4 Group operations  79
  3.5 Other statistical operators  88
  3.6 Further reading  95
  3.7 References  96

4 Low-level feature extraction (including edge detection)  99
  4.1 Overview  99
  4.2 First-order edge detection operators  99
  4.3 Second-order edge detection operators  120
  4.4 Other edge detection operators  127
  4.5 Comparison of edge detection operators  129
  4.6 Detecting image curvature  130
  4.7 Describing image motion  145
  4.8 Further reading  156
  4.9 References  157

5 Feature extraction by shape matching  161
  5.1 Overview  161
  5.2 Thresholding and subtraction  162
  5.3 Template matching  164
  5.4 Hough transform (HT)  173
  5.5 Generalised Hough transform (GHT)  199
  5.6 Other extensions to the HT  213
  5.7 Further reading  214
  5.8 References  214

6 Flexible shape extraction (snakes and other techniques)  217
  6.1 Overview  217
  6.2 Deformable templates  218
  6.3 Active contours (snakes)  220
  6.4 Discrete symmetry operator  236
  6.5 Flexible shape models  240
  6.6 Further reading  243
  6.7 References  243

7 Object description  247
  7.1 Overview  247
  7.2 Boundary descriptions  248
  7.3 Region descriptors  278
  7.4 Further reading  288
  7.5 References  288

8 Introduction to texture description, segmentation and classification  291
  8.1 Overview  291
  8.2 What is texture?  292
  8.3 Texture description  294
  8.4 Classification  301
  8.5 Segmentation  306
  8.6 Further reading  307
  8.7 References  308

Appendices  311
  9.1 Appendix 1: Homogeneous co-ordinate system  311
  9.2 Appendix 2: Least squares analysis  314
  9.3 Appendix 3: Example Mathcad worksheet for Chapter 3  317
  9.4 Appendix 4: Abbreviated Matlab worksheet  336

Index  345

Preface

Why did we write this book?

We will no doubt be asked many times: why on earth write a new book on computer vision? Fair question: there are already many good books on computer vision out in the bookshops, as you will find referenced later, so why add to them? Part of the answer is that any textbook is a snapshot of material that exists prior to it. Computer vision, the art of processing images stored within a computer, has seen a considerable amount of research by highly qualified people, and the volume of research would appear to have increased in recent years. That means a lot of new techniques have been developed, and many of the more recent approaches have yet to migrate to textbooks. But it is not just the new research: part of the speedy advance in computer vision technique has left some areas covered only in scant detail. By the nature of research, one cannot publish material on technique that is seen more to fill historical gaps than to advance knowledge. This is again where a new text can contribute.

Finally, the technology itself continues to advance. This means that there is new hardware, new programming languages and new programming environments. In particular for computer vision, the advance of technology means that computing power and memory are now relatively cheap. It is certainly considerably cheaper than when computer vision was starting as a research field. One of the authors here notes that the laptop that his portion of the book was written on has more memory, is faster, and has bigger disk space and better graphics than the computer that served the entire university of his student days. And he is not that old!

One of the more advantageous recent changes brought by progress has been the development of mathematical programming systems. These allow us to concentrate on mathematical technique itself, rather than on implementation detail. There are several sophisticated flavours, of which Mathcad and Matlab, the chosen vehicles here, are amongst the most popular. We have been using these techniques in research and in teaching, and we would argue that they have been of considerable benefit there. In research, they help us to develop technique faster and to evaluate its final implementation. For teaching, the power of a modern laptop and a mathematical system combine to show students, in lectures and in study, not only how techniques are implemented, but also how and why they work, with an explicit relation to conventional teaching material.

We wrote this book for these reasons. There is a host of material we could have included but chose to omit. Our apologies to other academics if it was your own, or your favourite, technique. By virtue of the enormous breadth of the subject of computer vision, we restricted the focus to feature extraction, for this has not only been the focus of much of our research, but it is also where the attention of established textbooks, with some exceptions, can be rather scanty. It is, however, one of the prime targets of applied computer vision, so would benefit from better attention.
We have aimed to clarify some of its origins and development, whilst also exposing implementation using mathematical systems. As such, we have written this text with our original aims in mind.

The book and its support

Each chapter of the book presents a particular package of information concerning feature extraction in image processing and computer vision. Each package is developed from its origins and later referenced to more recent material. Naturally, there is often theoretical development prior to implementation (in Mathcad or Matlab). We have provided working implementations of most of the major techniques we describe, and applied them to process a selection of imagery. Though the focus of our work has been more in analysing medical imagery or in biometrics (the science of recognising people by behavioural or physiological characteristics, like face recognition), the techniques are general and can migrate to other application domains.

You will find a host of further supporting information at the book's website http://www.ecs.soton.ac.uk/~msn/book/. First, you will find the worksheets (the Matlab and Mathcad implementations that support the text) so that you can study the techniques described herein. There are also lecturing versions that have been arranged for display via a data projector, with enlarged text and more interactive demonstration. The website will be kept as up to date as possible, for it also contains links to other material such as websites devoted to techniques and to applications, as well as to available software and on-line literature. Finally, any errata will be reported there. It is our regret and our responsibility that these will exist, but our inducement for their reporting concerns a pint of beer. If you find an error that we don't know about (not typos like spelling, grammar and layout) then use the mailto on the website and we shall send you a pint of good English beer, free!

There is a certain amount of mathematics in this book. The target audience is third or fourth year students in BSc/BEng/MEng courses in electrical or electronic engineering, or in mathematics or physics, and this is the level of mathematical analysis here. Computer vision can be thought of as a branch of applied mathematics, though this does not really apply to some areas within its remit, but certainly applies to the material herein. The mathematics concerns mainly calculus and geometry, though some of it is rather more detailed than the constraints of a conventional lecture course might allow. Certainly, not all the material here is covered in detail in undergraduate courses at Southampton.

The book starts with an overview of computer vision hardware, software and established material, with reference to the most sophisticated vision system yet 'developed': the human vision system. Though the precise details of the nature of processing that allows us to see have yet to be determined, there is a considerable range of hardware and software that allows us to give a computer system the capability to acquire, process and reason with imagery, the function of 'sight'. The first chapter also provides a comprehensive bibliography of material you can find on the subject, not only including textbooks, but also available software and other material. As this will no doubt be subject to change, it might well be worth consulting the website for more up-to-date information.
The preference for journal references is for those which are likely to be found in local university libraries, IEEE Transactions in particular. These are often subscribed to, as they are relatively low cost and are often of very high quality.

The next chapter concerns the basics of signal processing theory for use in computer vision. It introduces the Fourier transform, which allows you to look at a signal in a new way, in terms of its frequency content. It also allows us to work out the minimum size of a picture to conserve information, to analyse the content in terms of frequency, and even helps to speed up some of the later vision algorithms. Unfortunately, it does involve a few equations, but it is a new way of looking at data and at signals, and proves to be a rewarding topic of study in its own right.

We then start to look at basic image processing techniques, where image points are mapped into a new value first by considering a single point in an original image, and then by considering groups of points. Not only do we see common operations to make a picture's appearance better, especially for human vision, but we also see how to reduce the effects of different types of commonly encountered image noise. This is where the techniques are implemented as algorithms in Mathcad and Matlab to show precisely how the equations work.

The following chapter concerns low-level features, which are the techniques that describe the content of an image at the level of a whole image rather than in distinct regions of it. One of the most important processes we shall meet is called edge detection. Essentially, this reduces an image to a form of a caricaturist's sketch, though without a caricaturist's exaggerations. The major techniques are presented in detail, together with descriptions of their implementation. Other image properties we can derive include measures of curvature and measures of movement. These are also covered in this chapter.

These edges, the curvature or the motion need to be grouped in some way so that we can find shapes in an image. Our first approach to shape extraction concerns analysing the match of low-level information to a known template of a target shape. As this can be computationally very cumbersome, we then progress to a technique that improves computational performance, whilst maintaining an optimal performance. The technique is known as the Hough transform, and it has long been a popular target for researchers in computer vision who have sought to clarify its basis, improve its speed, and increase its accuracy and robustness. Essentially, by the Hough transform we estimate the parameters that govern a shape's appearance, where the shapes range from lines to ellipses and even to unknown shapes.

Some applications of shape extraction require rather more than the parameters that control appearance: they require the shape to be able to deform or flex to match the image template. For this reason, the chapter on shape extraction by matching is followed by one on flexible shape analysis. This is a topic that has shown considerable progress of late, especially with the introduction of snakes (active contours). These seek to match a shape to an image by analysing local properties. Further, we shall see how we can describe a shape by its symmetry, and also how global constraints concerning the statistics of a shape's appearance can be used to guide final extraction. Up to this point, we have not considered techniques that can be used to describe the shape found in an image.
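Before moving on to shape description, a short illustration may help to fix the distinction drawn above between operating on a single point and operating on groups of points. The following Matlab sketch is written for this summary and is not one of the book's worksheets; the function names, the scaling factor and the 3 × 3 averaging window are illustrative choices only.

function demo_point_and_group
  %Illustrative sketch: a point operator and a group operator applied
  %to a greyscale image held as a matrix of brightness values
  pic = 255*rand(64,64);                     %hypothetical test image, values 0..255
  brighter = scale_brightness(pic, 1.2);     %point operator: one pixel at a time
  smoothed = average_3x3(pic);               %group operator: 3x3 neighbourhood

function new = scale_brightness(old, gain)
  %Point operator: the new brightness depends only on the old pixel value
  new = min(gain*old, 255);                  %scale, then clip at the maximum

function new = average_3x3(old)
  %Group operator: the new brightness is the mean of a 3x3 neighbourhood,
  %which reduces the effect of noise; border pixels are left unchanged
  [rows, cols] = size(old);
  new = old;
  for x = 2:cols-1                           %address interior columns
    for y = 2:rows-1                         %address interior rows
      window = old(y-1:y+1, x-1:x+1);
      new(y,x) = mean(window(:));
    end
  end

The point operator touches each pixel independently, so it can alter appearance but cannot suppress noise; the group operator draws on a whole neighbourhood, which is why smoothing and noise reduction belong to that family.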
We shall find that the two major approaches concern techniques that describe a shape's perimeter and those that describe its area. Some of the perimeter description techniques, the Fourier descriptors, are even couched using Fourier transform theory, which allows analysis of their frequency content. One of the major approaches to area description, statistical moments, also has a form of access to frequency components, but is of a very different nature to the Fourier analysis.

The final chapter describes texture analysis, prior to some introductory material on pattern classification. Texture describes patterns with no known analytical description and has been the target of considerable research in computer vision and image processing. It is used here more as a vehicle for the material that precedes it, such as the Fourier transform and area descriptions, though references are provided for access to other generic material. There is also introductory material on how to classify these patterns against known data, but again this is a window on a much larger area, to which appropriate pointers are given.

[...] vision and image processing. The introductory texts include: Fairhurst, M. C.: Computer Vision for Robotic Systems (Fairhurst, 1988); Low, A.: Introductory Computer Vision and Image Processing (Low, 1991); Teuber, J.: Digital Image Processing (Teuber, 1993); and Baxes, G. A.: Digital Image Processing, Principles and Applications (Baxes, 1994), which includes software and good coverage of image processing [...]

[...] data, not just the images from cameras. Synthesised images are good for evaluating techniques and finding out how they work, and some of the bounds on performance. Two synthetic images are shown in Figure 1.2. Figure 1.2(a) is an image of circles that were specified mathematically. The image is an ideal case: the circles are perfectly defined and the brightness [...]

[...] Computer Vision and Image Understanding and Graphical Models and Image Processing arose from the splitting of one of the subject's earlier journals, Computer Vision, Graphics and Image Processing (CVGIP), into two parts. Do not confuse Pattern Recognition (Pattern Recog.) with Pattern Recognition Letters (Pattern Recog. Lett.), published under the aegis of the Pattern Recognition Society and the International [...]

[...] visualisation of information flow during processing. However, the underlying mathematics is not made clear to the user, as it can be when a mathematical system is used. There is a new textbook, and a very readable one at that, by Nick Efford (Efford, 2000), which is based entirely on Java and includes, via a CD, the classes necessary for image processing software [...]

[...] Code 1.3. To add a value, we simply call the function and supply an image and the chosen brightness level as the arguments:

add_value(inpic,value) :=  for x ∈ 0..cols(inpic)-1
                             for y ∈ 0..rows(inpic)-1
                               newpicture[y,x] ← inpic[y,x] + value
                           newpicture

Code 1.3  Function to add a value to an image in Mathcad

Mathematically, for an image which is a matrix of N × N points, the brightness of the pixels in a new picture (matrix), N, are the result of adding b to the brightness of the pixels [...]
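The book provides Matlab as well as Mathcad worksheets, but they are not reproduced in this extract. Purely as a hedged sketch, written for this summary rather than taken from those worksheets, a minimal Matlab translation of the Mathcad listing above might read as follows; the comment layout simply imitates the invert listing that appears later in this extract.

function newpicture = add_value(inpic, value)
%Add a constant brightness value to every point of an image
%
%  Usage: newpicture = add_value(inpic, value)
%
%  Parameters: inpic - matrix of image points
%              value - brightness value to add
%
%  Note: illustrative translation of the Mathcad version above,
%        not the book's own worksheet code

%get dimensions
[rows, cols] = size(inpic);
%add the value to every point
for x = 1:cols                 %address all columns
  for y = 1:rows               %address all rows
    newpicture(y,x) = inpic(y,x) + value;
  end
end

As in the Mathcad version, no account is taken of the maximum brightness, so the result can exceed the usual 8-bit range of 255 and would need clipping or rescaling before display.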
[...] Figure 1.1(b) is an ultrasound image of the carotid artery (which is near the side of the neck and supplies blood to the brain and the face), taken as a cross-section through it. The top region of the image is near the skin; the bottom is inside the neck. The image arises [...]

[Fragment of a table summarising the chapter's topics:]
Mathematical systems | How we can process images using mathematical packages; introduction to the Matlab and Mathcad systems | Ease, consistency, support, visualisation of results, availability, introductory use, example worksheets
Literature | Other textbooks and other places to find information on image processing, computer vision and feature extraction | Magazines, textbooks, websites and this book's website

1.2 Human and computer [...]

[...]
function inverted = invert(image)
%Subtract image point brightness from maximum
%
%  Usage: [new image] = invert(image)
%
%  Parameters: image - array of points
%
%  Author: Mark S. Nixon

%get dimensions
[rows,cols] = size(image);
%find the maximum
maxi = max(max(image));
%subtract image points from maximum
for x = 1:cols %address all columns
  for y = 1:rows %address all rows
    inverted(y,x) = maxi - image(y,x);
  end
end

Code [...]

[...] stocked. These journals have different merits: some are targeted at short papers only, whereas some have short and long papers; some are more dedicated to the development of new theory, whereas others are more pragmatic and focus more on practical, working, image processing systems. But it is rather naive to classify journals in this way, since all journals welcome [...]

[...] referenced to the co-ordinates x, y in the image. Accordingly, a computer image is a matrix of points. For a greyscale image, the value of each point is proportional to the brightness of the corresponding point in the scene viewed, and imaged, by the camera. These points are the picture elements, or pixels. Consider, for example, the matrix of pixel values in Figure [...]

[...] eyes, and the eyebrows, to make some measurements to describe, and then recognise, a face. (Figure 1.1(a) is perhaps one of the most famous images in image processing. It is called the Lena image, and is derived from a picture of Lena Sjööblom in Playboy in 1972.)
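To see how the invert function reproduced above and the add_value sketch given earlier behave, here is a small usage example; the 4 × 4 pixel values are invented for illustration and are not taken from the book's figures.

%hypothetical 4x4 greyscale patch (values chosen only for illustration)
pic = [ 38  45  43  39;
        40 168 170  41;
        42 172 169  40;
        39  41  44  38];

inverted = invert(pic);          %each point becomes (maximum of pic) - point
brighter = add_value(pic, 20);   %each point increased by 20

disp(inverted)                   %the bright central blob becomes dark, the background brightens
disp(brighter)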
