Algorithms for Computed Tomography

Gabor T. Herman, University of Pennsylvania

26.1 Introduction
26.2 The Reconstruction Problem
26.3 Transform Methods
26.4 Filtered Backprojection (FBP)
26.5 The Linogram Method
26.6 Series Expansion Methods
26.7 Algebraic Reconstruction Techniques (ART)
26.8 Expectation Maximization (EM)
26.9 Comparison of the Performance of Algorithms
References

26.1 Introduction

Computed tomography is the process of reconstructing the interiors of objects from data collected based on transmitted or emitted radiation. The problem occurs in a wide range of application areas. Here we discuss the computer algorithms used for achieving the reconstructions.

26.2 The Reconstruction Problem

We want to solve the following general problem. There is a three-dimensional structure whose internal composition is unknown to us. We subject this structure to some kind of radiation, either by transmitting the radiation through the structure or by introducing the emitter of the radiation into the structure. We measure the radiation transmitted through or emitted from the structure at a number of points. Computed tomography (CT) is the process of obtaining, from these measurements, the distribution of the physical parameter(s) inside the structure that have an effect on the measurements. The problem occurs in a wide range of areas, such as x-ray CT, emission tomography, photon migration imaging, and electron microscopic reconstruction; see, e.g., [1, 2]. All of these are inverse problems of various sorts; see, e.g., [3].

Where it is not otherwise stated, we will be discussing the special reconstruction problem of estimating a function of two variables from estimates of its line integrals. As is quite reasonable for any application, we will assume that the domain of the function is contained in a finite region of the plane. In what follows we introduce all the needed notation and terminology; in most cases these agree with those used in [1].

Suppose f is a function of the two polar variables r and φ. Let [Rf](ℓ, θ) denote the line integral of f along the line that is at distance ℓ from the origin and makes an angle θ with the vertical axis. This operator R is usually referred to as the Radon transform. The input data to a reconstruction algorithm are estimates (based on physical measurements) of the values of [Rf](ℓ, θ) for a finite number of pairs (ℓ, θ); its output is an estimate, in some sense, of f.

More precisely, suppose that the estimates of [Rf](ℓ, θ) are known for I pairs (ℓ_i, θ_i), 1 ≤ i ≤ I. We use y to denote the I-dimensional column vector (called the measurement vector) whose ith component, y_i, is the available estimate of [Rf](ℓ_i, θ_i). The task of a reconstruction algorithm is: given the data y, estimate the function f.
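To make this data model concrete, the following sketch builds a measurement vector y for a simple phantom. For a disk of constant density centered at the origin, [Rf](ℓ, θ) has a closed form (density times chord length, independent of θ), so no numerical integration is needed. The sampling grid, function names, and parameter values here are illustrative choices of mine, not anything prescribed by the chapter.

```python
import numpy as np

def disk_radon(ell, radius=0.8, density=1.0):
    """Closed-form [Rf](ell, theta) for a centered disk of constant density:
    the integral along a line at distance ell from the origin is the chord
    length times the density, independent of the angle theta."""
    return density * 2.0 * np.sqrt(np.maximum(radius**2 - ell**2, 0.0))

# Sample (ell_i, theta_i) pairs on a regular grid, as a scanner might.
N, M = 64, 90                        # detector positions and view angles
d = 2.0 / (2 * N + 1)                # detector spacing covering [-1, 1]
ells = d * np.arange(-N, N + 1)
thetas = np.pi * np.arange(M) / M    # M * Delta = pi, as in the text below

# Measurement vector y: one estimate of [Rf](ell_i, theta_i) per pair.
y = np.array([disk_radon(ell) for theta in thetas for ell in ells])
print(y.shape)                       # (M * (2N + 1),) = (11610,)
```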
Following [1], reconstruction algorithms are characterized either as transform methods or as series expansion methods. In the following subsections we discuss the underlying ideas of these two approaches and give detailed descriptions of two algorithms from each category.

26.3 Transform Methods

The Radon transform has an inverse, R⁻¹, defined as follows. For a function p of ℓ and θ,

$$\left[R^{-1}p\right](r,\phi)=\frac{1}{2\pi^{2}}\int_{0}^{\pi}\!\!\int_{-\infty}^{\infty}\frac{1}{r\cos(\theta-\phi)-\ell}\,p_{1}(\ell,\theta)\,d\ell\,d\theta, \qquad(26.1)$$

where p₁(ℓ, θ) denotes the partial derivative of p with respect to its first variable ℓ. [Note that it is intrinsically assumed in this definition that p is sufficiently smooth for the integral in Eq. (26.1) to exist.] It is known [1] that for any function f which satisfies some physically reasonable conditions (such as continuity and boundedness) we have, for all points (r, φ),

$$\left[R^{-1}Rf\right](r,\phi)=f(r,\phi). \qquad(26.2)$$

Transform methods are numerical procedures that estimate values of the double integral on the right-hand side of Eq. (26.1) from the given values of p(ℓ_i, θ_i), 1 ≤ i ≤ I. We now discuss two such methods: the widely adopted filtered backprojection (FBP) algorithm (called the convolution method in [1]) and the more recently developed linogram method.

26.4 Filtered Backprojection (FBP)

In this algorithm, the right-hand side of Eq. (26.1) is approximated by a two-step process (for derivational details see [1] or, in a more general context, [3]). First, for fixed values of θ, convolutions defined by

$$\left[p\ast_{Y}q\right](\ell',\theta)=\int_{-\infty}^{\infty}p(\ell,\theta)\,q(\ell'-\ell)\,d\ell \qquad(26.3)$$

are carried out, using a convolving function q (of one variable) whose exact choice will have an important influence on the appearance of the final image. Second, our estimate f* of f is obtained by backprojection as follows:

$$f^{*}(r,\phi)=\int_{0}^{\pi}\left[p\ast_{Y}q\right](r\cos(\theta-\phi),\theta)\,d\theta. \qquad(26.4)$$

To make explicit the implementation of this for a given measurement vector, let us assume that the data function p is known at the points (nd, mΔ), −N ≤ n ≤ N, 0 ≤ m ≤ M − 1, with MΔ = π. Let us further assume that the function f is to be estimated at the points (r_j, φ_j), 1 ≤ j ≤ J. The computer algorithm operates as follows. A sequence f_0, ..., f_{M−1}, f_M of estimates is produced; the last of these is the output of the algorithm. First we define

$$f_{0}\left(r_{j},\phi_{j}\right)=0,\quad\text{for }1\le j\le J. \qquad(26.5)$$

Then, for each value of m, 0 ≤ m ≤ M − 1, we produce the (m + 1)st estimate from the mth estimate by a two-step process:

1. For −N ≤ n′ ≤ N, calculate

$$p_{c}\left(n'd,m\Delta\right)=d\sum_{n=-N}^{N}p(nd,m\Delta)\,q\!\left(\left(n'-n\right)d\right), \qquad(26.6)$$

using the measured values of p(nd, mΔ) and precalculated values (the same for all m) of q((n′ − n)d). This is a discretization of Eq. (26.3). (One possible, but by no means the only, choice of the convolving function q is given in [1].)

2. For 1 ≤ j ≤ J, we set

$$f_{m+1}\left(r_{j},\phi_{j}\right)=f_{m}\left(r_{j},\phi_{j}\right)+\Delta\,p_{c}\left(r_{j}\cos\left(m\Delta-\phi_{j}\right),m\Delta\right). \qquad(26.7)$$

This is a discretization of Eq. (26.4). To do it, we need to interpolate in the first variable of p_c from the values calculated in Eq. (26.6) to obtain the values needed in Eq. (26.7).

In practice, once f_{m+1}(r_j, φ_j) has been calculated, f_m(r_j, φ_j) is no longer needed, and the computer can reuse the same memory location for f_0(r_j, φ_j), ..., f_{M−1}(r_j, φ_j), f_M(r_j, φ_j). In a complete execution of the algorithm, the uses of Eq. (26.6) require M(2N + 1)² multiplications and additions, while all the uses of Eq. (26.7) require MJ interpolations and additions. Since J is typically of the order of N², and N itself in typical applications is between 100 and 1,000, backprojection is likely to be much more computationally demanding than convolution. In any case, reconstruction of a typical 512 × 512 cross-section from data collected by a typical x-ray CT device is not a challenge to state-of-the-art computational capabilities; it is routinely done in the order of a second or so and can be done, using a pipeline architecture, in a fraction of a second [4].
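The two discretized steps translate almost line for line into code. The following minimal sketch (assuming NumPy, with linear interpolation playing the role of the interpolation mentioned above) is illustrative rather than a production implementation; `ramlak` gives the classical Ramachandran–Lakshminarayanan samples as one possible convolving function q, not the chapter's specific prescription.

```python
import numpy as np

def ramlak(N, d):
    """One classical sampled convolving function q, given on the symmetric
    grid k*d, k = -2N..2N, as the convolution in Eq. (26.6) requires."""
    k = np.arange(-2 * N, 2 * N + 1)
    q = np.zeros(k.shape, dtype=float)
    q[k == 0] = 1.0 / (4.0 * d * d)
    odd = (k % 2) != 0
    q[odd] = -1.0 / (np.pi**2 * k[odd].astype(float) ** 2 * d * d)
    return q

def fbp(p, d, rs, phis, q):
    """Minimal FBP sketch following Eqs. (26.5)-(26.7).

    p        : (M, 2N+1) array with p[m, n] ~ [Rf]((n-N)d, m*Delta)
    d        : detector spacing; Delta = pi/M is the angular spacing
    rs, phis : polar coordinates (r_j, phi_j) of the J output points
    q        : samples q(k*d), k = -2N..2N (e.g., from ramlak(N, d))
    """
    M, K = p.shape
    N = (K - 1) // 2
    delta = np.pi / M                           # M * Delta = pi
    ell = d * np.arange(-N, N + 1)
    f = np.zeros_like(rs, dtype=float)          # Eq. (26.5): f_0 = 0
    for m in range(M):
        # Eq. (26.6): p_c(n'd) = d * sum_n p(nd) q((n'-n)d); the slice keeps
        # the outputs for n' = -N..N out of the full linear convolution.
        pc = d * np.convolve(p[m], q, mode="full")[2 * N : 4 * N + 1]
        # Eq. (26.7): backproject, interpolating p_c in its first variable.
        f += delta * np.interp(rs * np.cos(m * delta - phis), ell, pc)
    return f
```

A call such as `fbp(p, d, rs, phis, ramlak(N, d))` then produces f* at the chosen points; note how the convolving function is precalculated once and reused for every m, exactly as the text observes.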
26.5 The Linogram Method

The basic result that justifies this method is the well-known projection theorem, which says that "taking the two-dimensional Fourier transform is the same as taking the Radon transform and then applying the Fourier transform with respect to the first variable" [1]. The method was first proposed in [5], where the reason for its name can also be found. The basic reason for proposing this method is its speed of execution, and we return to this below. In the description that follows, we use the approach of [6]. That paper deals with the fully three-dimensional problem; here we simplify it to the two-dimensional case.

For the linogram approach we assume that the data were collected in a special way (that is, at points whose locations will be precisely specified below); if they were collected otherwise, we need to interpolate prior to reconstruction. If the function is to be estimated at an array of points with rectangular coordinates {(id, jd) | −N ≤ i ≤ N, −N ≤ j ≤ N} (this array is assumed to cover the object to be reconstructed), then the data function p needs to be known at the points

$$\left(nd_{m},\theta_{m}\right),\quad -2N-1\le n\le 2N+1,\ -2N-1\le m\le 2N+1, \qquad(26.8)$$

and at the points

$$\left(nd_{m},\tfrac{\pi}{2}+\theta_{m}\right),\quad -2N-1\le n\le 2N+1,\ -2N-1\le m\le 2N+1, \qquad(26.9)$$

where

$$\theta_{m}=\tan^{-1}\!\left(\frac{2m}{4N+3}\right)\quad\text{and}\quad d_{m}=d\cos\theta_{m}. \qquad(26.10)$$

The linogram method produces from such data estimates of the function values at the desired points using a multi-stage procedure. We now list these stages, but first point out two facts. One is that the most expensive computation that needs to be used in any of the stages is the taking of discrete Fourier transforms (DFTs), which can always be implemented (possibly after some padding by zeros) very efficiently by the use of the fast Fourier transform (FFT). The other is that the output of any stage produces estimates of function values at exactly those points where they are needed for the discrete computations of the next stage; there is never any need to interpolate between stages. It is these two facts that indicate why the linogram method is both computationally efficient and accurate. (From the point of view of this handbook, these facts justify the choice of sampling points in Eqs. (26.8) through (26.10); a geometrical interpretation is given in [7].)
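The special sampling of Eqs. (26.8) through (26.10) is easy to generate. In the sketch below (function name and the tiny example are my own), note that tan θ_m is linear in m; this is precisely the property that makes the Fourier-domain samples of the next stage fall on uniform grids.

```python
import numpy as np

def linogram_views(N, d):
    """View angles theta_m and per-view detector spacings d_m of
    Eqs. (26.8)-(26.10) for the first family of lines."""
    m = np.arange(-2 * N - 1, 2 * N + 2)
    theta = np.arctan(2.0 * m / (4 * N + 3))   # Eq. (26.10): tan(theta_m) linear in m
    return theta, d * np.cos(theta)            # d_m shrinks as |theta_m| grows

theta, dm = linogram_views(2, 1.0)             # tiny example, N = 2
print(np.degrees(theta[[0, -1]]).round(2))     # [-42.27, 42.27]: angles span ~(-45, 45) degrees
```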
Fourier transforming of the data — For each value of the second variable, we take the DFT of the data with respect to the first variable in Eq. (26.8) and Eq. (26.9). By the projection theorem, this provides us with estimates of the two-dimensional Fourier transform F of the object at the points (in a rectangular coordinate system)

$$\left(\frac{k}{(4N+3)d},\ \frac{k}{(4N+3)d}\tan\theta_{m}\right),\quad -2N-1\le k\le 2N+1,\ -2N-1\le m\le 2N+1, \qquad(26.11)$$

and at the points (also in a rectangular coordinate system)

$$\left(\frac{k}{(4N+3)d\,\tan\!\left(\frac{\pi}{2}+\theta_{m}\right)},\ \frac{k}{(4N+3)d}\right),\quad -2N-1\le k\le 2N+1,\ -2N-1\le m\le 2N+1. \qquad(26.12)$$

Windowing — At this point we may suppress those frequencies which we suspect to be noise-dominated by multiplying with a window function (corresponding to the convolving function in FBP).

Separating into two functions — The sampled Fourier transform F of the object to be reconstructed is written as the sum of two functions, G and H. G has the same values as F at all the points specified in Eq. (26.11) except at the origin, and is zero-valued at all other points; H has the same values as F at all the points specified in Eq. (26.12) except at the origin, and is zero-valued at all other points. Clearly, except at the origin, F = G + H. The idea is that by first taking the two-dimensional inverse Fourier transforms of G and H separately and then adding the results, we get an estimate of f (except for a DC term which has to be estimated separately; see [6]). We only follow what needs to be done with G; the situation with H is analogous.

Chirp z-transforming in the second variable — Note that the way the θ_m were selected implies that if we fix k, then the sampling in the second variable of Eq. (26.11) is uniform. Furthermore, we know that the value of G is zero outside the sampled region. Hence, for each fixed k, 0 < |k| ≤ 2N + 1, we can use the chirp z-transform to estimate the inverse DFT in the second variable at the points

$$\left(\frac{k}{(4N+3)d},\ jd\right),\quad -2N-1\le k\le 2N+1,\ -N\le j\le N. \qquad(26.13)$$

The chirp z-transform can be implemented using three FFTs; see [7].

Inverse transforming in the first variable — The inverse Fourier transform of G can now be estimated at the required points by taking, for every fixed j, the inverse DFT in the first variable of the values at the points of Eq. (26.13).
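The chirp z-transform of the fourth stage evaluates a DFT on a frequency grid whose spacing depends on k, something a plain FFT cannot do directly. Below is a generic Bluestein-style sketch that computes X_j = Σ_n x_n exp(−2πi α n j) for an arbitrary real step α using three FFTs, as the text notes; the function name and normalization conventions are my own, and [7] should be consulted for the exact form used in the linogram method.

```python
import numpy as np

def chirp_dft(x, alpha, M):
    """Evaluate X_j = sum_n x[n] * exp(-2j*pi*alpha*n*j) for j = 0..M-1
    via Bluestein's identity nj = (n**2 + j**2 - (j - n)**2) / 2, which
    turns the sum into one linear convolution, i.e., three FFTs."""
    N = len(x)
    n, j = np.arange(N), np.arange(M)
    a = x * np.exp(-1j * np.pi * alpha * n * n)
    kk = np.arange(-(N - 1), M)                  # support of the chirp kernel
    b = np.exp(1j * np.pi * alpha * kk * kk)
    L = 1 << int(np.ceil(np.log2(N + M - 1)))    # zero-pad to avoid wrap-around
    conv = np.fft.ifft(np.fft.fft(a, L) * np.fft.fft(b, L))   # FFTs 1, 2, 3
    return np.exp(-1j * np.pi * alpha * j * j) * conv[N - 1 : N - 1 + M]

# Check against direct evaluation (alpha need not be of the form 1/N):
x = np.random.default_rng(1).normal(size=7)
direct = np.array([np.sum(x * np.exp(-2j * np.pi * 0.123 * np.arange(7) * j))
                   for j in range(5)])
assert np.allclose(chirp_dft(x, 0.123, 5), direct)
```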
26.6 Series Expansion Methods

This approach assumes that the function f to be reconstructed can be approximated by a linear combination of a finite set of known and fixed basis functions,

$$f(r,\phi)\approx\sum_{j=1}^{J}x_{j}\,b_{j}(r,\phi), \qquad(26.14)$$

and that our task is to estimate the unknowns x_j. If we assume that the measurements depend linearly on the object to be reconstructed (certainly true in the special case of line integrals) and that we know (at least approximately) what the measurements would be if the object to be reconstructed were one of the basis functions (we use r_{i,j} to denote the value of the ith measurement of the jth basis function), then we can conclude [1] that the ith of our measurements of f is approximately

$$\sum_{j=1}^{J}r_{i,j}\,x_{j}. \qquad(26.15)$$

Our problem is then to estimate the x_j from the measured approximations (for 1 ≤ i ≤ I) to Eq. (26.15). The estimate can often be selected as one that satisfies some optimization criterion.

To simplify the notation, the image is represented by a J-dimensional image vector x (with components x_j) and the data form an I-dimensional measurement vector y. There is an assumed projection matrix R (with entries r_{i,j}). We let rⁱ denote the transpose of the ith row of R (1 ≤ i ≤ I), so that the inner product ⟨rⁱ, x⟩ is the same as the expression in Eq. (26.15). Then y is approximately Rx, and there may be further information that x belongs to a subset C of R^J, the space of J-dimensional real-valued vectors. In this formulation R, C, and y are known and x is to be estimated. Substituting the estimated values of x_j into Eq. (26.14) will then provide us with an estimate of the function f.

The simplest way of selecting the basis functions is by subdividing the plane into pixels (or space into voxels) and choosing basis functions whose value is 1 inside a specific pixel (or voxel) and 0 everywhere else. However, there are other choices that may be preferable; for example, [8] uses spherically symmetric basis functions that are not only spatially limited, but can also be chosen to be very smooth. The smoothness of the basis functions then results in smoothness of the reconstructions, while the spherical symmetry allows easy calculation of the r_{i,j}. It has been demonstrated [9], for the case of fully three-dimensional positron emission tomography (PET) reconstruction, that such basis functions indeed lead to statistically significant improvements in the task-oriented performance of series expansion reconstruction methods.

In many situations only a small proportion of the r_{i,j} are nonzero. (For example, if the basis functions are based on voxels in a 200 × 200 × 100 array and the measurements are approximate line integrals, then the percentage of nonzero r_{i,j} is less than 0.01, since a typical line will intersect fewer than 400 voxels.) This makes certain types of iterative methods for estimating the x_j surprisingly efficient, because one can make use of a subroutine which, for any i, returns a list of those j's for which r_{i,j} is not zero, together with the values of the r_{i,j} [1, 10]. We now discuss two such iterative approaches: the so-called algebraic reconstruction techniques (ART) and the use of expectation maximization (EM).
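The practical content of this observation is that one never forms R densely: storing, or generating on the fly, only the nonzero entries of each row is what makes a full cycle through the data cheap. A small illustration using SciPy's compressed sparse row storage follows; the sizes and the random "intersection lengths" are invented stand-ins for a real ray-tracing subroutine.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Invented stand-in for a projection matrix: each of I rays intersects only
# ~300 of the J pixels, so only those entries are stored.
rng = np.random.default_rng(0)
I, J = 1000, 400 * 400
rows = np.repeat(np.arange(I), 300)
cols = rng.integers(0, J, size=rows.size)
vals = rng.uniform(0.0, 1.0, size=rows.size)        # "intersection lengths"
R = csr_matrix((vals, (rows, cols)), shape=(I, J))

# The subroutine the text describes: for measurement i, the indices j with
# r_{i,j} != 0 and the corresponding values, never touching zero entries.
i, x = 17, np.ones(J)
js = R.indices[R.indptr[i] : R.indptr[i + 1]]
ri = R.data[R.indptr[i] : R.indptr[i + 1]]
print(np.dot(ri, x[js]))                            # <r^i, x>, cf. Eq. (26.15)
```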
26.7 Algebraic Reconstruction Techniques (ART)

The basic version of ART operates as follows [1]. The method cycles through the measurements repeatedly, considering only one measurement at a time. Only those x_j are updated for which the corresponding r_{i,j} of the currently considered measurement i is nonzero, and the change made to x_j is proportional to r_{i,j}. The factor of proportionality is adjusted so that, if Eq. (26.15) is evaluated for the resulting x_j, it matches the ith measurement exactly. Other variants use a block of measurements in one iterative step and update the x_j in different ways to ensure that the iterative process converges according to a chosen estimation criterion. Here we discuss only one specific optimization criterion and the associated algorithm (others can be found, for example, in [1]).

Our task is to find the x in R^J which minimizes

$$r\,\|y-Rx\|^{2}+\|x-\mu_{x}\|^{2} \qquad(26.16)$$

(‖·‖ indicates the usual Euclidean norm), for a given constant scalar r (called the regularization parameter) and a given constant vector µ_x. The algorithm makes use of an I-dimensional vector u of additional variables, one for each measurement. First we define u^(0) to be the I-dimensional zero vector and x^(0) to be the J-dimensional zero vector. Then, for k ≥ 0, we set

$$u^{(k+1)}=u^{(k)}+c^{(k)}e^{i_{k}},\qquad x^{(k+1)}=x^{(k)}+r\,c^{(k)}r^{i_{k}}, \qquad(26.17)$$

where eⁱ is an I-dimensional vector whose ith component is 1, with all other components being 0, and

$$c^{(k)}=\lambda^{(k)}\,\frac{r\left(y_{i_{k}}-\left\langle r^{i_{k}},x^{(k)}\right\rangle\right)-u_{i_{k}}^{(k)}}{1+r\left\|r^{i_{k}}\right\|^{2}}, \qquad(26.18)$$

with i_k = [k (mod I)] + 1.

THEOREM 26.1 (see [1] for a proof). Let y be any measurement vector, r be any positive real number, and µ_x be any element of R^J. Then for any real numbers λ^(k) satisfying

$$0<\varepsilon_{1}\le\lambda^{(k)}\le\varepsilon_{2}<2, \qquad(26.19)$$

the sequence x^(0), x^(1), x^(2), ... determined by the algorithm given above converges to the unique vector x which minimizes Eq. (26.16).

The implementation of this algorithm is hardly more complicated than that of the basic ART described at the beginning of this subsection. We need an additional sequence of I-dimensional vectors u^(k), but in the kth iterative step only one component of u^(k) is needed or altered. Since the i_k are defined in a cyclic order, the components of the vector u^(k) (just as the components of the measurement vector y) can be accessed sequentially. (The exact choice of this order — often referred to as the data access ordering — is very important for fast initial convergence; it is described in [11]. The underlying principle is that in any subsequence of steps, we wish the individual actions to be as independent of one another as possible.) We also use, for every integer k ≥ 0, a positive real number λ^(k). (These are the so-called relaxation parameters. They are free parameters of the algorithm and in practice need to be optimized [11].) The rⁱ are usually not stored at all; instead, the locations and sizes of their nonzero elements are calculated as and when needed. Hence, the algorithm described by Eq. (26.17) and Eq. (26.18) shares the storage-efficient nature of basic ART, and its computational requirements are essentially the same. Assuming, as is reasonable, that the number of nonzero r_{i,j} in a row is of the same order as √J, we see that the cost of cycling through the data once using ART is of the order I√J, which is approximately the same as the cost of reconstructing using FBP. (That this is indeed so is confirmed by the timings reported in [12].)

An important thing to note about Theorem 26.1 is that there are no restrictions of consistency in its statement. Hence, the algorithm of Eqs. (26.17) and (26.18) will converge to the minimizer of Eq. (26.16) — the so-called regularized least squares solution — using the real data collected in any application.
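A direct transcription of Eqs. (26.17) and (26.18) is short. The sketch below follows the update exactly as stated, with a dense R and a constant relaxation parameter purely for readability; a real implementation would generate the sparse rows on demand and use the carefully chosen data access order of [11].

```python
import numpy as np

def regularized_art(R, y, r, cycles=10, lam=1.0):
    """Sketch of the ART iteration of Eqs. (26.17)-(26.18).

    R   : (I, J) array; dense here for readability only
    r   : regularization parameter of Eq. (26.16)
    lam : constant relaxation parameter lambda^(k), 0 < lam < 2 (Eq. (26.19))
    """
    I, J = R.shape
    u = np.zeros(I)                                 # u^(0) = 0
    x = np.zeros(J)                                 # x^(0) = 0
    for k in range(cycles * I):
        i = k % I                                   # i_k: cycle through the data
        ri = R[i]
        c = lam * (r * (y[i] - ri @ x) - u[i]) / (1.0 + r * (ri @ ri))
        u[i] += c                                   # Eq. (26.17), u-update
        x += r * c * ri                             # Eq. (26.17), x-update
    return x
```

Note that each step reads and writes a single component of u, in agreement with the remark above about sequential access to u and y.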
26.8 Expectation Maximization (EM)

We may wish to find the x that maximizes the likelihood of observing the actual measurements, based on the assumption that the ith measurement comes from a Poisson distribution whose mean is given by Eq. (26.15). An iterative method to do exactly that, based on the so-called expectation maximization (EM) approach, was proposed in [13]. Here we discuss a variant of this approach that was designed for a somewhat more complicated optimization criterion [14], one which enforces smoothness of the results where the original maximum likelihood criterion may produce noisy images.

Let R_+^J denote the set of those elements of R^J in which all components are non-negative. Our task is to find the x in R_+^J which minimizes

$$\sum_{i=1}^{I}\left[\left\langle r^{i},x\right\rangle-y_{i}\ln\left\langle r^{i},x\right\rangle\right]+\gamma\,x^{T}Sx, \qquad(26.20)$$

where the J × J matrix S (with entries denoted by s_{j,u}) is a modified smoothing matrix [1] which has the following property. (This definition is only applicable if we use pixels to define the basis functions.) Let N denote the set of indexes corresponding to pixels that are not on the border of the digitization. Each such pixel has eight neighbors; let N_j denote the set of indexes of the pixels associated with the neighbors of the pixel indexed by j. Then

$$x^{T}Sx=\sum_{j\in N}\sum_{k\in N_{j}}\left(x_{j}-x_{k}\right)^{2}. \qquad(26.21)$$

Consider the following rules for obtaining x^(k+1) from x^(k):

$$p_{j}^{(k)}=\frac{1}{9\gamma s_{j,j}}\sum_{i=1}^{I}r_{i,j}-x_{j}^{(k)}+\frac{1}{9s_{j,j}}\sum_{u=1}^{J}s_{j,u}\,x_{u}^{(k)}, \qquad(26.22)$$

$$q_{j}^{(k)}=\frac{x_{j}^{(k)}}{9\gamma s_{j,j}}\sum_{i=1}^{I}\frac{r_{i,j}\,y_{i}}{\left\langle r^{i},x^{(k)}\right\rangle}, \qquad(26.23)$$

$$x_{j}^{(k+1)}=\frac{1}{2}\left(-p_{j}^{(k)}+\sqrt{\left(p_{j}^{(k)}\right)^{2}+4q_{j}^{(k)}}\right). \qquad(26.24)$$

Since the first term of Eq. (26.22) can be precalculated, the execution of Eq. (26.22) requires essentially no more effort than multiplying x^(k) by the modified smoothing matrix; as explained in [1], there is a very efficient way of doing this. The execution of Eq. (26.23) requires approximately the same effort as cycling once through the data set using ART; see Eq. (26.18). Algorithmic details of efficient computations of Eq. (26.23) appeared in [15]. Clearly, the execution of Eq. (26.24) requires a trivial amount of computing. Thus, we see that one iterative step of the EM algorithm of Eqs. (26.22) to (26.24) requires, in total, approximately the same computing effort as cycling through the data set once with ART, which costs about the same as a complete reconstruction by FBP. A basic difference between the ART method and the EM method is that the former updates its estimate based on one measurement at a time, while the latter deals with all measurements simultaneously.

THEOREM 26.2 (see [14] for a proof). For any x^(0) with only positive components, the sequence x^(0), x^(1), x^(2), ... generated by the algorithm of Eqs. (26.22) to (26.24) converges to the minimizer of Eq. (26.20) in R_+^J.
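One iteration of Eqs. (26.22) through (26.24) can be sketched as follows, with dense R and S purely for readability. The first term of Eq. (26.22) would be precomputed once in practice, the sketch assumes s_{j,j} > 0 for every j, and the positivity of x^(0) required by Theorem 26.2 is the caller's responsibility.

```python
import numpy as np

def em_map_iteration(x, R, y, S, gamma, col_sum=None):
    """One step x^(k) -> x^(k+1) of the EM-type algorithm, Eqs. (26.22)-(26.24).

    R : (I, J) projection matrix; S : (J, J) modified smoothing matrix;
    gamma : smoothing weight of Eq. (26.20); x must have positive components.
    """
    if col_sum is None:
        col_sum = R.sum(axis=0)          # precalculable first term of Eq. (26.22)
    s = np.diag(S)                       # the diagonal entries s_{j,j} (assumed > 0)
    p = col_sum / (9.0 * gamma * s) - x + (S @ x) / (9.0 * s)      # Eq. (26.22)
    q = x * (R.T @ (y / (R @ x))) / (9.0 * gamma * s)              # Eq. (26.23)
    return 0.5 * (-p + np.sqrt(p * p + 4.0 * q))                   # Eq. (26.24)

# Start from any positive x^(0), e.g. x = np.ones(J), and iterate.
```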
26.9 Comparison of the Performance of Algorithms

We have discussed four very different-looking algorithms, and the literature is full of many others, only some of which are surveyed in books such as [1]. Many of the algorithms are available in general-purpose image reconstruction software packages, such as SNARK [10]. The novice faced with a problem of reconstruction is justified in being puzzled as to which algorithm to use. Unfortunately, there is no generally valid answer: the right choice may very well depend on the area of application and on the instrument used for gathering the data. Here we make only some general comments regarding the four approaches discussed above, followed by some discussion of the methodologies that are available for comparatively evaluating reconstruction algorithms for a particular application.

Concerning the two transform methods we have discussed, the linogram method is faster than FBP (essentially an N² log N method, rather than an N³ method as is the FBP), and when the data are collected according to the geometry expressed by Eqs. (26.8) and (26.9), the linogram method is likely to be more accurate because it requires no interpolations. However, data are not normally collected this way, and the need for an initial interpolation, together with the more complicated-looking expressions that need to be implemented for the linogram method, may indeed steer some users towards FBP, in spite of its extra computational requirements.

Advantages of series expansion methods over transform methods are their flexibility (no special relationship needs to be assumed between the object to be reconstructed and the measurements taken, such as that the latter are samples of the Radon transform of the former) and the ability to control the type of solution we want by specifying the exact sense in which the image vector is to be estimated from the measurement vector; see Eqs. (26.16) and (26.20). The major disadvantage is that it is computationally much more intensive to find these precise estimators than to numerically evaluate Eq. (26.1). Also, if the model (the basis functions, the projection matrix, and the estimation criterion) is not well chosen, the resulting estimate may be inferior to that provided by a transform method. The recent literature has demonstrated, however, that usually there are models that make the efficacy of a reconstruction provided by a series expansion method at least as good as that provided by a transform method.

To avoid the problem of computational expense, one usually stops the iterative process involved in the optimization long before the method has converged to the mathematically specified estimator. Practical experience indicates that this can be done very efficaciously. For example, as reported in [12], in the area of fully three-dimensional PET, the reconstruction times for FBP are slightly longer than for cycling through the data just once with a version of ART using spherically symmetric basis functions, and the accuracy of FBP is significantly worse than what is obtained by this very early iterate produced by ART. Since the iterative process is, in practice, stopped early, in evaluating the efficacy of the result of a series expansion method one should look at the actual outputs rather than at the ideal mathematical optimizer. Reported experiences comparing an optimized version of ART with an optimized version of EM [9, 11] indicate that the former can obtain reconstructions as good as or better than the latter, at a fraction of the computational cost. This computational advantage appears to be due to ART not trying to make use of all the measurements in each iterative step.

The proliferation of image reconstruction algorithms imposes a need to evaluate the relative performance of these algorithms and to understand the relationship between their attributes (free parameters) and their performance. In a specific application of an algorithm, choices have to be made regarding its parameters (such as the basis functions, the optimization criterion, constraints, relaxation, etc.). Such choices affect the performance of the algorithm, and there is a need for an efficient and objective evaluation procedure which enables us to select the best variant of an algorithm for a particular task and to compare the efficacy of different algorithms for that task.

One approach to evaluating an algorithm is to start with a specification of the task for which the image is to be used and then define a figure of merit (FOM) which determines quantitatively how helpful the image — and hence the reconstruction algorithm — is for performing that task. In the numerical observer approach [11, 16, 17], a task-specific FOM is computed for each image. Based on the FOMs for all the images produced by two different techniques, we can calculate the statistical significance at which we can reject the null hypothesis that the methods are equally helpful for solving a particular task, in favor of the alternative hypothesis that the method with the higher average FOM is more helpful for solving that task. Different imaging techniques can then be rank-ordered on the basis of their average FOMs. It is strongly advised that a reconstruction algorithm not be selected based on the appearance of a few sample reconstructions, but rather that a study along the lines indicated above be carried out. In addition to the efficacy of the images produced by the various algorithms, one should also be aware of the computational possibilities that exist for executing them; a survey from this point of view can be found in [2].
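The statistical machinery in the studies cited above is more elaborate, but the flavor of such a comparison can be conveyed by a paired test over matched reconstructions; the FOM values below are invented purely for illustration, and [9, 16, 17] describe the procedures actually used.

```python
import numpy as np
from scipy import stats

# Hypothetical task-specific FOMs from a numerical observer, one per phantom,
# for the same phantoms reconstructed by two competing methods.
fom_a = np.array([0.82, 0.79, 0.85, 0.81, 0.84, 0.80, 0.83])
fom_b = np.array([0.78, 0.77, 0.80, 0.79, 0.81, 0.76, 0.80])

# Paired test of the null hypothesis "equally helpful" against the
# alternative that the higher-average-FOM method is more helpful.
t, p = stats.ttest_rel(fom_a, fom_b)
print(f"means {fom_a.mean():.3f} vs {fom_b.mean():.3f}, t = {t:.2f}, p = {p:.4g}")
```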
References

[1] Herman, G.T., Image Reconstruction from Projections: The Fundamentals of Computerized Tomography, Academic Press, New York, 1980.
[2] Herman, G.T., Image reconstruction from projections, J. Real-Time Imag., 1, 3–18, 1995.
[3] Herman, G.T., Tuy, H.K., Langenberg, K.J., and Sabatier, P.C., Basic Methods of Tomography and Inverse Problems, Adam Hilger, Bristol, England, 1987.
[4] Sanz, J.L.C., Hinkle, E.B., and Jain, A.K., Radon and Projection Transform-Based Computer Vision, Springer-Verlag, Berlin, 1988.
[5] Edholm, P. and Herman, G.T., Linograms in image reconstruction from projections, IEEE Trans. Med. Imag., 6, 301–307, 1987.
[6] Herman, G.T., Roberts, D., and Axel, L., Fully three-dimensional reconstruction from data collected on concentric cubes in Fourier space: implementation and a sample application to MRI, Phys. Med. Biol., 37, 673–687, 1992.
[7] Edholm, P., Herman, G.T., and Roberts, D.A., Image reconstruction from linograms: implementation and evaluation, IEEE Trans. Med. Imag., 7, 239–246, 1988.
[8] Lewitt, R.M., Alternatives to voxels for image representation in iterative reconstruction algorithms, Phys. Med. Biol., 37, 705–716, 1992.
[9] Matej, S., Herman, G.T., Narayan, T.K., Furuie, S.S., Lewitt, R.M., and Kinahan, P., Evaluation of task-oriented performance of several fully 3-D PET reconstruction algorithms, Phys. Med. Biol., 39, 355–367, 1994.
[10] Browne, J.A., Herman, G.T., and Odhner, D., SNARK93 — a programming system for image reconstruction from projections, Technical Report MIPG198, Department of Radiology, University of Pennsylvania, Philadelphia, 1993.
[11] Herman, G.T. and Meyer, L.B., Algebraic reconstruction techniques can be made computationally efficient, IEEE Trans. Med. Imag., 12, 600–609, 1993.
[12] Matej, S. and Lewitt, R.M., Efficient 3D grids for image reconstruction using spherically symmetric volume elements, IEEE Trans. Nucl. Sci., 42, 1361–1370, 1995.
[13] Shepp, L.A. and Vardi, Y., Maximum likelihood reconstruction in positron emission tomography, IEEE Trans. Med. Imag., 1, 113–122, 1982.
[14] Herman, G.T., De Pierro, A.R., and Gai, N., On methods for maximum a posteriori image reconstruction with a normal prior, J. Visual Comm. Image Represent., 3, 316–324, 1992.
[15] Herman, G.T., Odhner, D., Toennies, K.D., and Zenios, S.A., A parallelized algorithm for image reconstruction from noisy projections, in Coleman, T.F. and Li, Y., Eds., Large-Scale Numerical Optimization, SIAM, Philadelphia, PA, 1990, pp. 3–21.
[16] Hanson, K.M., Method of evaluating image-recovery algorithms based on task performance, J. Opt. Soc. Am. A, 7, 1294–1304, 1990.
[17] Furuie, S.S., Herman, G.T., Narayan, T.K., Kinahan, P., Karp, J.S., Lewitt, R.M., and Matej, S., A methodology for testing for statistically significant differences between fully 3-D PET reconstruction algorithms, Phys. Med. Biol., 39, 341–354, 1994.