Digital Signal Processing Handbook, Chapter 26


Gabor T. Herman, "Algorithms for Computed Tomography," 2000 CRC Press LLC. <http://www.engnetbase.com>

Algorithms for Computed Tomography

Gabor T. Herman, University of Pennsylvania

26.1 Introduction
26.2 The Reconstruction Problem
26.3 Transform Methods
26.4 Filtered Backprojection (FBP)
26.5 The Linogram Method
26.6 Series Expansion Methods
26.7 Algebraic Reconstruction Techniques (ART)
26.8 Expectation Maximization (EM)
26.9 Comparison of the Performance of Algorithms
References

26.1 Introduction

Computed tomography is the process of reconstructing the interiors of objects from data collected based on transmitted or emitted radiation. The problem occurs in a wide range of application areas. Here we discuss the computer algorithms used for achieving the reconstructions.

26.2 The Reconstruction Problem

We want to solve the following general problem. There is a three-dimensional structure whose internal composition is unknown to us. We subject this structure to some kind of radiation, either by transmitting the radiation through the structure or by introducing the emitter of the radiation into the structure. We measure the radiation transmitted through or emitted from the structure at a number of points. Computed tomography (CT) is the process of obtaining from these measurements the distribution of the physical parameter(s) inside the structure that have an effect on the measurements. The problem occurs in a wide range of areas, such as x-ray CT, emission tomography, photon migration imaging, and electron microscopic reconstruction; see, e.g., [1, 2]. All of these are inverse problems of various sorts; see, e.g., [3].

Where it is not otherwise stated, we will be discussing the special reconstruction problem of estimating a function of two variables from estimates of its line integrals. As it is quite reasonable for any application, we will assume that the domain of the function is contained in a finite region of the plane. In what follows we will introduce all the needed notation and terminology; in most cases these agree with those used in [1].
c  1999byCRCPressLLC Suppose f is a function of the two polar variables r and φ.Let[Rf ](, θ) denote the line integral of f along the line that is at distance  from the origin and makes an angle θ with the vertical axis. This operator R is usually referred to as the Radon transform. The input data to a reconstruction algorithm are estimates (based on physical measurements) of the values of [Rf ](, θ) for a finite number of pairs (, θ ); its output is an estimate, in some sense, of f . More precisely, suppose that the estimates of [Rf ](, θ) are known for I pairs: ( i ,θ i ), 1 ≤ i ≤ I. We use y to denote the I-dimensional column vector (called the measurement vector) whose ith component, y i , is the available estimate of [Rf ]( i ,θ i ). The task of a reconstruction algorithm is: given the data y, estimate the function f . Following [1], reconstruction algorithms are characterized either as transform methods or as series expansion methods. In the following subsections we discuss the underlying ideas of these two approaches and give detailed descriptions of two algorithms from each category. 26.3 Transform Methods The Radon transform has an inverse, R −1 , defined as follows. For a function p of  and θ,  R −1 p  (r, φ) = 1 2π 2  π 0  ∞ −∞ 1 r cos(θ − φ) −  p 1 (,θ)ddθ , (26.1) where p 1 (, θ ) denotes the partial derivative of p with respect to its first variable . [Note that it is intrinsically assumed in this definition that p is sufficiently smooth for the existence of the integral in Eq. (26.1)]. It is known [1] that for any function f which satisfies some physically reasonable conditions (such as continuity and boundedness) we have, for all points (r, φ),  R −1 Rf  (r, φ) = f(r,φ). (26.2) Transform methods are numerical procedures that estimate values of the double integral on the right hand side of Eq. (26.1)fromgivenvaluesofp( i ,θ i ), for 1 ≤ i ≤ I. 
We now discuss two such methods: the widely adopted filtered backprojection (FBP) algorithm (called the convolution method in [1]) and the more recently developed linogram method.

26.4 Filtered Backprojection (FBP)

In this algorithm, the right-hand side of Eq. (26.1) is approximated by a two-step process (for derivational details see [1] or, in a more general context, [3]). First, for fixed values of θ, convolutions defined by

    [p * q](\ell', \theta) = \int_{-\infty}^{\infty} p(\ell, \theta) \, q(\ell' - \ell) \, d\ell    (26.3)

are carried out, using a convolving function q (of one variable) whose exact choice will have an important influence on the appearance of the final image. Second, our estimate f* of f is obtained by backprojection as follows:

    f^*(r, \phi) = \int_0^{\pi} [p * q](r \cos(\theta - \phi), \theta) \, d\theta .    (26.4)

To make explicit the implementation of this for a given measurement vector, let us assume that the data function p is known at the points (nd, mΔ), −N ≤ n ≤ N, 0 ≤ m ≤ M − 1, with MΔ = π. Let us further assume that the function f is to be estimated at points (r_j, φ_j), 1 ≤ j ≤ J. The computer algorithm operates as follows. A sequence f_0, …, f_{M−1}, f_M of estimates is produced; the last of these is the output of the algorithm. First we define

    f_0(r_j, \phi_j) = 0 ,    (26.5)

for 1 ≤ j ≤ J. Then, for each value of m, 0 ≤ m ≤ M − 1, we produce the (m+1)st estimate from the mth estimate by a two-step process:

1. For −N ≤ n′ ≤ N, calculate

    p_c(n' d, m \Delta) = d \sum_{n=-N}^{N} p(n d, m \Delta) \, q((n' - n) d) ,    (26.6)

using the measured values of p(nd, mΔ) and precalculated values (the same for all m) of q((n′ − n)d). This is a discretization of Eq. (26.3).

2. For 1 ≤ j ≤ J, we set

    f_{m+1}(r_j, \phi_j) = f_m(r_j, \phi_j) + p_c(r_j \cos(m \Delta - \phi_j), m \Delta) .    (26.7)

This is a discretization of Eq. (26.4). To carry it out, we need to interpolate in the first variable of p_c from the values calculated in Eq. (26.6) to obtain the values needed in Eq. (26.7).
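The two steps above can be sketched in code. The following is a minimal illustration, not the chapter's implementation: it assumes the common Ram–Lak choice for the convolving function q (the chapter deliberately leaves q open), uses linear interpolation in Eq. (26.7), and writes the weight Δ = π/M for the discretized θ-integral explicitly.

```python
import numpy as np

def ramlak_kernel(N, d):
    # An assumed, commonly used convolving function q, sampled at (-2N..2N)*d:
    # q(0) = 1/(4 d^2), q(nd) = -1/(pi^2 n^2 d^2) for odd n, 0 for even n != 0.
    n = np.arange(-2 * N, 2 * N + 1)
    q = np.zeros(n.shape, dtype=float)
    q[n == 0] = 1.0 / (4.0 * d**2)
    odd = (n % 2) == 1                 # (-3) % 2 == 1 in Python, so this covers negative odd n
    q[odd] = -1.0 / (np.pi**2 * n[odd].astype(float) ** 2 * d**2)
    return q

def fbp(p, d, r, phi):
    """Filtered backprojection per Eqs. (26.5)-(26.7).
    p[m, n] holds the datum at (n*d, m*Delta), n = -N..N, Delta = pi/M;
    (r, phi) are arrays with the polar coordinates of the estimation points."""
    M, width = p.shape
    N = (width - 1) // 2
    Delta = np.pi / M
    q = ramlak_kernel(N, d)
    ell_grid = d * np.arange(-N, N + 1)
    f = np.zeros_like(r, dtype=float)
    for m in range(M):
        full = np.convolve(p[m], q)        # q's index is shifted by 2N in 'full'
        p_c = d * full[2 * N:4 * N + 1]    # Eq. (26.6): p_c at n' = -N..N
        ell = r * np.cos(m * Delta - phi)  # Eq. (26.7), with linear interpolation
        f += np.interp(ell, ell_grid, p_c)
    return Delta * f                       # Riemann-sum weight for the theta integral
```

With simulated chord-length data for a uniform disc, this sketch recovers the disc's density near the center to within a few percent for moderate N and M.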
In practice, once f_{m+1}(r_j, φ_j) has been calculated, f_m(r_j, φ_j) is no longer needed, and the computer can reuse the same memory locations for f_0(r_j, φ_j), …, f_{M−1}(r_j, φ_j), f_M(r_j, φ_j). In a complete execution of the algorithm, the uses of Eq. (26.6) require M(2N+1)² multiplications and additions, while all the uses of Eq. (26.7) require MJ interpolations and additions. Since J is typically of the order of N², and N itself in typical applications is between 100 and 1000, we see that backprojection is likely to be much more computationally demanding than convolution. In any case, reconstruction of a typical 512 × 512 cross-section from data collected by a typical x-ray CT device is not a challenge to state-of-the-art computational capabilities; it is routinely done in the order of a second or so and can be done, using a pipeline architecture, in a fraction of a second [4].

26.5 The Linogram Method

The basic result that justifies this method is the well-known projection theorem, which says that "taking the two-dimensional Fourier transform is the same as taking the Radon transform and then applying the Fourier transform with respect to the first variable" [1]. The method was first proposed in [5], and the reason for its name can be found there. The basic reason for proposing this method is its speed of execution; we return to this below.

In the description that follows, we use the approach of [6]. That paper deals with the fully three-dimensional problem; here we simplify it to the two-dimensional case. For the linogram approach we assume that the data were collected in a special way (that is, at points whose locations will be precisely specified below); if they were collected otherwise, we need to interpolate prior to reconstruction.
If the function is to be estimated at an array of points with rectangular coordinates {(id, jd) | −N ≤ i ≤ N, −N ≤ j ≤ N} (this array is assumed to cover the object to be reconstructed), then the data function p needs to be known at the points

    (n d_m, \theta_m) , \quad -2N-1 \le n \le 2N+1 , \; -2N-1 \le m \le 2N+1 ,    (26.8)

and at the points

    \left( n d_m, \frac{\pi}{2} + \theta_m \right) , \quad -2N-1 \le n \le 2N+1 , \; -2N-1 \le m \le 2N+1 ,    (26.9)

where

    \theta_m = \tan^{-1} \frac{2m}{4N+3} \quad \text{and} \quad d_m = d \cos \theta_m .    (26.10)

The linogram method produces from such data estimates of the function values at the desired points using a multi-stage procedure. We now list these stages, but first point out two facts. One is that the most expensive computation that needs to be used in any of the stages is the taking of discrete Fourier transforms (DFTs), which can always be implemented (possibly after some padding by zeros) very efficiently by the use of the fast Fourier transform (FFT). The other is that the output of any stage produces estimates of function values at exactly those points where they are needed for the discrete computations of the next stage; there is never any need to interpolate between stages. It is these two facts which indicate why the linogram method is both computationally efficient and accurate. (From the point of view of this handbook, these facts justify the choice of sampling points in Eqs. (26.8) through (26.10); a geometrical interpretation is given in [7].)

1. Fourier transforming of the data — For each value of the second variable, we take the DFT of the data with respect to the first variable in Eq. (26.8) and Eq. (26.9).
By the projection theorem, this provides us with estimates of the two-dimensional Fourier transform F of the object at the points (in a rectangular coordinate system)

    \left( \frac{k}{(4N+3)d} , \; \frac{k}{(4N+3)d} \tan \theta_m \right) , \quad -2N-1 \le k \le 2N+1 , \; -2N-1 \le m \le 2N+1 ,    (26.11)

and at the points (also in a rectangular coordinate system)

    \left( \frac{k}{(4N+3)d} \tan\!\left( \frac{\pi}{2} + \theta_m \right) , \; \frac{k}{(4N+3)d} \right) , \quad -2N-1 \le k \le 2N+1 , \; -2N-1 \le m \le 2N+1 .    (26.12)

2. Windowing — At this point we may suppress those frequencies which we suspect to be noise-dominated by multiplying with a window function (corresponding to the convolving function in FBP).

3. Separating into two functions — The sampled Fourier transform F of the object to be reconstructed is written as the sum of two functions, G and H. G has the same values as F at all the points specified in Eq. (26.11) except at the origin and is zero-valued at all other points. H has the same values as F at all the points specified in Eq. (26.12) except at the origin and is zero-valued at all other points. Clearly, except at the origin, F = G + H. The idea is that by first taking the two-dimensional inverse Fourier transforms of G and H separately and then adding the results, we get an estimate (except for a DC term which has to be estimated separately; see [6]) of f. We only follow what needs to be done with G; the situation with H is analogous.

4. Chirp z-transforming in the second variable — Note that the way the θ_m were selected implies that if we fix k, then the sampling in the second variable of Eq. (26.11) is uniform. Furthermore, we know that the value of G is zero outside the sampled region. Hence, for each fixed k, 0 < |k| ≤ 2N+1, we can use the chirp z-transform to estimate the inverse DFT in the second variable at the points

    \left( \frac{k}{(4N+3)d} , \; jd \right) , \quad -2N-1 \le k \le 2N+1 , \; -N \le j \le N .    (26.13)

The chirp z-transform can be implemented using three FFTs; see [7].

5.
Inverse transforming in the first variable — The inverse Fourier transform of G can now be estimated at the required points by taking, for every fixed j, the inverse DFT in the first variable of the values at the points of Eq. (26.13).

26.6 Series Expansion Methods

This approach assumes that the function f to be reconstructed can be approximated by a linear combination of a finite set of known and fixed basis functions,

    f(r, \phi) \approx \sum_{j=1}^{J} x_j \, b_j(r, \phi) ,    (26.14)

and that our task is to estimate the unknowns x_j. If we assume that the measurements depend linearly on the object to be reconstructed (certainly true in the special case of line integrals) and that we know (at least approximately) what the measurements would be if the object to be reconstructed were one of the basis functions (we use r_{i,j} to denote the value of the ith measurement of the jth basis function), then we can conclude [1] that the ith of our measurements of f is approximately

    \sum_{j=1}^{J} r_{i,j} \, x_j .    (26.15)

Our problem is then to estimate the x_j from the measured approximations (for 1 ≤ i ≤ I) to Eq. (26.15). The estimate can often be selected as one that satisfies some optimization criterion.

To simplify the notation, the image is represented by a J-dimensional image vector x (with components x_j) and the data form an I-dimensional measurement vector y. There is an assumed projection matrix R (with entries r_{i,j}). We let r_i denote the transpose of the ith row of R (1 ≤ i ≤ I), so that the inner product ⟨r_i, x⟩ is the same as the expression in Eq. (26.15). Then y is approximately Rx, and there may be further information that x belongs to a subset C of R^J, the space of J-dimensional real-valued vectors. In this formulation R, C, and y are known and x is to be estimated. Substituting the estimated values of x_j into Eq. (26.14) will then provide us with an estimate of the function f.
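As a concrete toy instance of this formulation (an assumed setup of ours, not the chapter's), take square-pixel basis functions on an n × n grid and rays parallel to the coordinate axes through the pixel centers. For such rays the intersection length of ray i with pixel j — and hence r_{i,j} — is either the pixel side d or 0, so the projection matrix R can be written down directly:

```python
import numpy as np

def projection_matrix(n, d):
    """Projection matrix R for pixel basis functions on an n-by-n grid with
    2n axis-parallel rays through the pixel centers; r_{i,j} is the length
    of the intersection of ray i with pixel j (here: d or 0)."""
    J = n * n
    rows = []
    for i in range(n):                 # horizontal rays, one per pixel row
        r = np.zeros(J)
        r[i * n:(i + 1) * n] = d
        rows.append(r)
    for j in range(n):                 # vertical rays, one per pixel column
        r = np.zeros(J)
        r[j::n] = d
        rows.append(r)
    return np.array(rows)              # shape (2n, n*n)

n, d = 4, 0.5
R = projection_matrix(n, d)
x = np.arange(n * n, dtype=float)      # a toy image vector
y = R @ x                              # Eq. (26.15): y_i = sum_j r_{i,j} x_j
```

Note that each ray meets only n of the n² pixels, so most of the r_{i,j} vanish — the sparsity that the iterative methods below exploit.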
The simplest way of selecting the basis functions is by subdividing the plane into pixels (or space into voxels) and choosing basis functions whose value is 1 inside a specific pixel (or voxel) and 0 everywhere else. However, there are other choices that may be preferable; for example, [8] uses spherically symmetric basis functions that are not only spatially limited, but can also be chosen to be very smooth. The smoothness of the basis functions then results in smoothness of the reconstructions, while the spherical symmetry allows easy calculation of the r_{i,j}. It has been demonstrated [9], for the case of fully three-dimensional positron emission tomography (PET) reconstruction, that such basis functions indeed lead to statistically significant improvements in the task-oriented performance of series expansion reconstruction methods.

In many situations only a small proportion of the r_{i,j} are nonzero. (For example, if the basis functions are based on voxels in a 200 × 200 × 100 array and the measurements are approximate line integrals, then the percentage of nonzero r_{i,j} is less than 0.01, since a typical line will intersect fewer than 400 voxels.) This makes certain types of iterative methods for estimating the x_j surprisingly efficient. This is because one can make use of a subroutine which, for any i, returns a list of those j for which r_{i,j} is nonzero, together with the values of the r_{i,j} [1, 10]. We now discuss two such iterative approaches: the so-called algebraic reconstruction techniques (ART) and the use of expectation maximization (EM).

26.7 Algebraic Reconstruction Techniques (ART)

The basic version of ART operates as follows [1]. The method cycles through the measurements repeatedly, considering only one measurement at a time. Only those x_j are updated for which the corresponding r_{i,j} for the currently considered measurement i is nonzero, and the change made to x_j is proportional to r_{i,j}.
The factor of proportionality is adjusted so that, if Eq. (26.15) is evaluated for the resulting x_j, it will match exactly the ith measurement. Other variants use a block of measurements in one iterative step and update the x_j in different ways to ensure that the iterative process converges according to a chosen estimation criterion. Here we discuss only one specific optimization criterion and the associated algorithm. (Others can be found, for example, in [1].) Our task is to find the x in R^J which minimizes

    r^2 \| y - R x \|^2 + \| x - \mu_x \|^2    (26.16)

(‖·‖ indicates the usual Euclidean norm), for a given constant scalar r (called the regularization parameter) and a given constant vector μ_x.

The algorithm makes use of an I-dimensional vector u of additional variables, one for each measurement. First we define u^(0) to be the I-dimensional zero vector and x^(0) to be μ_x. Then, for k ≥ 0, we set

    u^{(k+1)} = u^{(k)} + c^{(k)} e_{i_k} , \qquad x^{(k+1)} = x^{(k)} + r c^{(k)} r_{i_k} ,    (26.17)

where e_i is an I-dimensional vector whose ith component is 1, with all other components being 0, and

    c^{(k)} = \lambda^{(k)} \, \frac{ r \left( y_{i_k} - \langle r_{i_k}, x^{(k)} \rangle \right) - u^{(k)}_{i_k} }{ 1 + r^2 \| r_{i_k} \|^2 } ,    (26.18)

with i_k = [k (mod I)] + 1.

THEOREM 26.1 (see [1] for a proof). Let y be any measurement vector, r be any real number, and μ_x be any element of R^J. Then for any real numbers λ^(k) satisfying

    0 < \varepsilon_1 \le \lambda^{(k)} \le \varepsilon_2 < 2 ,    (26.19)

the sequence x^(0), x^(1), x^(2), … determined by the algorithm given above converges to the unique vector x which minimizes Eq. (26.16).

The implementation of this algorithm is hardly more complicated than that of basic ART, which is described at the beginning of this subsection. We need an additional sequence of I-dimensional vectors u^(k), but in the kth iterative step only one component of u^(k) is needed or altered.
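The iteration of Eqs. (26.17) and (26.18) can be sketched as follows. This is an illustrative dense-matrix version with names of our choosing: x^(0) is initialized to μ_x so that the limit matches the criterion (26.16), and in practice the rows r_i would be generated on the fly rather than stored.

```python
import numpy as np

def regularized_art(R, y, reg, mu, lam=1.0, sweeps=200):
    """ART iteration of Eqs. (26.17)-(26.18) for the criterion (26.16):
    minimize reg^2 * ||y - R x||^2 + ||x - mu||^2 over x in R^J."""
    I, J = R.shape
    u = np.zeros(I)          # one additional variable per measurement
    x = mu.astype(float).copy()
    for k in range(sweeps * I):
        i = k % I            # cyclic data access: i_k = [k (mod I)] + 1
        r_i = R[i]
        c = lam * (reg * (y[i] - r_i @ x) - u[i]) / (1.0 + reg**2 * (r_i @ r_i))
        u[i] += c            # only the i_k-th component of u changes
        x += reg * c * r_i   # Eq. (26.17)
    return x

# Tiny check against the closed-form regularized least-squares solution:
# the minimizer of (26.16) is (I + reg^2 R^T R)^{-1} (mu + reg^2 R^T y).
rng = np.random.default_rng(0)
R = rng.standard_normal((5, 3))
y = rng.standard_normal(5)
reg, mu = 2.0, np.zeros(3)
x_art = regularized_art(R, y, reg, mu, sweeps=500)
x_direct = np.linalg.solve(np.eye(3) + reg**2 * R.T @ R, mu + reg**2 * R.T @ y)
```

Per Theorem 26.1, any relaxation λ^(k) kept inside (0, 2) yields convergence; λ = 1 is used here for simplicity.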
Since the i_k are defined in a cyclic order, the components of the vector u^(k) (just like the components of the measurement vector y) can be accessed sequentially. (The exact choice of this order — often referred to as the data access ordering — is very important for fast initial convergence; it is described in [11]. The underlying principle is that in any subsequence of steps, we wish the individual actions to be as independent as possible.) We also use, for every integer k ≥ 0, a positive real number λ^(k). (These are the so-called relaxation parameters. They are free parameters of the algorithm and in practice need to be optimized [11].) The r_i are usually not stored at all; instead, the locations and sizes of their nonzero elements are calculated as and when needed. Hence, the algorithm described by Eq. (26.17) and Eq. (26.18) shares the storage-efficient nature of basic ART, and its computational requirements are essentially the same. Assuming, as is reasonable, that the number of nonzero r_{i,j} is of the same order as J, we see that the cost of cycling through the data once using ART is of the order IJ, which is approximately the same as the cost of reconstructing using FBP. (That this is indeed so is confirmed by the timings reported in [12].)

An important thing to note about Theorem 26.1 is that there are no consistency restrictions in its statement. Hence, the algorithm of Eqs. (26.17) and (26.18) will converge to the minimizer of Eq. (26.16) — the so-called regularized least squares solution — using the real data collected in any application.

26.8 Expectation Maximization (EM)

We may wish to find the x that maximizes the likelihood of observing the actual measurements, based on the assumption that the ith measurement comes from a Poisson distribution whose mean is given by Eq. (26.15). An iterative method to do exactly that, based on the so-called EM (expectation maximization) approach, was proposed in [13].
Here we discuss a variant of this approach that was designed for a somewhat more complicated optimization criterion [14], which enforces smoothness of the results where the original maximum likelihood criterion may result in noisy images.

Let R^J_+ denote those elements of R^J in which all components are non-negative. Our task is to find the x in R^J_+ which minimizes

    \sum_{i=1}^{I} \left[ \langle r_i, x \rangle - y_i \ln \langle r_i, x \rangle \right] + \frac{\gamma}{2} \, x^T S x ,    (26.20)

where the J × J matrix S (with entries denoted by s_{j,u}) is a modified smoothing matrix [1] which has the following property. (This definition is only applicable if we use pixels to define the basis functions.) Let N denote the set of indexes corresponding to pixels that are not on the border of the digitization. Each such pixel has eight neighbors; let N_j denote the indexes of the pixels associated with the neighbors of the pixel indexed by j. Then

    x^T S x = \sum_{j \in N} \left( x_j - \frac{1}{8} \sum_{k \in N_j} x_k \right)^2 .    (26.21)

Consider the following rules for obtaining x^(k+1) from x^(k):

    p^{(k)}_j = \frac{ \sum_{i=1}^{I} r_{i,j} }{ 9 \gamma s_{j,j} } - x^{(k)}_j + \frac{1}{9 s_{j,j}} \sum_{u=1}^{J} s_{j,u} x^{(k)}_u ,    (26.22)

    q^{(k)}_j = \frac{ x^{(k)}_j }{ 9 \gamma s_{j,j} } \sum_{i=1}^{I} r_{i,j} \frac{ y_i }{ \langle r_i, x^{(k)} \rangle } ,    (26.23)

    x^{(k+1)}_j = \frac{1}{2} \left( -p^{(k)}_j + \sqrt{ \left( p^{(k)}_j \right)^2 + 4 q^{(k)}_j } \right) .    (26.24)

Since the first term of Eq. (26.22) can be precalculated, the execution of Eq. (26.22) requires essentially no more effort than multiplying x^(k) by the modified smoothing matrix. As explained in [1], there is a very efficient way of doing this. The execution of Eq. (26.23) requires approximately the same effort as cycling once through the data set using ART; see Eq. (26.18). Algorithmic details of efficient computations of Eq. (26.23) appeared in [15]. Clearly, the execution of Eq. (26.24) requires a trivial amount of computing. Thus, we see that one iterative step of the EM algorithm of Eqs. (26.22) to (26.24) requires, in total, approximately the same computing effort as cycling through the data set once with ART, which costs about the same as a complete reconstruction by FBP. A basic difference between the ART method and the EM method is that the former updates its estimate based on one measurement at a time, while the latter deals with all measurements simultaneously.

THEOREM 26.2 (see [14] for a proof). For any x^(0) with only positive components, the sequence x^(0), x^(1), x^(2), … generated by the algorithm of Eqs. (26.22) to (26.24) converges to the minimizer of Eq. (26.20) in R^J_+.

26.9 Comparison of the Performance of Algorithms

We have discussed four very different-looking algorithms, and the literature is full of many others, only some of which are surveyed in books such as [1]. Many of the algorithms are available in general-purpose image reconstruction software packages, such as SNARK [10]. The novice faced with a problem of reconstruction is justified in being puzzled as to which algorithm to use. Unfortunately, there is no generally valid answer: the right choice may very well depend on the area of application and on the instrument used for gathering the data. Here we make only some general comments regarding the four approaches discussed above, followed by some discussion of the methodologies that are available to comparatively evaluate reconstruction algorithms for a particular application.

Concerning the two transform methods we have discussed, the linogram method is faster than FBP (essentially an N² log N method, rather than an N³ method as is FBP) and, when the data are collected according to the geometry expressed by Eqs. (26.8) and (26.9), the linogram method is likely to be more accurate because it requires no interpolations.
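Returning to the EM iteration of Section 26.8, Eqs. (26.22) to (26.24) can be sketched as follows. This is an illustrative dense-matrix toy of ours: the grid size, the data, and the explicit construction of S from Eq. (26.21) as AᵀA are assumptions, and in practice both S and R would be exploited in sparse form.

```python
import numpy as np

def smoothing_matrix(n):
    """Modified smoothing matrix S of Eq. (26.21) on an n-by-n pixel grid:
    x^T S x sums, over the interior pixels, the squared difference between
    a pixel's value and the mean of its eight neighbors."""
    J = n * n
    rows = []
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            a = np.zeros(J)
            a[i * n + j] = 1.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di != 0 or dj != 0:
                        a[(i + di) * n + (j + dj)] = -1.0 / 8.0
            rows.append(a)
    A = np.array(rows)
    return A.T @ A                     # so that x^T S x = ||A x||^2

def em_step(x, R, y, S, gamma):
    """One update of Eqs. (26.22)-(26.24); keeps all components positive."""
    s = np.diag(S)                     # the diagonal entries s_{j,j}
    p = R.sum(axis=0) / (9.0 * gamma * s) - x + (S @ x) / (9.0 * s)  # Eq. (26.22)
    q = x / (9.0 * gamma * s) * (R.T @ (y / (R @ x)))                # Eq. (26.23)
    return 0.5 * (-p + np.sqrt(p**2 + 4.0 * q))                      # Eq. (26.24)

# A toy problem: consistent positive data on a 3-by-3 pixel grid.
rng = np.random.default_rng(1)
n = 3
S = smoothing_matrix(n)
R = rng.uniform(0.5, 1.5, size=(12, n * n))
x_true = rng.uniform(0.5, 2.0, size=n * n)
y = R @ x_true
gamma = 0.05
x = np.ones(n * n)                     # any strictly positive start (Theorem 26.2)
for _ in range(200):
    x = em_step(x, R, y, S, gamma)
```

Note how the quadratic-formula update in Eq. (26.24) preserves positivity: q_j > 0 whenever x^(k) is positive, so the square root exceeds |p_j| and x^(k+1) stays in R^J_+.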
However, data are not normally collected this way, and the need for an initial interpolation, together with the more complicated-looking expressions that need to be implemented for the linogram method, may indeed steer some users towards FBP, in spite of its extra computational requirements.

Advantages of series expansion methods over transform methods are their flexibility (no special relationship needs to be assumed between the object to be reconstructed and the measurements taken, such as that the latter are samples of the Radon transform of the former) and the ability to control the type of solution we want by specifying the exact sense in which the image vector is to be estimated from the measurement vector; see Eqs. (26.16) and (26.20). The major disadvantage is that it is computationally much more intensive to find these precise estimators than to numerically evaluate Eq. (26.1). Also, if the model (the basis functions, the projection matrix, and the estimation criterion) is not well chosen, then the resulting estimate may be inferior to that provided by a transform method. The recent literature has demonstrated that there are usually models that make the efficacy of a reconstruction provided by a series expansion method at least as good as that provided by a transform method. To avoid the problem of computational expense, one usually stops the iterative process involved in the optimization long before the method has converged to the mathematically specified estimator. Practical experience indicates that this can be done very efficaciously. For example, as reported in [12], in the area of fully three-dimensional PET, the reconstruction times for FBP are slightly longer than for cycling through the data just once with a version of ART using spherically symmetric basis functions, and the accuracy of FBP is significantly worse than what is obtained by this very early iterate produced by ART.
Since the iterative process is, in practice, stopped early, in evaluating the efficacy of the result of a series expansion method one should look at the actual outputs rather than the ideal mathematical optimizer. Reported experiences comparing an optimized version of ART with an optimized version of EM [9, 11] indicate that the former can obtain reconstructions as good as or better than the latter, and at a fraction of the computational cost. This computational advantage appears to be due to not trying to make use of all the measurements in each iterative step.

The proliferation of image reconstruction algorithms imposes a need to evaluate the relative performance of these algorithms and to understand the relationship between their attributes (free parameters) and their performance. In a specific application of an algorithm, choices have to be made regarding its parameters (such as the basis functions, the optimization criterion, constraints, relaxation, etc.). Such choices affect the performance of the algorithm, and there is a need for an efficient and objective evaluation procedure which enables us to select the best variant of an algorithm for a particular task and to compare the efficacy of different algorithms for that task.

An approach to evaluating an algorithm is to first start with a specification of the task for which the image is to be used, and then define a figure of merit (FOM) which determines quantitatively how helpful the image — and hence the reconstruction algorithm — is for performing that task. In the numerical observer approach [11, 16, 17], a task-specific FOM is computed for each image. Based on the FOMs for all the images produced by two different techniques, we can calculate the statistical significance at which we can reject the null hypothesis that the methods are equally helpful for solving a particular task, in favor of the alternative hypothesis that the method with the higher average FOM is more helpful for solving that task.
Different imaging techniques can then be rank-ordered on the basis of their average FOMs. It is strongly advised that a reconstruction algorithm should not be selected based on the appearance of a few sample reconstructions, but rather that a study along the lines indicated above be carried out. In addition to the efficacy of the images produced by the various algorithms, one should also be aware of the computational possibilities that exist for executing them. A survey from this point of view can be found in [2].

References

[1] Herman, G.T., Image Reconstruction from Projections: The Fundamentals of Computerized Tomography, Academic Press, New York, 1980.
[2] Herman, G.T., Image reconstruction from projections, J. Real-Time Imag., 1, 3–18, 1995.
[3] Herman, G.T., Tuy, H.K., Langenberg, K.J., and Sabatier, P.C., Basic Methods of Tomography and Inverse Problems, Adam Hilger, Bristol, England, 1987.
[4] Sanz, J.L.C., Hinkle, E.B., and Jain, A.K., Radon and Projection Transform-Based Computer Vision, Springer-Verlag, Berlin, 1988.
[5] Edholm, P. and Herman, G.T., Linograms in image reconstruction from projections, IEEE Trans. Med. Imag., 6, 301–307, 1987.
[6] Herman, G.T., Roberts, D., and Axel, L., Fully three-dimensional reconstruction from data collected on concentric cubes in Fourier space: implementation and a sample application to MRI, Phys. Med. Biol., 37, 673–687, 1992.
[7] Edholm, P., Herman, G.T., and Roberts, D.A., Image reconstruction from linograms: implementation and evaluation, IEEE Trans. Med. Imag., 7, 239–246, 1988.
[8] Lewitt, R.M., Alternatives to voxels for image representation in iterative reconstruction algorithms, Phys. Med. Biol., 37, 705–716, 1992.
