[Figure: The accumulation of individuals as a function of accumulated incidence for a bird community in southeastern Arizona (W. Leitner, unpublished data). Note that \(N_k\) is nearly exponential in \(S_k\).]

We shall have few data points (e.g., \(N_1\) and \(N_2\)) at which we wish to concentrate our estimation effort (\(S_1\) and \(S_2\)). When we have a large proportion of rare species, the convergence will of course be much slower.

Now, \(S_{\mathrm{obs}}\) is a random variable, and the variance in \(S_{\mathrm{obs}}\) is

\[
\mathrm{VAR}(S_{\mathrm{obs}}) = S \int \bigl((1-p)^{t} - (1-p)^{2t}\bigr)\, h(p)\, dp
\tag{11}
\]

The root mean square error will then decrease approximately like the square root of the error in the mean. Perhaps one of the indices of diversity that measure evenness could be used to sharpen confidence intervals in some way. Before undertaking the design of such a method, it would be beneficial to know whether any function of F could be used to increase our confidence in our estimate of \(E[S_{\mathrm{obs}}]\). Burnham and Overton (1978) show that \((F_1, \ldots, F_t)\) are sufficient for p under the assumption of iid \(p_i\). Now, the distribution of \(S_{\mathrm{obs}}\) is determined by the distribution of p because \(S_{\mathrm{obs}} = \sum_{i=1}^{t} F_i\). Therefore, to increase our confidence in our estimator, we should focus directly on reducing its variance.

Thus, our goals in estimating SR should include obtaining estimators that decrease the bias faster than \((1 - E[p])^{t}\) and whose standard errors decrease faster than \(\sqrt{(1 - E[p])^{t}}\). Often, these goals will conflict with one another. However, knowing that the bias decreases like \((1 - E[p])^{t}\) tells us that, for large enough t, the bias decreases faster than \(t^{-k}\), \(k = 1, 2, \ldots\). Resampling methods can use this fact to accelerate the rate of bias reduction.
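To make the bias and variance behavior above concrete, here is a minimal Monte Carlo sketch in Python. It is not from the source: the true richness \(S = 100\), the beta distribution standing in for \(h(p)\), and all other values are illustrative assumptions. The simulation draws t incidence samples, tallies \(S_{\mathrm{obs}}\), and compares the simulated bias and variance against \(S \int (1-p)^{t} h(p)\,dp\) and Eq. (11):

```python
import numpy as np

rng = np.random.default_rng(0)

S = 100                      # true species richness (illustrative assumption)
p = rng.beta(0.5, 3.0, S)    # iid detection probabilities p_i; beta stands in for h(p)

def simulate(t, reps=2000):
    """Monte Carlo mean and variance of S_obs over `reps` surveys of t samples."""
    s_obs = np.empty(reps)
    for r in range(reps):
        # incidence[i, j] is True if species i is detected in sample j
        incidence = rng.random((S, t)) < p[:, None]
        s_obs[r] = np.count_nonzero(incidence.any(axis=1))
    return s_obs.mean(), s_obs.var()

for t in (5, 10, 20, 40):
    mc_mean, mc_var = simulate(t)
    bias = np.sum((1 - p) ** t)                      # E[S] - E[S_obs], given these p_i
    var = np.sum((1 - p) ** t - (1 - p) ** (2 * t))  # Eq. (11), given these p_i
    print(f"t={t:3d}  bias~{S - mc_mean:6.2f} (theory {bias:6.2f})  "
          f"VAR~{mc_var:6.2f} (theory {var:6.2f})")
```

With much of the mass of h(p) near zero, as here, the printed bias shrinks only slowly in t, echoing the remark above that convergence is much slower when a large proportion of species are rare.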
Resampling Methods (\(S_{Jk}\), \(S_B\))

We can very rarely expect

\[
E[f(X)] = f(E[X])
\tag{12}
\]

When \(f(x)\) is linear in x, Eq. (12) holds. Suppose that \(f(x)\) is nonlinear. Then, as x spreads away from \(x = E[X]\), the function f can over- or underweight the contribution of X to the mean. Thus, both the nature of f and the distribution of X contribute to bias. (For example, if \(f(x) = x^2\), then \(E[X^2] = (E[X])^2 + \mathrm{VAR}(X)\), so the approximation error is exactly the variance of X.) For this reason, outliers (points far from E[X]) can severely distort the approximation \(E[f(X)] \approx f(E[X])\). Most estimators derive from just such an approximation. Therefore, when we have outliers, we can sometimes improve our estimate by deleting them from the data set. A version of this approach, applied to data not so clearly identified as extreme, is the intuitive basis for the jackknife and bootstrap estimators. If the order in which data are recorded does not matter, then we should expect the relationship between the distributions based on t and \(t - 1\) data points to inform us about the relationship between the distributions based on t and \(t + 1\) data points. Thus, the common property that underlies both techniques is the exchangeability of the data.

There are two resampling approaches we consider: jackknifing and bootstrapping. In each case, we describe the resampling strategy that gives rise to the estimator. However, in each case it turns out that actual resampling need not be done: the F statistics contain the same information we would obtain from resampling.

Jackknife Estimator (\(S_{Jk}\))

Jackknifing involves computing the average value of a statistic on a reduced data set. One removes each combination of k data points from the data set and computes the statistic of interest, giving a set of new pseudostatistics. Then, by taking a suitably weighted average of these pseudostatistics, we obtain a reduced-bias version of the original statistic. The number of data points removed at a time gives the order of the jackknife. The most obvious statistic to which we can apply the jackknife is the observed richness, \(S_{\mathrm{obs}}\), itself.
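As a concrete illustration, here is a minimal Python sketch of the first-order (k = 1) jackknife applied to \(S_{\mathrm{obs}}\); the function name and the simulated incidence data are illustrative choices, not from the source. It computes the estimator two ways: by explicit leave-one-out resampling, and in closed form from the F statistics via the first-order jackknife \(S_{J1} = S_{\mathrm{obs}} + \frac{t-1}{t}F_1\) of Burnham and Overton (1978). Their agreement illustrates the claim above that actual resampling need not be done:

```python
import numpy as np

def first_order_jackknife(incidence):
    """First-order jackknife of observed richness from a species-by-sample
    incidence matrix (incidence[i, j] is True if species i occurs in sample j)."""
    incidence = np.asarray(incidence, dtype=bool)
    t = incidence.shape[1]                    # number of samples
    s_obs = np.count_nonzero(incidence.any(axis=1))

    # Route 1: explicit leave-one-out resampling over the t samples.
    loo = [np.count_nonzero(np.delete(incidence, j, axis=1).any(axis=1))
           for j in range(t)]
    s_j1_resampled = t * s_obs - (t - 1) * np.mean(loo)

    # Route 2: closed form from the F statistics.
    # F1 = number of species seen in exactly one sample.
    f1 = np.count_nonzero(incidence.sum(axis=1) == 1)
    s_j1_closed = s_obs + (t - 1) / t * f1

    return s_j1_resampled, s_j1_closed

# Illustrative data: 100 species, 10 samples, beta-distributed detection probabilities.
rng = np.random.default_rng(1)
incidence = rng.random((100, 10)) < rng.beta(0.5, 3.0, 100)[:, None]
print(first_order_jackknife(incidence))  # the two routes return the same value
```

Higher-order jackknives (k = 2, 3, ...) delete larger subsets and can reduce the bias further, at the cost of increased variance.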