Encyclopedia of Biodiversity (7-volume set)


Measurement and Analysis of Biodiversity

…data from communities in which the parameters are reasonably well known from separate, intensive sampling. Finally, there are many properties of ecological communities that are likely to influence estimator performance. These include factors such as species richness s, various properties of the species abundance distribution, intraspecific clumping, and interspecific associations. Each of these can vary widely. Simulated data allow us to explore large parts of this parameter space. More important, with simulated data we can cover the range corresponding to anticipated field conditions. This allows us to test and select estimators before taking them into the field. Nevertheless, only real data can tell us which parts of the vast parameter space are relevant. Thus, we cannot ignore either approach: the biological reality of actual data is complemented by the versatility of simulated data.

Some progress has been made in the evaluation of estimation methods with data. Not every such study can be detailed here, and not enough work has been done to allow generalizations to be made. Chazdon et al. (1998) provide one useful example of an approach combining both real and simulated data. They examined the performance of eight estimators using young woody regeneration in six tropical forest sites. Noticing that patchiness (intraspecific clumping) varied across sites, they then resampled the data sets to create simulated data sets with a range of patchiness levels. Testing estimators on these simulated data sets revealed the effect of patchiness on the performance of various estimators.

Estimator Evaluation Criteria

Clearly, one comprehensive study of the performance of all estimators over all possible data is impossible. Evaluation of estimators, then, will necessarily be done by many investigators with different communities and over different ranges of simulated data. To facilitate this cooperative approach, we must identify a common set of evaluation criteria that can be used to measure the performance of an estimator on a data set. These criteria are slightly different from the theoretical properties of estimators. Numerous such criteria have been put forth; although they differ in calculation methods, they are often variants of a few common themes, which are listed as follows:

Bias: As defined previously, bias measures deviation from the true value. Common measures of bias include E[Sest] − s, or deviation from s, and (E[Sest] − s)/s, or relative deviation from s. In the evaluation of estimators with data, bias can be calculated as a mean across replicate data sets or can be assessed with increasing sample size within a data set.

Variance: As discussed previously, variance measures uncertainty in an estimate. If we have high variance, we can have little confidence that a single observation is a good indication of the mean. Variance may be calculated numerically over replicates, or an analytical estimator of variance may be used. These measures can also be expressed as confidence intervals and may be used in hypothesis testing when comparing estimated richnesses. Note that this measure does not depend on s, which is considered by other criteria.

Sample size independence: This measures the rate of convergence of an estimator's mean to its asymptotic value. An estimator that rapidly approaches its asymptotic value requires less sampling effort to obtain an equally good estimate; such an estimator is relatively sample size independent. This measure, too, does not depend on s.

Note that the previous three criteria may be calculated over multiple realizations of the same underlying parameters (e.g., the capture probabilities qi). The following two may only be considered among multiple data sets in which these parameters vary.

Correlation with S: Since s is a fixed parameter, we introduce S as a random variable representing the true richness of any of a number of data sets, between which S and the capture probabilities qi may vary. If the correlation between an estimator Sest and true richness S across these data sets is high, Sest may be useful in comparing species richness between sites or census occasions (Palmer, 1990). Note that correlation with S does not require that an estimator have low bias. Rather, correlation reflects a relative deviation of Sest from S that is somewhat constant across data sets.

Robustness: A robust estimator is one for which performance (as measured by any of the previous criteria) changes little across a range of data sets.

It is important to note that the relative importance of each of these criteria depends on the problem of interest. In measuring absolute species richness, minimizing bias may be a priority. In comparisons across time or space, variance and correlation with S become more important, whereas bias may become more tolerable. A final criterion represents a minimum requirement of sorts for estimator performance:

Beating Sobs: An estimator should perform better than Sobs, the number of species observed. Most estimators should have lower bias than Sobs most of the time. However, there are trade-offs; for example, the variance of some estimators may be higher than that of Sobs.

Selecting an Estimator

Although it is tempting to think that an estimator may exist which is robust to all census conditions, such an estimator is unlikely given the difficulty of the problem. Therefore, estimators should be chosen based on the context of the problem of interest. To select the estimators that are most likely to perform well on a new data set, one might use the following approach. First, consider the anticipated properties of a data set, based on knowledge of the system's biology, in relation to the modeling assumptions behind estimators. Select those estimators whose assumptions are best met by the expected data. If possible, select or modify the sampling strategy so that assumptions will be better met and/or more estimators may be used. Then, test estimators using data (simulated or previously existing) similar to that anticipated. The previous approach should indicate (i) which estimators might perform best on new data and (ii) whether these estimators perform well enough to meet the needs of the problem at hand. Although evaluating a number of estimators on multiple replicate data sets may seem a daunting task, computer programs (e.g., EstimateS, R. K. Colwell, http://viceroy.eeb.uconn.edu/estimates; WS2M, W. R. Turner et al., unpublished) may be used to automate many of these calculations. However, use of such programs should complement, not replace, thoughtful
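The bias, variance, and "beating Sobs" criteria can be computed concretely on simulated replicate data sets. The following is a minimal Python sketch, not part of the original text: it uses Chao1 (Sobs + f1^2/(2 f2), a standard richness estimator of the kind such programs compute) purely as an example, and the geometric-series abundance model, true richness of 100, sample size, and random seed are all arbitrary assumptions chosen for illustration.

```python
import random
from statistics import mean, pvariance

random.seed(42)

S_TRUE = 100          # true species richness s (assumed for this simulation)
N_INDIVIDUALS = 500   # individuals drawn per replicate census
N_REPS = 200          # number of replicate data sets

# Unequal capture probabilities: a simple geometric-series abundance model.
weights = [0.9 ** i for i in range(S_TRUE)]

def census(n):
    """Draw n individuals at random and return per-species counts."""
    counts = {}
    for sp in random.choices(range(S_TRUE), weights=weights, k=n):
        counts[sp] = counts.get(sp, 0) + 1
    return counts

def chao1(counts):
    """Chao1: Sobs + f1^2 / (2 f2), where f1 and f2 are the numbers of
    species seen exactly once and exactly twice; falls back to a
    bias-corrected form when f2 = 0."""
    s_obs = len(counts)
    f1 = sum(1 for c in counts.values() if c == 1)
    f2 = sum(1 for c in counts.values() if c == 2)
    return s_obs + (f1 * f1) / (2 * f2) if f2 > 0 else s_obs + f1 * (f1 - 1) / 2

sobs_vals, chao_vals = [], []
for _ in range(N_REPS):
    c = census(N_INDIVIDUALS)
    sobs_vals.append(len(c))
    chao_vals.append(chao1(c))

for name, vals in [("Sobs", sobs_vals), ("Chao1", chao_vals)]:
    bias = mean(vals) - S_TRUE       # E[Sest] - s
    rel_bias = bias / S_TRUE         # (E[Sest] - s) / s
    print(f"{name}: bias={bias:.1f} rel_bias={rel_bias:.2f} "
          f"var={pvariance(vals):.1f}")
```

In runs of this kind, the estimator typically reduces the negative bias of Sobs at the cost of higher variance across replicates, which is exactly the trade-off noted under "Beating Sobs" above.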

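The "correlation with S" criterion can be sketched the same way, by letting true richness vary across simulated "sites" and correlating the estimates against it. Again this is an illustrative assumption-laden sketch, not a method from the text: the harmonic abundance model, the range of true richness values, the Chao1 example estimator, and the seed are all arbitrary choices.

```python
import random
from statistics import mean

random.seed(1)

def census(s_true, n):
    """Draw n individuals from a community of s_true species with
    harmonic-series capture probabilities; return per-species counts."""
    weights = [1.0 / (i + 1) for i in range(s_true)]
    counts = {}
    for sp in random.choices(range(s_true), weights=weights, k=n):
        counts[sp] = counts.get(sp, 0) + 1
    return counts

def chao1(counts):
    """Chao1 example estimator: Sobs + f1^2 / (2 f2)."""
    s_obs = len(counts)
    f1 = sum(1 for c in counts.values() if c == 1)
    f2 = sum(1 for c in counts.values() if c == 2)
    return s_obs + (f1 * f1) / (2 * f2) if f2 > 0 else s_obs + f1 * (f1 - 1) / 2

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Here S is a random variable: true richness varies between data sets.
true_S = [random.randint(40, 160) for _ in range(30)]
est = [chao1(census(s, 400)) for s in true_S]
print(f"correlation with S: {pearson(true_S, est):.2f}")
```

A strong positive correlation here would suggest the estimator is useful for comparing richness between sites even if its bias is nonzero, which is the point made above about Palmer's criterion.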