CHAPTER 19

Using Statistics in Health and Environmental Risk Assessments

Michael E. Ginevan

CONTENTS

I. Introduction
II. Statistical Thinking and Regulatory Guidance
    A. Risk Assessment
        1. The Hazard Index
        2. Assessment of Chemical Cancer Risk
    B. Risk Assessment of Radionuclides
    C. Evaluation of Exposure
    D. Data Quality Objectives (DQOs)
        1. The Data Quality Assessment Process
III. Evaluation of the Utility of Environmental Sampling for Health and Environmental Risk Assessments
    A. Graphical Methods
    B. Distributional Fitting and Other Hypothesis Testing
    C. Nondetects
    D. Sample Support
    E. Does Contamination Exceed Background?
IV. Estimation of Relevant Exposure: Data Use and Mental Models
V. Finding Out What Is Important: A Checklist
VI. Tools
References

I. INTRODUCTION

This chapter reviews the role that statistical thinking and methodology should play in the conduct of health and environmental risk assessments. What do I mean when I refer to "statistics"? The Random House Unabridged Dictionary defines statistics as "the science that deals with the collection, classification, analysis, and interpretation of numerical facts or data, and that, by use of mathematical theories of probability, imposes order and regularity on aggregates of more or less disparate elements." This has a simple translation: statistics finds ways of coping with uncertain, incomplete, and otherwise not wholly satisfactory data. Therefore, if you know the answer exactly, you don't need statistics. How might this methodology apply to planning, generating, and evaluating risk assessment reports?
This question can best be approached by considering the four components of the risk assessment process described in earlier chapters of this book. The first step of the assessment is "hazard identification," which reviews the inventory of materials present in the environment and uses information from toxicology or epidemiology studies to determine which of these might pose a risk to human health and/or the environment. Statistical principles play important roles in epidemiology and in both environmental and laboratory toxicology studies, but the form of these studies, and the role of statistics in them, is so diverse that a meaningful discussion is beyond the scope of this chapter. Many hazards are quite well characterized (e.g., there is little debate that high levels of environmental lead are hazardous in a variety of contexts), so the identification can be taken as a given. However, in some cases the hazard identification of a material may rest on one or two studies that are of dubious validity. If the risk assessment is driven by such materials (we will discuss how to determine the factors that are of greatest importance to the estimation of risk), it is often worthwhile to reconsider the underlying literature to determine how valid the studies underlying the hazard identification actually are.

The next step, toxicity assessment, requires development of a dose-response function. A dose-response function provides the risk coefficients used to translate exposure into risk. In essence it answers the question, "Given that substance X is bad, how rapidly do its effects increase with increasing dose?" Many such coefficients are specified by regulatory agencies and will not be readily open to reevaluation. However, in our discussion we will consider how a dose-response function is developed. We will also treat the problem that arises because many "approved" dose-response coefficients are either 95% statistical upper bounds, or incorporate "safety factors" of between 100 and 10,000. That is, if one is assessing the risk of one material, an upper bound or safety factor estimate is arguably appropriate because such assessments should err on the side of safety. However, when, as is the case for hazardous waste sites, many risk coefficients are used, many materials are relevant to determining overall site risk. It has been observed that if one sums 95% upper bounds for 10 dose-response coefficients, the probability of all of the coefficients being at or above their 95% upper bound is 0.05^10, or about 1 × 10^-13. As it turns out, this calculation, though correct, is not entirely relevant to the question of the conservatism inherent in a sum of upper bounds. We will discuss some approaches to getting a better answer to this problem.

The "exposure assessment" step frames the question of what actual exposures are likely to be. Estimation of exposure is what drives (or should drive) environmental sampling efforts and subsequent exposure assessment modeling. Both areas have substantial statistical content and will be treated in some detail. Important topics include the pattern of environmental sampling, and why many "engineering judgement" or "compliance monitoring" samples may be nearly useless in terms of assessing actual exposures; the importance of having a model of human (or animal) behavior as the basis for estimating actual exposure; and the necessity of understanding the origin of your environmental contamination numbers.
The final step, risk characterization, is the product of the estimated exposure and the risk coefficients adopted. In practice both quantities may have substantial uncertainties. We will examine the source of such uncertainties, and the use of analytic and Monte Carlo methods for obtaining an overview of the uncertainties in the final risk estimates.

II. STATISTICAL THINKING AND REGULATORY GUIDANCE

There is a lot of good (and some not so good) statistical advice to be found in regulatory guidance documents. This section will review three pertinent areas: risk assessment (U.S. EPA, 1989), data quality objectives (U.S. EPA, 1993), and data quality assessment (U.S. EPA, 1996).

A. Risk Assessment

The Risk Assessment Guidance for Superfund (RAGS) document codifies many of the standard procedures used in human health risk assessment. It describes three distinct subprocesses: risk assessment of nonradioactive, noncarcinogenic, chemical toxicants using a quantity referred to as the Hazard Index (HI); risk assessment of chemical carcinogens using q1* values (also termed "slope factors" or "cancer potency factors"); and risk assessment of radioactive materials (radionuclides).

1. The Hazard Index

The HI is given by:

    HI = Σ (Di ÷ RfDi), summed over the N toxicants, i = 1, ..., N    (1)

where Di = dose received from the ith toxicant; RfDi = reference dose for the ith toxicant.

The origin of the RfD deserves some consideration. It is generally taken from a single animal or, rarely, human study. The starting point is the dose at which no biological response was observed (the no observed effect level or NOEL), the lowest dose level at which an effect was observed (the lowest observed effect level or LOEL), or either the dose predicted to yield a response in 10% of the individuals (the ED10) or a 95% lower bound on this dose (the LED10). Once a starting dose has been determined, various safety factors of 10 are applied. That is, the value is usually divided by a factor of 10 to reflect uncertainties in animal to human extrapolation, and by a second factor of 10 to reflect interindividual human variability. Additional factors of 10 may be invoked if the starting dose is an LOEL, rather than an NOEL, if the study from which the dose number was derived was a subchronic, as opposed to a chronic, bioassay, and if the person developing the RfD had reservations about the quality of the study from which the data originated. Thus most RfDs are 100- to 1000-fold below a dose which caused no or minimal effect, and reflect substantial regulatory conservatism.
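To make Equation (1) concrete, the minimal sketch below (Python) sums dose-to-RfD ratios for a handful of toxicants sharing an exposure route; the toxicant names, doses, and reference doses are invented for illustration and are not taken from any regulatory source.

```python
# Minimal sketch of the Hazard Index of Equation (1): HI = sum(D_i / RfD_i).
# All doses and reference doses below are hypothetical, in mg/kg-day.
chronic_ingestion = {
    # toxicant: (estimated dose, reference dose)
    "toxicant_A": (0.002, 0.020),
    "toxicant_B": (0.010, 0.050),
    "toxicant_C": (0.001, 0.004),
}

hazard_index = sum(dose / rfd for dose, rfd in chronic_ingestion.values())
print(f"Chronic ingestion HI = {hazard_index:.2f}")  # 0.10 + 0.20 + 0.25 = 0.55
```

In practice a separate sum of this kind would be formed for each exposure route, exposure duration, and mode of action, as discussed next.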
The site may be considered safe if the HI is less than 1. Actually, following the approach in RAGS, many HIs must usually be defined for the same site. For example, there may be HIs for chronic (long-term or lifetime) exposure and subchronic exposure (shorter term than chronic; usually weeks or months); inhalation HIs, ingestion HIs, and HIs for developmental toxicants; or HIs broken out by mode of action of the toxicants involved (e.g., all liver toxicants). It should be stressed that, despite this variety, the HI is not a quantitative measure of risk; it is a sum of ratios of doses to RfDs, where an RfD may be loosely defined as a dose at which we are quite sure nothing bad will happen. Three elements are lacking from the HI: a quantitative description of the degree of conservatism inherent in a given RfD, a definition of what "bad" is, and some notion of how rapidly things get worse as the RfD is exceeded (a slope factor). For example, an HI above 1 might mean that an exposed individual would suffer a small chance of a small depression in cholinesterase activity (an event of dubious clinical significance), or it might mean that an exposed individual could experience acute liver toxicity and possibly death. Likewise, while HI values less than 1 may be taken as safe, it does not follow that a site with an HI of 0.3 is safer than a site with an HI of 0.5.

From a statistical perspective there is not much to say. The HI is intended as a screening index, not a quantitative statement of risk. Moreover, the diversity of the origins of the RfDs, and the arbitrary degrees of conservatism inherent in their derivation, make it futile to discuss "distributional" properties of the HI. One can, however, make some quantitative statements. First, if one has a report with a single HI for all toxicants at a site, it is almost certainly too large, and its derivation is contrary to regulatory guidance. That is, as noted above, RAGS clearly states that HIs should be calculated separately for toxicants with different modes of action and differing exposure scenarios. A second area of concern, which also applies to cancer risk assessment, is the accuracy of the exposure numbers used to derive the HI. These statistical issues will be discussed in detail in subsequent sections.

2. Assessment of Chemical Cancer Risk

At first look, the determination of cancer risk for chemical carcinogens, CRC, looks much like the HI calculation:

    CRC = Σ (Di × q1*i), summed over the N carcinogens, i = 1, ..., N    (2)

where Di = the dose or exposure from the ith carcinogen of interest; q1*i = the cancer potency for that carcinogen. However, this is an actual quantitative expression of risk, with units given in lifetime cancers per exposed individual. Thus, any calculation of this type has a common endpoint. An important feature of this calculation is that each q1*i is an upper bound on the risk calculated on the basis of some model (usually the linearized multistage model of carcinogenesis).

The derivation of these upper bounds deserves discussion. The starting point is usually an animal study, where several groups of animals are exposed to different doses of a carcinogen, and a separate control group of animals is left unexposed. The cancer response in these groups is fit with a dose-response model, and the resulting dose-response model is used to develop a linear 95% upper bound on dose-response, referred to as the cancer potency factor, or q1* value. Thus, one statistical issue is that Equation (2) involves the summing of possibly many upper bounds, which seems to many to be excessively conservative.

One approach to determining the conservatism inherent in Equation (2) involves Monte Carlo simulation methods. These methods first assume that the estimate of cancer potency, q1*, follows a log-normal probability density (Putzrath and Ginevan, 1991). The logarithmic mean (µ) is calculated as:

    µ = ln(qmle)    (3)

where qmle = the maximum likelihood or "best" estimate of q1*. The logarithmic standard deviation (σ) can then be estimated as:

    σ = [ln(q1*) − ln(qmle)] ÷ Z0.95    (4)

where Z0.95 = 1.645 (the normal score associated with an upper 95% bound on q1). After µ and σ have been determined for each carcinogen of interest, a large number (500 – 1000) of realizations of Equation (2) are generated using randomly generated q1 values, and the 95th percentile of this empirical distribution can be determined.
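The sketch below illustrates this Monte Carlo approach to Equations (2) through (4); the doses, maximum-likelihood potencies, and q1* upper bounds are hypothetical, and the code is only one way such a simulation might be written.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical carcinogens: lifetime average daily dose (mg/kg-day),
# maximum-likelihood potency estimate, and 95% upper-bound potency q1*.
chemicals = [
    # (dose, q_mle, q1_star)
    (1e-4, 0.8, 2.0),
    (5e-5, 1.5, 4.5),
    (2e-4, 0.3, 1.1),
]

Z95 = 1.645
n_draws = 1000

risks = np.zeros(n_draws)
for dose, q_mle, q1_star in chemicals:
    mu = np.log(q_mle)                       # Equation (3)
    sigma = (np.log(q1_star) - mu) / Z95     # Equation (4)
    q = rng.lognormal(mean=mu, sigma=sigma, size=n_draws)
    risks += dose * q                        # random realizations of Equation (2)

# Equation (2) evaluated with the q1* upper bounds, for comparison.
upper_bound_sum = sum(dose * q1 for dose, _, q1 in chemicals)
mc_95th = np.percentile(risks, 95)

print(f"Sum of upper bounds:              {upper_bound_sum:.2e}")
print(f"95th percentile of Monte Carlo sum: {mc_95th:.2e}")
```

With only a few chemicals the two answers tend to be close, which is the point made in the text.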
Use of this approach can show that the supposed conservatism is less than one might think, in that the result of Equation (2) using the q1* values is rarely more than twice as large as the 95th percentile of the Monte Carlo empirical distribution. Still, Monte Carlo calculations like those described may be worthwhile when the number of carcinogens considered in Equation (2) is large; larger differences are possible when the number of carcinogens is greater than 20. A more important aspect of Equation (2) is that Di is the lifetime average daily dose for the carcinogen in question. Thus Di must be a dose estimate derived from very long-term average exposure. This brings us again to the importance of exposure estimation to the entire risk assessment process.

B. Risk Assessment of Radionuclides

The situation for radiation is somewhat different from the situation for chemical carcinogens. First, there is an extensive literature on the epidemiology of humans exposed to radiation; thus cancer risk coefficients are well known and relatively precise. Second, the physical means by which radiation damages cells are well known, and precise dosimetric calculations are nearly always possible. Finally, radiation is relatively easy to measure in the environment, and actual concentrations can be determined unambiguously. It should also be mentioned that, because of the superior database, radiation cancer risk coefficients are usually based on best estimates rather than upper bounds.

C. Evaluation of Exposure

A general theme running through the RAGS document is that exposure assessments and, thus, doses should be based on values which are conservative, but not too conservative. Yet the question of uncertainty is treated in a way which would be surprising to most statisticians: "Highly quantitative statistical uncertainty analysis is usually not practical or necessary for Superfund site risk assessments for a number of reasons, not the least of which are the resource requirements to collect and analyze site data in such a way that results can be presented as valid probability distributions." It seems clear that this is not so, and given that cleanup costs are often in the tens of millions of dollars, it is hard to see why resources to do the job right would not be forthcoming. Moreover, the two U.S. EPA documents which outline the DQO and Data Quality Assessment processes give careful guidance and recommend many good statistical tools which can be used in assessing data needs and data quality, and which are directly relevant to the issue of assessing environmental contamination, and hence the potential for exposure of human beings or other biota. This contrast is interesting because the DQO and Data Quality Assessment documents are much more recent than the RAGS document, and reflect the evolving position of U.S. EPA in the area of desirable levels of statistical sophistication. In terms of regulatory risk assessment, we are moving from a qualitative to a quantitative world and from simple deterministic models to more sophisticated probabilistic ones.

D. Data Quality Objectives (DQOs)

The DQO process as defined by U.S. EPA is useful for any data collection, not just the collection of data for Superfund sites (see Chapter 11). This process has seven steps:

1. State the problem: What sort of environmental contamination is it that you want to characterize?
One might be interested in gas phase contaminants (e.g., radon, volatile organics), particulates (e.g., asbestos), or soil contamination. One might be concerned with exposure from inhalation (e.g., radon, asbestos), dermal contact (e.g., pesticides), or soil ingestion (metals). Likewise the exact exposure scenario will affect data needs.

2. Identify the decision: What sort of question needs to be answered? Do you want to know about long-term average exposures (carcinogens), short-term maxima (neurotoxicants), or episodic exposure resulting from particular human activities? What is the exact form of the question you want answered?

3. Identify the inputs to the decision: How will you use the data? A hypothesis testing exercise might have different data requirements from a modeling study.

4. Define the study boundaries: Where and when should the data apply? Are you interested in current risks, or a particular site, or risks that may evolve over time (e.g., groundwater)?

5. Develop a decision rule: You want to be able to say that, given these data, the exposure of interest is a given quantity, acceptable, or unacceptable; or to precisely define the extent of remediation required.

6. Specify limits on decision errors: How precise do exposure estimates need to be? What is the "loss" of calling an acceptable exposure unacceptable, or vice versa? If you are trying to infer dose-response, will your study lose an unacceptable amount of power because of imprecise exposure data?

7. Optimize the design for obtaining data: Define the most resource-effective sampling and analysis design for generating the data needed to satisfy the DQOs of the project.

The purpose of this seven-step process is to identify the characteristics of the data required, and to arrive at a strategy for collection. It should be noted that the interaction between Step 7 and Steps 1 – 6 is iterative. That is, if one defines DQOs that exceed one's resources, one must rethink the question to identify DQOs with more reasonable resource requirements. In the extreme case, one might be forced to abandon a particular study because meaningful data cannot be collected for reasonable cost.

1. The Data Quality Assessment Process

This process assumes that you already have environmental contamination data and want to determine whether or not these data are adequate to the task at hand, i.e., assessing the exposures and thus risks of a particular site or activity. It is nearly the same as the process for defining a data collection effort, except that here one must identify DQOs that can be met by the data at hand. That is, the whole process of data quality assessment is aimed at defining whether or not a set of data meets a particular set of DQOs, or alternatively defining what set of DQOs a given data set will support.

III. EVALUATION OF THE UTILITY OF ENVIRONMENTAL SAMPLING FOR HEALTH AND ENVIRONMENTAL RISK ASSESSMENTS

As noted above, exposure assessment is the factor which most often drives the uncertainty in a risk assessment, and environmental monitoring data are the factors which most commonly drive the exposure assessment. Following the DQO process, we need to state the problem, which is to characterize the risk a given site or activity might pose to human health and the environment. The ultimate decision (DQO, Step 2) we want to make is whether environmental contamination poses an unacceptable risk.
We must also specify the model we will use to determine whether or not unacceptable risks are present, because the form of the model will determine the inputs required (DQO, Step 3). We must also specify where and when the decision applies; that is, what is the extent of the area of interest, and what time frame applies to the decision of interest (DQO, Step 4)? Having defined the parameters of our decision, we must then determine what overall scale will determine whether risks are unacceptable (DQO, Step 5), and how sure we want to be about our decision (DQO, Step 6). Finally, armed with a clear description of what we want to accomplish, we can either plan our environmental sampling efforts, or evaluate the data at hand (DQO/DQA, Step 7). This, of course, is not how things usually happen, but it is good to have an ideal as a yardstick.

Perhaps the most frequent flaw in environmental sampling efforts is the substitution of "compliance sampling" for "characterization sampling." Compliance sampling is embodied by the "engineering-judgement sample," also described as the "sample-the-dirty-spots" strategy. This approach focuses sampling efforts on those areas assumed (a priori) most likely to be contaminated. It evolved from disciplines like industrial hygiene, where the goal is worker protection. There, if one samples all high exposure areas and these are found to be in compliance, exposures from the process may be assumed to be acceptably low. For a well-defined process, this strategy is excellent, but for most environmental contamination problems the purpose of the sampling effort is to determine the nature and extent of contamination. Thus, it is a characterization problem, not a compliance monitoring problem.

A. Graphical Methods

Figure 1 shows a pseudo 3-D ball and stick plot of contamination by "bad stuff" at a hypothetical hazardous waste site. There are four quadrants, each with 150 potential samples. Quadrant 1 is uncontaminated, quadrant 2 is lightly contaminated, quadrant 3 is moderately contaminated, and quadrant 4 is heavily contaminated. What would happen if we followed a compliance monitoring approach and sampled almost exclusively in quadrant 4? Clearly we would conclude that the site is heavily contaminated, but is this the correct answer? An evenly distributed sample would give a better overview of the extent of contamination and would allow a more reasonable risk assessment.
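The contrast between a "dirty-spots" sample and a simple random sample can be illustrated with a small simulation. The quadrant concentrations below are invented (log-normal, with geometric means rising roughly tenfold from quadrant to quadrant); they stand in for, but are not, the data plotted in Figure 1.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical site: four quadrants of 150 locations each, with geometric means
# rising roughly tenfold from quadrant 1 to quadrant 4 (values are invented).
quadrants = [rng.lognormal(mean=np.log(gm), sigma=1.0, size=150)
             for gm in (0.1, 1.0, 10.0, 100.0)]
site = np.concatenate(quadrants)

dirty_spots = rng.choice(quadrants[3], size=40, replace=False)  # sample only quadrant 4
random_sample = rng.choice(site, size=40, replace=False)        # simple random sample

print(f"True site mean:          {site.mean():8.1f}")
print(f"Dirty-spots estimate:    {dirty_spots.mean():8.1f}")
print(f"Random-sample estimate:  {random_sample.mean():8.1f}")
```

The dirty-spots estimate characterizes quadrant 4, not the site.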
Plots like Figure 1 can give a very good idea of the spatial distribution of the samples taken at a site, and can indicate whether a given sample is unbiased and representative with respect to defining environmental contamination.

[Figure 1. This hypothetical hazardous waste site has four areas. Areas 1 and 2 have little contamination, while area 3 has moderate contamination and area 4 is heavily contaminated.]

One should also be interested in the distribution of contamination in the sample used to characterize the site. The box and whisker plot is a graphical aid that is useful in this context (see Figure 2). The line in the center of Figure 2 marks the 50th percentile or median of the data. The lower and upper "hinges" appear at the 25th and 75th percentiles of the data. The "whiskers" connect the upper and lower hinges to the largest and smallest data points within 1.5 times the distance between the hinges, termed the interquartile range (IQ), of their respective hinges. Outside points are between the hinge plus (upper) or minus (lower) 1.5 times the IQ and 3 times the IQ. Far outside points are beyond the hinges plus or minus 3 times the IQ. Outside points are atypical of the data and may represent statistical outliers.

[Figure 2. A sample box plot. The median is the 50% point of the data; the upper hinge (UH) is the 75% point of the data; the lower hinge (LH) is the 25% point of the data. The upper whisker extends from the UH to the largest data value less than the UH plus 1.5 times the difference between the UH and the LH [the interquartile range (IQ)]; the lower whisker extends from the LH to the smallest data value greater than the LH minus 1.5 times the IQ. Outside points are either between the UH plus 1.5 IQ and the UH plus 3 IQ, or between the LH minus 1.5 IQ and the LH minus 3 IQ. Far outside points are beyond the UH plus 3 IQ or the LH minus 3 IQ.]
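The hinge and whisker definitions above translate directly into code; the sketch below is a minimal illustration using simulated log-transformed concentrations rather than the data of Figures 2 and 3.

```python
import numpy as np

def box_summary(x):
    """Median, hinges, whiskers, and outside/far-outside points, following the
    hinge and whisker definitions used in the text (Tukey-style)."""
    x = np.sort(np.asarray(x, dtype=float))
    lh, median, uh = np.percentile(x, [25, 50, 75])
    iq = uh - lh
    lower_whisker = x[x >= lh - 1.5 * iq].min()
    upper_whisker = x[x <= uh + 1.5 * iq].max()
    outside = x[((x > uh + 1.5 * iq) & (x <= uh + 3 * iq)) |
                ((x < lh - 1.5 * iq) & (x >= lh - 3 * iq))]
    far_outside = x[(x > uh + 3 * iq) | (x < lh - 3 * iq)]
    return median, lh, uh, lower_whisker, upper_whisker, outside, far_outside

# Example with hypothetical log-transformed concentrations for one quadrant.
rng = np.random.default_rng(3)
log_conc = rng.normal(loc=2.3, scale=0.8, size=150)
med, lh, uh, lw, uw, out, far = box_summary(log_conc)
print(f"median={med:.2f}  hinges=({lh:.2f}, {uh:.2f})  whiskers=({lw:.2f}, {uw:.2f})")
print(f"outside points: {len(out)}, far outside points: {len(far)}")
```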
Figure 3 shows box plots for the log-transformed data from the four quadrants of our hypothetical site. It is easy to see that, as one goes from quadrant 1 to quadrant 4, the data for each quadrant have a reasonably symmetrical distribution and a median about 10-fold greater than the median for the preceding quadrant. While box plots are a simple way to convey the central tendency and form of a set of data, one can use even simpler graphics. See, for example, the dot plot in Figure 4. This plot was generated by sorting the data into "bins" of specified width (here about 0.2) and plotting the points in a bin as a stack of dots (hence the name dot plot). Dot plots give a general idea of the shape and spread of a set of data and they are very simple to interpret.

Aside from the spatial structure of the data and its general shape and central tendency, we are often interested in the temporal structure of a data set. Pesticide risk studies, for example, frequently involve sets of data collected on the day of pesticide application and at several time periods postapplication. Figure 5 illustrates such a temporal set of environmental measurements: a set of log-transformed pesticide residue measurements plotted against the time since application. The plot shows clearly that residue measurements diminish over time and that the rate of decline in the logarithm of residue levels is well approximated by a straight line. If we fit a linear regression to such data, the equation is of the form:

    log(Ct) = A − B • t    (5)

where Ct = the concentration at time t; A and B = fitted regression coefficients. If we rearrange (5) we get:

    Ct = exp(A − B • t)    (6)

[Figure 3. A box plot of contamination levels in areas 1 – 4. Note that the median contamination level increases by about an order of magnitude as one moves from area to area.]

[Figure 4. An example dot plot.]

[Figure 5. A scatter plot showing exponential decay.]

That is, the pesticide in question is following an exponential decay process in measured residue values, which in turn suggests that a constant fraction of the material is being broken down per unit time. Thus, such plots can give an idea of the magnitude of concentration change over time and can also suggest functional forms and even general processes responsible for concentration changes.
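A minimal sketch of the fit described by Equations (5) and (6); the residue values below are simulated with an assumed 16-day half-life and an assumed noise level, rather than taken from an actual study.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated pesticide residues with an assumed 16-day half-life plus noise.
days = np.arange(0, 60, 5, dtype=float)
true_halflife = 16.0
residues = (50.0 * np.exp(-np.log(2) / true_halflife * days)
            * rng.lognormal(0.0, 0.15, size=days.size))

# Fit Equation (5): log(C_t) = A - B * t, by ordinary least squares.
slope, A = np.polyfit(days, np.log(residues), 1)   # slope = -B
B = -slope
halflife = np.log(2) / B                           # back out the half-life via Equation (6)
print(f"Fitted A = {A:.2f}, B = {B:.3f} per day, half-life ≈ {halflife:.1f} days")
```

The recovered half-life should land close to the assumed 16 days, which is the kind of check such a regression supports.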
We have focused, up to this point, on the use of graphical methods to determine the general form of the distribution of environmental samples. Graphical methods are also useful to gain insight into what "statistical" distribution, such as the normal or log-normal distribution, approximates the observed data distribution. Figure 6 shows a rankit, or normal-scores, plot of 100 random numbers from a standard normal (mean zero, variance one) distribution. A plot like this is constructed by plotting the values of the data against their expected normal scores or "rankits." The expected normal scores are the Z scores predicted from an observation's rank and the total number of observations. For example, the largest value in a sample of 50 has an expected normal score of about 2.2. If, as is the case here, the plot tends to fall on a straight line, it provides evidence that the data fit a normal distribution. Note also that rankit plots can be constructed using the logarithms of the data. If such transformed data produce a linear rankit plot, it suggests that the data fit a log-normal distribution.

B. Distributional Fitting and Other Hypothesis Testing

Figure 6 also illustrates a graphical method of evaluating fit to a normal or log-normal distribution. The Wilk-Shapiro statistic is a goodness-of-fit statistic often included with a normal quantile plot. It represents the correlation between the observed data and their expected normal scores.

[Figure 6. A normal scores plot of 100 random normal numbers. Note that the points lie approximately on a straight line.]

Note that, since the expected value of this statistic is much greater than zero, the usual tests for a correlation coefficient cannot be used to assess its significance. Use of this statistic for assessing normality is discussed in Gilbert (1987). There are a variety of alternative ways to test for fit to a normal distribution (remember that for a log-normal the same tests apply to the log-transformed data). One good method is to standardize the data by subtracting the sample mean and dividing by the sample standard deviation, and then to apply the Lilliefors test for normality. This test is a modification of the Kolmogorov–Smirnov goodness-of-fit test which takes into account the fact that the data have been standardized (Wilkinson et al., 1994).

A larger question is whether or not you should care what distribution a set of measurements follows. It is often helpful to know whether or not a set of measurements appears consistent with a log-normal or normal distribution and, under certain circumstances, it may be helpful to know if measurements fit some other distribution suggested by an a priori hypothesis. In the latter case, one of the best goodness-of-fit tests is the Kolmogorov–Smirnov (KS) test discussed in Conover (1980). However, checking goodness-of-fit to a variety of esoteric distributions in order to say that the data are most consistent with, e.g., a Laplace distribution, seems pointless. If you have a limited amount of data, a statistical "fit" only says that one cannot reject the candidate distribution. If you have a large amount of data, the "fit" may offer fairly strong evidence that the data are in fact from a particular candidate distribution, but is this useful information? That is, you can use normal theory statistics, or you can use nonparametric statistics, but in general other distributions are not immediately helpful for the development of statistical tests or confidence intervals. You might argue that a known distribution would be an advantage for a Monte Carlo modeling exercise. However, if data are sufficient to strongly suggest a particular distribution, they can also be used to construct a nonparametric density function (Silverman, 1986). Such nonparametric density estimates have the added advantage that, given adequate data, they are always appropriate.

C. Nondetects

Another distributional problem concerns nondetects. These occur when the analytical method used for a particular substance cannot distinguish the measured concentration from zero. Data sets which contain nondetects are said to be "left-censored" because all one knows about the low values (on the left side of the axis) is that they are less than the detection limit. If the number of nondetects is low, the easiest approach is to simply assume that nondetects are worth one-half the detection limit. This assumes that the distribution of nondetects is uniform between the detection limit and zero. To be more conservative, you could also assume that nondetects follow a triangular distribution between zero and the detection limit and, thus, assign a value of two-thirds of the detection limit to each nondetect (see Figure 7). In practice, these approaches work acceptably well if the fraction of nondetect values is less than about 10%.

Where larger numbers of nondetects occur, you can make use of the fact that observations from a normal distribution tend to fall on a straight line when plotted against their expected normal scores (see Figure 6). This is true even if some of the data are below the limit of detection. For example, if only 50 of the values in Figure 6 were above the detection limit, they would still tend to fall on a straight line when plotted against the Z-scores derived from their ranks. One can therefore calculate a linear regression of the form:

    C = A + B • Z-Score    (7)

where C = the measured concentration; A = an estimate of the mean; B = an estimate of the standard deviation; Z-Score = the expected normal score based on the rank order of the data (Gilbert, 1987; Helsel, 1990).
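The sketch below contrasts simple substitution at one-half the detection limit with the regression of Equation (7) applied on the log scale. The data and detection limit are simulated, and the Blom plotting positions used for the expected normal scores are one common choice, not the only one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical log-normal measurements with a detection limit; values below
# the limit are assumed to be reported only as "<DL".
true = rng.lognormal(mean=0.0, sigma=1.0, size=60)
dl = 0.8
detected = np.sort(true[true >= dl])
n_total = true.size
n_nondetect = n_total - detected.size

# Option 1: substitute one-half the detection limit for each nondetect.
substituted = np.concatenate([np.full(n_nondetect, dl / 2), detected])
print(f"Mean with DL/2 substitution: {substituted.mean():.2f}")

# Option 2: regression on order statistics, Equation (7), on the log scale.
# Ranks of the detected values within the full sample run from n_nondetect+1
# to n_total; Blom plotting positions give their expected normal scores.
ranks = np.arange(n_nondetect + 1, n_total + 1)
z = stats.norm.ppf((ranks - 0.375) / (n_total + 0.25))
B, A = np.polyfit(z, np.log(detected), 1)   # A estimates the log-mean, B the log-SD
ros_mean = np.exp(A + B**2 / 2)             # implied log-normal arithmetic mean
print(f"ROS estimates: log-mean {A:.2f}, log-SD {B:.2f}, implied mean {ros_mean:.2f}")
print(f"(True sample mean, known here only because the data are simulated: {true.mean():.2f})")
```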
Finally, what if nondetects are numerous and the detects do not appear to follow a log-normal or normal distribution? There is no really good answer. Nonparametric techniques might be used (Cleveland, 1993; Silverman, 1986) to try to get an idea of the likely shape of the nondetect part of the distribution. However, this is an area for further research.

[Figure 7. An illustration of two options for assigning a value to nondetects.]

D. Sample Support

A related point on environmental contamination measurements concerns the analytical "support" for a particular measurement. Support refers to the actual volume of material that a particular concentration measurement represents. For example, in soil sampling it is not uncommon for a grab sample to contain about 500 grams of material. In the laboratory, the chemist will select a 10 gram subsample, which will be extracted with a solvent. The actual analysis may involve a milliliter of the extract being put through the analyzer. Why is this important to risk assessment? Because "outliers" sometimes occur which are a result of this process. In one data set we encountered a sample that was listed as being 30,000 ppm (3%) lead. The lab was requested to reanalyze the sample, and they reported that the reading was accurate. Further investigation revealed that the "replicate" analysis consisted of injecting a second aliquot of extract into the mass spectrometer. The lab was then asked to replicate the measurement, starting with the original 800 gm sample. The result was somewhat lower: 0.8 ppm. Thus, a "heavily contaminated" area was found to be essentially clean. The moral of this story is that if a few really high numbers drive the risk assessment, concerns about sample support may be in order.
E. Does Contamination Exceed Background?

It is often assumed that health and environmental risk assessments should not concern themselves with background contamination levels. In practice, however, an intelligent approach to risk assessment requires two things: first, a definition of background contamination; second, a definition of exceedance. Definitions of background can range from the contamination found in a pristine environment to the contamination that would be present if the activity of interest had not occurred. In quantitative terms, any definition of background should specify some distribution and include a measure of central tendency and a measure of variability. For example, background measurements might be described by a geometric mean of 10 units together with a specified geometric standard deviation.

Having defined background, we must then define exceedance. A sample might be deemed to be background, for example, if it is not significantly different from background on the basis of a t-test. The definition might also require a sample size above some specific N (usually 10 – 30) to guarantee that findings of no difference were based on comparisons of sufficient statistical power. A definition of exceedance could also specify that any sample exceeds background if it is more than the background mean plus 2 background standard deviations. This definition, however, will result in the finding that about 2% of background samples are themselves above background. Discussions of useful statistical tools for approaching the background question are provided in the U.S. EPA DQA document and in Gilbert (1987).
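A minimal sketch of the two kinds of background comparison just described, using simulated log-scale data; the sample sizes and distributions are assumptions made for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical log-scale concentrations: background and a candidate exposure area.
background = rng.normal(loc=np.log(10.0), scale=0.5, size=25)
site_area = rng.normal(loc=np.log(14.0), scale=0.5, size=25)

# Comparison 1: two-sample t-test on log concentrations (area mean vs. background mean).
t_stat, p_value = stats.ttest_ind(site_area, background, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Comparison 2: flag individual samples exceeding background mean + 2 background SDs.
# By construction, roughly 2% of background values would exceed this cutoff.
cutoff = background.mean() + 2 * background.std(ddof=1)
print(f"Site samples above cutoff: {(site_area > cutoff).sum()} of {site_area.size}")
```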
IV. ESTIMATION OF RELEVANT EXPOSURE: DATA USE AND MENTAL MODELS

A reasonable exposure assessment proceeds from a careful characterization of the spatial and temporal aspects of contamination and a model of how this contamination reaches the receptor. Otherwise, the risk assessment assumptions may give rise to highly improbable or impossible events. We have encountered risk assessments for carcinogens, for example, with a Di based on an assumed long-term exposure to the maximum concentration encountered at the site for every toxicant sampled at the site. Similarly, maximum concentrations may be combined for pairs of toxicants which do not occur together anywhere on the site, or soil ingestion by children at a site may be based on toxicant concentrations from soil core samples taken from a depth of 20 feet. In one pesticide risk assessment we encountered, it was alleged that the pesticide in question posed unacceptable risks to nesting birds. The pesticide concentration present in the crop on the day of application was used to support this argument, despite the facts that pesticide application occurred months before the bird-nesting season, and that the pesticide in question has an environmental half-life of about 16 days. These sorts of assumptions may result in alarmingly high risk numbers, although they have value as a screening exercise. That is, if a given situation is found to be safe under extreme assumptions, it is clearly not a problem. Unfortunately, if a site, facility, or activity fails to pass such a screen, it may prove difficult to later dislodge the assumptions with better science. For example, if a screening exercise suggests cancer risks as high as 10^-3 (assuming a family that lives in the sludge disposal pit, eats lots of soil, and grows all of their food in their backyard), it may be very hard to convince stakeholders that more careful analysis shows little or no remedial action is necessary.

Assuming that we have a defensible exposure model, the next question is how to turn environmental contamination measurements into exposure estimates. Such a determination depends on the toxic endpoint of concern. For acute toxic endpoints, such as neurotoxicity, an upper 95% bound or even the maximum of the distribution of sample measurements might be appropriate. For cancer risk, at the other end of the spectrum, U.S. EPA guidance suggests that an upper 95% bound on the arithmetic mean of the sample measurements is an appropriate "conservative" exposure. This is often reasonable guidance. If there is a persistent chemical in the environment, a person or animal living in that environment will receive a dose proportionate to the arithmetic mean of the environmental concentration. Risk assessments of chronic exposures based on geometric means of environmental contaminants are always incorrect. The geometric mean is a good measure of central tendency for a log-normal distribution, but the central limit theorem guarantees that the average of a large number of samples from a log-normal distribution (or any distribution, for that matter) follows a normal distribution with a mean equal to the arithmetic mean of the environmental contamination (log-normal) distribution. Moreover, an exposure estimate based on the geometric mean is always too low and, thus, understates actual exposures.

This brings up the question of a defensible upper bound on this mean. Consider again the hypothetical site of Figure 1, which has four quadrants, each with 150 potential samples. As Figure 3 shows, going from quadrant 1 to quadrant 4, each quadrant has a geometric mean about 10-fold greater than the one preceding. A simple random sample of this site has an expected logarithmic mean (M) of −1.12 and an expected logarithmic standard deviation (S) of 3.04. Using these quantities in the well-known formula for the arithmetic mean µ of a log-normal distribution:

    µ = exp[M + (S^2 ÷ 2)]    (8)

generates a µ value of about 33.4, which overstates matters a bit, given that the actual arithmetic mean is about 8.6! (In this example all concentrations are unitless.)
One might assume that this example is contrived, in that such a great disparity among areas would show up as a lack of fit to a log-normal. This is not the case. Figure 8, a rankit plot of the entire 600-sample universe, suggests a tolerably good fit to a log-normal. Clearly, if the entire sample cannot readily reject a log-normal, it is unrealistic to expect subsamples to do so. If one goes for a conservative 95% upper bound on the arithmetic mean, UB0.95, as a "worst-case" exposure, the formula is:

    UB0.95 = exp[M + (S^2 ÷ 2) + (S × C0.95) ÷ √(N − 1)]    (9)

where N = the sample size and C0.95 is a tabled constant which depends on both S and N (Gilbert, 1987). Assume we take a sample of 80 from our universe and obtain our expected M and S values (M = −1.12; S = 3.04). The resulting worst-case exposure is 241, or about 30 times the actual expected exposure. Moreover, this result is still not a worst case, because it assumes expected values for M and S when, in fact, the result from a small sample could yield an even higher UB0.95.

There are at least two morals in this story. First, in environmental risk assessment you ignore spatial (and temporal) heterogeneity at your peril. Second, if you can avoid making a lot of parametric assumptions, it is best to do so. This raises the question of how to estimate a reasonable upper bound exposure. Actually, Equation (9) may give a pretty good answer if the data are really log-normal. Since this is not always easy to verify, the bootstrap method (Efron and Tibshirani, 1993), a nonparametric method for generating error bounds, is generally preferred. In the bootstrap method, the data are resampled with replacement and the mean of each resample is calculated. In the example above, we would take samples of our 80 values with replacement (which means that a measurement may occur more than once in a given resample) using one of two possible strategies. In the first strategy, about 30 resamples are taken; their mean and variance are calculated, and standard normal statistical theory is used to calculate an upper bound. This generally works well, but the result may be affected by outliers. An even safer strategy is the second: a thousand bootstrap means are generated and the 950th largest is taken as a 95% upper bound. This gives an utterly defensible, totally nonparametric 95% upper bound. The bootstrap method may appear to give "something for nothing" because resampling data does not seem like a valid way of generating new information. This is not so, but an explanation is beyond the scope of our discussion (see Efron and Tibshirani, 1993 for reassurance).
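A minimal sketch of the second (percentile) bootstrap strategy; the data are simulated from a log-normal with the M and S values used above, so the numbers will not match the worked example exactly, and Equation (9) itself is not implemented here because it requires the tabled constant C0.95.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical skewed concentration data, e.g., a simple random sample of 80.
data = rng.lognormal(mean=-1.12, sigma=3.04, size=80)

# Percentile bootstrap: resample with replacement, take the 95th percentile
# (the 950th value in ascending order) of 1000 bootstrap means as a
# nonparametric 95% upper bound on the mean.
n_boot = 1000
boot_means = np.array([rng.choice(data, size=data.size, replace=True).mean()
                       for _ in range(n_boot)])
ucl95 = np.sort(boot_means)[int(0.95 * n_boot) - 1]

print(f"Sample mean:       {data.mean():.2f}")
print(f"Bootstrap 95% UCL: {ucl95:.2f}")
```

The bootstrap bound makes no log-normality assumption, which is the attraction noted in the text.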
At this point a good question is, "What do I do if I am stuck with a 'dirty-spots' sample?" If there is a great deal of money riding on the decision, redo the sampling. Note also that nothing is ever so bad that it cannot be made worse. In one case, for example, a dirty-spots sample was taken first. This was pointed out to the client, who then went out and took a comparable number of samples from an area known to be clean. At this point the formula given by Equation (9) was applied to the combined data (which were strongly bimodal because of the clean/dirty dichotomy). The resulting upper bound on the mean exceeded the largest observation from the dirty-spots sample! These data were beyond salvage by even the bootstrap method. The original sample had been taken to find dirty spots and, thus, was simply not representative of the site. The clean sample had been taken to compensate for the bias of the first sample and, thus, was likewise unrepresentative of the site. The result was a set of about 100 measurements which told us almost nothing about the nature and extent of contamination at the site. The client finally instituted a statistically designed sampling plan.

[Figure 8. A rankit plot of the data shown in Figure 1. Note that the data appear to fit a log-normal fairly well.]

Problems can also arise with HI calculations. At one waste site, for example, a very large number of toxicants were sampled. A total of 50 samples were taken, using relatively imprecise, but cheap, analytic methods in the interest of cost savings (the limit of detection, or LOD, was 20 – 30% of the RfD for most chemicals). The site was not heavily contaminated, so for most toxicants all samples were below the LOD. Nonetheless, the risk assessment showed HI calculations greater than 1. This was because, in the absence of other information, the risk assessor assumed 1/2 the LOD as the concentration of the toxicants that had not been found. Since many toxicants had not been found and the LODs were high relative to the RfDs, the HIs exceeded 1. A risk-oriented approach could have avoided this problem. First, consider the decision to sample for so many toxicants; there was no good reason for this, so fewer toxicants should have been sampled. Second, the usefulness of LODs which are 20% – 30% of the RfD should have been questioned, since in this case a sample with a number of contaminants at levels below the LOD is not necessarily "safe" from a regulatory viewpoint; LODs should be specified to reflect this concern. Finally, when 50 samples are all below the LOD, order statistics can be used to show that the mean is much less than 1/2 the LOD, if you have an idea of the variability of higher-concentration compounds at the site (Ginevan, 1993).

Monte Carlo methods are an important recent development in exposure assessment. To illustrate the use of this tool, assume that we have an estimate of environmental contamination C and an exposure scenario S, and we want to calculate an average daily intake of toxicant in milligrams (mg) per kilogram (kg) of body weight. We can find the total daily intake of toxicant D in mg as a function of the environmental contamination level and our exposure scenario. An example: Assume the upper 95% bound on the mean of "bad stuff" (BS) in soil is 10 ppm (or 10 mg/kg). Assume our exposure of interest is to a 20 kg child, and the child eats 100 mg of soil per day. Intake in mg/kg is then given as 10 mg/kg (BS in soil) × 0.0001 kg (100 mg) of soil ÷ 20 kg (child weight) = 5 × 10^-5 mg/kg per day.

If we put this model in Monte Carlo terms, we would specify a probability distribution for the concentration of bad stuff in soil, a probability distribution for the amount of soil ingested per day, and a probability distribution for the body weight of children. A large number of BS intakes are generated using randomly selected values for BS concentration in soil, ingestion amount, and child body weight. The result is a distribution of daily intakes in mg/kg.
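A minimal sketch of such a Monte Carlo intake calculation; every input distribution below (log-normal soil concentration, triangular soil ingestion, normal body weight) is an assumption chosen for illustration, not a recommended default.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

# Assumed input distributions (illustrative only):
soil_conc = rng.lognormal(mean=np.log(5.0), sigma=0.8, size=n)           # mg contaminant / kg soil
soil_intake_g = rng.triangular(left=0.02, mode=0.1, right=0.4, size=n)   # g soil ingested / day
body_weight = rng.normal(loc=20.0, scale=3.0, size=n).clip(min=10.0)     # kg

# Daily intake in mg per kg body weight:
intake = soil_conc * (soil_intake_g / 1000.0) / body_weight

print(f"Median intake:   {np.median(intake):.2e} mg/kg-day")
print(f"95th percentile: {np.percentile(intake, 95):.2e} mg/kg-day")
```

Running the best estimates of the inputs through the algebraic model, as in the deterministic example above, should give a result near the center of this simulated distribution; that check is discussed in Section V.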
An extensive discussion of this sort of Monte Carlo modeling can be found in Holland and Sielken (1993). In some cases, this might be a real advance. For example, if we were concerned about an acute toxicant, we could use a Monte Carlo model to calculate the proportion of soil ingestion episodes which would result in an unacceptable intake of toxicant. For this model we would use the distributions discussed in our example. If we were, instead, interested in the distribution of chronic intake, we would use the same distribution for child body weight, but would need to define a distribution for the average soil intake across children (e.g., how much soil per day does a specific child eat on the average?), and we would need a distribution for the uncertainty in the arithmetic mean soil concentration (probably from the bootstrap method). Also, we would need to be very careful how the resulting chronic intake distribution was employed, because children do not remain children for a lifetime, so the resulting chronic distribution applies to a limited period. Monte Carlo methods are often useful, but like any other tool they carry with them the potential for misuse. A bad model translated into a Monte Carlo simulation is still bad, and even a good model requires correctly specified inputs.

V. FINDING OUT WHAT IS IMPORTANT: A CHECKLIST

The preceding discussion was intended to give the reader an overview of those statistical issues we feel are most important in planning or evaluating risk assessments. Here we summarize those points in the context of planning and evaluating actual risk assessments.

The first requirement is to understand the origin of the environmental measurements used as the basis of the exposure analysis. This includes a good understanding of the distribution of the measurements in space (see Figure 1) and time (see Figure 5), and the rationale of the sampling plan used to collect the data. There should be clear graphical displays of the data and a description of the sampling plan which clearly states its purpose. Samples that appear to be clustered, rather than evenly distributed across the area of interest, should raise concern. Likewise, if the report casts its discussion in terms of a bounding exercise, there is reason to be concerned that the samples were taken in a manner that tends to overstate contamination levels.

Sample support is another important issue. Does the report say how a particular type of sampling was conducted and exactly how the chemical analyses were performed?
It is likely that it does not, but if outliers appear to be a problem, or if the data appear odd in other ways, this is an area worth exploring. Also look for information on the distributional form of the contamination data. This should include both goodness-of-fit tests and graphical representations of the data.

If the contamination data appear to have been derived in a reasonable way, the next area of concern is how they are used to estimate exposure. For human cancer risk, or other endpoints based on chronic exposure, exposure estimates should be based on either the arithmetic mean of exposure measurements or an upper bound on the arithmetic mean of these measurements. Upper bounds based on bootstrap procedures are generally preferable to those derived from assumptions of log-normality. The latter are acceptable, however, if the fit of the data to a log-normal distribution is good. If cancer risk is determined by several chemicals, be aware that it is rather conservative to assume the mean concentration of each chemical will be at its upper 95% bound; not all compounds are likely to actually be at their respective upper bounds. A better answer could be derived in a manner similar to the Monte Carlo procedure described above for obtaining an upper bound on the sum of cancer potency factors.

Consider the temporal aspects of exposure. For example, many pesticides have rather short environmental half-lives, which need to be taken into account in estimating exposure (e.g., see Figure 5). Similarly, assuming long-term exposure to volatile organic compounds (VOCs) may result in incorrect answers. One assessment we reviewed assumed a constant 30-year emission rate for a VOC; the result of this model was a total emission that was several times the total amount of VOC present at the site. A final concern is the choice of exposure scenario. Is it the hypothetical family living in the sludge pit, or is it a reasonable scenario? This is important because one scenario which fails the "laugh test" will cast doubt on the whole risk assessment.

If exposure estimates seem reasonable, then consider the dose-response models used to determine risk. As noted earlier, these tend to be numbers that are approved by U.S. EPA or some other regulatory agency. However, if the assessment is driven by one or two compounds, a review of the origin of their dose-response data may be in order.

The foregoing assumes a deterministic risk assessment based on statistical upper bounds. For a Monte Carlo-based risk assessment, examine the derivation of each input distribution, the structure of the model, and the assumptions behind it. A good quick check is to run the best estimates of all of the input parameters through the algebraic form of the model (e.g., consider the soil ingestion example); the result should be near the center (median) of the Monte Carlo result. Similarly, run the reasonable upper bounds for each input; the result should be above the 95th percentile of the Monte Carlo result (sometimes quite far above). If the Monte Carlo analysis yields a substantially different central value, or an even more extreme upper bound, something is wrong.

VI. TOOLS

So, now that you know how to apply statistical principles to planning and evaluating health and environmental risk assessments, what tools should you use?
If a lot is riding on the assessment, the first thing you should get is a good statistical consultant. What are some traits to look for in such an individual? One important point in problems of this sort is that their primary focus is not data analysis. The first step in solving these sorts of problems always involves the client and the statistical consultant working together to define the questions to be asked. Look for someone who asks lots of questions and who wants to work with you to understand the problem. If a statistician's first instinct is to start crunching numbers, you are almost certain to be badly served.

Another point is that much of the preceding discussion probably did not seem very "statistical." Consider the problem of combining cancer potency estimates: the actual problem was to identify a plausible distribution for q1. Similarly, the point of the example of the family living in the sludge pit is that bad assumptions can ruin a technically correct analysis. At first glance these might both be viewed as qualitative problems; in fact they are vital to sensible quantitative analysis. Sometimes qualitative problems are quantitative problems disguised by uncertainty. A statistician can help you tell the difference.

This discussion also placed a lot of emphasis on graphics. This is because it is very important to present analyses and results as clearly as possible, and graphics are a powerful way to clarify things (imagine Figure 1 as a table). A good consultant uses lots of graphics. If your consultant's reports are so "scientific" that few can understand them, they are unlikely to be useful in decision making.

Finally, problems of the sort discussed here often require a large library of specialized statistical, modeling, and graphics software, and considerable computer power. Even large companies may not have good resources for statistical analysis and modeling, and may try to tailor your problem to fit what they can do, rather than what you need to be done. If you are hiring a statistical consultant, don't be afraid to ask about these resources.

What tools might be appropriate for do-it-yourself analysis?
The short answer is that if you have a recent version of one of the major statistical packages, such as SPSS, SAS, or S-Plus, they will do all of the graphics and tests discussed here, and much more. An environmental statistics module for S-Plus is now available (Millard, 1997). If you are purchasing new software, however, note that all major statistics packages are expensive and require training for effective use. At the other extreme, the final U.S. EPA DQA document contains a mini statistical package which will do most, but not all, of the analyses and graphics discussed here. For the actual analyses discussed here, I use three packages: Statistix for Windows (Analytical Software, 1996), Axum for Windows (MathSoft, 1996), and Systat for Windows (SPSS, 1996). Of the three, Statistix will do much of the analysis presented here (but not 3-D plots), and is very easy to use. Axum is likewise quite easy to use and will do a very broad range of statistical graphics. Systat is another comprehensive package which will do everything, but it is probably not a good choice for the casual user. For those intrigued by the bootstrap method, I recommend Resampling Stats for Windows (Bruce, Simon, and Oswald, 1996). It is one of the easiest ways to implement this sort of methodology and is inexpensive.

The foregoing discussion should give the reader an idea of the spectrum of tools available, but it is by no means an exhaustive list. There are many other excellent statistical packages on the market. For additional advice, talk to people who are doing the kinds of statistics you need to do. For further reading I particularly recommend the U.S. EPA DQO and DQA documents, and two general references: Holland and Sielken (1993) and Gilbert (1987). I hope the reader finds this discussion useful and that it clarifies the use of statistics in health and environmental risk assessment.

REFERENCES

Analytical Software, Statistix for Windows, Tallahassee, FL, 1996.
Bruce, P., Simon, J., and Oswald, T., Resampling Stats for Windows, Operation Guide, Resampling Stats, Inc., Arlington, VA, 1996.
Caulcutt, R. and Boddy, R., Statistics for Analytical Chemists, Chapman & Hall, London, 1983.
Cleveland, W.S., The Elements of Graphing Data, Wadsworth and Brooks/Cole, New York, 1985.
Cleveland, W.S., Visualizing Data, AT&T Bell Laboratories, Murray Hill, NJ, 1993.
Conover, W.J., Practical Non-parametric Statistics, John Wiley & Sons, New York, 1980.
Efron, B. and Tibshirani, R.J., An Introduction to the Bootstrap, Chapman & Hall, London, 1993.
Gilbert, R.O., Statistical Methods for Environmental Pollution Monitoring, Van Nostrand Reinhold, New York, 1987.
Ginevan, M.E., Bounding the mean concentration for environmental contaminants when all observations are below the limit of detection, American Statistical Association 1993 Proceedings of the Section on Statistics and the Environment, Hayward, CA, 123, 1993.
Helsel, D.R., Less than obvious: statistical treatment of data below the detection limit, Environ. Sci. and Technol., 24, 1766, 1990.
Holland, C.D. and Sielken, R.L., Quantitative Cancer Modeling and Risk Assessment, PTR Prentice-Hall, Englewood Cliffs, NJ, 1993.
MathSoft, Axum User's Guide, Cambridge, MA, 1996.
Millard, S.P., Environmental Statistics for S-Plus, User's Manual, Probability, Statistics & Information, Seattle, WA, 1997.
National Academy of Sciences, Health Effects of Exposures to Low Levels of Ionizing Radiation: BEIR V, National Academy Press, Washington, 1990.
National Academy of Sciences, Health Effects of Radon and Other Internally Deposited Alpha Emitters: BEIR IV, National Academy Press, Washington, 1988.
Putzrath, R.M. and Ginevan, M.E., Meta-analysis: methods for combining data to improve quantitative risk assessment, Regulatory Toxicol. and Pharmacol., 14, 178, 1991.
Scheaffer, R.L., Mendenhall, W., and Ott, L., Elementary Survey Sampling, PWS-Kent, Boston, 1990.
Silverman, B.W., Density Estimation for Statistics and Data Analysis, Chapman & Hall, London, 1986.
SPSS, Systat 6.0 for Windows, Statistics, SPSS Inc., Chicago, 1996.
U.S. Environmental Protection Agency, Data Quality Objectives Process for Superfund: Interim Final Guidance, Office of Emergency and Remedial Response, Washington, 1993.
U.S. Environmental Protection Agency, Guidance for Data Quality Assessment: Practical Methods for Data Analysis, EPA QA/G-9, QA96 version, Washington, 1996.
U.S. Environmental Protection Agency, Risk Assessment Guidance for Superfund, Volume I: Human Health Evaluation Manual (Part A), Interim Final, Office of Emergency and Remedial Response, Washington, 1989.
Wilkinson, L.M., et al., SYSTAT for DOS: Using SYSTAT, Version 6, SYSTAT Inc., Evanston, IL, 1994.
