Section II

Scientific Principles and Characteristics of SSDs

This section deals with theoretical and technical issues in the derivation of species sensitivity distributions (SSDs). The issues include the assumptions underlying the use of SSDs, the selection and treatment of data sets, methods of deriving distributions and estimating associated uncertainties, and the relationship of SSDs to distributions of exposure. A unifying statistical principle is presented, linking all current definitions of risk that have been proposed in the framework of SSDs. Further, methods for using SSDs for data-poor substances are proposed, based on effect patterns related to the toxic mode of action of compounds. Finally, the issue of SSD validation is addressed in this section. These issues are the basis for the use of SSDs in setting environmental criteria and assessing ecological risks (Sections IIIA and B).

4 Theory of Ecological Risk Assessment Based on Species Sensitivity Distributions

Nico M. van Straalen

CONTENTS

4.1 Introduction
4.2 Derivation of Ecological Risk
4.3 Probability Generating Mechanisms
4.4 Species Sensitivity Distributions in Scenario Analysis
4.5 Conclusions
Acknowledgments

Abstract — The risk assessment approach to environmental protection can be considered as a quantitative framework in which undesired consequences of potentially toxic chemicals in the environment are identified and their occurrence is expressed as a relative frequency, or a probability. Risk assessment using species sensitivity distributions (SSDs) focuses on one possible undesired event, the exposure of an arbitrarily chosen species to an environmental concentration greater than its no-effect level. There are two directions in which the problem can be addressed: the forward approach considers the concentration as given and aims to estimate the risk; the inverse approach considers the risk as given and aims to estimate the concentration. If both the environmental concentration (PEC) and the no-effect concentration (NEC) are distributed variables, the expected value of risk can be obtained by integrating the product of the probability density function of PEC and the cumulative distribution of NEC over all concentrations. Analytical expressions for the expected value of risk soon become rather formidable, but numerical integration is still possible. An application of the theory is provided, focusing on the interaction between soil acidification and toxicity of metals in soil. The analysis shows that the concentration of lead in soil that is at present considered a safe reference value in the Dutch soil protection regulation may cause an unacceptably high risk to soil invertebrate communities when soils are acidified from pH 6.0 to pH 3.5. This is due to a strong nonlinear effect of lead on the expected value of risk with decreasing pH. For cadmium the nonlinear component is less pronounced. The example illustrates the power of quantitative risk assessment using SSDs in scenario analysis.

4.1 INTRODUCTION

The idea of risk has proved to be an attractive concept for dealing with a variety of environmental policy problems such as regulation of discharges, cleanup of contaminated land, and registration of new chemicals. Its attractiveness seems to stem from the basically quantitative nature of the concept, and from the fact that it allows a variety of problems associated with human activities to be expressed in a common currency.
This chapter addresses the scientific approach to ecological risk assessment, with emphasis on an axiomatic and theoretically consistent framework.

The most straightforward definition of risk is the probability of occurrence of an undesired event. As a probability, risk is always a number between 0 and 1, sometimes multiplied by 100 to give a percentage. An actual risk will usually lie closer to 0 than to 1, because undesired events are, by their nature, relatively rare. For the moment, it is easiest to think of probabilities as relative frequencies, measured by the ratio of actual occurrences to the total number of occurrences possible. The mechanisms generating probabilities will be discussed later in this chapter.

In the scientific literature and in policy documents, many different definitions of risk have been given. Usually, not only the occurrence of undesired events is considered part of the concept of risk, but also the magnitude of the effect. This is often expressed as risk = (probability of adverse effect) × (magnitude of effect). This is a misleading definition because, as pointed out by Kaplan and Garrick (1981), it equates a low-probability/high-damage event with a high-probability/low-damage event, which are obviously two different types of events. This is not to negate the importance of the magnitude of the effect; rather, magnitude of effect should be considered alongside risk. This was pictured by Kaplan and Garrick (1981) in their concept of the "risk curve": a function that defines the relative frequency (occurrence) of a series of events ordered by increasing severity. So risk itself should be conceptually separated from the magnitude of the effect; however, the maximum acceptable risk for each event will depend on its severity.

When the above definition of risk is accepted, the problem of risk assessment reduces to (1) specifying the undesired event and (2) establishing its relative incidence. The undesired event that I consider the basis for the species sensitivity framework is: a species chosen randomly out of a large assemblage is exposed to an environmental concentration greater than its no-effect level. It must be emphasized that this endpoint is only one of several possible. Suter (1993) has extensively discussed the various endpoints that are possible in ecological risk assessments. Undesired events can be indicated on the level of ecosystems, communities, populations, species, or individuals. Specification of undesired events requires an answer to the question: what is it that we want to protect in the environment? The species sensitivity distribution (SSD) approach is only one, narrowly defined, segment of ecological risk assessment. Owing to its precise definition of the problem, however, the distribution-based approach allows a theoretical and quantitative treatment, which adds greatly to its practical usefulness.

It is illustrative to break down the definition of our undesired event given above and to consider each part in detail:

• "A species chosen randomly" — This implies that species, not individuals, are the entities of concern. Rare species are treated with the same weight as abundant species. Vertebrates are considered equal to invertebrates. It also implies that species-rich groups, e.g., insects, are likely to dominate the sample taken.

• "Out of a large assemblage" — There is no assumed structure among the assemblage of species, and they do not depend on each other.
The fact that some species are prey or food to other species is not taken into account.

• "Is exposed to an environmental concentration" — This phrase assumes that there is a concentration that can be specified, and that it is a constant. If the environmental concentration varies with time, risk will also vary with time and the problem becomes more complicated.

• "Greater than its no-effect level" — This presupposes that each species has a fixed no-effect level, that is, an environmental concentration above which it will suffer adverse effects.

Considered in this very narrowly defined way, the distribution-based assessment may well seem ridiculous because it ignores all ecological relations between the species. Still, the framework derived on the basis of the assumptions specified is interesting and powerful, as is illustrated by the various examples in this book that demonstrate applications in management problems and decision making.

4.2 DERIVATION OF ECOLOGICAL RISK

If it is accepted that the undesired event specified in the previous section provides a useful starting point for risk assessment, there are two ways to proceed, which I call the forward and the inverse approach (Van Straalen, 1990). In the forward problem, the exposure concentration is considered as given and the risk associated with that exposure concentration has to be estimated. This situation applies when chemicals are already present in the environment and decisions have to be made regarding the acceptability of their presence. Risk assessment can be used here to decide on remediation measures or to choose among management alternatives. Experiments that fall under the forward approach are bioassays conducted in the field to estimate in situ risks. In the inverse problem, the risk is considered as given (set, for example, to a maximum acceptable value) and the concentration associated with that risk has to be estimated. This is the traditional approach used for deriving environmental quality standards. The experimental counterpart of this consists of ecotoxicological testing, the results of which are used for deriving maximum acceptable concentrations for chemicals that are not yet in the environment.

Both in the forward and the inverse approach, the no-effect concentration (NEC) of a species is considered a distributed variable c. For various reasons, including asymmetry of the data, it is more convenient to consider the logarithm of the concentration than the concentration itself. Consequently, I will use the symbol c to denote the logarithm of the concentration (to the base e). Although the concentration itself can vary, in principle, from 0 to ∞, c can vary from –∞ to +∞, although in practice a limited range may be applicable. Denote the probability density distribution for c by n(c), with the interpretation that

\int_{c_1}^{c_2} n(c)\,dc    (4.1)

equals the probability that a species in the assemblage has a log NEC between c_1 and c_2. Consequently, if only c is a distributed variable and the concentration in the environment is constant, ecological risk, δ, may be defined as:

\delta = \int_{-\infty}^{h} n(c)\,dc    (4.2)

where h is the log concentration in the environment. In the forward problem, h is given and δ is estimated; in the inverse problem, δ is given and h is estimated. There are various possible distribution functions that could be taken to represent n(c), for example, the normal distribution, the logistic distribution, etc. These distributions have parameters representing the mean and the standard deviation.
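For a specific choice of distribution, the forward and inverse problems of Equation 4.2 reduce to evaluating the cumulative distribution of log NECs at a given h, and its inverse at a given δ. As a minimal numerical sketch, assuming a normal distribution for the log NECs and purely illustrative parameter values (Python with SciPy):

```python
# Forward and inverse problems of Equation 4.2, assuming a normal SSD for log NECs.
# The parameter values are illustrative, not data from this chapter.
from scipy.stats import norm

mu, sigma = 1.0, 0.5   # hypothetical mean and standard deviation of log NEC

def forward_risk(h):
    # Forward problem: delta is the area under n(c) below h, i.e., the CDF at h.
    return norm.cdf(h, loc=mu, scale=sigma)

def inverse_concentration(delta):
    # Inverse problem: the log concentration h at which the risk equals delta.
    return norm.ppf(delta, loc=mu, scale=sigma)

print(forward_risk(0.5))             # risk at a given log concentration h = 0.5
print(inverse_concentration(0.05))   # log concentration at delta = 0.05 (an HC5 analogue)
```

With these hypothetical parameters, the inverse solution at δ = 0.05 is simply µ + σΦ⁻¹(0.05), the analogue of the HC5 discussed in Chapters 2 and 3.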
Consequently, Equation 4.2 defines a mathematical relationship among δ, h, and the mean and standard deviation of the distribution. This relationship forms the basis for the estimation procedure.

The assumption of a constant concentration in the environment can be relaxed relatively easily within the framework outlined above. Denote the probability density function for the log concentration in the environment by p(c), with the interpretation that:

\int_{c_1}^{c_2} p(c)\,dc    (4.3)

equals the probability that the log concentration falls between c_1 and c_2. The ecological risk δ, as defined above, can now be expressed in terms of the two distributions, n(c) and p(c), as follows:

\delta = \int_{-\infty}^{\infty} \left[1 - P(c)\right] n(c)\,dc = 1 - \int_{-\infty}^{\infty} P(c)\,n(c)\,dc    (4.4)

where P(c) is the cumulative distribution of p(c), defined as follows:

P(c) = \int_{-\infty}^{c} p(u)\,du    (4.5)

where u is a dummy variable of integration (Van Straalen, 1990). Again, if n and p are parameterized by choosing specific distributions, δ may be expressed in terms of the means and standard deviations of these distributions. The actual calculations can become quite complicated, however, and it will in general not be possible to derive simple analytical expressions. For example, if n and p are both represented by logistic distributions, with means µ_n and µ_p, and shape parameters β_n and β_p, Equation 4.4 becomes

\delta = 1 - \int_{-\infty}^{\infty} \frac{\exp\left(\frac{\mu_n - c}{\beta_n}\right)}{\beta_n \left[1 + \exp\left(\frac{\mu_n - c}{\beta_n}\right)\right]^2 \left[1 + \exp\left(\frac{\mu_p - c}{\beta_p}\right)\right]}\,dc    (4.6)

Application of this equation would be equivalent to estimating PEC/PNEC ratios (predicted environmental concentrations over predicted NECs). Normally, in such a comparison only the means are compared and their quotient is taken as a measure of risk (Van Leeuwen and Hermens, 1995). However, even if the mean PEC is below the mean PNEC, there still may be a risk if there is variability in PEC and PNEC, because low extremes of PNEC may concur with high extremes of PEC. In Equation 4.6 both the mean and the variability of PNEC and PEC are taken into account.

Equation 4.4 is graphically visualized in Figure 4.1a. This shows that δ can be considered an area under the curve of n(c), after this curve is multiplied by a fraction that becomes smaller and smaller with increasing concentration. If there is no variability in PEC, P(c) reduces to a step function, and δ becomes equivalent to a percentile of the n(c) distribution (Figure 4.1b). So the derivation of HC_p (the hazardous concentration for p% of the species) by the methods explained in Chapters 2 and 3 of this book is a special case of a more general theory, one that ignores the variability in environmental exposure concentrations.

Equation 4.4 can also be written in another way; applying integration by parts and recognizing that N(–∞) = P(–∞) = 0 and N(∞) = P(∞) = 1, the equation can be rewritten as

\delta = \int_{-\infty}^{\infty} p(c)\,N(c)\,dc    (4.7)

where N(c) is the cumulative distribution of n(c), defined by

N(c) = \int_{-\infty}^{c} n(u)\,du    (4.8)

where u is a dummy variable of integration. A graphical visualization of Equation 4.7 is given in Figure 4.1c. Again, δ can be seen as an area under the curve, now the curve of p(c), after it is multiplied by a fraction that becomes larger and larger with increasing c. In the case of no variability in p(c), it reduces to an impulse (Dirac) function at c = h. In that case δ becomes equal to the value of N(c) at the intersection point (Figure 4.1d).
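Because Equations 4.4 and 4.7 are mathematically equivalent, a numerical integration of either form gives the same δ; numerical evaluation is also the practical route when closed-form expressions such as Equation 4.6 become unwieldy. The sketch below assumes logistic distributions for both n(c) and p(c), with illustrative parameter values:

```python
# Numerical check that Equations 4.4 and 4.7 yield the same delta, assuming
# illustrative logistic distributions for log NEC (n) and log PEC (p).
from scipy.stats import logistic
from scipy.integrate import quad

mu_n, beta_n = 1.0, 0.4   # hypothetical SSD (NEC) parameters
mu_p, beta_p = 0.2, 0.3   # hypothetical exposure (PEC) parameters

n  = lambda c: logistic.pdf(c, loc=mu_n, scale=beta_n)   # n(c)
N_ = lambda c: logistic.cdf(c, loc=mu_n, scale=beta_n)   # N(c)
p  = lambda c: logistic.pdf(c, loc=mu_p, scale=beta_p)   # p(c)
P_ = lambda c: logistic.cdf(c, loc=mu_p, scale=beta_p)   # P(c)

delta_44, _ = quad(lambda c: (1.0 - P_(c)) * n(c), -20, 20)  # Equation 4.4
delta_47, _ = quad(lambda c: p(c) * N_(c), -20, 20)          # Equation 4.7

print(delta_44, delta_47)   # the two values agree to integration tolerance
```

The integration limits are finite but wide enough that the tails of both distributions contribute negligibly.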
This graphical representation (Figure 4.1d) was chosen by Solomon et al. (1996) in their assessment of triazine residues in surface water.

The theory summarized above, originally formulated in Van Straalen (1990), is essentially the same as the methodology described by Parkhurst et al. (1996). These authors argue from basic probability theory, derive an equation equivalent to Equation 4.7, and also provide a simple discrete approximation. This can be seen as follows. Suppose that the concentrations of a chemical in the environment can be grouped in a series of discrete classes, each class with a certain frequency of occurrence. Let p_i be the density of concentrations in class i, with width Δc_i, and N_i the fraction of species with a NEC below the median of class i; then

\delta = \sum_{i=1}^{m} p_i N_i \Delta c_i    (4.9)

if there are m classes of concentration covering the whole range of occurrence. The calculation is illustrated here using a fictitious numerical example with equal class widths (Table 4.1).

TABLE 4.1
Numerical (Fictitious) Example Illustrating the Calculation of Expected Value of Risk (δ)

Class No., i | Concentration Interval, Δc | Probability of Concentration in the Environment, p_i·Δc | Cumulative Probability of Affected Species at Median of Class, N_i | Risk per Interval (a)
1 | 0–10 | 0 | 0.01 | 0
2 | 10–20 | 0.10 | 0.05 | 0.005
3 | 20–30 | 0.48 | 0.10 | 0.048
4 | 30–40 | 0.36 | 0.30 | 0.108
5 | 40–50 | 0.06 | 0.60 | 0.036
6 | >50 | 0 | 1 | 0
Total | | 1 | | 0.197 (= δ)

(a) From the probability density of concentrations in the environment (p_i) and the cumulative probability of affected species (N_i), according to Equation 4.9 in the text.

The example shows that, given the values of p_i and N_i provided, the expected value of risk is 19.7%. In the example, the greatest component of the risk is associated with the fourth class of concentrations, although the third class has a higher frequency (Table 4.1).

FIGURE 4.1 Graphical representation of the calculation of ecological risk, δ, defined as the probability that environmental concentrations are greater than NECs. The probability density distribution of environmental concentrations is denoted p(c), the distribution of NECs is denoted n(c). P(c) and N(c) are the corresponding cumulative distributions. In a and c, both variables are subject to error; in b and d, the environmental concentration is assumed to be constant. Parts a and b illustrate the calculation of δ according to Equation 4.4; parts c and d illustrate the (mathematically equivalent) calculation according to Equation 4.7. Part b illustrates the derivation of HC_p (see Chapter 3), and part d is equivalent to the graphical representation in Solomon (1996).
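The discrete approximation of Equation 4.9 is easily checked against the values of Table 4.1; the short sketch below reproduces the per-class contributions and the total of 19.7%:

```python
# Discrete approximation of Equation 4.9 using the values of Table 4.1.
# The tabulated probabilities already include the class width (p_i * delta_c).
p_dc = [0.00, 0.10, 0.48, 0.36, 0.06, 0.00]   # probability of PEC falling in each class
N_i  = [0.01, 0.05, 0.10, 0.30, 0.60, 1.00]   # fraction of species affected at class median

per_class = [p * N for p, N in zip(p_dc, N_i)]
delta = sum(per_class)

print([round(x, 3) for x in per_class])   # [0.0, 0.005, 0.048, 0.108, 0.036, 0.0]
print(round(delta, 3))                    # 0.197, i.e., 19.7% as in Table 4.1
```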
In summary, this section has shown that the risk assessment approaches developed from SSDs, as documented in this book, can all be derived from the same basic concept of risk as the probability that a species is exposed to an environmental concentration greater than its no-effect level. Both the sensitivities of species and the environmental concentrations can be viewed as distributed variables, and once their distributions are specified, risk can be estimated (the forward approach) or maximum acceptable concentrations can be derived (the inverse approach).

4.3 PROBABILITY GENERATING MECHANISMS

The previous section avoided the question of the actual reason that species sensitivity is a distributed variable. Suter (1998a) has rightly pointed out that the interpretation of sensitivity distributions as probabilistic may not be quite correct. The point is that probability distributions are often postulated without specifying the mechanism generating variability. One possible line of reasoning is: "Basically, the sensitivity of all species is the same; however, our measurement of sensitivity includes errors. That is why species sensitivity comes as a distribution." Another line of reasoning is: "There are differences in sensitivity among species. The sensitivity of each species is measured without error, but some species appear to be inherently more sensitive than others. That is why species sensitivity comes as a distribution."

In the first view, the reason one species happens to be more sensitive than another is not due to species-specific factors, but to errors associated with testing, medium preparation, or exposure. A given test species can be anywhere in the distribution, not at a specific place. The choice of test species is not critical, because each species can be selected to represent the mean sensitivity of the community. The distribution could also be called a "community sensitivity distribution." According to this view, ecological risk is a true probability, namely, the probability that the community is exposed to a concentration greater than its no-effect level.

In the second view, the distribution has named species that have a specified position. When a species is tested again, it will produce the same NEC. There are patterns of sensitivity among the species, due to biological similarities. The choice of test species is important, because an overrepresentation of some taxonomic groups may introduce bias in the mean sensitivity estimated.

Suter (1998a) pointed out that the second view is not to be considered probabilistic. The mechanism generating the distribution in this case is entirely deterministic. The cumulative sensitivity distribution represents a gradual increase of effect, rather than a cumulative probability. When the concept of HC5 (see Chapters 2 and 3) is considered as a concentration that leaves 5% of the species unprotected, this is a nonprobabilistic view of the distribution. The problem is similar to the difference between the LC50, as a stochastic variable measured with error, and the EC50, as a deterministic 50% effect point in a concentration–response relationship. According to the second view, the SSD represents variability, rather than uncertainty.

Although the SSD may not be considered a true probability density distribution, there is an element of probability in the estimation of its parameters. Parameters such as µ and β in Equation 4.6 are unknown constants whose values must be estimated from a sample taken from the community. Since the sampling procedure will introduce error, there is an element of uncertainty in the risk estimation. This, then, is the probabilistic element. Ecological risk itself can be considered a deterministic quantity (a measure of relative effect, i.e., the fraction of species affected), which is estimated with an uncertainty margin due to sampling error. The probability generating mechanism is the uncertainty about how well the sample represents the community of interest.
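This sampling-based view can be made concrete by resampling: the SSD parameters are estimated from a small sample of tested species, and the spread of the resulting risk estimates indicates the uncertainty margin. The sketch below uses hypothetical log NEC values, a normal SSD, and a simple bootstrap; it illustrates the idea only and is not the confidence-interval method of Chapter 3:

```python
# Uncertainty in delta due to sampling error, illustrated with a bootstrap.
# The log NEC values and the exposure level h are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
log_necs = np.array([-0.2, 0.4, 0.7, 1.1, 1.3, 1.8, 2.4])   # hypothetical sample of 7 species
h = 0.5                                                     # log exposure concentration

def risk(sample, h):
    mu, sd = sample.mean(), sample.std(ddof=1)
    sd = max(sd, 1e-9)                       # guard against a degenerate resample
    return norm.cdf(h, loc=mu, scale=sd)     # Equation 4.2 with estimated parameters

point = risk(log_necs, h)
boot = [risk(rng.choice(log_necs, size=log_necs.size, replace=True), h)
        for _ in range(2000)]
lo, hi = np.percentile(boot, [5, 95])
print(round(point, 3), round(float(lo), 3), round(float(hi), 3))   # point estimate and rough 90% interval
```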
This sampling-uncertainty approach was taken when establishing confidence intervals for HC5 and HC_p (see Chapter 3). It is also similar to the view expressed by Kaplan and Garrick (1981), who considered the relative frequency of events as separate from the uncertainty associated with estimating these frequencies. Their concept of the risk curve includes both types of probabilities.

It is difficult to say whether the probability generating mechanism should be restricted to sampling only. In practice, the determination of the sensitivity of one species is already associated with error, and so the SSD does not represent pure biological differences. In the extreme case, differences between species could be as large as differences between replicated tests on one species (or tests conducted under different conditions). A sharp distinction between variance due to specified factors (species) and variance due to unknown (random) error is difficult to make. This shows that the discussion about the interpretation of distributions is partly semantic. Considering the general acceptance of the word risk and its association with probabilities, there does not seem to be a need for a drastic change in terminology, as long as it is understood what is analyzed.

4.4 SPECIES SENSITIVITY DISTRIBUTIONS IN SCENARIO ANALYSIS

Most of the practical applications of SSDs in ecotoxicology have focused on the derivation of environmental quality criteria. However, a perhaps more powerful use of the concept is the estimation of risk (δ) for different options associated with a management problem. The lowest value for δ would then indicate the preferable management option. Different scenarios for environmental management or emission of chemicals could be assessed, based on minimization of δ or on an optimization of risk reduction vs. costs. An example of this approach is given in a report by Kater and Lefèvre (1996). These authors were concerned with management options for a contaminated estuary, the Westerschelde, in the Netherlands. Different scenarios were considered: dredging of contaminated sediments and emission reductions. Risk estimations showed that zinc and copper were the most problematic components.

Another interesting aspect of the species sensitivity framework is that the concept of ecological risk (δ) can integrate different types of effects and express their joint risk in a single number. If we consider two independent events, for example, exposure to two different chemicals, the joint risk δ_T can be expressed in terms of the individual risks δ_1 and δ_2 as follows:

\delta_T = 1 - (1 - \delta_1)(1 - \delta_2)    (4.10)

or, in general:

\delta_T = 1 - \prod_{i=1}^{n} (1 - \delta_i)    (4.11)

if there are n independent events.

The concept of δ lends itself very well to use in maps, where it can serve as an indicator of toxic effects if the concentrations are given in a geographic information system. In this approach, δ can be considered the fraction of a community that is potentially affected by a certain concentration of a chemical, abbreviated PAF. The PAF concept was applied by Klepper et al. (1998) to compare the risks of heavy metals with those of pesticides in different areas of the Netherlands, and by Knoben et al. (1998) to measure water quality in monitoring programs. In general, PAF can be considered an indicator of "toxic pressure" on ecosystems (Van de Meent, 1999; Chapter 16).

To illustrate further the idea of scenario analysis based on SSDs, I will review an example concerning the interaction between soil acidification and ecological risk (Van Straalen and Bergema, 1995).
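Before turning to that example, the joint-risk combination of Equations 4.10 and 4.11 can be illustrated with a short sketch; the individual risk values below are arbitrary and serve only to show the calculation:

```python
# Joint risk for n independent events (Equations 4.10 and 4.11).
def joint_risk(deltas):
    # delta_T = 1 - product of (1 - delta_i), assuming independence of the events
    total = 1.0
    for d in deltas:
        total *= (1.0 - d)
    return 1.0 - total

print(joint_risk([0.02, 0.05]))          # two chemicals: 1 - 0.98 * 0.95 = 0.069
print(joint_risk([0.02, 0.05, 0.10]))    # three chemicals: approximately 0.162
```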
In this analysis, data on the ecotoxicity of cadmium and lead to soil invertebrates were used to estimate ecological risk (δ) under soil acidification scenarios in which the pH decreases from 6.0 to 3.5.
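A scenario comparison in the spirit of this example can be sketched as follows, assuming, purely for illustration, that acidification lowers the mean of the log NEC distribution while the exposure concentration stays fixed. The parameter values are hypothetical placeholders, not the data of Van Straalen and Bergema (1995):

```python
# Illustrative scenario comparison: the same soil concentration is evaluated
# against an SSD whose parameters are assumed to shift as the soil acidifies.
from scipy.stats import norm

h = 4.4                              # log of the fixed soil concentration (illustrative)
scenarios = {"pH 6.0": (6.0, 0.8),   # assumed (mean, s.d.) of log NEC at pH 6.0
             "pH 3.5": (4.8, 0.8)}   # assumed lower mean NEC after acidification

for name, (mu, sd) in scenarios.items():
    delta = norm.cdf(h, loc=mu, scale=sd)   # Equation 4.2: fraction of species at risk
    print(name, round(delta, 3))
```

In such a comparison, a modest downward shift of the SSD can produce a disproportionately large increase in δ, which is the kind of nonlinear behavior with decreasing pH referred to in the Abstract; the actual analysis of Van Straalen and Bergema (1995) rests on measured pH-dependent toxicity data.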