Trace Environmental Quantitative Analysis: Principles, Techniques, and Applications - Chapter 2


2 Calibration, Verification, Statistical Treatment of Analytical Data, Detection Limits, and Quality Assurance/Quality Control

If you can measure that of which you speak, and can express it by a number, you know something of your subject, but if you cannot measure it, your knowledge is meager and unsatisfactory.
—Lord Kelvin

CHAPTER AT A GLANCE
Good laboratory practice
Error in laboratory measurement
Instrument calibration and quantification
Linear least squares regression
Uncertainty in interpolated linear least squares regression
Instrument detection limits
Limit of quantitation
Quality control
Linear vs. nonlinear least squares regression
Electronic interfaces between instruments and PCs
Sampling considerations
References

Chromatographic and spectroscopic analytical instrumentation are the key determinative tools to quantitate the presence of chemical contaminants in biological fluids and in the environment. These instruments generate electrical signals that are related to the amount or concentration of an analyte of environmental or environmental health significance. This analyte is likely to be found in a sample matrix taken from the environment or from body fluids. Typical sample matrices drawn from the environment include groundwater, surface water, air, soil, wastewater, sediment, sludge, and so forth. Computer technology has merely aided the conversion of an analog signal from the transducer to the digital domain. It is the relationship between the analog or digital output from the instrument and the amount or concentration of a chemical species that is discussed in this chapter. The process by which an electrical signal is transformed to an amount or concentration is called instrument calibration.

Chemical analysis based on measuring the mass or volume obtained from chemical reactions is stoichiometric. Gravimetric (where the analyte of interest is weighed) and volumetric (where the analyte of interest is titrated) techniques are methods that are stoichiometric. Such methods do not require calibration. Most instrumental determinative methods are nonstoichiometric and thus require instrument calibration.

This chapter introduces the most important aspect of TEQA for the reader. After the basics of what constitutes good laboratory practice are discussed, the concept of instrument calibration is introduced and the mathematics used to establish such calibrations are developed. The uncertainty present in the interpolation of the calibration is then introduced. A comparison is made between the more conventional approach to determining instrument detection limits and the more contemporary approaches that have recently been discussed in the literature.1–6 These more contemporary approaches use least squares regression and incorporate relevant elements from statistics.7 Quality assurance/quality control principles are then introduced. A contemporary statistical approach toward evaluating the degree of detector linearity is then considered. The principles that enable a detector's analog signal to be digitized via analog-to-digital converters are introduced. Principles of environmental sampling are then introduced. Readers can compare QA/QC practices from two environmental testing laboratories. Every employer wants to hire an analyst who knows of and practices good laboratory behavior.

1. WHAT IS GOOD LABORATORY PRACTICE?
Good laboratory practice (GLP) requires that a quality control (QC) protocol for trace environmental analysis be put in place. A good laboratory QC protocol for any laboratory attempting to achieve precise and accurate TEQA requires the following considerations:

• Deciding whether an external standard, internal standard, or standard addition mode of instrument calibration is most appropriate for the intended quantitative analysis application.
• Establishing a calibration curve that relates instrument response to analyte amount or concentration by preparing reference standards and measuring their respective instrument responses.
• Performing a least squares regression analysis on the experimental calibration data to evaluate instrument linearity over a range of concentrations of interest and to establish the best relationship between response and concentration.
• Computing the statistical parameters that assist in specifying the uncertainty of the least squares fit to the experimental data points.
• Running one or more reference standards in at least triplicate as initial calibration verification (ICV) standards throughout the calibration range. ICVs should be prepared so that their concentrations fall within the mid-calibration range.
• Computing the statistical parameters for the ICV that assist in specifying the precision and accuracy of the least squares fit to the experimental data points.
• Determining the instrument detection limits (IDLs).
• Determining the method detection limits (MDLs), which requires establishing the percent recovery for a given analyte in both a clean matrix and the sample matrix. With some techniques, such as static headspace gas chromatography (GC), the MDL cannot be determined independently from the instrument's IDL.
• Preparing and running QC reference standards at a frequency of once every 5 or 10 samples. This QC standard serves to monitor instrument precision and accuracy during a batch run. This assumes that both calibration and ICV criteria have been met. A mean value for the QC reference standard should be obtained over all QC standards run in the batch. The standard deviation, s, and the relative standard deviation (RSD) should be calculated (a computational sketch follows this list).
• Preparing and running QC surrogates, matrix spikes, and, in some cases, matrix spike duplicates per batch of samples. A batch is defined in EPA methods to be approximately 20 samples. These reference standard spikes serve to assess extraction efficiency where applicable. Matrix spikes and duplicates are often required in EPA methods.
• Preparing and running laboratory blanks, laboratory control samples, and field and trip blanks. These blanks serve to assess whether samples may have become contaminated during sampling and sample transport.

It has been stated many times by experienced analysts that in order to achieve GLP, close to one QC sample must be prepared and analyzed for nearly each and every real-world environmental sample.
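The batch QC arithmetic called out in the list above reduces to a few lines of code. The Python sketch below is illustrative only: the QC concentrations and the 15% acceptance limits are hypothetical values, not criteria taken from this chapter.

```python
# Minimal sketch: monitoring QC reference standards run every 5-10 samples.
# The measured concentrations and the 15% RSD / %error limits below are
# hypothetical values chosen only to illustrate the calculation.
import statistics

qc_true = 50.0                                # known concentration of the QC standard (ppb)
qc_measured = [49.2, 51.0, 48.7, 50.6, 49.9]  # results interspersed through a batch

mean = statistics.mean(qc_measured)
s = statistics.stdev(qc_measured)             # sample standard deviation
rsd = 100.0 * s / mean                        # percent relative standard deviation
pct_error = 100.0 * abs(mean - qc_true) / qc_true

print(f"mean = {mean:.2f}, s = {s:.2f}, %RSD = {rsd:.1f}, %error = {pct_error:.1f}")
# A laboratory might flag the batch if %RSD or %error exceeds a lab-defined limit, e.g. 15%.
if rsd > 15.0 or pct_error > 15.0:
    print("QC criteria not met; investigate before reporting sample results.")
```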
2. CAN DATA REDUCTION, INTERPRETATION, AND STATISTICAL TREATMENT BE SUMMARIZED BEFORE WE PLUNGE INTO CALIBRATION?

Yes, indeed. Figure 2.1, adapted and modified while drawing on recently published International Union of Pure and Applied Chemistry (IUPAC) recommendations, as discussed by Currie,1 is this author's attempt to do just that. The true amount that is present in the unknown sample can be expressed as an amount, such as a number of ng of analyte, or as a concentration [µg analyte/kg of sample (weight/weight) or µg analyte/L of sample (weight/volume)]. The amount or concentration of true unknown present in either an environmental sample or a human/animal specimen, represented by τ, is shown in Figure 2.1 being transformed to an electrical signal y. Chapters 3 and 4 describe how the six steps from sampling to transducer are accomplished. The signal y, once obtained, is then converted to the reported estimate x₀, as shown in Figure 2.1. This chapter describes how the eight steps from calibration to statistical evaluation are accomplished. The ultimate goal of TEQA is then realized, i.e., a reported estimate x₀ with a calculated uncertainty in the measurement, expressed as ±u and obtained using statistics.

[FIGURE 2.1 The process of trace environmental quantitative analysis. (Adapted from L. Currie, Pure and Applied Chemistry, 67, 1699–1723, 1995.) The figure traces the true amount or concentration (τ) of an unknown targeted analyte in an environmental sample or animal specimen through sampling, sample preservation and storage, extraction, cleanup, injection, and transducer to the signal y, which may or may not include background interferences; then through calibration, quantification, verification, measurement of IDLs, calculation of MDLs, QA/QC, interpretation, and statistical evaluation to the reported estimate x₀ ± u, the ultimate goal and limitation of TEQA.]

We can assume that the transduced signal varies linearly with x, where x is the known analyte amount or concentration of a standard reference. This analyte in the standard reference must be chemically identical to the analyte in the unknown sample represented by its true value τ. x is assumed to be known with certainty, since it can be traced to accurately known certified reference standards, such as those obtained from the National Institute of Standards and Technology (NIST). We can realize that

    y = y₀ + mx + e_y

where
    y₀ = the y intercept, the magnitude of the signal in the absence of analyte
    m = the slope of the best-fit regression line (what we mean by regression will be taken up shortly) through the experimental data points; the slope also defines the sensitivity of the specific determinative technique
    e_y = the error associated with the variation in the transduced signal for a given value of x

We assume that x itself (the amount or concentration of the analyte of interest) is free of error. This assumption is used throughout the mathematical treatment in this chapter and serves to simplify the mathematics introduced.

Referring to Figure 2.1, we can, at best, only estimate and report a result for τ, the amount or concentration at a trace level, represented by x₀, with an uncertainty u such that x₀ could range from a low of x₀ − u to a high of x₀ + u. Let us focus a bit more on the concept of error in measurement.
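To make the response model y = y₀ + mx + e_y concrete, the short sketch below simulates a transducer whose signal varies linearly with the standard concentration x plus a random error term. The intercept, slope, and noise level are invented for illustration and are not values from the text.

```python
# Minimal sketch of the linear response model y = y0 + m*x + e_y.
# y0, m, and the noise standard deviation are hypothetical illustration values.
import random

y0 = 2.0      # signal in the absence of analyte (blank response)
m = 150.0     # sensitivity: signal units per ppm of analyte
sigma = 3.0   # standard deviation of the random error e_y

random.seed(1)

def transduced_signal(x_ppm: float) -> float:
    """Simulated instrument response for a standard of known concentration x."""
    e_y = random.gauss(0.0, sigma)   # random error in the signal; x is assumed error-free
    return y0 + m * x_ppm + e_y

for x in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(f"x = {x:4.1f} ppm  ->  y = {transduced_signal(x):8.1f}")
```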
2.1 HOW IS MEASUREMENT ERROR DEFINED?

Let us digress a bit and discuss measurement error. Each and every measurement includes error. The length and width of a page from this book cannot be measured without error. There is a true length of this page, yet at best we can only estimate its length. We can measure length only to within the accuracy and precision of our measuring device, in this case a ruler or straightedge. We could increase our precision and accuracy for measuring the length of this page if we used a digital caliper.

Currie has defined x₀ as the statistical estimate derived from a set of observations:

    x₀ = τ + e

The error in x₀, represented by e, is shown to consist of two parts, systematic or bias error represented by ∆ and random error represented by δ, such that:8

    e = ∆ + δ

∆ is defined as the absolute difference between a population mean, represented by µ (assuming a Gaussian or normal distribution), and the true value τ:

    ∆ = |µ − τ|

δ is defined as the absolute difference between the estimated analytical result for the unknown sample, x₀, and the population mean µ:

    δ = |x₀ − µ|

δ can also be viewed in terms of a multiple z of the population standard deviation σ, σ being calculated from a Gaussian or normal distribution of x values from a population:

    δ = zσ

2.2 ARE THERE LABORATORY-BASED EXAMPLES OF HOW ∆ AND δ ARE USED?

Yes, indeed. Bias, ∆, reflects systematic error in a measurement. Systematic error may be instrumental, operational, or personal. Instrumental errors arise from a variety of sources such as:9

• Poor design or manufacture of instruments
• Faulty calibration of scales
• Wear of mechanical parts or linkages
• Maladjustment
• Deterioration of electrical, electronic, or mechanical parts due to age or location in a harsh environment
• Lack of lubrication or other maintenance

Errors in this category are often the easiest to detect, yet they may present a challenge when attempting to locate them. Use of a certified reference standard might help to reveal just how large the degree of inaccuracy, as expressed by a percent relative error, really is. The percent relative error (%error), i.e., the absolute difference between the mean or average of a small set of replicate analyses, x_ave, and the true or accepted value τ, divided by τ and multiplied by 100, is mathematically stated (and used throughout this book) as follows:

    %error = (|x_ave − τ| / τ) × 100

It is common to see the expression "the manufacturer states that its instrument's accuracy is better than 2% relative error." The analyst should work in the laboratory with a good idea as to what the percent relative error might be in each and every measurement that he or she must make. It is often difficult, if not impossible, to know the true value. This is where certified reference standards, such as those provided by NIST, are valuable. High precision may or may not mean acceptable accuracy.

Operational errors are due to departures from correct procedures or methods. These errors are often time dependent. One example is that of drift in readings from an instrument before the instrument has had time to stabilize. A dependence of instrument response on temperature can be eliminated by waiting until thermal equilibrium has been reached. Another example is the failure to set scales to zero or some other reference point prior to making measurements. Interferences can cause either positive or negative deviations. One example is the deviation from Beer's law at higher concentrations of the analyte being measured. However, in trace analysis, we are generally confronted with analyte concentration levels that tend toward the opposite direction.
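A minimal sketch of the %error expression and the ∆/δ decomposition defined above follows. The replicate results and the certified value are hypothetical, and using the replicate mean as the stand-in for the population mean µ is a simplification made only for illustration.

```python
# Minimal sketch of the error decomposition x0 = tau + Delta + delta and the
# %error expression defined above. The replicate results and the "true" value
# are hypothetical illustration numbers, not data from the chapter.
import statistics

tau = 100.0                                   # true (certified) value
replicates = [98.6, 98.9, 98.4, 98.8, 98.7]   # replicate analyses of the reference standard

x_ave = statistics.mean(replicates)
pct_error = 100.0 * abs(x_ave - tau) / tau    # %error = |x_ave - tau| / tau * 100

mu = x_ave                    # simplification: best available estimate of the population mean
Delta = abs(mu - tau)         # systematic (bias) component
sigma = statistics.stdev(replicates)
z = 2                         # e.g., view delta as roughly 2 standard deviations
delta = z * sigma             # magnitude of the random component at that multiple

print(f"x_ave = {x_ave:.2f}, %error = {pct_error:.2f}%")
print(f"Delta (bias) ~ {Delta:.2f}, delta ~ z*sigma = {delta:.2f}")
```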
Personal errors result from bad habits and erroneous reading and recording of data. Parallax error in reading the height of a liquid in a buret during titrimetric analysis is a classic case in point. One way to uncover personal bias is to have someone else repeat the operation. Occasional random errors by both persons are to be expected, but a discrepancy between observations by two persons indicates bias on the part of one or both.9

Consider the preparation of reference standards using an analytical balance that reads a larger weight than it should. This could be due to a lack of adjustment of the zero within a set of standard masses. What if an analyst, who desires to prepare a solution of a reference standard to the highest degree of accuracy possible, dissolves what he thinks is 100 mg of standard reference (the solute), but which really is only 89 mg, in a suitable solvent using a graduated cylinder and then adjusts the height of the solution to the 10-mL mark? Laboratory practice would suggest that this analyst use a 10-mL volumetric flask; use of a volumetric flask would yield a more accurate measurement of solution volume. Perhaps 10 mL turns out to be really 9.6 mL when a graduated cylinder is used. We now have inaccuracy, i.e., bias, in both mass and volume. Bias has direction, i.e., the true mass is always lower or always higher; bias is usually never lower for one measurement and then higher for the next. The mass of solute dissolved in a given volume of solvent yields a solution whose concentration is found by dividing the mass by the total volume of solution. The percent relative error in the measurement of mass and the percent relative error in the measurement of volume propagate to yield a combined error in the reported concentration that can be much more significant than each alone. Here is where the cliché "the whole is greater than the sum of its parts" has some meaning.
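A quick numeric check of this example shows how the two biases feed the reported concentration. The 89 mg and 9.6 mL figures come from the passage above; treating the nominal 100 mg in 10 mL as the recorded result is this sketch's own framing. Whether the two relative errors reinforce or partially offset one another depends on the direction of each bias.

```python
# Minimal sketch: how bias in mass and bias in volume propagate into the
# reported concentration, using the 89 mg / 9.6 mL example from the text.
nominal_mass_mg = 100.0   # what the analyst believes was weighed
true_mass_mg = 89.0       # what was actually weighed (biased balance)
nominal_vol_mL = 10.0     # what the analyst believes was delivered
true_vol_mL = 9.6         # what the graduated cylinder actually delivered

reported_conc = nominal_mass_mg / nominal_vol_mL    # 10.0 mg/mL as recorded
true_conc = true_mass_mg / true_vol_mL              # ~9.27 mg/mL in reality

err_mass = 100.0 * abs(nominal_mass_mg - true_mass_mg) / true_mass_mg
err_vol = 100.0 * abs(nominal_vol_mL - true_vol_mL) / true_vol_mL
err_conc = 100.0 * abs(reported_conc - true_conc) / true_conc

print(f"%error in mass   = {err_mass:.1f}%")
print(f"%error in volume = {err_vol:.1f}%")
print(f"%error in the reported concentration = {err_conc:.1f}%")
# Both relative errors feed the quotient mass/volume; depending on their
# directions they may compound or, as in this particular pairing, partially offset.
```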
Random error, δ, occurs among replicate measurements without direction. If we were to weigh 100 mg of some chemical substance, such as a reference standard, on the most precise analytical balance available and repeat the weighing of the same mass additional times, while remembering to rezero the balance after each weighing, we might get data such as those shown below:

    Replicate No.    Weight (mg)
    1                 99.98
    2                100.10
    3                100.04
    4                 99.99
    5                100.02

Notice that the third replicate weighing yields a value that is less than the second. Had the values kept increasing through all five measurements, systematic error or bias might be evident.

Another example of the systematic vs. random error distinction, this time using analytical instrumentation, is to make repetitive 1-µL injections of a reference standard solution into a gas chromatograph (GC). A GC with an atomic emission detector (GC-AED) was used by this author to evaluate whether systematic error was evident for triplicate injections of a 20 ppm reference standard containing tetrachloro-m-xylene (TCMX) and decachlorobiphenyl (DCBP) dissolved in the solvent iso-octane. Both analytes are used as surrogates in EPA organochlorine pesticide/polychlorinated biphenyl (PCB)-related methods such as EPA Methods 608 and 8080. The atomic emission from microwave-induced plasma excitation of chlorine atoms, monitored at a wavelength of 837.6 nm, formed the basis for the transduced electrical signal. Both analytes are separated chromatographically (refer to Chapter 4 for an introduction to the principles underlying chromatographic separations) and appear in a chromatogram as distinct peaks, each with an instrument response. The emitted intensity is displayed graphically in terms of a peak whose area beneath the curve is given in units of counts-seconds. These data are shown below:

                    TCMX (counts-seconds)    DCBP (counts-seconds)
    1st injection        48.52                    53.65
    2nd injection        47.48                    52.27
    3rd injection        48.84                    54.46

The drop in peak area between the first and second injections, along with the rise between the second and third injections, suggests that systematic error has been largely eliminated. A few days before these data were generated, a similar set of triplicate injections of a somewhat more dilute solution containing TCMX and DCBP was made into the same GC-AED. The following data were obtained:

                    TCMX (counts-seconds)    DCBP (counts-seconds)
    1st injection        37.83                    41.62
    2nd injection        38.46                    42.09
    3rd injection        37.67                    40.70

The rise in peak area between the first and second injections, followed by the drop between the second and third injections, again suggests that systematic error has been largely eliminated. One of the classic examples of systematic error, and one that is most relevant to TEQA, is to compare the bias and percent relative standard deviations in peak area for five identical injections using a liquid-handling autosampler against manual injections into a graphite furnace atomic absorption spectrophotometer using a common 10-µL glass liquid-handling syringe. It is almost impossible for even the most skilled analyst to achieve the degree of reproducibility afforded by most automated sample delivery devices.

Good laboratory practice suggests that it should behoove the analyst to eliminate any bias, ∆, so that the population mean equals the true value. Mathematically stated:

    ∆ = 0 = µ − τ   ∴   µ = τ

Eliminating ∆ in the practice of TEQA enables one to consider only random errors. Mathematically stated:

    δ = x₀ − µ

Random error alone becomes responsible for the absolute difference between the reported estimate x₀ and the statistically obtained population mean. Random error can never be completely eliminated. Referring again to Figure 2.1, let us proceed in this chapter to take a more detailed look at those factors that transform y to x₀. We focus on those factors that transform τ to y in Chapters 3 and 4.
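As a small illustration of the reasoning applied to the replicate data above, the sketch below flags a monotonically rising or falling series as a hint of drift, i.e., systematic error, and otherwise treats the scatter as random. The monotonic-trend check is a simple heuristic of this sketch, not a statistical test prescribed in the chapter.

```python
# Minimal sketch: distinguishing drift (systematic error) from random scatter
# in replicate data. The monotonic-trend check is an illustrative heuristic.
import statistics

def looks_like_drift(values):
    """Return True if the replicates rise or fall monotonically."""
    rising = all(b > a for a, b in zip(values, values[1:]))
    falling = all(b < a for a, b in zip(values, values[1:]))
    return rising or falling

weighings = [99.98, 100.10, 100.04, 99.99, 100.02]   # replicate weighings (mg), from the text
tcmx = [48.52, 47.48, 48.84]                         # triplicate TCMX peak areas, from the text

for name, data in (("weighings", weighings), ("TCMX injections", tcmx)):
    mean = statistics.mean(data)
    rsd = 100.0 * statistics.stdev(data) / mean
    verdict = "possible drift (systematic)" if looks_like_drift(data) else "random scatter"
    print(f"{name}: mean = {mean:.2f}, %RSD = {rsd:.2f}, {verdict}")
```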
3. HOW IMPORTANT IS INSTRUMENT CALIBRATION AND VERIFICATION?

It is very important, and it is the most important task for the analyst who is responsible for the operation and maintenance of analytical instrumentation. Calibration is followed by a verification process in which specifications can be established and the analyst can evaluate whether the calibration is verified or refuted. A calibration that has been verified can be used in acquiring data from samples for quantitative analysis. A calibration that has been refuted must be repeated until verification is achieved. For example, suppose that after establishing a multipoint calibration for benzene via a gas chromatographic determinative method, an analyst measures the concentration of benzene in a certified reference standard. The analyst expects no greater than a 5% relative error and discovers, to his surprise, a 200% relative error. In this case, the analyst must reconstruct the calibration and measure the certified reference standard again. Close attention must be paid to those sources of systematic error in the laboratory that would cause the relative error to greatly exceed the minimally acceptable relative error criteria previously developed for this method.

An analyst who expects to implement TEQA and begins to use any one of the various chromatography data acquisition and processing software packages available in the marketplace today is immediately confronted with several calibration modes. Most software packages will contain most of the modes of instrument calibration that appear in Table 2.1. For each calibration mode, the general advantages as well as the overall limitations are given. Area percent and normalization percent (norm%) are not suitable for quantitative analysis at the trace concentration level. This is due to the fact that a concentration of 10,000 ppm is only 1% (parts per hundred), so a 10 ppb concentration of, for example, benzene in drinking water is only 0.000001% benzene in water. Weight% and mole% are subsets of norm% and require response factors for each analyte in units of peak area or peak height per gram or per mole, respectively. Table 2.2 relates each calibration mode to its corresponding quantification equation. Quantification follows calibration and thus achieves the ultimate goal of TEQA, i.e., to perform a quantitative analysis of a sample of environmental or environmental health interest in order to determine the concentration of each targeted chemical analyte of interest at a trace concentration level. Table 2.1 and Table 2.2 are useful as reference guides.

We now proceed to focus on the most suitable calibration modes for TEQA. Referring again to Table 2.1, these calibration modes include external standard (ES), internal standard (IS), including its more specialized isotope dilution mass spectrometry (IDMS) calibration mode, and standard addition (SA). Each mode will be discussed in sufficient detail to enable the reader to acquire a fundamental understanding.
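The verification logic in the benzene example above can be expressed as a short acceptance check. In the sketch below, the certified value and the triplicate ICV results are hypothetical; only the 5% relative error criterion echoes the example.

```python
# Minimal sketch of initial calibration verification (ICV) acceptance logic,
# following the benzene example above: measure a certified reference standard
# against the new calibration and compare %error to the lab's criterion.
# The certified value and measured ICV results are hypothetical illustration values.
certified_value = 20.0            # certified benzene concentration (ppb)
icv_results = [19.4, 20.6, 19.9]  # triplicate ICV measurements from the new calibration
criterion_pct = 5.0               # maximum acceptable % relative error (per the example)

mean_icv = sum(icv_results) / len(icv_results)
pct_error = 100.0 * abs(mean_icv - certified_value) / certified_value

if pct_error <= criterion_pct:
    print(f"%error = {pct_error:.1f}% -> calibration verified")
else:
    print(f"%error = {pct_error:.1f}% -> calibration refuted; recalibrate and re-verify")
```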
TABLE 2.1 Advantages and Limitations of the Various Modes of Instrument Calibration Used in TEQA

Area%
  Advantages: No standards needed; provides for a preliminary evaluation of sample composition; injection volume precision not critical.
  Limitations: Needs a nearly equal instrument response for all analytes so that peak heights/areas are all uniform; all peaks must be included in the calculation; not suitable for TEQA.

Norm%
  Advantages: Injection volume precision not critical; accounts for all instrument responses for all peaks.
  Limitations: All peaks must be included; calibration standards required; all peaks must be calibrated; not suitable for TEQA.

ES
  Advantages: Addresses wide variation in GC detector response; more accurate than area% or norm%; not all peaks in a chromatogram of a given sample need to be quantitated; compensates for recovery losses if standards are taken through sample prep in addition to samples; no standard has to be added to the sample extract for calibration purposes; ideally suited to TEQA.
  Limitations: Injection volume precision is critical; instrument reproducibility over time is critical; no means to compensate for a change in detector sensitivity during a batch run; needs a uniform matrix, whereby standards and samples should have similar matrices.

IS
  Advantages: Injection volume precision not critical; instrument reproducibility over time not critical; compensates for any variation in detector sensitivity during a batch run; ideally suited to TEQA.
  Limitations: Need to identify a suitable analyte to serve as an IS; bias is introduced if the IS is not added to the sample very carefully; does not compensate for percent recovery losses during sample preparation, since the IS is usually added after both extraction and cleanup are performed.

IDMS
  Advantages: Same as for IS; injection volume precision not critical; instrument reproducibility over time not critical; compensates for analyte percent recovery losses during sample preparation, since isotopes are added prior to extraction and cleanup; eliminates variations in analyte vs. internal standard recoveries; ideally suited to TEQA.
  Limitations: Need to obtain a suitable isotopically labeled analog of each target analyte; isotopically labeled analogs are very expensive; bias is introduced if the labeled isotope is not added to the sample very carefully; needs a mass spectrometer to implement; mass spectrometers are expensive in comparison to element-selective GC detectors or non-MS LC detectors.

SA
  Advantages: Useful when matrix interference cannot be eliminated; applicable where an analyte-free matrix cannot be obtained; commonly used to measure trace metals in "dirty" environmental samples.
  Limitations: Need two aliquots of the same sample to make one measurement; too tedious and time consuming for multiorganics quantitative analysis.

Source: Modified and adapted from Agilent Technologies GC-AED Theory and Practice, Training Course from Diablo Analytical, Inc., 2001.

[...]

    b = (1/N) (Σᵢ yᵢ − m Σᵢ xᵢ)                                  (2.22)

Rearranging Equation (2.21) for m,

    m = (Σᵢ xᵢyᵢ − b Σᵢ xᵢ) / Σᵢ xᵢ²                             (2.23)

Next, substitute for b from Equation (2.22) into Equation (2.23):

    m = [Σᵢ xᵢyᵢ − (1/N)(Σᵢ yᵢ − m Σᵢ xᵢ) Σᵢ xᵢ] / Σᵢ xᵢ²

Upon simplifying, we obtain

    m = [N Σᵢ xᵢyᵢ − (Σᵢ xᵢ)(Σᵢ yᵢ)] / [N Σᵢ xᵢ² − (Σᵢ xᵢ)²]     (2.24)
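The closed-form expressions in this excerpted derivation, Equation (2.22) for b and Equation (2.24) for m, translate directly into code. The calibration points below are invented for illustration.

```python
# Minimal sketch: least squares slope and intercept computed from the
# closed-form expressions in the excerpt (Eq. 2.24 for m; Eq. 2.22 for b).
# The calibration points are hypothetical illustration values.
x = [0.0, 1.0, 2.0, 5.0, 10.0, 20.0]                    # standard concentrations (ppm)
y = [2.1, 151.9, 305.0, 748.2, 1502.7, 3001.4]          # instrument responses

N = len(x)
sum_x = sum(x)
sum_y = sum(y)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))
sum_x2 = sum(xi * xi for xi in x)

m = (N * sum_xy - sum_x * sum_y) / (N * sum_x2 - sum_x ** 2)   # Eq. (2.24)
b = (sum_y - m * sum_x) / N                                    # Eq. (2.22)

print(f"slope m = {m:.3f}, intercept b = {b:.3f}")
```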
[...]

Hence, substituting Equations (2.37) and (2.38) into Equation (2.36) gives

    V_y = σ² + σ²/N + σ²(x − x̄)² / Σᵢ(xᵢ − x̄)²                  (2.39)

Factoring out σ² gives

    V_y = σ² [1 + 1/N + (x − x̄)² / Σᵢ(xᵢ − x̄)²]                 (2.40)

The residual variance σ² may be replaced by its estimate s², and upon substituting Equation (2.40) into Equation (2.35), it gives

    y = [ȳ + m(x − x̄)] ± t s √(1 + 1/N + (x − x̄)² / Σᵢ(xᵢ − x̄)²)

[...]

    Σᵢ xᵢ = N x̄                                                  (2.25)
    Σᵢ yᵢ = N ȳ                                                  (2.26)

Upon substituting Equations (2.25) and (2.26) into Equation (2.24), we arrive at an expression for the least squares slope m in terms of only measurable data points:

    m = (Σᵢ xᵢyᵢ − N x̄ ȳ) / (Σᵢ xᵢ² − N x̄²)                     (2.27)

Defining the sum of the squares of the deviations in x and y calculated from all N pairs of calibration points gives

    SSxx = Σᵢ(xᵢ − x̄)²,   SSyy = Σᵢ(yᵢ − ȳ)²   [...]            (2.28)

and the y intercept can be obtained by knowing only the slope m and the mean value of all of the x data and the mean value of all of the y data according to

    b = ȳ − m x̄                                                  (2.29)

Equations (2.28) and (2.29) enable the best-fit calibration line to be drawn through the experimental x, y points. Once the slope m and [...]

[...]

[FIGURE 2.2 Calibration for Aroclor 1242 using an external standard: sum of peak areas vs. ppm AR 1242 (total).]

[...] the ith analyte in the reference standard, ∆Cᵢ,S, as ∆Cᵢ,S approaches zero. Quantitative analysis is then carried out by relating the instrument response to the analyte concentration in an unknown sample according to

    Aᵢ = RFᵢ · Cᵢ,unknown                                        (2.2)

Equation (2.2) is then solved for the [...]

[...] method 625. Internal standards are extracted from samples; calibration is established in appropriate solvent; e.g., EPA Method 525.2. Internal standards are extracted from standards and samples; calibration is established from extracted standard solutions; e.g., EPA Method 524.2. Isotope dilution GC-MS (organics): isotopically labeled priority pollutants are used; e.g., EPA Methods 1613, 8280B, and 8290C (dioxins, [...]

[...]

[FIGURE 2.5 Experimental vs. calculated ith data point for a typical ES calibration (peak area vs. ppm N,N-DM-2AE) showing a linear LS fit, where yᶜ is found according to yᵢᶜ = mxᵢ + b, with m being the slope of the best-fit straight line through the data points and b being the y intercept.]

[...] regression slope and intercept, the sum of the residuals over all N calibration points, defined as Q, is first considered:

    Q = Σᵢ (yᵢᵉ − yᵢᶜ)²

    Q = Σᵢ [yᵢᵉ − (mxᵢ + b)]²

The total residual is now minimized with respect to both the slope m and the intercept b:

    ∂Q/∂b = 0 = −2 Σᵢ [yᵢᵉ − (mxᵢ + b)]                          (2.20)

    ∂Q/∂m = 0 = −2 Σᵢ xᵢ [yᵢᵉ − (mxᵢ + b)]                       (2.21)

Rearranging Equation (2.20) for b, b = [...]

[...]

[FIGURE 2.3 Calibration for CFME using 2,2′,4,6,6′-PCBP as IS: peak area ratio (CFME/2,2′,4,6,6′-PCBP) vs. ppm clofibric acid methyl ester (CFME).]

This plot demonstrates adequate linearity over the range of CF methyl ester concentrations shown. Any instability of the GC-MS instrument during the injection of these calibration standards is not reflected in the calibration. Therein lies the value and [...]

[...] C_unknown, in the unknown environmental sample. Refer to the quantification equation for ES in Table 2.2.
Figure 2.2 graphically illustrates the ES approach to multipoint instrument calibration. Six reference standards, each containing Aroclor 1242 (AR 1242), were injected into a gas chromatograph that incorporates a capillary column appropriate to the separation and an electron-capture detector. This instrumental [...]
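Pulling the excerpted pieces together, the sketch below fits an external standard calibration line using the mean-centered slope and intercept expressions (Eqs. 2.27 and 2.29) and then inverts it to estimate the concentration of an unknown, in the spirit of the Aroclor 1242 example. The six standard responses and the unknown's peak area are invented illustration values.

```python
# Minimal sketch of external standard (ES) quantification: fit the calibration
# line from standards (Eqs. 2.27 and 2.29), then solve for the concentration of
# an unknown from its measured response. All numbers are hypothetical.
conc = [1.0, 2.0, 5.0, 10.0, 15.0, 20.0]               # ppm AR 1242 (six standards)
area = [17500, 34100, 86800, 170300, 258900, 342700]   # summed peak areas

N = len(conc)
x_bar = sum(conc) / N
y_bar = sum(area) / N

m = ((sum(c * a for c, a in zip(conc, area)) - N * x_bar * y_bar)
     / (sum(c * c for c in conc) - N * x_bar ** 2))    # Eq. (2.27)
b = y_bar - m * x_bar                                  # Eq. (2.29)

unknown_area = 120000.0                                # response of the unknown extract
c_unknown = (unknown_area - b) / m                     # interpolate back to concentration

print(f"m = {m:.1f}, b = {b:.1f}, C_unknown = {c_unknown:.2f} ppm")
```

Interpolating only within the calibrated concentration range keeps the estimate inside the region where linearity was actually demonstrated.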



Contents

• Chapter 2: Calibration, Verification, Statistical Treatment of Analytical Data, Detection Limits, and Quality Assurance/Quality Control
  • CHAPTER AT A GLANCE
  • 1. WHAT IS GOOD LABORATORY PRACTICE?
  • 2. CAN DATA REDUCTION, INTERPRETATION, AND STATISTICAL TREATMENT BE SUMMARIZED BEFORE WE PLUNGE INTO CALIBRATION?
    • 2.1 HOW IS MEASUREMENT ERROR DEFINED?
    • 2.2 ARE THERE LABORATORY-BASED EXAMPLES OF HOW ∆ AND δ ARE USED?
  • 3. HOW IMPORTANT IS INSTRUMENT CALIBRATION AND VERIFICATION?
    • 3.1 HOW DOES THE EXTERNAL MODE OF INSTRUMENT CALIBRATION WORK?
    • 3.2 HOW DOES THE IS MODE OF INSTRUMENT CALIBRATION WORK AND WHY IS IT INCREASINGLY IMPORTANT TO TEQA?
      • 3.2.1 What Is Isotope Dilution?
      • 3.2.2 Can a Fundamental Quantification Equation Be Derived from Simple Principles?
      • 3.2.3 What Is Organics IDMS?
    • 3.3 HOW DOES THE SA MODE OF INSTRUMENT CALIBRATION WORK?
      • 3.3.1 Can We Derive a Quantification Equation for SA?
  • 4. WHAT DOES LEAST SQUARES REGRESSION REALLY MEAN?
    • 4.1 HOW DO YOU DERIVE THE LEAST SQUARES REGRESSION EQUATIONS?
    • 4.2 TO WHAT EXTENT ARE WE CONFIDENT IN THE ANALYTICAL RESULTS?
    • 4.3 HOW CONFIDENT ARE WE OF AN INTERPOLATED RESULT?
  • 5. HOW DO YOU DERIVE EQUATIONS TO FIND INSTRUMENT DETECTION LIMITS?
    • 5.1 CAN WE DERIVE EQUATIONS FOR CONFIDENCE INTERVALS ABOUT THE REGRESSION?
    • 5.2 WHAT IS WEIGHTED LEAST SQUARES AND HOW DOES THIS INFLUENCE IDLS?
    • 5.3 IS THERE A DIFFERENCE IN SIMPLIFIED VS. CONTEMPORARY IDLS?
    • 5.4 HOW DO MDLS DIFFER FROM IDLS?
    • 5.5 HOW DO I OBTAIN MDLS FOR MY ANALYTICAL METHOD?
  • 6. WHY SO MANY REPLICATE MEASUREMENTS?
  • 7. HOW DO I FIND THE LIMIT OF QUANTITATION?
    • 7.1 IS THERE A WAY TO COUPLE THE BLANK AND CALIBRATION APPROACHES TO FIND XLOQ?
