Engineering Statistics Handbook Episode 5 Part 6

Contents

4. Process Modeling
4.1. Introduction to Process Modeling
4.1.3. What are process models used for?
4.1.3.2. Prediction

More on Prediction

As mentioned earlier, the goal of prediction is to determine future value(s) of the response variable that are associated with a specific combination of predictor variable values. As in estimation, the predicted values are computed by plugging the value(s) of the predictor variable(s) into the regression equation, after estimating the unknown parameters from the data. The difference between estimation and prediction arises only in the computation of the uncertainties. These differences are illustrated below using the Pressure/Temperature example in parallel with the example illustrating estimation.

Example

Suppose in this case the predictor variable value of interest is a temperature of 47 degrees. Computing the predicted value using the equation yields a predicted pressure of 192.4655.

Of course, if the pressure/temperature experiment were repeated, the estimates of the parameters of the regression function obtained from the data would differ slightly each time because of the randomness in the data and the need to sample a limited amount of data. Different parameter estimates would, in turn, yield different predicted values. The plot below illustrates the type of slight variation that could occur in a repeated experiment.

[Plot: Predicted Value from a Repeated Experiment]

Prediction Uncertainty

A critical part of prediction is an assessment of how much a predicted value will fluctuate due to the noise in the data. Without that information there is no basis for comparing a predicted value to a target value or to another prediction. As a result, any method used for prediction should include an assessment of the uncertainty in the predicted value(s). Fortunately it is often the case that the data used to fit the model to a process can also be used to compute the uncertainty of predictions from the model. In the pressure/temperature example a prediction interval for the value of the regression function at 47 degrees can be computed from the data used to fit the model. The plot below shows a 99% prediction interval produced using the original data. This interval gives the range of plausible values for a single future pressure measurement observed at a temperature of 47 degrees based on the parameter estimates and the noise in the data.

[Plot: 99% Prediction Interval for Pressure at T=47]

Length of Prediction Intervals

Because the prediction interval is an interval for the value of a single new measurement from the process, the uncertainty includes the noise that is inherent in the estimates of the regression parameters and the uncertainty of the new measurement. This means that the interval for a new measurement will be wider than the confidence interval for the value of the regression function. These intervals are called prediction intervals rather than confidence intervals because the latter are for parameters, and a new measurement is a random variable, not a parameter.
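The handbook produces this interval with its own software and data set, which are not reproduced in this extract. As a rough, self-contained sketch of the same calculation, the Python snippet below fits a straight line to made-up pressure/temperature values and evaluates the standard 99% prediction-interval formula at 47 degrees; because the data are invented, the printed numbers will not exactly match 192.4655.

import numpy as np
from scipy import stats

# Hypothetical pressure/temperature data (stand-ins, not the handbook's values).
temp = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0, 50.0, 55.0, 60.0])
pres = np.array([84.2, 104.3, 124.1, 143.9, 164.4, 183.8, 204.1, 224.3, 243.9])

n = temp.size
b1, b0 = np.polyfit(temp, pres, 1)          # straight-line fit: pressure = b0 + b1*temperature
resid = pres - (b0 + b1 * temp)
s = np.sqrt(np.sum(resid**2) / (n - 2))     # residual standard deviation

x0 = 47.0                                   # temperature of interest
yhat = b0 + b1 * x0                         # predicted pressure at 47 degrees

# 99% prediction interval for a single future measurement at x0: the half-width
# includes both the parameter uncertainty and the noise in a new observation.
Sxx = np.sum((temp - temp.mean())**2)
half = stats.t.ppf(0.995, df=n - 2) * s * np.sqrt(1 + 1/n + (x0 - temp.mean())**2 / Sxx)
print(f"predicted pressure: {yhat:.4f}")
print(f"99% prediction interval: ({yhat - half:.4f}, {yhat + half:.4f})")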
Tolerance Intervals

Like a prediction interval, a tolerance interval brackets the plausible values of new measurements from the process being modeled. However, instead of bracketing the value of a single measurement or a fixed number of measurements, a tolerance interval brackets a specified percentage of all future measurements for a given set of predictor variable values. For example, to monitor future pressure measurements at 47 degrees for extreme values, either low or high, a tolerance interval that brackets 98% of all future measurements with high confidence could be used. If a future value fell outside of the interval, the system would then be checked to ensure that everything was working correctly. A 99% tolerance interval that captures 98% of all future pressure measurements at a temperature of 47 degrees is 192.4655 +/- 14.5810. This interval is wider than the prediction interval for a single measurement because it is designed to capture a larger proportion of all future measurements. The explanation of tolerance intervals is potentially confusing because there are two percentages used in the description of the interval. One, in this case 99%, describes how confident we are that the interval will capture the quantity that we want it to capture. The other, 98%, describes what the target quantity is, which in this case is 98% of all future measurements at T=47 degrees.

More Info

For more information on the interpretation and computation of prediction and tolerance intervals, see Section 5.1.
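The 99%/98% interval quoted above comes from a regression-based tolerance calculation that is covered in Section 5.1 and not reproduced here. To make the two percentages concrete, the sketch below instead applies the ordinary normal-distribution tolerance factor (Howe's approximation, described in Chapter 7 of the handbook) to a made-up sample of repeated pressure measurements at a single temperature; the 99% enters as the confidence level and the 98% as the proportion of future measurements the interval is intended to capture.

import numpy as np
from scipy import stats

# Hypothetical repeated pressure measurements at one fixed temperature (made-up values).
y = np.array([191.8, 193.1, 192.0, 194.2, 190.9, 192.7, 193.5, 191.2, 192.9, 192.4])
n = y.size
ybar, s = y.mean(), y.std(ddof=1)

coverage = 0.98      # proportion of all future measurements to capture
confidence = 0.99    # confidence that the interval really captures that proportion

# Howe's approximation to the two-sided normal tolerance factor k:
# ybar +/- k*s should contain `coverage` of the population with `confidence` confidence.
nu = n - 1
z = stats.norm.ppf((1 + coverage) / 2)
chi2_low = stats.chi2.ppf(1 - confidence, df=nu)   # chi-square value exceeded with prob. 0.99
k = np.sqrt(nu * (1 + 1/n) * z**2 / chi2_low)

print(f"99%/98% tolerance interval: {ybar:.4f} +/- {k * s:.4f}")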
4.1.3.3. Calibration

Thermocouple Calibration

Just as in estimation or prediction, if the calibration experiment were repeated, the results would vary slightly due to the randomness in the data and the need to sample a limited amount of data from the process. This means that an uncertainty statement that quantifies how much the results of a particular calibration could vary due to randomness is necessary. The plot below shows what would happen if the thermocouple calibration were repeated under conditions identical to the first experiment.

[Plot: Calibration Result from Repeated Experiment]

Calibration Uncertainty

Again, as with prediction, the data used to fit the process model can also be used to determine the uncertainty in the calibration. Both the variation in the estimated model parameters and in the new voltage observation need to be accounted for. This is similar to the uncertainty for the prediction of a new measurement. In fact, calibration intervals are computed by solving for the predictor variable value in the formulas for the prediction interval endpoints. The plot below shows a 99% calibration interval for the original calibration data used in the first plot on this page. The area of interest in the plot has been magnified so the endpoints of the interval can be visually differentiated. The calibration interval is 387.3748 +/- 0.307 degrees Celsius.

In almost all calibration applications the ultimate quantity of interest is the true value of the primary-scale measurement method associated with a measurement made on the secondary scale. As a result, there are no analogs of the prediction interval or tolerance interval in calibration.

More Info

More information on the construction and interpretation of calibration intervals can be found in Section 5.2 of this chapter. There is also more information on calibration, especially "one-point" calibrations and other special cases, in Section 3 of Chapter 2: Measurement Process Characterization.
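The handbook's calibration interval comes from its own thermocouple data and model. The sketch below only illustrates the stated recipe, solving for the predictor value at the prediction-interval endpoints, using a made-up straight-line calibration of voltage against temperature; both the data and the simple linear model are assumptions for illustration, so the resulting interval will not match 387.3748 +/- 0.307.

import numpy as np
from scipy import stats, optimize

# Hypothetical thermocouple calibration data: known temperatures (deg C) and
# measured voltages (mV). Stand-in values, not the handbook's data set.
temp = np.array([100.0, 150.0, 200.0, 250.0, 300.0, 350.0, 400.0, 450.0, 500.0])
volt = np.array([4.10, 6.15, 8.14, 10.17, 12.21, 14.28, 16.33, 18.35, 20.42])

n = temp.size
b1, b0 = np.polyfit(temp, volt, 1)               # calibration line: voltage = b0 + b1*temperature
s = np.sqrt(np.sum((volt - (b0 + b1*temp))**2) / (n - 2))
Sxx = np.sum((temp - temp.mean())**2)
tcrit = stats.t.ppf(0.995, df=n - 2)             # for 99% intervals

def pred_limit(t, sign):
    # Upper (sign=+1) or lower (sign=-1) 99% prediction limit for voltage at temperature t.
    half = tcrit * s * np.sqrt(1 + 1/n + (t - temp.mean())**2 / Sxx)
    return b0 + b1*t + sign*half

v_new = 12.50                                    # newly observed voltage from the process
t_hat = (v_new - b0) / b1                        # point estimate of the temperature

# Calibration interval: the temperatures at which the prediction limits pass
# through the observed voltage (inverting the prediction-interval formulas).
t_low = optimize.brentq(lambda t: pred_limit(t, +1) - v_new, temp.min() - 100, t_hat)
t_high = optimize.brentq(lambda t: pred_limit(t, -1) - v_new, t_hat, temp.max() + 100)
print(f"estimated temperature: {t_hat:.3f} deg C")
print(f"99% calibration interval: ({t_low:.3f}, {t_high:.3f}) deg C")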
4.1.3.4. Optimization

As with prediction and calibration, randomness in the data and the need to sample data from the process affect the results. If the optimization experiment were carried out again under identical conditions, the optimal input values computed using the model would be slightly different. Thus, it is important to understand how much random variability there is in the results in order to interpret the results correctly.

[Plot: Optimization Result from Repeated Experiment]

[...] is a 95% joint confidence region for the two process inputs. In this region the throughput of the process will be approximately 217 units/hour.

[Plot: Contour Plot, Estimated Optimum & Confidence Region]

More Info

Computational details for optimization are primarily presented in Chapter 5: Process [...] designs for optimization. Section 5.5.3 specifically focuses on optimization methods and their associated uncertainties. (A small numerical sketch of locating an estimated optimum from a fitted surface appears at the end of this extract.)

4.1.4. What are some of the different statistical methods for model building?

4.1.4.1. Linear Least Squares Regression

[...] process modeling because of its effectiveness and completeness. Though there are types of data that are better described by functions that are nonlinear in the parameters, many processes in science and engineering are well-described by linear models. This is because either the processes are inherently linear or because, over short ranges, any process can be well-approximated by a linear model. The estimates [...]

[...] (1983)] [Stigler (1986)] working in Germany, France and America, respectively. In the least squares method the unknown parameters are estimated by minimizing the sum of the squared deviations between the data and the model. The minimization process reduces the overdetermined system of equations formed by the data to a sensible system of p equations (where p is the number of parameters in the functional part of the model) [...]

[...] also linear in the statistical sense because they are linear in the parameters, though not with respect to the observed explanatory variable [...]

Nonlinear Model Example

Just as models that are linear in the statistical sense do not have to be linear with respect to the [...]

[...] clear answers to scientific and engineering questions.

Disadvantages of Linear Least Squares

The main disadvantages of linear least squares are limitations in the shapes that linear models can assume over long ranges, possibly poor extrapolation properties, and sensitivity to outliers. [...] always more extreme. This means that linear models may not be effective for extrapolating the results of a process for which data cannot be collected in the region of interest. Of course extrapolation is potentially dangerous regardless of the model type. Finally, while the method of least squares often gives optimal estimates of the unknown parameters, it is very sensitive to the presence of unusual data [...]

[...] validation, especially with respect to outliers, critical to obtaining sound answers to the questions motivating the construction of the model.
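Since the least squares discussion above survives only in fragments in this extract, a compact worked sketch may help. The snippet below builds a model that is nonlinear in the explanatory variable but still linear in the parameters (a quadratic in x), estimates the parameters with numpy's least squares solver, and then shows why extrapolation outside the data range deserves caution. The data are simulated purely for illustration.

import numpy as np

# Simulated data: curved in x, but the model y = b0 + b1*x + b2*x**2 is still
# linear in the parameters b0, b1, b2, so linear least squares applies.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 25)
y = 2.0 + 1.5*x - 0.12*x**2 + rng.normal(scale=0.3, size=x.size)

# Least squares reduces the overdetermined system X b ~= y (25 equations,
# 3 unknowns) to the p = 3 normal equations (X^T X) b = X^T y.
X = np.column_stack([np.ones_like(x), x, x**2])   # design matrix
b, ssr, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print("estimated parameters:", np.round(b, 4))

# Extrapolation caution: the fitted parabola keeps bending outside 0..10, so a
# prediction at x = 20 can be far off even if the fit looks excellent inside
# the range where data were actually collected.
print("extrapolated prediction at x = 20:", round(float(np.array([1.0, 20.0, 400.0]) @ b), 3))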

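Returning to the optimization discussion above: the handbook locates an estimated optimum operating point on a fitted response surface and pairs it with a joint confidence region. The sketch below covers only the first step, maximizing a fitted quadratic throughput surface in two coded inputs; the surface coefficients are invented for illustration (they do not reproduce the 217 units/hour figure), and the confidence-region calculation is left to Section 5.5.3.

import numpy as np
from scipy import optimize

# Hypothetical fitted quadratic response surface for throughput (units/hour)
# as a function of two coded process inputs x1 and x2. In practice these
# coefficients would come from a least squares fit to designed-experiment data.
def throughput(x):
    x1, x2 = x
    return 210.0 + 4.0*x1 + 6.0*x2 - 3.0*x1**2 - 5.0*x2**2 - 2.0*x1*x2

# Locate the estimated optimum by maximizing the fitted surface
# (i.e., minimizing its negative).
res = optimize.minimize(lambda x: -throughput(x), x0=np.array([0.0, 0.0]))
print("estimated optimal inputs:", np.round(res.x, 3))
print("estimated maximum throughput:", round(float(throughput(res.x)), 1))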