Modeling Hydrologic Change: Statistical Methods - Chapter 11


Hydrologic Simulation

11.1 INTRODUCTION

Evaluating the effects of future watershed change is even more difficult than evaluating the effects of change that has already taken place. In the latter case, some data, albeit nonstationary, are available. Where data are available at the site, some measure of the effect of change is possible, with the accuracy of the estimated effect dependent on the quantity and quality of the data. Where on-site data are not available, which is obviously true where watershed change has not yet occurred, it is sometimes necessary to model changes that have taken place on similar watersheds and then project, a priori, the effects of the proposed change onto the watershed of interest. Modeling for the purpose of making an a priori evaluation of a proposed watershed change is a simulation task.

Simulation is a category of modeling that has been widely used for decades. One of the most notable early uses of simulation involving hydrology was the Harvard water program (Maass et al., 1962; Hufschmidt and Fiering, 1966). Continuous streamflow models, such as the Stanford watershed model (Crawford and Linsley, 1964) and its numerous offspring, are widely used in the simulation mode. Simulation has also been used effectively in flood frequency studies (e.g., Wallis, Matalas, and Slack, 1974).

Because of recent advances in the speed and capacity of computers, simulation has become a more practical tool for use in spatial and temporal hydrologic modeling. The effects of spatial changes to a watershed on the hydrologic processes throughout the watershed can be simulated. Similarly, temporal effects of watershed changes can be assessed via simulation. For example, gradual urbanization will change the frequency characteristics of a peak-discharge series, and simulation can be used to project the possible effects on, for example, the 100-year flood. It is also possible to develop confidence limits on the simulated flood estimates, which can be useful in making decisions.

While simulation is a powerful tool, it is important to keep in mind that simulated data are not real. They are projected values obtained from a model, and their accuracy largely depends on the quality of the model, including both its formulation and its calibration. An expertly developed simulation program cannot overcome the inaccuracy introduced by a poorly conceived model; however, with a rational model, simulation can significantly improve decision making.

11.1.1 DEFINITIONS

Before defining what is meant by hydrologic simulation, it is necessary to provide a few definitions. First, a system is defined herein as a set of processes or components that are interdependent. This could be a natural system such as a watershed, a geographic system such as a road network, or a structural system such as a high-rise building. Because an accident on one road can lead to traffic congestion on a nearby road, the roads are interdependent.

Second, a model is defined herein as a representation of a real system. The model can be either a physical model, such as those used in laboratories, or a mathematical model. Models can be developed from either theoretical laws or empirical analyses. The model includes components that reflect the processes that govern the functioning of the system and provides for interaction between the components.
Third, an experiment is defined for our purposes here as the process of observing the system or the model. Where possible, it is usually preferable to observe the real system; however, the lack of control may make this option impossible or unrealistic. Thus, simulation enables experiments on a model to replace experiments on the real system when the real system cannot be controlled.

Given these three definitions, a preliminary definition can now be provided for simulation. Specifically, simulation is the process of conducting experiments on a model when we cannot experiment directly on the system. The uncertainty or randomness inherent in the model elements is incorporated into the model, and the experiments are designed to account for this uncertainty. The term simulation run, or cycle, is defined as an execution of the model through all operations for a length of simulated time. Some additional terms that need to be defined are as follows:

1. A model parameter is a value that is held constant over a simulation run but can be changed from run to run.
2. A variable is a model element whose value can vary during a simulation run.
3. Input variables require values to be input prior to the simulation run.
4. Output variables reflect the end state of the system and can consist of a single value or a vector of values.
5. Initial conditions are values of model variables and parameters that establish the initial state of the model at the beginning of a simulation run.

11.1.2 BENEFITS OF SIMULATION

Simulation is widely used in everyday life; flight simulators in the space and aircraft industries are a familiar example. Activities at the leading amusement parks simulate exciting space travel. Even video games use simulation to mimic life-threatening activities.

Simulation is also widely used in engineering decision making. It is a popular modeling tool because it enables a representation of the system to be manipulated when manipulating the real system is impossible or too costly. Simulation allows the time or space framework of the problem to be changed to a more convenient framework; that is, the length of time or the spatial extent of the system can be expanded or compressed. Simulation enables the representation of the system to be changed in order to better understand the real system; of course, this requires the model to be a realistic representation of the system. Simulation also enables the analyst to control any or all model parameters, variables, or initial conditions, which is not possible for conditions that have not occurred in the past.

While simulation is extremely useful, it is not without problems. First, it is quite possible to develop several different, but realistic, models of the same system, and the different models could lead to different decisions. Second, the data used to calibrate the model may be limited, so extrapolations beyond the range of the measured data may be especially inaccurate. Sensitivity analyses are often used to assess how a decision based on simulation might change if other data had been used to calibrate the model.

11.1.3 MONTE CARLO SIMULATION

Interest in simulation methods started in the early 1940s for the purpose of developing inexpensive techniques for testing engineering systems by imitating their real-world behavior. These methods are commonly called Monte Carlo simulation techniques.
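The terminology defined above can be made concrete with a minimal Python sketch of a Monte Carlo simulation run. The rainfall-runoff relation and all of its coefficients below are invented purely for illustration; they are not from this chapter.

```python
import random
import statistics

# Model parameters: held constant over a run, changeable between runs.
A, B = 0.6, -5.0           # hypothetical runoff coefficients, illustration only
MEAN_P, SD_P = 40.0, 8.0   # assumed moments of the rainfall input variable

def one_cycle(rng):
    """One simulation cycle: draw a random input, return the model output."""
    p = rng.gauss(MEAN_P, SD_P)   # input variable drawn from its distribution
    return A * p + B              # output variable

def simulation_run(n_cycles, seed=1):
    """A simulation run: n_cycles executions of the model from a fixed seed."""
    rng = random.Random(seed)     # the seed sets the initial conditions
    return [one_cycle(rng) for _ in range(n_cycles)]

outputs = simulation_run(10_000)
# The sample moments approach A*MEAN_P + B and A*SD_P as n_cycles grows.
print(statistics.mean(outputs), statistics.stdev(outputs))
```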
The principle behind the methods is to develop an analytical model, which is usually computer based, that predicts the behavior of a system. The parameters of the model are calibrated using data measured from the system. The model can then be used to predict the response of the system for a variety of conditions. Next, the analytical model is modified by incorporating stochastic components into its structure. Each input parameter is assumed to follow a probability distribution, and the computed output depends on the values generated from the respective probability distributions. As a result, an array of predictions of the behavior is obtained. Statistical methods are then used to evaluate the moments and distribution types of the system's behavior. The analytical and computational steps of a Monte Carlo simulation follow:

1. Define the system using a model.
2. Calibrate the model.
3. Modify the model to allow for random variation, with the generation of random numbers to quantify the values of the random variables.
4. Run a statistical analysis of the resulting model output.
5. Perform a study of the simulation efficiency and convergence.
6. Use the model in decision making.

The definition of the system should include its boundaries, input parameters, output (or behavior) measures, architecture, and the models that specify the relationships between input and output parameters. The accuracy of the results of a simulation is highly dependent on an accurate definition of the system. All critical parameters and variables should be included in the model. If an important variable is omitted from the model, the calibration accuracy will be less than potentially possible, which will compromise the accuracy of the results. The definition of the input parameters should include their statistical or probabilistic characteristics, that is, knowledge of their moments and distribution types. It is common to assume in Monte Carlo simulation that the architecture of the system is deterministic, that is, nonrandom; however, model uncertainty is easily incorporated into the analysis by including bias factors and measures of the sampling variation of the random variables. For each simulation cycle, random values are generated from the distributions of the input parameters; the results of these generations are values for the input parameters. These values are then substituted into the model to obtain an output measure. By repeating the procedure N times (for N simulation cycles), N response measures are obtained. Statistical methods can then be used to obtain, for example, the mean value, variance, or distribution type of each of the output variables. The accuracy of the resulting measures of the behavior is expected to increase with the number of simulation cycles. The convergence of the simulation method can be investigated by studying its limiting behavior as N is increased.

Example 11.1

To illustrate a few of the steps of the simulation process, assume that theory suggests the relationship between two variables, Y and X, is linear: Y = a + bX. The calibration data are collected, with the following four pairs of values:

X: 2  4  6  8
Y: 3  7  9  8

The means and standard deviations of the two variables follow: X̄ = 5.0, Sx = 2.582, Ȳ = 6.75, and Sy = 2.630. Least-squares fitting yields a = 2.5, b = 0.85, and a standard error of estimate Se = 1.7748. The goal is to be able to simulate random pairs of X and Y.
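The calibration step (step 2) can be sketched in a few lines; running this should reproduce the values quoted above (a = 2.5, b = 0.85, Se = 1.7748) from the four calibration pairs. The least-squares formulas are the standard ones, not code from the book.

```python
import math

X = [2, 4, 6, 8]
Y = [3, 7, 9, 8]
n = len(X)

x_bar = sum(X) / n
y_bar = sum(Y) / n
Sxx = sum((x - x_bar) ** 2 for x in X)
Sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(X, Y))

b = Sxy / Sxx                  # slope: 0.85
a = y_bar - b * x_bar          # intercept: 2.5
sse = sum((y - (a + b * x)) ** 2 for x, y in zip(X, Y))
Se = math.sqrt(sse / (n - 2))  # standard error of estimate: 1.7748

Sx = math.sqrt(Sxx / (n - 1))                               # 2.582
Sy = math.sqrt(sum((y - y_bar) ** 2 for y in Y) / (n - 1))  # 2.630
print(a, b, Se, x_bar, Sx, y_bar, Sy)
```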
Values of X can be generated by assuming that X is normally distributed with µx = X̄ and σx = Sx, using the following linear model:

X̂ = µx + z σx   (11.1a)

in which z is a standard normal deviate, N(0, 1). The generated values of X are then used in the generation of values of Y by

Ŷ = 2.5 + 0.85 X̂ + z Se   (11.1b)

in which z is N(0, 1). The last term represents the stochastic element of the model, whereas the first two terms represent the deterministic portion of the model, in that they yield the same value of 2.5 + 0.85X whenever the same value of X is used.

Eight values of z are required to generate four pairs of (X, Y). Four values of z are used to generate the values of X̂ with Equation 11.1a. The generated values of X̂ are then inserted into Equation 11.1b to generate values of Ŷ. Consider the following example:

  z       X̂       z       Ŷ
−0.37    4.04    0.42    6.68
 0.82    7.12   −0.60    7.48
 0.12    5.31    1.03    8.84
 0.58    6.50   −0.54    7.06

The sample statistics for the generated values of X and Y are X̄ = 5.74, Sx = 1.36, Ȳ = 7.515, and Sy = 0.94. These deviate considerably from the calibration data, but are within the bounds of sampling variation for a sample size of four.

The above analyses demonstrate the first four of the six steps outlined: the linear model was obtained from theory, the model was calibrated using a set of data, a data set was generated, and the moments of the generated data were computed and compared to those of the calibration data. To demonstrate the last two steps would require the generation of numerous data sets, such that the average characteristics of all generated samples approached the expected values. The number of generated samples would be an indication of the size of the simulation experiment.

11.1.4 ILLUSTRATION OF SIMULATION

The sampling distribution of the mean is analytically expressed by the following theorem:

If a random sample of size n is obtained from a population that has the mean µ and variance σ², then the sample mean X̄ is a value of a random variable whose distribution has the mean µ and the variance σ²/n.

If this theorem were not known from theory, it could be uncovered with simulation. The following procedure illustrates the process of simulating the sampling distribution of the mean:

1. From a known population with mean µ and variance σ², generate a random sample of size n.
2. Compute the mean X̄ and variance S² of the sample.
3. Repeat steps 1 and 2 a total of Ns times, which yields Ns values of X̄ and S².
4. Repeat steps 1 to 3 for different values of µ, σ², and n.
5. For each simulation run (i.e., steps 1 to 3), plot the Ns values of X̄, examine the shape of the distribution of the values, compute the central tendency and spread, and relate these to the values of µ, σ², and n.

The analysis of the data would show that the theorem stated above is valid. For this example, the model is quite simple: computing the means and variances of samples. The input parameters are µ, σ², and n. The number of samples generated, Ns, is the length of a simulation run. The number of executions of step 4 would be the number of simulation runs. The output variables are X̄ and S².

Example 11.1 and the five steps described for identifying the sampling distribution of the mean illustrate the first four steps of the simulation process. The fifth step is a study of the efficiency and convergence of the process. In Example 11.1, only one sample was generated.
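A sketch of the generation step of Example 11.1, Equations 11.1a and 11.1b, follows. Setting n_samples = 1 corresponds to the single four-pair sample above; larger values illustrate how the averaged sample moments settle toward the calibration moments, which previews the convergence question addressed next.

```python
import random
import statistics

X_BAR, SX = 5.0, 2.582        # calibration moments of X
A, B, SE = 2.5, 0.85, 1.7748  # calibrated model and standard error

def generate_pairs(n_pairs, rng):
    """Generate (X, Y) pairs with Equations 11.1a and 11.1b."""
    pairs = []
    for _ in range(n_pairs):
        x_hat = X_BAR + rng.gauss(0, 1) * SX           # Equation 11.1a
        y_hat = A + B * x_hat + rng.gauss(0, 1) * SE   # Equation 11.1b
        pairs.append((x_hat, y_hat))
    return pairs

rng = random.Random(1)
for n_samples in (1, 10, 100, 1000):
    means = []
    for _ in range(n_samples):
        ys = [y for _, y in generate_pairs(4, rng)]
        means.append(statistics.mean(ys))
    # With more samples, the average of the sample means of Y settles
    # near the calibration mean of Y, which is 6.75.
    print(n_samples, statistics.mean(means))
```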
The third step of the above description indicates that Ns samples should be generated. How large does Ns need to be in order to identify the sampling distribution? Answering this question is a way of assessing the convergence of the process to a reliable answer. Assuming that the above five steps constitute a valid algorithm for identifying the sampling distribution of the mean, the number of simulations needed before the computed characteristics of the data stabilize would indicate when the algorithm has converged to a solution.

If the effect of the assumed population from which the n values in step 1 were sampled is of interest, the experiment could be repeated using different underlying populations for generating the sample values of X. The outputs of step 5 could then be compared to assess whether the choice of distribution is important. This would qualitatively evaluate the sensitivity of the result to the underlying distribution. A sensitivity analysis performed to assess the correctness of the theorem when the population is finite, of size N, rather than infinite would show that the variance is σ²(N − n)/[n(N − 1)] rather than σ²/n.

11.1.5 RANDOM NUMBERS

The above simulation of the distribution of the mean would require a random-number generator for step 1. Random numbers are real values that are usually developed by a deterministic algorithm, with the resulting numbers having a uniform distribution in the range (0, 1). A sequence of random numbers should also satisfy the condition of being uncorrelated; that is, the correlation between adjacent values equals zero. The importance of uniform random numbers is that they can be transformed into values that follow any other probability distribution of interest. Therefore, they are the initial form of random variables for most engineering simulations.

In the early years of simulation, mechanical random-number generators were used, such as drawing numbered balls, throwing dice, or dealing out cards. Many lotteries are still operated this way. After several stages of development, computer-based arithmetic random-number generators were developed that use an analytical generating algorithm. In these generators, a random number is obtained from a previous value (or values) and fixed mathematical equations. Therefore, a seed is needed to start the process of generating a sequence of random numbers. The main advantages of arithmetic random-number generators over mechanical generators are speed, the elimination of the need to store the numbers, and repeatability. The conditions of a uniform distribution and the absence of serial correlation must still be satisfied. Due to the nature of the arithmetic generation of random numbers, a given seed should result in the same stream of random values every time the algorithm is executed. This property of repeatability is important for debugging the simulation algorithm and for comparative studies of design alternatives for a system.

11.2 COMPUTER GENERATION OF RANDOM NUMBERS

A central element in simulation is the random-number generator. In practice, computer packages are commonly used to generate the random numbers used in simulation; however, it is important to understand that these random numbers are generated from a deterministic process and thus are more correctly called pseudo-random numbers.
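The repeatability property discussed above is easy to demonstrate with any arithmetic generator. The sketch below uses Python's built-in generator, which is an arithmetic (Mersenne Twister based) generator, purely as an illustration.

```python
import random

stream1 = random.Random(2189)   # same seed ...
stream2 = random.Random(2189)   # ... same stream
a = [stream1.random() for _ in range(5)]
b = [stream2.random() for _ in range(5)]
print(a == b)   # True: a given seed reproduces the identical stream

stream3 = random.Random(3500)   # a different seed gives a different stream
c = [stream3.random() for _ in range(5)]
print(a == c)   # False
```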
Because the random numbers are derived from a deterministic process, it is important to understand the limitations of these generators.

Random-number generators produce numbers with specific statistical characteristics. Obviously, if the generated numbers are truly random, an underlying population exists that can be represented by a known probability function. A single die is the most obvious example of a random-number generator. If a single die were rolled many times, a frequency histogram could be tabulated. If the die were a fair die, the sample histogram for the generated population would consist of six bars of equal height. Rolling the die produces values of a random variable that has a discrete mass function. Other random-number generators produce random numbers having different distributions, including continuously distributed random variables. When a computerized random-number generator is used, it is important to know the underlying population.

11.2.1 MIDSQUARE METHOD

The midsquare method is one of the simplest but least reliable methods of generating random numbers; however, it illustrates the problems associated with deterministic procedures. The general procedure follows:

1. Select at random a four-digit number; this is referred to as the seed.
2. Square the number and write the square as an eight-digit number, using preceding (lead) zeros if necessary.
3. Use the four digits in the middle as the new random number.
4. Repeat steps 2 and 3 to generate as many numbers as necessary.

As an example, consider the seed number of 2189. Squaring this yields the eight-digit number 04791721, which gives the first random number of 7917. The following sequence of squares, each yielding a four-digit random number, results from using 2189 as the seed:

04791721
62678889
46076944
00591361
34963569
92833225
69422224

Note that a leading 0 was included in the first number, and that two leading zeros were required for the fourth number. At some point, one of these numbers must recur. At that point, the same sequence that occurred on the first pass will repeat itself, and the sequence of generated numbers is no longer a sequence of random numbers. For example, if the four-digit number 3500 occurred, the following sequence would result:

12250000
06250000
06250000
06250000

Such a sequence is obviously not random and would not pass statistical tests for randomness. While the procedure could be used for very small samples, it is limited in that a number will recur after only a relatively small count of values has been generated. While five-digit numbers could be used to produce ten-digit squares, the midsquare method has serious flaws that limit its usefulness. It is useful, however, for introducing the concept of random-number generation.

11.2.2 ARITHMETIC GENERATORS

Many arithmetic random-number generators are available, including the midsquare method, linear congruential generators, mixed generators, and multiplicative generators. All of these generators are based on the same principle of starting with a seed and applying fixed mathematical equations to obtain the random value. The resulting values are used in the same equations to obtain additional values. By repeating this recursive process N times, N random numbers in the range (0, 1) are obtained. The methods differ according to the algorithm used as the recursive model.
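Both the midsquare method and a simple congruential generator can be sketched in a few lines. The midsquare function reproduces the sequences above; the congruential constants shown (the widely used "minimal standard" a = 16807, m = 2^31 − 1) are an illustrative choice, not constants given in this chapter.

```python
def midsquare(seed, count):
    """Midsquare method: take the middle four digits of the eight-digit square."""
    values, x = [], seed
    for _ in range(count):
        x = int(f"{x * x:08d}"[2:6])   # pad to 8 digits, keep the middle 4
        values.append(x)
    return values

def lcg(seed, count, a=16807, c=0, m=2**31 - 1):
    """Congruential generator: x(i+1) = (a*x(i) + c) mod m, scaled to (0, 1)."""
    values, x = [], seed
    for _ in range(count):
        x = (a * x + c) % m
        values.append(x / m)
    return values

print(midsquare(2189, 7))  # [7917, 6788, 769, 5913, 9635, 8332, 4222] (769 = 0769)
print(midsquare(3500, 3))  # [2500, 2500, 2500] -- the degenerate cycle
print(lcg(2189, 3))        # three pseudo-random values in (0, 1)
```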
In all recursive models, the period of the generator is of concern. The period is defined as the number of random values generated before the stream of values starts to repeat itself. It is always desirable to have random-number generators with large periods, ideally much larger than the number of simulation cycles needed in a simulation study of a system.

11.2.3 TESTING OF GENERATORS

Before a random-number generator is used, two tests should be performed on it: a test for uniformity and a test for serial correlation. These tests can be performed either theoretically or empirically. A theoretical test is defined as an evaluation of the recursive model of the random-number generator itself. Theoretical tests include an assessment of the suitability of the parameters of the model without performing any generation of random numbers. An empirical test is a statistical evaluation of streams of random numbers produced by a random-number generator. The empirical tests start by generating a stream of random numbers, that is, N random values in the range (0, 1). Then statistical tests for distribution type, that is, goodness-of-fit tests such as the chi-square test, are used to assess the uniformity of the random values. The objective of the uniformity test is, therefore, to make sure that the resulting random numbers follow a continuous uniform probability distribution. To test for serial correlation, the Spearman–Conley test (Conley and McCuen, 1997) could be used; the runs test for randomness is an alternative. Either test can be applied to a sequence of generated values to assess the serial correlation of the resulting random vector, where each value in the stream is considered to come from a separate but identical uniform distribution.

11.2.4 DISTRIBUTION TRANSFORMATION

In simulation exercises, it is necessary to generate random numbers from the population that underlies the physical processes being simulated. For example, if annual floods at a site follow a log-Pearson type III distribution, then random numbers having a uniform distribution would be inappropriate for generating random sequences of flood flows. The problem can be circumvented by transforming the generated uniform variates to log-Pearson type III variates.

Distribution transformation refers to the act of transforming variates x from distribution f(x) to variates y that have distribution f(y). Both x and y can be either discrete or continuous. Most commonly, an algorithm is used to generate uniform variates, which are continuously distributed, and the uniform variates are then transformed to a second distribution using the cumulative probability distribution of the desired distribution.

The task of distribution transformation is best demonstrated graphically. Assume that values of the random variate x with the cumulative distribution F(x) are generated, and values of a second random variate y with the cumulative distribution F(y) are needed. Figure 11.1(a) shows the process for the case where both x and y are discrete random variables.

FIGURE 11.1 Transformation curves: (a) X and Y are discrete random variables; (b) X continuous, Y discrete; (c, d) X is U(0, 1), Y is discrete; and (e, f) Y is continuous, U is U(0, 1).
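Before walking through the graphical procedure, note that for a continuous target distribution with an invertible cumulative function, the matching of cumulative probabilities reduces to solving F(y) = u for y. A minimal sketch follows; the exponential target is an illustrative choice, not the chapter's example.

```python
import math
import random

def exponential_from_uniform(u, scale):
    """Invert F(y) = 1 - exp(-y/scale): set F(y) = u and solve for y."""
    return -scale * math.log(1.0 - u)

rng = random.Random(42)
uniform_variates = [rng.random() for _ in range(5)]
y = [exponential_from_uniform(u, scale=10.0) for u in uniform_variates]
print(y)  # exponentially distributed variates with mean near 10
```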
After graphing the cumulative distributions F(x) and F(y), the value of x is entered on the x-axis and the value of its cumulative probability is found. The cumulative value for y is assumed to equal the cumulative value of x. Therefore, the value of y is found by moving horizontally from F(x) to F(y) and then down to the y-axis, where the value of yi is obtained. Given a sample of n values of xi, a sample of n values of yi is generated by repeating this process.

The same transformation process can be used when x is a continuously distributed random variable; Figure 11.1(b) shows this case. Because many random-number generators generate uniformly distributed random numbers, F(x) is most often the cumulative uniform distribution. This is illustrated in Figure 11.1(c). Since the cumulative distribution for a uniform variate is a constant-sloped line, Figure 11.1(c) can be simplified to Figure 11.1(d). Both Figures 11.1(c) and 11.1(d) show y as a discretely distributed random variable. Figures 11.1(e) and 11.1(f) show the corresponding graphs when y is a continuously distributed random variable.

Example 11.2

Assume that the number of runoff-producing storms per year (x in column 1 of Table 11.1) has the mass function given in column 2 of Table 11.1. Using f(x) in column 2, the cumulative mass function F(x) is formed (see column 3 of Table 11.1). The rule for transforming the uniform variate u to a value of the discrete random variable x follows:

xi = 0   if ui ≤ F(0)                (11.2a)
xi = i   if F(i − 1) < ui ≤ F(i)     (11.2b)

Assume that ten simulated values of an annual maximum flood record are needed. Then ten uniform variates ui would be generated (see column 5). Using the transformation algorithm of Equations 11.2, the values of ui are used to obtain generated values of xi (see column 6). For example, u1 is 0.62. Entering column 3, u1 lies between F(2) of 0.50 and F(3) of 0.70; therefore, x1 equals 3. The value of u7 of 0.06 is less than F(0) of 0.10; therefore, x7 equals 0.

TABLE 11.1 Continuous to Discrete Transformation

(1) x   (2) f(x)   (3) F(x)     (4) Simulation   (5) Uniform Variate   (6) Simulated x
  0       0.10       0.10            1                 0.62                  3
  1       0.15       0.25            2                 0.17                  1
  2       0.25       0.50            3                 0.43                  2
  3       0.20       0.70            4                 0.96                  6
  4       0.15       0.85            5                 0.22                  1
  5       0.10       0.95            6                 0.86                  5
  6       0.05       1.00            7                 0.06                  0
                                     8                 0.34                  2
                                     9                 0.57                  3
                                    10                 0.40                  2
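Equations 11.2 amount to finding the smallest x whose cumulative probability is at least ui. A minimal sketch, which should reproduce column 6 of Table 11.1 from column 5:

```python
import bisect

f = [0.10, 0.15, 0.25, 0.20, 0.15, 0.10, 0.05]   # column 2: f(x), x = 0..6
F = []                                            # column 3: cumulative F(x)
total = 0.0
for p in f:
    total += p
    F.append(round(total, 2))

def discrete_transform(u):
    """Equations 11.2: the smallest x with u <= F(x)."""
    return bisect.bisect_left(F, u)

uniform_variates = [0.62, 0.17, 0.43, 0.96, 0.22, 0.86, 0.06, 0.34, 0.57, 0.40]
print([discrete_transform(u) for u in uniform_variates])
# [3, 1, 2, 6, 1, 5, 0, 2, 3, 2] -- matches column 6 of Table 11.1
```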
[...]

... population probability Pp to assess the true bias and accuracy.

11.6 PROBLEMS

11-1 Identify a system and its components for each of the following: (a) a 200-square-mile watershed; (b) a 1-acre residential lot; (c) a groundwater aquifer; (d) a 2-mile-long section of a river.
11-2 Justify the use of the word model to describe the peak-discharge rational method, qp = CiA.
11-3 Describe a laboratory experiment to show that the ...

[...] relationship Xi = 10^yi transforms the normally distributed values to lognormally distributed values.

Example 11.14

Table 11.8 provides the 7-day low-flow discharge series for a 27-year period (1939-1965), including the statistics for both the discharges and the discharge logarithms. The 2-year, 10-year, and 100-year 7-day low flows are:

log Q2 = 1.957 − 0(0.134) = 1.957, so Q2 = 90.6 ft³/s
log Q10 = 1.957 − 1.2817(0.134) = 1.785, so Q10 = 61.0 ft³/s
log Q100 = 1.957 − 2.3267(0.134) = 1.645, so Q100 = 44.2 ft³/s

[...] A simulated record of 20 values, generated by log Qi = 1.957 − 0.134 zi, follows:

 i       z      log Q     Q
 1    −0.646    2.044    111
 2    −0.490    2.023    105
 3    −0.067    1.966     92
 4    −1.283    2.129    135
 5     0.473    1.894     78
 6    −0.203    1.984     96
 7     0.291    1.918     83
 8    −1.840    2.204    160
 9     1.359    1.775     60
10    −0.687    2.049    112
11    −0.811    2.066    116
12     0.738    1.858     72
13    −0.301    1.997     99
14     0.762    1.855     72
15    −0.905    2.078    120
16     0.435    1.899     79
17     0.741    1.858     72
18    −1.593    2.171    148
19     0.038    1.952     90
20     0.797    1.850     71

Mean of log Q: 1.978; standard deviation of log Q: 0.117.

The moments of the simulated logarithms yield the following estimates of the 2-, 10-, and 100-year low flows:

log Q2 = 1.978 − 0(0.117) = 1.978, so Q2 = 95.1 ft³/s
log Q10 = 1.978 − 1.2817(0.117) = 1.828, so Q10 = 67.3 ft³/s
log Q100 = 1.978 − 2.3267(0.117) = 1.706, so Q100 = 50.8 ft³/s

For the small sample size, the simulated discharges are in good agreement with the measured values.

[...]

11.4.5 [...] distribution: Xi = {1457, 5711, 2580, 705}

11.4.6 CHI-SQUARE DISTRIBUTION

A chi-square deviate with ν degrees of freedom can be generated using standard normal deviates zi and uniform U(0, 1) deviates uj as follows:

χi = −2 ln(u1 u2 ... u(ν/2))             for ν even   (11.38a)
χi = −2 ln(u1 u2 ... u((ν−1)/2)) + zi²   for ν odd    (11.38b)

[...] number of n-year sequences, the distribution of the 7-day, 10-year low flow could be determined. By performing this analysis for different scenarios of deforestation, the effect of deforestation on the 7-day, 10-year low flow could be assessed.

TABLE 11.9 Summary of 11-Year Simulations of Annual Rainfall Depth, Evapotranspiration, [...]

[...] interest: the nth-step state probability vector and the steady-state probability vector, which are denoted as P(n) and P(∞), respectively. The nth-step vector gives the likelihood that the random variable x is in state i at time n. The steady-state vector is independent of the initial conditions. Given the one-step transition probability matrix P and the initial-state vector, the nth-step vector can be ...

[...] probabilities of interest are as follows:

P(xt = W | xt−1 = W)   (11.17a)
P(xt = W | xt−1 = D)   (11.17b)
P(xt = D | xt−1 = W)   (11.17c)
P(xt = D | xt−1 = D)   (11.17d)

Alternatively, if we assumed that three states (wet, moderate, and dry) were possible, then nine conditional probabilities would be of interest. The probabilities of Equations 11.17 are called one-step transition probabilities because they define the probabilities ...

            | p11  p12  ...  p1n |
P = [pij] = | p21  p22  ...  p2n |   (11.18)
            | ...  ...  ...  ... |
            | pn1  pn2  ...  pnn |

The one-step transition probability matrix is subject to the constraint that the sum of the probabilities in any row must equal 1:

Σ(j = 1 to n) pij = 1   (11.19)

To illustrate the transformation of the probabilities of Equation 11.17 ...
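The nth-step computation mentioned in the Markov-chain fragment above is a repeated vector-matrix multiplication. A minimal sketch follows; the two-state (wet/dry) transition matrix is invented for illustration and is not from the chapter's examples.

```python
def step(state_probs, P):
    """One step of p(n) = p(n-1) * P for a row vector of state probabilities."""
    n = len(P)
    return [sum(state_probs[i] * P[i][j] for i in range(n)) for j in range(n)]

# Hypothetical one-step transition matrix for states (W, D); each row sums
# to 1, as required by Equation 11.19. These numbers are illustrative only.
P = [[0.7, 0.3],
     [0.4, 0.6]]

p = [1.0, 0.0]            # initial-state vector: start in state W
for n in range(1, 21):
    p = step(p, P)        # nth-step state probability vector
print(p)  # approaches the steady-state vector, here (4/7, 3/7)
```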
[...] = 0 otherwise   (11.5)   (11.6)

Equation 11.5 can be used to transform uniform variates ui to variates xi that have the density function of Equation 11.4. If the uniform variates have location and scale parameters of 0 and 1, respectively, then the values of ui can be set equal to F(x) of Equation 11.5a, and the value of xi is computed by

xi = [(F(x) + 0.25)/3.2]^0.5   (11.7)

The transformation ...

[...] the coin. From the binomial mass function of Equation 11.8, we can compute the following population probabilities: (11.9a), (11.9b), (11.9c), (11.9d). The sum of these probabilities is 1. To illustrate ...
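Because the parameters of Equation 11.8 and the bodies of Equations 11.9 are not visible here, the sketch below assumes a fair coin and three flips purely for illustration; it shows the kind of computation described, with the probabilities summing to 1.

```python
from math import comb

def binomial_pmf(k, n, p):
    """Binomial mass function: P(exactly k successes in n trials)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 3, 0.5   # assumed values; the original example's parameters are elided
probs = [binomial_pmf(k, n, p) for k in range(n + 1)]
print(probs)        # [0.125, 0.375, 0.375, 0.125]
print(sum(probs))   # 1.0 -- the probabilities sum to 1, as stated in the text
```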
