An Assessment of Seasonal Water Supply Outlooks in the Colorado River Basin

Jean C. Morrill¹, Holly C. Hartmann¹ and Roger C. Bales²ᵃ

¹Department of Hydrology and Water Resources, University of Arizona, Tucson, AZ, USA
²School of Engineering, University of California, Merced, CA, USA

ᵃCorresponding author:
Roger C. Bales
University of California, Merced
P.O. Box 2039
Merced, CA 95344
209-724-4348 (o); 209-228-4047 (fax)

10/20/2022

Abstract

A variety of forecast skill measures of interest to the water resources applications community and other stakeholders were used to assess the strengths and weaknesses of seasonal water supply outlooks (WSOs) at 54 sites in the Colorado River basin, and to provide a baseline against which alternative and experimental forecast methods can be compared. These included traditional scalar measures (linear correlation, root-mean-square error and bias), categorical measures (false alarm rate, threat score), probabilistic measures (Brier score, ranked probability score) and distribution-oriented measures (resolution, reliability and discrimination). Despite their shortcomings, the WSOs are generally an improvement over climatology at most sites over the period of record. The majority of forecast points have very conservative predictions of seasonal flow, with below-average flows often over-predicted and above-average flows under-predicted. Late-season forecasts at most locations are generally better than those issued in January. There is a low false alarm rate for both low and high flows at most sites; however, these flows are not forecast nearly as often as they are observed. Moderate flows have a very high probability of detection, but are forecast more often than they occur. There is also good discrimination between high and low flows, i.e., when high flows are forecast, low flows are not observed, and vice versa. The diversity of forecast performance metrics reflects the multi-attribute nature of forecasts and forecast ensembles.
Introduction

Seasonal water supply outlooks, forecasts of the volume of total seasonal runoff, are routinely used by decision makers in the southwestern United States for making commitments for water deliveries, determining industrial and agricultural water allocations, and carrying out reservoir operations. These forecasts are based primarily on statistical regression equations developed from monthly precipitation, recent snow-water equivalent, and a subset of past streamflow observations (Day, 1985). In the Colorado River basin the National Weather Service's Colorado Basin River Forecast Center (CBRFC) and the Natural Resources Conservation Service (NRCS) jointly issue seasonal water supply outlook (WSO) forecasts of naturalized, or unimpaired, flow, i.e., the flow that would most likely occur in the absence of diversions. These forecasts were not always issued jointly (Hartmann et al., 200X?). Currently, WSOs are issued once each month from January to June; however, until the mid-1990s, forecasts were only issued through May. Each forecast contains: the most probable value for the forecast period, a comparison to a historical, climatological mean value (usually a 10- to 30-year mean), a reasonable maximum (usually the 10% exceedance value), and a reasonable minimum (usually the 90% exceedance value). In some locations with strongly skewed flow distributions, the comparison is to a historical median, rather than the mean.

The forecast period is the period of time over which the forecast flow is predicted to occur. It is not the same for all sites, for all years at one location, or even for all months in a single year. In the past decade, the most common forecast period has been April–July for most sites in the upper Colorado River basin and January–May for the lower Colorado, for each month a forecast was issued. However, many sites previously used April–September forecast periods, and prior to that the forecast period for the January forecast was January–September, for
the February forecast it was February–September, and so on.

Most of the sites at which forecasts are issued are impaired, i.e., they have diversions above the forecast and gauging location. The CBRFC therefore combines measured discharges with historical estimates of diversions to reconstruct the unimpaired observed flow (Ref bulletins). Despite the shortcomings of this approach, it provides the best estimate against which to assess the skill of WSOs.

Forecast verification is important for assessing forecast quality and performance, improving forecasting procedures, and providing users with information helpful in applying the forecasts (Murphy and Winkler, 1987). Decision makers take account of forecast skill in using forecast information and are interested in having access to a variety of skill measures (Bales et al., 2004; Franz et al., 2003). Shafer and Huddleston (1984) examined average forecast error at over 500 forecast points in 10 western states. Using summary statistical measures, they found that forecast errors tended to be approximately normally distributed, but with a slightly negative skew that resulted from a few large negative errors (under-forecasts) with no corresponding large positive errors. High errors were not always associated with poor skill scores, however. [Any additional result about skill??]
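The error behavior Shafer and Huddleston describe can be sketched in a few lines of code. The streamflow numbers below are hypothetical illustrations, not data from their study, and the skew statistic is the standard adjusted sample skewness:

```python
# Sketch of percent forecast error relative to a climatological reference,
# and the skew of the resulting error distribution. All values hypothetical.

def percent_errors(forecasts, observations, o_ref):
    """Percent error of each forecast/observation pair,
    E_i = (f_i - o_i) / o_ref * 100."""
    return [(f - o) / o_ref * 100.0 for f, o in zip(forecasts, observations)]

def skew_coefficient(errors):
    """Adjusted sample skewness:
    G = n * sum((E_i - Ebar)^3) / ((n - 1) * (n - 2) * s^3),
    with s the sample standard deviation of the errors."""
    n = len(errors)
    mean = sum(errors) / n
    s = (sum((e - mean) ** 2 for e in errors) / (n - 1)) ** 0.5
    return n * sum((e - mean) ** 3 for e in errors) / ((n - 1) * (n - 2) * s ** 3)

# A few large under-forecasts with no matching over-forecasts:
f = [90, 95, 100, 105, 98, 60]        # forecast seasonal volumes
o = [100, 100, 100, 100, 100, 130]    # "observed" naturalized volumes
E = percent_errors(f, o, o_ref=100.0)
print(E)                        # [-10.0, -5.0, 0.0, 5.0, -2.0, -70.0]
print(skew_coefficient(E) < 0)  # True: negatively skewed
```

A handful of large under-forecasts with no offsetting over-forecasts drives the skewness negative, which is the pattern Shafer and Huddleston reported.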
The work reported here assesses the skill of forecasts relative to naturalized streamflow across the Colorado River basin, using a variety of methods of interest to stakeholders: traditional scalar measures (linear correlation, root-mean-square error and bias), categorical measures (false alarm rate, threat score), probabilistic measures (Brier score, ranked probability score) and distribution-oriented measures (resolution, reliability and discrimination). The purpose was to assess the strengths and weaknesses of the current water supply forecasts, and to provide a baseline against which alternative and experimental forecast methods can be compared.

2. Data and methods

2.1 Data

WSO records from 136 forecast points on 84 water bodies were assembled, including some forecast locations that are no longer active. [NEED TO APPEND DATA] Reconstructed flows were made available by the CBRFC and NOAA (T. Tolsdorf and S. Shumate, personal communication); however, data were not available for all forecast locations. Many current forecast points were established in 1993, and so do not yet have good long-term records. For this study we chose 54 sites having at least 10 years of both forecast and observed data (Figure 1). Another 33 sites have fewer than 10 years of data, but most are still active, and so should be more useful for statistical analysis in a few years' time. The earliest water supply forecasts used in this study were issued in 1953, at 22 of the 54 locations. The 54 forecasting sites were divided into smaller basins (or, in the case of Lake Powell, a single location), compatible with the divisions used by the CBRFC in the tables and graphs accompanying the WSO forecasts (Table 1). The maximum number of years in the combined forecast and observation record was 48 (1953–2000), the minimum used was 21, and the median and average number of years were 46 and 41.5, respectively.

Each forecast includes the most likely value, a reasonable maximum (usually the 10% exceedance value),
and a reasonable minimum (usually the 90% exceedance value). These were used to calculate the 30% and 70% exceedance values associated with each forecast. Five forecast flow categories were then defined for each forecast, based on exceedance probability: 0–10%, >10–30%, >30–70%, >70–90%, and >90%. The probability of the flow falling within each of these categories is 0.1, 0.2, 0.4, 0.2 and 0.1, respectively.

2.2 Summary and correlation measures

Summary measures are scalar measures of accuracy for forecasts of continuous variables, and include the mean absolute error (MAE) and mean square error (MSE):

MAE = (1/n) Σ_{i=1}^{n} |f_i − o_i|   (1)

MSE = (1/n) Σ_{i=1}^{n} (f_i − o_i)²   (2)

where, for a given location, f_i is the forecast seasonal runoff for period i and o_i the naturalized observed flow for the same period. Since MSE is computed by squaring the forecast errors, it is more sensitive to large errors than is MAE. It increases from zero for perfect forecasts to large positive values as the discrepancies between forecasts and observations become larger. RMSE is the square root of the MSE.

Often an accuracy measure is not meaningful by itself, and is compared to a reference value, usually based on the historical record. For a forecast technique to be worthwhile, it must generate better results than simply using the cumulative distribution of the climatological record, i.e., assuming that the most likely flow next year is the average flow in the climatological record. To judge this, skill scores are calculated for the accuracy measures:

SS_A = (A − A_ref) / (A_perf − A_ref)   (3)

where SS_A is the skill score for a generic accuracy measure A, A_ref is the accuracy of a reference set of values (e.g., the climatological record) and A_perf is the value of A given by perfect forecasts. If A = A_perf, SS_A is at its maximum, 100%. If A = A_ref, then SS_A = 0%, indicating no improvement over the reference forecast.

Correlation between forecasts and observations was assessed using the coefficient of determination, R², and the Nash–Sutcliffe coefficient, NSC (Legates and McCabe, 1999; Nash and Sutcliffe, 1970):

R² = [ Σ_{i=1}^{n} (f_i − f̄)(o_i − ō) ]² / [ Σ_{i=1}^{n} (f_i − f̄)² Σ_{i=1}^{n} (o_i − ō)² ]   (4)

NSC = 1 − [ Σ_{i=1}^{n} (f_i − o_i)² ] / [ Σ_{i=1}^{n} (o_i − ō)² ]   (5)

where f̄ and ō are the mean forecast and observed flows. If NSC > 0, the forecast is a better predictor of flow than is the observed mean, but if NSC <
0, the observed mean is a better predictor and there is a lack of correlation between the forecast and observed values.

Discussion of correlation is often combined with that of the percent bias, which measures the difference between the average forecast and observed values (Wilks, 1995):

Pbias = [(f̄ − ō)/ō] × 100%   (6)

which can assume positive (overforecasting), negative (underforecasting) or zero values.

Shafer and Huddleston (1984) used a similar calculation to examine forecast error and its distribution in the analysis of seasonal streamflow forecasts. Forecast error for a particular forecast/observation pair was defined as

E = [(f − o)/o_ref] × 100

where o_ref is the published seasonal average runoff at the time of the forecast (also called the climatological average or reference value). They also defined a skew coefficient associated with the distribution of a set of errors:

G = [ n Σ_{i=1}^{n} (E_i − Ē)³ ] / [ (n − 1)(n − 2) σ_E³ ]

where σ_E is the standard deviation of the errors. G ranges … , with XX being a perfect forecast …

2.3 Categorical measures

A categorical forecast states that one and only one of a set of possible events will occur. Contingency tables are used to display the possible combinations of forecast and event pairs, and the count of each pair. An event (e.g., seasonal flow in the upper 30% of the observed distribution) that is successfully forecast (both forecast and observed) occurs a times. An event that is forecast but not observed occurs b times, and an event that is observed but not forecast occurs c times. An event that is neither forecast nor observed for the same period occurs d times. The total number of forecasts in the data set is n = a + b + c + d. A perfectly accurate binary (2×2) categorical forecast will have b = c = 0 and a + d = n. However, few forecasts are perfect. Several measures can be used to examine the accuracy of the forecast, including hit rate, threat score, probability of detection and false alarm rate (Wilks,
1995).

The hit rate is the proportion correct:

HR = (a + d)/n   (7)

and ranges from one (perfect) to zero (worst).

The threat score, also known as the critical success index, is the proportion of correctly forecast events out of the total number of times the event was either forecast or observed, and does not take into account the correct non-occurrence of events:

TS = a/(a + b + c)   (8)

It also ranges from one (perfect) to zero (worst).

The probability of detection is the ratio of the number of times the event was correctly forecast to the number of times it actually occurred, or the probability of the forecast given the observation:

POD = a/(a + c)   (9)

A perfect POD is 1 and the worst is 0.

A related statistic is the false alarm rate, FAR, which is the fraction of forecast events that do not happen. In terms of conditional probability, it is the probability of not observing an event given the forecast:

FAR = b/(a + b)   (10)

Unlike the other categorical measures described, the FAR has a negative orientation, with the best possible FAR being 0 and the worst being 1.

The bias of the categorical forecasts compares the average forecast with the average observation, and is represented by the ratio of "yes" forecasts to "yes" observations:

bias = (a + b)/(a + c)   (11)

An unbiased forecast has a value of 1, showing that the event occurred the same number of times that it was forecast. If the bias is greater than 1, the event is overforecast (forecast more often than observed); if the bias is less than one, the event is underforecast. Since the bias does not actually show whether the forecasts matched the observations, it is not an accuracy measure.

2.4 Probabilistic measures

Whereas categorical forecasts contain no expression of uncertainty, probabilistic forecasts do. The linear error in probability space (LEPS) assesses forecast errors with respect to their difference in probability, rather than their overall magnitude:

LEPS_i = |F_c(f_i) − F_c(o_i)|   (12)

and LEPS_i = 0 if the distributions are the
same… F_c(o) refers to the climatological cumulative distribution function of the observations, and F_c(f) to the corresponding distribution for the forecasts. The corresponding skill score is:

SS_LEPS = 1 − [ Σ_{i=1}^{n} |F_c(f_i) − F_c(o_i)| ] / [ Σ_{i=1}^{n} |0.5 − F_c(o_i)| ]   (13)

using the climatological median as the reference forecast.

The Brier score is analogous to the MSE:

BS = (1/n) Σ_{i=1}^{n} (f_i − o_i)²   (14)

However, it compares the probability associated with a forecast event with whether or not that event occurred, instead of comparing the actual forecast and observation. Therefore f_i ranges from 0 to 1, o_i = 1 if the event occurred or o_i = 0 if it did not, and BS = 0 for perfect forecasts. The corresponding skill score is:

SS_BS = 1 − BS/BS_ref   (15)

where the reference forecast is generally the climatological relative frequency.

The ranked probability score (RPS) is essentially an extension of the Brier score to multi-event situations. Instead of just looking at the probability associated with one event or condition, it looks simultaneously at the cumulative probability of multiple events occurring. RPS uses the forecast cumulative probability:

F_m = Σ_{j=1}^{m} f_j ,  m = 1, …, J   (16)

where f_j is the forecast probability in each of the J non-exceedance categories. In this paper, f_j = {0.1, 0.2, 0.4, 0.2, 0.1} for the five non-exceedance intervals {0–10%, >10–30%, >30–70%, >70–90%, >90%}, so F_m = {0.1, 0.3, 0.7, 0.9, 1.0} and J = 5. The observation occurs in only one of the flow categories, which is given a value of 1; all the others are given a value of zero:

O_m = Σ_{j=1}^{m} o_j ,  m = 1, …, J   (17)

The RPS for a single forecast/observation pair is calculated from:

RPS_i = Σ_{m=1}^{J} (F_m − O_m)²   (18)

and the average RPS over a number of forecasts from:

RPS̄ = (1/n) Σ_{i=1}^{n} RPS_i   (19)

A perfect forecast will assign all the probability to the category in which the event occurs, resulting in RPS = 0. The RPS has a lower bound of 0 and an upper bound of J − 1. RPS values
are rewarded for the observation being closer to the highest-probability category. The RPS skill score is defined as:

SS_RPS = 1 − RPS̄/RPS̄_ref   (20)

Shafer and Huddleston (1984) noted that the presence of a few large negative errors not offset by correspondingly large positive errors results in a negatively skewed error distribution at a forecast point, i.e., large underforecasts rather than large overforecasts. They also defined a forecast skill coefficient:

C_SH = [ Σ_{i=1}^{n} (o_ref − o_i)² ]^{1/2} / [ Σ_{i=1}^{n} (f_i − o_i)² ]^{1/2}   (22)

A C_SH value of 1.0 indicates that a prediction of the average streamflow would produce the same results as using the forecast. A value of 2.0 implies that the forecasts are twice as accurate as a constant prediction of the average. A value less than 1.0 implies that the forecast is less accurate than climatology.

Shafer and Huddleston (1984) compared forecasts for two 15-year periods, 1951–65 and 1966–80, and concluded that a slight relative improvement (about 10%) in forecast ability occurred about the time computers became widely used in developing forecasts. They attributed the gradual improvement in forecast skill to a combination of greater data processing capacity and the inclusion of data from additional hydrologically important sites. They suggested that "modest improvement might be expected with the addition of satellite derived snow covered area or mean areal water equivalent data", which were not readily available for most operational applications at the time. Although satellite data on snow-covered area are now available, they are still not being used in operational models (Hartmann et al., 2002).

Direct comparison with Shafer and Huddleston (1984) is problematic, as their results were divided by state, rather than by basin, and Colorado was paired with New Mexico. According to their study, Arizona had the highest error (more than 55% for April streamflow forecasts), but also the highest skill (see Eqs. and 11). Of the Colorado
basin states, Wyoming (only part of which is in the basin) had the lowest forecast error (~20%), paired with the highest skill.

Following Shafer and Huddleston (1984), we applied Eq. XX to our data and found similar trends. The Gunnison / Dolores watershed, Upper Green watershed, and Lake Powell site consistently had absolute values of percent forecast error less than 10%, the Gila watershed had percent errors ranging from 24 to 52%, and the five other watersheds mostly had errors between and 20%. The largest improvement in forecast error occurred between January and April in the Virgin River basin (May was somewhat worse, but errors were still within 10%) and between January and March in the Gila River basin (although April was extremely poor), and the March error in the Gila was still higher than at any other site. Skill coefficients generally improved from January to May (except for April in the Gila watershed), from a Colorado basin-wide average of 1.31 to an average of 2.05. Despite the problems seen with some of the other forecast skill methods for Virgin River data, the Virgin River combined low forecast errors with high skill coefficients in April and May.

Conclusions

Despite their shortcomings, the Water Supply Outlooks are generally an improvement over climatology. The majority of forecast points have very conservative predictions of seasonal flow: below-average flows are often over-predicted (forecast values are too high) and above-average flows are under-predicted (forecast values are too low). This problem is most severe for early forecasts (e.g., January) at many locations, and improves somewhat with later forecasts (e.g., May). For low and high flows there is a low false alarm rate, which means that when low and high flows are forecast, these forecasts are generally accurate. However, for low and high flows there is also a low probability of detection at most sites, which indicates that these flows are not forecast nearly as often as they are observed.
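A minimal numerical sketch of these categorical statements (the contingency-table counts below are hypothetical, chosen only to mimic the behavior just described, not taken from the study):

```python
# Hypothetical 2x2 contingency-table counts for one event category
# (e.g. "seasonal flow in the lowest 30%").

def categorical_scores(a, b, c, d):
    """a: forecast and observed; b: forecast but not observed;
    c: observed but not forecast; d: neither forecast nor observed."""
    n = a + b + c + d
    return {
        "hit_rate": (a + d) / n,          # HR = (a + d) / n
        "threat_score": a / (a + b + c),  # TS, ignores correct non-events
        "pod": a / (a + c),               # probability of detection
        "far": b / (a + b),               # false alarm rate
        "bias": (a + b) / (a + c),        # "yes" forecasts / "yes" observations
    }

# Counts mimicking the low-flow behavior described above: forecasts of
# low flow usually verify (low FAR), but most observed low flows were
# never forecast (low POD, bias < 1).
low = categorical_scores(a=6, b=1, c=9, d=32)
print(round(low["far"], 2))   # 0.14
print(low["pod"])             # 0.4
print(low["bias"] < 1)        # True
```

With these counts the false alarm rate is low (about 0.14) while the probability of detection is only 0.4: low flows verify when forecast but are missed more often than not, and the bias below one confirms underforecasting of the category.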
Moderate flows have a very high probability of detection, but also a very high false alarm rate, indicating that the likelihood of moderate flows is overforecast. There is also good discrimination between high and low flows, particularly for forecasts issued later in the year: when high flows are forecast, low flows are not observed, and vice versa. However, the probability that high and low flows will be accurately predicted, particularly early in the year, is not as good. The accuracy of forecasts tends to improve with each month, so that forecasts issued in May tend to be much more reliable than those issued in January. Not all streams or areas show the same patterns and trends, but there is much similarity in the relationship between forecast and observation, particularly in the Upper Colorado. The changes in forecast periods (most recently to April–July in the Upper Basin and forecast-month-through-May in the Lower Basin) did not affect the accuracy of the forecasts.

While intuitively appealing, simple summary and correlation measures provide only a broad indication of forecast skill, with Pbias being perhaps the most intuitive. The categorical measures are an improvement, and begin to give information about high and low forecasts; they can readily communicate a more complete assessment of forecast skill. Probabilistic measures, while including uncertainty, are less straightforward to interpret … Distribution-oriented measures, e.g., discrimination and reliability, provide the user with comprehensive forecast evaluations and allow performance in all streamflow categories to be examined individually. In addition, these forecast quality measures examine the actual probability value within each category, in contrast to "hit" and "miss" scores that convert the probability to an implied 100% probability for the category of interest. However, their sensitivity to small sample sizes is an acknowledged limitation. More use of the categorical and
distribution-oriented measures is encouraged, although appropriate tools and baseline data for interpreting these measures are also needed. [NEED MORE – HH has page of notes for this]

Acknowledgements

Support for this research was provided by NOAA's Office of Global Programs through the Climate Assessment for the Southwest project at the University of Arizona. Additional support was provided by the National Science Foundation through the Center for the Sustainability of semi-Arid Hydrology and Riparian Areas, also centered at the University of Arizona.

References

(Ref bulletins)

Hartmann et al., 2002a – AMS; 2002b Climate Research?

Bales, R.C., D.M. Liverman and B.J. Morehouse, 2004: Integrated assessment as a step toward reducing climate vulnerability in the southwestern United States. Bulletin of the American Meteorological Society, 85(11), 1727.

Day, G.N., 1985: Extended streamflow forecasting using NWSRFS. Journal of Water Resources Planning and Management, 111(2), 157-170.

Day, G.N., L.E. Brazil, C.S. McCarthy, and D.P. Laurine, 1992: Verification of the National Weather Service extended streamflow prediction procedure. Proceedings, AWRA 28th Annual Conference and Symposium, Reno, NV, 163-172.

Franz, K.J., H.C. Hartmann, S. Sorooshian and R. Bales, 2003: Verification of National Weather Service ensemble streamflow predictions for water supply forecasting in the Colorado River basin. Journal of Hydrometeorology, 4(6), 1105-1118.

Legates, D.R. and G.J. McCabe, Jr., 1999: Evaluating the use of "goodness-of-fit" measures in hydrologic and hydroclimatic model validation.

Murphy, A.H. and Winkler, R.L., 1992: Diagnostic verification of probability forecasts. International Journal of
Forecasting, 7, 435-455.

Murphy, A.H., Brown, B.G., and Chen, Y., 1989: Diagnostic verification of temperature forecasts. Weather and Forecasting, 4, 485-501.

Murphy, A.H. and Winkler, R.L., 1987: A general framework for forecast verification. Monthly Weather Review, 115, 1330-1338.

Nash and Sutcliffe, 1970.

Palmer, P.L., 1988: The SCS snow survey water supply forecasting program: current operations and future directions. Proceedings, Western Snow Conference, Kalispell, MT, 43-51.

Riverside Technology, Inc., 1999: National Weather Service Extended Streamflow Prediction Verification System (ESPVS). U.S. National Weather Service.

Shafer, B.A. and Huddleston, J.M., 1984: Analysis of seasonal volume streamflow forecast errors in the western United States. Proceedings, A Critical Assessment of Forecasting in Water Quality Goals in Western Water Resources Management, Bethesda, MD, American Water Resources Association, 117-126.

Wilks, D.S., 2001: Diagnostic verification of the Climate Prediction Center long-lead outlooks, 1995-98. Journal of Climate, 13, 2389-2403.

Wilks, D.S., 1995: Forecast verification. In: Statistical Methods in the Atmospheric Sciences, Academic Press, 467 pp.

Table 1. The 54 sites used in this study.

USGS no.   Name                                              Elev., m (ft)    Area, km² (mi²)     Years used

MAIN STEM UPPER COLORADO
9019000    Colorado R. inflow to L. Granby, CO               2,415 (8,050)    808 (312)           1953–00
9037500    Williams Fork nr Parshall, CO                     2,343 (7,809)    476 (184)           1956–96
9050700    Blue R. inflow to Dillon Res., CO                 2,628 (8,760)    867 (335)           1972–00
9057500    Blue R. inflow to Green Mountain Res., CO         2,305 (7,683)    1,551 (599)         1953–00
9070000    Eagle R. bel. Gypsum, CO                          1,883 (6,275)    2,444 (944)         1974–00
9070500    Colorado R. nr Dotsero, CO                        1,839 (6,130)    11,376 (4,394)      1972–00
9085000    Roaring Fork at Glenwood Springs, CO              1,716 (5,721)    3,756 (1,451)       1953–00
9095500    Colorado R. nr Cameo, CO                          1,444 (4,814)    20,840 (8,050)      1956–00
9180500    Colorado R. nr Cisco, UT                          1,227 (4,090)    62,392 (24,100)     1956–00

GUNNISON / DOLORES
9112500    East R. at Almont, CO                             2,402 (8,006)    748 (289)           1956–00
9124800    Gunnison R. inflow to Blue Mesa Res., CO          2,145 (7,149)    8,939 (3,451)       1971–00
9147500    Uncompahgre R. at Colona, CO                      1,896 (6,319)    1,160 (448)         1953–00
9152500    Gunnison R. nr Grand Junction, CO                 1,388 (4,628)    20,525 (7,928)      1953–00
9166500    Dolores R. at Dolores, CO                         2,076 (6,919)    1,305 (504)         1953–00

UPPER GREEN
9188500    Green R. at Warren Bridge, WY                     2,240 (7,468)    1,212 (468)         1956–00
9196500    Pine Creek abv. Fremont L., WY                    2,235 (7,450)    197 (76)            1969–00
9205000    New Fork R. nr Big Piney, WY                      2,040 (6,800)    3,184 (1,230)       1974–00
9211150    Fontenelle Res. inflow, WY                        1,952 (6,506)    11,080 (4,280)      1971–00
9229500    Henrys Fork nr Manila, UT                         1,818 (6,060)    1,346 (520)         1971–94

YAMPA / WHITE
9239500    Yampa R. at Steamboat Springs, CO                 2,009 (6,695)    1,564 (604)         1953–00
9241000    Elk R. at Clark, CO                               2,180 (7,268)    559 (216)           1953–93
9251000    Yampa R. nr Maybell, CO                           1,770 (5,900)    8,828 (3,410)       1956–00
9257000    Little Snake R. nr Dixon, WY                      1,899 (6,331)    2,558 (988)         1980–00
9260000    Little Snake R. nr Lily, CO                       1,706 (5,685)    9,657 (3,730)       1953–00
9304500    White R. nr Meeker, CO                            1,890 (6,300)    1,955 (755)         1953–00

LOWER GREEN
9266500    Ashley Creek nr Vernal, UT                        1,869 (6,231)    261 (101)           1953–00
9275500    West Fork Duchesne R. nr Hanna, UT, unimp.        2,165 (7,218)    161 (62)            1974–00
9277500    Duchesne R. nr Tabiona, UT, unimp.                1,857 (6,190)    914 (353)           1953–00
9279000    Rock Creek nr Mountain Home, UT                   2,175 (7,250)    381 (147)           1964–00
9279150    Duchesne R. abv. Knight Diversion, UT             1,752 (5,840)    1,613 (623)         1964–00
9288180    Strawberry R. nr Duchesne, UT                     1,717 (5,722)    2,374 (917)         1953–00
9291000    Lake Fork R. bel. Moon L. nr Mountain Home, UT    2,391 (7,970)    290 (112)           1953–00
9295000    Duchesne R. at Myton, UT, unimp.                  1,518 (5,061)    6,842 (2,642)       1956–00
9299500    Whiterocks R. nr Whiterocks, UT                   2,160 (7,200)    282 (109)           1953–00
9315000    Green R. at Green River, UT                       1,212 (4,040)    116,111 (44,831)    1956–00
9317997    Huntington Creek nr Huntington, UT                1,935 (6,450)    461 (178)           1953–00

SAN JUAN R. BASIN
9349800    Piedra R. nr Arboles, CO                          1,844 (6,148)    1,628 (629)         1971–00
9353500    Los Pinos R. nr Bayfield, CO                      2,275 (7,583)    699 (270)           1953–00
9355200    San Juan R. inflow to Navajo Res., NM             1,697 (5,655)    8,440 (3,260)       1963–00
9361500    Animas R. at Durango, CO                          1,951 (6,502)    1,792 (692)         1953–00
9363100    Florida R. inflow to Lemon Res., CO               1,941 (6,470)    47 (18)             1953–00
9365500    La Plata R. at Hesperus, CO                       2,432 (8,105)    96 (37)             1954–00
9379500    San Juan R. nr Bluff, UT                          1,214 (4,048)    59,544 (23,000)     1956–00

LAKE POWELL
9379900    L. Powell at Glen Canyon Dam, AZ                  930 (3,100)      278,822 (107,700)   1963–00

VIRGIN RIVER
9406000    Virgin R. nr Virgin, UT                           1,050 (3,500)    2,475 (956)         1957–00
9408150    Virgin R. nr Hurricane, UT                        834 (2,780)      3,881 (1,499)       1972–00

GILA RIVER BASIN
9430500    Gila R. nr Gila, NM                               1,397 (4,655)    4,826 (1,864)       1964–00
9432000    Gila R. bel. Blue Creek nr Virden, NM             1,227 (4,090)    7,324 (2,828)       1954–00
9444000    San Francisco R. nr Glenwood, NM                  1,368 (4,560)    4,279 (1,652)       1964–00
9444500    San Francisco R. at Clifton, AZ                   1,031 (3,436)    7,161 (2,765)       1953–00
9466500    Gila R. at Calva, AZ                              755 (2,517)      29,694 (11,470)     1963–98
9498500    Salt R. nr Roosevelt, AZ                          653 (2,177)      11,148 (4,306)      1953–00
9499000    Tonto Creek abv. Gun Creek, nr Roosevelt, AZ      757 (2,523)      1,747 (675)         1955–00
9508500    Verde R. bel. Tangle Creek, abv. Horseshoe Dam, AZ  609 (2,029)    15,168 (5,859)      1953–00

Table 2. Resolution in the three categories of flow (– : no May forecasts issued).

                            Jan    Feb    Mar    Apr    May
Low flows
Main stem upper CO          0.4    0.5    0.6    0.6    0.7
Gunnison / Dolores          0.5    0.6    0.7    0.8    0.8
Upper Green                 0.3    0.5    0.5    0.6    0.7
Yampa / White               0.5    0.6    0.6    0.7    0.8
Lower Green                 0.6    0.7    0.7    0.8    0.9
San Juan R. basin           0.5    0.6    0.7    0.7    0.8
Lake Powell                 0.8    0.8    0.8    0.8    0.8
Virgin River                0.4    0.4    0.5    0.7    0.5
Gila River basin            0.6    0.7    0.6    0.6    –
All                         0.5    0.6    0.6    0.7    0.8

Moderate flows
Main stem upper CO          0.2    0.2    0.3    0.4    0.5
Gunnison / Dolores          0.3    0.3    0.3    0.5    0.6
Upper Green                 0.1    0.2    0.2    0.3    0.5
Yampa / White               0.3    0.2    0.3    0.4    0.5
Lower Green                 0.4    0.4    0.5    0.6    0.7
San Juan R. basin           0.2    0.2    0.3    0.4    0.6
Lake Powell                 0.5    0.5    0.5    0.5    0.5
Virgin River                0.2    0.3    0.4    0.5    0.4
Gila River basin            0.3    0.3    0.4    0.4    –
All                         0.3    0.3    0.4    0.4    0.6

High flows
Main stem upper CO          0.5    0.5    0.6    0.4    0.7
Gunnison / Dolores          0.5    0.6    0.7    0.5    0.8
Upper Green                 0.6    0.6    0.6    0.4    0.7
Yampa / White               0.5    0.6    0.6    0.6    0.7
Lower Green                 0.6    0.7    0.8    0.6    0.8
San Juan R. basin           0.4    0.5    0.7    0.4    0.8
Lake Powell                 0.6    0.6    0.6    0.6    0.6
Virgin River                0.4    0.6    0.6    0.2    0.7
Gila River basin            0.5    0.6    0.6    0.4    –
All                         0.5    0.6    0.7    0.5    0.8

Table 3. Rank of sites by relative resolution of flows. (omit??)
Rank   Basin   Point                                              Resolutionᵃ

Low flows, best resolution
 1     LG      Duchesne R. nr Tabiona, UT, unimp.                 0.95
 2     LG      Duchesne R. at Myton, UT, unimp.                   0.94
 3     LG      Strawberry R. nr Soldier Springs, unimp.           0.94
 4     LG      Strawberry R. nr Duchesne, UT                      0.81
 5     LG      Huntington Cr. nr Huntington, UT                   0.79
 6     Gila    Gila R. at Calva, AZ                               0.76
 7     LP      L. Powell at Glen Canyon Dam                       0.76
 8     SJ      Los Pinos R. nr Bayfield, CO                       0.74
 9     LG      Green R. at Green River, UT                        0.74
10     Y/W     Elk R. at Clark, CO                                0.73

Low flows, worst resolution
45     MS-UC   Colorado R. nr Dotsero, CO                         0.54
46     MS-UC   Williams Fork nr Parshall, CO                      0.54
47     MS-UC   Eagle R. bel. Gypsum, CO                           0.54
48     MS-UC   Colorado R. nr Cameo, CO                           0.52
49     UG      Green R. at Warren Bridge, WY                      0.49
50     MS-UC   Colorado R. inflow to L. Granby, CO                0.47
51     UG      Fontenelle Res. inflow, WY                         0.44
52     UG      New Fork R. nr Big Piney, WY                       0.43
53     SJ      Florida R. inflow to Lemon Res., CO                0.43
54     Vi      Virgin R. nr Virgin, UT                            0.41

High flows, best resolution
 1     LG      Strawberry R. nr Duchesne, UT                      0.95
 2     Y/W     Little Snake R. nr Dixon, WY                       0.91
 3     LG      Strawberry R. nr Soldier Springs, unimp.           0.89
 4     UG      Fontenelle Res. inflow, WY                         0.85
 5     LG      Duchesne R. nr Tabiona, UT, unimp.                 0.81
 6     LG      Huntington Cr. nr Huntington, UT                   0.77
 7     SJ      Florida R. inflow to Lemon Res., CO                0.74
 8     LG      Whiterocks R. nr Whiterocks, UT                    0.72
 9     Gila    Salt R. nr Roosevelt, AZ                           0.70
10     LG      Duchesne R. at Myton, UT, unimp.                   0.68

High flows, worst resolution
45     Gila    Verde R. bel. Tangle Cr., abv. Horseshoe Dam, AZ   0.50
46     SJ      La Plata R. at Hesperus, CO                        0.49
47     MS-UC   Eagle R. bel. Gypsum, CO                           0.49
48     Vi      Virgin R. nr Virgin, UT                            0.49
49     Vi      Virgin R. nr Hurricane, UT                         0.49
50     G/D     Uncompahgre R. at Colona, CO                       0.47
51     SJ      Piedra R. nr Arboles, CO                           0.44
52     Y/W     White R. nr Meeker, CO                             0.43
53     Gila    Gila R. at Calva, AZ                               0.34
54     UG      Henrys Fork nr Manila, UT                          0.25

ᵃ Sum of high and low probabilities.

List of figures

1. Location of the 136 water supply outlook forecast points in the Colorado River Basin. The 54 points used in this study are shown by (distinguishing characteristics).

2. Example reliability diagram illustrating relative frequency for various forecast skills.

3. Example discrimination diagram illustrating relative frequency.

4. For each year at three forecast points: (a) forecast/observed values, with each circle representing a different month; (b) observed/average observed values; (c) years used in computing the climatological average on which the forecast is based; and (d) the forecast period associated with each month, with the top hatch marking the first month and the lower hatch marking the last month of the forecast period.

5. For the same sites as in Figure 4: the left column shows forecast versus observed values for each month, with each point representing a single year and a different symbol for each forecast period (the 1:1 line is provided for reference); the right column shows f_i/o_i against o_i/ō for April and May (the horizontal lines at 0.8 and 1.2 are provided for reference).

6. Summary and correlation measures, R² (left column) and NSC (right column), associated with forecasts issued in January (top) through May (bottom) for the sites used each month. There were 54 sites used in January–April and only 47 in May, because the Gila River Basin sites do not issue May forecasts.

7. Nash–Sutcliffe coefficient in April for the entire area and each sub-region.

8. Skew coefficient G in April for the entire CRB area and each sub-region.

9. Frequency histograms of hit rate for observations in the lowest 30% of flows, the middle 40% of flows, and the upper 30% of flows, plus the cumulative frequency.

10. Frequency histograms of threat score for observations in the lowest 30% of flows, the middle 40% of flows, and the upper 30% of flows, plus the cumulative frequency.

11. Frequency histograms of false alarm rate for observations in the lowest 30% of flows, the middle 40% of flows, and the upper 30% of flows, plus the cumulative frequency.

12. Frequency histograms of probability of detection for observations in the lowest 30% of flows, the middle 40% of flows, and the upper 30% of flows, plus the cumulative frequency.

13. The LEPS value (top left), Brier score and ranked probability score (bottom left), and the associated skill scores (right column) for the New Fork River near Big Piney (USGS #9205000). The circles are the climatological scores, the crosses the forecast scores.

14. Monthly average (a) LEPS skill scores, (b) Brier skill scores and (c) ranked probability skill scores for each basin.

15. Flow histograms (resolution) for the Green River at Warren Bridge (USGS #9188500).

16. Reliability diagrams for the Green River at Warren Bridge (USGS #9188500).

17. Discrimination diagrams for the Green River at Warren Bridge (USGS #9188500).
Colorado River Near Dotsero (Site #9070500) Reliability diagrams for Gunnison River Inflow to Blue Mesa Reservoir (Site #9124800) 10/20/202210/20/2022 39 ... were the Gila River basin locations; two were in the San Juan River basin (San Juan River near Bluff and the San Juan River inflow to Navajo Reservoir), one along the main stem of the upper Colorado. .. Snake River near Dixon, respectively) However, four of the remaining San Juan River basin sites had SSRPS values in the top ten (averaging 30-40) and four of the Yampa and White River basin sites... usually averaging less than 0.5, with values less than or equal to 0.3 in January and March at many of the basins Low and high flows have the poorest resolution in the Virgin River basin The best
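The categorical measures shown in Figures 9-12 (Hit Rate, Threat Score, False Alarm Rate, Probability of Detection) all derive from a 2x2 contingency table of category forecasts versus observations. The sketch below is illustrative only: the function name, the three flow-category labels, and the sample data are assumptions, not taken from the study, and Hit Rate is taken here as the overall proportion of correct yes/no calls for a category.

```python
def categorical_scores(forecasts, observations, category):
    """Score how well category forecasts verify for one flow category.

    forecasts, observations: sequences of category labels (e.g. "low",
    "mid", "high"), one pair per forecast year. Labels are illustrative.
    """
    a = b = c = d = 0  # hits, false alarms, misses, correct negatives
    for f, o in zip(forecasts, observations):
        if f == category and o == category:
            a += 1          # hit: category forecast and observed
        elif f == category:
            b += 1          # false alarm: forecast but not observed
        elif o == category:
            c += 1          # miss: observed but not forecast
        else:
            d += 1          # correct negative
    n = a + b + c + d
    return {
        # proportion of all forecasts that were correct yes/no calls
        "hit_rate": (a + d) / n,
        # hits relative to every case forecast or observed in the category
        "threat_score": a / (a + b + c) if (a + b + c) else float("nan"),
        # fraction of category forecasts that did not verify
        "false_alarm_rate": b / (a + b) if (a + b) else float("nan"),
        # fraction of category observations that were forecast
        "prob_of_detection": a / (a + c) if (a + c) else float("nan"),
    }

# Hypothetical six-year record for one forecast point.
fcst = ["low", "mid", "mid", "high", "mid", "low"]
obs = ["low", "mid", "low", "high", "high", "mid"]
scores = categorical_scores(fcst, obs, "low")
```

A low False Alarm Rate with a low Probability of Detection for the same category reproduces the pattern described in the abstract: extreme flows are rarely forecast wrongly, but they are also forecast far less often than they occur.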
