Slide 53: Ken Nowak: All right at this point I’m going to turn it back over to Alan as we start looking at
actually the results of this modeling framework that he and I have just described.
Alan Butler: Thanks, Ken. We’ll now go through and look at the different system reliability results.
But, first we’ll start with some of our key modeling assumptions so that you can sort of better understand the results.
We’ll look at example results for all those different levels that we just discussed - the system response variables, resource metrics,
indicator metrics, vulnerable conditions - and then we’ll get into sort of a portfolio tradeoff comparison.
Slide 54: Alan Butler: Again we’re just going to look at examples of all of those different levels of results
that we discussed in the system reliability framework as we go through this section.
Slide 55: Alan Butler: To start with, some of the key modeling assumptions that we made. Well, we
started off by combining all of the different supply and demand combinations and then modeled those as a baseline,
so with no options or strategies in place, and then with the options and strategies of those portfolios in place.
We also had two different assumptions for how Lake Powell and Lake Mead operate after 2026.
Currently we have interim guidelines that dictate the operation of these reservoirs through 2026,
but it’s uncertain how we’ll operate them after that.
We had two assumptions that we used. The first was that we would extend these interim guidelines through 2060, and the second
was that we could revert to the Interim Guidelines EIS no action alternative.
When thinking about the shortages in the upper basin, it’s important to understand that most of the shortages in the upper basin are hydrologic,
meaning there’s just not enough flow in a given reach to meet the demands in that area.
We also had another shortage metric in the upper basin, and this is the Lee Ferry deficit that Ken described.
Again, any time the flow at Lee Ferry is less than 75 million acre-feet, we report that magnitude as a Lee Ferry deficit.
And we also inject water above Lake Powell and then release it from Lake Powell to make sure that we meet that 75 million acre-feet threshold.
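The deficit accounting just described can be sketched as follows. This is a hypothetical simplification: the function name is mine, and I assume the 75 million acre-foot threshold applies to a running ten-year total of flow at Lee Ferry (as in the Colorado River Compact); the actual CRSS accounting may differ in detail.

```python
# Hypothetical sketch of the Lee Ferry deficit check described above.
# Assumption: the 75 maf threshold applies to a running 10-year total of
# flow at Lee Ferry; CRSS's actual accounting may differ.

def lee_ferry_deficits(annual_flows_maf, threshold_maf=75.0, window=10):
    """Return the deficit magnitude (maf) for each 10-year window."""
    deficits = []
    for end in range(window, len(annual_flows_maf) + 1):
        total = sum(annual_flows_maf[end - window:end])
        # Any shortfall below the threshold is reported as a deficit.
        deficits.append(max(0.0, threshold_maf - total))
    return deficits

# Example: steady 8 maf/yr meets the threshold; 7 maf/yr falls 5 maf short.
print(lee_ferry_deficits([8.0] * 10))  # [0.0]
print(lee_ferry_deficits([7.0] * 10))  # [5.0]
```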
In the lower basin, shortages are a little bit different. Any time there’s a shortage beyond what’s specified in the interim guidelines or the
no action alternative we don’t assign that shortage to any particular state.
So currently the interim guidelines specify shortages for the lower basin states any time Lake Mead is below 1075.
However, in certain cases Lake Mead is simply so empty that it can’t meet the demands below Lake Mead, and so these shortages aren’t
assigned to any individual state or user. We also assumed that Mexico shared in shortages under both of our policy assumptions.
Slide 56: Alan Butler: As we get into modeling both the baseline and the options and strategies, it's
important to understand how the demands above apportionment are handled. As Carly showed in the demands section, there are
demands above the current apportionment. When we did our baseline simulations, we capped deliveries at apportionment.
So we only delivered 7.5 million acre-feet to the lower basin states except for under surplus conditions.
However, once we started simulating with options and strategies in place, we would deliver more than 7.5 million acre-feet if an option such
as a desalination plant had been turned on.
When we implemented conservation, we applied this first to those demands above apportionment.
And any time Lake Mead fell below 1050, if there were options online that imported water into the system, then we would switch this
imported water from benefiting the demands above apportionment to benefiting the system.
So we’d actually use that water to help prop up Lake Mead any time it fell below 1050.
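A minimal sketch of that delivery logic, under stated assumptions: the function name and arguments are hypothetical, volumes are in million acre-feet, and the real CRSS rules are considerably more detailed.

```python
def lower_basin_delivery(demand_maf, imported_maf, mead_elevation_ft,
                         surplus=False, apportionment_maf=7.5):
    """Return (delivery, water redirected to the system).

    Baseline rule: cap deliveries at apportionment except under surplus.
    With options online: imported water (e.g., from a desalination plant)
    may serve demands above apportionment, unless Lake Mead falls below
    1050 ft, in which case that water is switched to prop up the system.
    """
    if surplus:
        return demand_maf, 0.0
    if mead_elevation_ft >= 1050:
        # Imported water benefits the demands above apportionment.
        return min(demand_maf, apportionment_maf + imported_maf), 0.0
    # Below 1050 ft: imported water benefits the system instead.
    return min(demand_maf, apportionment_maf), imported_maf
```

For example, with 8.0 maf of demand and 0.6 maf of imported water, this sketch delivers 8.0 maf when Lake Mead is at 1100 ft, but only 7.5 maf at 1040 ft, redirecting the 0.6 maf to the system.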
Slide 57: Alan Butler: When you combined all of the different scenarios that we’ve had - the four
supply scenarios, the six demand scenarios and our two policy scenarios - it results in 48 baseline scenarios that we simulated using
CRSS and we evaluated the reliability metrics across all of these different scenarios.
Due to the varying number of realizations of the different supply scenarios, this results in over 20,000 individual realizations for our baseline
scenario.
When you combine that with our four portfolios, that results in 192 additional scenarios.
And when you look across all of these 240 different scenarios, there’s over 5.8 million years of data that we simulated.
So as we start to look at the results, we are trying to roll up this 5.8 million years of data into something digestible and simple.
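The scenario counting above is straightforward arithmetic and can be checked directly; the realization and simulated-year totals depend on per-scenario trace counts reported in the study, so they aren’t derivable from these numbers alone.

```python
# Scenario counting as described: supply x demand x policy combinations,
# then the same set rerun under each of the four portfolios.
supply, demand, policy, portfolios = 4, 6, 2, 4

baseline = supply * demand * policy        # 48 baseline scenarios
portfolio_runs = baseline * portfolios     # 192 additional scenarios
total = baseline + portfolio_runs          # 240 scenarios in all
print(baseline, portfolio_runs, total)     # 48 192 240
```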
Slide 58: Alan Butler: Now to start with we’ll begin by looking at the system response variables and
shown here is just an example of one of those variables.
This is Lake Mead pool elevation. I won’t explain it now as I’ll get to that in a minute.
Again the system response variables are pretty much just raw modeling output -- things such as elevations or stream flow.
Now I’ll move into our sort of dynamic workbooks to present additional system response variables.
And Tableau is just a piece of software that we used that allows us to display the results and kind of dynamically select which scenarios we
want to look at, at any one time.
Alan Butler: So Tableau, as you’ll see in a minute, really does just help us visualize the data.
It’s sort of similar to Excel in that there’s different worksheets at the bottom. However, it allows you to pretty easily select what
scenarios you want to look at, at any one time.
So to start with we’re going to look at the annual flow of the Green River at the Green River Utah Gage.
And to start out with we’ll look at just the median flow for the Observed Resampled hydrology and for the C1 demand scenario.
And so as we adjust our filters here, you start to see the 10th, 50th and 90th percentiles of stream flow on the Green River.
And to begin with, because of the way the Observed Resampled hydrology works - in that each future year is simulated with every
historical year - you see a nice smooth trend in the data, and this is primarily showing the effects that demand has on the system at the Green
River Gage.
And so you can see that as demands increase, we’re getting slightly lower stream flow going forward through the future.
If you add on an additional supply scenario - the Paleo Conditioned supply scenario - you see a little bit more variability due to the way this
scenario works.
So, it’s not just a smooth trend anymore, although the trend in demands is apparent, but you do see more variability year to year.
However, the percentiles are pretty close to the same as our Observed Resampled since it’s based on our Observed hydrology.
When you add in the Downscaled GCM Projected scenario, then you start to see a little bit different information.
There’s much higher variability from year to year shown in this blue line. However, on the Green River basin the magnitude of the flow is roughly
the same as the Observed hydrology and this is consistent with what we’re seeing out of those GCM models as far as temperature and
precipitation projections in the Green River sub-basin. This won’t be the case when we look at a different sub-basin in a minute.
However, before we go there we can go ahead and add on all of our different remaining demand scenarios and the remaining supply scenarios
and sort of get an understanding of the entire variability and the range of this flow that we could expect to see on the Green River
anywhere from, you know, 2 million acre-feet per year to upwards of 6 million acre-feet per year across all scenarios.
When we move to the San Juan River Basin, we start to see a little bit different picture -- especially in comparing the Observed hydrology
to the GCM projected hydrology.
As you’ll see here most of the Observed hydrology is quite different from the GCM projected hydrology and there’s a much more
drastic trend in the GCM hydrology in this sub-basin.
We see that both at the 90th and the median and that they’re quite different.
And again this is very consistent with what we’re seeing out of the GCMs and the temperature and precipitation projections in the San Juan are
different from those in the Green River Basin.
Again under the Observed hydrology you kind of see a smooth trend indicating the effects of demand within the basin.
And when you look at the GCM hydrology you see more variability but a reduced median flow.
Reducing the median flow in the San Juan Basin from around a million acre-feet per year under the Observed hydrology down to, you know,
maybe half a million or a little bit more under the GCM projected.
Moving on to our next system response variable we’ll take a look at the Lee Ferry deficit.
This figure shows the different magnitudes of the Lee Ferry deficit in the upper three panes and then the percentage of traces that experience a
Lee Ferry deficit in this lower pane.
The two different columns compare the effects of the different policies, so whether or not we’re extending the interim guidelines after 2026 or if
we’re reverting to the no action alternative.
We’ll notice here that under the Observed hydrology there’s no instances in which we simulated a Lee Ferry deficit with our Observed
Resampled hydrology.
However, as we get to other hydrologies, we do see that under the Paleo Conditioned, 5 to 10% of traces are
experiencing a Lee Ferry deficit in any one year.
And as we get to the GCM Projected this percentage increases.
Here the GCM Projected hydrology is showing upwards of 20% of traces are experiencing a Lee Ferry deficit in any one year.
As we compare the left column with the right column, you’ll notice that the policy doesn’t really have a huge effect on the number of traces
in which a Lee Ferry deficit occurs.
As we look at the magnitudes, you’ll notice that regardless of what supply scenario the deficit is occurring under, the magnitudes of these deficits
are roughly the same at the 10th, the 50th and the 90th percentiles. And this just has to do with the way Lake Powell and Lake Mead are
operated and some constraints on the releases from those reservoirs.
The final system response variable that we’ll look at before moving on is the lower basin shortage.
Again we’re showing in the top three panes different magnitudes - the 10th, 50th and 90th percentiles of these shortage magnitudes.
And in the lowest pane we’re showing the percentage of traces where a lower basin shortage occurs under the two different policies.
As we look at the later decades, somewhere around 50 to 75% of the traces have a shortage associated with them.
And there is about a 20 to 25% increase from the Observed hydrology to the GCM hydrology in
the number of traces that have a shortage.
When you look at the magnitudes, you’ll notice that the interim guidelines magnitudes follow very defined benchmarks, and this has to do with
the way the interim guidelines assign shortages.
There’s particular magnitudes when Lake Mead falls below 1075, 1050 and 1025, so these different magnitudes are reflected at those
different percentiles.
When you start to look at the no action alternative - which calculates a shortage volume necessary to keep Mead above 1000 -
you start seeing a lot of magnitudes that are above a million acre-feet as you get into the 50th and 90th percentiles to try and keep Lake Mead
above 1000.
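The contrast between the two shortage policies can be sketched as below. The elevation triggers are the 1075, 1050 and 1025 benchmarks mentioned above, but the tier volumes are illustrative placeholders, not the actual interim guidelines schedule.

```python
def interim_guidelines_shortage(mead_ft,
                                tiers=((1075, 0.4), (1050, 0.5), (1025, 0.6))):
    """Stepped shortage (maf) keyed to Lake Mead elevation triggers.
    Tier volumes here are illustrative, not the actual schedule."""
    shortage = 0.0
    for trigger_ft, volume_maf in tiers:
        if mead_ft < trigger_ft:
            shortage = volume_maf  # deeper tiers supersede shallower ones
    return shortage

def no_action_shortage(volume_needed_maf):
    """Under the no action alternative, the shortage is whatever volume
    is computed as necessary to keep Lake Mead above 1,000 ft, so it is
    not limited to fixed steps and can exceed a million acre-feet."""
    return max(0.0, volume_needed_maf)
```

This is why the interim guidelines results cluster at defined benchmarks while the no action magnitudes spread well above a million acre-feet at the higher percentiles.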
Now we’ll go back to the presentation and we’ll look at an example of a resource metric - one of those 90 different
resource metrics.
Slide 59: (Resource metrics) Alan Butler: These are the 90 different resource metrics and they’re all included in the
appendices to Technical Report G.
Again these are primarily raw modeling output and then they have typically a reference value associated with them.
So in this particular example, we’re looking at the pool elevation at Blue Mesa for each month.
And each one of those red lines is an elevation in which a boating ramp or marina would go out of service.
In this particular view, for the baseline scenario shown in the green box, we’ve combined all supply and demand scenarios, and
then the different portfolios are shown as other colors, again with all of the supply, demand and policy scenarios combined.
The figures show the 10th, 25th, 50th, 75th and 90th percentiles in the boxplots, and they show the effects that the portfolios have on propping
up Blue Mesa. And so for example in August you can see that at the 25th percentile in the last time period we’re well below all of the
shoreline facilities.
However, when we implement portfolios, we drastically increase that pool elevation and are above the reference value for several of those
facilities.
And so we have figures similar to this for all 90 resource metrics again that are available in the appendices to Technical Report G.
Slide 60: Alan Butler: At this point in time we’ll start to look at some examples of the indicator metrics
and the vulnerabilities and I’ll turn it back over to Ken to go through those.
Ken Nowak: All right thanks, Alan.
So to begin here we’re just going to kind of review this idea of vulnerability and indicator metric.
Vulnerability is a combination of usually an individual metric and some threshold to ultimately provide a resource-specific
perspective on a system condition.
You know, the example being, if you look at Blue Mesa as Alan was just showing, how far or how persistently does the reservoir have
to fall below one of those recreational access reference values to identify a really vulnerable state.
And then we can also present the results as percent of traces, meaning the percent of futures that incurred this outcome at least once, or percent of
years, meaning across all the futures we considered, the percent of those years that actually included this outcome.
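Those two reporting conventions can be sketched for a hypothetical boolean matrix `vulnerable[trace][year]` indicating whether a given trace incurred the vulnerability in a given year; the function names are mine.

```python
def percent_of_traces(vulnerable):
    """Percent of futures that incurred the outcome at least once."""
    hits = sum(1 for trace in vulnerable if any(trace))
    return 100.0 * hits / len(vulnerable)

def percent_of_years(vulnerable):
    """Percent of all simulated years that included the outcome."""
    total_years = sum(len(trace) for trace in vulnerable)
    vulnerable_years = sum(sum(trace) for trace in vulnerable)
    return 100.0 * vulnerable_years / total_years

traces = [[False, True, False],   # vulnerable in one year
          [False, False, False],  # never vulnerable
          [True, True, False]]    # vulnerable in two years
print(percent_of_traces(traces))  # about 66.7 (2 of 3 traces)
print(percent_of_years(traces))   # about 33.3 (3 of 9 years)
```

Note the two measures can diverge sharply: a trace that dips below a threshold once counts fully toward percent of traces but only slightly toward percent of years.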
And so to start exploring this, we’ve got an example figure here.
This is Lake Mead falling below that 1000-foot pool elevation as the vulnerability threshold.
This is presented as percent of traces instead of percent of years.
What you can see is there’s three time windows - the interim period 2012 to 2026, a middle period through 2040 and then 2041 to 2060.
Within each one of those three time periods you have results broken out by our different supply scenarios.
Demand scenarios are shown by different symbols and our policy choices are shown by color - blue or orange.
In the first period, you see there are no policy differences because we’re always operating under the interim guidelines.
Among the blue marks there’s really almost no separation amongst our different demand scenarios.
That’s because they really haven’t significantly deviated in their trajectories during that first period.
But, then we do see some differences depending on which hydrology you’re under.
Most notably the Downscaled GCM has about 28% of traces incurring this vulnerability at least once during that period,
while our other scenarios are below 20% and some of them are in fact zero or very low.
As we move to the middle period, we start to see some separation amongst the policy and the demand scenarios but, still see generally
that being under the Downscaled GCM Projected hydrology is what’s driving the highest risks.
Finally, when we move to the third period, we see that there’s actually quite a wide range depending on your policy and demand
combination.
For example, under Observed Resampled we see some combinations having zero percent of traces while others have as high as 50.
And so that really starts to allow you to explore what futures and decisions are driving or reducing the risk of this
particular event occurring.
So at this point we’d like to go back to the Tableau tools that Alan started to show.
Ken Nowak: All right so here we’re displaying a similar figure.
This time instead of looking at Mead falling below 1000 - which is a water delivery indicator metric - we’re actually looking at Mead falling
below 1050.
1050 is actually the electric power reference value and so we’re looking at the percent of years in which Lake Mead falls below 1050 -
which is the vulnerability threshold for hydropower generation.
What we’re showing right now is simply the baseline results, the Observed Resampled hydrology and the Current Projected demand
scenario.
As we start to introduce some more hydrology, we see different colors introduced here.
And what we can see is that the red - the Downscaled GCM - is, across our three time periods, always the one with the highest number of
years vulnerable.
And then as we introduced more demands, we start to see spread here. Again, color is our supply scenario and the different symbols are the
demand scenarios.
Now finally, we can start to look at our different portfolios meaning what we’ve done in terms of options implemented.
So if we introduce one portfolio here, we now have two columns for every time period.
And you see in the first time period there’s almost no change meaning we probably don’t have options on hand to deal with these
vulnerabilities fast enough as they’re approaching and so the portfolio has very little effectiveness.
However, as we move on to our middle and latter periods, we actually see substantial savings in terms of the percent of years vulnerable.
Most noteworthy of course is that GCM scenario where you’re up around 65 to maybe even 70 percent of years vulnerable.
And with the implementation of a portfolio - in this case we’re looking at Portfolio A - we can bring that down in the range of 35 percent of
years vulnerable.
Obviously, that’s not an end goal or something that we want to really focus on as an acceptable or unacceptable outcome, but just that we have
the capability to make significant reductions in the number of years or traces vulnerable.
To look at one other result in this manner, we’ll stick with the electric power resource category. And we will look at Lake Powell falling below
3490 - which is again a hydropower generation elevation vulnerability condition - and we see very similar results here.
We see that in the first period, the interim period, across the board we have low vulnerabilities and that as we move forward in
time we see vulnerability increasing.
And as we move further in time, we see the effectiveness of the portfolio increasing.
You see a nice scatter amongst the different scenarios as we move further and you start to in fact see some overlap between the colors and
symbols here in this third window.
All right so we’ve just looked at two different indicator metrics and tried to understand how supply, demand and policy impacts the results.
However, it’s pretty challenging to look at more than one indicator metric at a time when you breakout across those supply, demand and
policy dimensions.
And so to facilitate a broader analysis, we look at all of the different traces aggregated together and look at results in that
manner.
Here we’re showing percent of years vulnerable by three time periods, across all traces, and then also the percent of traces incurring that
vulnerability.
And what we’re looking at here is the indicator metric for shoreline recreational facilities - in particular we can start with Blue Mesa at the
top.
You see that there’s quite a few traces vulnerable - 96, 96, 98 percent - and that’s really because the criterion for vulnerability at Blue
Mesa has occurred in the past.
And so if these elevations have been seen in the past, you can imagine that with increasing demand and possibly reduced supply you would
see them happening quite frequently in fact in the future as well.
We also have Navajo, Flaming Gorge, Lake Powell and Lake Mead as shoreline indicator metrics throughout the basin.
So now if we go and we introduce a portfolio, you’ll see that we have the baseline results and Portfolio C here.
Note that we’ve shifted now to just percent of traces, as opposed to showing both percent of traces and percent of years.
And what we can see is that there are some reductions in vulnerability, though not always a significant reduction, depending on the portfolio.
But, another interesting outcome is that if you look at the Portfolio C results for our, I believe this is Lake Powell here,
you actually see the highest number of traces vulnerable in the middle time period -- which is somewhat curious.
But, what we believe is happening here is that in the first period you’re relatively low in terms of vulnerability.
But, as that risk increases, you may not have enough options either available or implemented in time to really bring that down.
But, by the time you get to the final period, you’ve in fact reduced and brought on enough options to make a significant dent.
And you see sort of a pyramid-shaped trend in the percent of traces vulnerable here, versus the monotonically increasing trend that we see
under the baseline results.
So what we’ll do here is now switch to a similar ecological view. What we’re seeing now are ecological indicator metrics under the baseline
and Portfolio C, and then I’ll introduce Portfolio B.
And so the general theme is across both portfolios we’re seeing significant reductions in vulnerabilities relative to the baseline.
However, in some cases - for example the Yampa - Portfolio C has even more notable reductions.
This is due to the options included in that portfolio that are helping to keep more water in stream.
And the reason it’s so pronounced on the Yampa is because of the nature of this indicator metric whereby for all intents and purposes
more flow in the river reduces vulnerability; whereas some of the other ecological indicator metrics have more complex requirements that
don’t simply equate more flow in the river to improved results.
They require specific timing and certain thresholds in terms of the volume and peak volumes.
And so you don’t see the same type of improvement on Portfolio C for some of the others as much as you see for the Yampa.
But we do see improvements across the board here for all of our ecological indicator metrics throughout the basin.
Last for this view of percent of traces vulnerable we turn to our water delivery indicator metrics. We have five of them here.
To start with we have upper basin shortage. We can see that under Portfolio B and Portfolio C we’re seeing continued reduction in percent of
traces as we move forward in time and to the point where in the third time window we’re going from about 60 percent of traces vulnerable down
to about 28 percent, so about a halving of the number of traces that are incurring a vulnerability in that third period.
We also have the Lee Ferry deficit that we’ve been over, Lake Mead pool elevation, two types of lower basin shortage indicator metrics and
then finally lower basin states demand above apportionment.
And as Alan mentioned under the baseline we did not deliver to demands above apportionment with the exception of surplus years.
And as a result you see a very high number of traces incurring a vulnerability, suggesting that there are unmet demands that
are substantial enough to flag our vulnerability concept.
However, under both portfolios once we have options in place - meaning introducing water that we are allowing to be available to go towards
these demands above apportionment - we see that they’re brought down considerably and are in line with the other vulnerabilities that we’re
seeing across the lower basin.