[0:01] [silence]
Steve Aichele: [0:10] Happy New Year, everyone. Welcome to
the sixth installment of the USGS Hydrography Seminar Series. My name's Steve Aichele. I'm
one of the coleads for Hydrography for the moment. I'm just acting. Al Rea is the other
colead. He's helping out here. You'll hear from him in a little bit.
[0:33] We also want to introduce Sue Buto who is on the phone and will be helping with
some of the questions and answer sessions later on in the call. Sue is our acting lead
for Watershed Boundary Dataset.
[0:49] Finally, Allison Jason is basically running the technology here and making sure
the phones, and the webinar, and all these various gizmos work throughout the session,
and providing an invaluable service in that respect.
[laughs] Oops. It's not in fact the fourth agenda, it's the sixth.
[1:16] Our two key talks today, the first one will be from Dr. David Tarboton on high
resolution hydrography and hydrologic modeling. Then we will have a short break for questions
and answers. As you can probably tell, there are quite a few people on the line, something
approaching 200 attendees. For questions and answers, we're going to handle that by the
chat window in the WebEx interface.
[1:52] Sue and Allyson will take turns reading off those questions. Dr. Tarboton will do
his best to answer them. Al Rea:
[2:02] Steve, it's labeled Q&A. Steve:
[2:06] Q&A is what the label is in the interface, very good. Al:
[2:09] Right. Steve:
[2:11] Actually, I can't see the interface when I'm presenting here. Then, we'll have
an update from Al and Ellen Finelli, with the National Geospatial Technical Operations
Center, on the NHDPlus high res progress, they had a few data sets delivered right before
Christmas. It's actually kind of exciting. We'll have another Q&A session related to
that. You may hear a little bit of chiming going on right now.
[2:45] During the session, most of the attendees will be placed in lecture mode, which means
you can talk on the phone, but we won't hear any of it. Your dog can bark, whatever. That's
also why we're handling some of the question and answer through chat.
[3:07] Moving on through the session, if we don't get to your question, we can tackle
those by email or we'll also post this information on the NHD website, along with a
recording of the session that will provide something to refer back to later. At the end
of the session, there will be a very short poll.
[3:35] We'd appreciate it if you provide some feedback on the session and thoughts on some future
sessions. With that, I will pass the ball over to Dr. Tarboton and we can kick the show
off here. Dr. David Tarboton:
[3:53] I'm David Tarboton. I'm at Utah State University. Thank you very much for the opportunity
to present to you today. This talk about high resolution hydrography and hydrologic modeling
draws on ideas from research I've done on hydrologic digital elevation model analysis,
GIS in hydrology, and information systems and cyber infrastructure for advancing hydrology.
[4:21] My background has involved raster based digital elevation model analysis much more
than hydrography. There will be a fair bit of a raster orientation in what I say. I should
also note that some of the ideas have come from collaborators, who are listed at the bottom.
[4:38] David Maidment, who I've worked with closely, and my student Nazmus Sazib, and
other students at [inaudible 4:41] Texas. I've also put a few
slides in there on the GRAIP project and acknowledged collaborators on that. I'm appreciative of
the funding from the various agencies that have supported this.
[4:57] Hydrologic models are required for forecasting, flood plain mapping, water quality
assessments, river restoration, setting environmental flows and land management.
[5:13] There was a grand challenge written by the National Research Council in 2001 that
stated better hydrologic forecasting that quantifies effects and consequences of land
surface change on hydrologic processes is something that should be done.
[5:30] We really need modeling to inform the infrastructure that's going to protect us
from things in the pictures at the bottom, floods and drought, natural disasters that
sometimes aren't that natural and that, with a bit of planning, we could have avoided. On
this slide, the sentence in blue at the top broadly captures the goal of hydrologic modeling
research.
[5:57] It's to advance the capability for hydrologic prediction by developing models
that take advantage of new information and process understanding enabled by new technology.
[6:13] This is part of the general trend to more explicit physically based spatially distributed
models. As we get more data, the promise is that it's going to help our models have better
predictions and also represent processes better.
That's where NHD and NHDPlus and high resolution hydrography, in general, come in.
Of course it's providing high resolution hydrography data that we can use in
hydrologic models. The image at the right is a sort of generic hillslope representation
that actually comes from the RHESSys model.
[6:56] I'm going to give two examples. The first is going to be based on floodplain mapping,
and that's going to be most of the presentation. Then I've got some slides at the end on sediment
inputs from roads and the impact on streams. Then I'll talk a little bit about some of
the questions and challenges faced as we go to high resolution.
[7:24] There's an activity going on called the National Flood Interoperability Experiment
that was conceived by David Maidment. This got me involved in the question of floodplain
mapping. There we are really trying to go from weather forecasts put out by NOAA, simulated
with models such as the WRF model, into hydrologic models. In the case of this chain, it's the
Noah-MP model.
[7:57] Then it goes into a hydraulics model. This is actually an animation from the RAPID
model that one of David Maidment's students worked on in Texas. Then that would go into
mapping of the flood inundation, and trying to get the information to the emergency responders
that are trying to save lives.
[8:21] That's the problem, and we can see in these pictures at the left that there's
critical infrastructure that gets flooded and damages can mount up.
[8:35] The NFIE, or the National Flood Interoperability Experiment, is really a community partnership
between government and academic researchers. It's centered around the National Water Center
in Tuscaloosa that's really a partnership of many federal agencies. I think the National
Weather Service is the lead, but USGS is certainly involved. FEMA is certainly involved. Army
Corps of Engineers.
[9:00] Then the academic community is involved through CUAHSI, the Consortium of Universities
for Advancement of Hydrologic Science, that is funded by the National Science Foundation.
As part of the NFIE, we're really trying to extend to what's called here Continental Hydrology.
[9:25] Right now the river forecasting system that's operated by the National Weather Service
forecasts at 3,600 points nationally, and you can see the coverage of them. The typical
basin that's used there is about 400 square miles. With NFIE we're extending that to try
and predict at every single reach and catchment from NHDPlus. This is where the
connection to hydrography comes in.
[9:57] That would be forecasting at about a one square mile resolution and essentially
becoming continuous across the continent. What has to happen to get that done?
[10:16] I showed earlier the pictorial representation where we're going from WRF as the weather
model, then Noah-MP as a hydrologic model, and RAPID or SPRNT producing flows at the stream reach
scale, but we need things like the ability to obtain reach level hydraulic properties
for input to those models for each of the 2.7 million reaches.
[10:45] We also need a way to map from the reach scale, at which that information is being
produced, to stage and flood inundation. The idea is to exploit high resolution topography and
the 1:1 relationship between the stream reaches (this is where hydrography comes in) and the
catchments, in terms of elevation.
[11:10] This slide gives a few details on the SPRNT model. This is a model developed
by Ben Hodges at the University of Texas. It solves the St. Venant equations, which
are the fundamental equations for hydraulics in channels. It requires as input the hydraulic
geometry, the cross sectional area, the discharge, the wetted perimeter, and those sort of
things.
[11:42] It actually takes advantage of parallel computing, using some of the concepts involved
in chip design, so there's a partnership between the Texas group and IBM in doing that. To
provide some of the information needed, I've been throwing in some ideas related to my
work on terrain analysis.
[12:09] We've really started evaluating the concept of the height above the nearest stream
for floodplain mapping. The idea is, coming out of SPRNT, you would have discharge in
every single reach in, say, the National Hydrography Dataset. From that discharge you can get a
stage, an average stage, over the reach.
[12:33] Then we compute from a digital elevation model the height above the stream. The way
that computation is done is you go to each grid cell on the terrain and effectively slide
downhill following the flow directions until you hit the stream and record the elevation
difference. Computation is actually done in the opposite direction. It's done working
back upwards from the stream.
[13:01] Every time you go back up, you look at what the elevation difference is and add
it up as you go. It works out more logically as a way to compute it. We've got a raster that
gives the height above the stream. We've got the catchments, which are the basic modeling
elements being used for the hydrology, and we've got the individual stream reaches.
[13:25] They are connected through the ComID as a unique identifier in the dataset that's
connected to catchments and flow lines. For each reach with a certain depth, you can then
use that value within the catchment to identify the inundation map. That's quite a nice connection
between vector GIS information and raster GIS information.
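That vector-raster connection can be sketched as follows; the height-above-stream raster, catchment-ID raster, and per-ComID stage values below are hypothetical stand-ins for the real datasets:

```python
import numpy as np

# Hypothetical aligned rasters: height above nearest stream (meters),
# and the ComID of the catchment each cell belongs to.
hand = np.array([[0.0, 1.2, 3.5],
                 [0.4, 2.0, 4.1],
                 [0.1, 0.9, 2.2]])
catch_id = np.array([[101, 101, 102],
                     [101, 102, 102],
                     [103, 103, 103]])

# Reach-average stage for each ComID, e.g. from a hydraulic model.
stage = {101: 1.0, 102: 2.5, 103: 0.5}

# A cell is inundated when its height above the stream is at or below
# the stage of the reach its catchment is linked to through the ComID.
stage_grid = np.vectorize(stage.get)(catch_id)
inundated = hand <= stage_grid
```

The threshold differs per catchment, exactly as described for the floodplain maps later in the talk.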
[13:53] One of the points that's going to be coming out of this is an importance for
all of this information to be consistent within itself. To do that we used the TauDEM software.
This is software that my group has developed over the years that really does stream and
watershed delineation.
[14:15] It has, as one of its capabilities, a multiple flow direction, flow field model
for the calculation of flow based derivative surfaces. This drop to stream is an ideal
use of that.
Recently we've been working on making it high performance computing enabled through parallel
processing.
[14:38] It's available as open source, and it's also deployed as an ArcGIS toolbox so
that you can go in and run the tools in the familiar ArcGIS toolbox modes, assuming you're
an ArcGIS user.
[14:52] The way that TauDEM does vertical distance to stream is sort of depicted here.
The D Infinity flow model apportions flow from each grid cell to up to two downslope grid
cells, so it can represent on average the flow paths going down perpendicular to contours,
not trying to resolve it in only one of the directions, which is sort of too coarse a
resolution.
[15:17] Then following along that flow path, it backtracks and it actually can compute
distance in any one of several modes. It can compute distance in a vertical sense, in a horizontal
sense, or along the slope, or along a straight line. There are quite a lot of options there.
Here, we are using the case where we have vertical distance to the stream measured as
input.
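A simplified sketch of the D-Infinity proportioning idea follows. TauDEM computes the flow angle on triangular facets of the elevation surface; the function below just shows how a continuous flow angle (in radians) splits flow between the two neighboring grid directions that bracket it, so treat it as illustrative rather than as TauDEM's implementation:

```python
import math

def dinf_proportions(angle):
    """Split flow between the two of the eight neighbor directions
    (spaced pi/4 apart) that bracket a continuous flow angle (radians)."""
    facet = math.floor(angle / (math.pi / 4))  # which 45-degree sector
    alpha = angle - facet * (math.pi / 4)      # offset within the sector
    p_far = alpha / (math.pi / 4)              # share to the far direction
    p_near = 1.0 - p_far                       # share to the near direction
    return facet % 8, p_near, p_far

# An angle halfway between two directions splits flow 50/50.
print(dinf_proportions(math.pi / 8))  # (0, 0.5, 0.5)
```

Apportioning flow this way lets the average flow path run perpendicular to the contours instead of being forced into one of eight directions.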
[15:47] This is the example that you get in terms of a floodplain map where (and these
are numbers that I made up just for illustration) there's a height associated with each catchment
and that then is used to map the potential flooding within that catchment. The threshold
is different, depending on the catchment that you're in.
[16:13] I just put this slide in quickly to point out that, while we developed an algorithm,
published in this paper, that spoke about all of these distances, the idea of using height
above the nearest drainage to map flooding should really be credited to Nobre, who published
it in these papers in "Journal of Hydrology" and "Hydrological Processes."
[16:40] There's also a need in the hydraulic models to have hydraulic properties for each
reach. It's been quite interesting to me that we can actually use the height above the nearest
stream idea to get that information as well, and really get reach average properties.
[17:02] For each catchment in the area we're dealing with, you can look at each height and identify
cells where the height above the stream of that cell is less than a specific height.
Each cell has a plan area that's evaluated based on its cell size. The surface area of
the channel associated with the height is really just a sum of those plan areas.
[17:29] You can approximate the bed area by summing the bed area of each sloping grid
cell, so take the plan area and multiply it by the square root of one plus the slope squared,
which effectively scales it for being on a sloping plane. Then you've got the wetted bed. You
can get the volume by basically just summing up the volumes in all of the cells, the height times
the area.
[17:54] Then, with surface area, bed area, and volume you can get the cross sectional
area, which is volume divided by length, and that becomes an average cross sectional area. Wetted
perimeter is the bed area divided by length, the top width is surface area divided by
length, and then the hydraulic radius is cross sectional area divided by the wetted perimeter.
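The quantities in the last two paragraphs reduce to a few array sums. A minimal sketch, assuming aligned height-above-stream and slope arrays covering a single catchment (the function name and toy numbers are mine, not from the talk):

```python
import numpy as np

def reach_properties(hand, slope, h, cell_size, reach_length):
    """Reach-average hydraulic properties at a trial height h, from a
    height-above-stream array (hand) and a slope array (rise/run)."""
    wet = hand < h                    # cells inundated at height h
    plan = cell_size ** 2             # plan area of one grid cell

    surface_area = wet.sum() * plan
    # Bed area: plan area projected onto each cell's sloping plane.
    bed_area = (plan * np.sqrt(1.0 + slope[wet] ** 2)).sum()
    # Volume: water depth (h minus height above stream) times plan area.
    volume = ((h - hand[wet]) * plan).sum()

    area = volume / reach_length            # cross sectional area
    perimeter = bed_area / reach_length     # wetted perimeter
    width = surface_area / reach_length     # top width
    radius = area / perimeter               # hydraulic radius
    return area, perimeter, width, radius
```

For example, three flat 10 m cells with HAND values of 0, 0.5, and 2 m at h = 1 m over a 10 m reach give a cross sectional area of 15 m², wetted perimeter of 20 m, top width of 20 m, and hydraulic radius of 0.75 m, the kind of by-hand check described for the illustration catchment.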
[18:15] You get these reach average properties out of this terrain analysis procedure, which
is quite interesting. This shows how it works for one specific catchment that I was using
for illustration. Here I used two heights of one meter and three meter, and evaluated
the various quantities and the various formulas, and checked that the numbers seemed reasonable.
[18:46] Then we went and applied the computation to evaluate it over a range of heights. You
can see how hydraulic radius, cross sectional area, top width, and all of those quantities
vary as the depth varies. I should point out in doing this, this has been calculated from
a digital elevation model raster with knowledge of the location of the positions of the streams.
[19:20] The sorts of results that you get are only going to be as good as that digital elevation
model raster. If you've got a really coarse digital elevation model, it's going to perhaps
miss the streams and introduce errors in some of these parameters.
[19:34] It also requires consistency between where the streams are and the digital elevation
model. For those of you who are familiar with working with elevation data and hydrography
data, you will have noticed that there are often inconsistencies between the two. For this
particular calculation we mapped the streams consistent with the digital elevation model,
but starting each one of them where NHDPlus streams start.
[20:06] That was done so that we basically had a stream network that was consistent with
the digital elevation model but at the same density as the National Hydrography Dataset
so that the streams are basically in the bottom of the valleys. This really speaks to the
need for consistency across datasets.
[20:29] It didn't always work perfectly, and this slide shows one of the problems that
occurs. The digital elevation model dataset has a road here. The hydrography dataset
has a stream crossing, presumably underneath the road. You can see that from the shading of
the DEM. When we applied the process, it actually took the stream off to the north and it eventually
joined up.
[20:56] There's a need for a hydrography conditioned digital elevation model and consistency across
these datasets. That's essentially the example for how hydrography and elevation data are
being combined to help with the National Flood Interoperability Experiment.
[21:17] I wanted to take some time to talk about some of the modeling use cases. I'm
going back to draw on some of the work that I've done with the Forest Service on the GRAIP
Program, the Geomorphic Road Analysis and Inventory Package, to examine the impact on streams
from road erosion.
[21:42] This takes information from a detailed hydrography network that gives you the streams.
It takes a digital elevation model to look at the terrain derived flow field. It takes
field surveys of road and drainage point conditions. Here we have one of the forest technicians
driving along the road with a GPS and with a computer that's being used for data capture.
[22:11] They'll actually go and report on the conditions of the road surface in terms
of its likelihood of generating sediment. This is another case where modeling can be
improved by providing more detailed spatially explicit information.
[22:31] Then, based on each of the road segments involved, there's an aggregation of sediment
from roads to the drain point and then from the drain points to the streams. Here the
digital elevation model is providing road to stream connectivity. While they're doing
this, they will also stop at each drain point and assess whether it's clear or a barrier
for fish passage.
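The two-stage roll-up described above, road segments to drain points and drain points to streams, can be sketched with made-up survey records; the segment names, drain point IDs, ComIDs, and sediment loads below are all hypothetical:

```python
from collections import defaultdict

# Hypothetical field records: (road_segment, drain_point, sediment_kg)
segments = [("r1", "d1", 40.0), ("r2", "d1", 10.0), ("r3", "d2", 25.0)]
# Which stream reach (ComID) each drain point delivers to, from the
# terrain-derived road-to-stream connectivity.
drains_to = {"d1": 201, "d2": 202}

# Stage 1: roads -> drain points.
at_drain = defaultdict(float)
for seg, drain, load in segments:
    at_drain[drain] += load

# Stage 2: drain points -> stream reaches.
to_reach = defaultdict(float)
for drain, load in at_drain.items():
    to_reach[drains_to[drain]] += load

print(dict(to_reach))  # {201: 50.0, 202: 25.0}
```

The per-reach totals are what get mapped onto the hydrography network for the spatially explicit prioritization shown in the results slides.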
[22:59] That allows fragmentation in the stream network to be identified. In this
case, I hope you can see where my mouse is: the yellow ones are partially blocked, so this
will become a fragmented part of the stream network as far as fish are concerned.
[23:20] This just gives some of the results from one of the reports that used this approach,
showing road segments where there's low, moderate, and high sediment production.
That can be factored into prioritizing the maintenance and prioritizing the repair.
[23:42] Also, combining it with...OK, those road segments that are producing a lot of
sediment, which drain points do they drain to? Which streams do they impact? Here we've
got the sediment input to streams illustrated. Again, it's a sort of spatially explicit prioritization
based on additional information being mapped onto a hydrography network.
[24:09] Part of this all led us to the question of where do streams begin? There is a challenge
in mapping: as we go to higher and higher resolutions, we end up with streams creeping
further up the network, so I wanted to just dwell on that a little bit.
[24:31] I went to sample a part of the NHDPlus high resolution that Al sent me. I guess this
is probably part that's just been made public with the new release, but I think he sent
it to me before that happened. I homed in on an area in southwest North Carolina where
it's actually at really high resolution and just looked at the differences.
[24:57] Here you can see this...I put a 40 meter contour into the map so that you could
get a sense of what the topography is. We can see that the high resolution hydrography
is giving us a stream network in so much more detail than we had out of NHDPlus, which is
really the medium resolution hydrography.
[25:20] There comes to be a question, "Would you infer from these contours that there really
are all these additional streams, or, if you went and looked on the landscape, would you
actually find water flowing in them?" Sometimes there are regulatory decisions and modeling
decisions that have to get made based on this.
decisions that have to get made based on this.
[25:43] Getting philosophical for a little bit, there's two interesting quotes from literature
that I think pose alternative but equally valid views.
[25:54] The first one is from really long ago. Davis, the well known geographer, wrote,
"Although the river and hillside waste do not resemble each other at first sight, they
are only the extreme members of a continuous series, and when this generalization is appreciated
one may fairly extend the river all over its basin and up to its very divides.
[26:14] "Ordinarily treated, the river is like the veins of a leaf, broadly viewed,
it is like the entire leaf." He's hinting at the need for thinking of the hillslope
and the channels in an integrated fashion. Montgomery and Dietrich more recently said,
"landscape dissection into distinct valleys is limited by a threshold of channelization that
sets a finite scale to the landscape."
[26:40] They were recognizing that perhaps there is a real physical scale that represents
drainage density. I think that's actually important for us to try and address and quantify
because hydrologic processes are different in hillslopes and in channels. I think it's
important to recognize this and account for this in the delineation of streams, so the
question of where streams begin.
[27:04] Once we're in a stream, we think of the drainage area as being concentrated. When you're
on a hillslope, drainage area is actually measured using a dispersed concept, specific catchment
area, the contributing area per unit contour width. Those are just some of the ways that, yeah,
there's a difference between hillslope and channel processes.
[27:28] If I go back to that same area that I showed you earlier and delineate the catchment
draining to each stream segment, here are those catchments, which are often
used as our basic modeling elements. Within each modeling element sometimes the parameters
we use are things like hillslope length.
[27:49] Regardless of what the model is that you're using, you're going to get a different
answer from the left to the right because the left one is going to say that the hillslope
lengths are so much larger, and that's going to manifest itself.
[28:03] As we start bringing high resolution data into hydrologic models we have to think
about the differences in scales and how they can manifest in process simulations. That's
the last pictorial slide. I just wanted to end with a few conclusions.
[28:25] Elevation and hydrography should really just be viewed as part of an integrated representation
of the terrestrial environment. It's important to try and have them be consistent and develop
automated procedures to derive them in a consistent way.
[28:42] The integrated use of the different datasets really demands consistency between
elevation and hydrography at high resolution. I talked about the height above the nearest
stream approach as a way to approximate real time flood inundation and approximate reach
scale properties.
[28:59] Then, thinking about where streams begin, I noted that model representations should
recognize scale effects. Thank you, and I'd be happy to try and answer any questions.
Steve: [29:14] All right. Thank you, David. I know
we have about five minutes for a few questions for David, if you'd like. Again, a reminder
to use the Q&A box in the WebEx interface, if you would.
Dr. Tarboton: [29:28] The only questions I'm seeing so far
are about availability of the slides, and it looks like Allyson has answered them.
Steve: [29:37] Perhaps you explained everything so
thoroughly it's beyond question. Sue Buto: [29:45] I have a question. Where does the
stream shape come from? Dr. Tarboton:
[29:54] Are you referring to the stream shape in the hydraulic properties? The cross sectional
shape is actually part of the digital elevation model. Involved in this work, we would need
to have, if we think back to...this slide is actually characterizing shape, but it's
an average shape over the length of the reach.
[30:28] It's been derived from the individual elevation value grid cells over the area.
It all depends on how good your digital elevation model is. The digital elevation model is presumed
to be able to adequately represent the stream bed.
[30:48] Is that good enough?
[30:57] Then there was a question about how do we get the TauDEM tool in ArcGIS? If you
just do a Google search for TauDEM, it'll take you to the website.
Sue: [31:10] There was also a link in your slides
which would... Dr. Tarboton: [31:13] There was also a link in my slides,
yes. Sue:
[31:14] Those slides will be posted in the NHD website. Another question. Have you tried
estimating channel hydraulic properties in different stream environments?
Dr. Tarboton: [31:27] No. This is still a very new idea.
We've been exploring and piloting it in the Austin, Texas, area which is where the focus
for this part of the work of NFIE was involved, but it's something we certainly would like
to do.
[31:52] The scale of the digital elevation model...this work was done using a 10 meter
digital elevation model. That started with the standard digital elevation model you can
now get from the National Elevation Dataset. There are parts of the country where you can
get to that three meter resolution, often derived from LIDAR. In the particular area
we were working that was not available.
[32:24] There's a question here, "Any comments on the potential differences between arid
and humid environments?"
[32:39] Obviously, with arid and humid environments it's going to make a difference in terms of
the runoff being simulated, the runoff processes. I think, in terms of...I don't know whether
there's some good well established patterns for the differences in shape of channels in
arid and humid regions. This might be an interesting way to see, once we get to high enough resolution
data.
[33:14] One thing that is possibly of value is it'll be easier to get data for arid regions
because there tends to be less forest canopy, which is what gives you data acquisition
trouble. You might also have...really, the digital elevation
model should be recording the bed of the streams.
[33:48] If you've got water flowing in it, most digital elevation models actually record
the water surface, so you're leaving out some of the part below the surface. In arid areas
where streams are smaller and drier that's probably less of a problem.
[34:08] Somebody asked here, can we argue that roads are all first order streams that
conduct water into the natural hydrography? Should we account for that in our models?
[34:22] Yes. I think we should and, with the work with GRAIP, that's what we were trying
to do. Trying to look at the sediment generated from road surfaces and where it goes to in
terms of drain points. One can also, with high resolution topography, see the impact
that roads have on redirecting flow paths.
[34:55] Sometimes they will serve as stream paths if they are incised into the terrain. Oftentimes
a road is up above the terrain on fill, so it's not always comparable to a stream.
[35:12] How do your models perform using 10 meter NED DEMs versus two meter LIDAR derived
DEMs?
[35:23] We haven't actually done that yet. My sense is that it'll work better with the
LIDAR data.
[35:35] Have you found benefits to using one meter DEMs over three meter DEMs resampled from
one meter?
[35:45] Again, that's not something that we've directly evaluated. What we might say is that
there's a whole lot of questions that come up as to, is higher resolution data better,
and what advantage do you have? There's a lot of work to be done in actually proving
out the added value from all of that.
[36:20] Somebody asked about SPRNT being available on GitHub.
[36:25] I don't know the URL. I would have to search for that, but we can track that
down from Dr. Maidment or Ben Hodges and give it as part of the information at the end of
this...well, the follow up from the seminar, I guess.
Steve: [36:43] David, let's cut this off for right
now, and we'll pass the ball to Al, please. Then we'll come back and we can do questions
about either session after Al and Ellen finish their talks.
Dr. Tarboton: [36:55] OK.
Steve: [36:57] All right. Thank you, though. That's
a lot of good...after an initial lag there, a lot of good questions coming
in. Al Rea:
[37:06] First, I'd like to thank Dr. Tarboton for a really interesting talk. I think he
pointed out the need for the data that has been used a lot in the modeling, particularly
the NFIE modeling that he talked about, the NHDPlus.
[37:25] If you're not already aware, we are working on basically expanding NHDPlus to
use the high resolution NHD, so we are working on that. This is kind of a quick status update
on what we're doing along those lines.
[37:43] Ellen Finelli, who is with the National Geospatial Technical Operations Center, or
NGTOC, she is the hydrography lead at NGTOC. She'll be helping explain some of this in
a few minutes here, as I get through this.
[38:03] I'll try to be real quick here. What's the NHDPlus High Resolution going to look
like? It looks an awful lot like the medium resolution NHDPlus Version 2. It has snapshots
of the basic ingredient data, which are the high resolution NHD, the Watershed Boundary
Dataset, and 10 meter DEMs from the 3DEP program.
[38:27] It also has the value added attributes, very similar to the NHDPlus Version 2. It
has catchments, and flow direction, and accumulation grids. Again, just like Version 2. This table
has a comparison of the two products, between the medium resolution NHDPlus Version 2 and
what we're building now with the NHD High Res.
[38:53] It's basically the elevation where we're going from a 30 meter elevation model
to a 10 meter elevation model. We're going from 1:100,000 scale hydrography data to the
highest resolution NHD data, whatever is in the NHD. In some cases that's very high resolution,
1:2,400 scale or 1:5,000 scale. Most of the country it's 1:24,000 scale, from the USGS
topo maps, and in Alaska it's 1:63,360.
[39:33] The Watershed Boundary Dataset, we used a snapshot for Version 2 of NHDPlus.
It was kind of a composite of several different snapshots of WBD taken between 2010 and 2012,
so we'll be using an updated WBD for high res NHDPlus.
[39:56] We're changing from shapefiles and grids to file geodatabases and TIFFs because
those behave much better with these really large datasets that we're using. We're
changing tile size from, basically, a region or a HUC 2 level to a HUC 4 level, so there
will be about 200 HUC 4s in the continental US that will make up the tiles that this data
will be produced in.
[40:43] Then we've updated the tools from the tools that were used to produce NHDPlus
medium res Version 2 to a set of tools using the same basic processes but updated for the
high resolution data.
[41:08] Here's just a picture. I'll show you just a little bit of a sneak preview, really,
of the NHDPlus High Res. This is an area where I'm showing the streams and the catchments
from version 2 which, again, is 1:100,000 scale streams and 30 meter elevation model.
[41:37] Here's that same area in the NHDPlus High Res, streams and catchments. Here you
can see those two together. It's quite a bit more detail and quite a bit more data, just
finer delineations on all of the catchments.
[42:14] How are we going to build the NHDPlus High Res? As I mentioned, we're adapting the
tools that were used to build NHDPlus Version 2, and we're really trying to emphasize automation.
The Version 2 tools had quite a lot of manual work involved, and we're trying to automate
as much of that as possible. There still are a few steps that are manual.
[42:57] Then, overall, we're trying to build an iterative process where we'll do an initial
run through the data, which will be a beta version of the data. We'll try to do that
first. Then we're going to move into a QC review where we'll be using the old Version
2.1, the medium res data, as one check of the data. Of course we'll also have lots of
other things that we'll want to check it against.
[43:44] Then we'll be moving pretty quickly into a refresh cycle where we will update
that initial data version at least once, as soon as we can. Then that refresh can continue
at later times, after the data's been updated more. It'll be a continuous refresh cycle,
but we'll definitely do one refresh.
[44:23] Finally, we'll be working a lot on generalization. The idea behind that is that
we want to try and get the entire community working off of this one basic framework which
we can then generalize from to many different scales. Let me show a little bit about how
that works.
[44:56] This is Region 6, the Tennessee River Basin. This is what's in the high res NHD
right now. You can see that there are some areas that are very dense. Most of the data
is basically at 1:24,000 scale. It's the data from the USGS 1:24,000 topo maps, but there
have been several areas where it's been densified quite a lot.
[45:33] Then North Carolina, they did a lot of work from lidar data and have densified
the network quite a bit there. This is what's in the NHD High Res, and this is what we'll
create catchments for, but we also will be trying to normalize that network to several
different scales. This is the network kind of normalized to
1:24,000 scale. This is using some techniques that Larry Stanislawski and others have developed.
We can continue on. We have a 1:100,000 scale representation, a 1:250,000, 500,000, one
million, two million, and five million scale representation of the network.
[46:56] All of the features that you see are actually the original high resolution features.
The network has just been pruned out selectively to represent the network at these different
scales.
[47:17] What we'll be working on is also a way to merge the catchments together as we
do this generalization so that we'll have networks and catchments for several different
scales, but they all reference back to the high resolution NHD data.
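The pruning idea can be illustrated with a toy sketch: keep reaches whose upstream drainage area meets a scale-dependent threshold, then walk downstream from each kept reach so flow paths stay connected. In a fuller version, the catchment of each pruned reach would be merged into the downstream retained catchment, as described above. This is a hypothetical simplification, not the Stanislawski et al. generalization techniques; all IDs, fields, and thresholds are made up.

```python
# Toy sketch of scale-dependent network pruning. Keep reaches at or
# above an area threshold, then walk downstream from each kept reach so
# the pruned network remains connected. Not the actual USGS tools.

def prune_network(reaches, min_area_km2):
    """reaches: dict of id -> {"downstream": id or None, "area_km2": float}.
    Returns the set of reach IDs retained at the coarser scale."""
    keep = {rid for rid, r in reaches.items() if r["area_km2"] >= min_area_km2}
    for rid in list(keep):
        cur = reaches[rid]["downstream"]
        while cur is not None and cur not in keep:
            keep.add(cur)
            cur = reaches[cur]["downstream"]
    return keep

# Toy network: two headwaters joining a mainstem outlet.
net = {
    "hw1": {"downstream": "main", "area_km2": 0.4},
    "hw2": {"downstream": "main", "area_km2": 5.0},
    "main": {"downstream": None, "area_km2": 6.1},
}
print(sorted(prune_network(net, min_area_km2=1.0)))  # ['hw2', 'main']
```

Raising the threshold prunes more headwaters, giving coarser representations while every retained feature remains an original high resolution feature.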
[47:43] With that, I'm going to pass on to Ellen Finelli, and she's going to talk a little
bit more about the process and where we are. Ellen Finelli:
[48:00] Thank you, Al. As Al mentioned, rather than reinventing the wheel as we produce
this high res NHDPlus, we're really adapting the medium res NHDPlus Version 2 tools,
as well as automating some of those processes.
[48:27] We are fortunate to be able to utilize some of the same development team, and that's
primarily Cindy McKay with Horizon Systems and our USGS Water Science Center folks, Curtis
Price, Rich Moore, and Craig Johnson.
[48:44] No worries here in this lightning session. We're not going to get too into the
weeds here on this workflow, but I did want to point out a couple of things. Currently
the yellow boxes are some of the manual steps. That's where, as Al mentioned, we ideally
want to minimize those additional manual requirements. Next slide.
[49:07] There are really three different parts of this workflow, or these tools. The first
part is the value added attributes, which are the vector aspect of this. Next is the
catchments, which are the raster processes. Then, finally, the last part of this is the
flow estimates. Together, that is the high res NHDPlus build refresh process. Next slide.
[49:38] Let's take a step back a little bit and look at that entire workflow I just
mentioned. It's the red box here at the middle of the screen, the high res NHDPlus
build/refresh tools. It does all of that: the VAAs, catchments, and flow estimations.
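As one concrete flavor of the vector (VAA) step, here is a hedged sketch of computing an upstream-to-downstream processing order, in the spirit of the hydrologic sequence attribute among the NHDPlus value added attributes. The data shape and function name are illustrative only, not the actual tool.

```python
# Hedged sketch of one vector (VAA) computation: an upstream-to-
# downstream processing order over the flowline network, in the spirit
# of the hydrologic sequence attribute. Illustrative names only.

def processing_order(reaches):
    """reaches: dict of id -> {"downstream": id or None}.
    Return IDs so every reach appears before its downstream neighbor."""
    indegree = {rid: 0 for rid in reaches}
    for r in reaches.values():
        if r["downstream"] is not None:
            indegree[r["downstream"]] += 1
    order = []
    frontier = [rid for rid, d in indegree.items() if d == 0]  # headwaters
    while frontier:
        rid = frontier.pop()
        order.append(rid)
        dn = reaches[rid]["downstream"]
        if dn is not None:
            indegree[dn] -= 1
            if indegree[dn] == 0:
                frontier.append(dn)
    return order

net = {
    "hw1": {"downstream": "main"},
    "hw2": {"downstream": "main"},
    "main": {"downstream": None},
}
print(processing_order(net)[-1])  # main (the outlet comes last)
```

Such an ordering is what lets downstream accumulations (like the flow estimates) be computed in a single pass.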
[49:56] Let's take a look back at some of the inputs here. We have, at the top in the
blue, our NHD and our WBD datasets that are snapshots that are coming from our high res
NHD production database. As Al mentioned, that is a multi resolution database. It is
the best available hydrography data. Those snapshots, NHD and WBD, go into the build
refresh.
[50:27] Additionally, we have our digital elevation model data, in this case coming
from our 3DEP program's one-third arc-second seamless data. Those are the three inputs to this. Initially,
we are going to run the build aspect of our NHDPlus tools, and we're going to produce
our first initial cut, being our high res NHDPlus, this beta dataset.
[50:57] As Al mentioned, a little change here due to the resolution and the size of the datasets.
We are producing these by region, but distributing them by HUC 4s.
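Hydrologic unit codes are hierarchical, which is what makes this tiling straightforward: the first two digits give the region and the first four the subregion (HUC 4), so grouping finer units by their four-digit prefix yields distribution tiles like those described above. The HUC-8 codes below follow the real format but are chosen arbitrarily for illustration.

```python
# Group HUC-8 codes under their HUC-4 prefix to form distribution
# tiles. The hierarchy (2-digit region, 4-digit subregion) is real;
# the specific example codes are arbitrary.
from collections import defaultdict

def group_by_huc4(huc8_codes):
    tiles = defaultdict(list)
    for code in huc8_codes:
        tiles[code[:4]].append(code)  # first four digits = HUC 4
    return dict(tiles)

huc8s = ["06010101", "06010102", "06020001", "06040001"]
print(group_by_huc4(huc8s))
# {'0601': ['06010101', '06010102'], '0602': ['06020001'], '0604': ['06040001']}
```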
[51:11] This is the build part of the beta. As Al mentioned, we don't want to necessarily
leave that out there for too long. The intent is to go directly into a QC process. Through
that QC effort, the results would be fed back in, and manual editing would be required
to update our high res production database, our NHD and WBD database.
[51:44] Then we would produce an updated NHD and WBD that would then feed into the refresh
process. Let me make one more note before we go to the next slide. Although we may not
be feeding updates to the elevation program or edits to the elevation program through
our markup tools, the elevation program has quite an aggressive schedule as is.
[52:10] As we revisit these updated NHD and WBD, of course we will be incorporating the
new elevation data as well.
[52:22] Once we've updated the NHD, WBD, and we bring in our new 3DEP data, then we're
going to be utilizing the refresh part of the tools and producing then, in this case,
a new snapshot being the high res NHDPlus.
[52:41] The drivers for that, of course, before running that refresh, could be issues identified
during the QC process, or it could be, as we go down the road, changes in our source
data. We have new elevation data.
[52:56] We have new NHD or WBD data. Another thing I wanted to mention...next slide, Al.
I think I had it on the next slide. In terms of getting those edits accomplished, we have
a number of NHD and WBD editors that will participate in this QC, as well as our stewards,
or partners, that actively update NHD and WBD. They may also be involved with the
editing from this QA/QC.
[53:33] We envision a national oversight team participating in this QC, along with the
folks who have that local knowledge, whether that be our stewards participating
in the QC, or StreamStats folks, or other Water Science Center folks.
[53:53] One of the tools we plan on utilizing for this QC is a web enabled markup tool.
We'll be working closely with that team to provide us the ability to track and quickly
refresh our high res NHDPlus.
[54:13] I think that's about all I had in our time, Al. Oops, one more. Sorry. Our priorities
currently are Region 1, Region 6, and Region 12, the three shown in the dark red.
[54:25] We currently have a draft schedule to do the nation in the following order. We
have contracted the first 18 regions, everything except for Alaska and Hawaii. At this point,
that is our current plan. We have, as Al mentioned, completed our initial beta for three of the
four HUC 4s in Region 6 there.
[54:57] David had shown an example there, which of course was that one Eastern North
Carolina chunk that had some local res data in there as well. That's our current status
for the beta version for NHDPlus. I think that's about it for now. Any questions?
Steve: [55:20] For the questions, the Q&A box. Al,
if you could pass the ball directly to Allison, she will get the poll going. Thank you.
Al: [55:34] Yeah. Steve:
[55:38] All right. Sue:
[55:41] We do have a few questions. I'm trying to find them here in the Q&A box. Let's see.
Will the vector data be available in GeoPackage or SpatiaLite formats, as well as file geodatabase?
Al: [56:00] At this point, we don't have plans
to do that, no. That's good feedback that there's interest in that.
Sue: [56:11] Is there a plan for the Build/Refresh
Tools to be made available to local partners and stewards?
Al: [56:20] Yes, we plan to make the tools available
as open source. We don't have current plans to support them. We'll put them out as open
source and people would be able to take them and modify them. We don't have plans to actually
have that as a supported package that others would be doing and feeding back into the database.
At this point, we expect to run the process ourselves. Right now, it's being run through
a contractor. Sue:
[57:04] Is there a plan for a high res pilot project in Alaska?
Al: [57:09] Yes, we want to do some work in Alaska.
The current state of the data is, it's a little early to start doing that, but there are some
areas where the data is getting close to being ready. As that data reaches a point where
we think it would make sense to move forward with a pilot, we'll be looking at doing some
pilot areas up there. Sue:
[57:49] Is there a high res NHDPlus schedule available, or are you just working with priority
areas at this point? Al:
[58:00] We don't have a schedule yet. We're still really early in the process. We are
working through the process. We don't have a good handle yet on how long it takes to
get
through all of this. Hopefully, we're getting close to the point where we'll be able to
have a better handle on that, and be able to put out at least an approximate schedule.
We're not quite yet to that point. Sue:
[58:45] Will the edits/updates to the NHD be run first through the base high res NHD,
and then imported to high res NHDPlus? Ellen:
[59:00] Yes, they will. We will continue to use our NHD update tools, which allow us to
ensure our QC processes are adhered to. Then, the data will be fed into the NHDPlus process.
Steve: [59:22] We're a couple minutes over time.
We will have the questions, so the entire package here, the presentations, the Q&A,
and the recording will be posted to the NHD website, nhd.usgs.gov. I want to thank the
speakers for coming out today and sharing this information. I want to thank the crowd
for attending and doing a nice job on the Q&A session.
[59:53] I also want to invite folks to tune in on St. Patrick's Day. Share St. Patrick's
Day with us and find out about hydrography applications in fisheries and aquatic ecosystems.
We'll have announcements that go out through the same channels, listservs, etc. with information
to register for these. And last, I direct you to the poll that should be in the lower
right of your WebEx window.
[60:24] If you take a few seconds to provide a little bit of feedback for us, it will help
us shape these sessions in the future. With that, we'll call it a session. I hope to see
you back here on St. Patrick's Day. Thanks, everyone.