Figure 1. a) Overlapping watermasks from two years in red and blue;
purple indicates pixels classified as river in both years. b) Rivermasks
produced from the watermasks after identifying river pixels and a
closure operation. c) Bank aspect (only one image is shown, and the bank
pixels are dilated for visibility). d) Distance (magnitude) of change
for all river pixels that eroded or accreted. e) The SWORD centerline
overlain on erosion pixels. During vectorization, each pixel is assigned
to the closest river centerline node and a summary of geomorphic change
is calculated for each centerline node.
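As an illustrative sketch of the vectorization step in panel e (not the exact implementation), each changed pixel can be assigned to its nearest centerline node with a nearest-neighbor query; the coordinates, array names, and use of SciPy's cKDTree below are assumptions for demonstration.

import numpy as np
from scipy.spatial import cKDTree

# Hypothetical projected (x, y) coordinates of eroded/accreted pixels, the
# SWORD centerline nodes, and the per-pixel change magnitude in meters.
pixel_xy = np.array([[10.0, 5.0], [12.0, 6.0], [55.0, 40.0]])
node_xy = np.array([[11.0, 5.5], [54.0, 41.0]])
change_magnitude = np.array([30.0, 30.0, 60.0])

# Assign each pixel to its closest centerline node.
tree = cKDTree(node_xy)
_, nearest_node = tree.query(pixel_xy)

# Summarize geomorphic change for each node (here, pixel count and total
# change magnitude; the real summary statistics may differ).
for node_id in range(len(node_xy)):
    assigned = nearest_node == node_id
    print(node_id, assigned.sum(), change_magnitude[assigned].sum())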
Uncertainty
We first quantify the uncertainty of our estimates by propagating the
water classification errors through our methods. Both the JRC and
Pickens datasets include classification error rates. For the JRC
dataset, omission and commission errors are estimated for each
seasonality class (seasonal or permanent) and sensor (Landsat 5, 7, or
8). For all sensors, errors are highest for seasonal omission and lowest
for permanent commission. Differences between sensors are smaller;
however, the limited downlink capability during the Landsat 5 era and
the scan line corrector failure on Landsat 7 reduce the quality of the
seasonality classifications. The Pickens
dataset takes a different approach, quantifying the omission and
commission error rates as a function of distance from the land-water
boundary for both the Pickens and JRC datasets. We use these
distance-based uncertainties for both datasets because we believe they
better represent the sources and patterns of error and uncertainty in
classifications. Further, the seasonality classifications in the JRC
dataset do not validate well against the high-resolution, manually
trained and classified validation data from Pickens et al. (2020).
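For illustration only, a distance-dependent error characterization of this kind can be represented as a binned lookup of omission and commission rates; the bin edges and rates below are placeholders, not values from Pickens et al. (2020) or the JRC dataset.

import numpy as np

# Placeholder omission/commission rates as a function of distance (in pixels)
# from the land-water boundary; values are illustrative, not published rates.
distance_bins = np.array([0.0, 1.0, 2.0, 4.0, 8.0, np.inf])
omission_rate = np.array([0.20, 0.10, 0.05, 0.02, 0.01])
commission_rate = np.array([0.15, 0.08, 0.04, 0.02, 0.01])

def error_rates(distance_from_boundary):
    """Return (omission, commission) rates for a distance from the boundary."""
    i = np.searchsorted(distance_bins, distance_from_boundary, side="right") - 1
    i = np.clip(i, 0, len(omission_rate) - 1)
    return omission_rate[i], commission_rate[i]

print(error_rates(0.5))   # near the boundary: highest rates
print(error_rates(10.0))  # far from the boundary: lowest rates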
We apply these error rates to the annual water masks by randomly adding
omission and commission errors according to the distance-rate function
in Pickens et al. (2020). We repeat this process according to the
number of observations in each annual watermask and then average the
erroneous watermasks together, simulating the effect of per-image,
pixel-scale errors on the annual water masks (Figure S1). Each time we
analyze two images for change detection, we also create two noisy
watermasks and process them with the same river classification and
change analysis methods (Sections 2.3 and 2.4). The difference between
our ‘clean’ and ‘noisy’ watermasks quantifies the erroneous planform
change we would anticipate given no actual change in planform (Figure
S2).
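A minimal sketch of this perturbation procedure follows, assuming a binary annual watermask, a per-pixel distance to the land-water boundary, and a stand-in for the distance-rate function; the placeholder rates, the 50% threshold, and the function names are illustrative, not the exact workflow.

import numpy as np
from scipy.ndimage import distance_transform_edt

rng = np.random.default_rng(42)

def placeholder_error_rates(distance):
    """Stand-in for a distance-dependent omission/commission rate function."""
    rate = 0.2 * np.exp(-distance / 2.0)  # arbitrary decay away from the boundary
    return rate, rate  # (omission, commission)

def noisy_annual_mask(watermask, n_observations, error_rates=placeholder_error_rates):
    """Degrade a binary annual watermask (True = water) with simulated errors."""
    # Per-pixel distance (in pixels) to the land-water boundary.
    distance = np.where(watermask,
                        distance_transform_edt(watermask),
                        distance_transform_edt(~watermask))
    omission, commission = error_rates(distance)

    simulated = []
    for _ in range(n_observations):
        drop = watermask & (rng.random(watermask.shape) < omission)      # omission
        add = ~watermask & (rng.random(watermask.shape) < commission)    # commission
        simulated.append((watermask & ~drop) | add)
    # Average the simulated per-image errors back into an annual mask
    # (a 50% occurrence threshold is assumed here).
    return np.mean(simulated, axis=0) >= 0.5

# Toy example: a 6x6 scene with water in the two left-most columns.
toy = np.zeros((6, 6), dtype=bool)
toy[:, :2] = True
print(noisy_annual_mask(toy, n_observations=20).astype(int))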
Another source of uncertainty stems from variations in river stage. When
looking at water surfaces only in planform, changes in inundation cannot
be distinguished from changes in channel form. For example, if we happen
to observe a flood in year one and low flows in year two, our bank
migration data could suggest accretion along both banks. We reduce this
source of error by using the composited annual masks, though interannual
variability will still be present in our data. We investigate the
severity of interannual variation by performing our calculations on
nine combinations of years: a 3×3 matrix comparing 2000-2002 with
2017-2019 (Figure S3). Our final dataset incorporates these results by
presenting the minimum, maximum, and median riverbank erosion and
accretion values for each node.
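The per-node summary over the nine year-pair combinations could be assembled along the following lines; the table layout, column names, and erosion values are assumptions for illustration, with pandas used only for convenience.

import itertools
import pandas as pd

start_years = [2000, 2001, 2002]
end_years = [2017, 2018, 2019]

# Hypothetical per-node erosion rates (m/yr) for each of the nine year pairs;
# in practice each value would come from rerunning the change analysis on
# that pair of annual watermasks.
records = []
for node_id in (101, 102):
    for y0, y1 in itertools.product(start_years, end_years):
        erosion = 0.01 * node_id + 0.05 * (y1 - y0)  # placeholder value
        records.append({"node": node_id, "start": y0, "end": y1, "erosion": erosion})

df = pd.DataFrame(records)
# Minimum, median, and maximum erosion across the nine combinations per node.
summary = df.groupby("node")["erosion"].agg(["min", "median", "max"])
print(summary)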
Validation
To assess the accuracy of our riverbank migration estimates, we
calculated the bank erosion rate for river reaches where previous
studies have published rates of erosion. After filtering the dataset of
290 published studies (Rowland and Schwenk, 2019) down to those that
report comparable metrics (reach-averaged rates over decade-scale time
intervals) for rivers wider than 150 meters, 25 cases remain (see
supplemental S4 for a map of validation sites and supplemental T1 for a
full list of references and reach characteristics).
We limit the validation studies to decade-scale measurements because
year-to-year variations in discharge and erosion can bias or
misrepresent the characteristic erosion rate when the time scale of
observations changes (Donovan et al., 2019). Further, any change in
erosion rate over time could falsely add error to the validation, so we
exclude studies with data collection that ended before our satellite
record. Because reach definitions differ between the published studies
and our data, we compare the migration rate reported for each
validation reach to our observed migration rate at the node nearest the
reported reach center.
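A sketch of this matching step is shown below, assuming projected coordinates for the centerline nodes and for each reported validation reach center; the coordinates, rates, and use of SciPy's cKDTree are illustrative.

import numpy as np
from scipy.spatial import cKDTree

# Hypothetical projected coordinates (m) and migration rates (m/yr).
node_xy = np.array([[0.0, 0.0], [500.0, 0.0], [1000.0, 100.0]])
node_rate = np.array([2.1, 3.4, 1.7])          # our observed rates per node
reach_center_xy = np.array([[480.0, 20.0]])    # published reach centers
published_rate = np.array([3.0])               # published reach-averaged rates

# Find the node nearest each reported reach center and pair the rates.
tree = cKDTree(node_xy)
_, nearest = tree.query(reach_center_xy)
for pub, ours in zip(published_rate, node_rate[nearest]):
    print(f"published {pub:.1f} m/yr vs. observed {ours:.1f} m/yr")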