Internal vs Forced Variability Metrics for Geophysical Flows Using
Information Theory
Abstract
We demonstrate the use of information theory metrics, Shannon entropy
and mutual information, for measuring internal and forced variability in
ensemble atmosphere, ocean, or climate models. These metrics delineate
intrinsic and extrinsic variability reliably over a wider range of
circumstances than variance-based metrics. Information entropy
quantifies variability by the size of the visited probability
distribution, whereas variance measures only its second moment. Shannon
entropy and mutual information handle correlated fields, apply to any
data, and are insensitive to outliers and to changes of units or scale.
In the first part of this article, we use climate model ensembles of a
variable with a highly skewed probability distribution (Arctic sea
surface temperature) to show that these metrics remain robust even
under sharply nonlinear behavior (the freezing point). We apply these
two metrics to
quantify internal vs forced variability in (1) idealized Gaussian and
uniformly distributed data, (2) an initial condition ensemble of a
realistic coastal ocean model (OSOM), and (3) the GFDL-ESM2M climate model
large ensemble. Each case illustrates the advantages of information
theory metrics over variance-based metrics. These metrics can be
applied to any ensemble of models in which intrinsic and extrinsic
factors compete to control variability, regardless of whether the
ensemble spread is Gaussian. In the second part of this article, mutual
information and Shannon entropy are used to quantify the impact of
different boundary forcings in a coastal ocean model. Information
theory is useful here because it enables ranking the potential impacts
of improving boundary and forcing conditions across multiple predicted
variables with different dimensions.
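
For reference, the two metrics in standard discrete form (a minimal
sketch; the estimators used in the article, e.g. binning choices or
continuous analogues, may differ):

H(X) = -\sum_x p(x) \log p(x)

I(X;Y) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)}
       = H(X) + H(Y) - H(X,Y)

Here H(X) is the Shannon entropy of a discrete variable X with
probability mass function p(x), and I(X;Y) quantifies the information
shared between X and Y; both depend on the full visited distribution
rather than on its second moment alone.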