Post-hoc Uncertainty Quantification for Remote Sensing Observing Systems

This article sets forth a practical methodology for uncertainty quantification of physical state estimates derived from remote sensing observing systems. Remote sensing instruments observe parts of...

Retrieval is an inference problem: estimate X when you only get to see Y.
F_1 denotes the forward model used in the retrieval R, and B_1 the other retrieval inputs; F_0 and B_0 denote their true counterparts in the real observing system.

Introduction
Uncertainty quantification (UQ) provides a formalism for understanding uncertainty in the output of computational models.
Remote sensing data processing relies on complex computational models, but:
- we lack information about the uncertainty of known inputs;
- there are many unknown unknowns (e.g., interaction effects);
- computational artifacts must be taken into account;
- the analysis must be performed after the fact and be computationally feasible.
[Graphic courtesy of Annmarie Eldering.]
Less than half of the emitted carbon is staying in the atmosphere. Key science questions:
- Where are the sinks that are absorbing more than half of this CO2?
- Why does CO2 build-up vary from year to year despite nearly uniform emission rates?
- How will CO2 sinks respond to climate change?
The benefit of space-based greenhouse-gas measurement is coverage and resolution; the challenge is the need for high-precision measurements to detect small changes in flux against large background variations. A primary objective of OCO-2 is to find and study the natural sinks.
Uncertainty is quantified by conditional distributions.
OE (optimal estimation): P(X | Y); the only modeled source of uncertainty is the observation noise.
Our view: P(X | X̂); uncertainty is induced by mismatches of F_1 to F_0 and of B_1 to B_0, non-Gaussianity, computational artifacts (e.g., discretization, definition of the vertical grid), and unknown interactions among all of these sources.
How to proceed given that we cannot even enumerate all sources of uncertainty? Partition V = (W, U).

Metric 2:
β_op = X̂_op − X_T and β_s = X̂_s − X_T, where X_T denotes the TCCON value and the subscripts op and s denote operational and simulated quantities, respectively.
Let Q^s_{α/2} and Q^s_{1−α/2} denote the lower and upper α/2 quantiles of the simulated distribution.
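As an illustrative sketch of how such quantiles yield an interval for the true state (the ensemble size, bias, and noise values below are hypothetical, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated ensemble: synthetic truths and their retrieved estimates.
x_true_sim = rng.normal(400.0, 2.0, size=5000)            # e.g., XCO2 in ppm
x_hat_sim = x_true_sim + rng.normal(0.3, 1.0, size=5000)  # retrievals with bias and noise

# Empirical error distribution of the simulated retrievals and its quantiles.
errors = x_hat_sim - x_true_sim
alpha = 0.05
q_lo, q_hi = np.quantile(errors, [alpha / 2, 1 - alpha / 2])

# Interval for the true state given an operational estimate x_hat_op:
x_hat_op = 401.2
interval = (x_hat_op - q_hi, x_hat_op - q_lo)
```

Subtracting the full error quantiles, rather than only the mean error, is what corrects the whole distribution instead of just the bias.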

The challenge was to develop a practical method to assign uncertainties to every estimate produced by a remote sensing observing system. Uncertainty is defined by the conditional distribution of the true state given the operationally produced state estimate.
Our method is similar to the bootstrap bias correction (Davison and Hinkley, 1997), but goes further: we correct the entire distribution (not just the mean).
Simulation-based confidence intervals are usually valid, but not always efficient.
Operational confidence intervals are rarely valid.
The method appears to perform relatively well in this study, but there are indications that we need to adjust the stratification so that the model is more globally representative.
The most computationally intensive stage in building the model is the retrieval on the simulated radiances.
Once the model is built, it is very fast to apply to new/actual retrieved estimates.

"Top-down" approach: simulate the entire observing system, and compare retrieved states to synthetic truth over a representative ensemble of conditions. This treats the observing system and retrieval as an estimator and focuses on quantifying its mechanistic properties.
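A minimal sketch of this top-down loop, with toy stand-ins for the true forward model F_0, a slightly mismatched retrieval forward model F_1, and the retrieval R (all functions and parameter values here are illustrative, not the actual OCO-2 models):

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_model(x):
    """Toy stand-in for the true forward model F_0."""
    return 2.0 * x + 1.0

def retrieval(y):
    """Toy stand-in for R, inverting a mismatched F_1 (slope 2.1, not 2.0)."""
    return (y - 1.0) / 2.1

# Representative ensemble of true states.
x_true = rng.normal(0.0, 1.0, size=2000)

# Simulate the observing system: synthetic radiances with observation noise.
y = forward_model(x_true) + rng.normal(0.0, 0.1, size=2000)

# Retrieve and compare to synthetic truth: the empirical error distribution
# captures the estimator's mechanistic properties, including F_1/F_0 mismatch.
errors = retrieval(y) - x_true
```

The resulting error ensemble is what the post-hoc model summarizes and then applies to actual retrieved estimates.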
Quantify mechanistic performance via P(X | X̂); this distribution is the most complete description of uncertainty in X after observing X̂.