Gaussian Approximations for Probability Measures on $\mathbf{R}^d$

This paper concerns the approximation of probability measures on $\mathbf{R}^d$ with respect to the Kullback-Leibler divergence. Given an admissible target measure, we show the existence of the best approximation, with respect to this divergence, from certain sets of Gaussian measures and Gaussian mixtures. The asymptotic behavior of such best approximations is then studied in the small parameter limit where the measure concentrates; this asymptotic behavior is characterized using $\Gamma$-convergence. The theory developed is then applied to understanding the frequentist consistency of Bayesian inverse problems. For a fixed realization of noise, we show the asymptotic normality of the posterior measure in the small noise limit. Taking into account the randomness of the noise, we prove a Bernstein-von Mises type result for the posterior measure.


1. Introduction.
In this paper, we study the "best" approximation of a general finite dimensional probability measure, which may be non-Gaussian, from a set of simple probability measures, such as a single Gaussian measure or a Gaussian mixture family. We define "best" to mean the measure within the simple class which minimizes the Kullback-Leibler divergence between itself and the target measure. This type of approximation is central to many ideas widely used in machine learning [3], most notably the so-called "variational inference" [30]. Yet such approximations have not been the subject of any substantial, systematic underpinning theory. The purpose of this paper is to develop such a theory in the concrete finite dimensional setting in two ways: (i) by establishing the existence of best approximations; (ii) by studying their asymptotic properties in a measure concentration limit of interest. The abstract theory is then applied to study frequentist consistency [28] of Bayesian inverse problems.
1.1. Background and Overview. The idea of approximating probability measures with respect to the Kullback-Leibler divergence has been applied in a number of areas; see for example [20,15,19,24]. Despite the wide usage of Kullback-Leibler approximation, systematic theoretical study has only been initiated recently. In [23], the measure approximation problem is studied from the calculus of variations point of view, and existence of minimizers is established therein; the companion paper [22] proposes numerical algorithms for implementing Kullback-Leibler minimization in practice. In [19], Gaussian approximation is used as a new approach for identifying the most likely path between equilibrium states in molecular dynamics; furthermore, the asymptotic behavior of the Gaussian approximation in the small temperature limit is analyzed via Γ-convergence. Here our interest is to develop the ideas in [19] in the context of a general class of measure approximation problems in finite dimensions.
To be concrete, we consider the approximation of a family of probability measures $\{\mu_\varepsilon\}_{\varepsilon>0}$ on $\mathbf{R}^d$ with (Lebesgue) density of the form
$$\frac{d\mu_\varepsilon}{dx}(x) = \frac{1}{Z_{\mu,\varepsilon}}\exp\Big(-\frac{1}{\varepsilon}V_1^\varepsilon(x) - V_2(x)\Big), \tag{1}$$
where $Z_{\mu,\varepsilon}$ is the normalization constant. A typical example of a measure $\mu_\varepsilon$ of this form is a posterior measure in Bayesian inverse problems. For instance, consider the inverse problem of identifying $x$ from a sequence of noisy observations $\{y_j\}_{j\in\mathbf{N}}$, where $y_j = G(x) + \eta_j$ for a forward operator $G$, and where the $\eta_j$ denote the random noise terms. This may model a statistical measurement with an increasing number of observations or with vanishing noise. In the Bayesian approach to this inverse problem, if we take a prior with density proportional to $\exp(-V_2(x))$, then the posterior measure is given by (1), with the function $\varepsilon^{-1}V_1^\varepsilon$ coinciding, up to an additive constant, with the negative log-likelihood. The parameter $\varepsilon$ is associated with the number of observations or the noise level of the statistical experiment.
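For intuition, the concentration of $\mu_\varepsilon$ as $\varepsilon \downarrow 0$ can be observed numerically. The following sketch uses an illustrative one-dimensional double-well potential $V_1(x) = (x^2-1)^2$ with minimizers at $\pm 1$ and a quadratic $V_2$; both choices are ours, for illustration only, and the quadrature grid is an arbitrary discretization.

```python
import numpy as np

# Illustrative choices (not from the paper): a double-well V_1 with
# minimizers at +/-1, and a quadratic prior-type term V_2.
def V1(x):
    return (x**2 - 1.0)**2

def V2(x):
    return 0.5 * x**2

x = np.linspace(-3.0, 3.0, 60001)
dx = x[1] - x[0]

def mass_near_minimizers(eps, radius=0.2):
    """Probability that mu_eps assigns to {|x - (+/-1)| < radius}."""
    w = np.exp(-V1(x) / eps - V2(x))
    rho = w / (np.sum(w) * dx)          # normalized density of mu_eps
    near = np.abs(np.abs(x) - 1.0) < radius
    return np.sum(rho[near]) * dx

for eps in (1.0, 0.1, 0.01):
    print(eps, mass_near_minimizers(eps))
```

As $\varepsilon$ decreases, the printed mass approaches one, reflecting the Laplace-type concentration on the minimizers of $V_1$ described above.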
Our study of Gaussian approximation to the measures µ ε in (1) is partially motivated by the famous Bernstein-von Mises (BvM) theorem [28] in asymptotic statistics. Roughly speaking, the BvM theorem states that under mild conditions on the prior, the posterior distribution of a Bayesian procedure converges to a Gaussian distribution centered at any consistent estimator (for instance the maximum likelihood estimator) in the limit of large data (or, relatedly, small noise [5]). The BvM theorem is of great importance in Bayesian statistics for at least two reasons. First, it gives a quantitative description of how the posterior contracts to the underlying truth. Second, it implies that the Bayesian credible sets are asymptotically equivalent to frequentist confidence intervals and hence the estimation of the latter can be realized by making use of the computational power of Markov Chain Monte Carlo algorithms. We interpret the BvM phenomenon in the abstract theoretical framework of best Gaussian approximations with respect to a Kullback-Leibler measure of divergence.

1.2. Main Contributions.
The main contributions of this paper are twofold:
• We use the calculus of variations to give a framework for the problem of finding the best Gaussian (mixture) approximation of a given measure, with respect to the Kullback-Leibler divergence;
• We study the resulting calculus of variations problem in the small noise (or large data) limit, thereby making new links to, and new ways to think about, the classical Bernstein-von Mises theory of asymptotic normality.
We describe these contributions in more detail. First we introduce a theoretical framework of calculus of variations to analyze the measure approximation problem. Given a measure $\mu_\varepsilon$ defined by (1), we find a measure $\nu_\varepsilon$, from a set of simple measures, Gaussians or mixtures of finitely many Gaussians, which minimizes the Kullback-Leibler divergence $D_{\mathrm{KL}}(\nu||\mu_\varepsilon)$. We characterize the limiting behavior of the best approximation $\nu_\varepsilon$, as well as the limiting behavior of the Kullback-Leibler divergence, as $\varepsilon \downarrow 0$, using the framework of Γ-convergence. In particular, if $\mu_\varepsilon$ is a multimodal distribution and $\nu_\varepsilon$ is the best approximation from within the class of Gaussian mixtures, then the limit of the minimized KL-divergence $D_{\mathrm{KL}}(\nu_\varepsilon||\mu_\varepsilon)$ can be characterized explicitly as the sum of two contributions: a local term, which consists of a weighted sum of the KL-divergences between the Gaussian approximations and the Gaussian measures whose covariances are determined by the Hessian of $V_1$ at its minimizers; and a global term, which measures how well the weights approximate the mass distribution between the modes; see Theorem 4.2.
We then apply the abstract measure approximation theory to understand the posterior consistency of finite dimensional Bayesian inverse problems. In particular, we give an alternative (and more analytical) proof of the Bernstein-von Mises theorem; see Theorem 5.4 and Corollary 5.5. We highlight the fact that our BvM result improves classical BvM results for parametric statistical models in two aspects. Firstly, the convergence of the posterior in the total variation distance is improved to convergence in the KL-divergence, under certain regularity assumptions on the forward map. Secondly, our BvM result allows the posterior distribution to be multimodal, in which case the posterior approaches a mixture of Gaussian distributions, rather than a single Gaussian distribution, in the limit of infinite data. These improvements come at a cost: we need to make stronger assumptions than those made in classical BvM theory.
1.3. Structure. The rest of the paper is organized as follows. In section 2 we set up various underpinning concepts which are used throughout the paper: in subsection 2.1 and subsection 2.2 we recall some basic facts on the Kullback-Leibler divergence and Γ-convergence, and in subsection 2.3 and subsection 2.4 we spell out the assumptions made and the notation used. In section 3 and section 4 we study the problem of approximation of the measure $\mu_\varepsilon$ by, respectively, a single Gaussian measure and a Gaussian mixture. In particular, the small $\varepsilon$ asymptotics of the Gaussians (or Gaussian mixtures) are captured by using the framework of Γ-convergence. In section 5, the theory which we have developed is applied to understand posterior consistency for Bayesian inverse problems, and connections to the BvM theory. Finally, we finish in section 6 with several concluding remarks.
2.1. Kullback-Leibler Divergence. Let $\nu$ and $\mu$ be two probability measures on $\mathbf{R}^d$ and assume that $\nu$ is absolutely continuous with respect to $\mu$. The Kullback-Leibler divergence, or relative entropy, of $\nu$ with respect to $\mu$ is
$$D_{\mathrm{KL}}(\nu||\mu) = \int_{\mathbf{R}^d} \log\Big(\frac{d\nu}{d\mu}\Big)\, d\nu.$$
If $\nu$ is not absolutely continuous with respect to $\mu$, then the Kullback-Leibler divergence is defined as $+\infty$. By definition, the Kullback-Leibler divergence is non-negative, but it is not a metric: it does not obey the triangle inequality and it is not symmetric in its two arguments. In this paper, we will consider minimizing $D_{\mathrm{KL}}(\nu||\mu_\varepsilon)$ with respect to $\nu$, over a suitably chosen set of measures, and with $\mu_\varepsilon$ being the target measure defined in (1). Swapping the order of these two measures within the divergence is undesirable for our purposes. This is because minimizing $D_{\mathrm{KL}}(\mu_\varepsilon||\cdot)$ within the set of all Gaussian measures leads to matching of moments [3]; this is inappropriate for multimodal measures, where a more desirable outcome would be the existence of multiple local minimizers, one at each mode [23,22]. Although the Kullback-Leibler divergence is not a metric, its information theoretic interpretation makes it natural for approximate inference. Furthermore, it is a convenient quantity to work with for at least two reasons. First, the divergence provides useful upper bounds on many metrics; in particular, one has the Pinsker inequality
$$d_{\mathrm{TV}}(\nu, \mu) \le \sqrt{\tfrac{1}{2}D_{\mathrm{KL}}(\nu||\mu)},$$
where $d_{\mathrm{TV}}$ denotes the total variation distance. Second, the logarithmic structure of $D_{\mathrm{KL}}(\cdot||\cdot)$ allows us to carry out explicit calculations, and numerical computations, which are considerably more difficult when using the total variation distance directly.
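These two properties can be checked concretely. The sketch below (our own illustration, not from the paper) evaluates the standard closed-form KL divergence between two Gaussians, which is the kind of explicit calculation the logarithmic structure permits, and verifies the Pinsker inequality numerically in one dimension, with $d_{\mathrm{TV}}$ computed by quadrature.

```python
import numpy as np

# Standard closed-form KL divergence between N(m1, S1) and N(m2, S2) in R^d.
def kl_gauss(m1, S1, m2, S2):
    d = len(m1)
    S2inv = np.linalg.inv(S2)
    dm = m2 - m1
    return 0.5 * (np.trace(S2inv @ S1) + dm @ S2inv @ dm - d
                  + np.log(np.linalg.det(S2) / np.linalg.det(S1)))

# 1-d check of Pinsker's inequality d_TV(nu, mu) <= sqrt(KL(nu||mu)/2),
# with d_TV(nu, mu) = (1/2) * integral |p - q| computed by quadrature.
x = np.linspace(-10.0, 10.0, 200001)
p = np.exp(-0.5 * (x - 1.0)**2) / np.sqrt(2*np.pi)          # nu  = N(1, 1)
q = np.exp(-0.5 * x**2 / 4.0) / np.sqrt(2*np.pi*4.0)        # mu  = N(0, 4)
dtv = 0.5 * np.trapz(np.abs(p - q), x)
kl = kl_gauss(np.array([1.0]), np.eye(1), np.array([0.0]), 4.0*np.eye(1))
print(dtv, np.sqrt(kl/2), dtv <= np.sqrt(kl/2))
```

Here the KL value is $\frac{1}{2}(\frac{1}{4}+\frac{1}{4}-1+\log 4) \approx 0.443$, and the quadrature value of $d_{\mathrm{TV}}$ indeed lies below $\sqrt{\mathrm{KL}/2}$.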
2.2. Γ-convergence. We recall the definition and a basic result concerning Γ-convergence. This is a useful tool for studying families of minimization problems. In this paper we will use it to study the parametric limit ε → 0 in our approximation problem.
Definition 2.1. Let $X$ be a metric space and $E_\varepsilon : X \to \mathbf{R}$ a family of functionals indexed by $\varepsilon > 0$. Then $E_\varepsilon$ Γ-converges to $E : X \to \mathbf{R}$ as $\varepsilon \to 0$ if the following conditions hold:
(i) (liminf inequality) for every $u \in X$ and every sequence $u_\varepsilon \in X$ such that $u_\varepsilon \to u$, it holds that $E(u) \le \liminf_{\varepsilon\downarrow 0} E_\varepsilon(u_\varepsilon)$;
(ii) (limsup inequality) for every $u \in X$ there exists a recovery sequence $\{u_\varepsilon\}$ such that $u_\varepsilon \to u$ and $E(u) \ge \limsup_{\varepsilon\downarrow 0} E_\varepsilon(u_\varepsilon)$.
We say that a sequence of functionals $\{E_\varepsilon\}$ is compact if, for every sequence $\{u_\varepsilon\}$ with $\limsup_{\varepsilon\downarrow 0} E_\varepsilon(u_\varepsilon) < \infty$, there exists a subsequence $\{u_{\varepsilon_j}\}$ and a point $u \in X$ such that $u_{\varepsilon_j} \to u$.
The notion of Γ-convergence is useful because of the following fundamental theorem, which can be proved by methods similar to those used in the proof of [4, Theorem 1.21].
Theorem 2.2. Let u ε be a minimizer of E ε with lim sup ε↓0 E ε (u ε ) < ∞. If E ε is compact and Γ-converges to E, then there exists a subsequence u ε j such that u ε j → u where u is a minimizer of E.
Thus, when this theorem applies, it tells us that minimizers of $E$ characterize the limits of convergent subsequences of minimizers of $E_\varepsilon$. In other words, the Γ-limit captures the behavior of the minimization problem in the small $\varepsilon$ limit.
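As a standard warm-up example (ours, not taken from the paper), consider $E_\varepsilon(u) = \sin(u/\varepsilon)$ on $X = \mathbf{R}$, which Γ-converges to the constant functional $E(u) \equiv -1$:

```latex
% Gamma-limit of E_eps(u) = sin(u/eps) on R is the constant E(u) = -1.
\begin{itemize}
\item[(i)] \emph{liminf inequality:} for any sequence $u_\varepsilon \to u$,
  $E_\varepsilon(u_\varepsilon) = \sin(u_\varepsilon/\varepsilon) \ge -1 = E(u)$,
  hence $E(u) \le \liminf_{\varepsilon \downarrow 0} E_\varepsilon(u_\varepsilon)$.
\item[(ii)] \emph{limsup inequality:} take the recovery sequence
  $u_\varepsilon = \varepsilon\bigl(-\tfrac{\pi}{2} + 2\pi k_\varepsilon\bigr)$,
  where $k_\varepsilon$ is the nearest integer to
  $\tfrac{1}{2\pi}\bigl(\tfrac{u}{\varepsilon} + \tfrac{\pi}{2}\bigr)$,
  so that $|u_\varepsilon - u| \le \pi\varepsilon \to 0$ while
  $E_\varepsilon(u_\varepsilon) = \sin\bigl(-\tfrac{\pi}{2} + 2\pi k_\varepsilon\bigr)
  = -1 = E(u)$.
\end{itemize}
```

Note that $\min E_\varepsilon = -1 = \min E$ for every $\varepsilon$, consistent with Theorem 2.2, even though $E_\varepsilon$ oscillates ever faster as $\varepsilon \downarrow 0$.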

2.3. Assumptions.
Throughout the paper, we make the following assumptions on the potential functions $V_1^\varepsilon$ and $V_2$ which define the target measure of interest.

Assumption 2.3.
(A-1) For any $\varepsilon > 0$, $V_1^\varepsilon$ and $V_2$ are non-negative functions in the spaces $C^4(\mathbf{R}^d)$ and $C^2(\mathbf{R}^d)$ respectively. Moreover, there exist constants $\varepsilon_0 > 0$ and $M_V > 0$ such that when $\varepsilon < \varepsilon_0$, for any $|\alpha| \le 4$, $|\beta| \le 2$ and all $x \in \mathbf{R}^d$.
(A-2) There exists an integer $n > 0$ such that when $\varepsilon \ll 1$, the set of minimizers of $V_1^\varepsilon$ is $E_\varepsilon = \{x_\varepsilon^1, x_\varepsilon^2, \cdots, x_\varepsilon^n\}$ and $V_1^\varepsilon(x_\varepsilon^i) = 0$, $i = 1, \cdots, n$.
(A-3) There exists $V_1$ such that $V_1^\varepsilon \to V_1$ pointwise. The limit $V_1$ has $n$ distinct global minimizers, given by $E = \{x^1, x^2, \cdots, x^n\}$. For each $i = 1, \ldots, n$ the Hessian $D^2V_1(x^i)$ is positive definite.
(A-4) There exist constants $c_0, c_1 > 0$ and $\varepsilon_0 > 0$ such that when $\varepsilon < \varepsilon_0$,

Remark 2.4. Conditions (A-2)-(A-4) mean that for sufficiently small $\varepsilon > 0$, the function $V_1^\varepsilon$ behaves like a quadratic function in the neighborhood of the minimizers $x_\varepsilon^i$ and of $x^i$. Consequently, the measure $\mu_\varepsilon$ is asymptotically normal in the local neighborhood of $x_\varepsilon^i$. In particular, in conjunction with Condition (A-5) this implies that there exist $\delta > 0$ and $C_\delta > 0$ such that $\forall\, 0 \le \eta < \delta$,

Remark 2.5. The local boundedness of $V_1^\varepsilon$ in $C^4(\mathbf{R}^d)$ (Assumption (A-1)), together with the pointwise convergence of $V_1^\varepsilon$ to $V_1$ (Assumption (A-3)), implies the much stronger locally uniform convergence of derivatives up to order 3. Furthermore, (A-4) then implies that

2.4. Notation. Throughout the paper, $C$ and $\tilde{C}$ will be generic constants which are independent of the quantities of interest, and may change from line to line. Let $S_{\ge}(\mathbf{R}, d)$ and $S_{>}(\mathbf{R}, d)$ be the sets of all $d \times d$ real matrices which are positive semi-definite or positive definite, respectively. Denote by $N(m, \Sigma)$ a Gaussian measure with mean $m$ and covariance matrix $\Sigma$. We use $|A|$ to denote the Frobenius norm of the $d \times d$ matrix $A$, namely $|A| = \sqrt{\mathrm{Tr}(A^T A)}$. We denote by $\lambda_{\min}(A)$ the smallest eigenvalue of $A$.
We let B(x, r) denote a ball in R d with center x and radius r. Given a random variable η, we use E η and P η when computing the expectation and the probability under the law of η respectively.

3. Approximation by Single Gaussian Measures.
Let $\mathcal{A}$ be the set of Gaussian measures on $\mathbf{R}^d$, given by $\mathcal{A} = \{N(m, \Sigma) : m \in \mathbf{R}^d,\ \Sigma \in S_{\ge}(\mathbf{R}, d)\}$. The set $\mathcal{A}$ is closed with respect to weak convergence of probability measures. Consider the variational problem
$$\inf_{\nu \in \mathcal{A}} D_{\mathrm{KL}}(\nu||\mu_\varepsilon). \tag{4}$$
Given $\nu = N(m, \Sigma) \in \mathcal{A}$, the Kullback-Leibler divergence $D_{\mathrm{KL}}(\nu||\mu_\varepsilon)$ can be calculated explicitly as
$$D_{\mathrm{KL}}(\nu||\mu_\varepsilon) = \mathbf{E}^\nu\Big[\frac{1}{\varepsilon}V_1^\varepsilon(x) + V_2(x)\Big] + \log Z_{\mu,\varepsilon} - \frac{1}{2}\log\big((2\pi)^d \det\Sigma\big) - \frac{d}{2}. \tag{5}$$
If $\Sigma$ is non-invertible then $D_{\mathrm{KL}}(\nu||\mu_\varepsilon) = +\infty$. The term $-\frac{d}{2}$ comes from the expectation $\mathbf{E}^\nu\big[\frac{1}{2}(x-m)^T\Sigma^{-1}(x-m)\big]$ and is independent of $\Sigma$. The term $-\frac{1}{2}\log\big((2\pi)^d\det\Sigma\big)$ prevents the measure $\nu$ from being too close to a Dirac measure. The following theorem shows that the problem (4) has a solution.
Theorem 3.1. For every $\varepsilon > 0$, there exists at least one minimizer to the variational problem (4).

Proof. We first show that the infimum in (4) is finite. In fact, consider $\nu^* = N(0, \frac{1}{4}I_d)$. Under Assumption 2.3 (A-1) we have that Note that the integral in the last expression is finite since $-\frac{4}{2} + 1 < 0$. Hence we know from (5) that $\inf_{\nu\in\mathcal{A}} D_{\mathrm{KL}}(\nu||\mu_\varepsilon) < \infty$. The existence of minimizers then follows from the fact that the Kullback-Leibler divergence has compact sub-level sets, together with the closedness of $\mathcal{A}$ with respect to weak convergence of probability measures; see e.g. [23, Corollary 2.2].
We aim to understand the asymptotic behavior of the minimizers $\nu_\varepsilon$ of the problem (4) as $\varepsilon \downarrow 0$. Due to the factor $\frac{1}{\varepsilon}$ in front of $V_1^\varepsilon$ in the definition (1) of $\mu_\varepsilon$, we expect the typical size of fluctuations around the minimizers to be of order $\sqrt{\varepsilon}$, and we reflect this in our choice of scaling. More precisely, for $m \in \mathbf{R}^d$, $\Sigma \in S_{\ge}(\mathbf{R}, d)$ we define $\nu_\varepsilon = N(m, \varepsilon\Sigma)$ and set
$$F_\varepsilon(m, \Sigma) = D_{\mathrm{KL}}(\nu_\varepsilon||\mu_\varepsilon). \tag{6}$$
Understanding the asymptotic behavior of the minimizers $\nu_\varepsilon$ in the small $\varepsilon$ limit may then be achieved by understanding the Γ-convergence of the functionals $F_\varepsilon$.
Intuitively, as $\varepsilon \downarrow 0$, we expect the measure $\mu_\varepsilon$ to concentrate on the set $\{x^i\}$, with the weights on the $x^i$ given by $\beta$; this intuition is reflected in the asymptotic behavior of the normalization constant $Z_{\mu,\varepsilon}$, as we now show. By definition,
$$Z_{\mu,\varepsilon} = \int_{\mathbf{R}^d} \exp\Big(-\frac{1}{\varepsilon}V_1^\varepsilon(x) - V_2(x)\Big)\,dx.$$
The following lemma follows from the Laplace approximation for integrals (see e.g. [14]) and Assumption 2.3 (A-4).
Lemma 3.2. Let $V_1^\varepsilon$ and $V_2$ satisfy Assumption 2.3. Then as $\varepsilon \downarrow 0$,
$$Z_{\mu,\varepsilon} = (2\pi\varepsilon)^{d/2}\Big(\sum_{i=1}^n e^{-V_2(x^i)}\det\big(D^2V_1(x^i)\big)^{-1/2} + o(1)\Big).$$
Recall from (6) that $F_\varepsilon(m, \Sigma) = D_{\mathrm{KL}}(\nu_\varepsilon||\mu_\varepsilon)$ with the specific scaling $\nu_\varepsilon = N(m, \varepsilon\Sigma)$. In view of the expression (5) for the Kullback-Leibler divergence, it follows from Lemma 3.2 that Armed with this analysis of the normalization constant, we may now prove the following theorem, which identifies the Γ-limit of $F_\varepsilon$. To this end we define The following corollary follows directly from the Γ-convergence of $F_\varepsilon$.
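The Laplace asymptotics can be sanity-checked numerically. In the sketch below (our own one-dimensional illustration, with $V_1(x) = (x^2-1)^2$, so that $V_1''(\pm 1) = 8$, and $V_2(x) = x^2/2$), the quadrature value of $Z_{\mu,\varepsilon}$ is compared with the one-dimensional Laplace formula $\sqrt{2\pi\varepsilon}\,\sum_i e^{-V_2(x^i)}/\sqrt{V_1''(x^i)}$; the ratio approaches one as $\varepsilon \downarrow 0$.

```python
import numpy as np

# 1-d illustration (our choice): V_1(x) = (x^2-1)^2 with V_1''(+/-1) = 8,
# V_2(x) = x^2/2, minimizers x^1 = -1, x^2 = +1.
def Z_quadrature(eps):
    """Z_{mu,eps} computed by direct quadrature."""
    x = np.linspace(-3.0, 3.0, 400001)
    return np.trapz(np.exp(-(x**2 - 1.0)**2 / eps - 0.5 * x**2), x)

def Z_laplace(eps):
    """One-dimensional Laplace approximation: sqrt(2*pi*eps) * sum over modes."""
    return np.sqrt(2*np.pi*eps) * 2.0 * np.exp(-0.5) / np.sqrt(8.0)

for eps in (0.1, 0.01, 0.001):
    print(eps, Z_quadrature(eps) / Z_laplace(eps))
```

The relative error behaves like $O(\varepsilon)$, coming from the cubic and quartic terms of $V_1$ near the minimizers, which is consistent with the absence of a rate statement in Lemma 3.2 for the general case.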
Before we give the proof of Theorem 3.3, let us first discuss the limit functional $F$ as well as its minimization. We assume that $m = x^{i_0}$ for some $i_0 \in \{1, \ldots, n\}$ and rewrite the definition of $F_0(x^{i_0}, \Sigma)$ by adding and subtracting suitable logarithmic terms. Now it is interesting to see that the first line of (10) gives the Kullback-Leibler divergence $D_{\mathrm{KL}}\big(N(0, \Sigma)\,||\,N(0, (D^2V_1(x^{i_0}))^{-1})\big)$. The second line of (10) is equal to the Kullback-Leibler divergence $D_{\mathrm{KL}}(e_{i_0}||\beta)$, for $e_{i_0} := (0, \cdots, 1, \cdots, 0)$. In conclusion, in the limit $\varepsilon \downarrow 0$ the Kullback-Leibler divergence between the best Gaussian measure $\nu_\varepsilon$ and the measure $\mu_\varepsilon$ consists of two parts: the first part is the relative entropy between the Gaussian measure with rescaled covariance $\Sigma$ and the Gaussian measure with covariance determined by $(D^2V_1(x^i))^{-1}$; the second part is the relative entropy between the Dirac mass supported at $x^i$ and a weighted sum of Dirac masses, with weights $\beta$, at the $\{x^j\}_{j=1}^n$. Clearly, to minimize $F_0(m, \Sigma)$ we need, on the one hand, to choose $m = x^i$ and $\Sigma = (D^2V_1(x^i))^{-1}$ for some $i \in \{1, \cdots, n\}$; for this choice the first term on the right side of (10) vanishes. In order to minimize the second term we need to choose the minimizer $x^i$ with maximal weight $\beta^i$. In particular, the following corollary holds.
Corollary 3.5. The minimum of F 0 is zero when n = 1, but it is strictly positive when n > 1.
Corollary 3.5 reflects the fact that, in the limit ε ↓ 0, a single Gaussian measure is not the best choice for approximating a non-Gaussian measure with multiple modes; this motivates our study of Gaussian mixtures in section 4.
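The conclusion of Corollary 3.5 can be observed numerically. The following grid-search sketch (ours; not an algorithm from the paper) minimizes $D_{\mathrm{KL}}(N(m, s^2)||\mu_\varepsilon)$ in one dimension for a bimodal target whose mode at $x = +1$ is heavier: the optimal single Gaussian locks onto that mode, and the minimized divergence stays bounded away from zero.

```python
import numpy as np

# Illustrative bimodal target (our choice): V_1(x) = (x^2-1)^2 and
# V_2(x) = (x - 1/2)^2 / 2, so the mode at x = +1 carries more mass.
eps = 0.05
x = np.linspace(-4.0, 4.0, 20001)
V1 = (x**2 - 1.0)**2
V2 = 0.5 * (x - 0.5)**2
logZ = np.log(np.trapz(np.exp(-V1/eps - V2), x))

def kl_single_gaussian(m, s):
    """D_KL(N(m, s^2) || mu_eps), using the decomposition underlying (5)."""
    nu = np.exp(-0.5*(x - m)**2 / s**2) / np.sqrt(2.0*np.pi*s**2)
    entropy_term = -0.5*np.log(2.0*np.pi*np.e*s**2)   # E_nu[log nu]
    return entropy_term + np.trapz(nu*(V1/eps + V2), x) + logZ

best = min((kl_single_gaussian(m, s), m, s)
           for m in np.linspace(-2.0, 2.0, 81)
           for s in np.linspace(0.03, 0.3, 28))
print(best)
```

The optimal mean sits near the heavier mode $x = +1$ with $s \approx \sqrt{\varepsilon/V_1''(1)}$, while the minimized divergence remains of order $-\log\beta^+ > 0$, the mass lost to the lighter mode.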
The proofs of Theorem 3.3 and Corollary 3.4 are provided after establishing a sequence of lemmas. The following lemma shows that the sequence of functionals $\{F_\varepsilon\}$ is compact (recall Definition 2.1). It is well known that the Kullback-Leibler divergence (with respect to a fixed reference measure $\mu$) has compact sub-level sets with respect to weak convergence of probability measures. Here we prove a stronger statement, specific to the family of reference measures $\mu_\varepsilon$: namely, uniform bounds from above and below on the rescaled covariances, i.e. we bound $\Sigma_\varepsilon$ from above and below whenever $F_\varepsilon(m_\varepsilon, \Sigma_\varepsilon)$ is controlled.
Since m ε and Σ ε are defined in finite dimensional spaces, we only need to show that both sequences are uniformly bounded. The proof consists of the following steps.
Step 1. We first prove the following rough bounds for $\mathrm{Tr}(\Sigma_\varepsilon)$: there exist positive constants $C_1, C_2$ such that (13) holds when $\varepsilon \ll 1$. In fact, from the formula (8) and the assumption that $V_1^\varepsilon$ and $V_2$ are non-negative, we can deduce that the bound (14) holds when $\varepsilon \ll 1$. The lower bound of (13) then follows from (14) and the arithmetic-geometric mean inequality, which holds for any positive definite $A$. In addition, using condition (A-5) for the potential $V_1^\varepsilon$, we obtain from (8) a further estimate when $\varepsilon \ll 1$, where we have used the inequality (15) and the assumption that $V_2$ is non-negative. Dropping the non-negative terms on the right-hand side, we rewrite this expression as an estimate on $\mathrm{Tr}(\Sigma_\varepsilon)$.

Step 2. In this step we show that for $\varepsilon \ll 1$ the mass of $\nu_\varepsilon$ concentrates near the minimizers. More precisely, we claim that there exist constants $R_1, R_2 > 0$ such that for every $\varepsilon \ll 1$ there exists an index $i_0 \in \{1, 2, \cdots, n\}$ such that (17) holds. On the one hand, from the expression (8) and the assumption that $\limsup_{\varepsilon\downarrow 0} F_\varepsilon(m_\varepsilon, \Sigma_\varepsilon) \le M$, we know that there exist $C_3, C_4 > 0$ such that (18) holds when $\varepsilon \ll 1$. On the other hand, a bound follows from (3) for $\eta \ll 1$ which, combined with (18), leads to the key estimate. Now we choose $\eta = \eta_\varepsilon := 2\varepsilon(C_3 + C_4\log(\det\Sigma_\varepsilon))/C_\delta$ (by the rough bound (13) this $\eta_\varepsilon$ tends to zero as $\varepsilon \to 0$, which permits us to apply (3)). This implies (17) with $R_1 = \frac{2C_3}{C_\delta}$ and $R_2 = \frac{2C_4}{C_\delta}$, by passing to the complement and observing that $\sup_{i\in\{1,\ldots,n\}}$

Step 3. We prove the bounds (12). As in the previous step we set It follows from (17) that $\limsup_{\varepsilon\downarrow 0}\det\Sigma_\varepsilon < C$ for some $C > 0$. In order to obtain a lower bound on the eigenvalues of $\Sigma_\varepsilon$, we rewrite the same integral in a slightly different way. We use the change of coordinates $y = P_\varepsilon^T x$, where $P_\varepsilon$ is orthogonal and diagonalizes $\Sigma_\varepsilon$, and observe that under this transformation, for $j = 1, \ldots, n$. This yields, for any $i \in \{1, 2, \cdots, d\}$, a bound which, together with the uniform boundedness of $\det\Sigma_\varepsilon$, implies that the eigenvalues $\Lambda_\varepsilon^i$ are bounded from below. This proves (12).
Step 4. We show that $\mathrm{dist}(m_\varepsilon, E) \downarrow 0$ as $\varepsilon \downarrow 0$. On the one hand, by the upper bound on the variance in (12) and standard Gaussian concentration, we see that there exists a constant $c > 0$ such that for $\varepsilon \ll 1$ we have $\nu_\varepsilon(B(m_\varepsilon, \sqrt{\varepsilon}c)) \ge \frac{3}{4}$. On the other hand, we have already seen in (20) that the mass of $\nu_\varepsilon$ concentrates near the minimizers, and hence $B(m_\varepsilon, \sqrt{\varepsilon}c)$ must intersect at least one of the $B(x^i, \eta_\varepsilon)$. This yields, for this particular index $i$, the bound $|m_\varepsilon - x^i| \le \sqrt{\varepsilon}c + \eta_\varepsilon$, and establishes the claim.
Then as ε ↓ 0, Proof. The lemma follows directly from the expression (5) and Taylor expansion. Indeed, we first expand V 2 near m ε up to the first order and then take expectation to get Thanks to the condition (A-1), one can obtain the bound when ε ≪ 1. Note that in the last inequality we have used the assumption that all eigenvalues of Σ ε are bounded from above which ensures that for ε ≪ 1 the matrix Similarly, one can take the fourth order Taylor expansion for V ε 1 near m ε and then take expectation to obtain that with r 2,ε ≤ Cε. Then (24) follows directly by inserting the above equations into the expression (5).
The following corollary is a direct consequence of Lemma 3.2, Lemma 3.6 and Lemma 3.7, providing an asymptotic formula for F ε (m ε , Σ ε ) as ε ↓ 0.
Remark 3.9. We do not have a bound on the convergence rate of the residual in the expression (26), because Lemma 3.2 does not provide a convergence rate for $Z_{\mu,\varepsilon}$. This is because we do not impose any rate for the convergence of the $x_\varepsilon^i$ to the $x^i$. The bound $|r_\varepsilon| \le C\varepsilon$ in Lemma 3.7 will be used to prove the rate of convergence for the posterior measures that arise from Bayesian inverse problems; see Theorem 5.4 in section 5, and its proof.
Proof of Theorem 3.3. We first prove the liminf inequality. Let $(m_\varepsilon, \Sigma_\varepsilon)$ be such that $m_\varepsilon \to m$ and $\Sigma_\varepsilon \to \Sigma$. We want to show that $F(m, \Sigma) \le \liminf_{\varepsilon\downarrow 0} F_\varepsilon(m_\varepsilon, \Sigma_\varepsilon)$. We may assume that $\liminf_{\varepsilon\downarrow 0} F_\varepsilon(m_\varepsilon, \Sigma_\varepsilon) < \infty$, since otherwise there is nothing to prove. By Lemma 3.6 this implies that $m \in E$ and that $\Sigma$ is positive definite. The liminf inequality then follows from (26) and the fact that $V_1^\varepsilon \ge 0$. Next we show that the limsup inequality holds.
Then the limsup inequality follows from (26).

4. Approximation by Gaussian Mixtures.
In the previous section we studied the approximation of the target measure (1) by a single Gaussian. Corollary 3.5 shows that, when the measure has only one mode, this approximation is perfect in the limit $\varepsilon \to 0$: the limiting KL-divergence tends to zero, since both entropies in (11) tend to zero. However, when multiple modes exist, and persist in the small $\varepsilon$ limit, the single Gaussian is inadequate, because the relative entropy term $D_{\mathrm{KL}}(e_i||\beta)$ cannot be small even though the relative entropy between Gaussians tends to zero. In this section we consider the approximation of the target measure $\mu_\varepsilon$ by Gaussian mixtures in order to overcome this issue. We show that in the case of $n$ minimizers of $V_1$, the approximation with a mixture of $n$ Gaussians is again perfect as $\varepsilon \to 0$. The Gaussian mixture model is widely used in the pattern recognition and machine learning communities; see the relevant discussion in [3, Chapter 9].
Let $\triangle^n$ be the standard $n$-simplex, i.e., For $\xi \in (0, 1)$, we define $\triangle^n_\xi = \{\alpha = (\alpha_1, \alpha_2, \cdots, \alpha_n) \in \mathbf{R}^n : \alpha_i \ge \xi\}$. Recall that $\mathcal{A}$ is the set of Gaussian measures, and define the set of Gaussian mixtures $\mathcal{M}_n$ in (27). Also, for a fixed $\xi = (\xi_1, \xi_2) \in (0, 1) \times (0, \infty)$ we define the set While $\mathcal{M}_n$ is the set of all convex combinations of $n$ Gaussians taken from $\mathcal{A}$, the set $\mathcal{M}_n^\xi$ can be seen as an "effective" version of $\mathcal{M}_n$, in which each Gaussian component plays an active role, and no two Gaussians share a common center.
Consider the problem of minimizing $D_{\mathrm{KL}}(\nu||\mu_\varepsilon)$ within $\mathcal{M}_n$ or $\mathcal{M}_n^\xi$. Since the sets $\mathcal{M}_n$ and $\mathcal{M}_n^\xi$ are both closed with respect to weak convergence, we have the following existence result, whose proof is similar to that of Theorem 3.1 and is omitted.
from the set $\mathcal{M}_n$, or from the set $\mathcal{M}_n^\xi$ with some fixed $\xi = (\xi_1, \xi_2) \in (0, 1) \times (0, \infty)$. In both cases, there exists at least one minimizer of the functional (29).

Now we continue to investigate the asymptotic behavior of the Kullback-Leibler approximations based on Gaussian mixtures. To that end, we again parametrize a measure $\nu$ in the set $\mathcal{M}_n$ or $\mathcal{M}_n^\xi$ by the weights $\alpha = (\alpha_1, \alpha_2, \cdots, \alpha_n)$, as well as the $n$ means and the $n$ covariance matrices. As in the previous section, we need to choose the right scaling in our Gaussian mixtures to reflect the typical size of fluctuations of $\mu_\varepsilon$. Thus for $m = (m^1, m^2, \cdots, m^n)$ and $\Sigma = (\Sigma^1, \Sigma^2, \cdots, \Sigma^n)$ we set
$$\nu_\varepsilon = \sum_{i=1}^n \alpha_i N(m^i, \varepsilon\Sigma^i). \tag{30}$$
We can view $D_{\mathrm{KL}}(\nu_\varepsilon||\mu_\varepsilon)$ as a functional of $(\alpha, m, \Sigma)$ and study the Γ-convergence of the resulting functional. For that purpose, we need to restrict our attention to finding the best Gaussian mixtures within $\mathcal{M}_n^\xi$ for some $\xi \in (0, 1) \times (0, \infty)$, for the following reasons. First, we require the individual Gaussian measures $\nu^i$ to be active (i.e. $\alpha_i > \xi_1 > 0$) because $D_{\mathrm{KL}}(\nu_\varepsilon||\mu_\varepsilon)$, as a family of functionals of $(\alpha, m, \Sigma)$ indexed by $\varepsilon$, is not compact if we allow some of the $\alpha_i$ to vanish. In fact, if $\alpha_\varepsilon^i = 0$ for some $i \in \{1, 2, \cdots, n\}$, then $D_{\mathrm{KL}}(\nu_\varepsilon||\mu_\varepsilon)$ is independent of $m_\varepsilon^i$ and $\Sigma_\varepsilon^i$. In particular, if $|m_\varepsilon^i| \wedge |\Sigma_\varepsilon^i| \to \infty$ while $|m_\varepsilon^j| \vee |\Sigma_\varepsilon^j| < \infty$ for all $j \ne i$, then it still holds that $\limsup_{\varepsilon\downarrow 0} D_{\mathrm{KL}}(\nu_\varepsilon||\mu_\varepsilon) < \infty$. Second, it makes more sense to assume that the individual Gaussian means stay apart from each other (i.e. $\min_{i\ne j}|m^i - m^j| \ge \xi_2 > 0$), since we primarily want to locate different modes of the target measure. Moreover, it seems impossible to identify a sensible Γ-limit without such an assumption; see Remark 4.7.
Recall the Γ-limit F defined in (9). Then we have the following Γ-convergence result.
Remark 4.3. The right-hand side of the definition of $G$ consists of two parts: the first part is a weighted relative entropy which measures the discrepancy between two Gaussians, and the second part is the relative entropy between sums of Dirac masses at $\{x^j\}_{j=1}^n$ with weights $\alpha$ and $\beta$ respectively. This is in the same spirit as the entropy splitting used in [21, Lemma 2.4].
Before we prove Theorem 4.2, we consider the minimization of the limit functional $G$. First let $\xi_1, \xi_2$ satisfy the condition (34), where the $x^i$ are the minimizers of $V_1$. To minimize $G$ we may, without loss of generality, choose $m^i = \overline{m}^i := x^i$. Then the weighted relative entropy in the first term of the definition of $G$ vanishes for the choice $\Sigma^i = (D^2V_1(x^i))^{-1}$. The relative entropy of the weights also vanishes if we choose the weights $\alpha = \overline{\alpha} := \beta$. To summarize, the minimizer $(\overline{\alpha}, \overline{m}, \overline{\Sigma})$ of $G$ is given by and $G(\overline{\alpha}, \overline{m}, \overline{\Sigma}) = 0$. The following corollary is a direct consequence of the Γ-convergence of $G_\varepsilon$.
For a non-Gaussian measure µ ε with multiple modes, i.e., n > 1 in the Assumption 2.3, we have seen in Corollary 3.5 that the Kullback-Leibler divergence between µ ε and the best Gaussian measure selected from A remains positive as ε ↓ 0. However, this gap is filled by using Gaussian mixtures, namely, with ν ε being chosen as the best Gaussian mixture, the Kullback-Leibler divergence D KL (ν ε ||µ ε ) ↓ 0 as ε ↓ 0.
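This can again be checked numerically. Continuing our one-dimensional illustration, the sketch below evaluates $D_{\mathrm{KL}}(\nu_\varepsilon||\mu_\varepsilon)$ by quadrature for the mixture suggested by the Γ-limit: means at the modes $\pm 1$, variances $\varepsilon/V_1''(\pm 1) = \varepsilon/8$, and Laplace weights $\beta$. The divergence decreases toward zero as $\varepsilon$ shrinks, in contrast to the strictly positive single-Gaussian limit of Corollary 3.5.

```python
import numpy as np

def kl_gamma_limit_mixture(eps):
    """KL between the Gamma-limit-suggested 2-component mixture and mu_eps (1-d).

    Illustrative target (our choice): V_1(x) = (x^2-1)^2, V_2(x) = (x-1/2)^2/2.
    """
    x = np.linspace(-4.0, 4.0, 200001)
    V1 = (x**2 - 1.0)**2                       # modes at +/-1, V1''(+/-1) = 8
    V2 = 0.5 * (x - 0.5)**2
    logw = -V1/eps - V2
    logmu = logw - np.log(np.trapz(np.exp(logw), x))
    # Laplace weights beta_i ~ exp(-V2(x_i))/sqrt(V1''(x_i)); the sqrt factors
    # are equal here and cancel after normalization.
    bm, bp = np.exp(-0.5*(-1.0 - 0.5)**2), np.exp(-0.5*(1.0 - 0.5)**2)
    bm, bp = bm/(bm + bp), bp/(bm + bp)
    s2 = eps / 8.0
    lognu = np.logaddexp(np.log(bm) - 0.5*(x + 1.0)**2/s2,
                         np.log(bp) - 0.5*(x - 1.0)**2/s2) - 0.5*np.log(2*np.pi*s2)
    nu = np.exp(lognu)
    return np.trapz(nu * (lognu - logmu), x)

for eps in (0.05, 0.005):
    print(eps, kl_gamma_limit_mixture(eps))
```

For this target the single-Gaussian limit of the divergence is $-\log\beta^+ \approx 0.31$; the mixture values fall well below it and shrink with $\varepsilon$.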
Similarly to the proof of Theorem 3.3, Theorem 4.2 follows directly from Corollary 4.8 below, whose proof requires several lemmas. We start by showing the compactness of $\{G_\varepsilon\}$.

Lemma 4.5. Let $G_\varepsilon$ be defined by (31). Fix $\xi = (\xi_1, \xi_2)$ satisfying the condition (34). Let and $\mathrm{dist}(m_\varepsilon^i, E) \downarrow 0$ as $\varepsilon \downarrow 0$. In particular, for any $i$, there exist $j = j(i) \in \{1, 2, \cdots, n\}$ and a subsequence $\{m_k^i\}_{k\in\mathbf{N}}$ of $\{m_\varepsilon^i\}$ such that $m_k^i \to x^j$ as $k \to \infty$.

Proof. We write $M = \limsup_{\varepsilon\downarrow 0} G_\varepsilon(\alpha_\varepsilon, m_\varepsilon, \Sigma_\varepsilon)$ and where the inequality follows simply from the monotonicity of the logarithmic function. As each term $D_{\mathrm{KL}}(\nu_\varepsilon^j||\mu_\varepsilon)$ is non-negative, this implies the bound Using the lower bound $\alpha_\varepsilon^j > \xi_1$, which holds by assumption, we get a uniform upper bound on $D_{\mathrm{KL}}(\nu_\varepsilon^j||\mu_\varepsilon)$, which in turn permits us to invoke Lemma 3.6.
Proof. By assumption, we know from (32) that where $\rho_\varepsilon = \sum_{i=1}^n \alpha_\varepsilon^i \rho_\varepsilon^i$ is the probability density of the measure $\nu_\varepsilon$. First of all, applying the same Taylor expansion arguments used to obtain (24), one can deduce that with $r_{1,\varepsilon} \le C\varepsilon$ and $C = C(C_1, c_1, M_V)$. Next, we claim that the entropy of $\rho_\varepsilon$ can be rewritten as where $r_{2,\varepsilon} \le e^{-\frac{C}{\varepsilon}}$ when $\varepsilon \ll 1$, with the constant $C = C(C_1, c_2, \xi_2)$. By definition, so it suffices to show that for each $i \in \{1, \ldots, n\}$ we have with $r_{2,\varepsilon} \le e^{-\frac{C}{\varepsilon}}$. Indeed, the monotonicity of the logarithmic function yields In order to show the matching lower bound, we first recall that the means $m_\varepsilon^i$ of the $\nu_\varepsilon^i$ are well separated by assumption, $\min_{j\ne i}|m_\varepsilon^i - m_\varepsilon^j| > \xi_2$. Let $\delta \ll \xi_2$ to be fixed below and set $B_\delta^i = B(m_\varepsilon^i, \delta)$. Then we write We first show that the error term $E_\varepsilon^2$ is exponentially small. To that end, we first drop the exponential term in the Gaussian density to obtain the crude bound where in the second inequality we use the fact that $\det\Sigma_\varepsilon^i$ is bounded away from zero, which has been established in (36). Moreover, by definition we have Plugging these bounds in and using Gaussian concentration, as well as the lower bound on $\lambda_{\min}$ established in Lemma 4.5, when $\varepsilon \ll 1$. Next, we want to bound $E_\varepsilon^1$. Notice that $m_\varepsilon^j \to m^j$ for $j = 1, \cdots, n$; hence if $x \in B_\delta^i$ and if $\delta < \xi_2$, then $|x - m_\varepsilon^j| > \xi_2 - \delta$ for any $j \ne i$ when $\varepsilon \ll 1$. As a consequence, This together with the elementary inequality where we used that $\alpha_\varepsilon^i$ is bounded away from zero. Hence (40) follows directly from (41)-(47). Finally, (37) follows from combining (38), (39) and the identity

Remark 4.7. The assumption that $\min_{j\ne i}|m_\varepsilon^i - m_\varepsilon^j| > \xi_2 > 0$ is the crucial condition that allows us to express the entropy of the Gaussian mixture in terms of the mixture of entropies of the individual Gaussians (i.e. equation (39)), leading to the asymptotic formula (37).
Neither formula (39) nor (24) is likely to be true without such an assumption since the cross entropy terms are not negligible.
The following corollary follows immediately from Lemma 4.6, by plugging in the Laplace approximation of the normalization constant $Z_{\mu,\varepsilon}$ given in Lemma 3.2 and rearranging terms.
Remark 4.9. Similarly to the discussion in Remark 3.9, the residual in (48) is here demonstrated to be of order o(1), but the quantitative bound that |r ε | ≤ Cε in (37) can be used to extract a rate of convergence. This can be used to study the limiting behaviour of posterior measures arising from Bayesian inverse problems when multiple modes are present; see the next section.

5. Applications in Bayesian Inverse Problems.
Consider the inverse problem of recovering $x \in \mathbf{R}^d$ from the noisy data $y \in \mathbf{R}^d$, where $y$ and $x$ are linked through the equation
$$y = G(x) + \eta.$$
Here $G$ is called the forward operator, which maps $\mathbf{R}^d$ into itself, and $\eta \in \mathbf{R}^d$ represents the observational noise. We take a Bayesian approach to solving the inverse problem. The main idea is to first model our knowledge about $x$ with a prior probability distribution, leading to a joint distribution on $(x, y)$ once the probabilistic structure of $\eta$ is defined. We then update the prior based on the observed data $y$: specifically, we obtain the posterior distribution $\mu^y$, which is the conditional distribution of $x$ given $y$, and is the solution to the Bayesian inverse problem. From this posterior measure one can extract information about the unknown quantity of interest. We remark that since $G$ is non-linear in general, the posterior is generally not Gaussian even when the noise and the prior are both assumed to be Gaussian. A systematic treatment of the Bayesian approach to inverse problems may be found in [27]. In Bayesian statistics there is considerable interest in the study of the asymptotic performance of posterior measures from a frequentist perspective; this is often formalized as posterior consistency. To define this precisely, consider a sequence of observations $\{y_j\}_{j\in\mathbf{N}}$, generated from the truth $x^\dagger$ via
$$y_j = G(x^\dagger) + \eta_j, \tag{50}$$
where $\{\eta_j\}_{j\in\mathbf{N}}$ is a sequence of random noises. This may model a statistical experiment with increasing amounts of data or with vanishing noise. In either case, posterior consistency refers to concentration of the posterior distribution around the truth as the data quality increases. For parametric statistical models, Doob's consistency theorem [28, Theorem 10.10] guarantees posterior consistency under an identifiability assumption on the forward model. For nonparametric models, in which the parameters of interest lie in infinite dimensional spaces, the corresponding posterior consistency is a much more challenging problem.
Schwartz's theorem [25, 2] provides one of the main theoretical tools for proving posterior consistency in infinite dimensional spaces; it replaces identifiability by a stronger testability assumption. The posterior contraction rate, quantifying the speed at which the posterior contracts to the truth, has been determined in various Bayesian statistical models (see [10, 26, 7]). In the context of Bayesian inverse problems, posterior consistency has mostly been studied to date for linear inverse problems with Gaussian priors [16, 1]. The recent paper [29] studied posterior consistency for a specific nonlinear Bayesian inverse problem, using a stability estimate for the underlying inverse problem together with posterior consistency results for the Bayesian regression problem.

In this section, our main interest is not the consistency of the posterior distribution, but a detailed characterization of its asymptotic behavior. We consider two limit processes in (50): the small noise limit and the large data limit. In the former case, we assume that the noise is $\eta_i = \frac{1}{\sqrt{i}}\eta$, where $\eta$ is distributed according to the standard normal $N(0, I_d)$, and the data consist of the most accurate observation $y_N$. In the latter case, the sequence $\{\eta_i\}_{i\in\mathbb{N}}$ is assumed to be independent and identically distributed according to the standard normal, and we accumulate the observations, so that the data are $y^N = \{y_1, y_2, \cdots, y_N\}$. In addition, we assume that the prior distribution $\mu_0$ has density proportional to $e^{-V_0(x)}$ with normalization constant $Z_0 > 0$. Since the data and the posterior are fully determined by the noise, with $\eta = \eta$ or $\eta = \{\eta_i\}_{i\in\mathbb{N}}$, we denote the posterior by $\mu^\eta_N$ to indicate this dependence. Using Bayes's formula, we calculate the posterior distribution in both limiting cases below.
• Small noise limit:
$$ \mu^\eta_N(dx) \propto \exp\Big(-\frac{N}{2}\,\big|y_N - G(x)\big|^2 - V_0(x)\Big)\,dx. \tag{51}$$
• Large data limit:
$$ \mu^\eta_N(dx) \propto \exp\Big(-\frac{1}{2}\sum_{j=1}^N \big|y_j - G(x)\big|^2 - V_0(x)\Big)\,dx. \tag{52}$$
In both cases, we are interested in the limiting behavior of the posterior distribution $\mu^\eta_N$ as $N \to \infty$. To this end, we assume that the forward operator $G$ satisfies one of the following assumptions.
(ii) $G \in C^4(\mathbf{R}^d; \mathbf{R}^d)$ and the zero set of the equation $G(x) = G(x^\dagger)$ is finite.

The following model problem gives a concrete example where these assumptions are satisfied.
Model Problem. Consider the following one dimensional elliptic problem:
$$ -u''(x) + e^{q(x)}\,u(x) = f(x), \quad x \in (0,1), \qquad u(0) = u(1) = 0. \tag{53}$$
Here we assume that $q, f \in L^\infty(0,1)$ and that $f$ is positive on $(0,1)$. The inverse problem of interest is to find $q$ from knowledge of the solution $u$. We restrict ourselves to a finite dimensional version of (53), obtained from the finite difference discretization
$$ \frac{1}{h^2}\big(2u_k - u_{k-1} - u_{k+1}\big) + e^{q_k} u_k = f_k, \qquad k = 1, \cdots, M, \qquad u_0 = u_{M+1} = 0. $$
Here $u_k$, $f_k$ and $q_k$ are approximations to $u(x_k)$, $f(x_k)$ and $q(x_k)$ with $x_k = kh$, $k = 1, \cdots, M$, and $h = 1/(M+1)$. The corresponding finite dimensional inverse problem is to find the vector $q = \{q_k\}_{k=1}^M$ from the solution vector $u = \{u_k\}_{k=1}^M$, given the right-hand side $f = \{f_k\}_{k=1}^M$. For ease of notation, let us denote by $A$ the matrix representation of the one dimensional discrete Laplacian, i.e. $A_{ii} = 2/h^2$ for $i = 1, 2, \cdots, M$ and $A_{ij} = -1/h^2$ when $|i - j| = 1$, and let $Q$ be the diagonal matrix with $Q_{ii} = e^{q_i}$, $i = 1, 2, \cdots, M$. With this notation we can write the forward map $G$ as
$$ G(q) = (A + Q)^{-1} f. $$
Note that both $A$ and $Q$ are positive definite, so that $A + Q$ is invertible. We now discuss this forward map, and variants of it, in relation to Assumption 5.1. Consider first Assumption 5.1 (i). The map $G$ is smooth in $q$, since $Q$ depends smoothly on $q$. In particular, for any fixed $q \in \mathbf{R}^M$ with corresponding solution vector $u$, a direct calculation shows that the derivative matrix $D_q G$ of the forward map is given by
$$ D_q G = -(A + Q)^{-1} Q\, U, $$
where $U$ is the diagonal matrix with diagonal vector $u$. Since the $f_k$ are positive, it follows from the (discrete) maximum principle that the $u_k$ are also positive, which in turn implies that $U$ is invertible. Consequently, the matrix $D_q G$ is invertible. By the inverse function theorem, the map $G : \mathbf{R}^M \to \mathbf{R}^M$ is invertible near every $q \in \mathbf{R}^M$ and its inverse $G^{-1}(u)$ is smooth in $u$. Therefore Assumption 5.1 (i) is fulfilled for any $x^\dagger = q^\dagger \in \mathbf{R}^M$. The problem (53) can also be modified slightly so that Assumption 5.1 (ii) is satisfied.
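The computations above can be checked numerically. The following sketch is our own illustration, with hypothetical choices $M = 10$ and $f \equiv 1$: it assembles the discrete forward map $G(q) = (A+Q)^{-1} f$, compares the Jacobian $-(A+Q)^{-1} Q U$ with a finite difference approximation, and confirms the positivity of $u$ guaranteed by the discrete maximum principle.

```python
import numpy as np

def forward_map(q, f):
    """Discrete forward map G(q) = (A + Q)^{-1} f for the 1d model problem,
    with A the discrete Laplacian and Q = diag(exp(q))."""
    M = q.size
    h = 1.0 / (M + 1)
    A = (2.0 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)) / h**2
    Q = np.diag(np.exp(q))
    return np.linalg.solve(A + Q, f), A, Q

M = 10
rng = np.random.default_rng(0)
q = rng.normal(size=M)
f = np.ones(M)                       # positive right-hand side
u, A, Q = forward_map(q, f)

# Jacobian D_q G = -(A + Q)^{-1} Q U with U = diag(u), versus finite differences.
DG = -np.linalg.solve(A + Q, Q @ np.diag(u))
eps = 1e-6
DG_fd = np.empty((M, M))
for j in range(M):
    qp = q.copy(); qp[j] += eps
    DG_fd[:, j] = (forward_map(qp, f)[0] - u) / eps

print(np.max(np.abs(DG - DG_fd)))    # agreement of analytic and FD Jacobians
print(np.all(u > 0))                 # discrete maximum principle: u > 0
```

Since all three factors $(A+Q)^{-1}$, $Q$ and $U$ are invertible here, the computed Jacobian has full rank, in line with the inverse function theorem argument.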
In fact, consider problem (53) with the coefficient $\exp(q)$ replaced by $q^2$. Then Assumption 5.1 (ii) is satisfied for any $x^\dagger = q^\dagger$ without zero entries. More specifically, the resulting forward map is still smooth, but the equation $G(q) = G(q^\dagger)$ now has the finitely many solutions obtained by changing the signs of the entries of $q^\dagger$, corresponding to the fact that $q$ is determined only up to a sign in each entry. Moreover, if $q^\dagger$ has no zero entry, $G^{-1}$ is smooth near each of these solutions $q^\dagger_i$.

We divide the exposition below according to whether the noise is fixed or treated as a random variable. For a fixed realization of noise $\eta = \eta$, by applying the theory developed in the previous section, we show the asymptotic normality of $\mu^\eta_N$ in the small noise limit. Furthermore, we obtain a Bernstein-Von Mises type theorem for $\mu^\eta_N$ with respect to both limit processes, small noise and large data.
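The multimodality produced by this sign ambiguity can be seen in a minimal one dimensional sketch of our own (so $M = 1$, with a hypothetical prior potential $V_0$ centred off zero so that the two modes receive unequal weights). Because $|G'(\pm q^\dagger)|$ coincide for $G(q) = q^2$, the relative mass of the two posterior modes tracks the ratio of the prior density at the modes.

```python
import numpy as np

# Unnormalized posterior for the sign-ambiguous 1d map G(q) = q^2:
# y = G(q_true) observed at noise level eps; hypothetical prior potential V0.
G  = lambda q: q**2
V0 = lambda q: 0.5 * (q - 0.5)**2    # off-centre prior breaks the weight symmetry
q_true, eps = 1.0, 1e-3
y = G(q_true)

q = np.linspace(-3.0, 3.0, 200001)
dq = q[1] - q[0]
post = np.exp(-(G(q) - y)**2 / (2 * eps) - V0(q))
post /= post.sum() * dq              # normalize on the grid

w_plus = post[q > 0].sum() * dq      # mass of the mode near +q_true
w_minus = 1.0 - w_plus               # mass of the mode near -q_true
# |G'(+q_true)| = |G'(-q_true)|, so the weight ratio tracks the prior ratio:
print(w_plus / w_minus, np.exp(V0(-q_true) - V0(q_true)))
```

With these choices the two printed ratios agree closely for small `eps`, illustrating that the prior survives in the limit only through the mixture weights.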

Asymptotic Normality.
In this subsection, we assume that the data are generated from the truth $x^\dagger$ and a single realization of the Gaussian noise $\eta^\dagger$, i.e.
$$ y_N = G(x^\dagger) + \frac{1}{\sqrt{N}}\,\eta^\dagger. $$
Then the resulting posterior distribution $\mu^\eta_N$ has a density of the form (51), where $Z^\eta_N$ is the normalization constant. Notice that $\mu^\eta_N$ has the same form as the measure defined in (1). Suppose that $V_0 \in C^2(\mathbf{R}^d; \mathbf{R})$ and that $G$ satisfies one of the assumptions in Assumption 5.1; then the potentials $V^\varepsilon_1$ and $V_2$ satisfy Assumption 2.3. Recall the set of Gaussian measures $\mathcal{A}$ and the sets of Gaussian mixtures $\mathcal{M}_n$ and $\mathcal{M}^\xi_n$ (defined in (27) and (28)). Again, we set $\xi = (\xi_1, \xi_2)$ such that $\xi_1 \in (0,1)$ and $\min_{i \neq j} |x_i - x_j| \geq \xi_2 > 0$. The following theorem concerning the asymptotic normality of $\mu^\eta_N$ is a direct consequence of Corollary 3.4 and Corollary 4.4.
(ii) Let $V_0 \in C^2(\mathbf{R}^d; \mathbf{R})$ and let $G$ satisfy Assumption 5.1 (ii). Given any $N \in \mathbb{N}$, let $\nu_N \in \mathcal{M}^\xi_n$ be a minimizer of the functional $\nu \mapsto D_{KL}(\nu \| \mu^\eta_N)$.

Theorem 5.2 (i) states that the measure $\mu^\eta_N$ is asymptotically Gaussian when certain uniqueness and stability properties hold for the inverse problem. Moreover, in this case the limiting Gaussian distribution is fully determined by the truth and the forward map, and is independent of the prior. In the case where uniqueness fails, but the data correspond to only a finite number of unknowns, Theorem 5.2 (ii) demonstrates that the measure $\mu^\eta_N$ is asymptotically a Gaussian mixture, with each Gaussian mode independent of the prior. However, prior beliefs affect the proportions of the individual Gaussian components within the mixture; more precisely, the un-normalized weight of each Gaussian mode is proportional to the value of the prior at the corresponding unknown.
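The unimodal case of this theorem can be illustrated in one dimension. The sketch below is our own, with a hypothetical unimodal potential $V(x) = \cosh(x-1) - 1$: it takes the target with density proportional to $e^{-N V(x)}$ and minimizes $D_{KL}(\nu \| \mu_N)$ over Gaussians $\nu = N(m, s^2)$ by brute-force grid search, using Gauss-Hermite quadrature for the expectation. The minimizing mean approaches the minimizer of $V$, and the standard deviation scales like $N^{-1/2}$ (here $V''(1) = 1$), as the concentration theory predicts.

```python
import numpy as np

# Best Gaussian N(m, s^2) for the 1d target mu_N with unnormalized density
# exp(-N V(x)); up to an (m,s)-independent constant,
# D_KL(nu || mu_N) = -log s + N E_nu[V].
V = lambda x: np.cosh(x - 1.0) - 1.0          # hypothetical potential, minimum at 1
nodes, weights = np.polynomial.hermite.hermgauss(60)

def best_gaussian(N, ms, ss):
    """Brute-force KL minimization over a (mean, std) grid, with
    Gauss-Hermite quadrature for E_{N(m,s^2)}[V]."""
    best, Jbest = None, np.inf
    for m in ms:
        x = m + np.sqrt(2.0) * np.outer(ss, nodes)   # quadrature points
        EV = (V(x) @ weights) / np.sqrt(np.pi)       # E[V(m + s Z)], Z ~ N(0,1)
        J = -np.log(ss) + N * EV
        k = np.argmin(J)
        if J[k] < Jbest:
            Jbest, best = J[k], (m, ss[k])
    return best

ms = np.linspace(0.5, 1.5, 101)
ss = np.linspace(0.005, 0.5, 200)
for N in [10, 100, 1000]:
    m_star, s_star = best_gaussian(N, ms, ss)
    print(N, m_star, s_star * np.sqrt(N))    # mean -> 1, s * sqrt(N) -> 1
```

The dependence on the prior drops out here because the potential is fixed; in the Bayesian setting the prior contributes only an $O(1)$ term that is dominated by the concentrating likelihood.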
Remark 5.3. In general, when $\{\eta_i\}_{i\in\mathbb{N}}$ is a sequence of fixed realizations of the normal distribution, Theorem 5.2 does not hold for the measure $\mu^\eta_N$ defined in (52) in the large data case. However, we will show that $D_{KL}(\nu_N \| \mu^\eta_N)$ converges to zero in an averaged sense; see Theorem 5.4.

A Bernstein-Von Mises type result.
The asymptotic Gaussian phenomenon in Theorem 5.2 is very much in the same spirit as the celebrated Bernstein-Von Mises (BvM) theorem [28]. This theorem asserts that, for a certain class of regular priors, the posterior distribution converges to a Gaussian distribution, independent of the prior, as the sample size tends to infinity. Let us state the Bernstein-Von Mises theorem more precisely in the i.i.d. case.
Consider observing a set of i.i.d. samples $X^N := \{X_1, X_2, \cdots, X_N\}$, where each $X_i$ is drawn from a distribution $P_\theta$ indexed by an unknown parameter $\theta \in \Theta$. Let $P^N_\theta$ be the law of $X^N$. Let $\Pi$ be the prior distribution on $\theta$ and denote by $\Pi(\cdot|X^N)$ the resulting posterior distribution. The Bernstein-Von Mises theorem is concerned with the behavior of the posterior $\Pi(\cdot|X^N)$ under the frequentist assumption that the $X_i$ are drawn from some true model $P_{\theta_0}$. A standard finite-dimensional BvM result (see e.g. [28, Theorem 10.1]) states that, under certain conditions on the prior $\Pi$ and the model $P_\theta$, as $N \to \infty$,
$$ d_{TV}\Big(\Pi(\cdot\,|X^N),\; \mathcal{N}\big(\hat\theta_N, \tfrac{1}{N} I^{-1}_{\theta_0}\big)\Big) \to 0 \quad \text{in } P^N_{\theta_0}\text{-probability}, \tag{56}$$
where $\hat\theta_N$ is an efficient estimator for $\theta$, $I_\theta$ is the Fisher information matrix of $P_\theta$ and $d_{TV}$ denotes the total variation distance. As an important consequence of the BvM result, Bayesian credible sets are asymptotically equivalent to frequentist confidence intervals. Moreover, it has been proved that the optimal rate of convergence in the Bernstein-Von Mises theorem is $O(1/\sqrt{N})$; see, for instance, [6, 13]. This means that for any $\delta > 0$ there exists $M = M(\delta) > 0$ such that
$$ \limsup_{N \to \infty}\, P^N_{\theta_0}\Big( d_{TV}\big(\Pi(\cdot\,|X^N),\; \mathcal{N}\big(\hat\theta_N, \tfrac{1}{N} I^{-1}_{\theta_0}\big)\big) \geq M/\sqrt{N} \Big) \leq \delta. \tag{57}$$
Unfortunately, BvM results like (56) and (57) do not fully generalize to infinite dimensional spaces; see the counterexamples in [9]. Regarding the asymptotic frequentist properties of posterior distributions in nonparametric models, various positive results have been obtained recently, see e.g. [10, 26, 16, 17, 7, 8]. For convergence rates in the nonparametric case, we refer to [10, 26, 7].
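The finite dimensional BvM statement can be observed numerically in the simplest conjugate setting. The following sketch is our own illustration, with hypothetical choices: model $X_i \sim N(\theta, 1)$ (so $I_\theta = 1$), prior $N(0, \tau^2)$, and $\hat\theta_N$ the sample mean. It computes the total variation distance between the exact conjugate posterior and the Gaussian $\mathcal{N}(\hat\theta_N, 1/N)$, which decays roughly like $1/\sqrt{N}$.

```python
import numpy as np

def tv_gauss(m1, s1, m2, s2, n=200001):
    """Total variation distance between N(m1, s1^2) and N(m2, s2^2),
    computed by quadrature on a common grid."""
    lo = min(m1 - 8 * s1, m2 - 8 * s2)
    hi = max(m1 + 8 * s1, m2 + 8 * s2)
    x = np.linspace(lo, hi, n)
    p = np.exp(-(x - m1)**2 / (2 * s1**2)) / (s1 * np.sqrt(2 * np.pi))
    q = np.exp(-(x - m2)**2 / (2 * s2**2)) / (s2 * np.sqrt(2 * np.pi))
    return 0.5 * np.abs(p - q).sum() * (x[1] - x[0])

rng = np.random.default_rng(1)
theta0, tau2 = 0.7, 4.0              # truth; prior is N(0, tau2)
tvs = []
for N in [25, 100, 400]:
    X = theta0 + rng.normal(size=N)  # X_i ~ N(theta0, 1), Fisher information 1
    prec = N + 1.0 / tau2            # conjugate posterior: N(sum(X)/prec, 1/prec)
    tvs.append(tv_gauss(X.sum() / prec, prec**-0.5, X.mean(), N**-0.5))
    print(N, tvs[-1])                # shrinks roughly like 1/sqrt(N)
```

The conjugate posterior differs from the BvM Gaussian through an $O(1/N)$ shift of the mean and an $O(1/N)$ variance mismatch, both of which are $O(1/\sqrt{N})$ relative to the posterior spread $1/\sqrt{N}$, in line with (57).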
In the remainder of the section, we prove a Bernstein-Von Mises type result for the posterior distribution $\mu^\eta_N$ defined by (51) and (52). If we view the observational noises $\eta$ and $\eta_i$ appearing in the data as random variables, then the posterior measures become random probability measures. Furthermore, exploiting the randomness of the $\eta_i$, we claim that the posterior distribution in the large data case can be rewritten in the form of the small noise case. Indeed, by completing the square, we can write the expression (52) as
$$ \mu^\eta_N(dx) \propto \exp\Big(-\frac{N}{2}\,\big|\bar y_N - G(x)\big|^2 - V_0(x)\Big)\,dx, \qquad \bar y_N := \frac{1}{N}\sum_{j=1}^N y_j = G(x^\dagger) + \frac{1}{\sqrt{N}}\,\bar\eta, $$
where $\bar\eta := \frac{1}{\sqrt{N}}\sum_{j=1}^N \eta_j \sim N(0, I_d)$, due to the normality assumptions on $\eta$ and the $\eta_i$. As a consequence, it makes no difference which formulation is chosen when one is concerned with the statistical dependence of $\mu^\eta_N$ on the law of $\eta$. For this reason, we only prove the Bernstein-Von Mises result for $\mu^\eta_N$ given directly in the form (51). For notational simplicity, we write the noise level $\sqrt{\varepsilon}$ in place of $\frac{1}{\sqrt{N}}$ and consider random observations $y_\varepsilon$ generated from a truth $x^\dagger$ and normal noise $\eta$, i.e.
$$ y_\varepsilon = G(x^\dagger) + \sqrt{\varepsilon}\,\eta, \qquad \eta \sim N(0, I_d). $$
Given the same prior as before, we obtain the posterior distribution
$$ \mu^\eta_\varepsilon(dx) = \frac{1}{Z^\eta_{\mu,\varepsilon}} \exp\Big(-\frac{1}{2\varepsilon}\,\big|y_\varepsilon - G(x)\big|^2 - V_0(x)\Big)\,dx. $$
For any fixed $\eta$, let $\nu^\eta_\varepsilon$ be the best Gaussian measure, minimizing the Kullback-Leibler divergence $D_{KL}(\nu \| \mu^\eta_\varepsilon)$ over $\mathcal{A}$. For ease of calculation, from now on we consider the rate of convergence only under Assumption 5.1 (i); the other case can be dealt with in the same manner, see Remark 5.8. The main result is as follows.
Theorem 5.4. There exists $C > 0$ such that
$$ \mathbb{E}^\eta\, D_{KL}\big(\nu^\eta_\varepsilon \,\|\, \mu^\eta_\varepsilon\big) \leq C\,\varepsilon. \tag{59}$$
With the help of Pinsker's inequality (2) as well as the Markov inequality, one can derive the following BvM-type result from Theorem 5.4.
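Pinsker's inequality, $d_{TV}(P, Q) \leq \sqrt{D_{KL}(P \| Q)/2}$, is the bridge from the KL bound of Theorem 5.4 to a total variation statement. As a quick numerical sanity check of the bound (our own illustration, not part of the proof), one can compare the closed-form KL divergence between one dimensional Gaussians with a quadrature-based TV distance on randomly sampled pairs:

```python
import numpy as np

def kl_gauss(m1, s1, m2, s2):
    """Closed-form D_KL(N(m1,s1^2) || N(m2,s2^2))."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def tv_gauss(m1, s1, m2, s2, n=200001):
    """Total variation distance by quadrature on a common grid."""
    x = np.linspace(min(m1 - 8*s1, m2 - 8*s2), max(m1 + 8*s1, m2 + 8*s2), n)
    p = np.exp(-(x - m1)**2 / (2 * s1**2)) / (s1 * np.sqrt(2 * np.pi))
    q = np.exp(-(x - m2)**2 / (2 * s2**2)) / (s2 * np.sqrt(2 * np.pi))
    return 0.5 * np.abs(p - q).sum() * (x[1] - x[0])

rng = np.random.default_rng(2)
for _ in range(5):
    m1, m2 = rng.normal(size=2)
    s1, s2 = np.exp(rng.normal(size=2))
    tv, kl = tv_gauss(m1, s1, m2, s2), kl_gauss(m1, s1, m2, s2)
    assert tv <= np.sqrt(0.5 * kl) + 1e-8    # Pinsker's inequality
print("Pinsker bound holds on all sampled Gaussian pairs")
```

Combined with Markov's inequality, (59) then yields that $d_{TV}(\nu^\eta_\varepsilon, \mu^\eta_\varepsilon)$ exceeds $M\sqrt{\varepsilon}$ only with small probability.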
(i) Because of the statistical equivalence of the posterior measures in the limits of large data size and small noise, the posterior measure $\mu^\eta_N$ in the large data case (given by (52)) has the same convergence rate as (60); namely, for any $\delta > 0$ there exists a constant $M = M(\delta) > 0$ such that, as $N \to \infty$, the total variation distance between $\nu_N$ and $\mu^\eta_N$ exceeds $M/\sqrt{N}$ with probability at most $\delta$. This recovers the optimal rate of convergence for the posterior proved for parametric statistical models; see (57). (ii) For a fixed realization of the noise $\eta$, we showed in Theorem 5.2 (i) that $D_{KL}(\nu_N \| \mu^\eta_N) \downarrow 0$ as $N \to \infty$, and by following the proof of the Laplace method one can obtain a rate of convergence. However, we obtain the linear convergence rate in (59) (with $\varepsilon$ replacing $1/N$) by utilizing symmetric cancellations in the evaluation of Gaussian integrals.
To prove Theorem 5.4, we start with an averaging estimate for the logarithm of the normalization constant $Z^\eta_{\mu,\varepsilon}$.

Lemma 5.7. The expectation $\mathbb{E}^\eta \log Z^\eta_{\mu,\varepsilon}$ admits an expansion whose remainder $r_\varepsilon$ satisfies $|r_\varepsilon| \leq C\varepsilon$ for some $C > 0$ independent of $\varepsilon$.
Proof. Take a constant $\gamma \in (0, \frac{1}{2})$. We write $\mathbb{E}^\eta \log Z^\eta_{\mu,\varepsilon}$ as the sum of two terms $I_1$ and $I_2$, according to whether $|\eta| \leq \varepsilon^{-\gamma}$ or not. An upper bound for $I_2$ follows directly from the definition. For $I_1$, we need to estimate $Z^\eta_{\mu,\varepsilon}$ under the assumption that $|\eta| \leq \varepsilon^{-\gamma}$. Thanks to condition (i) on $G$, when $\varepsilon \ll 1$ there exists a unique $m^\dagger_{\varepsilon,\eta}$ such that $G(m^\dagger_{\varepsilon,\eta}) = G(x^\dagger) + \sqrt{\varepsilon}\,\eta$. Moreover, denoting by $H$ the inverse of $G$ in a neighborhood of $G(x^\dagger)$, we get from a Taylor expansion, with some $\xi \in (0,1)$, an expression whose coefficients are uniformly bounded: thanks to the smoothness assumption on $G$, the function $H$ is differentiable up to fourth order. Moreover, noting that $DH(G(x^\dagger)) = DG(x^\dagger)^{-1}$, we obtain, for some $\xi = \xi(y) \in (0,1)$, the expansion (66). It then follows from (65) and (66) that (67) holds with $|r_{2,\varepsilon,\eta}| \leq C\varepsilon|\eta|^2$. Notice that the linear term in the expansion (66) vanishes from the second line to the third because the region of integration is symmetric with respect to the origin; the final equality holds because we have absorbed the exponentially decaying Gaussian integral outside the ball $B(0, \varepsilon^{-\frac{1}{2}}\delta)$ into the residual $r_{2,\varepsilon,\eta}$. Hence we obtain, for $|\eta| \leq \varepsilon^{-\gamma}$ and $\varepsilon$ small enough, the corresponding expansion with $|r_{2,\varepsilon,\eta}| \leq C\varepsilon|\eta|^2$, and as a result $Z^\eta_{\mu,\varepsilon}$ satisfies the same bound. Then, using the Taylor expansion of the logarithm, one obtains an expansion in which $p$ is a vector depending only on $L$, $G$, $V_0$ and $x^\dagger$, and $|r_{3,\varepsilon,\eta}| \leq C\varepsilon|\eta|^2$. This implies that, when $\varepsilon$ is sufficiently small, the claimed expansion holds with $|r_\varepsilon| \leq C\varepsilon$. Again the first order term $\varepsilon^{\frac{1}{2}} p^T \eta$ vanishes because of the symmetry of the integration region; the bound $|r_\varepsilon| \leq C\varepsilon$ follows from the bound on $r_{3,\varepsilon,\eta}$ and the Gaussian tail bound. This completes the proof.
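The heart of the proof is a Laplace-type expansion of $Z^\eta_{\mu,\varepsilon}$. The following one dimensional sketch is our own illustration, with a hypothetical invertible forward map $G(x) = x + x^3/10$ and prior potential $V_0(x) = x^2/2$ standing in for Assumption 5.1 (i): it compares the normalization constant with its leading-order Laplace approximation $\sqrt{2\pi\varepsilon}\, e^{-V_0(x^\dagger)} / |G'(x^\dagger)|$, obtained by linearizing $G$ around $x^\dagger$.

```python
import numpy as np

# Laplace approximation of Z_eps = ∫ exp(-(G(x)-y)^2/(2 eps) - V0(x)) dx
# around the unique preimage x_true of y.
G  = lambda x: x + 0.1 * x**3        # strictly increasing, hence invertible
dG = lambda x: 1.0 + 0.3 * x**2      # G'
V0 = lambda x: 0.5 * x**2            # hypothetical prior potential
x_true = 0.8
y = G(x_true)

ratios = []
for eps in [1e-2, 1e-3, 1e-4]:
    x = np.linspace(x_true - 1.0, x_true + 1.0, 400001)
    dx = x[1] - x[0]
    Z = np.exp(-(G(x) - y)**2 / (2 * eps) - V0(x)).sum() * dx
    Z_lap = np.sqrt(2 * np.pi * eps) * np.exp(-V0(x_true)) / dG(x_true)
    ratios.append(Z / Z_lap)
    print(eps, ratios[-1])           # ratio tends to 1 as eps -> 0
```

The first-order correction terms cancel by the symmetry of the Gaussian integral, mirroring the cancellation of the $\varepsilon^{\frac{1}{2}} p^T \eta$ term in the proof, so the ratio approaches one at rate $O(\varepsilon)$ rather than $O(\sqrt{\varepsilon})$.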
Remark 5.8. Theorem 5.4 establishes the rate of convergence under Assumption 5.1 (i). However, the convergence rate remains the same when Assumption 5.1 (ii) holds and the best Gaussian measure is replaced by the best Gaussian mixture.

Comparison with Classical BvM Results.
We now compare our BvM result for Bayesian inverse problems with classical finite dimensional BvM results for general statistical models [11, 13].
• Assumption. In the classical framework of Bayesian inference, the posterior converges to a Gaussian in the total variation distance (with optimal rate) under the typical assumptions that the likelihood function is $C^3$ and that the Fisher information matrix is non-degenerate; see e.g. [11, Theorem 1.4.2] and [13, Section 4]. The asymptotic covariance of the limiting Gaussian is the inverse of the Fisher information matrix. In the Bayesian inverse problem setting, we improve the convergence to the stronger sense of KL-divergence, at the expense of requiring higher differentiability ($C^4$) of the forward map $G$. Moreover, the matrix product $DG^T DG$ takes the place of the Fisher information matrix in the asymptotic covariance, where $DG$ is invertible because of Assumption 5.1.

• Multimodal Distribution. The proposed KL-approximation framework allows us to prove the convergence of a multimodal probability measure to a mixture of Gaussian measures. The limiting KL-discrepancy between the target measure and the Gaussian approximation is characterized explicitly as a sum of two relative entropies; see Theorem 4.2. In addition, in this case the prior does not disappear in the limit: its influence on the posterior is reflected in the weights of the Gaussian mixture. To the best of our knowledge, such results have not been stated in the statistical literature.

• Proof. Both our proof and the classical proofs of finite dimensional BvM theorems are essentially based on a local Taylor expansion of the posterior around the truth, but they are carried out in different ways. Classical BvM results in the TV-distance are usually proved by first expanding the posterior density around the maximum likelihood estimator (MLE), which requires tracking the normalization constant, and then applying the local asymptotic normality of the MLE and Le Cam's contiguity arguments to obtain convergence of the posterior. Our proof, instead, takes advantage of the special structure of the KL-divergence, namely the separation of the normalization constant from the log density, thereby reducing the convergence proof to establishing precise estimates on the normalization constant (see Lemma 5.7).
6. Conclusions. We have studied a methodology widely used in applications, yet little analyzed, namely the approximation of a given target measure by a Gaussian, or by a Gaussian mixture. We have employed relative entropy as the measure of goodness of fit. Our theoretical framework establishes the existence of minimizers of the variational problem and studies their asymptotic form in a relevant small parameter limit where the measure concentrates; this limit is studied using tools from Γ-convergence. In the case of a target measure which is asymptotically unimodal, the Γ-limit demonstrates perfect recovery by a single Gaussian in the measure concentration limit; in the case of multiple modes, it quantifies the error resulting from a single mode fit. Furthermore, the Gaussian mixture is shown to overcome the limitations of a single mode fit when the target measure has multiple modes. These ideas are exemplified in the analysis of a Bayesian inverse problem in the small noise and large data limits, and connections are made to the Bernstein-Von Mises theory from asymptotic statistics.
The BvM theorem of this paper is essentially still parametric. A natural and interesting future direction would be to study infinite dimensional statistical models [12]. In particular, it would be interesting to apply our Γ-convergence approach to measure approximation to understand the BvM phenomenon for infinite dimensional non-linear Bayesian inverse problems. In our finite dimensional setting, the inverse problem of interest is essentially well-posed, since we assume that both $G$ and $DG$ are invertible, so the only ill-posedness comes from the lack of uniqueness. However, for infinite dimensional inverse problems, the degree of ill-posedness (mild or severe) strongly influences the precise statement of the BvM theorem. Understanding this issue requires delicate quantitative stability estimates for the underlying inverse problem. The recent paper [18] proved a BvM result for high dimensional non-linear inverse problems in which the dimension of the unknown parameter increases as the noise level decreases. However, it remains an open problem whether the BvM theorem holds for genuinely infinite dimensional non-linear inverse problems. We will address this problem in future work.