Randomized Quasi-Newton Updates are Linearly Convergent Matrix Inversion Algorithms

We develop and analyze a broad family of stochastic/randomized algorithms for inverting a matrix. We also develop specialized variants maintaining symmetry or positive definiteness of the iterates. All methods in the family converge globally and linearly (i.e., the error decays exponentially), with explicit rates. In special cases, we obtain stochastic block variants of several quasi-Newton updates, including bad Broyden (BB), good Broyden (GB), Powell-symmetric-Broyden (PSB), Davidon-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS). Ours are the first stochastic versions of these updates shown to converge to an inverse of a fixed matrix. Through a dual viewpoint we uncover a fundamental link between quasi-Newton updates and approximate inverse preconditioning. Further, we develop an adaptive variant of randomized block BFGS (AdaRBFGS), where we modify the distribution underlying the stochasticity of the method throughout the iterative process to achieve faster convergence. By inverting several matrices from varied applications, we demonstrate that AdaRBFGS is highly competitive when compared to the well established Newton-Schulz and minimal residual methods. In particular, on large-scale problems our method outperforms the standard methods by orders of magnitude. The development of efficient methods for estimating the inverse of very large matrices is a much needed tool for preconditioning and variable metric optimization with the advent of the big data era.


Introduction
Matrix inversion is a standard tool in numerics, needed, for instance, in computing a projection matrix or a Schur complement, which are commonplace calculations. When only an approximate inverse is required, iterative methods are the methods of choice, for they can terminate the iterative process when the desired accuracy is reached. This can be far more efficient than using a direct method. Calculating an approximate inverse is a much needed tool in preconditioning [31] and, if the output is guaranteed to be positive definite, it can be used to design variable metric optimization methods. Furthermore, iterative methods can make use of an initial estimate of the inverse when available.
The driving motivation of this work is the need to develop algorithms capable of computing the inverse of very large matrices, where standard techniques take a prohibitive amount of time or simply fail. In particular, we develop a family of randomized/stochastic methods for inverting a matrix, with specialized variants maintaining symmetry or positive definiteness of the iterates. All methods in the family converge globally (i.e., from any starting point) and linearly (i.e., the error decays exponentially). We give an explicit expression for the convergence rate.
As special cases, we obtain stochastic block variants of several quasi-Newton (qN) updates, including bad Broyden (BB), good Broyden (GB), Powell-symmetric-Broyden (PSB), Davidon-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS). To the best of our knowledge, these are the first stochastic versions of qN updates. Moreover, this is the first time that qN updates are shown to be iterative methods for inverting a matrix. We offer a new interpretation of the qN methods through a Lagrangian dual viewpoint, uncovering a fundamental link between qN updates and approximate inverse preconditioning.
We develop an adaptive variant of randomized block BFGS, in which we modify the distribution underlying the stochasticity of the method throughout the iterative process to achieve faster convergence. Through extensive numerical experiments with large matrices arising from several applications, we show that AdaRBFGS can significantly outperform the well established Newton-Schulz and minimal residual methods.

Outline
The rest of the paper is organized as follows. In Section 2 we summarize the main contributions of this paper. In Section 3 we describe the qN methods, which are the main inspiration of our methods. Subsequently, Section 4 describes two algorithms, each corresponding to a variant of the inverse equation, for inverting general square matrices. We also provide insightful dual viewpoints for both methods. In Section 5 we describe a method specialized to inverting symmetric matrices. Convergence in expectation is examined in Section 6, where we consider two types of convergence: the convergence of i) the expected norm of the error, and ii) the norm of the expected error. In Section 7 we specialize our methods to discrete distributions, and comment on how one may construct a probability distribution leading to better complexity rates (i.e., importance sampling). We then describe a convenient probability distribution which leads to convergence rates that can be expressed in terms of spectral properties of the original matrix to be inverted. In Section 8 we detail several instantiations of our family of methods, and their resulting convergence rates. We show how, via the choice of the parameters of the method, we obtain stochastic block variants of several well known quasi-Newton methods. We also describe the simultaneous randomized Kaczmarz method here. Section 9 is dedicated to the development of an adaptive variant of our randomized BFGS method, AdaRBFGS, for inverting positive definite matrices. This method adapts the stochasticity throughout the iterative process to obtain faster practical convergence. Finally, in Section 10 we show through numerical tests that AdaRBFGS significantly outperforms state-of-the-art iterative matrix inversion methods on large-scale matrices.

Notation
By I we denote the n × n identity matrix. Let ⟨X, Y⟩_{F(W^{-1})} := Tr(X^T W^{-1} Y W^{-1}) denote the weighted Frobenius inner product, where X, Y ∈ R^{n×n} and W ∈ R^{n×n} is a symmetric positive definite "weight" matrix. Further, let
||X||²_{F(W^{-1})} := Tr(X^T W^{-1} X W^{-1}),   (1)
where we use the convention ||·||_F = ||·||_{F(I)}, since ||·||_{F(I)} is the standard Frobenius norm. Let ||·||_2 denote the induced operator norm for square matrices, defined via ||Y||_2 := max_{||v||_2 = 1} ||Y v||_2. Finally, for positive definite W ∈ R^{n×n}, we define the weighted induced norm via ||Y||_{W^{-1}} := ||W^{-1/2} Y W^{-1/2}||_2.

Previous work
A widely used iterative method for inverting matrices is the Newton-Schulz method [32] introduced in 1933, whose variants are still the subject of ongoing research [25]. The drawback of the Newton-Schulz methods is that they do not converge from an arbitrary initial estimate. Instead, an initial estimate that is close to A^{-1} (in some norm) is required. In contrast, the methods we present converge globally, for any initial estimate. Bingham [3] describes a method that uses the characteristic polynomial to recursively calculate the inverse, though it requires calculating the coefficients of the polynomial when initiated, which is costly, and the method has fallen into disuse. Goldfarb [10] uses Broyden's method [4] for iteratively inverting matrices. Our methods include a stochastic variant of Broyden's method. The approximate inverse preconditioning (AIP) methods [6,31,12,1] calculate an approximate inverse by minimizing the residual ||XA − I||_F in X, by applying a number of iterations of the steepest descent or minimal residual method. A considerable drawback of the AIP methods is that the iterates are not guaranteed to be positive definite nor symmetric, even when A is both. A solution to the lack of symmetry is to "symmetrize" the estimate between iterations. However, it is then difficult to guarantee the quality of the new symmetric estimate. Another solution is to calculate directly a factored form LL^T = X and minimize in L the residual ||L^T AL − I||_F. But this residual is a non-convex function, and is thus difficult to minimize. A variant of our method naturally maintains symmetry of the iterates.

Contributions
We now describe the main contributions of this work.

New algorithms
We develop a novel and surprisingly simple family of stochastic algorithms for inverting matrices. The problem of finding the inverse of an n × n invertible matrix A can be characterized as finding the solution to either of the two inverse equations AX = I or XA = I. Our methods make use of randomized sketching [29,14,28,30] to reduce the dimension of the inverse equations in an iterative fashion. To the best of our knowledge, these are the first stochastic algorithms for inverting a matrix with global complexity rates.
In particular, our nonsymmetric method (Algorithm 1) is based on the inverse equation AX = I, and performs the sketch-and-project iteration
X_{k+1} = argmin_{X ∈ R^{n×n}} ½ ||X − X_k||²_{F(W^{-1})} subject to S^T A X = S^T,   (2)
where S ∈ R^{n×q} is a random matrix drawn in an i.i.d. fashion from a fixed distribution D, and W ∈ R^{n×n} is a positive definite "weight" matrix. The distribution D and the matrix W are the parameters of the method. Note that if we choose q ≪ n, the constraint in the projection problem (2) will be of a much smaller dimension than the original inverse equation, and hence the iteration (2) will be cheap.
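To make the iteration concrete, here is a minimal NumPy sketch of the row variant with Gaussian sketches. The function name, the test matrix, and the choice W = I are ours; the update is the closed-form solution of the projection step (derived in Theorem 4.1).

```python
import numpy as np

def simi_row_step(X, A, W, S):
    """One sketch-and-project step for the inverse equation AX = I.

    Projects X onto {X : S^T A X = S^T} in the F(W^{-1}) norm; the
    closed form is X + W A^T S (S^T A W A^T S)^{-1} S^T (I - A X).
    """
    AtS = A.T @ S                                   # A^T S, an n x q matrix
    M = S.T @ (A @ W) @ AtS                         # S^T A W A^T S, q x q
    return X + W @ AtS @ np.linalg.solve(M, S.T @ (np.eye(A.shape[0]) - A @ X))

rng = np.random.default_rng(0)
n, q = 10, 4
A = rng.standard_normal((n, n)) + 2 * n * np.eye(n)  # well-conditioned test matrix
W = np.eye(n)                                        # simplest weight choice
X = np.zeros((n, n))
for _ in range(800):
    S = rng.standard_normal((n, q))                  # Gaussian sketch S ~ D
    X = simi_row_step(X, A, W, S)
err = np.linalg.norm(X - np.linalg.inv(A))           # should be tiny
```

Each step only solves a q × q system, so its cost is O(n²q) rather than the O(n³) of a direct inversion.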
In an analogous way, we design a method based on the inverse equation XA = I (Algorithm 2). Adding the symmetry constraint X = X T leads to Algorithm 3-a specialized method for symmetric A capable of maintaining symmetric iterates.

Dual formulation
Besides the primal formulation described in Section 2.1 (sketch-and-project), we also provide dual formulations of all three methods (Algorithms 1, 2 and 3). For instance, the dual formulation of (2) is
X_{k+1} = argmin_{X, Y} ½ ||X − A^{-1}||²_{F(W^{-1})} subject to X = X_k + W A^T S Y^T,   (3)
where the minimization is performed over X ∈ R^{n×n} and Y ∈ R^{n×q}. We call the dual formulation constrain-and-approximate, as one seeks the best approximation of the inverse (with respect to the weighted Frobenius distance) while constraining the search to a random affine space of matrices passing through X_k. While the projection (3) cannot be performed directly, since A^{-1} is not known, it can be performed indirectly via the equivalent primal formulation (2).

Quasi-Newton updates and approximate inverse preconditioning
As we will discuss in Section 3, through the lens of the sketch-and-project formulation, Algorithm 3 can be seen as a randomized block extension of the quasi-Newton (qN) updates [4,9,11,33]. We distinguish here between qN methods, which are algorithms used in optimization, and qN updates, which are matrix-update rules used in qN methods. Standard qN updates work with q = 1 ("block" refers to the choice q > 1) and S chosen in a deterministic way, depending on the sequence of iterates of the underlying optimization problem. To the best of our knowledge, this is the first time stochastic versions of qN updates have been designed and analyzed. On the other hand, through the lens of the constrain-and-approximate formulation, the qN updates [4,9,11,33] can be seen as a projection of A^{-1} onto a space of rank-2 symmetric matrices. It seems this observation has not been made before.

Complexity: general results
Our framework leads to global linear convergence (i.e., exponential decay) under very weak assumptions on D. In particular, we provide an explicit convergence rate ρ for the exponential decay of the norm of the expected error of the iterates (line 2 of Table 1) and of the expected norm of the error (line 3 of Table 1), where the rate is given by
ρ = 1 − λ_min(W^{1/2} E[Z] W^{1/2}),   (4)
where Z := A^T S (S^T A W A^T S)^{-1} S^T A. We show that ρ is always bounded between 0 and 1. Furthermore, we provide a lower bound on ρ showing that the rate can potentially improve as the number of columns in S increases. This sets our method apart from current methods for inverting matrices that lack global guarantees, such as Newton-Schulz, or the self-conditioning variants of the minimal residual method.

Complexity: discrete distributions
We detail a convenient choice of probability for discrete distributions D that gives easy-to-interpret convergence results depending on a scaled condition number of A. This way we obtain methods for inverting matrices with the same convergence rate as the randomized Kaczmarz method [35] and randomized coordinate descent [23] for solving linear systems. We also obtain importance sampling results by optimizing an upper bound on the convergence rate.
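As an illustration, the following sketch (our notation and test matrix) shows the q = 1, W = I instance with the convenient probabilities p_i proportional to ||A_{i:}||², which mirrors the randomized Kaczmarz rate; each step is a rank-one update applied simultaneously to all n columns of the iterate.

```python
import numpy as np

def kaczmarz_style_inverse(A, iters, rng):
    """Row variant with W = I and S = e_i, where i is drawn with the
    'convenient' probabilities p_i ~ ||A_{i:}||^2 (as in randomized Kaczmarz)."""
    n = A.shape[0]
    p = np.sum(A * A, axis=1)
    p = p / p.sum()
    X = np.zeros((n, n))
    I = np.eye(n)
    for _ in range(iters):
        i = rng.choice(n, p=p)
        a = A[i]                                   # i-th row of A (as a vector)
        X += np.outer(a, I[i] - a @ X) / (a @ a)   # rank-one Kaczmarz-type update
    return X

rng = np.random.default_rng(1)
n = 8
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # well-conditioned test matrix
X = kaczmarz_style_inverse(A, 4000, rng)
err = np.linalg.norm(X - np.linalg.inv(A))
```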

Adaptive randomized BFGS
We develop an additional highly efficient method, adaptive randomized BFGS (AdaRBFGS), for calculating an approximate inverse of positive definite matrices. Not only does the method greatly outperform state-of-the-art methods such as Newton-Schulz and approximate inverse preconditioning methods, but it also preserves positive definiteness, a quality not present in previous methods. Therefore, AdaRBFGS can be used to precondition positive definite systems and to design new variable-metric optimization methods. Since the inspiration behind this method comes from the desire to design an optimal adaptive distribution for S by examining the complexity rate ρ, this work also highlights the importance of developing algorithms with explicit convergence rates.

Extensions
This work opens up many possible avenues for extensions. For instance, new efficient methods could be designed by experimenting and analyzing through our framework with different sophisticated sketching matrices S, such as the Walsh-Hadamard matrix [26,29]. Furthermore, our methods produce low rank estimates of the inverse and can be adapted to calculate low rank estimates of any matrix. They can be applied to singular matrices, in which case they converge to a particular pseudo-inverse. Our results can be used to push forward work on stochastic variable metric optimization methods, such as the work by Leventhal and Lewis [24], who present a randomized iterative method for estimating Hessian matrices that converges in expectation, with a known convergence rate, from any initial estimate. Stich et al. [34] use Leventhal and Lewis' method to design a stochastic variable metric method for black-box minimization, with explicit convergence rates and promising numerical results. We leave these and other extensions to future work.

Randomization of Quasi-Newton Updates
Our methods are inspired by, and in some cases can be considered to be, randomized block variants of the quasi-Newton (qN) updates. Here we explain how our algorithms arise naturally from the qN setting. Readers familiar with qN methods may jump ahead to Section 3.3.

Quasi-Newton methods
A problem of fundamental interest in optimization is the unconstrained minimization problem
min_{x ∈ R^n} f(x),   (5)
where f : R^n → R is a sufficiently smooth function. Quasi-Newton (QN) methods, first proposed by Davidon in 1959 [7], are an extremely powerful and popular class of algorithms for solving this problem, especially in the regime of moderately large n. In each iteration of a QN method, one approximates the function locally around the current iterate x_k by a quadratic of the form
f(x_k + s) ≈ f(x_k) + (∇f(x_k))^T s + ½ s^T B_k s,   (6)
where B_k is a suitably chosen approximation of the Hessian (B_k ≈ ∇²f(x_k)). After this, a direction s_k is computed by minimizing the quadratic approximation in s:
s_k = −B_k^{-1} ∇f(x_k),   (7)
assuming B_k is invertible. The next iterate is then set to
x_{k+1} = x_k + α_k s_k,
for a suitable choice of stepsize α_k, often chosen by a line-search procedure (i.e., by approximately minimizing f(x_k + α s_k) in α). Gradient descent arises as a special case of this process by choosing B_k to be constant throughout the iterations. A popular choice is B_k = LI, where I is the identity matrix and L ∈ R_+ is the Lipschitz constant of the gradient of f. In such a case, the quadratic approximation (6) is a global upper bound on f(x_k + s), which means that f(x_k + s_k) is guaranteed to be at least as good as (i.e., smaller than or equal to) f(x_k), leading to guaranteed descent. Newton's method also arises as a special case: by choosing B_k = ∇²f(x_k). These two algorithms are extreme cases on opposite ends of a spectrum. Gradient descent benefits from a trivial update rule for B_k and from cheap iterations due to the fact that no linear systems need to be solved. However, curvature information is largely ignored, which slows down the practical convergence of the method. Newton's method utilizes the full curvature information contained in the Hessian, but requires the computation of the Hessian in each step, which is expensive for large n. QN methods aim to find a sweet spot on the continuum between these two extremes.
In particular, the QN methods choose B_{k+1} to be a matrix for which the secant equation is satisfied:
B_{k+1} h_k = y_k,   (8)
where h_k := x_{k+1} − x_k and y_k := ∇f(x_{k+1}) − ∇f(x_k). The basic reasoning behind this requirement is the following: if f is a convex quadratic, then the Hessian satisfies the secant equation for all pairs of vectors x_{k+1} and x_k. If f is not a quadratic, the reasoning is as follows. Using the fundamental theorem of calculus, we have
(∫_0^1 ∇²f(x_k + t h_k) dt) h_k = ∇f(x_{k+1}) − ∇f(x_k) = y_k.
By selecting B_{k+1} that satisfies the secant equation, we are enforcing B_{k+1} to mimic the action of the integrated Hessian along the line segment joining x_k and x_{k+1}. Unless n = 1, the secant equation (8) does not have a unique solution in B_{k+1}. All qN methods differ only in which particular solution is used. The formulas transforming B_k into B_{k+1} are called qN updates.
Since these matrices are used to compute the direction s_k via (7), it is often more reasonable to instead maintain a sequence of inverses X_k = B_k^{-1}. By multiplying both sides of (8) by X_{k+1}, we arrive at the secant equation for the inverse:
X_{k+1} y_k = h_k.   (9)
The most popular classes of qN updates choose X_{k+1} as the closest matrix to X_k, in a suitable norm (usually a weighted Frobenius norm with various weight matrices), subject to the secant equation, often with an explicit symmetry constraint:
X_{k+1} = argmin_X { ||X − X_k||²_{F(W^{-1})} : X y_k = h_k, X = X^T }.   (10)
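As a sanity check on the secant reasoning above, the classical (q = 1) BFGS update of the inverse can be verified numerically to satisfy the inverse secant equation. The vectors below are arbitrary stand-ins for a step and gradient difference (the sign of y is flipped to enforce the curvature condition y^T h > 0).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
h = rng.standard_normal(n)            # step:            h_k = x_{k+1} - x_k
y = rng.standard_normal(n)            # gradient change: y_k
if y @ h < 0:
    y = -y                            # enforce the curvature condition y^T h > 0
H = np.eye(n)                         # current inverse-Hessian estimate X_k
rho = 1.0 / (y @ h)
I = np.eye(n)
# Classical (q = 1) BFGS update of the inverse:
H_new = (I - rho * np.outer(h, y)) @ H @ (I - rho * np.outer(y, h)) + rho * np.outer(h, h)
# H_new is symmetric and satisfies the inverse secant equation H_new y = h.
```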

Quasi-Newton updates
Consider now problem (5) with
f(x) = ½ x^T A x − b^T x + c,   (11)
where A is an n × n symmetric positive definite matrix, b ∈ R^n and c ∈ R. Granted, this is not a typical problem for which qN methods would be used by a practitioner. Indeed, the Hessian of f does not change, and hence one does not have to track it. The problem can simply be solved by setting the gradient to zero, which leads to the system Ax = b, the solution being x_* = A^{-1}b. As solving a linear system is much simpler than computing the inverse A^{-1}, approximately tracking the (inverse) Hessian of f along the path of the iterates {x_k} (the basic strategy of all qN methods) seems like too much effort for what is ultimately a much simpler problem. However, and this is one of the main insights of this work, instead of viewing qN methods as optimization algorithms, we can alternatively interpret them as iterative algorithms producing a sequence of matrices, {B_k} or {X_k}, hopefully converging to some matrix of interest. In particular, one would hope that X_k → A^{-1} if a qN method is applied to (11), with any symmetric positive definite initial guess X_0. In this case, since y_k = A h_k, the qN updates of the minimum distance variety given by (10) take the form
X_{k+1} = argmin_X { ||X − X_k||²_{F(W^{-1})} : X A h_k = h_k, X = X^T },   (12)
where h_k = x_{k+1} − x_k.

Randomized quasi-Newton updates
While the motivation for our work comes from optimization, having arrived at the update (12), we can dispense with some of the implicit assumptions and propose and analyze a wider class of methods. In particular, in this paper we analyze a large class of randomized algorithms of the type (12), where the vector h_k is replaced by a random matrix S, and A is any invertible matrix, not necessarily symmetric or positive definite. This constitutes a randomized block extension of the qN updates.

Inverting Nonsymmetric Matrices
In this paper we are concerned with the development of a family of stochastic algorithms for computing the inverse of a nonsingular matrix A ∈ R^{n×n}. The starting point in the development of our methods is the simple observation that the inverse A^{-1} is the (unique) solution of a linear matrix equation, which we shall refer to as the inverse equation:
AX = I.   (13)
Alternatively, one can use the inverse equation XA = I instead. Since (13) is difficult to solve directly, our approach is to iteratively solve a small, randomly relaxed version of (13). That is, we choose a random matrix S ∈ R^{n×q}, with q ≪ n, and instead solve the following sketched inverse equation:
S^T A X = S^T.   (14)
If we base the method on the second inverse equation, the sketched inverse equation XAS = S should be used instead. Note that A^{-1} satisfies (14). If q ≪ n, the sketched inverse equation is of a much smaller dimension than the original one, and hence easier to solve. However, the equation will no longer have a unique solution, and in order to design an algorithm we need a way of picking a particular solution. Our algorithm defines X_{k+1} to be the solution that is closest to the current iterate X_k in a weighted Frobenius norm. This is repeated in an iterative fashion, each time drawing S independently from a fixed distribution D. The distribution D and the matrix W can be seen as parameters of our method. The flexibility of being able to adjust D and W is important: by varying these parameters we obtain various specific instantiations of the generic method, with varying properties and convergence rates. This gives the practitioner the flexibility to adjust the method to the structure of A, to the computing environment, and so on.

Projection viewpoint: sketch-and-project
The next iterate X_{k+1} is the nearest point to X_k that satisfies a sketched version of the inverse equation:
X_{k+1} = argmin_{X ∈ R^{n×n}} ½ ||X − X_k||²_{F(W^{-1})} subject to S^T A X = S^T.   (15)
In the special case when S = I, the only such matrix is the inverse itself, and (15) is not helpful. However, if S is "simple", (15) will be easy to compute, and the hope is that through a sequence of such steps, where the matrices S are sampled in an i.i.d. fashion from some distribution, X_k will converge to A^{-1}. Alternatively, we can sketch the equation XA = I and project onto XAS = S:
X_{k+1} = argmin_{X ∈ R^{n×n}} ½ ||X − X_k||²_{F(W^{-1})} subject to X A S = S.   (16)
While method (15) sketches the rows of A, method (16) sketches the columns of A. Thus, we refer to (15) as the row variant and to (16) as the column variant. Both variants converge to the inverse of A, as will be established in Section 6. If A is singular, then it can be shown that the iterates of (16) converge to the left inverse, while the iterates of (15) converge to the right inverse.

Optimization viewpoint: constrain-and-approximate
The row sketch-and-project method can be cast in an apparently different yet equivalent way:
X_{k+1} = argmin_{X, Y} ½ ||X − A^{-1}||²_{F(W^{-1})} subject to X = X_k + W A^T S Y^T.   (17)
Minimization is done over X ∈ R^{n×n} and Y ∈ R^{n×q}. In this viewpoint, in each iteration (17) we select a random affine space that passes through X_k, and then select the point in this space that is as close as possible to the inverse. This random search space is special in that, independently of the input pair (W, S), we can efficiently compute the projection of A^{-1} onto this space, without knowing A^{-1} explicitly. Method (16) also has an equivalent constrain-and-approximate formulation:
X_{k+1} = argmin_{X, Y} ½ ||X − A^{-1}||²_{F(W^{-1})} subject to X = X_k + Y S^T A^T W.   (18)
Methods (17) and (18) can be viewed as new variants of approximate inverse preconditioning (AIP) [1,12,22,21], which are a class of methods for computing an approximate inverse of A by minimizing ||XA − I||_F via iterative optimization algorithms, such as steepest descent or the minimal residual method. Our methods use a different iterative procedure (projection onto a randomly generated affine space) and work with a more general norm (the weighted Frobenius norm).

Equivalence
We now prove that (15) and (16) are equivalent to (17) and (18), respectively, and give their explicit solution.
Theorem 4.1. Viewpoint (15) is equivalent to viewpoint (17), and viewpoint (16) is equivalent to viewpoint (18). Further, if S has full column rank, then the explicit solution to (15) is
X_{k+1} = X_k + W A^T S (S^T A W A^T S)^{-1} S^T (I − A X_k),   (19)
and the explicit solution to (16) is
X_{k+1} = X_k + (I − X_k A) S (S^T A^T W A S)^{-1} S^T A^T W.   (20)
Proof. We will prove all the claims for the row variant; that is, we prove that (15) and (17) are equivalent and that their solution is given by (19). The remaining claims, that (16) and (18) are equivalent and that their solution is given by (20), follow by analogous arguments. It suffices to consider the case W = I, as we can perform a change of variables to recover the solution for any W. Indeed, in view of (1), with the change of variables
X̂ := W^{-1/2} X W^{-1/2},  Â := W^{1/2} A W^{1/2},  Ŝ := W^{-1/2} S,   (21)
equation (15) becomes
X̂_{k+1} = argmin_{X̂ ∈ R^{n×n}} ½ ||X̂ − X̂_k||²_F subject to Ŝ^T Â X̂ = Ŝ^T.   (22)
Moreover, if we let Ŷ = W^{-1/2} Y, then (17) becomes
min_{X̂ ∈ R^{n×n}, Ŷ ∈ R^{n×q}} ½ ||X̂ − Â^{-1}||²_F subject to X̂ = X̂_k + Â^T Ŝ Ŷ^T.   (23)
By substituting the constraint in (23) into the objective function, then differentiating to find the stationary point, we obtain that
X̂_{k+1} = X̂_k + Â^T Ŝ (Ŝ^T Â Â^T Ŝ)^{-1} Ŝ^T (I − Â X̂_k)   (24)
is the solution to (23). Changing the variables back using (21), (24) becomes (19). Now we prove the equivalence of (22) and (23) using Lagrangian duality. The sketch-and-project viewpoint (22) has a convex quadratic objective function with linear constraints, thus strong duality holds. Introducing the Lagrange multiplier Ŷ ∈ R^{n×q}, the Lagrangian of (22) is given by
L(X̂, Ŷ) = ½ ||X̂ − X̂_k||²_F − ⟨Ŷ, X̂^T Â^T Ŝ − Ŝ⟩_F.   (25)
Clearly, (22) is equivalent to min_{X̂} max_{Ŷ} L(X̂, Ŷ). We will now prove that (23) is equivalent to max_{Ŷ} min_{X̂} L(X̂, Ŷ), thus proving that (22) and (23) are equivalent by strong duality. Differentiating the Lagrangian in X̂ and setting it to zero gives
X̂ = X̂_k + Â^T Ŝ Ŷ^T.   (26)
Substituting (26) into (25), minimizing in X̂, then maximizing in Ŷ, and dispensing with the term ½ ||X̂_k − Â^{-1}||²_F as it does not depend on Ŷ nor X̂, we obtain the dual problem
max_{Ŷ} min_{X̂} ½ ||X̂ − Â^{-1}||²_F subject to X̂ = X̂_k + Â^T Ŝ Ŷ^T.
It now remains to change variables using (21) and set Y = W^{1/2} Ŷ to obtain (17).

Algorithm 1 Stochastic Iterative Matrix Inversion (SIMI) - nonsym. row variant
1: input: invertible matrix A ∈ R^{n×n}
2: parameters: D = distribution over random matrices; positive definite matrix W ∈ R^{n×n}
3: initialize: arbitrary square matrix X_0 ∈ R^{n×n}
4: for k = 0, 1, 2, . . . do
5:   Sample an independent copy S ∼ D
6:   X_{k+1} = X_k + W A^T S (S^T A W A^T S)^{-1} S^T (I − A X_k)   ⊳ This is equivalent to (15) and (17)
7: end for
8: output: last iterate X_k

Algorithm 2 Stochastic Iterative Matrix Inversion (SIMI) - nonsym. column variant
1: input: invertible matrix A ∈ R^{n×n}
2: parameters: D = distribution over random matrices; positive definite matrix W ∈ R^{n×n}
3: initialize: arbitrary square matrix X_0 ∈ R^{n×n}
4: for k = 0, 1, 2, . . . do
5:   Sample an independent copy S ∼ D
6:   X_{k+1} = X_k + (I − X_k A) S (S^T A^T W A S)^{-1} S^T A^T W   ⊳ This is equivalent to (16) and (18)
7: end for
8: output: last iterate X_k

Based on Theorem 4.1, we can summarize the methods described in this section as Algorithm 1 and Algorithm 2. The explicit formulas (19) and (20) for (15) and (16) allow us to efficiently implement these methods, and facilitate convergence analysis. In particular, we now see that the convergence analysis of (20) follows trivially from analyzing (19). This is because (19) and (20) differ only by a transposition. That is, transposing (20) gives
X_{k+1}^T = X_k^T + W A S (S^T A^T W A S)^{-1} S^T (I − A^T X_k^T),
which is the solution to the row variant of the sketch-and-project viewpoint in which the equation A^T X^T = I is sketched instead of AX = I. Thus, since the weighted Frobenius norm is invariant under transposition, it suffices to study the convergence of (19); convergence of (20) follows by simply swapping the role of A for A^T. We collect this observation in the following remark.
Remark 4.1. The expression for the rate of convergence of Algorithm 2 is the same as the expression for the rate of convergence of Algorithm 1, but with every occurrence of A swapped for A T .
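The two closed-form updates of the row and column variants (Theorem 4.1) can be checked numerically: each satisfies its respective sketched inverse equation for an arbitrary positive definite weight W. The test matrices below are ours.

```python
import numpy as np

rng = np.random.default_rng(5)
n, q = 7, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)    # invertible test matrix
W = np.diag(1.0 + rng.random(n))                   # arbitrary positive definite weight
S = rng.standard_normal((n, q))
X = rng.standard_normal((n, n))                    # arbitrary current iterate X_k
I = np.eye(n)

# Row variant:    X_{k+1} = X_k + W A^T S (S^T A W A^T S)^{-1} S^T (I - A X_k)
Xr = X + W @ A.T @ S @ np.linalg.solve(S.T @ A @ W @ A.T @ S, S.T @ (I - A @ X))
# Column variant: X_{k+1} = X_k + (I - X_k A) S (S^T A^T W A S)^{-1} S^T A^T W
Xc = X + (I - X @ A) @ S @ np.linalg.solve(S.T @ A.T @ W @ A @ S, S.T @ A.T @ W)
# Xr satisfies S^T A Xr = S^T; Xc satisfies Xc A S = S.
```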

Relation to multiple linear systems
Any iterative method for solving linear systems can be applied to the n linear systems that define the inverse through AX = I to obtain an approximate inverse. However, not all methods for solving linear systems can be applied to solve these n linear systems simultaneously, which is necessary for efficient matrix inversion.
The recently proposed methods in [14] for solving linear systems can be easily and efficiently generalized to inverting a matrix, and the resulting method is equivalent to our row variant method (15) and (17). To show this, we perform the change of variables X̂_k = W^{-1/2} X_k W^{-1/2}, Â = W^{1/2} A W^{1/2} and Ŝ = W^{-1/2} S, so that iteration (15) becomes
X̂_{k+1} = argmin_{X̂} ½ ||X̂ − X̂_k||²_F subject to Ŝ^T Â X̂ = Ŝ^T.
The above is a separable problem, and each column of X̂_{k+1} can be calculated separately. Let x̂^i_{k+1} be the i-th column of X̂_{k+1}, which can be calculated through
x̂^i_{k+1} = argmin_{x̂ ∈ R^n} ½ ||x̂ − x̂^i_k||²_2 subject to Ŝ^T Â x̂ = Ŝ^T e_i.
The above is the method proposed for solving linear systems in [14], applied to the system Â x̂ = e_i. Thus, the convergence results established in [14] carry over to our row variant (15) and (17). In particular, the theory in [14] proves that the expected norm difference of each column of W^{-1/2} X_k converges to W^{-1/2} A^{-1} with rate ρ as defined in (4). This equivalence breaks down when we impose additional matrix properties through constraints, such as symmetry.
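The separability can be checked directly: with W = I, the i-th column of the full-matrix update equals the sketch-and-project step for the single system A x = e_i with the same sketch S. The matrices below are ours.

```python
import numpy as np

rng = np.random.default_rng(3)
n, q = 6, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
S = rng.standard_normal((n, q))
X = rng.standard_normal((n, n))        # current iterate X_k
I = np.eye(n)
M = np.linalg.inv(S.T @ A @ A.T @ S)   # (S^T A A^T S)^{-1}, W = I

# Full-matrix row-variant update:
X_new = X + A.T @ S @ M @ S.T @ (I - A @ X)

# Column i is exactly the sketch-and-project step for the system A x = e_i:
for i in range(n):
    x_i = X[:, i] + A.T @ S @ M @ S.T @ (I[:, i] - A @ X[:, i])
    assert np.allclose(X_new[:, i], x_i)
```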

Inverting Symmetric Matrices
When A is symmetric, it may be useful to maintain symmetry in the iterates, in which case the nonsymmetric methods (Algorithms 1 and 2) have an issue, as they do not guarantee that the iterates are symmetric. However, we can modify (15) by adding a symmetry constraint. The resulting symmetric method naturally maintains symmetry in the iterates.
Figure 1: The new estimate X_{k+1} is obtained by projecting X_k onto the affine space formed by intersecting the set of symmetric matrices with the solution set of the sketched inverse equation.

Projection viewpoint: sketch-and-project
The new iterate X_{k+1} is the result of projecting X_k onto the space of matrices that satisfy a sketched inverse equation and that are also symmetric, that is,
X_{k+1} = argmin_{X ∈ R^{n×n}} ½ ||X − X_k||²_{F(W^{-1})} subject to S^T A X = S^T, X = X^T.   (27)
See Figure 1 for an illustration of the symmetric update (27). This viewpoint can be seen as a randomized block version of the quasi-Newton (qN) methods [11,18], as detailed in Section 3. The flexibility in using a weighted norm is important for choosing a norm that better reflects the geometry of the problem. For instance, when A is symmetric positive definite, it turns out that W^{-1} = A results in a good method. This added freedom to choose an appropriate weighting matrix has proven very useful in the qN literature; in particular, the highly successful BFGS method [4,9,11,33] selects W^{-1} as an estimate of the Hessian matrix.
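For the particular choice W^{-1} = A (with A positive definite), the symmetric projection step simplifies to a block BFGS-type formula, which the following sketch (our notation and test matrix) iterates; the update maintains symmetry and positive definiteness while driving the iterates toward A^{-1}.

```python
import numpy as np

def rbfgs_step(X, A, S):
    """Symmetric sketch-and-project step for the choice W^{-1} = A (A spd).

    With L = (S^T A S)^{-1}, the update takes the block BFGS form
        X_{k+1} = S L S^T + (I - S L S^T A) X (I - A S L S^T),
    which is symmetric, keeps X positive definite, and enforces X_{k+1} A S = S.
    """
    n = A.shape[0]
    L = np.linalg.inv(S.T @ A @ S)
    E = np.eye(n) - S @ L @ S.T @ A
    return S @ L @ S.T + E @ X @ E.T

rng = np.random.default_rng(4)
n, q = 10, 3
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)            # symmetric positive definite test matrix
X = np.eye(n)
for _ in range(1000):
    S = rng.standard_normal((n, q))    # Gaussian sketch S ~ D
    X = rbfgs_step(X, A, S)
err = np.linalg.norm(X - np.linalg.inv(A))
```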

Optimization viewpoint: constrain-and-approximate
The viewpoint (27) also has an interesting dual viewpoint:
X_{k+1} = argmin_{X, Y} ½ ||X − A^{-1}||²_{F(W^{-1})} subject to X = X_k + ½ (Y S^T A W + W A S Y^T).   (28)
The minimum is taken over X ∈ R^{n×n} and Y ∈ R^{n×q}. The next iterate, X_{k+1}, is the best approximation to A^{-1} restricted to a random affine space of symmetric matrices. Furthermore, (28) is a symmetric equivalent of (17); that is, the constraint in (28) is the result of intersecting the constraint in (17) with the space of symmetric matrices.

Equivalence
We now prove that the two viewpoints (27) and (28) are equivalent, and show their explicit solution.
Theorem 5.1. If A and X k are symmetric, then the viewpoints (27) and (28) are equivalent.
That is, they define the same X_{k+1}. Furthermore, if S has full column rank, and we let Λ := (S^T A W A S)^{-1}, then the explicit solution to (27) and (28) is
X_{k+1} = X_k − (X_k A − I) S Λ S^T A W − W A S Λ S^T (A X_k − I) + W A S Λ S^T (A X_k A − A) S Λ S^T A W.   (29)
Proof. It was recently shown in [13, Section 2] and [20, Section 4] that (29) is the solution to (27). We now prove the equivalence of (27) and (28) using Lagrangian duality. It suffices to prove the claim for W = I, as we did in the proof of Theorem 4.1, since the change of variables (21) applied to (27) shows that (27) is equivalent to
min_{X̂ ∈ R^{n×n}} ½ ||X̂ − X̂_k||²_F subject to Ŝ^T Â X̂ = Ŝ^T, X̂ = X̂^T.   (30)
Since (30) has a convex quadratic objective with linear constraints, strong duality holds. Thus we will derive a dual formulation for (30), and then use the change of coordinates (21) to recover the solution to (27). Let Ŷ ∈ R^{n×q} and Γ ∈ R^{n×n}, and consider the Lagrangian of (30), which is
L(X̂, Ŷ, Γ) = ½ ||X̂ − X̂_k||²_F − ⟨Ŷ, X̂^T Â^T Ŝ − Ŝ⟩_F − ⟨Γ, X̂ − X̂^T⟩_F.   (31)
Differentiating in X̂ and setting to zero gives
X̂ = X̂_k + Â^T Ŝ Ŷ^T + Γ − Γ^T.   (32)
Applying the symmetry constraint X̂ = X̂^T gives Γ − Γ^T = ½ (Ŷ Ŝ^T Â − Â^T Ŝ Ŷ^T). Substituting the above into (32) gives
X̂ = X̂_k + ½ (Ŷ Ŝ^T Â + Â^T Ŝ Ŷ^T).   (33)
Now let Θ := ½ (Ŷ Ŝ^T Â + Â^T Ŝ Ŷ^T), and note that, since the matrix Θ + X̂_k − Â^{-1} is symmetric, the inner products in (31) simplify accordingly. Substituting (33) into (31), maximizing over Ŷ, and then minimizing over X̂, gives the dual problem:
min_{X̂, Ŷ} ½ ||X̂ − Â^{-1}||²_F subject to X̂ = X̂_k + ½ (Ŷ Ŝ^T Â + Â^T Ŝ Ŷ^T).
It now remains to change variables according to (21) and set Y = W^{1/2} Ŷ.

Algorithm 3 Stochastic Iterative Matrix Inversion (SIMI) - symmetric variant
1: input: symmetric invertible matrix A ∈ R^{n×n}
2: parameters: D = distribution over random matrices; positive definite matrix W ∈ R^{n×n}
3: initialize: symmetric matrix X_0 ∈ R^{n×n}
4: for k = 0, 1, 2, . . . do
5:   Sample an independent copy S ∼ D
6:   X_{k+1} = X_k − (X_k A − I) S Λ S^T A W − W A S Λ S^T (A X_k − I) + W A S Λ S^T (A X_k A − A) S Λ S^T A W, where Λ = (S^T A W A S)^{-1}   ⊳ This is equivalent to (27) and (28)
7: end for
8: output: last iterate X_k

Convergence
We now analyze the convergence of the error, X_k − A^{-1}, for the iterates of Algorithms 1, 2 and 3. For the sake of economy of space, we only analyze Algorithms 1 and 3. Convergence of Algorithm 2 follows from convergence of Algorithm 1 by observing Remark 4.1.
The first analysis we present, in Section 6.1, is concerned with the convergence of ||E[X_k − A^{-1}]||, that is, the norm of the expected error. We then analyze the convergence of E[||X_k − A^{-1}||], the expected norm of the error. The latter is a stronger type of convergence, as explained in the following proposition.
Proposition 6.1. Let X ∈ R^{n×n} be a random matrix, ||·|| a matrix norm induced by an inner product, and fix A^{-1} ∈ R^{n×n}. Then
||E[X − A^{-1}]||² = E[||X − A^{-1}||²] − E[||X − E[X]||²].
Proof. Adding and subtracting E[X] inside ||X − A^{-1}||², expanding the square, taking expectations, and grouping the appropriate terms yields the desired result.
But the converse is not necessarily true. Rather, the variance E[||X_k − E[X_k]||²] must converge to zero for the converse to be true. The convergence of Algorithms 1 and 3 can be characterized by studying the random matrix
Z := A^T S (S^T A W A^T S)^{-1} S^T A.   (35)
The update step of Algorithm 1 can be re-written as the simple fixed point formula
X_{k+1} − A^{-1} = (I − W Z)(X_k − A^{-1}).   (36)
We can also simplify the iterates of Algorithm 3 to
X_{k+1} − A^{-1} = (I − W Z)(X_k − A^{-1})(I − Z W).   (37)
The only stochastic component in both methods is contained in the matrix Z, and ultimately the convergence of the iterates will depend on E[Z], the expected value of this matrix. Thus we start with two lemmas concerning the matrices Z and E[Z].
Lemma 6.1. If Z is defined as in (35), then
1. the eigenvalues of W^{1/2} Z W^{1/2} are either 0 or 1;
2. W^{1/2} Z W^{1/2} projects onto the q-dimensional subspace Range(W^{1/2} A^T S).

Norm of the expected error
We start by proving that the norm of the expected error of the iterates of Algorithm 1 and Algorithm 3 converges to zero. The following theorem is remarkable in that we do not need to make any assumptions on the distribution of S, except that S has full column rank. Rather, the theorem pinpoints that convergence depends solely on the spectrum of W^{1/2} E[Z] W^{1/2}. Theorem 6.1. Let S be a random matrix which has full column rank with probability 1 (so that Z is well defined), and let X_0 ∈ R^{n×n}. If X_k is calculated in either one of these two ways, 1. applying k iterations of Algorithm 1, 2. applying k iterations of Algorithm 3 (assuming A and X_0 are symmetric), then X_k converges to the inverse exponentially fast, according to (39), with convergence rate ρ def= 1 − λ_min(W^{1/2} E[Z] W^{1/2}). (40) Moreover, we have the following lower and upper bounds on the convergence rate: 1 − E[q]/n ≤ ρ ≤ 1. (41) Proof. Let R_k def= W^{-1/2}(X_k − A^{-1})W^{-1/2} for all k. Left and right multiplying (36) by W^{-1/2} gives (43). Taking expectation with respect to S in (43) gives (44). Taking full expectation in (43) and using the tower rule gives (45). Applying norms in (45) gives (46). Furthermore, ‖I − E[Ẑ]‖_2 = ρ, where we used the symmetry of I − E[Ẑ] when passing from the operator norm to the spectral radius. Note that the symmetry of E[Ẑ] derives from the symmetry of Ẑ. It now remains to unroll the recurrence in (46) to get (39). Now we analyze the iterates of Algorithm 3. Left and right multiplying (37) by W^{-1/2}, we obtain (48). As P is a linear operator, taking expectation again yields (49). Letting |||P̄|||_2 def= max_{‖R‖_2 = 1} ‖P̄(R)‖_2, applying norms in (49) gives (50). Clearly, P is a positive linear map, that is, it is linear and maps positive semi-definite matrices to positive semi-definite matrices. Thus, by Jensen's inequality, the map P̄ is also a positive linear map. As every positive linear map attains its norm at the identity matrix, we have |||P̄|||_2 = ‖P̄(I)‖_2. Inserting this equivalence in (50) and unrolling the recurrence gives (39). 
Finally, to prove (41): as proven in Lemma 6.2, the spectrum of W^{1/2} E[Z] W^{1/2} is contained in [0, 1]; consequently 0 ≤ ρ ≤ 1. Furthermore, as the trace of a matrix is equal to the sum of its eigenvalues, we have (51), where we used that W^{1/2} Z W^{1/2} projects onto a q-dimensional subspace (Lemma 6.1), and thus Tr(W^{1/2} Z W^{1/2}) = q. Rearranging (51) gives (41).
If ρ = 1, this theorem does not guarantee convergence. However, ρ < 1 when E [Z] is positive definite, which is the case in all practical variants of our method, some of which we describe in Section 8.

Expectation of the norm of the error
Now we consider the convergence of the expected norm of the error. This form of convergence is preferred, as it also proves that the variance of the iterates converges to zero (see Proposition 6.1).
Theorem 6.2. Let S be a random matrix that has full column rank with probability 1, and such that E[Z] is positive definite, where Z is defined in (35). Let X_0 ∈ R^{n×n}. If X_k is calculated in either one of these two ways, 1. applying k iterations of Algorithm 1, 2. applying k iterations of Algorithm 3 (assuming A and X_0 are symmetric), then X_k converges to the inverse according to (52). Proof. First consider Algorithm 1, where X_{k+1} is calculated by iteratively applying (36). Using the substitution (42) again, from (36) we have R_{k+1} = (I − Ẑ)R_k, from which we obtain (53). Taking expectations, we get (54). In order to arrive at (52), it now remains to take full expectation, unroll the recurrence, and undo the substitution (42). Now assume that A and X_0 are symmetric and that {X_k} are the iterates computed by Algorithm 3. Left and right multiplying (37) by W^{-1/2}, we obtain a recurrence for R_{k+1}. Taking norms, we obtain the desired bound, where in the last inequality we used that I − Ẑ is an orthogonal projection, and thus symmetric positive semi-definite, whence Tr(R_k Ẑ R_k (I − Ẑ)) = Tr(Ẑ^{1/2} R_k (I − Ẑ) R_k Ẑ^{1/2}) ≥ 0. The remainder of the proof follows similar steps to those used in the first part of the proof, from (53) onwards.
Theorem 6.2 establishes that the expected norm of the error converges exponentially fast to zero. Moreover, the convergence rate ρ is the same as the one that appeared in Theorem 6.1, where we established the convergence of the norm of the expected error. Both results can be recast as iteration complexity bounds. For instance, using standard arguments, from Theorem 6.1 we observe that for a given 0 < ε < 1 we get (55). On the other hand, from Theorem 6.2 we have (56). To push the expected norm of the error below ε (see (56)), we require double the number of iterates needed to bring the norm of the expected error below ε (see (55)). This is because in Theorem 6.2 we determined that ρ is the rate at which the expectation of the squared norm of the error converges, while in Theorem 6.1 we determined that ρ is the rate at which the norm, without the square, of the expected error converges. However, as proven in Proposition 6.1, the former is a stronger form of convergence. Thus, Theorem 6.1 does not give a stronger result than Theorem 6.2; rather, these theorems give qualitatively different results.

Discrete Random Matrices
We now consider the case of a discrete random matrix S. We show that when S is a complete discrete sampling, then E [Z] is positive definite, and thus from Theorems 6.1 and 6.2, Algorithms 1-3 converge.
Definition 7.1 (Complete Discrete Sampling). The random matrix S has a finite discrete distribution with r outcomes. In particular, S = S i ∈ R n×q i with probability p i > 0 for i = 1, . . . , r, where S i is of full column rank. We say that S is a complete discrete sampling when S def = [S 1 , . . . , S r ] ∈ R n×n has full row rank.
As an example of a complete discrete sampling, let S = e i (the ith unit coordinate vector in R n ) with probability p i = 1/n, for i = 1, . . . , n. Then S, as defined in Definition 7.1, is equal to the identity matrix: S = I. Consequently, S is a complete discrete sampling. In fact, from any basis of R n we could construct a complete discrete sampling in an analogous way.
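For the coordinate-vector sampling above (with W = I), E[Z] can be formed explicitly as a convex combination of rank-one projections onto the rows of A. The following sketch (illustrative, not from the paper) checks that E[Z] is positive definite and that ρ = 1 − λ_min(E[Z]) satisfies 1 − 1/n ≤ ρ < 1; the latter holds because Tr(E[Z]) = 1 here, so λ_min(E[Z]) ≤ 1/n:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # nonsingular test matrix

# S = e_i with probability p_i = 1/n; with W = I each outcome of Z is the
# orthogonal projection onto the span of the i-th row of A, so
# E[Z] = sum_i (1/n) a_i a_i^T / ||a_i||^2 with a_i = A^T e_i.
EZ = np.zeros((n, n))
for i in range(n):
    a = A[i, :]
    EZ += np.outer(a, a) / (n * (a @ a))

eigs = np.linalg.eigvalsh(EZ)
rho = 1.0 - eigs.min()       # convergence rate 1 - lambda_min(E[Z])
```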
Next we establish that, when S is a discrete random matrix, S having a complete discrete distribution is a necessary and sufficient condition for E[Z] to be positive definite. This will allow us to determine an optimized distribution for S in Section 7.1.
Proposition 7.1. Let S be a discrete random matrix with r outcomes S_1, . . . , S_r, all of which have full column rank. The matrix E[Z] is positive definite if and only if S is a complete discrete sampling. Furthermore, (57) holds. Proof. Taking the expectation of Z as defined in (35) gives (58). Suppose v^T E[Z] v = 0 for some v. Then 0 = v^T A^T S D² S^T A v = ‖D S^T A v‖²₂, which shows that S^T A v = 0 and thus v ∈ Null(S^T A). As A is nonsingular, it follows that v = 0 for every such v if and only if S has full row rank, that is, if and only if S is a complete discrete sampling.
With a closed form expression for E [Z] we can optimize ρ over the possible distributions of S to yield a better convergence rate.

Optimizing an upper bound on the convergence rate
So far we have proven two different types of convergence for Algorithms 1, 2 and 3 in Theorems 6.1 and 6.2. Furthermore, both forms of convergence depend on the same convergence rate ρ for which we have a closed form expression (40).
The availability of a closed form expression for the convergence rate opens up the possibility of designing particular distributions for S which optimize the rate. In [14] it was shown that (in the context of solving linear systems) for a complete discrete sampling, computing the optimal probability distribution, assuming that the matrices {S_i}_{i=1}^r are fixed, leads to a semi-definite program (SDP). In some cases, the gain in performance from the optimal probabilities is much larger than the loss incurred by having to solve the SDP. However, this is not always the case. Here we propose an alternative: to optimize the upper bound γ(p) on the convergence rate given in (59). By writing Z_p instead of Z, we emphasize the dependence of Z on p = (p_1, . . . , p_r) ∈ R^r, which belongs to the probability simplex Δ_r def= {p = (p_1, . . . , p_r) ∈ R^r : Σ_{i=1}^r p_i = 1, p ≥ 0}. Our goal is to minimize γ(p) over Δ_r.
Theorem 7.1. Let S be a complete discrete sampling and let S̄_i ∈ R^{n×q_i}, for i = 1, 2, . . . , r, be such that S^{-T} = [S̄_1, . . . , S̄_r]. Then (60) holds. Proof. In view of (59), minimizing γ over p is equivalent to minimizing the expression (61) over p. Applying Lemma 12.1 in the Appendix, the minimum of (61) subject to the constraint p ∈ Δ_r is attained at the probabilities (62). Plugging these into (61) gives the result (60).
Observe that, in general, the optimal probabilities (62) cannot be calculated, since the formula involves the inverse of A, which is not known. However, if A is symmetric positive definite, we can choose W = A², which eliminates this issue. If A is not symmetric positive definite, or if we do not wish to choose W = A², we can use the formula (62) as a recipe for a heuristic choice of the probabilities: we can use the iterates {X_k} as a proxy for A^{-1}. With this setup, the resulting method is not guaranteed to converge by the theory developed in this paper; in practice, however, one would expect it to work well. We have not done extensive experiments to test this, and leave it to future research. To illustrate, let us consider a concrete simple example. Choose W = I and S_i = e_i (the ith unit coordinate vector in R^n). We have S = [e_1, . . . , e_n] = I, whence S̄_i = e_i for i = 1, . . . , r. Plugging into (62), we obtain the corresponding probabilities.

Convenient sampling
We now ask the following question: given matrices S 1 , . . . , S r defining a complete discrete sampling, assign probabilities p i to S i so that the convergence rate ρ becomes easy to interpret. The following result, first stated in [14] in the context of solving linear systems, gives a convenient choice of probabilities resulting in ρ which depends on a (scaled) condition number of A.
Proposition 7.2. Let S be a complete discrete sampling, where S = S_i with probability p_i = Tr(S_i^T A W A^T S_i)/Tr(S^T A W A^T S). (63) Then the convergence rate takes the form ρ = 1 − 1/κ²_{2,F}(W^{1/2} A^T S). (64) Proof. Theorem 5.1 in [14] gives (64). The bound in (65) follows trivially.
Following Remark 4.1, we can determine a convergence rate for Algorithm 2 based on Proposition 7.2.
Remark 7.1. Let S be a complete discrete sampling, where S = S_i with probability p_i = Tr(S_i^T A^T W A S_i)/Tr(S^T A^T W A S). (66) Then Algorithm 2 converges at the rate ρ = 1 − 1/κ²_{2,F}(W^{1/2} A S). (67)

Optimal and adaptive samplings
Having decided on the probabilities p 1 , . . . , p r associated with the matrices S 1 , . . . , S r in Proposition 7.2, we can now ask the following question. How should we choose the matrices {S i } if we want ρ to be as small as possible? Since the rate improves as the condition number κ 2 2,F (W 1/2 A T S) decreases, we should aim for matrices that minimize the condition number. Notice that the lower bound in (65) is reached for S = (W 1/2 A T ) −1 = A −T W −1/2 . While we do not know A −1 , we can use our best current approximation of it, X k , in its place. This leads to a method which adapts the probability distribution governing S throughout the iterative process. This observation inspires a very efficient modification of Algorithm 3, which we call AdaRBFGS (Adaptive Randomized BFGS), and describe in Section 9. Notice that, luckily and surprisingly, our twin goals of computing the inverse and optimizing the convergence rate via the above adaptive trick are compatible. Indeed, we wish to find A −1 , whose knowledge gives us the optimal rate. This should be contrasted with the SDP approach mentioned earlier: i) the SDP could potentially be harder than the inversion problem, and ii) having found the optimal probabilities {p i }, we are still not guaranteed the optimal rate. Indeed, optimality is relative to the choice of the matrices S 1 , . . . , S r , which can be suboptimal.
Remark 7.2 (Adaptive sampling). The convergence rate (64) suggests how one can select a sampling distribution for S that would result in faster practical convergence. We now detail several practical choices for W and indicate how to sample S. These suggestions require that the distribution of S depend on the iterate X_k, and thus no longer fit into our framework. Nonetheless, we collect these suggestions here in the hope that others will wish to extend these ideas further, and as a demonstration of the utility of developing convergence rates. a) If W = I, then Algorithm 1 converges at the rate ρ = 1 − 1/κ²_{2,F}(A^T S), and hence S should be chosen so that S is a preconditioner of A^T. For example, S = X_k^T; that is, S should be a sampling of the rows of X_k. b) If W = I, then Algorithm 2 converges at the rate ρ = 1 − 1/κ²_{2,F}(AS), and hence S should be chosen so that S is a preconditioner of A. For example, S = X_k; that is, S should be a sampling of the columns of X_k. c) If A is symmetric positive definite, we can choose W = A^{-1}, in which case Algorithm 3 converges at the rate ρ = 1 − 1/κ²_{2,F}(A^{1/2} S). This rate suggests that S should be chosen so that S is an approximation of A^{-1/2}. In Section 9 we develop this idea further, and design the AdaRBFGS algorithm. d) If W = (A^T A)^{-1}, then Algorithm 1 can be efficiently implemented with S = AV, where V is a complete discrete sampling. Furthermore, ρ = 1 − 1/κ²_{2,F}(AV). This rate suggests that V should be chosen so that V is a preconditioner of A. For example, we can set V = X_k; that is, V should be a sampling of the columns of X_k. e) If W = (AA^T)^{-1}, then Algorithm 2 can be efficiently implemented with S = A^T V, where V is a complete discrete sampling. From Remark 7.1, the convergence rate of the resulting method is given by 1 − 1/κ²_{2,F}(A^T V). This rate suggests that V should be chosen so that V is a preconditioner of A^T. For example, V = X_k^T; that is, V should be a sampling of the columns of X_k. 
f ) If A is symmetric positive definite, we can choose W = A 2 , in which case Algorithm 3 can be efficiently implemented with S = AV. Furthermore ρ = 1 − 1/κ 2 2,F (AV). This rate suggests that V should be chosen so that V is a preconditioner of A. For example V = X k , that is, V should be a sampling of the rows or the columns of X k .

Randomized Quasi-Newton Updates
Algorithms 1, 2 and 3 are in fact families of algorithms indexed by the two parameters: i) positive definite matrix W and ii) distribution D (from which we pick random matrices S). This allows us to design a myriad of specific methods by varying these parameters. Here we highlight some of these possibilities, focusing on complete discrete distributions for S so that convergence of the iterates is guaranteed through Theorems 6.1 and 6.2. We also compute the convergence rate for these special methods for the convenient probability distribution given by (63) and (66) (Proposition 7.2) so that the convergence rates (64) and (67), respectively, depend on a scaled condition number which is easy to interpret. We will also make some connections to existing qN and Approximate Inverse Preconditioning methods.  Table 2: Specific randomized updates for inverting matrices discussed in this section, obtained as special cases of our algorithms. First column: "sym" means "symmetric" and "s.p.d." means "symmetric positive definite"; Third column: "inv" means invertible. Block versions of all these updates are obtained by choosing S as a matrix with more than one column.

One Step Update
We have the freedom to select S as almost any random matrix that has full column rank. This includes choosing S to be a constant and invertible matrix, such as the identity matrix I, in which case X_1 must be equal to the inverse. Indeed, the sketch-and-project formulations of all our algorithms reveal this. For Algorithm 1, for example, the sketched system is S^T A X = S^T, which for S = I is equivalent to A X = I, whose unique solution is X = A^{-1}. Hence X_1 = A^{-1}, and we have convergence in one iteration/step. Through inspection of the convergence rate, we see that W^{1/2} E[Z] W^{1/2} = I and ρ = 1 − λ_min(W^{1/2} E[Z] W^{1/2}) = 0; thus this one-step convergence is predicted in theory by Theorems 6.1 and 6.2.
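This one-step behaviour is easy to check numerically. A small sketch (ours, not the authors' code), assuming the explicit Algorithm 1 update X_1 = X_0 + W A^T S (S^T A W A^T S)^{-1} S^T (I − A X_0) with the constant sketch S = I:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = 3 * np.eye(n) + rng.standard_normal((n, n))   # nonsingular test matrix
W = np.diag(rng.uniform(1.0, 2.0, n))             # any positive definite weight
X0 = rng.standard_normal((n, n))

# With S = I, the sketched system S^T A X = S^T is the full system A X = I,
# so one step lands exactly on the inverse, for any W and any X0.
S = np.eye(n)
X1 = X0 + W @ A.T @ S @ np.linalg.solve(S.T @ A @ W @ A.T @ S, S.T @ (np.eye(n) - A @ X0))
```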

Simultaneous Randomized Kaczmarz Update
Perhaps the most natural choice for the weighting matrix W is the identity, W = I. With this choice, Algorithm 1 is equivalent to applying the randomized Kaczmarz update simultaneously to the n linear systems encoded in AX = I. To see this, note that the sketch-and-project viewpoint (15) of Algorithm 1 is (68), which, by (19), results in the explicit update (69). If S is a random coordinate vector, then (68) is equivalent to projecting the jth column of X_k onto the solution space of A_{i:} x = δ_{ij}, which is exactly an iteration of the randomized Kaczmarz update applied to solving Ax = e_j. In particular, if S = e_i with probability p_i = ‖A_{i:}‖²₂/‖A‖²_F, then according to Proposition 7.2, the rate of convergence of update (69) is given by ρ = 1 − 1/κ²_{2,F}(A), where we used that κ_{2,F}(A) = κ_{2,F}(A^T). This is exactly the rate of convergence given by Strohmer and Vershynin in [35] for the randomized Kaczmarz method.
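A minimal sketch (ours) of the simultaneous randomized Kaczmarz update with the row-norm probabilities p_i = ‖A_{i:}‖²₂/‖A‖²_F; the test matrix is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))  # small kappa_{2,F}(A)
Ainv = np.linalg.inv(A)
I = np.eye(n)

p = np.linalg.norm(A, axis=1) ** 2        # p_i proportional to ||A_{i:}||_2^2
p /= p.sum()

X = np.zeros((n, n))
for _ in range(4000):
    i = rng.choice(n, p=p)
    a = A[i, :]
    # Kaczmarz step applied to all n systems A x = e_j at once:
    # project column j of X onto the hyperplane {x : A_{i:} x = delta_{ij}}.
    X += np.outer(a, I[i, :] - a @ X) / (a @ a)
err = np.linalg.norm(X - Ainv)
```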

Randomized Bad Broyden Update
The update (69) can also be viewed as an adjoint form of the bad Broyden update [4, 19]. To see this, use Algorithm 2 with W = I, in which case the iterative process is (70). The update (70) is a randomized block form of the bad Broyden update [4, 19]. In the qN setting, S is not random, but is rather the previous step direction S = δ ∈ R^n. Furthermore, if we rename γ def= AS ∈ R^n, then (70) becomes (71), which is the standard way of writing the bad Broyden update [19]. Update (69) is an adjoint form of bad Broyden in the sense that, if we transpose (69), set S = δ and write γ = A^T S, we obtain the bad Broyden update, but applied to X_k^T instead. The constrain-and-approximate viewpoint (18) gives a new interpretation of the bad Broyden update: namely, the update (71) can be written as a constrain-and-approximate problem. Thus, the bad Broyden update is the best rank-one update (with fixed γ) approximating the inverse. We can determine the rate at which our randomized variant (70) of the BB update converges by using Remark 7.1. In particular, if S = S_i with probability p_i = ‖AS_i‖²_F/‖AS‖²_F, then (70) converges with the rate ρ = 1 − 1/κ²_{2,F}(AS).
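The block update (70) is straightforward to implement; the following is a hedged sketch (sizes and test matrix are our assumptions), which also checks that each step enforces the sketched equation X A S = S:

```python
import numpy as np

rng = np.random.default_rng(4)
n, q = 10, 3
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))
Ainv = np.linalg.inv(A)

def bad_broyden_step(X, S):
    # Randomized block bad Broyden step (W = I):
    # X+ = X + (S - X A S) ((A S)^T A S)^{-1} (A S)^T,
    # i.e. the projection of X onto {X : X A S = S}.
    AS = A @ S
    return X + (S - X @ AS) @ np.linalg.solve(AS.T @ AS, AS.T)

X = np.zeros((n, n))
for _ in range(600):
    X = bad_broyden_step(X, rng.standard_normal((n, q)))
err = np.linalg.norm(X - Ainv)

S0 = rng.standard_normal((n, q))
X1 = bad_broyden_step(np.zeros((n, n)), S0)
```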

Randomized Powell-Symmetric-Broyden Update
If A is symmetric and we use Algorithm 3 with W = I, the iterates are given by (72), which is a randomized block form of the Powell-Symmetric-Broyden update [13]. If S = S_i with probability p_i = ‖AS_i‖²_F/‖AS‖²_F, then according to Proposition 7.2, the iterates (72) and (69) converge at the rate ρ = 1 − 1/κ²_{2,F}(AS).
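A sketch of a randomized block PSB step: below we implement the symmetric Frobenius-norm projection of X_k onto {X : XAS = S, X = X^T} directly; this explicit formula is our reconstruction (the standard symmetric least-change form) and should be checked against (29) with W = I:

```python
import numpy as np

rng = np.random.default_rng(5)
n, q = 10, 3
G = rng.standard_normal((n, n))
A = np.eye(n) + 0.02 * (G + G.T)       # symmetric, well-conditioned
Ainv = np.linalg.inv(A)

def psb_step(X, S):
    # Symmetric Frobenius projection of X onto {X : X A S = S, X = X^T}:
    # with B = A S, Lam = (B^T B)^{-1}, V = S - X B,
    # X+ = X + V Lam B^T + B Lam V^T - B Lam (V^T B) Lam B^T.
    B = A @ S
    Lam = np.linalg.inv(B.T @ B)
    V = S - X @ B
    return X + V @ Lam @ B.T + B @ Lam @ V.T - B @ Lam @ (V.T @ B) @ Lam @ B.T

X = np.zeros((n, n))
for _ in range(1500):
    X = psb_step(X, rng.standard_normal((n, q)))
err = np.linalg.norm(X - Ainv)
```

One can verify directly that the step satisfies the constraint (X+ A S = S) and preserves symmetry whenever X and A are symmetric.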

Randomized Good Broyden Update
Next we present a method that shares certain properties with Gaussian elimination and can be viewed as a randomized block variant of the good Broyden update [4, 19]. This method requires the following adaptation of Algorithm 2: instead of sketching the inverse equation, consider the update (73), which performs a column sketching of the equation X A^{-1} = I by right multiplying with Ae_i, where e_i is the ith coordinate vector. Projecting an iterate X_k onto this sketched equation gives the iterates (74). Given that we are sketching and projecting onto the solution space of X A^{-1} = I, the iterates of this method converge to A; therefore the inverse iterates X_k^{-1} converge to A^{-1}. We can efficiently compute the inverse iterates by using the Woodbury formula [36], which gives (75). The update (75) behaves like Gaussian elimination in the sense that, if i is selected in a cyclic fashion, that is, i = k on the kth iteration, then from (74) it is clear that, on the kth iteration, the first k columns of the matrix X_{k+1}^{-1} A are equal to the first k columns of the identity matrix. Consequently, X_n = A and X_n^{-1} = A^{-1}. If instead we select i uniformly at random, then we can adapt Proposition 7.2 by swapping each occurrence of A^T for A^{-1} and observing that S_i = Ae_i, and thus S = A. Consequently, the iterates (74) converge to A at the rate ρ = 1 − 1/n; thus the lower bound (41) is achieved, and X_k converges to A accordingly. Note that this does not say anything about how fast X_k^{-1} converges to A^{-1}; therefore (75) is not an efficient method for calculating an approximate inverse. If we replace e_i by a step direction δ_k ∈ R^n, then the update (75) is known as the good Broyden update [4, 19].
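The cyclic variant can be sketched as follows; we maintain X_k^{-1} with the Sherman-Morrison formula (the rank-one case of Woodbury), and after n steps X_n = A exactly, as claimed. The test matrix is an assumption chosen so that all intermediate matrices stay invertible:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 8
A = 6 * np.eye(n) + 0.4 * rng.standard_normal((n, n))  # safely nonsingular sweep
E = np.eye(n)

X = np.eye(n)    # X_0 = I
Y = np.eye(n)    # Y = X_k^{-1}, maintained by rank-one inverse updates
for i in range(n):                       # cyclic choice i = k
    u = A[:, i] - X[:, i]
    X = X + np.outer(u, E[i, :])         # X_{k+1} = X_k + (A e_i - X_k e_i) e_i^T
    # Sherman-Morrison: (X + u e_i^T)^{-1} = Y - (Y u)(e_i^T Y)/(1 + e_i^T Y u)
    Yu = Y @ u
    Y = Y - np.outer(Yu, Y[i, :]) / (1 + Yu[i])
```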

Approximate Inverse Preconditioning
When A is symmetric positive definite, we can choose W = A^{-1}, in which case Algorithm 1 is given by (76). The constrain-and-approximate viewpoint (17) of this update is (77). This viewpoint reveals that the update (76) is akin to the Approximate Inverse Preconditioning (AIP) methods [1, 12, 22, 21]. We can determine the rate at which (76) converges using (64). In particular, if S = S_i with probability p_i = Tr(S_i^T A S_i)/Tr(S^T A S), then (76) converges with the rate ρ = 1 − 1/κ²_{2,F}(A^{1/2} S).
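With W = A^{-1} the update needs no inverse of A: the inverse in the weight cancels inside the formula. A short sketch (ours), assuming the simplified form X_{k+1} = X_k + S (S^T A S)^{-1} S^T (I − A X_k):

```python
import numpy as np

rng = np.random.default_rng(7)
n, q = 10, 3
G = rng.standard_normal((n, n))
A = np.eye(n) + 0.1 * (G @ G.T) / n    # symmetric positive definite
Ainv = np.linalg.inv(A)
I = np.eye(n)

def aip_step(X, S):
    # Algorithm 1 with W = A^{-1} simplifies to
    # X+ = X + S (S^T A S)^{-1} S^T (I - A X); A^{-1} never appears.
    return X + S @ np.linalg.solve(S.T @ A @ S, S.T @ (I - A @ X))

X = np.zeros((n, n))
for _ in range(500):
    X = aip_step(X, rng.standard_normal((n, q)))
err = np.linalg.norm(X - Ainv)
```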

Randomized DFP Update
If A is symmetric positive definite, then we can choose W = A. Furthermore, if we adapt the sketch-and-project formulation (15) to sketch the equation X A^{-1} = I by right multiplying by AS, and additionally impose symmetry on the iterates, we arrive at the following update: (78). The solution to the above is given by (79). Using the Woodbury formula [36], we find that (80) holds. The update (80) is a randomized variant of the Davidon-Fletcher-Powell (DFP) update [7, 9]. We can adapt Proposition 7.2 to determine the rate at which X_k converges to A by swapping each occurrence of A^T for A^{-1}. For instance, if we let S_i = Ae_i with probabilities chosen as in Proposition 7.2, then the iterates (80) converge to A at the rate ρ = 1 − λ_min(A)/Tr(A). Thus X_k converges to A. However, this does not indicate at what rate X_k^{-1} converges to A^{-1}. This is in contrast with randomized BFGS, which produces iterates that converge to A^{-1} at this same rate, as we show in the next section. This perhaps sheds new light on why the BFGS update performs better than the DFP update.

Randomized BFGS Update
If A is symmetric and positive definite, we can choose W = A^{-1} and apply Algorithm 3 to maintain symmetry of the iterates. The iterates are given by (82). This is a block variant, see [13], of the BFGS update [4, 9, 11, 33]. The constrain-and-approximate viewpoint gives a new interpretation of the block BFGS update: that is, from (27), the iterates (82) admit an equivalent constrain-and-approximate formulation. Thus, the standard and block BFGS updates produce an improved estimate by finding the best approximation of the inverse over a particular affine space of symmetric matrices passing through the current iterate. This is a new way of interpreting BFGS.
If p_i = Tr(S_i^T A S_i)/Tr(S^T A S), then according to Proposition 7.2, the update (82) converges at the rate ρ = 1 − 1/κ²_{2,F}(A^{1/2} S). (83) It is remarkable that the update (82) preserves positive definiteness of the iterates. Indeed, assume that X_k is positive definite, and let v ∈ R^n and P def= S(S^T A S)^{-1} S^T. Left and right multiplying (82) by v^T and v, respectively, gives v^T X_{k+1} v = v^T P v + v^T (I − P A) X_k (I − A P) v ≥ 0. Thus v^T X_{k+1} v = 0 implies that P v = 0 and (I − A P) v = 0, which when combined give v = A P v = 0. This proves that X_{k+1} is positive definite. Thus the update (82) is particularly well suited for calculating the inverse of positive definite matrices.
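The positive definiteness argument is easy to verify numerically. Below is a sketch of the randomized block BFGS update, written with P = S(S^T A S)^{-1} S^T as X_{k+1} = P + (I − PA) X_k (I − AP) (our reading of (82); the test matrix is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(8)
n, q = 10, 3
G = rng.standard_normal((n, n))
A = np.eye(n) + 0.1 * (G @ G.T) / n    # symmetric positive definite
Ainv = np.linalg.inv(A)
I = np.eye(n)

def rbfgs_step(X, S):
    # Block BFGS update: with P = S (S^T A S)^{-1} S^T,
    # X+ = P + (I - P A) X (I - A P); symmetry and definiteness are preserved.
    P = S @ np.linalg.solve(S.T @ A @ S, S.T)
    return P + (I - P @ A) @ X @ (I - A @ P)

X = np.eye(n)                           # symmetric positive definite X_0
for _ in range(400):
    X = rbfgs_step(X, rng.standard_normal((n, q)))
err = np.linalg.norm(X - Ainv)
eig_min = np.linalg.eigvalsh((X + X.T) / 2).min()
```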
In Section 9, we detail an update designed to improve the convergence rate in (83). The result is a method that is able to invert large scale positive definite matrices orders of magnitude faster than the state-of-the-art.

Randomized Column Update
We now describe a method whose deterministic variant does not seem to exist in the literature. For this update, we need to perform a linear transformation of the sampling matrices. For this, let V be a complete discrete sampling where V = V i ∈ R n×q i with probability p i > 0, for i = 1, . . . , r.
Let the sampling matrices be defined as S_i = AV_i ∈ R^{n×q_i} for i = 1, . . . , r. As A is nonsingular and S = AV, the matrix S is also a complete discrete sampling. With these choices and W^{-1} = A^T A, the sketch-and-project viewpoint (15) yields an explicit update; the resulting iterates of Algorithm 1 are given by (84). From the constrain-and-approximate viewpoint (17), this can be rewritten accordingly. With the same parameters S and W, the iterates of Algorithm 3 are given by (85). If we choose the probabilities as in Proposition 7.2, then the iterates (84) and (85) converge linearly in expectation to the inverse at the rate ρ = 1 − 1/κ²_{2,F}(AV). There also exists an analogous "row" variant of (84), which arises by using Algorithm 2, but we do not explore it here.

AdaRBFGS: Adaptive Randomized BFGS
All the updates we have developed thus far use a sketching matrix S that is sampled in an i.i.d. fashion from a fixed distribution D at each iteration. In this section we assume that A is symmetric positive definite, and propose AdaRBFGS: a variant of the RBFGS update, discussed in Section 8.8, which adaptively changes the distribution D throughout the iterative process. Due to this change, Theorems 6.1 and 6.2 and Proposition 7.2 are no longer applicable. Superior numerical efficiency of this update is verified through extensive numerical experiments in Section 10.

Motivation
We now motivate the design of this new update by examining the convergence rate (83) of the RBFGS iterates (82). Recall that in RBFGS we choose W = A^{-1} and S = S_i with probability p_i = Tr(S_i^T A S_i)/Tr(S^T A S), where S is a complete discrete sampling and S = [S_1, . . . , S_r]; then ρ = 1 − 1/κ²_{2,F}(A^{1/2} S). Consider now the question of choosing the matrix S in such a way that ρ is as small as possible.
Note that the optimal choice is any S such that S T AS = I. Indeed, then ρ = 1 − 1/n, and the lower bound (65) is attained. For instance, the choice S = A −1/2 would be optimal. Hence, in each iteration we would choose S to be a random column (or random column submatrix) of A −1/2 . Clearly, this is not a feasible choice, as we do not know the inverse of A. In fact, it is A −1 which we are trying to find! However, this leads to the following observation: the goals of finding the inverse of A and of designing an optimal distribution D are in synchrony.

The algorithm
While we do not know A^{-1/2}, we can use the iterates {X_k} themselves to construct a good adaptive sampling. Indeed, the iterates contain information about the inverse, and hence we can use them to design a better sampling S. In order to do so, it will be useful to maintain a factored form of the iterates, X_k = L_k L_k^T, where L_k ∈ R^{n×n} is invertible. With this in place, let us choose S to be a random column submatrix of L_k. In particular, let C_1, C_2, . . . , C_r be nonempty subsets of [n] = {1, 2, . . . , n} forming a partition of [n], and at iteration k choose S = (L_k)_{:C_i}, the column submatrix of L_k indexed by C_i, with probability p_i given by (87). Note that now both S and p_i depend on k. The method described above satisfies the following recurrence.
Theorem 9.1. After one step of the AdaRBFGS method, the error contracts at the rate 1 − ρ_k. Proof. Using the same arguments as those in the proof of Theorem 6.2, we obtain the stated recurrence, so we only need to show that ρ_k ≥ λ_min(AX_k)/Tr(AX_k). Since S is a complete discrete sampling, Proposition 7.1 applied to our setting gives an explicit expression for E[Z]. The claimed bound then follows, where in the second equality we use the fact that the largest eigenvalue of a block diagonal matrix is equal to the maximum of the largest eigenvalues of its blocks.
If X_k converges to A^{-1}, then the one-step rate of AdaRBFGS proved in Theorem 9.1 necessarily reaches, asymptotically, the lower bound in (65). In other words, as long as this method works, the convergence rate gradually improves, and becomes asymptotically optimal and independent of the condition number. We leave a deeper analysis of this and other adaptive variants of the methods developed in this paper to future work.

Implementation
To implement the AdaRBFGS update, we need to maintain the iterates X_k in the factored form (88). Fortunately, a factored form of the update (82) was introduced in [17], which we now describe and adapt to our objective. Assuming that X_k is symmetric positive definite with X_k = L_k L_k^T, we describe how to obtain a corresponding factorization of X_{k+1}, as in (89). If, instead of (89), we consider the more general update S = L_k S̃, where S̃ is chosen in an i.i.d. fashion from some fixed distribution D̃, then the factored update can still be formed, and it can be implemented efficiently; see Algorithm 4.

(Algorithm 4, key steps: 5: sample an independent copy S̃ ∼ D̃; 6: compute S = L_k S̃, so that S is sampled adaptively, as it depends on k; 7: compute R_k = (S^T A S)^{-1/2}.)

In Section 10 we test two variants based on (97). The first is the AdaRBFGS gauss update, in which the entries of S̃ are standard Gaussian. The second is AdaRBFGS cols, where S̃ = I_{:C_i}, as described above, with |C_i| = q for all i, for some q.
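A simplified sketch of the AdaRBFGS gauss variant (ours, not the authors' MATLAB code): for clarity we recompute the Cholesky factor X_k = L_k L_k^T at every iteration and then apply the block BFGS update with the adaptive sketch S = L_k S̃; the efficient method instead maintains the factorization incrementally, as in Algorithm 4. All problem sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)
n, q = 20, 4
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0, 10.0, n)) @ Q.T   # s.p.d. with spread spectrum
Ainv = np.linalg.inv(A)
I = np.eye(n)

X = np.eye(n)
err0 = np.linalg.norm(X - Ainv)
for _ in range(1000):
    L = np.linalg.cholesky(X)                 # X_k = L_k L_k^T (recomputed for clarity)
    S = L @ rng.standard_normal((n, q))       # adaptive sketch S = L_k * Gaussian
    P = S @ np.linalg.solve(S.T @ A @ S, S.T)
    X = P + (I - P @ A) @ X @ (I - A @ P)     # block BFGS step; preserves definiteness
    X = (X + X.T) / 2                         # guard against round-off asymmetry
err = np.linalg.norm(X - Ainv)
```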

Numerical Experiments
Given demand for approximate inverses of positive definite matrices in preconditioning and in variable metric optimization methods, we restrict our tests to inverting positive definite matrices.
The code for this section can be downloaded from the authors' webpages, together with scripts that allow the tests we describe here to be easily reproduced.
We test four iterative methods for inverting matrices. This rules out all-or-nothing direct methods such as Gaussian elimination or LU-based methods. For our tests we use two variants of Algorithm 4: AdaRBFGS gauss, where S̃ ∈ R^{n×q} is a standard Gaussian matrix, and AdaRBFGS cols, where S̃ consists of a collection of q distinct coordinate vectors in R^n, selected uniformly at random. At each iteration the AdaRBFGS methods compute the inverse of a small matrix S^T A S of dimension q × q. To invert this matrix we use MATLAB's inbuilt inv function, which uses LU decomposition or Gaussian elimination, depending on the input. Either way, inv costs O(q³). For simplicity, we selected q = √n in all our tests. We compare our method to two well-established and competitive methods: the Newton-Schulz method [32] and the global self-conditioned Minimal Residual (MR) method [6]. The Newton-Schulz method arises from applying the Newton-Raphson method to the equation defining the inverse. The MR method was designed to calculate approximate inverses, and it does so by minimizing the norm of the residual along the preconditioned residual direction; see [31, chapter 10.5] for a didactic introduction to MR methods. Letting R_k = I − AX_k, the resulting iterates of the MR method are X_{k+1} = X_k + α_k R_k with α_k = Tr(R_k^T A R_k)/‖A R_k‖²_F. We perform two sets of tests. In the first set, we choose a different starting matrix for each method, optimized, in some sense, for that method. We then compare the empirical convergence of each method, including the time taken to calculate X_0. In particular, Newton-Schulz is only guaranteed to converge for an initial matrix X_0 such that ρ(I − X_0 A) < 1. Indeed, the Newton-Schulz method did not converge in most of our experiments when X_0 was not carefully chosen according to this criterion. To remedy this, we choose X_0 = 0.99 · A^T/ρ(A)² for the Newton-Schulz method, so that ρ(I − X_0 A) < 1 is satisfied. To compute ρ(A) we used the inbuilt MATLAB function normest, which is coded in C++. 
For MR we followed the suggestion in [31] and used the projected identity for the initial matrix X 0 = (Tr(A)/Tr(AA T )) · I. For our AdaRBFGS methods we simply used X 0 = I, as this worked well in practice.
In the second set of tests, which we relegate to the Appendix, we compare the empirical convergence of the methods starting from the same matrix, namely the identity matrix X 0 = I.
We run each method until the relative error ‖I − AX_k‖_F/‖I − AX_0‖_F is below 10^{-2}. All experiments were performed in MATLAB R2014b. To appraise the performance of each method, we plot the relative error against the time taken and against the number of floating point operations (flops).
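For reference, the two baselines can be sketched in a few lines; the iterations below are the standard Newton-Schulz and global MR updates with the initializations described above (the test matrix and iteration counts are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(10)
n = 50
G = rng.standard_normal((n, n))
A = np.eye(n) + G @ G.T / n          # symmetric positive definite test matrix
I = np.eye(n)

# Newton-Schulz: X_{k+1} = 2 X_k - X_k A X_k, with X_0 = 0.99 A^T / rho(A)^2
rho_A = np.linalg.norm(A, 2)         # spectral radius = 2-norm for s.p.d. A
X = 0.99 * A.T / rho_A ** 2
res0_ns = np.linalg.norm(I - A @ X)
for _ in range(50):
    X = 2 * X - X @ A @ X
ns_res = np.linalg.norm(I - A @ X) / res0_ns

# Global MR: R_k = I - A X_k, alpha_k = Tr(R_k^T A R_k)/||A R_k||_F^2,
# X_{k+1} = X_k + alpha_k R_k, with X_0 = (Tr(A)/Tr(A A^T)) I
X = (np.trace(A) / np.trace(A @ A.T)) * I
res0_mr = np.linalg.norm(I - A @ X)
for _ in range(400):
    R = I - A @ X
    AR = A @ R
    X = X + (np.trace(R.T @ AR) / np.linalg.norm(AR) ** 2) * R
mr_res = np.linalg.norm(I - A @ X) / res0_mr
```

Both baselines compute n × n matrix-matrix products per iteration, which is what the flop-count comparison in Section 10 exploits.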

Experiment 1: synthetic matrices
First we compare the four methods on synthetic matrices generated using the MATLAB function rand. To appraise the difference in performance of the methods as the dimension of the problem grows, we tested for n = 1000, 2000 and 5000. As the dimension grows, only the two variants of the AdaRBFGS method are able to reach the desired tolerance of 10^{-2} in a reasonable amount of time and number of flops (see Figure 2).

Experiment 2: LIBSVM matrices
Next we invert the Hessian matrix ∇²f(x) of four ridge-regression problems of the form min_x f(x) = (1/2)‖Ax − b‖²₂ + (λ/2)‖x‖²₂, using data from LIBSVM [5]; see Figure 3. We use λ = 1 as the regularization parameter. On the two problems of smaller dimension, aloi and protein, the four methods have similar performance, and find the inverse in a few seconds. On the two larger problems, gisette-scale and real-sim, the two variants of AdaRBFGS significantly outperform the MR and Newton-Schulz methods.

Experiment 3: UF sparse matrices
For our final batch of tests, we invert several sparse matrices from the Florida sparse matrix collection [8]. We have selected six problems from six different applications, so that the set of matrices displays a varied sparsity pattern and structure; see Figure 4. On the moderately sized matrix Bates/Chem97ZtZ, the four methods perform well, with the Newton-Schulz method converging first in time and AdaRBFGS_cols first in flops. On the matrices of larger dimension, the two variants of AdaRBFGS often converge orders of magnitude faster. The significant difference between the performance of the methods on large-scale problems can be explained, in part, by their iteration cost. Each iteration of the Newton-Schulz and MR methods computes n × n matrix-matrix products, while the cost of an AdaRBFGS iteration is dominated by the product of an n × n matrix with an n × q matrix. Since we set q = √n, this is a difference of O(n^3) versus O(n^{2.5}) per iteration, which clearly shows on the larger dimensional instances. On the other hand, both the Newton-Schulz and MR methods are locally quadratically convergent, so when the iterates are close to the solution these methods enjoy a notable speed-up.
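The per-iteration flop counts behind this comparison are easy to tabulate (taking q = ⌊√n⌋, as in our illustration):

```python
import math

# One n x n by n x n product costs ~n^3 flops; one n x n by n x q product
# costs ~n^2 * q. With q = sqrt(n), the per-iteration ratio grows like sqrt(n).
ratios = []
for n in (1000, 2000, 5000):
    q = math.isqrt(n)
    ratios.append((n ** 3) / (n ** 2 * q))   # = n / q, approximately sqrt(n)
```

For the dimensions tested, the dense iteration is roughly 32x, 45x and 71x more expensive, consistent with the gap widening on the larger instances.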

Conclusion
We developed a family of stochastic methods for iteratively inverting matrices, with specialized variants for nonsymmetric, symmetric and positive definite matrices. The methods admit two dual viewpoints: a sketch-and-project viewpoint, which extends the least-change formulation of the quasi-Newton (qN) methods, and a constrain-and-approximate viewpoint, which is related to the approximate inverse preconditioning (API) methods. The equivalence between these viewpoints reveals a deep connection between the qN and API methods, which were previously considered to be unrelated.
Under mild conditions, we establish convergence of the expected norm of the error and of the norm of the expected error. Our convergence theorems are general enough to accommodate both discrete and continuous samplings, though here we explore discrete samplings in more detail. For discrete samplings, we determine a probability distribution for which the convergence rates equal a scaled condition number, and are thus easily interpretable. Furthermore, for discrete samplings, we determine a practical optimized sampling distribution, obtained by minimizing an upper bound on the convergence rate. We develop new randomized block variants of the qN updates, including the BFGS update, complete with convergence rates, and provide new insights into these methods using our dual viewpoint.
For positive definite matrices, we develop an Adaptive Randomized BFGS method (AdaRBFGS), which in large-scale numerical experiments can be orders of magnitude faster (in time and flops) than the self-conditioned minimal residual method and the Newton-Schulz method. In particular, only the AdaRBFGS methods are able to approximately invert the 20,958 × 20,958 ridge-regression matrix based on the real-sim data set in reasonable time and flops. This paper opens up many avenues for future work, such as developing methods that use continuous random sampling and implementing a limited-memory approach akin to the LBFGS [27] method, which could maintain an operator that serves as an approximation to the inverse. The same method can also be used to calculate the pseudoinverse. As recently shown in [16], an analogous method can be applied to linear systems, converging with virtually no assumptions on the system matrix.