Abstract

We consider the sequence acceleration problem for the alternating direction method of multipliers (ADMM) applied to a class of equality-constrained problems with strongly convex quadratic objectives, which frequently arise as the Newton subproblems of interior-point methods. Within this context, the ADMM update equations are linear, the iterates are confined to a Krylov subspace, and the generalized minimal residual (GMRES) algorithm is optimal in its ability to accelerate convergence. The basic ADMM method solves a $\kappa$-conditioned problem in $O(\sqrt{\kappa})$ iterations. We give theoretical justification and numerical evidence that the GMRES-accelerated variant consistently solves the same problem in $O(\kappa^{1/4})$ iterations, an order-of-magnitude reduction, despite a worst-case bound of $O(\sqrt{\kappa})$ iterations. The method is shown to be competitive against standard preconditioned Krylov subspace methods for saddle-point problems. It is embedded within SeDuMi, a popular open-source solver for conic optimization written in MATLAB, and used to solve many large-scale semidefinite programs with error that decreases like $O(1/k^{2})$, instead of $O(1/k)$, where $k$ is the iteration index.
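The mechanism described in the abstract, a linear fixed-point iteration whose fixed point solves $(I - M)x = b$, accelerated by GMRES over the same Krylov subspace the plain iteration explores, can be illustrated with a minimal synthetic sketch. The matrix `M` and vector `b` below are hypothetical stand-ins, not the paper's ADMM operator:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Synthetic contractive iteration x_{k+1} = M x + b, a stand-in for the
# linear ADMM update on a strongly convex quadratic (M, b are made up).
rng = np.random.default_rng(0)
n = 200
S = rng.standard_normal((n, n))
P = S @ S.T
M = 0.9 * P / np.linalg.norm(P, 2)          # symmetric, spectrum in (0, 0.9]
b = rng.standard_normal(n)
x_star = np.linalg.solve(np.eye(n) - M, b)  # the fixed point

# Plain fixed-point iteration: linear convergence at rate ||M|| = 0.9.
x = np.zeros(n)
plain_iters = 0
while np.linalg.norm(x - x_star) > 1e-6 * np.linalg.norm(x_star):
    x = M @ x + b
    plain_iters += 1

# GMRES on (I - M) x = b: optimal over the Krylov subspace that the
# plain iteration merely wanders through, so it needs far fewer steps.
matvecs = {"count": 0}
def apply_ImM(v):
    matvecs["count"] += 1
    return v - M @ v

A = LinearOperator((n, n), matvec=apply_ImM)
x_gmres, info = gmres(A, b)

print(f"plain iterations: {plain_iters}, "
      f"GMRES matrix applications: {matvecs['count']}")
```

Here the condition number of $I - M$ is modest, so both methods are fast, but the gap in matrix applications is already visible; the paper's point is that as the conditioning $\kappa$ worsens, the plain iteration scales like $\sqrt{\kappa}$ while the GMRES-accelerated variant empirically scales like $\kappa^{1/4}$.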

Keywords

  1. ADMM
  2. alternating direction
  3. method of multipliers
  4. augmented Lagrangian
  5. sequence acceleration
  6. GMRES
  7. Krylov subspace

MSC codes

  1. 49M20
  2. 90C06
  3. 65B99


Published In

SIAM Journal on Optimization
Pages: 3025 - 3056
ISSN (online): 1095-7189

History

Submitted: 4 February 2016
Accepted: 28 August 2018
Published online: 25 October 2018

Funding Information

Defense Advanced Research Projects Agency https://doi.org/10.13039/100000185
Air Force Office of Scientific Research https://doi.org/10.13039/100000181
Office of Naval Research https://doi.org/10.13039/100000006
Skolkovo Institute of Science and Technology https://doi.org/10.13039/501100007455
