Abstract

The problem of finding sparse solutions to underdetermined systems of linear equations arises in several applications (e.g., signal and image processing, compressive sensing, statistical inference). A standard tool for sparse recovery is the $\ell_1$-regularized least squares approach, which has recently attracted considerable attention. In this paper, we describe an active set estimate (i.e., an estimate of the indices of the zero variables in the optimal solution) for the considered problem that aims to quickly identify as many active variables as possible at a given point, while guaranteeing that suitable approximate optimality conditions are satisfied. A relevant feature of the estimate is that setting to zero all the variables estimated to be active yields a significant reduction of the objective function. This makes it easy to embed the estimate into a globally convergent algorithmic framework. In particular, we incorporate our estimate into a block coordinate descent algorithm for $\ell_1$-regularized least squares, analyze the convergence properties of the resulting active set method, and prove that its basic version converges at a linear rate. Finally, we report numerical results showing the effectiveness of the approach.
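
The abstract refers to the standard $\ell_1$-regularized least squares formulation $\min_x \tfrac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1$. The sketch below does not reproduce the authors' active set estimate; it only illustrates, under that standard formulation, the generic idea the abstract describes: flagging as "active" the variables that a proximal gradient (soft-thresholding) step would set to zero. The function names (`soft_threshold`, `active_set_estimate`), the parameter `lam`, and the step-size choice are assumptions made here purely for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    """Componentwise soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def active_set_estimate(A, b, x, lam, step):
    """Illustrative estimate (not the paper's): a variable is flagged as
    'active' (estimated to be zero at a solution) when one proximal
    gradient step from x sends it to zero."""
    grad = A.T @ (A @ x - b)                  # gradient of 0.5*||Ax - b||^2
    x_new = soft_threshold(x - step * grad, step * lam)
    active = np.flatnonzero(x_new == 0.0)     # indices estimated to be zero
    return active, x_new

# Tiny usage example on a random underdetermined system.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz constant of the gradient
x = np.zeros(50)
for _ in range(300):                          # plain ISTA iterations, for illustration only
    active, x = active_set_estimate(A, b, x, lam, step)
print("variables estimated to be zero:", active.size, "out of", x.size)
```

In the method described in the paper, an estimate of this kind is embedded into a block coordinate descent scheme: the variables estimated to be active are set to zero, and the remaining block is then updated; the sketch above is only meant to make the underlying notion of an estimated active set concrete.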

Keywords

  1. $\ell_1$-regularized least squares
  2. active set
  3. sparse optimization

MSC codes

  1. 65K05
  2. 90C25
  3. 90C06

Published In

SIAM Journal on Optimization
Pages: 781 - 809
ISSN (online): 1095-7189

History

Submitted: 19 December 2014
Accepted: 21 December 2015
Published online: 23 March 2016
