Abstract

The use of convex optimization for the recovery of sparse signals from incomplete or compressed data is now common practice. Motivated by the success of basis pursuit in recovering sparse vectors, new formulations have been proposed that take advantage of different types of sparsity. In this paper we propose an efficient algorithm for solving a general class of sparsifying formulations. For several common types of sparsity we describe applications, give details on how to apply the algorithm, and report experimental results.
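As a concrete illustration of the basis pursuit formulation mentioned above, the problem min ||x||_1 subject to Ax = b can be posed as a linear program by splitting x into nonnegative parts. The sketch below (illustrative only; the problem sizes, random Gaussian measurement matrix, and the SciPy LP solver are choices made here, not the paper's algorithm) recovers a sparse vector from compressed measurements:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 25, 50, 3            # measurements, ambient dimension, sparsity
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# Basis pursuit as an LP: write x = u - v with u, v >= 0, so that
# ||x||_1 = sum(u + v) and Ax = A(u - v).
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x_rec = res.x[:n] - res.x[n:]

# In this regime (k small relative to m), l1 minimization typically
# recovers x_true up to solver tolerance.
print(np.linalg.norm(x_rec - x_true))
```

In the regime above the number of Gaussian measurements comfortably exceeds the usual l1-recovery threshold, so the recovered vector should match the true sparse vector to within the LP solver's accuracy.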

MSC codes

  1. 49M29
  2. 65K05
  3. 90C25
  4. 90C06

Keywords

  1. basis pursuit
  2. compressed sensing
  3. convex program
  4. duality
  5. group sparsity
  6. matrix completion
  7. Newton's method
  8. root-finding
  9. sparse solutions



Information & Authors

Published In

SIAM Journal on Optimization
Pages: 1201 - 1229
ISSN (online): 1095-7189

History

Submitted: 3 February 2010
Accepted: 7 February 2011
Published online: 4 October 2011


Authors

Affiliations

Michael P. Friedlander
