Abstract

The regularization of a convex program is exact if all solutions of the regularized problem are also solutions of the original problem for all values of the regularization parameter below some positive threshold. For a general convex program, we show that the regularization is exact if and only if a certain selection problem has a Lagrange multiplier. Moreover, the regularization parameter threshold is inversely related to the Lagrange multiplier. We use this result to generalize an exact regularization result of Ferris and Mangasarian [Appl. Math. Optim., 23 (1991), pp. 263–273] involving a linearized selection problem. We also use it to derive necessary and sufficient conditions for exact penalization, similar to those obtained by Bertsekas [Math. Programming, 9 (1975), pp. 87–99] and by Bertsekas, Nedić, and Ozdaglar [Convex Analysis and Optimization, Athena Scientific, Belmont, MA, 2003]. When the regularization is not exact, we derive error bounds on the distance from the regularized solution to the original solution set. We also show that existence of a “weak sharp minimum” is in some sense close to being necessary for exact regularization. We illustrate the main result with numerical experiments on the $\ell_1$ regularization of benchmark (degenerate) linear programs and semidefinite/second-order cone programs. The experiments demonstrate the usefulness of $\ell_1$ regularization in finding sparse solutions.
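The exactness phenomenon described above can be illustrated on a tiny degenerate linear program. The following sketch (not from the paper; the problem instance and the use of SciPy's `linprog` are our own choices for illustration) solves an LP whose solution set is a line segment, then solves the $\ell_1$-regularized problem with a small parameter $\delta$ and checks that the regularized solution still attains the original optimal value, i.e., that the regularization is exact:

```python
# Illustrative sketch of exact l1 regularization of a degenerate LP.
# Problem instance is hypothetical, chosen so the original LP has a
# non-unique solution set. Requires SciPy >= 1.6 (for method="highs").
import numpy as np
from scipy.optimize import linprog

# Original LP: minimize x1  s.t.  x1 + x2 >= 0.5,  0 <= x1,  0 <= x2 <= 1.
# Its solution set is the segment {(0, x2) : 0.5 <= x2 <= 1}, optimal value 0.
c = np.array([1.0, 0.0])
A_ub = np.array([[-1.0, -1.0]])   # encode x1 + x2 >= 0.5 as -(x1 + x2) <= -0.5
b_ub = np.array([-0.5])
bounds = [(0, None), (0, 1)]

orig = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
p_star = orig.fun                 # optimal value of the original LP

# Regularized LP: minimize x1 + delta * ||x||_1.  Since x >= 0 here,
# ||x||_1 = x1 + x2, so the regularized problem is again a linear program.
delta = 1e-3
reg = linprog(c + delta * np.ones(2), A_ub=A_ub, b_ub=b_ub,
              bounds=bounds, method="highs")

# Exactness check: the regularized solution must still be optimal for the
# original LP.  Here it is the least-l1-norm point of the solution set.
print("regularized solution:", reg.x)
print("original objective at regularized solution:", c @ reg.x, "vs p* =", p_star)
```

For this instance the regularized problem selects the minimum-$\ell_1$-norm point of the original solution set, $(0,\,0.5)$, and does so for every sufficiently small $\delta > 0$, consistent with the threshold behavior the abstract describes.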

MSC codes

  1. 90C25
  2. 90C05
  3. 90C51
  4. 65K10
  5. 49N15

Keywords

  1. convex program
  2. conic program
  3. linear program
  4. regularization
  5. exact penalization
  6. Lagrange multiplier
  7. degeneracy
  8. sparse solutions
  9. interior-point algorithms


References

1.
A. Altman and J. Gondzio, Regularized symmetric indefinite systems in interior point methods for linear and quadratic optimization, Optim. Methods Softw., 11 (1999), pp. 275–302.
2.
F. R. Bach, R. Thibaux, and M. I. Jordan, Computing regularization paths for learning multiple kernels, in Advances in Neural Information Processing Systems (NIPS) 17, L. Saul, Y. Weiss, and L. Bottou, eds., Morgan Kaufmann, San Mateo, CA, 2005.
3.
A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications, MPS/SIAM Ser. Optim. 2, SIAM, Philadelphia, 2001.
4.
D. P. Bertsekas, Necessary and sufficient conditions for a penalty method to be exact, Math. Programming, 9 (1975), pp. 87–99.
5.
D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods, Academic Press, New York, 1982.
6.
D. P. Bertsekas, A note on error bounds for convex and nonconvex programs, Comput. Optim. Appl., 12 (1999), pp. 41–51.
7.
D. P. Bertsekas, A. Nedić, and A. E. Ozdaglar, Convex Analysis and Optimization, Athena Scientific, Belmont, MA, 2003.
8.
S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, UK, 2004.
9.
J. V. Burke, An exact penalization viewpoint of constrained optimization, SIAM J. Control Optim., 29 (1991), pp. 968–998.
10.
J. V. Burke and S. Deng, Weak sharp minima revisited. II. Application to linear regularity and error bounds, Math. Program., 104 (2005), pp. 235–261.
11.
J. V. Burke and M. C. Ferris, Weak sharp minima in mathematical programming, SIAM J. Control Optim., 31 (1993), pp. 1340–1359.
12.
E. J. Candès, J. Romberg, and T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Comm. Pure Appl. Math., 59 (2006), pp. 1207–1223.
13.
E. J. Candès, J. Romberg, and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory, 52 (2006), pp. 489–509.
14.
S. S. Chen, D. L. Donoho, and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM Rev., 43 (2001), pp. 129–159.
15.
A. R. Conn, N. I. M. Gould, and Ph. L. Toint, Trust-Region Methods, MPS-SIAM Ser. Optim. 1, SIAM, Philadelphia, 2000.
16.
D. L. Donoho and M. Elad, Optimally sparse representation in general (nonorthogonal) dictionaries via $\ell_1$ minimization, Proc. Natl. Acad. Sci. USA, 100 (2003), pp. 2197–2202.
17.
D. L. Donoho, M. Elad, and V. Temlyakov, Stable recovery of sparse overcomplete representations in the presence of noise, IEEE Trans. Inform. Theory, 52 (2006), pp. 6–18.
18.
D. L. Donoho and J. Tanner, Sparse nonnegative solution of underdetermined linear equations by linear programming, Proc. Natl. Acad. Sci. USA, 102 (2005), pp. 9446–9451.
19.
B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, Least angle regression, Ann. Statist., 32 (2004), pp. 407–499.
20.
M. C. Ferris and O. L. Mangasarian, Finite perturbation of convex programs, Appl. Math. Optim., 23 (1991), pp. 263–273.
21.
R. Fletcher, An $\ell_1$ penalty method for nonlinear constraints, in Numerical Optimization, 1984, P. T. Boggs, R. H. Byrd, and R. B. Schnabel, eds., SIAM, Philadelphia, 1985, pp. 26–40.
22.
R. Fletcher, Practical Methods of Optimization, 2nd ed., John Wiley and Sons, Chichester, UK, 1987.
23.
C. C. Gonzaga, Generation of Degenerate Linear Programming Problems, Tech. report, Department of Mathematics, Federal University of Santa Catarina, Santa Catarina, Brazil, 2003.
24.
S.-P. Han and O. L. Mangasarian, Exact penalty functions in nonlinear programming, Math. Programming, 17 (1979), pp. 251–269.
25.
C. Kanzow, H. Qi, and L. Qi, On the minimum norm solution of linear programs, J. Optim. Theory Appl., 116 (2003), pp. 333–345.
26.
S. Lucidi, A finite algorithm for the least two-norm solution of a linear program, Optimization, 18 (1987), pp. 809–823.
27.
S. Lucidi, A new result in the theory and computation of the least-norm solution of a linear program, J. Optim. Theory Appl., 55 (1987), pp. 103–117.
28.
Z.-Q. Luo and J.-S. Pang, Error bounds for analytic systems and their applications, Math. Programming, 67 (1994), pp. 1–28.
29.
Z.-Q. Luo and J.-S. Pang, eds., Error bounds in mathematical programming, Math. Program., 88 (2000), pp. 221–410.
30.
O. L. Mangasarian, Normal solutions of linear programs, Math. Programming Stud., 22 (1984), pp. 206–216.
31.
O. L. Mangasarian, Sufficiency of exact penalty minimization, SIAM J. Control Optim., 23 (1985), pp. 30–37.
32.
O. L. Mangasarian, A simple characterization of solution sets of convex programs, Oper. Res. Lett., 7 (1988), pp. 21–26.
33.
O. L. Mangasarian, A Newton method for linear programming, J. Optim. Theory Appl., 121 (2004), pp. 1–18.
34.
O. L. Mangasarian and R. R. Meyer, Nonlinear perturbation of linear programs, SIAM J. Control Optim., 17 (1979), pp. 745–752.
35.
Y. E. Nesterov and A. Nemirovski, Interior Point Polynomial Algorithms in Convex Programming, SIAM Stud. Appl. Math. 13, SIAM, Philadelphia, 1994.
36.
NETLIB Linear Programming Library, available online at http://www.netlib.org/lp/infeas/, 2006.
37.
G. Pataki and S. Schmieta, The DIMACS Library of Semidefinite-Quadratic-Linear Programs, Tech. report (preliminary draft), Computational Optimization Research Center, Columbia University, New York, 2002.
38.
J. Renegar, A Mathematical View of Interior-Point Methods in Convex Optimization, MPS/SIAM Ser. Optim. 3, SIAM, Philadelphia, 2001.
39.
S. M. Robinson, Local structure of feasible sets in nonlinear programming. II. Nondegeneracy, Math. Programming Stud., 22 (1984), pp. 217–230.
40.
R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.
41.
S. Sardy and P. Tseng, On the statistical analysis of smoothing by maximizing dirty Markov random field posterior distributions, J. Amer. Statist. Assoc., 99 (2004), pp. 191–204.
42.
M. A. Saunders, Cholesky-based methods for sparse least squares: The benefits of regularization, in Linear and Nonlinear Conjugate Gradient-Related Methods, L. Adams and J. L. Nazareth, eds., SIAM, Philadelphia, 1996, pp. 92–100.
43.
J. F. Sturm, Using SeDuMi 1.02, a MATLAB Toolbox for Optimization over Symmetric Cones (updated for Version 1.05), Tech. report, Department of Econometrics, Tilburg University, Tilburg, The Netherlands, 2001.
44.
R. Tibshirani, Regression shrinkage and selection via the Lasso, J. Roy. Statist. Soc. Ser. B, 58 (1996), pp. 267–288.
45.
A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-Posed Problems, V. H. Winston and Sons, Washington, DC, 1977 (translated from Russian).
46.
Z. Wu and J. J. Ye, On error bounds for lower semicontinuous functions, Math. Program., 92 (2002), pp. 301–314.
47.
Y. Ye, Interior-Point Algorithms: Theory and Analysis, John Wiley and Sons, New York, 1997.
48.
Y.-B. Zhao and D. Li, Locating the least 2-norm solution of linear programs via a path-following method, SIAM J. Optim., 12 (2002), pp. 893–912.

Published In

SIAM Journal on Optimization
Pages: 1326 - 1350
ISSN (online): 1095-7189

History

Submitted: 18 November 2006
Accepted: 17 April 2007
Published online: 14 November 2007

Authors

Affiliations

Michael P. Friedlander
