Abstract

We consider unconstrained randomized optimization of convex objective functions. We analyze the Random Pursuit algorithm, which iteratively computes an approximate solution to the optimization problem by repeated optimization over a randomly chosen one-dimensional subspace. This randomized method uses only zeroth-order information about the objective function and needs no problem-specific parametrization. We prove convergence and give convergence rates for smooth objectives, assuming that the one-dimensional optimization can be solved exactly or approximately by an oracle. A convenient property of Random Pursuit is its invariance under strictly monotone transformations of the objective function; it thus enjoys identical convergence behavior on a wider function class. To support the theoretical results we present extensive numerical performance comparisons of Random Pursuit, two gradient-free algorithms recently proposed by Nesterov, and a classical adaptive step size random search scheme. We also present an accelerated heuristic version of the Random Pursuit algorithm that significantly improves on standard Random Pursuit on all numerical benchmark problems. A general comparison of the experimental results reveals that (i) standard Random Pursuit is effective on strongly convex functions with moderate condition number, and (ii) the accelerated scheme is comparable to Nesterov's fast gradient method and outperforms adaptive step size strategies.
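
To make the scheme concrete, here is a minimal Python sketch of the iteration described above. It is an illustration under stated assumptions, not the paper's exact specification: the function names are ours, and SciPy's scalar minimizer stands in for the one-dimensional line-search oracle.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def random_pursuit(f, x0, num_iters=1000, rng=None):
        # In each iteration, draw a uniformly random direction u on the
        # unit sphere and move to an (approximate) minimizer of f along
        # the line x + t*u. Only function values of f are ever used.
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x0, dtype=float)
        for _ in range(num_iters):
            u = rng.standard_normal(x.shape)
            u /= np.linalg.norm(u)  # normalize: uniform on the sphere
            # one-dimensional oracle; Brent's method as a stand-in
            t = minimize_scalar(lambda s: f(x + s * u)).x
            x = x + t * u
        return x

    # Example: a strongly convex quadratic in R^10.
    f = lambda x: float(x @ x)
    x_min = random_pursuit(f, x0=np.ones(10), num_iters=500)

Note that the update depends on the objective only through the location of the one-dimensional minimizer, which does not change when f is replaced by g ∘ f for a strictly increasing g; this is the invariance under strictly monotone transformations mentioned above.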

Keywords

  1. continuous optimization
  2. convex optimization
  3. randomized algorithm
  4. line search

MSC codes

  1. 90C25
  2. 90C56
  3. 68W20
  4. 62L10

References

1. R. L. Anderson, Recent advances in finding best operating conditions, J. Amer. Statist. Assoc., 48 (1953), pp. 789--798.
2. A. Auger, N. Hansen, J. M. Perez Zerpa, R. Ros, and M. Schoenauer, Experimental comparisons of derivative free optimization algorithms, in Proceedings of the 8th International Symposium on Experimental Algorithms, Springer-Verlag, Berlin, 2009, pp. 3--15.
3. B. Betrò and L. De Biase, A Newton-like method for stochastic optimization, in Towards Global Optimization, Vol. 2, North-Holland, Amsterdam, 1978, pp. 269--289.
4. H. G. Beyer, The Theory of Evolution Strategies, Nat. Comput. Ser., Springer-Verlag, New York, 2001.
5. C. Brif, R. Chakrabarti, and H. Rabitz, Control of quantum phenomena: Past, present and future, New J. Phys., 12 (2010), p. 075008.
6. S. H. Brooks, A discussion of random methods for seeking maxima, Oper. Res., 6 (1958), pp. 244--251.
7. H. B. Cheng, L. T. Cheng, and S. T. Yau, Minimization with the affine normal direction, Commun. Math. Sci., 3 (2005), pp. 561--574.
8. A. R. Conn, K. Scheinberg, and L. N. Vicente, Introduction to Derivative-Free Optimization, MPS/SIAM Ser. Optim., SIAM, Philadelphia, 2009.
9. E. den Boef and D. den Hertog, Efficient line search methods for convex functions, SIAM J. Optim., 18 (2007), pp. 338--363.
10. E. Hazan, Sparse approximate solutions to semidefinite programs, in Proceedings of the 8th Latin American Conference on Theoretical Informatics, Springer-Verlag, Berlin, 2008, pp. 306--316.
11. R. Heijmans, When does the expectation of a ratio equal the ratio of expectations?, Statist. Papers, 40 (1999), pp. 107--115.
12. T. C. Hu, V. Klee, and D. Larman, Optimization of globally convex functions, SIAM J. Control Optim., 27 (1989), pp. 1026--1047.
13. C. Igel, T. Suttorp, and N. Hansen, A computational efficient covariance matrix update and a $(1+1)$-CMA for evolution strategies, in GECCO '06: Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, ACM, New York, 2006, pp. 453--460.
14. J. Jägersküpper, Rigorous runtime analysis of the $(1+1)$ ES: $1/5$-rule and ellipsoidal fitness landscapes, in Foundations of Genetic Algorithms, A. Wright, M. Vose, K. De Jong, and L. Schmitt, eds., Lecture Notes in Comput. Sci. 3469, Springer, Berlin, 2005, pp. 356--361.
15. J. Jägersküpper, Lower bounds for hit-and-run direct search, in Stochastic Algorithms: Foundations and Applications, J. Hromkovič, R. Královič, M. Nunkesser, and P. Widmayer, eds., Lecture Notes in Comput. Sci. 4665, Springer, Berlin, 2007, pp. 118--129.
16. V. G. Karmanov, Convergence estimates for iterative minimization methods, USSR Comput. Math. Math. Phys., 14 (1974), pp. 1--13.
17. V. G. Karmanov, On convergence of a random search method in convex minimization problems, Theory Probab. Appl., 19 (1974), pp. 788--794 (in Russian).
18. D. C. Karnopp, Random search techniques for optimization problems, Automatica, 1 (1963), pp. 111--121.
19. G. Kjellström and L. Taxén, Stochastic optimization in system design, IEEE Trans. Circuits Systems, 28 (1981), pp. 702--715.
20. A. Kleiner, A. Rahimi, and M. I. Jordan, Random conic pursuit for semidefinite programming, in Advances in Neural Information Processing Systems 23, J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, eds., 2010, pp. 1135--1143.
21. T. G. Kolda, R. M. Lewis, and V. Torczon, Optimization by direct search: New perspectives on some classical and modern methods, SIAM Rev., 45 (2003), pp. 385--482.
22. V. N. Krutikov, On the rate of convergence of the minimization method along vectors in a given directional system, USSR Comput. Math. Math. Phys., 23 (1983), pp. 154--155 (in Russian).
23. D. Leventhal and A. S. Lewis, Randomized Hessian estimation and directional search, Optimization, 60 (2011), pp. 329--345.
24. R. L. Maybach, Solution of optimal control problems on a high-speed hybrid computer, Simulation, 7 (1966), pp. 238--245.
25. C. L. Müller and I. F. Sbalzarini, Gaussian adaptation revisited: An entropic view on covariance matrix adaptation, in EvoApplications, C. Di Chio et al., eds., Lecture Notes in Comput. Sci. 6024, Springer, Berlin, 2010, pp. 432--441.
26. V. A. Mutseniyeks and L. A. Rastrigin, Extremal control of continuous multi-parameter systems by the method of random search, Eng. Cybernetics, 1 (1964), pp. 82--90.
27. A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro, Robust stochastic approximation approach to stochastic programming, SIAM J. Optim., 19 (2009), pp. 1574--1609.
28. Y. Nesterov, Introductory Lectures on Convex Optimization, Kluwer, Boston, 2004.
29. Y. Nesterov, Random Gradient-Free Minimization of Convex Functions, Technical report, ECORE, 2011.
30. H. X. Phu, Minimizing convex functions with bounded perturbation, SIAM J. Optim., 20 (2010), pp. 2709--2729.
31. B. Polyak, Introduction to Optimization, Optimization Software, New York, 1987.
32. MATLAB R2012a, http://www.mathworks.ch/help/toolbox/optim/ug/fminunc.html (2012).
33. G. Rappl, On linear convergence of a class of random search algorithms, ZAMM Z. Angew. Math. Mech., 69 (1989), pp. 37--45.
34. L. A. Rastrigin, The convergence of the random search method in the extremal control of a many parameter system, Autom. Remote Control, 24 (1963), pp. 1337--1342.
35. I. Rechenberg, Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution, Frommann-Holzboog, Stuttgart--Bad Cannstatt, 1973.
36. M. Schumer and K. Steiglitz, Adaptive step size random search, IEEE Trans. Automat. Control, 13 (1968), pp. 270--276.
37. S. U. Stich and C. L. Müller, On spectral invariance of randomized Hessian and covariance matrix adaptation schemes, in Parallel Problem Solving from Nature -- PPSN XII, Lecture Notes in Comput. Sci. 7491, Springer, Berlin, 2012, pp. 448--457.
38. S. U. Stich, C. L. Müller, and B. Gärtner, Supporting online material for Optimization of Convex Functions with Random Pursuit, arXiv:1111.0194v2, 2012.
39. J. Sun, J. M. Garibaldi, and C. Hodgman, Parameter estimation using metaheuristics in systems biology: A comprehensive review, IEEE/ACM Trans. Comput. Biol. Bioinformatics, 9 (2012), pp. 185--202.
40. S. Vempala, Recent progress and open problems in algorithmic convex geometry, in IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010), K. Lodaya and M. Mahajan, eds., Leibniz International Proceedings in Informatics (LIPIcs) 8, Dagstuhl, Germany, 2010, pp. 42--64.
41. A. A. Zhigljavsky and A. G. Zilinskas, Stochastic Global Optimization, Springer-Verlag, Berlin, 2008.
42. R. Zieliński and P. Neumann, Stochastische Verfahren zur Suche nach dem Minimum einer Funktion, Akademie-Verlag, Berlin, 1983.

Published In

SIAM Journal on Optimization
Pages: 1284 - 1309
ISSN (online): 1095-7189

History

Submitted: 1 November 2011
Accepted: 26 March 2013
Published online: 27 June 2013
