SURVEY OF MODERN SMOOTH OPTIMIZATION ALGORITHMS WITH COMPARISON ORACLE
- Authors: Lobanov A.V. (1, 2), Gasnikov A.V. (3, 2)
- Affiliations:
  1. National Research University Higher School of Economics
  2. Moscow Institute of Physics and Technology
  3. Innopolis University
- Issue: Vol 526, No 1 (2025)
- Pages: 16-23
- Section: MATHEMATICS
- URL: https://medbiosci.ru/2686-9543/article/view/364244
- DOI: https://doi.org/10.7868/S3034504925060036
- ID: 364244
About the authors
A. V. Lobanov
National Research University Higher School of Economics; Moscow Institute of Physics and Technology
Email: lobbsasha@mail.ru
Moscow, Russia
A. V. Gasnikov
Innopolis University; Moscow Institute of Physics and Technology
Email: gasnikov@yandex.ru
Corresponding Member of the RAS
Innopolis, Russia; Moscow, Russia
References
- Conn A.R., Scheinberg K., and Vicente L.N. Introduction to derivative-free optimization // SIAM. 2009.
- Kimiaei M. and Neumaier A. Efficient unconstrained black box optimization // Mathematical Programming Computation. V. 14(2). 2022. P. 365–414.
- Rosenbrock H. An automatic method for finding the greatest or least value of a function // The computer journal. V. 3(3). 1960. P. 175–184.
- Chen P.Y., Zhang H., Sharma Y., Yi J., and Hsieh C.J. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models // In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. 2017. P. 15–26.
- Gao J., Lanchantin J., Soffa M.L., and Qi Y. Black-box generation of adversarial text sequences to evade deep learning classifiers // In 2018 IEEE Security and Privacy Workshops (SPW). IEEE. 2018. P. 50–56.
- Dai Z., Low B.K.H., and Jaillet P. Federated Bayesian optimization via Thompson sampling // Advances in Neural Information Processing Systems. V. 33. 2020. P. 9687–9699.
- Alashqar B., Gasnikov A., Dvinskikh D., and Lobanov A. Gradient-free federated learning methods with l1 and l2-randomization for non-smooth convex stochastic optimization problems // Computational Mathematics and Mathematical Physics. V. 63(9). 2023. P. 1600–1653.
- Patel K.K., Saha A., Wang L., and Srebro N. Distributed online and bandit convex optimization // In OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop). 2022.
- Choromanski K., Rowland M., Sindhwani V., Turner R., and Weller A. Structured evolution with compact architectures for scalable policy optimization // In International Conference on Machine Learning. PMLR. 2018. P. 970–978.
- Mania H., Guy A., and Recht B. Simple random search of static linear policies is competitive for reinforcement learning // Advances in Neural Information Processing Systems. V. 31. 2018.
- Lobanov A. and Gasnikov A. Accelerated zero-order SGD method for solving the black-box optimization problem under “overparametrization” condition // In International Conference on Optimization and Applications. Springer. 2023. P. 72–83.
- Agarwal A., Dekel O., and Xiao L. Optimal algorithms for online convex optimization with multi-point bandit feedback // In COLT. Citeseer. 2010. P. 28–40.
- Bach F. and Perchet V. Highly-smooth zero-th order online optimization // In Conference on Learning Theory. PMLR. 2016. P. 257–283.
- Akhavan A., Chzhen E., Pontil M., and Tsybakov A. A gradient estimator via l1-randomization for online zero-order optimization with two point feedback // Advances in Neural Information Processing Systems. V. 35. 2022. P. 7685–7696.
- Shamir O. An optimal algorithm for bandit and zero-order convex optimization with two-point feedback // The Journal of Machine Learning Research. V. 18(1). 2017. P. 1703–1713.
- Lattimore T. and Gyorgy A. Improved regret for zeroth-order stochastic convex bandits // In Conference on Learning Theory. PMLR. 2021. P. 2938–2964.
- Bergstra J. and Bengio Y. Random search for hyper-parameter optimization // Journal of Machine Learning Research. V. 13(2). 2012.
- Hernandez-Lobato J.M., Hoffman M.W., and Ghahramani Z. Predictive entropy search for efficient global optimization of black-box functions // Advances in Neural Information Processing Systems. V. 27. 2014.
- Nguyen A. and Balasubramanian K. Stochastic zeroth-order functional constrained optimization: Oracle complexity and applications // INFORMS Journal on Optimization. 2022.
- Bansal S., Calandra R., Xiao T., Levine S., and Tomlin C.J. Goal-driven dynamics learning via Bayesian optimization // In 2017 IEEE 56th Annual Conference on Decision and Control (CDC). IEEE. 2017. P. 5168–5173.
- Xu W., Jones C.N., Svetozarevic B., Laughman C.R., and Chakrabarty A. VABO: Violation-aware Bayesian optimization for closed-loop control performance optimization with unmodeled constraints // In 2022 American Control Conference (ACC). IEEE. 2022. P. 5288–5293.
- Gasnikov A., Dvinskikh D., Dvurechensky P., Gorbunov E., Beznosikov A., and Lobanov A. Randomized gradient-free methods in convex optimization // arXiv preprint arXiv:2211.13566. 2022.
- Nesterov Y. Efficiency of coordinate descent methods on huge-scale optimization problems // SIAM Journal on Optimization. V. 22(2). 2012. P. 341–362.
- Lin Q., Lu Z., and Xiao L. An accelerated proximal coordinate gradient method // Advances in Neural Information Processing Systems. V. 27. 2014.
- Zhang Y. and Xiao L. Stochastic primal-dual coordinate method for regularized empirical risk minimization // Journal of Machine Learning Research. V. 18(84). 2017. P. 1–42.
- Mangold P., Bellet A., Salmon J., and Tommasi M. High-dimensional private empirical risk minimization by greedy coordinate descent // In International Conference on Artificial Intelligence and Statistics. PMLR. 2023. P. 4894–4916.
- Duchi J. Lecture notes for Statistics 311/Electrical Engineering 377 // https://stanford.edu/class/stats311/Lectures/fullnotes.pdf. 2016.
- Shi B., Du S.S., Su W., and Jordan M.I. Acceleration via symplectic discretization of high-resolution differential equations // Advances in Neural Information Processing Systems. V. 32. 2019.
- Asi H., Feldman V., Koren T., and Talwar K. Private stochastic convex optimization: Optimal rates in l1 geometry // In International Conference on Machine Learning. PMLR. 2021. P. 393–403.
- Lee Y.T. and Sidford A. Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems // In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science. IEEE. 2013. P. 147–156.
- Allen-Zhu Z., Qu Z., Richtarik P., and Yuan Y. Even faster accelerated coordinate descent using non-uniform sampling // In International Conference on Machine Learning. PMLR. 2016. P. 1110–1119.
- Bergou E.H., Gorbunov E., and Richtarik P. Stochastic three points method for unconstrained smooth minimization // SIAM Journal on Optimization. V. 30(4). 2020. P. 2726–2749.
- Gorbunov E., Bibi A., Sener O., Bergou E.H., and Richtarik P. A stochastic derivative free optimization method with momentum // In International Conference on Learning Representations. 2019.
- Saadi T.E.B.E.K. et al. On the almost sure convergence of the stochastic three points algorithm // arXiv preprint arXiv:2501.13886. 2025.
- Lobanov A., Gasnikov A., and Krasnov A. Acceleration exists! Optimization problems when oracle can only compare objective function values // In The Thirty-Eighth Annual Conference on Neural Information Processing Systems. 2024.
- Saha A., Koren T., and Mansour Y. Dueling convex optimization // In International Conference on Machine Learning. PMLR. 2021. P. 9245–9254.
- Nesterov Y. and Stich S.U. Efficiency of the accelerated coordinate descent method on structured optimization problems // SIAM Journal on Optimization. V. 27(1). 2017. P. 110–123.
- Polyak B.T. and Tsypkin Y.Z. Optimal pseudogradient adaptation algorithms // Avtomatika i Telemekhanika. 1980. No. 8. P. 74–84.
- Smirnov V., Kazistova K., Sudakov I., Leplat V., Gasnikov A., and Lobanov A. Ruppert–Polyak averaging for stochastic order oracle // arXiv preprint arXiv:2411.15866. 2024.
- Polyak B.T. and Juditsky A.B. Acceleration of stochastic approximation by averaging // SIAM Journal on Control and Optimization. V. 30(4). 1992. P. 838–855.
- Gadat S. and Panloup F. Optimal non-asymptotic analysis of the Ruppert–Polyak averaging stochastic algorithm // Stochastic Processes and their Applications. V. 156. 2023. P. 312–348.