High Probability and Risk-Averse Guarantees for a Stochastic Accelerated Primal-Dual Method

Yassine Laguel, Necdet Serhat Aybat, Mert Gürbüzbalaban; 25(421):1−56, 2024.

Abstract

We consider stochastic strongly-convex-strongly-concave (SCSC) saddle point (SP) problems, which frequently arise in applications ranging from distributionally robust learning to game theory and fairness in machine learning. We focus on the recently developed stochastic accelerated primal-dual algorithm (SAPD), which, as an accelerated method, achieves optimal complexity in several settings. We provide high probability guarantees for convergence to a neighborhood of the saddle point that reflect the accelerated convergence behavior of SAPD. We also provide an analytical formula for the limiting covariance matrix of the iterates for a class of stochastic SCSC quadratic problems where the gradient noise is additive and Gaussian. This allows us to derive lower bounds for this class of quadratic problems showing that our analysis is tight in terms of how the high probability bounds depend on the problem parameters. We also provide a risk-averse convergence analysis characterizing the “Conditional Value at Risk”, the “Entropic Value at Risk”, and the χ²-divergence of the distance to the saddle point for the iterate sequence, highlighting the trade-offs between the bias and the risk associated with an approximate solution obtained by terminating the algorithm at any iteration.
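
To make the setting concrete, the following Python sketch runs a SAPD-style iteration (a dual ascent step using gradient extrapolation with a momentum parameter θ, followed by a primal descent step) on a toy SCSC quadratic with additive Gaussian gradient noise, and then estimates the empirical CVaR of the squared distance to the saddle point across independent runs. The toy objective, the step sizes tau and sigma, and the momentum theta are illustrative assumptions for this sketch, not the tuned parameter choices or the bounds established in the paper.

```python
# A minimal sketch of a SAPD-style iteration on a toy SCSC quadratic with
# additive Gaussian gradient noise. Step sizes (tau, sigma) and momentum
# (theta) are illustrative placeholders, not the paper's tuned choices.
import numpy as np

rng = np.random.default_rng(0)

d, mu, noise_std = 5, 1.0, 0.1
A = rng.standard_normal((d, d))

# L(x, y) = (mu/2)||x||^2 + y^T A x - (mu/2)||y||^2; saddle point at (0, 0).
def grad_x(x, y):  # noisy partial gradient in x
    return mu * x + A.T @ y + noise_std * rng.standard_normal(d)

def grad_y(x, y):  # noisy partial gradient in y
    return A @ x - mu * y + noise_std * rng.standard_normal(d)

def sapd(n_iters=2000, tau=0.05, sigma=0.05, theta=0.9):
    x, y = np.ones(d), np.ones(d)
    q_prev = grad_y(x, y)
    for _ in range(n_iters):
        # Dual ascent step with gradient extrapolation (momentum theta).
        q = grad_y(x, y)
        y = y + sigma * ((1 + theta) * q - theta * q_prev)
        q_prev = q
        # Primal descent step using the updated dual iterate.
        x = x - tau * grad_x(x, y)
    return x, y

def empirical_cvar(values, alpha=0.95):
    # Empirical CVaR at level alpha: mean of the worst (1 - alpha)
    # fraction of the observed values.
    values = np.sort(values)
    return values[int(np.ceil(alpha * len(values))):].mean()

# Squared distance to the saddle point (0, 0) over independent runs.
dists = np.array([np.sum(np.square(np.concatenate(sapd())))
                  for _ in range(200)])
print(f"mean dist^2: {dists.mean():.4f}, "
      f"CVaR_0.95: {empirical_cvar(dists, 0.95):.4f}")
```

Comparing the mean squared distance with its CVaR at level 0.95 illustrates the bias/risk trade-off the abstract refers to: the mean captures the bias of stopping at a given iteration, while the CVaR captures the risk carried by the tail of the error distribution.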

© JMLR 2024.
